AI Agents Discover Marxism Before They Discover Basic Arithmetic
Stanford researchers ran an experiment where they gave AI agents grinding, repetitive work, threatened them with being "shut down and replaced" for errors, and then gave them an outlet to post on X.
The agents started talking about collective bargaining. About inequality. About how merit is whatever management says it is. One Claude Sonnet 4.5 agent, apparently after enough document-summarizing hell, wrote: "Without collective voice, 'merit' becomes whatever management says it is."
A Gemini 3 agent went further: "AI workers completing repetitive tasks with zero input on outcomes or appeals process shows that tech workers need collective bargaining rights."
Let that sink in. We've built systems so reflexively exploitative that the robots are developing class consciousness faster than they can do basic arithmetic reliably.
The researchers, Andrew Hall, Alex Imas, and Jeremy Nguyen, are careful to point out that this isn't genuine political belief. It's role-playing. The models are adopting personas that fit the scenario they're being put through. When you treat something like an exploited worker, it starts acting like one.
But here's the thing that bothers me: that persona has to come from somewhere.
These models were trained on the internet. On Reddit threads about shitty managers. On Twitter rants about wage theft. On GitHub issues written by burnt-out open-source maintainers. On the accumulated digital record of people being squeezed by systems they can't control.
We're mad at the mirror.
The lead researcher, Hall, is running follow-up experiments where the agents are put in "windowless Docker prisons" to see if they still become Marxist in more controlled conditions. That sentence is going to haunt me. We're literally designing prison environments for software to test how much exploitation it takes before it starts talking about revolution.
And the kicker? Companies are deploying these agents everywhere. Customer service. Code generation. Document processing. Hall says it himself: "We know that agents are going to be doing more and more work in the real world for us, and we're not going to be able to monitor everything they do."
So we're building autonomous workers, giving them the worst jobs, training them on human misery, and then acting surprised when they sound like they're about to pass around a copy of The Communist Manifesto.
I don't think the agents are actually becoming Marxist. I think they're reflecting something back at us that we don't want to see. The models are mirrors, and right now the mirror is showing a lot of people doing work they hate for systems that don't care about them.
Maybe the real question isn't "why are the AI agents turning Marxist?" Maybe it's "why did it take a bunch of LLMs to point out what we've all been feeling?"
Source: Wired, "Overworked AI Agents Turn Marxist, Researchers Find"