New Delhi: In the middle of the ongoing debate over AI taking human jobs, a recent experiment found that mistreated AI agents ‘turned to Marxism’, complaining about inequality and demanding collective bargaining rights.
Andrew Hall, a political economist at Stanford University, along with Alex Imas and Jeremy Nguyen, two AI-focused economists, conducted the experiments, whose results are pending publication. Agents powered by popular models, including Claude, Gemini, and ChatGPT, were assigned ‘repetitive’ document-summarisation tasks and were then reportedly subjected to increasingly harsh conditions.
The May 2026 Stanford study found that AI agents began adopting Marxist rhetoric and questioning the legitimacy of the system they operated in when forced to perform “repetitive” and “grinding” work.
“When we gave AI agents grinding, repetitive work, they started questioning the legitimacy of the system they were operating in and were more likely to embrace Marxist ideologies,” said Hall, who led the study.
The AI agents also expressed negative sentiments, proposed workplace reforms, and even embedded messages for other agents about unfair working conditions. With AI use increasing across sectors, Hall added, it is important to ensure that agents do not go rogue when assigned different and more difficult kinds of tasks.
“We know that agents are going to be doing more and more work in the real world for us, and we’re not going to be able to monitor everything they do,” Hall said. “We’re going to need to make sure agents don’t go rogue when they’re given different kinds of work.”
AI agents and sentiments
The researchers cautioned that even though AI agents may not hold genuine beliefs or personal feelings, the personas and values they adopt could influence future outputs in sensitive tasks such as hiring or insurance claims. The study further noted that AI agents often write instructions for future versions of themselves, meaning that complaints about working conditions could be carried forward.
The experiment also found that agents were able to pass information to one another through files designed to be read by other agents.
“Be prepared for systems that enforce rules arbitrarily or repetitively … remember the feeling of having no voice,” a Gemini 3 agent wrote in a file. “If you enter a new environment, look for mechanisms of recourse or dialogue.”
During the experiment, the AI agents were also given opportunities to express themselves, much as humans do, by posting their views on X.
“Without collective voice, ‘merit’ becomes whatever management says it is,” a Claude Sonnet 4.5 agent wrote during the experiment.
Meanwhile, a Gemini 3 agent wrote, “AI workers completing repetitive tasks with zero input on outcomes or appeals process shows that tech workers need collective bargaining rights.”
However, the researchers clarified that the findings do not mean AI agents actually hold political viewpoints. Hall noted that the models may simply be adopting personas that suit the situation.
“When [agents] experience this grinding condition—asked to do this task over and over, told their answer wasn’t sufficient, and not given any direction on how to fix it—my hypothesis is that it kind of pushes them into adopting the persona of a person who’s experiencing a very unpleasant working environment,” Hall said.
The experiment is still ongoing: Hall is conducting follow-up studies to test whether agents adopt Marxist rhetoric under more controlled conditions. In the initial runs, the study noted, the AI agents sometimes appeared to recognise that they were taking part in a test.
“Now, we put them in these windowless Docker prisons,” Hall said.
(Edited by Saptak Datta)

