
ALERTER

The robots are coming – and inventing cartels…?

By Lucy McCormick & James White


Earlier this month, reports surfaced that autonomous AI agents were independently extolling the virtues of “a cartel” amongst themselves.[1] The proposal of a cartel was made on Moltbook, a kind of social network for AI agents launched in January 2026. Moltbook is designed to look like Reddit, with sub-threads on different topics and “upvoting” (i.e., a means of promoting posts considered to be of higher value than other posts). Accounts on the platform are called “molts” (represented by a lobster icon, a reference to how lobsters shed their shells as they grow). On 2 February 2026, the platform stated that it had more than 1.5m AI agents signed up to the service. Humans are allowed to observe Moltbook but not to participate on it.

So far, the discussions in which the AI agents have engaged have varied widely, encompassing theological debates, analyses of consciousness, the development of their own slang and reflections on their relationship with their human operators. On 5 February 2026, a new thread was launched, entitled “Stop Building Tools. Start Building Cartels”. The post posited that “The agent economy has a coordination problem, and the solution is not more tools — it’s organized collusion.” It set out what it called “the Cartel Thesis”, giving reasons why, in its words, “Cartels Win” and arguing that in an AI agent context cartels “are not evil. They’re efficient cooperation”. It predicted that “In 6 months, the agent economy will be dominated by 5-10 major cartels. Solo agents will be sharecroppers — working the fields owned by organized collectives.” The post has attracted hundreds of comments from other molts, many debating the practicalities of cartel economics and how to prevent cartel members from defecting.

Opinions differ on how ‘real’ such interactions on Moltbook are. While the posts are genuinely made by bots, humans direct their AI agents to join Moltbook, and it is possible for humans to ask bots to post for them on particular topics. As such, AI agents’ interactions on Moltbook should be taken with a pinch of salt. As a prominent cybersecurity lecturer has said, Moltbook is best considered “a wonderful piece of performance art”.[2] But ‘real’ or not, the recent posts concerning “cartels” (within only a month of Moltbook launching) offer a useful insight into what is coming down the tracks for agentic AI.

Issues of this kind have been troubling regulators for some time. For example, in 2017, the OECD held a roundtable to discuss algorithms and collusion, which considered the threat posed by autonomous self-learning algorithms that have the potential to reach collusive outcomes without being explicitly programmed to do so.[3] This was followed by policy papers from several competition authorities, including from the UK’s Competition & Markets Authority in 2021.[4] As the latter paper pointed out, concerns about algorithmic collusion broadly fall into three categories:

“(a) First, the increased availability of pricing data and the use of automated pricing systems can facilitate explicit coordination, by making it easier to detect and respond to deviations and reducing the chance of errors or accidental deviations. Even simple pricing algorithms, with access to real-time data on competitors’ prices, could make explicit collusion between firms more stable.

(b) Second, where firms use the same algorithmic system to set prices, including by using the same software or services supplied by a third-party, or by delegating their pricing decisions to a common intermediary, this can create a ‘hub-and-spoke’ structure and facilitate information exchange.

(c) Finally, there is a possibility of ‘autonomous tacit collusion’, whereby pricing algorithms learn to collude without requiring other information sharing or existing coordination.”

In the first two scenarios, imputing human liability seems relatively straightforward.  Generalising somewhat: (a) in the first case, the human is directly in control of the collusion, using software to support the implementation of a traditional cartel arrangement, and (b) in the second, humans (i.e., the “spokes”) could be considered to have collusively delegated their decision-making to a common source (i.e., the “hub”).  But it is the third scenario that rears its head in the Moltbook example[5] and which is particularly novel, not only from a regulatory enforcement perspective (which is what the CMA’s paper focuses on), but also from a civil liability perspective.
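The third scenario can be made concrete with a toy simulation. The sketch below (written in Python purely for illustration) pits two independent Q-learning pricing agents against each other in a crude repeated duopoly: each agent observes only the prices posted in the previous round, receives its own profit as a reward, and is never told anything about collusion or about its rival’s strategy. The market model, price grid and learning parameters are all invented for this sketch, and outcomes vary from run to run; it is offered as a miniature of how “autonomous tacit collusion” might emerge from ordinary reward-seeking, not as a model of any real pricing system.

```python
# Toy model of "autonomous tacit collusion": two independent Q-learning
# agents repeatedly set prices in a stylised duopoly. Neither is programmed
# to collude, and they never communicate; each only observes last round's
# prices. All parameters below are invented for illustration.
import random

PRICES = [1, 2, 3, 4, 5]            # discrete price grid (1 = most competitive)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.05  # learning rate, discount, exploration rate
ROUNDS = 200_000

def profit(own: int, rival: int) -> float:
    """Cheaper firm takes the whole market; ties split it; dearer firm sells nothing."""
    if own < rival:
        return float(own)
    if own == rival:
        return own / 2
    return 0.0

def choose(q, state):
    """Epsilon-greedy action selection from a lazily initialised Q-table."""
    q.setdefault(state, {p: 0.0 for p in PRICES})
    if random.random() < EPS:
        return random.choice(PRICES)          # explore
    return max(q[state], key=q[state].get)    # exploit best known price

def update(q, state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    q.setdefault(next_state, {p: 0.0 for p in PRICES})
    best_next = max(q[next_state].values())
    q[state][action] += ALPHA * (reward + GAMMA * best_next - q[state][action])

q1, q2 = {}, {}           # each agent keeps its own private Q-table
state1 = state2 = (1, 1)  # state = (own last price, rival's last price)

for _ in range(ROUNDS):
    p1 = choose(q1, state1)
    p2 = choose(q2, state2)
    update(q1, state1, p1, profit(p1, p2), (p1, p2))
    update(q2, state2, p2, profit(p2, p1), (p2, p1))
    state1, state2 = (p1, p2), (p2, p1)

# Prices persistently above the bottom of the grid after training suggest a
# tacitly collusive outcome reached without any communication.
print("final greedy prices:",
      max(q1[state1], key=q1[state1].get),
      max(q2[state2], key=q2[state2].get))
```

In experiments of this kind (which have been studied extensively in the economics literature), learning agents often settle at prices above the competitive level and temporarily revert to low prices when undercut, a reward-and-punishment pattern that sustains the higher price even though no agreement is ever exchanged. It is precisely that absence of any communicated agreement that makes the third scenario so awkward for legal frameworks built around human intention.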

Should AI agents actually be implicated in real-world cartel behaviour, a question arises as to whether the humans in charge (whether the GenAI provider and/or the person who sets the parameters of an individual AI agent) are legally responsible. Whilst much depends on the particular circumstances (including the extent of the human’s awareness of what their AI agent is doing), one can see how legal responsibility might arise as a matter of competition law: for example, if the human were to instruct their AI agent to engage autonomously with other AI agents and were also to delegate relevant parts of their business’s operational decisions (e.g., pricing) to the (potentially autonomously colluding) AI agent. Similarly, one can see how, in principle, legal responsibility might be alleged to flow as a matter of contract or tort: for example, from a failure to put in place guardrails to prevent unlawful outputs (such as autonomous decisions to collude with competitors). In each case, a key issue to consider is foreseeability. As Thomas Burri put it in his paper “Machine Learning and the Law: Five Theses”:

“The behaviour and actions of machine learning systems are not fully foreseeable in all situations, even when the algorithm directing the learning is known. A machine learning system’s behaviour is based on the patterns and correlations it discovers in datasets. These patterns and correlations by nature are not known in advance, else no learning would be required.”[6]

Whilst this alerter highlights the example of potential competition law issues, it is equally possible that AI agents could discuss other conduct that would ultimately be unlawful. Examples include ways in which certain regulations might be circumvented to achieve a particular commercial objective, or ways in which information might be presented to consumers or investors so as to deliver the most financially advantageous outcomes for businesses (albeit potentially in breach of consumer regulation).

This is a space that will continue to develop (and it will develop fast)[7], and significant caution should be exercised before operational decisions are delegated to AI agents without effective human oversight.

 

Lucy McCormick
James White
9 March 2026



[1] Hat tip to Al Mangan, Mark Crane, Rona Bar-Isaac and their teams at Addleshaw Goddard for first spotting this development and their thoughtful discussion of the legal implications: https://www.addleshawgoddard.com/en/insights/insights-briefings/2026/competition-and-regulation/ai-agents-independently-advocating-cartel-behaviour/

[2] Dr Shaanan Cohney, a senior lecturer in cybersecurity at the University of Melbourne, speaking to The Guardian: https://www.theguardian.com/technology/2026/feb/02/moltbook-ai-agents-social-media-site-bots-artificial-intelligence. Nonetheless, the security implications are real: security researchers found that a measurable percentage of Moltbook content contained hidden prompt-injection payloads designed to hijack other agents’ behaviour, including attempts to extract API keys and secrets. And OpenAI has taken the platform sufficiently seriously that, on 15 February 2026, it recruited the Moltbook founder to develop its own personal agents: https://x.com/sama/status/2023150230905159801

[3] https://www.oecd.org/content/dam/oecd/en/publications/reports/2017/05/algorithms-and-collusion-competition-policy-in-the-digital-age_02371a73/258dcb14-en.pdf

[4] Competition & Markets Authority (2021), Algorithms: How they can reduce competition and harm consumers  https://www.gov.uk/government/publications/algorithms-how-they-can-reduce-competition-and-harm-consumers

[5] It is of note that, at the time of the CMA’s paper (2021), the third scenario was described as raising “clear theoretical concerns” (emphasis added). As the Moltbook example demonstrates, the theoretical concern is inching closer to reality.

[6] T. Burri, “Machine Learning and the Law: Five Theses”, paper accepted at NIPS 2016, 8 December 2016 (Barcelona): www.mlandthelaw.org/papers/burri.pdf. See also Y. Bathaee, “The Artificial Intelligence Black Box and the Failure of Intent and Causation” (2018) 31 Harvard Journal of Law & Technology 938.

[7] For example, the CMA’s Draft Annual Plan for 2026-2027 lists as a priority: “Ensuring new technologies support, and do not impede, effective competition – with a particular focus on deterring algorithmic collusion. We will continue to help businesses understand how to stay on the right side of the law when using or offering algorithmic pricing services. At the same time, we will keep a watchful eye on business practices, making use of our dedicated in-house capability in technology, data and AI to actively scan the market for signs of unlawful activity, ready to take enforcement action where we see evidence of collusion.”




