Controlled experiment shows humans shift strategy when playing against LLMs in competitive games
A laboratory study with monetarily incentivized participants found that humans choose differently when competing against large language models than against other humans, with strategic reasoning ability driving the shift.
- Researchers conducted a controlled p-beauty contest experiment with human participants playing against both other humans and LLMs, using monetary incentives.
- Subjects selected significantly lower numbers when facing LLM opponents, driven largely by increased Nash-equilibrium choices at zero.
- The behavioral shift was concentrated among subjects with high strategic reasoning ability.
- Participants justified their LLM-focused strategies by citing expectations of both rationality and cooperation from the AI opponents.
- The findings point to potential design considerations for systems that mix human and LLM decision-making.
A controlled laboratory experiment examined how human subjects behave when competing strategically against large language models compared to other humans. The study used a p-beauty contest game, a standard framework in experimental economics in which each player picks a number and payoffs depend on how close that number lands to a fraction p of the median choice across all players, and offered monetary incentives to participants.
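To make the setup concrete, here is a minimal sketch of how one round could be scored. The fraction p = 2/3, the [0, 100] choice range, and the winner-take-all payoff are conventional assumptions for illustration, not details confirmed by the study.

```python
import statistics

def score_round(choices, p=2/3):
    """Score one p-beauty contest round.

    Each player submits a number in [0, 100]; the target is p times
    the group's median choice (the statistic this study reportedly
    used; classic versions use the mean). The player closest to the
    target wins. p = 2/3 and winner-take-all are assumed conventions.
    """
    target = p * statistics.median(choices)
    winner = min(range(len(choices)), key=lambda i: abs(choices[i] - target))
    return target, winner

target, winner = score_round([50, 33, 22, 0])
print(f"target = {target:.2f}, winner = player {winner}")
# target = 18.33, winner = player 2 (who chose 22)
```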
The researchers found a notable behavioral divergence: when playing against LLMs, human subjects selected significantly lower numbers than when competing against other humans. This shift was primarily attributable to increased choices of the Nash equilibrium value of zero. The effect was concentrated among participants with higher demonstrated strategic reasoning ability, suggesting that more sophisticated players consciously adjusted their approach when facing AI opponents.
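Why zero is the equilibrium follows from iterated reasoning. The sketch below uses the standard level-k model; the anchor of 50 and p = 2/3 are illustrative assumptions rather than parameters reported by the study.

```python
# Level-k reasoning in a p-beauty contest: a level-0 player picks
# arbitrarily (expected value 50 on [0, 100]); a level-k player
# best-responds to level-(k-1) opponents by choosing p times their
# choice. Each extra reasoning step shrinks the choice by a factor
# of p, so only 0 survives unlimited iteration: the Nash equilibrium.
p, anchor = 2/3, 50.0  # illustrative assumptions

choice = anchor
for k in range(7):
    print(f"level-{k} choice: {choice:5.2f}")
    choice *= p
# Choices approach 0, which is why players who expect fully rational
# opponents jump straight to the equilibrium.
```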
When asked to explain their strategies against LLMs, subjects cited two key beliefs: that the AI would reason rationally through the game, and that it would cooperate. This dual expectation of rational calculation paired with collaborative intent appears to have driven their decision-making, even though both assumptions warrant scrutiny given how language models actually operate.
The findings raise questions about mechanism design in environments where humans and LLMs make simultaneous choices. If humans systematically adjust their behavior based on imperfect mental models of AI reasoning and intent, system designers may face hidden incentive misalignments or cascading strategic errors. The research is presented as a preprint and contributes foundational empirical data on a largely unexplored interaction pattern.