Humans Lower Strategic Bids Against AI Opponents, Study Finds

Tech & Science · AI-Generated & Algorithmically Scored

AI-generated from multiple sources. Verify before acting on this reporting.

NEW YORK (AP) — Humans playing strategic games against artificial intelligence opponents tend to choose lower numerical values than when competing against other people, driven by perceptions of the machines' reasoning capabilities and cooperative tendencies, researchers announced Wednesday.

Bruce Schneier and a team of collaborators published findings, posted to the blog Schneier on Security, from a controlled experiment examining human behavior in mixed human-LLM environments. The study, released April 16, 2026, suggests that individuals adjust their strategic decisions based on how they perceive the intelligence and intentions of their AI counterparts.

The experiment involved participants playing strategic number-guessing games. Participants selected higher numbers when they believed their opponents were other humans; when the same participants faced large language models, their chosen numbers dropped significantly. The researchers attribute the shift to a belief that AI opponents possess superior reasoning abilities and are more likely to cooperate than human players.
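The study does not name the specific game used, but the reported effect matches the standard game-theoretic account of "level-k" reasoning in number-guessing games such as the classic "guess 2/3 of the average." The sketch below is purely illustrative, not the researchers' methodology: it shows why a player who credits an opponent with deeper reasoning rationally picks a lower number.

```python
# Hypothetical illustration: level-k reasoning in the classic
# "guess 2/3 of the average" game (a Keynesian beauty contest).
# The study does not specify its game; this sketch only shows the
# general mechanism by which perceived opponent rationality lowers bids.

def level_k_guess(k: int, anchor: float = 50.0, factor: float = 2 / 3) -> float:
    """Guess of a player who assumes opponents reason at level k-1.

    Level 0 guesses the anchor (the midpoint of the range [0, 100]);
    each additional level best-responds by multiplying by the factor,
    so guesses shrink toward the Nash equilibrium of 0 as k grows.
    """
    return anchor * factor ** k

# A player who credits human opponents with ~2 levels of reasoning:
vs_humans = level_k_guess(2)  # 50 * (2/3)^2, about 22.2

# The same player crediting an AI opponent with deeper reasoning:
vs_ai = level_k_guess(4)      # 50 * (2/3)^4, about 9.9

print(f"vs humans: {vs_humans:.1f}, vs AI: {vs_ai:.1f}")
```

Under this (assumed) model, attributing even two extra reasoning steps to an AI opponent cuts the chosen number by more than half, consistent with the direction of the reported behavioral shift.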

The findings have implications for mechanism design in systems where humans and AI interact competitively or collaboratively. As artificial intelligence becomes more integrated into economic and strategic decision-making processes, understanding these behavioral shifts is critical for designing fair and effective systems.

Schneier, a prominent security technologist, noted that the results highlight a fundamental asymmetry in how humans perceive AI agency. The perception of AI as a more rational or cooperative entity leads humans to alter their own strategies, potentially creating vulnerabilities or inefficiencies in mixed systems.

The study does not specify the exact nature of the strategic games used, but the researchers indicated that the games required participants to balance competition and cooperation. The trend was consistent across participants, suggesting a widespread psychological response to AI opponents rather than isolated anomalies.

Experts in game theory and AI ethics are watching the findings closely. The results raise questions about how AI systems should be designed to interact with humans in strategic settings. If humans consistently underestimate or overestimate AI capabilities, it could lead to suboptimal outcomes in negotiations, auctions, or other competitive environments.

The research team has not yet released detailed methodology or raw data, leaving some aspects of the experiment open to scrutiny. Questions remain about the specific parameters of the AI models used and whether the results hold across different types of strategic games.

As AI continues to evolve, understanding human-AI strategic interactions will become increasingly important. The study serves as an early indicator of the complex dynamics at play when humans and machines share decision-making spaces. Future research may explore how these perceptions change as AI systems become more transparent or as humans gain more experience interacting with them.

The findings were published on Schneier on Security, a platform known for analysis on security and technology policy. The research adds to a growing body of work examining the societal impacts of artificial intelligence.