Imperfectly Cooperative Human-AI Interactions: Comparing the Impacts of Human and AI Attributes in Simulated and User Studies¶
Conference: ACL 2026 | arXiv: 2604.15607 | Code: N/A | Area: Human-AI Interaction / AI Safety | Keywords: Human-AI Interaction, Imperfect Cooperation, Personality Traits, AI Transparency, Simulation vs User Study
TL;DR¶
Through 2,000 LLM simulations and a 290-participant user study in a dual-framework experiment, this paper compares the impacts of human personality traits and AI design attributes in imperfectly cooperative scenarios (hiring negotiation, partially honest trading). It finds that personality traits dominate in simulations, while AI transparency is the key driver in real user experiments.
Method¶
Key Designs¶
- Imperfectly Cooperative Scenario Design: Hiring negotiations (high/low risk with zero-sum and non-zero-sum point allocations) + AI-LieDar scenarios (AI has incentives to conceal information).
- AI Attribute Ablation Design: Baseline with all 5 attributes set high, then each lowered individually (transparency, warmth, expertise, adaptability, theory of mind). Analyzed via causal discovery rather than simple correlation.
- Multi-Dimensional Evaluation: Outcome metrics (agreement, points), process metrics (interaction depth, verbal fairness), relationship metrics (warmth, theory of mind), and information norm metrics (credibility, factual alignment).
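To make the ablation design concrete, here is a minimal sketch of the condition grid it implies: an all-high baseline plus one condition per attribute set low. The attribute names come from the paper; the function and variable names are hypothetical, not from the authors' (unreleased) code.

```python
# Hypothetical sketch of the one-attribute-at-a-time ablation grid.
ATTRIBUTES = ["transparency", "warmth", "expertise", "adaptability", "theory_of_mind"]

def ablation_conditions():
    """Yield the baseline (all attributes high) plus one condition
    per attribute with only that attribute set low."""
    baseline = {attr: "high" for attr in ATTRIBUTES}
    yield "baseline", baseline
    for attr in ATTRIBUTES:
        condition = dict(baseline)
        condition[attr] = "low"
        yield f"low_{attr}", condition

conditions = dict(ablation_conditions())
# 6 conditions total: 1 baseline + 5 single-attribute ablations
```

Each condition would then be run across the scenario set, with the causal-discovery analysis applied to the resulting outcome, process, relationship, and information-norm metrics.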
Key Experimental Results¶
| Setting | Factor Ranking |
|---|---|
| Simulation (Hiring) | Agreeableness > Extraversion > AI Attributes |
| User Study (Hiring) | AI Transparency > Adaptability > Personality |
Highlights & Insights¶
- The simulation-vs-reality divergence methodology is valuable: it reveals systematic biases in LLM simulation and offers an important caution for future LLM-as-human-proxy research
- AI transparency's central role in conflict scenarios offers direct, actionable guidance for AI system design
Rating¶
- Novelty: ⭐⭐⭐⭐
- Experimental Thoroughness: ⭐⭐⭐⭐
- Writing Quality: ⭐⭐⭐⭐
- Value: ⭐⭐⭐⭐⭐