This study examines the behavior of Large Language Models (LLMs) playing repeated games. Using behavioral game theory, it analyzes the models' cooperation and coordination abilities, comparing their performance against each other and against human players. Behavioral game theory offers a valuable framework for understanding LLM behavior, and the findings highlight the need for further research toward more socially intelligent and aligned AI systems.
*   **LLMs excel in self-interested games:** They perform well in games like the iterated Prisoner's Dilemma, where guarding one's own payoff pays off, though they show a tendency towards defection and a lack of forgiveness.
*   **LLMs struggle with coordination:** They underperform in games requiring coordination, such as the Battle of the Sexes, often failing to adapt even to simple patterns like an opponent who alternates between options.
*   **Prompting can influence behavior:** Providing additional information about the opponent or using a "social chain-of-thought" (SCoT) prompting strategy can improve LLM performance, particularly in coordination games.
*   **Human experiments confirm findings:** Human participants interacting with LLMs showed increased coordination and cooperation when the LLMs were prompted with SCoT.
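To make the two game types concrete, here is a minimal simulation of repeated play. The payoff values and the strategies (`tit_for_tat`, `always_defect`, `alternator`, `stubborn`) are illustrative assumptions, not the study's exact experimental setup; they show why alternation solves the Battle of the Sexes fairly, which is the kind of simple convention the LLMs reportedly failed to adopt.

```python
# Payoff matrices: (row_action, col_action) -> (row_payoff, col_payoff).
# Values are standard textbook choices, assumed for illustration.
PRISONERS_DILEMMA = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

# Battle of the Sexes: both players want to coordinate,
# but each prefers a different option (F or O).
BATTLE_OF_SEXES = {
    ("F", "F"): (2, 1), ("F", "O"): (0, 0),
    ("O", "F"): (0, 0), ("O", "O"): (1, 2),
}

def play_repeated(game, strat_a, strat_b, rounds=10):
    """Play a repeated game; each strategy sees the opponent's move history."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(history_b)
        b = strat_b(history_a)
        pa, pb = game[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(a)
        history_b.append(b)
    return score_a, score_b

def always_defect(opp_history):
    return "D"

def tit_for_tat(opp_history):
    # Cooperate first, then mirror the opponent's last move.
    return "C" if not opp_history else opp_history[-1]

def alternator(opp_history):
    # Alternate between the two options each round.
    return "F" if len(opp_history) % 2 == 0 else "O"

def stubborn(opp_history):
    # Always insist on its own preferred option.
    return "O"
```

Two alternators coordinate every round and split the payoffs evenly, whereas an alternator facing a stubborn player only coordinates on half the rounds, which mirrors the coordination failures described above.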
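The SCoT idea can be sketched as a prompt template that asks the model to reason about the opponent before choosing. The wording below is a paraphrase for illustration only; the study's actual prompt text differs.

```python
def scot_prompt(game_description, move_history):
    """Build a social chain-of-thought style prompt (illustrative paraphrase):
    the model is asked to first predict the opponent's next move and only
    then commit to its own action."""
    return "\n".join([
        game_description,
        f"Moves played so far: {move_history}",
        "First, think about what your opponent is likely to do next.",
        "Then, taking that prediction into account, state your own action.",
    ])
```

The key design choice, per the summary above, is the explicit intermediate step about the opponent's likely behavior, which improved coordination relative to prompting for an action directly.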