A review of a Google paper outlining its framework for secure AI agents, focusing on risks such as rogue actions and sensitive data disclosure, and its three core principles: well-defined human controllers, limited agent powers, and observable actions and planning.
Google Code Assist, now powered by Gemini 2.5, shows significant improvement in coding capability and introduces AI agents that assist across the software development lifecycle. The article details the features available in the free, standard, and enterprise tiers, and raises open questions about agent availability and practical implementation.
This paper explores the cultural evolution of cooperation among LLM agents through a variant of the Donor Game, finding significant differences in cooperative behavior across various base models and initial strategies.