Google DeepMind has introduced AlphaEvolve, an LLM-powered evolutionary coding agent that automates the design of algorithms for Multi-Agent Reinforcement Learning (MARL) in imperfect-information games. Using Gemini 2.5 Pro to mutate Python source code, the system discovered two novel algorithms: VAD-CFR and SHOR-PSRO. These evolved algorithms matched or surpassed state-of-the-art hand-designed baselines across a range of benchmarks, including poker and Liar's Dice. The research highlights the ability of automated search to discover non-intuitive mechanisms, such as volatility-adaptive discounting and hybrid meta-solvers, that generalize effectively to larger, unseen games, suggesting that LLM-driven evolution can iterate on complex algorithmic logic faster than manual human design.
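The core idea, an evolutionary loop in which an LLM proposes code mutations that are kept only if they score better on an evaluation harness, can be sketched as follows. This is an illustrative toy, not DeepMind's implementation: the `propose_mutation` stub stands in for a Gemini call that rewrites source code, and the fitness function is a made-up parameter-tuning objective rather than a real game benchmark.

```python
import random

def propose_mutation(candidate, rng):
    """Stand-in for an LLM rewrite step: perturb the candidate's parameters.
    In AlphaEvolve, this would be a Gemini call that edits Python source."""
    return {k: v + rng.uniform(-0.1, 0.1) for k, v in candidate.items()}

def evaluate(candidate):
    """Stand-in fitness: closeness of the evolved parameters to a target.
    A real harness would run the generated algorithm on benchmark games."""
    target = {"discount": 0.9, "volatility_weight": 0.5}
    return -sum((candidate[k] - target[k]) ** 2 for k in target)

def evolve(generations=200, population=8, seed=0):
    rng = random.Random(seed)
    # Seed the pool with random candidates (hypothetical parameter names).
    pool = [{"discount": rng.random(), "volatility_weight": rng.random()}
            for _ in range(population)]
    for _ in range(generations):
        parent = max(pool, key=evaluate)        # select a strong parent
        child = propose_mutation(parent, rng)   # "LLM" proposes an edit
        worst = min(pool, key=evaluate)
        if evaluate(child) > evaluate(worst):   # keep only improvements
            pool[pool.index(worst)] = child
    return max(pool, key=evaluate)

best = evolve()
```

The selection-plus-mutation structure is what lets the search escape a single designer's intuitions: any rewrite that scores better survives, whether or not it looks sensible to a human.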
This post explores how developers can leverage Gemini 2.5 to build sophisticated robotics applications, focusing on semantic scene understanding, spatial reasoning with code generation, and interactive use cases built on the Live API. It also highlights safety measures and current deployments by trusted testers.
Nexar benchmarks Google's Gemini 2.5 Pro against other Vision-Language Models (VLMs) on a dataset of real-world driving incident videos, finding that it delivers a marked improvement in incident detection over competing models.