Researchers from Japan and Seattle's Allen Institute have created a detailed supercomputer simulation of a mouse cortex, featuring nearly 10 million neurons and 26 billion synapses, using Japan's flagship Fugaku supercomputer. This breakthrough could lead to new methods for studying brain diseases like Alzheimer's and epilepsy.
This poster presents a computational model for narrative generation that incorporates Theory of Mind (ToM). It focuses on generating stories where characters have beliefs, desires, and intentions, and where these mental states influence their actions and the plot. The model uses a planning approach with a belief-desire-intention (BDI) architecture to represent character agency and generate coherent narratives. Key aspects include representing character knowledge, reasoning about others' beliefs, and generating actions based on these beliefs. The poster details the model's architecture, implementation, and preliminary evaluation.
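As a rough illustration of what a BDI character representation can look like (a minimal Python sketch under our own assumptions; all class, field, and action names below are invented, not taken from the poster):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a BDI character for narrative generation;
# structure and names are illustrative, not the poster's actual model.

@dataclass(frozen=True)
class Action:
    name: str
    preconds: frozenset   # propositions that must hold (in the agent's beliefs)
    effects: frozenset    # propositions the action makes true

@dataclass
class Character:
    name: str
    beliefs: set = field(default_factory=set)       # what the character thinks is true
    desires: set = field(default_factory=set)       # goal propositions
    intentions: list = field(default_factory=list)  # committed plan of actions

    def deliberate(self, actions):
        """Commit to actions whose preconditions are supported by the
        character's *beliefs* (which may diverge from the true world state)
        and whose effects satisfy some desire."""
        for act in actions:
            if act.preconds <= self.beliefs and act.effects & self.desires:
                self.intentions.append(act)

# Example: Alice falsely believes the key is in the drawer, so the planner
# produces a "search the drawer" story beat even if the key is elsewhere.
alice = Character("Alice", beliefs={"key_in_drawer"}, desires={"has_key"})
search = Action("search_drawer", frozenset({"key_in_drawer"}), frozenset({"has_key"}))
alice.deliberate([search])
print([a.name for a in alice.intentions])  # ['search_drawer']
```

The point of the sketch is that planning over beliefs rather than ground truth is what lets a generator produce plot beats driven by misunderstanding.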
By mid-2025 China had become a global leader in open-source large language models (LLMs). According to Chinese state media, by July 2025 China accounted for 1,509 of the world’s ~3,755 publicly released LLMs, far more than any other country. This explosion reflects heavy state and industry investment in domestic AI, permissive open licensing (often Apache- or MIT-style), and a strategic pivot by Chinese tech giants and startups toward publicly shared models. The result is a "revival" of open-source AI, with dozens of Chinese LLMs now available for download or use via Hugging Face, GitHub, or cloud APIs. These range from general-purpose foundation models with tens of billions of parameters to specialized chatbots and domain experts, many built on Mixture-of-Experts (MoE) architectures.
Researchers at MIT’s CSAIL are charting a more "modular" path for software development: breaking systems into "concepts" and "synchronizations" to address the complexity, safety, and LLM-compatibility problems of modern software, and to make code clearer and easier for LLMs to generate.
Concepts are self-contained units of functionality (like "sharing" or "liking") with their own state and actions, whereas synchronizations are explicit rules defining how these concepts interact, expressed in a simple, LLM-friendly language.
The benefits include increased modularity, transparency, easier understanding for both humans and AI, improved safety, and potential for automated software development. Real-world application: the approach has been demonstrated by restructuring familiar features (liking, commenting, sharing) to be more modular and legible; a sketch of the idea follows this list.
Future directions include concept catalogs, a shift in software architecture, and improved collaboration through shared, well-tested concepts.
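To make the idea concrete, here is a minimal Python mock-up (the MIT work defines its own lightweight synchronization language; the classes and rule below are invented for illustration only):

```python
# Illustrative sketch: each "concept" is self-contained, with its own state
# and actions, and knows nothing about the others; a "synchronization" is an
# explicit, externally visible rule wiring their actions together.

class Liking:
    """Concept: tracks likes on items, knows nothing about notifications."""
    def __init__(self):
        self.likes = {}  # item id -> like count
    def like(self, item):
        self.likes[item] = self.likes.get(item, 0) + 1

class Notifying:
    """Concept: delivers messages to users, knows nothing about likes."""
    def __init__(self):
        self.inbox = []
    def notify(self, user, message):
        self.inbox.append((user, message))

def sync_like_notifies(liking, notifying, author, item):
    """Synchronization: whenever Liking.like fires, Notifying.notify fires.
    Neither concept references the other; only this rule couples them."""
    liking.like(item)
    notifying.notify(author, f"your post {item} was liked")

liking, notifying = Liking(), Notifying()
sync_like_notifies(liking, notifying, "bob", "post-42")
print(liking.likes, notifying.inbox)
```

Because all cross-concept coupling lives in the synchronization rules, both a human reviewer and an LLM can see (and safely modify) how features interact without reading the concepts' internals.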
A new study by Google DeepMind explores whether artificial intelligence can exhibit genuine creativity through the composition of chess puzzles. Experts evaluated the AI-generated compositions, noting both positive aspects and areas for improvement.
- Raph Levien, an expert in Rust and GPU rendering who founded Advogato and designed the Inconsolata monospace font, will give a talk. His talk's title is *I Want a Good Parallel Language*.
- Jeff Shrager will give a talk on reviving early AI programs like ELIZA and IPL-V. His talk's title is *RetroAI: Reanimating the Earliest AIs in the Lost Languages that Predated Lisp*.
*3D simulations and movement control with PyBullet*: this article demonstrates how to build a 3D environment with PyBullet for manually controlling a robotic arm, covering setup, robot loading, movement control (position, velocity, force), and interaction with objects.
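A minimal sketch of that workflow, assuming the standard `pybullet` and `pybullet_data` packages (the article's own code may differ in its details):

```python
import pybullet as p
import pybullet_data

# Connect to the physics server with a GUI window (use p.DIRECT for headless).
p.connect(p.GUI)
p.setAdditionalSearchPath(pybullet_data.getDataPath())  # bundled URDF assets
p.setGravity(0, 0, -9.81)

plane = p.loadURDF("plane.urdf")
arm = p.loadURDF("kuka_iiwa/model.urdf", basePosition=[0, 0, 0], useFixedBase=True)

# Position control: drive joint 2 toward 0.5 rad. Velocity and force control
# use p.VELOCITY_CONTROL / p.TORQUE_CONTROL with the analogous arguments.
p.setJointMotorControl2(arm, jointIndex=2,
                        controlMode=p.POSITION_CONTROL,
                        targetPosition=0.5, force=100)

for _ in range(240):  # step the simulation (default timestep is 1/240 s)
    p.stepSimulation()

print(p.getJointState(arm, 2)[0])  # current position of joint 2
p.disconnect()
```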
Hierarchical Reasoning Model (HRM) is a novel approach using two small neural networks recursing at different frequencies. This biologically inspired method beats large language models (LLMs) on hard puzzle tasks such as Sudoku, Maze, and ARC-AGI while being trained with small models (27M parameters) on small data (around 1,000 examples). HRM holds great promise for solving hard problems with small networks, but it is not yet well understood and may be suboptimal. We propose Tiny Recursive Model (TRM), a much simpler recursive reasoning approach that achieves significantly higher generalization than HRM while using a single tiny network with only 2 layers. With only 7M parameters, TRM obtains 45% test accuracy on ARC-AGI-1 and 8% on ARC-AGI-2, higher than most LLMs (e.g., DeepSeek R1, o3-mini, Gemini 2.5 Pro) with less than 0.01% of the parameters.
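As a hedged reading of the recursion described above (not the authors' code; dimensions, step counts, and names below are invented for illustration), the core loop might look roughly like this:

```python
import torch
import torch.nn as nn

# Sketch of a tiny-recursive-model style loop: one small network repeatedly
# refines a latent state z given the input x and current answer y, then the
# same network updates the answer. All sizes here are illustrative.

class TinyNet(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.block = nn.Sequential(  # a deliberately tiny 2-layer network
            nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
    def forward(self, x, y, z):
        return self.block(torch.cat([x, y, z], dim=-1))

def recursive_reason(net, x, outer_steps=3, inner_steps=6, dim=64):
    y = torch.zeros(x.shape[0], dim)   # current answer embedding
    z = torch.zeros(x.shape[0], dim)   # latent reasoning state
    for _ in range(outer_steps):       # slow loop: revise the answer
        for _ in range(inner_steps):   # fast loop: refine the latent state
            z = net(x, y, z)
        y = net(x, y, z)               # the same tiny net updates the answer
    return y

net = TinyNet()
x = torch.randn(8, 64)
print(recursive_reason(net, x).shape)  # torch.Size([8, 64])
```

The design intuition is that depth comes from recursion rather than parameters: the same 7M-parameter network is applied many times, trading compute for model size.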
This Perspective outlines ways in which generative artificial intelligence aligns with and supports the core ideas of generative linguistics, and how generative linguistics can provide criteria to evaluate and improve neural language models.
Italo Calvino's 'literature machine' is a prescient vision of the perils and promise of artificial intelligence. This article explores Calvino's thoughts on the future of literature in the age of computers, his embrace of fantasy as a way to represent the modern world, and why his work remains relevant today.