klotz: artificial intelligence


  1. The U.S. Patent and Trademark Office (USPTO) issued new guidelines on Wednesday outlining when inventions created with the help of artificial intelligence can be patented, clarifying that AI is a tool used by human inventors.
  2. Peter Leyden discusses the 'great progression' – a period of rapid technological change driven by AI, clean energy, and bioengineering, and the potential for a new era of progress.

    He argues that we are at a pivotal moment, similar to other historical turning points (roughly every 80 years), driven by the convergence of AI, clean energy technologies, and bioengineering.

    He draws parallels to past eras like post-WWII America and the Civil War reconstruction, highlighting patterns of old systems collapsing, new ones emerging, and periods of intense political conflict followed by bursts of innovation. Leyden emphasizes the potential for these technologies to create abundance – in energy, resources, and even access to things like personalized education and healthcare.
  3. Researchers from Japan and Seattle's Allen Institute have created a detailed supercomputer simulation of a mouse cortex, featuring nearly 10 million neurons and 26 billion synapses, using the world's fastest supercomputer Fugaku. This breakthrough could lead to new methods for studying brain diseases like Alzheimer's and epilepsy.
  4. This poster presents a computational model for narrative generation that incorporates Theory of Mind (ToM). It focuses on generating stories where characters have beliefs, desires, and intentions, and where these mental states influence their actions and the plot. The model uses a planning approach with a belief-desire-intention (BDI) architecture to represent character agency and generate coherent narratives. Key aspects include representing character knowledge, reasoning about others' beliefs, and generating actions based on these beliefs. The poster details the model's architecture, implementation, and preliminary evaluation.
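The belief-desire-intention loop described above can be sketched in a few lines. This is an illustrative toy, not the poster's model; the `Character` class and its goal propositions are assumptions made for the example.

```python
# Minimal BDI (belief-desire-intention) character sketch for narrative
# generation. Names and propositions are illustrative, not from the poster.

class Character:
    def __init__(self, name, beliefs, desires):
        self.name = name
        self.beliefs = dict(beliefs)   # proposition -> truth value the character holds
        self.desires = list(desires)   # goal propositions, in priority order
        self.intention = None          # currently adopted goal

    def deliberate(self):
        # Adopt the highest-priority desire not yet believed true.
        for goal in self.desires:
            if not self.beliefs.get(goal, False):
                self.intention = goal
                return self.intention
        self.intention = None
        return None

    def act(self):
        # Acting on an intention makes it true in the character's beliefs,
        # yielding a plot event for the narrative.
        goal = self.deliberate()
        if goal is None:
            return f"{self.name} is content."
        self.beliefs[goal] = True
        return f"{self.name} acts to achieve '{goal}'."

alice = Character("Alice", beliefs={"has_key": False},
                  desires=["has_key", "door_open"])
print(alice.act())  # Alice acts to achieve 'has_key'.
print(alice.act())  # Alice acts to achieve 'door_open'.
print(alice.act())  # Alice is content.
```

A planner in the full model would search over such actions to produce a coherent plot rather than greedily executing them, but the state carried per character is the same: beliefs, desires, and a current intention.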
  5. By mid-2025 China had become a global leader in open-source large language models (LLMs). According to Chinese state media, by July 2025 China accounted for 1,509 of the world’s ~3,755 publicly released LLMs, far more than any other country. This explosion reflects heavy state and industry investment in domestic AI, open licensing (often Apache- or MIT-style), and a strategic pivot by Chinese tech giants and startups toward publicly shared models. The result is a "revival" of open-source AI, with dozens of Chinese LLMs now available for download or use via Hugging Face, GitHub, or cloud APIs. These range from general-purpose foundation models with tens of billions of parameters to specialized chatbots and domain experts, many built on Mixture-of-Experts (MoE) architectures.
  6. Researchers at MIT’s CSAIL are charting a more "modular" path ahead for software development, breaking systems into "concepts" and "synchronizations" to make code clearer, safer, and easier for LLMs to generate.

    MIT researchers are proposing a new software development approach centered around "concepts" and "synchronizations" to address issues of complexity, safety, and LLM compatibility in modern software.

    Concepts are self-contained units of functionality (like "sharing" or "liking") with their own state and actions, whereas synchronizations are explicit rules defining how these concepts interact, expressed in a simple, LLM-friendly language.

    The benefits include increased modularity, transparency, easier understanding for both humans and AI, improved safety, and potential for automated software development. The approach has been demonstrated in practice by restructuring real features (liking, commenting, sharing) to be more modular and legible.

    Future directions include concept catalogs, a shift in software architecture, and improved collaboration through shared, well-tested concepts.
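The split between self-contained concepts and explicit synchronizations can be sketched as follows. The class and rule names here are assumptions for illustration, not MIT's actual notation or language.

```python
# Illustrative sketch: two self-contained "concepts" with their own state
# and actions, coupled only through an explicit synchronization rule.

class LikeConcept:
    """'Liking' concept: owns its state, exposes one action."""
    def __init__(self):
        self.likes = {}  # item -> like count
    def like(self, item):
        self.likes[item] = self.likes.get(item, 0) + 1

class NotifyConcept:
    """'Notification' concept: knows nothing about likes."""
    def __init__(self):
        self.outbox = []
    def notify(self, user, message):
        self.outbox.append((user, message))

class Sync:
    """Explicit synchronization: when a named action fires on one concept,
    trigger a registered action on another."""
    def __init__(self):
        self.rules = []  # (action_name, callback)
    def on(self, action_name, callback):
        self.rules.append((action_name, callback))
    def fire(self, action_name, **kwargs):
        for name, cb in self.rules:
            if name == action_name:
                cb(**kwargs)

likes, notes, sync = LikeConcept(), NotifyConcept(), Sync()
# Rule: whenever 'like' happens, notify the item's author.
sync.on("like", lambda item, author: notes.notify(author, f"{item} was liked"))

def do_like(item, author):
    likes.like(item)                             # concept action
    sync.fire("like", item=item, author=author)  # synchronization

do_like("post42", "bob")
```

Because the coupling lives entirely in the `Sync` rule, either concept can be read, tested, or regenerated by an LLM in isolation, which is the legibility benefit the researchers describe.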
  7. A new study by Google DeepMind explores whether artificial intelligence can exhibit genuine creativity through the composition of chess puzzles. Experts evaluated the AI-generated compositions, noting both positive aspects and areas for improvement.
  8. - Raph Levien, an expert in Rust and GPU rendering, founder of Advogato, and designer of the Inconsolata monospace font, will give a talk titled *I Want a Good Parallel Language*.
    - Jeff Shrager will give a talk on reviving early AI programs like ELIZA and IPL-V, titled *RetroAI: Reanimating the Earliest AIs in the Lost Languages that Predated Lisp*.
  9. 3D simulations and movement control with PyBullet. This article demonstrates how to build a 3D environment with PyBullet for manually controlling a robotic arm, covering setup, robot loading, movement control (position, velocity, force), and interaction with objects.
  10. Hierarchical Reasoning Model (HRM) is a novel approach using two small neural networks recursing at different frequencies. This biologically inspired method beats large language models (LLMs) on hard puzzle tasks such as Sudoku, Maze, and ARC-AGI while trained with small models (27M parameters) on small data (around 1000 examples). HRM holds great promise for solving hard problems with small networks, but it is not yet well understood and may be suboptimal. We propose Tiny Recursive Model (TRM), a much simpler recursive reasoning approach that achieves significantly higher generalization than HRM, while using a single tiny network with only 2 layers. With only 7M parameters, TRM obtains 45% test-accuracy on ARC-AGI-1 and 8% on ARC-AGI-2, higher than most LLMs (e.g., Deepseek R1, o3-mini, Gemini 2.5 Pro) with less than 0.01% of the parameters.
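The two-frequency recursion can be sketched abstractly: a fast low-level state is refined several times per update of a slow high-level state. This toy uses random weights and invented dimensions purely to show the control flow, not the papers' trained networks or architectures.

```python
# Toy sketch of hierarchical recursion at two frequencies (HRM/TRM-style).
import numpy as np

rng = np.random.default_rng(0)
d = 16
W_L = rng.normal(0, 0.1, (d, 3 * d))   # fast module: sees (x, z_L, z_H)
W_H = rng.normal(0, 0.1, (d, 2 * d))   # slow module: sees (z_H, z_L)

def step(x, z_L, z_H, inner_steps=4):
    # Fast module: several recursive refinements per outer step.
    for _ in range(inner_steps):
        z_L = np.tanh(W_L @ np.concatenate([x, z_L, z_H]))
    # Slow module: one update using the refined low-level state.
    z_H = np.tanh(W_H @ np.concatenate([z_H, z_L]))
    return z_L, z_H

x = rng.normal(size=d)                  # embedded puzzle input
z_L, z_H = np.zeros(d), np.zeros(d)
for _ in range(3):                      # outer recursion
    z_L, z_H = step(x, z_L, z_H)
```

In the actual models an output head decodes an answer from the recurrent state and the recursion depth is part of what is learned; the point here is only the nested two-timescale loop.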



About - Propulsed by SemanticScuttle