This research presents a scalable method for extracting linear representations of concepts inside large-scale AI models, including language, vision-language, and reasoning models. By mapping these internal representations, the authors demonstrate how to steer model behavior to mitigate misalignment, expose vulnerabilities, and enhance capabilities beyond what prompting alone achieves. The study also shows that these concept representations transfer across languages and can be combined for multi-concept steering. Additionally, monitoring these internal representations proves more effective at detecting misaligned content such as hallucinations and toxicity than models that judge outputs directly.
Key points:
- Scalable extraction of linear concept representations
- Model steering for safety and capability enhancement
- Cross-language transferability and multi-concept steering
- Monitoring of hallucinations and toxic content via internal states
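The steering idea can be sketched in a few lines: once a linear concept direction has been extracted, it is added to a layer's activations at inference time. This is a generic illustration, not the paper's exact method; the toy layer, the random `concept_direction`, and `steering_strength` are all placeholders for the real extracted vector and tuned coefficient.

```python
import torch
import torch.nn as nn

# Toy stand-in for one hidden layer of a language model.
hidden_dim = 16
layer = nn.Linear(hidden_dim, hidden_dim)

# Hypothetical concept direction, e.g. the difference of mean activations
# over positive vs. negative examples of the concept (placeholder here).
concept_direction = torch.randn(hidden_dim)
concept_direction = concept_direction / concept_direction.norm()

steering_strength = 4.0

def steering_hook(module, inputs, output):
    # Shift the layer's activations along the concept direction.
    return output + steering_strength * concept_direction

handle = layer.register_forward_hook(steering_hook)

x = torch.randn(2, hidden_dim)
steered = layer(x)     # forward pass with steering applied
handle.remove()
unsteered = layer(x)   # same input, no steering

# The two outputs differ by exactly the injected vector.
delta = steered - unsteered
```

Negating `steering_strength` suppresses the concept instead of amplifying it, which is how steering can be used both to enhance a capability and to mitigate an unwanted behavior.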
This article demonstrates how to perform text summarization using the scikit-llm library, which provides a simple interface for using large language models within a scikit-learn-style workflow. The guide walks through installing the necessary dependencies and applying abstractive summarization to sample text data.
Key topics include:
- Introduction to the scikit-llm library
- Implementing abstractive summarization using LLMs
- Using scikit-llm for text classification and clustering tasks
- Practical code examples for integrating LLM capabilities into machine learning pipelines
An open-source, theoretical implementation of the Claude Mythos model architecture. The project implements a Recurrent-Depth Transformer (RDT) consisting of three stages: a Prelude, a looped Recurrent Block, and a final Coda. It utilizes switchable attention between Multi-Latent Attention (MLA) and Grouped Query Attention (GQA), alongside a sparse Mixture of Experts (MoE) design to facilitate compute-adaptive reasoning in continuous latent space.
Key technical features include:
* Recurrent-Depth Transformer architecture for implicit chain-of-thought reasoning.
* LTI-stable injection parameters to prevent residual explosion during training.
* Support for multiple model scales ranging from 1B to 1T parameters.
* Integration of Adaptive Computation Time (ACT) or similar halting mechanisms to manage overthinking.
* Use of fine-grained MoE with shared experts to balance breadth and depth.
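The three-stage layout above can be sketched as a forward pass: a Prelude embeds the input, a single weight-shared Recurrent Block is looped a chosen number of times, and a Coda reads out the result. This is a minimal sketch of the control flow only; plain linear layers stand in for the repo's MLA/GQA attention and MoE blocks, and `injection_scale` is a crude placeholder for its LTI-stable injection parameters.

```python
import torch
import torch.nn as nn

class RecurrentDepthSketch(nn.Module):
    """Toy Prelude -> looped Recurrent Block -> Coda, as described above."""

    def __init__(self, dim: int = 32, injection_scale: float = 0.1):
        super().__init__()
        self.prelude = nn.Linear(dim, dim)          # embed input into latent space
        self.recurrent_block = nn.Linear(dim, dim)  # shared weights, looped
        self.coda = nn.Linear(dim, dim)             # read latent state back out
        # Small fixed scale keeps the residual stream bounded across loops.
        self.injection_scale = injection_scale

    def forward(self, x: torch.Tensor, num_loops: int = 4) -> torch.Tensor:
        e = self.prelude(x)
        s = torch.zeros_like(e)
        for _ in range(num_loops):
            # Residual update, re-injecting the prelude output each iteration.
            s = s + self.injection_scale * torch.tanh(self.recurrent_block(s) + e)
        return self.coda(s)
```

Raising `num_loops` at inference time spends more compute on the same weights, which is the sense in which reasoning depth becomes compute-adaptive rather than parameter-bound.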
OpenMythos is an open-source PyTorch project by Kye Gomez that proposes a theoretical reconstruction of Anthropic's Claude Mythos architecture. Instead of a standard stack of transformer layers, it suggests a Recurrent-Depth Transformer (RDT) design in which a weight-shared block is looped through multiple iterations to increase reasoning depth at inference time. By combining Mixture-of-Experts with Multi-Latent Attention and stability constraints, a 770M-parameter model reportedly reaches parity with a 1.3B-parameter standard transformer.
* Open-source PyTorch reconstruction of Claude Mythos.
* Proposes a Recurrent-Depth Transformer architecture.
* Reasoning depth scales via inference-time loops rather than parameter count.
* Uses Mixture-of-Experts for domain breadth.
* Implements Multi-Latent Attention to reduce memory usage.
* Employs LTI injection and Adaptive Computation Time for stability.
* Achieves 1.3B-parameter performance with only 770M parameters.
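The Adaptive Computation Time mechanism both summaries mention can be illustrated with a toy halting loop: a small head emits a halting probability each iteration, and looping stops once the cumulative probability crosses a threshold. This is a generic ACT sketch under assumed names (`step_fn`, `halt_head`), not the project's actual implementation.

```python
import torch
import torch.nn as nn

class ACTLoop(nn.Module):
    """Toy ACT: loop until cumulative halting probability exceeds
    1 - epsilon (or max_steps), returning the halting-weighted state."""

    def __init__(self, dim: int = 16, epsilon: float = 0.01, max_steps: int = 10):
        super().__init__()
        self.step_fn = nn.Linear(dim, dim)   # stand-in recurrent block
        self.halt_head = nn.Linear(dim, 1)   # emits a halting logit per step
        self.epsilon = epsilon
        self.max_steps = max_steps

    def forward(self, s: torch.Tensor):
        cumulative = torch.zeros(s.shape[0])
        weighted = torch.zeros_like(s)
        steps = 0
        for steps in range(1, self.max_steps + 1):
            s = torch.tanh(self.step_fn(s))
            p = torch.sigmoid(self.halt_head(s)).squeeze(-1)
            # An example that would overshoot spends its remaining mass.
            remainder = 1.0 - cumulative
            p = torch.where(cumulative + p > 1 - self.epsilon, remainder, p)
            weighted = weighted + p.unsqueeze(-1) * s
            cumulative = cumulative + p
            if bool((cumulative > 1 - self.epsilon).all()):
                break
        return weighted, steps
```

Easy inputs can halt after a step or two while hard ones use the full budget, which is how a halting mechanism curbs the "overthinking" the summary refers to.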
Personal website of Jamie Simon, a scientist specializing in fundamental theory for deep learning. He runs a research lab at the Redwood Center at UC Berkeley with funding from Imbue and recently completed his PhD under Mike DeWeese. The site serves as a hub for his scientific research, personal blog posts regarding science and life adventures, and custom-made puzzles.
Main topics:
* Deep learning fundamental theory
* Research publications
* Science and lifestyle blog
* Puzzle creation
A practical pipeline for classifying messy free-text data into meaningful categories using a locally hosted LLM, no labeled training data required.
Learn how to label text without the need for task-specific training data by using zero-shot text classification. This guide explains how pretrained transformer models, such as BART, reframe classification as a reasoning task where labels are treated as natural language statements.
Key topics include:
* The core concept of zero-shot classification and its advantages for rapid prototyping.
* Using the Hugging Face transformers pipeline with the facebook/bart-large-mnli model.
* Implementing multi-label classification for texts belonging to multiple categories.
* Improving accuracy through custom hypothesis template tuning and clear label wording.
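The NLI reframing at the core of zero-shot classification is easy to show directly: each candidate label is slotted into a hypothesis template, and the model then scores whether the input text entails each hypothesis. A minimal sketch of that templating step (the model call itself is omitted; the labels and templates are invented examples):

```python
def build_hypotheses(labels, template="This example is about {}."):
    """Turn candidate labels into natural-language hypotheses, as
    zero-shot NLI classifiers do internally. Clearer templates and
    label wording often improve accuracy."""
    return [template.format(label) for label in labels]

labels = ["billing", "technical support", "cancellation"]
hypotheses = build_hypotheses(labels, template="This customer message concerns {}.")
for h in hypotheses:
    print(h)
```

In the Hugging Face `transformers` library, `pipeline("zero-shot-classification", model="facebook/bart-large-mnli")` performs this templating for you, and its `hypothesis_template` argument is exactly the knob the tuning bullet above refers to.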
A comprehensive curated collection of Large Language Model (LLM) architecture figures and technical fact sheets. This gallery provides a visual and data-driven overview of modern model designs, ranging from classic dense architectures like GPT-2 to advanced sparse Mixture-of-Experts (MoE) systems and hybrid attention models. Users can explore detailed specifications including parameter scales, context windows, attention mechanisms, and intelligence indices for various prominent models.
Key features include:
* Detailed architecture fact sheets for a wide array of models such as Llama, DeepSeek, Qwen, Gemma, and Mistral.
* An architecture diff tool to compare two different model designs side-by-side.
* Comparative analysis across dense, MoE, MLA, and hybrid decoder families.
* Links to original source articles and technical reports for deeper research.
Simon Willison tests OpenAI's newly released ChatGPT Images 2.0 model with a complex Where's Waldo-style prompt involving a raccoon holding a ham radio. By comparing the results against previous versions and competitors such as Google's Nano Banana, the article evaluates the model's ability to handle high-detail illustrations and specific text elements.
Drawing on Marshall McLuhan’s philosophy, this piece warns that while we build AI tools, those same tools ultimately reshape our creative processes. Designers face the dual risks of "AI sycophancy"—where algorithms validate existing biases—and an "illusion of authority" that prioritizes polished speed over genuine depth. To avoid losing their edge, creators must treat AI as a partner for iteration rather than a replacement for critical thinking and human intuition.
* **The Feedback Loop:** Tools aren't neutral; they actively mold the user's cognitive habits.
* **Sycophancy Risk:** AI can act as a "digital yes-man," reinforcing errors instead of challenging them.
* **Superficiality Trap:** Rapid, high-quality outputs can mask a lack of true accountability or substance.
* **Intentional Agency:** Maintaining human intuition is essential to prevent being shaped by the technology.