A specialized implementation of a 25,000-parameter decoder-only transformer designed to run on an unmodified Commodore 64. Written in hand-coded 6502 assembly, the model implements real multi-head causal self-attention, RMSNorm, and softmax, mirroring the core components of modern LLM architectures despite the extreme constraints of a 1 MHz processor.
Key technical details include:
- Uses int8 quantized parameters with per-tensor shift scaling.
- Implements fixed-point arithmetic (Q8.8) for activations (see the sketch after this list).
- Features a 128-token BPE vocabulary and a 20-token context window.
- Includes tools for quantization-aware training (QAT) to ensure model accuracy on integer hardware.
- Capable of running on real C64 hardware or emulators like VICE, with performance averaging 60 seconds per token.
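As a rough illustration of the arithmetic pattern in the bullets above, here is a Python sketch of a dot product between int8, shift-scaled weights and Q8.8 fixed-point activations; the function names and the example shift value are assumptions, not taken from the project's 6502 source.

```python
# Illustrative sketch of Q8.8 fixed-point activations with int8, shift-scaled weights.
# Names and the example shift are assumptions; the real project does this in 6502 assembly.

def to_q88(x: float) -> int:
    """Encode a float as Q8.8 fixed point (8 integer bits, 8 fractional bits)."""
    return int(round(x * 256))

def from_q88(q: int) -> float:
    """Decode a Q8.8 value back to float."""
    return q / 256.0

def q88_mul(a: int, b: int) -> int:
    """Multiply two Q8.8 values; the product is Q16.16, so shift right by 8."""
    return (a * b) >> 8

def dot_int8_q88(weights_int8: list[int], acts_q88: list[int], shift: int) -> int:
    """Dot product of int8 weights (per-tensor scale 2**-shift) with Q8.8 activations.

    Accumulate in a wide integer, then undo the weight scale with a right shift,
    leaving the result in Q8.8.
    """
    acc = 0
    for w, a in zip(weights_int8, acts_q88):
        acc += w * a              # int8 * Q8.8 accumulates as Q8.8 scaled by 2**shift
    return acc >> shift           # remove the per-tensor power-of-two weight scale

# Example: two-element dot product with weights scaled by 2**-4
acts = [to_q88(0.5), to_q88(-1.25)]
weights = [8, -16]                # represent 0.5 and -1.0 when shift = 4
print(from_q88(dot_int8_q88(weights, acts, shift=4)))   # ~1.5
```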
An open-source, theoretical implementation of the Claude Mythos model architecture. The project implements a Recurrent-Depth Transformer (RDT) consisting of three stages: a Prelude, a looped Recurrent Block, and a final Coda. It utilizes switchable attention between Multi-Latent Attention (MLA) and Grouped Query Attention (GQA), alongside a sparse Mixture of Experts (MoE) design to facilitate compute-adaptive reasoning in continuous latent space.
Key technical features include:
* Recurrent-Depth Transformer architecture for implicit chain-of-thought reasoning (a minimal sketch follows this list).
* LTI-stable injection parameters to prevent residual explosion during training.
* Support for multiple model scales ranging from 1B to 1T parameters.
* Integration of Adaptive Computation Time (ACT) or similar halting mechanisms to manage overthinking.
* Use of fine-grained MoE with shared experts to balance breadth and depth.
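A minimal PyTorch-style sketch of the three-stage pattern described above: a prelude encodes the input, one shared recurrent block is looped in latent space with the prelude output re-injected each step, and a coda reads out logits. Module names, sizes, and the loop count are illustrative assumptions; the actual project adds MLA/GQA attention, MoE layers, and ACT-style halting.

```python
# Illustrative sketch of a prelude / looped recurrent block / coda forward pass.
# Module names, sizes, and the recurrence count are assumptions, not the project's code.
import torch
import torch.nn as nn

class RecurrentDepthSketch(nn.Module):
    def __init__(self, vocab: int = 1000, d_model: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        # Prelude: maps tokens into the latent space the loop operates on.
        self.prelude = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        # Recurrent block: one set of weights reused every iteration (depth via recurrence).
        self.recurrent = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        # Injection: re-adds the prelude output each step to keep the loop anchored to the input.
        self.inject = nn.Linear(d_model, d_model)
        # Coda: reads the final latent state out to logits.
        self.coda = nn.Linear(d_model, vocab)

    def forward(self, tokens: torch.Tensor, num_loops: int = 8) -> torch.Tensor:
        e = self.prelude(self.embed(tokens))        # prelude output, shape (B, T, d_model)
        h = torch.zeros_like(e)                     # latent state the loop refines
        for _ in range(num_loops):                  # compute-adaptive: vary num_loops at inference
            h = self.recurrent(h + self.inject(e))  # inject the input, then refine the latent
        return self.coda(h)                         # logits, shape (B, T, vocab)

logits = RecurrentDepthSketch()(torch.randint(0, 1000, (2, 16)), num_loops=4)
print(logits.shape)  # torch.Size([2, 16, 1000])
```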
This guide helps engineers build and ship LLM products by covering the full technical stack. It moves from core mechanics (tokenization, embeddings, attention) to training methodologies (pretraining, SFT, RLHF/DPO) and deployment optimizations (LoRA, quantization, vLLM). The focus is on managing critical production tradeoffs between accuracy, latency, memory, and cost.
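As one concrete instance of the memory side of those tradeoffs, the sketch below estimates KV-cache size from model shape and sequence length; the dimensions in the example are illustrative assumptions, not figures from the guide.

```python
# Back-of-the-envelope KV-cache sizing: one of the memory/latency tradeoffs the guide covers.
# The example shapes below are assumptions chosen for illustration.

def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, batch: int, bytes_per_elem: int = 2) -> int:
    """Bytes held in the KV cache: 2 tensors (K and V) per layer per token."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per_elem

# A 32-layer model with 8 KV heads of dim 128 (GQA), fp16 cache, 8 concurrent 4k-token requests:
gb = kv_cache_bytes(32, 8, 128, 4096, 8) / 1e9
print(f"{gb:.1f} GB of KV cache")  # ~4.3 GB
```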
Cisco and Splunk have introduced the Cisco Time Series Model, a univariate zero-shot time series foundation model designed for observability and security metrics. It is released as an open-weight checkpoint on Hugging Face. The release is motivated by several properties of observability workloads:
* **Multiresolution data is common:** The model handles data where fine-grained (e.g., 1-minute) and coarse-grained (e.g., hourly) data coexist, a typical pattern in observability platforms where older data is often aggregated.
* **Long context windows are needed:** It's built to leverage longer historical data (up to 16384 points) than many existing time series models, improving forecasting accuracy.
* **Zero-shot forecasting is desired:** The model aims to provide accurate forecasts *without* requiring task-specific fine-tuning, making it readily applicable to a variety of time series datasets.
* **Quantile forecasting is important:** It predicts not just the mean forecast but also a range of quantiles (0.1 to 0.9), providing a measure of uncertainty (see the loss sketch after this list).
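As a brief illustration of how quantile forecasts like these are typically scored, here is the standard pinball (quantile) loss; this is generic code, not taken from the Cisco Time Series Model release.

```python
# Generic pinball (quantile) loss: the standard way quantile forecasts are trained and evaluated.
# Not taken from the Cisco Time Series Model; shown only to illustrate quantile forecasting.
import numpy as np

def pinball_loss(y_true: np.ndarray, y_pred: np.ndarray, q: float) -> float:
    """Average pinball loss for quantile level q (e.g. 0.1 ... 0.9)."""
    diff = y_true - y_pred
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))

y = np.array([10.0, 12.0, 9.0])
forecast_p90 = np.array([13.0, 14.0, 11.0])   # a 0.9-quantile forecast should rarely be exceeded
print(pinball_loss(y, forecast_p90, q=0.9))   # small loss: the high quantile covers the truth
```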
A deep dive into the process of LLM inference, covering tokenization, transformer architecture, KV caching, and optimization techniques for efficient text generation.
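As a minimal illustration of one of those pieces, the sketch below shows single-head attention with a growing KV cache, so each decode step reuses previously computed keys and values instead of recomputing the whole prefix; all shapes and weights are random placeholders.

```python
# Minimal single-head attention with a KV cache, to illustrate why cached decoding is cheap:
# each new token attends over stored K/V rather than recomputing them for the full prefix.
# Shapes and weights are random placeholders, not tied to any particular model.
import numpy as np

d = 16
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
k_cache, v_cache = [], []

def decode_step(x: np.ndarray) -> np.ndarray:
    """Process one new token embedding x (shape (d,)) using the growing KV cache."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    k_cache.append(k)                       # store this token's K/V once...
    v_cache.append(v)
    K, V = np.stack(k_cache), np.stack(v_cache)
    scores = K @ q / np.sqrt(d)             # ...and attend over everything cached so far
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V                      # attention output for the new position

for t in range(5):                          # 5 decode steps reuse all previously cached K/V
    out = decode_step(rng.normal(size=d))
print(out.shape, len(k_cache))              # (16,) 5
```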
NVIDIA AI releases Nemotron-Elastic-12B, a 12B parameter reasoning model that embeds nested 9B and 6B variants in the same parameter space, allowing for multiple model sizes from a single training job.
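The sketch below illustrates the general idea of nested sub-models sharing one parameter space by slicing a full weight matrix; it is a generic illustration of elastic/nested networks, and the slicing rule, sizes, and names are assumptions rather than NVIDIA's actual Nemotron-Elastic mechanism.

```python
# Generic illustration of nested sub-models sharing one parameter space: a smaller variant
# is obtained by slicing the leading rows/columns of the full model's weight matrices.
# This is not NVIDIA's Nemotron-Elastic code; it only illustrates the nesting idea.
import numpy as np

full_dim, small_dim = 8, 4
W_full = np.random.default_rng(0).normal(size=(full_dim, full_dim))   # shared "full model" weights

def sliced_forward(x: np.ndarray, active_dim: int) -> np.ndarray:
    """Run the layer using only the first `active_dim` units of the shared weight matrix."""
    W = W_full[:active_dim, :active_dim]
    return np.maximum(x[:active_dim] @ W, 0.0)    # the smaller variant reuses a subset of weights

x = np.ones(full_dim)
print(sliced_forward(x, full_dim).shape)   # (8,) full model
print(sliced_forward(x, small_dim).shape)  # (4,) nested smaller model from the same weights
```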
An exploration of simple transformer circuit models that illustrate how superposition arises in transformer architectures, introducing toy examples and analyzing their behavior.
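In the spirit of such toy examples, the sketch below squeezes more sparse features than hidden dimensions through a linear map and a ReLU readout, which forces features to share directions (superposition); the sizes and training setup are illustrative assumptions, not the article's exact models.

```python
# Toy-model sketch of superposition: 6 sparse features squeezed into a 2-dim hidden space,
# reconstructed through W^T W with a ReLU. Sizes and training setup are illustrative only.
import torch
import torch.nn as nn

n_features, n_hidden = 6, 2
W = nn.Parameter(torch.randn(n_hidden, n_features) * 0.1)
b = nn.Parameter(torch.zeros(n_features))
opt = torch.optim.Adam([W, b], lr=1e-2)

for step in range(2000):
    # Sparse inputs: each feature is active ~10% of the time with a random magnitude.
    x = torch.rand(256, n_features) * (torch.rand(256, n_features) < 0.1).float()
    recon = torch.relu(x @ W.T @ W + b)       # down-project to 2 dims, project back, ReLU
    loss = ((recon - x) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Columns of W are the directions features get packed into; with 6 features in 2 dims,
# several features must share (interfere in) the same directions: superposition.
print(W.data)
```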
An in-depth look at the architecture of OpenAI's GPT-OSS models, detailing tokenization, embeddings, transformer blocks, Mixture of Experts, attention mechanisms (GQA and RoPE), and quantization techniques.
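Of those components, rotary position embeddings (RoPE) are compact enough to sketch: pairs of query/key dimensions are rotated by position-dependent angles. The code below is a generic split-halves RoPE implementation; the base and shapes are illustrative assumptions, not GPT-OSS's exact configuration.

```python
# Generic rotary position embedding (RoPE): rotate paired dimensions of q/k by an angle
# that grows with position. Base and sizes are illustrative, not GPT-OSS's exact config.
import numpy as np

def rope(x: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Apply RoPE to x of shape (seq_len, head_dim), head_dim even."""
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)              # one frequency per dimension pair
    angles = np.outer(np.arange(seq_len), freqs)           # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]                      # split into the two rotated halves
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

q = np.random.default_rng(0).normal(size=(8, 64))
print(rope(q).shape)  # (8, 64) -- same shape, with position information mixed into the values
```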
This paper proposes SkyMemory, a key-value cache (KVC) hosted on a LEO satellite constellation to accelerate transformer-based inference, particularly for large language models (LLMs). It explores different chunk-to-server mapping strategies (rotation-aware, hop-aware, and combined) and presents simulation results and a proof-of-concept implementation demonstrating performance improvements.
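For intuition only, the sketch below shows a toy hop-aware placement of KV-cache chunks onto satellites; the function, its greedy heuristic, and the data are assumptions for illustration and do not reproduce the paper's rotation-aware or combined strategies.

```python
# Purely illustrative chunk-to-server mapping: assign KV-cache chunks to satellites
# reachable in the fewest hops, balancing load on ties. This does not reproduce SkyMemory's
# actual algorithms; it only sketches the kind of placement decision the paper studies.
def hop_aware_placement(chunks: list[str], hop_counts: dict[str, int]) -> dict[str, str]:
    """Greedily place each chunk on the lowest-hop, then least-loaded, satellite."""
    load = {sat: 0 for sat in hop_counts}
    placement = {}
    for chunk in chunks:
        sat = min(hop_counts, key=lambda s: (hop_counts[s], load[s]))
        placement[chunk] = sat
        load[sat] += 1
    return placement

print(hop_aware_placement(["kv_0", "kv_1", "kv_2"], {"sat_a": 1, "sat_b": 2, "sat_c": 1}))
```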