klotz: long context


  1. Long contexts in language models are bottlenecked by KV cache size. Summarization compacts the token space but can lose information. This work introduces Attention Matching, a fast method for compacting the KV cache in latent space by matching attention outputs, allowing up to 50x compression with little quality degradation and offering a faster alternative to full optimization. (A sketch of the matching objective follows this list.)
  2. A Python implementation of Recursive Language Models for processing unbounded context lengths: handle 100k+ tokens with any LLM by storing the context in variables rather than in the prompt. (See the second sketch below.)
