Ramp Labs has introduced Latent Briefing, a new method for memory sharing in multi-agent systems. By compressing the KV caches of large models, the approach enables more efficient task decomposition and execution without sacrificing accuracy. On the LongBench v2 benchmark, it reduced token consumption for worker models by up to 65% while improving accuracy by 3 percentage points. The technique proved effective across a range of document types in tests with Claude Sonnet 4 and Qwen3-14B.
Key highlights:
- Reduces token usage by up to 65%.
- Improves model accuracy by 3 percentage points on LongBench v2.
- Optimizes multi-agent architectures through KV cache compression.
- Demonstrates faster processing and adapts across document types and models.
Long-context inference in language models is bottlenecked by the size of the KV cache. Summarization shrinks the token space but can discard information. This work introduces Attention Matching, a fast method for compacting the KV cache in latent space by matching attention outputs: the compressed cache is chosen so that attention over it reproduces the outputs of attention over the full cache. This allows up to 50x compression with little quality degradation, offering a faster alternative to full optimization.
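The core idea of matching attention outputs can be sketched in a few lines. The following is a hypothetical illustration, not Ramp Labs' implementation: compressed keys are formed here by mean-pooling contiguous segments of the original key cache (an assumed initialization), and compressed values are then solved in closed form so that attention outputs on a set of sample queries approximate those of the full cache. All function names and the segment-pooling choice are assumptions for illustration.

```python
import numpy as np

def attention(q, K, V):
    # Standard scaled dot-product attention: softmax(q K^T / sqrt(d)) V.
    scores = q @ K.T / np.sqrt(K.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)
    return w @ V

def compress_kv(K, V, m, queries):
    """Compress (K, V) from n to m entries by matching attention outputs.

    Hypothetical sketch: compressed keys are segment means of the original
    keys; compressed values are then solved by least squares so that
    attention over the compressed cache matches attention over the full
    cache on the sample queries.
    """
    n, d = K.shape
    # Pool keys into m contiguous segments (assumption: cache locality).
    bounds = np.linspace(0, n, m + 1).astype(int)
    K_c = np.stack([K[a:b].mean(axis=0)
                    for a, b in zip(bounds[:-1], bounds[1:])])
    # Target outputs from the full cache.
    target = attention(queries, K, V)
    # Attention weights of the sample queries over the compressed keys.
    scores = queries @ K_c.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)
    # Solve A @ V_c ~= target for the compressed values V_c.
    V_c, *_ = np.linalg.lstsq(A, target, rcond=None)
    return K_c, V_c

rng = np.random.default_rng(0)
n, d, m = 256, 32, 16  # 16x compression of the cache
K = rng.normal(size=(n, d))
V = rng.normal(size=(n, d))
queries = rng.normal(size=(64, d))

K_c, V_c = compress_kv(K, V, m, queries)
full = attention(queries, K, V)
approx = attention(queries, K_c, V_c)
err = np.linalg.norm(full - approx) / np.linalg.norm(full)
print(f"compressed {n} -> {m} entries, relative output error: {err:.3f}")
```

Because the objective is the attention output itself rather than token-level fidelity, the compressed cache can be much smaller than the original while still producing similar downstream behavior on the query distribution it was fit to.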