Sparse Priming Representations (SPR) is a research project focused on developing and sharing techniques for representing complex ideas, memories, or concepts as a minimal set of keywords, phrases, or statements. Given such a representation, a language model or subject matter expert can quickly reconstruct the original idea with minimal additional context.
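As a rough sketch of how the round trip might look in practice, the snippet below pairs a compression prompt with a decompression prompt. The `call_llm` helper and both prompt strings are hypothetical placeholders that paraphrase the idea; they are not the project's actual system prompts or API.

```python
# Illustrative SPR round trip. `call_llm` is a hypothetical stand-in for
# any chat-completion client; the prompts paraphrase the technique, not
# the project's exact system prompts.

SPR_WRITER = (
    "Render the input as a Sparse Priming Representation: a short list of "
    "succinct statements, assertions, associations, and analogies that "
    "would let a language model reconstruct the original idea."
)

SPR_READER = (
    "You are given a Sparse Priming Representation. Unpack it: fully "
    "reconstruct the idea it primes, filling in the implied context."
)

def call_llm(system: str, user: str) -> str:
    """Hypothetical LLM call; wire this to any chat-completion API."""
    raise NotImplementedError

def compress(document: str) -> str:
    """Distill a document into a minimal set of priming statements."""
    return call_llm(SPR_WRITER, document)

def decompress(spr: str) -> str:
    """Reconstruct the original idea from its sparse representation."""
    return call_llm(SPR_READER, spr)
```

The payoff of the round trip is token economy: a few priming statements stand in for the full document, and the reader model regenerates the rest from its own latent knowledge.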
Sparse autoencoders (SAEs) have been trained on Llama 3.3 70B, and the resulting interpreted model has been released behind an API, enabling research and product development through feature-space exploration and steering.
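Conceptually, steering means encoding a model activation into the SAE's sparse feature space, nudging one interpreted feature, and decoding back into the model. The toy sketch below uses random weights and made-up dimensions purely to illustrate that loop; it is not the released model, its trained weights, or its API.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_features = 64, 512  # toy sizes; real SAEs are far larger
W_enc = rng.normal(size=(d_model, d_features)) * 0.1
W_dec = rng.normal(size=(d_features, d_model)) * 0.1
b_enc = np.zeros(d_features)

def encode(activation: np.ndarray) -> np.ndarray:
    """Map a residual-stream activation to sparse feature activations."""
    return np.maximum(activation @ W_enc + b_enc, 0.0)  # ReLU sparsity

def decode(features: np.ndarray) -> np.ndarray:
    """Reconstruct the model activation from the feature space."""
    return features @ W_dec

def steer(activation: np.ndarray, feature_idx: int, strength: float) -> np.ndarray:
    """Amplify one interpreted feature, then map back into the model."""
    features = encode(activation)
    features[feature_idx] += strength
    return decode(features)

# Push a single (here arbitrary) feature harder on a random activation.
steered = steer(rng.normal(size=d_model), feature_idx=42, strength=5.0)
```

In a real deployment the steered activation would be written back into the model's forward pass, so that amplifying or suppressing a labeled feature shifts the model's behavior in an interpretable direction.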