Researchers at Stanford and the University of Washington trained an AI 'reasoning' model named s1 for under $50 in cloud compute credits. The model, which performs comparably to OpenAI's o1 and DeepSeek's R1, is available on GitHub. It was developed by distilling reasoning traces from Google's Gemini 2.0 Flash Thinking Experimental model and shows strong performance on benchmarks.
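For concreteness, distillation of this kind amounts to ordinary supervised fine-tuning on reasoning traces sampled from the teacher model. A minimal sketch using PyTorch and Hugging Face transformers, assuming the gpt2 student and the single hard-coded trace as illustrative placeholders, not the actual s1 training setup:

```python
# Sketch of sequence-level distillation: fine-tune a small "student" LM
# on reasoning traces produced by a stronger teacher.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")        # placeholder student
student = AutoModelForCausalLM.from_pretrained("gpt2")

# Imagine these were sampled from the teacher (e.g. Gemini) beforehand.
teacher_traces = [
    "Q: What is 17 * 24? Let's think step by step. "
    "17*24 = 17*20 + 17*4 = 340 + 68 = 408. A: 408",
]

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)
student.train()
for trace in teacher_traces:
    batch = tokenizer(trace, return_tensors="pt")
    # Standard causal-LM loss on the teacher's trace is the whole recipe.
    out = student(**batch, labels=batch["input_ids"])
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```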
"The paper introduces a technique called LoReFT (Low-rank Linear Subspace ReFT). Similar to LoRA (Low Rank Adaptation), it uses low-rank approximations to intervene on hidden representations. It shows that linear subspaces contain rich semantics that can be manipulated to steer model behaviors."
A chatbot that uses Wikipedia data to improve its factual accuracy. Key aspects include the use of large language models, retrieval of information from a reliable source, and the challenge of maintaining factual consistency in conversational AI systems.
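A minimal sketch of the retrieve-then-generate pattern this describes; the toy keyword retriever and two-passage corpus are stand-ins for illustration, not the actual WikiChat pipeline:

```python
# Ground the chatbot's answer in retrieved text before generation.
wiki_passages = {
    "Eiffel Tower": "The Eiffel Tower is a wrought-iron lattice tower "
                    "in Paris, completed in 1889.",
    "Great Wall": "The Great Wall of China is a series of fortifications "
                  "built across northern China.",
}

def retrieve(query: str) -> str:
    # Toy keyword retriever; a real system would use BM25 or dense search.
    for title, passage in wiki_passages.items():
        if any(word.lower() in query.lower() for word in title.split()):
            return passage
    return ""

def build_prompt(query: str) -> str:
    evidence = retrieve(query)
    # Conditioning the LM on retrieved evidence is what keeps it factual.
    return (f"Answer using only this evidence:\n{evidence}\n\n"
            f"Question: {query}\nAnswer:")

print(build_prompt("When was the Eiffel Tower completed?"))
```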
DSPy provides composable and declarative modules for instructing LMs in a familiar Pythonic syntax. It upgrades "prompting techniques" like chain-of-thought and self-reflection from hand-adapted string manipulation tricks into truly modular generalized operations that learn to adapt to your task.
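A minimal usage sketch, assuming a recent dspy release and an OpenAI API key in the environment; the model name is illustrative:

```python
import dspy

# Point DSPy at a language model backend.
lm = dspy.LM("openai/gpt-4o-mini")
dspy.configure(lm=lm)

# Declare the task as a signature; DSPy builds and manages the prompt,
# so chain-of-thought is a module rather than a hand-written string.
qa = dspy.ChainOfThought("question -> answer")
result = qa(question="What is the capital of France?")
print(result.reasoning)  # the model's intermediate reasoning
print(result.answer)     # the final answer field
```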