Reid Hoffman and Clement Delangue are among the signatories of a new open letter calling for the creation of public data sets and incentives to develop 'small' AI models. The letter aims to encourage collaboration among governments, tech companies, and civil society groups to harness the benefits of AI while mitigating its risks.
Qwen2.5-VL-3B-Instruct is the latest addition to Alibaba's Qwen family of vision-language models, released on Hugging Face, featuring enhanced capabilities for understanding visual content and generating structured outputs. It is designed to act as a visual agent, directly driving tools and computer or phone interfaces. Qwen2.5-VL can comprehend videos up to an hour long and localize objects within images using bounding boxes or points. It is available in three sizes: 3, 7, and 72 billion parameters.
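One of the structured outputs the model card highlights is object grounding as JSON, with entries like `{"bbox_2d": [x1, y1, x2, y2], "label": ...}` in absolute pixel coordinates. Below is a minimal sketch of consuming such output, assuming that format; the sample string is illustrative, not real model output:

```python
import json

def parse_grounding(output: str):
    """Parse Qwen2.5-VL-style grounding output: a JSON list of
    {"bbox_2d": [x1, y1, x2, y2], "label": str} objects (assumed format).
    Model output often wraps the JSON in a markdown code fence, so strip it."""
    text = output.strip()
    if text.startswith("```"):
        # Drop the opening ```json line and the trailing ``` fence.
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    boxes = json.loads(text)
    return [(obj["label"], tuple(obj["bbox_2d"])) for obj in boxes]

# Illustrative string, not taken from a real model run.
sample = '```json\n[{"bbox_2d": [10, 20, 110, 220], "label": "person"}]\n```'
print(parse_grounding(sample))
```

Parsing boxes into plain tuples like this makes it easy to hand them to any downstream drawing or cropping utility.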
Hugging Face researchers developed an open-source AI research agent called 'Open Deep Research' in 24 hours, aiming to match OpenAI's Deep Research. The project demonstrates the potential of agent frameworks to enhance AI model capabilities, achieving 55.15% accuracy on the GAIA benchmark. The initiative highlights the rapid development and collaborative nature of open-source AI projects.
Hugging Face has launched an initiative to replicate DeepSeek-R1, focused on building open datasets and sharing training pipelines for reasoning models.
The article introduces Hugging Face's Open-R1 project, a community-driven initiative to reconstruct and expand upon DeepSeek-R1, a cutting-edge reasoning language model. DeepSeek-R1, which emerged as a significant breakthrough, uses pure reinforcement learning to elicit reasoning capabilities from a base model, without an initial supervised fine-tuning stage. However, DeepSeek did not release the datasets, training code, or detailed hyperparameters used to create the model, leaving key aspects of its development opaque.
The Open-R1 project aims to address these gaps by systematically replicating and improving upon DeepSeek-R1's methodology. The initiative involves three main steps:
1. **Replicating the Reasoning Dataset**: Creating a reasoning dataset by distilling knowledge from DeepSeek-R1.
2. **Reconstructing the Reinforcement Learning Pipeline**: Developing a pure RL pipeline, including large-scale datasets for math, reasoning, and coding.
3. **Demonstrating Multi-Stage Training**: Showing how to transition from a base model to supervised fine-tuning (SFT) and then to RL, providing a comprehensive training framework.
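The RL stage that Open-R1 aims to reproduce is based on GRPO (Group Relative Policy Optimization), which, per the DeepSeek-R1 report, samples a group of completions per prompt and normalizes each completion's reward against the group's mean and standard deviation instead of training a separate value model. A minimal sketch of that advantage computation (a summary of the published recipe, not Open-R1's actual code):

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantages: normalize each completion's reward by the
    mean and standard deviation of its sampling group, so no separate
    value (critic) model is needed."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four completions for one prompt, scored by a rule-based reward
# (e.g. 1.0 for a correct final answer, 0.0 otherwise).
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))
```

Correct completions in a mostly-wrong group get large positive advantages, which is what drives the policy toward better reasoning traces without human preference labels.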
Alibaba's Qwen2.5 LLM now supports context lengths of up to 1 million tokens using Dual Chunk Attention. Two models have been released on Hugging Face, and running them at full capacity requires substantial VRAM. The article also discusses deployment challenges with quantized GGUF versions and system resource constraints.
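To see why full 1M-token capacity demands so much VRAM, consider the KV cache alone: it stores a key and a value tensor per layer, sized by the number of KV heads, head dimension, sequence length, and bytes per value. A back-of-the-envelope sketch; the hyperparameters below are illustrative assumptions for a 7B-class model with grouped-query attention, not the released models' exact configs:

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, dtype_bytes=2):
    """KV cache size: K and V tensors per layer, fp16/bf16 by default."""
    return 2 * layers * kv_heads * head_dim * seq_len * dtype_bytes

# Assumed 7B-class config: 28 layers, 4 KV heads, head dim 128.
gib = kv_cache_bytes(layers=28, kv_heads=4, head_dim=128,
                     seq_len=1_000_000) / 2**30
print(f"~{gib:.0f} GiB of KV cache at 1M tokens")
```

Under these assumptions the cache alone is on the order of 50 GiB before counting the model weights, which is why full-length inference exceeds a single consumer GPU.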
smolagents is a simple library that enables agentic capabilities for language models, allowing them to interact with external tools and perform tasks based on real-world data.
Hugging Face's smolagents simplifies the creation of intelligent agents, letting developers build one with just a few lines of code on top of powerful pretrained models.
A detailed guide on creating a text classification model with Hugging Face's transformer models, including setup, training, and evaluation steps.
HunyuanVideo is an open-source video generation model whose performance is comparable to, or better than, that of leading closed-source models. It features a unified image-and-video generative architecture, a large language model as text encoder, and a causal 3D VAE for spatio-temporal compression.
Hugging Face launches Gradio 5, a major update to its popular open-source tool for creating machine learning applications, aimed at making AI development more accessible and secure for enterprises.