Alibaba Cloud released its Qwen2.5-Omni-7B multimodal AI model, designed for cost-effective AI agents and capable of processing various inputs like text, images, audio, and video.
Can a far smaller model rival DeepSeek R1? Alibaba's Qwen team aims to find out with its latest release, QwQ. Despite having a fraction of DeepSeek R1's claimed 671 billion parameters, Alibaba touts its comparatively compact 32-billion-parameter 'reasoning' model as outperforming R1 on select math, coding, and function-calling benchmarks.
Qwen2.5-VL is a flagship model of the Qwen vision-language series, showcasing advancements in visual recognition, object localization, document parsing, and long-video comprehension. It introduces dynamic resolution processing and absolute time encoding, allowing it to handle complex inputs and maintain native resolution. Available in three sizes, it suits various applications from edge AI to high-performance computing, matching state-of-the-art models in document and diagram understanding while preserving strong linguistic capabilities.
Alibaba has unveiled a new artificial intelligence model that the company says outperforms the capabilities of DeepSeek V3, a leading AI system.
Alibaba's Qwen 2.5 LLM now supports context windows of up to 1 million input tokens using Dual Chunk Attention. Two models have been released on Hugging Face, both requiring significant VRAM to run at full context length. Challenges in deploying quantized GGUF versions under system-resource constraints are also discussed.
Simon Willison reviews the new Qwen2.5-Coder-32B, an open-weights LLM from Alibaba, which performs well on a range of coding benchmarks and can run on personal hardware such as his MacBook Pro M2.