Tags: glm-4.7-flash + deep learning


  1. Zhipu AI has released GLM-4.7-Flash, a 30B-A3B mixture-of-experts (MoE) model (30B total parameters, ~3B active per token) designed for efficient local coding and agent applications. It offers strong coding and reasoning performance, a 128k-token context window, and support for English and Chinese.


SemanticScuttle - klotz.me: tagged with "glm-4.7-flash+deep learning"
