Tags: huggingface* + google*


  1. This Hugging Face model card covers Gemma 4 31B-it, an open-weights multimodal model from Google DeepMind. Gemma 4 accepts text and image inputs and generates text output, with the smaller variants also supporting audio input. It comes in several sizes (E2B, E4B, 26B A4B, and 31B), allowing deployment on hardware ranging from phones to servers.
    The model offers a context window of up to 256K tokens and supports more than 140 languages. It uses both dense and Mixture-of-Experts (MoE) architectures and performs well on text generation, coding, and reasoning tasks. The card covers model data, training, ethics, usage, limitations, and best practices, along with code snippets for getting started with Transformers.
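    The Transformers quick-start mentioned in the card can be sketched along these lines. This is a minimal sketch, not the card's exact snippet: the model id, prompt, and generation parameters are placeholder assumptions to be replaced with values from the model card.

    ```python
    def build_messages(user_text):
        # Chat-style input in the role/content format expected by
        # Transformers chat templates for instruction-tuned models.
        return [{"role": "user", "content": user_text}]

    def generate(prompt, model_id="google/gemma-3-27b-it", max_new_tokens=64):
        # model_id is an assumption; use the repo name from the model card.
        # Heavy import kept local so build_messages stays importable
        # without torch/transformers installed.
        from transformers import pipeline
        generator = pipeline("text-generation", model=model_id)
        out = generator(build_messages(prompt), max_new_tokens=max_new_tokens)
        return out[0]["generated_text"]
    ```

    Passing a messages list (rather than a raw string) lets the pipeline apply the model's chat template automatically.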
  2. Google DeepMind introduced PaliGemma 2, a new family of Vision-Language Models ranging from 3 billion to 28 billion parameters, designed to generalize across different tasks and adapt to varied input types, including diverse image resolutions.
  3. Create a custom base image for a Cloud Workstations environment using a Dockerfile.
     Uses: quantized models from

SemanticScuttle - klotz.me: tagged with "huggingface+google"
