Tags: reddit*

9 bookmark(s)

  1. > Life Particles
    > https://codepen.io/lochlanduval/full/QwLOqqr
    >
> And here is my attempt at an evolutionary version, where I got the base force logic from a doctor dude on YouTube, so it's much smoother.
    >
    > Particle Life - Evolution
    > https://codepen.io/lochlanduval/full/NPqgvYL
    2025-07-15 by klotz
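    For reference, a minimal sketch of the classic particle-life pairwise force rule these pens implement, assuming the common formulation (per-color attraction matrix, universal close-range repulsion); the function name and constants are illustrative, not taken from the pens:

    ```python
    def particle_life_force(r, attraction, r_min=0.3, r_max=1.0):
        """Classic particle-life pairwise force (illustrative sketch).

        r          -- normalized distance between two particles
        attraction -- signed strength from the per-color interaction matrix
        """
        if r < r_min:
            return r / r_min - 1.0  # universal repulsion at close range
        if r < r_max:
            # Attraction (or repulsion, if negative) peaking midway,
            # tapering to zero at both r_min and r_max.
            mid = (r_min + r_max) / 2.0
            return attraction * (1.0 - abs(r - mid) / (mid - r_min))
        return 0.0  # beyond interaction range
    ```

    The evolutionary variant presumably mutates the attraction matrix over generations rather than fixing it by hand.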
  2. > This Emacs major mode is designed for viewing the output from systemd’s journalctl within Emacs. It provides a convenient way to interact with journalctl logs, including features like fontification, chunked loading for performance, and custom keyword highlighting.
  3. A Reddit discussion about text user interfaces for systemd, including links to multiple implementations.
    2025-07-07 by klotz
  4. 2025-07-05 by klotz
  5. A post with pithy observations and clear conclusions from building complex LLM workflows, covering topics like prompt chaining, data structuring, model limitations, and fine-tuning strategies.
  6. A user is seeking advice on deploying a new server with 4x H100 GPUs (320GB VRAM) for on-premise AI workloads. They are considering a Kubernetes-based deployment with RKE2, Nvidia GPU Operator, and tools like vLLM, llama.cpp, and Litellm. They are also exploring the option of GPU pass-through with a hypervisor. The post details their current infrastructure and asks for potential gotchas or best practices.
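    A minimal sketch of what serving on such a box could look like with vLLM's offline Python API, splitting one large model across the four H100s via tensor parallelism; the model name and parameter values are placeholders, not details from the post:

    ```python
    from vllm import LLM, SamplingParams

    # Shard the model's weights across all four H100s (tensor parallelism).
    # Model name is a placeholder; anything that fits in 320 GB of VRAM works.
    llm = LLM(
        model="meta-llama/Llama-3.1-70B-Instruct",
        tensor_parallel_size=4,
        gpu_memory_utilization=0.90,
    )

    params = SamplingParams(temperature=0.7, max_tokens=256)
    outputs = llm.generate(["List gotchas when passing GPUs through a hypervisor."], params)
    print(outputs[0].outputs[0].text)
    ```

    In a Kubernetes deployment, the Nvidia GPU Operator exposes the cards as `nvidia.com/gpu` resources that a vLLM pod would request.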
  7. A Reddit thread discussing preferred local Large Language Model (LLM) setups for tasks like summarizing text, coding, and general use. Users share their model choices (Gemma, Qwen, Phi, etc.) and frameworks (llama.cpp, Ollama, EXUI) along with potential issues and configurations.

    | **Model** | **Use Cases** | **Size (Parameters)** | **Approx. VRAM (Q4 Quantization)** | **Approx. RAM (Q4)** | **Notes/Requirements** |
    |----------------|---------------------------------------------------|------------------------|-----------------------------------|---------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
    | **Gemma 3 (Google)** | Summarization, conversational tasks, image recognition, translation, simple writing | 1B, 4B, 12B, 27B | ~1GB (1B), 2-3GB (4B), 8-12GB (12B), 14-18GB (27B) | 2-4GB (1B), 4-6GB (4B), 16-24GB (12B), 28-36GB (27B) | Excellent performance for its size. Recent versions have had memory leak issues (see Reddit post – use Ollama 0.6.6 or later, but even that may not be fully fixed). QAT versions are highly recommended. |
    | **Qwen 2.5 (Alibaba)** | Summarization, coding, reasoning, decision-making, technical material processing | 3B, 7B, 14B, 32B, 72B | 2-3GB (3B), 4-6GB (7B), 26-30GB (72B) | 4-6GB (3B), 8-12GB (7B), 50-60GB (72B) | Qwen models are known for strong performance. Coder versions are specifically tuned for code generation. |
    | **Qwen3 (Alibaba - upcoming)**| General purpose, likely similar to Qwen 2.5 with improvements | 70B | Estimated 25-30GB (Q4) | 50-60GB | Expected to be a strong competitor. |
    | **Llama 3 (Meta)**| General purpose, conversation, writing, coding, reasoning | 8B, 70B+ | 4-6GB (8B), 25-30GB (70B) | 8-12GB (8B), 50-60GB (70B) | Among the strongest open-source models. Excellent balance of performance and size. |
    | **YiXin (01.AI)** | Reasoning, brainstorming | 72B | ~26-30GB (Q4) | ~50-60GB | A powerful model focused on reasoning and understanding. Similar VRAM requirements to Qwen 72B. |
    | **Phi-4 (Microsoft)** | General purpose, writing, coding | 14B | ~7-9GB (Q4) | 14-18GB | Smaller model, good for resource-constrained environments, but may not match larger models in complexity. |
    | **Ling-Lite** | RAG (Retrieval-Augmented Generation), fast processing, text extraction | Variable | Varies with size | Varies with size | MoE (Mixture of Experts) model known for speed. Good for RAG applications where quick responses are important. |

    **Key Considerations:**

    * **Quantization:** The VRAM and RAM estimates above are based on 4-bit quantization (Q4). Lower-bit quantization (e.g., Q2) reduces memory usage further but *may* impact quality. Higher-precision formats (e.g., Q8, FP16) improve quality but require significantly more memory (see the worked example after this list).
    * **Frameworks:** Popular frameworks for running these models locally include:
      * **llama.cpp:** Highly optimized for CPU and GPU inference, especially on Apple Silicon.
      * **Ollama:** Simplified setup and management of LLMs. (Be aware of the Gemma 3 memory leak issue!)
      * **Text Generation WebUI (oobabooga):** Web-based interface with many features and customization options.
    * **Hardware:** A dedicated GPU with sufficient VRAM is highly recommended for decent performance. CPU-only inference is possible but can be slow. More RAM is generally better, even if the model fits in VRAM.
    * **Context Length:** The "40k" context mentioned in the Reddit post refers to the maximum number of tokens (words or sub-words) the model can process at once. Longer context lengths require more memory.
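    To make the table's estimates and the context-length point concrete, a back-of-the-envelope calculator; the ~4.5 bits/weight figure for Q4 (which accounts for quantization scales and metadata) and the example architecture numbers are rough assumptions, not measurements:

    ```python
    def weight_memory_gb(params_billion, bits_per_weight=4.5):
        """Approximate quantized weight size; Q4 formats average roughly
        4.5 bits per weight once scales and metadata are included."""
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    def kv_cache_gb(n_layers, n_kv_heads, head_dim, context_tokens, bytes_per_elem=2):
        """KV cache grows linearly with context: one K and one V vector
        per layer per token, stored at fp16 (2 bytes) by default."""
        return 2 * n_layers * n_kv_heads * head_dim * context_tokens * bytes_per_elem / 1e9

    # Hypothetical 8B model with Llama-3-8B-like dimensions at a 40k context:
    print(f"weights:  {weight_memory_gb(8):.1f} GB")              # ~4.5 GB
    print(f"KV cache: {kv_cache_gb(32, 8, 128, 40_000):.1f} GB")  # ~5.2 GB
    ```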
  8. A user is asking about the compatibility of different modules (GPS, GPIO, I2C, 'Port C' UART) with the Cardputer, specifically whether modules using 'Port C' can be used.
  9. reddacted is a tool to surgically clean up your online footprint on Reddit by analyzing comments for PII and performing sentiment analysis, offering bulk remediation options and a zero-trust architecture.
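    To illustrate the kind of scan reddacted performs (a sketch of the idea, not its actual code), a minimal PRAW loop that pulls your recent comments and flags common PII patterns; the credentials and regexes are placeholders:

    ```python
    import re
    import praw  # Reddit API wrapper

    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    }

    reddit = praw.Reddit(
        client_id="YOUR_ID", client_secret="YOUR_SECRET",
        username="YOUR_USERNAME", password="YOUR_PASSWORD",
        user_agent="pii-scan-sketch/0.1",
    )

    for comment in reddit.redditor("YOUR_USERNAME").comments.new(limit=100):
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(comment.body):
                print(f"{label}: https://reddit.com{comment.permalink}")
                # comment.delete()  # where bulk remediation would hook in
    ```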
