This post discusses the limitations of using cosine similarity for compatibility matching, specifically in the context of a dating app. The author found that high cosine similarity scores didn't always translate to actual compatibility due to the inability of embeddings to capture dealbreaker preferences. They improved results by incorporating structured features and hard filters.
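The fix the author describes can be sketched in a few lines: apply deterministic hard filters on structured fields first, and only score the survivors with cosine similarity. The profile fields (`wants_kids`, `city`, `acceptable_cities`) are hypothetical stand-ins, not the author's actual schema:

```python
import math

def cosine(a, b):
    # Plain cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def passes_hard_filters(seeker, candidate):
    # Dealbreakers are binary rules, not similarity: a near-identical
    # embedding cannot override a mismatched preference.
    return (candidate["wants_kids"] == seeker["wants_kids"]
            and candidate["city"] in seeker["acceptable_cities"])

def match_score(seeker, candidate):
    # Hard filters gate the match; cosine similarity only ranks
    # candidates that already cleared every dealbreaker.
    if not passes_hard_filters(seeker, candidate):
        return 0.0
    return cosine(seeker["embedding"], candidate["embedding"])
```

The key design choice is that the embedding never gets a vote on dealbreakers; it only orders candidates within the filtered pool.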
Based on the discussion, /u/septerium achieved optimal performance for GLM 4.7 Flash (UD-Q6_K_XL) on an RTX 5090 using these specific settings and parameters:
- GPU: NVIDIA RTX 5090
- Throughput: ~150 tokens/s
- Quantization: UD-Q6_K_XL (Unsloth dynamic GGUF)
- Context size: 48,000 tokens, squeezed entirely into VRAM (`--ctx-size 48000`)
- Flash Attention: enabled (`-fa on`)
- GPU layers: 99 (`-ngl 99`), so the entire model runs on the GPU

**Sampler & inference parameters**
- Temperature: 0.7 (recommended by Unsloth for tool calls)
- Top-P: 1.0
- Min-P: 0.01
- Repeat penalty: must be disabled (llama.cpp disables it by default, but users warned that other platforms might not)
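Assuming llama.cpp's `llama-server`, the settings above would combine into a launch command along these lines (the GGUF filename is a placeholder, not the exact file from the thread):

```shell
# Illustrative launch command assembling the reported settings;
# the model path is a placeholder.
llama-server \
  -m GLM-4.7-Flash-UD-Q6_K_XL.gguf \
  --ctx-size 48000 \
  -ngl 99 \
  -fa on \
  --temp 0.7 \
  --top-p 1.0 \
  --min-p 0.01
```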
LLMs are powerful for understanding user input and generating human‑like text, but they are not reliable arbiters of logic. A production‑grade system should:
- Isolate the LLM to language tasks only.
- Put all business rules and tool orchestration in deterministic code.
- Validate every step with automated tests and logging.
- Prefer local models for sensitive domains like healthcare.
| **Issue** | **What users observed** | **Common solutions** |
|-----------|------------------------|----------------------|
| **Hallucinations & false assumptions** | LLMs often answer without calling the required tool, e.g., claiming a doctor is unavailable when the calendar shows otherwise. | Move decision‑making out of the model. Let the code decide and use the LLM only for phrasing or clarification. |
| **Inconsistent tool usage** | Models agree to user requests, then later report the opposite (e.g., confirming an appointment but actually scheduling none). | Enforce deterministic tool calls first, then let the LLM format the result. Use “always‑call‑tool‑first” guards in the prompt. |
| **Privacy concerns** | Sending patient data to cloud APIs is risky. | Prefer self‑hosted/local models (e.g., LLaMA, Qwen) or keep all data on‑premises. |
| **Prompt brittleness** | Adding more rules can make prompts unstable; models still improvise. | Keep prompts short, give concrete examples, and test with a structured evaluation pipeline. |
| **Evaluation & monitoring** | Without systematic “evals,” failures go unnoticed. | Build automated test suites (e.g., with LangChain, LangGraph, or custom eval scripts) that verify correct tool calls and output formats. |
| **Workflow design** | The model is often asked to be both the decision engine and the interface, which conflates the two roles. | • Treat the LLM as a *translator*, not a *decision engine*: extract intent → produce a JSON/action spec → execute deterministic code → have the LLM produce a user‑friendly response. <br>• Cache common replies to avoid unnecessary model calls. |
| **Alternative UI** | Many suggest a simple button‑driven interface for scheduling. | Use the LLM only for natural‑language front‑end; the back‑end remains a conventional, rule‑based system. |
A user shares their experience running the GPT-OSS 120b model on Ollama with an i7 6700, 64GB DDR4 RAM, an RTX 3090, and a 1TB SSD. They note slow initial token generation but acceptable overall performance, highlighting that the model is usable on a relatively modest setup. The discussion includes comparisons with other hardware configurations, optimization tips for llama.cpp, and impressions of the model's output quality.
>I have a 3090 with 64gb ddr4 3200 RAM and am getting around 50 t/s prompt processing speed and 15 t/s generation speed using the following:
>
>`llama-server -m <path to gpt-oss-120b> --ctx-size 32768 --temp 1.0 --top-p 1.0 --jinja -ub 2048 -b 2048 -ngl 99 -fa 'on' --n-cpu-moe 24`
>This just about fills up my VRAM and RAM entirely. For more wiggle room for other applications, use `--n-cpu-moe 26`.
This Emacs major mode is designed for viewing the output of systemd's journalctl within Emacs. It provides a convenient way to interact with journalctl logs, including fontification, chunked loading for performance, and custom keyword highlighting.
A Reddit discussion about text user interfaces for systemd, including links to multiple implementations.
A post with pithy observations and clear conclusions from building complex LLM workflows, covering topics like prompt chaining, data structuring, model limitations, and fine-tuning strategies.