klotz: optimization*

  1. A user shares their optimal settings for running the gpt-oss-120b model on a system with dual RTX 3090 GPUs and 128GB of RAM, aiming for a balance between performance and quality.
  2. A recap of the author's Boggle project, including media coverage, a published paper on arXiv.org, new optimizations, and reflections on the challenges and future directions.
  3. Finding all the words on a Boggle board is a classic computer programming problem. With a fast Boggle solver, local optimization techniques such as hillclimbing and simulated annealing can be used to find particularly high-scoring boards. The sheer number of possible Boggle boards has historically prevented an exhaustive search for the global optimum board. We apply Branch and Bound and a decision diagram-like data structure to perform the first such search. We find that the highest-scoring boards found via hillclimbing are, in fact, the global optima.
  4. This article explores the challenges and possibilities of writing portable and efficient SIMD code in Rust, aiming for a "fearless SIMD" approach with high-level, safe, and composable primitives.
  5. LLM EvalKit is a streamlined framework that helps developers design, test, and refine prompt‑engineering pipelines for Large Language Models (LLMs). It encompasses prompt management, dataset handling, evaluation, and automated optimization, all wrapped in a Streamlit web UI.

    Key capabilities:

    | Stage | What it does | Typical workflow |
    |-------|-------------|------------------|
    | **Prompt Management** | Create, edit, version, and test prompts (name, text, model, system instructions). | Define a prompt, load/edit existing ones, run quick generation tests, and maintain version history. |
    | **Dataset Creation** | Organize data for evaluation; load CSV, JSON, or JSONL files into GCS buckets. | Create dataset folders, upload files, preview items. |
    | **Evaluation** | Run model‑based or human‑in‑the‑loop metrics; compare outcomes across prompt versions. | Choose prompt + dataset, generate responses, score with metrics like “question‑answering‑quality”, save baseline results to a leaderboard. |
    | **Optimization** | Leverage Vertex AI’s prompt‑optimization job to automatically search for better prompts. | Configure the job (model, dataset, prompt), launch it, and monitor training in the Vertex AI console. |
    | **Results & Records** | Visualize optimization outcomes, compare versions, and maintain a record of performance over time. | View leaderboard, select best optimized prompt, paste new instructions, re‑evaluate, and track progress. |

    **Getting Started**

    1. Clone the repo, set up a virtual environment, install dependencies, and run `streamlit run index.py`.
    2. Configure `src/.env` with `BUCKET_NAME` and `PROJECT_ID`.
    3. Use the UI to create/edit prompts, datasets, and launch evaluations/optimizations as described in the tutorial steps.
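    For step 2, a minimal `src/.env` sketch (the values are placeholders, not real resources):

    ```shell
    # src/.env — placeholder values; substitute your own GCS bucket and GCP project
    BUCKET_NAME=your-gcs-bucket
    PROJECT_ID=your-gcp-project
    ```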

    **Token Use‑Case**

    - **Prompt**: `Problem: {{query}}\nImage: {{image}} @@@image/jpeg\nAnswer: {{target}}`
    - **Example input JSON**: query, choices, image URL, target answer.
    - **Model**: `gemini-2.0-flash-001`.

    **License** – Apache 2.0.
  6. A new paper demonstrates that the simplex method, a widely used optimization algorithm, is as efficient as it can be, and explains why it performs well in practice despite theoretical limitations.
  7. DeepScientist is a goal-oriented, fully autonomous scientific discovery system. It uses Bayesian Optimization and a hierarchical 'hypothesize, verify, and analyze' process with a Findings Memory to balance exploration and exploitation. It generated and validated thousands of scientific ideas, surpassing human SOTA on three AI tasks.
  8. A comprehensive guide covering the most critical machine learning equations, including probability, linear algebra, optimization, and advanced concepts, with Python implementations.
  9. This article explains how derivatives, gradients, Jacobians, and Hessians fit together and shows examples of what they are used for, including optimization and rendering.
  10. This article discusses why larger Meshtastic networks may benefit from switching from the default LongFast LoRa preset to higher bandwidth options like MediumSlow, MediumFast, ShortSlow, or ShortFast, detailing the trade-offs between range, speed, and reliability.
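The local-search idea from item 3 can be sketched in miniature. Below is a minimal hillclimbing loop over a 4x4 board stored as a 16-character string; the `score` function here is a stand-in (counting vowels), whereas the actual project scores boards with a fast dictionary-backed Boggle solver:

```python
import random

LETTERS = "abcdefghijklmnopqrstuvwxyz"

def score(board):
    """Stand-in objective: count vowels.
    (Assumption: the real objective is the total word score
    reported by a fast Boggle solver.)"""
    return sum(c in "aeiou" for c in board)

def hillclimb(board, steps=1000, seed=0):
    """Greedy local search: mutate one random cell per step,
    keeping only strict improvements."""
    rng = random.Random(seed)
    best, best_score = board, score(board)
    for _ in range(steps):
        i = rng.randrange(len(best))
        cand = best[:i] + rng.choice(LETTERS) + best[i + 1:]
        s = score(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score

board, s = hillclimb("x" * 16)  # 4x4 board flattened to 16 chars
print(board, s)
```

Hillclimbing like this only finds local optima; the paper's contribution is showing, via Branch and Bound over a decision-diagram-like structure, that the best hillclimbed boards are in fact globally optimal.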

About - Propulsed by SemanticScuttle