Tags: prompt engineering


  1. Tips on improving your GitHub repository organization and structure. Bullet Points:
    - Create meaningful branch names
    - Use descriptive commit messages
    - Keep a clean project history
    - Separate your code into well-organized directories
    - Follow a consistent naming convention
    - Make use of pull requests
    - Collaborate effectively by writing clear documentation
    - Maintain good communication within your team
    Keywords: GitHub, repository best practices, organization, structure, branch names, commit messages, project history, directories, naming conventions, pull requests, collaboration, documentation, effective communication
  2. - Prompt engineering is about experimenting with changes in prompts to understand how they affect what large language models (LLMs) generate as output. A few basic techniques yield better outcomes for LLM use
    - Zero-shot prompting is when an LLM is given a task, via prompt, for which the model has not previously seen examples
    - For many language tasks in the literature, performance improves when the prompt includes a few examples; this is known as few-shot prompting
    - Chain-of-Thought (CoT) prompting breaks down multi-step problems into intermediate steps, allowing LLMs to tackle complex reasoning that can't be solved with zero-shot or few-shot prompting
    - Built on CoT, self-consistency prompting is an advanced technique that samples multiple, diverse reasoning paths from the LLM and then selects the most consistent answer among the generated responses
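    The techniques above can be sketched as prompt-construction helpers. This is a minimal illustration, not any particular library's API; the function names, prompt wording, and the hardcoded "sampled answers" standing in for real LLM outputs are all assumptions.

    ```python
    from collections import Counter

    def zero_shot(task: str) -> str:
        """Zero-shot: the task alone, with no worked examples."""
        return f"{task}\nAnswer:"

    def few_shot(task: str, examples: list[tuple[str, str]]) -> str:
        """Few-shot: prepend a handful of input/output pairs before the task."""
        shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
        return f"{shots}\nQ: {task}\nA:"

    def chain_of_thought(task: str) -> str:
        """CoT: ask the model to work through intermediate steps."""
        return f"{task}\nLet's think step by step."

    def self_consistency(sampled_answers: list[str]) -> str:
        """Self-consistency: sample several reasoning paths (e.g. at
        temperature > 0), extract each path's final answer, and keep the
        most frequent one (a simple majority vote)."""
        return Counter(sampled_answers).most_common(1)[0][0]

    # Majority vote over three final answers (stand-ins for sampled LLM output):
    print(self_consistency(["42", "41", "42"]))  # -> 42
    ```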
    2024-01-20 by klotz
  3. Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chroma, Weaviate, LanceDB).
    2023-12-31 by klotz
  4. chat - chat directly; the character card is your prompt

    instruct - chat between "you" and "assistant" using the model's prompt format

    chat-instruct - chat with you and a character card as the prompt, but with the instruct template applied, i.e. "you are an AI playing x character, respond as the character would", converted to Alpaca, Wizard, or whatever format the model expects

    There is no best mode, but for factual information you probably want to keep to instruct mode. chat-instruct doesn't necessarily play the characters better or make them write longer; it's sort of hit or miss. One may work better than the other for a particular model and prompt.
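    The three modes roughly correspond to different ways of assembling the final prompt string. A minimal sketch, assuming an Alpaca-style instruct template and a made-up character card; the template text and helper names are illustrative, not the actual implementation of any chat UI:

    ```python
    # Alpaca-style instruct template (an assumption; each model has its own).
    ALPACA = "### Instruction:\n{instruction}\n\n### Response:\n"

    CARD = "You are Sherlock Holmes, a brilliant detective."

    def chat_prompt(card: str, user_msg: str) -> str:
        # chat mode: the character card is the prompt, followed by dialogue.
        return f"{card}\nYou: {user_msg}\nSherlock:"

    def instruct_prompt(user_msg: str) -> str:
        # instruct mode: a plain you/assistant exchange in the model's format.
        return ALPACA.format(instruction=user_msg)

    def chat_instruct_prompt(card: str, user_msg: str) -> str:
        # chat-instruct mode: wrap "play this character" plus the card
        # inside the instruct template.
        instruction = (
            "Continue the chat, responding as the character below would.\n"
            f"{card}\nYou: {user_msg}"
        )
        return ALPACA.format(instruction=instruction)
    ```

    The practical consequence is visible in the strings themselves: only chat-instruct puts the character card inside the model's instruct format, which is why it can behave differently from plain chat for the same card.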
  5. How computationally optimized prompts make language models excel, and how this all affects prompt engineering


SemanticScuttle - klotz.me: tagged with "prompt engineering"
