Tags: openai*


  1. The author tests OpenAI's new GPT-4o model on a standard set of coding tests and finds that it delivers good results, but with one surprising issue.
    2024-05-28 by klotz
  2. A tutorial showing how to bring real-time data to LLMs through function calling, using OpenAI's latest LLM, GPT-4o.
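As a rough sketch of the function-calling pattern that tutorial covers: the application describes local tools to the model as JSON schemas, the model replies with a tool name plus JSON-encoded arguments, and the application dispatches that call locally. The tool name, schema, and `run_tool_call` helper below are illustrative assumptions, and the model's reply is simulated rather than fetched from the API.

```python
import json

# Hypothetical local tool the model may call to fetch real-time data.
# A real implementation would hit a live weather API; this returns canned data.
def get_current_weather(city):
    return {"city": city, "temp_c": 21}

# JSON schema describing the tool, in the shape the OpenAI
# chat-completions API expects under its "tools" parameter.
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Dispatch table mapping tool names the model returns to local functions.
DISPATCH = {"get_current_weather": get_current_weather}

def run_tool_call(name, arguments_json):
    """Execute a tool call as it would arrive from the model:
    a tool name plus a JSON-encoded argument string."""
    args = json.loads(arguments_json)
    return DISPATCH[name](**args)

# Simulated model output: the model asks for the weather tool.
result = run_tool_call("get_current_weather", '{"city": "Berlin"}')
print(result)  # {'city': 'Berlin', 'temp_c': 21}
```

In the real loop, `result` would be serialized back to the model as a tool-role message so it can compose a final answer grounded in the fresh data.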
  3. In this article, the author tests ChatGPT-4o's vision feature by providing it with a series of images and asking it to describe what it can see. The author is impressed with the model's accuracy and descriptive abilities.
  4. In an interview with TechCrunch, Signal CEO Meredith Whittaker criticizes the media's obsession with AI-driven deepfakes, the encroaching surveillance state, and the concentration of power in the five main social media platforms. She also discusses the company's recent war of words with Elon Musk, Telegram's Pavel Durov, and OpenAI's leadership.
  5. OpenAI's new GPT-4o model is now available for free, but ChatGPT Plus subscribers still get access to more prompts and newer features. This article compares what's available to both free and paid users.
    2024-05-15 by klotz
  6. OpenAI introduces GPT-4, a new large language model that surpasses human performance on various tasks. Although not yet publicly available, the article provides insights into its capabilities and how it sets a new standard for AI.
    2024-05-15 by klotz
  7. OpenAI, the artificial intelligence research laboratory, has launched ChatGPT-4, an upgraded version of its popular chatbot. ChatGPT-4 is reportedly more powerful, private, and able to handle longer conversations than its predecessor. The chatbot uses a larger model and improved training techniques, allowing it to generate more nuanced and detailed responses. OpenAI also introduced a new feature called Instruct-1, a more precise way to guide the chatbot's responses, and a new interface for easier interaction with the AI.
  8. This article discusses the process of training a large language model (LLM) using reinforcement learning from human feedback (RLHF) and a new alternative method called Direct Preference Optimization (DPO). The article explains how these methods help align the LLM with human expectations and make it more efficient.
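The DPO method that article describes optimizes a simple per-pair loss on log-probabilities of a preferred ("chosen") and dispreferred ("rejected") response under the policy and a frozen reference model. A minimal sketch, with the function name, argument names, and the β value all illustrative assumptions:

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.
    Inputs are summed log-probs of the chosen and rejected responses
    under the policy and the frozen reference model."""
    # Implicit reward margin: how much more the policy prefers the
    # chosen response than the reference model does.
    margin = ((logp_chosen - ref_logp_chosen)
              - (logp_rejected - ref_logp_rejected))
    # -log sigmoid(beta * margin): shrinks as the policy's preference
    # for the chosen response (relative to the reference) grows.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

print(dpo_loss(0.0, 0.0, 0.0, 0.0))  # log(2) ~ 0.693: no preference yet
```

Unlike RLHF, this needs no separate reward model or RL loop: the gradient of this loss directly pushes probability mass toward chosen responses and away from rejected ones.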
  9. - Standardization, governance, simplified troubleshooting, and reusability in ML application development.
     - Integrations with vector databases and LLM providers to support new applications.
     - Provides tutorials on integrating


SemanticScuttle - klotz.me: tagged with "openai"
