klotz: incident*


  1. OpenAI is blaming one of the longest outages in its history on a 'new telemetry service' gone awry, which caused major disruptions to ChatGPT, Sora, and its developer-facing API.

    ### Postmortem Incident Investigation Report

    #### Incident Summary
    On December 13, 2024, OpenAI experienced a major service outage affecting its AI-powered chatbot platform, ChatGPT, its video generator, Sora, and its developer-facing API. The incident began around 3 p.m. Pacific Time and lasted approximately three hours before all services were fully restored.

    #### Root Cause
    The outage was caused by the deployment of a new telemetry service designed to collect Kubernetes metrics. A misconfiguration in this service inadvertently triggered resource-intensive Kubernetes API operations.
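
    OpenAI has not published the exact API calls its telemetry agent made, so the following is only a hypothetical sketch, using the official `kubernetes` Python client, of how a per-node collector issuing unscoped, unpaginated LIST requests can multiply load on the API servers, and how scoping and pagination keep an equivalent query cheap. The node name and function names are illustrative, not taken from the incident report.

    ```python
    # Hypothetical sketch; not OpenAI's code. Requires the `kubernetes` Python client.
    from kubernetes import client, config

    config.load_kube_config()      # inside a pod this would be config.load_incluster_config()
    v1 = client.CoreV1Api()

    NODE_NAME = "node-1234"        # illustrative; typically injected via the Downward API

    def collect_all_pods():
        # Unscoped, unpaginated LIST: every agent pulls every pod object in the
        # cluster. Run from thousands of nodes, this multiplies into heavy,
        # resource-intensive load on the Kubernetes API servers.
        return v1.list_pod_for_all_namespaces(watch=False)

    def collect_local_pods(limit=500):
        # Cheaper pattern: restrict the query to pods on this node and paginate it.
        return v1.list_pod_for_all_namespaces(
            field_selector=f"spec.nodeName={NODE_NAME}",
            limit=limit,
            watch=False,
        )
    ```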

    #### Detailed Analysis
    - **New Telemetry Service**: The telemetry service was rolled out to collect Kubernetes metrics. However, its configuration led to unintended and resource-intensive Kubernetes API operations.
    - **Kubernetes API Overload**: The resource-intensive operations overwhelmed the Kubernetes API servers, disrupting the control plane in most of OpenAI's large Kubernetes clusters.
    - **DNS Resolution Impact**: The disrupted control plane in turn impaired DNS resolution, the critical mechanism that converts domain names into IP addresses. This complication delayed visibility into the full scope of the problem and allowed the rollout to continue before the issues were fully understood.
    - **DNS Caching**: DNS caching further delayed visibility and slowed the fix, because systems kept relying on cached records rather than the actual, disrupted state (see the sketch after this list).
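
    The caching effect can be illustrated with a small, self-contained sketch (not OpenAI's code): a resolver wrapper that caches answers and keeps serving the last known address after live lookups begin failing, which is roughly why cached records masked the broken control plane until they expired. The TTL value is an assumption for illustration.

    ```python
    # Illustrative sketch only: a tiny resolver cache that keeps serving stale
    # answers after live DNS lookups fail, hiding the outage until entries expire.
    import socket
    import time

    _cache = {}          # hostname -> (address, expires_at)
    CACHE_TTL = 300      # hypothetical TTL in seconds

    def resolve(hostname):
        now = time.time()
        hit = _cache.get(hostname)
        if hit and hit[1] > now:
            return hit[0]                              # fresh cached answer
        try:
            address = socket.gethostbyname(hostname)   # live lookup via cluster DNS
            _cache[hostname] = (address, now + CACHE_TTL)
            return address
        except socket.gaierror:
            if hit:
                return hit[0]                          # stale answer masks the failure
            raise
    ```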

    #### Complicating Factors
    - **Detection Delay**: OpenAI detected the issue only "a few minutes" before customers noticed the impact, but could not quickly implement a fix because the Kubernetes API servers engineers needed to reach were themselves overwhelmed.
    - **Testing Shortcomings**: Pre-deployment testing did not catch the change's impact on the Kubernetes control plane, which contributed to the slow remediation.

    #### Preventive Measures
    - **Improved Monitoring**: Implementing better monitoring for infrastructure changes to detect issues early.
    - **Phased Rollouts**: Adopting phased rollouts with enhanced monitoring, so that deployments are verified on a small footprint before they widen (a minimal sketch follows this list).
    - **Kubernetes API Access**: Ensuring that OpenAI engineers have mechanisms to access the Kubernetes API servers under any circumstances to improve the remediation speed.
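
    OpenAI has not described its rollout tooling, so the sketch below only illustrates the "deploy to a small footprint, verify control-plane health, then widen" idea behind phased rollouts; the cluster names and the `deploy_to` / `control_plane_healthy` helpers are hypothetical placeholders for whatever the deployment system actually provides.

    ```python
    # Minimal phased-rollout gate; deploy_to and control_plane_healthy are
    # hypothetical callables supplied by the deployment tooling.
    import time

    PHASES = [
        ["staging-1"],                        # illustrative cluster names
        ["prod-small-1", "prod-small-2"],
        ["prod-large-1", "prod-large-2"],
    ]

    def phased_rollout(deploy_to, control_plane_healthy, soak_seconds=600):
        for phase in PHASES:
            for cluster in phase:
                deploy_to(cluster)
            time.sleep(soak_seconds)          # let monitoring accumulate signal
            unhealthy = [c for c in phase if not control_plane_healthy(c)]
            if unhealthy:
                raise RuntimeError(f"halting rollout; unhealthy control plane in {unhealthy}")
    ```
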
  2. ‘I’ve been to Bali too’ (and I will be going back): are terrorist shocks to Bali’s tourist arrivals permanent or transitory?
