OpenAI releases gpt-oss-safeguard, open-weight AI models for content moderation that let developers supply their own written safety policies instead of relying on classifiers trained against a fixed taxonomy. At inference time, the model reasons over the provided policy and the content to be classified, offering a more flexible and explainable approach to moderation: policies can be revised without retraining the model.