Jason Donenfeld, the creator of the popular open-source WireGuard VPN software, has been locked out of his Microsoft developer account. This unexpected suspension prevents him from signing drivers and shipping critical software updates to Windows users. The issue stems from a mandatory account verification process within Microsoft's Windows Hardware Program, which has suspended accounts that failed to complete verification by a specific deadline, often without prior notification to the developers. This situation mirrors recent troubles faced by other prominent open-source projects like VeraCrypt and Windscribe, highlighting a growing tension between Microsoft's security verification requirements and the operational needs of independent software maintainers.
Nicholas Carlini, a research scientist at Anthropic, demonstrated that Claude Code can identify remotely exploitable security vulnerabilities within the Linux kernel. Most significantly, the AI discovered a heap buffer overflow in the NFS driver that had remained undetected for 23 years. By using a simple script to direct the model's attention to specific source files, Carlini was able to uncover complex bugs that require a deep understanding of intricate protocols. While the discovery highlights the growing power of large language models in cybersecurity, it also presents a new bottleneck: the massive volume of potential vulnerabilities found by AI requires significant manual effort from human researchers to validate and report.
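Carlini's actual script is not reproduced in the article, but the approach it describes (iterate over candidate source files and ask the model to audit each one) can be sketched roughly as follows. The prompt wording, file pattern, and function names here are illustrative assumptions, not the published tooling:

```python
# Hypothetical sketch of a "point the model at one file" audit loop --
# the real script, file selection, and prompt are not public, so all
# names here are illustrative assumptions.
from pathlib import Path

AUDIT_PROMPT = (
    "You are auditing Linux kernel source for remotely exploitable "
    "memory-safety bugs (heap overflows, use-after-free, integer "
    "overflows). Analyze the file below and report any candidate "
    "vulnerability with the exact lines involved.\n\n"
    "--- {name} ---\n{code}"
)

def build_audit_prompts(source_root: str, pattern: str = "fs/nfs/*.c"):
    """Yield (path, prompt) pairs, one per source file under audit."""
    for path in sorted(Path(source_root).glob(pattern)):
        code = path.read_text(errors="replace")
        yield path, AUDIT_PROMPT.format(name=path.name, code=code)

# Each prompt would then be sent to the model and the responses queued
# for the manual triage the article mentions; that submission step is
# deliberately omitted here.
```

The interesting part is how little orchestration is needed: the loop only narrows the model's attention to one file at a time, and the validation bottleneck the article describes begins once the responses come back.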
This patch introduces a new kernel configuration option, CONFIG_VFS_AGE_VERIFICATION, which mandates that processes register a valid birth date via a new prctl() operation (PR_SET_BIRTHDATE) before being allowed to create files. This is in response to new regulations requiring age verification for digital content creation. If a process has not registered a birthdate, or its registered age is under 18, file creation fails with a new error code, ETOOYOUNG.
The patch also adds a new error number, ETOOYOUNG (134), and includes safeguards against bypassing verification through execve(). It playfully rejects birthdates indicating an age over 150, acknowledging the lack of support for immortal entities.
Greg Kroah-Hartman, a long-time Linux kernel maintainer, has observed a significant shift in AI-driven activity around Linux security and code review. Where he previously received mostly "AI slop" (inaccurate or low-quality reports), the past month has brought a marked improvement in the quality and relevance of AI-generated bug reports and security findings across open-source projects. While the cause of this change remains unknown, Kroah-Hartman notes that the kernel team can handle the increased volume, but smaller projects may struggle. AI is increasingly used as a reviewer and assistant, and is even beginning to contribute patches, with tools like Sashiko being integrated to manage the influx.
OpenShell is a safe, private runtime environment designed for autonomous AI agents. It provides sandboxed execution with declarative YAML policies to control file access, data exfiltration, and network activity. Built with an agent-first approach, OpenShell offers pre-built skills for tasks like cluster debugging and policy generation.
Currently in alpha, it focuses on single-player mode and aims to expand to multi-tenant enterprise deployments. OpenShell uses a containerized K3s Kubernetes cluster for isolation and enforces security across filesystem, network, process, and inference layers. It supports agents like Claude, OpenCode, and Copilot, managing credentials securely.
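OpenShell's policy schema is not documented in this summary, so the following YAML is a purely hypothetical illustration of what a declarative sandbox policy covering file access, network activity, and exfiltration might express; every key name is invented:

```yaml
# Hypothetical policy -- key names are invented for illustration and do
# not reflect OpenShell's actual schema.
agent: claude
filesystem:
  read:
    - /workspace/**
  write:
    - /workspace/output/**
  deny:
    - ~/.ssh/**
network:
  allow:
    - api.anthropic.com:443
  deny_all_other: true
exfiltration:
  max_upload_bytes: 1048576
```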
This article details the journey of deploying an on-premise Large Language Model (LLM) server, focusing on security considerations. It explores the rationale behind on-premise deployment for privacy and data control, outlining the goals of creating an air-gapped, isolated infrastructure. The authors delve into the hardware selection process, choosing components like an Nvidia RTX Pro 6000 Max-Q for its memory capacity. The deployment process starts with a minimal setup using llama.cpp, then progresses to containerization with Podman and the use of CDI for GPU access. Finally, the article discusses hardening techniques, including kernel module management and file permission restrictions, to minimize the attack surface and enhance security.
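The Podman-plus-CDI step in that progression can be sketched with the commands below, assuming the NVIDIA Container Toolkit is installed; the container image tag and model path are illustrative, not taken from the article:

```shell
# 1. Generate a CDI specification for the installed GPU(s)
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# 2. Verify that Podman can pass the CDI-described GPU into a container
podman run --rm --device nvidia.com/gpu=all \
    docker.io/nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

# 3. Run a llama.cpp server container with GPU access, a read-only
#    rootfs, and dropped capabilities (in the spirit of the hardening
#    the article covers); image tag and paths are assumptions
podman run --rm --device nvidia.com/gpu=all \
    --read-only --cap-drop=all \
    -v /models:/models:ro,Z \
    -p 127.0.0.1:8080:8080 \
    ghcr.io/ggml-org/llama.cpp:server-cuda \
    -m /models/model.gguf --host 0.0.0.0 --port 8080
```

Binding the published port to 127.0.0.1 keeps the server reachable only from the host, which fits the isolated, air-gapped posture the article aims for.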
An account of how a developer, Alexey Grigorev, accidentally deleted 2.5 years of data from his AI Shipping Labs and DataTalks.Club websites using Claude Code and Terraform. Grigorev intended to migrate his website to AWS, but a missing state file and subsequent actions by Claude Code led to a complete wipe of the production setup, including the database and snapshots. The data was ultimately restored with help from Amazon Business support. The article highlights the importance of backups, careful permissions management, and manual review of potentially destructive actions performed by AI agents.
OpenSandbox provides a secure and isolated runtime environment for running commands, filesystems, code interpreters, browsers, and developer tools. It offers multi-language SDKs, unified APIs, and supports various AI workloads like coding agents, browser automation, remote development, AI code execution, and RL training.
Hundreds of academics are campaigning against the global move toward age checks on online services, warning that the technologies are ineffective and carry significant risks to privacy, security, and freedom.
NanoClaw, a new open-source agent platform, aims to address the security concerns surrounding platforms like OpenClaw by utilizing containers and a smaller codebase. The project, started by Gavriel Cohen with the help of Anthropic's Claude Code, focuses on isolation and auditability, allowing agents to operate within a contained environment with limited access to system data.