Jason Donenfeld, the creator of the popular open-source WireGuard VPN software, has been locked out of his Microsoft developer account. This unexpected suspension prevents him from signing drivers and shipping critical software updates to Windows users. The issue stems from a mandatory account verification process within Microsoft's Windows Hardware Program, which has suspended accounts that failed to complete verification by a specific deadline, often without prior notification to the developers. This situation mirrors recent troubles faced by other prominent open-source projects like VeraCrypt and Windscribe, highlighting a growing tension between Microsoft's security verification requirements and the operational needs of independent software maintainers.
This article details a hands-on experience with Nvidia's NemoClaw, a security-focused stack designed to enhance the safety of the OpenClaw AI platform. While NemoClaw introduces improvements like a sandbox model and aggressive policy filtering, the author finds it still falls short of being a reliable solution.
Bugs, limitations, and the inherent risks associated with OpenClaw's architecture—particularly its connection to external services—persist. The core issue remains that NemoClaw can secure the agent but cannot protect against malicious instructions embedded in external data sources like emails or messages.
The author concludes that while NemoClaw is a step forward, it doesn't fully address the fundamental security concerns surrounding OpenClaw.
This article details a tutorial on building cybersecurity AI agents using the CAI framework. It guides readers through setting up the environment with Colab, loading API keys, and creating base agents. The tutorial progresses to advanced capabilities, including custom function tools, multi-agent handoffs, agent orchestration, input guardrails, and dynamic tools.
It demonstrates how CAI transforms Python functions and agent definitions into flexible cybersecurity workflows capable of reasoning, delegating, validating, and responding in a structured way. The article also showcases CTF-style pipelines, multi-turn context handling, and streaming responses, offering a comprehensive overview of CAI's potential for security applications.
This article details the first day of the OpenClaw Mastery course, focusing on installation and security. It explains the evolution of AI tools – from simple chat interfaces to agent harnesses and finally to proactive, always-on assistants like OpenClaw. The core idea is to set up OpenClaw on a VPS for isolation and security, emphasizing a cautious approach to capability and the importance of verifying the setup. The article highlights past security issues within the OpenClaw community and outlines a strategy to avoid them, prioritizing a slow and deliberate addition of features.
>"Any line in a .pth file that starts with import will be executed automatically whenever Python starts. This means a feature designed for convenience can also be abused as a persistence mechanism, since arbitrary code can be injected into the startup process."
> You can check which directories your interpreter uses with:
> `python3 -c "import sys; print(sys.path)"`
A malicious release of litellm version 1.82.8 was published to PyPI on March 24, 2026.
The package contains a hidden .pth file that executes on every Python interpreter startup, spawning a subprocess that triggers the same .pth again, creating an exponential fork bomb.
The malware harvests credentials (SSH keys, cloud provider tokens, Kubernetes configs, environment variables, etc.), encrypts them with a hard‑coded RSA key, and exfiltrates them to a malicious domain.
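The quoted `.pth` behavior is easy to demonstrate without any malware. The sketch below (illustrative only; the marker attribute is my own, not from the actual litellm payload) writes a `.pth` file whose single line starts with `import`, then triggers the same `site.py` code path that runs at interpreter startup:

```python
import os
import site
import sys
import tempfile

# Create a temporary directory to stand in for site-packages.
d = tempfile.mkdtemp()

# Any line in a .pth file that begins with "import" is exec()'d by
# site.py when the directory is processed. Here the "payload" is a
# harmless marker; a malicious package hides arbitrary code instead.
with open(os.path.join(d, "demo.pth"), "w") as f:
    f.write("import sys; sys._pth_demo = 'executed'\n")

# At real startup, site.py does this automatically for site-packages;
# calling addsitedir() explicitly exercises the identical mechanism.
site.addsitedir(d)

print(getattr(sys, "_pth_demo", None))  # the .pth line already ran
```

Because this happens before any application code, a `.pth` dropped by a compromised package runs on every invocation of the interpreter, which is what makes it attractive as a persistence mechanism.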
The Black Lotus Labs team at Lumen has discovered KadNap, a sophisticated malware targeting Asus routers and conscripting them into a botnet used for proxying malicious traffic. KadNap utilizes a custom Kademlia DHT protocol to conceal its infrastructure and evade detection, making disruption difficult. The botnet, with over 14,000 infected devices, is marketed through a proxy service called "Doppelganger", linked to the previously known Faceless service. A significant portion of the victims (60%) are located in the United States. Lumen has proactively blocked traffic to KadNap’s control infrastructure and is sharing indicators of compromise.
Three vendors – Cohesity, ServiceNow, and Datadog – have partnered to create a recoverability service designed to address the risks of agentic AI in IT operations (AIOps). The service aims to restore systems to a "trusted state" by identifying and recovering files and data corrupted by AI errors or malicious attacks.
The companies anticipate increased adoption of agentic AI for system operation but recognize the potential for errors and vulnerabilities. Their solution focuses on preserving immutable snapshots of AI environments, enabling point-in-time recovery of agents, data, and infrastructure components, including vector stores and agent memory.
ServiceNow and Datadog provide control and observability platforms to detect anomalies, triggering API-driven restorations when problems are identified. This offering competes with Rubrik's similar tool and native rollback capabilities from vendors like Cisco. Gartner predicts a significant increase in the integration of task-specific agents in enterprise applications, while Forrester emphasizes the need for guardrails and strong oversight in agentic AI development.
AI agents are increasingly deployed to execute important tasks. While rising accuracy scores on standard benchmarks suggest rapid progress, many agents still fail in practice. This discrepancy highlights a fundamental limitation of current evaluations: compressing agent behavior into a single success metric obscures critical operational flaws. Notably, it ignores whether agents behave consistently across runs, withstand perturbations, fail predictably, or have bounded error severity.
Key contributions:
> 1. A formal taxonomy and metric suite: We translate qualitative safety-critical principles into computable metrics, enabling evaluation of agent reliability independently of task success.
> 2. A comprehensive reliability profile of modern agents: A detailed mapping of where state-of-the-art agentic models succeed and fail, isolating consistency and predictability as the dimensions requiring immediate research focus.
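To make the "consistency across runs" idea concrete, here is a toy metric in the spirit of the paper's reliability profile. The function name and formula are my own illustration, not taken from the paper: it scores a task by how strongly repeated runs agree with the modal outcome, independently of whether that outcome is a success.

```python
from collections import Counter

def run_consistency(outcomes: list[str]) -> float:
    """Fraction of repeated runs that agree with the most common outcome.

    1.0 means the agent behaves identically every run (fully consistent,
    even if consistently wrong); values near 1/k for k distinct outcomes
    indicate highly unpredictable behavior.
    """
    counts = Counter(outcomes)
    return max(counts.values()) / len(outcomes)

# Five runs of the same task: four agree, one diverges.
runs = ["pass", "pass", "fail", "pass", "pass"]
print(run_consistency(runs))  # high agreement, imperfect consistency
```

A single success-rate number would report this task as 80% accurate; the consistency score separately flags that one in five runs diverges, which is exactly the kind of operational signal the taxonomy argues benchmarks currently discard.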
Raspberry Pi's share price surged after an X post linked the AI agent OpenClaw to increased demand. The article discusses the reasons behind the surge, the current state of Raspberry Pi hardware, and the security concerns surrounding OpenClaw.