OpenShell is a safe, private runtime environment designed for autonomous AI agents. It provides sandboxed execution with declarative YAML policies to control file access, data exfiltration, and network activity. Built with an agent-first approach, OpenShell offers pre-built skills for tasks like cluster debugging and policy generation.
Currently in alpha, it focuses on single-player mode and aims to expand to multi-tenant enterprise deployments. OpenShell uses a containerized K3s Kubernetes cluster for isolation and enforces security across filesystem, network, process, and inference layers. It supports agents like Claude, OpenCode, and Copilot, managing credentials securely.
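A declarative policy of the kind described might look something like the sketch below. This is purely illustrative: the schema, field names, and structure are assumptions, not OpenShell's actual policy format.

```yaml
# Hypothetical sandbox policy sketch (all field names are illustrative assumptions)
policy:
  filesystem:
    allow_read:
      - /workspace          # agent's working directory
    deny_write:
      - /etc                # protect host-style config paths
  network:
    default: deny           # deny-by-default egress
    allow_egress:
      - api.anthropic.com:443
  exfiltration:
    block_patterns:
      - "AKIA[0-9A-Z]{16}"  # e.g. AWS access key IDs leaving the sandbox
```

A deny-by-default network section plus pattern-based exfiltration filters is the common shape for this kind of agent-sandbox policy, whatever the exact schema.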
This article details the journey of deploying an on-premise Large Language Model (LLM) server, focusing on security considerations. It explores the rationale behind on-premise deployment for privacy and data control, outlining the goals of creating an air-gapped, isolated infrastructure. The authors delve into the hardware selection process, choosing components like an NVIDIA RTX Pro 6000 Max-Q for its memory capacity. The deployment process starts with a minimal setup using llama.cpp, then progresses to containerization with Podman and the use of CDI for GPU access. Finally, the article discusses hardening techniques, including kernel module management and file permission restrictions, to minimize the attack surface and enhance security.
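The Podman-plus-CDI step typically follows the standard NVIDIA Container Toolkit workflow, roughly as sketched below; the model path and llama.cpp image tag are placeholders and may differ from what the article uses.

```shell
# Generate a CDI spec describing the host's NVIDIA GPUs
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# Verify the GPU is visible from a (rootless) container
podman run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi

# Serve a local GGUF model with llama.cpp's server image
# (image tag and model path are placeholder assumptions)
podman run --rm --device nvidia.com/gpu=all \
  -v /models:/models:ro -p 8080:8080 \
  ghcr.io/ggml-org/llama.cpp:server-cuda \
  -m /models/model.gguf --host 0.0.0.0 --port 8080
```

CDI is attractive here because the GPU is exposed as a named device (`nvidia.com/gpu=all`) rather than via privileged mounts, which fits the article's hardening theme.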
NanoClaw, a new open-source agent platform, aims to address the security concerns surrounding platforms like OpenClaw by using containers and a much smaller codebase. The project, started by Gavriel Cohen with the help of Anthropic's Claude Code, focuses on isolation and auditability, allowing agents to operate within a contained environment with limited access to system data.
A guide on running OpenClaw (aka Clawdbot aka Moltbot) in a Docker container, including setup, configuration, and accessing the web UI.
This article details the integration of Docker Model Runner with the NVIDIA DGX Spark, enabling faster and simpler local AI model development. It covers setup, usage, and benefits like data privacy, offline availability, and ease of customization.
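Day-to-day, Docker Model Runner is a short CLI loop along these lines; the model name below is just an example from Docker Hub's `ai/` namespace, not necessarily the one the article benchmarks on the DGX Spark.

```shell
# Pull a model from Docker Hub's ai/ namespace
docker model pull ai/smollm2

# Run a one-off prompt against the local model
docker model run ai/smollm2 "Write a haiku about containers"

# List models available locally
docker model list
```

Because models are pulled and cached like images, the offline-availability and customization benefits the article mentions fall out of the normal Docker workflow.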
Fly.io provides a secure and fast platform for deploying AI workflows and LLM-generated code using ephemeral, kernel-isolated virtual machines (Fly Machines). It offers features like secure sandboxing, fast startup times, a clean slate for each run, a simple API, and support for whole applications, not just code snippets.
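Creating one of these ephemeral Machines is a single call to Fly's Machines API, roughly as below; the app name, token, and command are placeholders, and the exact config fields should be checked against Fly's API documentation.

```shell
# Boot an ephemeral, auto-destroying Fly Machine (app name/token are placeholders)
curl -X POST "https://api.machines.dev/v1/apps/my-sandbox/machines" \
  -H "Authorization: Bearer ${FLY_API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "config": {
      "image": "ubuntu:22.04",
      "auto_destroy": true,
      "init": { "cmd": ["python3", "snippet.py"] }
    }
  }'
```

With `auto_destroy` set, the VM disappears after the process exits, which is what gives each run the "clean slate" the blurb describes.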
DockaShell is an MCP (Model Context Protocol) server that gives AI agents isolated Docker containers to work in. Each agent gets its own persistent environment with shell access, file operations, and full audit trails. It aims to remove the limitations of current AI assistants (no persistent memory, constant tool babysitting, limited toolsets, no self-reflection), enabling self-evolving agents with continuous memory, autonomous exploration, and meta-learning.
This article details significant security vulnerabilities found in the Model Context Protocol (MCP) ecosystem, a standardized interface for AI agents. It outlines six critical attack vectors (OAuth vulnerabilities, command injection, unrestricted network access, file system exposure, tool poisoning, and secret exposure) and explains how Docker MCP Toolkit provides enterprise-grade protection against these threats.
The article discusses Apple Container, a new tool for running Linux containers on macOS, comparing its performance and efficiency to Docker Desktop. It highlights its ease of setup on Apple Silicon Macs, compatibility with Dockerfiles, and potential as a lightweight alternative for home lab enthusiasts.
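The CLI deliberately mirrors Docker's verbs, so the basic loop looks roughly like the sketch below; subcommand names reflect the tool's early releases and may change, so treat this as an assumption to verify against Apple's documentation.

```shell
# Start the tool's background services (each container runs in its own lightweight VM)
container system start

# Run a throwaway Linux container, Docker-style
container run --rm alpine echo "hello from macOS"
```

The per-container lightweight VM model is what drives the Docker Desktop comparison: there is no single shared Linux VM to keep warm.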
The article details the author's experience switching from NGINX to Traefik as a reverse proxy for Docker Compose applications, citing scalability and ease of management as key benefits. It explains what a reverse proxy is and highlights Traefik’s automatic configuration and SSL certificate renewal features.
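Traefik's automatic configuration works by reading labels off your Compose services, along the lines of this minimal sketch (image choices and the hostname are placeholders, not the author's actual stack):

```yaml
# Minimal Traefik + app Compose sketch (hostname and images are placeholders)
services:
  traefik:
    image: traefik:v3
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  app:
    image: nginx:alpine
    labels:
      - traefik.enable=true
      - traefik.http.routers.app.rule=Host(`app.example.com`)
      - traefik.http.routers.app.entrypoints=web
```

Adding a new service is just another block of labels, with no central NGINX config to edit and reload, which is the scalability win the author describes.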