Learn to deploy your own local LLM service using Docker containers for maximum security and control, whether you're running on a CPU, an NVIDIA GPU, or an AMD GPU.
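As a rough sketch of what such a deployment can look like, the snippet below starts an LLM server container with the Docker SDK for Python (`pip install docker`). The `ollama/ollama` image, port 11434, and volume path are assumptions for illustration; the article may use a different image or serving stack.

```python
# Minimal sketch: run a local LLM server container with the Docker SDK for Python.
# Image name, port, and data path are illustrative assumptions.
import docker

client = docker.from_env()

container = client.containers.run(
    "ollama/ollama",                      # assumed LLM server image
    detach=True,
    name="local-llm",
    ports={"11434/tcp": 11434},           # publish the model's HTTP API
    volumes={"llm-models": {"bind": "/root/.ollama", "mode": "rw"}},  # persist downloaded models
    # On an NVIDIA GPU host (with the NVIDIA Container Toolkit installed),
    # uncomment the next line to hand all GPUs to the container:
    # device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
)
print("started", container.short_id)
```

The same pattern works CPU-only by simply omitting the `device_requests` argument; AMD GPUs typically need a ROCm-enabled image and device mappings instead.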
Dozzle is a lightweight, self-hosted tool that gives you a real-time view of your container logs, with an intuitive UI, live log streaming, intelligent search, and support for use cases such as home labs and local development.
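A quick way to try it, sketched here with the Docker SDK for Python: mounting the Docker socket read-only is what lets Dozzle stream logs from every container on the host. The image tag and port follow the project's usual quickstart, but double-check them against the Dozzle docs.

```python
# Start Dozzle and expose its web UI on http://localhost:8080.
import docker

client = docker.from_env()

client.containers.run(
    "amir20/dozzle:latest",
    detach=True,
    name="dozzle",
    ports={"8080/tcp": 8080},  # web UI
    volumes={
        # Read-only access to the Docker socket so Dozzle can read container logs.
        "/var/run/docker.sock": {"bind": "/var/run/docker.sock", "mode": "ro"},
    },
)
```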
This Docker-based network visualizer deploys in under two minutes and automatically maps all devices on your network with an interactive web dashboard.
The article discusses the increasing complexity of Kubernetes and suggests that Silicon Valley is exploring alternative technologies for container orchestration, citing a benchmark showing a stripped-down stack outperforming Kubernetes.
A curated guide to code sandboxing solutions, covering technologies like MicroVMs, application kernels, language runtimes, and containerization. It provides a feature matrix, in-depth platform profiles (e2b, Daytona, microsandbox, WebContainers, Replit, Cloudflare Workers, Fly.io, Kata Containers), and a decision framework for choosing the right sandboxing solution based on security, performance, workload type, and hosting preferences.
Docker is making it easier for developers to run and test large language models (LLMs) on their own machines with the launch of Docker Model Runner, a new beta feature in Docker Desktop 4.40 for Apple silicon Macs. The release also integrates the Model Context Protocol (MCP) to streamline connections between AI agents and data sources.
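For a sense of how an application talks to a model served this way, here is a hedged sketch that calls Model Runner's OpenAI-compatible HTTP API with plain `requests`. The port (12434), URL path, and model name reflect the beta's documented defaults but are assumptions that may differ in your setup.

```python
# Query a locally served model through an OpenAI-compatible chat endpoint.
# Endpoint and model identifier are assumptions; adjust to your configuration.
import requests

resp = requests.post(
    "http://localhost:12434/engines/v1/chat/completions",  # assumed host endpoint
    json={
        "model": "ai/smollm2",  # assumed model identifier
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```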
A Microsoft engineer demonstrates how WebAssembly modules can run alongside containers in Kubernetes environments, offering benefits like reduced size and faster cold start times for certain workloads.
Docker offers a range of logging drivers that determine where log messages are stored and how they are formatted, including json-file, syslog, journald, fluentd, awslogs, gelf, logentries, and splunk.
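As a small illustration of how a driver is chosen per container, the sketch below uses the Docker SDK for Python; the same choice can also be made daemon-wide in /etc/docker/daemon.json. The rotation options and syslog address shown are illustrative values, not recommendations from the article.

```python
# Pick a logging driver per container via the Docker SDK for Python.
import docker
from docker.types import LogConfig

client = docker.from_env()

# Default json-file driver, but with size-capped, rotated log files.
log_config = LogConfig(
    type=LogConfig.types.JSON,                    # the "json-file" driver
    config={"max-size": "10m", "max-file": "3"},  # rotate at 10 MB, keep 3 files
)

output = client.containers.run(
    "alpine",
    ["sh", "-c", "echo hello from a rotated json-file log"],
    log_config=log_config,
    remove=True,  # clean up the container after it exits
)
print(output.decode())

# Swapping in another driver is just a different type string, e.g.:
# LogConfig(type="syslog", config={"syslog-address": "udp://192.0.2.10:514"})
```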