This guide shows how to use AI skills—reusable packages of instructions and files—to automate repetitive data science workflows. By moving beyond simple prompting to structured skills, users can keep the main context window short while ensuring consistent, high-quality outputs for complex tasks like data visualization or metric investigation.
* A skill consists of a SKILL.md file with metadata and detailed instructions to guide an AI through specific recurring processes.
* Using skills helps keep the main LLM context lightweight by only loading detailed resources when they are relevant to the task.
* The author demonstrates this by automating a weekly visualization habit, reducing a one-hour manual process to less than ten minutes.
* Building effective skills requires iterative testing, incorporating personal domain knowledge, and researching external best practices.
* Combining skills with Model Context Protocol (MCP) allows AI to both follow specific procedural playbooks and access external data tools seamlessly.
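A skill, as described above, is essentially a folder containing a SKILL.md whose metadata tells the model when to load the full instructions. A minimal sketch of what such a file might look like for the weekly visualization habit (the name, description, and steps are invented for illustration, not taken from the article):

```markdown
---
name: weekly-viz
description: Build the weekly metrics visualization from the latest data export.
---

# Weekly visualization

1. Load the latest CSV export from the data directory.
2. Apply the house chart style (fonts, colors, axis labels).
3. Produce the standard four-panel summary figure.
4. Save the figure and write a one-paragraph summary of notable changes.
```

Because only the frontmatter is loaded up front, the detailed steps cost no context until the skill is actually invoked.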
As AI agents evolve from autocomplete tools to active contributors (opening PRs, managing infrastructure), DevOps must adapt. This playbook outlines the shift through these key strategic pillars:
* **Foundational Prerequisites:** Robust CI/CD, automated testing, and Infrastructure as Code are essential for agentic workflows.
* **Evolving Engineering Roles:** Engineers transition from code producers to system designers, agent operators, and quality stewards.
* **Structured Collaboration:** Integration across IDEs, PRs, pipelines, and production environments is required.
* **Repository Design:** Repositories must act as explicit interfaces using skill profiles and instruction files.
* **Development Methodology:** Shift from ephemeral prompt engineering to durable, specification-driven development.
* **Governance & Security:** Implement frameworks that keep custom agents consistent and auditable, and transform CI/CD pipelines into active verifiers of semantic intent and security.
* **New Success Metrics:** Move from volume-based productivity counts to outcome-based and trust-boundary measurements.
AWS has released the general availability of its DevOps Agent, a generative AI assistant designed to automate incident investigation and operational tasks. Built on Amazon Bedrock AgentCore, the tool integrates with observability platforms, code repositories, and CI/CD pipelines to autonomously triage issues and correlate telemetry data. New capabilities include support for investigating applications in Azure and on-premises environments, custom agent skills, and personalized reporting.
Key highlights:
* Autonomous incident investigation triggered by webhooks from sources like CloudWatch or PagerDuty.
* Integration with major tools including Datadog, Grafana, Splunk, GitHub, and GitLab.
* Reported reductions in MTTR of up to 75% during the preview period.
* Pricing based on cumulative time spent on operational tasks, billed per second.
Schematik is a new AI-driven program designed to democratize hardware engineering by allowing users to "vibe code" physical devices. Much like Cursor has revolutionized software development through AI assistance, Schematik helps non-experts design electronics, suggests necessary components, and provides links for purchasing parts. The tool aims to lower the barrier to entry for makers while ensuring safety through low-voltage constraints.
Key points:
* Schematik functions as an assistant that guides users from concept to physical assembly.
* The startup recently secured $4.6 million in funding from Lightspeed Venture Partners.
* Anthropic has signaled interest by releasing a Bluetooth API for makers to connect hardware with Claude.
* The tool focuses on low-voltage architecture to prevent dangerous electrical failures during the learning process.
Ghostty, a high-performance GPU-accelerated terminal developed by Mitchell Hashimoto, is now available in the Ubuntu 26.04 LTS repositories via `apt install`. Designed to feel native on both macOS (Swift) and Linux (GTK4/libadwaita), it offers a lightweight, bloat-free alternative to the default Ptyxis.
* **Native Performance:** Seamless integration with system APIs using GTK4/libadwaita (Linux) and Swift (macOS).
* **Feature-Rich:** Supports terminal splits, tabs, ligatures, emoji clustering, and the Kitty graphics protocol.
* **Easy Installation:** Available in the Ubuntu "universe" repository via App Center or `sudo apt install ghostty`.
* **Cross-Platform Optimization:** Provides a consistent workflow for developers moving between macOS and Linux.
Unsloth AI presents performance benchmarks for Qwen3.6-35B-A3B GGUF quantizations, claiming state-of-the-art results in mean KL divergence across most model sizes. The discussion includes community analysis regarding SWE-bench Verified performance, where some users noted unexpected discrepancies between Qwen3.5 and Qwen3.6 quantization results during coding tasks.
Key points:
- Unsloth ranks first in 21 of 22 model sizes for mean KL divergence.
- Community debate over SWE-bench testing methodology and sample sizes.
- Reported performance variations between different quantization levels (Q4, Q5, Q6, Q8).
- Discussion on system prompt adherence and error rates in coding benchmarks.
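The headline metric here, mean KL divergence, measures how far a quantized model's next-token distribution drifts from the full-precision model's, averaged over token positions. A minimal sketch of the computation with stand-in logits (the toy values are illustrative; real evaluations compare actual model outputs):

```python
import math
import random

def softmax(logits):
    # Shift by the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(P || Q) in nats; assumes q[i] > 0 wherever p[i] > 0.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mean_kl(full_logits, quant_logits):
    """Average KL(P_full || P_quant) across token positions."""
    kls = [kl_divergence(softmax(f), softmax(q))
           for f, q in zip(full_logits, quant_logits)]
    return sum(kls) / len(kls)

# Toy stand-in logits: 3 token positions, vocabulary of 5.
random.seed(0)
full = [[random.gauss(0, 1) for _ in range(5)] for _ in range(3)]
# Mild "quantization noise" perturbing the full-precision logits.
quant = [[x + random.gauss(0, 0.05) for x in row] for row in full]

print(mean_kl(full, full))   # identical distributions -> 0.0
print(mean_kl(full, quant))  # small positive drift
```

A lower mean KL means the quantized model's token probabilities track the original more closely, which is why it is used to rank quantization quality across model sizes.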
Drawing on Marshall McLuhan’s philosophy, this piece warns that while we build AI tools, those same tools ultimately reshape our creative processes. Designers face the dual risks of "AI sycophancy"—where algorithms validate existing biases—and an "illusion of authority" that prioritizes polished speed over genuine depth. To avoid losing their edge, creators must treat AI as a partner for iteration rather than a replacement for critical thinking and human intuition.
* **The Feedback Loop:** Tools aren't neutral; they actively mold the user's cognitive habits.
* **Sycophancy Risk:** AI can act as a "digital yes-man," reinforcing errors instead of challenging them.
* **Superficiality Trap:** Rapid, high-quality outputs can mask a lack of true accountability or substance.
* **Intentional Agency:** Maintaining human intuition is essential to prevent being shaped by the technology.
The article explores how artificial intelligence is poised to disrupt traditional organizational structures by collapsing the translation costs between roles. Rather than just speeding up existing workflows, AI enables a fundamental shift from sequential handoffs—like PM to design to engineering—to highly autonomous, small squads and composable capability atoms. As information routing becomes automated, middle management must pivot toward judgment and coaching, while competitive advantage shifts from execution speed to learning speed.
Key points:
- Hierarchy's true function is information routing rather than just authority.
- AI eliminates the translation bottlenecks between product managers, designers, engineers, and QA.
- Organizational models will shift from relay races to simultaneous squad-based work.
- Departments may decompose into independent, composable capability atoms.
- The competitive moat moves from shipping speed to organizational learning speed.
Adam Johnson introduces profiling-explorer, a new tool designed to explore Python profiling data stored in pstats files through an interactive web interface. The tool provides a more convenient and modern alternative to the standard command-line pstats interface, featuring dark mode, column sorting, search filtering by filename or function, and easy navigation between callers and callees.
* Table-based UI for inspecting call counts, internal time, and cumulative time in milliseconds.
* Notes the upcoming low-overhead sampling profiler (Tachyon) in Python 3.15.
The author distinguishes between vibe coding, a reckless approach where developers prompt and accept AI output without review, and agentic engineering, a disciplined professional workflow. While vibe coding is useful for rapid prototyping and MVPs, it lacks the rigor required for scalable or secure systems. Agentic engineering involves orchestrating AI agents under strict human oversight, treating them as fast but unreliable junior developers who require architectural direction and relentless testing.
Key points:
- Distinction between vibe coding (prototyping) and agentic engineering (professional discipline).
- The importance of design docs, rigorous code reviews, and comprehensive test suites in AI workflows.
- How AI-assisted development rewards strong engineering fundamentals rather than replacing them.
- The risk of skill atrophy among junior developers who rely on prompting without understanding underlying principles.