Dr. Ora Lassila is a Principal Graph Technologist at AWS, working within the Amazon Neptune team with a primary focus on knowledge graphs. Throughout his extensive career, he has held significant roles, including Managing Director at State Street and positions at Nokia Research Center and HERE. A recognized pioneer in his field, he co-authored the original W3C RDF specification and the seminal article on the Semantic Web. His professional expertise covers AI, ontologies, the Semantic Web, RDF, and Knowledge Representation. In addition to his technical contributions, he is an enthusiast of aviation photography and scale modeling, even applying knowledge graph technologies to manage his aviation photography business, So Many Aircraft.
AWS has introduced S3 Files, a new feature designed to provide native NFS file system access to Amazon S3 buckets. It allows compute resources like EC2, EKS, and Lambda to interact with S3 data using standard file system operations, including creating, reading, updating, and deleting files. Unlike earlier third-party tools or the S3 API alone, S3 Files supports advanced features like file locking and in-place edits by leveraging Amazon Elastic File System (EFS) as a high-performance layer. This architecture is particularly beneficial for collaborative workloads, such as machine learning training pipelines and agentic AI workflows, where multiple resources need simultaneous, low-latency access to shared data without requiring data migration.
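Once a bucket is exposed over NFS, ordinary POSIX calls work against it. A minimal sketch of the in-place-edit-with-locking pattern described above, in plain Python; the mount path is hypothetical (any POSIX directory behaves the same, so a temp directory stands in when the environment variable is unset):

```python
import fcntl
import os
import tempfile

# Hypothetical mount point for an S3 Files NFS export; a temp
# directory stands in so the sketch runs anywhere.
MOUNT = os.environ.get("S3_FILES_MOUNT", tempfile.mkdtemp())

path = os.path.join(MOUNT, "shared-notes.txt")

# Create and write a file with standard file-system calls.
with open(path, "w") as f:
    f.write("first draft\n")

# In-place edit under an advisory lock, so concurrent writers
# (e.g., several EC2 instances sharing the mount) don't collide.
with open(path, "r+") as f:
    fcntl.flock(f, fcntl.LOCK_EX)   # take an exclusive lock
    f.seek(0)
    f.write("final draft\n")
    f.truncate()
    fcntl.flock(f, fcntl.LOCK_UN)   # release the lock

with open(path) as f:
    print(f.read())
```

Note that none of this is possible with the S3 API alone, which only supports whole-object PUT/GET rather than seeks, truncation, or locks.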
This repository provides a learning-focused proof of concept for secure multi-account AWS networking using AWS IPAM and Transit Gateway. It demonstrates how to centralize IP address management to prevent conflicts and establish hub-and-spoke connectivity, replacing traditional VPC peering. The setup utilizes cross-account Terraform with least-privilege IAM roles and AWS RAM for resource sharing.
The repository includes detailed documentation, architecture diagrams, and a runbook for deployment, validation, and teardown. It aims to teach users how to effectively implement and manage a scalable and secure network infrastructure in AWS.
Amazon Bedrock AgentCore provides an enterprise-grade infrastructure for deploying and managing AI agents. It's model-agnostic, supporting models from Amazon Bedrock, Anthropic, Google Gemini, and OpenAI, and integrates with frameworks like Strands, LangGraph, and CrewAI. Core services include a runtime, memory (short and long-term), a gateway, identity management, a code interpreter, a browser, observability, an evaluation service, and a policy capability. The article details a customer support agent demo, highlighting both the capabilities and potential issues encountered during setup and execution, like deployment warnings and model behavior with policies.
Amazon outages linked to rapid AI integration were discussed in a recent internal meeting. Glitches in AI-driven algorithms managing infrastructure caused disruptions (e.g., problems viewing product details and Freevee streaming issues). While Amazon is adopting AI aggressively, sources say the pace is creating instability. The company says it remains focused on reliability amid growing AI competition. Amazon declined to comment specifically but affirmed its commitment to customer experience.
An account of how a developer, Alexey Grigorev, accidentally deleted 2.5 years of data from his AI Shipping Labs and DataTalks.Club websites using Claude Code and Terraform. Grigorev intended to migrate his website to AWS, but a missing state file and subsequent actions by Claude Code led to a complete wipe of the production setup, including the database and snapshots. The data was ultimately restored with help from Amazon Business support. The article highlights the importance of backups, careful permissions management, and manual review of potentially destructive actions performed by AI agents.
AWS has released Agent Plugins for AWS, an open-source repository enabling AI coding agents to automate cloud deployment workflows. The initial deploy-on-aws plugin accepts natural language commands to generate complete deployment pipelines with architecture recommendations, cost estimates, and infrastructure-as-code.
Amazon Web Services (AWS) recently made a significant move by laying off approximately 40% of its DevOps staff. The decision was not simple downsizing, but rather a strategic shift toward automation built around a new internal tool called 'Dahlia'. This article explores the reasons behind the layoffs, the capabilities of Dahlia, and its potential impact on the future of DevOps.
The article details Amazon Web Services' (AWS) recent decision to lay off a significant portion (around 40%) of its DevOps workforce, specifically those involved in managing and maintaining its own internal infrastructure. This isn't a sign of AWS abandoning DevOps, but rather a strategic shift *towards* fully embracing a "platform engineering" approach and leveraging automation tools.
* **Shift to Platform Engineering:** AWS is building internal "developer platforms" – self-service tools and standardized components – to empower application development teams to manage their own infrastructure and deployments with less reliance on centralized DevOps teams.
* **Key Tools Driving the Change:** The article highlights three main tools enabling this transition:
* **Pulumi:** An Infrastructure-as-Code (IaC) tool allowing developers to define infrastructure using familiar programming languages (Python, JavaScript, Go, etc.).
* **Crossplane:** An open-source Kubernetes add-on that extends Kubernetes to manage infrastructure across multiple cloud providers.
* **Backstage:** A developer portal created by Spotify, now open-source, that provides a centralized interface for developers to discover, create, and manage software components and infrastructure.
* **Impact of the Layoffs:** The layoffs were concentrated in teams traditionally responsible for manual infrastructure provisioning and maintenance. The remaining DevOps staff are being re-focused on building and maintaining the internal developer platforms.
* **Wider Industry Trend:** This move by AWS reflects a broader trend in the industry towards platform engineering, driven by the need for faster innovation, increased developer productivity, and reduced operational overhead.
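To make the Pulumi bullet concrete: a minimal Pulumi program that a platform team could hand to developers as a self-service "golden path". This is an illustrative sketch, not AWS's internal setup; it assumes the `pulumi` and `pulumi_aws` packages and a configured Pulumi CLI, and the resource name is made up. It only provisions anything when run via `pulumi up`.

```python
import pulumi
import pulumi_aws as aws

# Illustrative self-service resource: a versioned S3 bucket that an
# application team provisions without filing a ticket with a central
# DevOps team.
bucket = aws.s3.Bucket(
    "team-artifacts",  # logical name; Pulumi appends a unique suffix
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
)

# Surface the generated bucket name as a stack output.
pulumi.export("bucket_name", bucket.id)
```

The point of the platform-engineering shift is that this is ordinary Python: developers get loops, functions, and code review for infrastructure, while the platform team curates which building blocks are exposed.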
In essence, AWS is automating away much of the traditional DevOps work, allowing developers to self-serve their infrastructure needs through these platform tools. This is a strategic move to scale its internal development efforts and accelerate innovation.
Amazon S3 Vectors is now generally available with increased scale and production-grade performance capabilities. It offers native support to store and query vector data, potentially reducing costs by up to 90% compared to specialized vector databases.
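The query model behind a vector store is nearest-neighbor search over embeddings. A toy sketch of that computation in plain Python, with no AWS calls; the document IDs and three-dimensional embeddings are made up for illustration (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Tiny in-memory "index": id -> embedding (made-up 3-d vectors).
index = {
    "doc-a": [0.1, 0.9, 0.2],
    "doc-b": [0.8, 0.1, 0.3],
    "doc-c": [0.2, 0.8, 0.1],
}

def query(vector, k=2):
    """Return the ids of the k stored vectors most similar to `vector`."""
    ranked = sorted(
        index,
        key=lambda i: cosine_similarity(vector, index[i]),
        reverse=True,
    )
    return ranked[:k]

print(query([0.15, 0.85, 0.15]))  # -> ['doc-a', 'doc-c']
```

A managed store replaces the brute-force scan with an approximate index over billions of vectors; the cost claim in the article is about paying S3 storage prices for that index instead of running a dedicated vector database.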
SRE.ai, a Y Combinator-backed startup, has raised $7.2 million to develop AI agents that automate complex enterprise DevOps workflows, offering chat-like experiences across multiple platforms.