This Splunk Lantern blog post highlights new articles on instrumenting LLMs with Splunk, leveraging Kubernetes for Splunk, and using Splunk Asset and Risk Intelligence.
This article discusses the benefits of a disaggregated observability (o11y) stack for modern distributed architectures, addressing the inflexibility, high cost, and lack of autonomy of traditional solutions. It highlights the key layers of a disaggregated stack — agents, collection, storage, and visualization — and suggests building on systems like Apache Pinot for storage and Grafana for visualization.
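To make the storage and visualization layers concrete, here is a minimal TypeScript sketch that queries Apache Pinot's broker SQL endpoint for an aggregation a Grafana panel might chart. The broker URL, table, and column names are assumptions for illustration, not anything prescribed by the article.

```typescript
// Minimal sketch: query Apache Pinot's broker SQL endpoint for an
// error-count aggregation. Broker URL, table, and columns are assumed.
interface PinotResponse {
  resultTable?: {
    dataSchema: { columnNames: string[] };
    rows: unknown[][];
  };
}

async function errorCountsByService(brokerUrl: string): Promise<void> {
  const sql = `
    SELECT service, COUNT(*) AS errors
    FROM request_logs
    WHERE status_code >= 500 AND event_time > ago('PT15M')
    GROUP BY service
    ORDER BY errors DESC
    LIMIT 10`;

  const res = await fetch(`${brokerUrl}/query/sql`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sql }),
  });
  const data = (await res.json()) as PinotResponse;
  console.table(data.resultTable?.rows ?? []);
}

// A Grafana panel would typically issue a similar query through a data source plugin.
errorCountsByService("http://pinot-broker:8099").catch(console.error);
```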
This article shows how to ensure data quality and integrity in data pipelines using open-source observability tools.
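As a flavor of what such checks look like in practice, here is a small, self-contained TypeScript sketch (not tied to any specific tool from the article) that validates a batch of pipeline records and emits simple quality metrics; the record shape, rules, and threshold are illustrative assumptions.

```typescript
// Minimal sketch of in-pipeline data quality checks: completeness,
// validity, and uniqueness over a batch of records. Record shape,
// rules, and the 1% failure threshold are illustrative assumptions.
interface OrderRecord {
  orderId: string;
  amountCents: number;
  createdAt: string; // ISO-8601 timestamp
}

interface QualityReport {
  total: number;
  missingId: number;
  negativeAmount: number;
  badTimestamp: number;
  duplicateId: number;
  passed: boolean;
}

function checkBatch(records: OrderRecord[]): QualityReport {
  const seen = new Set<string>();
  const report: QualityReport = {
    total: records.length, missingId: 0, negativeAmount: 0,
    badTimestamp: 0, duplicateId: 0, passed: true,
  };

  for (const r of records) {
    if (!r.orderId) report.missingId++;
    else if (seen.has(r.orderId)) report.duplicateId++;
    else seen.add(r.orderId);
    if (r.amountCents < 0) report.negativeAmount++;
    if (Number.isNaN(Date.parse(r.createdAt))) report.badTimestamp++;
  }

  const failures = report.missingId + report.negativeAmount +
    report.badTimestamp + report.duplicateId;
  // Fail the batch if more than 1% of records violate a rule.
  report.passed = failures <= report.total * 0.01;
  return report;
}
```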
Data pipelines are essential for connecting data across systems and platforms. This article provides a deep dive into how data pipelines are implemented, their use cases, and how they're evolving with generative AI.
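For readers who want the shape of a pipeline rather than a survey, here is a minimal extract-transform-load sketch in TypeScript; the source URL, field names, and sink are hypothetical placeholders, not anything from the article.

```typescript
// Minimal extract-transform-load sketch. Source URL, field names,
// and the sink are hypothetical placeholders.
interface RawEvent { user: string; ts: string; value: string }
interface CleanEvent { user: string; ts: Date; value: number }

async function extract(url: string): Promise<RawEvent[]> {
  const res = await fetch(url);
  return (await res.json()) as RawEvent[];
}

function transform(events: RawEvent[]): CleanEvent[] {
  return events
    .map((e) => ({ user: e.user.trim(), ts: new Date(e.ts), value: Number(e.value) }))
    .filter((e) => !Number.isNaN(e.value) && !Number.isNaN(e.ts.getTime()));
}

async function load(events: CleanEvent[]): Promise<void> {
  // Stand-in for writing to a warehouse, queue, or data lake.
  console.log(`loaded ${events.length} events`);
}

async function runPipeline(): Promise<void> {
  const raw = await extract("https://example.com/events.json");
  await load(transform(raw));
}

runPipeline().catch(console.error);
```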
OpenTelemetry is not just an observability framework; it's a set of best practices and standards that can be integrated into platform engineering and DevOps workflows.
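To ground that, here is a minimal Node.js/TypeScript sketch of the standards side of OpenTelemetry: the vendor-neutral API for creating spans plus an SDK wired up with auto-instrumentation. The package names are the standard OpenTelemetry JS ones; the service and span names are illustrative.

```typescript
// Minimal OpenTelemetry (JavaScript/TypeScript) setup: spans go to the
// console here, but the same instrumentation works with any
// OTLP-compatible backend. Service and span names are illustrative.
import { NodeSDK } from "@opentelemetry/sdk-node";
import { ConsoleSpanExporter } from "@opentelemetry/sdk-trace-node";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
import { trace } from "@opentelemetry/api";

const sdk = new NodeSDK({
  serviceName: "checkout-service",
  traceExporter: new ConsoleSpanExporter(),
  instrumentations: [getNodeAutoInstrumentations()],
});
sdk.start();

// Manual span created through the vendor-neutral API.
const tracer = trace.getTracer("checkout-service");
tracer.startActiveSpan("charge-card", (span) => {
  span.setAttribute("payment.amount_cents", 1299);
  span.end();
});
```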
Encore is an open-source backend framework and cloud platform for building distributed systems. It automates infrastructure, offers end-to-end type-safety, and provides built-in observability.
- Automated Infrastructure:
  - Local & cloud (AWS/GCP) provisioning
  - Infrastructure semantics within application code
  - No separate config tools needed
- Type-Safe Microservices:
  - Define & call APIs like normal functions (see the sketch after this list)
  - Full type-safety & IDE auto-complete
  - Automatic protocol communication boilerplate
- Faster Development:
  - Hot reload & automatic local infrastructure setup
  - Simplified, speedier development process
- Observability:
  - API explorer, distributed tracing, architecture diagrams
  - Service catalog with automatic API documentation
- Cloud Platform:
  - Seamless workflow with CI/CD, testing, & infrastructure provisioning
  - Preview environments for every PR
- Security & Scalability:
  - Battle-tested AWS/GCP services
  - Best practices for security & scalability
  - Metrics & logging for critical aspects
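As referenced in the list above, here is a minimal sketch of what "define & call APIs like normal functions" looks like with Encore.ts. It follows Encore's published TypeScript API, but the service, path, and response shape are illustrative, so treat it as a sketch rather than canonical usage.

```typescript
// Minimal Encore.ts endpoint sketch: the endpoint is declared in
// application code and Encore provisions the surrounding infrastructure.
// Service name, path, and response shape are illustrative.
import { api } from "encore.dev/api";

interface GreetingResponse {
  message: string;
}

// A type-safe, exposed HTTP endpoint defined as a plain async function.
export const greet = api(
  { expose: true, method: "GET", path: "/greet/:name" },
  async ({ name }: { name: string }): Promise<GreetingResponse> => {
    return { message: `Hello, ${name}!` };
  }
);
```

Calls between services then go through generated, type-checked clients rather than hand-written HTTP plumbing, which is where the "automatic protocol communication boilerplate" in the list comes from.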
Outlier treatment is a necessary step in data analysis. This article, part 3 of a four-part series, walks through the process and offers insights on effective methods and tools for outlier detection.
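As a concrete taste of the kind of method such a series covers, here is a small TypeScript sketch of the common interquartile-range (IQR) rule for flagging outliers; the 1.5×IQR multiplier is the conventional default, not something specific to the article.

```typescript
// Flag outliers with the interquartile-range (IQR) rule:
// values outside [Q1 - k*IQR, Q3 + k*IQR] are treated as outliers.
function quantile(sorted: number[], q: number): number {
  const pos = (sorted.length - 1) * q;
  const lo = Math.floor(pos);
  const hi = Math.ceil(pos);
  return sorted[lo] + (sorted[hi] - sorted[lo]) * (pos - lo);
}

function iqrOutliers(values: number[], k = 1.5): number[] {
  const sorted = [...values].sort((a, b) => a - b);
  const q1 = quantile(sorted, 0.25);
  const q3 = quantile(sorted, 0.75);
  const iqr = q3 - q1;
  return values.filter((v) => v < q1 - k * iqr || v > q3 + k * iqr);
}

console.log(iqrOutliers([10, 12, 11, 13, 12, 95, 11, 10])); // -> [95]
```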
Hydrolix is a streaming data lake platform designed to handle large amounts of immutable log data at a lower cost than traditional solutions. The platform is particularly well-suited for observability data and offers real-time query performance on terabyte-scale data. Hydrolix uses an ANSI-compliant SQL interface, is schema-based and fully indexed, and is designed for high-cardinality data. It is purpose-built for log data and focuses on data that comes in once and never changes. Hydrolix is currently used by companies in industries like media, gaming, ad tech, and telecom security that require long-term retention of data. The company recently announced a $35 million Series B round, and its technology serves as the basis for Akamai's observability product TrafficPeak. The platform is designed for companies dealing with billions of transactions a day and terabytes of data: compared with traditional solutions like Splunk or Datadog, it can cut costs at the same retention or extend retention at the same cost.
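To illustrate the kind of workload described, here is a sketch of an ANSI-SQL, time-bucketed aggregation over high-cardinality log data, wrapped in TypeScript. The table, columns, HTTP endpoint, and auth are hypothetical placeholders, not Hydrolix's actual API surface.

```typescript
// Sketch of a time-bucketed aggregation over immutable log data.
// Table, columns, endpoint, and auth are hypothetical placeholders,
// not Hydrolix's actual API surface.
const sql = `
  SELECT
    date_trunc('minute', timestamp) AS minute,
    status_code,
    COUNT(*)        AS requests,
    AVG(latency_ms) AS avg_latency_ms
  FROM cdn_logs
  WHERE timestamp >= now() - INTERVAL '24' HOUR
  GROUP BY 1, 2
  ORDER BY 1`;

async function runQuery(endpoint: string, token: string): Promise<unknown> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${token}` },
    body: JSON.stringify({ query: sql }),
  });
  return res.json();
}
```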
A digital twin is a virtual replica of a real-world physical product, system, or process, serving as its digital counterpart for purposes such as simulation, integration, testing, monitoring, and maintenance. The term gained prominence at NASA in 2010 as part of an attempt to improve the physical-model simulation of spacecraft. Digital twins exist throughout the entire lifecycle of the physical entity they represent and are the underlying premise for Product Lifecycle Management. In the manufacturing industry, digital twin technology is being extended to the entire manufacturing process, allowing benefits such as virtualization to be extended to domains such as inventory management, machinery crash avoidance, tooling design, troubleshooting, and preventive maintenance. Digital twinning also enables extended reality and spatial computing to be applied not just to the product itself but also to all of the business processes that contribute towards its production.
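As a very small illustration of the core idea (a virtual counterpart kept in sync with telemetry from its physical asset and used for monitoring or maintenance checks), here is a hedged TypeScript sketch; the machine, sensor fields, and threshold are invented for illustration.

```typescript
// Tiny digital-twin sketch: a virtual counterpart that mirrors the latest
// telemetry from a physical machine and exposes a simple maintenance check.
// Machine, sensor fields, and the threshold are invented for illustration.
interface Telemetry {
  timestamp: Date;
  spindleRpm: number;
  temperatureC: number;
}

class MachineTwin {
  private latest?: Telemetry;

  constructor(readonly machineId: string) {}

  // Called whenever a new sensor reading arrives from the physical asset.
  update(reading: Telemetry): void {
    this.latest = reading;
  }

  // A preventive-maintenance rule evaluated against the twin, not the machine.
  needsCooling(maxTempC = 85): boolean {
    return (this.latest?.temperatureC ?? 0) > maxTempC;
  }
}

const twin = new MachineTwin("press-07");
twin.update({ timestamp: new Date(), spindleRpm: 1200, temperatureC: 91 });
console.log(twin.needsCooling()); // -> true
```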
The article discusses the use of digital twins in scientific research, with a focus on NASA's James Webb Space Telescope (JWST). Engineers at Raytheon, the company responsible for JWST's software and flight operations, created a digital twin of the telescope to monitor its complex deployment in space and to help troubleshoot any problems that might arise. The digital twin updates itself daily with 800 million data points and is used to train operators, predict the effects of software updates, and troubleshoot issues. The concept of digital twins was first introduced by Michael Grieves in 2002, and the term was popularized by NASA employee John Vickers in 2010. As technology has advanced, digital twins have become more common in both the defense and scientific industries, with the space industry being a particular area where the two sectors converge. The JWST's digital twin is just one example of how these twins are helping scientists run the world's most complex instruments and reveal more about the world and the universe beyond.