Tags: splunk*


  1. This Splunk blog post announces the general availability of **Search Processing Language version 2 (SPL2)**, the next generation of Splunk’s data search and preparation language. SPL2 aims to improve upon the existing SPL language by addressing user feedback and modernizing data interaction.

    **Key benefits and features of SPL2 include:**

    * **Unified Language:** SPL2 provides a single syntax for both searching data within Splunk and preparing data in-stream (via Edge Processor and Ingest Processor).
    * **SQL-like Syntax:** It supports both SPL-like and SQL-like syntax, making it more accessible to users familiar with database languages.
    * **Enhanced User Experience:** A multi-statement “module” editor offers features like autocomplete, in-product documentation, and a point-and-click interface.
    * **Improved Data Management:** “Data views” allow administrators to define and permission access to data, improving data sharing and reducing index bloat. Custom data types enable data quality validation and conditional dropping of poor data.
    * **Code Reusability:** Developers can create and share custom functions for use across the Splunk ecosystem.
    * **Streamlined Workflows:** The “learn once, use everywhere” model allows for consistent data processing across search and ingest solutions.
    * **App Development Enhancement:** SPL2 module files allow developers to ship apps with curated data, custom functions, and packaged views.
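
    As a rough, hypothetical sketch (not taken from the post; the dataset and field names are invented), an SPL2 search statement and its SQL-like equivalent might look like:

    ```
    /* SPL-like pipeline syntax */
    $errors_by_host = from sample_events
        | where status >= 500
        | stats count() by host;

    /* Roughly equivalent SQL-like syntax */
    $errors_by_host_sql = SELECT count(), host
        FROM sample_events
        WHERE status >= 500
        GROUP BY host;
    ```

    Verify the exact syntax against the SPL2 documentation; the point here is only that both styles live in the same language and module model.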
    2026-01-26 by klotz
  2. Cisco and Splunk have introduced the Cisco Time Series Model, a univariate, zero-shot time series foundation model designed for observability and security metrics, released as an open-weight checkpoint on Hugging Face. Its design reflects four observations about production metrics:

    * **Multiresolution data is common:** The model handles data where fine-grained (e.g., 1-minute) and coarse-grained (e.g., hourly) data coexist, a typical pattern in observability platforms where older data is often aggregated.
    * **Long context windows are needed:** It's built to leverage longer historical data (up to 16384 points) than many existing time series models, improving forecasting accuracy.
    * **Zero-shot forecasting is desired:** The model aims to provide accurate forecasts *without* requiring task-specific fine-tuning, making it readily applicable to a variety of time series datasets.
    * **Quantile forecasting is important:** It predicts not just the mean forecast but also a range of quantiles (0.1 to 0.9), providing a measure of uncertainty.
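
    Quantile forecasts like these are typically scored with the pinball (quantile) loss; the following is a minimal, self-contained sketch of that metric (illustrative only, not the model's own evaluation code):

    ```python
    # Pinball (quantile) loss: the standard score for a forecast of quantile
    # level q. Under-forecasting is penalized by q, over-forecasting by 1 - q,
    # so a good 0.9-quantile forecast sits near the top of the observed range.

    def pinball_loss(y_true, y_pred, q):
        """Average pinball loss for quantile level q (0 < q < 1)."""
        total = 0.0
        for yt, yp in zip(y_true, y_pred):
            diff = yt - yp
            total += max(q * diff, (q - 1) * diff)
        return total / len(y_true)

    # The same under-forecast (predictions 1 unit below the actuals) costs
    # little at q = 0.1 but a lot at q = 0.9:
    actual = [10.0, 12.0, 11.0]
    predicted = [9.0, 11.0, 10.0]
    low_q = pinball_loss(actual, predicted, 0.1)   # 0.1
    high_q = pinball_loss(actual, predicted, 0.9)  # 0.9
    ```

    Averaging this loss across the model's predicted quantile levels (0.1 to 0.9) gives a single number that rewards well-calibrated uncertainty bands, not just an accurate mean.
    
    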
  3. Replays of the .conf25 Global Broadcast sessions, including the Welcome Keynote, Product Keynote, and various sessions covering topics like AI, security, observability, and Splunk platform updates.
  4. This page showcases a demo of kapa.ai, an AI assistant that turns knowledge bases into reliable and production-ready AI solutions. It highlights features like an answer engine, data source integrations, deployment options, analytics, and security features.
    2025-07-24 by klotz
  5. .conf25 offers hundreds of sessions led by industry experts designed to enhance your career. The event is scheduled for September 8-11, 2025 in Boston, Massachusetts.
  6. Information about the social events planned for .conf25, including the Welcome Reception, Happy Hour, and Search Party. Also includes a link to photos from .conf24 social events.
    2025-04-29 by klotz
  7. This Splunk Lantern article explains how to monitor GenAI applications with Splunk Observability Cloud, covering OpenTelemetry setup, NVIDIA GPU metrics, Python instrumentation, and OpenLIT integration. The example applications are built with Python, LLMs (OpenAI's GPT-4o, Anthropic's Claude 3.5 Haiku, Meta's Llama), NVIDIA GPUs, LangChain, and vector databases (Pinecone, Chroma). The article outlines a six-step process:

    1. **Access Splunk Observability Cloud:** Sign up for a free trial if needed.
    2. **Deploy Splunk Distribution of OpenTelemetry Collector:** Use a Helm chart to install the collector in Kubernetes.
    3. **Capture NVIDIA GPU Metrics:** Utilize the NVIDIA GPU Operator and Prometheus receiver in the OpenTelemetry Collector.
    4. **Instrument Python Applications:** Use the Splunk Distribution of OpenTelemetry Python agent for automatic instrumentation and enable Always On Profiling.
    5. **Enhance with OpenLIT:** Install and initialize OpenLIT to capture detailed trace data, including LLM calls and interactions with vector databases (with options to disable PII capture).
    6. **Start Using the Data:** Leverage the collected metrics and traces, including features like Tag Spotlight, to identify and resolve performance issues (example given: OpenAI rate limits).

    The article emphasizes OpenTelemetry's role in GenAI observability and highlights how Splunk Observability Cloud facilitates monitoring these complex applications, providing insights into performance, cost, and potential bottlenecks. It also points to resources for help and further information on specific aspects of the process.
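
    As a hedged sketch of what step 2 can look like in practice, here is a minimal values file for the splunk-otel-collector Helm chart (the realm, token, and cluster name are placeholders, and key names should be verified against the chart version you deploy):

    ```yaml
    # values.yaml sketch for the splunk-otel-collector Helm chart.
    # Placeholders: replace clusterName, realm, and accessToken with your own.
    clusterName: genai-demo
    splunkObservability:
      realm: us0
      accessToken: "REDACTED"
      profilingEnabled: true   # ingest AlwaysOn Profiling data (step 4)
    ```

    Installed with something like `helm install splunk-otel-collector -f values.yaml splunk-otel-collector-chart/splunk-otel-collector`, after which the NVIDIA GPU Prometheus scrape (step 3) is added to the collector configuration.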
  8. This gist contains BNF (Backus-Naur Form) syntax definitions for the search commands used in Splunk.
    2025-02-25 by klotz
  9. This gist contains BNF (Backus-Naur Form) syntax definitions for various data types used in Splunk, such as boolean, field, field-and-value, and more.
    2025-02-25 by klotz
  10. A discussion thread about finding a grammar for the Splunk query language, providing links to BNF grammars for search and datatypes generated from a Splunk instance.
    2025-02-25 by klotz

SemanticScuttle - klotz.me: tagged with "splunk"