This tutorial explores how to use LLM embeddings as features in time series forecasting models. It covers generating embeddings from time series descriptions, preparing data, and evaluating the performance of models with and without LLM embeddings.
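The pattern can be sketched minimally with NumPy: a static description embedding is concatenated onto lag features before fitting a forecaster. Everything here is illustrative, assuming the tutorial's setup; `embed_description` is a deterministic stub standing in for a real LLM embeddings API call, and least squares stands in for whatever forecasting model the tutorial uses.

```python
import numpy as np

def embed_description(text, dim=8):
    """Stub for an LLM embeddings API call (assumption: real code would
    query a provider's embeddings endpoint). Deterministic for the demo."""
    rng = np.random.default_rng(sum(ord(c) for c in text))
    return rng.normal(size=dim)

def make_features(series, embedding, n_lags=3):
    """Build rows of [lag values | description embedding] for each timestep."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(np.concatenate([series[t - n_lags:t], embedding]))
        y.append(series[t])
    return np.array(X), np.array(y)

# Toy series plus a text description of it.
series = np.sin(np.linspace(0, 8 * np.pi, 200))
emb = embed_description("smooth seasonal demand signal")
X, y = make_features(series, emb)

# Fit a plain least-squares forecaster on the augmented features.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
mse = float(np.mean((X @ coef - y) ** 2))
```

Because the embedding is constant per series, it acts like a learned series-level offset here; with many series, each one's embedding lets a single model condition on which series it is forecasting.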
This paper gives a theoretical analysis of Transformers' limitations in time series forecasting through the lens of In-Context Learning (ICL) theory, showing why even powerful Transformers often fail to outperform simple linear models. Focusing on Linear Self-Attention (LSA) models, the authors prove that LSA cannot achieve lower expected MSE than a classical linear model for in-context forecasting, and that under Chain-of-Thought inference its multi-step predictions collapse to the mean exponentially fast.
This article explores how prompt engineering can be used to improve time-series analysis with Large Language Models (LLMs), covering core strategies, preprocessing, anomaly detection, and feature engineering. It provides practical prompts and examples for various tasks.
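One of the patterns such articles cover, serializing a series into a structured anomaly-detection prompt, can be sketched as below. The `format_prompt` helper and the exact wording are illustrative assumptions, not the article's own prompts.

```python
def format_prompt(values, freq="hourly"):
    """Serialize numeric readings into a structured anomaly-detection prompt.

    Fixed-precision formatting and an explicit ordering statement give the
    LLM an unambiguous view of the series.
    """
    rendered = ", ".join(f"{v:.2f}" for v in values)
    return (
        f"You are a time-series analyst. The following {freq} readings "
        f"are ordered oldest to newest:\n{rendered}\n"
        "List the indices of any anomalous points and briefly justify each."
    )

series = [10.1, 10.3, 9.9, 10.2, 42.0, 10.0]
prompt = format_prompt(series)
```

The resulting string would be sent to a chat or completion endpoint; constraining the output format (indices plus justification) makes the response easier to parse downstream.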
This article demonstrates how to use the attention mechanism in a time series classification framework, specifically for classifying normal sine waves versus 'modified' (flattened) sine waves. It details the data generation, model implementation (using a bidirectional LSTM with attention), and results, achieving high accuracy.
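The attention step in such a classifier reduces to scoring each timestep's hidden state, softmaxing the scores, and pooling. A minimal NumPy sketch of that pooling, assuming `H` stands in for the bidirectional LSTM's output states (the article trains the whole model end to end; the random `H` and scoring vector `w` here are placeholders for learned quantities):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, w):
    """Score each timestep, normalize with softmax, and return the
    attention-weighted sum of hidden states as the sequence summary."""
    scores = H @ w           # (T,) one relevance score per timestep
    alpha = softmax(scores)  # attention weights, nonnegative, sum to 1
    return alpha @ H, alpha  # (D,) context vector, plus the weights

rng = np.random.default_rng(0)
T, D = 50, 16                 # timesteps, hidden size
H = rng.normal(size=(T, D))   # stand-in for LSTM hidden states
w = rng.normal(size=D)        # stand-in for the learned scoring vector

context, alpha = attention_pool(H, w)
# `context` would feed a final dense layer classifying sine vs. flattened sine;
# `alpha` can be plotted to see which timesteps the model attends to.
```

Inspecting `alpha` is what makes this setup interpretable: for the flattened sine waves, the weights should concentrate on the flattened region.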
This paper introduces Toto, a time series forecasting foundation model with 151 million parameters, and BOOM, a large-scale benchmark for observability time series data. Toto uses a decoder-only architecture and is trained on a large corpus of observability, open, and synthetic data. Both Toto and BOOM are open-sourced under the Apache 2.0 License.
Datadog announces the release of Toto, a state-of-the-art open-weights time series foundation model, and BOOM, a new observability benchmark. Toto achieves SOTA performance on observability metrics, and BOOM provides a challenging dataset for evaluating time series models in the observability domain.
Running GenAI models is easy. Scaling them to thousands of users, not so much. This guide details avenues for scaling AI workloads from proofs of concept to production-ready deployments, covering API integration, on-prem deployment considerations, hardware requirements, and tools like vLLM and NVIDIA NIM.
A comprehensive guide to ultrascale machine learning, covering techniques, tools, and best practices.
SHREC is a physics-based unsupervised learning framework that reconstructs unobserved causal drivers from complex time series data. It addresses limitations of contemporary techniques, such as noise susceptibility and high computational cost, by exploiting recurrence structures and topological embeddings. Applied to diverse datasets from biology, physics, and engineering, SHREC proves broadly applicable and reliable, improving the accuracy of causal driver reconstruction.
A discussion of the challenges and promise of deep learning for outlier detection across data modalities, including image and tabular data, with a focus on self-supervised learning techniques.