MIT researchers have developed a method that uses large language models to detect anomalies in complex systems without any additional model training. The approach, called SigLLM, converts time-series data into text-based inputs that the language model can process. Two anomaly detection approaches, Prompter and Detector, were developed and showed promising results in initial tests.
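The core idea of turning a numeric signal into text can be illustrated with a short sketch. The snippet below is not the published SigLLM code; the function name, window size, and rounding scheme are illustrative assumptions showing how a series might be flattened into a comma-separated string and embedded in a Prompter-style prompt.

```python
import numpy as np

def series_to_prompt(values, decimals=2, window=64):
    """Convert a numeric time series into a comma-separated string suitable
    for inclusion in an LLM prompt (Prompter-style usage).

    `decimals` and `window` are illustrative parameters, not part of any
    published SigLLM API.
    """
    window_vals = np.asarray(values[-window:], dtype=float)
    # Shift and scale so every value becomes a short, non-negative integer token.
    shifted = window_vals - window_vals.min()
    scaled = np.round(shifted * 10**decimals).astype(int)
    return ",".join(str(v) for v in scaled)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    signal = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.05 * rng.standard_normal(200)
    signal[150] += 3.0  # inject an obvious spike as a toy anomaly
    prompt = (
        "Below is a sequence of sensor readings. "
        "List the indices of any anomalous values.\n"
        + series_to_prompt(signal)
    )
    print(prompt[:200], "...")
```

In this sketch the scaling step keeps each value compact so the prompt stays within the model's context window; the downstream anomaly judgment would come from whatever language model consumes the prompt.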
This article discusses causal inference, an emerging field in machine learning that moves beyond predicting outcomes to understanding the cause-and-effect relationships in data. The author explains how to detect and fix errors in a directed acyclic graph (DAG) so that it becomes a valid representation of the underlying data.
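One basic validity check on a proposed causal graph is that it must contain no directed cycles. The sketch below is not the author's procedure; it is a minimal example, using networkx and a hypothetical edge list, of how a cycle can be detected and reported so the offending edge can be reversed or removed.

```python
import networkx as nx

def validate_dag(edges):
    """Check whether a proposed causal graph is a valid DAG.

    Returns the graph if it is acyclic; otherwise raises an error that
    names one offending cycle. The edge list used below is purely
    illustrative, not taken from the article.
    """
    graph = nx.DiGraph(edges)
    if nx.is_directed_acyclic_graph(graph):
        return graph
    cycle = nx.find_cycle(graph)
    raise ValueError(f"Graph contains a cycle: {cycle}")

if __name__ == "__main__":
    # Hypothetical graph: the reversed edge ice_cream_sales -> temperature closes a cycle.
    proposed_edges = [
        ("temperature", "ice_cream_sales"),
        ("temperature", "sunburns"),
        ("ice_cream_sales", "temperature"),
    ]
    try:
        validate_dag(proposed_edges)
    except ValueError as err:
        print(err)  # points to the edge that must be flipped or dropped
```

Acyclicity is only a structural check; whether the remaining edges match the data would still require domain knowledge or statistical tests of the implied independencies.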