Using digital twins to optimize data center operations and eliminate wasted IT infrastructure can yield significant cost savings and improve sustainability.
MIT researchers have developed a method that uses large language models to detect anomalies in time-series data from complex systems without any task-specific training or fine-tuning. The approach, called SigLLM, converts time-series data into text-based inputs that the language model can process. Two anomaly detection approaches, Prompter and Detector, were developed and showed promising results in initial tests.
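As a loose illustration of the time-series-to-text idea (not the actual SigLLM pipeline, which handles scaling, quantization, and windowing more carefully), the sketch below shows a Prompter-style query; `query_llm` is a hypothetical stand-in for whatever LLM client is actually used:

```python
# Illustrative sketch only: serialize a numeric series as text and ask an LLM
# to flag anomalous positions (Prompter-style). Formatting choices are invented.

def series_to_text(values, decimals=0):
    """Round the readings and join them into a compact text sequence."""
    if decimals:
        return ",".join(str(round(v, decimals)) for v in values)
    return ",".join(str(int(round(v))) for v in values)

def prompt_for_anomalies(values, query_llm):
    """Ask the LLM which zero-based positions in the series look anomalous."""
    prompt = (
        "The following is a sequence of sensor readings:\n"
        f"{series_to_text(values)}\n"
        "List the zero-based positions of any anomalous readings "
        "as a comma-separated list, or reply 'none'."
    )
    reply = query_llm(prompt)  # placeholder for an actual LLM client call
    if reply.strip().lower() == "none":
        return []
    return [int(tok) for tok in reply.split(",") if tok.strip().isdigit()]
```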
This paper describes a computational cognitive model of instrument operations at the Linac Coherent Light Source (LCLS), a leading scientific user facility.
- The model simulates aspects of human cognition at multiple scales, ranging from seconds to hours, and among agents playing multiple roles.
- The model can predict impacts stemming from proposed changes to operational interfaces and workflows, and its code is open source.
- Example results demonstrate the model's potential in guiding modifications to improve operational efficiency and scientific output.
Conclusions:
1. The model's primary focus is on the decision of what to measure, when, and for how long, made by the experiment manager in consultation with the team (a toy sketch of this decision loop follows the list).
2. The model represents a rough approximation of the LCLS setting but produces sensible results that provide insights into human-in-the-loop instrument operations.
3. The model can help optimize scientific productivity at LCLS by enhancing aspects of the human-machine interface and cognitive factors.
4. Future work includes extending the model to capture more detailed measurements of individual and team behavior, inter- and intra-team communications, and learning at multiple scales.
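To make that decision loop concrete, here is a toy sketch of a manager agent choosing what to measure and for how long against a fixed budget of beam time; the sample names, priorities, dwell options, and scoring heuristic are all invented for illustration and are not taken from the paper:

```python
import math

# Toy sketch of a "what to measure, when, and for how long" decision loop.
# Sample names, priorities, dwell options, and the scoring heuristic are invented.

samples = [
    {"name": "sample_A", "priority": 1.0, "timescale": 200.0, "measured": 0.0},
    {"name": "sample_B", "priority": 0.6, "timescale": 80.0, "measured": 0.0},
]

def marginal_gain(sample, dwell):
    """Expected extra information from `dwell` more seconds, with diminishing returns."""
    before = 1 - math.exp(-sample["measured"] / sample["timescale"])
    after = 1 - math.exp(-(sample["measured"] + dwell) / sample["timescale"])
    return sample["priority"] * (after - before)

def choose_next(samples, remaining, dwell_options=(60, 300, 900)):
    """Pick the (sample, dwell) pair with the best marginal gain per second of beam time."""
    options = [
        (marginal_gain(s, d) / d, s, d)
        for s in samples
        for d in dwell_options
        if d <= remaining
    ]
    return max(options, key=lambda opt: opt[0], default=None)

remaining = 3600.0  # one hour of beam time, in seconds
while (choice := choose_next(samples, remaining)) and choice[0] > 1e-6:
    _, sample, dwell = choice
    print(f"measure {sample['name']} for {dwell} s ({remaining:.0f} s of beam time left)")
    sample["measured"] += dwell
    remaining -= dwell
```

The point of the sketch is only the shape of the loop: repeatedly trading expected gain against remaining beam time, which is the decision the cognitive model simulates in far more detail.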
A digital twin is a virtual replica of a real-world physical product, system, or process, serving as its digital counterpart for purposes such as simulation, integration, testing, monitoring, and maintenance. The term was popularized by NASA in 2010 in an effort to improve the physical-model simulation of spacecraft. Digital twins exist throughout the entire lifecycle of the physical entity they represent and are an underlying premise for Product Lifecycle Management. In the manufacturing industry, digital twin technology is being extended to the entire manufacturing process, bringing benefits such as virtualization to domains like inventory management, machinery crash avoidance, tooling design, troubleshooting, and preventive maintenance. Digital twinning also enables extended reality and spatial computing to be applied not just to the product itself but to all of the business processes that contribute to its production.
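As a purely illustrative sketch of the general idea (invented here, not drawn from any specific product), a digital twin can be thought of as a model object that continuously ingests telemetry from its physical counterpart and can then be queried or simulated in its place:

```python
from dataclasses import dataclass, field

# Invented, minimal illustration of the digital-twin idea: a model object that
# mirrors the latest telemetry from a physical asset and can answer what-if
# questions without touching the real system.

@dataclass
class PumpTwin:
    rpm: float = 0.0
    bearing_temp_c: float = 20.0
    history: list = field(default_factory=list)

    def ingest(self, telemetry: dict) -> None:
        """Synchronize the twin with a new telemetry sample from the physical pump."""
        self.rpm = telemetry.get("rpm", self.rpm)
        self.bearing_temp_c = telemetry.get("bearing_temp_c", self.bearing_temp_c)
        self.history.append({"rpm": self.rpm, "bearing_temp_c": self.bearing_temp_c})

    def predict_temp_after(self, rpm_setpoint: float) -> float:
        """What-if query: crude linear estimate of bearing temperature at a new setpoint."""
        return self.bearing_temp_c + 0.01 * (rpm_setpoint - self.rpm)

twin = PumpTwin()
twin.ingest({"rpm": 1500.0, "bearing_temp_c": 62.0})
print(twin.predict_temp_after(1800.0))  # inspect the twin instead of the real pump
```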
The article discusses the use of digital twins in scientific research, with a focus on NASA's James Webb Space Telescope (JWST). Engineers at Raytheon, the company responsible for JWST's software and flight operations, created a digital twin of the telescope to monitor its complex deployment in space and to help troubleshoot any problems that might arise. The digital twin updates itself daily with 800 million data points and is used to train operators, predict the effects of software updates, and troubleshoot issues. The concept of digital twins was first introduced by Michael Grieves in 2002, and the term was popularized by NASA employee John Vickers in 2010. As technology has advanced, digital twins have become more common in both the defense and scientific industries, with the space industry being a particular area where the two sectors converge. The JWST's digital twin is just one example of how these twins are helping scientists run the world's most complex instruments and revealing more about the world and the universe beyond.
The paper proposes the "law of increasing functional information," a new law of nature that could help explain the evolution of complex systems across multiple scales in the universe, from atoms and molecules to stars and brains.
These systems share three attributes: they form from numerous components, their processes generate numerous different configurations, and those configurations are preferentially selected on the basis of one or more functions.
The law suggests that functional information of a system will increase over time when subjected to selection for function(s). The authors argue this law could help predict the behavior of these systems and provide a unified framework for understanding their evolution.
They suggest it could be a missing piece in our understanding of the universe.
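For reference, functional information in the Hazen–Szostak sense, which this proposal presumably builds on, is defined as follows (a standard definition, not a formula quoted from the paper):

```latex
% Functional information (Hazen--Szostak): F(E_x) is the fraction of all
% possible configurations of the system that achieve a degree of function
% of at least E_x; the functional information is then
\[
  I(E_x) \;=\; -\log_2\!\bigl[F(E_x)\bigr].
\]
% The proposed law states that I(E_x) tends to increase over time in systems
% subjected to persistent selection for that function.
```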
This work examines the relationship between predictability and reconstructability and shows how the two can vary in opposite directions in complex systems. The analysis is based on information theory, is carried out for various dynamics on random graphs, including continuous deterministic systems, and provides analytical calculations of the uncertainty coefficients for many different systems.
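The uncertainty coefficient in its standard information-theoretic form is the mutual information between two variables normalized by the entropy of the variable being predicted; the sketch below illustrates that standard definition (not the paper's specific analytical results):

```python
import numpy as np

# Standard (Theil's) uncertainty coefficient U(X|Y) = I(X;Y) / H(X):
# the fraction of uncertainty about X removed by knowing Y.
# This illustrates the general definition, not the paper's own calculations.

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def uncertainty_coefficient(joint):
    """joint[i, j] = P(X = i, Y = j); returns U(X|Y) in [0, 1]."""
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    mutual_info = entropy(px) + entropy(py) - entropy(joint.ravel())
    return mutual_info / entropy(px)

# Example: Y is a noisy copy of a fair binary X.
joint = np.array([[0.45, 0.05],
                  [0.05, 0.45]])
print(uncertainty_coefficient(joint))  # ~0.53: Y removes about half of X's uncertainty
```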