Memoir is an AI-powered plugin that enriches existing AI companions in the Text Generation Web UI with advanced memory capabilities and emotional intelligence.
An AI memory layer with short- and long-term storage, semantic clustering, and optional memory decay for context-aware applications.
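The decay idea is simple to sketch. Below is a minimal, hypothetical Python illustration of a two-tier store with exponential recency decay; the class and method names are made up for this example (not the project's actual API), and the semantic-clustering/embedding side is omitted entirely:

```python
import math
import time

class MemoryStore:
    """Toy memory layer: a short-term buffer that promotes items to
    long-term storage, with an exponential decay score for retrieval."""

    def __init__(self, short_term_capacity=10, half_life_s=3600.0):
        self.short_term = []              # recent items, FIFO
        self.long_term = []               # (timestamp, text) tuples
        self.capacity = short_term_capacity
        self.half_life_s = half_life_s

    def add(self, text):
        self.short_term.append((time.time(), text))
        if len(self.short_term) > self.capacity:
            # Oldest short-term item gets promoted to long-term storage.
            self.long_term.append(self.short_term.pop(0))

    def decay_weight(self, timestamp):
        # Exponential decay: the weight halves every `half_life_s` seconds.
        age = time.time() - timestamp
        return math.exp(-math.log(2) * age / self.half_life_s)

    def recall(self, top_k=3):
        # Rank long-term memories by decayed recency; a real system would
        # combine this with semantic similarity to the current query.
        ranked = sorted(self.long_term,
                        key=lambda m: self.decay_weight(m[0]),
                        reverse=True)
        return [text for _, text in ranked[:top_k]]
```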
A study found that exposing older adults to various odorants at night via a diffuser improved their memory and increased activity in the uncinate fasciculus.
This article discusses why real-time access matters for Retrieval-Augmented Generation (RAG) and how Redis enables it through its real-time vector database, semantic cache, and LLM memory capabilities, leading to faster and more accurate responses in GenAI applications.
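As a rough illustration of the vector-search piece, here is a sketch using redis-py against Redis Stack (the RediSearch module with vector support). The index name, key prefix, field names, and 4-dimensional toy embeddings are placeholders; in practice the vectors would come from an embedding model:

```python
import numpy as np
import redis
from redis.commands.search.field import TextField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
from redis.commands.search.query import Query

r = redis.Redis(host="localhost", port=6379)

# Create a small vector index (FLAT is fine for toy data; HNSW scales better).
r.ft("docs").create_index(
    fields=[
        TextField("text"),
        VectorField("embedding", "FLAT",
                    {"TYPE": "FLOAT32", "DIM": 4, "DISTANCE_METRIC": "COSINE"}),
    ],
    definition=IndexDefinition(prefix=["doc:"], index_type=IndexType.HASH),
)

# Store a document together with its (toy) embedding as raw float32 bytes.
vec = np.array([0.1, 0.2, 0.3, 0.4], dtype=np.float32)
r.hset("doc:1", mapping={"text": "Redis supports vector search.",
                         "embedding": vec.tobytes()})

# KNN query: retrieve the 3 documents nearest to a query embedding.
q = (Query("*=>[KNN 3 @embedding $vec AS vector_score]")
     .sort_by("vector_score")
     .return_fields("text", "vector_score")
     .dialect(2))
results = r.ft("docs").search(q, query_params={"vec": vec.tobytes()})
for doc in results.docs:
    print(doc.text, doc.vector_score)
```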
A study reveals that the brain stores memories in three parallel copies using different sets of neurons. This could have implications for treating traumatic memories.
Sleep not only consolidates memories but also resets the brain’s memory storage mechanism. This process, governed by specific regions in the hippocampus, allows neurons to prepare for new learning without being overwhelmed, opening potential pathways for enhancing memory and treating neurological disorders.
Psychologists found that 44.7% of recorded earworms matched the original song's pitch perfectly, suggesting a common 'musical superpower'.
A new study reveals the role of the molecule KIBRA in forming long-term memories. Researchers found that KIBRA acts as a “glue,” binding with the enzyme PKMzeta to strengthen and stabilize synapses, crucial for memory retention.
The article discusses the limitations of Large Language Models (LLMs) in planning and self-verification tasks and proposes the LLM-Modulo framework to leverage their strengths more effectively. The framework combines LLMs with external model-based verifiers to generate, evaluate, and iteratively improve plans, ensuring their correctness and efficiency.
"Simply put, we take the stance that LLMs are amazing giant external non-veridical memories that can serve as powerful cognitive orthotics for human or machine agents, if rightly used."
This paper proposes MoRA, a method for parameter-efficient fine-tuning of large language models (LLMs) that employs a square matrix to achieve high-rank updates while keeping the same number of trainable parameters as LoRA. The authors argue that low-rank updating, as implemented in LoRA, may limit an LLM's ability to learn and memorize new knowledge. MoRA outperforms LoRA on memory-intensive tasks and achieves comparable performance on other tasks.
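The parameter-budget argument is concrete: LoRA's update ΔW = BA spends r(d + k) parameters but has rank at most r, while a square matrix of side roughly sqrt(r(d + k)) spends the same budget and can be full rank. A quick back-of-the-envelope check in Python (the dimensions are illustrative, and the non-parameterized compress/decompress operators MoRA uses to map activations into and out of the square matrix are omitted):

```python
import math

d, k, r = 4096, 4096, 8               # hidden dims and LoRA rank (illustrative)

lora_params = r * (d + k)             # B: d x r plus A: r x k
lora_max_rank = r                     # rank(BA) <= r

r_hat = math.isqrt(lora_params)       # side length of MoRA's square matrix
mora_params = r_hat * r_hat           # M: r_hat x r_hat, same budget
mora_max_rank = r_hat                 # a square matrix can be full rank

print(lora_params, lora_max_rank)     # 65536, 8
print(mora_params, mora_max_rank)     # 65536, 256
```

With identical parameter counts, the attainable update rank jumps from 8 to 256 in this example, which is the intuition behind MoRA's advantage on memory-intensive tasks.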