>"TL;DR: We unify over 23 methods in contrastive learning, dimensionality reduction, spectral clustering, and supervised learning with a single equation."
>"As the field of representation learning grows, there has been a proliferation of different loss functions to solve different classes of problems. We introduce a single information-theoretic equation that generalizes a large collection of modern loss functions in machine learning. In particular, we introduce a framework that shows that several broad classes of machine learning methods are precisely minimizing an integrated KL divergence between two conditional distributions: the supervisory and learned representations. This viewpoint exposes a hidden information geometry underlying clustering, spectral methods, dimensionality reduction, contrastive learning, and supervised learning. This framework enables the development of new loss functions by combining successful techniques from across the literature. We not only present a wide array of proofs, connecting over 23 different approaches, but we also leverage these theoretical results to create state-of-the-art unsupervised image classifiers that achieve a +8% improvement over the prior state-of-the-art on unsupervised classification on ImageNet-1K. We also demonstrate that I-Con can be used to derive principled debiasing methods which improve contrastive representation learners."
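The "single equation" the abstract refers to can be sketched roughly as follows (a reading of the abstract, not the paper's exact notation; the symbols $p$, $q_\phi$, and the index $i$ are illustrative):

```latex
% Supervisory conditional distribution p(. | i): neighbors of data point i
% as defined by the task (labels, graph, augmentations, ...).
% Learned conditional distribution q_phi(. | i): neighbors induced by the
% learned representation with parameters phi.
% The unifying objective is the KL divergence between the two,
% integrated (or averaged) over data points:
\mathcal{L}(\phi) = \int_i D_{\mathrm{KL}}\!\left( p(\cdot \mid i) \,\middle\|\, q_\phi(\cdot \mid i) \right) \, di
```

Different choices of $p$ and $q_\phi$ would then recover the various clustering, spectral, dimensionality-reduction, contrastive, and supervised losses the paper claims to unify.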
This paper explores 'TeleAbsence,' a concept extending telepresence to address emotional distance from lost loved ones through poetic encounters with their digital and physical traces, inspired by the Portuguese concept of 'Saudade.' It outlines five design principles (presence of absence, illusory communication, materiality of memory, traces of reflection, and remote time) and explores applications using mediums like poetry, phone, piano, and pen.
A reference manual for the extensible, customizable, self-documenting real-time display editor. This manual corresponds to EMACS version 162.
New genetic research suggests that humans first developed language around 135,000 years ago, with widespread social use emerging around 100,000 years ago. This study, drawing on data from 15 genetic studies, indicates that language likely began as a cognitive system before becoming crucial for social communication.
MIT's Tech Square has played a significant role in the evolution of computing, hosting key figures and research from time-shared computing to the World Wide Web.
A new MIT study shows that both humans and animals continue to explore different approaches to a task even after learning the optimal strategy, due to potential benefits of discovering new, better alternatives or adapting to changes in the environment.
A study by MIT suggests that humans and animals have a built-in tendency to continuously tweak their methods, driven by the potential for discovering superior strategies and adapting to unforeseen changes.
The article from Earth.com discusses a study revealing that both humans and animals have an inherent tendency to experiment and explore, even after mastering a task. Conducted by researchers at MIT, the study suggests that this behavior serves two main purposes: adapting to potential changes in task rules and discovering potentially better solutions. The study involved humans and marmosets performing a task that required them to react when an image disappeared. Despite learning optimal strategies, participants continued to alter their responses based on past experiences, indicating an exploratory approach to improve their internal model of the environment. This behavior has implications for understanding learning processes and could provide insights into autism spectrum disorders, as marmosets are increasingly used in related research. The full study was published in the journal Current Biology.
Quotes:
> First, he says, simply because a task's rules seem set one moment doesn't mean they'll stay that way in this uncertain world, so altering behavior from the optimal condition every so often could help reveal necessary adjustments.
>
>Second, and of equal importance, continuous exploration could also offer a chance to discover something superior to our current best.
>
>"If the goal is to maximize reward, you should never deviate once you have found the perfect solution, yet you keep exploring. Why? It's like food. We all like certain foods, but we still keep trying different foods because you never know, there might be something you could discover," noted the researchers.
Dan Weinreb's thesis details the development of ZWEI, a real-time display-oriented editor for the Lisp Machine. It emphasizes ZWEI's design, implementation using Lisp, and integration with the Lisp environment. Key aspects include the use of buffer pointers (bps), intervals, and Lisp macros, as well as the impact of the Lisp Machine's architecture on the editor's functionality.
Researchers discovered long-lost computer code and used it to resurrect the early chatbot ELIZA from MIT. Named after Eliza Doolittle from 'Pygmalion,' ELIZA was developed in the 1960s by MIT professor Joseph Weizenbaum. It was designed to emulate a psychotherapist in conversation and used a unique programming language called MAD-SLIP. Rediscovered in 2021, the original code was brought back to life after 60 years, demonstrating the chatbot's functionality and highlighting the historical significance of early artificial intelligence.
The ELIZA chatbot, created in the 1960s by Joseph Weizenbaum at MIT, has been painstakingly reconstructed from archived records and run for the first time in over half a century. This effort marks a significant step in preserving one of the earliest examples of artificial intelligence. Despite its rudimentary nature compared to modern AI, ELIZA's resurrection highlights its historical importance.