WebMCP is a new technology that allows AI agents to interact with web pages more directly. It works by turning web pages into MCP (Model Context Protocol) servers via a Chrome extension. This enables agents to understand and manipulate web content in a structured way, potentially improving efficiency and user experience.
The technology, backed by Google and Microsoft, is designed to work alongside human users, allowing them to ask agents questions about the page they are viewing. WebMCP uses a Declarative API for standard actions and an Imperative API for more complex tasks. Early experiments demonstrate the ability to query web pages and receive structured data back.
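To make the Imperative API idea concrete, here is a hedged sketch of a page registering a "tool" an agent could call. The global name (`navigator.modelContext`), the `registerTool` method, and the tool shape are all assumptions for illustration; the actual WebMCP surface is still in flux.

```typescript
// Hypothetical WebMCP-style tool registration. All API names here are
// assumptions, not the finalized spec.
interface ToolDescriptor {
  name: string;
  description: string;
  inputSchema: object;
  execute: (args: any) => Promise<unknown>;
}

// A tool exposing structured page data so an agent can query it
// instead of scraping the DOM.
const searchOrders: ToolDescriptor = {
  name: "search_orders",
  description: "Find the user's orders matching a status",
  inputSchema: {
    type: "object",
    properties: { status: { type: "string" } },
    required: ["status"],
  },
  async execute({ status }: { status: string }) {
    // Illustrative in-page data; a real page would read its app state.
    const orders = [
      { id: 1, status: "shipped" },
      { id: 2, status: "pending" },
    ];
    return orders.filter((o) => o.status === status);
  },
};

// Register only if the (hypothetical) API is actually present.
const mc = (globalThis as any).navigator?.modelContext;
if (mc?.registerTool) mc.registerTool(searchOrders);
```

An agent calling this tool would get structured JSON back (the "query web pages and receive structured data" pattern from the experiments above), rather than having to parse rendered HTML.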
AI safety and alignment research has predominantly focused on methods for safeguarding individual AI systems, resting on the assumption that a monolithic Artificial General Intelligence (AGI) will eventually emerge. The alternative hypothesis, in which general capability first manifests through coordination among groups of sub-AGI agents with complementary skills and affordances, has received far less attention. Here we argue that this patchwork AGI hypothesis deserves serious consideration and should inform the development of corresponding safeguards and mitigations.
LLM Council works together to answer your hardest questions. A local web app that sends each query to multiple LLMs via OpenRouter, has them review and rank each other's responses, and then has a Chairman LLM synthesize the final answer.
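The three-stage flow can be sketched as below. This is not the app's actual code: the model list, the scoring scheme, and the `ask` stub (which stands in for a real OpenRouter API call) are all illustrative assumptions.

```typescript
type Model = string;

// Stand-in for a real OpenRouter chat-completions call (assumption:
// the real app sends an HTTP request per model here).
async function ask(model: Model, prompt: string): Promise<string> {
  return `[${model}] answer to: ${prompt}`;
}

// Each council member scores every other member's answer (1-10).
// Stubbed with a deterministic score; a real judge is another LLM call.
async function rank(
  judge: Model,
  answers: Map<Model, string>,
): Promise<Map<Model, number>> {
  const scores = new Map<Model, number>();
  for (const [author, answer] of answers) {
    if (author === judge) continue;
    scores.set(author, (answer.length % 10) + 1);
  }
  return scores;
}

async function council(prompt: string, members: Model[], chairman: Model) {
  // Stage 1: every member answers the query independently.
  const answers = new Map<Model, string>();
  for (const m of members) answers.set(m, await ask(m, prompt));

  // Stage 2: cross-review — sum the scores each answer receives.
  const totals = new Map<Model, number>();
  for (const judge of members) {
    for (const [author, score] of await rank(judge, answers)) {
      totals.set(author, (totals.get(author) ?? 0) + score);
    }
  }

  // Stage 3: the Chairman sees the ranked answers and writes
  // the final response.
  const ranked = [...totals.entries()].sort((a, b) => b[1] - a[1]);
  const context = ranked
    .map(([m, s]) => `${m} (score ${s}): ${answers.get(m)}`)
    .join("\n");
  return ask(chairman, `Synthesize a final answer from:\n${context}`);
}
```

The design point is that the Chairman never answers from scratch; it only sees candidate answers already ordered by peer review.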
Google will host a separate event, 'The Android Show: I/O Edition,' on May 13th to discuss Android updates, a week before Google I/O. This suggests Google I/O will focus more on Gemini and other AI efforts. Android will still be present at I/O, but it's no longer the primary focus.
Google’s John Mueller downplayed the usefulness of LLMs.txt, comparing it to the keywords meta tag: AI bots aren’t currently checking for the file, and it opens the door to cloaking.
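For readers unfamiliar with the file: the llms.txt proposal describes a markdown file served at a site's root that points AI crawlers to canonical, LLM-friendly content. A minimal illustrative sketch, with hypothetical titles and URLs:

```markdown
# Example Project

> One-paragraph summary of what the site covers.

## Documentation

- [Getting started](https://example.com/docs/start.md): setup guide
- [API reference](https://example.com/docs/api.md): endpoints and parameters
```

Mueller's comparison to the keywords meta tag is about trust: like that tag, the file is self-declared, so a site could serve AI bots content that differs from what human visitors see.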
AlexNet, a groundbreaking neural network developed in 2012 by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, has been released in source code form by the Computer History Museum in collaboration with Google. This model significantly advanced the field of AI by demonstrating a massive leap in image recognition capabilities.
In an interview with TechCrunch, Signal CEO Meredith Whittaker criticizes the media's obsession with AI-driven deepfakes, the encroaching surveillance state, and the concentration of power in the five main social media platforms. She also discusses the company's recent war of words with Elon Musk, Telegram's Pavel Durov, and OpenAI's leadership.
Key concept: Setting mental models can help users understand how to interact with products that adapt over time. This chapter covers:
Identifying existing mental models
Onboarding in stages
Planning for co-learning
Accounting for user expectations of human-like interaction
Key concept: To build effective mental models of AI-powered products, consider what you want people to know about your product before their first use, how to explain its features, and when it will need feedback from them to improve.