Google's recent Pixel Drop introduces a groundbreaking, albeit unusual, screen automation feature for Gemini. Unlike previous assistants, which were constrained to official app APIs, Gemini uses visual reasoning to interact with third-party applications directly. By reading on-screen elements such as menus and text fields, the AI can perform complex tasks like ordering food or booking rides within a secure sandbox. While this offers significant benefits for multitasking and accessibility, it also raises critical questions about privacy, the fragility of automation when app UIs change, and the potential disruption of the ad-supported economy. For now, this beta feature is limited to high-end devices such as the Pixel 10 and Galaxy S26 series in select regions.
Google has removed the "design for accessibility" section from its "Understand the JavaScript SEO basics" documentation.
Google said the section was removed because the information was "out of date and not as helpful as it used to be."
The old text advised that using JavaScript for page content "may be hard for Google to see," but Google now states this hasn't been true for many years.
While Google Search can handle JavaScript well, it's still important to double-check what Google Search sees using the URL inspection tool in Google Search Console.
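Beyond spot-checking individual pages in the URL Inspection tool, one common JavaScript SEO sanity check is to confirm that key content already appears in the server-delivered HTML rather than being injected later by client-side JavaScript. The sketch below is a minimal, hypothetical illustration of that idea using only the Python standard library; the function name `content_in_initial_html` and the sample markup are assumptions for this example, not part of any Google tooling.

```python
from html.parser import HTMLParser


class _TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> contents."""

    def __init__(self):
        super().__init__()
        self._skip = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.chunks.append(data)


def content_in_initial_html(html: str, phrase: str) -> bool:
    """True if `phrase` appears in the raw HTML's visible text,
    i.e. it does not depend on client-side JavaScript to render."""
    parser = _TextExtractor()
    parser.feed(html)
    text = " ".join(" ".join(parser.chunks).split())
    return phrase in text


# "Shop" is server-rendered; "Daily deals" exists only inside a script,
# so it would require JavaScript execution to become visible.
raw = "<html><body><h1>Shop</h1><script>render('Daily deals')</script></body></html>"
print(content_in_initial_html(raw, "Shop"))         # True
print(content_in_initial_html(raw, "Daily deals"))  # False
```

A check like this only tells you what is present before rendering; since Google does render JavaScript, a `False` result is a prompt to verify the rendered HTML in Search Console's URL Inspection tool, not proof of an indexing problem.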