Sundar Pichai, Google CEO, at Google I/O 2024

It’s that moment you’ve been waiting for all year: Google I/O keynote day! Google kicks off its developer conference each year with a rapid-fire stream of announcements, unveiling much of what it’s been working on. Brian already kicked us off by sharing what we were expecting.

Since you might not have had time to watch Tuesday’s full two-hour presentation, we took that on and delivered quick hits of the biggest news from the keynote as they were announced, all in an easy-to-digest, easy-to-skim list.

Privacy concerns over AI voice call scans

Google showcased a demo of a call scam detection feature during I/O, which it says will be added to a future version of Android. The feature uses AI to scan voice calls as they happen, which is effectively client-side scanning, a technology that sparked such a backlash on iOS that Apple abandoned its plans to adopt it in 2021. As expected, a number of privacy advocates and experts voiced concerns over Google’s use of the technology, warning that it could swiftly expand beyond scam detection and be used in more malicious ways. Read more

Updated security features

On Wednesday, Google announced it is adding new security and privacy protections to Android, including on-device live threat detection to catch malicious apps, new safeguards for screen sharing, and better security against cell site simulators.

The company said it is increasing the on-device capability of its Google Play Protect system to detect fraudulent apps trying to breach sensitive permissions. It also uses AI to detect if apps are trying to interact with other services and apps in an unauthorized manner.

Google said if the system is certain about malicious behavior, it disables the app automatically. Otherwise, it flags the app to Google for review and then alerts users. Read more

And to protect devices in the real world, Google also announced Theft Detection Lock, an AI-powered addition that identifies motion commonly associated with theft, like a swift movement in the opposite direction. Once such motion is detected, the phone screen automatically locks, preventing anyone from using the device without first clearing whatever safeguards you’ve put in place. Read more
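Google hasn’t detailed the model behind Theft Detection Lock, but the general shape of the idea (a sudden acceleration spike followed by sustained motion) can be sketched as a toy heuristic. The thresholds, sample values and function name below are invented for illustration; the real feature uses an on-device AI model:

```python
def should_lock(accel_magnitudes: list[float],
                spike_g: float = 3.0,
                sustained_g: float = 1.5,
                sustained_samples: int = 3) -> bool:
    """Toy heuristic: lock if a sharp spike is followed by sustained motion.

    All thresholds here are invented for illustration; the real
    Theft Detection Lock uses an on-device AI model, not fixed cutoffs.
    """
    for i, g in enumerate(accel_magnitudes):
        if g >= spike_g:
            # Require the motion to keep going after the snatch.
            tail = accel_magnitudes[i + 1 : i + 1 + sustained_samples]
            if len(tail) == sustained_samples and all(t >= sustained_g for t in tail):
                return True
    return False

# A snatch: big spike, then the thief keeps moving away with the phone.
print(should_lock([1.0, 1.1, 4.2, 2.0, 1.8, 1.7]))  # True
```

A single jolt with no follow-through (say, the phone dropping onto a couch) would not trip the heuristic, which mirrors why Google pairs the spike with a motion pattern rather than reacting to any bump.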

Google TV

Image Credits: Google

Google worked Gemini into its Google TV smart TV operating system so it can generate descriptions for movies and TV shows. When a description is missing on the home screen, the AI will fill it in automatically to ensure that viewers never have to wonder what a title is about. It’ll also translate descriptions into the viewer’s native language, making content discoverable to a wider audience. The best part? The AI-generated descriptions are also personalized based on a viewer’s genre and actor preferences. Read more

Private Space feature

Image Credits: Google

Now here’s a fun one. Private Space is a new Android feature that lets users silo a portion of the operating system for sensitive information. It’s a bit like Incognito mode for the mobile operating system, sectioning designated apps into a “container.”

The space is available from the launcher and can be locked behind a second layer of authentication. Apps in Private Space will be hidden from notifications, settings and recents. Users can still access the apps through a system sharesheet and photo picker in the main space, so long as the private space has been unlocked.

Developers can play around with it now, but there is a caveat: a known bug, which Google says it expects to address in the coming days. Read more

Google Maps gets geospatial AR

Google Maps users will soon have a new layer of content on their phones — they will have access to geospatial augmented reality content. The feature will first appear in Singapore and Paris as part of a pilot program launching later this year.

Users will be able to access the AR content by first searching for a location in Google Maps. If the location has AR content and the user is nearby, they can tap the “AR Experience” image and then lift their phone.

If someone is exploring a place remotely, they can see the same AR experience in Street View. After exploring the AR content, users can share the experience through a deep link URL or QR code on social media. Read more

Wear OS 5

Image Credits: Google

Google gave a developer preview of the new version of its smartwatch operating system, Wear OS 5. The latest release focuses on improved battery life and other performance improvements, like more efficient workout tracking. Developers are also getting updated tools for creating watch faces, as well as new versions of Wear OS tiles and Jetpack Compose for building watch apps. Read more

TechCrunch Minute

As we note all over this post, the Google I/O developer conference came with a big dose of AI. See how Anthony Ha summed it up Wednesday. Read more


“Web” search filter

Google introduced a new way to filter for just text-based links. The new “Web” filter appears at the top of the results page and enables users to filter for text links the way they can today filter for images, video, news or shopping.

As Sarah Perez reports, the launch is an admission that sometimes people will want to just surface text-based links to web pages, aka the classic blue links, that today are often of secondary importance as Google either answers the question in its informational Knowledge Panels or, now, through AI experiments. Read more
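For the curious, the filter shows up in the results URL itself. Post-launch reports identified a `udm=14` query parameter as the switch behind the “Web” tab; treating that parameter as an observed assumption rather than a documented API, a minimal sketch looks like this:

```python
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    """Build a Google search URL that requests the text-only "Web" filter.

    The udm=14 parameter is the value observers reported the "Web" tab
    using after launch; treat it as an observation, not a documented API.
    """
    params = {"q": query, "udm": "14"}
    return f"https://www.google.com/search?{urlencode(params)}"

print(web_only_search_url("google i/o 2024"))
```

Some users went as far as setting a URL like this as their browser’s default search engine to get the classic blue links by default.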

Firebase Genkit

Image Credits: TechCrunch

There’s a new addition to the Firebase platform, called Firebase Genkit, that aims to make it easier for developers to build AI-powered applications in JavaScript/TypeScript, with Go support coming soon. It’s an open source framework, using the Apache 2.0 license, that enables developers to quickly build AI into new and existing applications.

The use cases the company highlighted Tuesday cover much of the standard GenAI territory: content generation and summarization, text translation and image generation. Read more

AI ad nauseam

Tuesday’s Google I/O ran for 110 minutes, but Google managed to reference AI a whopping 121 times (by its own count) during the event. CEO Sundar Pichai cited the figure to wrap up the presentation, cheekily stating that the company was doing the “hard work” of counting for us. Again, it was no surprise — we were ready for it. Read more

Generative AI for learning

Google LearnLM
Image Credits: Google

Also today, Google unveiled LearnLM, a new family of generative AI models “fine-tuned” for learning. It’s a collaboration between Google’s DeepMind AI research division and Google Research. LearnLM models are designed to “conversationally” tutor students on a range of subjects, Google says.

Though it is already available on several of Google’s platforms, the company is taking LearnLM through a pilot program in Google Classroom. It is also working with educators to see how LearnLM might simplify and improve the process of lesson planning. LearnLM could help teachers discover new ideas, content and activities, Google says, or find materials tailored to the needs of specific student cohorts. Read more

Quiz master

Image Credits: Google

Speaking of education, new to YouTube are AI-generated quizzes. This new conversational AI tool allows users to figuratively “raise their hand” when watching educational videos. Viewers can ask clarifying questions, get helpful explanations or take a quiz on the subject matter. 

Thanks to the Gemini model’s long-context capabilities, the tool even works on longer educational videos, such as lectures or seminars, which should come as a relief to anyone who has to sit through them. These new features are rolling out to select Android users in the U.S. Read more

Gemma 2 updates

Image Credits: Google

One of the top requests Google heard from developers was for a bigger Gemma model, so Google is adding a new 27-billion-parameter model to Gemma 2. This next generation of Google’s Gemma models will launch in June. The size is optimized by Nvidia to run on next-generation GPUs, and it can run efficiently on a single TPU host and on Vertex AI, Google said. Read more

Google Play

Image Credits: Nasir Kachroo / NurPhoto / Getty Images

Google Play is getting some attention with a new discovery feature for apps, new ways to acquire users, updates to Play Points and other enhancements to developer-facing tools like the Google Play SDK Console and Play Integrity API, among other things.

Of particular interest to developers is something called the Engage SDK, which will introduce a way for app makers to showcase their content to users in a full-screen, immersive experience that’s personalized to the individual user. Google says this isn’t a surface that users can see at this time, however. Read more

Detecting scams during calls

Image Credits: Google

On Tuesday, Google previewed a feature it believes will alert users to potential scams during a call.

The feature, which will be built into a future version of Android, utilizes Gemini Nano, the smallest version of Google’s generative AI offering, which can be run entirely on-device. The system effectively listens for “conversation patterns commonly associated with scams” in real time. 

Google gives the example of someone pretending to be a “bank representative.” Common scammer tactics, like requests for passwords or gift cards, will also trigger the system. These are all well-understood ways of extracting money from victims, but plenty of people around the world are still vulnerable to such scams. Once triggered, the feature pops up a notification that the user may be falling prey to unsavory characters. Read more
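Google hasn’t published how the detection works beyond “conversation patterns commonly associated with scams,” and the real system runs a Gemini Nano model rather than rules. Still, the flagging idea can be illustrated with a toy matcher; every phrase, threshold and function name here is an invented assumption, not Google’s implementation:

```python
import re

# Toy illustration only: the real feature runs a Gemini Nano model
# on-device, not a regex list. These phrases are invented assumptions.
SCAM_PATTERNS = [
    r"\bgift cards?\b",
    r"\b(share|confirm|verify) your password\b",
    r"\bwire (the )?money\b",
    r"\bbank representative\b",
]

def looks_like_scam(transcript: str, threshold: int = 2) -> bool:
    """Flag a call transcript if it matches enough known scam tactics."""
    hits = sum(bool(re.search(p, transcript, re.IGNORECASE))
               for p in SCAM_PATTERNS)
    return hits >= threshold

print(looks_like_scam(
    "Hello, I am a bank representative. Please verify your password."
))  # True: two patterns match
```

A keyword list like this is exactly what a language model improves on: the model can flag a scam phrased in words that appear on no list, which is presumably why Google reached for Gemini Nano here.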

Ask Photos

Image Credits: TechCrunch

Google Photos is getting an AI infusion with the launch of an experimental feature, Ask Photos, powered by Google’s Gemini AI model. The new addition, which rolls out later this summer, will allow users to search across their Google Photos collection using natural language queries that leverage an AI’s understanding of their photos’ content and other metadata.

While users could previously search for specific people, places or things in their photos, the AI upgrade, with its natural language processing, will make finding the right content more intuitive and less of a manual search process.

And the example was cute, too. Who doesn’t love a tiger stuffed animal/Golden Retriever band duo called “Golden Stripes?” Read more

All About Gemini

Image Credits: Sarah Perez

Gemini in Gmail

Gmail users will be able to search, summarize and draft their emails using Google’s Gemini AI technology. It will also be able to take action on emails for more complex tasks, like helping you process an e-commerce return by searching your inbox, finding the receipt and filling out an online form. Read more

Image Credits: TechCrunch

Gemini 1.5 Pro

Another upgrade to the generative AI: Gemini can now analyze longer documents, codebases, videos and audio recordings than before.

In a private preview of a new version of Gemini 1.5 Pro, the company’s current flagship model, it was revealed that the model can take in up to 2 million tokens, double the previous maximum. At that level, the new version of Gemini 1.5 Pro supports the largest input of any commercially available model. Read more
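To put 2 million tokens in perspective, a common rule of thumb (an assumption; actual counts vary by tokenizer and content) is roughly 0.75 English words per token:

```python
def approx_words(tokens: int, words_per_token: float = 0.75) -> int:
    """Rough rule-of-thumb conversion; real token counts vary by tokenizer."""
    return int(tokens * words_per_token)

# The new 2M-token window vs. the previous 1M maximum.
print(approx_words(2_000_000))  # ~1.5 million words
print(approx_words(1_000_000))  # ~750,000 words
```

By that back-of-envelope math, the new window fits roughly 1.5 million words of input, on the order of a dozen long novels in a single prompt.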

Gemini Live

The company previewed a new experience in Gemini called Gemini Live, which lets users have “in-depth” voice chats with Gemini on their smartphones. Users can interrupt Gemini while the chatbot’s speaking to ask clarifying questions, and it’ll adapt to their speech patterns in real time. And Gemini can see and respond to users’ surroundings, either via photos or video captured by their smartphones’ cameras.

At first glance, Live doesn’t seem like a drastic upgrade over existing tech. But Google claims it taps newer techniques from the generative AI field to deliver superior, less error-prone image analysis — and combines these techniques with an enhanced speech engine for more consistent, emotionally expressive and realistic multi-turn dialogue. Read more

Gemini Nano

Now for a tiny announcement. Google is also building Gemini Nano, the smallest of its AI models, directly into the Chrome desktop client, starting with Chrome 126. This, the company says, will enable developers to use the on-device model to power their own AI features. Google plans to use this capability to power features like the existing “help me write” tool from Workspace Labs in Gmail, for example. Read more

Image Credits: Google

Gemini on Android

Google’s Gemini on Android, its AI replacement for Google Assistant, will soon be taking advantage of its ability to deeply integrate with Android’s mobile operating system and Google’s apps. Users will be able to drag and drop AI-generated images directly into their Gmail, Google Messages and other apps. Meanwhile, YouTube users will be able to tap “Ask this video” to find specific information from within that YouTube video, Google says. Read more

Google Maps AI highlights
Image Credits: Google

Gemini on Google Maps

Gemini model capabilities are coming to the Google Maps platform for developers, starting with the Places API. Developers can show generative AI summaries of places and areas in their own apps and websites. The summaries are created from Gemini’s analysis of insights from Google Maps’ community of more than 300 million contributors. Even better: developers will no longer have to write their own custom descriptions of places. Read more
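For developers curious what calling this might look like, here is a sketch of building (not sending) a Place Details request in the Places API (New) style. The endpoint shape and the `generativeSummary` field mask are assumptions based on that API’s conventions, so check the official documentation before relying on them:

```python
def places_summary_request(place_id: str, api_key: str) -> tuple[str, dict]:
    """Build (but don't send) a Place Details request asking for an AI summary.

    The endpoint shape and the `generativeSummary` field mask below are
    assumptions modeled on Places API (New) conventions; consult the
    official Google Maps Platform docs before using them.
    """
    url = f"https://places.googleapis.com/v1/places/{place_id}"
    headers = {
        "X-Goog-Api-Key": api_key,
        # Field masks keep the response (and billing) limited to what you need.
        "X-Goog-FieldMask": "displayName,generativeSummary",
    }
    return url, headers

url, headers = places_summary_request("ChIJExamplePlaceId", "YOUR_API_KEY")
print(url)
```

The field-mask header is the key design point: rather than a new endpoint, the generative summaries would simply be another field a developer opts into on an existing place lookup.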

Tensor Processing Units get a performance boost

Google unveiled its next generation — the sixth, to be exact — of its Tensor Processing Units (TPU) AI chips. Dubbed Trillium, they will launch later this year. If you recall, announcing the next generation of TPUs is something of a tradition at I/O, even as the chips only roll out later in the year. 

These new TPUs will feature a 4.7x performance boost in compute performance per chip when compared to the fifth generation. What’s maybe even more important, though, is that Trillium features the third generation of SparseCore, which Google describes as “a specialized accelerator for processing ultra-large embeddings common in advanced ranking and recommendation workloads.” Read more

AI in search

Google is adding more AI to its search, a response to concerns that the company is losing ground to competitors like ChatGPT and Perplexity. It is rolling out AI-powered overviews to users in the U.S., and it is also looking to use Gemini as an agent for things like trip planning. Read more

Google plans to use generative AI to organize the entire search results page for some search results. That’s in addition to the existing AI Overview feature, which creates a short snippet with aggregate information about a topic you were searching for. The AI Overview feature becomes generally available Tuesday, after a stint in Google’s AI Labs program. Read more

Generative AI upgrades

Google Imagen 3
Image Credits: Google

Google announced Imagen 3, the latest in the tech giant’s Imagen generative AI model family.

Demis Hassabis, CEO of DeepMind, Google’s AI research division, said that Imagen 3 more accurately understands the text prompts that it translates into images versus its predecessor, Imagen 2, and is more “creative and detailed” in its generations. In addition, the model produces fewer “distracting artifacts” and errors, he said.

“This is [also] our best model yet for rendering text, which has been a challenge for image generation models,” Hassabis added. Read more

Project IDX

Project IDX, the company’s next-gen, AI-centric browser-based development environment, is now in open beta. This update brings an integration of the Google Maps Platform into the IDE, making it easier to add geolocation features to apps, as well as integrations with Chrome DevTools and Lighthouse to help debug applications. Soon, Google will also enable deploying apps to Cloud Run, Google Cloud’s serverless platform for running front-end and back-end services. Read more


Veo

Google’s gunning for OpenAI’s Sora with Veo, an AI model that can create 1080p video clips around a minute long given a text prompt. Veo can capture different visual and cinematic styles, including shots of landscapes and time lapses, and make edits and adjustments to already-generated footage.

It also builds on Google’s preliminary commercial work in video generation, previewed in April, which tapped the company’s Imagen 2 family of image-generating models to create looping video clips. Read more

Circle to Search

person holding phone using Google Circle to Search
Image Credits: Google

The AI-powered Circle to Search feature, which allows Android users to get instant answers using gestures like circling, will now be able to solve more complex problems, including physics and math word problems. It’s designed to make it more natural to engage with Google Search from anywhere on the phone by taking some action — like circling, highlighting, scribbling or tapping. Oh, and it’s also better at helping kids with their homework directly from supported Android phones and tablets. Read more

Pixel 8a

Pixel 8-Call Screen Update
Image Credits: Google

Google couldn’t wait until I/O to show off the latest addition to the Pixel line and announced the new Pixel 8a last week. The handset starts at $499 and ships Tuesday. The updates, too, are what we’ve come to expect from these refreshes. At the top of the list is the addition of the Tensor G3 chip. Read more

Pixel Tablet

Image Credits: Brian Heater

Google’s Pixel Tablet is now available on its own. If you recall, Brian reviewed the Pixel Tablet around this time last year, and all he talked about was the base. Interestingly enough, the tablet is now sold without it. Read more

We’ll be updating this post throughout the day …


Read more about Google I/O 2024 on TechCrunch
