Google I/O 2024 is the company’s last shot to dominate mobile AI

Google I/O 2024 is just a few weeks away, and artificial intelligence is once again set to be the main focus of the annual developer conference. We expect Google to unveil a new budget-friendly Pixel 8a, improvements to Wear OS, and much more in Mountain View, California on May 14. The company announces new AI features all the time, so why does I/O 2024 carry extra significance? The mobile AI race is heating up, but so far everyone has been on the same team. Competitors, such as Samsung and MediaTek, are actually working with Google to provide mobile AI features on their devices and platforms.



That’s going to change in a big way this June when Apple hosts its yearly Worldwide Developers Conference with a heavy focus on AI. A not-so-subtle teaser all but confirms we’re going to see AI announcements at WWDC 2024, and we could even see related previews at Apple’s May 7 iPad event. Google has one last chance to outline a clear and compelling strategy for mobile AI at I/O 2024, cleaning up what has been a pretty scattered approach so far.




Most of Google’s mobile AI features run in the cloud

Only two Pixel 8 Pro features use Gemini Nano on-device

Google was the first to release an “AI smartphone” last year in the Pixel 8 Pro, and, to a lesser degree, in the Pixel 8. There’s no shortage of AI-based features available on the Pixel 8 series, but these new offerings come with two major caveats. For starters, some Pixel features that are now marketed as using AI appear very similar to the same ones that used machine learning in the past. Pixel cameras have relied on ML and computational photography for years, so does rebranding these nearly identical features as AI-powered really amount to anything new?


There are some ways generative AI has genuinely improved things, like Magic Editor for photos and Audio Magic Eraser for videos. Still, these new features are better viewed as a natural evolution of previously available tools — such as the Magic Eraser — than as AI-driven revolutions.


The bigger problem with Google’s current mobile AI plan is that nearly all of its best features need to run in the cloud. In fact, there are only two AI-based tools that can run on-device on the Pixel 8 Pro using the Gemini Nano model: Summarize in Recorder and Smart Reply in Gboard. That’s it. Everything else that matters, from Circle to Search to Video Boost, leverages cloud processing. Not all cloud-based features are bad, to be clear. They’re just not all that unique or innovative, as there are now more than a handful of companies offering similar AI features in the cloud.


It’s easy to ship cloud-based AI features

Getting useful tools to run on-device is still the real goal

A Samsung smartphone sitting on a mixing bowl, showing a recipe for an omelette in Google Gemini

If there’s anything we’ve learned from the dismal debuts of the Humane AI Pin and the Rabbit R1, it’s that cloud-based AI features are quite easy to ship. Both devices include very low-end processors from Qualcomm and MediaTek, respectively. They also use the Android Open Source Project as a foundation and feature lightweight operating systems that can interface with AI models running on cloud servers. If these two awful first-generation products can serve up off-device AI features, it’s not a surprise that the top Android phones can.


Of course, there are very practical benefits to AI features that use on-device processing. Since the data never leaves your device, you get more privacy. For that reason, it’s easier to give AI tools permission to access your information for personalized assistance without worrying about where your data is going. Computation is generally faster, too, because requests and responses don’t need to travel back and forth between phones and servers. Despite Google’s vast portfolio of mobile AI features, it won’t have really won until it can deliver most or all of them via on-device processing.

Apple isn’t waiting much longer to go all-in on AI

We will definitely see AI features at WWDC 2024 — and maybe sooner

The iPhone 15 Pro Max next to the Google Pixel 8 Pro on a kitchen counter, both displaying the Android Police website

The sense of urgency for Google derives from the reality that Apple is finally entering the mobile AI race. Apple has sat on the sidelines throughout the AI boom thus far, but that will change starting at WWDC 2024. No one is exactly sure what the company has up its sleeve, just that AI features will hit iOS, iPadOS, and perhaps macOS later this year. Though the state of some Apple services would suggest Google has nothing to worry about — I’m looking at you, Siri — it would be a mistake to underestimate Apple. Plus, Apple has stealthily shipped a handful of features powered by AI and ML under the hood.


Apple is in a unique position to dominate mobile AI early on due to its extensive experience with Neural Engines, which is what the company calls the Neural Processing Units (NPUs) in its systems-on-a-chip. Neural Engines and NPUs allow AI-based features to run using on-device processing, and they’ve been present in iPhones for a while. To be exact, the first iPhones to feature a Neural Engine — those powered by the A11 Bionic chip — were released in 2017, and every iPhone since has included one.

This is a massive advantage for Apple, as it could deploy on-device AI features on every iPhone that supports iOS 18, dating back generations. On top of that, Apple’s A-series chips consistently outperform Google’s Tensor chips in benchmarks, often by a wide margin.



Your move, Google

It’s time to see what Google’s real mobile AI strategy will be

Gemini in Google Messages explaining Android Police.

So far, Google has implemented a scattered approach to mobile AI. The company’s strategy has been to basically throw AI everywhere it can and see which places users find it helpful. We’ve seen AI in Google apps and services like Pixel Camera, Photos, Messages, Gemini, all throughout Workspace, and many more.

With in-house Gemini models and Tensor processors, Google has everything it needs to become the decisive leader in mobile AI, but we still need to see more. A clear plan and more on-device AI features would help Google fend off Apple’s push into the mobile AI market. I/O 2024 could be Google’s last chance to make its case while the field is still uncontested.


