We need to relearn how to use AI when it’s on our bodies

Gemini has arrived on the wrist. It’s now in the latest Samsung Galaxy Watch 8 series, the Pixel Watch, and rolling out to a handful of other smartwatches. This is big. Huge, even. AI is out here disrupting life as we know it. Now, it’s making the leap from phones and laptops onto the body. When the Galaxy Watch 8 launched, several product people told me this was going to make everything so much more convenient. Imagine, they said, having all the power of AI on you. Literally.

I’d love a more convenient, efficient life. Hands-free computing is, forgive the pun, genuinely handy. A competent, helpful AI assistant that you could interact with while on the go isn’t the worst use case for AI I’ve ever heard of.

The problem is I’ve spent the last 20-plus years of my life reaching for my phone. It’s not something I think about. It’s something I just do. I’ve also spent roughly a decade using Google Assistant. I know how to talk to Assistant. I’m acutely aware of what it can and can’t do. When I have to adjust my lights, set timers, or ask a weird one-off question, I know exactly what to say and what will likely happen.

That’s not something I have with AI. Yet.

When it came time to test Gemini on the Galaxy Watch 8, I actually had to remember it was there. Even though it’s better at natural language, I froze when it came time to talk to it. My brain glitched. This isn’t Assistant! But you can still use the Hey Google command. Shit! You paused too long, and now it’s doing something awkward! Ahhhhhhhh!!!!!!

The other conundrum is knowing when and how to use Gemini on the wrist, versus Gemini on the phone, versus Gemini in your browser. At my Samsung demos, I was shown examples like, “Look up the nearest gym locations and text them to my wife,” “Start a run for the number of calories in a pizza slice,” and “Make a playlist for a 10-minute run.” When I probed reps to give other examples, some were game. Others looked at me like deer caught in headlights.

Over-the-shoulder shot of the Gemini screen on a Galaxy Watch 8 with the question “are blueberries high in carotenoids?”

This is a question Assistant could answer. It’s also something you could look up on your phone.

Can’t say I blame them. I ran into several snags when I tried those examples for myself. I tried starting a run for the number of calories in a pizza (a totally weird metric in the first place). Apparently, when you don’t specify the word “slice,” you get a target of 1,080 calories. For me, that’s approximately 10 miles of running. I canceled that immediately. There were only so many playlists I could prompt Gemini to make before I got the itch to make my own again. I tried having Gemini look up coffee shops and send them to various people across different messaging apps. It worked a few times. Other times, it didn’t have access to an app like Slack and instead wrote out a list of 10 coffee shops. Another time, it recommended two shops forty blocks away.

It’s one thing to know Gemini is capable of more complex tasks. It’s another to know how to slot that into your life. Which is why I asked the team behind Gemini on the wrist to give me pointers.

“It’s really easy to use Gemini as your second brain to offload whatever you need to remember,” says Jean Lee, senior product manager on Gemini. “The beauty is it has context. It can access your chat history, but it also knows what you’ve told Gemini about yourself in the past.”

At that moment, Lee spoke into her wrist and asked Gemini what she should pack for the day. After a few seconds, it spat out that there would be scattered thunderstorms and a high of 97 degrees Fahrenheit with a real feel of 104 degrees, so Lee ought to pack pilates gear for a class later that day, a small packable umbrella, and breathable, comfy clothing and shoes, and to avoid leather or suede materials.

The key here, Lee said, was previously telling Gemini that she doesn’t like getting caught in the rain while wearing her suede loafers. When I asked what data it was pulling from, Lee noted that it was “incorporating from saved information” that she’d told Gemini over time. Things like a preference for Sichuan food so that the next time she’s in a new city, Gemini will remember to surface Sichuan restaurants in its recommendations.

“With Assistant, you had to dictate the message you wanted to send,” adds Jaime Williams, group product manager of Wear OS. “Then it gives you the message, asks if you’re ready to send it. It was several steps, and you had to be very prescriptive.”

With Gemini, Williams says, you can just give the details of the message and a tone. Maybe you’re running late, so you say, “Tell my spouse I’m 15 minutes late and send it in a jokey tone.” Instead of having to think of what to say, the bot will write it for you.

Other examples Lee and Williams shared included remembering locker combinations for the gym and setting a reminder to pack an umbrella 10 minutes before it starts raining. While cooking, instead of having to look up the cooking time for al dente penne pasta, you can just ask Gemini to set a timer for it.

“We’ve trained ourselves to be like, ‘Okay, if I want to do this thing, then there’s five steps. I have to first look this up, then send that message, and add it to my account.’ Gemini can do all of that for you,” says Lee. “It can help you generate a video, but it’s also like, how can we help ease the mental burden, the cognitive overload we all have through the day.”

View of a Gemini request on the Galaxy Watch 8 that reads “I don’t like to be caught without an umbrella when it’s raining so remember that if the forecast has rain to set a reminder to…”

One of my multiple attempts to get a reminder to carry an umbrella when it rains. This didn’t work either.

Conceptually, I get it. Cutting out the extraneous middle is the big pitch for most AI products. But to get to that point requires you to first rewire your brain. You have to invest effort into training an AI to know you. That, in turn, demands that you fight your existing programming.

I tried Lee’s example of setting a reminder to pack an umbrella 10 minutes before rain. I’ve since been rained on twice.

Much of that is on me. I started with a clumsy prompt: “Hey Google, is it going to rain today, and if so, remind me to pack an umbrella.” It was a sunny day, so Gemini declined. I then asked it to remember that I hate getting caught without an umbrella during rainstorms. It did. I then asked it to remind me to pack an umbrella the next time it rains. It, again, told me that it was sunny. Frustrated, I pulled out my phone to look up the 10-day forecast myself.

It doesn’t help that generative AI can be so open-ended and unpredictable compared to voice assistants. The latter have fixed phrases and capabilities. Their stricter limitations make them easier to learn, even if they can be fairly frustrating. Conversely, tech CEOs keep telling us that they can’t wait to see what use cases we come up with for generative AI. But because the possibilities are endless, it’s hard to know where to start. You end up defaulting to what you’re familiar with. That leads to unimpressive prompts, such as commands to schedule your smart lights or low-stakes queries about calories in pizza — things that Assistant can already do.

The unpredictability of generative AI also means you can ask for one thing (a reminder to carry an umbrella) and end up with a result you didn’t want (Gemini declining because it’s sunny). With Gemini on smartwatches, I have another layer to factor in: optimization. Because it’s on the body, I have to think about what queries make more sense to ask on the wrist versus my phone. Is it that Gemini on the wrist is only a hands-free backup for my phone? Or are there scenarios and prompts that make more sense on that form factor? I’m honestly struggling to figure it out.

I’m not giving up. Mostly because I’m stubborn, but also because my job incentivizes me to figure out how to best use this tech and discover what its limits are. I have a vested interest in not always reaching for my phone. But the average person? You can’t just give them a new tool and say, “Have at it!” That’s like giving someone a bucket of Legos and telling them to build the Millennium Falcon from memory. Sure, a few prodigies will make it look easy. The rest of us will probably give up because it’s easier. But anyone could build it if you simply gave them a blueprint to work off of.

That’s the thing about new tech. It’s not enough to say, “This will make your life easier.” It has to be intuitive. If it’s not intuitive, you have to spell it out. With Gemini on the wrist, you’re asking people to do something new that requires a whole new mindset and muscle memory. You have to give them a reason why that effort is worth it. Otherwise, everyone will simply go back to what they know: their phones.
