Google Photos’ new AI tools are as complicated and messy as a memory

Image of Pixel 8 Pro lying on a pool table with the rear panel facing up and light reflecting on the camera lenses.

Photo by Vjeran Pavic / The Verge

In its eight years of existence, Google Photos has shifted its official mission just once: from “home for all your photos and videos” to “home for all your memories,” says Google Photos VP Shimrit Ben-Yair.

That tagline switch starts to make a lot more sense when you look at the Pixel 8 and 8 Pro. Inside the new phones, Google Photos will see some of its biggest changes to date, all designed to refine your memories. There’s a lot to be proven about how well the new features work, and there are the usual messy questions to ask about the role of generative AI in photography. But you can start to see very clearly how Google is shifting from saving “your photos” to cataloging your messy, complicated memories.

The Pixel 8 and 8 Pro introduce a lot of AI-forward photo features: Magic Editor uses generative AI to change scenery, remove distractions, and shift people around an image. Audio Magic Eraser will help you separate audio tracks in a video to minimize distracting sounds. Best Take lets you pick the best face for each subject in a photo when you take a series of similar images, letting you merge them into a final perfect picture.

None of these are particularly new photo editing techniques, but as Ben-Yair puts it, they haven’t been accessible to just anyone. “All these features are using AI and ML to do what otherwise would have been very technical and laborious.”

That’s exactly the problem with generative AI, though. Historically, there have been more barriers to convincingly changing out a sky, moving a photo subject, or replacing the face in your photo with another. These were things you could do with expensive software and skill — putting them a few taps away in your photo library raises some questions about what we consider to be just an editing tweak and what’s going too far.

I asked Ben-Yair how she thinks about this line, but her answer was more technical than philosophical. Google is committing to adding metadata to flag images edited with generative AI, so at least people will be able to know when an altered photo gets shared around. You, too, will be reminded that you changed something, if you bother to check.
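Google hasn't published exactly which metadata fields it writes, but the IPTC standard's "digital source type" vocabulary is the obvious candidate, and it lives in the XMP packet embedded in the image file. As a rough, hypothetical sketch (the field and value names below are assumptions, not confirmed Google Photos behavior), a checker could scan a file's bytes for that marker:

```python
# Hypothetical sketch: look for an AI-provenance marker in an image's
# embedded XMP metadata. The marker strings follow the IPTC digital
# source type vocabulary; whether Google Photos writes these exact
# values is an assumption, not something Google has confirmed.

def find_ai_edit_flag(image_bytes: bytes) -> bool:
    """Return True if the raw bytes contain an XMP marker suggesting
    the image was made or composited with a trained algorithmic model."""
    # XMP packets are plain UTF-8 XML embedded in the file, so a simple
    # byte scan is enough for a quick check (a real tool would locate
    # and parse the XMP XML properly).
    markers = [
        b"compositeWithTrainedAlgorithmicMedia",
        b"trainedAlgorithmicMedia",
    ]
    return any(m in image_bytes for m in markers)

# Toy stand-ins for a JPEG with and without an XMP provenance tag.
fake_edited = (b"\xff\xd8...<xmp:DigitalSourceType>"
               b"trainedAlgorithmicMedia</xmp:DigitalSourceType>...\xff\xd9")
fake_original = b"\xff\xd8...plain photo bytes...\xff\xd9"

print(find_ai_edit_flag(fake_edited))    # True
print(find_ai_edit_flag(fake_original))  # False
```

The catch, of course, is that metadata like this is trivially strippable, which is why it reassures the editor more than anyone the edited photo gets shared with.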

GIF showing magic editor removing a cooler from a camping photo and changing the sky.


GIF: Google

Ben-Yair also emphasizes Photos’ role as a personal image library — not a generalized photo editing tool. “This is really about user selection and user choice,” she says. The Photos team approaches these new features by asking, “What tools make sense to people, and then how do we design an interface that puts the user in control of the changes that they are making?”

Personally, I have no problem with the ethics of swapping out an image of my child’s face where he’s blinking for a nearly identical one where he’s not. Much of what Google is introducing this year treads carefully, just tiptoeing up to that line of “acceptable” AI edits and not crossing over. And something like Best Take isn’t a far cry from the techniques already employed in Google’s cameras — like face unblur merging data from two camera lenses to help you get a sharp photo of your squirming toddler.

But even if we imagine that Google’s AI metadata tactic will work or that somehow nobody will ever use Google Photos for evil, there’s still something about these kinds of edits becoming commonplace that feels a little icky. A lot depends on how good it actually is, but if I can edit a sunrise into my photo of the Grand Canyon, doesn’t that feel a little like cheating if I didn’t actually get there at 5AM? It feels unearned, even if it’s not fully evil. 

I think Google is wise to call photos and videos memories because, in an age of easy access to generative AI, that’s an accurate description. Memories are elastic and imperfect. They’re subject to our biases and moods, and they change over time. Generative AI is about to be everywhere, and people will want to use these tools to make their photos look more like their memories. Ben-Yair puts it this way: “This is a time of a lot of changes, and we’re hearing from our users that they would love access to these things. So we want to meet them where their needs are, and we also want to put in the right controls, the right checks and balances.”

Our understanding of “truth” in photography has always been a little weird. Photos aren’t the unimpeachable bearers of absolute truth that we’d like them to be — it’s been possible to manipulate them ever since, well, the beginning of photography. Regardless of the implications for society as a whole, our individual understanding of the “truth” of our own photos will probably have to change, too. Do you want to remember the dull lighting you saw when you got to the Grand Canyon? Or do you want it to exist in your memory — and your photo library — more vibrantly? The answer is as messy as a memory.


