[The abridged stories below from Fast Company and Mike Elgan’s Machine Society newsletter describe the latest impressive but dangerous advances in the ability of artificial intelligence to quickly and easily manufacture images that mimic photos of reality and thereby evoke presence. As Elgan puts it, “it’s amazing and awful at the same time.” See the original versions of both stories for several more examples, and see the Fast Company story for a 6:40 video demonstration (also available on YouTube). –Matthew]

[Image: Credit: NASA/Thomas Smith]
Why Google’s new Nano Banana means you can never trust a photo you see online again
I faked the moon landing with it, and that’s just the beginning.
By Thomas Smith
September 5, 2025
Despite billions of dollars of AI investment, Google’s Gemini has always struggled with image generation. The company’s Gemini 2.5 Flash model has long felt like a side note compared with far better generators from the likes of OpenAI, Midjourney, and Ideogram.
That all changed last week with the release of Google’s new Nano Banana image AI. The wonkily named new system is live for most Gemini users, and its capabilities are insane.
To be clear, Nano Banana still sucks at generating new AI images.
But it excels at something far more powerful, and potentially sinister—editing existing images to add elements that were never there, in a way that’s so seamless and convincing that even experts like me can’t detect the changes.
That makes Nano Banana (and its inevitable copycats) both invaluable creative tools and an existential threat to the trustworthiness of photos—both new and historical.
In short, with tools like this in the world, you can never trust a photo you see online again.
Come fly with me
As soon as Google released Nano Banana, I started putting it through its paces. Lots of examples online—mine included—focus on cutesy and fun uses of Nano Banana’s powerful image-editing capabilities.
In my early testing, I placed my dog, Lance, into a Parisian street scene filled with piles of bananas and showed how I would look wearing a Tilley Airflo hat. (Answer: very good.)
Immediately, though, I saw the system’s potential for generating misinformation. To demonstrate this on a basic level, I tried editing my standard professional headshot to place myself into a variety of scenes around the world.
[snip]
200’s a crowd
These personal examples are fun. I’m sure I could post the Maui beach photo on social media and immediately expect a flurry of comments from friends asking how I enjoyed my trip.
But I was after something bigger. I wanted to see how Nano Banana would do at producing misinformation with potential for real-life impact.
During last year’s presidential election here in America, accusations of AI fakery flew between the two campaigns. In an especially infamous example, now-President Donald Trump accused Kamala Harris’s campaign of using AI to fake the size of a crowd at a campaign rally.
All reputable accounts of the event support the fact that photos of the Harris rally were real. But I wondered if Nano Banana could create a fake visual of a much smaller crowd, using the real rally photo as input.
[See the result in the original version of the story]
The edited version looks extremely realistic, in part because it keeps specific details from the actual photo, like the people in the foreground holding Harris-Walz signs and phones.
But the fake image gives the appearance that only around 200 people attended the event and were densely concentrated in a small space far from the plane, just as Trump’s campaign claimed.
If Nano Banana had existed at the time of the controversy, I could easily see an AI-doctored photo like this circulating on social media, as “proof” that the original crowd was smaller than Harris claimed.
Before, creating a carefully altered version of a real image with tools like Photoshop would have taken a skilled editor days—too long for the result to have much chance of making it into the news cycle and altering narratives.
Now, with powerful AI editors, a bad actor wishing to spread misinformation could convincingly alter photos in seconds, with no budget or editing skills needed.
Fly me to the moon
Having tested an example from the present day, I decided to turn my attention to a historical event that has yielded countless conspiracy theories: the 1969 moon landing.
Conspiracists often claim that the moon landing was staged in a studio. Again, there’s no actual evidence to support this. But I wondered if tools like Nano Banana could fake some.
To find out, I handed Nano Banana a real NASA photo of astronaut Buzz Aldrin on the moon.
I then asked it to pretend the photo had been faked, and to show it being created in a period-appropriate photo studio.
The resulting image [above and in the original version of the story] is impressive in its imagined detail. A group of men (it was NASA in the 1960s—of course they’re all men!) in period-accurate clothing stand around a soundstage with a fake sky backdrop, fake lunar regolith on the floor, and a prop moon lander.
In the center of the regolith stands an actor in a space suit, his stance perfectly matching Aldrin’s slight forward lean in the actual photo. Various flats and other theatrical equipment are unceremoniously stacked to the sides of the room.
As a real-life professional photographer, I can vouch for the fact that the technical details in Nano Banana’s image are spot-on. A giant key light above the astronaut actor stands in for the bright, atmosphere-free lighting of the lunar surface, while various lighting instruments cast shadows perfectly matching the lunar lander’s shadow in the real image.
A photographer crouches on the floor, capturing the imagined astronaut actor from an angle that would indeed match the angle in the real-life photograph. Even the unique lighting on the slightly crumpled American flag—with a small circular shadow in the middle of the flag—matches the real image.
In short, if you were going to fake the moon landing, Nano Banana’s imagined soundstage would be a pretty reasonable photographic setup to use.
If you posted this AI photo on social media with a caption like “REVEALED! Deep in NASA’s archive, we found a photo that PROVES the moon landing was staged. The Federal Government doesn’t want you to see this COVER UP,” I’m certain that a critical mass of people would believe it.
But why stop there? After using Nano Banana to fake the moon landing, I figured I’d go even further back in history. I gave the system the Wright Brothers’ iconic 1903 photo of their first flight at Kitty Hawk and asked it to imagine that this photo, too, had been staged.
Sure enough, Nano Banana added a period-accurate wheeled stand to the plane.
Presumably, the plane could have been photographed on this wheeled stand, which could then be masked out in the darkroom to yield the iconic image we’ve all seen reprinted in textbooks for the last century.
Believe nothing
In many ways, Nano Banana is nothing new. People have been doctoring photos for almost as long as they’ve been taking them.
An iconic photo of Abraham Lincoln from 1860 is actually a composite of Lincoln’s head and the politician John Calhoun’s much more swole body, and other examples of historical photographic manipulation abound.
Still, the ease and speed with which Nano Banana can alter photos is new. Before, creating a convincing fake took skill and time. Now, it takes a cleverly written prompt and a few seconds.
To its credit, Google is well aware of these risks and is taking important steps to defend against them.
Each image created by Nano Banana carries a visible (and easy to remove) watermark in the lower-right corner, as well as a harder-to-remove SynthID digital watermark invisibly embedded directly into the image’s pixels.
This digital watermark travels with the image and can be read with special software. If a fake Nano Banana image started making the rounds online, Google could presumably scan for the embedded SynthID and quickly confirm that it was a fake. It could likely even trace the image’s provenance to the Gemini user who created it.
Google scientists have told me that SynthID can survive common tactics people use to obscure the origin of an image. Cropping a photo, or even taking a screenshot of it, won’t remove the embedded SynthID.
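To make the idea concrete, here is a toy sketch of a watermark hidden invisibly in an image’s pixels. To be clear, this is not SynthID, whose design is not public; a naive least-significant-bit mark like this one is wiped out by screenshots and recompression, which is exactly the kind of tampering SynthID is engineered to survive.

```python
# Toy pixel-level watermark: hide a repeating 8-bit tag in the
# least-significant bit of the blue channel. NOT SynthID -- just an
# illustration of "invisibly embedded directly into the image's pixels."
import numpy as np
from PIL import Image

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # arbitrary tag

def embed(img: Image.Image) -> Image.Image:
    px = np.array(img.convert("RGB"))
    # Overwrite each blue pixel's lowest bit with the tiled tag.
    px[..., 2] = (px[..., 2] & 0xFE) | np.resize(MARK, px[..., 2].shape)
    return Image.fromarray(px)

def detect(img: Image.Image) -> bool:
    blue = np.array(img.convert("RGB"))[..., 2]
    return bool(np.all((blue & 1) == np.resize(MARK, blue.shape)))

marked = embed(Image.open("photo.jpg"))   # placeholder input file
marked.save("marked.png")                 # lossless: the mark survives
print(detect(Image.open("marked.png")))   # True
print(detect(Image.open("photo.jpg")))    # almost certainly False
```

Re-saving marked.png as a JPEG would scramble those low-order bits and destroy this toy mark; surviving that, plus cropping and screenshots, is what makes SynthID’s approach much harder to pull off.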
Google also has a robust and nuanced set of policies governing the use of Nano Banana. Creating fake images with the intent to deceive people would likely get a user banned, while creating them for artistic or research purposes, as I’ve done for this article, is generally allowed.
Still, once a groundbreaking new AI technology rolls out from one provider, others quickly copy it. Not all image generation companies will be as careful about provenance and security as Google.
The (rhinestone-studded, occasionally surfing) cat is out of the bag; now that tools like Nano Banana exist, we need to assume that every image we see online could have been created with one. Nano Banana and its ilk are so good that even photographic experts like me won’t be able to reliably spot their fakes.
As users, we therefore need to be consistently skeptical of visuals. Instead of trusting our eyes as we browse the Internet, our only recourse is to turn to reputation, provenance, and good old-fashioned media literacy to protect ourselves from fakes.
[snip to end]
—
[From Machine Society]
AI image editing has crossed a line
Thanks to Google’s Nano Banana, just about anyone can create just about anything.
By Mike Elgan
September 2, 2025
AI image tools keep improving, but Google’s latest iteration has crossed a threshold.
The new model is Gemini 2.5 Flash Image. Google shipped the tool on August 25. During development it was code-named Nano Banana, and most users still call it that. (Now there’s evidence that Google itself is calling it Nano Banana. I’ll do the same.)
Nano Banana generates, blends, and edits images with prompt-level precision, fast response, and built-in markers for AI detection. It’s available to developers in preview through the Gemini API, Google AI Studio, and Vertex AI, priced at $30 per 1 million output tokens; each image counts as 1,290 output tokens, which works out to about $0.039 per image.
You can try it for free here.
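For developers, a minimal sketch of calling the model through the Gemini API with Google’s google-genai Python SDK looks like this; the preview model name and response handling follow Google’s published docs at the time of writing and may change, and the API key and prompt are placeholders.

```python
# Minimal sketch: generating an image with Nano Banana via the Gemini API.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",  # preview name; may change
    contents="A photorealistic pile of bananas on a Paris street at dusk",
)

# Responses can interleave text and image parts; save any image bytes.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("bananas.png", "wb") as f:
            f.write(part.inline_data.data)
    elif part.text is not None:
        print(part.text)
```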
A huge leap in this model is coherence and consistency. You can hand it an existing image, like a photograph, and tell it to change one detail; it will often return the exact same picture with just that detail changed.
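As a sketch under the same SDK assumptions as the example above, that kind of single-detail edit is just the source image passed alongside a natural-language instruction (the filename and prompt here are illustrative):

```python
# Sketch: editing one detail of an existing photo. The SDK accepts
# PIL images directly in `contents` alongside the text instruction.
from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")

photo = Image.open("headshot.jpg")
response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=[photo, "Swap the background for a Maui beach at sunset; "
                     "keep the subject and lighting unchanged"],
)

for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("edited.png", "wb") as f:
            f.write(part.inline_data.data)
```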
[snip]
With Nano Banana, people can:
- do multi-image compositing to merge references into a single scene;
- maintain strict character consistency and visual style;
- perform conversational editing with precise, local, natural-language edits (remove or add objects or people, blur or swap backgrounds, fix stains, colorize black-and-white photographs, tweak poses);
- follow design templates for rapid brand or product variations; and
- achieve robust, production-grade multi-image fusion and iterative, multi-turn edits that previously required complex manual workflows.
[snip]
All images created or edited with Gemini 2.5 Flash Image include an invisible SynthID digital watermark to detect AI-generated or AI-edited content. But it hardly matters. Scammers can take a screenshot or modify the results through compression and decompression until the SynthID watermark becomes undetectable. Google still deserves credit for using this system.
Because Nano Banana is in the Gemini API, other companies are likely to build it into their offerings. Based on public announcements, the confirmed Nano Banana integrations are OpenRouter, fal.ai, Adobe Firefly and Adobe Express, Poe (Quora), Freepik, Figma, and Pollo AI, along with WPP Open and Leonardo.ai.
Nano Banana ushers in a new world where anyone can instantly create anything. Illustrators and marketers will go nuts over the consistency between iterations and the control they now have over them.
All the usual tearing of hair and gnashing of teeth over job losses, dumbing down, art theft, and the genericization of visual imagery applies to this advancement. As expected, it’s amazing and awful at the same time.