Google’s AI photo tool Nano Banana, built on the Gemini 2.5 Flash Image model, has gone viral with saree edits, but it also raises privacy and safety concerns about AI image generation.
The new AI-powered photo-editing tool “Nano Banana,” built on Google’s Gemini 2.5 Flash Image model, has gone viral on social media. The tool transforms ordinary selfies into 3D figurine-style portraits with glossy, plastic-looking skin, playful oversized eyes, and cartoonish proportions.
At first, users were drawn to the quirky figurine look, but the trend soon evolved into the now-famous vintage saree AI edit. The filter turns portraits, mostly of women, into glamorous retro-style saree looks, with backgrounds reminiscent of classic Bollywood movie posters. Millions of people are trying the trend, and Instagram is flooded with chiffon sarees, flowing drapes, and golden-hour lighting.
But as with most AI-driven trends, concerns about privacy, consent, and data safety are growing.
Google’s newest AI-powered editing tool is called Nano Banana, though its official name is Gemini 2.5 Flash Image. According to Medium, by mid-September users had already created or edited well over 500 million images in the Gemini app, with hundreds of millions more generated on other platforms.
The tool is simple to use: upload a photo, add a prompt, and watch your picture transform.
The scary side of cute
An Instagram user named Jhalakbhawani recently described an unsettling experience with the saree trend. She said that when she uploaded her picture to Gemini, the generated image showed a mole on her left hand, a detail of her body that was not visible in the photo she uploaded.
“How did Gemini discover that I had a mark on that part of my body?” she wrote. “It’s very scary and creepy.” Her post sparked a lengthy debate in the comments, with some users raising safety concerns and others dismissing it as a coincidence or an attempt to get attention.
How safe is Nano Banana?
Google says that all images created or edited with Gemini carry SynthID, an invisible digital watermark, along with metadata tags identifying them as AI-generated. According to Google’s AI Studio, these identifiers help creators and platforms determine the origin of content.
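The visible half of that claim, the metadata tags, can be checked without any special tooling. The sketch below is a rough heuristic, not a SynthID detector (SynthID lives in the pixels and needs Google’s own tools): it simply scans a file’s raw bytes for the IPTC digital-source-type value used to label AI-generated media, and assumes the metadata is stored uncompressed and hasn’t been stripped by re-encoding. A missing marker proves nothing.

```python
def has_ai_metadata(image_bytes: bytes) -> bool:
    """Rough heuristic: look for the IPTC digital-source-type value
    that labels AI-generated media in embedded XMP/IPTC metadata.
    Only works if the metadata survives re-encoding; absence of the
    marker does NOT mean the image is authentic."""
    return b"trainedAlgorithmicMedia" in image_bytes


# Usage sketch (hypothetical file name):
# with open("gemini_output.jpg", "rb") as f:
#     print(has_ai_metadata(f.read()))
```

Screenshots and social-media re-uploads routinely discard this metadata, which is one reason experts quoted below consider labeling alone insufficient.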
Google, OpenAI, and Elon Musk’s xAI have also said that uploaded images are not stored indefinitely. Privacy advocates, however, still urge users to be cautious.
Is watermarking enough to protect against AI misuse?
Experts and internet users alike have urged caution ever since AI-generated images became widespread. The tools needed to read SynthID are not yet publicly available, so most people cannot verify a watermark themselves. Many have also pointed out that watermarks are easy to forge, ignore, or remove.
Hany Farid, a professor at UC Berkeley’s School of Information, remarked in a Wired article, “No one thinks watermarking alone will be enough.” He and others argue that watermarking must be combined with other security measures to protect against deepfakes. Soheil Feizi, a professor of computer science at the University of Maryland, said in the same article, “We don’t have any reliable watermarking at this point.”
How can you use AI image tools safely?
Since AI image generation took off, experts have offered several safety suggestions:
- Avoid uploading private pictures that reveal your face or other identifiable information.
- Before sharing, remove metadata like location tags.
- Review the app’s permissions and revoke camera or gallery access if it isn’t needed.
- Limit exposure by posting low-resolution copies instead of high-quality originals.
- Read privacy policies carefully to find out how your data might be used again or kept.
- Purpose-built tools like Glaze and Nightshade can also add small amounts of “noise” to images, making it harder for AI systems to scrape them for training.
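The metadata tip above can be sketched in code. The following is a minimal pure-Python sketch that drops EXIF (APP1) segments, where GPS coordinates and device details are stored, from a JPEG before you upload it; in practice a mature library such as Pillow or a tool like exiftool is more robust.

```python
import struct


def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Remove EXIF (APP1) segments from a JPEG byte stream.

    A minimal sketch: walks the JPEG marker segments and drops any
    APP1 segment whose payload starts with b"Exif", which is where
    cameras store GPS coordinates and other identifying metadata.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG (missing SOI marker)")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            # Unexpected byte: copy the rest verbatim and stop parsing.
            out += jpeg_bytes[i:]
            break
        marker = jpeg_bytes[i + 1]
        if marker in (0xDA, 0xD9):
            # SOS (image data follows) or EOI: copy everything from here.
            out += jpeg_bytes[i:]
            break
        # All other segments carry a big-endian 2-byte length field
        # that includes the length bytes themselves.
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        segment = jpeg_bytes[i:i + 2 + length]
        is_exif = marker == 0xE1 and segment[4:8] == b"Exif"
        if not is_exif:
            out += segment
        i += 2 + length
    return bytes(out)
```

Note this only removes the EXIF block; other metadata (XMP, IPTC) lives in separate segments and would need the same treatment.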
In the end
Nano Banana has made AI portrait editing fun, but caution is still warranted. SynthID and other invisible watermarks, while not flawless, may help hold people accountable.