For over a week now, users have swamped OpenAI’s servers with requests to its latest ChatGPT update, whose image-generation tool has birthed an internet obsession: turning selfies into Studio Ghibli-style illustrations.
Instagram, Reddit, Discord, Facebook, X: everywhere is flooded with AI-generated portraits of couples, houses, vacation memories, and even historical events. But once the fun is had, is it safe to “Ghiblify” your photos? Some analysts are telling you no.
There are serious concerns about ChatGPT’s copyright infringement, data privacy, and AI ethics, but with Ghibli, there’s one main problem: data safety practices.
Sam Altman and his team are rather tight-lipped about how they have trained the chatbot to produce the images so cleanly, and in a matter of minutes. Privacy experts warn that millions of people are unknowingly handing their biometric data to OpenAI, a company that has already been called out for its data practices.
Last week, OpenAI expanded its image-generation capabilities in ChatGPT, allowing users to create images in a variety of artistic styles. What netizens quickly jumped on was the Ghibli-style filter, which transformed ordinary photos and selfies into anime-like illustrations.
OpenAI CEO Sam Altman, Tesla head Elon Musk, Ripple Labs higher-ups, and several other executives have posted their own versions of Ghibli-style images on social media, which arguably lends the trend an air of safety.
Altman claimed that the Ghibli art prompts hit the company’s computing resources hard, admitting in a March 27 post on X that OpenAI’s GPUs were “melting” under the demand.
it's super fun seeing people love images in chatgpt.
— Sam Altman (@sama) March 27, 2025
but our GPUs are melting.
we are going to temporarily introduce some rate limits while we work on making it more efficient. hopefully won't be long!
chatgpt free tier will get 3 generations per day soon.
The Ghibli trend alone reportedly added one million new sign-ups. This explosion of interest is exactly what OpenAI hopes for: more users means more data, which ultimately feeds its AI training. Does that growth come at the expense of personal data? It is hard to say yes outright, but we are not naive enough to say no.
Some users experimented with Ghibli-style recreations of historical moments, such as the assassination of John F. Kennedy and the September 11 terrorist attacks. Even the White House got involved, posting an AI-generated version of a well-known image of a crying woman being arrested by an ICE officer.
https://t.co/PVdINmsHXs pic.twitter.com/Bw5YUCI2xL
— The White House (@WhiteHouse) March 27, 2025
It is a fun trend, and we are not going to be the ones to put out the fire when everyone is looking for a reason to smile. But restyling historical imagery is a slippery slope: it risks opening the floodgates to extremist propaganda and vulgar reworkings of sensitive events.
AI models are trained on vast amounts of data scraped from the Internet, and that almost certainly includes copyrighted works. The biggest worry for artists and creators is that AI will take over their jobs. If a model can mimic a creator’s output after ingesting unauthorized copies of their work, it is all downhill from there.
“Authors and artists are getting increasingly angry with the large-scale theft that is happening,” said Ed Newton-Rex, CEO of Fairly Trained, a nonprofit that certifies AI companies for fairer training-data practices.
Studio Ghibli has never authorized the use of its artistic style for AI-generated content, given co-founder Hayao Miyazaki’s well-documented disdain for AI in art.
In a now-famous 2016 interview, Miyazaki reacted to AI-generated animation with disgust, saying, “Whoever creates this stuff has no idea what pain is whatsoever. I am utterly disgusted… I strongly feel that this is an insult to life itself.”
The Ghibli-style trend is precisely the kind of AI-driven artistic commodification he detested. So why does OpenAI seem so relaxed in its policies around generating images in artistic styles? In effect, it is opening the floodgates for AI recreations of copyrighted works.
Before you get a Ghiblified selfie, you have to upload the personal photos in question to ChatGPT. Have you considered that you could be unknowingly giving OpenAI the right to use both the original and the generated image for future model training? If that is the case, is there an explicit way to opt out? It is doubtful most people thought about any of this.
Rachel Tobac, CEO of SocialProof Security, said that most people may assume their uploaded images disappear after use, but ChatGPT may instead retain and incorporate them into future AI models. “If you want to retain ownership of a photo, Ghiblifying it is not the way to go,” she said.
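Tobac’s warning concerns the face in the photo, but uploads often leak more than pixels: JPEG files typically embed EXIF metadata (capture time, device model, sometimes GPS coordinates), and that metadata travels with the upload. As a minimal, standard-library-only sketch, here is a simplified check for whether a JPEG carries an EXIF block before you share it (a toy parser for illustration, not a full JPEG reader; real files can be messier):

```python
def has_exif(data: bytes) -> bool:
    """Return True if the JPEG bytes contain an EXIF APP1 segment."""
    if not data.startswith(b"\xff\xd8"):          # SOI marker missing: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                       # corrupt or unexpected byte
            break
        marker = data[i + 1]
        if marker == 0xD9:                        # EOI: end of image
            break
        if 0xD0 <= marker <= 0xD8 or marker == 0x01:
            i += 2                                # standalone markers have no length field
            continue
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True                           # APP1 segment carrying EXIF metadata
        i += 2 + length
    return False

# A minimal synthetic JPEG with an empty EXIF block, for illustration only:
sample = b"\xff\xd8" + b"\xff\xe1" + (8).to_bytes(2, "big") + b"Exif\x00\x00" + b"\xff\xd9"
print(has_exif(sample))                # True
print(has_exif(b"\xff\xd8\xff\xd9"))  # False: no metadata segment
```

Spotting the metadata is the easy part; actually stripping it (or re-encoding the image) before uploading is what keeps location and device details out of a third party’s hands.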
On March 29, through an X thread, Luiza Jarovsky, co-founder of aitechprivacy.com, explained that under European data law (specifically Article 6(1)(a) of the GDPR), users who voluntarily upload their images are giving legal consent for OpenAI to process them.
“Thousands of people are now voluntarily uploading their faces and personal photos to ChatGPT. As a result, OpenAI is gaining free and easy access to many thousands of new faces to train its AI models,” she wrote.
OpenAI has yet to make a formal statement addressing privacy concerns but insists that privacy and security are among its top priorities. On Monday, a company spokesperson said that OpenAI minimizes personal information collection and does not seek out private user data to train its models.
The company also claimed that users can control how their data is used through self-service tools, and can delete their content or opt out of model improvements.
Some netizens believe the Ghibli trend is “one of the largest and most covert facial data collection schemes ever.” Another warned that billions of people had “unknowingly logged into their accounts, unintentionally surrendering their facial recognition data to AI-driven systems.”
Exactly, how are people not realising this?
— विक्रमादित्य शिवम चौहान (@chau_vik1947) April 2, 2025
They are just trapped in the name of a trend giving up their facial recognition data to AI without even thinking what harm it may bring upon them and they cry about privacy every other day.
Still, not everyone shares this concern. A cyber engineer on X pushed back against the fears, claiming that AI models do not store image data beyond the immediate transformation process.
“AI doesn’t store image data information as of now, so the pictures we use for Ghibli or any other transformation are SAFE,” they wrote. They recommended clearing cache files as an extra precaution and disputed claims that AI could reverse-engineer a Ghiblified image back to the original for use in training.
AI don’t store image data information as of now so the pictures we use for ghibli or any other transformation is SAFE .
— Juhi Jain (@juhijain199) April 2, 2025
Still clear the cache for safer side.
Also saw many reels saying we can reverse the images so NO AI won’t change ghibli to original image .. #Ghibli pic.twitter.com/fd0k9zVLqR
Others argue that AI image generation is no more invasive than existing cloud storage services. “Funny how people worry about Ghibli misusing our data, as if the cloud storage holding all our pictures is any safer. Lol,” one user commented.
The “Ghiblified” selfie trend is, beneath the surface, a tangle of concerns about copyright infringement, AI ethics, and data privacy. If ChatGPT retains any of the photos and information users share, all it takes is one server breach and some image-reversal software for your personal life to be aired out on the internet.
Is the frenzy of animated AI-generated art worth the cost of your personal data? Maybe yes, but it’s much safer to think twice about what you share with any chatbot.