Matthew Keegan
Sep 4, 2023

Ads for 'AI girlfriends' offering sexual images and company are flooding social media

Social media platforms, including Facebook, Instagram, and TikTok, have been running hundreds of ads for apps that promise AI-generated sexual images and companionship.

In recent months, dozens of tech start-ups have begun running explicit ads on Facebook, Instagram, and TikTok for apps that encourage ‘not safe for work’ (NSFW) experiences. The adverts feature numerous digitally made "girlfriends" with large breasts and skimpy attire, and they all offer "NSFW pics," "custom pinup girls," and chats with "no censoring."
 
Popular children's TV characters like SpongeBob SquarePants, Bart Simpson, and even the Cookie Monster appear in some adverts promoting apps that let users produce "NSFW pics." Others, frequently in anime style, show digitally generated girls who appear to be teenagers or younger.
 
An ad on Instagram for an 'NSFW pics' app uses an image of Cookie Monster. (Obtained by NBC News via Instagram)
 
NBC News found 35 app developers running sexually explicit ads on apps owned by Meta, the parent company of Facebook and Instagram. In total, the developers were running over a thousand adverts, many of which could easily be found and viewed in Meta's public online ad library.
 
The news organisation also found 14 app developers running hundreds more sexually explicit AI ads on TikTok. Some of those ads were identical to the ones on Meta, but not all.
 
The 35 risqué chatbot companies behind the ads aren't well-known or significant tech players. Many claim to be based in countries like China and Belarus. Some only provide their email address and no additional information.
 
The apps can be downloaded for free from the Google Play and Apple App stores and offer in-app purchases and subscriptions. Some are labelled as age-restricted, while others are deemed appropriate for teenagers.
 
Among them, Innovative Connecting, a Singapore-based software company, had roughly 160 separate adverts running on Meta for the AI chatbot app Mimico. In some of the ads for Mimico, the words "Create NSFW Girl" appeared above a picture of a digitally generated girl with large breasts and an innocent-looking face.
 
Many AI chatbot makers are based across Asia Pacific, including this one in China. (Photo: NBC News via TikTok)
 
For the sake of transparency, Meta and TikTok publish ad records in openly searchable archives. Those archives show that the platforms had taken down some, but not all, of the developers' ads before NBC News contacted them. According to TikTok's library, some of the ads received thousands of views and were visible for weeks before being removed.
 
Facebook, Instagram, and TikTok have made efforts in recent years to keep a tight lid on sexualised content, outlawing nudity nearly everywhere, banning sex workers, and even taking action against some artists and educators who openly discuss sexual health and safety.
 
But recently, adverts for scantily dressed, filthy-talking chatbots, which their creators say are powered by artificial intelligence, have managed to slip past those moderation systems.
 
The recent surge of 'AI girlfriend' ads is part of an AI gold rush in which software developers, many of them based abroad, court users who want to engage with customised artificial avatars in sexual or romantic ways. It is also part of a broader push to capitalise on the wave of interest in AI that followed the success of OpenAI's ChatGPT, a product that changed people's perceptions of what AI chatbots could do.
 
The ads typically feature sexualised women, and the platforms appear more willing to allow sex-related ads when the target audience is men. Meta's and TikTok's ad libraries don't always record rejected or removed ads, which makes it hard to tell what the platforms have moderated, but searches for terms related to virtual girlfriends generally produce more results than searches for terms related to virtual boyfriends.
 
Ads for 'AI girlfriends' among the hundreds running on social media platforms (Obtained by NBC News via Instagram)
 
After NBC News contacted them last Wednesday, Meta and TikTok ramped up the removal of sexually explicit AI ads, but they have yet to comment on how the ads bypassed their filters in the first place.
 
In a statement to NBC News, Meta stated that its prohibition on adult content covers both human and AI-generated content.
 
“Our policies prohibit ads containing adult content that is overly suggestive or sexually provocative—whether it’s AI-generated or not,” the company said. “Our policies and enforcement are designed to adapt in this highly adversarial space, and we are actively monitoring any new trends in AI-generated content.”
 
Meta also said it is reviewing its public-facing policies to make sure the standard is clear.
 
TikTok said in a statement that its regulations forbid sexually suggestive advertisements and that it has taken down any examples provided by NBC News.
 
Similar ads can be found in the Apple and Google app stores, according to NBC News, even though both companies say their stores do not allow pornographic apps.
 
AI for good or bad?
 
The ads raise questions about where AI is heading and how it will be used, and about the pressing need for regulation of AI-driven synthetic media to prevent bad actors from using deepfakes and similar tools to deceive or misinform viewers.
 
Last month, Australia's eSafety Commissioner, Julie Inman Grant, said her office had logged its first reports of children using AI image generators to create sexually explicit pictures of their peers in order to bully them.
 
"Industry cannot ignore what's happening now. Our collective safety and wellbeing are at stake if we don't get this right," she said.
 
It comes amid growing calls for the tech industry to step up protections and curb some of the harms of generative AI. 
 
Criminals are already using AI to create child sexual abuse content, making it harder for authorities to locate real children who are being abused and exploited.
 
"AI-generated child sexual abuse material is already filling the queues of our partner hotlines, NGOs, and law enforcement agencies," said Grant.
 
"The inability to distinguish between children who need to be rescued and synthetic versions of this horrific material could complicate child abuse investigations by making it impossible for victim identification experts to distinguish real from fake."
Source:
Campaign Asia
