AI Apps Are Undressing Women Without Consent And It’s A Problem
Introduction to AI Nudification Apps
The rise of AI “nudification” tools makes it shockingly easy for anyone to create a fake naked image of you—or any of your family, friends or colleagues—using nothing more than a photo and one of many readily available AI apps. The existence of tools that let users create non-consensual sexualized images might seem like an inevitable consequence of the development of AI image generation. But with 15 million downloads since 2022, and deepfaked nude content increasingly used to bully victims and expose them to danger, it’s not a problem that society can or should ignore.
What Are Nudification Apps And What Are The Dangers?
Nudification apps use AI to create naked or sexualized images of people from the sort of everyday, fully-clothed photos that anyone might upload to Facebook, Instagram or LinkedIn. While men are occasionally targeted, research suggests that 99 percent of non-consensual, sexualized deepfakes feature women and girls. Overwhelmingly, the technology is used as a form of abuse to bully, coerce or extort victims, and media coverage suggests this is increasingly having a real impact on women’s lives.
While faked nude images can be humiliating and potentially career-damaging for anyone, in some parts of the world they could leave women at risk of criminal prosecution or even serious violence. Equally shocking is the growing number of fake images of minors being created, which may or may not be derived from photos of real children. The Internet Watch Foundation reported a 400 percent rise in the number of URLs hosting AI-generated child sexual abuse content in the first six months of 2025. Experts consider this material particularly dangerous even when no real children are involved, because it can normalize abusive imagery, fuel demand, and complicate law enforcement investigations.
Unfortunately, media reports suggest that criminals have a clear financial incentive to get involved, with some making millions of dollars from selling fake content. So, given the simplicity and scale with which these images can be created, and the devastating consequences they can have on lives, what’s being done to stop it?
How Are Service Providers And Legislators Reacting?
Efforts to tackle the issue through regulation are underway in many jurisdictions, but so far progress has been uneven. In the US, the Take It Down Act requires online services, including social media platforms, to remove non-consensual deepfakes when asked to do so. And some states, including California and Minnesota, have passed laws making it illegal to distribute sexually explicit deepfakes.
In the UK, there are proposals to take matters further by imposing penalties for making, not simply distributing, non-consensual deepfakes, as well as an outright ban on nudification apps themselves. However, it isn’t clear how the tools would be defined and differentiated from AI used for legitimate creative purposes. China’s generative AI measures contain several provisions aimed at mitigating the harm of non-consensual deepfakes. Among these are requirements that tools should have built-in safeguards to detect and block illegal use, and that AI content should be watermarked in a way that allows its origin to be traced.
One frustration for those campaigning for a solution is that authorities haven’t always treated AI-generated image abuse as seriously as photographic image abuse, due to a perception that it “isn’t real”. In Australia, this prompted the eSafety Commissioner to call on schools to ensure all incidents are reported to police as sex crimes against children.
Of course, online service providers have a hugely important role to play, too. Just this month, Meta announced that it is suing the makers of the CrushAI app for attempting to circumvent its restrictions on promoting nudification apps on its Facebook platform. This came after online investigators found that the makers of these apps are frequently able to evade measures put in place by service providers to limit their reach.
What Can The Rest Of Us Do?
The rise of AI nudification apps should act as a warning that transformative technologies like AI can change society in ways that aren’t always welcome. But we should also remember that the post-truth age and “the end of privacy” are just possible futures, not guaranteed outcomes. How the future turns out will depend on what we decide is acceptable or unacceptable now, and the actions we take to uphold those decisions.
From a societal point of view, this means education. Critically, there should be a focus on the behavior and attitudes of school-age children to make them aware of the harm that can be caused. From a business point of view, it means developing an awareness of how this technology can affect workers, particularly women. HR departments should ensure systems and policies are in place to support those who become victims of blackmail or harassment campaigns involving deepfaked images or videos.
And technological solutions have a role to play in detecting when these images are transferred and uploaded, and potentially removing them before they can cause harm. Watermarking, filtering and collaborative community moderation could all be part of the solution. Failing to act decisively now will mean that deepfakes, nude or otherwise, are likely to become an increasingly problematic part of everyday life.
Conclusion
The issue of AI nudification apps is a complex and multifaceted one, requiring a comprehensive approach that involves governments, service providers, and individuals. By working together, we can mitigate the harm caused by these apps and ensure that the benefits of AI are realized without compromising our safety and dignity.
FAQs
Q: What are AI nudification apps?
A: AI nudification apps are tools that use artificial intelligence to create naked or sexualized images of people from fully-clothed images.
Q: What are the dangers of AI nudification apps?
A: The dangers of AI nudification apps include the creation of non-consensual sexualized images, which can be used to bully, coerce, or extort victims, and can have serious consequences for individuals, particularly women and girls.
Q: What is being done to stop the misuse of AI nudification apps?
A: Efforts to tackle the issue through regulation are underway in many jurisdictions, and online service providers are taking steps to limit the reach of these apps.
Q: How can individuals protect themselves from AI nudification apps?
A: Individuals can protect themselves by being aware of the risks, being cautious when sharing images online, and reporting any incidents of non-consensual deepfakes to the authorities.
Q: What role can education play in preventing the misuse of AI nudification apps?
A: Education can play a critical role in preventing the misuse of AI nudification apps by raising awareness of the harm that can be caused and promoting positive attitudes and behaviors among school-age children.
