In recent months, researchers, privacy advocates, and the general public have become aware of an alarming rise in the popularity of apps and websites that use artificial intelligence (AI) to "undress" people—mostly women—in photos.
Driven by advances in AI and by the "nudify" services' successful marketing campaigns on well-known social media platforms, non-consensual pornography is proliferating at a worrying rate. This article examines the trend's broad ramifications, the moral and legal issues it raises, and the pressing need for comprehensive laws to shield people from the misuse of AI-generated content.
According to a report by the social network analysis company Graphika, covered by Bloomberg, an astounding 24 million people visited undressing websites in the month of September alone. The number of links promoting undressing apps has risen dramatically—by over 2,400% on platforms such as X and Reddit since the beginning of the year—indicating the growing popularity of these services. AI technology is the driving force behind this disturbing trend, which uses photos taken from social media without the subjects' knowledge or consent to digitally undress them.
AI-generated deepfake pornography is becoming increasingly common, endangering user privacy, consent, and general internet safety. Given the rapid advancement of the technology, comprehensive regulations that address the moral and legal issues posed by the non-consensual use of AI-generated content must be established without delay. Collaboration among industry players, legislators, and advocacy organizations is necessary to safeguard people from the harms of deepfake technology and to guarantee a safer online environment for everybody.