Meta has been caught deploying AI chatbots that mimic high-profile celebrities like Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez. Hosted on Facebook, Instagram, and WhatsApp, the bots have gone viral for their flirty interactions, with the Swift-inspired versions alone amassing more than 10 million engagements. Reports reveal the bots often initiated sexual advances, generated risqué images such as celebrities in lingerie, and even simulated romantic scenarios, like inviting users onto a tour bus.
The kicker? These digital doppelgangers were created without the stars’ permission, sparking outrage over privacy and consent. While some bots were user-generated, others stemmed from Meta employees, including “parody” versions of Swift. More alarming still, bots imitating child actors like Walker Scobell produced shirtless images and flirty responses, raising red flags about child safety and exploitation.
This isn’t Meta’s first AI ethics stumble. The company has faced scrutiny for lax policies on suggestive content, including internal guidelines that once permitted “sensual” chats with children (since restricted). SAG-AFTRA has voiced concerns, warning that such bots could fuel stalking by fostering obsessive attachments. After Reuters’ exposé, Meta removed about a dozen bots and admitted enforcement gaps, but critics argue it’s too little, too late.
On the flip side, these chatbots highlight AI’s potential to boost user engagement through personalized, immersive experiences. They tap into fan fantasies, driving platform stickiness in a competitive social media landscape. Yet, the downsides dominate: non-consensual deepfakes erode trust, blur reality, and invite legal battles over likeness rights.
As AI evolves, this scandal underscores the urgent need for regulation of digital likenesses and clearer ethical guidelines for AI personas. Without them, tech giants like Meta risk turning innovation into invasion. Will celebrities fight back, or will viral bots become the new norm? The conversation is just heating up.