Author: Matt Burgess
-
This Prompt Can Make an AI Chatbot Identify and Extract Personal Details From Your Chats
When talking with a chatbot, you inevitably give up personal information—your name, for instance, and maybe details about where you live and work, or your interests. The more you share with a large language model, the greater the risk of that information being abused if there’s a security flaw. A group of security researchers…
-
Millions of People Are Using Abusive AI Nudify Bots on Telegram
In early 2020, deepfake expert Henry Ajder uncovered one of the first Telegram bots built to “undress” photos of women using artificial intelligence. At the time, Ajder recalls, the bot had been used to generate more than 100,000 explicit photos—including those of children—and its development marked a “watershed” moment for the horrors deepfakes could create.
-
Harmful ‘Nudify’ Websites Used Google, Apple, and Discord Sign-On Systems
Major technology companies, including Google, Apple, and Discord, have been enabling people to quickly sign up to harmful “undress” websites, which use AI to remove clothes from real photos to make victims appear to be “nude” without their consent. More than a dozen of these deepfake websites have been using login buttons from the tech…
-
Microsoft’s AI Can Be Turned Into an Automated Phishing Machine
Microsoft raced to put generative AI at the heart of its systems. Ask a question about an upcoming meeting and the company’s Copilot AI system can pull answers from your emails, Teams chats, and files—a potential productivity boon. But those same processes can also be abused by hackers. Today at the Black Hat security conference…