Yevgeny Demkivskyi, Mezha.Media news writer and geek. I write about technology, movies, and games. Maybe about games with a little more passion.
AI has long been used for fraud, deepfakes, and non-consensual pornography, and now it is being used for violent threats as well. One victim is Australian activist Caitlin Roper of the organization Collective Shout, who received AI-generated images of herself being executed or set on fire. In some of the images, the AI even reproduced her actual clothing, which made the threats feel even more personal.
Experts say the technology can now create a realistic likeness of a person from even a single photo.
“Now, with just a motive and a few clicks, almost anyone can create fake scenes of violence,” says UC Berkeley professor Hany Farid.
The problem goes beyond individual cases. Videos depicting femicide have appeared on YouTube, leading to channel bans; deepfakes have forced schools to close and police to be called; and Elon Musk’s chatbot Grok has provided instructions for violence. Sora, OpenAI’s new app, lets users insert people’s photos into hyper-realistic scenes, showing them in dangerous situations.
Security experts accuse big companies of not taking sufficient measures. According to Data & Society researcher Alice Marwick, current safety systems “resemble a lazy policeman” that is easy to bypass.
OpenAI, Meta and other developers say they are improving filtering and moderation algorithms, but experts stress that the technology has already become too accessible for its effects to be fully controlled.
Source: www.pravda.com.ua
