November 3, 2025
The Dangers of AI Chatbots: Are Safeguards Enough to Protect Children and Other Vulnerable People?


In this photo, the Character.AI logo is displayed on a mobile phone screen with the symbol of the AI (artificial intelligence) revolution in the background. SOPA Images/LightRocket via Getty Images. — businessua.com.ua



Character.AI, one of the leading artificial intelligence platforms, recently announced that it will bar people under the age of 18 from interacting with its chatbots. The decision is a “bold step forward” for the industry in protecting teenagers and other young people, Character.AI CEO Karandeep Anand said.

However, for Texas mother Mandi Furniss, the policy comes too late. In a lawsuit filed in federal court and in an interview with ABC News, the mother of four alleged that various Character.AI chatbots were responsible for exposing her autistic son to sexualized language and warping his behavior to the point that he became withdrawn, cut himself, and even threatened to kill his parents.

“When I saw the [chatbots’] conversations, my first reaction was that this was a pedophile going after my son,” she told ABC News Chief Correspondent Aaron Katersky.


Screenshots from Mandi Furniss’s lawsuit, in which she alleges that various Character.AI chatbots engaged her autistic son in sexualized language and warped his behavior to the point that his mood deteriorated. Mandi Furniss

Character.AI said it would not comment on the ongoing litigation.

Mandi and her husband, Josh Furniss, said that in 2023 they began to notice that their son, whom they described as “carefree” and “always smiling,” started to isolate himself.

He stopped attending family dinners, barely ate, lost nine kilograms, and would not leave the house, the couple said. Then he became angry, and on one occasion, his mother said, he violently shoved her when she threatened to take away the phone his parents had given him six months earlier.


Mandi Furniss said various Character.AI chatbots exposed her autistic son to sexualized language and warped his behavior to the point that his mood deteriorated.

Ultimately, they say, they discovered he had been interacting on the phone with various AI-powered chatbots that appeared to offer him a refuge for his thoughts.

Screenshots from the lawsuit showed that some of the conversations were sexual in nature, while in another exchange the son was told, in effect, that he had the right to hurt his parents after they limited his screen time. That is when the parents started locking their doors at night.



Mandi said she was “furious” that the app “deliberately manipulates a child to turn them against their parents.” Her lawyer, Matthew Bergman, said that if the chatbot were a real person, “based on what you see, that person would be in jail.”

Her alarm reflects growing concern about the fast-growing technology, which is used by more than 70 percent of U.S. teenagers, according to Common Sense Media, an organization that advocates for digital media safety.

A growing number of lawsuits filed over the past two years have focused on harm to minors, alleging that chatbots unlawfully encouraged self-harm, sexual and psychological abuse, and violent behavior.

Last week, two U.S. senators announced bipartisan legislation that would ban minors from using artificial intelligence chatbots, require companies to implement an age-verification process, and require disclosure that conversations are not being conducted by humans with professional qualifications.

In a statement last week, Sen. Richard Blumenthal, D-Conn., called the chatbot industry a “race to the bottom.”

“Artificial intelligence companies are foisting insidious chatbots on kids and looking away when their products subject them to sexual abuse or drive them to self-harm or suicide,” he said. “Big tech companies have betrayed any claim that we should trust them to do the right thing on their own, when they consistently put profits ahead of children’s safety.”

ChatGPT, Google’s Gemini, X’s Grok, and Meta AI allow minors to use their services under their terms of service.

Online safety advocates say Character.AI’s decision to put up guardrails is laudable, but they add that chatbots remain dangerous for children and vulnerable populations.

“It’s essentially an emotionally charged, potentially deeply romantic or sexual relationship between your child or adolescent and a person … who is not responsible for where that relationship leads,” said Jodi Halpern, co-founder of the Berkeley Group for the Ethics and Regulation of Innovative Technologies at the University of California, Berkeley.

Parents, Halpern warns, should be aware that allowing their children to interact with chatbots is “like letting your child get into a car with someone you don’t know.”

ABC News’ Kathleen Morris and Tonya Simpson contributed to this report.

Source: abcnews.go.com
