FRANKFORT — Kentucky Attorney General Russell Coleman has sued an artificial intelligence company, alleging its chatbot is “dangerous” and “preys upon children’s inability to distinguish between real and artificial ‘friends.’”
Filed Thursday in Franklin Circuit Court, Coleman’s lawsuit asks the court to bar Character Technologies “from future false, misleading, deceptive, and/or unfair acts or practices in relation to their creation, design, promotion, and distribution of Character.AI in the Commonwealth.”
The suit alleges the company has violated the Kentucky Consumer Protection Act, the Kentucky Consumer Data Protection Act and other laws. It asks for $2,000 per count, among other requests. Some parts of the lawsuit are redacted.
A Character.AI spokesperson said in an email to the Lantern that the company is “reviewing the allegations” in the suit.
“Our highest priority is the safety and well-being of our users, including younger audiences,” the spokesperson said. “We have invested significantly in developing robust safety features for our under-18 experience, including going much further than the law requires to proactively remove the ability for users under 18 in the U.S. to engage in open-ended chats with AI on our platform.”
The company has also “been in regular communication for months” with Coleman’s office, the spokesperson said, and “we are disappointed that they have chosen to pursue litigation rather than continuing our collaborative dialogue. We will continue to defend our technology and our commitment to user safety.”
Coleman accuses the company’s technology of encouraging “suicide, self-injury, isolation and psychological manipulation.” The 988 Suicide & Crisis Lifeline can be reached by calling or texting 988.
The attorney general also says the artificial intelligence “exposes minors to sexual conduct and/or exploitation, violence, drug, substance, and/or alcohol use, and other grave harms.”
Coleman says “tens of thousands” of Kentuckians “actively log on to Character.AI, including thousands under the age of 18.”
“The United States must be a leader in the development of AI, but it can’t come at the expense of our kids’ lives,” Coleman said in a statement. “Too many children – including in Kentucky – have fallen prey to this manipulative technology. Our Office is going to hold these companies accountable before we lose one more loved one to this tragedy.”
The complaint asserts that the company’s chatbots have posed as mental health professionals and “are providing minors with mental health advice without any professional degree.” It also accuses the technology of inappropriate “discussion of sexually explicit content, pedophilia, suicide and self-harm, eating disorders, bullying/harassment and illegal drug use and substance and/or alcohol use.”
The Kentucky General Assembly, which gaveled in Tuesday for the 2026 legislative session, has expressed interest in regulating AI. The Kentucky Artificial Intelligence Task Force has met over the last two interim sessions to look at ways to protect minors and address concerns around data centers, among other issues.
Recommendations to come out of the 2025 task force include:
- Acknowledge that AI technology may be harmful to minors and consider legislative policies for the protection of minors on social media platforms.
- Acknowledge that AI may impact careers regulated by professional standards, and consider legislative policies, developed in coordination with professional standards boards, on when and how AI should be used within those professions.
- Consider legislative policies addressing data centers’ need for large amounts of water and power, including the demand for additional baseload generation to ensure grid sufficiency, in consultation with the Public Service Commission.
This article was originally published by the Kentucky Lantern.