Florida AG launches criminal investigation into ChatGPT over FSU shooting

Police investigate the scene of a shooting near the student union at Florida State University on April 17, 2025 in Tallahassee, Florida. Two people were killed and five injured in the attack. Florida's attorney general is now investigating OpenAI because the alleged shooter used ChatGPT to help plan the attack.
Miguel J. Rodriguez Carrillo / Getty Images

Florida's attorney general is launching a criminal investigation into ChatGPT and its maker, OpenAI, over claims that the accused gunman in a shooting at Florida State University last year consulted the AI chatbot before killing two people and injuring five more.

The Republican attorney general, James Uthmeier, said at a press conference in Tampa on Tuesday that accused gunman Phoenix Ikner consulted ChatGPT for advice before the shooting, including what type of gun to use, what ammunition went with it, and what time to go to campus to encounter more people, according to an initial review of Ikner's chat logs.

"My prosecutors have looked at this and they've told me, if it was a person on the other end of that screen, we would be charging them with murder," Uthmeier said. "We cannot have AI bots that are advising people on how to kill others."

OpenAI spokesperson Kate Waters said in a written statement to NPR: "Last year's mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime." She said the company reached out to share information about the alleged shooter's account with law enforcement after the shooting and continues to cooperate with authorities.

Uthmeier's office is issuing subpoenas to OpenAI seeking information dating back to March 2024 about its policies and internal training materials related to user threats of harm, and about how it cooperates with law enforcement and reports crimes. At the press conference, Uthmeier acknowledged that the investigation is entering uncharted territory and said it remains uncertain whether OpenAI bears criminal liability.

"We are going to look at who knew what, designed what, or should have done what," he said. "And if it is clear that individuals knew that this type of dangerous behavior might take place, that these types of unfortunate, tragic events might take place, and nevertheless still turned to profit, still allowed this business to operate, then people need to be held accountable."

OpenAI's Waters said that the chatbot "provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity."

She continued: "ChatGPT is a general-purpose tool used by hundreds of millions of people every day for legitimate purposes. We work continuously to strengthen our safeguards to detect harmful intent, limit misuse, and respond appropriately when safety risks arise."

Ikner, 21, is facing multiple charges of murder and attempted murder for the April 2025 shooting near the student union on FSU's Tallahassee campus, where he was a student at the time. His trial is set to begin on Oct. 19. According to court filings, more than 200 AI messages have been entered into evidence in the case.

Growing concerns about AI chatbots

The Florida investigation comes amid growing concerns over the role of AI chatbots in mass violence. Uthmeier had already announced a civil investigation into ChatGPT's role in the FSU shooting, which is ongoing, and attorneys for the family of one of the victims say they plan to sue OpenAI.

OpenAI is already facing a lawsuit from the family of a victim critically wounded in an attack in British Columbia in February 2026 that killed eight people and injured dozens more. The alleged shooter discussed gun violence scenarios with ChatGPT and was even banned from the platform months before the shooting, but was able to evade detection and create another account, OpenAI told Canadian authorities.

The Wall Street Journal reported that OpenAI's internal systems flagged the account's posts and staffers were alarmed enough to consider alerting law enforcement, but that the company decided not to. OpenAI has said it is making changes to "strengthen" its protocol for referring accounts to law enforcement in the aftermath of the Canadian shooting.

Lawsuits are also mounting against OpenAI and other makers of AI chatbots alleging they've contributed to mental health crises and suicides. (OpenAI has said the cases are "an incredibly heartbreaking situation" and that it's working with mental health experts to improve how ChatGPT responds to signs of mental or emotional distress.)

A wrongful death lawsuit filed against Google in March over the suicide of a Florida man accuses the company's Gemini chatbot of pushing the man to "stage a mass casualty attack near the Miami International Airport [and] commit violence against innocent strangers," according to court documents.

In response to that lawsuit, Google said: "Gemini is designed to not encourage real-world violence or suggest self-harm. Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately they're not perfect." The company added that in this specific case, Gemini had "referred the individual to a crisis hotline many times."

Copyright 2026 NPR

Shannon Bond is a business correspondent at NPR, covering technology and how Silicon Valley's biggest companies are transforming how we live, work and communicate.