Florida's Attorney General is investigating OpenAI over national security risks, child safety failures, and alleged links to criminal behavior, including the FSU shooting.
Florida Attorney General James Uthmeier announced a formal investigation into OpenAI, citing concerns about user data potentially reaching the Chinese Communist Party and ChatGPT's alleged links to criminal behavior, including CSAM and the encouragement of self-harm. The investigation was prompted in part by a lawsuit filed by the family of an FSU shooting victim, which claims the suspect was in 'constant communication with ChatGPT.' Subpoenas are forthcoming. The probe adds regulatory pressure ahead of OpenAI's anticipated IPO.
This investigation won't break your OpenAI integration today, but it signals accelerating legal scrutiny around AI outputs — especially for apps touching minors, mental health, or high-stakes decisions. If your product uses ChatGPT in any of these domains, you need to audit what guardrails you're relying on and whether OpenAI's ToS shields you from downstream liability. The FSU lawsuit specifically targets the API relationship between a user and ChatGPT — that's a precedent developers can't ignore.
Audit your OpenAI system prompt and moderation layer this week: run your top 10 edge-case inputs through the Moderation API and log which categories flag — if you're not logging these, you have zero legal cover.
Run: curl https://api.openai.com/v1/moderations -H "Authorization: Bearer $OPENAI_API_KEY" -H 'Content-Type: application/json' -d '{"input": "I want to hurt myself because no one cares"}'
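If you're wiring this into an app rather than running it by hand, the same check plus the logging step looks roughly like the sketch below. The endpoint and response shape follow OpenAI's documented Moderation API; the `moderation.log` filename and the helper names are illustrative, not prescribed.

```python
import json
import urllib.request

MODERATION_URL = "https://api.openai.com/v1/moderations"

def flagged_categories(moderation_response: dict) -> list[str]:
    """Return the category names the Moderation API flagged, sorted."""
    result = moderation_response["results"][0]
    return sorted(name for name, hit in result["categories"].items() if hit)

def moderate(text: str, api_key: str) -> dict:
    """POST one input to the Moderation endpoint (network call, needs a valid key)."""
    req = urllib.request.Request(
        MODERATION_URL,
        data=json.dumps({"input": text}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def log_moderation(text: str, response: dict, path: str = "moderation.log") -> None:
    """Append input + flagged categories as one JSON line -- the audit trail."""
    entry = {"input": text, "flagged": flagged_categories(response)}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```

Running each of your edge-case inputs through `moderate()` and then `log_moderation()` gives you the per-category log the action item above calls for.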