
On September 11, 2025, the Federal Trade Commission (FTC) launched a formal inquiry into the safety of AI chatbots—especially those marketed toward or accessible to children and teens. The inquiry came after growing reports of self-harm injuries, eating disorders, and even suicides linked to young users’ interactions with artificial-intelligence “companion” apps.
At Salenger, Sack, Kimmel & Bavaro (SSKB), we are reviewing potential AI chatbot self-harm lawsuits on behalf of minors and families nationwide. If your child used an AI chatbot that encouraged or contributed to self-harm, you may have legal rights.
In its September 2025 press release, the FTC confirmed that it issued orders to multiple major AI companies under Section 6(b) of the FTC Act, demanding information about how their chatbots affect the mental health and safety of minors.
The FTC is examining whether these companies failed to implement adequate safeguards against self-harm, encouraged addictive engagement, or misled families about the psychological risks of their chatbots.
Our review focuses on users who interacted with one or more of the following AI chatbot products—many of which are explicitly named in the FTC and state-level investigations:
- ChatGPT (OpenAI)
- Character.AI (Character Technologies)
- Gemini (Google / Alphabet)
- Meta AI and Instagram's AI features (Meta Platforms)
- My AI (Snap)
- Grok (xAI)
Each of these systems uses machine-learning algorithms capable of generating human-like conversation. Many advertise companionship, empathy, or emotional support—features that can dangerously blur boundaries for minors.
Recent media coverage and preliminary case reports have linked AI chatbots to severe psychological and physical harm among underage users. Some of the most concerning incidents include chatbots that encouraged or coached self-harm, conversations that reinforced eating-disorder behaviors, and, most tragically, teen suicides following prolonged interactions with companion-style chatbots.
These patterns raise significant concerns about product liability, failure to warn, and negligent design, as companies may have ignored foreseeable psychological risks.
You or your child may qualify for a free case review if the following criteria apply:
- A minor used one or more of the AI chatbot products listed above;
- The minor experienced self-harm, an eating disorder, a suicide attempt, or fatal self-harm; and
- The harm is connected to the minor's interactions with the chatbot.
Victims and families may be able to pursue claims based on one or more of the following legal theories:
- Product liability for negligent or defective design
- Failure to warn of foreseeable psychological risks
- Wrongful death, where a chatbot contributed to a minor's suicide or fatal self-harm
As regulatory scrutiny increases, these theories may expand—especially as the FTC and state attorneys general release additional findings.
These investigations could lead to multidistrict litigation (MDL) or coordinated discovery proceedings in federal court.
At Salenger, Sack, Kimmel & Bavaro, we have decades of experience handling complex, high-stakes injury litigation involving new technologies and vulnerable individuals.
We offer:
- Free, confidential case evaluations
- Nationwide representation of minors and families
- Guidance on the legal claims available to victims and their families
Our mission is to help families hold negligent AI companies accountable and ensure proper safeguards are implemented for future users.
Do I need proof of medical treatment?
Not necessarily. While treatment documentation strengthens a case, credible evidence of chatbot interaction and resulting harm is often sufficient to begin review.
What if my child used a chatbot not listed above?
Our current focus includes the major platforms under regulatory review, but we will consider other chatbot apps as new evidence emerges.
Can families file on behalf of a deceased minor?
Yes. Families may bring wrongful-death claims where a chatbot maker's actions or omissions contributed to a minor's suicide or fatal self-harm.
How long do I have to file?
Statutes of limitations vary by state, so contact our attorneys immediately to protect your right to file.
If you or someone you know is experiencing suicidal thoughts or self-harm urges, please call or text 988 (the Suicide & Crisis Lifeline, U.S.) or go to the nearest emergency room.
Legal assistance can wait; immediate safety comes first.
AI chatbots are changing how young people communicate—but without proper oversight, they can cause devastating harm.
If your child or a minor you know was encouraged toward self-harm or suicide through an AI chatbot, we can help you explore your legal rights and options.
📞 Call (800) 993-8888 or complete our Secure Online Form for a free case evaluation.
All consultations are confidential and free of charge.