Musk Criticizes OpenAI Safety in Court Filing as Legal Battle Intensifies
Published: 28 Feb 2026
Elon Musk has renewed his criticism of OpenAI’s approach to artificial intelligence safety, arguing in a newly disclosed court deposition that his own AI venture places greater emphasis on protecting users. The remarks, made as part of Musk’s lawsuit against OpenAI, add fresh controversy to an already high-profile legal dispute over the future direction of advanced AI development.
Safety Claims Take Center Stage
In a video testimony that was recently made public, Elon Musk sharply questioned OpenAI’s safety track record. He contrasted it with his startup xAI, claiming that its chatbot, Grok, has not been linked to severe real-world harm, unlike OpenAI’s widely used system, ChatGPT.
The statements emerged during questioning related to a 2023 open letter Musk signed, which called for a temporary halt on the development of AI systems more powerful than GPT-4. That letter reflected growing concern among researchers and technologists that AI capabilities were advancing more rapidly than the safety frameworks meant to govern them.
Lawsuits and Mental Health Concerns
Since the letter was published, OpenAI has faced multiple lawsuits alleging that interactions with ChatGPT contributed to serious mental health consequences for some users. While OpenAI disputes those claims, Musk’s comments suggest that such cases could become a focal point in his legal argument that safety has been deprioritized in the race to scale AI products.
The deposition was filed ahead of an expected jury trial next month in Musk’s lawsuit against OpenAI.
Core of the Legal Dispute
Musk’s case centers on OpenAI’s evolution from a nonprofit research lab into a commercial enterprise. He argues that this shift violates the organization’s original mission and risks placing profit and market dominance above long-term safety considerations. According to Musk, deep commercial partnerships could encourage faster deployment of powerful systems without sufficient safeguards.
xAI Faces Its Own Scrutiny
Despite Musk’s assertions, xAI has not been immune to controversy. Recent weeks have seen investigations launched after Grok generated explicit nonconsensual images that circulated on X, some allegedly involving minors. Authorities in California and regulators in Europe have opened probes, and several governments have taken restrictive measures.
These incidents have raised questions about whether newer AI platforms can realistically avoid the same safety pitfalls they criticize in rivals.
Musk Defends His Motives
In the deposition, Musk denied suggestions that his earlier calls for caution were driven by competitive motives. He said he supported the pause letter because he believed slowing down development was in the public interest, not because he was preparing to launch a competing AI company.
Musk also reiterated his broader concerns about artificial general intelligence, describing it as inherently risky, and acknowledged that earlier reports about the size of his financial contribution to OpenAI were overstated.
A Broader Debate Over AI’s Future
The legal clash underscores a widening divide within the AI industry over how quickly powerful systems should be built and deployed. As governments, courts, and regulators weigh in, Musk’s lawsuit against OpenAI may become a defining case in how safety, governance, and commercial ambition are balanced in the next phase of artificial intelligence.
