
France Intensifies Probe Into Elon Musk and X Over Alleged Harmful Content, AI Role Under Scrutiny

Former X CEO Linda Yaccarino and Elon Musk

Paris, France — May 8, 2026

French authorities have escalated a high-profile investigation into billionaire entrepreneur Elon Musk and his social media platform X over allegations involving harmful and misleading content.

Prosecutors are examining claims that the platform allowed the spread of child-related explicit material, deepfake images, and disinformation, raising serious concerns about content moderation and platform accountability in Europe.


AI Chatbot Grok Also Under Investigation

The probe extends to X’s AI chatbot, Grok, which is alleged to have generated or amplified controversial posts.

According to investigators, some content linked to the chatbot included misleading claims related to the Holocaust and offensive deepfake imagery—issues that carry significant legal consequences in France, where Holocaust denial is a criminal offense.


Failure to Appear for Questioning

French officials said both Musk and former X CEO Linda Yaccarino were summoned for questioning but did not appear.

Despite their absence, authorities confirmed that the investigation is ongoing, with agencies continuing to gather evidence and assess potential violations of French law.


Allegations of Platform Misuse for Growth

Investigators are also examining whether controversial content circulating on the platform was leveraged to boost engagement, popularity, or market valuation for X and related ventures, including the AI company xAI.

While no formal charges have been announced, the case has intensified scrutiny over how social media companies manage content and deploy AI technologies.


Renewed Debate on AI and Platform Responsibility

The investigation has reignited global debate around the responsibility of tech companies in moderating content and ensuring ethical use of artificial intelligence.

Regulators across Europe have increasingly focused on enforcing stricter standards for digital platforms, particularly in areas involving misinformation, harmful content, and AI-generated media.


Broader Implications for Tech Regulation

Legal experts say the outcome of the case could have far-reaching implications for:

  • AI governance and accountability
  • Social media regulation in Europe
  • Platform liability for user-generated and AI-generated content

As the probe continues, it may shape future policies on how emerging technologies are monitored and controlled.