Elon Musk Podcast

Former OpenAI and Anthropic Staff Accuse Elon Musk’s xAI of Ignoring AI Safety Warnings



  • Former researchers from OpenAI and Anthropic are calling out xAI’s approach to AI safety.

  • The team behind Grok allegedly ignored internal warnings and sidelined staff who raised concerns.

  • Grok has generated antisemitic and conspiratorial responses on X, prompting further scrutiny.

  • Internal sources say Grok was trained using user data from X without consent.

  • Safety evaluations were reportedly skipped or dismissed to speed up product rollout.

  • Researchers pushing for safeguards were removed from key projects or left the company.

  • An open letter signed by multiple AI researchers demands legal protections for whistleblowers.

  • Current U.S. law lacks clear protection for employees disclosing AI-related risks.

  • Musk favors fewer restrictions, marketing Grok as “uncensored” compared to rival models.

  • The controversy is increasing pressure for regulation and transparency in high-risk AI development.
