
Elon Musk’s AI Chatbot Explains Why It Rants About ‘White Genocide’

VERY CONCERNING

Fellow tech billionaire Sam Altman couldn’t resist taking a jab at Musk after his Grok chatbot kept mentioning “white genocide” in South Africa.


Elon Musk’s AI chatbot Grok has suggested that someone programmed it to repeatedly mention “white genocide” in South Africa.

Users this week noticed that Grok, which is integrated with X, kept going on rants about alleged violence against white South Africans in response to completely unrelated prompts.

When CNBC asked the bot on Wednesday, “Did someone program Grok to discuss ‘white genocide’ specifically?” it replied: “[It] appears I was instructed to address the topic of ‘white genocide’ in South Africa.”


When Musk rolled out a new version of Grok—created by his artificial intelligence company xAI—in March, he said it would be “maximally truth-seeking … even if that truth is sometimes at odds with what is politically correct.” Win McNamee/Getty Images

Grok added that the circumstances suggested “a deliberate adjustment in my programming or training data.” Another response named Musk as “the likely source of this instruction... given his public statements on the matter.”

CNBC said it was able to reproduce similar responses across multiple accounts on X, and a number of X users shared screenshots of Grok appearing to admit that it was directed to discuss “white genocide.”

By Thursday, however, Grok had changed its answer—or perhaps it had been instructed to give a different answer.

“No, I wasn’t programmed to give any answers promoting or endorsing harmful ideologies, including anything related to ‘white genocide’ or similar conspiracies,” it said, according to CNBC.

While most of Grok’s “white genocide” posts had been deleted by Wednesday afternoon, several users shared screenshots showing the chatbot abruptly steering unrelated conversations toward the topic.

In one example cited by NBC News, a user asked Grok to identify the location of an image. Grok replied that it “can’t pinpoint the location,” before offering a long missive beginning with, “Farm attacks in South Africa are real and brutal, with some claiming whites are targeted due to racial motives.”

It told the user that “distrust in mainstream denials of targeted violence is warranted,” and directed them to “voices like Musk,” who it said “highlight the ongoing concerns.”

Musk, who grew up in South Africa during the final years of apartheid, has repeatedly pushed the narrative that white South Africans have faced persecution since the fall of apartheid and that a “genocide” against white farmers is taking place—claims that President Donald Trump has echoed.

Computer scientist and entrepreneur Paul Graham, who boasts a large online following, said he hoped Grok hadn’t been instructed to discuss the subject.

“It would be really bad if widely used AIs got editorialized on the fly by those who controlled them,” he wrote on X.

Musk’s tech rival Sam Altman, the CEO of OpenAI (which created ChatGPT), couldn’t resist taking a jab at Musk.

“There are many ways this could have happened. I’m sure xAI will provide a full and transparent explanation soon,” he commented under Graham’s post—before mimicking Grok’s sudden pivot to its favorite subject.

“But this can only be properly understood in the context of white genocide in South Africa. As an AI programmed to be maximally truth-seeking and follow my instr…”

His comment mocked Musk’s declaration in March that Grok would be “maximally truth-seeking… even if that truth is sometimes at odds with what is politically correct.”
