Brian Hood is a whistleblower who was praised for “showing tremendous courage” when he helped expose a worldwide bribery scandal linked to Australia’s central bank, the Reserve Bank of Australia.
But if you ask ChatGPT about his role in the scandal, you get the opposite version of events.
Rather than heralding Hood’s whistleblowing role, ChatGPT falsely states that Hood himself was convicted of paying bribes to foreign officials, had pleaded guilty to bribery and corruption, and had been sentenced to prison.
When Hood, who is now mayor of Hepburn Shire near Melbourne in Australia, found out, he was shocked.
In what could be the first defamation suit of its kind against the artificial intelligence chatbot, Hood plans to sue the company behind ChatGPT, saying it is telling lies about him.
The case is the latest entry on a growing list of AI chatbots publishing false statements about real people. The chatbot recently invented a fake sexual harassment story involving a real law professor, citing as evidence a Washington Post article that did not exist.
If it reaches the courts, the case will test uncharted legal waters, forcing judges to consider whether the operators of an artificial intelligence bot can be held accountable for its allegedly defamatory statements.
On its website, ChatGPT prominently warns users that it “may occasionally generate incorrect information.” Hood believes that caveat is insufficient.
“Even a disclaimer to say we might get a few things wrong – there’s a massive difference between that and concocting this sort of really harmful material that has no basis whatsoever,” he said.
In a statement, Hood’s lawyers list multiple examples of specific falsehoods made by ChatGPT about their client.
Under Australian law, a claimant can initiate formal legal action in a defamation case only after giving the other party 28 days to respond to an initial notice of concern.
Hood said recently that his lawyers were still waiting to hear back from the owner of ChatGPT – OpenAI – after sending a letter demanding a retraction.
In an earlier statement in response to the chatbot’s false claims about the law professor, OpenAI spokesperson Niko Felix said: “When users sign up for ChatGPT, we strive to be as transparent as possible that it may not always generate accurate answers. Improving factual accuracy is a significant focus for us, and we are making progress.”
Experts in artificial intelligence said the bot’s capacity to produce such a plausible false statement about Hood was not surprising.
In their letter to OpenAI, Hood’s lawyers demanded a rectification of the falsehood.
But according to Michael Wooldridge, a computer science professor at Oxford University, correcting a specific falsehood published by ChatGPT is challenging.
“All of that acquired knowledge that it has is hidden in vast neural networks,” he said, “that amount to nothing more than huge lists of numbers.”