On April 15th the German magazine Die Aktuelle published an AI-generated ‘interview’ with Formula One legend Michael Schumacher. Schumacher has not been seen in public since a 2013 skiing accident which caused serious brain damage, and the details of his condition are closely guarded by his family. Following an immediate public backlash and the threat of legal action from Schumacher’s family, the article was pulled and editor-in-chief Anne Hoffmann was fired. This is not the first time legal action has been taken over tabloid reports about Schumacher’s health. A 2015 article in Die Bunte claimed that the former world champion could walk again following his accident, a claim swiftly denied by his lawyer. But this case seems qualitatively different. Rather than the usual tabloid fabrications, a mixture of rumour and gossip, Die Aktuelle manufactured a veneer of truth by claiming, at least initially, that Schumacher himself had spoken to them.

The controversy raises concerns over the reliability of journalism in light of new technologies. Some have responded to these concerns with strict regulation, Italy becoming the first western country to ban ChatGPT. Innovator and entrepreneur Elon Musk, along with other leading scientists, academics and tech CEOs, has signed an open letter calling for a pause on the development of these technologies. At the same time, however, Musk is apparently working on his own autonomous chatbot, having tweeted in February of this year, “What we need is TruthGPT”.
Reading the ‘interview’ led many to ask a basic question of provenance, in Talmudic terms m’na hanei milei – where are these words from? People are not inclined to believe everything they hear. The Talmud constantly prompts its readers to ask: Who said this? Where did they say it? In what context? Pointing to reliable sources is not just a question of truth but a question of morality. Anyone genuinely interested in Schumacher asked this very question. It is unlikely that many took the claim of provenance seriously; Die Aktuelle even ran the tongue-in-cheek strap-line “deceptively real”. Yet Schumacher’s family, and his thousands of fans worldwide, did not see the funny side, and the media stunt clearly touched a nerve. There is something distinctly human about a family protecting the privacy of a loved one, and something almost inhumane about disregarding their express wish to be left alone. It is not AI that lacked a human touch here, but the very human team at Die Aktuelle.
In their apology the magazine described the stunt as “tasteless and misleading”, while Futurism aptly characterized the article as “brazen and tone-deaf”. It was misleading indeed, yet one wonders how exactly a text-generating bot is supposed to understand the concepts of “tone” and “taste”. The decision to fake an interview with a disabled person unable or unwilling to speak for himself in the public arena demonstrates not that malignant AI will create such content to fool credulous people, but that some journalists will dispense with respect for the disabled, and with sensitivity to those who care for them, in order to get clicks. It is not AI that has struck a blow to decency and humanity, but human beings themselves. That the interview, in the words of Die Aktuelle’s own statement, “in no way meets the standards of journalism” expected by readers is not the fault of AI, as the magazine’s own felt need for “immediate personnel consequences” makes plain. At the end of the day, whatever groundbreaking capabilities AI may possess – and it is still far from perfect – it is the human writers and editors who put the ‘person’ in “personnel”, and they therefore bear the responsibility.
The increasing autonomy of AI provokes fear, as well as excitement, as the world takes its first steps into a new technological age. Like all revolutionary technologies, autonomous AI elicits a range of responses, from the reactionary to the utopian. Much of the conversation focuses on the relationship between technology and truth, and it leads us to profound questions about how we can verify what we read, see or hear. Indeed, we always want to verify, to ask ‘where are these words from?’ While the article from Die Aktuelle was exposed within minutes, this will not be the last case of a fabricated interview, which can now easily be accompanied by fake photographic, video or audio “evidence”. This is further compounded by the social and political discourse around “post-truth”, “fake news” and other ideas which, rightly or wrongly, undermine traditional notions of how we decide what is true.
But perhaps we are missing the more important question, that of the relationship between people and morality. Just like the combustion engine or the internet, AI is neither inherently moral nor immoral. Nonetheless, we can and must make moral judgements about the ways in which people use these tools. The key words in Die Aktuelle’s statement of apology – “tasteless”, “standards”, “consequences” – relate more to human sensitivities than to the truth value of a chatbot’s output. Even if AI has the ability to produce falsehoods, people still have the right and the responsibility to shine a light on the truth. Ultimately it is not AI that is lying to us, but the people who employ it to unsavoury ends. Whether we use AI as a tool or not, all human beings, including journalists, have the responsibility to make honest judgements about their actions, and we all have the right to ask questions about the provenance of content.