When Technology Deceives
The Dangers of Artificial Intelligence in a Post-Truth Society
In the spring of 2024, an alarming incident unfolded in Baltimore County that encapsulated the profound dangers posed by artificial intelligence in today’s increasingly “post-truth” world. Dazhon Darien, a former high school athletic director, was arrested for using AI technology to mimic the voice of the school principal, fabricating racist and antisemitic remarks that led to widespread backlash and the principal’s temporary removal from his post.
This incident offers a disturbing glimpse into a far graver and potentially national crisis in politics and media.
To make a long story short, the principal had initiated an investigation into Darien because Darien approved a $1,916 payment, under false pretenses, to another coach who was also his roommate. Apparently, Darien wasn’t happy about it, so he used artificial intelligence to mimic the principal’s voice making racist and antisemitic comments about Black students and teachers, and Jewish parents.
The recording made references to “ungrateful Black kids who can’t test their way out of a paper bag” and said, “If I have to get one more complaint from one more Jew in this community, I’m going to join the other side.”
Predictably, the recording went viral after it was sent to other teachers and a student. The principal became the recipient of all kinds of vitriol and threats from people who believed he had made the comments.
Fortunately, law enforcement determined that the recording was faked, and Darien was arrested.
The incident highlights what many have been concerned about with the advent of AI technology and its inevitable misuse in creating deepfakes – convincingly realistic audio or video clips intended to deceive. In this case, the perpetrator used the tools to carry out a personal vendetta, which is already terrifying enough.
But what happens when these technologies are applied on a larger scale – especially in the political realm?
American politics is already fraught enough with division and tension. Introducing technology that could deceive the masses will only further exacerbate the rifts in society.
The implications of this are chilling. In politics and media, where the manipulation of public opinion has already been weaponized to divide the public while attacking political opponents, AI deepfakes could push the dissemination of disinformation into overdrive.
It is not hard to imagine how these tools can be used to influence elections, undermine trust in institutions, and even incite violence, is it? Unlike traditional “fake news” that can be more easily debunked, these AI-generated falsehoods can be honed to the point that they are nearly impossible to distinguish from reality.
Remember when the media promoted the “fine people” hoax, in which they claimed former President Donald Trump had made complimentary remarks about white supremacists and Nazis? CNN and other alleged news outlets promoted this mass deception simply by cutting out part of the press conference in which he commented on the Charlottesville march.
This was already bad enough. But what if someone fabricated audio recordings of Trump dropping the N-word or saying other incendiary remarks? If the deepfake was convincing enough, untold numbers of unsuspecting Americans would be taken in by it. Even after the lie was exposed, there would still be many who believe it. Indeed, there are still people who believe that Trump actually said Nazis were “fine people” or that he told people to inject Clorox into their veins.
We already reside in a world where politicians, media figures, and other members of the chattering class are working to trick us into believing their lies. With sophisticated AI, distinguishing between truth and lies will become exponentially harder, which could lead to some disastrous results when taken to its logical conclusion.
Some would simply withdraw from political engagement altogether because they do not know what to believe. That is already horrible enough, but it is the best-case scenario. The more likely outcome is that America becomes even more divided and disinformed.
The Baltimore County incident underscores the need for solutions. It is not clear what role, if any, the government could or should play. Perhaps more technology will need to be developed in order to identify false information spread using AI. Law enforcement agencies will need to adopt more robust strategies to determine whether they are being tricked by the technology. Otherwise, it could lead to unjust arrests of people who were portrayed as having committed a crime using deepfakes.
The tech industry will have to be held accountable for regulating itself and the use of its technologies. Its assistance in uncovering deepfakes will likely be invaluable. If this happens, then America might have a fighting chance at offsetting the potential harm that the misuse of AI technology could cause.
Wrong Speak is a free-expression platform that allows varying viewpoints. All views expressed in this article are the author's own.