Artificial intelligence (AI) has long been a staple of science fiction. Utopian visions have portrayed post-scarcity societies in which humans enjoy indefinite lifespans free from work and machines obey our every whim.
Examples include Iain Banks’ Culture series and Star Trek: The Next Generation. Most of us, however, are more familiar with dystopian visions such as HAL (2001: A Space Odyssey), the T-800 and the T-1000 (the Terminator franchise), and WALL-E from the horrifying and overrated film of the same name (yes, I know it was supposed to be a utopia, but the movie sucked, so I’m putting it here. Deal with it!).
In the real world, AI has progressed over the years, but not as quickly as many predicted. In the 1980s, expert systems pioneered by Edward Feigenbaum allowed computers to learn from and mimic human experts. In 1997, IBM’s Deep Blue defeated reigning world chess champion Garry Kasparov, and 20 years later AlphaGo equaled the feat against the world’s number one Go player, Ke Jie. In the interim, Amazon released Alexa to remind us to buy stuff we don’t need, and Microsoft connected its chatbot Tay to Twitter because…there weren’t enough Nazis online?
Should I Be Concerned?
AI worries many people, including some experts who believe it could pose an existential threat. Those concerned include Elon Musk, neural-net pioneer Geoffrey Hinton, the EU’s tech chief Margrethe Vestager, and, of course, Doctor Zachary Smith.
While experts typically highlight one or more of twelve concerns, most people tend to focus on four: bias, employment, existential threat, and misrepresentation.
We’ve already had a Nazi chatbot, so the risk that an AI might misgender someone or “associate women’s names with traditionally female roles” does not keep me awake at night. Employment and misrepresentation are, however, serious concerns.
The most recent indication of this is the Hollywood labor strikes, in which writers’ and actors’ fears of being displaced by AI play no small part. AI capabilities may endanger many other fields as well. A recent ComputerWorld article indicated that many industries could be affected, with AI potentially taking over 29% of computer-related tasks and 46% of administrative tasks.
However, the primary concern of experts is the risk that AI poses an existential threat to humanity.
So, What’s the Big Deal?
At a high level, AI can be grouped into two forms: strong AI, which can do anything a person can (except…nope, I’m not going there), and weak AI, which is limited to a few tasks. Each presents different threats.
When we think of world-ending scenarios, we tend to think of strong AI. Whether it’s Skynet in the Terminator movies or the machines in The Matrix, we know that AI is just itching to kill us, enslave us, or turn us into batteries.
The problem is that it just doesn’t make a lot of sense. Humans have been around for millennia and yet we have not roamed the world trying to wipe out all the animals that threaten us. We’ve hunted a number of species to extinction but provided we don’t make flesh-eating AI robots this should not be a problem (mental note – good idea for a screenplay?).
Even when animals are dangerous, we don’t hunt them to extinction, so I find the risk that a suddenly sentient computer will start a nuclear war to be far-fetched (safety tip: if a computer offers to play a game with you, do not pick global thermonuclear war). More importantly, we still have Keanu Reeves, and he looks even tougher since the John Wick movies, so I think we’re safe from strong AI.
Ok, maybe we’re not completely safe.
Weak AI poses a different and, frankly, more realistic threat. The reason is that, since it can’t think for itself, it needs to be paired with people, and people are…well…lazy, imperfect, and stupid. The military has labeled this type of weak AI “autonomous systems,” and as we all know, changing the name of something makes it something different.
Nothing to see here. It’s not a UFO, it’s an unidentified aerial phenomenon.
Over the last several years, Tesla has been experimenting with autonomous automobiles. Unfortunately, there have been hundreds of crashes and dozens of deaths. Not to be deterred, the US military, in a “the glass is half full” spirit, seems to have thought, “You know what? Let’s build thousands of these, slap some machine guns on them, attach them to quadcopters, and release them onto the battlefield.”
What’s the worst that can happen?
That, though, is small potatoes compared to the idea of giving weak AI some authority over nuclear weapons. We know of several instances prior to AI when mistakes almost led to nuclear war, including once when a moonrise over Norway was mistakenly interpreted as a large-scale Soviet missile launch. Do we want to hand control over to computers when I can’t even get autocorrect to work 100% of the time?
I said lunch, lunch!!!!
How I Learned to Stop Worrying and Love A.I.
I do not plan on building a bunker in preparation for the inevitable robot apocalypse, because I don’t believe we’re even at the point where weak AI will be given much authority, and this email is a good example of why:
I have 20 years of IT experience and no musical ability or rhythm, and those who know me well are probably holding their sides, gasping for breath, while large tears of laughter run down their cheeks. If this is the best AI can do, then I am no more afraid that AI will enslave humanity than I am of dying in a flying-car mishap (another technology that the experts claimed was just around the corner).
Scientists will no doubt make advances in AI over the coming years and politicians, activists, and other concerned individuals will lobby for regulations and oversight as they rightly should. However, oversight is not prevention and so concerns will continue to be voiced in the media, backed by apocalyptic prophecies.
Should we be concerned about AI? Probably. Should we lie awake at night haunted by pictures of a Terminator-infested hellhole that only Sarah Connor can save us from? Probably not.
It is far more likely that AI will be used the same way every other computer innovation has been used: to make more porn, make it easier for students to cheat, and spawn some unexpected social media platform that is incomprehensible to the older generation (it can’t be as bad as TikTok, right? Right?!?).
If AI advances more quickly than anticipated, a reassessment of the dangers may be necessary. Until then, I’ll be around. Just not on the dance floor.
Wrong Speak is a free-expression platform that allows varying viewpoints. All views expressed in this article are the author's own.