Unpacking All Three Sides of the AI Debate
The heated public debate over AI risk is confusing because there are not just two sides to the argument but three:
The “AI Doomers,” aka pessimists, are the worriers. They argue that AI poses a threat to humanity, to civilization, and to the whole planet.
The “AI Boosters,” aka optimists, are the cheerleaders. They believe AI is a miracle tech that we can use to solve huge problems like curing Alzheimer’s and cancer and stopping climate change.
The “AI Realists,” aka pragmatists, are the straight-talkers. They think both the Doomers and Boosters are wasting time debating science fiction scenarios. They say that’s a distraction from talking about real-world problems like AI being used for surveillance, for spreading disinformation, and for invading privacy.
Why It Matters. We are all trying to figure out what AI means and what to do about it. Should we try to regulate AI strictly? Should we ban it? Or should we welcome it with open arms?
The experts can’t seem to agree on basic questions: how fast is AI progressing? What can it do now? What will it be able to do in 5 years or 10? What should we do about it? If the experts can’t agree on these basic questions, what hope do the rest of us have?
We’re introducing our 🔺 “three-sided framework” to help you start to think about these questions and these debates.
Of course, it’s not perfect, and reducing any complex debate to three sides (or even 20 sides) risks oversimplifying. Every expert brings their own perspective, and two experts on the same side will likely disagree on a lot. But we think the 🔺 three-sided framework is a good starting point.
Here’s how it works. One way to understand each of the three viewpoints is that each group sees a different “number 1” risk with AI:
⚖️ 🛠️ Realists are most worried about AI and power. As with any powerful tech, AI can be used by a few at the expense of the many. In fact, the realists argue, AI is already being used by a few to grab power, and we should be focused on these real-world problems, rather than hypotheticals.
💥 🌏 Doomers are most worried about the end of the world: They think AI will become more powerful and start acting in its own interests, not ours. They admit this might sound like sci-fi, but, they argue, AI is unlike any tech we’ve ever seen, with risks we’ve never seen before.
🚀💡Boosters are most worried about missing out: They see AI as a miracle tech that could solve many of our biggest problems and save and enrich literally billions of lives. Like any tech, it has risks, but they believe the biggest risks are measured in lives lost through dithering and years of delay in solving critical problems like climate change and global malnutrition.
Putting it all together. If you are watching an argument and the experts seem to be talking past each other, try analyzing their views with the 🔺three-sided framework.
For example, a little while back, we covered the Munk Debate on AI Risk. Yoshua Bengio and Max Tegmark argued that “AI research and development poses an existential threat.” In other words, the “AI Doomer” position.
Arguing for the other side was a team made up of Yann LeCun and Melanie Mitchell. When I first listened to the debate, I thought both sides made strong arguments, but I found it hard to make their arguments fit together.
But after applying the 🔺three-sided framework, it was clear that LeCun and Mitchell were both strongly opposed to the “doomer” position but for very different reasons.
LeCun, as an 🚀 AI Booster, argued for the vast potential and positive impact of AI. He acknowledged challenges but saw these as technical issues to be resolved rather than insurmountable obstacles or existential threats. He argued for AI as a powerful tool that can improve society and solve complex problems.
On the other hand, Mitchell, representing the ⚖️ AI Realist perspective, questioned whether AI could, in the foreseeable future, reach a stage where it could pose an existential threat. While she agreed that AI presents risks, she argued that the most important risks are immediate, tangible concerns like job losses or the spread of disinformation.
Analyzed under the 🔺three-sided framework, then, the Munk Debate was between:
An “All-Doomer” team of Bengio and Tegmark, each making Doomer arguments; versus
A “Mixed Realist/Booster” team of Mitchell, making Realist anti-Doomer arguments, and LeCun, making Booster anti-Doomer arguments.
Mitchell and LeCun found common ground in arguing that the Doomers were wrong about existential risk, but beyond that, their views diverged strongly.
Consider, for example, a future debate on the question, “Which is a bigger threat to the healthy development of AI, over-regulation or under-regulation?” In this debate, we would likely see Mitchell and LeCun taking opposite sides. And interestingly, “Doomers” Bengio and Tegmark would likely take Mitchell’s side against LeCun.
The 🔺three-sided framework, then, does more than just categorize expert stances on a specific issue—it uncovers deeper beliefs that influence these stances, providing a clearer picture of the complexities of the AI discourse.
Where I Stand. In discussing the 🔺three-sided framework with a friend, he asked, “OK, but where do you stand? You want to put everyone else in a box — which box are you in?” My honest answer is that I’m not sure, and it depends on what day you ask me. I tend towards the realist, pragmatic view. But then I read about amazing progress in AI, and I start to lean boosterish. Then I read some of the arguments by the doomers, and I wind up awake at night staring at the ceiling, worried that we don’t understand what’s coming. And then I go back to the pragmatic, practical view. And so it goes.
I’m trying to figure it out. I hope we can all try to figure it out together.
Go Deeper. On the LawSnap Wiki, we are building out our guide to AI Risk. Our goal is to provide an overview of the main thinkers from each viewpoint and the main arguments. We hope you’ll take a look. Feedback is always appreciated.