Stewart Brand coined the term “personal computer” in 1974, several years after writing an article for Rolling Stone that sketched a picture of the future of the digital world. Computers, he predicted, would be the next important trend after psychedelic drugs: “That’s good news, maybe the best since psychedelics. It’s way off the track of the ‘Computers — Threat or menace?’ school of liberal criticism but surprisingly in line with the romantic fantasies of the forefathers of the science.”
It seems the age-old human trait of fearing new things is rearing its head once again. This time, the object of this irrational dread is AI. Recent calls have sought to stymie or even halt the development of AI, citing concerns of cataclysmic proportions. The intensity of these worries is such that some compare AI's potential danger to that of nuclear weapons. Upon reasonable inspection, however, these fears, though imaginative, are not substantiated by current realities.
To equate the potential danger of AI with nuclear weapons is, frankly, an extreme exaggeration. Nuclear weapons have already caused catastrophic loss of life and continue to pose a very real threat to humanity; the two are, I'm sorry, hardly comparable. This is not to dismiss the potential risks associated with AI (there will certainly be some, as there are with all emerging technologies), but it is important to insist on perspective and proportionality when assessing those risks. Honestly, when the debate has become so hyperbolic that AI being as dangerous as nuclear weapons is taken as a given and no longer even discussed, that is probably a good sign that everyone needs to calm down a little, even if nobody agrees with my apparently extreme position that the risks are not existential.
There is one noteworthy exception here: self-driving cars, an application of AI, have been implicated in accidents, some with fatal outcomes. Without getting into the details, the domain of self-driving vehicles already falls under the regulatory purview of the National Highway Traffic Safety Administration (NHTSA) in the United States, which is quite capable of establishing safety standards and regulations for self-driving cars without issuing moratoriums to stop development. And I hope it does, because there will always be companies that "jump the gun" on car safety, not because I think someone's motorcycle is going to become sentient and nuke me like in Snow Crash. No new regulatory agencies, emergency moratoriums, or congressional licensing straitjackets are required for this first practical, real-world example of how AI could be dangerous.
But the more responsible companies in autonomous driving, notably Waymo, have already demonstrated that self-driving cars can be safer than human-driven ones. With further development and improvement, self-driving AI holds immense potential to save countless lives that would otherwise be lost in traffic accidents. According to the WHO, approximately 1.35 million people die each year in road traffic crashes, a statistic I almost joined last year when a human ran a red light and crashed into my car. The development and adoption of self-driving cars, rather than posing a risk, is far more likely to mitigate one of the most common causes of preventable death worldwide. Unfortunately, this significant benefit often gets overlooked in public discourse because driving is such a routine activity that its risks have been normalized, something I wish I could have managed back when I worked in the Bay Area and endured an absolutely insane commute on the 101. Even in this realm, by all reasonable accounts, AI has far more promise to save lives than to take them, while also getting us there faster.
I find it important to reflect on the parallel between the advent of AI and the introduction of the personal computer. When personal computers first appeared, they were seen as powerful tools for the self-empowerment of individuals, democratizing access to information and computation. Ironically, in their infancy, supercomputers were largely used in the service of killing people: running computations for bombing trajectories and for the development of ever more potent nuclear weapons. Yet, aside from a small group of anti-technology Luddites who went off to build the interesting but ultimately doomed back-to-the-land agricultural communes, neither the experts nor the public feared personal computers as potential weapons of mass destruction handed to individuals; they embraced them as tools for positive personal empowerment, which they have, for the most part, been.
Having seen enough emerging technology and read enough of its history, my opinion remains similar to what Stewart Brand's was on the PC (though he may not share my opinion on AI; I haven't asked): I see AI, as the personal computer was, as a tool of personal creativity and intellectual empowerment. It already offers countless applications that can profoundly benefit society, including but not remotely limited to: improving climate modeling (curious, by the way, that "climate change" was not included in the statement on AI risk); revolutionizing education (I wish I had had an on-demand polymath when I was a kid); screening cancers better; developing safer cars; reducing PTSD in content moderation; generating really beautiful art; and, most fascinating to me right now, therapy work — all with the potential to both improve and save lives. This technology is becoming available to OSS developers at a rate that suggests there will be no single central source controlling access to it (unless the government-licensing proponents actually get what they want). AI is already helping to automate boring, routine tasks, freeing humans to focus on more complex and creative pursuits, and facilitating new levels of creativity, efficiency, and accuracy across a wide range of fields, just as the personal computer has done.
While it is obviously important to approach the development of AI with a robust understanding of its ethical implications and risks, these should not be inflated to a level that stifles innovation and progress. AI, as it stands today, is far from being a tool of mass destruction, or even yet of creative destruction (the latter is far more likely, but it is also, at the end of the day, a process as old as the Industrial Revolution). Instead, current progress suggests it is a powerful instrument of individual and societal empowerment. Rather than succumbing to unfounded fears, we should embrace AI, harness its immense potential for good, and navigate its development and application responsibly, with curiosity and love rather than panic, cranked-up fear, and government diktat forcing rent-seeking monopolies. Until there is compelling evidence, or at least a plausible example, demonstrating that AI could pose risks comparable to nuclear weapons, it would be wise to focus our energies on leveraging AI as a transformative tool for societal good, on providing good safety nets for those who may be affected by a labor transformation, and on doing a better job of addressing the problems we actually and urgently need to solve, which the exotic statements about AI leave out — like, you know, climate change.
PS: Can we please call all of this ML instead of AI, since that is what it actually is? I think giving it its correct name would also help a lot to tone things down a bit.