Although Artificial Intelligence (AI) is already integrated into everyday life and numerous practical applications, it is often treated as an extraordinary or potentially uncontrollable force. Discussions about AI frequently invoke notions of "superintelligence" or compare its risks to those posed by nuclear weapons. Industry leaders and companies such as Anthropic have invested in researching the ethics, and even the potential rights, of AI models, adding to the sense of novelty and concern surrounding the technology. This has fostered a climate in which enthusiasts imagine utopian transformations and detractors fear dystopian futures, with some speculating that deliberately AI-free communities will emerge in protest.
In a recent essay, Princeton researchers Arvind Narayanan and Sayash Kapoor advocate treating AI as a general-purpose technology, analogous to electricity or the internet, rather than as an existential threat. They criticize the focus on terms like "superintelligence" as speculative and emphasize the distinction between rapid technological development and the much slower societal adoption of AI systems. The effects of AI, they argue, will unfold gradually through incremental adoption, creating new roles, such as human overseers who supervise AI outputs, rather than eliminating the need for human labor altogether. The real risks, in their view, are less about unprecedented transformations and more about existing societal challenges, such as inequality, labor market disruption, media integrity, and democratic stability, being intensified by AI deployments.
The researchers also question the dominant political narrative in the United States, which frames AI progress as an "arms race" with China and prioritizes national security above all else. They dismiss this rhetoric as overblown, noting the broad, international nature of AI research and arguing that secrecy at scale is unrealistic. Instead, they offer more grounded policy recommendations: bolstering democratic institutions, building technical expertise in government, advancing AI literacy, and incentivizing defenders, rather than fixating on fantastical threats of runaway AI or zero-sum geopolitical contests. While this approach may sound unremarkable next to dramatic prognostications, it underscores the value of treating AI as a standard technological advancement with profound yet manageable implications for society.