Why Seeing Artificial Intelligence as Normal Matters

Despite widespread attention, Artificial Intelligence is best considered a normal technology, not a mystical or existential threat. Rethinking its societal role could reshape conversations and policy.

Although Artificial Intelligence is integrated into everyday life and numerous practical applications, it is often treated as an extraordinary or potentially uncontrollable force. Discussions about Artificial Intelligence frequently invoke notions of "superintelligence" or compare its risks to those posed by nuclear weapons. Industry leaders and companies, such as Anthropic, have invested in researching the ethics and even the potential rights of Artificial Intelligence models, adding to the sense of novelty and concern surrounding the technology. This has fostered a climate in which enthusiasts imagine utopian transformations and detractors fear dystopian futures, with some speculating that intentional Artificial Intelligence-free communities could emerge in protest.

In a recent essay, Princeton researchers Arvind Narayanan and Sayash Kapoor advocate treating Artificial Intelligence as a general-purpose technology, analogous to electricity or the internet, rather than an existential threat. They criticize the focus on terms like "superintelligence" as speculative and emphasize the distinction between rapid technological development and the much slower societal adoption of Artificial Intelligence systems. They argue that the effects of Artificial Intelligence will unfold gradually through incremental adoption, introducing new roles, such as human overseers supervising Artificial Intelligence outputs, rather than eliminating the need for human labor altogether. The real risks, they say, are less about unprecedented transformations and more about existing societal challenges, such as inequality, labor market disruption, media integrity, and democratic stability, being intensified by Artificial Intelligence deployments.

The researchers also question the dominant political narrative in the United States, which frames the progress of Artificial Intelligence as an "arms race" with China and prioritizes national security above all else. They dismiss this rhetoric as overblown, noting the broad and international nature of Artificial Intelligence research and arguing that secrecy at scale is unrealistic. Instead, they offer more grounded policy recommendations: bolstering democratic institutions, fostering technical expertise in government, advancing Artificial Intelligence literacy, and incentivizing defenders, rather than focusing on fantastical threats of runaway Artificial Intelligence or zero-sum geopolitical contests. While this approach may sound unremarkable compared with dramatic prognostications, it underscores the value of embracing Artificial Intelligence as a standard technological advancement with profound, yet manageable, implications for society.

