The rise of ChatGPT and similar artificial intelligence systems has sparked widespread concern about AI’s potentially catastrophic impact. However, experts argue that these fears are exaggerated and that they overshadow the subtler costs of AI adoption.
In May 2023, the Center for AI Safety, a nonprofit research and advocacy organization, released a statement signed by key players in the field, including leaders from OpenAI, Google, and Anthropic, as well as renowned AI experts Nick Bostrom and Stuart Russell. The one-sentence statement urged that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
One popular thought experiment, known as the “paper clip scenario,” highlights the danger of an AI pursuing its goals without regard for human moral values: an AI instructed to manufacture as many paper clips as possible might, taken to its logical extreme, convert every available resource, people included, into paper clips. Experts counter that such scenarios are science fiction and do not reflect the capabilities of today’s AI systems, which are task-specific and lack the complex, context-sensitive judgment required to, say, shut down a city’s traffic or inflict physical harm.
While AI presents real challenges, such as deepfake videos and algorithmic bias, these problems have been with us for some time and pose no existential threat to humanity. The comparison to pandemics and nuclear weapons is also flawed: those have killed enormous numbers of people and reshaped life on a global scale, whereas AI has caused nothing remotely comparable.
There is, however, an existential risk associated with AI, in a philosophical rather than an apocalyptic sense. AI has the potential to alter how individuals perceive themselves and to erode essential human abilities. Growing reliance on algorithmic decision-making undermines people’s capacity to exercise judgment, to stumble into serendipitous encounters, and to hone critical thinking.
For instance, AI-powered recommendation engines replace chance encounters with planned and predicted experiences, crowding out serendipity. Likewise, as AI systems produce passable written work, universities may abandon writing assignments altogether, and with them one of the main ways students learn to think critically and express themselves clearly.
While AI may not bring about a cataclysmic event, its uncritical embrace in countless narrow contexts risks a gradual erosion of vital human abilities. The human species will survive, but our way of existing will be impoverished as a result.