More than 1,000 scientists and tech executives recently issued a one-sentence warning about artificial intelligence that, had it come from a different source, would seem alarmist: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Extinction.
Computer scientist Geoffrey Hinton, widely regarded as the godfather of artificial intelligence, and Sam Altman, CEO of the groundbreaking AI firm OpenAI, were among those who signed the warning.
Some critics have characterized the statement as a clever bit of reverse marketing that plays up the rapidly accelerating capability of AI systems. But the warning, which follows an earlier call by scientists for AI regulation, comes in the context of previous warnings from scientists in other fields that went ignored.
Global policymakers’ failures to heed scientists’ warnings about nuclear weapons and climate change, for example, have resulted in some of the major global threats facing humanity.
The scientists’ AI warning was succinct and unspecific because many experts in the field disagree about what particular threats AI’s proliferation might pose.
There is broad consensus among those scientists, however, that without regulatory intervention early in its development, the technology clearly has the capacity to outrun human control.
The European Union is further along than the United States in devising a regulatory regime for AI. Regulation should, however, be a global endeavor, perhaps in the same way that the United Nations monitors nuclear proliferation through the International Atomic Energy Agency.
— Tribune News Service