LONDON — Chatbots like ChatGPT wowed the world with their ability to write speeches, plan vacations or hold a conversation as well as, or arguably even better than, humans do, thanks to cutting-edge artificial intelligence systems. Now, frontier AI has become the latest buzzword as concerns grow that the emerging technology has capabilities that could endanger humanity.
Everyone from the British government to top researchers and even major AI companies themselves is raising the alarm about frontier AI’s as-yet-unknown dangers and calling for safeguards to protect people from its existential threats.
The debate comes to a head Wednesday, when British Prime Minister Rishi Sunak hosts a two-day summit focused on frontier AI. It’s reportedly expected to draw a group of about 100 officials from 28 countries, including U.S. Vice President Kamala Harris, European Commission President Ursula von der Leyen and executives from key U.S. artificial intelligence companies including OpenAI, Google’s DeepMind and Anthropic.
The venue is Bletchley Park, a former top-secret base for World War II codebreakers led by Alan Turing. The historic estate is seen as the birthplace of modern computing because it is where Turing and others famously cracked Nazi Germany’s codes using the world’s first digital programmable computer.
In a speech last week, Sunak said only governments — not AI companies — can keep people safe from the technology’s risks. However, he also noted that the U.K.’s approach “is not to rush to regulate,” even as he outlined a host of scary-sounding threats, such as the use of AI to more easily make chemical or biological weapons.
“We need to take this seriously, and we need to start focusing on trying to get ahead of the problem,” said Jeff Clune, an associate computer science professor at the University of British Columbia focusing on AI and machine learning.
Clune was among a group of influential researchers who authored a paper last week calling for governments to do more to manage risks from AI. It’s the latest in a series of dire warnings about the rapidly evolving technology from tech moguls like Elon Musk and OpenAI CEO Sam Altman, and it underscores the disparate ways the industry, political leaders and researchers see the path forward when it comes to reining in the risks and regulating the technology.
It’s far from certain that AI will wipe out mankind, Clune said, “but it has sufficient risk and chance of occurring. And we need to mobilize society’s attention to try to solve it now rather than wait for the worst-case scenario to happen.”
One of Sunak’s big goals is to find agreement on a communique about the nature of AI risks. He’s also unveiling plans for an AI Safety Institute that will evaluate and test new types of the technology, and he is proposing the creation of a global expert panel, inspired by the U.N. climate change panel, to understand AI and draw up a “State of AI Science” report.