
The swift growth of artificial intelligence technology could put the future of humanity at risk, according to most Americans surveyed in a recent Reuters/Ipsos poll.
More than two-thirds of Americans are concerned about the negative effects of AI and 61% believe it could threaten civilization.
Public anxiety has spread as OpenAI’s ChatGPT has become the fastest-growing app of all time. ChatGPT has kicked off an AI arms race, with tech heavyweights like Microsoft and Google vying to outdo each other.
The integration of AI into everyday life has catapulted AI to the forefront of public discourse, spurring Congressional hearings looking into potential risks. At a Senate hearing this month, OpenAI’s CEO Sam Altman said that “if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening.”
AI godfather speaks up
Widely known as one of the “godfathers of AI”, computer scientist Geoffrey Hinton recently announced he had quit Google after a decade at the firm, saying he wanted to speak out on the risks of the technology without it affecting his former employer.
Hinton’s work is considered essential to the development of contemporary AI systems. In 1986, he co-authored a paper widely seen as a milestone in the development of the “neural networks” undergirding AI technology. In 2018, he was awarded the Turing Award in recognition of his breakthroughs.
But Hinton is now among a growing number of tech leaders publicly warning about the possible threat posed by AI if machines achieve greater intelligence than humans and take control of the planet.
I still cannot believe that Geoffrey Hinton said that digital intelligence is already better than the human brain.
— Nima Ebadi (@nmebadi) May 26, 2023
“I suddenly realized that maybe the computer models we have now are actually better than the brain. And if that’s the case, then maybe quite soon they’ll be better than us. So that the idea of superintelligence, instead of being something in the distant future, might come much sooner than I expected,” he stressed.
Hinton and OpenAI’s Altman have both voiced concerns that AI systems could learn from human examples how to manipulate people with misinformation, or could eventually pursue goals that do not align with the well-being of humanity.
“I think they will quickly realize that if they got more control they could realize their goals much more easily,” Hinton said. “Once they want to get control things start looking bad for people,” he added.
Hinton compares the danger to the threat posed by the advent of nuclear weapons in the mid-20th century.