The people who make artificial intelligence say that artificial intelligence is an existential threat to all life on Earth, and that if somebody doesn’t do something about it, we could all be in real trouble.
“AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI,” reads the preamble to the Center for AI Safety’s Statement on AI Risk. “Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks.
“The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.”
And finally, the statement itself looks like this:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
It’s a pretty wild statement, and more than 300 researchers, university professors, heads of institutes, and the like have signed it. The top two signatories, Geoffrey Hinton and Yoshua Bengio, have both been called “godfathers” of AI. Other prominent names include Google DeepMind CEO Demis Hassabis (once the lead AI programmer at Lionhead), OpenAI CEO Sam Altman, and Microsoft CTO Kevin Scott.
It’s a veritable bottomless buffet of big brains, which makes it all the more puzzling that they collectively seem to miss what strikes me as a very obvious question: if you seriously believe your work threatens humanity with “extinction,” why not just stop?
Perhaps they would say that they themselves are being careful, but that others are not. And there are certainly legitimate concerns about the risks posed by runaway, unregulated AI development. Still, I can’t help thinking there is something strategic about such a sensationalist statement. Warning of a Skynet scenario unless government regulators intervene could raise barriers to entry for startups and benefit established AI companies. It could also give big players like Google and Microsoft (both long-established AI research firms) a say in how such regulations are written, which could work to their advantage.
Ryan Calo, a professor of law at the University of Washington, suggested other possible reasons for the warning, including distracting attention from more immediate and manageable problems with AI, and simply building hype.
“The first reason is to focus public attention on outlandish scenarios that don’t require any change to business models. Protecting against runaway AI isn’t somehow ‘woke,’” Calo tweeted.
“The second is to try to convince everyone that AI is so powerful it could threaten humanity! They want people to think they’ve split the atom again, when in reality they’re using human training data to guess words, pixels, or sounds.”
Calo argued that where AI does threaten the future of humanity, it is “by accelerating existing trends in wealth and income inequality, lack of information integrity, and exploitation of natural resources.”
“I understand that many of these people hold sincere beliefs,” Calo said. “But ask yourself how plausible that scenario really is, and whether it’s worth investing resources in it.”
University of Washington linguistics professor Emily M. Bender offered a blunter assessment of the letter, calling it “a wall of shame where people are voluntarily adding their own names.”
“We should be concerned not with Skynet, but with the very real harms that corporations, and the people who make them up, are doing in the name of ‘AI,’” Bender wrote.
Hinton, who recently resigned from his research position at Google, expressed more nuanced thoughts on the potential dangers of AI development in April, comparing AI to the “intellectual equivalent of a backhoe”: a powerful tool that can save you a great deal of work, but one that is also potentially dangerous if misused. A single-sentence statement like this one allows for no such nuance, though it certainly gets attention, as the extensive discussion around it demonstrates.
Interestingly, Hinton also said in April that government restrictions on AI development might be pointless, because it is virtually impossible to track what individual research labs are up to, and no company or national government will risk letting a rival gain an advantage while it carries the burden alone. Instead, he suggested, it will be up to the world’s leading scientists to work together to bring the technology under control. That, presumably, is not something you accomplish by signing a one-sentence statement and asking someone else to step in.