Both the UK and US governments have begun to warn of the risks posed by recently emerged, powerful AI technologies, and are taking their first steps toward regulating the sector. Fresh off its review of Microsoft's proposed acquisition of Activision Blizzard, the UK Competition and Markets Authority (CMA) has begun reviewing the underlying systems behind various AI tools. The U.S. government also issued a statement saying AI companies "have a fundamental responsibility to ensure that their products are secure before they are deployed or released to the public."
This comes in the wake of Dr. Geoffrey Hinton, sometimes called the "Godfather of Deep Learning," retiring from Google. He has warned that the industry needs to stop scaling AI technology and ask "can we control it?" Google is one of many very large technology companies that have invested heavily in AI, alongside Microsoft and OpenAI. That investment could be part of the problem: such companies ultimately want a return on where their money goes.
Dr. Hinton's resignation comes amid widespread concern in the field. Last month, a joint open letter with over 30,000 signatories, including high-profile technologists like Elon Musk, warned of the impact AI could have on areas such as employment, as well as its potential for fraud and, of course, good old misinformation. Sir Patrick Vallance, a scientific adviser to the British government, has urged the government to "get ahead" of these issues, comparing the emergence of the technology to the Industrial Revolution.
"Although AI has rapidly penetrated the public consciousness over the past few months, it has been on our radar for some time," CMA Chief Executive Sarah Cardell told The Guardian. "It is important that the potential benefits of this innovative technology are readily accessible to UK businesses and consumers, while people are protected from issues such as false or misleading information."
The CMA's review will report in September and aims to establish "guiding principles" for the future of the sector. The UK is arguably one of the leaders in this space, home to DeepMind (owned by Google's parent company Alphabet) as well as other large AI companies, including Stability AI (the maker of Stable Diffusion).
Meanwhile, in the United States, Vice President Kamala Harris met at the White House with executives from Alphabet, Microsoft and OpenAI, after which she issued a statement saying the private sector has "an ethical, moral and legal responsibility to ensure the safety and security" of its products.
While this feels a little like closing the stable door after the horse has bolted, the Biden administration has also announced it will spend $140 million on seven new national AI research institutes, focused on creating technology that is "ethical, trustworthy, responsible" and serves the public interest. AI development at this point is almost entirely within the private sector.
At least, I think, they're finally paying attention. As Hinton himself put it, "it's hard to see how you can prevent the bad actors from using it for bad things."
"If you have good mechanical engineering skills, you can quickly build something like a backhoe that can dig up a road. But of course, a backhoe can also knock your head off," Hinton said. "But choosing not to develop the backhoe for that reason would be regarded as silly."