Hinton issues another AI warning: World needs to find a way to control artificial intelligence

Geoffrey Hinton issued another warning to the world, saying humans need to figure out a way to control artificial intelligence as it develops in order to contain its potential dangers.

Geoffrey Hinton, who recently resigned from his position as Google's vice president of engineering to sound the alarm about the dangers of artificial intelligence, cautioned in an interview published Friday that the world needs to find a way to control the tech as it develops.

The "godfather of AI" told EL PAÍS via videoconference that he believed a letter calling for a sixth-month-long moratorium on training AI systems more powerful than OpenAI's GPT-4 is "completely naive" and that the best he can recommend is that many very intelligence minds work to figure out "how to contain the dangers of these things." 

"AI is a fantastic technology – it’s causing great advances in medicine, in the development of new materials, in forecasting earthquakes or floods… [but we] need a lot of work to understand how to contain AI," Hinton urged. "There’s no use waiting for the AI to outsmart us; we must control it as it develops. We also have to understand how to contain it, how to avoid its negative consequences."

For instance, Hinton believes all governments should insist that fake images be flagged.


The scientist said that the best thing to do now is to "put as much effort into developing this technology as we do into making sure it’s safe" – which he says is not happening right now. 

"How [can that be] accomplished in a capitalist system? I don’t know," Hinton noted.

When asked whether his colleagues share his concerns, Hinton said that many of the smartest people he knows are "seriously concerned."

"We’ve entered completely unknown territory. We’re capable of building machines that are stronger than ourselves, but we’re still in control. But what if we develop machines that are smarter than us?" he asked. "We have no experience dealing with these things."

Hinton said there are many different dangers posed by AI, citing job losses and the creation of fake news. He noted that he now believes AI may be doing things more efficiently than the human brain, with models like ChatGPT able to take in thousands of times more data than any person could.

"That’s what scares me," he said.


In a rough estimate that he said he wasn't very confident about, Hinton predicted it will take AI between five and 20 years to surpass human intelligence.

EL PAÍS asked if AI would eventually have its own purpose or objectives.

"That’s a key question, perhaps the biggest danger surrounding this technology," Hinton replied. He said synthetic intelligence hasn't evolved and doesn't necessarily come with innate goals. 

"So, the big question is, can we make sure that AI has goals that benefit us? This is the so-called alignment problem. And we have several reasons to be very concerned. The first is that there will always be those who want to create robot soldiers. Don’t you think Putin would develop them if he could?" he questioned. "You can do that more efficiently if you give the machine the ability to generate its own set of targets. In that case, if the machine is intelligent, it will soon realize that it achieves its goals better if it becomes more powerful."

While Hinton said Google has behaved responsibly, he pointed out that companies operate in a "competitive system."

As for national regulation going forward, Hinton said that while he tends to be quite optimistic, the U.S. political system does not make him feel very confident.


"In the United States, the political system is incapable of making a decision as simple as not giving assault rifles to teenagers. That doesn’t [make me very confident] about how they’re going to handle a much more complicated problem such as this one," he explained. 

"There’s a chance that we have no way to avoid a bad ending … but it’s also clear that we have the opportunity to prepare for this challenge. We need a lot of creative and intelligent people. If there’s any way to keep AI in check, we need to figure it out before it gets too smart," Hinton asserted.
