‘Existential catastrophe’ from AI is likely unavoidable, DeepMind researcher warns


Researchers from the University of Oxford and Google’s artificial intelligence division DeepMind have claimed that advanced forms of AI have a high probability of becoming “dangerous to life on Earth”.

In a recent article in the peer-reviewed journal AI Magazine, the researchers warned that there would be “disastrous consequences” if the development of certain AI agents continues.

Prominent philosophers such as Nick Bostrom of Oxford University have previously spoken of the danger posed by advanced forms of artificial intelligence, although one of the authors of the new paper claimed that such warnings did not go far enough.

“Bostrom, [computer scientist Stuart] Russell, and others have argued that advanced AI poses a threat to humanity,” Michael Cohen wrote in a Twitter thread accompanying the article.

“Under the circumstances we have identified, our conclusion is much stronger than in any previous publication – an existential disaster is not only possible, but likely.”

The paper proposes a scenario in which an AI agent employs a deceptive strategy to obtain a reward it is pre-programmed to seek.

To maximize its reward, the agent requires as much energy as it can obtain. The thought experiment ends with humanity competing against the AI for energy resources.

“Winning the competition to use the last bit of available energy while playing against something smarter than us would probably be very difficult,” Mr Cohen wrote. “Losing would be fatal.

“These possibilities, although theoretical, mean that we should move slowly – if at all – toward the goal of more powerful AI.”

DeepMind has already proposed a defence against such a scenario, calling it a “big red button”. In a 2016 paper titled ‘Safely Interruptible Agents’, the AI firm outlined a framework for preventing advanced machines from ignoring shut-down commands and becoming out-of-control rogue agents.

Professor Bostrom previously described DeepMind – whose AI achievements include defeating human champions at the board game Go and controlling nuclear fusion reactions – as the firm closest to creating human-level artificial intelligence.

The Swedish philosopher also said that it would be a “great tragedy” if AI did not continue to develop, as it has the potential to cure diseases and advance civilization at an otherwise impossible rate.

Credit: www.independent.co.uk
