
What is Google’s DeepMind Project and Should We be Worried?

10/05/2017

Google DeepMind is an artificial intelligence division within Google, formed in 2014 when Google bought DeepMind Technologies, a spin-out company from University College London, for a reported £400 million.

The research lab aims to develop machines capable of learning without needing to be pre-programmed. It achieves this by building a set of 'general-purpose learning algorithms' that allow a machine to 'think' with intelligence comparable to that of the human mind. The concern is that such machines have the potential to develop superintelligence - intelligence that would surpass even the brightest human mind.

In 2016 Google DeepMind made history when its AI system AlphaGo defeated world champion Lee Sedol at the ancient Chinese game of 'Go' - a strategy board game that demands a particular level of intuition. 'Go' is considered significantly more complex than chess, yet AlphaGo won the series 4-1.

Anyone who has ever played a game of chess against their computer will know that this isn't the first time a machine has beaten a person at a strategy game. So how is this different? In a game like chess, a computer can search far enough through the possible moves on the board to defeat its opponent. 'Go', however, has vastly more possible positions than any computer could exhaustively search, making that brute-force approach practically impossible. The tournament demonstrated that AlphaGo is genuinely capable of learning - a huge triumph for a project that aims to formalise human intelligence.
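The scale gap between the two games can be made concrete with a back-of-the-envelope estimate of game-tree size, using commonly cited average branching factors and game lengths (illustrative round figures, not exact counts):

```python
import math

# Rough, commonly cited averages: branching factor b (legal moves per turn)
# and typical game length d (moves per game). Illustrative figures only.
games = {
    "chess": {"b": 35, "d": 80},
    "go":    {"b": 250, "d": 150},
}

for name, g in games.items():
    # The game tree grows as roughly b^d; report it as a power of ten.
    digits = g["d"] * math.log10(g["b"])
    print(f"{name}: ~10^{digits:.0f} possible game sequences")
```

Chess comes out around 10^120 sequences; Go around 10^360 - far beyond what any machine could enumerate, which is why AlphaGo had to learn rather than brute-force its way to victory.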

So how does a machine learn?  

The type of AI DeepMind is pioneering combines two forms of brain-inspired machine learning: a reinforcement learning algorithm and an artificial neural network. This combination allows the machine to learn and improve itself incrementally from experience, avoiding errors by learning from its past mistakes.
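As a minimal sketch of the reinforcement-learning half of that combination, here is tabular Q-learning on a toy problem. DeepMind's systems pair this kind of trial-and-error update with deep neural networks; below, a plain lookup table stands in for the network (and the corridor environment is an invented example) so the core idea - improving incrementally from experience - stays visible:

```python
import random

# Toy environment: a 5-state corridor. The agent starts at state 0 and
# earns a reward of 1.0 for reaching state 4.
N_STATES = 5
ACTIONS = (-1, +1)                       # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1    # learning rate, discount, exploration

# Q-table: the agent's current estimate of how good each action is
# in each state. All zeros before any experience.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: reward 1.0 for reaching the goal state, else 0."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def choose(state):
    """Epsilon-greedy: explore occasionally, otherwise pick a best-known move."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        a = choose(s)
        s2, r = step(s, a)
        # Learn from experience: nudge the estimate toward what was observed.
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# The learned policy: the best action from each non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

After a couple of hundred episodes the agent has learned, purely from trial and error, to head right from every state. AlphaGo applied the same principle at vastly greater scale, with neural networks estimating the values instead of a table.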

What can this type of AI be used for?

DeepMind Health is a division of the project that focuses on solving problems in health care. It is currently working directly with clinicians in the NHS to improve patient care. So far, the team has developed an app called Streams, which uses an algorithm to analyse patient data and alerts clinicians when a patient's results are flagged as abnormal. Such warning signs could otherwise go unnoticed or take much longer to detect; with Streams the clinician is notified immediately, which can be life-saving.
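Streams' actual detection logic is not reproduced here (it applies the NHS's own clinical rules, such as its acute-kidney-injury algorithm). Purely as an illustration of the general idea - checking incoming observations against reference ranges and flagging anything out of bounds - a rule-based sketch might look like this, with hypothetical field names and illustrative ranges:

```python
# Illustrative adult reference ranges - assumptions for this sketch,
# not clinical thresholds and not DeepMind's algorithm.
NORMAL_RANGES = {
    "heart_rate_bpm": (50, 100),
    "systolic_bp_mmhg": (90, 140),
    "temperature_c": (36.0, 38.0),
}

def flag_abnormal(observation):
    """Return the list of vital signs outside their normal range."""
    alerts = []
    for sign, value in observation.items():
        lo, hi = NORMAL_RANGES[sign]
        if not lo <= value <= hi:
            alerts.append(sign)
    return alerts

patient = {"heart_rate_bpm": 128, "systolic_bp_mmhg": 118, "temperature_c": 38.6}
print(flag_abnormal(patient))   # ['heart_rate_bpm', 'temperature_c']
```

In a real deployment the flagged signs would trigger an immediate notification to the responsible clinician rather than a printout.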

Although the app does not currently use AI, Mustafa Suleyman, co-founder of DeepMind Health, has said: “We are of course excited by machine learning in the future. One day there is the potential to train machine learning models to personalise alerts and notifications to individual patients...”

As well as analysing data in health care to help save lives, this type of AI has countless other potential applications. DeepMind is already in discussions with the National Grid about making energy use more efficient, and it also aims to close the gap between human and computer speech recognition.

Do we need this level of AI?

With machines getting more intelligent every year, there are mixed feelings concerning the development of AI. The fear is that once these machines reach human levels of intelligence they will improve themselves and supersede humans.

Stephen Hawking believes that whilst early forms of AI have been very useful, AI could end mankind: “It would take off on its own, and redesign itself at an ever-increasing rate.” Elon Musk has likewise warned that AI has the potential to be “more dangerous than nukes”.

In contrast, others are more positive. DeepMind co-founder Demis Hassabis believes that we will need this type of AI to help humanity progress: “I think we're going to need artificial assistance to make the breakthroughs that society wants... Climate, economics, disease -- they're just tremendously complicated interacting systems. It's just hard for humans to analyse all that data and make sense of it. And we might have to confront the possibility that there's a limit to what human experts might understand. AI-assisted science will help the discovery process.”

Should we be worried?

Tech giants have recently formed a group called the Partnership on AI, which aims to ensure that AI is developed safely and ethically. They have outlined a set of 23 principles to help achieve this, which Hassabis has signed and which Stephen Hawking and Elon Musk have also endorsed.

Hassabis believes it is important for tech giants and individuals to coordinate the development of AI. If we reach the cusp of superintelligence, he argues, AI leaders must be able to slow development down when required and carry out further research to prevent the advancement of a potentially harmful AI.
