Researchers at Google DeepMind have provided a Google A.I. news update. They reported that the company's artificial intelligence has, in effect, learned how to "dream" by playing through a video game.
The new research breakthrough has brought DeepMind nearer to its objective of producing artificial intelligence that can teach itself at a much more advanced level. The Google DeepMind researchers applied the Unsupervised Reinforcement and Auxiliary Learning system (UNREAL) to their A.I., which resulted in the dream-like replay of past experience.
Also reported in the Google A.I. news update is that the new system uses "Labyrinth," a 3D maze game environment built by DeepMind, as the testbed for the artificial intelligence learning method. In a previous application, announced last January, the researchers' testbed was the board game Go.
Google's DeepMind A.I. had already taken a substantial step in machine learning when it mastered 49 Atari games. Google beat out Facebook to acquire DeepMind Technologies in 2014, and there has been plenty of conjecture about what the acquisition would bring to the table in terms of robotics.
The answer lies in today's Internet, where Google's computers can find patterns in high volumes of data. In the Google A.I. news update, the methods used by DeepMind are shown to be refinements of techniques first developed decades ago. This time, those techniques are combined into a single, more modern system.
The first time artificial intelligence mastered a complex game was back in 1997, when IBM's Deep Blue defeated the chess world champion Garry Kasparov. Deep Blue relied on pre-programming: hand-crafted rules served as the instruction manual that told it how to play chess.
Scientists around the world are now developing algorithms that expose artificial intelligence to huge amounts of data for processing, so that sounds and images can be analyzed and useful patterns extracted automatically.
The Google DeepMind researchers used UNREAL to enhance learning by asking the artificial intelligence to complete auxiliary tasks alongside its main gameplay goals. The Google A.I. was trained on visual inputs by asking it to learn how its actions change different pixels on the screen.
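To make the idea concrete, here is a minimal illustrative sketch, not DeepMind's actual code: an UNREAL-style agent adds a weighted auxiliary "pixel control" signal (rewarding actions that produce large changes in the screen) to its main training loss. The function names, numbers, and the auxiliary-loss formula below are all hypothetical simplifications.

```python
def pixel_change(frame_before, frame_after):
    """Average absolute per-pixel change between consecutive frames.

    In UNREAL-style pixel control, actions that cause large visual
    changes provide an extra learning signal for the agent.
    """
    n = len(frame_before)
    return sum(abs(a - b) for a, b in zip(frame_before, frame_after)) / n


def combined_loss(policy_loss, aux_losses, aux_weight=0.1):
    """Total training signal: main gameplay loss plus weighted auxiliary losses."""
    return policy_loss + aux_weight * sum(aux_losses)


# Toy example: two consecutive 4-pixel frames (values in [0, 1]).
before = [0.0, 0.2, 0.5, 1.0]
after = [0.0, 0.6, 0.5, 0.0]

change = pixel_change(before, after)   # how much the action altered the screen
aux_loss = 1.0 - change                # hypothetical auxiliary loss term
total = combined_loss(policy_loss=0.8, aux_losses=[aux_loss])
print(round(change, 3), round(total, 3))
```

The key design idea is that the auxiliary tasks share the agent's learning machinery, so practicing them speeds up learning on the main goal, even though the auxiliary rewards themselves are not what the agent is ultimately scored on.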
The artificial intelligence reached roughly 87 percent of expert human performance in "Labyrinth." The auxiliary tasks helped UNREAL learn about ten times faster than previous DeepMind agents. Google has made major inroads in getting artificial intelligence to learn new things in a more sophisticated manner.
What do you think of the new developments in Google's artificial intelligence? Share your comments below.