As we continue to forge our path towards Artificial Intelligence becoming mainstream, it is important to reflect on the breakthrough discoveries along the way. Yesterday marked one such event for the DaVinci IITM Data Science team, when AlphaGo, an AI program developed by DeepMind, defeated the Chinese champion Ke Jie in a game of Go.
The event demonstrated, live, a machine's capacity to collaborate with other players on tactics, and showcased its mastery of abstract thinking. Furthermore, AlphaGo displayed an instinctive feel for the game, having taught itself through self-play reinforcement learning.
AlphaGo had also defeated Grandmaster Lee Sedol in March 2016. Yesterday's match was significant, however, because Ke Jie opened with some of the tactics AlphaGo itself had used in earlier matches against other players. To Ke Jie's pleasant surprise, the program went on to produce brilliant new tactics.
“There was a cut that quite shocked me, because it was a move that would never happen in a human-to-human Go match. Last year, it was still quite human-like when it played. But this year, it became like a god of Go.” – Ke Jie
The fundamental idea behind AI is to imitate the human brain's decision-making process. This is the problem on which the vast majority of researchers have spent the past 50+ years, trying to understand the mechanism and encode it algorithmically in a machine. Now, thanks to cutting-edge digital innovation centred on big data, advances in computational mathematics, and increasingly powerful computers, researchers are constructing ever deeper layers to further complete the simulated "neural network".
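To make the "self-teaching" idea concrete, here is a minimal sketch of self-play reinforcement learning. This is emphatically not AlphaGo's actual algorithm (which combines deep neural networks with Monte Carlo tree search); it is a toy tabular Q-learning loop on a simple stick-taking game, and every name, parameter, and game rule below is our own illustrative choice:

```python
import random

# Toy game: a pile of sticks; players alternate removing 1 or 2 sticks,
# and whoever takes the LAST stick wins. One shared value table plays
# both sides -- pure self-play, as in AlphaGo's training regime.
ACTIONS = (1, 2)
START_PILE = 7

def train(episodes=20000, alpha=0.5, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {}  # q[(pile, action)] -> estimated value for the player to move
    for _ in range(episodes):
        pile = START_PILE
        history = []                      # (pile, action) for every ply
        while pile > 0:
            legal = [a for a in ACTIONS if a <= pile]
            if rng.random() < epsilon:    # explore: try a random move
                a = rng.choice(legal)
            else:                         # exploit what it has learned
                a = max(legal, key=lambda m: q.get((pile, m), 0.0))
            history.append((pile, a))
            pile -= a
        # The last mover won: credit +1, then alternate the sign so the
        # loser's moves are pushed toward -1 (zero-sum, Monte Carlo style).
        reward = 1.0
        for state_action in reversed(history):
            old = q.get(state_action, 0.0)
            q[state_action] = old + alpha * (reward - old)
            reward = -reward
    return q

def best_move(q, pile):
    legal = [a for a in ACTIONS if a <= pile]
    return max(legal, key=lambda m: q.get((pile, m), 0.0))
```

After training, `best_move` plays the optimal strategy for this toy game (leave your opponent a multiple of three sticks) without ever having been told it, which is the essence of learning by self-play.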
We at DaVinci IITM are thrilled that a machine algorithm won against the Grandmaster, proving how effectively machine learning can teach itself. This is exactly the kind of 'machine training' our data scientists are incorporating into Mona Lisa!