Plenty of publicity was generated when AlphaGo beat the world's best players of the strategy board game Go. The Artificial Intelligence algorithm created by Alphabet's DeepMind unit made light work of the top human professionals back in 2017, adding Go to the roster of games at which AI now outperforms humans. AlphaZero, another DeepMind algorithm developed from AlphaGo, is also the chess champion. It took just four hours to learn the game from scratch by itself before defeating Stockfish 8, the computer programme that held the chess engine title at the time, over 100 games.
AI champions have dominated such contests since as far back as 1997, when IBM's Deep Blue became the first computer to defeat a reigning human world chess champion. Garry Kasparov, the world's top grandmaster at the time, had triumphed in his first match against Deep Blue a year earlier before an upgraded version overcame him at the second time of asking.
However, while AI is very good at getting to grips with board games that have a fixed set of rules and conditions, even the latest Machine Learning technology struggles with the less formulaic conditions of the real world. The most advanced AI today is based on the Machine Learning approach, which is data- and mathematics-centric.
An algorithm rapidly processes the huge amounts of data provided to it, finds patterns in that data and draws conclusions from those patterns. When those conclusions prove incorrect, the feedback informs subsequent decisions until the algorithm comes statistically close to always making the optimal decision. In situations where the rules can 'change' in an unexpected way, however, the success of AI algorithms drops off a cliff.
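The loop described above — predict, measure the error, let the error inform the next decision — can be sketched in a few lines. This is a deliberately minimal toy learner, not any specific DeepMind or OpenAI algorithm: it fits a simple numeric pattern by repeatedly correcting itself.

```python
# Toy illustration of the Machine Learning loop: draw a conclusion
# from the data, check whether it was wrong, and adjust accordingly
# until the decisions are statistically close to optimal.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # underlying pattern: y = 2x

weight = 0.0           # the algorithm's current "rule" for the data
learning_rate = 0.05

for step in range(200):
    for x, target in data:
        prediction = weight * x              # conclusion drawn from the current rule
        error = prediction - target          # did the conclusion prove incorrect?
        weight -= learning_rate * error * x  # the error informs the next decision

print(round(weight, 3))  # converges to the underlying pattern, 2.0
```

After a couple of hundred passes over the data the learned weight settles on the hidden pattern. The fragility the article describes shows up immediately if the pattern in `data` were changed mid-run: the learner has no notion of the rules shifting and must slowly re-fit from its now-wrong conclusions.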
This reliance on data makes Machine Learning inflexible, which has so far limited how AI algorithms can be applied in many real-world environments. Many leading computer scientists believe the biggest AI-powered technology breakthroughs will only become possible once AI develops to the point it can outperform humans in more complex environments with many moving parts and changing external conditions and 'rules'.
To that end, leading AI scientists are now focusing on a very different kind of game – esports. Esports are popular video games such as Counter-Strike, StarCraft and Dota 2, played at a competitive professional level between individual players or teams. Esports have become big business and huge spectator sports, with the most prestigious tournaments attracting audiences of millions around the world. They are also a far greater challenge for AI than board games: they involve teams working together, conditions and rules that change mid-game, such as a player finding a new weapon or power, and often incomplete information about the opponent. In short, they are much closer to the real world.
OpenAI, a competitor to Alphabet's DeepMind backed by investors such as Elon Musk, is targeting the Dota 2 championship to be held in Vancouver in August as its first competitive attempt at taking on the best human esports teams. OpenAI's algorithm has demonstrated some success against strong human Dota 2 teams, but only when the game was simplified by removing some of the options normally in play. The AI improves its skills by playing against itself, learning through trial and error what works best. In the game's simpler one-on-one mode the algorithm has reached a strong level, but the game is far more complex when played in teams of five. The OpenAI team remains bullish, however, saying its algorithms required fewer changes than expected to adapt to the multi-player format.
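The self-play idea — an algorithm improving purely by playing against a copy of itself and keeping what works — can be illustrated on a much smaller game than Dota 2. The sketch below is a hypothetical toy, not OpenAI's actual system: two copies of the same learner play the take-away game Nim (take 1 or 2 stones; whoever takes the last stone wins) and learn move values by trial and error alone.

```python
# Toy self-play sketch: a learner discovers good Nim moves purely by
# playing against itself, with no rules of strategy programmed in.
import random

N = 7   # starting number of stones
Q = {}  # learned value table: (stones_remaining, move) -> estimated value

def choose(stones, epsilon=0.2):
    """Pick a move: mostly the best-known one, sometimes a random trial."""
    legal = [m for m in (1, 2) if m <= stones]
    if random.random() < epsilon:
        return random.choice(legal)                      # trial (exploration)
    return max(legal, key=lambda m: Q.get((stones, m), 0.0))  # best so far

random.seed(0)
for episode in range(20000):
    stones, history = N, []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # The player who took the last stone won; credit moves backwards,
    # flipping the sign each turn since the players alternate.
    reward = 1.0
    for state_move in reversed(history):
        old = Q.get(state_move, 0.0)
        Q[state_move] = old + 0.1 * (reward - old)  # learn from the outcome
        reward = -reward

# Known Nim theory: from 7 stones the winning move is to take 1,
# leaving the opponent a multiple of 3. The learner finds this itself.
print(choose(7, epsilon=0.0))
```

Nothing about "leave a multiple of 3" is coded anywhere; the value table simply records which trials led to wins, which is the same principle, at a vastly smaller scale, behind training an agent on millions of games against itself.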
If OpenAI achieves a reasonable level of success in Vancouver in August, it will be a positive indication of the kinds of environments AI might be applied to in the future. Driverless car technology is among the 'real-life' challenges most important to the AI industry, and mastering a game such as Dota 2 would be a step in the right direction.