Last night, the Google-owned artificial intelligence outfit DeepMind gave us a taste of the future by pitching two esports professionals against a nonhuman player built from its proprietary technology.
DeepMind has already seen its AI defeat world go champion Lee Sedol in an astounding showcase of the potential of ‘general AI’; that is, artificial intelligence programs that do not simply carry out a specific function, but can learn from experience, evolve their abilities, and make decisions.
The ancient Chinese game go, along with chess, has long served as a sparring partner for emerging AI, because its rigid, linear rule set conceals remarkable depth. Famously, there are more potential games of chess than atoms in the known universe (in truth it’s a little more complex than that, but we won’t go down that rabbit hole here).
It may seem odd, then, that the DeepMind team decided that, having mastered go, it was time to tackle esports. For all the commercial success of the competitive gaming movement, esports doesn’t quite carry the regard of go and chess. Those board games are often seen as higher forms of gaming that transcend the apparent triviality of video games.
But to an AI researcher, the online real-time strategy game and esports sensation StarCraft II presents a far greater challenge. In contrast to the relatively simple rule sets of chess and go, StarCraft II offers numerous entwined systems of interaction, played out over landscapes that engender chaotic variation, with human players employing wildly varied strategies that are sometimes motivated by emotion as much as by reason. It may be impossible to quantify whether chess, go, or StarCraft II has the most depth; but the latter absolutely offers the greatest gameplay breadth. That’s why the StarCraft series has long fascinated AI developers.
“StarCraft is one of the most complex real-time strategy games and presents immense research challenges”, confirmed David Silver, co-lead on the AlphaStar project at DeepMind. “The tactical complexity is greater than chess, since each move can command hundreds of units at once; the strategic complexity is perhaps richer than even go, due to the possibility of exploitation by a diverse range of approaches; and players have even less information about their opponent than in poker. Plus, these complex decisions must be made in real-time, when every fraction of a second counts. The AI research community has been wrestling with the combination of these challenges for more than a decade.”
Back in 2017, DeepMind began work on AlphaStar, an AI StarCraft II player that at the time had no comprehension of what a mouse or keyboard was, let alone the game’s rules or its dauntingly complex strategies. By November 2018, after watching thousands of human players’ games and sparring against multiple versions of itself, AlphaStar had proved it understood how to play StarCraft at a fundamental level, deploying rudimentary gameplay strategies. Within days, it had beaten the best human StarCraft II players working at DeepMind.
It was time to call in the game’s best players, so esteemed esports team Liquid sent over its StarCraft II aces Dario ‘TLO’ Wünsch and Grzegorz ‘MaNa’ Komincz. Each played five games against AlphaStar in private sessions. Last night saw DeepMind and StarCraft II developer Blizzard Entertainment stream those sessions to a global audience of academics, esports fans, and StarCraft’s sizeable community.
AlphaStar won all ten matches, leaving both the pro gamers and the audience somewhat confounded. It shouldn’t have been that easy, but thanks to the AI’s knack for adopting unconventional strategies, the victories came thick and fast. It’s worth noting that AlphaStar played under certain restrictions: its rate of actions was capped at roughly human speed, and it was steered away from strategies so abstract they would render the game unplayable. On that latter point, there is a difference between playing a game and playing it meaningfully.
There was one concession for humanity. MaNa managed to claw back one win in an exhibition match played live at the event. TheGamingEconomy was lucky enough to be in the studio where the match and live-streaming took place, tucked away in Google’s London HQ, and the tension and excitement in the room were palpable. MaNa had proven that we humans can still put up a good fight, but the overall trouncing got everyone pondering what AlphaStar’s remarkable victories mean for the future of games, and for AI’s potential more generally.
“Today’s news marks a significant breakthrough for both DeepMind and the StarCraft community”, said Tim Morten, production director at Blizzard Entertainment and a senior member of the StarCraft team. “StarCraft’s sheer complexity is one of the reasons it’s loved by so many players around the world, and it’s been inspiring to see DeepMind’s progress as they’ve worked towards this world first. This is just the latest achievement in our partnership – having already jointly released an open-source set of tools known as ‘PySC2’ to the community in 2017, which included the largest set of anonymised game replays ever released. I’m excited to see what comes next.”
What does come next? DeepMind is focused on developing its AI to the point where it can assist in sectors such as science, energy, and healthcare. For esports players, a period of reflection is beginning. What can they learn from playing AlphaStar and take back to human-versus-human competition? And could an AI that plays games so well be used to test, balance, or even design video games?
AlphaStar has left us with a lot of questions that may be prominent talking points in the games industry over the coming years. But one thing is for sure: AI is getting better than humans at more and more things.