Poker-Playing AI Tapped For Military Use In $10M Pentagon Deal

Wednesday, 23 January 2019 - 12:50PM
Technology
Military Tech
Dystopias
Composite adapted from Pixabay images
In the 1983 film WarGames, a teenage hacker accesses WOPR (War Operation Plan Response), a military supercomputer programmed to evaluate the American "winnability" of global nuclear war scenarios. In short, the machine is a form of artificial intelligence evaluating potential outcomes derived from datasets including enemy nuclear arsenal capacity, response time, ICBM range, and, of course, speculative casualties across major targets. In the film, the hacker triggers WOPR to run what he thinks is a simulation game – Global Thermonuclear War – but is actually a full-scale nuclear response tied to the American arsenal. Suffice it to say, the entire globe is pushed to the very brink of nuclear annihilation, but is saved at the last minute by WOPR's hermitic creator.


In a move that recalls the Cold War film, Wired reports that Libratus, an AI best known for destroying four professional poker players during a series of No Limit Texas Hold 'Em games in 2017, has just been granted a $10 million, two-year contract to work for the Pentagon's Defense Innovation Unit (DIU). The system, created by Carnegie Mellon University professor Tuomas Sandholm and CMU doctoral candidate Noam Brown, will serve in what amounts to an advisory or support capacity in war-gaming and simulation exercises designed to keep military strategy sharp. That Libratus is being used for military war-gaming comes as no surprise: Sandholm showed his hand last year when he founded Strategy Robot, a startup intended to leverage computational game theory technology for defense.


Wired notes that traditional military war-gaming models are limited by their small datasets. "That opens yourself up to a lot of exploitation," Sandholm told the site, "because the real adversary may not play according to your assumptions."


Sandholm and Brown described some of Libratus' programming in an article published in the journal Science last summer. Libratus' approach to game-solving consists of three modules. The first computes an abstraction of the game it intends to play, developing a blueprint strategy. As a game progresses, the second module seeks more finely tuned solutions and strategies for the individual subgames it encounters. It is the third module that is most interesting. In the authors' own words,


"the third module of Libratus-the self-improver-enhances the blueprint strategy. It fills in missing branches in the blueprint abstraction and computes a game-theoretic strategy for those branches. In principle, one could conduct all such computations in advance, but the game tree is way too large for that to be feasible. To tame this complexity, Libratus uses the opponents' actual moves to suggest where in the game tree such filling is worthwhile."


In other words – or as best we can tell – Libratus develops computational maps both of the game as a whole and of the particular game it is playing in real time, using its opponents' moves as input: variables that can be fit into any number of possible outcomes and exploited for victory.
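To make the three-module flow described above a little more concrete, here is a very loose Python sketch. Everything in it – the class name, the uniform "policies," the method names – is an illustrative placeholder of our own invention; the real system computes its strategies with large-scale counterfactual regret minimization over an enormous game tree, not anything this simple:

```python
class BlueprintAgent:
    """Toy sketch of Libratus's three-module structure (illustrative only)."""

    def __init__(self, abstract_actions):
        # Module 1: a coarse "blueprint" strategy computed over an
        # abstraction of the game. Here it is just a uniform policy
        # over the actions the abstraction knows about.
        self.blueprint = {a: 1.0 / len(abstract_actions) for a in abstract_actions}

    def refine_subgame(self, subgame_actions):
        # Module 2: as play reaches a particular subgame, re-solve it
        # with a finer strategy. Here, a stand-in that just produces a
        # fresh uniform policy over the subgame's actions.
        return {a: 1.0 / len(subgame_actions) for a in subgame_actions}

    def self_improve(self, opponent_action):
        # Module 3: the "self-improver." If the opponent takes an action
        # missing from the blueprint abstraction, fill in that branch
        # and recompute a strategy that covers it.
        if opponent_action not in self.blueprint:
            self.blueprint[opponent_action] = 0.0  # new branch added
            n = len(self.blueprint)
            for action in self.blueprint:
                self.blueprint[action] = 1.0 / n  # toy re-solve


# Usage: an off-blueprint opponent move triggers the self-improver.
agent = BlueprintAgent(["fold", "call", "raise_2x"])
agent.self_improve("raise_7x")          # move the abstraction never modeled
print("raise_7x" in agent.blueprint)    # the branch is now filled in
```

The point the sketch tries to capture is the one in the quoted passage: rather than pre-computing every branch of an infeasibly large tree, the system lets the opponent's actual behavior tell it which missing branches are worth solving.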


Naturally, this raises questions of what one can call "victory" in a scenario like Global Thermonuclear War. What might the system calculate as an acceptable number of casualties in order to wreak maximum destruction on an opponent? If we exercise the argumentum ad absurdum, we could assert that, to an AI, a military victory – albeit a Pyrrhic one – could be declared in a war between countries that results in a single, lone survivor. Admittedly, this is a bit of a leap, as the intention thus far is to use Libratus as a training and simulation device, not hand it the nuclear football. Nevertheless, we'd be remiss if we did not express a certain discomfort at having such a tool – brutal, efficient, utterly inhuman – consider the fate of entire populations as part of a game with stakes far higher than those encountered when going all-in with hopes of hitting a flush draw on the flop.


Science
Artificial Intelligence