Towards a Science of Security Games: Key Algorithmic Principles, Deployed Applications and Research Challenges
Presenter: Milind Tambe
Computer Science, University of Southern California (USC)
July 23, 2015
Abstract
Security is a critical concern around the world, whether it is the challenge of protecting ports, airports and other critical infrastructure, interdicting the illegal flow of drugs, weapons and money, protecting endangered species, forests and fisheries, suppressing urban crime, or securing cyberspace. Unfortunately, limited security resources prevent full security coverage at all times. Instead, these limited resources must be deployed efficiently, while simultaneously taking into account adversaries' responses to the security coverage (e.g., an adversary can exploit predictability in security allocation), adversaries' preferences, and available past data.
To support efficient and randomized security resource allocation, we have founded the "security games" framework and used it to build decision aids for security agencies around the world. Security games is a novel research area based on computational and behavioral game theory that also incorporates elements of AI planning under uncertainty and machine learning. We have deployed security-games-based decision aids for security of ports and ferry traffic with the US Coast Guard (in the ports of New York, Boston, Los Angeles/Long Beach, Houston and others), for security of airports and air traffic with the US Federal Air Marshals and the Los Angeles World Airports (LAX) police, and we have tested the framework for security of metro trains with the Los Angeles Sheriff's Department. Moreover, recent work on green security games has led to testing our decision aids for protection of fisheries with the US Coast Guard and protection of wildlife at sites in multiple countries. I will introduce security games and discuss use-inspired research in this area, including algorithms for scaling up security games, handling significant adversarial uncertainty, and learning models of human adversary behavior.
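To give a flavor of the underlying model, the following is a minimal, purely illustrative sketch of a Stackelberg security game: the defender commits to randomized coverage of targets, and the attacker observes that coverage and best-responds. The two targets, their payoff numbers, and the brute-force grid search below are hypothetical and are not the deployed algorithms discussed in the talk; they only illustrate why randomized, adversary-aware allocation differs from simply guarding the "most valuable" target.

# Toy Stackelberg security game: one defender resource, two targets.
# All payoff values and the grid-search solution method are illustrative assumptions.

# Per-target payoffs: defender/attacker utility when the target is covered vs. uncovered.
TARGETS = {
    "port":    {"def_cov": 1.0, "def_unc": -5.0, "att_cov": -2.0, "att_unc": 4.0},
    "airport": {"def_cov": 2.0, "def_unc": -3.0, "att_cov": -1.0, "att_unc": 3.0},
}

def expected_utilities(coverage):
    """Expected (defender, attacker) utility for attacking each target, given coverage probabilities."""
    utils = {}
    for name, p in TARGETS.items():
        c = coverage[name]
        def_u = c * p["def_cov"] + (1 - c) * p["def_unc"]
        att_u = c * p["att_cov"] + (1 - c) * p["att_unc"]
        utils[name] = (def_u, att_u)
    return utils

def best_coverage(steps=1000):
    """Grid-search the defender's mixed strategy; the attacker best-responds,
    breaking ties in the defender's favor (strong Stackelberg equilibrium convention)."""
    best = None
    for i in range(steps + 1):
        c = i / steps                                   # probability of covering "port"
        coverage = {"port": c, "airport": 1.0 - c}      # one resource split across two targets
        utils = expected_utilities(coverage)
        max_att = max(u[1] for u in utils.values())     # attacker's best-response value
        def_u = max(u[0] for u in utils.values() if abs(u[1] - max_att) < 1e-9)
        if best is None or def_u > best[0]:
            best = (def_u, coverage)
    return best

if __name__ == "__main__":
    value, coverage = best_coverage()
    print("Defender expected utility:", round(value, 3))
    print("Coverage probabilities:", {t: round(p, 3) for t, p in coverage.items()})

Running this sketch shows the defender's optimal strategy is a probability mix over both targets rather than deterministic coverage of one, which is the intuition behind the randomized schedules produced by the deployed systems described above (which solve much larger games with specialized scale-up algorithms rather than grid search).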