Title: Identifying and tracking switching, non-stationary opponents: a Bayesian approach

Authors: Hernandez-Leal, P.; Taylor, M.E.; Rosman, Benjamin S.; Sucar, L.E.; Munoz de Cote, E.

Type: Conference Presentation

Published in: Workshop on Multiagent Interaction without Prior Coordination (MIPC) at AAAI-16, 13 February 2016, Phoenix, Arizona, USA, pp. 560-566

Publisher: Association for the Advancement of Artificial Intelligence (AAAI)

Date: February 2016
Date deposited: 2017-05-17

Database: ResearchSpace (CSIR), https://researchspace.csir.co.za

Language: en

Keywords: Policy reuse; Non-stationary opponents; Repeated games

Links:
https://www.aaai.org/ocs/index.php/WS/AAAIW16/paper/view/12584/12424
http://mipc.inf.ed.ac.uk/2016/papers/mipc2016_hernandezleal_etal.pdf
http://hdl.handle.net/10204/9091

Abstract: In many situations, agents are required to use a set of strategies (behaviors) and switch among them during the course of an interaction. This work focuses on the problem of recognizing the strategy used by an agent within a small number of interactions. We propose using a Bayesian framework to address this problem. Bayesian policy reuse (BPR) has been empirically shown to be efficient at correctly detecting the best policy to use from a library in sequential decision tasks. In this paper we extend BPR to adversarial settings, in particular, to opponents that switch from one stationary strategy to another. Our proposed extension enables learning new models in an online fashion when the learning agent detects that the current policies are not performing optimally. Experiments presented in repeated games show that our approach is capable of efficiently detecting opponent strategies and reacting quickly to behavior switches, thereby yielding better performance than state-of-the-art approaches in terms of average rewards.

Citation: Hernandez-Leal, P., Taylor, M.E., Rosman, B.S., Sucar, L.E. and Munoz de Cote, E. 2016. Identifying and tracking switching, non-stationary opponents: a Bayesian approach. Workshop on Multiagent Interaction without Prior Coordination (MIPC) at AAAI-16, 13 February 2016, Phoenix, Arizona, USA, pp. 560-566. http://hdl.handle.net/10204/9091
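The core mechanism the abstract describes — maintaining a Bayesian belief over a library of opponent models and choosing the response policy that is best under that belief — can be sketched as below. This is a minimal illustration, not the paper's exact algorithm: the two opponent models, the observation likelihoods, and the utility table are all assumed values for the sake of a runnable example.

```python
# Hedged sketch of a BPR-style Bayesian opponent detector (illustrative only).
# belief: probability of each opponent model; likelihoods: P(observed signal | model).

def update_belief(belief, likelihoods):
    """One Bayes-rule step: posterior[m] is proportional to belief[m] * P(signal | m)."""
    posterior = {m: belief[m] * likelihoods[m] for m in belief}
    total = sum(posterior.values())
    return {m: p / total for m, p in posterior.items()}

def best_policy(belief, utility):
    """Pick the response policy with the highest expected utility under the belief."""
    policies = {pi for m in utility for pi in utility[m]}
    return max(policies,
               key=lambda pi: sum(belief[m] * utility[m][pi] for m in belief))

# Assumed example: two opponent models and two response policies in a repeated game.
belief = {"tit_for_tat": 0.5, "always_defect": 0.5}
utility = {
    "tit_for_tat":   {"cooperate": 3.0, "defect": 1.0},
    "always_defect": {"cooperate": 0.0, "defect": 1.0},
}
# The observed payoff signal is far more likely under "always_defect",
# so the belief shifts there and the chosen policy switches accordingly.
belief = update_belief(belief, {"tit_for_tat": 0.1, "always_defect": 0.9})
print(best_policy(belief, utility))  # → defect
```

Reacting to a behavior switch, as in the paper's setting, amounts to repeating this update each round so that the belief tracks the opponent's current stationary strategy.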