Authors: Rosman, Benjamin S.; Ramamoorthy, S.
Date accessioned: 2014-08-25
Date available: 2014-08-25
Date issued: 2014-05
Title: Giving advice to agents with hidden goals
Type: Conference Presentation
Research group: Robust Autonomy and Decisions Group
Conference: 2014 IEEE International Conference on Robotics and Automation (ICRA 2014), Hong Kong, China, 31 May - 7 June 2014
Citation: Rosman, B.S. and Ramamoorthy, S. 2014. Giving advice to agents with hidden goals. In: 2014 IEEE International Conference on Robotics and Automation (ICRA 2014), Hong Kong, China, 31 May - 7 June 2014.
Full text: http://rad.inf.ed.ac.uk/data/publications/2014/icra14rosman.pdf
URI: http://hdl.handle.net/10204/7620
Language: English
Keywords: Robot companions; Social human-robot interaction; Autonomous agents; Intelligent transportation systems

Abstract: This paper considers the problem of providing advice to an autonomous agent when neither the behavioural policy nor the goals of that agent are known to the advisor. We present an approach based on building a model of "common sense" behaviour in the domain from an aggregation of different users performing various tasks, modelled as MDPs, in the same domain. From this model, we estimate the normalcy of the trajectory produced by a new agent in the domain, and provide behavioural advice based on an approximation of the trade-off in utility between the potential benefit to the exploring agent and the cost incurred in giving this advice. The model is evaluated on a maze world domain by providing advice to different types of agents, and we show that this leads to a considerable and unanimous improvement in their task completion rates.
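
For readers who want a concrete sense of the approach summarised in the abstract, the Python sketch below illustrates one way the ingredients could be wired together: trajectories from many users are aggregated into state-action visit counts, a normalcy score for a new agent's trajectory is computed under that aggregate, and advice is offered only when a rough benefit estimate exceeds a fixed advice cost. All class and function names, the count-based model, and the threshold heuristics are illustrative assumptions for exposition; they are not the models, normalcy estimator, or utility trade-off defined in the paper itself.

# Illustrative sketch only: the names, the state-action count model, and the
# benefit/cost heuristics below are expository assumptions, not the method
# published in the paper.
import math
from collections import Counter, defaultdict


class CommonSenseModel:
    """Aggregate trajectories from many users/tasks in the same domain into
    state-action visit counts, as a crude stand-in for 'normal' behaviour."""

    def __init__(self):
        self.counts = defaultdict(Counter)  # state -> Counter over actions

    def add_trajectory(self, trajectory):
        """trajectory: iterable of (state, action) pairs from one demonstrator."""
        for state, action in trajectory:
            self.counts[state][action] += 1

    def action_prob(self, state, action, smoothing=1e-3):
        """Smoothed empirical probability of taking `action` in `state`."""
        c = self.counts[state]
        total = sum(c.values())
        n = max(len(c), 1)
        return (c[action] + smoothing) / (total + smoothing * n)

    def normalcy(self, trajectory):
        """Mean log-probability of the observed behaviour under the model;
        very negative values suggest the agent is behaving unusually."""
        if not trajectory:
            return 0.0
        return sum(math.log(self.action_prob(s, a)) for s, a in trajectory) / len(trajectory)


def maybe_advise(model, trajectory, current_state,
                 advice_cost=0.5, benefit_scale=1.0, normalcy_threshold=-2.0):
    """Suggest the most commonly demonstrated action at `current_state`, but
    only when a rough expected benefit (degree of abnormality of the agent's
    trajectory) outweighs a fixed cost of giving advice."""
    score = model.normalcy(trajectory)
    expected_benefit = benefit_scale * max(0.0, normalcy_threshold - score)
    if expected_benefit > advice_cost and model.counts[current_state]:
        return model.counts[current_state].most_common(1)[0][0]
    return None  # stay silent: advising is not worth its cost here


# Hypothetical usage: a demonstrator moves right/up through a small maze; a new
# agent heading the "wrong" way scores as abnormal, so advice ("right") is given.
model = CommonSenseModel()
model.add_trajectory([("s0", "right"), ("s1", "right"), ("s2", "up")])
print(maybe_advise(model, [("s0", "down"), ("s1", "down")], "s1"))

In this toy run the new agent's trajectory has low probability under the aggregated counts, so the normalcy score falls below the threshold and the most common demonstrated action at the current state is returned as advice; a well-behaved trajectory would instead return None.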