Vol. 16 No. 02 (2024): Advances in Urban Resilience and Sustainable City Design

Reinforcement Learning-based Approaches for Improving Safety and Trust in Robot-to-Robot and Human-Robot Interaction

Mahmoud Abouelyazid
Purdue University

Published 2024-02-08

How to Cite

Abouelyazid, M. (2024). Reinforcement Learning-based Approaches for Improving Safety and Trust in Robot-to-Robot and Human-Robot Interaction. Advances in Urban Resilience and Sustainable City Design, 16(02), 18–29. Retrieved from http://orientreview.com/index.php/aurscd-journal/article/view/53

Abstract

The increasing deployment of robotic systems across domains has underscored the need to ensure safety and trust in robot-to-robot and human-robot interaction. Existing approaches often prioritize performance metrics without explicitly addressing safety constraints or trust-building behaviors, and the transfer of learned policies from simulation to real-world environments remains challenging due to the complexity and uncertainty of real-world scenarios. This study addresses this gap by proposing a framework that integrates multiple reinforcement learning (RL) techniques to enhance safety and trust in robotic systems. The proposed framework comprises several key components. First, constrained RL approaches, such as Constrained Policy Optimization or Safe Exploration via Constrained Policy Optimization, incorporate safety constraints into the learning process, ensuring that learned policies adhere to specified safety requirements. Second, a reward shaping mechanism incorporates metrics that quantify safety and trust, enabling the learning of behaviors that prioritize safety and strengthen trust in human-robot interaction. Third, active learning techniques are integrated with RL to enable human-in-the-loop learning, allowing the robot to learn efficiently from human feedback and demonstrations. Fourth, domain randomization techniques improve the sim-to-real transfer of learned policies, making them more robust and adaptable to real-world conditions. Finally, the framework extends to multi-robot systems through multi-agent RL algorithms that enable safe and efficient coordination via learned communication protocols. The study provides theoretical and algorithmic foundations for the proposed framework and discusses the benefits and challenges of integrating these RL techniques for safety and trust in robotics applications. The framework can serve as a conceptual foundation for developing more reliable, trustworthy, and human-friendly robotic systems. Although experimental validation is beyond the scope of this study, the framework indicates how RL techniques can be combined to enhance safety and trust in robot-to-robot and human-robot interaction, motivating further empirical investigation and practical application.
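The abstract names Constrained Policy Optimization as one candidate for enforcing safety constraints during learning. The sketch below uses a simpler stand-in, Lagrangian relaxation of a constrained objective, on a toy two-action problem; the rewards, costs, and cost limit are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of constrained RL via Lagrangian relaxation on a toy
# two-action problem. Action 0: high reward, high safety cost; action 1:
# lower reward, low cost. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
REWARD = np.array([1.0, 0.6])   # expected reward per action (assumed)
COST = np.array([0.8, 0.1])     # expected safety cost per action (assumed)
COST_LIMIT = 0.2                # constraint: E[cost] <= 0.2

logits = np.zeros(2)            # softmax policy parameters
lam = 0.0                       # Lagrange multiplier on the safety constraint
lr_pi, lr_lam = 0.1, 0.05

def policy(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

for step in range(2000):
    p = policy(logits)
    a = rng.choice(2, p=p)
    r = REWARD[a] + 0.1 * rng.standard_normal()   # noisy reward sample
    c = COST[a] + 0.1 * rng.standard_normal()     # noisy cost sample
    # REINFORCE gradient of the Lagrangian objective r - lam * c
    grad_logp = np.eye(2)[a] - p
    logits += lr_pi * (r - lam * c) * grad_logp
    # Dual ascent: raise lam while the cost constraint is violated
    lam = max(0.0, lam + lr_lam * (c - COST_LIMIT))

p = policy(logits)
print(f"P(unsafe action) = {p[0]:.3f}, lambda = {lam:.2f}")
```

The dual variable grows whenever sampled cost exceeds the limit, steering the policy away from the unsafe action; CPO replaces this penalty with a trust-region update that enforces the constraint at each step.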
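The abstract does not specify which safety and trust metrics enter the reward shaping mechanism. The following sketch assumes two hypothetical per-step signals, a minimum-clearance distance and a motion-legibility score, and combines them with the task reward under assumed weights.

```python
# Hypothetical reward-shaping sketch: the abstract does not define the
# safety/trust metrics, so min_clearance, legibility, and the weights
# below are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class StepInfo:
    task_reward: float      # environment's native reward
    min_clearance: float    # closest distance to a human this step (meters)
    legibility: float       # 0..1 score: how predictable the motion appeared

def shaped_reward(info: StepInfo,
                  safe_dist: float = 0.5,
                  w_safety: float = 2.0,
                  w_trust: float = 0.5) -> float:
    # Penalize intrusions into the human's safety envelope.
    safety_penalty = max(0.0, safe_dist - info.min_clearance)
    # Reward legible, predictable motion as a proxy for trust-building.
    trust_bonus = info.legibility
    return info.task_reward - w_safety * safety_penalty + w_trust * trust_bonus

print(shaped_reward(StepInfo(task_reward=1.0, min_clearance=0.3, legibility=0.8)))
```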
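One common way to realize the human-in-the-loop component is to query the human only when the policy is uncertain. The sketch below uses policy entropy as the uncertainty trigger; the tabular setting, the human oracle, and the threshold are all placeholder assumptions.

```python
# Sketch of human-in-the-loop active learning: the robot requests a human
# label only when its policy is uncertain. The environment, the human
# oracle, and the threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def entropy(p: np.ndarray) -> float:
    return float(-(p * np.log(p + 1e-12)).sum())

def human_oracle(state: int) -> int:
    """Stand-in for real human feedback: the 'correct' action per state."""
    return state % 2

probs = np.full((4, 2), 0.5)     # per-state action probabilities (4 states)
QUERY_THRESHOLD = 0.6            # query the human above this entropy (nats)

queries = 0
for step in range(200):
    s = rng.integers(4)
    if entropy(probs[s]) > QUERY_THRESHOLD:
        a = human_oracle(s)      # costly human query
        queries += 1
        probs[s, a] += 0.2       # move probability mass toward the label
        probs[s] /= probs[s].sum()
print(f"human queries used: {queries} / 200 steps")
```

After a few labels per state the entropy drops below the threshold and queries stop, which is the efficiency the abstract attributes to active learning.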
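For the sim-to-real component, domain randomization resamples simulator parameters each episode so the learned policy cannot overfit one configuration. The parameter names and ranges below are assumptions for illustration, and the `env.reconfigure` hook is hypothetical.

```python
# Minimal domain-randomization sketch. The parameter names and ranges are
# assumptions; a real setup would randomize the simulator's actual physics
# and sensor parameters each episode before sim-to-real transfer.
import random

RANDOMIZATION_RANGES = {
    "friction":     (0.5, 1.5),    # scale on nominal ground friction
    "link_mass":    (0.8, 1.2),    # scale on nominal link masses
    "sensor_noise": (0.00, 0.05),  # std. dev. added to observations
    "action_delay": (0, 3),        # control latency in simulation steps
}

def sample_domain(rng: random.Random) -> dict:
    """Draw one randomized simulator configuration for the next episode."""
    cfg = {}
    for name, (lo, hi) in RANDOMIZATION_RANGES.items():
        cfg[name] = rng.randint(lo, hi) if isinstance(lo, int) else rng.uniform(lo, hi)
    return cfg

rng = random.Random(0)
for episode in range(3):
    cfg = sample_domain(rng)
    # env.reconfigure(**cfg)  # hypothetical hook into the simulator
    print(f"episode {episode}: {cfg}")
```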