Preprint / Version 1

Unlocking Robotic Manipulator Potential:

Exploring Model-Free Reinforcement Learning for Adaptive Task Performance

Authors

  • Aaditya Prabhu, Student

DOI:

https://doi.org/10.58445/rars.1265

Keywords:

Robotic Manipulator Potential, Model-Free Reinforcement Learning, Adaptive Task Performance

Abstract

Robotic manipulators hold significant promise across industrial applications, yet their adaptability to dynamic, evolving environments remains a challenge. Leveraging artificial intelligence, particularly reinforcement learning (RL), offers a compelling way to enhance the capabilities of these manipulators. In this study, we investigate the efficacy of model-free RL algorithms in training robotic manipulators for pushing tasks, which are crucial for applications such as high-mix, low-volume manufacturing and household assistance. We explore five prominent model-free RL algorithms: Proximal Policy Optimization (PPO), Deep Deterministic Policy Gradient (DDPG), Twin Delayed DDPG (TD3), Advantage Actor-Critic (A2C), and Soft Actor-Critic (SAC). These algorithms take distinct approaches to training agents that iteratively interact with their environments to maximize cumulative reward. We study their mechanisms, focusing on policy optimization and value-function estimation. We also provide a detailed setup elucidating the components of RL, including agents, environments, observations, actions, and reward functions, and we discuss the distinctions between fully observed and partially observed environments, as well as between discrete and continuous action spaces, both crucial for training robotic manipulators. Using the Python libraries stable-baselines3 and panda-gym, we conduct experiments with a Franka Panda robot simulated in the PyBullet physics engine. Ultimately, this study contributes to advancing the field of robotic manipulation through the integration of cutting-edge RL techniques.
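As a concrete illustration of the setup the abstract describes, the sketch below trains one of the five algorithms (SAC) on a panda-gym pushing task using stable-baselines3. It is a minimal sketch based on the two libraries' documented usage, not the paper's exact configuration: the environment id `PandaPush-v3`, the timestep budget, and the default hyperparameters are assumptions.

```python
# Minimal sketch (assumed configuration): SAC on a panda-gym pushing task.
import gymnasium as gym
import panda_gym  # importing registers the Panda tasks (e.g. PandaPush-v3)
from stable_baselines3 import SAC

# PandaPush-v3 simulates a Franka Panda arm in PyBullet. Observations are
# goal-conditioned dicts (observation, achieved_goal, desired_goal) and
# actions are continuous end-effector commands.
env = gym.make("PandaPush-v3")

# MultiInputPolicy handles the dict observation space; all other
# hyperparameters are left at stable-baselines3 defaults here.
model = SAC("MultiInputPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)  # assumed budget, not from the paper
model.save("sac_panda_push")
```

Because the stable-baselines3 algorithms share the same `learn` interface, swapping in PPO, A2C, DDPG, or TD3 only requires importing the corresponding class in place of `SAC`.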

References

“Robotic Manipulation Research: From the Laboratory to the Real World.” Open Access Government, 25 Sept. 2020, www.openaccessgovernment.org/robotic-manipulation-research/94838/. Accessed 17 May 2024.

Hui, Jonathan. “RL — the Math behind TRPO & PPO.” Medium, 14 Sept. 2018, jonathan-hui.medium.com/rl-the-math-behind-trpo-ppo-d12f6c745f33.

Schulman, John, et al. "Proximal Policy Optimization Algorithms." arXiv preprint arXiv:1707.06347, 2017.

"Deep Deterministic Policy Gradient." OpenAI Spinning Up Documentation, spinningup.openai.com/en/latest/algorithms/ddpg.html.

Posted

2024-07-06