Constellation, the institutional repository of the Université du Québec à Chicoutimi

Vision-Based Robotic Arm Control Algorithm Using Deep Reinforcement Learning for Autonomous Objects Grasping

Sekkat, Hiba, Tigani, Smail, Saadane, Rachid and Chehri, Abdellah (2021). Vision-Based Robotic Arm Control Algorithm Using Deep Reinforcement Learning for Autonomous Objects Grasping. Applied Sciences, 11(17), 7917.

PDF - Published version
Available under a Creative Commons Attribution (CC-BY 2.5) licence.




Working side by side, humans and robots now complement each other, and we may say that they work hand in hand. This study aims to improve the grasping task by reaching the intended object using deep reinforcement learning. We propose a deep deterministic policy gradient approach applicable to robotic arms with many degrees of freedom for autonomously grasping objects according to their classification and a given task. In this study, the approach is realized on a five-degrees-of-freedom robotic arm that reaches the targeted object using the inverse kinematics method. You Only Look Once v5 is employed for object detection, and backward projection is used to recover the three-dimensional position of the target. After inverse kinematics computes the joint angles for the detected position, the algorithm moves the robot's arm to the target object's location. Our approach provides a neural inverse kinematics solution that increases overall performance, and simulation results reveal its advantages over the traditional method. The robot's end-effector can reach the targeted location by calculating the angle of every joint within an acceptable error range, and both the joint-angle accuracy and the resulting posture are satisfactory. Experiments demonstrate the performance of our proposal against state-of-the-art approaches to vision-based grasping. This is a new approach to grasping an object via inverse kinematics; it is not only simpler than the standard method but also more meaningful for robots with multiple degrees of freedom.
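The abstract contrasts the paper's neural inverse kinematics solution with the traditional analytic approach. As a simplified illustration of that classical step (not the paper's five-degrees-of-freedom method), the following sketch solves inverse kinematics for a planar two-link arm using the law of cosines; the function names `ik_2link` and `fk_2link` are hypothetical and the geometry is an assumption for illustration only.

```python
import math

def ik_2link(x, y, l1, l2):
    """Analytic inverse kinematics for a planar 2-link arm (elbow-down).

    Returns joint angles (theta1, theta2) in radians so that the end
    effector of links with lengths l1 and l2 reaches target (x, y).
    """
    # Law of cosines gives the cosine of the elbow angle.
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)  # elbow-down solution
    # Shoulder angle: bearing to the target minus the wrist offset angle.
    theta1 = math.atan2(y, x) - math.atan2(
        l2 * math.sin(theta2), l1 + l2 * math.cos(theta2))
    return theta1, theta2

def fk_2link(theta1, theta2, l1, l2):
    """Forward kinematics, used to check the IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

For arms with more joints the analytic solution quickly becomes unwieldy, which is part of the motivation the abstract gives for a learned (neural) inverse kinematics solver.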

Document type: Article published in a peer-reviewed journal
Pages: p. 7917
Peer-reviewed: Yes
Unique identifier: 10.3390/app11177917
Subjects: Natural sciences and engineering > Engineering
Natural sciences and engineering > Engineering > Computer and software engineering
Natural sciences and engineering > Applied sciences
Department, module, service and research unit: Departments and modules > Department of Applied Sciences > Engineering module
Keywords: grasp task, deep reinforcement learning, robotic applications, deep deterministic policy gradient, autonomous robots, robotic arm, object detection, inverse kinematics, You Only Look Once v5
Deposited on: 13 Apr. 2022 14:34
Last modified: 13 Apr. 2022 14:34


Unless otherwise indicated, documents archived in Constellation are made available under the terms of the Creative Commons "Attribution, Non-Commercial, No Derivatives" 2.5 Canada licence.

Bibliothèque Paul-Émile-Boulet, UQAC
555, boulevard de l'Université
Chicoutimi (Québec)  CANADA G7H 2B1
418 545-5011, ext. 5630