IIIT Hyderabad Publications
Advancing Visual Servoing Controller for Robotic Manipulation: Dynamic Object Grasping and Real-World Implementation

Author: Gunjan Gupta
Date: 2024-06-10
Report no: IIIT/TH/2024/71
Advisor: Madhava Krishna

Abstract

Visual servoing has been gaining popularity in various real-world, vision-centric robotic applications, offering enhanced end-effector control through visual feedback. In autonomous robotic grasping, where environments are often unseen and unstructured, visual servoing has demonstrated its ability to provide valuable guidance. However, traditional servoing-aided grasping methods encounter challenges in dynamic environments, particularly those involving moving objects. In the first part of the thesis (Chapter 3), we introduce DynGraspVS, a novel visual servoing-aided grasping approach that models the motion of moving objects in its interaction matrix. Leveraging a single-step rollout strategy, our approach achieves a marked increase in success rate, converges faster, and produces smoother trajectories, all while maintaining precise alignment in six degrees of freedom (6 DoF). By integrating velocity information into the interaction matrix, our method successfully completes the challenging task of grasping dynamic objects, outperforming existing deep Model Predictive Control (MPC) based methods in the PyBullet simulation environment. We test it on objects from the YCB dataset spanning a wide range of shapes, sizes, and material properties, and we demonstrate the effectiveness of our approach against evaluation metrics such as photometric error, success rate, time taken, and trajectory length. In the second half of the thesis (Chapter 4), we explore the integration and implementation of Image-Based Visual Servoing (IBVS) mechanisms on the XARM7 robotic platform.
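For readers unfamiliar with the interaction matrix mentioned above: the abstract does not give the control law itself, but the classical IBVS formulation it builds on can be sketched as follows. The sketch uses the standard point-feature interaction matrix and the textbook law v = -λ L⁺ e, plus an optional feedforward term compensating estimated image-space motion of the target, which is the rough idea behind folding object velocity into the servoing loop. All function names and the exact feedforward form are illustrative, not the thesis's implementation.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one normalized image point
    (x, y) at depth Z, mapping a 6-DoF camera twist to the point's
    image-plane velocity. Standard IBVS form."""
    return np.array([
        [-1.0 / Z, 0.0,       x / Z,  x * y,          -(1.0 + x * x),  y],
        [0.0,      -1.0 / Z,  y / Z,  1.0 + y * y,    -x * y,         -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5, feature_vel=None):
    """Classical IBVS law v = -lam * L^+ e.

    features, desired : lists of (x, y) normalized image points
    depths            : estimated depth Z of each point
    feature_vel       : optional stacked image-space velocities of the
                        features due to target motion; when given, a
                        feedforward term -L^+ * de/dt is added
                        (illustrative stand-in for velocity-aware servoing).
    Returns a 6-DoF camera twist [vx, vy, vz, wx, wy, wz].
    """
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()  # feature error
    L_pinv = np.linalg.pinv(L)
    v = -lam * L_pinv @ e
    if feature_vel is not None:
        v -= L_pinv @ np.asarray(feature_vel).ravel()
    return v
```

With a static target and the features already at their desired locations, the error is zero and the commanded twist is zero; a moving target adds the feedforward correction even at zero error, which is what lets the controller track rather than lag.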
Through this integration, our work demonstrates the feasibility and practical applicability of IBVS in real-world robotic systems. Furthermore, a comprehensive analysis of the Recurrent Task-Visual Servoing (RTVS) framework's performance in diverse real-world scenarios sheds light on its robustness and versatility. Additionally, the introduction of Imagine2Servo, a conditional diffusion model for generating target images, enhances the capabilities of IBVS for complex tasks. Through a combination of experimental validation and rigorous testing, this thesis provides valuable insights into the effectiveness and potential applications of IBVS in real-world robotic systems, setting the stage for future advancements in visual servoing technology.

Full thesis: pdf

Centre for Robotics
Copyright © 2009 - IIIT Hyderabad. All Rights Reserved.