Overview
Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach
Abstract
- real-time, object-independent grasp synthesis method for closed-loop grasping
- one-to-one mapping from a depth image (a 2015 IJRR study found experimentally that the depth image is critical to grasp prediction) overcomes limitations of current deep-learning grasping techniques by avoiding discrete sampling of grasp candidates and long computation times (candidate sampling is very time-consuming; this paper removes that step entirely)
- lightweight && single-pass (50 Hz, fast enough for closed-loop grasping)
- enables accurate grasping in non-static environments where objects move, and in the presence of robot control inaccuracies
- real-world tests: 83% on unseen objects, 88% on moving household objects, 81% in dynamic clutter
Introduction
- to perform grasping and manipulation tasks in the unstructured && dynamic environments of the real world, a robot must be able to compute grasps for an almost unlimited number of objects
- deep learning has driven the biggest advances in grasp synthesis for unknown items
- most deep-learning methods are based on adapted versions of CNN architectures designed for object detection; they have long computation times (1 s to 10 s) and rely on precise camera calibration and precise robot control, even in static environments
- GG-CNN: the term “generative” differentiates this direct grasp-generation method from methods that sample grasp candidates
- GG-CNN has two advantages over SOTA CNN-based grasp synthesis:
- directly generates grasp poses on a pixelwise basis, like semantic segmentation, rather than via sliding windows or bounding boxes
- has orders of magnitude fewer parameters than SOTA networks: 19 ms on a PC with GPU, fast enough for closed-loop grasping (see the network sketch after this list)
- closed-loop methods have clear advantages over open-loop ones
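For concreteness, here is a minimal PyTorch sketch of a GG-CNN-style fully convolutional network: a small encoder-decoder that maps a 300 × 300 depth image to four pixelwise maps (grasp quality, cos 2φ, sin 2φ, gripper width). The layer sizes are illustrative assumptions rather than the released architecture, but they land in the same tiny parameter regime (~60k parameters) that makes single-pass, closed-loop use feasible.

```python
import torch
import torch.nn as nn

class GGCNNSketch(nn.Module):
    """GG-CNN-style net: depth image in, pixelwise grasp maps out.
    Layer sizes are illustrative; the point is the tiny parameter count
    (~62k here, vs. millions for detection-style networks)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 9, stride=3, padding=4), nn.ReLU(),
            nn.Conv2d(32, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 8, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 8, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 16, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 32, 9, stride=3, padding=3), nn.ReLU(),
        )
        # One small conv head per output map.
        self.q_head = nn.Conv2d(32, 1, 1)      # grasp quality Q
        self.cos_head = nn.Conv2d(32, 1, 1)    # cos(2 * phi)
        self.sin_head = nn.Conv2d(32, 1, 1)    # sin(2 * phi)
        self.width_head = nn.Conv2d(32, 1, 1)  # gripper width W

    def forward(self, depth):  # depth: (N, 1, 300, 300)
        x = self.decoder(self.encoder(depth))
        return self.q_head(x), self.cos_head(x), self.sin_head(x), self.width_head(x)

net = GGCNNSketch()
print(sum(p.numel() for p in net.parameters()))  # ~6e4 parameters
```

The angle is regressed as (cos 2φ, sin 2φ) because an antipodal grasp is symmetric under a 180° rotation, which makes the raw angle ambiguous; the two-component encoding is unique and continuous.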
Related Work
- Grasping Unknown Objects
- Closed-loop Grasping
- Benchmarking for Robotic Grasping
Grasp Point Definition
Consider detecting and executing antipodal grasps on unknown objects, perpendicular to a planar surface; the grasp representation is formalized below.
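In the paper's formulation, a grasp is defined in the robot frame and re-expressed per pixel in image space, which is what lets the network predict one grasp per pixel (notation below follows the paper's definitions):

```latex
% A grasp in the robot frame: centre position p, rotation \phi about the
% vertical axis, gripper width w, and a quality score q.
g = (\mathbf{p}, \phi, w, q), \qquad \mathbf{p} = (x, y, z)

% The same grasp in image space: pixel centre s = (u, v), with angle
% and width expressed in image coordinates.
\tilde{g} = (\mathbf{s}, \tilde{\phi}, \tilde{w}, q)

% GG-CNN is a function M from the depth image I to three output maps,
% each the same size as I, giving one grasp per pixel:
\mathbf{G} = (\Phi, \mathbf{W}, \mathbf{Q}) = M(\mathbf{I})

% The executed grasp is the one at the pixel of highest quality:
\tilde{g}^{*} = \max_{\mathbf{Q}} \mathbf{G}
```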
Experimental Set-up
- Physical Components
- Kinova 6-DOF robot fitted with a Kinova KG-2 2-fingered gripper
- Intel RealSense SR300 RGB-D camera, mounted 80 mm above the closed fingertips and inclined at 14° towards the gripper
- PC running Ubuntu 16.04 with a 3.6 GHz Intel i7-7700 CPU and an Nvidia GeForce GTX 1070 GPU (6 ms to process a single depth image, 19 ms for the entire grasping pipeline)
(the RealSense has a specified minimum range of 200 mm, in reality closer to 150 mm; the offset between the camera's infra-red projector and sensor causes shadowing around the object in the depth image; the Kinova KG-2 gripper has a maximum stroke of 175 mm and a minimum of 15 mm)
- Test Objects
- Adversarial Set
- Household Set
- Grasp Detection Pipeline
- image processing: crop to a square, scale to 300 × 300 pixels, and inpaint invalid depth values using OpenCV
- evaluation of the GG-CNN (produces the grasp map, then filters it with a Gaussian kernel)
- computation of the grasp pose (used in both open-loop and closed-loop modes; in closed-loop evaluation, to avoid rapidly switching between multiple similarly-ranked good-quality grasps, the 3 highest local maxima of the grasp map are taken and the one closest to the grasp used on the previous iteration is selected; see the sketch below)
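A minimal Python sketch of this pipeline under stated assumptions: the quality map `q_map` is taken to come from the trained GG-CNN, and the Gaussian σ, local-maximum neighbourhood size, and helper names are illustrative choices, not values from the paper.

```python
import cv2
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def preprocess(depth, out_size=300):
    """Crop the depth image to a square, resize to out_size x out_size,
    and inpaint invalid (zero) depth readings with OpenCV."""
    h, w = depth.shape
    s = min(h, w)
    y0, x0 = (h - s) // 2, (w - s) // 2
    crop = depth[y0:y0 + s, x0:x0 + s].astype(np.float32)
    crop = cv2.resize(crop, (out_size, out_size), interpolation=cv2.INTER_NEAREST)
    invalid = (crop == 0).astype(np.uint8)  # mask of missing depth pixels
    return cv2.inpaint(crop, invalid, 3, cv2.INPAINT_NS)

def select_grasp(q_map, prev_pixel=None, sigma=2.0, n_peaks=3):
    """Smooth the grasp-quality map, then pick a grasp pixel. In closed-loop
    mode, choose among the top local maxima the one nearest the previous
    grasp, to avoid flicking between similarly ranked grasps."""
    q = gaussian_filter(q_map, sigma)
    # Local maxima: pixels equal to the maximum of their neighbourhood.
    peaks = (q == maximum_filter(q, size=11))
    ys, xs = np.nonzero(peaks)
    top = np.argsort(q[ys, xs])[::-1][:n_peaks]       # top-N maxima by quality
    candidates = np.stack([ys[top], xs[top]], axis=1)
    if prev_pixel is None:                            # open loop: take the best
        return tuple(candidates[0])
    dists = np.linalg.norm(candidates - np.asarray(prev_pixel), axis=1)
    return tuple(candidates[int(np.argmin(dists))])   # closed loop: nearest
```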
Experiments
- Static Grasping
- Dynamic Grasping
- Static Grasping in Clutter
- Dynamic Grasping in Clutter
Conclusion
The system achieves SOTA results in grasping unknown, dynamic objects, including objects in dynamic clutter.
The closed-loop grasping method significantly outperforms an open-loop method in the presence of simulated robot control error.