The invention discloses a robot grasping pose estimation method based on an object recognition deep learning model, which relates to the technical field of computer vision. The method is based on an RGBD camera and deep learning, and comprises the following steps. S1: camera parameter calibration and hand-eye calibration are carried out.
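As a minimal sketch of how S1 might be implemented, assuming OpenCV is used for both the intrinsic calibration and an eye-in-hand hand-eye calibration, with chessboard correspondences and robot poses supplied by the caller (the library choice and all names below are illustrative, not taken from the patent):

```python
import cv2


def calibrate_camera_and_hand_eye(obj_points, img_points, image_size,
                                  R_gripper2base, t_gripper2base):
    """Hypothetical S1 helper: intrinsic calibration followed by hand-eye calibration."""
    # Camera intrinsics and distortion from chessboard corner correspondences.
    _, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)

    # The board pose recovered for each image serves as the target-to-camera transform.
    R_target2cam = [cv2.Rodrigues(r)[0] for r in rvecs]
    t_target2cam = list(tvecs)

    # Eye-in-hand calibration: pose of the camera relative to the gripper.
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base, R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI)
    return K, dist, R_cam2gripper, t_cam2gripper
```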
S2: an object detection model is trained.
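Because the closing sentence of the abstract names a YOLO detector, S2 could plausibly be carried out with an off-the-shelf YOLO framework; the snippet below assumes the Ultralytics package and a hypothetical dataset file `grasp_objects.yaml` listing the graspable object classes:

```python
from ultralytics import YOLO

# Fine-tune a pretrained checkpoint on the grasping dataset (hypothetical file names).
model = YOLO("yolov8n.pt")
model.train(data="grasp_objects.yaml", epochs=100, imgsz=640)
model.export(format="onnx")  # optional: export for deployment on the vision system
```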
S3: a three-dimensional point cloud template library of the target object is established.
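S3 presumably stores one reference point cloud per object class; a sketch with Open3D (the library, file layout, and parameters are assumptions) might look like this:

```python
import glob
import os

import open3d as o3d


def build_template_library(template_dir, voxel_size=0.005):
    """Load one reference cloud per object class and prepare it for later matching."""
    library = {}
    for path in glob.glob(os.path.join(template_dir, "*.pcd")):
        name = os.path.splitext(os.path.basename(path))[0]  # class name from the file name
        cloud = o3d.io.read_point_cloud(path)
        cloud = cloud.voxel_down_sample(voxel_size)  # uniform density for registration
        cloud.estimate_normals()                     # normals for point-to-plane ICP in S6
        library[name] = cloud
    return library
```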
S4: the type and position of each article in the area to be grasped are identified. S5: two-dimensional and three-dimensional vision information is fused to obtain the point cloud of the specific target object.
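For S4 and S5, one plausible realization (variable names and conventions are illustrative) runs the trained detector on the RGB image, then crops the aligned depth map with the detected box and back-projects it into a camera-frame point cloud using the intrinsics K from S1:

```python
import numpy as np


def bbox_to_point_cloud(depth, bbox, K, depth_scale=0.001):
    """Back-project the depth pixels inside a 2-D detection box into 3-D camera-frame points.

    depth: depth image aligned to the RGB frame, bbox: (x1, y1, x2, y2) from the detector,
    K: 3x3 intrinsic matrix, depth_scale: metres per depth unit.
    """
    x1, y1, x2, y2 = (int(v) for v in bbox)
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]

    us, vs = np.meshgrid(np.arange(x1, x2), np.arange(y1, y2))
    zs = depth[y1:y2, x1:x2].astype(np.float64) * depth_scale
    valid = zs > 0  # drop pixels with missing depth

    xs = (us - cx) * zs / fx
    ys = (vs - cy) * zs / fy
    return np.stack([xs[valid], ys[valid], zs[valid]], axis=1)  # (N, 3) points in metres
```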
S6: pose estimation of the target object is completed.
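S6 amounts to registering the class template from S3 onto the segmented cloud from S5; a common way to do this (a sketch, not necessarily the matching scheme claimed by the patent) is ICP refinement in Open3D:

```python
import numpy as np
import open3d as o3d


def estimate_pose(target_points, template, voxel_size=0.005):
    """Register the class template onto the segmented target cloud; return a 4x4 pose."""
    scene = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(target_points))
    scene = scene.voxel_down_sample(voxel_size)
    scene.estimate_normals()

    # Point-to-plane ICP from an identity initial guess; in practice a coarse global
    # registration (e.g. RANSAC over FPFH features) would seed this transform.
    result = o3d.pipelines.registration.registration_icp(
        template, scene, max_correspondence_distance=5 * voxel_size,
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation  # template-to-scene transform, i.e. the object pose
```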
S7: an error avoidance algorithm based on sample accumulation is adopted. S8: steps S4 to S7 are continuously repeated by the vision system while the robot end effector moves toward the target object, so as to realize iterative optimization of the pose estimation of the target object.
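The abstract does not spell out the sample-accumulation scheme behind S7 and S8, so the following is purely an illustration of the idea: pose estimates produced while the arm approaches the object are accumulated, gross outliers are rejected against the running median, and the surviving samples are averaged:

```python
import numpy as np


class PoseAccumulator:
    """Illustrative S7/S8 helper: accumulate recent pose samples and filter gross errors."""

    def __init__(self, max_samples=10, translation_tol=0.02):
        self.samples = []                       # most recent 4x4 pose estimates
        self.max_samples = max_samples
        self.translation_tol = translation_tol  # metres

    def add(self, pose_4x4):
        self.samples.append(np.asarray(pose_4x4))
        self.samples = self.samples[-self.max_samples:]

    def fused_translation(self):
        """Median-filtered mean translation; None until enough samples have accumulated."""
        if len(self.samples) < 3:
            return None
        t = np.array([p[:3, 3] for p in self.samples])
        median = np.median(t, axis=0)
        inliers = t[np.linalg.norm(t - median, axis=1) < self.translation_tol]
        return inliers.mean(axis=0) if len(inliers) else median
```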
The algorithm of the invention utilizes the YOLO target detection model for fast early-stage target detection, which reduces the computational cost of three-dimensional point cloud segmentation and matching and improves operating efficiency and accuracy.