Hand and eye coordinate converting method of visual positioning robot

A robot hand-eye coordinate transformation technology, applied in the field of coordinate conversion, which addresses problems such as insufficient precision and errors that are irregular and inconsistent across directions, and achieves strong robustness.

Active Publication Date: 2019-01-18
HEFEI INSTITUTES OF PHYSICAL SCIENCE - CHINESE ACAD OF SCI

Problems solved by technology

The limitations of this method in actual use include: the exact transformation from the visual reference point to the kinematics reference point cannot be obtained by measurement, and the installation of the robot often cannot ensure that the visual coordinate system and the kinematics coordinate system are related by a pure translation. As a result, the coordinates in the kinematics coordinate system obtained by a translation-only transformation are not accurate in the subsequent geometric transformation, and a corresponding rotation transformation needs to be added.
Therefore, the accuracy of traditional coordinate acquisition and conversion methods usually cannot meet the requirements, and the errors are irregular and inconsistent across directions.
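The point above — that a pure translation between the visual and kinematics frames is generally insufficient — can be sketched numerically. This is an illustration with made-up rotation and offset values, not the patent's actual calibration procedure:

```python
import numpy as np

def rigid_transform(p_v, R, t):
    """Map a point from the visual frame into the kinematics frame.

    A translation-only mapping (R = identity) is correct only when the
    two frames happen to be axis-aligned; in general the rotation R
    must be applied as well.
    """
    return R @ np.asarray(p_v, dtype=float) + np.asarray(t, dtype=float)

# Illustrative values: the frames differ by a 90-degree yaw plus an offset.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.5, 0.0, 0.2])

p_visual = np.array([1.0, 0.0, 0.0])
p_translation_only = p_visual + t          # naive: translation only
p_full = rigid_transform(p_visual, R, t)   # rotation + translation
# The two results differ, so a translation-only conversion misplaces the target.
```

Here `p_full` is `[0.5, 1.0, 0.2]` while `p_translation_only` is `[1.5, 0.0, 0.2]`; the discrepancy depends on the direction of the point, which is exactly the kind of direction-dependent error described above.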




Embodiment Construction

[0043] In this embodiment, the hand-eye coordinate conversion method of the visual positioning robot is as follows:

[0044] Referring to figure 1, the rotary platform 1 of the mechanical arm chassis is set at the central area 5 of the fixed support device 2, and the visual sensor platform 4 is arranged on the rotary platform 1 so that it rotates together with the rotary platform 1. The visual sensor platform 4 itself has two degrees of freedom, horizontal rotation and vertical rotation, forming a biaxial rotation structure with a horizontal rotation shaft and a vertical rotation shaft. A visual sensor is set on the visual sensor platform 4 to obtain detection data; it comprises a laser distance sensor used to detect the straight-line distance d from the target to the visual sensor, a horizontal angle sensor used to detect the horizontal rotation angle α of the visual sensor, and a vertical angle sensor used to detect the ...
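The sensor readings described above (distance d, horizontal angle α, and a vertical angle) suffice to recover Cartesian coordinates in the visual frame. A minimal sketch, assuming a standard spherical-to-Cartesian convention and using β as a placeholder name for the vertical angle, which the source text truncates before naming:

```python
import math

def visual_frame_coords(d, alpha, beta):
    """Convert pan-tilt sensor readings into Cartesian coordinates.

    d     -- straight-line distance to the target (laser distance sensor)
    alpha -- horizontal rotation angle, in radians
    beta  -- vertical (elevation) angle, in radians; the name beta is an
             assumption, since the source truncates before naming the
             vertical angle sensor's output

    Standard spherical-to-Cartesian conversion; the axis conventions of
    the patent's visual coordinate system are assumed, not quoted.
    """
    x = d * math.cos(beta) * math.cos(alpha)
    y = d * math.cos(beta) * math.sin(alpha)
    z = d * math.sin(beta)
    return x, y, z
```

For example, a target dead ahead at 2 m (alpha = beta = 0) maps to (2, 0, 0) in the visual frame.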



Abstract

The invention discloses a hand-eye coordinate conversion method for a visual positioning robot. A visual sensor pan-tilt platform that rotates together with a rotating platform is arranged on the rotating platform of a mechanical arm chassis; the pan-tilt platform is a dual-shaft rotating structure with a horizontal rotating shaft and a vertical rotating shaft. Detection data are obtained by a visual sensor on the pan-tilt platform, and through coordinate conversion of the detection data between a visual coordinate system and a kinematics coordinate system, the robot drives the mechanical arm to move to the target for positioning. The errors caused by coordinate acquisition and conversion are effectively reduced, and the method can be widely used for coordinate conversion processing of various visual positioning robots.

Description

technical field
[0001] The invention relates to a coordinate conversion method, and more specifically to a hand-eye coordinate conversion method for a visual positioning robot, which includes acquiring coordinates and converting them into a usable form.
Background technique
[0002] In the process of visual positioning movement, there are transformations between coordinate systems. The traditional method is to deduce the coordinate transformation from vision to kinematics according to the geometric relationship between several coordinate systems. The limitations of this method in actual use include: the exact transformation from the visual reference point to the kinematics reference point cannot be obtained by measurement, and the installation of the robot often cannot ensure that the visual coordinate system and the kinematics coordinate system are related by a pure translation. As a result, the coordinates in the kinematics coordinate system obt...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): B25J9/16
CPC: B25J9/16, B25J9/1697
Inventor: 王容川, 赵江海, 施亚军
Owner: HEFEI INSTITUTES OF PHYSICAL SCIENCE - CHINESE ACAD OF SCI