
Radar-assisted camera calibration method based on deep learning

A camera calibration technology based on deep learning, applied to neural learning methods, computer components, image data processing, etc. The method avoids complex mathematical models and cumbersome calculations, reduces the number of parameters and the amount of computation, and avoids the gradient explosion problem.

Pending Publication Date: 2021-12-17
XIDIAN UNIV

AI Technical Summary

Problems solved by technology

[0004] Traditional camera calibration methods require the cooperation of specific auxiliary calibration boards and calibration equipment, obtaining the internal and external parameters of the camera model by establishing correspondences between points with known coordinates on the calibration object and their image target points. For example, "Research on the Calibration Method of Extrinsic Parameters of 2D Laser Radar and Visible Light Camera" uses measurement data from the two sensors in different poses and fits parameters to obtain the external parameters of the camera; "Research on Online Calibration of Lidar and Camera for Autonomous Vehicles" uses the classic directional solution method to calculate the rigid transformation between the camera and the lidar and thereby obtains the camera's external parameters from the two sensors. Like Zhang Zhengyou's calibration method, the pseudo-inverse method, and the least-squares fitting method, these traditional approaches must establish complex, purpose-built mathematical models and complete cumbersome calculations.
[0005] Among camera calibration methods based on deep learning, "An Automatic Calibration Method for Millimeter-Wave Radar and Camera", for example, uses deep learning to calibrate millimeter-wave radar traces against image target point coordinates, which reduces the calibration workload and combines the intrinsic calibration of the camera with the extrinsic calibration of the sensors; however, the parameter count and computational cost of its deep neural network are relatively large. "A Camera Calibration Method, System and Process Based on Deep Learning" solves the problem that the position and attitude of the camera must remain unchanged in existing visual measurement systems, and avoids the complicated calculation of related mathematical models and physical variables as well as the dependence on dedicated fixed auxiliary structures, but it still involves a relatively complicated process and a large amount of computation.
[0006] In industrial applications, although deep-learning-based camera calibration removes the complex mathematical models and cumbersome calculations of traditional calibration, it still carries a large parameter count and computational cost, which hinders industrial deployment. Moreover, both traditional and deep-learning-based calibration methods only convert radar trace data into target points in the image; yet in image target detection the target appears as a target frame (bounding box), so converting radar traces only into image target points causes several radar traces to correspond to the same image target frame, degrading the accuracy of the system.
[0007] Most existing camera calibration technologies require complex mathematical models and cumbersome calculations, can calibrate only either the internal or the external parameters of the camera, and offer limited flexibility and generalization. During calibration they convert radar traces to image target points rather than to image target frame information, so the accuracy of multi-sensor fusion tasks is low, and subsequent radar-camera data association and multi-sensor fusion object detection tasks are inconvenient.

Method used



Examples


Embodiment 1

[0035] With the rapid development of new technologies such as autonomous driving and unmanned monitoring, and given the inevitable limitations of any single sensor, the industrial field often adopts multi-sensor fusion solutions. Cameras and radars are currently the main ranging sensor components and are widely used in multi-sensor data fusion. Since the camera and radar are installed in different locations, their coordinate systems differ spatially, so the camera and radar need to be spatially calibrated.

[0036] Traditional camera calibration methods require the cooperation of specific auxiliary calibration boards and calibration equipment, obtaining the internal and external parameters of the camera model by establishing correspondences between points with known coordinates on the calibration object and their image target points, such as Zhang Zhengyou's calibration method, the pseudo-inverse method, and the least-squares fitting method. These traditional came...

Embodiment 2

[0051] The radar-assisted camera calibration method based on deep learning is the same as in Embodiment 1; constructing the deep neural network model in step 5 includes the following steps:

[0052] 5.1: Construct the overall framework of the deep neural network model: the input of the deep neural network model is the training input data set constructed from the radar traces. The input data set is fed into a fully connected layer with input dimension 2 and output dimension p to obtain a p-dimensional vector; this p-dimensional vector then passes through K consecutive RC layers, and the output of the last RC layer is fed into a fully connected layer with input dimension p and output dimension 4. The input dimension of each RC layer is p ≥ 2 and the number of RC layers is K ≥ 1, where p and K are integers chosen according to the amount of input data (a minimal sketch appears below, after step 5.2).

[0053] 5.2: Construct the RC layer in the deep neural network model: the input of the RC layer in the deep neural network model is a ...
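The RC layer's definition is truncated above; however, the abstract's mention of a model "containing the cross-layer link" suggests a residual-style block. The following PyTorch sketch renders steps 5.1-5.2 under that assumption; the class names, the activation choice, and the default values of p and K are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of steps 5.1-5.2, assuming the RC layer is a fully connected
# layer with a residual (cross-layer) connection. Names and defaults are
# hypothetical.
import torch
import torch.nn as nn

class RCLayer(nn.Module):
    """Assumed RC layer: FC(p -> p) plus a residual cross-layer link."""
    def __init__(self, p: int):
        super().__init__()
        self.fc = nn.Linear(p, p)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual link feeds the input past the layer, one common way to
        # mitigate the gradient-explosion issue the patent cites.
        return self.act(self.fc(x)) + x

class RadarToFrameNet(nn.Module):
    """FC(2 -> p), K consecutive RC layers, then FC(p -> 4), per step 5.1."""
    def __init__(self, p: int = 64, k: int = 4):
        super().__init__()
        assert p >= 2 and k >= 1, "step 5.1 requires p >= 2 and K >= 1"
        self.fc_in = nn.Linear(2, p)    # 2-D radar trace -> p-dim vector
        self.rc = nn.Sequential(*[RCLayer(p) for _ in range(k)])
        self.fc_out = nn.Linear(p, 4)   # -> 4 target-frame values (step 6.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc_out(self.rc(self.fc_in(x)))

# Example: a batch of 8 two-dimensional radar traces.
model = RadarToFrameNet(p=64, k=4)
frames = model(torch.randn(8, 2))       # shape (8, 4)
```

Because p and K scale with the amount of input data, such a network stays small relative to the image-based calibration networks criticized in [0005], consistent with the patent's low-computation claim.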

Embodiment 3

[0056] The radar-assisted camera calibration method based on deep learning is the same as in Embodiments 1-2; training the deep neural network model described in step 6 includes the following steps:

[0057] 6.1: Forward propagation of the deep neural network model: the training input data set constructed from the radar traces serves as the input of the deep neural network model, and the output of the model is obtained through the network described in step 5. The output consists of the horizontal-axis coordinate of the bottom center of the image target frame, the vertical-axis coordinate of the bottom center of the image target frame, the height of the image target frame, and the width of the image target frame.

[0058] 6.2: Construct a point-to-frame IOA loss function: In the deep neural network model, construct a point-to-frame IOA loss function IOA, an...
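The exact form of the IOA loss is truncated in this excerpt, so the sketch below is an illustration only: it assumes "point-to-frame IOA" compares the predicted frame against the ground-truth frame through an intersection-over-area ratio, using the (bottom-center x, bottom-center y, height, width) parameterization from step 6.1. The function names and the precise ratio are assumptions, not the patent's formula.

```python
# Illustrative IOA-style loss; the patent's own definition is not visible here.
import torch

def frame_to_corners(f: torch.Tensor) -> torch.Tensor:
    """(bottom-center x, bottom-center y, h, w) -> (x1, y1, x2, y2).

    Image coordinates: y grows downward, so the top of the box is y - h.
    """
    x, y, h, w = f.unbind(dim=-1)
    return torch.stack((x - w / 2, y - h, x + w / 2, y), dim=-1)

def ioa_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """1 - (intersection area / ground-truth area), averaged over the batch."""
    p, t = frame_to_corners(pred), frame_to_corners(target)
    # Overlap extents, clamped at zero when the frames do not intersect.
    iw = (torch.min(p[..., 2], t[..., 2]) - torch.max(p[..., 0], t[..., 0])).clamp(min=0)
    ih = (torch.min(p[..., 3], t[..., 3]) - torch.max(p[..., 1], t[..., 1])).clamp(min=0)
    target_area = (t[..., 2] - t[..., 0]) * (t[..., 3] - t[..., 1])
    return (1.0 - iw * ih / (target_area + 1e-8)).mean()
```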



Abstract

The invention discloses a radar-assisted camera calibration method based on deep learning, and solves the technical problem of converting a radar plot into an image target frame. The method comprises the following steps: acquiring data to form radar plots and image target frame data; aligning the acquisition times; forming a training input data set from the radar plot data; forming a training output data set from the converted image target frame data; constructing and training a deep neural network model; and obtaining a camera calibration function. The method uses a deep neural network model containing cross-layer links to convert radar plot data into image target frame data and form the calibration function, which reduces personal error and extra work in the calibration process and improves the flexibility and efficiency of camera calibration; the amount of calculation is small and the calibration accuracy is high. The method is used for multi-sensor fusion target detection, and more particularly for camera calibration when radar and cameras perform target detection at the same time.
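As a hypothetical end-to-end illustration of the steps listed above, the sketch below trains the network on synthetic stand-ins for the time-aligned radar plots and converted target frames; it assumes the RadarToFrameNet and ioa_loss sketches from the embodiments are in scope.

```python
# Hypothetical training loop; RadarToFrameNet and ioa_loss come from the
# sketches above, and the data below are synthetic placeholders.
import torch

model = RadarToFrameNet(p=64, k=4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

radar_plots = torch.randn(256, 2)   # stand-in for the training input data set
gt_frames = torch.rand(256, 4)      # stand-in for converted target frames

for epoch in range(100):
    optimizer.zero_grad()
    loss = ioa_loss(model(radar_plots), gt_frames)
    loss.backward()
    optimizer.step()

# After training, the model itself acts as the camera calibration function,
# mapping radar plots directly to image target frames.
with torch.no_grad():
    frames = model(torch.randn(5, 2))
```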

Description

Technical field

[0001] The invention belongs to the technical field of multi-sensor data fusion and mainly relates to the conversion between radar trace data and image target position information; specifically, it is a radar-assisted camera calibration method based on deep learning, which can be used for camera calibration tasks when a radar and a camera are present at the same time.

Background technique

[0002] With the rapid development of new technologies such as autonomous driving and unmanned monitoring, and because of the inevitable limitations of a single sensor, the accuracy and stability of target detection with a single sensor are low. Therefore, to improve the accuracy and stability of target detection, multi-sensor fusion solutions are often adopted in the industrial field. Cameras and radars are currently the main ranging sensor components. Cameras are cheap and can quickly capture environmental information and perform digital image processing to complete target detection, b...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/80, G06T7/246, G06K9/62, G06N3/04, G06N3/08
CPC: G06T7/80, G06T7/246, G06N3/084, G06T2207/10044, G06N3/045, G06F18/25, G06F18/214
Inventor: 杨淑媛, 翟蕾, 高全伟, 武星辉, 杨莉, 龚龙雨, 李璐宇, 柯希鹏, 李奕彤, 马宏斌, 王敏
Owner XIDIAN UNIV