Method, device and system for automatically labeling target object in image

A technology relating to target objects and images, applied in the field of image processing. It can solve problems such as limited generality, inability to guarantee accuracy, and the difficulty of obtaining a CAD model of the target object, thereby achieving the effects of improved generality and easier acquisition of the required data.

Active Publication Date: 2019-04-05
ALIBABA GRP HLDG LTD

AI Technical Summary

Problems solved by technology

However, its shortcomings are also obvious: the CAD model of the target object is normally provided by the object's manufacturer or designer, and if neither can provide it, the above method cannot be used to achieve automatic labeling. In practical applications this situation is very common, that is, the CAD model of the target object is difficult to obtain, which limits the generality of the method.
Secondly, even if the CAD model of the target object can be found, tracking the target object usually relies on the object having enough feature points; when the object itself is of a pure color, highly reflective, or transparent, model-based tracking cannot guarantee sufficient accuracy, which degrades the effect of automatic labeling.



Examples


Embodiment 1

[0068] Referring to Figure 6, an embodiment of the present application provides a method for automatically labeling a target object in an image. The method may specifically include:

[0069] S601: Obtain image training samples comprising a plurality of images, where each image is obtained by photographing the same target object and adjacent images share the same environmental feature points;

[0070] The image training samples may be obtained from a target video file, or from a plurality of pre-shot photos or other files. For example, the target video file may be pre-recorded. Specifically, when machine learning is to be performed on the features of a certain target object so that the object can later be recognized in scenarios such as AR, images of the target object may be collected in advance; each picture obtained by this image collection is then used as an image training sample, and a specific target image is marked from eac...
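The labeling that this embodiment builds toward can be summarized as: place a three-dimensional space model (for example, a bounding cuboid) at the target object's position in the reference coordinate system, recover each frame's camera pose from the shared environmental feature points (for example, with a SLAM or structure-from-motion front end), and project the model into every frame to obtain a 2D annotation. The Python sketch below illustrates only that projection step under assumed conventions; the pinhole intrinsics K, the pose convention x_cam = R·x_world + t, and the frame record with "R", "t" and "path" fields are assumptions introduced for illustration, not details from the application.

```python
import numpy as np

def project_box_to_image(box_corners_world, R, t, K):
    """Project 3D box corners (Nx3, reference-frame coordinates) into one
    image plane, given that frame's camera pose (R, t) and intrinsics K."""
    pts_cam = R @ box_corners_world.T + t.reshape(3, 1)   # reference frame -> camera frame
    pts_img = K @ pts_cam                                  # camera frame -> homogeneous pixels
    return (pts_img[:2] / pts_img[2]).T                    # perspective divide, Nx2 pixel coords

def label_frames(frames, box_corners_world, K):
    """Derive a 2D bounding-box annotation for every frame whose pose was
    estimated from the environmental feature points shared between frames."""
    annotations = []
    for frame in frames:
        pts_2d = project_box_to_image(box_corners_world, frame["R"], frame["t"], K)
        x_min, y_min = pts_2d.min(axis=0)
        x_max, y_max = pts_2d.max(axis=0)
        annotations.append({"image": frame["path"],
                            "bbox": [float(x_min), float(y_min),
                                     float(x_max), float(y_max)]})
    return annotations
```

Because the box only has to be positioned once, in the reference image, every other frame receives its annotation automatically from its recovered camera pose.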

Embodiment 2

[0086] The second embodiment is an application of the automatic labeling method provided in the first embodiment: after the automatic labeling of the target object in the image training samples is completed, the result can be applied to the process of creating a target object recognition model. Specifically, Embodiment 2 of the present application provides a method for establishing a target object recognition model; referring to Figure 7, the method may specifically include:

[0087] S701: Obtain image training samples comprising a plurality of images, where each image is obtained by photographing the same target object and adjacent images share the same environmental feature points; each image further carries annotation information on the position of the target object, the annotation information being obtained by taking one of the images as a reference image, creating a three-dimensional space model based on the reference three-dimensional coordinate system, and determining ...
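As one hedged illustration of how the automatically generated annotations from Embodiment 1 could feed the creation of a recognition model, the sketch below turns each labeled region into a (crop, label) training pair. The field names and the crop-based approach are assumptions introduced here for illustration, not the model-building procedure claimed in the application.

```python
from PIL import Image

def build_training_crops(annotations, label="target_object"):
    """Turn auto-labeled samples (image path + projected 2D bbox) into
    (crop, label) pairs that a recognition model can be trained on."""
    samples = []
    for ann in annotations:
        img = Image.open(ann["image"]).convert("RGB")
        x_min, y_min, x_max, y_max = ann["bbox"]
        crop = img.crop((int(x_min), int(y_min), int(x_max), int(y_max)))
        samples.append((crop, label))
    return samples
```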

Embodiment 3

[0091] The third embodiment further provides a method for presenting augmented reality (AR) information on the basis of the second embodiment. Specifically, referring to Figure 8, the method may include:

[0092] S801: Collect a real-scene image, and use a pre-established target object recognition model to identify the position information of the target object from the real-scene image, wherein the target object recognition model is established by the method of the second embodiment above;

[0093] S802: Determine a display position of an associated virtual image according to the position information of the target object in the real-scene image, and display the virtual image.

[0094] In a specific implementation, when the position of the target object in the real-scene image changes, the position of the virtual image follows the change of the target object's position in the real-scene image.
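A minimal per-frame sketch of S801 and S802 under assumed interfaces follows; recognize and draw_virtual are hypothetical callables standing in for the recognition model and the rendering layer, and the anchor choice is likewise only an example.

```python
def render_ar_frame(frame, recognize, draw_virtual):
    """One AR update: locate the target in the real-scene frame with the
    pre-established recognition model, then anchor the virtual image to it."""
    bbox = recognize(frame)                   # e.g. [x_min, y_min, x_max, y_max] or None
    if bbox is None:
        return frame                          # target not visible: show the plain frame
    x_min, y_min, x_max, y_max = bbox
    anchor = ((x_min + x_max) / 2.0, y_min)   # e.g. centered above the detected target
    return draw_virtual(frame, anchor)        # the overlay follows the target each frame
```

Re-running this step on every captured frame is what makes the virtual image track the target as it moves.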

[0095] However, in the prior art, it often occurs that the positions of the virtual image and...



Abstract

The embodiment of the invention discloses a method, device and system for automatically labeling a target object in an image. The method comprises the steps of: obtaining image training samples comprising a plurality of images, wherein each image is obtained by photographing the same target object and adjacent images share the same environmental feature points; taking one image as a reference image, determining a reference coordinate system, and creating a three-dimensional space model based on the reference three-dimensional coordinate system; when the three-dimensional space model is moved to the position of the target object in the reference image, determining position information of the target object in the reference three-dimensional coordinate system; and mapping the three-dimensional space model to the image plane of each image according to that image's camera attitude information, which is determined from the environmental feature points in the image. According to the embodiment of the invention, automatic image labeling can be carried out more accurately and effectively, and the generality of the method is improved.
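The mapping step described in the abstract corresponds to a standard perspective projection of the three-dimensional space model into each frame. The symbols below are assumptions introduced for illustration and do not appear in the application: a model point P, expressed in the reference coordinate system, lands in image i according to that image's camera attitude (R_i, t_i), recovered from the shared environmental feature points, and its intrinsics K_i:

```latex
\tilde{p}_i \sim K_i \,[\, R_i \mid t_i \,]
\begin{pmatrix} P \\ 1 \end{pmatrix},
\qquad
p_i = \left( \frac{\tilde{p}_{i,x}}{\tilde{p}_{i,z}},\; \frac{\tilde{p}_{i,y}}{\tilde{p}_{i,z}} \right)
```

The 2D annotation of the target object in image i can then be taken, for example, as the bounding region of the projected model points p_i.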

Description

Technical field

[0001] The present application relates to the technical field of image processing, and in particular to a method, device and system for automatically labeling a target object in an image.

Background technique

[0002] In AR/VR and other related businesses, machine learning methods are widely used to recognize scenes or objects in images. The machine learning process requires a large number of image training samples, and the target object needs to be labeled in these samples. Labeling means marking the position of the target object in the image, so that feature extraction for learning can be performed on the target object's image region.

[0003] In the prior art, there are mainly two types of labeling for image training samples: one is labeling based on two-dimensional images, and the other is labeling based on three-dimensional images of object CAD models. The so-called two-dimensional image annota...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/73; G06K9/00; G06V10/764
CPC: G06T7/74; G06T2207/30244; G06T2207/10004; G06V20/64; G06T7/75; G06T2207/20081; G06T2207/20084; G06T2207/20104; G06V10/764; G06F18/2413; G06T2207/30204; G06V20/20
Inventor: 李博韧, 谢宏伟
Owner: ALIBABA GRP HLDG LTD