
Training method and device of detection segmentation model, and target detection method and device

A technology relating to detection and segmentation models and their training methods, applied in neural learning methods, biological neural network models, image analysis, etc., achieving the effects of improving segmentation accuracy and reducing the required quantity of labeled samples.

Active Publication Date: 2022-01-21
HANGZHOU SUPERACME MICROELECTRONICS TECH CO LTD

Problems solved by technology

According to relevant statistics, pixel-level labeling of a single image frame takes about one minute on average, so obtaining large-scale, accurate pixel-level labeled sample data requires substantial manpower and time.



Examples


Embodiment 1

[0107] Referring to Figure 2, a schematic flowchart of training a detection and segmentation model in one embodiment. The training method comprises a first stage that executes step 201 and a second stage that executes step 202, wherein:

[0108] Step 201: train on a detection and segmentation task using pixel-level labeled sample data to obtain a detection and segmentation model with first model parameters.

[0109] Referring to Figure 3, a schematic diagram of the framework for training the detection and segmentation model with pixel-level labeled sample data in Embodiment 1. The backbone network is connected to the detection and segmentation model.

[0110] Step 2011: when pixel-level labeled sample data is input to the backbone network, for example, one frame of pixel-level labeled sample image data, the backbone network extracts the features in the pixel-level labeled sample data to ...
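The first-stage flow above (backbone extracts features, a segmentation head scores pixels, and pixel-level labels drive the parameter update) can be sketched in a few lines. This is a minimal pure-Python toy with a 1-D "image" and hand-derived gradients, not the patent's implementation; all function names (`backbone`, `seg_head`, `train_stage1`) are hypothetical.

```python
# Hypothetical sketch of first-stage training (step 201): the backbone
# extracts features from pixel-level labeled images, a segmentation head
# scores each pixel, and the loss against the pixel-level labels updates
# the "first model parameters" (here just w and b). Toy 1-D model.

def backbone(image, w):
    # toy feature extractor: one learned scale per pixel
    return [w * px for px in image]

def seg_head(features, b):
    # toy segmentation head: a score per pixel
    return [f + b for f in features]

def train_stage1(samples, lr=0.01, epochs=2000):
    w, b = 0.5, 0.0  # first model parameters, toy initialization
    for _ in range(epochs):
        for image, labels in samples:
            scores = seg_head(backbone(image, w), b)
            n = len(labels)
            # hand-derived gradients of the per-pixel MSE for this toy model
            gw = sum(2 * (s - y) * px for s, y, px in zip(scores, labels, image)) / n
            gb = sum(2 * (s - y) for s, y in zip(scores, labels)) / n
            w -= lr * gw
            b -= lr * gb
    return w, b

# one "frame" of pixel-level labeled sample data: image -> per-pixel label
samples = [([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])]
w, b = train_stage1(samples)
```

In a real system the backbone would be a convolutional network and the loss a per-pixel cross-entropy, but the control flow (feature extraction, head, supervised update from pixel-level labels) is the same shape.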

Embodiment 2

[0146] Referring to Figure 5, a schematic flowchart of training the detection and segmentation model in Embodiment 2. The training method comprises a first stage that executes step 501 and a second stage that executes step 502, wherein:

[0147] Step 501: use pixel-level labeled sample data to conduct multi-task training, covering the detection and segmentation task and a target classification task, to obtain a detection and segmentation model with the first model parameters and an image classification model with third model parameters.

[0148] Referring to Figure 6, a schematic diagram of the framework for training the detection and segmentation model with pixel-level labeled sample data in Embodiment 2. The backbone network is connected to the image classification model and the detection and segmentation model in parallel. Step 5011: when pixel-level labeled sample data is input to the backbone network, for example, a frame of pixel-level labeled sample im...
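The Embodiment-2 setup (a shared backbone feeding a segmentation head and a classification head in parallel, with both losses updating the shared parameters) can be sketched as follows. This is a pure-Python toy with hand-derived gradients, not the patent's code; the bilinear classification head and all names are illustrative assumptions.

```python
# Hypothetical sketch of step 501: a shared backbone feeds two parallel
# heads (segmentation and image classification); the sum of both losses
# updates the shared parameter w, while b and c belong to each head.

def backbone(image, w):
    return [w * px for px in image]        # shared features

def seg_head(features, b):
    return [f + b for f in features]       # per-pixel scores

def cls_head(features, c):
    return c * sum(features) / len(features)  # image-level score

def multitask_step(image, pix_labels, img_label, params, lr=0.01):
    w, b, c = params
    feats = backbone(image, w)
    seg = seg_head(feats, b)
    cls = cls_head(feats, c)
    n = len(pix_labels)
    # gradients of (per-pixel MSE) + (squared image-level error), by hand
    gw_seg = sum(2 * (s - y) * px for s, y, px in zip(seg, pix_labels, image)) / n
    gb = sum(2 * (s - y) for s, y in zip(seg, pix_labels)) / n
    mean_px = sum(image) / len(image)
    gw_cls = 2 * (cls - img_label) * c * mean_px
    gc = 2 * (cls - img_label) * w * mean_px
    return (w - lr * (gw_seg + gw_cls), b - lr * gb, c - lr * gc)

params = (0.5, 0.0, 0.5)
for _ in range(5000):
    params = multitask_step([1.0, 2.0, 3.0], [2.0, 4.0, 6.0], 4.0, params)
```

The design point the sketch illustrates is that both task gradients flow into the shared backbone parameter `w`, so the classification task acts as an auxiliary signal for the segmentation features.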


Abstract

The invention discloses a training method for a detection and segmentation model. In a first stage, a detection and segmentation model to be trained is trained with pixel-level labeled sample data, yielding a detection and segmentation model with first model parameters. In a second stage, each training iteration randomly selects one piece of image-level category-labeled sample data and one piece of pixel-level labeled sample data, inputs both into the current detection and segmentation model, and trains it; the second-stage process is iterated until all training finishes, yielding a detection and segmentation model with second model parameters, which is taken as the trained detection and segmentation model. The invention improves the segmentation accuracy of the detection and segmentation model and reduces the required quantity of pixel-level labeled sample data.
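The second-stage loop described in the abstract, where each iteration randomly draws one image-level sample and one pixel-level sample and feeds both to the current model, can be sketched as a sampling skeleton. The update rule is a placeholder and every name (`second_stage`, `update`) is a hypothetical illustration, not the patent's code.

```python
# Hypothetical sketch of the abstract's second stage: per iteration,
# randomly select one image-level category-labeled sample and one
# pixel-level labeled sample, train the current model on both, and
# repeat until training finishes, producing the second model parameters.
import random

def second_stage(model, image_level_data, pixel_level_data, iterations, update):
    for _ in range(iterations):
        img_sample = random.choice(image_level_data)   # image-level category label
        pix_sample = random.choice(pixel_level_data)   # pixel-level label
        model = update(model, img_sample, pix_sample)  # train current model on both
    return model

# toy usage: a stub update that just counts how often each kind is seen
counts = {"img": 0, "pix": 0}
def update(model, img_sample, pix_sample):
    counts["img"] += 1
    counts["pix"] += 1
    return model

model = second_stage({"params": 0}, ["cat", "dog"], ["mask_1"], 10, update)
```

The point of the pairing is that the cheap image-level labels supplement the scarce pixel-level labels at every step, which is how the method reduces the quantity of pixel-level data needed.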

Description

technical field

[0001] The invention relates to the field of machine learning, and in particular to a training method for a detection and segmentation model.

Background technique

[0002] With the development of artificial intelligence (AI) technology, machine learning models have become the basis for realizing AI. The accuracy of a machine learning model on a task depends on training it with a training set of sample data.

[0003] A detection and segmentation model can be used for target detection and to distinguish different instances of targets of the same category, for example, identifying sheep in different forms in an image (different forms can be understood as different instances), so the detection and segmentation model can also segment targets of the same category.

[0004] However, the training of detection and segmentation models relies on large-scale pixel-le...

Claims


Application Information

IPC(8): G06T7/00; G06T7/12; G06V10/26; G06V10/44; G06V10/764; G06V10/82; G06K9/62; G06N3/04; G06N3/08
CPC: G06T7/0004; G06T7/12; G06N3/08; G06T2207/10016; G06T2207/20081; G06T2207/20084; G06T2207/30108; G06N3/045; G06F18/241
Inventors: 艾国, 杨作兴, 房汝明, 向志宏
Owner HANGZHOU SUPERACME MICROELECTRONICS TECH CO LTD