
Network training method, image processing method, network, terminal device and medium

A network training method and related technology, applied in the field of image processing, addressing problems such as the mask failing to accurately represent the contour edge of the target object, the target object not being accurately segmented, and poor results when replacing the image background.

Active Publication Date: 2020-01-07
GUANGDONG OPPO MOBILE TELECOMM CORP LTD

AI Technical Summary

Problems solved by technology

[0003] However, the mask output by current image segmentation networks cannot accurately represent the contour edge of the target object, so the target object cannot be accurately segmented, resulting in poor results when replacing the image background.



Examples


Embodiment 1

[0043] The following describes the training method for the image segmentation network provided by Embodiment 1 of the present application. Referring to Figure 2, the training method includes:

[0044] In step S101, sample images containing the target object, a sample mask corresponding to each sample image, and sample edge information corresponding to each sample mask are obtained, where each sample mask indicates the image area in which the target object is located in the corresponding sample image, and each piece of sample edge information indicates the contour edge of the image area indicated by the corresponding sample mask;
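The sample edge information described in step S101 can be derived directly from a binary sample mask. A minimal sketch of one such derivation (the 4-neighbour contour rule and the function name are assumptions for illustration, not the patent's stated method):

```python
import numpy as np

def mask_to_edge(mask: np.ndarray) -> np.ndarray:
    """Return a binary edge map: a pixel is an edge pixel if it is
    foreground (1) in the mask but has at least one background (0)
    4-neighbour, i.e. it lies on the contour of the masked region."""
    padded = np.pad(mask, 1, constant_values=0)
    up = padded[:-2, 1:-1]
    down = padded[2:, 1:-1]
    left = padded[1:-1, :-2]
    right = padded[1:-1, 2:]
    interior = up & down & left & right  # 1 only strictly inside the region
    return (mask & (1 - interior)).astype(mask.dtype)

# A 3x3 foreground block has exactly one interior pixel, so the
# edge map is the ring of 8 pixels around it.
mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 1
edge = mask_to_edge(mask)
```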

[0045] In the embodiment of the present application, a set of sample images can first be obtained from a data set, and the number of sample images used to train the image segmentation network can then be expanded in the following ways: mirror inversion, scaling and/or gamma adjustment, etc., s...
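The three augmentations named above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the nearest-neighbour resize and the specific function names are assumptions, and geometric transforms are applied identically to image and mask while gamma only touches the image:

```python
import numpy as np

def mirror_flip(img: np.ndarray) -> np.ndarray:
    """Horizontal mirror inversion."""
    return img[:, ::-1].copy()

def resize_nearest(img: np.ndarray, scale: float) -> np.ndarray:
    """Nearest-neighbour rescale (a stand-in for any resize routine)."""
    h, w = img.shape[:2]
    rows = np.minimum((np.arange(int(h * scale)) / scale).astype(int), h - 1)
    cols = np.minimum((np.arange(int(w * scale)) / scale).astype(int), w - 1)
    return img[rows][:, cols]

def gamma_adjust(img: np.ndarray, gamma: float) -> np.ndarray:
    """Gamma change on an image normalised to [0, 1]."""
    return np.clip(img, 0.0, 1.0) ** gamma

def augment(image: np.ndarray, mask: np.ndarray, scale: float, gamma: float):
    """Apply the same geometric transforms to image and mask;
    gamma only affects the image so the mask stays binary."""
    img = gamma_adjust(resize_nearest(mirror_flip(image), scale), gamma)
    msk = resize_nearest(mirror_flip(mask), scale)
    return img, msk
```

Because the mask is only flipped and resized with nearest-neighbour sampling, it remains a valid binary label for the transformed image.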

Embodiment 2

[0092] The following describes another image segmentation network training method, provided by Embodiment 2 of the present application. Compared with the training method described in Embodiment 1, this training method additionally includes a training process for the edge neural network. Referring to Figure 7, the training method includes:

[0093] In step S301, sample images containing the target object, a sample mask corresponding to each sample image, and sample edge information corresponding to each sample mask are acquired, where each sample mask indicates the image area in which the target object is located in the corresponding sample image, and each piece of sample edge information indicates the contour edge of the image area indicated by the corresponding sample mask;

[0094] For the specific implementation of step S301, refer to step S101 in Embodiment 1; it will not be repeated here. ...

Embodiment 3

[0125] Embodiment 3 of the present application provides an image processing method. Referring to Figure 9, the image processing method includes:

[0126] In step S401, the image to be processed is obtained and input into the trained image segmentation network to obtain the mask corresponding to the image to be processed, where the trained image segmentation network is obtained through training with a trained edge neural network, and the trained edge neural network outputs, according to the input mask, the contour edge of the region in which the target object indicated by the mask is located;

[0127] Specifically, the trained edge neural network described in step S401 is a neural network trained by the method described in Embodiment 1 or Embodiment 2 above;

[0128] In step S402, the target object included in the image to be processed is segmented based on the mask corresponding to the image to be processed.
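Step S402's mask-based segmentation, followed by the background replacement described in the background section, amounts to a simple alpha composite. A minimal sketch (the function name and the soft-mask handling are illustrative assumptions, not the patent's specified procedure):

```python
import numpy as np

def replace_background(image: np.ndarray, mask: np.ndarray,
                       background: np.ndarray) -> np.ndarray:
    """Composite: keep image pixels where the mask is 1 (foreground)
    and take pixels from `background` everywhere else.
    `image` and `background` are H x W x C; `mask` is H x W in [0, 1]."""
    alpha = mask[..., None].astype(np.float32)
    return image * alpha + background * (1.0 - alpha)
```

If the network outputs a soft (fractional) mask, the same formula blends foreground and background at the contour, which is exactly where an accurate edge matters most.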

[0129]...



Abstract

The invention provides a network training method, an image processing method, a network, terminal equipment, and a medium. The training method comprises the following steps: S1, acquiring a sample image containing a target object, a sample mask corresponding to the sample image, and sample edge information corresponding to the sample mask; S2, inputting the sample image into an image segmentation network to obtain a generated mask output by the image segmentation network; S3, inputting the generated mask into the trained edge neural network to obtain generated edge information output by the edge neural network; S4, determining a loss function according to the difference between the sample mask and the generated mask and the difference between the generated edge information and the sample edge information; and S5, adjusting each parameter of the image segmentation network, and returning to step S2 until the loss function is smaller than the threshold. According to the invention, the mask output by the image segmentation network can represent the contour edge of the target object more accurately.
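The loss in step S4 combines a mask term with an edge term. The patent does not state which per-pixel difference measure is used, so the sketch below assumes binary cross-entropy and an assumed `edge_weight` hyperparameter purely for illustration:

```python
import numpy as np

def binary_cross_entropy(pred: np.ndarray, target: np.ndarray,
                         eps: float = 1e-7) -> float:
    """Per-pixel binary cross-entropy, averaged over the image."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred)
                          + (1.0 - target) * np.log(1.0 - pred)))

def combined_loss(gen_mask, sample_mask, gen_edge, sample_edge,
                  edge_weight: float = 1.0) -> float:
    """Step S4: mask difference plus a weighted edge difference.
    The extra edge term penalises masks whose contours deviate from
    the sample edge information, pushing the segmentation network
    toward sharper, more accurate contour edges."""
    return (binary_cross_entropy(gen_mask, sample_mask)
            + edge_weight * binary_cross_entropy(gen_edge, sample_edge))
```

In step S5, the segmentation network's parameters would be adjusted to reduce this value, looping back to S2 until the loss drops below the threshold.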

Description

technical field [0001] The present application belongs to the technical field of image processing, and in particular relates to a training method for an image segmentation network, an image processing method, an image segmentation network, terminal equipment, and a computer-readable storage medium. Background technique [0002] After capturing an image, a user often wishes to change its background (for example, replacing the background with an outdoor beach scene, or with a solid-color background for an ID photo). To achieve this, the current common practice is to use a trained image segmentation network to output a mask representing the area where the target object (i.e., the foreground, such as a portrait) is located, then use the mask to segment out the target object, and finally replace the image background. [0003] However, the mask output by the current image segmentation network ca...

Claims


Application Information

IPC(8): G06T7/11; G06T7/12; G06T7/136; G06T7/194; G06K9/62; G06N3/08
CPC: G06T7/11; G06T7/12; G06T7/136; G06T7/194; G06N3/08; G06T2207/20081; G06T2207/20084; G06T2207/30196; G06F18/214; Y02T10/40
Inventor 刘钰安
Owner GUANGDONG OPPO MOBILE TELECOMM CORP LTD