Target edge extraction method, image segmentation method and system

An edge extraction and image segmentation technology, applied in the field of image processing, which can solve problems such as difficulty in handling recognition granularity, inability to preserve the accurate edges of large-area targets, and the tendency to lose small-area details.

Pending Publication Date: 2022-08-05
CHANGAN UNIV

AI Technical Summary

Problems solved by technology

[0004] However, due to interference from varying imaging illumination conditions and inconsistent target sizes, it is difficult for current segmentation methods to r...



Examples


Embodiment 1

[0042] In this embodiment, the original image shown in Figure 3(a) is processed with the edge extraction method of the present invention to obtain the result shown in Figure 3(b). In the target edge extraction method of this embodiment, n = 7; Step 1.1 uses bilinear interpolation resampling; Step 1.2 uses a Gaussian filter for edge extraction on the image pyramid; and Step 1.3 uses nearest-neighbor interpolation resampling to resize each layer of the edge pyramid.
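As a rough illustration of the Embodiment 1 pipeline, the sketch below builds an n-layer image pyramid with bilinear resampling, applies Gaussian smoothing followed by a Laplacian response on each layer, and uses nearest-neighbor resampling to bring every edge layer back to the original size. The OpenCV calls, kernel sizes, and scale factors are illustrative assumptions, not details taken from the patent.

# Minimal sketch of the edge-pyramid idea described in Embodiment 1 (n = 7).
# Assumes a BGR colour image as input; all parameter values are illustrative.
import cv2
import numpy as np

def edge_pyramid(image, n=7):
    h, w = image.shape[:2]
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    edge_layers = []
    for level in range(n):
        scale = 0.5 ** level
        # Step 1.1: bilinear-interpolation resampling builds the image pyramid.
        layer = cv2.resize(gray, (max(1, int(w * scale)), max(1, int(h * scale))),
                           interpolation=cv2.INTER_LINEAR)
        # Step 1.2: Gaussian smoothing followed by a Laplacian edge response.
        smoothed = cv2.GaussianBlur(layer, (5, 5), sigmaX=1.0)
        edges = cv2.Laplacian(smoothed, ddepth=cv2.CV_32F, ksize=3)
        # Step 1.3: nearest-neighbor resampling resizes each edge layer back to the original size.
        edges_full = cv2.resize(edges, (w, h), interpolation=cv2.INTER_NEAREST)
        edge_layers.append(np.abs(edges_full))

    # Stack of n aligned edge maps; downstream steps can fuse or search them.
    return np.stack(edge_layers, axis=0)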

Embodiment 2

[0044] In this embodiment, the image segmentation method of the present invention is applied to segment the original image datasets shown in Figures 3 and 4;

[0045] In the target edge extraction method of this embodiment, n = 7; Step 1.1 uses bilinear interpolation resampling to build an image pyramid from the original image; Step 1.2 uses a Gauss-Laplace filter; and Step 1.3 adjusts the size of each layer of the edge pyramid;

[0046] The initial network of the image segmentation neural network in this embodiment is an improved reshuffle neural network, and the training data set used in the process was collected by the applicant. Figures 4(a) and 5(a) show the original images of the data set, and Figures 4(b) and 5(b) show the ground-truth labels of the corresponding original images obtained by manual annotation.
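For orientation only, the following sketch shows one generic way such a segmentation network could be trained on self-collected image/ground-truth pairs like those in Figures 4 and 5, assuming PyTorch. The dataset wrapper, loss, and hyperparameters are hypothetical, and the patent's improved reshuffle network itself is not reproduced here.

# Generic supervised training sketch for an image segmentation network.
# "model" stands in for the patent's improved network; all settings are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset

class PairDataset(Dataset):
    """Yields (image, mask) tensor pairs from pre-loaded arrays."""
    def __init__(self, images, masks):
        self.images, self.masks = images, masks
    def __len__(self):
        return len(self.images)
    def __getitem__(self, idx):
        return self.images[idx], self.masks[idx]

def train(model, images, masks, epochs=50, lr=1e-3, device="cuda"):
    loader = DataLoader(PairDataset(images, masks), batch_size=4, shuffle=True)
    criterion = nn.CrossEntropyLoss()            # per-pixel class labels
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device).train()
    for epoch in range(epochs):
        for img, mask in loader:
            img, mask = img.to(device), mask.to(device).long()
            optimizer.zero_grad()
            loss = criterion(model(img), mask)   # logits: (B, C, H, W), mask: (B, H, W)
            loss.backward()
            optimizer.step()
    return model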

Embodiment 3

[0048] This embodiment differs from Embodiment 1 in that the initial network of the image segmentation network is Mask R-CNN, and the network is trained with exactly the same samples as the network of Embodiment 1.
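A minimal sketch of using Mask R-CNN as the initial network is shown below, assuming the torchvision implementation (version 0.13 or later for the weights argument). The class count and the head replacement follow the standard torchvision fine-tuning pattern rather than any detail disclosed in the patent.

# Build a torchvision Mask R-CNN and replace its heads for the task's class count.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_mask_rcnn(num_classes=2):
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the box classification head for the new number of classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    # Replace the mask prediction head likewise.
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model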



Abstract

The invention belongs to the technical field of image processing, and particularly relates to a target edge extraction method and an image segmentation method and system. The target edge extraction method constructs an unsupervised rough segmentation method based on edge search to simulate the principle of visual working memory, performs overall cognition of the scene, and obtains the accurate edges of large-area targets. The image segmentation method, based on the target extraction method and a segmentation neural network, simulates visual attention to observe the scene carefully and obtain small-area details. Finally, a segmentation result is generated by combining the accurate edges from the rough segmentation, in the manner of visual inference. On the basis of simulating visual reasoning to separately recognize the whole and the details of the ground scene, the method makes the overall information and the detail information complement and correct each other, removes noise and repairs errors, and thus effectively improves image segmentation precision.
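Purely as an illustration of the fusion idea described in the abstract, the sketch below keeps the neural network's fine details everywhere except near strong edge responses, where the rough segmentation's large-target boundaries are trusted instead. The inputs and the threshold rule are hypothetical placeholders, not the patent's actual combination logic.

# Illustrative fusion of a rough (unsupervised) mask and a fine (neural-network) mask.
import numpy as np

def fuse_masks(rough_mask: np.ndarray, fine_mask: np.ndarray, edge_map: np.ndarray) -> np.ndarray:
    # Near strong edges, trust the rough segmentation's accurate large-target
    # boundaries; elsewhere, keep the network's small-area details.
    # The threshold is an arbitrary illustrative choice.
    near_edge = edge_map > edge_map.mean() + edge_map.std()
    fused = fine_mask.copy()
    fused[near_edge] = rough_mask[near_edge]
    return fused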

Description

Technical field

[0001] The present invention belongs to the field of image processing technology, and relates to an image segmentation method based on visual reasoning that combines unsupervised clustering and deep learning.

Background technique

[0002] Image segmentation is currently the cornerstone of a variety of applications. It divides an image into categories according to characteristics such as the spectrum, texture, and shape of each target, so as to obtain the category information in the scene. However, the continual increase in image resolution not only brings richer and more detailed scene information, but also makes the image information highly detailed, so that the spectra of similar targets may become inconsistent. The resulting increase in intra-class variance and decrease in inter-class variance bring new challenges to current image segmentation.

[0003] The emergence of deep learning neural networks provides a very good new idea for...

Claims


Application Information

IPC(8): G06T7/12, G06T7/13, G06K9/62, G06V10/762, G06V10/82, G06N3/04, G06N3/08
CPC: G06T7/12, G06T7/13, G06N3/08, G06T2207/20016, G06T2207/20081, G06T2207/20084, G06T2207/20192, G06N3/044, G06F18/23
Inventor: 丛铭, 韩玲, 崔建军, 陈斯亮, 席江波, 顾俊凯, 张庆芳
Owner CHANGAN UNIV