An Automatic Image Annotation Method Fusing Deep Features and Semantic Neighborhoods

An automatic image annotation technology based on deep features, applied in character and pattern recognition, instruments, computing, etc., which improves the annotation effect and is flexible and practical.

Active Publication Date: 2019-08-09
FUZHOU UNIV


Problems solved by technology

[0005] In view of this, the purpose of the present invention is to provide an automatic image labeling method that combines deep features and semantic neighborhoods, so as to overcome the defects of the prior art and solve the problem of automatic image labeling for multi-object, multi-label images.



Embodiment Construction

[0030] The present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0031] The present invention provides an automatic image labeling method that combines deep features and semantic neighborhoods, as shown in Figure 1. In view of the fact that manual feature selection is time-consuming and labor-intensive, and that the traditional label propagation algorithm ignores semantic similarity, making it difficult for the labeling model to be applied to real image environments, an image labeling method that fuses deep features and semantic neighborhoods is proposed. The method first uses a multi-layer CNN deep feature extraction network to achieve general and effective deep feature extraction. Then, semantic groups are divided according to keywords, and visual neighbors are restricted to these semantic groups, ensuring that the images in the neighborhood image set are both semantically and visually adjacent. Finally, the contribution value of each label in the neighborhood image set is calculated according to the visual distance and sorted to obtain the annotation keywords.
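As a rough illustration of the deep feature extraction step, the sketch below uses a pretrained ResNet-50 from torchvision as a stand-in for the patent's multi-layer CNN extraction network; the backbone choice, preprocessing, and 2048-dimensional output are illustrative assumptions rather than the patent's specified architecture.

    # Minimal sketch of the deep feature extraction step. A pretrained ResNet-50
    # stands in for the patent's multi-layer CNN network; the backbone choice
    # and feature dimensionality are illustrative assumptions.
    import torch
    from torchvision import models, transforms
    from PIL import Image

    cnn = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    cnn.fc = torch.nn.Identity()   # drop the classifier; keep the deep feature vector
    cnn.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def extract_deep_feature(image_path):
        """Return a deep feature vector (2048-d for ResNet-50) for one image."""
        img = Image.open(image_path).convert("RGB")
        with torch.no_grad():
            return cnn(preprocess(img).unsqueeze(0)).squeeze(0)

In practice, such a function would be run over every training image and every image to be labeled so that all later distance computations operate on the same deep feature space.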



Abstract

The present invention relates to an automatic image labeling method that fuses deep features and semantic neighborhoods. In traditional image labeling methods, manual feature selection is time-consuming and labor-intensive, and the traditional label propagation algorithm ignores semantic neighbors, so that visually similar but semantically dissimilar images degrade the labeling effect. To address these issues, an automatic image annotation method that combines deep features and semantic neighborhoods is proposed. The method first builds a unified and adaptive deep feature extraction framework based on deep convolutional neural networks (CNN); it then divides the training set into semantic groups and establishes the neighborhood image set of the image to be labeled; finally, it calculates the contribution value of each label of the neighborhood images according to the visual distance and sorts the labels to obtain the annotation keywords. The invention is simple, flexible, and highly practical.
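The semantic grouping and label-contribution steps described above could be sketched as follows; the per-keyword group construction, the k nearest neighbors per group, and the 1/(1+d) distance weighting are illustrative assumptions, not the patent's exact formulas.

    # Hypothetical sketch: per-keyword semantic groups, a neighborhood restricted
    # to each group, and visual-distance-weighted label contributions.
    from collections import defaultdict
    import numpy as np

    def build_semantic_groups(train_labels):
        """Map each keyword to the indices of training images annotated with it."""
        groups = defaultdict(list)
        for idx, labels in enumerate(train_labels):
            for label in labels:
                groups[label].append(idx)
        return groups

    def annotate(query_feat, train_feats, train_labels, groups, k=5, top_n=5):
        """Score candidate keywords by accumulated, distance-weighted contributions."""
        scores = defaultdict(float)
        for label, members in groups.items():
            # Visual neighbors are searched only inside this keyword's semantic group,
            # so neighborhood images are both semantically and visually adjacent.
            dists = np.linalg.norm(train_feats[members] - query_feat, axis=1)
            for i in np.argsort(dists)[:k]:
                weight = 1.0 / (1.0 + dists[i])   # closer neighbors contribute more
                for lab in train_labels[members[i]]:
                    scores[lab] += weight
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        return [lab for lab, _ in ranked[:top_n]]

Here train_feats would hold the deep feature vectors produced by the extraction step, and train_labels the keyword lists of the annotated training images.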

Description

Technical Field
[0001] The invention relates to an automatic image labeling method that combines deep features and semantic neighborhoods.
Background Technique
[0002] With the rapid development of multimedia image technology, image information on the Internet is growing explosively. These digital images are widely used in business, news media, medicine, education, and so on. Therefore, how to help users quickly and accurately find the desired image has become one of the hot topics in multimedia research in recent years. The most important technologies for solving this problem are image retrieval and automatic image annotation.
[0003] Automatic image annotation is a key step in image retrieval and image understanding. It is a technology for adding keywords that describe the semantic content of an image to unknown images. This technology mainly uses an image training set that has already been annotated with keywords to train the labeling model, and then uses the trained model to predict annotation keywords for unlabeled images.


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/62
CPC: G06F18/214
Inventor: 柯逍, 周铭柯
Owner: FUZHOU UNIV