
Visual context fused image description method

An image description technology fusing visual context, applied in instrumentation, biological neural network models, and character and pattern recognition, which can solve the problem of unseen sentences degrading test performance.

Active Publication Date: 2020-04-10
GUANGXI NORMAL UNIV

AI Technical Summary

Problems solved by technology

Therefore, sentences that have not appeared during training will seriously degrade performance at test time.



Examples


Embodiment

[0058] Referring to Figure 1, an image description method fusing visual context comprises the following steps:

[0059] 1) Divide the images in the MS-COCO image description dataset into a training set and a test set at a ratio of 7:3. Horizontally flip and apply a luminance transform to the training-set images, then normalize each image so that the values of all its pixels have a mean of 0 and a variance of 1. The test-set images are only resized to a fixed 512×512 pixels; no other processing is applied.
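
The patent gives no implementation for this step; the following is a minimal sketch assuming PyTorch/torchvision, with per-image standardization to zero mean and unit variance. The transform choices (e.g. ColorJitter for the luminance transform) and the 70/30 split helper are illustrative assumptions, not taken from the patent.

# Minimal sketch of step 1), assuming PyTorch/torchvision; transform choices and
# the split helper are illustrative, not taken from the patent.
import random
from torchvision import transforms

def standardize(img):
    """Normalize a CxHxW tensor so all pixel values have mean 0 and variance 1."""
    return (img - img.mean()) / (img.std() + 1e-8)

# Training images: horizontal flip, luminance (brightness) transform, then
# per-image normalization to zero mean and unit variance.
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2),   # luminance transform (assumed form)
    transforms.ToTensor(),
    transforms.Lambda(standardize),
])

# Test images: only resized to a fixed 512x512, no other processing.
test_transform = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
])

def split_dataset(image_ids, ratio=0.7, seed=0):
    """Split MS-COCO image ids into training and test sets at 7:3."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    cut = int(len(ids) * ratio)
    return ids[:cut], ids[cut:]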

[0060] 2) Preprocess the image description labels: the 5 sentences corresponding to each image in the MS-COCO image description dataset serve as the image description labels, and the description of each image is fixed to 16 words in length. Shorter sentences are padded with filler tokens, words that appear fewer than 5 times are filtered out and discarded, and a vocabulary containing 10369 words is obtained, where the description label corresponding to the image is a fixed val...
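
As a concrete illustration of this step, here is a small Python sketch that builds the vocabulary and pads each caption to the fixed length of 16; the special token names (<pad>, <start>, <end>, <unk>) are hypothetical choices, since the patent only mentions filler tokens.

# Sketch of step 2): build a vocabulary from the MS-COCO captions (5 per image),
# drop words seen fewer than 5 times, and pad/truncate each caption to 16 words.
# Token names such as <pad> and <unk> are assumptions.
from collections import Counter

MAX_LEN = 16   # fixed description length
MIN_FREQ = 5   # words appearing fewer than 5 times are discarded

def build_vocab(captions):
    """captions: an iterable of caption strings."""
    counts = Counter(word for cap in captions for word in cap.lower().split())
    kept = sorted(w for w, n in counts.items() if n >= MIN_FREQ)
    vocab = ['<pad>', '<start>', '<end>', '<unk>'] + kept
    return {w: i for i, w in enumerate(vocab)}

def encode_caption(caption, word2idx):
    """Map one caption to a fixed-length list of MAX_LEN word indices."""
    tokens = caption.lower().split()[:MAX_LEN]
    ids = [word2idx.get(w, word2idx['<unk>']) for w in tokens]
    ids += [word2idx['<pad>']] * (MAX_LEN - len(ids))   # pad short sentences
    return ids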



Abstract

The invention discloses an image description method fusing visual context. The method comprises the following steps: 1) image preprocessing; 2) preprocessing of the description labels of the images; 3) feature extraction; 4) mean pooling; 5) convolution and mean-sampling pooling; 6) acquisition of detected image entities; 7) acquisition of entity attributes; 8) convolution; 9) acquisition of entity attribute features; 10) convolution; 11) convolution; 12) convolution; 13) acquisition of relationships between the entities and the attributes; 14) matching of the relationships between entities and attributes; 15) LSTM training; 16) solving of exposure bias; 17) dimension reduction; 18) normalization; 19) acquisition of a description sentence of the current image, namely the model; 20) acquisition of description sentences of all images; and 21) testing and verification of the training effect and performance of the model. With this method, the accuracy of image feature extraction can be guaranteed, visual errors are avoided, the generated descriptions are more fluent and conform to human grammatical rules, and higher scores are obtained on the evaluation indexes.
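
The abstract only enumerates the steps, so as a rough orientation the following is a hypothetical sketch of the encode-pool-decode core implied by steps 3), 4), 15) and 19): CNN feature extraction, mean pooling, and LSTM caption generation with teacher forcing. The choice of ResNet-101, the layer sizes, and all names are assumptions; the patented entity/attribute branches and reinforcement-learning training are not reproduced here.

# Hypothetical sketch of the encode-pool-decode core suggested by the abstract
# (feature extraction, mean pooling, LSTM caption generation). Backbone choice
# and dimensions are assumptions, not taken from the patent.
import torch
import torch.nn as nn
import torchvision.models as models

class CaptionModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=512, hidden_dim=512):
        super().__init__()
        cnn = models.resnet101(weights=None)
        self.encoder = nn.Sequential(*list(cnn.children())[:-2])  # spatial feature map
        self.feat_proj = nn.Linear(2048, embed_dim)                # global visual context
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTMCell(embed_dim * 2, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)
        self.hidden_dim = hidden_dim

    def forward(self, images, captions):
        feats = self.encoder(images)              # (B, 2048, H', W')
        feats = feats.flatten(2).mean(dim=2)      # mean pooling over spatial locations
        ctx = self.feat_proj(feats)               # fused visual context vector
        b = images.size(0)
        h = images.new_zeros(b, self.hidden_dim)
        c = images.new_zeros(b, self.hidden_dim)
        logits = []
        for t in range(captions.size(1) - 1):     # teacher forcing during training
            x = torch.cat([self.embed(captions[:, t]), ctx], dim=1)
            h, c = self.lstm(x, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)         # (B, T-1, vocab_size)

In the full method, the entity and attribute features of steps 6) to 14) would enrich this visual context before decoding, and the reinforcement-learning training mentioned in the description would address the exposure bias of step 16).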

Description

Technical field

[0001] The invention relates to the technical field of computer vision and the field of natural language processing, and in particular to an image description method that integrates visual context using a deep neural network and a reinforcement learning method.

Background technique

[0002] Image description can be understood as generating a text described in natural language for a given picture. Image description and visual question answering lie at the intersection of computer vision and natural language processing, and are more challenging than object detection, image classification and semantic segmentation, because they must extract image entities and attributes while inferring the relationships between those entities and attributes. Image description has broad application prospects in navigation for the blind, early childhood education, and image-text retrieval.

[0003] Image description requires an encoding network and a decoding network. The proposal of residual ...

Claims


Application Information

IPC(8): G06K9/62; G06N3/04
CPC: G06N3/048; G06N3/044; G06N3/045; G06F18/24; G06F18/253; G06F18/214
Inventor: 张灿龙, 周东明, 李志欣
Owner: GUANGXI NORMAL UNIV