
A cross-layer multi-model feature fusion and image description method based on convolutional decoding

A feature fusion and image description technology, applied to neural learning methods, still image data retrieval, still image data indexing, etc. It solves the problem of inaccurate information description and achieves accurate description, good performance, and improved image description capability.

Active Publication Date: 2022-03-29
JIANGXI UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

[0004] Aiming at the deficiencies of the prior art, the present invention provides an image description method based on cross-layer multi-model feature fusion and convolutional decoding, which solves the problem of inaccurate description in existing image description methods when the information contained in an image is complex.

Method used



Examples


Embodiment 1

[0028] As shown in Figures 1-5, this embodiment of the present invention provides a cross-layer multi-model feature fusion and image description method based on convolutional decoding, comprising the following steps:

[0029] S1. First, in the vision module, cross-layer fusion of low-level and high-level image features is realized within a single model, and the feature maps obtained by multiple visual feature extraction models are then averaged together. Each word of the sentences paired with the corresponding image is mapped into a D_e-dimensional embedding space to obtain its embedding vector sequence, and the final text features are then obtained through 6 layers of causal convolution. During visual feature extraction, rich feature information provides good guidance for the image description results, so three VGG16 networks are used as the image visual feature extraction modules. At the same time, in order to fuse low-level ...
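To make the S1 language branch concrete, the following is a minimal PyTorch sketch of a 6-layer causal-convolution text encoder over a D_e-dimensional embedding, as described above. It is a hypothetical rendering under assumptions: the embedding width D_e = 512, kernel size 3, and ReLU activations are illustrative choices, not values taken from the patent.

    import torch
    import torch.nn as nn

    class CausalConvTextEncoder(nn.Module):
        """Sketch of S1's language branch: word embedding followed by
        6 causal 1-D convolutions. Widths/kernel size are assumptions."""
        def __init__(self, vocab_size, d_e=512, kernel_size=3, num_layers=6):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_e)
            self.pad = kernel_size - 1  # left-pad so no position sees future words
            self.convs = nn.ModuleList(
                nn.Conv1d(d_e, d_e, kernel_size) for _ in range(num_layers)
            )

        def forward(self, tokens):                    # tokens: (batch, seq_len)
            x = self.embed(tokens).transpose(1, 2)    # (batch, d_e, seq_len)
            for conv in self.convs:
                # causal padding: pad only the left end of the time axis
                x = torch.relu(conv(nn.functional.pad(x, (self.pad, 0))))
            return x.transpose(1, 2)                  # (batch, seq_len, d_e)

    # Example: encode a batch of 2 sentences of 15 word indices each
    enc = CausalConvTextEncoder(vocab_size=10000)
    text_feats = enc(torch.randint(0, 10000, (2, 15)))  # -> (2, 15, 512)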

Embodiment 2

[0046] As shown in Figures 1-7, this embodiment of the present invention provides a cross-layer multi-model feature fusion and image description method based on convolutional decoding. VGG-16 and a language-CNN (i.e., the language module used in the present invention) are used to train the model, which serves as the baseline model CNN+CNN (Baseline). On the basis of the Baseline, multiple VGG-16 networks are added and cross-layer feature fusion is realized within each VGG-16; the trained baseline model parameters are used to initialize the model, which is then retrained. On the MSCOCO dataset, some experimental results are as follows (a sketch of the multi-model fusion appears after these examples):

[0047] R1: A hamburger and a salad sitting on top of a table.

[0048] R2: A salad and a sandwich wait to be eaten at a restaurant.

[0049] R3: An outside dining area with tables and chairs highlighting a salad and sandwich.

[0050] R4: A sandwich and a salad are on a tray on a wooden table.

[0051] R5: A table with a bowl of food, sandwich and wine glass sitting on it in a restaur...
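As a hedged illustration of this embodiment's multi-model side, the snippet below averages the feature maps produced by several VGG-16 backbones into one visual representation. Using three stock torchvision VGG-16 networks is an assumption made for the sketch; the cross-layer fusion performed inside each VGG-16 in the patent is not reproduced here.

    import torch
    from torchvision.models import vgg16

    # Three independent VGG-16 feature extractors (convolutional part only).
    backbones = [vgg16(weights=None).features.eval() for _ in range(3)]

    def fused_visual_features(image_batch):
        # Each backbone maps (B, 3, 224, 224) -> (B, 512, 7, 7);
        # the maps are then fused by an element-wise average.
        with torch.no_grad():
            maps = [b(image_batch) for b in backbones]
        return torch.stack(maps, dim=0).mean(dim=0)

    features = fused_visual_features(torch.randn(1, 3, 224, 224))
    print(features.shape)  # torch.Size([1, 512, 7, 7])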



Abstract

The invention provides a cross-layer multi-model feature fusion and image description method based on convolutional decoding, relating to the fields of computer vision and natural language processing. The method comprises the following steps: S1. obtain the embedding vector sequence and the final text features; S2. calculate the attention vector for visual-text fusion matching; S3. add and fuse the attention vector and the text feature vector; S4. generate a complete description sentence. By using cross-layer multi-model feature fusion, the loss of low-level image feature information can be effectively compensated, so that more detailed image features are obtained and more detailed description sentences are learned. The model can effectively extract and preserve semantic information in images with complex backgrounds, can process long word sequences, describes image content more accurately, and expresses richer information, making it worthy of vigorous promotion.
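Steps S2 and S3, attention-based visual-text matching followed by additive fusion, can be sketched as follows. This is a minimal PyTorch illustration under assumptions: the dot-product attention form and the shared feature width d are illustrative choices, not details confirmed by the patent text.

    import torch
    import torch.nn as nn

    class VisualTextFusion(nn.Module):
        """Sketch of S2-S3: match text features against visual region
        features with attention, then fuse additively."""
        def __init__(self, d=512):
            super().__init__()
            self.query = nn.Linear(d, d)

        def forward(self, text_feats, vis_feats):
            # text_feats: (B, T, d); vis_feats: (B, R, d) flattened regions
            q = self.query(text_feats)                         # (B, T, d)
            scores = torch.bmm(q, vis_feats.transpose(1, 2))   # (B, T, R)
            attn = torch.softmax(scores, dim=-1)
            attended = torch.bmm(attn, vis_feats)              # S2: attention vector
            return text_feats + attended                       # S3: additive fusion

    fusion = VisualTextFusion()
    fused = fusion(torch.randn(2, 15, 512), torch.randn(2, 49, 512))  # -> (2, 15, 512)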

Description

Technical field

[0001] The invention relates to the fields of computer vision and natural language processing, and in particular to an image description method based on cross-layer multi-model feature fusion and convolutional decoding.

Background technique

[0002] As one of the main carriers of information, images are shared among people more and more. Enabling computers to generate grammatically correct and semantically reasonable natural language sentences from image content is therefore important. Unlike relatively simple computer vision tasks such as object detection and image classification, image description is a higher-level visual understanding task: it not only needs to recognize the objects and scenes in an image, but must also express the relationships between objects and between objects and scenes, so that the generated description sentences meet human standards in both grammar and semantics. The traditional image description met...


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06F16/583; G06F16/58; G06F16/55; G06F16/51; G06K9/62; G06V10/80; G06V10/82; G06N3/04; G06N3/08
CPC: G06F16/583; G06F16/5866; G06F16/55; G06F16/51; G06N3/08; G06N3/045; G06F18/253
Inventors: 罗会兰, 岳亮亮, 陈鸿坤
Owner: JIANGXI UNIV OF SCI & TECH