A visual relationship detection method and device based on high-order semantic structure of scene graph

A visual relationship detection method and device based on scene-graph technology, applied in the field of image processing. It addresses the difficulty of obtaining triplet annotations of the correct type and quantity, and achieves the effects of optimized computational complexity, reduced hardware requirements, and simple, direct position-encoding processing.

Active Publication Date: 2022-06-28
SHENZHEN GRADUATE SCHOOL TSINGHUA UNIV

AI Technical Summary

Problems solved by technology

[0007] In order to solve the technical problem that it is difficult to obtain annotations of the correct type and quantity, together with complete triplet annotations, the present invention proposes a visual relationship detection method and device based on the high-order semantic structure of the scene graph.

Method used



Examples


Embodiment Construction

[0036] In order to have a clearer understanding of the technical features, objects and effects of the present invention, the specific embodiments of the present invention will now be described with reference to the accompanying drawings.

[0037] The visual relationship detection method based on the high-order semantic structure of the scene graph proposed by the embodiment of the present invention specifically includes:

[0038] S1. Visual feature extraction: predict the category and position of all objects in the picture using the convolutional neural network (CNN) and the region-based convolutional neural network (RCNN). The category of an object is a number, generally obtained by sequentially encoding the object classes that may appear in the input data. The position of an object is a bounding box determined by two points, namely the upper-left and lower-right corners of the box; each point consists of an abscissa and an ordinate value. At the same time, the correspondi...
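As a concrete illustration of step S1, the following is a minimal sketch using torchvision's off-the-shelf Faster R-CNN. The specific detector, weights and variable names are assumptions made for illustration; the patent does not name a particular network configuration at this point.

    # Minimal sketch of step S1, assuming torchvision's Faster R-CNN as a stand-in
    # for the CNN/RCNN detector described in the text.
    import torch
    import torchvision

    detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    detector.eval()

    image = torch.rand(3, 480, 640)                 # stand-in for a real RGB image in [0, 1]
    with torch.no_grad():
        detections = detector([image])[0]           # dict with "boxes", "labels", "scores"

    # Each detected object has an integer category label and a box defined by its
    # upper-left (x1, y1) and lower-right (x2, y2) corners, as described above.
    # Per-object visual feature vectors would additionally be pooled from the backbone
    # (e.g. via RoI pooling); that part is omitted in this sketch.
    for label, box in zip(detections["labels"], detections["boxes"]):
        x1, y1, x2, y2 = box.tolist()
        print(f"category={label.item():3d}  box=({x1:.1f}, {y1:.1f}) -> ({x2:.1f}, {y2:.1f})")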



Abstract

The present invention proposes a visual relationship detection method and device based on the high-order semantic structure of the scene graph. The algorithm predicts the category and position of all objects in the picture and outputs the visual feature vector corresponding to each object; every two detected objects are paired, a joint visual feature vector is extracted for each pair based on the pairing result, and the positions are encoded to obtain a position encoding. The categories of all objects are input into a hierarchical semantic clustering algorithm, which after processing yields the high-level semantic feature vector corresponding to each object; the output of the hierarchical semantic clustering algorithm is semantically encoded, and relation classifier weights are generated from this encoding. The visual feature vectors, the joint visual feature vectors and the position encodings are combined into a unified feature vector, a dot product is performed between the relation classifier weights and the unified feature vector, and the conditional probability of the relationship between every two objects is finally obtained as the scene graph.
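To make the data flow of this abstract easier to follow, here is a minimal sketch of the relation-classification stage in PyTorch. The feature dimensions, the exact form of the position encoding, and the way classifier weights are generated from the semantic encodings are illustrative assumptions; the patent's hierarchical semantic clustering algorithm itself is not reproduced here.

    # Hedged sketch: unified feature = [subject visual, object visual, joint visual,
    # position code]; relation classifier weights are generated from the pair's semantic
    # encodings; a dot product followed by softmax gives the conditional probability of
    # each relation. All sizes below are assumptions, not the patent's.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    D_VIS, D_POS, D_SEM, N_REL = 256, 64, 128, 50      # assumed dimensions / relation count

    def encode_pair_position(box_s, box_o):
        # "simple and direct" position encoding, assumed here to be the concatenated
        # corner coordinates (x1, y1, x2, y2) of the subject and object boxes
        return torch.cat([box_s, box_o], dim=-1)        # shape: (8,)

    class RelationHead(nn.Module):
        def __init__(self):
            super().__init__()
            self.pos_proj = nn.Linear(8, D_POS)
            # maps the pair's semantic encoding to one classifier weight vector per relation
            self.weight_gen = nn.Linear(2 * D_SEM, N_REL * (3 * D_VIS + D_POS))

        def forward(self, feat_s, feat_o, joint_feat, box_s, box_o, sem_s, sem_o):
            pos = self.pos_proj(encode_pair_position(box_s, box_o))
            unified = torch.cat([feat_s, feat_o, joint_feat, pos], dim=-1)
            weights = self.weight_gen(torch.cat([sem_s, sem_o], dim=-1)).view(N_REL, -1)
            logits = weights @ unified                  # dot product per relation class
            return F.softmax(logits, dim=-1)            # conditional relation probabilities

    # toy usage with random tensors standing in for detector / clustering outputs
    head = RelationHead()
    probs = head(torch.randn(D_VIS), torch.randn(D_VIS), torch.randn(D_VIS),
                 torch.rand(4), torch.rand(4),
                 torch.randn(D_SEM), torch.randn(D_SEM))
    print(probs.shape)                                  # torch.Size([50])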

Description

Technical field

[0001] The invention relates to the field of image processing, in particular to a visual relationship detection method and device based on the high-order semantic structure of a scene graph.

Background technique

[0002] The main goal of the visual relation detection task is to identify and localize the visual triplet relations (subject, relation, object) present in an image. Recognition refers to identifying the category attributes of the target object, and localization refers to regressing the bounding box of the target object. Understanding a visual scene often requires more than recognizing individual objects; even a perfect object detector would struggle to perceive the subtle difference between a person feeding a horse and a person standing next to one. Learning the rich semantic relations between these objects is what visual relation detection is about. The key to a deeper understanding of the visual scene is to build a structured representati...
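As a small, purely illustrative example of the (subject, relation, object) triplet the task is defined on, the snippet below uses hypothetical field names and coordinate values; none of it is taken from the patent.

    # Illustrative triplet structure: the relation is what distinguishes "a person feeding
    # a horse" from "a person standing next to a horse" when the boxes are identical.
    from dataclasses import dataclass
    from typing import Tuple

    Box = Tuple[float, float, float, float]   # (x1, y1, x2, y2): upper-left / lower-right corners

    @dataclass
    class VisualTriplet:
        subject: str
        subject_box: Box
        relation: str
        object: str
        object_box: Box

    feeding = VisualTriplet("person", (12.0, 30.0, 110.0, 260.0),
                            "feeding", "horse", (95.0, 40.0, 340.0, 270.0))
    standing = VisualTriplet("person", (12.0, 30.0, 110.0, 260.0),
                             "next to", "horse", (95.0, 40.0, 340.0, 270.0))
    print(feeding.relation, "vs", standing.relation)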

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06V10/762, G06V10/764, G06V10/82, G06V10/774, G06V20/70, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V20/00, G06N3/045, G06F18/23, G06F18/24, G06F18/214
Inventor: 袁春, 魏萌
Owner: SHENZHEN GRADUATE SCHOOL TSINGHUA UNIV