
Zero-shot recognition method and system based on discriminative visual attributes

A discriminative-visual-attribute technology, applied in character and pattern recognition, computer components, instruments, etc., that addresses the problem of feature representations lacking semantic information

Active Publication Date: 2021-02-05
CHENGDU UNIV OF INFORMATION TECH

AI Technical Summary

Problems solved by technology

[0005] In view of this, one purpose of the present invention is to provide a zero-shot recognition method based on discriminative visual attributes, which solves the lack of semantic information in feature representations caused by manually defined attributes during recognition, and realizes joint embedding of the visual feature space and the semantic space to enhance the discriminative power of visual feature representations.



Examples


Embodiment 1

[0056] Referring to figure 1, a schematic structural diagram of an embodiment of the zero-shot recognition system based on discriminative visual attributes of the present invention; specifically, the system includes: an original feature domain learning module 1, a target feature domain learning module 2, and a zero-shot recognition module 3.

[0057] The association between the visual feature space and the semantic embedding space plays an important role in zero-shot visual recognition. The present invention decomposes the learning of this feature-space association into two parts: original feature domain learning (known object categories) and target feature domain learning (unknown object categories), a zero-shot recognition problem in which the object categories of the original feature domain and the target feature domain are disjoint.

[0058] Specifically, the original feature domain learning module 1 includes a manually defined attribute unit 11 and a dis...
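The three-module decomposition described above can be illustrated with a minimal structural sketch. All class and method names here are hypothetical, chosen to mirror the module names in paragraph [0056]; the patent text does not specify an implementation:

```python
# Hypothetical sketch of the three-module system in paragraph [0056].
# Names are illustrative only; the patent does not define an API.

class OriginalFeatureDomainLearning:
    """Module 1: learns visual-to-attribute relations on known (seen) categories."""
    def fit(self, visual_features, manual_attributes):
        ...  # e.g., sparse coding + supervised dictionary learning

class TargetFeatureDomainLearning:
    """Module 2: transfers learned attributes to unknown (unseen) categories."""
    def fit(self, transform, discriminative_attributes):
        ...  # mines discriminative attributes in the target domain

class ZeroShotRecognizer:
    """Module 3: classifies unseen-category samples via the attribute space."""
    def predict(self, image_features):
        ...  # maps deep features to attributes, then to class labels
```

The key design point is that modules 1 and 2 operate on disjoint category sets, so module 3 can only reach unseen classes through the shared attribute space.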

Embodiment 2

[0081] Based on the system of Embodiment 1, the present invention also provides a zero-shot recognition method based on discriminative visual attributes; the flowchart can refer to figure 2. Specifically, the zero-shot recognition method based on discriminative visual attributes comprises the following steps:

[0082] S1: Construct a sparse coding model and optimize it on the original-feature-domain sample data to obtain the original feature domain's transformation from visual features to manually defined attribute representations;

[0083] In this step, the transformation from visual features to manually defined attribute representations is obtained by optimizing the following sparse coding model on the original-feature-domain sample data. The sparse coding model in step S1 is:

[0084]

[0085] where F_s denotes the visual feature set of the original-feature-domain image samples....
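The exact objective of paragraph [0084] is not reproduced in this extract. As a hedged illustration only, a standard sparse coding formulation of the kind step S1 describes — min_A ½‖F − DA‖²_F + λ‖A‖₁, mapping visual features F onto an attribute dictionary D — can be solved with the iterative shrinkage-thresholding algorithm (ISTA). Variable names and the λ value are assumptions, not taken from the patent:

```python
import numpy as np

def ista_sparse_code(F, D, lam=0.1, n_iter=200):
    """Solve min_A 0.5*||F - D @ A||_F^2 + lam*||A||_1 with ISTA.

    F: (d, n) visual features of n samples
    D: (d, k) dictionary (e.g., attribute basis)
    Returns A: (k, n) sparse attribute codes.
    """
    A = np.zeros((D.shape[1], F.shape[1]))
    L = np.linalg.norm(D, 2) ** 2   # Lipschitz constant: squared top singular value
    for _ in range(n_iter):
        grad = D.T @ (D @ A - F)    # gradient of the smooth quadratic term
        Z = A - grad / L            # gradient step
        # soft-thresholding: proximal operator of the l1 penalty
        A = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)
    return A
```

The l1 penalty drives most entries of A to exactly zero, so each image is represented by a small subset of attributes — the property the method relies on for interpretable attribute representations.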

Embodiment 3

[0110] This embodiment provides test data for the system of Embodiment 1 and the method of Embodiment 2. Specifically, the aPY and AwA2 benchmark databases are selected; the statistics of these zero-shot recognition databases are given in Table 1:

[0111] Table 1 Statistics of the current benchmark aPY and AwA2 zero-shot recognition databases

[0112]

[0113] Then, several existing zero-shot methods are selected and their accuracy is compared with that of the method of the present invention on the benchmark zero-shot recognition databases. The existing zero-shot methods selected include: the zero-shot method CONSE proposed by M.; the zero-shot method LATEM proposed by Y. Xian et al. in 2016; and the zero-shot method DLFZRL proposed by Bin Tong et al. in 2019. The final accuracy is shown in Table 2 below:

[0114] Table 2 Accuracy of different zero-shot recognition methods on the benchmark zero-shot recognition data...


Abstract

The invention provides a zero-shot recognition method and system based on discriminative visual attributes. The method comprises the following steps: S1, constructing a sparse coding model, optimizing it on the original-feature-domain sample data, and obtaining the original feature domain's transformation from visual features to manually defined attribute representations; S2, introducing a classification-error cost term to construct a supervised dictionary-learning target model and extracting the original-feature-domain discriminative visual attribute set; S3, constructing a target-feature-domain learning model based on the original-feature-domain transformation and the original-feature-domain discriminative visual attribute set, and mining target-feature-domain discriminative visual attributes; and S4, inputting a to-be-detected image containing the semantic object, extracting deep visual features of the image through a deep residual network, and optimizing the sparse coding objective function to obtain the semantic attribute representation of the image's visual features. The method solves the lack of semantic information in feature representations when recognizing with manually defined attributes, and enhances the discriminative power of visual feature representations.
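The final inference step S4 maps a test image into the attribute space and then assigns it a class. A common way to close that loop in zero-shot recognition — sketched here as an assumption, since the abstract does not specify the classifier — is nearest-prototype matching: compare the image's attribute code against each unseen class's attribute signature by cosine similarity. Function and variable names are illustrative:

```python
import numpy as np

def zero_shot_predict(attr_codes, class_prototypes):
    """Assign each test sample to the unseen class whose attribute
    prototype is nearest in the semantic space (cosine similarity).

    attr_codes:       (k, n) attribute representations of n test images
    class_prototypes: (k, c) attribute signatures of c unseen classes
    Returns a length-n array of predicted class indices.
    """
    # Normalize columns so the dot product equals cosine similarity
    X = attr_codes / (np.linalg.norm(attr_codes, axis=0, keepdims=True) + 1e-12)
    P = class_prototypes / (np.linalg.norm(class_prototypes, axis=0, keepdims=True) + 1e-12)
    sims = P.T @ X           # (c, n) similarity of every class to every sample
    return sims.argmax(axis=0)
```

Because the prototypes are built from attribute descriptions rather than training images, classes never seen during training remain reachable — the defining property of the zero-shot setting.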

Description

technical field

[0001] The invention belongs to the technical field of computer vision recognition, and in particular relates to a zero-shot recognition method and system based on discriminative visual attributes.

background technique

[0002] Nowadays, image and video data are growing explosively. Faced with such complex multimedia data, effectively analyzing and understanding its semantic content has become increasingly important, and computer vision recognition technology was born to address this need. At present, with the establishment of large-scale visual databases and the wide application of deep neural networks in this field, visual recognition methods have developed rapidly, and great progress has been made in visual feature extraction, strongly supervised model construction, and data-driven neural network learning. However, due to the inherent semantic gap between low-level visual data and high-level semantic information, current object recognition algorit...

Claims


Application Information

IPC(8): G06K9/62; G06N20/20
CPC: G06N20/20; G06F18/2411; G06F18/214
Inventor: 谢昱锐, 蒋涛, 袁建英, 许林
Owner CHENGDU UNIV OF INFORMATION TECH