
Target recognition system and method based on time slice convolutional neural network

A convolutional neural network and time-slicing technology, applied to biological neural network models, neural architectures, and character and pattern recognition, to improve recognition efficiency and accuracy.

Pending Publication Date: 2021-11-09
NAT INNOVATION INST OF DEFENSE TECH PLA ACAD OF MILITARY SCI

AI Technical Summary

Problems solved by technology

The problem addressed is how to exploit the temporal correlation of event streams for target recognition while integrating with existing deep learning methods and remaining effectively compatible with all event representation methods.


Examples


Embodiment 1

[0050] An object recognition system based on a time-sliced convolutional neural network, as shown in Figure 1. The system includes:

[0051] The event stream segmentation module 101 is used to segment event stream samples to form event sets, represent each event set as a pseudo-image through an event representation method, splice the pseudo-images into a first feature map according to the input channels of the time-sliced convolutional neural network, and then re-assign different weights to each channel to obtain a second feature map;

[0052] The feature extraction module 102 is used to input the second feature map into the time-sliced convolutional neural network for feature extraction, yielding a feature map of a preset specification;

[0053] The classification module 103 is configured to convert the feature map of the preset specification into a vector and take the category with the highest probability as the target recognition result.
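The event-slicing step of module 101 can be sketched as follows. This is a minimal illustration, not the patented implementation: the sensor resolution, slice count, and count-based event representation are all assumptions, and the channel weights here are fixed rather than learned.

```python
import numpy as np

def segment_event_stream(events, num_slices):
    """Module 101 (sketch): split an event stream into time slices and
    accumulate each slice into a pseudo-image (one channel per slice)."""
    # events: array of rows (x, y, t, p); a 128x128 sensor is assumed
    H, W = 128, 128
    t0, t1 = events[:, 2].min(), events[:, 2].max()
    edges = np.linspace(t0, t1, num_slices + 1)
    frames = np.zeros((num_slices, H, W), dtype=np.float32)
    for x, y, t, p in events:
        # assign the event to its time slice; clamp the last edge
        s = min(np.searchsorted(edges, t, side="right") - 1, num_slices - 1)
        frames[s, int(y), int(x)] += 1.0 if p > 0 else -1.0
    return frames  # first feature map: slices stacked along channels

def reweight_channels(frames, weights):
    """Re-assign a weight to each slice channel (second feature map)."""
    return frames * weights[:, None, None]
```

In the patented system the per-channel weights would be learned jointly with the network; here they are passed in explicitly to keep the sketch self-contained.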

[0054] Speci...

Embodiment 2

[0073] An object recognition system based on a time-sliced convolutional neural network, again as shown in Figure 1. The system includes: an event stream segmentation module 101, used to segment event stream samples to form event sets, represent each event set as a pseudo-image through an event representation method, splice the pseudo-images into a first feature map according to the input channels of the time-sliced convolutional neural network, and then re-assign different weights to each channel to obtain a second feature map; a feature extraction module 102, used to input the second feature map into the time-sliced convolutional neural network for feature extraction, yielding a feature map of a preset specification; and a classification module 103, used to convert the feature map of the preset specification into a vector and take the category with the highest probability as the target recognition result.
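The patent text does not specify the mechanism by which different weights are re-assigned to each channel. One common mechanism that fits the description is a squeeze-and-excitation-style gate; the sketch below is a hypothetical illustration under that assumption, with `w1` and `w2` standing in for learned gate parameters.

```python
import numpy as np

def channel_reweight(feature_map, w1, w2):
    """Hypothetical squeeze-and-excitation-style channel re-weighting:
    global-average-pool each channel, pass the result through a small
    two-layer gate, and scale each channel by its sigmoid weight."""
    squeeze = feature_map.mean(axis=(1, 2))        # (C,) channel summary
    hidden = np.maximum(w1 @ squeeze, 0.0)         # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid weights, (C,)
    return feature_map * gates[:, None, None]      # second feature map
```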

[0074] Preferably, during execution,...

Embodiment 3

[0083] An object recognition method based on a time-sliced convolutional neural network, as shown in Figure 3, includes the following steps:

[0084] S1, an event stream segmentation step: segment event stream samples to form event sets, represent each event set as a pseudo-image through an event representation method, splice the pseudo-images into a first feature map according to the input channels of the time-sliced convolutional neural network, and then re-assign different weights to each channel to obtain a second feature map;

[0085] S2, a feature extraction step: input the second feature map into the time-sliced convolutional neural network for feature extraction, yielding a feature map of a preset specification;

[0086] S3, a classification step: convert the feature map of the preset specification into a vector and take the category with the highest probability as the target recognition result.
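Step S3 reduces to flattening the feature map, applying a classifier, and taking the argmax over softmax probabilities. A minimal sketch, assuming a single linear layer stands in for the trained network head (the weights here are placeholders):

```python
import numpy as np

def classify(feature_map, W, b):
    """Step S3 (sketch): flatten the preset-specification feature map,
    apply a linear classifier, and return the most probable class."""
    v = feature_map.reshape(-1)            # feature map -> vector
    logits = W @ v + b
    exp = np.exp(logits - logits.max())    # numerically stable softmax
    probs = exp / exp.sum()
    return int(np.argmax(probs)), probs    # class with highest probability
```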

[0087] As a transformable implementa...



Abstract

The invention relates to a target recognition system and method based on a time-slice convolutional neural network. The system comprises: an event stream segmentation module, used to segment an event stream sample to form event sets, represent the event sets as pseudo-images through an event representation method, splice the pseudo-images into a first feature map according to the input channels of the time-slice convolutional neural network, and then re-assign different weights to each channel to obtain a second feature map; a feature extraction module, used to input the second feature map into the time-slice convolutional neural network for feature extraction to obtain a feature map of a preset specification; and a classification module, used to convert the feature map of the preset specification into a vector to obtain the target recognition result. The method segments the event stream by exploiting its temporal correlation while fusing with existing deep learning methods and remaining effectively compatible with all event representation methods for target recognition, thereby improving recognition efficiency and accuracy.

Description

Technical Field

[0001] The present application relates to the technical field of target recognition, and more specifically to a target recognition system and method based on a time-sliced convolutional neural network.

Background

[0002] The event camera is a new type of neuromorphic vision sensor, also known as a dynamic vision sensor. It captures dynamic changes in the scene in an event-driven manner and responds to pixel-level brightness changes. It offers low latency, low power consumption, high dynamic range, and high temporal resolution, and has attracted growing attention both inside and outside the industry. With the development of convolutional neural networks, computer vision methods have made great progress. However, due to the sparsity and asynchrony of event streams, traditional computer vision algorithms cannot be applied directly, and existing methods are mainly divided into metho...


Application Information

IPC(8): G06K9/62; G06N3/04
CPC: G06N3/045; G06F18/2415; G06F18/214
Inventors: 史殿习, 徐化池, 张拥军, 王之元, 沈天龙, 凡遵林
Owner NAT INNOVATION INST OF DEFENSE TECH PLA ACAD OF MILITARY SCI