Computer vision processing method and device for processing equipment with low computing power

A computer vision and processing equipment technology, applied in the field of computer vision, which can solve problems such as poor real-time performance and slow neural network computing speed on processing equipment with low computing power, with the effect of reducing the amount of data, reducing memory overhead, and improving convolution operation speed.

Active Publication Date: 2020-11-24
BEIJING TUSEN ZHITU TECH CO LTD

AI Technical Summary

Problems solved by technology

[0004] In view of the above problems, the present invention provides a computer vision processing method and device for processing equipment with low computing power, to solve the problems of slow computing speed and poor real-time performance of neural networks in the prior art.


Examples


Embodiment 1

[0037] Figure 1 is a flowchart of the neural network optimization method provided by an embodiment of the present invention. In this embodiment, the convolutional layer of the neural network is processed, and the method includes:

[0038] Step 101: Perform binarization and bit packing operations on the input data of the convolutional layer along the channel direction to obtain compressed input data.

[0039] The input data of a convolutional layer is generally three-dimensional, comprising a height, a width, and a number of channels; the number of channels is usually large, generally a multiple of 32. Figure 2 is a schematic diagram of the input data and the corresponding compressed input data, where H represents the height of the input data, W represents its width, and C represents its number of channels; the height and width of the compressed input data are not changed...
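
As a concrete illustration of step 101, the following is a minimal sketch that assumes sign binarization (values greater than or equal to zero map to bit 1, negative values to bit 0), an H x W x C float input stored in HWC order, and C being a multiple of 32 as noted above; the function and variable names are illustrative and not taken from the patent.

```c
/* Minimal sketch of step 101: binarize the input by sign and pack every 32
 * consecutive channel values into one uint32_t word (assumed HWC layout,
 * C a multiple of 32; names are illustrative). */
#include <stdint.h>

/* input:  H*W*C floats, HWC order.  packed: H*W*(C/32) words, same order. */
void binarize_and_pack(const float *input, uint32_t *packed,
                       int H, int W, int C) {
    int words = C / 32;                      /* packed words per pixel */
    for (int h = 0; h < H; ++h) {
        for (int w = 0; w < W; ++w) {
            const float *px  = input  + (h * W + w) * C;
            uint32_t    *out = packed + (h * W + w) * words;
            for (int k = 0; k < words; ++k) {
                uint32_t word = 0;
                for (int b = 0; b < 32; ++b) {
                    if (px[k * 32 + b] >= 0.0f)   /* binarize by sign    */
                        word |= 1u << b;          /* pack along channels */
                }
                out[k] = word;
            }
        }
    }
}
```

Under these assumptions a pixel with, say, C = 128 float channel values collapses into 4 uint32_t words, which is where the reduction in data volume and memory overhead comes from.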

Embodiment 2

[0082] Figure 7 is a schematic flowchart of a neural network optimization method provided by an embodiment of the present invention; the method includes steps 701 to 709. Steps 701 to 705 process the convolutional layers of the neural network and correspond one-to-one to steps 101 to 105 of Figure 1; for their specific implementation, refer to Embodiment 1, which is not repeated here. Steps 706 to 709 process the fully connected layers of the neural network. The order of steps 706 to 709 relative to steps 701 to 705 is not strictly limited and is determined by the structure of the neural network. For example, if the network layers of the neural network are, in sequence, convolutional layer A, convolutional layer B, fully connected layer C, convolutional layer D, and fully connected layer E, then steps 701 to 705 are applied to each convolutional layer in turn according to the order of the network layers included in the neural network...
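
Steps 706 to 709 are not detailed in this excerpt, so the following is only a hypothetical sketch that assumes the fully connected layer is treated analogously: its input vector and each weight row are binarized and bit-packed, and the +1/-1 dot product is recovered with XNOR and popcount (that arithmetic is an assumption, and all names are illustrative).

```c
/* Hypothetical sketch for the fully connected path. x and w each hold one
 * bit-packed vector of `words` uint32_t (bit 1 stands for +1, bit 0 for -1).
 * __builtin_popcount is the GCC/Clang bit-count intrinsic. */
#include <stdint.h>

int binary_dot(const uint32_t *x, const uint32_t *w, int words) {
    int bits = words * 32;                  /* number of packed elements    */
    int matches = 0;
    for (int k = 0; k < words; ++k)
        matches += __builtin_popcount(~(x[k] ^ w[k]));  /* XNOR: equal bits */
    return 2 * matches - bits;              /* equal minus unequal = dot    */
}
```

One such call per output neuron would give the fully connected layer's pre-activation outputs under these assumptions.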

Embodiment 3

[0115] Based on the same idea as the neural network optimization method provided in the foregoing Embodiments 1 and 2, Embodiment 3 of the present invention provides a neural network optimization device, whose structural diagram is shown in Figure 10.

[0116] The first data processing unit 11 is configured to perform binarization and bit packing operations on the input data of the convolutional layer along the channel direction to obtain compressed input data;

[0117] The second data processing unit 12 is configured to perform binarization and bit packing operations on the convolution kernels of the convolution layer along the channel direction to obtain corresponding compressed convolution kernels;

[0118] The division unit 13 is used to sequentially divide the compressed input data into data blocks of the same size as the compressed convolution kernel according to the order of convolution operations, where the input data included in one convolution operation constitutes a data block...
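
A minimal sketch of how the division unit 13 and the subsequent convolution could operate on the packed data, assuming stride 1, no padding, a K x K kernel packed to CW = C/32 words per position, and XNOR plus popcount as the binary multiply-accumulate (the excerpt does not specify this arithmetic, so it is an assumption; names are illustrative):

```c
/* Hypothetical sketch: one output value is obtained by convolving the data
 * block at output position (oh, ow) with one compressed kernel. `packed`
 * stores W*CW words per row of pixels; `kernel` stores K*K*CW words.
 * Bit 1 stands for +1, bit 0 for -1. */
#include <stdint.h>

static int xnor_popcount(uint32_t a, uint32_t b) {
    return __builtin_popcount(~(a ^ b));     /* count matching bit positions */
}

int conv_block(const uint32_t *packed, const uint32_t *kernel,
               int W, int CW, int K, int oh, int ow) {
    int bits = K * K * CW * 32;              /* elements in one data block  */
    int matches = 0;
    for (int kh = 0; kh < K; ++kh)           /* walk the K x K window       */
        for (int kw = 0; kw < K; ++kw)
            for (int c = 0; c < CW; ++c)     /* packed channel words        */
                matches += xnor_popcount(
                    packed[((oh + kh) * W + (ow + kw)) * CW + c],
                    kernel[(kh * K + kw) * CW + c]);
    return 2 * matches - bits;               /* +1/-1 dot product           */
}
```

Iterating (oh, ow) in convolution order enumerates the data blocks described above, and calling conv_block once per compressed kernel yields the multiple output channels mentioned in the abstract.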



Abstract

The invention discloses a computer vision processing method and device for processing equipment with low computing power, to solve the problems of slow computing speed and poor real-time performance of neural networks in the prior art. The method includes: performing binarization and bit packing operations on the input data of the convolution layer along the channel direction to obtain compressed input data; performing binarization and bit packing operations on each convolution kernel of the convolution layer along the channel direction respectively to obtain the corresponding compressed convolution kernels; sequentially dividing the compressed input data into data blocks of the same size as the compressed convolution kernel according to the order of the convolution operations, where the input data included in one convolution operation constitutes a data block; and sequentially convolving each data block of the compressed input data with each compressed convolution kernel to obtain a convolution result, from which the multiple output data of the convolution layer are obtained. The technical scheme of the invention can improve the calculation speed and real-time performance of the neural network.

Description

technical field

[0001] The invention relates to the field of computer vision, and in particular to a computer vision processing method and device for processing equipment with low computing power.

Background technique

[0002] In recent years, deep neural networks have achieved great success in various applications in the field of computer vision, such as image classification, object detection, and image segmentation.

[0003] However, deep neural network models often contain a large number of model parameters, involve a large amount of computation, and process data slowly, so they cannot run in real time on some devices with low power consumption and low computing power (such as embedded devices, integrated devices, etc.).

Contents of the invention

[0004] In view of the above problems, the present invention provides a computer vision processing method and device for processing equipment with low computing power to solve the problems of slow computing speed and p...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06N3/04, G06F17/15
CPC: G06N3/063, G06F17/153, G06N3/045, G06N3/08, G06F12/0207, H03M7/30, G06F17/16, G06N20/10
Inventor: 胡玉炜, 李定华, 苏磊, 靳江明
Owner: BEIJING TUSEN ZHITU TECH CO LTD