Data conversion method, multiplier, adder, terminal device and storage medium

A data conversion method and related technology, applied in electrical digital data processing, digital data processing components, etc. It can solve problems such as over-design of arithmetic units, and achieves the effects of low power consumption, reduced computing overhead, and a wide numerical representation range.

Active Publication Date: 2021-11-23
JIMEI UNIV


Problems solved by technology

Correspondingly, for deep learning algorithms based on convolutional neural networks, adders and multipliers designed for the IEEE-754 floating-point data format risk being "over-designed".



Examples


Embodiment 1

[0056] An embodiment of the present invention provides a data conversion method for image recognition based on a convolutional neural network model. The convolutional neural network is used to extract key features from the images or videos input to the network, so as to classify images or detect objects. Since the convolution operation is usually the most computationally expensive function in a convolutional neural network, and multiplication is the most expensive step in the convolution operation, the data conversion method proposed in this embodiment converts the floating-point operands of the convolution operation into the new standard number format before the operation is performed.

[0057] The specific conversion method is as follows:

[0058] 1. The floating-point number F is approximated by a sequence of k n-bit integers (a_1, a_2, a_3 … a_k); the specific mathematical meaning is expressed as:

[0059] (1) Through this data form...
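The formula that paragraph [0058] refers to is cut off in this excerpt. Read together with Examples 1 and 2 below (a_1 equals the IEEE-754 exponent code, an all-zero byte contributes nothing, and the sign is carried by the byte ordering), the rule appears to express F as a sum of powers of two selected by the k integers; the LaTeX below is only that inferred reading, not the patent's own equation (1):

```latex
% Inferred reading of equation (1) -- an assumption, not the patent's verbatim formula.
F \;\approx\; \pm\sum_{i=1}^{k} 2^{\,a_i - 127},
\qquad a_i \ \text{an $n$-bit exponent code } (n = 8,\ k = 2 \ \text{in the examples}),
\qquad a_i \to -\infty \ \text{when } F = 0 .
```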

Example 1

[0077] Take the floating-point number -0.96582 as an example. Its IEEE-754 format is 10111111011101110100000000000000, 32 bits in total, counted from right to left. The 32nd bit is the sign bit of the original floating-point number, where 1 denotes a negative number and 0 a positive number; in this embodiment the 32nd bit is 1, so the original floating-point number is negative. Bits 24 to 31, 8 bits in total, are the exponent code of the original floating-point number, and bits 1 to 23, 23 bits in total, are its mantissa. Therefore, the exponent code of the floating-point number -0.96582 is 01111110 and its mantissa is 11101110100000000000000.

[0078] (1): The floating point number -0.96582 is non-zero;

[0079] (2): Set the first byte a_1 equal to the exponent code 01111110, and set count = 1;

[0080] (3): The mantissa of the original flo...
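As a quick check of the bit layout described in this example, the short Python sketch below (not part of the patent) decodes the quoted 32-bit string and recovers the sign bit, exponent code and mantissa named in paragraph [0077]:

```python
# Decode the IEEE-754 single-precision bit string quoted in Example 1 and split it
# into the sign bit, exponent code and mantissa fields described in [0077].
bits = "10111111011101110100000000000000"   # the 32-bit pattern for -0.96582

sign     = bits[0]     # bit 32 (counting from the right): 1 means negative
exponent = bits[1:9]   # bits 31..24: the 8-bit exponent code
mantissa = bits[9:]    # bits 23..1: the 23-bit mantissa

# Recover the value: (-1)^sign * 1.mantissa * 2^(exponent - 127)
value = ((-1) ** int(sign)
         * (1 + int(mantissa, 2) / 2 ** 23)
         * 2 ** (int(exponent, 2) - 127))

print(sign, exponent, mantissa)   # 1 01111110 11101110100000000000000
print(value)                      # -0.9658203125, quoted as -0.96582
```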

Example 2

[0089] The following takes the floating-point number 0.5 as an example; its IEEE-754 format is:

0 01111110 00000000000000000000000

[0090] The sign bit is 0, the exponent code is 01111110, and the mantissa is 00000000000000000000000.

[0091] (1): The floating point number 0.5 is not zero;

[0092] (2): Set the first byte a_1 equal to the exponent code 01111110, and set count = 1;

[0093] (3): Bit (24 - count) = 23 of the mantissa is not equal to 1;

[0094] (4): count = 23 does not hold;

[0095] (5): Set count = count + 1 = 1 + 1 = 2;

[0096] (6): Bit (24 - count) = 22 of the mantissa is not equal to 1;

[0097] ...

[0098] (46): count = 23 holds, so the second byte a_2 is set to 00000000;

[0099] (47): The sign bit is 0, so the first byte a_1 is placed in the high 8 bits and the second byte a_2 in the low 8 bits;

[0100] The floating-point number 0.5 in the new standard format is 01111110 00000000; its mathematical meaning is 0.5, and the rel...
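The step-by-step walk above can be collected into a small sketch. The following Python function is a minimal reconstruction for the k = 2, n = 8 case shown in Examples 1 and 2; the branch taken when a set mantissa bit is found (setting a_2 to the exponent code minus count) is an assumption, since that part of Example 1 is truncated in this excerpt:

```python
# Minimal sketch of the conversion walked through in Example 2 (k = 2, n = 8).
# The set-bit branch (a2 = exponent code - count) is assumed, not quoted from the text.
def to_new_standard(sign_bit: str, exponent: str, mantissa: str) -> str:
    if int(sign_bit + exponent + mantissa, 2) == 0:
        # Step (1): F = 0 maps every byte to the "negative infinity" code
        # (assumed here to be 00000000).
        return "00000000 00000000"

    a1 = exponent                                # step (2): first byte = exponent code
    count = 1
    while True:
        if mantissa[count - 1] == "1":           # test bit (24 - count) of the mantissa
            a2 = format(int(a1, 2) - count, "08b")   # assumed continuation of Example 1
            break
        if count == 23:                          # mantissa exhausted, as in step (46)
            a2 = "00000000"
            break
        count += 1                               # step (5): count = count + 1

    # Step (47): the sign bit selects the byte order (high byte first for positive,
    # reversed for negative, matching "descending or ascending" in the abstract).
    return f"{a1} {a2}" if sign_bit == "0" else f"{a2} {a1}"

print(to_new_standard("0", "01111110", "0" * 23))   # 01111110 00000000, i.e. 0.5
```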



Abstract

The present invention relates to a data conversion method, a multiplier, an adder, a terminal device and a storage medium. The method includes: inputting a floating-point number F; converting the input floating-point number F according to a conversion rule in which a_i is an integer number, each integer is n bits wide, i represents the serial number and k represents the number of integers; forming the converted new standard number from the k n-bit integers a_i, arranged from high order to low order in descending or ascending order; when the floating-point number F = 0, the k n-bit integer numbers are negative infinity; and outputting the converted new standard number. The present invention retains the advantage of the large numerical representation range of single-precision floating-point numbers while reducing the computational overhead of floating-point multiplication, so it can reduce the computational overhead of deep neural network algorithms and provides a solution for low-cost, low-power deployment on consumer devices.
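The claimed reduction in multiplication overhead is stated but not explained in this abstract. One plausible mechanism, assuming the sum-of-powers-of-two reading reconstructed earlier, is that multiplying two converted numbers needs only small integer additions of exponent codes rather than a floating-point multiplier; the sketch below is illustrative only:

```python
# Illustrative only: assuming each converted number is a set of exponent codes a_i
# meaning sum(2 ** (a_i - 127)), a product reduces to pairwise additions of codes.
BIAS = 127

def multiply_codes(a_codes, b_codes):
    """Exponent codes of the product of two converted numbers (zero terms skipped)."""
    return [ai + bj - BIAS for ai in a_codes for bj in b_codes if ai and bj]

def codes_to_value(codes):
    return sum(2.0 ** (c - BIAS) for c in codes)

half          = [126]        # 0.5  -> exponent code 01111110 = 126
three_quarter = [126, 125]   # 0.75 -> 2^-1 + 2^-2 (hypothetical two-term operand)
print(codes_to_value(multiply_codes(half, three_quarter)))   # 0.375 = 0.5 * 0.75
```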

Description

Technical field

[0001] The invention relates to the technical field of data conversion, in particular to a data conversion method, a multiplier, an adder, a terminal device and a storage medium.

Background technique

[0002] Deep neural network algorithms, whose main applications are image recognition and natural language processing, are becoming more and more widespread across the economy. Deep neural networks place high demands on the computing performance of computing devices, and how to reduce the computing overhead of these algorithms has become a common concern of both academia and industry.

[0003] In recent years, deep learning algorithms based on convolutional neural networks have achieved impressive results in fields such as machine vision and natural language processing. A convolutional neural network extracts key features from pictures or videos through complex network design and increased network depth, and finally realizes the classific...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F7/50, G06F7/523, G06F7/57, G06N3/04
CPC: G06F7/523, G06F7/50, G06F7/57, G06N3/045
Inventors: 黄斌, 叶从容, 蔡国榕, 陈豪, 郭晓曦
Owner: JIMEI UNIV