
Block floating point computations using shared exponents

A block floating point and shared-exponent technology, applied in the field of block floating point computing using shared exponents, addressing problems such as the adverse impact of reduced precision on network accuracy.

Status: Pending | Publication Date: 2020-12-15
MICROSOFT TECH LICENSING LLC

AI Technical Summary

Problems solved by technology

While reduced precision can improve the performance of different functions of neural networks (including the speed at which they can perform classification and regression tasks for object recognition, lip reading, speech recognition, detecting unusual transactions, text prediction, and many other functions), network accuracy may be adversely affected.




Embodiment Construction

[0016] Computing devices and methods described herein are configured to perform block floating point calculations using multiple tiers of shared exponents. For example, mantissas are clustered into subvector components whose exponents are shared both at a global level and at a finer-grained level, allowing calculations to be performed with integers. In some examples, the finer granularity of the block floating point exponents allows greater effective precision for the expressed values. As a result, the computational burden is reduced while overall accuracy is improved.
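To make the multi-tier grouping concrete, below is a minimal sketch of one way such an encoding could look: each block of values keeps integer mantissas and a fine-grained shared exponent, and a single global exponent is factored out across all blocks. The function names (to_two_tier_bfp, from_two_tier_bfp), the block size, and the mantissa width are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def to_two_tier_bfp(values, block_size=4, mantissa_bits=8):
    """Illustrative two-tier block floating point encoding (assumed layout,
    not the patent's exact scheme): integer mantissas, a shared exponent per
    block, and one global exponent shared by every block."""
    values = np.asarray(values, dtype=np.float64)
    n_blocks = -(-values.size // block_size)          # ceiling division
    padded = np.zeros(n_blocks * block_size)
    padded[:values.size] = values
    blocks = padded.reshape(n_blocks, block_size)

    # Per-block (fine-grained) exponent: a power of two covering the block's
    # largest magnitude, so mantissas stay within the chosen integer range.
    block_max = np.max(np.abs(blocks), axis=1)
    block_exp = np.where(block_max > 0,
                         np.ceil(np.log2(np.maximum(block_max, 1e-300))),
                         0.0).astype(int)

    # Global exponent shared by all blocks; per-block exponents become offsets.
    global_exp = int(block_exp.max())
    local_exp = block_exp - global_exp                # non-positive offsets

    # Integer mantissas expressed relative to each block's exponent.
    scale = 2.0 ** (mantissa_bits - 1)
    mantissas = np.round(blocks / 2.0 ** block_exp[:, None] * scale).astype(np.int32)
    return mantissas, local_exp, global_exp, scale

def from_two_tier_bfp(mantissas, local_exp, global_exp, scale):
    """Decode back to floating point, e.g. to check the round-trip error."""
    return mantissas / scale * 2.0 ** (local_exp[:, None] + global_exp)
```

The decode path is only there to sanity-check the encoding; in use, the integer mantissas are consumed directly by integer dot products, as in the sketch following the Abstract below.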

[0017] In various examples of the present disclosure, a neural network such as a deep neural network (DNN) can be trained and deployed using block floating point, or a numeric format that is less precise than a single precision floating point format (e.g., 32-bit floating point numbers), with minimal or reduced loss of accuracy. On dedicated hardware such as Field Programmable Gate Arrays (FPGAs)...


Abstract

A system for block floating point computation in a neural network receives a plurality of floating point numbers. An exponent value for an exponent portion of each floating point number of the plurality of floating point numbers is identified and mantissa portions of the floating point numbers are grouped. A shared exponent value of the grouped mantissa portions is selected according to the identified exponent values and then removed from the grouped mantissa portions to define multi-tiered shared exponent block floating point numbers. One or more dot product operations are performed on the grouped mantissa portions of the multi-tiered shared exponent block floating point numbers to obtain individual results. The individual results are shifted to generate a final dot product value, which is used to implement the neural network. The shared exponent block floating point computations reduce processing time with less reduction in system accuracy.
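Continuing the illustrative sketch above (same caveats: hypothetical names and layout, not the patent's exact scheme), the dot-product path described in the abstract can be expressed as an integer multiply-accumulate per block, with each partial result then shifted by the combined shared exponents and summed into the final value:

```python
import numpy as np

def bfp_dot(mant_a, exp_a, mant_b, exp_b, scale):
    """Sketch of a shared-exponent block floating point dot product.

    mant_a, mant_b: integer mantissa blocks, shape (n_blocks, block_size).
    exp_a, exp_b:   shared exponent per block for each operand (global + local).
    """
    # Integer multiply-accumulate within each block: one partial result per block.
    partial = np.einsum('bi,bi->b', mant_a.astype(np.int64), mant_b.astype(np.int64))

    # Shift each partial result by the sum of its operands' shared exponents,
    # undo the fixed mantissa scaling, and accumulate the final dot product.
    shifted = partial * 2.0 ** (exp_a + exp_b)
    return float(shifted.sum()) / (scale * scale)

# Example usage with the encoder sketched earlier (approximately equals a @ b):
#   a, b = np.random.randn(16), np.random.randn(16)
#   ma, ea, ga, s = to_two_tier_bfp(a)
#   mb, eb, gb, _ = to_two_tier_bfp(b)
#   bfp_dot(ma, ea + ga, mb, eb + gb, s)
```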

Description

Background technique

[0001] The block floating-point number format allows dynamic range and precision to be scaled independently. By reducing precision, the performance of a processor (such as a hardware accelerator) can be increased. However, reduced precision may affect system accuracy. For example, the block floating-point numeric format can be used in neural networks, which are implemented in many application areas, such as computer vision, robotics, speech recognition, medical image processing, computer games, augmented reality, and virtual reality tasks. While reduced precision can improve the performance of different functions of neural networks (including the speed at which they can perform classification and regression tasks for object recognition, lip reading, speech recognition, detecting unusual transactions, text prediction, and many other functions), network accuracy may be adversely affected.

Contents of the invention

[0002] This Summar...


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06F7/487G06F17/16
CPCG06F7/4876G06F17/16G06F7/4915G06F7/49936G06F7/49947G06F7/523
Inventor D·洛E·S·钟
Owner MICROSOFT TECH LICENSING LLC