
Voltage accumulation in-memory calculation circuit based on SRAM bit line XNOR

A calculation-circuit and voltage technology, applied in the field of voltage accumulation in-memory computing, which addresses the problems of time consumed on the data output path and long transmission time.

Pending Publication Date: 2020-10-23
中科南京智能技术研究院

AI Technical Summary

Problems solved by technology

[0007] The purpose of the present invention is to provide a voltage accumulation in-memory calculation circuit based on SRAM bit line XNOR, so as to solve the problems of long transmission time and excessive time spent on the data output path during the selection and decoding process.




Embodiment Construction

[0036] The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

[0037] The purpose of the present invention is to provide a voltage accumulation in-memory calculation circuit based on SRAM bit line XNOR, which shortens the propagation time of the output voltage.

[0038] In order to make the above objects, features and advantages of the present invention more comprehensible, the present invention will be further described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0039] Figure 1...



Abstract

The invention relates to a voltage accumulation in-memory calculation circuit based on SRAM bit line XNOR. In the circuit, a read word line driver module in XNOR mode is connected to the storage operation units through a read word line; a row decoder module in storage mode is connected to the storage operation units through a write word line; a write bit line driving and column decoding module in storage mode is connected to the storage operation units through a write bit line; and the read bit line in each storage operation unit is directly connected to an analog-to-digital converter. The bitwise ternary XNOR results in the storage array module are accumulated in the analog domain as a voltage on each read bit line, and the analog-to-digital converter digitizes the read bit line voltage for output. The circuit shortens the propagation time of the output voltage.
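
As a rough illustration of the computation described above (not the patented circuit itself), the sketch below models in Python the bitwise ternary XNOR of stored weights with input activations, the accumulation of the results as an analog voltage on a read bit line, and the digitization of that voltage by an ADC. The function names, the per-cell voltage step, and the ADC resolution are illustrative assumptions, not values from the patent.

```python
# Minimal behavioral sketch (not the patented circuit): models the bitwise
# ternary XNOR of stored weights and input activations, the accumulation of
# matching results as a voltage on a shared read bit line, and the
# digitization of that voltage by an ADC.
# V_STEP and ADC_BITS are illustrative assumptions.

from typing import List

V_STEP = 0.01   # assumed voltage contribution of one matching cell (volts)
ADC_BITS = 5    # assumed ADC resolution

def ternary_xnor(w: int, x: int) -> int:
    """XNOR-style ternary product: +1/-1 inputs give 1 on a match,
    0 on a mismatch; a 0 (masked) weight or activation contributes nothing."""
    if w == 0 or x == 0:
        return 0
    return 1 if w == x else 0

def bitline_voltage(weights: List[int], activations: List[int]) -> float:
    """Accumulate the XNOR results of one column as an analog voltage."""
    matches = sum(ternary_xnor(w, x) for w, x in zip(weights, activations))
    return matches * V_STEP

def adc_readout(voltage: float, v_full_scale: float) -> int:
    """Quantize the accumulated read bit line voltage to a digital code."""
    levels = (1 << ADC_BITS) - 1
    code = round(voltage / v_full_scale * levels)
    return max(0, min(levels, code))

# Example: one column of ternary weights driven by ternary activations.
weights     = [+1, -1, +1, 0, -1, +1]
activations = [+1, +1, +1, -1, -1, 0]
v = bitline_voltage(weights, activations)            # 3 matches -> 0.03 V
print(adc_readout(v, v_full_scale=len(weights) * V_STEP))
```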

Description

Technical field

[0001] The invention relates to the field of voltage accumulation in-memory calculation, and in particular to a voltage accumulation in-memory calculation circuit based on SRAM bit line XNOR.

Background technique

[0002] Deep neural networks (DNNs) and convolutional neural networks (CNNs) have achieved unprecedented improvements in the accuracy of large-scale recognition tasks. However, algorithmic complexity and memory access limit the energy efficiency and speed of DNN hardware. To address this, recent algorithms binarize weights and neuron activations to +1 or -1, so that the multiplication between weights and input activations becomes an XNOR gate operation and the accumulation of products becomes a count of the bits in the XNOR results.

[0003] However, the reduced operational complexity of binary and ternary algorithms makes row-by-row memory accesses dominate the speed and energy efficiency of DNN har...
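
To make the XNOR-and-count reformulation in [0002] concrete, here is a small illustrative check (not taken from the patent): when +1 is encoded as bit 1 and -1 as bit 0, the dot product of two ±1 vectors of length N equals 2·popcount(XNOR(a, b)) − N, so multiply-accumulate reduces to XNOR plus bit counting. The helper names below are assumptions for illustration.

```python
# Illustrative check (not part of the patent): with +1 encoded as bit 1 and
# -1 as bit 0, the dot product of two +/-1 vectors of length N equals
# 2 * popcount(XNOR(a, b)) - N.

def dot(a, b):
    """Ordinary multiply-accumulate over +/-1 vectors."""
    return sum(x * y for x, y in zip(a, b))

def xnor_popcount_dot(a, b):
    """Same result via XNOR (element equality) and bit counting."""
    n = len(a)
    matches = sum(1 for x, y in zip(a, b) if x == y)  # popcount of the XNOR
    return 2 * matches - n

a = [+1, -1, -1, +1, +1]
b = [-1, -1, +1, +1, -1]
assert dot(a, b) == xnor_popcount_dot(a, b)  # both evaluate to -1 here
print(dot(a, b))
```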


Application Information

IPC(8): G11C11/41; G11C11/413; G11C11/416
CPC: G11C11/41; G11C11/413; G11C11/416; Y02D10/00
Inventor: 乔树山, 史万武, 尚德龙, 周玉梅
Owner: 中科南京智能技术研究院