The invention discloses an approximate-computation-based binary-weight convolutional neural network hardware accelerator computation module. The computation module receives input neuron data and binary convolution kernel data and performs fast multiply-accumulate operations for the convolution. It uses a two's-complement data representation and mainly comprises an optimized approximate binary multiplier, a compressor tree, a novel approximate adder, and a register that holds the partial sum of the serial accumulation.
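As a minimal software sketch of the multiply-accumulate structure just described (not part of the invention text): the 8-bit activation format, the weight encoding, and the idea of realizing multiplication by -1 as a bitwise inversion that drops the "+1" of the exact two's-complement negation are all assumptions made here for illustration; in the actual hardware the products would feed the compressor tree and the approximate adder rather than a software loop.

#include <stdint.h>
#include <stdio.h>

/* Approximate product of an 8-bit two's-complement activation x and a
   binary weight w encoded as 0 -> +1, 1 -> -1 (encoding assumed here).
   Multiplication by -1 is approximated as bitwise inversion ~x, i.e. the
   "+1" of the exact two's-complement negation is dropped. */
static int32_t approx_bin_mul(int8_t x, uint8_t w)
{
    return w ? (int8_t)~x : x;   /* ~x == -x - 1, so the error is exactly -1 */
}

/* One convolution window: in hardware the products would be summed by the
   compressor tree and the approximate adder, with a register keeping the
   partial sum of the serial accumulation; here a plain loop stands in. */
static int32_t approx_window_mac(const int8_t *x, const uint8_t *w, int n)
{
    int32_t partial_sum = 0;
    for (int i = 0; i < n; i++)
        partial_sum += approx_bin_mul(x[i], w[i]);
    return partial_sum;
}

int main(void)
{
    int8_t  x[4] = { 12, -7, 3, 100 };
    uint8_t w[4] = {  0,  1, 1,   0 };          /* weights +1, -1, -1, +1 */
    printf("approximate MAC = %d\n", approx_window_mac(x, w, 4));
    /* exact: 12 + 7 - 3 + 100 = 116; approximate: 114 (one -1 per negative weight) */
    return 0;
}

Under these assumptions, each weight of -1 contributes an error of exactly -1 to the window sum.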
In addition, two error compensation schemes are proposed for the optimized approximate binary multiplier; they reduce or completely eliminate the errors introduced by the multiplier while only slightly increasing the hardware resource overhead.
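The abstract does not describe the two compensation schemes themselves. Purely as a hypothetical illustration consistent with the claim of complete error elimination, and continuing the sketch above: if every weight of -1 loses exactly one unit, adding the count of negative weights in the window back into the accumulation (for example, as one extra operand of the compressor tree) restores the exact result. The function below is an assumption for illustration, not the patent's scheme.

/* Hypothetical compensation (assumed here, not taken from the abstract):
   with the approximate multiplier above, every weight of -1 loses exactly
   one unit, so adding the number of negative weights in the window restores
   the exact convolution result. In hardware this correction term could be
   injected as one extra operand of the compressor tree. */
static int32_t compensated_window_mac(const int8_t *x, const uint8_t *w, int n)
{
    int32_t sum = 0;
    int32_t neg_weights = 0;
    for (int i = 0; i < n; i++) {
        sum += approx_bin_mul(x[i], w[i]);
        neg_weights += w[i];                 /* w[i] == 1 encodes a weight of -1 */
    }
    return sum + neg_weights;                /* yields the exact 116 for the example above */
}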
With these optimized computing units, the critical path of a binary-weight convolutional neural network hardware accelerator built on the computation module is shortened considerably, and its area and power consumption are also reduced, making the module suitable for low-power embedded systems that require convolutional neural networks.