
Channel estimation method, device and readable storage medium based on deep neural network

A deep neural network and channel estimation technology, applied to devices and readable storage media in the field of deep-neural-network-based channel estimation, which solves problems such as low estimation accuracy and achieves the effect of improved estimation accuracy.

Active Publication Date: 2020-07-28
XI AN JIAOTONG UNIV

AI Technical Summary

Problems solved by technology

[0005] The purpose of the present invention is to overcome the shortcoming of low estimation accuracy in the above-mentioned prior-art channel estimation schemes, and to provide a channel estimation method, device and readable storage medium based on a deep neural network.



Examples

Experimental program
Comparison scheme
Effect test

Embodiment

[0104] The present invention will be described in further detail below in conjunction with the accompanying drawings and embodiments.

[0105] Consider a single-bit quantized single-cell massive MIMO scenario. In this scenario, the channel estimation method based on the deep neural network of the present invention is used to realize uplink channel estimation. The detailed simulation parameters are shown in Table 1.

[0106] Table 1 Simulation parameter table

[0107]
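Although the Table 1 values are not reproduced in this excerpt, the single-bit quantized uplink received signal in such a scenario is commonly modelled as the element-wise sign of the real and imaginary parts of the unquantized antenna outputs. The sketch below illustrates that signal model; the array size, pilot length and SNR are placeholders rather than the Table 1 settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder dimensions (not the Table 1 values, which are not reproduced here).
M = 64      # base-station antennas
K = 8       # single-antenna users
T = 16      # pilot length
snr_db = 0.0

# Rayleigh channel and unit-power QPSK pilots.
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
X = (rng.choice([-1, 1], (K, T)) + 1j * rng.choice([-1, 1], (K, T))) / np.sqrt(2)

# Unquantized received pilot block Y = H X + N.
noise_var = 10 ** (-snr_db / 10)
N = np.sqrt(noise_var / 2) * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
Y = H @ X + N

# One-bit quantization: sign of real and imaginary parts, per antenna and per sample.
R = (np.sign(Y.real) + 1j * np.sign(Y.imag)) / np.sqrt(2)
```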

[0108] Comparison scheme

Comparison scheme 1

[0109] Comparison scheme 1: the channel is estimated with the least-squares (LS) method. In this scheme, the channel is estimated by applying LS directly to the signal received by the base station, without any preprocessing.
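For concreteness, a minimal LS baseline along these lines could look as follows; R and X are assumed to follow the signal model sketched above, and the function name is illustrative rather than taken from the patent.

```python
import numpy as np

def ls_channel_estimate(R, X):
    """Least-squares channel estimate H_ls = R X^H (X X^H)^(-1).

    R : (M, T) received pilot block (here the one-bit quantized samples,
        used directly without preprocessing, as in comparison scheme 1)
    X : (K, T) known pilot symbols
    """
    return R @ X.conj().T @ np.linalg.inv(X @ X.conj().T)
```

Because the one-bit samples are fed straight into the LS filter, the quantization distortion is left uncompensated, which is the low-accuracy behaviour the patent seeks to improve on.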

Comparison scheme 2

[0110] Comparison scheme 2: the channel is estimated with the linear minimum mean square error (LMMSE) method. In this scheme, the signal received by the base station is first preprocessed: the Bussgang decomposition is used to convert the nonlinear quantization process into a linear process, and LMMSE is then applied to obtain the channel estimate.
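As an illustration only, one common way to realize such a Bussgang-linearized LMMSE estimator is sketched below. It assumes i.i.d. unit-variance channel entries and the one-bit quantization model used earlier, and uses the arcsine law for the covariance of the quantizer output; these are standard choices in the one-bit massive MIMO literature but are not confirmed by the patent excerpt, so the sketch may differ from the exact formulation used in this comparison scheme.

```python
import numpy as np

def bussgang_lmmse_estimate(R, X, noise_var):
    """Bussgang-linearized LMMSE channel estimate, applied antenna by antenna.

    R         : (M, T) one-bit quantized pilot block
    X         : (K, T) known pilot symbols
    noise_var : thermal noise variance

    Sketch assumptions: i.i.d. unit-variance channel entries (C_h = I) and the
    arcsine law for the covariance of the one-bit output.
    """
    T = X.shape[1]
    Phi = X.T                                         # per-antenna model: y = Phi @ h + n
    C_y = Phi @ Phi.conj().T + noise_var * np.eye(T)  # covariance of the unquantized signal
    d = 1.0 / np.sqrt(np.real(np.diag(C_y)))
    A = np.sqrt(2.0 / np.pi) * np.diag(d)             # Bussgang gain of the one-bit quantizer
    B = A @ Phi                                       # linearized model: r = B @ h + e
    # Covariance of the one-bit output via the element-wise arcsine law.
    Dn = np.diag(d)
    C_r = (2.0 / np.pi) * (np.arcsin(Dn @ C_y.real @ Dn)
                           + 1j * np.arcsin(Dn @ C_y.imag @ Dn))
    W = B.conj().T @ np.linalg.inv(C_r)               # LMMSE filter: h_hat = C_h B^H C_r^{-1} r
    return (W @ R.T).T                                # (M, K) channel estimate
```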

[0111] See Figures 6 and 7, which plot the estimation performance of the four different schemes as the signal-to-noise ratio varies. Two channel models are simulated: the Rayleigh block-fading channel model and the spatially correlated channel model given by formula (1). It can be seen that the proposed method outperforms the other estimation methods. In Figure 6, under the Rayleigh block-fading channel model, the proposed algorithm obtains a significant gain of at least 8 dB over the other algorithms when the SNR is -8 dB. Also...
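Figures 6 and 7 themselves are not reproduced in this excerpt. For readers who want to regenerate a comparable NMSE-versus-SNR curve, the skeleton below runs the LS baseline over a Rayleigh block-fading channel with one-bit quantization; the dimensions and trial count are placeholders rather than the Table 1 settings, and the LMMSE and DNN-based curves would be added inside the same loop.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, T, trials = 64, 8, 16, 500           # placeholder sizes, not the Table 1 values

def one_bit(Y):
    """Element-wise one-bit quantization of real and imaginary parts."""
    return (np.sign(Y.real) + 1j * np.sign(Y.imag)) / np.sqrt(2)

for snr_db in range(-10, 11, 2):
    noise_var = 10 ** (-snr_db / 10)
    acc = 0.0
    for _ in range(trials):
        # Rayleigh block-fading channel: i.i.d. complex Gaussian entries, constant over the block.
        H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
        X = (rng.choice([-1, 1], (K, T)) + 1j * rng.choice([-1, 1], (K, T))) / np.sqrt(2)
        N = np.sqrt(noise_var / 2) * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
        R = one_bit(H @ X + N)
        # LS baseline (comparison scheme 1); other schemes would be evaluated analogously.
        H_ls = R @ X.conj().T @ np.linalg.inv(X @ X.conj().T)
        acc += np.linalg.norm(H_ls - H) ** 2 / np.linalg.norm(H) ** 2
    print(f"SNR {snr_db:+3d} dB  LS NMSE {10 * np.log10(acc / trials):6.2f} dB")
```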



Abstract

The invention belongs to the technical field of signal processing in wireless communication and discloses a channel estimation method and device based on a deep neural network, and a readable storage medium. The method comprises the following steps: obtaining a first training label from a pilot sequence and the pilot signal received over the communication channel, and training it with a deep neural network to obtain a pilot-based channel estimation coefficient; obtaining the transmitted data from the pilot-based channel estimation coefficient and the received quantized data, obtaining a second training label from the transmitted data and the received quantized data, and training the second training label with a deep neural network to obtain a data-based channel estimation coefficient; and averaging the pilot-based channel estimation coefficient and the data-based channel estimation coefficient to obtain the final channel estimation coefficient. After the pilot-based channel estimation coefficient is obtained through deep neural network training, data channel estimation is performed to obtain the data-based channel estimation coefficient, and the average of the two is used as the final channel estimation coefficient, so that the channel estimation accuracy is greatly improved.
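Read as pseudocode, the processing flow described in the abstract can be summarized as below. The helper functions are hypothetical placeholders, since the excerpt does not disclose the network architecture, the training procedure, or the data-detection step; only the final averaging step is taken directly from the abstract.

```python
# Hypothetical skeleton of the flow described in the abstract.

def train_dnn_on_pilots(pilot_sequence, received_pilot_signal):
    """Form the first training label from the pilot sequence and the received
    pilot signal, train the deep neural network, and return the pilot-based
    channel estimation coefficient (placeholder)."""
    raise NotImplementedError

def recover_transmitted_data(h_pilot, received_quantized_data):
    """Recover the transmitted data from the pilot-based coefficient and the
    received quantized data (placeholder)."""
    raise NotImplementedError

def train_dnn_on_data(transmitted_data, received_quantized_data):
    """Form the second training label from the recovered data and the received
    quantized data, train the deep neural network, and return the data-based
    channel estimation coefficient (placeholder)."""
    raise NotImplementedError

def estimate_channel(pilot_sequence, received_pilot_signal, received_quantized_data):
    """Overall method: pilot stage, data stage, then averaging of the two coefficients."""
    h_pilot = train_dnn_on_pilots(pilot_sequence, received_pilot_signal)
    tx_data = recover_transmitted_data(h_pilot, received_quantized_data)
    h_data = train_dnn_on_data(tx_data, received_quantized_data)
    return 0.5 * (h_pilot + h_data)   # final coefficient: average of the two estimates
```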

Description

Technical field

[0001] The invention belongs to the technical field of signal processing, and relates to a deep neural network-based channel estimation method, device and readable storage medium.

Background technique

[0002] Massive multiple-input multiple-output (MIMO) technology deploys a large-scale antenna array with hundreds or thousands of antennas at the base station to serve more users on the same time-frequency resource, thereby greatly improving spectral efficiency. However, when the base station uses 128 antennas, the hardware cost reaches several million RMB, which greatly increases the system deployment cost. On the other hand, a large number of antennas corresponds to a large number of radio frequency chains, which significantly increases the power consumption of the base station's receiving system. The receiver power consumption of the uplink includes two aspects: the power consumption of the analog-to-digital converter (ADC) processing unit and the...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H04L25/02, H04B7/08
CPC: H04B7/0854, H04B7/0857, H04L25/0222, H04L25/0254, H04L25/0256
Inventor: 张国梅, 朱瑞芳, 李国兵, 吕刚明
Owner: XI AN JIAOTONG UNIV