The invention discloses a semantic coding method for a long short-term memory network based on an attention distraction mechanism, and belongs to the field of natural language processing and generation. To address problems in the prior art such as semantic deviation, vanishing gradients, exploding gradients, and incomplete fusion of contextual information, the neural network used by the method adds a parameter-sharing unit on the basis of a BiLSTM, enhancing the model's ability to obtain and fuse bidirectional feature information; an improved activation function is adopted in the deep learning model, reducing the probability of gradient problems; the input layer and the hidden layer are constructed in an interactive-space, extended-LSTM manner, enhancing the model's ability to fuse contextual information; and an attention distraction mechanism over sentence-structure information variables is introduced to constrain semantic generation, thereby improving semantic accuracy. The method is suitable for natural language generation applications such as automatic news or headline writing, customer-service chatbots, and meeting or diagnostic report generation.
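As a rough illustration of the kind of pipeline the abstract describes, the sketch below runs a forward and a backward LSTM pass over a toy input sequence, concatenates the two hidden-state streams into bidirectional features, and then applies a simple additive attention layer to fuse them into a single context vector. This is a minimal generic BiLSTM-plus-attention sketch in NumPy, not the patented method: the parameter-sharing unit, the improved activation function, the interactive-space extended-LSTM construction, and the structure-variable attention distraction mechanism are not specified in the abstract, so all weights and shapes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One standard LSTM cell step; gates stacked as [input, forget, output, candidate]."""
    z = W @ x + U @ h + b
    H = h.shape[0]
    i = sigmoid(z[:H])
    f = sigmoid(z[H:2 * H])
    o = sigmoid(z[2 * H:3 * H])
    g = np.tanh(z[3 * H:])
    c = f * c + i * g          # cell state update
    h = o * np.tanh(c)         # hidden state
    return h, c

def run_lstm(xs, params, H):
    """Run an LSTM over a (T, D) sequence, returning all T hidden states."""
    W, U, b = params
    h, c = np.zeros(H), np.zeros(H)
    outs = []
    for x in xs:
        h, c = lstm_step(x, h, c, W, U, b)
        outs.append(h)
    return np.stack(outs)

def make_params(D, H):
    # Random toy weights; a real model would learn these.
    return (rng.normal(0, 0.1, (4 * H, D)),
            rng.normal(0, 0.1, (4 * H, H)),
            np.zeros(4 * H))

D, H, T = 8, 16, 5
xs = rng.normal(size=(T, D))               # toy embedded input sequence

# Bidirectional pass: forward over xs, backward over the reversed sequence.
fwd = run_lstm(xs, make_params(D, H), H)
bwd = run_lstm(xs[::-1], make_params(D, H), H)[::-1]
feats = np.concatenate([fwd, bwd], axis=1)  # (T, 2H) bidirectional features

# Additive attention: score each timestep, softmax, take the weighted sum.
Wa = rng.normal(0, 0.1, 2 * H)
scores = feats @ Wa
weights = np.exp(scores - scores.max())
weights /= weights.sum()
context = weights @ feats                   # fused (2H,) context vector
print(context.shape, float(weights.sum()))
```

In the patented method, the attention weights would additionally be conditioned on sentence-structure variables to steer (distract) attention during semantic generation; here they depend only on the bidirectional features themselves.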