The invention provides an RNN-based multi-task learning method. The method includes the steps of: S1, initializing the system parameters theta = (W, U, B, V); S2, inputting the samples x_{1,i}, ..., x_{R,i}, learning the public information X_co, and feeding the public information back into the training of the individual tasks; S3, computing the predicted label vector output by each neural network (as given in the description) and computing the loss L_r of task r; S4, computing the gradient of theta = (W, U, B, V) by gradient descent and the BPTT algorithm, and determining the gradient of task r with respect to the public information X_co; S5, determining a learning rate eta and updating each weight by W = W - eta * deltaW; S6, judging whether the neural network has reached stability: if yes, proceeding to step S7; if not, returning to step S2 and iteratively updating the model parameters; S7, outputting the optimized model. With this method, the public features shared among the multiple tasks learned by the RNN can be exploited effectively and fed into the learning of each individual task, so that information sharing is realized. In addition, by introducing the GRU structure into the RNN, the vanishing-gradient problem can be mitigated effectively.
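The architecture described above (a shared recurrent encoder for the public information X_co plus per-task outputs) can be illustrated with a minimal forward-pass sketch. This is an assumption-laden toy, not the patent's implementation: the GRU cell, the hidden size, and the per-task linear heads are all illustrative choices; the loss computation (S3) and the BPTT update (S4-S5) would backpropagate through the time loop shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """One GRU step. The update/reset gating lets gradients flow across
    long time spans, which is how the GRU structure mitigates the
    vanishing-gradient problem mentioned in the abstract."""
    def __init__(self, input_size, hidden_size):
        s = 1.0 / np.sqrt(hidden_size)
        # Stacked weights for update gate z, reset gate r, candidate state
        self.W = rng.uniform(-s, s, (3 * hidden_size, input_size))   # input weights
        self.U = rng.uniform(-s, s, (3 * hidden_size, hidden_size))  # recurrent weights
        self.b = np.zeros(3 * hidden_size)
        self.H = hidden_size

    def step(self, x, h):
        H = self.H
        a = self.W @ x + self.b
        u = self.U @ h
        z = sigmoid(a[:H] + u[:H])                 # update gate
        r = sigmoid(a[H:2 * H] + u[H:2 * H])       # reset gate
        h_cand = np.tanh(a[2 * H:] + r * u[2 * H:])  # candidate state
        return (1 - z) * h + z * h_cand            # interpolate old/new state

def multitask_forward(cell, heads, sequence):
    """Shared GRU encodes the sequence (the 'public information' learned
    across tasks); each task applies its own output matrix V to the
    final hidden state (the task-private part)."""
    h = np.zeros(cell.H)
    for x in sequence:          # unroll over time; BPTT (step S4) would
        h = cell.step(x, h)     # backpropagate through this loop
    return {name: V @ h for name, V in heads.items()}

cell = GRUCell(input_size=4, hidden_size=8)
heads = {"task1": rng.normal(size=(3, 8)),   # hypothetical 3-class head
         "task2": rng.normal(size=(2, 8))}   # hypothetical 2-class head
seq = [rng.normal(size=4) for _ in range(5)]
outputs = multitask_forward(cell, heads, seq)
print({name: out.shape for name, out in outputs.items()})
```

In a full training loop, each task's loss L_r would be computed from its head's output, gradients for the shared parameters (W, U, b) would accumulate contributions from every task, and the update W = W - eta * deltaW would be applied until the network stabilizes, as in steps S3 through S6.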