The invention discloses a universal cross-modal retrieval model based on deep hashing. The model comprises an image model, a text model, a binary code conversion model, and a Hamming space. The image model extracts features and semantics from image data; the text model extracts features and semantics from text data; the binary code conversion model converts the original features into binary codes; and the Hamming space is a common subspace of the image and text data, in which the similarity of cross-modal data can be computed directly. By combining deep learning with hash learning, this universal cross-modal retrieval model maps data points from the original feature space into binary codes in the common Hamming space. Similarity ranking is performed by computing the Hamming distance between the codes of the query data and the codes of the database data, from which the retrieval result is obtained, greatly improving retrieval efficiency. Because the binary codes replace the original data in storage, the storage capacity required by retrieval tasks is also greatly reduced.
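The retrieval procedure described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the feature extractors (the image and text models) are assumed to have already produced real-valued features, sign-based binarization stands in for the unspecified binary code conversion model, and the names `binarize`, `hamming_distance`, and `retrieve` are illustrative.

```python
def binarize(features):
    # Sign-based binarization: a common, illustrative stand-in for the
    # binary code conversion model (the exact conversion is not given here).
    return tuple(1 if x > 0 else 0 for x in features)

def hamming_distance(a, b):
    # Number of bit positions in which the two binary codes differ.
    return sum(x != y for x, y in zip(a, b))

def retrieve(query_code, db_codes, k=3):
    # Rank database codes by Hamming distance to the query code in the
    # common Hamming space and return the indices of the k nearest items.
    order = sorted(range(len(db_codes)),
                   key=lambda i: hamming_distance(query_code, db_codes[i]))
    return order[:k]

# Toy example: 4-bit codes standing in for hashed image/text features.
db_features = [
    [ 0.3, -0.1,  0.7, -0.4],
    [-0.2,  0.5, -0.6,  0.1],
    [ 0.9,  0.2,  0.8, -0.3],
    [-0.7, -0.5,  0.4,  0.6],
    [ 0.1, -0.9, -0.2, -0.8],
]
db_codes = [binarize(f) for f in db_features]
query_code = binarize([0.4, -0.2, 0.9, -0.1])
print(retrieve(query_code, db_codes))  # [0, 2, 4]
```

Because the query and database items share one Hamming space regardless of their original modality, a text query can be ranked against image codes (or vice versa) with the same distance computation; in practice the XOR-and-popcount form of the Hamming distance on packed bit words is what makes this much cheaper than comparing real-valued features.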