
Digital humanities-oriented mobile visual retrieval method

A technology relating to the digital humanities, applied in the field of mobile visual retrieval for the digital humanities

Status: Inactive | Publication Date: 2018-11-02
Applicant: WUHAN UNIV

AI Technical Summary

Problems solved by technology

[0004] Most of the above methods do not fully consider the extraction of deep semantic features from images or the limitation on the scale of data transmission, so the digital humanities mobile visual retrieval method still leaves considerable room for optimization.



Embodiment Construction

[0060] In order to make the objectives and technical solutions of the present invention clearer, the present invention is further described in detail below in conjunction with the embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it.

[0061] As shown in Figure 1, the specific implementation of the embodiment of the present invention includes the following steps:

[0062] Step 1. Construct an image semantic extraction model based on deep hashing. The model is divided into nine processing layers: five convolutional layers, two fully connected layers, a hashing layer, and an output layer. The specific strategy of each processing layer is shown in Table 1:

[0063] (Table 1, which lists the processing strategy of each layer, is not reproduced in this extract.)
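Since Table 1 is not reproduced, the following is a minimal, hypothetical sketch in PyTorch of how the nine processing layers of paragraph [0062] could be arranged; the framework choice, layer sizes, channel counts, the 48-bit code length, the class count, and the 224x224 input resolution are all illustrative assumptions and are not taken from the patent.

    # Hypothetical sketch of the nine-layer deep hashing model described in [0062]:
    # five convolutional layers, two fully connected layers, a hashing layer,
    # and an output layer. All hyperparameters here are assumptions.
    import torch
    import torch.nn as nn

    class DeepHashNet(nn.Module):
        def __init__(self, hash_bits: int = 48, num_classes: int = 10):
            super().__init__()

            # Each convolution processing layer C_i combines convolution,
            # activation, and pooling (see paragraph [0064]).
            def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
                return nn.Sequential(
                    nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True),
                    nn.MaxPool2d(kernel_size=2),
                )

            # Five convolutional layers C1..C5 (input assumed to be 3 x 224 x 224).
            self.features = nn.Sequential(
                conv_block(3, 64),
                conv_block(64, 128),
                conv_block(128, 256),
                conv_block(256, 256),
                conv_block(256, 512),
            )
            # Two fully connected layers F6 and F7.
            self.fc = nn.Sequential(
                nn.Flatten(),
                nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True),
                nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            )
            # Hashing layer: projects to hash_bits values in (-1, 1), which can
            # later be thresholded at zero to obtain a compact binary code.
            self.hash_layer = nn.Sequential(nn.Linear(4096, hash_bits), nn.Tanh())
            # Output layer used during training (e.g. for a classification loss term).
            self.output = nn.Linear(hash_bits, num_classes)

        def forward(self, x: torch.Tensor):
            h = self.hash_layer(self.fc(self.features(x)))
            return h, self.output(h)

The tanh output of the hashing layer keeps each component in (-1, 1), so the later binarization step (thresholding at zero) loses relatively little information.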

[0064] Among them, the convolution processing layer C_i contains the three processing steps of convolution, activation, and pooling, expressed as:

[0065] (Formula not reproduced.)

[0066] wherein, ... is the convolutio...
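The formula referenced in paragraphs [0064] to [0066] is not reproduced above. A plausible standard form of the convolution-activation-pooling composition, using assumed notation that the source does not confirm (W_i and b_i for the i-th convolution kernel and bias, σ for the activation function, pool for the pooling operation), is:

    C_i = pool( σ( W_i * C_{i-1} + b_i ) ),   i = 1, ..., 5

where C_0 denotes the pre-processed input image and * denotes the convolution operation.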



Abstract

The invention discloses a digital humanities-oriented mobile visual retrieval method. The method comprises: firstly, constructing a deep hashing-based image semantic extraction model; initializing the parameters of all processing layers of the model through pre-training; constructing a loss function suited to the digital humanities field; collecting digital-humanities image samples and constructing a model training set and a verification set; pre-processing the image samples; training the model with the constructed loss function and the digital-humanities training set to optimize the model parameters; and using the trained model to extract image semantic feature vectors and complete the image retrieval process. To address the two major challenges of mobile visual retrieval in the digital humanities, namely the extraction of deep semantic features from images and the limitation on data transmission size, the invention combines deep learning with a hashing method to provide a deep hashing-based mobile visual search (MVS) method for the digital humanities, and this method performs outstandingly on data sets from the digital humanities field.
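The last step of the abstract, extracting image semantic feature vectors with the trained model and completing retrieval, implies that the real-valued hashing-layer outputs are binarized into compact codes and matched by Hamming distance. A minimal illustrative sketch of that matching step follows; the function names (binarize, hamming_rank) and the threshold-at-zero convention are assumptions, not details taken from the patent.

    import numpy as np

    def binarize(h):
        # Turn real-valued hash-layer outputs in (-1, 1) into 0/1 binary codes.
        return (np.asarray(h) > 0).astype(np.uint8)

    def hamming_rank(query_code, db_codes):
        # Rank database images by Hamming distance to the query code.
        dists = np.count_nonzero(db_codes != query_code, axis=1)
        order = np.argsort(dists)
        return order, dists[order]

    # Usage sketch:
    #   query_code = binarize(hash_output_for_query_image)   # shape: (n_bits,)
    #   db_codes   = binarize(hash_outputs_for_database)      # shape: (N, n_bits)
    #   order, dists = hamming_rank(query_code, db_codes)     # nearest images first

Because each binary code is only tens of bits per image, the payload exchanged during retrieval is very small, which is consistent with how the abstract frames the data-transmission-size challenge.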

Description

technical field

[0001] The invention relates to the fields of digital humanities, mobile visual retrieval and the like, and in particular to a digital humanities-oriented mobile visual retrieval method.

Background technique

[0002] With the popularization of mobile smart terminal devices and the rapid development of big data and cloud computing technology, massive amounts of pictures, videos, 3D models and other visual content have been generated on the Internet; the portability of mobile devices and the ubiquity of wireless networks have made information retrieval methods increasingly mobile and multimedia-oriented. Mobile Visual Search (MVS) technology, that is, an information retrieval model that uses visual data such as images, videos or maps collected by mobile smart terminals as retrieval objects to obtain relevant information, has gradually risen and has generated huge market and application demand. The application of MVS to the field of digital humanities has emer...


Application Information

Patent Type & Authority: Application (China)
IPC (IPC-8): G06F17/30; G06K9/46; G06K9/62; G06N3/04
CPC: G06V10/40; G06N3/045; G06F18/241; G06F18/214
Inventors: 曾子明 (Zeng Ziming), 秦思琪 (Qin Siqi)
Owner: WUHAN UNIV