
A mobile visual retrieval method for digital humanities

A technology combining digital humanities and visual retrieval, applied in the field of mobile visual retrieval for digital humanities

Inactive Publication Date: 2020-06-09
WUHAN UNIV
Cites 4 · Cited by 0

AI Technical Summary

Problems solved by technology

[0004] Most of the above methods neither fully consider the extraction of deep semantic features from images nor the limits on data-transmission scale, so existing mobile visual retrieval methods for digital humanities still leave considerable room for optimization.

Method used



Examples


Embodiment Construction

[0060] In order to make the purpose and technical solution of the present invention clearer, the present invention will be further described in detail below in conjunction with the examples. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.

[0061] As shown in figure 1, the specific implementation of the embodiment of the present invention includes the following steps:

[0062] Step 1. Construct an image semantic extraction model based on deep hashing. The model comprises nine processing layers: five convolutional layers, two fully connected layers, a hash layer, and an output layer. The specific strategy for each processing layer is shown in Table 1:
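As a rough structural sketch, the nine processing layers of Step 1 might be laid out as below. Layer names and roles follow the text; everything else (widths, kernel sizes) would come from Table 1, which is not reproduced in this summary, so no hyperparameters are assumed here:

```python
# Illustrative layout of the nine processing layers from Step 1.
# Only the layer roles (conv / fc / hash / output) come from the text;
# the actual hyperparameters are given in the patent's Table 1.
ARCH = (
    [("C%d" % i, "conv") for i in range(1, 6)]  # C1..C5: convolution + activation + pooling
    + [("F6", "fc"), ("F7", "fc")]              # two fully connected layers
    + [("H8", "hash"),                          # hash layer: compact, binary-friendly code
       ("O9", "output")]                        # output layer used during supervised training
)

def count_layers(kind):
    """Count processing layers of a given role."""
    return sum(1 for _, role in ARCH if role == kind)
```

Placing the hash layer between the last fully connected layer and the output head lets the compact code be learned jointly with the rest of the network.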

[0063]

[0064] Among them, the convolution processing layer C_i contains three processing steps of convolution, activation, and pooling, expressed as:

[0065] C_i = pool(act(conv(C_{i-1})))

[0066] where conv(·) is the convolution o...
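The convolution → activation → pooling composition of a layer C_i can be sketched in plain NumPy. The single channel, "valid" convolution, ReLU activation, and 2×2 max pooling below are all illustrative assumptions, not the patent's actual settings:

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive single-channel 'valid' 2-D convolution (cross-correlation form)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    """Activation step (ReLU assumed; the summary does not name one)."""
    return np.maximum(x, 0.0)

def maxpool2(x):
    """Non-overlapping 2x2 max pooling."""
    H, W = x.shape
    return x[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

def conv_block(x, k):
    """One processing layer C_i: pooling(activation(convolution(x)))."""
    return maxpool2(relu(conv2d_valid(x, k)))
```

For example, a 6×6 input convolved with a 3×3 kernel gives a 4×4 map, which 2×2 pooling reduces to 2×2.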



Abstract

The invention discloses a mobile visual retrieval method oriented to digital humanities, comprising: first constructing an image semantic extraction model based on deep hashing; initializing the parameters of each processing layer through pre-training; constructing a loss function suited to the digital humanities domain; collecting digital humanities image samples and building model training and validation sets; preprocessing the image samples; training the model with the constructed loss function and the digital humanities training set to optimize the model parameters; and using the trained model to extract image semantic feature vectors and complete the image retrieval process. Aiming at the two major challenges of deep semantic feature extraction and limited data-transmission scale in digital humanities mobile visual retrieval, the invention combines deep learning with hashing methods to propose a deep-hashing-based mobile visual search method, which performs excellently on datasets in the digital humanities field.
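The final retrieval step in the abstract matches compact codes rather than full feature vectors, which is what eases the data-transmission constraint on mobile devices. It can be sketched as binarization plus Hamming-distance ranking; the 48-bit code length and the five-item gallery below are made-up toy values:

```python
import numpy as np

def binarize(h):
    """Quantize relaxed hash activations (e.g. tanh outputs) to {0,1} bits."""
    return (h >= 0).astype(np.uint8)

def hamming(a, b):
    """Number of differing bits between two binary codes."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(0)

# Toy gallery of five 48-bit codes, plus a query that is a slightly
# corrupted copy of gallery item 2 (standing in for a near-duplicate image).
gallery = binarize(rng.standard_normal((5, 48)))
query = gallery[2] ^ (rng.random(48) < 0.05).astype(np.uint8)

# Rank gallery items by Hamming distance to the query (closest first).
ranked = sorted(range(len(gallery)), key=lambda i: hamming(gallery[i], query))
```

Because each image is reduced to a few dozen bits, the mobile client only needs to transmit the code, and ranking the whole gallery costs a handful of XOR/popcount operations.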

Description

technical field
[0001] The invention relates to the fields of digital humanities and mobile visual retrieval, and in particular to a mobile visual retrieval method oriented to digital humanities.
background technique
[0002] With the popularization of mobile smart terminal equipment and the rapid development of big data and cloud computing technology, massive visual content such as pictures, videos and 3D models has been generated on the Internet; the portability of mobile devices and the ubiquity of wireless networks push information retrieval methods toward mobile and multimedia forms. Mobile Visual Search (MVS) technology, an information retrieval model that takes visual data such as images, videos or maps collected by mobile smart terminals as retrieval objects to obtain relevant information, is gradually on the rise and has generated huge market and application demand. The application of MVS to the field of digital humanities has emer...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F16/583, G06K9/46, G06K9/62, G06N3/04
CPC: G06V10/40, G06N3/045, G06F18/241, G06F18/214
Inventors: 曾子明, 秦思琪
Owner: WUHAN UNIV