
Three-dimensional model search method based on deep learning

A 3D model retrieval technology based on deep learning, applied in character and pattern recognition, special data processing applications, instruments, etc. It addresses problems such as the limited scope of use and high hardware requirements of existing methods, with the effects of improving retrieval performance, exploiting the autonomy of learned features, and avoiding dependence on a specific type of image.

Active Publication Date: 2017-08-18
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

These methods require clear spatial structure information, impose high hardware requirements, and have a limited scope of use.



Examples


Embodiment 1

[0037] In order to solve the above problems, methods are needed that can comprehensively, automatically and accurately extract the features of multi-view objects and retrieve them. Studies have shown that as the number of neural network layers increases, the learned features exhibit intuitive and desirable properties such as compositionality, translation invariance, and class discriminability [8]. The embodiment of the present invention proposes a 3D model retrieval method based on deep learning; see figure 1 and the description below:

[0038] 101: Convolve each picture, of any type, channel by channel with the feature extractor; take the absolute value of the convolution results and apply local contrast normalization; then perform average pooling on each picture to obtain the single-layer convolutional neural network result for each picture;

[0039] 102: Divide the low-level features output by the convolutional neural network into blocks of a preset size, aggregate each block into a parent vector, and finally aggregate the output matrix into a single vector, so that each picture is expressed by a plurality of features which are concatenated to form the picture's output feature; ...
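The two steps above can be sketched roughly as follows. This is a minimal NumPy/SciPy illustration rather than the patent's implementation: the filter bank is assumed to be the k-means centroids learned in Embodiment 2, and the pooling window, block size, and the sum-pooling used to form parent vectors are illustrative choices not fixed by the excerpt above.

```python
import numpy as np
from scipy.signal import convolve2d

def single_layer_cnn(image, filters, pool=4, eps=1e-8):
    """Step 101: channel-by-channel convolution with the learned filter bank,
    absolute-value rectification, local contrast normalization, average pooling.

    image   : H x W x C array (e.g. a 148 x 148 x 3 RGB picture)
    filters : N x k x k x C array (e.g. reshaped k-means centroids)
    """
    responses = []
    for f in filters:
        # convolve each colour channel separately and sum the channel responses
        r = sum(convolve2d(image[:, :, c], f[:, :, c], mode="valid")
                for c in range(image.shape[2]))
        responses.append(np.abs(r))          # absolute-value rectification
    maps = np.stack(responses, axis=-1)      # H' x W' x N feature maps

    # simplified local contrast normalization: per location, across the N maps
    mu = maps.mean(axis=-1, keepdims=True)
    sigma = maps.std(axis=-1, keepdims=True)
    maps = (maps - mu) / (sigma + eps)

    # non-overlapping average pooling with a pool x pool window
    h, w, n = maps.shape
    h, w = h - h % pool, w - w % pool
    return maps[:h, :w].reshape(h // pool, pool, w // pool, pool, n).mean(axis=(1, 3))

def picture_descriptor(maps, block=2):
    """Step 102: partition the pooled maps into block x block regions, aggregate
    each region into a parent vector (sum-pooling here), and concatenate."""
    h, w, n = maps.shape
    h, w = h - h % block, w - w % block
    parents = maps[:h, :w].reshape(h // block, block, w // block, block, n).sum(axis=(1, 3))
    return parents.reshape(-1)               # one feature vector per picture
```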

Embodiment 2

[0043] The scheme in Embodiment 1 is further described below in conjunction with specific calculation formulas and examples; see the following description for details:

[0044] 201: Preprocess all the images in the database, and obtain the cluster centers through k-means clustering;

[0045] The preprocessing performed on all pictures in the database includes the following steps: normalizing the picture size, extracting picture blocks x^(i), brightness and contrast normalization, whitening, and k-means clustering to obtain the cluster centers c^(j), where i∈{1,2,…,M}, j∈{1,2,…,N}.

[0046] In the embodiment of the present invention, the input picture is first preprocessed as follows: the input RGB pictures of different sizes are scale-normalized and resized to 148×148×3; then picture blocks of size 9×9×3 are extracted with a stride of 1, giving a total of 19,600 picture blocks x^(i), where i∈{1,2,…,19600}. ...
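A minimal sketch of this preprocessing stage is given below, assuming scikit-learn's KMeans for the clustering step. The whitening method (ZCA), the epsilon constants, and the number of cluster centers (300 here) are illustrative assumptions not stated in the excerpt above.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_patches(image, patch=9, stride=1):
    """Densely extract patch x patch x C blocks; for a 148 x 148 x 3 picture
    with patch=9 and stride=1 this yields 140 * 140 = 19,600 blocks."""
    h, w, _ = image.shape
    return np.array([image[i:i + patch, j:j + patch].reshape(-1)
                     for i in range(0, h - patch + 1, stride)
                     for j in range(0, w - patch + 1, stride)])

def normalize_and_whiten(x, eps_norm=10.0, eps_zca=0.1):
    """Per-block brightness/contrast normalization followed by ZCA whitening."""
    x = x - x.mean(axis=1, keepdims=True)                      # brightness
    x = x / np.sqrt(x.var(axis=1, keepdims=True) + eps_norm)   # contrast
    cov = np.cov(x, rowvar=False)
    U, S, _ = np.linalg.svd(cov)
    return x @ (U @ np.diag(1.0 / np.sqrt(S + eps_zca)) @ U.T)  # ZCA transform

def learn_cluster_centers(whitened_patches, n_centers=300, seed=0):
    """k-means over the whitened blocks; the centroids c^(j) serve as the filter bank."""
    km = KMeans(n_clusters=n_centers, n_init=10, random_state=seed)
    km.fit(whitened_patches)
    return km.cluster_centers_                # N x (9*9*3) filter bank
```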

Embodiment 3

[0099] The feasibility of the schemes in Embodiments 1 and 2 is verified below in conjunction with specific examples; see the following description for details:

[0100] In this experiment, the ETH database is divided into 8 categories with 10 objects in each category, 80 objects in total; each object includes 41 images. The categories are: car, horse, tomato, apple, cow, pear, mug, and puppy.

[0101] This experiment also uses the MVRED database produced by the Tianjin University laboratory, which includes 311 query objects and 505 test objects. Each object includes 73 images, comprising RGB pictures and the corresponding depth maps and masks. The 505 test objects are divided into 61 categories, each containing 1 to 20 objects. The 311 objects used as query models come from categories containing no fewer than 10 objects each. Each object is captured from three viewing angles, contributing 36, 36, and 1 pictures respectively.

[0102] Precision-recall curve: it mainly describes ...
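For reference, precision and recall at each cutoff of a ranked retrieval list can be computed as in the short sketch below; this is the standard definition of the metric, not code from the patent, and the toy labels are hypothetical.

```python
import numpy as np

def precision_recall(ranked_labels, query_label):
    """Precision and recall at every cutoff of a ranked retrieval list,
    with the retrieved objects' category labels ordered best match first."""
    relevant = np.asarray(ranked_labels) == query_label
    hits = np.cumsum(relevant)
    precision = hits / np.arange(1, len(relevant) + 1)
    recall = hits / max(relevant.sum(), 1)
    return precision, recall

# toy example: querying with an "apple" object
p, r = precision_recall(["apple", "car", "apple", "apple", "cow"], "apple")
# p -> [1.0, 0.5, 0.667, 0.75, 0.6], r -> [0.333, 0.333, 0.667, 1.0, 1.0]
```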



Abstract

The invention discloses a three-dimensional model search method based on deep learning. The method comprises the steps of: convolving pictures of any type channel by channel with a feature extractor, taking the absolute values of the convolution results and performing local contrast normalization, and performing average pooling on each picture to form each picture's single-layer convolutional neural network result; partitioning the low-level convolutional neural network output features into blocks, aggregating the blocks into parent vectors and finally aggregating the output matrix into a vector, so that each picture is expressed by a plurality of features which are concatenated to serve as the picture's output feature; and applying a view-based three-dimensional model search algorithm to the extracted output features, matching the model to be queried against the existing models, calculating and sorting their similarities, and obtaining the final search result. According to the method, dependence on a specific type of image is avoided during image feature acquisition, the limitations of artificially designed features on different images are eliminated, and the search precision for multi-view targets is improved.
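As one possible reading of the matching and sorting step described in the abstract, the sketch below ranks database models by comparing their per-view features. Cosine similarity and the "best-matching-view" aggregation are illustrative assumptions, since the excerpt does not specify the exact matching strategy.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two picture output features."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def model_similarity(query_views, model_views):
    """Similarity between two 3D models, each represented by a list of
    per-view feature vectors: average over each query view's best match."""
    return float(np.mean([max(cosine(q, m) for m in model_views)
                          for q in query_views]))

def retrieve(query_views, database):
    """Rank database models (name -> list of view features) by similarity."""
    scored = [(name, model_similarity(query_views, views))
              for name, views in database.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)
```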

Description

Technical Field

[0001] The invention relates to the field of three-dimensional model retrieval, in particular to a three-dimensional model retrieval method based on deep learning.

Background Technique

[0002] With the rapid development of computer technology and networks, the scale of multimedia data is growing ever larger, and 3D model data has become a new type of multimedia data after sound, images and video. Three-dimensional models are intuitive and highly expressive, and their application fields are increasingly extensive, such as computer-aided design (CAD), computer vision (e.g. gesture recognition), medical imaging, indoor robot navigation, and behavior analysis.

[0003] At present, there are many 3D model recognition methods or systems, which can be divided into several categories, such as early text-based 3D model retrieval methods, content-based 3D model retrieval methods, and topic-model-based 3D model retrieval methods, ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06F17/30; G06K9/62
CPC: G06F16/583; G06F18/23213
Inventor: 刘安安, 李梦洁, 聂为之
Owner: TIANJIN UNIV