
3D Model Retrieval Method Based on Optimal View and Deep Convolutional Neural Network

A convolutional neural network and 3D model technology, applied in the field of computer graphics, that addresses problems such as inconsistent viewpoint quality and large variation in view-extraction results across different model types, and achieves expressive view features, strong feature robustness, and improved retrieval performance.

Active Publication Date: 2020-07-17
ZHEJIANG UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0004] (1) In existing view selection methods, researchers typically enclose the 3D model in a polyhedron, take the polyhedron's vertices as viewpoints, and extract one view per vertex; alternatively, the optimal view is selected using measures such as mesh saliency, visible area ratio, or maximum projected area. However, the viewpoints obtained by the former approach vary in quality, and the latter methods produce very different results for different types of models.



Examples


Embodiment

[0037] Example: As shown in Figure 1, the 3D model retrieval method based on the optimal view and a deep convolutional neural network includes the following steps:

[0038] Step 1: Extract views from a total of 2106 3D models. First, according to the predefined initial viewpoints, each 3D model is enclosed by a viewpoint sphere that is centered at the model's centroid and carries multiple viewpoints. For rendering, closed contour lines are combined with suggestive contours (hereinafter, hybrid contour lines): the closed contour lines are obtained by detecting and drawing the parts of the model surface whose normals are perpendicular to the viewing direction, while the suggestive contours add nearby lines whose curvature matches what human vision perceives, together yielding a two-dimensional view of the three-dimensional model. A 2D view of each 3D model is thus extracted using the hybrid silho…
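A minimal sketch of the viewpoint-sphere setup in Step 1, assuming the model is a triangle mesh loaded with trimesh and that the viewpoints are spread with a Fibonacci spiral; the patent specifies neither the sampling pattern nor the number of viewpoints, so both are illustrative assumptions, and the hybrid contour rendering itself is not reproduced here:

```python
import numpy as np
import trimesh  # assumed mesh-loading library; any loader exposing vertices works


def sample_viewpoints(mesh_path, n_views=20, radius_scale=2.0):
    """Place n_views camera positions on a viewpoint sphere centered at the model's centroid.

    The sphere radius is a multiple of the farthest vertex distance so the whole
    model stays in view; a Fibonacci spiral gives a roughly uniform spread.
    Both choices are placeholders for the patent's predefined initial viewpoints.
    """
    mesh = trimesh.load(mesh_path, force='mesh')
    center = mesh.centroid
    radius = radius_scale * np.max(np.linalg.norm(mesh.vertices - center, axis=1))

    # Fibonacci-spiral directions on the unit sphere
    i = np.arange(n_views)
    phi = np.arccos(1.0 - 2.0 * (i + 0.5) / n_views)   # polar angle
    theta = np.pi * (1.0 + 5 ** 0.5) * i                # azimuth step = golden angle
    dirs = np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=1)

    # Each row is one camera position; every camera looks toward `center`.
    return center + radius * dirs
```

Each returned camera position would then be rendered with the hybrid contour method to produce one 2D line-drawing view of the model.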



Abstract

The invention relates to a three-dimensional model retrieval method based on an optimal view and a deep convolutional neural network. First, views of a three-dimensional model are extracted from multiple viewpoints, and an optimal view is selected according to the ordering of gray entropy. Second, the view set is fed to a deep convolutional neural network, which is trained to extract deep features of the views and reduce their dimensionality. Meanwhile, an edge contour map is extracted from the input natural image, and a set of 3D models is returned after similarity matching. Finally, the result list is fine-tuned and re-ranked based on the proportion of the target model's category in the retrieval results, and the final retrieval results are returned, realizing 3D model retrieval. The method effectively selects better views, reduces view redundancy, and uses deep features to express the views at a higher level, which markedly improves retrieval performance.
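A compact sketch of the gray-entropy ordering mentioned in the abstract, assuming each rendered view is an 8-bit grayscale numpy array and that the entropy used is the ordinary Shannon entropy of the gray-level histogram (the patent text shown here does not spell out the exact definition):

```python
import numpy as np


def gray_entropy(view):
    """Shannon entropy of an 8-bit grayscale view's intensity histogram."""
    hist, _ = np.histogram(view, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))


def select_optimal_views(views, k=1):
    """Order rendered views by gray entropy and keep the top k as the 'optimal' views.

    Treating higher entropy as the richer, more informative view is an assumption;
    the abstract only states that views are ordered by gray entropy.
    """
    order = np.argsort([gray_entropy(v) for v in views])[::-1]
    return [views[i] for i in order[:k]]
```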

Description

Technical field

[0001] The invention relates to the field of computer graphics, and in particular to a three-dimensional model retrieval method based on an optimal view and a deep convolutional neural network.

Background technology

[0002] With advances in computer graphics processing capability and 3D modeling technology, 3D models are now widely used in fields such as industrial design, virtual reality, and medical diagnosis, and their number has grown explosively. This massive amount of data brings new opportunities and challenges to 3D model retrieval technology.

[0003] In view-based 3D model retrieval, the common approach is to render the 3D model into multiple 2D views, extract hand-crafted features from those views, and then match the features of the input source image against the view features to perform similarity matching and obtain the target model. However, this retrieval approach has the following two problems: [0004…
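To make the similarity-matching step above concrete, here is a small sketch that assumes both the query image's edge-contour map and every stored view have already been encoded as fixed-length feature vectors (by the CNN in this patent, by hand-crafted descriptors in the earlier methods), and that cosine similarity is used for matching; the actual distance measure and the per-category re-ranking are not reproduced here:

```python
import numpy as np


def rank_models(query_feat, view_feats, view_to_model):
    """Rank 3D models by the best cosine similarity between the query and any of their views.

    query_feat    : (d,)   feature vector of the query image's edge-contour map
    view_feats    : (n, d) feature vectors of the stored optimal views
    view_to_model : length-n list mapping each view to its model id
    """
    q = query_feat / np.linalg.norm(query_feat)
    v = view_feats / np.linalg.norm(view_feats, axis=1, keepdims=True)
    sims = v @ q                                    # cosine similarity per view

    best = {}                                       # model id -> best view score
    for score, model_id in zip(sims, view_to_model):
        best[model_id] = max(float(score), best.get(model_id, float('-inf')))

    # Highest-scoring models first
    return sorted(best.items(), key=lambda kv: kv[1], reverse=True)
```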


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F16/583, G06N3/04, G06K9/46
CPC: G06F16/583, G06V10/44, G06N3/045
Inventors: 刘志, 李江川, 陈波
Owner: ZHEJIANG UNIV OF TECH