Method for synthesizing virtual viewpoint image based on implicit neural scene representation

A virtual viewpoint image synthesis technology, applied in the field of virtual viewpoint image synthesis and scene roaming, which addresses the problem of slow training and rendering and achieves the effect of optimizing the sampling distribution and improving the computing speed and performance of neural scene representation.

Pending Publication Date: 2022-06-24
NANJING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

However, implicit neural scene representation is very slow in both training and rendering.




Embodiment Construction

[0044] In order to make the purposes, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention are described clearly and completely below with reference to specific embodiments of the present application and the corresponding drawings. The described embodiments are some, but not all, of the embodiments of the present invention.

[0045] As shown in Figure 1, in this embodiment, the method for synthesizing virtual viewpoint images based on implicit neural scene representation includes the following steps:

[0046] S1. Acquire datasets used as training images and test images.

[0047] In this embodiment, the dataset includes a large-scale scene dataset captured by a camera. About 30 images need to be captured to meet the needs of constructing the neural scene representation, and the shooting views need to cover all corners of the scene; limits, including line, ar...
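After acquisition, the abstract describes preprocessing the training images by extracting and matching SIFT feature points across views. A minimal sketch of the descriptor-matching step with Lowe's ratio test, assuming descriptors have already been extracted (random arrays stand in here; in practice a SIFT implementation such as OpenCV's would produce them — this is an illustration, not the patent's implementation):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Match feature descriptors between two views using Lowe's ratio test.

    desc_a: (N, D) array of descriptors from image A.
    desc_b: (M, D) array of descriptors from image B.
    Returns a list of (i, j) index pairs of putative matches.
    """
    # Pairwise Euclidean distances between all descriptor pairs.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i in range(desc_a.shape[0]):
        order = np.argsort(dists[i])
        best, second = order[0], order[1]
        # Keep a match only if the best distance is clearly smaller
        # than the second-best distance (Lowe's ratio test).
        if dists[i, best] < ratio * dists[i, second]:
            matches.append((i, best))
    return matches

# Toy example: 4 descriptors per image, 8-dimensional, near-duplicates.
rng = np.random.default_rng(0)
desc_b = rng.normal(size=(4, 8))
desc_a = desc_b + rng.normal(scale=0.01, size=(4, 8))
print(match_descriptors(desc_a, desc_b))
```

The ratio test discards ambiguous matches, which matters here because the matched points later supply geometric supervision for the cross-view loss.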



Abstract

The invention discloses a method for synthesizing a virtual viewpoint image using implicit neural scene representation on the basis of a multi-view stereo cross-view loss, suitable for the field of computer vision. The method comprises the following steps: acquiring an image dataset for which a virtual viewpoint is to be generated; preprocessing the training image dataset, where feature points are extracted from and matched across the input training images using the SIFT feature matching algorithm; processing the training image data together with the extracted feature point information and inputting them into a multi-layer perceptron network for training; inputting test image data into the trained multi-layer perceptron network and obtaining a test rendered image through volume rendering; and generating a virtual viewpoint image based on the trained multi-layer perceptron network. This reduces the amount of data the neural network must fit during training of the scene representation, and, by performing centralized sampling in combination with image depth information, improves the operation speed and performance of the neural scene representation, producing a high-quality virtual viewpoint image.
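The two rendering-side ideas in the abstract, volume rendering of MLP outputs and depth-guided "centralized" sampling, can be sketched in isolation. This is a minimal numpy illustration of NeRF-style alpha compositing along one ray, with a hypothetical `depth_guided_samples` helper (the name and the half-uniform/half-Gaussian split are my assumptions, not the patent's scheme) that concentrates samples near an estimated surface depth:

```python
import numpy as np

def composite(densities, colors, deltas):
    """NeRF-style volume rendering along one ray.

    densities: (S,) non-negative volume densities at S samples.
    colors:    (S, 3) RGB color at each sample.
    deltas:    (S,) distances between consecutive samples.
    Returns the composited RGB for the ray.
    """
    alpha = 1.0 - np.exp(-densities * deltas)  # per-sample opacity
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * colors).sum(axis=0)

def depth_guided_samples(near, far, depth, n, sigma=0.05):
    """Illustrative 'centralized sampling': half the samples are uniform
    over [near, far], half are drawn near the estimated surface depth."""
    uniform = np.linspace(near, far, n // 2)
    focused = np.clip(
        np.random.default_rng(0).normal(depth, sigma, n - n // 2), near, far)
    return np.sort(np.concatenate([uniform, focused]))

# Toy ray: an opaque red surface at depth ~1.0 within a [0, 2] interval.
t = depth_guided_samples(0.0, 2.0, depth=1.0, n=64)
dens = np.where(np.abs(t - 1.0) < 0.05, 50.0, 0.0)  # density spike at surface
cols = np.tile([1.0, 0.0, 0.0], (t.size, 1))
deltas = np.append(np.diff(t), 1e10)
rgb = composite(dens, cols, deltas)
print(rgb)
```

Because half the samples land near the true surface, most of the compositing weight falls on informative points, which is the intuition behind combining depth information with sampling to speed up neural scene representation.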

Description

Technical Field

[0001] The invention relates to a method for realizing the synthesis and roaming of virtual viewpoint images using implicit neural scene representation on the basis of a multi-view stereo cross-view loss, and is suitable for the field of computer vision.

Background

[0002] With the development of science and technology and the continuous improvement of living standards, panoramic video, interactive video, free-viewpoint video and other new video forms different from traditional two-dimensional video have gradually entered the public's field of vision. Most current free-viewpoint video generation methods set up multiple cameras in the scene to shoot simultaneously, which is inefficient, and in a general environment it is impossible to place a camera at an arbitrary viewpoint in a large-scale scene. Therefore, synthesizing a virtual viewpoint image at an arbitrary position from a small number of input viewpoint images is a researc...

Claims


Application Information

IPC(8): H04N13/221; H04N13/293; H04N13/282; H04N13/111; H04N13/156; H04N13/15; H04N13/106; G06N3/04; G06N3/08
CPC: H04N13/221; H04N13/293; H04N13/282; H04N13/111; H04N13/156; H04N13/15; H04N13/106; G06N3/08; G06N3/045
Inventors: 霍智勇, 郭权
Owner: NANJING UNIV OF POSTS & TELECOMM