
Kinect depth video spatio-temporal union restoration method

A combined spatio-temporal depth-video restoration technology, applied in the field of 3D rendering. It addresses the problem that prior methods do not consider depth repair of reflective and dark areas, and achieves repairs that are easy to distinguish, a good overall repair effect, and a clearly visible hole-repair effect.

Inactive Publication Date: 2014-02-05
TONGJI UNIV

AI Technical Summary

Problems solved by technology

This prior method can effectively reduce optical noise and repair boundaries, but it does not consider depth repair of reflective and dark areas, and its result depends on the accuracy of motion estimation.

Method used



Examples


Embodiment Construction

[0022] The present invention is further described below with reference to the accompanying drawings and a specific example:

[0023] The present invention uses a depth video sequence and its corresponding color video sequence captured with a Kinect to evaluate the proposed method. The sequence consists of 100 frames at a resolution of 640×480. All examples use MATLAB 7 as the simulation experiment platform.

[0024] The flow of the present invention is shown in Figure 1: for the depth map of the first frame, only spatial inpainting is used. The color segmentation map of the corresponding color image guides the initial depth filling, and hole repair is then performed on darker color regions to further improve depth-map quality. For every depth map after the first frame, the motion area is extracted first and repaired spatially on its own, and then the depth value of the corresponding position of t...
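The spatial step described above, filling hole pixels using the depth of neighbors with similar colors, can be sketched as follows. This is a minimal Python illustration (the patent's experiments use MATLAB); the function name, the 3×3 neighborhood, and the color threshold are assumptions, not the patent's exact procedure.

```python
import numpy as np

def fill_depth_spatial(depth, color, color_thresh=30.0):
    """Fill zero-valued (hole) depth pixels with the depth of the
    neighboring pixel whose color is most similar, provided the color
    difference is below a threshold.

    Sketch only: a 3x3 neighborhood and Euclidean RGB distance stand
    in for the segmentation-guided filling described in the text.
    """
    filled = depth.astype(np.float64).copy()
    h, w = depth.shape
    for y, x in np.argwhere(depth == 0):
        best_d, best_diff = 0.0, np.inf
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and depth[ny, nx] > 0:
                    diff = np.linalg.norm(
                        color[y, x].astype(float) - color[ny, nx].astype(float))
                    if diff < best_diff and diff < color_thresh:
                        best_diff, best_d = diff, depth[ny, nx]
        if best_d > 0:
            filled[y, x] = best_d
    return filled
```

A pixel whose neighborhood contains no valid, color-similar depth (as in the dark areas the patent discusses) is left unfilled here, which is exactly the failure case the dark-area repair step is meant to handle.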



Abstract

Provided is a Kinect depth video spatio-temporal union restoration method. Based on the assumption that neighborhood pixels with similar colors should have similar depth values, a color segmentation image corresponding to the color image is used to guide depth filling of the first depth image and of the motion areas extracted from all subsequent depth images. Because dark-colored areas without valid depth values can cause this color-guided filling to fail, the dark areas are detected first, and valid depth values within the same dark area are used to repair the hole regions. For the static areas of the depth video captured by the Kinect, hole pixels in the current depth image are filled with the depth values at the corresponding positions of the previous depth image. When depth-image-based rendering is used to draw a virtual viewpoint, the image quality of the virtual right view obtained with the restored depth is clearly better than that obtained with the original depth, so the method can be applied to 3D rendering.
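The temporal step in the abstract, filling static-area holes from the co-located pixel of the previous frame, is simple enough to sketch directly. The function name and the boolean motion mask (True = moving pixel, produced by an earlier motion-extraction step) are illustrative assumptions:

```python
import numpy as np

def fill_depth_temporal(depth_cur, depth_prev, motion_mask):
    """For pixels outside the motion area, fill holes (zeros) in the
    current depth frame with the co-located depth of the previous frame.

    Moving pixels are deliberately left untouched: their previous-frame
    depth may belong to a different surface, so they are handled by the
    spatial repair instead.
    """
    filled = depth_cur.copy()
    static_holes = (depth_cur == 0) & (~motion_mask)
    filled[static_holes] = depth_prev[static_holes]
    return filled
```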

Description

Technical field

[0001] The invention relates to the technical field of image/video processing and can be applied to 3D rendering.

Technical background

[0002] Three-dimensional stereoscopic TV has long been expected to bring a more natural and lifelike visual entertainment experience. With the development of stereoscopic display technology and video processing technology, 3D video has become a research hotspot in recent years. At present, there are two main schemes for realizing a stereoscopic video system: one is the multi-view scheme, which acquires the 3D scene with multiple camera arrays and plays it on a stereoscopic display; the other is the "texture + depth" scheme, which uses a color texture video and a depth video to describe the color texture information and depth information of the stereoscopic scene. Combining the two kinds of video information, depth image based rendering (DIBR) technology is used to draw the virtual viewpoint, and finally the syn...
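The DIBR step mentioned above can be illustrated with a toy forward warp: each color pixel is shifted horizontally by a disparity inversely proportional to its depth (disparity ≈ f·B/Z for focal length f and baseline B). The function name, the combined `f_times_b` constant, and the naive warping without occlusion handling are all simplifying assumptions for illustration, not the patent's rendering pipeline:

```python
import numpy as np

def dibr_right_view(color, depth, f_times_b=100.0):
    """Render a crude virtual right view by shifting each pixel left
    by disparity = f*B / Z. Pixels with zero depth (holes) are skipped,
    which is why depth restoration directly improves the rendered view.
    """
    h, w = depth.shape
    out = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            z = depth[y, x]
            if z > 0:
                nx = x - int(round(f_times_b / z))
                if 0 <= nx < w:
                    out[y, nx] = color[y, x]
    return out
```

Any hole in the depth map leaves a corresponding gap in the virtual view, which is the motivation for the restoration method described in this document.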

Claims


Application Information

Patent Type & Authority Applications(China)
IPC(8): H04N15/00
Inventor 张冬冬姚烨刘典陈艳毓臧笛
Owner TONGJI UNIV