
A video jitter removing method based on a fusion motion model

A motion-model and video technology applied to television, color television, and color-television components. It addresses the problem that video de-shaking methods fail when the underlying 3D reconstruction fails, with the effect of optimizing the viewing experience.

Active Publication Date: 2019-05-07
苏州中科广视文化科技有限公司

Problems solved by technology

However, the 3D reconstruction that 3D motion-model estimation relies on can fail, and when it does, the video de-shaking method fails with it.




Embodiment

[0025] The video de-shaking method based on the fused motion model is carried out in the following steps:

[0026] 1. Obtain feature-point matches and optical flow between adjacent video frames:

[0027] Convert each image to grayscale and extract SIFT feature points. For each feature point, use the k-nearest-neighbor method to find the two most similar feature points (in descriptor Euclidean distance) in the adjacent frame. Compare the distances to the best and second-best candidates: if the best match is sufficiently closer than the second best (the distance ratio falls below a threshold), the match is considered correct and is retained.
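As a rough illustration of this matching step (not the patent's implementation; the descriptor values below are synthetic stand-ins, whereas in practice the 128-dimensional descriptors would come from a SIFT extractor), the k-nearest-neighbor search with a distance-ratio test can be sketched as:

```python
import math

def euclidean(d1, d2):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

def ratio_match(desc_a, desc_b, ratio=0.75):
    """For each descriptor in frame A, find its two nearest neighbors in
    frame B; keep the match only if the best candidate is clearly closer
    than the second best (distance ratio below the threshold)."""
    matches = []
    for i, da in enumerate(desc_a):
        # Sort all frame-B descriptors by distance; take the two nearest.
        dists = sorted((euclidean(da, db), j) for j, db in enumerate(desc_b))
        (d1, j1), (d2, _) = dists[0], dists[1]
        if d2 > 0 and d1 / d2 < ratio:
            matches.append((i, j1))  # (index in A, index in B)
    return matches

# Toy 2-D "descriptors": the third point in A is ambiguous and gets rejected.
desc_a = [[0.0, 0.0], [5.0, 5.0], [7.5, 7.5]]
desc_b = [[0.1, 0.0], [10.0, 10.0], [5.0, 5.1]]
print(ratio_match(desc_a, desc_b))  # → [(0, 0), (1, 2)]
```

The ratio test discards matches whose best and second-best candidates are nearly equidistant, since such matches are likely ambiguous.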

[0028] The Lucas-Kanade optical flow algorithm is used to estimate the motion m = (u, v) of each pixel between adjacent frames, where u is the displacement along the image x-axis and v is the displacement along the y-axis.
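A minimal single-window sketch of the Lucas-Kanade estimate is shown below (pure Python on hypothetical synthetic frames; a practical implementation would be pyramidal and run densely over the whole image). It accumulates image gradients over a window and solves the 2x2 normal equations for (u, v):

```python
def lucas_kanade_window(f0, f1, x0, y0, half=2):
    """Estimate the flow (u, v) at (x0, y0) from frame f0 to frame f1
    over a (2*half+1)^2 window, by solving the Lucas-Kanade system
    [sum Ix*Ix, sum Ix*Iy; sum Ix*Iy, sum Iy*Iy] [u; v] = -[sum Ix*It; sum Iy*It]."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for y in range(y0 - half, y0 + half + 1):
        for x in range(x0 - half, x0 + half + 1):
            ix = (f0[y][x + 1] - f0[y][x - 1]) / 2.0  # central difference in x
            iy = (f0[y + 1][x] - f0[y - 1][x]) / 2.0  # central difference in y
            it = f1[y][x] - f0[y][x]                  # temporal difference
            a11 += ix * ix; a12 += ix * iy; a22 += iy * iy
            b1 += ix * it;  b2 += iy * it
    det = a11 * a22 - a12 * a12
    if abs(det) < 1e-9:
        return None  # aperture problem: gradient structure is degenerate
    u = (-b1 * a22 + b2 * a12) / det
    v = (-b2 * a11 + b1 * a12) / det
    return u, v

# Synthetic test: a quadratic intensity surface shifted by 1 pixel in x.
f0 = [[float(x * x + y * y) for x in range(11)] for y in range(11)]
f1 = [[float((x - 1) ** 2 + y * y) for x in range(11)] for y in range(11)]
print(lucas_kanade_window(f0, f1, 5, 5))  # ~ (1.0, 0.0), up to linearization bias
```

Because the method linearizes the brightness-constancy constraint, the estimate is only exact for small displacements; large motions are the reason real implementations use image pyramids.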

[0029] 2. Perform 3D reconstruction according t...



Abstract

The invention discloses a video jitter removal method based on a fusion motion model, comprising the following steps: (1) calculate the optical flow between adjacent frames of a video, extract feature points, and compute the matching result; (2) perform three-dimensional reconstruction from the feature-point matches, recovering the three-dimensional camera pose and the three-dimensional point cloud of the scene; (3) build a grid-based two-dimensional motion model from the optical flow, describing the deformation relationship between the two frames; (4) smooth the motion estimate of the three-dimensional motion model and solve for the motion compensation of the grid model; (5) smooth the motion trajectory of the grid model and solve for motion compensation; and (6) render a stabilized image frame from the resulting motion compensation. By fusing the three-dimensional and two-dimensional motion models, the method improves the visual effect and robustness of the video de-shaking algorithm; the motion trajectory is smoothed with an optimization method, motion compensation is solved, the video content is stabilized, and the viewing experience is optimized.

Description

technical field

[0001] The invention relates to the fields of digital image processing and computer vision, and in particular to a motion compensation method that smooths motion trajectories and redraws video frames to obtain a stable, smooth visual result.

Background technique

[0002] With the development of the Internet and consumer electronics, video has become the main carrier for recording and sharing information, and the amount of video shot anytime, anywhere on mobile devices has exploded. Videos shot with handheld devices often lack the smooth lens trajectory of professional equipment, which degrades the viewing experience, so removing video jitter has become a research hotspot.

[0003] Digital image stabilization processes the video content algorithmically and consists of two parts: video motion estimation and motion smoothing. Motion estimation obtains the motion trajectory of the video...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N5/232; H04N5/14; H04N13/106; H04N13/275
Inventors: 李兆歆, 穆乐文, 王兆其
Owner 苏州中科广视文化科技有限公司