
Mouth-movement-identification-based video marshalling method

A video and lip-modeling technology, applied in character and pattern recognition, instruments, and computer components, that addresses problems such as poor adaptability and robustness and a poor viewing experience, with the effects of enhanced adaptability and robustness, improved responsiveness and watchability, and smooth, complete playback.

Inactive Publication Date: 2015-01-21
COMMUNICATION UNIVERSITY OF CHINA

AI Technical Summary

Problems solved by technology

However, they only use chroma-difference color feature vectors, which offer poor adaptability and robustness.
In addition, video editing imposes strict real-time requirements: even a slight delay in the output picture degrades the viewing experience.

Method used



Examples


Embodiment Construction

[0032] The present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0033] The present invention provides a lip segmentation algorithm based on a Fisher classifier in the HSV color space, together with a method that uses it for video arrangement; Figure 1 shows the overall flowchart.

[0034] In this embodiment, after the system starts, step S101 first uses the CCameraDS class (based on DirectShow) to collect original images, obtains the number of cameras, and allocates the corresponding memory for the system. If a camera is present, the method proceeds to step S102: the first camera is opened, a property-selection window pops up, and the video encoding and video compression rate are set. Otherwise, if the number of cameras is 0, an error is returned and the program terminates.

[0035] In step S103, the current video frame captured by the camera is first obtained, then a cvVideoWriter object is created and allocated ...



Abstract

Disclosed in the invention is a mouth-movement-identification-based video marshalling method. Based on the differing distributions of the hue (H), saturation (S), and value (V) components in the lip-color and skin-color regions of a color image, three color feature vectors are selected; the binary image produced by Fisher-classifier classification and threshold segmentation is filtered and region-connected; the extracted lip feature is matched against the animation-picture lip features in a material library; and a transition image between two frames is obtained by image interpolation, thereby realizing automatic video marshalling. Constructing the Fisher classifier from appropriately chosen color information in the HSV color space provides more information for lip-color and skin-color segmentation and enhances the reliability and adaptability of mouth-matching feature extraction in complex environments. Moreover, the image interpolation step generates a transition image between the two matched video frames, improving the responsiveness and watchability of the video marshalling and yielding smooth, complete video content.
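The abstract's core idea — a two-class Fisher classifier over HSV pixel features separating lip color from skin color — can be illustrated with a small, self-contained sketch. All sample pixel values below are hypothetical, the stdlib `colorsys` module stands in for the real RGB-to-HSV conversion, and the patent's post-processing (filtering, region connection) is omitted; this is a minimal sketch of the classification step only, not the patented implementation.

```python
import colorsys

# Hypothetical training pixels (R, G, B in 0..1); real training data would
# come from labelled lip and skin regions of face images.
LIP = [(0.75, 0.35, 0.40), (0.80, 0.30, 0.38), (0.70, 0.32, 0.36), (0.78, 0.40, 0.45)]
SKIN = [(0.90, 0.75, 0.65), (0.85, 0.70, 0.60), (0.92, 0.78, 0.70), (0.88, 0.72, 0.62)]

def to_hsv(rgb_samples):
    # Convert each (R, G, B) triple to an [H, S, V] feature vector.
    return [list(colorsys.rgb_to_hsv(*p)) for p in rgb_samples]

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(3)]

def scatter(vectors, m):
    # 3x3 within-class scatter: sum of outer products of centred samples.
    s = [[0.0] * 3 for _ in range(3)]
    for v in vectors:
        d = [v[i] - m[i] for i in range(3)]
        for i in range(3):
            for j in range(3):
                s[i][j] += d[i] * d[j]
    return s

def solve3(a, b):
    # Gaussian elimination with partial pivoting for the 3x3 system a x = b.
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

def train_fisher(lip_rgb, skin_rgb):
    # Fisher direction w solves Sw w = m1 - m2; threshold is the midpoint
    # of the projected class means.
    lip, skin = to_hsv(lip_rgb), to_hsv(skin_rgb)
    m1, m2 = mean(lip), mean(skin)
    sw = scatter(lip, m1)
    s2 = scatter(skin, m2)
    for i in range(3):
        for j in range(3):
            sw[i][j] += s2[i][j]
        sw[i][i] += 1e-6          # regularise in case Sw is near-singular
    w = solve3(sw, [m1[i] - m2[i] for i in range(3)])
    proj = lambda v: sum(w[i] * v[i] for i in range(3))
    thresh = (proj(m1) + proj(m2)) / 2.0
    lip_side = proj(m1) > thresh  # which side of the threshold is "lip"
    return w, thresh, lip_side

def is_lip(rgb, w, thresh, lip_side):
    v = colorsys.rgb_to_hsv(*rgb)
    p = sum(w[i] * v[i] for i in range(3))
    return (p > thresh) == lip_side

w, t, side = train_fisher(LIP, SKIN)
print(is_lip((0.76, 0.33, 0.39), w, t, side))   # lip-like pixel  -> True
print(is_lip((0.89, 0.74, 0.64), w, t, side))   # skin-like pixel -> False
```

Applying `is_lip` per pixel yields the binary image that the patent then filters and region-connects before extracting the mouth feature.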

Description

technical field
[0001] The invention relates to the fields of image processing and computer vision. Specifically, by segmenting the lips of a face and extracting matching features, the output images are rearranged so that the mouth movement in the output video is consistent with the actually detected mouth movement of the person.

Background technique
[0002] With the development of image processing and video editing technology, researchers have applied image segmentation to video picture editing to provide viewers with a more realistic and vivid viewing experience.
[0003] In animated videos, animated characters need to be highly consistent with real humans, whether in facial expressions, body movements, or vocalization. In particular, the mouth movements of animated characters when speaking also need to match those of real humans, rather than simply opening and closing. The traditional production method, taking Putonghua as an exa...
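The abstract describes generating a transition image between two matched video frames by image interpolation. The patent text here does not give the exact interpolation scheme, so the following is only a minimal linear cross-dissolve sketch, with grayscale frames represented as nested Python lists:

```python
def interpolate_frames(frame_a, frame_b, n_mid):
    """Linearly cross-dissolve between two equally sized grayscale frames
    (lists of rows of pixel values), returning the n_mid in-between frames
    that smooth the transition from frame_a to frame_b."""
    frames = []
    for k in range(1, n_mid + 1):
        t = k / (n_mid + 1)          # interpolation weight in (0, 1)
        frames.append([
            [(1.0 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)
        ])
    return frames

a = [[0, 0], [0, 0]]
b = [[100, 100], [100, 100]]
mid = interpolate_frames(a, b, 3)    # weights 0.25, 0.5, 0.75
print(mid[1])                        # halfway frame: [[50.0, 50.0], [50.0, 50.0]]
```

Inserting such in-between frames at each cut between matched mouth-shape pictures is what gives the output video the smooth, complete playback the abstract claims.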

Claims


Application Information

Patent Timeline
no application
Patent Type & Authority: Applications (China)
IPC (8): G06K9/00, G06K9/46
CPC: G06V40/162, G06V20/48
Inventor: 徐品, 蓝善祯, 张岳, 王爽, 张宜春
Owner COMMUNICATION UNIVERSITY OF CHINA