
RGB/depth camera for improving speech recognition

A speech recognition and phoneme technology, applied to speech recognition, user/computer interaction input/output, character and pattern recognition, etc. It addresses the problem that prior methods increase the complexity and delay of speech recognition, with the effect of simplifying the speech recognition process and reducing processing time.

Status: Inactive · Publication Date: 2012-01-11
MICROSOFT TECH LICENSING LLC
Cites: 16 · Cited by: 28

AI Technical Summary

Problems solved by technology

However, these methods increase the complexity and delay of speech recognition.



Examples


Embodiment Construction

[0024] Reference will now be made to the attached Figures 1A-12 to describe various embodiments of the present technology, which generally relate to systems and methods for facilitating speech recognition by processing visual speech cues. These visual cues may include the position of the lips, tongue and/or teeth during an utterance. Although some phonemes are difficult to distinguish from an audio perspective, the lips, tongue and/or teeth form a distinct, characteristic position for each phoneme. These positions can be captured in the image data and analyzed against cataloged rules to identify specific phonemes from the positions of the lips, tongue and/or teeth.
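Where the paragraph above describes matching captured lip, tongue and teeth positions against cataloged rules, a minimal Python sketch may help illustrate the idea. The MouthPose fields, thresholds and viseme labels below are illustrative assumptions, not values taken from the patent.

```python
# Sketch only: matching measured mouth geometry against a small catalog of
# per-viseme rules. Field names, thresholds and labels are hypothetical.
from dataclasses import dataclass

@dataclass
class MouthPose:
    lip_opening: float     # vertical lip separation, normalized to face height
    lip_width: float       # horizontal lip stretch, normalized
    teeth_visible: bool    # whether the upper teeth are exposed
    tongue_visible: bool   # whether the tongue tip is visible between the teeth

# Cataloged rules: each viseme (visual phoneme class) maps to a predicate
# over the observed mouth pose.
VISEME_RULES = {
    "p/b/m": lambda m: m.lip_opening < 0.05,                       # lips closed
    "f/v":   lambda m: m.teeth_visible and m.lip_opening < 0.15,   # teeth on lower lip
    "th":    lambda m: m.tongue_visible,                           # tongue between teeth
    "aa":    lambda m: m.lip_opening > 0.35,                       # wide-open jaw
    "ee":    lambda m: m.lip_width > 0.55 and m.lip_opening < 0.2, # spread lips
}

def classify_viseme(pose: MouthPose) -> str | None:
    """Return the first cataloged viseme whose rule matches, else None."""
    for viseme, rule in VISEME_RULES.items():
        if rule(pose):
            return viseme
    return None

if __name__ == "__main__":
    observed = MouthPose(lip_opening=0.04, lip_width=0.4,
                         teeth_visible=False, tongue_visible=False)
    print(classify_viseme(observed))  # -> "p/b/m"
```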

[0025] In one embodiment, once a frame of data is captured by the image capture device, the system identifies the speaker and the speaker's location. The speaker position may be determined from the image and/or from audio position data (such as that generated by a typical microphone array). The system then fo...
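As a rough illustration of the speaker-selection step described above, the sketch below combines a microphone-array direction estimate with detected faces in the RGB/depth frame to pick the active speaker and crop a mouth region of interest. The DetectedFace fields and crop geometry are assumptions made for the example, not structures defined in the patent.

```python
# Sketch only: pick the detected face closest to the audio direction of
# arrival, then crop the speaker's mouth region for visual cue processing.
from dataclasses import dataclass

@dataclass
class DetectedFace:
    azimuth_deg: float  # horizontal angle of the face in camera space
    mouth_box: tuple    # (x, y, width, height) in pixels

def select_speaker(faces: list[DetectedFace], audio_azimuth_deg: float) -> DetectedFace:
    """Return the detected face whose azimuth best matches the audio estimate."""
    return min(faces, key=lambda f: abs(f.azimuth_deg - audio_azimuth_deg))

def crop_mouth(frame, face: DetectedFace):
    """Crop the speaker's mouth region from an H x W x C image array."""
    x, y, w, h = face.mouth_box
    return frame[y:y + h, x:x + w]
```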



Abstract

The present invention relates to an RGB/depth camera for improving speech recognition. A system and method are disclosed for facilitating speech recognition through the processing of visual speech cues. These speech cues may include the position of the lips, tongue and/or teeth during speech. In one embodiment, upon capture of a frame of data by an image capture device, the system identifies a speaker and a location of the speaker. The system then focuses on the speaker to get a clear image of the speaker's mouth. The system includes a visual speech cues engine which operates to recognize and distinguish sounds based on the captured position of the speaker's lips, tongue and/or teeth. The visual speech cues data may be synchronized with the audio data to ensure the visual speech cues engine is processing image data which corresponds to the correct audio data.
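The abstract's note on synchronizing visual speech cue data with audio data could be realized, for example, by timestamp matching. The sketch below assumes each captured frame and each audio chunk carries a capture timestamp in seconds; the tolerance value is an arbitrary illustrative choice, not taken from the patent.

```python
# Sketch only: pair each image frame with the audio chunk nearest in time.
def pair_visual_and_audio(frames, audio_chunks, tolerance_s=0.02):
    """Pair frames with audio chunks by timestamp.

    `frames` and `audio_chunks` are iterables of (timestamp_s, payload).
    Returns (frame_payload, audio_payload) pairs, skipping frames with no
    audio within the tolerance window.
    """
    audio_chunks = sorted(audio_chunks, key=lambda c: c[0])
    pairs = []
    for t_frame, image in frames:
        t_audio, samples = min(audio_chunks, key=lambda c: abs(c[0] - t_frame))
        if abs(t_audio - t_frame) <= tolerance_s:
            pairs.append((image, samples))
    return pairs
```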

Description

Technical Field

[0001] The present invention relates to systems and methods for facilitating speech recognition by processing visual speech cues.

Background Technique

[0002] In the past, computing applications, such as computer games and multimedia applications, used controllers, remote controls, keyboards, mice, etc. to allow users to manipulate game characters or other aspects of the application. More recently, computer games and multimedia applications have begun to use cameras and software gesture recognition engines to provide natural user interfaces ("NUIs"). Using the NUI, user gestures are detected, interpreted, and used to control game characters or other aspects of the application.

[0003] In addition to gestures, another aspect of NUI systems is the ability to receive and interpret audio questions and commands. Speech recognition systems that rely solely on audio are known and do an acceptable job with most audio. However, certain phonemes such as sounds suc...


Application Information

IPC (8): G06K9/00, G10L15/24, G06F3/01
CPC: G10L15/25
Inventor: J. A. Tardif
Owner: MICROSOFT TECH LICENSING LLC