
Multi-language speech recognition system

A multi-language speech recognition technology, applied in the field of multi-language speech recognition systems, which can solve the problems that many persons cannot or will not use conventional I / O devices and that the “INTERNET experience” of users has been limited to non-voice-based input / output devices, to achieve an accurate best response and facilitate query recognition

Inactive Publication Date: 2005-06-02
NUANCE COMM INC
99 Cites · 297 Cited by
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Benefits of technology

[0028] A primary object of the present invention is to provide a word and phrase recognition system that is flexibly and optimally distributed across a client / platform computing architecture, so that improved accuracy, speed and uniformity can be achieved for a wide group of users;
[0047] Computer-assisted instruction environments often require the assistance of mentors or live teachers to answer questions from students. This assistance often takes the form of organizing a separate pre-arranged forum or meeting time that is set aside for chat sessions or live call-in sessions so that at a scheduled time answers to questions may be provided. Because of the time immediacy and the on-demand or asynchronous nature of on-line training where a student may log on and take instruction at any time and at any location, it is important that answers to questions be provided in a timely and cost-effective manner so that the user or student can derive the maximum benefit from the material presented.

Problems solved by technology

Until now, however, the INTERNET “experience” for users has been limited mostly to non-voice based input / output devices, such as keyboards, intelligent electronic pads, mice, trackballs, printers, monitors, etc.
This presents somewhat of a bottleneck for interacting over the WWW for a variety of reasons.
First, there is the issue of familiarity.
In addition, many persons cannot or will not, because of physical or psychological barriers, use any of the aforementioned conventional I / O devices.
For example, many older persons cannot easily read the text presented on WWW pages, or understand the layout / hierarchy of menus, or manipulate a mouse to make finely coordinated movements to indicate their selections.
Many others are intimidated by the look and complexity of computer systems, WWW pages, etc., and therefore do not attempt to use online services for this reason as well.
To date, however, there are very few systems, if any, that permit this type of voice-based interaction, and those that do are very limited.
For example, various commercial programs sold by IBM (VIAVOICE™) and Kurzweil (DRAGON™) permit some user control of the interface (opening, closing files) and searching (by using previously trained URLs) but they do not present a flexible solution that can be used by a number of users across multiple cultures and without time consuming voice training.
Another issue presented by the lack of voice-based systems is efficiency.
While employing live operators to field user queries is very advantageous (for the reasons mentioned above), it is also extremely costly and inefficient, because a real person must be employed to handle such queries.
This presents a practical limit that results in long wait times for responses or high labor overheads.
In a similar context, while remote learning has become an increasingly popular option for many students, it is practically impossible for an instructor to be able to field questions from more than one person at a time.
Even then, such interaction usually takes place for only a limited period of time because of other instructor time constraints.
To date, however, there is no practical way for students to continue a human-like question and answer type dialog after the learning session is over, or without the presence of the instructor to personally address such queries.
While a form of this functionality is used by some websites to communicate information to visitors, it is not performed in a real-time, interactive question-answer dialog fashion, so its effectiveness and usefulness are limited.
While HMM-based speech recognition yields very good results, contemporary variations of this technique cannot guarantee a word accuracy of exactly and consistently 100%, as will be required for WWW applications under all possible user and environment conditions.
Thus, although speech recognition technology has been available for several years, and has improved significantly, the technical requirements have placed severe restrictions on the specifications for the speech recognition accuracy that is required for an application that combines speech recognition and natural language processing to work satisfactorily.
Because spontaneous speech contains many surface phenomena, such as disfluencies (hesitations, repairs and restarts) and discourse markers such as ‘well’, which cannot be handled by the typical speech recognizer, it is a major source of the large gap that separates speech recognition and natural language processing technologies.
Another problem is the absence of any marked punctuation available for segmenting the speech input into meaningful units such as utterances; apart from silence between utterances, most continuous speech recognition systems produce only a raw sequence of words.
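As a concrete illustration of the segmentation problem just described, the following sketch (not from the patent; the word timestamps and gap threshold are invented for illustration) splits a raw recognizer word stream into utterance-like units using inter-word silence gaps:

```python
# Hypothetical sketch: segmenting a raw recognizer word stream into
# utterance-like units using inter-word silence gaps, since continuous
# recognizers emit no punctuation. Timestamps and threshold are illustrative.

def segment_by_silence(words, gap_threshold=0.6):
    """words: list of (token, start_sec, end_sec); returns list of utterances."""
    utterances, current = [], []
    prev_end = None
    for token, start, end in words:
        # a silence gap longer than the threshold closes the current utterance
        if prev_end is not None and start - prev_end > gap_threshold:
            utterances.append(current)
            current = []
        current.append(token)
        prev_end = end
    if current:
        utterances.append(current)
    return utterances

stream = [("what", 0.0, 0.2), ("is", 0.25, 0.35), ("dns", 0.4, 0.7),
          ("well", 1.6, 1.8), ("i", 1.85, 1.9), ("mean", 1.95, 2.2)]
print(segment_by_silence(stream))  # → [['what', 'is', 'dns'], ['well', 'i', 'mean']]
```

Real systems would combine such pause cues with prosodic and language-model evidence; a fixed threshold alone is far too crude for spontaneous speech.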
Second, most of the very reliable voice recognition systems are speaker-dependent, requiring that the interface be “trained” with the user's voice, which takes a lot of time, and is thus very undesirable from the perspective of a WWW environment, where a user may interact only a few times with a particular website.
Furthermore, speaker-dependent systems usually require a large user dictionary (one for each unique user) which reduces the speed of recognition.
This makes it much harder to implement a real-time dialog interface with satisfactory response capability (i.e., something that mirrors normal conversation—on the order of 3-5 seconds is probably ideal).
While most of these offerings are adequate for dictation and other transcribing applications, they are woefully inadequate for applications such as natural language query systems (NLQS), where the word error rate must be close to 0%.
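The word error rate referred to here is conventionally computed as the word-level edit distance between a reference transcript and the recognizer's hypothesis, divided by the reference length. A minimal sketch of that standard metric (the conventional definition, not a procedure taken from the patent):

```python
# Standard word error rate (WER): word-level edit distance between a
# reference transcript and a hypothesis, divided by the reference length.

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[-1][-1] / len(ref)

print(wer("what is the word error rate", "what is a word error rate"))  # 1 sub / 6 words
```

Dictation products of the era were considered usable at several percent WER; a query system that must map each utterance to exactly one answer tolerates far less.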
In addition, these offerings require long training times and are typically not client-server configurations.
Another significant problem faced in a distributed voice-based system is a lack of uniformity / control in the speech recognition process.
Thus, from the server side perspective, it is not easy to assure uniform treatment of all users accessing a voice-enabled web page, since such users may have significantly disparate word recognition and error rate performances.
While a prior art reference to Gould et al.—U.S. Pat. No. 5,915,236—discusses generally the notion of tailoring a recognition process to a set of available computational resources, it does not address or attempt to solve the issue of how to optimize resources in a distributed environment such as a client-server model.
This reference therefore does not address the issue of how to ensure adequate performance for a very thin client platform.
Moreover, it is difficult to determine how, if at all, the system can perform real-time word recognition, and there is no meaningful description of how to integrate the system with a natural language processor.
Also, the streaming of the acoustic parameters does not appear to be implemented in real-time as it can only take place after silence is detected.
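The latency point above can be illustrated with a toy contrast between endpoint-triggered transmission (send nothing until silence is detected) and fixed-size chunk streaming (the frame values and chunk size below are invented for illustration):

```python
# Illustrative contrast between (a) buffering acoustic frames until an
# endpoint (silence) is detected before transmitting, and (b) streaming
# fixed-size chunks as they are produced. Frame values are made up.

def send_after_endpoint(frames, is_silence):
    """Transmit everything only once a silent frame is seen."""
    buffered = []
    for f in frames:
        if is_silence(f):
            yield list(buffered)   # single late transmission
            buffered.clear()
        else:
            buffered.append(f)

def stream_chunks(frames, chunk_size=2):
    """Transmit every chunk_size frames, regardless of silence."""
    chunk = []
    for f in frames:
        chunk.append(f)
        if len(chunk) == chunk_size:
            yield list(chunk)
            chunk.clear()
    if chunk:
        yield chunk

frames = [3, 5, 4, 0, 6, 2]          # 0 stands in for a silent frame
print(list(send_after_endpoint(frames, lambda f: f == 0)))  # → [[3, 5, 4]]
print(list(stream_chunks(frames)))   # → [[3, 5], [4, 0], [6, 2]]
```

With endpoint-triggered sending, the server sees nothing until the user stops speaking; chunked streaming lets server-side recognition begin while the utterance is still in progress.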


Embodiment Construction

Overview

[0075] As alluded to above, the present invention allows a user to ask a question in a natural language such as English, French, German, Spanish or Japanese at a client computing system (which can be as simple as a personal digital assistant or cell-phone, or as sophisticated as a high-end desktop PC) and receive an appropriate answer from a remote server, also in his or her native natural language. As such, the embodiment of the invention shown in FIG. 1 is beneficially used in what can be generally described as a Natural Language Query System (NLQS) 100, which is configured to interact on a real-time basis to give a human-like dialog capability / experience for e-commerce, e-support, and e-learning applications.

[0076] The processing for NLQS 100 is generally distributed across a client side system 150, a data link 160, and a server-side system 180. These components are well known in the art, and in a preferred embodiment include a personal computer system 150, an INTERNET ...
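A minimal sketch of the client/server split described in paragraphs [0075]-[0076] might look like the following; the function names and the answer table are hypothetical stand-ins for illustration, not taken from the patent:

```python
# Minimal sketch of the distributed NLQS split: the thin client does light
# front-end work on the captured query, and the server does the heavy
# recognition and answer lookup. All names and values here are hypothetical.

def client_front_end(audio_samples):
    # stand-in for client-side feature extraction (e.g., scaling raw samples)
    return [round(s / 10.0, 2) for s in audio_samples]

def server_recognize_and_answer(features, language="en"):
    # stand-in for server-side recognition plus natural-language query matching
    answer_table = {
        "en": "The answer, in English.",
        "fr": "La réponse, en français.",
    }
    return answer_table.get(language, answer_table["en"])

features = client_front_end([12, 48, 33])
print(server_recognize_and_answer(features, language="fr"))
```

The design point is that the computationally cheap steps stay on the client while the resource-hungry recognition and query matching run server-side, so even a thin client gets uniform recognition quality.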



Abstract

A speech recognition system includes distributed processing across a client and server for recognizing a spoken query by a user. A number of different speech models for different natural languages are used to support and detect the natural language spoken by a user. In some implementations an interactive electronic agent responds in the user's native language to facilitate a real-time, human-like dialogue.
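The abstract's idea of using several per-language speech models to detect the spoken language can be sketched, in highly simplified form, as scoring the input against one model per language; the tiny unigram "models" below are invented purely for illustration:

```python
# Toy illustration of language detection via per-language model scoring:
# score the token sequence under each language's model and pick the best.
# The unigram probability tables are invented for illustration only.
import math

MODELS = {
    "en": {"what": 0.3, "is": 0.3, "the": 0.4},
    "fr": {"quelle": 0.3, "est": 0.3, "la": 0.4},
}

def detect_language(tokens, floor=1e-6):
    # log-probability of the tokens under one language's unigram model,
    # with a small floor for out-of-vocabulary tokens
    def score(model):
        return sum(math.log(model.get(t, floor)) for t in tokens)
    return max(MODELS, key=lambda lang: score(MODELS[lang]))

print(detect_language(["quelle", "est", "la"]))  # → fr
```

A real system would score full acoustic models (e.g., per-language HMMs) rather than word unigrams, but the selection principle is the same: run the competing models and keep the language whose model explains the input best.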

Description

RELATED APPLICATIONS [0001] The present application claims priority to and is a continuation of Ser. No. 10 / 684,357 filed Oct. 10, 2003, which in turn is a continuation of Ser. No. 09 / 439,145 filed Nov. 12, 1999 (now U.S. Pat. No. 6,633,846). Both applications are hereby incorporated by reference herein.

FIELD OF THE INVENTION [0002] The invention relates to a system and an interactive method for responding to speech-based user inputs and queries presented over a distributed network such as the INTERNET or a local intranet. This interactive system, when implemented over the World-Wide Web (WWW) services of the INTERNET, functions so that a client or user can ask a question in a natural language such as English, French, German, Spanish or Japanese and receive the appropriate answer at his or her computer or accessory, also in his or her native natural language. The system has particular applicability to such applications as remote learning, e-commerce, technical e-support services, INTERNE...

Claims


Application Information

Patent Type & Authority: Application (United States)
IPC (8): G06F17/30; G06F40/00; G10L15/14; G10L15/28
CPC: G06F17/3043; G10L15/005; G10L15/142; G10L15/183; G10L17/22; G10L15/30; Y10S707/99935; G06F17/289; G10L15/18; G10L15/22; G06F16/24522; G06F40/58
Inventors: BENNETT, IAN M.; BABU, BANDI RAMESH; MORKHANDIKAR, KISHOR; GURURAJ, PALLAKI
Owner NUANCE COMM INC