
Voice detection method and device, prediction model training method and device, equipment and medium

A prediction model and speech detection technology, applied in speech analysis, speech recognition, and character and pattern recognition, which can solve problems such as detecting too late or too early that the voice interaction has ended, missed detection of speech end points, and poor accuracy of detected speech end points, and achieves the effects of avoiding premature truncation, avoiding interference, and avoiding misinterpretation of the user's intent.

Active Publication Date: 2021-03-26
HUAWEI TECH CO LTD
Cites 12 · Cited by 11

AI Technical Summary

Problems solved by technology

[0004] When the above method is used to detect the speech end point, strong background noise causes the detected tail silence duration of the audio signal to be shorter than the true tail silence duration, so the speech end point is easily missed and the end of the voice interaction is detected too late. Conversely, when the user pauses while speaking, the detected tail silence duration exceeds the true tail silence duration, so the voice interaction is detected as ended prematurely.
It can be seen that the accuracy of the speech end point detected by this method is poor.
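
To make the failure mode concrete, the sketch below implements the kind of energy-and-tail-silence heuristic described above. It is only an illustration of the background-art approach: the frame length, energy threshold, and silence threshold are assumed values, not figures taken from the patent.

```python
# Illustrative sketch of a tail-silence end point detector (background art).
# Frame length, energy threshold and silence threshold are assumptions.
import numpy as np

FRAME_MS = 20            # length of each analysis frame in milliseconds
ENERGY_THRESHOLD = 1e-3  # frames with mean energy below this count as silence
TAIL_SILENCE_MS = 600    # tail silence required before declaring an end point

def is_speech_end_point(frames: np.ndarray) -> bool:
    """frames: array of shape (num_frames, samples_per_frame), newest frame last.

    Walks backwards from the newest frame, accumulating the run of silent
    frames, and declares an end point once that run is long enough. Strong
    background noise keeps the measured energy high, so the silence run stays
    short and the end point is detected late or missed; a mid-utterance pause
    lets the run reach the threshold early, truncating the command.
    """
    tail_silence_ms = 0
    for frame in frames[::-1]:
        if np.mean(frame.astype(np.float64) ** 2) < ENERGY_THRESHOLD:
            tail_silence_ms += FRAME_MS
        else:
            break
    return tail_silence_ms >= TAIL_SILENCE_MS
```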

Method used




Embodiment Construction

[0088] To make the purpose, technical solutions, and advantages of the present application clearer, the implementations of the present application are described in further detail below with reference to the accompanying drawings.

[0089] In this application, the term "at least one" means one or more, and the term "multiple" means two or more; for example, multiple second messages means two or more second messages. The terms "system" and "network" are often used interchangeably herein.

[0090] In this application, the terms "first" and "second" are used to distinguish identical or similar items whose functions and roles are substantially the same. It should be understood that "first", "second", and "nth" imply no logical or temporal dependency and place no limit on quantity or order of execution.

[0091] It should be understood that in each embodiment of the present application...


Abstract

The invention discloses a voice detection method and device, a prediction model training method and device, equipment, and a medium, and belongs to the technical field of voice interaction. The invention relates to a multi-modal voice end point detection method comprising the following steps: a captured face image is fed through a model to predict whether the user intends to continue speaking, and the prediction result is combined with the collected audio signal to judge whether a voice end point has been reached. Features of the visual modality from the face image are thus fused with acoustic features for detection, so that even when background noise is very strong or the user pauses while speaking, the face image allows an accurate judgment of whether the voice signal has reached a voice end point, avoiding the interference caused by background noise and speaking pauses. The problem of detecting the end of the voice interaction too late or too early due to such interference is therefore avoided, and the accuracy of voice end point detection is improved.
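
As a rough illustration of the fusion idea in the abstract, the sketch below combines an acoustic silence estimate with a visual prediction of whether the user intends to keep speaking, and only declares an end point when both modalities agree. The model interfaces, the probability thresholds, and the decision rule are assumptions for demonstration, not the architecture claimed in the patent.

```python
# Illustrative fusion of acoustic and visual cues for end point detection.
# `acoustic_model` and `face_model` stand in for trained predictors; their
# interfaces and the 0.5 thresholds are assumptions, not the patented design.
from dataclasses import dataclass

@dataclass
class EndPointDecision:
    is_end_point: bool
    silence_prob: float    # acoustic estimate that the audio tail is silence
    continue_prob: float   # visual estimate that the user intends to keep speaking

def detect_end_point(audio_frames, face_image, acoustic_model, face_model) -> EndPointDecision:
    silence_prob = acoustic_model.predict_silence(audio_frames)
    continue_prob = face_model.predict_continue_speaking(face_image)
    # Declare an end point only when the audio looks silent AND the face image
    # suggests the user does not intend to continue speaking, so that noise or
    # a pause in one modality cannot trigger a wrong decision on its own.
    is_end = silence_prob > 0.5 and continue_prob < 0.5
    return EndPointDecision(is_end, silence_prob, continue_prob)
```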

Description

Technical field
[0001] The present application relates to the field of voice interaction technology, and in particular to a voice detection method, a prediction model training method, a device, equipment, and a medium.
Background technique
[0002] In voice interaction technology, in order to realize voice-based human-computer interaction, the voice start point and the voice end point in a piece of speech are usually recognized, and the part between the voice start point and the voice end point is intercepted as a voice instruction that tells the device to perform the corresponding operation. The voice start point is usually triggered by the user's active operation and is easily determined, for example from the collection time of a wake-up word or the moment a voice interaction activation option is triggered, whereas the voice end point must be obtained by the device through analysis and processing of the speech. It can be ...
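
To make the background concrete, the snippet below shows one way the segment between a wake-word-triggered start point and a detected end point could be cut out of the captured audio as the voice instruction; the sample rate and function names are illustrative assumptions, not part of the application.

```python
# Illustrative only: extracting the voice instruction between the start point
# (e.g. the wake-word collection time) and the detected end point.
import numpy as np

SAMPLE_RATE = 16_000  # assumed sampling rate in Hz

def extract_instruction(buffer: np.ndarray, start_s: float, end_s: float) -> np.ndarray:
    """Return the samples between the speech start point and end point."""
    start_idx = int(start_s * SAMPLE_RATE)
    end_idx = int(end_s * SAMPLE_RATE)
    return buffer[start_idx:end_idx]

# Example: instruction = extract_instruction(mic_buffer, start_s=0.0, end_s=2.4)
```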

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G10L15/22, G10L15/25, G10L15/26, G10L25/87, G06K9/00, G06K9/62
CPC: G10L15/22, G10L15/26, G10L25/87, G10L15/25, G10L2015/223, G06V40/165, G06V40/174, G06F18/214, G06F3/167, G06V10/82, G06F3/165, G06V40/172, G10L15/05
Inventor: 高益, 聂为然, 黄佑佳
Owner HUAWEI TECH CO LTD