
Conference voice data processing method and system

A technology relating to voice data and processing methods, applied in speech analysis, speech recognition, instruments, etc., which can solve problems such as speakers being confused with one another, inconvenience, and wasted human resources and time costs.

Active Publication Date: 2021-11-26
Owner: 深圳极联信息技术股份有限公司

AI Technical Summary

Benefits of technology

This invention describes a method for processing voice data captured during conferences. Acquisition modules placed near different participants collect each participant's identity information and initial voiceprint features along with that participant's speech. When several modules capture the same speech content, the system compares the sound intensity of each capture and selects the loudest one. Voice feature models built from the identity information and initial voiceprints are then used to match the selected speech to a participant; if the matched identity disagrees with the identity enrolled at the capturing module, the system instead selects the same speech content from the module belonging to the matched participant. This improves the accuracy of attributing speech to the correct participant and reduces errors caused by selecting the wrong voice.

Problems solved by technology

The technical problem addressed is the efficiency and accuracy of recording meetings with multiple participants: manually attributing speech to speakers wastes human resources and time, and speakers are easily confused with one another, so the invention aims to keep the recorded conference content accurate throughout a conversation without that manual effort.


Examples


Embodiment 1

[0035] A method for processing conference voice data, comprising the following steps:

[0036] S110: A plurality of acquisition modules 201 are arranged near different participants; each acquisition module 201 collects the identity information and initial voiceprint features of its corresponding participant, so as to collect that participant's speech;

[0037] S120: Identify and judge whether the speech contents of the plurality of speeches are the same; if they are the same, analyze the sound intensities of the plurality of speech contents and select the speech content with the highest sound intensity;
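Step S120 amounts to picking, among captures whose recognized text agrees, the one with the greatest acoustic energy. A minimal sketch is below; the RMS energy measure and the function names are illustrative assumptions, not taken from the patent:

```python
import math

def rms_intensity(samples):
    """Root-mean-square energy of a list of PCM samples (a common loudness proxy)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def select_loudest(captures):
    """captures: {module_id: samples} whose recognized text is identical.
    Return the module id whose capture has the highest sound intensity."""
    return max(captures, key=lambda m: rms_intensity(captures[m]))

# Toy example: mic_b sits closest to the speaker, so its signal is strongest.
captures = {
    "mic_a": [0.10, -0.10, 0.20],
    "mic_b": [0.50, -0.60, 0.40],
    "mic_c": [0.05, -0.02, 0.03],
}
print(select_loudest(captures))  # mic_b
```

Any monotone loudness measure (peak amplitude, A-weighted energy) would serve the same selection role.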

[0038] S130: Establish voice feature models of the plurality of participants according to the above identity information and initial voiceprint features, and input the selected speech into the voice feature models to obtain an identity matching result...
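Step S130 matches the selected speech against per-participant voice feature models. A real system would first extract voiceprint embeddings from the audio; the sketch below assumes the features are already vectors and uses cosine similarity as a hypothetical matching score (the patent does not specify the model or the score):

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_identity(models, utterance):
    """models: {identity: enrolled voiceprint vector}; utterance: feature
    vector of the selected speech. Return the best-matching identity."""
    return max(models, key=lambda ident: cosine_similarity(models[ident], utterance))

models = {"alice": [1.0, 0.0, 0.2], "bob": [0.1, 1.0, 0.0]}
print(match_identity(models, [0.9, 0.1, 0.25]))  # alice
```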

Embodiment 2

[0051] See Figure 2, a schematic diagram of a conference voice data processing system 200 provided by an embodiment of the present invention.

[0052] A conference voice data processing system 200 comprises an error correction module 202, a confirmation module 204, an identity comparison module 203, and a plurality of acquisition modules 201. The acquisition modules 201 are arranged near different participants; each acquisition module 201 collects the identity information and initial voiceprint features of its corresponding participant so as to collect that participant's speech. The error correction module 202 identifies and judges whether the voice contents of the plurality of speeches are the same and, if so, analyzes the sound intensities of the speech contents and selects the speech content with the highest sound intensity...
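One processing round through these modules could be wired as below. This is a hypothetical sketch: the `texts`, `intensity`, and `match` inputs stand in for the error correction module's recognizer, its loudness measure, and the identity comparison module's voiceprint matcher, none of which are specified at this level in the patent.

```python
def process_round(captures, texts, intensity, enrolled, match):
    """captures: {module_id: audio}; texts: {module_id: recognized text};
    enrolled: {module_id: identity registered at that module};
    intensity / match: loudness measure and voiceprint matcher (stand-ins).
    Returns (agreed text, matched speaker, module whose capture is kept)."""
    # Error correction: keep captures whose recognized text is the agreed content.
    agreed = max(set(texts.values()), key=list(texts.values()).count)
    same = [m for m in captures if texts[m] == agreed]
    # Selection: loudest copy of the shared content.
    best = max(same, key=lambda m: intensity(captures[m]))
    # Identity comparison: whose voiceprint does the selected speech match?
    speaker = match(captures[best])
    if speaker != enrolled[best]:
        # Confirmation: on a mismatch, fall back to the same content captured
        # by the module enrolled to the matched speaker, if one exists.
        own = [m for m in same if enrolled[m] == speaker]
        if own:
            best = own[0]
    return agreed, speaker, best

# Toy run: bob's mic is loudest, but the voiceprint says alice spoke,
# so the system re-attributes and keeps alice's own capture.
enrolled = {"m1": "alice", "m2": "bob", "m3": "carol"}
texts = {"m1": "hello", "m2": "hello", "m3": "hi"}
captures = {"m1": 0.4, "m2": 0.9, "m3": 0.2}  # audio stubbed as its loudness
result = process_round(captures, texts, lambda a: a, enrolled, lambda a: "alice")
print(result)  # ('hello', 'alice', 'm1')
```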

Embodiment 3

[0064] See Figure 3, a schematic structural block diagram of an electronic device provided in an embodiment of the present application. The electronic device includes a memory 101, a processor 102, and a communication interface 103, which are electrically connected to one another, directly or indirectly, to enable data transmission and interaction; for example, the components can be connected through one or more communication buses or signal lines. The memory 101 can store software programs and modules, such as the program instructions / modules corresponding to the conference voice processing system provided in the embodiment of the present application; the processor 102 executes the software programs and modules stored in the memory 101 to perform various functional applications and data processing. The communication interface 103 can be used for signaling...



Abstract

The invention provides a conference voice data processing method and system, and relates to the field of voice recognition. The method comprises the following steps: arranging a plurality of acquisition modules near different participants, and acquiring identity information and initial voiceprint features of the corresponding participants according to the different acquisition modules so as to acquire speech of the corresponding participants; identifying and judging whether the voice contents of the plurality of speech are the same, if so, analyzing the sound intensity of the plurality of voice contents, and selecting the voice content with the highest sound intensity; establishing a voice feature model of the plurality of participants according to the identity information and the initial voiceprint features, and inputting the selected speech into the voice feature model to obtain an identity matching result; and judging whether the identity information is matched with the identity matching result according to the acquisition modules, and if not, selecting the same voice content of the corresponding acquisition modules according to the identity matching result. The accuracy of voice acquisition of participants can be improved, and the conference recording effect is improved.

