741 results about "Delay" patented technology

Delay is an audio effect and an effects unit which records an input signal to an audio storage medium, and then plays it back after a period of time. The delayed signal may either be played back multiple times, or played back into the recording again, to create the sound of a repeating, decaying echo.
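
The echo structure described above can be captured in a few lines. The sketch below, with illustrative delay, feedback, and mix values (not from any of the patents listed here), records the input into a circular buffer and feeds the delayed output back into it to produce a repeating, decaying echo.

```python
# Minimal sketch of a feedback delay effect; parameter values are illustrative.
import numpy as np

def feedback_delay(x, delay_samples=22050, feedback=0.5, mix=0.5):
    """Record the input into a delay buffer and play it back after delay_samples,
    feeding the playback into the recording again to build a decaying echo."""
    y = np.zeros(len(x))
    buf = np.zeros(delay_samples)                # the "audio storage medium"
    idx = 0
    for n, sample in enumerate(x):
        delayed = buf[idx]                       # signal played back after the delay
        buf[idx] = sample + feedback * delayed   # feed playback into the recording
        y[n] = (1.0 - mix) * sample + mix * delayed
        idx = (idx + 1) % delay_samples
    return y
```

With feedback set to 0 the input is repeated only once; values closer to 1 make the echo decay more slowly.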

Method and apparatus for processing an input speech signal during presentation of an output audio signal

A start of an input speech signal is detected during presentation of an output audio signal and an input start time, relative to the output audio signal, is determined. The input start time is then provided for use in responding to the input speech signal. In another embodiment, the output audio signal has a corresponding identification. When the input speech signal is detected during presentation of the output audio signal, the identification of the output audio signal is provided for use in responding to the input speech signal. Information signals comprising data and/or control signals are provided in response to at least the contextual information provided, i.e., the input start time and/or the identification of the output audio signal. In this manner, the present invention accurately establishes a context of an input speech signal relative to an output audio signal regardless of the delay characteristics of the underlying communication system.
Owner:AUVO TECH
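
As a rough illustration of the context capture this abstract describes, the sketch below (hypothetical class and method names, not the patented apparatus) records when prompt playback starts and, when barge-in speech is detected, reports the input start time relative to the prompt together with the prompt's identification.

```python
# Hypothetical illustration of barge-in context capture.
import time

class BargeInContext:
    def __init__(self, prompt_id):
        self.prompt_id = prompt_id        # identification of the output audio signal
        self.prompt_start = None

    def start_prompt(self):
        self.prompt_start = time.monotonic()

    def on_speech_detected(self):
        """Return the input start time relative to the prompt plus the prompt ID,
        so the response logic can interpret the utterance in context."""
        input_start = time.monotonic() - self.prompt_start
        return {"prompt_id": self.prompt_id, "input_start_s": input_start}
```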

Digital wavetable audio synthesizer with delay-based effects processing

A digital wavetable audio synthesizer is described. The synthesizer can generate up to 32 high-quality audio digital signals or voices, including delay-based effects, at either a 44.1 KHz sample rate or at sample rates compatible with a prior art wavetable synthesizer. The synthesizer includes an address generator which has several modes of addressing wavetable data. The address generator's addressing rate controls the pitch of the synthesizer's output signal. The synthesizer performs a 10-bit interpolation, using the wavetable data addressed by the address generator, to interpolate additional data samples. When the address generator loops through a block of data, the signal path interpolates between the data at the end and start addresses of the block of data to prevent discontinuities in the generated signal. A synthesizer volume generator, which has several modes of controlling the volume, adds envelope, right offset, left offset, and effects volume to the data. The data can be placed in one of sixteen fixed stereo pan positions, or left and right offsets can be programmed to place the data anywhere in the stereo field. The left and right offset values can also be programmed to control the overall volume. Zipper noise is prevented by controlling the volume increment. A synthesizer LFO generator can add LFO variation to: (i) the wavetable data addressing rate, for creating a vibrato effect; and (ii) a voice's volume, for creating a tremolo effect. Generated data to be output from the synthesizer is stored in left and right accumulators. However, when creating delay-based effects, data is stored in one of several effects accumulators. This data is then written to a wavetable. The difference between the wavetable write and read addresses for this data provides a delay for echo and reverb effects. LFO variations added to the read address create chorus and flange effects. The volume of the delay-based effects data can be attenuated to provide volume decay for an echo effect. After the delay-based effects processing, the data can be provided with left and right offset volume components which determine how much of the effect is heard and its stereo position. The data is then stored in the left and right accumulators.
Owner:MICROSEMI SEMICON U S
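
A minimal sketch of the delay-based effects path summarized above, assuming a simple sample-by-sample model with illustrative parameter names: the gap between write and read addresses in a buffer sets the delay time, an attenuated read gives echo-style decay, and an LFO on the read address produces chorus- or flange-style modulation.

```python
# Sketch of delay-based effects via a write/read address gap; values illustrative.
import numpy as np

def delay_effect(x, sr=44100, delay_s=0.25, decay=0.6, lfo_rate_hz=0.0, lfo_depth=0.0):
    buf = np.zeros(sr * 2)                   # memory used as the effects "wavetable"
    y = np.zeros(len(x))
    gap = int(delay_s * sr)                  # write-read address difference -> delay time
    for n, s in enumerate(x):
        wr = n % len(buf)
        lfo = lfo_depth * np.sin(2 * np.pi * lfo_rate_hz * n / sr)
        rd = int(wr - gap + lfo) % len(buf)  # LFO on the read address -> chorus/flange
        y[n] = s + decay * buf[rd]           # attenuated read -> echo/reverb decay
        buf[wr] = y[n]
    return y
```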

Dynamic range compressor-limiter and low-level expander with look-ahead for maximizing and stabilizing voice level in telecommunication applications

Inactive · US6535846B1 · Avoid excessive amplification · Maximizing and stabilizing voice level · Interconnection arrangements · Speech analysis · Telecommunication application · Speech recognition
A voice signal processing system with multiple parallel control paths, each of which addresses a different problem, such as the high peak-to-RMS signal ratios characteristic of speech, wide variations in RMS speech levels, and high background noise levels. Different families of input-output control curves are used simultaneously to achieve efficient peak limiting and dynamic range compression as well as low-level dynamic expansion to prevent excessive amplification of background noise. In addition, a delay in the audio path relative to the control path makes it possible to employ an effective look-ahead in the control path, with FIR smoothing of the control signal matched to the look-ahead. Digital-domain peak interpolators are used for estimating the peaks of the input signal in the continuous time domain.
Owner:K S WAVES
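
The look-ahead idea in this abstract can be sketched as follows, with illustrative ceiling and look-ahead values: the audio path is delayed relative to the control path, the control gain anticipates upcoming peaks, and a short moving-average FIR smooths the gain curve.

```python
# Sketch of look-ahead limiting; ceiling and lookahead are illustrative values.
import numpy as np

def lookahead_limiter(x, ceiling=0.9, lookahead=64):
    # Control path: gain needed so no sample exceeds the ceiling.
    required = np.minimum(1.0, ceiling / np.maximum(np.abs(x), 1e-12))
    # Look ahead: at time n, use the strongest attenuation needed over the next
    # `lookahead` samples of the (delayed) audio path.
    padded = np.concatenate([np.ones(lookahead), required])
    anticipated = np.array([padded[n:n + lookahead + 1].min() for n in range(len(x))])
    # FIR (moving-average) smoothing of the control signal, clipped so the smoothed
    # gain never exceeds the attenuation actually required.
    kernel = np.ones(lookahead) / lookahead
    gain = np.minimum(np.convolve(anticipated, kernel, mode="same"), anticipated)
    # Audio path delayed relative to the control path by the look-ahead.
    delayed = np.concatenate([np.zeros(lookahead), x])[:len(x)]
    return delayed * gain
```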

Device for and a method of processing audio data

According to an exemplary embodiment of the invention, a device (100) for processing audio data (101, 102) is provided, wherein the device (100) comprises a manipulation unit (103) (particularly a resampling unit) adapted for selectively manipulating (particularly resampling) a transition portion of a first audio item (104) in such a manner that a time-related audio property of the transition portion is modified (in particular, this also makes it possible to simulate the temporal delay effects of movement in a realistic manner).
Owner:KONINKLIJKE PHILIPS ELECTRONICS NV

Wavetable audio synthesizer with left offset, right offset and effects volume control

A digital wavetable audio synthesizer is described. A synthesizer volume generator, which has several modes of controlling the volume, adds envelope, right offset, left offset, and effects volume to the data. The data can be placed in one of sixteen fixed stereo pan positions, or left and right offsets can be programmed to place the data anywhere in the stereo field. The left and right offset values can also be programmed to control the overall volume. Zipper noise is prevented by controlling the volume increment. A synthesizer LFO generator can add LFO variation to: (i) the wavetable data addressing rate, for creating a vibrato effect; and (ii) a voice's volume, for creating a tremolo effect. Generated data to be output from the synthesizer is stored in left and right accumulators. However, when creating delay-based effects, data is stored in one of several effects accumulators. This data is then written to a wavetable. The difference between the wavetable write and read addresses for this data provides a delay for echo and reverb effects. LFO variations added to the read address create chorus and flange effects. The volume of the delay-based effects data can be attenuated to provide volume decay for an echo effect. After the delay-based effects processing, the data can be provided with left and right offset volume components which determine how much of the effect is heard and its stereo position. The data is then stored in the left and right accumulators.
Owner:MICROSEMI SEMICON U S

Audio receiver having adaptive buffer delay

Inactive · US20060092918A1 · Buffer delay can be increased · Increasing and decreasing buffer delay · Error prevention · Transmission systems · Interval delay · Self-adaptive
Generally speaking, there are provided systematic techniques for increasing and decreasing jitter buffer delay. The disclosed techniques typically utilize various combinations of: evaluating received data over a specified interval, increasing a recommended buffer delay if the interval delay exceeds a first threshold and decreasing the recommended buffer delay if the interval delay is less than a second threshold, causing the recommended buffer delay to decrease over time until an underflow condition is identified, and/or increasing the recommended buffer delay in response to identifying the underflow condition.
Owner:PIVOT VOIP
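
A condensed sketch of the adaptation rule described above, with hypothetical threshold and step values; it is not the patented logic, only the increase/decrease behaviour the abstract outlines.

```python
# Hypothetical thresholds and step size; mirrors the rule outlined in the abstract.
def update_buffer_delay(recommended_ms, interval_delay_ms,
                        high_ms=60, low_ms=20, step_ms=5, underflow=False):
    if underflow:
        return recommended_ms + step_ms           # underflow identified: add delay back
    if interval_delay_ms > high_ms:
        return recommended_ms + step_ms           # interval delay above first threshold
    if interval_delay_ms < low_ms:
        return max(0, recommended_ms - step_ms)   # interval delay below second threshold
    return recommended_ms                         # otherwise keep the recommendation
```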

Method for Controlling Acoustic Echo Cancellation and Audio Processing Apparatus

A method for controlling acoustic echo cancellation and an audio processing apparatus are described. In one embodiment, the audio processing apparatus includes an acoustic echo canceller for suppressing acoustic echo in a microphone signal, a jitter buffer for reducing delay jitter of a received signal, and a joint controller for controlling the acoustic echo canceller by referring to at least one future frame in the jitter buffer.
Owner:DOLBY LAB LICENSING CORP
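
A rough sketch of how such a joint controller might look, assuming hypothetical `peek()` and `set_adaptation()` interfaces on the jitter buffer and echo canceller: the controller inspects future far-end frames already queued in the jitter buffer and gates the canceller's adaptation accordingly.

```python
# Rough sketch only; peek() and set_adaptation() are assumed interfaces,
# not the actual APIs of the patented canceller or jitter buffer.
import numpy as np

def control_aec(aec, jitter_buffer, lookahead_frames=2, energy_threshold=1e-4):
    future = jitter_buffer.peek(lookahead_frames)     # future frames still buffered
    active = any(np.mean(np.square(f)) > energy_threshold for f in future)
    aec.set_adaptation(active)                        # adapt only when far-end audio is coming
```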

Apparatus and method of out-of-head localization of sound image output from headphones

An apparatus for and method of externalizing a sound image output to headphones are provided. The method of externalizing a sound image output to headphones includes: localizing the sound image of an input signal to a predetermined area in front of a listener; and signal-processing a left signal component and a right signal component of the input signal with different delay values and gain values, respectively. According to the method and apparatus, the sound image output to the headphones can be localized to a virtual sound stage in front of the listener, thereby reducing the fatigue that occurs when listening through headphones, and the sound image can be externalized even when a sound source includes many monophonic components.
Owner:SAMSUNG ELECTRONICS CO LTD
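
The per-channel delay-and-gain step mentioned in the abstract reduces to something like the sketch below; the delay and gain values are illustrative, not from the patent.

```python
# Sketch of applying different delays (in samples) and gains to left and right.
import numpy as np

def externalize(left, right, delay_l=8, delay_r=3, gain_l=1.0, gain_r=0.85):
    out_l = gain_l * np.concatenate([np.zeros(delay_l), left])
    out_r = gain_r * np.concatenate([np.zeros(delay_r), right])
    n = max(len(out_l), len(out_r))                   # pad both channels to equal length
    out_l = np.pad(out_l, (0, n - len(out_l)))
    out_r = np.pad(out_r, (0, n - len(out_r)))
    return out_l, out_r
```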

Method for compensating delay and frequency response characteristics of multi-output channel sound system

The invention discloses a method for compensating the delay and frequency response characteristics of a multi-output-channel sound system, and a system implementing it. The method comprises the following steps: first, the phase difference between the channels of a multi-channel output device is measured, ensuring that the device itself outputs its signals synchronously; then the delay of the signals output by the multi-channel output device as they pass through the signal processing system, the loudspeakers and the spatial transmission path to a specific listening area is estimated with a delay estimation method, and delay compensation is applied to the channels by comparing the delay differences between them; finally, an FIR frequency response compensation filter is designed from the actually measured frequency response curves of the signal transmission path, and the filter flattens the portion of each channel's frequency response curve above the low and mid frequencies as far as possible. With this method, the sound waves emitted by each loudspeaker reach the listening area with essentially the same phase, and the frequency response characteristics of the channels are essentially the same.
Owner:ZHEJIANG ELECTRO ACOUSTIC R&D CENT CAS +1
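
Two pieces of the pipeline described above can be sketched briefly under simplifying assumptions: inter-channel delay estimated by cross-correlation against a reference channel, and a rough linear-phase FIR that flattens a measured magnitude response. A real design would constrain the gain at the band edges and below the mid frequencies.

```python
# Simplified sketches under stated assumptions; not the patented procedure.
import numpy as np

def inter_channel_delay(ref, ch, sr):
    """Delay of channel `ch` against a reference channel, by cross-correlation."""
    corr = np.correlate(ch, ref, mode="full")
    lag = np.argmax(corr) - (len(ref) - 1)
    return lag / sr                                   # seconds; compensate by this amount

def design_inverse_fir(measured_mag, n_taps=512):
    """Rough linear-phase FIR that flattens a measured magnitude response."""
    inv = 1.0 / np.maximum(measured_mag, 1e-3)        # avoid huge boosts in deep nulls
    impulse = np.fft.irfft(inv, n=n_taps)             # zero-phase inverse response
    return np.roll(impulse, n_taps // 2) * np.hanning(n_taps)
```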

Adaptive voice separating method based on sound source positioning

The invention provides an adaptive voice separation method based on sound source localization, and relates to the technical field of information processing. The method includes the steps of: acquiring an audio signal of the observed environment, and determining the number of sound sources and the direction of arrival of each source; generating a dimension-reduction matrix P; generating a voice transfer matrix and delay-and-sum beamforming coefficients; determining the active sound source at each frequency point and separating the voice components; keeping the separated voice components and setting the components of inactive sources to zero; and obtaining the time-domain voice signals of the sound sources. With this method, the number and orientation of the sound sources in the current environment are obtained through sound source localization; each frequency band of the voice signal is dimension-reduced with PCA whitening to obtain an initial separation matrix; and the frequency components of each source channel are separated according to the number of active sources at each frequency point by adaptively combining beamforming and FDICA, so as to restore the voice components. The method achieves a higher signal-to-noise-ratio improvement and better noise suppression, and is applicable to arbitrary sound source configurations in real voice environments.
Owner:NORTHEASTERN UNIV
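
As a sketch of the delay-and-sum beamforming coefficients mentioned above (not the patented adaptive separation itself), the snippet below steers one frequency-domain frame from a linear microphone array toward a given direction of arrival; the array geometry and variable names are illustrative.

```python
# Sketch of delay-and-sum beamforming for one look direction; linear array along x
# is assumed, and all names are illustrative.
import numpy as np

def delay_and_sum(frame_f, freqs_hz, mic_positions_m, doa_deg, c=343.0):
    """frame_f: (n_mics, n_freqs) STFT frame; returns the beamformed spectrum."""
    doa = np.deg2rad(doa_deg)
    delays = np.asarray(mic_positions_m) * np.cos(doa) / c      # per-mic delay, seconds
    w = np.exp(-2j * np.pi * np.outer(delays, freqs_hz))        # steering phases
    return np.mean(np.conj(w) * frame_f, axis=0)                # align, then sum
```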

Bluetooth audio equipment synchronous playing method and system, Bluetooth audio master equipment and Bluetooth audio slave equipment

Active · CN111918261A · Accurate playback latency · Precise synchronicity · Microphones · Network traffic/resource management · Computer hardware · Timestamp
The invention relates to the technical field of Bluetooth communication, in particular to a synchronous playing method and system for Bluetooth audio equipment, a Bluetooth audio master device and a Bluetooth audio slave device. The method comprises the steps of: determining, at the Bluetooth audio master device end, the playing time of the audio data packet to be played synchronously; converting the playing time into a timestamp, wherein the timestamp comprises a Bluetooth clock value and a microsecond clock offset value; and sending the timestamp. At the Bluetooth audio slave device end, the timestamp is received along with the audio data packet to be played synchronously; the Bluetooth audio slave device then plays the audio data packet based on the timestamp, starting the packet corresponding to the timestamp when both the Bluetooth clock value and the microsecond clock offset value arrive. Because the playing time is determined with a microsecond-level clock offset value, audio playback among the Bluetooth audio slave devices, or between the master device and the slave devices, is started and carried out on the basis of this more accurate playing time, and the playback delay is at the microsecond level.
Owner:NANJING ZGMICRO CO LTD
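
A minimal sketch of the timestamp arithmetic the abstract describes, assuming the native 312.5 µs Bluetooth clock tick and illustrative field names: the slave compares its current clock value and microsecond offset against the received timestamp and starts playback only when both have arrived.

```python
# Sketch of the timestamp arithmetic; assumes the native 312.5 microsecond
# Bluetooth clock tick, and the field names are illustrative.
from dataclasses import dataclass

BT_CLOCK_TICK_US = 312.5

@dataclass
class PlayTimestamp:
    bt_clock: int        # Bluetooth clock value
    us_offset: float     # microsecond clock offset within that tick

def to_timestamp(play_time_us):
    """Convert an absolute playing time in microseconds to a timestamp."""
    ticks, offset = divmod(play_time_us, BT_CLOCK_TICK_US)
    return PlayTimestamp(int(ticks), offset)

def should_play(now_bt_clock, now_us_offset, ts):
    """Start the packet only when both the clock value and the offset have arrived."""
    return (now_bt_clock, now_us_offset) >= (ts.bt_clock, ts.us_offset)
```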

Improved sound source localization method based on progressive serial orthogonalization blind source separation algorithm, and implementation system for same

The invention relates to an improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm, and to a system implementing it. The method comprises the steps of: 1, acquiring and storing sound signals; 2, separating the sound signals to obtain independent sound source signals; 3, selecting, from the independent sound source signals, the signal of the sound to be localized by means of a pattern matching algorithm; 4, if there is a single sound source, first performing coarse localization according to the pattern matching result by calculating the envelope of the signals, sampling at low resolution and roughly calculating the time delay with the generalized autocorrelation function method, then shifting the signals in the time domain according to the coarse localization offset, performing fine localization by sampling at high resolution and calculating the time delay with the generalized autocorrelation function method to obtain a precise delay, and solving for the position of the sound source; if there are multiple sound sources, calculating the time delays with a TDOA algorithm and solving for the positions of the sources. Compared with the traditional TDOA algorithm, the improved method improves precision to some extent and reduces the computational load.
Owner:SHANDONG UNIV
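
Assuming the "generalized autocorrelation function method" refers to generalized cross-correlation between microphone signals, a compact time-delay estimation sketch with PHAT weighting is shown below; the coarse-to-fine envelope stage described above is omitted.

```python
# Time-delay estimation with generalized cross-correlation (PHAT weighting),
# assuming that is the correlation method referred to above.
import numpy as np

def gcc_phat_delay(sig, ref, sr):
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    S /= np.maximum(np.abs(S), 1e-12)                            # keep phase, drop magnitude
    cc = np.fft.irfft(S, n)
    cc = np.concatenate([cc[-(len(ref) - 1):], cc[:len(sig)]])   # order the lags
    lag = np.argmax(np.abs(cc)) - (len(ref) - 1)
    return lag / sr                                              # delay of sig vs ref, seconds
```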

Method and device for compensating drop frame after start frame of voiced sound

Inactive · CN102915737A · Avoid compensation latency · Guaranteed compensation sound quality · Speech analysis · Self-adaptive · Speech sound
The invention discloses a method and a device for compensating a drop frame after the start frame of voiced sound, which avoid delaying the compensation of that drop frame. The method includes: selecting, according to the stability of the voiced-sound start frame, different ways to derive the pitch delay of the first drop frame following that start frame; deriving the adaptive codebook gain of the first drop frame from the adaptive codebooks of one or more sub-frames received before it, or from the energy change of the time-domain voice signal of the voiced-sound start frame; and compensating the first drop frame with the derived pitch delay and adaptive codebook gain. After compensation, each sub-frame of the first frame correctly received after the voiced-sound start frame is decoded to obtain its adaptive codebook gain, this gain is multiplied by a scale factor to obtain a new adaptive codebook gain for the corresponding sub-frame, and the new gain replaces the decoded gain in voice synthesis. In this way, the error propagation caused by the drop frame is reduced, and the energy of the synthesized voice is controlled.
Owner:ZTE CORP
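
A toy sketch of the concealment decision outlined above, using hypothetical helper names and values; the actual codec derives the pitch delay and adaptive-codebook gain from decoded sub-frame parameters rather than these simplified averages.

```python
# Hypothetical simplification of the concealment and rescaling steps described above.
def conceal_first_lost_frame(prev_pitch_delays, prev_acb_gains, onset_is_stable):
    if onset_is_stable:
        pitch_delay = prev_pitch_delays[-1]                         # reuse the last lag
    else:
        pitch_delay = sum(prev_pitch_delays) / len(prev_pitch_delays)
    acb_gain = min(1.0, sum(prev_acb_gains) / len(prev_acb_gains))  # from earlier sub-frames
    return pitch_delay, acb_gain

def rescale_decoded_gains(decoded_gains, scale_factor=0.9):
    """After the loss, the decoded adaptive-codebook gain of each sub-frame is
    multiplied by a scale factor before voice synthesis (value is illustrative)."""
    return [g * scale_factor for g in decoded_gains]
```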