49 results about "Interleaving" patented technology

In disk storage and drum memory, interleaving is a technique used to improve access performance of storage by putting data accessed sequentially into non-sequential sectors. The number of physical sectors between consecutive logical sectors is called the interleave skip factor or skip factor.
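
For illustration, here is a minimal Python sketch of how a skip factor spreads logically consecutive sectors around a track; the track size and skip factor below are arbitrary examples, not values tied to any particular drive.

    def interleave_map(sectors_per_track: int, skip_factor: int) -> list[int]:
        """Assign logical sectors to physical slots, leaving `skip_factor`
        physical sectors between consecutive logical sectors so the controller
        has time to process one sector before the next passes under the head."""
        step = skip_factor + 1
        mapping = [-1] * sectors_per_track          # physical slot -> logical sector
        pos = 0
        for logical in range(sectors_per_track):
            while mapping[pos] != -1:               # skip slots already assigned
                pos = (pos + 1) % sectors_per_track
            mapping[pos] = logical
            pos = (pos + step) % sectors_per_track
        return mapping

    # Example: 8 sectors per track with a skip factor of 2 (a "3:1 interleave")
    print(interleave_map(8, 2))   # [0, 3, 6, 1, 4, 7, 2, 5]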

Interleaving and rate matching and de-interleaving and rate de-matching methods

The invention discloses an interleaving and rate matching method. A corrected interleaving pattern is determined from the column interleaving pattern and the head-filling bits, an interleaving operator is derived from the corrected pattern, and the operator is used to interleave the sub-block interleaving matrices in sequence. In one mode, the operator interleaves only the bits of the sub-block interleaving matrices that must be output for the required redundancy version, and the sub-block interleaving results are output directly in protocol order as each sub-block is interleaved, until the required code rate is met. In the other mode, the operator interleaves the sub-block interleaving matrices one by one, the results are written in protocol order to a circular buffer as each sub-block is interleaved, and the bits corresponding to the required redundancy version are then output from the buffer. The invention also discloses a corresponding de-interleaving and rate de-matching method. These methods greatly reduce the buffer space and the buffer read/write operations needed during interleaving and rate matching and during de-interleaving and rate de-matching.
Owner:POTEVIO INFORMATION TECH
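
As context for the sub-block interleaving that the method above operates on, the following is a minimal Python sketch of column-permuted block interleaving: bits are written row-wise into a matrix whose head is padded with filler (NULL) bits, the columns are permuted, and the bits are read out column-wise. The column count and permutation used here are illustrative placeholders, not the values defined by the patent or by any particular standard.

    NULL = None  # head-filling (dummy) bit

    def subblock_interleave(bits, num_cols=8, col_perm=(0, 4, 2, 6, 1, 5, 3, 7)):
        assert len(col_perm) == num_cols
        num_rows = -(-len(bits) // num_cols)              # ceiling division
        pad = num_rows * num_cols - len(bits)
        padded = [NULL] * pad + list(bits)                # filler bits at the head
        matrix = [padded[r * num_cols:(r + 1) * num_cols] for r in range(num_rows)]
        # read column-wise in permuted column order, skipping the filler bits
        return [matrix[r][c] for c in col_perm for r in range(num_rows)
                if matrix[r][c] is not NULL]

    print(subblock_interleave(list(range(12))))
    # -> [4, 0, 8, 6, 2, 10, 5, 1, 9, 7, 3, 11]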

MC/MC-DS dual-mode adaptive multi-carrier code division multiple access (CDMA) apparatus and method thereof

Provided are a multi-carrier (MC) / multi-carrier direct-sequence (MC-DS) dual-mode adaptive CDMA apparatus, a method thereof, and a computer program that implements the method. The apparatus can vary the user modulation order and the transmission repetition factor independently, and switch the spreading scheme between time-domain spreading (MC-DS-CDMA) and frequency-domain spreading (MC-CDMA) in an MC-CDMA system. The apparatus includes: a user signal processing unit for performing symbol modulation, repetition and spreading of the bit stream of each user according to a transmission mode suited to that user's channel environment, and generating spread chip streams for the user; a combining unit for summing the spread chip streams of all users; a first interleaving unit for interleaving the combined chip stream and generating a first interleaved stream; and a second interleaving unit for optionally performing a second interleaving on the first interleaved stream.
Owner:ELECTRONICS & TELECOMM RES INST
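
The combining and two-stage interleaving described above can be pictured with a short sketch: the users' spread chip streams are summed, block-interleaved once, and optionally interleaved a second time. The simple row/column block interleaver, the depths, and the stream lengths below are illustrative assumptions, not the apparatus's actual design.

    import numpy as np

    def block_interleave(chips, depth):
        """Write the chip stream row-wise into `depth` rows, read it column-wise."""
        assert chips.size % depth == 0, "stream length must be a multiple of depth"
        return chips.reshape(depth, -1).T.reshape(-1)

    def transmit_chips(user_chip_streams, depth1=4, depth2=2):
        combined = np.sum(user_chip_streams, axis=0)    # combining unit: sum all users
        first = block_interleave(combined, depth1)      # first interleaving unit
        if depth2:                                      # optional second interleaving unit
            first = block_interleave(first, depth2)
        return first

    # Example: two users, each with a 16-chip spread stream of +/-1 chips
    rng = np.random.default_rng(0)
    streams = [rng.choice([-1, 1], size=16) for _ in range(2)]
    print(transmit_chips(streams))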

Memory device and method for exchanging data with memory device

An improved data path for a double data rate type II (DDR2) dynamic random access memory. The invention proposes techniques and circuits to support the switching operations required to exchange data between a memory array and an external data buffer. In the write path, these switching operations can include latching and assembling data bits received sequentially at a single data buffer, rearranging the data bits according to the type of access pattern (for example, interleaved or sequential), sorting them according to the accessed bank locations, and performing encoding operations based on the chip organization (e.g. ×4, ×8, ×16). Similar operations are performed in the read path, in reverse order, to assemble the data to be read from the device. By separating the latching logic from the switching logic that performs these other functions, the switching logic can operate at a lower clock frequency, saving time on data transfers between the memory array and the DQ buffer in both directions, relaxing the associated timing requirements and improving latency.
Owner:INFINEON TECH AG +1
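
To illustrate the interleaved-versus-sequential access patterns that such a data path must rearrange bits for, here is a small sketch of the two burst-ordering conventions. The ordering rules follow the usual JEDEC-style definition for bursts of length 4 or 8 and are shown only as an illustration of the concept, not as the patent's circuitry.

    def burst_order(start, burst_len, interleaved):
        """Column-address order within one burst, for a given starting address."""
        assert burst_len in (4, 8) and 0 <= start < burst_len
        order = []
        for i in range(burst_len):
            if interleaved:
                addr = start ^ i                            # interleaved: XOR ordering
            else:
                # sequential: the low two bits wrap; for burst length 8, bit 2
                # flips after the first four beats
                addr = ((start + i) & 0b011) | ((start ^ i) & 0b100)
            order.append(addr)
        return order

    print(burst_order(start=2, burst_len=8, interleaved=False))  # [2, 3, 0, 1, 6, 7, 4, 5]
    print(burst_order(start=1, burst_len=8, interleaved=False))  # [1, 2, 3, 0, 5, 6, 7, 4]
    print(burst_order(start=1, burst_len=8, interleaved=True))   # [1, 0, 3, 2, 5, 4, 7, 6]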

Parallel turbo decoding with non-uniform window sizes

A turbo decoder circuit performs a turbo decoding process to recover a frame of data symbols from a received signal comprising soft decision values for each data symbol of the frame. The data symbols of the frame have been encoded with a turbo encoder comprising upper and lower convolutional encoders, which can each be represented by a trellis, and an interleaver which interleaves the encoded data between the upper and lower convolutional encoders. The turbo decoder circuit comprises a clock, configurable network circuitry for interleaving soft decision values, an upper decoder and a lower decoder. Each of the upper and lower decoders includes processing elements, which are configured, during a series of consecutive clock cycles, to iteratively receive, from the configurable network circuitry, a priori soft decision values pertaining to data symbols associated with a window of an integer number of consecutive trellis stages representing possible paths between states of the upper or lower convolutional encoder. The processing elements perform parallel calculations associated with the window using the a priori soft decision values in order to generate corresponding extrinsic soft decision values pertaining to the data symbols. The configurable network circuitry includes network controller circuitry which iteratively controls the configuration of the configurable network circuitry, during the consecutive clock cycles, to provide the a priori soft decision values for the upper decoder by interleaving the extrinsic soft decision values provided by the lower decoder, and to provide the a priori soft decision values for the lower decoder by interleaving the extrinsic soft decision values provided by the upper decoder. The interleaving performed by the configurable network circuitry under the control of the network controller follows a predetermined schedule, which provides the a priori soft decision values at different cycles of the consecutive clock cycles so as to avoid contention between different a priori soft decision values being provided to the same processing element of the upper or lower decoder during the same clock cycle. Accordingly, the processing elements can have a window size comprising any number of trellis stages, so that the decoder can be configured with an arbitrary number of processing elements, making the decoder circuit an arbitrarily parallel turbo decoder.
Owner:ACCELERCOMM LTD
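
As a rough illustration of the contention-avoidance idea (not the patent's actual schedule), the sketch below assigns each extrinsic value a delivery cycle so that no destination processing element receives two values in the same clock cycle. The window sizes and the interleaver permutation are arbitrary hypothetical examples.

    from collections import defaultdict

    def windows_to_pe(window_sizes):
        """Map each symbol index to the processing element (PE) owning its window."""
        pe_of, k = {}, 0
        for pe, size in enumerate(window_sizes):
            for _ in range(size):
                pe_of[k] = pe
                k += 1
        return pe_of

    def build_schedule(pi, window_sizes):
        """Greedily pick a delivery cycle per symbol so that no destination PE
        receives two interleaved values during the same clock cycle."""
        pe_of = windows_to_pe(window_sizes)
        busy = defaultdict(set)          # cycle -> destination PEs already served
        schedule = {}                    # symbol k -> (delivery cycle, destination PE)
        for pe, size in enumerate(window_sizes):
            for offset in range(size):
                k = sum(window_sizes[:pe]) + offset
                dest = pe_of[pi[k]]      # PE whose window contains pi(k)
                cycle = offset           # earliest cycle the value is available
                while dest in busy[cycle]:
                    cycle += 1           # delay delivery to avoid contention
                busy[cycle].add(dest)
                schedule[k] = (cycle, dest)
        return schedule

    # Toy example: frame of 8 symbols, non-uniform windows of sizes 3, 3 and 2,
    # and an arbitrary hypothetical interleaver permutation.
    pi = [5, 2, 7, 0, 3, 6, 1, 4]
    print(build_schedule(pi, [3, 3, 2]))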