30 results about How to "Reduce cache capacity" patented technology

Router with a cache having a high hit probability

A router allowing the entry hit probability of the cache to be increased is disclosed. The cache is searched using a different mask for each cache entry. A maximum or optimum cache prefix length is determined as the length of the upper bits of the received packet's destination address that are not masked by the corresponding mask. Alternatively, the cache is searched using longest prefix match (LPM). A cache entry that can be hit by a plurality of destination addresses can thus be registered in the cache, resulting in an increased cache hit probability.
Owner:NEC CORP
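The abstract above describes searching a route cache by longest prefix match. A minimal sketch of such a lookup, assuming a simple dictionary keyed on (prefix bits, prefix length) — the data structure and names are illustrative, not the patented circuit:

```python
# Illustrative longest-prefix-match (LPM) cache lookup, per the abstract.
# cache: dict mapping (prefix_bits, prefix_len) -> next_hop (assumed layout)

def lpm_lookup(cache, addr, width=32):
    """Return the next hop for the longest cached prefix matching addr."""
    for plen in range(width, -1, -1):        # try longer prefixes first
        prefix = addr >> (width - plen)      # keep the top plen bits
        hop = cache.get((prefix, plen))
        if hop is not None:
            return hop
    return None                              # cache miss

# One short-prefix entry can be hit by many destination addresses,
# which is the source of the improved hit probability:
cache = {(0xAC10, 16): "if0"}                # 172.16.0.0/16 -> if0
```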

Convolutional calculation accelerator, convolutional calculation method and convolutional calculation equipment

The invention relates to a convolution calculation accelerator, a convolution calculation method, and convolution calculation equipment in the technical field of electronic circuits. The convolution calculation accelerator comprises a controller, a calculation matrix, and a first cache; the calculation matrix comprises at least one row of calculation units, and each row comprises at least two calculation units. The controller controls input data loaded into the first cache to be fed to the calculation units of the corresponding row, and those units pass the input data along within the row. Each calculation unit in the row performs convolution computation on the received input data with a pre-stored convolution kernel. Because at least two calculation units in the same row multiplex the same input data, only one input channel is needed, so the cache capacity and input bandwidth requirements of the calculation matrix are reduced, and the scalability of the calculation matrix is improved.
Owner:TENCENT TECH (SHENZHEN) CO LTD
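The key idea — one input stream shared by a row of units, each holding its own kernel — can be sketched as follows. This is a behavioural illustration, not the patented hardware; the 1-D shapes and names are assumptions:

```python
# Sketch: a row of compute units multiplexes one shared input stream; each
# unit convolves it with its own pre-stored kernel, so a single input
# channel feeds the whole row (reducing cache and bandwidth needs).

def row_convolve(input_data, kernels):
    """Valid 1-D convolution of one shared input against several kernels."""
    outputs = []
    for k in kernels:                        # each unit reuses the same input
        n = len(input_data) - len(k) + 1
        outputs.append([sum(input_data[i + j] * k[j] for j in range(len(k)))
                        for i in range(n)])
    return outputs
```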

FPGA (Field Programmable Gate Array) based Flicker picture component generation method

The invention discloses an FPGA (Field Programmable Gate Array) based Flicker picture component generation method, which comprises: 1, determining the horizontal and vertical points of the Flicker picture lattice in a host computer, determining the picture vertex coordinates, and coloring the points; 2, having the host computer transmit the points, the vertex coordinates, the colors of the points, and the module resolution to a data analysis module for analysis; 3, having the data analysis module transmit the analyzed data to an image signal generator; 4, writing the colors into a RAM (Random Access Memory) of the image signal generator, with the pixel-point numbers of the picture lattice serving as addresses; 5, scanning the area corresponding to the picture inside the image signal generator and calculating the lattice address to which each pixel point in the scanned area maps; 6, reading the color values of the lattice pixel points, with the mapped lattice addresses serving as RAM read addresses, and coloring the pixel points. The method can generate complex logic pictures such as the Flicker picture by utilizing the FPGA.
Owner:WUHAN JINGCE ELECTRONICS GRP CO LTD
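Step 5 maps each scanned screen pixel to a lattice-cell index, which then serves as the RAM read address holding that cell's color. A minimal sketch of that mapping, with illustrative picture origin, size, and lattice dimensions:

```python
# Sketch of step 5: map a screen pixel to its lattice-cell index (the RAM
# read address). All parameters below are illustrative assumptions.

def lattice_address(x, y, x0, y0, width, height, cols, rows):
    """Return the lattice-cell index for pixel (x, y) inside a picture
    whose top-left corner is (x0, y0) and size is width x height."""
    col = (x - x0) * cols // width           # which lattice column
    row = (y - y0) * rows // height          # which lattice row
    return row * cols + col                  # linear RAM address
```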

Display controller capable of reducing cache memory and the frame adjusting method thereof

A display controller capable of reducing cache memory and a frame adjusting method thereof are provided. The display controller comprises a memory controller, a first memory, a second memory, and a frame control circuit. The memory controller reads part of the image data from a source layer to obtain first image data, and part of the image data from a target layer to obtain second image data. The first memory stores the first image data; the second memory stores the second image data. The frame control circuit processes the first image data to generate first processed image data, which is overlaid with the second image data to obtain second processed image data. If the second processed image data needs further processing, the display controller writes it to an external memory.
Owner:QUANTA COMPUTER INC
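The overlay step can be illustrated on one strip of pixels. The transparency sentinel and replace-unless-transparent semantics below are assumptions for the sketch; the patent does not specify the blending rule:

```python
# Sketch of the overlay step: processed source-layer pixels replace
# target-layer pixels except where marked transparent (assumed semantics).

TRANSPARENT = None  # illustrative sentinel for "no source pixel here"

def overlay(processed_first, second):
    """Overlay a processed source-layer strip onto a target-layer strip."""
    return [t if s is TRANSPARENT else s
            for s, t in zip(processed_first, second)]
```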

System and method for reduced cache mode

A system and method are described for dynamically changing the size of a computer memory such as level 2 cache as used in a graphics processing unit. In an embodiment, a relatively large cache memory can be implemented in a computing system so as to meet the needs of memory intensive applications. But where cache utilization is reduced, the capacity of the cache can be reduced. In this way, power consumption is reduced by powering down a portion of the cache.
Owner:NVIDIA CORP
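The abstract describes shrinking the cache when utilization drops to save power. A sketch of one possible resizing policy in terms of powered-on cache ways — the thresholds and way counts are illustrative assumptions, not NVIDIA's implementation:

```python
# Illustrative reduced-cache-mode policy: halve the powered cache ways when
# utilization is low, restore them when demand returns. Thresholds assumed.

def adjust_cache_ways(active_ways, utilization,
                      low=0.25, high=0.75, min_ways=1, max_ways=16):
    """Return the new number of powered-on cache ways."""
    if utilization < low and active_ways > min_ways:
        return active_ways // 2              # power down half the ways
    if utilization > high and active_ways < max_ways:
        return min(active_ways * 2, max_ways)
    return active_ways                       # utilization in normal band
```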

Memory system and operation method thereof

A memory system may include a first memory device including a first input / output buffer, a second memory device including a second input / output buffer, and a cache memory suitable for selectively and temporarily storing first and second data to be respectively programmed in the first and second memory devices. The first data is programmed to the first memory device in a first program section by being stored in the cache memory only during a first monopoly section of the first program section. The second data is programmed to the second memory device in a second program section by being stored in the cache memory only during a second monopoly section of the second program section. The first monopoly section and the second monopoly section are set not to overlap each other.
Owner:SK HYNIX INC
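The scheme rests on one invariant: each device monopolizes the shared cache only during its own section, and the two sections must not overlap, so a single cache can serve both program operations. A sketch of that overlap check on half-open time intervals (the interval representation is an assumption):

```python
# Sketch of the scheduling invariant: the two monopoly sections, modelled
# as half-open (start, end) intervals, must not overlap.

def sections_overlap(a, b):
    """True if half-open intervals a and b overlap."""
    return a[0] < b[1] and b[0] < a[1]
```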

Any-order checker board image assembly generating method based on FPGA

The invention discloses an any-order checker board image assembly generating method based on an FPGA, which comprises the steps that: (1) checker board image vertex coordinate information, color value information, and order information are sent to an image signal generator; (2) the horizontal coordinate points within the range of the checker board image are divided into a plurality of continuous horizontal blocks, each odd-numbered horizontal block being tagged with a first tag value and each even-numbered horizontal block with a second tag value, and likewise the vertical coordinate points within the range of the checker board image are divided into a plurality of continuous vertical blocks, each odd-numbered vertical block being tagged with the first tag value and each even-numbered vertical block with the second tag value; (3) each coordinate point within the range of the checker board image is scanned; if the horizontal tag value and the vertical tag value of the scanned point are identical, its color value is set to a first color value, and otherwise to a second color value. With this method, an any-order checker board image assembly, a complex logic image, is generated through the FPGA.
Owner:WUHAN JINGCE ELECTRONICS GRP CO LTD
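Steps (2) and (3) above amount to tagging blocks alternately along each axis and coloring a pixel by whether its two tags agree. A compact sketch, with block size and colors as illustrative assumptions:

```python
# Sketch of steps (2)-(3): tag horizontal/vertical blocks alternately and
# color a pixel by whether its two tags agree. Block size is assumed.

def checker_color(x, y, block_w, block_h, color1, color2):
    h_tag = (x // block_w) % 2               # odd/even horizontal block
    v_tag = (y // block_h) % 2               # odd/even vertical block
    return color1 if h_tag == v_tag else color2
```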

Method and device for dividing caches

An embodiment of the invention discloses a method and device for partitioning caches, in the technical field of computers. The method includes: determining the virtual machine management operation state (VMS) of a first virtual machine when a cache access by the first virtual machine misses in a cache group; and, when the VMS of the first virtual machine is a first state, replacing the least recently used cache data in that cache group that belongs to a second virtual machine. The method is applicable to virtualized environments and can increase the performance of virtual machines during virtual machine starting and copying.
Owner:HUAWEI TECH CO LTD +1
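The replacement rule — on a miss for one VM, evict the least-recently-used line belonging to the other VM — can be sketched as a victim-selection helper. The cache-line representation below is an assumption for illustration:

```python
# Sketch of the replacement rule: on a miss, evict the least-recently-used
# line that belongs to the other VM. Line layout (vm_id, tag, last_used)
# is an assumed representation, not the patented data structure.

def victim_for_miss(cache_set, other_vm):
    """Return the index of the LRU line owned by other_vm, or None."""
    candidates = [(line[2], i) for i, line in enumerate(cache_set)
                  if line[0] == other_vm]
    if not candidates:
        return None                          # fall back to a global policy
    return min(candidates)[1]                # oldest line of the other VM
```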

Method for generating any triangle filling picture assembly based on FPGA

The invention discloses a method for generating any triangle filling picture assembly based on an FPGA. The method comprises the steps that: 1, the coordinates and color values of the three vertexes of a triangle are obtained; 2, a first RAM and a second RAM are generated inside an image signal generator; 3, the two RAMs are initialized; 4, the horizontal and vertical coordinates of each pixel on the three edges of the triangle filling picture are generated; 5, the minimum horizontal coordinate corresponding to each vertical coordinate among the effective pixels of the triangle filling picture is obtained; 6, the maximum horizontal coordinate corresponding to each vertical coordinate among the effective pixels of the triangle filling picture is obtained; 7, the coordinates within the range of the triangle's circumscribed rectangle are scanned, whether each pixel is located in the triangle is judged, and the color values are given to the pixels in the triangle. The method can generate complex logic pictures such as any triangle filling picture assembly through the FPGA.
Owner:WUHAN JINGCE ELECTRONICS GRP CO LTD
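Step 7 — scan the circumscribed (bounding) rectangle and color pixels that fall inside the triangle — can be sketched with edge functions. This is a software illustration of the inside test, not the RAM-based FPGA datapath; the edge-function approach is a standard rasterization technique assumed here:

```python
# Sketch of step 7: scan the triangle's bounding box and color pixels whose
# three edge-function signs agree (i.e. the pixel is inside the triangle).

def fill_triangle(v0, v1, v2, color):
    def edge(a, b, p):                       # twice the signed area of (a, b, p)
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    xs = [v0[0], v1[0], v2[0]]
    ys = [v0[1], v1[1], v2[1]]
    filled = {}
    for y in range(min(ys), max(ys) + 1):    # scan the circumscribed rectangle
        for x in range(min(xs), max(xs) + 1):
            w0 = edge(v1, v2, (x, y))
            w1 = edge(v2, v0, (x, y))
            w2 = edge(v0, v1, (x, y))
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                filled[(x, y)] = color       # pixel is inside (or on an edge)
    return filled
```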

LLR processing method and receiving equipment

The invention provides an LLR processing method and receiving equipment. In the embodiments of the invention, only the data of one subframe needs to be buffered before LLR processing, which saves buffer memory and reduces the chip cost of the receiving equipment. In addition, the power consumption and processing delay of the receiving equipment can also be reduced.
Owner:HONOR DEVICE CO LTD

Display controller capable of reducing high speed buffer storage usage and frame adjusting method thereof

A display controller that reduces the use of high speed buffer storage is provided. A storage controller fetches image data from a source frame and a destination frame to obtain first and second image data respectively; a second storage stores the second image data; a frame management circuit processes the first image data to generate processed first image data, which is superposed with the second image data in the second storage to form processed second image data; if the processed second image data needs to be reprocessed, the display controller writes it back to external storage.
Owner:QUANTA COMPUTER INC