58 results about "Cache invalidation" patented technology

Cache invalidation is a process in a computer system whereby entries in a cache are replaced or removed. It can be done explicitly, as part of a cache coherence protocol. In such a case, a processor changes a memory location and then invalidates the cached values of that memory location across the rest of the computer system.
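
In code, the explicit case looks roughly like the sketch below: a writer updates memory and invalidates every other cached copy of that location, so later reads re-fetch the new value. This is a minimal Python illustration with made-up names, not any particular coherence protocol.

    # Minimal sketch of explicit cache invalidation across multiple caches.
    # All names are illustrative.

    class Cache:
        def __init__(self):
            self.lines = {}                     # address -> cached value

        def load(self, memory, addr):
            if addr not in self.lines:          # miss: fetch from memory
                self.lines[addr] = memory[addr]
            return self.lines[addr]

        def invalidate(self, addr):
            self.lines.pop(addr, None)          # drop the stale copy, if any

    def coherent_write(memory, caches, writer, addr, value):
        memory[addr] = value
        writer.lines[addr] = value              # writer keeps the new value
        for cache in caches:
            if cache is not writer:
                cache.invalidate(addr)          # others must re-fetch later

    memory = {0x100: 1}
    c0, c1 = Cache(), Cache()
    c1.load(memory, 0x100)                      # c1 caches the old value 1
    coherent_write(memory, [c0, c1], c0, 0x100, 2)
    assert c1.load(memory, 0x100) == 2          # stale copy was invalidated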

Method and system for limiting the use of user-specific software features

A server architecture for a digital rights management system that distributes and protects rights in content. The server architecture includes a retail site which sells content items to consumers, a fulfillment site which provides to consumers the content items sold by the retail site, and an activation site which enables consumer reading devices to use content items having an enhanced level of copy protection. Each retail site is equipped with a URL encryption object, which encrypts, according to a secret symmetric key shared between the retail site and the fulfillment site, information that is needed by the fulfillment site to process an order for content sold by the retail site. Upon selling a content item, the retail site transmits to the purchaser a web page having a link to a URL comprising the address of the fulfillment site and a parameter having the encrypted information. Upon following the link, the fulfillment site downloads the ordered content to the consumer, preparing the content if necessary in accordance with the type of security to be carried with the content. The fulfillment site includes an asynchronous fulfillment pipeline which logs information about processed transactions using a store-and-forward messaging service. The fulfillment site may be implemented as several server devices, each having a cache which stores frequently downloaded content items, in which case the asynchronous fulfillment pipeline may also be used to invalidate the cache if a change is made at one server that affects the cached content items. An activation site provides an activation certificate and a secure repository executable to consumer content-rendering devices which enables those content rendering devices to render content having an enhanced level of copy-resistance. The activation site “activates” client-reading devices in a way that binds them to a persona, and limits the number of devices that may be activated for a particular persona, or the rate at which such devices may be activated for a particular persona.
Owner:MICROSOFT TECH LICENSING LLC
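
The URL-encryption step lends itself to a short sketch. The patent does not name a cipher, so the Python below uses Fernet from the third-party cryptography package as a stand-in for the shared secret symmetric key; the URL, parameter, and field names are all hypothetical.

    import json
    from urllib.parse import urlencode, parse_qs, urlparse

    from cryptography.fernet import Fernet   # third-party: pip install cryptography

    shared_key = Fernet.generate_key()       # secret shared by retail + fulfillment

    def make_fulfillment_url(order):         # retail side: sell, then emit the link
        token = Fernet(shared_key).encrypt(json.dumps(order).encode()).decode()
        return "https://fulfillment.example.com/get?" + urlencode({"o": token})

    def process_fulfillment_request(url):    # fulfillment side: decrypt the order
        token = parse_qs(urlparse(url).query)["o"][0]
        return json.loads(Fernet(shared_key).decrypt(token.encode()))

    url = make_fulfillment_url({"content_id": 42, "buyer": "alice"})
    print(process_fulfillment_request(url))  # {'content_id': 42, 'buyer': 'alice'}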

Storage area network file system

A shared storage distributed file system is presented that provides applications with transparent access to a storage area network (SAN) attached storage device. This is accomplished by providing clients read access to the devices over the SAN and by requiring most write activity to be serialized through a network attached storage (NAS) server. Both the clients and the NAS server are connected to the SAN-attached device over the SAN. Direct read access to the SAN attached device is provided through a local file system on the client. Write access is provided through a remote file system on the client that utilizes the NAS server. A supplemental read path is provided through the NAS server for those circumstances where the local file system is unable to provide valid data reads. Consistency is maintained by comparing modification times in the local and remote file systems. Since writes occur over the remote file systems, the consistency mechanism is capable of flushing data caches in the remote file system, and invalidating metadata and real-data caches in the local file system. It is possible to utilize unmodified local and remote file systems in the present invention, by layering over the local and remote file systems a new file system. This new file system need only be installed at each client, allowing the NAS server file systems to operate unmodified. Alternatively, the new file system can be combined with the local file system.
Owner:DATAPLOW
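
A rough Python model of the consistency mechanism, with illustrative names and a logical clock standing in for real modification times: writes go through the remote (NAS) side, so when local and remote modification times disagree, the client flushes the remote data cache and invalidates its local caches before reading over the SAN.

    import itertools

    _clock = itertools.count(1)              # logical clock stands in for mtimes

    class RemoteFS:                          # stand-in for the NAS write path
        def __init__(self):
            self.files, self.mtimes, self.dirty = {}, {}, set()

        def write(self, path, data):
            self.files[path] = data
            self.mtimes[path] = next(_clock)
            self.dirty.add(path)             # buffered in the remote data cache

        def flush_data_cache(self, path):
            self.dirty.discard(path)         # push buffered writes to the device

    class LocalFS:                           # stand-in for direct SAN read access
        def __init__(self, remote):
            self.remote = remote
            self.data_cache, self.meta_cache = {}, {}

        def read(self, path):
            mtime = self.remote.mtimes[path]
            if self.meta_cache.get(path) != mtime:    # modification times differ:
                self.remote.flush_data_cache(path)    # flush remote data cache,
                self.data_cache.pop(path, None)       # invalidate local real-data
                self.meta_cache[path] = mtime         # and refresh the metadata
            if path not in self.data_cache:
                self.data_cache[path] = self.remote.files[path]   # read over SAN
            return self.data_cache[path]

    remote = RemoteFS()
    local = LocalFS(remote)
    remote.write("/vol/a", b"v1")
    assert local.read("/vol/a") == b"v1"
    remote.write("/vol/a", b"v2")            # all writes go through the NAS side
    assert local.read("/vol/a") == b"v2"     # stale local cache was invalidated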

Method and device for determining tasks to be migrated based on cache perception

CN103729248A (active). Stated effects: reduces the probability of resource contention; improves performance. Topics: resource allocation, operating systems, cache invalidation.
The invention discloses a method for determining tasks to be migrated based on cache perception. The method comprises the following steps: a source processor core and a target processor core are determined according to the load on each processor core; the number of cache misses and the number of executed instructions of each task on the source and target processor cores are monitored, so as to obtain each task's cache misses per thousand instructions on those cores; the average number of cache misses per thousand instructions on the source processor core and on the target processor core is obtained; and the tasks to be migrated from the source processor core to the target processor core are determined according to those two averages. With this method the operating system can perceive program behavior, so more suitable tasks are selected for migration. The invention further discloses a device for determining tasks to be migrated based on cache perception.
Owner:HUAWEI TECH CO LTD +1
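
A small Python sketch of the metric involved: MPKI (misses per thousand, i.e. kilo, instructions) is computed per task from the monitored counters. The abstract does not spell out the selection rule, so the rule below, picking the source task whose MPKI is closest to the target core's average, is just one plausible reading.

    # Cache-aware migration-candidate selection (illustrative selection rule).

    def mpki(misses, instructions):
        return 1000.0 * misses / instructions

    def avg_mpki(tasks):
        return sum(mpki(m, i) for m, i in tasks.values()) / len(tasks)

    def pick_migration_task(source_tasks, target_tasks):
        target_avg = avg_mpki(target_tasks)
        return min(source_tasks,
                   key=lambda t: abs(mpki(*source_tasks[t]) - target_avg))

    # task -> (cache misses, executed instructions), from hardware counters
    source = {"taskA": (9000, 1_000_000), "taskB": (500, 1_000_000)}
    target = {"taskC": (800, 1_000_000)}
    print(pick_migration_task(source, target))   # 'taskB' (MPKI 0.5 vs avg 0.8)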

Translation lookaside buffer entry systems and methods

The presented systems and methods can facilitate efficient information storage and tracking operations, including translation lookaside buffer (TLB) operations. In one embodiment, the systems and methods effectively allow the caching of invalid entries (with the attendant benefits regarding, e.g., power, resource usage, and stalls) while maintaining the illusion that the TLBs do not in fact cache invalid entries (i.e., they appear to comply with architectural rules). In one exemplary implementation, an “unreal” TLB entry effectively serves as a hint that the linear address in question currently has no valid mapping: speculative operations that hit an unreal entry are discarded, while architectural operations that hit an unreal entry discard the entry and perform a normal page walk, either obtaining a valid entry or raising an architectural fault.
Owner:NVIDIA CORP
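
The behavior of an “unreal” entry can be sketched as follows (Python, with an illustrative page-table model and names): a page walk that finds no valid mapping caches an unreal hint; speculative operations that hit the hint are discarded, while architectural operations drop it and re-walk.

    UNREAL = object()                        # marker: address has no valid mapping

    class TLB:
        def __init__(self, page_table):
            self.entries, self.page_table = {}, page_table

        def page_walk(self, vaddr, speculative=False):
            phys = self.page_table.get(vaddr)         # None = no valid mapping
            self.entries[vaddr] = phys if phys is not None else UNREAL
            if phys is None:
                if speculative:
                    return None              # cache the unreal hint, no fault
                raise RuntimeError("architectural page fault")
            return phys

        def translate(self, vaddr, speculative):
            entry = self.entries.get(vaddr)
            if entry is UNREAL:
                if speculative:
                    return None              # speculative op is just discarded
                del self.entries[vaddr]      # architectural op: drop the entry
                return self.page_walk(vaddr) # ...and perform a normal page walk
            if entry is not None:
                return entry                 # ordinary valid hit
            return self.page_walk(vaddr, speculative)  # miss: walk the tables

    pt = {0x1000: 0x8000}                    # 0x2000 is deliberately unmapped
    tlb = TLB(pt)
    assert tlb.translate(0x1000, speculative=True) == 0x8000
    assert tlb.translate(0x2000, speculative=True) is None     # unreal hint cached
    pt[0x2000] = 0x9000                      # a valid mapping appears later
    assert tlb.translate(0x2000, speculative=False) == 0x9000  # entry replaced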

System and method for limited fanout daisy chaining of cache invalidation requests in a shared-memory multiprocessor system

A protocol engine is for use in each node of a computer system having a plurality of nodes. Each node includes an interface to a local memory subsystem that stores memory lines of information, a directory, and a memory cache. The directory includes an entry associated with a memory line of information stored in the local memory subsystem. The directory entry includes an identification field for identifying sharer nodes that potentially cache the memory line of information. The identification field has a plurality of bits at associated positions within the identification field. Each respective bit of the identification field is associated with one or more nodes. The protocol engine furthermore sets each bit in the identification field for which the memory line is cached in at least one of the associated nodes. In response to a request for exclusive ownership of a memory line, the protocol engine sends an initial invalidation request to no more than a first predefined number of the nodes associated with set bits in the identification field of the directory entry associated with the memory line.
Owner:SK HYNIX INC
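
A Python sketch of the limited-fanout idea: the directory entry's bit vector names the potential sharers, at most a predefined number of them receive the initial invalidation, and each initial target carries a slice of the remaining sharers to forward to. The message format and the way the chain is split are illustrative, not taken from the patent.

    FANOUT = 2                              # first predefined number of targets

    def sharers_from_bits(bitvec, num_nodes):
        return [n for n in range(num_nodes) if bitvec & (1 << n)]

    def send_invalidations(bitvec, num_nodes, requester):
        sharers = [n for n in sharers_from_bits(bitvec, num_nodes)
                   if n != requester]
        first, rest = sharers[:FANOUT], sharers[FANOUT:]
        messages = []
        for i, node in enumerate(first):
            # Each initial target inherits an equal slice of the leftover
            # sharers and forwards the invalidation down its daisy chain.
            chain = rest[i::len(first)]
            messages.append({"to": node, "forward_to": chain})
        return messages

    # Nodes 0..5 share the line (bits set); node 0 wants exclusive ownership.
    for msg in send_invalidations(0b111111, num_nodes=6, requester=0):
        print(msg)
    # {'to': 1, 'forward_to': [3, 5]}
    # {'to': 2, 'forward_to': [4]}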

Processor cache write-miss handling method based on memory access history learning

A processor cache write-miss handling method based on memory access history learning includes the following steps: (1) a cache write-miss preprocessing step; (2) a cache write-allocation policy setting step, in which each set is assigned either an immediate write-allocation or a delayed write-allocation policy; (3) for a set under immediate write-allocation, the memory block corresponding to the missing cache line is read back immediately, the fetched data is merged with the write data to form a complete cache block, and the complete block is written into the corresponding cache line; for a set under delayed write-allocation, the write data of the write misses assigned to that set is collected, and when the collected write data covers an entire cache block, it is written directly into the corresponding cache line. The invention can eliminate a large number of unnecessary reads of cache blocks from memory during write-miss handling, thereby reducing wasted memory bandwidth and improving application performance.
Owner:LOONGSON TECH CORP
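
The two per-set policies can be sketched as follows (Python, with illustrative block and set sizes): an immediate-allocation set fetches the missing block and merges the write data at once, while a delayed-allocation set collects write data and only installs the block, with no memory read at all, once every byte is covered.

    BLOCK_SIZE = 4                            # bytes per cache block
    NUM_SETS = 2                              # cache sets ("groups" above)

    class WriteMissHandler:
        def __init__(self, memory, delayed_sets):
            self.memory = memory              # addr -> byte value
            self.cache = {}                   # block base addr -> bytearray
            self.pending = {}                 # block -> {offset: byte} collected
            self.delayed_sets = delayed_sets  # sets using delayed allocation

        def write_miss(self, addr, value):
            block, offset = addr - addr % BLOCK_SIZE, addr % BLOCK_SIZE
            if (block // BLOCK_SIZE) % NUM_SETS in self.delayed_sets:
                partial = self.pending.setdefault(block, {})
                partial[offset] = value       # collect; no memory read yet
                if len(partial) == BLOCK_SIZE:          # whole block covered:
                    self.cache[block] = bytearray(
                        partial[o] for o in range(BLOCK_SIZE))
                    del self.pending[block]   # installed with no fetch at all
            else:
                data = bytearray(self.memory.get(block + o, 0)
                                 for o in range(BLOCK_SIZE))  # fetch the block
                data[offset] = value          # merge the write data into it
                self.cache[block] = data      # complete block into the cache

    h = WriteMissHandler(memory={0: 1, 1: 2, 2: 3, 3: 4}, delayed_sets={1})
    h.write_miss(2, 9)                        # set 0: immediate fetch-and-merge
    print(h.cache[0])                         # bytearray(b'\x01\x02\t\x04')
    for off, v in enumerate(b"abcd"):
        h.write_miss(4 + off, v)              # set 1: collected, then written
    print(h.cache[4])                         # bytearray(b'abcd')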

Hardware-based translation lookaside buffer (TLB) invalidation

Hardware-based translation lookaside buffer (TLB) invalidation techniques are disclosed. A host system is configured to exchange data with a peripheral component interconnect express (PCIe) endpoint (EP). A memory management unit (MMU), which is a hardware element, is included in the host system to provide address translation according to at least one TLB. In one aspect, the MMU is configured to invalidate the at least one TLB in response to receiving at least one TLB invalidation command from the PCIe EP. In another aspect, the PCIe EP is configured to determine that the at least one TLB needs to be invalidated and provide the TLB invalidation command to invalidate the at least one TLB. By implementing hardware-based TLB invalidation in the host system, it is possible to reduce TLB invalidation delay, thus leading to increased data throughput, reduced power consumption, and improved user experience.
Owner:QUALCOMM INC
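
A Python sketch of the command flow, with an illustrative command format and names: the endpoint decides a cached translation is stale and issues an invalidation command, and the host MMU drops the matching TLB entries directly, with no host software in the loop.

    class HostMMU:
        def __init__(self, page_table):
            self.page_table = page_table          # virt -> phys mappings
            self.tlb = {}                         # cached translations

        def translate(self, vaddr):
            if vaddr not in self.tlb:
                self.tlb[vaddr] = self.page_table[vaddr]
            return self.tlb[vaddr]

        def on_invalidate_command(self, vaddrs):
            for v in vaddrs:                      # hardware-side invalidation
                self.tlb.pop(v, None)

    class PcieEndpoint:
        def __init__(self, mmu):
            self.mmu = mmu

        def remap_buffer(self, vaddr, new_phys):
            # The EP determines the cached translation is now stale and
            # issues the TLB invalidation command over the PCIe link.
            self.mmu.page_table[vaddr] = new_phys
            self.mmu.on_invalidate_command([vaddr])

    mmu = HostMMU({0x1000: 0xA000})
    ep = PcieEndpoint(mmu)
    assert mmu.translate(0x1000) == 0xA000        # translation now cached
    ep.remap_buffer(0x1000, 0xB000)               # EP triggers invalidation
    assert mmu.translate(0x1000) == 0xB000        # refreshed after invalidation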