
36 results about How to "Relieve memory pressure" patented technology

System for resolving intense memory resource contention in a big data processing system

The invention discloses a system for resolving intense contention for memory resources in a big data processing system. A memory information feedback module monitors the memory usage of running thread tasks, converts the collected memory information, and feeds it back to an information sampling and analysis module. The information sampling and analysis module dynamically controls the number of information samples taken from each worker node, analyzes the data once the assigned number of samples has been collected, and calculates the optimal CPU-to-memory ratio for the current worker node. A decision-making and task-distribution module then decides whether to dispatch new tasks to a worker node for computation, based on the analysis results and the node's current task information, thereby effectively constraining the relationship between CPU and memory usage. With this system, a memory-aware task distribution mechanism can be implemented on a general-purpose big data platform, the I/O overhead incurred when data spills to disk under intense memory contention can be reduced, and the overall performance of the system can be effectively improved.
Owner:HUAZHONG UNIV OF SCI & TECH
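The dispatch decision described above can be sketched in a few lines of Python. This is a minimal illustration, not the patented implementation; the sampling window, the safety headroom, and all names are assumptions.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class NodeStats:
    """Sampled resource usage for one worker node (hypothetical schema)."""
    cpu_samples: list = field(default_factory=list)   # CPU cores in use
    mem_samples: list = field(default_factory=list)   # memory in use, GB

def optimal_cpu_mem_ratio(stats: NodeStats) -> float:
    """Estimate the CPU-to-memory ratio observed over the sampling window."""
    return mean(stats.cpu_samples) / mean(stats.mem_samples)

def should_dispatch(stats: NodeStats, task_mem_gb: float,
                    mem_capacity_gb: float, headroom: float = 0.9) -> bool:
    """Admit a new task only if the node's projected memory use stays under
    a safety threshold, so tasks do not spill intermediate data to disk."""
    projected = mean(stats.mem_samples) + task_mem_gb
    return projected <= headroom * mem_capacity_gb
```

A scheduler built this way refuses tasks whose estimated footprint would push a node into spill territory, which is the mechanism the abstract credits with reducing I/O overhead.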

Cross-processor face docking method and system for distributed unstructured grids

The invention relates to the technical field of grid processing, and discloses a cross-processor face docking method and system for distributed unstructured grids. The docking method uses a two-level index structure to identify, in parallel, the docking relationship between the two sides of a grid partition boundary, and performs an equivalence test on any two docking face elements by comparing first their centroid coordinates and then their normalized grid-point sequences. The method comprises the following steps: S1, importing the basic geometric data of the distributed unstructured grid in parallel; S2, constructing dual communication lists between sub-regions across processors; S3, constructing the discrete surface-grid structure at the docking boundary in parallel; S4, constructing a forked-tree structure for the query set family on each processor; S5, querying the pairing relationships between docking face elements across processors; and S6, exporting the docking information of the distributed unstructured grid in parallel. The method addresses problems in the prior art such as low processing efficiency and poor data-handling capability for large-scale unstructured grids.
Owner:CALCULATION AERODYNAMICS INST CHINA AERODYNAMICS RES & DEV CENT
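The two-stage equivalence test on face elements can be sketched as follows. This is an illustrative Python reduction under simplifying assumptions (faces are vertex-id tuples, the cheap centroid filter runs before the exact sequence check); it is not the patented index structure.

```python
def normalize(points):
    """Rotate a face's vertex-id sequence to start at the smallest id,
    so two descriptions of the same face become directly comparable."""
    i = points.index(min(points))
    return tuple(points[i:] + points[:i])

def faces_match(c1, c2, pts1, pts2, tol=1e-9):
    """Two-stage equivalence test: centroid coordinates first (a cheap
    filter), then the normalized grid-point sequence (the exact check),
    accepting either vertex orientation."""
    if any(abs(a - b) > tol for a, b in zip(c1, c2)):
        return False
    return (normalize(list(pts1)) == normalize(list(pts2))
            or normalize(list(pts1)) == normalize(list(pts2)[::-1]))
```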

Scheduling method and system for relieving memory pressure in distributed data processing systems

The invention discloses a scheduling method for relieving memory pressure in distributed data processing systems. The scheduling method comprises the following steps: analyzing memory usage patterns from the characteristics of the operations a user programming interface performs on key-value pairs, and establishing a memory usage model for each user programming interface in the data processing system; inferring the memory usage model of each task from the sequence in which the task calls the programming interfaces; distinguishing the different models by their memory-occupation growth rates; and estimating each task's influence on memory pressure from its memory usage model and the size of the data it is currently processing, then suspending high-influence tasks until the low-influence tasks have finished or the memory pressure is relieved. By monitoring and analyzing each task's influence on memory pressure in real time during execution, the method improves the scalability of service systems.
Owner:HUAZHONG UNIV OF SCI & TECH
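The estimate-and-suspend policy above can be sketched as a greedy scheduler. This is a deliberately simplified Python illustration: the linear growth-rate model, the task tuples, and the budget are all assumptions, not the patented models.

```python
def memory_impact(growth_rate_mb_per_record, records):
    """Estimated memory footprint of a task from its usage model: a
    per-record growth rate (inferred from the operators the task applies
    to key-value pairs) times the amount of data it processes."""
    return growth_rate_mb_per_record * records

def schedule(tasks, mem_budget_mb):
    """Greedy sketch: admit low-impact tasks first; suspend tasks whose
    estimated footprint would push the node over its memory budget.
    Each task is a (name, growth_rate_mb_per_record, records) tuple."""
    running, suspended, used = [], [], 0.0
    for name, rate, n in sorted(tasks, key=lambda t: memory_impact(t[1], t[2])):
        need = memory_impact(rate, n)
        if used + need <= mem_budget_mb:
            running.append(name)
            used += need
        else:
            suspended.append(name)
    return running, suspended
```

Suspended tasks would be retried once the running set drains, mirroring the abstract's "hang up until memory pressure is relieved" behavior.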

Electrical equipment monitoring method and device

The invention provides an electrical equipment monitoring method and a corresponding monitoring device. The method comprises the following steps: initialization; parameter setting; establishing an electrical business analysis model; controlling fan output power in graded levels; acquiring measuring-point information from power transformation equipment in real time; transmitting the acquired measuring-point information to a real-time database for packaging and storage; submitting query, accounting, analysis and/or processing requests to the monitoring device according to user demand; and displaying the results to the user in real time after visualization. The method is real-time and effective; the equipment offers a high level of integration and availability, good extensibility, stable system performance, and simple, convenient operation.
Owner:SHANDONG LUNENG SOFTWARE TECH

Model publishing method, device and equipment, and storage medium

The invention discloses a model publishing method. The method provides a combined local-plus-distributed model release scheme: a complete deep learning model with a large footprint is split into a dense part and a sparse part. The sparse part, which occupies most of the space, is deployed on a distributed cluster, while the small dense part is deployed on the local computing cluster, and the sparse part is pulled in a distributed manner. This reduces the model's memory footprint, relieves the memory pressure on the local computing cluster when the model is loaded, and greatly reduces the memory consumption of the computing nodes. The invention further provides a model publishing device, equipment, and a readable storage medium with the same beneficial effects.
Owner:SUZHOU LANGCHAO INTELLIGENT TECH CO LTD
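The dense/sparse partition can be illustrated with a short Python sketch. The flat-list parameter representation, the sparsity threshold, and the parameter names are assumptions for illustration; the patent does not specify this split criterion.

```python
def zero_fraction(tensor):
    """Fraction of zero entries in a flat parameter list."""
    return sum(1 for x in tensor if x == 0) / len(tensor)

def split_model(params, sparsity_threshold=0.5):
    """Partition model parameters by sparsity: tensors that are mostly
    zeros (e.g. huge embedding tables) are published to the distributed
    cluster, while dense tensors stay on the local computing cluster."""
    dense, sparse = {}, {}
    for name, tensor in params.items():
        target = sparse if zero_fraction(tensor) >= sparsity_threshold else dense
        target[name] = tensor
    return dense, sparse
```

At load time the local cluster would hold only the `dense` dictionary and fetch entries of `sparse` on demand, which is where the memory saving comes from.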

File system, file storage method, storage device and computer readable medium

The invention provides a file system, together with a file storage method, a file storage device, and a computer-readable medium for the file system. The file system uses at least two levels of indexes; the first level stores the relationship between a file's category and the block information of the file system. The file storage method comprises the following operations: obtaining the category from the category information; segmenting the file into a plurality of fragments; querying the first-level index for the block information associated with the category; when the corresponding block has no remaining space, or its remaining space is smaller than the fragment, inserting into the first-level index a relationship between a continuation category (representing the same category as the file's category) and new block information, and storing the fragment in the new block; storing the fragment in the existing block when its remaining space is greater than or equal to the fragment size; and storing the relationship between the block information and the file's information in the indexes other than the first level.
Owner:BEIJING JINGDONG SHANGKE INFORMATION TECH CO LTD +1
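The two-level index and the block-chaining rule can be sketched as follows. This Python model is an illustration only: block capacity in bytes, integer block ids, and the dictionary layout are assumptions, not the patented on-disk format.

```python
class TwoLevelIndex:
    """Sketch of the two-level index: level 1 maps a file category to the
    blocks holding its fragments; level 2 maps a block id to the files
    stored there. A new block is chained when the current one is full."""
    def __init__(self, block_size):
        self.block_size = block_size
        self.level1 = {}          # category -> [[block_id, free_space], ...]
        self.level2 = {}          # block_id -> [(file_name, fragment_index)]
        self._next_block = 0

    def _new_block(self, category):
        bid = self._next_block
        self._next_block += 1
        self.level1.setdefault(category, []).append([bid, self.block_size])
        self.level2[bid] = []
        return self.level1[category][-1]

    def store(self, category, file_name, fragments):
        for i, frag in enumerate(fragments):
            blocks = self.level1.get(category) or [self._new_block(category)]
            block = blocks[-1]
            if block[1] < len(frag):          # no room: chain a new block
                block = self._new_block(category)
            block[1] -= len(frag)
            self.level2[block[0]].append((file_name, i))
```

Grouping fragments of the same category into shared blocks is what lets the first-level lookup resolve a category to its storage locations in one step.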

Method and system for achieving remote control function, server and remote control terminal

The invention provides a method and system for implementing a remote control function, along with a server and a remote control terminal. The method comprises the steps of: when remote-control-code request information containing controlled-product data and function data is received, comparing that information against the controlled-product data and function data in stored remote control instruction records, and determining the instruction record that matches the request; and sending the instruction code data from the matched record. With this scheme, the instruction code data for the requested function of a controlled product can be sent on demand, instead of pushing all instruction codes for the product to the user's remote control terminal. While still meeting the user's needs, this occupies as little of the terminal's storage space as possible; moreover, the terminal only has to process the instruction code data relevant to the user's request, which effectively improves processing efficiency.
Owner:LETV HLDG BEIJING CO LTD +1
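The server-side matching step reduces to a lookup over stored instruction records. A minimal Python sketch, with hypothetical field names (`product`, `function`, `code`) standing in for the controlled-product data, function data, and instruction code of the abstract:

```python
def match_instruction(request, instruction_store):
    """Return only the instruction code matching the requested product and
    function, rather than shipping every code for the product to the
    terminal. Field names are illustrative, not from the patent."""
    for record in instruction_store:
        if (record["product"] == request["product"]
                and record["function"] == request["function"]):
            return record["code"]
    return None
```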

Live broadcast room admission flow data processing method and device, equipment and storage medium

The invention provides a method, device, equipment, and storage medium for processing live broadcast room admission flow data. The method comprises the steps of: filtering the data of all live broadcast rooms during timed synchronization to obtain the admission flow data of the active rooms; storing the admission flow data in Redis with a preset time-to-live and updating it in real time; determining whether the active room corresponding to the admission flow data already has an ordered queue sorted by time window; if so, obtaining the latest admission flow data, placing it at the head of the ordered queue, and updating the queue in real time; and if not, obtaining the latest admission flow data and generating an ordered queue in descending order of remaining time-to-live. By filtering the full set of rooms down to the active ones, and using Redis to store and update each active room's ordered queue, the method relieves the memory pressure caused by massive admission flow data and ensures that the data is processed in real time.
Owner:GUANGZHOU HUADUO NETWORK TECH
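The per-room ordered queue with expiry can be modeled in memory; a production version would use a Redis sorted set with key TTLs, but the logic is the same. This is a sketch under that assumption, with the TTL value and record shape chosen for illustration.

```python
import time
from collections import deque

class RoomFlowStore:
    """In-memory sketch of the described scheme. Entries expire after
    `ttl` seconds; the newest admission-flow record sits at the head of
    each room's queue, so the tail holds the oldest (soonest-to-expire)."""
    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self.queues = {}          # room_id -> deque of (timestamp, record)

    def add(self, room_id, record, now=None):
        now = time.time() if now is None else now
        q = self.queues.setdefault(room_id, deque())
        q.appendleft((now, record))     # newest at the head
        self._expire(q, now)

    def _expire(self, q, now):
        while q and now - q[-1][0] > self.ttl:
            q.pop()                     # drop records past their TTL

    def latest(self, room_id):
        q = self.queues.get(room_id)
        return q[0][1] if q else None
```

Expiring old records on every insert is what keeps memory bounded even when admission flow data arrives in bulk.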

Real-time positioning method and device applied to automatic driving

Embodiments of the invention disclose a real-time positioning method and device for automatic driving. The method comprises the steps of: obtaining inertial measurement sensor data from an inertial measurement unit built into the vehicle and recording the time point at which it was obtained; when sensor data other than the inertial measurement data is also obtained at that time point, positioning the vehicle using both the inertial measurement data and the other sensor data; and when no other sensor data is obtained at that time point, positioning the vehicle using the inertial measurement data alone. With this method and device, the time point at which the inertial measurement data is received serves as the time reference, and whatever sensor data is available at that time point is fused for positioning, thereby improving vehicle positioning precision.
Owner:BEIJING MOMENTA TECH CO LTD
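The selection logic (fuse when other sensors report at the IMU timestamp, otherwise fall back to the IMU alone) can be sketched briefly. The unweighted average standing in for sensor fusion is a placeholder assumption; a real system would use a filter such as an EKF.

```python
def locate(imu_reading, other_readings):
    """Sketch of the selection logic: when other sensors (e.g. GNSS,
    lidar) report at the IMU timestamp, fuse all readings; otherwise
    dead-reckon from the IMU alone. Readings are position tuples here,
    and the fusion is a placeholder elementwise average."""
    if other_readings:
        return [sum(vals) / len(vals)
                for vals in zip(imu_reading, *other_readings)]
    return list(imu_reading)
```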

GPU-based high-performance graph mining method and system

The invention discloses a GPU-based high-performance graph mining method and system. The method adopts a cooperative GPU-CPU computing architecture: graph mining operations are carried out by many GPU threads to improve search efficiency, while the large number of intermediate subgraphs generated during mining are stored in CPU memory. The system architecture is organized around a Grow-Cull execution model. During operation, a portion of the subgraphs is copied to the GPU each time to execute the Grow operation, which examines the relationships between subgraphs and vertices/edges and copies the generated candidate subgraphs back to CPU memory. To check the legality of the candidate subgraphs, CPU multithreading executes the Cull operation, and the qualifying subgraphs are stored in CPU main memory. The system repeats this iterative process. Borrowing the idea of a pipeline, CPU computation and GPU computation execute simultaneously during each iteration, and data is copied in both directions at the same time, hiding computation and transfer latency.
Owner:WESTERN INST OF ADVANCED TECH, INST OF COMPUTING TECH, CHINESE ACAD OF SCI
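The Grow-Cull iteration can be sketched sequentially in Python (the real system runs Grow on the GPU and Cull on CPU threads concurrently; that parallelism and the legality predicate are abstracted away here as assumptions).

```python
def grow(subgraphs, graph):
    """Grow: extend each subgraph by one adjacent vertex, producing
    candidate subgraphs (done by GPU threads in the real system)."""
    out = []
    for sg in subgraphs:
        for v in sg:
            for nb in graph.get(v, ()):
                if nb not in sg:
                    out.append(tuple(sorted(sg + (nb,))))
    return out

def cull(candidates, is_legal):
    """Cull: filter candidates for legality and drop duplicates (done by
    CPU multithreading in the real system)."""
    return sorted(set(c for c in candidates if is_legal(c)))

def mine(graph, max_size):
    """Iterate Grow-Cull until subgraphs reach max_size vertices.
    `graph` is an adjacency dict; all subgraphs are accepted as legal
    here for illustration."""
    frontier = [(v,) for v in graph]
    for _ in range(max_size - 1):
        frontier = cull(grow(frontier, graph), lambda c: True)
    return frontier
```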
