
234 results about How to "Reduce access pressure" patented technology

Distributed cache server system and application method thereof, cache clients and cache server terminals

Provided in the invention is a distributed cache server system, comprising: cache clients, which obtain all cache server terminal information from a main-memory database server, establish connections with the cache server terminals, and generate and regularly maintain links and link tables; the main-memory database server, which establishes and maintains a cache server terminal information table and a directory table mapping data storage types to cache server terminals, and processes the cache server terminal information reported by the cache server terminals; and the cache server terminals, which report their information to the main-memory database server and manage the cache data blocks. In addition, the invention provides an application method of the distributed cache server system, the cache clients, and the cache server terminals. With this technical scheme, the deployment and use of the cache server system become simple and convenient, access is fast, and the system can be extended and updated automatically.
Owner:ZTE CORP
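The routing step the abstract describes, in which a client resolves a data storage type to a cache server terminal through the directory table, might be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; the class, the table layouts, and the CRC-based key spreading are all assumptions.

```python
import zlib

class CacheClient:
    """Client-side routing sketch: the tables stand in for what the
    main-memory database server would provide."""

    def __init__(self, server_table, directory_table):
        # server_table: server id -> address
        # directory_table: data storage type -> list of server ids
        self.servers = dict(server_table)
        self.directory = {t: list(ids) for t, ids in directory_table.items()}

    def route(self, data_type, key):
        """Pick the cache server terminal responsible for (data_type, key)."""
        candidates = self.directory[data_type]
        # spread keys of one type across that type's servers deterministically
        index = zlib.crc32(key.encode()) % len(candidates)
        return self.servers[candidates[index]]

client = CacheClient(
    server_table={"s1": "10.0.0.1:11211", "s2": "10.0.0.2:11211"},
    directory_table={"session": ["s1", "s2"], "profile": ["s2"]},
)
addr = client.route("session", "user:42")   # one of the two session servers
```

In a real deployment the two tables would be fetched from the main-memory database server at startup and refreshed as terminals report in, rather than passed in as literals.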

Asynchronous caching method, server and system

The invention discloses an asynchronous caching method, server and system, and relates to the technical field of caching. In the provided technical scheme, data requested by a user is read from the asynchronous caching server and a source server and returned to the user; while the user request is being answered, asynchronous caching data is formed by setting a logic expiration time on the source data and is stored into the asynchronous caching server, thereby updating the asynchronous caching server. Whether the asynchronous caching data has expired is judged against the logic expiration time; after expiration, the source data is read again and the asynchronous caching server is updated once more. Through this two-stage update of the asynchronous caching data, it can be ensured that 80%-90% of user requests need only access the asynchronous caching server and do not read the source server. With the technical scheme provided by embodiments of the invention, the availability and stability of a system can be remarkably improved.
Owner:BEIJING CHESHANGHUI SOFTWARE
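The logic-expiration mechanism above can be sketched in a few lines: each cached entry carries a logic expiration time, reads inside that window come from cache, and an expired entry triggers a reload from the source plus a cache update. A minimal illustrative sketch; the class name and the 30-second TTL are assumptions, not the patent's values.

```python
import time

class AsyncCache:
    def __init__(self, load_from_source, ttl_seconds=30):
        self.load = load_from_source   # callable standing in for the source server
        self.ttl = ttl_seconds
        self.store = {}                # key -> (value, logic_expiration_time)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(key)
        if entry is not None and now < entry[1]:
            return entry[0]            # within logic expiration: no source read
        value = self.load(key)         # miss or expired: read the source
        self.store[key] = (value, now + self.ttl)   # update the cache again
        return value

reads = []
cache = AsyncCache(lambda k: reads.append(k) or f"data:{k}", ttl_seconds=30)
cache.get("a", now=0.0)    # first read: loads from source
cache.get("a", now=10.0)   # served from the asynchronous cache
cache.get("a", now=40.0)   # logically expired: reloads and re-caches
assert reads == ["a", "a"]
```

The point of the scheme is visible in the assertion: three user requests cause only two source reads, and the ratio improves further as more requests land inside the expiration window.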

Method and system for realizing data consistency

The invention discloses a method and a system for realizing data consistency. A data access component receives user messages and generates dynamic structured query language (SQL) from them; an extensible markup language (XML) database configuration file connects to the relational database and obtains the data, which is then passed to the application layer. The global buffer data set is queried by the data access component object name and the data acquisition object name: if the data exists, it is returned to the data access component, which passes it to the application layer; if it does not, a global cache component is created for that pair of names, and the dynamic SQL that obtained the data, the data access component object name, and the data acquisition object name are handed to the cache component. The method and the system solve the problem of data consistency between a server cache and the relational database, ensure the validity of the cached server data, reduce connection accesses to the relational database, accelerate access, and improve access efficiency.
Owner:HUNAN CRRC TIMES SIGNAL & COMM CO LTD
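The lookup described above, keying the global buffer by the pair of object names and registering the producing SQL on a miss, might be sketched like this. Illustrative Python; the class and the example names ("UserDAO", "getUsers") are assumptions.

```python
class GlobalBuffer:
    """Global buffer data set keyed by (access object name, data object name)."""

    def __init__(self):
        self.entries = {}   # (access_name, data_name) -> {"sql": ..., "data": ...}

    def query(self, access_name, data_name):
        """Return the cached entry, or None so the caller goes to the database."""
        return self.entries.get((access_name, data_name))

    def create(self, access_name, data_name, sql, data):
        """On a miss: register the dynamic SQL and the data it produced."""
        self.entries[(access_name, data_name)] = {"sql": sql, "data": data}

buf = GlobalBuffer()
assert buf.query("UserDAO", "getUsers") is None          # miss: hit the database
buf.create("UserDAO", "getUsers", "SELECT * FROM users", [("alice",)])
hit = buf.query("UserDAO", "getUsers")
assert hit["data"] == [("alice",)]                       # hit: database skipped
```

Keeping the SQL alongside the data is what lets a consistency layer invalidate or refresh an entry when the underlying tables change.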

Method and system for operating bank business data memory cache

A method and a system for operating a bank business data memory cache are provided. The system comprises a bank business application system, a memory cache cluster and a bank business database; the memory cache cluster comprises a plurality of memory cache servers. The bank business database outputs bank business information data. The bank business application system is connected with the memory cache cluster, receives user operation requests, and issues access requests accordingly. The memory cache cluster, connected to both the bank business database and the bank business application system, converts the bank business information data into an unstructured in-memory data structure, stores it in the memory cache servers, distributes the address information of the corresponding memory cache servers to the bank business application system, queries the bank business information data in local application memory, and outputs the feedback result to the bank business application system. The method and the system thereby provide an integrated, concise and standardized data access method and improve the overall interactive response speed of a bank application system under super-large-scale concurrent access.
Owner:BANK OF COMMUNICATIONS

Mass-unstructured data distributed type processing structure for description information

The invention discloses a distributed processing structure for mass unstructured data and its description information. The structure comprises a data collecting module, a data buffering and pre-processing module, a data separating and filing storage module, a stream processing module, a distributed data storage module, a distributed service processing module and distributed message-oriented middleware. The data collecting module collects unstructured data and sends it to a data buffering queue. The data buffering and pre-processing module temporarily stores the data sent by the data collecting module and selectively repairs it or performs secondary processing on it. The data separating and filing storage module acquires data from the distributed queue of the preceding module, selectively separates the unstructured data from its description information, and forwards or stores the separated data in a successor module. The stream processing module monitors, compares, calculates and processes newly arriving data. The distributed data storage module stores the unstructured data and the description information. The distributed service processing module comprises a service processor, a data access unit and a data buffering component. The distributed message-oriented middleware receives front-end requests for the service processor to execute selectively, and returns background processing results to the front end.
Owner:JINAN GRANDLAND DATA TECH

Method and device for releasing access pressure of server-side database

The invention discloses a method and device for relieving the access pressure on a server-side database. The method comprises the steps that version information of applications in the server-side database is queried and copied into a shared memory; an application update query request containing the name and version information of an application is received from a client side; the shared memory is queried to judge whether a record of the corresponding application exists; if it does, whether the application needs to be updated is determined by comparing version information, and applications which do not need to be updated are filtered out; update-related information of the applications which do need updating is queried in the server-side database and returned to the client side. In this technical scheme, because the shared memory is arranged in front of the server-side database, query requests for applications that do not need updating are filtered out by the shared memory, so that only valid requests reach the server-side database, and its access pressure is greatly reduced.
Owner:BEIJING QIHOO TECH CO LTD
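The filtering step above, comparing client versions against a shared in-memory table so that only genuinely out-of-date applications reach the database, might be sketched as follows. Illustrative Python; the function and the sample application names are assumptions.

```python
def filter_update_requests(shared_versions, requests):
    """shared_versions: app name -> latest version, the shared-memory copy.
    requests: list of (app_name, client_version) from clients.
    Returns the apps whose update info must still be queried in the database."""
    needs_update = []
    for app, client_version in requests:
        latest = shared_versions.get(app)
        if latest is None:
            needs_update.append(app)      # no shared-memory record: let the DB decide
        elif client_version != latest:
            needs_update.append(app)      # out of date: query update info
        # matching versions are filtered out and never reach the database
    return needs_update

shared = {"browser": "3.2", "cleaner": "1.0"}
reqs = [("browser", "3.2"), ("cleaner", "0.9"), ("player", "2.0")]
assert filter_update_requests(shared, reqs) == ["cleaner", "player"]
```

In the example, the up-to-date "browser" request is absorbed by the shared memory; only two of the three requests generate database work.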

Picture caching method and picture caching system

The invention discloses a picture caching method and a picture caching system. The picture caching method includes periodically acquiring the to-be-displayed pictures of a website within a preset period, distributing caching addresses of a CDN (content delivery network) caching server to the acquired pictures according to their identification information, and caching the acquired pictures to the CDN caching server according to the distributed caching addresses. Because the to-be-displayed pictures of the website within the preset period are acquired automatically and periodically and cached to the CDN caching server, caching control is more flexible; the to-be-displayed content of the website can be cached in advance, so that when a large number of users visit the website suddenly, the access pressure on the website is effectively reduced and 'source return' pressure on the picture source website is effectively avoided.
Owner:GUANGZHOU PINWEI SOFTWARE
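The pre-caching step above, deriving a CDN cache address from each picture's identification information and pushing the picture before users arrive, might be sketched like this. The address format, the host name, and the push callback are illustrative assumptions, not the patent's concrete scheme.

```python
import hashlib

def cdn_address(picture_id, cdn_host="cdn.example.com"):
    """Derive a cache address from the picture's identification information."""
    digest = hashlib.md5(picture_id.encode()).hexdigest()[:8]
    return f"https://{cdn_host}/{digest}/{picture_id}"

def precache(picture_ids, push):
    """Assign an address to each picture and push it to the CDN caching server.
    `push` stands in for the actual upload (e.g. an HTTP PUT in a real system)."""
    mapping = {}
    for pid in picture_ids:
        addr = cdn_address(pid)
        push(pid, addr)
        mapping[pid] = addr
    return mapping

pushed = []
mapping = precache(["banner-001", "promo-002"],
                   lambda pid, addr: pushed.append(addr))
assert len(pushed) == 2 and mapping["banner-001"].endswith("/banner-001")
```

Run on a timer once per preset period, this keeps the next period's pictures warm in the CDN, so a sudden traffic spike is absorbed at the edge instead of triggering back-to-source fetches.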

On-line seat-picking method and system, and overload protection device

The invention relates to an on-line seat-picking method and system, and an overload protection device. The method comprises: monitoring the connection requests sent to a same original ticketing system by each ticketing system; determining whether the number of connection requests within a set time exceeds a preset threshold; if it does, intercepting the connection requests and starting an overload protection mode, otherwise forwarding them to the original ticketing system; in response to interception, querying whether the required seat map exists in the on-line seat map cache; if it exists, determining whether it is still usable according to its survival time, and feeding the data back to the corresponding user if so; if it is unusable or not stored, applying for a token to access the original ticketing system; and upon obtaining a token, obtaining the required seat map from the original ticketing system and selecting seats according to it. According to the invention, the access pressure on the external original ticketing system under high concurrency can be reduced.
Owner:CHINA TELECOM CORP LTD
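The overload-protection flow above, counting requests against a threshold, serving intercepted requests from a TTL-bound seat-map cache, and gating origin refreshes behind tokens, might be sketched as follows. All names and the threshold/TTL/token values are illustrative assumptions.

```python
class OverloadProtector:
    def __init__(self, fetch_from_origin, threshold=100, ttl=5.0, tokens=10):
        self.fetch = fetch_from_origin    # stands in for the original ticketing system
        self.threshold, self.ttl = threshold, ttl
        self.tokens = tokens              # budget of direct origin accesses
        self.count = 0
        self.cache = {}                   # show id -> (seat_map, cached_at)

    def get_seat_map(self, show_id, now):
        self.count += 1
        if self.count <= self.threshold:
            return self.fetch(show_id)            # below threshold: pass through
        entry = self.cache.get(show_id)
        if entry and now - entry[1] < self.ttl:
            return entry[0]                       # survival time not exceeded
        if self.tokens > 0:                       # apply for a token
            self.tokens -= 1
            seat_map = self.fetch(show_id)
            self.cache[show_id] = (seat_map, now)
            return seat_map
        return entry[0] if entry else None        # no token: serve stale or fail

origin_calls = []
p = OverloadProtector(lambda s: origin_calls.append(s) or {"A1": "free"},
                      threshold=1, ttl=5.0, tokens=1)
p.get_seat_map("show", now=0.0)   # below threshold: passes through to origin
p.get_seat_map("show", now=1.0)   # intercepted: spends the token, caches
p.get_seat_map("show", now=2.0)   # served from cache, origin untouched
assert origin_calls == ["show", "show"]
```

The token budget is what bounds origin traffic during a spike: however many users pile on, at most `tokens` cache refreshes hit the original ticketing system per protection episode.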

Distributed system and incremental data updating method

Status: Inactive · Publication number: CN105335170A · Effects: reduce access pressure; reduce system resource consumption · Classification: program loading/initiating · Concepts: resource consumption; middleware
The present application proposes a distributed system and an incremental data updating method. The system comprises: a server, used for sending a data change notification to distributed clients; the distributed clients, used for receiving the data change notification from the server, sending a changed-data acquisition request to first middleware according to the data change notification, and receiving changed data returned by the first middleware; and the first middleware, used for receiving the changed-data acquisition request from the distributed clients, acquiring the changed data according to the changed-data acquisition request and returning the changed data to the distributed clients. The distributed system and the incremental data updating method provided by the embodiments of the present application reduce the access pressure on a database in the distributed system, reduce system resource consumption, and improve the incremental data updating speed of the distributed clients and the stability of the distributed system.
Owner:ALIBABA GRP HLDG LTD
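The notification flow above, where the server announces only that data changed and each distributed client fetches just the increment through the first middleware, might be sketched like this. Illustrative Python; the class and method names are assumptions, not the patent's interfaces.

```python
class Middleware:
    """First middleware: answers changed-data acquisition requests."""

    def __init__(self, data):
        self.data = dict(data)
        self.reads = 0

    def get_changed(self, keys):
        self.reads += 1
        return {k: self.data[k] for k in keys if k in self.data}

class DistributedClient:
    def __init__(self, middleware):
        self.mw = middleware
        self.local = {}

    def on_change_notification(self, changed_keys):
        """React to the server's data change notification: fetch only the
        increment from the middleware and merge it into the local copy."""
        self.local.update(self.mw.get_changed(changed_keys))

mw = Middleware({"rate": 1.2, "fee": 0.3, "cap": 9.9})
node = DistributedClient(mw)
node.on_change_notification(["rate"])   # one middleware read, one key transferred
assert node.local == {"rate": 1.2} and mw.reads == 1
```

Because the notification carries keys rather than data, clients never reload the full data set, which is where the reduced database access pressure and the faster incremental updates come from.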

Circuit and method based on AVS motion compensation interpolation

The invention relates to a circuit and a method based on AVS motion compensation interpolation, belonging to the technical field of audio and video digital encoding and decoding. The circuit comprises integer-pixel memories I and II, b/h and j pixel memories, a memory interface module, half-pixel and quarter-pixel interpolation filters, a multiplexer and an adjusting amplitude limiter. The output ends of the integer-pixel memories I and II and of the b/h and j pixel memories are connected with the input end of the memory interface module; the output end of the memory interface module is connected with the input ends of the half-pixel and quarter-pixel interpolation filters; the output ends of the two interpolation filters are connected with the input ends of the multiplexer; the output end of the half-pixel interpolation filter is also connected with the input ends of the b/h and j pixel memories; and the output end of the multiplexer is connected with the input end of the adjusting amplitude limiter, which outputs the interpolation results. The invention performs the interpolation operation by improving system parallelism, thereby effectively improving system performance.
Owner:SHANDONG UNIV