
286 results for patented technology on "Improve execution performance"

Clustered database system dynamic load balancing method

Inactive · CN101169785A · Improve overall resource utilization · Improve overall execution performance · Store-and-forward switching systems · Special data processing applications · Cluster based · Resource utilization
The invention provides a dynamic load balancing method for a clustered database system. The method determines the load state of each database server with a dynamic load balancing algorithm, routes a database statement through a database gateway to the server with the lowest load, and returns the result to the client. Different data synchronization mechanisms are applied depending on the statement type. For a query statement, the gateway simply returns the result to the client; for an update statement, the gateway returns the result to the client, records the state of the updated table, and then forwards the update statement to the other database servers to keep the data consistent across all databases. While preserving the performance and availability of a gateway (middleware) based cluster, the dynamic load balancing mechanism effectively improves the overall resource utilization of the database cluster, and thereby its overall execution performance.
Owner: LANGCHAO ELECTRONIC INFORMATION IND CO LTD
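The gateway behavior described above can be sketched as follows. This is a toy illustration under assumed interfaces (an in-memory key-value store standing in for each database server), not the patented implementation:

```python
# Illustrative sketch of the gateway logic: route each statement to the
# least-loaded server; replicate updates to all other servers afterwards
# so every replica stays consistent.

class ClusterGateway:
    def __init__(self, server_names):
        # each "server" is a dict acting as a toy key-value database
        self.servers = {name: {} for name in server_names}
        self.load = {name: 0 for name in server_names}

    def _least_loaded(self):
        return min(self.load, key=self.load.get)

    def execute(self, statement):
        kind, key, *value = statement
        target = self._least_loaded()
        self.load[target] += 1
        if kind == "SELECT":
            # query: answer directly from the chosen server
            return self.servers[target].get(key)
        elif kind == "UPDATE":
            # update: apply on the chosen server, return to the client,
            # then propagate to all other servers for consistency
            self.servers[target][key] = value[0]
            for name, db in self.servers.items():
                if name != target:
                    db[key] = value[0]
            return "OK"
```

Because updates are propagated to every replica, a subsequent `SELECT` routed to any server sees the same data, which is the property the abstract's synchronization mechanism is designed to preserve.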

Dimension context propagation techniques for optimizing SQL query plans

Techniques for efficient execution of queries. A query plan generated for a query is optimized and rewritten as an enhanced query plan which, when executed, uses fewer CPU cycles and thus runs faster than the original plan, without compromising the results obtained or the data being queried. Optimization identifies a set of one or more fact scan operations in the original query plan and, in the rewritten enhanced plan, associates one or more dimension context predicate conditions with one or more of those fact scans. This reduces the overall cost of scanning and/or processing fact records in the enhanced query plan compared to the original, making the enhanced plan execute faster.
Owner: ORACLE INT CORP
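The effect of associating a dimension predicate with the fact scan can be shown on a toy star schema. The data model and plan representation here are assumptions for illustration, not Oracle's implementation:

```python
# Toy illustration of pushing a dimension-derived predicate down to the
# fact scan: derive the qualifying dimension keys first, and let the fact
# scan skip non-matching rows instead of processing every fact record.

dim_date = [  # (date_key, year)
    (1, 2022), (2, 2023), (3, 2023), (4, 2024),
]
fact_sales = [  # (date_key, amount)
    (1, 10), (2, 20), (2, 5), (3, 7), (4, 40),
]

def original_plan(year):
    # every fact row flows through the join filter
    keys = {k for (k, y) in dim_date if y == year}
    processed, total = 0, 0
    for (k, amount) in fact_sales:
        processed += 1
        if k in keys:
            total += amount
    return total, processed

def enhanced_plan(year):
    # dimension context predicate associated with the fact scan itself:
    # only qualifying fact rows are produced for downstream processing
    keys = {k for (k, y) in dim_date if y == year}
    rows = [(k, a) for (k, a) in fact_sales if k in keys]
    return sum(a for (_, a) in rows), len(rows)
```

Both plans return the same aggregate, but the enhanced plan hands fewer fact rows to the operators above the scan, which is the cost reduction the abstract describes.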

Apparatus and method for performing convolutional neural network training

The present invention provides an apparatus and a method for performing convolutional neural network backward training. The apparatus comprises an instruction storage unit, a controller unit, a data access unit, an interconnection module, a main computing module, and a plurality of slave computing modules. For each layer, the method selects input-neuron data according to the convolution window; takes the selected data from the previous layer and the data gradient from the subsequent layer as inputs to the apparatus's computing units; computes and updates the convolution kernel; and then, from the convolution kernel, the data gradient, and the derivative of the activation function, computes the output data gradient and stores it to memory for back-propagation to the previous layer. Because the data and weight parameters involved in the computation are staged in a high-speed cache, the apparatus supports convolutional neural network backward training flexibly and efficiently and improves the execution performance of applications with heavy memory access.
Owner: CAMBRICON TECH CO LTD
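The two gradient computations the abstract names (the kernel update and the data gradient passed back to the previous layer) can be written out for a single 1-D convolution. This is a framework-free sketch under assumed shapes, not the patented hardware datapath:

```python
# For a "valid" cross-correlation y[i] = sum_k x[i+k] * w[k] and an upstream
# gradient g = dL/dy, the kernel gradient and input gradient are:

def conv1d_valid(x, w):
    n = len(x) - len(w) + 1
    return [sum(x[i + k] * w[k] for k in range(len(w))) for i in range(n)]

def kernel_grad(x, g):
    # dL/dw[k] = sum_i g[i] * x[i + k]   (used to update the convolution kernel)
    return [sum(g[i] * x[i + k] for i in range(len(g)))
            for k in range(len(x) - len(g) + 1)]

def input_grad(g, w, n):
    # dL/dx[j] = sum_{i+k=j} g[i] * w[k]  (the gradient sent to the previous layer)
    dx = [0.0] * n
    for i in range(len(g)):
        for k in range(len(w)):
            dx[i + k] += g[i] * w[k]
    return dx
```

Both formulas can be checked against finite differences of the loss, which is a standard sanity test for hand-written backward passes.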

Method of Enhancing Command Execution Performance of Disc Drive

To reduce the seeks generated when the execution flow switches between commands, and thereby enhance the read and write performance of a disc drive, each command is implemented with a specifically designed data structure, and commands with neighboring physical addresses and the same operation type (read or write) are grouped and linked together. With such command groups, seeks between commands are significantly reduced, though starvation may arise; further techniques are provided to prevent starvation of command groups while preserving the benefits of reduced seeks.
Owner: MEDIATEK INC
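The grouping and anti-starvation ideas can be sketched on a software queue model. The gap threshold, age limit, and scheduling policy below are assumptions for illustration, not the patent's data structure:

```python
# Link queued commands with neighboring addresses and the same operation
# into one group (one seek serves the whole group); an age limit prevents
# a small, old group from starving behind large ones.

def group_commands(queue, gap=8):
    """queue: list of (op, lba, arrival_time); returns a list of groups."""
    groups = []
    for op, lba, t in sorted(queue, key=lambda c: (c[0], c[1])):
        last = groups[-1] if groups else None
        if last and last["op"] == op and lba - last["lbas"][-1] <= gap:
            last["lbas"].append(lba)             # neighboring same-type command
            last["born"] = min(last["born"], t)  # group age = oldest member
        else:
            groups.append({"op": op, "lbas": [lba], "born": t})
    return groups

def pick_next(groups, now, max_age=100):
    # serve a starving group first; otherwise the largest group,
    # which amortizes one seek over the most commands
    starving = [g for g in groups if now - g["born"] >= max_age]
    if starving:
        return min(starving, key=lambda g: g["born"])
    return max(groups, key=lambda g: len(g["lbas"]))
```

In the test below, a lone old write is skipped while fresh, so the big read group wins; once its age crosses the limit, it is promoted ahead of the larger group.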

Method for memory on-line analytical processing (OLAP) query optimization based on field programmable gate array (FPGA)

Active · CN105868388A · Reduce memory storage costs · Reduce computational cost and power consumption · Multi-dimensional databases · Special data processing applications · Query optimization · Storage model
The invention relates to a method for in-memory on-line analytical processing (OLAP) query optimization based on a field programmable gate array (FPGA). The method constructs a heterogeneous storage model for an in-memory data warehouse, then performs query optimization for a CPU-FPGA heterogeneous processor on top of that model: a grouping projection vector is generated by subquery; the grouping projection vector is compressed with a dictionary table; the grouping projection is updated to a dictionary-encoded grouping projection vector according to a projection dictionary table; the grouping projection vector is joined with the fact-table foreign key, and a measure vector is generated by aggregate computation over the measure columns; and index-based aggregation is then computed over the measure vector. For query optimization on the CPU-FPGA heterogeneous computing platform, the FPGA and the CPU share access to the same memory address space: when the FPGA is configured as a PCI-E accelerator card, the FPGA accelerates the join and directly accesses the flash card through the PCI-E channel for data processing; and when the FPGA is integrated into the flash device, it accelerates the flash card's data access and aggregation computation.
Owner: RENMIN UNIVERSITY OF CHINA
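The vector pipeline in the abstract (dictionary compression, grouping projection vector, foreign-key probe, measure-vector aggregation) can be traced on a toy star schema. The data layout is an assumption for illustration:

```python
# Toy vectorized group-by: a dictionary-compressed grouping projection
# vector built on the dimension table is probed by the fact foreign key,
# and measures are aggregated into a dense measure vector.

dim_region = ["east", "west", "east", "south"]  # group-by column, indexed by dim key 0..3
fact = [(0, 10.0), (1, 4.0), (2, 6.0), (0, 1.0), (3, 9.0)]  # (foreign key, measure)

# 1. dictionary-compress the group-by column: value -> small integer code
dictionary = sorted(set(dim_region))            # ['east', 'south', 'west']
code = {v: i for i, v in enumerate(dictionary)}

# 2. grouping projection vector: dimension key -> group code
group_vec = [code[v] for v in dim_region]

# 3. probe with the fact foreign key; aggregate into the measure vector
measure_vec = [0.0] * len(dictionary)
for fk, measure in fact:
    measure_vec[group_vec[fk]] += measure

result = dict(zip(dictionary, measure_vec))
```

The key-to-code indirection is what makes the probe a plain array lookup, the kind of regular, branch-light access pattern that maps well onto an FPGA datapath.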

Deep-sea working ROV (Remotely Operated Vehicle) propeller system

The invention discloses a propeller system for a deep-sea work-class ROV (Remotely Operated Vehicle). The ROV controller generates a six-degree-of-freedom speed control instruction according to the current motion state of the ROV; a communication unit, using TCP/IP network communication, transmits the instruction to a thrust allocation unit; the propeller unit comprises four horizontal propellers and three vertical propellers; the thrust allocation unit decomposes the received speed control instruction and transmits the resulting thrust value for each propeller to a drive unit; the drive unit outputs corresponding voltage signals to the propeller proportional valves, adjusting their opening; and a hydraulic unit delivers hydraulic oil to the propeller unit through the proportional valves. The system improves the execution capability and efficiency of the propulsion system.
Owner: HARBIN ENG UNIV
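The thrust allocation step (decomposing a body-frame command into per-thruster forces) can be sketched for the horizontal plane only. The 45-degree thruster geometry and moment arms below are assumptions for illustration, not the patent's configuration; the minimum-norm pseudoinverse split is a standard allocation technique:

```python
# Minimum-norm thrust allocation u = B^T (B B^T)^-1 tau for four vectored
# horizontal thrusters; tau = commanded (surge, sway, yaw).

def solve3(a, b):
    # Gauss-Jordan elimination with partial pivoting for a 3x3 system
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

C = 0.7071  # cos 45 deg; thruster angles are assumed for illustration
B = [        # rows: surge, sway, yaw; columns: the four horizontal thrusters
    [C,    C,    C,    C],
    [C,   -C,    C,   -C],
    [0.5, -0.5, -0.5,  0.5],
]

def allocate(tau):
    bbt = [[sum(B[i][k] * B[j][k] for k in range(4)) for j in range(3)]
           for i in range(3)]
    lam = solve3(bbt, tau)
    return [sum(B[i][j] * lam[i] for i in range(3)) for j in range(4)]
```

A quick consistency check is to map the allocated thrusts back through B and confirm the commanded surge/sway/yaw are reproduced.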

Natural language processing model training method, task execution method, equipment and system

Active · CN111079406A · Solve deployment difficulties · Enhanced natural language processing capabilities · Semantic analysis · Machine learning · Data set · Original data
The invention discloses a natural language processing model training method, a natural language processing method, equipment, and a system, which belong to the field of natural language processing. The method comprises the following steps: training a teacher model on a labeled original data set; augmenting the text sentences in the original data set to obtain enhanced text sentences, and labeling the enhanced sentences with the trained teacher model to obtain a labeled enhanced data set; and training a student model on the original and enhanced data sets together, taking the trained student model as the natural language processing model. The teacher and student models are both deep learning models that perform the same natural language processing task, with the teacher model the more complex and larger of the two. The invention effectively augments the data set of a natural language processing task in a knowledge distillation setting and improves the capability of the natural language processing model, thereby improving the execution of the task.
Owner: HUAZHONG UNIV OF SCI & TECH
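The pipeline can be traced end to end with deliberately tiny stand-ins: a keyword rule plays the "teacher", synonym replacement plays the text augmentation, and a word-frequency classifier plays the "student". All of these stand-ins are assumptions for illustration; the patent's models are deep networks:

```python
# Distillation-style pipeline: teacher labels augmented sentences,
# student trains on original + teacher-labeled data.

POSITIVE = {"great", "good", "excellent", "love"}
NEGATIVE = {"bad", "poor", "terrible", "hate"}

def teacher(sentence):
    words = set(sentence.lower().split())
    return "pos" if len(words & POSITIVE) >= len(words & NEGATIVE) else "neg"

SYNONYMS = {"good": "great", "bad": "poor", "movie": "film"}

def augment(sentence):
    # text augmentation by synonym replacement
    return " ".join(SYNONYMS.get(w, w) for w in sentence.split())

def train_student(labeled):
    # student: per-class word-frequency tables
    counts = {"pos": {}, "neg": {}}
    for sentence, label in labeled:
        for w in sentence.lower().split():
            counts[label][w] = counts[label].get(w, 0) + 1
    return counts

def student_predict(model, sentence):
    def score(label):
        return sum(model[label].get(w, 0) for w in sentence.lower().split())
    return "pos" if score("pos") >= score("neg") else "neg"

original = [("good movie", "pos"), ("bad movie", "neg")]
augmented = [(augment(s), teacher(augment(s))) for s, _ in original]
model = train_student(original + augmented)
```

The point of the exercise: the student trained on the original data alone has never seen "great" or "film", but after training on the teacher-labeled augmented sentences it classifies them correctly.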

Method for testing the transaction performance of a terminal

Active · CN102053872A · Improve execution performance · Avoid the disadvantages of losing original transaction information · Finance · Error detection/correction · Communication link · Computer science
The invention discloses a method for testing the transaction performance of a terminal. The test tool comprises a client, a database, and a server. A user authors a transaction template and test cases at the client and stores them in the database. During testing, the server receives a test command and processes it as follows: a, loading the transaction template and test cases from the database into a memory pool; b, initializing an extraction algorithm for drawing transactions from the memory pool; c, establishing communication connections among the server, an acquiring platform, and an encryptor; d, setting up communication links as required; e, processing transactions in the role of the terminal; f, checking whether the interval between the current time and the last statistics time has reached the statistical period, and outputting transaction statistics for the period when it has; and g, returning to step d to continue processing transactions over the communication links. The method supports multi-level related transactions, faithfully simulates real transaction conditions, and ensures test continuity.
Owner: CHINA UNIONPAY
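Steps e and f (process transactions, emit statistics each time the statistical period elapses) can be sketched as a deterministic loop. Timestamps are injected rather than read from a clock, an assumption made here so the sketch is testable:

```python
# Schematic version of the process/report loop: count processed
# transactions and output a statistics record whenever the statistical
# period has elapsed since the last report.

def run_test_loop(transactions, period):
    """transactions: list of (timestamp, name); returns per-period counts."""
    reports = []
    last_stat_time = None
    count = 0
    for ts, _name in transactions:
        if last_stat_time is None:
            last_stat_time = ts
        count += 1                         # step e: process one transaction
        if ts - last_stat_time >= period:  # step f: period elapsed?
            reports.append((ts, count))    # output transaction statistics
            last_stat_time = ts
            count = 0
    return reports
```

In a real harness the loop body would drive the terminal-side transaction against the acquiring platform; here it only counts, which is enough to show the statistics cadence.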

FFT accelerator based on DSP chip

The invention discloses an FFT accelerator based on a DSP chip. The accelerator comprises a mode configuration module, an FFT computation control module, a data access control module, and an FFT computation module. The mode configuration module receives configuration data specifying the data address, computation scale, and number of computations. When the computation scale is no greater than the maximum scale that can be supported directly, the FFT computation control module directs the FFT computation module to perform a one-dimensional FFT; when the computation scale exceeds that maximum, it directs the FFT computation module to perform a two-dimensional FFT. The data access control module reads computation data from memory in DMA mode and writes results back to memory, and the FFT computation module performs the FFT according to the control signals from the FFT computation control module. The accelerator supports multiple configurations of computation scale, computation count, and data format, realizes FFTs from small to large scales, and achieves high execution efficiency and high utilization of hardware resources.
Owner: NAT UNIV OF DEFENSE TECH
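The two-dimensional decomposition the abstract relies on is the classic four-step FFT: a length-N transform with N = N1·N2 is computed from length-N1 and length-N2 transforms plus twiddle factors. A pure-software sketch (naive DFTs stand in for the accelerator's directly supported sizes):

```python
# Four-step FFT: view x as an N1 x N2 matrix, DFT the rows (length N2),
# multiply by twiddle factors, DFT the columns (length N1), and read the
# result out as X[N2*k1 + k2].
import cmath

def dft(x):
    # naive DFT, standing in for a directly supported small-scale FFT
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * t * k / n) for t in range(n))
            for k in range(n)]

def four_step_fft(x, n1, n2):
    n = n1 * n2
    assert len(x) == n
    # step 1: A[r][c] = x[r + n1*c]
    a = [[x[r + n1 * c] for c in range(n2)] for r in range(n1)]
    # step 2: length-n2 DFT along each row
    a = [dft(row) for row in a]
    # step 3: twiddle factors W_n^(r*k2)
    a = [[a[r][c] * cmath.exp(-2j * cmath.pi * r * c / n) for c in range(n2)]
         for r in range(n1)]
    # step 4: length-n1 DFT along each column, read out X[n2*k1 + k2]
    cols = [dft([a[r][c] for r in range(n1)]) for c in range(n2)]
    out = [0j] * n
    for k1 in range(n1):
        for k2 in range(n2):
            out[n2 * k1 + k2] = cols[k2][k1]
    return out
```

Comparing the four-step result with a direct DFT of the full length verifies the decomposition, which is exactly how a large transform is reduced to the accelerator's maximum directly supported scale.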