464 results about "Bottleneck" patented technology

In production and project management, a bottleneck is one process in a chain of processes whose limited capacity reduces the capacity of the whole chain. The results of having a bottleneck are stalls in production, supply overstock, pressure from customers, and low employee morale. There are both short- and long-term bottlenecks. Short-term bottlenecks are temporary and are not normally a significant problem; an example would be a skilled employee taking a few days off. Long-term bottlenecks occur all the time and can cumulatively slow down production significantly; an example is a machine that is not efficient enough and as a result accumulates a long queue.
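The definition above can be sketched in a few lines: the throughput of a serial chain of processes is capped by its slowest stage. The stage capacities below are illustrative, not taken from any patent in this list.

```python
# Minimal sketch: a serial chain's throughput equals the capacity of its
# slowest stage (the bottleneck). Capacities are invented for illustration.

def chain_throughput(capacities):
    """Return (throughput, bottleneck_index) for a serial chain of stages,
    where capacities[i] is the units/hour that stage i can process."""
    throughput = min(capacities)
    return throughput, capacities.index(throughput)

stages = [120, 45, 90, 80]          # units/hour per stage
rate, idx = chain_throughput(stages)
print(rate, idx)                    # the whole chain runs at 45 units/hour
```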

Automatic identification of bottlenecks using rule-based expert knowledge

Execution states of tasks are inferred from a collection of information associated with runtime execution of a computer system. The collected information may include infrequent samples of executing tasks, which may provide inaccurate execution states. One or more tasks may be aggregated by one or more execution states to determine execution time, idle time, or system policy violations, or combinations thereof.
Owner:DOORDASH INC
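A hypothetical sketch of the aggregation step described above: infrequent state samples are counted per (task, state) pair and scaled by the sampling interval to estimate execution and idle time. The task names, state names, and interval are invented for illustration.

```python
from collections import Counter

def aggregate_states(samples, interval_s):
    """samples: list of (task, state) observations taken every interval_s
    seconds; returns estimated seconds spent in each (task, state)."""
    counts = Counter(samples)
    return {key: n * interval_s for key, n in counts.items()}

obs = [("taskA", "running"), ("taskA", "idle"), ("taskA", "running"),
       ("taskB", "blocked"), ("taskB", "blocked")]
est = aggregate_states(obs, interval_s=10)
print(est[("taskA", "running")])   # two samples * 10 s = 20 s estimated
```

Because sampling is infrequent, these are estimates, which matches the abstract's caveat that sampled states may be inaccurate.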

Scheduling method for semiconductor production line based on multi-ant-colony optimization

The invention relates to a scheduling method for a semiconductor production line based on multi-ant-colony optimization. The method comprises the following steps: determining the bottleneck processing areas of the semiconductor production line, where processing areas whose average equipment utilization rate exceeds 70 percent are regarded as bottleneck processing areas; setting the number of ant colonies to the number of bottleneck processing areas and initializing a multi-ant-colony system; searching the scheduling schemes of all bottleneck processing areas in parallel, one ant colony system per area; integrating the scheduling schemes of all bottleneck processing areas into one scheduling scheme according to the procedure processing sequence, and deriving the scheduling schemes of the remaining non-bottleneck processing areas using that scheme and the procedure processing sequence as constraints, thereby obtaining the scheduling scheme of the whole semiconductor production line; and judging whether the program ending conditions are met; if so, outputting the scheduling scheme with the best performance, otherwise updating the pheromones of the ant colonies with the currently best-performing scheme to guide a new round of searching. The method provides important practical value for solving the optimal dispatching problem of semiconductor production lines, and important guidance for improving the production management level of semiconductor enterprises in China.
Owner:TONGJI UNIV
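The first two steps above can be sketched directly: areas whose average equipment utilization exceeds 70 percent are marked as bottlenecks, and one ant colony is allocated per bottleneck area. The area names and utilization figures are invented; the actual colony search is omitted.

```python
# Simplified sketch of bottleneck-area identification and colony allocation.
# Utilization values are illustrative, not from the patent.

def find_bottleneck_areas(utilization, threshold=0.70):
    """Return the processing areas whose average utilization exceeds
    the threshold (0.70 per the abstract)."""
    return [area for area, u in utilization.items() if u > threshold]

util = {"litho": 0.92, "etch": 0.74, "implant": 0.55, "cmp": 0.68}
bottlenecks = find_bottleneck_areas(util)
num_colonies = len(bottlenecks)     # one ant colony searches each bottleneck area
print(sorted(bottlenecks), num_colonies)
```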

Method and system for test, simulation and concurrence of software performance

Inactive · CN103544103A · Avoid influence · Avoid interference of response time with each other · Software testing/debugging · User input · Software engineering
The invention relates to a method for testing software performance through simulated concurrency. The method specifically comprises the steps that (1) user configuration information input by a user is read, (2) a user requirement structure is stored in a shared memory module and a mapping is established, (3) service requests of concurrent users are received and at least one test process is created according to the number of concurrent users and the user requirement structure, (4) test threads are created, (5) each test thread processes the service request of a corresponding user and stops when a stopping condition is met, (6) after the test threads in each process finish in sequence, the threads are stopped and the run ends, and (7) the relevant data of each service are stored, analyzed and counted, after which all the processes finish. The method and system explain how to simulate user concurrency, prevent bottlenecks from occurring, and achieve a high-concurrency scenario with a small amount of hardware resources; concurrency stability is guaranteed; different user services are supported; and help is provided for problem positioning and for shortening the development cycle.
Owner:烟台中科网络技术研究所
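Steps (3) through (6) above can be sketched with one worker thread per simulated user, each issuing requests until its stop condition (a fixed request count here) is met. This is a hedged illustration: `fake_request` stands in for a real service call, and the counts are arbitrary.

```python
import threading
import time

def fake_request(user_id):
    """Stand-in for a real service call; returns the observed latency."""
    time.sleep(0.001)
    return 0.001

def run_user(user_id, n_requests, results, lock):
    """One test thread: issue n_requests calls, record their latencies."""
    latencies = [fake_request(user_id) for _ in range(n_requests)]
    with lock:
        results[user_id] = latencies

def simulate(concurrent_users=4, n_requests=5):
    results, lock = {}, threading.Lock()
    threads = [threading.Thread(target=run_user,
                                args=(u, n_requests, results, lock))
               for u in range(concurrent_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                    # step (6): wait for every thread to end
    return results                  # step (7): data ready for analysis

res = simulate()
print(len(res), len(res[0]))        # 4 users, 5 recorded latencies each
```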

Management scheduling technology based on hyper-converged framework

The invention discloses a management scheduling technology based on a hyper-converged framework. The method comprises a hyper-converged system architecture design, resource integrated management based on a hyper-converged architecture, unified computing virtualization oriented to a domestic heterogeneous platform, storage virtualization based on distributed storage, network virtualization based on software definition, and a container dynamic scheduling management technology oriented to a high-mobility environment. According to the management scheduling technology based on the hyper-converged framework provided by the invention, the virtualization capability and the management capability of the tactical cloud platform are improved, and key technical support is provided for constructing full-link ecology for an army maneuvering tactical cloud. An on-demand, elastic virtualized computing and storage resource pool is provided and heterogeneous fusion computing virtualization is achieved; meanwhile, distributed storage technology is used to construct a storage resource pool and software-defined technology is used to construct a virtual network, forming a hyper-converged resource pool. Localized data and network access for application services are achieved, the I/O bottleneck problem of the traditional virtualization deployment mode is solved, and service response performance is improved.
Owner:BEIJING INST OF COMP TECH & APPL

Log analysis-based micro-service performance optimization system and analysis method

Active · CN109756364A · Reduce workload · Quickly identify performance bottlenecks · Hardware monitoring · Data switching networks · Microservices · Service gateway
The invention discloses a micro-service performance optimization method based on log analysis. The method comprises the following steps: a key interface of a micro-service module records an access log of each interface call through a log SDK; the log collection agent module collects performance monitoring information of the service system at regular intervals; the unified log analysis platform carries out extraction and analysis on the access logs to obtain the performance bottleneck points of the system; the micro-service gateway updates the routing strategy of the intelligent routing module at regular intervals according to the performance indexes of the micro-service modules; meanwhile, the API monitoring module extracts the number of external requests and the throughput through log analysis, and then obtains the external current-limiting weight of the micro-service gateway according to the number of external requests, the throughput and the performance bottleneck points. Through automatic extraction and analysis of logs, the method generates a complete calling-chain topology, finds hidden performance doubtful points, quickly identifies the performance bottleneck points of the system, and effectively reduces the actual workload of development and operation maintenance personnel.
Owner:CHENGDU SEFON SOFTWARE CO LTD

Resource bottleneck identification for multi-stage workflow processing

Identifying a resource bottleneck in multi-stage workflow processing may include identifying dependencies between logical stages and physical resources in a computing system to determine which logical stage involves which set of resources; for each of the identified dependencies, determining a functional relationship between the usage level of a physical resource and the concurrency level of a logical stage; estimating consumption of the physical resources by each of the logical stages based on the functional relationship determined for each of the logical stages; and performing predictive modeling based on the estimated consumption to determine the concurrency level at which each of the logical stages will become a bottleneck.
Owner:IBM CORP
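The final prediction step above can be illustrated under a simplifying assumption: if a stage's resource usage grows roughly linearly with its concurrency level, the stage saturates the resource at capacity divided by per-task usage. The linear model and the numbers are assumptions for illustration, not the patent's model.

```python
# Hedged sketch: predict the concurrency level at which a logical stage
# saturates a physical resource, assuming linear usage growth.

def bottleneck_concurrency(usage_per_task, capacity):
    """If resource usage is approximately usage_per_task * concurrency,
    the stage becomes a bottleneck at capacity / usage_per_task."""
    return capacity / usage_per_task

# e.g. each concurrent task of a stage consumes ~6% CPU; CPU saturates at 100%
level = bottleneck_concurrency(usage_per_task=6.0, capacity=100.0)
print(level)                        # the stage becomes a bottleneck near 16-17 tasks
```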

Method for quickly developing heterogeneous parallel program

The invention provides a method for quickly developing a heterogeneous parallel program, and relates to performance analysis of a CPU (central processing unit) serial program and transplantation to a heterogeneous parallel program. The method includes: firstly, performing performance and algorithm analysis on the CPU serial program, and locating the performance bottleneck and the parallelizability of the program; secondly, inserting OpenACC pre-compilation directives into the original code to obtain heterogeneous parallel code which can be executed in a heterogeneous parallel environment; and finally, compiling and executing the code according to the specified parameters of the hardware and software platform, and determining whether further optimization is needed according to the program run result. Compared with the prior art, the method has the advantages that existing code need not be reconstructed; multi-language support is realized, with languages such as C/C++ and FORTRAN (formula translator) supported; and cross-platform, cross-hardware operation is realized, with operating systems such as Linux, Windows and Mac supported and hardware such as Nvidia and AMD GPUs and Intel Xeon Phi supported. With the method, which is highly practical and easy to popularize, existing programs can be parallelized efficiently and enabled to make full use of the computing power of a heterogeneous system.
Owner:三多(杭州)科技有限公司

Measuring system and method for supporting analysis of OpenFlow application performance

Inactive · CN103997432A · Overcome the defect that frequent reading of switch flow table data can easily affect network performance · Overcoming defects that can easily affect network performance · Data switching networks · Routing control plane · Real-time computing
The invention discloses a measuring system and method supporting analysis of OpenFlow application performance. The system and method are based on an OpenFlow network and a measuring server. The OpenFlow network comprises a controller and n switches, each connected with the controller; the n switches are controlled by the controller via OpenFlow. After extending the local journal function and the clock synchronization function, the controller and the n switches become measuring entities controlled by the measuring server in a centralized manner. The measuring system and method have the advantages that no centralized performance bottleneck exists, the interference of measurement on network applications is small, information of the data plane and the control plane can be obtained comprehensively, and the interactive relationship between the control plane and the data plane can be obtained.
Owner:PLA UNIV OF SCI & TECH

Network access flow limiting control method and device and computer readable storage medium

Active · CN111030936A · Guaranteed uptime · Current limiting implementation · Data switching networks · Page view · Access frequency
The embodiment of the invention discloses a flow-limiting control method and device for network access and a computer-readable storage medium. The method comprises the steps of: when a service access request sent by a user is received and the current access is judged to be a first access, acquiring the accumulated page views of all service interfaces within a preset duration and the total number of access users; calculating the access frequency according to the total number of access users and the accumulated page views; calculating an expected page view according to the accumulated page views and the access frequency; if the expected page view exceeds a preset threshold, refusing the service access request; and if the expected page view does not exceed the preset threshold, responding to the service access request according to the corresponding service logic. Based on this scheme, the expected page view is estimated whenever a new request is received. When the page view is judged in advance to be close to the bottleneck of the system, flow limiting is carried out and some new user accesses are refused to guarantee normal operation of the system. Users already in the system are not affected while flow limiting is achieved.
Owner:TENCENT CLOUD COMPUTING BEIJING CO LTD
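The calculation described above can be sketched as follows. The exact formula is not given in the abstract, so this is an assumption: access frequency is taken as views per user, and the expected page view is the accumulated views plus one more user's worth of views. The numbers are invented.

```python
# Hedged sketch of the flow-limiting decision for a first-time access.
# The expected-page-view formula is an illustrative assumption.

def should_refuse(accumulated_views, total_users, threshold):
    """Refuse a first access when the projected page view would exceed
    the preset threshold."""
    access_frequency = accumulated_views / total_users     # views per user
    expected_views = accumulated_views + access_frequency  # one more user arrives
    return expected_views > threshold

print(should_refuse(9_900, 100, threshold=10_000))  # False: 9999.0 <= 10000
print(should_refuse(9_990, 100, threshold=10_000))  # True: 10089.9 > 10000
```

Admitted requests proceed to normal service logic; only new first-time accesses are shed near the bottleneck, matching the abstract's claim that users already in the system are unaffected.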

Method and device for realizing persistence in a stream computing application

The invention discloses a method and a device for realizing persistence in a stream computing application. The method comprises the following steps: when the current batch of messages is successfully consumed, whether a persistence operation needs to be performed is judged according to the first initial offset and the preset persistence interval; when the persistence operation needs to be performed, persistence processing is carried out from the message position indicated by the second initial offset; and after the persistence succeeds, the first initial offset and the second initial offset are updated to the initial offset of the next batch of messages. Because the persistence operation is performed only after each persistence interval, the disk persistence time interval is lengthened, so real-time calculation efficiency is greatly improved. During fault recovery, at most the batch of messages within one persistence interval needs to be consumed again; the performance bottleneck caused by frequent disk writing in existing synchronous persistence is avoided; real-time message throughput is improved by an order of magnitude; and meanwhile, the delay caused by fault recovery is reduced to the order of seconds, so real-time performance is not affected.
Owner:阿里巴巴华南技术有限公司
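The offset logic above can be sketched in miniature: persistence is triggered only when the consumed offset has advanced past the configured interval, trading a bounded replay window on failure for far fewer disk writes. Function and variable names are illustrative.

```python
# Minimal sketch of interval-based persistence for consumed message offsets.

def maybe_persist(consumed_offset, persisted_offset, interval):
    """Return the new persisted offset, persisting only when at least
    `interval` messages have been consumed since the last persistence."""
    if consumed_offset - persisted_offset >= interval:
        # ... write state for messages up to consumed_offset to disk ...
        return consumed_offset
    return persisted_offset          # skip the disk write this batch

p = 0
for batch_end in (100, 250, 480, 530):
    p = maybe_persist(batch_end, p, interval=200)
print(p)                             # 480: on failure, at most one interval is replayed
```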

Method for solving performance bottleneck of network management system in communication industry based on cloud computing technology

Inactive · CN102624558A · Fix performance issues · Solve difficult system performance problems · Data switching networks · Virtualization · Third party
The invention provides a method for solving the performance bottleneck of a network management system in the communication industry based on cloud computing technology. In the method, cloud computing determines the guiding principle of the network management system. The method comprises the following steps: 1) for hardware and third-party software, a mainstream virtualization technology which supports a unified cloud computing implementation mode is adopted; and 2) for system software, a design mode suited to distributed deployment is adopted. At three levels (hardware, middleware and application software), cloud computing technology is utilized to effectively solve the performance problems of the network management system. By designing the architecture and deploying the application system based on cloud computing, system performance problems which can hardly be solved by a traditional system can be effectively addressed; at the same time, cloud computing brings advantages such as low cost and high scalability, and, from a macroscopic view, great problems can be solved with existing technologies without spending great time on certain technical details.
Owner:INSPUR TIANYUAN COMM INFORMATION SYST CO LTD

Log organization structure clustered based on transaction aggregation and method for realizing corresponding recovery protocol thereof

The invention discloses a log organization structure clustered according to transaction aggregation, and a recovery protocol based on it, which can be applied to the transactional data management system of a large computer. A log file is sequentially organized into a plurality of log fragments; each log fragment stores the log content of a single transaction and keeps the transaction number as well as a pointer to the transaction's preceding log fragment, and the data page numbers involved in the log entries of the same fragment are stored in the form of an array. While the system is operating, each transaction writes only its own log fragment, which is written into the log file when the transaction commits. In the recovery process, the system can be restored to a durable and consistent state by scanning all the log fragments for redo and rolling back the log fragments of all active transactions. The bottleneck produced when writing logs in a traditional transactional data management system is resolved, and the log volume of the system can be effectively reduced.
Owner:天津神舟通用数据技术有限公司

Block chain parallel transaction processing method and system based on isomorphic multi-chain, and terminal

The invention relates to a parallel transaction processing method based on isomorphic multi-chains, which comprises the following steps: constructing one or more sub-network chains, each sub-network chain having the same blockchain framework; dividing a logical transaction to be executed into at least one actual transaction; and distributing the actual transactions to the corresponding subnet chains to carry out parallel transaction processing. Transaction processing mainly comprises one-way asset transfer, Dapp application compatibility, and asset aggregation and dispersion. The overall architecture is divided into two parts, a client and a blockchain platform; the client constructs optimized parallel transactions according to statistical information from the blockchain platform, user requirements are considered comprehensively, and the overall performance of the system is improved. Meanwhile, information of a user account is tracked, related states are maintained, and off-chain communication is achieved. Aiming at the performance problem of a single chain, a logical-transaction parallel execution algorithm is innovatively provided, the performance optimization bottleneck in the original blockchain technical architecture is overcome, and the upper limit on global transaction throughput is raised.
Owner:INST OF COMPUTING TECH CHINESE ACAD OF SCI

System and method for analyzing and optimizing computer system performance utilizing observed time performance measures

A data processing system and method analyze the performance of the system's components by obtaining measures of usage of the components over time, as well as the electrical requirements of those components, to recommend an optimal configuration. The location in the system and the time duration for which any one or more components are in a performance-limiting or bottleneck condition are determined. Based on the observed bottlenecks, their times of occurrence and their durations, more optimal configurations of the system are recommended. The present invention is particularly adapted for use in data processing systems where a peripheral component interconnect (PCI) bus is used.
Owner:LENOVO GLOBAL TECH INT LTD

Method and system for guaranteeing application service quality in distributed environment

Active · CN104486129A · Reduce overhead · Reduce request response time fluctuations · Data switching networks · QoS quality of service · Critical path method
The invention provides a method and a system for positioning bottleneck nodes and ensuring application service quality in a distributed environment. The method for positioning a bottleneck node comprises the following steps: calculating a delay fluctuation value for each node in a processing stage on the critical path of a service, and determining the bottleneck node according to the delay fluctuation value. The service critical path is obtained by processing the critical paths of service requests over a period of time; the delay fluctuation value is obtained from the time each node spends processing requests in that period. The method for ensuring application service quality comprises the following steps: positioning the bottleneck node of a service exhibiting long-tail delay; checking whether the delay fluctuation value of the bottleneck node exceeds a predefined threshold; and, according to the result, carrying out fault diagnosis, or throttling or accelerating the service requests to the bottleneck node. With the method and system, request response time fluctuation is reduced, long-tail delay is reduced, and the cost of optimizing nodes one by one, step by step, is also reduced.
Owner:INST OF COMPUTING TECH CHINESE ACAD OF SCI
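The positioning step above can be sketched as follows. The abstract does not define the fluctuation measure, so using the standard deviation of per-request processing times is an assumption here, and the node names and timings are invented.

```python
import statistics

def bottleneck_node(processing_times):
    """processing_times: {node: [per-request processing time, ...]} for
    nodes on the service's critical path. Returns the node whose delay
    fluctuates most, plus all fluctuation values."""
    fluctuation = {node: statistics.pstdev(times)
                   for node, times in processing_times.items()}
    return max(fluctuation, key=fluctuation.get), fluctuation

times = {"gateway": [10, 11, 10, 12],
         "db":      [30, 80, 25, 120],   # highly variable: long-tail suspect
         "cache":   [2, 2, 3, 2]}
node, fluct = bottleneck_node(times)
print(node)                              # the db node is flagged as the bottleneck
```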

Lock-free, parallel remembered sets

A multi-threaded garbage collector operates in increments and maintains, for each of a plurality of car sections in which it has divided a portion of the heap, a respective remembered set of the locations at which it has found references to objects in those car sections. It stores the remembered sets in respective hash tables, whose contents it updates in a scanning operation, executed concurrently by multiple threads, in which it finds references and records their locations in the appropriate tables. Occasionally, one of the threads replaces the hash table for a given car section. Rather than wait for the replacement operation to be completed, a thread that has an entry to be made into that car section's remembered set accesses the old table to find out whether the entry has already been made. If so, no new entry is necessary. Otherwise, it places an entry into the old table and sometimes places an insertion record containing that entry into a linked list associated with that car section. When the reclaiming thread has finished transferring information from the old table to the new table, it transfers information from the linked list of insertion records into the new table, too. In this way, the replacement process is not a bottleneck to other threads' performing update operations.
Owner:ORACLE INT CORP
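The replacement protocol above can be illustrated with a greatly simplified, single-process sketch: while one thread rebuilds a car section's table, other threads do not wait; they consult the old table, record any missing entry there, and append an insertion record that the rebuilding thread merges afterwards. All class and method names are illustrative, and the real mechanism is concurrent and lock-free, which this sketch does not attempt to reproduce.

```python
# Simplified, non-concurrent sketch of remembered-set updates during
# hash-table replacement. Names are invented for illustration.

class RememberedSet:
    def __init__(self):
        self.old_table = {}
        self.insertion_log = []      # entries made while replacement runs

    def add_during_replacement(self, location):
        """Record a reference location without waiting for replacement."""
        if location in self.old_table:
            return False             # entry already made; nothing to do
        self.old_table[location] = True
        self.insertion_log.append(location)
        return True

    def finish_replacement(self):
        """Rebuilding thread: copy old entries, then merge the log."""
        new_table = dict(self.old_table)
        for loc in self.insertion_log:
            new_table[loc] = True
        self.insertion_log.clear()
        return new_table

rs = RememberedSet()
rs.add_during_replacement("0x1000")
rs.add_during_replacement("0x1000")  # duplicate: no new entry needed
table = rs.finish_replacement()
print(len(table))                    # 1: the duplicate produced no extra entry
```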

Novel broad sense parallel connection platform structure

The invention relates to a novel broad sense parallel connection mechanism. Compared with the traditional parallel connection mechanism, the invention has advantages such as a large workspace and strong practicability. The traditional parallel connection mechanism has a series of defects such as a single form and a narrow application range; as a result, after about half a century of development, the parallel connection mechanism field faces a major research bottleneck. The invention grasps the essence of the parallel connection mechanism and generates a novel parallel connection form, called the broad sense parallel connection mechanism, by changing the structural characteristics of the movable platforms of the parallel connection mechanism. The mechanism type is based on the tetrahedron, the simplest and most stable spatial structure, and generates novel parallel connection mechanisms through serial or parallel connection of a plurality of tetrahedra. On one hand, the mechanism widens the application fields of parallel connection mechanisms and makes them promising for novel fields such as spatial operation arms and mobile robots; on the other hand, the invention greatly enriches the types of parallel connection mechanisms, widens their construction principles, and has high value.
Owner:高金磊

Printing system and bottleneck obviation

Inactive · US20070177189A1 · Maximize run time · Digital output to print units · Print media · Job stream
A printing system capable of processing a plurality of job streams and sub-jobs within a job stream. The system includes one or more marking engines, a hopper, and one or more print media destinations. The system further provides a job scheduler for determining a schedule for processing queued print sub-jobs of a job stream, using a utility function based on dwell time and a system model indicative of the plurality of interconnected processing units. The plurality of sub-jobs employ one or more of the plurality of sheet processing paths, including at least one pre-print batch and at least one direct print batch. A sheet itineraries processor is provided for causing the plurality of interconnected processing units to concurrently move sheets of the concurrent sub-jobs along selected sheet processing paths, to process the sheets, and to deliver the at least one pre-print batch to the hopper and the at least one direct print batch to the destination.
Owner:XEROX CORP

Cloud service method for taxation cloud computing network billing IM (Instant Messaging) online customer system

The invention provides a cloud service method for a taxation cloud computing network billing IM (instant messaging) online customer system, which adopts cloud computing technology for the overall structuring of the service. The method comprises the following steps. Resources are effectively divided through role positioning of the virtual computing nodes of the tax-industry cloud platform, so that the limits of the single service structure of the original industry IM software system are broken through and free, flexible configuration of nodes is realized; that is, a configured node can be used at once without program changes, solving the problems that the service structure cannot be extended and an effective service load cannot be formed. Because the various functional modules are developed in a distributed computing language, they can form code-segment mirrors in the cloud computing platform, and the platform can dynamically allocate resources to them, solving the problems of wasted and unevenly distributed system resources. With the adoption of an independently developed database reverse message proxy module, a message mechanism replaces the original polling mechanism for state management information, so that system efficiency is greatly improved and the bottleneck restricting system performance is broken through.
Owner:日照浪潮云计算有限公司