
43 results about "Queuing network model" patented technology

Method of incorporating DBMS wizards with analytical models for DBMS servers performance optimization

Disclosed is an improved method and system for DBMS server performance optimization. According to some approaches, the method and system combine DBMS wizard recommendations with analytical queuing network models to evaluate different alternatives and select the optimal performance-management solution together with a set of expectations. This enhances autonomic computing by generating periodic control measures, including recommendations to add or remove indexes and materialized views, change the level of concurrency and workload priorities, and improve the balance of resource utilization. The result is a framework for continuous workload management: measuring the difference between actual and expected results, understanding the cause of the difference, finding a new corrective solution, and setting new expectations.
Owner:DYNATRACE
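The selection step above can be illustrated with a minimal sketch: each candidate tuning action changes a server's effective service rate, and a simple M/M/1 model predicts the resulting response time so alternatives can be compared against expectations. The rates and the "add index" speedup below are invented for illustration; the patent's analytical model is more elaborate.

```python
# Minimal sketch: compare tuning alternatives with an M/M/1 queue model.
# Arrival/service rates are hypothetical, not measured DBMS values.

def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue; infinite if saturated."""
    if arrival_rate >= service_rate:
        return float("inf")  # unstable: queue grows without bound
    return 1.0 / (service_rate - arrival_rate)

# Baseline server vs. a candidate where an added index speeds up service.
baseline = mm1_response_time(arrival_rate=80.0, service_rate=100.0)
with_index = mm1_response_time(arrival_rate=80.0, service_rate=120.0)

# Pick the alternative whose predicted response time is lowest.
best = min(("baseline", baseline), ("with_index", with_index),
           key=lambda t: t[1])
```

The same comparison could be repeated each control period, so the difference between predicted and measured response time drives the next corrective recommendation.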

Goods location allocating method applied to automatic warehousing system of multi-layer shuttle vehicle

The invention discloses a goods location allocation method for a multi-layer shuttle automated warehousing system. The method comprises the following steps: first, generating plane layout structure data for the system according to the number of shelves and aisles; then analyzing the waiting time of a shuttle executing an outbound task and the idle time of the hoist, and establishing an open queuing network model describing the system; analyzing, by decomposition, the relationship among the shuttle waiting time, the hoist idle time, and inbound/outbound task times; determining that tasks with higher arrival rates should be assigned lower goods locations, and that the most highly correlated goods should be allocated to different layers so that multiple shuttles can provide service simultaneously; finally, proposing a principle of dividing storage zones by item correlation, establishing a correlation matrix of outbound items, clustering the items with an ant colony algorithm, and combining and arranging the storage zones in a two-dimensional plane according to the queuing network model's analysis results, thereby allocating goods locations. The method effectively shortens shuttle waiting time and hoist idle time, increasing equipment utilization and distribution-center throughput.
Owner:SHANDONG UNIV
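The layer-assignment rule can be sketched with a toy correlation matrix: the most strongly correlated item pairs are spread across different layers so several shuttles can serve them in parallel. This greedy round-robin is a stand-in for the patent's ant colony clustering; the items and correlation scores are invented.

```python
# Sketch: spread highly correlated items across layers (greedy stand-in
# for the patent's ant colony clustering). Data below is hypothetical.

corr = {("A", "B"): 9, ("A", "C"): 2, ("B", "C"): 5}  # co-retrieval counts
layers = 2

def spread_by_correlation(corr, layers):
    """Assign members of the strongest pairs to different layers first."""
    assignment = {}
    next_layer = 0
    for (i, j), _ in sorted(corr.items(), key=lambda kv: -kv[1]):
        for item in (i, j):
            if item not in assignment:
                assignment[item] = next_layer % layers
                next_layer += 1
    return assignment

assignment = spread_by_correlation(corr, layers)
```

Here A and B (the most correlated pair) land on different layers, so outbound tasks touching both can be served by two shuttles simultaneously.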

On-demand group communication services with quality of service (QoS) guarantees

The present invention broadly contemplates addressing QoS concerns in overlay design to account for the last-mile problem. In accordance with the present invention, a simple queuing network model for bandwidth usage in the last-mile bottlenecks is used to capture the effects of asymmetry and of contention for bandwidth on the outgoing link, and to characterize network throughput and latency. Using this characterization, computationally inexpensive heuristics are preferably used to organize end-systems into a multicast overlay that meets specified latency and packet loss bounds, given a specific packet arrival process.
Owner:HULU
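The last-mile characterization can be sketched as an M/M/1 queue on the asymmetric uplink: each child a peer forwards to adds another stream to the outgoing link, so the latency bound caps the overlay fan-out. Rates and the latency bound below are illustrative assumptions, not the patent's heuristics.

```python
# Sketch: M/M/1 latency on the outgoing (uplink) last-mile bottleneck,
# and the fan-out it permits. All numbers are hypothetical.

def uplink_latency(packet_rate, uplink_capacity):
    """Mean queueing-plus-service latency on the uplink; inf if saturated."""
    if packet_rate >= uplink_capacity:
        return float("inf")
    return 1.0 / (uplink_capacity - packet_rate)

def max_fanout(base_rate, capacity, latency_bound):
    """Largest number of children a peer can feed within the bound."""
    f = 0
    while uplink_latency(base_rate * (f + 1), capacity) <= latency_bound:
        f += 1
    return f

# 100 pkt/s uplink, 20 pkt/s per forwarded stream, 50 ms latency bound.
fanout = max_fanout(base_rate=20.0, capacity=100.0, latency_bound=0.05)
```

An overlay builder could use such a per-node fan-out limit when placing end-systems in the multicast tree.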

Delay-guaranteed NFV cloud platform dynamic capacity expanding and shrinking method and system

The invention discloses a delay-guaranteed NFV cloud platform dynamic scaling method and system. The method comprises: collecting network configuration information, tenant configuration information, and operation logs of an NFV cloud platform; according to the collected information, predicting the average packet arrival rate of each tenant in the next time period using a log-linear Poisson autoregression model for traffic prediction, and analyzing the average packet processing delay of each service chain using a classified Boxson queuing network model based on those predicted arrival rates; according to the traffic prediction result and the per-service-chain processing delay, making dynamic scaling decisions for the NFV cloud platform, the decision information comprising the number and placement of the various virtual network function instances and a traffic forwarding rule; and translating the decision information into instructions sent respectively to the SDN controller and the NFV cloud platform controller to execute the scaling operation.
Owner:SHENZHEN GRADUATE SCHOOL TSINGHUA UNIV
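The traffic-prediction step can be sketched with a one-lag log-linear Poisson autoregression: the next period's arrival rate is log-linear in the previous observed count. The coefficients below are assumed for illustration; the patent fits them from the platform's operation logs.

```python
# Sketch of a one-lag log-linear Poisson autoregressive forecast:
#   log(lambda_t) = a + b * log(1 + y_{t-1})
# Coefficients a, b are hypothetical, not fitted values.
import math

def predict_next_rate(prev_count, a=0.5, b=0.9):
    """Predicted mean packet arrival rate for the next period."""
    return math.exp(a + b * math.log(1.0 + prev_count))

recent_counts = [100, 120, 115]           # packets observed per period
next_rate = predict_next_rate(recent_counts[-1])
```

The predicted rate would then feed the queuing network model that estimates per-service-chain delay before the scaling decision is made.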

Performance evaluation method for a KVM virtualization server

The invention relates to a performance evaluation method for a KVM virtualization server. The method comprises: first, at the application level, selecting response time and throughput as the performance evaluation indexes of the KVM virtualization server, according to the quality-of-service (QoS) parameters in the service level agreement and the performance measures common in the actual application environment; then, with the help of an open queuing network model, establishing a virtualization server performance evaluation model that combines the server's load characteristics with KVM's resource scheduling and resource virtualization mechanisms; and finally, based on the performance evaluation model, illustrating how to compute the performance indexes and evaluate the performance of a virtual machine on a Linux operating system. The method solves the problem of evaluating virtualization server performance under KVM, improves evaluation efficiency, avoids building a complex performance testing environment, reduces evaluation cost, and yields performance measures for the main components of the virtualization server, helping the user find system performance bottlenecks.
Owner:兰雨晴
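The open-network evaluation can be sketched by treating each server resource as an independent M/M/1 station: per-resource residence time is D/(1 − U), and system response time is their sum. The service demands below are illustrative, not measured KVM values.

```python
# Sketch: open queuing network evaluation of a virtualized server.
# Each resource is an M/M/1 station; demands (seconds/request) are
# hypothetical placeholders.

def open_qn_metrics(arrival_rate, demands):
    """Throughput and response time for an open product-form network."""
    response = 0.0
    for name, d in demands.items():
        util = arrival_rate * d
        assert util < 1.0, f"{name} is saturated"
        response += d / (1.0 - util)   # residence time at this resource
    return {"throughput": arrival_rate, "response_time": response}

m = open_qn_metrics(arrival_rate=10.0, demands={"cpu": 0.05, "disk": 0.03})
```

The resource with the largest service demand (here the CPU) saturates first as the arrival rate grows, which is exactly the bottleneck such a model exposes.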

Predicting Performance Regression of a Computer System with a Complex Queuing Network Model

An approach is provided for predicting system performance. The approach identifies a Queuing Network Model (QNM) corresponding to a clustered system that handles a plurality of service demands using a plurality of parallel server nodes processing a workload for a quantity of users. A workload description is received that includes server demand data. Performance of the clustered system is predicted by transforming the QNM into a linear model: serializing the parallel services as sequential services, identifying transaction groups corresponding to each of the server nodes, and distributing the workload among the transaction groups across the plurality of nodes. The approach then analytically solves the linear model, yielding a predicted resource utilization (RU) and a predicted response time (RT).
Owner:IBM CORP
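Once the parallel nodes are serialized into sequential stations, the linear model can be solved with exact Mean Value Analysis (MVA), a standard analytical technique for closed product-form networks; whether the patent uses MVA specifically is not stated, and the service demands below are assumptions.

```python
# Sketch: exact MVA on the serialized (linear) model — stations visited
# sequentially by N users. Demands are hypothetical seconds/visit.

def mva(demands, users, think_time=0.0):
    """Returns (throughput, response time, per-station utilization)."""
    q = [0.0] * len(demands)              # mean queue length per station
    for n in range(1, users + 1):
        r = [d * (1 + q[i]) for i, d in enumerate(demands)]
        rt = sum(r)                       # predicted response time (RT)
        x = n / (rt + think_time)         # predicted throughput
        q = [x * ri for ri in r]          # Little's law per station
    return x, rt, [x * d for d in demands]  # last entry: utilization (RU)

throughput, response_time, utilization = mva([0.1, 0.2], users=2)
```

The recurrence adds one user at a time, so the cost is linear in the user count and station count, which is what makes the linearized model cheap to solve analytically.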

Performance predicting method for software system based on UML (Unified Modeling Language) architecture

The invention discloses a performance predicting method for a software system based on a UML (Unified Modeling Language) architecture. The method comprises the following steps: first, establishing a UML model of the software system; adding stereotypes and tagged values to the UML diagrams to convert them into labeled diagrams, generating a UML-SPT (UML Profile for Schedulability, Performance, and Time) model; generating a queuing network model and its algorithm from the UML model; and finally, computing the software performance parameter values by solving the queuing network model's performance parameters, thereby predicting software performance. With this method, a user can obtain the software's performance indexes by building the UML model, adding the stereotypes and tagged values, and deriving the UML-SPT model. The complexity of software performance prediction is thus greatly reduced and development efficiency is improved.
Owner:南通壹选智能科技有限公司

Urban rail transit passenger flow control optimization method based on fluid queuing network

Active · CN112906179A · Effects: reduced travel time costs; reduced queuing overflow · Classifications: Forecasting; Design optimisation/simulation · Topics: Simulation; Queuing network model
The invention discloses an urban rail transit passenger flow control optimization method based on a fluid queuing network. The method comprises the following steps: S1, building an urban rail transit passenger flow control optimization model based on the fluid queuing network, including: S11, obtaining passenger travel OD data and subway line parameters; S12, constructing an urban rail transit fluid queuing network model; S13, determining passenger flow control decision variables; S14, determining passenger flow control constraint conditions; S15, constructing an optimization objective function; S2, solving the optimization model, including: S21, calculating each population individual's objective function; S22, judging whether the individual fitness meets a termination condition: if yes, ending; otherwise, proceeding to the next step; and S23, performing selection, crossover, and mutation. Building on an urban rail transit passenger flow control scheme, a passenger flow control optimization model is established in combination with the fluid queuing network model, reducing the possibility of platform queuing overflow and queuing explosion.
Owner:SOUTHWEST JIAOTONG UNIV +1
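The solution loop S21–S23 can be sketched as a genetic algorithm over flow-control decision vectors. The fitness function below is a placeholder (sum of squared inflows, minimized), not the patent's travel-time objective, and all parameters are assumptions.

```python
# Skeleton of steps S21–S23: evaluate (S21), test termination (S22),
# then select/crossover/mutate (S23). Objective is a stand-in.
import random

random.seed(0)  # deterministic for the example

def fitness(ind):
    return sum(x * x for x in ind)  # placeholder objective (minimize)

def evolve(pop, generations=50, pm=0.1):
    for _ in range(generations):
        pop.sort(key=fitness)                 # S21: evaluate population
        if fitness(pop[0]) < 1e-3:            # S22: termination condition
            break
        survivors = pop[: len(pop) // 2]      # S23: selection
        children = []
        while len(survivors) + len(children) < len(pop):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(a))           # crossover
            child = a[:cut] + b[cut:]
            if random.random() < pm:                    # mutation
                i = random.randrange(len(child))
                child[i] *= random.uniform(0.5, 1.0)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve([[random.uniform(0.0, 5.0) for _ in range(4)]
               for _ in range(20)])
```

Because the current best individual always survives selection, the best fitness is non-increasing across generations, which is what makes the S22 termination test meaningful.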

Storage space allocation method for multi-storey shuttle car automatic storage system

The invention discloses a cargo location allocation method for a multi-storey shuttle car automated storage system. The method includes: first, generating the plane layout structure data of the system according to the number of shelves and roadways; analyzing the waiting time of a shuttle car executing an outbound task and the idle time of the hoist; establishing an open queuing network model to describe the system; using the decomposition method to analyze the relationship between shuttle car waiting time, hoist idle time, and warehouse entry/exit times; and determining that goods whose tasks have higher shuttle-service arrival rates should be placed in lower cargo spaces, while highly correlated goods should be divided across different layers to ensure that multiple shuttles can serve simultaneously. Finally, the principle of dividing the storage area by item correlation is proposed: a correlation matrix of outgoing items is established, the items are clustered with the ant colony algorithm, and the storage areas are combined and arranged in the two-dimensional plane according to the queuing network model's analysis results, realizing the storage allocation of goods. Implementing the present invention effectively reduces shuttle car waiting time and hoist idle time, thereby improving equipment utilization and distribution-center throughput.
Owner:SHANDONG UNIV

Performance evaluation method and device for multi-layer shuttle vehicle system

The invention provides a performance evaluation method and device for a multi-layer shuttle vehicle system. The method comprises the following steps: establishing an open queuing network model of the transfer vehicle system and an open queuing network model of the loop line system; calculating, from each model, the corresponding throughput and order completion period; and, with the number of layers and the number of roadways fixed, evaluating the optimal multi-layer shuttle vehicle system according to the throughput and order completion period given by each model.
Owner:SHANDONG UNIV

Adaptive scaling control system and method for web application in cloud computing platform

The invention discloses an adaptive scaling control system and method for web applications in a cloud computing platform, used to dynamically adjust computing resources according to load changes. The system comprises a performance monitor, a load database, a performance model computing module, an optimization controller, and an automatic configuration module. The method proceeds as follows: first, the performance monitor constructs a layered queuing network model according to the structure of the web application and its request-processing procedure; the web application is deployed on a real cloud computing platform, and a record label is inserted into each layer's components to record the actual execution time of each request at each resource of each component, yielding the parameters needed by the web application performance model in the performance model computing module; when the application load changes, the optimization controller computes the application's performance under each resource configuration scheme via a heuristic search algorithm and finds the configuration scheme that meets the QoS (quality of service) requirement at minimal cost, taking it as the optimal configuration scheme; finally, the automatic configuration module readjusts the resources needed by each component of the application.
Owner:南京大学镇江高新技术研究院
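The optimization step can be sketched as a search over candidate VM counts: predict response time for each configuration with a performance model and keep the cheapest plan meeting the QoS bound. The crude aggregate-M/M/1 predictor, rates, and prices below are assumptions, far simpler than the patent's layered queuing network model.

```python
# Sketch: cheapest configuration meeting a QoS response-time bound.
# Enumeration stands in for the patent's heuristic search; all numbers
# are hypothetical.

def predicted_response(arrival_rate, vms, per_vm_rate=50.0):
    """Crude M/M/1 on the aggregate capacity of `vms` identical VMs."""
    capacity = vms * per_vm_rate
    if arrival_rate >= capacity:
        return float("inf")
    return 1.0 / (capacity - arrival_rate)

def cheapest_plan(arrival_rate, qos_rt, max_vms=10, vm_cost=1.0):
    for vms in range(1, max_vms + 1):   # fewest VMs first = lowest cost
        if predicted_response(arrival_rate, vms) <= qos_rt:
            return {"vms": vms, "cost": vms * vm_cost}
    return None  # no feasible configuration within max_vms

plan = cheapest_plan(arrival_rate=120.0, qos_rt=0.05)
```

Searching in order of increasing cost means the first feasible configuration found is also the optimal one under this simple cost model.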

Heterogeneous industrial field bus fusion method and system

The invention discloses a heterogeneous industrial field bus fusion method and a heterogeneous industrial field bus fusion system, which remedy the poor interoperability, maintainability, and reliability caused by the coexistence of multiple field bus standards. For a heterogeneous industrial field bus protocol, a master station/slave station control network is constructed, and message models for Modbus RTU (Remote Terminal Unit) and CANopen are established. With the support of the data structures and processing functions provided by the chosen RTOS, parameters are obtained and a packing and unpacking mechanism for Modbus RTU and CANopen information frames is constructed based on a queuing network model; as data are continuously exchanged between the two protocols, the management of interaction tasks and of the buffer area is abstracted into a multi-task, multi-queue, multi-service mode. With this method and system, equipment attached to the various industrial field buses can be conveniently fused into an IT network, realizing industrial field bus network intercommunication and fusion.
Owner:SHANGHAI JIAO TONG UNIV
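The multi-queue buffering idea can be sketched with bounded queues per protocol direction and a pack/unpack step in between. The frame layouts below are invented placeholders, not real Modbus RTU or CANopen encodings (which include CRCs, function codes, and 11-bit COB-IDs).

```python
# Sketch: per-direction bounded queues bridging two protocols.
# Frame formats are hypothetical stand-ins for Modbus RTU / CANopen.
from collections import deque

modbus_rx = deque(maxlen=64)   # frames arriving from the Modbus side
canopen_tx = deque(maxlen=64)  # frames queued toward the CANopen side

def unpack_modbus(frame):
    # placeholder: first byte as slave address, rest as payload
    return frame[0], frame[1:]

def pack_canopen(node_id, payload):
    # placeholder: prepend a pseudo COB-ID byte derived from the node id
    return bytes([0x40 | node_id]) + payload

def bridge_once():
    """One service step: move a frame across the protocol boundary."""
    if modbus_rx:
        addr, payload = unpack_modbus(modbus_rx.popleft())
        canopen_tx.append(pack_canopen(addr, payload))

modbus_rx.append(bytes([0x05, 0xAA, 0xBB]))
bridge_once()
```

In the RTOS setting each queue would be serviced by its own task, which is the "multi-task, multi-queue, multi-service" abstraction; the bounded `maxlen` plays the role of the managed buffer area.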

Method for predicting and optimizing efficiency of multi-workbin robot warehouse system

Pending · CN114202270A · Effects: improve performance; resource allocation conforms to · Classifications: Forecasting; Resources · Topics: Semi open; Algorithm
The invention provides a method for predicting the efficiency of a multi-workbin robot warehouse system. The method comprises the following steps: building a semi-open queuing network model corresponding to order picking and storage in the multi-workbin robot warehouse, with the model's parameters mapped to the warehouse system's resource configurations; approximately aggregating the semi-open queuing network model using an approximate mean-value analysis method, yielding an approximate semi-open queuing network model with only two service nodes whose service rates depend on the number of robots at those nodes; taking the approximate model as the mathematical model for evaluating the efficiency of the multi-workbin robot warehouse system, and solving it with the matrix-geometric method to obtain its steady-state distribution under the current resource configuration; and predicting, from the steady-state distribution, the system's efficiency indexes under the current configuration: the average throughput time of the warehouse, robot utilization, robot queuing time at the workstations, and the busy rate of pickers at the workstations.
Owner:SHENZHEN GRADUATE SCHOOL TSINGHUA UNIV

Queuing network model training method, queuing optimization method, equipment and medium

The invention discloses a training method for a queuing network model, a queuing optimization method, a device, and a medium. The training method comprises the following steps: acquiring a real-time queuing network graph; inputting the real-time queuing network graph as training data into a first graph neural network model to be trained, where the first graph neural network model contains a pretrained first model corresponding to the processor; during training, locking the first model's parameters and training the first graph neural network model with a preset first loss function to obtain a first optimized network corresponding to the encoder and a second optimized network corresponding to the decoder; and then releasing the first model's parameters and continuing to train the first graph neural network model with the first loss function to obtain the final queuing network model. By reusing the pretrained processor, training the encoder and decoder on field data, and then fine-tuning the whole model on the same data, the method achieves relatively high prediction accuracy from comparatively little training data.
Owner:SHANGHAI CLEARTV CORP LTD
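The two-stage schedule can be sketched in framework-free Python: stage 1 updates only encoder/decoder parameters while the pretrained processor is locked; stage 2 unlocks everything and continues training. The `Param` class and the unit gradients are invented stand-ins for a deep learning framework's trainable tensors.

```python
# Sketch of freeze-then-release training. `Param` is a hypothetical
# stand-in for a framework parameter; gradients here are dummies.

class Param:
    def __init__(self, value, trainable=True):
        self.value = value
        self.trainable = trainable

def train_step(params, grads, lr=0.1):
    """One gradient-descent step; frozen params receive no update."""
    for p, g in zip(params, grads):
        if p.trainable:
            p.value -= lr * g

encoder = [Param(1.0)]
processor = [Param(2.0, trainable=False)]  # stage 1: pretrained, locked
decoder = [Param(3.0)]
all_params = encoder + processor + decoder

train_step(all_params, grads=[1.0, 1.0, 1.0])  # stage 1 step

for p in processor:                            # stage 2: release
    p.trainable = True
train_step(all_params, grads=[1.0, 1.0, 1.0])  # stage 2 fine-tuning step
```

After stage 1 only the encoder and decoder have moved; after stage 2 all three components update, mirroring the lock/release schedule in the claims.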