
30 results about "Throughput (business)" patented technology

Throughput is the rate at which a product moves through a production process and is consumed by the end user, usually measured in the form of sales or usage statistics. The goal of most organizations is to minimize the investment in inputs and operating expenses while increasing the throughput of their production systems. Successful organizations that seek to gain market share strive to match throughput to the rate of market demand for their products.

Resource scheduling method and system based on multiple process models

Active | CN111223001A | Realize the optimization of downhole scheduling | Design optimisation/simulation | Resources | Optimal scheduling | Operations research
The invention discloses a resource scheduling method and system based on multiple process models. The method comprises the following steps: 1) establishing a mining production resource model that describes the basic, functional and constraint characteristics of each production resource, the production resources comprising process resources, discrete resources and batch resources; 2) establishing production business models, including a process-type business model, a discrete-type business model and a batch-type business model; 3) calculating, by the intelligent scheduling module, constraint conditions for the different position distributions on the basis of the discrete-type business model; 4) planning, by the intelligent scheduling module, a path and calculating the energy demand and the maximum throughput on the path according to the constraint condition of each position and the upstream and downstream node information of each position node; and 5) calculating, by the intelligent scheduling module, the batch-type business output within each time-sequence segment according to the batch-type business model, and selecting the route with the minimum energy demand as the optimal scheduling plan while taking the maximum throughput as the constraint condition, so that underground scheduling can be optimized.
Owner:INST OF SOFTWARE - CHINESE ACAD OF SCI
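
A minimal sketch, in Python, of the route-selection rule in step 5 above: among the candidate paths, keep those that reach the maximum throughput and choose the one with the lowest energy demand. The `Route` class and its fields are illustrative assumptions, not structures named in the patent.

```python
from dataclasses import dataclass

@dataclass
class Route:
    path: list            # sequence of position nodes
    max_throughput: float # maximum throughput achievable on this path
    energy_demand: float  # energy required to run the path

def select_optimal_route(candidates: list) -> Route:
    """Pick the route with minimum energy demand among those that
    achieve the maximum available throughput."""
    best_throughput = max(r.max_throughput for r in candidates)
    feasible = [r for r in candidates if r.max_throughput == best_throughput]
    return min(feasible, key=lambda r: r.energy_demand)

routes = [
    Route(["A", "B", "D"], max_throughput=120.0, energy_demand=40.0),
    Route(["A", "C", "D"], max_throughput=120.0, energy_demand=35.0),
    Route(["A", "D"],      max_throughput=90.0,  energy_demand=20.0),
]
print(select_optimal_route(routes).path)  # ['A', 'C', 'D']
```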

Log generation method and device, computer equipment and storage medium

The embodiment of the invention discloses a log generation method and device, computer equipment and a storage medium. The method comprises the steps of: obtaining a log file generated by executing a target business, the log file comprising at least one business log; identifying whether the log file comprises a preset target service log; and, if the log file comprises the target service log, uploading the log file, otherwise clearing it. Whether a log file is uploaded is determined by taking the presence of the target service log as the uploading condition: only log files containing the specified log are uploaded and stored, while other logs are cleared in time and no longer stored. This improves throughput, enables batch submission and batch processing, and has very little influence on user service processing. Meanwhile, because logs are selectively controlled by the uploading condition, the log volume can be greatly reduced and log availability improved.
Owner:CHINA MOBILE HANGZHOU INFORMATION TECH CO LTD +1
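
A minimal sketch of the upload-or-clear decision the abstract describes, assuming the target service log can be recognized by a marker string; `TARGET_SERVICE_MARKER` and the `upload_file` callable are placeholders, not part of the patent.

```python
import os

TARGET_SERVICE_MARKER = "TARGET_SERVICE"  # assumed way of recognizing the target service log

def handle_log_file(path: str, upload_file) -> None:
    """Upload the log file only if it contains the target service log;
    otherwise clear it so it is not stored."""
    with open(path, "r", encoding="utf-8") as fh:
        contains_target = any(TARGET_SERVICE_MARKER in line for line in fh)
    if contains_target:
        upload_file(path)   # batch submission would happen inside the uploader
    else:
        os.remove(path)     # clear logs that do not meet the uploading condition
```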

Garbage recycling method and device

The invention provides a garbage recycling method. The storage system determines a first time period according to the business pressure value of the storage system during a historical time period, where the business pressure value during the first time period is lower than a set threshold, and the business pressure value during the historical time period is obtained from any one or more of IOPS, IO size, data read-write proportion, deduplication and compression proportion, and throughput. The storage system can therefore carry out garbage collection during the first time period, when the business pressure value is low. As a result, the storage system has more blank logical block groups available to support quickly writing a large amount of new data in other time periods, so the influence of garbage collection during those periods on the IOPS and other host-facing performance metrics is reduced, and host performance under heavy data-write workloads is improved. In addition, the embodiment of the invention further provides a garbage recycling device.
Owner:HUAWEI TECH CO LTD
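
A rough sketch of choosing the low-pressure garbage-collection window. How the listed metrics are combined into a single business pressure value is an assumption here (a simple weighted sum); the patent only names the candidate metrics.

```python
def business_pressure(iops: float, throughput_mbps: float) -> float:
    # assumed weighting; any one or more of the listed metrics could be used
    return 0.5 * iops + 0.5 * throughput_mbps

def pick_gc_window(history: dict, threshold: float) -> list:
    """history maps hour-of-day -> (avg IOPS, avg throughput in MB/s).
    Returns the hours whose business pressure is below the threshold."""
    return [hour for hour, (iops, tp) in history.items()
            if business_pressure(iops, tp) < threshold]

history = {2: (800.0, 50.0), 10: (12000.0, 900.0), 14: (15000.0, 1100.0)}
print(pick_gc_window(history, threshold=1000.0))  # [2] -> run GC at 02:00
```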

Loosely coupled distributed workflow coordination system and method

The invention discloses a loosely coupled distributed workflow coordination system and method. A user defines, publishes, operates and maintains workflows by calling an interface service API (Application Program Interface). A distributed workflow coordinator schedules workflows on a timer by integrating the distributed timing engine Quartz, adds each workflow to a workflow-distribution distributed message queue (MQ), receives the workflow, processes its task dependency relationships, and adds the coordinated business-type tasks to be executed to a task-distribution distributed message queue. A distributed task executor (Worker) receives each business-type task from the task-distribution message queue and executes it, and the task execution result is called back to the distributed workflow coordinator through a task-callback distributed message queue. Finally, the Coordinator persists the task execution result in a database so the result can be fed back to the user. Because the Coordinator focuses on logical coordination, workflow coordination and task execution are fully decoupled, which improves the throughput, extensibility and scalability of the system.
Owner:浙江数新网络有限公司
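
A toy sketch of the coordinator/worker decoupling via message queues, using in-process Python queues in place of a real distributed MQ and omitting Quartz-style timed scheduling, dependency resolution and database persistence.

```python
import queue
import threading

task_queue = queue.Queue()      # stands in for the task-distribution MQ
callback_queue = queue.Queue()  # stands in for the task-callback MQ

def worker():
    """Distributed task executor: pull business-type tasks, execute, call back."""
    while True:
        task = task_queue.get()
        result = f"done:{task}"          # execute the business-type task
        callback_queue.put(result)       # call the result back to the coordinator
        task_queue.task_done()

def coordinator(workflow: list):
    """Coordinator: distribute tasks, then persist the called-back results."""
    for task in workflow:                # dependencies assumed already resolved
        task_queue.put(task)
    task_queue.join()
    while not callback_queue.empty():
        print("persist result:", callback_queue.get())  # would go to a database

threading.Thread(target=worker, daemon=True).start()
coordinator(["extract", "transform", "load"])
```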

A Consortium Chain Master-Slave Multi-Chain Consensus Method Based on Tree Structure

The invention discloses a tree-structured consortium chain master-slave multi-chain consensus method. By partitioning the consortium chain consensus group, upper and lower channels are obtained; the channels are isolated from one another, so different classes of digital assets are kept separate and the privacy requirement of data isolation is met. Multiple channels are processed concurrently, which improves transaction performance and addresses the low throughput and high transaction latency of existing blockchains. A Byzantine fault-tolerant consensus algorithm based on threshold signatures under the master-slave multi-chain architecture solves the consistency problem raised by concurrently processing classified digital assets, and has low communication and signature-verification complexity. The master-slave multi-chain structure breaks through the functional and performance constraints of a single chain, delivers good performance under highly concurrent transactions, and provides isolation and protection of private data, meeting the diverse business needs of enterprises.
Owner:芽米科技(广州)有限公司
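
A rough sketch of the channel-isolation idea only: transactions are routed to a channel by asset class and each channel is committed independently, so channels can be processed concurrently. The master-slave consensus and threshold-signature machinery are omitted entirely, and all names here are illustrative.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def route_by_channel(transactions):
    """Group transactions by asset class; each group maps to an isolated channel."""
    channels = defaultdict(list)
    for tx in transactions:
        channels[tx["asset_class"]].append(tx)
    return channels

def commit_channel(name, txs):
    # stand-in for a per-channel (slave chain) consensus round
    return name, [tx["id"] for tx in txs]

txs = [{"id": 1, "asset_class": "bond"}, {"id": 2, "asset_class": "equity"},
       {"id": 3, "asset_class": "bond"}]
channels = route_by_channel(txs)
with ThreadPoolExecutor() as pool:            # channels processed concurrently
    for name, committed in pool.map(lambda kv: commit_channel(*kv), channels.items()):
        print(name, committed)
```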

Three-dimensional log full-link monitoring system and method, medium and equipment

Pending | CN114189430A | Facilitate aggregated search | Realize monitoring | Transmission | Monitoring system | Data profiling
The invention provides a three-dimensional log full-link monitoring system and method, a medium and equipment. The system comprises a log analysis platform that collects application logs and carries out filtering, desensitization, storage, query and alerting of the application logs; a full-link tracking platform that collects related data based on the application performance monitoring system SkyWalking and analyzes and diagnoses performance bottlenecks of applications under a distributed architecture; and a monitoring platform that monitors infrastructure, application performance, middleware and business operation indicators and tracks the performance of each connected system, including application response time, throughput, slow responses and error details, JVM and middleware states, and business indicators. By combining logs with full-link tracing, end-to-end transactions are associated in a micro-service scenario, problems can be quickly located and analyzed by directly searching the logs and tracing the call chain, and the complexity of problem location is reduced.
Owner:IND BANK CO +1
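
An illustrative sketch of the log/trace association the abstract relies on: application logs and full-link trace spans are joined on a shared trace id so the call chain behind an error can be pulled up directly. Field names are assumptions, not the platform's actual schema.

```python
logs = [
    {"trace_id": "t1", "level": "ERROR", "msg": "payment failed"},
    {"trace_id": "t2", "level": "INFO",  "msg": "order created"},
]
spans = [
    {"trace_id": "t1", "service": "payment-svc", "duration_ms": 2400},
    {"trace_id": "t1", "service": "gateway",     "duration_ms": 2450},
    {"trace_id": "t2", "service": "order-svc",   "duration_ms": 35},
]

def call_chain_for_errors(logs, spans):
    """Return the trace spans belonging to transactions that logged an error."""
    error_traces = {log["trace_id"] for log in logs if log["level"] == "ERROR"}
    return [s for s in spans if s["trace_id"] in error_traces]

for span in call_chain_for_errors(logs, spans):
    print(span["service"], span["duration_ms"], "ms")
```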

Method, device and equipment for distinguishing splitting process based on OGG technology

The embodiment of the invention provides a method, device and equipment for splitting a process based on the OGG technology. The method comprises the steps of: obtaining the statistical information of a process and determining the actual throughput of the process, the statistical information comprising a target file; determining a pre-throughput according to the quantity of called logs and the conversion coefficient of the target file; judging whether the relationship between the pre-throughput and the actual throughput meets a first preset condition; and splitting the process when the preset condition is met. By obtaining the statistical information and optimizing how processes are executed and split, an OGG-based process can respond in time to data changes caused by changes in the business system, the impact on data timeliness and consistency of synchronization delays caused by sudden business changes can be prevented, and labor cost is reduced while the data extraction and replication efficiency of the OGG process is improved.
Owner:CHINA MOBILE GROUP SICHUAN +1
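
A sketch of the split decision only, assuming the pre-throughput is the called log quantity times the conversion coefficient and the "first preset condition" is a simple ratio test; the patent does not fix either, and the splitting step itself is not shown.

```python
def pre_throughput(log_count: int, conversion_coefficient: float) -> float:
    """Expected throughput implied by the volume of called logs."""
    return log_count * conversion_coefficient

def should_split(pre_tp: float, actual_tp: float, ratio_threshold: float = 1.5) -> bool:
    """Split when the expected load is well above what the process is delivering."""
    return actual_tp > 0 and pre_tp / actual_tp >= ratio_threshold

pre = pre_throughput(log_count=12_000, conversion_coefficient=0.8)  # 9600
if should_split(pre, actual_tp=4800.0):
    print("split the OGG process into parallel processes")
```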

Association log playback method and device

Active | CN108829802B | Fast playback | Playback any specified | Hardware monitoring | Data switching networks | Database | Computer engineering
The invention provides an associated log playback method and device. The method comprises the following steps: classifying application logs by application and associating them by business; storing the classified application logs in sequence according to their business association relationships, where the association is that the start of a later business depends on the completion of the earlier business; dividing the sequentially stored application logs into a plurality of log groups according to the number of press machines and the number of thread groups on each press machine, and transmitting each divided log group to the corresponding press machine; controlling each thread in a press machine's thread group to read a bucket of logs from the corresponding log group in turn and send the logs at a preset sending speed, so as to meet the test throughput requirement; and, when the sending speed is limited, increasing the number of press machines, log groups and thread groups to scale throughput linearly. By means of the invention, associated logs can be replayed quickly and with high throughput.
Owner:THE PEOPLES BANK OF CHINA NAT CLEARING CENT
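
A sketch, under assumptions, of dividing the ordered logs into buckets for the press machines' thread groups and replaying one bucket at a preset sending speed. Contiguous splitting (to keep dependent businesses together) and the rate value are choices made here, not details taken from the patent.

```python
import time

def split_into_buckets(logs, machines: int, threads_per_machine: int):
    """Contiguous split so logs ordered by business dependency stay together."""
    buckets = machines * threads_per_machine
    size = -(-len(logs) // buckets)          # ceiling division
    return [logs[i * size:(i + 1) * size] for i in range(buckets)]

def replay_bucket(bucket, send_log, logs_per_second: float):
    """Run by one thread of a press machine: send at the preset speed."""
    interval = 1.0 / logs_per_second
    for log in bucket:
        send_log(log)
        time.sleep(interval)

buckets = split_into_buckets(list(range(12)), machines=2, threads_per_machine=3)
replay_bucket(buckets[0], send_log=print, logs_per_second=1000.0)
```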

Distributed computing method and system for financial indicators

The present invention provides a method and system for distributed calculation of user indicators in finance. The method includes: dividing user indicator calculation into three types: offline calculation, business-triggered calculation, and data-source-change-triggered calculation; defining user indicators, including user indicator attributes and indicator operators, where an indicator operator defines the data sources used in the calculation, the calculation logic, and the calculation types, and must declare at least one calculation type; defining trigger conditions for the three calculation types; when a trigger condition occurs, calculating the indicators that declare the calculation type corresponding to that trigger condition; and storing the calculated user indicators. The indicator calculation of the present invention is efficient; it can solve problems such as repeated indicator calculation, difficulty in adding or modifying indicator calculation logic, and serious indicator gaps; and it can effectively ensure the execution of high-quality, fast operators and improve system stability and real-time throughput.
Owner:鑫涌算力信息科技(上海)有限公司
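
A sketch of the indicator-operator idea: each user indicator declares which of the three calculation types it supports, and the matching indicators are computed when the corresponding trigger fires. The registry shape and all names are assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable

OFFLINE, BUSINESS_TRIGGERED, SOURCE_CHANGE_TRIGGERED = "offline", "business", "source_change"

@dataclass
class IndicatorOperator:
    name: str
    data_sources: list
    calc: Callable                      # indicator calculation logic
    calc_types: set = field(default_factory=set)  # must contain at least one type

def on_trigger(trigger_type: str, context: dict, operators: list) -> dict:
    """Run every indicator whose declared calculation types include the trigger."""
    return {op.name: op.calc(context) for op in operators if trigger_type in op.calc_types}

avg_balance = IndicatorOperator(
    name="avg_balance", data_sources=["accounts"],
    calc=lambda ctx: sum(ctx["balances"]) / len(ctx["balances"]),
    calc_types={OFFLINE, SOURCE_CHANGE_TRIGGERED},
)
print(on_trigger(SOURCE_CHANGE_TRIGGERED, {"balances": [100.0, 200.0]}, [avg_balance]))
```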

Industrial device, method, and non-transitory computer readable medium

The invention provides an industrial device, a method, and a non-transitory computer readable medium. The industrial device supports device-level data modeling that pre-models data stored in the device with known relationships, correlations, key variable identifiers, and other such metadata to assist higher-level analytic systems to more quickly and accurately converge to actionable insights relative to a defined business or analytic objective. Data at the device level can be modeled according to modeling templates stored on the device that define relationships between items of device data for respective analytic goals (e.g., improvement of product quality, maximizing product throughput, optimizing energy consumption, etc.). This device-level modeling data can be exposed to higher level systems for creation of analytic models that can be used to analyze data from the industrial device relative to desired business objectives.
Owner:ROCKWELL AUTOMATION TECH
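
A minimal sketch of what a device-level modeling template could look like: an analytic goal, the key variables behind it, and declared relationships among tags, exposed to a higher-level analytics system. The field names are illustrative, not Rockwell's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ModelingTemplate:
    analytic_goal: str        # e.g. "maximize product throughput"
    key_variables: list       # tags the goal depends on
    relationships: dict       # tag -> tags known to correlate with it

throughput_template = ModelingTemplate(
    analytic_goal="maximize product throughput",
    key_variables=["line_speed", "downtime_minutes", "reject_rate"],
    relationships={"reject_rate": ["line_speed", "temperature"],
                   "downtime_minutes": ["motor_vibration"]},
)

def expose_to_analytics(template: ModelingTemplate) -> dict:
    """Serialize the device-level model for a higher-level analytics system."""
    return {"goal": template.analytic_goal,
            "variables": template.key_variables,
            "relationships": template.relationships}

print(expose_to_analytics(throughput_template))
```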

Smart factory-oriented random access resource optimization method and device

The invention discloses a smart-factory-oriented random access resource optimization method and device. The method comprises the following steps: dividing the access priority of each service according to the delay sensitivity of different services; training a local model at each local end using a reinforcement learning algorithm; aggregating the local model parameters of each local end into a global model at the cloud using a federated learning algorithm, and establishing a shared machine learning model, where the reinforcement learning objective is to maximize the number of successfully accessed users while guaranteeing the quality-of-service requirements of the various services; and using the optimized shared machine learning model to allocate access resources, so that system throughput is maximized and the overall production efficiency of the factory is improved while the quality-of-service requirements of the various services are met. The invention can optimize resource utilization and improve network performance while meeting the delay requirements of the various services in industrial production.
Owner:UNIV OF SCI & TECH BEIJING
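
A toy sketch of the cloud-side aggregation step: local Q-tables trained by reinforcement learning at each local end are combined by federated averaging into a shared model. The table shape, plain averaging, and all names are assumptions.

```python
import numpy as np

def federated_average(local_q_tables: list) -> np.ndarray:
    """Aggregate local model parameters into a shared global model."""
    return np.mean(np.stack(local_q_tables), axis=0)

# two local ends; rows = access priorities, columns = access resource choices
local_a = np.array([[0.2, 0.8], [0.5, 0.1]])
local_b = np.array([[0.4, 0.6], [0.3, 0.3]])
global_q = federated_average([local_a, local_b])
print(global_q)   # shared model pushed back to every local end
```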