
208 results about "Dispatch table" patented technology

In computer science, a dispatch table is a table of pointers to functions or methods. Use of such a table is a common technique when implementing late binding in object-oriented programming.
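A minimal sketch of the idea (the operator names and functions here are illustrative): the table maps keys to functions, and the handler is looked up at call time rather than being fixed at compile time.

```python
# A dispatch table: a dict mapping keys (here, operator names) to functions.
# Resolving the handler at call time is a simple form of late binding.

def add(a, b):
    return a + b

def mul(a, b):
    return a * b

DISPATCH = {"add": add, "mul": mul}

def apply_op(op, a, b):
    handler = DISPATCH[op]  # resolved at runtime, not hard-coded
    return handler(a, b)
```

Adding a new operation then requires only a new table entry, not a change to `apply_op`.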

System and method utilizing a graphical user interface of a business process workflow scheduling program

A graphical user interface (GUI) scheduler program is provided for modeling business workflow processes. The GUI scheduler program includes tools that allow a user to create a schedule for business workflow processes based on a set of rules defined by the program; the rules help ensure that no deadlock occurs within the schedule. The program provides tools for creating and defining message flows between entities, as well as tools for defining a binding between the schedule and components, such as COM components, script components, message queues, and other workflow schedules. The scheduler program allows a user to define actions and group actions into transactions using simple GUI scheduling tools. The schedule can then be converted to executable code in a variety of forms, such as XML, C, and C++, which can in turn be compiled or interpreted to run the schedule.
Owner:MICROSOFT TECH LICENSING LLC
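The abstract's notion of grouping actions into transactions can be sketched as follows. This is a hypothetical illustration, not Microsoft's implementation: each action is paired with an undo step, and a failed action rolls back everything already done.

```python
# Hypothetical sketch: actions grouped into a transaction that either all
# run or are all rolled back, mirroring the grouping described above.

class Transaction:
    def __init__(self):
        self.actions = []          # list of (do, undo) pairs

    def add(self, do, undo):
        self.actions.append((do, undo))

    def run(self):
        done = []
        try:
            for do, undo in self.actions:
                do()
                done.append(undo)
        except Exception:
            for undo in reversed(done):   # roll back completed actions
                undo()
            return False
        return True
```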

Systems and methods for enhancing connectivity between a mobile workforce and a remote scheduling application

Systems and methods for enhancing connectivity are discussed. An illustrative aspect of the invention includes a method for enhancing connectivity. The method includes scheduling an order to be performed by a worker into a schedule, accessing the schedule from a mobile device via a server on the Internet, and substituting a proxy for the schedule so that an application on the mobile device can interact with the proxy while the device is temporarily disconnected from the schedule.
Owner:HITACHI ENERGY SWITZERLAND AG
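The proxy substitution described above can be sketched as a local object that serves a cached copy of the schedule and queues updates while the device is offline, replaying them on reconnect. All class and method names here are hypothetical, not from the patent.

```python
# Hypothetical sketch: a local proxy stands in for the remote schedule.
# While offline it serves the cached copy and queues writes; on reconnect
# it replays the queued updates against the real schedule.

class ScheduleProxy:
    def __init__(self, remote):
        self.remote = remote          # dict standing in for the server schedule
        self.cache = dict(remote)     # last known copy
        self.pending = []             # updates made while offline
        self.online = True

    def get(self, order_id):
        if self.online:
            self.cache = dict(self.remote)   # refresh the cache
        return self.cache.get(order_id)

    def update(self, order_id, status):
        self.cache[order_id] = status
        if self.online:
            self.remote[order_id] = status
        else:
            self.pending.append((order_id, status))

    def reconnect(self):
        self.online = True
        for order_id, status in self.pending:    # replay queued updates
            self.remote[order_id] = status
        self.pending.clear()
```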

Method and system for managing print job files for a shared printer

A system and method enable a user to generate a single batch job ticket for a plurality of print job tickets. The system includes a print driver, a print job manager, and a print engine. The print driver enables a user to request generation of a collective job queue and to provide a plurality of job tickets for the job queue. The print job manager includes a collective job queue manager and a print job scheduler. The collective job queue manager collects job tickets for a job queue and generates a single batch job ticket for the print job scheduling table when the job queue is closed. The print job scheduler selects single batch job tickets in accordance with various criteria and releases the job segments to a print engine for contiguous printing of the job segments.
Owner:XEROX CORP
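The collective job queue described above can be sketched as a container that gathers job tickets and, when closed, emits one batch ticket covering every segment. Names and the ticket shape are assumptions for illustration, not Xerox's actual data model.

```python
# Hypothetical sketch: collect job tickets into an open queue; closing the
# queue produces a single batch ticket whose segments print contiguously.

class CollectiveJobQueue:
    def __init__(self):
        self.tickets = []
        self.closed = False

    def add(self, ticket):
        if self.closed:
            raise RuntimeError("queue is closed")
        self.tickets.append(ticket)

    def close(self):
        # Produce one batch ticket covering every collected job segment.
        self.closed = True
        return {"type": "batch", "segments": list(self.tickets)}
```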

Schedule management system

A schedule management system at a managing party connects via a network to one or more managed parties. The system comprises a schedule table that stores a schedule created by the managing party. The created schedule is transferred to a common schedule table on a server located outside the managing party. The system provides each managed party with an inquiry means for querying the schedule stored in the common schedule table. Whenever the schedule in the schedule table is modified, the modified schedule is transferred to the common schedule table, so each managed party can view the latest schedule. In addition, each managed party can transfer modification data via a modification means, and the system updates the schedule stored in the schedule table with the received data. The system further displays progress in a hierarchical format: it compares the progress data with the schedule and displays a mark assigned according to the comparison result.
Owner:HONDA MOTOR CO LTD

Method for service dispatching in a time-triggered FC network

Publication: CN108777660A (Active). Benefits: reduced space complexity; meets real-time data configuration needs. Classifications: fibre transmission; data switching networks. Keywords: Fibre Channel network.
The present invention discloses a method for service dispatching in a time-triggered FC network, and relates to the field of FC networks. The method comprises the following steps: establishing a network model, calculating the cluster cycle, and determining the single-time-slot length of each time-triggered (TT) message; determining the priority of each TT message according to a given rule; planning the link that transmits each TT message; checking the schedulability of the TT messages; selecting the TT message with the highest priority and assigning its time slot; assigning all of its other time slots according to the message's period and transmission link; proceeding to the TT message of the next priority, until a time-slot map is obtained in which all TT messages on all links are sent periodically and without conflict; and, from the network-wide time-slot map, deriving the send and receive dispatch tables for each terminal and each switch. The method guarantees deterministic transmission and reception of messages in the Fibre Channel network, meeting the demand for real-time message dispatching in complex application systems and making the performance of upper-layer applications more deterministic and reliable.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA +1
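The core loop of the abstract, assigning slots in priority order so that repeats of a message never collide on its link, can be sketched as below. This is a simplified illustration under assumed data shapes (lower number = higher priority; slot indices over one cluster cycle), not the patented algorithm itself.

```python
# Hypothetical sketch: assign each time-triggered (TT) message, in priority
# order, the earliest slot in its period that is free on its link; the slot
# then repeats every period across the cluster cycle.

def schedule_tt(messages, cluster_cycle):
    """messages: list of (name, priority, period, link); returns name -> slots."""
    taken = {}   # link -> set of occupied slot indices
    slots = {}
    for name, prio, period, link in sorted(messages, key=lambda m: m[1]):
        used = taken.setdefault(link, set())
        for start in range(period):
            repeats = list(range(start, cluster_cycle, period))
            if all(s not in used for s in repeats):   # conflict-free on link
                used.update(repeats)
                slots[name] = repeats
                break
        else:
            raise ValueError(f"{name} is not schedulable")
    return slots
```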

Data flow compilation optimization method oriented to multi-core cluster

Publication: CN103970580A (Active). Benefits: implements a three-level optimization process; improves execution performance. Classifications: resource allocation; memory systems. Keywords: cache optimization; data stream.
The invention discloses a data-flow compilation optimization method oriented to a multi-core cluster system. The method comprises the following steps: determining the task partitioning and scheduling that map computation tasks to processing cores; according to the partitioning and scheduling results, constructing a hierarchical pipeline schedule with pipeline scheduling tables both among cluster nodes and among the cores within each node; and, according to the structural characteristics of the multi-core processor, the communication among cluster nodes, and the execution behavior of the data-flow program on the multi-core processor, performing cache-based optimization. The method combines the data-flow program with optimizations tied to the system's structure, fully exploits the load balance and parallelism of mixed synchronous/asynchronous pipelined code on a multi-core cluster, and optimizes the program's cache accesses and communication according to the cluster's cache and communication modes, improving execution performance and shortening execution time.
Owner:HUAZHONG UNIV OF SCI & TECH
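A pipeline scheduling table of the kind mentioned above can be sketched in a simplified form: treating each core as a pipeline stage, stage `s` processes data item `t - s` at step `t`, so successive items flow through the stages concurrently. This is a generic software-pipelining illustration, not the patented hierarchical scheme.

```python
# Hypothetical sketch: build a pipeline schedule table. table[t][s] is the
# item processed by stage s at step t (None while the pipeline fills/drains).

def pipeline_table(stages, items):
    steps = len(items) + len(stages) - 1
    table = []
    for t in range(steps):
        row = []
        for s in range(len(stages)):
            i = t - s                       # item index flowing into stage s
            row.append(items[i] if 0 <= i < len(items) else None)
        table.append(row)
    return table
```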

Multi-core parallel simulation engine system supporting joint operations

The invention discloses a multi-core parallel simulation engine system supporting joint operations. The system addresses the problem that the real-time performance of a traditional joint-operation system suffers when logic time is advanced by a fixed step length. The system includes a model scheduling management module, a thread management module, an external interface management module, and a high-level architecture (HLA) management module. The system assigns target nodes to simulation entities so that the total computation load of the models on each node is roughly equal; the model scheduling management module then generates a scheduling table for each node on the principle of load balancing and assigns a simulation step length to each model. During simulation, the scheduling table is adjusted and step lengths are reassigned as entities are destroyed and new entities are created. The system can autonomously partition the scheduling table according to the models' operating cycles and the system step length, lets entities use different physical or behavioral models as needed, and supports real-time scheduling of large-scale simulations with high-fidelity operation models.
Owner:BEIHANG UNIV
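The node-assignment step, placing entities so that per-node computation loads stay roughly equal, can be sketched with a standard greedy heuristic (largest cost first, onto the least-loaded node). The data shapes are assumptions; the patent does not specify this particular heuristic.

```python
# Hypothetical sketch: greedily place each simulation entity on the node
# with the smallest current load, keeping total model cost per node even.

def assign_entities(entities, num_nodes):
    """entities: list of (name, cost); returns node index -> list of names."""
    loads = [0] * num_nodes
    placement = {n: [] for n in range(num_nodes)}
    for name, cost in sorted(entities, key=lambda e: -e[1]):  # big first
        node = loads.index(min(loads))       # least-loaded node
        loads[node] += cost
        placement[node].append(name)
    return placement
```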

Hadoop job scheduling method based on genetic algorithm

The invention discloses a Hadoop job scheduling method based on a genetic algorithm. The method comprises the following steps: first, pre-processing the jobs to generate an encoding/decoding table; second, generating several initial scheduling tables for the jobs to be executed and sorting them by fitness to obtain a scheduling table list; finally, applying genetic operations to the scheduling tables in the list to form a final scheduling table list, and taking the top-ranked table as the optimal scheduling table. Tasks of the different jobs are then distributed to the corresponding TaskTrackers for execution according to the optimal scheduling table, completing the Hadoop job scheduling task. With this method, platform resources need not be pre-set before jobs are scheduled; they are dynamically acquired, counted, and distributed during scheduling, easing the administrator's burden. Moreover, the method can control both the total completion time and the average completion time of the jobs, guaranteeing fairness of execution while maintaining efficiency.
Owner:XI'AN POLYTECHNIC UNIVERSITY
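The genetic loop described above can be sketched in miniature: a chromosome maps each task to a TaskTracker, fitness is the makespan (the load of the busiest tracker), and each generation keeps the fittest schedules and mutates them. This is a generic GA sketch under assumed encodings, not the patented method or real Hadoop APIs.

```python
import random

# Hypothetical sketch: chromosome[i] = tracker assigned to task i;
# fitness-sorted selection keeps the best half, each survivor spawns a
# point-mutated child, and the best schedule found is returned.

def makespan(chrom, costs, trackers):
    loads = [0] * trackers
    for task, node in enumerate(chrom):
        loads[node] += costs[task]
    return max(loads)              # lower is fitter

def evolve(costs, trackers, pop=20, gens=50, seed=0):
    rng = random.Random(seed)
    population = [[rng.randrange(trackers) for _ in costs] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: makespan(c, costs, trackers))
        survivors = population[: pop // 2]        # selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(len(costs))] = rng.randrange(trackers)
            children.append(child)                # point mutation
        population = survivors + children
    return min(population, key=lambda c: makespan(c, costs, trackers))
```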

Asynchronous concurrent processing method

The invention provides an asynchronous, concurrent processing method for uploading files in batches via a content management system, comprising the following steps. Step 1: when a user enters the content information of a file in a browser, the information is saved to a database via a database module. Step 2: when the user uploads the file, the status of the file content is set to "waiting for background processing", and an upload task indicator for the file is added to a task scheduling table. Step 3: when the background control module finds an upload task indicator for the file in the task scheduling table, it starts the background content upload execution module. Step 4: if the number of currently running upload tasks does not exceed the preset threshold, the background content upload execution module uploads the file and deletes its upload task indicator.
Owner:STATE GRID CORP OF CHINA +4
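Steps 2 through 4 above can be sketched as a task table polled by a background worker that only starts uploads while under a concurrency threshold. Class and method names are hypothetical illustrations of the described flow.

```python
# Hypothetical sketch: submitted files wait in a task scheduling table; a
# background poll starts uploads only while the running count is below the
# threshold, removing each started task's indicator from the table.

class UploadScheduler:
    def __init__(self, threshold):
        self.threshold = threshold
        self.task_table = []       # upload task indicators (file names)
        self.running = []           # uploads currently in progress

    def submit(self, filename):
        # Step 2: record the file as waiting for background processing.
        self.task_table.append(filename)

    def poll(self):
        # Steps 3-4: start uploads while under the concurrency threshold.
        started = []
        while self.task_table and len(self.running) < self.threshold:
            f = self.task_table.pop(0)   # delete the task indicator
            self.running.append(f)
            started.append(f)
        return started
```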