A flow cytometry apparatus and methods for processing information incident to particles or cells entrained in a sheath fluid stream, allowing assessment, differentiation, assignment, and separation of such particles or cells even at high speed. A first signal processor, individually or in combination with at least one additional signal processor, applies a compensation transformation to data from a signal. The compensation transformation can involve complex operations on data from at least one signal to compensate for one or more operating parameters. Compensated parameters can be returned to the first signal processor to provide information upon which to define and differentiate particles from one another.
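The abstract does not fix the form of the compensation transformation; a minimal sketch, assuming the standard linear spillover-correction model used in fluorescence cytometry (all numeric values below are made up for illustration):

```python
def compensate(observed, spillover):
    """Apply the inverse of a 2x2 spillover matrix to one particle's readings.

    observed:  [FL1, FL2] detector readings for one particle
    spillover: spillover[i][j] = fraction of dye j's signal seen in detector i
    """
    (a, b), (c, d) = spillover
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return [inv[i][0] * observed[0] + inv[i][1] * observed[1] for i in range(2)]

# Hypothetical spillover: detector 1 picks up 15% of dye 2, detector 2 picks
# up 10% of dye 1.  Forward-mix a known "true" signal, then compensate it back.
S = [[1.0, 0.15],
     [0.10, 1.0]]
true = [200.0, 50.0]
obs = [S[0][0] * true[0] + S[0][1] * true[1],
       S[1][0] * true[0] + S[1][1] * true[1]]
print([round(x, 6) for x in compensate(obs, S)])  # [200.0, 50.0]
```

The same inverse-matrix idea extends to N detectors; a real instrument would also fold in the "operating parameters" the abstract mentions, which are not specified here.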
Roughly described, a memory device has a multilevel stack of conductive layers. Vertically oriented pillars each include series-connected memory cells at cross-points between the pillars and the conductive layers. SSLs run above the conductive layers, each intersection of a pillar and an SSL defining a respective select gate of the pillar. Bit lines run above the SSLs. The pillars are arranged on a regular grid which is rotated relative to the bit lines. The grid may have a square, rectangular, or diamond-shaped unit cell, and may be rotated relative to the bit lines by an angle θ where tan(θ) = ±X/Y, X and Y being co-prime integers. The SSLs may be made wide enough to intersect two pillars on one side of the unit cell, or all pillars of the cell, or sufficiently wide to intersect pillars in two or more non-adjacent cells.
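The rotation-angle constraint tan(θ) = ±X/Y with co-prime X and Y is directly computable; a small sketch (the example values are illustrative, not from the patent):

```python
import math

def grid_rotation(x, y):
    """Rotation angle (degrees) of the pillar grid relative to the bit lines
    for tan(theta) = x/y; x and y must be co-prime integers per the abstract."""
    if math.gcd(x, y) != 1:
        raise ValueError("X and Y must be co-prime")
    return math.degrees(math.atan2(x, y))

print(round(grid_rotation(1, 2), 2))  # 26.57 degrees for tan(theta) = 1/2
```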
The invention relates to the technical field of gene sequencing and provides a parallel gene splicing method based on a De Bruijn graph. The method comprises the following steps: S1, building the distributed De Bruijn graph in parallel; S2, removing error paths; S3, simplifying the De Bruijn graph on the basis of a depth-first graph traversal; S4, combining contigs and generating scaffolds; S5, outputting the scaffolds. The method runs on a cluster system and builds the De Bruijn graph in parallel, solving the problem that traditional single-machine serial gene splicing algorithms, when splicing large genomes whose data volume is too large, cannot build the graph or execute further processing. Meanwhile, the simplification stage is parallelized on the basis of depth-first graph traversal, so the graph-simplification process is simple, the degree of parallelism is high, and the splicing speed is high.
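Step S1 above can be sketched for a single machine (the distributed version would partition k-mers across nodes, which is not shown here); edge multiplicities give the coverage that step S2 would use to prune low-coverage error paths:

```python
from collections import defaultdict

def build_de_bruijn(reads, k):
    """Build a De Bruijn graph: nodes are (k-1)-mers, edges are k-mers.

    Returns an adjacency dict plus per-k-mer coverage counts.
    """
    graph = defaultdict(list)
    coverage = defaultdict(int)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            left, right = kmer[:-1], kmer[1:]   # overlap by k-2 characters
            graph[left].append(right)
            coverage[kmer] += 1
    return graph, coverage

g, cov = build_de_bruijn(["ACGTAC", "CGTACG"], k=3)
print(sorted(g))       # ['AC', 'CG', 'GT', 'TA'] -- nodes with outgoing edges
print(cov["CGT"])      # 2 -- this k-mer occurs in both reads
```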
The embodiment of the invention discloses a decompression method, device, and system for an FPGA heterogeneous acceleration platform. The method comprises the following steps: receiving first to-be-decompressed data sent by a host-side processor and storing it; calling a decompression algorithm implemented by an FPGA hardware circuit according to a start-up instruction and parameter information sent by the host-side processor, and decompressing the first to-be-decompressed data based on that algorithm to obtain decompressed data, wherein the parameter information contains the data information corresponding to the first to-be-decompressed data and a compression relation table; and storing the decompressed data and returning a completion signal to the host-side processor, so that after receiving the completion signal the host-side processor reads the decompressed data. With the disclosed decompression method, device, and system, the decompression speed can be increased and the power consumption required for decompression can be reduced.
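The receive / start / complete / read handshake above can be mocked in software; here zlib stands in for the FPGA decompression circuit, and the `length` parameter is a hypothetical stand-in for the abstract's "parameter information":

```python
import zlib

class AcceleratorMock:
    """Software mock of the host <-> FPGA accelerator protocol."""
    def __init__(self):
        self.input_buf = None
        self.output_buf = None
        self.done = False

    def receive(self, compressed):
        """Step 1: host sends data; accelerator stores it."""
        self.input_buf = compressed
        self.done = False

    def start(self, params):
        """Step 2: start-up instruction plus parameter information triggers
        decompression; step 3: completion signal is raised when done."""
        assert len(self.input_buf) == params["length"]
        self.output_buf = zlib.decompress(self.input_buf)
        self.done = True

payload = zlib.compress(b"hello fpga" * 100)
acc = AcceleratorMock()
acc.receive(payload)
acc.start({"length": len(payload)})
if acc.done:                      # host polls the completion signal...
    data = acc.output_buf         # ...then reads the decompressed data
print(len(data), data[:10])       # 1000 b'hello fpga'
```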
The invention discloses a novel method and system for parallel print dispatching. A print-server host is connected to a plurality of usable printers and maintains a printer list; the list is updated when the host is rebooted or when new printers are added. To print a document in parallel, the client first sends the server a message containing the number N of printers applied for; the client then segments the document to be printed into print pages and transmits them to the print server. After receiving the pages, the print server, taking the page as the unit and based on the number N of applied printers, adds the print tasks to the print queues of the N dispatched printers according to a given strategy. The technical scheme makes full use of the printing resources, is convenient for unified management, and is highly efficient.
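The abstract leaves the dispatch strategy open ("a certain strategy"); round-robin over the N queues is one plausible instance:

```python
def dispatch_pages(pages, n_printers):
    """Assign print pages to N printer queues page-by-page (round-robin)."""
    queues = [[] for _ in range(n_printers)]
    for i, page in enumerate(pages):
        queues[i % n_printers].append(page)
    return queues

# Pages 1..7 dispatched over N = 3 printers:
print(dispatch_pages(list(range(1, 8)), 3))  # [[1, 4, 7], [2, 5], [3, 6]]
```

A load-aware strategy (shortest queue first) would drop in the same place without changing the surrounding protocol.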
A computer system provides a distributed-memory computer architecture achieving extremely high-speed parallel processing, and includes: a CPU module; a plurality of memory modules, each having a processor and a RAM core; and a plurality of sets of buses making connections between the CPU and the memory modules and/or among the memory modules, so that the various memory modules operate on an instruction given by the CPU. A series of data having a stipulated relationship is given a space ID, and each memory module manages a table containing at least the space ID, the logical address of the portion of the series of data it manages, the size of that portion, and the size of the series of data. The processor of each memory module determines whether the portion of the series of data it manages is involved in a received instruction and performs processing on data stored in its RAM core.
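The per-module table and the local "am I involved?" decision can be sketched as follows (the field names and addresses are illustrative, not from the patent):

```python
class MemoryModule:
    """Each module keeps a table of which portion of each space it manages
    and decides locally whether a broadcast instruction concerns its data."""
    def __init__(self, mid):
        self.mid = mid
        self.table = {}   # space_id -> (logical_start, portion_size, total_size)
        self.ram = {}

    def register(self, space_id, start, portion, total):
        self.table[space_id] = (start, portion, total)

    def involved(self, space_id, addr):
        """True iff the addressed element of this space falls in our portion."""
        entry = self.table.get(space_id)
        if entry is None:
            return False
        start, portion, _total = entry
        return start <= addr < start + portion

# Space 7 (4096 elements) split across two modules, 1024 elements each:
m0 = MemoryModule(0); m0.register(space_id=7, start=0, portion=1024, total=4096)
m1 = MemoryModule(1); m1.register(space_id=7, start=1024, portion=1024, total=4096)
print([m.involved(7, 1500) for m in (m0, m1)])  # [False, True]
```

Because each module answers independently, the CPU can broadcast one instruction and only the involved modules act on it, which is where the parallelism comes from.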
The invention provides a log data processing method, a log data processing device, and a business system. The log data processing method comprises the steps of: acquiring to-be-processed log data; carrying out distribution processing on the to-be-processed log data according to a distribution identifier to obtain the log data of each log source, wherein each value of the distribution identifier corresponds to one log source; and processing the corresponding business according to the log data of each log source. With the log data processing method, device, and business system, the businesses that depend on the log data can be processed per log source, the parallelism of the businesses is improved, and the throughput of the log system is enhanced.
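The distribution step can be sketched as a partition over the distribution identifier; the `dist_id` field name and the sample records are assumptions for illustration:

```python
from collections import defaultdict

def distribute(records):
    """Partition raw log records by their distribution identifier; each
    identifier value maps to one log source, per the abstract."""
    by_source = defaultdict(list)
    for rec in records:
        by_source[rec["dist_id"]].append(rec)
    return by_source

logs = [{"dist_id": "web", "msg": "GET /"},
        {"dist_id": "db",  "msg": "slow query"},
        {"dist_id": "web", "msg": "GET /login"}]
buckets = distribute(logs)
print(sorted((k, len(v)) for k, v in buckets.items()))  # [('db', 1), ('web', 2)]
```

Each bucket can then feed its own business worker, so the per-source businesses proceed in parallel.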
The invention discloses an FFT (Fast Fourier Transform) parallelization method based on a GPU (Graphics Processing Unit) many-core platform. In the method, following the storage-level principle of one communication round per mass of computation, a single communication completes the FFT operation on N×M points, which greatly reduces communication overhead; and by using the high-speed cache inside each thread block, i.e. shared memory, communication time is further reduced and operating efficiency is enhanced. The invention processes the data in parallel on hundreds of processing cores through systematic arrangement, thereby maximizing the degree of parallelism, completing the operation efficiently, and enhancing operation accuracy.
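A pure-Python radix-2 reference for the underlying transform (the GPU mapping itself is not reproducible here; on the device, each butterfly stage maps to parallel threads and staging data in per-block shared memory is what cuts the global-memory communication):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# Two impulses 4 samples apart -> magnitudes alternate 2, 0 across the bins.
spectrum = fft([1, 0, 0, 0, 1, 0, 0, 0])
print([round(abs(v), 6) for v in spectrum])  # [2.0, 0.0, 2.0, 0.0, 2.0, 0.0, 2.0, 0.0]
```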
The invention discloses a hybrid electromagnetic transient simulation method suitable for microgrid real-time simulation. The method combines the traditional nodal analysis method (NAM) with the highly parallelizable latency insertion method (LIM): the microgrid is partitioned at the filters of the distributed power generation systems, forming one LIM network containing the distribution lines and a plurality of NAM networks each containing a distributed power generation system. The NAM networks are simulated with the traditional nodal analysis method, and the LIM network with the latency insertion method. In the initialization phase, an incidence matrix and four diagonal matrices containing the line parameters used for LIM network simulation are formed according to the microgrid line topology and parameters. In the main simulation loop, the LIM network and the plurality of NAM networks can be solved at the same time, improving the degree of parallelism of microgrid simulation; moreover, the LIM solution mainly uses diagonal-matrix multiplication, avoiding the computational burden of solving a large-scale network equation with a nodal analysis method, and improving simulation efficiency.
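A minimal sketch of one LIM leapfrog step, assuming the standard latency-insertion update equations (the abstract does not spell them out); the only matrix operations are the incidence matrix A and element-wise (diagonal-matrix) products, so every branch and node updates independently, which is where the parallelism comes from. All circuit values are made up for illustration:

```python
def lim_step(v, i, A, dt, L, R, C, G):
    """One staggered leapfrog step of the latency insertion method.

    A[b][n] = +1/-1 if branch b leaves/enters node n; L, R are per-branch
    inductance/resistance; C, G are per-node capacitance/conductance.
    """
    nb, nn = len(i), len(v)
    # Branch half-step: i += (dt/L) * (A v - R i)   -- all element-wise
    for b in range(nb):
        drop = sum(A[b][n] * v[n] for n in range(nn))
        i[b] += (dt / L[b]) * (drop - R[b] * i[b])
    # Node half-step: v += (dt/C) * (-A^T i - G v)  -- all element-wise
    for n in range(nn):
        inj = -sum(A[b][n] * i[b] for b in range(nb))
        v[n] += (dt / C[n]) * (inj - G[n] * v[n])
    return v, i

# Two nodes joined by one branch; node 0 starts charged.
v, i = lim_step([1.0, 0.0], [0.0], A=[[1, -1]], dt=0.1,
                L=[1.0], R=[0.1], C=[1.0, 1.0], G=[0.0, 0.0])
print([round(x, 6) for x in v], [round(x, 6) for x in i])  # [0.99, 0.01] [0.1]
```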
The invention provides a background modeling method for video images. The method comprises the steps of: dividing each frame of a plurality of video image frames into a plurality of image blocks; establishing an initial background model from the first frames of the plurality of video image frames, wherein the initial background model stores a corresponding sample set for each background point; and, for each frame subsequent to the first frames, constructing a background model for the plurality of image blocks by matching against the initial background model to form a background image. With the method, the background model can be constructed quickly and accurately.
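The abstract only says each background point stores a sample set; a ViBe-style matching rule is assumed in this sketch (intensity distance threshold, minimum match count, random sample replacement), with all thresholds illustrative:

```python
import random

def is_background(pixel, samples, radius=20, min_matches=2):
    """A pixel is background if it lies within `radius` of at least
    `min_matches` of the stored samples for that point."""
    matches = sum(1 for s in samples if abs(pixel - s) <= radius)
    return matches >= min_matches

def update(samples, pixel, p=1 / 16):
    """With probability p, replace a random stored sample with the new
    observation, so the model adapts to gradual background change."""
    if random.random() < p:
        samples[random.randrange(len(samples))] = pixel

model = [100, 103, 98, 101, 99]     # sample set for one background point
print(is_background(102, model))    # True: close to the stored samples
print(is_background(200, model))    # False: likely a foreground object
```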
The invention provides a real-time star atlas background filtering method for daytime environments. A finite number of rows (2n+1) of star atlas data are filtered in real time with a (2n+1)×(2n+1) structuring element as the basic unit: a minimum-filtered star atlas and a maximum-filtered star atlas are computed in sequence, and a difference is taken to complete the filtering. The whole process is pipelined; the calculation rules are simple and highly parallel, the process is realized in hardware, and star atlas acquisition and filtering proceed in one pipeline, giving high speed, strong real-time performance, and high practical value. In particular, the extreme-value computation runs in a parallel, staged pipeline: extreme values are computed first in the column direction and then in the row direction, each stage finding the extreme of (2n+1) data in the column or row direction, so the calculation is highly parallel, low in computational complexity, and fast.
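Reading the min-then-max pass as a grey-scale opening and the difference as a top-hat against the original (one standard interpretation; the abstract does not name the operator), with the separable column-then-row extremum it describes:

```python
def window_extreme(row, n, op):
    """Sliding (2n+1)-wide extreme along one axis, clamping at the edges."""
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - n), min(len(row), i + n + 1)
        out.append(op(row[lo:hi]))
    return out

def opening(img, n):
    """Grey-scale opening: erosion (min) then dilation (max), each done
    separably in the column direction then the row direction."""
    def filt(im, op):
        cols = [window_extreme([r[j] for r in im], n, op) for j in range(len(im[0]))]
        tmp = [[cols[j][i] for j in range(len(cols))] for i in range(len(im))]
        return [window_extreme(r, n, op) for r in tmp]
    return filt(filt(img, min), max)

def top_hat(img, n):
    """Image minus its opening: suppresses the slowly varying daytime sky
    background while keeping small bright stars."""
    bg = opening(img, n)
    return [[p - b for p, b in zip(pr, br)] for pr, br in zip(img, bg)]

sky = [[10] * 7 for _ in range(7)]
sky[3][3] = 90                      # one star on a flat background
print(top_hat(sky, n=1)[3][3])      # 80: the star stands out after filtering
```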
The invention relates to the field of voice signal processing and intelligent control, in particular to a voice-controlled intelligent toy car system based on the TMS320VC5509A. The system is characterized as follows: it comprises a voice recognition module, an LD3320 chip, in which template keywords are preset; the voice recognition module performs spectral analysis on input voice signals using ASR (automatic speech recognition) technology to extract parameters representing the voice features of the input signals, compares the parameters with the template keywords, and outputs the keyword with the highest matching degree as the voice recognition signal. The system also comprises a DSP processor, a TMS320VC5509A DSP chip, whose input end is connected to the voice recognition module; the DSP processor converts the voice recognition signal output by the voice recognition module into a control signal and sends it to the control object of the toy car, which executes the corresponding action command. The system overcomes the defects of the man-machine interaction board of existing toy cars, achieves voice control, and offers high performance and low power consumption.
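The keyword-matching step can be sketched as nearest-template search; the feature vectors and Euclidean scoring below are stand-ins, since the LD3320's actual matching is internal to the chip:

```python
def best_keyword(feature, templates):
    """Return the template keyword whose stored feature vector is closest
    to the extracted feature vector (highest matching degree)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(templates, key=lambda kw: dist(feature, templates[kw]))

# Hypothetical 3-dimensional spectral features for three preset keywords:
templates = {"forward": [0.9, 0.1, 0.0],
             "back":    [0.1, 0.8, 0.1],
             "stop":    [0.0, 0.1, 0.9]}
print(best_keyword([0.85, 0.15, 0.05], templates))  # forward
```

The returned keyword plays the role of the voice recognition signal that the DSP then maps to a motor control command.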
The invention discloses a task execution method comprising the following steps: acquiring the target position label of the current server from a Redis queue, and querying the task identifier of each task, wherein the Redis queue is used for storing the IP of each server; determining, among the task identifiers, the task identifiers matched with the target position label, and determining the corresponding tasks as the tasks to be executed on the current server; and, after the tasks to be executed by the current server are determined, executing them. With the method, the business logic that certain tasks must be executed on the same server can be guaranteed while parallel tasks execute on multiple servers at the same time, so the business logic is not affected and task execution efficiency is improved. The invention also provides a task execution device, an electronic apparatus, and a computer-readable storage medium, which have the above beneficial effects.
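The abstract does not specify how a task identifier is matched to a position label; hashing the identifier modulo the server count is assumed in this sketch, and the Redis queue of server IPs is simulated with a plain list:

```python
import hashlib

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # contents of the Redis queue

def position_label(ip):
    """The server's position in the queue serves as its target position label."""
    return servers.index(ip)

def tasks_for(ip, task_ids):
    """Tasks whose identifier hashes to this server's label run here, so
    related tasks pin to one server while the rest run elsewhere in parallel."""
    label = position_label(ip)
    return [t for t in task_ids
            if int(hashlib.md5(t.encode()).hexdigest(), 16) % len(servers) == label]

all_tasks = ["order-sync", "order-audit", "mail-send", "report-build"]
for ip in servers:
    print(ip, tasks_for(ip, all_tasks))
```

Because the hash is deterministic, a given identifier always lands on the same server, which is what preserves the same-server business logic.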
The invention discloses a method and a device for rebuilding an image, and belongs to the field of digital image processing. The method comprises the following steps: acquiring predicted image data and outputting it along the preset directions; acquiring residual image data and outputting it along the preset directions; and, while outputting them, adding the predicted image data and the residual image data in each preset direction to obtain the rebuilt image. The device comprises a predicted image data output module, a residual image data output module, and a rebuilt image acquisition module. Because the predicted image data and the residual image data are both output along the preset directions and added direction by direction while being output, the degree of parallelism of image rebuilding is improved, the rebuilding time is reduced, and the rebuilding speed is increased.
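The add-while-outputting idea can be sketched with a generator: samples are summed as they arrive for one preset direction, so reconstruction overlaps with output instead of waiting for complete frames (sample values are illustrative):

```python
def rebuild(predicted_stream, residual_stream):
    """Add predicted and residual samples as they arrive along one preset
    direction, yielding rebuilt pixels immediately."""
    for p, r in zip(predicted_stream, residual_stream):
        yield p + r

pred = iter([120, 121, 119, 118])   # predicted pixels in output order
res = iter([3, -1, 0, 2])           # matching residuals
print(list(rebuild(pred, res)))     # [123, 120, 119, 120]
```

Running one such generator per preset direction gives the parallelism the abstract claims.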