
A distributed model training system and application method

A model training and distributed-computing technology, applied in the field of distributed model training systems, which addresses problems such as GPU performance not being fully utilized and training throughput being limited by SSD read and write speed.

Active Publication Date: 2021-07-06
BEIJING DAJIA INTERNET INFORMATION TECH CO LTD

AI Technical Summary

Problems solved by technology

In the prior art, training data is shuttled back and forth between the CPU and the GPU through memory and the SSD, which prevents the performance of the GPU from being fully utilized and leaves training limited by the read and write speed of the SSD.



Examples


Embodiment Construction

[0050] In order to give full play to the computing performance of the GPU and improve the efficiency of model training, the embodiments of the present disclosure use a distributed system for model training. Specifically, the model is split into two parts: one part is an embedding service model built on several CPUs, and the other part is a deep neural network built on several GPUs.
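A minimal sketch of this split, assuming a PyTorch-style framework (the disclosure does not name one); the class names EmbeddingService and DeepPart and all dimensions are illustrative, not taken from the patent:

```python
# Sketch only: embedding part kept on CPU hosts, deep part placed on a GPU.
import torch
import torch.nn as nn

class EmbeddingService(nn.Module):
    """Embedding part on CPU: maps sparse feature IDs to characterization vectors."""
    def __init__(self, num_features: int, dim: int):
        super().__init__()
        self.table = nn.Embedding(num_features, dim)   # stays in CPU memory

    def lookup(self, feature_ids: torch.Tensor) -> torch.Tensor:
        return self.table(feature_ids)                 # [batch, num_fields, dim]

class DeepPart(nn.Module):
    """Deep-neural-network part on a GPU, consuming characterization vectors."""
    def __init__(self, num_fields: int, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_fields * dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, vectors: torch.Tensor) -> torch.Tensor:
        return self.mlp(vectors.flatten(start_dim=1))  # one prediction per sample

embedding_service = EmbeddingService(num_features=1_000_000, dim=16)  # CPU side
deep_part = DeepPart(num_fields=8, dim=16).to(
    "cuda" if torch.cuda.is_available() else "cpu")                   # GPU side
```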

[0051] Referring to figure 2, the embedding service model built on the several CPUs receives the feature data in the sample data set used for training, maps the feature data into characterization vectors, and sends them to the GPUs. The several GPUs receive the sample IDs and the real evaluation results in the sample data set and, combined with the characterization vectors obtained from the CPUs, train the prediction model based on the deep neural network until training is completed, wherein, during training, each time the prediction model's parameters are adju...
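Continuing the sketch above, a hedged illustration of one training iteration in a single process (in the described system the lookup would instead be a remote call from a GPU host to a CPU host); train_step and the toy batch are hypothetical:

```python
# Illustrative single-process stand-in for the distributed flow described in [0051].
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
optimizer = torch.optim.Adam(
    list(embedding_service.parameters()) + list(deep_part.parameters()), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(feature_ids: torch.Tensor, labels: torch.Tensor) -> float:
    # CPU side: map the feature data to characterization vectors.
    vectors = embedding_service.lookup(feature_ids)          # computed on CPU
    # GPU side: receive the vectors and the real evaluation results, then train.
    logits = deep_part(vectors.to(device))
    loss = loss_fn(logits.squeeze(-1), labels.to(device))
    optimizer.zero_grad()
    loss.backward()                                          # parameter adjustment
    optimizer.step()
    return loss.item()

# One toy batch: 4 samples, 8 feature fields each, binary evaluation results.
batch_ids = torch.randint(0, 1_000_000, (4, 8))
batch_labels = torch.randint(0, 2, (4,)).float()
print(train_step(batch_ids, batch_labels))
```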



Abstract

The disclosure relates to the field of computers, and discloses a distributed model training system and an application method for improving model training efficiency. The distributed model training system includes several GPUs and several CPUs. Any one of the CPUs is used to map each feature datum of its allocated part of the sample data into a corresponding characterization vector according to preset mapping rules. Any one of the GPUs is used to obtain, based on the sample IDs of its allocated part of the sample data, the corresponding characterization vectors from each CPU, and to use the obtained characterization vectors to train the prediction model based on the deep neural network. In this way, the feature data of each sample can be distributed across the GPUs and CPUs for processing, giving full play to the computing performance of the GPUs and CPUs, achieving multi-machine expansion, and effectively improving the efficiency of model training.
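The abstract does not spell out the preset allocation rule that spreads feature data over several CPUs; one plausible sketch is hash-based sharding, where each feature ID is routed to the CPU responsible for its characterization vector. NUM_CPU_SHARDS, shard_for_feature, and route_batch are assumptions for illustration only:

```python
# Assumed example of a "preset mapping rule": route each feature ID to a CPU shard
# by hash-modulo; the concrete rule is not specified in the abstract.
NUM_CPU_SHARDS = 4

def shard_for_feature(feature_id: int) -> int:
    return feature_id % NUM_CPU_SHARDS

def route_batch(feature_ids):
    """Group a batch's feature IDs by the CPU shard responsible for them."""
    per_shard = {s: [] for s in range(NUM_CPU_SHARDS)}
    for fid in feature_ids:
        per_shard[shard_for_feature(fid)].append(fid)
    return per_shard

print(route_batch([3, 7, 12, 101, 4096]))
# {0: [12, 4096], 1: [101], 2: [], 3: [3, 7]}
```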

Description

Technical Field

[0001] The present disclosure relates to the field of computer science, in particular to a distributed model training system and application method.

Background Technique

[0002] In the prior art, AIBox is a system that handles large amounts of data when solving the click-through rate (Click-Through Rate, CTR) prediction problem. The system uses a method in which a central processing unit (CPU) and a graphics processing unit (GPU) are combined with memory and a solid state disk (Solid State Disk, SSD) for computation. The method is as follows: when a large amount of sparse training data is encountered, the training data is first handed to the CPU for densification; after the training data has been densified, the CPU transfers the resulting dense data to the GPU for training of the prediction model; after training is completed, the GPU returns the data to the CPU for the next round of densification. During the whole process of the training data, ...
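A schematic, hedged stand-in for the prior-art flow described above (assuming PyTorch tensors; densify_on_cpu is a hypothetical helper), showing only the repeated CPU densification and CPU-GPU round trip that the background identifies as the bottleneck:

```python
import torch

def densify_on_cpu(sparse_ids, num_features=1000):
    """CPU step: turn per-sample lists of sparse feature IDs into a dense multi-hot tensor."""
    dense = torch.zeros(len(sparse_ids), num_features)
    for row, ids in enumerate(sparse_ids):
        dense[row, ids] = 1.0
    return dense

device = "cuda" if torch.cuda.is_available() else "cpu"
for step in range(3):                                    # each round repeats the cycle
    dense_batch = densify_on_cpu([[1, 5, 9], [2, 3]])    # CPU densifies the sparse data
    gpu_batch = dense_batch.to(device)                   # dense data handed to the GPU
    # ... the GPU would train the prediction model on gpu_batch here ...
    _ = gpu_batch.to("cpu")                              # returned to the CPU for the next round
```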


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F30/27; G06T1/20
CPC: G06T1/20; G06F30/27
Inventor: 廉相如, 刘成军, 刘霁
Owner: BEIJING DAJIA INTERNET INFORMATION TECH CO LTD