
Third-order low-rank tensor completion method based on GPU

A third-order tensor completion technology, applied in the field of high-performance computing, which addresses the problems that existing methods are unsuitable for processing large-scale tensors and that their running time grows rapidly with tensor size, and achieves the effect of improved computing efficiency.

Pending Publication Date: 2019-07-26
SHANGHAI UNIV

AI Technical Summary

Problems solved by technology

In general, with a CPU-based third-order low-rank tensor data completion method, the running time increases exponentially with the size of the tensor, so such methods are not suitable for processing large-scale tensors.



Examples


Embodiment 1

[0045] A GPU-based third-order low-rank tensor completion method, whose steps are as shown in Figure 1 (and sketched in code after Step 5 below), includes:

[0046] Step 1: The CPU transmits the input data DATA1 to the GPU, and the cycle counter is initialized to l = 1;

[0047] Step 2: The GPU obtains the third-order tensor Y_l of the current cycle l based on a least-squares solution;

[0048] Step 3: The GPU obtains the third-order tensor X_l of the current cycle l based on a least-squares solution;

[0049] Step 4: The CPU checks whether the end condition is satisfied; if it is, go to Step 5; otherwise, increase the cycle counter l by 1 and return to Step 2 to continue the loop;

[0050] Step 5: The GPU transmits the output data DATA2 to the CPU.
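The following is a minimal, hedged sketch of the Step 1 to Step 5 loop above, written in Python with a CuPy-style array library standing in for the GPU (it falls back to NumPy so it remains runnable without a GPU). The patent only states that Y_l and X_l are obtained from least-squares solutions; the rank-r CP factorization, the alternating least-squares factor updates, the stopping test, and the names complete_tensor, TP, and mask are illustrative assumptions, not the patent's exact formulation.

```python
try:
    import cupy as xp              # GPU path when CuPy and CUDA are available
except ImportError:
    import numpy as xp             # CPU fallback so the sketch stays runnable


def complete_tensor(TP, mask, rank=5, max_iters=100, tol=1e-6):
    """Complete a third-order tensor from its observed entries (TP is zero off the mask)."""
    TP, mask = xp.asarray(TP), xp.asarray(mask)   # Step 1: move DATA1 onto the device
    m, n, k = TP.shape
    A = xp.random.rand(m, rank)                   # random CP factor matrices (assumption)
    B = xp.random.rand(n, rank)
    C = xp.random.rand(k, rank)
    X = TP.copy()                                 # current estimate X_l of the completed tensor
    for l in range(1, max_iters + 1):
        # Step 2: alternating least-squares updates of the CP factors; their
        # product is the low-rank tensor Y_l of the current cycle.
        A = xp.einsum('ijl,jr,lr->ir', X, B, C) @ xp.linalg.pinv((B.T @ B) * (C.T @ C))
        B = xp.einsum('ijl,ir,lr->jr', X, A, C) @ xp.linalg.pinv((A.T @ A) * (C.T @ C))
        C = xp.einsum('ijl,ir,jr->lr', X, A, B) @ xp.linalg.pinv((A.T @ A) * (B.T @ B))
        Y = xp.einsum('ir,jr,lr->ijl', A, B, C)
        # Step 3: X_l keeps the observed entries of TP and fills the rest from Y_l.
        X_new = xp.where(mask, TP, Y)
        # Step 4: end condition, assumed here to be a small relative change between cycles.
        if float(xp.linalg.norm(X_new - X)) <= tol * (float(xp.linalg.norm(X)) + 1e-12):
            X = X_new
            break
        X = X_new
    # Step 5: transfer the output DATA2 back to CPU memory.
    return xp.asnumpy(X) if hasattr(xp, 'asnumpy') else X
```

This imputation-style alternation (fit a low-rank model by least squares, then refill the unobserved entries) is one common way to realize the two least-squares steps; the patent's own update formulas are not reproduced here.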

Embodiment 2

[0051] This embodiment is basically the same as Embodiment 1; its distinguishing features are as follows:

[0052] Step 1 includes:

[0053] Step 1.1: Allocate space in GPU memory;

[0054] Step 1.2: Transfer the input data DATA1 from CPU memory to the allocated space in GPU memory. DATA1 contains the following data (a setup sketch follows this list):

[0055] (1) A third-order tensor to be completed, T ∈ R^(m×n×k). R denotes the real numbers, and m, n, and k are the sizes of the first, second, and third dimensions of the tensor T, respectively. The total number of elements of this tensor is m×n×k, and the element whose indices along the first, second, and third dimensions are i, j, and k is recorded as T_(i,j,k).

[0056] (2) An observation set S of size o×p×q, where o ≤ m, p ≤ n, and q ≤ k.

[0057] (3) An observation tensor TP ∈ R^(m×n×k), determined by the observation set S and the tensor to be completed T ∈ R^(m×n×k). TP is obtained by applying the observation function ObserveS() to T, that is, TP = ObserveS(T). Among them,...
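As a hedged illustration of assembling DATA1 in Steps 1.1 and 1.2, the sketch below represents the observation set S as a boolean mask over the full m×n×k index space and takes ObserveS() to zero out unobserved entries; since the patent's own definition of ObserveS() is truncated above, these choices (and the names to_device, S, TP) are assumptions for illustration only.

```python
# Hedged sketch of building DATA1 and moving it to the GPU (Steps 1.1-1.2).
# Assumption: S is stored as a boolean mask and ObserveS(T) keeps only the
# observed entries of T; the patent's exact representation may differ.
import numpy as np

try:
    import cupy as cp
    to_device = cp.asarray       # Steps 1.1/1.2: allocate GPU memory and copy into it
except ImportError:
    to_device = np.asarray       # CPU fallback so the sketch stays runnable

m, n, k = 30, 40, 20                         # sizes of the three dimensions of T
rng = np.random.default_rng(0)

T = rng.standard_normal((m, n, k))           # third-order tensor T to be completed
S = rng.random((m, n, k)) < 0.3              # observation set S, here as a boolean mask
TP = np.where(S, T, 0.0)                     # TP = ObserveS(T): observed entries kept, rest zeroed

TP_gpu, S_gpu = to_device(TP), to_device(S)  # DATA1 now resides in GPU memory
```

Arrays of this form could then be fed to a routine like the complete_tensor sketch shown under Embodiment 1.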



Abstract

The invention provides a GPU-based third-order low-rank tensor completion method. The method comprises the following steps: (1) the CPU transmits the input data DATA1 to the GPU, and the cycle counter is initialized; (2) the GPU obtains a third-order tensor Y_l of the current cycle based on a least-squares solution; (3) the GPU obtains a third-order tensor X_l of the current cycle based on a least-squares solution; (4) the CPU checks whether the end condition is met; if so, step (5) is executed, otherwise the cycle counter is increased by one and step (2) is executed to continue the loop; (5) the GPU transmits the output data DATA2 to the CPU. The method uses the GPU to accelerate the highly concurrent computing tasks in third-order low-rank tensor completion, so that computing efficiency is improved. Compared with traditional CPU-based third-order low-rank tensor completion, the method significantly improves computing efficiency and can finish the same computation in a much shorter time.

Description

Technical Field

[0001] The invention relates to the technical field of high-performance computing, and in particular to a GPU (Graphics Processing Unit)-based third-order low-rank tensor completion method.

Background Technique

[0002] High-dimensional data in the real world can be naturally represented by tensors. Data loss often occurs in wireless sensor transmission, so the obtained sensing data is often incomplete. In scenarios where computing and network resources are limited, partial measurements are used to reduce the amount of data to be processed and transmitted, which also leads to incomplete data. How to recover complete data from such incomplete data has been a research hotspot in recent years. A common approach is to model the incomplete data as a low-rank tensor and then exploit redundant features in the data for recovery.

[0003] The present invention mainly focuses on data completion of third-order low-rank tensors. Existing studies have proposed some CP...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F17/15; G06T1/20
CPC: G06F17/153; G06T1/20; G06F17/16
Inventors: 张涛, 徐达, 刘小洋
Owner: SHANGHAI UNIV