
Video event detection method of continuous learning

A video event detection method in the field of continuous-learning video event detection. It addresses the problems of heavy computing-resource consumption, high latency, and congestion, and achieves the effect of avoiding these limitations while providing strong adaptability.

Inactive Publication Date: 2016-05-04
CHINA UNIV OF PETROLEUM (EAST CHINA)
Cites: 3 · Cited by: 11

AI Technical Summary

Problems solved by technology

[0004] The amount of video data is very large, and processing it requires substantial computing resources. Processing raw video data directly causes high latency or even congestion; a reasonable encoding method is therefore needed to express video motion.




Embodiment Construction

[0028] The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

[0029] As shown in figure 1, the present invention is a continuous-learning video event detection method comprising an initial learning stage and an incremental learning stage. In the initial learning stage, labeled video data are prepared and learned with a sparse autoencoder to train a prior model. In the incremental learning stage, the trained prior model classifies newly arriving video data, the probability score and gradient distance are computed, and active learning selects automatic or manual labeling for the new data according to these results.
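As a rough illustration of the initial learning stage, the sketch below trains a sparse autoencoder on labeled clip features and fits a simple softmax head on the learned codes to act as the prior model. PyTorch is assumed; the feature dimensions, the L1 sparsity penalty, and the joint training loop are illustrative choices, not details fixed by the patent.

```python
# Minimal sketch of the initial learning stage (assumptions noted above).
# clip_feats: FloatTensor (num_clips, feat_dim); labels: LongTensor (num_clips,)
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, input_dim=4096, hidden_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.Sigmoid())
        self.decoder = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        code = self.encoder(x)             # sparse code for one clip's features
        return self.decoder(code), code

def train_prior_model(clip_feats, labels, hidden_dim=256,
                      epochs=20, sparsity_weight=1e-3):
    """Learn features with a sparse autoencoder and fit a softmax head
    on the codes; together they serve as the prior model."""
    ae = SparseAutoencoder(input_dim=clip_feats.shape[1], hidden_dim=hidden_dim)
    clf = nn.Linear(hidden_dim, int(labels.max()) + 1)
    opt = torch.optim.Adam(list(ae.parameters()) + list(clf.parameters()), lr=1e-3)
    for _ in range(epochs):
        recon, code = ae(clip_feats)
        loss = (nn.functional.mse_loss(recon, clip_feats)            # reconstruction
                + sparsity_weight * code.abs().mean()                # L1 sparsity on the code
                + nn.functional.cross_entropy(clf(code), labels))    # supervised head
        opt.zero_grad()
        loss.backward()
        opt.step()
    return ae, clf
```

The patent does not specify the network architecture here; an L1 penalty on the hidden code is just one common way to impose sparsity (a KL-divergence constraint on average activations is another).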



Abstract

The invention provides a continuous-learning video event detection method comprising an initial learning stage and an incremental learning stage. In the initial learning stage, labeled video data are prepared, sparse autoencoding is used to learn from the labeled data, and a prior model is trained. In the incremental learning stage, the trained prior model classifies new video data, a probability score and a gradient distance are calculated, and active learning selects automatic or manual labeling for the new data according to the calculation results. The method combines deep learning with active learning to automatically select the most suitable features and to gradually improve the traditional model from video stream data: when new video data arrive, unsupervised learning extracts features, active learning then reduces the manual classification workload as much as possible, the model is refined step by step, and the goal of continuous learning is finally achieved.
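To make the incremental stage concrete, the sketch below routes one new clip either to automatic labeling or to manual annotation. It reuses an encoder `ae` and classifier `clf` like those in the earlier sketch, takes the maximum softmax output as the probability score, and approximates the gradient distance by the norm of the loss gradient under the model's own prediction; these exact definitions and the thresholds are assumptions for illustration, not values given in the abstract.

```python
# Sketch of the incremental-stage decision rule (definitions and thresholds
# are illustrative assumptions): confident clips are auto-labeled, uncertain
# or novel clips are queued for manual labeling.
import torch
import torch.nn.functional as F

def probability_score(clf, code):
    """Highest class probability under the current prior model."""
    with torch.no_grad():
        return F.softmax(clf(code), dim=-1).max().item()

def gradient_distance(clf, code):
    """Norm of the loss gradient w.r.t. the classifier parameters under the
    model's own prediction -- a proxy for how much this clip would move
    the model if it were added to training."""
    clf.zero_grad()
    logits = clf(code)
    loss = F.cross_entropy(logits, logits.argmax(dim=-1))
    loss.backward()
    return sum(p.grad.norm().item() for p in clf.parameters())

def route_new_clip(ae, clf, clip, p_thresh=0.9, g_thresh=1.0):
    """Decide between automatic and manual labeling for one new clip."""
    code = ae.encoder(clip.unsqueeze(0)).detach()   # unsupervised feature extraction
    p = probability_score(clf, code)
    g = gradient_distance(clf, code)
    if p >= p_thresh and g <= g_thresh:
        return "auto", clf(code).argmax(dim=-1).item()   # trust the model's label
    return "manual", None                                # ask a human annotator
```

Clips routed to "manual" would be labeled by an annotator and, together with the auto-labeled clips, used to update the prior model, which is how the model is refined step by step.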

Description

Technical field

[0001] The invention relates to the fields of computer vision, pattern recognition, and machine learning, and in particular to a continuous-learning video event detection method.

Background technique

[0002] Most existing video event detection systems manually extract features from the video, such as motion features like the histogram of oriented gradients (HOG) and the histogram of optical flow (HOF). These hand-designed features cannot be applied to all domains or scenarios and require different trade-offs for different applications. Deep learning is an effective way to solve this problem.

[0003] Some studies also try deep learning methods such as C3D, but they all use a large amount of labeled data to train a fixed model. Such a model cannot be changed during use, so it cannot adapt to changes in complex environments and cannot accurately identify untrained event categories in the video, which makes it poorly suited to constantly changing scenes.
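For context on the hand-crafted baseline criticized above, the following sketch extracts per-frame HOG descriptors and a magnitude-weighted histogram of optical-flow orientations (a simple HOF variant) with OpenCV. The bin count, resize dimensions, and Farneback parameters are arbitrary illustrative choices, not taken from any particular system.

```python
# Illustration of hand-crafted HOG/HOF motion features (the kind of baseline
# the patent argues against). OpenCV is assumed; all parameters are arbitrary.
import cv2
import numpy as np

def hog_hof_features(video_path, size=(64, 128), hof_bins=9):
    hog = cv2.HOGDescriptor()            # default 64x128 HOG layout
    cap = cv2.VideoCapture(video_path)
    prev_gray, feats = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(cv2.resize(frame, size), cv2.COLOR_BGR2GRAY)
        hog_vec = hog.compute(gray).ravel()
        if prev_gray is not None:
            # Dense optical flow between consecutive frames (Farneback)
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
            # Histogram of flow orientations weighted by magnitude (HOF)
            hof_vec, _ = np.histogram(ang, bins=hof_bins,
                                      range=(0, 2 * np.pi), weights=mag)
            feats.append(np.concatenate([hog_vec, hof_vec]))
        prev_gray = gray
    cap.release()
    return np.array(feats)               # one HOG+HOF vector per frame pair
```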

Claims


Application Information

IPC (8): G06K9/62
CPC: G06F18/24
Inventors: 张卫山, 赵德海, 宫文娟, 卢清华, 李忠伟
Owner: CHINA UNIV OF PETROLEUM (EAST CHINA)