
A Method for Unsupervised Time-series Segmentation of Behavior Video

An unsupervised video-sequence segmentation technology, applied in image analysis, image enhancement, instruments, etc. It addresses problems such as the high cost in manpower and material resources and the low timeliness of video monitoring and screening.

Inactive Publication Date: 2019-04-09
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

Most existing analysis methods assume that an observed video clip contains only one behavior category. In practice, observed behavior videos often contain multiple consecutive behavior categories, and in many cases there is no prior knowledge for judging the possible behavior types or the time range of each behavior. As a result, video monitoring and screening have very low timeliness and consume considerable manpower and material resources.



Examples


Embodiment 1

[0052] A method for unsupervised time-series segmentation of behavior video, built on a sliding-window model of the behavior video, comprising:

[0053] (1-1) Initialize the start time of video detection as n_t and the corresponding sliding-window frame length as L_t;

[0054] (1-2) Detect behavior change points within the established video sequence window;

[0055] (1-3) If a behavior change point c is detected in the video sequence window, take time point c as the new detection start time and re-initialize the sliding-window frame length, then continue detecting the subsequent video; otherwise, if no behavior change point is detected in the window, keep the initialized n_t as the detection start frame, i.e. n_{t+1} = n_t, while the sliding-window frame length is updated to L_{t+1} = L_t + ΔL, where ΔL is the incremental step of the sliding-window length;

[0056] (1-4) The entire detection process u...
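The incremental sliding-window procedure of steps (1-1) to (1-4) can be sketched as a simple loop. This is a minimal illustration, not the patented implementation: the `detect` callback is a hypothetical change-point detector (the patent's detector is described later, in Embodiment 3), and the parameter names are assumptions.

```python
def segment_video(features, L0=50, dL=10, T0=None, detect=None):
    """Sketch of the incremental sliding-window segmentation loop.

    features: sequence of per-frame feature vectors.
    L0: minimum length of one behavior segment (50 in the text).
    dL: incremental step of the window length (the text's ΔL).
    T0: predetermined deadline frame (defaults to the video end).
    detect(window): hypothetical callback returning the relative
    index of a behavior change point in the window, or None.
    """
    N = len(features)
    T0 = N if T0 is None else min(T0, N)
    L1 = 2 * L0              # initial window length, L_1 = 2 * L_0
    n, L = 0, L1             # current start frame and window length
    change_points = []
    while n + L <= T0:
        window = features[n:n + L]
        c = detect(window)   # change point within the window, or None
        if c is not None:
            change_points.append(n + c)
            n, L = n + c, L1  # restart from the change point, reset L
        else:
            L += dL           # keep the start frame, grow the window
    return change_points
```

The loop terminates once the window would run past the deadline T_0 or the end of the frame sequence, matching step (1-4).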

Embodiment 2

[0063] An unsupervised time-series segmentation method for behavior video as described in Embodiment 1, differing in that the sliding-window model of the behavior video is established through the following steps:

[0064] Step (1-1):

[0065] Initialize the start frame of video detection n_t = n_1 and the corresponding sliding-window frame length L_t = L_1, where L_1 is set to 2L_0 and L_0 is the minimum length of a single behavior segment, set to 50 in this application;

[0066] Step (1-2):

[0067] Perform behavior change point detection within the established video sequence sliding window;

[0068] Step (1-3):

[0069] If a behavior change point c is detected in the video sequence window, time point c is used as the starting frame of subsequent detection with sliding-window frame length L_1, i.e. n_{t+1} = c, L_{t+1} = L_1; if no behavior change point is detected in the video sequence...
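The two state-update rules in step (1-3) can be written as a small pure function. This is an illustrative sketch under the parameter values stated in the text (L_1 = 2·L_0 with L_0 = 50); the increment ΔL is an assumed value, since the text does not fix it here.

```python
def next_state(n_t, L_t, c, L1=100, dL=10):
    """Sketch of the Embodiment 2 update rules.

    n_t, L_t: current start frame and window length.
    c: detected change point (absolute frame index), or None.
    L1: initial window length, 2 * L0 with L0 = 50 per the text.
    dL: window-growth step ΔL (an assumed value for illustration).
    """
    if c is not None:
        return c, L1           # n_{t+1} = c, L_{t+1} = L_1
    return n_t, L_t + dL       # n_{t+1} = n_t, L_{t+1} = L_t + ΔL
```

When no change point is found, the window keeps its start frame and grows; when one is found, detection restarts from the change point with the initial window length.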

Embodiment 3

[0073] An unsupervised time-series segmentation method for behavior video as described in Embodiment 2, differing in that the detection of behavior change points within the established video sequence sliding window in step (1-2) comprises the following steps:

[0074] Under the above incremental sliding window, time-series segmentation of different behaviors is realized by detecting temporal change points of the video subsequence within each window;

[0075] Step (2-1):

[0076] For a given video subsequence Y, let y(t) denote the feature vector of the t-th frame;

[0077] Y is written as Y = {y(t)}, t = 1, 2, ..., N, where N is the number of frames in the video and D is the dimension of y(t). Let Y(t) be a video subsequence of length L within the behavior video Y, starting at time t and ending at time t+L-1, written as:

[0078] Y(t) := [y(t)^T, y(t+1)^T, ..., y(t+L-1)^T] ∈ ...
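The construction of the subsequence matrix Y(t) amounts to stacking L consecutive frame-feature vectors. A minimal sketch, assuming the frame features are stored row-wise in an (N, D) array (the text truncates before stating the exact shape of Y(t), so the (L, D) layout here is an assumption):

```python
import numpy as np

def subsequence(Y, t, L):
    """Stack y(t), y(t+1), ..., y(t+L-1) into the matrix Y(t).

    Y: (N, D) array whose rows are the per-frame feature vectors.
    t: 1-based start time, matching the text's indexing.
    L: subsequence length.
    Returns an (L, D) array [y(t)^T; ...; y(t+L-1)^T].
    """
    return Y[t - 1 : t - 1 + L]
```

For example, with N = 4 frames of dimension D = 3, `subsequence(Y, 2, 2)` returns the rows for frames 2 and 3.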



Abstract

Provided is an unsupervised time-series segmentation method for behavior video. The method comprises: initializing the start time of video detection as n_t and the corresponding sliding-window frame length as L_t; detecting behavior change points in the established video sequence window; if a behavior change point c is detected in the window, using time point c as the new detection start time and re-initializing the sliding-window frame length to continue detecting subsequent video; if no behavior change point is detected in the window, keeping n_t as the detection start frame, i.e. n_{t+1} = n_t, and updating the sliding-window frame length to L_{t+1} = L_t + ΔL, where ΔL is the incremental step of the sliding-window length; the method ends when all video frame sequences have been detected or a predetermined deadline T_0 is reached. The method decides on data change points in behavior video analysis, performs online, real-time unsupervised segmentation without requiring prior knowledge, and can be used directly for online analysis of behavior video data.

Description

Technical field

[0001] The invention relates to an unsupervised time-series segmentation method for behavior video, belonging to the technical field of intelligent video surveillance.

Background technique

[0002] Visual human behavior analysis is a key technology for realizing intelligent video surveillance, human-computer interaction, medical assistance, and motion restoration. Most existing analysis methods assume that only one behavior category exists in an observed video clip. In practice, observed behavior videos often contain multiple consecutive behavior categories, and in many cases there is no prior knowledge for judging the possible behavior types or the time range of each behavior; as a result, video monitoring and screening have very low timeliness and consume considerable manpower and material resources.

Contents of the invention

[0003] Aiming at the deficiencies of the prior art, the present invention provides a method for unsupervised tim...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06T7/215
CPC: G06T2207/10016; G06T2207/30232
Inventor: 卢国梁, 高桢, 闫鹏, 王亮
Owner: SHANDONG UNIV