
Double-stage time sequence action detection method and device, equipment and medium

A two-stage action detection technology, applied in neural learning methods, character and pattern recognition, instruments, etc. It addresses the problems of low recognition accuracy, low accuracy in judging the start and end positions of actions, and special requirements on the length of the video to be detected, achieving high recognition stability and good robustness.

Pending Publication Date: 2021-10-08
BEIHANG UNIV

AI Technical Summary

Problems solved by technology

[0003] The existing action detection methods have various disadvantages, such as low recognition accuracy, low accuracy in judging the start and end positions of actions, and special requirements on the length of the video to be detected.

Method used



Examples

Experimental program
Comparison scheme
Effect test

Embodiment 1

[0172] Two-stage temporal action detection is performed on the public data sets THUMOS-14 and ActivityNet-1.3, following these steps:

[0173] S1, obtain video information features;

[0174] S2, according to the video information features, extract candidate boundaries, and obtain candidate boxes from the candidate boundaries;

[0175] S3, correct the candidate box boundaries and determine the actions in the video.
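The three steps above can be sketched as a minimal pipeline. This is only an illustrative skeleton under assumed interfaces; all function names are hypothetical placeholders, not the patent's actual implementation:

```python
from typing import Callable, List, Tuple

Interval = Tuple[float, float]  # hypothetical (start, end) candidate box


def detect_actions(video_frames,
                   get_features: Callable,          # S1: video -> feature sequence
                   propose_boxes: Callable,         # S2: features -> candidate boxes
                   refine_and_classify: Callable):  # S3: boxes -> refined, labelled actions
    """Two-stage temporal action detection skeleton (hypothetical API)."""
    feats = get_features(video_frames)              # S1: obtain video information features
    candidates = propose_boxes(feats)               # S2: boundaries combined into candidate boxes
    return refine_and_classify(feats, candidates)   # S3: correct boundaries, judge action class
```

Each callable would be backed by the corresponding model stage; the skeleton only fixes the data flow between the two stages.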

[0176] In step S1, the video is cut in order into N segments of equal length, and the RGB stream and optical flow of all segments are extracted. The RGB stream and optical flow are input to a 3D action recognition model to extract RGB features and optical flow features; the RGB features and optical flow features are then fused to characterize the information of the entire video, where each of the N segments is 16 frames.
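The segmentation and two-stream fusion described above can be sketched as follows. The backbone models are stand-ins for the 3D action recognition model (not specified further in the excerpt), and concatenation is assumed as the fusion operation; the handling of a frame count that is not a multiple of 16 is also an assumption:

```python
import numpy as np


def segment_video(frames: np.ndarray, seg_len: int = 16) -> list:
    """Cut the frame sequence, in order, into equal-length segments of
    seg_len frames; any trailing remainder is dropped (one possible
    handling -- the text does not specify non-multiple lengths)."""
    n = len(frames) // seg_len
    return [frames[i * seg_len:(i + 1) * seg_len] for i in range(n)]


def fuse_two_stream(segments, rgb_model, flow_model) -> np.ndarray:
    """Run a (hypothetical) 3D backbone on the RGB stream and on the
    optical-flow stream of each segment, then fuse the two feature
    vectors by concatenation to characterize the whole video."""
    fused = [np.concatenate([rgb_model(s), flow_model(s)]) for s in segments]
    return np.stack(fused)  # shape (N, C_rgb + C_flow)
```

With stub extractors that return fixed-size vectors, a 40-frame clip yields two 16-frame segments and an (N, C) feature matrix for the whole video.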

[0177] Step S2 includes the following sub-steps:

[0178] S21, convert video information charact...

Experimental example

[0256] Comparative Example 1: the results of candidate box generation on the THUMOS-14 data set are shown in Table 1.

[0257] Table 1

[0258]

[0259] Here, @50, @100, and @200 denote the average recall when 50, 100, and 200 candidate boxes are generated per video. The higher the average recall, the better the performance. As can be seen from the table, the recall of Example 1 of this application is significantly higher than that of the other methods.
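The AR@AN metric described above can be sketched as follows. This is a simplified single-threshold version (the standard benchmarks average over a range of temporal IoU thresholds), and it assumes proposals are pre-sorted by confidence:

```python
def temporal_iou(a, b):
    """Temporal intersection-over-union of two (start, end) intervals."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0


def average_recall_at_an(proposals_per_video, gt_per_video, an, tiou=0.5):
    """Average recall when only the top `an` candidate boxes per video are
    kept: the fraction of ground-truth actions matched by at least one
    proposal with temporal IoU >= tiou, averaged over videos (sketch)."""
    recalls = []
    for props, gts in zip(proposals_per_video, gt_per_video):
        if not gts:
            continue
        kept = props[:an]  # assumes proposals sorted by descending confidence
        hit = sum(1 for g in gts
                  if any(temporal_iou(g, p) >= tiou for p in kept))
        recalls.append(hit / len(gts))
    return sum(recalls) / len(recalls) if recalls else 0.0
```

Raising `an` can only add proposals, so AR@AN is non-decreasing in AN, which is why the table reports @50, @100, and @200 separately.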

[0260] Comparative Example 1: the results of candidate box generation on the ActivityNet-1.3 data set are shown in Table 2.

[0261] Table 2

[0262]

[0263] Here, AR@AN=100 denotes the average recall when 100 candidate boxes are generated per video; the higher the average recall, the better the performance. AUC is the area enclosed by the AR@AN curve and the coordinate axes; the larger the value, the better the performance. As can be seen from the table, the candidate box i...
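The AUC described above can be sketched as the area under the AR-versus-AN curve. The normalization below (dividing by the AN range so that constant recall 1.0 gives AUC 1.0) is an assumption modeled on the common ActivityNet-style evaluation, not taken from the excerpt:

```python
def ar_auc(an_values, ar_values):
    """Trapezoidal area under the AR@AN curve, normalized so that a
    constant recall of 1.0 over the full AN range yields an AUC of 1.0
    (normalization is an assumption)."""
    area = 0.0
    for i in range(1, len(an_values)):
        width = an_values[i] - an_values[i - 1]
        area += 0.5 * (ar_values[i] + ar_values[i - 1]) * width
    return area / (an_values[-1] - an_values[0])
```

Because AUC summarizes the whole curve rather than a single operating point, it rewards methods whose recall is already high at small proposal counts.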



Abstract

The invention discloses a two-stage temporal action detection method and device, equipment and a medium. The method comprises the steps of obtaining video information features; finding potential action start and end moments according to the video information features; combining the start and end moments into candidate boxes; and calibrating the boundaries of the candidate boxes and judging their content to obtain action categories. The two-stage temporal action detection method, device, equipment and medium have the advantages of high recognition precision, good recognition stability, good robustness and the like.

Description

Technical field

[0001] The present invention relates to a temporal action detection method, belonging to the field of image recognition and detection.

Background technique

[0002] Action detection in video is an important branch of image understanding.

[0003] Existing action detection methods have low recognition accuracy, low accuracy in judging the start and end positions of actions, and special requirements such as constraints on the length of the video to be detected.

[0004] For the above reasons, the inventors conducted in-depth studies on action detection in video and proposed a two-stage temporal action detection method.

Inventive content

[0005] In order to overcome the above problems, the inventors conducted in-depth studies and designed a two-stage temporal action detection method, comprising the following steps:

[0006] S1, acquire video information features;

[0007] S2, according to the ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06N3/047; G06N3/048; G06N3/045; G06F18/2415; Y02D10/00
Inventor: 王田, 李泽贤, 吕金虎, 刘克新, 张宝昌
Owner: BEIHANG UNIV