
Teaching Video Annotation Method Based on Collaborative Filtering

A teaching-video annotation technology based on collaborative filtering, applied in the field of image processing. It addresses the problems that teaching-video scenes show only inconspicuous differences in visual features and that existing methods therefore label them with low accuracy, and it achieves high-precision annotation.

Active Publication Date: 2017-09-22
Assignee: 山西恒奕信源科技有限公司

AI Technical Summary

Problems solved by technology

However, current machine-learning-based video annotation methods rely on visual features of the video, such as color, shape, and texture. The scenes of a teaching video are uniform and the differences in their visual features are not obvious, so these methods achieve low accuracy when labeling teaching videos.

Embodiment Construction

[0032] The present invention will be described in further detail below in conjunction with the accompanying drawings.

[0033] Referring to Figure 1, the implementation steps of the present invention are as follows:

[0034] Step 1: Input the teaching video and extract subtitle keyframes from it according to the subtitles, obtaining D keyframes.

[0035] The teaching video input in this step is shown in Figure 2, which consists of 12 screenshots, 2a through 2l. The keyframes of Figure 2 are extracted through the following steps:

[0036] 1.1) Sample one image from the teaching video every 20 frames, obtaining Q image frames, Q > 0;

[0037] 1.2) Select the sub-region occupying the bottom 1/4 of each image frame, and compute Y_a, the sum of the absolute values of the pixel differences between corresponding positions of this sub-region and those of the other image frames;

[0038] 1.3) Set the threshold P_a to 1/10 of the number of pixels ...
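
Steps 1.1) to 1.3) amount to change detection in a fixed subtitle region. The following Python sketch illustrates that procedure under stated assumptions: grayscale frames, each sampled frame's bottom-quarter region compared against the most recently kept keyframe, and a threshold proportional to the region's pixel count (the text above is truncated, so the exact comparison scheme and threshold unit are assumptions). All function and variable names are illustrative, not from the patent.

import cv2
import numpy as np

def extract_subtitle_keyframes(video_path, step=20, thresh_ratio=0.1):
    """Sample one frame every `step` frames and keep a frame as a subtitle
    keyframe when its bottom-quarter (subtitle) region differs enough from
    the previously kept keyframe's region."""
    cap = cv2.VideoCapture(video_path)
    keyframes = []          # the D keyframes
    prev_region = None      # subtitle region of the last kept keyframe
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:  # step 1.1: sample every 20th frame
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.int32)
            h = gray.shape[0]
            region = gray[3 * h // 4:, :]  # step 1.2: bottom 1/4 sub-region
            if prev_region is None:
                keyframes.append(frame)
                prev_region = region
            else:
                # Y_a: sum of absolute pixel differences at corresponding positions
                y_a = np.abs(region - prev_region).sum()
                # P_a: threshold set relative to the region's pixel count (step 1.3)
                p_a = thresh_ratio * region.size
                if y_a > p_a:
                    keyframes.append(frame)
                    prev_region = region
        idx += 1
    cap.release()
    return keyframes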

Abstract

The invention discloses a teaching-video labeling method based on collaborative filtering, which mainly overcomes the low labeling accuracy of prior-art teaching-video annotation. The implementation steps are as follows: input the teaching video and extract its subtitle keyframes according to the subtitles, obtaining D keyframes; extract the subtitles of the D keyframes with optical character recognition software, then revise and prune the recognized text to obtain D text documents; segment the teaching video into M shots using the D text documents combined with a Gibbs sampler; manually label some of the M shots, then use collaborative filtering to compute the cosine similarity between the labeled shots and each unlabeled shot, and select the 5 words with the highest cosine similarity to label the unlabeled shot. Because the present invention takes the subtitle information in the teaching video into account, it describes the teaching video more effectively, improves the labeling accuracy of teaching videos, and can be used in video teaching.
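
As a rough illustration of the collaborative-filtering step described above, the Python sketch below represents each shot by a text vector built from its subtitle document (e.g., TF-IDF), computes cosine similarity between labeled and unlabeled shots, and keeps the 5 best-scoring candidate words per unlabeled shot. The vector representation, the score aggregation, and all names are assumptions for illustration; the patent's exact formulation is not given in the text above.

import numpy as np

def propagate_labels(shot_vectors, labeled_idx, shot_words, unlabeled_idx, top_k=5):
    """Annotate unlabeled shots with the words of similar labeled shots.

    shot_vectors : (num_shots, vocab_size) array, one text vector per shot
    labeled_idx  : indices of the manually labeled shots
    shot_words   : dict mapping a labeled shot index to its label words
    """
    # Normalize rows so a dot product equals cosine similarity
    norms = np.linalg.norm(shot_vectors, axis=1, keepdims=True)
    unit = shot_vectors / np.maximum(norms, 1e-12)
    result = {}
    for u in unlabeled_idx:
        sims = unit[labeled_idx] @ unit[u]  # cosine similarity to each labeled shot
        # Score each candidate word by the similarity of the shots that carry it
        scores = {}
        for sim, l in zip(sims, labeled_idx):
            for w in shot_words[l]:
                scores[w] = scores.get(w, 0.0) + float(sim)
        # Keep the 5 highest-scoring words as this shot's annotation
        result[u] = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return result

In this sketch a word's score is the summed similarity of the labeled shots containing it; other aggregation rules, such as taking the words of the single most similar shot, would fit the abstract equally well.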

Description

Technical Field

[0001] The invention belongs to the technical field of image processing, and further relates to a video labeling method in the technical field of pattern recognition, which can be used in online teaching.

Background Technique

[0002] With the rapid development of Internet and multimedia technology, learning through online platforms has gradually become an important supplement to traditional classroom learning. However, thousands of teaching videos are uploaded to the Internet every hour, and searching efficiently and quickly for the videos learners need among this mass of teaching videos is an urgent research topic. The most common approach is to tag the videos, which effectively helps online learners find the desired videos quickly.

[0003] Existing video annotation methods are generally divided into three categories: manual annotation, rule-based annotation, and machine-learning-based annotation...

Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T7/00; G06K9/00; G06F17/30
Inventors: 王斌, 丁海刚, 关钦, 高新波, 牛振兴, 王敏, 宗汝, 牛丽军
Owner: 山西恒奕信源科技有限公司