
Intelligent video automatic editing method

A video mixed-cutting technology, applied in the field of video editing, that addresses the problems of coarse recommendation methods, low efficiency in retrieving video clips, and unstable quality, with the effect of reducing labor and time costs, cutting down manual editing steps, and improving work efficiency.

Pending Publication Date: 2021-02-26
珠海九松科技有限公司

AI Technical Summary

Problems solved by technology

[0003] Manual editing is the traditional approach: a person edits one or more target videos and splices the required clips together, using one or more specific software tools for cutting, adding text material, adding animation material, and compositing the final video. Machine-assisted editing is more intelligent than manual editing: a machine parses one or more target videos provided by a person, separates the audio track from the picture, converts the audio track to text, analyzes the keywords in that text, and recommends relevant video clips based on those keywords; a person then performs the final cutting and editing. During compositing, the machine renders the video according to a preset template, for example adding background music and a fixed title and ending. However, this approach retrieves video clips inefficiently, its keyword-matching recommendation is coarse, and it is difficult to decide how long each clip should be, so clip granularity is rough; moreover, the quality of the edit depends entirely on the editor, making quality unstable. As for mixed video clipping, the current technical solution is to input text or an article link and convert it into a video automatically, but this solution is mainly applied to entertainment short videos, where the correlation and similarity between article and video are relatively low, so the video content and the text may be inconsistent. In professional fields, there are high requirements for the professional authority of both the clip and the text: the text must be highly correlated with the target clip, with a one-to-one correspondence between text and target clip.
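The keyword-matching recommendation criticized above can be sketched in a few lines of Python. This is an illustrative toy, not any product's actual code: the clip names, tag metadata, and stopword list are all invented. It makes the coarseness visible, since clips are ranked purely by tag overlap, with no notion of clip length, order, or context.

```python
from collections import Counter
import re

# Hypothetical stopword list; a real system would use a proper one.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "on", "this"}

def extract_keywords(transcript: str, top_n: int = 5) -> list[str]:
    """Rank words in a speech-to-text transcript by raw frequency."""
    words = [w for w in re.findall(r"[a-z]+", transcript.lower())
             if w not in STOPWORDS]
    return [w for w, _ in Counter(words).most_common(top_n)]

def recommend_clips(keywords, clip_tags):
    """Score each library clip by how many keywords its tags contain.

    Coarse by design: ties are common and clip duration is never
    considered, which is exactly the granularity problem noted above.
    """
    scores = {clip: len(set(tags) & set(keywords))
              for clip, tags in clip_tags.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Invented example library of tagged clips.
library = {
    "clip_a": ["surgery", "suture", "technique"],
    "clip_b": ["surgery", "anesthesia"],
    "clip_c": ["cooking", "knife"],
}
kws = extract_keywords("the suture technique used in this surgery")
print(recommend_clips(kws, library))  # clip_a ranks first (3 tags overlap)
```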

Method used




Embodiment Construction

[0023] The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention.

[0024] The present invention provides an intelligent video automatic mixing and cutting method, comprising the following steps:

[0025] Step 1: Semantically analyze the target text with the video clip retrieval algorithm, reorganize the language, compare the reorganized script against the video clips in the video library, and screen out the matching clips;

[0026] Step 2: The video clip recommendation algorithm performs correlation and similarity analysis on the selected video clips; at the same time, the clip detail comparison algorithm performs a detail consistency comparison to select the best video clip combination;
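Steps 1 and 2 can be sketched as a bag-of-words cosine-similarity match between script sentences and clip descriptions. This is a minimal illustration under assumed data structures; the patent does not disclose its actual retrieval or recommendation algorithms, and the sentence/clip texts below are invented.

```python
import math
from collections import Counter

def bow(text: str) -> Counter:
    """Bag-of-words vector for a sentence or clip description."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_clips(script_sentences, clip_descriptions, threshold=0.2):
    """For each script sentence, keep the best-scoring clip above threshold."""
    matches = []
    for sent in script_sentences:
        best = max(clip_descriptions,
                   key=lambda c: cosine(bow(sent), bow(clip_descriptions[c])))
        if cosine(bow(sent), bow(clip_descriptions[best])) >= threshold:
            matches.append((sent, best))
    return matches

# Invented script sentences and clip-library descriptions.
script = ["disinfect the wound before suturing", "apply the bandage firmly"]
clips = {
    "clip1": "nurse disinfects the wound",
    "clip2": "doctor applies the bandage",
}
print(match_clips(script, clips))
```

A production system would likely use learned sentence embeddings rather than raw word counts, but the control flow — score every candidate, keep the best match per script unit — is the same.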



Abstract

The invention discloses an intelligent video automatic editing method, which comprises the following steps: 1, performing semantic analysis on a target text, and selecting and matching corresponding video clips in a video library; 2, performing correlation and similarity analysis on the video clips, with detail comparison performed at the same time; 3, previewing and displaying the assembled video clip combination, and manually modifying or adjusting it; and 4, seamlessly splicing and rendering the video clips, and exporting the required video. Compared with the prior art, the method has the following beneficial effects: manual steps are reduced, lowering production cost; people assist the algorithm's learning instead of doing the editing themselves; algorithm retrieval is faster, the recommendation algorithm is accurate, and recall is higher than a human's; and in terms of final quality, the video is more consistent, avoiding the instability caused by human factors.
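Step 4's seamless splicing can be illustrated by the timeline bookkeeping it implies: each clip's start offset on the output timeline is the running sum of the durations before it, so consecutive clips butt together with no gaps. The clip names and durations below are hypothetical; actual rendering would be handled by a video toolchain such as FFmpeg.

```python
def splice_timeline(clips):
    """Given (name, duration_seconds) pairs in script order, compute each
    clip's start/end offset on the output timeline so that every clip
    begins exactly where the previous one ends (seamless splicing)."""
    timeline, t = [], 0.0
    for name, duration in clips:
        timeline.append({"clip": name, "start": t, "end": t + duration})
        t += duration
    return timeline, t  # t is the total rendered length

# Hypothetical clip order produced by steps 1-3.
timeline, total = splice_timeline([("intro", 3.0), ("clip_a", 12.5), ("outro", 4.0)])
print(total)      # 19.5
print(timeline)   # contiguous start/end offsets
```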

Description

Technical Field

[0001] The invention relates to the technical field of video editing, and in particular to an intelligent video automatic mixing and cutting method.

Background Technique

[0002] Video mixed-cutting is the process of segmenting one or more videos, extracting target clips, and recombining them into a new video; the existing mixed-cutting methods are mainly manual editing and machine-assisted editing.

[0003] Manual editing is the traditional method: a person edits one or more target videos and splices the required clips together; the cutting, adding of text material, adding of animation material, and video compositing must be done with one or more specific software tools. Machine-assisted editing, which is more intelligent than manual editing, has a machine parse one or more target videos provided by a person, separate the audio track from the picture, convert the audio track to text, analyze the keywords in that text, and recommend relevant video clips based o...

Claims


Application Information

IPC(8): H04N21/234; H04N21/44; H04N21/466; H04N21/84; H04N21/845; G06F16/78; G06F16/783
CPC: H04N21/23424; H04N21/44016; H04N21/44008; H04N21/4668; H04N21/23418; H04N21/84; H04N21/8456; G06F16/784; G06F16/7867
Inventor: 白志勇, 王宇廷
Owner: 珠海九松科技有限公司