
Video construction method and system

A video construction method and technology, applied to TV system components, TVs, color TVs, etc. It addresses the problems that current methods require a large amount of computation and that their running time on terminal devices is difficult to meet user needs, thereby reducing computation and memory usage and lightening the workload of edge devices.

Active Publication Date: 2021-06-18
成都视海芯图微电子有限公司

AI Technical Summary

Problems solved by technology

[0004] In addition, current video construction methods involve a huge amount of computation, and their running time on terminal devices is difficult to meet users' needs.



Examples


Embodiment 1

[0044] As shown in Figure 1, this embodiment provides a video construction method whose steps are as follows:

[0045] Step 1: perform feature conversion on the first input information to obtain first feature representation information for it;

[0046] Step 2: match the first feature representation information obtained in Step 1 against the abstract model library to generate a first representation abstract model view based on the first input information;

[0047] Step 3: run the video generation algorithm model on the first abstract model view generated in Step 2 to produce a first representation video for the first input information;

[0048] Step 4: perform feature conversion on the second input information to obtain second feature representation information for it; match the second feature representation information against the abstract model library to generate a second repre...
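To make the flow of Steps 1 through 4 concrete, here is a minimal Python sketch of the pipeline. All names (extract_features, match_abstract_model, generate_video) and the toy feature and matching logic are illustrative assumptions, not the patent's actual implementation:

```python
# Hypothetical sketch of the Embodiment 1 pipeline: feature conversion,
# matching against an abstract model library, then video generation.

def extract_features(input_info):
    """Steps 1/4: convert raw input (text, image path, ...) into a
    feature representation. Toy stand-in: vowel-frequency vector."""
    return [input_info.count(c) / max(len(input_info), 1) for c in "aeiou"]

def match_abstract_model(features, model_library):
    """Steps 2/4: pick the library entry closest (squared L2 distance)
    to the input features -> the 'representation abstract model view'."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model_library, key=lambda m: dist(m["features"], features))

def generate_video(model_view, n_frames=3):
    """Step 3: run a video-generation model on the abstract model view.
    Stand-in: emit frame descriptors tagged with the matched style."""
    return [f"frame{i}:{model_view['style']}" for i in range(n_frames)]

library = [
    {"style": "ink-wash", "features": [0.1, 0.2, 0.1, 0.05, 0.0]},
    {"style": "photoreal", "features": [0.3, 0.0, 0.2, 0.1, 0.1]},
]

# The same conversion/matching/generation chain runs for each piece of
# input information (first input, second input, ...).
for info in ["a person walking on grassland", "wind moving the grass"]:
    feats = extract_features(info)
    view = match_abstract_model(feats, library)
    video = generate_video(view)
    print(info, "->", view["style"], video)
```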

Embodiment 2

[0062] Scene video construction in an animation production task is taken as an example for further description.

[0063] This embodiment takes the animation scene "a character walking on the grassland" as an example. The initial image and a text description serve as the input information; an ink-wash-style abstract model library and a generative adversarial network perform the video generation; an encoder-decoder neural network performs image segmentation, background marking, and image harmonization; and a deep-neural-network-based image fusion method fuses each frame of the video.

[0064] Step S1: use the first original hand-drawn character picture and its text description as input information, and perform feature conversion to obtain feature representation information for the character;

...
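To illustrate the per-frame fusion and harmonization steps described above, here is a minimal Python sketch. It assumes NumPy arrays for frames and a soft segmentation mask; the alpha compositing and mean-color "harmonization" are crude stand-ins for the encoder-decoder and deep-fusion networks the embodiment actually uses:

```python
# Hypothetical per-frame fusion sketch: composite a segmented character
# frame onto a styled background with a soft mask, then nudge colors
# toward the background statistics as a toy harmonization step.
import numpy as np

def fuse_frame(character, background, mask):
    """Alpha-composite character over background. Inputs are HxWx3
    float arrays in [0, 1]; mask is HxWx1 in [0, 1]."""
    return mask * character + (1.0 - mask) * background

def harmonize(frame, background, strength=0.3):
    """Crude stand-in for neural image harmonization: shift the fused
    frame's mean color toward the background's mean color."""
    shift = background.mean(axis=(0, 1)) - frame.mean(axis=(0, 1))
    return np.clip(frame + strength * shift, 0.0, 1.0)

h, w = 4, 4
character = np.random.rand(h, w, 3)   # one character frame
background = np.random.rand(h, w, 3)  # one styled background frame
mask = np.random.rand(h, w, 1)        # soft segmentation mask

fused = harmonize(fuse_frame(character, background, mask), background)
print(fused.shape)  # (4, 4, 3): one fused, harmonized video frame
```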

Embodiment 3

[0073] As shown in Figure 2, the video construction system provided in this embodiment includes: a first feature extraction device, an abstract model matching device, a characterization video generation device, an image processing device, a fusion device, and a computing device;

[0074] The first feature extraction device performs feature conversion on each of a plurality of pieces of input information to obtain feature representation information for each input;

[0075] The abstract model matching device matches the feature representation information of each input against the abstract model library to generate a representation abstract model view based on each input;

[0076] The characterization video generation device feeds the characterization abstract model view of each input into the video generation algorithm model to generate the corresponding characterization video;

[0077] The image pro...
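The division of labor among these devices can be sketched as follows. This is a hypothetical Python rendering of the system structure; the class names, interfaces, and toy per-device logic are assumptions for illustration only, not the patent's actual design:

```python
# Hypothetical sketch of the Embodiment 3 system: each "device" is a
# small class with one responsibility, wired together by the system.

class FeatureExtractionDevice:
    def convert(self, inputs):
        # one feature value per input description (toy: word count)
        return [len(text.split()) for text in inputs]

class AbstractModelMatchingDevice:
    def __init__(self, library):
        self.library = library
    def match(self, features):
        # nearest library entry per feature -> abstract model view
        return [min(self.library, key=lambda m: abs(m - f)) for f in features]

class CharacterizationVideoDevice:
    def generate(self, views):
        # one stand-in "video" (list of frame labels) per model view
        return [[f"view{v}_frame{i}" for i in range(2)] for v in views]

class VideoConstructionSystem:
    """Wires the devices into the pipeline the embodiment describes."""
    def __init__(self):
        self.extractor = FeatureExtractionDevice()
        self.matcher = AbstractModelMatchingDevice(library=[1, 3, 5])
        self.generator = CharacterizationVideoDevice()

    def build(self, inputs):
        features = self.extractor.convert(inputs)
        views = self.matcher.match(features)
        return self.generator.generate(views)

print(VideoConstructionSystem().build(["person walks", "grass sways in wind"]))
```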



Abstract

The invention discloses a video construction method and system. The method first performs feature conversion on multiple pieces of input information that describe the same video, obtaining feature representation information for each piece; it then derives in turn a representation abstract model view and a representation video for each piece of input information, fuses the representation videos into a set of fused images, and finally outputs the harmonized fused image set as the constructed video for all of the input information, yielding a smooth video work. With this technical scheme, videos of different styles and scenes can be generated, and the generated videos are fused and harmonized into a smooth final work; the parallel operation accelerates processing, reduces computation and memory usage, lightens the workload of edge devices, and allows video to be constructed rapidly.

Description

Technical Field

[0001] The invention relates to the technical field of video animation, and in particular to a video construction method and system.

Background Technique

[0002] Deep-learning intelligent perception algorithms give electronic devices accurate semantic perception capabilities, such as text-based semantic recognition, voice-based semantic recognition, and image semantic recognition, providing a sound methodological foundation for devices to describe and represent environments and intentions. Methods that construct video from semantic information have also achieved good expressive effects in the video prediction and generation of characters, and realizing the generation of video from voice, text, and images will greatly support more efficient work in animation design, communication, education, construction, and other industries.

[0003] Current intelligent algorithms can generate videos for hu...


Application Information

IPC(8): H04N5/262; G06T7/194; G06T5/50; G06K9/62
CPC: H04N5/262; G06T5/50; G06T7/194; G06T2207/10016; G06T2207/20221; G06F18/22
Inventor: 张旻晋, 许达文
Owner: 成都视海芯图微电子有限公司