418 results about "Subtitle" patented technology

Subtitles are text derived from a transcript or screenplay of the dialogue or commentary in films, television programs, video games, and the like. They are usually displayed at the bottom of the screen, but can also appear at the top when other text already occupies the bottom. Subtitles can be a written translation of dialogue spoken in a foreign language, or a written rendering of dialogue in the same language, with or without added information to help viewers who are deaf or hard of hearing, or who otherwise have difficulty following the spoken dialogue (for example because of an unfamiliar accent).

Method for realizing subtitle overlay screenshot based on deep learning

Active · CN108347643A · Improve accuracy · Improve image stitching effect · Selective content distribution · Deep learning · Key frame
The invention discloses a method for realizing subtitle overlay screenshot based on deep learning, and belongs to the field of media technology. The method comprises the following steps: selecting a video interval for the subtitle overlay screenshot from a video; locating and cutting out the subtitles on each frame of image in the video interval; segmenting all subtitles and extracting key frames in each subtitle; performing similarity calculation on the key frames, and performing comparative duplication elimination according to the calculation result to obtain the final subtitles; and stitching the first frame of the video interval with the final subtitles in sequence to obtain a subtitle overlay screenshot. The method is low in error rate, high in processing efficiency and high in automation degree. An illustrative sketch of this pipeline follows the entry.
Owner:CHENGDU SOBEY DIGITAL TECH CO LTD
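
The pipeline in the abstract above (crop the subtitle band from each frame, drop near-duplicate strips, stitch the survivors under the first frame) can be sketched roughly as follows. This is a minimal sketch, not the patented method: it assumes OpenCV and NumPy are available, that the subtitle sits in a fixed strip at the bottom of the frame (the patent instead locates subtitles with a deep-learning model), and that a plain normalized-correlation threshold stands in for the key-frame similarity comparison.

```python
import cv2
import numpy as np

def subtitle_stitch(video_path, start_s, end_s, band_ratio=0.12, sim_thresh=0.92):
    """Build a 'subtitle overlay screenshot': the first frame of the interval on
    top, followed by the subtitle strip of every distinct subtitle in the interval.

    Assumptions (not from the patent): the subtitle occupies a fixed bottom band
    covering `band_ratio` of the frame height, and two strips show 'the same
    subtitle' when their normalized correlation exceeds `sim_thresh`.
    """
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    cap.set(cv2.CAP_PROP_POS_FRAMES, int(start_s * fps))

    first_frame, strips = None, []
    for _ in range(int((end_s - start_s) * fps)):
        ok, frame = cap.read()
        if not ok:
            break
        if first_frame is None:
            first_frame = frame
        h = frame.shape[0]
        strip = frame[int(h * (1 - band_ratio)):, :]          # crop the subtitle band
        if not strips:
            strips.append(strip)
            continue
        # compare with the last kept strip; keep only when clearly different
        prev = cv2.cvtColor(strips[-1], cv2.COLOR_BGR2GRAY).astype(np.float32)
        cur = cv2.cvtColor(strip, cv2.COLOR_BGR2GRAY).astype(np.float32)
        score = cv2.matchTemplate(cur, prev, cv2.TM_CCOEFF_NORMED)[0, 0]
        if score < sim_thresh:
            strips.append(strip)
    cap.release()
    return cv2.vconcat([first_frame] + strips[1:])   # first strip is already visible in frame 1

# Example: cv2.imwrite("overlay.png", subtitle_stitch("episode.mp4", 60, 90))
```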

Subtitle dramatizing method based on closed outline of Bezier

The invention discloses a subtitle dramatizing (i.e., rendering) method based on closed Bezier outlines, which belongs to the technical field of subtitle editing and broadcasting for television program production and broadcasting in the television and film industries. The method comprises the following steps: first, converting subtitle objects into vector outline information which consists of first-, second- or third-degree Bezier segments and comprises N closed vector outlines; then, converting the first- and third-degree Bezier segments into second-degree Bezier segments; next, deleting or cutting the closed annular paths contained in self-intersecting closed outlines in the vector outline information; adjusting and sorting the intersecting closed outlines so that the closed outlines no longer intersect one another; and finally, converting the closed outlines in the vector outline information into polygons, and rendering the subtitles after adding inner edges or outer edges to the polygons. The method can improve subtitle rendering efficiency, enhance the rendering effect, and meet high-end subtitle application requirements. A sketch of the Bezier degree-conversion step follows the entry.
Owner:北京市文化科技融资租赁股份有限公司
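
The step that converts every segment to second-degree (quadratic) Beziers can be illustrated as below. This is a hedged sketch, not the patent's algorithm: the linear case is an exact degree elevation, while the cubic case uses a common single-quadratic approximation (the patent does not state which conversion it applies, and a production implementation would subdivide cubics until a flatness tolerance is met).

```python
from typing import List, Tuple

Point = Tuple[float, float]

def lerp(a: Point, b: Point, t: float) -> Point:
    return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

def line_to_quadratic(p0: Point, p1: Point) -> List[Point]:
    """Exact degree elevation: a line segment is a quadratic Bezier whose
    control point is the midpoint of its endpoints."""
    return [p0, lerp(p0, p1, 0.5), p1]

def cubic_to_quadratic(p0: Point, c1: Point, c2: Point, p3: Point) -> List[Point]:
    """Approximate a cubic Bezier with a single quadratic whose control point is
    (3*(c1 + c2) - (p0 + p3)) / 4; the two curves agree at t = 0, 0.5 and 1.
    For tighter tolerance the cubic would first be subdivided."""
    q = ((3 * (c1[0] + c2[0]) - (p0[0] + p3[0])) / 4.0,
         (3 * (c1[1] + c2[1]) - (p0[1] + p3[1])) / 4.0)
    return [p0, q, p3]

# A closed outline then contains quadratic segments only, ready for the later
# self-intersection clean-up and polygon conversion steps.
outline = [
    line_to_quadratic((0, 0), (10, 0)),
    cubic_to_quadratic((10, 0), (12, 4), (12, 8), (10, 12)),
    line_to_quadratic((10, 12), (0, 0)),
]
```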

Command links

A command link input control has a main title portion describing the user input option that corresponds to selecting that command link. Upon selection of the command link, the dialog containing it is completed without requiring the user to select additional input controls. The command link may optionally contain a subtitle portion providing supplementary text that further explains or elaborates upon the option, and it may also contain a glyph. When a cursor hovers over a command link, or its potential selectability is otherwise indicated, the entire link is highlighted, e.g. by altering the background color of the display region containing the main title, subtitle and/or glyph. A toy interactive sketch of this control follows the entry.
Owner:MICROSOFT TECH LICENSING LLC
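
A rough sketch of such a control is shown below, using Python's standard tkinter toolkit purely for illustration; the patent does not tie the control to any particular UI framework, and the option texts and colors here are made up. It shows the three pieces the abstract describes: title plus optional subtitle and glyph, whole-region hover highlighting, and click-to-complete-the-dialog behaviour.

```python
import tkinter as tk

class CommandLink(tk.Frame):
    """Command-link style control: main title, optional subtitle, optional glyph;
    hovering highlights the whole link, clicking invokes the action and closes
    the owning dialog, so no extra OK button is needed."""
    def __init__(self, parent, title, subtitle="", glyph="\u2192", command=None,
                 normal_bg="white", hover_bg="#d9eaff"):
        super().__init__(parent, bg=normal_bg, padx=12, pady=8, cursor="hand2")
        self._command = command
        self._labels = [tk.Label(self, text=f"{glyph}  {title}", bg=normal_bg,
                                 font=("Segoe UI", 11, "bold"), anchor="w")]
        if subtitle:
            self._labels.append(tk.Label(self, text=subtitle, bg=normal_bg,
                                         font=("Segoe UI", 9), anchor="w"))
        for lbl in self._labels:
            lbl.pack(fill="x")
        # highlight the entire region (frame + labels) on hover
        for widget in [self] + self._labels:
            widget.bind("<Enter>", lambda e: self._set_bg(hover_bg))
            widget.bind("<Leave>", lambda e: self._set_bg(normal_bg))
            widget.bind("<Button-1>", self._on_click)

    def _set_bg(self, color):
        self.configure(bg=color)
        for lbl in self._labels:
            lbl.configure(bg=color)

    def _on_click(self, _event):
        if self._command:
            self._command()
        self.winfo_toplevel().destroy()   # selecting the link completes the dialog

if __name__ == "__main__":
    root = tk.Tk()
    root.title("Choose an option")
    CommandLink(root, "Keep my files",
                "Removes apps and settings, but keeps personal files",
                command=lambda: print("keep")).pack(fill="x", padx=10, pady=4)
    CommandLink(root, "Remove everything",
                "Removes all personal files, apps and settings",
                command=lambda: print("remove")).pack(fill="x", padx=10, pady=4)
    root.mainloop()
```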

Technology for realizing real-time subtitle overlay during video call and applications of technology

The invention discloses a technology for realizing real-time subtitle overlay during a video call, and applications of the technology. The technology comprises subtitle software and comprises the following steps: S1, a voice recognition algorithm: through a machine learning algorithm, audio data in the video is captured in real time and converted into language data with practical meaning; S2, a character conversion algorithm: the acquired voice data is processed so that character information is obtained through real-time conversion; S3, a subtitle display algorithm: the character information is displayed character by character or word by word in real time; S4, an automatic sentence punctuating algorithm: the audio file is analyzed so that the starting and pausing points of a sentence are acquired; and S5, a character and audio/video overlay method: the characters are directly displayed on the video interface in an overlay manner so that video captions are formed, where the video interface does not assign the display positions of the character subtitles. With this technology, after a complete sentence is acquired, all the displayed characters can be updated according to the meaning of the complete sentence. A minimal sketch of the S1-S5 flow follows the entry.
Owner:BEE SMART INFORMATION TECH CO LTD
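
A minimal sketch of the S1-S5 flow, under stated assumptions: `recognize` stands in for a real streaming speech recognizer and `render` for the video-overlay drawing routine, neither of which is specified by the abstract; sentence boundaries are guessed from a run of silent audio chunks, which is only one possible reading of the automatic punctuating step.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class LiveCaptioner:
    """Stream audio chunks through an ASR callback (S1/S2), show partial text
    word by word (S3), detect sentence ends from pauses (S4), and re-render the
    whole caption over the video once the full sentence is known (S5)."""
    recognize: Callable[[bytes], str]          # placeholder ASR: audio chunk -> partial text
    render: Callable[[str], None]              # placeholder overlay: draw text on the video
    pause_chunks_for_sentence: int = 4         # silence threshold, measured in chunks
    _words: List[str] = field(default_factory=list)
    _silent_chunks: int = 0

    def feed(self, audio_chunk: bytes) -> None:
        partial = self.recognize(audio_chunk).strip()
        if partial:
            self._silent_chunks = 0
            self._words = partial.split()      # word-by-word incremental display
            self.render(" ".join(self._words))
        else:
            self._silent_chunks += 1
            if self._words and self._silent_chunks >= self.pause_chunks_for_sentence:
                self._finalize_sentence()

    def _finalize_sentence(self) -> None:
        # Once a complete sentence is available, all displayed characters can be
        # updated according to its meaning (here: simply re-render with a period).
        self.render(" ".join(self._words) + ".")
        self._words = []
```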

Calculating meter system for adding subtitles to foreign language audio image data in real time

Inactive · CN111601479A · Affects effective heat dissipation · Avoid affecting air convection · Casings/cabinets/drawers details · Cooling/ventilation/heating modifications · Cold air · Subtitle
The invention discloses a calculating meter (computer) system for adding subtitles to foreign language audio image data in real time. The system comprises a machine body, wherein a platform used for installing computer hardware is fixedly connected to the bottom of the machine body, a circular groove is formed in the inner wall of the machine body, a motor is fixedly connected to the side wall of the machine body through a bracket, and the movable shaft of the motor penetrates through the side wall of the machine body and extends into the circular groove. Through rotation of a first reciprocating lead screw, a vertical plate drives bristles to slide up and down on the side wall of a second filter screen, and dust on the side wall of the second filter screen is cleaned in real time, so that the accumulation of a large amount of dust after long use of the second filter screen, which would affect the effective heat dissipation inside the machine body, is avoided; and a second reciprocating lead screw and a third reciprocating lead screw rotate to drive a sliding block to move back and forth, so that a fixing plate drives a telescopic air bag to continuously stretch and retract, and the cold air in the telescopic air bag is continuously exhausted toward the platform through an air outlet pipe, accelerating the heat dissipation of the electronic elements on the platform.
Owner:HEILONGJIANG UNIV OF TECH +3

Subtitle correction method, subtitle display method, subtitle correction device, subtitle display device, equipment and medium

The invention discloses a subtitle correction method, a subtitle display method, a subtitle correction device, a subtitle display device, equipment and a medium. The subtitle correction method comprises the following steps: acquiring audio stream data and video picture data in video data; performing voice recognition on the audio stream data to obtain first subtitle information; performing text recognition on the video picture data; and correcting the first subtitle information according to the text recognition result to obtain second subtitle information. The subtitle display method comprises the following steps: acquiring the video data and the second subtitle information; and displaying the second subtitle information when the video data is played. Because the subtitle information obtained by voice recognition is corrected based on text recognition of the video picture content, subtitle information related to the video picture content can be corrected, the consistency between the voice-recognized subtitles and the video content is improved, the accuracy of the subtitle content is improved, and the watching experience of the user is improved; the methods, devices, equipment and medium can be widely applied in the Internet technical field. A small sketch of the correction idea follows the entry.
Owner:TENCENT TECH (SHENZHEN) CO LTD
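
The correction idea (prefer the spelling that text recognition finds in the picture over what speech recognition guessed) can be sketched as follows. This is only an illustrative heuristic under assumptions of my own: `ocr_words` is the output of a separate text-recognition step not shown here, and the near-match rule is a plain `difflib` ratio rather than whatever matching the patent actually uses.

```python
from difflib import SequenceMatcher
from typing import Iterable, List

def correct_subtitle(asr_text: str, ocr_words: Iterable[str],
                     min_similarity: float = 0.6) -> str:
    """Second-pass subtitle correction: for each word produced by speech
    recognition, look for a near-match among words OCR'd from the video frame
    (e.g. names on a slide or text burned into the picture) and prefer the
    on-screen spelling."""
    ocr_words = list(ocr_words)
    corrected: List[str] = []
    for word in asr_text.split():
        best, best_score = word, 1.0 if word in ocr_words else 0.0
        if best_score < 1.0:
            for candidate in ocr_words:
                score = SequenceMatcher(None, word.lower(), candidate.lower()).ratio()
                if score > best_score and score >= min_similarity:
                    best, best_score = candidate, score
        corrected.append(best)
    return " ".join(corrected)

# Example: speech recognition wrote "jayson", but the slide on screen shows "JSON"
print(correct_subtitle("the response is jayson encoded", ["JSON", "HTTP", "API"]))
```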

Method for performing multi-mode video question answering by using frame-subtitle self-supervision

The invention belongs to the field of video question answering, and particularly relates to a method for performing multi-modal video question answering by using frame-subtitle self-supervision. The method includes the following steps: extracting video frame features, question and answer features, subtitle features and subtitle suggestion features; obtaining frame features with attention and subtitle features with attention, and obtaining fusion features; calculating a time attention score based on the fusion features; calculating the time boundary of the question by using the time attention score; calculating answers to the questions by adopting the fusion features and the time attention scores; training a neural network by using the time boundary of the question and the answer to the question; and optimizing the network parameters of the neural network, performing video question answering with the optimal neural network, and delimiting a time boundary. The time boundary related to the question is generated from the self-designed time attention score instead of using time annotations with high annotation cost. In addition, more accurate answers are obtained by mining the relation between the subtitles and the corresponding video content. A toy numerical sketch of the fusion and time attention follows the entry.
Owner:STATE GRID ZHEJIANG ELECTRIC POWER +3
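
The fusion and time-attention arithmetic can be illustrated with a toy NumPy example. Everything below is an assumption-laden sketch: the features are random stand-ins, the attention and fusion are the simplest dot-product versions, and the boundary rule (the span covering all high-scoring steps) is only one plausible reading of "delimiting a time boundary"; the actual method trains these components end to end.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_boundary(frame_feats, subtitle_feats, question_feat, threshold=0.5):
    """Attend to frame and subtitle features with the question vector, fuse them,
    score each time step, and read a question-relevant time span off the scores.
    Shapes: frame_feats (T, d), subtitle_feats (T, d), question_feat (d,)."""
    frame_att = softmax(frame_feats @ question_feat)          # (T,) frame attention
    sub_att = softmax(subtitle_feats @ question_feat)         # (T,) subtitle attention
    attended_frames = frame_att[:, None] * frame_feats        # frame features with attention
    attended_subs = sub_att[:, None] * subtitle_feats         # subtitle features with attention
    fused = np.concatenate([attended_frames, attended_subs], axis=1)   # (T, 2d) fusion
    time_scores = softmax(fused @ np.concatenate([question_feat, question_feat]))  # (T,)
    keep = time_scores >= threshold * time_scores.max()       # steps relevant to the question
    idx = np.flatnonzero(keep)
    return int(idx.min()), int(idx.max()), time_scores        # predicted time boundary

T, d = 20, 64
rng = np.random.default_rng(0)
start, end, scores = temporal_boundary(rng.normal(size=(T, d)),
                                       rng.normal(size=(T, d)),
                                       rng.normal(size=d))
print(f"question-relevant span: frames {start}..{end}")
```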

Method for extracting conceptual words from video subtitles

The invention discloses a method for extracting conceptual words from video subtitles, which comprises the following steps: carrying out word segmentation on the subtitle text and deleting punctuation marks; removing stop words from, and applying part-of-speech tagging to, the segmented subtitle text; calculating the co-occurrence features of each target word and its adjacent words; calculating the semantic similarity between the target word and the adjacent words; marking concept words in a small number of segmented subtitle texts to serve as a training set; and training a pre-established semi-supervised learning framework based on a conditional random field on the training set to obtain a conceptual-word prediction model, which outputs the conceptual-word prediction result for a subtitle text. Based on this method, the workload of manually labeling corpora is reduced, the accuracy of extracting conceptual words from MOOC video subtitles is improved, and practical requirements are met. A sketch of the per-token feature extraction follows the entry.
Owner:SHANDONG UNIV OF SCI & TECH
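
The co-occurrence and similarity features that would feed the conditional random field can be sketched as below. This is a hedged illustration: `similarity` is a placeholder for whatever embedding or thesaurus measure the method uses (a toy character-level Jaccard here), and the CRF training itself (e.g. with a toolkit such as sklearn-crfsuite) is only mentioned in a comment rather than shown.

```python
from collections import Counter
from itertools import tee
from typing import Dict, List, Sequence

def cooccurrence_counts(sentences: Sequence[Sequence[str]]) -> Counter:
    """Count how often adjacent word pairs co-occur across the subtitle corpus."""
    pairs = Counter()
    for words in sentences:
        a, b = tee(words)
        next(b, None)
        pairs.update(zip(a, b))
    return pairs

def token_features(words: List[str], i: int, pairs: Counter,
                   similarity) -> Dict[str, float]:
    """Features for one token, roughly following the abstract: the word itself,
    its co-occurrence strength with neighbours, and a semantic-similarity score."""
    feats: Dict[str, float] = {"bias": 1.0, "word=" + words[i]: 1.0}
    if i > 0:
        feats["cooc_prev"] = float(pairs[(words[i - 1], words[i])])
        feats["sim_prev"] = similarity(words[i - 1], words[i])
    if i < len(words) - 1:
        feats["cooc_next"] = float(pairs[(words[i], words[i + 1])])
        feats["sim_next"] = similarity(words[i], words[i + 1])
    return feats

# These per-token feature dicts, plus a small hand-labelled set of concept-word
# tags, are what a semi-supervised CRF would be trained on; the CRF call itself
# is omitted here.
sentences = [["gradient", "descent", "updates", "model", "parameters"]]
pairs = cooccurrence_counts(sentences)
jaccard = lambda a, b: len(set(a) & set(b)) / len(set(a) | set(b))  # toy similarity
features = [token_features(sentences[0], i, pairs, jaccard)
            for i in range(len(sentences[0]))]
```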

Personalized image and subtitle generating method based on context sequence memory network

The invention provides a personalized image and subtitle (caption) generating method based on a context sequence memory network, which mainly comprises the following steps: construction of a database, construction of the context sequence memory network, state-based sequence generation, and network training. In this process, an image database is first constructed by using a social media application interface and a crawler, and redundant information is filtered; labels and subtitles are predicted from an image memory vector, a context memory vector and a word output vector; and high-frequency words are found in a keyword dictionary, so that the personalized new words with the highest matching degree for the user are generated. Synchronous prediction of labels and subtitles can be handled, a context memory network framework is provided to solve the word matching problem, and at the same time the prediction accuracy of the corresponding generated labels and subtitles is improved. A toy decoding sketch follows the entry.
Owner:SHENZHEN WEITESHI TECH
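
A toy decoding loop in the spirit of a context sequence memory network is sketched below. All of it is assumption: the dimensions, the random projection matrix and the use of the last emitted word as the query are placeholders for trained components, and the memory simply stacks the image vector, a few user-context vectors and the embeddings of the words generated so far.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def generate_caption(image_mem, context_mem, word_embed, start_id, steps=8):
    """At every step the memory holds the image vector, the user's context
    vectors (e.g. hashtags from earlier posts) and the embeddings of the words
    generated so far; attention over this memory produces the next word."""
    vocab_size, d = word_embed.shape
    proj = np.random.default_rng(1).normal(scale=0.1, size=(d, vocab_size))
    generated = [start_id]
    for _ in range(steps):
        output_mem = word_embed[generated]                       # words emitted so far
        memory = np.vstack([image_mem, context_mem, output_mem]) # (M, d) memory bank
        query = word_embed[generated[-1]]                        # state = last word
        att = softmax(memory @ query)                            # attend over memory
        read = att @ memory                                      # (d,) memory read-out
        next_id = int(np.argmax(read @ proj))                    # predict next word id
        generated.append(next_id)
    return generated

d, vocab = 32, 100
rng = np.random.default_rng(0)
ids = generate_caption(image_mem=rng.normal(size=(1, d)),
                       context_mem=rng.normal(size=(3, d)),   # e.g. 3 hashtag vectors
                       word_embed=rng.normal(size=(vocab, d)),
                       start_id=0)
print(ids)
```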