
System And Method For Transcoding A Multimedia File To Accommodate A Client Display

A multimedia file and client display technology, applied in the field of marking multimedia files, which can solve the problems that data interruptions force users to access content again, that locating the last-accessed position in streamed content wastes time, and that conventional bookmarks do not provide a convenient way of switching between different data formats.

Status: Inactive | Publication Date: 2007-02-08
VMARK
Cites: 5 | Cited by: 68

AI Technical Summary

Benefits of technology

[0048] The system of the present invention can include a wide area network such as the Internet. Moreover, the method of the present invention can facilitate the creating, storing, indexing, searching, retrieving and rendering of multimedia content on any device capable of connecting to the network and performing one or more of the aforementioned functions. The multimedia content can be one or more frames of video, audio data, text data such as a string of characters, or any combination or permutation thereof.
[0078] Another embodiment of the method of the present invention provides an efficient, fast compressed-domain (DCT) method to locate caption text regions in intra-coded and inter-coded frames through the visual rhythm, based on the observation that caption text generally tends to appear in certain areas of the video or in areas that are known a priori. The method combines contrast and temporal coherence information on the visual rhythm to detect text frames, and then uses the information obtained through the visual rhythm to locate the caption text regions in the detected text frames, along with their temporal duration within the video.
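The paragraph above relies on the notion of a "visual rhythm": a 2-D image formed by sampling a fixed path of pixels from every frame and stacking the samples over time, so that caption text that persists on a fixed screen area shows up as sustained high-contrast bands. The following is a minimal sketch of one common way such a structure can be built and scanned; the diagonal sampling path, the contrast threshold and the minimum duration are illustrative assumptions, not values taken from this patent.

```python
import numpy as np

def visual_rhythm(frames):
    """frames: iterable of 2-D grayscale numpy arrays of identical shape."""
    columns = []
    for frame in frames:
        h, w = frame.shape
        n = min(h, w)
        # Sample n pixels along the main diagonal of the frame.
        rows = np.linspace(0, h - 1, n).astype(int)
        cols = np.linspace(0, w - 1, n).astype(int)
        columns.append(frame[rows, cols])
    # Each frame contributes one column; time runs along the horizontal axis.
    return np.stack(columns, axis=1)

def candidate_text_frames(vr, contrast_thresh=30.0, min_duration=15):
    """Flag time indices whose column shows high local contrast for a sustained run."""
    # Vertical gradient within each column approximates stroke contrast.
    contrast = np.abs(np.diff(vr.astype(np.float32), axis=0)).mean(axis=0)
    high = contrast > contrast_thresh
    # Keep only runs of consecutive high-contrast frames (temporal coherence).
    flagged, run_start = [], None
    for t, v in enumerate(np.append(high, False)):
        if v and run_start is None:
            run_start = t
        elif not v and run_start is not None:
            if t - run_start >= min_duration:
                flagged.extend(range(run_start, t))
            run_start = None
    return flagged
```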

Problems solved by technology

For example, in the case of data interruption due to a poor network condition, the user may be required to access the content again.
Unfortunately, multimedia content is represented in a streaming file format, so a user has to view the file from the beginning in order to find the exact point where the earlier viewing left off.
If the multimedia file is viewed by streaming, the user must go through a series of buffering steps to find the last accessed position, wasting much time.
Even with a conventional bookmark that stores a bookmarked position, the same problem occurs when the multimedia content is delivered as a live broadcast, since the bookmarked position within the content is not usually available, and also when the user wants to replay one of the variations of the bookmarked multimedia content.
Further, conventional bookmarks do not provide a convenient way of switching between different data formats.
Similarly, if a bookmark incorporating time information were used to save the last-accessed segment during a real-time broadcast, the bookmark would not be effective during later access, because the later available version may have been edited or a time code may not have been available during the real-time broadcast.
This results in mismatches of time points because a specific time point of the source video content may be presented as different media time points in the five video files.
When a multimedia bookmark is utilized, the mismatches of positions cause a problem of mis-positioned playback.
The entire multimedia presentation is often lengthy.
However, there are frequent occasions when the presentation is interrupted, voluntarily or forcibly, and terminates before finishing.
However, EPG's two-dimensional presentation (channels vs. time slots) becomes cumbersome as terrestrial, cable, and satellite systems send out thousands of programs through hundreds of channels.
Navigation through a large table of rows and columns in order to search for desired programs is frustrating.
However, there still exist some problems for the PVR-enabled STBs.
The first problem is that even the latest STBs alone cannot fully satisfy users' ever-increasing desire for diverse functionalities.
The STBs now on the market are very limited in computing power and memory, so it is not easy to execute most CPU- and memory-intensive applications on them.
However, the generation of such video metadata usually requires intensive computation and a human operator's help, so practically speaking, it is not feasible to generate the metadata in the current STB.
The second problem is related to discrepancy between the two time instants: the time instant at which the STB starts the recording of the user-requested TV program, and the time instant at which the TV program is actually broadcast.
This time mismatch could bring some inconvenience to the user who wants to view only the requested program.
While high-level image descriptors are potentially more intuitive for common users, the derivation of high-level descriptors is still in its experimental stages in the field of computer vision and requires complex vision processing.
On the other hand, despite their efficiency and ease of implementation, the main disadvantage of low-level image features is that they are perceptually non-intuitive for both expert and non-expert users and therefore do not normally represent a user's intent effectively.
Perceptually similar images are often highly dissimilar in terms of low-level image features.
Searches made by low-level features are often unsuccessful and it usually takes many trials to find images satisfactory to a user.
When the refinement is made by adjusting a set of low-level feature weights, however, the user's intent is still represented by low-level features and their basic limitations still remain.
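As an illustration of the weight-adjustment style of refinement mentioned above, the sketch below combines per-feature distances with weights re-estimated from images the user marked as relevant. The feature dictionary layout and the inverse-spread update rule are assumptions for illustration only, not the prior art's actual formulation.

```python
import numpy as np

def weighted_distance(query, image, weights):
    """query, image: dicts mapping feature name -> 1-D numpy vector."""
    return sum(w * np.linalg.norm(query[name] - image[name])
               for name, w in weights.items())

def update_weights(relevant_images, feature_names):
    """Give more weight to features on which the user-marked relevant images agree."""
    weights = {}
    for name in feature_names:
        vectors = np.stack([img[name] for img in relevant_images])
        spread = vectors.std(axis=0).mean() + 1e-6
        weights[name] = 1.0 / spread          # low variance across relevant images -> high weight
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}
```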
Due to its limited feasibility for general image objects and complex processing, its utility is still restricted.
However, a weakness of HSOM is that it is generally too computationally expensive to apply to a large multimedia database.
However, it is also known that those approaches are not adequate for the high dimensional feature vector spaces, and thus, they are useful only in low dimensional feature spaces.
The prior art method, however, confronts two major problems mentioned below.
The first problem of the prior art method is that it requires additional storage to store the new version of an edited video file.
In this case, the storage is wasted storing duplicated portions of the video.
The second problem with the prior art method is that an entirely new set of metadata has to be generated for the newly created video.
If the metadata are not edited in accordance with the editing of the video, they may not accurately reflect the content, even if metadata for the specific segment of the input video have already been constructed.
If the display size of a client device is smaller than the size of the image, the image must be sub-sampled and/or cropped to fit the client display, which reduces its spatial resolution.
In such a case, users very often have difficulty recognizing text or a human face because of the excessive resolution reduction.
Although the importance value may be used to provide information on which part of the image can be cropped, it does not provide a quantified measure of perceptibility indicating the degree of allowable transcoding.
For example, the prior art does not provide the quantitative information on the allowable compression factor with which the important regions can be compressed while preserving the minimum fidelity that an author or a publisher intended.
The InfoPyramid neither provides quantitative information about how much the spatial resolution of the image can be reduced nor ensures that the user will perceive the transcoded image as the author or publisher initially intended.
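To make the transcoding trade-off concrete, here is a minimal sketch of fitting an image to a smaller client display by cropping to an important region and then sub-sampling, while reporting the scale factor and a crude "still readable" check. The region format, the Pillow-based resize, and the minimum readable size are assumptions for illustration; this is not the quantified perceptibility measure the text says the prior art lacks.

```python
from PIL import Image

def fit_to_display(img, important_box, display_size, min_region_px=48):
    """img: PIL.Image; important_box: (left, top, right, bottom); display_size: (width, height)."""
    region = img.crop(important_box)
    # Scale factor needed to fit the important region into the client display.
    scale = min(display_size[0] / region.width,
                display_size[1] / region.height, 1.0)
    new_size = (max(1, int(region.width * scale)),
                max(1, int(region.height * scale)))
    resized = region.resize(new_size, Image.LANCZOS)
    # Crude perceptibility check: is the important region still large enough to read?
    perceptible = min(new_size) >= min_region_px
    return resized, scale, perceptible
```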
Although abrupt scene changes are relatively easy to detect, it is more difficult to identify special effects such as dissolves and wipes.
However, these approaches usually produce many false alarms, and it is very hard for humans to exactly locate various types of shot boundaries (especially dissolves and wipes) in a given video even when the dissimilarity measure between two frames is plotted, for example in a 1-D graph in which the horizontal axis represents time in the video sequence and the vertical axis represents the dissimilarity between the histograms of successive frames.
They also require a high computational load to handle the different shapes, directions and patterns of various wipe effects.
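The 1-D dissimilarity signal described above can be sketched as a per-frame histogram distance whose isolated peaks suggest abrupt cuts, while gradual transitions such as dissolves spread the change over many frames and tend to be missed. The bin count and threshold below are illustrative assumptions.

```python
import numpy as np

def histogram_dissimilarity(frames, bins=32):
    """frames: iterable of 2-D grayscale numpy arrays. Returns one value per consecutive frame pair."""
    diffs, prev_hist = [], None
    for frame in frames:
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()                              # normalize to a probability distribution
        if prev_hist is not None:
            diffs.append(0.5 * np.abs(hist - prev_hist).sum())  # L1 histogram distance in [0, 1]
        prev_hist = hist
    return np.array(diffs)

def abrupt_cuts(diffs, threshold=0.4):
    """Frame indices where the dissimilarity spikes above the threshold."""
    return np.nonzero(diffs > threshold)[0] + 1
```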
As contents become readily available on wide area networks such as the Internet, archiving, searching, indexing and locating desired content in large volumes of multimedia containing image and video, in addition to the text information, will become even more difficult.
However, most compressed-domain methods restrict the detection of text to the I-frames of a video, because it is time-consuming to obtain the DCT AC coefficients for inter-coded frames.
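For reference, a minimal sketch of the I-frame, compressed-domain idea: character strokes produce strong horizontal intensity variation, so the energy of a few horizontal-frequency AC coefficients per 8x8 block can serve as a cheap text indicator. The DCT is recomputed from pixels here for clarity, whereas a real compressed-domain method would read the coefficients directly from the bitstream; the coefficient selection and threshold are assumptions.

```python
import numpy as np
from scipy.fftpack import dct

def block_text_energy(gray, energy_thresh=500.0):
    """gray: 2-D array with dimensions divisible by 8. Returns a boolean text map per 8x8 block."""
    h, w = gray.shape
    text_map = np.zeros((h // 8, w // 8), dtype=bool)
    for by in range(h // 8):
        for bx in range(w // 8):
            block = gray[by*8:(by+1)*8, bx*8:(bx+1)*8].astype(np.float32)
            coeffs = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
            # Energy of the first few horizontal-frequency AC coefficients (row 0, columns 1..4).
            text_map[by, bx] = (coeffs[0, 1:5] ** 2).sum() > energy_thresh
    return text_map
```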

Method used



Examples


third embodiment

[0294] A third embodiment of the method of the present invention may also be gleaned from FIG. 56. In this embodiment, a request to mark the current location or termination position 5630 of the video is sent to the network server by the client. When playback of the interrupted video or multimedia content 5602 is later requested, the server preferably executes a scene change detection algorithm on the rewind interval 5616, i.e., the segment of multimedia content 5602 between viewing beginning position 5614 and termination position 5630. Upon completion of the scene change detection algorithm, the network server sends the client system the resulting list of scene boundaries or scene change frames 5618, 5620, 5622, 5624 and 5628, which will serve as refresh frames. Playback of the video or multimedia content 5602 preferably begins upon completion of the client's display of refresh frames 5618, 5620, 5622 and 5624.
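A hedged sketch of the server-side step in this embodiment: scene changes are detected only inside the rewind interval, and the resulting frame indices are returned to the client as refresh frames to display before playback resumes. The histogram-based detector, bin count and threshold below are illustrative assumptions, not the patent's algorithm.

```python
import numpy as np

def refresh_frames(frames, begin_idx, term_idx, bins=32, threshold=0.4):
    """frames: sequence of 2-D grayscale arrays. Return scene-change indices in [begin_idx, term_idx)."""
    hists = []
    for frame in frames[begin_idx:term_idx]:
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hists.append(hist / hist.sum())
    boundaries = []
    for i in range(1, len(hists)):
        # L1 histogram distance between consecutive frames of the rewind interval.
        if 0.5 * np.abs(hists[i] - hists[i - 1]).sum() > threshold:
            boundaries.append(begin_idx + i)
    return boundaries

# The client would display frames[i] for each returned index i, then resume playback at term_idx.
```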

[0295] Illustrated in FIG. 57 is a flow chart depicting a static method of adaptive refresh rewinding ...

first embodiment

[0318] MetaSync First Embodiment

[0319] FIG. 68 shows the system to implement the present invention for a set top box (“STB”) with the personal video recorder (“PVR”) functionality. In this embodiment 6800 of the present invention, the metadata agent 6806 receives metadata for the video content of interest from a remote metadata server 6802 via the network 6804. For example, a user could provide the STB with a command to record a TV program beginning at 10:30 PM and ending at 11:00 PM. The TV signal 6816 is received by the tuner 6814 of the STB 6820. The incoming TV signal 6816 is processed by the tuner 6814 and then digitized by MPEG encoder 6812 for storage of the video stream in the storage device 6810. Metadata received by the metadata agent 6806 can be stored in a metadata database 6808, or in the same data storage device 6810 that contains the video streams. The user could also indicate a desire to interactively browse the recorded video. Assume further that due to emergency new...
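One plausible reading of what a metadata agent such as 6806 must do, given the discrepancy between broadcast time and recording start discussed earlier, is to shift segment positions delivered by the remote metadata server by the offset between the scheduled broadcast start and the instant recording actually began, so that they index into the stored stream. The data layout and field names below are assumptions for illustration, not the patent's metadata format.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    title: str
    start_sec: float   # position relative to the broadcast start
    end_sec: float

def align_to_recording(segments, broadcast_start_sec, recording_start_sec):
    """Re-express segment positions relative to the start of the recorded file."""
    offset = recording_start_sec - broadcast_start_sec
    return [Segment(s.title, s.start_sec - offset, s.end_sec - offset)
            for s in segments]

# Example: recording began 90 s after the scheduled broadcast start, so a segment
# described at broadcast time 300-480 s lies at 210-390 s in the stored stream.
aligned = align_to_recording([Segment("Top story", 300.0, 480.0)],
                             broadcast_start_sec=0.0, recording_start_sec=90.0)
```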

second embodiment

[0322] MetaSync Second Embodiment

[0323] FIG. 69 shows the system 6900 that implements the present invention when a STB 6930 with PVR is connected to the analog video cassette recorder (VCR) 6920. In this case, everything is the same as the previous embodiment, except for the source of the video stream. Specifically, metadata server 6902 interacts with the metadata agent 6906 via network 6904. The metadata received by the metadata agent 6906 (and optionally any instructions stored by the user) are stored in metadata database 6908 or video stream storage device 6910. The analog VCR 6920 provides an analog video signal 6916 to the MPEG encoder 6912 of the STB 6930. As before, the digitized video stream is stored by the MPEG encoder 6912 in the video stream storage device 6910.

[0324] From the business point of view, this embodiment might be an excellent model to reuse the content stored in the conventional videotapes for the enhanced interactive video service. This model is beneficial t...



Abstract

A method and system are provided for tagging, indexing, searching, retrieving, manipulating, and editing video images on a wide area network such as the Internet. A first set of methods is provided for enabling users to add bookmarks to multimedia files, such as movies, and audio files, such as music. The multimedia bookmark facilitates the searching of portions or segments of multimedia files, particularly when used in conjunction with a search engine. Additional methods are provided that reformat a video image for use on a variety of devices that have a wide range of resolutions by selecting some material (in the case of smaller resolutions) or more material (in the case of larger resolutions) from the same multimedia file. Still more methods are provided for interrogating images that contain textual information (in graphical form) so that the text may be copied to a tag or bookmark that can itself be indexed and searched to facilitate later retrieval via a search engine.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application is a continuing application that is a divisional of commonly-owned, copending U.S. patent application Ser. No. 09/911,293, filed Jul. 23, 2001 by Sull et al.

BACKGROUND OF THE INVENTION

[0002] 1. Field of the Invention

[0003] The present invention relates generally to marking multimedia files. More specifically, the present invention relates to applying or inserting tags into multimedia files for indexing and searching, as well as for editing portions of multimedia files, all to facilitate the storing, searching, and retrieving of the multimedia information.

[0004] 2. Background of the Related Art

1. Multimedia Bookmarks

[0005] With the phenomenal growth of the Internet, the amount of multimedia content that can be accessed by the public has virtually exploded. There are occasions where a user who once accessed particular multimedia content needs or desires to access the content again at a later time, possibly at or fro...

Claims


Application Information

Patent Type & Authority: Application (United States)
IPC(8): G06F17/00; G06F17/30; G11B27/034; G11B27/10; G11B27/28; G11B27/34
CPC: G06F17/30796; G06F17/30799; G06F17/3082; G06F17/30858; G06T3/4092; G11B2220/41; G11B27/105; G11B27/28; G11B27/34; G11B2220/20; G11B27/034; G06F16/7867; G06F16/7844; G06F16/7847; G06F16/71; G06Q50/10
Inventors: SULL, SANGHOON; LEE, KEANSUB; CHUN, SEONG SOO
Owner: VMARK