325 results about "Virtual cinematography" patented technology

Virtual cinematography is the set of cinematographic techniques performed in a computer graphics environment. It covers a wide range of subjects, such as photographing real objects, often with a stereo or multi-camera setup, in order to recreate them as three-dimensional objects, as well as algorithms for the automated creation of real and simulated camera angles.

Direct vision sensor for 3D computer vision, digital imaging, and digital video

A method and apparatus for directly sensing both the focused image and the three-dimensional shape of a scene are disclosed. This invention is based on a novel mathematical transform named Rao Transform (RT) and its inverse (IRT). RT and IRT are used for accurately modeling the forward and reverse image formation process in a camera as a linear shift-variant integral operation. Multiple images recorded by a camera with different camera parameter settings are processed to obtain 3D scene information. This 3D scene information is used in computer vision applications and as input to a virtual digital camera which computes a digital still image. This same 3D information for a time-varying scene can be used by a virtual video camera to compute and produce digital video data.
Owner:SUBBARAO MURALIDHARA
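The patented Rao Transform itself is not reproduced here, but the general principle of recovering scene depth from images taken with different camera parameter settings can be illustrated with the classical thin-lens equation. This is a toy sketch, not the patent's method; the function name is invented for illustration.

```python
def thin_lens_depth(focal_length_mm, image_dist_mm):
    """Recover the object distance u from the thin-lens equation 1/f = 1/u + 1/v.

    f: lens focal length; v: lens-to-sensor distance at which the object
    appears in best focus. Returns u in the same units (mm).
    """
    # 1/u = 1/f - 1/v  =>  u = 1 / (1/f - 1/v)
    return 1.0 / (1.0 / focal_length_mm - 1.0 / image_dist_mm)

# A 50 mm lens whose sensor sits 50.5 mm behind the lens at best focus:
depth = thin_lens_depth(50.0, 50.5)  # object is 5050 mm away
```

In practice, systems of this kind compare the blur in multiple images captured at different settings to find the in-focus setting per scene point, then convert it to depth with a relation like the one above.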

Camera positioning correction calibration method and system

The present invention discloses a camera positioning correction calibration method and a system for realizing the method, belonging to the technical field of virtual production imaging. Using the intrinsic relationships among the lens parameters, the imaging surface, and an optical tracking device, the world coordinates and image-point coordinates of N mark points on a background screen, together with the intrinsic parameters and lens distortion parameters of the camera lens, are used to obtain the rotation matrix between the camera coordinate system and the world coordinate system and the translation vector of the camera's perspective center in the world coordinate system. Combined with the current position information reported by an external camera-pose tracking device, the camera correction calibration information and viewing angle are obtained, and a lookup table relating focusing distance to focal length is established. Thus, when the camera position, lens focal length, or focusing distance changes, the position of the virtual camera in the virtual production system is automatically corrected, so that the real video frame and the computer-generated virtual frame match perfectly.
Owner:BEIJING FILM ACAD
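One concrete piece of the pipeline above is the lookup table relating focusing distance to focal length (zoom lenses "breathe", so the effective focal length shifts with focus). A minimal sketch of such a table with linear interpolation follows; the sample values and function names are illustrative, not from the patent.

```python
import bisect

def build_lut(samples):
    """samples: (focus_distance_m, focal_length_mm) pairs; returns them sorted by distance."""
    return sorted(samples)

def lookup_focal_length(lut, focus_distance_m):
    """Linearly interpolate the effective focal length for a given focus distance."""
    dists = [d for d, _ in lut]
    i = bisect.bisect_left(dists, focus_distance_m)
    if i == 0:
        return lut[0][1]           # clamp below the first sample
    if i == len(lut):
        return lut[-1][1]          # clamp above the last sample
    (d0, f0), (d1, f1) = lut[i - 1], lut[i]
    t = (focus_distance_m - d0) / (d1 - d0)
    return f0 + t * (f1 - f0)

lut = build_lut([(0.5, 24.0), (1.0, 24.3), (2.0, 24.5), (10.0, 24.8)])
lookup_focal_length(lut, 1.5)  # midway between the 1.0 m and 2.0 m samples: 24.4
```

The virtual camera then queries this table on every frame so its field of view tracks the physical lens as the focus puller works.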

Virtual fly over of complex tubular anatomical structures

An embodiment of the invention is a method, which can be implemented in software, firmware, hardware, etc., for virtual fly-over inspection of complex anatomical tubular structures. In a preferred embodiment, the method is implemented in software, which reconstructs the tubular anatomical structure from binary imaging data originally acquired from a computed tomography scan or comparable biological imaging system. The software splits the entire tubular anatomy into exactly two halves and assigns a virtual camera to each half to perform fly-over navigation. Because the elevation of the virtual camera is controllable, there is no restriction on its field-of-view (FOV) angle, which can exceed 90 degrees, for example. The camera viewing volume is perpendicular to each half of the tubular anatomical structure, so potential structures of interest, e.g., polyps hidden behind haustral folds in a colon, are easily found. The orientation of the splitting surface is controllable, and the navigation can be repeated at one or more other split orientations. This avoids missing a structure of interest, e.g., a polyp divided between the two halves of the anatomical structure, during a first fly-over. Preferred-embodiment software conducts virtual colonoscopy fly-over. Experimental fly-over colonoscopy software of the invention, run on 15 clinical datasets, demonstrated an average surface visibility coverage of 99.59 ± 0.2%.
Owner:UNIV OF LOUISVILLE RES FOUND INC
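The core geometric idea, one camera elevated above each half of the split tube and looking back down at the centerline, can be sketched in a few lines. This is an illustrative reduction, assuming a known centerline point and a splitting-plane normal; the names are invented, and the real method also advances the cameras along the centerline and orients their view volumes.

```python
def flyover_cameras(p, up, elevation):
    """Place two virtual cameras above opposite halves of a tubular structure.

    p: a centerline point (x, y, z); up: unit normal of the chosen splitting
    plane; elevation: camera height above the centerline. Each camera sits on
    one side of the splitting plane and looks back at the centerline point,
    so its viewing volume faces its half of the tube.
    """
    cam_a = tuple(pi + elevation * ui for pi, ui in zip(p, up))
    cam_b = tuple(pi - elevation * ui for pi, ui in zip(p, up))
    look_at = p  # both cameras aim at the same centerline point
    return (cam_a, look_at), (cam_b, look_at)

(cam_a, _), (cam_b, _) = flyover_cameras((0, 0, 0), (0, 0, 1), 2.0)
# cam_a = (0, 0, 2.0), cam_b = (0, 0, -2.0)
```

Repeating the fly-over with a rotated `up` vector corresponds to the patent's re-navigation at another split orientation.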

System for producing time-independent virtual camera movement in motion pictures and other media

A system for producing virtual camera motion in a motion picture medium in which an array of cameras is deployed along a preselected path with each camera focused on a common scene. Each camera is triggered simultaneously to record a still image of the common scene, and the images are transferred from the cameras in a preselected order along the path onto a sequence of frames in the motion picture medium such as motion picture film or video tape. Because each frame shows the common scene from a different viewpoint, placing the frames in sequence gives the illusion that one camera has moved around a frozen scene (i.e., virtual camera motion). In another embodiment, a two-dimensional array of video cameras is employed. Each camera synchronously captures a series of images in rapid succession over time. The resulting array of images can be combined in any order to create motion pictures having a combination of virtual camera motion and time-sequence images.
Owner:TAYLOR DAYTON V
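The frame-assembly step described above is essentially an indexing problem: each camera contributes one frame per time step, and the output clip is an ordered walk through that 2-D array. A minimal sketch, with frames represented by labels rather than images:

```python
def frozen_moment_sequence(capture, camera_path, t):
    """Assemble a 'frozen time' clip from a 2-D capture array.

    capture[c][t] is the frame recorded by camera c at time step t;
    camera_path is the order in which cameras are visited along the rig.
    Returns the frame sequence for a virtual move around the scene at instant t.
    """
    return [capture[c][t] for c in camera_path]

def space_time_sequence(capture, path):
    """path: (camera_index, time_index) pairs, mixing virtual motion with time."""
    return [capture[c][t] for c, t in path]

# 3 cameras, 2 time steps; string labels stand in for image frames
capture = [[f"cam{c}_t{t}" for t in range(2)] for c in range(3)]
frozen_moment_sequence(capture, [0, 1, 2], 0)  # ['cam0_t0', 'cam1_t0', 'cam2_t0']
```

Holding `t` fixed while walking the camera path gives the frozen-scene effect; varying both indices along `path` gives the combined virtual-motion-plus-time sequences of the second embodiment.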

Real-time virtual scene LED shooting system and method

The invention discloses a real-time virtual-scene LED shooting system and method, belonging to the field of film and television shooting. Digital assets are called upon to construct a virtual scene according to the content the shot is to present; a virtual LED screen and a virtual camera are reconstructed in a virtual engine module, and the real environment illumination in the studio is synchronized to the virtual engine in real time. The virtual engine module performs distributed real-time rendering of the virtual scene and displays it on the virtual LED screen; outside the virtual LED screen, the virtual engine further overlays a picture with a depth channel, rendered in real time according to the position of the live camera and its lens distortion information. The physical LED screen displays the virtual LED screen, the live camera completes the shot, the XR module obtains the depth-channel picture and the picture shot by the live camera, and the final picture is obtained through compositing. The method can replace green-screen keying to achieve direct-to-film results in most environments, streamline the production process, and save the cost of complex visual effects.
Owner:ZHEJIANG SHIGUANG ZUOBIAO TECH CO LTD (浙江时光坐标科技股份有限公司)

Cartoon expression based auxiliary entertainment system for video chatting

The invention relates to an auxiliary entertainment system for video chatting. The system adds cartoon expression pictures in real time to the video images of a user who is video chatting, increasing the fun of the chat. Using pattern recognition, the system locates and draws cartoonized expression patterns on various body parts of the user in the video, such as the face and fingers; a specific expression pattern can also be drawn at a designated position in the video picture. The patterns can change according to emotions selected by the user, such as happiness, anger, sorrow, and joy, or change automatically by recognizing the user's expressions and actions through pattern recognition. The system is also compatible with the expression patterns in common formats used in text chat software and can draw those patterns in the video according to user settings. All cartoon drawing is completed in real time, and the modified video stream is provided, in the manner of a virtual camera, to any video chat system the user employs.
Owner:ZHANG MING (张明)

Control method and device of virtual camera in virtual studio, implementation method of virtual studio, virtual studio system, computer readable storage medium and electronic equipment

The invention relates to the technical field of computers and provides a control method and device for a virtual camera in a virtual studio, an implementation method for the virtual studio, a virtual studio system, a computer-readable storage medium, and electronic equipment. The control method of the virtual camera in the virtual studio comprises the steps of: obtaining preset parameters of the virtual camera for different preset studio links, wherein the preset parameters comprise at least one of position, posture, and focal length; generating a corresponding camera-movement control for each preset parameter; and, in response to a triggering operation on any camera-movement control, adjusting the virtual camera according to the preset parameter corresponding to that control. With this scheme, based on the generated camera-movement controls, the virtual camera can shoot the virtual background pictures corresponding to different preset broadcast links, so that the virtual background pictures fuse better with the actual pictures shot by the physical camera, improving the realism of the broadcast pictures.
Owner:NETEASE (HANGZHOU) NETWORK CO LTD
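The control scheme above maps cleanly onto presets-as-callables: one control per broadcast link, each applying its stored parameters when triggered. A minimal sketch, with invented class and attribute names:

```python
class VirtualCamera:
    """Illustrative stand-in for the engine's virtual camera object."""
    def __init__(self):
        self.position = (0, 0, 0)
        self.rotation = (0, 0, 0)
        self.focal_length = 35.0

def make_move_controls(presets):
    """presets: {link_name: {attribute: value, ...}}.

    Returns one control (a callable) per broadcast link; triggering a control
    applies its preset attributes to the given camera.
    """
    def make(preset):
        def trigger(camera):
            for key, value in preset.items():
                setattr(camera, key, value)
        return trigger
    return {name: make(p) for name, p in presets.items()}

cam = VirtualCamera()
controls = make_move_controls({"opening": {"position": (0, 2, -5), "focal_length": 50.0}})
controls["opening"](cam)  # cam now holds the "opening" link's preset
```

Because each preset lists only the parameters it cares about, a control can change position alone, posture alone, or any combination, matching the "at least one of" language in the claim.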

Hand-drawn scene three-dimensional modeling method combining multi-perspective projection with three-dimensional registration

The invention provides a hand-drawn scene three-dimensional modeling method combining multi-perspective projection with three-dimensional registration. The method comprises the following steps. Standardized preprocessing is performed on all three-dimensional models in a model base; virtual cameras are arranged at the vertices of a regular polyhedron, and projection pictures of each three-dimensional model are shot from all angles to represent its visual shape; visual features of all the projection pictures of each model are extracted, and a three-dimensional model feature base is established from these features. Users hand-draw two-dimensional pictures of each three-dimensional model of the scene to be shown, together with character labels for the drawings; the drawings are photographed by cameras, the image regions are processed, and visual features of the hand-drawn pictures are extracted, with the processed character-label regions serving as retrieval keywords. Similarity is computed between the visual features of the hand-drawn pictures and the model features in the feature base, and retrieval yields the three-dimensional models of the scene; the models with the largest similarity are projected to their corresponding positions through a three-dimensional registration algorithm, thereby achieving the three-dimensional modeling and display of the hand-drawn scene.
Owner:BEIJING UNIV OF POSTS & TELECOMM
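The first step, virtual cameras at the vertices of a regular polyhedron, fixes a small set of evenly spread viewpoints around each model. As a sketch, the octahedron is the simplest regular polyhedron to write down (the patent permits any of them); the function name is illustrative:

```python
def octahedron_view_cameras(radius=1.0):
    """Virtual camera positions at the six vertices of a regular octahedron.

    All cameras sit at the given radius from the origin, where the
    (preprocessed, normalized) model is centred, and look inward; each
    position yields one projection picture of the model.
    """
    r = radius
    return [( r, 0, 0), (-r, 0, 0),
            ( 0, r, 0), ( 0, -r, 0),
            ( 0, 0, r), ( 0, 0, -r)]

cams = octahedron_view_cameras(2.0)  # 6 viewpoints, one rendered projection each
```

Features extracted from the six renders together form the model's entry in the feature base; a dodecahedron or icosahedron would give denser angular coverage at the cost of more renders per model.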

Game system, game device, storage medium storing game program, and game process method

An example game system includes a controller device, and a game process section for performing a game process based on an operation on the controller device. The controller device includes a plurality of direction input sections, a sensor section for obtaining a physical quantity used for calculating an attitude of the controller device, and a display section for displaying a game image. The game process section first calculates the attitude of the controller device based on the physical quantity obtained by the sensor section. Then, the game process section controls an attitude of a virtual camera in a virtual space based on the attitude of the controller device, and controls a position of the virtual camera based on an input on the direction input section. A game image to be displayed on the display section is generated based on the position and the attitude of the virtual camera.
Owner:NINTENDO CO LTD
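The control split described above, attitude from the motion sensor, position from the direction inputs interpreted in the camera's own frame, can be sketched in 2-D. This is a simplified illustration with invented names, not the actual game-system code; a real implementation works with a full 3-D attitude from the sensor fusion.

```python
import math

def update_camera(cam_pos, controller_yaw_deg, stick_x, stick_y, speed=0.1):
    """Virtual-camera attitude from the controller's sensed yaw; position from the stick.

    Stick input is interpreted in the camera's frame: stick_y moves along the
    facing direction, stick_x strafes sideways. cam_pos is (x, z) on the
    ground plane. Returns the new position and the attitude (yaw, radians).
    """
    yaw = math.radians(controller_yaw_deg)
    forward = (math.sin(yaw), math.cos(yaw))
    right = (math.cos(yaw), -math.sin(yaw))
    x = cam_pos[0] + speed * (stick_y * forward[0] + stick_x * right[0])
    z = cam_pos[1] + speed * (stick_y * forward[1] + stick_x * right[1])
    return (x, z), yaw

# Controller turned 90 degrees, stick pushed forward: camera steps along +x
pos, yaw = update_camera((0.0, 0.0), 90.0, stick_x=0.0, stick_y=1.0)
```

Rendering then proceeds from `pos` with attitude `yaw`, and the resulting image is sent to the controller's own display.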

Unmanned aerial vehicle positioning method based on a cooperative two-dimensional code of a virtual simulation environment

CN109658461A (Active)
The invention provides an unmanned aerial vehicle positioning method based on a cooperative two-dimensional code in a virtual simulation environment. The method comprises the steps of: placing a checkerboard in a virtual scene, calibrating the camera, and obtaining the parameters of the virtual camera; identifying an AprilTag two-dimensional code in the scene, accurately positioning the unmanned aerial vehicle through the AprilTag code, and verifying, in the virtual scene, both the calibration accuracy of the camera and the feasibility of the AprilTag-based positioning and attitude-determination algorithm. By placing a checkerboard in the virtual scene and using a coordinate-system conversion relationship, the invention obtains the virtual camera parameters and calibrates the camera, providing camera intrinsics for verifying drone visual-navigation algorithms in the virtual scene. This solves the problem that the virtual camera's intrinsic parameters cannot otherwise be acquired; the calibrated camera parameters and the AprilTag positioning algorithm are then used to solve for the camera's position parameters, solving the problem of fast and robust positioning of the unmanned aerial vehicle in a complex environment.
Owner:NO 20 RES INST OF CHINA ELECTRONICS TECH GRP
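Once the tag detector reports the camera's pose relative to a tag, positioning the drone in the world reduces to chaining that pose with the tag's known world pose, a product of 4x4 homogeneous transforms. A pure-Python sketch with illustrative values (the real pipeline obtains `camera_in_tag` from the AprilTag library using the calibrated intrinsics):

```python
def mat_mul(a, b):
    """Product of two 4x4 homogeneous transform matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def camera_world_pose(tag_in_world, camera_in_tag):
    """Chain the tag's known world pose with the detector's camera-in-tag pose
    to obtain the drone camera's pose in world coordinates."""
    return mat_mul(tag_in_world, camera_in_tag)

# Tag placed 5 m along world x, axis-aligned; camera 2 m in front of the tag
tag_in_world = [[1, 0, 0, 5], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
camera_in_tag = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 2], [0, 0, 0, 1]]
pose = camera_world_pose(tag_in_world, camera_in_tag)  # translation (5, 0, 2)
```

The rotation block of `pose` gives the drone's attitude and the last column its position, which is exactly what the virtual-scene verification compares against the simulator's ground truth.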

Merging of a video and still pictures of the same event, based on global motion vectors of this video

It is quite common for users to have both video and photo material from the same event. Adding photos to home videos enriches the content; however, simply inserting still photos into a video sequence has a disturbing effect. The invention relates to a method for seamlessly integrating photos into video by creating a virtual camera motion over the photo that is aligned with the estimated camera motion in the video. A synthesized video sequence is created by estimating the video camera motion at the insertion position in the video sequence at which the still photo is to be included, then creating a virtual video sequence of sub-frames of the still photo, where the virtual video sequence has a virtual camera motion correlated with the video camera motion at the insertion position.
Owner:SHENZHEN TCL CREATIVE CLOUD TECH CO LTD
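The sub-frame sequence amounts to a sequence of crop windows sweeping across the still photo at the pan speed estimated from the surrounding video. A minimal sketch for a horizontal pan; names and the single-axis motion model are illustrative simplifications of the global-motion-vector alignment the patent describes:

```python
def virtual_pan_crops(photo_w, photo_h, crop_w, crop_h, dx_per_frame, n_frames):
    """Crop windows that sweep across a still photo, imitating the pan speed
    estimated from the surrounding video (dx_per_frame, in photo pixels per
    frame). Each window, rescaled to the video frame size, becomes one
    sub-frame of the virtual video sequence.
    """
    crops = []
    x = 0
    y = (photo_h - crop_h) // 2          # start at the left edge, vertically centred
    for _ in range(n_frames):
        x = min(max(x, 0), photo_w - crop_w)  # clamp the window inside the photo
        crops.append((x, y, crop_w, crop_h))
        x += dx_per_frame
    return crops

crops = virtual_pan_crops(1920, 1080, 960, 540, dx_per_frame=40, n_frames=5)
# windows at x = 0, 40, 80, 120, 160
```

Matching `dx_per_frame` to the video's estimated global motion at the insertion point is what makes the cut into and out of the photo feel continuous.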