
72 results for "Video texture" patented technology

Automatic matching correction method of video texture projection in three-dimensional virtual-real fusion environment

The invention relates to an automatic matching correction method for video texture projection in a three-dimensional virtual-real fusion environment, and to a method for fusing a real video image with a virtual scene. The automatic matching correction method comprises the steps of constructing the virtual scene, obtaining video data, fusing video textures, and correcting the projector. A captured real video is fused with the virtual scene on complex surfaces such as terrain and buildings by texture projection, which improves the expression and presentation of dynamic scene information in the virtual-real environment and enhances the sense of depth of the scene. Dynamic video texture coverage of a large-scale virtual scene can be achieved by adding videos shot from different angles, realizing a dynamic, realistic fusion of the virtual environment and the real scene. Obvious color jumps are eliminated and the visual effect is improved by applying color-consistency processing to the video frames in advance. With the automatic correction algorithm, the virtual scene and the real video are fused more precisely.
Owner:北京微视威信息科技有限公司
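
Fusing a captured video with the virtual scene by texture projection rests on projective texture mapping: each world-space point on the scene surface is transformed by the projector's view and projection matrices, and the result is normalized into [0, 1] texture coordinates. The Python sketch below illustrates only that generic projection step, not the patent's correction algorithm; the projector pose, field of view, and sample point are hypothetical.

```python
import numpy as np

def look_at(eye, target, up):
    """Build a right-handed view matrix for a projector placed at `eye`."""
    f = target - eye; f /= np.linalg.norm(f)        # forward
    s = np.cross(f, up); s /= np.linalg.norm(s)     # right
    u = np.cross(s, f)                              # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view

def perspective(fov_y_deg, aspect, near, far):
    """Standard OpenGL-style perspective projection matrix."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0], m[1, 1] = f / aspect, f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

def project_to_uv(world_point, view, proj):
    """Map a world-space surface point into the projector's [0, 1]^2 texture space."""
    clip = proj @ view @ np.append(world_point, 1.0)
    ndc = clip[:3] / clip[3]            # perspective divide
    return 0.5 * ndc[:2] + 0.5          # [-1, 1] -> [0, 1]

# Hypothetical projector looking at the origin from 10 m away.
view = look_at(np.array([0.0, 2.0, 10.0]), np.zeros(3), np.array([0.0, 1.0, 0.0]))
proj = perspective(45.0, 16 / 9, 0.1, 100.0)
print(project_to_uv(np.array([0.5, 1.0, 0.0]), view, proj))
```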

Quick HEVC (High Efficiency Video Coding) inter-frame prediction mode selection method

The invention discloses a fast HEVC (High Efficiency Video Coding) inter-frame prediction mode selection method. After coarse selection of the inter-frame prediction modes, the statistical properties of the Hadamard-transform-based cost values of the coarsely selected modes are fully exploited, and the correlation between the video texture direction and the inter-frame prediction mode angle is fully considered. For prediction units of different sizes, the coarsely selected modes are rapidly screened by a threshold method, or the continuity of the coarsely selected modes is computed to reflect the texture direction of the prediction unit, so that unnecessary coarsely selected modes are removed without introducing significant extra computation. In the verification of the most probable prediction mode, the correlation between the coarsely selected modes and the most probable mode, as well as the spatial correlation of the video image itself, are fully considered, so that the final optimal inter-frame prediction mode is obtained quickly and the inter-frame coding complexity is reduced while the video coding quality is maintained.
Owner:NINGBO UNIV
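
The screening step described above relies on the Hadamard-transform-based (SATD) costs produced during coarse mode selection: candidates whose cost exceeds the best cost by a margin are unlikely to become the optimal mode and can be dropped before the expensive rate-distortion check. The sketch below shows only that generic threshold idea on hypothetical residual blocks; the threshold value, cost normalization, and candidate modes are illustrative assumptions, not values from the patent.

```python
import numpy as np

def hadamard(n):
    """Build an n x n Hadamard matrix (n must be a power of two)."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def satd(residual):
    """Sum of absolute transformed differences of one residual block."""
    n = residual.shape[0]
    h = hadamard(n)
    return np.abs(h @ residual @ h.T).sum() / n

def screen_modes(costs, rel_threshold=1.5):
    """Keep only coarse candidates whose SATD cost is within rel_threshold
    times the best cost; drop the rest before the rate-distortion check."""
    best = min(costs.values())
    return [m for m, c in costs.items() if c <= rel_threshold * best]

# Hypothetical 8x8 residuals for three coarsely selected candidate modes.
rng = np.random.default_rng(0)
residuals = {mode: rng.normal(scale=s, size=(8, 8))
             for mode, s in [("mode_a", 1.0), ("mode_b", 1.2), ("mode_c", 4.0)]}
costs = {mode: satd(r) for mode, r in residuals.items()}
print(costs, screen_modes(costs))
```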

Immersive virtual display system and display method of CAVE (Cave Automatic Virtual Environment)

The invention provides an immersive virtual display system and display method for a CAVE (Cave Automatic Virtual Environment). An audio control module caches the audio data to be played and generates an audio stream to form a playlist, so that audio data can be output in time. A video control module caches video data centrally in one pass, so that video images play smoothly and efficiently. An edge-blending processing module grabs desktop picture data on the main screen through a master control module, saving processing time; by adopting multithreaded traversal and supporting simultaneous multi-channel processing, processing efficiency is improved. In addition, several correction functions are applied, including vertex correction, geometric correction, and per-pixel RGB (Red Green Blue) color correction, which on the one hand ensures the edge-blending effect and on the other hand reduces the amount of data to be handled. The system is therefore not limited to single-channel projection by the video texture technique: large-screen playback and multi-channel large-screen display are realized by using a player fusion technique for edge-blending processing.
Owner:SHANGHAI FINEKITE EXHIBITION ENG
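
Edge blending between adjacent projection channels typically attenuates pixel intensity across the overlap region with a smooth ramp, applied with respect to the display gamma so that the combined brightness of the two overlapping channels stays roughly constant. The sketch below shows such a generic blend ramp; the overlap width, ramp shape, and gamma value are hypothetical, not taken from the patent.

```python
import numpy as np

def blend_weights(width, overlap, gamma=2.2):
    """Per-column blend weight for the right edge of the left channel:
    1.0 outside the overlap, falling smoothly toward 0.0 across it.
    The ramp is raised to 1/gamma so that the sum of the two overlapping
    channels appears visually constant on a gamma-corrected display."""
    x = np.arange(width, dtype=float)
    start = width - overlap
    t = np.clip((x - start) / overlap, 0.0, 1.0)   # 0 -> 1 across the overlap
    ramp = 0.5 * (1.0 + np.cos(np.pi * t))         # smooth cosine falloff
    return ramp ** (1.0 / gamma)                   # compensate display gamma

# Hypothetical 1920-pixel channel with a 200-pixel overlap on its right edge.
w = blend_weights(1920, 200)
print(w[1700], w[1800], w[1919])   # 1.0 before the overlap, ~0.82 inside, ~0.01 at the edge
```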

Method and system for watching 3D panoramic video based on network video live broadcast platform

Inactive · CN106851244A · Synchronous view switching · Gorgeous 3D experience · Stereoscopic systems · Terminal equipment · 3D camera
The invention discloses a method and system for watching 3D panoramic video on a network video live-broadcast platform. The method comprises an establishment step: establishing two 3D models in the panoramic player of a terminal device, splitting the video data into left and right parts corresponding to the left and right eyes, adding the left and right parts to the 3D models to form panoramic video textures, establishing left and right three-dimensional coordinate spaces in the player of the terminal device, and establishing in each coordinate space a 3D viewing-angle sphere onto which the panoramic video textures are mapped; an addition step: adding two 3D cameras to the three-dimensional coordinate spaces and placing the lenses of the 3D cameras at the centers of the 3D viewing-angle spheres; and a rendering step: dividing the display window of the terminal device into left and right parts and rendering the two 3D viewing-angle spheres onto which the panoramic video textures are mapped to the left and right parts of the display window.
Owner:北京阿吉比科技有限公司
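
Mapping a panoramic video texture onto a viewing sphere usually means converting each view direction into equirectangular (longitude/latitude) texture coordinates; for stereo playback, the same direction indexes into the half of the frame that carries the corresponding eye's content. The sketch below illustrates that direction-to-UV conversion for a hypothetical top-bottom stereo layout; the layout and the sample direction are assumptions, not details from the patent.

```python
import numpy as np

def direction_to_equirect_uv(direction):
    """Convert a unit view direction into equirectangular texture coordinates
    (u = longitude, v = latitude), both in [0, 1]."""
    x, y, z = direction / np.linalg.norm(direction)
    u = (np.arctan2(x, -z) / (2.0 * np.pi)) + 0.5
    v = 0.5 - (np.arcsin(y) / np.pi)
    return np.array([u, v])

def stereo_uv(direction, eye):
    """Index into a hypothetical top-bottom stereo frame: the top half carries
    the left-eye panorama, the bottom half the right-eye panorama."""
    u, v = direction_to_equirect_uv(direction)
    return np.array([u, v * 0.5 + (0.0 if eye == "left" else 0.5)])

# Looking straight ahead (-z) with each eye.
forward = np.array([0.0, 0.0, -1.0])
print(stereo_uv(forward, "left"), stereo_uv(forward, "right"))
```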

VR-based video rendering method and device, electronic equipment and storage medium

The invention relates to the technical field of artificial intelligence and provides a VR-based video rendering method and device, electronic equipment, and a storage medium. The method comprises: obtaining a to-be-rendered video and decoding it to obtain video texture data; loading and compiling a vertex shader and a fragment shader; inputting the vertex coordinates and vertex indices into the vertex shader to obtain a target vertex shader, and inputting the texture coordinates into the fragment shader to obtain a target fragment shader; and monitoring the offset of the to-be-rendered video to obtain updated video texture data, and rendering the updated video texture data to the display screen of the terminal device with the target vertex shader and the target fragment shader. By rendering the updated video texture data to the display screen of the terminal device in the preset rendering mode, video rendering accuracy is improved. In addition, the invention also relates to the technical field of blockchain, and the to-be-rendered video can be stored in a blockchain node.
Owner:PINGAN INT SMART CITY TECH CO LTD
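
The inputs the abstract names (vertex coordinates, vertex indices, and texture coordinates) are the usual data fed to a vertex/fragment shader pair when a decoded video frame is drawn as a textured quad; the monitored offset can then be applied as a shift of the texture coordinates before the next draw. The sketch below only lays out that hypothetical data and the coordinate update; it is not the patent's shader code, and no GPU API is called.

```python
import numpy as np

# A full-screen quad: four vertices in normalized device coordinates,
# their texture coordinates, and the two triangles that index them.
vertex_coords = np.array([[-1.0, -1.0], [1.0, -1.0], [1.0, 1.0], [-1.0, 1.0]],
                         dtype=np.float32)
tex_coords = np.array([[0.0, 1.0], [1.0, 1.0], [1.0, 0.0], [0.0, 0.0]],
                      dtype=np.float32)
vertex_indices = np.array([0, 1, 2, 0, 2, 3], dtype=np.uint32)

def apply_video_offset(tex_coords, offset_uv):
    """Shift the texture coordinates by the monitored video offset and wrap
    them back into [0, 1], so the next draw samples the updated texture region."""
    return np.mod(tex_coords + np.asarray(offset_uv, dtype=np.float32), 1.0)

# Hypothetical offset reported while monitoring the to-be-rendered video.
updated = apply_video_offset(tex_coords, (0.1, 0.0))
print(updated)
```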

Three-dimensional scene model display method and device, storage medium and electronic equipment

The invention relates to a three-dimensional scene model display method and device, a storage medium, and electronic equipment. The method comprises the steps of: obtaining a three-dimensional scene model with a video texture, the three-dimensional scene model being generated by three-dimensional reconstruction of a preset video image; determining a camera model corresponding to each frame of the preset video image; determining a plurality of target viewpoints corresponding to the three-dimensional scene model according to the determined camera models; obtaining a target path among the plurality of target viewpoints, wherein the target path is the movement track followed when the display view angle switches between different target viewpoints; and displaying the three-dimensional scene models corresponding to the different target viewpoints according to the target path. By obtaining the target paths among the multiple target viewpoints and displaying the corresponding three-dimensional scene models along them, the monitored content can be shown from different display view angles, the three-dimensional effect of the monitoring display is improved, and the user experience is improved.
Owner:北京五一视界数字孪生科技股份有限公司
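
The target path between viewpoints is the track the display view angle follows when switching; a simple way to realize such a path is to interpolate the camera position between consecutive target viewpoints. The sketch below is a minimal linear-interpolation version with hypothetical viewpoints; the patent does not specify the interpolation scheme.

```python
import numpy as np

def interpolate_path(viewpoints, steps_per_segment=30):
    """Linearly interpolate camera positions between consecutive target
    viewpoints, producing the movement track followed while the display
    view angle switches from one viewpoint to the next."""
    path = []
    for a, b in zip(viewpoints[:-1], viewpoints[1:]):
        for t in np.linspace(0.0, 1.0, steps_per_segment, endpoint=False):
            path.append((1.0 - t) * np.asarray(a) + t * np.asarray(b))
    path.append(np.asarray(viewpoints[-1]))
    return np.array(path)

# Hypothetical target viewpoints derived from per-frame camera models.
targets = [(0.0, 5.0, 10.0), (8.0, 5.0, 6.0), (12.0, 4.0, -2.0)]
path = interpolate_path(targets)
print(path.shape, path[0], path[-1])
```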

Dynamic-texture waterfall modeling method combining multiple physical attributes

Inactive · CN101937576A · Overcoming generational distortion · Overcoming model untrustworthiness · 3D-image rendering · Terrain · Correlation analysis
The invention relates to a dynamic-texture waterfall modeling method that combines multiple physical attributes, belonging to the technical field of virtual reality. The method comprises the following steps: (1) measuring on site to acquire the physical attribute parameters of real waterfalls in different environments, such as the flow velocity, flow rate, head drop, terrain, and sunlight that influence the waterfalls' forms, while recording video textures in the corresponding states with a video camera; (2) extracting a number of control parameters from the dynamic textures of the waterfall model; (3) reasonably reorganizing the acquired data in a database and building distribution models of the physical attributes through correlation analysis; (4) determining the requirements according to the scene, and computing from the real data in the database, by means of a mapping law, dynamic waterfall textures that approach the real effect; and (5) rendering the waterfall scene with the dynamic textures. The invention extracts the statistical law relating the multiple physical attributes to the dynamic textures of waterfalls, performs highly lifelike waterfall scene modeling for waterfalls in any environmental form using statistical distributions and the measured data, overcomes the model texture distortion caused by ignoring physical factors in traditional waterfall modeling, and realizes realistic simulation of waterfall surface textures driven by the physical attribute parameters.
Owner:BEIHANG UNIV
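
Steps (3) and (4) above amount to learning a statistical relation between the measured physical attributes and the control parameters of the dynamic texture, and then evaluating that relation for a new scene. A least-squares linear fit is one very simple stand-in for such a "mapping law"; the attribute names, the synthetic measurements, and the linear form below are all illustrative assumptions, not the patent's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical field measurements: flow velocity (m/s), flow rate (m^3/s),
# and head drop (m) for a number of recorded waterfalls ...
attributes = rng.uniform([0.5, 1.0, 5.0], [6.0, 80.0, 120.0], size=(40, 3))
# ... and a texture control parameter (e.g. scroll speed of the dynamic
# texture) extracted from the corresponding video textures.
scroll_speed = (0.3 * attributes[:, 0] + 0.002 * attributes[:, 1]
                + 0.01 * attributes[:, 2] + rng.normal(0.0, 0.05, size=40))

# Fit the mapping law: control parameter as a linear function of the attributes.
design = np.column_stack([attributes, np.ones(len(attributes))])
coeffs, *_ = np.linalg.lstsq(design, scroll_speed, rcond=None)

def predict_scroll_speed(velocity, flow_rate, head_drop):
    """Evaluate the fitted mapping law for a new scene's attributes."""
    return np.array([velocity, flow_rate, head_drop, 1.0]) @ coeffs

print(predict_scroll_speed(3.0, 40.0, 60.0))
```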

Mountain disaster early-warning method and system

The invention provides a mountain disaster early-warning method and system. The method comprises the following steps: mountain disaster data are collected from the same viewpoint; static panoramas are stitched and synthesized; video textures of the mountain disaster motion process are generated; the static panoramas and the video textures are combined to construct a dynamic panorama; and multi-period images are overlaid to quantitatively analyze the change in elevation and horizontal displacement before and after mountain disaster deformation. By combining the video textures with the panoramas, the method constructs a dynamic panorama of the deformation process of mountain disasters such as landslides and debris flows; by overlaying the multi-period images, it obtains a quantitative analysis of the mountain disaster deformation data, on the basis of which the early warning is issued.
Owner:CHENGDU UNIVERSITY OF TECHNOLOGY
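
Overlaying multi-period data to quantify deformation can be as simple as differencing two co-registered elevation grids from before and after the event and flagging cells whose change exceeds a warning threshold. The sketch below shows that generic differencing step on synthetic grids; the grid values and the threshold are hypothetical, not parameters from the patent.

```python
import numpy as np

def deformation_report(dem_before, dem_after, threshold=1.0):
    """Difference two co-registered elevation grids (same viewpoint, same
    resolution) and report where the vertical change exceeds `threshold` metres."""
    dz = dem_after - dem_before
    alert = np.abs(dz) > threshold
    return {"max_drop_m": float(dz.min()),
            "max_rise_m": float(dz.max()),
            "alert_cell_fraction": float(alert.mean())}

# Hypothetical 100 x 100 elevation grids before and after a landslide.
rng = np.random.default_rng(2)
before = rng.normal(500.0, 2.0, size=(100, 100))
after = before.copy()
after[40:60, 40:60] -= 3.0          # simulated slip of a 20 x 20 cell block
print(deformation_report(before, after))
```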