
32 results about "UV mapping" patented technology

UV mapping is the 3D modelling process of projecting a 2D image to a 3D model's surface for texture mapping. The letters "U" and "V" denote the axes of the 2D texture because "X", "Y" and "Z" are already used to denote the axes of the 3D object in model space.
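In code, UV mapping boils down to using a normalized (u, v) pair as an index into a 2D texture. A minimal Python sketch of that lookup (the nearest-neighbour filter and the 2×2 checker texture are illustrative assumptions, not taken from any patent below):

```python
# Minimal sketch of a UV texture lookup: a (u, v) pair in [0, 1] x [0, 1]
# selects a texel from a 2D texture. Nearest-neighbour sampling is shown;
# the texture size and values are made-up illustrative data.

def sample_nearest(texture, u, v):
    """Map normalized (u, v) to a texel index and return its value."""
    h = len(texture)
    w = len(texture[0])
    # Clamp to [0, 1], then scale to integer texel coordinates.
    x = min(int(max(0.0, min(1.0, u)) * w), w - 1)
    y = min(int(max(0.0, min(1.0, v)) * h), h - 1)
    return texture[y][x]

# A 2x2 checker texture: 0 = black, 255 = white.
checker = [[0, 255],
           [255, 0]]

print(sample_nearest(checker, 0.25, 0.25))  # top-left texel -> 0
print(sample_nearest(checker, 0.75, 0.25))  # top-right texel -> 255
```

Real renderers add filtering (bilinear, mipmaps) and wrap modes, but the core idea is exactly this index translation.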

Fast rendering method for virtual scene and model

Active · CN107103638A · Material parameters can be modified · Improve efficiency · Image rendering · 3D-image rendering · UV mapping · 3D modeling
The invention discloses a fast rendering method and device for a virtual scene and model. The method comprises the steps of: first obtaining a rendering request for the virtual scene, the model to be rendered and a standard material library; creating a read-write file recording the correspondence among the scene parameters, the model and the materials according to the rendering request, and, after loading, selecting a material corresponding to the model to be rendered from the pre-established standard material library; setting and adjusting the scene parameters according to the rendering request, rendering the model with the selected material and adjusting the material parameters until the rendering satisfies the request; when a single model corresponds to several materials, applying those materials directly onto the surface of the three-dimensional model through UV mapping; and generating a highlight effect and its corresponding material from a hand-drawn trajectory according to the rendering request, then rendering the model's highlight effect. Besides greatly improving the efficiency of virtual scene and model rendering, the method offers openness, automation, tool integration, light weight and sustainability.
Owner:HANGZHOU VERYENGINE TECH CO LTD

Making method and apparatus of three-dimensional comics

The invention provides a method and apparatus for making three-dimensional comics. The method mainly comprises the following steps: receiving and storing a comic script and plan; designing a character draft according to the script; performing three-dimensional character modeling, UV unwrapping, UV texture drawing, skeleton binding of the character model, skinning of the character model, three-dimensional scene modeling and three-dimensional prop modeling according to the character design draft, and obtaining three-dimensional character, scene and prop models through texturing, binding and skinning; performing comic storyboard design and drawing on the script, the plan and the three-dimensional character, scene and prop models to obtain a storyboard draft; matching motion modeling to the storyboard using the three-dimensional character model, and producing a three-dimensional comic manuscript through rendering; making a comic sketch according to the manuscript; and performing outline drawing and coloring on the manuscript while adding lines, onomatopoeic words and effect lines. With the method and apparatus, three-dimensional comics can be produced in large quantity and at excellent quality.
Owner:刘芳圃 +1

Real-time dynamic generation method for feathers on a bird body model

Active · CN104537704A · Realistic distribution · Random · Animation · UV mapping
The invention discloses a real-time dynamic generation method for feathers on a bird body model. The specific steps are as follows: UV mapping is carried out on a polygon model of a bird body, a local coordinate system is set up at each vertex, and the direction vectors of the feather shafts in those local coordinate systems are set; a particle system is generated, with all particles constrained to the faces of the polygon model and the repulsive force between particles serving as the feather width; after the system evolves to a static state, the particle positions become the hair-follicle positions and the feather types are determined randomly; after the bird model is animated and deformed, the vertex local coordinate systems are updated, the feather-shaft directions at the current frame are calculated, a feather local coordinate system is set up with each follicle position as its origin, reference NURBS surface patches are set up according to the set width and length, the feathers are generated on the NURBS patches, and these steps are repeated for every frame of the animation. The method achieves non-penetrating coverage between feathers and generates dynamic feathers in real time.
Owner:北京春天影视科技有限公司
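The follicle-placement step described in the abstract above — particles constrained to the surface repelling each other until they settle — can be sketched in Python. A flat unit patch stands in for the body surface, and the repulsion radius standing in for feather width is an assumed value:

```python
# Hedged sketch of follicle placement by particle repulsion: particles start
# clustered, push each other apart within a repulsion radius (standing in
# for feather width), and their settled positions serve as follicle sites.
import random

def relax(points, radius=0.2, steps=50, lr=0.05):
    """Push particles apart within `radius`, clamping them to the unit patch."""
    pts = [list(p) for p in points]
    for _ in range(steps):
        forces = [[0.0, 0.0] for _ in pts]
        for i in range(len(pts)):
            for j in range(len(pts)):
                if i == j:
                    continue
                dx = pts[i][0] - pts[j][0]
                dy = pts[i][1] - pts[j][1]
                d = (dx * dx + dy * dy) ** 0.5
                if 1e-9 < d < radius:
                    s = (radius - d) / d   # stronger push when closer
                    forces[i][0] += dx * s
                    forces[i][1] += dy * s
        for p, f in zip(pts, forces):
            p[0] = min(1.0, max(0.0, p[0] + lr * f[0]))
            p[1] = min(1.0, max(0.0, p[1] + lr * f[1]))
    return pts

random.seed(0)
start = [[random.random() * 0.1, random.random() * 0.1] for _ in range(8)]
done = relax(start)
# After relaxation, the minimum pairwise distance has grown.
```

On a real mesh the particles would be constrained to the polygon faces rather than a flat patch, as the abstract describes.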

Real-time pupil beautifying method and device

Pending · CN109785259A · Increase transparency · Ideal color contact lenses · Image enhancement · Geometric image transformation · Face detection · Pupil
The invention discloses a real-time pupil beautifying method and device. The method comprises the steps of: building a first eyeball grid model and a first periocular grid model according to a preset standard face image; carrying out face detection on the pupil image to be beautified to obtain its eyeball key points and periocular key points; expanding the eyeball and periocular key point locations to obtain eyeball grid vertexes and periocular grid vertexes; mapping the first eyeball grid model and the first periocular grid model into the pupil image to be beautified to obtain a second eyeball grid model and a second periocular grid model; mapping the pre-built cosmetic-pupil material onto the second eyeball grid model with a UV mapping technique to obtain a first cosmetic pupil image; and covering the second periocular grid model over the first cosmetic pupil image using the OpenGL depth test to obtain a real-time beautified pupil image. With the technical scheme provided by the invention, the portrait can be beautified in real time more conveniently and efficiently without changing the shape of the eyeballs in the image.
Owner:CHENDU PINGUO TECH

Method and device for efficiently previewing CG assets

The invention provides a method and a device for efficiently previewing CG assets. The method comprises the steps of: packing the multi-quadrant UVs of UDIM space into the initial UV quadrant using an arrangement algorithm in UV editing software; acquiring the coordinates of any point in the initial UV quadrant within the UV mapping relation; from the coordinate correspondence of any point in the initial UV quadrant, acquiring the space point and edge in the UDIM multi-quadrant UVs that correspond to that point; determining each piece of channel-mapping information required by the material to which a face belongs according to the space points and edges of the UDIM multi-quadrant UVs, sampling by resolution to obtain the mapping information of the corresponding map in the initial UV quadrant, and determining the mapping information in the preview UV space; and acquiring the user's different preview requirements and displaying the UV correspondence and the mapping information accordingly. With the method and device, asset output for two downstream links can be provided without extra workload, the manual time of making UVs and maps twice is saved, and a color preview effect basically consistent with the rendering effect is obtained.
Owner:上海咔咖文化传播有限公司
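The quadrant arithmetic this abstract relies on is the standard UDIM convention: tiles are numbered 1001 + u + 10·v, and each tile's UVs can be reduced back into the initial [0, 1] quadrant. A minimal sketch (function names are illustrative):

```python
# Sketch of the UDIM tile convention: multi-quadrant UVs live in tiles
# numbered 1001, 1002, ... left-to-right then bottom-to-top, and every
# tile's UVs map back into the initial [0, 1] quadrant by dropping the
# integer part.

def udim_tile(u, v):
    """Return the UDIM tile number containing multi-quadrant UV (u, v)."""
    return 1001 + int(u) + 10 * int(v)

def to_local(u, v):
    """Map a multi-quadrant UV back into the initial [0, 1] quadrant."""
    return u - int(u), v - int(v)

print(udim_tile(0.5, 0.5))   # 1001
print(udim_tile(1.5, 0.5))   # 1002
print(udim_tile(0.5, 1.5))   # 1011
print(to_local(1.25, 2.75))  # (0.25, 0.75)
```

The arrangement algorithm in the abstract effectively inverts `to_local`, recording which tile each packed UV island came from so the correspondence can be queried per point.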

Coherent roaming implementation method for virtual tourism three-dimensional simulation scene

Pending · CN111208897A · Free and smooth roaming · Free and seamless roaming · Input/output for user-computer interaction · Data processing applications · Dimensional simulation · Roaming
The invention discloses a coherent roaming implementation method for a virtual tourism three-dimensional simulation scene, belonging to computer simulation technology. The method comprises the following steps: recording a panoramic video of the scene to be simulated at 20-60 frames per second; converting the video frames of the panoramic video into UV maps in a scene in a Unity 3D environment; establishing the 3D scene in the form of a UV-map animation; using a software-configured joystick controller to rotate the view at all angles by dragging, and controlling a predefined user avatar to advance, retreat, move left and right, and steer; playing the panoramic video in positive time order to match the simulated forward picture; matching the simulated backward picture by reverse playing; stretching the panoramic frame non-linearly to match left-right movement and steering; and, when the joystick is released, pausing the video on the last UV-map frame shown before release. Free, smooth and coherent roaming of the three-dimensional simulation scene is achieved, and the visual realism and interest are improved.
Owner:浙江开奇科技有限公司
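The joystick-to-playback rule described above — stick forward plays the panoramic frames in positive time order, stick back plays them in reverse, release pauses on the last frame shown — can be sketched as a frame-stepping function (the dead-zone value and frame count are assumptions):

```python
# Sketch of the playback rule: a joystick Y reading advances, rewinds, or
# holds the panoramic frame index. Dead zone and frame total are assumed
# illustrative values.

def next_frame(current, total, stick_y, dead_zone=0.1):
    """Advance, rewind, or hold the frame index from a joystick Y reading."""
    if stick_y > dead_zone:           # forward: play in positive time order
        return min(current + 1, total - 1)
    if stick_y < -dead_zone:          # backward: reverse playing
        return max(current - 1, 0)
    return current                    # released: pause on the last frame shown

frame = 10
frame = next_frame(frame, 100, 0.8)   # advance
frame = next_frame(frame, 100, -0.8)  # rewind
frame = next_frame(frame, 100, 0.0)   # hold
print(frame)  # -> 10
```

Clamping at both ends keeps the index inside the recorded panoramic sequence, matching the pause-on-last-frame behaviour in the abstract.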

Video image three-dimensional position extraction method

The invention discloses a video image three-dimensional position extraction method, which comprises the following steps: S1, establishing a three-dimensional scene of the real site with a 3D modeling tool; S2, setting a virtual camera at the corresponding position in the three-dimensional scene according to the position of the video capture equipment at the real site, and calibrating the angle between the capture equipment and the virtual camera; S3, constructing a UV mapping table for the three-dimensional scene through a shader; S4, according to the UV mapping table, extracting the three-dimensional position corresponding to each pixel of the video image shot by each virtual camera, so that the three-dimensional position can be found quickly from the pixel position of the video image. The invention solves the problem of automatically extracting the three-dimensional position from a video image.
Owner:成都智鑫易利科技有限公司
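Steps S3 and S4 amount to a per-pixel lookup table: each camera pixel stores the 3D point it sees, so extracting a position is a constant-time read. A toy sketch with an assumed flat-ground projection (a real implementation would fill the table in a shader pass, as the abstract says):

```python
# Hedged sketch of a "UV mapping table": precompute pixel -> 3D position
# once, then answer position queries by table lookup. The projection below
# (camera looking at a flat ground plane at z = 0) is an illustrative
# assumption, not the patent's calibration.

def build_mapping_table(width, height, pixel_to_world):
    """Precompute pixel -> 3D position for every pixel of a virtual camera."""
    return [[pixel_to_world(x, y) for x in range(width)] for y in range(height)]

table = build_mapping_table(4, 4, lambda x, y: (float(x), float(y), 0.0))

def lookup(table, x, y):
    """Constant-time extraction of the 3D position behind pixel (x, y)."""
    return table[y][x]

print(lookup(table, 2, 3))  # (2.0, 3.0, 0.0)
```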

Intelligent shoe mold mapping method

The invention discloses an intelligent shoe mold mapping method, which comprises the following steps: S1, collecting sampling coordinates of the shoe mold to be mapped, and determining the triangular face where each sampling coordinate lies and the centroid coordinates of that face; S2, within the triangular face, determining the region the face belongs to and its weight according to the centroid coordinates; S3, sampling the map according to the centroid-coordinate weights to complete the shoe mold mapping. The method provides a universal automatic scheme for generating model texture UVs, so that modelers no longer need to make texture UV data by hand, greatly reducing labor cost, time cost and the licensing cost of the corresponding production software.
Owner:成都中鱼互动科技有限公司
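The triangle-and-weight computation in steps S1-S3 is, at bottom, barycentric interpolation: a sample point's weights with respect to the triangle's vertices blend the per-vertex UVs. A sketch (function names and data are illustrative, not from the patent):

```python
# Sketch of per-triangle sampling: barycentric weights of a point inside a
# 2D triangle, used to blend the three vertices' UV coordinates.

def barycentric(p, a, b, c):
    """Barycentric weights (wa, wb, wc) of point p in triangle abc (2D)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    wa = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
    wb = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
    return wa, wb, 1.0 - wa - wb

def interpolate_uv(p, tri_pos, tri_uv):
    """Blend per-vertex UVs with the barycentric weights of p."""
    wa, wb, wc = barycentric(p, *tri_pos)
    return tuple(wa * ua + wb * ub + wc * uc
                 for ua, ub, uc in zip(*tri_uv))

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
uvs = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
centroid = (1.0 / 3.0, 1.0 / 3.0)
print(interpolate_uv(centroid, tri, uvs))  # roughly (1/3, 1/3)
```

At the centroid all three weights are equal, which is why centroid coordinates are a natural anchor for assigning a face's region and weight.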

Scene model surface texture superposition method and device based on UV mapping

The invention provides a scene model surface texture superposition method and device based on UV mapping. The method comprises the steps of: carrying out fully-unfolded UV processing on a model in the scene to obtain a second UV set for the model, the second UV set serving as the coordinates of the overlay texture map; generating a world-space coordinate map according to the coordinates of the overlay texture map; and generating an overlay texture mask map from the coordinate map and the parameters required for texture superposition, then drawing the overlay texture on the model in the scene. Because the second UV set is generated from the fully unfolded scene model, UV mapping in world coordinate space is realized, the map can cover the model surface in a fitting manner, and the rendering effect is improved.
Owner:FUJIAN SHUBO INFORMATION TECH CO LTD

3D asset inspection

Active · US11067388B2 · Facilitating subsequent ability · Using optical means · Electromagnetic wave reradiation · 3D sensor · Odometry
Systems and methods for physical asset inspection are provided. According to one embodiment, a probe is positioned at multiple data capture positions with reference to a physical asset. For each position: odometry data is obtained from an encoder and/or an IMU; a 2D image is captured by a camera; a 3D sensor data frame is captured by a 3D sensor having a view plane overlapping that of the camera; the odometry data, the 2D image and the 3D sensor data frame are linked and associated with a physical point in real-world space based on the odometry data; and switching between 2D and 3D views within the collected data is facilitated by forming a set of points containing both 2D and 3D data, by performing UV mapping based on a known positioning of the camera relative to the 3D sensor.
Owner:CHARLES MACHINE WORKS

UV mapping method based on rasterization rendering and cloud equipment

The invention belongs to the technical field of computers, and particularly relates to a UV mapping method based on rasterization rendering, and a cloud device. The method comprises the steps of: obtaining a regular grid; generating a plurality of UV points according to the type of the regular grid, each UV point comprising UV information and texture information; mapping the UV information of each UV point into position information on the actual grid according to the type of the regular grid; and rendering the actual grid according to the position information and texture information of the UV points. Compared with a traditional mapping method, the method reduces the amount of computation, makes full use of computer resources by accelerating the operations on a GPU, and allows the UV points to be modified and adjusted in real time. The invention further provides a cloud device for executing the UV mapping method based on rasterization rendering, achieving the effect of rendering the actual grid with the texture information of the regular grid.
Owner:广州引力波信息科技有限公司
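The "regular grid → UV points" step is cheap precisely because on a regular grid the UVs follow directly from vertex indices, with no stored UV layout needed. A sketch of that generation step (grid sizes are illustrative):

```python
# Sketch of generating UV points for a regular grid: normalized (u, v)
# coordinates fall straight out of the vertex indices, row by row.

def grid_uvs(cols, rows):
    """Generate normalized (u, v) for each vertex of a cols x rows grid."""
    return [(x / (cols - 1), y / (rows - 1))
            for y in range(rows) for x in range(cols)]

uvs = grid_uvs(3, 3)
print(uvs[0])   # (0.0, 0.0)  first corner
print(uvs[4])   # (0.5, 0.5)  center vertex
print(uvs[-1])  # (1.0, 1.0)  opposite corner
```

Mapping these UV points onto an "actual grid" is then a per-point position transform, which is what makes the scheme easy to run on a GPU.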
