137 results about "Character animation" patented technology

Character animation is a specialized area of the animation process, which involves bringing animated characters to life. The role of a Character Animator is analogous to that of a film or stage actor, and character animators are often said to be "actors with a pencil" (or a mouse). Character animators breathe life in their characters, creating the illusion of thought, emotion and personality. Character animation is often distinguished from creature animation, which involves bringing photo-realistic animals and creatures to life.

Face feature analysis for automatic lipreading and character animation

A face feature analysis begins by generating multiple face feature candidates, e.g., eye and nose positions, using an isolated-frame face analysis. Then, a nostril tracking window is defined around a nose candidate and tests are applied to the pixels therein, based on the percentages of skin-color-area pixels and nostril-area pixels, to determine whether the nose candidate represents an actual nose. Once actual nostrils are identified, the size, separation and contiguity of the nostrils are determined by projecting the nostril pixels within the nostril tracking window. A mouth window is defined around the mouth region, and mouth detail analysis is then applied to the pixels within the mouth window to identify inner-mouth and teeth pixels and therefrom generate an inner mouth contour. The nostril position and inner mouth contour are used to generate a synthetic model head. A direct comparison is made between the generated inner mouth contour and that of the synthetic model head, and the synthetic model head is adjusted accordingly. Vector quantization algorithms may be used to develop a codebook of face model parameters to improve processing efficiency. The face feature analysis is robust to noise, illumination variations, head tilt, scale variations and nostril shape.
Owner:ALCATEL-LUCENT USA INC
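The nostril-candidate test described above can be sketched as a simple pixel-percentage check. This is a minimal illustration, not the patented method: the grayscale thresholds standing in for real skin-color classification, and all cutoff values, are assumptions.

```python
def is_nose_candidate(window, dark_thresh=60, skin_thresh=120,
                      min_skin=0.5, min_dark=0.02, max_dark=0.25):
    """window: 2D list of grayscale pixel values in the nostril tracking window.
    Accept the nose candidate only if the fractions of skin-colored and
    dark (nostril) pixels both fall in plausible ranges."""
    pixels = [p for row in window for p in row]
    n = len(pixels)
    dark = sum(1 for p in pixels if p < dark_thresh)    # nostril-area pixels
    skin = sum(1 for p in pixels if p >= skin_thresh)   # skin-color-area pixels
    dark_frac, skin_frac = dark / n, skin / n
    return skin_frac >= min_skin and min_dark <= dark_frac <= max_dark
```

A window that is mostly skin-colored with a small dark region passes; a window with no dark pixels is rejected.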

Real-time automatic concatenation of 3D animation sequences

Systems and methods for generating and concatenating 3D character animations are described including systems in which recommendations are made by the animation system concerning motions that smoothly transition when concatenated. One embodiment includes a server system connected to a communication network and configured to communicate with a user device that is also connected to the communication network. In addition, the server system is configured to generate a user interface that is accessible via the communication network, the server system is configured to receive high level descriptions of desired sequences of motion via the user interface, the server system is configured to generate synthetic motion data based on the high level descriptions and to concatenate the synthetic motion data, the server system is configured to stream the concatenated synthetic motion data to a rendering engine on the user device, and the user device is configured to render a 3D character animated using the streamed synthetic motion data.
Owner:ADOBE INC
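The concatenation step can be illustrated with a simple crossfade over an overlap window, a common way to make two motion clips transition smoothly. This is an illustrative sketch under assumed data layout (each frame is a list of joint values), not Adobe's implementation.

```python
def concatenate_motions(clip_a, clip_b, blend_frames=2):
    """Crossfade the last `blend_frames` frames of clip_a into the first
    `blend_frames` frames of clip_b. Assumes both clips are longer than
    the blend window."""
    out = clip_a[:-blend_frames]
    for i in range(blend_frames):
        t = (i + 1) / (blend_frames + 1)   # blend weight ramps toward clip_b
        a = clip_a[len(clip_a) - blend_frames + i]
        b = clip_b[i]
        out.append([(1 - t) * x + t * y for x, y in zip(a, b)])
    out.extend(clip_b[blend_frames:])
    return out
```

The blended frames interpolate joint values, so the concatenated sequence has no sudden jump at the seam.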

Character animation framework

An extensible character animation framework is provided that enables video game design teams to develop reusable animation controllers that are customizable for specific applications. According to embodiments, the animation framework enables animators to construct complex animations by creating hierarchies of animation controllers, and the complex animation is created by blending the animation outputs of each of the animation controllers in the hierarchy. The extensible animation framework also provides animators with the ability to customize various attributes of a character being animated and to view the changes to the animation in real time, providing immediate feedback without requiring the animators to manually rebuild the animation data each time they make a change.
Owner:ELECTRONICS ARTS INC
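The hierarchy-of-controllers idea can be sketched as a tree in which each node blends its children's poses by weight. The class and data layout below are illustrative assumptions, not the framework's actual API.

```python
class AnimController:
    """Each controller either outputs its own pose (a dict of joint -> value)
    or blends its children's outputs by normalized weight."""
    def __init__(self, pose=None, children=None):
        self.pose = pose or {}
        self.children = children or []   # list of (weight, AnimController)

    def evaluate(self):
        if not self.children:
            return dict(self.pose)
        blended = {}
        total = sum(w for w, _ in self.children)
        for w, child in self.children:
            for joint, value in child.evaluate().items():
                blended[joint] = blended.get(joint, 0.0) + (w / total) * value
        return blended
```

Because blending happens recursively, an animator can nest controllers arbitrarily and adjust weights without rebuilding the underlying animation data.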

Virtual simulation system and method for a net-based antagonistic event using three-dimensional multi-view display

The invention discloses a virtual simulation system and method for a net-based antagonistic event (such as tennis) using three-dimensional multi-view display. The system comprises a network connection module, a game logic module, an interaction control module, a physics engine module, a three-dimensional rendering module and a dual-view projection display module. The network connection module comprises a server sub-module and a client sub-module and is used for network communication and data transmission; the game logic module stores the game rules, controls the playback of character animation and performs position mapping; the interaction control module controls the corresponding game characters in the virtual tennis scene and renders three-dimensional images from different viewpoints; the physics engine module efficiently and vividly simulates the physical effects of the tennis ball, such as rebound and collision, making the game scene more realistic. The system can perform three-dimensional multi-view display on a single screen and render the same game scene in real time from different viewing angles.
Owner:SHANDONG UNIV
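The rebound effect handled by the physics engine module can be sketched as a one-step integrator with a coefficient of restitution at the court plane. All parameter values and the 2D simplification are assumptions for illustration.

```python
def step_ball(pos, vel, dt=0.01, g=-9.8, restitution=0.8):
    """Advance a tennis ball one time step in 2D (x, y); bounce off the
    ground plane y = 0, damping the vertical velocity on impact."""
    x, y = pos
    vx, vy = vel
    vy += g * dt
    x += vx * dt
    y += vy * dt
    if y < 0.0:                        # hit the court: reflect and damp
        y = -y * restitution
        vy = -vy * restitution
    return (x, y), (vx, vy)
```

Repeated calls produce a ball trajectory whose bounces lose energy each time, which is the "rebound and collision" behavior the abstract describes.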

Method, system and storage device for creating, manipulating and transforming animation

An animation method, system, and storage device which takes animators' submissions of characters and animations and breaks the animations into segments where discontinuities will be minimized; allows users to assemble the segments into new animations; allows users to apply modifiers to the characters; provides a semantic constraint system for virtual objects; and provides automatic character animation retargeting.
Owner:ANIMATE ME
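Choosing segment boundaries where discontinuities are minimized can be illustrated by picking the frame index with the smallest frame-to-frame change. This heuristic and the data layout are assumptions, not the patented method.

```python
def best_split(frames):
    """Return the frame index at which cutting the animation minimizes the
    joint-value discontinuity (sum of squared deltas to the previous frame).
    frames: list of per-frame joint-value lists."""
    def delta(i):
        return sum((a - b) ** 2 for a, b in zip(frames[i], frames[i - 1]))
    return min(range(1, len(frames)), key=delta)
```

Segments cut at such near-stationary frames can later be reassembled into new animations with minimal visible popping.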

Character animation system and method

Inactive · US20090135189A1 · Classifications: Animation; Key frame
A character animation system includes a data generating unit for generating a character skin mesh and an internal reference mesh, a character bone value, and a character solid-body value, a skin distortion representing unit for representing skin distortion using the generated character skin mesh and the internal reference mesh when an external shock is applied to a character, and a solid-body simulation engine for applying the generated character bone value and the character solid-body value to a real-time physical simulation library and representing character solid-body simulation. The system further includes a skin distortion and solid-body simulation processing unit for processing to return to a key frame to be newly applied after the skin distortion and the solid-body simulation are represented.
Owner:ELECTRONICS & TELECOMM RES INST
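The skin-distortion idea, pushing skin vertices toward an internal reference mesh when a shock is applied, can be sketched with a distance-based falloff. The 2D vertices, linear falloff, and parameter values are assumptions for illustration, not the patented simulation.

```python
import math

def distort_skin(skin, reference, impact_point, strength=0.5, radius=1.0):
    """Move each skin vertex toward its internal-reference counterpart,
    weighted by proximity to the impact point. Vertices are (x, y) tuples."""
    out = []
    for s, r in zip(skin, reference):
        d = math.dist(s, impact_point)
        w = max(0.0, 1.0 - d / radius) * strength   # linear falloff with distance
        out.append(tuple(sv + w * (rv - sv) for sv, rv in zip(s, r)))
    return out
```

Vertices near the impact are displaced toward the reference mesh; distant vertices are untouched.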

Method of representing and animating two-dimensional humanoid character in three-dimensional space

There is provided a method of representing and animating a 2D (two-dimensional) character in a 3D (three-dimensional) space for character animation. The method includes performing a pre-processing operation, in which the data required to represent and animate the 2D character as if it were a 3D character is prepared and stored, and then producing the character animation using the stored data.
Owner:ELECTRONICS & TELECOMM RES INST
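One common way to place a 2D character in a 3D space is a billboard: a textured quad rotated to face the camera. The function below is an illustrative sketch of that general technique (rotation about the vertical axis only); it is not taken from the patent.

```python
import math

def billboard_quad(center, width, height, camera_pos):
    """Return the four 3D corners of a 2D character quad positioned at
    `center`, rotated in the XZ plane to face `camera_pos`."""
    cx, cy, cz = center
    dx, dz = camera_pos[0] - cx, camera_pos[2] - cz
    yaw = math.atan2(dx, dz)                  # face the camera in the XZ plane
    rx, rz = math.cos(yaw), math.sin(yaw)     # quad's local x-axis in world space
    hw = width / 2.0
    return [
        (cx - hw * rx, cy,          cz + hw * rz),   # bottom-left
        (cx + hw * rx, cy,          cz - hw * rz),   # bottom-right
        (cx + hw * rx, cy + height, cz - hw * rz),   # top-right
        (cx - hw * rx, cy + height, cz + hw * rz),   # top-left
    ]
```

Re-evaluating the quad every frame keeps the flat character oriented toward the viewer as the camera moves.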

Method and device for generating two-dimensional images

The invention discloses a method and a device for generating two-dimensional images. The method includes: generating a map according to the geometry of a three-dimensional model and a corresponding texture map, wherein the map stores the correspondence between the texture map and the geometry of the two-dimensional image projected from the three-dimensional model; and rendering through this map with an input texture map to generate the two-dimensional image. When the map is generated, the three-dimensional model is projected through a pinhole camera model, the texture-map coordinates are obtained directly during projection, the intensity of the corresponding texture points is processed with an illumination model, and their transparency is handled by a two-step projection learning strategy. The correspondence between the geometry of the projected two-dimensional image and the texture map is well maintained, different final images can be rendered simply by supplying different texture maps, flexibility is greatly improved, and network bandwidth is saved because only the texture map is transmitted, rather than the whole character animation sequence as in the prior art.
Owner:BEIJING PEOPLE HAPPY INFORMATION TECH
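The pinhole camera model mentioned above is the standard perspective projection: a 3D point maps to image coordinates by dividing by depth. A minimal sketch, with the focal length as an assumed parameter:

```python
def project_pinhole(point, focal=1.0):
    """Project a 3D point to 2D image coordinates with an ideal pinhole
    camera at the origin looking down +z: (x, y, z) -> (f*x/z, f*y/z)."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera")
    return (focal * x / z, focal * y / z)
```

Recording which texture coordinate lands at each projected pixel is what builds the correspondence map the abstract describes.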

Markerless performance capture system based on UE (Unreal Engine)

Inactive · CN108564642A · Classifications: Animation; Acquiring/recognising facial features; Graphics; Human body
The invention relates to the field of image processing and provides a markerless, UE (Unreal Engine)-based performance capture system, aiming to solve the problem that, in methods that simultaneously capture the actions and expressions of performers to generate character animations, marker points feel intrusive to the performers and interfere with the performance. The system includes: a facial-performance capture module, configured to collect facial image data of a performer and calculate weight parameters of the performer's facial expression from the facial image data; an action-performance capture module, configured to collect skeletal image data of the performer and determine human body posture parameters from the skeletal image data; and an animation generation module, configured to use a UE graphics program to generate the actions and expressions of a 3D character model from the facial expression weight parameters and the human body posture parameters. The system captures the performer's actions and expressions and gives a virtual character realistic, reasonable actions and vivid expressions based on the captured data.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI
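Driving a 3D face with captured expression weight parameters is conventionally done with blendshapes: the final mesh is the neutral face plus a weighted sum of expression offsets. The function below sketches that standard formula; the flat coordinate-list layout is an assumption.

```python
def apply_blendshapes(neutral, shapes, weights):
    """vertex = neutral + sum_i w_i * (shape_i - neutral).
    neutral and each shape are flat lists of vertex coordinates;
    weights are the captured per-expression parameters."""
    out = list(neutral)
    for shape, w in zip(shapes, weights):
        for i, (s, n) in enumerate(zip(shape, neutral)):
            out[i] += w * (s - n)
    return out
```

Feeding the per-frame weight parameters from the facial-capture module into such a sum is what animates the character's expression.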

Computer-assisted character animation drawing method based on an illumination sphere (lit-sphere) model

Inactive · CN101477705A · Classifications: Animation; 3D-image rendering; Computer-aided
The invention discloses a method for rendering character animation in an artistic manner with the assistance of a computer, based on an illumination spherical (lit-sphere) model. An illumination spherical model that extracts and encodes an artistic style from a work of art is applied to the rendering of a character model, so that artistic-style coloring is achieved conveniently and rapidly and character animation can be produced with non-photorealistic rendering. By recording illumination information on a sphere, the invention ensures globally consistent illumination when rendering a complex object of uniform material. Because the illumination spherical model is extracted once and then reapplied, artistic-style character rendering requires only simple interactive operations, which improves rendering efficiency. The invention addresses the problems that existing computer-assisted animation has low rendering efficiency and that traditional image techniques cannot flexibly reflect the expressiveness of an artist's work.
Owner:ZHEJIANG UNIV
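The core of lit-sphere shading is that a surface point's view-space normal indexes into an image of a pre-shaded sphere: the artist paints one sphere and every normal looks up its color there. A minimal sketch, with the image represented as a 2D grid and nearest-neighbor lookup as simplifying assumptions:

```python
import math

def lit_sphere_shade(normal, sphere_image):
    """Look up the color for a surface normal in a lit-sphere image.
    The normal's x and y components (in view space) map to image
    columns and rows; sphere_image is a 2D grid of color values."""
    nx, ny, nz = normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny = nx / length, ny / length
    h = len(sphere_image)
    w = len(sphere_image[0])
    u = int((nx * 0.5 + 0.5) * (w - 1))   # map [-1, 1] -> image column
    v = int((ny * 0.5 + 0.5) * (h - 1))   # map [-1, 1] -> image row
    return sphere_image[v][u]
```

Because every normal on every object samples the same painted sphere, shading stays globally consistent across a complex model of uniform material.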

Method and apparatus for rendering efficient real-time wrinkled skin in character animation

Provided are an apparatus and method for rendering wrinkled skin in real time during character animation, with optimized speed and realistic expression. The wrinkled skin at each expression is rendered using a normal map and a bump map. Generalized wrinkled-skin data and weight data are generated by computing the difference between the normal and bump maps at each expression and those of the expressionless face. The wrinkled-skin data of a desired character is then generated from the generalized wrinkled-skin data at each expression, and the normal and bump maps expressing the final wrinkled skin are calculated using the per-expression weights at the current animation time t, so that the wrinkled skin is displayed during animation.
Owner:ELECTRONICS & TELECOMM RES INST
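The weighted-difference scheme above amounts to: final map = neutral map + sum of per-expression deltas scaled by the current animation weights. A simplified single-channel sketch (real normal/bump maps are multi-channel images; the flat-list layout is an assumption):

```python
def blend_wrinkle_normals(base, deltas, weights):
    """base: the neutral (expressionless) map as a flat list of values.
    deltas: per-expression difference maps (expression map minus base).
    weights: the per-expression weights at the current animation time."""
    out = list(base)
    for delta, w in zip(deltas, weights):
        for i, d in enumerate(delta):
            out[i] += w * d
    return out
```

Evaluating this each frame with time-varying weights makes wrinkles fade in and out as the character's expression changes.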