127 results for "Animation system" patented technology

The Computer Animation Production System (CAPS) was a digital ink and paint system used in animated feature films, the first of its kind at a major studio. It was designed to replace the expensive process of transferring animated drawings to cels using India ink or xerographic technology and painting the reverse sides of the cels with gouache paint.

Online modeling for real-time facial animation

Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
Owner:APPLE INC
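
The loop sketched below illustrates the two steps the abstract names: estimate tracking parameters from incoming tracking data, then refine the expression model using those parameters. The linear blendshape representation, the gradient-descent fit, and the model update rule are illustrative assumptions chosen for this sketch, not the patented method.

# Minimal sketch of the tracking loop described above: estimate tracking
# parameters (expression weights) from incoming data, then refine the
# expression model. The blendshape representation and update rules are
# illustrative assumptions, not the patented method.

def blend(neutral, deltas, weights):
    """Linear blendshape model: neutral vertices plus weighted deltas."""
    return [n + sum(w * d[i] for w, d in zip(weights, deltas))
            for i, n in enumerate(neutral)]

def estimate_weights(neutral, deltas, observed, step=0.1, iters=100):
    """Fit expression weights to observed vertex positions by projected
    gradient descent on the squared reconstruction error."""
    weights = [0.0] * len(deltas)
    for _ in range(iters):
        residual = [o - b for o, b in zip(observed, blend(neutral, deltas, weights))]
        for k, d in enumerate(deltas):
            grad = sum(r * di for r, di in zip(residual, d))
            weights[k] = min(1.0, max(0.0, weights[k] + step * grad))
    return weights

def refine_model(neutral, deltas, weights, observed, rate=0.05):
    """Adapt the neutral shape toward the part of the observation the
    current weights cannot explain, so the model fits this user better."""
    blended = blend(neutral, deltas, weights)
    return [n + rate * (o - b) for n, o, b in zip(neutral, observed, blended)]

# Toy 1-D "face" with three vertices and two expression deltas.
neutral = [0.0, 0.0, 0.0]
deltas = [[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]]
for observed in ([0.4, 0.1, 0.1], [0.5, 0.8, 0.8]):   # simulated tracking frames
    w = estimate_weights(neutral, deltas, observed)
    neutral = refine_model(neutral, deltas, w, observed)
    print("weights:", [round(x, 2) for x in w])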

System and a Method for Motion Tracking Using a Calibration Unit

The invention relates to a motion tracking system (10) for tracking a movement of an object (P) in a three-dimensional space, the object being composed of object portions having individual dimensions and mutual proportions and being sequentially interconnected by joints. The system comprises orientation measurement units (S1, S3, . . . SN) for measuring data related to at least the orientation of the object portions, wherein the orientation measurement units are arranged in positional and orientational relationships with respective object portions and have at least orientational parameters; a processor (3, 5) for receiving data from the orientation measurement units, the processor comprising a module for deriving orientation and/or position information of the object portions using the received data; and a calibration unit (7) arranged to calculate calibration values for determining at least the mutual proportions of the object portions and the orientational parameters of the orientation measurement units, based on the received data, pre-determined constraints and additional input data. The invention further relates to a method for tracking a movement of an object, a medical rehabilitation system and an animation system.
Owner:XSENS HLDG BV
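
The toy 2-D sketch below shows the flavor of such a calibration: segment orientations come from the measurement units, and the unknown segment lengths (the "mutual proportions") are recovered using additional input data, here a known touch point used as the calibration constraint. The constraint choice and the closed-form solve are illustrative assumptions, not the Xsens method.

# Toy 2-D calibration sketch: recover the unknown lengths of a two-segment
# chain from the orientations reported by the measurement units plus a
# known touch point (the calibration constraint / additional input data).
import math

def calibrate_lengths(angle1, angle2, touch_point):
    """Solve l1, l2 from
        l1*cos(a1) + l2*cos(a2) = x
        l1*sin(a1) + l2*sin(a2) = y
    using Cramer's rule (valid whenever the segments are not parallel)."""
    x, y = touch_point
    a, b = math.cos(angle1), math.cos(angle2)
    c, d = math.sin(angle1), math.sin(angle2)
    det = a * d - b * c
    l1 = (x * d - b * y) / det
    l2 = (a * y - x * c) / det
    return l1, l2

# Simulated calibration pose: upper segment at 30 degrees, lower segment at
# -20 degrees, fingertip touching a marker at a known position (computed
# here from ground-truth lengths purely to generate the demo input).
a1, a2 = math.radians(30), math.radians(-20)
true_l1, true_l2 = 0.6, 0.4
touch = (true_l1 * math.cos(a1) + true_l2 * math.cos(a2),
         true_l1 * math.sin(a1) + true_l2 * math.sin(a2))
print("recovered lengths:", calibrate_lengths(a1, a2, touch))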

System and method for tracking facial muscle and eye motion for computer graphics animation

A motion tracking system enables faithful capture of subtle facial and eye motion using a surface electromyography (EMG) detection method to detect muscle movements and an electrooculogram (EOG) detection method to detect eye movements. Signals corresponding to the detected muscle and eye movements are used to control an animated character to exhibit the same movements performed by a performer. An embodiment of the motion tracking animation system comprises a plurality of pairs of EMG electrodes adapted to be affixed to a skin surface of a performer at plural locations corresponding to respective muscles, and a processor operatively coupled to the plurality of pairs of EMG electrodes. The processor includes programming instructions to perform the functions of acquiring EMG data from the plurality of pairs of EMG electrodes. The EMG data comprises electrical signals corresponding to muscle movements of the performer during a performance. The programming instructions further include processing the EMG data to provide a digital model of the muscle movements, and mapping the digital model onto an animated character. In another embodiment of the invention, a plurality of pairs of EOG electrodes are adapted to be affixed to the skin surface of the performer at locations adjacent to the performer's eyes. The processor is operatively coupled to the plurality of pairs of EOG electrodes and further includes programming instructions to perform the functions of acquiring EOG data from the plurality of pairs of EOG electrodes. The EOG data comprises electrical signals corresponding to eye movements of the performer during a performance. The programming instructions further provide processing of the EOG data and mapping of the processed EOG data onto the animated character. As a result, the animated character will exhibit the same muscle and eye movements as the performer.
Owner:SONY CORP +1
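
A minimal sketch of the EMG-to-animation mapping follows: each raw EMG channel is rectified and smoothed into an activation envelope, and each channel is mapped onto a named animation control of the character. The envelope method (rectify plus moving average), the channel names, and the gain table are illustrative assumptions, not Sony's pipeline; EOG handling would follow the same shape.

# Minimal sketch: turn raw EMG channels into activation envelopes and map
# them onto animation controls so the character mirrors the performer.

def envelope(samples, window=5):
    """Rectified moving-average envelope of one EMG channel."""
    rectified = [abs(s) for s in samples]
    out = []
    for i in range(len(rectified)):
        lo = max(0, i - window + 1)
        out.append(sum(rectified[lo:i + 1]) / (i + 1 - lo))
    return out

def map_to_controls(channel_envelopes, gains):
    """Map per-muscle activation envelopes to named animation controls."""
    frames = []
    n = len(next(iter(channel_envelopes.values())))
    for t in range(n):
        frames.append({ctrl: min(1.0, gains[ctrl] * env[t])
                       for ctrl, env in channel_envelopes.items()})
    return frames

# Toy EMG data: two electrode pairs over two facial muscles.
emg = {
    "brow_raise": [0.0, 0.1, -0.4, 0.5, -0.6, 0.6, 0.2, -0.1],
    "jaw_open":   [0.0, 0.0, 0.1, -0.1, 0.3, -0.5, 0.7, -0.8],
}
envelopes = {ctrl: envelope(sig) for ctrl, sig in emg.items()}
gains = {"brow_raise": 2.0, "jaw_open": 1.5}
for frame in map_to_controls(envelopes, gains):
    print(frame)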

Document animation system

Inactive · US20060197764A1 · Benefits: expand their vocabulary, enhance vocabulary quickly · Technologies: animation, speech synthesis, paper document
An animating system converts a text-based document into a sequence of animated pictures to help a user understand the document better and faster. First, the system provides interfaces for a user to build various object models, specify default rules for these object models, and construct references for meanings and actions. Second, the system analyzes the document, extracts the desired information, identifies various objects, and organizes the information. Then the system creates objects from the corresponding object models and provides interfaces to modify default values and default rules and to define specific values and specific rules. Further, the system identifies the meanings of words and phrases. Furthermore, the system identifies, interpolates, synchronizes, and dispatches events. Finally, the system provides an interface for the user to track events and particular objects.
Owner:YANG GEORGE L
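
The sketch below illustrates the pipeline shape the abstract describes: identify objects and actions in the text, instantiate objects from registered object models with default values, and build a timed event list to dispatch. The tiny keyword matcher, the model registry, and the event format are illustrative assumptions, not the patented analysis.

# Minimal sketch: analyze a document, create objects from object models
# with default values, and produce a timed event list for animation.

OBJECT_MODELS = {
    "dog": {"color": "brown", "speed": 1.0},
    "ball": {"color": "red", "speed": 0.0},
}
ACTIONS = {"runs": "run", "jumps": "jump", "rolls": "roll"}

def analyze(sentence):
    """Extract (object, action) events from one sentence of the document."""
    words = sentence.lower().strip(".").split()
    objects = [w for w in words if w in OBJECT_MODELS]
    actions = [ACTIONS[w] for w in words if w in ACTIONS]
    return [(obj, act) for obj in objects for act in actions]

def build_scene(document):
    """Create object instances from their models and a timed event list."""
    instances, events, time = {}, [], 0.0
    for sentence in document.split("."):
        for obj, act in analyze(sentence):
            if obj not in instances:
                instances[obj] = dict(OBJECT_MODELS[obj])  # default values
            events.append({"time": time, "object": obj, "action": act})
            time += 1.0
    return instances, events

doc = "The dog runs. The ball rolls. The dog jumps."
instances, events = build_scene(doc)
print(instances)
for e in events:
    print(e)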

Dimensioned modeling system

The disclosed inventions and embodiments relate to the propagation of information among workflow applications used by a design project, such as a construction design project, and to the creation and use of dimensioned and animated models for such projects. The workflow applications may be extended to enable participation in the information sharing, or the system may provide functionality external to the tools that facilitates their participation. Information from various sources, including the workflow applications and third-party sources, can be represented and modeled in an animation system. Sometimes the propagation is enabled in part by a store of information that also enables reuse and reporting. Information used and generated by the various workflow applications is kept consistent among the different workflow applications and among the different representations of that information.
Owner:SCENARIO DESIGN
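
One way to picture the consistency mechanism the abstract mentions is a shared store through which every workflow application writes its changes and from which every other application refreshes its own representation. The publish/subscribe shape and the field names below are illustrative assumptions, not the disclosed architecture.

# Minimal sketch: a shared store keeps project data consistent among
# several workflow applications by notifying the others when one changes it.

class SharedStore:
    def __init__(self):
        self.values = {}        # canonical project data, e.g. dimensions
        self.subscribers = {}   # field -> list of callbacks

    def subscribe(self, field, callback):
        self.subscribers.setdefault(field, []).append(callback)

    def update(self, field, value, source):
        """Record a change from one workflow application and propagate it."""
        self.values[field] = value
        for cb in self.subscribers.get(field, []):
            cb(field, value, source)

def make_app(name):
    def on_change(field, value, source):
        if source != name:
            print(f"{name}: refreshed {field} = {value} (changed in {source})")
    return on_change

store = SharedStore()
for field in ("wall_height", "wall_length"):
    store.subscribe(field, make_app("CAD"))
    store.subscribe(field, make_app("CostEstimator"))
    store.subscribe(field, make_app("AnimationSystem"))
store.update("wall_height", 3.2, source="CAD")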

Real-time automatic concatenation of 3D animation sequences

Systems and methods for generating and concatenating 3D character animations are described including systems in which recommendations are made by the animation system concerning motions that smoothly transition when concatenated. One embodiment includes a server system connected to a communication network and configured to communicate with a user device that is also connected to the communication network. In addition, the server system is configured to generate a user interface that is accessible via the communication network, the server system is configured to receive high level descriptions of desired sequences of motion via the user interface, the server system is configured to generate synthetic motion data based on the high level descriptions and to concatenate the synthetic motion data, the server system is configured to stream the concatenated synthetic motion data to a rendering engine on the user device, and the user device is configured to render a 3D character animated using the streamed synthetic motion data.
Owner:ADOBE INC
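
The sketch below shows one plausible form of the recommendation the abstract mentions: score each candidate clip by how close its first pose is to the current clip's last pose, recommend the smoothest, and cross-fade across the seam when concatenating. The pose distance and the linear blend are illustrative assumptions, not Adobe's method.

# Minimal sketch: recommend motions that transition smoothly, then
# concatenate them with a short blend across the seam.

def pose_distance(pose_a, pose_b):
    """Euclidean distance between two poses (flat lists of joint values)."""
    return sum((a - b) ** 2 for a, b in zip(pose_a, pose_b)) ** 0.5

def recommend(current_clip, candidates):
    """Rank candidate (name, clip) pairs by how smoothly they follow on."""
    last = current_clip[-1]
    return sorted(candidates, key=lambda item: pose_distance(last, item[1][0]))

def concatenate(clip_a, clip_b, blend_frames=2):
    """Append clip_b to clip_a, cross-fading its first few frames."""
    out = list(clip_a)
    for i, pose in enumerate(clip_b):
        if i < blend_frames:
            t = (i + 1) / (blend_frames + 1)
            pose = [(1 - t) * a + t * b for a, b in zip(clip_a[-1], pose)]
        out.append(pose)
    return out

walk = [[0.0, 0.1], [0.1, 0.2], [0.2, 0.3]]
candidates = [("jump", [[0.9, 0.9], [1.0, 1.0]]),
              ("jog",  [[0.25, 0.35], [0.3, 0.4]])]
ranked = recommend(walk, candidates)
print("recommended order:", [name for name, _ in ranked])
print("concatenated:", concatenate(walk, ranked[0][1]))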

Interactive design, synthesis and delivery of 3D character motion data through the web

Systems and methods are described for animating 3D characters using synthetic motion data generated by generative models in response to a high level description of a desired sequence of motion provided by an animator. In a number of embodiments, an animation system is accessible via a server system that utilizes the ability of generative models to generate synthetic motion data across a continuum to enable multiple animators to effectively reuse the same set of previously recorded motion capture data to produce a wide variety of desired animation sequences. In several embodiments, an animator can upload a custom model of a 3D character and the synthetic motion data generated by the generative model is retargeted to animate the custom 3D character. One embodiment of the invention includes a server system configured to communicate with a database containing motion data including repeated sequences of motion, where the differences between the repeated sequences of motion are described using at least one high level characteristic. In addition, the server system is connected to a communication network, the server system is configured to train a generative model using the motion data, the server system is configured to generate a user interface that is accessible via the communication network, the server system is configured to receive a high level description of a desired sequence of motion via the user interface, the server system is configured to use the generative model to generate synthetic motion data based on the high level description of the desired sequence of motion, and wherein the server system is configured to transmit a stream via the communication network including information that can be used to display a 3D character animated using the synthetic motion data.
Owner:ADOBE INC
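
The sketch below illustrates the training-data idea in the abstract: the same motion is recorded several times, and the differences between the recordings are described by one high-level characteristic (here, "speed"); a stand-in "generative model" then synthesizes motion for an unseen characteristic value by interpolating between the nearest recordings. This interpolation is an illustrative placeholder for the generative model, not Adobe's actual model.

# Minimal sketch: synthesize motion for a requested high-level
# characteristic by interpolating between repeated recorded sequences.

def train(examples):
    """examples: list of (characteristic_value, motion) pairs, where a
    motion is a list of poses (lists of joint values)."""
    return sorted(examples)          # the stand-in "model" is the sorted data

def synthesize(model, requested):
    """Generate synthetic motion for the requested characteristic value."""
    lower = max((e for e in model if e[0] <= requested), default=model[0])
    upper = min((e for e in model if e[0] >= requested), default=model[-1])
    if lower[0] == upper[0]:
        return [list(p) for p in lower[1]]
    t = (requested - lower[0]) / (upper[0] - lower[0])
    return [[(1 - t) * a + t * b for a, b in zip(pa, pb)]
            for pa, pb in zip(lower[1], upper[1])]

walks = [
    (1.0, [[0.0, 0.1], [0.1, 0.2], [0.2, 0.3]]),   # slow walk
    (3.0, [[0.0, 0.3], [0.3, 0.6], [0.6, 0.9]]),   # fast walk
]
model = train(walks)
print(synthesize(model, requested=2.0))   # motion for an unseen speed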

Integration system supporting dimensioned modeling system

The disclosed inventions and embodiments relate to the propagation of information among workflow applications used by a design project, such as a construction design project, and to the creation and use of dimensioned and animated models for such projects. The workflow applications may be extended to enable participation in the information sharing, or the system may provide functionality external to the tools that facilitates their participation. Information from various sources, including the workflow applications and third-party sources, can be represented and modeled in an animation system. Sometimes the propagation is enabled in part by a store of information that also enables reuse and reporting. Information used and generated by the various workflow applications is kept consistent among the different workflow applications and among the different representations of that information.
Owner:SCENARIO DESIGN

Electronic image identification and animation system

An electronic system that includes a working surface and a camera that can capture a plurality of images on the working surface. The system also includes a control station that is coupled to the camera and has a monitor that can display the captured images. The monitor displays a moving graphical image having a characteristic that is a function of a user input on the working surface. By way of example, the graphical image may be a character created from markings formed on the working surface by the user. The system can then “animate” the character by causing graphical character movement.
Owner:RUDELL DESIGN
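
A minimal sketch of the capture-and-animate idea follows: treat the captured image of the working surface as a grid of pixel values, take the user's markings as the character, and animate it by moving the marked pixels across successive frames. The thresholding step and the simple sliding motion are illustrative assumptions, not the patented system.

# Minimal sketch: extract a drawn character from a captured image and
# animate it by translating its pixels across frames.

def extract_character(image, threshold=0.5):
    """Return the coordinates of marked pixels (the drawn character)."""
    return [(r, c) for r, row in enumerate(image)
            for c, v in enumerate(row) if v > threshold]

def animate(character, width, height, steps=3):
    """Yield text frames in which the character slides to the right."""
    for dx in range(steps):
        frame = [["." for _ in range(width)] for _ in range(height)]
        for r, c in character:
            if 0 <= c + dx < width:
                frame[r][c + dx] = "#"
        yield "\n".join("".join(row) for row in frame)

# A captured image of the working surface: 1.0 where the user drew.
image = [
    [0.0, 1.0, 1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0, 0.0, 0.0],
]
character = extract_character(image)
for frame in animate(character, width=6, height=3):
    print(frame, end="\n\n")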

Collaborative filtering-based real-time voice-driven human face and lip synchronous animation system

The invention discloses a collaborative filtering-based real-time voice-driven face and lip synchronization animation system. As voice is input in real time, a human head model produces lip animation synchronized with the input voice. The system comprises an audio/video coding module, a collaborative filtering module and an animation module. The audio/video coding module performs Mel-frequency cepstrum parameter coding on the acquired voice and MPEG-4 (Moving Picture Experts Group) facial animation parameter coding on the acquired three-dimensional facial feature point motion, yielding a multimodal library of synchronized Mel-frequency cepstrum parameters and facial animation parameters. The collaborative filtering module computes facial animation parameters synchronized with newly input voice by combining the Mel-frequency cepstrum parameter coding of that voice with the multimodal synchronization library through collaborative filtering. The animation module drives the face model with the resulting facial animation parameters to produce the animation. The system offers improved realism, real-time performance and a wider range of application environments.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI
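
The core lookup step can be sketched as follows: given the MFCC vector of an incoming speech frame, find the most similar MFCC vectors in the synchronized MFCC/facial-animation-parameter library and return a similarity-weighted average of their animation parameters. The distance metric, the neighbor count k, and the weighting are illustrative assumptions, and real MFCC extraction from audio is omitted.

# Minimal sketch of the collaborative-filtering step: nearest neighbors in
# MFCC space, weighted averaging of their facial animation parameters (FAPs).

def predict_fap(mfcc, library, k=2):
    """library: list of (mfcc_vector, fap_vector) pairs from the
    multimodal synchronization library."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(library, key=lambda entry: dist(mfcc, entry[0]))[:k]
    weights = [1.0 / (dist(mfcc, m) + 1e-6) for m, _ in nearest]
    total = sum(weights)
    n_fap = len(nearest[0][1])
    return [sum(w * fap[i] for w, (_, fap) in zip(weights, nearest)) / total
            for i in range(n_fap)]

# Tiny synchronized library: MFCC-like vectors paired with lip-opening FAPs.
library = [
    ([0.1, 0.2], [0.0, 0.1]),   # closed lips
    ([0.8, 0.7], [0.9, 0.8]),   # open mouth ("ah")
    ([0.5, 0.4], [0.4, 0.5]),   # half open
]
incoming = [0.6, 0.5]            # MFCC of a newly captured speech frame
print("predicted FAPs:", predict_fap(incoming, library))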

Three dimensional modeling and animation system using master objects and modifiers

A three dimensional (3D) modeling system for generating a 3D representation of a modeled object on a display device of a computer system. The modeled object is represented by an initial definition of an object and a set of modifiers. Each modifier modifies some portion of the object definition, which may change the object's appearance when rendered. The modifiers are ordered so that the first modifier modifies some portion of the initial definition of the object and produces a modified definition, and each subsequent modifier modifies the result of the previous modifier. The result of the last modifier is then used in the rendering process to generate the 3D representation. Each modifier is associated with a three dimensional representation so that the user can more easily visualize the effect of the modifier.
Owner:AUTODESK INC
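
The modifier-stack idea maps naturally to a small sketch: an object is its initial definition plus an ordered list of modifiers, each transforming the result of the previous one, with the last result handed to the renderer. The vertex-list representation and these particular modifiers are illustrative assumptions, not Autodesk's design.

# Minimal sketch of a modifier stack: each modifier transforms the output
# of the previous one; the last result is what gets rendered.

def taper(vertices, amount=0.5):
    """Scale x and y by a factor that shrinks with height (z)."""
    return [(x * (1 - amount * z), y * (1 - amount * z), z)
            for x, y, z in vertices]

def translate(vertices, dz=1.0):
    """Shift the whole object up by dz."""
    return [(x, y, z + dz) for x, y, z in vertices]

def evaluate(base_vertices, modifiers):
    """Apply each modifier in order to the previous modifier's result."""
    result = base_vertices
    for modifier in modifiers:
        result = modifier(result)
    return result

# Initial definition of the object: four corners of a slanted slab.
base = [(1, 1, 0), (-1, 1, 0), (-1, -1, 1), (1, -1, 1)]
stack = [taper, translate]          # ordered modifier stack
print(evaluate(base, stack))        # result used by the rendering process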