
118 results about "Action prediction" patented technology

Information providing method and information providing device

In a car navigation system (1), a position information detection means (11) detects position information on the vehicle using, for example, a GPS. A travel information history of the vehicle, obtained from the detected position information, is accumulated in a travel information history means (15). When detecting an event such as the start of the engine, an action prediction means (17) predicts the destination of the vehicle by referring to the route traveled up to the current time and to the accumulated travel information history. Commercial or traffic information regarding the predicted destination is acquired by an information acquisition means (18) from a server (2) and is then displayed on a screen, for example, by an information provision means (19).
Owner:PANASONIC INTELLECTUAL PROPERTY CORP OF AMERICA
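The prediction step described above could be sketched as a simple context-keyed frequency model: the accumulated history maps a (origin, time-of-day) context to past destinations, and the predictor returns the most frequent one. This is a minimal illustrative sketch, not the patent's implementation; all names are assumptions.

```python
from collections import Counter, defaultdict

class DestinationPredictor:
    """Predict a likely destination from accumulated travel history."""

    def __init__(self):
        # (origin, hour-of-day) -> counts of observed destinations
        self.history = defaultdict(Counter)

    def record_trip(self, origin, hour, destination):
        self.history[(origin, hour)][destination] += 1

    def predict(self, origin, hour):
        candidates = self.history.get((origin, hour))
        if not candidates:
            return None  # no history for this context yet
        return candidates.most_common(1)[0][0]

p = DestinationPredictor()
p.record_trip("home", 8, "office")
p.record_trip("home", 8, "office")
p.record_trip("home", 8, "gym")
print(p.predict("home", 8))  # most frequent destination for this context
```

A production system would also weight the partially traveled route, as the abstract describes, rather than using the origin alone.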

Action prediction based on interactive history and context between sender and recipient

Techniques for action prediction based on interactive history and context between a sender and a recipient are described herein. In one embodiment, a process includes, but is not limited to: in response to a message to be received by a recipient from a sender over a network, determining one or more previous transactions associated with the sender and the recipient, the previous transactions having been recorded during the course of operations performed within an entity associated with the recipient; and generating a list of one or more action candidates based on the determined previous transactions, wherein the action candidates are optional actions recommended to the recipient in addition to one or more actions required to be taken in response to the message. Keyword identification from voice applications, as well as guided actions and interactive-history links, has also been applied to generate action-prediction candidates. Other methods and apparatuses are also described.
Owner:SAP AG
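The candidate-generation idea could be sketched by ranking actions by how often each one followed past messages between the same sender-recipient pair. The transaction-log shape and field names are illustrative assumptions.

```python
from collections import Counter

def action_candidates(transactions, sender, recipient, top_n=3):
    """Rank optional action candidates from this pair's transaction history."""
    counts = Counter(
        t["action"] for t in transactions
        if t["sender"] == sender and t["recipient"] == recipient
    )
    return [action for action, _ in counts.most_common(top_n)]

log = [
    {"sender": "alice", "recipient": "bob", "action": "create_invoice"},
    {"sender": "alice", "recipient": "bob", "action": "create_invoice"},
    {"sender": "alice", "recipient": "bob", "action": "schedule_meeting"},
    {"sender": "carol", "recipient": "bob", "action": "forward"},
]
print(action_candidates(log, "alice", "bob"))
```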

Headlight beam control system and headlight beam control method

A headlight beam control system includes an image taking apparatus for capturing an image to the rear of the user's vehicle and for generating image data from the captured image, a following vehicle information acquisition unit for acquiring following vehicle information from the image data, a passing action prediction unit for predicting a passing action based on the following vehicle information, a mode switching condition judgment unit for judging whether a mode switching condition is satisfied, based on the predicted passing action, and an automatic mode setting unit for switching the headlights between a high beam mode and a low beam mode when the mode switching condition is satisfied. When passing of the user's vehicle is predicted based on the following vehicle information, the headlights are switched from the high beam mode to the low beam mode.
Owner:AISIN AW CO LTD
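The mode-switching logic amounts to a small state rule: when the following-vehicle information predicts a pass, drop from high beam to low beam. The thresholds below are illustrative assumptions, not values from the patent.

```python
def predict_passing(closing_speed_mps, lateral_offset_m):
    """Heuristic pass prediction: closing fast while offset toward the adjacent lane."""
    return closing_speed_mps > 2.0 and abs(lateral_offset_m) > 0.5

def next_beam_mode(current_mode, closing_speed_mps, lateral_offset_m):
    """Switch high -> low when a pass is predicted; otherwise keep the mode."""
    if current_mode == "high" and predict_passing(closing_speed_mps, lateral_offset_m):
        return "low"
    return current_mode

print(next_beam_mode("high", 3.5, 1.2))  # pass predicted -> low beam
print(next_beam_mode("high", 0.5, 0.1))  # no pass -> stays high beam
```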

Animation-based action prediction generation method and device

The invention provides an animation-based action prediction generation method and device. The method comprises the steps of: obtaining a current animation frame of a target animation corresponding to a target character; obtaining the current skeleton posture information of the target character and the motion posture information on a preset motion track according to the current animation frame; acquiring a similar action corresponding to the current animation frame and a similarity corresponding to the similar action; obtaining a fusion action according to the similarity and the similar action; obtaining a predicted action for the next moment; and controlling the target animation to simulate the predicted action in the next animation frame. The invention performs network learning on the motion characteristics of an action body with a skeleton structure (such as a human or an animal) in different motion states and on the change characteristics between those states. Different motion characteristics can thus be well represented, and transitions between motion states can be naturally fused, producing a locomotion animation for people or animals while ensuring a natural animation effect.
Owner:TSINGHUA UNIV
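The fusion step can be illustrated as a similarity-weighted blend of candidate skeleton poses: each similar action contributes to the fused pose in proportion to its similarity score. This is a minimal sketch with joint-angle vectors as an assumed pose representation.

```python
def fuse_poses(similar_poses, similarities):
    """Blend candidate joint-angle vectors, weighted by similarity scores."""
    total = sum(similarities)
    fused = [0.0] * len(similar_poses[0])
    for pose, weight in zip(similar_poses, similarities):
        for i, joint_angle in enumerate(pose):
            fused[i] += joint_angle * weight / total
    return fused

poses = [[10.0, 20.0], [30.0, 40.0]]   # two candidate joint-angle vectors
print(fuse_poses(poses, [1.0, 3.0]))   # weighted toward the second pose
```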

A video abnormal behavior detection method based on action prediction

The invention discloses a video abnormal-behavior detection method based on motion prediction. The method comprises the specific steps of: designing an adversarial generative network model comprising a generator and a discriminator; constructing the encoding part of the generator; constructing the decoding part of the generator; establishing the discriminator; training the generator and the discriminator of the adversarial generative network model; and detecting abnormal events occurring in a video with the obtained optimal generator network. The method uses a portion of the normal-behavior videos to estimate the generation errors, and the anomaly-detection thresholds are dynamically generated according to different scenes and time changes, so the method can be applied to a wider range of scenes and robustness is improved.
Owner:SOUTH CHINA UNIV OF TECH
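The dynamic-threshold idea described above could be sketched as follows: collect the generator's prediction errors on normal clips for the current scene, then flag a frame as anomalous when its error exceeds the mean plus a multiple of the standard deviation. The multiplier `k` is an illustrative assumption.

```python
import statistics

def dynamic_threshold(normal_errors, k=3.0):
    """Threshold derived from generation errors observed on normal clips."""
    mu = statistics.mean(normal_errors)
    sigma = statistics.stdev(normal_errors)
    return mu + k * sigma

def is_anomalous(error, normal_errors, k=3.0):
    """Flag a frame whose generation error exceeds the scene's dynamic threshold."""
    return error > dynamic_threshold(normal_errors, k)

normal = [0.10, 0.12, 0.11, 0.09, 0.13]
print(is_anomalous(0.50, normal))  # large error -> anomalous
print(is_anomalous(0.11, normal))  # typical error -> normal
```

Recomputing the threshold over a sliding window of recent normal errors would give the scene- and time-adaptive behavior the abstract claims.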

Display method, device, equipment and computer readable storage medium

Active · CN110460831A · Benefits: eliminates or avoids ghosting; guarantees display clarity · Topics: image analysis; geometric image transformation; display device; action prediction
The invention provides a display method, a device, equipment, and a computer-readable storage medium. The method comprises the steps of: obtaining human-eye information, and determining a gaze region and a non-gaze region in the display equipment according to the human-eye information; determining the to-be-displayed content of the gaze area according to the movement parameters of the dynamic object in the gaze area, generating an image of the gaze area, and rendering the non-gaze area to obtain an image of the non-gaze area; and combining the image of the gaze area and the image of the non-gaze area to obtain a combined image, and displaying the combined image on a display device. According to the display method provided by the invention, the gaze area and the non-gaze area in the display equipment are divided according to the human-eye information. By determining the dynamic object in the gaze area and predicting its action, the content to be displayed in the gaze area can be determined from that action prediction; ghosting of the dynamic object during scene movement is eliminated or avoided, the display clarity of a dynamic picture is ensured, and the display performance of the equipment is improved.
Owner:BOE TECH GRP CO LTD +1
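The motion-prediction step in the gaze area could be sketched with a constant-velocity model: extrapolate the dynamic object's position one frame ahead so the gaze-area image is rendered where the object will be, reducing ghosting. The constant-velocity assumption and the circular gaze region are simplifications for illustration.

```python
def predict_position(pos, velocity, dt):
    """Linearly extrapolate an object's (x, y) position dt seconds ahead."""
    return (pos[0] + velocity[0] * dt, pos[1] + velocity[1] * dt)

def in_gaze_region(point, gaze_center, radius):
    """Check whether a point falls inside a circular gaze region."""
    dx, dy = point[0] - gaze_center[0], point[1] - gaze_center[1]
    return dx * dx + dy * dy <= radius * radius

# Object at (100, 50) px moving 30 px/s to the right; one 60 Hz frame ahead.
next_pos = predict_position((100.0, 50.0), (30.0, 0.0), dt=1 / 60)
print(next_pos)
print(in_gaze_region(next_pos, (100.0, 50.0), 10.0))
```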

Method for recognizing actions on basis of deep feature extraction asynchronous fusion networks

The invention provides a method for recognizing actions on the basis of deep-feature-extraction asynchronous fusion networks. The main components are coarse-grained-to-fine-grained networks, asynchronous fusion networks, and the deep-feature-extraction asynchronous fusion networks. The method includes: inputting each spatial frame of the input video's appearance stream and each short-term optical-flow stack of its motion stream into the coarse-grained-to-fine-grained networks; integrating deep features at a plurality of action-class granularities; creating accurate feature representations; inputting the extracted features into the asynchronous fusion networks, which integrate stream features from different time points; acquiring the action-class prediction results of each stream; combining the different action prediction results with one another via the deep-feature-extraction asynchronous fusion networks; and determining the ultimate action-class labels of the input video. The method has the advantages that deep-layer features can be extracted at multiple action-class granularities and integrated, accurate action representations can be obtained, complementary information in the multiple information streams can be effectively exploited by means of asynchronous fusion, and action-recognition accuracy can be improved.
Owner:SHENZHEN WEITESHI TECH
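The final combination of per-stream predictions can be illustrated as late fusion: average the per-class scores from the appearance and motion streams, then take the argmax as the action label. Equal stream weights are an assumption; the patented networks learn this fusion rather than averaging.

```python
def fuse_predictions(stream_scores):
    """Average per-class scores across streams and return the winning class index."""
    n_classes = len(stream_scores[0])
    n_streams = len(stream_scores)
    fused = [sum(scores[i] for scores in stream_scores) / n_streams
             for i in range(n_classes)]
    return max(range(n_classes), key=lambda i: fused[i])

appearance = [0.2, 0.7, 0.1]  # per-class scores from spatial frames
motion     = [0.1, 0.3, 0.6]  # per-class scores from optical-flow stacks
print(fuse_predictions([appearance, motion]))  # index of the fused winner
```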

Multi-agent confrontation method and system based on dynamic graph neural network

The invention belongs to the field of reinforcement learning for multi-agent systems and particularly relates to a multi-agent confrontation method and system based on a dynamic graph neural network. It aims to solve the problems that existing multi-agent models based on graph neural networks are slow to train, inefficient, and require much manual intervention in graph construction. The method comprises the following steps: obtaining an observation vector of each agent and applying a linear transformation to obtain an observation feature vector; calculating the connection relationships between adjacent agents and constructing a graph structure between the agents; computing an embedded representation of the graph structure between the agents in combination with the observation feature vectors; performing spatio-temporal parallel training of the action network's action predictions and the evaluation network's evaluations using the embedded representation; and performing action prediction and action evaluation in multi-agent confrontation through the trained networks. The method establishes a more realistic graph relationship through pruning and realizes spatio-temporal parallel training using a fully connected neural network and positional coding; training efficiency is high and the results are good.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI +1
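The graph-construction-with-pruning step could be sketched by connecting two agents only when they are within an observation radius, discarding distant pairs. The radius criterion is an illustrative assumption for the pruning rule.

```python
import math

def build_agent_graph(positions, radius):
    """Connect agent pairs within `radius` of each other; prune the rest."""
    edges = set()
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) <= radius:
                edges.add((i, j))
    return edges

agents = [(0.0, 0.0), (1.0, 0.0), (10.0, 10.0)]
print(build_agent_graph(agents, radius=2.0))  # only the nearby pair is linked
```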

Video processing device, electronic equipment and computer readable storage medium

The invention provides a video processing device, electronic equipment, and a computer-readable storage medium. The device comprises: a prompt display module for displaying prompt information for at least one specified action on a display screen; a video acquisition module for acquiring a video of the patient using the camera; an action prediction module for inputting the video of the patient into a Parkinson's disease detection model, performing prediction to obtain action information of the patient, and sending the action information to the doctor's equipment; an information acquisition module for acquiring disease information of the patient; and a suggestion strategy module for obtaining a suggested program-control strategy for the patient based on the disease information and the action information and sending it to the doctor's equipment. With the device, the patient can be prompted to perform the specified action, the corresponding action information is obtained from the video of the patient performing that action, and a doctor is helped to quantitatively assess the patient's local posture and movement performance during motion.
Owner:景昱医疗器械(长沙)有限公司
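The suggestion-strategy module's combination of action information and disease information could be sketched as a simple rule over the two inputs. Every name, field, and rule below is a hypothetical illustration, not the device's actual logic.

```python
def suggest_strategy(action_info, disease_info):
    """Combine predicted action info and disease info into a suggested strategy."""
    if (action_info["tremor_score"] > 0.5
            and disease_info["diagnosis"] == "parkinsons"):
        return "increase_stimulation"
    return "keep_current_settings"

action_info = {"tremor_score": 0.8}         # from the action prediction module
disease_info = {"diagnosis": "parkinsons"}  # from the information acquisition module
print(suggest_strategy(action_info, disease_info))
```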

Method for predicting acute joint toxicity of three pesticides to photogenic bacteria

The invention discloses a method for predicting the acute joint toxicity of three pesticides to luminescent bacteria, which aims to overcome the problems that conventional toxicological acute-joint-toxicity evaluation techniques require a large testing workload and that quantitative evaluation and acute-joint-toxicity prediction methods are not available. The method comprises the following steps: (1) performing pretesting to confirm the testing concentrations of the different pesticides for the official tests, which comprises: (a) preliminarily confirming the high, medium, and low luminescent-bacteria toxicity concentration ranges of each single pesticide; (b) establishing a dose-effect equation y = f(x) (x ∈ [C, C']) for each single pesticide; and (c) confirming the testing concentrations of the different pesticides for the BBD tests; (2) confirming the testing schemes through a three-factor, three-level Box-Behnken design (BBD); (3) performing the official tests, namely measuring the relative light-emission inhibition rates of the different test groups of luminescent bacteria; and (4) establishing a model to predict the acute joint toxicity of the three pesticides to the luminescent bacteria.
Owner:JILIN UNIV
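Step (1b), establishing a single-pesticide dose-effect equation y = f(x) over the tested range [C, C'], could be sketched as a least-squares fit of inhibition rate against concentration. The linear form and the data values are illustrative assumptions; real dose-effect curves are often fitted with sigmoidal models.

```python
def fit_linear(xs, ys):
    """Least-squares fit y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

conc = [1.0, 2.0, 3.0, 4.0]       # assumed test concentrations (mg/L)
inhib = [10.0, 21.0, 29.0, 41.0]  # relative light-emission inhibition (%)
slope, intercept = fit_linear(conc, inhib)
print(round(slope, 2), round(intercept, 2))
```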