
48 results about "Visual behavior" patented technology

Method and apparatus for developing a person's behavior

Active | US20070117073A1 | Assist in developing the behavior in the person | Electrical appliances | Teaching apparatus | Human behavior | Adaptive response
An embodiment of an apparatus, or corresponding method, for developing a person's behavior according to the principles of the present invention comprises at least one visual behavior indicator that represents a behavior desired of a person viewing the at least one visual behavior indicator. The apparatus, or corresponding method, further includes at least two visual choice indicators viewable with the at least one visual behavior indicator that represent choices available to the person, the choices assisting in developing the behavior in the person by assisting the person in choosing an appropriately adaptive response supporting the desired behavior or as an alternative to behavior contrary to the desired behavior.
Owner: BEE VISUAL

Online user type identification method and system based on visual behavior

The invention discloses an online user type identification method and system based on visual behavior. The method includes: collecting and processing the eye movement data of one or more types of users to obtain a gaze information data set and a user type set; deriving one or more items of eye movement feature data from the gaze information in the data set to generate a sampling data set; selecting part of the eye movement feature data from the sampling data set and inputting it into a support vector machine, which is trained to obtain a user type classifier, completing the machine learning process; and inputting the collected eye movement data of any online user into the trained user type classifier, which identifies that user's type. The method proactively applies eye tracking technology to obtain and compute three types of eye movement feature data while each user browses a web page, and judges the type of the online user from differences in this feature data. Because identification is based on visual behavior, the method and system can proactively record the eye movement data of online users, extract the data simply and conveniently, and achieve high accuracy and reliability.
Owner: BEIJING UNIV OF TECH
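
A minimal sketch of the classification step described above, assuming three per-session eye-movement features and binary user types; the feature meanings, data shapes and labels are illustrative placeholders, not taken from the patent:

```python
# Sketch: train an SVM user-type classifier on eye-movement feature data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical sampling data set: one row per browsing session with three
# eye-movement features (e.g. mean fixation duration, fixation count, saccade length).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = rng.integers(0, 2, size=200)          # hypothetical user-type labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf")                   # support vector machine user-type classifier
clf.fit(X_train, y_train)                 # training completes the machine-learning step

# Eye movement data collected from an arbitrary online user is classified the same way.
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```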

Method for online learning and recognition of visual behaviors

Described is a system for object and behavior recognition which utilizes a collection of modules that, when integrated, can automatically recognize, learn, and adapt to simple and complex visual behaviors. An object recognition module utilizes a cooperative swarm algorithm to classify an object in a domain. A graph-based object representation module is configured to use a graphical model to represent the spatial organization of the object within the domain. Additionally, a reasoning and recognition engine module consists of two sub-modules: a knowledge sub-module and a behavior recognition sub-module. The knowledge sub-module utilizes a Bayesian network, while the behavior recognition sub-module consists of layers of adaptive resonance theory clustering networks and a layer of a sustained temporal order recurrent (STORE) network. The described invention has applications in video forensics, data mining, and intelligent video archiving.
Owner: HRL LAB
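
A highly simplified skeleton of how the modules described above might be composed; the class names, method names and placeholder outputs are assumptions for illustration only, not the patented implementation:

```python
# Illustrative module pipeline: object recognition feeds a graph-based spatial
# representation, which the reasoning/recognition engine consumes over time.
class ObjectRecognitionModule:
    def classify(self, frame):
        # Placeholder for the cooperative-swarm object classifier.
        return [{"label": "person", "bbox": (10, 20, 50, 80)},
                {"label": "car", "bbox": (120, 40, 200, 90)}]

class GraphObjectRepresentation:
    def build(self, detections):
        # Nodes are detected objects; edges stand in for their spatial organization.
        nodes = [d["label"] for d in detections]
        edges = [(a, b) for i, a in enumerate(nodes) for b in nodes[i + 1:]]
        return {"nodes": nodes, "edges": edges}

class ReasoningRecognitionEngine:
    def __init__(self):
        self.history = []   # stands in for the Bayesian-network knowledge sub-module

    def recognize(self, graph):
        # Stand-in for the ART-clustering / temporal-order behavior sub-module:
        # accumulate graphs over time and emit a behavior label.
        self.history.append(graph)
        return "interaction" if len(self.history) > 1 else "unknown"

detector, grapher, engine = (ObjectRecognitionModule(),
                             GraphObjectRepresentation(),
                             ReasoningRecognitionEngine())
for frame in [{"id": 0}, {"id": 1}]:
    print(engine.recognize(grapher.build(detector.classify(frame))))
```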

Intelligent substation fault diagnosing and positioning method based on multi-factor comparison visualization

Active | CN104539047A | Realize dynamic visual behavior operation | Circuit arrangements | Special data processing applications | Visual behavior | Smart substation
The invention discloses an intelligent substation fault diagnosing and positioning method based on multi-factor comparison visualization. The method includes the steps of generating SVG graphic primitives of a relay protection device, displaying Google GIS geographic information, converting COMTRADE files into SVG files, comparing protection waveforms with setting-value information, and locating fault information. Dynamic visual behavior operation of the relay protection device is truly achieved, and the healthy and stable running of the whole electric power system can be monitored globally and remotely. Through multi-factor comparison, the cause of a fault can be found more accurately and efficiently, and the fault location is indicated promptly through Google GIS. A master station dispatching center can visually monitor, completely and in real time, the running of the whole electric power system in its area of responsibility. Once a line or an electric device fails, the location of the fault source can be identified instantly and inspection and maintenance personnel near the fault source can be dispatched in time, achieving truly intelligent, visual operation and maintenance of the whole electric power system.
Owner: STATE GRID CORP OF CHINA +3
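
One small piece of the pipeline above, rendering recorded waveform samples as an SVG polyline for the visual display, can be sketched as follows; the sample data and scaling are made up, and real COMTRADE parsing of the .cfg/.dat pair is omitted:

```python
# Sketch: render one channel of fault-recorder samples as an SVG polyline,
# the kind of graphic primitive the method builds for the protection display.
import math

def waveform_to_svg(samples, width=800, height=200):
    n = len(samples)
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1.0
    points = " ".join(
        f"{i * width / (n - 1):.1f},{height - (v - lo) / span * height:.1f}"
        for i, v in enumerate(samples)
    )
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">'
        f'<polyline points="{points}" fill="none" stroke="black"/></svg>'
    )

if __name__ == "__main__":
    # Hypothetical 50 Hz waveform sampled at 1 kHz.
    fault_current = [math.sin(2 * math.pi * 50 * t / 1000) for t in range(200)]
    print(waveform_to_svg(fault_current)[:120], "...")
```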

Immersive media data transmission method adapting to human eye perception condition

The invention discloses an immersive media data transmission method that adapts to the human eye's perception state. The method comprises the following steps: S1, before the user starts browsing, pre-storing part of the low-quality panoramic video data on the user side; S2, once the user is browsing steadily, having the server transmit high-quality field-of-view content according to the user's current visual behavior, and distributing non-uniform image quality within the current field of view based on the user's current gaze point; S3, while the user's visual focus migrates from the current region to the next region, supplying the data within the VR headset's field of view from the pre-stored low-quality panoramic video data, and continuing to pre-store low-quality panoramic video data on the user side; and S4, when the user resumes stable browsing and the gaze migrates again, repeating steps S2 and S3 until viewing stops. The method can effectively reduce the size of the VR data stream while ensuring that the subjective perceived quality of the video content over the whole field-of-view area is unchanged.
Owner: NANJING UNIV
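
A minimal sketch of the quality-allocation idea in step S2: tiles of the panorama near the user's current gaze point get higher quality, while everything else falls back to the pre-stored low-quality layer; the tile grid, quality levels and angular thresholds are illustrative assumptions:

```python
# Sketch: assign per-tile quality from the angular distance between each
# tile centre and the user's current gaze point (yaw/pitch in degrees).
import math

def tile_quality(gaze, tile_centre, high_fov=30.0, mid_fov=60.0):
    """Return 'high', 'mid' or 'low' for one tile given the gaze direction."""
    d_yaw = abs((gaze[0] - tile_centre[0] + 180) % 360 - 180)   # wrap-around yaw
    d_pitch = abs(gaze[1] - tile_centre[1])
    dist = math.hypot(d_yaw, d_pitch)
    if dist <= high_fov:
        return "high"      # transmitted at full quality inside the field of view
    if dist <= mid_fov:
        return "mid"       # graded quality around the gaze point
    return "low"           # served from the pre-stored low-quality panorama

# Hypothetical 6x4 tile grid over the 360x180-degree panorama.
tiles = [(-150 + 60 * i, -67.5 + 45 * j) for i in range(6) for j in range(4)]
gaze = (20.0, -5.0)        # current gaze point reported by the VR headset
plan = {t: tile_quality(gaze, t) for t in tiles}
print(sum(q == "high" for q in plan.values()), "high-quality tiles")
```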

Virtual human interaction software bus system and an implementation method thereof

The invention discloses a virtual human interaction software bus system and an implementation method thereof. The system comprises: an event communication soft bus, used to realize soft-bus communication between modules based on a publish-subscribe mechanism; a visual ability unit and a voice ability unit, used for the virtual human's vision and voice interaction; an emotional ability unit, used to compute the emotional state of the virtual human; a decision management unit, used to generate behavior control instructions for the virtual human; a virtual human engine unit, used to drive the display of the virtual human's visual behavior; and an application interface unit, used to interact with third-party applications. In the invention, each unit is mounted on the event communication soft bus through a bus interface. The system supports third-party applications accessing it according to the interface protocol, so that integrated control of the virtual human's image, actions, expressions, voice, clothing changes and other operations is realized, the problem of making different application systems compatible with virtual human control is solved, the virtual human is easier to integrate into an intelligent human-machine interaction process, and the user experience is effectively improved.
Owner: 南京七奇智能科技有限公司
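
A minimal sketch of the publish-subscribe event bus that the units are described as mounting onto; the topic names and handlers below are illustrative, not the patented interface protocol:

```python
# Sketch: an in-process publish-subscribe "soft bus". Each ability unit
# registers handlers for topics and publishes events for the other units.
from collections import defaultdict

class EventSoftBus:
    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> list of handler callables

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventSoftBus()

# Hypothetical units: the decision management unit reacts to speech events and
# drives the virtual human engine unit through behavior-control events.
bus.subscribe("speech.recognized",
              lambda text: bus.publish("behavior.control", {"action": "nod", "reason": text}))
bus.subscribe("behavior.control",
              lambda cmd: print("engine executes:", cmd))

bus.publish("speech.recognized", "hello")   # -> engine executes: {'action': 'nod', ...}
```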

Distributed simulation platform based on behavior tree

The invention discloses a distributed simulation platform based on a behavior tree. Model behaviors are designed with a behavior tree method, which is direct and convenient: the model behaviors can be adjusted dynamically in a visual behavior-tree form before the simulation system runs, the simulation platform calls the model behaviors in the adjusted logic sequence, and the model behaviors can be changed quickly and dynamically. The simulation platform comprises a model behavior design tool, a model behavior development tool, a scenario generation tool, a system operation control tool and a distributed computer adapter; these five tools form a complete whole, through which a user can control the full-cycle use of the whole simulation system, greatly improving the usage efficiency of the system. The models used by the simulation platform take the form of dynamic link libraries, so model source code can be effectively protected. The scenario file format, model behavior description file format and model initialization file format are all XML, which gives the platform very strong universality.
Owner: 中国航天系统科学与工程研究院
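
A minimal behavior-tree sketch of the kind of model-behavior logic such a platform edits and executes; the node types are the generic Sequence/Selector/Action pattern and the example tree is made up, not the platform's actual API:

```python
# Sketch: classic behavior-tree nodes. A Sequence succeeds only if all of its
# children succeed in order; a Selector succeeds on the first child that does.
SUCCESS, FAILURE = "success", "failure"

class Action:
    def __init__(self, name, func):
        self.name, self.func = name, func
    def tick(self, blackboard):
        return SUCCESS if self.func(blackboard) else FAILURE

class Sequence:
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == SUCCESS:
                return SUCCESS
        return FAILURE

# Hypothetical model behavior: engage a target if one exists, otherwise patrol.
tree = Selector(
    Sequence(Action("has_target", lambda bb: bb.get("target") is not None),
             Action("engage", lambda bb: print("engage", bb["target"]) or True)),
    Action("patrol", lambda bb: print("patrol") or True),
)
print(tree.tick({"target": None}))   # -> patrol, success
print(tree.tick({"target": "T1"}))   # -> engage T1, success
```

Reordering or swapping nodes in this tree changes the model behavior without touching the calling simulation code, which is the "dynamic adjustment before the simulation runs" idea described above.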

Visual behavior recognition method and system based on text semantic supervision and computer readable medium

The invention discloses a visual behavior recognition method and system based on text semantic supervision, and a computer readable medium. The method comprises the steps of text semantic feature extraction, visual feature extraction based on text semantic supervision, and construction of visual behavior recognition. Text description paradigms of various behaviors are summarized from the text description sentences of video sample sets of the same type of behavior, a sample-pair data set is constructed, and action semantic feature vectors and relation semantic feature vectors of the text description sentences are extracted with a text semantic extraction model. The extracted action visual feature vectors and relation visual feature vectors are supervised using the action semantic feature vectors and the relation semantic feature vectors, and behavior recognition is performed using the extracted action visual feature vectors and relation visual feature vectors. This addresses the problems in the current field of visual behavior recognition that the accuracy of visual behavior recognition is not high, the efficiency of text semantic supervision is not high, and actions and the relations between behaviors cannot be recognized accurately.
Owner: XIDIAN UNIV
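
A minimal sketch of one common way such supervision can be realized: pull each visual feature vector toward the semantic feature vector of its paired text description with a cosine-similarity loss. The dimensions, random data and the specific loss are assumptions for illustration, not the patented formulation:

```python
# Sketch: cosine-similarity supervision of visual features by semantic features.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def supervision_loss(visual_feats, semantic_feats):
    """Mean (1 - cosine similarity) over paired feature vectors."""
    return float(np.mean([1.0 - cosine(v, s)
                          for v, s in zip(visual_feats, semantic_feats)]))

rng = np.random.default_rng(0)
action_visual = rng.normal(size=(8, 128))     # hypothetical action visual features
action_semantic = rng.normal(size=(8, 128))   # paired action semantic features
print("action supervision loss:", supervision_loss(action_visual, action_semantic))
```

The same loss would be applied to the relation visual / relation semantic pairs, and minimizing it during training aligns the visual extractor with the text semantics.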

Behavior data monitoring method and device, computer equipment and medium

The invention relates to the field of artificial intelligence and discloses a behavior data monitoring method and device, computer equipment and a storage medium. The method comprises the steps of: obtaining a user identifier from an access request each time an access request is detected; generating a random character string based on the user identifier and adding the random character string to the request message and the response message; generating log data from the request message and the response message and storing the log data on a log cloud platform; and, when a query request for a user's operation behavior is received, taking the user identifier contained in the query request as the target user identifier and obtaining the log data corresponding to the target user identifier from the log cloud platform. Behavior track analysis is then performed on the target log to obtain the visual behavior track corresponding to the target user identifier, and whether the operation behavior corresponding to the target user identifier is abnormal is judged based on the visual behavior track, so that the efficiency of behavior data monitoring is improved.
Owner: CHINA PING AN PROPERTY INSURANCE CO LTD
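
A minimal sketch of the tracing idea: derive a random, user-bound string per access request, stamp it on the request and response, log both, and later gather the log entries for one user to reconstruct a behavior track. The field names and the in-memory store are assumptions; the patent targets a log cloud platform:

```python
# Sketch: per-request trace string derived from the user identifier, attached
# to request/response messages, then used to pull one user's behavior track.
import secrets
from collections import defaultdict

log_store = defaultdict(list)          # stand-in for the log cloud platform

def handle_request(user_id: str, request: dict) -> dict:
    trace = f"{user_id}-{secrets.token_hex(8)}"   # random string bound to the user
    request["trace_id"] = trace
    response = {"status": 200, "trace_id": trace}
    log_store[user_id].append({"trace_id": trace,
                               "path": request.get("path"),
                               "status": response["status"]})
    return response

def behavior_track(user_id: str) -> list:
    """Ordered log entries for one target user identifier."""
    return log_store[user_id]

handle_request("u42", {"path": "/login"})
handle_request("u42", {"path": "/orders"})
print(behavior_track("u42"))
```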

Pighouse temperature control method and system based on visual behavior feedback

The invention particularly relates to a pighouse temperature control method and system based on visual behavior feedback. The system comprises a pig bed body module, a video collection module, a video image processing and recognition module, a control module and a display module. The pig bed body module is the place where the pigs lie at rest and are observed; the video collection module collects video data; the video image processing and recognition module collects and saves video images and identifies and classifies the behaviors of the pigs in the images; the control module receives the identification and classification results and outputs a control quantity to a servo motor in the pig bed body module, and opening or closing of a top plate is controlled via the servo motor, thereby regulating the temperature inside the pig bed body; and the display module plays the received video and displays the results recognized by the video image processing and recognition module. In the method and system, the behavior of the pigs in the pig bed is monitored with a camera, the behaviors of the pigs are recognized promptly by the video image processing and recognition module, and the temperature in the pig bed is regulated according to the pigs' behavior.
Owner: HENAN UNIV OF ANIMAL HUSBANDRY & ECONOMY
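
A minimal sketch of the feedback loop: classify the pigs' lying behavior from a frame, then open or close the top plate via the servo accordingly. The behavior classes, threshold and servo commands are illustrative assumptions, not the patented classifier or controller:

```python
# Sketch: behavior-driven top-plate control. Huddled pigs suggest the bed is
# too cold (close the plate); spread-out pigs suggest it is too warm (open it).
def classify_behavior(frame) -> str:
    # Placeholder for the video image processing and recognition module.
    return "huddled" if frame.get("cluster_score", 0.0) > 0.7 else "spread"

def control_top_plate(behavior: str) -> str:
    if behavior == "huddled":
        return "close"     # servo command: retain heat in the pig bed body
    if behavior == "spread":
        return "open"      # servo command: ventilate and cool the pig bed body
    return "hold"

frame = {"cluster_score": 0.85}   # hypothetical frame-level feature
print(control_top_plate(classify_behavior(frame)))   # -> close
```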

Gaze intention recognition method and system based on historical visual behaviors

The invention provides a gaze intention recognition method and system based on historical visual behaviors. The method comprises the following steps: first, extracting the user's eye movement features for each object based on historical visual behavior, including fixation duration, fixation frequency, fixation interval and fixation speed; then inputting the user's eye movement features for each object into an SVM classifier to judge whether the user gazes at the object intentionally, and if so, adding the object to the sequence of intentionally gazed historical objects; and finally, inputting the sequence of intentionally gazed historical objects into a naive Bayes classifier to determine the user's intention. Compared with methods based on a single object, this gaze intention recognition method based on historical visual behaviors improves intention recognition accuracy remarkably.
Owner: SHANDONG UNIV
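
A minimal sketch of the two-stage pipeline described above: an SVM decides, per object, whether a gaze was intentional, and the resulting sequence of intentionally gazed objects is fed to a naive Bayes classifier that outputs the user's intention. All features, labels and data below are synthetic placeholders:

```python
# Sketch: stage 1 (SVM) filters intentional gazes per object; stage 2
# (multinomial naive Bayes) maps the surviving object sequence to an intention.
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import MultinomialNB

rng = np.random.default_rng(0)

# Stage 1: per-object eye movement features (fixation duration, frequency,
# interval, speed) labelled intentional (1) or unintentional (0).
X_gaze = rng.random((100, 4))
y_intentional = (X_gaze[:, 0] > 0.5).astype(int)
gaze_svm = SVC().fit(X_gaze, y_intentional)

# Stage 2: counts of intentionally gazed object categories per session,
# labelled with one of three hypothetical intentions.
X_seq = rng.integers(0, 4, size=(60, 5))
y_intent = rng.integers(0, 3, size=60)
intent_nb = MultinomialNB().fit(X_seq, y_intent)

# Run time: keep the objects whose gaze the SVM marks intentional, map them to
# toy categories, count them, and classify the resulting sequence.
new_objects = rng.random((6, 4))
kept_idx = np.nonzero(gaze_svm.predict(new_objects) == 1)[0]
counts = np.bincount(kept_idx % 5, minlength=5)
print("predicted intention:", intent_nb.predict(counts.reshape(1, -1))[0])
```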

Device for testing the visual behavior of a person, and method for determining at least one optical design parameter of an ophthalmic lens using such a device

The invention relates to a device (10) for testing the visual behavior of a person. Said device comprises: an active display (11) capable of displaying at least one visually dominant target (20) in a plurality of positions (30) that are variable over time and that are aligned along at least one line (L1, L2) or column, and a unit for controlling the display. Said unit is programmed so that the consecutive display positions of the target follow, over time, a visual tracking protocol.
Owner: ESSILOR INT CIE GEN DOPTIQUE
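
A minimal sketch of what a controller following such a visual tracking protocol might generate: a time-stamped series of target positions stepping along one display line. The timing, spacing and number of positions are illustrative assumptions, not values from the patent:

```python
# Sketch: consecutive display positions of the visually dominant target,
# stepped left to right along one line of the display at a fixed cadence.
from dataclasses import dataclass

@dataclass
class TargetPosition:
    t_ms: int      # when to show the target
    x_px: int      # horizontal position along the line
    y_px: int      # vertical position of the line

def tracking_protocol(line_y=400, n_positions=9, spacing_px=120, dwell_ms=800):
    return [TargetPosition(t_ms=i * dwell_ms, x_px=100 + i * spacing_px, y_px=line_y)
            for i in range(n_positions)]

for pos in tracking_protocol()[:3]:
    print(pos)
```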

A method for determining a postural and visual behavior of a person

Active | CN110892311A | Suitable for optical functions | Image enhancement | Medical imaging | Context data | Computer vision
A method for determining a postural and visual behavior of a person, the method comprising: - a person image receiving step during which a plurality of images of the person are received, - a context determining step during which the plurality of images of the person are analyzed so as to determine context data representative of the context in which the person is on each image of the plurality of images, - an analyzing step during which the plurality of images of the person are analyzed so as to determine at least one oculomotor parameter of the person, - a postural and visual behavior determining step during which a postural and visual behavior of the person is determined based at least on the at least one oculomotor parameter and the context data.
Owner: ESSILOR INT CIE GEN DOPTIQUE

Device for testing the visual behavior of a person, and method for determining at least one optical design parameter of an ophthalmic lens using such a device

A device for testing visual behavior of a person, including: an active display configured to display at least one visually predominant target in a plurality of positions that are variable over time and that are aligned along at least one line or column, and a unit for controlling the display. The unit is programmed so that consecutive display positions of the target follow, over time, a visual tracking protocol.
Owner: ESSILOR INT CIE GEN DOPTIQUE