76 results for "Lip feature" patented technology

Adaptive lip language interaction method and interaction apparatus

The invention discloses an adaptive lip language interaction method and interaction apparatus. The method comprises the following steps: obtaining a depth image of a target human body object together with an infrared image or a color image of the same object; obtaining lip area images of the target object from the depth image and from the infrared or color image; extracting lip features from the lip area images, and performing lip language identification after fusing the lip features extracted from the depth image with those extracted from the infrared or color image; and converting the lip language identification result into a corresponding operation instruction and interacting according to that instruction. Because this approach is not easily affected by environmental factors such as light intensity, it effectively improves the hit rate of image identification and, in turn, of lip language identification, ultimately improving the execution efficiency and operation accuracy of the interaction.
Owner:SHENZHEN ORBBEC CO LTD
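The fusion step above could be realized at the feature level. The Python sketch below concatenates simple per-modality descriptors from the two lip ROIs before classification; extract_lip_features and all sizes are illustrative stand-ins, not the patent's actual implementation.

```python
# A minimal sketch of the dual-modality fusion step, assuming lip ROIs have
# already been cropped from the depth and color/IR frames. The histogram
# feature extractor is a hypothetical stand-in for the patent's lip features.
import numpy as np

def extract_lip_features(roi: np.ndarray, bins: int = 32) -> np.ndarray:
    """Toy feature extractor: a normalized intensity histogram of the ROI."""
    hist, _ = np.histogram(roi, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def fuse_features(depth_roi: np.ndarray, color_roi: np.ndarray) -> np.ndarray:
    """Feature-level fusion: concatenate per-modality descriptors, one way
    the abstract's 'fusion processing' might be realized."""
    return np.concatenate([extract_lip_features(depth_roi),
                           extract_lip_features(color_roi)])

# Usage with dummy 64x64 ROIs; a real system would feed the fused vector to a
# lip-language classifier and map its output to an operation instruction.
fused = fuse_features(np.random.randint(0, 256, (64, 64)),
                      np.random.randint(0, 256, (64, 64)))
print(fused.shape)  # (64,)
```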

Foam-in-place interior panels having integrated airbag door for motor vehicles and methods for making the same

Interior panels having integrated airbag doors for motor vehicles and methods for making such interior panels are provided herein. In one example, an interior panel comprises a substrate having outer and inner surfaces and an opening extending therethrough. An airbag chute-door assembly is mounted to the substrate and comprises a chute wall that at least partially surrounds an interior space. A door flap portion is pivotally connected to the chute wall and at least partially covers the opening. A perimeter flange extends from the chute wall and has a flange section that overlies the outer surface of the substrate. A molded-in lip feature extends from the flange section and contacts the outer surface to form a seal between the flange section and the substrate. A skin covering extends over the substrate and a foam is disposed between the skin covering and the substrate.
Owner:FAURECIA INTERIOR SYST

Face image processing method and device, electronic equipment and computer storage medium

The embodiment of the invention provides a face image processing method, together with a device, electronic equipment and a computer storage medium. The method comprises the following steps: obtaining a to-be-processed face image and a lip makeup adjustment instruction, the instruction including lip makeup parameter modification information for the image; detecting lip feature points in the to-be-processed face image; performing interpolation processing on the lip feature points to obtain interpolation feature points; determining a lip region based on the lip feature points and the interpolation feature points; and processing that lip region according to the lip makeup parameter modification information to obtain an adjusted face image. Under this scheme, when a lip makeup adjustment instruction is received, the to-be-processed face image is processed according to it, realizing one-key adjustment of the lip makeup in the image; the user no longer needs to edit the lip makeup of the face image manually, which shortens processing time and improves the user experience.
Owner:SHENZHEN LIANMENG TECH CO LTD
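One plausible reading of the interpolation step is densifying the detected lip landmarks into a closed contour and filling it to obtain the lip region mask. The sketch below assumes landmarks are already detected (e.g., by a 68-point facial landmark model); the point values and counts are illustrative.

```python
# A minimal sketch of landmark densification: linearly interpolate between
# detected lip feature points to get a denser contour, then fill it to form
# the lip region for the makeup edit. Coordinates below are illustrative.
import numpy as np
import cv2

def densify_contour(points: np.ndarray, per_edge: int = 5) -> np.ndarray:
    """Resample the closed contour with `per_edge` points per edge (the first
    point of each edge is the original landmark itself)."""
    dense = []
    n = len(points)
    for i in range(n):
        p, q = points[i], points[(i + 1) % n]  # wrap around the closed contour
        for t in np.linspace(0.0, 1.0, per_edge, endpoint=False):
            dense.append((1 - t) * p + t * q)
    return np.array(dense, dtype=np.int32)

# Dummy outer-lip landmarks (x, y); a landmark detector would supply these.
landmarks = np.array([[30, 50], [50, 40], [70, 40], [90, 50],
                      [70, 62], [50, 62]])
contour = densify_contour(landmarks)
mask = np.zeros((100, 120), np.uint8)
cv2.fillPoly(mask, [contour], 255)   # lip region to recolor for the makeup edit
```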

Personalized voice and video generation system based on phoneme posterior probability

The invention discloses a personalized voice and video generation system based on phoneme posterior probabilities, relating to the technical fields of speech synthesis and voice conversion. The system mainly comprises the following steps: S1, extracting phoneme posterior probabilities with an automatic speech recognition system; S2, training a recurrent neural network to learn the mapping between phoneme posterior probabilities and lip features, so that audio of any target speaker input to the network yields the corresponding lip features; S3, synthesizing the lip features into corresponding face images through face alignment, image fusion, optical flow and related techniques; and S4, generating the final speech video of the speaker from the generated face sequence through dynamic programming and related techniques. Because the lip shape is generated from phoneme posterior probabilities, the volume of video data required for the target speaker is greatly reduced; moreover, a video of the target speaker can be generated directly from text content, without additionally recording the speaker's audio.
Owner:深圳市声希科技有限公司
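Step S2 maps per-frame phoneme posterior probabilities to lip features with a recurrent network. A minimal PyTorch sketch of such a mapping follows; the dimensions (40 phones, 20 lip parameters) are assumptions for illustration, not the patent's values.

```python
# A hedged sketch of step S2: a recurrent network mapping a sequence of
# phoneme posterior probabilities (one vector per audio frame) to per-frame
# lip feature vectors. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class PPGToLip(nn.Module):
    def __init__(self, n_phones=40, lip_dim=20, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(n_phones, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, lip_dim)  # per-frame lip features

    def forward(self, ppg):                # ppg: (batch, frames, n_phones)
        h, _ = self.rnn(ppg)
        return self.head(h)                # (batch, frames, lip_dim)

model = PPGToLip()
ppg = torch.softmax(torch.randn(1, 100, 40), dim=-1)  # dummy posteriors
lips = model(ppg)                          # predicted lip-feature trajectory
print(lips.shape)                          # torch.Size([1, 100, 20])
```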

Tooth identification method, device and system

The embodiment of the invention provides a tooth identification method, device and system. The method comprises the following steps: determining, according to the obtained coordinates of lip feature points, a target image containing the teeth, which gives the approximate tooth area; exploiting the fact that teeth are bright on a luminance channel, converting the target image into a preset color space that is sensitive to luminance; obtaining a second masking grayscale image from a luminance grayscale image and a first masking grayscale image; obtaining a lip probability image from a chromaticity image and a saturation image; and finally obtaining a third masking grayscale image from the second masking grayscale image and the lip probability image. In the third masking grayscale image the pixel values of the tooth region differ markedly from those elsewhere, so the area formed by pixels whose values are greater than or equal to a third preset value is taken as the tooth area. The teeth can thereby be identified quickly and accurately.
Owner:北京贝塔科技有限公司
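A rough sketch of the color-space idea: in HSV, luminance (value) is separated from chroma, and teeth appear as bright, weakly saturated pixels inside the mouth region. The thresholds below are illustrative, not the patent's preset values, and the single-mask logic stands in for the multi-stage masking described above.

```python
# Illustrative sketch only: convert the mouth ROI to HSV (where luminance is
# a separate channel), then keep bright, low-saturation pixels as tooth
# candidates. Thresholds are assumptions, not the patent's preset values.
import numpy as np
import cv2

def tooth_mask(mouth_roi_bgr: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(mouth_roi_bgr, cv2.COLOR_BGR2HSV)
    s, v = hsv[..., 1], hsv[..., 2]
    bright = v >= 150          # teeth are bright on the luminance channel
    unsaturated = s <= 60      # lips and skin are more saturated than teeth
    return (bright & unsaturated).astype(np.uint8) * 255

roi = np.random.randint(0, 256, (60, 120, 3), dtype=np.uint8)  # dummy ROI
mask = tooth_mask(roi)        # nonzero pixels approximate the tooth area
```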

Voice lip fitting method and system and storage medium

The invention relates to a voice lip-shape fitting method. The method comprises the following steps: collecting the image data and voice data of a video data set of a target person; extracting lip feature vectors of the target person from the image data; extracting voice feature vectors of the target person from the voice data; training a multi-scale fusion convolutional neural network with the voice feature vectors as input and the lip feature vectors as output; and inputting a to-be-fitted voice feature vector of the target person into the trained network, which generates and outputs a fitted lip feature vector from which the lip shape is fitted.
Owner:SUN YAT SEN UNIV
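The "multi-scale fusion" network could be interpreted as parallel temporal convolutions with different receptive fields whose outputs are concatenated. The PyTorch sketch below illustrates that interpretation; the voice features (MFCC-like, 13-dim) and lip feature dimension are assumptions.

```python
# A hedged sketch of a multi-scale fusion convolutional mapping from a voice
# feature sequence to lip feature vectors: parallel 1-D convolutions with
# different kernel sizes, concatenated. All sizes are assumptions.
import torch
import torch.nn as nn

class MultiScaleVoiceToLip(nn.Module):
    def __init__(self, voice_dim=13, lip_dim=20, ch=32):
        super().__init__()
        # Three temporal scales capture short/medium/long articulation context.
        self.branches = nn.ModuleList(
            nn.Conv1d(voice_dim, ch, k, padding=k // 2) for k in (3, 5, 7))
        self.head = nn.Conv1d(3 * ch, lip_dim, 1)

    def forward(self, voice):                    # (batch, voice_dim, frames)
        fused = torch.cat([torch.relu(b(voice)) for b in self.branches], dim=1)
        return self.head(fused)                  # (batch, lip_dim, frames)

net = MultiScaleVoiceToLip()
mfcc = torch.randn(1, 13, 80)                    # dummy MFCC-like features
print(net(mfcc).shape)                           # torch.Size([1, 20, 80])
```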

Lip characteristic and deep learning based smiling face recognition method

Publication: CN105956570A (inactive)
The invention discloses a lip feature and deep learning based smiling face recognition method. The method first crops lip image training samples from positive sample images containing a smiling face and negative sample images without one, performs feature extraction on all lip image training samples to acquire a feature vector for each sample, and trains a deep neural network with these feature vectors. For an image to be recognized, the lip feature vector of the face is acquired by the same method and input into the trained deep neural network, which outputs whether the face is a smiling face. By combining lip features with the feature learning capacity of a deep neural network, the method improves smiling face recognition accuracy under complicated conditions; in addition, the influence of non-Gaussian noise is suppressed by an improved overall cost function in the training of the deep neural network, which further improves recognition accuracy.
Owner:UNIV OF ELECTRONIC SCI & TECH OF CHINA
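The abstract does not specify its improved cost function; one standard way to suppress non-Gaussian noise in training is to replace squared error with a bounded, correntropy-style loss, sketched below purely for illustration.

```python
# A sketch of one common robust training objective: a correntropy-style
# Gaussian-kernel loss. This illustrates the general idea of suppressing
# non-Gaussian noise, not the patent's exact cost function.
import torch

def correntropy_loss(pred: torch.Tensor, target: torch.Tensor,
                     sigma: float = 1.0) -> torch.Tensor:
    """Maximum-correntropy criterion: bounded, so outliers (non-Gaussian
    noise) contribute far less than they would under mean-squared error."""
    err = pred - target
    return (1.0 - torch.exp(-err.pow(2) / (2 * sigma ** 2))).mean()

pred = torch.tensor([0.1, 0.0, 5.0])   # third value is an outlier
tgt = torch.zeros(3)
print(correntropy_loss(pred, tgt))     # outlier saturates near 1, not 25
```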

Lip motion capturing method and device and storage medium

The invention discloses a lip motion capturing method, device and storage medium. The method comprises the steps of acquiring a real-time image taken by a camera and extracting a real-time face image from it; inputting the real-time face image into a pre-trained lip average model and recognizing t lip feature points that represent the lip position in the image; and calculating the motion direction and motion distance of the lips from the (x, y) coordinates of the t lip feature points. Since the motion information of the lips is calculated from the coordinates of the lip feature points, real-time capture of lip motion is achieved.
Owner:PING AN TECH (SHENZHEN) CO LTD
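A minimal sketch of the motion computation: given the t lip feature points of two consecutive frames, the centroid displacement yields a motion direction and distance. Names and values below are illustrative.

```python
# Illustrative sketch: derive lip motion direction and distance from the
# centroid displacement of the t lip feature points between two frames.
import numpy as np

def lip_motion(prev_pts: np.ndarray, curr_pts: np.ndarray):
    """prev_pts, curr_pts: (t, 2) arrays of (x, y) lip landmarks."""
    delta = curr_pts.mean(axis=0) - prev_pts.mean(axis=0)
    distance = float(np.linalg.norm(delta))
    angle = float(np.degrees(np.arctan2(delta[1], delta[0])))  # 0 deg = +x axis
    return angle, distance

prev_pts = np.array([[10, 20], [30, 20], [20, 28]], float)
curr_pts = prev_pts + [2, -1]           # lips moved right and (in image
                                        # coordinates) slightly up
print(lip_motion(prev_pts, curr_pts))   # direction in degrees, distance in px
```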

Mouth-movement-identification-based video marshalling method

Disclosed in the invention is a mouth-movement-identification-based video marshalling method. Based on the distribution differences of the hue (H), saturation (S) and value (V) components between lip-color and skin-color areas in a color image, three color feature vectors are selected; filtering and region-connection processing are carried out on the binary image produced by a Fisher classifier through classification and threshold segmentation; the extracted lip feature is matched against the animation-picture lip features in a material library; and transition images between two frames are obtained by image interpolation synthesis, thereby realizing automatic video marshalling. Constructing the Fisher classifier from judiciously selected color information in the HSV color space provides more information for lip-color and skin-color segmentation and enhances the reliability and adaptivity of mouth matching-feature extraction in complex environments. Moreover, generating the transition images between two matched video frames by image interpolation improves the smoothness and viewing quality of the marshalled video, yielding fluent and complete video content.
Owner:COMMUNICATION UNIVERSITY OF CHINA
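The Fisher classifier step can be sketched as a one-dimensional Fisher discriminant on 3-D HSV pixel features: project onto w = Sw^-1 (m_lip - m_skin) and threshold the projected score. The training pixels below are synthetic stand-ins for labeled lip/skin samples.

```python
# A hedged sketch of the Fisher-classifier step: project HSV pixel features
# onto the Fisher discriminant direction separating lip-color from skin-color
# samples, then threshold to classify pixels. Training data is synthetic.
import numpy as np

def fisher_direction(lip: np.ndarray, skin: np.ndarray) -> np.ndarray:
    """lip, skin: (n, 3) HSV samples. Returns w = Sw^-1 (m_lip - m_skin)."""
    m1, m2 = lip.mean(0), skin.mean(0)
    sw = np.cov(lip, rowvar=False) + np.cov(skin, rowvar=False)
    return np.linalg.solve(sw, m1 - m2)

# Dummy training pixels; a real system would sample them from labeled frames.
lip_px = np.random.normal([170, 120, 110], 10, (200, 3))
skin_px = np.random.normal([15, 80, 160], 10, (200, 3))
w = fisher_direction(lip_px, skin_px)
threshold = 0.5 * (lip_px @ w).mean() + 0.5 * (skin_px @ w).mean()
is_lip = lip_px @ w > threshold      # classification by projected score
print(is_lip.mean())                 # most lip samples land above threshold
```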

Lip language recognition method combining graph neural network and multi-feature fusion

The invention discloses a lip language recognition method combining a graph neural network and multi-feature fusion. The method comprises the following steps: extracting and constructing a face change sequence; marking face feature points and correcting the lip deflection angle; pre-processing through a trained lip semantic segmentation network; training a lip language recognition network on a graph structure of the single-frame feature-point relationships and a graph structure of the adjacent-frame feature-point relationships; and finally generating the lip language recognition result with the trained network. Lip features extracted by a CNN and features of the lip region feature points extracted by a GNN are fused and then input into a BiGRU for recognition, which addresses the difficulty of extracting temporal features and the sensitivity of lip feature extraction to external factors. The method effectively extracts both the static features of the lips and the dynamic features of lip change, and is characterized by a strong capability for extracting lip change features and high recognition accuracy.
Owner:HEBEI UNIV OF TECH
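The single-frame graph structure could be sketched as lip feature points forming a ring graph processed by one graph-convolution layer. The PyTorch sketch below uses a symmetric-normalized adjacency; the 20-landmark ring and feature sizes are assumptions.

```python
# A toy sketch of the single-frame graph step: lip feature points as graph
# nodes processed by one graph-convolution layer (normalized adjacency times
# node features times a weight matrix). Sizes and adjacency are assumptions.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):        # x: (nodes, in_dim), adj: (nodes, nodes)
        a = adj + torch.eye(adj.size(0))           # add self-loops
        d = a.sum(1).rsqrt()                       # D^-1/2 as a vector
        a_norm = d[:, None] * a * d[None, :]       # symmetric normalization
        return torch.relu(self.lin(a_norm @ x))

n = 20                                   # e.g. 20 lip landmarks per frame
x = torch.randn(n, 2)                    # node features: (x, y) coordinates
ring = torch.roll(torch.eye(n), 1, 0)    # link each landmark to its neighbor
adj = ring + ring.T                      # ring graph around the lip contour
out = GraphConv(2, 16)(x, adj)           # per-landmark embeddings for a BiGRU
print(out.shape)                         # torch.Size([20, 16])
```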

Intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction

Publication: CN103425987A (active)
The invention discloses an intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction, relating to the field of feature extraction and recognition control in lip recognition technology. The method first performs DT-CWT filtering on the lip image, and then applies a DCT to the lip feature vector extracted by the DT-CWT, so that the extracted lip features are concentrated in the large coefficients of the DCT; the resulting feature vector thus contains the maximum amount of lip information while its dimensionality is reduced. Because the DT-CWT has approximate translation invariance, the feature values of the same lip at different positions within the ROI differ only slightly after filtering, eliminating the recognition errors that positional offsets of the lip within the ROI would otherwise cause. The method greatly improves the lip recognition rate and the robustness of the lip recognition system.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
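The DCT stage that follows the DT-CWT filtering concentrates lip information in a few low-frequency coefficients, which is what reduces the dimensionality. The sketch below applies a 2-D DCT and keeps the low-frequency block; the DT-CWT output is replaced by a dummy array, and the block size is an assumption.

```python
# A sketch of the dimensionality-reduction step: after DT-CWT filtering, a
# 2-D DCT compacts the lip information into low-frequency coefficients, and
# keeping only the top-left block reduces the feature dimension. The DT-CWT
# stage is assumed already done; the input here is a stand-in array.
import numpy as np
from scipy.fftpack import dct

def dct_features(filtered_lip: np.ndarray, keep: int = 8) -> np.ndarray:
    """2-D DCT, then keep the keep x keep low-frequency block as the compact
    feature vector (zig-zag selection would be an alternative)."""
    coeffs = dct(dct(filtered_lip, axis=0, norm='ortho'), axis=1, norm='ortho')
    return coeffs[:keep, :keep].ravel()

feat = dct_features(np.random.rand(32, 48))   # stand-in DT-CWT magnitude map
print(feat.shape)                             # (64,) compact feature vector
```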

Lip language recognition method and system based on cross-modal attention enhancement

The invention discloses a lip language recognition method and system based on cross-modal attention enhancement. The method comprises the steps of extracting a lip image sequence and the lip motion information; obtaining the corresponding lip feature sequence and lip motion sequence through a pre-trained feature extractor; inputting the obtained feature sequences into a cross-modal attention network to obtain a lip enhancement feature sequence; and, through a multi-branch attention mechanism, establishing the temporal relevance within each intra-modal feature sequence and selectively attending to the relevant parts of the input at the output end. The method takes the relevance between time steps into account: optical flow is computed on adjacent frames to obtain motion information between the visual features, the lip visual features are fused with and enhanced by this motion information, the intra-modal context is fully utilized, and finally the correlation representation and selection of intra-modal features through the multi-branch attention mechanism improves lip reading recognition accuracy.
Owner:HUNAN UNIV
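The cross-modal attention step could be sketched as the lip feature sequence attending to the motion (optical-flow) feature sequence, with a residual connection producing the enhanced sequence. The dimensions in the PyTorch sketch below are illustrative assumptions.

```python
# A hedged sketch of the cross-modal attention step: lip appearance features
# attend to motion (optical-flow) features, producing the "lip enhancement
# feature sequence". Shapes and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class CrossModalEnhance(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, lip_seq, motion_seq):
        # Queries come from the lip stream; keys/values from the motion stream.
        enhanced, _ = self.attn(lip_seq, motion_seq, motion_seq)
        return self.norm(lip_seq + enhanced)     # residual fusion

frames = 30
lip_seq = torch.randn(1, frames, 256)     # pre-extracted lip feature sequence
motion_seq = torch.randn(1, frames, 256)  # optical-flow-derived motion features
out = CrossModalEnhance()(lip_seq, motion_seq)
print(out.shape)                          # torch.Size([1, 30, 256])
```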