
37 results about "deep CNN" patented technology

Method for intelligently diagnosing rotating machine fault feature based on deep CNN model

The invention discloses a method for intelligently diagnosing rotating machine fault features based on a deep CNN model. The method comprises: (1) acquiring rotating machine fault vibration signal data, segmenting the data and removing the trend item as preprocessing; (2) performing short-time Fourier time-frequency analysis on the signal data to obtain the time-frequency representation of each vibration signal, and displaying the time-frequency representation as a pseudo-color map; (3) reducing the image resolution by an interpolation method and superimposing the images to form training samples and test samples as inputs of the CNN; (4) constructing the deep CNN model comprising an input layer, two convolution layers, two pooling layers, a fully connected layer, a softmax classification layer and an output layer; and (5) feeding the training samples into the model for training, obtaining the convolution features, pooling features and neural network structural parameters, and diagnosing unknown fault signals with the constructed deep CNN. The method has better accuracy and stability than existing time-domain or frequency-domain methods.
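As a concrete illustration of the architecture named in step (4), the sketch below builds a comparable network in PyTorch; the pseudo-color map size, channel counts and number of fault classes are assumptions for illustration, not values taken from the patent.

```python
# Minimal sketch of a deep CNN with an input layer, two convolution layers,
# two pooling layers, a fully connected layer and a (log-)softmax classifier.
import torch
import torch.nn as nn

class FaultCNN(nn.Module):
    def __init__(self, num_classes: int = 10):  # assumed number of fault classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2),  # pseudo-color map -> 3 channels
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 inputs

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return torch.log_softmax(self.classifier(x), dim=1)

# Example: a batch of 8 down-sampled time-frequency images (64x64 pixels).
logits = FaultCNN()(torch.randn(8, 3, 64, 64))
```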
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS

Image small target detection method based on a combination of two-stage detection

Pending | CN109598290A | Fully excavated | Reduce the problem of false detection and missed detection of small targets | Character and pattern recognition | Pattern recognition | Network model
The invention discloses a small target detection method based on a combination of two-stage detection. The method includes: sending the original image into a first detector to detect a first-stage target B1; fusing the output features of a shallow CNN and the output features of a deep CNN to obtain M1', and selecting a corresponding feature map M2 from M1' by using B1; taking M2 as the input feature map and sending it to the RPN module and the classification-and-regression module of a second-stage detector for detection and positioning of a second-stage target; and adding the losses obtained from the two detection stages as the total loss of the whole network to obtain an end-to-end detection network model. According to the invention, a two-stage detection network is constructed: a large target is accurately detected first, then a small target is detected within the large-target area, and the detection frame of the small target is limited to the local area where it is most likely and most easily detected, namely the area where the large target is located. Complex background interference is thus effectively removed, the false detection probability is reduced, and the detection precision of small targets in the image is improved.
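The feature fusion and region selection steps could be prototyped as in the hedged PyTorch sketch below; the channel counts, feature-map sizes and the bilinear-upsampling choice are assumptions, not the patented implementation.

```python
# Upsample the deep CNN feature map to the shallow map's resolution, concatenate
# them (M1'), and crop the region covered by the first-stage box B1 to obtain M2.
import torch
import torch.nn.functional as F

def fuse_features(shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
    """shallow: (N, C1, H, W); deep: (N, C2, h, w) with h < H and w < W."""
    deep_up = F.interpolate(deep, size=shallow.shape[-2:], mode="bilinear",
                            align_corners=False)
    return torch.cat([shallow, deep_up], dim=1)  # fused map M1'

def crop_roi(fused: torch.Tensor, box: tuple) -> torch.Tensor:
    """Select the feature region M2 covered by B1 = (x1, y1, x2, y2) in feature-map coordinates."""
    x1, y1, x2, y2 = box
    return fused[..., y1:y2, x1:x2]

m1 = fuse_features(torch.randn(1, 256, 64, 64), torch.randn(1, 512, 16, 16))
m2 = crop_roi(m1, (10, 10, 42, 42))  # fed to the second-stage RPN and classifier
```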
Owner:SHANGHAI JIAO TONG UNIV

Integrated circuit defect image recognition and classification system based on fusion deep learning model

The invention discloses an integrated circuit defect image recognition and classification system based on a fused deep learning model, and provides a way of using a fusion model based on deep convolutional neural networks (CNNs) to carry out online automatic recognition and classification of wafer defect images, so as to detect changes in the number of various wafer defects in time. The core mechanism of the method is a defect image feature extraction approach built from two deep learning models combined through an ensemble learning scheme. The deep CNN fusion model constructs a Combined3 defect image classification model on the basis of the two frameworks SE_Inception_V4 and SE_Inception_ResNet_V2, and a sequential model-based optimization (SMBO) algorithm is used to perform hyper-parameter optimization on the fused deep CNN recognition model, improving recognition precision. The automation level is increased and the identification cost is reduced because the AI model replaces the engineer, greatly improving working efficiency. Based on real-time identification and classification results, engineers can compile defect statistics and search for causes in time, so that process parameters are adjusted and yield is improved.
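A minimal sketch of the fusion idea, averaging the softmax outputs of two backbone classifiers, is shown below; the tiny stand-in backbones and the assumed eight defect classes are placeholders for SE_Inception_V4 and SE_Inception_ResNet_V2, not the patented models.

```python
# Fuse two CNN classifiers by averaging their per-class probabilities.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, backbone_a: nn.Module, backbone_b: nn.Module):
        super().__init__()
        self.a, self.b = backbone_a, backbone_b

    def forward(self, x):
        pa = torch.softmax(self.a(x), dim=1)
        pb = torch.softmax(self.b(x), dim=1)
        return (pa + pb) / 2  # fused class probabilities

def make_stub(num_classes: int = 8) -> nn.Module:
    """Tiny placeholder backbone standing in for a real SE-Inception network."""
    return nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, num_classes))

model = FusionClassifier(make_stub(), make_stub())
probs = model(torch.randn(4, 3, 299, 299))  # batch of 4 wafer defect images
```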
Owner:上海众壹云计算科技有限公司

Human body identification method and device

Active | CN106778614A | Improve accuracy | Avoid being misidentified as a human body | Biometric pattern recognition | Color image | Human body
The invention provides a human body identification method and a human body identification device. The method comprises the steps of acquiring image information; processing the image information to obtain a depth image and a color image; carrying out deep CNN (Convolutional Neural Network) human body detection on the color image and determining a human body bounding box in the color image; judging whether a unique human body exists in the area of the depth image corresponding to the bounding box; if two or more human bodies exist in the bounding box area, separating the two or more human bodies; and determining the number of human bodies in the image information according to the number of human bodies in the bounding box area of the depth image. According to the method and the device, secondary identification of the human bodies detected in the color image is carried out on the basis of the depth image, so that overlapping human bodies are prevented from being misjudged as one human body; the depth image thus assists the human body identification and improves its accuracy.
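One way the depth-based secondary check could look is sketched below; the millimetre depth units, the gap threshold and the simple one-dimensional clustering are illustrative assumptions, not the patented procedure.

```python
# Within the color-image bounding box, group depth values to decide whether one
# person or several overlapping people are present.
import numpy as np

def count_people_in_box(depth: np.ndarray, box: tuple, gap_mm: float = 300.0) -> int:
    """depth: HxW depth map in millimetres; box: (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    values = depth[y1:y2, x1:x2].ravel()
    values = np.sort(values[values > 0])           # drop invalid (zero) depth pixels
    if values.size == 0:
        return 0
    # Split wherever consecutive depths jump by more than gap_mm: each group is
    # treated as one person standing at a distinct distance.
    splits = np.where(np.diff(values) > gap_mm)[0]
    return len(splits) + 1

depth_map = np.full((480, 640), 2000.0)
depth_map[100:300, 200:300] = 1200.0               # a second, closer person
print(count_people_in_box(depth_map, (150, 50, 350, 350)))  # -> 2
```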
Owner:INT INTELLIGENT MACHINES CO LTD

Systems and methods for end-to-end handwritten text recognition using neural networks

The present disclosure provides systems and methods for end-to-end handwritten text recognition using neural networks. Most existing hybrid architectures involve high memory consumption and a large number of computations to convert offline handwritten text into machine-readable text, with corresponding variations in conversion accuracy. The method combines a deep convolutional neural network (CNN) with an RNN (Recurrent Neural Network) based encoder unit and decoder unit to map a handwritten text image to the sequence of characters corresponding to the text present in the scanned handwritten text input image. The deep CNN is used to extract features from the handwritten text image, whereas the RNN based encoder unit and decoder unit are used to generate the converted text as a set of characters. The disclosed method requires less memory and fewer computations, with better conversion accuracy than the existing hybrid architectures.
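A hedged sketch of such a CNN-plus-RNN pipeline is given below; the layer sizes, the LSTM encoder and the column-wise sequence construction are illustrative assumptions rather than the disclosed network.

```python
# A small convolutional stack turns a text-line image into a feature sequence
# along the width axis; an LSTM encoder then consumes that sequence.
import torch
import torch.nn as nn

class ConvFeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )

    def forward(self, img):                        # img: (N, 1, H, W)
        f = self.net(img)                          # (N, 64, H/4, W/4)
        n, c, h, w = f.shape
        return f.permute(0, 3, 1, 2).reshape(n, w, c * h)  # one feature vector per column

cnn = ConvFeatureExtractor()
seq = cnn(torch.randn(2, 1, 32, 128))              # (2, 32, 512)
encoder = nn.LSTM(input_size=64 * 8, hidden_size=256, batch_first=True)
outputs, _ = encoder(seq)                          # fed to the decoder that emits characters
```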
Owner:TATA CONSULTANCY SERVICES LTD

An embedded face recognition system based on an ARM microprocessor and deep learning

Pending | CN109948568A | Improve practicality | Overcome the phenomenon of low or even unrecognizable recognition accuracy | Character and pattern recognition | Face detection | Display device
The invention relates to an embedded face recognition system based on an ARM microprocessor and deep learning. The system comprises an upper computer, used for transplanting a driver program and a pre-trained face recognition program to a control board, and the control board, used for running the face recognition program and displaying the recognition result on the display. The face recognition program comprises the following steps: pre-training the network model, namely establishing a FaceNet face recognition neural network and training it; acquiring a face image, namely starting an image acquisition device through the control board to capture a face photo; preprocessing the face photo, namely rescaling the captured photo to form an image pyramid; detecting the face, namely sending the preprocessed picture into a pre-trained deep CNN face detection neural network to obtain a picture of the face region; and matching the face, namely sending the obtained picture of the face region into the pre-trained FaceNet face recognition neural network to obtain a matching result.
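The image-pyramid preprocessing step might look like the sketch below; the scale factor, minimum size, use of OpenCV and the synthetic stand-in image are assumptions for illustration.

```python
# Repeatedly rescale the captured photo by a fixed factor until it becomes too
# small for the face detector.
import cv2
import numpy as np

def build_pyramid(image: np.ndarray, scale: float = 0.709, min_size: int = 20):
    """Return a list of progressively down-scaled copies of `image`."""
    pyramid = [image]
    while min(pyramid[-1].shape[:2]) * scale >= min_size:
        h, w = pyramid[-1].shape[:2]
        pyramid.append(cv2.resize(pyramid[-1], (int(w * scale), int(h * scale))))
    return pyramid

# A synthetic 480x640 frame stands in for the camera capture; each pyramid level
# would go to the deep CNN face detector, and detected crops to FaceNet matching.
levels = build_pyramid(np.zeros((480, 640, 3), dtype=np.uint8))
print(len(levels))
```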
Owner:DONGHUA UNIV

Tree species identification method based on multi-source remote sensing of unmanned aerial vehicle

The invention discloses a tree species identification method based on multi-source remote sensing from an unmanned aerial vehicle. The method comprises the steps of: obtaining a visible light image and a laser radar point cloud, and preprocessing the laser radar point cloud and the visible light image; detecting tree crowns in the canopy height model of the laser radar point cloud through a local maximum method, and segmenting the crowns through a watershed method to obtain segmented crown boundaries; obtaining a crown data set and a sample data set by taking the segmented crown boundary as the outer boundary and taking the visible light orthoimage brightness values and the laser radar canopy height model (CHM) as features; and carrying out transfer learning and ensemble learning on the crown data set and the sample data set through a convolutional neural network, and outputting the tree species identification result. The unmanned aerial vehicle visible light remote sensing image and the laser radar point cloud are applied together, a deep CNN model is adopted for transfer learning, and the transfer learning and ensemble learning outputs are combined for tree species identification, so the accuracy of unmanned aerial vehicle remote sensing tree species identification is improved.
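A possible shape of the transfer-learning step is sketched below; the ImageNet-pretrained ResNet-18 backbone, the frozen feature layers and the assumed number of tree species are illustrative choices, not the patented configuration.

```python
# Start from a pretrained CNN and retrain only a new classification head on the
# crown data set.
import torch
import torch.nn as nn
from torchvision import models

num_species = 6                                    # assumed number of tree species
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():                    # freeze the pretrained feature layers
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, num_species)  # new trainable head

# Crown samples (orthoimage brightness + CHM rendered to 3 channels) would be fed
# as (N, 3, 224, 224) tensors during fine-tuning.
logits = backbone(torch.randn(4, 3, 224, 224))
```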
Owner:RES INST OF FOREST RESOURCE INFORMATION TECHN CHINESE ACADEMY OF FORESTRY

Ground-based meteorological cloud picture classification method based on cross-validation deep CNN feature integration

The invention belongs to the technical field of ground-based meteorological cloud picture classification, and particularly relates to a ground-based meteorological cloud picture classification method based on cross-validation deep CNN feature integration. According to the method, a convolutional neural network model is first used to extract deep CNN features of a ground-based meteorological cloud image, the CNN features are then resampled multiple times based on cross validation, and finally the cloud shape of the ground-based cloud image is identified by a voting strategy over the multiple cross-validation resampling results. The method classifies ground-based meteorological cloud images automatically and realizes an adaptive end-to-end cloud recognition algorithm that works directly on the original cloud images without any image preprocessing. The proposed algorithm relates to the fields of computer vision, machine learning, image recognition and the like. It overcomes the non-robustness of single-CNN-feature cloud classification results and the high computational overhead of integrating multiple deep convolutional neural networks, while ensuring high classification accuracy and noise stability.
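The cross-validation resampling and voting idea could be prototyped as follows; the use of scikit-learn, the logistic-regression classifier on top of the CNN features and the five folds are assumptions for illustration.

```python
# Train a light classifier on deep CNN features over several folds and let the
# fold models vote on each test image.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

def cv_vote_predict(features, labels, test_features, n_splits: int = 5):
    """features: (N, D) deep CNN features; labels: (N,); returns voted predictions."""
    votes = []
    for train_idx, _ in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(features):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(features[train_idx], labels[train_idx])
        votes.append(clf.predict(test_features))
    votes = np.stack(votes)                        # (n_splits, M)
    # Majority vote across the fold models for every test image.
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

preds = cv_vote_predict(np.random.rand(100, 64), np.random.randint(0, 4, 100),
                        np.random.rand(10, 64))
```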
Owner:SHANXI UNIV

Medicinal plant leaf disease image recognition method based on deep learning

The invention discloses a medicinal plant leaf disease image recognition method based on deep learning, and relates to the technical field of medicinal plant leaf disease prevention. The method comprises the steps of: collecting a plurality of medicinal plant leaf disease images; carrying out enhancement processing on the medicinal plant leaf disease images; uniformly resizing each enhanced medicinal plant leaf disease image to 299 × 299; training a deep CNN model, wherein the deep CNN model comprises a convolution pooling network, an Inception-I network, an average pooling network, a Dropout layer and a Softmax layer connected in series, the last two convolution layers of the serially connected convolution pooling network are depthwise separable convolution layers, and the Inception-I network comprises a random pooling layer; and identifying the resized leaf disease images through the deep CNN model, the recognition result being the type of disease on each medicinal plant leaf, and classifying the disease of each leaf based on the recognition result. The recognition method can effectively assist planters in diagnosing diseases and improve diagnosis efficiency.
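A minimal sketch of a depthwise separable convolution block, the kind of layer named above for the last two convolution layers, is shown below; the channel counts and feature-map size are assumptions.

```python
# A depthwise 3x3 convolution followed by a pointwise 1x1 convolution.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# A 299x299 leaf image, after earlier layers, might reach this block as (N, 64, 37, 37).
block = DepthwiseSeparableConv(64, 128)
out = block(torch.randn(2, 64, 37, 37))
```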
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA

Tracking method based on dual-model adaptive kernel correlation filtering

The invention provides a tracking method based on dual-model adaptive kernel correlation filtering, which comprises the following steps: initializing the position of the estimated target, calculating a Gaussian label, and establishing a main feature model and an auxiliary feature model; extracting HOG features as the features of the main feature model, extracting deep convolution features as the features of the auxiliary feature model, and setting the initialization parameters; calculating the response layer of the estimated target by using the main feature model, and obtaining the optimal position and optimal scale of the estimated target from the response layer through a Newton iteration method; if the maximum confidence response value max of the response layer corresponding to the optimal scale is greater than an empirical threshold u, determining the estimated target position and updating the main feature model; and if max is smaller than or equal to the empirical threshold u, stopping the update of the main feature model, expanding the search area, extracting the CNN features of the target pre-selected area, performing dimensionality reduction on the deep CNN features by using PCA, estimating the new target position with the dimensionality-reduced CNN features, and updating the auxiliary feature model, until the video sequence ends.
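The PCA dimensionality-reduction step applied to the deep CNN features could look like the sketch below; the feature dimensionality, the number of candidate patches and the 128 retained components are assumptions.

```python
# Project high-dimensional deep CNN features of the search region onto a small
# number of principal components before the auxiliary model re-estimates the target.
import numpy as np
from sklearn.decomposition import PCA

cnn_features = np.random.rand(500, 4096)           # 500 candidate patches x 4096-D features
pca = PCA(n_components=128)
reduced = pca.fit_transform(cnn_features)          # (500, 128), used by the auxiliary model
print(reduced.shape, pca.explained_variance_ratio_.sum())
```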
Owner:NORTHEASTERN UNIV

LDW false and missed alarm test method and system based on a convolutional neural network

The invention discloses an LDW false and missed alarm test method based on a convolutional neural network (CNN). The method comprises the steps of: S1, installing a camera; S2, setting a maximum lateral distance L and evenly discretizing it into n categories; S3, acquiring a real-time image A, inputting it into a deep CNN model, and calculating the actual distance di to the lane line; S4, determining whether the LDW system produces false or missed alarms; and S5, obtaining the misoperation rate of the LDW system. The test system comprises an image acquisition device, an onboard data acquisition mechanism, an analyzer and an operation processor; the image acquisition device is connected to the analyzer, and the operation processor is connected to the analyzer and the onboard data acquisition mechanism. The method is easy to operate, fast and precise in recognition, and applicable to lanes under various road conditions. In its simplest form the test system only needs the image acquisition device, the onboard data acquisition mechanism, the analyzer and the operation processor, and can fully automatically identify deviations without an extra lane line mark ruler.
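The even discretization of the maximum lateral distance L into n categories can be illustrated as follows; the example values of L and n are assumptions.

```python
# Split the maximum lateral distance into n equal bins so the deep CNN can treat
# lane-line distance estimation as an n-class classification problem.
def distance_to_class(d: float, max_distance: float = 2.0, n: int = 20) -> int:
    """Map a lateral distance d (metres) to one of n evenly spaced classes."""
    d = min(max(d, 0.0), max_distance)
    return min(int(d / (max_distance / n)), n - 1)

def class_to_distance(c: int, max_distance: float = 2.0, n: int = 20) -> float:
    """Recover the bin-centre distance from a predicted class index."""
    return (c + 0.5) * (max_distance / n)

print(distance_to_class(0.37))   # -> 3
print(class_to_distance(3))      # bin centre, approximately 0.35 m
```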
Owner:CHONGQING UNIV

Visual tracking method based on hybrid hierarchical filtering and complementary characteristics

The invention discloses a visual tracking method based on hybrid hierarchical filtering and complementary characteristics. The method comprises the following steps: establishing a three-stage hybrid hierarchical filtering target tracking framework, and estimating the tracking result through a coarse-to-fine search strategy using the confidence output of each stage. Specifically: establishing a first-stage observation model as observation 1 by using deep CNN features, so as to separate the target from the background and roughly position the target; establishing a second-stage observation model as observation 2 by using HOG features, and adjusting the target position; and establishing a third-stage observation model as observation 3 by using SIFT features, and finally positioning the target. The method improves tracking precision and robustness and achieves excellent tracking performance under rapid target movement, background clutter and other challenging conditions.
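The coarse-to-fine cascade can be summarized schematically as below; the placeholder refinement functions merely stand in for the three observation models and contain no real CNN, HOG or SIFT logic.

```python
# Query three observation models in order, each refining the position estimate
# produced by the previous stage.
from typing import Callable, List, Tuple

Position = Tuple[float, float]

def coarse_to_fine(init: Position,
                   stages: List[Callable[[Position], Position]]) -> Position:
    """stages: callables mapping a position estimate to a refined estimate."""
    pos = init
    for refine in stages:
        pos = refine(pos)            # stage 1: deep CNN, stage 2: HOG, stage 3: SIFT
    return pos

# Placeholder refiners standing in for the three observation models.
cnn_stage  = lambda p: (p[0] + 5.0, p[1] - 3.0)
hog_stage  = lambda p: (p[0] + 1.0, p[1] + 0.5)
sift_stage = lambda p: (p[0] - 0.2, p[1] + 0.1)
print(coarse_to_fine((100.0, 80.0), [cnn_stage, hog_stage, sift_stage]))
```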
Owner:中国人民解放军陆军炮兵防空兵学院