
397 results about "Inpainting" patented technology

Inpainting is the process of reconstructing lost or deteriorated parts of images and videos. In the museum world, in the case of a valuable painting, this task would be carried out by a skilled art conservator or art restorer. In the digital world, inpainting (also known as image interpolation or video interpolation) refers to the application of sophisticated algorithms to replace lost or corrupted parts of the image data (mainly small regions) or to remove small defects.

Image repair interface for providing virtual viewpoints

A system and method for repairing an object in image data of an event. An image of the event is obtained from a camera, and an object is detected in the image. For example, the event may be a sporting event in which the object is a participant. Moreover, a portion of the object is occluded in a viewpoint of the camera. For instance, a limb of the participant may be occluded by another participant. The object is repaired by providing a substitute for the occluded portion. A user may perform the repair via a user interface by selecting part of an image from an image library and positioning the selected portion relative to the object. A textured 3D model of the event is combined with data from the repaired object to depict a realistic virtual viewpoint of the event which differs from the viewpoint of the camera.
Owner:SPORTSMEDIA TECH CORP
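
For illustration, the patch-substitution step in the entry above can be approximated with a Poisson-blending composite. The sketch below assumes OpenCV; the frame, library patch, mask, and paste position are all hypothetical stand-ins for the camera image, the image-library selection, and the user-chosen placement.

# Sketch of the occlusion-repair step: paste a substitute patch (e.g. an
# unoccluded limb) from an image library over the occluded region, then blend
# it into the camera image. All data below are toy placeholders.
import cv2
import numpy as np

def repair_occlusion(frame, library_patch, patch_mask, center_xy):
    """Blend a library patch into the frame at the user-chosen position.

    frame:         HxWx3 camera image containing the occluded object
    library_patch: hxwx3 image region selected from the image library
    patch_mask:    hxw uint8 mask (255 where the patch should be used)
    center_xy:     (x, y) paste position chosen via the user interface
    """
    # Poisson blending hides the seam between the substitute and the original.
    return cv2.seamlessClone(library_patch, frame, patch_mask,
                             center_xy, cv2.NORMAL_CLONE)

frame = np.full((480, 640, 3), 120, np.uint8)     # stand-in camera image
patch = np.full((60, 40, 3), 200, np.uint8)       # stand-in library patch
mask = np.full((60, 40), 255, np.uint8)           # use the whole patch
repaired = repair_occlusion(frame, patch, mask, (320, 240))
print(repaired.shape)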

Image enabled reject repair for check processing capture

A method and apparatus for processing a plurality of financial documents, comprising a document processor, wherein, for each financial document, the document processor captures data encoded on the financial document and an image of the financial document during a prime pass, and assigns a prime pass sequence number to each financial document. The apparatus includes a computer database in which the prime pass data and image are stored in association with the prime pass sequence number for the financial document. The document processor is adapted to determine whether the financial document should be rejected because the data and document image need to be repaired or only the data needs to be repaired. If the data and image need to be repaired, the document processor, or a desktop scanner/reader, recaptures the data and image, assigns a recapture sequence number to the financial document, and the recaptured data and image are stored in the computer database in association with the recapture sequence number. An image repair application is adapted to permit an operator to locate a prime pass image that matches the recaptured image, and to repair the document image by visually comparing the recaptured image with the prime pass image. The repaired document image is then stored in the computer database in association with the corresponding prime pass sequence number.
Owner:WELLS FARGO BANK NA
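
A minimal sketch of the sequence-number bookkeeping described above, using an in-memory dictionary as a stand-in for the computer database; the code-line values and image placeholders are invented, and the visual comparison itself is left to the operator.

# Prime-pass captures are stored under a prime sequence number, rejected items
# are recaptured under a recapture sequence number, and the repaired image is
# filed back under the matching prime-pass number. All values are invented.
from dataclasses import dataclass, field

@dataclass
class CheckDatabase:
    prime: dict = field(default_factory=dict)        # prime_seq -> (codeline, image)
    recaptured: dict = field(default_factory=dict)   # recap_seq -> (codeline, image)

    def store_prime(self, prime_seq, codeline, image):
        self.prime[prime_seq] = (codeline, image)

    def store_recapture(self, recap_seq, codeline, image):
        self.recaptured[recap_seq] = (codeline, image)

    def match_prime(self, recap_seq):
        """Locate the prime-pass record whose code line matches a recapture."""
        codeline, _ = self.recaptured[recap_seq]
        for prime_seq, (prime_codeline, prime_image) in self.prime.items():
            if prime_codeline == codeline:
                return prime_seq, prime_image
        return None, None

    def store_repaired(self, prime_seq, repaired_image):
        codeline, _ = self.prime[prime_seq]
        self.prime[prime_seq] = (codeline, repaired_image)

db = CheckDatabase()
db.store_prime(1001, "123456789 0042", "prime_image_bytes")
db.store_recapture(9001, "123456789 0042", "recaptured_image_bytes")
seq, img = db.match_prime(9001)       # operator visually compares img with the recapture
db.store_repaired(seq, "repaired_image_bytes")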

A hyperspectral image inpainting method based on E-3DTV regularity

A hyperspectral image inpainting method based on E-3DTV regularity is provided. The method comprises the steps of expanding the original noisy three-dimensional hyperspectral data into a matrix along the spectral dimension, and initializing the matrix representations of the noise term and the hyperspectral data to be repaired, together with the other model variables and parameters under the ADMM framework; performing a differential operation on the hyperspectral data to be repaired along the horizontal, vertical and spectral dimensions to obtain three gradient maps in different directions, which are expanded into matrices along the spectral dimension; decomposing the gradient map matrices in the three directions by low-rank UV decomposition, and constraining the basis matrices of the gradient maps by sparsity to obtain the E-3DTV regularity; adding the E-3DTV regularity to the data to be repaired, writing out the optimization model, and solving it iteratively under the ADMM framework; and obtaining the restored image and the noise when the iteration stabilizes. The invention performs denoising and compressed reconstruction on the hyperspectral image data, and enhances the traditional 3DTV so that both the structural correlation and the sparsity of the gradient maps are taken into account, thereby overcoming the defect that the traditional 3DTV can only depict the sparsity of the gradient maps while ignoring the correlation.
Owner:XI AN JIAOTONG UNIV
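
The sketch below illustrates the E-3DTV building blocks named above on toy data: finite differences along the two spatial axes and the spectral axis, unfolding of each gradient cube along the spectral dimension, and a single low-rank UV factorization with a soft-threshold (sparsity) step. The full ADMM iteration of the patent is not reproduced, and the rank and threshold values are arbitrary.

# Gradient maps of a hyperspectral cube, unfolded along the spectral dimension,
# followed by one low-rank UV step with a sparse factor. Toy data only.
import numpy as np

def unfold_spectral(cube):                    # (H, W, B) -> (H*W, B)
    H, W, B = cube.shape
    return cube.reshape(H * W, B)

def gradient_matrices(cube):
    gx = np.diff(cube, axis=0, append=cube[-1:, :, :])   # vertical difference
    gy = np.diff(cube, axis=1, append=cube[:, -1:, :])   # horizontal difference
    gb = np.diff(cube, axis=2, append=cube[:, :, -1:])   # spectral difference
    return [unfold_spectral(g) for g in (gx, gy, gb)]

def low_rank_sparse_step(G, rank=3, tau=0.01):
    """One UV factorization of a gradient matrix with a soft-thresholded factor."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]                          # low-rank factor U
    V_r = Vt[:rank, :]
    V_r = np.sign(V_r) * np.maximum(np.abs(V_r) - tau, 0)  # sparsity via soft threshold
    return U_r @ V_r

cube = np.random.rand(32, 32, 20)             # toy hyperspectral data
approx = [low_rank_sparse_step(G) for G in gradient_matrices(cube)]
print([a.shape for a in approx])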

Style-controllable image text real-time translation and conversion method

Pending · CN111723585A · Advantages: solves the problem of uncontrollable text style; rich morphological information · Topics: natural language translation, semantic analysis, feature extraction, computer graphics (images)
The invention discloses a style-controllable image text real-time translation and conversion method. The method comprises the following steps: taking a scene image as input; performing feature extraction by using a multi-layer CNN network, and detecting the position and form information of the image text; erasing the text pixels based on the text positioning box to obtain a background image and a mask, and carrying out background image restoration by using a coarse restoration network and a fine restoration network based on a codec structure; performing form correction and style removal on the image text to obtain a common-font image text; recognizing the image text by using a CRNN model, correcting it by combining text semantics, and translating or converting it according to requirements; performing stylization processing on the translated text by learning the artistic style of the original text; and outputting a scene image with a controllable text conversion style. According to the method, more valuable information can be analyzed from the scene image, and the amount of information preserved during image text translation and conversion is markedly increased.
Owner:CHINA UNIV OF PETROLEUM (EAST CHINA)
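
A minimal sketch of the erase-and-repair stage described above. Classical cv2.inpaint stands in for the coarse/fine restoration networks, the text box coordinates are hypothetical, and recognition, translation and re-styling are only indicated in comments.

# Mask the detected text boxes and repair the background behind them.
import cv2
import numpy as np

def erase_text_and_repair(scene, text_boxes):
    """scene: HxWx3 uint8 image; text_boxes: list of (x, y, w, h) boxes."""
    mask = np.zeros(scene.shape[:2], np.uint8)
    for x, y, w, h in text_boxes:
        mask[y:y + h, x:x + w] = 255                 # text pixels to remove
    background = cv2.inpaint(scene, mask, 5, cv2.INPAINT_TELEA)
    return background, mask

scene = np.random.randint(0, 255, (240, 320, 3), np.uint8)   # stand-in scene image
background, mask = erase_text_and_repair(scene, [(40, 60, 120, 30)])
# Downstream (not shown): rectify the cropped text, recognize it with a CRNN,
# translate it, re-style it, and render it back onto `background`.
print(background.shape, int(mask.sum() / 255))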

Visual tracking and positioning method based on dense point cloud and composite view

Active · CN110853075A · Advantages: solves association problems; realizes initial positioning · Topics: image enhancement, image analysis, pattern recognition, point cloud
The invention provides a visual tracking and positioning method based on dense point cloud and a composite view. The method comprises the steps of: performing three-dimensional scanning of a real scene, obtaining a color key frame image and a corresponding depth image, carrying out image restoration, and carrying out image coding of the key frame image; performing image coding on a current frame image acquired by the camera in real time, and selecting the composite image closest to the current frame image as the reference frame image of the current image; obtaining stable matching feature point sets on the two images, and processing them to obtain six-degree-of-freedom pose information of the current frame camera relative to the three-dimensional scanning point cloud coordinate system; and performing a judgment using an optical flow algorithm: if the requirement cannot be met, updating the current frame image to the next frame image acquired by the camera and performing re-matching. According to the invention, the association problem between the three-dimensional point cloud obtained by the laser radar and the heterogeneous visual image can be solved, and rapid initialization positioning for visual navigation is achieved.
Owner:BEIJING INSTITUTE OF TECHNOLOGY
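
The pose step described above can be sketched as ORB matching against the selected reference keyframe followed by PnP. The sketch assumes the reference keyframe comes with one 3-D point per keypoint (aligned with the ORB detections) and a known camera matrix K; both are stand-ins for the scanned point-cloud data of the patent.

# Match ORB features between current frame and reference keyframe, then
# recover the six-degree-of-freedom pose from the keyframe's 3-D points.
import cv2
import numpy as np

def estimate_pose(cur_gray, ref_gray, ref_points_3d, K):
    """ref_points_3d: hypothetical Nx3 array, one scanned 3-D point per
    reference keypoint (aligned with the ORB detections on ref_gray).
    K: 3x3 camera intrinsic matrix."""
    orb = cv2.ORB_create(1000)
    kp_ref, des_ref = orb.detectAndCompute(ref_gray, None)
    kp_cur, des_cur = orb.detectAndCompute(cur_gray, None)
    if des_ref is None or des_cur is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_cur), key=lambda m: m.distance)[:100]
    if len(matches) < 6:
        return None
    obj_pts = np.float32([ref_points_3d[m.queryIdx] for m in matches])
    img_pts = np.float32([kp_cur[m.trainIdx].pt for m in matches])
    ok, rvec, tvec, _ = cv2.solvePnPRansac(obj_pts, img_pts, K.astype(np.float64), None)
    return (rvec, tvec) if ok else None

# usage (not run here): estimate_pose(current_gray, keyframe_gray, keyframe_points_3d, K)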

Visual loopback detection method based on semantic segmentation and image restoration in dynamic scene

The invention discloses a visual loopback detection method based on semantic segmentation and image restoration in a dynamic scene. The visual loopback detection method comprises the following steps: 1) pre-training an ORB feature offline dictionary on a historical image library; 2) acquiring a current RGB image as the current frame, and segmenting out the dynamic scene area of the image by using a DANet semantic segmentation network; 3) carrying out image restoration on the image covered by the mask by utilizing an image restoration network; 4) taking all the historical database images as key frames, and performing loopback detection judgment on the current frame image against all the key frame images one by one; 5) judging whether a loop is formed according to the similarity and epipolar geometry of the bag-of-words vectors of the two frames of images; and 6) making the final judgment. The visual loopback detection method can be used for loopback detection in visual SLAM in a dynamic operation environment, and solves the problems that feature matching errors are caused by the presence of dynamic targets such as operators, vehicles and inspection robots in a scene, and that loopback cannot be correctly detected due to too few feature points caused by segmentation of a dynamic region.
Owner:SOUTHEAST UNIV
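
A rough sketch of steps 2) to 5) above: repair the region flagged by the segmentation mask, describe each frame with a bag-of-visual-words histogram over ORB descriptors, and compare the histograms. The random vocabulary stands in for the offline dictionary pre-trained on the historical image library, and the epipolar-geometry check is omitted.

# Inpaint the dynamic region, build ORB bag-of-words vectors, compare frames.
import cv2
import numpy as np

def remove_dynamic_region(frame_bgr, dynamic_mask):
    """Repair the pixels covered by the dynamic-object mask (uint8, 255 = dynamic)."""
    return cv2.inpaint(frame_bgr, dynamic_mask, 7, cv2.INPAINT_NS)

def bow_histogram(gray, vocabulary):
    """Bag-of-visual-words histogram of ORB descriptors (vocabulary: K x 32 uint8)."""
    orb = cv2.ORB_create(500)
    _, des = orb.detectAndCompute(gray, None)
    hist = np.zeros(len(vocabulary), np.float32)
    if des is None:
        return hist
    vocab_bits = np.unpackbits(vocabulary, axis=1)
    for d in des:
        dists = np.count_nonzero(vocab_bits != np.unpackbits(d), axis=1)
        hist[np.argmin(dists)] += 1                # nearest visual word (Hamming distance)
    return hist / max(hist.sum(), 1.0)

def loop_score(hist_a, hist_b):
    """Cosine similarity between two bag-of-words vectors."""
    denom = np.linalg.norm(hist_a) * np.linalg.norm(hist_b)
    return float(hist_a @ hist_b / denom) if denom > 0 else 0.0

vocabulary = np.random.randint(0, 256, (200, 32), np.uint8)   # stands in for the offline dictionary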

Three-dimensional target detection method and system based on image restoration

Pending · CN111079545A · Advantages: enhanced geometry; increased shape integrity · Topics: image enhancement, image analysis, pattern recognition, point cloud
The invention relates to a three-dimensional target detection method and system based on image restoration. The method comprises the following steps: obtaining an RGB image and radar point cloud of a three-dimensional target; generating a two-dimensional target detection box on the RGB image according to a two-dimensional target detection algorithm; for a picture in which the target is occluded, carrying out instance segmentation by adopting an instance segmentation algorithm to obtain a mask at the occluded position of the target, and then calculating a complete mask of the target by a morphological closing operation; converting the radar point cloud into a depth map through the camera matrix, performing image restoration on the occluded part of the target on the depth map, and extracting depth information of the target in depth-map form according to the complete mask of the target after the restoration is completed; converting the depth information in depth-map form of the target into a restored point cloud; and inputting the repaired point cloud into a three-dimensional target detection network for three-dimensional target detection. Compared with the prior art, the three-dimensional target detection method reduces offset and improves precision in three-dimensional target detection.
Owner:SHANGHAI UNIV OF ENG SCI
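
The depth-map stage described above can be sketched as follows: project camera-frame points into the image with the camera matrix K, inpaint the occluded depth, and back-project the completed depth inside the target mask to a repaired point cloud. The intrinsics and masks are assumed inputs, and cv2.inpaint stands in for whatever restoration model the patent uses.

# Point cloud -> depth map -> inpaint occlusion -> repaired point cloud.
import cv2
import numpy as np

def points_to_depth(points_cam, K, shape):
    """Project Nx3 camera-frame points (z > 0) into a HxW float32 depth map."""
    depth = np.zeros(shape, np.float32)
    uvw = (K @ points_cam.T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (u >= 0) & (u < shape[1]) & (v >= 0) & (v < shape[0])
    depth[v[ok], u[ok]] = points_cam[ok, 2]
    return depth

def repair_and_backproject(depth, occluded_mask, target_mask, K):
    """Inpaint the occluded depth, then lift the target's depth back to 3-D points.

    occluded_mask / target_mask: HxW uint8 masks (255 = occluded / target pixel).
    """
    repaired = cv2.inpaint(depth, occluded_mask, 5, cv2.INPAINT_TELEA)
    v, u = np.nonzero(target_mask)
    z = repaired[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)            # repaired point cloud, Nx3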

Image restoring method and device

The invention discloses an image restoring method and device. The method comprises the following steps: step A, obtaining a first layer region to be restored of the damaged image region from inside to outside, and reading the structural information and color of the known image region adjacent to the first layer region to be restored; step B, carrying out gradient analysis and color estimation on the first layer region to be restored according to the structural information and color of the known image region to obtain the gradient information and color of the first layer region to be restored; step C, taking the gradient information and color of the first layer region to be restored as initial values and extending them into the first layer region to be restored; step D, carrying out texture synthesis restoration on the first layer region to be restored according to the initial values to obtain a first restoring result; and step E, taking the first restoring result as the known image region and cyclically executing steps A to D until the damaged image region is restored. With the image restoring method and device disclosed by the invention, the structural region can be accurately positioned, block matching accuracy is improved, and the introduction and reproduction of error information are reduced.
Owner:FOUNDER INTERNATIONAL CO LTD +1
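
A deliberately simplified sketch of the layer-by-layer scheme described above: the damaged region is filled one boundary ring at a time, each ring being initialized from the colors of adjacent known pixels. The patent's gradient analysis and block-matching texture synthesis are replaced here by a plain neighborhood average, so this only illustrates the looping structure of steps A to E.

# Fill a damaged region ring by ring from the surrounding known pixels.
import cv2
import numpy as np

def fill_layer_by_layer(image, damaged_mask):
    """image: HxWx3 float array; damaged_mask: HxW bool array (True = missing)."""
    img = image.copy()
    mask = damaged_mask.copy()
    kernel = np.ones((3, 3), np.uint8)
    while mask.any():
        known = ~mask
        # current layer: missing pixels that touch at least one known pixel
        ring = mask & (cv2.dilate(known.astype(np.uint8), kernel) > 0)
        if not ring.any():
            break
        vs, us = np.nonzero(ring)
        for v, u in zip(vs, us):
            nb = img[max(v - 1, 0):v + 2, max(u - 1, 0):u + 2]
            nb_known = known[max(v - 1, 0):v + 2, max(u - 1, 0):u + 2]
            img[v, u] = nb[nb_known].mean(axis=0)   # color estimate from known neighbors
        mask[ring] = False                          # the filled layer becomes known
    return img

rng = np.random.default_rng(1)
img = rng.random((64, 64, 3))
hole = np.zeros((64, 64), bool); hole[20:40, 25:45] = True
restored = fill_layer_by_layer(img, hole)
print(bool(np.isfinite(restored).all()))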

Image restoration method based on edge generation

The invention provides an image restoration method based on edge generation, which can effectively solve the problems of a fixed restoration area and blurred generated images in image restoration. The method comprises the steps that a defect image is generated and the edge contour of the defect image is extracted; an edge generation network and a content generation network are constructed, wherein the content generation network adopts a U-Net structure; in the training stage, the defect image and the extracted edge contour are input to train the edge generation network, and the image edge features generated by the trained edge generation network, the texture information of the defect image extracted by the trained texture generation network, and the defect image are input to train the content generation network; and in the restoration stage, the edge features of the to-be-restored image generated by the edge generation network, the texture information of the to-be-restored image extracted by the texture generation network, and the to-be-restored image are input into the trained content generation network, and the original appearance of the image is thereby restored. The invention relates to the fields of artificial intelligence and image processing.
Owner:UNIV OF SCI & TECH BEIJING +1
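
A skeletal PyTorch rendering of the two-stage design described above: a small edge-generation network followed by a U-Net-style content network that consumes the defect image, the generated edge map, and a texture map. The channel sizes, the texture input, and the network depths are illustrative guesses; losses and training are omitted.

# Edge-generation network + U-Net-style content network (toy sizes).
import torch
import torch.nn as nn

class EdgeGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),       # defect image + edge contour
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())    # completed edge map

    def forward(self, defect_rgb, edge):
        return self.net(torch.cat([defect_rgb, edge], dim=1))

class ContentUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.down1 = nn.Sequential(nn.Conv2d(5, 32, 3, padding=1), nn.ReLU())
        self.down2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.Sequential(nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU())
        self.out = nn.Conv2d(64, 3, 3, padding=1)            # skip connection: 32 + 32 channels

    def forward(self, defect_rgb, edge_map, texture):
        x1 = self.down1(torch.cat([defect_rgb, edge_map, texture], dim=1))
        x2 = self.up(self.down2(x1))
        return torch.sigmoid(self.out(torch.cat([x1, x2], dim=1)))

defect = torch.rand(1, 3, 64, 64)
edge = torch.rand(1, 1, 64, 64)
texture = torch.rand(1, 1, 64, 64)
edge_map = EdgeGenerator()(defect, edge)
restored = ContentUNet()(defect, edge_map, texture)
print(restored.shape)                                        # torch.Size([1, 3, 64, 64])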

Construction site worker dressing detection method, system and device and storage medium

Inactive · CN111383429A · Advantages: keep abreast of security threats; raise the level of safety management · Topics: image enhancement, image analysis, site monitoring, mobile device
The invention belongs to the technical field of construction site monitoring and protection, and discloses a construction site worker dressing detection method, system and device, and a storage medium. The method comprises the steps of: receiving a real-time scene image collected at a construction site, carrying out quality evaluation of the collected real-time scene image, carrying out image restoration and enhancement of any real-time scene image with unqualified quality evaluation or captured at night, and obtaining a real-time scene image with qualified quality evaluation; performing real-time detection of construction workers and their dress on the quality-restored real-time scene images of the construction site, and marking the positions of the workers on the real-time scene images; and transmitting the detected real-time scene image to a mobile device and carrying out alarm processing. The safety management level for construction personnel is raised: once a construction worker is detected, the worker's dress is automatically recognized, the safety threats faced by the worker are analyzed, real-time intelligent analysis is carried out, a WeChat notification is pushed to the operation and maintenance manager, and safety accidents can be greatly reduced.
Owner:XI AN YONGSHENGDA ELECTRONICS TECH CO LTD
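
The quality-gate and enhancement step described above might look like the sketch below: score each frame with a simple sharpness/brightness test and, if it fails (for example a night shot), enhance it before the worker and dress detector runs. The thresholds are arbitrary and the detector call is only a placeholder.

# Frame quality check (blur + brightness) and a simple enhancement fallback.
import cv2
import numpy as np

def frame_quality_ok(gray, blur_thresh=100.0, dark_thresh=50.0):
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # low variance = blurry
    brightness = gray.mean()                            # low mean = night image
    return sharpness >= blur_thresh and brightness >= dark_thresh

def enhance(frame_bgr):
    """Brighten and denoise a low-quality / night frame."""
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    enhanced = cv2.cvtColor(cv2.merge([l, a, b]), cv2.COLOR_LAB2BGR)
    return cv2.fastNlMeansDenoisingColored(enhanced, None, 5, 5, 7, 21)

frame = np.random.randint(0, 60, (480, 640, 3), np.uint8)   # stand-in night frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
if not frame_quality_ok(gray):
    frame = enhance(frame)
# Downstream (not shown): run the worker/dress detector on `frame` and push an alert.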

Human body shape and posture estimation method for object occlusion scene

The invention discloses a human body shape and posture estimation method for an object occlusion scene. The method comprises the steps of: converting a weak perspective projection parameter obtained through calculation into camera coordinates, and obtaining a UV map containing human body shape information under the condition of no occlusion; adding a random object picture to the human body two-dimensional image for occlusion, and obtaining the human body mask under the occlusion condition; training a UV map restoration network with an encoding-decoding structure by using the obtained virtual occlusion data; inputting a human body color image occluded by a real object, and constructing a saliency detection network with an encoding-decoding structure by taking the mask image as the ground truth; supervising the human body encoding network training by using the latent space features obtained by encoding; inputting the occluded human body color image to obtain a complete UV map; and recovering the human body three-dimensional model under the occlusion condition by using the vertex correspondence between the UV map and the human body three-dimensional model. According to the method, shape estimation of the occluded human body is converted into an image restoration problem on the two-dimensional UV map, so that real-time and dynamic reconstruction of the human body in the occluded scene is realized.
Owner:SOUTHEAST UNIV
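
A small sketch of the virtual-occlusion data generation mentioned above: paste a random object patch over a person image and update the person mask so the pasted pixels count as occluded. The images and mask here are toy placeholders, and the UV-map restoration and saliency networks themselves are not shown.

# Generate a (occluded image, visible-person mask) training pair.
import numpy as np

def add_virtual_occlusion(person_rgb, person_mask, occluder_rgb, top_left):
    """Return the occluded image and the mask of still-visible person pixels."""
    img = person_rgb.copy()
    visible = person_mask.copy()
    y, x = top_left
    h, w = occluder_rgb.shape[:2]
    img[y:y + h, x:x + w] = occluder_rgb          # object pasted over the person
    visible[y:y + h, x:x + w] = 0                 # those person pixels are now hidden
    return img, visible

rng = np.random.default_rng(0)
person = rng.integers(0, 255, (256, 256, 3), dtype=np.uint8)   # stand-in person image
mask = np.ones((256, 256), np.uint8)                           # 1 = person pixel (toy mask)
occluder = rng.integers(0, 255, (64, 48, 3), dtype=np.uint8)   # stand-in random object
occluded_img, occluded_mask = add_virtual_occlusion(person, mask, occluder, (100, 120))
# Such pairs would then feed the UV-map restoration and saliency networks of the entry.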

Face image restoration method introducing attention mechanism

Inactive · CN111612718A · Advantages: improves the repair effect; solves the problem caused by the limited receptive field size · Topics: image enhancement, image analysis, data set, original data
The invention relates to a face image restoration method introducing an attention mechanism. The method comprises the steps: (1) obtaining an original data set, carrying out image preprocessing to obtain the needed face image data set, and reasonably dividing the face image data set into a training set and a test set; (2) inputting the training data set into an image restoration model incorporating a context attention layer for training, wherein two parallel encoder networks are introduced into the generator network, one encoder network performs convolution operations to extract advanced feature images, and the other encoder introduces a context attention layer network to realize long-range association between the foreground region and the background region; and (3) inputting the test data set into the trained face restoration model, and testing the restoration capability of the trained restoration model for defective face images. According to the method, after the context attention layer is introduced, the problem that background region information cannot be fully utilized by the restoration model due to the limited receptive field size of the convolutional neural network is solved, long-range association between the background information and the foreground region is realized, and the background region information is fully utilized to fill the foreground region. After the context attention layer is introduced, the restoration model obtains a better restoration effect on detail textures, and the restoration effect for the face image is also improved overall.
Owner:SUN YAT SEN UNIV
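
The contextual-attention idea described above can be sketched as follows: match every foreground (hole) feature against all background features by cosine similarity and rebuild the hole features as a softmax-weighted sum of background features. A 1x1 patch size is used for brevity, whereas the actual layer matches patches; the feature map and mask are random stand-ins.

# Fill hole features from background features via attention over cosine similarity.
import torch
import torch.nn.functional as F

def contextual_attention(features, hole_mask, temperature=10.0):
    """features: 1xCxHxW feature map; hole_mask: 1x1xHxW, 1 inside the hole."""
    _, C, H, W = features.shape
    flat = features.view(C, H * W).t().contiguous()     # (HW, C) feature vectors
    mask = hole_mask.view(H * W).bool()
    fg, bg = flat[mask], flat[~mask]                    # foreground / background features
    sim = F.normalize(fg, dim=1) @ F.normalize(bg, dim=1).t()   # cosine similarity
    attn = torch.softmax(temperature * sim, dim=1)      # attention over background
    flat[mask] = attn @ bg                              # fill the hole from the background
    return flat.t().reshape(1, C, H, W)

feat = torch.rand(1, 16, 32, 32)
hole = torch.zeros(1, 1, 32, 32)
hole[:, :, 12:20, 12:20] = 1
out = contextual_attention(feat, hole)
print(out.shape)                                        # torch.Size([1, 16, 32, 32])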

Raindrop removing method for single image based on dense multi-scale generative adversarial network

The invention discloses a raindrop removing method for a single image based on a dense multi-scale generative adversarial network. The method comprises: constructing a multi-scale image restoration model that uses a dense network for feature reuse; constructing a discriminant network model with an attention mechanism in combination with the multi-scale image restoration model, forming a multi-scale generative adversarial network model; obtaining an original rain image, an original rain-free image and a residual raindrop layer; inputting the original rain-free image and the residual raindrop layer into the discriminant network model; using the error between the discriminant network model and the generative network model for back propagation to alternately train the multi-scale generative adversarial network model, stopping training when the errors of the discriminant network model and the generative network model converge to a set range; generating a raindrop removal model from the trained generative network model; and removing relatively large and dense raindrops in a single image.
Owner:LIANYOU ZHILIAN TECH CO LTD +1
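
A skeleton of the alternating adversarial training described above, with a tiny generator and discriminator standing in for the dense multi-scale restoration network and the attention discriminator; the data are random tensors, the losses are reduced to a basic GAN term plus L1, and the convergence test is replaced by a fixed number of toy steps.

# Alternating generator/discriminator training on stand-in 64x64 crops.
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())   # rainy -> clean
D = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Flatten(), nn.Linear(16 * 32 * 32, 1))       # real/fake score
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(5):                              # toy loop; real training runs to convergence
    rainy = torch.rand(4, 3, 64, 64)               # stand-in raindrop images
    clean = torch.rand(4, 3, 64, 64)               # stand-in rain-free images
    # discriminator step: real images vs. detached generator output
    fake = G(rainy).detach()
    loss_d = bce(D(clean), torch.ones(4, 1)) + bce(D(fake), torch.zeros(4, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator step: fool the discriminator and stay close to the clean image
    fake = G(rainy)
    loss_g = bce(D(fake), torch.ones(4, 1)) + F.l1_loss(fake, clean)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(float(loss_d), float(loss_g))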

Image restoration method and training method of image restoration model

The invention discloses an image restoration method and a training method of an image restoration model. The method comprises: obtaining a first target image to be restored; extracting target image features of the first target image; acquiring target candidate area information and a target reference image based on the target image features, the target reference image carrying the mode information of the first target image; and repairing the first target image based on the target candidate region information and the target reference image to obtain a target repaired image corresponding to the first target image. In the image restoration process, consideration of the mode information of the image is added, making the consideration more comprehensive, improving the restoration effect of image restoration, and yielding a more natural restored image.
Owner:TENCENT TECH (SHENZHEN) CO LTD
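
An illustrative sketch of the reference-guided idea above: pick, from a small gallery, the reference image whose global appearance is closest to the damaged image, then fill the candidate region from that reference. The color-histogram feature and the copy-based fill are deliberate simplifications standing in for the learned feature extraction and repair model of the entry.

# Choose the closest reference image and fill the candidate region from it.
import cv2
import numpy as np

def colour_signature(img_bgr):
    hist = cv2.calcHist([img_bgr], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, None).flatten()

def pick_reference(target, gallery):
    sig = colour_signature(target)
    scores = [cv2.compareHist(sig, colour_signature(g), cv2.HISTCMP_CORREL)
              for g in gallery]
    return gallery[int(np.argmax(scores))]         # most similar "mode" of appearance

def repair_with_reference(target, candidate_mask, reference):
    repaired = target.copy()
    repaired[candidate_mask > 0] = reference[candidate_mask > 0]   # naive copy fill
    return repaired

target = np.random.randint(0, 255, (128, 128, 3), np.uint8)        # stand-in damaged image
gallery = [np.random.randint(0, 255, (128, 128, 3), np.uint8) for _ in range(3)]
mask = np.zeros((128, 128), np.uint8); mask[40:80, 40:80] = 255    # candidate region
out = repair_with_reference(target, mask, pick_reference(target, gallery))
print(out.shape)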