5469 results about "Edge detection" patented technology

Edge detection includes a variety of mathematical methods that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges. The same problem of finding discontinuities in one-dimensional signals is known as step detection and the problem of finding signal discontinuities over time is known as change detection. Edge detection is a fundamental tool in image processing, machine vision and computer vision, particularly in the areas of feature detection and feature extraction.
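
As a concrete illustration of the definition above, the following is a minimal Sobel-based edge detector in Python (NumPy only); the 3x3 kernels are the standard Sobel operators, and the threshold is an arbitrary illustrative choice:

```python
# Minimal gradient-based edge detector (Sobel): marks pixels where image
# brightness changes sharply.  Threshold value is illustrative only.
import numpy as np

def sobel_edges(img, threshold=0.2):
    """Return a boolean edge map for a 2-D grayscale image in [0, 1]."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # horizontal-gradient kernel
    ky = kx.T                                   # vertical-gradient kernel

    padded = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + 3, c:c + 3]
            gx[r, c] = np.sum(window * kx)
            gy[r, c] = np.sum(window * ky)

    magnitude = np.hypot(gx, gy)
    magnitude /= magnitude.max() + 1e-12        # normalize to [0, 1]
    return magnitude > threshold                # True where brightness changes sharply

# Example: a synthetic image with a vertical step edge
img = np.zeros((8, 8)); img[:, 4:] = 1.0
print(sobel_edges(img).astype(int))
```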

Systems and methods for automated screening and prognosis of cancer from whole-slide biopsy images

Inactive · US20140233826A1 · Accurate and unambiguous measure · Reduce dependence · Image enhancement · Medical data mining · Feature set · Prostate cancer
The invention provides systems and methods for detection, grading, scoring and tele-screening of cancerous lesions. A complete scheme for automated quantitative analysis and assessment of human and animal tissue images of several types of cancers is presented. Various aspects of the invention are directed to the detection, grading, prediction and staging of prostate cancer on serial sections/slides of prostate core images, or biopsy images. Accordingly, the invention includes a variety of sub-systems, which could be used separately or in conjunction to automatically grade cancerous regions. Each system utilizes a different approach with a different feature set. For instance, in the quantitative analysis, textural-based and morphology-based features may be extracted at image- and/or object-levels from regions of interest. Additionally, the invention provides sub-systems and methods for accurate detection and mapping of disease in whole-slide digitized images by extracting new features through integration of one or more of the above-mentioned classification systems. The invention also addresses the modeling, qualitative analysis and assessment of 3-D histopathology images, which assist pathologists in visualization, evaluation and diagnosis of diseased tissue. Moreover, the invention includes systems and methods for the development of a tele-screening system in which the proposed computer-aided diagnosis (CAD) systems are deployed. In some embodiments, novel methods for image analysis (including edge detection, color mapping characterization and others) are provided for use prior to feature extraction in the proposed CAD systems.
Owner:BOARD OF RGT THE UNIV OF TEXAS SYST
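
As an illustration of the pre-processing and feature-extraction stage this abstract describes, here is a minimal sketch in Python that computes an edge map for a region of interest and gathers a few simple texture and edge-density statistics; the function name, feature names and threshold are illustrative assumptions and do not reflect the patent's actual feature set:

```python
# Edge detection followed by simple texture/edge features for an ROI.
# Feature set and threshold are illustrative assumptions.
import numpy as np

def roi_features(roi, edge_threshold=0.15):
    """Return a small dictionary of texture- and edge-based features for an ROI."""
    roi = roi.astype(float)
    gy, gx = np.gradient(roi)                 # finite-difference gradients
    grad_mag = np.hypot(gx, gy)
    edges = grad_mag > edge_threshold * (grad_mag.max() + 1e-12)
    return {
        "mean_intensity": roi.mean(),         # first-order texture statistics
        "intensity_std": roi.std(),
        "edge_density": edges.mean(),         # fraction of pixels on detected edges
        "mean_gradient": grad_mag.mean(),
    }

# Example with a random stand-in for a stained-tissue ROI
rng = np.random.default_rng(0)
print(roi_features(rng.random((64, 64))))
```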

Deinterlacing of video sources via image feature edge detection

Active · US7023487B1 · Reduce artifacts · Preserves maximum amount of vertical detail · Image enhancement · Television system details · Interlaced video · Progressive scan
An interlaced to progressive scan video converter which identifies object edges and directions, and calculates new pixel values based on the edge information. Source image data from a single video field is analyzed to detect object edges and the orientation of those edges. A 2-dimensional array of image elements surrounding each pixel location in the field is high-pass filtered along a number of different rotational vectors, and a null or minimum in the set of filtered data indicates a candidate object edge as well as the direction of that edge. A 2-dimensional array of edge candidates surrounding each pixel location is characterized to invalidate false edges by determining the number of similar and dissimilar edge orientations in the array, and then disqualifying locations which have too many dissimilar or too few similar surrounding edge candidates. The surviving edge candidates are then passed through multiple low-pass and smoothing filters to remove edge detection irregularities and spurious detections, yielding a final edge detection value for each source image pixel location. For pixel locations with a valid edge detection, new pixel data for the progressive output image is calculated by interpolating from source image pixels which are located along the detected edge orientation.
Owner:LATTICE SEMICON CORP
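
The directional edge search and edge-directed interpolation described above can be illustrated with a much simplified sketch: each missing pixel scores a handful of candidate directions by how little the field changes along them (a crude stand-in for the patent's high-pass "null" search) and interpolates along the best one. The candidate offsets are illustrative, and the patent's edge-validation and smoothing stages are omitted:

```python
# Simplified edge-directed deinterlacing of a single field.
# Candidate offsets are illustrative; validation/smoothing stages omitted.
import numpy as np

def deinterlace_field(field):
    """Interpolate the missing lines of a single field (rows 0,2,4,... present)."""
    h, w = field.shape
    out = np.zeros((2 * h, w), dtype=float)
    out[0::2] = field                                    # copy the known lines
    offsets = (-2, -1, 0, 1, 2)                          # candidate edge directions
    for r in range(h - 1):                               # interpolate between line r and r+1
        above, below = field[r], field[r + 1]
        for c in range(w):
            best_score, best_val = None, None
            for d in offsets:
                ca, cb = c + d, c - d
                if 0 <= ca < w and 0 <= cb < w:
                    score = abs(above[ca] - below[cb])   # low difference => likely edge direction
                    if best_score is None or score < best_score:
                        best_score = score
                        best_val = 0.5 * (above[ca] + below[cb])
            out[2 * r + 1, c] = best_val
    out[-1] = field[-1]                                  # last missing line: repeat nearest line
    return out

# Example: a field containing a diagonal edge
field = np.tril(np.ones((4, 8)))
print(deinterlace_field(field))
```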

Fusion night vision system

A fusion night vision system having image intensification and thermal imaging capabilities includes an edge detection filter circuit to aid in acquiring and identifying targets. An outline of the thermal image is generated and combined with the image intensification image without obscuring the image intensification image. The fusion night vision system may also include a parallax compensation circuit to overcome parallax problems resulting from the image intensification channel being spaced from the thermal channel. The fusion night vision system may also include a control circuit configured to maintain a perceived brightness through an eyepiece over a mix of image intensification information and thermal information. The fusion night vision system may incorporate a targeting mode that allows an operator to acquire a target without having the scene saturated by a laser pointer. The night vision system may also include a first detector, an image combiner for forming a fused image from the first detector and a display, and a camera aligned with the image combiner for recording scene information processed by the first detector.
Owner:L 3 COMM INSIGHT TECH
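
A minimal sketch of the outline-fusion idea in this abstract follows: edges are extracted from the thermal image and added to the image-intensified (I2) image as a thin outline, leaving non-edge pixels untouched. The threshold and gain values are illustrative assumptions:

```python
# Thermal edge outline overlaid on an image-intensified (I2) image.
# Threshold and gain are illustrative assumptions.
import numpy as np

def fuse_outline(i2_image, thermal_image, threshold=0.25, gain=1.0):
    """Overlay a thermal edge outline on the image-intensified image."""
    t = thermal_image.astype(float)
    gy, gx = np.gradient(t)
    edge_mag = np.hypot(gx, gy)
    edge_mag /= edge_mag.max() + 1e-12
    outline = np.where(edge_mag > threshold, edge_mag, 0.0)   # keep only strong thermal edges
    fused = i2_image.astype(float) + gain * outline           # add the outline, leave the rest alone
    return np.clip(fused, 0.0, 1.0)

# Example: warm square in the thermal channel outlined over a dim I2 scene
rng = np.random.default_rng(1)
i2 = rng.random((32, 32)) * 0.3
thermal = np.zeros((32, 32)); thermal[8:24, 8:24] = 1.0
print(fuse_outline(i2, thermal).shape)
```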

Regional depth edge detection and binocular stereo matching-based three-dimensional reconstruction method

The invention discloses a three-dimensional reconstruction method based on regional depth-edge detection and binocular stereo matching, implemented by the following steps: (1) capturing a calibration-plate image bearing mark points from two suitable angles with two black-and-white cameras; (2) keeping the shooting angles fixed and capturing two images of the target object simultaneously with the same cameras; (3) performing epipolar rectification of the two target-object images according to the calibration data of the cameras; (4) searching the neighborhood of each pixel of the two rectified images for a closed-region depth edge and building a support window; (5) within the built window, computing the normalized cross-correlation coefficient of the supported pixels to obtain the matching cost of the central pixel; (6) obtaining the disparity with a belief-propagation optimization method that uses an accelerated updating scheme; (7) refining the disparity to sub-pixel accuracy; and (8) computing the three-dimensional coordinates of the actual object points from the camera calibration data and the pixel correspondences, thereby reconstructing the three-dimensional point cloud of the object and recovering the three-dimensional information of the target.
Owner:江苏省华强纺织有限公司 +1
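
Step (5) of this abstract, the normalized cross-correlation (NCC) matching cost, can be sketched as follows; the window size and disparity range are illustrative, and the depth-edge support windows, belief-propagation optimization and sub-pixel refinement of the other steps are omitted:

```python
# NCC matching cost along a rectified row; best disparity = highest NCC score.
# Window size and disparity range are illustrative assumptions.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return (a * b).sum() / denom

def best_disparity(left, right, row, col, half=3, max_disp=16):
    """Return the disparity maximizing window NCC for pixel (row, col) of the left image."""
    patch_l = left[row - half:row + half + 1, col - half:col + half + 1]
    best_d, best_score = 0, -np.inf
    for d in range(max_disp + 1):
        c = col - d                                    # matching column in the right image
        if c - half < 0:
            break
        patch_r = right[row - half:row + half + 1, c - half:c + half + 1]
        score = ncc(patch_l, patch_r)
        if score > best_score:
            best_score, best_d = score, d
    return best_d

# Example: the right image is the left image shifted by 4 pixels
rng = np.random.default_rng(2)
left = rng.random((40, 60))
right = np.roll(left, -4, axis=1)
print(best_disparity(left, right, row=20, col=30))   # expected: 4
```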

Autonomously identifying and capturing method of non-cooperative target of space robot

Inactive · CN101733746A · Real-time prediction of motion status · Predict interference in real time · Programme-controlled manipulator · Tools · Kinematics · Target capture
The invention relates to a method for autonomously identifying and capturing a non-cooperative target with a space robot, comprising the main steps of (1) pose measurement based on stereoscopic vision, (2) autonomous path planning for target capture by the space robot, and (3) coordinated control of the space robot system. The pose measurement based on stereoscopic vision is realized by processing the images of a left camera and a right camera in real time, including smoothing filtering, edge detection and line extraction, and computing the pose of the non-cooperative target satellite relative to the base and the end effector. The autonomous path planning for target capture is realized by planning the motion trajectories of the joints in real time according to the pose measurement results. The coordinated control of the space robot system is realized by coordinately controlling the mechanical arms and the base to achieve optimal control of the whole system. In this method, a part of the spacecraft itself is used directly as the object to identify and capture, without installing a marker or corner reflector on the target satellite or knowing the geometric dimensions of the object, and the planned path effectively avoids dynamic and kinematic singularities.
Owner:HARBIN INST OF TECH
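
A minimal sketch of the image-processing front end mentioned in step (1), i.e. smoothing, edge detection and straight-line extraction, is given below; the least-squares line fit is a simple stand-in for the patent's line-extraction stage, the kernel sizes and thresholds are illustrative, and the stereo pose computation itself is not shown:

```python
# Smoothing + edge detection + simple line extraction (least-squares stand-in).
# Kernel size and threshold are illustrative assumptions.
import numpy as np

def smooth(img, k=3):
    """Box-filter smoothing with a k x k kernel."""
    p = k // 2
    padded = np.pad(img.astype(float), p, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = padded[r:r + k, c:c + k].mean()
    return out

def extract_line(img, edge_threshold=0.3):
    """Smooth, detect edges, then fit row = slope * col + intercept to the edge pixels."""
    s = smooth(img)
    gy, gx = np.gradient(s)
    mag = np.hypot(gx, gy)
    rows, cols = np.nonzero(mag > edge_threshold * (mag.max() + 1e-12))
    slope, intercept = np.polyfit(cols, rows, deg=1)      # least-squares line through edge pixels
    return slope, intercept

# Example: an image split by a diagonal boundary
img = np.fromfunction(lambda r, c: (r > 0.5 * c + 5).astype(float), (40, 40))
print(extract_line(img))
```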

Method and apparatus for ultrasonic continuous, non-invasive blood pressure monitoring

Ultrasound is used to provide input data for a blood pressure estimation scheme. The use of transcutaneous ultrasound provides arterial lumen area and pulse wave velocity information. In addition, ultrasound measurements are taken in such a way that all the data describes a single, uniform arterial segment. Therefore a computed area relates only to the arterial blood volume present. Also, the measured pulse wave velocity is directly related to the mechanical properties of the segment of elastic tube (artery) for which the blood volume is being measured. In a patient monitoring application, the operator of the ultrasound device is eliminated through the use of software that automatically locates the artery in the ultrasound data, e.g., using known edge detection techniques. Autonomous operation of the ultrasound system allows it to report blood pressure and blood flow traces to the clinical users without those users having to interpret an ultrasound image or operate an ultrasound imaging device.
Owner:GENERAL ELECTRIC CO
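
The automatic artery-location step mentioned above can be illustrated with a 1-D edge-detection sketch that finds the two wall echoes along a single scan line and estimates the lumen diameter and area; the profile, pixel spacing, wall-finding rule and circular-cross-section assumption are illustrative and are not the patent's algorithm:

```python
# Locate arterial walls along one scan line via 1-D edge detection and
# estimate the lumen diameter/area.  All values are illustrative assumptions.
import numpy as np

def lumen_area_from_profile(profile, pixel_spacing_mm=0.1):
    """Estimate lumen diameter (mm) and area (mm^2) from a brightness profile across the artery."""
    gradient = np.abs(np.diff(profile.astype(float)))     # 1-D edge strength
    first_wall = int(np.argmax(gradient))                 # strongest wall echo
    gradient_rest = gradient.copy()
    gradient_rest[:first_wall + 1] = 0.0
    second_wall = int(np.argmax(gradient_rest))           # strongest echo past the first wall
    diameter_mm = abs(second_wall - first_wall) * pixel_spacing_mm
    area_mm2 = np.pi * (diameter_mm / 2.0) ** 2           # circular cross-section assumption
    return diameter_mm, area_mm2

# Example: dark (hypoechoic) lumen between two bright wall echoes
profile = np.ones(100) * 0.8
profile[30:70] = 0.1
print(lumen_area_from_profile(profile))
```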

Techniques for image enhancement using a tactile display

Techniques are disclosed for enhancing the quality of a displayed image using a tactile or other texture display. In particular, the disclosed techniques leverage active-texture display technology to enhance the quality of graphics by providing, for example, outlining and/or shading when presenting a given image, so as to create the effect of increased contrast and image quality and/or to reduce observable glare. These effects can be present even at high viewing angles and in environments of high light reflection. To these ends, one or more graphics processes, such as edge-detection and/or shading, may be applied to an image to be displayed. In turn, an actuator element (e.g., microelectromechanical systems, or MEMS, devices) of the tactile display may be manipulated (e.g., in Z-height) to provide fine-grain adjustment of image attributes such as: pixel brightness/intensity; pixel color; edge highlighting; object outlining; effective shading; image contrast; and/or viewing angle.
Owner:INTEL CORP
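
A minimal sketch of driving a tactile display from an edge-detection pass follows: the edge magnitude of the displayed image is pooled into per-actuator Z-height commands so that outlines stand out physically. The actuator grid size, height range and threshold are illustrative assumptions:

```python
# Map image edge magnitude onto a coarse grid of actuator Z-heights.
# Grid size, height range and threshold are illustrative assumptions.
import numpy as np

def actuator_heights(img, grid=(16, 16), max_height_um=50.0, threshold=0.2):
    """Return per-actuator Z-height commands (micrometres) derived from edge magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12
    rows, cols = img.shape
    gr, gc = grid
    heights = np.zeros(grid)
    for i in range(gr):                                   # pool edge strength per actuator cell
        for j in range(gc):
            cell = mag[i * rows // gr:(i + 1) * rows // gr,
                       j * cols // gc:(j + 1) * cols // gc]
            strength = cell.max()
            heights[i, j] = max_height_um * strength if strength > threshold else 0.0
    return heights

# Example: outline of a bright square raised on the tactile grid
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
print(actuator_heights(img).round(1))
```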