134 results about "View model" patented technology

A view model or viewpoints framework in systems engineering, software engineering, and enterprise engineering is a framework which defines a coherent set of views to be used in the construction of a system architecture, software architecture, or enterprise architecture. A view is a representation of a whole system from the perspective of a related set of concerns.
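
The following is a minimal, hypothetical Python sketch of the viewpoint/view idea: a viewpoint names a set of concerns and stakeholders, and a view is a rendering of the system from that viewpoint. The class and field names are illustrative, not taken from any standard or from the patents below.

```python
# Hypothetical sketch of the viewpoint/view idea: a viewpoint names a set of
# concerns, and a view is a representation of the system from that viewpoint.
# All class and field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Viewpoint:
    name: str
    concerns: list[str]          # the questions this viewpoint answers
    stakeholders: list[str]      # who cares about those questions

@dataclass
class View:
    viewpoint: Viewpoint
    elements: dict[str, str] = field(default_factory=dict)  # model elements shown

# Example: a "logical" viewpoint addressing functional decomposition.
logical = Viewpoint("logical", ["functional decomposition"], ["architects"])
view = View(logical, {"OrderService": "component", "Billing": "component"})
print(f"{view.viewpoint.name} view covers {list(view.elements)}")
```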

Method for carrying out self-adaption simplification, gradual transmission and rapid charting on three-dimensional model

The invention provides a method for carrying out self-adaption simplification, gradual transmission and rapid charting on a three-dimensional model by using a space solid view model. The method simulates and analyzes the display process of the three-dimensional model with the space solid view model so as to realize the self-adaption simplification, gradual transmission and rapid charting of the model, thereby solving the problems of network transmission and display of massive three-dimensional model data. The beneficial effects of the method are mainly as follows: lossless display and self-adaption simplification of the three-dimensional model are guaranteed, and the display effect before simplification is consistent with that after simplification; the gradual transmission designed on the basis of the simplification method resolves the conflict between the explosive growth of massive spatial data and transmission over limited network bandwidth; the data volume of the simplified three-dimensional model remains small; and the charting speed of the three-dimensional model can be noticeably increased by using resident memory when the view window is refreshed.
Owner:董福田
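
As a rough illustration of the general idea described above (not the patent's space solid view model itself), the Python sketch below assumes a per-object level-of-detail choice driven by projected screen size and a coarse-to-fine streaming order; the thresholds and data layout are invented for the example.

```python
# A minimal sketch of adaptive simplification and progressive transmission:
# pick a coarser level of detail (LOD) for models that project to few pixels,
# and stream coarse levels before fine ones. Thresholds are illustrative
# assumptions, not the patent's space solid view model.
def select_lod(projected_pixels: float, lod_count: int) -> int:
    """Return an LOD index: 0 = coarsest, lod_count - 1 = finest."""
    if projected_pixels < 32:
        return 0
    if projected_pixels < 256:
        return min(1, lod_count - 1)
    return lod_count - 1

def progressive_stream(lods: list[bytes]):
    """Yield LODs coarse-to-fine so the client can draw before the download finishes."""
    for level, payload in enumerate(lods):
        yield level, payload

# Usage: refine the displayed model as finer levels arrive.
lods = [b"coarse", b"medium", b"fine"]
for level, payload in progressive_stream(lods):
    print(f"received LOD {level}: {len(payload)} bytes")
```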

Off-line programming system and method of optical visual sensor with linear structure for welding robot

Inactive · CN101973032A · Intuitive detection status · Intuitive display of detection status · Manipulator · Interactive graphics · Simulation
The invention relates to an off-line programming system and method of a linear-structured optical visual sensor for a welding robot. The system comprises a sensor model, a robot model, a process control rule base, a graphic editing interface, an operation sequence module and a programming information output. The sensor model simulates the sensor imaging process to acquire a view model and simultaneously completes detection view substantiation so as to facilitate graphic programming by the user. The robot model simulates the single-point motion of the robot and provides the communication interface information for connecting sensor input signals. The process control rule base provides process feature extraction rules for different welding tasks and the control command information related to the processes. The graphic editing interface supports interactive graphic programming between the user and the system. The operation sequence module saves the information of a series of detection points, including single-point motion commands, process compensation control commands, process feature extraction rules and imaging signal model information. The programming information output outputs the program text of the robot and the configuration information text of the sensor system.
Owner:SOUTHEAST UNIV
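
The skeleton below is an assumed illustration of how the modules listed in the abstract could fit together in code; every class and method name is a placeholder rather than part of the patented system.

```python
# Illustrative skeleton of the modules named in the abstract; all names and
# signatures are assumptions made for clarity.
class SensorModel:
    def simulate_imaging(self, pose):
        """Simulate the optical sensor imaging process and return a view model."""
        ...

class RobotModel:
    def move_single_point(self, target):
        """Simulate a single-point robot motion toward the target pose."""
        ...

class ProcessControlRuleBase:
    def rules_for(self, welding_task):
        """Return feature-extraction rules and related control commands for a task."""
        ...

class OperationSequence:
    def __init__(self):
        # motion commands, compensation commands, extraction rules, imaging info
        self.detection_points = []

    def add_point(self, record):
        self.detection_points.append(record)

class ProgrammingOutput:
    def export(self, sequence: OperationSequence):
        """Write the robot program text and the sensor configuration text."""
        ...
```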

Interactive-interface fast implementation method based on reusable library

The invention provides an interactive-interface fast implementation method based on a reusable library. The method comprises: correlating data with interface elements through an active data/view model design; constructing a reusable basic element library, a universal function library and a special function library according to the characteristics of interactive interfaces, and then constructing the reusable library; and finally proposing into-library standard specifications for models, thereby providing a unified standard interface, a configuration path, a database table form and a data/view model binding specification. The method is suitable both for development platforms based on remote WEB access and for local interactive interfaces based on tools such as QT and VS. It can realize various types of interfaces that require interaction on the basis of the reusable library, and is characterized by simple function library design rules, high reusability and quick interface molding. Moreover, the effectiveness of the reusable library and the usability of the interactive interfaces can be further improved by refining and optimizing the standard specifications of the reusable library.
Owner:LANGCHAO ELECTRONIC INFORMATION IND CO LTD
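
A minimal sketch of the data/view model binding idea mentioned in the abstract follows; the API shown is an assumption for illustration, not the patented reusable library.

```python
# Minimal sketch of data / view model binding: interface elements register
# against named fields of a data model and are refreshed when the data
# changes. The API below is an illustrative assumption.
class DataViewModel:
    def __init__(self):
        self._data = {}
        self._bindings = {}   # field name -> callbacks that update bound widgets

    def bind(self, field: str, update_widget):
        self._bindings.setdefault(field, []).append(update_widget)

    def set(self, field: str, value):
        self._data[field] = value
        for update in self._bindings.get(field, []):
            update(value)       # push the new value into each bound element

# Usage: a reusable label element bound to a "temperature" field.
vm = DataViewModel()
vm.bind("temperature", lambda v: print(f"label shows: {v} °C"))
vm.set("temperature", 42)
```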

Scale invariant feature transform-based unmanned aerial vehicle scene matching positioning method

The invention relates to a scale invariant feature transform-based unmanned aerial vehicle scene matching positioning method, characterized by comprising the following steps: 1, extracting feature description vectors of an image of a matching target and of a front-lower view of an unmanned aerial vehicle by a scale invariant feature transform algorithm; 2, determining whether the front-lower view in the current frame matches the image of the matching target; and 3, if they match, recording the coordinates of the matching point in a satellite map containing the image of the matching target and of the matching target in the front-lower view of the unmanned aerial vehicle, calculating the current position coordinates of the unmanned aerial vehicle in the satellite map according to the coordinates of the matching point, and positioning the unmanned aerial vehicle; if they do not match, reading the front-lower view of the next frame and carrying out matching in turn. The method realizes accurate matching between the front-lower view of an unmanned aerial vehicle and a matching target in a satellite map, determination of the current position coordinates of the unmanned aerial vehicle according to a built front-lower view model, and positioning of the unmanned aerial vehicle.
Owner:NORTHWESTERN POLYTECHNICAL UNIV
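
The OpenCV-based sketch below illustrates the SIFT extraction and matching of steps 1 and 2; the image file names, the ratio-test threshold and the match-count threshold are assumptions, and step 3 (computing the UAV position from the matched coordinates) is only indicated by a comment.

```python
# SIFT extraction and matching between a satellite-map target patch and a
# UAV front-lower view (steps 1-2 of the abstract). File names and thresholds
# are assumptions; step 3 is only sketched in a comment.
import cv2

target = cv2.imread("matching_target.png", cv2.IMREAD_GRAYSCALE)     # patch from the satellite map
frame = cv2.imread("uav_front_lower_view.png", cv2.IMREAD_GRAYSCALE)  # current UAV frame

sift = cv2.SIFT_create()
kp_t, des_t = sift.detectAndCompute(target, None)
kp_f, des_f = sift.detectAndCompute(frame, None)

# Brute-force matching with Lowe's ratio test to keep only distinctive matches.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des_t, des_f, k=2)
        if m.distance < 0.75 * n.distance]

if len(good) >= 10:                      # match-count threshold chosen for illustration
    # Step 3 (sketched): the matched keypoint coordinates in the satellite map
    # and in the frame would be combined here to compute the UAV position.
    pt_map = kp_t[good[0].queryIdx].pt
    pt_frame = kp_f[good[0].trainIdx].pt
    print("matched", len(good), "features; example correspondence:", pt_map, pt_frame)
else:
    print("no match in this frame; read the next front-lower view")
```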