
Vision-based mutual positioning method in unknown indoor environment

A positioning method for unknown indoor environments in the field of image processing, achieving improved retrieval speed and high positioning accuracy.

Active Publication Date: 2021-04-16
HARBIN INST OF TECH

AI Technical Summary

Problems solved by technology

[0005] The present invention provides a vision-based mutual positioning method in an unknown indoor environment, which solves the problem of quickly and accurately finding a common coordinate system in an unknown environment and realizes mutual positioning between users in that coordinate system.



Examples


Embodiment 1

[0063] Mutual positioning between users is the process of determining each other's position, so the two users must share information, each determining the other's position relative to itself from the information the other provides. Since images contain rich visual information, sharing images conveys more content than sharing speech or text. From the pictures sent by the other party, a user judges whether the other party can see the same semantic scene as itself. If no common target is found, the two users are far apart and must continue walking to look for other representative landmarks. When a user finds a target that both users can observe, each user has a relative position relationship with that target, so a coordinate system can be established centered on the target. After the coordinate system is established, the two users can obtain their own position coordinates in the coordinate...
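The embodiment does not give the geometric details of this step here, but the idea of expressing both users in a frame centered on the shared target can be sketched minimally as follows. This is an illustration under assumptions, not the patent's implementation: it assumes each user can estimate a range and bearing to the common target, and the function and parameter names (`user_position_in_target_frame`, `range_m`, `bearing_rad`) are invented for this sketch.

```python
import math

def user_position_in_target_frame(range_m: float, bearing_rad: float) -> tuple[float, float]:
    """Place a user in a 2-D frame centered on the shared target.

    range_m:     the user's estimated distance to the target (assumed measurable)
    bearing_rad: angle from the frame's x-axis to the user, as seen from the target
    """
    return (range_m * math.cos(bearing_rad), range_m * math.sin(bearing_rad))

# Illustrative values only: each user estimates its own range/bearing to the
# common target, so both positions land in the same target-centered frame.
p1 = user_position_in_target_frame(4.0, math.radians(30))
p2 = user_position_in_target_frame(6.5, math.radians(200))

# Relative vector from user 1 to user 2 in the shared frame.
dx, dy = p2[0] - p1[0], p2[1] - p1[1]
print(f"user 2 is {math.hypot(dx, dy):.2f} m from user 1")
```

Once both positions are expressed in this common frame, each user knows the other's location relative to itself, which is exactly the mutual positioning the embodiment describes.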

Embodiment 2

[0139] To verify the feasibility of the method proposed in the present invention, an experimental scene must be selected for testing. The experimental environment is the corridor on the 12th floor of Building 2A, Harbin Institute of Technology Science Park; a plan view of the experimental scene is shown in Figure 6. As the schematic diagram shows, the scene contains multiple corners. When two users stand on either side of a corner, obstacles prevent them from seeing each other, yet both can observe the same scene at the same time. This satisfies the background conditions of the proposed method, making the scene suitable for verifying its feasibility.

[0140] Before positioning, the semantic information contained in the users' images must be accurately identified, so as to judge whether two users can observe the same scene t...
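The patent names R-FCN as the detector but gives no implementation details at this point. The following is a minimal sketch, assuming a generic detector interface in place of R-FCN: the `detect` callable, its `(label, confidence, x_center)` output format, and the confidence threshold are all placeholders invented for illustration, not the patent's API.

```python
from typing import Callable

# Hypothetical detector interface: the patent uses R-FCN, but any detector
# returning (label, confidence, x_center) triples would fit this sketch.
Detection = tuple[str, float, float]

def image_to_semantic_sequence(
    image,
    detect: Callable[[object], list[Detection]],
    min_conf: float = 0.5,   # assumed confidence threshold, not from the source
) -> tuple[str, ...]:
    """Convert one image into an ordered sequence of semantic labels.

    Detections are filtered by confidence and ordered left-to-right by their
    horizontal position, so the sequence preserves the scene's spatial layout.
    """
    kept = [(x, label) for (label, conf, x) in detect(image) if conf >= min_conf]
    return tuple(label for x, label in sorted(kept))

# Toy detector returning fixed detections; a real system would run R-FCN here.
fake = lambda img: [("door", 0.9, 120.0), ("sign", 0.8, 40.0), ("plant", 0.3, 60.0)]
print(image_to_semantic_sequence(None, fake))  # -> ('sign', 'door')
```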



Abstract

The invention discloses a vision-based mutual positioning method in an unknown indoor environment. User 1 and user 2 each photograph the indoor environment in front of them; user 1 assembles its images into an image database, and user 2 shares its images with user 1. The semantic information contained in each image of user 1's database is identified using R-FCN and converted into a corresponding semantic sequence, forming a semantic database. When user 1 receives an image from user 2, it is likewise converted into a semantic sequence by R-FCN and matched against the semantic database. If no identical semantic sequence is retrieved, the users continue along their current direction; if an identical semantic sequence is retrieved, the most representative semantic target is selected to establish a position relationship. A coordinate system is then established with that target as its center, realizing mutual positioning between users in the unknown environment. The invention thus solves the problem of quickly and accurately completing mutual positioning between users in an unknown environment.
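As a minimal sketch of the retrieval decision described in the abstract: the exact-match criterion on semantic sequences follows the abstract's wording, while the function name, database representation, and example labels are invented for illustration.

```python
def retrieve_matching_sequence(
    query: tuple[str, ...],
    semantic_db: list[tuple[str, ...]],
) -> tuple[str, ...] | None:
    """Return the first database sequence identical to the query, else None.

    The abstract describes an exact match on semantic sequences: no match means
    the users keep walking; a match means a shared target can be selected.
    """
    for seq in semantic_db:
        if seq == query:
            return seq
    return None

# Illustrative use: 'door'/'extinguisher'/'sign' are made-up labels.
db = [("door", "extinguisher", "sign"), ("window", "door")]
match = retrieve_matching_sequence(("door", "extinguisher", "sign"), db)
if match is None:
    print("no common scene yet: continue along the current direction")
else:
    print("common scene found: pick the most representative target in", match)
```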

Description

Technical Field

[0001] The invention belongs to the field of image processing, and in particular relates to a vision-based mutual positioning method in an unknown indoor environment.

Background Technique

[0002] In daily life, people often enter completely unfamiliar indoor places such as shopping malls and museums. In such places there is no way to obtain prior knowledge of the indoor layout, so positioning is particularly difficult. When two users are at different locations in the same unfamiliar indoor place, each urgently needs to know the other's location. Mutual positioning between users in an unfamiliar environment therefore has important practical significance and broad development prospects.

[0003] Because wireless base stations cannot be deployed in advance in unfamiliar indoor environments, traditional wireless positioning methods are not suitable for unfamiliar envi...


Application Information

IPC(8): G06F16/53; G06F16/583; G06T7/13; G06N3/04; G06N3/08
Inventor: Ma Lin, Dong He, Wang Bin, Ye Liang, He Chenguang, Han Shuai, Meng Weixiao
Owner: HARBIN INST OF TECH