Indoor Sign-based Visual Localization Method
Abstract: To address the localization problem of intelligent vehicles and mobile robots in indoor traffic environments, a visual localization method is proposed that exploits the various signs already present in indoor scenes and employs the BEBLID (Boosted Efficient Binary Local Image Descriptor) algorithm. The classic BEBLID descriptor is extended so that it can also characterize an image as a whole (a holistic feature). The localization process is divided into an offline stage and an online stage. In the offline stage, a scene sign map is built. In the online stage, the holistic and local BEBLID features extracted from the current image are matched against the corresponding features stored in the scene sign map; a KNN method determines the nearest node and the nearest image, and the sign coordinates stored in the map are then used for metric computation to obtain the current position. Experiments were conducted in three kinds of indoor scenes: a teaching building, an office building, and an indoor parking lot. The scene signs were correctly recognized at a rate of 90%, and the average localization error was less than 1 m. Compared with the traditional method on the same test set, the recognition accuracy is improved by about 10% in relative terms, which verifies the effectiveness of the proposed method.
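As a rough illustration of the online-stage feature matching described above, the following Python sketch extracts local BEBLID descriptors on ORB keypoints with OpenCV (opencv-contrib-python provides `cv2.xfeatures2d.BEBLID_create`) and scores candidate reference images with KNN (k = 2) Hamming matching and a ratio test. The scene sign map is simplified here to a plain list of reference images, the holistic BEBLID feature and the metric position computation are omitted, and all function names and parameter values below are illustrative assumptions rather than the paper's implementation.

```python
import cv2
import numpy as np

# Requires opencv-contrib-python, which ships BEBLID in the xfeatures2d module.

def beblid_features(image_bgr, n_keypoints=1000):
    """Detect ORB keypoints and describe them with local BEBLID descriptors."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.ORB_create(n_keypoints)
    # 0.75 is the scale factor the BEBLID authors recommend for ORB keypoints.
    describer = cv2.xfeatures2d.BEBLID_create(0.75)
    keypoints = detector.detect(gray, None)
    keypoints, descriptors = describer.compute(gray, keypoints)
    return keypoints, descriptors

def count_good_matches(desc_query, desc_ref, ratio=0.8):
    """KNN (k = 2) Hamming matching with a ratio test; returns the number of good matches."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(desc_query, desc_ref, k=2)
    return sum(1 for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance)

def nearest_reference(query_img, reference_imgs):
    """Stand-in for the nearest-image step: pick the reference image with the most good matches."""
    _, desc_q = beblid_features(query_img)
    if desc_q is None:
        return -1, []
    scores = []
    for ref in reference_imgs:
        _, desc_r = beblid_features(ref)
        scores.append(count_good_matches(desc_q, desc_r) if desc_r is not None else 0)
    return int(np.argmax(scores)), scores
```

In the paper's pipeline the query image would be matched against the features already stored in the scene sign map rather than re-extracting reference descriptors on the fly, and the final position is then recovered metrically from the sign coordinates recorded in that map.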
Key words:
- Indoor localization
- Holistic feature
- Visual localization
- BEBLID feature