Self-Localization of Guide Robots Through Image Classification


Muhammad S. Alam
https://orcid.org/0000-0002-9419-3928
Farhan B. Mohamed
https://orcid.org/0000-0002-5298-8642
AKM B. Hossain

Abstract

Autonomous robotic systems have advanced tremendously in recent years, enabling them to perform complex tasks in varied contexts. One of the most valuable applications of guide robots is assisting blind users, which demands an accurate and robust self-localization system for indoor environments. This paper proposes such a system: self-localization is framed as an image classification problem and solved with a deep learning model, a convolutional neural network (CNN). Images were collected from the robot's perspective inside a room using a panoramic camera, and two datasets were created by splitting the images at chest height (above and below). Trained on these datasets, the CNN enables the robot to determine its initial position inside the room, offering a more accurate solution to the complex problem of indoor robot navigation and a more reliable interface between humans and robots. The proposed method achieved a localization accuracy of 98.98%.
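The idea of treating self-localization as image classification can be illustrated with a minimal sketch, assuming PyTorch. The layer sizes, input resolution, and the number of discrete room positions (`NUM_POSITIONS`) below are illustrative assumptions, not the architecture or dataset layout reported in the paper.

```python
# Minimal sketch: self-localization framed as image classification.
# Each class corresponds to one discrete position in the room; the CNN
# maps a camera image to a position label. All sizes are hypothetical.
import torch
import torch.nn as nn

NUM_POSITIONS = 8  # hypothetical number of discrete robot positions

class LocalizationCNN(nn.Module):
    def __init__(self, num_classes: int = NUM_POSITIONS):
        super().__init__()
        # Two small conv blocks extract visual features from the room image.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Global pooling + linear layer classify the position.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = LocalizationCNN()
batch = torch.randn(4, 3, 64, 64)   # stand-in for panoramic room images
logits = model(batch)               # shape: (4, NUM_POSITIONS)
predicted_position = logits.argmax(dim=1)
```

In practice the network would be trained with a cross-entropy loss on the two height-split datasets described in the abstract; at inference time, `argmax` over the logits yields the robot's estimated position in the room.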

Article Details

How to Cite
1.
Self-Localization of Guide Robots Through Image Classification. Baghdad Sci.J [Internet]. 2024 Feb. 25 [cited 2024 Apr. 27];21(2(SI)):0832. Available from: https://bsj.uobaghdad.edu.iq/index.php/BSJ/article/view/9648
Section
article


