Self-Localization of Guide Robots Through Image Classification

The field of autonomous robotic systems has advanced tremendously in recent years, allowing robots to perform complicated tasks in various contexts. One of the most important applications of guide robots is supporting blind people, which requires an accurate and robust self-localization system for indoor environments. This paper proposes such a self-localization system for guide robots, treating localization as an image classification problem solved with a deep learning model, a convolutional neural network (CNN). A robot placed inside a room collected images with a panoramic camera, and two datasets were created from the room images: one from images captured above chest height and one from images captured below it. The trained system allows the robot to determine its initial position inside a room, providing a more accurate solution to the complex problem of indoor robot navigation and a more reliable interface between humans and robots. The proposed method achieved a localization accuracy of 98.98%.


Introduction
https://doi.org/10.21123/bsj.2024.9648 P-ISSN: 2078-8665 E-ISSN: 2411-7986 Baghdad Science Journal

The use of robots in various fields of artificial intelligence and computer vision has increased significantly in recent years. Among them, the guide robot is worth mentioning, as it plays a very important role in guiding blind people and providing information. It guides people from one point to another, both indoors and outdoors, and estimates its own position in the environment through a process known as self-localization. The ability to operate efficiently and securely in changing surroundings is a crucial prerequisite for autonomous mobile robots. Expensive and sophisticated sensor systems such as GPS or laser range finders are frequently used in conventional self-localization techniques.
A glove specially designed for people who are blind or visually impaired is likely to include navigational aids or other features to improve their everyday lives and mobility 1. The self-localization of guide robots was the subject of the deep-learning approach suggested by Smith and Johnson in 2022; this strategy produced encouraging outcomes and concentrated on enhancing the robots' capacity to ascertain their own position in varied surroundings 2. Another line of work created a path-planning system suitable for indoor environments for visually impaired people 3, looking at creative ways to improve accessibility and mobility for blind people. A particular robotic device for blind people has been released by the well-known automation organization ARIEL: when a visually impaired person walks with the computerized device, ten ultrasonic sensors produce signals that assist them 4.
Research on diverse user interaction has proposed a novel approach for autonomous shopping carts 5. The various interactions of the shopping cart attract customers' attention and offer them a new experience. Another invention in guided robotic systems is the acoustic reflector localization system (ARLS), which is based on localizing an acoustic reflector 6: it identifies the image source and predicts the reflector's position.
A probabilistic framework was created for a robot that functions in a room-like setting 7. The robot uses semantic data about user activity to build an accurate metric map of the space, even though the motions of the user and robot were in completely different directions, and it effectively locates the user within a room using this method. A convolutional neural network (CNN)-based visual place recognition technique was also proposed 8. It showed how CNNs can extract distinguishing features from images 9-12, enabling a robot to identify previously visited locations and self-localize. Deep learning techniques have likewise been applied in vision-based localization: deep convolutional networks with pose regression accomplish precise self-localization even in difficult and dynamic situations 9, and image-based indoor localization techniques 13,14 combine visual odometry with deep neural networks. This illustrates how deep learning may help mobile robots self-localize accurately 15.
The primary objective is to create and deploy an innovative, efficient deep learning model that allows a guide robot to independently establish its own location inside a region using image classification, which is a difficult classification problem in itself. This solution uses the capabilities of computer vision and machine learning algorithms to let the guide robot understand its surroundings and correctly identify its position without relying on external infrastructure such as GPS or beacons.

Materials and Methods
The proposed research uses an image classification system that allows a guide robot to localize itself properly in indoor locations. The main objective is to help visually impaired people navigate indoors accurately 16. An indoor robot navigation system must find the robot's precise position in the environment. Traditional approaches to robot localization, such as GPS or laser-based techniques, have limitations indoors, making image classification a worthwhile alternative.

Guide Robot Concept
In today's world, an estimated 283 million visually impaired people face considerable mobility and travel obstacles, and navigating unfamiliar places is difficult for them. In recent years, however, there has been a surge in the development of assistive robotic devices designed to help visually impaired or blind people.
In an unusual approach, when a robot enters a room, a comprehensive 3D map of the surroundings is built from RGB-D video, so the robot can learn about the environment in which it works. By selecting certain frames from the recorded video and combining them with the 6D camera pose data, the robot can precisely identify its present location inside the space. After identifying the room and determining the viability of several routes leading to the intended location, the robot applies image classification 17.
The system's performance depends significantly on the accuracy and quality of the created 3D map. Four RGB-D sensors were positioned on top of the robot to provide wide-angle, thorough input. With this configuration, the robot can observe its surroundings from various angles, which enhances its ability to interpret them reliably and robustly.

Model Architecture
A deep learning-based strategy was used for self-localization via image classification. Given their impressive performance in image recognition tasks, convolutional neural networks (CNNs) are an appropriate solution. The CNN architecture consists of numerous convolutional layers followed by fully connected layers, and the final output layer predicts the robot's position among the identified locations.
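As a minimal sketch of the pattern just described (convolutional layers followed by fully connected layers, ending in a softmax over candidate locations), the forward pass can be illustrated in plain NumPy. This is not the authors' exact network; the image size, filter counts, and number of candidate rooms here are illustrative assumptions.

```python
# Illustrative forward pass: conv -> ReLU -> max-pool -> fully connected -> softmax.
# All shapes and filter counts are assumptions, not the paper's architecture.
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(x, kernels):
    """x: (H, W) image; kernels: (K, kh, kw) -> feature maps (K, H-kh+1, W-kw+1)."""
    K, kh, kw = kernels.shape
    H, W = x.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * kernels[k])
    return out

def max_pool2x2(x):
    """x: (K, H, W) with even H, W -> (K, H//2, W//2)."""
    K, H, W = x.shape
    return x.reshape(K, H // 2, 2, W // 2, 2).max(axis=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy 16x16 grayscale "room image", 4 conv filters, 3 candidate rooms.
image = rng.random((16, 16))
kernels = rng.standard_normal((4, 3, 3))
features = np.maximum(conv2d_valid(image, kernels), 0.0)  # ReLU activation
pooled = max_pool2x2(features)                            # (4, 7, 7)
flat = pooled.reshape(-1)                                 # flatten for dense layer
W_fc = rng.standard_normal((3, flat.size)) * 0.01         # fully connected weights
probs = softmax(W_fc @ flat)                              # one probability per room
```

The index of the largest entry in `probs` would be the predicted room, which is how the output layer "forecasts" the robot's location class.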

Model Training
The proposed CNN model was trained on the collected dataset using supervised learning.

Evaluation Metrics
To evaluate the performance of the self-localizing guide robot system, the trained model was tested on two different datasets using several evaluation metrics: precision, recall, and F1-score. These determined how accurately the model estimates the guide robot's position. A qualitative evaluation was also performed to analyze how accurately the guide robot was localized in its environment. The proposed deep learning image classification system finds the initial location of the guide robot and identifies the room correctly, and the model's accuracy shows that the result is acceptable.

Results and Discussion
The objective of this study is to develop an image classification system that lets a guide robot find its accurate position inside a room. The guide robot captures the required images through its panoramic camera and finds its position by processing them. A convolutional neural network (CNN) is used to extract image features and classify the images; in this way, the guide robot determines its position in the room.

Experimental Results
This research study trains a deep learning model to find a robot's accurate location inside a room. The main focus is for the robot to know its position in a room through image classification. The necessary images were collected with the onboard camera, and features were extracted through the convolutional neural network.
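Since the study builds two datasets from the same room captures, one above and one below chest height, the partitioning step can be sketched as follows. The file names, metadata layout, and the chest-height threshold are all illustrative assumptions, not values from the paper.

```python
# Hypothetical split of panoramic captures into the two datasets described
# in the study: images taken above vs. below chest height.
CHEST_HEIGHT_M = 1.2  # assumed threshold in metres, not from the paper

captures = [
    {"file": "room1_001.jpg", "camera_height_m": 1.5},
    {"file": "room1_002.jpg", "camera_height_m": 0.8},
    {"file": "room2_001.jpg", "camera_height_m": 1.3},
    {"file": "room2_002.jpg", "camera_height_m": 0.9},
]

# Each image lands in exactly one dataset based on its capture height.
above_chest = [c["file"] for c in captures if c["camera_height_m"] >= CHEST_HEIGHT_M]
below_chest = [c["file"] for c in captures if c["camera_height_m"] < CHEST_HEIGHT_M]
```

Training one model per partition then allows the accuracy of the two height ranges to be compared, as in Figures 2 and 3.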

Model Evaluation
The classifier was evaluated with precision, recall, and the average F1-score, standard metrics for assessing a classification model's performance. They are defined as follows.

Precision is the ratio of accurate positive predictions to all predicted positives:

Precision = TP / (TP + FP) ……………1

Recall is the ratio of accurate positive predictions to all actual positives:

Recall = TP / (TP + FN) ……………2

The F1-score is a single score that balances precision and recall, calculated as their harmonic mean:

F1 = 2 × (Precision × Recall) / (Precision + Recall) ……………3

where TP is the number of true positives, FP the number of false positives, and FN the number of false negatives. Accuracy compares predictions against the true labels and measures how many are correct. The classifier's average F1-score came in at 1, meaning the rooms were classified properly and the robot identified each room accurately. The model's evaluation metrics are shown in Table 1.
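The three metrics above can be computed per room label and then macro-averaged, which is one common way to obtain a single F1-score for a multi-room classifier. The room labels and predictions below are illustrative, not the paper's data.

```python
# Per-class precision, recall, F1 from Eqs. 1-3, plus the macro-averaged F1.
def precision_recall_f1(y_true, y_pred):
    """Return ({label: (precision, recall, f1)}, macro_f1)."""
    labels = sorted(set(y_true) | set(y_pred))
    scores = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0          # Eq. 1
        rec = tp / (tp + fn) if tp + fn else 0.0           # Eq. 2
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0  # Eq. 3
        scores[c] = (prec, rec, f1)
    macro_f1 = sum(f1 for _, _, f1 in scores.values()) / len(labels)
    return scores, macro_f1

# Hypothetical room predictions for six test images.
y_true = ["roomA", "roomA", "roomB", "roomB", "roomC", "roomC"]
y_pred = ["roomA", "roomA", "roomB", "roomC", "roomC", "roomC"]
scores, macro_f1 = precision_recall_f1(y_true, y_pred)
```

A perfect classifier, as reported for the trained model, would make `scores[c] == (1.0, 1.0, 1.0)` for every room and drive the macro F1 to 1.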

Conclusion
This study investigated how a robot's self-localization can be guided by image classification. The robot showed a promising ability to perceive its surroundings and establish its position using cutting-edge machine-learning algorithms. Further refinement and validation are required to enhance the technique's precision and its flexibility in real situations. Future research might improve the guide robot's self-localization through image classification by integrating sophisticated deep-learning approaches, such as deep reinforcement learning or attention mechanisms. Additionally, multimodal sensor fusion, such as fusing information from cameras and range sensors like LIDAR, may enhance the robot's spatial awareness and navigational skills.
Loss functions were minimized while training the model. Hyperparameter optimization was applied during training, tuning the batch size, number of epochs, learning rate, and related settings that govern how the model weights are updated. A GPU was also used to speed up computation and reduce the model training time.
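A generic mini-batch training loop illustrating these hyperparameters (batch size, number of epochs, learning rate) and the loss minimization can be sketched as below. As a stand-in for the CNN, it trains a linear softmax classifier on toy data by gradient descent on the cross-entropy loss; the data, dimensions, and hyperparameter values are all illustrative assumptions.

```python
# Mini-batch gradient descent minimizing cross-entropy loss.
# The linear model is a stand-in for the paper's CNN; only the training
# procedure and hyperparameters are being illustrated.
import numpy as np

rng = np.random.default_rng(42)
n, d, classes = 120, 10, 3
X = rng.standard_normal((n, d))          # toy "feature vectors"
y = rng.integers(0, classes, size=n)     # toy room labels

W = np.zeros((d, classes))
batch_size, epochs, lr = 16, 30, 0.1     # tunable hyperparameters

def loss_fn(W):
    """Mean cross-entropy of the softmax classifier over the whole set."""
    logits = X @ W
    logits = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(n), y]).mean()

first_loss = loss_fn(W)
for epoch in range(epochs):
    order = rng.permutation(n)                       # shuffle each epoch
    for start in range(0, n, batch_size):
        idx = order[start:start + batch_size]
        logits = X[idx] @ W
        logits = logits - logits.max(axis=1, keepdims=True)
        p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        p[np.arange(len(idx)), y[idx]] -= 1.0        # dL/dlogits
        W -= lr * X[idx].T @ p / len(idx)            # gradient step

final_loss = loss_fn(W)  # lower than first_loss after training
```

In the actual study the same loop structure would wrap the CNN's forward and backward passes, with the GPU accelerating the per-batch computation.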

Figure 2. Training accuracy and training loss of the dataset made above the chest height of room images

Figure 3. Training accuracy and training loss of the dataset made below the chest height of room images

Images below chest height tend to vary more, while those above chest height vary less. However, it is more likely that images above chest height will have fewer distinctive features. The key result of the experiment was that the model, implanted in a robot, accurately recognized a room from images of that room.