 Autonomous Navigation in Robots

Have you ever wondered how robots navigate an unknown environment and get themselves from point A to point B? You might be surprised to find that the concepts involved in this intriguing process are already familiar to you.

Let's take a dive into the basics of how robots perceive and navigate their worlds. Generally speaking, a few different components are required:

  • A source of depth data, such as a LIDAR or depth camera.
  • Some form of odometry, like wheel encoders or an IMU.
  • A computer to crunch numbers and make decisions.

Autonomous navigation has long been a subject of study in robotics, from early research and the first commercial robots capable of it in the 1980s, to the famous Roomba robotic vacuum cleaner, to cutting-edge technology developed by the military.

 

The Roomba robotic vacuum cleaner
Generally speaking, for a robot to navigate an unknown space, it needs to be aware of obstacles. To do this, some form of depth sensor must be present. Common choices include a 360 degree LIDAR, which measures the distance to obstacles all the way around the robot, or a depth camera like an Intel RealSense, which produces images in which each pixel encodes how far that point is from the camera.
 
The RGB and depth outputs of a depth camera
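To make this concrete, here is a minimal sketch in Python with NumPy of how a single 360 degree LIDAR scan, which arrives as a list of distances at known angles, can be turned into obstacle coordinates in the robot's own frame. The scan values and parameters here are made up for illustration and are not tied to any particular sensor driver.

```python
import numpy as np

# Hypothetical scan: 360 range readings (metres), one per degree.
# A real driver (for example a ROS LaserScan message) would supply these.
ranges = np.full(360, 5.0)           # pretend every reading is 5 m
ranges[80:100] = 1.2                  # an obstacle roughly to the front-left
angles = np.deg2rad(np.arange(360))   # beam angles in radians

# Keep only valid returns (sensors report inf or 0 for "no hit").
valid = np.isfinite(ranges) & (ranges > 0.05)

# Polar -> Cartesian: each beam becomes an (x, y) obstacle point
# expressed in the robot's coordinate frame.
xs = ranges[valid] * np.cos(angles[valid])
ys = ranges[valid] * np.sin(angles[valid])
obstacle_points = np.stack([xs, ys], axis=1)

print(obstacle_points.shape)  # (number_of_hits, 2)
```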

The output from these sensors is used to build a "map" of the space the robot is navigating. The map tells the robot where it can and can't go. It can then be modified to account for factors such as the robot's own dimensions, producing a costmap.
Costmap of an environment
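As a rough illustration of that inflation step, the sketch below (plain Python with NumPy, using a made-up grid, cell size, and robot radius) marks every cell near an obstacle as untraversable. Real systems, such as the ROS costmap_2d package, do this far more carefully with circular footprints and graded costs.

```python
import numpy as np

RESOLUTION = 0.05    # metres per grid cell (assumed)
ROBOT_RADIUS = 0.20  # metres (assumed)

# 0 = free, 1 = obstacle; a tiny hand-made occupancy grid for illustration.
grid = np.zeros((50, 50), dtype=np.uint8)
grid[20:23, 30:33] = 1

# Inflate: mark every cell inside a robot-radius-sized square window around
# each obstacle as untraversable, since the robot's body would clip it.
# (A crude square approximation of the robot's footprint.)
radius_cells = int(np.ceil(ROBOT_RADIUS / RESOLUTION))
costmap = grid.copy()
for r, c in np.argwhere(grid == 1):
    r0, r1 = max(r - radius_cells, 0), min(r + radius_cells + 1, grid.shape[0])
    c0, c1 = max(c - radius_cells, 0), min(c + radius_cells + 1, grid.shape[1])
    costmap[r0:r1, c0:c1] = 1

print(costmap.sum(), "cells are now marked as untraversable")
```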

The costmap shows which parts of the map can and cannot be visited. Armed with this knowledge, the computer can calculate a route using a path planning algorithm such as A*, which produces a path that takes into account both path length and obstacles.
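A production-grade planner is beyond the scope of this article, but the toy sketch below (plain Python, a 4-connected grid, and made-up start and goal cells) shows the core idea of A*: always expand the cheapest-looking cell first, where "cheapest" is the cost travelled so far plus an estimate of the remaining distance to the goal.

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on a 0/1 grid (1 = blocked cell)."""
    def h(cell):  # heuristic: Manhattan distance to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start)]   # entries are (f = g + h, g, cell)
    came_from, best_g = {}, {start: 0}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:                  # reached the goal: rebuild the path
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue
            if grid[nxt[0]][nxt[1]] == 1:    # blocked cell in the costmap
                continue
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                came_from[nxt] = cur
                heapq.heappush(open_set, (ng + h(nxt), ng, nxt))
    return None  # no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # route around the wall of blocked cells
```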
 
The role of the odometry source is to track the robot's movement, specifically its linear and angular velocities and its current position and orientation. This, combined with the depth information, can be used to estimate the robot's pose with respect to its environment.
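As a very small illustration of that tracking, here is a sketch (plain Python; the velocities and time step are made up) of dead reckoning from measured linear and angular velocity, which is essentially what wheel-encoder odometry does between sensor updates.

```python
import math

# Hypothetical pose: x, y in metres, theta in radians.
x, y, theta = 0.0, 0.0, 0.0

dt = 0.1  # seconds between odometry updates (assumed)

# Pretend the encoders/IMU report these velocities for 20 steps:
# drive forward at 0.5 m/s while turning at 0.2 rad/s.
for _ in range(20):
    v, w = 0.5, 0.2  # linear (m/s) and angular (rad/s) velocity
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt

print(f"estimated pose: x={x:.2f} m, y={y:.2f} m, theta={math.degrees(theta):.1f} deg")
```

In a real robot this estimate drifts over time, which is why it is fused with the depth data rather than trusted on its own.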

In actual applications, software implementing all of this behavior is usually already written and only needs modification to work with a particular robot or environment; a well-known example is the ROS navigation stack.
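For example, with the ROS 1 navigation stack a goal pose can be sent to the move_base node through its action interface. The sketch below assumes a robot that is already running move_base with a map frame named "map"; the node name and goal coordinates are purely illustrative.

```python
#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("send_nav_goal")

# Connect to the move_base action server provided by the navigation stack.
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

# Ask the planner to drive the robot to (2.0, 1.0) in the map frame.
goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0
goal.target_pose.pose.position.y = 1.0
goal.target_pose.pose.orientation.w = 1.0  # face along the map's x axis

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("Navigation finished with state %d", client.get_state())
```

The navigation stack then handles the mapping, costmap generation, path planning, and velocity commands described above, while this script only specifies where the robot should end up.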

This article was authored by Zahran Sajid, the research head at ACM VITCC.

References:
  1. http://wiki.ros.org/
  2. https://en.wikipedia.org/wiki/Autonomous_robot#Autonomous_navigation

 
