In 1962, US President John F. Kennedy declared, “We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard.” Seven years later, Apollo 11 astronauts Neil Armstrong and Buzz Aldrin took the first steps on the lunar surface. We are now celebrating 50 years since that moon landing!
The time has come to do the other things, including putting the first human on Mars. That goal is far more ambitious and longer-term than the lunar landing. But the last 50 years have also brought an enormous increase in science, technology, and experience (including continuous human presence aboard the International Space Station for over 18 years) that could put a human Mars mission within reach.
This means putting greater emphasis on STEM so that the next generation is fully trained and prepared for missions to Mars and deep space. As part of this training, we are building a rover able to Move Autonomously using Real-time Stereo vision (MARS), to prepare the base for a human presence and enable scientific exploration of the Martian surface once the astronauts land.
Our Moving Autonomously using Real-time Stereo vision (MARS) rover has four functions:
Map objects in its surroundings
Become aware of its location within those surroundings
Identify what those objects are (obstacles or objects of interest)
Plan a path from its current position to a final position while avoiding obstacles and passing through objects of interest
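To illustrate the fourth function, here is a minimal path-planning sketch over an occupancy grid. This is not the rover's actual planner; it is a hypothetical breadth-first search that finds a shortest obstacle-free route between two cells, the simplest version of the "avoid obstacles" behaviour described above.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over an occupancy grid.

    grid[r][c] == 1 marks an obstacle cell; 0 is free.
    Returns a shortest list of (row, col) cells from start
    to goal, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    parent = {start: None}          # also serves as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the parent links back to start, then reverse.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# Example: detour around a wall of obstacles in the middle row.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))
```

On a real rover the grid would be populated from the depth camera's obstacle map, and a cost-aware planner such as A* would typically replace plain BFS.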
We want the rover to function autonomously and make intelligent decisions in a GPS-denied environment. This would be standard procedure for all autonomous rovers meant for deep space exploration, as the communication time between the rover and Earth could range from several minutes (in the case of Mars) to 45 minutes for Saturn and 4.5 hours for Pluto.
The rover is four-wheel drive, which provides greater control and maneuverability when turning left or right. It also makes it easier for the robot to navigate uneven or rocky terrain and to free itself if any of the wheels falls into a ditch.
With all wheels powered, the robot can carry heavier loads, including a laptop and science and geology equipment. Power is provided by a 24V LiPo battery, and the rover's four IG42 motors are controlled through a Sabertooth motor driver.
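Sabertooth drivers accept a packetized serial protocol in which each command is a four-byte frame: an address byte, a command byte, a data byte, and a checksum equal to the low seven bits of the sum of the first three. As a sketch of how the drive commands could be built (the frame layout follows Sabertooth's documented packetized serial mode; the function name and the example address/command values are illustrative):

```python
def sabertooth_packet(address, command, data):
    """Build a 4-byte packetized-serial frame for a Sabertooth driver.

    address: driver address byte (128-135 in packetized serial mode)
    command: command byte (e.g. 0 = drive motor 1 forward)
    data:    0-127 speed value
    The checksum is the low 7 bits of the sum of the other bytes.
    """
    checksum = (address + command + data) & 0x7F
    return bytes([address, command, data, checksum])

# Half speed forward on motor 1 of the driver at address 128.
packet = sabertooth_packet(128, 0, 64)
```

On the rover this frame would be written to the serial port connected to the driver (e.g. with `pyserial`'s `Serial.write`).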
Simultaneous Localization and Mapping (SLAM)
We are using two Intel RealSense cameras: the Depth Camera D435 and the Tracking Camera T265. The Depth Camera (D435) uses stereo vision to calculate the depth of objects around it. This provides information about the distance of the rover from an obstacle or an object of interest.
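The principle behind stereo depth is triangulation: a point's depth is the camera's focal length times the baseline between the two imagers, divided by the disparity (the horizontal pixel shift of that point between the left and right images). A tiny sketch of that relationship, with illustrative numbers rather than the D435's actual calibration values:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from stereo disparity: Z = f * B / d.

    focal_px:     focal length in pixels
    baseline_m:   distance between the two imagers in metres
    disparity_px: horizontal pixel shift between left/right views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A 16-pixel disparity with a 640 px focal length and 5 cm baseline
# corresponds to a point 2 m away.
depth_m = depth_from_disparity(640.0, 0.05, 16.0)
```

In practice the RealSense SDK performs this computation on-device and simply returns per-pixel depth (e.g. `depth_frame.get_distance(x, y)` in `pyrealsense2`).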
The Tracking Camera (T265) has two fisheye cameras. It uses visual cues, or landmarks, to help the rover locate itself within its surroundings and ultimately navigate the environment.
We are able to get depth and location information using the Intel RealSense SDK together with Visual Studio C++. However, we still have a learning curve to climb before full SLAM is possible on our rover.
Object Identification: Obstacle or Object of Interest?
We are running Google TensorFlow on the live video stream coming from the RGB camera to identify objects. This will enable the rover to classify an object either as an obstacle, such as a sharp rock or a deep crater (to be avoided), or as an object of interest, such as an interesting rock or soil surface (to be navigated to). We are using the TensorFlow Object Detection API and will be training our model on different surfaces, possibly using data from NASA's Lunar Reconnaissance Orbiter (LRO), Mars Reconnaissance Orbiter, and Mars Science Laboratory (Mars Curiosity rover).
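Once the detector returns a label and a confidence score, the rover still has to turn that into a navigation decision. A minimal sketch of that decision step, where the label sets and the score threshold are hypothetical placeholders for whatever classes we end up training:

```python
# Hypothetical label sets; the real classes depend on our trained model.
OBSTACLE_LABELS = {"sharp rock", "crater"}
INTEREST_LABELS = {"layered rock", "soil patch"}

def navigation_decision(label, score, min_score=0.5):
    """Map one detection to a rover action.

    Detections below min_score are ignored; known obstacles are
    avoided and known objects of interest are approached.
    """
    if score < min_score:
        return "ignore"
    if label in OBSTACLE_LABELS:
        return "avoid"
    if label in INTEREST_LABELS:
        return "approach"
    return "ignore"

# A confident "sharp rock" detection should be avoided.
decision = navigation_decision("sharp rock", 0.92)
```

These decisions would then feed the path planner as obstacle cells to route around, or as waypoints to route through.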
We will keep updating our progress with this rover on this page.
Winners: Micro:bit Challenge North America Runners Up 2020. NASA SpaceApps 2019, 2018, 2017, 2014. Imagining the Skies 2019. Jesse Ketchum Astronomy Award 2018. Hon. Mention at 2019 NASA Planetary Defense Conference. Emerald Code Grand Prize 2018. Canadian Space Apps 2017.