In 1962, US President John F. Kennedy declared, “We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard.” And seven years later, Apollo 11 astronauts Neil Armstrong and Buzz Aldrin took the first steps on the lunar surface. We are now celebrating the 50th anniversary of the Moon landing!

Our MARS Rover

The time has come to do the other things, including putting the first human on Mars. The goal is far more ambitious and longer-term than the lunar landing. But over the last 50 years there has also been a manifold increase in science, technology, and experience (including continuous human presence aboard the International Space Station for over 18 years), which could bring a human Mars mission within reach.

Front Side (Uses 2 Stereo Cameras for Depth, Tracking and Navigation)

This means putting greater emphasis on STEM so that the next generation is fully trained and prepared for missions to Mars and deep space. As part of this training, we are building a rover that would be able to Move Autonomously using Real-time Stereo vision (MARS) to prepare the base for human presence and enable scientific exploration of the Martian surface once the astronauts land.

The back side of the 4-wheel-drive rover

Our Moving Autonomously using Real-time Stereo vision (MARS) Rover has 4 functions:

  1. Map objects in its surroundings
  2. Become aware of its location within those surroundings
  3. Identify what those objects are (obstacles or objects of interest)
  4. Plan a path from its current position to its final position while avoiding obstacles and passing by objects of interest (see the sketch after this list)
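
As a rough illustration of the last function, here is a toy Python sketch of path planning as an A* search over an occupancy grid. The grid, obstacle cells, start, and goal below are made-up placeholders; the actual planner would run on the map built from the stereo cameras and would also favour cells near objects of interest so the path passes by them.

```python
import heapq

def astar(grid, start, goal):
    """A* search on a 2D occupancy grid.
    grid[r][c] == 1 means the cell is blocked (an obstacle), 0 means free.
    Returns the list of cells from start to goal, or None if no path exists."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, None)]   # entries are (f, g, cell, parent)
    came_from, g_score = {}, {start: 0}
    while open_set:
        f, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:                 # already expanded with a better cost
            continue
        came_from[cell] = parent
        if cell == goal:                      # reconstruct the path by walking parents
            path = [cell]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_score.get((nr, nc), float("inf")):
                    g_score[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cell))
    return None

# Toy 5x5 map: 1 = obstacle (e.g. a sharp rock), 0 = free ground
grid = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0]]
print(astar(grid, (0, 0), (4, 4)))
```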

We want the rover to function autonomously and make intelligent decisions in a GPS-denied environment. This would be the standard procedure for all autonomous rovers meant for deep space exploration, as real-time control from Earth is not possible: the one-way communication time between the rover and Earth ranges from several minutes (in the case of Mars) to about 45 minutes for Saturn and 4.5 hours for Pluto.

Rover

Arushi wiring the Motor Controller for the Rover

The rover uses 4-wheel drive to provide greater control and maneuverability when turning left or right. It also makes it easier for the robot to navigate uneven or rocky terrain, and to free itself in case any of the wheels falls into a ditch.

With all wheels powered, the robot can carry heavier loads, including a laptop and science and geology equipment. Power to the rover is provided by a 24V LiPo battery. It has 4 IG42 motors, which are controlled using a Sabertooth motor driver.
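
As an illustration of the drive commands, here is a minimal Python sketch using pyserial, assuming the Sabertooth is configured for its simplified serial mode and that the left and right pairs of motors share its two channels; the serial port name is just a placeholder for wherever the driver is connected on our setup.

```python
import serial  # pyserial

# Placeholder port name: whichever serial port the Sabertooth's input is wired to
saber = serial.Serial("/dev/ttyUSB0", 9600)

def drive(left, right):
    """Command the two sides of the rover.
    left/right range from -1.0 (full reverse) to +1.0 (full forward).
    In simplified serial mode, bytes 1-127 drive motor 1 (64 = stop) and
    bytes 128-255 drive motor 2 (192 = stop); byte 0 stops both motors."""
    m1 = int(64 + 63 * max(-1.0, min(1.0, left)))
    m2 = int(192 + 63 * max(-1.0, min(1.0, right)))
    saber.write(bytes([m1, m2]))

drive(0.5, 0.5)    # drive forward at half speed
drive(0.5, -0.5)   # spin in place to turn
drive(0.0, 0.0)    # stop
```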

Simultaneous Localization and Mapping (SLAM)

We are using 2 Intel RealSense Cameras: the Depth Camera D435 and the Tracking Camera T265. The Depth Camera (D435) uses stereo vision to calculate the depth of objects around it. This provides information about the distance of the rover from an obstacle or an object of interest.
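
For illustration, here is a minimal sketch of reading a single depth value through the RealSense Python wrapper (pyrealsense2); the stream resolution and frame rate are just example settings.

```python
import pyrealsense2 as rs

# Start the D435 depth stream (640x480 at 30 fps as an example)
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Distance in metres to whatever lies at the centre pixel of the image
    centre_distance = depth.get_distance(320, 240)
    print("Object at image centre is %.2f m away" % centre_distance)
finally:
    pipeline.stop()
```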

Arushi testing the various camera streams from the Rover

The Tracking Camera (T265) has 2 fisheye cameras. It uses visual cues or landmarks to help the rover map its location within its surroundings and ultimately enables the rover to navigate the environment.

We are able to get depth and location information using the Intel RealSense SDK and Visual Studio C++. However, we still need to climb a steep learning curve to make full SLAM possible on our rover.
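
In the same spirit, here is a minimal pyrealsense2 sketch of reading the rover's pose from the T265; the reported translation is relative to wherever the camera started tracking.

```python
import pyrealsense2 as rs

# Start the T265 pose stream
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.pose)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    pose = frames.get_pose_frame()
    if pose:
        data = pose.get_pose_data()
        # Position of the camera (and hence the rover) relative to its starting point, in metres
        print("x:", data.translation.x,
              "y:", data.translation.y,
              "z:", data.translation.z,
              "tracker confidence:", data.tracker_confidence)
finally:
    pipeline.stop()
```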

Getting inputs from all the 5 camera streams (RGB + Depth Left and Right + Tracking Left and Right)

Object Identification: Obstacle or Object of Interest?

We are running Google TensorFlow on the live video stream coming from the RGB camera to identify objects. This would enable the rover to classify each object either as an obstacle, such as a sharp rock or a deep crater (to be avoided), or as an object of interest, such as an interesting rock or soil surface (to be navigated to). We are using the TensorFlow Object Detection API and will be training our algorithm on different surfaces, possibly using data from NASA’s Lunar Reconnaissance Orbiter (LRO), the Mars Reconnaissance Orbiter, and the Mars Science Laboratory (the Curiosity rover).
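
As a sketch of the inference step, the snippet below assumes a detector trained with the Object Detection API and exported as a TensorFlow SavedModel; the model path and score threshold are placeholders, and how class IDs map to "obstacle" versus "object of interest" depends on the labels we train with.

```python
import numpy as np
import tensorflow as tf

# Placeholder path to a model exported with the TensorFlow Object Detection API
detect_fn = tf.saved_model.load("exported_model/saved_model")

def detect(frame_rgb, score_threshold=0.5):
    """Run the detector on one RGB frame (an HxWx3 uint8 numpy array) and
    return (box, class_id, score) for every detection above the threshold."""
    input_tensor = tf.convert_to_tensor(frame_rgb[np.newaxis, ...], dtype=tf.uint8)
    detections = detect_fn(input_tensor)
    boxes = detections["detection_boxes"][0].numpy()      # normalised [ymin, xmin, ymax, xmax]
    classes = detections["detection_classes"][0].numpy().astype(int)
    scores = detections["detection_scores"][0].numpy()
    return [(box, cls, score)
            for box, cls, score in zip(boxes, classes, scores)
            if score >= score_threshold]
```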

Artash working on Tensorflow so that the Rover can make intelligent decisions for navigation

We will keep updating our progress with this rover on this page.
