
MFEX: Microrover Flight Experiment

Rover Control Workstation

System Creator: Brian K. Cooper


This set of software, running on a Silicon Graphics Onyx2 graphics supercomputer, allows the rover uplink team to generate sequences of commands for the rover to execute. It provides an easy-to-use graphical user interface to the numerous available rover commands. The interface consists of windowed screens that contain mouse-clickable buttons, sliders, and text input areas.

[Image: top_sm.jpg — the main Rover Control Workstation program window]

We first select the desired rover command from the selection area and input its parameters. A command sequence is built up in this way and appears in the command sequence area. The image above shows the main Rover Control Workstation program window.
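As a rough illustration of the sequence-building step described above, the sketch below accumulates commands with their parameters in execution order. The command names, parameter names, and listing format are all hypothetical; the actual MFEX command set is not documented here.

```python
# Hypothetical sketch of building a rover command sequence.
# Command names and parameters are illustrative, not actual MFEX commands.

class CommandSequence:
    """Accumulates rover commands in the order they will execute."""

    def __init__(self):
        self.commands = []

    def add(self, name, **params):
        """Append one command, as if chosen from the selection area."""
        self.commands.append({"name": name, "params": params})

    def render(self):
        """Produce a numbered listing like the command sequence area."""
        return "\n".join(
            f"{i:03d} {c['name']} "
            + " ".join(f"{k}={v}" for k, v in c["params"].items())
            for i, c in enumerate(self.commands, start=1)
        )

seq = CommandSequence()
seq.add("GO_TO_WAYPOINT", x=1.5, y=-0.8)   # meters, lander frame (assumed)
seq.add("TURN", heading=90)                # degrees (assumed)
print(seq.render())
```

The whole listing would then be uplinked as one block, matching the once-a-day upload cycle described later on this page.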

[Image: rcw1_sm.jpg — the stereo rover-driving display]

The Rover Driver (Brian K. Cooper, primary; Jack Morrison, backup) uses the screen shown above to visualize the surface of Mars from images taken by the lander's IMP camera. He wears special battery-powered goggles that present a separate image to each eye, allowing him to see the scene in 3D. A unique joystick called a Spaceball is used to move a model of the rover on the screen (also in stereo) so that the rover model looks just as it would on Mars. The system continuously calculates the coordinates of the rover model (x, y, z in meters, plus range and heading), and these are used to tell the rover where to go. As you can see in the image above, the rover model is sitting on a small hill (this was from a test we conducted at JPL, not on Mars). Giant "lawn darts" are used as icons to show where we want the rover to go. These locations are added to the rover sequence and uploaded to the rover for it to perform. We typically create a long sequence of commands for the rover and upload it once a day.
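The range and heading mentioned above follow directly from the model's position in a lander-centered frame. A minimal sketch, assuming heading is measured clockwise from the +y ("north") axis (the actual RCW convention is not stated on this page):

```python
import math

def range_and_heading(x, y):
    """Derive straight-line range (m) and heading (deg) from an (x, y)
    position in an assumed lander-centered frame.

    Assumption: heading measured clockwise from the +y axis, 0..360 deg.
    """
    rng = math.hypot(x, y)                            # Euclidean distance
    heading = math.degrees(math.atan2(x, y)) % 360.0  # 0..360 from +y
    return rng, heading

rng, hdg = range_and_heading(3.0, 4.0)
print(f"range={rng:.2f} m, heading={hdg:.1f} deg")  # range=5.00 m, heading=36.9 deg
```

Z is obtained separately, by dropping the model onto the terrain surface, so only x and y enter the range/heading calculation here.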

[Image: rcw2_sm.jpg — the virtual-camera view of the 3D terrain model]

The Rover Control Workstation also provides a "virtual reality" style interface for viewing the surface of Mars from any location and angle around the lander and for creating rover waypoints, or goals. The rover driver can "fly" a virtual camera over the lander and zoom in on any terrain feature for a close-up look. This is accomplished by processing the stereo images from the lander camera to create a 3D terrain model, which is displayed in real time, along with models of the lander and rover, all in stereo. This allows the driver to make decisions about the traversability of the terrain and to watch out for hazards the rover should avoid.
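At the heart of turning a stereo pair into a terrain model is the pinhole stereo relation: depth is proportional to focal length times baseline, divided by the pixel disparity between the two images. A minimal sketch, with made-up camera parameters (the IMP's actual focal length and stereo baseline are not given here):

```python
def depth_from_disparity(disparity_px, focal_px=1000.0, baseline_m=0.15):
    """Pinhole stereo model: depth = f * b / d.

    focal_px and baseline_m are illustrative values only, not the
    IMP camera's real parameters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature shifted 30 pixels between the left and right images:
print(depth_from_disparity(30.0))  # 5.0 (meters)
```

Applying this per matched pixel yields a cloud of 3D points that can be meshed into the real-time terrain surface the driver flies over.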




Web Page Author: Brian K. Cooper

All information on this site, including text and images describing the Rover Control Workstation is copyright © 1997, Jet Propulsion Laboratory, California Institute of Technology and the National Aeronautics and Space Administration.