Sensing in Space

A HUMANOID ROBONAUT "SEES" WITH TWO SENSOR CAMERAS.

Built in partnership with General Motors (GM) and Oceaneering Space Systems of Houston, NASA's Robonaut 2 (R2) is the second generation of highly dexterous humanoid robots designed to work alongside humans and execute simple, repetitive or dangerous tasks on Earth or on board the International Space Station (ISS). Developed in 1997, the first-generation Robonaut (R1) was a human-like robotic assistant capable of performing simple maintenance tasks. Its successor, R2, is a fully modular, highly dexterous 300-pound robot consisting of a head and a torso with two arms and two hands.

R2's technological improvements include an expanded range of sensors, featuring two Prosilica GC2450 color cameras from Allied Vision Technologies (AVT, Burnaby, Canada) and an infrared time-of-flight (TOF) camera. Capable of speeds more than four times faster than R1, R2 carries a total of 350 sensors for tactile, force, position, range-finding and vision sensing, along with 38 PowerPC processors that enable functions such as object recognition and manipulation. R2 also is able to react to its surroundings and operate semi-autonomously. Other technological improvements include an optimized overlapping dual-arm dexterous workspace, series elastic joint technology, extended finger and thumb travel, miniaturized six-axis load cells, redundant force sensing, ultra-high-speed joint controllers and extreme neck travel. With 42 degrees of freedom, including 24 in its hands and fingers alone, R2's dexterity allows it to use the same tools as astronauts, removing the need for robot-specific tools.

REAL-TIME VISION RECOGNITION

R2's vision equipment is housed inside its helmet. The system uses color, pixel-intensity and texture-based segmentation, as well as advanced pattern recognition techniques, to extract the necessary information.
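Color-based segmentation of the kind described above can be sketched in a few lines. This is a minimal illustration only, not R2's actual software: the target color, tolerance and tiny synthetic image are assumptions made for the example.

```python
import numpy as np

def segment_by_color(rgb, target, tol=30):
    """Label pixels whose RGB color lies within `tol` of a target color.

    rgb    -- (H, W, 3) uint8 image
    target -- length-3 reference color to match
    Returns a boolean (H, W) mask of matching pixels.
    """
    diff = rgb.astype(np.int16) - np.asarray(target, dtype=np.int16)
    dist = np.abs(diff).max(axis=-1)  # per-pixel Chebyshev distance to target
    return dist <= tol

# Tiny synthetic image: one reddish pixel on a dark background.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 10, 10)
mask = segment_by_color(img, target=(210, 0, 0), tol=30)
print(mask)  # True only at position (0, 0)
```

A production system would combine several such cues (color, intensity, texture) and feed the resulting regions to a pattern-recognition stage.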
To simplify the procedure, the system focuses on specific areas of the image using regions of interest (ROI). An ROI is a function that allows only a selected portion of the available pixels to be read out from the camera, resulting in a faster frame rate and less data to process. In addition, the TOF sensor data allows the background to be removed in order to focus on the object of interest. Built-in classification techniques within the software perform three-dimensional (3-D) and pattern-recognition functions in real time, allowing R2 to compute feasible trajectories and decide where to place its hands to execute a set of pre-determined tasks, such as opening boxes autonomously. The software used by the system is MVTec's HALCON 9.0; MVTec is an AVT software partner.

R2'S MISSION

R2 underwent a series of rigorous tests prior to its launch on space shuttle Discovery in February 2011. During stage one, R2 was hard-mounted and stationed in the Destiny laboratory on board the ISS, where it was monitored while executing tasks and operations similar to those performed on Earth. If successful, R2 could move on to stage two of the mission and become mobile to perform station maintenance tasks such as vacuuming or cleaning filters. The ultimate goal is to send R2 outside the ISS during stage three to perform dangerous extravehicular activity (EVA) tasks. There are no plans to return R2 to Earth.

BENEFITS

- R2's dexterity allows it to use the same tools as astronauts, removing the need for robot-specific tools.
- Built-in classification techniques within the software perform 3-D and pattern-recognition functions in real time.
- R2 has 42 degrees of freedom.
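The ROI readout and TOF-based background removal described above can be sketched as follows. The ROI is emulated here by cropping an array (a real camera reads out only those pixels), and the depth threshold and synthetic frame are illustrative assumptions, not values from R2's system.

```python
import numpy as np

def read_roi(frame, x, y, w, h):
    """Crop a region of interest. On the camera itself, reading out only
    this window reduces data volume and raises the achievable frame rate."""
    return frame[y:y + h, x:x + w]

def remove_background(color_roi, depth_roi, max_range_m=1.0):
    """Zero out pixels whose TOF depth exceeds max_range_m, keeping only
    the nearby object of interest. The 1 m cutoff is an assumed value."""
    near = depth_roi <= max_range_m              # boolean foreground mask
    return color_roi * near[..., None]           # broadcast over channels

# Synthetic 4x4 frame: a near object (~0.5 m) fills the top-left 2x2 block,
# while the rest of the scene sits ~3 m away.
frame = np.full((4, 4, 3), 100, dtype=np.uint8)
depth = np.full((4, 4), 3.0)
depth[:2, :2] = 0.5

roi = read_roi(frame, 0, 0, 4, 4)
fg = remove_background(roi, read_roi(depth, 0, 0, 4, 4))
```

After this step, only the foreground pixels remain for the classification stage, which is what lets the recognition functions run in real time on limited hardware.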
Published by Quality Magazine.