NASA project could be next big thing in video games

The video game’s description promised that two doctors were about to be swallowed by the jaws of fate.

Watching the action on the screen, Alan Pope said, somewhat unnecessarily, “You want to be careful with the scalpel when it’s jumping around.”

“This is really tricky,” replied Chad Stephens, studying the chest he was about to cut open.

The NASA research scientist wore a headset with sensors that monitored his brain waves. As he focused, his brain waves changed frequency, indicated by changing colors of LED lights below the screen. They flickered between red and yellow, then steadied on green. Stephens made a sudden movement with his hand.

“Oh, that’s good!” Pope exclaimed. A robotic voice from the game “Trauma Center: New Blood” intoned, “The incision has been made.”

Pope has a master’s degree in electrical engineering and a doctorate in clinical psychology. Stephens is working on his psychology dissertation. Both work in aeronautics at NASA’s Langley Research Center, where their core research on cockpit design has led to an interesting spinoff: a biofeedback device that makes video games harder for tense or distracted players.

The application is the latest in Pope’s research on brain wave monitoring. His studies on cockpit automation in the 1990s led to the creation of attention-training games by a North Carolina company.

It’s all part of NASA’s transfer of tax-funded research into the private sector for development of new products and uses that benefit the public as well as the space agency. Langley research has been used by Speedo to create faster racing swimsuits, by Medtronics Inc. to stabilize heart-failure patients, and by NASCAR to clean air inside race cars.

NASA research can be found in medical devices, industrial smokestack scrubbers, panoramic cameras, solar refrigerators, emergency air supplies for coal miners, parachutes for airplanes and more. The agency says its research has been used to save 444,000 lives since 2004, and has generated $5.1 billion for the economy.

The biofeedback technology is commercially available through NASA’s Technology Gateway. Its inventors say it could be used in anything from video games to training simulations for doctors or athletes – wherever improved concentration would also improve performance.

“We didn’t invent the Wii, and we didn’t create the game,” Stephens said, “but we figured out how we could modify the controls to serve as feedback to the user.”
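Stephens’s description of modifying stock controls suggests a simple feedback loop: sample a physiological signal, reduce it to a normalized focus index, and scale the controller’s noise accordingly. The sketch below is purely illustrative; the function name, the 0-to-1 focus index, and the jitter scale are assumptions for this article, not NASA’s actual MindShift code.

```python
import random

def jittered_cursor(x, y, focus, max_jitter=40.0, seed=None):
    """Return a cursor position perturbed in proportion to lost focus.

    `focus` is a normalized attention index in [0, 1]: 1.0 means fully
    focused (no jitter), 0.0 fully distracted (maximum jitter). All
    names and scales here are illustrative, not MindShift's real code.
    """
    rng = random.Random(seed)
    # Clamp focus to [0, 1], then invert it to get a jitter amplitude.
    amplitude = max_jitter * (1.0 - max(0.0, min(1.0, focus)))
    return (x + rng.uniform(-amplitude, amplitude),
            y + rng.uniform(-amplitude, amplitude))

# A fully focused player's cursor lands exactly where they aim;
# a distracted player's cursor wanders up to 40 pixels off target.
steady = jittered_cursor(100, 100, focus=1.0)
shaky = jittered_cursor(100, 100, focus=0.0, seed=1)
```

In the surgery and crossbow demonstrations described here, the same idea applies in reverse: as the measured calm and focus rise, the jitter amplitude falls to zero and the scalpel or crosshair steadies.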

In April, NASA obtained a patent for using the technology, called MindShift, on the Wii gaming system. An application has been submitted for Kinect.

“If a company was interested, it could involve this biofeedback mechanism into a game,” Stephens said. “For example, a wizard could use his mental powers to go through the game. Maybe use it in a challenge here and there to demonstrate his skill.

“We could measure fear and have that impact the game in some way. It has the potential to involve a player’s real emotion into the game.”

NASA uses the technology to assess new cockpit designs by monitoring brain waves, heart rate and eye movement of pilots in a simulator.

“We want to be able to sense what kind of state they’re in and provide feedback on a continuous basis so they can understand how their state affects their performance,” Pope said. “If we design something that takes up too much of their attention or doesn’t draw their attention, we probably have poor design.”

Pilots can be taught to focus through standard biofeedback training, where the subject tries to control his heart rate, for example, while moving a line or a bar on a monitor. That can be boring, Pope said. Many of the test pilots enjoyed golf, so the researchers incorporated the biofeedback training into a game where being “in the zone” – mind in perfect sync with body – is important.

They tested a putting green that reacted to a player’s brain waves. When stressed or unfocused, the biofeedback mechanism caused the indoor putting green to undulate, the hole to shrink and a targeting laser to swing back and forth. By relaxing and focusing on the task at hand, players could sink the putt more easily.

Then the researchers tried it with video games, which have real-world possibilities, too.

“Playing a surgery game with an unsteady hand is challenging,” Pope said. “You could imagine this as a surgeon in training learning how to focus, while it teaches the manual skill of surgery.”

They booted up “Link’s Crossbow Training” on the Wii. Pope put on the headset, with a clip on his left earlobe to track his pulse. The cursor jumped around, and he couldn’t aim until his heart rate slowed.

He picked off a single target on the first screen and two targets on the next. Then five targets popped up at once, and the cursor jumped around so he couldn’t aim.

“Got a little excited there,” Pope said. As he calmed down and focused, the cursor steadied and he knocked down the targets one after the other.

“We think this is the next stage beyond motion sensing games,” he said. “That’s why we call it MindShift. We wanted developers to think about games in different ways. That would require a mind shift.”



Web Solutions Inspire Cloud Computing Software

NASA Technology

In 2008, a NASA effort to standardize its websites inspired a breakthrough in cloud computing technology. The innovation has spurred the growth of an entire industry in open source cloud services that has already attracted millions in investment and is currently generating hundreds of millions in revenue.

William Eshagh was part of the project in the early days, when it was known as NASA.net. “The feeling was that there was a proliferation of NASA websites and approaches to building them. Everything looked different, and it was all managed differently—it was a fragmented landscape.”

NASA.net aimed to resolve this problem by providing a standard set of tools and methods for web developers. The developers, in turn, would provide design, code, and functionality for their project while adopting and incorporating NASA’s standardized approach. Says Eshagh, “The basic idea was that the web developer would write their code and upload it to the website, and the website would take care of everything else.”

Even though the project was relatively narrow in its focus, the developers soon realized that they would need bigger, more foundational tools to accomplish the job. “We were trying to create a ‘platform layer,’ which is the concept of giving your code over to the service. But in order to build that, we actually needed to go a step deeper,” says Eshagh.

That next step was to create an “infrastructure service.” Whereas NASA.net was a platform for dealing with one type of application, an infrastructure service is a more general tool with a simpler purpose: to provide access to computing power. While such computing power could be used to run a service like NASA.net, it could also be used for other applications.

Put another way, what the team came to realize was that they needed to create a cloud computing service. Cloud computing is the delivery of software, processing power, and storage over the Internet. Whether these resources are as ordinary as a library of music files or as complex as a network of supercomputers, cloud computing enables an end user to control them remotely and simply. “The idea is to be able to log on to the service and say ‘I want 10 computers,’ and within a minute, I can start using those computers for any purpose whatsoever,” says Eshagh.

As the scope of the project expanded, NASA.net came to be known as Nebula. Much more than setting standards for Agency web developers, Nebula was intended to provide NASA developers, researchers, and scientists with a wide range of services for accessing and managing the large quantities of data the Agency accumulates every day. This was an enormous undertaking that only a high-powered cloud computing platform could provide.

Raymond O’Brien, former program manager of Nebula, says the project was in some ways ahead of its time. “Back in 2008 and 2009, people were still trying to figure out what ‘cloud’ meant. While lots of people were calling themselves ‘cloud enabled’ or ‘cloud ready,’ there were few real commercial offerings. With so little clarity on the issue, there was an opportunity for us to help fill that vacuum.”


As the team built Nebula, one of the most pressing questions they faced was that of open source development, or the practice of building software in full view of the public over the Internet.

On the one hand, proprietary code might have helped the project overcome early hurdles, as commercial software can offer off-the-shelf solutions that speed up development by solving common problems. Proprietary software is sometimes so useful and convenient that the Nebula team wasn’t even sure that they could create the product without relying on closed source solutions at some point.

On the other hand, open source development would facilitate a collaborative environment without borders—literally anyone with the know-how and interest could access the code and improve on it. Because Nebula had evolved into a project that was addressing very general, widespread needs—not just NASA-wide, but potentially worldwide—the possibility of avoiding restrictive licensing agreements by going open source was very attractive.

O’Brien says that broad appeal was an important part of Nebula’s identity. “From the beginning, we wanted this project to involve a very large community—private enterprises, academic institutions, research labs—that would take Nebula and bring it to the next level. It was a dream, a vision. It was that way from the start.”

Despite uncertainties, the development team decided to make Nebula purely open source. Eshagh says the real test of that philosophy came when existing tools were stretched to their limits. “Eventually, we determined that existing open source tools did not fully address Nebula’s requirements,” he says. “But instead of turning to proprietary tools, we decided to write our own.”

The problem was with a component of the software called the cloud controller: the tool that turns a single server or pool of servers into many virtual servers, which can then be provisioned remotely using software. In effect, the controller gives an end user access to as much or as little computing power and storage as needed. Existing tools were either written in the wrong programming language or released under the wrong software license.

Within a matter of days, the Nebula team had built a new cloud controller from scratch, in Python (their preferred programming language for the controller), and under an open source license. When the team announced this breakthrough on its blog, they immediately began attracting attention from some of the biggest players in the industry. “We believed we were addressing a general problem that would have broad interest,” says Eshagh. “As it turns out, that prediction couldn’t have been more accurate.”
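The controller’s core job, as described above, can be caricatured in a few lines of Python (the Nebula team’s chosen language). This is a toy, in-memory sketch of the bookkeeping only; a real Nova-lineage controller schedules hypervisors, images, and networks, and every name below is invented for illustration.

```python
class CloudController:
    """Toy sketch of a cloud controller: carve a pool of physical
    capacity into virtual servers on request. Illustrative only."""

    def __init__(self, total_cores):
        self.free_cores = total_cores
        self.instances = {}          # instance id -> cores allocated
        self._next_id = 0

    def provision(self, cores):
        """Allocate a virtual server with the requested cores, or raise."""
        if cores > self.free_cores:
            raise RuntimeError("insufficient capacity")
        self.free_cores -= cores
        self._next_id += 1
        self.instances[self._next_id] = cores
        return self._next_id

    def release(self, instance_id):
        """Tear down a virtual server and return its cores to the pool."""
        self.free_cores += self.instances.pop(instance_id)

# "I want 10 computers": ten 2-core instances from a 64-core pool.
pool = CloudController(total_cores=64)
ids = [pool.provision(cores=2) for _ in range(10)]
```

The point of such a controller is exactly the remote, minute-scale provisioning Eshagh describes: requesting, using, and releasing machines becomes an API call rather than a hardware purchase.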

Technology Transfer

Rackspace Inc., of San Antonio, Texas, was one of the companies most interested in the technology. Rackspace runs the second largest public cloud in the world and was at the time offering computing and storage services using software they had created in-house. Jim Curry, general manager of Rackspace Cloud Builders, says they faced hurdles similar to those NASA faced in building a private cloud. “We tried to use available technology,” he says, “but it couldn’t scale up to meet our needs.”

The engineers at Rackspace wrote their own code for a number of years, but Curry says they didn’t see it as a sustainable activity. “We’re a hosting company—people come to us when they want to run standard server environments with a high level of hosting support that we can offer them. Writing proprietary code for unique technologies is not something we wanted to be doing long-term.”

The developers at Rackspace were fans of open source development and had been looking into open source solutions right at the time the Nebula team announced its new cloud controller. “Just weeks before we were going to announce our own open source project, we saw that what NASA had released looked very similar to what we were trying to do.” Curry reached out to the Nebula team, and within a week the two development teams met and agreed that it made sense to collaborate on the project going forward.

Each of the teams brought something to the table, says Curry. “The nice thing about it was that we were more advanced than NASA in some areas and vice versa, and we each complemented the other very well. For example, NASA was further along with their cloud controller, whereas we were further along on the storage side of things.”

The next step was for each organization to make its code open source so the two teams could launch the project as an independent, open entity. Jim Curry says the team at Rackspace was stunned by the speed at which NASA moved through the process. “Within a period of 30–45 days, NASA completed the process of getting the agreements to have this stuff done. From my perspective, they moved as fast as any company I’ve ever worked with, and it was really impressive to watch.”

The OpenStack project, the successor to Nebula with development from Rackspace, was announced in July 2010. As open source software, OpenStack has attracted a very broad community: nearly 2,500 independent developers and 150 companies are a part of it—including such giants as AT&T, HP, Cisco, Dell, and Intel. Semi-annual developers’ conferences, where members of the development community meet to exchange ideas and explore new directions for the software, now attract over 1,000 participants from about two dozen different countries.

Benefits

Because OpenStack is free, companies that use it to deploy servers do not need to pay licensing fees—fees that can easily total thousands of dollars per server per year. Given the number of companies that have already adopted OpenStack, the software has potentially saved millions of dollars in server costs.

“Before OpenStack,” says Curry, “your only option was to pay someone money to solve the problem that OpenStack is addressing today. For people who want it as a solution, who like the idea of consuming open source, they now have an alternative to proprietary options.”

Not only is OpenStack saving money; it is also generating jobs and revenue at a remarkable pace. Curry says that dozens of Rackspace’s 80 cloud engineering jobs are directly attributable to OpenStack, and that the technology has created hundreds of jobs throughout the industry. “Right now, trying to find someone with OpenStack experience, especially in San Francisco, is nearly impossible, because demand is so high.”

The technology is currently generating hundreds of millions in revenue: Rackspace’s public cloud alone, which largely relies on OpenStack, takes in $150 million a year. Curry, Eshagh, and O’Brien all predict that the software will be its own billion-dollar industry within a few years.

Because OpenStack is open source, and is modified and improved by the people who use it, it is more likely to remain a cutting-edge solution for cloud computing needs. Says Eshagh, “We are starting to see the heavyweights in the industry adding services on top of OpenStack—which they can do because they have a common framework to build from. That means we’ll see even more services and products being created.”

In 2012, Rackspace took steps to secure OpenStack’s future as a free and open source project: the company began the process of spinning off the platform into its own nonprofit organization. By separating itself from any one commercial interest, Curry says, the project will be better positioned to continue doing what its founders hoped it would.

O’Brien maintains that OpenStack’s potential is far from being realized. “It’s hard to characterize in advance. If you had asked an expert about Linux years ago, who could have predicted that it would be in nearly everything, as it is today? It’s in phones and mobile devices. It’s in 75 percent of deployed servers. It’s even used to support space missions. OpenStack has a chance to hit something similar to that in cloud computing.”

Curry agrees: “In the future, you can envision almost all computing being done in the cloud, much of which could be powered by OpenStack. I think that NASA will need to receive significant credit for that in the history books. What we’ve been able to do is unbelievable, especially when you remember that it all started in a NASA lab.”


The Center for the Advancement of Science in Space is increasingly green-lighting research projects for the International Space Station.

The crystals on the left were grown in microgravity. Those on the right formed on Earth.  (NASA)

Get ready for the rodents in outer space to outnumber the humans.

The organization expects that the unique conditions of outer space could lead to research breakthroughs.

“We believe there are scientific projects that people haven’t even thought about taking gravity out of the equation, and if they realize how easy it is and how accessible it is to get to the space station they’d be all over it,” said Greg Johnson, the executive director of the Center for the Advancement of Science in Space.

The organization approved 28 projects in 2013 and expects to launch more this year. In 2011 it began managing the U.S. lab on the International Space Station for NASA.

“There are things we can learn about the planet from 250 miles we frankly just can’t learn from here,” Johnson said. “We can learn about algal blooms in oceans. We can better understand patterns in the atmosphere and how they interface with land masses and water masses.”

In zero gravity, human and animal bones degenerate, opening a door for studying osteoporosis. Prolia, a drug designed to treat postmenopausal osteoporosis, was developed using research on lab rats that were tested on the space shuttle Endeavour in 2001.

One of the current experiments taking place on the International Space Station addresses Huntington’s disease, in which proteins clump up in a patient’s brain. The surfaces of the proteins mutate, making them hard for researchers to analyze. Without an accurate depiction of the protein, scientists can’t design a drug to latch onto its surface and serve as a meaningful treatment for Huntington’s disease.

Gwen Owens, a Ph.D. candidate at UCLA-Caltech, is studying the Huntington’s disease protein in crystal form. She heard an NPR segment about the Center for the Advancement of Science in Space and recalled a researcher’s work using microgravity. Given that crystals grow better in space, she figured it was worth pursuing.

“The real bottleneck is getting the crystals to form,” Owens said. “Once we have the crystals, it’s not that easy, but it’s not that hard.” Owens will get a better understanding of just how valuable zero gravity proves to be when her lab gets results back in September.


Mars Cameras Make Panoramic Photography a Snap


Originating Technology/NASA Contribution

The Gigapan robotic platform holds a digital camera.
The Gigapan robotic platform now enables photographers on Earth to capture and create super-sized digital panoramas.

If you wish to explore a Martian landscape without leaving your armchair, a few simple clicks around the NASA Web site will lead you to panoramic photographs taken from the Mars Exploration Rovers, Spirit and Opportunity. Many of the technologies that enable this spectacular Mars photography have also inspired advancements in photography here on Earth, including the panoramic camera (Pancam) and its housing assembly, designed by the Jet Propulsion Laboratory and Cornell University for the Mars missions. Mounted atop each rover, the Pancam mast assembly (PMA) can tilt a full 180 degrees and swivel 360 degrees, allowing for a complete, highly detailed view of the Martian landscape.

The rover Pancams take small, 1 megapixel (1 million pixel) digital photographs, which are stitched together into large panoramas that sometimes measure 4 by 24 megapixels. The Pancam software performs some image correction and stitching after the photographs are transmitted back to Earth. Different lens filters and a spectrometer also assist scientists in their analyses of infrared radiation from the objects in the photographs. These photographs from Mars spurred developers to begin thinking in terms of larger and higher quality images: super-sized digital pictures, or gigapixels, which are images composed of 1 billion or more pixels.

Gigapixel images are more than 200 times the size of those captured by today’s standard 4 megapixel digital camera. Although the technology was originally created for the Mars missions, the detail provided by these large photographs lends itself to many purposes, not all of which are limited to extraterrestrial photography.

Partnership

The technology behind the Mars rover PMAs inspired Randy Sargent at Ames Research Center and Illah Nourbakhsh at Carnegie Mellon University (CMU) to look at ways consumers might be able to use similar technology for more “down-to-Earth” photography and virtual exploration.

In 2005, Sargent and Nourbakhsh created the Global Connection Project, a collaboration of scientists from CMU, Google Inc., and the National Geographic Society, whose vision is to encourage better understanding of the Earth’s cultures through images. This vision inspired the development of their Gigapan products.

After seeing what the Pancams and PMAs could do, Sargent created a prototype for a consumer version of a robotic camera platform. He worked with Rich LeGrand of Charmed Labs LLC, in Austin, Texas, to design and manufacture the Gigapan robotic platform for standard digital cameras.

Product Outcome

The Gigapan robotic platform is, in essence, an intelligent tripod that enables an amateur photographer to set up detailed shots with ease. A user sets the upper-left and lower-right corners of the panorama, and the Gigapan will simply capture as many images as the scene requires. With this level of automation, a 500-picture panorama is no more complicated than a 4-picture panorama; only the camera’s memory limits the size of the panorama.
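That corner-to-corner automation amounts to simple arithmetic: divide the pan and tilt ranges by the camera’s field of view, less some overlap so the stitching software can align neighboring frames. The sketch below illustrates the idea; the parameter names, the 30 percent overlap, and the example numbers are assumptions for this article, not Gigapan’s actual firmware.

```python
import math

def grid_shots(pan_range_deg, tilt_range_deg, fov_h_deg, fov_v_deg,
               overlap=0.3):
    """Estimate the grid of photos needed to cover a panorama.

    The pan/tilt ranges come from the user's upper-left and lower-right
    corner settings; each shot overlaps its neighbor by `overlap`
    (as a fraction of the frame) so the stitcher can align them.
    Illustrative only, not Gigapan's real firmware.
    """
    step_h = fov_h_deg * (1.0 - overlap)   # effective horizontal step
    step_v = fov_v_deg * (1.0 - overlap)   # effective vertical step
    cols = max(1, math.ceil(pan_range_deg / step_h))
    rows = max(1, math.ceil(tilt_range_deg / step_v))
    return rows, cols, rows * cols

# A 180-by-60-degree sweep with a 20-by-15-degree field of view and
# 30% overlap works out to 6 rows of 13 columns: 78 shots.
rows, cols, total = grid_shots(180, 60, 20, 15)
```

This is why a 500-picture panorama is no harder for the user than a 4-picture one: once the corners are set, the platform derives the grid and fires the shutter for every cell.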

The Global Connection Project also created two other Gigapan products: a Gigapan Web site and panorama stitching software born from the Ames Vision Workbench, an image processing and computer vision library developed by the Autonomous Systems and Robotics Area in the Intelligent Systems Division.

A high-resolution composite photograph shows a monk atop a temple in Nepal, the temple at a distance, and a restaurant behind the temple.
Gigapan allows a photographer to capture extremely high-resolution panoramas, which a user can explore in depth. In this wide view of Boudhanath Stupa in Kathmandu, Nepal, it is possible to zoom all the way into the smallest, barely visible points in the picture, such as the monk standing on the roof of the temple or the sign above the Tibet Kitchen Restaurant and Bar.
Gigapan panoramic image courtesy of Jessee Mayfield.

The robotic platform works with the stitching software by precisely manipulating and aligning each shot ahead of time. The Gigapan software complements the robotic platform by arranging the parts of the panorama (potentially hundreds of individual photographs) into a grid where they are stitched together into a single, very large Gigapan image.

The Global Connection Project won a 2006 “Economic Development Award” from the Tech Museum Awards for its work in creating photographic overlays for Google Earth of areas affected by natural disasters. Government workers and concerned citizens used the images on Google Earth to see which areas needed help in the aftermath of Hurricane Katrina, Hurricane Rita, and the 2005 earthquake in Kashmir.

On the Gigapan Web site, a user can display a wide bird’s-eye panorama and then zoom in with impressive, bug’s-eye detail. On first impression, a panoramic photograph on Gigapan’s site might seem to be simply a wide-angle cityscape of a temple in Kathmandu. With each successive click, however, the user can zoom deeper and deeper into the photo, revealing more and more detail: a monk hanging prayer flags on the roof of the temple, and the Tibet Kitchen Restaurant and Bar a few blocks behind it, with a sign urging passersby to taste its gourmet food.

As part of a continuing effort to connect people and cultures, the Global Connection Project encourages all users to upload their own panoramas from around the world on the Gigapan site. Users can explore such varied landscapes as a temple in Nepal, the Burning Man festival in the Nevada desert, a market in Guatemala, or the Boston skyline from the Charles River. Because of the much greater number of pixels, the resolution is unprecedented; the Gigapan software and robotic platforms can theoretically produce prints on 40-foot-wide paper without any loss in quality.

Whether or not photographers use the Gigapan mounts and software, anyone can upload panoramas to the Gigapan Web site. Many users have uploaded standard panorama photographs as well (although the site suggests photographs be at least 50 megabytes). This is just fine with the Gigapan and Global Connection Project coordinators, whose aim is simply to encourage exploration and understanding of the various cultures in our world.

The Fine Family Foundation is sponsoring work with the Global Connection Project to enable botanists, geologists, archeologists, and other scientists around the world to document different aspects of the Earth’s cultures and ecosystems using Gigapan technology. Scientists are using Gigapan to document life in the upper redwood forest canopy in California, volcanoes in Hawaii, and glaciers in Norway.

There are also educational uses for the Gigapan: The Pennsylvania Board of Tourism uses Gigapan for Web site visitors wanting to explore Civil War sites virtually. Also, in collaboration with the United Nations Educational, Scientific and Cultural Organization (UNESCO), the Global Connection Project has distributed Gigapan to students in Pittsburgh, South Africa, and the Republic of Trinidad and Tobago, encouraging them to photograph their local culture and share those panoramas with the world. “The hope is that students will be able to have deeper connections to other cultures,” said Sargent.

A time-lapse Gigapan robotic mount is now in development, and a professional unit for larger SLR-style cameras may be released before the end of 2008.

Gigapan is a trademark of Carnegie Mellon University.

Drill Here? NASA’s Curiosity Mars Rover Inspects Site

Sandstone Target 'Windjana' May Be Next Martian Drilling Site

NASA’s Curiosity Mars rover has driven within robotic-arm’s reach of the sandstone slab at the center of this April 23 view from the rover’s Mast Camera. The rover team plans to have Curiosity examine a target patch on the rock, called “Windjana,” to aid a decision about whether to drill there. Credit: NASA/JPL-Caltech/MSSS

 

April 25, 2014

The team operating NASA’s Curiosity Mars rover is telling the rover to use several tools this weekend to inspect a sandstone slab being evaluated as a possible drilling target.

If this target meets criteria set by engineers and scientists, it could become the mission’s third drilled rock, and the first that is not mudstone. The team calls it “Windjana,” after a gorge in Western Australia.

The planned inspection, designed to aid a decision on whether to drill at Windjana, includes observations with the camera and X-ray spectrometer at the end of the rover’s arm, use of a brush to remove dust from a patch on the rock, and readings of composition at various points on the rock with an instrument that fires laser shots from the rover’s mast.

Curiosity’s hammering drill collects powdered sample material from the interior of a rock, and then the rover prepares and delivers portions of the sample to onboard laboratory instruments. The first two Martian rocks drilled and analyzed this way were mudstone slabs neighboring each other in Yellowknife Bay, about 2.5 miles (4 kilometers) northeast of the rover’s current location at a waypoint called “the Kimberley.” Those two rocks yielded evidence of an ancient lakebed environment with key chemical elements and a chemical energy source that provided conditions billions of years ago favorable for microbial life.

From planned drilling at Windjana or some nearby location on sandstone at the Kimberley, Curiosity’s science team hopes to analyze the cement that holds together the sand-size grains in the rock.

“We want to learn more about the wet process that turned sand deposits into sandstone here,” said Curiosity Project Scientist John Grotzinger, of the California Institute of Technology in Pasadena. “What was the composition of the fluids that bound the grains together? That aqueous chemistry is part of the habitability story we’re investigating.”

Understanding why some sandstones in the area are harder than others also could help explain major shapes of the landscape where Curiosity is working inside Gale Crater. Erosion-resistant sandstone forms a capping layer of mesas and buttes. It could even hold hints about why Gale Crater has a large layered mountain, Mount Sharp, at its center.

NASA’s Mars Science Laboratory Project is using Curiosity to assess ancient habitable environments and major changes in Martian environmental conditions. NASA’s Jet Propulsion Laboratory, a division of Caltech, built the rover and manages the project for NASA’s Science Mission Directorate in Washington.

The spectrometer on the rover’s robotic arm is the Alpha Particle X-Ray Spectrometer (APXS), which was provided by the Canadian Space Agency. The camera on the arm is the Mars Hand Lens Imager (MAHLI), built and operated by Malin Space Science Systems, San Diego. The laser on the mast is part of the Chemistry and Camera instrument (ChemCam), from the U.S. Department of Energy’s Los Alamos National Laboratory in New Mexico and the French national space agency, CNES. The rover’s wire-bristle brush, the Dust Removal Tool, was built by Honeybee Robotics, New York.


Onboard Systems Record Unique Videos of Space Missions

Originating Technology/NASA Contribution

A worker assembles electronics for the LCROSS mission. (Image courtesy of Ecliptic Enterprises Corporation)
An artist’s rendering of LCROSS launching toward the Moon.
The science payload panel (left) for LCROSS (in the artist rendering above) underwent final testing at Ames Research Center in late 2007. The Data Handling Unit built by Ecliptic Enterprises Corporation is the gold box near the center.

It was one of the few times that a crash landing would be deemed a success. On October 9, 2009, nine sensor instruments—including five cameras—onboard the Lunar Crater Observation and Sensing Satellite (LCROSS) watched closely as the Moon-bound spacecraft released the spent upper stage of its Centaur launch vehicle toward the lunar surface. The instrument-bearing shepherding spacecraft beamed back video of the Centaur’s impact and then descended through the resulting plume, gathering data on the composition of the ejected material until it too impacted within the lunar crater Cabeus. The mission yielded a wealth of information confirming what scientists had hoped for: the presence of water on the Moon.

A specially designed avionics unit controlled, routed, and transmitted back to Earth the precious data gathered from all of LCROSS’ onboard instruments. The crucial control unit was the outcome of a collaboration between NASA’s Ames Research Center and a unique company whose products have benefited from this and other NASA partnerships.

Partnership

In 1999, a company called BlastOff! Corporation was formed in Pasadena, California, with the intent of landing the first commercial robotic lander, equipped with a host of onboard video and imaging systems, on the Moon. The company folded in 2001, but a group of BlastOff! employees went on to form Ecliptic Enterprises Corporation, also of Pasadena, to take advantage of the expertise they developed in creating ruggedized video systems for use in space.

Onboard video systems for rockets or spacecraft provide stunning footage of launches and space activities—valuable material for educating and inspiring interest in space exploration. But another significant benefit is the essential information these video feeds provide to engineers on the ground. While casual viewers get to experience a virtual ride into space, watching the Earth fall away under a rocket’s flames, engineers gain important situational awareness, allowing them to monitor and evaluate a rocket launch or the activity of a complicated mechanical device on a spacecraft.

The need for comprehensive situational awareness became readily apparent in the aftermath of the Columbia disaster. The Space Shuttle Columbia broke up while reentering Earth’s atmosphere during its 2003 mission, killing its seven crewmembers. Investigators concluded the shuttle’s destruction was caused by hot gases entering through a hole in the thermal protection of the vehicle’s left wing; the hole was caused by the impact of a chunk of foam insulation that broke away from the shuttle’s external fuel tank during launch.

Seeking ways to improve situational awareness for future shuttle launches, NASA examined the use of multiple onboard cameras. On the STS-114 Return to Flight mission, Ecliptic’s external tank camera captured the breakaway of a large piece of insulating foam, an incident which again grounded the shuttle fleet for nearly a year until the problem could be resolved. Of the multiple cameras trained on the shuttle during the launch, only Ecliptic’s onboard camera provided the precise time the foam broke away from the tank, says company CEO and former Jet Propulsion Laboratory engineer Rex Ridenoure.

“The value of onboard video for situational awareness really got a boost in the aftermath of the Columbia tragedy,” he says. Now the shuttle features multiple camera systems mounted on the external tank and on the solid rocket boosters.

Ecliptic RocketCam systems were incorporated into multiple other NASA missions—including the Delta II rockets for the 2003 twin Mars Exploration Rover missions and on the Demonstration of Autonomous Rendezvous Technology (DART) spacecraft in 2005—before the company signed a Memorandum of Understanding with Ames in 2007 to collaborate on projects for onboard imaging systems and related technologies. As part of this collaboration, Ecliptic opened a small office in the Ames-based NASA Research Park.

Ecliptic became involved in the LCROSS mission when Ames principal investigator Anthony Colaprete realized the company’s digital video controller—the avionics unit of its RocketCam Digital Video System (DVS) technology—could serve as the core technology for the spacecraft’s Data Handling Unit, providing cost-effective control capabilities for its sensors.

“Up until this time, our video controllers were only controlling video cameras and other imaging sensors,” says Ridenoure. “LCROSS wanted us to control several other sensors we had never seen, with lots of switching between them, lots of data formatting.” The demands of the LCROSS mission required Ecliptic to develop a controller capable of handling higher data rates, more frequent sensor switching, and more data formatting complexities than its previous systems.

“LCROSS helped us push the capabilities of these digital video controllers and set us up to start tackling high-speed and high-definition video,” Ridenoure says.

Ecliptic was able to advance its technology even further following the successful LCROSS mission. The company collaborated with NASA’s Dryden Flight Research Center to develop a high-speed video system for monitoring the parachute deployments for the Constellation Abort Flight Test program, designed to test a launch abort system for the Orion crew capsule. Ridenoure says Ecliptic’s work with Dryden advanced the company’s high-speed video capabilities and primed its technology for high-definition applications, since high-speed and high-definition video involve similar data rates.

Product Outcome

An eight-camera digital video system with a pen for size comparison
Image courtesy of Ecliptic Enterprises Corporation
Smaller, more capable RocketCam Digital Video Systems such as this will be used onboard future NASA, commercial, and defense rockets and spacecraft.

To date, Ecliptic’s analog and digital RocketCam systems have been employed on more than 80 rocket launches and spacecraft missions for customers including NASA, the U.S. Department of Defense, and multiple aerospace companies. The company’s technology has captured unique perspectives of an array of rocket launches, including Delta IIs, IIIs, and IVs; Atlas IIs, IIIs, and Vs; Titan IVs; and Minotaur Is and IVs. Ecliptic video systems also allowed the world to share the experience of Scaled Composites’ SpaceShipOne making the first-ever privately funded human spaceflight. The company’s systems are part of launches roughly every 4 to 6 weeks—with 8 to 12 in a typical year.

Ecliptic does not manufacture cameras, but rather takes off-the-shelf sensors, ruggedizes them to withstand the extreme conditions of launches and space operations, and houses them in protective pods and other enclosures that are affixed to the rocket or spacecraft. The company’s key technology is its digital video controller, which is why Ridenoure is quick to note that Ecliptic is not a “camera company.”

“Ninety percent of what we do is avionics, sensor-handling, and data switching, like what we did for LCROSS,” he says.

Thanks to the company’s work on the LCROSS mission and the Abort Flight Test program, Ecliptic has gained not only desirable technical capabilities but also cachet among commercial spacecraft developers.

“From a commercialization angle, it’s largely because our systems were baselined and approved for various challenging NASA missions that commercial satellite programs have confidence in our systems for their spacecraft,” Ridenoure says. Ecliptic systems with “lots of heritage with the one we had on LCROSS” are now on geosynchronous commercial satellites. The company has also generated a preliminary design for a high-definition video system based on the experience it gained from its Ames and Dryden work and anticipates its first sale in this category within the next year.

As NASA and commercial space partners develop new vehicles for traveling into low Earth orbit and beyond, Ecliptic expects to provide the video footage that will keep engineers apprised and the public in awe. Ecliptic systems are set to be incorporated on the Orbital Sciences Cygnus vehicle and Taurus II rocket, designed to ferry cargo to the International Space Station (ISS) as part of the Commercial Orbital Transportation Services program. This year, the company received its largest contract ever to supply the United Space Alliance with RocketCam DVS technology for solid rocket boosters on future NASA launch vehicles.

Ecliptic will also enable a major educational and public outreach project when it launches onboard the two Gravity Recovery and Interior Laboratory (GRAIL) spacecraft, set to map the gravitational field of the Moon in late 2011. MoonKAM (Moon Knowledge Acquired by Middle School Students) will be composed of Ecliptic video systems on each spacecraft and is sponsored by Sally Ride Science, led by the former astronaut who also initiated the ISS EarthKAM. Students will be able to schedule video recordings and retrieve the clips from NASA’s datastream for educational activities, hopefully inspiring a new generation of space explorers.

RocketCam™ is a trademark of Ecliptic Enterprises Corporation.

Rocket-Powered Parachutes Rescue Entire Planes

Originating Technology/NASA Contribution

Parachute system
This BRS Aerospace Inc. parachute system, designed for sport aircraft, deploys its chute (contained in the white canister) in less than 1 second, thanks to a solid rocket motor (the black tube on top).

When Boris Popov was 8 years old, he took one of his mother’s sheets and some thread, made a parachute, climbed a tree, and jumped. The homemade chute did little to break Popov’s fall; his father took the disappointed boy aside and said, “Son, you’ve got to start higher.”

Years later in the mid-1970s, recent college graduate Popov was hang gliding over a lake when the boat that was towing him accelerated too quickly, ripping the control bar from his hands. Some 500 feet in the air, Popov’s glider went into a spiral, coming apart as Popov plummeted to the water. As he fell, Popov realized that if he only had some kind of parachute, he could have been saved. Before impact, he promised himself that, if he survived, he would create a solution that would save people in these types of emergency situations.

Decades later, the U.S. air transportation system was suffering its own kind of free fall. The terrorist attacks of 9/11 led to stringent security measures that complicated and slowed down air travel. Even as the industry recovered from the effects of the attacks, increased flights and passenger demand strained the National Airspace System (NAS) at levels never before experienced. At the same time, NASA was exploring ways of extending aviation to rural America using smaller general aviation (GA) aircraft and local community airports. The NASA Small Aircraft Transportation System (SATS) project envisioned an on-demand, point-to-point, widely distributed transportation system relying on small aircraft (4-10 passengers) operating out of the Nation’s more than 5,400 public-use landing facilities. With about 98 percent of the population living within 20 miles of at least one such airport, SATS could provide cheaper, faster, and more practical options for business and leisure travel, medical services, and package delivery.

Though the SATS project concluded its research in 2006, the pursuit of a nationwide GA transportation system continues through other initiatives like NASA’s Green Flight Centennial Challenge, scheduled for 2011, which encourages competing teams to maximize fuel efficiency for personal aircraft as well as reduce noise and improve safety. Making such a system viable, however, still requires technological advances, among them improvements to the safety of small aircraft. One solution has come in the form of an invention developed by Popov, who, having survived his fall, began investigating methods of ballistically deploying parachutes for aircraft in emergency situations. Today, with the help of a NASA partnership, the parachute Popov wished for as he plunged toward Earth is saving hundreds of small aircraft pilots from a similar fate.

Partnership

Popov founded Ballistic Recovery Systems Inc. (now BRS Aerospace) of Saint Paul, Minnesota, in 1980. He formed the company to commercialize his solution to personal aircraft accidents like the one he experienced: a whole aircraft parachute recovery system. Soon BRS was developing parachutes for hang gliders, ultralights, and experimental aircraft, and the company received Federal Aviation Administration certification for a retrofit system for the Cessna 150 GA airplane. The company’s innovative safety solution for small aircraft led to Small Business Innovation Research (SBIR) contracts with Langley Research Center aimed at advancing the BRS parachute system for use with larger and heavier GA aircraft. The NASA funding helped BRS with the development of thin-film parachutes, continuous reinforcement manufacturing methods that result in stronger parachutes, and smart deployment devices—all of which help overcome one of the main obstacles to whole-aircraft parachute systems for larger vehicles: reducing bulk and weight while maintaining parachute strength.

“You can’t have a 50-gallon drum full of parachute in the back of a Cessna. It’s not going to work,” Popov says. Just as important as the research and development funding for BRS, he says, was NASA’s support of its parachute system.

“One of our primary needs for working with NASA was to promote and encourage the concept of a ballistic parachute on aircraft,” Popov says. “There was a lot of skepticism that this system could even work. NASA was very proactive in creating a safety mentality in general aviation.”

Product Outcome

Parachute arresting the descent of an aircraft
With the help of NASA funding, BRS developed parachutes that have saved hundreds of small aircraft—and their pilots and passengers. Here, a Cirrus SR20’s parachute deploys at over 100 miles per hour, arresting the plane’s descent. BRS parachute systems are standard equipment on Cirrus aircraft.

The BRS parachute system—first featured in Spinoff 2002—is deployed by a solid rocket motor activated when the pilot pulls on the cockpit handle release. The rocket fires at over 100 miles per hour and extracts the parachute in less than 1 second. Thanks to a patented shock attenuation device, the chute opens according to the speed of the aircraft; at high speeds, the chute opens only 25 percent for the first few seconds to reduce airspeed to the point where the chute can open fully and still sustain the opening shock. (The lightweight parachute material has to sustain the force of the rocket deployment, as well as the force of the aircraft.) At low speeds and altitudes, the chute opens quickly and completely to ensure rescue.

The system’s versatility makes it effective in a range of accident situations, from mid-air collisions and structural failure to a spiral dive or stall spin. The parachute arrests the descent of the entire aircraft and deposits the vehicle and its occupants on the ground with a typical impact force equivalent to falling 7 feet, which is largely absorbed by the aircraft’s landing gear and seats. Not only are lives saved, but in many incidents, expensive aircraft are preserved to fly again.
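The “falling 7 feet” figure above can be sanity-checked with a back-of-the-envelope calculation. This short sketch, assuming ideal free fall with no air resistance (an assumption introduced here, not stated in the article), converts that equivalent drop height into an impact speed:

```python
import math

# Back-of-the-envelope check of the landing severity quoted above.
# The article says touchdown under the BRS chute feels like a fall
# from about 7 feet; ideal free fall (no drag) gives the equivalent
# impact speed via v = sqrt(2 * g * h).
G = 9.81                 # gravitational acceleration, m/s^2
DROP_FEET = 7.0          # equivalent drop height quoted in the article
h = DROP_FEET * 0.3048   # convert feet to meters

v = math.sqrt(2 * G * h)   # impact speed in m/s
v_mph = v * 2.23694        # same speed in miles per hour

print(f"Equivalent impact speed: {v:.1f} m/s ({v_mph:.1f} mph)")
# prints roughly 6.5 m/s (about 14.5 mph)
```

A touchdown near 15 mph is brisk but survivable, which is consistent with the article’s note that the aircraft’s landing gear and seats absorb most of the impact.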

BRS has sold more than 30,000 systems worldwide since its founding. The parachute is now standard equipment on the Cirrus SR20 and SR22 planes, the Flight Design CT light-sport aircraft (LSA), the Piper Aircraft PiperSport LSA, and as an option on the new Cessna 162 Skycatcher. The company is projecting sales of close to $20 million this year.

“Our system is standard equipment on the world’s top selling single-engine aircraft, Cirrus. It’s standard equipment on the world’s top selling LSA, the CT. The number one producer of ultralights has our product as standard equipment. You can see a trend here,” Popov says.

BRS also produces parachute systems for military unmanned aerial vehicles, military cargo parachutes, and military training aircraft recovery parachutes. On training aircraft, if the pilot has to eject, “you basically have a 5,000-pound bomb that could go unpiloted down into a neighborhood,” Popov says. “We, however, can bring down the pilot and trainer aircraft safely to the ground.”

While parachutes for larger aircraft are still in the works, BRS does have a system designed for small jets, and its NASA partnership has provided the company with the technology that may eventually enable parachutes for commercial airlines and jets. In the meantime, Popov welcomes the role NASA has played in helping turn the promise he made to himself that day at the lake into a reality for the 246 people whose lives have been saved by the BRS parachute so far.

“BRS is a classic example of taxpayers’ money being spent on research that has translated into 246 lives saved,” he says. “That’s a justifiable and profound benefit.”

He tells a favorite story about a grandfather flying a Cirrus SR20 over the Canadian Rockies with his grandkids in the back seat. The grandfather lost control of the plane, which became inverted at night in the mountains. “You’re likely not going to recover from that,” Popov says. The grandfather deployed the parachute, and the plane settled gently on the side of a mountain, where a rescue helicopter found it the next day. The plane was hoisted out by helicopter and flown to a nearby airstrip, where a new prop and landing gear were fitted and the plane was flown out.

“This grandfather thought he may have just killed himself and his grandkids, but when he pulled the handle and felt the parachute deploy, he knew he had just prevented that from happening,” Popov says.

“How many millions of dollars is that worth?”
