Big Data and Remote Sensing – It’s all about information and applications

From the launch of the first Earth observing satellite, to today’s growing space industry, the volume of remote sensing data continues to grow at a remarkable rate. Furthermore, with the emerging utilization of drones, aka unmanned aerial vehicles, and deployment of low-cost satellite constellations, we are on the cusp of a momentous leap forward in data accessibility.

For example, consider the growth of private-sector drones, i.e., those used for scientific research, civil applications and business development. According to a March 2013 report from the Association for Unmanned Vehicle Systems International (AUVSI), assuming the FAA determines how drones fit within commercial airspace by 2015, it is expected that in just ten years the drone industry in the U.S. will generate more than 100,000 jobs and $80 billion of revenue. This includes an immense number of individual drones, on the order of hundreds of thousands, each generating its own stream of remote sensing data.

As another example, consider the pending growth of new low-cost commercial satellite constellations, such as those planned by Skybox Imaging and Planet Labs. Current plans include 28 satellites to be launched by Planet Labs and 24+ satellites to be launched by Skybox Imaging, where each constellation has the objective of achieving cost-effective, near real-time, high-resolution imaging of our planet’s surface. Planet Labs plans to launch its constellation in early 2014, and Skybox Imaging plans to begin launching later in 2013, so data from both companies will soon be available.

There are many questions associated with all of this growth: Where will all this data be stored? How will data be efficiently discovered, accessed and visualized? What types of processing and data management tools will be needed? How will this data be used? What new types of applications will be devised to leverage the information derived from this data?

Amongst these questions, we focus our discussion here on the applications. However, note that the challenges associated with data storage, discovery and dissemination are not trivial, and are equally critical to the success of this industry. But for now let’s consider some of the applications that utilize information derived from this imagery.

The AUVSI report indicates a number of areas where drones are already being utilized, including: wildfire mapping, agricultural monitoring, disaster management, power line surveys, law enforcement, telecommunication, weather monitoring, aerial imaging/mapping, television and movie production, oil and gas exploration, freight transport, and environmental mapping. Similar application areas are also highlighted in informational material from Skybox Imaging and Planet Labs, as well as in discussions throughout the remote sensing industry.

To provide more specific examples, the following hypothetical applications were recently reported in an article on Skybox Imaging in Wired (06.18.13): “the number of cars in the parking lot of every Walmart in America; the number of fuel tankers on the roads of the three fastest-growing economic zones in China; the size of the slag heaps outside the largest gold mines in southern Africa; the rate at which the wattage along key stretches of the Ganges River is growing brighter.”

Other example applications include: lawn and vegetation greenness indices for marketing landscape maintenance; water surface conditions for commercial and recreational fishing; flooding and damage assessments for insurance claims; number of beach visitors for targeted advertising; crop health for precision agriculture and investment futures; and many more.

Even with these few examples we see a glimpse of the enormous economic potential for the growing remote sensing industry. A common theme throughout is the need to accurately and efficiently deliver information in a timely manner. To do so still requires the development and implementation of many new hardware and software solutions; however, in that regard there are also many opportunities. This is a significant time for remote sensing, and it will be exciting to see how the industry develops in the near future.

This is part 2 of a series on big data and remote sensing… visit part 1 here.


Crowdfunding in Space – Democratizing support for satellite and space inspired projects

Ever come up with the next great idea in remote sensing, space technology or geospatial inspired art? Interested in alternative sources to fund your idea? Check out these innovators who have turned to crowdfunding to support their projects:

  • ARKYD: A Space Telescope for Everyone has raised $1,234,748 on Kickstarter (and still going this month) to develop and launch a space telescope that can be controlled by users to acquire images of deep space.
  • SkyCube: The First Satellite Launched by You! raised $116,890 on Kickstarter to build a small nano-satellite that will take images of the Earth and broadcast simple messages from space. SkyCube is scheduled for launch in November 2013 on a SpaceX launch to the International Space Station.
  • ArduSat – Your Arduino Experiment in Space raised $106,300 on Kickstarter for completing system integration tasks on an open platform CubeSat that can be used by the public to “run their own space-based applications” and experiments.
  • Space Elevator Science – Climb to the Sky – A Tethered Tower raised $110,353 on Kickstarter to build a platform of tethered high-altitude balloons and a robot that can climb two kilometers up to those balloons.
  • Uwingu – A New Way to Fund Space Exploration, Research and Education raised $79,896 on Indiegogo to fund start-up costs for creating The Uwingu Fund, which will “provide grants to those that propose meritorious projects in space exploration, space research or space education.”
  • KickSat – Your personal spacecraft in space! raised $74,586 on Kickstarter to build a fleet of Sprite Spacecraft, tiny satellites about the size of a few postage stamps, and a larger CubeSat that will be used to deploy the Sprites once in orbit.
  • Plasma Jet Electric Thrusters for Spacecraft raised $72,871 on Kickstarter to develop a prototype plasma jet thruster for interplanetary transportation.
  • Hermes Spacecraft raised $20,843 on Kickstarter to develop and test rocket thrusters for a reusable suborbital spacecraft.
  • Safe is Not An Option: Our Futile Obsession in Spaceflight raised $5,341 on Kickstarter to publish a book “on our irrational approach to safety in human spaceflight.”
  • Painting for Satellites raised $3,525 on Kickstarter to paint rooftops as large-scale artwork to be viewed from orbiting satellites.
  • Let’s launch a Balloon into Space raised $3,384 on Gofundme for a sixth grade class to launch a weather balloon into near space.
  • Be a Producer on Timothy Feathergrass: The Movie! raised $2,794 on Kickstarter to support film festival entry fees for a movie about a “young man who builds a satellite but can’t afford to launch it into space.”
  • There are also a number of other projects just getting started.

Just think of the possibilities for your next great idea.

For more information: Kickstarter; Indiegogo; Gofundme

Space is Calling – Can your phone do that?

PhoneSat 1.0 (image courtesy NASA)

What coverage areas are included in your mobile phone plan? Does it include the section of space – outer space that is – defined as low Earth orbit? If not, don’t worry, NASA’s PhoneSats have that covered.

This past Sunday, 21 April 2013, NASA successfully launched a trio of low-cost nanosatellites, all built using Google Nexus smartphones. Collectively referred to as the PhoneSat mission, these satellites are a technology demonstration project being used to determine if smartphones can be used to control satellite avionics, i.e., the general communication and navigation requirements for standard satellite operation.

But these aren’t the first smartphone-enabled satellites. You may recall that earlier this year on February 25 the United Kingdom launched STRaND-1, also built using a Google Nexus device, thus taking honors as the first smartphone satellite in orbit.

In addition to the PhoneSat mission, the April 21 launch from NASA’s Wallops Island Flight Facility in Virginia also marked an important milestone for Orbital Sciences Corporation. With the successful maiden launch of their Antares rocket, and subsequent satellite payload delivery, Orbital Sciences completed a significant step towards ultimately providing cargo supply missions to the International Space Station. As with similar missions already being conducted by SpaceX, this launch represents another important achievement for NASA and the U.S. commercial space industry, and another contribution to the exciting new future of our space economy.

The three PhoneSat satellites, which are part of NASA’s Small Spacecraft Technology Program, were predominantly assembled using off-the-shelf components and all conform to the specifications of 1U CubeSats, measuring just 10x10x10cm. Despite the smartphone capabilities, however, you won’t be getting a call from these satellites anytime soon; the ability to send and receive both calls and messages has been disabled on the phones. Modifications have also been made to incorporate a larger external lithium battery and integrate a powerful radio transmitter. But otherwise the satellites are designed to specifically take advantage of the powerful microprocessors and other miniaturized components inherent to today’s smartphones.

Among various tests of how phone hardware and software operate in a space environment, the PhoneSat mission is using the built-in cameras on all three smartphone satellites to acquire images of the Earth’s surface. The images are then being transmitted at regular intervals as small data packets, such that amateur radio operators around the world can receive the individual packets and send them to researchers at NASA Ames Research Center. Using this citizen science approach, the ultimate goal is to assemble a complete mosaic of the Earth using a compilation of just these smartphone images.
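Conceptually, stitching crowd-collected packets back into an image is straightforward. The sketch below is purely illustrative and assumes a simplified packet format (a sequence number plus a payload); it is not the actual PhoneSat downlink protocol. Packets may arrive out of order or duplicated, since many independent radio operators contribute them:

```python
def reassemble_image(packets):
    """Combine (sequence_number, payload_bytes) packets into one byte stream.

    Packets may arrive out of order or duplicated, since they are
    collected independently by many amateur radio operators.
    """
    unique = {}
    for seq, payload in packets:
        unique.setdefault(seq, payload)  # keep the first copy of each packet
    # Sort by sequence number and concatenate the payloads
    return b"".join(unique[seq] for seq in sorted(unique))

# Example: packets received out of order, with one duplicate
received = [(2, b"GHI"), (0, b"ABC"), (1, b"DEF"), (2, b"GHI")]
image_bytes = reassemble_image(received)
print(image_bytes)  # -> b'ABCDEFGHI'
```

A real system would also need to detect missing sequence numbers and wait for (or re-request) the gaps before declaring the image complete.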

With the successful initiation of the PhoneSat and STRaND missions, think of what might be next on the horizon. What apps might you develop that could run on an orbiting smartphone? Just think of the possibilities.

For more information on the PhoneSat program:

Space Apps and You – Check out the official list of this year’s challenges

In preparation for next month’s 48-hour global codeathon taking place April 20-21, the Space Apps team has now released their official list of challenges, with more than 50 opportunities in which you can participate:

If you’re not already familiar with the Space Apps Challenge, it’s an amazing opportunity to work together with collaborators around the world to “solve current challenges relevant to both space exploration and social need.” For more information, please refer to our previous Space Apps post and visit the official website:

General categories for this year’s codeathon include software, hardware, citizen science, and data visualization. And they’re not just all about space… check out the challenges for “Backyard Poultry Farmer”, “Lego Rovers”, “OpenROV”, “Off the Grid”, “In the Sky with Diamonds” and “Renewable Energy Explorer.” There’s something for everyone.

While we think all of the proposed challenges are exciting, we here at HySpeed Computing have a particular interest in geospatial technologies and would therefore like to highlight some specific challenges speaking directly to the areas of remote sensing and earth observation:

  • Earth Day Challenge – “How can space data help us here on Earth? April 22 is Earth Day. Create a visualization of how pollution levels have changed over time.  Many pollution problems have been vastly improved, such as water pollution in the Great Lakes, and air pollution in Los Angeles. But others have significantly worsened, like CO2 emissions and ozone depletion.”
  • The Blue Marble – “Create an app, platform or website that consolidates a collection of space imagery and makes it more accessible to more people.”
  • EarthTiles – “Take global imagery data from Landsat, EOS, Terra, and other missions and turn them in to imagery tiles that can be used in an open source street map. This would enable incredible amounts of visualization and contextual data narration to occur, especially if such tiles were able to be updated on a regular basis as new data is released.”
  • Seeing Water From Space – “Create a web map of Chile water resources, showing how they have changed over time and how their changes over time relate to changes in climate.”
  • Earth From Space – “Using images taken by middle school students through the ISS EarthKAM program, create an educational application that allows users to overlay EarthKAM images on a 3D model of earth, annotate and comment on the images, and share their work via social media. This application can be web based or designed as a mobile application for an Android device.”
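For the EarthTiles challenge in particular, the core arithmetic is well established: the standard “slippy map” tiling scheme used by OpenStreetMap converts a latitude/longitude into x/y tile indices at a given zoom level, so imagery can be cut into tiles that any web map can consume. A minimal Python version of that conversion:

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Convert a WGS84 lat/lon to x/y tile indices in the common
    'slippy map' (Web Mercator) tiling scheme used by OpenStreetMap."""
    n = 2 ** zoom  # number of tiles across one axis at this zoom level
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# A scene centered near Cairns, Australia at zoom level 8
print(latlon_to_tile(-16.92, 145.77, 8))  # -> (231, 140)
```

Turning Landsat or Terra scenes into tiles is then “just” a matter of reprojecting each scene to Web Mercator and slicing it along these tile boundaries, though doing that at global scale and on a regular update cadence is where the real engineering lives.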

We’re excited to see what you can accomplish using Earth imagery. So look for an in-person event near you, or participate virtually from your own location. Good luck and happy coding!

Big Data and Remote Sensing – Where does all this imagery fit into the picture?

There has been a lot of talk lately about “big data” and how the future of innovation and business success will be dominated by those best able to harness the information embedded in big data. So how does remote sensing play a role in this discussion?

We know remote sensing data is big. For example, the NASA Earth Observing System Data and Information System (EOSDIS), which includes multiple data centers distributed around the U.S., currently has more than 7.5 petabytes of archived imagery. Within the EROS data center alone there are over 3.5 million individual Landsat scenes totaling around 1 petabyte of data. And this is but a subset of all the past and currently operating remote sensing instruments. There are many more, particularly when considering the various international and commercial satellites, not to mention the array of classified military satellites and the many instruments yet to be launched. Remote sensing imagery therefore certainly satisfies the big data definition of size.

But what about information content? A significant aspect of the big data discussion is geared towards developing large-scale analytics to extract information and applying those results towards answering science questions, addressing societal needs, spurring further innovation, and enhancing business development. This is one of the key aspects – and challenges – of big data, i.e., not just improving the capacity to collect data but also developing the software, hardware and algorithms needed to store, analyze and interpret this data.

Researchers have long been using remote sensing data to address localized science questions, such as assessing the amount of developed versus undeveloped land in a particular metropolitan area, or quantifying timber resources in a given forested area. Subsequently, as software and hardware capabilities for processing large volumes of imagery became more accessible, and image availability also increased, remote sensing correspondingly expanded to encompass regional and global scales, such as estimating vegetation biomass covering the Earth’s land surfaces, or measuring the sea surface temperatures of our oceans. With today’s processing capacity, this has been extended yet further to include investigations of large-scale dynamic processes, such as assessing global ecosystem shifts resulting from climate change, or improving the modeling of weather patterns and storm events around the world.
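Part of why these analyses scale so naturally from a single scene to the globe is that many of them are per-pixel calculations applied independently everywhere. The Normalized Difference Vegetation Index (NDVI), one of the most widely used vegetation measures, is a good example; the sketch below uses toy reflectance values purely for illustration:

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index, pixel by pixel.

    NDVI = (NIR - Red) / (NIR + Red); values near +1 indicate dense
    vegetation, values near 0 bare soil, and negative values water.
    """
    return [
        (n - r) / (n + r) if (n + r) != 0 else 0.0
        for r, n in zip(red, nir)
    ]

# Toy 4-pixel example: vegetation, bare soil, water, vegetation
red_band = [0.05, 0.30, 0.10, 0.08]
nir_band = [0.60, 0.35, 0.05, 0.50]
values = ndvi(red_band, nir_band)
```

Because each pixel is computed independently, exactly the same formula runs on one field plot or a petabyte-scale archive; the difference is entirely in the storage, scheduling, and hardware wrapped around it.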

Additionally, consider the contribution remote sensing makes to the planning and development of transportation infrastructure in the northern hemisphere, where the opening of new trans-arctic shipping routes and changes to other existing high-latitude shipping routes are being predicted using models that depend on remote sensing data for input and/or validation. And also consider agricultural crop forecasting, which relies heavily on information and observations derived from remote sensing data, and can not only have economic impacts but also be used to indicate potential regions of economic and political instability resulting from insufficient food supplies.

Such examples, and others like them, represent a logical progression as research and applications keep pace with greater data availability and ongoing improvements in processing tools. But the field of remote sensing, and its associated data, is continuing to grow. What else can remote sensing tell us and how else can this immense volume of data be used? Are there relationships yet to be exploited that can be used to indicate consumer behavior and habits in certain markets? Are there geospatial patterns in population expansion that can be used to better predict future development and resource utilization?

There’s a world of imagery out there. What are your ideas on how to use it?

Get Your Code On – The NASA International Space Apps Challenge is coming

Get ready to flex your fingers and exercise your brain. Next month from April 20-21 NASA is hosting the International Space Apps Challenge – a 48-hour global hackathon. Everyone and anyone is welcome to attend.

“The International Space Apps Challenge is a 2 day technology development event during which citizens from around the world will work together to solve current challenges relevant to both space exploration and social need.”

There are currently over 75 cities around the world hosting in-person events, including one extraordinary location orbiting the Earth onboard the International Space Station. These in-person events, which are independently organized by local volunteers, provide the opportunity to interact and collaborate with fellow participants. However, if there’s not a venue near you, or you think best when you’re in your own environment, you can also contribute to the event virtually from your own location, perhaps even gathering a group of friends to create your own mini-event. To participate, either in-person or virtually, simply visit the Space Apps website – – and register.

You don’t have to be a ‘space’ professional to contribute, nor do you need to be an expert programmer. The objective of the event is to bring together a diverse group of people with a varied range of skills and backgrounds that have “a passion for changing the world and are willing to contribute.” Last year’s event, which numbered more than 2000 participants, received contributions from an assorted array of scientists, engineers, artists, writers, entrepreneurs and many more. All that’s required is a spirit of innovation.

Participants in the App Challenge are encouraged to work as teams, but can also work alone, to “solve challenges relevant to improving life on Earth and life in space.” Top solutions from each location will be entered in the global competition, where winners will be awarded prizes and recognition for their achievements. A list of suggested challenges will be posted on the event website in the near future. In the meantime, current suggestions for this year’s event include:

  • “Help tell the ‘why’ of space exploration through the creation of compelling narratives and visualizations of the stories and data from NASA’s history.”
  • “Design a CubeSat (or constellation of CubeSats) that can utilize extra space onboard future robotic Mars missions to help us understand more about the Red Planet.”
  • “Help revitalize antiquated data by creating open source tools to transform, display, and visualize data.”

But this is just a small sample of the challenges yet to come. Perhaps you also have your own ideas and would like to develop your own unique contribution. This too is welcomed. And you can even get started in advance so you hit the ground running, though the bulk of the work should be completed during the weekend of the event.

So grab your favorite laptop, tablet, or other device and get comfortable. It’s time to code. Good luck everyone!

For more on the NASA Space Apps Challenge:  

GTC 2013 – Set your sights on processing speed

NVIDIA will be hosting its annual GPU Technology Conference – GTC 2013 – later this month, March 18-21, in San Jose, CA. Last year’s conference saw the release of NVIDIA’s astounding new Kepler GPU architecture. Be sure to tune in to this year’s conference to see what’s next in the world of high performance GPU computing.

Can’t attend in person? NVIDIA will be live streaming the keynote addresses (currently listed as upcoming events), but be sure to check the conference website for details so as not to miss out. NVIDIA also records all the sessions and makes the content available later for everyone to view. In fact, you can currently visit GTC On-Demand at the conference website to explore sessions from past conferences.

If nothing else, don’t miss the opening keynote address (March 19 @9am PST) by Jen-Hsun Huang, NVIDIA’s co-founder, President and CEO. He’ll be discussing “what’s next in computing and graphics, and preview disruptive technologies and exciting demonstrations across industries.” Jen-Hsun puts on quite a show. It’s not only informative with respect to NVIDIA’s direction and vision, but also entertaining to watch. After all, you’d expect nothing else from the industry leader in computer graphics and visualization.

And what about geospatial processing? How does GTC 2013 fit into the science of remote sensing and GIS? The answer lies in the power of GPU computing to transform our ability to more rapidly process large datasets and implement complex algorithms. It’s a rapidly growing field, and impressive to see the levels of speedup that are being achieved, in some cases more than 100x faster on the GPU than on the CPU alone. Amongst the conference sessions this year will be numerous general presentations and workshops on the latest techniques for leveraging GPUs to accelerate your processing workflow. More specifically, there will be a collection of talks directly related to remote sensing, such as detecting man-made structures from high resolution aerial imagery, retrieving atmospheric ozone profiles from satellite data, and implementing algorithms for orthorectification, pan-sharpening, color-balancing and mosaicking. Other relevant sessions include a real-time processing system for hyperspectral video, and many more on a variety of other image processing topics.
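To give a flavor of why these image processing tasks map so well onto GPUs, consider pan-sharpening: the classical Brovey transform rescales each low-resolution color band by the ratio of the high-resolution panchromatic value to the pixel’s mean intensity, one pixel at a time. The CPU-side sketch below uses toy values purely for illustration; a GPU implementation would launch this same per-pixel kernel across millions of pixels in parallel:

```python
def brovey_pansharpen(rgb_pixels, pan_pixels):
    """Sharpen low-resolution RGB pixels with a high-resolution
    panchromatic band using the classical Brovey transform:
    each band is rescaled by pan / mean(R, G, B).

    Every pixel is computed independently of its neighbors, which is
    exactly the access pattern that thousands of GPU threads exploit.
    """
    out = []
    for (r, g, b), pan in zip(rgb_pixels, pan_pixels):
        intensity = (r + g + b) / 3.0
        scale = pan / intensity if intensity else 0.0
        out.append((r * scale, g * scale, b * scale))
    return out

# One pixel: RGB = (0.2, 0.4, 0.6), pan = 0.6 -> scale = 0.6/0.4 = 1.5
sharpened = brovey_pansharpen([(0.2, 0.4, 0.6)], [0.6])
```

Speedups like the 100x figures cited above come from running such embarrassingly parallel kernels on hardware with thousands of cores, rather than from any cleverness in the per-pixel math itself.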

HySpeed Computing is excited to see what this year’s conference has to offer. How about you?

For more on GTC 2013:

Smartphones in Space – STRaND-1 using a Google Nexus One for satellite operations

It seems someone is always coming up with a new application or novel use for their smartphone. Now we can add satellite operations to the latest list of smartphone innovations.

STRaND and team

STRaND-1 satellite with team members from Surrey Space Center and Surrey Satellite Technology (credit SSTL)

The STRaND-1 satellite was successfully launched into space on 25 February 2013 from the Satish Dhawan Space Centre in Sriharikota, India. STRaND-1 (which stands for Surrey Training, Research, and Nanosatellite Demonstrator) contains a complete Google Nexus One running the Android operating system. According to STRaND-1 developers, this isn’t some “stripped-down” version of the phone, but rather the whole phone “mounted against one of the panels, with the phone camera peeping out through a porthole.”

STRaND-1 was developed by researchers at the University of Surrey’s Surrey Space Center as well as engineers from Surrey Satellite Technology. It is the first satellite from the United Kingdom built to the design specifications of the CubeSat program. By standardizing the design format of nanosatellites, the CubeSat program provides a cost-efficient avenue to launch and deploy small satellites. Organizations building CubeSats largely originate from academia, mostly universities and high schools, but also include commercial companies.

In the case of STRaND-1, the satellite measures just 10 x 10 x 30 cm and weighs only 4.3kg. And the satellite was built using mostly commercial off-the-shelf components. The Google Nexus One smartphone will be used to run a number of apps, including a collection selected from a community competition. These include: ‘iTesa’, which will record the magnitude of the magnetic field around the phone; ‘STRAND Data’, which will display satellite telemetry data on the phone; ‘360’, which will be used to collect imagery of the Earth using the phone’s camera and then use this imagery to establish satellite position; and ‘Scream in Space’, which will be used to project user-uploaded screams into space using the phone’s speakers.

After the initial phase of operation and experiments using a Linux-based computer, also onboard the satellite, a second phase of the STRaND-1 mission will switch satellite operations to the smartphone. This will not only further test the ability of off-the-shelf phone components to operate in a space environment, but also validate the phone’s ability to run advanced guidance, navigation and control systems. With this achievement, STRaND-1 will become the first ever smartphone-operated satellite.

The next time you pick up your phone, think about the possibilities.

For more information on STRaND-1:

HySpeed Computing – Reviewing our progress and looking ahead

Join HySpeed Computing as we highlight our accomplishments from the past year and look ahead to what is sure to be a productive 2013.

The past year has been an eventful period in the life of HySpeed Computing. This was the year we introduced ourselves to the world, launching our website and engaging the community through social media platforms (i.e., using the usual suspects – Facebook, LinkedIn and Google+). If you’re reading this, you’ve found our blog, and we thank you for your interest. We’ve covered a variety of topics to date, from community data sharing and building an innovation community to Earth remote sensing and high performance computing. As our journey continues we will keep sharing our insights and also welcome you to participate in the conversation.

August of 2012 marked the completion of work on our grant from the National Science Foundation (NSF). The project, funded through the NSF SBIR/STTR and ERC Collaboration Opportunity, was a partnership between HySpeed Computing and the Bernard M. Gordon Center for Subsurface Sensing and Imaging Systems at Northeastern University. Through this work we were able to successfully utilize GPU computing to accelerate a remote sensing tool for the analysis of submerged marine environments. Our accelerated version of the algorithm was 45x faster than the original, thus approaching the capacity for real-time processing of this complex algorithm.

HySpeed Computing president, Dr. James Goodman, also attended a number of professional conferences and meetings during 2012. This included showcasing our achievements in geospatial applications and community data sharing at the International Coral Reef Symposium in Cairns, Australia and the NASA HyspIRI Science Workshop in Washington, D.C., and presenting our accomplishments in remote sensing algorithm acceleration at the GPU Technology Conference in Pasadena, CA and the VISualize Conference in Washington, D.C. Along the way we met, and learned from, a wonderfully diverse group of other scientists and professionals. We are encouraged by the direction and dedication we see in the community and honored to be a contributor to this progress.

So what are we looking forward to in 2013? You heard it here first – we are proud to soon be launching HyPhoon, a gateway for accessing and sharing both datasets and applications. The initial HyPhoon release will focus on providing the community with free and open access to remote sensing datasets. We already have data from the University of Queensland, Rochester Institute of Technology, University of Puerto Rico at Mayaguez, NASA, and the Galileo Group, with additional commitments from others. This data will be available for the community to use in research projects, class assignments, algorithm development, application testing and validation, and in some cases also commercial applications. In other words, in the spirit of encouraging innovation, these datasets are offered as a community resource and open to your creativity. We look forward to seeing what you accomplish.

Connect with us through our website or via social media to pre-register and be among the first to access the data as soon as it becomes available!

Beyond datasets, HyPhoon will also soon include a marketplace for community members to access advanced algorithms, and sell user-created applications. Are you a scientist with an innovative new algorithm? Are you a developer who can help transform research code into user applications? Are you working in the application domain and have ideas for algorithms that would benefit your work? Are you looking to reach a larger audience and expand your impact on the community? If so, we encourage you to get involved in our community.

HySpeed Computing is all about accelerating innovation and technology transfer.

Remote Sensing in the Cloud – Introducing the ENVI Services Engine

A popular topic these days is cloud computing. And the world of remote sensing is no exception. New developments in software, hardware, and connectivity are offering innovative options for performing remote sensing image analysis and visualization tasks in the cloud.

One example of the recent advance in cloud computing capabilities for geospatial scientists is the development of the ENVI Services Engine by Exelis Visual Information Solutions (Exelis VIS). Taking what was previously the domain of desktop computing, this software engine brings the image analysis tools of ENVI into the cloud. This translates into an ability to deploy ENVI processing tools, such as image classification, anomaly detection and change detection, into an online environment. Additionally, because the system uses an HTTP REST interface and was constructed utilizing open source standards, implementing the software is feasible across a variety of operating systems and different hardware devices.

This flexibility of the ENVI Services Engine, and cloud computing in general, speaks directly to the “bring your own device” movement. Rather than being limited to certain operating systems or certain types of hardware, users have many more options to satisfy their preferences. Access and processing thus becomes feasible from a variety of tablets, mobile phones and laptops, in addition to the usual array of desktops and workstations.

As an example, consider the ability to access imagery and derived data layers from your favorite mobile device. Now consider being able to adjust your analysis on-the-fly from this same device based on observations while in the field. With the image processing tasks being handled on remote servers, extensive computing capacity is no longer required on your local device. This enables not just remote access to image processing, but also the ability for on-demand visualization and display of entire databases full of different images and results.

Having the image processing tasks performed on the same servers where the imagery is stored, or on servers close to them, is also more computationally efficient, since imagery does not need to be first transferred to local computers and results then transferred back to the servers. This is particularly relevant for large data archives, where even simple changes to existing algorithms, or the addition of new algorithms, may necessitate re-processing vast volumes of data.
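As a rough illustration of how a client might hand work to such a server-side engine, the sketch below packages a processing task as an HTTP POST request with a JSON body. The server URL, endpoint path, task name, and parameter fields are all hypothetical placeholders for illustration, not the documented ENVI Services Engine API:

```python
import json
from urllib.request import Request

def build_task_request(base_url, task_name, parameters):
    """Package a server-side processing task as an HTTP POST with a
    JSON body. Endpoint layout and field names are illustrative only."""
    body = json.dumps({"taskName": task_name, "inputParameters": parameters})
    return Request(
        url=f"{base_url}/services/{task_name}/submit",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical request: classify a raster that already lives on the server,
# so only this small JSON payload crosses the network, not the imagery.
req = build_task_request(
    "http://example.com/ese",  # placeholder server address
    "ISODATAClassification",
    {"inputRaster": "scene_001.dat", "classes": 5},
)
print(req.get_method(), req.full_url)
```

The point of the sketch is the shape of the exchange: the request names data already sitting next to the compute, so a thin client on a phone or tablet can trigger heavy processing without ever downloading the source imagery.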

Although the concept of cloud computing is not new, it has become apparent that the software and hardware landscape has evolved, making cloud computing for geospatial analysis significantly more attractive than ever before.

Attendees of the VISualize conference earlier this year received a sneak peek at the ENVI Services Engine. The software was also recently on display at the GEOINT conference this past October. However, official release of the software isn’t scheduled until early 2013. For more information: