Live Webinar – Delivering On-Demand Geoanalytics at Scale

Join us for a live webinar
Tuesday, April 18, 2017 | 10:30am EDT/3:30pm BST

Register Now

Delivering On-Demand Geoanalytics at Scale

Things have changed. Vast amounts of imagery are freely available on cloud platforms while big datasets can be hosted and accessed in enterprise environments in ways that were previously cost prohibitive. The ability to efficiently and accurately analyze this data at scale is critical to making informed decisions in a timely manner.

Developed with your imagery needs in mind, the Geospatial Services Framework (GSF) provides a scalable, highly configurable framework for deploying batch and on-demand geospatial applications like ENVI and IDL as a web service. Whether you are a geospatial professional in need of a robust software stack for end-to-end data processing, or a decision maker in need of consolidated analytics for deriving actionable information from complex large-scale data, GSF can be configured to meet your needs.

This webinar will use real-world example applications to:

  • Describe the capabilities of GSF for scalable data processing and information delivery
  • Introduce the diverse ecosystem of geospatial analysis tools exposed by GSF
  • Illustrate the development of customized ENVI applications within the GSF environment

What are your geospatial data analysis needs?

Register Now

If you can’t attend the live webinar, register anyway and we’ll email you a link to the recording.

ENVI Analytics Symposium 2016 – Geospatial Signatures to Analytical Insights

HySpeed Computing is pleased to announce our sponsorship of the upcoming ENVI Analytics Symposium taking place in Boulder, CO from August 23-24, 2016.

EAS 2016

Building on the success of last year’s inaugural symposium, the 2016 ENVI Analytics Symposium “continues its exploration of remote sensing and big data analytics around the theme of Geospatial Signatures to Analytical Insights.

“The concept of a spectral signature in remote sensing involves measuring reflectance/emittance characteristics of an object with respect to wavelength. Extending the concept of a spectral signature to a geospatial signature opens the aperture of our imagination to include textural, spatial, contextual, and temporal characteristics that can lead to the discovery of new patterns in data. Extraction of signatures can in turn lead to new analytical insights on changes in the environment which impact decisions from national security to critical infrastructure to urban planning.

“Join your fellow thought leaders and practitioners from industry, academia, government, and non-profit organizations in Boulder for an intensive exploration of the latest advancements of analytics in remote sensing.”

Key topics to be discussed at this year’s event include Global Security and GEOINT, Big Data Analytics, Small Satellites, UAS and Sensors, and Algorithms to Insights, among many others.

There will also be a series of pre- and post-symposium workshops to gain in-depth knowledge on various geospatial analysis techniques and technologies.

For more information: http://harrisgeospatial.com/eas/Home.aspx

It’s shaping up to be a great conference. We look forward to seeing you there.

ENVI Analytics Symposium – Come explore the next generation of geoanalytic solutions

HySpeed Computing is pleased to announce our sponsorship of the upcoming ENVI Analytics Symposium taking place in Boulder, CO from August 25-26, 2015.

ENVI Analytics Symposium

The ENVI Analytics Symposium (EAS) will bring together the leading experts in remote sensing science to discuss technology trends and the next generation of solutions for advanced analytics. These topics are important because they can be applied to a diverse range of needs in environmental and natural resource monitoring, global food production, security, urbanization, and other fields of research.

The need to identify technology trends and advanced analytic solutions is being driven by the staggering growth in high-spatial and spectral resolution earth imagery, radar, LiDAR, and full motion video data. Join your fellow thought leaders and practitioners from industry, academia, government, and non-profit organizations in Boulder, Colorado for an intensive exploration of the latest advancements of analytics in remote sensing.

Core topics to be discussed at this event include Algorithms and Analytics, Applied Research, Geospatial Big Data, and Remote Sensing Phenomenology.

For more information: http://www.exelisvis.com/eas/HOME.aspx

We look forward to seeing you there.

Geospatial Solutions in the Cloud

Source: Exelis VIS whitepaper – 12/2/2014 (reprinted with permission)

What are Geospatial Analytics?

Geospatial analytics allow people to ask questions of data that exist within a spatial context. Usually this means extracting information from remotely sensed data such as multispectral imagery or LiDAR that is focused on observing the Earth and the things happening on it, both at a single point in time and over a period of time. Familiar examples of this type of geospatial analysis include Land Classification, Change Detection, Soil and Vegetation Indices, and, depending on the bands of your data, Target Detection and Material Identification. However, geospatial analytics can also mean analyzing data that is not optical in nature.

So what other types of problems can geospatial analytics solve? Geospatial analytics comprise more than just images laid over a representation of the Earth. Geospatial analytics can ask questions of ANY type of geospatial data, and provide insight into static and changing conditions within a multi-dimensional space. Things like aircraft vectors in space and time, wind speeds, or ocean currents can be introduced into geospatial algorithms to provide more context to a problem and to enable new correlations to be made between variables.

Many times, advanced analytics like these can benefit from the power of cloud, or server-based computing. Benefits from the implementation of cloud-based geospatial analytics include the ability to serve on-demand analytic requests from connected devices, run complex algorithms on large datasets, or perform continuous analysis on a series of changing variables. Cloud analytics also improve the ability to conduct multi-modal analysis, or processes that take into account many different types of geospatial information.

Here we can see vectors of a UAV along with the ground footprint of the sensor overlaid in Google Earth™, as well as a custom interface built on ENVI that allows users to visualize real-time weather data in four dimensions (Figure 1).

geospatial_cloud_fig1

Figure 1 – Multi-Modal Geospatial Analysis – data courtesy NOAA

These are just a few examples of non-traditional geospatial analytics that cloud-based architecture is very good at solving.

Cloud-Based Geospatial Analysis Models 

So let’s take a quick look at how cloud-based analytics work. There are two different operational models for running analytics: the on-demand model and the batch process model. In an on-demand model (Figure 2), a user generally requests a specific piece of information from a web-enabled device such as a computer, a tablet, or a smartphone. Here the user is making the request to a cloud-based resource.

geospatial_cloud_fig2

Figure 2 – On-Demand Analysis Model

Next, the server identifies the requested data and runs the selected analysis on it. This leverages scalable server architecture that can vastly decrease the amount of time it takes to run the analysis and eliminate the need to host the data or the software on the web-enabled device. Finally, the requested information is sent back to the user, usually at a fraction of the bandwidth cost required to move large amounts of data or full resolution derived products through the internet.
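To make the request/response flow above concrete, here is a minimal sketch of the kind of JSON payload a web client might assemble for an on-demand analysis request. The endpoint style, task name, and parameter names are purely illustrative assumptions, not the actual ENVI Services Engine API.

```python
import json

# Hypothetical on-demand task request. "SpectralIndex", "INPUT_RASTER",
# "INDEX", and the data URL are illustrative placeholders only.
def build_task_request(task_name, input_url, parameters):
    """Assemble the JSON payload a web client might POST to a cloud service."""
    return {
        "taskName": task_name,
        "inputParameters": {"INPUT_RASTER": {"url": input_url}, **parameters},
        "responseFormat": "application/json",
    }

request_body = build_task_request(
    "SpectralIndex",                      # hypothetical task name
    "http://example.com/data/scene.dat",  # hypothetical dataset location
    {"INDEX": "NDVI"},
)
print(json.dumps(request_body, indent=2))
```

The key point is that only this small payload and the derived result cross the network; the imagery and the software stay on the server.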

In the automated batch process analysis model (Figure 3), the cloud is designed to conduct prescribed analysis on data as it becomes available to the system, reducing the amount of manual interaction and time that it takes to prepare or analyze data. This system can take in huge volumes of data from various sources such as aerial or satellite images, vector data, full motion video, radar, or other data types, and then run a set of pre-determined analyses on that data depending on the data type and the requested information.
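The dispatch-by-data-type idea described above can be sketched very simply: map each incoming data type to a prescribed analysis pipeline. The file extensions and step names below are illustrative placeholders, not actual product behavior.

```python
import os

# Hypothetical routing table: each data type gets a prescribed set of
# analysis steps. Extensions and step names are illustrative only.
PIPELINES = {
    ".tif": ["orthorectify", "atmospheric_correction", "classification"],
    ".las": ["ground_filter", "dem_extraction"],
    ".ntf": ["orthorectify", "target_detection"],
}

def plan_processing(filename):
    """Return the prescribed analysis steps for a newly arrived file."""
    ext = os.path.splitext(filename)[1].lower()
    return PIPELINES.get(ext, [])

print(plan_processing("new_scene.TIF"))
```

In a real deployment a watcher service would call something like `plan_processing` for each arriving file and fan the steps out across the server stack.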

geospatial_cloud_fig3

Figure 3 – Automated Batch Process Model

Once the data has been pre-processed, it is ready for consumption, and the information is either pushed out to another cloud-based asset, such as an individual user who needs to request information or monitor assets in real time, or simply placed into a database in a ‘ready state’ to be accessed and analyzed later.

The ability of this type of system to leverage the computing power of scalable server stacks enables the processing of huge amounts of data and greatly reduces the time and resources needed to get raw data into a consumable state.

Solutions in the Cloud

HySpeed Computing

Now let’s take a look at a couple of use cases that employ ENVI capabilities in the cloud. The first is a web-based interface that allows users to perform on-demand geospatial analytics on hyperspectral data supplied by HICO™, the Hyperspectral Imager for the Coastal Ocean (Figure 4). HICO is a hyperspectral imaging spectrometer that is attached to the International Space Station (ISS) and is designed specifically for sampling the coastal ocean in an effort to further our understanding of the world’s coastal regions.

geospatial_cloud_fig4

Figure 4 – The HICO Sensor – image courtesy of NASA

Developed by HySpeed Computing, the prototype HICO Image Processing System (Figure 5) allows users to conduct on-demand image analysis of HICO’s imagery from a web-based browser through the use of ENVI cloud capabilities.

geospatial_cloud_fig5

Figure 5 – The HICO Image Processing System – data courtesy of NASA

The interface exposes several custom ENVI tasks designed specifically to take advantage of the unique spectral resolution of the HICO sensor to extract information characterizing the coastal environment. This type of interface is a good example of the on-demand scenario presented earlier, as it allows users to conduct on-demand analysis in the cloud without the need to have direct access to the data or the computing power to run the hyperspectral algorithms.

The goal of this system is to provide ubiquitous access to the robust HICO catalog of hyperspectral data as well as the ENVI algorithms needed to analyze them. This will allow researchers and other analysts to conduct valuable coastal research using web-based interfaces while capitalizing on the efforts of the Office of Naval Research, NASA, and Oregon State University that went into the development, deployment, and operation of HICO.

Milcord

Another use case involves a real-time analysis scenario that comes from a company called Milcord and their dPlan Next Generation Mission Manager (Figure 6). The goal of dPlan is to “aid mission managers by employing an intelligent, real-time decision engine for multi-vehicle operations and re-planning tasks” [1]. What this means is that dPlan helps folks make UAV flight plans based upon a number of different dynamic factors, and delivers the best plan for multiple assets both before and during the actual operation.

geospatial_cloud_fig6

Figure 6 – The dPlan Next Generation Mission Manager

Factors that are used to help score the flight plans include fuel availability, schedule metrics based upon priorities for each target, as well as what are known as National Image Interpretability Rating Scales, or NIIRS (Figure 7). NIIRS are used “to define and measure the quality of images and performance of imaging systems. Through a process referred to as ‘rating’ an image, the NIIRS is used by imagery analysts to assign a number which indicates the interpretability of a given image.” [2]

geospatial_cloud_fig7

Figure 7 – Extent of NIIRS 1-9 Grids Centered in an Area Near Calgary

These factors are combined into a cost function, and dPlan uses the cost function to find the optimal flight plan for multiple assets over a multitude of targets. dPlan also performs a cost-benefit analysis to indicate whether an asset cannot reach all targets, which target would be the lowest cost to remove from the plan, and whether another asset can visit that target instead.
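The cost-function idea described above can be sketched as a weighted score per candidate plan: fuel use and schedule delay add cost, while higher predicted image quality (NIIRS) reduces it. The weights, factor names, and numbers below are illustrative assumptions, not dPlan’s actual model.

```python
# Hypothetical plan-scoring sketch: lower cost is better. All weights and
# factor names are illustrative placeholders, not dPlan's real cost function.
def plan_cost(plan, weights=(1.0, 1.0, 1.0)):
    w_fuel, w_sched, w_niirs = weights
    # Less fuel and delay are better; higher NIIRS (image quality) is better.
    return (w_fuel * plan["fuel"]
            + w_sched * plan["schedule_delay"]
            - w_niirs * plan["expected_niirs"])

candidates = [
    {"name": "plan_a", "fuel": 40.0, "schedule_delay": 2.0, "expected_niirs": 6.0},
    {"name": "plan_b", "fuel": 55.0, "schedule_delay": 0.5, "expected_niirs": 7.5},
]
best = min(candidates, key=plan_cost)
print(best["name"])
```

Re-planning in flight then amounts to re-running this scoring as new fuel, schedule, or NIIRS estimates arrive.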

dPlan employs a custom ENVI Services Engine (ESE) application to generate huge grids of Line of Sight values and NIIRS values associated with a given asset and target (Figure 8). dPlan uses this grid of points to generate route geometry, for example how close and at what angle the asset needs to approach the target.

geospatial_cloud_fig8

Figure 8 – dPlan NIIRS Workflow

The cloud-computing power leveraged by dPlan allows users to re-evaluate flight plans on the fly, taking into account new information as it becomes available in real time. dPlan is a great example of how cloud-based computing combined with powerful analysis algorithms can solve complex problems in real time and reduce the resources needed to make accurate decisions amidst changing environments.

Solutions in the Cloud

So what do we do here at Exelis to enable folks like HySpeed Computing and Milcord to ask these kinds of questions of their data and retrieve reliable answers? The technology they’re using is called the ENVI Services Engine (Figure 9), an enterprise-ready version of the ENVI image analytics stack. We currently have over 60 out-of-the-box analysis tasks built into it, and are creating more with every release.

geospatial_cloud_fig9

Figure 9 – The ENVI Services Engine

The real value here is that ENVI Services Engine allows users to develop their own analysis tasks and expose them through the engine. This is what enables users to develop unique solutions to geospatial problems and share them as repeatable processes for others to use. These solutions can be run over and over again on different data and provide consistent, dependable information to the people requesting the analysis. The cloud-based technology makes it easy to access from web-enabled devices while leveraging the enormous computing power of scalable server instances. This combination of customizable geospatial analysis tasks and virtually limitless computing power begins to address some of the limiting factors of analyzing what is known as big data, or datasets so large and complex that traditional computing practices are not sufficient to identify correlations within disconnected data streams.
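The “develop a task once, run it repeatedly on different data” pattern can be sketched generically as a task registry. This mirrors the idea only; the decorator, registry, and task name below are illustrative assumptions, not the ENVI Services Engine API.

```python
# Generic sketch of a repeatable-task registry. Names are illustrative
# placeholders, not the actual ENVI Services Engine interface.
TASKS = {}

def register_task(name):
    """Expose an analysis function under a task name."""
    def wrap(fn):
        TASKS[name] = fn
        return fn
    return wrap

@register_task("band_mean")
def band_mean(pixels):
    """A trivial custom 'analysis': average the input values."""
    return sum(pixels) / len(pixels)

def run_task(name, data):
    """Run a registered task; callable repeatedly on different data."""
    return TASKS[name](data)

print(run_task("band_mean", [0.2, 0.4, 0.6]))
```

Because the task is looked up by name, any client that can name the task and supply data gets the same repeatable process.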

Our goal here at Exelis is to enable you to develop custom solutions to industry-specific geospatial problems using interoperable, off-the-shelf technology. For more information on what we can do for you and your organization, please feel free to contact us.

Sources:

[1] 2014. Milcord. “Geospatial Analytics in the Cloud: Successful Application Scenarios” webinar. https://event.webcasts.com/starthere.jsp?ei=1042556

[2] 2014. The Federation of American Scientists. “National Image Interpretability Rating Scales”. http://fas.org/irp/imint/niirs.htm


Geospatial Learning Resources – An overview of the 2013 ENVI Rapid Learning Series

ENVI Rapid Learning Series

Have you downloaded, or upgraded to, the latest version of ENVI? Are you just learning the new interface, or already a seasoned expert? No matter your experience level, if you’re an ENVI user then it’s worth viewing the ENVI Rapid Learning Series.

This series is a collection of short 30-minute webinars that address different technical aspects and application tips for using ENVI. Originally hosted live in the fall of 2013, the webinars are now available online to view at your convenience:

  • ENVI and ArcGIS Interoperability Tips “Learn best practices and tips for using ENVI to extract information from your imagery. Get new information, ask questions, become a better analyst.” (recorded 10/16/13)
  • Using NITF Data in ENVI “Learn best practices and tips for using NITF data with ENVI to extract information from your imagery. Get new information, ask questions, become a better analyst.” (recorded 10/23/13)
  • Image Transformations in ENVI “Join Tony Wolf as he explores how image transformations can provide unique insight into your data in ENVI. Learn how to use the display capabilities of ENVI to visually detect differences between image bands and help identify materials on the ground.” (recorded 10/30/13)
  • Working with Landsat 8 Data in ENVI “Learn how to use Landsat 8 cirrus band, quality assurance band, and thermal channels in ENVI for classification, NDVI studies, and much more.” (recorded 11/6/13)
  • Using NPP VIIRS Imagery in ENVI “Join Thomas Harris as he explores the newly developed support for NPP VIIRS in ENVI. By opening a dataset as an NPP VIIRS file type, the user is presented with an intuitive interface that makes visualizing the data and correcting the ‘bowtie’ effect a snap.” (recorded 11/13/13)
  • Georeference, Image Registration, Orthorectification “This webinar looks at how to geometrically correct data in ENVI for improved accuracy and analysis results. Join the ENVI team as we demo georeferencing, image registration, and orthorectification capabilities and answer questions from attendees.” (recorded 11/20/13)
  • An Introduction to ENVI LiDAR “This webinar takes a quick look at the ENVI LiDAR interface and demonstrates how to easily transform geo-referenced point-cloud LiDAR data into useful geographic information system (GIS) layers. ENVI LiDAR can automatically extract Digital Elevation Models (DEMs), Digital Surface Models (DSMs), contour lines, buildings, trees, and power lines from your raw point-cloud LiDAR data. This information can be exported in multiple formats and to 3D visual databases.” (recorded 11/27/13)
  • IDL for the Non-Programmer “This webinar highlights some of the tools available to ENVI and IDL users, which allow them to analyze data and extend ENVI. Learn where to access code snippets, detailed explanations of parameters, and demo data that comes with the ENVI + IDL install.” (recorded 12/4/13)
  • ENVI Services Engine: What is it? “This webinar takes a very basic look at ENVI capabilities at the server level. It shows diagrams depicting how web based analysis works, and shows some examples of JavaScript clients calling the ENVI Services Engine. Benefits of this type of technology include developing apps or web based clients to run analysis, running batch analysis on multiple datasets, and integrating ENVI image analysis into the Esri platform.” (recorded 12/11/13)
  • Atmospheric Correction “This webinar looks at the different types of Atmospheric Correction tools available in ENVI. It starts with a look at what Atmospheric Correction is used for, and when you do and don’t need to apply it. Finally, it gives a live look at QUAC and FLAASH and how to configure these tools to get the best information from your data.” (recorded 12/18/13)

To access the ENVI webinars: http://www.exelisvis.com/Learn/EventsTraining/Webinars.aspx

Open Access Spectral Libraries – Online resources for obtaining in situ spectral data

Coral Spectra

There are many different analysis techniques used in remote sensing, ranging from the simple to the complex. In imaging spectrometry, i.e. hyperspectral remote sensing, a common technique is to utilize measured field or laboratory spectra to drive physics-based image classification and material detection algorithms. Here the measured spectra are used as representative examples of the materials and species that are assumed present in the remote sensing scene. Spectral analysis techniques can then be used to ascertain the presence, distribution and abundance of these materials and species throughout an image.

In most cases the best approach is to measure field spectra for a given study area yourself using a field-portable spectrometer; however, the time and cost associated with such fieldwork can oftentimes be prohibitive. Another alternative is to utilize spectral libraries, which contain catalogs of spectra already measured by other researchers.
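One common matching technique of the kind alluded to above is the spectral angle: treat each spectrum as a vector and measure the angle between a library spectrum and a pixel spectrum, so that the same material under brighter or darker illumination still matches. The sketch below uses made-up reflectance values for illustration.

```python
import math

# Spectral angle between two spectra (smaller angle = closer match).
# The spectra below are illustrative values, not real measurements.
def spectral_angle(a, b):
    """Angle in radians between two equal-length spectra."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

library_spectrum = [0.10, 0.25, 0.40, 0.35]  # e.g. a measured field spectrum
pixel_spectrum   = [0.20, 0.50, 0.80, 0.70]  # same shape, brighter illumination
print(spectral_angle(library_spectrum, pixel_spectrum))  # ~0: same material
```

Because the angle ignores overall magnitude, a library spectrum measured in the field can be compared directly against image pixels with different brightness.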

Below are examples of open access spectral libraries that are readily available online:

  • The ASTER Spectral Library, hosted by the Jet Propulsion Laboratory (JPL), contains a compilation of three other libraries, the Johns Hopkins University Spectral Library, the JPL Spectral Library and the USGS Spectral Library. The ASTER library currently contains over 2400 spectra and can be ordered in its entirety via CD-ROM or users can also search, graph and download individual spectra online.
  • The SPECCHIO Spectral Library is an online database maintained by the Remote Sensing Laboratories in the Department of Geography at University of Zurich. Once users have registered with the system to create an account, the SPECCHIO library can be accessed remotely over the internet or alternatively downloaded and installed on a local system. The library is designed specifically for community data sharing, and thus users can both download existing data and upload new spectra.
  • The Vegetation Spectral Library was developed by the Systems Ecology Laboratory at the University of Texas at El Paso with support from the National Science Foundation. In addition to options to search, view and download spectra, this library also helpfully includes photographs of the actual species and materials from which the data was measured. Registered users can also help contribute data to further expand the archive.
  • The ASU Spectral Library is hosted by the Mars Space Flight Facility at Arizona State University, and contains thermal emission spectra for numerous geologic materials. While the library is designed to support research on Mars, the spectra are also applicable to research closer to home here on Earth.
  • The Jet Propulsion Laboratory is currently building the HyspIRI Ecosystem Spectral Library. This library is still in its development phase, and hence contains only a limited number of spectra at this time. Nonetheless, it is expected to grow, since the library was created as a centralized resource for the imaging spectrometry community to contribute and share spectral measurements.

Other spectral libraries doubtless exist, and many thousands of additional spectra have been measured for individual research projects. It is expected that more and more of this data will become available online and that more uniform collection standards will be adopted, particularly as airborne and space-based hyperspectral sensors continue to become more prevalent.

Searching for other remote sensing data resources? Check out these earlier posts on resources for obtaining general remote sensing imagery as well as imaging spectrometry and lidar data.

Remote Sensing Archives – An overview of online resources for lidar data

Previous posts on data access have focused on general resources for obtaining remote sensing imagery – Getting your hands on all that imagery – and specific resources related to imaging spectrometry – A review of online resources for hyperspectral imagery. To add to this compendium of data resources, the following includes an overview of online archives for lidar data.

USGS lidar Mount St Helens

Lidar image of Mount St. Helens (courtesy USGS)

Lidar (light detection and ranging), also commonly referred to as LiDAR or LIDAR, is an “active” remote sensing technology, whereby laser pulses are used to illuminate a surface and the reflected return signals from these pulses are used to indicate the range (distance) to that surface. When combined with positional information and other data recorded by the airborne system, lidar produces a three-dimensional representation of the surface and the objects on that surface. Lidar technology can be utilized for terrestrial applications, e.g. topography, vegetation canopy height and infrastructure surveys, as well as aquatic applications, e.g. bathymetry and coastal geomorphology.
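The ranging principle described above reduces to a one-line formula: range equals the speed of light times the round-trip pulse time, divided by two. A minimal sketch:

```python
# Lidar ranging: range = (speed of light x round-trip time) / 2.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def pulse_range(round_trip_seconds):
    """Distance to the reflecting surface for a given round-trip time."""
    return C * round_trip_seconds / 2.0

# A return arriving ~6.67 microseconds after emission corresponds to ~1 km.
print(pulse_range(6.67e-6))
```

An airborne system records millions of such returns per flight line and combines them with GPS/IMU positioning to build the three-dimensional surface.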

Below is an overview of archives that contain lidar data products and resources:

  • CLICK (Center for LIDAR Information Coordination and Knowledge) provides links to different publicly available USGS lidar resources, including EAARL, the National Elevation Dataset and EarthExplorer. The CLICK website also hosts a searchable database of lidar publications and an extensive list of links to relevant websites for companies and academic institutions using lidar data in their work.
  • EAARL (Experimental Advanced Airborne Research Lidar) is an airborne sensor system that has the capacity to seamlessly measure both submerged bathymetric surfaces and adjacent terrestrial topography. By selecting the “Data” tab on the EAARL website, and then following links to specific surveys, users can view acquisition areas using Google Maps and access data as ASCII xyz files, GeoTIFFs and LAS files (a standardized lidar data exchange format).
  • NED (National Elevation Dataset) is the USGS seamless elevation data product for the United States and its territorial islands. NED is compiled using the best available data for any given area; the highest-resolution and most accurate sources are derived from lidar data and digital photogrammetry. NED data are available through the National Map Viewer in a variety of formats, including ArcGRID, GeoTIFF, BIL and GridFloat. However, to access the actual lidar data, and not just the resulting integrated products, users need to visit EarthExplorer.
  • EarthExplorer is a consolidated data discovery portal for the USGS data archives, which includes airborne and satellite imagery, as well as various derived image products. EarthExplorer allows users to search by geographic area, date range, feature class and data type, and in most cases instantly download selected data. To access lidar data, which are provided as LAS files, simply select the lidar checkbox under the Data Sets tab as part of your search criteria.
  • JALBTCX (Joint Airborne Lidar Bathymetry Technical Center of Expertise) performs data collection and processing for the U.S. Army Corps of Engineers, the U.S. Naval Meteorology and Oceanography Command and NOAA. The JALBTCX website includes a list of relevant lidar publications, a description of the National Coastal Mapping Program, and a link to data access via NOAA’s Digital Coast.
  • Digital Coast is a service provided by NOAA’s Coastal Services Center that integrates coastal data accessibility with software tools, technology training and success stories. Of the 950 data layers currently listed in the Digital Coast archive, lidar data represents nearly half of the available products. Searching for lidar data can be achieved using the Data Access Viewer and selecting “elevation” as the data type in your search, or by following the “Data” tab on the Digital Coast home page and entering “lidar” for the search criteria. The data is available in a variety of data formats, including ASCII xyz, LAS, LAZ, GeoTIFF and ASCII Grid, among others.
  • NCALM (National Center for Airborne Laser Mapping) is a multi-university partnership funded by the U.S. National Science Foundation, whose mission is to provide community access to high quality lidar data and support scientific research in airborne laser mapping. Data is accessible through the OpenTopography portal, either in KML format for display in Google Earth, as pre-processed DEM products, or as lidar point clouds in ASCII and LAS formats.

Lidar can be useful on its own, e.g. topography and bathymetry, and can also be merged with other remote sensing data, such as multispectral and hyperspectral imagery, to provide valuable three-dimensional information as input for further analysis. For example, lidar derived bathymetry can be used as input to improve hyperspectral models of submerged environments in the coastal zone. There has also been more widespread use of full-waveform lidar, which provides increased capacity to discriminate surface characteristics and ground features, as well as increased use of lidar return intensity, which can be used to generate a grayscale image of the surface.

What is readily apparent is that as the technology continues to improve, data acquisition becomes more cost effective, and data availability increases, lidar will play an important role in more and more remote sensing investigations.

That’s Not Real – Algorithm testing and validation using synthetic data

NIST Hyperspectral Image Projector

Example synthetic image of Enrique Reef, Puerto Rico using the NIST Hyperspectral Image Projector (HIP)

An important aspect of developing new algorithms or analysis techniques involves testing and validation. In remote sensing this is typically performed using an image with known characteristics, i.e. field measurements or other expert on-the-ground knowledge. However, obtaining or creating such a dataset can be challenging. As an alternative, many researchers have turned to synthetic data to address specific validation needs.

So what are the challenges behind using “real” data for validation? Let’s consider some of the common questions addressed through remote sensing, such as classifying images into categories describing the scene (e.g. forest, water, land, buildings, etc…) or identifying the presence of particular objects or materials (e.g. oil spill, active fire areas, coastal algae blooms, etc…). To validate these types of analyses, one needs knowledge of how much and where these materials are located in the given scene. While this can sometimes be discerned through experience and familiarity with the study area, in most cases this requires physically visiting the field and collecting measurements or observations of different representative points and areas throughout the scene. The resulting data is extremely useful for testing and validation, and recommended whenever feasible; however, conducting thorough field studies is not always practical, particularly when time and budget are limited.

Here we explore a few options that researchers use for creating synthetic images, from the simple to the complex:

  • A simple approach is to create an image with a grid of known values, or more specifically known spectra, where each cell in the grid represents a different material. Subsequent validation analysis can be used to confirm that a given methodology accurately categorizes each of the known materials. To add greater variability to this approach, different levels of noise can be added to the input spectra used to create the grid cells, or multiple spectra can be used to represent each of the materials. While seemingly simplistic, such grids can be useful for assessing fundamental algorithm performance.
  • The grid concept can be further extended to encompass significantly greater complexity, such as creating an image using a range of feasible parameter combinations. As an example from the field of coral reef remote sensing, a model can be used to simulate an image with various combinations of water depth, water properties, and habitat composition. If water depth is segmented into 10 discrete values, water properties are represented by 3 parameters, each with 5 discrete values, and habitat composition is depicted using just 3 categories (e.g. coral, algae and sand), this results in 3750 unique parameter combinations. Such an image can be used to test the ability of an algorithm to accurately retrieve each of these parameters under a variety of conditions.
  • To add more realism, it is also feasible to utilize a real image as the basis for creating a synthetic image. This becomes particularly important when there is a need to incorporate more realistic spatial and spectral variability in the analysis. From the field of spectral unmixing, for example, an endmember abundance map derived from a real image can be used to create a new image with a different set of endmembers. This maintains the spatial relationships present in the real image, while at the same time allowing flexibility in the spectral composition. The result is a synthetic image that can be used to test endmember extraction, spectral unmixing and other image classification techniques.
  • Another approach based on “real” imagery is the NIST Hyperspectral Image Projector (HIP), which is used to project realistic hyperspectral scenes for testing sensor performance. In other words, the HIP is used to generate and display synthetic images derived from real images. As with the above example, a real image is first decomposed into a set of representative endmember spectra and abundances. The HIP then uses a combination of spectral and spatial engines to project these same spectra and abundances, thereby replicating the original scene. The intent here is not necessarily to use the synthetic data to validate image processing techniques, but rather to test sensor performance by differentiating environmental effects from sensor effects.
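As a rough illustration of the grid approach described above, here is a minimal sketch in Python. The spectra, noise levels, and material names are entirely hypothetical, chosen only to show the workflow: build a grid image from a few known spectra, add noise, and then confirm that a simple nearest-spectrum classifier recovers each material. It also double-checks the parameter count from the coral reef example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical reference spectra (3 materials x 50 bands); values are illustrative only
n_bands = 50
materials = ["coral", "algae", "sand"]
ref_spectra = np.abs(rng.normal(0.3, 0.1, size=(len(materials), n_bands)))

# Build a grid image: one 10x10-pixel cell per material, with per-pixel noise added
cell = 10
grid = np.zeros((cell, cell * len(materials), n_bands))
truth = np.zeros((cell, cell * len(materials)), dtype=int)
for i, spectrum in enumerate(ref_spectra):
    noise = rng.normal(0.0, 0.01, size=(cell, cell, n_bands))
    grid[:, i * cell:(i + 1) * cell, :] = spectrum + noise
    truth[:, i * cell:(i + 1) * cell] = i

# Validate: assign each pixel to the reference spectrum at minimum Euclidean distance
pixels = grid.reshape(-1, n_bands)
dists = np.linalg.norm(pixels[:, None, :] - ref_spectra[None, :, :], axis=2)
predicted = dists.argmin(axis=1).reshape(truth.shape)
accuracy = (predicted == truth).mean()
print(f"classification accuracy on the synthetic grid: {accuracy:.2%}")

# The coral reef example above: 10 depths x 5^3 water-property values x 3 habitats
assert 10 * 5 ** 3 * 3 == 3750
```

With noise this small relative to the separation between spectra, the classifier should recover the grid almost perfectly; raising the noise level, or using multiple spectra per material, probes where a given algorithm starts to break down.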

Even though synthetic data is a powerful tool, keep in mind that it won’t meet all of your validation needs. You still need to demonstrate that your algorithm works in the “real world”, so it’s best to also incorporate actual measured data in your analysis.

Coral Reef Remote Sensing – A new book for coral reef science and management

Announcing publication of “Coral Reef Remote Sensing: A Guide for Mapping, Monitoring and Management”, edited by Dr. James Goodman, president of HySpeed Computing, and his colleagues Dr. Sam Purkis from Nova Southeastern University and Dr. Stuart Phinn from the University of Queensland.

This groundbreaking new book explains and demonstrates how satellite, airborne, ship- and ground-based imaging technologies, collectively referred to as “remote sensing”, are essential for understanding and managing coral reef environments around the world.

The book includes contributions from an international group of leading coral reef scientists and managers, demonstrating how remote sensing resources are now unparalleled in the types of information they produce, the level of detail provided, the area covered and the length of time over which they have been collected. When used in combination with field data and knowledge of coral reef ecology and oceanography, remote sensing contributes an essential source of information for understanding, assessing and managing coral reefs around the world.

The authors have produced a book that comprehensively explains how each remote sensing data collection technology works, and more importantly how they are each used for coral reef management activities around the world.

In the words of Dr. Sylvia Earle – renowned scientist and celebrated ocean explorer:

This remarkable book, Coral Reef Remote Sensing: A Guide for Mapping, Monitoring and Management, for the first time documents the full range of remote sensing systems, methodologies and measurement capabilities essential to understanding more fully the status and changes over time of coral reefs globally. Such information is essential and provides the foundation for policy development and for implementing management strategies to protect these critically endangered ecosystems.

I wholeheartedly recommend this book to scientists, students, managers, remote sensing specialists, and anyone who would like to be inspired by the ingenious new ways that have been developed and are being applied to solve one of the world’s greatest challenges: how to take care of the ocean that takes care of us.

If it had been available in 1834, Charles Darwin would surely have had a copy on his shelf.

We invite you to explore the book (including a sneak peek inside the chapters) and see how you can put the information to use on your own coral reef projects.

Remote Sensing Data Access – A review of online resources for hyperspectral imagery

In our previous post – Remote Sensing Data Archives – we explored some of the many general online data discovery tools for obtaining remote sensing imagery. We now sharpen our focus to the field of hyperspectral remote sensing, also known as imaging spectrometry, and delve into resources for accessing this particularly versatile type of imagery.

Hyperspectral imaging emerged on the remote sensing scene in the 1980s, originating at the Jet Propulsion Laboratory (JPL) with the development and deployment of the Airborne Imaging Spectrometer (AIS), followed soon thereafter by the Airborne Visible Infrared Imaging Spectrometer (AVIRIS). Since then hyperspectral imaging has evolved into a robust remote sensing discipline, with satellite and airborne sensors contributing to numerous applications in Earth observation, and other similarly sophisticated sensors being used for missions to the moon and Mars.

The premise behind hyperspectral imaging is that these sensors measure numerous, relatively narrow, contiguous portions of the electromagnetic spectrum, thereby providing detailed spectral information on how electromagnetic energy is reflected (or emitted) from a surface. To put this in perspective, a hyperspectral sensor might measure the visible portion of the spectrum using 50 or more narrow bands, as opposed to the three broad bands (i.e., red, green and blue) that we typically see with cameras and our eyes. Because objects (plants, soil, water, buildings, roads, etc.) reflect light differently as a function of their composition and structure, this enhanced spectral resolution offers more information with which to identify and map features on the Earth’s surface.
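To make the narrow-band versus broad-band comparison concrete, here is a small sketch in pure NumPy. The cube dimensions, wavelength range, and pixel values are assumptions for illustration, not any particular sensor’s layout: it extracts the spectrum of a single pixel from a simulated hyperspectral cube, then averages it down to three broad blue/green/red bands, mimicking what a conventional camera would record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated hyperspectral cube: rows x cols x bands (50 narrow visible bands, 400-700 nm)
rows, cols, n_bands = 100, 100, 50
wavelengths = np.linspace(400, 700, n_bands)  # band-center wavelengths in nm
cube = rng.random((rows, cols, n_bands))

# A hyperspectral pixel is a full spectrum: one reflectance value per narrow band
spectrum = cube[42, 17, :]
print(f"narrow-band spectrum: {spectrum.size} values")

# Collapse to three broad bands by averaging the narrow bands within each range
broad = {
    "blue": spectrum[(wavelengths >= 400) & (wavelengths < 500)].mean(),
    "green": spectrum[(wavelengths >= 500) & (wavelengths < 600)].mean(),
    "red": spectrum[(wavelengths >= 600) & (wavelengths <= 700)].mean(),
}
print(f"broad-band equivalent: {broad}")
```

The averaging step is lossy in exactly the way the paragraph describes: fifty distinct measurements collapse into three, discarding the fine spectral structure that makes material identification possible.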

For those interested in hyperspectral remote sensing, and curious to see what can be achieved using this type of data, let’s look at some of the archives that are available:

  • Hyperion – The Hyperion sensor (220 bands; 400-2500nm; 30m resolution) is located on NASA’s EO-1 satellite, and although deployed in 2000 as part of a one-year demonstration mission, the satellite and its onboard sensors have shown remarkable stamina, continuing to collect data today. Archive data from Hyperion are available through both Earth Explorer and GloVis, and new data can be requested through an online Data Acquisition Request (DAR).
  • HICO – The Hyperspectral Imager for Coastal Ocean sensor (128 bands; 350-1080nm; 90m resolution) was installed on the International Space Station (ISS) in 2009 and is uniquely configured for the acquisition of ‘dark’ targets, specifically coastal aquatic areas. The sensor was initially developed and sponsored by the Office of Naval Research, with continuing support now provided through NASA’s ISS Program. Archive data from HICO, as well as requests for new data, are available through the HICO website hosted by Oregon State University; however, interested users must first submit a short proposal to become part of the HICO user community.
  • CHRIS – The Compact High Resolution Imaging Spectrometer (18-62 bands; 410-1020nm; 17-34m resolution) is the main payload on ESA’s Proba-1 satellite, which was launched in 2001. As with the EO-1 satellite, Proba-1 was only intended to serve as a short-lived technology demonstrator, but has managed to continue collecting valuable science data for more than a decade. Data from CHRIS are available to registered users, obtained via submittal and acceptance of a project proposal, through ESA’s Third Party Missions portfolio on Earthnet Online.
  • AVIRIS – The Airborne Visible Infrared Imaging Spectrometer (224 bands; 400-2500nm; 4-20m resolution) has been supporting hyperspectral projects for more than two decades, and can be credited as a true pioneer in the field. AVIRIS is most commonly flown onboard a Twin Otter turboprop or ER-2 jet, but has also been configured to operate from several other airborne platforms. Images from 2006-2011 are available through the AVIRIS Flight Data Locator, with plans to soon expand this archive to include additional imagery from 1992-2005 (currently available through request from JPL).
  • NEON – The National Ecological Observatory Network is a continental-scale network of 60 observation sites located across the United States, where a standardized set of field and airborne data are being collected to support ecological research. Remote sensing data are being acquired via the Airborne Observation Platform, which includes a high-resolution digital camera, waveform LiDAR, and imaging spectrometer. The NEON project is adopting an open data policy, but data acquisition and distribution tools are currently still in development. Thus, initial “prototype” data, which includes a sampling of hyperspectral imagery, are being made available through the NEON Prototype Data Sharing (PDS) system.
  • TERN – The Terrestrial Ecosystem Research Network is an Australian equivalent of NEON, providing a distributed network of observation facilities, datasets, map products and analysis tools to support Australian ecosystem science. Within this larger project is the AusCover facility, which leads the remote sensing image acquisition and field validation efforts for TERN. Current hyperspectral datasets available through AusCover include both airborne data and a comprehensive collection of Hyperion imagery. Data are accessible through the TERN Data Discovery Portal and the AusCover Visualization Portal.

These aren’t the only hyperspectral instruments in operation. There are new instruments, such as the Next Generation AVIRIS (AVIRIS-NG), Hyperspectral Thermal Emission Spectrometer (HyTES) and Portable Remote Imaging Spectrometer (PRISM), which all recently conducted their first science missions in 2012. There are a growing number of hyperspectral programs and instruments operated by government agencies and universities, such as the NASA Ames Research Center and the Carnegie Airborne Observatory (CAO). There are various airborne sensors operated or produced by commercial organizations, such as the Galileo Group, SpecTIR, HyVista and ITRES. And there are also a number of new satellite-based sensors on the horizon, including HyspIRI (NASA), EnMAP (Germany), PRISMA (Italy) and HISUI (Japan).

It’s an exciting field, with substantial growth continuing to emerge in both sensor technology and analysis methods. As these data become increasingly available, so does the potential for more researchers to get involved and for new applications to be developed.