Geospatial Solutions in the Cloud

Source: Exelis VIS whitepaper – 12/2/2014 (reprinted with permission)

What Are Geospatial Analytics?

Geospatial analytics allow people to ask questions of data that exist within a spatial context. Usually this means extracting information from remotely sensed data, such as multispectral imagery or LiDAR, that is focused on observing the Earth and the things happening on it, either in a static sense or over a period of time. Familiar examples of this type of geospatial analysis include Land Classification, Change Detection, Soil and Vegetation Indices, and, depending on the bands of your data, Target Detection and Material Identification. However, geospatial analytics can also mean analyzing data that is not optical in nature.

So what other types of problems can geospatial analytics solve? Geospatial analytics comprise more than just images laid over a representation of the Earth. Geospatial analytics can ask questions of ANY type of geospatial data, and provide insight into static and changing conditions within a multi-dimensional space. Things like aircraft vectors in space and time, wind speeds, or ocean currents can be introduced into geospatial algorithms to provide more context to a problem and to enable new correlations to be made between variables.

Many times, advanced analytics like these can benefit from the power of cloud, or server-based, computing. Benefits from the implementation of cloud-based geospatial analytics include the ability to serve on-demand analytic requests from connected devices, run complex algorithms on large datasets, or perform continuous analysis on a series of changing variables. Cloud analytics also improve the ability to conduct multi-modal analysis, or processes that take into account many different types of geospatial information.

Here we can see vectors of a UAV along with the ground footprint of the sensor overlaid in Google Earth™, as well as a custom interface built on ENVI that allows users to visualize real-time weather data in four dimensions (Figure 1).


Figure 1 – Multi-Modal Geospatial Analysis – data courtesy NOAA

These are just a few examples of non-traditional geospatial problems that cloud-based architectures are well suited to solve.

Cloud-Based Geospatial Analysis Models 

So let’s take a quick look at how cloud-based analytics work. There are two different operational models for running analytics: the on-demand model and the batch process model. In an on-demand model (Figure 2), a user generally requests a specific piece of information from a web-enabled device such as a computer, a tablet, or a smartphone. Here the user is making the request to a cloud-based resource.


Figure 2 – On-Demand Analysis Model

Next, the server identifies the requested data and runs the selected analysis on it. This leverages scalable server architecture that can vastly decrease the amount of time it takes to run the analysis and eliminates the need to host the data or the software on the web-enabled device. Finally, the requested information is sent back to the user, usually at a fraction of the bandwidth cost required to move large amounts of data or full-resolution derived products over the internet.
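To make the on-demand pattern concrete, here is a minimal sketch of what a client-side request might look like from IDL, using the built-in IDLnetURL class. The host name, service path, and query parameters below are invented for illustration; they are not a real ENVI Services Engine endpoint, whose actual REST interface is described in its documentation.

    ; Hypothetical on-demand request to a cloud analysis service.
    ; The server runs the task and returns a small result (e.g., JSON),
    ; rather than shipping full-resolution data to the client.
    url = OBJ_NEW('IDLnetURL')
    url.SetProperty, URL_SCHEME='http', URL_HOSTNAME='analytics.example.com', $
      URL_PATH='services/SpectralIndex', URL_QUERY='input=scene123&index=NDVI'
    result = url.Get(/STRING_ARRAY)   ; blocks until the server responds
    PRINT, result
    OBJ_DESTROY, url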

In the automated batch process analysis model (Figure 3), the cloud is designed to apply prescribed analyses to data as they become available to the system, reducing the amount of manual interaction and the time it takes to prepare or analyze data. This system can take in huge volumes of data from various sources, such as aerial or satellite images, vector data, full motion video, radar, or other data types, and then run a set of pre-determined analyses on that data depending on the data type and the requested information.


Figure 3 – Automated Batch Process Model

Once the data has been pre-processed, it is ready for consumption: the information is either pushed out to a consumer, such as an individual user who needs to request information or monitor assets in real time, or simply placed into a database in a ‘ready state’ to be accessed and analyzed later.

The ability of this type of system to leverage the computing power of scalable server stacks enables the processing of huge amounts of data and greatly reduces the time and resources needed to get raw data into a consumable state.

Solutions in the Cloud

HySpeed Computing

Now let’s take a look at a couple of use cases that employ ENVI capabilities in the cloud. The first is a web-based interface that allows users to perform on-demand geospatial analytics on hyperspectral data supplied by HICO™, the Hyperspectral Imager for the Coastal Ocean (Figure 4). HICO is a hyperspectral imaging spectrometer that is attached to the International Space Station (ISS) and is designed specifically for sampling the coastal ocean in an effort to further our understanding of the world’s coastal regions.


Figure 4 – The HICO Sensor – image courtesy of NASA

Developed by HySpeed Computing, the prototype HICO Image Processing System (Figure 5) allows users to conduct on-demand image analysis of HICO’s imagery from a web-based browser through the use of ENVI cloud capabilities.


Figure 5 – The HICO Image Processing System – data courtesy of NASA

The interface exposes several custom ENVI tasks designed specifically to take advantage of the unique spectral resolution of the HICO sensor to extract information characterizing the coastal environment. This type of interface is a good example of the on-demand scenario presented earlier, as it allows users to conduct on-demand analysis in the cloud without the need to have direct access to the data or the computing power to run the hyperspectral algorithms.

The goal of this system is to provide ubiquitous access to the robust HICO catalog of hyperspectral data as well as the ENVI algorithms needed to analyze it. This gives researchers and other analysts the ability to conduct valuable coastal research through web-based interfaces while capitalizing on the efforts of the Office of Naval Research, NASA, and Oregon State University that went into the development, deployment, and operation of HICO.

Milcord

Another use case involves a real-time analysis scenario that comes from a company called Milcord and their dPlan Next Generation Mission Manager (Figure 6). The goal of dPlan is to “aid mission managers by employing an intelligent, real-time decision engine for multi-vehicle operations and re-planning tasks” [1]. What this means is that dPlan helps folks make UAV flight plans based upon a number of different dynamic factors, and delivers the best plan for multiple assets both before and during the actual operation.


Figure 6 – The dPlan Next Generation Mission Manager

Factors that are used to help score the flight plans include fuel availability, schedule metrics based upon priorities for each target, as well as what are known as National Image Interpretability Rating Scales, or NIIRS (Figure 7). NIIRS are used “to define and measure the quality of images and performance of imaging systems. Through a process referred to as ‘rating’ an image, the NIIRS is used by imagery analysts to assign a number which indicates the interpretability of a given image.” [2]


Figure 7 – Extent of NIIRS 1-9 Grids Centered in an Area Near Calgary

These factors are combined into a cost function, and dPlan uses the cost function to find the optimal flight plan for multiple assets over a multitude of targets. If an asset cannot reach all of its targets, dPlan also performs a cost-benefit analysis to indicate which target would be the least costly to remove from the plan, or whether another asset could visit the target instead.
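To make the scoring idea concrete, the cost function can be pictured as a weighted sum of the factors described above. This form is purely illustrative; Milcord’s actual formulation is not detailed in the sources cited here:

    cost(plan) = w_fuel * fuel_used + w_sched * schedule_slip - w_niirs * expected_NIIRS

where the weights encode mission priorities, and the optimizer searches for the plan that minimizes total cost across all assets and targets.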

dPlan employs a custom ENVI Services Engine (ESE) application to generate huge grids of Line of Sight values and NIIRS values associated with a given asset and target (Figure 8). dPlan uses this grid of points to generate route geometry, for example how close and at what angle the asset needs to approach the target.


Figure 8 – dPlan NIIRS Workflow

The cloud-computing power leveraged by dPlan allows users to re-evaluate flight plans on the fly, taking into account new information as it becomes available in real time. dPlan is a great example of how cloud-based computing combined with powerful analysis algorithms can solve complex problems in real time and reduce the resources needed to make accurate decisions amidst changing environments.

The ENVI Services Engine

So what do we do here at Exelis to enable folks like HySpeed Computing and Milcord to ask these kinds of questions of their data and retrieve reliable answers? The technology they’re using is called the ENVI Services Engine (Figure 9), an enterprise-ready version of the ENVI image analytics stack. It currently includes over 60 out-of-the-box analysis tasks, and we are creating more with every release.


Figure 9 – The ENVI Services Engine

The real value here is that the ENVI Services Engine allows users to develop their own analysis tasks and expose them through the engine. This is what enables users to develop unique solutions to geospatial problems and share them as repeatable processes for others to use. These solutions can be run over and over again on different data and provide consistent, dependable information to the people requesting the analysis. The cloud-based technology makes the engine easy to access from web-enabled devices while leveraging the enormous computing power of scalable server instances. This combination of customizable geospatial analysis tasks and virtually limitless computing power begins to address some of the limiting factors of analyzing what is known as big data: datasets so large and complex that traditional computing practices are not sufficient to identify correlations within disconnected data streams.
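As a rough illustration of what a user-developed task might look like, below is a minimal IDL routine that computes NDVI and saves the result. The procedure name, band indices, and wrapping details are hypothetical; consult the ENVI Services Engine documentation for the exact task template used to expose such a routine through the engine.

    ; Minimal sketch of a custom analysis routine (illustrative only).
    PRO my_ndvi_task, INPUT_RASTER=raster, OUTPUT_RASTER_URI=out_uri
      COMPILE_OPT idl2
      red = FLOAT(raster.GetData(BANDS=3))   ; assumed red band index
      nir = FLOAT(raster.GetData(BANDS=4))   ; assumed NIR band index
      ndvi = (nir - red) / ((nir + red) > 1e-6)   ; guard against divide-by-zero
      outRaster = ENVIRaster(ndvi, URI=out_uri, SPATIALREF=raster.SPATIALREF)
      outRaster.Save
    END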

Our goal here at Exelis is to enable you to develop custom solutions to industry-specific geospatial problems using interoperable, off-the-shelf technology. For more information on what we can do for you and your organization, please feel free to contact us.

Sources:

[1] Milcord (2014). “Geospatial Analytics in the Cloud: Successful Application Scenarios” webinar. https://event.webcasts.com/starthere.jsp?ei=1042556

[2] Federation of American Scientists (2014). “National Image Interpretability Rating Scales”. http://fas.org/irp/imint/niirs.htm

 


Advantages of Cloud Computing in Remote Sensing Applications

The original version of this post appeared in the June 26 edition of Exelis VIS’s Imagery Speaks, by James Goodman, CEO of HySpeed Computing

Below we explore the role of cloud computing in geospatial image processing, and the advantages this technology provides to the overall remote sensing toolbox.

The underlying concept of cloud computing is not new. Dating back to the advent of the client-server model in mainframe computing, the use of local devices to perform tasks on a server, or set of connected servers, has a long history within the computing industry.

With the rise of the personal computer, and the relative cost efficiency of memory and processing speed for these systems, there ensued a similarly rich history of computing using the local desktop environment.

As a result, in many application domains, including that of remote sensing, a dichotomy developed in the computing industry, with a large portion of the user community reliant on personal computers, while government and big business primarily utilized large-scale servers.

More recently, however, there has been an industry-wide surge in the prevalence of cloud computing applications within the general user community. Driven in large part by rapidly growing data volumes and the profound increase in the number and diversity of mobile computing devices, as well as a desire for access to centralized analytics, cloud computing is now a common component of our everyday experience.

Where does cloud computing fit within remote sensing? Given the online availability of weather maps and high-resolution satellite base maps, it can be argued that cloud computing is already regularly used in remote sensing. However, there are innumerable other remote sensing applications, with societal and economic benefits, that are not currently available in the cloud.

Since most of these applications are not directed at the consumer market, but instead relevant predominantly to business, government, education and scientific concerns, what then are the advantages of cloud computing in remote sensing?

  • Provides online, on-demand, scalable image processing capabilities.
  • Delivers image-derived products and visualization tools to a global user community.
  • Allows processing tools to be efficiently co-located with large image databases.
  • Removes software barriers and hardware requirements from non-specialists.
  • Facilitates rapid integration and deployment of new algorithms and processing tools.
  • Accelerates technology transfer in remote sensing through improved application sharing.
  • Connects remote sensing scientists more directly with the intended end-users.

At HySpeed Computing we are partnering with Exelis Visual Information Solutions to develop a cloud computing platform for processing data from the Hyperspectral Imager for the Coastal Ocean (HICO) – a uniquely capable sensor located on the International Space Station (ISS). The backbone of the computing framework is based on the ENVI Services Engine, with a user interface built using open-source software tools such as GeoServer and Leaflet.

A prototype version of the web-enabled HICO processing system will soon be publicly available for testing and evaluation by the community. Links to access the system will be provided on our website once it is released.

We envision a remote sensing future where the line between local and cloud computing becomes obscured, where applications can be interchangeably run in any computing environment, where developers can utilize their programming language of choice, where scientific achievements and innovations are readily shared through a distributed processing network, and where image-derived information is rapidly distributed to the global user community.

And what’s most significant about this vision is that the future is closer than you imagine.

About HySpeed Computing: Our mission is to provide the most effective analysis tools for deriving and delivering information from geospatial imagery. Visit us at hyspeedcomputing.com.

 

Application Tips for ENVI 5.x – Calculating vegetation indices for NDVI and beyond

This is part of a series on tips for getting the most out of your geospatial applications. Check back regularly or follow HySpeed Computing to see the latest examples and demonstrations.

Objective: Calculate a collection of vegetation indices for hyperspectral and multispectral imagery using ENVI’s Vegetation Index Calculator.

Scenario: In this tip, vegetation indices are calculated for two variants of AVIRIS data from Jasper Ridge, California: one using the full range of 224 possible hyperspectral bands (400-2500 nm), and the other spectrally convolved to match 8 of the 11 possible bands of Landsat 8 (i.e., all bands except the thermal and panchromatic).

The AVIRIS data (JasperRidge98av_flaash_refl; shown below) was obtained from the ENVI Classic Tutorial Data available from the Exelis website, and has already been corrected to surface reflectance using FLAASH.
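For readers curious about the spectral convolution step itself, the idea is that each broad multispectral band is simulated as a weighted average of the narrow hyperspectral bands, with weights given by the target band’s relative spectral response (RSR). A one-line IDL sketch, with illustrative variable names:

    ; refl : [n_bands] AVIRIS reflectance spectrum for one pixel
    ; rsr  : [n_bands] relative spectral response of the target Landsat band
    simulated_band = TOTAL(refl * rsr) / TOTAL(rsr)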

[Image: Jasper Ridge, CA]

Vegetation Indices: There are numerous vegetation indices included in ENVI, so in most cases there is already a vegetation tool available that meets your needs. These indices can be found in three main locations within the ENVI Toolbox: (1) Spectral > Vegetation; (2) SPEAR > SPEAR Vegetation Delineation; and (3) THOR > THOR Stressed Vegetation.

The core functionality for deriving vegetation properties in ENVI is the Vegetation Index Calculator (located in Toolbox > Spectral > Vegetation). This tool provides access to 27 different vegetation indices, and will conveniently pre-select the indices that can be calculated for a given input image depending on the spectral characteristics of the data. Despite this bit of assistance, however, properly implementing and interpreting the various vegetation indices still requires a thorough understanding of what is being calculated. To obtain this information, details and references for each index are provided in the ENVI help documentation. The 27 indices fall into seven categories:

  • Broadband Greenness [5 indices]: Normalized Difference Vegetation Index, Simple Ratio Index, Enhanced Vegetation Index, Atmospherically Resistant Vegetation Index, Sum Green Index.
  • Narrowband Greenness [7 indices]: Red Edge Normalized Difference Vegetation Index, Modified Red Edge Simple Ratio Index, Modified Red Edge Normalized Difference Vegetation Index, Vogelmann Red Edge Index 1, Vogelmann Red Edge Index 2, Vogelmann Red Edge Index 3, Red Edge Position Index.
  • Light Use Efficiency [3 indices]: Photochemical Reflectance Index, Structure Insensitive Pigment Index, Red Green Ratio Index.
  • Canopy Nitrogen [1 index]: Normalized Difference Nitrogen Index.
  • Dry or Senescent Carbon [3 indices]: Normalized Difference Lignin Index, Cellulose Absorption Index, Plant Senescence Reflectance Index.
  • Leaf Pigment [4 indices]: Carotenoid Reflectance Index 1, Carotenoid Reflectance Index 2, Anthocyanin Reflectance Index 1, Anthocyanin Reflectance Index 2.
  • Canopy Water Content [4 indices]: Water Band Index, Normalized Difference Water Index, Moisture Stress Index, Normalized Difference Infrared Index.

There are also five additional vegetation tools included in Toolbox > Spectral > Vegetation. The Vegetation Suppression Tool essentially removes the spectral contributions of vegetation from the image. The NDVI tool simply provides direct access to the commonly used Normalized Difference Vegetation Index. And the three other tools consolidate select subsets of the above vegetation indices into specific application categories: Agricultural Stress Tool, Fire Fuel Tool, and Forest Health Tool.

Two additional vegetation tools are also available as part of the THOR and SPEAR toolboxes. The THOR Stressed Vegetation and the SPEAR Vegetation Delineation tools both provide workflow approaches to calculating vegetation indices, inclusive of options such as atmospheric correction, mask definition, and spatial filtering. The SPEAR Vegetation Delineation tool uses NDVI to assess the presence and relative vigor of vegetation, whereas the THOR Stressed Vegetation tool provides a step-by-step methodology for processing imagery using the same suite of vegetation indices as defined for the Spectral toolbox.

It is important to note that input images should be atmospherically corrected prior to running the vegetation tools, or, in the case of the SPEAR and THOR tools, atmospherically corrected as part of the image processing workflow.

The Tip: This example demonstrates the steps used for running ENVI’s Vegetation Index Calculator. Interested users are also encouraged to download the tutorial data from Exelis, or use their own data, and explore what the other vegetation tools have to offer.

  • As specified above, two sets of imagery are used in this example: one is the full AVIRIS hyperspectral dataset, and the other is a spectrally convolved Landsat 8 OLI multispectral dataset of the same image.
  • After opening the images in ENVI, the vegetation tool is started by selecting Spectral > Vegetation > Vegetation Index Calculator.
  • The opening dialog window is used to specify the Input File along with any desired Spatial Subset and/or Mask Band.
  • Next is the main dialog for selecting Vegetation Indices and specifying the Output Filename. There is also an option for Biophysical Cross Checking, which compares results from different indices and masks out pixels with conflicting data values. Using Biophysical Cross Checking is application dependent, but can be useful for removing anomalous pixels from your analysis.
  • As illustrated below, the general process for calculating Vegetation Indices is always the same for any given dataset; the only difference is the list of vegetation indices that are actually available for a particular set of bands. In our example, the full AVIRIS hyperspectral dataset allows 25 different indices to be calculated, whereas the Landsat 8 OLI multispectral dataset allows only 6 indices.

[Image: Vegetation Index Calculator]

  • Once you have selected the relevant vegetation indices for your application, simply select OK and the Vegetation Index Calculator will generate an output file with individual bands corresponding to each of the selected vegetation indices.

Shown below is the output data from our two images, along with example quicklooks demonstrating the variability in the various output indices. The reason for this variability is that each index derives different, but related, biophysical information. Thus, be sure to look at the definitions and references for each index to help guide interpretation of the output.
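For users who prefer scripting, a rough equivalent of the workflow above can be run through the ENVITask API (assuming ENVI 5.2 or later, where the SpectralIndices task is available; the file name is a placeholder). For reference, NDVI itself is simply (NIR - Red) / (NIR + Red).

    ; Scripted vegetation index calculation (illustrative sketch).
    e = ENVI()
    raster = e.OpenRaster('JasperRidge98av_flaash_refl.dat')   ; placeholder path
    task = ENVITask('SpectralIndices')
    task.INPUT_RASTER = raster
    task.INDEX = ['Normalized Difference Vegetation Index', $
                  'Enhanced Vegetation Index']
    task.Execute
    e.Data.Add, task.OUTPUT_RASTER   ; make the result available in ENVI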

[Image: Vegetation Indices]

Application Tips for ENVI 5.x – An IDL application for opening HDF5 formatted HICO scenes

This is part of a series on tips for getting the most out of your geospatial applications. Check back regularly or follow HySpeed Computing to see the latest examples and demonstrations.

Objective: Open a HICO dataset stored in HDF5 format using an IDL application prepared by the U.S. Naval Research Laboratory.

This is a supplement to an earlier post that similarly describes how to open HDF5 formatted HICO files using either the H5_Browser or new HDF5 Reader in ENVI.

[Image: HICO Montgomery Reef, Australia]

Scenario: This tip demonstrates how to implement IDL code for opening an HDF5 HICO scene from Montgomery Reef, Australia into ENVI format. Subsequent steps are included for preparing the resulting data for further analysis.

The HICO dataset used in this example (H2012095004112.L1B_ISS) was downloaded from the NASA GSFC archive, which can be reached either through the HICO website at Oregon State University or the NASA GSFC Ocean Color website. Note that you can also apply to become a registered HICO Data User through the OSU website, and thereby obtain access to datasets already in ENVI format.

The IDL code used in this example is available from the NASA GSFC Ocean Color website under Documents > Software/Tools > IDL Library > hico. The three IDL files you need are: byte_ordering.pro, nrl_hico_h5_to_flat.pro and write_nrl_header.pro.

The same IDL code is also included here for your convenience:  nrl_hico_h5_to_flat,  byte_ordering  and  write_nrl_header (re-distributed here with permission; disclaimers included in the code). However, to use these files (which were renamed so they could be attached to the post), you will first need to change the file extensions from *.txt to *.pro.

Running this code requires only minor familiarity with IDL and the IDL Workbench.

The Tip: Below are steps to open the HICO L1B radiance and navigation datasets in ENVI using the IDL code prepared by the Naval Research Laboratory:

  • Start by unpacking the compressed folder (e.g., H2012095004112.L1B_ISS.bz2). If other software isn’t readily available, a good option is to download 7-zip for free from http://www.7-zip.org/.
  • Rename the resulting HDF5 file with a *.h5 extension (e.g., H2012095004112.L1B_ISS.h5). This allows the HDF5 tools in the IDL application to recognize the appropriate format.
  • If you downloaded the IDL files from this post, rename them from *.txt to *.pro (e.g., nrl_hico_h5_to_flat.txt to nrl_hico_h5_to_flat.pro); otherwise, if you downloaded them from the NASA website they already have the correct naming convention.
  • Open the IDL files in the IDL Workbench. To do so, simply double-click the files in your file manager and the files should automatically open in IDL if it is installed on your machine. Alternatively, you can launch either ENVI+IDL or just IDL and then select File > Open in the IDL Workbench.
  • Compile each of the files in the following order: (i) nrl_hico_h5_to_flat.pro, (ii) byte_ordering.pro, and (iii) write_nrl_header.pro. In the IDL Workbench this can be achieved by clicking on the tab associated with a given file and then selecting the Compile button in the menu bar.
  • You will ultimately only run the code for nrl_hico_h5_to_flat.pro, but this application depends on the other files, hence the reason they also need to be compiled.
  • Run the code for nrl_hico_h5_to_flat.pro, which is done by clicking the tab for this file and then selecting the Run button in the menu bar.
  • You will then be prompted for an *.h5 input file (e.g., H2012095004112.L1B_ISS.h5), and a directory where you wish to write the output files.
  • There is no status bar associated with this operation; however, if you look closely at the IDL prompt in the IDL Console at the bottom of the Workbench you will note that it changes color while the process is running and returns to its normal color when the process is complete. In any event, the procedure is relatively quick and typically finishes in less than a minute.
  • Once complete, two sets of output files are created (data files + associated header files), one for the L1B radiance data and one for the navigation data.
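As an optional aside, you can also peek at the raw radiance cube directly using IDL’s built-in HDF5 routines, without converting the file. The dataset path below is an assumption; inspect the file (e.g., with H5_BROWSER) to confirm the actual group and dataset names.

    ; Reading the HICO L1B radiance array directly from the HDF5 file.
    fid  = H5F_OPEN('H2012095004112.L1B_ISS.h5')
    dset = H5D_OPEN(fid, '/products/Lt')   ; assumed location of the radiance data
    data = H5D_READ(dset)                  ; returns the raw array
    H5D_CLOSE, dset
    H5F_CLOSE, fid
    HELP, data                             ; report dimensions and type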

Data Preparation: Below are the final steps needed to prepare the HICO data for further processing (repeated here in part from our previous post):

  • Open the L1B radiance and associated navigation data in ENVI. You will notice one side of the image exhibits a black stripe containing zero values.
  • As noted on the HICO website: “At some point during the transit and installation of HICO, the sensor physically shifted relative to the viewing slit. The edge of the viewing slit was visible in every scene.” This effect is removed by simply cropping out affected pixels in each of the data files. For scenes in standard forward orientation (+XVV), cropping includes 10 pixels on the left of the scene and 2 pixels on the right. Conversely, for scenes in reverse orientation (-XVV), cropping is 10 pixels on the right and 2 on the left.
  • If you’re not sure about the orientation of a particular scene, the orientation is specified in the newly created header file under hico_orientation_from_quaternion.
  • Spatial cropping can be performed by selecting Raster Management > Resize Data in the ENVI toolbox, choosing the relevant input file, selecting the option for Spatial Subset, subsetting the image to Samples 11-510 for forward orientation (3-502 for reverse orientation), and assigning a new output filename. Repeat as needed for each dataset.
  • The HDF5 formatted HICO scenes also require spectral cropping to reduce the total wavelengths from 128 to the 87 band subset from 0.4-0.9 um (400-900 nm). The bands outside this subset are considered less accurate and typically not included in analysis.
  • Spectral cropping can also be performed by selecting Raster Management > Resize Data in the ENVI toolbox, in this case using the Spectral Subset option and selecting bands 10-96 (corresponding to 0.40408-0.89669 um) while excluding bands 1-9 and 97-128. This step need only be applied to the hyperspectral L1B radiance data.
  • If desired, spectral and spatial cropping can both be applied in the same step.
  • The HICO scene is now ready for further processing and analysis in ENVI.
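For completeness, the same spatial and spectral cropping can be sketched programmatically (assuming ENVI 5.3 or later for the ENVISubsetRaster function; file names are placeholders, and the sample indices assume a standard 512-sample forward-orientation scene):

    ; Crop a forward (+XVV) HICO scene: drop 10 samples on the left and 2 on
    ; the right, and keep bands 10-96 (0-based indices 9 through 95).
    e = ENVI()
    raster = e.OpenRaster('H2012095004112_rad.bsq')   ; converted L1B radiance
    subset = ENVISubsetRaster(raster, $
      SUB_RECT=[10, 0, raster.NCOLUMNS-3, raster.NROWS-1], $   ; samples 11-510
      BANDS=INDGEN(87)+9)                                      ; bands 10-96
    subset.Export, 'H2012095004112_rad_crop.dat', 'ENVI'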

For more information on the sensor, detailed data characteristics, ongoing research projects, publications and presentations, and much, much more, HICO users are encouraged to visit the HICO website at Oregon State University. This is an excellent resource for all things HICO.

Coral Reef Reflectance Characteristics – Impacts of increasing water depth on spectral similarity

In a previous post we demonstrated how dendrograms can be used as effective tools for investigating the similarities and differences of reflectance spectra. Here we expand on our earlier discussion, and explore how dendrograms can be used to illustrate the decreasing separability of coral reef spectra with increasing water depth.

Coral reefs are known for their exceptional biodiversity, containing a complex array of mobile and sessile organisms. From a remote sensing perspective, however, coral reefs can be particularly challenging study areas, mostly due to the confounding effects of the overlying water column. Varying water depth and varying water properties can both contribute significant complexity to the interpretation and identification of features on the sea floor.

Given this complexity, it is therefore necessary in remote sensing to simplify coral reef ecosystems into a collection of generalized components, each representing a unique compilation of species and/or substrate types. When grouping species for analysis, and when interpreting image classification output, it is important to understand the spectral similarity – or dissimilarity – of different image features.

As a first example, let’s consider a case where reef habitat composition is represented by four fundamental components: coral, sponge, sand and submerged aquatic vegetation (SAV). As shown in our previous post, the average in situ spectra of these four components exhibit unique reflectance characteristics and can be readily differentiated at 0.1 spectral angle. However, with increasing water depth (approximated here using a semi-analytic model for clear tropical water) the separability of these components decreases. At 3 m water depth it becomes more difficult to differentiate sand from SAV, and coral from sponge; and at 10 m water depth the analysis is essentially reduced to a two-component system: sand versus coral, sponge and SAV.
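For readers unfamiliar with such models, the depth effect can be illustrated with a common two-flow approximation (a simplification, not necessarily the exact model used here), in which the reflectance observed over a bottom at depth z behaves roughly as

    R(z) = R_deep * (1 - exp(-2 * Kd * z)) + R_bottom * exp(-2 * Kd * z)

where Kd is the diffuse attenuation coefficient of the water column. Because the bottom term decays exponentially with depth, bottom spectra that are distinct at the surface converge toward the same deep-water signal, which is exactly the loss of separability seen in the dendrograms.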

[Image: Coral components dendrogram]

Let’s now compare and contrast the above results with the same analysis applied to all of the individual spectra used to create these component averages. This includes measurements from 24 coral species, 10 sponge species, 3 SAV species and areas of sand. As shown below, many individual species (and in some cases small groups of species) can be readily differentiated when the overlying water column is not considered. However, when the effects of the water column are included, the ability to distinguish individual species diminishes significantly with increasing water depth.

[Images: Coral species dendrograms at 0 m, 3 m, and 10 m]

While the relationship between water depth and spectral similarity is to be expected, what is particularly informative from these dendrograms is the ability to discern which species group together at different depths. For example, note that coral species do not always group with other coral species, but are observed to also group with both sponges and SAV. Additionally, because spectra do not necessarily group according to type, it becomes apparent that three spectral groups can be reasonably differentiated at 10 m rather than just two groups as suggested from the analysis using just averages for each component.

Such information can be immensely valuable for guiding image analysis, as well as aiding the interpretation of results. So if you’re working on remote sensing of coral reefs, it’s worth exploring the spectral characteristics of the dominant species in your study area, and investigating how spectral similarity changes with water depth.

Related post: Assessing Spectral Similarity – Visualizing hierarchical clustering using a dendrogram

Acknowledgement: Spectral data used in the above examples were collected using a GER-1500 spectrometer by M. Lucas at the University of Puerto Rico at Mayaguez for a NASA EPSCoR sponsored research project on the biodiversity of coastal and terrestrial ecosystems.

Hyperspectral Imaging from the ISS – Highlights from the 2014 HICO Data Users Meeting

The annual HICO Data Users Meeting was recently held in Washington, D.C. from 7-8 May 2014. This meeting was an opportunity for the HICO science community to exchange ideas, present research accomplishments, showcase applications, and discuss hyperspectral image processing techniques. With more than a dozen presentations and ample discussion throughout, it was an insightful and very informative meeting.


The HICO and RAIDS Experiment Payload installed on the Japanese Experiment Module (credit: NASA)

Highlights from the 2014 HICO Data Users Meeting include:

  • Mary Kappus (Naval Research Laboratory) summarized the status of the HICO mission, including an overview of current instrument and data management operations. Notable upcoming milestones include the 5 year anniversary of HICO in September 2014 and the acquisition of HICO’s 10,000th scene – impressive achievements for a sensor that began as just a technology demonstration.
  • Jasmine Nahorniak (Oregon State University) presented an overview of the OSU HICO website, which provides a comprehensive database of HICO sensor information and data characteristics. The website also includes resources for searching and downloading data from the OSU HICO archives, visualizing orbit and target locations in Google Earth, and an online tool (currently in beta testing) for performing atmospheric correction using tafkaa_6s.
  • Sean Bailey (NASA Goddard Space Flight Center) outlined the HICO data distribution and image processing capabilities at NASA. HICO support was initially added to SeaDAS in April 2013, with data distribution beginning in July 2013. In less than a year, as of February 2014, NASA has distributed 4375 HICO scenes to users in 25 different countries. NASA is also planning to soon incorporate additional processing capabilities in SeaDAS to generate HICO ocean color products.
  • With respect to HICO applications: Lachlan McKinna (NASA GSFC) presented a project using time series analysis to detect bathymetry changes in Shark Bay, Western Australia; Marie Smith (University of Cape Town) described a chlorophyll study in Saldanha Bay, South Africa; Darryl Keith (US EPA) discussed the use of HICO for monitoring coastal water quality; Wes Moses (NRL) summarized HICO capabilities for retrieving estimates of bathymetry, bottom type, surface velocity and chlorophyll; and Curtiss Davis (OSU) presented HICO applications for assessing rivers, river plumes, lakes and estuaries.
  • In terms of image processing techniques, Marcos Montes (NRL) summarized the requirements and techniques for improved geolocation, ZhongPing Lee (UMass Boston) presented a methodology for atmospheric correction using cloud shadows, and Curtiss Davis (OSU) discussed various aspects of calibration and atmospheric correction.
  • James Goodman (HySpeed Computing) presented an overview of the functionality and capabilities of the HICO Online Processing Tool, a prototype web-enabled, scalable, geospatial data processing system based on the ENVI Services Engine. The tool is scheduled for release later this year, at which time it will be openly available to the science community for testing and evaluation.

Interested in more information? The meeting agenda and copies of presentations are provided on the OSU HICO website.

About HICO (http://hico.coas.oregonstate.edu/): “The Hyperspectral Imager for the Coastal Ocean (HICO™) is an imaging spectrometer based on the PHILLS airborne imaging spectrometers. HICO is the first spaceborne imaging spectrometer designed to sample the coastal ocean. HICO samples selected coastal regions at 90 m with full spectral coverage (380 to 960 nm sampled at 5.7 nm) and a very high signal-to-noise ratio to resolve the complexity of the coastal ocean. HICO demonstrates coastal products including water clarity, bottom types, bathymetry and on-shore vegetation maps. Each year HICO collects approximately 2000 scenes from around the world. The current focus is on providing HICO data for scientific research on coastal zones and other regions around the world. To that end we have developed this website and we will make data available to registered HICO Data Users who wish to work with us as a team to exploit these data.”

Application Tips for ENVI 5.x – Image to map registration using GCPs

This is part of a series on tips for getting the most out of your geospatial applications. Check back regularly or follow HySpeed Computing to see the latest examples and demonstrations.

Objective: Re-utilize ground control points (GCPs) originally obtained from the “Image Registration Workflow” to perform “Image to Map” registration of an image or its associated spatially equivalent images.

Scenario: This tip demonstrates the steps used to align a chlorophyll image derived from a hyperspectral HICO scene of the Turkish Straits with a multispectral Landsat 8 OLI image mosaic of the same area.

In some situations it is advantageous, or necessary, to re-use GCPs to geo-locate more than one spatially equivalent image. For example, there are instances where analysis is first performed on a non-registered source image and the output products must then be geo-located using the same GCPs as the source image.

Shown here is a depiction of the registration output for a HICO scene of the Turkish Straits, achieved following steps outlined in one of our previous tips: Improved geo-location using the Image Registration Workflow. What follows is an example of how to similarly geo-locate a derived chlorophyll image using the same GCPs as used in this registration.

[Image: HICO scene of the Turkish Straits]

The Tip: In this example, the process isn’t as direct as using ENVI’s “Warp from GCPs” tools, or using the “Registration: Image to Map” tool, since these do not produce the desired output. Instead, as outlined below, we perform “Image to Map” registration via the “Registration: Image to Image” tool:

  • This example requires that you first use the “Image Registration Workflow” to generate a desired set of GCP points for geo-locating your selected source image.
  • Once the GCP points have been created, open and display the base image (here a Landsat 8 OLI mosaic) and non-registered warp image (here a HICO-derived chlorophyll image).
  • Start the “Registration: Image to Image” tool, found under “Geometric Correction > Registration” in the Toolbox.
  • In the opening dialog, select the desired band from the base image (in this example we use the OLI green 562.3 nm band), and then click Ok.
  • Note that the registration tool will attempt to automatically generate tie points between selected bands from the base and warp images; however, these points will be discarded and replaced using the existing file. This means that the specific bands selected for the base and warp images aren’t critical at this stage of the process.
  • In the next dialog, select the warp file to be registered, perform any desired spectral subsetting, and click Ok.
  • Now select the band from the warp image to be used for the registration process (here the chlorophyll band).
  • When asked if you would like to “select an optional existing tie points file”, respond No.
  • Next, since you won’t be using the automatically generated tie points, accept the default “Automatic Registration Parameters”, being sure that “Examine tie points before warping” is set to Yes, and then click Ok.
  • The tool now opens windows for “Ground Control Points Selection” and the “Image to Image GCP List” as well as displays the base and warp images using the ENVI Classic 3-window interface.
  • In the “Ground Control Points Selection”, under “Options”, select “Clear All Points”. In the same window, under “File”, select “Restore GCPs from ASCII…”, and choose the appropriate GCP file that was previously generated using the “Image Registration Workflow”.
  • The “Image to Image GCP List” is now populated with your previously derived GCPs, which are also now shown in the two image displays.

[Image: Turkish Straits GCPs]

  • In the “Ground Control Points Selection”, under “Options”, select “Warp File (as Image to Map)…”, choose the desired warp file to be registered (the chlorophyll image), perform any desired spectral subsetting, and click Ok.
  • The final window that appears is the “Registration Parameters” dialog. Here you will set the registration parameters equivalent to those of the previously registered source image. Note that the default parameters are likely not the same, and will need to be adjusted using the metadata from the original output image derived from the “Image Registration Workflow”. The metadata can be accessed directly from the image header file, or through the ENVI interface.
  • Enter the appropriate parameters for the “Output Project and Map Extent”, which includes the projection, coordinates of the upper left corner, output pixel size, and output image size.
  • Now enter the same “Warp Parameters” as used in the original registration process, select an output filename, and then click Ok.

[Image: Turkish Straits registration parameters]

  • Once the registration process has completed, the output image has the same geo-location as the original geo-located source image.

[Image: HICO Turkish Straits chlorophyll]
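As general background on what the warp itself is doing: a first-order (affine) polynomial warp of the kind configured here maps warp-image coordinates (x, y) to base-map coordinates as

    x' = a0 + a1*x + a2*y
    y' = b0 + b1*x + b2*y

with the six coefficients estimated by least squares from the GCP pairs; higher polynomial degrees add further terms and require correspondingly more GCPs. Consult the ENVI documentation for the exact formulation of each warp method.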

Assessing Spectral Similarity – Visualizing hierarchical clustering using a dendrogram

When conducting remote sensing analysis, it can often be very instructive to evaluate the spectral similarity – or dissimilarity – of different image features.

Below we demonstrate the use of a dendrogram to quantitatively analyze and visually assess the similarity and hierarchical clustering of reflectance spectra.

Understanding spectral relationships can impart valuable information on: the potential spectral variability within a given scene, the capacity to differentiate certain unique features in an image, the need for clustering spectrally similar features, or the number of spectral endmembers that could be used to describe a particular area. Spectral analysis can therefore be an important first step in understanding the capabilities and limitations of different analysis methods and application objectives.

Spectra are typically obtained from field or laboratory measurements, from an existing spectral library, or derived from the image itself. Similarity between individual spectra, or between clusters of similar spectra, can be mathematically analyzed using different distance metrics, such as root-mean-square error or spectral angle. The magnitude of similarity amongst these distances can then be used as an indicator of the ability to differentiate and/or identify spectral features using different image processing techniques.
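To make this concrete, below is a minimal IDL sketch of how such a clustering could be computed from a set of spectra, using the built-in CLUSTER_TREE and DENDROGRAM routines. The array names are illustrative, and this is not the exact code behind the figures that follow.

    ; spectra : [n_bands, n_spectra] array of reflectance values
    ; Spectral angle between spectra a and b:
    ;   theta = ACOS( TOTAL(a*b) / (SQRT(TOTAL(a^2)) * SQRT(TOTAL(b^2))) )
    dims = SIZE(spectra, /DIMENSIONS)
    n = dims[1]
    pairdist = FLTARR(n*(n-1)/2)   ; condensed pairwise-distance vector
    k = 0L
    FOR i = 0, n-2 DO BEGIN
      FOR j = i+1, n-1 DO BEGIN
        a = spectra[*, i]
        b = spectra[*, j]
        pairdist[k] = ACOS(TOTAL(a*b) / (SQRT(TOTAL(a^2)) * SQRT(TOTAL(b^2))))
        k += 1
      ENDFOR
    ENDFOR
    clusters = CLUSTER_TREE(pairdist, linkdist, LINKAGE=2)   ; weighted-average linkage
    DENDROGRAM, clusters, linkdist, outverts, outconn        ; vertices for plotting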

When analyzing a relatively small set of spectra, evaluating results from a similarity assessment can be easily achieved by simply examining the output directly. For example, if we consider a set of coral reef spectra representing coral, sponge, sand and submerged aquatic vegetation (SAV), analysis using spectral angle reveals measurable differences between all four spectra. This is also apparent by plotting the spectra themselves, and seeing that each feature exhibits unique characteristics.

[Image: Spectral analysis of coral reef components]

However, when the number of spectra is increased, and the spectral relationships become more complex, visual assessment of both the spectral signatures and similarity output becomes more difficult to interpret. For example, expanding our coral reef analysis, we now investigate spectra from 10 individual sponge species, and find the interpretation of results to be less obvious. There are clearly species exhibiting close similarities, as well as differences, but it is not immediately apparent which species can be easily differentiated and which species need to be clustered.

[Image: Spectral angle values for sponge species]

[Image: Reflectance spectra of sponge species]

A useful method that can assist with this analysis is to use the output from the distance calculations to build a dendrogram. This provides a visual representation of spectral relationships, as well as quantitative information relevant to image analysis. For example, as shown below, of the 10 sponge species in our analysis, 6 can be differentiated at 0.1 spectral angle; however, there are two sets of species, (i) Chondrilla spp. and C. caribensis and (ii) A. lacunosa and I. felix, that are closely similar at 0.1 spectral angle and would need to be clustered at this level of analysis.

[Image: Dendrogram of sponge species]

To further illustrate the utility of using a “spectral dendrogram”, we expand our coral reef analysis to include spectra from 24 coral species, 3 SAV species and sand, in addition to the 10 sponge species. As evident from the dendrogram, spectral relationships in this analysis are significantly more complex. For example, at 0.1 spectral angle there are multiple situations where different spectral types (e.g., coral and sponge, or SAV and sponge) are closely similar and can’t be spectrally differentiated. This has important implications, and potential limitations, for subsequent spectral analysis and image classification results.

[Image: Dendrogram of coral reef species]

These are but a few examples illustrating the types of visualization and levels of information that can be derived from these plots. However, potential applications are as varied as your spectra, so we invite you to explore the use of dendrograms in your own spectral analysis.

Related post: Coral Reef Reflectance Characteristics – Impacts of increasing water depth on spectral similarity

Acknowledgement: Spectral data used in the above examples were collected using a GER-1500 spectrometer by M. Lucas at the University of Puerto Rico at Mayaguez for a NASA EPSCoR sponsored research project on the biodiversity of coastal and terrestrial ecosystems.

EnMAP Coral Reef Simulation – The first of its kind

The GFZ German Research Center for Geosciences and HySpeed Computing announce the first ever simulation of a coral reef scene using the EnMAP End-to-End Simulation tool. This synthetic, yet realistic, scene of French Frigate Shoals will be used to help test marine and coral reef related analysis capabilities of the forthcoming EnMAP hyperspectral satellite mission.


EeteS simulation of EnMAP scene for French Frigate Shoals, Hawaii

EnMAP (Environmental Mapping and Analysis Program) is a German hyperspectral satellite mission scheduled for launch in 2017. As part of the satellite’s development, the EnMAP End-to-End Simulation tool (EeteS) was created at GFZ to provide accurate simulation of the entire image generation, calibration and processing chain. EeteS is also being used to assist with overall system design, the optimization of fundamental instrument parameters, and the development and evaluation of data pre-processing and scientific-exploitation algorithms.

EeteS has previously been utilized to simulate various terrestrial scenes, such as agriculture and forest areas, but until now had not been used to generate a coral reef scene. Considering the economic and ecologic importance of coral reef ecosystems, the ability to refine existing analysis tools and develop new algorithms prior to launch is a critical step towards efficiently implementing new reef remote sensing capabilities once EnMAP is operational.

The input imagery for the French Frigate Shoals simulation was derived from a mosaic of four AVIRIS flightlines, acquired in April 2000 as part of an airborne hyperspectral survey of the Northwestern Hawaiian Islands by NASA’s Jet Propulsion Laboratory. Selection of this study area was based in part on the availability of this data, and in part due to the size of the atoll, which more than adequately fills the full 30 km width of an EnMAP swath. In addition to flightline mosaicking, image pre-processing included atmospheric and geographic corrections, generating a land/cloud mask, and minimizing the impact of sunglint. The final AVIRIS mosaic was provided as a single integrated scene of at-surface reflectance.

For the EeteS simulation, the first step was to transform this AVIRIS mosaic into raw EnMAP data using a series of forward processing steps that model atmospheric conditions and account for spatial, spectral, and radiometric differences between the two sensors. The software then simulates the full EnMAP image processing chain, including onboard calibration, atmospheric correction and orthorectification modules to ultimately produce geocoded at-surface reflectance.

The resulting scene visually appears to be an exact replica of the original AVIRIS mosaic, but more importantly now emulates the spatial and spectral characteristics of the new EnMAP sensor. The next step is for researchers to explore how different hyperspectral algorithms can be used to derive valuable environmental information from this data.

For more information on EnMAP and EeteS: http://www.enmap.org/

EeteS image processing and above description performed with contributions from Drs. Karl Segl and Christian Rogass (GFZ German Research Center for Geosciences).