A Year of Hyperspectral Image Processing in the Cloud – HICO IPS reaches a milestone

It has been a little more than one year since we first launched the HICO Image Processing System (HICO IPS), and its performance continues to be exceptional. As a prototype, HICO IPS has exceeded all expectations, running flawlessly since its launch in May 2015.


HICO IPS is a web application for on-demand remote sensing image analysis that allows users to interactively select images and algorithms, dynamically launch analysis routines, and then see results displayed directly in an online map interface. More details are as follows:

  • System developed to demonstrate capabilities for remote sensing image analysis in the cloud
  • Software stack utilizes a combination of commercial and open-source software
  • Core image processing and data management performed using ENVI Services Engine
  • Operational system hosted on Rackspace cloud server
  • Utilizes imagery from the Hyperspectral Imager for the Coastal Ocean (HICO), which was deployed on the International Space Station (ISS) from 2009-2014
  • Example algorithms are included for assessing coastal water quality and other nearshore environmental conditions
  • Application developed in collaboration between HySpeed Computing and Exelis Visual Information Solutions (now Harris Geospatial Solutions)
  • Project supported by the Center for the Advancement of Science in Space (CASIS)

And here’s a short overview of HICO IPS accomplishments and performance in the past year, including some infographics to help illustrate how the system has been utilized:

  • The application has received over 5000 visitors
  • Users represent over 100 different countries
  • System has processed a total of 1000 images
  • Equivalent area processed is 4.5 million square kilometers
  • The most popular scene selected for analysis was the Yellow River
  • The most popular algorithm was Chlorophyll followed closely by Land Mask
  • Application has run continuously without interruption since launch in May 2015

HICO IPS infographics

Try it out today for yourself: http://hyspeedgeo.com/HICO/

What’s New in ENVI 5.3

As the geospatial industry continues to evolve, so too does the software. Here’s a look at what’s new in ENVI 5.3, the latest release of the popular image analysis software from Exelis VIS.


  • New data formats and sensors. ENVI 5.3 now provides support to read and display imagery from Deimos-2, DubaiSat-2, Pleiades-HR and Spot mosaic tiles, GeoPackage vectors, Google-formatted SkySat-2, and Sentinel-2.
  • Spectral indices. In addition to the numerous indices already included in ENVI (more than 60), new options include the Normalized Difference Mud Index (NDMI) and Modified Normalized Difference Water Index (MNDWI).
  • Atmospheric correction. The Quick Atmospheric Correction (QUAC) algorithm has been updated with the latest enhancements from Spectral Sciences, Inc. to help improve algorithm accuracy.
  • Digital elevation model. Users can now download the GMTED2010 DEM (7.5 arc seconds resolution) from the Exelis VIS website for use in improving the accuracy of Image Registration using RPC Orthorectification and Auto Tie Point Generation.
  • Point clouds. If you subscribe to the ENVI Photogrammetry Module (separate license from ENVI), then the Generate Point Clouds by Dense Image Matching tool is now available for generating 3D point clouds from GeoEye-1, IKONOS, Pleiades-1A, QuickBird, Spot-6, WorldView-1,-2 and -3, and the Digital Point Positioning Data Base (DPPDB).
  • LiDAR. The ENVI LiDAR module has been merged with ENVI and can now be launched directly from within the ENVI interface.
  • Geospatial PDF. Your views, including all currently displayed imagery, layers and annotations in those views, can now be exported directly to geospatial PDF files.
  • Spatial subset. When selecting files to add to the workspace, the File Selection tool now includes options to subset files by raster, vector, region of interest or map coordinates.
  • Regrid raster. Users can now regrid raster files to custom defined grids (geographic projection, pixel size, spatial extent and/or number of rows and columns).
  • Programming. The latest ENVI release also includes dozens of new tasks, too numerous to list here, that can be utilized for developing custom user applications in ENVI and ENVI Services Engine.
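To illustrate how a normalized-difference index like the new MNDWI works, here is a minimal numpy sketch (this assumes numpy is available and uses hypothetical reflectance values; it is not ENVI's implementation). MNDWI is commonly defined as (Green - SWIR) / (Green + SWIR):

```python
import numpy as np

def mndwi(green, swir):
    """Modified Normalized Difference Water Index: (Green - SWIR) / (Green + SWIR).
    Positive values generally indicate open water."""
    green = np.asarray(green, dtype=float)
    swir = np.asarray(swir, dtype=float)
    denom = green + swir
    safe = np.where(denom == 0, 1.0, denom)  # avoid divide-by-zero
    return np.where(denom == 0, 0.0, (green - swir) / safe)

# Tiny synthetic example: water pixels reflect more green than SWIR.
green = np.array([[0.10, 0.30], [0.05, 0.20]])
swir  = np.array([[0.30, 0.10], [0.25, 0.20]])
print(mndwi(green, swir))
```

The same pattern applies to any of the normalized-difference indices in the tool; only the input bands change.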

To learn more about the above features and improvements, as well as many more, read the latest release notes or check out the ENVI help documentation.

ENVI 5.3

Application Tips for ENVI 5 – Exporting a Geospatial PDF

This is part of a series on tips for getting the most out of your geospatial applications. Check back regularly or follow HySpeed Computing to see the latest examples and demonstrations.

Objective: Utilize ENVI’s print and export options to generate a Geospatial PDF.

Geospatial PDF

Scenario: This tip utilizes a Landsat-8 scene of California’s Central Valley to demonstrate the steps for creating a Geospatial PDF using two different options: (1) using Print Layout; and (2) using Chip View to Geospatial PDF.

Geospatial PDFs allow you to easily share your geospatial output in standard PDF format while still enabling users to measure distances and identify locations in geographic coordinates, without the need for any specialized GIS or remote sensing software.

Option 1 – Print Layout

  • The Print Layout option requires ENVI 5.0 or later and works only on Windows platforms. It also requires that you launch ENVI in 32-bit mode and have a licensed ArcGIS application on the same system.
  • If you’re looking for the ENVI 32-bit mode (as opposed to the now standard 64-bit mode), it is typically found in either the ‘32-bit’ or ‘ENVI for ArcGIS’ subdirectory of the ENVI directory located under Start > All Programs.
  • Now, using your data of choice, prepare the active View in ENVI as you would like it to appear in the Geospatial PDF. In our example, we simply use a color infrared image of our example Landsat-8 scene. However, if desired, your output can include multiple layers and even annotations.
  • Once you are satisfied with the View, go to File > Print…, and this will launch the Print Layout viewer where you can make further adjustments to your output before exporting it to Geospatial PDF.
  • Note: If the File > Print… option doesn’t produce the desired output in Print Layout (which doesn’t directly support all file types, georeferencing formats or annotation styles), then you can also use File > Chip View To > Print… as another option. The Chip View To option creates a screen capture of whatever is in the active View, so it can accommodate anything you can display in a View, but with the tradeoff that there is slightly less functionality in the Print Layout format options.
  • In our example, for instance, the File > Print… option didn’t support the Landsat-8 scene when opened using the ‘MTL.txt’ file. Rather than using the Chip View To option, however, as a workaround we resaved the scene in ENVI format to retain the full functionality of Print Layout.
  • Once in the Print Layout viewer, you can apply different ArcMap templates, adjust the zoom level and location of the image, and edit features in the template. Here we made a few edits to the standard LetterPortrait.mxt template as the basis for our output.

ENVI Print Layout

  • To output your results to a Geospatial PDF, select the Export button at the top of the Print Layout viewer, enter a filename, and then select Save.
  • Note that Print Layout can also be used to Print your output using the Print button.
  • You have now created a Geospatial PDF of your work (see our example: CA_Central_Valley_1.pdf). Also, see below for tips on viewing and interacting with this file in Adobe Reader and Adobe Acrobat.

Option 2 – Chip View to Geospatial PDF

  • The Chip View to Geospatial PDF option requires ENVI 5.2 or later, but does not require ArcGIS.
  • This option directly prints whatever is in the active View to a Geospatial PDF, so it has fewer options than the Print Layout option, but can still be very useful for those without an ArcGIS license.
  • As above, prepare the active View in ENVI as you would like it to appear in the Geospatial PDF, including multiple layers and annotations as desired. Here we again simply use a color infrared image of our example Landsat-8 scene, but this time include text annotations and a north arrow added directly to the View.
  • Once you are satisfied with the View, go to File > Chip View To > Geospatial PDF…, enter a filename, and then select OK.
  • Note that the Chip View To option can also be used to export your work to a File, PowerPoint or Google Earth.
  • Congratulations again. You have now created another Geospatial PDF of your work (see our example: CA_Central_Valley_2.pdf).

CA Central Valley 2

Viewing Output in Adobe

  • As mentioned, Geospatial PDFs allow you to measure distances and identify locations in geographic coordinates using a standard PDF format. Geospatial PDFs can be viewed in either Adobe Acrobat or Reader (v9 or later).
  • In Adobe Reader, the geospatial tools can be found under Edit > Analysis in the main menu bar. In Adobe Acrobat, the geospatial tools can be enabled by selecting View > Tools > Analyze in the main menu bar, and then accessed in the Tools pane under Analyze.
  • To measure distance, area and perimeter, select the Measuring Tool.
  • To see the cursor location in geographic coordinates, select the Geospatial Location Tool.
  • And to find a specific location, select the Geospatial Location Tool, right click on the image, select the Find a Location tool, and then enter the desired coordinates.

So now that you’re familiar with the basics of creating Geospatial PDFs, be sure to consider using them in your next project. They’re definitely a powerful way to share both images and derived output products with your colleagues and customers.

Application Tips for ENVI 5 – Image classification of drone video frames

This is part of a series on tips for getting the most out of your geospatial applications. Check back regularly or follow HySpeed Computing to see the latest examples and demonstrations.

Objective: Utilize ENVI’s new video support (introduced in ENVI 5.2) to extract an individual frame from HD video and then perform supervised classification on the resulting image file.

ENVI drone video analysis

Scenario: This tip demonstrates the steps used for implementing the ENVI Classification Workflow using an HD video frame extracted from a drone overflight of a banana plantation in Costa Rica (video courtesy Elevated Horizons). In this example image classification is utilized to delineate the total number of observable banana bunches in the video frame. In banana cultivation, bunches are often covered using blue plastic sleeves for protection from insects and disease and for increasing yield and quality. Here the blue sleeves provide a unique spectral signature (color) for use in image classification, and hence a foundation for estimating total crop yield when analysis is extrapolated or applied to the entire plantation.

The Tip: Below are the steps used to extract the video frame and implement the Classification Workflow in ENVI 5.2:

  • There are three options for opening and viewing video in ENVI: (i) drag-and-drop a video into the ENVI display; (ii) from the main toolbar select File > Open to select a video; and (iii) from the main toolbar select Display > Full Motion Video, and then use the Open button at the top of the video player to select a video.

ENVI video player

  • Once opened, the video player can be used to playback video using standard options for play, pause, and stepping forward and backward. There are also options to add and save bookmarks, adjust the brightness and frame rate, and export individual frames, or even the entire video, for analysis in ENVI.
  • Here we have selected to export a single frame using the “Export Frame to ENVI” button located at the top of the video player.
  • The selected video frame is then automatically exported to the Layer Manager and added to the currently active View. Note that the new file is only temporary, so be sure to save this file to a desired location and filename if you wish to retain the file for future analysis.
  • We next launch the Classification Workflow by selecting Toolbox > Classification > Classification Workflow.
  • For guidance on implementing the Classification Workflow, please visit our earlier post – Implementing the Classification Workflow – to see a detailed example using Landsat data of Lake Tahoe, or refer to the ENVI documentation for more information.
  • In the current Classification example, we selected to Use Training Data (supervised classification), delineate four different classes (banana bunch, banana plant, bare ground, understory vegetation), run the Mahalanobis Distance supervised classification algorithm, and not implement any post-classification smoothing or aggregation.

ENVI drone video classification workflow

  • Classification output includes the classified raster image (ENVI format), corresponding vector file (shapefile), and optionally the classification statistics (text file). Shown here is the classification vector output layered on top of the classification image, where blue represents the observable banana bunches in this video frame.

ENVI drone video classification output
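The Mahalanobis Distance step used above can be sketched conceptually in a few lines of numpy: class means and a pooled covariance are estimated from training samples, and each pixel is assigned to the class whose mean is nearest in Mahalanobis distance. This is an illustrative sketch with hypothetical two-band training values, not ENVI's implementation:

```python
import numpy as np

def mahalanobis_classify(pixels, training):
    """Assign each pixel (N x bands) to the class with the smallest Mahalanobis
    distance to that class's training mean, using a pooled covariance matrix."""
    classes = sorted(training)
    samples = {c: np.asarray(training[c], float) for c in classes}
    means = {c: samples[c].mean(axis=0) for c in classes}
    # Pool the class-centered training samples into one covariance estimate.
    centered = np.vstack([samples[c] - means[c] for c in classes])
    cov_inv = np.linalg.inv(np.atleast_2d(np.cov(centered, rowvar=False)))
    pixels = np.asarray(pixels, float)
    dists = []
    for c in classes:
        d = pixels - means[c]
        # Squared Mahalanobis distance for every pixel at once.
        dists.append(np.einsum('ij,jk,ik->i', d, cov_inv, d))
    return [classes[i] for i in np.argmin(np.vstack(dists), axis=0)]

# Hypothetical two-band training data for two of the classes.
train = {
    "banana_bunch": np.array([[0.10, 0.80], [0.20, 0.90], [0.15, 0.80]]),
    "vegetation":   np.array([[0.70, 0.20], [0.80, 0.30], [0.80, 0.20]]),
}
print(mahalanobis_classify(np.array([[0.18, 0.82], [0.72, 0.28]]), train))
# → ['banana_bunch', 'vegetation']
```

In the real workflow the training statistics come from the polygons drawn in the View, and the classifier runs over every pixel of the exported frame.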

With that analysis accomplished, there are a number of different options within ENVI for extending this analysis to other frames, from as simple as manually repeating the same analysis across multiple individual frames to as sophisticated as creating a custom IDL application to utilize ENVI routines for automatically classifying all frames in the entire video. However, we leave this for a future post.

In the meantime, we can see that the ability to export frames to ENVI for further analysis opens up a wealth of image analysis options. We’re excited to explore the possibilities.

Application Tips for ENVI – Implementing the Classification Workflow

This is part of a series on tips for getting the most out of your geospatial applications. Check back regularly or follow HySpeed Computing to see the latest examples and demonstrations.

Objective: Utilize ENVI’s automated step-by-step Classification Workflow to perform a supervised classification.

Scenario: This tip demonstrates the steps used for supervised classification of an index stack created from a Landsat 8 scene of Lake Tahoe, CA USA. The index stack combines three different spectral indices into a single multi-layer image. The indices include the Normalized Difference Vegetation Index (NDVI), Normalized Difference Water Index (NDWI), and Normalized Difference Snow Index (NDSI).

Here we are using the index stack as a form of data reduction and normalization; however, in most applications users will utilize most or all of the individual spectral bands to maximize the spectral information used in the classification analysis.
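Building such an index stack takes only a few lines of numpy (assumed available here); the band arrays below are hypothetical stand-ins for Landsat 8 reflectance bands, not real data:

```python
import numpy as np

def normalized_difference(a, b):
    """Generic normalized difference (a - b) / (a + b), safe where a + b == 0."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    denom = a + b
    safe = np.where(denom == 0, 1.0, denom)
    return np.where(denom == 0, 0.0, (a - b) / safe)

def index_stack(red, nir, green, swir1):
    """Stack NDVI, NDWI, and NDSI into one 3-layer image (layers, rows, cols)."""
    ndvi = normalized_difference(nir, red)     # vegetation
    ndwi = normalized_difference(green, nir)   # water
    ndsi = normalized_difference(green, swir1) # snow/ice
    return np.stack([ndvi, ndwi, ndsi])

# Synthetic 2x2 reflectance bands (hypothetical values).
red   = np.array([[0.1, 0.2], [0.3, 0.1]])
nir   = np.array([[0.5, 0.2], [0.1, 0.4]])
green = np.array([[0.2, 0.3], [0.4, 0.2]])
swir1 = np.array([[0.1, 0.3], [0.1, 0.2]])
print(index_stack(red, nir, green, swir1).shape)  # (3, 2, 2)
```

The resulting three-layer array plays the same role as the index stack image used as input to the Classification Workflow below.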

Lake Tahoe Landsat image classification

Lake Tahoe, CA: Landsat 8 image (upper left); index stack (lower left); supervised classification output (right).


The Tip: Below are the steps used to implement the Classification Workflow in ENVI:

  • After opening the selected image in ENVI, launch the workflow from the toolbox by selecting: Toolbox > Classification > Classification Workflow
  • The first step of the workflow allows you to select the input image, perform any spatial and spectral subsetting, and also select a mask, if applicable.

ENVI Classification Workflow file selection

  • The next step provides the option to specify whether the classification is to be performed using No Training Data (unsupervised classification) or to Use Training Data (supervised classification). In our example we have selected to Use Training Data.
  • For supervised classification, the user is next given a chance to interactively define or upload the training data. Had we selected unsupervised classification, then our next step would have been to select parameters for implementing the ISODATA classification algorithm.
  • To define the training data, users have the option of uploading a previously defined training dataset, or alternatively to use the ENVI annotation tools to interactively select polygons, ellipses, rectangles or points to define training areas for each desired class.
  • There is also an option at this stage in the workflow to specify the supervised classification scheme (Maximum Likelihood, Minimum Distance, Mahalanobis Distance, or Spectral Angle Mapper) and any of its associated classification parameters. In our example we use the Maximum Likelihood classification scheme with its default parameters.

ENVI Classification Workflow training data

  • Note that you can select the Preview button at the bottom left of the workflow window to see the classification results dynamically updated as you proceed through the training data definition process. However, there are limits on how big an area can be previewed. If the area is too large then the preview will appear black by default. If this occurs, then simply increase the zoom and/or reduce the size of the preview window.
  • It is also important to remember to save your training data once complete so that you can later replicate the same classification process or utilize the data in another image.
  • In our example we have defined five classes (water, snow/ice, vegetation, barren, and cloud), each represented using five different training polygons.
  • Once satisfied with the training data, selecting Next at the bottom of the window will initiate the classification process.
  • Once classification is complete, if you’re not happy with the results or want to change the training data or input parameters, then there’s no cause for concern. You can easily move forward and backward throughout the classification process using the Back and Next buttons at the bottom of the workflow window, allowing you to check your results and/or go back and change settings.
  • Once the classification is complete the output will be displayed in ENVI, and the user is then given additional options to refine the output using smoothing (removes speckling) and aggregation (removes small regions). We have selected to do both for our example.
  • The final step after smoothing and aggregation is to save the results, which includes options for saving the classification image, classification vectors, and classification statistics.

ENVI Classification Workflow output
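The smoothing step above is conceptually a majority (kernel) filter: each pixel takes the most common class in its neighborhood, which removes isolated speckle. Here is a minimal numpy sketch of that idea (not ENVI's actual implementation, and edge pixels are simply left unchanged):

```python
import numpy as np

def majority_smooth(classes, size=3):
    """Majority-filter smoothing of a class image: each interior pixel takes
    the most common class in its size x size neighborhood."""
    out = classes.copy()
    r = size // 2
    rows, cols = classes.shape
    for i in range(r, rows - r):
        for j in range(r, cols - r):
            window = classes[i - r:i + r + 1, j - r:j + r + 1].ravel()
            vals, counts = np.unique(window, return_counts=True)
            out[i, j] = vals[np.argmax(counts)]
    return out

# A speckled 2-class image: the isolated class-1 pixel is smoothed away.
img = np.zeros((5, 5), dtype=int)
img[2, 2] = 1
print(majority_smooth(img))
```

Aggregation is the complementary step, removing connected regions smaller than a chosen pixel count rather than individual speckles.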

We have demonstrated just one of many different classification options included in the Classification Workflow. To learn more about the various different algorithms and settings for supervised and unsupervised classification techniques, just read through the ENVI help documentation and/or follow the classification tutorial included with ENVI.

Geospatial Solutions in the Cloud

Source: Exelis VIS whitepaper – 12/2/2014 (reprinted with permission)

What are Geospatial Analytics?

Geospatial analytics allow people to ask questions of data that exist within a spatial context. Usually this means extracting information from remotely sensed data such as multispectral imagery or LiDAR that is focused on observing the Earth and the things happening on it, either in a static sense or over a period of time. Familiar examples of this type of geospatial analysis include Land Classification, Change Detection, Soil and Vegetative indexes, and depending on the bands of your data, Target Detection and Material Identification. However, geospatial analytics can also mean analyzing data that is not optical in nature.

So what other types of problems can geospatial analytics solve? Geospatial analytics comprise more than just images laid over a representation of the Earth. Geospatial analytics can ask questions of ANY type of geospatial data, and provide insight into static and changing conditions within a multi-dimensional space. Things like aircraft vectors in space and time, wind speeds, or ocean currents can be introduced into geospatial algorithms to provide more context to a problem and to enable new correlations to be made between variables.

Many times, advanced analytics like these can benefit from the power of cloud, or server-based, computing. Benefits from the implementation of cloud-based geospatial analytics include the ability to serve on-demand analytic requests from connected devices, run complex algorithms on large datasets, or perform continuous analysis on a series of changing variables. Cloud analytics also improve the ability to conduct multi-modal analysis, or processes that take into account many different types of geospatial information.

Here we can see vectors of a UAV along with the ground footprint of the sensor overlaid in Google Earth™, as well as a custom interface built on ENVI that allows users to visualize real-time weather data in four dimensions (Figure 1).


Figure 1 – Multi-Modal Geospatial Analysis – data courtesy NOAA

These are just a few examples of non-traditional geospatial analytics that cloud-based architecture is very good at solving.

Cloud-Based Geospatial Analysis Models 

So let’s take a quick look at how cloud-based analytics work. There are two different operational models for running analytics: the on-demand model and the batch process model. In an on-demand model (Figure 2), a user generally requests a specific piece of information from a web-enabled device such as a computer, a tablet, or a smart phone. Here the user is making the request to a cloud-based resource.


Figure 2 – On-Demand Analysis Model

Next, the server identifies the requested data and runs the selected analysis on it. This leverages scalable server architecture that can vastly decrease the amount of time it takes to run the analysis and eliminate the need to host the data or the software on the web-enabled device. Finally, the requested information is sent back to the user, usually at a fraction of the bandwidth cost required to move large amounts of data or full resolution derived products through the internet.
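The on-demand model can be sketched as a simple server-side task dispatcher: a named analysis is looked up, run against the requested data, and only the small result payload is returned to the client. The task name, payload shape, and toy analysis below are hypothetical illustrations, not an actual ENVI Services Engine API:

```python
def ndvi_stats(payload):
    """Toy 'analysis': summarize a list of NDVI values server-side."""
    values = payload["ndvi"]
    return {"min": min(values), "max": max(values),
            "mean": sum(values) / len(values)}

# Registry of named analysis tasks the server can run on demand.
TASKS = {"NDVIStats": ndvi_stats}

def handle_request(task_name, payload):
    """Dispatch an on-demand analysis request to a registered task."""
    if task_name not in TASKS:
        return {"status": "error", "message": "unknown task"}
    return {"status": "ok", "result": TASKS[task_name](payload)}

print(handle_request("NDVIStats", {"ndvi": [0.2, 0.4, 0.6]}))
```

The client never touches the full-resolution data or the analysis software; it only sends a request and receives the summarized answer.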

In the automated batch process analysis model (Figure 3), the cloud is designed to conduct prescribed analysis to data as it becomes available to the system, reducing the amount of manual interaction and time that it takes to prepare or analyze data. This system can take in huge volumes of data from various sources such as aerial or satellite images, vector data, full motion video, radar, or other data types, and then run a set of pre-determined analyses on that data depending on the data type and the requested information.


Figure 3 – Automated Batch Process Model

Once the data has been pre-processed, it is ready for consumption, and the information is pushed out either to another cloud-based asset, such as an individual user who needs to request information or monitor assets in real time, or simply placed into a database in a ‘ready-state’ to be accessed and analyzed later.

The ability for this type of system to leverage the computing power of scalable server stacks enables the processing of huge amounts of data and greatly reduces the time and resources needed to get raw data into a consumable state.
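The batch model described above amounts to routing each incoming file to a pre-determined pipeline based on its data type and staging the results in a ready-state store. The file types and placeholder pipelines below are illustrative only:

```python
# Toy placeholder pipelines, not actual ENVI Services Engine tasks.
def process_imagery(name):
    return f"{name}: orthorectified + indexed"

def process_vector(name):
    return f"{name}: reprojected"

# Route files to a pipeline by extension (hypothetical mapping).
PIPELINES = {".tif": process_imagery, ".shp": process_vector}

def batch_ingest(filenames):
    """Run the matching pipeline for each file; skip types with no pipeline."""
    ready = {}
    for name in filenames:
        ext = name[name.rfind("."):].lower()
        if ext in PIPELINES:
            ready[name] = PIPELINES[ext](name)
    return ready

print(batch_ingest(["scene1.tif", "roads.shp", "notes.txt"]))
```

In a production system the ingest loop would be triggered by new data arriving (e.g., a watched directory or message queue) rather than a fixed list.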

Solutions in the Cloud

HySpeed Computing

Now let’s take a look at a couple of use cases that employ ENVI capabilities in the cloud. The first is a web-based interface that allows users to perform on-demand geospatial analytics on hyperspectral data supplied by HICO™, the Hyperspectral Imager for the Coastal Ocean (Figure 4). HICO is a hyperspectral imaging spectrometer that is attached to the International Space Station (ISS) and is designed specifically for sampling the coastal ocean in an effort to further our understanding of the world’s coastal regions.


Figure 4 – The HICO Sensor – image courtesy of NASA

Developed by HySpeed Computing, the prototype HICO Image Processing System (Figure 5) allows users to conduct on-demand image analysis of HICO’s imagery from a web-based browser through the use of ENVI cloud capabilities.


Figure 5 – The HICO Image Processing System – data courtesy of NASA

The interface exposes several custom ENVI tasks designed specifically to take advantage of the unique spectral resolution of the HICO sensor to extract information characterizing the coastal environment. This type of interface is a good example of the on-demand scenario presented earlier, as it allows users to conduct on-demand analysis in the cloud without the need to have direct access to the data or the computing power to run the hyperspectral algorithms.

The goal of this system is to provide ubiquitous access to the robust HICO catalog of hyperspectral data as well as the ENVI algorithms needed to analyze them. This will allow researchers and other analysts to conduct valuable coastal research using web-based interfaces while capitalizing on the efforts of the Office of Naval Research, NASA, and Oregon State University that went into the development, deployment, and operation of HICO.


Another use case involves a real-time analysis scenario that comes from a company called Milcord and their dPlan Next Generation Mission Manager (Figure 6). The goal of dPlan is to “aid mission managers by employing an intelligent, real-time decision engine for multi-vehicle operations and re-planning tasks” [1]. What this means is that dPlan helps folks make UAV flight plans based upon a number of different dynamic factors, and delivers the best plan for multiple assets both before and during the actual operation.


Figure 6 – The dPlan Next Generation Mission Manager

Factors that are used to help score the flight plans include fuel availability, schedule metrics based upon priorities for each target, as well as what are known as National Image Interpretability Rating Scales, or NIIRS (Figure 7). NIIRS are used “to define and measure the quality of images and performance of imaging systems. Through a process referred to as ‘rating’ an image, the NIIRS is used by imagery analysts to assign a number which indicates the interpretability of a given image.” [2]


Figure 7 – Extent of NIIRS 1-9 Grids Centered in an Area Near Calgary

These factors are combined into a cost function, and dPlan uses the cost function to find the optimal flight plan for multiple assets over a multitude of targets. dPlan also performs a cost-benefit analysis to indicate if the asset cannot reach all targets, and which target might be the lowest cost to remove from the plan, or whether another asset can visit the target instead.

dPlan employs a custom ESE application to generate huge grids of Line of Sight values and NIIRS values associated with a given asset and target (Figure 8). dPlan uses this grid of points to generate route geometry, for example, how close and at what angle the asset needs to approach the target.


Figure 8 – dPlan NIIRS Workflow
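Conceptually, combining factors like fuel, schedule, and NIIRS-based image quality into a cost function and minimizing it can be sketched as below. The factor names, weights, and normalization are purely hypothetical; dPlan's actual cost model is not described in the source:

```python
# Illustrative weights over three normalized penalty terms (each in [0, 1]).
WEIGHTS = {"fuel": 0.4, "schedule": 0.3, "niirs": 0.3}

def plan_cost(fuel_used, schedule_delay, niirs_shortfall):
    """Lower is better: weighted sum of normalized penalty terms."""
    return (WEIGHTS["fuel"] * fuel_used
            + WEIGHTS["schedule"] * schedule_delay
            + WEIGHTS["niirs"] * niirs_shortfall)

def best_plan(plans):
    """Pick the minimum-cost plan from {name: (fuel, delay, niirs_shortfall)}."""
    return min(plans, key=lambda name: plan_cost(*plans[name]))

# Two hypothetical candidate plans for the same asset and target set.
plans = {"direct": (0.9, 0.1, 0.5), "loiter": (0.5, 0.4, 0.2)}
print(best_plan(plans))  # → loiter
```

The cost-benefit step described above would extend this by also scoring plans with a target removed, or with a target reassigned to another asset.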

The cloud-computing power leveraged by dPlan allows users to re-evaluate flight plans on the fly, taking into account new information as it becomes available in real time. dPlan is a great example of how cloud-based computing combined with powerful analysis algorithms can solve complex problems in real time and reduce the resources needed to make accurate decisions amidst changing environments.

Solutions in the Cloud

So what do we do here at Exelis to enable folks like HySpeed Computing and Milcord to ask these kinds of questions from their data and retrieve reliable answers? The technology they’re using is called the ENVI Services Engine (Figure 9), an enterprise-ready version of the ENVI image analytics stack. We currently have over 60 out-of-the-box analysis tasks built into it, and are creating more with every release.


Figure 9 – The ENVI Services Engine

The real value here is that ENVI Services Engine allows users to develop their own analysis tasks and expose them through the engine. This is what enables users to develop unique solutions to geospatial problems and share them as repeatable processes for others to use. These solutions can be run over and over again on different data and provide consistent, dependable information to the persons requesting the analysis. The cloud-based technology makes it easy to access from web-enabled devices while leveraging the enormous computing power of scalable server instances. This combination of customizable geospatial analysis tasks and virtually limitless computing power begins to address some of the limiting factors of analyzing what is known as big data, or datasets so large and complex that traditional computing practices are not sufficient to identify correlations within disconnected data streams.

Our goal here at Exelis is to enable you to develop custom solutions to industry-specific geospatial problems using interoperable, off-the-shelf technology. For more information on what we can do for you and your organization, please feel free to contact us.


[1] 2014. Milcord. “Geospatial Analytics in the Cloud: Successful Application Scenarios” webinar. https://event.webcasts.com/starthere.jsp?ei=1042556

[2] 2014. The Federation of American Scientists. “National Image Interpretability Rating Scales”. http://fas.org/irp/imint/niirs.htm


A Look at What’s New in ENVI 5.2

Earlier this month Exelis Visual Information Solutions released ENVI 5.2, the latest version of their popular geospatial analysis software.

ENVI 5.2

ENVI 5.2 includes a number of new image processing tools as well as various updates and improvements to current capabilities. We’ve already downloaded our copy and started working with the new features. Here’s a look at what’s included.

A few of the most exciting new additions to ENVI include the Spatiotemporal Analysis tools, Spectral Indices tool, Full Motion Video player, and improved integration with ArcGIS:

  • Spatiotemporal Analysis. Just like the name sounds, this feature provides users the ability to analyze stacks of imagery through space and time. Most notably, tools are now available to build a raster series, where images are ordered sequentially by time, to reproject images from multiple sensors into a common projection and grid size, and to animate and export videos of these raster series.
  • Spectral Indices. Expanding on the capabilities of the previous Vegetation Index Calculator, the new Spectral Indices tool includes 64 different indices, which in addition to analyzing vegetation can also be used to investigate geology, man-made features, burned areas and water. The tool conveniently selects only those indices that can be calculated for a given input image dependent on its spectral characteristics. So when you launch the tool you’ll only see those indices that can be calculated using your imagery.
  • Full Motion Video. ENVI 5.2 now supports video, allowing users to not just play video, but also convert video files to time-enabled raster series and extract individual video frames for analysis using standard ENVI tools. Supported file formats include Skybox SkySat video, Adobe Flash Video and Shockwave Flash, Animated GIF, Apple Quicktime, Audio Video Interleaved, Google WebM Matroska, Matroska Video, Motion JPEG and JPEG2000, MPEG-1 Part 2, MPEG-2 Transport Stream, MPEG-2 Part 2, MPEG-4 Part 12 and MPEG-4 Part 14.
  • Integration with ArcGIS. Originally introduced in ENVI 5.0, additional functionality has been added for ENVI to seamlessly interact with ArcGIS, including the ability to integrate analysis tools and image output layers in a concurrent session of ArcMap. For those working in both software domains, this helps simplify your geospatial workflows and more closely integrate your raster and vector analyses.
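The Spectral Indices tool's behavior of offering only the indices computable from your imagery can be sketched as a simple band-requirement check. The index-to-band mapping below is a small illustrative subset, not ENVI's full table of 64 indices:

```python
# Hypothetical band requirements for a few normalized-difference indices.
INDEX_REQUIREMENTS = {
    "NDVI": {"red", "nir"},
    "NDWI": {"green", "nir"},
    "NDSI": {"green", "swir1"},
    "MNDWI": {"green", "swir1"},
}

def available_indices(sensor_bands):
    """Return the indices whose required bands are all present in the sensor."""
    bands = set(sensor_bands)
    return sorted(name for name, req in INDEX_REQUIREMENTS.items()
                  if req <= bands)

# A hypothetical 4-band sensor with no SWIR band:
print(available_indices({"blue", "green", "red", "nir"}))  # → ['NDVI', 'NDWI']
```

A sensor lacking SWIR, as in the example, would simply never be offered the snow and water indices that depend on it.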

Other noteworthy additions in this ENVI release include:

  • New data types. ENVI 5.2 now provides support to read and display imagery from AlSat-2A, Deimos-1, Gaofen-1, Proba-V S10, Proba-V S1, SkySat-1, WorldView-3, Ziyuan-1-02C and Ziyuan-3A, as well as data formats GRIB-1, GRIB-2, Multi-page TIFF and NetCDF-4.
  • NNDiffuse Pan Sharpening. A new pan sharpening tool based on nearest neighbor diffusion has been added, which is multi-threaded for high-performance image processing.
  • Scatter Plot Tool. The previous scatter plot tool has been updated and modernized, allowing users to dynamically switch bands, calculate spectral statistics, interact with ROIs, and generate density slices of the displayed spectral data.
  • Raster Color Slice. This useful tool has also been updated, particularly from a performance perspective, providing dynamic updates in the image display according to parameter changes made in the tool.

For those interested in implementing ENVI in the cloud, the ENVI 5.2 release also marks the release of ENVI Services Engine (ESE) 5.2, an enterprise version of ENVI that facilitates on-demand, scalable, web-based image processing applications. As an example, HySpeed Computing is currently developing a prototype implementation of ESE for processing hyperspectral imagery from the HICO sensor on the International Space Station. The HICO Image Processing System will soon be publicly available for testing and evaluation by the community. A link to access the system will be provided on our website once it is released.
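The general pattern behind an engine like ESE is that processing routines are exposed as web services: a client submits a JSON task description and later retrieves the result. The sketch below only illustrates that request pattern; the task name, parameter keys, and endpoint shown are hypothetical placeholders, not the actual ESE API.

```python
import json

def build_task_request(task_name, input_raster, parameters):
    """Assemble a JSON body for a hypothetical ESE-style task submission.

    The task name, parameter keys, and endpoint below are illustrative only.
    """
    body = {
        "taskName": task_name,
        "inputParameters": dict(parameters, input_raster=input_raster),
    }
    return json.dumps(body)

# A client would POST this body to something like
# http://<server>/ese/services/<task_name>/submitJob (hypothetical URL),
# then poll a job-status endpoint until the output raster is ready.
```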


To learn more about the features above, and many others not listed here, see the video from Exelis VIS or read the latest release notes on ENVI 5.2.

We’re excited to put the new tools to work. How about you?

VISualize 2014 – Call for abstracts now open

UPDATE (6-April-2015): Announcing the ENVI Analytics Symposium – taking place in Boulder, CO from August 25-26, 2015. Those looking for the VISualize symposium, which has been indefinitely postponed, should consider attending the inaugural ENVI Analytics Symposium as a great opportunity to explore the next generation of geoanalytic solutions.

Just announced!  VISualize 2014, the annual IDL & ENVI User Group Meeting hosted by Exelis Visual Information Solutions, will be taking place October 14-16 at the World Wildlife Fund in Washington, DC.

HySpeed Computing is honored to once again be co-sponsoring this year’s VISualize. We are excited to speak with you and see your latest remote sensing applications.

At this year’s meeting HySpeed Computing will be presenting results from our latest project – a prototype cloud computing system for remote sensing image processing and data visualization. We hope to see you there.

Abstract submission deadline is September 12. Register today!


“Please join us at VISualize 2014, October 14th – 16th, at the World Wildlife Fund in Washington, DC. This three-day event explores real-world applications of ENVI and IDL with a specific focus on Modern Approaches for Remote Sensing & Monitoring Environmental Extremes.

Suggested topics include:

  • Using new data platforms such as UAS, microsatellites, and SAR sensors for environmental assessments
  • Land subsidence monitoring and mapping techniques
  • Remote sensing solutions for precision agriculture mapping
  • Drought, flood, and extreme precipitation event monitoring and assessment
  • Wildfire and conservation area monitoring, management, mitigation, and planning
  • Monitoring leaks from natural gas pipelines

Don’t miss this excellent opportunity to connect with industry thought leaders, researchers, and scientists.”

Register today!


NASA Takes Over Navy Instrument On ISS

A version of this article appears in the May 19 edition of Aviation Week & Space Technology, p. 59, by Frank Morring, Jr.

HREP on the JEM Exposed Facility

A hyperspectral imager on the International Space Station (ISS) that was developed by the U.S. Navy as an experiment in littoral-warfare support is finding new life as an academic tool under NASA management, and already has drawn some seed money as a pathfinder for commercial Earth observation.

Facing Earth in open space on the Japanese Experiment Module’s porchlike Exposed Facility, the Hyperspectral Imager for the Coastal Ocean (HICO) continues to return at least one image a day of near-shore waters with unprecedented spectral and spatial resolution.

HICO was built to provide a low-cost means to study the utility of hyperspectral imaging from orbit in meeting the Navy’s operational needs close to shore. Growing out of its experiences in the Persian Gulf and other shallow-water operations, the Office of Naval Research wanted to evaluate the utility of space-based hyperspectral imagery to characterize littoral waters and conduct bathymetry to track changes over time that could impact operations.

The Naval Research Laboratory (NRL) developed HICO, which was based on airborne hyperspectral imagery technology and off-the-shelf hardware to hold down costs. HICO was launched Sept. 10, 2009, on a Japanese H-II Transfer Vehicle as part of HREP, the HICO and RAIDS (Remote Atmospheric and Ionospheric Detection System) Experimental Payload; it returned its first image two weeks later.

In three years of Navy-funded operations, HICO “exceeded all its goals,” says Mary Kappus, coastal and ocean remote sensing branch head at NRL.

“In the past it was blue ocean stuff, and things have moved more toward interest in the coastal ocean,” she says. “It is a much more difficult environment. In the open ocean, multi-spectral was at least adequate.”

NASA, the U.S. partner on the ISS, took over HICO in January 2013 after Navy funding expired. The Navy also released almost all of the HICO data collected during its three years running the instrument. It has been posted for open access on the HICO website managed by Oregon State University.

While the Navy program was open to most researchers, the principal-investigator approach and the service’s multistep approval process made it laborious to gain access to the HICO instrument.

“[NASA] wanted it opened up, and we had to get permission from the Navy to put the historical data on there,” says Kappus. “So anything we collect now goes on there, and then we ask the Navy for permission to put old data on there. They reviewed [this] and approved releasing most of it.”

Under the new regime NRL still operates the HICO sensor, but through the NASA ISS payload office at Marshall Space Flight Center. This more-direct approach has given users access to more data and, depending on the target’s position relative to the station orbit, a chance to collect two images per day instead of one. Kappus explains that the data buffer on HICO is relatively small, so coordination with the downlink via the Payload Operations Center at Marshall is essential to collecting data before the buffer fills up.

Task orders are worked through the same channels. Presenting an update to HICO users in Silver Spring, Md., on May 7, Kappus said 171 of 332 total “scenes” targeted between Nov. 11, 2013, and March 12 were requested by researchers backed by the NRL and NASA; international researchers comprised the balance.

Data from HICO is posted on NASA’s Ocean Color website, where usage also is tracked. After the U.S., “China is the biggest user” of the website data, Kappus says, followed by Germany, Japan and Russia. The types of data sought, such as seasonal bathymetry that shows changes in the bottom of shallow waters, have remained the same through the transition from Navy to NASA.

“The same kinds of things are relevant for everybody; what is changing in the water,” she says.

HICO offers unprecedented detail from its perch on the ISS, providing 90-meter (295-ft.) resolution across wavelengths of 380-960 nanometers sampled at 5.7 nanometers. Sorting that rich dataset requires sophisticated software, typically custom-made and out of the reach of many users.

To expand the user set for HICO and future Earth-observing sensors on the space station, the Center for the Advancement of Science in Space, the non-profit set up by NASA to promote the commercial use of U.S. National Laboratory facilities on the ISS, awarded a $150,000 grant to HySpeed Computing, a Miami-based startup, and [Exelis] to demonstrate an online image processing system that can rapidly integrate new algorithms.

James Goodman, president/CEO of HySpeed, says the idea is to build a commercial way for users to process HICO data for their own needs at the same place online that they get it.

“Ideally a copy of this will [be] on the Oregon State server where the data resides,” Goodman says. “As a HICO user you would come in and say ‘I want to use this data, and I want to run this process.’ So you don’t need your own customized remote-sensing software. It expands it well beyond the research crowd that has invested in high-end remote-sensing software. It can be any-level user who has a web browser.”

Application Tips for ENVI 5.x – An IDL application for opening HDF5-formatted HICO scenes

This is part of a series on tips for getting the most out of your geospatial applications. Check back regularly or follow HySpeed Computing to see the latest examples and demonstrations.

Objective: Open a HICO dataset stored in HDF5 format using an IDL application prepared by the U.S. Naval Research Laboratory.

This is a supplement to an earlier post that similarly describes how to open HDF5-formatted HICO files using either the H5_Browser or the new HDF5 Reader in ENVI.

HICO Montgomery Reef, Australia

Scenario: This tip demonstrates how to use IDL code to convert an HDF5-formatted HICO scene of Montgomery Reef, Australia into ENVI format. Subsequent steps are included for preparing the resulting data for further analysis.

The HICO dataset used in this example (H2012095004112.L1B_ISS) was downloaded from the NASA GSFC archive, which can be reached either through the HICO website at Oregon State University or the NASA GSFC Ocean Color website. Note that you can also apply to become a registered HICO Data User through the OSU website, and thereby obtain access to datasets already in ENVI format.

The IDL code used in this example is available from the NASA GSFC Ocean Color website under Documents > Software/Tools > IDL Library > hico. The three IDL files you need are: byte_ordering.pro, nrl_hico_h5_to_flat.pro and write_nrl_header.pro.

The same IDL code is also included here for your convenience:  nrl_hico_h5_to_flat,  byte_ordering  and  write_nrl_header (re-distributed here with permission; disclaimers included in the code). However, to use these files (which were renamed so they could be attached to the post), you will first need to change the file extensions from *.txt to *.pro.

Running this code requires only basic familiarity with IDL and the IDL Workbench.

The Tip: Below are steps to open the HICO L1B radiance and navigation datasets in ENVI using the IDL code prepared by the Naval Research Laboratory:

  • Start by unpacking the compressed file (e.g., H2012095004112.L1B_ISS.bz2). If other software isn’t readily available, a good option is to download 7-Zip for free from http://www.7-zip.org/.
  • Rename the resulting HDF5 file with a *.h5 extension (e.g., H2012095004112.L1B_ISS.h5). This allows the HDF5 tools in the IDL application to recognize the appropriate format.
  • If you downloaded the IDL files from this post, rename them from *.txt to *.pro (e.g., nrl_hico_h5_to_flat.txt to nrl_hico_h5_to_flat.pro); otherwise, if you downloaded them from the NASA website they already have the correct naming convention.
  • Open the IDL files in the IDL Workbench. To do so, simply double-click the files in your file manager and the files should automatically open in IDL if it is installed on your machine. Alternatively, you can launch either ENVI+IDL or just IDL and then select File > Open in the IDL Workbench.
  • Compile each of the files in the following order: (i) nrl_hico_h5_to_flat.pro, (ii) byte_ordering.pro, and (iii) write_nrl_header.pro. In the IDL Workbench this can be achieved by clicking on the tab associated with a given file and then selecting the Compile button in the menu bar.
  • You will ultimately only run the code for nrl_hico_h5_to_flat.pro, but this application depends on the other two files, which is why they must also be compiled.
  • Run the code for nrl_hico_h5_to_flat.pro, which is done by clicking the tab for this file and then selecting the Run button in the menu bar.
  • You will then be prompted for an *.h5 input file (e.g., H2012095004112.L1B_ISS.h5), and a directory where you wish to write the output files.
  • There is no status bar associated with this operation; however, if you look closely at the IDL prompt in the IDL Console at the bottom of the Workbench you will note that it changes color while the process is running and returns to its normal color when the process is complete. In any event, the procedure is relatively quick and typically finishes in less than a minute.
  • Once complete, two sets of output files are created (data files + associated header files), one for the L1B radiance data and one for the navigation data.
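For those working outside IDL, the same conversion can be sketched in Python with h5py and numpy. This is a minimal, hypothetical illustration, not the NRL code: the dataset path "products/Lt" and the assumption that the cube is stored as lines x samples x bands are guesses that you should verify against your own file (e.g., by browsing it with h5py) before relying on the output.

```python
import h5py
import numpy as np

# ENVI data-type codes for a few common numpy dtypes.
ENVI_DTYPE = {"int16": 2, "int32": 3, "float32": 4, "float64": 5, "uint16": 12}

def h5_to_envi_flat(h5_path, out_stem, dataset="products/Lt"):
    """Dump an HDF5 dataset to a flat binary file plus a minimal ENVI header.

    The dataset path and the lines x samples x bands layout are assumptions;
    inspect your file first to confirm them.
    """
    with h5py.File(h5_path, "r") as f:
        cube = f[dataset][...]
    lines, samples, bands = cube.shape
    # A C-order write of a (lines, samples, bands) array is BIP interleave.
    cube.tofile(out_stem + ".dat")
    header = "\n".join([
        "ENVI",
        f"samples = {samples}",
        f"lines = {lines}",
        f"bands = {bands}",
        "header offset = 0",
        f"data type = {ENVI_DTYPE[cube.dtype.name]}",
        "interleave = bip",
        "byte order = 0",
        "",
    ])
    with open(out_stem + ".hdr", "w") as hdr:
        hdr.write(header)
    return lines, samples, bands
```

Unlike the NRL application, this sketch writes only the core ENVI header fields; the NRL code also carries over wavelength and navigation metadata, which is why it remains the recommended route.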

Data Preparation: Below are the final steps needed to prepare the HICO data for further processing (repeated here in part from our previous post):

  • Open the L1B radiance and associated navigation data in ENVI. You will notice one side of the image exhibits a black stripe containing zero values.
  • As noted on the HICO website: “At some point during the transit and installation of HICO, the sensor physically shifted relative to the viewing slit. The edge of the viewing slit was visible in every scene.” This effect is removed by simply cropping out affected pixels in each of the data files. For scenes in standard forward orientation (+XVV), cropping includes 10 pixels on the left of the scene and 2 pixels on the right. Conversely, for scenes in reverse orientation (-XVV), cropping is 10 pixels on the right and 2 on the left.
  • If you’re not sure about the orientation of a particular scene, the orientation is specified in the newly created header file under hico_orientation_from_quaternion.
  • Spatial cropping can be performed by selecting Raster Management > Resize Data in the ENVI toolbox, choosing the relevant input file, selecting the Spatial Subset option, subsetting the image to Samples 11-510 for forward orientation (3-502 for reverse orientation), and assigning a new output filename. Repeat as needed for each dataset.
  • The HDF5-formatted HICO scenes also require spectral cropping to reduce the total wavelengths from 128 to the 87-band subset spanning 0.4-0.9 um (400-900 nm). The bands outside this subset are considered less accurate and are typically not included in analysis.
  • Spectral cropping can also be performed by selecting Raster Management > Resize Data in the ENVI toolbox, in this case using the Spectral Subset option and selecting bands 10-96 (corresponding to 0.40408-0.89669 um) while excluding bands 1-9 and 97-128. This step need only be applied to the hyperspectral L1B radiance data.
  • If desired, spectral and spatial cropping can both be applied in the same step.
  • The HICO scene is now ready for further processing and analysis in ENVI.
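For readers who prefer to script the cropping, the spatial and spectral subsets above reduce to simple array slices. A minimal numpy sketch, assuming a cube ordered lines x samples x bands with the standard 512 samples and 128 bands (verify both against your header before use):

```python
import numpy as np

def crop_hico(cube, orientation="+XVV"):
    """Apply the standard HICO spatial and spectral crop to a numpy cube.

    Forward (+XVV): drop 10 samples on the left, 2 on the right (Samples 11-510).
    Reverse (-XVV): drop 2 on the left, 10 on the right (Samples 3-502).
    Spectral: keep 1-based bands 10-96 (87 bands, ~0.4-0.9 um).
    """
    if orientation == "+XVV":
        cube = cube[:, 10:-2, :]
    else:
        cube = cube[:, 2:-10, :]
    return cube[:, :, 9:96]  # 1-based bands 10-96 -> 0-based indices 9..95
```

Either route, the ENVI toolbox or a slice like this, yields the same 500-sample by 87-band subset described above.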

For more information on the sensor, detailed data characteristics, ongoing research projects, publications and presentations, and much, much more, HICO users are encouraged to visit the HICO website at Oregon State University. This is an excellent resource for all things HICO.