Application Tips for ENVI 5 – Using FLAASH for atmospheric correction of airborne hyperspectral data

This is part of a series on tips for getting the most out of your geospatial applications. Check back regularly or follow HySpeed Computing to see the latest examples and demonstrations.

Objective: Utilize the FLAASH Atmospheric Correction tool in ENVI for correction of airborne hyperspectral data.

About FLAASH (from ENVI documentation): “FLAASH [Fast Line-of-sight Atmospheric Analysis of Hypercubes] is a first-principles atmospheric correction tool that corrects wavelengths in the visible through near-infrared and shortwave infrared regions, up to 3 µm. FLAASH works with most hyperspectral and multispectral sensors.”


RGB composite of AISA Eagle data for coastal region in southwest Puerto Rico

Scenario: This tip demonstrates the steps used to perform atmospheric correction using FLAASH for airborne data acquired in southwest Puerto Rico using an AISA Eagle VNIR II Hyperspectral Imaging Sensor. Data covers the spectral range 400-970 nm, with 128 spectral bands at ~5 nm spectral resolution, and with a 2 m ground sampling distance.

The Tip: Below are steps used to implement FLAASH for the example AISA data:

  • Note that the settings used below were selected to work best for this example, and different values may be more appropriate for your particular application. For more details on running FLAASH, please refer to the ENVI documentation, which includes an excellent step-by-step overview of using FLAASH, as well as a tutorial using AVIRIS data that demonstrates FLAASH.
  • Prior to starting FLAASH, the first step is to perform radiometric calibration and make sure the data is in the appropriate format, specifically: radiance data; BIL or BIP; floating point, long integer or integer; and units of uW/(cm^2 * sr * nm). This can be easily achieved using the “Radiometric Calibration” tool and selecting the option to “Apply FLAASH Settings”.
  • The “Radiometric Calibration” tool requires that the data file have gains and offsets defined for each band. Additionally, the tool expects input units of W/(m^2 * sr * um), so depending on the units of your data you may need to manually edit the gain and offset information in your header file. For instance, the uncorrected AISA data in our example has units of 1000*mW/(cm^2 * sr * um), which is equivalent to 100*W/(m^2 * sr * um). This means the gain for each band is 0.01 (i.e., 1/100) and the offset is 0.0.
  • If you suspect the gains and offsets are not correct for your data, then be sure to check the output data from FLAASH to confirm reflectance values fall within acceptable limits. For instance, if we incorrectly set the gain equal to 0.1 in our example the resulting reflectance values are substantially higher than feasible (e.g., greater than 100% reflectance for bright targets), and if we set the gain equal to 0.001 the reflectance values are substantially lower (e.g., producing negative values for most targets).
  • To start the “Radiometric Calibration” tool, select “Radiometric Correction > Radiometric Calibration” in the Toolbox and choose the appropriate input file. In the dialog window that appears, select “Apply FLAASH Settings”, assign an output filename, and then hit “Ok” to run the correction. Once the process is completed, your data is now ready for input to FLAASH.
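As a sanity check on the unit arithmetic above, the gain can also be derived programmatically. This is a minimal sketch (the 1000x scale and the mW/(cm^2 * sr * um) base unit come from the example AISA data; substitute the values for your own sensor):

```python
# AISA DNs in this example store radiance in mW/(cm^2 * sr * um), scaled by 1000.
# The "Radiometric Calibration" tool expects W/(m^2 * sr * um):
#   1 mW = 1e-3 W  and  1 cm^2 = 1e-4 m^2, so the unit factor is 1e-3/1e-4 = 10.
unit_factor = 10.0    # mW/(cm^2 * sr * um) -> W/(m^2 * sr * um)
dn_scale = 1000.0     # scale factor applied to the stored digital numbers

gain = unit_factor / dn_scale   # radiance = gain * DN + offset
offset = 0.0

print(gain)   # 0.01, matching the value entered in the header
```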


  • To start FLAASH, select “Radiometric Correction > Atmospheric Correction Module > FLAASH Atmospheric Correction” from the Toolbox. This will launch the main dialog window for entering FLAASH parameters.
  • Begin by selecting the appropriate Input Radiance File (i.e., the output from the radiometric correction). At this point a dialog window will open for selecting the “Radiance Scale Factors”. If you used the “Radiometric Calibration” tool for preparing your data, as above, then simply select “Use single scale factor for all bands” and leave the “Single scale factor” equal to 1.0.


  • The next step is to assign a filename for the “Output Reflectance File” (the main output file), a directory for the “Output Directory for FLAASH Files” (the directory for all ancillary output files), and a name for the “Rootname for FLAASH Files” (used for naming the ancillary files).
  • Now enter all of the relevant sensor and scene specific information: date and time of acquisition, altitude of sensor, ground elevation, center latitude and longitude, and pixel size.
  • The next step is to select options for the atmospheric model, water retrieval, aerosol model, aerosol retrieval, visibility, spectral polishing and wavelength recalibration. Details for all of these options are provided in the FLAASH documentation. In our example, we use the Tropical atmospheric and Maritime aerosol models (both appropriate for coastal Puerto Rico), the 820 nm water absorption feature for water retrieval, no aerosol retrieval (this data doesn’t have the necessary wavelength bands to run these calculations), 40 km initial visibility, spectral polishing with a width of 3 bands, and no wavelength recalibration.
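For convenience, the model and retrieval choices used in this example can be collected in one place. The sketch below is simply a record of the settings described above; the dictionary keys are illustrative names of our own, not ENVI API keywords:

```python
# FLAASH settings used for this AISA scene (key names are illustrative only):
flaash_settings = {
    "atmosphere_model": "Tropical",         # appropriate for coastal Puerto Rico
    "aerosol_model": "Maritime",            # appropriate for coastal Puerto Rico
    "water_retrieval_feature_nm": 820,      # 820 nm water absorption feature
    "aerosol_retrieval": None,              # required wavelength bands not present
    "initial_visibility_km": 40,
    "spectral_polishing_width": 3,          # width in bands
    "wavelength_recalibration": False,
}

for name, value in flaash_settings.items():
    print(f"{name}: {value}")
```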


  • In the Advanced settings we leave most parameters set to their default values, with the exception of “Use Tiled Processing”, which we set to “No”. See the FLAASH documentation for more details on these parameters. Note that by default the output data is surface reflectance scaled by 10000, but this can be changed if desired.


  • Before running FLAASH, you can “Save…” the input parameters to a file for use in future runs, or alternatively, in the “Advanced Settings” there is an option to “Automatically Save Template File”, which can also be used to save the input parameters to a file.
  • When you are ready, execute FLAASH by clicking “Apply”.
  • Note that errors will sometimes occur, causing FLAASH to cancel the correction process. This can result from incompatibilities between the selected processing options and certain data characteristics. For example, the above data produces an error when using image tiling, but runs fine when tiling is disabled; whereas tiling works perfectly well for other datasets. So if this happens, try adjusting the FLAASH settings and re-running the correction.
  • Once FLAASH has completed, be sure to examine your output data for acceptability, and ideally, if available, utilize measured field data to validate the atmospheric correction output.
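Because the default output is reflectance scaled by 10000, a quick programmatic screen for implausible values (such as those produced by the gain errors described earlier) can supplement visual inspection. The function name and logic below are our own sketch, not part of ENVI:

```python
def fraction_out_of_range(pixels, scale=10000):
    """Return the fraction of reflectance pixels outside 0-100%
    (i.e., outside 0..scale for FLAASH's scaled-reflectance output)."""
    bad = sum(1 for p in pixels if p < 0 or p > scale)
    return bad / len(pixels)

# e.g., a gain set 10x too high pushes bright targets past 100% reflectance,
# while a gain set too low drives most targets negative:
sample = [1200, 4500, 15000, -30, 800]
print(fraction_out_of_range(sample))   # 0.4
```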

Acknowledgement: Data was collected for the University of Puerto Rico at Mayaguez by the Galileo Group Inc. in 2013 for a NASA EPSCoR sponsored research project on the biodiversity of coastal and terrestrial ecosystems.


Application Tips for ENVI 5.1 – Saving and Restoring Views and Layers


Objective: Utilize the new option in ENVI 5.1 to save views and layers from an active ENVI session, and subsequently restore them in a later session. This is a welcome addition to ENVI and will save you lots of time when reopening images!

Scenario: This tip demonstrates the steps used to save and restore a 2 x 2 view layout of a project in the Florida Keys. Data shown in this example includes (clockwise from upper left): HICO scene; Landsat 7 scene; land mask derived from HICO using NDWI; chlorophyll-a concentration derived from HICO using the OC4 algorithm.


The Tip: Below are steps used to save and restore views and layers in ENVI 5.1:

  • Select a view layout, and then open and display the desired imagery in each view.
  • In this example we use a 2 x 2 view layout: Views > 2×2 Views.
  • We display the HICO and Landsat data as RGB data, the land mask as a binary grayscale, and the chlorophyll-a data using a rainbow color table.
  • To save this layout (stored as a .jsn file), simply select: File > Views & Layers > Save.
  • To restore the layout when starting a new ENVI session, select: File > Views & Layers > Restore.

According to the ENVI documentation: “When you save the views and layers, any files open in ENVI [refer to documentation for list of specific types that can be included in a saved session], the layout of the view, the layers loaded into those views, and any properties (center coordinates, zoom factor, raster stretch, color table, and so forth) will be saved. Any open portals will also be saved to the file.”

Application Tips for ENVI 5 – Using the Image Registration Workflow


Objective: Utilize ENVI’s Image Registration Workflow to improve geo-location of an image by aligning it with a higher accuracy base image.

Scenario: This tip demonstrates the steps used to align a hyperspectral HICO image of the Florida Keys with a multispectral Landsat 7 image mosaic of the same area.

HICO - LANDSAT alignment

Data: This example utilizes a HICO image of the Florida Keys (H2011314130732.L1B_ISS) downloaded in ENVI format from the HICO website at Oregon State University. The same image is also available in HDF5 format from the NASA GSFC Ocean Color website (read this tip for an overview on opening HICO HDF5 files in ENVI).

HICO – Hyperspectral Imager for the Coastal Ocean – is an imaging spectrometer located on the International Space Station. HICO images are currently distributed with only rough geo-location information, and can be off by as much as 10 km. The Florida Keys HICO image used in this example has been geo-located using this coarse information (the HICO website provides a good summary of the steps used for this process).

Additionally, as the base image for the registration process, this example utilizes a Landsat 7 mosaic generated from two L1T Landsat images (LE70150422000036EDC00 and LE70150432000036EDC01) downloaded from USGS EarthExplorer. The L1T processing level “provides systematic radiometric and geometric accuracy by incorporating ground control points while employing a Digital Elevation Model (DEM) for topographic accuracy.” The two Landsat 7 scenes were mosaicked using ENVI’s Seamless Mosaic Tool.

The Tip: Below are steps used to implement the Image Registration Workflow for this example:

  • Open both the HICO image and Landsat 7 mosaic in ENVI, and start the ‘Image Registration Workflow’ found in the Toolbox under Geometric Correction > Registration.
  • In the opening dialog window select the Landsat 7 mosaic as the ‘Base Image File’, select the HICO image as the ‘Warp Image File’, and then click ‘Next’.
  • The next dialog window that appears is for ‘Tie Points Generation’. In some cases it is possible to automatically generate acceptable tie points utilizing the default values and without selecting any seed points. You can explore this process by simply selecting ‘Next’ at the bottom of the dialog and reviewing the resulting points that are generated. If the output isn’t acceptable, then just select ‘Back’ to revert to the ‘Tie Points Generation’ dialog. For our example, this automated process produced just 5 tie points with an RMSE of 3.15. Furthermore, the generated tie points did not properly correspond to equivalent features in both images. We can do better.
  • Returning to the ‘Tie Points Generation’ dialog, under the ‘Main’ tab, we adjusted the ‘Matching Method’ to ‘[Cross Modality] Mutual Information’, changed the ‘Transform’ to ‘RST’, and left the other parameters set to their default values.

Tie points generation - main

  • Under the ‘Advanced’ tab we set the ‘Matching Band in Base Image’ to ‘Band 2 (0.5600)’, the ‘Matching Band in Warp Image’ to ‘Band 28 (0.5587)’, and left the default values for all other parameters.

Tie points generation - advanced

  • To add user-selected tie-points, select ‘Start Editing’ under the ‘Seed Tie Points’ tab. This displays the base image in the active view, and sets the cursor to the ‘Vector Create’ tool. To add a point, left-click in the desired location on the base image, and then right-click to select ‘Accept as Individual Points’. This brings up the warp image in the active view, where you use the same steps (left-click on location and right-click to accept) to select the equivalent location in the warp image. The process is then iterated between the base and warp images until you have selected the desired number of tie-points. If needed, you can switch the cursor to the pan or zoom tools, or turn layer visibility on/off, to better navigate the images for point selection.
  • Select ‘Stop Editing’ once you have added at least 3 tie-points (we added just 4 in our example). Note that you can always go back and delete or add points as needed until you are satisfied. You can even return to the tie-point selection step after reviewing results from the automatic tie-point generation process.
  • Now select ‘Next’ at the bottom of the ‘Tie Points Generation’ dialog to automatically generate tie-points based on these seed points. Once the point generation process is complete, the next dialog window that appears is for ‘Review and Warp’.
  • The total number of tie-points is listed under the ‘Tie Points’ tab in this dialog. In our example the process produced a total of 14 tie-points. Select ‘Show Table’ to view a list of the tie-points, including information on the error associated with each point, as well as the total RMSE calculated for all current points.

Tie points table

  • Before proceeding with the warp process, it is recommended that you visually inspect each tie-point to confirm it correctly identifies corresponding locations in both images. Individual points can be selected and visualized by first selecting the point’s row in the attribute table (accomplished as shown above by selecting the box to the left of the point’s ID number). Once selected, the active point is highlighted and centered in the base image, and you can then use the ‘Switch To Warp’ and ‘Switch To Base’ buttons in the ‘Tie Points’ tab to alternate between the two images.

Tie point review

  • If you see a point that isn’t correct, simply select the small red X at the bottom of the attribute table to delete the active point. In the Key Largo example, two points were associated with non-permanent features on the water surface and were thus appropriately discarded. This reduced the total number of points in the example to 12, and improved the overall RMSE to 0.86, which was deemed acceptable for this application.
  • While the deletion process isn’t directly reversible, if you inadvertently delete a point you can always close the table, go back to the ‘Tie Points Generation’ dialog and regenerate all tie-points.
  • Once you are satisfied with the tie-points, close the attribute table and select the ‘Warping’ tab in the main ‘Review and Warp’ dialog. This displays the parameter options for performing the final alignment of the warp image. In our example we set the ‘Warping Method’ to ‘RST’, ‘Resampling’ to ‘Nearest Neighbor’, and ‘Output Pixel Size From’ to ‘Warp Image’.

Tie points warp

  • The final warp processing is then initiated by selecting the ‘Next’ button at the bottom of the dialog. Once complete, all that remains is to select an ‘Output Filename’ and format for the warped image and an ‘Output Tie Point File’ for saving the tie-points.

It is important to note that the above parameter values were selected to work best for this example and for other HICO images; however, different values may be more appropriate for your particular application. For detailed descriptions of parameters and general instructions on running the Image Registration Workflow, as well as a hands-on tutorial, be sure to look at the documentation included with ENVI.
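For reference, the RMSE that ENVI reports for the tie points is conceptually the root-mean-square of the per-point residual distances between where the warp model places each point and where it was measured. The sketch below illustrates the calculation (it is our own illustration, not ENVI's internal code):

```python
import math

def tie_point_rmse(residuals):
    """RMSE over per-tie-point residuals, each given as a (dx, dy)
    offset in pixels between a point's predicted and measured location."""
    sq = [dx * dx + dy * dy for dx, dy in residuals]
    return math.sqrt(sum(sq) / len(sq))

# Two points, each off by exactly one pixel:
print(tie_point_rmse([(1, 0), (0, 1)]))   # 1.0
```

Deleting a point with a large residual, as in the Key Largo example, lowers this aggregate value.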

Application Tips for ENVI 5.1 – Opening HICO scenes formatted in HDF5


Objective: Open a HICO dataset stored in HDF5 format and transform the data into standard ENVI formatted files ready for further processing and analysis.

(Update 14-June-2014): A companion post is now available describing how to open HDF5 HICO scenes using an IDL application prepared by the U.S. Naval Research Laboratory.

Scenario: This tip demonstrates two methods for opening a HDF5 formatted HICO scene in ENVI: (i) one method uses the new HDF5 tool released in ENVI 5.1 and (ii) the other method uses the H5_BROWSER routine in IDL. Subsequent steps are then used to save the data in standard ENVI format and prepare the resulting data for further analysis.

  • HICO – Hyperspectral Imager for the Coastal Ocean – is an imaging spectrometer located on the International Space Station. Visit the HICO website at Oregon State University for detailed information on data characteristics and tips for working with this data.
  • HDF5 – Hierarchical Data Format v5 – is a file format used to organize and store large multifaceted data collections. Visit the HDF Group for more information.


Why two methods? Describing both methods is informative in its own right, but more importantly, because the new HDF5 tool in ENVI is generic, it does not always provide access to all data files within a given dataset. Hence, for users who wish to remain within the ENVI/IDL system, it is often necessary to leverage the H5_BROWSER routine to open all available data in a particular HDF5 scene.

(Update 5-Mar-2014): Exelis VIS has released ENVI 5.1 Hotfix 1, which fixes this issue and enables reading all data files within a given HICO dataset. The new HDF5 tool is now fully functional for the entire dataset. See below for details.

Data: HICO data is currently available from two different online sources: (i) as ENVI files through the HICO website, which requires users to submit a brief proposal to become a registered HICO Data User, or (ii) as HDF5 files through the NASA GSFC Ocean Color website, which requires users to register with NASA’s EOSDIS User Registration Service.

This example utilizes a HICO scene of the Florida Keys (H2011314130732.L1B_ISS) downloaded from the NASA GSFC Ocean Color website.


The HDF5 formatted HICO scenes contain six core datasets: (i) products (L1B top of atmosphere radiance), (ii) images (three-band true color image), (iii) navigation (latitudes, longitudes, sensor azimuth and zenith, and solar azimuth and zenith), (iv) quality (scan-line quality flags), (v) data (L0 uncorrected data), and (vi) metadata (FGDC and HICO metadata information). Note: In more recent files the L0 uncorrected data is not included and the HDF5 files therefore only contain five core datasets.
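You can inspect this structure for yourself with a short recursive traversal of the dataset tree. The sketch below works on any HDF5-style hierarchy where sub-groups expose .items() (an h5py Group does; a nested mapping is used here purely for illustration):

```python
def list_datasets(group, prefix=""):
    """Recursively collect the paths of all leaf datasets in an HDF5-style
    hierarchy (anything with an .items() method is treated as a sub-group)."""
    paths = []
    for name, item in group.items():
        path = prefix + "/" + name
        if hasattr(item, "items"):
            paths.extend(list_datasets(item, path))
        else:
            paths.append(path)
    return sorted(paths)

# Illustration with a nested mapping mimicking part of a HICO file's layout:
hico_like = {"navigation": {"latitudes": [], "longitudes": []},
             "quality": {"flags": []}}
print(list_datasets(hico_like))
# ['/navigation/latitudes', '/navigation/longitudes', '/quality/flags']
```

With the h5py library installed, the same function can be pointed at a real file, e.g. list_datasets(h5py.File("H2011314130732.L1B_ISS.h5", "r")).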

Tip 1 – HDF5 Tool in ENVI 5.1: Below are steps to open the navigation and quality datasets in ENVI using the new HDF5 tool:

  • Start by unpacking the compressed folder (e.g., H2011314130732.L1B_ISS.bz2). If other software isn’t readily available, the free 7-Zip utility is a good option.
  • Rename the resulting HDF5 file with a *.h5 extension (e.g., H2011314130732.L1B_ISS.h5). This allows the HDF5 tools to more easily recognize the appropriate format.
  • Open the ENVI generic HDF5 reader by selecting File > Open As > Generic Formats > HDF5, and then select the desired *.h5 file in the file selection dialog.
  • The left column of the HDF5 reader lists the Available Datasets, and the right column contains the Raster Builder. The right column is where you will create and populate the two HICO datasets to open in ENVI.
  • First, for the quality dataset, rename the default raster in the right column of the Raster Builder (e.g., rename Raster 1 to H2011314130732_L1B_ISS_quality). This can be easily accomplished by right clicking on the raster name and selecting Rename Raster. Next, add the quality flags to this raster by clicking on the <flags (512×2000)> dataset in the left column and then clicking the arrow in the center of the HDF5 tool to add this data to the recently created raster.
  • Now add a second raster to the right column using the New Raster button at the bottom of the right column (it’s the rightmost icon with the small green plus sign). Rename this raster (e.g., rename Raster 2 to H2011314130732_L1B_ISS_navigation), and use the center selection arrow to add all six data layers from the navigation dataset in the left column to the new raster in the right column.
  • Note that if you will be opening data from more than one HICO scene, then you can also build a Template from these settings and use the Template as the basis to open more datasets.
  • Your HDF5 dialog window should look similar to the following:


  • Now select Open Rasters at the bottom of the dialog window to open both of the rasters as data layers in ENVI.
  • These data layers can then be saved in ENVI format by selecting File > Save As…, selecting a given data layer in the file selection dialog, and assigning an Output Filename in the Save File As Parameters dialog.

(Update 5-Mar-2014): Follow the same steps as above to open and save the L1B product using the HDF5 reader. However, for the three band color image, note that the HDF5 metadata itself misidentifies this data as band interleaved by pixel (BIP) when in fact it is band sequential (BSQ). To save this data in ENVI format, first open and save the data using the same steps as above, and then use Edit ENVI Header to adjust the band interleave of the resulting ENVI file from BIP to BSQ.
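No pixel data needs to be rewritten for that fix (only the header keyword changes, since the bytes on disk are already BSQ), but it helps to see what the interleave keyword actually encodes. The sketch below shows how the same flat buffer is indexed under BIP versus BSQ; it is an illustration, not a step you need to run:

```python
def bip_to_bsq(flat, samples, lines, bands):
    """Reorder a flat buffer from BIP (band varies fastest within each pixel)
    to BSQ (each band stored as a complete image plane)."""
    bsq = [None] * len(flat)
    for ln in range(lines):
        for s in range(samples):
            for b in range(bands):
                bip_index = (ln * samples + s) * bands + b
                bsq_index = (b * lines + ln) * samples + s
                bsq[bsq_index] = flat[bip_index]
    return bsq

# Two pixels, two bands: BIP stores [p0b0, p0b1, p1b0, p1b1]
print(bip_to_bsq([10, 20, 30, 40], samples=2, lines=1, bands=2))
# [10, 30, 20, 40]  -> band 0 plane first, then band 1 plane
```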

Tip 2 – H5_BROWSER in IDL: Below are steps to open the L1B product in ENVI using the IDL H5_BROWSER tool:

  • As evident in the current example, the left column of the generic HDF5 reader does not indicate any data is present within the products data layer. Hence, we use an alternative approach here to access the L1B product data. You can also use this same approach to open other datasets within the HDF5 file.
  • This tip requires both ENVI and IDL, so to get started you will need to make sure you have launched ENVI+IDL rather than just ENVI. By launching ENVI+IDL you will start both the standard ENVI interface and the IDL Workbench.
  • In the IDL Workbench, within the IDL command-line console, enter the following commands to start the HDF5 Browser (substituting your own filename as appropriate):


  • The HDF5 Browser window should now appear, where the left column of the window lists the available datasets (now with data listed under the products dataset) and the right column shows details about any given selected dataset.


  • Within the H5 Browser, select the Lt data in the products dataset (as shown above), and then select Open.
  • Returning now to the IDL command-line console, enter the following lines (again substituting your own filename as appropriate) to save the data to disk in ENVI format and write an associated ENVI header file.


  • Alternatively, you can also use the following code to create your own simple IDL program. To do so, simply start a new program by selecting File > New File in the IDL Workbench, copy and paste text from this file (hy_open_hico.pdf) into the new program, save the program with the same name as listed in the code (i.e., hy_open_hico), replace the hardcoded filenames with your own filenames, compile the code, and then run.
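The IDL steps above boil down to writing the band cube as a flat binary file plus a small text header that tells ENVI how to interpret it. As a language-neutral illustration of that header (a minimal sketch; the dimensions below match the full HICO L1B cube, and data type 4 is ENVI's code for 32-bit floating point):

```python
def envi_header(samples, lines, bands, interleave="bsq", data_type=4):
    """Build a minimal ENVI header (.hdr) describing a flat binary cube."""
    return ("ENVI\n"
            f"samples = {samples}\n"
            f"lines = {lines}\n"
            f"bands = {bands}\n"
            "header offset = 0\n"
            "file type = ENVI Standard\n"
            f"data type = {data_type}\n"      # 4 = 32-bit floating point
            f"interleave = {interleave}\n"
            "byte order = 0\n")               # 0 = little-endian

# Header for the full HICO L1B product (512 samples x 2000 lines x 128 bands):
print(envi_header(512, 2000, 128))
```

A real header written by ENVI carries additional fields (map info, wavelengths, etc.), but the entries above are the essentials needed for ENVI to open a flat binary file.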


Data Preparation: Below are the final steps needed to prepare the HICO data for further processing:

  • As noted on the HICO website: “At some point during the transit and installation of HICO, the sensor physically shifted relative to the viewing slit. The edge of the viewing slit was visible in every scene.” This effect is removed by simply cropping out affected pixels in each of the above data files (i.e., quality, navigation and products). For scenes in standard forward orientation (+XVV), cropping includes 10 pixels on the left of the scene and 2 pixels on the right. Conversely, for scenes in reverse orientation (-XVV), cropping is 10 pixels on the right and 2 on the left.
  • If you’re not sure about the orientation of a particular scene, the orientation is specified in the HDF5 metadata under Metadata > HICO > Calibration > hico_iss_orientation.
  • Spatial cropping can be performed by selecting Raster Management > Resize Data in the ENVI toolbox, choosing the relevant input file, selecting the option for Spatial Subset, cropping the image for Samples 11-510 for forward orientation (3-502 for reverse orientation), and assigning a new output filename. Repeat as needed for each dataset.
  • The HDF5 formatted HICO scenes also require spectral cropping to reduce the 128 total bands to an 87-band subset spanning 400-900 nm. The bands outside this subset are considered less accurate and are typically not included in analysis.
  • Spectral cropping can also be performed by selecting Raster Management > Resize Data in the ENVI toolbox, only in this case using the option to Spectral Subset and selecting bands 10-96 (corresponding to 404.08-896.69 nm) while excluding bands 1-9 and 97-128. This step need only be applied to the hyperspectral L1B product data.
  • The header file for the L1B product data should now be updated to include information for the wavelengths (87 bands from 404.08-896.69 nm), fwhm (10 nm for 400-745 nm; and 20 nm for 746-900 nm) and gain values (0.020 for all bands).
  • Editing the header can be accomplished using the option for Raster Management > Edit ENVI Header in the ENVI toolbox, or by directly editing the actual header file using a standard text editor. For simplicity, as illustrated below, you can cut and paste the relevant header info from the following file (example_header.pdf) into the header.


  • The HICO scene is now ready for further processing and analysis in ENVI.
  • As an example, the latitude and longitude bands in the navigation file can be used to build a GLT to geolocate the scene using the rough coordinates provided with the data distribution. The HICO website provides a step-by-step overview of this geolocation process.
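The spatial and spectral cropping steps above amount to simple array slicing. As a sketch (the cube is indexed [band][line][sample]; band and sample numbers are 1-based in ENVI, hence the 0-based offsets below):

```python
def crop_hico(cube, orientation="+XVV"):
    """Crop the slit-edge pixels (10 left / 2 right for +XVV scenes,
    mirrored for -XVV) and keep ENVI bands 10-96, the 87-band ~400-900 nm
    subset. `cube` is a nested list indexed [band][line][sample]."""
    left, right = (10, 2) if orientation == "+XVV" else (2, 10)
    return [[line[left:len(line) - right] for line in band]
            for band in cube[9:96]]            # 0-based slice for bands 10-96

cube = [[[0] * 512] for _ in range(128)]       # 128 bands, 1 line, 512 samples
cropped = crop_hico(cube)
print(len(cropped), len(cropped[0][0]))        # 87 500
```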

For more information on the sensor, detailed data characteristics, ongoing research projects, publications and presentations, and much, much more, HICO users are encouraged to visit the HICO website at Oregon State University. This is an excellent resource for all things HICO.

Geospatial Learning Resources – An overview of the 2013 ENVI Rapid Learning Series

Have you downloaded, or upgraded to, the latest version of ENVI? Are you just learning the new interface, or already a seasoned expert? No matter your experience level, if you’re an ENVI user then it’s worth viewing the ENVI Rapid Learning Series.

This series is a collection of short 30-minute webinars that address different technical aspects and application tips for using ENVI. Originally hosted live in the fall of 2013, the webinars are now available online to view at your convenience:

  • ENVI and ArcGIS Interoperability Tips “Learn best practices and tips for using ENVI to extract information from your imagery. Get new information, ask questions, become a better analyst.” (recorded 10/16/13)
  • Using NITF Data in ENVI “Learn best practices and tips for using NITF data with ENVI to extract information from your imagery. Get new information, ask questions, become a better analyst.” (recorded 10/23/13)
  • Image Transformations in ENVI “Join Tony Wolf as he explores how image transformations can provide unique insight into your data in ENVI. Learn how to use the display capabilities of ENVI to visually detect differences between image bands and help identify materials on the ground.” (recorded 10/30/13)
  • Working with Landsat 8 Data in ENVI “Learn how to use Landsat 8 cirrus band, quality assurance band, and thermal channels in ENVI for classification, NDVI studies, and much more.” (recorded 11/6/13)
  • Using NPP VIIRS Imagery in ENVI “Join Thomas Harris as he explores the newly developed support for NPP VIIRS in ENVI. By opening a dataset as an NPP VIIRS file type, the user is presented with an intuitive interface that makes visualizing the data and correcting the ‘bowtie’ effect a snap.” (recorded 11/13/13)
  • Georeference, Image Registration, Orthorectification “This webinar looks at how to geometrically correct data in ENVI for improved accuracy and analysis results. Join the ENVI team as we demo georeferencing, image registration, and orthorectification capabilities and answer questions from attendees.” (recorded 11/20/13)
  • An Introduction to ENVI LiDAR “This webinar takes a quick look at the ENVI LiDAR interface and demonstrates how to easily transform geo-referenced point-cloud LiDAR data into useful geographic information system (GIS) layers. ENVI LiDAR can automatically extract Digital Elevation Models (DEMs), Digital Surface Models (DSMs), contour lines, buildings, trees, and power lines from your raw point-cloud LiDAR data. This information can be exported in multiple formats and to 3D visual databases.” (recorded 11/27/13)
  • IDL for the Non-Programmer “This webinar highlights some of the tools available to ENVI and IDL users, which allow them to analyze data and extend ENVI. Learn where to access code snippets, detailed explanations of parameters, and demo data that comes with the ENVI + IDL install.” (recorded 12/4/13)
  • ENVI Services Engine: What is it? “This webinar takes a very basic look at ENVI capabilities at the server level. It shows diagrams depicting how web based analysis works, and shows some examples of JavaScript clients calling the ENVI Services Engine. Benefits of this type of technology include developing apps or web based clients to run analysis, running batch analysis on multiple datasets, and integrating ENVI image analysis into the Esri platform.” (recorded 12/11/13)
  • Atmospheric Correction “This webinar looks at the different types of Atmospheric Correction tools available in ENVI. It starts with a look at what Atmospheric Correction is used for, and when you do and don’t need to apply it. Finally it gives a live look at QUAC and FLAASH and how to configure these tools to get the best information from your data.” (recorded 12/18/13)


A Look at What’s New in ENVI 5.1

(16-Dec-2013) Today Exelis Visual Information Solutions released ENVI 5.1, the latest version of their popular geospatial analysis software.

We’ve already downloaded and installed our copy, so read below if you want to be one of the first to learn about the new features. Or better yet, if you or your organization are current with the ENVI maintenance program, you too can download the new version and start using it yourself today.

Below are a few highlights of the new features in ENVI 5.1:

  • Region of Interest (ROI) Tool. Previously accessible only in ENVI Classic, the ROI tool is now available in the new interface, where users can define and manage ROIs. This includes the ability to manually draw ROIs, generate ROIs from band thresholds, grow existing ROIs, and create multi-part ROIs. Additionally, ROIs are now stored as georeferenced features, which means they can be easily ported between images.
  • Seamless Mosaic Workflow. The Georeferenced Mosaicking tool has been replaced with the new Seamless Mosaic Workflow. This tool allows users to create high quality seamless mosaics by combining multiple georeferenced scenes. Included is the ability to create and edit seamlines, perform edge feathering and color correction, and export finished mosaics to ENVI or TIFF formats. Also included are tutorials and tutorial data for learning the simple and advanced features of this workflow.
  • Spectral Data. Both the Spectral Profile and Spectral Library viewers include improvements for visualizing and analyzing spectral data. The software also includes updated versions of four key spectral libraries: ASTER Spectral Library Version 2, U.S. Geological Survey Digital Spectral Library 06, Johns Hopkins University Spectral Library, and the NASA Jet Propulsion Laboratory Spectral Library.
  • Additional Data Types. ENVI 5.1 can now open generic HDF5 files, which includes data distributed from sensors like NPP VIIRS, SSOT, ResourceSat-2, and HICO. Other newly supported data types and file formats include ECRG, GeoEye-1 in DigitalGlobe format, Göktürk-2, KOMPSAT-3, NigeriaSat-1 and -2, RASAT, and others.
  • Added Landsat 8 Support. Various improvements have been included for the handling of Landsat 8 data, such as automatically reading the thermal infrared coefficients from the associated metadata, including the Quality and Cirrus Cloud bands in the Data and Layer Managers, correcting reflectance gains and offsets for solar elevation, and updating FLAASH to process Landsat 8 imagery.
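As an illustration of the solar-elevation correction mentioned above, top-of-atmosphere reflectance is typically computed by applying the metadata gain and offset and then dividing by the sine of the sun elevation angle. A minimal NumPy sketch (the function and variable names are illustrative, not part of the ENVI API):

```python
import numpy as np

def toa_reflectance(dn, gain, offset, sun_elev_deg):
    """Scale digital numbers to TOA reflectance, corrected for solar elevation.

    dn           : array of raw digital numbers
    gain, offset : reflectance rescaling coefficients from the metadata
    sun_elev_deg : sun elevation angle in degrees (from the metadata)
    """
    rho_uncorrected = gain * np.asarray(dn, dtype=float) + offset
    return rho_uncorrected / np.sin(np.radians(sun_elev_deg))
```

With typical Landsat 8 rescaling values (gain 2e-5, offset -0.1), a DN of 10000 at a 30° sun elevation yields a corrected reflectance of 0.2.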

These and other welcome improvements continue to expand the capabilities of ENVI, and we’re excited to start working with the new features.

For more on ENVI:

Application Tips for ENVI 5 – Utilizing the new portal view for visualizing data layers

This is part of a series on tips for getting the most out of your geospatial applications. Check back regularly or follow HySpeed Computing to see the latest examples and demonstrations.

Objective: Demonstrate the use of ENVI’s new portal view for visualizing data in multiple layers.

Scenario: This tip utilizes a Landsat ETM+ scene from Kaneohe Bay, Oahu, Hawaii downloaded from USGS EarthExplorer. The example calculates the normalized difference water index (NDWI) proposed by McFeeters (1996, IJRS 17:1425-1432), displays the index results using the Raster Color Slice tool, and then utilizes the portal view to visually validate the capacity of NDWI to differentiate land from water.

ENVI Portal

The Tip: Below are steps to open the image in ENVI, calculate the NDWI index, display output using Raster Color Slice, and use the portal to visualize the NDWI output as a ‘sub-view’ within a standard RGB view:

  • Open the Landsat scene in ENVI. After un-compressing the download folder from USGS, simply use a standard method for opening files (e.g., File > Open…) to open the <*_MTL.txt> metadata file. This loads all of the Landsat bands into the Data Manager, and opens an RGB layer in the Layer Manager.
  • Calculate NDWI. Use Band Math (Toolbox > Band Ratio > Band Math) to implement the following equation (float(b2)-float(b4))/(float(b2)+float(b4)), where b2 is Band-2 (560 nm), b4 is Band-4 (835 nm), and the float() operation is used to transform integers to floating point values and avoid byte overflow.
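The Band Math expression above can be written equivalently in Python with NumPy (an illustrative sketch, not the ENVI tool itself; b2 and b4 stand in for the Band-2 and Band-4 arrays):

```python
import numpy as np

def ndwi(b2, b4):
    """McFeeters (1996) NDWI: (green - NIR) / (green + NIR).

    Casting to float first mirrors the float() calls in Band Math,
    avoiding integer overflow and truncation in the arithmetic.
    """
    green = np.asarray(b2, dtype=float)
    nir = np.asarray(b4, dtype=float)
    return (green - nir) / (green + nir)
```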


  • Examine the NDWI output. Use the Cursor Value tool (Display > Cursor Value…) to explore the resulting grayscale NDWI image; it becomes apparent that there is a threshold near zero, where values above the threshold are water and those below are land and cloud. While this output alone is sufficient for analysis, let’s use something more colorful to visualize it.
  • Display NDWI output using raster color slices. Start the Raster Color Slice tool (right click the NDWI layer name in the Layer Manager and select Raster Color Slices…), select the NDWI data in the Select Input File dialog, and then accept the default settings in the Edit Raster Color Slices dialog.
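Following the observation above that NDWI values above a threshold near zero are water, the land/water split can be sketched as a simple Boolean test (illustrative; the default threshold of 0.0 is the approximate value noted above):

```python
import numpy as np

def water_mask(ndwi_values, threshold=0.0):
    """Boolean mask: True where NDWI exceeds the threshold (water),
    False where it does not (land and cloud)."""
    return np.asarray(ndwi_values) > threshold
```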

Raster Color Slice

  • Examine the Raster Color Slice layer. The color slices reveal a visually clear distinction between water and land/cloud, where in this example warm colors (reds and yellows) indicate water and cool colors (blues and greens) indicate land/cloud.
  • Display output using a portal. First, make sure to put the layers in the appropriate order. Drag the RGB layer to the top of the display in the Layer Manager, and make sure the color slice layer is second. Then start the Portal tool (select the Portal icon on the toolbar, or Display > Portal). This will open a new smaller ‘sub-view’ that reveals the color slice NDWI layer within the larger view of the RGB layer. Alternatively, you could also use the portal to similarly view the grayscale NDWI layer. To do so, hover over the top of the portal to make the View Portal toolbar visible, right click on the toolbar, and select Load New Layer.

ENVI Portal

  • Explore data layers with the portal. The portal itself is interactive, which means it can be easily moved and resized to examine different portions of the image. In this example the portal can be moved around the image (left click within the portal and drag using your mouse) to explore how well the NDWI index works in different areas.
  • Other visualization options. In addition to the default portal display, the portal can also be animated to alternate between the different layers. The three options are Blend (transitions layer transparency), Flicker (toggles between layers), and Swipe (moves a vertical divider that separates the layers). These animations can be started as follows: hover over the top of the portal to make the View Portal toolbar visible, right click the toolbar, and select the desired animation option. The speed of animation can then be changed using the faster and slower buttons, and the animation can be paused or restarted using the pause and play buttons, located on the portal toolbar.

Application Tips for ENVI 5 – Opening and preparing HYPERION imagery for analysis

This is part of a series on tips for getting the most out of your geospatial applications. Check back regularly or follow HySpeed Computing to see the latest examples and demonstrations.

Objective: Unzip, open and save a HYPERION scene, such that the resulting imagery is in a standard ENVI format ready for further processing and analysis.

Scenario: This tip utilizes a HYPERION scene from Kaneohe Bay on the northeast shore of Oahu, Hawaii. This scene was downloaded as a compressed folder from USGS EarthExplorer, using the website’s search and browse capabilities to locate and identify a cloud-free scene for Kaneohe Bay.

EarthExplorer Kaneohe Bay

The Tip: Below are steps to transform the compressed folder into an ENVI image format:

  • File Formats. HYPERION data is currently available from the USGS in three different product formats: Standard Format L1GST (radiometric corrections and systematic geometric corrections; GeoTIFF), L1R Product (radiometric corrections and pseudo geometric correction using corner points only; HDF), and L1T Product (radiometric corrections and systematic geometric corrections with ground control points; GeoTIFF). Working with the non-geometrically corrected L1R imagery can be useful when implementing pre-processing steps such as destriping. More information on HYPERION and these formats can be found on the EO-1 websites, including the EO-1 Data Dictionary and the EO-1 User Guide.
  • Unzip. The first step is to unpack the compressed folder, which in many cases can be accomplished using the tools already available with your operating system. For example, Windows XP/Vista/7/8 allows users to view and copy contents from a .zip archive by double-clicking the folder to open it and then dragging selected files to a new location. Alternatively, if you do not already have such capability, or your file is in a different format (e.g., .tgz), then a good option is the free 7-Zip utility.
  • Open Standard Format L1GST. At first glance the files for this GeoTIFF format can appear a bit intimidating, since the image data is stored in 242 separate TIFF files. But the scene is actually very easy to open using the associated <*_MTL_L1GST.txt> metadata file included in the same directory. To open the entire scene, including all 242 bands, simply select a standard method for opening files (e.g., File > Open…), and then select the metadata file. This will initiate a dialog that says ‘Opening EO1 Hyperion’. Similar results can also be achieved by selecting File > Open As > EO1 > GeoTIFF, File > Open As > Generic Formats > TIFF/GeoTIFF, or the ‘Open’ button from the ENVI or Data Manager toolbars.
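The unpacking step above can also be scripted if you are processing many scenes. A minimal Python sketch using only the standard library, handling both the .zip and .tgz archive formats mentioned above (the file paths are hypothetical):

```python
import tarfile
import zipfile

def unpack(archive_path, dest_dir):
    """Extract a downloaded EO-1 archive (.zip or .tgz/.tar.gz) into dest_dir."""
    if archive_path.endswith(".zip"):
        with zipfile.ZipFile(archive_path) as zf:
            zf.extractall(dest_dir)
    else:  # .tgz / .tar.gz
        with tarfile.open(archive_path, "r:*") as tf:
            tf.extractall(dest_dir)
```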

ENVI Kaneohe Bay

  • Open L1R Product. This product is in HDF format and can be readily opened using File > Open As > EO1 > HDF, and selecting the <*.L1R> file. Note that unlike the GeoTIFF products, this product cannot be opened using the standard File > Open or ‘Open’ buttons.
  • Open L1T Product. This product is in GeoTIFF format, so it can be opened in the same manner as the Standard Format L1GST, only in this case using the associated <*_MTL_L1T.txt> metadata file.
  • Save as ENVI Format. Now that you have the scene open in ENVI it is straightforward to save it to ENVI format. Simply select File > Save As…, select the relevant file in the ‘Select Input File’ dialog, and then select OK. A ‘Save File As Parameters’ dialog will then appear, in which you select ENVI as the Output Format, select an output filename, and then select OK.
  • Metadata. If you open the associated header file for one of these ENVI files, or examine the metadata using ENVI’s Metadata Viewer, you will see that it contains the wavelength, FWHM, radiance gains and offsets, and solar irradiance defined for each band. Among other uses, these values allow you to calibrate the data to radiance or top-of-atmosphere reflectance using ENVI’s Radiometric Calibration tool.
  • ENVI Header. When examining the file header, you will also note that the resulting files still retain the full original complement of 242 bands; however, only 198 of these bands have been calibrated. A listing of the HYPERION spectral information can be found on the EO-1 website. When the bands are ordered sequentially for each detector, the calibrated bands are 8-57 in the VNIR and 77-224 in the SWIR. However, in some situations the resulting header information does not correctly list the proper band designations and other associated spectral information, which can lead to erroneous results. For example, the band located at 851.92 nm is an uncalibrated band and should consist entirely of 1s or 0s, so if your scene indicates otherwise you may need to manually edit your header file (the <*.hdr> file is a text file that can be opened and edited in a standard text editor). An example corrected header for the scene from Kaneohe Bay is included here (EO1H0640452010029110K2_L1GST.hdr) as a guideline from which to copy and paste. In all likelihood your scene’s header information for wavelength, fwhm, bbl, data gain values and data offset values should appear similar.
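The band designations quoted above (calibrated bands 8-57 in the VNIR and 77-224 in the SWIR, out of 242 total) can also be turned into a bad-bands list (bbl) programmatically when editing the header. A Python sketch (illustrative; band numbers are 1-based, as in the header):

```python
def hyperion_bbl(n_bands=242):
    """Return a bad-bands list for the 242 Hyperion bands:
    1 = calibrated (VNIR 8-57, SWIR 77-224), 0 = uncalibrated."""
    calibrated = set(range(8, 58)) | set(range(77, 225))
    return [1 if band in calibrated else 0 for band in range(1, n_bands + 1)]
```

Summing the resulting list confirms the 198 calibrated bands noted above.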

Customizing ENVI – IDL resources for building your own geospatial applications

This post is part of a series on getting the most out of your geospatial applications. Check back regularly to see the latest tips and examples.

IDL Heron Reef

Do you currently build your own geospatial applications using IDL? Are you interested in streamlining your image processing workflow by automating your analysis? Or perhaps you’re just getting started in remote sensing and want to learn how to use IDL to build custom applications? Below is an introductory list of online resources to assist with your IDL programming needs.

IDL – the Interactive Data Language – a “scientific programming language” offered by Exelis Visual Information Solutions (Exelis VIS) – is commonly used for data analysis in the fields of astronomy, medical imaging and remote sensing. IDL is an interpreted array-based language that includes a large user-accessible library of numerical, statistical and other data analysis routines, as well as robust functionality for the graphic display and visualization of your data.

IDL is also the language used to develop ENVI, and the foundation on which users extend ENVI and the recently released ENVI Services Engine. This means that by mastering the fundamentals of IDL you can develop custom ENVI routines that transform your image processing algorithms into your own geospatial applications. These apps can then be easily embedded in the ENVI interface such that you create your own user-generated toolbox options.

Good places to start for programming IDL and customizing your own ENVI applications are the many different manuals, tutorials and code examples distributed with the software and/or provided on the Exelis VIS website. For example, within the contents of ENVI Help is an entire section devoted to “Programming with ENVI”, which includes instructions on transitioning to the new ENVI API, best practices for creating your own toolbox extensions, and a list of available ENVI-specific programming routines. The Exelis VIS website also includes Forums, Help Articles, a user-contributed Code Library, and the ability to register for ENVI and IDL training courses.

Beyond these core Exelis VIS resources, an extensive IDL user community has also developed, providing a plethora of programming tips and a diverse array of code examples:

  • There are large IDL libraries, such as the Ocean Color IDL Library from NASA’s Ocean Color Discipline Processing Group, the IDL Astronomy User’s Library from the Astrophysics Science Division at NASA’s Goddard Space Flight Center, and the Markwardt IDL Library from Craig Markwardt.
  • There are blogs devoted to IDL, such as those from Michael Galloy and Mort Canty, as well as websites from David Fanning and Ken Bowman.
  • There are also a number of books on IDL programming, including “Image Analysis, Classification, and Change Detection in Remote Sensing: With Algorithms for ENVI/IDL, Second Edition” by Morton Canty (with an upcoming third edition to be released in 2014), “Modern IDL: A Guide to IDL Programming” by Michael Galloy, and “Practical IDL Programming” by Liam Gumley, among others.

These are but a few of the many resources available at your disposal. A quick internet search will reveal many others. And don’t forget that looking at code written by others is a great way to learn, even if it’s not directly applicable to your application. So be sure to take advantage of what’s out there and start transforming your innovative algorithms and processing routines into your own custom apps.

Application Tips for ENVI 5.0 – Building a mask from a classified image

This post is part of a series on getting the most out of your geospatial applications. Check back regularly to see the latest tips and examples.

Objective: Utilize one or more classes from a classified image to generate a mask in ENVI 5.0, such that the mask can then be utilized to exclude selected areas from further analysis or display.

Scenario: For this tip we utilize a coral reef scene where the goal is to mask clouds from areas containing water and submerged habitat. In this example, as an approximation, an unsupervised classification has been used to segment the image into 20 classes, where 9 of the 20 classes have been visually identified as containing clouds.


The Tip: Below are steps that can be used to transform this classification output into a cloud mask:

  • From the ‘Toolbox’, select Raster Management > Masking > Build Mask [In ENVI Classic this same command is found under Basic Tools > Masking > Build Mask]
  • In the ‘Build Mask Input File’ dialog, select the classification output file from which you will be building the mask, and then select OK
  • In the ‘Mask Definition’ dialog, select Options > Import Data Range…
  • In the ‘Select Input for Mask Data Range’ dialog, select the same classification output file that was selected as the basis for the mask, and then select OK
  • In the ‘Input for Data Range Mask’ dialog, enter the minimum and maximum values corresponding to the relevant classes that contain clouds, and then select OK.


  • In our example the cloud classes all happen to be contiguous; however, in many situations this is not the case. When this occurs, simply repeat the process for selecting the minimum and maximum values until all classes have been selected. To do so, from the ‘Mask Definition’ dialog, select Options > Import Data Range…, enter the appropriate minimum and maximum values in the ‘Input for Data Range Mask’ dialog, and then select OK. A theoretical example is shown below.


  • Note that masks can also be defined using a number of other sources, such as ROIs, shapefiles, annotations and other data ranges.
  • Once all of the classes have been selected, make sure the clouds are set to ‘off’ (i.e., cloud values will be zero in the mask). In the ‘Mask Definition’ dialog, select Options > Selected Areas “Off”.
  • As the final step, in the ‘Mask Definition’ dialog, enter an output filename (or select the option to output result to memory) and then select Apply.
  • The result is a mask that can be used to remove clouds from the image and/or exclude them from further analysis.
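The workflow above, importing one or more class-value ranges and setting the selected areas ‘off’, amounts to the following logic (a NumPy sketch with hypothetical class numbers, not the ENVI tool itself):

```python
import numpy as np

def build_mask(class_image, off_ranges):
    """Build a 0/1 mask from a classified image.

    class_image : array of class labels
    off_ranges  : list of (min, max) class-value ranges to set 'off',
                  e.g. the classes identified as containing clouds
    Pixels falling in any 'off' range get 0; all others get 1.
    """
    classes = np.asarray(class_image)
    mask = np.ones(classes.shape, dtype=np.uint8)
    for lo, hi in off_ranges:
        mask[(classes >= lo) & (classes <= hi)] = 0
    return mask
```

Multiple non-contiguous ranges are handled by the loop, mirroring the repeated Import Data Range… step described above.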


As a parting thought, it can be observed in this example that the final cloud mask is incomplete, in that it does not include cloud shadows and fails to completely encompass all the cloud areas. This is not a result of the mask process, but rather a function of using an abbreviated unsupervised classification to identify the cloud areas, which was done only for the purposes of this example. If a more complete cloud mask is desired, a greater number of classes can be used in the unsupervised classification to further segment the image, or other algorithms can be used that are specifically designed for the detection of clouds and cloud shadows.