Application Tips for ENVI 5 – Exporting a Geospatial PDF

This is part of a series on tips for getting the most out of your geospatial applications. Check back regularly or follow HySpeed Computing to see the latest examples and demonstrations.

Objective: Utilize ENVI’s print and export options to generate a Geospatial PDF.

Geospatial PDF

Scenario: This tip utilizes a Landsat-8 scene of California’s Central Valley to demonstrate the steps for creating a Geospatial PDF using two different options: (1) Print Layout; and (2) Chip View to Geospatial PDF.

Geospatial PDFs allow you to easily share your geospatial output in standard PDF format while still enabling users to measure distances and identify locations in geographic coordinates, all without the need for any specialized GIS or remote sensing software.

Option 1 – Print Layout

  • The Print Layout option requires ENVI 5.0 or later and works only on Windows platforms. It also requires that you launch ENVI in 32-bit mode and have a licensed ArcGIS application on the same system.
  • If you’re looking for the ENVI 32-bit mode (as opposed to the now standard 64-bit mode), it is typically found in either the ‘32-bit’ or ‘ENVI for ArcGIS’ subdirectory of the ENVI directory located under Start > All Programs.
  • Now, using your data of choice, prepare the active View in ENVI as you would like it to appear in the Geospatial PDF. In our example, we simply use a color infrared image of our example Landsat-8 scene. However, if desired, your output can include multiple layers and even annotations.
  • Once you are satisfied with the View, go to File > Print…, and this will launch the Print Layout viewer where you can make further adjustments to your output before exporting it to Geospatial PDF.
  • Note: If the File > Print… option doesn’t produce the desired output in Print Layout (which doesn’t directly support all file types, georeferencing formats or annotation styles), you can use File > Chip View To > Print… instead. The Chip View To option creates a screen capture of whatever is in the active View, so it can accommodate anything you can display in a View, with the tradeoff of slightly reduced functionality in the Print Layout format options.
  • In our example, the File > Print… option didn’t support the Landsat-8 scene when opened using the ‘MTL.txt’ file. Rather than switching to the Chip View To option, we worked around this by resaving the scene in ENVI format, which retains the full functionality of Print Layout.
  • Once in the Print Layout viewer, you can apply different ArcMap templates, adjust the zoom level and location of the image, and edit features in the template. Here we made a few edits to the standard LetterPortrait.mxt template as the basis for our output.

ENVI Print Layout

  • To output your results to a Geospatial PDF, select the Export button at the top of the Print Layout viewer, enter a filename, and then select Save.
  • Note that Print Layout can also be used to Print your output using the Print button.
  • You have now created a Geospatial PDF of your work (see our example: CA_Central_Valley_1.pdf). Also, see below for tips on viewing and interacting with this file in Adobe Reader and Adobe Acrobat.

Option 2 – Chip View to Geospatial PDF

  • The Chip View to Geospatial PDF option requires ENVI 5.2 or later, but does not require ArcGIS.
  • This option directly prints whatever is in the active View to a Geospatial PDF, so it has fewer options than the Print Layout option, but can still be very useful for those without an ArcGIS license.
  • As above, prepare the active View in ENVI as you would like it to appear in the Geospatial PDF, including multiple layers and annotations as desired. Here we again simply use a color infrared image of our example Landsat-8 scene, but this time include text annotations and a north arrow added directly to the View.
  • Once you are satisfied with the View, go to File > Chip View To > Geospatial PDF…, enter a filename, and then select OK.
  • Note that the Chip View To option can also be used to export your work to a File, PowerPoint or Google Earth.
  • Congratulations again. You have now created another Geospatial PDF of your work (see our example: CA_Central_Valley_2.pdf).

CA Central Valley 2

Viewing Output in Adobe

  • As mentioned, Geospatial PDFs allow you to measure distances and identify locations in geographic coordinates using a standard PDF format. Geospatial PDFs can be viewed in either Adobe Acrobat or Reader (v9 or later).
  • In Adobe Reader, the geospatial tools can be found under Edit > Analysis in the main menu bar. In Adobe Acrobat, the geospatial tools can be enabled by selecting View > Tools > Analyze in the main menu bar, and then accessed in the Tools pane under Analyze.
  • To measure distance, area and perimeter, select the Measuring Tool.
  • To see the cursor location in geographic coordinates, select the Geospatial Location Tool.
  • And to find a specific location, select the Geospatial Location Tool, right click on the image, select the Find a Location tool, and then enter the desired coordinates.

So now that you’re familiar with the basics of creating Geospatial PDFs, be sure to consider using them in your next project. They’re definitely a powerful way to share both images and derived output products with your colleagues and customers.


Application Tips for ENVI 5 – Image classification of drone video frames

This is part of a series on tips for getting the most out of your geospatial applications. Check back regularly or follow HySpeed Computing to see the latest examples and demonstrations.

Objective: Utilize ENVI’s new video support (introduced in ENVI 5.2) to extract an individual frame from HD video and then perform supervised classification on the resulting image file.

ENVI drone video analysis

Scenario: This tip demonstrates the steps used for implementing the ENVI Classification Workflow using an HD video frame extracted from a drone overflight of a banana plantation in Costa Rica (video courtesy Elevated Horizons). In this example, image classification is used to delineate the total number of observable banana bunches in the video frame. In banana cultivation, bunches are often covered with blue plastic sleeves for protection from insects and disease and to increase yield and quality. Here the blue sleeves provide a unique spectral signature (color) for use in image classification, and hence a foundation for estimating total crop yield when the analysis is extrapolated or applied to the entire plantation.

The Tip: Below are the steps used to extract the video frame and implement the Classification Workflow in ENVI 5.2:

  • There are three options for opening and viewing video in ENVI: (i) drag-and-drop a video into the ENVI display; (ii) from the main toolbar select File > Open to select a video; and (iii) from the main toolbar select Display > Full Motion Video, and then use the Open button at the top of the video player to select a video.

ENVI video player

  • Once opened, the video player can be used to play back video using standard options for play, pause, and stepping forward and backward. There are also options to add and save bookmarks, adjust the brightness and frame rate, and export individual frames, or even the entire video, for analysis in ENVI.
  • Here we have selected to export a single frame using the “Export Frame to ENVI” button located at the top of the video player.
  • The selected video frame is then automatically exported to the Layer Manager and added to the currently active View. Note that the new file is only temporary, so be sure to save this file to a desired location and filename if you wish to retain the file for future analysis.
  • We next launch the Classification Workflow by selecting Toolbox > Classification > Classification Workflow.
  • For guidance on implementing the Classification Workflow, please visit our earlier post – Implementing the Classification Workflow – to see a detailed example using Landsat data of Lake Tahoe, or refer to the ENVI documentation for more information.
  • In the current Classification example, we selected to Use Training Data (supervised classification), delineate four different classes (banana bunch, banana plant, bare ground, understory vegetation), run the Mahalanobis Distance supervised classification algorithm, and not implement any post-classification smoothing or aggregation.

ENVI drone video classification workflow

  • Classification output includes the classified raster image (ENVI format), corresponding vector file (shapefile), and optionally the classification statistics (text file). Shown here is the classification vector output layered on top of the classification image, where blue represents the observable banana bunches in this video frame.

ENVI drone video classification output

With that analysis accomplished, there are a number of options within ENVI for extending it to other frames, ranging from simply repeating the same analysis manually on individual frames to creating a custom IDL application that uses ENVI routines to automatically classify every frame in the video. However, we leave this for a future post.

In the meantime, we can see that the ability to export frames to ENVI for further analysis opens up a wealth of image analysis options. We’re excited to explore the possibilities.

Application Tips for ENVI – Implementing the Classification Workflow

This is part of a series on tips for getting the most out of your geospatial applications. Check back regularly or follow HySpeed Computing to see the latest examples and demonstrations.

Objective: Utilize ENVI’s automated step-by-step Classification Workflow to perform a supervised classification.

Scenario: This tip demonstrates the steps used for supervised classification of an index stack created from a Landsat 8 scene of Lake Tahoe, CA USA. The index stack combines three different spectral indices into a single multi-layer image. The indices include the Normalized Difference Vegetation Index (NDVI), Normalized Difference Water Index (NDWI), and Normalized Difference Snow Index (NDSI).

Here we are using the index stack as a form of data reduction and normalization; however, in most applications users will utilize most or all of the individual spectral bands to maximize the spectral information used in the classification analysis.

Lake Tahoe Landsat image classification

Lake Tahoe, CA: Landsat 8 image (upper left); index stack (lower left); supervised classification output (right).

 

The Tip: Below are the steps used to implement the Classification Workflow in ENVI:

  • After opening the selected image in ENVI, launch the workflow from the toolbox by selecting: Toolbox > Classification > Classification Workflow
  • The first step of the workflow allows you to select the input image, perform any spatial and spectral subsetting, and also select a mask, if applicable.

ENVI Classification Workflow file selection

  • The next step provides the option to specify whether the classification is to be performed using No Training Data (unsupervised classification) or to Use Training Data (supervised classification). In our example we have selected to Use Training Data.
  • For supervised classification, the user is next given a chance to interactively define or upload the training data. Had we selected unsupervised classification, then our next step would have been to select parameters for implementing the ISODATA classification algorithm.
  • To define the training data, users have the option of uploading a previously defined training dataset, or alternatively to use the ENVI annotation tools to interactively select polygons, ellipses, rectangles or points to define training areas for each desired class.
  • There is also an option at this stage in the workflow to specify the supervised classification scheme (Maximum Likelihood, Minimum Distance, Mahalanobis Distance, or Spectral Angle Mapper) and any of its associated classification parameters. In our example we use the Maximum Likelihood classification scheme with its default parameters.

ENVI Classification Workflow training data

  • Note that you can select the Preview button at the bottom left of the workflow window to see the classification results dynamically updated as you proceed through the training data definition process. However, there are limits on how big an area can be previewed. If the area is too large then the preview will appear black by default. If this occurs, then simply increase the zoom and/or reduce the size of the preview window.
  • It is also important to remember to save your training data once complete, so that you can later replicate the same classification process or utilize the data in another image.
  • In our example we have defined five classes (water, snow/ice, vegetation, barren, and cloud), each represented using five different training polygons.
  • Once satisfied with the training data, selecting Next at the bottom of the window will initiate the classification process.
  • Once classification is complete, if you’re not happy with the results or want to change the training data or input parameters, then there’s no cause for concern. You can easily move forward and backward throughout the classification process using the Back and Next buttons at the bottom of the workflow window, allowing you to check your results and/or go back and change settings.
  • Once the classification is complete the output will be displayed in ENVI, and the user is then given additional options to refine the output using smoothing (removes speckling) and aggregation (removes small regions). We have selected to do both for our example.
  • The final step after smoothing and aggregation is to save the results, which includes options for saving the classification image, classification vectors, and classification statistics.

ENVI Classification Workflow output

We have demonstrated just one of many different classification options included in the Classification Workflow. To learn more about the various different algorithms and settings for supervised and unsupervised classification techniques, just read through the ENVI help documentation and/or follow the classification tutorial included with ENVI.

Working with Spectral Indices using Landsat – Building an ‘index stack’

As part of our ongoing series using spectral indices to automatically delineate landscape features such as clouds, snow/ice, water and vegetation in Landsat imagery, here we extend this analysis to create an ‘index stack’ using a set of three indices.

Specifically, we utilize a Landsat 8 image of Lake Tahoe to generate output layers for the Normalized Difference Vegetation Index (NDVI), Normalized Difference Water Index (NDWI), and Normalized Difference Snow Index (NDSI). We then stack these output layers into a single image and display the resulting ‘index stack’ as an RGB image.

Lake Tahoe index stack

The specific steps and equations utilized for calculating the three indices are outlined in our earlier posts in this series: NDVI, NDWI, and NDSI. These indices, along with many other spectral indices, can also be calculated using the new Spectral Index tool included in ENVI 5.2; however, note that the NDWI calculation in this tool is a different index than the one presented here.

Once the indices have been calculated, the next step is to stack the output layers together into a single image. In ENVI this can be accomplished using the Layer Stacking tool found under Raster Management in the ENVI Toolbox.

The resulting image can then be displayed as a standard RGB, where in our example we have stacked the indices as follows: R – NDSI, G – NDVI, B – NDWI.
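For readers working outside of ENVI, the stacking and display steps can be sketched in NumPy. This is an illustrative helper, not an ENVI routine; it assumes the three index arrays are already computed and lie in the range [-1, 1]:

```python
import numpy as np

def index_stack_rgb(ndsi, ndvi, ndwi):
    """Stack three index layers (R = NDSI, G = NDVI, B = NDWI) into an
    8-bit RGB composite, linearly rescaling [-1, 1] to [0, 255]."""
    rgb = np.dstack([ndsi, ndvi, ndwi])          # shape: (rows, cols, 3)
    rgb = np.clip((rgb + 1.0) / 2.0, 0.0, 1.0)   # rescale to [0, 1]
    return (rgb * 255).astype(np.uint8)

# Small synthetic index layers for demonstration
ndsi = np.array([[0.8, -0.2], [0.1, 0.9]])
ndvi = np.array([[0.1, 0.7], [0.3, -0.1]])
ndwi = np.array([[-0.5, 0.0], [0.6, -0.9]])
rgb = index_stack_rgb(ndsi, ndvi, ndwi)
print(rgb.shape)  # (2, 2, 3)
```

The resulting array can be handed directly to any image display routine that expects an 8-bit RGB raster.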

Lake Tahoe index compilation

It becomes readily apparent in this image stack that particular colors can be equated to different landscape features. For example, vegetation displays here as green, water as purple, snow/ice as magenta, and soil, rocks, and barren land as blue. Clouds also appear as a mixture of purple and magenta, so in this case these indices alone are not sufficient for differentiating clouds from water and snow/ice. Hence there is a need for including additional indices when developing a robust automated assessment procedure.

The index stack not only provides rapid visualization of different landscape features, but also delivers the numerical foundation for quantitative analysis and image classification using the index values. Considering the many different indices that are available beyond those presented here, the possibilities for expanding and modifying this type of analysis are virtually limitless.

So while these types of indices may be conceptually simple, together they can be powerful tools for image analysis.

Enhancing the Landsat 8 Quality Assessment band – Detecting snow/ice using NDSI

This is the third installment in a series on developing a set of indices to automatically delineate features such as clouds, snow/ice, water and vegetation in Landsat imagery.

In this series of investigations, the challenge we have given ourselves is to utilize relatively simple indices and thresholds to refine some or all of the existing Landsat 8 quality assessment procedure, and wherever possible to also maintain backward compatibility with previous Landsat missions.

The two previous articles focused on Differentiating water using NDWI and Using NDVI to delineate vegetation.

Landsat 8 Lake Tahoe Snow/Ice

Here we explore the Normalized Difference Snow Index (NDSI) (Dozier 1989, Hall et al. 1995; Hall and Riggs 2014) to demonstrate how this index can be utilized to delineate the presence of snow/ice.

Note that NDSI is already included in the Landsat 8 quality assessment procedure; however, as currently implemented, NDSI is used primarily as a determining parameter in the decision trees for the Cloud Cover Assessment algorithms.

Additionally, as discussed in the MODIS algorithm documentation (Hall et al. 2001), NDSI has some acknowledged limits, in that snow can sometimes be confused with water, and that lower NDSI thresholds are occasionally needed to properly identify snow covered forests. This suggests that NDSI performance can be improved through integration with other assessment indices.

For now we consider NDSI on its own, but with plans to ultimately integrate this and other indices into a rule-based decision tree for generating a cohesive overall quality assessment.

NDSI is calculated using the following general equation: NDSI = (Green – SWIR)/(Green + SWIR). To calculate this index for our example Landsat 8 images in ENVI, we used Band Math (Toolbox > Band Ratio > Band Math) to implement the following equation (float(b3)-float(b6))/(float(b3)+float(b6)), where b3 is Band-3 (Green), b6 is Band-6 (SWIR), and the float() operation is used to transform integers to floating point values and avoid byte overflow.

We visually inspected the output to develop thresholds based on observed snow/ice characteristics in our test images. The analysis indicates the following NDSI snow/ice thresholds: low confidence (NDSI ≥ 0.4), medium confidence (NDSI ≥ 0.5), and high confidence (NDSI ≥ 0.6).
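The same calculation can be sketched in NumPy, where the explicit cast to float plays the same role as the float() wrapper in the Band Math expression (the DN values below are synthetic, for illustration only):

```python
import numpy as np

def ndsi(green, swir):
    """NDSI = (Green - SWIR) / (Green + SWIR).
    Bands arrive as integer DNs, so cast to float first to avoid
    integer overflow, mirroring float() in ENVI Band Math."""
    g = green.astype(np.float64)
    s = swir.astype(np.float64)
    with np.errstate(invalid="ignore"):  # 0/0 yields NaN silently
        return (g - s) / (g + s)

# Synthetic DN arrays and the confidence thresholds developed above
green = np.array([[9000, 3000], [5000, 0]], dtype=np.uint16)
swir  = np.array([[2000, 2800], [4000, 0]], dtype=np.uint16)
index = ndsi(green, swir)
high   = index >= 0.6  # high confidence snow/ice
medium = index >= 0.5
low    = index >= 0.4
```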

Example 1: Lake Tahoe

This example illustrates output from a Landsat 8 scene of Lake Tahoe acquired on April 12, 2014 (LC80430332014102LGN00). For this image, both the NDSI output and QA assessment successfully differentiate snow/ice from other image features. The only significant difference, as can be observed here in the medium confidence output, is that the QA assessment identifies two lakes to the east and southeast of Lake Tahoe that are not included in the NDSI output. Without knowledge of ground conditions at the time of image acquisition, however, it is not feasible to assess which output is correct, i.e., whether or not these lakes were ice covered. Otherwise, the snow/ice output is in agreement for this image.

Landsat 8 Lake Tahoe NDSI Snow/Ice

Example 2: Cape Canaveral

This example illustrates output from a Landsat 8 scene of Cape Canaveral acquired on October 21, 2013 (LC80160402013294LGN00). Given its location, there is not expected to be any snow/ice identified in the image, as is the case for the high confidence NDSI output. However, for the medium and low confidence NDSI output there is some confusion with clouds, and for the QA assessment there is confusion with clouds and sand. This suggests a need to either incorporate other indices to refine the snow/ice output and/or include some geographic awareness in the analysis to eliminate snow/ice in regions where it is not expected to occur.

Landsat 8 Cape Canaveral NDSI Snow/Ice

Stay tuned for future posts on other Landsat 8 assessment options, as well as a discussion on how to combine the various indices into a single integrated quality assessment algorithm.

In the meantime, we welcome your feedback on how these indices perform on your own images.

– –

Dozier, J. (1989). Spectral signature of alpine snow cover from the Landsat Thematic Mapper. Remote Sensing of Environment, 28, p. 9-22.

Hall, D. K., Riggs, G. A., and Salomonson, V. V. (1995). Development of methods for mapping global snow cover using Moderate Resolution Imaging Spectroradiometer (MODIS) data. Remote Sensing of Environment, 54(2), p. 127-140.

Hall, D. K., Riggs, G. A., Salomonson, V. V., Barton, J. S., Casey, K., Chien, J. Y. L., DiGirolamo, N. E., Klein, A. G., Powell, H. W., and Tait, A. B. (2001). Algorithm theoretical basis document (ATBD) for the MODIS snow and sea ice-mapping algorithms. NASA GSFC. 45 pp.

Hall, D. K., and Riggs, G. A. (2014). Normalized-Difference Snow Index (NDSI). In Encyclopedia of Snow, Ice and Glaciers, Eds. V. P. Singh, P. Singh, and U. K. Haritashya. Springer. p. 779-780.

Enhancing the Landsat 8 Quality Assessment band – Using NDVI to delineate vegetation

This is the second installment in a series on developing alternative indices to automatically delineate features such as clouds, snow/ice, water and vegetation in Landsat imagery.

The previous article focused on utilizing the Normalized Difference Water Index to differentiate water from non-water (see Differentiating water using NDWI).

In this series of investigations, the challenge we have given ourselves is to utilize relatively simple indices and thresholds to refine some or all of the existing Landsat 8 quality assessment procedure, and wherever possible to also maintain backward compatibility with previous Landsat missions.

Landsat8 Lake Tahoe Vegetation

In this article we explore one of the most commonly used vegetation indices, the Normalized Difference Vegetation Index (NDVI) (Kriegler et al. 1969, Rouse et al. 1973, Tucker 1979), to see how it can be utilized to delineate the presence of vegetation. Since the Landsat 8 quality assessment band currently does not include output for vegetation, NDVI seems like a logical foundation for performing this assessment.

NDVI is typically used to indicate the amount, or relative density, of green vegetation present in an image; however, here we adapt this index to more simply indicate confidence levels with respect to the presence of vegetation.

To calculate NDVI in ENVI, you can either directly use the included NDVI tool (Toolbox > Spectral > Vegetation > NDVI) or calculate NDVI yourself using Band Math (Toolbox > Band Ratio > Band Math). If using Band Math, then implement the following equation (float(b5)-float(b4))/(float(b5)+float(b4)), where b4 is Band-4 (Red), b5 is Band-5 (NIR), and the float() operation is used to transform integers to floating point values and avoid byte overflow.

We visually inspected the output to develop thresholds based on observed vegetation characteristics in our test images. The analysis indicates the following NDVI vegetation thresholds: low confidence (NDVI ≥ 0.2), medium confidence (NDVI ≥ 0.3), and high confidence (NDVI ≥ 0.4).
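As an illustration (not an ENVI routine), the three thresholds can be applied in NumPy to produce a single confidence raster, checking the highest threshold first so each pixel receives the strongest level it satisfies:

```python
import numpy as np

def ndvi_confidence(ndvi):
    """Map NDVI values to vegetation-confidence levels:
    0 = none, 1 = low (>= 0.2), 2 = medium (>= 0.3), 3 = high (>= 0.4).
    np.select takes the first matching condition per pixel."""
    conditions = [ndvi >= 0.4, ndvi >= 0.3, ndvi >= 0.2]
    levels = [3, 2, 1]
    return np.select(conditions, levels, default=0)

ndvi = np.array([0.05, 0.25, 0.35, 0.72])
print(ndvi_confidence(ndvi))  # [0 1 2 3]
```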

Example 1: Lake Tahoe

This example illustrates output from a Landsat 8 scene of Lake Tahoe acquired on April 12, 2014 (LC80430332014102LGN00). The NDVI output for this image successfully differentiates vegetation from water, cloud, snow/ice and barren/rocky land. Note particularly how the irrigated agricultural fields to the east and southeast of Lake Tahoe are appropriately identified, and how the thresholds properly indicate increased vegetation trending westward of Lake Tahoe as one transitions downslope from the Sierra Nevada into the Central Valley of California.

Landsat8 Lake Tahoe NDVI Vegetation

Example 2: Cape Canaveral

This example illustrates output from a Landsat 8 scene of Cape Canaveral acquired on October 21, 2013 (LC80160402013294LGN00). As with the Lake Tahoe example, NDVI once again performs well at differentiating vegetation from water, cloud and barren land. Given the cloud extent and high prevalence of both small and large water bodies present in this image, NDVI demonstrates a robust capacity to effectively delineate vegetation. Such results are not unexpected given the general acceptance and applicability of this index in remote sensing science.

Landsat8 Cape Canaveral NDVI Vegetation

We’ll continue to explore other enhancements in future posts, and ultimately combine the various indices into a single integrated quality assessment algorithm.

In the meantime, we’re interested in hearing your experiences working with Landsat quality assessment and welcome your suggestions and ideas.

– –

Kriegler, F.J., W.A. Malila, R.F. Nalepka, and W. Richardson (1969). Preprocessing transformations and their effects on multispectral recognition. Proceedings of the Sixth International Symposium on Remote Sensing of Environment, p. 97-131.

Rouse, J. W., R. H. Haas, J. A. Schell, and D. W. Deering (1973). Monitoring vegetation systems in the Great Plains with ERTS, Third ERTS Symposium, NASA SP-351 I, p. 309-317.

Tucker, C. J. (1979). Red and photographic infrared linear combinations for monitoring vegetation. Remote Sensing of Environment, 8(2), p. 127-150.

Enhancing the Landsat 8 Quality Assessment band – Differentiating water using NDWI

(Update: 09-23-2014) Just added – see our related post on Using NDVI to delineate vegetation.

Are you working with Landsat 8 or other earlier Landsat data? Are you looking for solutions to automatically delineate features such as clouds, snow/ice, water and vegetation? Have you looked at the Landsat 8 Quality Assessment band, but find the indicators don’t meet all your needs?

If so, you’re not alone. This is a common need in most remote sensing applications.

After recently exploring the contents of the Quality Assessment (QA) band for examples from Lake Tahoe and Cape Canaveral (see Working with Landsat 8), it became readily apparent that there is room for improvement in the quality assessment indicators. So we set out to identify possible solutions to help enhance the output.

Landsat8 Lake Tahoe - Water

The challenge we gave ourselves was to utilize only relatively simple indices and thresholds to further refine some or all of the existing Landsat 8 quality assessment procedure, and wherever possible to also maintain backward compatibility with previous Landsat missions.

As a first step, let’s explore how the Normalized Difference Water Index (NDWI), as described by McFeeters (1996), can be utilized to differentiate water from non-water.

To calculate NDWI in ENVI, we used Band Math (Toolbox > Band Ratio > Band Math) to implement the following equation (float(b3)-float(b5))/(float(b3)+float(b5)), where b3 is Band-3 (Green), b5 is Band-5 (NIR), and the float() operation is used to transform integers to floating point values and avoid byte overflow.

The NDWI output was visually inspected to develop thresholds based on known image and landscape features. Additionally, as with the QA band, rather than identify a single absolute threshold, three threshold values were used to indicate low, medium and high confidence levels that water is present.

As a caveat at this stage, note that this analysis currently only incorporates two example test images, which is far from rigorous. Many more examples would need to be incorporated to perform thorough calibration and validation of the proposed index. It is also expected that developing a robust solution will entail integrating the different indices into a rule-based decision tree (e.g., if snow/ice or cloud, then not water).
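A minimal sketch of one such rule, assuming each index has already been reduced to an integer confidence raster (the function and input values here are hypothetical, purely to show the mechanics):

```python
import numpy as np

def refine_water(water_conf, cloud_conf, snow_conf):
    """Apply the 'if snow/ice or cloud, then not water' rule.
    Inputs are integer confidence rasters (0 = none, 1 = low,
    2 = medium, 3 = high). Where cloud or snow/ice is identified
    with high confidence, water confidence is reset to 0."""
    refined = water_conf.copy()
    refined[(cloud_conf == 3) | (snow_conf == 3)] = 0
    return refined

water = np.array([3, 3, 2, 1])
cloud = np.array([0, 3, 0, 0])
snow  = np.array([0, 0, 3, 0])
print(refine_water(water, cloud, snow))  # [3 0 0 1]
```

A full decision tree would chain several rules of this form, one per feature class.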

Results of the NDWI analysis for water indicate the following: low confidence (NDWI ≥ 0.0), medium confidence (NDWI ≥ 0.06), and high confidence (NDWI ≥ 0.09).

Example 1: Lake Tahoe

This example illustrates output for a subset Landsat 8 scene of Lake Tahoe acquired on April 12, 2014 (LC80430332014102LGN00). Here we see improvement over the QA band water index, which exhibits significant confusion with vegetation. The NDWI output performs very well at the high confidence level, but includes some confusion with snow/ice and cloud at the low and medium confidence levels. We expect much of this confusion can be resolved once a decision tree is incorporated into the analysis.

Landsat8 Lake Tahoe Quality Assessment - Water

Example 2: Cape Canaveral

This example illustrates output for a subset Landsat 8 scene of Cape Canaveral acquired on October 21, 2013 (LC80160402013294LGN00). As with the previous example, there is significant improvement over the existing QA band water index. There is again some confusion with cloud at the low and medium confidence levels, but strong performance at the high confidence level. As a result, this output also shows promise as the foundation for further improvements using a decision tree.

Landsat8 Cape Canaveral Quality Assessment - Water

We’ll continue to explore other enhancements in future posts. In the meantime, we’d love to hear your experiences working with Landsat quality assessment and welcome your suggestions and ideas.

– –

McFeeters, S. K. (1996). The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. International Journal of Remote Sensing, 17(7), 1425-1432.

Working with Landsat 8 – Using and interpreting the Quality Assessment (QA) band

So you’ve downloaded a Landsat 8 scene and are eager to begin your investigation. As you get started, let’s explore how the Quality Assessment band that is distributed with the data can be used to help improve your analysis.

Landsat8 Lake Tahoe

What is the QA band?

As summarized on the USGS Landsat 8 product information website: “Each pixel in the QA band contains a decimal value that represents bit-packed combinations [QA bits] of surface, atmosphere, and sensor conditions that can affect the overall usefulness of a given pixel.”

“Rigorous science applications seeking to optimize the value of pixels used in a study will find QA bits useful as a first level indicator of certain conditions. Otherwise, users are advised that this file contains information that can be easily misinterpreted and it is not recommended for general use.”

What are QA bits?

Rather than utilize multiple bands for indicating conditions such as water, clouds and snow, the QA band integrates this information into 16-bit data values referred to as QA bits. As a result, a significant amount of information is packed into a single band; however, this also means that certain steps are required to extract the multi-layered information content from the integrated QA bits.

“The pixel values in the QA file must be translated to 16-bit binary form to be used effectively. The gray shaded areas in the table below show the bits that are currently being populated in the Level 1 QA Band, and the conditions each describe. None of the currently populated bits are expected to exceed 80% accuracy in their reported assessment at this time.”

Landsat8 QA Bands

“For the single bits (0, 1, 2, and 3):

  • 0 = No, this condition does not exist
  • 1 = Yes, this condition exists.”

“The double bits (4-5, 6-7, 8-9, 10-11, 12-13, and 14-15) represent levels of confidence that a condition exists:

  • 00 = Algorithm did not determine the status of this condition
  • 01 = Algorithm has low confidence that this condition exists (0-33 percent confidence)
  • 10 = Algorithm has medium confidence that this condition exists (34-66 percent confidence)
  • 11 = Algorithm has high confidence that this condition exists (67-100 percent confidence).”
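To see what this bit-packing means in practice, here is a minimal Python sketch (not part of the USGS tooling) that pulls a single bit or a double-bit confidence field out of one 16-bit QA value, following the layout quoted above:

```python
def single_bit(qa_value: int, bit: int) -> int:
    """Return 0 or 1 for one of the single bits (0-3)."""
    return (qa_value >> bit) & 0b1

def double_bit(qa_value: int, low_bit: int) -> int:
    """Return 0-3 for a double-bit confidence field whose lower bit is
    low_bit (one of 4, 6, 8, 10, 12, 14)."""
    return (qa_value >> low_bit) & 0b11

# Example: a QA value with bit 0 set and the 14-15 field equal to 0b11
qa = 0b1100000000000001
print(single_bit(qa, 0))   # 1 -> this condition exists
print(double_bit(qa, 14))  # 3 -> high confidence (67-100 percent)
```

Which field corresponds to which condition (cloud, snow/ice, water, etc.) is defined in the QA bit table above, so always check the table before interpreting a value.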

How are QA bits calculated?

QA bit values are calculated at various stages during the radiometric and geometric correction process. An overview of the algorithms used for calculating QA bits is provided in the LDCM CAL/VAL Algorithm Description Document.

The single QA bits (0-3) are used to signify: missing data and pixels outside the extent of the image following geometric correction (designated fill); dropped lines (dropped frame); and pixels hidden from sensor view by the terrain (terrain occlusion).

The double QA bits (4-15) are calculated using the LDCM Cloud Cover Assessment (CCA) system, which consists of several intermediate CCA algorithms whose results are merged to create final values for each Landsat 8 scene. The algorithms utilize a series of spectral tests, and in one case a statistical decision tree model, to assess the presence of cloud, cirrus cloud, snow/ice, and water.

As the name implies, the heritage of the CCA system is based on cloud detection; hence algorithms are directed primarily at identifying clouds, with secondary attention to snow/ice and water. Keep this in mind when interpreting results, particularly with respect to water discrimination, which is reportedly poor in most cases.

How do I use QA bits?

While it is feasible to translate individual QA bits into their respective information values, or implement thresholds to extract specific values or ranges of values, this isn’t practical for accessing the full information content contained in the QA band.

Instead, try using the L-LDOPE Toolbelt, a no-cost tool available from the USGS Landsat 8 website that includes “functionality for computing histograms, creating masks, extracting statistics, reading metadata, reducing spatial resolution, band and spatial subsetting, and unpacking bit-packed values… the new tool [also] extracts bits from the OLI Quality Assessment (QA) band to allow easy identification and interpretation of pixel condition.”

Note that the L-LDOPE Toolbelt does not include a graphical user interface, but instead operates using command-line instructions. So be sure to download the user guide, which includes the specific directions for implementing the various executables.

L-LDOPE Toolbelt example

As an example, let’s walk through the steps needed to unpack the QA bits from a Landsat 8 image of Lake Tahoe using a Windows 7 x64 desktop system:

  • Unzip the L-LDOPE Toolbelt zip file and place the contents in the desired local directory.
  • Open the Windows Command Prompt (All Programs > Accessories > Command Prompt) and navigate to the respective ‘bin’ directory for your operating system (‘windows64bit_bin’ in our example).
  • For simplicity, copy the QA file (e.g., LC80430332014102LGN00_BQA.TIF) to the same ‘bin’ directory as identified in the previous step. For users familiar with command-line applications, the data can instead be left in a separate directory and the executable command adjusted accordingly.
  • Execute the unpacking application (unpack_oli_qa.exe) using the following command (typed entirely on one line):

Landsat8 Unpack QA

  • The above example extracts all the QA bits using the default confidence levels and places them in separate output files.
  • Refer to the user guide for instructions on how to change the defaults, extract only select QA bits, and/or combine output into a single file.
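If you prefer a scripting environment, the same kind of unpacking can also be applied to an entire QA array with numpy — a rough alternative sketch, not a replacement for the supported L-LDOPE executables. The file I/O is omitted here (the QA GeoTIFF would typically be read with GDAL or rasterio), and the bit position used (14) is purely illustrative; take the actual position for your condition of interest from the QA bit table above:

```python
import numpy as np

def confidence_field(qa: np.ndarray, low_bit: int) -> np.ndarray:
    """Extract one double-bit confidence field (values 0-3) per pixel
    from a 16-bit QA array."""
    return (qa.astype(np.uint16) >> low_bit) & 0b11

# A tiny stand-in for a QA array read from LC80430332014102LGN00_BQA.TIF
qa = np.array([[0b1100000000000000, 0],
               [0b0100000000000000, 0b0000000000010000]], dtype=np.uint16)

conf = confidence_field(qa, 14)   # per-pixel confidence, 0-3
high_conf_mask = conf == 0b11     # True where high confidence (67-100%)
```

A mask like `high_conf_mask` can then be used to exclude (or isolate) the flagged pixels in subsequent analysis.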

Example 1: Lake Tahoe

This example illustrates QA output for a subset Landsat 8 scene of Lake Tahoe acquired on April 12, 2014 (LC80430332014102LGN00). Note that snow/ice in the surrounding mountains is identified with reasonable accuracy, cloud discrimination is also reasonable but includes significant confusion with snow/ice, and water is poorly characterized, including many extraneous features beyond just water bodies.

Landsat8 Lake Tahoe QA

Example 2: Cape Canaveral

This example illustrates QA output for a subset Landsat 8 scene of Cape Canaveral acquired on October 21, 2013 (LC80160402013294LGN00). Here the cloud discrimination is reasonable but includes confusion with beach areas along the coastline, the snow/ice output interestingly misidentifies some cloud and beach areas, and water discrimination is again poorly defined.

Landsat8 Cape Canaveral QA

With these examples in mind, it is worth repeating: “Rigorous science applications seeking to optimize the value of pixels used in a study will find QA bits useful as a first level indicator of certain conditions. Otherwise, users are advised that this file contains information that can be easily misinterpreted and it is not recommended for general use.”

Be sure to keep this in mind when exploring the information contained in the QA band.

For more info on the L-LDOPE Toolbelt: https://landsat.usgs.gov/L-LDOPE_Toolbelt.php

For more info on Landsat 8: https://landsat.usgs.gov/landsat8.php

 

Application Tips for ENVI 5.x – Calculating vegetation indices for NDVI and beyond

This is part of a series on tips for getting the most out of your geospatial applications. Check back regularly or follow HySpeed Computing to see the latest examples and demonstrations.

Objective: Calculate a collection of vegetation indices for hyperspectral and multispectral imagery using ENVI’s Vegetation Index Calculator.

Scenario: In this tip, vegetation indices are calculated for two variants of AVIRIS data from Jasper Ridge, California: one version using the full range of 224 possible hyperspectral bands (400-2500 nm); and the other using a version that has been spectrally convolved to match 8 of the 11 possible multispectral bands of Landsat 8 OLI (i.e., all bands except the thermal and panchromatic).

The AVIRIS data (JasperRidge98av_flaash_refl; shown below) was obtained from the ENVI Classic Tutorial Data available from the Exelis website, and has already been corrected to surface reflectance using FLAASH.

Jasper Ridge, CA

Vegetation Indices: There are numerous vegetation indices included in ENVI, so in most cases there is already a vegetation tool available that meets your needs. These indices can be found in three main locations within the ENVI Toolbox: (1) Spectral > Vegetation; (2) SPEAR > SPEAR Vegetation Delineation; and (3) THOR > THOR Stressed Vegetation.

The core functionality for deriving vegetation properties in ENVI is the Vegetation Index Calculator (located in Toolbox > Spectral > Vegetation). This tool provides access to 27 different vegetation indices, and will conveniently pre-select the indices that can be calculated for a given input image dependent on the spectral characteristics of the data. Despite this bit of assistance, however, properly implementing and interpreting the various vegetation indices still requires thorough understanding of what is being calculated. To obtain this information, details and references for each index are provided in the ENVI help documentation.

  • Broadband Greenness [5 indices]: Normalized Difference Vegetation Index, Simple Ratio Index, Enhanced Vegetation Index, Atmospherically Resistant Vegetation Index, Sum Green Index.
  • Narrowband Greenness [7 indices]: Red Edge Normalized Difference Vegetation Index, Modified Red Edge Simple Ratio Index, Modified Red Edge Normalized Difference Vegetation Index, Vogelmann Red Edge Index 1, Vogelmann Red Edge Index 2, Vogelmann Red Edge Index 3, Red Edge Position Index.
  • Light Use Efficiency [3 indices]: Photochemical Reflectance Index, Structure Insensitive Pigment Index, Red Green Ratio Index.
  • Canopy Nitrogen [1 index]: Normalized Difference Nitrogen Index
  • Dry or Senescent Carbon [3 indices]: Normalized Difference Lignin Index, Cellulose Absorption Index, Plant Senescence Reflectance Index.
  • Leaf Pigment [4 indices]: Carotenoid Reflectance Index 1, Carotenoid Reflectance Index 2, Anthocyanin Reflectance Index 1, Anthocyanin Reflectance Index 2.
  • Canopy Water Content [4 indices]: Water Band Index, Normalized Difference Water Index, Moisture Stress Index, Normalized Difference Infrared Index.
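As a concrete illustration of what the calculator computes, the broadband greenness indices reduce to simple band arithmetic. Here is a minimal numpy sketch of the first two — NDVI and the Simple Ratio Index — assuming atmospherically corrected reflectance bands as input arrays (the ENVI tool handles the band selection for you):

```python
import numpy as np

def _safe_divide(num: np.ndarray, den: np.ndarray) -> np.ndarray:
    """Element-wise division, returning 0 where the denominator is 0."""
    out = np.zeros_like(den, dtype=np.float64)
    np.divide(num, den, out=out, where=den != 0)
    return out

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return _safe_divide(nir - red, nir + red)

def simple_ratio(nir, red):
    """Simple Ratio Index: NIR / Red."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return _safe_divide(nir, red)
```

The narrowband and more specialized indices follow the same pattern with different wavelengths and coefficients; consult the ENVI help documentation for each formula and its reference before rolling your own.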

There are also five additional vegetation tools included in Toolbox > Spectral > Vegetation. The Vegetation Suppression Tool essentially removes the spectral contributions of vegetation from the image. The NDVI tool simply provides direct access to the commonly used Normalized Difference Vegetation Index. And the three other tools consolidate select subsets of the above vegetation indices into specific application categories: Agricultural Stress Tool, Fire Fuel Tool, and Forest Health Tool.

Two additional vegetation tools are also available as part of the THOR and SPEAR toolboxes. The THOR Stressed Vegetation and the SPEAR Vegetation Delineation tools both provide workflow approaches to calculating vegetation indices, inclusive of options such as atmospheric correction, mask definition, and spatial filtering. The SPEAR Vegetation Delineation tool uses NDVI to assess the presence and relative vigor of vegetation, whereas the THOR Stressed Vegetation tool provides a step-by-step methodology for processing imagery using the same suite of vegetation indices as defined for the Spectral toolbox.

It is important to note that input images should be atmospherically corrected prior to running the vegetation tools, or, in the case of the SPEAR and THOR tools, atmospherically corrected as part of the image processing workflow.

The Tip: This example demonstrates the steps used for running ENVI’s Vegetation Index Calculator. Interested users are also encouraged to download the tutorial data from Exelis, or use their own data, and explore what the other vegetation tools have to offer.

  • As specified above, two sets of imagery are used in this example: one is the full AVIRIS hyperspectral dataset, and the other is a spectrally convolved Landsat 8 OLI multispectral dataset of the same image.
  • After opening the images in ENVI, the vegetation tool is started by selecting Spectral > Vegetation > Vegetation Index Calculator.
  • The opening dialog window is used to specify the Input File along with any desired Spatial Subset and/or Mask Band.
  • Next is the main dialog for selecting Vegetation Indices and specifying the Output Filename. There is also an option for Biophysical Cross Checking, which compares results from different indices and masks out pixels with conflicting data values. Using Biophysical Cross Checking is application dependent, but can be useful for removing anomalous pixels from your analysis.
  • As illustrated below, the general process for calculating Vegetation Indices is always the same for any given dataset; the only difference is the list of vegetation indices that are actually available for a particular set of bands. In our example, the full AVIRIS hyperspectral dataset allows for 25 different indices to be calculated, whereas the Landsat 8 OLI multispectral dataset allows only 6 indices.

Vegetation Index Calculator

  • Once you have selected the relevant vegetation indices for your application, simply select OK and the Vegetation Index Calculator will generate an output file with individual bands corresponding to each of the selected vegetation indices.

Shown below is the output data from our two images, along with example quicklooks demonstrating the variability in the various output indices. The reason for this variability is that each index derives different, but related, biophysical information. Thus, be sure to look at the definitions and references for each index to help guide interpretation of the output.

Vegetation Indices

Application Tips for ENVI 5.x – An IDL application for opening HDF5 formatted HICO scenes

This is part of a series on tips for getting the most out of your geospatial applications. Check back regularly or follow HySpeed Computing to see the latest examples and demonstrations.

Objective: Open a HICO dataset stored in HDF5 format using an IDL application prepared by the U.S. Naval Research Laboratory.

This is a supplement to an earlier post that similarly describes how to open HDF5 formatted HICO files using either the H5_Browser or new HDF5 Reader in ENVI.

HICO Montgomery Reef, Australia

Scenario: This tip demonstrates how to implement IDL code for opening a HDF5 HICO scene from Montgomery Reef, Australia into ENVI format. Subsequent steps are included for preparing the resulting data for further analysis.

The HICO dataset used in this example (H2012095004112.L1B_ISS) was downloaded from the NASA GSFC archive, which can be reached either through the HICO website at Oregon State University or the NASA GSFC Ocean Color website. Note that you can also apply to become a registered HICO Data User through the OSU website, and thereby obtain access to datasets already in ENVI format.

The IDL code used in this example is available from the NASA GSFC Ocean Color website under Documents > Software/Tools > IDL Library > hico. The three IDL files you need are: byte_ordering.pro, nrl_hico_h5_to_flat.pro and write_nrl_header.pro.

The same IDL code is also included here for your convenience:  nrl_hico_h5_to_flat,  byte_ordering  and  write_nrl_header (re-distributed here with permission; disclaimers included in the code). However, to use these files (which were renamed so they could be attached to the post), you will first need to change the file extensions from *.txt to *.pro.

Running this code requires only minor familiarity working with IDL and the IDL Workbench.

The Tip: Below are steps to open the HICO L1B radiance and navigation datasets in ENVI using the IDL code prepared by the Naval Research Laboratory:

  • Start by unpacking the compressed folder (e.g., H2012095004112.L1B_ISS.bz2). If other software isn’t readily available, a good option is to download 7-zip for free from http://www.7-zip.org/.
  • Rename the resulting HDF5 file with a *.h5 extension (e.g., H2012095004112.L1B_ISS.h5). This allows the HDF5 tools in the IDL application to recognize the appropriate format.
  • If you downloaded the IDL files from this post, rename them from *.txt to *.pro (e.g., nrl_hico_h5_to_flat.txt to nrl_hico_h5_to_flat.pro); otherwise, if you downloaded them from the NASA website they already have the correct naming convention.
  • Open the IDL files in the IDL Workbench. To do so, simply double-click the files in your file manager and the files should automatically open in IDL if it is installed on your machine. Alternatively, you can launch either ENVI+IDL or just IDL and then select File > Open in the IDL Workbench.
  • Compile each of the files in the following order: (i) nrl_hico_h5_to_flat.pro, (ii) byte_ordering.pro, and (iii) write_nrl_header.pro. In the IDL Workbench this can be achieved by clicking on the tab associated with a given file and then selecting the Compile button in the menu bar.
  • You will ultimately only run the code for nrl_hico_h5_to_flat.pro, but this application is dependent on the other files; hence the reason they also need to be compiled.
  • Run the code for nrl_hico_h5_to_flat.pro, which is done by clicking the tab for this file and then selecting the Run button in the menu bar.
  • You will then be prompted for an *.h5 input file (e.g., H2012095004112.L1B_ISS.h5), and a directory where you wish to write the output files.
  • There is no status bar associated with this operation; however, if you look closely at the IDL prompt in the IDL Console at the bottom of the Workbench you will note that it changes color while the process is running and returns to its normal color when the process is complete. In any event, the procedure is relatively quick and typically finishes in less than a minute.
  • Once complete, two sets of output files are created (data files + associated header files), one for the L1B radiance data and one for the navigation data.
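ENVI reads the flat binary output directly via the accompanying header files, but if you want to sanity-check the unpacked data outside ENVI, a small numpy sketch can read it back. Everything here — filename, dimensions, data type, and interleave — is a placeholder that must be taken from the actual *.hdr file:

```python
import numpy as np

def read_flat_cube(path, lines, samples, bands, dtype='<i2', interleave='bip'):
    """Read an ENVI-style flat binary file into a (lines, samples, bands)
    array. Dimensions, data type, and interleave come from the *.hdr file."""
    raw = np.fromfile(path, dtype=np.dtype(dtype))
    if interleave == 'bip':
        return raw.reshape(lines, samples, bands)
    if interleave == 'bil':
        return raw.reshape(lines, bands, samples).transpose(0, 2, 1)
    if interleave == 'bsq':
        return raw.reshape(bands, lines, samples).transpose(1, 2, 0)
    raise ValueError('unknown interleave: ' + interleave)
```

For example, `read_flat_cube('output_file', lines, samples, bands)` with the values listed in the header returns the radiance cube as a numpy array.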

Data Preparation: Below are the final steps needed to prepare the HICO data for further processing (repeated here in part from our previous post):

  • Open the L1B radiance and associated navigation data in ENVI. You will notice one side of the image exhibits a black stripe containing zero values.
  • As noted on the HICO website: “At some point during the transit and installation of HICO, the sensor physically shifted relative to the viewing slit. The edge of the viewing slit was visible in every scene.” This effect is removed by simply cropping out affected pixels in each of the data files. For scenes in standard forward orientation (+XVV), cropping includes 10 pixels on the left of the scene and 2 pixels on the right. Conversely, for scenes in reverse orientation (-XVV), cropping is 10 pixels on the right and 2 on the left.
  • If you’re not sure about the orientation of a particular scene, the orientation is specified in the newly created header file under hico_orientation_from_quaternion.
  • Spatial cropping can be performed by selecting Raster Management > Resize Data in the ENVI toolbox, choosing the relevant input file, selecting the option for Spatial Subset, subsetting the image to Samples 11-510 for forward orientation (3-502 for reverse orientation), and assigning a new output filename. Repeat as needed for each dataset.
  • The HDF5 formatted HICO scenes also require spectral cropping to reduce the total wavelengths from 128 bands to the 87-band subset spanning 0.4-0.9 um (400-900 nm). The bands outside this subset are considered less accurate and are typically not included in analysis.
  • Spectral cropping can also be performed by selecting Raster Management > Resize Data in the ENVI toolbox, in this case using the option for Spectral Subset and selecting bands 10-96 (corresponding to 0.40408-0.89669 um) while excluding bands 1-9 and 97-128. This step need only be applied to the hyperspectral L1B radiance data.
  • If desired, spectral and spatial cropping can both be applied in the same step.
  • The HICO scene is now ready for further processing and analysis in ENVI.
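For batch work outside ENVI, the spatial and spectral cropping described above can be sketched with numpy slicing. This assumes a radiance cube ordered (lines, samples, bands); note that ENVI's 1-based Samples 11-510 and bands 10-96 become 0-based slices 10:510 and 9:96:

```python
import numpy as np

def crop_hico(cube: np.ndarray, forward: bool = True) -> np.ndarray:
    """Drop the slit-edge pixels (10 on one side, 2 on the other) and
    keep the 87 bands spanning roughly 400-900 nm."""
    if forward:                  # +XVV: 10 pixels off the left, 2 off the right
        cube = cube[:, 10:-2, :]
    else:                        # -XVV: 2 off the left, 10 off the right
        cube = cube[:, 2:-10, :]
    return cube[:, :, 9:96]      # ENVI bands 10-96 inclusive -> 87 bands

# e.g. a placeholder scene of 512 samples and 128 bands
demo = np.zeros((5, 512, 128), dtype=np.int16)
print(crop_hico(demo).shape)     # (5, 500, 87)
```

This mirrors the Resize Data steps above; for the non-hyperspectral navigation data, apply only the spatial slice.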

For more information on the sensor, detailed data characteristics, ongoing research projects, publications and presentations, and much, much more, HICO users are encouraged to visit the HICO website at Oregon State University. This is an excellent resource for all things HICO.