Live Webinar – Delivering On-Demand Geoanalytics at Scale

Join us for a live webinar
Tuesday, April 18, 2017 | 10:30am EDT/3:30pm BST

Register Now

Delivering On-Demand Geoanalytics at Scale

Things have changed. Vast amounts of imagery are freely available on cloud platforms while big datasets can be hosted and accessed in enterprise environments in ways that were previously cost prohibitive. The ability to efficiently and accurately analyze this data at scale is critical to making informed decisions in a timely manner.

Developed with your imagery needs in mind, the Geospatial Services Framework (GSF) provides a scalable, highly configurable framework for deploying batch and on-demand geospatial applications like ENVI and IDL as a web service. Whether you are a geospatial professional in need of a robust software stack for end-to-end data processing, or a decision maker in need of consolidated analytics for deriving actionable information from complex large-scale data, GSF can be configured to meet your needs.

This webinar will show you real-world example applications that:

  • Describe the capabilities of GSF for scalable data processing and information delivery
  • Introduce the diverse ecosystem of geospatial analysis tools exposed by GSF
  • Illustrate the development of customized ENVI applications within the GSF environment

What are your geospatial data analysis needs?

Register Now

If you can’t attend the live webinar, register anyway and we’ll email you a link to the recording.

ENVI Analytics Symposium 2016 – Geospatial Signatures to Analytical Insights

HySpeed Computing is pleased to announce our sponsorship of the upcoming ENVI Analytics Symposium taking place in Boulder, CO from August 23-24, 2016.

EAS 2016

Building on the success of last year’s inaugural symposium, the 2016 ENVI Analytics Symposium “continues its exploration of remote sensing and big data analytics around the theme of Geospatial Signatures to Analytical Insights.

“The concept of a spectral signature in remote sensing involves measuring reflectance/emittance characteristics of an object with respect to wavelength. Extending the concept of a spectral signature to a geospatial signature opens the aperture of our imagination to include textural, spatial, contextual, and temporal characteristics that can lead to the discovery of new patterns in data. Extraction of signatures can in turn lead to new analytical insights on changes in the environment which impact decisions from national security to critical infrastructure to urban planning.

“Join your fellow thought leaders and practitioners from industry, academia, government, and non-profit organizations in Boulder for an intensive exploration of the latest advancements of analytics in remote sensing.”

Key topics to be discussed at this year’s event include Global Security and GEOINT, Big Data Analytics, Small Satellites, UAS and Sensors, and Algorithms to Insights, among many others.

There will also be a series of pre- and post-symposium workshops to gain in-depth knowledge on various geospatial analysis techniques and technologies.

For more information: http://harrisgeospatial.com/eas/Home.aspx

It’s shaping up to be a great conference. We look forward to seeing you there.

What’s New in ENVI 5.3

As the geospatial industry continues to evolve, so too does the software. Here’s a look at what’s new in ENVI 5.3, the latest release of the popular image analysis software from Exelis VIS.

ENVI

  • New data formats and sensors. ENVI 5.3 now provides support to read and display imagery from Deimos-2, DubaiSat-2, Pleiades-HR and Spot mosaic tiles, GeoPackage vectors, Google-formatted SkySat-2, and Sentinel-2.
  • Spectral indices. In addition to the numerous indices already included in ENVI (more than 60), new options include the Normalized Difference Mud Index (NDMI) and Modified Normalized Difference Water Index (MNDWI). (A short sketch of how a normalized difference index is computed follows this list.)
  • Atmospheric correction. The Quick Atmospheric Correction (QUAC) algorithm has been updated with the latest enhancements from Spectral Sciences, Inc. to help improve algorithm accuracy.
  • Digital elevation model. Users can now download the GMTED2010 DEM (7.5 arc seconds resolution) from the Exelis VIS website for use in improving the accuracy of Image Registration using RPC Orthorectification and Auto Tie Point Generation.
  • Point clouds. If you subscribe to the ENVI Photogrammetry Module (separate license from ENVI), then the Generate Point Clouds by Dense Image Matching tool is now available for generating 3D point clouds from GeoEye-1, IKONOS, Pleiades-1A, QuickBird, Spot-6, WorldView-1,-2 and -3, and the Digital Point Positioning Data Base (DPPDB).
  • LiDAR. The ENVI LiDAR module has been merged with ENVI and can now be launched directly from within the ENVI interface.
  • Geospatial PDF. Your views, including all currently displayed imagery, layers and annotations in those views, can now be exported directly to geospatial PDF files.
  • Spatial subset. When selecting files to add to the workspace, the File Selection tool now includes options to subset files by raster, vector, region of interest or map coordinates.
  • Regrid raster. Users can now regrid raster files to custom defined grids (geographic projection, pixel size, spatial extent and/or number of rows and columns).
  • Programming. The latest ENVI release also includes dozens of new tasks, too numerous to list here, that can be utilized for developing custom user applications in ENVI and ENVI Services Engine.
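For readers curious what these indices involve computationally, here is a minimal sketch of a generic normalized difference index using numpy. The array names are placeholders rather than ENVI API calls, and the MNDWI band pairing (green and shortwave-infrared) reflects its commonly published definition rather than anything specific to ENVI 5.3.

```python
import numpy as np

def normalized_difference(band_a, band_b):
    """Generic normalized difference index: (a - b) / (a + b)."""
    a = band_a.astype(np.float32)
    b = band_b.astype(np.float32)
    denom = a + b
    with np.errstate(divide="ignore", invalid="ignore"):
        index = (a - b) / denom
    # Where both bands are zero, define the index as 0 rather than NaN
    return np.where(denom != 0, index, 0.0)

# Example: MNDWI is commonly computed from the green and SWIR bands.
green = np.random.rand(512, 512).astype(np.float32)  # placeholder reflectance
swir = np.random.rand(512, 512).astype(np.float32)   # placeholder reflectance
mndwi = normalized_difference(green, swir)
```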

To learn more about the above features and improvements, as well as many more, read the latest release notes or check out the ENVI help documentation.

ENVI 5.3

Application Tips for ENVI 5 – Exporting a Geospatial PDF

This is part of a series on tips for getting the most out of your geospatial applications. Check back regularly or follow HySpeed Computing to see the latest examples and demonstrations.

Objective: Utilize ENVI’s print and export options to generate a Geospatial PDF.

Geospatial PDF

Scenario: This tip utilizes a Landsat-8 scene of California’s Central Valley to demonstrate the steps for creating a Geospatial PDF using two different options: (1) using Print Layout; and (2) using Chip View to Geospatial PDF.

Geospatial PDFs allow you to easily share your geospatial output in standard PDF format while still enabling users to measure distances and identify locations in geographic coordinates, but without need for any specialized GIS or remote sensing software.

Option 1 – Print Layout

  • The Print Layout option requires ENVI 5.0 or later and works only on Windows platforms. It also requires that you launch ENVI in 32-bit mode and have a licensed ArcGIS application on the same system.
  • If you’re looking for the ENVI 32-bit mode (as opposed to the now standard 64-bit mode), it is typically found in either the ‘32-bit’ or ‘ENVI for ArcGIS’ subdirectory of the ENVI directory located under Start > All Programs.
  • Now, using your data of choice, prepare the active View in ENVI as you would like it to appear in the Geospatial PDF. In our example, we simply use a color infrared image of our example Landsat-8 scene. However, if desired, your output can include multiple layers and even annotations.
  • Once you are satisfied with the View, go to File > Print…, and this will launch the Print Layout viewer where you can make further adjustments to your output before exporting it to Geospatial PDF.
  • Note: If the File > Print… option doesn’t produce the desired output in Print Layout (which doesn’t directly support all file types, georeferencing formats or annotation styles), then you can also use File > Chip View To > Print… as another option. The Chip View To option creates a screen capture of whatever is in the active View, so it can accommodate anything you can display in a View, but with the tradeoff that there is slightly less functionality in the Print Layout format options.
  • In our example, for instance, the File > Print… option didn’t support the Landsat-8 scene when opened using the ‘MTL.txt’ file. Rather than using the Chip View To option, however, we used a different workaround and re-saved the scene in ENVI format to retain the full functionality of Print Layout.
  • Once in the Print Layout viewer, you can apply different ArcMap templates, adjust the zoom level and location of the image, and edit features in the template. Here we made a few edits to the standard LetterPortrait.mxt template as the basis for our output.

ENVI Print Layout

  • To output your results to a Geospatial PDF, select the Export button at the top of the Print Layout viewer, enter a filename, and then select Save.
  • Note that Print Layout can also be used to Print your output using the Print button.
  • You have now created a Geospatial PDF of your work (see our example: CA_Central_Valley_1.pdf). Also, see below for tips on viewing and interacting with this file in Adobe Reader and Adobe Acrobat.

Option 2 – Chip View to Geospatial PDF

  • The Chip View to Geospatial PDF option requires ENVI 5.2 or later, but does not require ArcGIS.
  • This option directly prints whatever is in the active View to a Geospatial PDF, so it has fewer options than the Print Layout option, but can still be very useful for those without an ArcGIS license.
  • As above, prepare the active View in ENVI as you would like it to appear in the Geospatial PDF, including multiple layers and annotations as desired. Here we again simply use a color infrared image of our example Landsat-8 scene, but this time include text annotations and a north arrow added directly to the View.
  • Once you are satisfied with the View, go to File > Chip View To > Geospatial PDF…, enter a filename, and then select OK.
  • Note that the Chip View To option can also be used to export your work to a File, PowerPoint or Google Earth.
  • Congratulations again. You have now created another Geospatial PDF of your work (see our example: CA_Central_Valley_2.pdf).

CA Central Valley 2

Viewing Output in Adobe

  • As mentioned, Geospatial PDFs allow you to measure distances and identify locations in geographic coordinates using a standard PDF format. Geospatial PDFs can be viewed in either Adobe Acrobat or Reader (v9 or later).
  • In Adobe Reader, the geospatial tools can be found under Edit > Analysis in the main menu bar. In Adobe Acrobat, the geospatial tools can be enabled by selecting View > Tools > Analyze in the main menu bar, and then accessed in the Tools pane under Analyze.
  • To measure distance, area and perimeter, select the Measuring Tool.
  • To see the cursor location in geographic coordinates, select the Geospatial Location Tool.
  • And to find a specific location, select the Geospatial Location Tool, right click on the image, select the Find a Location tool, and then enter the desired coordinates.

So now that you’re familiar with the basics of creating Geospatial PDFs, be sure to consider using them in your next project. They’re definitely a powerful way to share both images and derived output products with your colleagues and customers.

Remote Sensing Analysis in the Cloud – Introducing the HICO Image Processing System

HySpeed Computing is pleased to announce the release of the HICO Image Processing System – a prototype web application for on-demand remote sensing image analysis in the cloud.

HICO IPS: Chesapeake Bay Chla

What is the HICO Image Processing System?

The HICO IPS is an interactive web application that allows users to specify image and algorithm selections, dynamically launch analysis routines in the cloud, and then see results displayed directly in the map interface.

The system capabilities are demonstrated using imagery collected by the Hyperspectral Imager for the Coastal Ocean (HICO) located on the International Space Station, and example algorithms are included for assessing coastal water quality and other nearshore environmental conditions.

What is needed to run the HICO IPS?

No specialized software is required. You just need an internet connection and a web browser to run the application (we suggest using Google Chrome).

How is this different from online map services?

This is an application server, not a map server, so all the results you see are dynamically generated on demand at your request. It’s remote sensing image analysis in the cloud.

What software was used to create the HICO IPS?

The HICO IPS is a combination of commercial and open-source software, with core image processing performed using the recently released ENVI Services Engine.

What are some of the advantages of this system?

The system can be configured for any number of different remote sensing instruments and applications. This provides an adaptable framework for rapidly implementing new algorithms and applications, as well as for making those applications and their output readily available to the global user community.

Try it out today and let us know what you think: http://hyspeedgeo.com/HICO/

 

Related posts

Calculating a land/water mask using HICO IPS

Deriving chlorophyll concentration using HICO IPS

Evaluating water optical properties using HICO IPS

Characterizing shallow coastal environments using HICO IPS

ENVI Analytics Symposium – Come explore the next generation of geoanalytic solutions

HySpeed Computing is pleased to announce our sponsorship of the upcoming ENVI Analytics Symposium taking place in Boulder, CO from August 25-26, 2015.

ENVI Analytics Symposium

The ENVI Analytics Symposium (EAS) will bring together the leading experts in remote sensing science to discuss technology trends and the next generation of solutions for advanced analytics. These topics are important because they can be applied to a diverse range of needs in environmental and natural resource monitoring, global food production, security, urbanization, and other fields of research.

The need to identify technology trends and advanced analytic solutions is being driven by the staggering growth in high-spatial and spectral resolution earth imagery, radar, LiDAR, and full motion video data. Join your fellow thought leaders and practitioners from industry, academia, government, and non-profit organizations in Boulder, Colorado for an intensive exploration of the latest advancements of analytics in remote sensing.

Core topics to be discussed at this event include Algorithms and Analytics, Applied Research, Geospatial Big Data, and Remote Sensing Phenomenology.

For more information: http://www.exelisvis.com/eas/HOME.aspx

We look forward to seeing you there.

Geospatial Solutions in the Cloud

Source: Exelis VIS whitepaper – 12/2/2014 (reprinted with permission)

What are Geospatial Analytics?

Geospatial analytics allow people to ask questions of data that exist within a spatial context. Usually this means extracting information from remotely sensed data, such as multispectral imagery or LiDAR, that is focused on observing the Earth and the things happening on it, both in a static sense and over a period of time. Familiar examples of this type of geospatial analysis include Land Classification, Change Detection, Soil and Vegetation Indices, and, depending on the bands of your data, Target Detection and Material Identification. However, geospatial analytics can also mean analyzing data that is not optical in nature.

So what other types of problems can geospatial analytics solve? Geospatial analytics comprise more than just images laid over a representation of the Earth. Geospatial analytics can ask questions of ANY type of geospatial data, and provide insight into static and changing conditions within a multi-dimensional space. Things like aircraft vectors in space and time, wind speeds, or ocean currents can be introduced into geospatial algorithms to provide more context to a problem and to enable new correlations to be made between variables.

Many times, advanced analytics like these can benefit from the power of cloud- or server-based computing. Benefits from the implementation of cloud-based geospatial analytics include the ability to serve on-demand analytic requests from connected devices, run complex algorithms on large datasets, and perform continuous analysis on a series of changing variables. Cloud analytics also improve the ability to conduct multi-modal analysis, or processes that take into account many different types of geospatial information.

Here we can see vectors of a UAV along with the ground footprint of the sensor overlaid in Google Earth™, as well as a custom interface built on ENVI that allows users to visualize real-time weather data in four dimensions (Figure 1).

geospatial_cloud_fig1

Figure 1 – Multi-Modal Geospatial Analysis – data courtesy NOAA

These are just a few examples of non-traditional geospatial analytics that cloud-based architecture is very good at solving.

Cloud-Based Geospatial Analysis Models 

So let’s take a quick look at how cloud-based analytics work. There are two different operational models for running analytics: the on-demand model and the batch process model. In an on-demand model (Figure 2), a user generally requests a specific piece of information from a web-enabled device such as a computer, a tablet, or a smartphone. Here the user is making the request to a cloud-based resource.

geospatial_cloud_fig2

Figure 2 – On-Demand Analysis Model

Next, the server identifies the requested data and runs the selected analysis on it. This leverages scalable server architecture that can vastly decrease the amount of time it takes to run the analysis and eliminate the need to host the data or the software on the web-enabled device. Finally, the requested information is sent back to the user, usually at a fraction of the bandwidth cost required to move large amounts of data or full resolution derived products through the internet.
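As a rough sketch of this request/response pattern, the snippet below submits an analysis job and polls until the small derived product is ready. The endpoint URL, payload fields, and status values are hypothetical placeholders for illustration, not the API of any particular product.

```python
import time
import requests

SERVER = "https://analytics.example.com"   # hypothetical cloud endpoint

def run_on_demand_analysis(scene_id, algorithm, parameters):
    # Submit the request; only a small JSON description travels to the server.
    resp = requests.post(f"{SERVER}/jobs",
                         json={"scene": scene_id,
                               "algorithm": algorithm,
                               "parameters": parameters})
    resp.raise_for_status()
    job_url = resp.json()["job_url"]

    # Poll until the server-side processing finishes.
    while True:
        status = requests.get(job_url).json()
        if status["state"] in ("succeeded", "failed"):
            break
        time.sleep(5)

    # Only the derived product (e.g., a small thumbnail or statistics file)
    # comes back, not the full-resolution source imagery.
    return status.get("result_url")

# Example usage (assumes the hypothetical service above exists):
# print(run_on_demand_analysis("scene_001", "water_mask", {"threshold": 0.2}))
```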

In the automated batch process analysis model (Figure 3), the cloud is designed to apply prescribed analyses to data as they become available to the system, reducing the amount of manual interaction and time it takes to prepare or analyze the data. This system can take in huge volumes of data from various sources such as aerial or satellite images, vector data, full motion video, radar, or other data types, and then run a set of pre-determined analyses on that data depending on the data type and the requested information.

geospatial_cloud_fig3

Figure 3 – Automated Batch Process Model

Once the data has been pre-processed, it is ready for consumption, and the information is either pushed out to another cloud-based asset, such as an individual user who needs to request information or monitor assets in real time, or simply placed into a database in a ‘ready state’ to be accessed and analyzed later.

The ability of this type of system to leverage the computing power of scalable server stacks enables the processing of huge amounts of data and greatly reduces the time and resources needed to get raw data into a consumable state.
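A schematic sketch of this batch model is shown below: a watcher picks up newly arrived files, routes each one to a prescribed analysis based on its data type, and writes the result to a ‘ready-state’ store for later consumption. The directory paths and analysis functions are placeholders for illustration only.

```python
import time
from pathlib import Path

INCOMING = Path("/data/incoming")     # placeholder ingest location
READY = Path("/data/ready_state")     # placeholder 'ready-state' store

# Stand-in analyses keyed by data type (file suffix used as a simple proxy).
def process_image(path): return f"classified:{path.name}"
def process_vector(path): return f"indexed:{path.name}"
def process_video(path): return f"frames_extracted:{path.name}"

DISPATCH = {".tif": process_image, ".shp": process_vector, ".mp4": process_video}

def watch_and_process(poll_seconds=30):
    READY.mkdir(parents=True, exist_ok=True)
    seen = set()
    while True:
        for path in INCOMING.glob("*"):
            handler = DISPATCH.get(path.suffix.lower())
            if handler and path not in seen:
                result = handler(path)                            # run the prescribed analysis
                (READY / f"{path.stem}.txt").write_text(result)   # park output in a ready state
                seen.add(path)
        time.sleep(poll_seconds)
```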

Solutions in the Cloud

HySpeed Computing

Now let’s take a look at a couple of use cases that employ ENVI capabilities in the cloud. The first is a web-based interface that allows users to perform on-demand geospatial analytics on hyperspectral data supplied by HICO™, the Hyperspectral Imager for the Coastal Ocean (Figure 4). HICO is a hyperspectral imaging spectrometer that is attached to the International Space Station (ISS) and is designed specifically for sampling the coastal ocean in an effort to further our understanding of the world’s coastal regions.

geospatial_cloud_fig4

Figure 4 – The HICO Sensor – image courtesy of NASA

Developed by HySpeed Computing, the prototype HICO Image Processing System (Figure 5) allows users to conduct on-demand image analysis of HICO’s imagery from a web-based browser through the use of ENVI cloud capabilities.

geospatial_cloud_fig5

Figure 5 – The HICO Image Processing System – data courtesy of NASA

The interface exposes several custom ENVI tasks designed specifically to take advantage of the unique spectral resolution of the HICO sensor to extract information characterizing the coastal environment. This type of interface is a good example of the on-demand scenario presented earlier, as it allows users to conduct on-demand analysis in the cloud without the need to have direct access to the data or the computing power to run the hyperspectral algorithms.

The goal of this system is to provide ubiquitous access to the robust HICO catalog of hyperspectral data as well as the ENVI algorithms needed to analyze it. This will give researchers and other analysts the ability to conduct valuable coastal research using web-based interfaces while capitalizing on the efforts of the Office of Naval Research, NASA, and Oregon State University that went into the development, deployment, and operation of HICO.

Milcord

Another use case involves a real-time analysis scenario that comes from a company called Milcord and their dPlan Next Generation Mission Manager (Figure 6). The goal of dPlan is to “aid mission managers by employing an intelligent, real-time decision engine for multi-vehicle operations and re-planning tasks” [1]. What this means is that dPlan helps folks make UAV flight plans based upon a number of different dynamic factors, and delivers the best plan for multiple assets both before and during the actual operation.

geospatial_cloud_fig6

Figure 6 – The dPlan Next Generation Mission Manager

Factors that are used to help score the flight plans include fuel availability, schedule metrics based upon priorities for each target, and what are known as National Image Interpretability Rating Scales, or NIIRS (Figure 7). NIIRS are used “to define and measure the quality of images and performance of imaging systems. Through a process referred to as ‘rating’ an image, the NIIRS is used by imagery analysts to assign a number which indicates the interpretability of a given image.” [2]

geospatial_cloud_fig7

Figure 7 – Extent of NIIRS 1-9 Grids Centered in an Area Near Calgary

These factors are combined into a cost function, and dPlan uses the cost function to find the optimal flight plan for multiple assets over a multitude of targets. dPlan also performs a cost-benefit analysis to indicate whether an asset cannot reach all targets, which target would be the least costly to remove from the plan, and whether another asset can visit that target instead.
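To make the idea of a combined cost function concrete, here is a generic illustration, not Milcord's actual scoring, that folds fuel use, schedule slip, and an expected NIIRS quality score into a single number per candidate plan, and then asks which target would be cheapest to drop when an asset cannot reach everything. All field names and weights are invented for this sketch.

```python
# Generic weighted cost function for comparing candidate flight plans.
# Field names and weights are invented for illustration.
def plan_cost(plan, w_fuel=1.0, w_schedule=2.0, w_quality=3.0):
    fuel_cost = plan["fuel_used"] / plan["fuel_available"]
    schedule_cost = sum(t["delay_minutes"] * t["priority"] for t in plan["targets"])
    # Higher NIIRS means better image interpretability, so treat it as a benefit.
    quality_benefit = sum(t["expected_niirs"] for t in plan["targets"])
    return w_fuel * fuel_cost + w_schedule * schedule_cost - w_quality * quality_benefit

def best_plan(candidate_plans):
    return min(candidate_plans, key=plan_cost)

def cheapest_target_to_drop(plan):
    # Cost-benefit check: removing which target lowers the overall cost the most?
    def cost_without(target):
        reduced = dict(plan, targets=[t for t in plan["targets"] if t is not target])
        return plan_cost(reduced)
    return min(plan["targets"], key=cost_without)
```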

dPlan employs a custom ESE application to generate huge grids of line-of-sight and NIIRS values associated with a given asset and target (Figure 8). dPlan uses this grid of points to generate route geometry, for example, how close and at what angle the asset needs to approach the target.

geospatial_cloud_fig8

Figure 8 – dPlan NIIRS Workflow

The cloud-computing power leveraged by dPlan allows users to re-evaluate flight plans on the fly, taking into account new information as it becomes available in real time. dPlan is a great example of how cloud-based computing combined with powerful analysis algorithms can solve complex problems in real time and reduce the resources needed to make accurate decisions amidst changing environments.

Solutions in the Cloud

So what do we do here at Exelis to enable folks like HySpeed Computing and Milcord to ask these kinds of questions of their data and retrieve reliable answers? The technology they’re using is called the ENVI Services Engine (Figure 9), an enterprise-ready version of the ENVI image analytics stack. We currently have over 60 out-of-the-box analysis tasks built into it, and are creating more with every release.

geospatial_cloud_fig9

Figure 9 – The ENVI Services Engine

The real value here is that ENVI Services Engine allows users to develop their own analysis tasks and expose them through the engine. This is what enables users to develop unique solutions to geospatial problems and share them as repeatable processes for others to use. These solutions can be run over and over again on different data and provide consistent, dependable information to the persons requesting the analysis. The cloud-based technology makes the engine easy to access from web-enabled devices while leveraging the enormous computing power of scalable server instances. This combination of customizable geospatial analysis tasks and virtually limitless computing power begins to address some of the limiting factors of analyzing what is known as big data, or datasets so large and complex that traditional computing practices are not sufficient to identify correlations within disconnected data streams.
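The pattern of exposing a custom task through the engine and invoking it from a lightweight client might look roughly like the sketch below. The endpoint paths, task name, and parameter names are hypothetical placeholders, not the documented ENVI Services Engine interface.

```python
import requests

ESE_URL = "https://ese.example.com/ese"   # hypothetical server location

def list_tasks():
    """Ask the engine which analysis tasks it currently exposes."""
    return requests.get(f"{ESE_URL}/services").json()

def submit_task(task_name, parameters):
    """Submit a named analysis task and return the server's job description."""
    resp = requests.post(f"{ESE_URL}/jobs",
                         json={"taskName": task_name, "inputParameters": parameters})
    resp.raise_for_status()
    return resp.json()

# Example usage (hypothetical task and parameter names):
# print(list_tasks())
# job = submit_task("CoastalWaterQuality",
#                   {"INPUT_RASTER": "hico_scene_001", "PRODUCT": "chlorophyll"})
```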

Our goal here at Exelis is to enable you to develop custom solutions to industry-specific geospatial problems using interoperable, off-the-shelf technology. For more information on what we can do for you and your organization, please feel free to contact us.

Sources:

[1] 2014. Milcord. “Geospatial Analytics in the Cloud: Successful Application Scenarios” webinar. https://event.webcasts.com/starthere.jsp?ei=1042556

[2] 2014. The Federation of American Scientists. “National Image Interpretability Rating Scales”. http://fas.org/irp/imint/niirs.htm

 

A Look at What’s New in ENVI 5.2

Earlier this month Exelis Visual Information Solutions released ENVI 5.2, the latest version of their popular geospatial analysis software.

ENVI 5.2

ENVI 5.2 includes a number of new image processing tools as well as various updates and improvements to current capabilities. We’ve already downloaded our copy and started working with the new features. Here’s a look at what’s included.

A few of the most exciting new additions to ENVI include the Spatiotemporal Analysis tools, Spectral Indices tool, Full Motion Video player, and improved integration with ArcGIS:

  • Spatiotemporal Analysis. Just like the name sounds, this feature provides users the ability to analyze stacks of imagery through space and time. Most notably, tools are now available to build a raster series, where images are ordered sequentially by time, to reproject images from multiple sensors into a common projection and grid size, and to animate and export videos of these raster series.
  • Spectral Indices. Expanding on the capabilities of the previous Vegetation Index Calculator, the new Spectral Indices tool includes 64 different indices, which in addition to analyzing vegetation can also be used to investigate geology, man-made features, burned areas and water. The tool conveniently selects only those indices that can be calculated for a given input image dependent on its spectral characteristics. So when you launch the tool you’ll only see those indices that can be calculated using your imagery.
  • Full Motion Video. ENVI 5.2 now supports video, allowing users to not just play video, but also convert video files to time-enabled raster series and extract individual video frames for analysis using standard ENVI tools. Supported file formats include Skybox SkySat video, Adobe Flash Video and Shockwave Flash, Animated GIF, Apple Quicktime, Audio Video Interleaved, Google WebM Matroska, Matroska Video, Motion JPEG and JPEG2000, MPEG-1 Part 2, MPEG-2 Transport Stream, MPEG-2 Part 2, MPEG-4 Part 12 and MPEG-4 Part 14.
  • Integration with ArcGIS. Building on the integration originally introduced in ENVI 5.0, additional functionality has been added for ENVI to seamlessly interact with ArcGIS, including the ability to integrate analysis tools and image output layers in a concurrent session of ArcMap. For those working in both software domains, this helps simplify your geospatial workflows and more closely integrate your raster and vector analyses.

Other noteworthy additions in this ENVI release include:

  • New data types. ENVI 5.2 now provides support to read and display imagery from AlSat-2A, Deimos-1, Gaofen-1, Proba-V S10, Proba-V S1, SkySat-1, WorldView-3, Ziyuan-1-02C and Ziyuan-3A, as well as data formats GRIB-1, GRIB-2, Multi-page TIFF and NetCDF-4.
  • NNDiffuse Pan Sharpening. A new pan sharpening tool based on nearest neighbor diffusion has been added, which is multi-threaded for high-performance image processing.
  • Scatter Plot Tool. The previous scatter plot tool has been updated and modernized, allowing users to dynamically switch bands, calculate spectral statistics, interact with ROIs, and generate density slices of the displayed spectral data.
  • Raster Color Slice. This useful tool has also been updated, particularly from a performance perspective, providing dynamic updates in the image display according to parameter changes made in the tool.

For those interested in implementing ENVI in the cloud, the ENVI 5.2 release also marks the release of ENVI Services Engine 5.2, an enterprise version of ENVI that facilitates on-demand, scalable, web-based image processing applications. As an example, HySpeed Computing is currently developing a prototype implementation of ESE for processing hyperspectral imagery from the HICO sensor on the International Space Station. The HICO Image Processing System will soon be publicly available for testing and evaluation by the community. A link to access the system will be provided on our website once it is released.

HICO IPS

To learn about the above features, and many more not listed here, see the video from Exelis VIS and/or read the latest release notes on ENVI 5.2.

We’re excited to put the new tools to work. How about you?

Working with Landsat 8 – Using and interpreting the Quality Assessment (QA) band

So you’ve downloaded a Landsat 8 scene and are eager to begin your investigation. As you get started, let’s explore how the Quality Assessment band that is distributed with the data can be used to help improve your analysis.

Landsat8 Lake Tahoe

What is the QA band?

As summarized on the USGS Landsat 8 product information website: “Each pixel in the QA band contains a decimal value that represents bit-packed combinations [QA bits] of surface, atmosphere, and sensor conditions that can affect the overall usefulness of a given pixel.”

“Rigorous science applications seeking to optimize the value of pixels used in a study will find QA bits useful as a first level indicator of certain conditions. Otherwise, users are advised that this file contains information that can be easily misinterpreted and it is not recommended for general use.”

What are QA bits?

Rather than utilize multiple bands for indicating conditions such as water, clouds and snow, the QA band integrates this information into 16-bit data values referred to as QA bits. As a result, a significant amount of information is packed into a single band; however, this also means that certain steps are required to extract the multi-layered information content from the integrated QA bits.

“The pixel values in the QA file must be translated to 16-bit binary form to be used effectively. The gray shaded areas in the table below show the bits that are currently being populated in the Level 1 QA Band, and the conditions each describe. None of the currently populated bits are expected to exceed 80% accuracy in their reported assessment at this time.”

Landsat8 QA Bands

“For the single bits (0, 1, 2, and 3):

  • 0 = No, this condition does not exist
  • 1 = Yes, this condition exists.”

“The double bits (4-5, 6-7, 8-9, 10-11, 12-13, and 14-15) represent levels of confidence that a condition exists:

  • 00 = Algorithm did not determine the status of this condition
  • 01 = Algorithm has low confidence that this condition exists (0-33 percent confidence)
  • 10 = Algorithm has medium confidence that this condition exists (34-66 percent confidence)
  • 11 = Algorithm has high confidence that this condition exists (67-100 percent confidence).”
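In code, translating a QA pixel value is simple bit arithmetic: shift the 16-bit value right to the bit of interest, then mask off one bit for the single-bit flags or two bits for the confidence pairs. A minimal sketch, with the confidence decoding taken directly from the values listed above:

```python
CONFIDENCE = {
    0b00: "not determined",
    0b01: "low confidence (0-33%)",
    0b10: "medium confidence (34-66%)",
    0b11: "high confidence (67-100%)",
}

def single_bit(qa_value, bit):
    """Return 0 or 1 for one of the single-bit conditions (bits 0-3)."""
    return (qa_value >> bit) & 1

def double_bits(qa_value, start_bit):
    """Return the 2-bit confidence code for a bit pair (4-5, 6-7, ..., 14-15)."""
    return (qa_value >> start_bit) & 0b11

# Example: decode an arbitrary 16-bit QA value.
qa = 0b1100000000000001                  # bit 0 set, plus bits 14-15 set
print(single_bit(qa, 0))                 # -> 1 (this condition exists)
print(CONFIDENCE[double_bits(qa, 14)])   # -> 'high confidence (67-100%)'
```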

How are QA bits calculated?

QA bit values are calculated at various stages during the radiometric and geometric correction process. An overview of the algorithms used for calculating QA bits is provided in the LDCM CAL/VAL Algorithm Description Document.

The single QA bits (0-3) are used to signify: missing data and pixels outside the extent of the image following geometric correction (designated fill); dropped lines (dropped frame); and pixels hidden from sensor view by the terrain (terrain occlusion).

The double QA bits (4-15) are calculated using the LDCM Cloud Cover Assessment (CCA) system, which consists of several intermediate CCA algorithms whose results are merged to create final values for each Landsat 8 scene. The algorithms utilize a series of spectral tests, and in one case a statistical decision tree model, to assess the presence of cloud, cirrus cloud, snow/ice, and water.

As the name implies, the heritage of the CCA system is based on cloud detection; hence algorithms are directed primarily at identifying clouds, with secondary attention to snow/ice and water. Keep this in mind when interpreting results, particularly with respect to water discrimination, which is reportedly poor in most cases.

How do I use QA bits?

While it is feasible to translate individual QA bits into their respective information values, or implement thresholds to extract specific values or ranges of values, this isn’t practical for accessing the full information content contained in the QA band.

Instead, try using the L-LDOPE Toolbelt, a no-cost tool available from the USGS Landsat 8 website that includes “functionality for computing histograms, creating masks, extracting statistics, reading metadata, reducing spatial resolution, band and spatial subsetting, and unpacking bit-packed values… the new tool [also] extracts bits from the OLI Quality Assessment (QA) band to allow easy identification and interpretation of pixel condition.”

Note that the L-LDOPE Toolbelt does not include a graphical user interface, but instead operates using command-line instructions. So be sure to download the user guide, which includes the specific directions for implementing the various executables.

L-LDOPE Toolbelt example

As an example, let’s walk through the steps needed to unpack the QA bits from a Landsat 8 image of Lake Tahoe using a Windows 7 x64 desktop system:

  • Unzip the L-LDOPE Toolbelt zip file and place the contents in the desired local directory.
  • Open the Windows Command Prompt (All Programs > Accessories > Command Prompt) and navigate to the respective ‘bin’ directory for your operating system (‘windows64bit_bin’ in our example).
  • For simplicity, copy the QA file (e.g., LC80430332014102LGN00_BQA.TIF) to the same ‘bin’ directory as identified in the previous step. For users familiar with command-line applications the data can be left in a separate directory with the executable command adjusted accordingly.
  • Execute the unpacking application (unpack_oli_qa.exe) using the following command (typed entirely on one line):

Landsat8 Unpack QA

  • The above example extracts all the QA bits using the default confidence levels and places them in separate output files.
  • Refer to the user guide for instructions on how to change the defaults, extract only select QA bits, and/or combine output into a single file.
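If you prefer to stay in Python rather than use the command-line executables, a rough equivalent of the unpacking step can be written with rasterio and numpy. The specific bit assignments used below (bit 0 as designated fill, bits 14-15 as cloud confidence) are assumptions based on the pre-collection Landsat 8 QA layout, so verify them against the QA table for your product before relying on the masks.

```python
import numpy as np
import rasterio

def unpack_pair(qa_band, start_bit):
    """Return the 2-bit confidence code (0-3) for the pair starting at start_bit."""
    return (qa_band >> start_bit) & 0b11

with rasterio.open("LC80430332014102LGN00_BQA.TIF") as src:
    qa = src.read(1).astype(np.uint16)

fill_mask = (qa & 1).astype(bool)     # assumed: bit 0 = designated fill
cloud_conf = unpack_pair(qa, 14)      # assumed: bits 14-15 = cloud confidence
high_cloud = cloud_conf == 0b11       # high confidence (67-100%)

print("fill pixels:", int(fill_mask.sum()),
      "| high-confidence cloud pixels:", int(high_cloud.sum()))
```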

Example 1: Lake Tahoe

This example illustrates QA output for a subset Landsat 8 scene of Lake Tahoe acquired on April 12, 2014 (LC80430332014102LGN00). Note that snow/ice in the surrounding mountains is identified with reasonable accuracy, cloud discrimination is also reasonable but includes significant confusion with snow/ice, and water is poorly characterized, including many extraneous features beyond just water bodies.

Landsat8 Lake Tahoe QA

Example 2: Cape Canaveral

This example illustrates QA output for a subset Landsat 8 scene of Cape Canaveral acquired on October 21, 2013 (LC80160402013294LGN00). Here the cloud discrimination is reasonable but includes confusion with beach areas along the coastline, the snow/ice output interestingly misidentifies some cloud and beach areas, and water discrimination is again poorly defined.

Landsat8 Cape Canaveral QA

With these examples in mind, it is worth repeating: “Rigorous science applications seeking to optimize the value of pixels used in a study will find QA bits useful as a first level indicator of certain conditions. Otherwise, users are advised that this file contains information that can be easily misinterpreted and it is not recommended for general use.”

Be sure to keep this in mind when exploring the information contained in the QA band.

For more info on the L-LDOPE Toolbelt: https://landsat.usgs.gov/L-LDOPE_Toolbelt.php

For more info on Landsat 8: https://landsat.usgs.gov/landsat8.php

 

Celebrating a Milestone – HySpeed Computing Blog Reaches 10,000 Views

HySpeed Computing 10000

We would like to thank the community and all our followers for making the HySpeed Computing blog a success. We appreciate your support and look forward to providing you with many more informative posts.

Notable highlights and achievements for the blog include:

Let us know what topics you would like to see included.

Thank you!