Remote Sensing in the Cloud – Calculating a land/water mask using HICO IPS

Last year we launched the HICO Image Processing System (HICO IPS) – a prototype web application for on-demand remote sensing data analysis in the cloud.

HICO IPS

To demonstrate the capabilities of this system, we implemented a collection of coastal remote sensing algorithms to produce information on water quality, water depth and benthic features using example imagery from the HICO instrument on the International Space Station.

As the HICO IPS approaches its first anniversary and continues to perform well, we’d like to take a moment to highlight each of the algorithms currently implemented in the system.

Here we begin with an overview of the land/water mask utilized in the HICO IPS.

Objective – Implement an automated algorithm for classifying land versus water, thereby masking land pixels from further analysis and allowing subsequent processing steps to focus on just water pixels.

Algorithm – Generates a binary mask differentiating land from water using the Normalized Difference Water Index (NDWI; McFeeters 1996). This algorithm can be implemented on its own, or as a pre-processing step in other algorithm workflows.

Inputs – User-specified HICO scene, with optional region of interest; and a user-adjustable NDWI threshold, where -1.0 ≤ NDWI ≤ 1.0, pixels with NDWI ≤ threshold are classified as land and pixels with NDWI > threshold as water, and the default threshold = 0.0.

HICO IPS Christchurch

Output – Binary land/water mask (0 = land; 1 = water), where land is displayed in the online map using a black mask and water remains unchanged.

HICO IPS Christchurch mask

Reference – McFeeters SK (1996) The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features, International Journal of Remote Sensing, vol. 17(7), 1425-1432.
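For readers who want to experiment with the same approach outside of HICO IPS, below is a minimal Python/NumPy sketch of an NDWI-based land/water mask using the McFeeters (1996) definition, NDWI = (green − NIR) / (green + NIR). The synthetic input arrays are placeholders for a real scene, and the HICO IPS internal implementation may differ in detail.

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """Binary land/water mask from NDWI (McFeeters 1996).

    green, nir : 2D arrays of reflectance for the green and near-infrared bands.
    threshold  : NDWI cutoff in [-1, 1]; pixels with NDWI <= threshold are
                 labeled land (0), pixels with NDWI > threshold water (1).
    """
    green = green.astype(np.float64)
    nir = nir.astype(np.float64)
    denom = green + nir
    with np.errstate(divide="ignore", invalid="ignore"):
        # Guard against division by zero on fill/no-data pixels
        ndwi = np.where(denom != 0, (green - nir) / denom, -1.0)
    return (ndwi > threshold).astype(np.uint8)

# Example with synthetic data (stand-in for a HICO scene subset)
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    green = rng.uniform(0.01, 0.2, (100, 100))
    nir = rng.uniform(0.01, 0.2, (100, 100))
    mask = ndwi_water_mask(green, nir, threshold=0.0)
    print("water fraction:", mask.mean())
```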

Try it out today for yourself: http://hyspeedgeo.com/HICO/

 

Related posts

Introducing the HICO Image Processing System

Deriving chlorophyll concentration using HICO IPS

Evaluating water optical properties using HICO IPS

Characterizing shallow coastal environments using HICO IPS


What’s New in ENVI 5.3

As the geospatial industry continues to evolve, so too does the software. Here’s a look at what’s new in ENVI 5.3, the latest release of the popular image analysis software from Exelis VIS.

ENVI

  • New data formats and sensors. ENVI 5.3 now provides support to read and display imagery from Deimos-2, DubaiSat-2, Pleiades-HR and Spot mosaic tiles, GeoPackage vectors, Google-formatted SkySat-2, and Sentinel-2.
  • Spectral indices. In addition to the numerous indices already included in ENVI (more than 60), new options include the Normalized Difference Mud Index (NDMI) and Modified Normalized Difference Water Index (MNDWI); a brief computation sketch for MNDWI follows this list.
  • Atmospheric correction. The Quick Atmospheric Correction (QUAC) algorithm has been updated with the latest enhancements from Spectral Sciences, Inc. to help improve algorithm accuracy.
  • Digital elevation model. Users can now download the GMTED2010 DEM (7.5 arc seconds resolution) from the Exelis VIS website for use in improving the accuracy of Image Registration using RPC Orthorectification and Auto Tie Point Generation.
  • Point clouds. If you subscribe to the ENVI Photogrammetry Module (separate license from ENVI), then the Generate Point Clouds by Dense Image Matching tool is now available for generating 3D point clouds from GeoEye-1, IKONOS, Pleiades-1A, QuickBird, Spot-6, WorldView-1,-2 and -3, and the Digital Point Positioning Data Base (DPPDB).
  • LiDAR. The ENVI LiDAR module has been merged with ENVI and can now be launched directly from within the ENVI interface.
  • Geospatial PDF. Your views, including all currently displayed imagery, layers and annotations in those views, can now be exported directly to geospatial PDF files.
  • Spatial subset. When selecting files to add to the workspace, the File Selection tool now includes options to subset files by raster, vector, region of interest or map coordinates.
  • Regrid raster. Users can now regrid raster files to custom defined grids (geographic projection, pixel size, spatial extent and/or number of rows and columns).
  • Programming. The latest ENVI release also includes dozens of new tasks, too numerous to list here, that can be utilized for developing custom user applications in ENVI and ENVI Services Engine.
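As a quick illustration of the new water index mentioned in the list above, here is a minimal NumPy sketch of the Modified Normalized Difference Water Index following the standard Xu (2006) formulation, (green − SWIR) / (green + SWIR). The band selection shown is an assumption on my part; consult the ENVI documentation for the exact bands and options its built-in implementation uses.

```python
import numpy as np

def mndwi(green, swir):
    """Modified Normalized Difference Water Index (Xu 2006):
    MNDWI = (green - SWIR) / (green + SWIR)."""
    green = green.astype(np.float64)
    swir = swir.astype(np.float64)
    denom = green + swir
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(denom != 0, (green - swir) / denom, 0.0)
```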

To learn more about the above features and improvements, as well as many more, read the latest release notes or check out the ENVI help documentation.

ENVI 5.3

Application Tips for ENVI 5 – Exporting a Geospatial PDF

This is part of a series on tips for getting the most out of your geospatial applications. Check back regularly or follow HySpeed Computing to see the latest examples and demonstrations.

Objective: Utilize ENVI’s print and export options to generate a Geospatial PDF.

Geospatial PDF

Scenario: This tip utilizes a Landsat-8 scene of California’s Central Valley to demonstrate the steps for creating a Geospatial PDF using two different options: (1) using Print Layout; and (2) using Chip View to Geospatial PDF.

Geospatial PDFs allow you to easily share your geospatial output in standard PDF format while still enabling users to measure distances and identify locations in geographic coordinates, without the need for any specialized GIS or remote sensing software.

Option 1 – Print Layout

  • The Print Layout option requires ENVI 5.0 or later and works only on Windows platforms. It also requires that you launch ENVI in 32-bit mode and have a licensed ArcGIS application on the same system.
  • If you’re looking for the ENVI 32-bit mode (as opposed to the now standard 64-bit mode), it is typically found in either the ‘32-bit’ or ‘ENVI for ArcGIS’ subdirectory of the ENVI directory located under Start > All Programs.
  • Now, using your data of choice, prepare the active View in ENVI as you would like it to appear in the Geospatial PDF. In our example, we simply use a color infrared image of our example Landsat-8 scene. However, if desired, your output can include multiple layers and even annotations.
  • Once you are satisfied with the View, go to File > Print…, and this will launch the Print Layout viewer where you can make further adjustments to your output before exporting it to Geospatial PDF.
  • Note: If the File > Print… option doesn’t produce the desired output in Print Layout (which doesn’t directly support all file types, georeferencing formats or annotation styles), then you can also use File > Chip View To > Print… as another option. The Chip View To option creates a screen capture of whatever is in the active View, so it can accommodate anything you can display in a View, but with the tradeoff that there is slightly less functionality in the Print Layout format options.
  • In our example, for instance, the File > Print… option didn’t support the Landsat-8 scene when opened using the ‘MTL.txt’ file; rather than using the Chip View To option, however, as a workaround we resaved the scene in ENVI format to retain the full functionality of Print Layout.
  • Once in the Print Layout viewer, you can apply different ArcMap templates, adjust the zoom level and location of the image, and edit features in the template. Here we made a few edits to the standard LetterPortrait.mxt template as the basis for our output.

ENVI Print Layout

  • To output your results to a Geospatial PDF, select the Export button at the top of the Print Layout viewer, enter a filename, and then select Save.
  • Note that Print Layout can also be used to Print your output using the Print button.
  • You have now created a Geospatial PDF of your work (see our example: CA_Central_Valley_1.pdf). Also, see below for tips on viewing and interacting with this file in Adobe Reader and Adobe Acrobat.

Option 2 – Chip View to Geospatial PDF

  • The Chip View to Geospatial PDF option requires ENVI 5.2 or later, but does not require ArcGIS.
  • This option directly prints whatever is in the active View to a Geospatial PDF, so it has fewer options than the Print Layout option, but can still be very useful for those without an ArcGIS license.
  • As above, prepare the active View in ENVI as you would like it to appear in the Geospatial PDF, including multiple layers and annotations as desired. Here we again simply use a color infrared image of our example Landsat-8 scene, but this time include text annotations and a north arrow added directly to the View.
  • Once you are satisfied with the View, go to File > Chip View To > Geospatial PDF…, enter a filename, and then select OK.
  • Note that the Chip View To option can also be used to export your work to a File, PowerPoint or Google Earth.
  • Congratulations again. You have now created another Geospatial PDF of your work (see our example: CA_Central_Valley_2.pdf).

CA Central Valley 2

Viewing Output in Adobe

  • As mentioned, Geospatial PDFs allow you to measure distances and identify locations in geographic coordinates using a standard PDF format. Geospatial PDFs can be viewed in either Adobe Acrobat or Reader (v9 or later).
  • In Adobe Reader, the geospatial tools can be found under Edit > Analysis in the main menu bar. In Adobe Acrobat, the geospatial tools can be enabled by selecting View > Tools > Analyze in the main menu bar, and then accessed in the Tools pane under Analyze.
  • To measure distance, area and perimeter, select the Measuring Tool.
  • To see the cursor location in geographic coordinates, select the Geospatial Location Tool.
  • And to find a specific location, select the Geospatial Location Tool, right click on the image, select the Find a Location tool, and then enter the desired coordinates.

So now that you’re familiar with the basics of creating Geospatial PDFs, be sure to consider using them in your next project. They’re definitely a powerful way to share both images and derived output products with your colleagues and customers.

From here to there – and everywhere – with Geospatial Cloud Computing

Reposted from Exelis VIS, Imagery Speaks, June 30, 2015, by James Goodman, CEO HySpeed Computing.

In a previous article we presented an overview of the advantages of cloud computing in remote sensing applications, and described an upcoming prototype web application for processing imagery from the HICO sensor on the International Space Station.

First, as a follow-up, we’re excited to announce the availability of the HICO Image Processing System – a cloud computing platform for on-demand remote sensing image analysis and data visualization.

HICO IPS - Chesapeake Bay - Chlorophyll

HICO IPS allows users to select specific images and algorithms, dynamically launch analysis routines in the cloud, and then see results displayed directly in an online map interface. System capabilities are demonstrated using imagery collected by the Hyperspectral Imager for the Coastal Ocean (HICO) on the International Space Station, and example algorithms are included for assessing coastal water quality and other nearshore environmental conditions.

This is an application-server, not just a map-server: HICO IPS delivers on-demand image processing of real physical parameters, such as chlorophyll concentration, inherent optical properties, and water depth.

The system was developed using a combination of commercial and open-source software, with core image processing performed using the recently released ENVI Services Engine. No specialized software is required to run HICO IPS. You just need an internet connection and a web browser to run the application (we suggest using Google Chrome).

Beyond HICO, and beyond the coastal ocean, the system can be configured for any number of different remote sensing instruments and applications, thus providing an adaptable cloud computing framework for rapidly implementing new algorithms and applications, as well as making these applications and their output readily available to the global user community.

However, this is but one application. Significantly greater work is needed throughout the remote sensing community to leverage these and other exciting new tools and processing capabilities. To participate in a discussion of how the future of geospatial image processing is evolving, and see a presentation of the HICO IPS, join us at the upcoming ENVI Analytics Symposium in Boulder, CO, August 25-26.

With this broader context in mind, and as a second follow-up, we ask an important question when envisioning this future: how are we, as an industry and as a research community, going to get from here to there?

The currently expanding diversity and volume of remote sensing data present particular challenges for aggregating data relevant to specific research applications, developing analysis tools that can be extended to a variety of sensors, efficiently implementing data processing across a distributed storage network, and delivering value-added products to a broad range of stakeholders.

Based on lessons learned from developing the HICO IPS, here we identify three important requirements for meeting these challenges:

  • Data and application interoperability need to continue evolving. This need speaks to the use of broadly accessible data formats, expansion of software binding libraries, and development of cross-platform applications.
  • Improved mechanisms are needed for transforming research achievements into functional software applications. Greater impact can be achieved, larger audiences reached, and application opportunities significantly enhanced, if more investment is made in remote sensing technology transfer.
  • Robust tools are required for decision support and information delivery. This requirement necessitates development of intuitive visualization and user interface tools that will assist users in understanding image analysis output products as well as contribute to more informed decision making.

These developments will not happen overnight, but the pace of the industry indicates that such transformations are already in process and that geospatial image processing will continue to evolve at a rapid rate. We encourage you to participate.

About HySpeed Computing: Our mission is to provide the most effective analysis tools for deriving and delivering information from geospatial imagery. Visit us at hyspeedcomputing.com.

To access the HICO Image Processing System: http://hyspeedgeo.com/HICO/

Sunglint Correction in Airborne Hyperspectral Images Over Inland Waters

Announcing recent publication in Revista Brasileira de Cartografia (RBC) – the Brazilian Journal of Cartography. The full text is available open-access online: Streher et al., 2014, RBC, International Issue 66/7, 1437-1449.

Title: Sunglint Correction in Airborne Hyperspectral Images Over Inland Waters

Authors: Annia Susin Streher, Cláudio Clemente Faria Barbosa, Lênio Soares Galvão, James A. Goodman, Evlyn Marcia Leão de Moraes Novo, Thiago Sanna Freire Silva

Abstract: This study assessed sunglint effects, also known as the specular reflection from the water surface, in high-spatial- and high-spectral-resolution airborne images acquired by the SpecTIR sensor under different view-illumination geometries over the Brazilian Ibitinga reservoir (Case II waters). These effects were corrected using the Goodman et al. (2008) and the Kutser et al. (2009) methods, and a Kutser et al. (2009) variant based on the continuum removal technique to calculate the oxygen absorption band depth. The performance of each method for reducing sunglint effects was evaluated by a quantitative analysis of pre- and post-sunglint correction reflectance values (residual reflectance images). Furthermore, the analysis was supported by inspection of the reflectance differences along transects placed over homogeneous masses of water and over specific portions of the scenes affected and non-affected by sunglint. Results showed that the algorithm of Goodman et al. (2008) produced better results than the other two methods, as it approached zero amplitude reflectance values between homogeneous water masses affected and non-affected by sunglint. The Kutser et al. (2009) method also presented good performance, except for the most contaminated sunglint portions of the scenes. When the continuum removal technique was incorporated into the Kutser et al. (2009) method, results varied with the scene and were more sensitive to atmospheric correction artifacts and instrument signal-to-noise ratio characteristics.

Keywords: hyperspectral remote sensing; specular reflection; water optically active substances; SpecTIR sensor

Figure 5. Deglinted SpecTIR hyperspectral images of Ibitinga reservoir (São Paulo, Brazil) and resultant reflectance profiles after correction by the methods of: (a) Goodman et al. (2008); (b) Kutser et al. (2009); and (c) modified Kutser et al. (2009).

Streher et al. 2015 Fig 5 Deglint
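For readers unfamiliar with the continuum-removal step used in the modified Kutser et al. (2009) approach, the sketch below shows the general idea of computing an absorption-band depth (here around the ~760 nm oxygen feature) by normalizing reflectance against a linear continuum drawn between two shoulder wavelengths. The specific shoulder and center wavelengths are illustrative assumptions, not the exact values used in the paper.

```python
import numpy as np

def band_depth(wavelengths, reflectance, left_nm, center_nm, right_nm):
    """Continuum-removed band depth at an absorption feature.

    A linear continuum is drawn between the left and right shoulder
    wavelengths; depth = 1 - R(center) / continuum(center).
    """
    wl = np.asarray(wavelengths, dtype=float)
    refl = np.asarray(reflectance, dtype=float)

    def interp_at(x):
        return np.interp(x, wl, refl)

    r_left, r_right = interp_at(left_nm), interp_at(right_nm)
    # Continuum value at the band center (linear interpolation between shoulders)
    frac = (center_nm - left_nm) / (right_nm - left_nm)
    continuum = r_left + frac * (r_right - r_left)
    return 1.0 - interp_at(center_nm) / continuum

# Illustrative use near the oxygen absorption feature around 760 nm
# (wavelengths and reflectance values are made-up demonstration numbers)
if __name__ == "__main__":
    wl = np.arange(740, 781, 5.0)
    refl = np.array([0.060, 0.059, 0.058, 0.052, 0.045, 0.052, 0.057, 0.058, 0.059])
    print("O2 band depth:", band_depth(wl, refl, 750, 762, 775))
```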

Remote Sensing Analysis in the Cloud – Introducing the HICO Image Processing System

HySpeed Computing is pleased to announce release of the HICO Image Processing System – a prototype web application for on-demand remote sensing image analysis in the cloud.

HICO IPS: Chesapeake Bay Chla

What is the HICO Image Processing System?

The HICO IPS is an interactive web-application that allows users to specify image and algorithm selections, dynamically launch analysis routines in the cloud, and then see results displayed directly in the map interface.

The system capabilities are demonstrated using imagery collected by the Hyperspectral Imager for the Coastal Ocean (HICO) located on the International Space Station, and example algorithms are included for assessing coastal water quality and other nearshore environmental conditions.

What is needed to run the HICO IPS?

No specialized software is required. You just need an internet connection and a web browser to run the application (we suggest using Google Chrome).

How is this different than online map services?

This is an application-server, not a map-server, so all the results you see are dynamically generated on-demand at your request. It’s remote sensing image analysis in the cloud.

What software was used to create the HICO IPS?

The HICO IPS is a combination of commercial and open-source software; with core image processing performed using the recently released ENVI Services Engine.

What are some of the advantages of this system?

The system can be configured for any number of different remote sensing instruments and applications, thus providing an adaptable framework for rapidly implementing new algorithms and applications, as well as making these applications and their output readily available to the global user community.

Try it out today and let us know what you think: http://hyspeedgeo.com/HICO/

 

Related posts

Calculating a land/water mask using HICO IPS

Deriving chlorophyll concentration using HICO IPS

Evaluating water optical properties using HICO IPS

Characterizing shallow coastal environments using HICO IPS

Application Tips for ENVI 5 – Image classification of drone video frames

This is part of a series on tips for getting the most out of your geospatial applications. Check back regularly or follow HySpeed Computing to see the latest examples and demonstrations.

Objective: Utilize ENVI’s new video support (introduced in ENVI 5.2) to extract an individual frame from HD video and then perform supervised classification on the resulting image file.

ENVI drone video analysis

Scenario: This tip demonstrates the steps used for implementing the ENVI Classification Workflow using an HD video frame extracted from a drone overflight of a banana plantation in Costa Rica (video courtesy Elevated Horizons). In this example, image classification is used to delineate the total number of observable banana bunches in the video frame. In banana cultivation, bunches are often covered using blue plastic sleeves for protection from insects and disease and for increasing yield and quality. Here the blue sleeves provide a unique spectral signature (color) for use in image classification, and hence a foundation for estimating total crop yield when the analysis is extrapolated or applied to the entire plantation.

The Tip: Below are the steps used to extract the video frame and implement the Classification Workflow in ENVI 5.2:

  • There are three options for opening and viewing video in ENVI: (i) drag-and-drop a video into the ENVI display; (ii) from the main toolbar select File > Open to select a video; and (iii) from the main toolbar select Display > Full Motion Video, and then use the Open button at the top of the video player to select a video.

ENVI video player

  • Once opened, the video player can be used to play back video using standard options for play, pause, and stepping forward and backward. There are also options to add and save bookmarks, adjust the brightness and frame rate, and export individual frames, or even the entire video, for analysis in ENVI.
  • Here we have selected to export a single frame using the “Export Frame to ENVI” button located at the top of the video player.
  • The selected video frame is then automatically exported to the Layer Manager and added to the currently active View. Note that the new file is only temporary, so be sure to save this file to a desired location and filename if you wish to retain the file for future analysis.
  • We next launch the Classification Workflow by selecting Toolbox > Classification > Classification Workflow.
  • For guidance on implementing the Classification Workflow, please visit our earlier post – Implementing the Classification Workflow – to see a detailed example using Landsat data of Lake Tahoe, or refer to the ENVI documentation for more information.
  • In the current classification example, we selected Use Training Data (supervised classification), delineated four different classes (banana bunch, banana plant, bare ground, understory vegetation), ran the Mahalanobis Distance supervised classification algorithm, and did not apply any post-classification smoothing or aggregation.

ENVI drone video classification workflow

  • Classification output includes the classified raster image (ENVI format), corresponding vector file (shapefile), and optionally the classification statistics (text file). Shown here is the classification vector output layered on top of the classification image, where blue represents the observable banana bunches in this video frame.

ENVI drone video classification output

With that analysis accomplished, there are a number of different options within ENVI for extending this analysis to other frames, from as simple as manually repeating the same analysis across multiple individual frames to as sophisticated as creating a custom IDL application to utilize ENVI routines for automatically classifying all frames in the entire video. However, we leave this for a future post.

In the meantime, we can see that the ability to export frames to ENVI for further analysis opens up a wealth of image analysis options. We’re excited to explore the possibilities.
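For those curious how the Mahalanobis Distance classifier used above works under the hood, here is a compact NumPy sketch that assigns each pixel to the class whose training statistics give the smallest Mahalanobis distance. This is a generic illustration, not ENVI’s implementation, and it assumes the training spectra have already been extracted from regions of interest.

```python
import numpy as np

def mahalanobis_classify(pixels, training):
    """Classify pixels by minimum Mahalanobis distance.

    pixels   : (n_pixels, n_bands) array of spectra to classify.
    training : dict mapping class name -> (n_samples, n_bands) array
               of training spectra for that class.
    Returns an array of class names, one per pixel.
    """
    names = list(training)
    dists = np.empty((pixels.shape[0], len(names)))
    for j, name in enumerate(names):
        samples = np.asarray(training[name], dtype=float)
        mean = samples.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(samples, rowvar=False))
        diff = pixels - mean
        # Squared Mahalanobis distance for every pixel at once
        dists[:, j] = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return np.array(names)[np.argmin(dists, axis=1)]

# Toy example with two spectrally distinct classes (made-up values)
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    train = {
        "banana_bunch": rng.normal([0.1, 0.2, 0.6], 0.02, (50, 3)),
        "vegetation":   rng.normal([0.05, 0.4, 0.1], 0.02, (50, 3)),
    }
    test = rng.normal([0.1, 0.2, 0.6], 0.02, (5, 3))
    print(mahalanobis_classify(test, train))
```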

ENVI Analytics Symposium – Come explore the next generation of geoanalytic solutions

HySpeed Computing is pleased to announce our sponsorship of the upcoming ENVI Analytics Symposium taking place in Boulder, CO from August 25-26, 2015.

ENVI Analytics Symposium

The ENVI Analytics Symposium (EAS) will bring together the leading experts in remote sensing science to discuss technology trends and the next generation of solutions for advanced analytics. These topics are important because they can be applied to a diverse range of needs in environmental and natural resource monitoring, global food production, security, urbanization, and other fields of research.

The need to identify technology trends and advanced analytic solutions is being driven by the staggering growth in high-spatial and spectral resolution earth imagery, radar, LiDAR, and full motion video data. Join your fellow thought leaders and practitioners from industry, academia, government, and non-profit organizations in Boulder, Colorado for an intensive exploration of the latest advancements of analytics in remote sensing.

Core topics to be discussed at this event include Algorithms and Analytics, Applied Research, Geospatial Big Data, and Remote Sensing Phenomenology.

For more information: http://www.exelisvis.com/eas/HOME.aspx

We look forward to seeing you there.

Linking Coral Reef Remote Sensing and Field Ecology: It’s a Matter of Scale

Announcing recent publication in the Journal of Marine Science and Engineering (JMSE). The full text is available open-access online: Lucas and Goodman, JMSE, 2015, vol. 3(1): 1-20.

Authors: Matthew Q. Lucas and James Goodman

Abstract: Remote sensing shows potential for assessing biodiversity of coral reefs. Important steps in achieving this objective are better understanding the spectral variability of various reef components and correlating these spectral characteristics with field-based ecological assessments. Here we analyze >9400 coral reef field spectra from southwestern Puerto Rico to evaluate how spectral variability and, more specifically, spectral similarity between species influences estimates of biodiversity. Traditional field methods for estimating reef biodiversity using photoquadrats are also included to add ecological context to the spectral analysis. Results show that while many species can be distinguished using in situ field spectra, the addition of the overlying water column significantly reduces the ability to differentiate species, and even groups of species. This indicates that the ability to evaluate biodiversity with remote sensing decreases with increasing water depth. Due to the inherent spectral similarity amongst many species, including taxonomically dissimilar species, remote sensing underestimates biodiversity and represents the lower limit of actual species diversity. The overall implication is that coral reef ecologists using remote sensing need to consider the spatial and spectral context of the imagery, and remote sensing scientists analyzing biodiversity need to define confidence limits as a function of both water depth and the scale of information derived, e.g., species, groups of species, or community level.

Keywords: coral reefs; remote sensing; field spectra; scale; ecology; biodiversity; conservation


Figure 8. Estimates of biodiversity calculated using the exponential of Shannon entropy, exp(H′), illustrating influence of increasing spectral similarity amongst reef species as a function of increasing water depth: 0* is biodiversity obtained from photoquadrats, 0** is biodiversity calculated using only those species considered prevalent or sizable enough to significantly influence the remote sensing signal (i.e., species included in the spectral measurements for this study area), and 0–10 is biodiversity calculated with consideration for optical similarities amongst species (i.e., based on hierarchical clustering of reflectance spectra as influenced by the overlying water column).
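For context, the biodiversity metric in the caption, the exponential of Shannon entropy, converts entropy into an "effective number of species." Below is a minimal sketch of the calculation using made-up abundance values, not data from the study.

```python
import numpy as np

def effective_species_number(abundances):
    """Exponential of Shannon entropy, exp(H'), where
    H' = -sum(p_i * ln(p_i)) over relative abundances p_i."""
    counts = np.asarray(abundances, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]                      # ignore absent species
    h = -np.sum(p * np.log(p))        # Shannon entropy (natural log)
    return np.exp(h)

# Example: four species with uneven cover fractions (illustrative values)
print(effective_species_number([0.40, 0.30, 0.20, 0.10]))  # ~3.6
```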

Geospatial Solutions in the Cloud

Source: Exelis VIS whitepaper – 12/2/2014 (reprinted with permission)

What are Geospatial Analytics?

Geospatial analytics allow people to ask questions of data that exist within a spatial context. Usually this means extracting information from remotely sensed data such as multispectral imagery or LiDAR that is focused on observing the Earth and the things happening on it, both in a static sense and over a period of time. Familiar examples of this type of geospatial analysis include Land Classification, Change Detection, Soil and Vegetative indexes, and depending on the bands of your data, Target Detection and Material Identification. However, geospatial analytics can also mean analyzing data that is not optical in nature.

So what other types of problems can geospatial analytics solve? Geospatial analytics comprise more than just images laid over a representation of the Earth. Geospatial analytics can ask questions of ANY type of geospatial data, and provide insight into static and changing conditions within a multi-dimensional space. Things like aircraft vectors in space and time, wind speeds, or ocean currents can be introduced into geospatial algorithms to provide more context to a problem and to enable new correlations to be made between variables.

Many times, advanced analytics like these can benefit from the power of cloud, or server-based computing. Benefits from the implementation of cloud-based geospatial analytics include the ability to serve on-demand analytic requests from connected devices, run complex algorithms on large datasets, or perform continuous analysis on a series of changing variables. Cloud analytics also improve the ability to conduct multi-modal analysis, or processes that take into account many different types of geospatial information.

Here we can see vectors of a UAV along with the ground footprint of the sensor overlaid in Google Earth™, as well as a custom interface built on ENVI that allows users to visualize real-time weather data in four dimensions (Figure 1).

geospatial_cloud_fig1

Figure 1 – Multi-Modal Geospatial Analysis – data courtesy NOAA

These are just a few examples of non-traditional geospatial analytics that cloud-based architecture is very good at solving.

Cloud-Based Geospatial Analysis Models 

So let’s take a quick look at how cloud-based analytics work. There are two different operational models for running analytics: the on-demand model and the batch process model. In an on-demand model (Figure 2), a user generally requests a specific piece of information from a web-enabled device such as a computer, a tablet, or a smartphone. Here the user is making the request to a cloud-based resource.

geospatial_cloud_fig2

Figure 2 – On-Demand Analysis Model

Next, the server identifies the requested data and runs the selected analysis on it. This leverages scalable server architecture that can vastly decrease the amount of time it takes to run the analysis and eliminate the need to host the data or the software on the web-enabled device. Finally, the requested information is sent back to the user, usually at a fraction of the bandwidth cost required to move large amounts of data or full resolution derived products through the internet.
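To make the on-demand model concrete, the sketch below shows what a client request to a cloud analysis endpoint might look like. The endpoint URL, task name, and parameter names are hypothetical placeholders for illustration only, not the actual HICO IPS or ENVI Services Engine API.

```python
import json
import urllib.request

# Hypothetical endpoint and parameters -- placeholders for illustration only,
# not the actual HICO IPS / ENVI Services Engine API.
ENDPOINT = "https://example.com/analytics/run"

request_body = {
    "task": "ndwi_land_water_mask",        # analysis routine to run server-side
    "scene_id": "HICO_EXAMPLE_SCENE_001",  # data already stored in the cloud
    "parameters": {"threshold": 0.0},
}

req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(request_body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# The server runs the analysis on scalable infrastructure and returns only the
# (small) derived result, e.g. a URL to a map layer, not the raw scene.
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
print(result)
```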

In the automated batch process analysis model (Figure 3), the cloud is designed to conduct prescribed analysis on data as it becomes available to the system, reducing the amount of manual interaction and time that it takes to prepare or analyze data. This system can take in huge volumes of data from various sources such as aerial or satellite images, vector data, full motion video, radar, or other data types, and then run a set of pre-determined analyses on that data depending on the data type and the requested information.

geospatial_cloud_fig3

Figure 3 – Automated Batch Process Model

Once the data has been pre-processed, it is ready for consumption, and the information is pushed out either to another cloud-based asset, such as an individual user who needs to request information or monitor assets in real time, or placed into a database in a ‘ready state’ to be accessed and analyzed later.

The ability for this type of system to leverage the computing power of scalable server stacks enables the processing of huge amounts of data and greatly reduces the time and resources needed to get raw data into a consumable state.
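Below is a minimal sketch of the batch idea, assuming a simple watch-folder trigger and a lookup table of prescribed analyses keyed by data type. Real systems of this kind would typically use message queues and distributed workers rather than a polling loop; everything here is an illustrative placeholder.

```python
import time
from pathlib import Path

# Prescribed analyses keyed by file extension (illustrative placeholders)
PIPELINES = {
    ".tif": ["radiometric_calibration", "land_water_mask", "chlorophyll"],
    ".shp": ["vector_ingest"],
}

def dispatch(path: Path):
    """Run the prescribed analysis steps for a newly arrived file."""
    for step in PIPELINES.get(path.suffix.lower(), []):
        print(f"submitting {step} for {path.name}")  # stand-in for a real job submission

def watch(folder: Path, poll_seconds: float = 10.0):
    """Poll a folder and dispatch analyses for files as they arrive."""
    seen = set()
    while True:
        for path in folder.iterdir():
            if path.is_file() and path not in seen:
                seen.add(path)
                dispatch(path)
        time.sleep(poll_seconds)
```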

Solutions in the Cloud

HySpeed Computing

Now let’s take a look at a couple of use cases that employ ENVI capabilities in the cloud. The first is a web-based interface that allows users to perform on-demand geospatial analytics on hyperspectral data supplied by HICO™, the Hyperspectral Imager for the Coastal Ocean (Figure 4). HICO is a hyperspectral imaging spectrometer that is attached to the International Space Station (ISS) and is designed specifically for sampling the coastal ocean in an effort to further our understanding of the world’s coastal regions.

geospatial_cloud_fig4

Figure 4 – The HICO Sensor – image courtesy of NASA

Developed by HySpeed Computing, the prototype HICO Image Processing System (Figure 5) allows users to conduct on-demand image analysis of HICO’s imagery from a web-based browser through the use of ENVI cloud capabilities.

geospatial_cloud_fig5

Figure 5 – The HICO Image Processing System – data courtesy of NASA

The interface exposes several custom ENVI tasks designed specifically to take advantage of the unique spectral resolution of the HICO sensor to extract information characterizing the coastal environment. This type of interface is a good example of the on-demand scenario presented earlier, as it allows users to conduct on-demand analysis in the cloud without the need to have direct access to the data or the computing power to run the hyperspectral algorithms.

The goal of this system is to provide ubiquitous access to the robust HICO catalog of hyperspectral data as well as the ENVI algorithms needed to analyze them. This allows researchers and other analysts to conduct valuable coastal research through web-based interfaces while capitalizing on the efforts of the Office of Naval Research, NASA, and Oregon State University that went into the development, deployment, and operation of HICO.

Milcord

Another use case involves a real-time analysis scenario that comes from a company called Milcord and their dPlan Next Generation Mission Manager (Figure 6). The goal of dPlan is to “aid mission managers by employing an intelligent, real-time decision engine for multi-vehicle operations and re-planning tasks” [1]. What this means is that dPlan helps folks make UAV flight plans based upon a number of different dynamic factors, and delivers the best plan for multiple assets both before and during the actual operation.

geospatial_cloud_fig6

Figure 6 – The dPlan Next Generation Mission Manager

Factors that are used to help score the flight plans include fuel availability, schedule metrics based upon priorities for each target, as well as what are known as National Image Interpretability Rating Scales, or NIIRS (Figure 7). NIIRS are used “to define and measure the quality of images and performance of imaging systems. Through a process referred to as “rating” an image, the NIIRS is used by imagery analysts to assign a number which indicates the interpretability of a given image.” [2]

geospatial_cloud_fig7

Figure 7 – Extent of NIIRS 1-9 Grids Centered in an Area Near Calgary

These factors are combined into a cost function, and dPlan uses the cost function to find the optimal flight plan for multiple assets over a multitude of targets. dPlan also performs a cost-benefit analysis to indicate whether an asset cannot reach all targets, which target would be the least costly to remove from the plan, and whether another asset could visit that target instead.
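As a purely hypothetical illustration (not Milcord’s actual scoring), the sketch below shows how fuel, schedule, and image-quality (NIIRS) factors might be folded into a single weighted cost used to rank candidate flight plans; the metrics, weights, and values are all made up.

```python
from dataclasses import dataclass

@dataclass
class PlanMetrics:
    fuel_used: float         # fraction of available fuel consumed (0-1)
    schedule_penalty: float  # weighted lateness across prioritized targets
    mean_niirs: float        # expected image interpretability (1-9 scale)

def plan_cost(m: PlanMetrics, w_fuel=1.0, w_sched=1.0, w_niirs=0.5):
    """Lower is better: penalize fuel use and lateness, reward image quality.
    Weights are illustrative placeholders."""
    return w_fuel * m.fuel_used + w_sched * m.schedule_penalty - w_niirs * m.mean_niirs

# Rank two candidate plans (made-up numbers)
plans = {
    "plan_a": PlanMetrics(fuel_used=0.6, schedule_penalty=0.2, mean_niirs=6.0),
    "plan_b": PlanMetrics(fuel_used=0.4, schedule_penalty=0.5, mean_niirs=5.0),
}
best = min(plans, key=lambda k: plan_cost(plans[k]))
print("best plan:", best)
```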

dPlan employs a custom ESE application to generate huge grids of Line of Sight values and NIIRS values associated with a given asset and target (Figure 8). dPlan uses this grid of points to generate route geometry, for example, how close and at what angle the asset needs to approach the target.

geospatial_cloud_fig8

Figure 8 – dPlan NIIRS Workflow

The cloud-computing power leveraged by dPlan allows users to re-evaluate flight plans on the fly, taking into account new information as it becomes available in real time. dPlan is a great example of how cloud-based computing combined with powerful analysis algorithms can solve complex problems in real time and reduce the resources needed to make accurate decisions amidst changing environments.

Solutions in the Cloud

So what do we do here at Exelis to enable folks like HySpeed Computing and Milcord to ask these kinds of questions of their data and retrieve reliable answers? The technology they’re using is called the ENVI Services Engine (Figure 9), an enterprise-ready version of the ENVI image analytics stack. We currently have over 60 out-of-the-box analysis tasks built into it, and are creating more with every release.

geospatial_cloud_fig9

Figure 9 – The ENVI Services Engine

The real value here is that ENVI Services Engine allows users to develop their own analysis tasks and expose them through the engine. This is what enables users to develop unique solutions to geospatial problems and share them as repeatable processes for others to use. These solutions can be run over and over again on different data and provide consistent dependable information to the persons requesting the analysis. The cloud based technology makes it easy to access from web-enabled devices while leveraging the enormous computing power of scalable server instances. This combination of customizable geospatial analysis tasks and virtually limitless computing power begins to address some of the limiting factors of analyzing what is known as big data, or datasets so large and complex that traditional computing practices are not sufficient to identify correlations within disconnected data streams.

Our goal here at Exelis is to enable you to develop custom solutions to industry-specific geospatial problems using interoperable, off-the-shelf technology. For more information on what we can do for you and your organization, please feel free to contact us.

Sources:

[1] 2014. Milcord. “Geospatial Analytics in the Cloud: Successful Application Scenarios” webinar. https://event.webcasts.com/starthere.jsp?ei=1042556

[2] 2014. The Federation of American Scientists. “National Image Interpretability Rating Scales”. http://fas.org/irp/imint/niirs.htm