GPU Accelerated Processing – An example application using coral reef remote sensing

HySpeed Computing recently concluded a two-year grant, funded by the National Science Foundation, to use GPU computing to accelerate a remote sensing tool for the analysis of submerged marine environments. The work was carried out in collaboration with researchers at the Center for Subsurface Sensing and Imaging Systems at Northeastern University, integrating expertise from multiple disciplines.

Remote sensing of submerged ecosystems, such as coral reefs, is a particularly challenging problem. To extract useful information about seafloor habitats, the analysis must compensate for confounding influences from the atmosphere, the water surface, and the water column. These interactions are complex, and the algorithms used to account for them are not trivial. The most promising methods for this task are built on hyperspectral remote sensing. One such algorithm, developed by HySpeed Computing founder James Goodman and selected as the basis for the GPU acceleration project, is summarized below.

[Figure: Overview of coral reef remote sensing algorithm]

The hyperspectral algorithm comprises two main processing stages: an inversion model and an unmixing model. The inversion model uses a non-linear numerical optimization routine to derive environmental information on water properties, water depth, and bottom albedo from the hyperspectral imagery. The unmixing model then derives habitat characteristics of the benthic environment. The overall algorithm is effective; however, the inversion model is computationally expensive, and it was determined that its efficiency could be improved using GPU computing.
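The post does not include the model's source code, so the following is only a rough sketch of what a per-pixel spectral inversion of this kind looks like. It uses Python/SciPy for clarity (the original model was implemented in IDL), and the forward model, its parameterization, and the spectral shapes are illustrative assumptions rather than the actual Goodman model: each pixel's water properties, depth, and bottom brightness are retrieved by fitting a forward reflectance model to the measured spectrum with a bounded optimizer.

```python
import numpy as np
from scipy.optimize import minimize

def forward_model(params, wavelengths):
    # Illustrative forward model: predicts above-water reflectance from water
    # optical properties, depth, and a bottom-brightness fraction. The spectral
    # shapes here are placeholders, not the model used in the actual project.
    a0, bb0, depth, bottom_frac = params
    absorption = a0 * (wavelengths / 550.0)        # absorption grows with wavelength
    backscatter = bb0 * (550.0 / wavelengths)      # backscatter falls with wavelength
    kappa = absorption + backscatter
    attenuation = np.exp(-2.0 * kappa * depth)     # two-way path through the water column
    deep_water = backscatter / kappa               # reflectance of optically deep water
    bottom = bottom_frac * 0.30                    # flat reference bottom albedo
    return deep_water * (1.0 - attenuation) + bottom * attenuation

def invert_pixel(measured, wavelengths, initial_guess, bounds):
    # Fit the forward model to one pixel's spectrum by minimizing the sum of
    # squared residuals; this per-pixel optimization is the expensive stage.
    def objective(params):
        residual = forward_model(params, wavelengths) - measured
        return float(np.dot(residual, residual))
    result = minimize(objective, initial_guess, method="L-BFGS-B", bounds=bounds)
    return result.x  # retrieved water properties, depth, and bottom albedo

# Example: recover parameters from a synthetic spectrum
wavelengths = np.linspace(400.0, 700.0, 31)
truth = np.array([0.3, 0.01, 5.0, 0.6])
measured = forward_model(truth, wavelengths)
estimate = invert_pixel(measured, wavelengths,
                        initial_guess=[0.1, 0.005, 2.0, 0.5],
                        bounds=[(0.01, 2.0), (1e-4, 0.1), (0.1, 30.0), (0.0, 1.0)])
```

Because an inversion of this form must be solved independently for every pixel in the image, the total cost scales with image size, which is what makes the stage a natural candidate for GPU acceleration.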

Analysis indicated that the numerical optimization routine was the primary computational bottleneck and thus the logical focus for acceleration efforts. The first approach, a steepest descent optimization routine programmed in CUDA, provided a moderate 3x speedup, but results indicated that greater acceleration could be achieved with a different optimization method. After careful consideration, a quasi-Newton optimization scheme was ultimately selected and implemented in OpenCL, such that a portion of the processing is retained on the CPU while only the computationally intensive function evaluations run on the GPU. This arrangement distributes the processing load more evenly across GPU and CPU resources, and thus represents a more efficient solution. In the end, analysis showed that the GPU-accelerated version of the model (OpenCL: BFGS-B) is 45x faster than the original model (IDL: LM), approaching the capacity for real-time processing of this complex algorithm.
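The project's OpenCL source is not reproduced here, but the host/device split described above can be illustrated with a small sketch. Assuming, for illustration only, a simple sum-of-squared-residuals objective and the PyOpenCL bindings, the quasi-Newton updates stay on the CPU while the batched, per-pixel objective evaluations are offloaded to the GPU; the kernel, function names, and data layout below are assumptions, not the project's actual implementation.

```python
import numpy as np
import pyopencl as cl

# Device kernel (illustrative): computes a sum-of-squared-residuals objective
# for every pixel in a batch, one work-item per pixel.
KERNEL_SRC = """
__kernel void objective_batch(__global const float *measured,
                              __global const float *modeled,
                              __global float *objective,
                              const int n_bands)
{
    int pixel = get_global_id(0);
    float acc = 0.0f;
    for (int b = 0; b < n_bands; ++b) {
        float d = modeled[pixel * n_bands + b] - measured[pixel * n_bands + b];
        acc += d * d;
    }
    objective[pixel] = acc;
}
"""

def make_device_objective(measured):
    # 'measured' is an (n_pixels, n_bands) array of observed spectra. Returns a
    # callable that evaluates the per-pixel misfit on the GPU; the quasi-Newton
    # parameter updates remain in host code, calling this each iteration.
    measured = np.ascontiguousarray(measured, dtype=np.float32)
    n_pixels, n_bands = measured.shape

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)
    program = cl.Program(ctx, KERNEL_SRC).build()
    mf = cl.mem_flags

    measured_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=measured)
    out = np.empty(n_pixels, dtype=np.float32)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

    def objective(modeled):
        # 'modeled' holds the forward-model spectra for the current parameter
        # estimates; in a full implementation the forward model itself would
        # also run on the device rather than being uploaded on each call.
        modeled = np.ascontiguousarray(modeled, dtype=np.float32)
        modeled_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=modeled)
        program.objective_batch(queue, (n_pixels,), None,
                                measured_buf, modeled_buf, out_buf, np.int32(n_bands))
        cl.enqueue_copy(queue, out, out_buf)
        return out.copy()  # one misfit value per pixel

    return objective
```

The point of the sketch is the division of labor: the optimizer's bookkeeping (gradient estimates, line searches, bound handling) is cheap and stays on the host, while the data-parallel work of evaluating many pixels' objectives is mapped onto GPU work-items, which is what allows the workload to be balanced across CPU and GPU resources.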

[Figure: Comparison of relative processing times for hyperspectral inversion model]

At a broader level, this project demonstrated the advantages of incorporating GPU computing into remote sensing image analysis. This is particularly relevant given the growing need for high-performance computing in the remote sensing community, driven by the increasing number of satellite and airborne sensors, greater data accessibility, and expanded use of data-intensive technologies. As this trend continues, so too will the opportunities for GPU computing.

This work was made possible by contributions from Prof. David Kaeli and students Matt Sellitto and Dana Schaa at Northeastern University.
