GTC 2013 – Set your sights on processing speed

NVIDIA will be hosting its annual GPU Technology Conference – GTC 2013 – later this month, March 18-21, in San Jose, CA. Last year’s conference saw the release of NVIDIA’s astounding new Kepler GPU architecture. Be sure to tune in to this year’s conference to see what’s next in the world of high performance GPU computing.

Can’t attend in person? NVIDIA will be live streaming the keynote addresses (currently listed as upcoming events on www.ustream.tv/nvidia, but be sure to check the conference website www.gputechconf.com for details so as not to miss out). NVIDIA also records all the sessions and makes the content available afterwards for everyone to view. In fact, you can currently visit GTC On-Demand at the conference website to explore sessions from past conferences.

If nothing else, don’t miss the opening keynote address (March 19 @ 9am PST) by Jen-Hsun Huang, NVIDIA’s co-founder, President and CEO. He’ll discuss “what’s next in computing and graphics” and preview “disruptive technologies and exciting demonstrations across industries.” Jen-Hsun puts on quite a show. It’s not only informative with respect to NVIDIA’s direction and vision, but also entertaining to watch. After all, you’d expect nothing less from the industry leader in computer graphics and visualization.

And what about geospatial processing? How does GTC 2013 fit into the science of remote sensing and GIS? The answer lies in the power of GPU computing to transform our ability to rapidly process large datasets and implement complex algorithms. It’s a rapidly growing field, and it’s impressive to see the speedups being achieved, in some cases more than 100x faster on the GPU than on the CPU alone. Amongst the conference sessions this year will be numerous general presentations and workshops on the latest techniques for leveraging GPUs to accelerate your processing workflow. More specifically, there will be a collection of talks directly related to remote sensing, such as detecting man-made structures in high-resolution aerial imagery, retrieving atmospheric ozone profiles from satellite data, and implementing algorithms for orthorectification, pan-sharpening, color-balancing and mosaicking. Other relevant sessions include a real-time processing system for hyperspectral video, along with many more on a variety of other image processing topics.

HySpeed Computing is excited to see what this year’s conference has to offer. How about you?

For more on GTC 2013: http://www.gputechconf.com/

GPU Accelerated Processing – An example application using coral reef remote sensing

HySpeed Computing recently concluded a two-year grant, funded by the National Science Foundation, to utilize GPU computing to accelerate a remote sensing tool for the analysis of submerged marine environments. The work was performed in collaboration with researchers in the Center for Subsurface Sensing and Imaging Systems at Northeastern University, integrating expertise from across multiple disciplines.

Remote sensing of submerged ecosystems, such as coral reefs, is a particularly challenging problem. In order to effectively extract information about habitats located on the seafloor, analysis must compensate for confounding influences from the atmosphere, water surface and water column. The interactions involved are complex, and the algorithms used to perform this process are not trivial. The most promising methods used for this task are those that utilize hyperspectral remote sensing as their foundation. An example of one such algorithm, developed by HySpeed Computing founder James Goodman, and selected as the basis for the GPU acceleration project, is summarized below.

Overview of the coral reef remote sensing algorithm

The hyperspectral algorithm comprises two main processing stages: an inversion model and an unmixing model. The inversion model is based on a non-linear numerical optimization routine used to derive environmental information on water properties, water depth and bottom albedo from the hyperspectral imagery. The unmixing model is then used to derive habitat characteristics of the benthic environment. The overall algorithm is effective; however, the inversion model is computationally expensive. It was determined that algorithm efficiency could be improved using GPU computing.

Analysis indicated that the numerical optimization routine was the primary computational bottleneck and thus the logical place to focus acceleration efforts. The first approach, a steepest descent optimization routine programmed in CUDA, provided a moderate 3x speedup, but results indicated that greater acceleration could be achieved with a different optimization method. After careful consideration, a quasi-Newton optimization scheme was ultimately selected and implemented in OpenCL such that a portion of the processing is retained on the CPU while just the computationally intensive function evaluations run on the GPU. This arrangement distributes the processing load more evenly across GPU and CPU resources, and thus represents a more efficient solution. In the end, analysis showed that the GPU accelerated version of the model (OpenCL: BFGS-B) is 45x faster than the original model (IDL: LM), and thus approaches the capacity for real-time processing of this complex algorithm.
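To make the division of labor concrete, below is a minimal CUDA sketch of the same pattern, with the GPU evaluating the expensive per-pixel objective (and its gradient) in parallel while the CPU drives the optimization loop. This is an illustrative toy, not the project’s OpenCL/BFGS-B code: the forward model here is simply f(x) = x^2 per pixel, and the optimizer is fixed-step gradient descent.

```cuda
// Toy sketch of the hybrid CPU/GPU pattern described above (NOT the
// project's code): the GPU evaluates cost and gradient per pixel in
// parallel; the host applies the optimizer update.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void evalObjective(const float* obs, const float* x,
                              float* cost, float* grad, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float modeled = x[i] * x[i];        // stand-in for the real forward model
    float r = modeled - obs[i];
    cost[i] = r * r;                    // squared misfit per pixel
    grad[i] = 2.0f * r * 2.0f * x[i];   // d(cost)/dx via the chain rule
}

int main() {
    const int n = 1 << 20;              // e.g., one million image pixels
    std::vector<float> obs(n, 4.0f), x(n, 1.0f), grad(n);
    float *dObs, *dX, *dCost, *dGrad;
    cudaMalloc(&dObs,  n * sizeof(float));
    cudaMalloc(&dX,    n * sizeof(float));
    cudaMalloc(&dCost, n * sizeof(float));
    cudaMalloc(&dGrad, n * sizeof(float));
    cudaMemcpy(dObs, obs.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dX,   x.data(),   n * sizeof(float), cudaMemcpyHostToDevice);

    const float step = 0.05f;
    for (int iter = 0; iter < 50; ++iter) {
        // Expensive function evaluations stay on the GPU...
        evalObjective<<<(n + 255) / 256, 256>>>(dObs, dX, dCost, dGrad, n);
        cudaMemcpy(grad.data(), dGrad, n * sizeof(float),
                   cudaMemcpyDeviceToHost);
        // ...while the optimizer's update logic remains on the CPU.
        for (int i = 0; i < n; ++i) x[i] -= step * grad[i];
        cudaMemcpy(dX, x.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    }
    printf("x[0] = %f (expect ~2.0, since obs = 4.0)\n", x[0]);
    cudaFree(dObs); cudaFree(dX); cudaFree(dCost); cudaFree(dGrad);
    return 0;
}
```

A real quasi-Newton scheme such as BFGS-B would replace the fixed-step update with curvature estimates and bound handling on the host, but the traffic pattern – kernel launch, gradient download, parameter upload – is the same.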

Comparison of relative processing times for the hyperspectral inversion model

At a broader level, this project demonstrated the advantages of incorporating GPU computing into remote sensing image analysis. This is particularly relevant given the growing need for high-performance computing in the remote sensing community, which continues to expand as a result of the increasing number of satellite and airborne sensors, greater data accessibility, and expanded utilization of data intensive technologies. As this trend continues, so too will the opportunities for GPU computing.

This work was made possible by contributions from Prof. David Kaeli and students Matt Sellitto and Dana Schaa at Northeastern University.

GPU Geospatial Algorithm Acceleration – Getting started with software

HySpeed Computing looks at the technology behind GPU computing.

Geospatial technology and imagery are now pervasive in our society. From the GPS device on your car’s dashboard and your local weather report to national maps of drought conditions and global analysis of the earth’s environment, geospatial data is playing an increasingly central role in our lives. As the importance and availability of geospatial data continues to grow, so too does the need to process greater volumes of data at faster rates to provide output in a timely manner. The result is an accompanying need for increased utilization of high-performance computing to meet these processing demands, an area where GPU computing is certain to play a significant role.

So you have some data – in fact you have a hard drive filled to capacity with more imagery than your computer can seemingly process in a year – and you can’t wait that long for results. Let’s look at some software options for using GPU computing to accelerate your image processing workflow.

One option is to utilize commercial software packages that inherently employ GPU processing as part of their architecture. For example, in the field of graphic design, Adobe Photoshop versions CS4 and later use GPU computing to speed up certain functions. An equivalent in the geospatial field is the GXL GeoImaging Accelerator (PCI Geomatics), which provides GPU-enabled acceleration for applications such as orthorectification, image mosaic creation, and pan-sharpening. However, widespread integration of GPU capabilities in commercial geospatial software is not yet fully realized, and thus many algorithms and processing options still await acceleration.

As an alternative, users can opt to employ a high-level programming language to build their own applications. For example, GPULib (Tech-X) provides a library of GPU-accelerated IDL functions that can be used to customize ENVI (Exelis VIS). Note that IDL is the language on which ENVI is built and also the foundation for developers to create custom modules that integrate directly with ENVI. Similarly, the Parallel Computing Toolbox (MathWorks), as well as Jacket (AccelerEyes), can be used to speed up MATLAB code. Although not explicitly considered a geospatial software tool, MATLAB (MathWorks) has extensive capabilities in scientific computing, including modules designed specifically for image processing and mapping.

For the experienced programmer, there is the option to go directly to the source and develop specialized software using one of the two dominant parallel computing architectures available for GPU development: CUDA and OpenCL. CUDA was developed by NVIDIA explicitly for leveraging the compute capabilities of NVIDIA GPU cards, whereas OpenCL is an open framework that can be used for both NVIDIA and AMD GPUs. Both are excellent choices for developing custom GPU software, but both also require a reasonable level of comfort with programming as well as an understanding of GPU nuances to get the most out of the acceleration potential.
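As a taste of what that looks like, here is a small, self-contained CUDA example of the per-pixel parallelism both frameworks expose: computing NDVI (normalized difference vegetation index) from red and near-infrared bands. The band values are invented for illustration; an OpenCL version would follow the same structure with somewhat more host-side setup.

```cuda
// Minimal CUDA example: per-pixel NDVI = (NIR - red) / (NIR + red).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void ndvi(const float* red, const float* nir, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per pixel
    if (i < n) {
        float denom = nir[i] + red[i];
        out[i] = (denom != 0.0f) ? (nir[i] - red[i]) / denom : 0.0f;
    }
}

int main() {
    const int n = 4;                                 // tiny made-up "image"
    float red[n] = {0.10f, 0.20f, 0.30f, 0.40f};
    float nir[n] = {0.50f, 0.40f, 0.30f, 0.20f};
    float out[n];
    float *dRed, *dNir, *dOut;
    cudaMalloc(&dRed, n * sizeof(float));
    cudaMalloc(&dNir, n * sizeof(float));
    cudaMalloc(&dOut, n * sizeof(float));
    cudaMemcpy(dRed, red, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dNir, nir, n * sizeof(float), cudaMemcpyHostToDevice);
    ndvi<<<1, 32>>>(dRed, dNir, dOut, n);            // launch the kernel
    cudaMemcpy(out, dOut, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("pixel %d: NDVI = %.3f\n", i, out[i]);
    cudaFree(dRed); cudaFree(dNir); cudaFree(dOut);
    return 0;
}
```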

Essentially, we’re still early on the GPU adoption curve, but the field is progressing rapidly and GPU computing is quickly gaining momentum. It is going to be exciting to see how this field evolves.

GPU Hardware – Getting started with algorithm acceleration

HySpeed Computing looks at the technology behind GPU computing.

GPUs are rapidly gaining ground in the arena of high performance computing. The parallel computing capabilities of GPUs (graphics processing units), once employed exclusively for graphics rendering, are now being harnessed for general purpose computing. As a result, GPU computing is facilitating impressive speedups in computation time for a range of different applications.

So how can you utilize GPUs to accelerate your processing workflow? GPU computing is a combination of hardware and software. Let’s start by looking at the basic hardware needed to achieve the performance gains of GPU computing. It starts with the graphics card itself:

Is the current GPU in your system capable of performing general computing tasks? You may be surprised to learn that your existing desktop computer, and in some cases your laptop, already has a compute-capable GPU. If not, you will need to purchase one or more appropriate graphics cards, or a system configured with the proper graphics card(s). The two main manufacturers of GPU computing hardware are NVIDIA and AMD. Both companies maintain lists of compute-capable devices on their respective developer websites: NVIDIA; AMD.
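If you have the CUDA toolkit installed, a short device-query program will tell you exactly what’s in your machine. The sketch below uses standard CUDA runtime calls; AMD users would instead enumerate OpenCL platforms, which is not shown here.

```cuda
// List the CUDA-capable GPUs in this system and their key specifications.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable GPU detected.\n");
        return 0;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s\n", i, prop.name);
        printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
        printf("  Multiprocessors:    %d\n", prop.multiProcessorCount);
        printf("  Global memory:      %zu MB\n", prop.totalGlobalMem >> 20);
        printf("  Core clock:         %d MHz\n", prop.clockRate / 1000);
    }
    return 0;
}
```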

In most situations, the best performance is obtained using a dedicated GPU (or GPUs) that is independent of the graphics card utilized for your monitor. This means having extra PCI Express slot(s) in your computer, as well as a power supply that can handle the additional card. However, it is also feasible to utilize a single graphics card for both display purposes and GPU computing, with the limitation that card resources are now divided between different functions. Remember to always check system compatibility if you are adding new hardware.

Generally speaking, better performance is achieved with more FLOPS, more cores, more memory, faster clock speeds, and higher memory bandwidth. Nonetheless, the biggest, baddest, most recent GPU is not always the optimal solution for a given problem. There are always tradeoffs between application requirements, system specifications and budget. As a start, consider using a relatively inexpensive card and then moving to more powerful GPU cards once a prototype or proof-of-concept has been established.

Our next post will examine different software options for accelerating geospatial algorithms.

A Week of Innovation – Some Final Thoughts on the 2012 GTC

HySpeed Computing’s president, James Goodman, attended the 2012 NVIDIA GPU Computing Conference. He’s sharing his experiences, thoughts and news coming out of the convention.

GTC attendees explore the latest technology from NVIDIA.

A few days back in the office have given me time to further digest the information and events from last week’s GTC conference. New products, new capabilities and exciting new applications were the themes of the week, highlighted by the keynote address from NVIDIA CEO Jen-Hsun Huang. It was indeed an exciting week for GPU computing, and the coming year is sure to be an eventful one.

A notable announcement during the GTC week was the official release of the Kepler GPU, proclaimed the fastest, most efficient, most powerful GPU ever produced. With this release NVIDIA is once again redefining what can be achieved using GPU computing. Not only will Kepler enable greater acceleration of existing GPU algorithms, but its capacity for dynamic parallelism also opens up new dimensions in computing capability and efficiency. Dynamic parallelism allows GPU kernels to launch other kernels themselves, without returning control to the CPU. Among other things, this lets programs dynamically adjust the resolution of analysis, focusing more computation on areas requiring greater detail and less on areas requiring less. Consider the potential this brings to fields such as fluid dynamics, finite element analysis, hydrology and geophysics.
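For a feel of what this looks like in code, here is a minimal sketch, assuming a compute capability 3.5+ GPU and compilation with nvcc’s -rdc=true flag (linking against cudadevrt). A parent kernel inspects a made-up per-region “detail” score and launches child kernels only where refinement is warranted; the data and threshold are hypothetical, but the launch-from-the-GPU pattern is the essence of dynamic parallelism.

```cuda
// Dynamic parallelism sketch: a parent kernel launches child kernels
// directly from the GPU, with no CPU round trip.
// Build (assumption): nvcc -arch=sm_35 -rdc=true dynpar.cu -lcudadevrt
#include <cstdio>
#include <cuda_runtime.h>

__global__ void refineRegion(int region) {
    // Child kernel: stand-in for higher-resolution processing of one region.
    printf("  refining region %d (thread %d)\n", region, threadIdx.x);
}

__global__ void coarsePass(const float* detail, int nRegions) {
    int r = blockIdx.x * blockDim.x + threadIdx.x;
    if (r < nRegions && detail[r] > 0.5f) {
        refineRegion<<<1, 4>>>(r);   // extra work only where detail is high
    }
}

int main() {
    const int nRegions = 8;
    float detail[nRegions] = {0.1f, 0.9f, 0.2f, 0.7f, 0.3f, 0.1f, 0.8f, 0.4f};
    float* dDetail;
    cudaMalloc(&dDetail, nRegions * sizeof(float));
    cudaMemcpy(dDetail, detail, nRegions * sizeof(float),
               cudaMemcpyHostToDevice);
    coarsePass<<<1, nRegions>>>(dDetail, nRegions);
    cudaDeviceSynchronize();         // wait for parent and child kernels
    cudaFree(dDetail);
    return 0;
}
```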

The Kepler is also the first GPU designed specifically for the cloud. With the growing prevalence of BYOD – Bring Your Own Device – there is a corresponding need for users to be able to work securely anywhere on any device. Kepler enables the virtualized GPU, providing an energy efficient, low-latency capability to render and stream graphics to remote displays. This means an employee can bring massive computing power with them into the field as they work and visit with clients – running a complex, computationally intensive Windows application remotely on a tablet, for example. This improved capacity for virtualization is certain to be a driving force in the growing prevalence of cloud computing.

The list of applications that harness the power of GPU computing is impressive, including such diverse fields as animal behavior, airborne surveillance, genetic research, image processing, machine learning, molecular dynamics, ray tracing, medical imaging, and even an effort to put the first private robot on the moon. Where will GPUs take us next? Stay tuned and stay involved.

Into The Blue – GPUs and Remote Sensing

HySpeed Computing’s president, James Goodman, is attending the 2012 NVIDIA GPU Computing Conference. He’ll be sharing his experiences, thoughts and news coming out of the convention.

HySpeed Computing President James Goodman at NVIDIA’s GPU Technology Conference

To close out Day 4, I had my chance to take the stage and share my insights on the versatility and power of GPU computing.

In an exciting and intriguing project with Northeastern University, we were able to apply this technology to research the impacts of climate change and environmental degradation on coral reefs. Though beautiful, coral reefs are also very susceptible to changes in their ecosystem. To monitor the state of coral reefs, we utilized remote sensing technology, specifically imagery from satellite and airborne instruments, to map coral reef distribution and assess changes over time.

Without GPU computing, though, this research would have taken far longer to process and analyze. This technology allowed us to dramatically accelerate our algorithms, enabling a much greater amount of imagery to be processed and ultimately creating a much more efficient pathway for the mapping and assessment of coral reefs.

It was exciting to share our research and collaborate with others in the scientific computing community. It will be interesting to see how this technology continues to evolve and expand over the next year.

Analyzing Animal Behavior Models Through GPU Computing

HySpeed Computing’s president, James Goodman, is attending the 2012 NVIDIA GPU Computing Conference. He’ll be sharing his experiences, thoughts and news coming out of the convention.

Iain Couzin explains how GPU computing helped create breakthroughs in his research.

Day 3 of the NVIDIA conference, and now we start delving deeper into the possibilities of GPU computing and how it can be used for real-world applications.

Today’s keynote speaker, Iain Couzin, a behavioral ecologist from Princeton University, detailed how GPU computing has become the core technology enabling his research into collective animal behavior and decision-making. Recently recognized as a National Geographic Emerging Explorer, Couzin uses a combination of experiments, observations and computer modeling to explore the collective dynamics of animal behavior, how groups make informed decisions, the evolution of leadership and the dynamics of large-scale collective motion.

With access to the power of GPU computing, Couzin is able to substantially increase his capacity for analysis. By harnessing and tailoring GPU computing to his needs, he has made significant strides in understanding group dynamics in animal behavior – research that he hopes will lend insight into human behavior, information dissemination, brain function and cellular-level biology.

NVIDIA Announces Future for GPU Computing

NVIDIA GPU Technology Conference – Day 2

HySpeed Computing’s president, James Goodman, is attending the 2012 NVIDIA GPU Computing Conference. He’ll be sharing his experiences, thoughts and news coming out of the convention.

Entering the second day of the NVIDIA conference, we had high expectations for the keynote address and breaking news. The keynote did not disappoint.

The main speaking hall faded to black with a soundtrack playing in the background. The room of 3,000 was buzzing with anticipation. Then the opening keynote speaker, CEO and co-founder of NVIDIA Jen-Hsun Huang, took the stage, commanding the attention of the entire eager audience. As part of his keynote, Jen-Hsun confirmed NVIDIA’s continued commitment to GPU computing by announcing the company has yet again “doubled down” on investing in the technology. This investment became immediately apparent as he also took the opportunity to announce the launch of their most advanced GPU to date, the Kepler GPU.

With more speed, power and energy efficiency than any previous NVIDIA GPU, the Kepler replaces the Fermi and promises to fundamentally advance computer graphics and computing. Built on three new technologies – SMX, Hyper-Q and Dynamic Parallelism – the Kepler provides exceptional new computing capabilities. It is also the first GPU ever designed for a world of cloud computing: Kepler will serve as the technology behind GeForce Grid, which enables unprecedented low-latency live streaming of cloud gaming.

Other guest speakers joining Jen-Hsun on stage for the day’s demonstrations included Sumit Dhawan, group vice president and general manager of Citrix; Grady Cofer, visual effects supervisor at Industrial Light & Magic; David Yen, senior vice president and general manager of Cisco Data Center Group; and Dave Perry, CEO and co-founder of Gaikai.

Big Ideas & The Power of a Connected Community

NVIDIA GPU Technology Conference – Day 1

HySpeed Computing’s president, James Goodman, is attending the 2012 NVIDIA GPU Computing Conference. He’ll be sharing his experiences, thoughts and news coming out of the convention.

Conference attendees enjoy an afternoon social hour networking with fellow participants, developing new collaborations, and learning about the latest GPU advances.

As we arrived to kick-start the 2012 GTC, we could feel a definite buzz in the atmosphere: a sense of community and collaboration as the GPU computing field came together. With leadership from NVIDIA, the community here is building an entire GPU computing ecosystem, one that is continually growing and evolving through new technologies and applications.

As one of the hottest trends in high performance computing, integrating multi-core CPUs with GPU acceleration is offering developers and end-users an incredible array of possibilities. Leveraging the pervasiveness of GPUs in our increasingly tech-driven lives, the community can look toward ambitious goals: accelerating software through GPU computing, achieving portability across platforms, sharing software libraries, and extending functionality across disciplines.

Over the coming days, I’m excited to see the sessions on code development, algorithm optimization, applications and the latest software releases (to name a few).