GTC 2013 – Set your sights on processing speed

NVIDIA will host its annual GPU Technology Conference – GTC 2013 – later this month, March 18-21, in San Jose, CA. Last year’s conference saw the release of NVIDIA’s astounding new Kepler GPU architecture. Be sure to tune in to this year’s conference to see what’s next in the world of high performance GPU computing.

Can’t attend in person? NVIDIA will be live streaming the keynote addresses (currently listed as upcoming events, but be sure to check the conference website for details so as not to miss out). NVIDIA also records all the conference sessions and makes the content available later for everyone to view. In fact, you can currently visit GTC On-Demand at the conference website to explore sessions from past conferences.

If nothing else, don’t miss the opening keynote address (March 19 at 9am PST) by Jen-Hsun Huang, NVIDIA’s co-founder, President and CEO. He’ll discuss “what’s next in computing and graphics, and preview disruptive technologies and exciting demonstrations across industries.” Jen-Hsun puts on quite a show. It’s not only informative with respect to NVIDIA’s direction and vision, but also entertaining to watch. After all, you’d expect nothing less from the industry leader in computer graphics and visualization.

And what about geospatial processing? How does GTC 2013 fit into the science of remote sensing and GIS? The answer lies in the power of GPU computing to transform our ability to process large datasets rapidly and implement complex algorithms. It’s a rapidly growing field, and it’s impressive to see the levels of speedup being achieved – in some cases more than 100x faster on the GPU than on the CPU alone. Amongst the conference sessions this year will be numerous general presentations and workshops on the latest techniques for leveraging GPUs to accelerate your processing workflow. More specifically, there will be a collection of talks directly related to remote sensing, such as detecting man-made structures in high resolution aerial imagery, retrieving atmospheric ozone profiles from satellite data, and implementing algorithms for orthorectification, pan-sharpening, color-balancing and mosaicking. Other relevant sessions include a talk on a real-time processing system for hyperspectral video, along with many more on a variety of other image processing topics.

HySpeed Computing is excited to see what this year’s conference has to offer. How about you?

For more on GTC 2013:


Smartphones in Space – STRaND-1 using a Google Nexus One for satellite operations

It seems someone is always coming up with a new application or novel use for their smartphone. Now we can add satellite operations to the latest list of smartphone innovations.

STRaND-1 satellite with team members from Surrey Space Center and Surrey Satellite Technology (credit SSTL)

The STRaND-1 satellite was successfully launched into space on 25 February 2013 from the Satish Dhawan Space Centre in Sriharikota, India. STRaND-1 (which stands for Surrey Training, Research, and Nanosatellite Demonstrator) contains a complete Google Nexus One running the Android operating system. According to STRaND-1 developers, this isn’t some “stripped-down” version of the phone, but rather the whole phone “mounted against one of the panels, with the phone camera peeping out through a porthole.”

STRaND-1 was developed by researchers at the University of Surrey’s Surrey Space Center as well as engineers from Surrey Satellite Technology. It is the first satellite from the United Kingdom built to the design specifications of the CubeSat program. By standardizing the design format of nanosatellites, the CubeSat program provides a cost-efficient avenue to launch and deploy small satellites. Organizations building CubeSats largely originate from academia, mostly universities and high schools, but also include commercial companies.

In the case of STRaND-1, the satellite measures just 10cm x 30cm and weighs only 4.3kg, and it was built using mostly commercial off-the-shelf components. The Google Nexus One smartphone will be used to run a number of Apps, including a collection selected from a community competition. These include: ‘iTesa’, which will record the magnitude of the magnetic field around the phone; ‘STRAND Data’, which will display satellite telemetry data on the phone; ‘360’, which will collect imagery of the Earth using the phone’s camera and then use this imagery to establish satellite position; and ‘Scream in Space’, which will project user-uploaded screams into space using the phone’s speakers.

After the initial phase of operation and experiments using a Linux-based computer, also onboard the satellite, a second phase of the STRaND-1 mission will switch satellite operations to the smartphone. This will not only further test the ability of off-the-shelf phone components to operate in a space environment, but also validate the phone’s ability to run advanced guidance, navigation and control systems. With this achievement, STRaND-1 will become the first ever smartphone-operated satellite.

The next time you pick up your phone, think about the possibilities.

For more information on STRaND-1:

How Big is Geo? – Google solicits reports on the geospatial industry

By now everyone has become familiar with digital maps and location-based services, from the satellite images on our mobile devices to the turn-by-turn directions in our automobiles to the weather maps used in our daily news broadcasts. This is all part of the larger geospatial industry, which spans government and other commercial markets in addition to familiar geospatial consumer products.

But just how big is the geospatial industry? To answer that question, Google recently commissioned two reports on the economics of the geospatial industry, a report on the U.S. from The Boston Consulting Group and a global report from Oxera.

According to the report by The Boston Consulting Group, within the U.S. economy alone, the geospatial services industry is estimated to employ more than 500,000 people, generate $75 billion in annual revenues, and have an overall economic impact estimated at $1.6 trillion annually. Oxera reports a similar impact for the global geospatial services industry, which is estimated to generate $150-$270 billion in annual revenues. The market is also forecast to continue growing in coming years, 30% per year globally according to Oxera and 10% per year in the U.S. according to The Boston Consulting Group. The overall diversity and growing importance of geospatial products in our society is the foundation for this continued growth.

To put this in perspective, Oxera compared this with the video game and airline industries, which are estimated at $25 billion and $594 billion per year, respectively. This puts geospatial at roughly 5-10 times the size of the video game industry and at least one third the size of the global airline industry. What makes geospatial so big? Consider that digital imagery and location-based services are essential components in resource management, supply chain logistics, infrastructure design, telecommunications, and national defense. Also consider the manufacturing industry involved in creating consumer products, as well as the satellite and space industry needed to make it all work.

As another example, geospatial products and services can contribute significant cost savings to existing markets. For instance, Oxera estimates that GPS generates $10 billion annually in cost savings through increased efficiency in logistics. Oxera similarly estimates that geospatial services contribute $8-$22 billion annually in savings for agriculture by improving irrigation.

Geospatial is clearly an expansive industry, and still growing. This indicates not just commercial opportunity, but also a robust job market, which includes everything from manufacturing and design to software and application development.

So to answer the question… geospatial is BIG.

Google and the Google logo are registered trademarks of Google Inc., used with permission.

The International Space Station – A unique platform for Earth observation

International Space Station (image: NASA).

From the launch of its first module in 1998, to its first onboard crew in 2000, to today’s expansive labyrinth of space laboratories and solar arrays, the International Space Station is a technological marvel and an icon of human innovation. The ISS is well known as a research facility for medicine, biology, physical science, physiology, space flight, and cutting-edge engineering.

But did you know the ISS is also home to a unique collection of facilities that can be used for Earth observing instruments? These include the Columbus – External Payload Facility (Columbus-EPF), the Expedite the Processing of Experiments to the Space Station Logistics Carrier (ELC), the Japanese Experiment Module – Exposed Facility (JEM-EF) and the Window Observational Research Facility (WORF). The Columbus-EPF, ELC and JEM-EF support external payloads, meaning remote sensing instruments can be mounted on the outside of the ISS. And the WORF supports internal payloads by providing a very high quality optical window through which instruments can view the Earth below.

Advantages of using the ISS as a remote sensing platform include the capacity to install new instruments with relative ease (at least compared with launching free-flying satellites), the ability to remove instruments and return them to the ground for post-mission analysis, and in some cases the option for crew interaction with the instrument while onboard the station. The ISS also has a unique orbit that differs from most Earth observing satellites, allowing image collection at different times of day and under different illumination conditions than would otherwise be possible. These same orbit characteristics, however, can also be a disadvantage with respect to image uniformity and operational requirements. Additionally, the solar arrays can at times interfere with observations. Nonetheless, the ISS is an excellent facility to test new instruments and explore new remote sensing capabilities.

So what are some of the instruments that have flown on the ISS? There are HICO (Hyperspectral Imager for the Coastal Ocean) and RAIDS (Remote Atmospheric and Ionospheric Detection System), which are integrated into a single payload installed on the JEM-EF. As the name implies, HICO is a hyperspectral instrument that has been optimized for imaging the nearshore aquatic environment. RAIDS is used for measuring the major constituents of Earth’s upper atmosphere, specifically the thermosphere and the ionosphere. There is also the EVC (Earth Viewing Camera), which is part of the European Technology Exposure Facility (EuTEF) deployed on Columbus-EPF. EVC is a commercial off-the-shelf digital camera used to capture color images of the Earth’s surface. A final example is ISSAC (International Space Station Agricultural Camera), which is installed as an internal payload on WORF. ISSAC is a multispectral camera measuring wavelengths in the visible and near infrared that primarily targets agricultural areas in the northern Great Plains. ISSAC is also particularly exciting, since it was largely built and operated by students at the University of North Dakota.

Those are just a few of the exciting instruments on the ISS. There are many others… and more instruments planned for future missions.

Are you involved in instrument development or image analysis related to the ISS? We’d love to hear your thoughts and stories and share them with the community.

For more about the ISS:

HySpeed Computing – Reviewing our progress and looking ahead

Join HySpeed Computing as we highlight our accomplishments from the past year and look ahead to what is sure to be a productive 2013.

The past year has been an eventful period in the life of HySpeed Computing. This was the year we introduced ourselves to the world, launching our website and engaging the community through social media platforms (i.e., the usual suspects – Facebook, LinkedIn and Google+). If you’re reading this, you’ve found our blog, and we thank you for your interest. We’ve covered a variety of topics to date, from community data sharing and building an innovation community to Earth remote sensing and high performance computing. As our journey continues we will keep sharing our insights, and we welcome you to participate in the conversation.

August of 2012 marked the completion of work on our grant from the National Science Foundation (NSF). The project, funded through the NSF SBIR/STTR and ERC Collaboration Opportunity, was a partnership between HySpeed Computing and the Bernard M. Gordon Center for Subsurface Sensing and Imaging Systems at Northeastern University. Through this work we were able to successfully utilize GPU computing to accelerate a remote sensing tool for the analysis of submerged marine environments. Our accelerated version of the algorithm was 45x faster than the original, thus approaching the capacity for real-time processing of this complex algorithm.
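
For a sense of what a kernel-level speedup like this means end-to-end, Amdahl’s law is a handy back-of-the-envelope tool. The sketch below uses illustrative numbers – the 95% accelerated fraction is an assumption for the example, not a measurement from our project:

```python
# Amdahl's law: overall speedup when only a fraction p of the
# total runtime is accelerated by a factor s.
def amdahl_speedup(p, s):
    """Overall speedup for an accelerated fraction p sped up s-fold."""
    return 1.0 / ((1.0 - p) + p / s)

# Illustrative: if 95% of a workflow's runtime is the GPU-accelerated
# kernel and that kernel runs 45x faster, the workflow as a whole
# speeds up by about 14x.
overall = amdahl_speedup(0.95, 45)
print(round(overall, 1))  # ~14.1
```

The serial remainder – file I/O, orchestration, data transfer – is why whole-workflow gains are usually smaller than kernel-level gains.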

HySpeed Computing president, Dr. James Goodman, also attended a number of professional conferences and meetings during 2012. This included showcasing our achievements in geospatial applications and community data sharing at the International Coral Reef Symposium in Cairns, Australia and the NASA HyspIRI Science Workshop in Washington, D.C., and presenting our accomplishments in remote sensing algorithm acceleration at the GPU Technology Conference in Pasadena, CA and the VISualize Conference in Washington, D.C. Along the way we met, and learned from, a wonderfully diverse group of other scientists and professionals. We are encouraged by the direction and dedication we see in the community and honored to be a contributor to this progress.

So what are we looking forward to in 2013? You heard it here first – we are proud to soon be launching HyPhoon, a gateway for accessing and sharing both datasets and applications. The initial HyPhoon release will focus on providing the community with free and open access to remote sensing datasets. We already have data from the University of Queensland, Rochester Institute of Technology, University of Puerto Rico at Mayaguez, NASA, and the Galileo Group, with additional commitments from others. This data will be available for the community to use in research projects, class assignments, algorithm development, application testing and validation, and in some cases also commercial applications. In other words, in the spirit of encouraging innovation, these datasets are offered as a community resource and open to your creativity. We look forward to seeing what you accomplish.

Connect with us through our website or via social media to pre-register and be among the first to access the data as soon as it becomes available!

Beyond datasets, HyPhoon will also soon include a marketplace for community members to access advanced algorithms and sell user-created applications. Are you a scientist with an innovative new algorithm? Are you a developer who can help transform research code into user applications? Are you working in the application domain and have ideas for algorithms that would benefit your work? Are you looking to reach a larger audience and expand your impact on the community? If so, we encourage you to get involved in our community.

HySpeed Computing is all about accelerating innovation and technology transfer.

LDCM Prepares for Launch – Continuing the Landsat legacy

NASA hosted a press conference last Thursday to highlight final preparations as LDCM – the Landsat Data Continuity Mission – prepares for launch on Feb 11, 2013.

With the countdown to this momentous launch drawing near – now less than a month away – the excitement amongst the panel members at the press conference was clearly evident. Many eyes from around the world will be expectantly watching Vandenberg Air Force Base in California next month as LDCM lifts off aboard an Atlas 5 rocket.

LDCM, which will officially be renamed Landsat 8 once in orbit, is important as the next satellite in the long-lived, and incredibly successful, Landsat program. With its new capabilities and improved instrument design, LDCM was described by panelists at the recent briefing as the “best Landsat ever”, delivering both “more data per day” and “higher quality data” than any previous Landsat.

Successful launch of LDCM becomes particularly crucial in light of recent announcements that Landsat 5 will soon be decommissioned and considering ongoing concerns related to the operational limitations of Landsat 7 [note that Landsat 6 failed to reach orbit during its 1993 launch, and thus never made it into operation]. While numerous other satellites provide their own capabilities for Earth observation, the unprecedented 40-year continuity of the Landsat program enables analysis of long-term trends and unique capabilities for the assessment and monitoring of our changing planet. LDCM thus represents more than just a new satellite, but a critically important continuation of numerous global science applications.

LDCM contains two science instruments, the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS). The OLI instrument will measure a total of nine spectral bands: eight multispectral bands at 30m resolution in the visible, near-infrared, and shortwave infrared; and one panchromatic band at 15m resolution in the visible. Unlike previous Landsat missions, which used whiskbroom instruments, the OLI utilizes a pushbroom configuration, thereby enabling improved signal-to-noise performance, i.e., improved data quality. The TIRS instrument will measure two thermal bands at 100m resolution, subdividing the single thermal band previously measured by Landsat 4-7. With this overall design configuration, the OLI and TIRS instruments together maintain the legacy of previous Landsat instruments, while at the same time expanding to include additional capabilities.
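
The dwell-time argument behind that signal-to-noise claim can be sketched with a quick calculation. Assuming photon-limited noise, SNR grows with the square root of integration time; a pushbroom detector stares at each ground pixel for the full line period, while a whiskbroom must share that period across all cross-track pixels. The detector and pixel counts below are illustrative round numbers, not Landsat specifications:

```python
import math

def snr_gain(cross_track_pixels, detectors_per_band=1):
    """Approximate SNR gain of a pushbroom over a whiskbroom, assuming
    photon-limited noise (SNR ~ sqrt of dwell time) and equal line periods.
    A whiskbroom divides each line period across all cross-track pixels
    (shared among its detectors); a pushbroom detector integrates for
    the entire line period."""
    dwell_ratio = cross_track_pixels / detectors_per_band
    return math.sqrt(dwell_ratio)

# Illustrative example: ~6,000 cross-track pixels scanned by a
# whiskbroom with 16 detectors per band vs. a full pushbroom array.
print(round(snr_gain(6000, 16), 1))  # ~19.4
```

The square-root relationship is why even a modest increase in dwell time translates into visibly cleaner imagery.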

LDCM is a joint program between NASA and USGS, with NASA handling instrument development, spacecraft design and launch, and USGS handling flight operations as well as data processing, distribution and archiving. Importantly, as has been the policy since 2008, data from Landsat 8 will be offered free to all interested users.

The launch will be streamed live on NASA TV. Don’t miss this historic occasion.

For more information on LDCM:

Satellites, Technology and Palm Trees – It’s all about CSTARS

HySpeed Computing recently visited CSTARS to learn more about the University of Miami’s remote sensing facility.

A short distance south of Miami, just down the Florida Turnpike and surrounded by a lush tropical landscape, is an advanced satellite download and image analysis facility. CSTARS – the Center for Southeastern Tropical Advanced Remote Sensing, owned and operated by the University of Miami – provides state-of-the-art research capabilities and data access for scientists around the world.


(Image credit: CSTARS)

The CSTARS facility was purchased by the University of Miami in 2000 and, after strategically phasing in new infrastructure and operations, officially launched in 2003. Present today on the 78-acre grounds are two 11.3m antennas, one 20m antenna, and several buildings containing the system controls and data processing equipment. These antennas are the links that ultimately connect data from orbiting satellites to researchers on the ground. While much of the facility is designed to run automatically, a number of scientists and staff are located onsite to handle satellite and antenna operations, conduct research investigations, and perform system maintenance.

Since its inception, imagery downloaded through the CSTARS antennas has been used as the foundation for a diverse range of scientific studies, including topics such as assessing water level changes in the Florida Everglades, investigating land subsidence trends, tracking global ocean currents, and monitoring volcanic activity. CSTARS is also notably included as a partner in one of the Department of Homeland Security Centers of Excellence, whose particular objective involves improving maritime and port security. And the facility has also played an important role in damage assessments and relief efforts associated with Hurricane Katrina, the Haiti earthquake and the Deepwater Horizon oil spill.

Interestingly, CSTARS operates satellite communications for the U.S. Antarctic Amundsen-Scott South Pole Station, thus providing a vital link between the station and the outside world. This is accomplished using GOES-3 (Geostationary Operational Environmental Satellite), a weather satellite launched in 1978 that ceased weather operations in 1993 but was reactivated as a communications satellite in 1995. It’s a great example of the resourcefulness and engineering dexterity that have enabled GOES-3 to continue operating and provide voice and data transmission to and from the Antarctic station.

CSTARS also represents an interesting piece of technology history in south Florida. The facility previously served as the site of the U.S. Naval Observatory Time Service Alternate Master Clock Station, which had the responsibility of providing accurate “atomic time” in the event of a failure of the master atomic clock in Washington, D.C. During the same period it was also one of the stations for the U.S. Very Long Baseline Interferometry program, which contributed fundamental data for better understanding the dynamics of the Earth’s surface. A further look through the U.S. Naval Observatory Station history reveals many other unique scientific advances achieved at the facility through the years.

From past to present, technology has played an integral part in defining this small plot of land in south Florida. For more information on CSTARS:

Remote Sensing in the Cloud – Introducing the ENVI Services Engine

A popular topic these days is cloud computing. And the world of remote sensing is no exception. New developments in software, hardware, and connectivity are offering innovative options for performing remote sensing image analysis and visualization tasks in the cloud.

One example of the recent advances in cloud computing capabilities for geospatial scientists is the development of the ENVI Services Engine by Exelis Visual Information Solutions (Exelis VIS). Taking what was previously the domain of desktop computing, this software engine brings the image analysis tools of ENVI into the cloud. This translates into an ability to deploy ENVI processing tools, such as image classification, anomaly detection and change detection, into an online environment. Additionally, because the system uses an HTTP REST interface and was constructed using open source standards, the software can be implemented across a variety of operating systems and hardware devices.
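
Because the interface is plain HTTP, any REST-capable client can submit a processing task. Here is a minimal sketch in Python; the endpoint path and parameter names are hypothetical placeholders for illustration, not the actual ENVI Services Engine API:

```python
import json
import urllib.request

def build_task_request(server, task, params):
    """Build an HTTP POST request that submits a processing task as JSON.
    The URL layout and parameter names are hypothetical placeholders."""
    payload = json.dumps({"taskName": task, "inputParameters": params}).encode()
    return urllib.request.Request(
        url=f"{server}/services/{task}",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example: ask a (hypothetical) server to classify an image in its catalog.
req = build_task_request(
    "http://example.com/ese",
    "ImageClassification",
    {"inputRaster": "scene_001.dat", "classes": 5},
)
# urllib.request.urlopen(req) would send it to a live server.
```

The same request could just as easily be issued from a tablet app or a browser, which is precisely the appeal of the REST approach.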

This flexibility of the ENVI Services Engine, and cloud computing in general, speaks directly to the “bring your own device” movement. Rather than being limited to certain operating systems or certain types of hardware, users have many more options to satisfy their preferences. Access and processing thus becomes feasible from a variety of tablets, mobile phones and laptops, in addition to the usual array of desktops and workstations.

As an example, consider the ability to access imagery and derived data layers from your favorite mobile device. Now consider being able to adjust your analysis on-the-fly from this same device based on observations made in the field. With the image processing tasks handled on remote servers, extensive computing capacity is no longer required on your local device. This enables not just remote access to image processing, but also on-demand visualization and display of entire databases full of different images and results.

Having the image processing tasks performed on the same servers where the imagery is stored, or on servers close to them, is also more computationally efficient, since the imagery does not need to be transferred to local computers and the results then transferred back to the servers. This is particularly relevant for large data archives, where even simple changes to existing algorithms, or the addition of new algorithms, may necessitate re-processing vast volumes of data.
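
The bandwidth arithmetic behind this point is easy to sketch; the archive size and link speed below are illustrative assumptions:

```python
def transfer_hours(archive_gb, link_mbps):
    """Hours needed just to move an archive over a network link."""
    gigabits = archive_gb * 8          # gigabytes -> gigabits
    seconds = gigabits * 1000 / link_mbps  # gigabits -> megabits, then Mb/s
    return seconds / 3600

# Illustrative: re-processing a 100 TB archive over a 1 Gbps link would
# spend more than nine days on data movement alone, before any computing.
print(round(transfer_hours(100_000, 1000), 1))  # ~222.2 hours
```

Moving the computation to where the data lives sidesteps that cost entirely.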

Although the concept of cloud computing is not new, it has become apparent that the software and hardware landscape has evolved, making cloud computing for geospatial analysis significantly more attractive than ever before.

Attendees of the VISualize conference earlier this year received a sneak peek at the ENVI Services Engine. The software was also on display at the GEOINT conference this past October. However, official release of the software isn’t scheduled until early 2013. For more information:

The Future of NASA Earth Science – Preview of upcoming satellite launches


The “Blue Marble” (image courtesy NASA)

Since its establishment in 1958, NASA has become well known for its advances in space exploration, and closer to home, highly recognized for its long history of scientific research using Earth observing satellites. From the early days of the TIROS program, whose first satellite was launched in 1960, to the more recent Landsat program, which has spanned 40 years of operation from 1972 to present (…and still going), NASA has been a leader in using satellite observations to improve our understanding of Earth.

NASA is currently operating an unprecedented number of Earth observing satellites, with many more in the pipeline. Here’s a look at some of the instruments NASA plans on launching in the coming years:

LDCM: Landsat Data Continuity Mission. As mentioned, the Landsat program has been operating since 1972. This longevity has enabled an enormous volume of remote sensing research to be accomplished, primarily focused on land surfaces but also including applications in the shallow coastal zone. With the lifespans of all the previous Landsat instruments reaching their end, and a hardware failure on Landsat 7, NASA recognized the need to move forward with a replacement to this important family of instruments. The LDCM will contain two instruments, the Operational Land Imager, measuring nine bands in the visible to short wave infrared, eight multispectral and one panchromatic, and the Thermal Infrared Sensor, measuring two thermal bands. LDCM, a collaborative mission between NASA and USGS, is currently scheduled for launch in early 2013.

GPM: Global Precipitation Measurement. The GPM mission, an international partnership co-led by NASA and JAXA (Japan Aerospace Exploration Agency), builds on the success of TRMM (Tropical Rainfall Measuring Mission), launched in 1997. Whereas TRMM was designed to measure rainfall in the tropical and sub-tropical regions, GPM will acquire global measurements of both rainfall and snow. The concept for the GPM mission centers on a Core Observatory satellite, which will contain the latest advanced instruments to serve as a reference for calibrating measurements from a host of other operational satellites. The GPM Core Observatory contains two instruments, the GMI (GPM Microwave Imager) and the DPR (Dual-Frequency Precipitation Radar). The GPM Core Observatory is scheduled for launch in 2014.

OCO-2: Orbiting Carbon Observatory. The OCO-2 mission is a replacement for the original OCO instrument, which was launched in 2009 but unfortunately failed to reach orbit. OCO-2 will acquire precise global measurements of atmospheric carbon dioxide, providing scientists with an unprecedented ability to explore the spatial and temporal patterns of carbon dioxide levels in our planet’s atmosphere. Measurements will be obtained using a single instrument containing three separate spectrometers to measure three narrow bands in the near-infrared that are sensitive to the presence of atmospheric gases. OCO-2 is scheduled for launch in 2014.

SMAP: Soil Moisture Active Passive. Understanding soil moisture plays an important role in weather and climate forecasting, as well as predicting droughts, floods, landslides and agricultural productivity. To address this need, the SMAP mission will deliver global measurements of both soil moisture and its freeze/thaw state. SMAP measurements will be made using two L-band instruments, a radiometer and a synthetic aperture radar. Utilizing the L-band frequency allows measurements to be acquired night or day, irrespective of cloud cover, and even through moderate vegetation. SMAP is scheduled for launch in late 2014.

As each new instrument passes through the requisite design review process, it moves closer to approval for launch. Listed above are just some of the instruments approaching this auspicious achievement. There are many more on the way, with even more in the early planning stages. As a result of this ongoing progress, our ability to assess and monitor the condition of our planet has never been greater, with bold plans to continue improving this capacity in the future.

For more on NASA’s history, visit:

For information on NASA’s satellite program, visit:

HyspIRI Science Workshop Day 3 – Community data and international collaboration

The final day of the HyspIRI Science Workshop saw emphasis on international collaborations and development of shared data resources for the remote sensing community. Vibrant conversations were heard around the meeting throughout the day, covering an array of topics, but mostly focusing on how remote sensing can be used to assist in addressing key societal questions, such as climate and environmental change.

In addition to ongoing presentations related to the NASA HyspIRI mission, colleagues from other countries described international efforts to develop satellite instruments using similar technologies. For example, DLR, the German Aerospace Center, reported great progress with EnMAP (Environmental Mapping and Analysis Programme). An exciting aspect of the EnMAP mission is that agreements have recently been established to make data from the mission freely available to interested researchers. Advances are also being made with HISUI (Hyperspectral Imager Suite), which is being developed by the Japanese Ministry of Economy, Trade and Industry, and with PRISMA (PRecursore IperSpettrale della Missione Applicativa), which is a combined imaging spectrometer and panchromatic camera system under development by the Italian Space Agency.

Liane Guild (NASA ARC) discusses NASA’s COAST project with Sergio Cerdeira Estrada (CONABIO), Frank Muller-Karger (USF) and Ira Leifer (UCSB)

But it wasn’t all about satellites. Significant attention was also placed on the various airborne missions being used to demonstrate technology readiness, as well as perform their own valuable scientific investigations. This includes instruments such as AVIRIS, AVIRIS-ng, HyTES, PHyTIR, PRISM and APEX. The research being conducted using these instruments, which include both imaging spectrometers and multispectral thermal systems, is vital for validating engineering design components, data delivery mechanisms, calibration procedures, and image analysis algorithms. As a result, these instruments represent important steps forward in the progress of the HyspIRI mission. However, they also independently have great value, providing numerous opportunities for remote sensing scientists to develop new methods and deliver innovative research results.

In addition to the instruments themselves, scientists are also working towards improving overall data availability, calibration techniques and field validation methods. For example, NASA JPL is enlisting the remote sensing community to build an open-access spectral library, with the impressive goal of cataloging the spectral characteristics of as many of the Earth’s natural and manmade materials as possible. Such spectra represent important components in a variety of image classification and analysis algorithms. Other programs, such as the NEON project in the U.S. and the TERN project in Australia, are focused on collecting field data from example study sites and providing the data for others to use in their own research projects. It’s encouraging to see this level of community and collaboration.

As evidenced by the presentations and posters at the workshop, imaging spectrometry is a mature science with a wealth of proven application areas. However, this won’t stop scientists from continuing to innovate and push the limits of what can be achieved using this technology. There’s always a new idea around the next corner, and it’s workshops like this that help promote information exchange, development of new collaborations, and the creation of new research directions.

Presentations from the HyspIRI Science Workshop and information on the HyspIRI mission are available online.