Spectral imagery explained

In this blog post I will explain, at least on a rudimentary level and from a physics perspective, what multispectral and hyperspectral imagery is about. Let’s start with the concept of channels.

With visible light, your eyes see the reflected or radiated electromagnetic energy within a certain frequency spectrum. Looking at a computer screen you see a multitude of colors; however, the computer produces this light in channels (frequency bands): red, green and blue. Each color your eye perceives is a blend of different intensities of these frequency bands.
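To make the channel idea concrete, here is a minimal Python sketch (with hypothetical pixel values) that represents a tiny screen image as an array holding one intensity per band:

```python
import numpy as np

# A 2x2 pixel "screen" as a height x width x channel array.
# The last axis holds the intensity in each frequency band,
# [red, green, blue], as 8-bit values from 0 to 255.
image = np.array([
    [[255,   0,   0], [  0, 255,   0]],   # a red pixel, a green pixel
    [[255, 255,   0], [128, 128, 128]],   # red+green blend (yellow), mid gray
], dtype=np.uint8)

# The color you perceive at a pixel is just its vector of band intensities:
print(image[1, 0])  # [255 255   0] -> perceived as yellow
```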

If you were another animal, you might see light differently. Some animals see infrared radiation and others see ultraviolet, both of which are invisible to the human eye. Now, imagine if we could view the world taking in other frequencies of the electromagnetic spectrum. This is where multispectral and hyperspectral sensors come in.

Electromagnetic Spectrum

Visible light (red, green and blue), infrared and ultraviolet are defined regions in the electromagnetic spectrum. Each region is categorized by its wavelength λ (or, equivalently, its frequency f, since f = v/λ, where v is the propagation speed); see the sketch after the list below.

  • Visible light (380 nm to 700 nm)

Outside the visible wavelengths we find:

  • Infrared (700 nm to 1 mm)
  • Ultraviolet (10 nm to 380 nm)
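To make these definitions concrete, here is a small Python sketch that classifies a wavelength using the ranges above and converts it to a frequency via f = v/λ, taking v to be the speed of light in vacuum:

```python
# Classify a wavelength into a spectral region and compute its
# frequency with f = v / lambda. In vacuum, v = c ≈ 3.0e8 m/s.

C = 3.0e8  # propagation speed in vacuum, m/s

def classify(wavelength_nm):
    """Map a wavelength in nanometers to its spectral region."""
    if 10 <= wavelength_nm < 380:
        return "ultraviolet"
    if 380 <= wavelength_nm <= 700:
        return "visible"
    if 700 < wavelength_nm <= 1e6:   # 1 mm = 1,000,000 nm
        return "infrared"
    return "outside UV/visible/IR"

def frequency_hz(wavelength_nm):
    """f = v / lambda, with the wavelength converted from nm to m."""
    return C / (wavelength_nm * 1e-9)

print(classify(550), f"{frequency_hz(550):.3e} Hz")  # visible, ~5.455e+14 Hz
print(classify(850), f"{frequency_hz(850):.3e} Hz")  # infrared (near-IR)
```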

Multispectral and hyperspectral imagery give us the power to see wavelengths other than red, green and blue, such as infrared and ultraviolet, along with any additional wavelengths visible to the sensor.

The main difference between multispectral and hyperspectral is the number of bands and how narrow the bands are. Think of it as splitting the sensor’s total bandwidth into defined sections, much as the computer uses red, green and blue to emulate the whole visible color spectrum.
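As a sketch of that band-splitting idea, the following snippet divides a hypothetical sensor range of 400 nm to 1000 nm (visible through near-infrared) into equal bands; the sensor range and band counts are illustrative, not any particular product’s specification:

```python
import numpy as np

def split_bands(start_nm, stop_nm, n_bands):
    """Return (band_start, band_stop) pairs covering [start_nm, stop_nm]."""
    edges = np.linspace(start_nm, stop_nm, n_bands + 1)
    return list(zip(edges[:-1], edges[1:]))

# Few broad bands vs. many narrow bands over the same total bandwidth:
multi = split_bands(400, 1000, 5)     # "multispectral": 120 nm per band
hyper = split_bands(400, 1000, 200)   # "hyperspectral": 3 nm per band

print(multi[0])   # (400.0, 520.0)
print(hyper[0])   # (400.0, 403.0)
```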

Sensors

In the drone industry, spectral analysis is done with passive sensors, which depend on ambient, reflected radiation. Since ambient radiation levels vary with atmospheric conditions, it is important to measure the incoming light as a reference. Also, due to atmospheric absorption, certain wavelengths are not used because too little reflected energy reaches the sensor.
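Here is a hedged sketch of why that reference measurement matters: reflectance is the per-band ratio of reflected to incident energy, so dividing the sensor reading by the incident spectrum corrects for changing ambient light. The array names and values below are hypothetical:

```python
import numpy as np

# Per-band sensor reading and incident light from a reference sensor
# (hypothetical values for a three-band capture):
measured_radiance = np.array([0.12, 0.30, 0.45])
incident_irradiance = np.array([0.40, 0.60, 0.90])

# Reflectance is a property of the surface, largely independent of
# how bright the ambient illumination happened to be:
reflectance = measured_radiance / incident_irradiance
print(reflectance)  # [0.3 0.5 0.5]
```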

Multispectral imagery generally refers to 3 to 10 bands, where each band is obtained using a remote sensing radiometer. Each band has a spatial resolution as well as a wavelength resolution (the spectral information captured per pixel). The same holds for hyperspectral sensors.

Hyperspectral imagery consists of many more and much narrower bands. A hyperspectral image can have hundreds of bands, obtained by an imaging spectrometer. The drawback of high-resolution hyperspectral sensors is the added complexity: several hundred narrow bands require a lot of computing power to analyze.
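As a back-of-the-envelope illustration of that compute burden, raw data volume grows linearly with the band count. The sensor dimensions and sample size below are hypothetical:

```python
def cube_size_mb(width, height, n_bands, bytes_per_sample=2):
    """Size of one image cube in megabytes (assuming 16-bit samples)."""
    return width * height * n_bands * bytes_per_sample / 1e6

print(cube_size_mb(1280, 960, 5))     # multispectral: ~12 MB per capture
print(cube_size_mb(1280, 960, 300))   # hyperspectral: ~737 MB per capture
```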

Ultraspectral imagery is more of the same, but with the total spectral bandwidth divided into thousands of bands.


Conclusion

Multispectral sensors have broader bands, whereas hyperspectral bands are finer; the bottom line is the amount of spectral information that can be retrieved to perform some sort of analysis. Images produced by hyperspectral sensors contain much more data than images from multispectral sensors and have a greater potential to detect differences among land and water features.

Hyperspectral and multispectral images have many real-world applications. For example, hyperspectral imagery has been used to map invasive species and help in mineral exploration. There are also more applications in the fields of precision agriculture, ecology, oceanography, oil and gas and atmospheric studies where multispectral and hyperspectral remote sensing are being used to better describe the world we live in.

As an example, in a recent blog post I wrote about the use of multispectral sensors and UAVs in precision agriculture.
