
Innovative Food Science and Emerging Technologies 19 (2013) 1–14


Advanced applications of hyperspectral imaging technology for food quality and safety analysis and assessment: A review – Part I: Fundamentals

Di Wu, Da-Wen Sun ⁎

Food Refrigeration and Computerised Food Technology (FRCFT), School of Biosystems Engineering, University College Dublin, National University of Ireland, Agriculture & Food Science Centre, Belfield, Dublin 4, Ireland

Article info

Article history:
Received 23 February 2012
Accepted 21 April 2013
Editor Proof Received Date 24 May 2013
Keywords:
Hyperspectral imaging
Imaging spectroscopy
Food quality
Food safety
Image processing
Image analysis
Spectrometry

Abstract
By integrating two classical optical sensing technologies, imaging and spectroscopy, into one system, hyperspectral imaging provides both spatial and spectral information simultaneously. Hyperspectral imaging therefore has the capability to monitor, rapidly and non-invasively, both the physical and morphological characteristics and the intrinsic chemical and molecular information of a food product for the purpose of quality and safety analysis and assessment. As the first part of this review, some fundamental knowledge about hyperspectral imaging is presented, including the relationship between spectroscopy, imaging, and hyperspectral imaging; the principles of hyperspectral imaging; instruments for hyperspectral imaging; processing methods for data analysis; and a discussion of advantages and disadvantages.
Industrial relevance: It is anticipated that real-time food monitoring systems based on this technique will meet the requirements of modern industrial control and sorting systems in the near future.
© 2013 Elsevier Ltd. All rights reserved.

Contents

1. Introduction
2. Relationship between spectroscopy, imaging, and hyperspectral imaging
3. Principles of hyperspectral imaging
   3.1. Classes of spectral imaging
   3.2. Hyperspectral cube
   3.3. Acquisition of hyperspectral images
   3.4. Image sensing modes
4. Hyperspectral imaging instruments
   4.1. Light sources
      4.1.1. Halogen lamps
      4.1.2. Light emitting diodes (LEDs)
      4.1.3. Lasers
      4.1.4. Tunable light sources
   4.2. Wavelength dispersion devices
      4.2.1. Filter wheels
      4.2.2. Imaging spectrographs
      4.2.3. Tunable filters
      4.2.4. Fourier transform imaging spectrometers
      4.2.5. Single shot imagers
   4.3. Area detectors
      4.3.1. CCD detector
      4.3.2. CMOS detector
   4.4. Calibration of hyperspectral imaging system
5. Hyperspectral image processing methods
   5.1. Reflectance calibration of hyperspectral images
   5.2. Image enhancement and spectral preprocessing
   5.3. Image segmentation
   5.4. Object measurement
   5.5. Multivariate analysis
   5.6. Optimal wavelength selection
   5.7. Model evaluation
   5.8. Visualization of quality images
6. Advantages and disadvantages of hyperspectral imaging
7. Conclusions
Acknowledgments
References

⁎ Corresponding author. Tel.: +353 1 7167342; fax: +353 1 7167493.
E-mail address: dawen.sun@ucd.ie (D.-W. Sun). URLs: http://www.ucd.ie/refrig, http://www.ucd.ie/sun (D.-W. Sun).
1466-8564/$ – see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.ifset.2013.04.014

1. Introduction
Food products with high quality and safety are always expected and demanded by consumers, leading to the introduction of food safety legislation and mandatory inspection of food products. The development of accurate, rapid and objective quality inspection systems throughout the entire food process is important for the food industry to ensure the safe production of food during processing operations and the correct labeling of products with regard to quality, safety, authenticity and compliance. Currently, human visual inspection is still widely used, but it is subjective, time-consuming, laborious, tedious and inconsistent. Commonly used instrumental approaches are mainly analytical chemical methods, such as mass spectrometry (MS) and high performance liquid chromatography (HPLC). However, these have several disadvantages: they are destructive and time-consuming, cannot handle a large number of samples, and sometimes require lengthy sample preparation. Therefore, it is critical and necessary to apply accurate, reliable, efficient and non-invasive alternatives to evaluate quality and quality-related attributes of food products.

Recently, optical sensing technologies have been researched as potential tools for non-destructive analysis and assessment of food quality and safety. In particular, by integrating spectroscopic and imaging techniques into one system that can acquire a spatial map of spectral variation, hyperspectral imaging (also called imaging spectroscopy or imaging spectrometry) has been widely studied and developed, resulting in many successful applications in the quality assessment of food products. A general overview of applications in quality determination for numerous food products is given in the second part of this review.

2. Relationship between spectroscopy, imaging, and hyperspectral imaging
Non-contact optical techniques such as spectroscopy and imaging are extremely advantageous for online inspection of agricultural and food products to guarantee their quality and safety. Spectroscopy is a promising method for determining the essential qualities of food products based on the measurement of optical properties (Bock & Connelly, 2008; Cen & He, 2007). However, the spectroscopy technique does not give information on the spatial distribution of traits in food products, which greatly limits its application to quantifying spatially distributed and structure-related attributes. On the other hand, measurement of the external features of food products can be achieved by a conventional imaging system or, more specifically, computer vision (Du & Sun, 2005, 2006; Sun & Brosnan, 2003; Wu & Sun, 2012; Zheng, Sun, & Zheng, 2006). However, because it operates at visible wavelengths in the form of monochromatic or color images, a conventional imaging system is incapable of inspecting specimens with similar colors, classifying complex objects, predicting chemical components, and detecting invisible defects.


With the integration of the main advantages of spectroscopy and imaging, the hyperspectral imaging technique can simultaneously acquire spectral and spatial information in one system, which is critical for the quality prediction of agricultural and food products. Hyperspectral imaging can be applied for quantitative prediction of the inherent chemical and physical properties of a specimen as well as of their spatial distribution. If a conventional spectral measurement answers the question of "what" and conventional imaging answers the question of "where", hyperspectral imaging answers the question of "where is what". Table 1 shows the main differences among imaging, spectroscopy, and hyperspectral imaging techniques.
3. Principles of hyperspectral imaging
A good understanding of the principles of hyperspectral imaging is
crucial for the use of this tool. Therefore, some fundamental knowledge
is introduced in this section.
3.1. Classes of spectral imaging
A spectral imaging system produces a stack of images of the same object at different spectral wavelength bands. There are three main classes in the field of spectral imaging, namely multispectral, hyperspectral, and ultraspectral imaging. The concept behind these classes is similar; the main difference is the number of images within the spectral cube. Hyperspectral imaging systems acquire images at a large number of contiguous wavebands (with bandwidths normally less than 10 nm). There are usually dozens or hundreds of images, so that every pixel in the hyperspectral image has its own spectrum over a contiguous wavelength range (Ariana & Lu, 2008b). Unlike hyperspectral imaging, multispectral imaging systems cannot provide a real spectrum for every image pixel. Multispectral images usually have fewer than ten spectral bands, although some have dozens; the spectral resolution of multispectral imaging systems is therefore usually coarser than 10 nm. Besides the use of bandpass filters, a 3CCD (three charge-coupled device) camera is also commonly used for acquiring multispectral images. A 3CCD camera has three discrete image sensors and a dichroic beam splitter prism that splits the light into three spectral bands. Although the spectral resolution of multispectral imaging is lower than that of hyperspectral imaging, its acquisition speed is faster: 3CCD cameras usually acquire dozens of frames per second, while it usually takes several seconds to measure a hyperspectral image. There is no quantitative criterion distinguishing hyperspectral from ultraspectral images; ultraspectral imaging systems are simply considered to have a very fine spectral resolution.
3.2. Hyperspectral cube
A hyperspectral image is a three-dimensional (3D) hyperspectral cube (also called a hypercube, spectral cube, spectral volume, datacube, or data volume), which is composed of voxels (also called vector pixels) containing spectral information (of wavelengths λ) as well as two-dimensional spatial information (of x rows and y columns). As an example, the hyperspectral cube of a fish fillet acquired in reflectance mode is illustrated in Fig. 1. The raw hyperspectral cube consists of a series of contiguous sub-images, one behind the other, at different wavelengths (Fig. 1.a). Each sub-image provides the spatial distribution of the spectral intensity at a certain wavelength. This means that a hyperspectral image, described as I(x, y, λ), can be viewed either as a separate spatial image I(x, y) at each individual wavelength (λ), or as a spectrum I(λ) at each individual pixel (x, y). From the first view, a spatial image at any wavelength within the spectral sensitivity of the system can be picked out of the hyperspectral cube (Fig. 1.b). Such a gray scale image shows the variation in spectral intensity of the imaged object at that wavelength due to the distribution of its corresponding chemical components. For example, an image within the hypercube at a single waveband centered at 980 nm with a bandwidth of 5 nm (Fig. 1.b) can show the relative moisture distribution in the fish fillet, which is difficult to observe in an RGB image (Fig. 1.c). The pixels with high moisture content appear as the darkest parts of this image because an absorption band of the O–H stretching second overtone of water lies around 980 nm. From the second view, the spectrum at a certain position within the specimen can be considered as the unique spectral fingerprint of that pixel, characterizing the composition of that particular pixel (Fig. 1.d).
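To make the two views of the hypercube concrete, the following is a minimal sketch in Python/NumPy, assuming a hypothetical array `cube` ordered as rows × columns × bands together with a matching vector of band-centre wavelengths (names and dimensions are illustrative, not from the original system):

```python
import numpy as np

# Hypothetical hypercube I(x, y, lambda): 200 x 300 spatial pixels, 121 bands (400-1000 nm, 5 nm step)
wavelengths = np.linspace(400, 1000, 121)      # assumed band centres in nm
cube = np.random.rand(200, 300, 121)           # placeholder data standing in for measured intensities

# View 1: the spatial image I(x, y) at the band closest to 980 nm (water absorption region)
band_980 = int(np.argmin(np.abs(wavelengths - 980.0)))
image_980 = cube[:, :, band_980]               # 200 x 300 gray scale image

# View 2: the spectrum I(lambda) of one pixel, e.g. row 50, column 120
spectrum = cube[50, 120, :]                    # 121-point spectral fingerprint of that pixel
```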
3.3. Acquisition of hyperspectral images
There are four approaches to acquiring 3-D hyperspectral image cubes I(x, y, λ): point scanning, line scanning, area scanning, and the single shot method, as illustrated in the upper half of Fig. 2. In the point scanning method (also known as the whiskbroom method), a single point is scanned at one pixel to provide the spectrum of that point (Fig. 2.a), and other points are scanned by moving either the detector or the sample along the two spatial dimensions (x and y). The resulting hyperspectral cube is stored in the band-interleaved-by-pixel (BIP) format. For an image stored in BIP format, the first pixel is stored for all bands in sequential order, followed by the second pixel for all bands, then the third pixel for all bands, and so on, interleaved up to the number of pixels. This format is optimal for accessing the spectral information of each pixel. The disadvantages of the whiskbroom method are that positioning the sample is very time-consuming and that precise repositioning hardware is needed to ensure repeatability. The second approach, illustrated in Fig. 2.b, is called the line scanning method or pushbroom method; it records a whole line of the image at once, together with the spectral information corresponding to each spatial pixel in the line. A complete hyperspectral cube is obtained as the line is scanned along the x dimension (Fig. 2.b), and the resulting cube is stored in the band-interleaved-by-line (BIL) format, a scheme that stores the pixel values band by band for each line, or row, of the image. Because of its continuous scanning in one direction, line scanning is particularly suitable for the conveyor belt systems that are commonly used in food processing lines. Therefore, line scanning is the most popular method of acquiring hyperspectral images for food quality and safety inspection. The disadvantage of the pushbroom technique is that only one exposure time can be set for all wavelengths. This exposure time has to be short enough to avoid saturation of the spectrum at any wavelength, resulting in underexposure of other spectral bands and low accuracy of their spectral measurement.
Fig. 1. Schematic diagram of a hyperspectral image (hyperspectral cube) for a piece of fish fillet.

Fig. 2. Acquisition approaches of hyperspectral images (scanning directions are shown by arrows, and gray areas show the data acquired each time) and image sensing modes.

The above two methods are spatial scanning methods, whereas area or plane scanning (also known as the band sequential or wavelength scanning method) is a spectral scanning method, as illustrated in Fig. 2.c. This approach keeps the image field of view fixed and acquires a 2-D monochrome image (x, y) with full spatial information at a single wavelength at a time. The scan is repeated over the whole wavelength range, resulting in a stack of single-band images stored in the band sequential (BSQ) format. In this very simple format, each line of the first band is followed immediately by the next line of the same band; all lines of the second band then follow, then the third band, and so on, interleaved up to the number of bands. This format provides easy access to the spatial (x, y) information at a single spectral band. As the detector is exposed to only a single wavelength at a time, a suitable exposure time can be set for each wavelength. In addition, area scanning does not require moving either the sample or the detector and is suitable for applications in which the object remains stationary for a while, such as excitation–emission fluorescence imaging. A disadvantage of area scanning is that it is not suitable for moving samples or real-time inspection on a processing line. Finally, the single shot method records both spatial and spectral information with one exposure on a large area detector (Fig. 2.d), making it very attractive when fast hyperspectral imaging is required. However, this method is still at an early stage of development, with limited spatial resolution and a narrow spectral range.
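The three storage formats mentioned above differ only in which index varies fastest in the file. As a minimal sketch (assuming the same kind of hypothetical NumPy hypercube, ordered lines × samples × bands), the interleaving can be expressed as axis reordering:

```python
import numpy as np

lines, samples, bands = 200, 300, 121
cube = np.random.rand(lines, samples, bands)   # placeholder hypercube, axes: line (x), sample (y), band

# BIP (band-interleaved-by-pixel): all bands of the first pixel, then all bands of the second pixel, ...
bip = cube                                     # already in line -> sample -> band order

# BIL (band-interleaved-by-line): for each line, band 1 of all samples, band 2 of all samples, ...
bil = np.transpose(cube, (0, 2, 1))            # line -> band -> sample

# BSQ (band sequential): the complete image of band 1, then the complete image of band 2, ...
bsq = np.transpose(cube, (2, 0, 1))            # band -> line -> sample

# Flattening any of these arrays in C (row-major) order reproduces the byte order of the file format.
```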

3.4. Image sensing modes

There are three common sensing modes for hyperspectral imaging, namely reflectance, transmittance, and interactance, as illustrated in the lower half of Fig. 2. The positions of the light source and the optical detector (camera, spectrograph, and lens) differ for each acquisition mode. In reflectance mode, the detector captures the light reflected from the illuminated sample in a specific configuration chosen to avoid specular reflection (Fig. 2.e). External quality features, such as size, shape, color, surface texture and external defects, are typically detected in reflectance mode. In transmittance mode, the detector is located on the opposite side of the sample from the light source (Fig. 2.f) and captures the light transmitted through the sample, which carries more valuable internal information but is often very weak (Schaare & Fraser, 2000). Transmittance mode is usually used to determine internal component concentrations and to detect internal defects of relatively transparent materials such as fish, fruit, and vegetables. However, transmittance mode has a low signal level owing to light attenuation and is affected by the thickness of the sample. In interactance mode, both the light source and the detector are located on the same side of the sample, parallel to each other (Fig. 2.g). With such a setup, the interactance mode probes deeper into the sample and suffers less from surface effects than reflectance mode. Meanwhile, the interactance mode reduces the influence of thickness, which is a practical advantage over transmittance. It should be noted that a special setup is required in the interactance mode to seal the light in order to prevent specular reflection from directly entering the detector (Nicolai et al., 2007).

4. Hyperspectral imaging instruments

Proper instrumentation is basic and important for acquiring reliable hyperspectral images of high quality. Selecting the components of the instrument and designing its setup and calibration require a good understanding of the configuration and calibration of a hyperspectral imaging system.

4.1. Light sources

Light sources generate light as an information carrier to excite or illuminate the target, and are an essential part of optical inspection systems. Typical light sources used in hyperspectral imaging systems include halogen lamps, light emitting diodes, lasers, and tunable light sources.
4.1.1. Halogen lamps
As broadband illumination sources, halogen lamps are commonly used for illumination in the visible (VIS) and near-infrared (NIR) spectral regions. Typically, a lamp filament made of tungsten wire is placed in a quartz glass bulb filled with a halogen gas such as iodine or bromine. The output light is generated by incandescent emission when the filament is at high temperature, and forms a smooth continuous spectrum from the visible to the infrared without sharp peaks. Halogen lamps work at low voltage and are considered all-purpose illumination sources. Tungsten halogen lamps have been used as illumination units in hyperspectral reflectance measurements (Wu, Shi et al., 2012; Wu & Sun, 2013). In hyperspectral transmittance measurements, high-intensity halogen lamps have also been used for probing the internal properties of food (Ariana & Lu, 2008a). The disadvantages of halogen lamps include a relatively short lifetime, high heat output, spectral peak shift due to temperature change, output instability due to operating voltage fluctuations, and sensitivity to vibration.
4.1.2. Light emitting diodes (LEDs)
An LED is a semiconductor light source. LED technology has advanced rapidly because of its advantages of small size, low cost, fast response, long lifetime, low frequency of bulb replacement, low heat generation, low energy consumption, robustness, being cool to the touch without risk of burning, and insensitivity to vibration. LEDs are solid-state sources that do not rely on a filament for incandescent emission; they emit light when a semiconductor is electrified, and were first used as small indicator lights on instrument panels. Depending on the materials used for the p–n junction, LEDs can produce not only narrowband light at different wavelengths in the ultraviolet, visible or infrared region, but also high-intensity broadband white light. Because of their directional output, LEDs are good at spot lighting, sending light in one direction with little loss of energy. According to different illumination requirements, LEDs can be assembled in different arrangements such as spot, line, and ring lights. Owing to the benefits mentioned above, LED lights have started to become the illumination units of hyperspectral imaging systems in food inspection applications (Park, Yoon et al., 2011). The disadvantages of LEDs include sensitivity to wide voltage fluctuations and to junction temperature, low light intensity compared with halogen lights, and grainy light when multiple LEDs are used in one bulb. Currently, the wavelength ranges of LEDs extend mainly from the ultraviolet to the short-wave near infrared, while some LEDs emit light from the long-wave near infrared to the mid-infrared region. With the development of new materials and electronics, LED technology is still advancing and is expected to become a mainstream light source.
4.1.3. Lasers
Unlike tungsten halogen lamps and white LEDs, which generate broadband light, lasers are directional monochromatic light sources widely used as excitation sources in fluorescence and Raman measurements. Lasers generate light by stimulated emission. A laser has three basic components, namely a resonant optical cavity (optical resonator), a laser gain medium (active laser medium), and a pump source to excite the particles in the gain medium. Monochromaticity, directionality, and coherence are three unique properties of lasers. When a food sample is excited by a monochromatic beam of light of high energy, the electrons in molecules of certain compounds of the food are excited and emit light of lower energy over a broad wavelength range, resulting in fluorescence emission or Raman scattering. Both fluorescence imaging and Raman imaging are sensitive optical techniques that carry composition information at the pixel level and can detect subtle changes in food quality. Recently, lasers have been utilized as excitation sources in hyperspectral fluorescence imaging (Cho et al., 2009) and Raman imaging (Qin, Chao, & Kim, 2011) for quality inspection of food. Moreover, because of their ability to produce narrowband pulsed light, LEDs are now also used as excitation sources for fluorescence measurement in food quality inspection (Yang et al., 2012), although light generated by lasers has higher intensity and narrower bandwidth than that from LEDs.
4.1.4. Tunable light sources
In many current hyperspectral imaging systems for quality and safety inspection of food, the wavelength dispersion device is placed between the detector and the sample to disperse the light into different wavelengths after interaction with the sample. An equivalent approach is to combine broadband illumination and the wavelength dispersion device, forming a tunable light source. Tunable light sources allow direct area scanning to obtain both spatial and spectral information of the sample by placing the wavelength dispersion device in the illumination light path instead of the imaging light path. Because only narrowband light is incident on the object at any time, the intensity delivered by a tunable light source is relatively weak, which reduces irradiance and heat damage to the sample. Tunable light sources have therefore been used for inspecting historical documents, which require weak illumination to protect the sample (Klein, Aalderink, Padoan, de Bruin, & Steemers, 2008). Tunable light sources are mainly used for area scanning and are not efficient for point and line scanning; consequently, they are not well suited to conveyor belt systems.
4.2. Wavelength dispersion devices
Wavelength dispersion devices are important for hyperspectral imaging systems that use broadband illumination. Their function is to disperse broadband light into different wavelengths. Typical examples include filter wheels, imaging spectrographs, acousto-optic tunable filters, liquid crystal tunable filters, Fourier transform imaging spectrometers, and single shot imagers.
4.2.1. Filter wheels
A filter wheel carrying a set of discrete bandpass filters is the most basic and simple device for wavelength dispersion. Bandpass filters transmit light at a particular wavelength efficiently while rejecting light at other wavelengths. A broad range of filters, from the ultraviolet and visible to the near infrared and with various specifications, is commercially available to satisfy different demands. Limitations of filter wheels include mechanical vibration from moving parts, slow wavelength switching, and image misregistration due to the filter movement.
4.2.2. Imaging spectrographs
An imaging spectrograph, which generally operates in line-scanning mode, disperses incident broadband light into different wavelengths instantaneously and generates a spectrum for each point on the scanned line without the use of moving parts. Diffraction gratings are generally used in imaging spectrographs for wavelength dispersion. A diffraction grating is a collection of equally spaced reflecting or transmitting elements separated from one another by a distance on the order of the wavelength of the light being studied. Upon diffraction, an electromagnetic wave incident on a grating has its electric field amplitude, or phase, or both, modified in a predictable manner (Palmer, 2005). There are two main forms of imaging spectrographs, based on reflection gratings (i.e., a grating superimposed on a reflective surface) and transmission gratings (i.e., a grating superimposed on a transparent surface). In imaging spectrographs utilizing a transmission grating, a prism–grating–prism (PGP) component is commonly used. After entering through the entrance slit of the spectrograph, the incoming beam is collimated by the front lens and is then dispersed into different wavelengths at the PGP component in transmission. Finally, the dispersed light is projected onto an area detector through the back lens to generate a two-dimensional matrix, in which one dimension represents the continuous spectrum and the other the spatial information. Transmission gratings are nearly independent of polarization and can easily be mounted together with a lens and an area detector to form a pushbroom hyperspectral imaging camera. However, transmission gratings are limited by the properties of the grating substrate (or resin) and cannot operate at diffraction angles as high as reflection gratings. As the other main form of imaging spectrograph, a typical reflection grating design generally includes an entrance slit, two concentric spherical mirrors, an aberration-corrected convex reflection grating, and a detector. After entering through the entrance slit, the incoming light is reflected by one of the mirrors onto the reflection grating, which disperses the incident beam so that the direction of light propagation depends on its wavelength. The dispersed light is then reflected by the other mirror onto the detector, where a continuous spectrum is received at different pixels. This configuration is believed to offer several advantages, such as high image quality, freedom from higher-order aberrations, low distortion, low f-number, and large field size (Bannon & Thomas, 2005). The polarization effects of a reflective spectrograph depend on the configuration and are generally less than 50%. In addition, the efficiencies of the reflective optical components (e.g., mirrors) are generally higher than those of the transmission components (e.g., prisms). Therefore, imaging spectrographs with a reflection grating can provide a high signal-to-noise ratio (SNR) and are ideal for low-light measuring conditions such as fluorescence imaging and Raman imaging. A main disadvantage of reflection gratings is that costly measures are needed to correct their inherent distortions, whereas transmission gratings use on-axis optics that automatically have fewer aberrations.
4.2.3. Tunable filters

The acousto-optic tunable filter (AOTF) and the liquid crystal tunable filter (LCTF) are both electronically tunable bandpass filters. Using acousto-optic interactions in a crystal, an AOTF can isolate light at a single wavelength from a broadband source through an applied acoustic field. An LCTF has electronically controlled liquid crystal cells inserted between two parallel polarizers to transmit light of a specific wavelength while rejecting light outside the passband. Like a bandpass filter, tunable filters pass light at only one particular wavelength at a time. Unlike fixed interference filters, electronically tunable filters such as AOTFs and LCTFs can be flexibly switched among wavelengths by varying the applied radio frequency under computer control. Tunable filters have moderate spectral resolution (about 5–20 nm) and a broad wavelength range (about 400–2500 nm). In addition, because tunable filters have no moving parts, they avoid the speed limitation, mechanical vibration, and image misregistration that constrain rotating filter wheels. In comparison with AOTFs, LCTFs take much longer to switch from one wavelength to another (milliseconds versus microseconds), but have better image quality. In addition, AOTFs require a more stringent optical design than LCTFs. The shortcomings of tunable filters include a high f-number, which leads to a small light collection angle and low light collection efficiency; the need for linearly polarized incident light, which can cause 50% light loss; and longer exposure times than imaging spectrographs under similar illumination conditions. In research on food quality and safety inspection, LCTF-based hyperspectral imaging systems have been used for detecting sour skin-infected onions (Wang, Li, Tollner, Gitaitis, & Rains, 2012), prediction of apple firmness (Peng & Lu, 2006), and classification of wheat (Choudhary, Mahesh, Paliwal, & Jayas, 2009). AOTFs have also started to be used in food analysis (Park, Lee et al., 2011).
4.2.4. Fourier transform imaging spectrometers
Fourier transform imaging spectrometers employ an interferometer to make a broadband light beam interfere with itself, producing an interferogram that contains its spectral information. The recorded interferogram is then processed by an inverse Fourier transform to resolve the frequencies (or wavelengths) that constitute the broadband light. Michelson and Sagnac designs are the two main interferometers used in current Fourier transform imaging spectrometers. Both designs have a beamsplitter and two flat mirrors. In the Michelson interferometer, one mirror and the beamsplitter are fixed, while the other mirror moves to introduce the optical path difference that generates the interferogram. In the Sagnac interferometer, both mirrors are fixed and the beamsplitter can be rotated slightly to create the interference fringes. In addition, the two mirrors in the Michelson interferometer are perpendicular to each other, while in the Sagnac spectrometer the two mirrors are not perpendicular but set at a fixed angle (less than 90°) to each other. Because it has no moving components, the Sagnac spectrometer has good mechanical stability and compactness, but relatively low resolution. Conversely, the moving mirror in the Michelson spectrometer increases its sensitivity to vibration. Moreover, the Sagnac spectrometer is similar to a dispersive spectrometer in that only one spatial dimension is collected per scan and the spectra are acquired along a single line in the perpendicular direction; a field-scanning mirror or a moving platform is commonly used in Sagnac spectrometers to acquire the second spatial dimension. The difference between a dispersive spectrometer and a Sagnac spectrometer is that the former measures the spectra at different wavelengths directly, while the latter needs the additional step of taking a Fourier transform. The Michelson spectrometer, on the other hand, produces a pixel-based interferogram that allows imaging in two dimensions. However, in a Michelson spectrometer a time interval is required to shift the moving mirror, so it takes a long time to collect an interferogram with fine spectral resolution and high SNR. Although Fourier transform imaging spectrometers are currently used mainly in bioanalytical chemistry and medicine, they are considered to have considerable potential in food science because of their high spectral resolution, wide wavelength range, high optical throughput, and spatial resolution down to a few micrometers.
4.2.5. Single shot imagers
Neither spatial scanning methods (e.g., whiskbroom and pushbroom) nor spectral scanning methods (e.g., staring imaging) can acquire hyperspectral images of fast-moving samples. In contrast, single shot imagers collect multiplexed spatial and spectral data simultaneously, making it possible to acquire a hypercube at video frame rates. Although single shot imagers are still at an early stage, several systems are already available, such as the miniature staring HPA™ imager (Bodkin, 2010), the image mapping spectroscopy endoscope (Kester, Bedard, Gao, & Tkaczyk, 2011), and the image mapping spectrometer systems (Tkaczyk, Kester, & Gao, 2011). Current single shot imagers involve a trade-off between temporal and spectral resolution: the finer the spectral resolution, the lower the temporal resolution, and vice versa. With their ability to capture hyperspectral images on the millisecond time scale, single shot devices are especially suited to real-time applications and have a promising future in food quality and safety inspection.
4.3. Area detectors
Area detectors quantify the intensity of the acquired light by converting incident photons into electrons. CCD (charge-coupled device) and CMOS (complementary metal-oxide-semiconductor) cameras are the two major types of solid-state area detectors. Photodiodes made of light-sensitive materials are the basic units of both CCD and CMOS detectors, converting radiation energy into an electrical signal. Silicon (Si), indium gallium arsenide (InGaAs), and mercury cadmium telluride (MCT or HgCdTe) are the three materials commonly used for hyperspectral imaging. Silicon is used for acquiring spectral information in the ultraviolet, visible and short-wave near-infrared regions. Because of their advantages of small size, high speed, low noise and good spectral response, silicon-based CCD cameras have been widely used for inspection of food quality (Park, Kise, Windham, Lawrence, & Yoon, 2008; Yoon, Lawrence, Smith, Park, & Windham, 2008). With its fairly flat and high quantum efficiency in the near-infrared region, InGaAs, an alloy of indium arsenide (InAs) and gallium arsenide (GaAs), is commonly used for detecting spectra at 0.9–1.7 μm (ElMasry, Sun, & Allen, 2012; Wu, Sun, & He, 2012). The detection range of InGaAs can be extended to 2.6 μm by adjusting the proportions of InAs and GaAs. However, InGaAs photodiodes are more expensive than silicon-based photodiodes. For the detection of spectra in the mid-infrared region, MCT is the material of choice, offering a large spectral range and high quantum efficiency. MCT is an alloy of CdTe and HgTe and is considered the third most well-regarded semiconductor after silicon and gallium arsenide. The detection ranges of MCT cover the mid-infrared region (about 2.5 to 25 μm) and the near-infrared region (about 0.8–2.5 μm). MCT photodiodes are commonly used in hyperspectral imaging systems when spectra in the long-wave near-infrared region (about 1.7–2.5 μm) are required for food quality inspection (Manley, Williams, Nilsson, & Geladi, 2009; Vermeulen, Pierna, van Egmond, Dardenne, & Baeten, 2012). Other detector materials include lead selenide (PbSe), operating at wavelengths between 1.5 and 5.2 μm; lead sulfide (PbS), between 1 and 3.2 μm; indium antimonide (InSb), between 1 and 6.7 μm; platinum silicide (PtSi), between 1 and 5 μm; germanium (Ge), between 0.8 and 1.7 μm; and deuterated L-alanine-doped triglycine sulfate (DLaTGS), between 0.8 and 25 μm. The converted electrical signals are then digitized to generate the hypercubes using an analog-to-digital (A/D) converter.
4.3.1. CCD detector
Both CCD and CMOS cameras consist of millions of photodiodes (known as pixels) tightly arranged in rows to form an array. When the array is read out, the electric charges accumulated in the photodiodes must be moved out of the array to a place where the quantity of charge can be measured. There are generally four CCD architectures for measuring a two-dimensional region, namely full frame, frame transfer, interline transfer, and frame interline transfer. The full frame architecture is the simplest design, in which the accumulated charges move row by row into a horizontal shift register. Image measurement with the full frame design is relatively slow, because each line is exported one by one (known as a progressive scan) under the control of a mechanical shutter to avoid interference from newly generated charge. Unlike the full frame design, the interline design has additional vertical shift registers that are adjacent to the corresponding photodiodes and covered by opaque material to shield them from incident light. The function of the vertical shift registers is to collect and pass on the charges from each photodiode; the horizontal shift register then reads out the collected charges row by row. Therefore, signal accumulation (exposure) and readout can proceed simultaneously in the interline design. However, the opaque vertical shift registers decrease the open ratio of the light-sensitive area. Recently, an improvement has been made by using on-chip lenses, resulting in an increase of over 70% in the overall quantum efficiency. The frame transfer design is an extended version of the full frame design, obtained by adding a storage frame next to the integration frame that consists of the photodiodes. The storage frame has the same size as the integration frame and is covered by an opaque mask. After acquisition in the integration frame, the charges of the whole frame are shifted rapidly into the storage frame. While the charges in the storage frame are transferred into the horizontal shift register, a new image can be captured in the integration frame. Compared with the full frame design, the frame transfer design has a faster frame rate, but also a larger size and more complex control electronics. The frame interline transfer design combines the principles of frame transfer and interline transfer to accelerate image acquisition further: charges accumulated in the photodiodes are transferred to the vertical shift registers, and then shifted to the storage frame as a whole. However, the frame interline transfer design also combines the disadvantages of frame transfer and interline transfer, namely a low light efficiency and a high cost for the doubled frame area. In the inspection of food quality and safety, full frame (Devaux et al., 2006), frame transfer (Mendoza, Lu, Ariana, Cen, & Bailey, 2011; Yoon, Park, Lawrence, Windham, & Heitschmidt, 2011), and interline transfer (Singh, Jayas, Paliwal, & White, 2010a, 2010b) designs have all been used to meet the requirements of different applications.
4.3.2. CMOS detector
The CMOS image sensor is considered to have the potential to compete with the CCD. The main difference between these two types of detectors is that in a CMOS image sensor both the photodetector and the readout amplifier are included within each pixel (Litwiller, 2005). After incident photons are converted to electrons by the photodiodes, the integrated charge is converted into a voltage signal by optically insensitive transistors adjacent to each photodiode and then read out over wires. Because these wires can transfer signals very quickly, CMOS cameras are especially suitable for the high-speed imaging required for online industrial inspection. In CCD technology, blooming occurs when the charge in a pixel exceeds the saturation level and starts to fill adjacent pixels. In a CMOS array, however, because both the photodetector and the readout amplifier are contained in one pixel, each pixel is independent of its neighbors, making the sensor immune to blooming. Moreover, owing to this structure, a CMOS sensor offers random addressability, allowing any particular pixel to be accessed by its X–Y address. Besides high-speed imaging and random addressability, CMOS image sensors have many other advantages, such as small size, low cost, a single power supply, and low power consumption, which make them competitive in the consumer electronics market. CMOS detectors have been used in hyperspectral imaging systems for food quality inspection (Qiao, Ngadi, Wang, Gariepy, & Prasher, 2007; Qiao, Wang et al., 2007). The constraint of CMOS cameras is higher noise and dark current than CCDs, because of the on-chip circuits used to transfer and amplify signals, resulting in lower dynamic range and sensitivity than CCDs.

4.4. Calibration of hyperspectral imaging system


Appropriate calibration of a hyperspectral imaging system is essential to ensure the reliability of the acquired hyperspectral image data and to guarantee the consistent performance of the system. Even if the measurement environment is carefully controlled, some systems may acquire inconsistent spectral profiles of reference spectra. It is therefore necessary to eliminate this variability by using a standardized, objective calibration and validation protocol. The goals of the calibration process are to standardize the spectral and spatial axes of the hyperspectral image, validate the acceptability and reliability of the extracted spectral and spatial data, determine whether the hyperspectral imaging system is in good working condition, evaluate the accuracy and reproducibility of the acquired data under different operating conditions, and diagnose instrumental errors if necessary. The major types of calibration are wavelength calibration, spatial calibration, and curvature calibration.
Table 1
Main differences among imaging, spectroscopy, and hyperspectral imaging techniques.
(Techniques compared: hyperspectral imaging, spectroscopy, imaging, and multispectral imaging. Features compared: spectral information, spatial information, multi-constituent information, detectability of objects of small size, flexibility of spectral extraction, and generation of quality-attribute distributions.)

Hyperspectral imaging systems with imaging spectrographs disperse incident light into different wavelengths, which are then recorded at different pixels along the spectral dimension of the detector. The wavelength corresponding to each pixel is initially unknown, and the wavelengths of the dispersed light falling on the pixels might also change under different operating conditions, affecting the accuracy and reproducibility of image acquisition. Therefore, wavelength calibration is needed to assign a specific wavelength to each pixel along the spectral dimension. The data from hyperspectral images are initially in the form of pixel intensity versus pixel index, and become intensity versus wavelength after wavelength calibration. Hyperspectral imaging systems using fixed or tunable filters do not need wavelength calibration, as the wavelength of each filter is already known. Wavelength calibration commonly uses wavelength calibration lamps to identify each wavelength as a function of its pixel index. Wavelength calibration lamps produce narrow, constant, intense, stable, and specific emission lines from the excitation of various rare gases and metal vapors, and different lamps cover different wavelength ranges from the ultraviolet to the infrared for calibrating different systems. Typical wavelength calibration lamps include pencil-style lamps, battery-powered lamps, and high-power lamps using argon (Ar), krypton (Kr), mercury (Hg), mercury/argon (Hg/Ar), neon (Ne), xenon (Xe), etc. In the wavelength calibration process, the lamp is first scanned by the hyperspectral imaging system and the spectral profile is extracted along the spectral dimension of the image. The spectral peaks with known wavelengths and their corresponding pixel indices along the spectral dimension are then identified, and a quantitative regression equation is established between wavelength and pixel index; linear, quadratic, cubic, and trigonometric equations are commonly used. As a result, the wavelengths of all pixels along the spectral dimension are identified using the resulting regression.
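As a minimal sketch of the regression step, assuming hypothetical pixel indices at which known Hg/Ar lamp lines were located in the extracted profile (a quadratic fit is shown; linear, cubic or trigonometric forms are used as noted above):

```python
import numpy as np

# Hypothetical pixel indices of detected emission peaks and their known wavelengths (nm)
peak_pixels = np.array([35.0, 102.0, 168.0, 243.0, 310.0])
peak_wavelengths = np.array([435.8, 546.1, 696.5, 763.5, 811.5])   # Hg and Ar lamp lines

# Fit a quadratic mapping from pixel index to wavelength
coeffs = np.polyfit(peak_pixels, peak_wavelengths, deg=2)
pixel_to_wavelength = np.poly1d(coeffs)

# Assign a wavelength to every pixel along the spectral dimension of the detector
calibrated_wavelengths = pixel_to_wavelength(np.arange(512))
```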
Spatial calibration of a hyperspectral imaging system determines the dimensions and resolution of the field of view. The spatial calibration approach differs for hyperspectral imaging systems with different image acquisition modes. Because area scanning acquires a series of images of the same dimensions at different spectral bands, spatial calibration is conducted on a selected image with high SNR using resolution test charts such as the ISO 12233 test chart, the NBS 1952 resolution test chart, and the 1951 USAF resolution test chart. Line-scanning hyperspectral imaging systems might have different resolutions for the two spatial dimensions, because the pixels along the y direction of the hyperspectral cube are acquired through the imaging spectrograph while the pixels along the x direction are acquired by the stepwise movement of the sample. The resolution in the x direction is the step size of the movement per pixel, and the range in the x direction depends on the distance of the movement. Calibration of the y direction is conducted by scanning a target printed with thin parallel lines: the resolution in the y direction is determined by dividing the length of a segment on the target by the number of pixels that segment spans in the scanned image, and the range in the y direction is calculated by multiplying this resolution by the number of pixels on the spatial dimension of the detector.
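For concreteness, a worked example with hypothetical numbers (a 50 mm segment of the printed target spanning 250 pixels in the scanned image, and 640 pixels along the spatial dimension of the detector):

\[
\text{resolution}_y = \frac{50\ \text{mm}}{250\ \text{pixels}} = 0.2\ \text{mm/pixel}, \qquad
\text{range}_y = 0.2\ \text{mm/pixel} \times 640\ \text{pixels} = 128\ \text{mm}.
\]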
Curvature calibration is intended to correct the effect of light reflection on foods with spherical geometry, so that the spectrum at any pixel is independent of its location. Gómez-Sanchis et al. (2008) proposed a curvature calibration for mandarins, in which the amount of reflected light is corrected according to the angle between the incident light and the direction normal to the surface:

\hat{\rho}_{\lambda}(x, y) = \frac{\rho_{\lambda}(x, y)}{D\,\cos(\theta_{xy}) + (1 - D)}

where \hat{\rho}_{\lambda}(x, y) is the corrected spectrum at a point (x, y) at wavelength λ, D is the ratio between the direct light and the total average light, and cos(θ_xy) modulates the amount of direct light reflected at each pixel. D and the angle of incidence differ for each pixel within the image of the sample and therefore have to be determined accordingly. For this purpose, Gómez-Sanchis et al. (2008) developed a digital elevation model (DEM) to obtain the geometric parameters of the fruit. The results showed that the proposed calibration was effective in minimizing the adverse side effects produced by the curvature of the fruit. In another study, Gowen et al. (2008) found that multiplicative scatter correction (MSC) was efficient in decreasing the spectral variability of mushrooms due to curvature.

5. Hyperspectral image processing methods

Because the data volume of a hyperspectral image is usually very large and suffers from collinearity problems, chemometric algorithms are required for mining the detailed important information. Typical steps of a full procedure for analyzing hyperspectral images are outlined in the flowchart illustrated in Fig. 3. Commercially available software tools for hyperspectral image processing are mainly the Environment for Visualizing Images (ENVI) software (Research Systems Inc., Boulder, CO, USA), MATLAB (The MathWorks Inc., Natick, MA, USA), and Unscrambler (CAMO PROCESS AS, Oslo, Norway). ENVI is a popular software tool designed to process, analyze, and display hyperspectral images. A variety of popular image processing algorithms are bundled in ENVI, with automated, wizard-based approaches or automated workflows that provide step-by-step processes and instructions to help users process images quickly and easily. MATLAB is a high-level technical computing language and interactive environment with the capability of developing algorithms, creating models, analyzing data, and visualizing images for processing and analyzing hyperspectral image data. As a fourth-generation programming language, MATLAB enables users to analyze hyperspectral images more flexibly than ENVI. In addition, with its tools and built-in math functions, MATLAB enables users to explore multiple approaches to data analysis faster than traditional programming languages, such as C, C++, Fortran, and Java. Unscrambler is a well-known chemometric tool for multivariate data analysis. Although it cannot be used directly for the analysis of hyperspectral image data, Unscrambler has been widely used for the data mining and calibration of spectral data.

5.1. Reflectance calibration of hyperspectral images

The raw spectral image collected by a hyperspectral imaging system actually records detector signal intensity. Therefore, a reflectance calibration should be performed to convert the raw intensity image into a reflectance or absorbance image with the help of black and white reference images. In order to remove the effect of the dark current of the camera sensor, the black (dark) reference image (about 0% reflectance) is acquired with the light source completely turned off and the camera lens completely covered with its non-reflective opaque cap. The white reference image is obtained under the same conditions as the raw image using a white reference board with a uniform, stable and high reflectance (about 99.9% reflectance). These two reference images are then used to correct the raw hyperspectral images using the following equation:

R = \frac{I_S - I_D}{I_W - I_D} \times 100

where R is the corrected hyperspectral image in units of relative reflectance (%), I_S is the raw hyperspectral image, I_D is the dark reference image, and I_W is the white reference image.
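A minimal sketch of this correction in Python/NumPy, assuming hypothetical arrays `raw`, `white` and `dark` of identical shape and adding a small constant to avoid division by zero:

```python
import numpy as np

def reflectance_calibration(raw, white, dark, eps=1e-10):
    """Convert a raw intensity hypercube to relative reflectance (%).

    raw, white, dark: arrays of identical shape (lines x samples x bands) holding the
    raw sample image I_S, the white reference image I_W and the dark image I_D.
    """
    return (raw - dark) / (white - dark + eps) * 100.0

# Example with placeholder data
raw = np.random.rand(100, 120, 121)
white = np.full_like(raw, 0.95)
dark = np.full_like(raw, 0.02)
R = reflectance_calibration(raw, white, dark)   # relative reflectance in %
```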
5.2. Image enhancement and spectral preprocessing


Image enhancement is an important process for improving the quality of images. Some image enhancement techniques are intended to make specified image characteristics more obvious, such as edge and contrast enhancement, magnification, pseudo-coloring, and sharpening. Others are used to reduce noise, such as convolution and spatial filtering, the Fourier transform (FT), and wavelet transforms (WT); FT and WT are also suitable for edge detection. Moreover, image enhancement techniques can be grouped into spatial domain methods (such as the histogram equalization method and local neighborhood operations based on convolution) and frequency domain methods (such as the discrete Fourier transform and wavelet transforms).

Fig. 3. Flowchart of a series of typical steps for analyzing hyperspectral image data.
Spectral preprocessing algorithms are mainly used to improve mathematically the spectral data extracted from hyperspectral images. The goal of spectral preprocessing is to correct the effects of random noise, variation in the length of the light path, and light scattering, so as to produce a robust model with the best predictive ability. The most widely used preprocessing algorithms include smoothing, derivatives, standard normal variate (SNV), MSC, FT, WT, and orthogonal signal correction (OSC). Smoothing (e.g., moving average, Savitzky–Golay, median filter, and Gaussian filter) is used to reduce noise in the spectral data without reducing the number of spectral variables. Derivatives (mainly first and second derivatives) are effective in correcting baseline effects in spectra; the second derivative also resolves nearby peaks and sharpens spectral features. MSC is a transformation method used to compensate for additive and/or multiplicative effects in spectral data. SNV is a row-oriented transformation that centers and scales each individual spectrum. Both MSC and SNV are able to reduce the spectral variability caused by scatter and baseline shifts. FT and WT separate noise from the spectra in the frequency domain. OSC filters from the spectral matrix X the part that is uninformative for the quality vector Y, based on constrained principal component analysis (PCA) or partial least squares regression (PLSR). Preprocessing should only be used when it really helps to improve model performance.
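As a minimal sketch of two of the preprocessing steps named above, Savitzky–Golay smoothing and SNV, applied to a hypothetical matrix of pixel spectra (one row per pixel, one column per wavelength; `scipy.signal.savgol_filter` is used for the smoothing):

```python
import numpy as np
from scipy.signal import savgol_filter

spectra = np.random.rand(500, 121)    # placeholder: 500 pixel spectra, 121 wavelengths

# Savitzky-Golay smoothing along the wavelength axis (11-point window, 2nd-order polynomial)
smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)

# Standard normal variate (SNV): centre and scale each individual spectrum
snv = (smoothed - smoothed.mean(axis=1, keepdims=True)) / smoothed.std(axis=1, keepdims=True)
```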
5.3. Image segmentation
The objective of image segmentation is to divide an image into isolated objects or regions and to locate the regions of interest (ROIs), in the form of masks, for further spectral and textural feature extraction (ElMasry, Wang, & Vigneault, 2009). Manual segmentation can produce an accurate mask if the process is executed carefully, but it is time-consuming, tedious, and subjective, and is therefore not suitable for extensive industrial application. Image segmentation algorithms are more efficient than manual segmentation. The most commonly used segmentation algorithms are thresholding (such as global thresholding and adaptive thresholding), morphological processing (such as erosion, dilation, opening, closing, and watershed algorithms), edge-based segmentation (such as gradient-based methods and Laplacian-based methods), and spectral image segmentation.
Thresholding is a widely used image segmentation method owing to its simplicity of implementation. Images containing an object with a uniform graylevel and a background of a different but also uniform graylevel are appropriate for thresholding. Generally, there are two kinds of thresholding algorithms: global thresholding and adaptive thresholding. The first approach is the simplest thresholding technique and is commonly implemented when the gray histogram is bimodal. When the graylevels of the ROI and the background and the corresponding contrast are not constant within an image, an adaptive threshold is appropriate, in which a different threshold is used for different regions of the image. Morphological processing is flexible and powerful for image segmentation. Neighborhood operations are typical binary morphological operations, performed by sliding a structuring element containing any combination of 0s and 1s and of any size over the image. Erosion and dilation are the two elementary operations of morphological processing on which all other morphological operations are based. Erosion is the process of removing pixels on object boundaries in an image, while dilation is the process of adding pixels to the boundaries of objects. Edge-based segmentation is commonly used when pixels on the edges/boundaries of objects show abrupt and discontinuous graylevel changes. Gradient-based methods detect edge pixels by searching for the maximum of the first derivative within the image, while Laplacian-based methods locate edge pixels by looking for zero-crossings of the second derivative within the image. Spectral image segmentation is considered a higher-level analysis compared with traditional segmentations, which are regarded as low-level operations. Traditional segmentations operate on a monochrome image that has a scalar graylevel value at each pixel, while spectral image segmentation is a process of vector mining, because each pixel within a hyperspectral image is a vector of intensity values. Spectral image segmentation integrates segmentation and classification into a single process. This approach has been used with success in food analysis (ElMasry, Iqbal, Sun, Allen, & Ward, 2011).
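A minimal segmentation sketch along these lines, assuming scikit-image is available and using Otsu's method (one common way to pick a global threshold, not named above) on a mean image of the hypercube, followed by an elementary erosion–dilation clean-up:

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_erosion, binary_dilation, disk

def segment_sample(hypercube):
    """Build a binary mask of the sample region from a hyperspectral cube.

    hypercube : ndarray of shape (rows, cols, bands)
    """
    # Collapse the cube to a monochrome image (mean over wavelengths)
    gray = hypercube.mean(axis=2)
    # Global threshold (Otsu) separates sample from background
    mask = gray > threshold_otsu(gray)
    # Erosion followed by dilation (an "opening") removes small speckles
    selem = disk(3)
    mask = binary_dilation(binary_erosion(mask, selem), selem)
    return mask

# The mask can then be used to extract the mean ROI spectrum, e.g.:
# roi_spectrum = hypercube[segment_sample(hypercube)].mean(axis=0)
```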
5.4. Object measurement
For quantitative measurement of an ROI within a hyperspectral image, graylevel object measures are required as a function of the intensity distribution of the ROI extracted by image segmentation. There are two main categories of graylevel object measurements, namely intensity-based measures and texture-based measures (Ngadi & Liu, 2010). The mean is the most widely used first-order measure for acquiring intensity information (ElMasry, Wang, ElSayed, & Ngadi, 2007; Qiao, Wang et al., 2007), and is calculated by averaging the intensity of pixels within the ROI at each wavelength. Besides the mean, the first-order measures also include standard deviation, skewness, energy, and entropy. Texture is a typical example of the second-order measures that are based on joint distribution functions. It represents the spatial arrangement of the pixel graylevels within the ROI (IEEE Standard 601.4-1990, 1990). The graylevel co-occurrence matrix (GLCM) provides a number of second-order statistics used to describe the graylevel relationships within a neighborhood around a pixel of interest, and has been used in many hyperspectral imaging applications (ElMasry et al., 2007; Qiao, Ngadi et al., 2007; Qin, Burks, Ritenour, & Bonn, 2009). The 2-D Gabor filter is another popular method for image texture extraction and analysis. It has the capability to achieve optimal joint localization properties in both the spatial domain and the spatial frequency domain (Daugman, 1980, 1985).
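As an illustrative sketch of such intensity and GLCM texture measures for a single-band ROI image, assuming scikit-image is available (the functions are spelled greycomatrix/greycoprops in older releases):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(band_image, levels=32):
    """First- and second-order measures of a single-band ROI image."""
    img = band_image.astype(np.float64)
    # First-order (intensity) statistics
    first_order = {"mean": img.mean(), "std": img.std()}
    # Quantize to a small number of gray levels before building the GLCM
    q = np.digitize(img, np.linspace(img.min(), img.max(), levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1],
                        angles=[0, np.pi / 2], levels=levels,
                        symmetric=True, normed=True)
    # Second-order (texture) statistics derived from the GLCM
    second_order = {p: graycoprops(glcm, p).mean()
                    for p in ("contrast", "homogeneity", "energy", "correlation")}
    return first_order, second_order

# Hypothetical usage on one wavelength band of the hypercube
band = np.random.rand(64, 64)
print(glcm_features(band))
```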
5.5. Multivariate analysis
As discussed previously, hyperspectral imaging generates a huge amount of data, from which intensity-based, texture-based, and morphology-based features are commonly extracted. Multivariate analysis is required to efficiently distill this massive quantity of features into useful information and to establish simple, easily interpretable relationships between hyperspectral imaging data and the desired attributes of the tested samples. Multivariate analysis can be classified into qualitative classification and quantitative regression.
Qualitative classification (also called pattern recognition) includes unsupervised classification and supervised classification. Unsupervised classification is achieved according to natural characteristics of the data, which can be correlation, distance, or a combination of them, without prior knowledge of the class information. Typical unsupervised multivariate classification algorithms for the analysis of hyperspectral data include PCA, k-means clustering, and hierarchical clustering. PCA decomposes the spectral data into several principal components (PCs) that characterize the most important directions of variability in the high-dimensional data space. The similarity of spectral signatures among samples and their class information can be evaluated from the first several PCs resulting from PCA. K-means clustering classifies samples into k clusters in which each sample belongs to the cluster with the minimum distance to the cluster centroid. Hierarchical clustering is intended to build a hierarchy of clusters that is usually presented in a dendrogram. There are two types of hierarchical clustering: agglomerative and divisive. Hierarchical clustering is achieved by the use of a measure of distance between pairs of samples, and it is not efficient for large data sets.
Supervised classification differs from unsupervised classification in that new samples are grouped into predefined known classes according to their measured features. Typical supervised multivariate classification algorithms for the analysis of hyperspectral data include linear discriminant analysis (LDA), partial least squares discriminant analysis (PLS-DA), artificial neural networks (ANN), support vector machines (SVM), and k-nearest neighbors (kNN). LDA finds an optimal linear projection of the independent variables to classify the samples into separate classes. This method reaches maximal separation by maximizing the ratio of between-group to within-group variability. The prime difference between LDA and PCA is that LDA includes the class information of the samples, while PCA only considers the independent variables. PLS-DA is based on the PLSR approach for the optimal separation of classes, encoding the dependent variable of PLSR with dummy variables describing the classes (Wu, Feng, He, & Bao, 2008). PLS-DA is then implemented in the usual way of PLSR. kNN is a non-parametric approach that groups objects based on the closest neighboring samples within the feature space. As an instance-based learning algorithm, kNN is perhaps the simplest of all machine learning algorithms: an object is assigned to a class by a majority vote of its neighbors.
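PLS-DA is often implemented with general-purpose tools by dummy-coding the class labels, fitting an ordinary PLSR model, and assigning each new sample to the class with the largest predicted response; the sketch below assumes scikit-learn and hypothetical class names.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import LabelBinarizer

def plsda_fit_predict(X_train, y_train, X_test, n_components=5):
    """PLS-DA: PLSR on dummy-coded classes, prediction by largest response."""
    lb = LabelBinarizer()
    Y_dummy = lb.fit_transform(y_train)        # one column per class
    if Y_dummy.shape[1] == 1:                  # binary case -> two columns
        Y_dummy = np.hstack([1 - Y_dummy, Y_dummy])
    pls = PLSRegression(n_components=n_components)
    pls.fit(X_train, Y_dummy)
    Y_pred = pls.predict(X_test)
    return lb.classes_[np.argmax(Y_pred, axis=1)]

# Hypothetical usage with random spectra and two classes
X = np.random.rand(40, 150)
y = np.array(["fresh"] * 20 + ["spoiled"] * 20)
print(plsda_fit_predict(X[:30], y[:30], X[30:]))
```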
In the application of spectral analysis, the goal of multivariate regression is to establish a relationship between the spectral response of the tested sample and its target features for explanatory or predictive purposes. Multivariate regression can be linear or non-linear. Multivariate linear regression methods in the quantitative analysis of spectral data mainly include multiple linear regression (MLR) (Wu et al., in press), principal component regression (PCR), and PLSR (Antonucci et al., 2011; Sinija & Mishra, 2011). MLR establishes a relationship between the spectrum and the desired attributes of the tested sample in the form of a linear equation, and is simple and easy to interpret. The regression coefficients of this equation are determined by minimizing the error between reference and predicted values in a least squares sense. MLR fails when the number of variables exceeds the number of samples and is easily affected by collinearity between the variables. In the case of analyzing hyperspectral cubes, effective variable selection or dimensionality reduction is therefore required before MLR model establishment. PCR is a regression method consisting of PCA and MLR. First, a PCA is carried out on the spectral data. Instead of the original variables, the PCs are then used as independent variables in an MLR on the dependent variables. The advantage of PCR over MLR is that the PC calculation makes the independent variables uncorrelated and less noisy. The constraint of PCR is that its PC calculation does not consider the reference values of the dependent variables; therefore, the obtained PCs may not be informative for the dependent variables. Different from PCR, PLSR decomposes both the spectral (independent variables) and concentration (dependent variables) information simultaneously, extracting a set of orthogonal factors called latent variables (LVs). In the decomposition process, the dependent variables are actively considered in estimating the LVs to ensure that the first several LVs are most relevant for predicting the dependent variables. Building the relationship between independent and dependent variables then reduces to finding the optimal number of LVs with the best predictive power for the dependent variables.
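A compact sketch of PLSR calibration with the number of LVs chosen by the lowest RMSECV, assuming scikit-learn and synthetic spectra:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def select_lvs(X, y, max_lv=15, cv=10):
    """Choose the number of latent variables giving the lowest RMSECV."""
    rmsecv = []
    for lv in range(1, max_lv + 1):
        y_cv = cross_val_predict(PLSRegression(n_components=lv), X, y, cv=cv)
        rmsecv.append(np.sqrt(np.mean((y - y_cv.ravel()) ** 2)))
    best = int(np.argmin(rmsecv)) + 1
    return best, rmsecv

# Hypothetical usage: spectra X (samples x wavelengths), reference values y
X = np.random.rand(80, 200)
y = X[:, 50] * 2.0 + np.random.normal(0, 0.05, 80)   # synthetic attribute
n_lv, rmsecv = select_lvs(X, y)
model = PLSRegression(n_components=n_lv).fit(X, y)
```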
Sometimes, the relationship between the spectra of the tested sample and its quality may be non-linear, and such analyses are better handled by non-linear regression techniques such as ANN (Lorente, Aleixos, Gómez-Sanchis, Cubero, & Blasco, 2013) and support vector regression (SVR) (Chen, Wu, He, & Liu, 2011; Liu, Gao, Hao, Sun, & Ouyang, 2012; Wei, Xu, Wu, & He, in press). ANN simulates the behavior of biological neural networks for learning and prediction purposes. The multilayer feed-forward neural network is the most widely used ANN technique; it arranges the artificial neurons in three layers: input, hidden, and output. Neurons are simple computational elements that process information using a connectionist approach to computation. Neurons are linked by weighted connections that are adjusted based on input and output information during the learning phase. The spectral features are introduced to the input layer, and the predicted values are exported from the output layer. SVR is another powerful supervised learning methodology based on statistical learning theory. It embodies the structural risk minimization (SRM) principle instead of the traditional empirical risk minimization (ERM) principle employed by conventional neural networks, which helps to avoid overfitting and the problems of high dimensionality. In particular, LS-SVM, an optimized version of the standard SVM, is commonly used for spectral analysis. It employs a nonlinear mapping function to map the input features to a high-dimensional space, thus turning the optimization problem into one with equality constraints. Lagrange multipliers are used to calculate the partial differentiation of each feature to obtain the optimal solution. ANN and SVM can be applied to both classification and regression tasks.
5.6. Optimal wavelength selection
Hyperspectral imaging provides more spectral data related to food quality than multispectral imaging, as the number of wavelengths in hyperspectral images is much larger than that in multispectral images. In most cases, however, the inclusion of most wavelengths does not increase the model performance, since some wavelengths mainly carry irrelevant information while others have a low SNR. The elimination of irrelevant variables can simplify calibration modeling and improve the results in terms of accuracy and robustness (Wu et al., 2009; Wu, He, & Feng, 2008). Besides, there is a problem of multicollinearity among contiguous variables (wavelengths). Multicollinearity (or collinearity) means that the correlations among the independent variables (wavelengths) are strong; such variables carry similar spectral information. The presence of a high degree of collinearity between variables in a model tends to push the matrix towards singularity, which in turn has a large influence on the coefficients generated (Zou, Zhao, Povey, Holmes, & Mao, 2010). The selection of wavelengths can minimize the collinearity among contiguous wavelengths. Based on the selected optimal wavelengths, a reduced image cube can be generated instead of the whole hyperspectral cube, speeding up the subsequent data processing and improving prediction results in terms of accuracy and robustness. Moreover, wavelength selection is also an important step in applications that detect properties of interest. The selected wavelengths are used as a reference to convert the hypercube into virtual images with maximal contrast for the properties of interest; image processing techniques are then applied to these virtual images to detect those properties. In addition, if a few optimal wavelengths that carry the characteristic information are selected, a multispectral imaging system with the advantages of simple structure and low cost can be established based on these selected wavelengths, which is well suited for process monitoring and real-time inspection. However, most current research has selected optimal wavelengths separately for each individual quality attribute of food products, so that different optimal wavelengths are selected for different quality attributes. When one such set of optimal wavelengths is used to design a multispectral imaging system, only one quality attribute can be predicted and the multifunctionality of hyperspectral imaging is lost. Recently, Wu, Sun et al. (2012) proposed the selection of instrumental effective wavelengths (IEW) and predictive effective wavelengths (PEW), which are the optimal wavelengths for several quality attributes. Multispectral imaging systems designed based on IEW have the multifunctionality of determining several quality attributes simultaneously.
The aim of wavelength selection methods is to select optimal wavelengths containing the important information related to the quality attributes and producing the smallest possible errors for qualitative discrimination or quantitative determination. Knowledge-based selection is a manual approach drawing on basic knowledge of the spectroscopic properties of the sample (Zou et al., 2010). There are also mathematical selection algorithms for choosing optimal wavelengths in a more efficient way.
Some classical approaches include correlation coefficients, loading and regression coefficients, analysis of spectral differences (ASD), spectrum derivatives, and stepwise regression. The correlation coefficient approach selects the wavelengths that have the highest correlation coefficients as feature wavebands. Loading and regression coefficients reflect the relation between a given response and all predictors (wavelengths); in general, wavelengths with large values (irrespective of sign) are considered optimal. The ASD analyzes the differences between the spectra of samples of different varieties; the wavelengths with large differences are important for the discrimination. The method of spectrum derivatives calculates the differences between the derivatives of the spectra and selects the wavelengths with large differences between samples of different varieties as the optimal wavelengths. Stepwise regression finds the important wavelengths by adding one wavelength at a time with forward addition and then testing it with backward elimination.
Successive projections algorithm (SPA) and uninformative variable elimination (UVE) are two relatively sophisticated methods. UVE is based on the stability analysis of the PLSR regression coefficients: the stability of a variable is calculated by dividing the mean of its regression coefficients by their standard deviation. SPA employs a simple projection operation in a vector space to select subsets of variables with a minimum of collinearity. UVE eliminates uninformative variables, but its selected variables might still suffer from multicollinearity, whereas SPA selects variables with minimal multicollinearity, but its selection might contain variables that are only weakly related to the quality attribute. Therefore, a combination of UVE-SPA was proposed by Ye, Wang, and Min (2008) to exploit the complementary advantages of both methods, and it has been applied to the spectral analysis of food quality (Wu, Chen, Zhu, Guan, & Wu, 2011; Wu, Nie, He, & Bao, 2012).
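As a sketch of the simplest of these approaches, the correlation coefficient method can be written in a few lines of NumPy (synthetic data, hypothetical number of selected wavelengths):

```python
import numpy as np

def correlation_wavelengths(X, y, n_select=10):
    """Rank wavelengths by the absolute correlation of their intensities with y."""
    # Pearson correlation of each wavelength (column of X) with the reference value
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()))
    return np.argsort(np.abs(r))[::-1][:n_select]   # indices of the top wavelengths

# Hypothetical usage: band 120 should rank first
X = np.random.rand(60, 200)
y = 3 * X[:, 120] + np.random.normal(0, 0.1, 60)
print(correlation_wavelengths(X, y, n_select=5))
```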
Elaborate search-based strategies include simulated annealing (SA) and genetic algorithms (GAs). SA is a probabilistic metaheuristic for global optimization inspired by the annealing process in metallurgy. In the application of wavelength selection, SA generates a numerical string containing the selected wavelengths. By analogy with the annealing process, SA attempts to replace the current solution with a random solution at each step. The solution is iteratively modified using a criterion based on the Boltzmann probability distribution (Metropolis criterion), which depends on the increment of the objective function and on a global parameter T that is analogous to temperature. T is gradually decreased during the process; as T decreases, the solution becomes increasingly difficult to modify. Finally, if T is lowered sufficiently, no further changes in the solution space are possible. To avoid being frozen at a local optimum, the SA algorithm moves slowly through the solution space. This controlled improvement of the objective value is accomplished by accepting non-improving moves with a certain probability that decreases as the algorithm progresses (Chen & Lei, 2009). GA is a search heuristic that mimics the process of natural selection in Darwin's theory to carry out optimization. In the application of wavelength selection, GA evolves a population of strings called chromosomes that encode wavelengths. A fitness function is used to evaluate the performance of the chromosomes. Similar to natural selection, the chromosomes with high fitness values have a higher probability of reproducing. The evolution process is repeated until the termination condition has been reached. The elaborate search-based strategies are generally more efficient than exhaustive enumeration because they search a large part of all possible subsets in a reasonable time, much less than the time required to search all possible subsets. However, a main drawback of elaborate search-based strategies is that their results are unstable: different optimal wavelengths might be selected in each run, although their prediction abilities are sometimes similar. In addition, SA and GA have many adjustable factors that affect the results, and therefore require a considerable level of expertise from users.
Interval-based algorithms include interval partial least squares (iPLS), synergy interval PLS (siPLS), moving window PLS, and backward interval partial least squares (biPLS). The iPLS splits the spectra into several equidistant regions and then establishes a PLS regression model for each sub-interval. The interval with the lowest RMSECV is chosen as the optimal one. The siPLS algorithm calculates all the possible PLS model combinations of several intervals and chooses the best combination. In the biPLS algorithm, after the dataset is split into a given number of intervals, a PLS model is calculated with each interval left out, which is termed 'backward'. The left-out interval is the one that, when left out, gives the poorest performing model with respect to RMSECV. The optimal combination is determined according to the smallest RMSECV. The main advantage of interval-based algorithms is an interval display showing the performance of the interval models along the full wavelength range. However, the width and number of intervals have a strong influence on the selection result and the calculation time. Considering more combinations of intervals with small widths might lead to better results, but also increases the calculation time.
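A minimal iPLS sketch under the same assumptions (scikit-learn PLS, RMSECV as the criterion, equidistant intervals):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def ipls(X, y, n_intervals=10, n_lv=3, cv=5):
    """Interval PLS: RMSECV of a PLS model built on each spectral interval."""
    edges = np.linspace(0, X.shape[1], n_intervals + 1, dtype=int)
    results = []
    for i in range(n_intervals):
        Xi = X[:, edges[i]:edges[i + 1]]
        lv = min(n_lv, Xi.shape[1])            # an interval may be narrower than n_lv
        y_cv = cross_val_predict(PLSRegression(n_components=lv), Xi, y, cv=cv)
        results.append(np.sqrt(np.mean((y - y_cv.ravel()) ** 2)))
    best = int(np.argmin(results))
    return best, edges[best], edges[best + 1], results

# Hypothetical usage: the interval with the lowest RMSECV is reported
X = np.random.rand(80, 200)
y = X[:, 95:105].mean(axis=1) + np.random.normal(0, 0.01, 80)
print(ipls(X, y))
```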
5.7. Model evaluation
For the analysis of hyperspectral image data, whether for the purpose of qualitative classification or quantitative regression, the multivariate data must be trained to build a calibration model whose validity should be evaluated by a validation process such as cross validation or by a prediction process with a new set of samples. There are two main categories of cross validation: segmented cross validation and full cross validation. In segmented cross validation, the samples are apportioned into segments or subgroups. In the calculation procedure, one segment is kept out of the calibration at a time; by repeating the selection of different segments, predictions can be made on all samples for the validation procedure. In full cross validation, only one sample is kept out at a time and all other samples are used to build the calibration. The latter validation is also called leave-one-out cross validation and is in general the preferable one. Segmented cross validation is commonly used when full cross validation would be too time consuming or when some samples are treated under the same conditions and should be included in one segment.
Within the processes of calibration, validation, and prediction, the performance of a calibration model is usually evaluated in terms of the standard error of calibration (SEC), root mean square error of calibration (RMSEC), and coefficient of determination (r²) of calibration (r²C) in the calibration process; the root mean square error of cross-validation (RMSECV) and coefficient of determination of validation (r²V) in the validation process; and the standard error of prediction (SEP), root mean square error of prediction (RMSEP), coefficient of determination of prediction (r²P), and residual predictive deviation (RPD) in the prediction process. Generally, a good model should have higher values of r²C, r²V, r²P, and RPD, lower values of SEC, SEP, RMSEC, RMSECV, and RMSEP, and a small difference between SEC and SEP.
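These statistics are straightforward to compute once reference and predicted values are available; in the sketch below, RPD is computed as the standard deviation of the reference values divided by the RMSE, although some authors use SEP in the denominator.

```python
import numpy as np

def regression_metrics(y_ref, y_pred):
    """RMSE, coefficient of determination (r2) and RPD between reference and prediction."""
    y_ref, y_pred = np.asarray(y_ref, float), np.asarray(y_pred, float)
    residuals = y_ref - y_pred
    rmse = np.sqrt(np.mean(residuals ** 2))
    r2 = 1.0 - np.sum(residuals ** 2) / np.sum((y_ref - y_ref.mean()) ** 2)
    rpd = np.std(y_ref, ddof=1) / rmse          # residual predictive deviation
    return {"RMSE": rmse, "r2": r2, "RPD": rpd}

# Hypothetical usage with a small prediction set
y_ref = np.array([2.1, 3.4, 4.0, 5.2, 6.8])
y_pred = np.array([2.0, 3.6, 3.9, 5.5, 6.5])
print(regression_metrics(y_ref, y_pred))       # RMSEP, r2P and RPD of this set
```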
5.8. Visualization of quality images
Recently, there has been an increasing demand for knowing the detailed spatial distribution of non-homogeneous properties of interest in a sample, rather than only their average concentration. Hyperspectral imaging is a main technique for obtaining quality images that reveal and help to understand the heterogeneity of food products. There are usually three ways to visualize the quality distribution of a sample. The first way is to directly display the quality distribution using an image at an individual wavelength (Fig. 1.b). However, this provides limited information and cannot accurately present the spatial distribution, unless the quality attribute in question correlates very well with the intensity at this wavelength. The second way of visualizing the quality distribution is to use a false color image. The human eye has three types of color photoreceptor cells (cones) for perceiving tristimulus values. A false color image is generated by assigning three monochrome images to the red, green, and blue channels of an RGB image, respectively (Fig. 1.e). The monochrome images can be images at certain wavelengths or images obtained by mathematical calculation such as PCA and WT. False color images are commonly used for target detection and display purposes, but they are hard to use for showing quantitative distributions. Moreover, only a limited number of false color images can be obtained because only three monochrome images can be used. The third way is to calculate the quality attribute of each pixel by applying chemometric tools to the spectrum of the corresponding pixel, which can be considered as a linear or non-linear mathematical combination of the images at different wavelengths. Although it is practically impossible to measure the precise concentration of constituents for every pixel within a sample, a solution to this problem is to establish the regression models based on the average spectrum of all pixels within an ROI for which the corresponding reference value can be obtained. The ROI can be the whole sample or a part of the sample from a selected location. The established models can then be applied to determine the composition content of each pixel within the object region, which can be used for the further generation of quality images.
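A sketch of this third, pixel-wise approach, assuming a calibration model already fitted on mean ROI spectra and a background mask obtained from the segmentation step:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def prediction_map(hypercube, mask, model):
    """Apply a calibration model to every pixel spectrum to build a quality image."""
    rows, cols, bands = hypercube.shape
    pixels = hypercube.reshape(-1, bands)            # one spectrum per pixel
    values = model.predict(pixels).ravel()           # predicted attribute per pixel
    quality = values.reshape(rows, cols)
    quality[~mask] = np.nan                          # blank out the background
    return quality

# Hypothetical usage: the model is calibrated on mean ROI spectra beforehand
cube = np.random.rand(50, 60, 200)
mask = np.ones((50, 60), dtype=bool)
model = PLSRegression(n_components=3).fit(np.random.rand(40, 200), np.random.rand(40))
quality_image = prediction_map(cube, mask, model)    # e.g. displayed with plt.imshow
```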
6. Advantages and disadvantages of hyperspectral imaging
Owing to its distinguished ability to identify characteristics and the rich information it contains, hyperspectral imaging is highly suitable for food quality and safety analysis and assessment. On the other hand, as with any other technique, hyperspectral imaging also has some shortcomings that need to be addressed in future research. The major advantages of applying hyperspectral imaging in food quality and safety analysis and assessment can be summarized as follows:
• Hyperspectral imaging is a chemical-free assessment method that requires minimal sample preparation. It therefore saves labor, time, reagent costs, and the cost of waste treatment compared with traditional methods; as a result, it is economical.
• Like spectroscopy, hyperspectral imaging is a non-invasive and non-destructive method that can be applied for both qualitative and quantitative analyses. Unlike spectroscopy, hyperspectral imaging records a complete spectrum for every pixel within the scene. Therefore, hyperspectral imaging is able to delineate the distributions of multiple constituents within a sample, not just the bulk composition.
• Hyperspectral imaging provides an extremely simple and expeditious inspection based on an established and validated calibration model. It can determine the contents and distributions of several components simultaneously within the same sample. Such determination permits the labeling and pricing of different entities in a sample simultaneously when sorting food products.
• Hyperspectral imaging offers the flexibility of choosing any ROI within the image even after image acquisition, and any typical spectrum of an ROI or a pixel can be considered a spectral signature and saved in a spectral library. Due to its rich spectral and spatial information, hyperspectral imaging is competent in the detection and discrimination of different objects even if they have similar colors, overlapping spectra, or morphological characteristics.
Although hyperspectral imaging has many advantages, some disadvantages still need to be addressed before industrial application; they are summarized as follows:
• Hyperspectral images contain much redundant data that pose considerable challenges for data mining, and the hardware speed of hyperspectral imaging systems needs to be improved to allow rapid acquisition and analysis of the huge hyperspectral data cube. Because of its long data acquisition and analysis time, hyperspectral imaging is not recommended for direct implementation in online applications. A multispectral imaging system acquiring spectral images at only a few optimal wavelengths would be more suitable to meet the speed requirements of quality inspection. Such optimized multispectral imaging systems have much lower dimensionality than hyperspectral imaging systems, resulting in shorter data acquisition times. The optimal wavelengths can be determined by analyzing the hyperspectral imaging data.
• Like spectroscopy, hyperspectral imaging is an indirect method. Both need accurate reference calibration and robust model transfer algorithms, and neither has good detection limits compared with chemical-based analytical methods. Moreover, like spectroscopy, hyperspectral imaging also suffers from the well-known problem of multicollinearity. Multivariate analysis and variable selection are two ways to reduce the effect of this problem.
• Reference values of attributes cannot be measured accurately for every pixel within a sample. The quantitative relationship is usually established based on the mean spectrum of an ROI for which the reference quality value can be measured using the standard method.
• Hyperspectral imaging is not suitable when the ROI on the surface of a sample is smaller than a pixel or when the quality attributes have no characteristic spectral absorption.
• The analysis of liquids or homogeneous samples does not require hyperspectral imaging but only spectroscopy, because the value of imaging lies in the ability to visualize spatial heterogeneities in samples. A point measurement using a spectrometer will give the same spectral information for the whole sample.
• Most food products absorb light very strongly, making them opaque beyond a depth of about several millimeters in the visible and near-infrared regions. Lammertyn, Peirs, De Baerdemaeker, and Nicolaï (2000) calculated the light penetration depths in apple fruit: up to 4 mm in the 700–900 nm range and between 2 and 3 mm in the 900–1900 nm range. In another study, Hampton et al. (2002–2003) reported a maximum penetration depth of 13 mm into fish tissue. Moreover, the penetration depth of light in the MIR region (usually a few micrometers) is much shorter than in the NIR region. Therefore, hyperspectral imaging cannot detect information on constituents deep inside the food sample.
• The variation of temperature affects the water absorption spectrum. As water is a main component of food products, there is a potential heating effect on the measured hyperspectral images of food.
7. Conclusions
Hyperspectral imaging has been proven to be a promising technology for the rapid, efficient, and reliable measurement of different quality attributes and of their spatial distribution simultaneously, and can therefore be used instead of human inspectors or wet chemical methods for the automatic grading and nutrition determination of food products. By combining spatial and spectral details in one system, the hyperspectral imaging technique can simultaneously acquire spatial images in many spectrally contiguous bands to form a 3-D hyperspectral cube, and is considered to combine the advantages of spectroscopy and imaging techniques. The predicted values of quality or safety attributes at the pixel level can then be used to generate the distribution map of the attribute, leading to better characterization and improved quality and safety evaluation results. Currently, there are still many challenges facing the full exploitation of this technique in terms of computation speed, hardware limitations, and high cost. Therefore, hyperspectral imaging studies are often geared towards the identification of optimal wavelengths to design low-cost multispectral imaging systems that will play an important role in the food industry in real-time monitoring systems for food safety and quality assessment.
Acknowledgments
The authors would like to acknowledge the financial support provided by the Irish Research Council for Science, Engineering and Technology under the Government of Ireland Postdoctoral Fellowship scheme.
References
Antonucci, F., Pallottino, F., Paglia, G., Palma, A., D'Aquino, S., & Menesatti, P. (2011).
Non-destructive estimation of Mandarin maturity status through portable VIS-NIR
spectrophotometer. Food and Bioprocess Technology, 4(5), 809813.
Ariana, D. P., & Lu, R. (2008a). Detection of internal defect in pickling cucumbers using
hyperspectral transmittance imaging. Transactions of the ASABE, 51(2), 705713.


Ariana, D. P., & Lu, R. (2008b). Quality evaluation of pickling cucumbers using hyperspectral reflectance and transmittance imaging: Part I. Development of a prototype.
Sensing and Instrumentation for Food Quality and Safety, 2, 144151.
Bannon, D., & Thomas, R. (2005). Harsh environments dictate design of imaging spectrometer. Laser Focus World, 41(8), 9395.
Bock, L. E., & Connelly, R. K. (2008). Innovative uses of near-infrared spectroscopy in
food processing. Journal of Food Science, 73(7), R91R98.
Bodkin, A. (2010). Hyperspectral imaging systems. In US Patent Application Publication
(Vol. US 2010/0328659 A1).
Cen, H. Y., & He, Y. (2007). Theory and application of near infrared reflectance spectroscopy in determination of food quality. Trends in Food Science & Technology, 18(2), 7283.
Chen, X. J., & Lei, X. X. (2009). Application of a hybrid variable selection method for
determination of carbohydrate content in soy milk powder using visible and
near infrared spectroscopy. Journal of Agricultural and Food Chemistry, 57(2),
334340.
Chen, X. J., Wu, D., He, Y., & Liu, S. (2011). Nondestructive differentiation of panax species using visible and shortwave near-infrared spectroscopy. Food and Bioprocess
Technology, 4(5), 753761.
Cho, B., Kim, M. S., Chao, K., Lawrence, K., Park, B., & Kim, K. (2009). Detection of fecal
residue on poultry carcasses by laser-induced fluorescence imaging. Journal of Food
Science, 74(3), E154E159.
Choudhary, R., Mahesh, S., Paliwal, J., & Jayas, D. S. (2009). Identification of wheat classes using wavelet features from near infrared hyperspectral images of bulk samples. Biosystems Engineering, 102(2), 115127.
Daugman, J. G. (1980). Two-dimensional spectral-analysis of cortical receptive-field profiles. Vision Research, 20(10), 847856.
Daugman, J. G. (1985). Uncertainty relation for resolution in space, spatial-frequency,
and orientation optimized by two-dimensional visual cortical filters. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 2(7), 11601169.
Devaux, M. F., Taralova, I., Levy-Vehel, J., Bonnin, E., Thibault, J. F., & Guillon, F.
(2006). Contribution of image analysis to the description of enzymatic degradation kinetics for particulate food material. Journal of Food Engineering, 77(4),
10961107.
Du, C. J., & Sun, D. -W. (2005). Comparison of three methods for classification of pizza
topping using different colour space transformations. Journal of Food Engineering,
68(3), 277287. http://dx.doi.org/10.1016/j.jfoodeng.2004.05.044.
Du, C. J., & Sun, D. -W. (2006). Learning techniques used in computer vision for food
quality evaluation: A review. Journal of Food Engineering, 72(1), 3955.
ElMasry, G., Iqbal, A., Sun, D. -W., Allen, P., & Ward, P. (2011). Quality classification of
cooked, sliced turkey hams using NIR hyperspectral imaging system. Journal of
Food Engineering, 103(3), 333344.
ElMasry, G., Sun, D. -W., & Allen, P. (2012). Near-infrared hyperspectral imaging for
predicting colour, pH and tenderness of fresh beef. Journal of Food Engineering,
110(1), 127140.
ElMasry, G., Wang, N., ElSayed, A., & Ngadi, M. (2007). Hyperspectral imaging for nondestructive determination of some quality attributes for strawberry. Journal of Food
Engineering, 81(1), 98107.
ElMasry, G., Wang, N., & Vigneault, C. (2009). Detecting chilling injury in Red Delicious
apple using hyperspectral imaging and neural networks. Postharvest Biology and
Technology, 52(1), 18.
Gomez-Sanchis, J., Molto, E., Camps-Valls, G., Gomez-Chova, L., Aleixos, N., & Blasco, J.
(2008). Automatic correction of the effects of the light source on spherical objects:
An application to the analysis of hyperspectral images of citrus fruits. Journal of
Food Engineering, 85(2), 191200.
Gowen, A. A., O'Donnell, C. P., Taghizadeh, M., Cullen, P. J., Frias, J. M., & Downey, G.
(2008). Hyperspectral imaging combined with principal component analysis for
bruise damage detection on white mushrooms (Agaricus bisporus). Journal of
Chemometrics, 22(34), 259267.
Hampton, K. A., Wutzke, J. L., Cavinato, A. G., Mayes, D. M., Lin, M., & Rasco, B. A.
(2002-2003). Characterization of optical probe light penetration depth for noninvasive analysis. Eastern Oregon Science Journal, XVIII, 1418.
IEEE Standard 601.4-1990 (1990). IEEE standard glossary of image processing and pattern recognition terminology. Los Alamitos, CA, USA: IEEE Press.
Kester, R. T., Bedard, N., Gao, L., & Tkaczyk, T. S. (2011). Real-time snapshot
hyperspectral imaging endoscope. Journal of Biomedical Optics, 16(5), 056005.
Klein, M. E., Aalderink, B. J., Padoan, R., de Bruin, G., & Steemers, T. A. G. (2008). Quantitative hyperspectral reectance imaging. Sensors, 8(9), 55765618.
Lammertyn, J., Peirs, A., De Baerdemaeker, J., & Nicolaï, B. (2000). Light penetration
properties of NIR radiation in fruit with respect to non-destructive quality assessment. Postharvest Biology and Technology, 18(2), 121132.
Litwiller, D. (2005). CMOS vs. CCD: Maturing technologies, maturing markets. Photonics
Spectra, 39(8), 5458.
Liu, Y. D., Gao, R. J., Hao, Y., Sun, X. D., & Ouyang, A. G. (2012). Improvement of
near-infrared spectral calibration models for brix prediction in 'Gannan' navel oranges by a portable near-infrared device. Food and Bioprocess Technology, 5(3),
11061112.
Lorente, D., Aleixos, N., Gómez-Sanchis, J., Cubero, S., & Blasco, J. (2013). Selection of
optimal wavelength features for decay detection in citrus fruit using the ROC
curve and neural networks. Food and Bioprocess Technology, 6, 530541.
Manley, M., Williams, P., Nilsson, D., & Geladi, P. (2009). Near infrared hyperspectral
imaging for the evaluation of endosperm texture in whole yellow maize (Zea
maize l.) kernels. Journal of Agricultural and Food Chemistry, 57(19), 87618769.
Mendoza, F., Lu, R., Ariana, D., Cen, H., & Bailey, B. (2011). Integrated spectral and image
analysis of hyperspectral scattering data for prediction of apple fruit firmness and
soluble solids content. Postharvest Biology and Technology, 62(2), 149160.


Ngadi, M. O., & Liu, L. (2010). Hyperspectral image processing techniques. In D. -W. Sun
(Ed.), Hyperspectral imaging for food quality analysis and control (pp. 99127)
(1st ed.). 2010. San Diego, California, USA: Academic Press/Elsevier.
Nicolai, B. M., Beullens, K., Bobelyn, E., Peirs, A., Saeys, W., Theron, K. I., et al. (2007).
Nondestructive measurement of fruit and vegetable quality by means of NIR
spectroscopy: A review. Postharvest Biology and Technology, 46(2), 99118.
Palmer, C. (2005). Diffraction grating handbook (6th ed.). Rochester, NY: Newport
Corporation.
Park, B., Kise, M., Windham, W., Lawrence, K., & Yoon, S. (2008). Textural analysis of
hyperspectral images for improving contaminant detection accuracy. Sensing and
Instrumentation for Food Quality and Safety, 2(3), 208214.
Park, B., Lee, S., Yoon, S. -C., Sundaram, J., Windham, W. R., Hinton, J. A., et al. (2011).
AOTF hyperspectral microscopic imaging for foodborne pathogenic bacteria detection (802707-802707).
Park, B., Yoon, S. -C., Windham, W., Lawrence, K., Kim, M., & Chao, K. (2011). Line-scan
hyperspectral imaging for real-time in-line poultry fecal detection. Sensing and Instrumentation for Food Quality and Safety, 5(1), 2532.
Peng, Y., & Lu, R. (2006). Improving apple fruit firmness predictions by effective correction of multispectral scattering images. Postharvest Biology and Technology, 41(3),
266274.
Qiao, J., Ngadi, M. O., Wang, N., Gariepy, C., & Prasher, S. O. (2007). Pork quality and
marbling level assessment using a hyperspectral imaging system. Journal of Food
Engineering, 83(1), 1016.
Qiao, J., Wang, N., Ngadi, M. O., Gunenc, A., Monroy, M., Gariepy, C., et al. (2007). Prediction of drip-loss, pH, and color for pork using a hyperspectral imaging technique. Meat Science, 76(1), 18.
Qin, J. W., Burks, T. F., Ritenour, M. A., & Bonn, W. G. (2009). Detection of citrus canker
using hyperspectral reflectance imaging with spectral information divergence.
Journal of Food Engineering, 93(2), 183191.
Qin, J., Chao, K., & Kim, M. S. (2011). Investigation of Raman chemical imaging for detection of lycopene changes in tomatoes during postharvest ripening. Journal of
Food Engineering, 107(34), 277288.
Schaare, P. N., & Fraser, D. G. (2000). Comparison of reflectance, interactance and transmission modes of visible-near infrared spectroscopy for measuring internal properties of kiwifruit (Actinidia chinensis). Postharvest Biology and Technology, 20(2),
175184.
Singh, C. B., Jayas, D. S., Paliwal, J., & White, N. D. G. (2010a). Detection of midge-damaged
wheat kernels using short-wave near-infrared hyperspectral and digital colour imaging. Biosystems Engineering, 105(3), 380387.
Singh, C. B., Jayas, D. S., Paliwal, J., & White, N. D. G. (2010b). Identification of
insect-damaged wheat kernels using short-wave near-infrared hyperspectral and
digital colour imaging. Computers and Electronics in Agriculture, 73(2), 118125.
Sinija, V., & Mishra, H. (2011). FTNIR spectroscopic method for determination of moisture content in green tea granules. Food and Bioprocess Technology, 4(1), 136141.
Sun, D. -W., & Brosnan, T. (2003). Pizza quality evaluation using computer vision Part 1 - Pizza base and sauce spread. Journal of Food Engineering, 57(1), 8189.
http://dx.doi.org/10.1016/S0260-8774(02)00275-3 (PII S0260-8774(02)00275-3).
Tkaczyk, T. S., Kester, R. T., & Gao, L. (2011). Image mapping spectrometers. In US Patent
Application Publication (Vol. US 2011/0285995 A1).
Vermeulen, P., Pierna, J. A. F., van Egmond, H. P., Dardenne, P., & Baeten, V. (2012). Online detection and quantification of ergot bodies in cereals using near infrared
hyperspectral imaging. Food Additives and Contaminants Part a-Chemistry Analysis
Control Exposure & Risk Assessment, 29(2), 232240.
Wang, W., Li, C., Tollner, E. W., Gitaitis, R. D., & Rains, G. C. (2012). Shortwave infrared
hyperspectral imaging for detecting sour skin (Burkholderia cepacia)-infected onions. Journal of Food Engineering, 109(1), 3848.

Wei, X., Xu, N., Wu, D., & He, Y. (2013). Determination of branched-amino acid content in
fermented Cordyceps sinensis Mycelium by using FT-NIR spectroscopy technique.
Food and Bioprocess Technology. http://dx.doi.org/10.1007/s11947-013-1053-4 (in
press).
Wu, D., Chen, X. J., Shi, P. Y., Wang, S. H., Feng, F. Q., & He, Y. (2009). Determination of
alpha-linolenic acid and linoleic acid in edible oils using near-infrared spectroscopy improved by wavelet transform and uninformative variable elimination.
Analytica Chimica Acta, 634(2), 166171.
Wu, D., Chen, X., Zhu, X., Guan, X., & Wu, G. (2011). Uninformative variable elimination
for improvement of successive projections algorithm on spectral multivariable selection with different calibration algorithms for the rapid and non-destructive determination of protein content in dried laver. Analytical Methods, 3(8), 17901796.
Wu, D., Feng, L., He, Y., & Bao, Y. (2008). Variety identification of Chinese cabbage
seeds using visible and near-infrared spectroscopy. Transactions of the ASABE,
51(6), 21932199.
Wu, D., He, Y., & Feng, S. (2008). Short-wave near-infrared spectroscopy analysis of
major compounds in milk powder and wavelength assignment. Analytica Chimica
Acta, 610(2), 232242.
Wu, D., Nie, P. C., He, Y., & Bao, Y. D. (2012). Determination of calcium content in powdered milk using near and mid-infrared spectroscopy with variable selection and
chemometrics. Food and Bioprocess Technology, 5(4), 14021410.
Wu, D., Shi, H., Wang, S., He, Y., Bao, Y., & Liu, K. (2012). Rapid prediction of moisture
content of dehydrated prawns using online hyperspectral imaging system.
Analytica Chimica Acta, 726, 5766.
Wu, D., & Sun, D. -W. (2012). Colour measurements by computer vision for food quality
control A review. Trends in Food Science & Technology, 29(1), 520.
Wu, D., & Sun, D. -W. (2013). Potential of time series-hyperspectral imaging (TS-HSI)
for non-invasive determination of microbial spoilage of salmon flesh. Talanta,
111, 3946.
Wu, D., Sun, D. -W., & He, Y. (2012). Application of long-wave near infrared
hyperspectral imaging for measurement of color distribution in salmon fillet. Innovative Food Science & Emerging Technologies, 16, 361372.
Wu, D., Wang, S., Wang, N., Nie, P., He, Y., Sun, D. -W., et al. (2013). Application of time
series-hyperspectral imaging (TS-HSI) for determining water distribution within
beef and spectral kinetic analysis during dehydration. Food and Bioprocess Technology.
http://dx.doi.org/10.1007/s11947-012-0928-0 (in press).
Yang, C. -C., Kim, M. S., Kang, S., Cho, B. -K., Chao, K., Lefcourt, A. M., et al. (2012). Red to
far-red multispectral fluorescence image fusion for detection of fecal contamination on apples. Journal of Food Engineering, 108(2), 312319.
Ye, S. F., Wang, D., & Min, S. G. (2008). Successive projections algorithm combined with
uninformative variable elimination for spectral variable selection. Chemometrics
and Intelligent Laboratory Systems, 91(2), 194199.
Yoon, S., Lawrence, K., Smith, D., Park, B., & Windham, W. (2008). Embedded bone fragment detection in chicken fillets using transmittance image enhancement and
hyperspectral reectance imaging. Sensing and Instrumentation for Food Quality
and Safety, 2(3), 197207.
Yoon, S. C., Park, B., Lawrence, K. C., Windham, W. R., & Heitschmidt, G. W. (2011).
Line-scan hyperspectral imaging system for real-time inspection of poultry carcasses with fecal material and ingesta. Computers and Electronics in Agriculture,
79(2), 159168.
Zou, X. B., Zhao, J. W., Povey, M. J. W., Holmes, M., & Mao, H. P. (2010). Variables selection methods in near-infrared spectroscopy. Analytica Chimica Acta, 667(12),
1432.
Zheng, C. X., Sun, D. -W., & Zheng, L. Y. (2006). Recent developments and applications of
image features for food quality evaluation and inspection - A review. Trends in Food
Science & Technology, 17(12), 642655. http://dx.doi.org/10.1016/j.tifs.2006.06.005.
