
Remote Sensing Lab

Lab Instructor: Michael Johnson

Supervised Classification
Data: Download the data and save the dataset to your workspace for this lab. Do not create
a folder on your H: drive. You should work on the local computer, for example, D:/your
name/rslab/supervise.

Objectives:

To generate supervised signatures using the AOI and Region Grow tools
To use a signature file as an input to a maximum likelihood classification.

Study area:
The image provided is a subset of a Landsat ETM+ scene for the Pensacola, Florida
area. You are to classify the image into seven land use types:
Land cover/use type      Code
Water                    1
High density urban       2
Residential              3
Forest                   4
Grass                    5
Agriculture              6
Sand beach               7

Instructions:
Classification is the process of sorting pixels into a finite number of individual classes, or
categories, of data based on their data file values. If a pixel satisfies a certain set of
criteria, then the pixel is assigned to the class that corresponds to those criteria. There are
two ways to classify pixels into different categories:

supervised
unsupervised

Supervised classification is more closely controlled by you than unsupervised
classification. In this process, you select pixels that represent patterns you recognize or
can identify with help from other sources. Knowledge of the data, the classes desired,
and the algorithm to be used is required before you begin selecting training samples. By
identifying patterns in the imagery you can "train" the computer system to identify pixels
with similar characteristics. By setting priorities to these classes, you supervise the
classification of pixels as they are assigned to a class value. If the classification is
accurate, then each resulting class corresponds to a pattern that you originally identified.
Unsupervised classification is more computer-automated. It allows you to specify
parameters that the computer uses as guidelines to uncover statistical patterns in the data.
Supervised training requires a priori (already known) information about the data, such as:

What type of classes need to be extracted? Soil type? Land use? Vegetation?
What classes are most likely to be present in the data? That is, which types of
land cover, soil, or vegetation (or whatever) are represented by the data?

In supervised training, the user relies on her/his own pattern recognition skills and a
priori knowledge of the data to help the system determine the statistical criteria
(signatures) for data classification. To select reliable samples, the user should know some
information, either spatial or spectral, about the pixels they want to classify. The
location of a specific characteristic, such as a land cover type, may be known through
ground truthing. Ground truthing refers to the acquisition of knowledge about the study
area from fieldwork, analysis of aerial photography, personal experience, etc. Ground
truth data are considered to be the most accurate (true) data available about the area of
study. They should be collected at the same time as the remotely sensed data, so that the
data correspond as much as possible. However, some ground truth data may not be very
accurate due to a number of errors, inaccuracies, and human shortcomings. Global
positioning system receivers are useful tools to conduct ground truth studies and collect
training sets. Training samples are sets of pixels that represent what is recognized as a
discernible pattern, or potential class. The system will calculate statistics from the
sample pixels to create a parametric signature for the class.
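
As a concrete illustration of what happens behind the Signature Editor, the sketch below shows how a parametric signature (a per-class mean vector and covariance matrix) could be computed from training-sample pixels. This is a minimal Python/NumPy sketch for understanding only, not the ERDAS Imagine implementation; the training pixel array is a hypothetical placeholder.

import numpy as np

# Hypothetical training sample: 200 pixels selected by an AOI,
# each holding 6 spectral band values (e.g., Landsat ETM+ bands 1-5 and 7).
training_pixels = np.random.rand(200, 6) * 255

# A parametric signature summarizes the sample with per-band statistics.
mean_vector = training_pixels.mean(axis=0)          # shape (6,)
covariance = np.cov(training_pixels, rowvar=False)  # shape (6, 6)

print("Signature mean vector:", mean_vector)
print("Signature covariance matrix:")
print(covariance)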

Procedure:
0. Run Erdas Imagine. Set the preference for your default data and output directory. Also
check Use bubble help in the preference editor.
1. From the Viewer Tool Bar select the Open Layer icon and select the file called
pensacola.img. Display as Fit to Frame.
2. From the Viewer Menu bar select AOI/Tools and AOI/Seed Properties. The dialog
boxes that appear will be used to generate your signature AOIs. Click on the ? on
the AOI Tool menu bar and click on Help on the AOI/Seed Properties box. Read about
the functions of each.
3. From the Main Icon Panel select Classifier/Signature Editor. A dialog box will
appear and will eventually contain a CellArray of created signatures.
4. Within the Viewer, visually select an area within the bay that you would like to take
as a representative sample of the water. From the AOI tool palette, select the Create

Polygon AOI icon. As your cursor is moved into the viewer, you should notice that it
turns into a crosshair.
5. Move the cursor to within the bay and click with the first mouse button to start
digitizing. Every click with the first button will place a point that will form a polygon.
The return leg of this polygon will follow you around as you digitize. To close the
polygon double click the first mouse button.
6. The area of the water will now be taken as a sample for the classifier. From the
Signature Editor, select the icon Create New Signature(s) from AOI. This will take
the image area defined by the AOI and add its spectral information into the Signature
Editor. Answer Question 1.
7. You should now have an entry in the Signature Editor called Class 1. Edit this text
field and enter Water 1 as the class name for this signature. Use your knowledge of false
color images to help you with this process.
8. From the AOI tool palette, select the Region Grow AOI icon. As your cursor is
moved into the viewer, you should notice that it turns into a crosshair. Move the cursor to
another part of the water body, over an area that has not already been selected by any other
AOI tool. Click to plant a seed pixel.
9. An AOI will be generated that is quite small, since the value for the Spectral
Euclidean Distance, found in the Region Growing Properties dialog box, has a default
value of 1. Increase this value to 5 and click the Redo button.
10. Once the new region has been calculated, set the Spectral Euclidean Distance to 10
and observe any change (a simplified sketch of this distance-based region growing is
given after the procedure). Answer Questions 2 and 3.
11. This newly grown region will now be used as a sample for the classifier. From the
Signature Editor, select the Create New Signature icon. This will take the image area
defined by the AOI and add the spectral information into the Signature Editor.
12. You should now have a second water entry in the Signature Editor. Edit this class
name to Water 2.
13. Repeat the region growing process and add to the Signature Editor three more
signatures representing different tones of the water body, for example, plumes or shallow
water near the beach. Do not duplicate classes from the first set of signatures. Answer Question 4.
14. Repeat the signature collection using the AOI tools for the other land use classes shown in
the table above. Note that for some land use types, using Create Polygon AOI rather
than Region Grow AOI may be more appropriate; for example, Create Polygon AOI
works better for land use types with heterogeneous signatures.

15. From the Signature Editor Menu Bar select File/Save. Name your file
pensacola.sig. Click OK.
16. You may access the Supervised Classification box in one of two ways. Since you
have just finished creating signatures and the Signature Editor is still open, then from the
Signature Editor Menu Bar select Classify/Supervised. If you were creating a
classification from a closed signature file then you would go to the Main Icon Panel and
select Classifier/Supervised Classification. Both procedures will bring up similar
dialog boxes. In the latter case, you will need to enter the Input Raster File and the Input
Signature File.
17. For the Output Classified File enter pensacola_super.img.
18. For the Parametric Rule select Maximum Likelihood (a sketch of this decision rule is
given after the procedure). If any signatures are selected in the Signature Editor when you
run your classification, your output will be based only on that selection, so deselect any
selected signatures. Use the remaining defaults and click OK.
19. When the process is done, click OK in the Job Status box. Then display
pensacola_super.img and the original image side by side in viewers opened using the
Geospatial Light Table option, displayed as Fit to Frame. Compare the two images.
20. Recode pensacola_super.img (Image Interpreter | GIS Analysis | Recode), combining the
same types of land use into a single class; for example, Water 1 and Water 2 should be
combined to form Water. For the output image name, enter Pensacola_landcover.img. Recode
the attribute table using the codes provided in the table above. After you obtain the
recoded image, open Pensacola_landcover.img in a viewer and compare it with the
original. Assign an appropriate color to each land use type. Go to Raster/Attributes and,
based on the histogram information (which shows the number of pixels for each land use
type), calculate the percentage of area of each land use type using MS Excel (a short
scripted alternative is sketched at the end of this handout). Compile a table to
record this information. Turn in the table.
21. Make a Map Composition with your final classified image and print it in color or
b/w. Make sure the map has a legend. You may need to use screen capture software to
capture the map for printing. Hand in this map with your answers to the questions.
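
For reference, the Region Grow AOI tool used in steps 8-10 can be thought of as a flood fill that accepts a neighboring pixel whenever its spectral Euclidean distance from the seed pixel is within the chosen threshold. The following is a simplified Python sketch of that idea, assuming a 4-way neighbor search and a hypothetical image array; it is not the ERDAS Imagine implementation.

import numpy as np
from collections import deque

def region_grow(image, seed, max_distance):
    # image: (rows, cols, bands) array; seed: (row, col) tuple.
    rows, cols, _ = image.shape
    seed_value = image[seed].astype(float)
    grown = np.zeros((rows, cols), dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-way neighbors
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not grown[nr, nc]:
                # Spectral Euclidean distance between candidate and seed.
                distance = np.linalg.norm(image[nr, nc].astype(float) - seed_value)
                if distance <= max_distance:
                    grown[nr, nc] = True
                    queue.append((nr, nc))
    return grown

# Raising max_distance (e.g., from 5 to 10, as in step 10) lets more
# neighboring pixels join the grown AOI.

Similarly, the Maximum Likelihood rule selected in step 18 assigns each pixel to the class whose signature (mean vector and covariance matrix) gives it the highest likelihood. A minimal Python sketch of that decision rule, assuming equal prior probabilities and signatures stored as hypothetical (mean, covariance) pairs, is:

import numpy as np

def max_likelihood_class(pixel, signatures):
    # signatures: dict mapping class name -> (mean vector, covariance matrix).
    best_class, best_score = None, -np.inf
    for name, (mean, cov) in signatures.items():
        diff = pixel - mean
        # Log-likelihood of a multivariate normal (constant term omitted).
        score = -0.5 * (np.log(np.linalg.det(cov))
                        + diff @ np.linalg.inv(cov) @ diff)
        if score > best_score:
            best_class, best_score = name, score
    return best_class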

Q1. What type of information can be seen in the CellArray regarding this signature?
Q2. Was there a significant change in the size of the AOI when you altered the value from
5 to 10? If not, why?
Q3. How would your AOIs have been different if you had used the 8-way neighbor
search?

Q4. What are the advantages of using a region grow signature versus a manually
delineated signature?
Also turn in the table of area percentage (the percent of land area for each land use class)
and the land use map.
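
If you would rather script the area-percentage calculation from step 20 than use MS Excel, the following minimal Python sketch shows the idea. The pixel counts are hypothetical placeholders; substitute the histogram values you read from Raster/Attributes.

# Hypothetical pixel counts read from the Raster/Attributes histogram.
pixel_counts = {
    "Water": 120000,
    "High density urban": 25000,
    "Residential": 48000,
    "Forest": 90000,
    "Grass": 30000,
    "Agriculture": 22000,
    "Sand beach": 8000,
}

total = sum(pixel_counts.values())
for land_use, count in pixel_counts.items():
    print(f"{land_use}: {100 * count / total:.2f}% of the study area")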
