
Laws' Texture Measures

The texture energy measures developed by Kenneth Ivan Laws at the University of Southern California have been used
for many diverse applications. These measures are computed by first applying small convolution kernels to a digital
image, and then performing a nonlinear windowing operation. We will first introduce the convolution kernels that we
will refer to later.
The 2-D convolution kernels typically used for texture discrimination are generated from the following set of one-dimensional convolution kernels of length five:

L5 = [  1   4   6   4   1 ]
E5 = [ -1  -2   0   2   1 ]
S5 = [ -1   0   2   0  -1 ]
W5 = [ -1   2   0  -2   1 ]
R5 = [  1  -4   6  -4   1 ]

These mnemonics stand for Level, Edge, Spot, Wave, and Ripple. Note that all kernels except L5 are zero-sum. In his
dissertation, Laws also presents convolution kernels of length three and seven, and discusses the relationship between
different sets of kernels.
From these one-dimensional convolution kernels, we can generate 25 different two-dimensional convolution kernels by
convolving a vertical 1-D kernel with a horizontal 1-D kernel. As an example, the L5E5 kernel is found by convolving a
vertical L5 kernel with a horizontal E5 kernel. Of the 25 two-dimensional convolution kernels that we can generate
from the one-dimensional kernels above, 24 of them are zero-sum; the L5L5 kernel is not. A listing of all 5x5 kernel
names is given below:

L5L5  E5L5  S5L5  W5L5  R5L5
L5E5  E5E5  S5E5  W5E5  R5E5
L5S5  E5S5  S5S5  W5S5  R5S5
L5W5  E5W5  S5W5  W5W5  R5W5
L5R5  E5R5  S5R5  W5R5  R5R5
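
As a concrete illustration, the following Python/NumPy sketch builds all 25 kernels. Convolving a vertical 1-D kernel with a horizontal 1-D kernel is equivalent to taking their outer product; the names kernels_1d and kernels_2d are ours, not Laws':

    import numpy as np

    # The five 1-D Laws kernels from the table above.
    kernels_1d = {
        "L5": np.array([ 1,  4, 6,  4,  1]),
        "E5": np.array([-1, -2, 0,  2,  1]),
        "S5": np.array([-1,  0, 2,  0, -1]),
        "W5": np.array([-1,  2, 0, -2,  1]),
        "R5": np.array([ 1, -4, 6, -4,  1]),
    }

    # A vertical kernel convolved with a horizontal kernel is their outer
    # product, so e.g. kernels_2d["L5E5"] = outer(L5, E5).
    kernels_2d = {v + h: np.outer(kernels_1d[v], kernels_1d[h])
                  for v in kernels_1d for h in kernels_1d}
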
The remainder of this document describes how to build up a set of texture energy measures for each pixel in a digital
image. This is only a "cookbook" strategy, and therefore most steps are optional.

Step I: Apply Convolution Kernels


Given a sample image with N rows and M columns on which we want to perform texture analysis (i.e., compute texture features at each pixel), we first apply each of our 25 convolution kernels to the image (for certain applications, of course, only a subset of all 25 will be used). The result is a set of 25 NxM grayscale images. These will form the basis for our textural analysis.
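
A minimal sketch of this step, assuming SciPy is available and kernels_2d is the dictionary from the sketch above (the helper name apply_laws_kernels is our own):

    import numpy as np
    from scipy.ndimage import convolve

    def apply_laws_kernels(image, kernels_2d):
        """Convolve an NxM grayscale image with each 5x5 Laws kernel.

        Returns a dict mapping kernel name -> NxM filtered image.
        """
        image = np.asarray(image, dtype=np.float64)
        return {name: convolve(image, kernel.astype(np.float64), mode="reflect")
                for name, kernel in kernels_2d.items()}

    filtered_images = apply_laws_kernels(image, kernels_2d)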

Step II: Perform Windowing Operation


We now want to replace every pixel in our 25 separate NxM grayscale images with a Texture Energy Measure (TEM) at that pixel. We do this by looking in a local neighborhood (let's use a 15x15 square) around each pixel and summing together the absolute values of the neighborhood pixels. We generate a new set of images, which we will refer to as the TEM images, during this stage of image processing. The following non-linear filter is applied to each of our 25 NxM images:

                  7     7
    NEW(x,y) =   SUM   SUM   | OLD(x+i, y+j) |
                i=-7  j=-7
Laws also suggests the use of another filter instead of the "absolute value windowing" filter listed above:

                        7     7
    NEW(x,y) = SQRT (  SUM   SUM   OLD(x+i, y+j)^2  )
                      i=-7  j=-7
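
Both windowing filters reduce to a local sum over a 15x15 window, which can be computed efficiently with a moving-average filter. A sketch, assuming SciPy and the filtered_images dict from the Step I sketch:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def texture_energy(filtered, size=15):
        # uniform_filter computes a local mean; multiplying by the window
        # area (size*size) turns it into the local sum of absolute values.
        return uniform_filter(np.abs(filtered), size=size, mode="reflect") * (size * size)

    def texture_energy_rms(filtered, size=15):
        # The alternative "root of sum of squares" filter.
        return np.sqrt(uniform_filter(filtered ** 2, size=size, mode="reflect") * (size * size))

    tem_images = {name + "T": texture_energy(img) for name, img in filtered_images.items()}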

We have at this point generated 25 TEM images from our original image. Let's denote these images by the names of the original convolution kernels with an appended "T" to indicate that this is a texture energy measure (i.e., the non-linear filtering has been performed). Our TEM images are named:

L5L5T  E5L5T  S5L5T  W5L5T  R5L5T
L5E5T  E5E5T  S5E5T  W5E5T  R5E5T
L5S5T  E5S5T  S5S5T  W5S5T  R5S5T
L5W5T  E5W5T  S5W5T  W5W5T  R5W5T
L5R5T  E5R5T  S5R5T  W5R5T  R5R5T

Step III: Normalize Features for Contrast


All convolution kernels used thus far are zero-sum with the exception of the L5L5 kernel. In accordance with Laws' suggestions, we can therefore use the L5L5T image as a normalization image; normalizing any TEM image pixel-by-pixel by the L5L5T image will normalize that feature for contrast.
After this is done, the L5L5T image is typically discarded and not used in subsequent textural analysis unless a "contrast" feature is desirable.
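
A sketch of this normalization, assuming tem_images is the dict from the Step II sketch (the small epsilon guarding against division by zero is our addition, not part of Laws' method):

    eps = 1e-8  # hypothetical guard against division by zero
    l5l5t = tem_images["L5L5T"]
    normalized = {name: img / (l5l5t + eps)
                  for name, img in tem_images.items()
                  if name != "L5L5T"}  # discard L5L5T unless a "contrast" feature is wanted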

Step IV: Combine Similar Features


For many applications, "directionality" of textures might not be important. If this is the case, then similar features can be combined to remove a directional bias from the features. For example, L5E5T is sensitive to vertical edges and E5L5T is sensitive to horizontal edges. If we add these TEM images together, we have a single feature sensitive to simple "edge content".

Following this example, features that were generated with transposed convolution kernels are added together. We will denote these new features with an appended "R" for "rotational invariance":

E5L5TR = E5L5T + L5E5T
S5L5TR = S5L5T + L5S5T
W5L5TR = W5L5T + L5W5T
R5L5TR = R5L5T + L5R5T
S5E5TR = S5E5T + E5S5T
W5E5TR = W5E5T + E5W5T
R5E5TR = R5E5T + E5R5T
W5S5TR = W5S5T + S5W5T
R5S5TR = R5S5T + S5R5T
R5W5TR = R5W5T + W5R5T

To keep all features consistent in magnitude, we can scale the remaining (symmetric) features by 2:

E5E5TR = E5E5T * 2
S5S5TR = S5S5T * 2
W5W5TR = W5W5T * 2
R5R5TR = R5R5T * 2
The result, if we assume we have deleted L5L5T altogether as suggested in Step III, is a set of 14 texture features which
are rotationally invariant. If we stack these images up, we get a data set where every pixel is represented by 14 texture
features.
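
Putting Step IV together, here is a sketch that combines the transposed pairs, scales the four symmetric features, and stacks the result into an NxMx14 feature array (normalized is the dict from the Step III sketch):

    import numpy as np

    pairs = [("E5", "L5"), ("S5", "L5"), ("W5", "L5"), ("R5", "L5"),
             ("S5", "E5"), ("W5", "E5"), ("R5", "E5"),
             ("W5", "S5"), ("R5", "S5"), ("R5", "W5")]
    symmetric = ["E5E5", "S5S5", "W5W5", "R5R5"]

    # Ten rotationally combined features plus four scaled symmetric ones.
    features = [normalized[a + b + "T"] + normalized[b + a + "T"] for a, b in pairs]
    features += [normalized[name + "T"] * 2 for name in symmetric]

    feature_stack = np.stack(features, axis=-1)  # every pixel gets 14 texture features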

References:

K. Laws. Textured Image Segmentation, Ph.D. Dissertation, University of Southern California, January 1980.

K. Laws. Rapid texture identification. In SPIE Vol. 238: Image Processing for Missile Guidance, pages 376-380, 1980.
