
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, VOL. 19, NO. 6, NOVEMBER/DECEMBER 1989, 1613

Color Discrimination By Computer

GLENN HEALEY

Abstract—A color space metric that is useful for computer vision is developed. While there is no shortage of proposed methods for quantifying how the human eye perceives color differences, it is difficult to apply these perceptual metrics directly to the color images sensed by a computer vision system. The metric developed can be applied to images sensed using color filters (e.g., R, G, B) as is often done in computer vision. This metric is defined in terms of the spectral characteristics of the filters and camera and accounts for their noise properties. The metric is designed to respond to material changes while remaining insensitive to geometric variation in the scene. The color distance function is derived as an estimate of the distance between normalized spectral power distributions. Components of this distance are weighted to account for sensor noise properties. A color metric has several possible uses in a computer vision system. A color metric is useful for detecting color edges, classifying intensity edges, and estimating color variation within image regions. The usefulness of this color metric is demonstrated by evaluating its performance on real images.

I. INTRODUCTION

Until recently, most work in computer vision has dealt exclusively with images obtained using a single sensor. It is perhaps for this reason that many of the available algorithms for color images are multidimensional generalizations of algorithms originally intended for intensity images, e.g., [1], [7], [18], [21]. Many of these generalized algorithms have been effectively applied to real images. It is reasonable to expect, however, that better uses for color information can be obtained by first examining the physics that is specific to the formation of color images. In this spirit, computer vision researchers have recently started developing algorithms that are based on the physics of color image formation, e.g., [22]. It is only by examining the relevant physics that it is possible to say whether color images contain useful information that is not present in intensity images and, if so, how this information can be reliably extracted.

Perhaps the most important advantage of having color information in addition to image irradiance information is that the normalized color of a surface of one material is more stable under changes in geometry than the corresponding image irradiance values. It is shown in [8] that if we are not viewing a highlight on an inhomogeneous dielectric, or if an algorithm is used to first remove such highlights [11], then the spectral power distribution I(x, y, λ) of the light incident on the imaging sensor can be approximated by

    I(x, y, λ) = L(λ) M(g) C(λ)    (1)

where x and y indicate coordinates in the image plane, λ is wavelength, L(λ) is the spectral power distribution of the light incident on the imaged surface, and M(g) and C(λ) are determined by the optical properties of the imaged surface. The parameter g indicates dependence on the photometric geometry. Define the normalized color at an image point (x, y) by

    f(x, y, λ) = I(x, y, λ) / ∫_{λ1}^{λ2} I(x, y, λ) dλ    (2)

where [λ1, λ2] is the visible wavelength range. From (1), the normalized color at (x, y) is given by

    f(x, y, λ) = L(λ) M(g) C(λ) / ∫_{λ1}^{λ2} L(λ) M(g) C(λ) dλ = L(λ) C(λ) / ∫_{λ1}^{λ2} L(λ) C(λ) dλ    (3)

and we see that f(x, y, λ) does not depend on the geometry g.

In this correspondence, I develop a color metric for computer vision. This metric is intended to quantify the significance of a color change in an image. Some work in computer vision has been done on developing color metrics for the purpose of detecting color edges [16], [17], [19]. None of these proposed metrics, however, considers the underlying properties of the sensors being used. Similarly, all of these techniques are sensitive to geometrical variations in the scene. The most significant advancement of this correspondence is the development of a color metric that accounts for the spectral properties of the camera and filters and their noise characteristics and that also is insensitive to geometric variations in the scene unless they are accompanied by material changes.

Based on previous work, it does not seem likely that color edge detection is a particularly useful step to be taken in computer vision. The experiments of Novak and Shafer [17] and Nevatia [16] have demonstrated that a large majority of detected color edges are also detected as irradiance edges. Interestingly, human subjects have been unable to bring color edges into focus when the color edge does not coincide with a brightness edge [24].

The color metric of this correspondence is intended to be applied to regions of an image following the detection of image irradiance edges. This is useful for grouping regions corresponding to the same material in the scene and for classifying irradiance edges. For example, given an object of a single material that is illuminated by a single spectral-power distribution, there will be an irradiance edge, but not a normalized color edge, across a surface orientation discontinuity. Similarly, a specular irradiance edge in an image will indicate the presence of an inhomogeneous dielectric if there is a corresponding normalized color edge [8]. On the other hand, a specular irradiance edge without a normalized color edge indicates the presence of a homogeneous material. The color metric is also useful for estimating the color variation within image regions of continuous image irradiance. This variation has a strong relationship to the material composition of an object's surface and may therefore be useful for recognition.

II. COLOR SENSING

The properties of light impinging on a sensor plane can be characterized by the function I(x, y, λ) where I is irradiance, x and y are spatial coordinates in the image plane, and λ is wavelength. Information about the spectral properties of the incident light can be obtained by using several sensors (e.g., color filters) with differing wavelength responses. For the remainder of this section, I consider a fixed image location (x0, y0) and abbreviate I(x0, y0, λ) by I(λ).

It has been standard in computer vision to digitize an image of RGB values using "red," "green," and "blue" filters and to refer to these RGB values as if they represent something fundamental about the scene. In reality there are an infinite number of combinations of filters and cameras that might be used to obtain RGB values. In general, different "red" filters placed in front of different CCD cameras will give us different R values for the same incident light. In this work, I transform the sensor measurements into an approximation to the spectral power distribution of the light entering the camera. This approximation attempts to capture a physical property of the light entering the camera.

Manuscript received September 20, 1988; revised March 30, 1989. This work was supported by AFOSR contract F33615-85-C-5106 and KBV Contract AIADS S1093-S-1 (Phase II).
The author is with the Robotics Laboratory, Computer Science Dept., Stanford University, Stanford, CA 94305.
IEEE Log Number 8930338.

0018-9472/89/1100-1613$01.00 © 1989 IEEE
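The cancellation of the geometric factor in (3) is easy to check numerically. The sketch below is not from the correspondence; the illuminant L(λ), reflectance factor C(λ), and geometry factors M(g) are invented purely for illustration.

```python
import numpy as np

# Numerical check of (1)-(3): the normalized color f(x, y, lambda) does not
# depend on the geometric factor M(g). All spectra here are invented.
lam = np.linspace(400e-9, 700e-9, 301)        # visible range [lambda1, lambda2]
dlam = lam[1] - lam[0]
L = np.exp(-((lam - 550e-9) / 80e-9) ** 2)    # assumed illuminant spectral power
C = 0.2 + 0.6 * (lam - 400e-9) / 300e-9       # assumed surface reflectance factor

def normalized_color(M_g):
    """f = L*M(g)*C divided by the integral of L*M(g)*C, as in (2) and (3)."""
    I = L * M_g * C                            # spectral power distribution, per (1)
    return I / (I.sum() * dlam)                # divide by total power

f1 = normalized_color(0.3)                     # two different photometric geometries
f2 = normalized_color(1.7)
print(np.allclose(f1, f2))                     # True: M(g) cancels in (3)
```

The constant M(g) scales numerator and denominator identically, so any two geometries yield the same normalized color, which is the invariance the metric relies on.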


Therefore this approximation is somewhat less dependent on the actual sensors used than the sensor measurements themselves.

Following [10] and [23], I approximate I(λ) as a linear combination of basis functions. This color recovery method is similar to techniques that express surface reflectance as a linear combination of basis functions. I believe that expressing surface reflectance using such a linear model was first done by Sallstrom [20] and subsequently used by many others, e.g., [3], [4], [12]. In this correspondence, I consider only approximations to I(λ) and not approximations to surface spectral reflectance.

Consider approximating the function I(λ) at a point in an image. At each image point, we measure the outputs s_i of n sensors. Each sensor has a certain spectral response denoted by the function f_i(λ). In a typical system, each function f_i(λ) will correspond to the product of a color filter transmission function with the spectral response of the camera. Therefore at each image point we have the measured values s_i (0 ≤ i ≤ n−1) given by

    s_i = ∫ f_i(λ) I(λ) dλ    (4)

where λ ranges over the entire electromagnetic spectrum. In this work we restrict our interest in the behavior of I(λ) to the visible region of the spectrum. Therefore the functions f_i(λ) will be nonzero only for values of λ in the visible range. We denote the visible range by [λ1, λ2]. Since many cameras respond to light in the near infrared region of the spectrum, it may be necessary to use an infrared (IR) cutoff filter to block light for which λ > λ2.

Suppose we approximate the function I(λ) by a linear combination of m basis functions b_j(λ). If we let a_j be the components of Ĩ(λ) on this basis, then we have

    Ĩ(λ) = Σ_{0 ≤ j ≤ m−1} a_j b_j(λ).    (5)

Substituting, (4) becomes

    s_i = ∫ f_i(λ) Σ_{0 ≤ j ≤ m−1} a_j b_j(λ) dλ    (6)

which may be written

    s_i = Σ_{0 ≤ j ≤ m−1} a_j ∫ f_i(λ) b_j(λ) dλ.    (7)

Let

    k_ij = ∫ f_i(λ) b_j(λ) dλ    (8)

denote the integral in (7). Then k_ij is a constant that depends only on the ith filter function and the jth basis function. Let S be the n dimensional vector defined by S = [s_0, s_1, ..., s_{n−1}]^T. Let K be the n × m constant matrix defined by K(i, j) = k_ij. Let A be the m dimensional vector defined by A = [a_0, a_1, ..., a_{m−1}]^T. Then we have the linear system of n equations

    S = KA.    (9)

If we choose our filters f_i(λ) and basis functions b_j(λ) such that K has maximal rank, then the n sensor outputs s_0, s_1, ..., s_{n−1} uniquely determine n components a_0, a_1, ..., a_{n−1} of Ĩ(λ). Therefore by letting m = n in (9) we can recover an approximation to the function I(λ) given by

    Ĩ(λ) = B^T(λ) A    (10a)

where

    A = K^{−1} S    (10b)

and B(λ) is the vector [b_0(λ), b_1(λ), ..., b_{n−1}(λ)]^T.

In summary, Ĩ(λ) is the unique approximation to I(λ) that lies in the space spanned by the n basis functions b_0(λ), b_1(λ), ..., b_{n−1}(λ) and that also satisfies the n constraints of (4). There are two sources of error in the approximation Ĩ(λ). The first source of error is noisy sensor measurements S. The second source of error is the possibility that I(λ) cannot be exactly represented in the space spanned by the basis functions b_0(λ), b_1(λ), ..., b_{n−1}(λ). The effects of sensor noise are discussed in Section V. A discussion of approximation error is not central to the development of this work, but is included for completeness in an appendix.

III. A METRIC SPACE FOR NORMALIZED COLOR

This section develops a normalization procedure for color. The analysis of this section assumes the recovery of a continuous function I(λ) using noiseless sensors. In Section IV, I specialize the analysis to finite dimensional approximations like those recovered using the technique of Section II.

The total power of I(λ) is given by the L¹[λ1, λ2] norm

    ‖I‖ = ∫_{λ1}^{λ2} I(λ) dλ.    (11)

As in (2), define the normalized color of I(λ) by the function Ī(λ) having unit total power

    Ī(λ) = I(λ) / ∫_{λ1}^{λ2} I(λ) dλ.    (12)

The space of normalized physical colors is the space of all continuous nonnegative functions Ī(λ) on [λ1, λ2] having unit total power.

As shown in Section I and analyzed in detail in [8], normalized color is only weakly dependent on reflection geometry. Thus, given a technique for removing highlights such as [11], we get the desirable property that for a surface of a single material illuminated by a single spectral-power distribution, the normalized color of all image points corresponding to that surface will be identical. In this sense (12) serves to factor out geometric information and preserve information about source color and surface spectral reflectance.

Given any two normalized physical colors Ī_1(λ) and Ī_2(λ), define the distance from Ī_1(λ) to Ī_2(λ) by the L²[λ1, λ2] distance function

    d(Ī_1(λ), Ī_2(λ)) = sqrt( ∫_{λ1}^{λ2} [Ī_1(λ) − Ī_2(λ)]² dλ ).    (13)

The L²[λ1, λ2] distance of (13) is used rather than the distance defined by

    d_max(Ī_1(λ), Ī_2(λ)) = max_{λ1 ≤ λ ≤ λ2} |Ī_1(λ) − Ī_2(λ)|    (14)

because inaccuracies over a small range of λ can cause large deviations of d_max. On the other hand, the distance d of (13) depends on the distance between the functions integrated over the entire spectrum. Therefore the L²[λ1, λ2] distance will usually give a more reliable characterization of the distance between two measured functions than the distance d_max.

Although we have not motivated our decision to compute distances between functions normalized in L¹[λ1, λ2] using the L²[λ1, λ2] metric, functions treated in this way have several useful properties. These properties are discussed in [9].

IV. A FINITE-DIMENSIONAL REPRESENTATION

The functions Ī(λ) in the discussion of Section III were assumed to be arbitrary nonnegative continuous functions on [λ1, λ2], i.e., nonnegative elements of the space C[λ1, λ2].
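The linear recovery pipeline of Section II can be sketched in a few lines of numerical code: build K from (8), measure S via (4), and solve (10b). The Gaussian filter responses, the monomial basis, and the test spectrum below are all invented for the demonstration and are not the sensors or basis used in the correspondence.

```python
import numpy as np

# Sketch of the Section II recovery with n = m = 3. k_ij is approximated by a
# Riemann sum of f_i(lambda) * b_j(lambda), per (7)-(8).
lam = np.linspace(-1.0, 1.0, 2001)            # wavelength axis (rescaled units)
dlam = lam[1] - lam[0]
n = 3
centers = [-0.5, 0.0, 0.5]                    # assumed filter center wavelengths
F = np.array([np.exp(-((lam - c) / 0.3) ** 2) for c in centers])  # f_i(lambda)
B = np.array([lam ** j for j in range(n)])    # basis b_j(lambda): 1, lam, lam^2

K = F @ B.T * dlam                            # K(i, j) = k_ij, per (8)
I_true = 0.8 + 0.5 * lam - 0.2 * lam ** 2     # a spectrum lying in span{b_j}
s = F @ I_true * dlam                         # sensor outputs, per (4)

a = np.linalg.solve(K, s)                     # A = K^{-1} S, per (10b)
I_rec = B.T @ a                               # reconstruction B^T(lambda) A, per (10a)
print(np.allclose(I_rec, I_true))             # True when I lies in the basis span
```

When I(λ) lies outside span{b_j}, the solve still succeeds but the reconstruction carries the approximation error discussed in the appendix.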
The color recovery method of Section II, however, is only able to recover finite-dimensional approximations to these functions. This section specializes the analysis of Section III to finite-dimensional approximations that can be recovered using the method of Section II.

In general we begin by restricting ourselves to some n dimensional subset S of C[λ1, λ2]. If S is specified by a set of basis functions, then the Gram-Schmidt process [5] can be used to construct an orthonormal basis for S. This orthonormal basis may then be taken to be the basis B(λ) of Section II.

For the purpose of concreteness, we will take S to be the set of polynomials of degree n−1 and the orthonormal basis to be the first n normalized Legendre polynomials. The normalized Legendre polynomials are given by

    p_i(λ) = sqrt((2i+1)/2) P_i(λ),  0 ≤ i ≤ n−1    (15)

where P_i(λ) is the Legendre polynomial of degree i. The functions p_i(λ) are orthonormal in the sense that

    ∫_{−1}^{1} p_i(λ) p_j(λ) dλ = δ_ij.    (16)

We note that the analysis of this section applies more generally to any orthonormal basis for which one of the basis functions is a constant.

A. Computing Normalized Physical Color

Using the normalized Legendre polynomial representation, the technique of Section II recovers an approximation to I(λ) of the form

    Ĩ(λ) = Σ_{0 ≤ i ≤ n−1} ã_i p_i(λ).    (17)

The total power of Ĩ(λ) is given by

    ∫_{−1}^{1} Ĩ(λ) dλ = Σ_{0 ≤ i ≤ n−1} ã_i ∫_{−1}^{1} p_i(λ) dλ = √2 ã_0    (18)

where the last step follows from the orthogonality of the functions p_i(λ). Therefore the total power of Ĩ(λ) may be determined by only considering the first coefficient in the normalized Legendre polynomial representation. From the total power of Ĩ(λ), we can determine the normalized physical color Ī(λ) by

    Ī(λ) = Σ_{0 ≤ i ≤ n−1} â_i p_i(λ)    (19)

where

    â_i = ã_i / (√2 ã_0).    (20)

B. Metric Space Properties

For two functions I_1(λ) and I_2(λ) given by

    I_1(λ) = Σ_{0 ≤ i ≤ n−1} ã_i p_i(λ)    (21)

    I_2(λ) = Σ_{0 ≤ i ≤ n−1} b̃_i p_i(λ)    (22)

the normalized physical colors are

    Ī_1(λ) = Σ_{0 ≤ i ≤ n−1} â_i p_i(λ),  Ī_2(λ) = Σ_{0 ≤ i ≤ n−1} b̂_i p_i(λ)    (23)

where the â_i and b̂_i are computed as in (20). The color space distance between Ī_1(λ) and Ī_2(λ) is given by (13), which simplifies because of orthogonality to

    d(Ī_1(λ), Ī_2(λ)) = sqrt( Σ_{0 ≤ i ≤ n−1} (â_i − b̂_i)² ).    (24)

C. Color Space as a Subset of R^{n−1}

From (19) the representation for normalized physical colors is

    Ī(λ) = Σ_{0 ≤ i ≤ n−1} â_i p_i(λ).    (25)

From (20) we see that all normalized physical colors have â_0 = 1/√2. We can write

    Ī(λ) = 1/2 + Σ_{1 ≤ i ≤ n−1} â_i p_i(λ).    (26)

We can consider the normalized physical color to be the point in R^{n−1} with coordinates â_1, â_2, ..., â_{n−1}. Since we require Ī(λ) ≥ 0 on λ1 ≤ λ ≤ λ2, normalized physical colors form a proper subset of R^{n−1}. To be precise, normalized physical colors are those points of R^{n−1} for which

    Σ_{1 ≤ i ≤ n−1} â_i p_i(λ) ≥ −1/2,  −1 ≤ λ ≤ 1.    (27)

Let C^{n−1} be the set of points in R^{n−1} that are normalized physical colors. Points in C^{n−1} will be referred to as

    Â = (â_1, â_2, ..., â_{n−1})    (28)

where the coordinates â_1, â_2, ..., â_{n−1} have the same significance as in (26).

An important consequence of viewing colors as points in C^{n−1} is that the usual euclidean metric in R^{n−1} is equivalent to the metric we defined for normalized physical colors in Section III. This is seen by examining (24) and recalling that â_0 = b̂_0. Thus a representation in terms of normalized Legendre polynomials allows intuition about the familiar distance in R^{n−1} to be applied to distance in color space.

V. MODELING SENSOR NOISE

If we are able to make perfect measurements s_0, s_1, ..., s_{n−1}, then (10) allows us to compute the approximation to I(λ) given by

    Ĩ(λ) = B^T(λ) A.    (29)

In real physical situations each of the measurements s_0, s_1, ..., s_{n−1} will have some amount of variability. This section examines how the variance in the measurements S affects the computed approximation of (29).

Let Ā denote E(A) and S̄ denote E(S). From (10b), we have

    Ā = E[K^{−1} S] = K^{−1} S̄.    (30)

Let Σ_A be the n × n covariance matrix of A defined by

    Σ_A = E[(A − Ā)(A − Ā)^T]    (31)
        = E[(K^{−1}S − K^{−1}S̄)(K^{−1}S − K^{−1}S̄)^T]    (32)
        = K^{−1} E[(S − S̄)(S − S̄)^T] (K^{−1})^T    (33)

which may be written

    Σ_A = K^{−1} Σ_S (K^{−1})^T    (34)

where Σ_S is the n × n covariance matrix of S.
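The normalized Legendre representation of (17)-(24) can be sketched numerically. The two spectra below are invented; the point is that (20) factors out overall scale (geometry), while a genuine change in spectral shape survives as a euclidean distance between coefficient vectors, per (24).

```python
import numpy as np
from numpy.polynomial.legendre import legval

# Sketch of Section IV: normalized Legendre coefficients on [-1, 1] and the
# color-space distance (24). Spectra are invented for illustration.
lam = np.linspace(-1.0, 1.0, 4001)
dlam = lam[1] - lam[0]
n = 4

def p(i, x):
    """Normalized Legendre polynomial p_i = sqrt((2i+1)/2) P_i, per (15)."""
    c = np.zeros(i + 1)
    c[i] = 1.0
    return np.sqrt((2 * i + 1) / 2.0) * legval(x, c)

P = np.array([p(i, lam) for i in range(n)])

def normalized_coeffs(I):
    a = P @ I * dlam                      # a_i ~ integral of I(lambda) p_i(lambda)
    return a / (np.sqrt(2.0) * a[0])      # per (18)-(20): total power is sqrt(2)*a_0

I1 = 1.0 + 0.4 * lam                      # two invented spectra that differ...
I2 = 3.0 * (1.0 + 0.4 * lam)              # ...by an overall scale factor only
I3 = 1.0 - 0.4 * lam                      # a genuinely different spectral shape

d12 = np.linalg.norm(normalized_coeffs(I1) - normalized_coeffs(I2))
d13 = np.linalg.norm(normalized_coeffs(I1) - normalized_coeffs(I3))
print(d12 < 1e-9, d13 > 0.1)              # scale is factored out; shape is not
```

As in Section IV-C, the first normalized coefficient is always 1/√2, so the distance is effectively a euclidean distance in the remaining n−1 coordinates.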
For most real sensors it is reasonable to assume that A has the multivariate normal density given by

    p(A) = (2π)^{−n/2} |Σ_A|^{−1/2} exp[ −(1/2)(A − Ā)^T Σ_A^{−1} (A − Ā) ].    (35)

Contours of constant density are hyperellipsoids satisfying

    (A − Ā)^T Σ_A^{−1} (A − Ā) = C    (36)

where C is a constant. Thus, from Σ_A we can determine the scatter of the data in any direction. It follows that, in general, points in C^{n−1} will tend to exhibit greater dispersion in certain directions than in others. Thus the euclidean metric for C^{n−1} that is appropriate for noiseless measurements (see Section IV) will be inappropriate for real sensor measurements. This is because directions in C^{n−1} along which sensor noise is amplified can dominate the euclidean distance of (24). It is desirable to develop a color metric that can take into account this anisotropic property of C^{n−1} and thereby produce a distance between two normalized physical colors that is relatively independent of scatter due to noise. Define as a color metric the Mahalanobis distance [6] given by

    d(Â_1, Â_2) = sqrt( (Â_1 − Â_2)^T Σ_A^{−1} (Â_1 − Â_2) )    (37)

where Â_1 and Â_2 are normalized colors in C^{n−1}. The metric of (37) has the effect of normalizing by the variance in each direction, thus giving a more representative estimate of the physical distance between Â_1 and Â_2 than the euclidean distance. We observe that if Σ_A is unitary, then C^{n−1} is isotropic. As expected, for this case (37) simplifies to the scaled euclidean distance.

VI. EXPERIMENTS

This section gives some experimental results obtained using the metric of (37). The metric has been applied to an image of a color chart [14] illuminated by a lamp intended to simulate daylight. Four color filters (25, 47A, 57A, 96) are used to obtain information over different regions of the visible spectrum. A CCD camera having a linear response and equipped with an infrared blocking filter is used. A set of orthonormal basis functions that span the same space as the sensor functions f_0(λ), f_1(λ), f_2(λ), f_3(λ) is used. These basis functions proved to be the most effective in color discrimination experiments. Some theoretical justification for this choice of basis functions is given in the appendix. The matrix Σ_S is estimated from images of uniform regions. The filter transmission functions and camera response function are taken from manufacturer's specifications.

The chart is made up of 24 matte patches characterized by different spectral reflectance functions. Patches 19-24 are different neutral grays; the spectral reflectance functions of these patches differ only by a scalar multiple. From (1), these patches might correspond to a single material illuminated by a single source but viewed under different geometric conditions. We would like a color metric to identify these patches as corresponding to the same material. Ideally the color metric should distinguish the other 18 patches from each other and from the neutral patches.

Average sensor values s_0, s_1, s_2, s_3 for each patch are computed from the image and the distance (37) between each pair of patches is evaluated. Patch 24 (the darkest of the neutral grays) is not considered because the signal-to-noise ratio is too low for the metric to be reliable. Table I lists distances between those pairs of patches for which distance is smallest. All pairs of patches not listed have distances exceeding 5.0 × 10^{−2}. It is inappropriate to compare the metric of this correspondence with any of the previously mentioned metrics because each of the other techniques responds to intensity variation and therefore does not interpret the neutral patches as instances of a single material.

TABLE I
EXPERIMENTAL RESULTS

Patches    d (× 10^{−2})
21-23      0.05
22-23      0.06
21-22      0.09
19-20      0.13
20-21      0.33
19-21      0.21
19-23      0.47
19-22      0.58
20-23      0.64
20-22      0.16
1-2        1.30
9-15       2.82
8-13       3.19

Based on the properties of the Mahalanobis distance, any pair of patches separated by a distance less than D = 1.5 × 10^{−2} is taken to correspond to patches of the same material. Any pair of patches separated by a distance of D or more is taken to correspond to patches of different materials. Thus the metric assigns all pairs of neutral patches (19-23) to the same material class and also assigns patches 1 (dark skin) and 2 (light skin) to the same material class. Assigning patches 1 and 2 to the same class is not unreasonable since the corresponding spectral reflectance functions differ by only a scalar multiple if one ignores a small wiggle in the spectral reflectance of patch 2. Such small differences are difficult to distinguish from noise when using broad-band color filters like those used in these experiments. From Table I, we see that the color metric is able to assign to different material classes patch 9 (moderate red) and patch 15 (red) and also patch 8 (purplish blue) and patch 13 (blue). All other pairs of patches are easily distinguished by the color distance function.

APPENDIX

Given the recovery technique described in Section II, I examine how the choice of the sensors f_i(λ) and the basis functions b_j(λ) can affect the quality of the recovered approximation to I(λ). One result is that, given an orthonormal system of n basis functions, it is possible to choose n sensors such that the recovered approximation Ĩ(λ) is the best least squares approximation to I(λ) in the space spanned by the n basis functions. By itself this result is not of great use in practice since, given an arbitrary orthonormal system of basis functions, it is unlikely that the theoretically optimal sensors will be among those that are readily available. This result however does lead to some insight into the problem of selecting sensors and basis functions in practical situations.

Let φ_0(λ), φ_1(λ), ..., φ_{n−1}(λ) be an orthonormal set of basis functions in L²[λ1, λ2]. Then given n sensor measurements S, the procedure of Section II allows us to recover an approximation Ĩ(λ) of the form

    Ĩ(λ) = Σ_{0 ≤ i ≤ n−1} ã_i φ_i(λ).    (38)

The best approximation Î(λ) in the least-squares sense is the choice of the coefficients d_i that minimizes

    M = ∫_{λ1}^{λ2} [ I(λ) − Σ_{0 ≤ i ≤ n−1} d_i φ_i(λ) ]² dλ.    (39)

Equation (4) may be written

    s_i = (f_i, I)    (40)
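The behavior of the metric (37) on anisotropic noise can be sketched numerically. The covariance matrix and color coordinates below are invented (they are not the experimental Σ_A of Section VI); the sketch only illustrates how the Mahalanobis distance discounts directions in C^{n−1} where sensor noise is large.

```python
import numpy as np

# Sketch of the color metric (37): a Mahalanobis distance that normalizes
# each direction of C^{n-1} by the sensor-noise covariance Sigma_A.
Sigma_A = np.array([[4.0, 0.0],
                    [0.0, 0.25]])          # assumed: noise large along axis 0

def color_distance(A1, A2, Sigma):
    """d(A1, A2) = sqrt((A1 - A2)^T Sigma^{-1} (A1 - A2)), per (37)."""
    diff = A1 - A2
    return float(np.sqrt(diff @ np.linalg.inv(Sigma) @ diff))

A1 = np.array([0.0, 0.0])
A2 = np.array([2.0, 0.0])                  # large shift, but along the noisy axis
A3 = np.array([0.0, 1.0])                  # small shift, along the reliable axis

print(color_distance(A1, A2, Sigma_A))     # 1.0: discounted by large variance
print(color_distance(A1, A3, Sigma_A))     # 2.0: amplified by small variance
```

A euclidean metric would rank A2 as twice as far from A1 as A3; weighting by Σ_A^{−1} reverses that ranking, which is exactly the variance normalization the text describes.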
where (·, ·) denotes the L²[λ1, λ2] inner product. Thus if we define

    f_i(λ) = φ_i(λ),  i = 0, 1, ..., n−1    (41)

then we will have

    s_i = (φ_i, I).    (42)

Since the φ_i(λ) are orthonormal functions, K will be the identity matrix. From (10), the technique of Section II will recover the approximation Ĩ(λ)* given by

    Ĩ(λ)* = Σ_{0 ≤ i ≤ n−1} (φ_i, I) φ_i(λ).    (43)

It is easy to show [5] that the approximation given by (43) minimizes M and that the error is given by

    M* = ∫_{λ1}^{λ2} I²(λ) dλ − Σ_{0 ≤ i ≤ n−1} (φ_i, I)².    (44)

Therefore it is possible to choose sensors f_i(λ) such that the procedure of Section II computes the best least-squares approximation to an arbitrary function I(λ).

What is perhaps more interesting in practice is that the previous analysis implies that, given a set of n linearly independent sensor functions f_0(λ), f_1(λ), ..., f_{n−1}(λ), it is possible to find basis functions b_0(λ), b_1(λ), ..., b_{n−1}(λ) such that the recovered approximation Ĩ(λ) is the best least-squares approximation to I(λ) in the space spanned by the n basis functions. These basis functions will be defined such that they span the same space as f_0(λ), f_1(λ), ..., f_{n−1}(λ). Selecting basis functions in this way has the disadvantage that these basis functions may not accurately capture the properties of typical I(λ) functions. On the other hand, in a situation where a set of basis functions b_0(λ), b_1(λ), ..., b_{n−1}(λ) is known to provide a good fit to a certain class of I(λ) functions that are of interest, it may be advantageous to search the space of available sensors for an n dimensional set that as nearly as possible spans the same space as b_0(λ), b_1(λ), ..., b_{n−1}(λ). Selecting sensors in this way can help to minimize the effects of approximation error on the recovered Ĩ(λ) functions.

It is natural to ask how large n should be in order that the function Ĩ(λ) is a good approximation to I(λ). This depends of course on the properties of the function I(λ) and how well these properties are captured by the basis functions B(λ). Maloney [13] has done a thorough analysis of reflectance functions R(λ) for real materials by considering both empirical data and the fundamental physical processes that determine the properties of R(λ). He has concluded that at least five basis functions are required to accurately characterize R(λ). Unfortunately no corresponding analysis has been done for illuminants.

ACKNOWLEDGMENT

I would like to thank Tom Binford, W. E. Blanz, and Brian Wandell for many helpful suggestions and comments on a previous draft of this correspondence.

REFERENCES

[1] M. Ali, W. Martin, and J. Aggarwal, "Color-based computer analysis of aerial photographs," Computer Graphics and Image Processing, vol. 9, pp. 282-293, 1979.
[2] T. O. Binford, T. Levitt, and W. Mann, "Bayesian inference in model-based machine vision," Proc. Workshop on Uncertainty in Artificial Intell., 1987.
[3] M. Brill, "A device performing illuminant-invariant assessment of chromatic relations," J. Theoret. Biol., vol. 71, pp. 473-478, 1978.
[4] G. Buchsbaum, "A spatial processor model for object colour perception," J. Franklin Institute, vol. 310, pp. 1-26, 1980.
[5] R. Courant and D. Hilbert, Methods of Mathematical Physics, vol. 1, New York: Wiley, 1953.
[6] R. Duda and P. Hart, Pattern Classification and Scene Analysis. New York: Wiley, 1973.
[7] R. Haralick and G. Kelly, "Pattern recognition with measurement space and spatial clustering for multiple images," Proc. IEEE, vol. 57, pp. 654-665, Apr. 1969.
[8] G. Healey, "Using color for geometry insensitive segmentation," J. Opt. Soc. Amer. A, pp. 920-937, June 1989.
[9] G. Healey and T. O. Binford, "The role and use of color in a general vision system," Proc. ARPA Image Understanding Workshop, Univ. of Southern California, Los Angeles, Feb. 1987, pp. 599-613.
[10] B. K. P. Horn, "Exact reproduction of colored images," Comput. Vision Graphics and Image Processing, vol. 26, pp. 135-167, 1984.
[11] G. Klinker, S. Shafer, and T. Kanade, "Using a color reflection model to separate highlights from object color," Proc. First Int. Conf. Computer Vision, London, England, June 1987, pp. 145-150.
[12] L. Maloney and B. Wandell, "Color constancy: A method for recovering surface spectral reflectance," J. Opt. Soc. Amer. A, vol. 3, pp. 29-33, Jan. 1986.
[13] L. Maloney, "Evaluation of linear models of surface spectral reflectance with small numbers of parameters," J. Opt. Soc. Amer. A, vol. 3, pp. 1673-1683, Oct. 1986.
[14] C. McCamy, H. Marcus, and J. Davidson, "A color-rendition chart," J. Appl. Photographic Engineering, vol. 2, no. 3, Summer 1976.
[15] V. Nalwa, "Edge-detector resolution improvement by image interpolation," IEEE Trans. Pattern Anal. Machine Intell., vol. 9, no. 3, May 1987.
[16] R. Nevatia, "A color edge detector and its use in scene segmentation," IEEE Trans. Syst. Man Cybern., vol. SMC-7, no. 11, pp. 820-826, 1977.
[17] C. Novak and S. Shafer, "Color edge detection," in Image Understanding Research at CMU, Takeo Kanade, Ed., Proc. ARPA Image Understanding Workshop, Univ. of Southern California, Los Angeles, pp. 35-37, 1987.
[18] R. Ohlander, K. Price, and D. R. Reddy, "Picture segmentation using a recursive region splitting method," Comput. Graphics and Image Proces., vol. 8, pp. 313-333, 1978.
[19] G. S. Robinson, "Color edge detection," Optical Engineering, vol. 16, no. 5, pp. 479-484, Sept.-Oct. 1977.
[20] P. Sallstrom, "Colour and physics: Some remarks concerning the physical aspects of human colour vision," Univ. of Stockholm: Institute of Physics Report 73-09, 1973.
[21] A. Sarabi and J. Aggarwal, "Segmentation of chromatic images," Pattern Recognition, vol. 13, no. 6, pp. 417-427, 1981.
[22] S. Shafer, "Using color to separate reflection components," Univ. of Rochester, Rochester, NY, Tech. Rep. TR 136, Apr. 1984.
[23] B. Wandell, "The synthesis and analysis of color images," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-9, no. 1, Jan. 1987.
[24] J. Wolfe, "Hidden visual processes," Scientific American, pp. 94-103, Feb. 1983.

Response Profiles of Trajectory Detectors

MICHAEL R. M. JENKIN, MEMBER, IEEE, AND ALLAN D. JEPSON

Abstract—Characteristics of trajectory detectors are examined. A number of experiments to show the response of particular detectors to structure with different trajectories, disparities, and velocities is presented. Finally, the authors comment on some of the similarities and differences between the detectors presented here and models for "stereomotion" detectors present in biological vision systems.

I. INTRODUCTION

The human visual system contains visual pathways that encode the three-dimensional trajectory of objects. Models for these "stereomotion detectors" have been proposed by Beverley and

Manuscript received September 14, 1988; revised March 30, 1989. M. R. M. Jenkin was supported by NSERC Canada and York University. A. D. Jepson was supported by NSERC Canada.
M. R. M. Jenkin is with the Dept. of Computer Science, York University, 4700 Keele St., North York, Ontario, Canada M3J 1P3.
A. D. Jepson is with the Dept. of Computer Science, University of Toronto, Toronto, Ontario, Canada.
IEEE Log Number 8930339.
