http://www.merl.com
Matthias Zwicker
Hanspeter Pfister
Jeroen van Baar
Markus Gross
Abstract
In this paper we present a novel framework for direct volume rendering using a splatting approach based on elliptical Gaussian kernels. To avoid aliasing artifacts, we introduce the concept of a resampling filter combining a reconstruction with a low-pass kernel. Because of the similarity to Heckbert's EWA (elliptical weighted average) filter for texture mapping we call our technique EWA volume splatting. It provides high
image quality without aliasing artifacts or excessive blurring even with non-spherical
kernels. Hence it is suitable for regular, rectilinear, and irregular volume data sets.
Moreover, our framework introduces a novel approach to compute the footprint func-
tion. It facilitates efficient perspective projection of arbitrary elliptical kernels at very
little additional cost. Finally, we show that EWA volume reconstruction kernels can be
reduced to surface reconstruction kernels. This makes our splat primitive universal in
reconstructing surface and volume data.
This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in
whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such
whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research
Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions
of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment
of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved.
Figure 1: The forward mapping volume rendering pipeline.

Figure 2: Volume rendering. Left: Illustrating the volume rendering equation in 2D. Right: Approximations in typical splatting algorithms.

As an overview, we briefly describe the coordinate systems and transformations that are relevant for our technique. The volume data is initially given in object coordinates. To render the data from an arbitrary viewpoint, it is first mapped to camera space using the viewing transformation. We deal with the effect of this transformation in Section 4.3. The camera coordinate system is defined such that its origin is at the center of projection.

We further transform the data to ray space, which is introduced in Section 3.2. Ray space is a non-Cartesian coordinate system that enables an easy formulation of the volume rendering equation. In ray space, the viewing rays are parallel to a coordinate axis, facilitating analytical integration of the volume function. We present the transformation from camera to ray space in Section 4.4; it is a key element of our technique. Since its purpose is similar to the projective transform used in rendering pipelines such as OpenGL, it is also called the projective mapping.

Evaluating the volume rendering equation results in a 2D image in screen space. In a final step, this image is transformed to viewport coordinates. Focusing on the essential aspects of our technique, we do not cover the viewport transformation in the following explanations; however, it can easily be incorporated in an implementation. Moreover, we do not discuss volume classification and shading in a forward mapping pipeline, but refer to [13] or [20] for a thorough discussion.

3.2 Splatting Algorithms

We review the low albedo approximation of the volume rendering equation [5, 12] as used for fast, direct volume rendering [19, 6, 13, 8]. The left part of Figure 2 illustrates the corresponding situation in 2D. Starting from this form of the rendering equation, we discuss several simplifying assumptions leading to the well-known splatting formulation. Because of their efficiency, splatting algorithms [19, 13] belong to the most popular forward mapping volume rendering techniques.

We slightly modify the conventional notation, introducing our concept of ray space. We denote a point in ray space by a column vector of three coordinates x = (x_0, x_1, x_2)^T. Given a center of projection and a projection plane, the coordinates x_0 and x_1 specify a point on the projection plane and hence a viewing ray, which we abbreviate as x̂, while x_2 specifies the Euclidean distance from the center of projection along that ray.

The volume rendering equation describes the light intensity I_λ(x̂) at wavelength λ that reaches the center of projection along the ray x̂ with length L:

    I_\lambda(\hat{x}) = \int_0^L c_\lambda(\hat{x}, \xi)\, g(\hat{x}, \xi)\, e^{-\int_0^\xi g(\hat{x}, \mu)\, d\mu}\, d\xi,    (1)

where g(x) is the extinction function that defines the rate of light occlusion, and c_λ(x) is an emission coefficient. The exponential term can be interpreted as an attenuation factor. Finally, the product c_λ(x)g(x) is also called the source term [12], describing the light intensity scattered in the direction of the ray x̂ at the point x_2.

Now we assume that the extinction function is given as a weighted sum of coefficients g_k and reconstruction kernels r_k(x). This corresponds to a physical model where the volume consists of individual particles that absorb and emit light. Hence the extinction function is:

    g(x) = \sum_k g_k r_k(x).    (2)

In this mathematical model, the reconstruction kernels r_k(x) reflect position and shape of individual particles. The particles can be irregularly spaced and may differ in shape, hence the representation in (2) is not restricted to regular data sets. We substitute (2) into (1), yielding:

    I_\lambda(\hat{x}) = \sum_k \int_0^L c_\lambda(\hat{x}, \xi)\, g_k r_k(\hat{x}, \xi) \prod_j e^{-g_j \int_0^\xi r_j(\hat{x}, \mu)\, d\mu}\, d\xi.    (3)

To compute this function numerically, splatting algorithms make several simplifying assumptions, illustrated in the right part of Figure 2. Usually the reconstruction kernels r_k(x) have local support.
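As a concrete illustration of the particle model in (2), the following Python sketch (our own toy example, not part of the paper; the kernel weights, positions, and variance are made up) evaluates the extinction function as a weighted sum of radially symmetric Gaussian reconstruction kernels:

```python
import numpy as np

def gaussian_3d(x, center, variance):
    # radially symmetric 3D reconstruction kernel r_k, normalized to unit integral
    d = x - center
    return np.exp(-0.5 * d.dot(d) / variance) / (2.0 * np.pi * variance) ** 1.5

def extinction(x, weights, centers, variance):
    # extinction function g(x) = sum_k g_k r_k(x), Equation (2)
    return sum(g_k * gaussian_3d(x, c_k, variance)
               for g_k, c_k in zip(weights, centers))

# two hypothetical particles on a regular grid with unit spacing
centers = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
weights = [0.8, 0.5]
print(extinction(np.array([0.5, 0.0, 0.0]), weights, centers, 0.25))  # ≈ 0.40
```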
The splatting approach assumes that these local support areas do not overlap along a ray x̂, and that the reconstruction kernels are ordered front to back. We also assume that the emission coefficient is constant in the support of each reconstruction kernel along a ray, hence we use the notation c_k(x̂) = c_λ(x̂, x_2), where (x̂, x_2) is in the support of r_k. Moreover, we approximate the exponential function with the first two terms of its Taylor expansion, thus e^{−x} ≈ 1 − x. Finally, we ignore self-occlusion. Exploiting these assumptions, we arrive at the splatting equation:

    I_\lambda(\hat{x}) = \sum_k c_k(\hat{x})\, g_k q_k(\hat{x}) \prod_{j=0}^{k-1} \left(1 - g_j q_j(\hat{x})\right),    (4)

where q_k(x̂) denotes an integrated reconstruction kernel, hence:

    q_k(\hat{x}) = \int_{\mathbb{R}} r_k(\hat{x}, x_2)\, dx_2.    (5)

Equation (4) is the basis for all splatting algorithms. Westover [19] introduced the term footprint function for the integrated reconstruction kernels q_k. The footprint function is a 2D function that specifies the contribution of a 3D kernel to each point on the image plane. Integrating a volume along a viewing ray is analogous to projecting a point on a surface onto the image plane, hence the coordinates x̂ = (x_0, x_1)^T are also called screen coordinates, and we say that I_λ(x̂) and q_k(x̂) are defined in screen space.

Splatting is attractive because of its efficiency, which it derives from the use of pre-integrated reconstruction kernels. Therefore, during volume integration each sample point along a viewing ray is computed using a 2D convolution. In contrast, ray-casting methods require a 3D convolution for each sample point. This provides splatting algorithms with an inherent advantage in rendering efficiency. Moreover, splatting facilitates the use of higher quality kernels with a larger extent than the trilinear kernels typically employed by ray-casting. On the other hand, basic splatting methods are plagued by artifacts because of incorrect visibility determination. This problem is unavoidably introduced by the assumption that the reconstruction kernels do not overlap and are ordered back to front. It has been successfully addressed by several authors, as mentioned in Section 2. In contrast, our main contribution is a novel splat primitive that provides high quality antialiasing and efficiently supports elliptical kernels. We believe that our novel primitive is compatible with all state-of-the-art algorithms.

4 The EWA Volume Resampling Filter

4.1 Aliasing in Volume Splatting

Aliasing is a fundamental problem of any rendering algorithm, arising whenever a rendered image or a part of it is sampled to a discrete raster grid, i.e., the pixel grid. Aliasing leads to visual artifacts such as jagged silhouette edges and Moiré patterns in textures. Typically, these problems become most disturbing during animations. From a signal processing point of view, aliasing is well understood: before a continuous function is sampled to a regular sampling grid, it has to be band-limited to respect the Nyquist frequency of the grid. This guarantees that there are no aliasing artifacts in the sampled image. In this section we provide a systematic analysis of how to band-limit the splatting equation.

The splatting equation (4) represents the output image as a continuous function I_λ(x̂) in screen space. In order to properly sample this function to a discrete output image without aliasing artifacts, it has to be band-limited to match the Nyquist frequency of the discrete image. In theory, we achieve this band-limitation by convolving I_λ(x̂) with an appropriate low-pass filter h(x̂), yielding the antialiased splatting equation:

    (I_\lambda \otimes h)(\hat{x}) = \int_{\mathbb{R}^2} \sum_k c_k(\eta)\, g_k q_k(\eta) \prod_{j=0}^{k-1} \left(1 - g_j q_j(\eta)\right) h(\hat{x} - \eta)\, d\eta.    (6)

In practice, this function is evaluated only at discrete positions, i.e., the pixel centers. Therefore we cannot evaluate (6), which requires that I_λ(x̂) is available as a continuous function.

However, we make two simplifying assumptions to rearrange the integral in (6). This leads to an approximation that can be evaluated efficiently. First, we assume that the emission coefficient is approximately constant in the support of each footprint function q_k, hence c_k(x̂) ≈ c_k for all x̂ in the support area. Together with the assumption that the emission coefficient is constant in the support of each reconstruction kernel along a viewing ray, this means that the emission coefficient is constant in the complete 3D support of each reconstruction kernel. In other words, we ignore the effect of shading for antialiasing. Note that this is the common approach for antialiasing surface textures as well.

Additionally, we assume that the attenuation factor has an approximately constant value o_k in the support of each footprint function. Hence:

    \prod_{j=0}^{k-1} \left(1 - g_j q_j(\hat{x})\right) \approx o_k    (7)

for all x̂ in the support area. A variation of the attenuation factor indicates that the footprint function is partially covered by a more opaque region in the volume data. Therefore this variation can be interpreted as a "soft" edge. Ignoring such situations means that we cannot prevent edge aliasing. Again, this is similar to rendering surfaces, where edge and texture aliasing are handled by different algorithms as well.

Exploiting these simplifications, we can rewrite (6) to:

    (I_\lambda \otimes h)(\hat{x}) \approx \sum_k c_k o_k g_k \int_{\mathbb{R}^2} q_k(\eta)\, h(\hat{x} - \eta)\, d\eta = \sum_k c_k o_k g_k (q_k \otimes h)(\hat{x}).

Following Heckbert's terminology [4], we call:

    \rho_k(\hat{x}) = (q_k \otimes h)(\hat{x})    (8)

an ideal resampling filter, combining a footprint function q_k and a low-pass kernel h. Hence, we can approximate the antialiased splatting equation (6) by replacing the footprint function q_k in the original splatting equation (4) with the resampling filter ρ_k. This means that instead of band-limiting the output function I_λ(x̂) directly, we band-limit each footprint function separately. Under the assumptions described above, we get a splatting algorithm that produces a band-limited output function respecting the Nyquist frequency of the raster image, therefore avoiding aliasing artifacts. Remember that the reconstruction kernels are integrated in ray space, resulting in footprint functions that vary significantly in size and shape across the volume. Hence the resampling filter in (8) is strongly space variant.

Swan et al. presented an antialiasing technique for splatting [17] that is based on a uniform scaling of the reconstruction kernels to band-limit the extinction function. Their technique produces results similar to our method for radially symmetric kernels. However, for more general kernels, e.g., elliptical kernels, uniform scaling is a poor approximation of ideal low-pass filtering. Aliasing artifacts cannot be avoided without introducing additional blurriness. On the other hand, our method provides non-uniform scaling in these cases, leading to superior image quality as illustrated in Section 7. Moreover, our analysis above shows that band-limiting the extinction function does not guarantee aliasing free images. Because of shading and edges, frequencies above the Nyquist limit persist. However, these effects are not discussed in [17].
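To make the structure of these equations concrete, here is a minimal Python sketch of the per-pixel compositing in the splatting equation (4) (our own illustration, not from the paper; substituting the resampling filter ρ_k of (8) for q_k leaves the loop unchanged):

```python
def splat_intensity(emissions, weights, footprints):
    """Evaluate I = sum_k c_k g_k q_k * prod_{j<k} (1 - g_j q_j) at one
    pixel; the kernels are assumed sorted front to back."""
    intensity, transparency = 0.0, 1.0
    for c_k, g_k, q_k in zip(emissions, weights, footprints):
        intensity += c_k * g_k * q_k * transparency  # contribution of kernel k
        transparency *= 1.0 - g_k * q_k              # Taylor-approximated attenuation
    return intensity

# two kernels with full footprint overlap at this pixel
print(splat_intensity([1.0, 1.0], [0.5, 0.5], [1.0, 1.0]))  # 0.75
```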
4.2 Elliptical Gaussian Kernels

We choose elliptical Gaussians as reconstruction kernels and low-pass filters, since they provide certain features that are crucial for our technique: Gaussians are closed under affine mappings and convolution, and integrating a 3D Gaussian along one coordinate axis results in a 2D Gaussian. These properties enable us to analytically compute the resampling filter in (8) as a single 2D Gaussian, as will be shown below. In this section, we summarize the mathematical features of the Gaussians that are exploited in our derivation in the following sections. More details on Gaussians can be found in Heckbert's master's thesis [4].

We define an elliptical Gaussian G_V(x − p) centered at a point p with a variance matrix V as:

    G_V(x - p) = \frac{1}{2\pi |V|^{1/2}}\, e^{-\frac{1}{2}(x - p)^T V^{-1} (x - p)},    (9)

where |V| is the determinant of V. In this form, the Gaussian is normalized to a unit integral. In the case of volume reconstruction kernels, G_V is a 3D function, hence V is a symmetric 3×3 matrix and x and p are column vectors (x_0, x_1, x_2)^T and (p_0, p_1, p_2)^T, respectively. We can easily apply an arbitrary affine mapping u = Φ(x) to this Gaussian. Let us define the affine mapping as Φ(x) = Mx + c, where M is a 3×3 matrix and c is a vector (c_0, c_1, c_2)^T. We substitute x = Φ^{−1}(u) in (9), yielding:

    G_V(\Phi^{-1}(u) - p) = \frac{1}{|M^{-1}|}\, G_{MVM^T}(u - \Phi(p)).    (10)

Moreover, convolving two Gaussians with variance matrices V and Y results in another Gaussian with variance matrix V + Y:

    (G_V \otimes G_Y)(x - p) = G_{V+Y}(x - p).    (11)

Finally, integrating a 3D Gaussian G_V along one coordinate axis yields a 2D Gaussian G_{V̂}, hence:

    \int_{\mathbb{R}} G_V(x - p)\, dx_2 = G_{\hat{V}}(\hat{x} - \hat{p}),    (12)

where x̂ = (x_0, x_1)^T and p̂ = (p_0, p_1)^T. The 2×2 variance matrix V̂ is easily obtained from the 3×3 matrix V by skipping the third row and column:

    V = \begin{pmatrix} a & b & c \\ b & d & e \\ c & e & f \end{pmatrix}, \qquad \hat{V} = \begin{pmatrix} a & b \\ b & d \end{pmatrix}.    (13)

In the following sections, we describe how to map arbitrary elliptical Gaussian reconstruction kernels from object to ray space. Our derivation results in an analytic expression for the kernels in ray space r_k(x) as in Equation (2). We will then be able to analytically integrate the kernels according to Equation (5) and to convolve the footprint function q_k with a Gaussian low-pass filter h as in Equation (8), yielding an elliptical Gaussian resampling filter ρ_k.

4.3 The Viewing Transformation

The reconstruction kernels are initially given in object space, which has coordinates t = (t_0, t_1, t_2)^T. Let us denote the Gaussian reconstruction kernels in object space by r_k''(t) = G_{V_k''}(t − t_k), where t_k are the voxel positions in object space. For regular volume data sets, the variance matrices V_k'' are usually identity matrices. For rectilinear data sets, they are diagonal matrices where the matrix elements contain the squared distances between voxels along each coordinate axis. Curvilinear and irregular grids have to be resampled to a more regular structure in general. For example, Mao et al. [11] describe a stochastic sampling approach with a method to compute the variance matrices for curvilinear volumes.

We denote camera coordinates by a vector u = (u_0, u_1, u_2)^T. Object coordinates are transformed to camera coordinates using an affine mapping u = φ(t), called the viewing transformation. It is defined by a matrix W and a translation vector d as φ(t) = Wt + d. We transform the reconstruction kernels G_{V_k''}(t − t_k) to camera space by substituting t = φ^{−1}(u) and using Equation (10):

    G_{V_k''}(\varphi^{-1}(u) - t_k) = \frac{1}{|W^{-1}|}\, G_{V_k'}(u - u_k) = r_k'(u),    (14)

where u_k = φ(t_k) is the center of the Gaussian in camera coordinates and r_k'(u) denotes the reconstruction kernel in camera space. According to (10), the variance matrix in camera coordinates V_k' is given by V_k' = W V_k'' W^T.

4.4 The Projective Transformation

The projective transformation converts camera coordinates to ray coordinates as illustrated in Figure 3. Camera space is defined such that the origin of the camera coordinate system is at the center of projection and the projection plane is the plane u_2 = 1. Camera space and ray space are related by the mapping x = m(u). Using the definition of ray space from Section 3, m(u) and its inverse m^{−1}(x) are therefore given by:

    \begin{pmatrix} x_0 \\ x_1 \\ x_2 \end{pmatrix} = m(u) = \begin{pmatrix} u_0 / u_2 \\ u_1 / u_2 \\ \|(u_0, u_1, u_2)^T\| \end{pmatrix},    (15)

    \begin{pmatrix} u_0 \\ u_1 \\ u_2 \end{pmatrix} = m^{-1}(x) = \begin{pmatrix} x_0 / l \cdot x_2 \\ x_1 / l \cdot x_2 \\ 1 / l \cdot x_2 \end{pmatrix},    (16)

where l = ||(x_0, x_1, 1)^T||.

Unfortunately, these mappings are not affine, so we cannot apply Equation (10) directly to transform the reconstruction kernels from camera to ray space. To solve this problem, we introduce the local affine approximation m_{u_k} of the projective transformation. It is defined by the first two terms of the Taylor expansion of m at the point u_k:

    m_{u_k}(u) = x_k + J_{u_k}(u - u_k),    (17)

where x_k = m(u_k) is the center of a Gaussian in ray space. The Jacobian J_{u_k} is given by the partial derivatives of m at the point u_k:

    J_{u_k} = \frac{\partial m}{\partial u}(u_k).    (18)

In the following discussion, we omit the subscript u_k, hence m(u) denotes the local affine approximation (17). We substitute u = m^{−1}(x) in (14) and apply Equation (10) to map the reconstruction kernels to ray space, yielding the desired expression for r_k(x):

    r_k(x) = \frac{1}{|W^{-1}|}\, G_{V_k'}(m^{-1}(x) - u_k) = \frac{1}{|W^{-1}||J^{-1}|}\, G_{V_k}(x - x_k),    (19)

where V_k is the variance matrix in ray coordinates. Using (10), it is given by:

    V_k = J V_k' J^T = J W V_k'' W^T J^T.    (20)
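The transformation chain of Equations (14) through (19) is straightforward to prototype. The numpy sketch below (our own code; function names are our invention) evaluates the Jacobian of m(u) analytically and maps an object-space variance matrix to ray space:

```python
import numpy as np

def jacobian(u):
    # Jacobian J of the camera-to-ray-space mapping m(u) of Equation (15),
    # evaluated at the voxel center u, cf. Equation (18)
    u0, u1, u2 = u
    l = np.linalg.norm(u)
    return np.array([[1.0 / u2, 0.0, -u0 / u2**2],
                     [0.0, 1.0 / u2, -u1 / u2**2],
                     [u0 / l, u1 / l, u2 / l]])

def ray_space_variance(V_obj, W, u):
    # V' = W V'' W^T (Equation (14)), then V = J V' J^T via the
    # local affine approximation, as in Equations (19) and (20)
    J = jacobian(u)
    return J @ (W @ V_obj @ W.T) @ J.T

# spherical kernel of a regular data set, identity viewing rotation,
# voxel on the optical axis at distance 4
V = ray_space_variance(np.eye(3), np.eye(3), np.array([0.0, 0.0, 4.0]))
print(np.diag(V))  # ≈ [0.0625, 0.0625, 1.0]: the footprint shrinks with distance
```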
Figure 6: Rendering surface kernels. Left to right: 2D to 3D parameterization, 3D to 2D projection, 2D to 2D compound mapping.

    v_{11} = t_{10}^2 + t_{11}^2 + \frac{t_{12}^2}{s^2}
    v_{12} = v_{21} = t_{10} t_{20} + t_{11} t_{21} + \frac{t_{12} t_{22}}{s^2}
    v_{22} = t_{20}^2 + t_{21}^2 + \frac{t_{22}^2}{s^2},

where we denote an element of T_{3D} by t_{ij}.

We compute the 2D Gaussian footprint function by integrating the reconstruction kernel. According to (13), its 2D variance matrix is obtained by skipping the third row and column in V. As s approaches infinity, we therefore get the following 2D variance matrix:

6 Implementation

We implemented a volume rendering algorithm based on the EWA splatting equation. Our implementation is embedded in the VTK (Visualization Toolkit) framework [16]. We did not optimize our code for rendering speed. We use a sheet buffer to first accumulate splats from planes in the volume that are most parallel to the projection plane [19]. In a second step, the final image is computed by compositing the sheets back to front. Shading is performed using the gradient estimation functionality provided by VTK and the Phong illumination model.

We summarize the main steps that are required to compute the EWA splat for each voxel:

1: for each voxel k {
2:   compute camera coords. u[k];
3:   compute the Jacobian J;
4:   compute the variance matrix V[k];
5:   project u[k] to screen coords. x_hat[k];
6:   setup the resampling filter rho[k];
7:   rasterize rho[k];
8: }

First, we compute the camera coordinates u_k of the current voxel k by applying the viewing transformation to the voxel center. Using u_k, we then evaluate the Jacobian J as given in Equation (25). In line 4, we transform the Gaussian reconstruction kernel from object to ray space. This transformation is implemented by Equation (20), and it results in the 3×3 variance matrix V_k of the Gaussian in ray space. Remember that W is the rotational part of the viewing transformation, hence it is typically orthonormal. Moreover, for spherical kernels, V_k'' is the identity matrix. In this case, evaluation of Equation (20) can be simplified significantly. Next, we project the voxel center from camera space to the screen by performing a perspective division. In line 6, we set up the resampling filter: its 2×2 variance matrix is obtained by skipping the third row and column of V_k and adding a 2×2 identity matrix for the low-pass filter. We also compute the determinants 1/|J^{−1}| and 1/|W^{−1}| that are used as normalization factors.

Finally, we rasterize the resampling filter in line 7. As can be seen from the definition of the elliptical Gaussian (9), we also need the inverse of the variance matrix, which is called the conic matrix. Let us denote the 2×2 conic matrix of the resampling filter by Q. We define the radial index r(x) = x^T Q x, where x = (x_0, x_1)^T = x̂ − x̂_k. Note that the contours of the radial index, i.e., r = const., are concentric ellipses; for circular kernels, r is the squared distance to the circle center. The exponential function in (9) can now be written as e^{−r/2}. We store this function in a 1D lookup table. To evaluate the Gaussian at a pixel, we compute the radial index, which is biquadratic in x, and look it up in the table. Heckbert provides pseudo-code of the rasterization algorithm in [4].

7 Results

The EWA resampling filter has a number of useful properties, as illustrated in Figure 8. When the mapping from camera to ray space minifies the volume, size and shape of the resampling filter are dominated by the low-pass filter, as in the left column of Figure 8. In the middle column, the volume is magnified and the resampling filter is dominated by the reconstruction kernel. Since the resampling filter unifies a reconstruction kernel and a low-pass filter, it provides a smooth transition between magnification and minification. Moreover, the reconstruction kernel is scaled anisotropically in situations where the volume is stretched in one direction and shrunk in the other, as shown in the right column. In the bottom row, we show the filter shapes resulting from uniformly scaling the reconstruction kernel to avoid aliasing, as proposed by Swan et al. [17]. Essentially, the reconstruction kernel is enlarged such that its minor radius is at least as long as the minor radius of the low-pass filter. For spherical reconstruction kernels, or circular footprint functions, this is basically equivalent to the EWA resampling filter. However, for elliptical footprint functions, uniform scaling leads to overly blurred images in the direction of the major axis of the ellipse.

Figure 8: Properties of the EWA volume resampling filter. Rows: footprint function, low-pass filter, resampling filter (footprint ⊗ low-pass), and Swan's reconstruction kernel. Columns: minification, magnification, anisotropic minification-magnification.
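The behavior summarized in Figure 8 can be reproduced with a small numpy experiment (our own illustration, assuming a unit-variance Gaussian low-pass filter, i.e., h = G_I): by Equation (11), the EWA filter adds the identity to the footprint's 2×2 variance matrix, whereas uniform scaling in the spirit of Swan et al. [17] enlarges both radii:

```python
import numpy as np

def ewa_variance(V_hat):
    # convolution with a unit-variance Gaussian low-pass filter adds the
    # 2x2 identity to the footprint variance matrix, cf. Equation (11)
    return V_hat + np.eye(2)

def uniform_scaling_variance(V_hat):
    # enlarge the kernel uniformly until its minor radius reaches the
    # low-pass filter radius (the approach of Swan et al. [17])
    minor = np.sqrt(np.linalg.eigvalsh(V_hat))[0]
    scale = max(1.0, 1.0 / minor)
    return scale**2 * V_hat

# strongly elliptical footprint: minor radius 0.2, major radius 2.0
V_hat = np.diag([0.04, 4.0])
print(np.sqrt(np.linalg.eigvalsh(ewa_variance(V_hat))))              # ≈ [1.02, 2.24]
print(np.sqrt(np.linalg.eigvalsh(uniform_scaling_variance(V_hat))))  # ≈ [1.0, 10.0]
```

Along the major axis, uniform scaling inflates the radius from 2.0 to 10.0, while the EWA filter grows it only to about 2.24; this is the excessive blur along the major axis discussed above.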
8 Conclusions

We present a new splat primitive for volume rendering, called the EWA volume resampling filter. Our primitive provides high quality antialiasing for splatting algorithms, combining an elliptical Gaussian reconstruction kernel with a Gaussian low-pass filter. We use a novel approach of computing the footprint function. Exploiting the mathematical features of 2D and 3D Gaussians, our framework efficiently handles arbitrary elliptical reconstruction kernels and perspective projection. Therefore, our primitive is suitable to render regular, rectilinear, curvilinear, and irregular volume data sets. Finally, we derive a formulation of the EWA surface reconstruction kernel, which is equivalent to Heckbert's EWA texture filter. Hence we call our primitive universal, facilitating the reconstruction of surface and volume data.

We have not yet investigated whether other kernels besides elliptical Gaussians may be used with this framework. In principle, a resampling filter could be derived from any function that allows the analytic evaluation of the operations described in Section 4.2 and that is a good approximation of an ideal low-pass filter.

To achieve interactive frame rates, we are currently investigating the use of graphics hardware to rasterize EWA splats as texture mapped polygons. We also plan to use sheet-buffers that are parallel to the image plane to eliminate popping artifacts. To render non-rectilinear datasets, we are investigating fast back-to-front sorting algorithms. Furthermore, we want to experiment with our splat primitive in a post-shaded volume rendering pipeline. The derivative of the EWA resampling filter could be used as a band-limited gradient kernel, hence avoiding aliasing caused by shading for noisy volume data. Finally, we want to exploit the ability of our framework to render surface splats. In conjunction with voxel culling algorithms, we believe it is useful for real-time iso-surface rendering.

References

[10] X. Mao. Splatting of Non Rectilinear Volumes Through Stochastic Resampling. IEEE Transactions on Visualization and Computer Graphics, 2(2):156–170, June 1996.
[11] X. Mao, L. Hong, and A. Kaufman. Splatting of Curvilinear Volumes. In IEEE Visualization '95 Proc., pages 61–68, October 1995.
[12] N. Max. Optical Models for Direct Volume Rendering. IEEE Transactions on Visualization and Computer Graphics, 1(2):99–108, June 1995.
[13] K. Mueller, T. Moeller, and R. Crawfis. Splatting Without the Blur. In Proceedings of the 1999 IEEE Visualization Conference, pages 363–370, San Francisco, CA, October 1999.
[14] K. Mueller and R. Crawfis. Eliminating Popping Artifacts in Sheet Buffer-Based Splatting. In IEEE Visualization '98, pages 239–246, October 1998.
[15] K. Mueller and R. Yagel. Fast Perspective Volume Rendering with Splatting by Utilizing a Ray-Driven Approach. In IEEE Visualization '96, pages 65–72, October 1996.
[16] W. Schroeder, K. Martin, and B. Lorensen. The Visualization Toolkit. Prentice Hall, second edition, 1998.
[17] J. E. Swan, K. Mueller, T. Möller, N. Shareef, R. Crawfis, and R. Yagel. An Anti-Aliasing Technique for Splatting. In Proceedings of the 1997 IEEE Visualization Conference, pages 197–204, Phoenix, AZ, October 1997.
[18] L. Westover. Interactive Volume Rendering. In C. Upson, editor, Proceedings of the Chapel Hill Workshop on Volume Visualization, pages 9–16, Chapel Hill, NC, May 1989.
[19] L. Westover. Footprint Evaluation for Volume Rendering. In Computer Graphics, Proceedings of SIGGRAPH 90, pages 367–376, August 1990.
[20] C. Wittenbrink, T. Malzbender, and M. Goss. Opacity-Weighted Color Interpolation for Volume Sampling. In IEEE Symposium on Volume Visualization, pages 431–444, 1998.
[21] M. Zwicker, H. Pfister, J. van Baar, and M. Gross. Surface Splatting. In Computer Graphics, SIGGRAPH 2001 Proceedings, Los Angeles, CA, July 2001. Accepted for publication.
9 Acknowledgments
Many thanks to Lisa Sobierajski Avila for her help with our implementation of EWA volume splatting in VTK. We would also like to
thank Paul Heckbert for his encouragement and helpful comments.
Thanks to Chris Wren for his supporting role in feeding us, and to
Jennifer Roderick and Martin Roth for proofreading the paper.