
Generating Procedural Clouds in Real Time on 3D Hardware


By Kim Pallister, Intel Corporation
Look up… waaayyy up…
As the introduction to this article, this is where I tell you about how I recently spent some time looking
up at cloud formations in the sky and became inspired to write software to emulate what I saw. That would
be a nice introduction, but it would be untrue. The truth is that I became inspired after seeing a procedural
cloud demo on Hugo Elias' website. (See the references at the end of the article for a link to the demo.) The
thing that interested me about the demo was that so much of the work was 'busy work', filling in
pixels and blending stuff together, work that seemed to me to be just the kind of work that graphics cards
were put on earth to do.

I'll explain how I implemented the procedural clouds using 3D hardware, and some ways to make the
technique scale across a range of processor and graphics card performance and capabilities. First, though,
I'd like to share some observations about clouds and some of their properties. (Yes, I did eventually go
look at the sky before putting together the demo.) We won't be able to model all of these things, but it's
worth noting them to give us a starting point. Fortunately, since I live in Oregon, there was no shortage of
clouds to snap some pictures of. I've included a few below, and made some notes on each.

Figure 1A
• The sky behind the clouds has some color to it. Usually blue, though it could be black at night. (See
Figure 1A.) (Sunsets and sunrises have gradients that go from yellow/red to blue, as we'll see in a later
photo.)
• Thin clouds are white. As they get thicker, they turn gray. This isn't just srcalpha-invsrcalpha
transparency, but rather indicates that there are two things that have to be modeled: the amount the
background is obscured, and the amount of light the clouds emit toward the viewer (light
reflected/refracted from all directions).
• Clouds are, for the most part, randomly turbulent. The shape of the patterns can change greatly, often
with altitude. Low-lying clouds tend to be thick and billowing, while higher-level clouds tend to be thin
and uniform.
• Clouds at lower altitudes tend to obscure light from above more than reflect light from below, and are
also usually thicker, and thus darker. Clouds at higher altitudes are nearly always whiter.
Figure 1B
• As you turn toward the sun, the clouds tend to be brighter.
• There are sometimes visible transitions in cloud patterns, such as along a weather front.
• Atmospheric haze makes the sky and clouds in the distance fade out to a similar color.

Figure 1C
• Clouds have thickness, and thus take on light. Those away from the sun tend to be brighter on one
side and darker on the other.
• Clouds at sunrise and sunset tend to reflect light from below more than transmit light from above.
• At sunrise and sunset, more colors of the spectrum are reflected, so the sky color tends to be a
gradient from blue to yellow or red, and the clouds tend to be lit with orange or red light.
• The sky's cloud layer isn't a plane (or a cube for that matter), but rather a sphere. We just look at it
from close to its circumference, and so often mistake it for a plane.

We're certainly a long way from modeling all of these things. Also, this doesn't begin to list the
observations we'd make if we were able to fly up into and through the clouds. Limiting ourselves to a view
from the ground, we'll see how many we can model in a real-time application later. First, some background
on the techniques we'll use.

Making Noise
If you've been reading this publication and others on a regular basis, you've undoubtedly heard talk of
procedural textures, and in particular, procedural textures based on Perlin noise. Perlin noise refers to the
technique devised by Ken Perlin of mimicking various natural phenomena by adding together noise
(random numbers) of different frequencies and amplitudes. The basic idea is to generate a bunch of random
numbers using a seeded random number generator (seeded in order to be able to reproduce the same results
given the same seed), do “some stuff” to them, and make them look like everything from smoke to marble
to wood grain. Sounds like it shouldn't work, but it does.

This is best illustrated with an example. Take the example of a rocky landscape. When looking at its
altitude variation, low frequencies exist (rolling hills), as well as medium frequencies (boulders, rocks) and
very high frequencies (pebbles). By creating a random pattern at each of these frequencies, and specifying
their amplitude (e.g. mountains are between 0 and 10,000 feet, boulders between 0 and 100 feet, pebbles
under 2 inches), we can add them together to get the landscape. (See Figure 2 for a one-dimensional
example of this.)

Figure 2 - Waves of different frequencies and amplitudes being summed together. In this case the result is
a regular function because the input functions are regular.

Taking the one-dimensional example to two dimensions, we'd get a bitmap that could represent a
heightmap, or in our case, cloud thickness. Taking it to three dimensions, we could either be representing a
volume texture, or the same two-dimensional example animated over time.

Aside from the random number generator, the other thing necessary is a way to interpolate points between
sample values. Ideally, we'd want to use a cubic interpolation to get curves like those in the graph, but we
are going to use a simple linear interpolation. This won't look as good, but will let us use the hardware to do
it.
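Pulling those pieces together, here is a minimal C++ sketch of seeded, linearly interpolated value noise summed over octaves. The function names and constants are mine, for illustration only; the demo itself pushes this per-pixel work onto the graphics card, as described below.

// Minimal sketch of 1D fractal noise: seeded value noise, linearly
// interpolated, summed over octaves. Names and constants are illustrative.
#include <cmath>

// Repeatable pseudo-random value in [0,1] for integer position x and a seed.
float Noise1D(int x, unsigned seed)
{
    unsigned n = (unsigned)x * 374761393u + seed * 668265263u;
    n = (n ^ (n >> 13)) * 1274126177u;
    return (float)(n & 0xFFFF) / 65535.0f;
}

// Linearly interpolated noise at a real-valued position.
float SmoothNoise1D(float x, unsigned seed)
{
    int   i = (int)std::floor(x);
    float t = x - (float)i;
    return Noise1D(i, seed) * (1.0f - t) + Noise1D(i + 1, seed) * t;
}

// Sum several octaves: each one doubles the frequency and halves the amplitude.
float FractalNoise1D(float x, unsigned seed, int octaves)
{
    float sum = 0.0f, amplitude = 0.5f, frequency = 1.0f;
    for (int i = 0; i < octaves; ++i)
    {
        sum       += amplitude * SmoothNoise1D(x * frequency, seed + i);
        amplitude *= 0.5f;
        frequency *= 2.0f;
    }
    return sum; // stays in roughly the 0..1 range
}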

Summary of the procedural cloud technique


The idea behind the technique is to generate a number of octaves of noise (an octave being an interval
between frequencies having a 2:1 ratio) and combine them together to make some turbulent looking noise
that resembles smoke or clouds. Each of the octaves is updated at a specific rate, different for each octave,
and then smoothed. To generate the turbulent noise for a given frame, we interpolate between the different
updates for each octave, and then combine the different snapshots for each octave together to create the
turbulence. At that point, some texture blending tricks need to be done to clamp off some ranges of values,
and map it onto a sky dome, box, or whatever surface you are working on.

As I mentioned earlier, what intrigued me about the original software-rendered demo was that many of the
steps involved (smoothing noise, interpolating between noise updates, combining octaves) carried a lot of
per-pixel cost, cost that could instead be borne by the graphics card using alpha blending, bilinear filtering,
and rendering to texture surfaces. A remaining question was whether the simple four-tap sampling of the
bilinear filter would be adequate for doing the smoothing. I figured I'd attempt it, and as I think you'll see,
the results are acceptable.

Background on rendering to texture surfaces


In the synopsis of the technique above, I mentioned rendering to texture surfaces, something possible on a
lot of modern-day 3D hardware, and exposed by the DirectX7 API. I wrote an article for gamasutra.com on
this subject (a link is included at the end of this article) that goes into more detail, but I will summarize the
technique here for those who haven't read it.

In order to render to a texture surface, one has to create a surface that can be used both as a render target
and as a texture (there's a DirectDraw surface caps flag for each). If the application is using a Z buffer, it
must attach one to the texture surface as well.
Then, for each frame, the application does one BeginScene/EndScene pair per render target. After rendering
to the texture and switching back to the back buffer, the application is free to use the texture in that scene.
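A rough sketch of the DirectX7 calls involved follows. Error checking is omitted and the surface description is abbreviated; treat it as a starting point, not the demo's actual code.

// Sketch: creating a texture usable as a render target (DirectX 7).
// lpDD, lpDevice and lpBackBuffer are assumed to exist from normal startup.
#include <ddraw.h>
#include <d3d.h>

LPDIRECTDRAWSURFACE7 CreateRenderTexture(LPDIRECTDRAW7 lpDD, DWORD size)
{
    DDSURFACEDESC2 ddsd;
    ZeroMemory(&ddsd, sizeof(ddsd));
    ddsd.dwSize         = sizeof(ddsd);
    ddsd.dwFlags        = DDSD_CAPS | DDSD_WIDTH | DDSD_HEIGHT;
    ddsd.dwWidth        = size;
    ddsd.dwHeight       = size;
    ddsd.ddsCaps.dwCaps = DDSCAPS_TEXTURE | DDSCAPS_3DDEVICE; // texture AND render target

    LPDIRECTDRAWSURFACE7 lpTexTarget = NULL;
    lpDD->CreateSurface(&ddsd, &lpTexTarget, NULL);
    return lpTexTarget;
}

// Each frame: render to the texture, then switch back and use it.
void RenderToTexture(LPDIRECT3DDEVICE7 lpDevice,
                     LPDIRECTDRAWSURFACE7 lpTexTarget,
                     LPDIRECTDRAWSURFACE7 lpBackBuffer)
{
    lpDevice->SetRenderTarget(lpTexTarget, 0);
    lpDevice->BeginScene();
    // ... draw the noise/composite passes here ...
    lpDevice->EndScene();

    lpDevice->SetRenderTarget(lpBackBuffer, 0); // back to the back buffer
    lpDevice->SetTexture(0, lpTexTarget);       // now usable as a texture
}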

I use this technique pretty extensively in this demo, but there's no reason it can't be done by rendering to the
back buffer and then blitting to a texture surface for later use. In fact, OpenGL doesn't expose the ability to
render to a texture, so you'll have to blit to the textures if this is your API of choice. This is also the
workaround used on hardware that doesn't support render-to-texture.

Enough already! Let's render some clouds.


The technique involves a number of steps:
• Generating the noise for a given octave at a point in time
• Smoothing the noise
• Interpolating the smoothed noise with the previous update, for the current frame
• Compositing all the current update octaves into a single turbulent noise function for the
current frame
• Doing some blending tricks to the composite noise to make it look a little more like clouds
• Mapping the texture to the sky geometry

Generating the noise for a given octave at a point in time


Generating the noise is fairly simple. What we want is a simple seeded pseudo-random number generator to
generate 'salt & pepper' noise. The generator must produce the same result for a given seed, so that the
noise is reproducible.

We need to generate noise at several different frequencies. I chose to use four, though fewer might suffice
on lower-end systems. By representing the four octaves as four textures of different resolutions (say 32x32
through 256x256) which eventually will all be upsampled to the same size (by mapping them to textures of
larger sizes), I can achieve the desired result automatically. The bilinear filtering does the interpolation for
me, and a smaller texture just ends up being a lower frequency than a larger texture. A cubic filter
would better approximate the curve I should get, but the results from the bilinear filter are acceptable.
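As a sketch, filling the octave textures might look like the following on the CPU side (in the real demo the values would be written into the locked DirectDraw surfaces). The struct and helper names are mine:

// Sketch: seeded 'salt & pepper' noise for four octaves (32x32 .. 256x256).
// The byte buffers stand in for the actual texture surfaces.
#include <vector>

struct OctaveNoise
{
    int                        size;   // 32, 64, 128, 256
    std::vector<unsigned char> texels; // grayscale intensities
};

void FillRand(OctaveNoise& o, unsigned seed)
{
    unsigned state = seed;
    o.texels.resize(o.size * o.size);
    for (int i = 0; i < o.size * o.size; ++i)
    {
        state = state * 1664525u + 1013904223u; // LCG: repeatable per seed
        o.texels[i] = (unsigned char)(state >> 24);
    }
}

// Four octaves, doubling in resolution; the same seed always gives the same clouds.
void InitOctaves(OctaveNoise octaves[4], unsigned baseSeed)
{
    static const int sizes[4] = { 32, 64, 128, 256 };
    for (int i = 0; i < 4; ++i)
    {
        octaves[i].size = sizes[i];
        FillRand(octaves[i], baseSeed + i); // distinct seed per octave
    }
}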

The noise textures are updated at different periods, with the lowest frequency, stored in the 32x32 texture,
being updated the least often (I used an interval of 7 seconds). The higher the frequency, the more
frequently it is updated. This makes sense if you think about it: the lowest frequency represents large,
general cloud formations, which change infrequently, while the highest frequency represents small, wispy
bits in the clouds, which change more rapidly. You can see this in action in time-lapse photography of
clouds.

I used frequencies that were multiples of two, for the sake of simplicity and because the results were
pretty good. However, it's not necessary to do so. Interesting results can be achieved by using frequency
combinations other than simple multiples of two times the input frequency.

Smoothing the noise


In order to take the salt and pepper noise and make it look more like clouds, we need to remove any high
frequency components from it. In English, this means we need to make sure that no individual pixel is too
dramatically different from its neighbors. We can do this by rendering the noise to another texture and
letting the bilinear filter average each pixel with its neighbors. Pretty simple. We can repeat this process for
additional smoothing, but I think you'll agree that a single pass is enough.

Before generating the smoothed noise, we save off the previous version generated at the last update. We'll
use this when interpolating between updates in our next step.
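In the demo this is a single render-to-texture pass; one way to get the four-tap average is to sample the noise texture with the UVs offset by half a texel. The CPU equivalent below (reusing the OctaveNoise struct from the earlier sketch) shows what that pass computes:

// Sketch: what the bilinear smoothing pass computes. Sampling offset by half
// a texel makes each output texel the average of a 2x2 neighborhood;
// '& (n-1)' gives the same wraparound as a wrapping texture address mode
// (texture sizes must be powers of two).
void SmoothNoise(const OctaveNoise& src, OctaveNoise& dst)
{
    int n = src.size;
    dst.size = n;
    dst.texels.resize(n * n);
    for (int y = 0; y < n; ++y)
        for (int x = 0; x < n; ++x)
        {
            int sum = src.texels[y * n + x]
                    + src.texels[y * n + ((x + 1) & (n - 1))]
                    + src.texels[((y + 1) & (n - 1)) * n + x]
                    + src.texels[((y + 1) & (n - 1)) * n + ((x + 1) & (n - 1))];
            dst.texels[y * n + x] = (unsigned char)(sum / 4);
        }
}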

Figure 3 shows the noise texture before and after smoothing.


Figure 3: The source noise texture for a single octave (32x32), a version that has been smoothed, and the
smoothed version upsampled to a 256x256 texture with filtering.

Interpolating the smoothed noise with the previous update


As we are periodically updating each octave of noise (as opposed to recreating it every frame), we need to
interpolate between updates to come up with a texture representing the noise at the current time. My
implementation uses a simple linear interpolation according to how much of the update period has elapsed
since the last update. A cubic interpolation might give a better result, but would probably be overkill.

The formula for interpolating between the two updates is simply:

Interpolant = TimeSinceLastUpdate/UpdatePeriod
CurrentOctave = PreviousOctave*(1-Interpolant) + LatestOctave*Interpolant
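On the hardware, this lerp falls out of ordinary alpha blending: render the previous update opaquely, then blend the latest update over it using the interpolant as a constant alpha. The DirectX7 sketch below is one way to set that up; DrawQuad() and the surface pointers are assumed helpers, not code from the demo.

// Sketch: per-frame interpolation of one octave as two blended passes.
#include <d3d.h>

extern void DrawQuad(); // assumed: draws a screen-filling pre-transformed quad

void LerpOctave(LPDIRECT3DDEVICE7 lpDevice,
                LPDIRECTDRAWSURFACE7 lpPrevOctave,
                LPDIRECTDRAWSURFACE7 lpLatestOctave,
                float timeSinceLastUpdate, float updatePeriod)
{
    float interpolant = timeSinceLastUpdate / updatePeriod; // 0..1

    // Pass 1: the previous update, opaque.
    lpDevice->SetRenderState(D3DRENDERSTATE_ALPHABLENDENABLE, FALSE);
    lpDevice->SetTexture(0, lpPrevOctave);
    DrawQuad();

    // Pass 2: the latest update, blended over it with the interpolant as a
    // constant alpha, giving prev*(1-i) + latest*i per pixel.
    lpDevice->SetRenderState(D3DRENDERSTATE_ALPHABLENDENABLE, TRUE);
    lpDevice->SetRenderState(D3DRENDERSTATE_SRCBLEND,  D3DBLEND_SRCALPHA);
    lpDevice->SetRenderState(D3DRENDERSTATE_DESTBLEND, D3DBLEND_INVSRCALPHA);
    lpDevice->SetRenderState(D3DRENDERSTATE_TEXTUREFACTOR,
                             D3DRGBA(1.0f, 1.0f, 1.0f, interpolant));
    lpDevice->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    lpDevice->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_SELECTARG1);
    lpDevice->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_TFACTOR);
    lpDevice->SetTextureStageState(0, D3DTSS_ALPHAOP,   D3DTOP_SELECTARG1);
    lpDevice->SetTexture(0, lpLatestOctave);
    DrawQuad();
}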

Compositing all the octaves into a single turbulent noise function


As I mentioned earlier, the lower frequencies represent the larger shapes and patterns that occur in the
clouds. It follows, then, that these are also more significant in their contribution to the cloud texture's
luminance and color.

As I was using a series of frequencies that are multiples of two, I chose to scale their contributions by
factors of two as well, such that each octave's contribution is half that of the next lowest frequency. This
can be expressed as:

Color = 1/2 Octave0 + 1/4 Octave1 + 1/8 Octave2 + 1/16 Octave3 + …

This was easy to code and produced good results. However, the weighting could certainly be changed to
change the look of the end result.

Figure 4 shows the interpolation and compositing steps.


Figure 4: Interpolation and compositing of noise octaves

I chose to render this in multiple passes, starting with the highest frequencies and working my way down to
the lowest ones, each time modulating the render target's contents with a monochrome color representing
an intensity of 0.5.
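The per-texel arithmetic of those passes boils down to a 'halve and add' accumulation. Here is a CPU sketch of the equivalent, again reusing the OctaveNoise struct from earlier; it assumes all four octaves have already been upsampled to a common size:

// Sketch: equivalent of the multipass composite. Starting from the highest
// frequency, each pass halves what's accumulated so far and adds the next
// octave, yielding weights of 1/2, 1/4, 1/8, 1/16 for octaves 0..3.
void CompositeOctaves(const OctaveNoise upsampled[4], OctaveNoise& out)
{
    int n = upsampled[0].size;
    out.size = n;
    out.texels.resize(n * n);
    for (int i = 0; i < n * n; ++i)
    {
        int acc = upsampled[3].texels[i];           // highest frequency first
        for (int o = 2; o >= 0; --o)
            acc = acc / 2 + upsampled[o].texels[i]; // modulate by 0.5, add next
        out.texels[i] = (unsigned char)(acc / 2);   // final halving: weights 1/2..1/16
    }
}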

Unfortunately, we start to run into some issues with dynamic range and precision. I'll talk about these later,
but it's worth noting here that we multiply the resultant texture by itself to increase the dynamic range and
contrast.

At this point, the noise looks pretty good (Figure 5 shows the sum of the four octaves of noise). It's very
turbulent, it's animated, and the artifacts from the simple linear interpolation aren't too apparent. However,
we have some more to do before it looks like clouds.

Figure 5: Summing four octaves of noise.

Making clouds out of vapor


What we have so far looks a bit like smoke or steam. Clouds are, of course, water vapor, but on a large
scale. The thing about water vapor is that below a certain density it isn't visible, and that density changes
with temperature, atmospheric pressure, and a number of other factors. In our turbulent noise texture, all
'densities' are visible, and we need to fix that by removing the low densities. As it turns out, visibility
follows an exponential function, with the color value as the exponent, something that would be difficult to
emulate in hardware. However, we can do a pretty good job just using clamping (this turns out to be one
place where working in the 0 to 1 range works in our favor). By making our final color equal to some
factor minus the cloud texture squared, we get an end result roughly similar to the exponential function
(the color is inverted, but that won't matter since the noise is fairly evenly distributed). Varying the
intensity of the factor color lets us vary how isolated the clouds are. This is shown in Figure 6.

Figure 6: Clamping the noise to make isolated clouds

Unfortunately, we lose some dynamic range here. We can use a modulate2X or equivalent to get it back,
but we still lose precision.
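For comparison, here is a sketch of the per-texel math, with the true exponential curve alongside the clamped approximation. The parameter names (cloudFactor and so on) are mine:

// Sketch: turning turbulence into isolated clouds. The 'true' curve is
// exponential; the hardware-friendly version approximates it with a
// squaring pass and a clamped subtraction. t and the results are 0..1.
#include <cmath>

// Reference curve: density below a threshold is invisible.
float CloudExponential(float t, float threshold, float k)
{
    float d = t - threshold;
    return (d <= 0.0f) ? 0.0f : 1.0f - std::exp(-k * d);
}

// Hardware-friendly approximation: factor minus the squared texture,
// clamped at 0. (The inversion comes out in the wash since the noise is
// evenly distributed.) Lowering cloudFactor isolates the clouds more.
float CloudClamped(float t, float cloudFactor)
{
    float c = cloudFactor - t * t;
    return (c < 0.0f) ? 0.0f : c;
}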

Putting clouds in the sky


Now that we have something that looks like clouds, we need to map the clouds to the sky geometry. You
can of course use a plane (a sky box is attractive for some applications), but I wanted to model the
curvature of the sky as well. I chose to use a tessellated plane, bent into a curve to match the sky's
curvature. This is just a plane set at whatever height we want the clouds at, with the vertices pulled down
to follow a sphere whose radius is the earth's radius plus the cloud height. If we know the maximum
altitude the viewer will be able to attain, we can figure out how big to make the plane so that its edges
can't be seen.
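A sketch of the vertex setup under that geometry follows; the grid dimensions and radius parameters are placeholders for whatever suits your scene:

// Sketch: a tessellated plane at cloud height, with vertices pulled down to
// follow the sky's curvature. R is the earth's radius plus the cloud height;
// 'drop' is how far below the apex a vertex at horizontal distance d sits
// on that sphere. Assumes planeSize is small relative to R.
#include <cmath>

void BuildSkyGrid(float* xs, float* ys, float* zs, int gridDim,
                  float planeSize, float cloudHeight, float earthRadius)
{
    float R = earthRadius + cloudHeight;
    for (int j = 0; j < gridDim; ++j)
        for (int i = 0; i < gridDim; ++i)
        {
            float x = (i / (float)(gridDim - 1) - 0.5f) * planeSize;
            float z = (j / (float)(gridDim - 1) - 0.5f) * planeSize;
            float d2 = x * x + z * z;                // squared distance from apex
            float drop = R - std::sqrt(R * R - d2);  // curvature pull-down
            int   v = j * gridDim + i;
            xs[v] = x;  ys[v] = cloudHeight - drop;  zs[v] = z;
        }
}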

This method is attractive because the mapping is easy to do (a planar mapping will suffice), with none of
the pinching that occurs at the poles of a tessellated sphere. Because the bilinear filter wraps around when
sampling texels at the texture's edges, the cloud texture tiles nicely. We can vary the amount of tiling to
change the look of the clouds.

The rest is fairly easy. Fill the background with the desired sky color (blue in this case), blend the clouds
over it, add a glow around the sun, and of course all 3D graphics demos are required to have a lens flare. As
an interesting effect, we can use the cloud texture to vary the amount of lens flare. Since we know how the
texture maps to the sky dome, and the direction the sunlight is coming from, we can figure out exactly
which texel in the cloud texture maps to the sun's position on screen. We can then upsample that over a
pair of triangles and use it as a mask for the lens flare.
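Here is a sketch of that texel lookup under a simple planar mapping; the tiling factor and parameter names are my own assumptions:

// Sketch: finding the cloud texel over the sun. With a planar mapping, the
// world-space point where the sun's direction pierces the cloud plane
// converts directly to texture coordinates. 'tile' is how many times the
// texture repeats across the plane.
#include <cmath>

bool SunTexel(const float sunDir[3],        // normalized, from the viewer
              float cloudHeight, float planeSize, float tile,
              int texSize, int* texX, int* texY)
{
    if (sunDir[1] <= 0.0f)
        return false;                        // sun below the horizon
    float t = cloudHeight / sunDir[1];       // ray-plane intersection
    float x = sunDir[0] * t, z = sunDir[2] * t;
    float u = (x / planeSize + 0.5f) * tile; // planar mapping, then tiling
    float v = (z / planeSize + 0.5f) * tile;
    u -= std::floor(u);  v -= std::floor(v); // wrap into 0..1
    *texX = (int)(u * texSize);
    *texY = (int)(v * texSize);
    return true;
}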

I implemented two different methods of blending the clouds. The first is just a straight transparency, which
looks okay, but doesn't mimic the obscuring of the light that happens. The second method does this, via a
second pass of the cloud texture which has been clamped at a different level. Figure 7 shows off our final
result.
Figure 7: The cloud texture mapped onto a sky dome.

Room for improvement


At this point, we've created clouds that show a number of the traits I listed at the beginning of this
article. The clouds are random and turbulent, we can change the shape and density of the cloud patterns,
and we have the clouds mapped onto a curved sky and fading out over distance. However, there's still room
for improvement. We haven't modeled the cloud thickness, how the clouds get lit, the colors that occur at
dawn or dusk, or the appearance of cloud formations along weather fronts. Given more fill-rate and more
time, there are a couple of things we could add to improve the effect.

For one, we could do an embossing effect using an extra pass to brighten the clouds on the sides facing the
sun, and darken them on the sides away from the sun. However, this would require that we not tile the
texture, which in turn would probably mean that we'd need to use a higher resolution texture to begin with.
Alternatively, we could do an additional render-to-texture pass to tile the texture out onto one that would
then be mapped once onto the whole sky.

Multiple cloud layers could be mapped onto additional geometry meshes for added detail. These could then
scroll at different velocities (as many games do with static cloud textures).

Another thing that would be interesting is to have the clouds interact with the environment, such as clouds
parting around a mountain peak, or being disturbed by a plane. This could be done by a combination of
deforming the mesh the clouds are mapped onto and modifying the texture itself. However, it might take a
lot of work to get it to look just right.

Another improvement would be to add a gradient background for sunsets and sunrises, and tint some
portion of the clouds.

Finally, something we could do to make the demo altogether more natural and random looking is to
animate any (or all) of the various parameters over time, using a smoothed noise function. This could be
used to animate the cloudiness factor to vary the amount of cloud over time, or to animate the speed of the
clouds, or the turbulence. The only limit is your imagination (or in my case, the time to implement it all)!

Multi-Texture Meltdown
As I alluded to earlier, an unfortunate problem when doing any effect that requires a significant amount of
multi-texture and multi-pass rendering is the lack of precision, which can quickly result in visual artifacts.
Where possible, 32-bit textures can be used to minimize (but not always eliminate) this problem. At the
same time, it's important to do so only where necessary, to minimize texture memory consumption.
Another problem is the fact that the dynamic range of textures is limited to the 0 to 1 range. This makes it
difficult to deal with anything that gets amplified over multiple stages. For example, the higher frequency
noise textures ended up contributing only one or two bits to the end result because of this problem.

Making it Scale
If you're writing games for the PC platform, being able to implement a snazzy effect isn't enough. You also
have to be able to make it scale down on basic machines or on video cards with less available memory.
Fortunately, this procedural cloud technique lends itself well to scaling in several respects.

On the lowest-end systems, you can either use a static texture or generate the texture only once. Other areas
for scalability include updating the noise less frequently, using fewer octaves, or using lower-resolution or
lower-color-depth textures.

Doing it Yourself
Generating procedural clouds can allow for unique skies that change over time and with other factors in the
environment. This can improve the look of skies over static cloud textures that never change. Additionally,
procedural clouds can save on storage, or on download size for Internet applications.

Hopefully, some of the techniques presented here will allow you to implement similar things in your
applications.

Additional Resources

• Texturing and Modeling: A Procedural Approach, second edition. David S. Ebert, editor. AP
Professional, 1994. ISBN 0-12-228730-4
• Hugo Elias & Matt Fairclough's procedural cloud demo -
http://freespace.virgin.net/hugo.elias/models/m_clouds.htm
• Haim Barad's Procedural Texture Using MMX article -
http://www.gamasutra.com/features/programming/19980501/mmxtexturing_01.htm
• Ken Perlin's Noise Machine website - http://www.noisemachine.com/
• Kim Pallister's article, “Rendering to Texture Surfaces Using DirectX7”
http://www.gamasutra.com/features/19991112/pallister_01.htm

About the Author


Kim Pallister is a Technical Marketing Engineer and Processor Evangelist with Intel's Developer Relations
Group. He is currently focused on realtime 3D graphics technologies and game development. He can be
reached at kim.pallister@intel.com.

* Third-party brands and names are property of their respective owners.


Legal Information at http://developer.intel.com/sites/developer/tradmarx.htm
© 2000 Intel Corporation
