
1

2
3
Very complex rendering: dynamic sweat and damage, animated trails. Dynamic
wrinkles, muscle maps. Subsurface scattering (via a UV-space blur in the NIS). SSAO.
Extremely dense models (sub-pixel triangles...). Almost used all the texture samplers on
the skin. 3D crowd with facial animation. 3D cloth/softbody for impacts on the face...
See the GDC 2010 Fight Night 4 presentation...

4
5
6
Tools that help tuning:
In game image viewing/filtering tools
Photoshop scripts
Debug shaders
Debug/Reviewing environments (lighting)
Artists need to be able to isolate issues, not judge only the final result.

7
Aliasing... Try to crank up resolution and AA on PC games and see how much more realistic
they look!

8
Photorealistic scenes can be found across generations...

From top left, clockwise: Half-Life 2, Call of Duty 4: Modern Warfare, Grand Theft Auto
IV, Left 4 Dead, Call of Juarez: Bound in Blood, Tourist Trophy (PS2), Gran Turismo 4
(PS2)

Most screenshots are from “Dead End Thrills”

This is a selection of “older” games but I’m cheating here: it’s all about environments.
Humans are harder.

Videogame photography links:


http://enwandrews.tumblr.com/
http://shotbyrobert.com/wordpress/?page_id=1011
http://deadendthrills.com/

9
Some big changes are still perceptually believable. The black-and-white image also has
more global and local contrast. Global blurring does not change our ability to
recognize the “human” quality of the face (but blurring only the skin detail while
retaining the edges would, for example!)

Most visual qualities are relative. The lips in the bottom-left picture appear redder
than the ones in the top picture, even though only the skin around them got a slight
cool tint; the lips themselves did not change.

Also, some “minor” changes in absolute values can make huge differences. The bottom-
middle image has a bit of green added to the midtones; the bottom-right one only
changed the reds, making them more pink.

10
We need photorealistic visual targets – photos, collages, high-quality renderings.
Concept art won’t cut it: it’s good for evaluating some aspects (volume, composition)
but not all of what’s needed to generate a photorealistic image (colour, texture, details).

11
12
13
Possible solutions:
- Re-compute normals from the skinned geometry
It’s tricky, as the in-game geometry can be in a different format and use different rules
than the original in your DCC 3D modeling package, i.e. quads vs. triangles, subdivision
surfaces, projection of the normals from a high-res model onto the low-res one etc...

- Compute an extended set of weights for normal skinning.

Let’s say we derive face normals by averaging the vertices of a face, and that we
compute vertex normals by some form of averaging over the faces incident to a vertex
(usually weighted by their areas). Let’s also assume that the face areas do not change
under skinning. In that case we can compute, per vertex, a set of bone weights and
indices that is the average of the bone weights and indices acting on the faces of that
vertex.
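
A minimal sketch of that averaging, with hypothetical mesh types and names (not our
pipeline code): gather the influences of every vertex of every face incident to a given
vertex and average them into an “extended” influence set used only for normal skinning.

```cpp
#include <map>
#include <vector>

struct Influence { int bone; float weight; };
struct Vertex    { std::vector<Influence> influences; };
struct Face      { int v[3]; };

// Average the bone influences of all vertices of all faces touching vertexIndex.
std::vector<Influence> normalSkinInfluences(int vertexIndex,
                                            const std::vector<Vertex>& vertices,
                                            const std::vector<Face>& faces)
{
    std::map<int, float> accum;   // bone index -> accumulated weight
    int count = 0;
    for (const Face& f : faces) {
        // Only faces incident to the vertex contribute to its normal.
        if (f.v[0] != vertexIndex && f.v[1] != vertexIndex && f.v[2] != vertexIndex)
            continue;
        for (int i = 0; i < 3; ++i) {
            for (const Influence& inf : vertices[f.v[i]].influences)
                accum[inf.bone] += inf.weight;
            ++count;
        }
    }
    std::vector<Influence> result;
    for (const auto& [bone, weight] : accum)
        result.push_back({bone, weight / float(count)});  // renormalize so weights sum to ~1
    return result;
}
```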

14
15
16
17
18
19
Shaders are some sort of black box. They model a given function (a BRDF or, better, a
given approximation of the whole rendering equation), and each parameter can change
part of the function’s shape.

The art direction and visual targets define an “objective function” that we don’t know
in any mathematical form. Artists are “optimizers”: they try to find the parameters
that best fit the shader function to the art direction.

More parameters, more degrees of freedom = harder work for the artists. We need to
guide the artists, provide intuitive models, limit parameter ranges and dependencies.
Math and physics guide our process; they help us understand what to add and why,
whether a parameter makes sense or not, and what the root cause of a given visual
defect is.

20
Adding more parameters/hacks:
-- Increases complexity. Over time shaders become unwieldy; it becomes hard to
understand what causes what.
-- Feedback loop / a problem that makes itself worse. The more complex the shader,
the less it’s understood, and the more we will need to add ad-hoc hacks when things go
wrong.
-- Performance problems.
-- Makes changes harder: it becomes actually HARDER to art-direct, harder to iterate
and tune (NAIL THE LOOK).

Everyone is shifting towards fewer parameters and more physically based models. Many
talks at SIGGRAPH 2010 were about GI in movies and its advantages.

...It would be a nice project to create an automatic shader tuner that fits parameters to
a given reference image. The right comparison metrics are needed, and there is no
hope of making it work over all the dimensions of a typical shader parameter space, but
it could be possible, for example, to fit SH lighting once the material parameters are
fixed (a toy sketch follows below).
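
A toy illustration of that fitting idea (everything here is hypothetical, not an existing
tool): with per-sample normals known and material parameters fixed, low-order SH lighting
can be fit to reference radiance by gradient descent on the squared error. A real tool
would need a proper perceptual comparison metric.

```cpp
#include <cmath>
#include <vector>

// Per-pixel sample: surface normal plus target radiance with the fixed material
// term already divided out (single channel for brevity).
struct Sample { float nx, ny, nz; float radiance; };

// First two SH bands (4 coefficients) evaluated in the sample's normal direction.
void shBasis(const Sample& s, float y[4])
{
    y[0] = 0.282095f;            // Y_0^0
    y[1] = 0.488603f * s.ny;     // Y_1^-1
    y[2] = 0.488603f * s.nz;     // Y_1^0
    y[3] = 0.488603f * s.nx;     // Y_1^1
}

// Least-squares fit of the 4 SH lighting coefficients via plain gradient descent.
void fitSHLighting(const std::vector<Sample>& samples, float coeffs[4],
                   int iterations = 2000, float learningRate = 0.5f)
{
    for (int i = 0; i < 4; ++i) coeffs[i] = 0.0f;
    for (int it = 0; it < iterations; ++it) {
        float grad[4] = {0, 0, 0, 0};
        for (const Sample& s : samples) {
            float y[4]; shBasis(s, y);
            float pred = coeffs[0]*y[0] + coeffs[1]*y[1] + coeffs[2]*y[2] + coeffs[3]*y[3];
            float err = pred - s.radiance;
            for (int i = 0; i < 4; ++i) grad[i] += 2.0f * err * y[i];
        }
        for (int i = 0; i < 4; ++i)
            coeffs[i] -= learningRate * grad[i] / float(samples.size());
    }
}
```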

21
22
23
Not only fewer parameters, but coarser granularity!
Material parameters: skin in FN4 was specified mostly per skin tone, sometimes per boxer.
Skin in FNC is mostly tuned globally, with only 2-3 parameters per skin tone and none per boxer.

Lighting parameters: FN4 used 3-4 lighting cubemaps per venue, FNC only one.
FN4 tuned lighting per scenario (i.e. entrances vs. in-game), FNC practically doesn’t.
Cubemaps are SLOW to iterate on!

Artists’ hacks: bending the parameters (the shader “shape”) too much, trying to fix a given
visual problem with the wrong controls. “Encourages” communication: if a given visual problem
is not fixable within the current parameter ranges, talk to an engineer.

Example of “artist friendly” parameters: normalize the specular so that changing the
exponent does not change the intensity (a small sketch follows below).
It can also make some computations faster, e.g. translating brightness/contrast parameters
into a single madd.
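
A minimal sketch of both tricks, under assumptions of mine (the (n + 8) / (8 * pi)
normalization is one common convention, not necessarily the one we used; all names are
hypothetical):

```cpp
#include <algorithm>
#include <cmath>

// Normalized Blinn-Phong: the (n + 8) / (8 * pi) factor keeps the reflected energy
// roughly constant as the exponent n varies, so artists can sharpen or widen the
// highlight without having to re-tune its brightness.
float normalizedBlinnPhong(float NdotH, float exponent)
{
    const float pi = 3.14159265f;
    float norm = (exponent + 8.0f) / (8.0f * pi);
    return norm * std::pow(std::max(NdotH, 0.0f), exponent);
}

// Brightness/contrast exposed to artists as two intuitive sliders, but collapsed
// offline into a single scale/bias so the shader pays only one madd.
struct ScaleBias { float scale, bias; };

ScaleBias bakeBrightnessContrast(float brightness, float contrast)
{
    // value' = (value - 0.5) * contrast + 0.5 + brightness
    //        = value * contrast + (0.5 - 0.5 * contrast + brightness)
    return { contrast, 0.5f - 0.5f * contrast + brightness };
}
```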

Parameter ranges: avoid having artists “push” parameters to levels that do not make sense.
If they feel that need, either there is a misunderstanding of how the lighting works, or the
lighting is not working correctly!

Texture parameter “space”: gamma, different encodings and compression settings.

Author textures in 16-bit, find the best representation in the pipeline. Author textures
separately, merge and mix them in the pipeline (i.e. multiple grayscale textures merged into
different channels of a single in-game one, AO maps pre-multiplied into diffuse and specular
etc...)

24
25
26
27
28
29
30
Bad math in the specular: we tinted the specular using the diffuse lighting color, which
actually decreased hue variety.

Standard diffuse does not work. In FN4, in-game we used an ambient map (redness) to
simulate SSS. During the NIS this map was replaced by a dynamically generated one
(a UV-space blur). This solution can’t work with more varied lighting setups (FN4 is
almost always strongly top-lit). Also, the map was not entirely reasonable the way it was
authored.

Sidenote: noise is good to break up highlights; if we wanted to simulate peach fuzz, it
would be good to add a bit of noise to the specular at grazing angles (a sketch follows
below).
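
A hedged sketch of that idea (not something we shipped; the strength constant and the
noise source are hypothetical):

```cpp
#include <algorithm>
#include <cmath>

// Break up the specular highlight with noise that only appears at grazing angles,
// as a cheap peach-fuzz stand-in.
float peachFuzzSpecular(float specular, float NdotV, float noise /* e.g. 0..1 from a noise texture */)
{
    // Fresnel-like weight: strongest at grazing angles (NdotV near 0).
    float grazing = std::pow(1.0f - std::clamp(NdotV, 0.0f, 1.0f), 5.0f);
    // Modulate the highlight by noise only where the grazing weight is high.
    float breakup = 1.0f - grazing * 0.5f * noise;   // 0.5 = hypothetical strength
    return specular * breakup;
}
```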

31
Don’t judge too much:
Don’t let your experience hinder your imagination.
Not the time to be strictly practical.
Tinker, try, hack, read, try.
Look at other media.
After this, all the walls around my desk were cluttered with papers, screenshots, ideas.

32
33
Performance problems:
We need to compute the diffuse lighting twice, first for the SSS, then for the direct
component. This is a problem if we need to render many characters. FN4 is the ideal case:
in game we have only two boxers.

Screenshots: Jensen’s multipole SSS model, d’Eon-Luebke realtime implementation.

34
35
36
It’s possible to use curvature to approximate the geometrical neighborhood. We didn’t
do that in our test (a small sketch of the idea follows after the references below).

Similar to: “Curvature-Dependent Local Illumination Approximation for Translucent
Materials”, Kubo-Hariu-Wemler-Morishima.
And “Curvature-Dependent Reflectance Function for Interactive Rendering of
Subsurface Scattering”.
“Curvature-based shading of translucent materials, such as human skin”.

The most recent and complete effort is “Pre-Integrated Skin Shading” by Eric Penner:
http://www.ericpenner.net/portfolio/
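
A hedged sketch of the curvature idea (not what we implemented; in an actual pixel shader
the deltas would come from screen-space derivatives of the normal and position):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float length(const Vec3& v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }
static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

// Curvature ~ |dN| / |dP| between two nearby surface samples (1/radius for a sphere).
// The result can drive how much "scattering" to apply, e.g. as a lookup coordinate
// into a pre-integrated falloff table.
float approxCurvature(const Vec3& n0, const Vec3& n1, const Vec3& p0, const Vec3& p1)
{
    float dn = length(sub(n1, n0));
    float dp = length(sub(p1, p0));
    return (dp > 1e-6f) ? dn / dp : 0.0f;
}
```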

37
38
39
Avoid blurring: weight occlusion based on sample distance (a sketch follows after these notes). Line sampling.

Bent normals: did not make a big enough difference for us; recently Crysis 2 shipped
with it.

Consider normals:
Sample in 3D -> Orient sampling hemisphere along surface normals
Sample in 2D -> Consider distance between normal-plane and sampled depth.

Specular occlusion:
sample on a line towards the specular reflection. Useful especially on the rim (fresnel
angles)!
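
A minimal sketch of the distance-weighted occlusion mentioned above (parameter names and
the falloff curve are hypothetical, not the FNC implementation):

```cpp
#include <algorithm>
#include <vector>

// One AO sample: how far the potential occluder is, and whether it occludes at all.
struct AOSample { float occluderDistance; bool occludes; };

// Each sample contributes less occlusion the further its occluder is from the shaded
// point, which avoids having to blur away hard occlusion edges afterwards.
float distanceWeightedAO(const std::vector<AOSample>& samples, float falloffRadius)
{
    float occlusion = 0.0f, count = 0.0f;
    for (const AOSample& s : samples) {
        // Smooth falloff: full weight at distance 0, zero weight at falloffRadius.
        float t = std::clamp(s.occluderDistance / falloffRadius, 0.0f, 1.0f);
        float weight = 1.0f - t * t;
        occlusion += weight * (s.occludes ? 1.0f : 0.0f);
        count     += 1.0f;
    }
    return (count > 0.0f) ? 1.0f - occlusion / count : 1.0f;  // 1 = unoccluded
}
```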

40
41
42
Power after cube fetch.
Wrong: it can’t really change the exponent, it only changes contrast, and it is dependent
on the HDR range of the cubemap. Such contrast adjustments also shift colors (saturation)!

43
Encoding multiple Phong exponents in the mips (a sketch of the mapping follows below).
360 and PS3 (and older PC cards) do not filter across cube faces -> the ATI Cubemapgen
library solves this by making sure that the borders of the cube faces match.

Your DXT compressor has to be aware of this too: use the same reference colors for
blocks across the seams! Cubemapgen does not do that.
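
A hedged sketch of one common way to set that up (the conventions here — the exponent
halving per mip from a base exponent at mip 0 — are assumptions, not necessarily what we
shipped):

```cpp
#include <algorithm>
#include <cmath>

// If mip 0 of the prefiltered cubemap is convolved with exponent E0 and each successive
// mip halves the exponent, a material exponent E maps to mip = log2(E0 / E).
float exponentToMip(float materialExponent, float mip0Exponent, float mipCount)
{
    float mip = std::log2(mip0Exponent / std::max(materialExponent, 1.0f));
    return std::clamp(mip, 0.0f, mipCount - 1.0f);
}
```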

44
We like Phong because it can be baked into cubemaps!

45
46
47
48
49
50
51
We use a few “ubershaders” with static defines for features, so it’s easy to produce
specializations
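
A minimal sketch of the ubershader/static-define idea (feature names hypothetical):

```cpp
// One source, features gated by static defines; each specialization compiles to a lean
// shader with no runtime branching.
#define FEATURE_SSS        1
#define FEATURE_WRINKLES   0

float shadeSkin(float diffuse, float specular, float sss, float wrinkleMask)
{
    float result = diffuse + specular;
#if FEATURE_SSS
    result += sss;            // compiled in only for the SSS specialization
#endif
#if FEATURE_WRINKLES
    result *= wrinkleMask;    // stripped from variants that don't use wrinkle maps
#endif
    return result;
}
```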

52
53
54
55
Faster iteration is better than good technology.
I don’t think this applies only to artists! I think every programmer has experienced how
much more important it is to have fast iteration when working with shaders. We don’t
have great debugging tools for them (compared to other languages), but being able to
hot-swap them easily and have immediate visual feedback is usually better than
having great tools but slow iteration. The same applies to scripting languages.
I’ve even found myself more prone to explore an idea with lesser potential if it was
implementable within the boundaries of what I could iterate on quickly, rather than
better ideas that required slower changes.

For FN4 most of our lighters’ time (80%) was spent tuning lighting cubemaps in
Photoshop.
FNC: drastically reduced the number of cubemaps and adopted an in-house tool for their
authoring.

Another big pain was the expression-driven corrective bones in Maya; we have many of
them. Expressions written in MEL were hand-translated into C++ code! We employed a
new node-based scripting engine for these, with a runtime both in game and in Maya.

Ownership:
The more “distance” there is between artists and the game (mainline), the less
ownership is possible.
Outsourcing works too: the artist responsible for a given outsourcer also effectively
owns the external changes, tests them and checks them into mainline.

56
57
58
We added a fairly complex motion blur to Champion. It does not only blur camera
movement, but also skinned actor movement. It’s also capable of blurring outside the
boxer silhouette, and it’s fairly robust.

We did some tests very early in prepro using optical flow on a filmed version of the
game to quickly add the effect, and we preferred that. Later on we retested our
hypothesis using an actual in-game prototype and blind testing.

Most of the prepro was still done thinking about techniques that could work at 60 Hz.
This was not a huge bottleneck, as our worst case for performance in both FN4 and FNC
turned out to be the in-game NIS, which even in FN4 ran at 30 Hz. In-game FNC could
still run at 60 Hz even if it wasn’t optimized for it.

SSDO could still be a good technique. It requires better tools; light bleeding could be
solved in the future with more accurate ray-casting, importance sampling according to
the cubemap, etc...

Some examples of effects driven by the analytic main light: ear translucency can be
faked with a cheap trick, we boosted specular cubemaps with it, and so on...

Again, accurate occlusion is very important for shapes and volumes -> likeness.

59
Hybrid point/directional lights are nice. The FN4 flashes were a product of both, IIRC
(dot(N,D)*dot(N,P)); it works nicely.
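
A minimal sketch of that product, with hypothetical names (D is the directional light
direction, P the normalized direction from the surface point to the point light):

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Product of the directional term and the point-light term: the light keeps a dominant
// direction but still falls off locally around the point light's position.
float hybridLight(const Vec3& N, const Vec3& D /* directional light dir */,
                  const Vec3& P /* normalized dir towards the point light */)
{
    float d = std::max(dot(N, D), 0.0f);
    float p = std::max(dot(N, P), 0.0f);
    return d * p;
}
```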

60
61
62
Having an analytic light gets us INFORMATION; we can use it to drive many more
effects. Ear fake translucency is just a light set opposite to the main one and masked
with an ears mask…
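
A hedged sketch of that trick (the mask source, strength and exact lighting term are
assumptions of mine, not the shipped shader):

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Light the ears with the main light direction flipped, masked by an "ears" texture,
// so backlit ears appear to glow without any real transmission computation.
float fakeEarTranslucency(const Vec3& N, const Vec3& L /* main light dir */,
                          float earMask /* from a texture, 0..1 */,
                          float strength /* tuned constant */)
{
    Vec3 opposite = { -L.x, -L.y, -L.z };          // a light opposite to the main one
    float lit = std::max(dot(N, opposite), 0.0f);
    return lit * earMask * strength;
}
```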

63
It’s easy to break the rendering realism by having visible artifacts, especially aliasing or
upsampling defects...

64
65
66
67
68
69
Note: FNC SSAO occludes specular, main light diffuse (using min(shadow,ao)) and
ambient. It’s used as a shadowing component, not only as ambient occlusion.
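
A minimal sketch of how that combination could look (names hypothetical, single channel
for brevity):

```cpp
#include <algorithm>

// The main light's diffuse is attenuated by min(shadow, ao) so the AO acts as an extra
// shadowing term, while the same AO also darkens the ambient and specular contributions.
float applyOcclusion(float mainDiffuse, float ambient, float specular,
                     float shadow, float ao)
{
    float mainTerm = mainDiffuse * std::min(shadow, ao);
    float ambTerm  = ambient  * ao;
    float specTerm = specular * ao;
    return mainTerm + ambTerm + specTerm;
}
```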

70
71
72
73
74
75
