
Unit 41 3D Modelling

Applications of 3D modelling
Games:
In modern game development, it is rare to come across a AAA game that does not use some form of 3D model, as most developers prefer to create 3D games. An example of the use of 3D models in games would be character models, which are created by game designers and then given animations by the animators and coders that respond to the game and its controls. The model is then placed into the game world, creating an interactive 3D character. The only games that do not use 3D models are 2D games, which are more common in the indie developer scene. For example, the critically acclaimed games “To the Moon” and “Finding Paradise”, made by indie developer Freebird Games, used 2D art created in GameMaker.

Figure 1 - 3D models of the Skull Unit from the game Metal Gear Solid V: The Phantom Pain (Fudge, 2016)

Animation:
Similar to games, there are two types of animation: 2D and 3D. The former is created with various methods, such as vector- or bitmap-based animation, done on a computer using software such as Toon Boom or Adobe Flash. Along with these, there are also traditional methods such as cel animation, which was the most common form of animation used by powerhouses like Disney until the late 90s.
3D animation, on the other hand, works more like the way character models are animated in games. In 3D animation, the 3D modeller creates a 3D model of a character, which is then given to the animator, who fits the model with an animatable skeleton featuring joints and other parts crucial to creating movement or action. This then allows the animator to manipulate the 3D model, creating animation. Over time, more complex technologies like performance/motion capture have been developed, allowing actors to wear a rig that translates their real-life actions and facial expressions into a 3D animation on the computer. A notable use of motion capture is the character Gollum from the Lord of the Rings movies, played by actor Andy Serkis, who believed it was the only way to bring the character to life. A lot of modern 3D video games also use this method for animating 3D models in game.

Figure 2 - Andy Serkis in his mo-cap suit acting as Gollum

Product Design:
3D modelling has had a massive impact on the ability to design and create in product design. For example, 40 years ago all designs had to be hand drawn, which was very limiting, as the designer would then have to draw the design from multiple angles, making the whole process very time consuming. However, with advancements in computer hardware and software, designers can now use 3D software such as 3ds Max or Maya to create 3D models of their designs before creating them in real life. A big advantage of this is that the model can then be viewed from any angle and can feature very fine detail. It also allows the designer to change any aspect of the design that they may not like or that may not fit.

Figure 3 - 3D model of a camera

Education:
3D models can be very useful in the education system for showing students objects they may not be able to view in real life. For example, medical students may not be able to see a real human heart; however, with the use of 3D modelling software, models of the human heart can be created for them to examine and study. Similarly, 3D animated models of our solar system can be created to show students in science classes.

Architectural Walk-through
As with product design, 3D modelling has helped architecture immensely, as it allows designers to create a 3D version of what they want to build, to scale, in 3D or CAD software such as AutoCAD. Doing this allows them to go through their design and correct any errors or details they may not like. It also allows designers to show potential investors their designs in full detail, as they can go through every nook and cranny of the 3D model.

Displaying 3D Polygon Animation


Application Programming interface (API)
An API (Application Programming Interface) is a set of subroutine definitions, protocols and tools used in the creation of application software. An API is, in a way, how the different components of a piece of software communicate and work together. If you are creating a program, having a good API that is easy to understand and navigate makes the task much easier, as the API provides all the essentials the programmer needs to put the software together. APIs can also be specialised for many different purposes, such as operating systems, database systems and computer hardware. While APIs can be specialised for different types of software, they often share common building blocks such as routines, object classes, data structures, variables and remote calls.
Direct3D
This is a graphics API by Microsoft for use on the Microsoft Windows operating systems and is part of the DirectX API. Direct3D is used to render three-dimensional objects without having to sacrifice performance and allows software to run in full-screen mode. Along with this, Direct3D can take advantage of any available video card by using a process called hardware acceleration, which lets the computer run more smoothly by allocating tasks better suited to the video card/GPU (Graphics Processing Unit) instead of the CPU (Central Processing Unit).

Graphics Pipeline
The graphics pipeline is the method by which a 3D model is presented to the viewer. It details the steps necessary to create a 3D rendered scene. Below are those steps in order:

Modelling
First, a scene is modelled using primitives. The scene may vary in complexity depending on the task or what the user wants to create. However, a scene that is more complex and intricate will not only take longer to complete, it may also demand more memory and processing power from the computer.
Lighting
Next is lighting, which determines the effect a light source has on a model. These effects may include reflections, shading and transparency. Along with this, the light source governs how the scene's colours are rendered. If a scene lacks a light source, the whole scene will appear black.

Viewing
To make sure everything is in order, you can navigate through the 3D world you have created using a 3D camera system, which allows you to view the model from all angles and in different perspectives, such as orthographic or isometric.

Projection
This refers to the methods commonly used in computer graphics and engineering for mapping three-dimensional points onto a two-dimensional plane, such as the screen. The most common form in 3D rendering is perspective projection, where points are scaled in proportion to their distance from the camera so that distant objects appear smaller.
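The mapping above can be sketched in a few lines. This is a minimal illustration of a pinhole-style perspective projection, not tied to any particular API; the `focal_length` parameter is an assumed simplification.

```python
def project_point(x, y, z, focal_length=1.0):
    """Project a 3D point onto a 2D image plane.

    Simple pinhole (perspective) projection: dividing by depth (z)
    makes points farther from the camera land closer to the centre
    of the image, so distant objects appear smaller.
    """
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    px = focal_length * x / z
    py = focal_length * y / z
    return px, py
```

For example, a point at depth 2 projects to half its x/y offsets, while the same offsets at depth 4 shrink to a quarter.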

Clipping
Clipping allows you to disregard all geometric primitives that are not within viewing range. In doing this, the scene can be rendered faster and the computer's hardware is less stressed, as there is less to display on screen at any given time.
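As a minimal sketch of the idea, the following discards points whose depth lies outside an assumed near/far viewing range (real clippers work on whole primitives against all six frustum planes):

```python
def clip_points(points, near=0.1, far=100.0):
    """Keep only points whose depth lies inside the viewing range.

    Geometry nearer than the near plane or beyond the far plane is
    discarded, so it never reaches the later rendering stages.
    """
    return [(x, y, z) for (x, y, z) in points if near <= z <= far]
```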

Scan Conversion
Scan conversion, also known as "rasterization", is the process by which a scene is rendered into raster/bitmap format by calculating the correct colour value of every pixel on screen.
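The core of rasterization can be sketched as deciding, pixel by pixel, whether a pixel centre falls inside a (screen-space) triangle. This is a simplified edge-function test, a common textbook approach rather than any specific renderer's code:

```python
def edge(ax, ay, bx, by, px, py):
    # Signed-area test: which side of the edge A->B the point P lies on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    """Scan-convert a 2D triangle into the set of covered pixels.

    A pixel is covered when its centre lies on the same side of all
    three edges (handles either vertex winding order).
    """
    covered = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.add((x, y))
    return covered
```

A real rasterizer would also interpolate depth, colour and texture coordinates across the covered pixels.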

Texturing and Shading


During the previously mentioned rasterization process, colours are allocated to the model, defined by the textures that have been applied to it along with the lighting effects in the area. This determines how the finished model will look when rasterization is complete.

Display
This is the final step of the graphics pipeline, where the model has been fully rendered and can be displayed on an output device.

Rendering Techniques
Radiosity
This is a rendering technique that focuses on the global lighting in a scene by using a global illumination algorithm. It works by tracking the way light is spread and diffused not only by light sources but also by the surfaces that reflect light around the scene. Radiosity is a good technique for creating natural shading, as it attempts to track all sources of light around a scene such as a room or a hallway.

Figure 4 - Comparison of a scene with and without radiosity enabled
Ray Tracing
This is a rendering technique that works by tracing the path of light through each pixel in a scene, allowing it to simulate the effects the light has when it encounters other objects in the scene. It is best suited to pre-rendered productions like animation or still images, as the method is slow and does not work well for things that need to be rendered in real time, like a video game. Applied to the right material, this technique can produce highly realistic results; however, it takes longer than most other techniques to render.

Rendering Engines
A rendering engine is what processes/renders three-dimensional graphical images based on models and textures and displays them on the screen. Video game engines like DICE's Frostbite engine may include a ray tracing engine and a radiosity engine that can produce high-quality imagery and lighting effects, as seen in their classic first-person shooter, Battlefield 3.

Figure 5 - Real-time tech demo of radiosity on the Frostbite 2 engine (XboxViewTV, 2010)

Distributed Rendering Techniques


Often used for 3D animated movies, cinematics and other productions that employ high-end rendering techniques like radiosity and ray tracing, this method was developed to reduce the rendering workload by breaking single frames into sections known as buckets, which are then sent to many different computers in what is known as a render farm. Each computer works to render one part of a frame and, when done, sends its bucket back to the main computer, which assembles the buckets into a complete frame. This method saves a great deal of time. For example, if rendering a single frame of Pixar's Toy Story had taken an average of around 16 minutes, then at 24 frames per second over the film's 103 minutes (148,320 frames in total) it would have taken roughly 1,648 days, or about 4.5 years, to render the whole movie on one computer. As you would expect, the process can be sped up simply by adding additional computers to the farm.
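The arithmetic behind that estimate, and the ideal speed-up from a farm, can be worked through directly. The per-frame time, frame rate and film length below are illustrative assumptions, not official Pixar figures:

```python
def single_machine_render_days(minutes_per_frame, fps, film_minutes):
    """Days one computer would need to render a whole film."""
    frames = film_minutes * 60 * fps          # total frames in the film
    total_minutes = frames * minutes_per_frame
    return total_minutes / 60 / 24            # minutes -> hours -> days

def farm_render_days(minutes_per_frame, fps, film_minutes, machines):
    """Ideal case: the workload split evenly across a render farm."""
    return single_machine_render_days(minutes_per_frame, fps, film_minutes) / machines
```

With 16 minutes per frame, 24 fps and a 103-minute film, the single-machine figure comes out at 1,648 days; a 100-machine farm would ideally cut that to under 17 days.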

Lighting
Lighting is the process that calculates the effect a light source will have on a model. This may also include the effect of reflections, the effect of transparent materials like glass, and where the shade on the object will fall.
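One common way this calculation works is Lambertian (diffuse) lighting, sketched below as a minimal example (both direction vectors are assumed to be unit length):

```python
def lambert_diffuse(normal, light_dir, light_intensity=1.0):
    """Lambertian diffuse lighting.

    Brightness falls off with the angle between the surface normal and
    the direction to the light (their dot product); a surface facing
    away from the light receives no diffuse contribution at all.
    """
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return light_intensity * max(0.0, dot)
```

A surface facing the light head-on gets full intensity; one facing away gets zero, which is why an unlit scene renders black.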

Textures
Textures are an important part of rendering, as they make objects feel more real and lifelike, rather than just looking like coloured shapes in a scene, by adding finer details not covered by the model. Textures are also important in that they give your eyes an idea of what is what. For example, if you had two different models of a plank of wood, but neither had any texture detailing the type of wood used, how would the viewer be able to tell them apart? Along with this, textures also affect how light is reflected off a surface, i.e. darker materials will absorb more of the light in the scene.
Fogging
This technique is used to make objects in the distance appear with less detail: they are not fully rendered, so the models become blurred and are shaded differently. Doing this enhances the viewer's perception of distance while at the same time reducing the render time for the scene.
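A simple form of this is linear fog, sketched below: a blend factor grows with distance, and each colour channel is interpolated toward the fog colour. The start/end distances are illustrative parameters.

```python
def linear_fog_factor(distance, fog_start, fog_end):
    """Blend weight of the fog colour at a given distance.

    0.0 = no fog (nearer than fog_start), 1.0 = fully fogged
    (at or beyond fog_end), linear in between.
    """
    if distance <= fog_start:
        return 0.0
    if distance >= fog_end:
        return 1.0
    return (distance - fog_start) / (fog_end - fog_start)

def apply_fog(colour, fog_colour, factor):
    # Linearly interpolate each channel toward the fog colour.
    return tuple(c * (1 - factor) + f * factor
                 for c, f in zip(colour, fog_colour))
```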

Shadowing
This, as in real life, is when an area of the scene is unaffected by a light source. Shadowing occurs when objects are placed in front of a light source, blocking the light and creating a shadow. Examples might be the space beneath tables in a room where the only source of light is a fixture on the ceiling, or buildings outside that obscure the sun, casting a shadow in the direction in which they block the light.

Vertex and Pixel Shaders


Vertex Shader
This is a GPU (Graphics Processing Unit) program responsible for manipulating each vertex's values on the X, Y and Z axes of a 3D scene through mathematical operations performed on the object. Vertex shaders are used to manipulate a range of attributes such as texture coordinates, colour, fog and point size, but cannot be used to create new vertices. Vertex shading takes place before rasterization, transforming the geometry that is then rasterized.
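At its core, the per-vertex work is a matrix multiply. The sketch below uses a 3x3 matrix for brevity (real shaders use 4x4 matrices with homogeneous coordinates, which also handle translation and projection):

```python
def transform_vertex(vertex, matrix):
    """What a vertex shader does at heart: multiply a vertex position
    by a transformation matrix, producing its new position."""
    return tuple(
        sum(matrix[row][col] * vertex[col] for col in range(3))
        for row in range(3)
    )

# Example transform: uniform scale by 2 on every axis.
scale2 = [[2, 0, 0],
          [0, 2, 0],
          [0, 0, 2]]
```

Running every vertex of a mesh through such a transform is how models are moved, rotated and scaled before rasterization.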

Pixel Shader
Pixel shaders are again GPU programs, responsible for controlling individual pixels. Pixel shaders work on fragments (candidate pixels) that are processed one at a time and take care of things like lighting values, bump mapping and shadows. Pixel shaders can also be used to add post-processing effects to a scene after it has been rasterized.

Level of Detail
These refer to techniques used to decrease the level of detail of a 3D object as it gets further away from the camera. Adjusting the level of detail makes rendering a scene much more efficient, as the overall workload is decreased along with the quality. However, the difference in overall quality tends to go unnoticed.

Geometric Theory
Vertices:
Vertices are exactly as described in geometry: points in three-dimensional space, with each individual point being called a vertex. Vertices are connected by edges, which together create meshes that can then be manipulated to change the mesh into different shapes and sizes. Vertices are also highlighted in a colour which can vary depending on the software used.

Lines
These are the lengths between two connected vertices. If the position of one vertex is changed, the line automatically adjusts to ensure the two stay connected. If one vertex is moved, the other vertex can also be moved, so long as it is selected at the same time; if it is not selected, that vertex remains static.
Curves
Curves are like lines in that they span two connected vertices, but instead of running straight between them, a curve can bend, its shape governed by additional control points between the endpoints. While they behave similarly to lines when moved, curves undergo more complex transformations when a vertex is changed.

Edges:
This is another part of a polygon, used to help define the shape of a model, but it can also be used to transform models. Edges are lines on a mesh defined by a vertex at either end. When an edge is moved, the vertices at either end move along with it. Edges, when connected, form the faces of a mesh.

Polygons:
Polygons are geometric shapes made up of three or more straight lines; these straight lines are known as edges. Polygons are defined by three-dimensional points called vertices, with a vertex at each end of every edge. The inside area of a polygon is known as a face. In 3D modelling software, polygons placed together to create shapes and objects form meshes. The more polygons used in a model, the smoother and more realistic it looks; such models are known as "high-poly" models and tend to require more graphics processing power to render and more CPU power to create. The opposite are called "low-poly" models and, as the name suggests, they use a low number of polygons. Low-poly models tend to look jagged and unrealistic but do not require the same amount of computing power as high-poly models.

Elements:
Elements are the parts of the shape which can be selected, i.e. the lines, vertices and faces.

Face
A face, exactly as described in maths, is a two-dimensional flat surface which, together with the vertices, curves and edges, makes up a mesh. For example, to create a cube you would need to connect six flat square faces together in a box shape.

Meshes:
Meshes are forms created using edges, vertices and faces. These are 3D models that can be made into any shape.

Figure 6 - Polygon mesh (Wikipedia, n.d.)

Wireframe:
Wireframes are found on meshes and are made up of the vertices and lines which define the shape. However, unlike the full mesh, wireframes lack the faces needed to create a polygon. By analogy, a wireframe is like the skeletal view of a model: only the bare bones are shown. Most, if not all, 3D modelling software lets you view a model through its wireframe, usually via a mode labelled "wireframe view".

Surfaces
The surfaces of a model are what cover the mesh. Another way of putting this would be that all the faces of a model, when joined together, create its surface.
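The relationship between faces and the wireframe can be made concrete: given a mesh's faces (as tuples of vertex indices), the wireframe is simply the set of unique edges those faces share. A minimal sketch:

```python
def wireframe_edges(faces):
    """Derive the wireframe of a mesh from its face list.

    Each face is a tuple of vertex indices. Each edge is stored with
    its endpoints sorted, so an edge shared by two faces is counted
    only once.
    """
    edges = set()
    for face in faces:
        for i in range(len(face)):
            a, b = face[i], face[(i + 1) % len(face)]
            edges.add((min(a, b), max(a, b)))
    return edges
```

Two triangles sharing one edge, for instance, yield five wireframe edges rather than six.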

Two-Dimensional Coordinate Geometry

This is a geometric setting used to calculate the position of an object on a two-dimensional plane using the X and Y Cartesian coordinates. It is commonly used within mathematics to find coordinates on a two-dimensional plane.

Three-Dimensional Coordinate Geometry

This is a geometric setting used to calculate the position of an object in three-dimensional space using the Cartesian X, Y and Z coordinates. Its use in 3D modelling allows the computer to create a three-dimensional space in which the user can build 3D models as they would appear in real life. It also allows users to look at a model from any angle, as three-dimensional space is simulated.

Cartesian Coordinates (X)

This coordinate is used to determine an object's distance from the screen and the viewer when looked at from the front. Adjusting this axis allows the user to move an object closer to or further from where the camera is positioned. For example, if a modeller creating a street wanted to move a street light further back in the scene while keeping it in line with the other street lights, they could adjust the X coordinate of that object while keeping the others the same.

Cartesian Coordinates (Y)

This coordinate is used to determine the horizontal position of an object on the screen. Adjusting the Y axis allows the user to move an object left or right in a straight line. For example, if a 3D modeller wanted to move a model of a window to the left side of the front of a house, they could use the Y axis to adjust its position.

Cartesian Coordinates (Z)

This coordinate is used to determine the vertical position of an object on screen. Adjusting the Z coordinate of an object allows the user to reduce or increase its height relative to other objects in the environment. For example, if you created a house and wanted to use the floor as the ceiling, you could copy the floor and then change the Z coordinate of the copied object, moving it up into position.
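The floor-to-ceiling example amounts to an offset along one axis. A minimal sketch, following this document's convention (X = depth, Y = left/right, Z = height); the 2.5-unit ceiling height is an assumed illustration:

```python
def translate(position, dx=0.0, dy=0.0, dz=0.0):
    """Move an object by offsetting its Cartesian coordinates."""
    x, y, z = position
    return (x + dx, y + dy, z + dz)

# Copy the floor at height 0 and raise the copy 2.5 units up the
# Z axis to form the ceiling.
floor = (0.0, 0.0, 0.0)
ceiling = translate(floor, dz=2.5)
```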

2D Coordinates
Unlike three-dimensional coordinates, two-dimensional coordinates exist on only two axes, X and Y, rather than the three (X, Y and Z) of 3D. Because of this, models made with them have no depth and appear flat.

Mesh Construction
Box modelling
This is a modelling technique in which basic common primitive shapes such as cubes, spheres and cones are used to create the basic outline of a model. After the basic outline has been created, the model is greatly expanded upon, using some of the same steps to add more detail and further develop the final model. This method gives you more control over the model and lets you refine it with lots of small details.
Extrusion modelling
This is a modelling technique similar to box modelling in that a common primitive is selected and used to form the basic outline of the model. However, this is where the similarities end: with extrusion modelling, a single object like a cube or cylinder is extruded inwards, outwards, up or down via a selected face (or faces) on the object. The drawback of extrusion modelling is that it is only good for creating basic models and becomes quite challenging to use for more complex models.

Common Primitives
The most important part of mesh shapes is the primitives they are made up of. These primitives allow the mesh to be shaped and transformed into almost anything. Here are two examples of primitives.
• Cube: made up of six faces, with twelve edges and eight vertices; the most commonly used primitive in modelling
• Pyramid: features one raised vertex (the apex) above four vertices forming a square base, with eight edges and five faces
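The vertex, edge and face counts above can be sanity-checked with Euler's formula, which holds for any simple closed polyhedron:

```python
def euler_characteristic(vertices, edges, faces):
    """Euler's formula for simple closed polyhedra: V - E + F = 2.

    A quick way to check that a primitive's counts are consistent.
    """
    return vertices - edges + faces

# Counts from the list above.
cube_ok = euler_characteristic(8, 12, 6) == 2      # cube: 8 - 12 + 6
pyramid_ok = euler_characteristic(5, 8, 5) == 2    # pyramid: 5 - 8 + 5
```

Both primitives satisfy the formula, which is a useful check when counting the parts of a new mesh by hand.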

3D Development Software
Software
Maya
Maya is a 3D computer graphics application first released in 1998 by Alias Systems but since taken over and developed by Autodesk. The software is often used in creating video games, animated movies, TV series and visual effects. Maya is most notable for its use in the TV and film industry, as it offers the ability to create great detail with a range of different components. The software is quite expensive, costing from £204 a month to £1,644 a year, and there is also an option to pay £4,932 for three years. However, for the price you get high-quality tools used by professionals in the industry. There is also a student version of the software which is free for three years.

3DS Max
The original version of this software, known as 3D Studio, was released in 1990 by Autodesk. The software is used for making 3D animation, models, games and images. 3ds Max has seen heavy use in the video game industry thanks to its excellent modelling capabilities, high-end rendering and realistic 3D animation. A popular game that used this software is The Witcher 3: Wild Hunt. Also, as the software supports plug-ins, users can install extras such as additional shaders or meshes. This software is the same as Maya in terms of price and also offers a free student version for three years. And, just like Maya, the price is worth it for the high-end, professional tools you are getting.
Cinema 4D
First released in 1990, Cinema 4D is a 3D modelling, animation, motion graphics and rendering application available for Windows and macOS. It is commonly used as an industry standard for creating 3D models for use in video editing; this has only become more prominent since the developers teamed up with Adobe to offer a lite version of Cinema 4D for use with Adobe After Effects. Compared to Maya this software is very cheap, as you can purchase the current version outright for only £550, and given what you get with it, that is a pretty good price. There is also the option of a short-term licence for only £120. Unlike Maya, there is no student version available, but there is a trial version which gives you the full software with all its functions for 42 days.

File Formats
3ds
As its name suggests, .3ds is one of the file formats used by the 3D modelling, animation and rendering software Autodesk 3ds Max. The format allows you to import and export data such as geometry (e.g. the positions of vertices), textures and lighting information. Alongside .obj, .3ds has become an industry standard for importing and exporting 3D models.

.c4d
A .c4d file is used by the 3D modelling, animation, motion graphics and rendering application Cinema 4D. This file type stores three-dimensional models and scenes, including all the objects, positions, rotations, pivot points, meshes and animation information in a scene. C4D files can also be exported to image/video editing software such as Adobe Photoshop or Adobe After Effects.
.obj
An .obj file is a file format created as an easy way to move geometry between different 3D development applications. These files contain data representing 3D geometry, such as the positions of vertices, lines, edges, curves and the faces that make up each polygon. Many 3D design packages have adopted this file type, making it the most common interchange format between different software.
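Part of why .obj travels so well is that it is plain text. A minimal reader for the two most common record types, 'v' (vertex position) and 'f' (face, as 1-based vertex indices; any texture/normal indices after '/' are ignored), might look like this:

```python
def parse_obj(text):
    """Read vertices and faces from Wavefront .obj text.

    Returns (vertices, faces): vertices as (x, y, z) tuples, faces as
    tuples of 0-based vertex indices.
    """
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(float(p) for p in parts[1:4]))
        elif parts[0] == "f":
            # .obj indices are 1-based; keep only the vertex index.
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

# A single triangle in .obj form.
triangle = """\
v 0 0 0
v 1 0 0
v 0 1 0
f 1 2 3
"""
```

A full .obj parser would also handle texture coordinates (`vt`), normals (`vn`), negative indices and material references, but the structure is the same.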

Plug-ins
These components allow the user to add extensions to existing software. Plug-ins let users add specific features that the software being used does not currently offer. Most, if not all, 3D software has a wide range of available plug-ins, which can vary from scripts to meshes.

Constraints
Polygon Count
One major constraint of 3D modelling and its applications is the polygon count a given model can have; the higher a model's polygon count, the more detail it can carry. Such models are called high-poly models, and while a high-end system can run them, the issue starts when many high-poly models appear in one scene. While high-end systems, which can cost over a thousand pounds to build, may be able to run lots of high-poly models at any given time, most other computers will undoubtedly experience performance issues. Along with this, high-poly models take much longer to render, especially when there are multiple high-poly models in a scene. While there are, as mentioned earlier, methods to counteract this and put less strain on the computer, much modern hardware, while powerful, is simply not up to the task of running lots of high-poly models in a scene.

Rendering Time
Another constraint of 3D modelling is the time it takes to render a model, especially one with a high polygon count. Various other factors, such as radiosity computed during the render, add further time as more detail and quality go into the final image, drastically increasing the render time. In addition, a more detailed and polished render of a scene demands high-end hardware, as it is very intensive on the GPU, CPU and RAM (Random Access Memory).

References
Fudge, M. (2016). Retrieved from https://www.artstation.com/artwork/dQqVK
Retrieved from http://www.realucs.co.uk/#
XboxViewTV. (2010). Retrieved from YouTube: https://www.youtube.com/watch?v=T1Gv6URIlNU
Wikipedia. (n.d.). Polygon mesh. Retrieved from https://en.wikipedia.org/wiki/Polygon_mesh
Wikipedia. (n.d.). Three-dimensional space. Retrieved from https://en.wikipedia.org/wiki/Three-dimensional_space
Wikimedia Commons. (n.d.). 2D Cartesian Coordinates. Retrieved from https://commons.wikimedia.org/wiki/File:2D_Cartesian_Coordinates.svg
