
COMPUTER GRAPHICS WITH C++ November 7, 2015

Brief Contents
Chapter 1: Introduction to interactive computer graphics
1.1. Introduction
1.2. Brief history of computer graphics
1.3. 3D graphics techniques and terminologies
1.4. Common uses of computer graphics
1.5. Examples of application areas

Chapter 2: Graphics hardware
2.1. Introduction
2.2. Raster Display System
2.3. Introduction to 3D graphics pipeline
2.4. The z-Buffer for hidden surface removal

Chapter 3: Geometry and Line Generation
3.1. Line drawing Algorithm
3.1.1. DDA line drawing algorithm
3.1.2. Bresenham's line and circle generating algorithm
3.2. Line thickness and Line style
3.3. Polygons and Filling
3.4. Text and Characters

Chapter 4: Geometrical Transformations
4.1. Introduction
4.2. 3D transformations
4.3. Matrix Representation of transformations
4.4. Homogeneous coordinates
4.5. Combination of transformations

Edited and Prepared by Mekuria Gemechu (Software Engineering Department, Wolkite University)

CHAPTER ONE: INTRODUCTION TO INTERACTIVE COMPUTER GRAPHICS

1.1. Introduction
The term computer graphics includes almost everything on computers that is not text or sound. Today
almost every computer can do some graphics, and people have even come to expect to control their
computer through icons and pictures rather than just by typing. We can think of computer graphics as
drawing pictures on computers, also called rendering. The pictures can be photographs, drawings, movies,
or simulations: pictures of things that do not yet exist and maybe could never exist, or pictures from
places we cannot see directly, such as medical images from inside your body. Much of the field is
concerned with improving the way computer pictures can simulate real-world scenes.
Computer graphics may be defined as a pictorial representation or graphical representation of
objects in a computer.
Computer graphics involves the display, manipulation and storage of pictures and experimental data for
proper visualization using a computer. In other words, computer graphics means drawing pictures on a
computer screen: pictures and movies created using computers, usually referring to image data created by
a computer with help from specialized graphical hardware and software.
A typical graphics system comprises a host computer with a fast processor, large memory and a frame
buffer, together with:
Display devices (color monitors),
Input devices (mouse, keyboard, joystick, touch screen, trackball),
Output devices (LCD panels, laser printers, color printers, plotters, etc.),
Interfacing devices such as video I/O, TV interface, etc.

The conceptual model of an interactive graphics system is given in Figure 1.1. At the hardware level (not
shown in the picture), a computer receives input from interaction devices and outputs images to a display
device. The software has three components. The first is the application program; it creates, stores into,
and retrieves from the second component, the application model, which represents the graphics primitives
to be shown on the screen. The application program also handles user input. It produces views by sending
to the third component, the graphics system, a series of graphics output commands that contain both a
detailed geometric description of what is to be viewed and the attributes describing how the objects
should appear. After the user input is processed, it is sent to the graphics system for actually producing
the picture. Thus the graphics system is a layer between the application program and the display hardware
that effects an output transformation from objects in the application model to a view of the model.

Figure 1.1. Conceptual framework for interactive graphics


Computer graphics system could be active or passive. In both cases the input to the system is scene
description and output is a static or animated scene to be displayed. In case of active systems, the user
controls the display with the help of a GUI, using an input device.

Computer graphics is now-a-days, a significant component of almost all systems and applications of
computers in every field of life.

1.2. Brief history of computer graphics


The first computers consisted of rows and rows of switches and lights. Technicians and engineers worked
for hours, days, or even weeks to program these machines and read the results of their calculations.
Patterns of illuminated bulbs conveyed useful information to the computer users, or some crude printout
was provided. You might say that the first form of computer graphics was a panel of blinking lights. (This
idea is supported by stories of early programmers writing programs that serve no useful purpose other
than creating patterns of blinking and chasing lights!)

Times have changed. From those first thinking machines, as some called them, sprang fully
programmable devices that printed on rolls of paper using a mechanism similar to a teletype machine.
Data could be stored efficiently on magnetic tape, on disk, or even on rows of hole-punched paper or
stacks of paper-punch cards. The hobby of computer graphics was born the day computers first started
printing. Because each character in the alphabet had a fixed size and shape, creative programmers in the
1970s took delight in creating artistic patterns and images made up of nothing more than asterisks (*).

Going Electric
Paper as an output medium for computers is useful and persists today. Laser printers and color inkjet
printers have replaced crude ASCII art with crisp presentation quality and photographic reproductions of
artwork. Paper and ink, however, can be expensive to replace on a regular basis, and using them
consistently is wasteful of our natural resources, especially because most of the time we don't really need
hard-copy output of calculations or database queries.

The cathode ray tube (CRT) was a tremendously useful addition to the computer. The original computer
monitors, CRTs, were initially just video terminals that displayed ASCII text, just like the first paper
terminals, but CRTs were perfectly capable of drawing points and lines as well as alphabetic characters.
Soon, other symbols and graphics began to supplement the character terminal. Programmers used
computers and their monitors to create graphics that supplemented textual or tabular output. The first
algorithms for creating lines and curves were developed and published; computer graphics became a
science rather than a pastime.

The first computer graphics displayed on these terminals were two-dimensional, or 2D. These flat lines,
circles, and polygons were used to create graphics for a variety of purposes. Graphs and plots could
display scientific or statistical data in a way that tables and figures could not. More adventurous
programmers even created simple arcade games such as Lunar Lander and Pong using simple graphics
consisting of little more than line drawings that were refreshed (redrawn) several times a second.

The term real-time was first applied to computer graphics that were animated. A broader use of the word
in computer science simply means that the computer can process input as fast as or faster than the input is
being supplied. For example, talking on the phone is a real-time activity in which humans participate.


You speak, and the listener hears your communication immediately and responds, allowing you to hear
immediately and respond again, and so on. In reality, there is some delay involved due to the electronics,
but the delay is usually imperceptible to those having the conversation. In contrast, writing a letter or an
e-mail is not a real-time activity. Applying the term real-time to computer graphics means that the
computer is producing an animation or a sequence of images directly in response to some input, such as
joystick movement or keyboard strokes. Real-time computer graphics can display a wave form being
measured by electronic equipment, numerical readouts, or interactive games and visual simulations.

Going 3D
The term three-dimensional, or 3D, means that an object being described or displayed has three
dimensions of measurement: width, height, and depth. An example of a two-dimensional object is a piece
of paper on your desk with a drawing or writing on it, having no perceptible depth. A three-dimensional
object is the can of soda next to it. The soft drink can is round (width and depth) and tall (height).
Depending on your perspective, you can alter which side of the can is the width or height, but the fact
remains that the can has three dimensions.

For centuries, artists have known how to make a painting appear to have real depth. A painting is
inherently a two-dimensional object because it is nothing more than canvas with paint applied. Similarly,
3D computer graphics are actually two-dimensional images on a flat computer screen that provide an
illusion of depth, or a third dimension. What makes a rendered object such as a cube look three-dimensional
is perspective, the angles between the lines that lend the illusion of depth.

1.3. 3D graphics techniques and terminologies


3D graphics terminologies are the terms used during rendering. There are also processes or techniques
that are used during rendering, so let us look at them in detail.
1.3.1. 3D graphics terminologies
The process by which mathematical and image data is transformed into a three-dimensional image is called
rendering. When used as a verb, it is the process that your computer goes through to create the three-
dimensional image. Rendering is also used as a noun, simply to refer to the final image produced. Now
let's take a look at some of the other terms and processes that take place during rendering.

Transformations and Projections


Transformation is the process of introducing changes in the shape, size and orientation of an object using
scaling, rotation, reflection, shearing, translation, etc.
The process of converting the description of objects from world coordinates to viewing coordinates is
known as the viewing transformation, and the production of the 2D display of the 3D scene is called projection.
Rasterization
The actual drawing, or filling in of the pixels between each vertex to make the lines is called rasterization.

Texture Mapping
A texture is simply a picture that we map to the surface of a triangle or polygon. Textures are fast and
efficient on modern hardware, and a single texture can reproduce a surface that might take thousands or
even millions of triangles to represent otherwise.
Blending
Blending allows us to mix different colors together. A reflection effect, for example, can be done simply
by drawing the cube upside down first. Then we draw the floor blended over the top of it, followed by the
right-side-up cube. You really are seeing through the floor to the inverted cube below; your brain just
says, "Oh, a reflection." Blending is also how we make things look transparent.
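For readers who want to see what this looks like in code: in OpenGL (mentioned again later in this chapter), standard alpha blending is switched on with two calls. This is only an illustrative sketch, not code from this text, and it assumes an OpenGL rendering context and headers are already available.

#include <GL/gl.h>   // assumes an OpenGL context has already been created

// Illustrative only: enable standard alpha blending so that incoming fragments
// are mixed with what is already in the frame buffer, weighted by their alpha value.
void enableAlphaBlending()
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}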

Connecting the Dots


That is pretty much computer graphics in a nutshell. Solid 3D geometry is nothing more than connecting
the dots between vertices and then rasterizing the triangles to make objects solid. Transformations,
shading, texture, and blending: any computer-rendered scene you see in a movie, video game, or
scientific simulation is made up of nothing more than various applications of these four things.

Scan Conversion
The process of converting basic, low level objects into their corresponding pixel map representation.

1.3.2. 3D graphics techniques


Typically, graphics packages allow the user to specify which part of the defined picture is to be displayed and
where that part is to be placed on the display device. Any convenient Cartesian coordinate system,
referred to as the world coordinate reference frame, can be used to define the picture. For a 2D picture, a
view is selected by specifying a sub-area of the total picture area. A user can select a single area for
display, or several areas could be selected for simultaneous display or for an animated panning sequence
across a scene. The picture parts within the selected areas are mapped onto the specified areas of the
device coordinates. When multiple view areas are selected, these areas can be placed in separate display
locations, or some areas could be inserted into other, larger display areas. Transformation from world to
device coordinates involves translation, rotation and scaling operations, as well as procedures for
deleting those parts of the picture that are outside the limits of the selected display areas.

1.3.2.1. Windows and Viewports


A world coordinate area selected for display is called a window. An area on a display device to which a
window is mapped is called a viewport. The window defines what is to be viewed; the viewport defines
where it is to be displayed. Often, window and viewport are rectangles in standard position, with the
rectangle edges parallel to the coordinate axes. In general, mapping of a part of the world coordinate
scene to device coordinates is simply referred to as a viewing transformation. Sometimes the 2D viewing
transformation is referred to as the window-to-viewport transformation or the windowing transformation.
But, in general, viewing involves more than just the transformation from window to viewport.

In computer graphics terminology, the term window originally referred to the area of a picture that is
selected for viewing. Unfortunately, the same term is now used in window-manager systems to refer to any
rectangular screen area that can be moved about, resized, and made active or inactive. In this chapter we
only use the term window to refer to an area of a world coordinate scene that has been selected for display.
1.3.2.2. Window-To-Viewport Coordinate transformation
Once object descriptions have been transferred to the viewing reference frame, we choose the window
extents in viewing coordinates and select the viewport limits in normalized coordinates. Object
descriptions are then transferred to normalized device coordinates. We do this using a transformation that
maintains the same relative placement of objects in normalized space as they had in viewing coordinates.
If a coordinate position is at the center of the viewing window, for instance, it will be displayed at the
center of the viewport.
If (xwmin, ywmin) and (xwmax, ywmax) are the bottom-left and top-right corners of the window, and if (xvmin,
yvmin) and (xvmax, yvmax) are those for the viewport, then the window-to-viewport scaling transformation is
given by:
sx = (xvmax-xvmin)/(xwmax-xwmin)
sy = (yvmax-yvmin)/(ywmax-ywmin)
where sx and sy are the scaling factors along the x and y axes respectively.

Example: Consider a window with (2,3) and (6,8) as the bottom-left and top-right corner coordinates, and
(1,2) and (4,5) as those for a viewport. Find the values of the scaling factors that are needed to transform
the given window into the viewport.
Solution: for our given example
xwmin=2, ywmin=3, xwmax=6, ywmax=8
xvmin=1, yvmin=2, xvmax=4, yvmax=5

We know that scaling factor


sx = (xvmax-xvmin)/(xwmax-xwmin)
sx = (4-1)/(6-2)=3/4
And
sy = (yvmax-yvmin)/(ywmax-ywmin)
sy = (5-2)/(8-3)=3/5
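The same computation is easy to express in C++. The following sketch is illustrative only (the function name and parameters are not from this text); it computes sx and sy exactly as above and also applies the viewport offset to map an individual point.

#include <iostream>

// Map a point (xw, yw) from the window to the viewport: scale relative to the
// window corner, then translate to the viewport corner (illustrative helper).
void windowToViewport(float xw, float yw,
                      float xwmin, float ywmin, float xwmax, float ywmax,
                      float xvmin, float yvmin, float xvmax, float yvmax,
                      float &xv, float &yv)
{
    float sx = (xvmax - xvmin) / (xwmax - xwmin);
    float sy = (yvmax - yvmin) / (ywmax - ywmin);
    xv = xvmin + (xw - xwmin) * sx;
    yv = yvmin + (yw - ywmin) * sy;
}

int main()
{
    float xv, yv;
    // Window (2,3)-(6,8) mapped to viewport (1,2)-(4,5), as in the example above:
    windowToViewport(4.0f, 5.5f, 2, 3, 6, 8, 1, 2, 4, 5, xv, yv);
    std::cout << "(" << xv << ", " << yv << ")\n";   // prints (2.5, 3.5), the viewport centre
    return 0;
}

The window centre (4, 5.5) maps to the viewport centre (2.5, 3.5), illustrating that relative placement is preserved.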

1.3.2.3. Clipping
Generally, any procedure that identifies those portions of a picture that are either inside or outside of a
specified region of space is referred to as a clipping algorithm, or simply clipping. The region against
which an object is clipped is called a clip window.
For the viewing transformation, we want to display only those picture parts that are within the window
area; everything outside the window is discarded. Clipping algorithms can be applied in world coordinates,
so that only the contents of the window interior are mapped to device coordinates.

Following three situations may occur:


Case 1: The object may be completely outside the window.
Case 2: The object may be seen partially i.e., some part of it may lie outside the window.
Case 3: The object may be completely seen within the window.


The process of clipping eliminates the portion of the object which does not contribute to the final image.
For the above case it can be clearly seen that case 1 and case 2 demand clipping while in case 3 no
clipping is needed.

1.3.2.4. Point Clipping


Assuming that the clip window is a rectangle in standard position, we save a point P(x, y) for display if the
following inequalities are satisfied:
xwmin <= x <= xwmax
ywmin <= y <= ywmax
If any one of these four inequalities is false, the point is outside the window and it is not displayed.
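As a minimal C++ sketch of this test (names are illustrative, not from the text):

// Returns true when point (x, y) lies inside the rectangular clip window,
// i.e., when all four inequalities above are satisfied.
bool clipPoint(float x, float y,
               float xwmin, float ywmin, float xwmax, float ywmax)
{
    return xwmin <= x && x <= xwmax &&
           ywmin <= y && y <= ywmax;
}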

1.3.2.5. Line Clipping


All line segments fall into one of the following clipping categories:
1. Visible: Both end points of the line lie within the window.
2. Invisible: The line lies completely outside the window.
3. Partially visible (or clipping candidate): A line is partially visible when some part of it lies
within the window.

Cohen-Sutherland Line Clipping Algorithm


This is one of the oldest and most popular line-clipping procedures. It is an efficient procedure for
determining the category of a line segment. The algorithm proceeds in three steps:
Step 1: Assign a four-bit code to each end point of the line. Each end point lies in one of the nine
regions of the plane formed by extending the window edges.


The 4-bit code is assigned as follows:

Bit 4 Bit 3 Bit 2 Bit 1


Top Bottom Right Left
Each bit of the code is set to 0 or 1 according to the following rules:
Bit 1 is set to 1 if point P lies to the left of the window
Bit 2 is set to 1 if point P lies to the right of the window
Bit 3 is set to 1 if point P lies below (to the bottom of) the window
Bit 4 is set to 1 if point P lies above (to the top of) the window
A point with region code 0000 is inside the window.

Step 2: The line is completely visible if both end point region codes are 0000; it is completely invisible if
the logical AND (bit-by-bit multiplication) of the two region codes is not 0000; and it is a candidate for
clipping if the logical AND of the region codes is 0000 but the codes themselves are not both 0000.

Step 3: If the line is candidate for clipping, find the intersection of the line with the four boundaries of the
window and check the visible portion of the line.
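Step 1 and Step 2 translate directly into code. The following C++ sketch is illustrative only (the constant and function names are assumptions, not from this text); it computes the four-bit region code of a point and notes the trivial accept/reject tests of Step 2.

// Region-code bits, following the table above: bit 1 = left, bit 2 = right,
// bit 3 = bottom, bit 4 = top.
const int LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8;

int regionCode(float x, float y,
               float xwmin, float ywmin, float xwmax, float ywmax)
{
    int code = 0;
    if (x < xwmin) code |= LEFT;
    if (x > xwmax) code |= RIGHT;
    if (y < ywmin) code |= BOTTOM;
    if (y > ywmax) code |= TOP;
    return code;
}

// Step 2, given the codes c1 and c2 of the two end points:
//   (c1 | c2) == 0  -> completely visible,
//   (c1 & c2) != 0  -> completely invisible,
//   otherwise       -> candidate for clipping (go to Step 3).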

Example: Given a clipping window A(5, 5), B(15,5), C(15, 20), D(5, 20). Using Cohen-Sutherland line
clipping algorithm, find the visible portion of line with end points P(7, 7) and Q(12, 18).
Solution: End point region code of P is 0000, and end point region code of Q is also 0000.
Since both end points region codes are (0000), the line is completely visible. The visible portion of the
line is from P(7, 7) to Q(12, 18).

Example: Given a clipping window A(5, 5), B(15,5), C(15, 20), D(5, 20). Using Cohen-Sutherland line
clipping algorithm, find the visible portion of line with end points P(3, 10) and Q(3, 20).
Solution: End point region code of P is 0001, and end point region code of Q is also 0001. The line is not
completely visible. Now we have to test whether the line is partially visible or totally invisible. To do this
we need to find the logical AND (bit-by-bit multiplication) of both end point region codes:

0001
Logical AND 0001
0001
Conclusion: Since the result is not 0000, the line is totally invisible.

Example: Given a clipping window A(5, 5), B(20, 5), C(20, 15), D(5, 15). Using Cohen-Sutherland line
clipping algorithm, find the visible portion of line with end points P(3, 7) and Q(15, 20).
Solution: End point region code of P is 0001, and end point region code of Q is 1000.

Since both end point region codes are not (0000), the line is not completely visible. Now we have to test
whether the line is partially visible or not. To do this we need to find the logical AND (bit-by-bit
multiplication) of both end point region codes:
0001
Logical AND 1000
0000
Since the result is 0000, the line may be partially visible.
We can use the two-point form to find the equation of the given line:
y - y1 = [(y2 - y1)/(x2 - x1)] (x - x1)
After substituting the values of (x1, y1) = (3, 7) and (x2, y2) = (15, 20), we get

y - 7 = [(20 - 7)/(15 - 3)] (x - 3)
y - 7 = (13/12)(x - 3)
12y - 84 = 13x - 39

13x - 12y = -45

The above equation is the equation of the given line.

(1) Intersection of the line with the left edge (x = 5) of the window:

13*5 - 12y = -45
y = 110/12 ≈ 9.17 ≈ 9, which is greater than ywmin (5) and less than ywmax (15), hence this intersection
point is accepted.

(2) Intersection of the line with the right edge (x = 20) of the window:
13*20 - 12y = -45
y = 305/12 ≈ 25.42 ≈ 25, which is greater than ywmax (15), hence this intersection point is rejected.

(3) Intersection of the line with the bottom edge (y = 5) of the window:
13x - 12*5 = -45
x = 15/13 ≈ 1.15 ≈ 1, which is less than xwmin (5), hence this intersection point is rejected.

(4) Intersection of the line with the top edge (y = 15) of the window:

13x - 12*15 = -45
x = 135/13 ≈ 10.38 ≈ 10, which is greater than xwmin (5) and less than xwmax (20), hence this intersection
point is accepted.


Conclusion: Since the two intersection points are accepted the visible portion of the line is from (5,9) to
(10, 15).

Midpoint Subdivision Line Clipping algorithm


An alternative to finding intersection points by equation solving is based on the bisection method of
numerical analysis. The line segment is divided at its midpoint into two smaller line segments. The
clipping categories of the new line segments are then determined. Each segment in category 3 (partially
visible) is divided again into smaller segments and categorized. The bisection and categorization process
continues until all segments are in category 1 (visible) or category 2 (invisible). The midpoint coordinates
(xm, ym) of a line segment joining P1(x1, y1) and P2(x2, y2) are given by:
xm = (x1 + x2)/2
ym = (y1 + y2)/2
The algorithm is formalized in the following three steps:
for each end point:

Step 1: If the end point is visible, then process is complete. If not, continue.
Step 2: If the line is trivially invisible, no output is generated. The process is complete. If not, continue.
Step 3: Divide the line P1P2 at its midpoint Pm. Apply the previous tests to the two segments P1Pm and
PmP2. If PmP2 is rejected as trivially invisible, the midpoint is an overestimation of the furthest visible
point; continue with P1Pm. Otherwise, the midpoint is an underestimation of the furthest visible point;
continue with PmP2. If a segment becomes so short that its midpoint coincides, to the accuracy of the
machine or of the display, with its end points, evaluate the visibility of that point and the process is complete.
Note: The midpoint algorithm uses only integer values.
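A compact C++ sketch of the subdivision loop is given below. It is illustrative only: regionCode is the helper sketched for the Cohen-Sutherland algorithm, drawLine stands for any available line-drawing primitive, and integer midpoints are used as stated in the note above.

#include <cstdlib>   // abs

int regionCode(float x, float y,
               float xwmin, float ywmin, float xwmax, float ywmax);   // as sketched earlier
void drawLine(int x1, int y1, int x2, int y2);                        // assumed drawing primitive

void clipMidpoint(int x1, int y1, int x2, int y2,
                  int xwmin, int ywmin, int xwmax, int ywmax)
{
    int c1 = regionCode(x1, y1, xwmin, ywmin, xwmax, ywmax);
    int c2 = regionCode(x2, y2, xwmin, ywmin, xwmax, ywmax);

    if ((c1 | c2) == 0) { drawLine(x1, y1, x2, y2); return; }   // category 1: visible
    if ((c1 & c2) != 0) return;                                 // category 2: invisible
    if (abs(x2 - x1) <= 1 && abs(y2 - y1) <= 1) return;         // too short to subdivide further

    int xm = (x1 + x2) / 2;      // integer midpoint
    int ym = (y1 + y2) / 2;
    clipMidpoint(x1, y1, xm, ym, xwmin, ywmin, xwmax, ywmax);   // category 3: bisect and repeat
    clipMidpoint(xm, ym, x2, y2, xwmin, ywmin, xwmax, ywmax);
}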

Example: Given a clipping window A(0,0), B(40,0), C(40,40), D(0,40). Using the midpoint subdivision line
clipping algorithm, find the visible portion, if any, of the line with endpoints P(-10, 20) and Q(50, 10).
Solution: Region code of P is (0001) and region code of Q is (0010)

Since both endpoint region codes are not (0000), the line is not completely visible. Now we have to test
whether the line is partially visible or not. To do this we need to find the logical AND (bit-by-bit
multiplication) of both end point region codes:
0001
Logical AND 0010
0000
Since the result is 0000, the line may be partially visible.
The midpoint is:
xm = (x1 + x2)/2 = (-10 + 50)/2 = 20
ym = (y1 + y2)/2 = (20 + 10)/2 = 15


The end point region code for the midpoint (xm, ym) is (0000). Neither segment PPm nor PmQ is either totally
visible or trivially invisible, so let us keep segment PPm for later processing and continue with PmQ.
This subdivision process continues until we find the intersection point with the window edge, i.e., (40, y).
The following table shows how the subdivision continues:

P Q Pm Comment
(-10, 20) (50, 10) (20, 15) Save PPm and continue with PmQ
(20, 15) (50, 10) (35, 12) Continue with PmQ, as PPm is totally visible
(35, 12) (50, 10) (42, 11) Continue with PPm, as PmQ is totally invisible
(35, 12) (42, 11) (38, 11) Continue with PmQ, as PPm is totally visible
(38, 11) (42, 11) (40, 11) Success. This is the intersection point of the line with the right
window edge
(-10, 20) (20, 15) (5, 17) Recall saved PPm and continue with PPm, as PmQ is totally
visible
(-10, 20) (5, 17) (-2, 18) Continue with PmQ, as PPm is totally invisible
(-2, 18) (5, 17) (1, 17) Continue with PPm, as PmQ is totally visible
(-2, 18) (1, 17) (0, 17) Success. This is the intersection point of the line with the left
window edge
Conclusion: Visible portion of line segment PQ is from (0, 17) to (40, 11).

1.4. Common uses of computer graphics


Three-dimensional graphics have many uses in modern computer applications. Applications for real-time
3D graphics range from interactive games and simulations to data visualization for scientific, medical, or
business uses. Higher-end 3D graphics find their way into movies and technical and educational
publications as well.

Real-Time 3D
As defined earlier, real-time 3D graphics are animated and interactive with the user. One of the earliest
uses for real-time 3D graphics was in military flight simulators. Even today, flight simulators are a
popular diversion for the home enthusiast.

The applications for 3D graphics on the personal computer are almost limitless. Perhaps the most
common use today is for computer gaming. Hardly a title ships today that does not require a 3D graphics
card to play. Although 3D has always been popular for scientific visualization and engineering
applications, the explosion of cheap 3D hardware has empowered these applications like never before.
Business applications are also taking advantage of the new availability of hardware to incorporate more
and more complex business graphics and database mining visualization techniques. Even the modern GUI
is being affected and has evolved to take advantage of 3D hardware capabilities. The Macintosh OS X,
for example, uses OpenGL to render all its windows and controls for a powerful and eye-popping visual
interface.
There are also many other uses of real-time 3D (interactive) graphics:


Education and Training: Computer-generated models of physical, financial and economic systems
are often used as educational aids. Models of physical systems, physiological systems, population trends
or equipment such as the color-coded diagram can help trainers to understand the operation of the system.
For some training applications, special systems are designed. Examples of such specialized systems
are the simulators for practice sessions or training of ship captains, aircraft pilots, heavy
equipment operators and air traffic-control personnel. Some simulators have no video screens, but
most simulators provide graphics screens for visual operation.
For example, in an automobile-driving simulator, it is used to investigate the behavior of drivers in
critical situations.
Cartography: Computer graphics is used to produce both accurate and schematic representations of
geographical and other natural phenomena from measurement data. Examples include geographic maps,
relief maps, exploration maps for drilling and mining, oceanographic charts, weather maps, contour maps,
and population-density maps.
User interfaces: As mentioned, most applications that run on personal computers and workstations,
and even those that run on terminals attached to time-shared computers and network computer servers,
have user interfaces that rely on desktop window systems to manage multiple simultaneous activities, and
on point-and-click facilities to allow users to select menu items, icons, and objects on the screen; typing is
necessary only to input text to be stored and manipulated. Word-processing, spreadsheet, and desktop-
publishing programs are typical applications that display such text on the screen.
(Interactive) plotting in business, science and technology: The next most common use of graphics
today is probably to create 2D and 3D graphs of mathematical, physical, and economic functions;
histograms, bar and pie charts; task-scheduling charts; inventory and production charts, and the like . All
these are used to present meaningfully and concisely the trends and patterns gleaned from data, so as to
clarify complex phenomena and to facilitate informed decision making.
Office automation and electronic publishing: The use of graphics for the creation and dissemination of
information has increased enormously since the advent of desktop publishing on personal computers.
Many organizations whose publications used to be printed by outside specialists can now produce printed
materials in house. Office automation and electronic publishing can produce both traditional printed
(hardcopy) documents and electronic (softcopy) documents; systems that allow browsing of networks of
interlinked multimedia documents are also proliferating.
Computer-aided drafting and design: In computer-aided design (CAD), interactive graphics is used to
design components and systems of mechanical, electrical, electromechanical, and electronic devices,
including structures such as buildings, automobile bodies, airplane and ship hulls, very-large-scale-
integrated (VLSI) chips, optical systems, and telephone and computer networks. Sometimes, the user
merely wants to produce precise drawings of components and assemblies, as for online drafting or
architectural blueprints. More frequently, however, the emphasis is on interacting with a computer-based
model of the component or system being designed in order to test, for example, its structural, electrical, or
thermal properties. Often, the model is interpreted by a simulator that feeds back the behavior of the
system to the user for further interactive design and test cycles. After objects have been designed, utility
programs can post-process the design database to make parts lists, to process bills of materials, to define
numerical control tapes for cutting or drilling parts, and so on.
Simulation and animation for scientific visualization and entertainment: Computer-produced
animated movies and displays of the time-varying behavior of real and simulated objects are becoming
increasingly popular for scientific and engineering visualization. We can use them to study abstract
mathematical entities as well as mathematical models of such phenomena as fluid flow, relativity, nuclear
and chemical reactions, physiological system and organ function, and deformation of mechanical
structures under various kinds of loads. Another advanced-technology area is interactive cartooning. The
simpler kinds of systems for producing flat cartoons are becoming cost-effective in creating routine in-
between frames that interpolate between two explicitly specified key frames. Cartoon characters will
increasingly be modeled in the computer as 3D shape descriptions whose movements are controlled by
computer commands, rather than by the figures being drawn manually by cartoonists. Television
commercials featuring flying logos and more exotic visual trickery have become common, as have
elegant special effects in movies. Sophisticated mechanisms are available to model the objects and to
represent light and shadows.
Art and commerce: Overlapping the previous categories is the use of computer graphics in art and
advertising; here, computer graphics is used to produce pictures that express a message and attract
attention. Personal computers and Teletext and Videotex terminals in public places and in private
homes offer much simpler but still informative pictures that let users orient themselves, make choices, or
even teleshop and conduct other business transactions. Finally, slide production for commercial,
scientific, or educational presentations is another cost-effective use of graphics, given the steeply rising
labor costs of the traditional means of creating such material.
Process control: Whereas flight simulators or arcade games let users interact with a simulation of a real
or artificial world, many other applications enable people to interact with some aspect of the real world
itself. Status displays in refineries, power plants, and computer networks show data values from sensors
attached to critical system components, so that operators can respond to problematic conditions. For
example, military commanders view field data (number and position of vehicles, weapons launched,
troop movements, casualties) on command and control displays to revise their tactics as needed; flight
controllers at airports see computer-generated identification and status information for the aircraft blips on
their radar scopes, and can thus control traffic more quickly and accurately than they could with the
raw radar data alone; spacecraft controllers monitor telemetry data and take corrective action as
needed.

Non-Real-Time 3D (passive graphics)


Passive (non-interactive) computer graphics operates automatically, without operator intervention. Non-
interactive computer graphics involves one-way communication between the computer and the user: a
picture is produced on the monitor, and the user does not have any control over the produced picture.
1.5. Examples of application areas
1. GUI- Graphical user Interface:- typical components used are:
Menus
Icons
Cursors
Dialog boxes
Scroll bars

There are also few other components which could also be used such as:

Buttons, Valuators, Grids, Sketching, and 3D interface.


2. Plotting in business
3. Office automation


4. Desktop publishing
5. Plotting in science and technology
6. Web/business/commercial publishing and advertisements
7. CAD/CAM design (VLSI, Construction, Circuits)
8. Scientific visualization
9. Entertainment (movie, TV advt., Games etc)
10. Simulations
11. Cartography
12. Multimedia
13. Virtual reality
14. Process monitoring
15. Education and training
16. Digital image processing

The four major areas of computer graphics are:

1. Display of information
2. Design/Modeling
3. Simulation and
4. User Interface.


CHAPTER TWO: Graphics hardware


2.1. Introduction
The hardware devices used for the computer graphics are:
Input Devices
Keyboard, Mouse, Data tablet, Scanner, Light pen, Touch screen, Joystick
Output Devices
1. Raster devices: CRT, LCD, LED, plasma screens, printers
2. Vector devices: plotters, oscilloscopes
Various devices are available for data input on graphics workstations. Most systems have a keyboard and
one or more additional devices specially designed for interactive input. These include a mouse, trackball,
space ball, joystick, digitizers, dials, and button boxes. Some other input devices used in particular
applications are data gloves, touch panels, image scanners, and voice systems.

Z-Mouse: a small hand-held box used to position the screen cursor.

Includes three buttons
A thumbwheel on the side, a trackball on the top, and a standard mouse ball underneath
This design provides six degrees of freedom, so a spatial position and orientation can be selected
With it we can pick up an object, rotate it, and move it in any direction
Used in virtual reality and CAD systems
Joysticks
Consist of a small vertical lever mounted on a base
Used to move the cursor around the screen
The screen cursor is moved according to the distance the lever is pushed from its center position
One or two buttons are usually provided for signaling certain actions
Touch panels: Allow selecting the screen position with the touch of finger.
Three types
Optical touch panel
Electrical touch panel
Acoustical touch panel
Image scanners:
Drawings, color and black and white photos or text can be given as an input to the computer with
an optical scanning mechanism.
According to reflected light intensity the gradations of gray scale or color can be stored in
an array
Light pens: These pencil-shaped devices are used to select screen positions by detecting the light coming
from points on the CRT screen.
Data glove:
Constructed with a series of sensors that can detect hand and finger motions
The transmitting and receiving antennas can be structured as a set of three mutually
perpendicular coils, forming a three-dimensional Cartesian coordinate system.
Electromagnetic coupling between the three pairs of coils is used to provide information
about the position and orientation of the hand.
The voice-system input: can be used to initiate graphics operations or to enter data.
2.2. Raster-Scan Displays
Raster: a rectangular array of points or dots.
An image is subdivided into a sequence of (usually horizontal) strips known as "scan lines", which can be
further divided into discrete pixels for processing in a computer system.


A raster image is a collection of dots called pixels

WORKING
In a raster scan system, the electron beam is swept across the screen, one row at a time from top to
bottom. As the electron beam moves across each row, the beam intensity is turned on and off to create a
pattern of illuminated spots. The return of the beam to the left of the screen, after refreshing each scan
line, is called horizontal retrace. The return of the electron beam to the top-left corner of the screen at the
end of each frame, to begin the next frame, is called vertical retrace.

fig. Raster Scan Display


Picture definition is stored in a memory area called the refresh buffer or frame buffer.
Refresh buffer or frame buffer is memory area that holds the set of intensity values for all the screen
points.


Stored intensity values are then retrieved from the refresh buffer and "painted" on the screen one row
(scan line) at a time.

fig. Object as set of discrete points across each scan line

The quality of a raster image is determined by the total number of pixels (resolution) and the amount of
information in each pixel (color depth). In a black-and-white system, each screen point is either on or off,
so only one bit per pixel is needed to control the intensity of screen positions; such a frame buffer is called
a bitmap. High-quality raster graphics systems have 24 bits per pixel in the frame buffer (a full-color or
true-color system). Refreshing on raster-scan displays is carried out at the rate of 60 to 80 frames per
second.
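As a quick check on these figures, a 1024 x 1024 bitmap (1 bit per pixel) needs 1024 x 1024 / 8 = 131,072 bytes (128 KB) of frame-buffer memory, whereas the same resolution at 24 bits per pixel needs 1024 x 1024 x 3 bytes = 3 MB.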

Interlacing


On some raster systems (e.g., TV), each frame is displayed in two passes using an interlaced refresh
procedure. Interlacing is primarily used with slower refresh rates. It is an effective technique to avoid
flicker. (Flicker occurs on CRTs when they are driven at a low refresh rate, allowing the brightness to drop
for time intervals sufficiently long to be noticed by the human eye.)

Applications
Suited for the realistic display of scenes.
Home television and computer printers create their images basically by raster scanning. Laser
printers use a spinning polygonal mirror (or an optical equivalent) to scan across the
photosensitive drum, and paper movement provides the other scan axis.
Common raster image formats include BMP (Windows Bitmap), JPEG (Joint Photographics
Expert Group), GIF (Graphics Interchange Format) , PNG (Portable Network Graphic), PSD
(Adobe PhotoShop).

Disadvantage
To increase the size of a raster image, the pixels defining the image must be increased in either number
or size. Spreading the pixels over a larger area causes the image to lose detail and clarity.
Produces jagged lines that are plotted as discrete points.

Graphics displays are often CRT display devices, or cathode ray tube display devices.
The most commonly used display device has been the CRT monitor. Of course, there are other display
devices based on solid-state technology, such as flat-panel devices, plasma displays and organic LEDs,
but CRT monitors remain very widely used, so it is worth looking at the different types of CRT display
devices that are available.


CRT Based Monitors

This is a typical diagram that illustrates the operation of a CRT-based monitor. It shows how an electron
gun with an accelerating anode is used to generate an electron beam that displays points or pictures on the
screen. On the left-hand side of the CRT there is a heating filament, which is responsible for heating the
cathode element of the CRT; this is what generates the electrons. After the heating element heats the
cathode, electrons simply boil off from the cathode and are guided by a set of devices, all essentially
cylindrical in nature, that help to guide the electron beam towards the screen. There are three such
components: a control grid, a focusing anode and an accelerating anode. These are essentially three
cylindrical devices placed inside the cylindrical CRT, and each of the three has an independent task.
The raster display stores the display primitives, such as lines, characters, and shaded or patterned areas,
in the refresh buffer. The central processing unit typically does all the tasks of monitoring the system, as
well as executing computer graphics commands if there is no separate graphics processor. The refresh
buffer can be a part of the system memory. The video controller takes commands from the CPU through
the system bus; the commands and picture definition are held in the system memory or frame buffer, and
the video controller is the component that converts the drawing primitives and draws them on the screen
or monitor. That is the typical architecture of a simple raster system without a graphics processor.

In a raster-scan system the picture is drawn with the help of points stored in the refresh buffer, whereas a
random-scan system draws the picture with the help of lines. A more advanced graphics architecture adds
a frame buffer and a separate picture (graphics) processor to draw the screen.
In raster scan approach, the viewing screen is divided into a large number of discrete phosphor picture
elements, called pixels. The matrix of pixels constitutes the raster. The number of separate pixels in the
raster display might typically range from 256X256 to 1024X 1024. Each pixel on the screen can be made
to glow with a different brightness. Color screen provide for the pixels to have different colors as well as
brightness. In a raster-scan system, the electron beam is swept across the screen, one row at a time from
top to bottom. As the electron beam moves across each row, the beam intensity is turned on and off to
create a pattern of illuminated spots. Picture definition is stored in a memory area called the refresh buffer
or frame buffer. This memory area holds the set of intensity values for all the screen points. Stored
intensity values are then retrieved from the refresh buffer and "painted" on the screen one row (scan line)
at a time. Each screen point is referred to as a pixel or pel (shortened forms of picture element). The
capability of a raster-scan system to store intensity information for each screen point makes it well suited
for the realistic display of scenes containing subtle shading and color patterns. Home television sets

and printers are examples of other systems using raster-scan methods. Raster scan differs from random
scan in that a random-scan system is a line-drawing device or line-drawing system, like the DVST
(Direct View Storage Tube), whereas the raster display is basically a point-plotting device.


2.3. Introduction to the 3D graphics pipeline


Graphics processes generally execute sequentially. Pipelining the process means dividing it into stages.
Especially when rendering in real-time, different hardware resources are assigned for each stage. There
are three stages
Application Stage
Geometry Stage
Rasterization Stage

Application stage
Entirely done in software by the CPU
Reads data:
the world geometry database,
user input from mice, trackballs, trackers, or sensing gloves
In response to the user's input, the application stage changes the view or the scene

Geometry Stage
Per-vertex processing: modeling and viewing transformations, projection and lighting are applied to the
scene geometry, typically with hardware support.

Rasterization stage
The transformed primitives are converted into pixels (fragments) in the frame buffer; this is the filling-in
between vertices described in Chapter 1.


2.4. The z-Buffer for hidden surface removal


What are hidden surfaces?
When we view a picture containing non-transparent objects and surfaces, we cannot see those objects
that are behind the objects closer to the eye. We must remove these hidden surfaces to get a realistic
screen image. The identification and removal of these surfaces is called the hidden-surface problem.

Depth Buffer Method


Also known as the z-buffer method.
Each surface is processed separately, one pixel position at a time across the surface.


The depth values for a pixel are compared, and the closest (smallest z) surface determines the
color to be displayed in the frame buffer.
Also referred to as the depth-buffer method.
The z-buffer is like a frame buffer, but contains depths.
Proposed by Catmull in 1974.
Two buffer areas are required:
1. A depth buffer to store depth values
2. A refresh buffer to store intensity values
A depth value is stored for each surface position (x, y).
It is an image-space approach.

The z-Buffer algorithm is one of the most commonly used routines. It is simple, easy to implement, and is
often found in hardware. The idea behind it is uncomplicated: Assign a z-value to each polygon and then
display the one (pixel by pixel) that has the smallest value.
There are some advantages and disadvantages to this:
Advantages:
Simple and easy to implement, often directly in hardware.
Works in image space, so no sorting of the surfaces is required.

Disadvantages:
Takes up a lot of memory.
Can't do transparent surfaces without additional code.

For example:

Consider these two polygons (right: edge-on left: head-on)


The computer would start (arbitrarily) with polygon 1 and put its depth values into the buffer. It would do
the same for the next polygon, P2. It then checks each overlapping pixel to see which polygon is closer to
the viewer, and displays the appropriate color.

This is a simplistic example, but the basic idea is valid for polygons in any orientation and permutation:
the algorithm will properly display polygons piercing one another and polygons with conflicting depths.
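A minimal C++ sketch of the per-pixel test follows. It is illustrative only (the buffer sizes, names and the colour type are assumptions): every fragment produced while rasterizing a surface is passed through plotFragment, and only the closest fragment's colour survives at each pixel.

const int WIDTH = 640, HEIGHT = 480;
float    depthBuf[HEIGHT][WIDTH];      // depth (z) buffer
unsigned colourBuf[HEIGHT][WIDTH];     // refresh (frame) buffer

void clearBuffers()
{
    for (int y = 0; y < HEIGHT; ++y)
        for (int x = 0; x < WIDTH; ++x) {
            depthBuf[y][x]  = 1.0e30f;  // "infinitely" far away
            colourBuf[y][x] = 0;        // background colour
        }
}

void plotFragment(int x, int y, float z, unsigned colour)
{
    if (z < depthBuf[y][x]) {           // closer than what is stored so far
        depthBuf[y][x]  = z;            // remember the new depth
        colourBuf[y][x] = colour;       // and overwrite the colour
    }                                   // otherwise the fragment is hidden
}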

CHAPTER THREE: GEOMETRY AND LINE GENERATION



3.1. LINE DRAWING ALGORITHMS


In order to display any object on a computer screen, the most important information is which pixels have
to be highlighted to accomplish it. Most human operators generally think in terms of more complex
graphic objects such as cubes, cones, circles and ellipses. The process of representing continuous graphics
objects as a collection of discrete pixels is called scan conversion.

Several line drawing algorithms have been developed. Their basic objective is to produce visually
satisfactory images in the least possible time. This is achieved by reducing the calculations to a minimum,
for example by using integer arithmetic rather than floating-point arithmetic. Minimizing even a single
arithmetic operation per pixel is important, because every drawing or image generated will have a large
number of line segments in it, and every line segment will have many pixels; saving one computation per
pixel therefore saves a large number of computations in generating an object. This in turn minimizes the
time required to generate the whole image on the screen. The computer screen is divided into rows and
columns, and the intersection area of a row and a column is known as a pixel (in short, pel).

In order to draw a line on the screen, we first have to determine which pixels have to be switched on or
highlighted. The process of determining which combination of pixels provides the best approximation to
the desired line is known as rasterization.

General requirements for drawing a line

Lines must appear to be straight


Lines should start and end accurately
Lines should have constant brightness along their length
Lines should be drawn rapidly

The Cartesian slope-intercept equation for a straight line is

y=m.x+c (1)

Where m is the slope of the line, c is the y-intercept, and (x, y) is any point on the line. Suppose we are
given the two end points of a line, (xstart, ystart) and (xend, yend).

(Fig. 1.34: Line path between end point positions (xend, yend) and (xstart, ystart))

m=(y2-y1)/(x2-x1) (2)

c=y1-(m.x1) (3)

Algorithms for displaying straight lines are based on the equations (1), (2) and (3).


For any given x interval Δx along a line, we can compute the corresponding y interval Δy from equation (2)
as

Δy = m.Δx (4)

Similarly, we can obtain the x interval Δx corresponding to a specified Δy as

Δx = Δy/m (5)

We know that the general equation of a line is y = m.x + c, where m is the slope of the line and c is the y-
intercept, and (x, y) is any point on the line. We also know that the slope of the above line can be
expressed as:
m = (yend - ystart)/(xend - xstart)

DDA (Digital Differential Analyzer) algorithm


DDA is a scan-conversion line algorithm based on either Δy or Δx, using equation (4) or (5).
The DDA line drawing algorithm can be written as:

1. Input the two line end points (xa, ya) and (xb, yb)
2. Calculate
dx = xb - xa
dy = yb - ya
3. If abs(dx) >= abs(dy) then length = abs(dx) else length = abs(dy)
Δx = dx/length
Δy = dy/length
x = xa
y = ya
putpixel(ceil(x), ceil(y))
i = 1
4. While (i <= length)
x = x + Δx
y = y + Δy
putpixel(ceil(x), ceil(y))
i = i + 1
end while
5. Finish

Example: Consider a line from (2,3) to (6,6). Use DDA algorithm to rasterize this line.
Solutions: Initializations:
(xa, ya) = (2,3)
(xb, yb ) = (6,6)

dx= xb-xa=6-2=4
dx=4


dy= yb-ya=6-3=3
dy=3

Since abs(dx)>=abs(dy), length=abs(dx)=4


length=4

Δx = dx/length = 4/4 = 1
Δx = 1

Δy = dy/length = 3/4 = 0.75
Δy = 0.75

x = xa = 2
x = 2
y = ya = 3
y = 3
First pixel will be plotted at (2,3)

Plotting begins:

For i = 1
x = x + Δx = 2 + 1 = 3
y = y + Δy = 3 + 0.75 = 3.75
The next point is (3, 3.75)

For i = 2
x = x + Δx = 3 + 1 = 4
y = y + Δy = 3.75 + 0.75 = 4.5
The next point is (4, 4.5)

For i = 3
x = x + Δx = 4 + 1 = 5
y = y + Δy = 4.5 + 0.75 = 5.25
The next point is (5, 5.25)

For i = 4
x = x + Δx = 5 + 1 = 6
y = y + Δy = 5.25 + 0.75 = 6
The next point is (6, 6)

The pixel coordinates we found are


i    x    y       (x, y)
0    2    3       (2, 3)
1    3    3.75    (3, 3.75)
2    4    4.5     (4, 4.5)
3    5    5.25    (5, 5.25)
4    6    6       (6, 6)

Note: Drawing of lines is left for students

Exercises:
1. Consider a line from (0,0) to (6,7). Use DDA algorithm to rasterize this line. Then draw a line on
paper using (x,y) coordinates.
2. Consider a line from (0,0) to (-5,5). Use DDA algorithm to rasterize this line. Then draw a line on
paper using (x,y) coordinates.

Drawbacks of DDA Algorithm


1. Although DDA is fast, the accumulation of round-off error in the successive additions of the floating-
point increments can cause the calculated pixel positions to drift away from the true line path for long
line segments.
2. The floating-point operations and rounding off in DDA are time consuming.

These drawbacks have been overcome in Bresenham's line drawing algorithm.

C++ program of DDA line drawing algorithm


Following is the C++ program to implement DDA line drawing algorithm:
#include<iostream.h>
#include<conio.h>
#include<math.h>
#include<graphics.h>
class lineDDA
{
private:
int xa,ya,xb,yb;                 // line end points
int dx,dy,step,k;
float x,y,xi,yi;                 // current position and per-step increments
public:
void getdata();
void line();
};
void lineDDA::getdata()
{
cout<<"\n\tEnter xa and ya :";
cin>>xa>>ya;
cout<<"\n\tEnter xb and yb :";
cin>>xb>>yb;
}
void lineDDA::line()
{
dx=xb-xa;
dy=yb-ya;
if(abs(dx)>=abs(dy))             // the larger difference gives the number of steps
step=abs(dx);
else
step=abs(dy);
xi=(float)dx/step;               // x increment per step (may be fractional)
yi=(float)dy/step;               // y increment per step
x=xa;
y=ya;
putpixel((int)(x+0.5),(int)(y+0.5),1);   // plot the first point (rounded)
for(k=1;k<=step;k++)
{
x+=xi;
y+=yi;
putpixel((int)(x+0.5),(int)(y+0.5),1);
}
}
void main()
{
lineDDA l;
clrscr();
int gd=DETECT,gm;
initgraph(&gd,&gm,"C:/TC/BGI");
l.getdata();
l.line();
getch();
closegraph();
}

Bresenham's Line Algorithm


An accurate and efficient raster line generating algorithm, developed by Bresenham, scan converts lines
using only incremental integer calculations that can be adapted to display circles and other curves.
We can summarize Bresenham's line drawing for a line with positive slope less than 1 in the following
listed steps. The constants 2Δy and 2Δy - 2Δx are calculated once for each line to be scan converted, so the
arithmetic involves only integer addition and subtraction of these two constants.


Bresenham's line drawing algorithm for |m|<1

1. Input the two endpoints (xa, ya) and (xb, yb) and store the left endpoint in (x0, y0)
2. Load (x0, y0) into the frame buffer; that is, plot the first point
3. Calculate the constants Δx = xb - xa, Δy = yb - ya, 2Δy, and 2Δy - 2Δx, and obtain the starting value for the
decision parameter as
P0 = 2Δy - Δx
4. At each xk along the line, starting at k = 0, perform the following test:
If Pk < 0 then
xk+1 = xk + 1 and yk+1 = yk
Pk+1 = Pk + 2Δy
Otherwise, if Pk >= 0 then
xk+1 = xk + 1 and yk+1 = yk + 1
Pk+1 = Pk + 2Δy - 2Δx
5. Repeat step 4, Δx times

Example: Consider a line from (2,3) to (6,6). Use Bresenham's line drawing algorithm to rasterize this
line.
Solution: The end points of the line are (2,3) to (6,6).
(xa, ya)=(2,3)
(xb, yb)=(6,6)

Hence, x0=2, y0=3

Δx = xb - xa = 6 - 2 = 4
Δx = 4

Δy = yb - ya = 6 - 3 = 3
Δy = 3

2Δy = 2*3 = 6
2Δy = 6

2Δy - 2Δx = 2*3 - 2*4 = 6 - 8 = -2
2Δy - 2Δx = -2

The initial decision parameter p0 has the value


p0 = 2Δy - Δx = 2*3 - 4 = 2
p0 = 2

We plot the initial point (x0,y0)=(2,3) and determine successive pixel position along the line path from
the decision parameters.

(1) Since P0>=0, for k=0, we have


xk+1=xk+1
x1=x0+1=2+1=3
x1=3


yk+1=yk+1
y1=y0+1=3+1=4
y1=4

pk+1 = pk + 2Δy - 2Δx
p1 = p0 + 2Δy - 2Δx = 2 + (-2) = 0
p1 = 0

(2) Since P1>=0, for k=1, we have


xk+1=xk+1
x2=x1+1=3+1=4
x2=4

yk+1=yk+1
y2=y1+1=4+1=5
y2=5

pk+1 = pk + 2Δy - 2Δx
p2 = p1 + 2Δy - 2Δx = 0 + (-2) = -2
p2 = -2

(3) Since P2<0, for k=2, we have


xk+1=xk+1
x3=x2+1=4+1=5
x3=5

yk+1=yk
y3=y2=5
y3=5

pk+1 = pk + 2Δy
p3 = p2 + 2Δy = -2 + 6 = 4
p3 = 4

(4) Since P3>=0, for k=3, we have


xk+1=xk+1
x4=x3+1=5+1=6
x4=6

yk+1=yk+1
y4=y3+1=5+1=6
y4=6

pk+1 = pk + 2Δy - 2Δx
p4 = p3 + 2Δy - 2Δx = 4 + (-2) = 2


p4=2

Starting from the initial point (x0, y0) = (2,3), the pixel coordinates found are

k    Pk    xk+1    yk+1
0    2     3       4
1    0     4       5
2    -2    5       5
3    4     6       6

Exercises:

1. Draw a line with end points (20,10) and (30,18) using Bresenham's line drawing algorithm.

C++ program of Bresenham's line drawing algorithm


#include <graphics.h>
#include <conio.h>
#include <math.h>
#include <iostream.h>
void main(){
int x1,x2,y1,y2,i,e,x,y,dx,dy;
int gdriver = DETECT, gmode;
initgraph(&gdriver,&gmode,"C:\\TC\\BGI");
cout<<"Enter co-ordinates of point 1: ";
cin>>x1>>y1;
cout<<"Enter co-ordinates of point 2: ";
cin>>x2>>y2;
dx = abs(x2-x1);
dy = abs(y2-y1);
x=x1;
y=y1;
e = 2*dy-dx;
i=0;
do{
putpixel(x,y,WHITE);
if(e>=0){      // choose the upper pixel and reset the error term
y++;
e = e - 2*dx;
}
x++;
e = e + 2*dy;
i++;
}while(i<=dx);


getch();
closegraph();
}

Bresenham's Circles Algorithm


An accurate and efficient raster scan circle generating algorithm, developed by Bresenham, scan converts
circles using only incremental integer calculations; the same approach can be adapted to display other curves.

Bresenham's Circles Algorithm

1. Input the radius r and circle center (xc, yc), and obtain the first point on the circumference of a circle
centered on the origin as

(X0,Y0)=(0,r)

2. Calculate initial values of decision parameter as

P0=3-2*r

3. At each xk along the circle, starting at k = 0, perform the following test:

If Pk < 0 then

xk+1 = xk + 1 and yk+1 = yk

Pk+1 = Pk + 4xk+1 + 6
Otherwise, if Pk >= 0 then
xk+1 = xk + 1 and yk+1 = yk - 1
Pk+1 = Pk + 4(xk+1 - yk+1) + 10
4. Determine the symmetry points in the other seven octants
5. Move each calculated pixel position (x,y) onto the circular path centered on (xc, yc) and plot the
coordinate values as
x = x + xc and y = y + yc
6. Repeat steps 3 to 5 until x >= y.

C++ program for Bresenham's circle drawing algorithm

#include<iostream.h>
#include<conio.h>
#include<graphics.h>
void circ_bre(int x,int y,int rad);
void display(int,int,int,int);
void main()
{
int gd = DETECT, gm, x,y,r;
initgraph(&gd,&gm,"c:\\tc\\bgi");
cleardevice();
cout<<"Bresenhams circle generation algorithm ";


cout<<"\nEnter the center co-ordinates for circle ";


cin>>x>>y;
cout<<"\nEnter the radius of the circle";
cin>>r;
circ_bre(x,y,r);
getch();
closegraph();
}
void circ_bre(int x,int y,int rad)
{
float dp; //initialising the descision parameter.
int x1,y1;
x1 = 0; //initialisng the X,Y cordinates.
y1 = rad;
dp = 3 - 2*rad;
while(x1<=y1)
{
display(x1,y1,x,y);        // plot the current point in all eight octants
if(dp<0)
dp += (4 * x1) + 6;
else
{
dp += 4*(x1-y1)+10;
y1--;
}
x1++;
}
}
void display (int x1,int y1,int x,int y)
{
putpixel(x1+x,y1+y,WHITE); //plotting the pixels.
putpixel(x1+x,y-y1,WHITE);
putpixel(x-x1,y1+y,WHITE);
putpixel(x-x1,y-y1,WHITE);
putpixel(x+y1,y+x1,WHITE);
putpixel(x+y1,y-x1,WHITE);
putpixel(x-y1,y+x1,WHITE);
putpixel(x-y1,y-x1,WHITE);
}


Polygon
A polygon is any 2-dimensional shape formed with straight lines. Triangles, quadrilaterals, pentagons,
and hexagons are all examples of polygons. The name tells you how many sides the shape has. For
example, a triangle has three sides, and a quadrilateral has four sides. So, any shape that can be drawn by
connecting three straight lines is called a triangle, and any shape that can be drawn by connecting four
straight lines is called a quadrilateral.

Fig: Shapes of polygons

All of these shapes are polygons. Notice how all the shapes are drawn with only straight lines? This is
what makes a polygon. If the shape had curves or didn't fully connect, then it can't be called a polygon.
The arrow-like shape is still a polygon even though it looks like an arrow: all of its sides are straight,
and they all connect. That arrow-like shape has 11 sides.
Polygons aren't limited to the common ones we know but can get pretty complex and have as many sides
as are needed. They can have 4 sides, 44 sides, or even 444 sides. The names would be 4-gon, or
quadrilateral, 44-gon, and 444-gon, respectively. An 11-sided shape can be called an 11-gon.
A special class of polygon exists; it happens for polygons whose sides are all the same length and whose
angles are all the same. When this happens, the polygons are called regular polygons. A stop sign is an
example of a regular polygon with eight sides. All the sides are the same and no matter how you lay it
down, it will look the same. You wouldn't be able to tell which way was up because all the sides are the
same and all the angles are the same.
When a triangle has all the sides and angles the same, we know it as an equilateral triangle, or a regular
triangle. A quadrilateral with all sides and angles the same is known as a square, or regular quadrilateral.
A pentagon with all sides and angles the same is called a regular pentagon. An n-gon with sides and
angles the same is called a regular n-gon.

Fig: Regular polygons

Angles of Regular Polygons


Regular polygons also have two different angles related to them. The first is called the exterior angle; it
is the angle between one side of the shape and the extension of an adjacent side stretched out past the shape.


The exterior angle.


A polygon has as many exterior angles as it has sides. So, a pentagon with five sides has five exterior
angles, a hexagon has six exterior angles, and so on. For regular polygons we can compute the measurement of
the exterior angle, but for polygons that aren't regular we can't. For a regular polygon with n sides, each
exterior angle measures 360°/n; for example, each exterior angle of a regular pentagon is 360°/5 = 72°.


CHAPTER FOUR: GEOMETRICAL TRANSFORMATION

In a geometric transformation, the object itself is moved relative to a stationary coordinate system or
background. Transformations are a fundamental part of computer graphics.
Transformations are used:
for positioning geometric objects in 2D and 3D,
for shaping objects,
for viewing geometric objects in 2D and 3D,
for modeling geometric objects in 2D and 3D, and
even for changing how something is viewed (e.g. the type of perspective that is used).

Geometric transformations are used to fulfill two main requirements in computer graphics:
1. To model and construct scenes.
2. To navigate our way around 2 and 3 dimensional space.
For example, when a street building has n identical windows, we proceed as follows:
1. Construct a single window by means of graphics primitives;
2. Replicate the window n times;
3. Put each window at the desired location using translations and rotations.
This shows that transformations such as translations and rotations can be used as scene modeling
operations. These transformations can be also used to move a bot or an avatar in the virtual environment
of a First Person Shooter (FPS) game.

Rigid-body transformations, or Euclidean transformations, are transformations that do not change the shape
or size of the object. They are sometimes called isometries. For example:
Translate: if you translate a rectangle, it is still a rectangle.
Rotate: if you rotate a rectangle, it is still a rectangle.
There are 3 basic types of transformations that one can perform in 3 dimensions:
1. translations
2. scaling
3. rotation


These basic transformations can also be combined to obtain more complex transformations. In order to
make the representation of these complex transformations easier to understand and more efficient, we
introduce the idea of homogeneous coordinates.

4.1. The 3D transformations

With respect to some 3-D coordinate system, an object Obj is considered as a set of points:
Obj = { P(x, y, z) }
If Obj moves to a new position, the new object Obj' is considered:
Obj' = { P'(x', y', z') }

Translation
Translation means repositioning an object along a straight-line path (given by the translation distances)
from one coordinate location to another. Moving an object in this way is called a translation. We translate
an object by translating each vertex of the object:
x' = x + tx
y' = y + ty
z' = z + tz
Assume you are given a point at (x,y,z) = (2,1,3). Where will the point be if you move it 3 units to the right,
1 unit up, and 4 units in the z direction? Ans: (x',y',z') = (5,2,7). How was this obtained? (x',y',z') =
(x+3, y+1, z+4). That is, to move a point by some amount dx to the right, dy up, and dz in z, you add dx to the
x-coordinate, dy to the y-coordinate, and dz to the z-coordinate. The triple of translation distances
(tx, ty, tz) is called a translation vector or shift vector.

Rotation
In 2-D, a rotation is prescribed by an angle θ and a center of rotation P. In 3-D, rotations require the
prescription of an angle of rotation θ and an axis of rotation.
Rotation about the z axis:
x' = x cosθ - y sinθ
y' = x sinθ + y cosθ
z' = z
Rotation about the y axis:
x' = x cosθ + z sinθ
y' = y
z' = -x sinθ + z cosθ
Rotation about the x axis:
x' = x
y' = y cosθ - z sinθ
z' = y sinθ + z cosθ
Scaling
Changing the size of an object is called scaling. The scale factor s determines whether the scaling is a
magnification (s > 1) or a reduction (s < 1). For scaling with respect to the origin, where the origin remains
fixed:
x' = x · sx
y' = y · sy
z' = z · sz
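A small sketch (standard C++, rather than the Turbo C++/BGI setup used elsewhere in this text, and not part
of the original notes) that applies the three basic transformations above to a single point:

#include <iostream>
#include <cmath>

struct Point3 { double x, y, z; };

Point3 translate(Point3 p, double tx, double ty, double tz) {
    return { p.x + tx, p.y + ty, p.z + tz };
}

Point3 rotateZ(Point3 p, double theta) {            // theta in radians
    return { p.x * std::cos(theta) - p.y * std::sin(theta),
             p.x * std::sin(theta) + p.y * std::cos(theta),
             p.z };
}

Point3 scale(Point3 p, double sx, double sy, double sz) {
    return { p.x * sx, p.y * sy, p.z * sz };
}

int main() {
    Point3 p = { 2, 1, 3 };
    Point3 t = translate(p, 3, 1, 4);               // -> (5, 2, 7), as in the worked example above
    Point3 r = rotateZ(p, 3.14159265358979 / 2);    // 90-degree rotation about the z axis
    Point3 s = scale(p, 2, 2, 2);                   // uniform magnification
    std::cout << t.x << "," << t.y << "," << t.z << "\n";
    std::cout << r.x << "," << r.y << "," << r.z << "\n";
    std::cout << s.x << "," << s.y << "," << s.z << "\n";
}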
4.2. Matrix Representation
Translation:

Where tx, ty, and tz are the translation distances along the x, y, and z axes respectively.

Scaling:

Where a, b, and c are scaling factors along x, y, and z axes respectively.


A counter-clockwise rotation about the x axis:

A counter-clockwise rotation about the y axis:

A counter-clockwise rotation about the z axis:

4.3. Homogeneous Coordinates


In general, when you want to perform a complex transformation, you usually make it by combining a
number of basic transformations. Scaling is done by matrix multiplication and translation is done by
vector addition. In order to represent all transformations in the same form, computer scientists have
devised what are called homogeneous coordinates. Do not try to apply any exotic interpretation to them.


They are simply a mathematical trick to make the representation more consistent and easier to use.
Homogeneous coordinates (HC) add an extra virtual dimension. Thus 2D HC are actually 3D and 3D HC
are 4D. Consider a 2D point p = (x,y). In HC, we represent p as p = (x,y,1). An extra coordinate is added
whose value is always 1. This may seem odd but it allows us to now represent translations as matrix
multiplication instead of as vector addition.

TRANSLATION
In a three-dimensional homogeneous coordinate representation, a point is translated from position P = (x,
y, z) to position P' = (x', y', z') with the matrix operation

Parameters tx, ty, and tz, specifying translation distances for the coordinate directions x, y, and z, are
assigned any real values. The above matrix representation in is equivalent to the three equations
x'=x + tx; y'=y + ty; z'=z + tz;

An object is translated in three dimensions by transforming each of the defining points of the object. For
an object represented as a set of polygon surfaces, we translate each vertex of each surface and redraw the
polygon facets in the new position. We obtain the inverse of the translation matrix by negating the
translation distances tx, ty, and tz. This produces a translation in the opposite direction, and the product
of a translation matrix and its inverse produces the identity matrix.
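A minimal sketch (not from the original text) of the 4 x 4 homogeneous translation matrix applied to a point,
and of its inverse obtained by negating tx, ty, and tz:

#include <iostream>

// Multiply a 4x4 matrix (row-major) by a homogeneous column vector (x, y, z, 1).
void mulMatVec(const double m[4][4], const double v[4], double out[4]) {
    for (int i = 0; i < 4; i++) {
        out[i] = 0.0;
        for (int j = 0; j < 4; j++)
            out[i] += m[i][j] * v[j];
    }
}

int main() {
    double tx = 3, ty = 1, tz = 4;
    double T[4][4]    = { {1,0,0,tx},  {0,1,0,ty},  {0,0,1,tz},  {0,0,0,1} };
    double Tinv[4][4] = { {1,0,0,-tx}, {0,1,0,-ty}, {0,0,1,-tz}, {0,0,0,1} };   // inverse: negate distances
    double p[4] = { 2, 1, 3, 1 }, q[4], r[4];
    mulMatVec(T, p, q);     // q = (5, 2, 7, 1)
    mulMatVec(Tinv, q, r);  // r = the original point (2, 1, 3, 1)
    std::cout << q[0] << "," << q[1] << "," << q[2] << "\n";
    std::cout << r[0] << "," << r[1] << "," << r[2] << "\n";
}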

ROTATION
To generate a rotation transformation for an object, we must designate an axis of rotation (about which
the object is to be rotated) and the amount of angular rotation. Unlike two-dimensional applications,
where all transformations are carried out in the xy plane, a three-dimensional rotation can be specified
around any line in space. The easiest rotation axes to handle are those that are parallel to the coordinate
axes. Also, we can use combinations of coordinate axis rotations (along with appropriate translations) to
specify any general rotation. By convention, positive rotation angles produce counterclockwise rotations
about a coordinate axis, if we are looking along the positive half of the axis toward the coordinate origin.
Coordinate-Axes Rotations
The two-dimensional z-axis rotation equations are easily extended to three dimensions:
x' = x cosθ - y sinθ
y' = x sinθ + y cosθ
z' = z

In homogeneous coordinate form, the three-dimensional z-axis rotation equations are expressed as a matrix
multiplication, which we can write more compactly as:

P' = Rz(θ) · P
We get the equations for an x-axis rotation:
x' = x
y' = y cosθ - z sinθ
z' = y sinθ + z cosθ
which can be written in homogeneous coordinate form as:

P' = Rx(θ) · P

The transformation equations for a y-axis rotation are:

x' = z sinθ + x cosθ
y' = y
z' = z cosθ - x sinθ
The matrix representation for a y-axis rotation in homogeneous coordinates is written compactly as:

P' = Ry(θ) · P

SCALING
The matrix expression for the scaling transformation of a position P = (x, y, z) relative to the coordinate
origin can be written compactly as:

P' = S · P
where the scaling parameters sx, sy, and sz are assigned any positive values. Explicit expressions for the
coordinate transformations for scaling relative to the origin are:
x' = x · sx
y' = y · sy
z' = z · sz
4.4. Combination of transformations
With the matrix representations of the previous section, we can set up a matrix for any sequence
of transformations as a composite transformation matrix by calculating the matrix product of the
individual transformations. Forming a product of transformation matrices is often referred to as a
concatenation, or composition, of matrices. For the column-matrix representation of coordinate
positions, we form composite transformations by multiplying matrices in order from right to left.
That is, each successive transformation matrix pre-multiplies the product of the preceding
transformation matrices. If we want to apply a series of transformations T1, T2, and T3 to a set of points,
we can do it in two ways:


1. We can calculate P' = T1·P, P'' = T2·P', P''' = T3·P''.


2. We can calculate T = T3·T2·T1 once, and then P''' = T·P.
Method 2 saves a large number of additions and multiplications (computational time), requiring
approximately one-third as many operations; therefore we concatenate, or compose, the matrices into one
final transformation matrix and then apply it to the points.
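A small sketch (not from the original text) of method 2: build the composite matrix once, in right-to-left
order, and then apply it. The particular transformations chosen below are only illustrative examples.

#include <iostream>

// 4x4 matrix utilities for composing transformations (column-vector convention,
// so a point is transformed as P' = M · P and composition reads right to left).
struct Mat4 { double m[4][4]; };

Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 c = {};
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            for (int k = 0; k < 4; k++)
                c.m[i][j] += a.m[i][k] * b.m[k][j];
    return c;
}

Mat4 translation(double tx, double ty, double tz) {
    Mat4 t = {{ {1,0,0,tx}, {0,1,0,ty}, {0,0,1,tz}, {0,0,0,1} }};
    return t;
}

Mat4 scaling(double sx, double sy, double sz) {
    Mat4 s = {{ {sx,0,0,0}, {0,sy,0,0}, {0,0,sz,0}, {0,0,0,1} }};
    return s;
}

int main() {
    Mat4 T1 = translation(3, 1, 4);             // applied first
    Mat4 T2 = scaling(2, 2, 2);                 // applied second
    Mat4 T3 = translation(-1, 0, 0);            // applied third
    Mat4 T  = multiply(T3, multiply(T2, T1));   // composite, right to left: T = T3·T2·T1

    double p[4] = { 2, 1, 3, 1 }, q[4] = { 0, 0, 0, 0 };
    for (int i = 0; i < 4; i++)                 // q = T · p, a single matrix-vector product
        for (int j = 0; j < 4; j++)
            q[i] += T.m[i][j] * p[j];
    std::cout << q[0] << "," << q[1] << "," << q[2] << "\n";   // -> (9, 4, 14), same as applying T1, T2, T3 in turn
}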


CHAPTER FIVE: STATE MANAGEMENT AND DRAWING GEOMETRIC OBJECTS


"Displaying Points, Lines, and Polygons" explains what control you have over the details of how
primitives are drawn; for example, what diameter points have, whether lines are solid or dashed, and
whether polygons are outlined or filled.
"Normal Vectors" discusses how to specify normal vectors for geometric objects and (briefly) what
these vectors are for.
"Vertex Arrays" shows you how to put lots of geometric data into just a few arrays and how, with only a
few function calls, to render the geometry it describes. Reducing function calls may increase the
efficiency and performance of rendering.
5.1. Basic State Management
OpenGL maintains many states and state variables. An object may be rendered with lighting, texturing,
hidden surface removal, fog, or some other states affecting its appearance. By default, most of these states
are initially inactive. These states may be costly to activate; for example, turning on texture mapping will
almost certainly slow down the speed of rendering a primitive. However, the quality of the image will
improve and look more realistic, due to the enhanced graphics capabilities.
To turn on and off many of these states, use these two simple commands:
void glEnable(GLenum cap );
void glDisable(GLenum cap);
glEnable() turns on a capability, and glDisable() turns it off. There are over 40 enumerated values that can
be passed as a parameter to glEnable() or glDisable(). Some examples of these are GL_BLEND (which
controls blending RGBA values), GL_DEPTH_TEST (which controls depth comparisons and updates to
the depth buffer), GL_FOG (which controls fog), GL_LINE_STIPPLE (patterned lines), GL_LIGHTING
(you get the idea), and so forth.
You can also check if a state is currently enabled or disabled.
GLboolean glIsEnabled(GLenum capability)
Returns GL_TRUE or GL_FALSE, depending upon whether the queried capability is currently activated.
The states you have just seen have two settings: on and off. However, most OpenGL routines set values
for more complicated state variables. For example, the routine glColor3f() sets three values,
which are part of the GL_CURRENT_COLOR state. There are five querying routines used to find out
what values are set for many states:
void glGetBooleanv(GLenum pname, GLboolean * params);
void glGetIntegerv(GLenum pname , GLint *params);


void glGetFloatv(GLenum pname, GLfloat * params);


void glGetDoublev(GLenum pname, GLdouble * params);
void glGetPointerv (GLenum pname , GLvoid ** params);
These obtain Boolean, integer, floating-point, double-precision, or pointer state variables. The pname
argument is a symbolic constant indicating the state variable to return, and params is a pointer to an array of
the indicated type in which to place the returned data.
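The following is a minimal illustrative fragment, not taken from the text; it assumes an OpenGL rendering
context has already been created (for example with GLUT) and simply shows glEnable()/glDisable(),
glIsEnabled() and glGetFloatv() in use.

#include <GL/gl.h>
#include <iostream>

void configureState()
{
    glEnable(GL_DEPTH_TEST);            // turn on depth comparisons and depth-buffer updates
    glEnable(GL_LINE_STIPPLE);          // turn on patterned lines
    glDisable(GL_FOG);                  // make sure fog is off

    if (glIsEnabled(GL_DEPTH_TEST))     // query a simple on/off capability
        std::cout << "Depth testing is enabled\n";

    GLfloat color[4];                   // query a more complicated state variable
    glGetFloatv(GL_CURRENT_COLOR, color);
    std::cout << "Current color: " << color[0] << " " << color[1] << " "
              << color[2] << " " << color[3] << "\n";
}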
5.2. Displaying Points, Lines, and Polygons
By default, a point is drawn as a single pixel on the screen, a line is drawn solid and one pixel wide, and
polygons are drawn solidly filled in.
Point Details
To control the size of a rendered point, use glPointSize() and supply the desired size in pixels as the
argument.
void glPointSize (GLfloat size); Sets the width in pixels for rendered points; size must be greater than 0.0
and by default is 1.0.
The actual collection of pixels on the screen which are drawn for various point widths depends on
whether antialiasing is enabled. (Antialiasing is a technique for smoothing points and lines as they're
rendered.) If antialiasing is disabled (the default), fractional widths are rounded to integer widths, and a
screen-aligned square region of pixels is drawn.
Thus, if the width is 1.0, the square is 1 pixel by 1 pixel; if the width is 2.0, the square is 2 pixels by 2
pixels, and so on.
With antialiasing enabled, a circular group of pixels is drawn, and the pixels on the boundaries are
typically drawn at less than full intensity to give the edge a smoother appearance. In this mode,
noninteger widths aren't rounded.
Most OpenGL implementations support very large point sizes. The maximum size for antialiased points is
queryable, but the same information is not available for standard, aliased points. A particular
implementation, however, might limit the size of standard, aliased points to not less than its maximum
antialiased point size, rounded to the nearest integer value. You can obtain this floating-point value by
using GL_POINT_SIZE_RANGE with glGetFloatv().
Line details
With OpenGL, you can specify lines with different widths and lines that are stippled in various ways:
dotted, dashed, drawn with alternating dots and dashes, and so on.
void glLineWidth(GLfloat width); Sets the width in pixels for rendered lines; width must be greater than
0.0 and by default is 1.0.


The actual rendering of lines is affected by the antialiasing mode, in the same way as for points. Without
antialiasing, widths of 1, 2, and 3 draw lines 1, 2, and 3 pixels wide. With antialiasing enabled,
noninteger line widths are possible, and pixels on the boundaries are typically drawn at less than full
intensity. As with point sizes, a particular OpenGL implementation might limit the width of
nonantialiased lines to its maximum antialiased line width, rounded to the nearest integer value. You can
obtain this floating-point value by using GL_LINE_WIDTH_RANGE with glGetFloatv().
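As a small sketch (again assuming a current OpenGL context; not from the original text), point and line
sizes are set with glPointSize() and glLineWidth(), and the supported antialiased ranges can be queried
with glGetFloatv():

#include <GL/gl.h>
#include <iostream>

void configureSizes()
{
    glPointSize(4.0f);                  // render points 4 pixels wide
    glLineWidth(2.5f);                  // noninteger widths need antialiasing to take full effect

    GLfloat pointRange[2], lineRange[2];
    glGetFloatv(GL_POINT_SIZE_RANGE, pointRange);
    glGetFloatv(GL_LINE_WIDTH_RANGE, lineRange);
    std::cout << "Antialiased point sizes: " << pointRange[0] << " to " << pointRange[1] << "\n";
    std::cout << "Antialiased line widths: " << lineRange[0] << " to " << lineRange[1] << "\n";
}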
Polygons as Points, Outlines, or Solids
A polygon has two sides, front and back, and might be rendered differently depending on which side is
facing the viewer. This allows you to have cutaway views of solid objects in which there is an obvious
distinction between the parts that are inside and those that are outside. By default, both front and back
faces are drawn in the same way. To change this, or to draw only outlines or vertices, use
glPolygonMode(). void glPolygonMode(GLenum face, GLenum mode); Controls the drawing mode for
a polygon's front and back faces. The parameter face can be
GL_FRONT_AND_BACK, GL_FRONT, or GL_BACK; mode can be GL_POINT, GL_LINE, or
GL_FILL to indicate whether the polygon should be drawn as points, outlined, or filled. By default, both
the front and back faces are drawn filled. For example, you can have the front faces filled and the back
faces outlined with two calls to this routine: glPolygonMode(GL_FRONT, GL_FILL);
glPolygonMode(GL_BACK, GL_LINE);
5.3. Normal Vectors
A normal vector (or normal, for short) is a vector that points in a direction that's perpendicular to a
surface. An object's normal vectors define the orientation of its surface in space, in particular, its
orientation relative to light sources. These vectors are used by OpenGL to determine how much light the
object receives at its vertices.
You use glNormal*() to set the current normal to the value of the argument passed in. Subsequent calls to
glVertex*() cause the specified vertices to be assigned the current normal.
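A minimal sketch (assuming a current OpenGL context with lighting already enabled; not part of the original
text) of how glNormal*() and glVertex*() work together:

#include <GL/gl.h>

void drawLitTriangle()
{
    glBegin(GL_TRIANGLES);
        glNormal3f(0.0f, 0.0f, 1.0f);   // current normal: points toward the viewer (+z)
        glVertex3f(-1.0f, -1.0f, 0.0f); // each vertex below is assigned that normal
        glVertex3f( 1.0f, -1.0f, 0.0f);
        glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
}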
5.4. Vertex Arrays
There are three steps to using vertex arrays to render geometry.
1. Activate (enable) up to six arrays, each to store a different type of data: vertex coordinates, RGBA
colors, color indices, surface normals, texture coordinates, or polygon edge flags.
2. Put data into the array or arrays. The arrays are accessed by the addresses of (that is, pointers to) their
memory locations. In the client-server model, this data is stored in the client's address space.
3. Draw geometry with the data. OpenGL obtains the data from all activated arrays by dereferencing the
pointers. In the client-server model, the data is transferred to the server's address space. There are three
ways to do this, as the sketch after this list illustrates:


1. Accessing individual array elements (randomly hopping around)


2. Creating a list of individual array elements (methodically hopping around)
3. Processing sequential array elements
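A minimal sketch of the three steps above (assuming a current OpenGL context; the triangle data is made up
for illustration and is not from the original text):

#include <GL/gl.h>

static const GLfloat triangle[] = {     // step 2: the vertex data, as x,y pairs
    -1.0f, -1.0f,
     1.0f, -1.0f,
     0.0f,  1.0f
};

void drawWithVertexArray()
{
    glEnableClientState(GL_VERTEX_ARRAY);        // step 1: activate the vertex array
    glVertexPointer(2, GL_FLOAT, 0, triangle);   // step 2: hand OpenGL the pointer to the data
    glDrawArrays(GL_TRIANGLES, 0, 3);            // step 3: draw from sequential array elements
    glDisableClientState(GL_VERTEX_ARRAY);
}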


CHAPTER SIX: Representing 3D objects


In 3D computer graphics, polygonal modeling is an approach for modeling objects by representing or
approximating their surfaces using polygons. Polygonal modeling is well suited to scan line rendering and
is therefore the method of choice for real-time computer graphics. Alternate methods of representing 3D
objects include NURBS surfaces, subdivision surfaces, and equation-based representations used in ray
tracers. 3D polygonal model: a 3D object made up entirely of polygons. 3D polygonal modeling: the
process of building a 3D object by specifying the polygons that make up that object.

Geometric theory and polygons


The basic object used in mesh modeling is a vertex, a point in three-dimensional space. Two vertices
connected by a straight line become an edge. Three vertices, connected to each other by three edges,
define a triangle, which is the simplest polygon in Euclidean space. More complex polygons can be
created out of multiple triangles, or as a single object with more than 3 vertices. Four sided polygons
(generally referred to as quads) and triangles are the most common shapes used in polygonal modeling. A
group of polygons, connected to each other by shared vertices, is generally referred to as an element. Each
of the polygons making up an element is called a face.
In Euclidean geometry, any three non-collinear points determine a plane. For this reason, triangles always
inhabit a single plane. This is not necessarily true of more complex polygons, however. The flat nature of
triangles makes it simple to determine their surface normal, a three-dimensional vector perpendicular to
the triangle's surface. Surface normals are useful for determining light transport in ray tracing, and are a
key component of the popular Phong shading model. Some rendering systems use vertex normals instead of face
normals to create a better-looking lighting system at the cost of more processing. Note that every triangle
has two face normals, which point in opposite directions from each other. In many systems only one of
these normals is considered valid; the other side of the polygon is referred to as a backface, and can be
made visible or invisible depending on the programmer's desires.
Many modeling programs do not strictly enforce geometric theory; for example, it is possible for two
vertices to have two distinct edges connecting them, occupying exactly the same spatial location. It is also
possible for two vertices to exist at the same spatial coordinates, or two faces to exist at the same location.
Situations such as these are usually not desired and many packages support an auto-cleanup function. If
auto-cleanup is not present, however, they must be deleted manually.
A group of polygons which are connected by shared vertices is referred to as a mesh. In order for a mesh
to appear attractive when rendered, it is desirable that it be non-self-intersecting, meaning that no edge
passes through a polygon. Another way of looking at this is that the mesh cannot pierce itself. It is also
desirable that the mesh not contain any errors such as doubled vertices, edges, or faces. For some
purposes it is important that the mesh be a manifold, that is, that it does not contain holes or singularities
(locations where two distinct sections of the mesh are connected by a single vertex).
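The following is a minimal sketch (not from the original text) of how such a mesh might be stored: shared
vertices plus index-based triangular faces, with a face normal computed from the cross product of two edge
vectors.

#include <vector>
#include <cmath>

struct Vec3 { double x, y, z; };
struct Triangle { int v0, v1, v2; };        // indices into the shared vertex list

struct Mesh {
    std::vector<Vec3> vertices;
    std::vector<Triangle> faces;
};

Vec3 faceNormal(const Mesh& m, const Triangle& t) {
    Vec3 a = m.vertices[t.v0], b = m.vertices[t.v1], c = m.vertices[t.v2];
    Vec3 e1 = { b.x - a.x, b.y - a.y, b.z - a.z };   // edge a->b
    Vec3 e2 = { c.x - a.x, c.y - a.y, c.z - a.z };   // edge a->c
    Vec3 n  = { e1.y * e2.z - e1.z * e2.y,           // cross product e1 x e2
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x };
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0) { n.x /= len; n.y /= len; n.z /= len; }
    return n;                                        // reversing the vertex winding flips the normal
}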

Construction of polygonal meshes


Although it is possible to construct a mesh by manually specifying vertices and faces, it is much more
common to build meshes using a variety of tools. A wide variety of 3D graphics software packages are
available for use in constructing polygon meshes.
One of the more popular methods of constructing meshes is box modeling, which uses two simple tools:


The subdivide tool splits faces and edges into smaller pieces by adding new vertices. For example, a
square would be subdivided by adding one vertex in the center and one on each edge, creating four
smaller squares.
The extrude tool is applied to a face or a group of faces. It creates a new face of the same size and
shape which is connected to each of the existing edges by a face. Thus, performing
the extrude operation on a square face would create a cube connected to the surface at the location of
the face.
A second common modeling method is sometimes referred to as inflation modeling or extrusion
modeling. In this method, the user creates a 2D shape which traces the outline of an object from a
photograph or a drawing. The user then uses a second image of the subject from a different angle and
extrudes the 2D shape into 3D, again following the shape's outline. This method is especially common
for creating faces and heads. In general, the artist will model half of the head and then duplicate the
vertices, invert their location relative to some plane, and connect the two pieces together. This ensures
that the model will be symmetrical.
Another common method of creating a polygonal mesh is by connecting together various primitives,
which are predefined polygonal meshes created by the modeling environment. Common primitives
include:

Cubes
Pyramids
Cylinders
2D primitives, such as squares, triangles, and disks
Specialized or esoteric primitives, such as the Utah Teapot or Suzanne, Blender's monkey mascot.
Spheres - Spheres are commonly represented in one of two ways:
Icospheres are icosahedrons which possess a sufficient number of triangles to resemble a sphere.
UV spheres are composed of quads, and resemble the grid seen on some globes - quads are larger
near the "equator" of the sphere and smaller near the "poles," eventually terminating in a single
vertex.
Finally, some specialized methods of constructing high or low detail meshes exist. Sketch based
modeling is a user-friendly interface for constructing low-detail models quickly, while 3D scanners can
be used to create high-detail meshes based on existing real-world objects in an almost automatic way. These
devices are very expensive, and are generally only used by researchers and industry professionals but can
generate high accuracy sub-millimetric digital representations.

Operations
There are a very large number of operations which may be performed on polygonal meshes. Some of
these roughly correspond to real-world manipulations of 3D objects, while others do not.
Polygonal mesh operations:
Creations - Create new geometry from some other mathematical object
Loft - generate a mesh by sweeping a shape along a path


Extrude - same as loft, except the path is always a line


Revolve - generate a mesh by revolving (rotating) a shape around an axis
Marching cubes - algorithm to construct a mesh from an implicit function
Binary Creations - Create a new mesh from a binary operation of two other meshes
Add - boolean addition of two meshes
Subtract - boolean subtraction of two meshes
Intersect - boolean intersection
Union - boolean union of two meshes
Attach - attach one mesh to another (removing the interior surfaces)
Chamfer - create a beveled surface which smoothly connected two surfaces
Deformations - Move only the vertices of a mesh
Deform - systematically move vertices (according to certain functions or rules)
Weighted Deform - move vertices based on localized weights per vertex
Morph - move vertices smoothly between a source and target mesh
Bend - move vertices to "bend" the object
Twist - move vertices to "twist" the object
Manipulations - Modify the geometry of the mesh, but not necessarily topology
Displace - introduce additional geometry based on a "displacement map" from the surface
Simplify - systematically remove and average vertices
Subdivide - smooth a coarse mesh by subdividing the mesh (Catmull-Clark, etc.)
Convex Hull - generate another mesh which minimally encloses a given mesh (think shrink-wrap)
Cut - create a hole in a mesh surface
Stitch - close a hole in a mesh surface
Measurements - Compute some value of the mesh
Volume - compute the 3D volume of a mesh (discrete volumetric integral)
Surface Area - compute the surface area of a mesh (discrete surface integral)
Collision Detection - determine if two complex meshes in motion have collided
Fitting - construct a parametric surface (NURBS, bicubic spline) by fitting it to a given mesh
Point-Surface Distance - compute distance from a point to the mesh
Line-Surface Distance - compute distance from a line to the mesh
Line-Surface Intersection - compute intersection of line and the mesh
Cross Section - compute the curves created by a cross-section of a plane through a mesh
Centroid - compute the centroid, geometric center, of the mesh
Center-of-Mass - compute the center of mass, balance point, of the mesh
Circumcenter - compute the center of a circle or sphere enclosing an element of the mesh
Incenter - compute the center of a circle or sphere enclosed by an element of the mesh

Extensions
Once a polygonal mesh has been constructed, further steps must be taken before it is useful for games,
animation, etc. The model must be texture mapped to add colors and texture to the surface and it must be
given a skeleton for animation. Meshes can also be assigned weights and centers of gravity for use
in physical simulation.
To display a model on a computer screen outside of the modeling environment, it is necessary to store
that model in one of the file formats listed below, and then use or write a program capable of loading
from that format. The two main methods of displaying 3D polygon models are OpenGL and Direct3D.
Both of these methods can be used with or without a 3D accelerated graphics card.


Advantages and disadvantages


There are many disadvantages to representing an object using polygons. Polygons are incapable of
accurately representing curved surfaces, so a large number of them must be used to approximate curves in
a visually appealing manner. The use of complex models has a cost in lowered speed. In scan line
conversion, each polygon must be converted and displayed, regardless of size, and there are frequently a
large number of models on the screen at any given time. Often, programmers must use multiple models at
varying levels of detail to represent the same object in order to cut down on the number of polygons being
rendered.
The main advantage of polygons is that they are faster than other representations. While a modern
graphics card can show a highly detailed scene at a frame rate of 60 frames per second or higher, ray
tracers, the main way of displaying non-polygonal models, are incapable of achieving an interactive frame
rate (10 frame/s or higher) with a similar amount of detail.
Methods of creating polygonal meshes
Build mesh by hand
Tessellate a theoretical smooth surface
Tessellation: the process of creating a polygonal approximation from a smooth surface.
Extrude a 2D polygon, curve, etc.
Extrusion: the process of moving a 2D cross-section through space to create a 3D solid.
Revolve/sweep a 2D polygon or curve
Revolution: the process of rotating a 2D cross-section about an axis to create a 3D solid.

Problems with polygonal models

They approximate smoothly curving surfaces


Tradeoff between realism and efficiency
Lots of polygons: good approximation, slow to process
Few polygons: fast processing, poor approximation

Practical polygonal modeling

Modeling tools
3D Studio Max
Maya
AutoCAD
etc
File formats
OBJ, DXF, 3DS, FLT, DWG,

Non-Polygonal Representation

The science of 3D models is divided into 3 main models depending upon the different types of shapes and
techniques used for the imaging. These are polygonal representation and non-polygonal
representation(Rational B-spline Modeling and Primitive modeling).

Rational B-spline Modeling:


This model is used in many fields; a renowned software package for this type of modeling is Maya. Mostly
this model is used to generate curved designs in graphics. The major reasons for the wide use of
Non-Uniform Rational B-Spline (NURBS) modeling are the flexibility it presents while shaping graphic
designs, its quick and easy-to-use algorithms, and its compact representation of standard analytical
shapes. It is an easy-to-handle type of modeling in which we can easily manipulate the shapes and sizes of
the models we create.

Primitive modeling:

The use of certain shapes and objects which may include spheres, or cubes probably, in combination and
making certain alterations to such objects and shapes in order to create an object that is desired is termed
as primitive modeling. It is commonly considered that primitive modeling is the very basic form of 3D
modeling, which can be utilized to form geometric shapes either with or without Boolean processes.


CHAPTER SEVEN: COLORS AND IMAGES

Colors in Computer Graphics RGB: CIE


Any color expressed in RGB space is some mixture of three primary colors: red, green, and blue.
The RGB model's approach to colors is important because:
It directly reflects the physical properties of "True color" displays
As of 2011, most graphics tools support it, even if they prefer another color model
It is the only means of specifying a specific color in the CSS2 standard for Web pages.
The name of the model comes from the initials of the three additive primary colors, red, green, and blue.
The main purpose of the RGB color model is for the sensing, representation, and display of images in
electronic systems, such as televisions and computers, though it has also been used in
conventional photography.

RGB is a device-dependent color model: different devices detect or reproduce a given RGB value
differently, since the color elements (such as phosphors or dyes) and their response to the individual R, G,
and B levels vary from manufacturer to manufacturer, or even in the same device over time. Thus an RGB
value does not define the same color across devices without some kind of color management.

One common application of the RGB color model is the display of colors on a cathode ray
tube (CRT), liquid crystal display (LCD),plasma display, or organic light emitting diode (OLED) display
such as a television, a computer's monitor, or a large-scale screen. In order to achieve a representation
which uses only positive mixing coefficients, the CIE ("Commission Internationale d'Eclairage") defined
three new hypothetical light sources, x, y, and z, which yield positive matching curves:

If we are given a spectrum and wish to find the corresponding X, Y, and Z quantities, we can do
so by integrating the product of the spectral power and each of the three matching curves over all
wavelengths.
The weights X,Y,Z form the three-dimensional CIE XYZ space.
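As a rough illustration of that integration (the spectral-power and matching-curve samples below are made
up for the example; real CIE tables are much denser), the X, Y, and Z weights can be approximated by a
discrete sum over sampled wavelengths:

#include <iostream>

int main()
{
    const int samples = 4;                            // hypothetical number of wavelength samples
    double P[samples]    = { 0.2, 0.8, 0.9, 0.3 };    // spectral power (made-up values)
    double xbar[samples] = { 0.1, 0.3, 0.6, 0.2 };    // made-up matching-curve samples
    double ybar[samples] = { 0.0, 0.4, 0.8, 0.1 };
    double zbar[samples] = { 0.7, 0.5, 0.1, 0.0 };
    double dLambda = 10.0;                            // wavelength step in nm
    double X = 0, Y = 0, Z = 0;
    for (int i = 0; i < samples; i++) {               // discrete approximation of the integrals
        X += P[i] * xbar[i] * dLambda;
        Y += P[i] * ybar[i] * dLambda;
        Z += P[i] * zbar[i] * dLambda;
    }
    std::cout << "X=" << X << " Y=" << Y << " Z=" << Z << "\n";
}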
Image Formats and Their Applications (GIF, JPG, PNG)

The most common image file formats, the most important for cameras, printing, scanning, and internet
use, are JPG, TIF, PNG, and GIF.

JPG is the most used image file format. Digital cameras and web pages normally use JPG files -
because JPG heroically compresses the data to be very much smaller in the file. However JPG
uses lossy compression to accomplish this feat, which is a strong downside. A smaller file, yes,
there is nothing like JPG for small, but this is at the cost of image quality. This degree is
selectable (with an option setting named JPG Quality), to be lower quality smaller files, or to be
higher quality larger files. In general today, JPG is rather unique in this regard, using lossy
compression allowing very small files of lower quality, whereas almost any other file type uses
lossless compression (and is larger).

Frankly, JPG is used when small file size is more important than maximum image quality (web
pages, email, memory cards, etc). But JPG is good enough in many cases, if we don't overdo the
compression. Perhaps good enough for some uses even if we do overdo it (web pages, etc). But if
you are concerned with maximum quality for archiving your important images, then you do need


to know two things: 1) for JPG, always choose higher Quality and a larger file, and 2) do NOT
keep editing and saving your JPG images repeatedly, because more quality is lost every time you
save it as JPG (in the form of added JPG artifacts... pixels become colors they ought not to be -
lossy).

TIF is lossless (including LZW compression option), which is considered the highest quality
format for commercial work. The TIF format is not necessarily any "higher quality" per se (the
image pixels are what they are), and most formats other than JPG are lossless too. This simply
means there are no additional losses or JPG artifacts to degrade and detract from the original. And
TIF is the most versatile, except that web pages don't show TIF files. For other purposes however,
TIF does most of anything you might want, from 1-bit to 48-bit color, RGB, CMYK, LAB, or
Indexed color. Most any of the "special" file types (for example, camera RAW files, fax files, or
multipage documents) are based on TIF format, but with unique proprietary data tags - making
these incompatible unless expected by their special software.
GIF was designed by CompuServe in the early days of computer 8-bit video, before JPG, for
video display at dial up modem speeds. GIF always uses lossless LZW compression, but it is
always an indexed color file (8-bits, 256 colors maximum), which is poor for 24-bit color photos.
Don't use indexed color for color photos today, the color is too limited. PNG and TIF files can
also optionally handle the same indexed color mode that GIF uses, but they are more versatile
with other choices too. But GIF is still very good for web graphics (i.e., with a limited number of
colors). For graphics of only a few colors, GIF can be much smaller than JPG, with more clear
pure colors than JPG). Indexed Color is described at Color Palettes
PNG can replace GIF today (web browsers show both), and PNG also offers many options of TIF
too (indexed or RGB, 1 to 48-bits, etc). PNG was invented more recently than the others,
designed to bypass possible LZW compression patent issues with GIF, and since it was more
modern, it offers other options too (RGB color modes, 16 bits, etc). One additional feature of
PNG is transparency for 24 bit RGB images. Normally PNG files are a little smaller than LZW
compression in TIF or GIF (all of these use lossless compression, of different types), but PNG is
perhaps slightly slower to read or write. That patent situation has gone away now, but PNG
remains excellent. Less used than TIF or JPG, but PNG is another good choice for lossless quality
work.

Major considerations to choose the necessary file type include:

Compression quality - Lossy for smallest files (JPG), or Lossless for best quality images (TIF, PNG).
Full RGB color for photos (TIF, PNG, JPG), or Indexed Color for graphics (PNG, GIF, TIF).
16-bit color (48-bit RGB data) is sometimes desired (TIF and PNG).
Transparency or Animation is used in graphics (GIF and PNG).
Documents - line art, multi-page, text, fax, etc - this will be TIF.
CMYK color is certainly important for commercial prepress (TIF).

The only reason for using lossy compression is for smaller file size, usually due to internet transmission
speed or storage space. Web pages require JPG or GIF or PNG image types, because some browsers do
not show TIF files. On the web, JPG is the clear choice for photo images (smallest file, with image
quality being less important than file size), and GIF is common for graphic images, but indexed color is
not normally used for color photos (PNG can do either on the web).


Other than the web, TIF file format is the undisputed leader when best quality is desired, largely
because TIF is so important in commercial printing environments. High Quality JPG can be pretty good
too, but don't ruin them by making the files too small. If the goal is high quality, you don't want small.
Only consider making the JPG large instead, and plan your work so you save them as JPG only one
or two times. Adobe RGB color space may be OK for your home printer and profiles, but if you send your
pictures out to be printed, the mass market printing labs normally only accept JPG files, and only process
the RGB color space.

Difference in photo and graphics images

Photo images have continuous tones, meaning that adjacent pixels often have very similar colors, for
example, a blue sky might have many shades of blue in it. Normally this is 24-bit RGB color, or 8-bit
grayscale, and a typical color photo may contain perhaps a hundred thousand RGB colors, out of the
possible set of 16 million colors in 24-bit RGB color.

Graphic images are normally not continuous tone (gradients are possible in graphics, but are seen less
often). Graphics are drawings, not photos, and they use relatively few colors, maybe only two or three,
often less than 16 colors in the entire image. In a color graphic cartoon, the entire sky will be only one
shade of blue where a photo might have dozens of shades. A map for example is graphics, maybe 4 or 5
map colors plus 2 or 3 colors of text, plus blue water and white paper, often less than 16 colors overall.
These few colors are well suited for Indexed Color, which can re-purify the colors. Don't cut your color
count too short though - there will be more colors than you count. Every edge between two solid colors
likely has maybe six shades of anti-aliasing smoothing the jaggies (examine it at maybe 500% size).
Insufficient colors can rough up the edges. Scanners have three modes to create the image: color (for all
color work), grayscale (like B&W photos), and line art. Line art is a special case, only two colors (black
or white, with no gray), for example clip art, fax, and of course text. Low resolution line art (like cartoons
on the web) is often better as grayscale, to add anti-aliasing to hide the jaggies.

JPG files are very small files for continuous tone photo images, but JPG is poor for graphics, without a
high Quality setting. JPG requires 24-bit color or 8-bit grayscale, and the JPG artifacts are most
noticeable in the hard edges of graphics or text. GIF files (and other indexed color files) are good for
graphics, but are poor for photos (too few colors possible). However, graphics are normally not many
colors anyway. Formats like TIF and PNG can be used either way, 24-bit or indexed color - these file
types have different internal modes to accommodate either type optimally.

Color data mode -bits per pixel

JPG

RGB - 24-bits (8-bit color), or Grayscale - 8-bits

JPEG always uses lossy JPG compression, but its degree is selectable, for higher quality and larger files,
or lower quality and smaller files. JPG is for photo images, and is the worst possible choice for most
graphics or text data.

TIF

Versatile, many formats supported.


Mode: RGB or CMYK or LAB, and others, almost anything.


8 or 16-bits per color channel, called 8 or 16-bit "color" (24 or 48-bit RGB files).
Grayscale - 8 or 16-bits,
Indexed color - 1 to 8-bits,
Line Art (bilevel)- 1-bit

For TIF files, most programs allow either no compression or LZW compression (LZW is lossless, but is
less effective for color images). Adobe Photoshop also provides JPG or ZIP compression in TIF files too
(but which greatly reduces third party compatibility of TIF files). "Document programs" allow CCITT G3
or G4 compression for 1-bit text (Fax is G3 or G4 TIF files), which is lossless and tremendously effective
(small). Many specialized image file types (like camera RAW files) are TIF file format, but using special
proprietary data tags.

24-bits is called 8-bit color, three 8-bit bytes for RGB (256x256x256 = 16.7 million colors maximum.)
Or 48-bits is called 16-bit color, three 16-bit words (65536x65536x65536 = trillions of colors
conceptually)

PNG

RGB - 24 or 48-bits (called 8-bit or 16-bit "color"),


Alpha channel for RGB transparency - 32 bits
Grayscale - 8 or 16-bits,
Indexed color - 1 to 8-bits,
Line Art (bi level) - 1-bit

Supports transparency in regular indexed color, and also there can be a fourth channel (called Alpha)
which can map RGB graduated transparency (by pixel location, instead of only one color, and graduated,
instead of only on or off).

The APNG version also supports animation (like GIF), showing several sequential frames fast to simulate
motion.

PNG uses ZIP compression which is lossless, and somewhat more effective color compression than GIF
or TIF LZW. For photo data, PNG is somewhat smaller files than TIF LZW, but larger files than JPG
(however PNG is lossless, and JPG is not.) PNG is a newer format than the others, designed to be both
versatile and royalty free, back when the patent for LZW compression was disputed for GIF and TIF files.

GIF

Indexed color - 1 to 8-bits (8-bit indexes, limiting the image to 256 colors maximum). Each palette entry is
a 24-bit color, but an image can contain only 256 distinct colors.

One color in indexed color can be marked transparent, allowing the underlying background to be seen (very
important for text, for example). GIF is an on-screen image format; the file contains no dpi information for
printing. It was designed by CompuServe for online images in the days of dialup and 8-bit indexed computer
video, whereas other file formats can be 24-bits now. However, GIF is still great for web use of graphics
containing only a few colors, when it is a small lossless file, much smaller and better than JPG for this.
GIF files do not save a dpi number for printing resolution.


GIF uses lossless LZW compression. (for Indexed Color, see second page at GIF link at page bottom).

GIF also supports animation, showing several sequential frames fast to simulate motion.

Note that if your image size is say 3000x2000 pixels, then this is 3000x2000 = 6 million pixels (6
megapixels). Assuming this 6 megapixel image data is RGB color and 24-bits (or 3 bytes per pixel of
RGB color information), then the size of this image data is 6 million x 3 bytes RGB = 18 million bytes.
That is simply how large your image data is. Then file compression like JPG or LZW can make the file
smaller, but when you open the image in computer memory for use, the JPG may not still have the same
image quality, but it is always still 3000x2000 pixels and 18 million bytes. This is simply how large your
6 megapixel RGB image data is (megapixels x 3 bytes per pixel).
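A tiny sketch of that arithmetic (the dimensions are the same example used above):

#include <iostream>

int main()
{
    long width = 3000, height = 2000;                  // 6 megapixels
    long bytesPerPixel = 3;                            // 24-bit RGB
    long rawBytes = width * height * bytesPerPixel;    // 18,000,000 bytes of uncompressed image data
    std::cout << "Uncompressed size: " << rawBytes << " bytes ("
              << rawBytes / 1000000.0 << " MB)\n";
}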

Summary

The most common image file formats, the most important for general purposes today, are JPG, TIF, PNG
and GIF. These are not the only choices of course, but they are good and reasonable choices for general
purposes. Newer formats like JPG2000 never acquired popular usage, and are not supported by web
browsers, and so are not the most compatible choice.

PNG and TIF LZW are lossless compression, so their file size reduction is not as extreme as the wild
heroics JPG can dream up. In general, selecting lower JPG Quality gives a smaller worse file, higher JPG
Quality gives a larger better file. Your 12 megapixel RGB image data is three bytes per pixel, or 36
million bytes. That is simply how big your image data is. Your JPG file size might be only 5-20% of
that, literally. TIF LZW might be 65-80%, and PNG might be 50-65% (very rough ballpark for 24-bit
color images). We cannot predict sizes precisely because compression always varies with image detail.
Blank areas, like sky and walls, compress much smaller than extremely detailed areas like a tree full of
leaves. But the JPG file can be much smaller, because JPG is not required to recover the original image
intact, losses are acceptable. Whereas, the only goal of PNG and TIF LZW is to be 100% lossless, which
means the file is not as heroically small, but there is never any concern about compression quality with
PNG or TIF LZW. They still do impressive amounts of file size compression, remember, the RGB image
data is actually three bytes per pixel.


CHAPTER EIGHT: Viewing a local illumination model

This chapter describes the following major sections:

"OVERVIEW: THE CAMERA ANALOGY" gives an overview of the transformation process


by describing the analogy of taking a photograph with a camera, presents a simple example
program that transforms an object, and briefly describes the basic OpenGL transformation
commands.
"VIEWING AND MODELING TRANSFORMATIONS" explains in detail how to specify
and to imagine the effect of viewing and modeling transformations. These transformations orient
the model and the camera relative to each other to obtain the desired final image.
"PROJECTION TRANSFORMATIONS" describes how to specify the shape and orientation
of the viewing volume. The viewing volume determines how a scene is projected onto the screen
(with a perspective or orthographic projection) and which objects or parts of objects are clipped
out of the scene.
"VIEWPORT TRANSFORMATION" explains how to control the conversion of three-
dimensional model coordinates to screen coordinates.
"TROUBLESHOOTING TRANSFORMATIONS" presents some tips for discovering why
you might not be getting the desired effect from your modeling, viewing, projection, and
viewport transformations.
"MANIPULATING THE MATRIX STACKS" discusses how to save and restore certain
transformations. This is particularly useful when you're drawing complicated objects that are built
up from simpler ones.
"ADDITIONAL CLIPPING PLANES" describes how to specify additional clipping planes
beyond those defined by the viewing volume.
"EXAMPLES OF COMPOSING SEVERAL TRANSFORMATIONS" walks you through a
couple of more complicated uses for transformations.
"REVERSING OR MIMICKING TRANSFORMATIONS" shows you how to take a
transformed point in window coordinates and reverse the transformation to obtain its original
object coordinates. The transformation itself (without reversal) can also be emulated.

8.1. Overview: The Camera Analogy


The transformation process to produce the desired scene for viewing is analogous to taking a photograph
with a camera. As shown in Figure 8-1, the steps with a camera (or a computer) might be the following:

Set up your tripod and point the camera at the scene (viewing transformation).
Arrange the scene to be photographed into the desired composition (modeling
transformation).
Choose a camera lens or adjust the zoom (projection transformation).
Determine how large you want the final photograph to be - for example, you might want it
enlarged (viewport transformation).

After these steps are performed, the picture can be snapped or the scene can be drawn.


Figure 8-1 : The Camera Analogy

Note that these steps correspond to the order in which you specify the desired transformations in your
program, not necessarily the order in which the relevant mathematical operations are performed on an
object's vertices. The viewing transformations must precede the modeling transformations in your code,
but you can specify the projection and viewport transformations at any point before drawing occurs.
Figure 8-2 shows the order in which these operations occur on your computer.


Figure 8-2 : Stages of Vertex Transformation

To specify viewing, modeling, and projection transformations, you construct a 4 by 4 matrix M, which is
then multiplied by the coordinates of each vertex v in the scene to accomplish the transformation

v'=Mv

(Remember that vertices always have four coordinates (x, y, z, w), though in most cases w is 1 and for
two-dimensional data z is 0.) Note that viewing and modeling transformations are automatically applied
to surface normal vectors, in addition to vertices. (Normal vectors are used only in eye coordinates.) This
ensures that the normal vector's relationship to the vertex data is properly preserved.
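
To make v' = Mv concrete outside of OpenGL, the following sketch multiplies a 4 by 4 matrix, stored as
sixteen values in OpenGL's column-major order, by a homogeneous vertex; the function name is only
illustrative.

/* Multiply a 4x4 matrix (column-major, sixteen floats) by a vertex v = (x, y, z, w). */
void transformVertex(const float m[16], const float v[4], float out[4])
{
   for (int row = 0; row < 4; ++row)
   {
      out[row] = m[0*4 + row] * v[0]    /* column 1 */
               + m[1*4 + row] * v[1]    /* column 2 */
               + m[2*4 + row] * v[2]    /* column 3 */
               + m[3*4 + row] * v[3];   /* column 4 */
   }
}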

The viewing and modeling transformations you specify are combined to form the modelview matrix,
which is applied to the incoming object coordinates to yield eye coordinates. Next, if you've specified
additional clipping planes to remove certain objects from the scene or to provide cutaway views of
objects, these clipping planes are applied.

After that, OpenGL applies the projection matrix to yield clip coordinates. This transformation defines a
viewing volume; objects outside this volume are clipped so that they're not drawn in the final scene. After
this point, the perspective division is performed by dividing coordinate values by w, to
produce normalized device coordinates. Finally, the transformed coordinates are converted to window
coordinates by applying the viewport transformation. You can manipulate the dimensions of the viewport
to cause the final image to be enlarged, shrunk, or stretched.

You might correctly suppose that the x and y coordinates are sufficient to determine which pixels need to
be drawn on the screen. However, all the transformations are performed on the z coordinates as well. This
way, at the end of this transformation process, the z values correctly reflect the depth of a given vertex
(measured in distance away from the screen). One use for this depth value is to eliminate unnecessary
drawing. For example, suppose two vertices have the same x and y values but different z values. OpenGL
can use this information to determine which surfaces are obscured by other surfaces and can then avoid
drawing the hidden surfaces.
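
If you want OpenGL to actually use these z values to discard hidden surfaces, the depth buffer has to be
requested and the depth test enabled explicitly; a minimal sketch (not part of Example 8-1, which draws
only a wireframe) looks like this, with each call placed in the routine named in its comment.

glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);   /* main(): ask for a depth buffer */
glEnable (GL_DEPTH_TEST);                                    /* init(): turn on the depth test */
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);         /* display(): clear color and depth */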


A Simple Example: Drawing a Cube

Example 8-1 draws a cube that's scaled by a modeling transformation (see Figure 8-3). The viewing
transformation, gluLookAt(), positions and aims the camera towards where the cube is drawn. A
projection transformation and a viewport transformation are also specified. The rest of this section walks
you through Example 8-1 and briefly explains the transformation commands it uses. The succeeding
sections contain the complete, detailed discussion of all OpenGL's transformation commands.

Figure 8-3 : Transformed Cube

Example 8-1 : Transformed Cube: cube.c

#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>

void init(void)
{
   glClearColor (0.0, 0.0, 0.0, 0.0);   /* black background */
   glShadeModel (GL_FLAT);
}

void display(void)
{
   glClear (GL_COLOR_BUFFER_BIT);
   glColor3f (1.0, 1.0, 1.0);
   glLoadIdentity ();                                         /* clear the matrix */
   /* viewing transformation */
   gluLookAt (0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
   glScalef (1.0, 2.0, 1.0);                                  /* modeling transformation */
   glutWireCube (1.0);
   glFlush ();
}

void reshape (int w, int h)
{
   glViewport (0, 0, (GLsizei) w, (GLsizei) h);               /* viewport transformation */
   glMatrixMode (GL_PROJECTION);
   glLoadIdentity ();
   glFrustum (-1.0, 1.0, -1.0, 1.0, 1.5, 20.0);               /* projection transformation */
   glMatrixMode (GL_MODELVIEW);
}

int main(int argc, char** argv)
{
   glutInit(&argc, argv);
   glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
   glutInitWindowSize (500, 500);
   glutInitWindowPosition (100, 100);
   glutCreateWindow (argv[0]);
   init ();
   glutDisplayFunc(display);
   glutReshapeFunc(reshape);
   glutMainLoop();
   return 0;
}

8.1.1. The Viewing Transformation


Recall that the viewing transformation is analogous to positioning and aiming a camera. In this code
example, before the viewing transformation can be specified, the current matrix is set to the identity
matrix with glLoadIdentity(). This step is necessary since most of the transformation commands
multiply the current matrix by the specified matrix and then set the result to be the current matrix. If you
don't clear the current matrix by loading it with the identity matrix, you continue to combine previous
transformation matrices with the new one you supply. In some cases, you do want to perform such
combinations, but you also need to clear the matrix sometimes.

In Example 8-1, after the matrix is initialized, the viewing transformation is specified with gluLookAt().
The arguments for this command indicate where the camera (or eye position) is placed, where it is aimed,
and which way is up. The arguments used here place the camera at (0, 0, 5), aim the camera lens towards
(0, 0, 0), and specify the up-vector as (0, 1, 0). The up-vector defines a unique orientation for the camera.

If gluLookAt() is not called, the camera has a default position and orientation. By default, the camera is
situated at the origin, points down the negative z-axis, and has an up-vector of (0, 1, 0). So in Example 8-
1, the overall effect is that gluLookAt() moves the camera 5 units along the z-axis.

8.1.2. The Modeling Transformation


You use the modeling transformation to position and orient the model. For example, you can rotate,
translate, or scale the model - or perform some combination of these operations. In Example 8-1,
glScalef() is the modeling transformation that is used. The arguments for this command specify how
scaling should occur along the three axes. If all the arguments are 1.0, this command has no effect. In
Example 8-1, the cube is drawn twice as large in the y direction. Thus, if one corner of the cube had
originally been at (3.0, 3.0, 3.0), that corner would wind up being drawn at (3.0, 6.0, 3.0). The effect of
this modeling transformation is to transform the cube so that it isn't a cube but a rectangular box.

Try This
Change the gluLookAt() call in Example 8-1 to the modeling transformation glTranslatef() with
parameters (0.0, 0.0, -5.0). The result should look exactly the same as when you used gluLookAt(). Why
are the effects of these two commands similar?
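
If you try it, the display() routine might look like the following sketch; only the viewing line changes
from Example 8-1.

void display(void)
{
   glClear (GL_COLOR_BUFFER_BIT);
   glColor3f (1.0, 1.0, 1.0);
   glLoadIdentity ();                /* clear the matrix */
   glTranslatef (0.0, 0.0, -5.0);    /* move the scene 5 units away instead of moving the camera */
   glScalef (1.0, 2.0, 1.0);         /* modeling transformation */
   glutWireCube (1.0);
   glFlush ();
}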

Note that instead of moving the camera (with a viewing transformation) so that the cube could be viewed,
you could have moved the cube away from the camera (with a modeling transformation). This duality in
the nature of viewing and modeling transformations is why you need to think about the effect of both
types of transformations simultaneously. It doesn't make sense to try to separate the effects, but
sometimes it's easier to think about them one way rather than the other. This is also why modeling and
viewing transformations are combined into the modelview matrix before the transformations are applied.
Also note that the modeling and viewing transformations are included in the display() routine, along with
the call that's used to draw the cube, glutWireCube(). This way, display() can be used repeatedly to draw
the contents of the window if, for example, the window is moved or uncovered, and you've ensured that
each time, the cube is drawn in the desired way, with the appropriate transformations. The potential
repeated use of display() underscores the need to load the identity matrix before performing the viewing
and modeling transformations, especially when other transformations might be performed between calls
to display().

8.1.3. The Projection Transformation


Specifying the projection transformation is like choosing a lens for a camera. You can think of this
transformation as determining what the field of view or viewing volume is and therefore what objects are
inside it and to some extent how they look. This is equivalent to choosing among wide-angle, normal, and
telephoto lenses, for example. With a wide-angle lens, you can include a wider scene in the final
photograph than with a telephoto lens, but a telephoto lens allows you to photograph objects as though
they're closer to you than they actually are. In computer graphics, you don't have to pay $10,000 for a
2000-millimeter telephoto lens; once you've bought your graphics workstation, all you need to do is use a
smaller number for your field of view.

In addition to the field-of-view considerations, the projection transformation determines how objects
are projected onto the screen, as its name suggests. Two basic types of projections are provided for you
by OpenGL, along with several corresponding commands for describing the relevant parameters in
different ways. One type is the perspective projection, which matches how you see things in daily life.
Perspective makes objects that are farther away appear smaller; for example, it makes railroad tracks
appear to converge in the distance. If you're trying to make realistic pictures, you'll want to choose
perspective projection, which is specified with the glFrustum() command in this code example.

The other type of projection is orthographic, which maps objects directly onto the screen without
affecting their relative size. Orthographic projection is used in architectural and computer-aided design
applications where the final image needs to reflect the measurements of objects rather than how they
might look. Architects create perspective drawings to show how particular buildings or interior spaces
look when viewed from various vantage points; the need for orthographic projection arises when blueprint
plans or elevations are generated, which are used in the construction of buildings.
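
As a sketch, an orthographic version of the reshape() routine from Example 8-1 would simply replace
glFrustum() with glOrtho(); the volume bounds used here are placeholders chosen to keep the scaled cube
visible.

void reshape (int w, int h)
{
   glViewport (0, 0, (GLsizei) w, (GLsizei) h);
   glMatrixMode (GL_PROJECTION);
   glLoadIdentity ();
   glOrtho (-2.0, 2.0, -2.0, 2.0, 1.5, 20.0);   /* left, right, bottom, top, near, far */
   glMatrixMode (GL_MODELVIEW);
}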

Before glFrustum() can be called to set the projection transformation, some preparation needs to happen.
As shown in the reshape() routine in Example 8-1, glMatrixMode() is called first, with the argument
GL_PROJECTION. This indicates that the current matrix specifies the projection transformation; the
following transformation calls then affect the projection matrix. As you can see, a few
lines later glMatrixMode() is called again, this time with GL_MODELVIEW as the argument. This
indicates that succeeding transformations now affect the modelview matrix instead of the projection
matrix.

Note that glLoadIdentity() is used to initialize the current projection matrix so that only the specified
projection transformation has an effect. Now glFrustum() can be called, with arguments that define the
parameters of the projection transformation. In this example, both the projection transformation and the
viewport transformation are contained in the reshape() routine, which is called when the window is first
created and whenever the window is moved or reshaped. This makes sense, since both projecting (the
width to height aspect ratio of the projection viewing volume) and applying the viewport relate directly to
the screen, and specifically to the size or aspect ratio of the window on the screen.

8.1.4. The Viewport Transformation


Together, the projection transformation and the viewport transformation determine how a scene gets
mapped onto the computer screen. The projection transformation specifies the mechanics of how the
mapping should occur, and the viewport indicates the shape of the available screen area into which the
scene is mapped. Since the viewport specifies the region the image occupies on the computer screen, you
can think of the viewport transformation as defining the size and location of the final processed
photograph - for example, whether the photograph should be enlarged or shrunk.

The arguments to glViewport() describe the origin of the available screen space within the window - (0,
0) in this example - and the width and height of the available screen area, all measured in pixels on the
screen. This is why this command needs to be called within reshape() - if the window changes size, the
viewport needs to change accordingly. Note that the width and height are specified using the actual width
and height of the window; often, you want to specify the viewport this way rather than giving an absolute
size.
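
One common pattern, sketched here under the assumption that you want to avoid stretching the image, is to
couple the projection to the viewport so that the aspect ratio of the viewing volume follows the window:

void reshape (int w, int h)
{
   glViewport (0, 0, (GLsizei) w, (GLsizei) h);
   glMatrixMode (GL_PROJECTION);
   glLoadIdentity ();
   GLfloat aspect = (GLfloat) w / (GLfloat) (h ? h : 1);   /* guard against zero height */
   glFrustum (-aspect, aspect, -1.0, 1.0, 1.5, 20.0);      /* widen the volume with the window */
   glMatrixMode (GL_MODELVIEW);
}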

Drawing the Scene


Once all the necessary transformations have been specified, you can draw the scene (that is, take the
photograph). As the scene is drawn, OpenGL transforms each vertex of every object in the scene by the
modeling and viewing transformations. Each vertex is then transformed as specified by the projection
transformation and clipped if it lies outside the viewing volume described by the projection
transformation. Finally, the remaining transformed vertices are divided by w and mapped onto the
viewport.

General-Purpose Transformation Commands


This section discusses some OpenGL commands that you might find useful as you specify desired
transformations. You've already seen a couple of these
commands, glMatrixMode() and glLoadIdentity(). The other two commands described here -
glLoadMatrix*() and glMultMatrix*() - allow you to specify any transformation matrix directly and
then to multiply the current matrix by that specified matrix. More specific transformation commands -
such as gluLookAt() and glScale*() - are described in later sections.

As described in the preceding section, you need to state whether you want to modify the modelview or
projection matrix before supplying a transformation command. You choose the matrix
with glMatrixMode(). When you use nested sets of OpenGL commands that might be called repeatedly,
remember to reset the matrix mode correctly. (The glMatrixMode() command can also be used to
indicate the texture matrix.)

void glMatrixMode(GLenum mode);


Specifies whether the modelview, projection, or texture matrix will be modified, using the
argument GL_MODELVIEW, GL_PROJECTION, or GL_TEXTURE for mode. Subsequent
transformation commands affect the specified matrix. Note that only one matrix can be modified
at a time. By default, the modelview matrix is the one that's modifiable, and all three matrices
contain the identity matrix.

You use the glLoadIdentity() command to clear the currently modifiable matrix for future transformation
commands, since these commands modify the current matrix. Typically, you always call this command
before specifying projection or viewing transformations, but you might also call it before specifying a
modeling transformation.

void glLoadIdentity(void);
Sets the currently modifiable matrix to the 4 by 4 identity matrix.

If you want to specify explicitly a particular matrix to be loaded as the current matrix,
use glLoadMatrix*(). Similarly, use glMultMatrix*() to multiply the current matrix by the matrix
passed in as an argument. The argument for both these commands is a vector of sixteen values (m1, m2, ...
, m16) that specifies a matrix M as follows, with the sixteen values filling the matrix in column-major order:

        | m1  m5  m9   m13 |
   M =  | m2  m6  m10  m14 |
        | m3  m7  m11  m15 |
        | m4  m8  m12  m16 |

Remember that you might be able to maximize efficiency by using display lists to store frequently used
matrices (and their inverses) rather than recomputing them. (OpenGL implementations often must
compute the inverse of the modelview matrix so that normals and clipping planes can be correctly
transformed to eye coordinates.)

Caution: If you're programming in C and you declare a matrix as m[4][4], then the element m[i][j] is in
the ith column and jth row of the OpenGL transformation matrix. This is the reverse of the standard C
convention in which m[i][j] is in row i and column j. To avoid confusion, you should declare your
matrices as m[16].

void glLoadMatrix{fd}(const TYPE *m);
Sets the sixteen values of the current matrix to those specified by m.

void glMultMatrix{fd}(const TYPE *m);
Multiplies the matrix specified by the sixteen values pointed to by m by the current matrix and
stores the result as the current matrix.

Note: All matrix multiplication with OpenGL occurs as follows: Suppose the current matrix is C and the
matrix specified with glMultMatrix*() or any of the transformation commands is M. After
multiplication, the final matrix is always CM. Since matrix multiplication isn't generally commutative,
the order makes a difference.
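
Putting the column-major layout and the CM multiplication order together, a small usage sketch might look
as follows; the translation values are arbitrary.

/* A translation by (2, 0, 0), written as the sixteen column-major values m1..m16. */
GLfloat t[16] = {
   1.0, 0.0, 0.0, 0.0,    /* column 1 */
   0.0, 1.0, 0.0, 0.0,    /* column 2 */
   0.0, 0.0, 1.0, 0.0,    /* column 3 */
   2.0, 0.0, 0.0, 1.0     /* column 4: the translation sits in m13, m14, m15 */
};

glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();      /* current matrix C is now the identity */
glMultMatrixf (t);      /* current matrix becomes C multiplied by the translation */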


CHAPTER NINE: Application Modeling


9.1. Distinction between modeling and graphics
In 3D computer graphics, 3D modeling (or modeling) is the process of developing a mathematical
representation of any three-dimensional surface of an object (either inanimate or living) via specialized
software. The product is called a 3D model. It can be displayed as a two-dimensional image through a
process called 3D rendering or used in a computer simulation of physical phenomena. The model can also
be physically created using 3D printing devices.
Models may be created automatically or manually. The manual modeling process of preparing geometric
data for 3D computer graphics is similar to plastic arts such as sculpting.
3D modeling software is a class of 3D computer graphics software used to produce 3D models. Individual
programs of this class are called modeling applications or modelers.
The study of computer graphics is a sub-field of computer science which studies methods for digitally
synthesizing and manipulating visual content. Although the term often refers to three-
dimensional computer graphics, it also encompasses two-dimensional graphics and image processing.

Graphics refers to any computer device or program that makes a computer capable of displaying and
manipulating pictures. The term also refers to the images themselves. For example, laser printers and
plotters are graphics devices because they permit the computer to output pictures. A graphics monitor is
a display monitor that can display pictures. A graphics board (or graphics card) is printed that, when
installed in a computer, permits the computer to display pictures.

In real-time computer graphics, geometry instancing is the practice of rendering multiple copies of the
same mesh in a scene at once. This technique is primarily used for objects such as trees, grass, or
buildings which can be represented as repeated geometry without appearing unduly repetitive, but may
also be used for characters. Although vertex data is duplicated across all instanced meshes, each instance
may have other differentiating parameters (such as color, or skeletal animation pose) changed in order to
reduce the appearance of repetition.

What is Instancing?
A problem when rendering big scenes is the limited size of main and graphics memory. Fortunately, big
scenes often contain many similar objects. We can exploit this property by creating prototypes for these
objects and storing only the additional data needed to slightly alter each copy, e.g. model matrices,
materials and textures. This technique is called Instancing.

There are two ways to implement Instancing: Software Instancing and Hardware Instancing. As the name
suggests, the latter uses built-in hardware support to speed up the rendering process. When Hardware
Instancing is used, the per-instance data has to be stored in graphics memory. There are several different
ways to do this; in this tutorial I will present two of them: using uniforms and storing the data in a
texture. Using a texture to store the data is often faster and you can store more data in one texture, but
the implementation with uniforms is easier and often sufficiently fast. In a later tutorial I will show you
two additional techniques: uniform buffer objects and per-instance vertex attributes.

But first I will explain how to implement Software Instancing.


Software Instancing
The following pseudo-code should explain how Software Instancing works:

bindVertexBufferObject(vbo);
for (unsigned int i=0; i < numInstanced; ++i)
{
loadModelMatrix(matrices[i]);
drawElements(TRIANGLES, 8, BYTE, &indices);
}

In the first step the vertex data is loaded. After this, the model matrix is loaded for the current instance
and the geometry is drawn. This process is repeated for each instance. The advantage of this approach is
that we only need to create one vertex buffer and one index buffer for a group of similar objects, and the
data only needs to be loaded once. But we still need to change the model matrix and draw the instances one
by one, which can be very slow.
How can we implement this with OpenSceneGraph? Pretty simple, we just need to create our scene-graph
the right way.

// create Group to contain all instances
osg::ref_ptr<osg::Group> group = new osg::Group;

// create Geode to wrap the shared Geometry
osg::ref_ptr<osg::Geode> geode = new osg::Geode;
geode->addDrawable(m_geometry);

// now create a MatrixTransform for each matrix in the list
for (auto it = m_matrices.begin(); it != m_matrices.end(); ++it)
{
    osg::ref_ptr<osg::MatrixTransform> matrixTransform = new osg::MatrixTransform(*it);
    matrixTransform->addChild(geode);
    group->addChild(matrixTransform);
}

Instead of creating one Geode for each object, we create just one. This Geode is shared by several
MatrixTransforms, which store the model matrix for each instance. The created scene-graph has the
following shape:

Group
  |-- MatrixTransform (matrix 1) --+
  |-- MatrixTransform (matrix 2) --+--> Geode --> Geometry (shared by all instances)
  |-- MatrixTransform (matrix n) --+


Pretty simple, isn't it? Let's move on to Hardware Instancing.

Hardware Instancing
As explained, Software Instancing is slow because each instance is drawn one by one. Graphics cards are
highly parallel, so they do not perform well when they are given only very small packages of work. To
speed up Instancing we need to find a way to upload all model matrices at once and tell the graphics card
to draw several instances with one call.

This can be done with the OpenGL extension GL_ARB_draw_instanced, which introduced two new draw
functions: glDrawArraysInstanced and glDrawElementsInstanced. These functions are similar to their
non-instanced counterparts but have an additional parameter, primcount, which tells the graphics card how
many instances should be drawn. On the GLSL side, a new built-in variable gl_InstanceID was
introduced; it can be used to fetch the right data for the current instance. In pseudo-code, Hardware
Instancing looks like this:

bindVertexBufferObject(vbo);
loadModelMatrix(matrices);
drawElementsInstanced(TRIANGLES, 8, BYTE, &indices, numInstanced);

Implementing the last line in OpenSceneGraph is really simple:

// first turn on hardware instancing for every primitive set
for (unsigned int i = 0; i < geometry->getNumPrimitiveSets(); ++i)
{
    geometry->getPrimitiveSet(i)->setNumInstances(numInstances);
}

// we need to turn off display lists for instancing to work
geometry->setUseDisplayList(false);
geometry->setUseVertexBufferObjects(true);

The number of instances to be drawn can be set with osg::PrimitiveSet::setNumInstances. Additionally,
we need to make sure OpenSceneGraph uses vertex buffer objects instead of display lists for this
geometry. This is important because instancing will not work with display lists (this is explained in the
ARB spec).


The more interesting part is how to store the model matrices in graphics memory and how to access them
in the vertex shader.
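
As a minimal sketch of the uniform-array approach, assuming a context where glDrawElementsInstanced() and
GLSL 1.40 are available (older drivers expose the ARB-suffixed variants instead), and assuming a linked
shader object program, an index count numIndices, and numInstances column-major matrices in matrices:

/* GLSL vertex shader (stored as a C string): one model matrix per instance,
   selected with the built-in gl_InstanceID; the uniform names are placeholders. */
const char* instancingVertexShader =
   "#version 140\n"
   "uniform mat4 instanceMatrices[32];\n"
   "uniform mat4 viewProjection;\n"
   "in vec4 position;\n"
   "void main() {\n"
   "   gl_Position = viewProjection * instanceMatrices[gl_InstanceID] * position;\n"
   "}\n";

/* C++ side: upload every model matrix once, then draw all instances with one call. */
GLint loc = glGetUniformLocation (program, "instanceMatrices");
glUniformMatrix4fv (loc, numInstances, GL_FALSE, matrices);
glDrawElementsInstanced (GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0, numInstances);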

9.2. Retained mode versus immediate mode model


Graphics APIs can be divided into retained-mode APIs and immediate-mode APIs.
In computing, retained mode rendering is a style for application programming interfaces of graphics
libraries, in which the libraries retain a complete model of the objects to be rendered.
By using a "retained mode" approach, client calls do not directly cause actual rendering, but instead
update an internal model (typically a list of objects) which is maintained within the library's data space.
This allows the library to optimize when actual rendering takes place along with the processing of related
objects.

Some techniques to optimize rendering include:


managing double buffering
performing occlusion culling
only transferring data that has changed from one frame to the next from the application to the library

Retained-mode APIs can be simpler to use, because the API does more of the work for you, such as
initialization, state maintenance, and cleanup. On the other hand, they are often less flexible, because the
API imposes its own scene model. Also, a retained-mode API can have higher memory requirements,
because it needs to provide a general-purpose scene model.

Windows Presentation Foundation (WPF) is an example of a retained-mode API.


A retained-mode API is declarative. The application constructs a scene from graphics primitives, such as
shapes and lines. The graphics library stores a model of the scene in memory. To draw a frame, the
graphics library transforms the scene into a set of drawing commands. Between frames, the graphics
library keeps the scene in memory. To change what is rendered, the application issues a command to
update the scene, for example to add or remove a shape. The library is then responsible for redrawing
the scene.

A diagram that shows retained-mode graphics.
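
The pattern can be sketched with a toy retained-mode library; the Scene class below is purely illustrative
(it is not WPF or any real API), but it shows the division of labour: the library owns the scene model and
replays it, while the application only edits it.

#include <cstdio>
#include <map>

/* A toy retained-mode "library": it stores the scene and redraws it on request. */
struct Scene
{
   std::map<int, const char*> shapes;   /* the retained scene model */
   int nextId = 0;

   int  add(const char* shape)  { shapes[nextId] = shape; return nextId++; }
   void remove(int id)          { shapes.erase(id); }
   void render() const                  /* the library replays its stored model */
   {
      for (const auto& s : shapes) std::printf("draw %s\n", s.second);
   }
};

int main()
{
   Scene scene;
   scene.add("rectangle");
   int dot = scene.add("circle");
   scene.render();       /* frame 1: the library draws from its model */
   scene.remove(dot);    /* the application only edits the model... */
   scene.render();       /* frame 2: ...and the library redraws the scene */
   return 0;
}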


Immediate mode is an alternative approach; the two styles can coexist in the same library and are not
necessarily exclusionary in practice. For example, OpenGL has immediate mode functions that can use
previously defined server side objects (textures, vertex and index buffers, shaders, etc.) without resending
unchanged data.
Immediate mode rendering is a style for application programming interfaces of graphics libraries, in
which client calls directly cause rendering of graphics objects to the display. It does not preclude the use
of double-buffering. In contrast to retained mode, lists of objects to be rendered are not saved by the API
library. Instead, the application must re-issue all drawing commands required to describe the entire scene
each time a new frame is required, regardless of actual changes. This method provides the maximum
amount of control and flexibility to the application program.
Although drawing commands have to be re-issued for each new frame, modern systems using this method
are generally able to avoid the unnecessary duplication of more memory-intensive display data by
referring to that unchanging data (e.g. textures and vertex buffers) in the drawing commands.

An immediate-mode API is procedural. Direct2D is an immediate-mode API. Each time a new frame is
drawn, the application directly issues the drawing commands. The graphics library does not store a scene
model between frames. Instead, the application keeps track of the scene. With an immediate-mode API,
you can implement targeted optimizations.

A diagram that shows immediate-mode graphics.
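
By contrast, an immediate-mode style can be sketched with the fixed-function OpenGL calls already used in
this chapter: the application keeps the scene data itself (here, three cube positions) and re-issues every
drawing command each frame.

/* Scene data owned by the application, not by the graphics library. */
static const GLfloat positions[3][3] = { {-2.0f, 0.0f, 0.0f}, {0.0f, 0.0f, 0.0f}, {2.0f, 0.0f, 0.0f} };

void display(void)
{
   glClear (GL_COLOR_BUFFER_BIT);
   glLoadIdentity ();
   gluLookAt (0.0, 0.0, 8.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
   for (int i = 0; i < 3; ++i)          /* every frame, every draw command is issued again */
   {
      glPushMatrix ();
      glTranslatef (positions[i][0], positions[i][1], positions[i][2]);
      glutWireCube (1.0);
      glPopMatrix ();
   }
   glFlush ();
}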
