
COMPUTER GRAPHICS AND MULTIMEDIA

PROJECT
ITE401

Mesh Deformation Modeling Of A Bird

Slot: F2+ TF2


Submitted to: Prof. Vishwanathan
By:
Rishab Mittal (13BIT0058)
Roopendra Gurjar (13BIT0094)
Shubhanshu Sengar (13BIT0155)
Tushar Bhagatkar (13BIT0271)

What is Mesh Deformation?


Mesh deformation is useful in a variety of applications in computer
modeling and animation. Many successful techniques have been
developed to help artists sculpt stylized body shapes and
deformations for 3D characters. In particular, multi-resolution
techniques
and recently introduced differential domain methods are very
effective in preserving surface details, which is important for
generating high-quality results. However, large deformations, such as
those found with characters performing nonrigid and highly
exaggerated movements, remain challenging today, and existing
techniques often
produce implausible results with unnatural volume changes.

Consider an example of a moving leg.


Once you've built the skeleton and fitted it to a mesh, you can use it
to deform the skin of your character. You can do this using a
technique called mesh deformation, also known as skinning or
binding, which uses the position of the bones to determine the
shape of the mesh. As you move the bones of the skeleton, the skin
of the character deforms to match.
The goal of any mesh deformation utility is to move vertices. Each
vertex in the character's mesh is assigned to follow one or more
bones. When a bone moves, the vertices follow and maintain their
relative distance to the bone. The vertices of the thigh need to follow
the thighbone, for example, while both the upper and lower leg will
affect the vertices around the knee. When more than one bone
affects a vertex, their influence must be weighted.
A weighted deformation allows more than one bone to affect a given
vertex. The method of accomplishing this depends on the software

you use, but the underlying theory is the same for all packages. Each
bone affects each vertex, using a weight from 0 to 1. When the
weight is at 0, the vertex is unaffected by the bone. When the weight
is at 1, the bone completely controls the motion of the vertex, and
the vertex is said to be fully affected. Weights in the middle of the
range allow multiple bones to affect a vertex.
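The weighted scheme described above amounts to a linear blend: each bone moves its own copy of the vertex, and the copies are averaged by weight. A minimal sketch in plain JavaScript (bones reduced to simple 2D translations for illustration; skinVertex is our own helper, not part of any animation package):

```javascript
// Linear blend skinning sketch: each bone contributes its transformed
// copy of the vertex, scaled by its weight. Weights per vertex sum to 1.
function skinVertex(vertex, bones, weights) {
  let x = 0, y = 0;
  for (let i = 0; i < bones.length; i++) {
    // Each "bone" here is just a translation [dx, dy] for simplicity.
    x += weights[i] * (vertex[0] + bones[i][0]);
    y += weights[i] * (vertex[1] + bones[i][1]);
  }
  return [x, y];
}

// A knee vertex influenced equally by thigh and shin bones:
const bones = [[0, 2], [0, 0]];  // thigh moves up by 2, shin stays put
const weights = [0.5, 0.5];      // each bone half-affects the vertex
console.log(skinVertex([1, 1], bones, weights)); // [ 1, 2 ]
```

A weight of 1 on a single bone reduces this to the fully-affected case; a weight of 0 removes that bone's influence entirely.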

Techniques
Laplacian Deformation on Abstract Graphs
Suppose G = (P, E) is a graph, where P is a set of N point
positions, P = {p_i ∈ R³ | 1 ≤ i ≤ N}, and E = {(i, j) | p_i is
connected to p_j} is the set of edges. The Laplacian of a graph is
analogous to the Laplace operator on manifolds [Chung 1997] and
computes the difference between each point p_i in the graph G and
a linear combination of its neighboring points:

δ_i = L_G(p_i) = p_i − Σ_{j ∈ N(i)} w_ij p_j    (1)

where N(i) = {j | {i, j} ∈ E} are the edge neighbors, w_ij is the
weight for point p_j, and δ_i is the Laplacian coordinate of the
point p_i in graph G. L_G is called the Laplace operator of the
graph G. The weights w_ij should be positive and satisfy
Σ_{j ∈ N(i)} w_ij = 1.
The simplest weighting is uniform weighting, w_ij = 1/|N(i)|
[Taubin 1995; Sorkine et al. 2004]. We use a more complicated
weighting scheme, described in Section 3.3.
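With uniform weights, the Laplacian coordinate of a point is simply its offset from the average of its neighbors. A minimal sketch on a toy graph (plain JavaScript; laplacianCoordinate is our own helper, written directly from the definition above):

```javascript
// Graph Laplacian coordinate with uniform weights w_ij = 1/|N(i)|:
// delta_i = p_i - (1/|N(i)|) * (sum of neighbor positions).
function laplacianCoordinate(points, neighbors, i) {
  const n = neighbors[i].length;
  const delta = points[i].slice();          // copy p_i
  for (const j of neighbors[i]) {
    for (let k = 0; k < 3; k++) delta[k] -= points[j][k] / n;
  }
  return delta;
}

// A point centered between its two neighbors has zero Laplacian
// coordinate; an endpoint encodes its offset from its single neighbor.
const pts = [[0, 0, 0], [1, 0, 0], [2, 0, 0]];
const nbrs = [[1], [0, 2], [1]];
console.log(laplacianCoordinate(pts, nbrs, 1)); // [ 0, 0, 0 ]
```

This is what makes the Laplacian coordinate a local-detail descriptor: it is zero wherever the surface is locally flat relative to its neighborhood.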
To control a deformation, the user inputs the deformed
positions q_i, i ∈ {1, ..., m}, for a subset of the N mesh vertices.
This information is used to compute a new (deformed)
Laplacian coordinate δ′_i for each point i in the graph. The
deformed positions of the mesh vertices p′_i are then obtained by
solving the following quadratic minimization problem:

min_{p′} ( Σ_{i=1..N} ‖L_G(p′_i) − δ′_i‖² + α Σ_{i=1..m} ‖p′_i − q_i‖² )    (2)

The first term represents preservation of local detail and the
second constrains the positions of those vertices directly
specified by the user. The parameter α balances these two
objectives. The deformed Laplacian coordinates are computed
via

δ′_i = T_i δ_i,

where δ_i is the Laplacian coordinate in the rest pose, defined in
(1), and T_i transforms it into the deformed pose. A general
transform T_i which includes anisotropic scaling is too powerful
and can fit away local detail. The solution is to restrict T_i to a
rotation and isotropic scale [Sorkine et al. 2004]. Given the
deformed positions of a subset of the vertices q_i, many
methods can be used to obtain T_i. We use a method which
propagates the local transformation from the
specified region of deformation to the entire mesh, blending the
transform towards the identity away from the deformation site.
If the graph is a triangular mesh, the graph Laplacian is
identical to the mesh Laplacian. Using the mesh Laplacian to
encode surface details, [Alexa 2003; Lipman et al. 2004;
Sorkine et al. 2004] preserve detailed geometric structure over
a wide range of editing operations. For large deformations,
these methods exhibit unnatural volume changes (Fig. 2a) or
local self-intersections (Fig. 3a). The following section
describes how to impose volumetric constraints which reduce
such undesirable effects, by constructing a volumetric graph for
the mesh.

Deforming the Volumetric Graph:


To balance between preserving the original surface's details
and constraining the volume, we modify the energy function in
Equation (2) to the following general form:

E = Σ_{i=1..n} ‖L_M(p′_i) − ε′_i‖² + α Σ_{i=1..m} ‖p′_i − q_i‖²
    + β Σ_{i=1..N} ‖L_G′(p′_i) − δ′_i‖²    (3)

where the first n points in graph G belong to the mesh M. L_M is
the discrete mesh Laplacian operator [Desbrun et al. 1999;
Meyer et al. 2002; Sorkine et al. 2004]. G′ is the sub-graph of G
formed by removing those edges belonging to M. For points on
the original mesh M, ε′_i (1 ≤ i ≤ n) are the mesh Laplacian
coordinates in the deformed coordinate frame. For points in the
volumetric graph G′, δ′_i (1 ≤ i ≤ N) are the graph Laplacian
coordinates in the deformed frame. Energy is thus decomposed
into three terms corresponding to preservation of surface details,
enforcement of the user's chosen deformation locations, and
preservation of volumetric details/rigidity.
β balances between surface and volumetric details. We actually
specify β′, where β = β′ · n/N. The n/N factor normalizes the
weight so that it is insensitive to the lattice density of the
volumetric graph.
With this normalization, we find that β′ = 1 works well for
preserving volume and preventing self-intersections. The
parameter α is not normalized because we want the constraint
strength to depend on the number of constrained points relative
to the total number of mesh points. We find 0.1 ≤ α ≤ 1 works
well for our examples. It is set to 0.2 by default.
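The three-term energy and its density normalization can be sanity-checked by evaluating it directly for a candidate deformation. A toy sketch, with each term reduced to a list of scalar residuals (the function and its arguments are our own simplification for illustration, not the paper's implementation):

```javascript
// Sketch of the three-term energy: surface detail + user position
// constraints + volumetric detail. The volumetric weight beta is
// normalized by n/N so it is insensitive to lattice density.
function energy(surfResiduals, constraintResiduals, volResiduals,
                alpha, betaPrime, n, N) {
  const beta = betaPrime * n / N;   // normalized volumetric weight
  const sumSq = a => a.reduce((s, t) => s + t * t, 0);
  return sumSq(surfResiduals)
       + alpha * sumSq(constraintResiduals)
       + beta * sumSq(volResiduals);
}

// Default weights from the text: betaPrime = 1, alpha = 0.2.
// With n = 2 mesh points out of N = 4 graph points, beta = 0.5:
console.log(energy([1, 1], [2], [3], 0.2, 1, 2, 4)); // ~7.3
```

Doubling the lattice density (larger N with the same mesh) leaves the effective volumetric weight per mesh point unchanged, which is the point of the n/N factor.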
Note that our volumetric constraint in Equation (3) could also
be combined with the quadric smoothness energy in [Botsch
and Kobbelt 2004]. We do not do this because we focus on
deforming models with significant geometric detail.
Propagation of Local Transforms
To obtain the local transforms T_i that take the Laplacian
coordinates in the rest frame, δ_i and ε_i, to the new Laplacian
coordinates δ′_i and ε′_i in the deformed frame, we adopt the
WIRE deformation method [Singh and Fiume 1998]. A sequence of
mesh vertices forming a curve is selected and then deformed to a
new state. This curve controls the deformation and defines the
q_i (Figure 7a). The control curve only specifies where vertices
on the curve deform to. The propagation algorithm first determines
where neighboring graph points deform to, then infers local
transforms at the curve points, and finally propagates the
transforms over the whole mesh. We begin by finding mesh
neighbors of the q_i and obtaining their deformed positions using
WIRE.

To review this method, let C(u) and C′(u) be the original and
deformed control curves respectively, parameterized by arc
length u ∈ [0, 1]. Given some neighboring point p ∈ R³, let
u_p ∈ [0, 1] be the parameter value minimizing distance between p
and the curve C(u). The deformation maps p to p′ such that C
maps to C′ and points nearby move analogously:

p′ = C′(u_p) + R(u_p) s(u_p) (p − C(u_p))

R(u) is a 3×3 rotation matrix which takes a tangent vector t(u)
on C and maps it to its corresponding tangent vector t′(u) on C′
by rotating around t(u) × t′(u). s(u) is a scale factor. It is
computed at each curve vertex as the ratio of the sum of
lengths of its adjacent edges in C′ over this length sum in C, and
then defined continuously over u by linear interpolation.
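The per-vertex scale factor can be sketched directly from this definition: the sum of adjacent edge lengths on the deformed curve divided by the same sum on the original curve. A small 2D example in plain JavaScript (helper names are ours):

```javascript
// Per-vertex scale factor for WIRE: sum of adjacent edge lengths on
// the deformed curve Cdef divided by the same sum on the original C.
function edgeLen(a, b) {
  return Math.hypot(b[0] - a[0], b[1] - a[1]);
}
function adjacentLengthSum(curve, i) {
  let s = 0;
  if (i > 0) s += edgeLen(curve[i - 1], curve[i]);              // edge before
  if (i < curve.length - 1) s += edgeLen(curve[i], curve[i + 1]); // edge after
  return s;
}
function scaleFactor(C, Cdef, i) {
  return adjacentLengthSum(Cdef, i) / adjacentLengthSum(C, i);
}

// Stretching a straight curve to twice its length doubles s everywhere:
const C    = [[0, 0], [1, 0], [2, 0]];
const Cdef = [[0, 0], [2, 0], [4, 0]];
console.log(scaleFactor(C, Cdef, 1)); // 2
```

Between curve vertices, s(u) would then be filled in by linear interpolation, as the text describes.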
We now have the deformed coordinates for each point on the
control curve and for its 1-ring neighbors on the mesh. We
proceed to compute the T_i at each point on the control curve. A
rotation is defined by computing a normal and a tangent vector
as the perpendicular projection of one edge vector with respect
to this normal. The normal is computed as a linear combination
of face normals around the mesh point i, weighted by face area.
The scale factor of T_i is given by s(u_p). The transform is then
propagated from the control curve to all graph points p via a
deformation strength field f(p) which decays away from the
deformation site (Figure 7b). Constant, linear, and Gaussian
strength fields can be chosen and are based on the shortest
edge path (discrete geodesic distance) from p to the curve. The
simplest propagation scheme assigns to p a rotation and scale
from the point q_p on the control curve closest to p. A smoother
result is obtained by computing a weighted average over all the
vertices on the control curve instead of the closest. Weighting

by the reciprocal of distance, 1/‖p − q_i‖_g, or by a Gaussian
function,

exp(−‖p − q_i‖_g² / (2σ²)),

works best in our experiments. ‖p − q‖_g denotes the discrete
geodesic distance from p to q, and σ controls the width of the
Gaussian. Weighting between multiple curves is similar, except
that the quaternion and scale must be accumulated over
multiple curves. The final transform matrix at point p is:

T_p^f = f(p) T_p + (1 − f(p)) I,

where T_p is p's weighted average transform. This formula
simply blends that transform with the identity using the strength
field. Laplacian coordinates thus approach their original (rest)
state outside the deformation's influence region.
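The blend toward the identity can be sketched with transforms reduced to uniform scale factors (a deliberate simplification of the full rotation-plus-scale transform; the Gaussian strength field follows the text, helper names are ours):

```javascript
// Blend a weighted-average transform toward the identity using the
// deformation strength field f(p): T_final = f * T_p + (1 - f) * I.
// Transforms are reduced to uniform scale factors for illustration.
function gaussianStrength(dist, sigma) {
  // Decays with geodesic distance from the deformation site.
  return Math.exp(-(dist * dist) / (2 * sigma * sigma));
}
function blendedScale(scaleP, dist, sigma) {
  const f = gaussianStrength(dist, sigma);
  return f * scaleP + (1 - f) * 1;   // 1 plays the role of the identity
}

// At the deformation site (dist = 0) the transform applies fully;
// far away it falls back to the identity, leaving the rest pose intact:
console.log(blendedScale(3, 0, 1));   // 3
console.log(blendedScale(3, 100, 1)); // 1
```

In the full method the same falloff blends a quaternion rotation and scale per graph point before they are applied to the Laplacian coordinates.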
This propagation scheme is similar to the method in [Yu et al.
2004]. The difference is that we compute the transform for each
graph vertex and apply it to its Laplacian coordinate. [Yu et al.
2004] compute a transform for each triangle and apply it to the
triangle's vertices. Independently transforming each triangle
disconnects it from its neighbors in the mesh, but solving the
Poisson equation stitches triangles back together, preserving
each triangle's orientation and scale in a least-squares sense.
Extending this idea to a volumetric domain requires a
tetrahedral mesh. Rather than computing transforms at the
deformation site and propagating them away from it, [Sorkine et
al. 2004] introduce additional degrees of freedom by defining
an unknown, least-squares optimal transform which takes a
local neighborhood of points from the rest state to the deformed
state. The transform is restricted to rotations and scales in
order to prevent loss of local detail, as is the case for us too.
For the system to remain quadratic and thus easily solvable,
rotations are defined using the small-angle approximation. This
is a poor approximation for large deformations, which
then require more complicated, iterative refinement.

Working on three.js
Three.js is a cross-browser JavaScript library/API used to create and
display animated 3D computer graphics in a web browser. Three.js
uses WebGL. The source code is hosted in a repository on GitHub.
Three.js allows the creation of GPU-accelerated 3D animations
using the JavaScript language as part of a website, without
relying on proprietary browser plugins. This is possible thanks to
the advent of WebGL. High-level libraries such as Three.js, GLGE,
SceneJS, and PhiloGL make it possible to author complex 3D
computer animations that display in the browser without the effort
required for a traditional standalone application or a plugin.
The Three.js library is a single JavaScript file. It can be included within
a web page by linking to a local or remote copy.
Effects: Anaglyph, cross-eyed and parallax barrier.
Scenes: add and remove objects at run-time; fog
Cameras: perspective and orthographic; controllers: trackball,
FPS, path and more
Animation: armatures, forward kinematics, inverse kinematics,
morph and keyframe
Lights: ambient, directional, point and spot lights; shadows: cast
and receive
Materials: Lambert, Phong, smooth shading, textures and more
Shaders: access to full OpenGL Shading Language (GLSL)
capabilities: lens flare, depth pass and extensive postprocessing library
Objects: meshes, particles, sprites, lines, ribbons, bones and
more - all with Level of detail
Geometry: plane, cube, sphere, torus, 3D text and more;
modifiers: lathe, extrude and tube
Data loaders: binary, image, JSON and scene
Utilities: full set of time and 3D math functions including
frustum, matrix, quaternion, UVs and more

Basic Steps For The Bird Mesh Model


1. Saving the basic file
Before you can use Three.js, you need somewhere to display it. Save
the following HTML to a file on your computer, along with a copy of
three.js in the js/ directory, and open it in your browser.
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>CGM Project</title>
<style>
body { margin: 0; }
canvas { width: 100%; height: 100% }
</style>
</head>
<body>
<script src="js/three.js"></script>
<script>
// Our JavaScript will go here.
</script>
</body>
</html>

2. Creating a Scene
To actually be able to display anything with Three.js, we need
three things: A scene, a camera, and a renderer so we can
render the scene with the camera.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera( 75,
window.innerWidth / window.innerHeight, 0.1, 1000 );
var renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );

3. Adding the Renderer


Next up is the renderer. This is where the magic happens. In addition
to the WebGLRenderer we use here, Three.js comes with a few
others, often used as fallbacks for users with older browsers or for
those who don't have WebGL support for some reason.
In addition to creating the renderer instance, we also need to set the
size at which we want it to render our app. It's a good idea to use the
width and height of the area we want to fill with our app - in this case,
the width and height of the browser window. For performance
intensive apps, you can also give setSize smaller values, like
window.innerWidth/2 and window.innerHeight/2, which will make
the app render at half size.
If you wish to keep the size of your app but render it at a lower
resolution, you can do so by calling setSize with false as updateStyle
(the third argument). For example, setSize(window.innerWidth/2,
window.innerHeight/2, false) will render your app at half resolution,
given that your <canvas> has 100% width and height.

Last but not least, we add the renderer element to our HTML
document. This is a <canvas> element the renderer uses to display the
scene to us.
function render() {
requestAnimationFrame( render );
renderer.render( scene, camera );
}
render();
This will create a loop that causes the renderer to draw the scene 60
times per second. If you're new to writing games in the browser, you
might say "why don't we just create a setInterval?" The thing is - we
could, but requestAnimationFrame has a number of advantages.
Perhaps the most important one is that it pauses when the user
navigates to another browser tab, hence not wasting their precious
processing power and battery life.

Complete Code
<!DOCTYPE html>
<html lang="en">
<head>
<title>CGM Project</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, user-scalable=no, minimum-scale=1.0, maximum-scale=1.0">
<style>
body {
font-family: Monospace;
background-color: #111;
color: #fff;
margin: 0px;
overflow: hidden;
}
a { color: #f00 }
</style>
</head>
<body>
<script src="three.js"></script>

<script src="js/Detector.js"></script>
<script src="js/libs/stats.min.js"></script>
<script>
if ( ! Detector.webgl ) Detector.addGetWebGLMessage();
var SCREEN_WIDTH = window.innerWidth;
var SCREEN_HEIGHT = window.innerHeight;
var container, stats;
var camera, scene, renderer;
var mixers = [];
var clock = new THREE.Clock();
init();
animate();
function init() {
container = document.createElement( 'div' );
document.body.appendChild( container );
var info = document.createElement( 'div' );
info.style.position = 'absolute';
info.style.top = '10px';
info.style.width = '100%';
info.style.textAlign = 'center';
info.innerHTML = 'CGM Project based on three.js<br>By:<br>Rishab Mittal (13BIT0058)<br>Roopendra Gurjar (13BIT0094)<br>Shubhanshu Sengar (13BIT0155)<br>Tushar Bhagatkar (13BIT0271)<br>';
container.appendChild( info );
//
camera = new THREE.PerspectiveCamera( 40,
SCREEN_WIDTH / SCREEN_HEIGHT, 1, 10000 );
camera.position.y = 300;
camera.target = new THREE.Vector3( 0, 150, 0 );
scene = new THREE.Scene();
//
scene.add( new THREE.HemisphereLight( 0x443333,
0x222233 ) );
var light = new THREE.DirectionalLight( 0xffffff, 1 );
light.position.set( 1, 1, 1 );
scene.add( light );
//
var loader = new THREE.JSONLoader();
loader.load( "models/animated/flamingo.js", function(
geometry ) {
var material = new THREE.MeshPhongMaterial( {
color: 0xffffff,
morphTargets: true,
vertexColors: THREE.FaceColors,

shading: THREE.FlatShading
} );
var mesh = new THREE.Mesh( geometry, material );
mesh.position.x = - 150;
mesh.position.y = 150;
mesh.scale.set( 1.5, 1.5, 1.5 );
scene.add( mesh );
var mixer = new THREE.AnimationMixer( mesh );
mixer.clipAction( geometry.animations[ 0 ] ).setDuration( 1
).play();
mixers.push( mixer );
} );
loader.load( "models/animated/flamingo.js", function(
geometry ) {
geometry.computeVertexNormals();
geometry.computeMorphNormals();
var material = new THREE.MeshPhongMaterial( {
color: 0xffffff,
morphTargets: true,
morphNormals: true,
vertexColors: THREE.FaceColors,
shading: THREE.SmoothShading

} );
var mesh = new THREE.Mesh( geometry, material );
mesh.position.x = 150;
mesh.position.y = 150;
mesh.scale.set( 1.5, 1.5, 1.5 );
scene.add( mesh );
var mixer = new THREE.AnimationMixer( mesh );
mixer.clipAction( geometry.animations[ 0 ] ).setDuration( 1
).play();
mixers.push( mixer );
} );
//
renderer = new THREE.WebGLRenderer( { antialias: true } );
renderer.setPixelRatio( window.devicePixelRatio );
renderer.setSize( SCREEN_WIDTH, SCREEN_HEIGHT );
container.appendChild( renderer.domElement );
//
stats = new Stats();
container.appendChild( stats.dom );
//
window.addEventListener( 'resize', onWindowResize, false );
}

//
function onWindowResize( event ) {
SCREEN_WIDTH = window.innerWidth;
SCREEN_HEIGHT = window.innerHeight;
renderer.setSize( SCREEN_WIDTH, SCREEN_HEIGHT );
camera.aspect = 0.5 * SCREEN_WIDTH / SCREEN_HEIGHT;
camera.updateProjectionMatrix();
}
//
function animate() {
requestAnimationFrame( animate );
render();
stats.update();
}
var radius = 600;
var theta = 0;
function render() {
theta += 0.1;
camera.position.x = radius * Math.sin(
THREE.Math.degToRad( theta ) );
camera.position.z = radius * Math.cos(
THREE.Math.degToRad( theta ) );

camera.lookAt( camera.target );
var delta = clock.getDelta();
for ( var i = 0; i < mixers.length; i ++ ) {
mixers[ i ].update( delta );
}
renderer.clear();
renderer.render( scene, camera );
}
</script>
</body>
</html>

Output

Conclusion
The bird mesh modeling using Three.js was completed successfully
by the team. The code works as intended and displays the bird
model along with its flat-shaded mesh counterpart.
In the future, more advanced features can be added, such as free-hand movement of the model and a hollow mesh model.
