Contents
Articles
High Level View
3D computer graphics
Rendering
Ray casting
Ray tracing
3D projection
Light
Light
Radiance
Photometry
Shadow
Umbra
Distance fog
Material Properties
Shading
Diffuse reflection
Lambertian reflectance
Gouraud shading
Oren-Nayar reflectance model
Phong shading
Blinn-Phong shading model
Specular reflection
Specular highlight
Retroreflector
Texture mapping
Bump mapping
Bidirectional reflectance distribution function
Physics of reflection
Rendering reflection
Reflectivity
Fresnel equations
Transparency and translucency
Rendering transparency
Refraction
Total internal reflection
List of refractive indices
Schlick's approximation
Bidirectional scattering distribution function
Object Intersection
Line-sphere intersection
Line-plane intersection
Point in polygon
Efficiency Schemes
Spatial index
Grid
Octree
Global Illumination
Global illumination
Rendering equation
Distributed ray tracing
Monte Carlo method
Unbiased rendering
Path tracing
Radiosity
Photon mapping
Metropolis light transport
Other Topics
Anti-aliasing
Ambient occlusion
Caustics
Subsurface scattering
Motion blur
Beam tracing
Cone tracing
Ray tracing hardware
Ray Tracers
3Delight
Amiga Reflections
Autodesk 3ds Max
Anim8or
ASAP
Blender
Brazil R/S
BRL-CAD
Form-Z
Holomatix Rendition
Imagine
Indigo Renderer
Kerkythea
LightWave 3D
LuxRender
Manta Interactive Ray Tracer
Maxwell Render
Mental ray
Modo
OptiX
PhotoRealistic RenderMan
Picogen
Pixie
POV-Ray
Radiance
Real3D
Realsoft 3D
Sunflow
TurboSilver
V-Ray
YafaRay
References
Article Sources and Contributors
Image Sources, Licenses and Contributors
Article Licenses
License
3D computer graphics
3D computer graphics (in contrast to 2D computer graphics) are graphics that use a three-dimensional representation of geometric data (often Cartesian) that is stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be stored for viewing later or displayed in real-time.

3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and 3D may use 2D rendering techniques.

3D computer graphics are often referred to as 3D models. Apart from the rendered graphic, the model is contained within the graphical data file. However, there are differences: a 3D model is the mathematical representation of any three-dimensional object, and a model is not technically a graphic until it is displayed. Due to 3D printing, 3D models are not confined to virtual space. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or used in non-graphical computer simulations and calculations.
History
William Fetter was credited with coining the term computer graphics in 1961[1][2] to describe his work at Boeing. One of the first displays of computer animation was Futureworld (1976), which included an animation of a human face and a hand, produced by Ed Catmull and Fred Parke at the University of Utah.
Overview
3D computer graphics creation falls into three basic phases:
3D modeling - the process of forming a computer model of an object's shape
Layout and animation - the motion and placement of objects within a scene
3D rendering - the computer calculations that, based on light placement, surface types, and other qualities, generate the image
Modeling
Modeling is the process of forming the shape of an object. The two most common sources of 3D models are those that an artist or engineer originates on the computer with some kind of 3D modeling tool, and models scanned into a computer from real-world objects. Models can also be produced procedurally or via physical simulation. Basically, a 3D model is formed from points called vertices (or vertexes) that define the shape and form polygons. A polygon is an area formed from at least three vertices (a triangle). A four-point polygon is a quad, and a polygon of more than four points is an n-gon[citation needed]. The overall integrity of the model and its suitability for use in animation depend on the structure of the polygons.
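To make this structure concrete, a common way to store a polygonal model is a list of vertex positions plus a list of faces, where each face is a tuple of indices into the vertex list. The following minimal Python sketch (with illustrative coordinates for a tetrahedron, not taken from any particular tool) shows the idea:

    # A minimal triangle-mesh representation: vertices are 3D points,
    # faces are index triples into the vertex list (an n-gon would hold n indices).
    vertices = [
        (0.0, 0.0, 0.0),
        (1.0, 0.0, 0.0),
        (0.0, 1.0, 0.0),
        (0.0, 0.0, 1.0),
    ]
    faces = [
        (0, 1, 2),
        (0, 1, 3),
        (0, 2, 3),
        (1, 2, 3),
    ]

    def face_corners(face):
        """Resolve a face's vertex indices to actual 3D coordinates."""
        return [vertices[i] for i in face]

    print(face_corners(faces[0]))  # the three corners of the first triangle

Sharing vertices between faces in this way keeps the mesh watertight: moving one vertex moves every polygon that uses it.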
Rendering
Rendering converts a model into an image either by simulating light transport to get photo-realistic images, or by applying some kind of style as in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light). This step is usually performed using 3D computer graphics software or a 3D graphics API. Altering the scene into a suitable form for rendering also involves 3D projection, which displays a three-dimensional image in two dimensions. Examples of 3D rendering
Left: A 3D rendering with ray tracing and ambient occlusion using Blender and YafaRay. Center: A 3D model of a Dunkerque-class battleship rendered with flat shading. Right: During the 3D rendering step, the number of reflections light rays can take, as well as various other attributes, can be tailored to achieve a desired visual effect. Rendered with Cobalt.
Communities
There are a multitude of websites designed to help educate and support 3D graphic artists. Some are managed by software developers and content providers, but there are standalone sites as well. These communities allow members to seek advice, post tutorials, provide product reviews or post examples of their own work.
References
[2] Computer Graphics, comphist.org (http://www.comphist.org/computing_history/new_page_6.htm)
External links
A Critical History of Computer Graphics and Animation (http://accad.osu.edu/~waynec/history/lessons.html)
How Stuff Works - 3D Graphics (http://computer.howstuffworks.com/3dgraphics.htm)
History of Computer Graphics series of articles (http://hem.passagen.se/des/hocg/hocg_1960.htm)
Rendering
Rendering is the process of generating an image from a model (or models in what collectively could be called a scene file), by means of computer programs. The result of displaying such a model can also be called a rendering. A scene file contains objects in a strictly defined language or data structure; it would contain geometry, viewpoint, texture, lighting, and shading information as a description of the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" may be by analogy with an "artist's rendering" of a scene.

Though the technical details of rendering methods vary, the general challenges to overcome in producing a 2D image from a 3D representation stored in a scene file are outlined as the graphics pipeline along a rendering device, such as a GPU. A GPU is a purpose-built device able to assist a CPU in performing complex rendering calculations. If a scene is to look relatively realistic and predictable under virtual lighting, the rendering software should solve the rendering equation. The rendering equation doesn't account for all lighting phenomena, but is a general lighting model for computer-generated imagery. 'Rendering' is also used to describe the process of calculating effects in a video editing program to produce final video output.

Rendering is one of the major sub-topics of 3D computer graphics, and in practice is always connected to the others. In the graphics pipeline, it is the last major step, giving the final appearance to the models and animation. With the increasing sophistication of computer graphics since the 1970s, it has become a more distinct subject. Rendering has uses in architecture, video games, simulators, movie or TV visual effects, and design visualization, each employing a different balance of features and techniques.

As a product, a wide variety of renderers are available. Some are integrated into larger modeling and animation packages, some are stand-alone, and some are free open-source projects. On the inside, a renderer is a carefully engineered program based on a selective mixture of disciplines related to light physics, visual perception, mathematics, and software development.
In the case of 3D graphics, rendering may be done slowly, as in pre-rendering, or in real time. Pre-rendering is a computationally intensive process that is typically used for movie creation, while real-time rendering is often done for 3D video games which rely on the use of graphics cards with 3D hardware accelerators.
Usage
When the pre-image (usually a wireframe sketch) is complete, rendering is used, which adds in bitmap textures or procedural textures, lights, bump mapping and relative position to other objects. The result is a completed image that the consumer or intended viewer sees. For movie animations, several images (frames) must be rendered and stitched together in a program capable of making an animation of this sort. Most 3D image editing programs can do this.
Features
A rendered image can be understood in terms of a number of visible features. Rendering research and development has been largely motivated by finding ways to simulate these efficiently. Some relate directly to particular algorithms and techniques, while others are produced together.

shading - how the color and brightness of a surface varies with lighting
texture-mapping - a method of applying detail to surfaces
bump-mapping - a method of simulating small-scale bumpiness on surfaces
fogging/participating medium - how light dims when passing through non-clear atmosphere or air
shadows - the effect of obstructing light
soft shadows - varying darkness caused by partially obscured light sources
reflection - mirror-like or highly glossy reflection
transparency (optics), transparency (graphic) or opacity - sharp transmission of light through solid objects
translucency - highly scattered transmission of light through solid objects
refraction - bending of light associated with transparency
diffraction - bending, spreading and interference of light passing by an object or aperture that disrupts the ray
indirect illumination - surfaces illuminated by light reflected off other surfaces, rather than directly from a light source (also known as global illumination)
caustics (a form of indirect illumination) - reflection of light off a shiny object, or focusing of light through a transparent object, to produce bright highlights on another object
depth of field - objects appear blurry or out of focus when too far in front of or behind the object in focus
motion blur - objects appear blurry due to high-speed motion, or the motion of the camera
non-photorealistic rendering - rendering of scenes in an artistic style, intended to look like a painting or drawing
Image rendered with computer aided design.
Techniques
Many rendering algorithms have been researched, and software used for rendering may employ a number of different techniques to obtain a final image. Tracing every particle of light in a scene is nearly always completely impractical and would take a stupendous amount of time. Even tracing a portion large enough to produce an image takes an inordinate amount of time if the sampling is not intelligently restricted. Therefore, four loose families of more-efficient light transport modelling techniques have emerged:

rasterization, including scanline rendering, geometrically projects objects in the scene to an image plane, without advanced optical effects;
ray casting considers the scene as observed from a specific point of view, calculating the observed image based only on geometry and very basic optical laws of reflection intensity, and perhaps using Monte Carlo techniques to reduce artifacts;
ray tracing is similar to ray casting, but employs more advanced optical simulation, and usually uses Monte Carlo techniques to obtain more realistic results at a speed that is often orders of magnitude slower;
radiosity is not usually implemented as a rendering technique itself, but instead calculates the passage of light as it leaves the light source and illuminates surfaces; these surfaces are usually rendered to the display using one of the other three techniques.

Most advanced software combines two or more of the techniques to obtain good-enough results at reasonable cost. Another distinction is between image order algorithms, which iterate over pixels of the image plane, and object order algorithms, which iterate over objects in the scene. Generally object order is more efficient, as there are usually fewer objects in a scene than pixels.
This approach avoids the graphics card's more taxing shading functions and still achieves better performance because the simpler textures stored in memory use less space. Sometimes designers will use one rasterization method on some faces and the other method on others based on the angle at which that face meets other joined faces, thus increasing speed and not hurting the overall effect.
Ray casting
In ray casting, the geometry which has been modeled is parsed pixel by pixel, line by line, from the point of view outward, as if casting rays out from the point of view. Where an object is intersected, the color value at the point may be evaluated using several methods. In the simplest, the color value of the object at the point of intersection becomes the value of that pixel. The color may be determined from a texture-map. A more sophisticated method is to modify the color value by an illumination factor, but without calculating the relationship to a simulated light source. To reduce artifacts, a number of rays in slightly different directions may be averaged.

Rough simulations of optical properties may be additionally employed: a simple calculation of the ray from the object to the point of view is made. Another calculation is made of the angle of incidence of light rays from the light source(s), and from these as well as the specified intensities of the light sources, the value of the pixel is calculated. Another simulation uses illumination plotted from a radiosity algorithm, or a combination of these two.

Ray casting is primarily used for real-time simulations, such as those used in 3D computer games and cartoon animations, where detail is not important, or where it is more efficient to manually fake the details in order to obtain better performance in the computational stage. This is usually the case when a large number of frames need to be animated. The resulting surfaces have a characteristic 'flat' appearance when no additional tricks are used, as if objects in the scene were all painted with matte finish.
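The per-pixel structure described above can be sketched in a few lines. In this minimal Python outline, scene.intersect, camera.primary_ray, and the shade callback are assumed placeholders standing in for whatever intersection, camera, and shading routines a real renderer provides:

    # Minimal ray-casting outline: one primary ray per pixel, no recursion.
    def ray_cast(scene, camera, shade, width, height):
        image = [[(0, 0, 0)] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                ray = camera.primary_ray(x, y, width, height)  # eye ray through this pixel
                hit = scene.intersect(ray)   # nearest surface along the ray, or None
                if hit is not None:
                    # the colour comes from the surface itself (texture or a
                    # simple illumination factor); no secondary rays are spawned
                    image[y][x] = shade(hit)
        return image

The absence of any recursion in the inner loop is exactly what distinguishes this from ray tracing.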
Ray tracing
Ray tracing aims to simulate the natural flow of light, interpreted as particles. Often, ray tracing methods are utilized to approximate the solution to the rendering equation by applying Monte Carlo methods to it. Some of the most used methods are path tracing, bidirectional path tracing, and Metropolis light transport, but semi-realistic methods are also in use, such as Whitted-style ray tracing, or hybrids. While most implementations let light propagate on straight lines, applications exist to simulate relativistic spacetime effects.[1] In a final, production-quality rendering of a ray-traced work, multiple rays are generally shot for each pixel, and traced not just to the first object of intersection, but rather through a number of sequential 'bounces', using the known laws of optics such as "angle of incidence equals angle of reflection" and more advanced laws that deal with refraction and surface roughness.
Spiral Sphere and Julia, Detail, a computer-generated image created by visual artist Robert W. McGregor using only POV-Ray 3.6 and its built-in scene description language.
Once the ray either encounters a light source, or more probably once a set limiting number of bounces has been evaluated, then the surface illumination at that final point is evaluated using techniques described above, and the changes along the way through the various bounces evaluated to estimate a value observed at the point of view. This is all repeated for each sample, for each pixel.
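In outline, that recursive evaluation might look like the following Python sketch. The helpers scene.intersect, surface_color, mix, and the Ray type are assumptions standing in for a real tracer's machinery, and refraction and per-pixel sampling are omitted for brevity:

    # Sketch of recursive ray tracing with a bounce limit.
    def reflect(d, n):
        """Mirror direction d about surface normal n (both unit 3-tuples)."""
        k = 2.0 * (d[0] * n[0] + d[1] * n[1] + d[2] * n[2])
        return (d[0] - k * n[0], d[1] - k * n[1], d[2] - k * n[2])

    def trace(scene, ray, depth, max_depth=5):
        if depth > max_depth:
            return (0, 0, 0)              # bounce limit reached: contribute nothing
        hit = scene.intersect(ray)
        if hit is None:
            return scene.background_color
        color = surface_color(hit)        # direct illumination at the hit point
        if hit.material.reflective:
            bounced = reflect(ray.direction, hit.normal)  # angle in equals angle out
            reflected = trace(scene, Ray(hit.point, bounced), depth + 1, max_depth)
            color = mix(color, reflected, hit.material.reflectivity)
        return color

Calling trace once per sample, and averaging many samples per pixel, yields the final image.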
In distribution ray tracing, at each point of intersection, multiple rays may be spawned. In path tracing, however, only a single ray or none is fired at each intersection, utilizing the statistical nature of Monte Carlo experiments.

As a brute-force method, ray tracing has been too slow to consider for real-time, and until recently too slow even to consider for short films of any degree of quality, although it has been used for special effects sequences, and in advertising, where a short portion of high-quality (perhaps even photorealistic) footage is required. However, efforts at optimizing to reduce the number of calculations needed in portions of a work where detail is not high or does not depend on ray tracing features have led to a realistic possibility of wider use of ray tracing. There is now some hardware-accelerated ray tracing equipment, at least in prototype phase, and some game demos which show use of real-time software or hardware ray tracing.
Radiosity
Radiosity is a method which attempts to simulate the way in which directly illuminated surfaces act as indirect light sources that illuminate other surfaces. This produces more realistic shading and seems to better capture the 'ambience' of an indoor scene. A classic example is the way that shadows 'hug' the corners of rooms. The optical basis of the simulation is that some diffused light from a given point on a given surface is reflected in a large spectrum of directions and illuminates the area around it.

The simulation technique may vary in complexity. Many renderings have a very rough estimate of radiosity, simply illuminating an entire scene very slightly with a factor known as ambiance. However, when advanced radiosity estimation is coupled with a high-quality ray tracing algorithm, images may exhibit convincing realism, particularly for indoor scenes.

In advanced radiosity simulation, recursive, finite-element algorithms 'bounce' light back and forth between surfaces in the model, until some recursion limit is reached. The colouring of one surface in this way influences the colouring of a neighbouring surface, and vice versa. The resulting values of illumination throughout the model (sometimes including for empty spaces) are stored and used as additional inputs when performing calculations in a ray-casting or ray-tracing model.

Due to the iterative/recursive nature of the technique, complex objects are particularly slow to emulate. Prior to the standardization of rapid radiosity calculation, some graphic artists used a technique referred to loosely as false radiosity, by darkening areas of texture maps corresponding to corners, joints and recesses, and applying them via self-illumination or diffuse mapping for scanline rendering. Even now, advanced radiosity calculations may be reserved for calculating the ambiance of the room, from the light reflecting off walls, floor and ceiling, without examining the contribution that complex objects make to the radiosity, or complex objects may be replaced in the radiosity calculation with simpler objects of similar size and texture.

Radiosity calculations are viewpoint independent, which increases the computations involved but makes them useful for all viewpoints. If there is little rearrangement of radiosity objects in the scene, the same radiosity data may be reused for a number of frames, making radiosity an effective way to improve on the flatness of ray casting without seriously impacting the overall rendering time per frame. Because of this, radiosity is a prime component of leading real-time rendering methods, and has been used from beginning to end to create a large number of well-known recent feature-length animated 3D-cartoon films.
Optimization
Optimizations used by an artist when a scene is being developed
Due to the large number of calculations, a work in progress is usually only rendered in detail appropriate to the portion of the work being developed at a given time, so in the initial stages of modeling, wireframe and ray casting may be used, even where the target output is ray tracing with radiosity. It is also common to render only parts of the scene at high detail, and to remove objects that are not important to what is currently being developed.
Academic core
The implementation of a realistic renderer always has some basic element of physical simulation or emulation: some computation which resembles or abstracts a real physical process. The term "physically based" indicates the use of physical models and approximations that are more general and widely accepted outside rendering. A particular set of related techniques have gradually become established in the rendering community. The basic concepts are moderately straightforward, but intractable to calculate; and a single elegant algorithm or approach has been elusive for more general purpose renderers. In order to meet demands of robustness, accuracy and practicality, an implementation will be a complex combination of different techniques. Rendering research is concerned with both the adaptation of scientific models and their efficient application.
The rendering equation

This is the key academic concept in rendering; it serves as the most abstract formal expression of the non-perceptual aspect of rendering. All more complete rendering algorithms can be seen as solutions to particular formulations of this equation:

$$L_o(x, \omega) = L_e(x, \omega) + \int_{\Omega} f_r(x, \omega', \omega) \, L_i(x, \omega') \, (\omega' \cdot n) \, d\omega'$$

Meaning: at a particular position and direction, the outgoing light ($L_o$) is the sum of the emitted light ($L_e$) and the reflected light, the reflected light being the sum of the incoming light ($L_i$) from all directions, multiplied by the surface reflection ($f_r$, the BRDF) and incoming angle. By connecting outward light to inward light via an interaction point, this equation stands for the whole 'light transport' (all the movement of light) in a scene.
Light interaction is often approximated by even simpler models: diffuse reflection and specular reflection, although both of these can also be expressed as BRDFs.
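As an illustration of how the equation is used numerically, the sketch below estimates its reflected term by Monte Carlo sampling for a purely diffuse (Lambertian) surface, whose BRDF is the constant albedo/pi. The incoming_radiance argument is an assumed stand-in for whatever routine supplies the incoming light, for example by tracing a ray in the sampled direction:

    import math, random

    def sample_hemisphere():
        """Uniformly sample a direction on the unit hemisphere around +z.
        Returns (direction, pdf); the pdf of uniform sampling is 1/(2*pi)."""
        u1, u2 = random.random(), random.random()
        z = u1
        r = math.sqrt(max(0.0, 1.0 - z * z))
        phi = 2.0 * math.pi * u2
        return (r * math.cos(phi), r * math.sin(phi), z), 1.0 / (2.0 * math.pi)

    def reflected_radiance(albedo, incoming_radiance, samples=256):
        """Monte Carlo estimate of the reflected term of the rendering
        equation for a Lambertian surface (BRDF = albedo / pi)."""
        brdf = albedo / math.pi
        total = 0.0
        for _ in range(samples):
            direction, pdf = sample_hemisphere()
            cos_theta = direction[2]   # cosine of the angle to the normal (+z)
            total += brdf * incoming_radiance(direction) * cos_theta / pdf
        return total / samples

With constant incoming radiance 1.0, the estimate converges to the albedo, as the analytic integral predicts.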
Geometric optics
Rendering is practically exclusively concerned with the particle aspect of light physics known as geometric optics. Treating light, at its basic level, as particles bouncing around is a simplification, but appropriate: the wave aspects of light are negligible in most scenes, and are significantly more difficult to simulate. Notable wave aspect phenomena include diffraction (as seen in the colours of CDs and DVDs) and polarisation (as seen in LCDs). Both types of effect, if needed, are made by appearance-oriented adjustment of the reflection model.
Visual perception
Though it receives less attention, an understanding of human visual perception is valuable to rendering. This is mainly because image displays and human perception have restricted ranges. A renderer can simulate an almost infinite range of light brightness and color, but current displays (movie screen, computer monitor, etc.) cannot handle so much, and something must be discarded or compressed. Human perception also has limits, and so does not need to be given large-range images to create realism. This can help solve the problem of fitting images into displays, and, furthermore, suggest what short-cuts could be used in the rendering simulation, since certain subtleties won't be noticeable. This related subject is tone mapping.

Mathematics used in rendering includes: linear algebra, calculus, numerical mathematics, signal processing, and Monte Carlo methods.

Rendering for movies often takes place on a network of tightly connected computers known as a render farm. The current state of the art in 3-D image description for movie creation is the mental ray scene description language designed at mental images and the RenderMan shading language designed at Pixar[2] (compare with simpler 3D file formats such as VRML, or APIs such as OpenGL and DirectX tailored for 3D hardware accelerators). Other renderers (including proprietary ones) can be and are sometimes used, but most other renderers tend to miss one or more of the often needed features like good texture filtering, texture caching, programmable shaders, high-end geometry types like hair, subdivision or NURBS surfaces with tessellation on demand, geometry caching, raytracing with geometry caching, high-quality shadow mapping, speed, or patent-free implementations. Other highly sought features these days may include IPR and hardware rendering/shading.
Chronology of important published ideas

1978 Bump mapping [10]
1980 BSP trees [11]
1980 Ray tracing [12]
1981 Cook shader [13]
1983 MIP maps [14]
1984 Octree ray tracing [15]
1984 Alpha compositing [16]
1984 Distributed ray tracing [17]
1984 Radiosity [18]
1985 Hemicube radiosity [19]
1986 Light source tracing [20]
1986 Rendering equation [21]
1987 Reyes rendering [22]
1991 Hierarchical radiosity [23]
1993 Tone mapping [24]
1993 Subsurface scattering [25]
1995 Photon mapping [26]
1997 Metropolis light transport [27]
Further reading

Jensen, Henrik Wann (2001). Realistic Image Synthesis Using Photon Mapping (reprint ed.). Natick, Mass.: AK Peters. ISBN 1-56881-147-0.
Blinn, Jim (1996). Jim Blinn's Corner: A Trip Down the Graphics Pipeline. San Francisco, Calif.: Morgan Kaufmann Publishers. ISBN 1-55860-387-5.
Glassner, Andrew S. (2004). Principles of Digital Image Synthesis (2 ed.). San Francisco, Calif.: Kaufmann. ISBN 1-55860-276-3.
Cohen, Michael F.; Wallace, John R. (1998). Radiosity and Realistic Image Synthesis (3 ed.). Boston, Mass.: Academic Press Professional. ISBN 0-12-178270-0.
Foley, James D.; Van Dam; Feiner; Hughes (1990). Computer Graphics: Principles and Practice (2 ed.). Reading, Mass.: Addison-Wesley. ISBN 0-201-12110-7.
Andrew S. Glassner, ed. (1989). An Introduction to Ray Tracing (3 ed.). London: Academic Press. ISBN 0-12-286160-4.
Description of the 'Radiance' system [30]
References
[2] A brief introduction to RenderMan (http://portal.acm.org/citation.cfm?id=1185817&jmp=abstract&coll=GUIDE&dl=GUIDE)
[30] http://radsite.lbl.gov/radiance/papers/sg94.1/
External links
SIGGRAPH (http://www.siggraph.org/) - the ACM's special interest group in graphics, the largest academic and professional association and conference
List of links to (recent) SIGGRAPH papers (and some others) on the web (http://www.cs.brown.edu/~tor/)
Ray casting
Ray casting is the use of ray-surface intersection tests to solve a variety of problems in computer graphics. The term was first used in computer graphics in a 1982 paper by Scott Roth to describe a method for rendering CSG models.[1]
Usage
Ray casting can refer to:
the general problem of determining the first object intersected by a ray,[2]
a technique for hidden surface removal based on finding the first intersection of a ray cast from the eye through each pixel of an image,
a non-recursive ray tracing rendering algorithm that only casts primary rays, or
a direct volume rendering method, also called volume ray casting.

Although "ray casting" and "ray tracing" were often used interchangeably in early computer graphics literature,[3] more recent usage tries to distinguish the two.[4] The distinction is that ray casting is a rendering algorithm that never recursively traces secondary rays, whereas other ray tracing-based rendering algorithms may.
Concept
Ray casting is the most basic of many computer graphics rendering algorithms that use the geometric algorithm of ray tracing. Ray tracing-based rendering algorithms operate in image order to render three-dimensional scenes to two-dimensional images. Geometric rays are traced from the eye of the observer to sample the light (radiance) travelling toward the observer from the ray direction. The speed and simplicity of ray casting comes from computing the color of the light without recursively tracing additional rays that sample the radiance incident on the point that the ray hit. This eliminates the possibility of accurately rendering reflections, refractions, or the natural falloff of shadows; however, all of these elements can be faked to a degree, by creative use of texture maps or other methods. The high speed of calculation made ray casting a handy rendering method in early real-time 3D video games.

In nature, a light source emits a ray of light that travels, eventually, to a surface that interrupts its progress. One can think of this "ray" as a stream of photons travelling along the same path. At this point, any combination of three things might happen with this light ray: absorption, reflection, and refraction. The surface may reflect all or part of the light ray, in one or more directions. It might also absorb part of the light ray, resulting in a loss of intensity of the reflected and/or refracted light. If the surface has any transparent or translucent properties, it refracts a portion of the light beam into itself in a different direction while absorbing some (or all) of the spectrum (and possibly altering the color). Between absorption, reflection, and refraction, all of the incoming light must be accounted for, and no more. A surface cannot, for instance, reflect 66% of an incoming light ray, and refract 50%, since the two would add up to be 116%. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive, refractive, and reflective properties are again calculated based on the incoming rays. Some of these rays travel in such a way that they hit our eye, causing us to see the scene and so contribute to the final rendered image. Attempting to simulate this real-world process of tracing light rays using a computer can be considered extremely wasteful, as only a minuscule fraction of the rays in a scene would actually reach the eye.

The first ray casting algorithm used for rendering was presented by Arthur Appel in 1968.[5] The idea behind ray casting is to trace rays from the eye, one per pixel, and find the closest object blocking the path of that ray; think of an image as a screen door, with each square in the screen being a pixel. This is then the object the eye sees through that pixel. Using the material properties and the effect of the lights in the scene, this algorithm can determine the shading of this object. The simplifying assumption is made that if a surface faces a light, the light will reach that surface and not be blocked or in shadow. The shading of the surface is computed using traditional 3D computer graphics shading models.

One important advantage ray casting offered over older scanline algorithms was its ability to easily deal with non-planar surfaces and solids, such as cones and spheres. If a mathematical surface can be intersected by a ray, it can be rendered using ray casting. Elaborate objects can be created by using solid modelling techniques and easily rendered.
An early use of Appel's ray casting rendering algorithm was by Mathematical Applications Group, Inc., (MAGI) of Elmsford, New York.[6]
Comanche series
The so-called "Voxel Space" engine developed by NovaLogic for the Comanche games traces a ray through each column of screen pixels and tests each ray against points in a heightmap. Then it transforms each element of the heightmap into a column of pixels, determines which are visible (that is, have not been occluded by pixels that have been drawn in front), and draws them with the corresponding color from the texture map.[8]
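A simplified sketch of that column-by-column traversal is shown below in Python. This is not NovaLogic's actual code; the camera fields (x, y, height, horizon, scale) and the two map arrays are illustrative assumptions:

    # "Voxel Space"-style column renderer: march one ray per screen column
    # across the heightmap, drawing a terrain sample only when it rises above
    # everything already drawn in front of it (tracked by y_buffer).
    def render_column(heightmap, colormap, cam, ray_dx, ray_dy, screen_h, max_dist):
        pixels = [None] * screen_h
        y_buffer = screen_h            # lowest screen row not yet occluded
        dist = 1.0
        while dist < max_dist:
            mx = int(cam.x + ray_dx * dist)    # sample position on the map
            my = int(cam.y + ray_dy * dist)
            terrain_h = heightmap[my][mx]
            # perspective: nearer and taller terrain lands higher on screen
            row = max(int((cam.height - terrain_h) / dist * cam.scale + cam.horizon), 0)
            if row < y_buffer:                 # visible above previous samples
                for y in range(row, y_buffer):
                    pixels[y] = colormap[my][mx]
                y_buffer = row
            dist += 1.0
        return pixels

Because each ray only walks forward through the map and each screen row is written at most once, the cost per column is roughly linear in the view distance.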
References
[5] "Ray-tracing and other Rendering Approaches" (http:/ / nccastaff. bournemouth. ac. uk/ jmacey/ CGF/ slides/ RayTracing4up. pdf) (PDF), lecture notes, MSc Computer Animation and Visual Effects, Jon Macey, University of Bournemouth [6] Goldstein, R. A., and R. Nagel. 3-D visual simulation. Simulation 16(1), pp. 2531, 1971. [7] Wolfenstein-style ray casting tutorial (http:/ / www. permadi. com/ tutorial/ raycast/ ) by F. Permadi [8] Andre LaMothe. Black Art of 3D Game Programming. 1995, ISBN 1-57169-004-2, pp. 14, 398, 935-936, 941-943. [9] "Ray shooting, depth orders and hidden surface removal", by Mark de Berg, Springer-Verlag, 1993, ISBN 3-540-57020-9, 201 pp.
External links
Raycasting planes in WebGL with source code (http://adrianboeing.blogspot.com/2011/01/raycasting-two-planes-in-webgl.html)
Raycasting (http://leftech.com/raycaster.htm)
Interactive raycaster for the Commodore 64 in 254 bytes (with source code) (http://pouet.net/prod.php?which=61298)
Ray tracing
In computer graphics, ray tracing is a technique for generating an image by tracing the path of light through pixels in an image plane and simulating the effects of its encounters with virtual objects. The technique is capable of producing a very high degree of visual realism, usually higher than that of typical scanline rendering methods, but at a greater computational cost. This makes ray tracing best suited for applications where the image can be rendered slowly ahead of time, such as in still images and film and television visual effects, and more poorly suited for real-time applications like video games where speed is critical. Ray tracing is capable of simulating a wide variety of optical effects, such as reflection and refraction, scattering, and dispersion phenomena (such as chromatic aberration).
This recursive ray tracing of a sphere demonstrates the effects of shallow depth of field, area light sources and diffuse interreflection.
Algorithm overview
Optical ray tracing describes a method for producing visual images constructed in 3D computer graphics environments, with more photorealism than either ray casting or scanline rendering techniques. It works by tracing a path from an imaginary eye through each pixel in a virtual screen, and calculating the color of the object visible through it. Scenes in ray tracing are described mathematically by a programmer or by a visual artist (typically using intermediary tools). Scenes may also incorporate data from images and models captured by means such as digital photography.
The ray tracing algorithm builds an image by extending rays into a scene
Typically, each ray must be tested for intersection with some subset of all the objects in the scene. Once the nearest object has been identified, the algorithm will estimate the incoming light at the point of intersection, examine the material properties of the object, and combine this information to calculate the final color of the pixel. Certain illumination algorithms and reflective or translucent materials may require more rays to be re-cast into the scene. It may at first seem counterintuitive or "backwards" to send rays away from the camera, rather than into it (as actual light does in reality), but doing so is many orders of magnitude more efficient. Since the overwhelming majority of light rays from a given light source do not make it directly into the viewer's eye, a "forward" simulation could
potentially waste a tremendous amount of computation on light paths that are never recorded. Therefore, the shortcut taken in ray tracing is to presuppose that a given ray intersects the view frame. After either a maximum number of reflections or a ray traveling a certain distance without intersection, the ray ceases to travel and the pixel's value is updated.
Disadvantages
A serious disadvantage of ray tracing is performance. Scanline algorithms and other algorithms use data coherence to share computations between pixels, while ray tracing normally starts the process anew, treating each eye ray separately. However, this separation offers other advantages, such as the ability to shoot more rays as needed to perform spatial anti-aliasing and improve image quality where needed. Although it does handle interreflection and optical effects such as refraction accurately, traditional ray tracing is also not necessarily photorealistic. True photorealism occurs when the rendering equation is closely approximated or fully implemented. Implementing the rendering equation gives true photorealism, as the equation describes every physical effect of light flow. However, this is usually infeasible given the computing resources required. The realism of all rendering methods can be evaluated as an approximation to the equation. Ray tracing, if it is limited to Whitted's algorithm, is not necessarily the most realistic. Methods that trace rays, but include additional techniques (photon mapping, path tracing), give far more accurate simulation of real-world lighting.
The number of reflections a ray can take and how it is affected each time it encounters a surface is all controlled via software settings during ray tracing. Here, each ray was allowed to reflect up to 16 times. Multiple reflections of reflections can thus be seen. Created with Cobalt
The number of refractions a ray can take and how it is affected each time it encounters a surface is all controlled via software settings during ray tracing. Here, each ray was allowed to refract and reflect up to 9 times. Fresnel reflections were used. Also note the caustics. Created with Vray
It is also possible to approximate the equation using ray casting in a different way than what is traditionally considered to be "ray tracing". For performance, rays can be clustered according to their direction, with rasterization hardware and depth peeling used to efficiently sum the rays.[5]
Tracing rays from light sources toward reflective objects, and then on to the eye, will better sample this phenomenon. This integration of eye-based and light-based rays is often expressed as bidirectional path tracing, in which paths are traced from both the eye and lights, and the paths subsequently joined by a connecting ray after some length.[6][7]

Photon mapping is another method that uses both light-based and eye-based ray tracing; in an initial pass, energetic photons are traced along rays from the light source so as to compute an estimate of radiant flux as a function of 3-dimensional space (the eponymous photon map itself). In a subsequent pass, rays are traced from the eye into the scene to determine the visible surfaces, and the photon map is used to estimate the illumination at the visible surface points.[8][9] The advantage of photon mapping versus bidirectional path tracing is the ability to achieve significant reuse of photons, reducing computation, at the cost of statistical bias.

An additional problem occurs when light must pass through a very narrow aperture to illuminate the scene (consider a darkened room, with a door slightly ajar leading to a brightly lit room), or a scene in which most points do not have direct line-of-sight to any light source (such as with ceiling-directed light fixtures or torchieres). In such cases, only a very small subset of paths will transport energy; Metropolis light transport is a method which begins with a random search of the path space, and when energetic paths are found, reuses this information by exploring the nearby space of rays.[10]

To the right is an image showing a simple example of a path of rays recursively generated from the camera (or eye) to the light source using the above algorithm. A diffuse surface reflects light in all directions. First, a ray is created at an eyepoint and traced through a pixel and into the scene, where it hits a diffuse surface. From that surface the algorithm recursively generates a reflection ray, which is traced through the scene, where it hits another diffuse surface. Finally, another reflection ray is generated and traced through the scene, where it hits the light source and is absorbed. The color of the pixel now depends on the colors of the first and second diffuse surface and the color of the light emitted from the light source. For example, if the light source emitted white light and the two diffuse surfaces were blue, then the resulting color of the pixel is blue.
Example
As a demonstration of the principles involved in ray tracing, consider how one would find the intersection between a ray and a sphere. In vector notation, the equation of a sphere with center $c$ and radius $r$ is

$$\|x - c\|^2 = r^2.$$

Any point on a ray starting from point $s$ with direction $d$ (here $d$ is a unit vector) can be written as

$$x = s + t d,$$

where $t$ is the distance between $x$ and $s$. In our problem, we know $c$, $r$, $s$ and $d$, and we need to find $t$. Therefore, we substitute for $x$:

$$\|s + t d - c\|^2 = r^2.$$

Let $v = s - c$; expanding, and using $\|d\|^2 = 1$, gives the quadratic equation

$$t^2 + 2 (v \cdot d)\, t + (\|v\|^2 - r^2) = 0,$$

whose solutions are

$$t = -(v \cdot d) \pm \sqrt{(v \cdot d)^2 - (\|v\|^2 - r^2)}.$$

The two values of $t$ found by solving this equation are the two such that $s + t d$ are the points where the ray intersects the sphere.

Any value which is negative does not lie on the ray, but rather in the opposite half-line (i.e. the one starting from $s$ with opposite direction). If the quantity under the square root (the discriminant) is negative, then the ray does not intersect the sphere. Let us suppose now that there is at least a positive solution, and let $t$ be the minimal one. In addition, let us suppose that the sphere is the nearest object in our scene intersecting our ray, and that it is made of a reflective material. We need to find in which direction the light ray is reflected. The laws of reflection state that the angle of reflection is equal and opposite to the angle of incidence between the incident ray and the normal to the sphere. The normal to the sphere is simply

$$n = \frac{y - c}{\|y - c\|},$$

where $y = s + t d$ is the intersection point found before. The reflection direction can be found by a reflection of $d$ with respect to $n$, that is

$$\rho = d - 2 (n \cdot d)\, n.$$

Now we only need to compute the intersection of the latter ray with our field of view, to get the pixel which our reflected light ray will hit. Lastly, this pixel is set to an appropriate color, taking into account how the color of the original light source and the one of the sphere are combined by the reflection. This is merely the math behind the line-sphere intersection and the subsequent determination of the colour of the pixel being calculated. There is, of course, far more to the general process of ray tracing, but this demonstrates an example of the algorithms used.
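The algebra above translates directly into code. A minimal Python version (vectors as 3-tuples, and $d$ assumed to be normalized) might look like this:

    import math

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    def ray_sphere_intersect(s, d, c, r):
        """Nearest intersection distance t of the ray x = s + t*d (d a unit
        vector) with the sphere |x - c|^2 = r^2, or None if the ray misses."""
        v = tuple(si - ci for si, ci in zip(s, c))   # v = s - c
        b = dot(v, d)                                # v . d
        disc = b * b - (dot(v, v) - r * r)           # the discriminant
        if disc < 0.0:
            return None                              # no real roots: no intersection
        sqrt_disc = math.sqrt(disc)
        t = -b - sqrt_disc                           # smaller root first
        if t < 0.0:
            t = -b + sqrt_disc                       # behind the origin: try the far root
        return t if t >= 0.0 else None

Given a hit at distance t, the intersection point is y = s + t*d, the normal is (y - c)/r, and the reflected direction follows the last formula above.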
Bounding volumes
We enclose groups of objects in sets of hierarchical bounding volumes and first test for intersection with the bounding volume, and then, only if there is an intersection, against the objects enclosed by the volume. Bounding volumes should be easy to test for intersection, for example a sphere or box (slab); a box test is sketched after the list below. The best bounding volume will be determined by the shape of the underlying object or objects. For example, if the objects are long and thin, then a sphere will enclose mainly empty space and a box is much better. Boxes are also easier for hierarchical bounding volumes.

Note that using a hierarchical system like this (assuming it is done carefully) changes the intersection computational time from a linear dependence on the number of objects to something between linear and a logarithmic dependence. This is because, for a perfect case, each intersection test would divide the possibilities by two, and we would have a binary tree type structure. Spatial subdivision methods, discussed below, try to achieve this.

Kay & Kajiya give a list of desired properties for hierarchical bounding volumes:
Subtrees should contain objects that are near each other, and the further down the tree, the closer the objects should be.
The volume of each node should be minimal.
The sum of the volumes of all bounding volumes should be minimal.
Greater attention should be placed on the nodes near the root, since pruning a branch near the root will remove more potential objects than one farther down the tree.
The time spent constructing the hierarchy should be much less than the time saved by using it.
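Axis-aligned boxes are usually tested with the 'slab' method: the ray is intersected with the three pairs of parallel planes bounding the box, and it hits the box only if the three resulting parameter intervals overlap. A minimal Python sketch (inv_dir holds the precomputed reciprocals of the ray direction, assumed non-zero):

    # Slab test: a ray hits an axis-aligned box iff the t-intervals in which
    # it lies inside the x-, y-, and z-slabs all overlap.
    def ray_box_intersect(origin, inv_dir, box_min, box_max):
        t_near, t_far = 0.0, float("inf")
        for axis in range(3):
            t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
            t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
            if t1 > t2:
                t1, t2 = t2, t1          # order the slab entry/exit distances
            t_near = max(t_near, t1)     # latest entry over all slabs
            t_far = min(t_far, t2)       # earliest exit over all slabs
            if t_near > t_far:
                return False             # intervals do not overlap: miss
        return True

In a bounding volume hierarchy, this cheap test gates the far more expensive tests against the objects (or child volumes) inside each node.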
In real time
The first implementation of a "real-time" ray tracer was credited at the 2005 SIGGRAPH computer graphics conference as being the REMRT/RT tools developed in 1986 by Mike Muuss for the BRL-CAD solid modeling system. Initially published in 1987 at USENIX, the BRL-CAD ray tracer is the first known implementation of a parallel network distributed ray-tracing system that achieved several frames per second in rendering performance.[11] This performance was attained by means of the highly optimized yet platform-independent LIBRT ray-tracing engine in BRL-CAD and by using solid implicit CSG geometry on several shared-memory parallel machines over a commodity network. BRL-CAD's ray tracer, including the REMRT/RT tools, continues to be available and developed today as open source software.[12]

Since then, there have been considerable efforts and research towards implementing ray tracing at real-time speeds for a variety of purposes on stand-alone desktop configurations. These purposes include interactive 3D graphics applications such as demoscene productions, computer and video games, and image rendering. Some real-time software 3D engines based on ray tracing have been developed by hobbyist demo programmers since the late 1990s.[13] The OpenRT project includes a highly optimized software core for ray tracing along with an OpenGL-like API in order to offer an alternative to the current rasterisation-based approach for interactive 3D graphics.

Ray tracing hardware, such as the experimental Ray Processing Unit developed at Saarland University, has been designed to accelerate some of the computationally intensive operations of ray tracing. On March 16, 2007, the University of Saarland revealed an implementation of a high-performance ray tracing engine that allowed computer games to be rendered via ray tracing without intensive resource usage.[14] On June 12, 2008, Intel demonstrated a special version of Enemy Territory: Quake Wars, titled Quake Wars: Ray Traced, using ray tracing for rendering, running in basic HD (720p) resolution. ETQW operated at 14-29 frames per second. The demonstration ran on a 16-core (4 socket, 4 core) Xeon Tigerton system running at 2.93 GHz.[15]

At SIGGRAPH 2009, Nvidia announced OptiX, a free API for real-time ray tracing on Nvidia GPUs. The API exposes seven programmable entry points within the ray tracing pipeline, allowing for custom cameras, ray-primitive
Ray tracing intersections, shaders, shadowing, etc. This flexibility enables bidirectional path tracing, Metropolis light transport, and many other rendering algorithms that cannot be implemented with tail recursion.[16] Nvidia has shipped over 350,000,000 OptiX capable GPUs as of April 2013. OptiX-based renderers are used in Adobe AfterEffects, Bunkspeed Shot, Autodesk Maya, 3ds max, and many other renderers. Imagination Technologies offers a free API called OpenRL which accelerates tail recursive ray tracing-based rendering algorithms and, together with their proprietary ray tracing hardware, works with Autodesk Maya to provide what 3D World calls "real-time raytracing to the everyday artist".[17]
References
[1] Appel, A. (1968). Some techniques for shading machine rendering of solids (http://graphics.stanford.edu/courses/Appel.pdf). AFIPS Conference Proc. 32, pp. 37-45.
[2] Whitted, T. (1979). An improved illumination model for shaded display (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.156.1534). Proceedings of the 6th annual conference on Computer graphics and interactive techniques.
[4] A. Chalmers, T. Davis, and E. Reinhard. Practical parallel rendering, ISBN 1-56881-179-9. AK Peters, Ltd., 2002.
[5] GPU Gems 2, Chapter 38. High-Quality Global Illumination Rendering Using Rasterization, Addison-Wesley (http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter38.html)
[8] Global Illumination using Photon Maps (http://graphics.ucsd.edu/~henrik/papers/photon_map/global_illumination_using_photon_maps_egwr96.pdf)
[9] Photon Mapping - Zack Waters (http://web.cs.wpi.edu/~emmanuel/courses/cs563/write_ups/zackw/photon_mapping/PhotonMapping.html)
[10] http://graphics.stanford.edu/papers/metro/metro.pdf
[11] See Proceedings of 4th Computer Graphics Workshop, Cambridge, MA, USA, October 1987. Usenix Association, 1987. pp. 86-98.
[17] 3D World, April 2013
External links
What is ray tracing? (http://www.codermind.com/articles/Raytracer-in-C++-Introduction-What-is-ray-tracing.html)
Ray Tracing and Gaming - Quake 4: Ray Traced Project (http://www.pcper.com/article.php?aid=334)
Ray tracing and Gaming - One Year Later (http://www.pcper.com/article.php?aid=506)
Interactive Ray Tracing: The replacement of rasterization? (http://www.few.vu.nl/~kielmann/theses/avdploeg.pdf)
A series of tutorials on implementing a raytracer using C++ (http://devmaster.net/posts/raytracing-theory-implementation-part-1-introduction)
Tutorial on implementing a raytracer in PHP (http://quaxio.com/raytracer/)
The Compleat Angler (1978) (http://www.youtube.com/watch?v=WV4qXzM641o)
3D projection
3D projection is any method of mapping three-dimensional points to a two-dimensional plane. As most current methods for displaying graphical data are based on planar two-dimensional media, the use of this type of projection is widespread, especially in computer graphics, engineering and drafting.
Orthographic projection
When the human eye looks at a scene, objects in the distance appear smaller than objects close by. Orthographic projection ignores this effect to allow the creation of to-scale drawings for construction and engineering. Orthographic projections are a small set of transforms often used to show profile, detail or precise measurements of a three-dimensional object. Common names for orthographic projections include plane, cross-section, bird's-eye, and elevation.

If the normal of the viewing plane (the camera direction) is parallel to one of the primary axes (the x, y, or z axis), the mathematical transformation is as follows. To project the 3D point $a_x$, $a_y$, $a_z$ onto the 2D point $b_x$, $b_y$ using an orthographic projection parallel to the y axis (profile view), the following equations can be used:

$$b_x = s_x a_x + c_x, \qquad b_y = s_z a_z + c_z,$$

where the vector $s$ is an arbitrary scale factor, and $c$ is an arbitrary offset. These constants are optional, and can be used to properly align the viewport. Using matrix multiplication, the equations become:

$$\begin{bmatrix} b_x \\ b_y \end{bmatrix} = \begin{bmatrix} s_x & 0 & 0 \\ 0 & 0 & s_z \end{bmatrix} \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix} + \begin{bmatrix} c_x \\ c_z \end{bmatrix}.$$

While orthographically projected images represent the three-dimensional nature of the object projected, they do not represent the object as it would be recorded photographically or perceived by a viewer observing it directly. In particular, parallel lengths at all points in an orthographically projected image are of the same scale, regardless of whether they are far away or near to the virtual viewer. As a result, lengths near to the viewer are not foreshortened as they would be in a perspective projection.
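A direct transcription of this transform in Python (the scale and offset values here are arbitrary illustrations):

    def orthographic_project(a, s=(1.0, 1.0), c=(0.0, 0.0)):
        """Project the 3D point a = (ax, ay, az) parallel to the y axis:
        bx = sx*ax + cx,  by = sz*az + cz."""
        ax, ay, az = a     # ay (the depth along the y axis) is discarded
        return (s[0] * ax + c[0], s[1] * az + c[1])

    print(orthographic_project((2.0, 5.0, 3.0)))   # (2.0, 3.0)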
Perspective projection
When the human eye views a scene, objects in the distance appear smaller than objects close by; this is known as perspective. While orthographic projection ignores this effect to allow accurate measurements, perspective projection shows distant objects as smaller to provide additional realism.

The perspective projection requires a more involved definition as compared to orthographic projections. A conceptual aid to understanding the mechanics of this projection is to imagine the 2D projection as though the object(s) are being viewed through a camera viewfinder. The camera's position, orientation, and field of view control the behavior of the projection transformation. The following variables are defined to describe this transformation:

$a_{x,y,z}$ - the 3D position of a point A that is to be projected.
$c_{x,y,z}$ - the 3D position of a point C representing the camera.
$\theta_{x,y,z}$ - the orientation of the camera (represented, for instance, by Tait-Bryan angles).
$e_{x,y,z}$ - the viewer's position relative to the display surface.[1]

The result is $b_{x,y}$, the 2D projection of A. When $c_{x,y,z} = \langle 0,0,0 \rangle$ and $\theta_{x,y,z} = \langle 0,0,0 \rangle$, the 3D vector $\langle 1,2,0 \rangle$ is projected to the 2D vector $\langle 1,2 \rangle$.

Otherwise, to compute $b_{x,y}$ we first define $d_{x,y,z}$ as the position of point A with respect to a coordinate system defined by the camera, with origin in C and rotated by $\theta$ with respect to the initial coordinate system. This is achieved by subtracting $c$ from $a$ and then applying a rotation by $-\theta$ to the result. This transformation is often called a camera transform, and can be expressed as follows, expressing the rotation in terms of rotations about the x, y, and z axes (these calculations assume that the axes are ordered as a left-handed system of axes):[2][3]

$$\begin{bmatrix} d_x \\ d_y \\ d_z \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_x & \sin\theta_x \\ 0 & -\sin\theta_x & \cos\theta_x \end{bmatrix} \begin{bmatrix} \cos\theta_y & 0 & -\sin\theta_y \\ 0 & 1 & 0 \\ \sin\theta_y & 0 & \cos\theta_y \end{bmatrix} \begin{bmatrix} \cos\theta_z & \sin\theta_z & 0 \\ -\sin\theta_z & \cos\theta_z & 0 \\ 0 & 0 & 1 \end{bmatrix} \left( \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix} - \begin{bmatrix} c_x \\ c_y \\ c_z \end{bmatrix} \right)$$

This representation corresponds to rotating by three Euler angles (more properly, Tait-Bryan angles), using the xyz convention, which can be interpreted either as "rotate about the extrinsic axes (axes of the scene) in the order z, y, x (reading right-to-left)" or "rotate about the intrinsic axes (axes of the camera) in the order x, y, z (reading left-to-right)". Note that if the camera is not rotated ($\theta_{x,y,z} = \langle 0,0,0 \rangle$), then the matrices drop out (as identities), and this reduces to simply a shift: $d = a - c$.

This transformed point can then be projected onto the 2D plane using the formula (here, x/y is used as the projection plane; literature also may use x/z):[4]

$$b_x = \frac{e_z}{d_z} d_x + e_x, \qquad b_y = \frac{e_z}{d_z} d_y + e_y.$$

Writing the projection as a matrix in homogeneous coordinates, in conjunction with an argument using similar triangles, leads to division by the homogeneous coordinate $f_w = d_z / e_z$, giving the same result.

The distance of the viewer from the display surface, $e_z$, directly relates to the field of view: $\alpha = 2 \arctan(1/e_z)$ is the viewed angle. (Note: this assumes that you map the points (-1,-1) and (1,1) to the corners of your viewing surface.) The above equations can also be rewritten in terms of the sizes of the display and of the recording surface (CCD or film), and the distances from the recorded point and from the recording surface to the entrance pupil (camera center).

Subsequent clipping and scaling operations may be necessary to map the 2D plane onto any particular display media.

Diagram

To determine which screen x-coordinate corresponds to a point at $(A_x, A_z)$, multiply the point coordinates by

$$B_x = A_x \frac{B_z}{A_z},$$

where $B_x$ is the screen x coordinate, $A_x$ is the model x coordinate, $B_z$ is the focal length (the axial distance from the camera center to the image plane), and $A_z$ is the subject distance. Because the camera is in 3D, the same works for the screen y-coordinate, substituting $y$ for $x$ in the above equation.
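Putting the camera transform and the projection together gives a compact routine. The following Python sketch follows the equations above (left-handed axes, angles in radians); it is an illustration under those assumptions, not production code:

    import math

    def perspective_project(a, cam, theta, e):
        """Project 3D point a seen from camera position cam with orientation
        theta = (tx, ty, tz), onto a display surface at e = (ex, ey, ez):
        d = Rx(tx) Ry(ty) Rz(tz) (a - cam), then bx = ez/dz*dx + ex, etc."""
        x, y, z = (a[i] - cam[i] for i in range(3))
        sx, cx = math.sin(theta[0]), math.cos(theta[0])
        sy, cy = math.sin(theta[1]), math.cos(theta[1])
        sz, cz = math.sin(theta[2]), math.cos(theta[2])
        x, y = cz * x + sz * y, -sz * x + cz * y   # rotate about z
        x, z = cy * x - sy * z, sy * x + cy * z    # rotate about y
        y, z = cx * y + sx * z, -sx * y + cx * z   # rotate about x
        if z <= 0.0:
            return None                            # point is behind the camera
        return (e[2] / z * x + e[0], e[2] / z * y + e[1])

    # An unrotated camera at the origin, display surface at ez = 1:
    # the point (1, 2, 5) lands at (0.2, 0.4), scaled by ez/dz = 1/5.
    print(perspective_project((1.0, 2.0, 5.0), (0.0, 0.0, 0.0),
                              (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))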
References
[1] .
External links
A case study in camera projection (http://nccasymposium.bmth.ac.uk/2007/muhittin_bilginer/index.html)
Creating 3D Environments from Digital Photographs (http://nccasymposium.bmth.ac.uk/2009/McLaughlin_Chris/McLaughlin_C_WebBasedNotes.pdf)
Further reading
Kenneth C. Finney (2004). 3D Game Programming All in One (http://books.google.com/?id=cknGqaHwPFkC&pg=PA93&dq=%223D+projection%22). Thomson Course. p. 93. ISBN 978-1-59200-136-1.
Koehler, Dr. Ralph. 2D/3D Graphics and Splines with Source Code. ISBN 0759611874.
Light
Visible light (commonly referred to simply as light) is electromagnetic radiation that is visible to the human eye, and is responsible for the sense of sight.[1] Visible light has a wavelength in the range of about 380 nanometres (nm), or 380×10⁻⁹ m, to about 740 nanometres, between the invisible infrared, with longer wavelengths, and the invisible ultraviolet, with shorter wavelengths. Primary properties of visible light are intensity, propagation direction, frequency or wavelength spectrum, and polarisation, while its speed in a vacuum, 299,792,458 meters per second, is one of the fundamental constants of nature. Visible light, as with all types of electromagnetic radiation (EMR), is experimentally found to always move at this speed in vacuum.
The Sun is Earth's primary source of light. About 44% of the sun's electromagnetic radiation that reaches the ground is in the visible light range.
In common with all types of EMR, visible light is emitted and absorbed in tiny "packets" called photons, and exhibits properties of both waves and particles. This property is referred to as the waveparticle duality. The study of light, known as optics, is an important research area in modern physics. In physics, the term light sometimes refers to electromagnetic radiation of any wavelength, whether visible or not.[2][3] This article focuses on visible light. See the electromagnetic radiation article for the general term.
Speed of light
The speed of light in a vacuum is defined to be exactly 299,792,458 m/s (approximately 186,282 miles per second). The fixed value of the speed of light in SI units results from the fact that the metre is now defined in terms of the speed of light. All forms of electromagnetic radiation move at exactly this same speed in vacuum.

Different physicists have attempted to measure the speed of light throughout history. Galileo attempted to measure the speed of light in the seventeenth century. An early experiment to measure the speed of light was conducted by Ole Rømer, a Danish physicist, in 1676. Using a telescope, Rømer observed the motions of Jupiter and one of its moons, Io. Noting discrepancies in the apparent period of Io's orbit, he calculated that light takes about 22 minutes to traverse the diameter of Earth's orbit.[4] However, its size was not known at that time. If Rømer had known the diameter of the Earth's orbit, he would have calculated a speed of 227,000,000 m/s.

Another, more accurate, measurement of the speed of light was performed in Europe by Hippolyte Fizeau in 1849. Fizeau directed a beam of light at a mirror several kilometers away. A rotating cog wheel was placed in the path of the light beam as it traveled from the source, to the mirror and then returned to its origin. Fizeau found that at a certain rate of rotation, the beam would pass through one gap in the wheel on the way out and the next gap on the way back. Knowing the distance to the mirror, the number of teeth on the wheel, and the rate of rotation, Fizeau was able to calculate the speed of light as 313,000,000 m/s.
Léon Foucault used an experiment with rotating mirrors to obtain a value of 298,000,000 m/s in 1862. Albert A. Michelson conducted experiments on the speed of light from 1877 until his death in 1931. He refined Foucault's methods in 1926, using improved rotating mirrors to measure the time it took light to make a round trip from Mt. Wilson to Mt. San Antonio in California. The precise measurements yielded a speed of 299,796,000 m/s.

The effective velocity of light in various transparent substances containing ordinary matter is less than in a vacuum. For example, the speed of light in water is about 3/4 of that in a vacuum. However, the slowing process in matter is thought to result not from actual slowing of particles of light, but rather from their absorption and re-emission by charged particles in matter.

As an extreme example of light-slowing in matter, two independent teams of physicists were able to bring light to a "complete standstill" by passing it through a Bose–Einstein condensate of the element rubidium: one team at Harvard University and the Rowland Institute for Science in Cambridge, Mass., and the other at the Harvard–Smithsonian Center for Astrophysics, also in Cambridge.[5] However, the popular description of light being "stopped" in these experiments refers only to light being stored in the excited states of atoms, then re-emitted at an arbitrary later time, as stimulated by a second laser pulse. During the time it had "stopped" it had ceased to be light.
EMR in the visible light region consists of quanta (called photons) that are at the lower end of the energies capable of causing electronic excitation within molecules, which leads to changes in the bonding or chemistry of the molecule. Below the visible light spectrum, EMR becomes invisible to humans (infrared) because its photons no longer have enough individual energy to cause a lasting molecular change (a change in conformation) in the visual molecule retinal in the human retina, the change which triggers the sensation of vision.

There exist animals that are sensitive to various types of infrared, but not by means of quantum absorption. Infrared sensing in snakes depends on a kind of natural thermal imaging, in which tiny packets of cellular water are raised in temperature by the infrared radiation. EMR in this range causes molecular vibration and heating effects, and this is how living animals detect it.

Above the range of visible light, ultraviolet light becomes invisible to humans, mostly because it is absorbed by the tissues of the eye, and in particular the lens. Furthermore, the rods and cones located at the back of the human eye cannot detect the short ultraviolet wavelengths, and are in fact damaged by ultraviolet rays, as in the condition known as snow blindness.[6] Many animals with eyes that do not require lenses (such as insects and shrimp) are able to detect ultraviolet directly, by quantum photon-absorption mechanisms, in much the same chemical way that humans detect visible light.
Optics
The study of light and the interaction of light and matter is termed optics. The observation and study of optical phenomena such as rainbows and the aurora borealis offer many clues as to the nature of light.
Refraction
Refraction is the bending of light rays when passing through a surface between one transparent material and another. It is described by Snell's law:

n1 sin θ1 = n2 sin θ2,

where θ1 is the angle between the ray and the surface normal in the first medium, θ2 is the angle between the ray and the surface normal in the second medium, and n1 and n2 are the indices of refraction, with n = 1 in a vacuum and n > 1 in a transparent substance.

When a beam of light crosses the boundary between a vacuum and another medium, or between two different media, the wavelength of the light changes, but the frequency remains constant. If the beam of light is not orthogonal (or rather normal) to the boundary, the change in wavelength results in a change in the direction of the beam. This change of direction is known as refraction.

The refractive quality of lenses is frequently used to manipulate light in order to change the apparent size of images. Magnifying glasses, spectacles, contact lenses, microscopes and refracting telescopes are all examples of this manipulation.
An example of refraction of light. The straw appears bent, because of refraction of light as it enters liquid from air.
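Snell's law translates directly into a few lines of code. A minimal sketch (the function name and the refractive indices in the example are illustrative, not from the text above):

```python
import math

def refract_angle(theta1_deg, n1, n2):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
    Returns the refraction angle in degrees, or None when there is no
    real solution (total internal reflection)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None  # total internal reflection
    return math.degrees(math.asin(s))

# Light entering water (n ~ 1.33) from air (n ~ 1.0) at 45 degrees
# bends toward the normal:
print(refract_angle(45.0, 1.0, 1.33))  # ~32.1 degrees
```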
Light sources
There are many sources of light. The most common light sources are thermal: a body at a given temperature emits a characteristic spectrum of black-body radiation. A simple thermal source is sunlight, the radiation emitted by the photosphere of the Sun at around 6,000 kelvin: it peaks in the visible region of the electromagnetic spectrum when plotted in wavelength units,[7] and roughly 44% of sunlight energy that reaches the ground is visible.[8] Another example is incandescent light bulbs, which emit only around 10% of their energy as visible light and the remainder as infrared. A common thermal light source in history is the glowing solid particles in flames, but these also emit most of their radiation in the infrared, and only a fraction in the visible spectrum.

A cloud illuminated by sunlight

The peak of the blackbody spectrum is in the deep infrared, at about 10 micrometre wavelength, for relatively cool objects like human beings. As the temperature increases, the peak shifts to shorter wavelengths, producing first a red glow, then a white one, and finally a blue-white colour as the peak moves out of the visible part of the spectrum and into the ultraviolet. These colours can be seen when metal is heated to "red hot" or "white hot". Blue-white thermal emission is not often seen, except in stars (the commonly seen pure-blue colour in a gas flame or a welder's torch is in fact due to molecular emission, notably by CH radicals, emitting a wavelength band around 425 nm; it is not seen in stars or pure thermal radiation).

Atoms emit and absorb light at characteristic energies. This produces "emission lines" in the spectrum of each atom. Emission can be spontaneous, as in light-emitting diodes, gas discharge lamps (such as neon lamps and neon signs, mercury-vapor lamps, etc.), and flames (light from the hot gas itself; so, for example, sodium in a gas flame emits characteristic yellow light). Emission can also be stimulated, as in a laser or a microwave maser.

Deceleration of a free charged particle, such as an electron, can produce visible radiation: cyclotron radiation, synchrotron radiation, and bremsstrahlung are all examples of this. Particles moving through a medium faster than the speed of light in that medium can produce visible Cherenkov radiation. Certain chemicals produce visible radiation by chemoluminescence; in living things, this process is called bioluminescence. For example, fireflies produce light by this means, and boats moving through water can disturb plankton which produce a glowing wake.

Certain substances produce light when they are illuminated by more energetic radiation, a process known as fluorescence. Some substances emit light slowly after excitation by more energetic radiation; this is known as phosphorescence. Phosphorescent materials can also be excited by bombarding them with subatomic particles. Cathodoluminescence is one example; this mechanism is used in cathode ray tube television sets and computer monitors.

Certain other mechanisms can produce light:
bioluminescence
Cherenkov radiation
electroluminescence
scintillation
sonoluminescence
triboluminescence
When the concept of light is intended to include very-high-energy photons (gamma rays), additional generation mechanisms include:
particle–antiparticle annihilation
radioactive decay
A city illuminated by artificial lighting
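The shift of the blackbody peak toward shorter wavelengths at higher temperatures, described above, is Wien's displacement law. A minimal sketch (the function name is illustrative):

```python
def wien_peak_wavelength_m(temperature_k):
    """Wien's displacement law: a blackbody spectrum plotted in
    wavelength units peaks at lambda_max = b / T."""
    WIEN_B = 2.898e-3  # Wien's displacement constant, metre-kelvins
    return WIEN_B / temperature_k

print(wien_peak_wavelength_m(6000))  # ~4.8e-7 m: in the visible range
print(wien_peak_wavelength_m(310))   # ~9.3e-6 m: deep infrared (human body)
```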
SI radiometry units

Radiant energy, symbol Qe:[9] joule (J).
Radiant flux, symbol Φe:[10] watt (W). Radiant energy per unit time, also called radiant power.
Spectral power, symbol Φeλ:[10][11] watt per metre (W·m⁻¹). Radiant power per wavelength.
Radiant intensity, symbol Ie:[10] watt per steradian (W·sr⁻¹). Power per unit solid angle.
Spectral intensity, symbol Ieλ:[11] watt per steradian per metre (W·sr⁻¹·m⁻¹). Radiant intensity per wavelength.
Radiance, symbol Le:[10] watt per steradian per square metre (W·sr⁻¹·m⁻²). Power per unit solid angle per unit projected source area; confusingly called "intensity" in some other fields of study.
Spectral radiance, symbol Leλ[11] or Leν:[12] watt per steradian per metre³ (W·sr⁻¹·m⁻³) or watt per steradian per square metre per hertz (W·sr⁻¹·m⁻²·Hz⁻¹). Commonly measured in W·sr⁻¹·m⁻²·nm⁻¹, with surface area and either wavelength or frequency.
Irradiance, symbol Ee:[10] watt per square metre (W·m⁻²); dimension M·T⁻³. Power incident on a surface, also called radiant flux density; sometimes confusingly called "intensity" as well. Commonly measured in W·m⁻²·nm⁻¹ or in 10⁻²² W·m⁻²·Hz⁻¹, known as the solar flux unit.[13]
Spectral irradiance, symbol Eeλ[11] or Eeν:[12] watt per metre³ (W·m⁻³) or watt per square metre per hertz (W·m⁻²·Hz⁻¹); dimension M·L⁻¹·T⁻³ or M·T⁻².
Radiant exitance / radiant emittance, symbol Me:[10] watt per square metre (W·m⁻²); dimension M·T⁻³. Power emitted from a surface.
Spectral radiant exitance / spectral radiant emittance, symbol Meλ[11] or Meν:[12] watt per metre³ (W·m⁻³) or watt per square metre per hertz (W·m⁻²·Hz⁻¹); dimension M·L⁻¹·T⁻³ or M·T⁻².
Radiosity, symbol Je or Jeλ:[11] watt per square metre (W·m⁻²); dimension M·T⁻³. Emitted plus reflected power leaving a surface.
Radiant exposure, symbol He: joule per square metre (J·m⁻²); dimension M·T⁻².
Radiant energy density, symbol ωe: joule per metre³ (J·m⁻³); dimension M·L⁻¹·T⁻².
SI photometry units

Luminous energy, symbol Qv:[14] lumen second (lm·s). Units are sometimes called talbots.
Luminous flux, symbol Φv:[15] lumen (= cd·sr) (lm). Also called luminous power.
Luminous intensity, symbol Iv:[15] candela (= lm/sr) (cd). An SI base unit; luminous flux per unit solid angle.
Luminance, symbol Lv:[16] candela per square metre (cd/m²). Units are sometimes called nits.
Illuminance, symbol Ev: lux (= lm/m²) (lx). Used for light incident on a surface.
Luminous emittance, symbol Mv: lux (= lm/m²) (lx). Used for light emitted from a surface.
Luminous exposure, symbol Hv: lux second (lx·s).
Luminous energy density, symbol ωv: lumen second per metre³ (lm·s·m⁻³); dimension L⁻³·T·J.[16]
Luminous efficacy, symbol η: lumen per watt (lm/W); dimension M⁻¹·L⁻²·T³·J.[16] Ratio of luminous flux to radiant flux; also called luminous coefficient.
The photometry units are different from most systems of physical units in that they take into account how the human eye responds to light. The cone cells in the human eye are of three types which respond differently across the visible spectrum, and the cumulative response peaks at a wavelength of around 555 nm. Therefore, two sources of light which produce the same intensity (W/m²) of visible light do not necessarily appear equally bright. The photometry units are designed to take this into account, and therefore are a better representation of how "bright" a light appears to be than raw intensity. They relate to raw power by a quantity called luminous efficacy, and are used for purposes like determining how to best achieve sufficient illumination for various tasks in indoor and outdoor settings. The illumination measured by a photocell sensor does not necessarily correspond to what is perceived by the human eye, and, without filters (which may be costly), photocells and charge-coupled devices (CCDs) tend to respond to some infrared, ultraviolet, or both.
Light pressure
Light exerts physical pressure on objects in its path, a phenomenon which can be deduced from Maxwell's equations, but can be more easily explained by the particle nature of light: photons strike and transfer their momentum. Light pressure is equal to the power of the light beam divided by c, the speed of light. Due to the magnitude of c, the effect of light pressure is negligible for everyday objects. For example, a one-milliwatt laser pointer exerts a force of about 3.3 piconewtons on the object being illuminated; thus, one could lift a U.S. penny with laser pointers, but doing so would require about 30 billion 1-mW laser pointers.[17]

However, in nanometre-scale applications such as NEMS, the effect of light pressure is more significant, and exploiting light pressure to drive NEMS mechanisms and to flip nanometre-scale physical switches in integrated circuits is an active area of research.[18] At larger scales, light pressure can cause asteroids to spin faster,[19] acting on their irregular shapes as on the vanes of a windmill. The possibility of making solar sails that would accelerate spaceships in space is also under investigation.[20][21]

Although the motion of the Crookes radiometer was originally attributed to light pressure, this interpretation is incorrect; the characteristic Crookes rotation is the result of a partial vacuum.[22] This should not be confused with the Nichols radiometer, in which the (slight) motion caused by torque (though not enough for full rotation against friction) is directly caused by light pressure.[23]
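The force figure quoted above follows directly from force = power / c for an absorbing surface (a perfectly reflecting one receives twice that). A minimal sketch:

```python
def light_force_newtons(beam_power_watts, reflective=False):
    """Radiation pressure force: P / c on an absorbing surface,
    2P / c if the beam is reflected straight back."""
    C = 299_792_458.0  # speed of light, m/s
    force = beam_power_watts / C
    return 2.0 * force if reflective else force

# A 1 mW laser pointer pushes on an absorbing target with ~3.3 piconewtons:
print(light_force_newtons(1e-3))  # ~3.34e-12 N
```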
Classical India
In ancient India, the Hindu schools of Samkhya and Vaisheshika, from around the early centuries CE, developed theories on light. According to the Samkhya school, light is one of the five fundamental "subtle" elements (tanmatra) out of which emerge the gross elements. The atomicity of these elements is not specifically mentioned, and it appears that they were actually taken to be continuous. The Vaisheshika school, on the other hand, gives an atomic theory of the physical world on the non-atomic ground of ether, space and time (see Indian atomism). The basic atoms are those of earth (prthivi), water (pani), fire (agni), and air (vayu). Light rays are taken to be a stream of high-velocity tejas (fire) atoms. The particles of light can exhibit different characteristics depending on the speed and the arrangements of the tejas atoms.[citation needed] The Vishnu Purana refers to sunlight as "the seven rays of the sun".[citation needed]

The Indian Buddhists, such as Dignāga in the 5th century and Dharmakirti in the 7th century, developed a type of atomism holding that reality is composed of atomic entities that are momentary flashes of light or energy. They viewed light as an atomic entity equivalent to energy.[citation needed]
Descartes
René Descartes (1596–1650) held that light was a mechanical property of the luminous body, rejecting the "forms" of Ibn al-Haytham and Witelo as well as the "species" of Bacon, Grosseteste, and Kepler.[25] In 1637 he published a theory of the refraction of light that assumed, incorrectly, that light travelled faster in a denser medium than in a less dense medium. Descartes arrived at this conclusion by analogy with the behaviour of sound waves.[citation needed] Although Descartes was incorrect about the relative speeds, he was correct in assuming that light behaved like a wave and in concluding that refraction could be explained by the speed of light in different media. Descartes was not the first to use mechanical analogies, but because he clearly asserted that light is only a mechanical property of the luminous body and the transmitting medium, his theory of light is regarded as the start of modern physical optics.[25]
Particle theory
Pierre Gassendi (1592–1655), an atomist, proposed a particle theory of light which was published posthumously in the 1660s. Isaac Newton studied Gassendi's work at an early age and preferred his view to Descartes' theory of the plenum. He stated in his Hypothesis of Light of 1675 that light was composed of corpuscles (particles of matter) which were emitted in all directions from a source. One of Newton's arguments against the wave nature of light was that waves were known to bend around obstacles, while light travelled only in straight lines. He did, however, explain the phenomenon of the diffraction of light (which had been observed by Francesco Grimaldi) by allowing that a light particle could create a localised wave in the aether. Newton's theory could be used to predict the reflection of light, but could only explain refraction by incorrectly assuming that light accelerated upon entering a denser medium because the gravitational pull was greater. Newton published the final version of his theory in his Opticks of 1704. His reputation helped the particle theory of light to hold sway during the 18th century.

The particle theory of light led Laplace to argue that a body could be so massive that light could not escape from it; in other words, it would become what is now called a black hole. Laplace withdrew his suggestion later, after a wave theory of light became firmly established as the model for light (as has been explained, neither a particle nor a wave theory is fully correct). A translation of Newton's essay on light appears in The Large Scale Structure of Space-Time by Stephen Hawking and George F. R. Ellis.
Wave theory
To explain the origin of colours, Robert Hooke (1635–1703) developed a "pulse theory" and compared the spreading of light to that of waves in water in his 1665 Micrographia ("Observation XI"). In 1672 Hooke suggested that light's vibrations could be perpendicular to the direction of propagation. Christiaan Huygens (1629–1695) worked out a mathematical wave theory of light in 1678, and published it in his Treatise on Light in 1690. He proposed that light was emitted in all directions as a series of waves in a medium called the luminiferous ether. As waves are not affected by gravity, it was assumed that they slowed down upon entering a denser medium.[26]

The wave theory predicted that light waves could interfere with each other like sound waves (as noted around 1800 by Thomas Young), and that light could be polarised, if it were a transverse wave. Young showed by means of a diffraction experiment that light behaved as waves. He also proposed that different colours were caused by different wavelengths of light, and explained colour vision in terms of three-coloured receptors in the eye. Another supporter of the wave theory was Leonhard Euler. He argued in Nova theoria lucis et colorum (1746) that diffraction could more easily be explained by a wave theory.
Thomas Young's sketch of the two-slit experiment showing the diffraction of light. Young's experiments supported the theory that light consists of waves.
Later, Augustin-Jean Fresnel independently worked out his own wave theory of light, and presented it to the Académie des Sciences in 1817. Siméon Denis Poisson added to Fresnel's mathematical work to produce a convincing argument in favour of the wave theory, helping to overturn Newton's corpuscular theory. By the year 1821, Fresnel was able to show via mathematical methods that polarisation could be explained only by the wave theory of light, and only if light was entirely transverse, with no longitudinal vibration whatsoever.

The weakness of the wave theory was that light waves, like sound waves, would need a medium for transmission. The existence of the hypothetical substance luminiferous aether proposed by Huygens in 1678 was cast into strong doubt in the late nineteenth century by the Michelson–Morley experiment.

Newton's corpuscular theory implied that light would travel faster in a denser medium, while the wave theory of Huygens and others implied the opposite. At that time, the speed of light could not be measured accurately enough to decide which theory was correct. The first to make a sufficiently accurate measurement was Léon Foucault, in 1850.[27] His result supported the wave theory, and the classical particle theory was finally abandoned, only to partly re-emerge in the 20th century.
Quantum theory
In 1900 Max Planck, attempting to explain black-body radiation, suggested that although light was a wave, these waves could gain or lose energy only in finite amounts related to their frequency. Planck called these "lumps" of light energy "quanta" (from a Latin word for "how much"). In 1905, Albert Einstein used the idea of light quanta to explain the photoelectric effect, and suggested that these light quanta had a "real" existence. In 1923 Arthur Holly Compton showed that the wavelength shift seen when low-intensity X-rays scattered from electrons (so-called Compton scattering) could be explained by a particle theory of X-rays, but not a wave theory. In 1926 Gilbert N. Lewis named these light quanta particles photons.

Eventually the modern theory of quantum mechanics came to picture light as (in some sense) both a particle and a wave, and (in another sense) as a phenomenon which is neither a particle nor a wave (which actually are macroscopic phenomena, such as baseballs or ocean waves). Instead, modern physics sees light as something that can be described sometimes with mathematics appropriate to one type of macroscopic metaphor (particles), and sometimes another macroscopic metaphor (water waves), but is actually something that cannot be fully imagined. As in the case of radio waves and the X-rays involved in Compton scattering, physicists have noted that electromagnetic radiation tends to behave more like a classical wave at lower frequencies, but more like a classical particle at higher frequencies, yet never completely loses all qualities of one or the other. Visible light, which occupies a middle ground in frequency, can easily be shown in experiments to be describable using either a wave or particle model, or sometimes both.
Electromagnetic theory as explanation for all types of visible light and all EM radiation
In 1845, Michael Faraday discovered that the plane of polarisation of linearly polarised light is rotated when the light rays travel along the magnetic field direction in the presence of a transparent dielectric, an effect now known as Faraday rotation. This was the first evidence that light was related to electromagnetism. In 1846 he speculated that light might be some form of disturbance propagating along magnetic field lines. Faraday proposed in 1847 that light was a high-frequency electromagnetic vibration, which could propagate even in the absence of a medium such as the ether.

A linearly polarised light wave frozen in time, showing the two oscillating components of light: an electric field and a magnetic field perpendicular to each other and to the direction of motion (a transverse wave)
Faraday's work inspired James Clerk Maxwell to study electromagnetic radiation and light. Maxwell discovered that self-propagating electromagnetic waves would travel through space at a constant speed, which happened to be equal to the previously measured speed of light. From this, Maxwell concluded that light was a form of electromagnetic radiation: he first stated this result in 1862 in On Physical Lines of Force. In 1873, he published A Treatise on Electricity and Magnetism, which contained a full mathematical description of the behaviour of electric and magnetic fields, still known as Maxwell's equations.

Soon after, Heinrich Hertz confirmed Maxwell's theory experimentally by generating and detecting radio waves in the laboratory, and demonstrating that these waves behaved exactly like visible light, exhibiting properties such as reflection, refraction, diffraction, and interference. Maxwell's theory and Hertz's experiments led directly to the development of modern radio, radar, television, electromagnetic imaging, and wireless communications.

In the quantum theory, photons are seen as wave packets of the waves described in the classical theory of Maxwell. The quantum theory was needed to explain effects even with visible light that Maxwell's classical theory could not (such as spectral lines).
Notes
[1] CIE (1987). International Lighting Vocabulary (http://www.cie.co.at/publ/abst/17-4-89.html). Number 17.4. CIE, 4th edition. ISBN 978-3-900734-07-7. By the International Lighting Vocabulary, the definition of light is: "Any radiation capable of causing a visual sensation directly."
[6] http://www.yorku.ca/eye/lambdas.htm
[7] http://thulescientific.com/LYNCH%20&%20Soffer%20OPN%201999.pdf
[9] Standards organizations recommend that radiometric quantities should be denoted with a suffix "e" (for "energetic") to avoid confusion with photometric or photon quantities.
[10] Alternative symbols sometimes seen: W or E for radiant energy, P or F for radiant flux, I for irradiance, W for radiant emittance.
[11] Spectral quantities given per unit wavelength are denoted with suffix "λ" (Greek) to indicate a spectral concentration. Spectral functions of wavelength are indicated by "(λ)" in parentheses instead, for example in spectral transmittance, reflectance and responsivity.
[12] Spectral quantities given per unit frequency are denoted with suffix "ν" (Greek), not to be confused with the suffix "v" (for "visual") indicating a photometric quantity.
[13] NOAA / Space Weather Prediction Center (http://www.swpc.noaa.gov/forecast_verification/F10.html) includes a definition of the solar flux unit (SFU).
[14] Standards organizations recommend that photometric quantities be denoted with a suffix "v" (for "visual") to avoid confusion with radiometric or photon quantities.
[15] Alternative symbols sometimes seen: W for luminous energy, P or F for luminous flux, and ρ or K for luminous efficacy.
[16] "J" here is the symbol for the dimension of luminous intensity, not the symbol for the unit joule.
[18] See, for example, nano-opto-mechanical systems research at Yale University (http://www.eng.yale.edu/tanglab/research.htm).
[22] P. Lebedev, Untersuchungen über die Druckkräfte des Lichtes, Ann. Phys. 6, 433 (1901).
[25] A. I. Sabra, Theories of Light, from Descartes to Newton. CUP Archive, 1981, p. 48. ISBN 0-521-28436-8, ISBN 978-0-521-28436-3.
[26] Fokko Jan Dijksterhuis, Lenses and Waves: Christiaan Huygens and the Mathematical Science of Optics in the 17th Century (http://books.google.com/books?id=cPFevyomPUIC), Kluwer Academic Publishers, 2004. ISBN 1-4020-2697-8.
Radiance
Radiance and spectral radiance are measures of the quantity of radiation that passes through or is emitted from a surface and falls within a given solid angle in a specified direction. They are used in radiometry to characterize diffuse emission and reflection of electromagnetic radiation. In astrophysics, radiance is also used to quantify emission of neutrinos and other particles. The SI unit of radiance is watts per steradian per square metre (W·sr⁻¹·m⁻²), while that of spectral radiance is W·sr⁻¹·m⁻²·Hz⁻¹ or W·sr⁻¹·m⁻³, depending on whether the spectrum is a function of frequency or of wavelength.
Description
Radiance characterizes total emission or reflection. Radiance is useful because it indicates how much of the power emitted by an emitting or reflecting surface will be received by an optical system looking at the surface from some angle of view. In this case, the solid angle of interest is the solid angle subtended by the optical system's entrance pupil. Since the eye is an optical system, radiance and its cousin luminance are good indicators of how bright an object will appear. For this reason, radiance and luminance are both sometimes called "brightness". This usage is now discouraged; see the article on brightness for a discussion. The nonstandard usage of "brightness" for "radiance" persists in some fields, notably laser physics.

The radiance divided by the index of refraction squared is invariant in geometric optics. This means that for an ideal optical system in air, the radiance at the output is the same as the input radiance. This is sometimes called conservation of radiance. For real, passive optical systems, the output radiance is at most equal to the input, unless the index of refraction changes. As an example, if you form a demagnified image with a lens, the optical power is concentrated into a smaller area, so the irradiance is higher at the image. The light at the image plane, however, fills a larger solid angle, so the radiance comes out to be the same, assuming there is no loss at the lens.

Spectral radiance expresses radiance as a function of frequency (Hz), with SI units W·sr⁻¹·m⁻²·Hz⁻¹, or of wavelength (nm), with units of W·sr⁻¹·m⁻²·nm⁻¹ (more common than W·sr⁻¹·m⁻³). In some fields spectral radiance is also measured in microflicks.[1][2] Radiance is the integral of the spectral radiance over all wavelengths or frequencies.

For radiation emitted by an ideal black body at temperature T, spectral radiance is governed by Planck's law, while the integral of radiance over the hemisphere into which it radiates, in W/m², is governed by the Stefan–Boltzmann law. There is no need for a separate law for radiance normal to the surface of a black body, in W·m⁻²·sr⁻¹, since this is simply the Stefan–Boltzmann law divided by π. This factor is obtained from the solid angle 2π steradians of a hemisphere, decreased by integration over the cosine of the zenith angle. More generally, the radiance at an angle θ to the normal (the zenith angle) is given by the Stefan–Boltzmann law times cos(θ)/π.
Definition
Radiance is defined by

L = d²Φ / (dA dΩ cos θ) ≈ Φ / (A Ω cos θ),

where L is the observed or measured radiance (W·m⁻²·sr⁻¹) in the direction θ, d is the differential operator, Φ is the total radiant flux or power (W) emitted, θ is the angle between the surface normal and the specified direction, A is the area of the surface (m²), and Ω is the solid angle (sr) subtended by the observation or measurement.

The approximation only holds for small A and Ω where cos θ is approximately constant. In general, L is a function of viewing angle through the cos θ term in the denominator, as well as through the possible θ and azimuth-angle dependence of Φ. For the special case of a Lambertian source, L is constant, such that Φ is proportional to cos θ.
When calculating the radiance emitted by a source, A refers to an area on the surface of the source, and Ω to the solid angle into which the light is emitted. When calculating the radiance received at a detector, A refers to an area on the surface of the detector and Ω to the solid angle subtended by the source as viewed from that detector. When radiance is conserved, as discussed above, the radiance emitted by a source is the same as that received by a detector observing it. The spectral radiance (radiance per unit wavelength) is written Lλ, and the radiance per unit frequency is written Lν.
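The small-area approximation above is easy to evaluate numerically. A minimal sketch (the function name and example numbers are illustrative):

```python
import math

def radiance(flux_w, area_m2, solid_angle_sr, theta_rad):
    """Approximate radiance L ~ flux / (A * Omega * cos(theta)),
    valid only for small A and Omega where cos(theta) is near-constant."""
    return flux_w / (area_m2 * solid_angle_sr * math.cos(theta_rad))

# 1 W leaving a 1 cm^2 patch into 0.01 sr along the surface normal:
print(radiance(1.0, 1e-4, 0.01, 0.0))  # 1e6 W*sr^-1*m^-2
```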
Intensity
Radiance is often, confusingly, called intensity in other areas of study, especially heat transfer, astrophysics and astronomy. Intensity has many other meanings in physics, with the most common being power per unit area. The distinction lies in the area rather than the subtended angle of the observer, and relative area of the source.
SI radiometry units

Radiant energy, symbol Qe:[1] joule (J).
Radiant flux, symbol Φe:[2] watt (W). Radiant energy per unit time, also called radiant power.
Spectral power, symbol Φeλ:[2][3] watt per metre (W·m⁻¹). Radiant power per wavelength.
Radiant intensity, symbol Ie:[2] watt per steradian (W·sr⁻¹). Power per unit solid angle.
Spectral intensity, symbol Ieλ:[3] watt per steradian per metre (W·sr⁻¹·m⁻¹). Radiant intensity per wavelength.
Radiance, symbol Le:[2] watt per steradian per square metre (W·sr⁻¹·m⁻²). Power per unit solid angle per unit projected source area; confusingly called "intensity" in some other fields of study.
Spectral radiance, symbol Leλ[3] or Leν:[4] watt per steradian per metre³ (W·sr⁻¹·m⁻³) or watt per steradian per square metre per hertz (W·sr⁻¹·m⁻²·Hz⁻¹). Commonly measured in W·sr⁻¹·m⁻²·nm⁻¹, with surface area and either wavelength or frequency.
Irradiance, symbol Ee:[2] watt per square metre (W·m⁻²); dimension M·T⁻³. Power incident on a surface, also called radiant flux density; sometimes confusingly called "intensity" as well.
Spectral irradiance, symbol Eeλ[3] or Eeν:[4] watt per metre³ (W·m⁻³) or watt per square metre per hertz (W·m⁻²·Hz⁻¹); dimension M·L⁻¹·T⁻³ or M·T⁻². Commonly measured in W·m⁻²·nm⁻¹ or in 10⁻²² W·m⁻²·Hz⁻¹, known as the solar flux unit.[5]
Radiant exitance / radiant emittance, symbol Me:[2] watt per square metre (W·m⁻²); dimension M·T⁻³. Power emitted from a surface.
Spectral radiant exitance / spectral radiant emittance, symbol Meλ[3] or Meν:[4] watt per metre³ (W·m⁻³) or watt per square metre per hertz (W·m⁻²·Hz⁻¹); dimension M·L⁻¹·T⁻³ or M·T⁻².
Radiosity, symbol Je or Jeλ:[3] watt per square metre (W·m⁻²); dimension M·T⁻³. Emitted plus reflected power leaving a surface.
Radiant exposure, symbol He: joule per square metre (J·m⁻²); dimension M·T⁻².
Radiant energy density, symbol ωe: joule per metre³ (J·m⁻³); dimension M·L⁻¹·T⁻².

See also: SI, Radiometry, Photometry

[1] Standards organizations recommend that radiometric quantities should be denoted with a suffix "e" (for "energetic") to avoid confusion with photometric or photon quantities.
[2] Alternative symbols sometimes seen: W or E for radiant energy, P or F for radiant flux, I for irradiance, W for radiant emittance.
[3] Spectral quantities given per unit wavelength are denoted with suffix "λ" (Greek) to indicate a spectral concentration. Spectral functions of wavelength are indicated by "(λ)" in parentheses instead, for example in spectral transmittance, reflectance and responsivity.
[4] Spectral quantities given per unit frequency are denoted with suffix "ν" (Greek), not to be confused with the suffix "v" (for "visual") indicating a photometric quantity.
[5] NOAA / Space Weather Prediction Center (http://www.swpc.noaa.gov/forecast_verification/F10.html) includes a definition of the solar flux unit (SFU).
Photometry
Photometry is the science of the measurement of light, in terms of its perceived brightness to the human eye.[5] It is distinct from radiometry, which is the science of measurement of radiant energy (including light) in terms of absolute power. In photometry, the radiant power at each wavelength is weighted by a luminosity function that models human brightness sensitivity. Typically, this weighting function is the photopic sensitivity function, although the scotopic function or other functions may also be applied in the same way.
Photopic (daytime-adapted, black curve) and scotopic (darkness-adapted, green curve) luminosity functions.[1] The photopic includes the CIE 1931 standard[2] (solid), the Judd–Vos 1978 modified data[3] (dashed), and the Sharpe, Stockman, Jagla & Jägle 2005 data[4] (dotted). The horizontal axis is wavelength in nm.
The human eye is not equally sensitive to all wavelengths of visible light. Photometry attempts to account for this by weighting the measured power at each wavelength with a factor that represents how sensitive the eye is at that wavelength. The standardized model of the eye's response to light as a function of wavelength is given by the luminosity function. Note that the eye has different responses as a function of wavelength when it is adapted to light conditions (photopic vision) and dark conditions (scotopic vision). Photometry is typically based on the eye's photopic response, and so photometric measurements may not accurately indicate the perceived brightness of sources in dim lighting conditions where colors are not discernible, such as under just moonlight or starlight.[5] Photopic vision is characteristic of the eye's response at luminance levels over three candela per square metre. Scotopic vision occurs below 2×10⁻⁵ cd/m². Mesopic vision occurs between these limits and is not well characterised for spectral response.[5]
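In code, this weighting amounts to multiplying the spectral power by the luminosity function and by the 683 lm/W that defines the lumen at the 555 nm peak. A minimal sketch; the Gaussian stand-in for the CIE photopic curve and its 42 nm width are rough assumptions for illustration, not the standard data:

```python
import math

def luminous_flux_lm(spectral_power_w_per_nm, wavelengths_nm):
    """Approximate luminous flux: weight the radiant power at each
    wavelength by a crude photopic luminosity function V(lambda),
    then scale by 683 lm/W (the definition at the 555 nm peak)."""
    LM_PER_W = 683.0
    total = 0.0
    for power, lam in zip(spectral_power_w_per_nm, wavelengths_nm):
        v = math.exp(-((lam - 555.0) ** 2) / (2.0 * 42.0 ** 2))  # rough V
        total += LM_PER_W * v * power  # assumes 1 nm wide bins
    return total

# 1 W at 555 nm counts fully; the same power at 650 nm counts far less:
print(luminous_flux_lm([1.0], [555.0]))  # 683 lm
print(luminous_flux_lm([1.0], [650.0]))  # ~53 lm with this crude curve
```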
Photometric quantities
Measurement of the effects of electromagnetic radiation became a field of study as early as the end of the 18th century. Measurement techniques varied depending on the effects under study and gave rise to different nomenclature. The total heating effect of infrared radiation as measured by thermometers led to the development of radiometric units in terms of total energy and power. Use of the human eye as a detector led to photometric units, weighted by the eye's response characteristic. Study of the chemical effects of ultraviolet radiation led to characterization by the total dose, or actinometric units expressed in photons per second.[5]

Many different units of measure are used for photometric measurements. People sometimes ask why there need to be so many different units, or ask for conversions between units that can't be converted (lumens and candelas, for example). We are familiar with the idea that the adjective "heavy" can refer to weight or density, which are fundamentally different things. Similarly, the adjective "bright" can refer to a light source which delivers a high luminous flux (measured in lumens), or to a light source which concentrates the luminous flux it has into a very narrow beam (candelas), or to a light source that is seen against a dark background. Because of the ways in which light propagates through three-dimensional space, spreading out, becoming concentrated, reflecting off shiny or matte surfaces, and because light consists of many different wavelengths, the number of fundamentally different kinds of light measurement that can be made is large, and so are the numbers of quantities and units that represent them. For example, offices are typically "brightly" illuminated by an array of many recessed fluorescent lights for a combined high luminous flux. A laser pointer has very low luminous flux (it could not illuminate a room) but is blindingly bright in one direction (high luminous intensity in that direction).
SI photometry units

Luminous energy, symbol Qv:[6] lumen second (lm·s). Units are sometimes called talbots.
Luminous flux, symbol Φv:[7] lumen (= cd·sr) (lm). Also called luminous power.
Luminous intensity, symbol Iv:[7] candela (= lm/sr) (cd). An SI base unit; luminous flux per unit solid angle.
Luminance, symbol Lv:[8] candela per square metre (cd/m²). Units are sometimes called nits.
Illuminance, symbol Ev: lux (= lm/m²) (lx). Used for light incident on a surface.
Luminous emittance, symbol Mv: lux (= lm/m²) (lx). Used for light emitted from a surface.
Luminous exposure, symbol Hv: lux second (lx·s).
Luminous energy density, symbol ωv: lumen second per metre³ (lm·s·m⁻³); dimension L⁻³·T·J.[8]
Luminous efficacy, symbol η: lumen per watt (lm/W); dimension M⁻¹·L⁻²·T³·J.[8] Ratio of luminous flux to radiant flux; also called luminous coefficient.
SI radiometry units

Radiant energy, symbol Qe:[9] joule (J).
Radiant flux, symbol Φe:[10] watt (W). Radiant energy per unit time, also called radiant power.
Spectral power, symbol Φeλ:[10][11] watt per metre (W·m⁻¹). Radiant power per wavelength.
Radiant intensity, symbol Ie:[10] watt per steradian (W·sr⁻¹). Power per unit solid angle.
Spectral intensity, symbol Ieλ:[11] watt per steradian per metre (W·sr⁻¹·m⁻¹). Radiant intensity per wavelength.
Radiance, symbol Le:[10] watt per steradian per square metre (W·sr⁻¹·m⁻²). Power per unit solid angle per unit projected source area; confusingly called "intensity" in some other fields of study.
Spectral radiance, symbol Leλ[11] or Leν:[12] watt per steradian per metre³ (W·sr⁻¹·m⁻³) or watt per steradian per square metre per hertz (W·sr⁻¹·m⁻²·Hz⁻¹). Commonly measured in W·sr⁻¹·m⁻²·nm⁻¹, with surface area and either wavelength or frequency.
Irradiance, symbol Ee:[10] watt per square metre (W·m⁻²); dimension M·T⁻³. Power incident on a surface, also called radiant flux density; sometimes confusingly called "intensity" as well. Commonly measured in W·m⁻²·nm⁻¹ or in 10⁻²² W·m⁻²·Hz⁻¹, known as the solar flux unit.[13]
Spectral irradiance, symbol Eeλ[11] or Eeν:[12] watt per metre³ (W·m⁻³) or watt per square metre per hertz (W·m⁻²·Hz⁻¹); dimension M·L⁻¹·T⁻³ or M·T⁻².
Radiant exitance / radiant emittance, symbol Me:[10] watt per square metre (W·m⁻²); dimension M·T⁻³. Power emitted from a surface.
Spectral radiant exitance / spectral radiant emittance, symbol Meλ[11] or Meν:[12] watt per metre³ (W·m⁻³) or watt per square metre per hertz (W·m⁻²·Hz⁻¹); dimension M·L⁻¹·T⁻³ or M·T⁻².
Radiosity, symbol Je or Jeλ:[11] watt per square metre (W·m⁻²); dimension M·T⁻³. Emitted plus reflected power leaving a surface.
Radiant exposure, symbol He: joule per square metre (J·m⁻²); dimension M·T⁻².
Radiant energy density, symbol ωe: joule per metre³ (J·m⁻³); dimension M·L⁻¹·T⁻².
Illuminance
Foot-candle
Phot
Notes
[1] http://www.cvrl.org/database/text/lum/scvl.htm
[2] http://www.cvrl.org/database/text/cmfs/ciexyz31.htm
[3] http://www.cvrl.org/database/text/lum/vljv.htm
[4] http://www.cvrl.org/database/text/lum/ssvl2.htm
[5] Michael Bass (ed.), Handbook of Optics Volume II: Devices, Measurements and Properties, 2nd ed., McGraw-Hill 1995, ISBN 978-0-07-047974-6, pages 24-40 through 24-47.
[6] Standards organizations recommend that photometric quantities be denoted with a suffix "v" (for "visual") to avoid confusion with radiometric or photon quantities.
[7] Alternative symbols sometimes seen: W for luminous energy, P or F for luminous flux, and ρ or K for luminous efficacy.
[8] "J" here is the symbol for the dimension of luminous intensity, not the symbol for the unit joule.
[9] Standards organizations recommend that radiometric quantities should be denoted with a suffix "e" (for "energetic") to avoid confusion with photometric or photon quantities.
[10] Alternative symbols sometimes seen: W or E for radiant energy, P or F for radiant flux, I for irradiance, W for radiant emittance.
[11] Spectral quantities given per unit wavelength are denoted with suffix "λ" (Greek) to indicate a spectral concentration. Spectral functions of wavelength are indicated by "(λ)" in parentheses instead, for example in spectral transmittance, reflectance and responsivity.
[12] Spectral quantities given per unit frequency are denoted with suffix "ν" (Greek), not to be confused with the suffix "v" (for "visual") indicating a photometric quantity.
[13] NOAA / Space Weather Prediction Center (http://www.swpc.noaa.gov/forecast_verification/F10.html) includes a definition of the solar flux unit (SFU).
Shadow
A shadow is an area where direct light from a light source cannot reach due to obstruction by an object. It occupies all of the space behind an opaque object with light in front of it. The cross-section of a shadow is a two-dimensional silhouette, or reverse projection, of the object blocking the light. The sun causes many objects to have shadows, and at certain times of the day, when the sun is at certain heights, the lengths of shadows change. An astronomical object casts human-visible shadows when its apparent magnitude is equal to or lower than −4.[1] Currently the only astronomical objects able to produce visible shadows on Earth are the sun, the moon and, in the right conditions, Venus or Jupiter.[2]
Non-point source
For a non-point source of light, the shadow is divided into the umbra and penumbra. The wider the light source, the more blurred the shadow becomes. If two penumbras overlap, the shadows appear to attract and merge. This is known as the shadow blister effect. If there are multiple light sources, there are multiple shadows, with the overlapping parts darker, or a combination of colors. For a person or object touching the surface, like a person standing on the ground or a pole in the ground, the shadows converge at the point of contact.
A steam phase eruption of Castle Geyser in Yellowstone National Park casts a shadow on its own steam. Crepuscular rays are also visible.
A shadow's edge can sweep across a distant surface faster than light, but this does not violate relativity: since there is no actual communication between points in a shadow (except via reflection or interference of light, at the speed of light), a shadow that projects over a surface spanning large distances (light-years) cannot be used to convey information between points along its edge.[3]
In photography
In photography, which is essentially recording patterns of light, shade, and colour, "highlights" and "shadows" are the brightest and darkest parts of a scene or image. Photographic exposure must be adjusted (unless special effects are wanted) to allow the film or sensor, which has limited dynamic range, to record detail in the highlights without them being washed out, and in the shadows without their becoming undifferentiated black areas.
Fog shadows
Fog shadows look odd because humans are not used to seeing shadows in three dimensions. A thin fog is just dense enough to be illuminated by the light that passes through gaps in a structure or in a tree, so the path of an object's shadow through the fog appears as a darkened volume. In a sense, these shadow lanes are similar to crepuscular rays, which are caused by cloud shadows; here, however, they are caused by the shadows of solid objects.
Other notes
A shadow cast by the Earth on the Moon is a lunar eclipse. Conversely, a shadow cast by the Moon on the Earth is a solar eclipse.

On satellite imagery and aerial photographs taken vertically, tall buildings can be recognized as such by their long shadows (if the photographs are not taken in the tropics around noon), and these shadows also show more of the shape of the buildings.

A shadow shows, apart from distortion, the same image as the silhouette when looking at the object from the sun-side, hence the mirror image of the silhouette seen from the other side (see picture).

Shadow as a term is often used for any occlusion, not just those with respect to light. For example, a rain shadow is a dry area which, with respect to the prevailing wind direction, is beyond a mountain range; the range "blocks" water from crossing the area. Terrain can likewise create an acoustic shadow, leaving spots where sounds from a distance cannot easily be heard.

Jasmine flowers, soft shadows

Sciophobia, or sciaphobia, is the fear of shadows.
Mythological connotations
In some cultures, an unattended shadow, or shade, was thought to be similar to a ghost.
Heraldry
In heraldry, when a charge is supposedly shown in shadow (the appearance being of the charge merely outlined in a neutral tint, rather than being of one or more tinctures different from the field on which it is placed), it is called umbrated. Supposedly only a limited number of specific charges can be so depicted. A shadow can also be tinted if the object casting it is transparent and colored.
References
[1] NASA Science Question of the Week (http://web.archive.org/web/20070627044109/http://www.gsfc.nasa.gov/scienceques2005/20060406.htm). Gsfc.nasa.gov (April 7, 2006). Retrieved on 2013-04-26.
[3] Philip Gibbs (1997). Is Faster-Than-Light Travel or Communication Possible? (http://math.ucr.edu/home/baez/physics/Relativity/SpeedOfLight/FTL.html#3) math.ucr.edu
[4] Question Board, Questions about Light (http://www.pa.uky.edu/sciworks/qlight.htm). Pa.uky.edu. Retrieved on 2013-04-26.
External links
How the Sun casts shadows over the hours of the day (http://www.schoolsobservatory.org.uk/astro/esm/shadows)
Umbra
The umbra, penumbra and antumbra are the names given to three distinct parts of a shadow, created by any light source after impinging on an opaque object. For a point source only the umbra is cast. These names are most often used to refer to the shadows cast by celestial bodies, though they are sometimes used to describe levels of darkness, such as in sunspots.
Umbra, penumbra, and antumbra
Umbra
The umbra (Latin for "shadow") is the innermost and darkest part of a shadow, where the light source is completely blocked by the occluding body. An observer in the umbra experiences a total eclipse.
Penumbra
The penumbra (from the Latin paene "almost, nearly" and umbra "shadow") is the region in which only a portion of the light source is obscured by the occluding body. An observer in the penumbra experiences a partial eclipse. An alternative definition is that the penumbra is the region where some or all of the light source is obscured (i.e., the umbra is a subset of the penumbra). For example, NASA's Navigation and Ancillary Information Facility defines that a body in the umbra is also within the penumbra.[1]

Example of umbra, penumbra and antumbra outside astronomy

In radiation oncology, the penumbra is the space in the periphery of the main target of radiation therapy, and has been defined as the volume receiving between 80% and 20% of the isodose.[2]
Antumbra
The antumbra (from Latin ante, 'before') is the region from which the occluding body appears entirely contained within the disc of the light source. If an observer in the antumbra moves closer to the light source, the apparent size of the occluding body increases until it causes a full umbra. An observer in this region experiences an annular eclipse, in which a bright ring is visible around the eclipsing body.
Earth's shadow, to scale, showing the extent of the umbral cone beyond the Moon's orbit (yellow dot, also to scale)
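The length of the umbral cone shown in the figure follows from similar triangles between the source and the occluder. A minimal sketch under that assumption (the function name and the rounded radii and distance in the example are illustrative):

```python
def umbra_length_m(source_radius_m, occluder_radius_m, separation_m):
    """Distance from the occluder to the tip of its umbral cone, for a
    spherical occluder lit by a larger spherical source; by similar
    triangles: L = d * r / (R - r)."""
    R, r, d = source_radius_m, occluder_radius_m, separation_m
    return d * r / (R - r)

# Earth's umbra (Sun R ~ 6.96e8 m, Earth r ~ 6.37e6 m, d ~ 1.496e11 m):
print(umbra_length_m(6.96e8, 6.37e6, 1.496e11))
# ~1.38e9 m, well beyond the Moon's orbital radius (~3.84e8 m)
```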
References
[1] Event Finding Subsystem Preview (http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/Tutorials/pdf/individual_docs/45_event_finding_preview.pdf), Navigation and Ancillary Information Facility.
[2] Page 55 (http://books.google.com/books?id=4CRZoDk5KWsC&pg=PA55&lpg=PA55) in:
Distance fog
Distance fog is a technique used in 3D computer graphics to enhance the perception of distance by simulating fog. Because many of the shapes in graphical environments are relatively simple, and complex shadows are difficult to render, many graphics engines employ a "fog" gradient so objects further from the camera are progressively more obscured by haze and by aerial perspective.[1] This technique simulates the effect of light scattering, which causes more distant objects to appear lower in contrast, especially in outdoor environments.
"Fogging" is another use of distance fog in mid-to-late 1990s games, when processing power was not enough to render far viewing distances, and clipping was employed. However, the effect could be very distracting since bits and pieces of polygons would flicker in and out of view instantly, and by applying a medium-ranged fog, the clipped polygons would fade in more realistically from the haze, even though the effect may have been considered unrealistic in some cases (such as dense fog inside of a building). Many early Nintendo 64 and PlayStation games used this effect, as in Turok: Dinosaur Hunter, Bubsy 3D, Star Wars: Rogue Squadron, Tony Hawk's Pro Skater, and Superman. The game Silent Hill uniquely worked fogging into the game's storyline, with the eponymous town being consumed by a dense layer of fog as the result of the player having entered an alternate reality. The application of fogging was so well received as an atmospheric technique that it has appeared in each of the game's sequels, despite improved technology negating it as a graphical necessity.
Material Properties
Shading
Shading refers to depicting depth perception in 3D models or illustrations by varying levels of darkness.
Gouraud shading, invented by Henri Gouraud in 1971, was one of the first shading techniques developed in computer graphics.
Drawing
Shading is a process used in drawing for depicting levels of darkness on paper by applying media more densely or with a darker shade for darker areas, and less densely or with a lighter shade for lighter areas. There are various techniques of shading, including cross-hatching, where perpendicular lines of varying closeness are drawn in a grid pattern to shade an area. The closer the lines are together, the darker the area appears; likewise, the farther apart the lines are, the lighter the area appears. Light patterns, such as objects having light and shaded areas, help when creating the illusion of depth on paper.[1][2]
Example of shading.
Computer graphics
In computer graphics, shading refers to the process of altering the color of an object, surface, or polygon in the 3D scene based on its angle to lights and its distance from lights, to create a photorealistic effect. Shading is performed during the rendering process by a program called a shader.
Rendered image of a box. This image has no shading on its faces, but uses edge lines to separate the faces.
This is the same image rendered with shading of the faces to alter the colors of the 3 faces based on their angle to the light sources.
Lighting
Usually, upon rendering a scene, a number of different lighting techniques will be used to make the rendering look more realistic. Shading depends on the lighting, and a number of different types of light source exist to provide customization for the shading of objects:

Ambient lighting: an ambient light source represents a fixed-intensity and fixed-color light source that affects all objects in the scene equally. Upon rendering, all objects in the scene are brightened with the specified intensity and color. This type of light source is mainly used to provide the scene with a basic view of the different objects in it.

Directional lighting: a directional light source illuminates all objects equally from a given direction, like an area light of infinite size and infinite distance from the scene; there is shading, but there cannot be any distance falloff.

Point lighting: light originates from a single point and spreads outward in all directions.

Spotlight lighting: light originates from a single point and spreads outward in a coned direction.

Area lighting: light originates from a single plane and illuminates all objects in a given direction beginning from that plane.

Volumetric lighting: an enclosed space, lighting the objects within that space.

Shading is interpolated based on how the angles of these light sources reach the objects within a scene. Of course, these light sources can be, and often are, combined in a scene. The renderer then interpolates how the lights must be combined, and produces a 2D image to be displayed on the screen accordingly.
Distance falloff
Theoretically, two surfaces which are parallel are illuminated the same amount from a distant light source, such as the sun. Even though one surface is further away, your eye sees more of it in the same space, so the illumination appears the same. Notice in the first image that the color on the front faces of the two boxes is exactly the same. It appears that there is a slight difference where the two faces meet, but this is an optical illusion caused by the vertical edge below where the two faces meet. Notice in the second image that the surfaces on the boxes are bright on the front box and darker on the back box; the floor also goes from light to dark as it gets farther away. This distance falloff effect produces images which appear more realistic without having to add additional lights to achieve the same effect.
Two boxes rendered with an OpenGL renderer. Note that the colors of the two front faces are the same even though one box is further away.
The same model rendered using ARRIS CAD which implements "Distance Falloff" to make surfaces which are closer to the eye appear brighter.
Distance falloff can be calculated in a number of ways:

None: the light intensity received is the same regardless of the distance between the point and the light source.

Linear: for a given point at a distance x from the light source, the light intensity received is proportional to 1/x.

Quadratic: this is how light intensity decreases in reality if the light has a free path (i.e. no fog or any other thing in the air that can absorb or scatter the light). For a given point at a distance x from the light source, the light intensity received is proportional to 1/x².

Factor of n: for a given point at a distance x from the light source, the light intensity received is proportional to 1/xⁿ.

Any number of other mathematical functions may also be used.
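The falloff modes above are one-liners in practice. A minimal sketch (the function name is illustrative):

```python
def falloff(distance, mode="quadratic", n=2.0):
    """Distance falloff factor by which a light's intensity is scaled."""
    if mode == "none":
        return 1.0
    if mode == "linear":
        return 1.0 / distance
    if mode == "quadratic":
        return 1.0 / distance ** 2
    return 1.0 / distance ** n  # generic "factor of n"

print(falloff(2.0, "quadratic"))  # 0.25: doubling distance quarters the light
```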
Flat shading
Flat shading is a lighting technique used in 3D computer graphics to shade each polygon of an object based on the angle between the polygon's surface normal and the direction of the light source, their respective colors, and the intensity of the light source. It is usually used for high-speed rendering, where more advanced shading techniques are too computationally expensive. As a result of flat shading, all of the polygon's vertices are colored with one color, allowing differentiation between adjacent polygons.

Example of flat shading vs. interpolation

Specular highlights are rendered poorly with flat shading: if there happens to be a large specular component at the representative vertex, that brightness is drawn uniformly over the entire face. If a specular highlight doesn't fall on the representative point, it is missed entirely. Consequently, the specular reflection component is usually not included in flat shading computation.
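In code, flat shading evaluates a single diffuse term per face from the face normal and the light direction. A minimal sketch (names are illustrative; vectors are assumed pre-normalized):

```python
def flat_shade(face_normal, light_dir, surface_color, light_color, intensity):
    """One color for the whole polygon, from the angle between its
    surface normal and the light direction (a Lambert cosine term)."""
    ndotl = max(0.0, sum(n * l for n, l in zip(face_normal, light_dir)))
    return tuple(intensity * ndotl * sc * lc
                 for sc, lc in zip(surface_color, light_color))

# A white face tilted 60 degrees from a white light gets half intensity:
print(flat_shade((0.0, 0.0, 1.0), (0.0, 0.8660, 0.5),
                 (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), 1.0))
```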
Smooth shading
Smooth shading of a polygon displays the points in a polygon with smoothly changing colors across the surface of the polygon. This requires defining a separate color for each vertex of the polygon, because the smooth color change is computed by interpolating the vertex colors across the interior of the polygon, in the standard manner of the graphics pipeline. Computing the color for each vertex is done with the usual computation of a standard lighting model, but in order to compute the color for each vertex separately, a separate normal vector must be defined for each vertex of the polygon. This allows the color of the vertex to be determined by the lighting model that includes this unique normal.

Types of smooth shading include:
Gouraud shading
Phong shading
Gouraud shading
1. Determine the normal at each polygon vertex.
2. Apply an illumination model to each vertex to calculate the vertex intensity.
3. Linearly interpolate the vertex intensities over the surface of the polygon (a sketch of this step follows this list).

Data structures: sometimes vertex normals can be computed directly (e.g. for a height field with a uniform mesh); more generally, a data structure for the mesh is needed, the key being which polygons meet at each vertex.

Advantages: polygons more complex than triangles can also have different colors specified for each vertex. In these instances, the underlying logic for shading can become more intricate.

Problems: even the smoothness introduced by Gouraud shading may not prevent the appearance of shading differences between adjacent polygons. Gouraud shading is more CPU intensive and can become a problem when rendering real-time environments with many polygons. T-junctions with adjoining polygons can sometimes result in visual anomalies; in general, T-junctions should be avoided.
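A minimal sketch of the interpolation in step 3, using barycentric coordinates for a triangle (the helper name is illustrative):

```python
def gouraud_intensity(barycentric, vertex_intensities):
    """Gouraud shading, step 3: linearly interpolate the per-vertex
    intensities across a triangle with barycentric weights (which sum
    to 1 inside the triangle)."""
    return sum(w * i for w, i in zip(barycentric, vertex_intensities))

# At the centroid (equal weights), vertex intensities 0.2, 0.5, 0.8 blend:
print(gouraud_intensity((1/3, 1/3, 1/3), (0.2, 0.5, 0.8)))  # 0.5
```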
Phong shading
Phong shading is similar to Gouraud shading, except that the normals, rather than the intensities, are interpolated. Thus, the specular highlights are computed much more precisely than in the Gouraud shading model:

1. Compute a normal N for each vertex of the polygon.
2. From bilinear interpolation, compute a normal Ni for each pixel (this must be renormalized each time).
3. From Ni, compute an intensity Ii for each pixel of the polygon.
4. Paint the pixel with the shade corresponding to Ii.
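A minimal sketch of step 2, the per-pixel normal interpolation and renormalization (the helper name is illustrative; barycentric weights stand in for bilinear interpolation across a triangle):

```python
import math

def phong_normal(barycentric, vertex_normals):
    """Phong shading, step 2: interpolate the vertex normals, then
    renormalize so the lighting model sees a unit vector."""
    n = [sum(w * vn[i] for w, vn in zip(barycentric, vertex_normals))
         for i in range(3)]
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

# Normals tilted left and right average to a straight-up unit normal:
print(phong_normal((0.5, 0.5, 0.0),
                   [(-0.6, 0.0, 0.8), (0.6, 0.0, 0.8), (0.0, 0.0, 1.0)]))
```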
Flat shading: Same color for any point of the face. Individual faces are visualized. Not suitable for smooth objects. Less expensive. Edges appear more pronounced than they would on a real object because of a phenomenon in the eye known as lateral inhibition.

Smooth shading: Each point of the face has its own color. The underlying surface is visualized. Suitable for any object. More expensive.
Diffuse reflection
Diffuse reflection is the reflection of light from a surface such that an incident ray is reflected at many angles rather than at just one angle, as in the case of specular reflection. An illuminated ideal diffuse reflecting surface will have equal luminance from all directions which lie in the half-space adjacent to the surface (Lambertian reflectance). A surface built from a non-absorbing powder such as plaster, or from fibers such as paper, or from a polycrystalline material such as white marble, reflects light diffusely with great efficiency. Many common materials exhibit a mixture of specular and diffuse reflection. The visibility of objects, excluding light-emitting ones, is primarily caused by diffuse reflection of light: it is diffusely-scattered light that forms the image of the object in the observer's eye.
Diffuse and specular reflection from a glossy surface.[1] The rays represent luminous intensity, which varies according to Lambert's cosine law for an ideal diffuse reflector.
Mechanism
Diffuse reflection from solids is generally not due to surface roughness. A flat surface is indeed required to give specular reflection, but it does not prevent diffuse reflection. A piece of highly polished white marble remains white; no amount of polishing will turn it into a mirror. Polishing produces some specular reflection, but the remaining light continues to be diffusely reflected. The most general mechanism by which a surface gives diffuse reflection does not involve the surface itself: most of the light is contributed by scattering centers beneath the surface,[2][3] as illustrated in Figure 1 at right. If one were to imagine that the figure represents snow, and that the polygons are its (transparent) ice crystallites, an impinging ray is partially reflected (a few percent) by the first particle, enters it, is again reflected by the interface with the second particle, enters it, impinges on the third, and so on, generating a series of "primary" scattered rays in random directions, which, in turn, through the same mechanism, generate a large number of "secondary" scattered rays, which generate "tertiary" rays...[4] All these rays walk through the snow crystallites, which do not absorb light, until they arrive at the surface and exit in random directions.[5] The result is that the incident light is returned in all directions, so that snow is white despite being made of transparent material (ice crystals). For simplicity, "reflections" are spoken of here, but more generally the interface between the small particles that constitute many materials is irregular on a scale comparable with the light wavelength, so diffuse light is generated at each interface rather than a single reflected ray, but the story can be told the same way.
Figure 1: General mechanism of diffuse reflection by a solid surface (refraction phenomena not represented)
This mechanism is very general, because almost all common materials are made of "small things" held together. Mineral materials are generally polycrystalline: one can describe them as made of a 3D mosaic of small, irregularly shaped defective crystals. Organic materials are usually composed of fibers or cells, with their membranes and their complex internal structure. And each interface, inhomogeneity or imperfection can deviate, reflect or scatter light, reproducing the above mechanism. Few materials don't follow it: among them are metals, which do not allow light to enter; gases; liquids; glass and transparent plastics (which have a liquid-like amorphous microscopic structure); single crystals, such as some gems or a salt crystal; and some very special materials, such as the tissues which make up the cornea and the lens of an eye. These materials can reflect diffusely, however, if their surface is microscopically rough, as in frosted glass (Figure 2), or, of course, if their homogeneous structure deteriorates, as in the eye lens. A surface may also exhibit both specular and diffuse reflection, as is the case, for example, of glossy paints as used in home painting, which give a fraction of specular reflection, while matte paints give almost exclusively diffuse reflection.
Colored objects
Up to now, white objects have been discussed, which do not absorb light. But the above scheme continues to be valid in the case that the material is absorbent. In this case, diffused rays will lose some wavelengths during their walk in the material, and will emerge colored. Moreover, diffusion affects the color of objects in a substantial manner, because it determines the average path of light in the material, and hence the extent to which the various wavelengths are absorbed.[6] Red ink looks black when it stays in its bottle. Its vivid color is only perceived when it is placed on a scattering material (e.g. paper). This is so because light's path through the paper fibers (and through the ink) is only a fraction of a millimeter long. Light coming from the bottle, instead, has crossed centimeters of ink and has been heavily absorbed, even in its red wavelengths. And, when a colored object has both diffuse and specular reflection, usually only the diffuse component is colored. A cherry reflects red light diffusely, absorbs all other colors, and has a specular reflection which is essentially white. This is quite general, because, except for metals, the reflectivity of most materials depends on their refractive index, which varies little with the wavelength (though it is this variation that causes the chromatic dispersion in a prism), so that all colors are reflected nearly with the same intensity. Reflections of different origin, instead, may be colored: metallic reflections, such as in gold or copper, or interferential reflections: iridescences, peacock feathers, butterfly wings, beetle elytra, or the antireflection coating of a lens.
Interreflection
Diffuse interreflection is a process whereby light reflected from an object strikes other objects in the surrounding area, illuminating them. Diffuse interreflection specifically describes light reflected from objects which are not shiny or specular. In real life terms what this means is that light is reflected off non-shiny surfaces such as the ground, walls, or fabric, to reach areas not directly in view of a light source. If the diffuse surface is colored, the reflected light is also colored, resulting in similar coloration of surrounding objects. In 3D computer graphics, diffuse interreflection is an important component of global illumination. There are a number of ways to model diffuse interreflection when rendering a scene. Radiosity and photon mapping are two commonly used methods.
References
[2] P. Hanrahan and W. Krueger (1993), "Reflection from layered surfaces due to subsurface scattering", in SIGGRAPH 93 Proceedings, J. T. Kajiya, Ed., vol. 27, pp. 165-174 (http://www.cs.berkeley.edu/~ravir/6998/papers/p165-hanrahan.pdf).
[3] H. W. Jensen et al. (2001), "A practical model for subsurface light transport", in Proceedings of ACM SIGGRAPH 2001, pp. 511-518 (http://www.cs.berkeley.edu/~ravir/6998/papers/p511-jensen.pdf).
[4] Only primary and secondary rays are represented in the figure.
[5] Or, if the object is thin, it can exit from the opposite surface, giving diffuse transmitted light.
[6] Paul Kubelka, Franz Munk (1931), "Ein Beitrag zur Optik der Farbanstriche" ("A contribution to the optics of paints"), Zeits. f. Techn. Physik, 12, 593-601; see The Kubelka-Munk Theory of Reflectance (http://web.eng.fiu.edu/~godavart/BME-Optics/Kubelka-Munk-Theory.pdf).
Lambertian reflectance
Lambertian reflectance is the property that defines an ideal diffusely reflecting surface. The apparent brightness of such a surface to an observer is the same regardless of the observer's angle of view. More technically, the surface's luminance is isotropic, and the luminous intensity obeys Lambert's cosine law. Lambertian reflectance is named after Johann Heinrich Lambert, who introduced the concept of perfect diffusion in his 1760 book Photometria.
Examples
Unfinished wood exhibits roughly Lambertian reflectance, but wood finished with a glossy coat of polyurethane does not, since the glossy coating creates specular highlights. Not all rough surfaces are Lambertian reflectors, but this is often a good approximation when the characteristics of the surface are unknown. Spectralon is a material which is designed to exhibit an almost perfect Lambertian reflectance.
In computer graphics, Lambertian reflection is often used as a model for diffuse reflection. The intensity of the diffusely reflected light is computed from the dot product of the surface's normalized normal vector $\mathbf{N}$ and a normalized light-direction vector $\mathbf{L}$ pointing from the surface to the light source, multiplied by the surface color $C$ and the intensity of the incoming light $I_L$:

$I_D = (\mathbf{L} \cdot \mathbf{N})\, C\, I_L = \cos\alpha \; C\, I_L$

where $\alpha$ is the angle between the direction of the two vectors. The intensity will be the highest if the normal vector points in the same direction as the light vector ($\alpha = 0^{\circ}$, the surface is perpendicular to the direction of the light), and the lowest if the normal vector is perpendicular to the light vector ($\alpha = 90^{\circ}$, the surface runs parallel with the direction of the light). Lambertian reflection from polished surfaces is typically accompanied by specular reflection (gloss), where the surface luminance is highest when the observer is situated at the perfect reflection direction (i.e. where the direction of the reflected light is a reflection of the direction of the incident light in the surface), and falls off sharply. This is simulated in computer graphics with various specular reflection models such as Phong, Cook-Torrance, etc.
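A minimal Python sketch of this diffuse term; the function name is illustrative, and both direction vectors are assumed to be unit length:

import numpy as np

def lambert_diffuse(N, L, C, I_L):
    # I_D = (L . N) C I_L, clamped so surfaces facing away receive nothing.
    # The result is independent of the viewing direction, which is the
    # defining property of Lambertian reflectance.
    return max(0.0, float(np.dot(L, N))) * np.asarray(C, float) * I_L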
Other waves
While Lambertian reflectance usually refers to the reflection of light by an object, it can be used to refer to the reflection of any wave. For example, in ultrasound imaging, "rough" tissues are said to exhibit Lambertian reflectance.
Gouraud shading
Gouraud shading, named after Henri Gouraud, is an interpolation method used in computer graphics to produce continuous shading of surfaces represented by polygon meshes. In practice, Gouraud shading is most often used to achieve continuous lighting on triangle surfaces by computing the lighting at the corners of each triangle and linearly interpolating the resulting colours for each pixel covered by the triangle. Gouraud first published the technique in 1971.[1][2][3]
Description
Gouraud shading works as follows: an estimate of the surface normal of each vertex in a polygonal 3D model is either specified for each vertex or found by averaging the surface normals of the polygons that meet at each vertex. Using these estimates, lighting computations based on a reflection model, e.g. the Phong reflection model, are then performed to produce colour intensities at the vertices. For each screen pixel that is covered by the polygonal mesh, colour intensities can then be interpolated from the colour values calculated at the vertices.
Oren–Nayar reflectance model

Introduction
Reflectance is a physical property of a material that describes how it reflects incident light. The appearance of various materials is determined to a large extent by their reflectance properties. Most reflectance models can be broadly classified into two categories: diffuse and specular. In computer vision and computer graphics, the diffuse component is often assumed to be Lambertian. A surface that obeys Lambert's law appears equally bright from all viewing directions. This model for diffuse reflection was proposed by Johann Heinrich Lambert in 1760 and has been perhaps the most widely used reflectance model in computer vision and graphics. For a large number of real-world surfaces, such as concrete, plaster, sand, etc., however, the Lambertian model is an inadequate approximation of the diffuse component. This is primarily because the Lambertian model does not take the roughness of the surface into account.

Comparison of a matte vase with the rendering based on the Lambertian model. Illumination is from the viewing direction.

Rough surfaces can be modelled as a set of facets with different slopes, where each facet is a small planar patch. Since photo receptors of the retina and pixels in a camera are both finite-area detectors, substantial macroscopic (much larger than the wavelength of incident light) surface roughness is often projected onto a single detection element, which in turn produces an aggregate brightness value over many facets. Whereas Lambert's law may hold well when observing a single planar facet, a collection of such facets with different orientations is guaranteed to violate Lambert's law. The primary reason for this is that the foreshortened facet areas will change for different viewing directions, and thus the surface appearance will be view-dependent.
Analysis of this phenomenon has a long history and can be traced back almost a century. Past work has resulted in empirical models designed to fit experimental data as well as theoretical results derived from first principles. Much of this work was motivated by the non-Lambertian reflectance of the moon. The Oren–Nayar reflectance model, developed by Michael Oren and Shree K. Nayar in 1993,[1] predicts reflectance from rough diffuse surfaces for the entire hemisphere of source and sensor directions. The model takes into account complex physical phenomena such as masking, shadowing and interreflections between points on the surface facets. It can be viewed as a generalization of Lambert's law. Today, it is widely used in computer graphics and animation for rendering rough surfaces.[citation needed] It also has important implications for human vision and computer vision problems, such as shape from shading, photometric stereo, etc.
Formulation
The surface roughness model used in the derivation of the Oren–Nayar model is the microfacet model, proposed by Torrance and Sparrow,[2] which assumes the surface to be composed of long symmetric V-cavities. Each cavity consists of two planar facets. The roughness of the surface is specified using a probability function for the distribution of facet slopes. In particular, the Gaussian distribution is often used, and thus the variance of the Gaussian distribution, $\sigma^2$, is a measure of the roughness of the surface. The standard deviation of the facet slopes (gradient of the surface elevation), $\sigma$, ranges in $[0, \infty)$.
In the Oren–Nayar reflectance model, each facet is assumed to be Lambertian in reflectance. As shown in the image at right, given the radiance of the incoming light $E_0$, the radiance of the reflected light $L_r$, according to the Oren–Nayar model, is

$L_r = \frac{\rho}{\pi} \cos\theta_i \left( A + B \max\left[0, \cos(\phi_i - \phi_r)\right] \sin\alpha \tan\beta \right) E_0$

where

$A = 1 - 0.5\,\frac{\sigma^2}{\sigma^2 + 0.33}$, $B = 0.45\,\frac{\sigma^2}{\sigma^2 + 0.09}$, $\alpha = \max(\theta_i, \theta_r)$, $\beta = \min(\theta_i, \theta_r)$,

$(\theta_i, \phi_i)$ and $(\theta_r, \phi_r)$ are the polar and azimuth angles of the incidence and reflection directions, $\rho$ is the albedo of the surface, and $\sigma$ is the roughness of the surface. In the case of $\sigma = 0$ (i.e., all facets in the same plane), we have $A = 1$ and $B = 0$, and thus the Oren–Nayar model simplifies to the Lambertian model:

$L_r = \frac{\rho}{\pi} \cos\theta_i \, E_0$
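A minimal Python sketch of this model as reconstructed above (angles in radians; the function name is illustrative). Setting sigma to 0 recovers the Lambertian result:

import math

def oren_nayar(theta_i, theta_r, phi_i, phi_r, sigma, albedo, E0):
    # Reflected radiance L_r; sigma is the surface roughness.
    s2 = sigma * sigma
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha = max(theta_i, theta_r)
    beta = min(theta_i, theta_r)
    return (albedo / math.pi) * math.cos(theta_i) * \
           (A + B * max(0.0, math.cos(phi_i - phi_r)) *
            math.sin(alpha) * math.tan(beta)) * E0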
Results
Here is a real image of a matte vase illuminated from the viewing direction, along with versions rendered using the Lambertian and Oren–Nayar models. It shows that the Oren–Nayar model predicts the diffuse reflectance for rough surfaces more accurately than the Lambertian model. Here are rendered images of a sphere using the Oren–Nayar model, corresponding to different surface roughnesses (i.e. different values of $\sigma$):
Plot of the brightness of the rendered images, compared with the measurements on a cross section of the real vase.
Related microfacet models cover rough opaque specular surfaces (glossy surfaces), and rough transparent surfaces in which each facet is made of glass (transparent).
References
[1] M. Oren and S. K. Nayar, "Generalization of Lambert's Reflectance Model" (http://www1.cs.columbia.edu/CAVE/publications/pdfs/Oren_SIGGRAPH94.pdf). SIGGRAPH, pp. 239-246, July 1994.
[2] Torrance, K. E. and Sparrow, E. M. "Theory for off-specular reflection from roughened surfaces". J. Opt. Soc. Am. 57, 9 (Sep 1967), 1105-1114.
[3] B. Walter, et al. "Microfacet Models for Refraction through Rough Surfaces" (http://www.cs.cornell.edu/~srm/publications/EGSR07-btdf.html). EGSR 2007.
External links
The official project page for the Oren-Nayar model (http://www1.cs.columbia.edu/CAVE/projects/oren/) at Shree Nayar's CAVE research group webpage (http://www.cs.columbia.edu/CAVE/)
Phong shading
Phong shading refers to an interpolation technique for surface shading in 3D computer graphics. It is also called Phong interpolation or normal-vector interpolation shading. Specifically, it interpolates surface normals across rasterized polygons and computes pixel colors based on the interpolated normals and a reflection model. Phong shading may also refer to the specific combination of Phong interpolation and the Phong reflection model.
History
Phong shading and the Phong reflection model were developed at the University of Utah by Bui Tuong Phong, who published them in his 1973 Ph.D. dissertation.[1][2] Phong's methods were considered radical at the time of their introduction, but have evolved into a baseline shading method for many rendering applications. Phong's methods have proven popular due to their generally efficient use of computation time per rendered pixel.
Phong interpolation
Phong shading improves upon Gouraud shading and provides a better approximation of the shading of a smooth surface. Phong shading assumes a smoothly varying surface normal vector. The Phong interpolation method works better than Gouraud shading when applied to a reflection model that has small specular highlights such as the Phong reflection model.
The most serious problem with Gouraud shading occurs when specular highlights are found in the middle of a large polygon. Since these specular highlights are absent from the polygon's vertices and Gouraud shading interpolates based on the vertex colors, the specular highlight will be missing from the polygon's interior. This problem is fixed by Phong shading. Unlike Gouraud shading, which interpolates colors across polygons, in Phong shading a normal vector is linearly interpolated across the surface of the polygon from the polygon's vertex normals. The surface normal is interpolated and normalized at each pixel and then used in a reflection model, e.g. the Phong reflection model, to obtain the final pixel color. Phong shading is more computationally expensive than Gouraud shading since the reflection model must be computed at each pixel instead of at each vertex. In modern graphics hardware, variants of this algorithm are implemented using pixel or fragment shaders.
Visual illustration of the Phong equation: here the light is white, the ambient and diffuse colors are both blue, and the specular color is white, reflecting a small part of the light hitting the surface, but only in very narrow highlights. The intensity of the diffuse component varies with the direction of the surface, and the ambient component is uniform (independent of direction).
References
[1] B. T. Phong, "Illumination for computer generated pictures", Communications of the ACM 18 (1975), no. 6, 311-317.
[2] University of Utah School of Computing, http://www.cs.utah.edu/school/history/#phong-ref
Blinn–Phong shading model
Description
In Phong shading, one must continually recalculate the scalar product $R \cdot V$ between a viewer (V) and the beam from a light source (L) reflected (R) on a surface. If, instead, one calculates a halfway vector between the viewer and light-source vectors,

$H = \frac{L + V}{\|L + V\|}$

the dot product $R \cdot V$ can be replaced with $N \cdot H$, where $N$ is the normalized surface normal and $R = 2(N \cdot L)N - L = -P_N L$, with $P_N = I - 2NN^{\mathsf{T}}$ the Householder matrix that reflects a point in the hyperplane that contains the origin and has the normal $N$.

This dot product represents the cosine of an angle that is half of the angle represented by Phong's dot product if V, L, N and R all lie in the same plane. This relation between the angles remains approximately true when the vectors don't lie in the same plane, especially when the angles are small. The angle between N and H is therefore sometimes called the halfway angle. Considering that the angle between the halfway vector and the surface normal is likely to be smaller than the angle between R and V used in Phong's model (unless the surface is viewed from a very steep angle, for which it is likely to be larger), and since Phong uses an exponent on $R \cdot V$, a larger exponent can be set such that $(N \cdot H)^{\alpha'}$ is closer to the former expression. For front-lit surfaces (specular reflections on surfaces facing the viewer), this will result in specular highlights that very closely match the corresponding Phong reflections. However, while the Phong reflections are always round for a flat surface, the Blinn–Phong reflections become elliptical when the surface is viewed from a steep angle. This can be compared to the case where the sun is reflected in the sea close to the horizon, or where a far away street light is reflected in wet pavement, where the reflection will always be much more extended vertically than horizontally.
Additionally, while it can be seen as an approximation to the Phong model, it produces more accurate models of empirically determined bidirectional reflectance distribution functions than Phong for many types of surfaces. (See: Experimental Validation of Analytical BRDF Models, Siggraph 2004 [3])
Efficiency
This rendering model is less efficient than pure Phong shading in most cases, since it contains a square root calculation. While the original Phong model only needs a simple vector reflection, this modified form takes more into consideration. However, as many CPUs and GPUs contain single and double precision square root functions (as standard features) and other instructions that can be used to speed up rendering, the time penalty for this kind of shader will not be noticed in most implementations. However, Blinn-Phong will be faster in the case where the viewer and light are treated to be at infinity. This is the case for directional lights. In this case, the half-angle vector is independent of position and surface curvature. It can be computed once for each light and then used for the entire frame, or indeed while light and viewpoint remain in the same relative position. The same is not true with Phong's original reflected light vector which depends on the surface curvature and must be recalculated for each pixel of the image (or for each vertex of the model in the case of vertex lighting). In most cases where lights are not treated to be at infinity, for instance when using point lights, the original Phong model will be faster.
Code sample
This sample in High Level Shader Language is a method of determining the diffuse and specular light from a point light. The light structure, the position in space of the surface, the view direction vector and the normal of the surface are passed through, and a Lighting structure is returned:

struct Lighting
{
    float3 Diffuse;
    float3 Specular;
};

struct PointLight
{
    float3 position;
    float3 diffuseColor;
    float  diffusePower;
    float3 specularColor;
    float  specularPower;
    float  specularHardness; // specular exponent; declared here so the sample is self-contained
};

Lighting GetPointLight( PointLight light, float3 pos3D, float3 viewDir, float3 normal )
{
    Lighting OUT;
    if( light.diffusePower > 0 )
    {
        float3 lightDir = light.position - pos3D; // vector from the surface point to the light
        float distance = length( lightDir );
        lightDir = lightDir / distance; // = normalize( lightDir );
        distance = distance * distance; // this line may be optimised using inverse square root

        // Intensity of the diffuse light. Saturate to keep within the 0-1 range.
        float NdotL = dot( normal, lightDir );
        float intensity = saturate( NdotL );

        // Calculate the diffuse light factoring in light color, power and the attenuation.
        OUT.Diffuse = intensity * light.diffuseColor * light.diffusePower / distance;

        // Calculate the half vector between the light vector and the view vector.
        // This is faster than calculating the actual reflective vector.
        float3 H = normalize( lightDir + viewDir );

        // Intensity of the specular light.
        float NdotH = dot( normal, H );
        intensity = pow( saturate( NdotH ), light.specularHardness );

        // Sum up the specular light factoring in color, power and attenuation.
        OUT.Specular = intensity * light.specularColor * light.specularPower / distance;
    }
    return OUT;
}
References
[3] http://people.csail.mit.edu/wojciech/BRDFValidation/index.html
Specular reflection
Specular reflection is the mirror-like reflection of light (or of other kinds of wave) from a surface, in which light from a single incoming direction (a ray) is reflected into a single outgoing direction. Such behavior is described by the law of reflection, which states that the direction of incoming light (the incident ray) and the direction of outgoing reflected light (the reflected ray) make the same angle with respect to the surface normal; thus the angle of incidence equals the angle of reflection ($\theta_i = \theta_r$ in the figure), and the incident, normal, and reflected directions are coplanar. This behavior was first discovered through careful observation and measurement by Hero of Alexandria (AD c. 10-70).[1]
Explanation
Specular reflection is distinct from diffuse reflection, where incoming light is reflected in a broad range of directions. An example of the distinction between specular and diffuse reflection would be glossy and matte paints. Matte paints have almost exclusively diffuse reflection, while glossy paints have both specular and diffuse reflection. A surface built from a non-absorbing powder, such as plaster, can be a nearly perfect diffuser, whereas polished metallic objects can specularly reflect light very efficiently. The reflecting material of mirrors is usually aluminum or silver.
Diagram of specular reflection
Even when a surface exhibits only specular reflection with no diffuse reflection, not all of the light is necessarily reflected. Some of the light may be absorbed by the materials. Additionally, depending on the type of material behind the surface, some of the light may be transmitted through the surface. For most interfaces between materials, the fraction of the light that is reflected increases with increasing angle of incidence. If the light is propagating in a material with a higher index of refraction than the material whose surface it strikes, then total internal reflection may occur if the angle of incidence is greater than a certain critical angle. Specular reflection from a dielectric such as water can affect polarization, and at Brewster's angle reflected light is completely linearly polarized parallel to the interface.

Reflections on still water are an example of specular reflection.

The law of reflection arises from diffraction of a plane wave with small wavelength on a flat boundary: when the boundary size is much larger than the wavelength, the electrons of the boundary are seen oscillating exactly in phase only from one direction: the specular direction. If a mirror becomes very small compared to the wavelength, the law of reflection no longer holds and the behavior of light is more complicated. Waves other than visible light can also exhibit specular reflection. This includes other electromagnetic waves, as well as non-electromagnetic waves. Examples include ionospheric reflection of radio waves, reflection of radio- or microwave radar signals by flying objects, acoustic mirrors, which reflect sound, and atomic mirrors, which reflect neutral atoms. For the efficient reflection of atoms from a solid-state mirror, very cold atoms and/or grazing
incidence are used in order to provide significant quantum reflection; ridged mirrors are used to enhance the specular reflection of atoms. The reflectivity of a surface is the ratio of reflected power to incident power. The reflectivity is a material characteristic, depends on the wavelength, and is related to the refractive index of the material through Fresnel's equations. In absorbing materials, like metals, it is related to the electronic absorption spectrum through the imaginary component of the complex refractive index. Measurements of specular reflection are performed with normal- or varying-incidence reflectometers using a scanning variable-wavelength light source. Lower quality measurements using a glossmeter quantify the glossy appearance of a surface in gloss units.

The image in a flat mirror has these features:
It is the same distance behind the mirror as the object is in front.
It is the same size as the object.
It is the right way up (erect).
It appears to be laterally inverted, in other words left and right reversed.
It is virtual, meaning that the image appears to be behind the mirror, and cannot be projected onto a screen.
Direction of reflection
The direction of a reflected ray is determined by the vector of incidence and the surface normal vector. Given an incident direction $\hat{d}_i$ from the surface to the light source and the surface normal direction $\hat{n}$, the specularly reflected direction $\hat{d}_s$ (all unit vectors) is:[2][3]

$\hat{d}_s = 2 (\hat{n} \cdot \hat{d}_i)\, \hat{n} - \hat{d}_i$

where $\hat{n} \cdot \hat{d}_i$ is a scalar obtained with the dot product. Different authors may define the incident and reflection directions with different signs. Assuming these Euclidean vectors are represented in column form, the equation can be equivalently expressed as a matrix-vector multiplication, $\hat{d}_s = R\, \hat{d}_i$, where $R$ is the so-called Householder transformation matrix, defined as:

$R = 2 \hat{n} \hat{n}^{\mathsf{T}} - I$

with $I$ the identity matrix.
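A minimal Python sketch of both forms of this computation, checking that they agree (function names are illustrative):

import numpy as np

def reflect(d_i, n):
    # d_s = 2 (n . d_i) n - d_i, with d_i pointing from the surface to the light.
    return 2.0 * np.dot(d_i, n) * n - d_i

def householder(n):
    # R = 2 n n^T - I, so that d_s = R @ d_i.
    return 2.0 * np.outer(n, n) - np.eye(3)

n = np.array([0.0, 1.0, 0.0])                    # surface normal
d_i = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)     # 45-degree incident direction
assert np.allclose(reflect(d_i, n), householder(n) @ d_i)
print(reflect(d_i, n))                           # [-0.7071..., 0.7071..., 0.]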
Specular highlight
A specular highlight is the bright spot of light that appears on shiny objects when illuminated (for example, see image at right). Specular highlights are important in 3D computer graphics, as they provide a strong visual cue for the shape of an object and its location with respect to light sources in the scene.
Microfacets
The term specular means that light is perfectly reflected in a mirror-like way from the light source to the viewer. Specular reflection is visible only where the surface normal is oriented precisely halfway between the direction of incoming light and the direction of the viewer; this is called the half-angle direction because it bisects (divides into halves) the angle between the incoming light and the viewer. Thus, a specularly reflecting surface would show a specular highlight as the perfectly sharp reflected image of a light source. However, many shiny objects show blurred specular highlights.

Specular highlights on a pair of spheres.

This can be explained by the existence of microfacets. We assume that surfaces that are not perfectly smooth are composed of many very tiny facets, each of which is a perfect specular reflector. These microfacets have normals that are distributed about the normal of the approximating smooth surface. The degree to which microfacet normals differ from the smooth surface normal is determined by the roughness of the surface. At points on the object where the smooth normal is close to the half-angle direction, many of the microfacets point in the half-angle direction and so the specular highlight is bright. As one moves away from the center of the highlight, the smooth normal and the half-angle direction get farther apart; the number of microfacets oriented in the half-angle direction falls, and so the intensity of the highlight falls off to zero. The specular highlight often reflects the color of the light source, not the color of the reflecting object. This is because many materials have a thin layer of clear material above the surface of the pigmented material. For example, plastic is made up of tiny beads of color suspended in a clear polymer, and human skin often has a thin layer of oil or sweat above the pigmented cells. Such materials will show specular highlights in which all parts of the color spectrum are reflected equally. On metallic materials such as gold, the color of the specular highlight will reflect the color of the material.
Models of microfacets
A number of different models exist to predict the distribution of microfacets. Most assume that the microfacet normals are distributed evenly around the normal; these models are called isotropic. If microfacets are distributed with a preference for a certain direction along the surface, the distribution is anisotropic. NOTE: In most equations, when it says $(A \cdot B)$ it means $\max(0, A \cdot B)$.
Phong distribution
In the Phong reflection model, the intensity of the specular highlight is calculated as:

$k_\mathrm{spec} = (R \cdot V)^n$

where R is the mirror reflection of the light vector off the surface, and V is the viewpoint vector. In the Blinn–Phong shading model, the intensity of a specular highlight is calculated as:

$k_\mathrm{spec} = (N \cdot H)^n$
where N is the smooth surface normal and H is the half-angle direction (the direction vector midway between L, the vector to the light, and V, the viewpoint vector). The number n is called the Phong exponent, and is a user-chosen value that controls the apparent smoothness of the surface. These equations imply that the distribution of microfacet normals is an approximately Gaussian distribution (for large $n$), or approximately a Pearson type II distribution, of the corresponding angle.[1] While this is a useful heuristic and produces believable results, it is not a physically based model. A similar highlight can also be calculated differently, by reflecting the eye vector instead of the light vector: with E the normalized eye vector (view vector) and $R = 2(N \cdot E)N - E$ its mirror reflection about the surface normal N, the intensity is $(L \cdot R)^n$, where L is the light vector and all vectors are normalized; by the symmetry of the reflection, this equals Phong's $(R \cdot V)^n$.
Gaussian distribution
A slightly better model of microfacet distribution can be created using a Gaussian distribution.[citation needed] The usual function calculates specular highlight intensity as:

$k_\mathrm{spec} = e^{-\left(\frac{\gamma}{m}\right)^2}$

where $\gamma$ is the angle between N and H, and m is a constant between 0 and 1 that controls the apparent smoothness of the surface.[2]
Beckmann distribution
A physically based model of microfacet distribution is the Beckmann distribution:[3]
$k_\mathrm{spec} = \frac{\exp\left(-\tan^2\alpha / m^2\right)}{\pi m^2 \cos^4\alpha}, \qquad \alpha = \arccos(N \cdot H)$

where m is the rms slope of the surface microfacets (the roughness of the material).[4] Compared to the empirical models above, this function "gives the absolute magnitude of the reflectance without introducing arbitrary constants; the disadvantage is that it requires more computation".[5] However, this model can be simplified since $\tan^2\alpha / m^2 = \frac{1 - \cos^2\alpha}{m^2 \cos^2\alpha}$. Also note that the product of $\cos\alpha$ and a surface distribution function is normalized over the half-sphere, which is obeyed by this function.
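For comparison, here is a minimal Python sketch of the distribution functions discussed so far, as reconstructed above (function names are illustrative; angles are in radians and the dot products are assumed already clamped to non-negative values):

import math

def phong_distribution(cos_RV, n):
    return max(0.0, cos_RV) ** n                  # (R . V)^n

def blinn_phong_distribution(cos_NH, n):
    return max(0.0, cos_NH) ** n                  # (N . H)^n

def gaussian_distribution(gamma, m):
    return math.exp(-(gamma / m) ** 2)            # gamma = angle between N and H

def beckmann_distribution(alpha, m):
    # Physically based; alpha = angle between N and H, m = rms microfacet slope.
    c = math.cos(alpha)
    return math.exp(-(math.tan(alpha) / m) ** 2) / (math.pi * m * m * c ** 4)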
Heidrich–Seidel anisotropic distribution

The Heidrich–Seidel distribution is a simple anisotropic distribution based on the Phong model. It describes the highlight in terms of an anisotropic exponent n, the viewing direction V, the direction of incoming light L, and the direction T parallel to the grooves or fibers at this point on the surface. Given a unit vector D which specifies the global direction of the anisotropic distribution, the vector T at a given point can be computed by projecting D onto the plane perpendicular to the surface normal:

$T = \frac{D - (D \cdot N) N}{\|D - (D \cdot N) N\|}$

where N is the unit normal vector at that point on the surface. The cosine of the angle between two of these vectors follows from a property of the dot product, and the sine of the angle from the trigonometric identities. The anisotropic distribution should be used in conjunction with a non-anisotropic distribution like a Phong distribution to produce the correct specular highlight.

Ward anisotropic distribution

The Ward anisotropic distribution[6] allows separate roughness controls along two orthogonal directions in the surface plane. The specular term is zero if $N \cdot L < 0$ or $N \cdot R < 0$. All vectors are unit vectors. The vector R is the mirror reflection of the light vector off the surface, L is the direction from the surface point to the light, H is the half-angle direction, N is the surface normal, and X and Y are two orthogonal vectors in the normal plane which specify the anisotropic directions.
Cook–Torrance model

The Cook–Torrance model[5] uses a specular term of the form

$k_\mathrm{spec} = \frac{D\, F\, G}{4\, (E \cdot N)(N \cdot L)}$

Here D is the Beckmann distribution factor as above and F is the Fresnel term. For performance reasons, in real-time 3D graphics Schlick's approximation is often used to approximate the Fresnel term. G is the geometric attenuation term, describing selfshadowing due to the microfacets, and is of the form

$G = \min\left(1, \frac{2 (H \cdot N)(E \cdot N)}{E \cdot H}, \frac{2 (H \cdot N)(L \cdot N)}{E \cdot H}\right)$

In these formulas E is the vector to the camera or eye, H is the half-angle vector, L is the vector to the light source and N is the normal vector, and $\alpha$ is the angle between H and N.
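A minimal Python sketch of this term as reconstructed above, with a Beckmann D, Schlick's approximation for F, and the min-form G; the normal-incidence reflectance F0 is an assumed parameter, and all input vectors are assumed to be unit length:

import math
import numpy as np

def cook_torrance_specular(N, L, E, m, F0):
    # D*F*G / (4 (E.N)(N.L)); m is the Beckmann roughness, F0 the base reflectance.
    H = (L + E) / np.linalg.norm(L + E)
    NdotH = float(np.dot(N, H)); NdotL = float(np.dot(N, L))
    NdotE = float(np.dot(N, E)); EdotH = float(np.dot(E, H))
    if NdotL <= 0.0 or NdotE <= 0.0 or NdotH <= 0.0:
        return 0.0                                     # back-facing: no highlight
    alpha = math.acos(min(1.0, NdotH))
    D = math.exp(-(math.tan(alpha) / m) ** 2) / (math.pi * m * m * math.cos(alpha) ** 4)
    F = F0 + (1.0 - F0) * (1.0 - EdotH) ** 5           # Schlick's approximation
    G = min(1.0, 2 * NdotH * NdotE / EdotH, 2 * NdotH * NdotL / EdotH)
    return D * F * G / (4.0 * NdotE * NdotL)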
References
[1] Richard Lyon, "Phong Shading Reformulation for Hardware Renderer Simplification", Apple Technical Report #43, Apple Computer, Inc. 1993. PDF (http://dicklyon.com/tech/Graphics/Phong_TR-Lyon.pdf)
[2] Glassner, Andrew S. (ed). An Introduction to Ray Tracing. San Diego: Academic Press Ltd, 1989. p. 148.
[3] Petr Beckmann, André Spizzichino, The Scattering of Electromagnetic Waves from Rough Surfaces, Pergamon Press, 1963, 503 pp. (Republished by Artech House, 1987, ISBN 978-0-89006-238-8.)
[4] Foley et al. Computer Graphics: Principles and Practice. Menlo Park: Addison-Wesley, 1997. p. 764.
[5] R. Cook and K. Torrance. "A reflectance model for computer graphics" (http://inst.eecs.berkeley.edu/~cs283/sp13/lectures/cookpaper.pdf). Computer Graphics (SIGGRAPH '81 Proceedings), Vol. 15, No. 3, July 1981, pp. 301-316.
[6] http://radsite.lbl.gov/radiance/papers/
Retroreflector
A gold corner-cube retroreflector. Uses: distance measurement by optical delay line.
A retroreflector (sometimes called a retroflector or cataphote) is a device or surface that reflects light back to its source with a minimum of scattering. An electromagnetic wave front is reflected back along a vector that is parallel to but opposite in direction from the wave's source, even when the device or surface's angle of incidence is greater than zero. This is unlike a planar mirror, which does this only if the mirror is exactly perpendicular to the wave front, having a zero angle of incidence.
Types of retroreflectors
There are several ways to obtain retroreflection:[1]
Corner reflector
A set of three mutually perpendicular reflective surfaces, placed to form the corner of a cube, works as a retroreflector. The three corresponding normal vectors of the corner's sides form a basis (x, y, z) in which to represent the direction of an arbitrary incoming ray, [a, b, c]. When the ray reflects from the first side, say x, the ray's x component, a, is reversed to -a while the y and z components are unchanged. Therefore, as the ray reflects first from side x, then side y and finally from side z, the ray direction goes from [a, b, c] to [-a, b, c] to [-a, -b, c] to [-a, -b, -c], and it leaves the corner with all three components of motion exactly reversed (see the sketch below).
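A minimal Python sketch of these three successive reflections (note the sign convention: here the ray direction d is the travelling direction, so each mirror applies d - 2(d.n)n):

import numpy as np

def mirror(d, n):
    # Mirror reflection of a travelling ray direction d in a plane with normal n.
    return d - 2.0 * np.dot(d, n) * n

ray = np.array([0.3, -0.5, 0.8])
for axis in np.eye(3):          # reflect off sides x, y, z in turn
    ray = mirror(ray, axis)
print(ray)                      # [-0.3, 0.5, -0.8]: all three components reversed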
Corner reflectors occur in two varieties. In the more common form, the corner is literally the truncated corner of a cube of transparent material such as conventional optical glass. In this structure, the reflection is achieved either by total internal reflection or silvering of the outer cube surfaces. The second form uses mutually perpendicular flat mirrors bracketing an air space. These two types have similar optical properties.
A large relatively thin retroreflector can be formed by combining many small corner reflectors, using the standard triangular tiling.
Comparison of the effect of corner (1) and spherical (2) retroreflectors on three light rays. Reflective surfaces are drawn in dark blue.
Cat's eye
Another common type of retroreflector consists of refracting optical elements with a reflective surface, arranged so that the focal surface of the refractive element coincides with the reflective surface, typically a transparent sphere and a spherical mirror. This same effect can be optimally achieved with a single transparent sphere when the refractive index of the material is exactly two times the refractive index of the medium from which the radiation is incident. In that case, the sphere surface behaves as a concave spherical mirror with the required curvature for retroreflection. The refractive index need not be twice the ambient but can be anything exceeding 1.5 times as high; due to spherical aberration, there exists a radius from the centerline at which incident rays are focused at the center of the rear surface of the sphere.
The term cat's eye derives from the resemblance of the cat's eye retroreflector to the optical system that produces the well-known phenomenon of "glowing eyes" or eyeshine in cats and other vertebrates (which are only reflecting light, rather than actually glowing). The combination of the eye's lens and the cornea form the refractive converging system, while the tapetum lucidum behind the retina forms the spherical concave mirror. Because the function of the eye is to form an image on the retina, an eye focused on a distant object has a focal surface that approximately follows the reflective tapetum lucidum structure,[citation needed] which is the condition required to form a good retroreflection. This type of retroreflector can consist of many small versions of these structures incorporated in a thin sheet or in paint. In the case of paint containing glass beads, the paint glues the beads to the surface where retroreflection is required and the beads protrude, their diameter being about twice the thickness of the paint.
Eyeshine from retroreflectors of the transparent sphere type is clearly visible in this cat's eyes
Phase-conjugate mirror
A third, much less common way of producing a retroreflector is to use the nonlinear optical phenomenon of phase conjugation. This technique is used in advanced optical systems such as high-power lasers and optical transmission lines. Phase-conjugate mirrors require a comparatively expensive and complex apparatus, as well as large quantities of power (as nonlinear optical processes can be efficient only at high enough intensities). However, phase-conjugate mirrors have an inherently much greater accuracy in the direction of the retroreflection, which in passive elements is limited by the mechanical accuracy of the construction.
Operation
Retroreflectors are devices that operate by returning light back to the light source along the same light direction. The coefficient of luminous intensity, RI, is the measure of a reflector's performance, defined as the ratio of the strength of the reflected light (luminous intensity) to the amount of light that falls on the reflector (normal illuminance). A reflector will appear brighter as its RI value increases.[1] The RI value of the reflector is a function of the color, size, and condition of the reflector. Clear or white reflectors are the most efficient, and appear brighter than other colors. The RI value is proportional to the surface area of the reflector and increases as the reflective surface increases.[1]
The RI value is also a function of the spatial geometry between the observer, light source, and reflector. Figures 1 and 2 show the observation angle and entrance angle between the automobile's headlights, bicycle, and driver. The observation angle is the angle formed by the light beam and the driver's line of sight. Observation angle is a function of the distance between the headlights and the driver's eye, and the distance to the reflector. Traffic engineers use an observation angle of 0.2 degrees to simulate a reflector target about 800 feet in front of a passenger automobile. As the observation angle increases, the reflector performance decreases. For example, a truck has a large separation between the headlight and the driver's eye compared to a passenger vehicle. A bicycle reflector appears brighter to the passenger car driver than to the truck driver at the same distance from the vehicle to the reflector.[1]

Bicycle retroreflectors

The light beam and the normal axis of the reflector as shown in Figure 2 form the entrance angle. The entrance angle is a function of the orientation of the reflector to the light source. For example, the entrance angle between an automobile approaching a bicycle at an intersection 90 degrees apart is larger than the entrance angle for a bicycle directly in front of an automobile on a straight road. The reflector appears brightest to the observer when it is directly in line with the light source.[1] The brightness of a reflector is also a function of the distance between the light source and the reflector. At a given observation angle, as the distance between the light source and the reflector decreases, the light that falls on the reflector increases. This increases the amount of light returned to the observer and the reflector appears brighter.[1]
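As a rough worked example of the observation-angle geometry: assuming a headlight-to-eye separation of about 0.6 m (an illustrative figure, not from the source) at the roughly 800 ft (about 244 m) target distance quoted above:

import math

separation_m, distance_m = 0.6, 244.0
observation_angle = math.degrees(math.atan(separation_m / distance_m))
print(round(observation_angle, 2))   # ~0.14 degrees, near the 0.2 degree standard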
Applications
Retroreflectors on roads
Retroreflection (sometimes called retroflection) is used on road surfaces, road signs, vehicles, and clothing (large parts of the surface of special safety clothing, less on regular coats). When the headlights of a car illuminate a retroreflective surface, the reflected light is directed towards the car and its driver (rather than in all directions as with diffuse reflection). However, a pedestrian can see retroreflective surfaces in the dark only if there is a light source directly between them and the reflector (e.g., via a flashlight they carry) or directly behind them (e.g., via a car approaching from behind).

Retroreflectors are clearly visible in a pair of bicycle shoes. The light source is a flash a few centimeters above the camera lens.

"Cat's eyes" are a particular type of retroreflector embedded in the road surface and are used mostly in the UK and parts of the United States. Corner reflectors are better at sending the light back to the source over long distances, while spheres are better at sending the light to a receiver somewhat off-axis from the source, as when the light from headlights is reflected into the driver's eyes. Retroreflectors can be embedded in the road (level with the road surface), or they can be raised above the road surface. Raised reflectors are visible for very long distances (typically 0.5-1 kilometer or more), while sunken reflectors are visible only at very close ranges due to the higher angle required to properly reflect the light. Raised reflectors are generally not used in areas that regularly experience snow during winter, as passing snowplows can tear them off the roadways. Stress on roadways caused by cars running over embedded objects also contributes to accelerated wear and pothole formation. Retroreflective road paint is thus very popular in Canada and parts of the United States, as it is not affected by the passage of snowplows and does not affect the interior of the roadway. Where weather permits, embedded or raised retroreflectors are preferred as they last much longer than road paint, which is weathered by the elements, can be obscured by sediment or rain, and is ground away by the passage of vehicles.
These statistics are based on 14-year averages from 1995 to 2008. The MUTCD requires signs to be either illuminated or made with retroreflective sheeting materials, and although most signs in the U.S. are made with retroreflective sheeting materials, they degrade over time, resulting in a shorter life span. Until now, there has been little information available to determine how long the retroreflectivity lasts. The MUTCD now requires that agencies maintain traffic signs to a set of minimum levels, but provides a variety of maintenance methods that agencies can use for compliance. The minimum retroreflectivity requirements do not imply that an agency must measure every sign. Rather, the new MUTCD language describes methods that agencies can use to maintain traffic sign retroreflectivity at or above the minimum levels. There is a visual assessment program which can be used in conjunction with an inspector, with age limitations on the inspector, who will use comparison panels and specific types of vehicles during testing. There has been a great deal of debate as to whether this type of assessment and management will hold up when it comes to liability and litigation; this method is unpopular with many agencies because the data is open to interpretation and visual assessments vary from person to person.
BLITS

The BLITS (Ball Lens In The Space) spherical retroreflector satellite was placed into orbit as part of a September 2009 Soyuz launch[5] by the Federal Space Agency of Russia with the assistance of the International Laser Ranging Service, an independent body originally organized by the International Association of Geodesy, the International Astronomical Union, and international committees.[6] The ILRS central bureau is located at the United States Goddard Space Flight Center. The reflector, a type of Luneburg lens, was developed and manufactured by the Institute for Precision Instrument Engineering (IPIE) in Moscow. The purpose of the mission was to validate the spherical glass retroreflector satellite concept and obtain SLR (Satellite Laser Ranging) data for the solution of scientific problems in geophysics, geodynamics, and relativity. BLITS allows millimeter and submillimeter accuracy SLR measurements, as its "target error" (uncertainty of the reflection center relative to its center of mass) is less than 0.1 mm. An additional advantage is that the Earth's magnetic field does not affect the satellite orbit and spin parameters, unlike retroreflectors incorporated into active satellites. BLITS allows the most accurate measurements of any SLR satellite, with the same accuracy level as a ground target.[7] The actual satellite is a solid sphere around 17 cm in diameter, weighing 7.63 kg. It is made with two hemispherical shells (outer radius 85.16 mm) of low refractive index glass (n=1.47), and an inner sphere or ball lens (radius 53.52 mm) made of a high refractive index glass (n=1.76). The hemispheres are glued over the ball lens with all spherical surfaces concentric; the external surface of one hemisphere is coated with aluminum and protected by a varnish layer. It was designed for ranging with a green (532 nm) laser. When used for ranging, the phase center is 85.16 mm behind the sphere center, with a range correction of +196.94 mm taking into account the indices of refraction. A smaller spherical retroreflector of the same type but 6 cm in diameter was fastened to the Meteor-3M spacecraft and tested during its space flight of 2001-2006. Before a collision with space debris, the satellite was in a sun-synchronous circular orbit, 832 km high, with an inclination of 98.77 degrees, an orbital period of 101.3 min, and its own spin period of 5.6 seconds. In early 2013, the satellite was found to have a new orbit 120 m lower, a faster spin period of 2.1 seconds, and a different spin axis.[8] The change was traced back to an event that occurred on 22 Jan 2013 at 07:57 UTC; data from the United States Space Surveillance Network showed that within 10 seconds of that time BLITS was close to the predicted path of a fragment of the former Chinese Fengyun-1C satellite, with a relative velocity of 9.6 km/s between them. The Chinese government destroyed the Fengyun-1C, at an altitude of 865 km, on 11 Jan 2007 as a test of an anti-satellite missile, which resulted in 2,300 to 15,000 debris pieces.
Other uses
Retroreflectors are used in the following example applications:
In surveying with a total station or robot, the instrument man or robot aims a laser beam at a corner cube retroreflector held by the rodman. The instrument measures the propagation time of the light and converts it to a distance.
In Canada, aerodrome lighting can be replaced by appropriately coloured retroreflectors, the most important of which are the white retroreflectors that delineate the runway edges, and must be seen by aircraft equipped with landing lights up to 2 nautical miles away.[9]
In common (non-SLR) digital cameras, where the sensor system is retroreflective. Researchers have used this property to demonstrate a system to prevent unauthorized photographs by detecting digital cameras and beaming a highly focused beam of light into the lens.[10]
In movie screens, to allow for high brilliance under dark conditions.[11]
In digital compositing programs and chroma-key environments, which use retroreflection to replace traditional lit backdrops in composite work, as it provides a more solid colour without requiring the backdrop to be lit separately.[12]
Notes
[1] U.S. Consumer Product Safety Commission Bicycle Reflector Project report (http://www.cpsc.gov/volstd/bike/BikeReport.pdf)
[3] http://ilrs.gsfc.nasa.gov/docs/williams_lw13.pdf
[9] Transport Canada CARs 301.07 (http://www.tc.gc.ca/eng/civilaviation/regserv/cars/part3-301-155.htm#301_07)
[10] ABC News: Device Seeks to Jam Covert Digital Photographers (http://abcnews.go.com/Technology/FutureTech/story?id=1139800&page=1)
[11] Howstuffworks "Altered Reality" (http://science.howstuffworks.com/invisibility-cloak1.htm)
References
Optics Letters, Vol. 4, pp. 190-192 (1979), "Retroreflective Arrays as Approximate Phase Conjugators," by H. H. Barrett and S. F. Jacobs.
Optical Engineering, Vol. 21, pp. 281-283 (March/April 1982), "Experiments with Retrodirective Arrays," by Stephen F. Jacobs.
Scientific American, December 1985, "Phase Conjugation," by Vladimir Shkunov and Boris Zel'dovich.
Scientific American, January 1986, "Applications of Optical Phase Conjugation," by David M. Pepper.
Scientific American, April 1986, "The Amateur Scientist" ('Wonders with the Retroreflector'), by Jearl Walker.
Scientific American, October 1990, "The Photorefractive Effect," by David M. Pepper, Jack Feinberg, and Nicolai V. Kukhtarev.
External links
Apollo 15 Laser Ranging Retroreflector Experiment (http://www.lpi.usra.edu/expmoon/Apollo15/A15_Experiments_LRRR.html)
Manual of Traffic Signs - Retroreflective Sheetings Used for Sign Faces (http://www.trafficsign.us/signsheet.html)
Motorcycle retroreflective sheeting (http://dr650.zenseeker.net/ReflectiveTape.htm)
Lunar retroflectors (http://physics.ucsd.edu/~tmurphy/apollo/lrrr.html)
Howstuffworks article on retroreflector-based invisibility cloaks (http://science.howstuffworks.com/invisibility-cloak.htm)
Retroreflective measurement tool for roadway safety (http://pppcatalog.com/922)
Texture mapping
Texture mapping is a method for adding detail, surface texture (a bitmap or raster image), or color to a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Edwin Catmull in 1974.
A texture map is applied (mapped) to the surface of a shape or polygon.[1] This process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate (which in the 2D case is also known as a UV coordinate) either via explicit assignment or by procedural definition. Image sampling locations are then interpolated across the face of a polygon to produce a visual result that seems to have more richness than could otherwise be achieved with a limited number of polygons.

Examples of multitexturing: 1: untextured sphere, 2: texture and bump maps, 3: texture map only, 4: opacity and texture maps.

Multitexturing is the use of more than one texture at a time on a polygon.[2] For instance, a light map texture may be used to light a surface as an alternative to recalculating that lighting every time the surface is rendered. Another multitexture technique is bump mapping, which allows a texture to directly control the facing direction of a surface for the purposes of its lighting calculations; it can give a very good appearance of a complex surface, such as tree bark or rough concrete, that takes on lighting detail in addition to the usual detailed coloring. Bump mapping has become popular in recent video games as graphics hardware has become powerful enough to accommodate it in real-time.
The way the resulting pixels on the screen are calculated from the texels (texture pixels) is governed by texture filtering. The fastest method is to use nearest-neighbour interpolation, but bilinear interpolation or trilinear interpolation between mipmaps are two commonly used alternatives which reduce aliasing or jaggies. In the event of a texture coordinate being outside the texture, it is either clamped or wrapped. A sketch of bilinear sampling with both boundary modes follows.
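A minimal Python sketch of a bilinear texture lookup with wrap or clamp handling of out-of-range coordinates (the function name and array layout are illustrative):

import numpy as np

def sample_bilinear(tex, u, v, wrap=True):
    # tex is an (H, W, C) array; u and v are texture coordinates in [0, 1).
    h, w = tex.shape[:2]
    x, y = u * w - 0.5, v * h - 0.5
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    def fetch(ix, iy):
        if wrap:                 # wrap coordinates that fall outside the texture
            ix, iy = ix % w, iy % h
        else:                    # ... or clamp them to the border texel
            ix, iy = min(max(ix, 0), w - 1), min(max(iy, 0), h - 1)
        return tex[iy, ix].astype(float)
    top = (1 - fx) * fetch(x0, y0) + fx * fetch(x0 + 1, y0)
    bot = (1 - fx) * fetch(x0, y0 + 1) + fx * fetch(x0 + 1, y0 + 1)
    return (1 - fy) * top + fy * bot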
Perspective correctness
Texture coordinates are specified at each vertex of a given triangle, and these coordinates are interpolated using an extended Bresenham's line algorithm. If these texture coordinates are linearly interpolated across the screen, the result is affine texture mapping. This is a fast calculation, but there can be a noticeable discontinuity between adjacent triangles when these triangles are at an angle to the plane of the screen (see figure at right: the textures, the checker boxes, appear bent).

Because affine texture mapping does not take into account the depth information about a polygon's vertices, where the polygon is not perpendicular to the viewer it produces a noticeable defect.

Perspective correct texturing accounts for the vertices' positions in 3D space, rather than simply interpolating a 2D triangle. This achieves the correct visual effect, but it is slower to calculate. Instead of interpolating the texture coordinates directly, the coordinates are divided by their depth (relative to the viewer), and the reciprocal of the depth value is also interpolated and used to recover the perspective-correct coordinate. This correction makes it so that in parts of the polygon that are closer to the viewer the difference from pixel to pixel between texture coordinates is smaller (stretching the texture wider), and in parts that are farther away this difference is larger (compressing the texture). Affine texture mapping directly interpolates a texture coordinate $u_\alpha$ between two endpoints $u_0$ and $u_1$:

$u_\alpha = (1 - \alpha)\, u_0 + \alpha\, u_1, \qquad 0 \le \alpha \le 1$

Perspective correct mapping interpolates after dividing by depth $z$, then uses its interpolated reciprocal to recover the correct coordinate:

$u_\alpha = \frac{(1 - \alpha)\,\frac{u_0}{z_0} + \alpha\,\frac{u_1}{z_1}}{(1 - \alpha)\,\frac{1}{z_0} + \alpha\,\frac{1}{z_1}}$
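A minimal Python sketch of both interpolation formulas as reconstructed above; the function name is illustrative:

def interpolate_uv(a, u0, u1, z0, z1, perspective=True):
    # a in [0, 1] is the screen-space interpolation parameter.
    if not perspective:                             # affine: cheap but bends textures
        return (1 - a) * u0 + a * u1
    num = (1 - a) * (u0 / z0) + a * (u1 / z1)       # interpolate u/z ...
    den = (1 - a) * (1 / z0) + a * (1 / z1)         # ... and 1/z, then divide
    return num / den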
Development
Classic texture mappers generally did only simple mapping with at most one lighting effect, and perspective correctness was about 16 times more expensive. To achieve two goals - faster arithmetic results, and keeping the arithmetic mill busy at all times - every triangle is further subdivided into groups of about 16 pixels. For perspective texture mapping without hardware support, a triangle is broken down into smaller triangles for rendering, which improves detail in non-architectural applications. Software renderers generally preferred screen subdivision because it has less overhead. Additionally, they try to do linear interpolation along a line of pixels to simplify the set-up (compared to 2D affine interpolation) and thus again reduce the overhead (also, affine texture mapping does not fit into the low number of registers of the x86 CPU; the 68000 or any RISC is much more suited).

For instance, Doom restricted the world to vertical walls and horizontal floors/ceilings. This meant the walls would be a constant distance along a vertical line and the floors/ceilings would be a constant distance along a horizontal line. A fast affine mapping could be used along those lines because it would be correct. A different approach was taken for Quake, which would calculate perspective correct coordinates only once every 16 pixels of a scanline and linearly interpolate between them, effectively running at the speed of linear interpolation because the perspective correct calculation runs in parallel on the co-processor.[3] The polygons are rendered independently, hence it may be possible to switch between spans and columns or diagonal directions depending on the orientation of the polygon normal to achieve a more constant z, but the effort seems not to be worth it. A sketch of the Quake-style scheme appears after the figure caption below.

Another technique was subdividing the polygons into smaller polygons, like triangles in 3D space or squares in screen space, and using an affine mapping on them. The distortion of affine mapping becomes much less noticeable on smaller polygons. Yet another technique was approximating the perspective with a faster calculation, such as a polynomial. Still another technique uses the 1/z value of the last two drawn pixels to linearly extrapolate the next value. The division is then done starting from those values so that only a small remainder has to be divided,[4] but the amount of bookkeeping makes this method too slow on most systems. Finally, some programmers extended the constant distance trick used for Doom by finding the line of constant distance for arbitrary polygons and rendering along it.
Screen space subdivision techniques. Top left: Quake-like, top right: bilinear, bottom left: const-z.
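As a rough sketch of the Quake-style compromise (ours, not Abrash's code), the true perspective divide is performed only at every 16th pixel of a scanline, with cheap affine steps in between. Here uz0/uz1 hold u/z at the span ends and iz0/iz1 hold 1/z; the second texture coordinate and the hypothetical put_texel output routine are omitted or assumed:

    SPAN = 16  # pixels between true perspective divides

    def lerp(a, b, t):
        return a + (b - a) * t

    def draw_scanline(x0, x1, uz0, iz0, uz1, iz1, put_texel):
        n = x1 - x0
        for start in range(0, n, SPAN):
            end = min(start + SPAN, n)
            # True perspective divide only at the two ends of this short span.
            ua = lerp(uz0, uz1, start / n) / lerp(iz0, iz1, start / n)
            ub = lerp(uz0, uz1, end / n) / lerp(iz0, iz1, end / n)
            for i in range(start, end):
                t = (i - start) / (end - start)
                put_texel(x0 + i, lerp(ua, ub, t))   # cheap affine step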
References
[1] Jon Radoff, Anatomy of an MMORPG, http://radoff.com/blog/2008/08/22/anatomy-of-an-mmorpg/
[2] Blythe, David. Advanced Graphics Programming Techniques Using OpenGL (http://www.opengl.org/resources/code/samples/sig99/advanced99/notes/notes.html). Siggraph 1999. (see: Multitexture (http://www.opengl.org/resources/code/samples/sig99/advanced99/notes/node60.html))
[3] Abrash, Michael. Michael Abrash's Graphics Programming Black Book Special Edition. The Coriolis Group, Scottsdale, Arizona, 1997. ISBN 1-57610-174-6 (PDF: http://www.gamedev.net/reference/articles/article1698.asp) (Chapter 70, pg. 1282)
External links
Introduction into texture mapping using C and SDL (http://www.happy-werner.de/howtos/isw/parts/3d/chapter_2/chapter_2_texture_mapping.pdf)
Programming a textured terrain (http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series4/Textured_terrain.php) using XNA/DirectX, from www.riemers.net
Perspective correct texturing (http://www.gamers.org/dEngine/quake/papers/checker_texmap.html)
Time Texturing (http://www.fawzma.com/time-texturing-texture-mapping-with-bezier-lines/) Texture mapping with bezier lines
Polynomial Texture Mapping (http://www.hpl.hp.com/research/ptm/) Interactive Relighting for Photos
3 Métodos de interpolación a partir de puntos (in Spanish) (http://www.um.es/geograf/sigmur/temariohtml/node43_ct.html) Methods that can be used to interpolate a texture knowing the texture coords at the vertices of a polygon
Bump mapping
Bump mapping is a technique in computer graphics for simulating bumps and wrinkles on the surface of an object. This is achieved by perturbing the surface normals of the object and using the perturbed normal during lighting calculations. The result is an apparently bumpy surface, even though the surface of the underlying object is not actually changed. Bump mapping was introduced by Blinn in 1978.[1]
A sphere without bump mapping (left). A bump map to be applied to the sphere (middle). The sphere with the bump map applied (right) appears to have a mottled surface resembling an orange. Bump maps achieve this effect by changing how an illuminated surface reacts to light without actually modifying the size or shape of the surface
Bump mapping is much faster and consumes fewer resources for the same level of detail compared to displacement mapping, because the geometry remains unchanged.
Bump mapping is limited in that it does not actually modify the shape of the underlying object. On the left, a mathematical function defining a bump map simulates a crumbling surface on a sphere, but the object's outline and shadow remain those of a perfect sphere. On the right, the same function is used to modify the surface of a sphere by generating an isosurface. This actually models a sphere with a bumpy surface with the result that both its outline and its shadow are rendered realistically.
There are primarily two methods to perform bump mapping. The first uses a height map for simulating the surface displacement yielding the modified normal. This is the method invented by Blinn[1] and is usually what is referred to as bump mapping unless specified otherwise. The steps of this method are summarized as follows. Before lighting, a calculation is performed for each visible point (or pixel) on the object's surface:

1. Look up the height in the heightmap that corresponds to the position on the surface.
2. Calculate the surface normal of the heightmap, typically using the finite difference method.
3. Combine the surface normal from step two with the true ("geometric") surface normal so that the combined normal points in a new direction.
4. Calculate the interaction of the new "bumpy" surface with lights in the scene using, for example, the Phong reflection model.

The result is a surface that appears to have real depth. The algorithm also ensures that the surface appearance changes as lights in the scene are moved around.

The other method is to specify a normal map which contains the modified normal for each point on the surface directly. Since the normal is specified directly instead of derived from a height map, this method usually leads to more predictable results. This makes it easier for artists to work with, making it the most common method of bump mapping today.[2] There are also extensions which modify other surface features in addition to increasing the sense of depth. Parallax mapping is one such extension.

The primary limitation of bump mapping is that it perturbs only the surface normals without changing the underlying surface itself.[3] Silhouettes and shadows therefore remain unaffected, which is especially noticeable for larger simulated displacements. This limitation can be overcome by techniques such as displacement mapping, where bumps are actually applied to the surface, or by using an isosurface.
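The four steps above can be sketched in a few lines. This is a minimal, hedged illustration (the tangent space is assumed aligned with the height map's x/y axes, and all names are ours), with Lambert shading standing in for the Phong model named in step 4:

    import math

    def normalize(v):
        l = math.sqrt(sum(c * c for c in v))
        return tuple(c / l for c in v)

    def bump_normal(height, x, y, n_geom, scale=1.0):
        # Steps 1-2: central finite differences of the height map give the slope.
        # (Requires 1 <= x < width-1 and 1 <= y < height-1.)
        dhdx = (height[y][x + 1] - height[y][x - 1]) / 2.0
        dhdy = (height[y + 1][x] - height[y - 1][x]) / 2.0
        # Step 3: perturb the geometric normal by the slope.
        nx, ny, nz = n_geom
        return normalize((nx - scale * dhdx, ny - scale * dhdy, nz))

    def shade_lambert(n, light_dir, albedo=1.0):
        # Step 4: any local lighting model works; Lambert is the simplest.
        l = normalize(light_dir)
        return albedo * max(0.0, sum(a * b for a, b in zip(n, l)))

A normal map simply skips bump_normal: the perturbed normal is read from the texture directly.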
References
[1] Blinn, James F. "Simulation of Wrinkled Surfaces" (http://portal.acm.org/citation.cfm?id=507101), Computer Graphics, Vol. 12 (3), pp. 286-292, SIGGRAPH-ACM (August 1978)
[2] Mikkelsen, Morten. Simulation of Wrinkled Surfaces Revisited (http://image.diku.dk/projects/media/morten.mikkelsen.08.pdf), 2008 (PDF)
[3] Real-Time Bump Map Synthesis (http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/rtbumpmapHWWS01.pdf), Jan Kautz, Wolfgang Heidrich and Hans-Peter Seidel (Max-Planck-Institut für Informatik, University of British Columbia)
External links
Bump shading for volume textures (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=291525), Max, N.L., Becker, B.G., Computer Graphics and Applications, IEEE, Jul 1994, Volume 14, Issue 4, pages 18-20, ISSN 0272-1716
Bump Mapping tutorial using CG and C++ (http://www.blacksmith-studios.dk/projects/downloads/bumpmapping_using_cg.php)
Simple creation of per-pixel vectors from a grayscale image for bump mapping, and more (http://freespace.virgin.net/hugo.elias/graphics/x_polybm.htm)
Bump Mapping example (http://www.neilwallis.com/java/bump2.htm) (Java applet)
Bidirectional reflectance distribution function
Definition
The BRDF was first defined by Fred Nicodemus around 1965.[1] The definition is:

$$f_r(\omega_i, \omega_r) = \frac{\mathrm{d}L_r(\omega_r)}{\mathrm{d}E_i(\omega_i)} = \frac{\mathrm{d}L_r(\omega_r)}{L_i(\omega_i)\cos\theta_i\,\mathrm{d}\omega_i}$$

where $L$ is radiance, or power per unit solid-angle-in-the-direction-of-a-ray per unit projected-area-perpendicular-to-the-ray, $E$ is irradiance, or power per unit surface area, and $\theta_i$ is the angle between $\omega_i$ and the surface normal. The index $i$ indicates incident light, whereas the index $r$ indicates reflected light.
The reason the function is defined as a quotient of two differentials, and not directly as a quotient between the undifferentiated quantities, is that irradiating light other than $\mathrm{d}E_i(\omega_i)$, which is of no interest for $f_r(\omega_i, \omega_r)$, might illuminate the surface and unintentionally affect $L_r(\omega_r)$, whereas $\mathrm{d}L_r(\omega_r)$ is only affected by $\mathrm{d}E_i(\omega_i)$.
Related functions
The Spatially Varying Bidirectional Reflectance Distribution Function (SVBRDF) is a 6-dimensional function, $f_r(\omega_i, \omega_r, \mathbf{x})$, where $\mathbf{x}$ describes a 2D location over an object's surface. The Bidirectional Texture Function (BTF) is appropriate for modeling non-flat surfaces, and has the same parameterization as the SVBRDF; in contrast, however, the BTF includes non-local scattering effects like shadowing, masking, interreflections or subsurface scattering. The functions defined by the BTF at each point on the surface are thus called Apparent BRDFs. The Bidirectional Surface Scattering Reflectance Distribution Function (BSSRDF) is a further generalized 8-dimensional function in which light entering the surface may scatter internally and exit at another location. In all these cases, the dependence on the wavelength of light has been ignored and binned into RGB channels. In reality, the BRDF is wavelength dependent, and to account for effects such as iridescence or luminescence the dependence on wavelength must be made explicit: $f_r(\lambda_i, \omega_i, \lambda_r, \omega_r)$.
Applications
The BRDF is a fundamental radiometric concept, and accordingly is used in computer graphics for photorealistic rendering of synthetic scenes (see the Rendering equation), as well as in computer vision for many inverse problems such as object recognition.
Models
BRDFs can be measured directly from real objects using calibrated cameras and light sources;[2] however, many phenomenological and analytic models have been proposed, including the Lambertian reflectance model frequently assumed in computer graphics. Some useful features of recent models include:

accommodating anisotropic reflection
being editable using a small number of intuitive parameters
accounting for Fresnel effects at grazing angles
being well-suited to Monte Carlo methods
Matusik et al. found that interpolating between measured samples produced realistic results and was easy to understand.[3]
Some examples
Lambertian model, representing perfectly diffuse (matte) surfaces by a constant BRDF.
Lommel-Seeliger, lunar and Martian reflection.
Phong reflectance model, a phenomenological model akin to plastic-like specularity.[4]
Blinn-Phong model, resembling Phong, but allowing for certain quantities to be interpolated, reducing computational overhead.[5]
Torrance-Sparrow model, a general model representing surfaces as distributions of perfectly specular microfacets.[6]
Cook-Torrance model, a specular-microfacet model (Torrance-Sparrow) accounting for wavelength and thus color shifting.[7]
Ward's anisotropic model, a specular-microfacet model with an elliptical-Gaussian distribution function dependent on surface tangent orientation (in addition to surface normal).
Oren-Nayar model, a "directed-diffuse" microfacet model, with perfectly diffuse (rather than specular) microfacets.[8]
Ashikhmin-Shirley model, allowing for anisotropic reflectance, along with a diffuse substrate under a specular surface.[9]
HTSG (He, Torrance, Sillion, Greenberg), a comprehensive physically based model.[10]
Fitted Lafortune model, a generalization of Phong with multiple specular lobes, intended for parametric fits of measured data.[11]
Lebedev model for analytical-grid BRDF approximation.[12]
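As an illustration of how the simplest entries in this list differ, here is a hedged sketch (conventions and normalisation constants are ours, chosen for readability rather than strict energy conservation) of the constant Lambertian BRDF next to a Blinn-Phong-style specular lobe; all direction vectors are assumed to be unit length:

    import math

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def normalize(v):
        l = math.sqrt(dot(v, v))
        return tuple(c / l for c in v)

    def brdf_lambert(w_i, w_o, n, albedo=0.8):
        # Constant for every direction pair: perfectly diffuse (matte).
        return albedo / math.pi

    def brdf_blinn_phong(w_i, w_o, n, ks=0.5, shininess=32):
        # Specular lobe around the half-vector between incidence and exit.
        h = normalize(tuple(a + b for a, b in zip(w_i, w_o)))
        return ks * max(0.0, dot(n, h)) ** shininess

The Lambertian value ignores the directions entirely, while the Blinn-Phong lobe peaks when the half-vector lines up with the normal, which is what produces a specular highlight.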
Acquisition
Traditionally, BRDF measurements were taken for one specific lighting and viewing direction at a time using gonioreflectometers. Unfortunately, using such a device to densely measure the BRDF is very time consuming. One of the first improvements on these techniques used a half-silvered mirror and a digital camera to take many BRDF samples of a planar target at once. Since this work, many researchers have developed other devices for efficiently acquiring BRDFs from real world samples, and it remains an active area of research. There is an alternative way to measure BRDF based on HDR images: the standard algorithm is to measure the BRDF point cloud from images and fit it with one of the BRDF models.[13]
References
[3] Wojciech Matusik, Hanspeter Pfister, Matt Brand, and Leonard McMillan. A Data-Driven Reflectance Model (http://people.csail.mit.edu/wojciech/DDRM/index.html). ACM Transactions on Graphics, 22(3), 2002.
[4] B. T. Phong, Illumination for computer generated pictures, Communications of the ACM 18 (1975), no. 6, 311-317.
[6] K. Torrance and E. Sparrow. Theory for Off-Specular Reflection from Roughened Surfaces. J. Optical Soc. America, vol. 57, 1967, pp. 1105-1114.
[7] R. Cook and K. Torrance. "A reflectance model for computer graphics". Computer Graphics (SIGGRAPH '81 Proceedings), Vol. 15, No. 3, July 1981, pp. 301-316.
[8] S.K. Nayar and M. Oren, "Generalization of the Lambertian Model and Implications for Machine Vision (http://www1.cs.columbia.edu/CAVE/publications/pdfs/Nayar_IJCV95.pdf)". International Journal on Computer Vision, Vol. 14, No. 3, pp. 227-251, Apr 1995.
[9] Michael Ashikhmin, Peter Shirley, An Anisotropic Phong BRDF Model, Journal of Graphics Tools, 2000.
[10] X. He, K. Torrance, F. Sillion, and D. Greenberg, A comprehensive physical model for light reflection, Computer Graphics 25 (1991), Annual Conference Series, 175-186.
[11] E. Lafortune, S. Foo, K. Torrance, and D. Greenberg, Non-linear approximation of reflectance functions. In Turner Whitted, editor, SIGGRAPH 97 Conference Proceedings, Annual Conference Series, pp. 117-126. ACM SIGGRAPH, Addison Wesley, August 1997.
[12] Ilyin A., Lebedev A., Sinyavsky V., Ignatenko A., Image-based modelling of material reflective properties of flat objects (in Russian) (http://data.lebedev.as/LebedevGraphicon2009.pdf). In: GraphiCon'2009; 2009. pp. 198-201.
[13] BRDFRecon project (http://lebedev.as/index.php?p=1_7_BRDFRecon)
Further reading
Lubin, Dan; Robert Massom (2006-02-10). Polar Remote Sensing. Volume I: Atmosphere and Oceans (1st ed.). Springer. p. 756. ISBN 3-540-43097-0.
Pharr, Matt; Greg Humphreys (2004). Physically Based Rendering (1st ed.). Morgan Kaufmann. p. 1019. ISBN 0-12-553180-X.
Schaepman-Strub, G.; M. E. Schaepman, T. H. Painter, S. Dangel, J. V. Martonchik (2006-07-15). "Reflectance quantities in optical remote sensing: definitions and case studies" (http://www.sciencedirect.com/science/article/B6V6V-4K427VX-1/2/d8f9855bc59ae8233e2ee9b111252701). Remote Sensing of Environment 103 (1): 27-42. doi:10.1016/j.rse.2006.03.002 (http://dx.doi.org/10.1016/j.rse.2006.03.002). Retrieved 2007-10-18.
Physics of reflection
Reflection is the change in direction of a wavefront at an interface between two different media so that the wavefront returns into the medium from which it originated. Common examples include the reflection of light, sound and water waves. The law of reflection says that for specular reflection the angle at which the wave is incident on the surface equals the angle at which it is reflected. Mirrors exhibit specular reflection. In acoustics, reflection causes echoes and is used in sonar. In geology, it is important in the study of seismic waves. Reflection is observed with surface waves in bodies of water. Reflection is observed with many types of electromagnetic wave, besides visible light. Reflection of VHF and higher frequencies is important for radio transmission and for radar. Even hard X-rays and gamma rays can be reflected at shallow angles with special "grazing" mirrors.
Reflection of light
Reflection of light is either specular (mirror-like) or diffuse (retaining the energy, but losing the image) depending on the nature of the interface. Furthermore, if the interface is between a dielectric and a conductor, the phase of the reflected wave is retained, otherwise if the interface is between two dielectrics, the phase may be retained or inverted, depending on the indices of refraction.[citation needed] A mirror provides the most common model for specular light reflection, and typically consists of a glass sheet with a metallic coating where the reflection actually occurs. Reflection is enhanced in metals by suppression of wave propagation beyond their skin depths. Reflection also occurs at the surface of transparent media, such as water or glass.
Double reflection: The sun is reflected in the water, which is reflected in the paddle.
In the diagram at left, a light ray PO strikes a vertical mirror at point O, and the reflected ray is OQ. By projecting an imaginary line through point O perpendicular to the mirror, known as the normal, we can measure the angle of incidence, i, and the angle of reflection, r. The law of reflection states that i = r, or in other words, the angle of incidence equals the angle of reflection.

Diagram of specular reflection

In fact, reflection of light may occur whenever light travels from a medium of a given refractive index into a medium with a different refractive index. In the most general case, a certain fraction of the light is reflected from the interface, and the remainder is refracted. Solving Maxwell's equations for a light ray striking a boundary allows the derivation of the Fresnel equations, which can be used to predict how much of the light is reflected, and how much is refracted in a given situation. Total internal reflection of light from a denser medium occurs if the angle of incidence is above the critical angle. Total internal reflection is used as a means of focusing waves that cannot effectively be reflected by common means. X-ray telescopes are constructed by creating a converging "tunnel" for the waves. As the waves interact at low angle with the surface of this tunnel they are reflected toward the focus point (or toward another interaction with the tunnel surface, eventually being directed to the detector at the focus). A conventional reflector would be useless as the X-rays would simply pass through the intended reflector.

When light reflects off a material denser (with higher refractive index) than the external medium, it undergoes a polarity inversion. In contrast, a less dense, lower refractive index material will reflect light in phase. This is an important principle in the field of thin-film optics.

Specular reflection forms images. Reflection from a flat surface forms a mirror image, which appears to be reversed from left to right because we compare the image we see to what we would see if we were rotated into the position of the image. Specular reflection at a curved surface forms an image which may be magnified or demagnified; curved mirrors have optical power. Such mirrors may have surfaces that are spherical or parabolic.
Laws of reflection
If the reflecting surface is very smooth, the reflection of light that occurs is called specular or regular reflection. The laws of reflection are as follows:

1. The incident ray, the reflected ray and the normal to the reflection surface at the point of the incidence lie in the same plane.
2. The angle which the incident ray makes with the normal is equal to the angle which the reflected ray makes to the same normal.
3. The reflected ray and the incident ray are on opposite sides of the normal.

These three laws can all be derived from the reflection equation.

Mechanism

In classical electrodynamics, light is considered as an electromagnetic wave, which is described by Maxwell's equations. Light waves incident on a material induce small oscillations of polarisation in the individual atoms (or oscillations of electrons, in metals), causing each particle to radiate a small secondary wave (in all directions, like a dipole antenna). All these waves add up to give specular reflection and refraction, according to the Huygens-Fresnel principle.
An example of the law of reflection
In the case of dielectrics such as glass, the electric field of the light acts on the electrons in the material, and the moving electrons generate fields and become new radiators. The refracted light in the glass is the combination of the forward radiation of the electrons and the incident light; the backward radiation is the light we see reflected from the surface of transparent materials. This radiation comes from everywhere in the glass, but it turns out that the total effect is equivalent to a reflection from the surface.

In metals, electrons with no binding energy are called free electrons, and their number density is very large. When these electrons oscillate with the incident light, the phase difference between their radiation field and the incident field is $\pi$, so the forward radiation cancels the incident light within a skin depth, and the backward radiation is just the reflected light.

Light-matter interaction in terms of photons is a topic of quantum electrodynamics, and is described in detail by Richard Feynman in his popular book QED: The Strange Theory of Light and Matter.
Diffuse reflection
When light strikes the surface of a (non-metallic) material it bounces off in all directions due to multiple reflections by the microscopic irregularities inside the material (e.g. the grain boundaries of a polycrystalline material, or the cell or fiber boundaries of an organic material) and by its surface, if it is rough. Thus, an 'image' is not formed. This is called diffuse reflection. The exact form of the reflection depends on the structure of the material. One common model for diffuse reflection is Lambertian reflectance, in which the light is reflected with equal luminance (in photometry) or radiance (in radiometry) in all directions, as defined by Lambert's cosine law. The light sent to our eyes by most of the objects we see is due to diffuse reflection from their surface, so this is our primary mechanism of physical observation.
Retroreflection
Some surfaces exhibit retroreflection. The structure of these surfaces is such that light is returned in the direction from which it came. When flying over clouds illuminated by sunlight, the region seen around the aircraft's shadow will appear brighter, and a similar effect may be seen from dew on grass. This partial retro-reflection is created by the refractive properties of the curved droplet surface and the reflective properties at the back side of the droplet. Some animals' retinas act as retroreflectors, as this effectively improves the animals' night vision. Since the lenses of their eyes reciprocally modify the paths of the incoming and outgoing light, the effect is that the eyes act as a strong retroreflector, sometimes seen at night when walking in wildlands with a flashlight.

A simple retroreflector can be made by placing three ordinary mirrors mutually perpendicular to one another (a corner reflector). The image produced is the inverse of one produced by a single mirror. A surface can be made partially retroreflective by depositing a layer of tiny refractive spheres on it or by creating small pyramid-like structures. In both cases internal reflection causes the light to be reflected back to where it originated. This is used to make traffic signs and automobile license plates reflect light mostly back in the direction from which it came. In this application perfect retroreflection is not desired, since the light would then be directed back into the headlights of an oncoming car rather than to the driver's eyes.
Multiple reflections
When light reflects off a mirror, one image appears. Two mirrors placed exactly face to face give the appearance of an infinite number of images along a straight line. The multiple images seen between two mirrors that sit at an angle to each other lie over a circle.[1] The center of that circle is located at the imaginary intersection of the mirrors. A square of four mirrors placed face to face gives the appearance of an infinite number of images arranged in a plane. The multiple images seen between four mirrors assembling a pyramid, in which each pair of mirrors sits at an angle to each other, lie over a sphere. If the base of the pyramid is rectangle-shaped, the images spread over a section of a torus.[2]
Sound reflection
When a longitudinal sound wave strikes a flat surface, sound is reflected in a coherent manner provided that the dimension of the reflective surface is large compared to the wavelength of the sound. Note that audible sound has a very wide frequency range (from 20 to about 17,000 Hz), and thus a very wide range of wavelengths (from about 20 mm to 17 m). As a result, the overall nature of the reflection varies according to the texture and structure of the surface. For example, porous materials will absorb some energy, and rough materials (where rough is relative to the wavelength) tend to reflect in many directions, scattering the energy rather than reflecting it coherently. This leads into the field of architectural acoustics, because the nature of these reflections is critical to the auditory feel of a space. In the theory of exterior noise mitigation, reflective surface size mildly detracts from the concept of a noise barrier by reflecting some of the sound into the opposite direction.

Sound diffusion panel for high frequencies
Seismic reflection
Seismic waves produced by earthquakes or other sources (such as explosions) may be reflected by layers within the Earth. Study of the deep reflections of waves generated by earthquakes has allowed seismologists to determine the layered structure of the Earth. Shallower reflections are used in reflection seismology to study the Earth's crust generally, and in particular to prospect for petroleum and natural gas deposits.
Rendering reflection
Reflection in computer graphics is used to emulate reflective objects like mirrors and shiny surfaces. Reflection is accomplished in a ray trace renderer by following a ray from the eye to the mirror and then calculating where it bounces from, and continuing the process until no surface is found, or a non-reflective surface is found. Reflection on a shiny surface like wood or tile can add to the photorealistic effects of a 3D rendering.

Ray traced model demonstrating specular reflection.

Polished - A polished reflection is an undisturbed reflection, like a mirror or chrome.
Blurry - A blurry reflection means that tiny random bumps on the surface of the material cause the reflection to be blurry.
Metallic - A reflection is metallic if the highlights and reflections retain the color of the reflective object.
Glossy - This term can be misused. Sometimes it is a setting which is the opposite of blurry. (When "Glossiness" has a low value, the reflection is blurry.) However, some people use the term "glossy reflection" as a synonym for "blurred reflection". Glossy used in this context means that the reflection is actually blurred.
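The bounce-until-non-reflective process described above maps naturally onto recursion. The following is a minimal sketch under assumed interfaces (scene.hit, material.shade, material.reflectivity and scene.background are hypothetical names, not from any particular renderer):

    def reflect(d, n):
        # Mirror direction d about unit normal n: d - 2(d.n)n
        k = 2 * sum(a * b for a, b in zip(d, n))
        return tuple(a - k * b for a, b in zip(d, n))

    def mix(a, b, t):
        return tuple((1 - t) * x + t * y for x, y in zip(a, b))

    def trace(scene, origin, direction, depth=0, max_depth=8):
        hit = scene.hit(origin, direction)   # hypothetical intersection routine
        if hit is None:
            return scene.background          # no surface found
        point, normal, material = hit
        color = material.shade(point, normal)
        if material.reflectivity > 0 and depth < max_depth:
            bounced = trace(scene, point, reflect(direction, normal),
                            depth + 1, max_depth)
            color = mix(color, bounced, material.reflectivity)
        return color

The depth limit guards against two mirrors facing each other, which would otherwise recurse forever; blurry or glossy reflections would instead average several randomly perturbed reflection rays.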
Examples
Polished or Mirror reflection
Mirrors are usually almost 100% reflective.
Metallic Reflection
Normal (nonmetallic) objects reflect light and colors in the original color of the object being reflected. Metallic objects reflect lights and colors altered by the color of the metallic object itself.
The large sphere on the left is blue with its reflection marked as metallic. The large sphere on the right is the same color but does not have the metallic property selected.
Blurry Reflection
Many materials are imperfect reflectors, where the reflections are blurred to various degrees due to surface roughness that scatters the rays of the reflections.
The large sphere on the left has sharpness set to 100%. The sphere on the right has sharpness set to 50% which creates a blurry reflection.
Glossy Reflection
Fully glossy reflection shows highlights from light sources, but does not show a clear reflection from objects.
The sphere on the left has normal, metallic reflection. The sphere on the right has the same parameters, except that the reflection is marked as "glossy".
Reflectivity
Reflectivity and reflectance generally refer to the fraction of incident electromagnetic power that is reflected at an interface, while the term "reflection coefficient" is used for the fraction of electric field reflected.[1]
Spectral reflectance curves for aluminium (Al), silver (Ag), and gold (Au) metal mirrors at normal incidence.
Reflectance
The reflectance or reflectivity is thus the square of the magnitude of the reflection coefficient.[2] The reflection coefficient can be expressed as a complex number as determined by the Fresnel equations for a single layer, whereas the reflectance (or reflectivity) is always a positive real number.

Fresnel reflection coefficients for a boundary surface between air and a variable material, as a function of the complex refractive index and the angle of incidence

According to the CIE (the International Commission on Illumination),[3] reflectivity is distinguished from reflectance by the fact that reflectivity is a value that applies to thick reflecting objects.[4] When reflection occurs from thin layers of material, internal reflection effects can cause the reflectance to vary with surface thickness. Reflectivity is the limit value of reflectance as the sample becomes thick; it is the intrinsic reflectance of the surface, hence irrespective of other parameters such as the reflectance of the rear surface. Another way to interpret this is that the reflectance is the fraction of electromagnetic power reflected from a specific sample, while reflectivity is a property of the material itself, which would be measured on a perfect machine if the material filled half of all space.[5] The reflectance spectrum or spectral reflectance curve is the plot of the reflectance as a function of wavelength.
Surface type
Going back to the fact that reflectivity is a directional property, most surfaces can be divided into those that give specular reflection and those that give diffuse reflection. For specular surfaces, such as glass or polished metal, reflectivity will be nearly zero at all angles except at the appropriate reflected angle - that is, reflected radiation will follow a different path from incident radiation for all cases other than radiation normal to the surface. For diffuse surfaces, such as matte white paint, reflectivity is uniform; radiation is reflected in all angles equally or near-equally. Such surfaces are said to be Lambertian. Most real objects have some mixture of diffuse and specular reflective properties.
Water reflectivity
Reflection occurs when light moves from a medium with one index of refraction into a second medium with a different index of refraction. Specular reflection from a body of water is calculated by the Fresnel equations.[6] Fresnel reflection is directional and therefore does not contribute significantly to albedo which is primarily diffuse reflection. A real water surface may be wavy. Reflectivity assuming a flat surface as given by the Fresnel equations can be adjusted to account for waviness.
Reflectivity of smooth water at 20 °C (refractive index = 1.333)
Grating efficiency
The generalization of reflectance to a diffraction grating, which disperses light by wavelength, is called diffraction efficiency.
Applications
Reflectivity is an important concept in the fields of optics, solar thermal energy, telecommunication and radar.
References
[1] Klein and Furtak, Optics (http://www.amazon.com/dp/0471872970)
[2] E. Hecht (2001). Optics (4th ed.). Pearson Education. ISBN 0-8053-8566-5.
[3] CIE (the International Commission on Illumination) (http://www.cie.co.at/)
[4] CIE International Lighting Vocabulary (http://www.cie.co.at/index.php/index.php?i_ca_id=306)
[5] Palmer and Grant, The Art of Radiometry (http://www.amazon.com/dp/081947245X)
[6] Ottaviani, M., Stamnes, K., Koskulics, J., Eide, H., Long, S.R., Su, W. and Wiscombe, W., 2008: 'Light Reflection from Water Waves: Suitable Setup for a Polarimetric Investigation under Controlled Laboratory Conditions'. Journal of Atmospheric and Oceanic Technology, 25 (5), 715-728.
External links
Reflectivity of metals (chart) (http://www.tvu.com/metalreflectivityLR.jpg)
Reflectance Data (http://www.graphics.cornell.edu/online/measurements/reflectance/index.html) Painted surfaces etc.
Grating efficiency (http://www.gratinglab.com/information/handbook/chapter9.asp)
Fresnel equations
The Fresnel equations (or Fresnel conditions), deduced by Augustin-Jean Fresnel, describe the behaviour of light when moving between media of differing refractive indices. The reflection of light that the equations predict is known as Fresnel reflection.
Overview
Partial transmission and reflection amplitudes of a wave travelling from a low to high refractive index medium.

When light moves from a medium of a given refractive index n1 into a second medium with refractive index n2, both reflection and refraction of the light may occur. The Fresnel equations describe what fraction of the light is reflected and what fraction is refracted (i.e., transmitted). They also describe the phase shift of the reflected light. The equations assume the interface is flat, planar, and homogeneous, and that the light is a plane wave.
The fraction of the incident power that is reflected from the interface is given by the reflectance R and the fraction that is refracted is given by the transmittance T.[1] The media are assumed to be non-magnetic. The calculations of R and T depend on polarisation of the incident ray. Two cases are analyzed: 1. The incident light is polarized with its electric field parallel to the plane containing the incident, reflected, and refracted rays, i.e. in the plane of the diagram above. Such light is described as p-polarized.
2. The incident light is polarized with its electric field perpendicular to the plane described above. The light is said to be s-polarized, from the German senkrecht (perpendicular).

For the s-polarized light, the reflection coefficient is given by

$$R_s = \left|\frac{n_1\cos\theta_i - n_2\cos\theta_t}{n_1\cos\theta_i + n_2\cos\theta_t}\right|^2 = \left|\frac{n_1\cos\theta_i - n_2\sqrt{1 - \left(\frac{n_1}{n_2}\sin\theta_i\right)^2}}{n_1\cos\theta_i + n_2\sqrt{1 - \left(\frac{n_1}{n_2}\sin\theta_i\right)^2}}\right|^2,$$

where the second form is derived from the first by eliminating $\theta_t$ using Snell's law and trigonometric identities. For the p-polarized light, $R_p$ is given by

$$R_p = \left|\frac{n_1\cos\theta_t - n_2\cos\theta_i}{n_1\cos\theta_t + n_2\cos\theta_i}\right|^2 = \left|\frac{n_1\sqrt{1 - \left(\frac{n_1}{n_2}\sin\theta_i\right)^2} - n_2\cos\theta_i}{n_1\sqrt{1 - \left(\frac{n_1}{n_2}\sin\theta_i\right)^2} + n_2\cos\theta_i}\right|^2.$$
As a consequence of the conservation of energy, the transmission coefficients are given by[2]

$$T_s = 1 - R_s$$

and

$$T_p = 1 - R_p.$$
These relationships hold only for power coefficients, not for amplitude coefficients as defined below. If the incident light is unpolarised (containing an equal mix of s- and p-polarisations), the reflection coefficient is

$$R = \frac{R_s + R_p}{2}.$$
For common glass, the reflection coefficient is about 4%. Note that reflection by a window is from the front side as well as the back side, and that some of the light bounces back and forth a number of times between the two sides. The combined reflection coefficient for this case is 2R/(1+R), when interference can be neglected (see below). The discussion given here assumes that the permeability is equal to the vacuum permeability $\mu_0$ in both media. This is approximately true for most dielectric materials, but not for some other types of material. The completely general Fresnel equations are more complicated.
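The power equations above are easy to evaluate numerically. The following sketch (ours) assumes non-magnetic media with real refractive indices, and reproduces the 4% figure for common glass at normal incidence:

    import math

    def fresnel(n1, n2, theta_i):
        s = (n1 / n2) * math.sin(theta_i)
        if abs(s) >= 1.0:
            return 1.0, 1.0               # total internal reflection: Rs = Rp = 1
        theta_t = math.asin(s)            # Snell's law
        ci, ct = math.cos(theta_i), math.cos(theta_t)
        Rs = ((n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)) ** 2
        Rp = ((n1 * ct - n2 * ci) / (n1 * ct + n2 * ci)) ** 2
        return Rs, Rp

    # Unpolarised reflectance at normal incidence for air -> glass:
    Rs, Rp = fresnel(1.0, 1.5, 0.0)
    print((Rs + Rp) / 2)                  # 0.04, i.e. about 4%

Sweeping theta_i also locates the special angles of the next section: Rp dips to zero at Brewster's angle, and for n1 > n2 both coefficients reach 1 at the critical angle.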
Special angles
At one particular angle for a given n1 and n2, the value of Rp goes to zero and a p-polarised incident ray is purely refracted. This angle is known as Brewster's angle, and is around 56° for a glass medium in air or vacuum. Note that this statement is only true when the refractive indices of both materials are real numbers, as is the case for materials like air and glass. For materials that absorb light, like metals and semiconductors, n is complex, and Rp does not generally go to zero. When moving from a denser medium into a less dense one (i.e., n1 > n2), above an incidence angle known as the critical angle, all light is reflected and Rs = Rp = 1. This phenomenon is known as total internal reflection. The critical angle is approximately 41° for glass in air.
Amplitude equations
Equations for coefficients corresponding to ratios of the electric field complex-valued amplitudes of the waves (not necessarily real-valued magnitudes) are also called "Fresnel equations". These take several different forms, depending on the choice of formalism and sign convention used. The amplitude coefficients are usually represented by lower case r and t.
Formulas
Using the conventions above,[3]

$$r_s = \frac{n_1\cos\theta_i - n_2\cos\theta_t}{n_1\cos\theta_i + n_2\cos\theta_t}, \qquad t_s = \frac{2 n_1\cos\theta_i}{n_1\cos\theta_i + n_2\cos\theta_t},$$

$$r_p = \frac{n_2\cos\theta_i - n_1\cos\theta_t}{n_2\cos\theta_i + n_1\cos\theta_t}, \qquad t_p = \frac{2 n_1\cos\theta_i}{n_2\cos\theta_i + n_1\cos\theta_t}.$$

Notice that $t_s = 1 + r_s$, but $t_p = \frac{n_1}{n_2}\,(1 + r_p)$.[4]
Because the reflected and incident waves propagate in the same medium and make the same angle with the normal to the surface, the amplitude reflection coefficient is related to the reflectance R by[5]

$$R = |r|^2.$$

The transmittance T is generally not equal to $|t|^2$, since the light travels with different direction and speed in the two media. The transmittance is related to t by[6]

$$T = \frac{n_2\cos\theta_t}{n_1\cos\theta_i}\,|t|^2.$$

The factor of $n_2/n_1$ occurs from the ratio of intensities (closely related to irradiance). The factor of $\cos\theta_t/\cos\theta_i$ represents the change in area m of the pencil of rays, needed since T, the ratio of powers, is equal to the ratio of (intensity × area). In terms of the ratio of refractive indices $\rho = n_2/n_1$ and of the magnification $m = \cos\theta_t/\cos\theta_i$ of the beam cross section occurring at the interface,

$$T = \rho\, m\, |t|^2.$$
Multiple surfaces
When light makes multiple reflections between two or more parallel surfaces, the multiple beams of light generally interfere with one another, resulting in net transmission and reflection amplitudes that depend on the light's wavelength. The interference, however, is seen only when the surfaces are at distances comparable to or smaller than the light's coherence length, which for ordinary white light is a few micrometers; it can be much larger for light from a laser. An example of interference between reflections is the iridescent colours seen in a soap bubble or in thin oil films on water. Applications include Fabry-Pérot interferometers, antireflection coatings, and optical filters. A quantitative analysis of these effects is based on the Fresnel equations, but with additional calculations to account for interference. The transfer-matrix method, or the recursive Rouard method,[7] can be used to solve multiple-surface problems.
References
[1] Hecht (1987), p. 100.
[2] Hecht (1987), p. 102.
[3] Lecture notes by Bo Sernelius, main site (http://www.ifm.liu.se/courses/TFYY67/), see especially Lecture 12 (http://www.ifm.liu.se/courses/TFYY67/Lect12.pdf).
[4] Hecht (2003), p. 116, eq. (4.49)-(4.50).
[5] Hecht (2003), p. 120, eq. (4.56).
[6] Hecht (2003), p. 120, eq. (4.57).
[7] See, e.g., O.S. Heavens, Optical Properties of Thin Films, Academic Press, 1955, chapt. 4.
Hecht, Eugene (1987). Optics (2nd ed.). Addison Wesley. ISBN0-201-11609-X. Hecht, Eugene (2002). Optics (4th ed.). Addison Wesley. ISBN0-321-18878-0.
Further reading
The Cambridge Handbook of Physics Formulas, G. Woan, Cambridge University Press, 2010, ISBN 978-0-521-57507-2.
Introduction to Electrodynamics (3rd Edition), D.J. Griffiths, Pearson Education, Dorling Kindersley, 2007, ISBN 81-7758-293-3.
Light and Matter: Electromagnetism, Optics, Spectroscopy and Lasers, Y.B. Band, John Wiley & Sons, 2010, ISBN 978-0-471-89931-0.
The Light Fantastic: Introduction to Classic and Quantum Optics, I.R. Kenyon, Oxford University Press, 2008, ISBN 978-0-19-856646-5.
Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC Publishers, 1991, ISBN (Verlagsgesellschaft) 3-527-26954-1, ISBN (VHC Inc.) 0-89573-752-3.
McGraw Hill Encyclopaedia of Physics (2nd Edition), C.B. Parker, 1994, ISBN 0-07-051400-3.
External links
Fresnel Equations (http://scienceworld.wolfram.com/physics/FresnelEquations.html) - Wolfram
FreeSnell (http://people.csail.mit.edu/jaffer/FreeSnell) - Free software that computes the optical properties of multilayer materials
Thinfilm (http://thinfilm.hansteen.net/) - Web interface for calculating optical properties of thin films and multilayer materials (reflection & transmission coefficients, ellipsometric parameters Psi & Delta)
Simple web interface for calculating single-interface reflection and refraction angles and strengths (http://www.calctool.org/CALC/phys/optics/reflec_refrac)
Reflection and transmittance for two dielectrics (http://wm.eecs.umich.edu/webMathematica/eecs434/f08/ideliz/final.jsp) - Mathematica interactive webpage that shows the relations between index of refraction and reflection
A self-contained first-principles derivation (http://www.jedsoft.org/physics/notes/multilayer.pdf) of the transmission and reflection probabilities from a multilayer with complex indices of refraction
Transparency and translucency
Introduction
With regard to the absorption of light, primary material considerations include: At the electronic level, absorption in the ultraviolet and visible (UV-Vis) portions of the spectrum depends on whether the electron orbitals are spaced (or "quantized") such that they can absorb a quantum of light (or photon) of a specific frequency, and does not violate selection rules. For example, in most glasses, electrons have no available energy levels above them in range of that associated with visible light, or if they do, they violate selection rules, meaning there is no appreciable absorption in pure (undoped) glasses, making them ideal transparent materials for windows in buildings. At the atomic or molecular level, physical absorption in the infrared portion of the spectrum depends on the frequencies of atomic or molecular vibrations or chemical bonds, and on selection rules. Nitrogen and oxygen are not greenhouse gases because the absorption is forbidden by the lack of a molecular dipole moment.
With regard to the scattering of light, the most critical factor is the length scale of any or all of these structural features relative to the wavelength of the light being scattered. Primary material considerations include:
Simulated comparisons of (from top to bottom): decreasing levels of opacity; increasing levels of translucency; and increasing levels of transparency; behind each panel is a left-right gradiented grey star
Crystalline structure: whether or not the atoms or molecules exhibit the 'long-range order' evidenced in crystalline solids.
Glassy structure: scattering centers include fluctuations in density or composition.
Microstructure: scattering centers include internal surfaces such as grain boundaries, crystallographic defects and microscopic pores.
Organic materials: scattering centers include fiber and cell structures and boundaries.
Applications
Optical transparency in polycrystalline materials is limited by the amount of light which is scattered by their microstructural features. Light scattering depends on the wavelength of the light. Limits to spatial scales of visibility (using white light) therefore arise, depending on the frequency of the light wave and the physical dimension of the scattering center. For example, since visible light has a wavelength scale on the order of a micrometer, scattering centers will have dimensions on a similar spatial scale. Primary scattering centers in polycrystalline materials include microstructural defects such as pores and grain boundaries. In addition to pores, most of the interfaces in a typical metal or ceramic object are in the form of grain boundaries which separate tiny regions of crystalline order. When the size of the scattering center (or grain boundary) is reduced below the size of the wavelength of the light being scattered, the scattering no longer occurs to any significant extent.

Large laser elements made from transparent ceramics can be produced at a relatively low cost. These components are free of internal stress or intrinsic birefringence, and allow relatively large doping levels or optimized custom-designed doping profiles. This makes ceramic laser elements particularly important for high-energy lasers.
In the formation of polycrystalline materials (metals and ceramics) the size of the crystalline grains is determined largely by the size of the crystalline particles present in the raw material during formation (or pressing) of the object. Moreover, the size of the grain boundaries scales directly with particle size. Thus a reduction of the original particle size well below the wavelength of visible light (to about 1/15 of the light wavelength, or roughly 600/15 = 40 nm) eliminates much of the light scattering, resulting in a translucent or even transparent material.
Soldiers pictured during the 2003 Iraq War seen through Night Vision Goggles
Computer modeling of light transmission through translucent ceramic alumina has shown that microscopic pores trapped near grain boundaries act as primary scattering centers. The volume fraction of porosity had to be reduced below 1% for high-quality optical transmission (99.99 percent of theoretical density). This goal has been readily accomplished and amply demonstrated in laboratories and research facilities worldwide using the emerging chemical processing methods encompassed by the methods of sol-gel chemistry and nanotechnology.[4]

Translucency of a material being used to highlight the structure of a photographic subject

Transparent ceramics have created interest in their applications for high energy lasers, transparent armor windows, nose cones for heat seeking missiles, radiation detectors for non-destructive testing, high energy physics, space exploration, security and medical imaging applications. The development of transparent panel products will have other potential advanced applications including high strength, impact-resistant materials that can be used for domestic windows and skylights. Perhaps more important is that walls and other applications will have improved overall strength, especially for high-shear conditions found in high seismic and wind exposures. If the expected improvements in mechanical properties bear out, the traditional limits seen on glazing areas in today's building codes could quickly become outdated if the window area actually contributes to the shear resistance of the wall.

Currently available infrared transparent materials typically exhibit a trade-off between optical performance, mechanical strength and price. For example, sapphire (crystalline alumina) is very strong, but it is expensive and lacks full transparency throughout the 3-5 micrometer mid-infrared range. Yttria is fully transparent from 3-5 micrometers, but lacks sufficient strength, hardness, and thermal shock resistance for high-performance aerospace applications. Not surprisingly, a combination of these two materials in the form of the yttrium aluminium garnet (YAG) is one of the top performers in the field.
Vibrational: Resonance in atomic/molecular vibrational modes. These transitions are typically in the infrared portion of the spectrum.
An electron selectively absorbs a portion of the photon, and the remaining frequencies are transmitted in the form of spectral color. Most of the time, it is a combination of the above that happens to the light that hits an object. The electrons in different materials vary in the range of energy that they can absorb. Most glasses, for example, block ultraviolet (UV) light. What happens is that the electrons in the glass absorb the energy of the photons in the UV range while ignoring the weaker energy of photons in the visible light spectrum. Thus, when a material is illuminated, individual photons of light can make the valence electrons of an atom transition to a higher electronic energy level. The photon is destroyed in the process and the absorbed radiant energy is transformed to electric potential energy. Several things can then happen to the absorbed energy: it may be re-emitted by the electron as radiant energy (in this case the overall effect is in fact a scattering of light), dissipated to the rest of the material (i.e. transformed into heat), or the electron can be freed from the atom (as in the photoelectric and Compton effects).
If the object is transparent, then the light waves are passed on to neighboring atoms through the bulk of the material and re-emitted on the opposite side of the object. Such frequencies of light waves are said to be transmitted.[5][6]
Transparency in insulators
An object may be not transparent either because it reflects the incoming light or because it absorbs the incoming light. Almost all solids reflect a part and absorb a part of the incoming light. When light falls onto a block of metal, it encounters atoms that are tightly packed in a regular lattice and a "sea of electrons" moving randomly between the atoms.[] In metals, most of these are non-bonding electrons (or free electrons) as opposed to the bonding electrons typically found in covalently bonded or ionically bonded non-metallic (insulating) solids. In a metallic bond, any potential bonding electrons can easily be lost by the atoms in a crystalline structure. The effect of this delocalization is simply to exaggerate the effect of the "sea of electrons". As a result of these electrons, most of the incoming light in metals is reflected back, which is why we see a shiny metal surface.

Most insulators (or dielectric materials) are held together by ionic bonds. Thus, these materials do not have free conduction electrons, and the bonding electrons reflect only a small fraction of the incident wave. The remaining frequencies (or wavelengths) are free to propagate (or be transmitted). This class of materials includes all ceramics and glasses. If a dielectric material does not include light-absorbent additive molecules (pigments, dyes, colorants), it is usually transparent to the spectrum of visible light. Color centers (or dye molecules, or "dopants") in a dielectric absorb a portion of the incoming light wave. The remaining frequencies (or wavelengths) are free to be reflected or transmitted. This is how colored glass is produced.

Most liquids and aqueous solutions are highly transparent. For example, water, cooking oil, rubbing alcohol, air, and natural gas are all clear. Absence of structural defects (voids, cracks, etc.) and the molecular structure of most liquids are chiefly responsible for their excellent optical transmission. The ability of liquids to "heal" internal defects via viscous flow is one of the reasons why some fibrous materials (e.g., paper or fabric) increase their apparent transparency when wetted. The liquid fills up numerous voids making the material more structurally homogeneous.[citation needed]

Light scattering in an ideal defect-free crystalline (non-metallic) solid which provides no scattering centers for incoming lightwaves will be due primarily to any effects of anharmonicity within the ordered lattice. Lightwave transmission will be highly directional due to the typical anisotropy of crystalline substances, which includes their symmetry group and Bravais lattice. For example, the seven different crystalline forms of quartz silica (silicon dioxide, SiO2) are all clear, transparent materials.[7]
Optical waveguides
Optically transparent materials focus on the response of a material to incoming light waves of a range of wavelengths. Guided light wave transmission via frequency selective waveguides involves the emerging field of fiber optics and the ability of certain glassy compositions to act as a transmission medium for a range of frequencies simultaneously (multi-mode optical fiber) with little or no interference between competing wavelengths or frequencies. This resonant mode of energy and data transmission via electromagnetic (light) wave propagation is relatively lossless.
An optical fiber is a cylindrical dielectric waveguide that transmits light along its axis by the process of total internal reflection. The fiber consists of a core surrounded by a cladding layer. To confine the optical signal in the core, the refractive index of the core must be greater than that of the cladding. The refractive index is the parameter reflecting the speed of light in a material. (Refractive index is the ratio of the speed of light in vacuum to the speed of light in a given medium. The refractive index of vacuum is therefore 1.) The larger the refractive index, the more slowly light travels in that medium. Typical values for core and cladding of an optical fiber are 1.48 and 1.46, respectively.
A laser beam bouncing down an acrylic rod, illustrating the total internal reflection of light in a multimode optical fiber
When light traveling in a dense medium hits a boundary at a steep angle, the light will be completely reflected. This effect, called total internal reflection, is used in optical fibers to confine light in the core. Light travels along the fiber bouncing back and forth off of the boundary. Because the light must strike the boundary with an angle greater than the critical angle, only light that enters the fiber within a certain range of angles will be propagated. This range of angles is called the acceptance cone of the fiber. The size of this acceptance cone is a function of the refractive index difference between the fiber's core and cladding. Optical waveguides are used as components in integrated optical circuits (e.g. combined with lasers or light-emitting diodes, LEDs) or as the transmission medium in local and long haul optical communication systems.
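The relationship between index difference and acceptance cone can be worked through numerically. This sketch (ours) uses the typical core/cladding indices quoted above, together with the standard numerical-aperture formula for a step-index fiber:

    import math

    n_core, n_clad = 1.48, 1.46

    # Total internal reflection at the core/cladding boundary requires angles
    # (measured from the boundary normal) above the critical angle:
    theta_c = math.degrees(math.asin(n_clad / n_core))

    # The numerical aperture gives the half-angle of the acceptance cone in air:
    na = math.sqrt(n_core**2 - n_clad**2)
    theta_accept = math.degrees(math.asin(na))

    print(theta_c)        # ~80.6 degrees: rays must graze the boundary
    print(theta_accept)   # ~14.0 degrees: half-angle of the acceptance cone

The small index difference thus translates into a narrow cone of entry angles, which is why fiber ends must be aligned carefully with light sources.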
Mechanisms of attenuation
Attenuation in fiber optics, also known as transmission loss, is the reduction in intensity of the light beam (or signal) with respect to distance traveled through a transmission medium. Attenuation coefficients in fiber optics are usually expressed in units of dB/km through the medium, owing to the very high quality of transparency of modern optical transmission media. The medium is usually a fiber of silica glass that confines the incident light beam to the inside. Attenuation is an important factor limiting the transmission of a signal across large distances. In optical fibers the main attenuation source is scattering from molecular-level irregularities (Rayleigh scattering)[8] due to structural disorder and compositional fluctuations of the glass structure. This same phenomenon is seen as one of the limiting factors in the transparency of infrared missile domes[citation needed]. Further attenuation is caused by light absorbed by residual materials, such as metals or water ions, within the fiber core and inner cladding. Light leakage due to bending, splices, connectors, or other outside forces are other factors resulting in attenuation.[9][10]

Light attenuation by ZBLAN and silica fibers
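Since the decibel scale is logarithmic, a dB/km coefficient converts to a remaining power fraction as a simple power of ten. A small sketch (ours; the 0.2 dB/km figure is a typical value for modern silica fiber, not a number from the article):

    def remaining_fraction(alpha_db_per_km, length_km):
        # Each 10 dB of accumulated loss divides the power by 10.
        return 10 ** (-alpha_db_per_km * length_km / 10)

    print(remaining_fraction(0.2, 100))   # ~0.01: 1% of the power after 100 km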
References
[8] I. P. Kaminow, T. Li (2002), Optical Fiber Telecommunications IV, Vol. 1, p. 223 (http://books.google.com/books?id=GlxnCiQlNwEC&pg=PA223)
Further reading
Electrodynamics of Continuous Media, Landau, L.D., Lifshits, E.M. and Pitaevskii, L.P. (Pergamon Press, Oxford, 1984)
Laser Light Scattering: Basic Principles and Practice, Chu, B., 2nd Edn. (Academic Press, New York, 1992)
Solid State Laser Engineering, W. Koechner (Springer-Verlag, New York, 1999)
Introduction to Chemical Physics, J.C. Slater (McGraw-Hill, New York, 1939)
Modern Theory of Solids, F. Seitz (McGraw-Hill, New York, 1940)
Modern Aspects of the Vitreous State, J.D. MacKenzie, Ed. (Butterworths, London, 1960)
External links
Properties of Light (http://sol.sci.uop.edu/~jfalward/physics17/chapter12/chapter12.html)
UV-Vis Absorption (http://teaching.shu.ac.uk/hwb/chemistry/tutorials/molspec/uvvisab1.htm)
Infrared Spectroscopy (http://www.cem.msu.edu/~reusch/VirtualText/Spectrpy/InfraRed/infrared.htm)
Brillouin Scattering (http://www.soest.hawaii.edu/~zinin/Zi-Brillouin.html)
Transparent Ceramics (http://www.ikts.fhg.de/business/strukturkeramik/basiswerkstoffe/oxidkeramik/transparentkeramik_en.html)
Bulletproof Glass (http://science.howstuffworks.com/question476.htm)
Transparent ALON Armor (http://science.howstuffworks.com/transparent-aluminum-armor.htm)
Properties of Optical Materials (http://www.harricksci.com/infoserver/Optical Materials.cfm)
What makes glass transparent? (http://science.howstuffworks.com/question404.htm)
Brillouin scattering in optical fiber (http://www.rp-photonics.com/brillouin_scattering.html)
Thermal IR Radiation and Missile Guidance (http://www.ausairpower.net/TE-IR-Guidance.html)
Rendering transparency
Transparency is possible in a number of graphics file formats. The term is used in various ways by different people, but at its simplest there is "full transparency", i.e. something that is completely invisible. Of course, only part of a graphic should be fully transparent, or there would be nothing to see. More complex is "partial transparency" or "translucency", where the effect is that a graphic is partially transparent in the same way as colored glass.

Since ultimately a printed page or a computer or television screen can show only one color at each point, partial transparency is always simulated at some level by mixing colors. There are many different ways to mix colors, so in some cases transparency is ambiguous. In addition, transparency is often an "extra" for a graphics format, and some graphics programs will ignore it.

Raster file formats that support transparency include GIF, PNG, BMP and TIFF, through either a transparent color or an alpha channel. Most vector formats implicitly support transparency because they simply avoid putting any objects at a given point; this includes EPS and WMF. For vector graphics this may not strictly be seen as transparency, but it requires much of the same careful programming as transparency in raster formats. More complex vector formats, such as SVG and PDF, may additionally allow transparency combinations between the elements within the graphic. A suitable raster graphics editor shows transparency by a special pattern, e.g. a checkerboard pattern.
Transparent pixels
One color entry in a single GIF or PNG image's palette can be defined as "transparent" rather than an actual color. This means that when the decoder encounters a pixel with this value, it is rendered in the background color of the part of the screen where the image is placed, even if this varies pixel by pixel, as in the case of a background image. Applications include:
- An image that is not rectangular can be filled to the required rectangle using transparent surroundings; the image can even have holes (e.g. be ring-shaped).
- In a run of text, a special symbol for which an image is used because it is not available in the character set can be given a transparent background, resulting in a matching background.

The transparent color should be chosen carefully, to avoid items that just happen to be the same color vanishing. Even this limited form of transparency has patchy implementation, though most popular web browsers are capable of displaying transparent GIF images. This support often does not extend to printing, especially to printing devices (such as PostScript devices) which do not include support for transparency in the device or driver. Outside the world of web browsers, support is fairly hit-or-miss for transparent GIF files.
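As a hedged illustration (not part of the original article), the following Python sketch uses the Pillow library to mark one palette entry of a GIF as transparent; the file names and the choice of palette index 0 are assumptions.

    from PIL import Image

    img = Image.open("logo.gif").convert("P")   # ensure a palette-based image
    # Palette index 0 will be treated as fully transparent by GIF decoders.
    img.save("logo_transparent.gif", transparency=0)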
This image has binary transparency (some pixels fully transparent, other pixels fully opaque). It can be transparent against any background because it is monochrome.
This image has binary transparency. However, it is grayscale, with anti-aliasing, so it looks good only against a white background. Set against a different background, a "ghosting" effect from the shades of gray would result.
This image has partial transparency (254 possible levels of transparency between fully transparent and fully opaque). It can be transparent against any background despite being anti-aliased.
The process of combining a partially transparent color with its background ("compositing") is often ill-defined and the results may not be exactly the same in all cases. For example, where color correction is in use, should the colors be composited before or after color correction?
Compositing calculations
While some transparency specifications are vague, others may give mathematical details of how two colors are to be composited. The following is a fairly simple example of how compositing calculations can work, can produce the expected results, and can also produce surprises. In this example, two grayscale colors are to be composited. Grayscale values are considered to be numbers between 0.0 (white) and 1.0 (black). To emphasize: this is only one possible rule for transparency. If working with transparency, check the rules in use for your situation.
This image shows the results of overlaying each of the above transparent PNG images on a background color of #6080A0. Note the gray fringes on the letters of the middle image.
The color at a point where colors G1 and G2 are to be combined is (G1 + G2) / 2. Some consequences of this are:
- Where the colors are equal, the result is the same color, because (G1 + G1) / 2 = G1.
- Where one color (G1) is white (0.0), the result is G2 / 2. This will always be less than any nonzero value of G2, so the result is whiter than G2. (This is easily reversed for the case where G2 is white.)
- Where one color (G1) is black (1.0), the result is (1 + G2) / 2. This will always be more than G2, so the result is blacker than G2.

This shows how the above images would look when, for example, editing them. The grey and white check pattern would be converted into transparency.

The formula is commutative, since (G1 + G2) / 2 = (G2 + G1) / 2. This means it does not matter in which order two graphics are mixed, i.e. which of the two is on the top and which is on the bottom. The formula is not associative, since
((G1 + G2) / 2 + G3) / 2 = G1/4 + G2/4 + G3/2
(G1 + (G2 + G3) / 2) / 2 = G1/2 + G2/4 + G3/4
This is important, as it means that when combining three or more objects with this rule for transparency, the final color depends very much on the order of the calculations.

Although the formula is simple, it may not be ideal. Human perception of brightness is not linear: we do not necessarily consider that a gray value of 0.5 is halfway between black and white. Such details may not matter when transparency is used only to soften edges, but in more complex designs they may be significant. Most people working seriously with transparency will need to see the results and may fiddle with the colors or (where possible) the algorithm to arrive at the results they need.

This formula can easily be generalized to RGB or CMYK color by applying it to each channel separately; for example, final red = (red1 + red2) / 2, and likewise for the other channels. But it cannot be applied to all color models; for example, Lab color would produce surprising results.

An alternative model is that at every point in each element to be combined for transparency there is an associated color C and opacity alpha between 0 and 1. For each color channel, one might work with this model: if a channel with intensity C_a and opacity a_a overlays a channel with intensity C_b and opacity a_b, the result is a channel with intensity a_a C_a + (1 - a_a) a_b C_b and opacity a_a + (1 - a_a) a_b. Each channel must be multiplied by its corresponding alpha value before composition (so-called premultiplied alpha). The SVG specification uses this type of blending, and it is one of the models that can be used in PDF. Alpha channels may be implemented in this way, where the alpha channel provides an opacity level to be applied equally to all other channels. To work with the above formula, the opacity needs to be scaled to the range 0 to 1, whatever its external representation (often 0 to 255 if using 8-bit samples such as "RGBA").
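As a hedged sketch (not from the original article), the following Python function implements the premultiplied-alpha blending rule just described for a single channel; names are illustrative, and intensities here use the conventional 0 = dark, 1 = bright scale rather than the grayscale convention above.

    def over(ca, aa, cb, ab):
        """Composite channel (ca, aa) over (cb, ab), opacities in [0, 1].

        Works on premultiplied values internally: result = fg + (1 - aa) * bg.
        Returns the straight (un-premultiplied) intensity and the opacity.
        """
        ao = aa + (1.0 - aa) * ab                       # resulting opacity
        if ao == 0.0:
            return 0.0, 0.0                             # fully transparent pixel
        co = (aa * ca + (1.0 - aa) * ab * cb) / ao      # un-premultiply
        return co, ao

    # A 50%-opaque bright pixel over an opaque dark background.
    print(over(1.0, 0.5, 0.0, 1.0))  # -> (0.5, 1.0)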
Transparency in PDF
Starting with version 1.4 of the PDF standard (Adobe Acrobat version 5), transparency (including translucency) is supported. Transparency in PDF files makes it possible to achieve various effects, including adding shadows to objects, making objects semi-transparent, and having objects blend into each other or into text. PDF supports many different blend modes, not just the most common averaging method, and the rules for compositing many overlapping objects allow choices (such as whether a group of objects is blended before being blended with the background, or whether each object in turn is blended into the background).
PDF transparency is a very complex model, its original specification by Adobe being over 100 pages long. A key source of complication is that blending objects with different color spaces can be tricky and error-prone, as well as causing compatibility issues. Transparency in PDF was designed not to cause errors in PDF viewers that did not understand it: such viewers would simply display all elements as fully opaque. However, this was a two-edged sword, as users with older viewers, PDF printers, etc. could see or print something completely different from the original design. The fact that the PDF transparency model is so complicated means that it is not well supported, so RIPs and printers often have problems printing PDFs with transparency. The solution is either to rasterize the image or to apply vector transparency flattening to the PDF. However, vector transparency flattening is extremely complex and only supported by a few specialist packages.
Transparency in PostScript
The PostScript language has limited support for full (not partial) transparency, depending on the PostScript level. Partial transparency is available with the pdfmark extension,[1] available on many PostScript implementations.
Level 1
Level 1 PostScript offers transparency via two methods:
- A one-bit (monochrome) image can be treated as a mask. In this case the 1-bits can be painted any single color, while the 0-bits are not painted at all. This technique cannot be generalised to more than one color, or to vector shapes.
- Clipping paths can be defined. These restrict what part of all subsequent graphics can be seen. This can be used for any kind of graphic; however, in Level 1, the maximum number of nodes in a path was often limited to 1500, so complex paths (e.g. cutting around the hair in a photograph of a person's head) often failed.
Level 2
Level 2 PostScript adds no specific transparency features. However, by the use of patterns, arbitrary graphics can be painted through masks defined by any vector or text operations. This is, however, complex to implement. In addition, this too often reached implementation limits, and few if any application programs ever offered this technique.
Level 3
Level 3 PostScript adds a further transparency option for any raster image: a transparent color, or range of colors, can be applied, or a separate 1-bit mask can be used to provide an alpha channel.
Encapsulated PostScript
EPS files contain PostScript, which may be level 1, 2 or 3 and make use of the features above. A more subtle issue arises with the previews for EPS files that are typically used to show the view of the EPS file on screen. There are viable techniques for setting transparency in the preview. For example, a TIFF preview might use a TIFF alpha channel. However, many applications do not use this transparency information and will therefore show the preview as a rectangle. A semi-proprietary technique pioneered in Photoshop and adopted by a number of pre-press applications is to store a clipping path in a standard location of the EPS, and use that for display. In addition, few of the programs that generate EPS previews will generate transparency information in the preview. Some programs have sought to get around this by treating all white in the preview as transparent, but this too is problematic in the cases where some whites are not transparent.
More recently, applications have been appearing that ignore the preview altogether; they get the information on which parts to paint by interpreting the PostScript itself.
Refraction
Refraction is the change in direction of a wave due to a change in its transmission medium. It is essentially a surface phenomenon, governed by the laws of conservation of energy and momentum. Due to the change of medium, the phase velocity of the wave changes, but its frequency remains constant. Refraction is most commonly observed when a wave passes from one medium to another at any angle other than 90° or 0°. Refraction of light is the most commonly observed phenomenon, but any type of wave can refract when it interacts with a medium, for example when sound waves pass from one medium into another or when water waves move into water of a different depth. Refraction is described by Snell's law, which states that for a given pair of media and a wave with a single frequency, the ratio of the sines of the angle of incidence \theta_1 and angle of refraction \theta_2 is equal to the ratio of phase velocities (v_1 / v_2) in the two media, or equivalently, to the inverse ratio of the indices of refraction (n_2 / n_1):

\frac{\sin\theta_1}{\sin\theta_2} = \frac{v_1}{v_2} = \frac{n_2}{n_1}
In this experiment, light on an air/plexiglass surface undergoes refraction (lower ray) and reflection (upper ray).
In general, the incident wave is partially refracted and partially reflected; the details of this behavior are described by the Fresnel equations.
An image of the Golden Gate Bridge is refracted and bent by many differing three dimensional drops of water.
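In a ray tracer, Snell's law is usually applied in vector form. The following Python sketch (an addition, not from the original article; names are illustrative) bends a unit direction d at a surface with unit normal n when passing from index n1 into n2:

    import math

    def refract(d, n, n1, n2):
        """Refract unit direction d at a surface with unit normal n
        (pointing against d), going from index n1 into n2.
        Returns the refracted direction, or None on total internal reflection."""
        eta = n1 / n2
        cos_i = -sum(di * ni for di, ni in zip(d, n))   # cosine of incidence angle
        sin2_t = eta * eta * (1.0 - cos_i * cos_i)      # Snell's law, squared
        if sin2_t > 1.0:
            return None                                 # total internal reflection
        cos_t = math.sqrt(1.0 - sin2_t)
        return tuple(eta * di + (eta * cos_i - cos_t) * ni for di, ni in zip(d, n))

    # A ray hitting a glass surface (n = 1.5) head-on passes straight through.
    print(refract((0.0, 0.0, -1.0), (0.0, 0.0, 1.0), 1.0, 1.5))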
Explanation
In optics, refraction is a phenomenon that often occurs when waves travel from a medium with one refractive index to a medium with a different one at an oblique angle. At the boundary between the media, the wave's phase velocity is altered, usually causing a change in direction. Its wavelength increases or decreases, but its frequency remains constant. For example, a light ray will refract as it enters and leaves glass, assuming there is a change in refractive index. A ray traveling along the normal (perpendicular to the boundary) will change speed, but not direction; refraction still occurs in this case. Understanding of this concept led to the invention of lenses and the refracting telescope.
Refraction of light at the interface between two media of different refractive indices, with n_2 > n_1. Since the phase velocity is lower in the second medium (v_2 < v_1), the angle of refraction \theta_2 is less than the angle of incidence \theta_1; that is, the ray in the higher-index medium is closer to the normal.
Refraction can be seen when looking into a bowl of water. Air has a refractive index of about 1.0003, and water has a refractive index of about 1.33. If a person looks at a straight object, such as a pencil or straw, which is placed at a slant, partially in the water, the object appears to bend at the water's surface. This is due to the bending of light rays as they move from the water to the air. Once the rays reach the eye, the eye traces them back as straight lines (lines of sight). The lines of sight (shown as dashed lines) intersect at a higher position than where the actual rays originated. This causes the pencil to appear higher and the water to appear shallower than it really is.

An object (in this case a pencil) partly immersed in water looks bent due to refraction: the light waves from X change direction and so seem to originate at Y. (More accurately, for any angle of view, Y should be vertically above X, and the pencil should appear shorter, not longer as shown.)

The depth that the water appears to have when viewed from above is known as the apparent depth. This is an important consideration for spearfishing from the surface, because it will make the target fish appear to be in a different place, and the fisher must aim lower to catch the fish. Conversely, an object above the water has a higher apparent height when viewed from below the water; the opposite correction must be made by archer fish.

For small angles of incidence (measured from the normal, when sin \theta is approximately the same as tan \theta), the ratio of apparent to real depth is the ratio of the refractive indices of air and water. But as the angle of incidence approaches 90°, the apparent depth approaches zero, albeit reflection increases, which limits observation at high angles of incidence. Conversely, the apparent height approaches infinity as the angle of incidence (from below) increases, but even earlier, as the angle of total internal reflection is approached, the image fades from view as this limit is approached.

The diagram on the right shows an example of refraction in water waves. Ripples travel from the left and pass over a shallower region inclined at an angle to the wavefront. The waves travel slower in the shallower water, so the wavelength decreases and the wave bends at the boundary. The dotted line represents the normal to the boundary. The dashed line represents the original direction of the waves. This phenomenon explains why waves on a shoreline tend to strike the shore close to a perpendicular angle. As the waves travel from deep water into shallower water near the shore, they are refracted from their
original direction of travel to an angle more normal to the shoreline.[1]

Refraction is also responsible for rainbows and for the splitting of white light into a rainbow spectrum as it passes through a glass prism. Glass has a higher refractive index than air. When a beam of white light passes from air into a material having an index of refraction that varies with frequency, a phenomenon known as dispersion occurs, in which different coloured components of the white light are refracted at different angles, i.e., they bend by different amounts at the interface, so that they become separated. The different colors correspond to different frequencies.

While refraction allows for phenomena such as rainbows, it may also produce peculiar optical phenomena, such as mirages and Fata Morgana. These are caused by the change of the refractive index of air with temperature.

Recently some metamaterials have been created which have a negative refractive index. With metamaterials, we can also obtain total refraction phenomena when the wave impedances of the two media are matched; there is then no reflected wave.[2]

Also, since refraction can make objects appear closer than they are, it is responsible for allowing water to magnify objects. First, as light enters a drop of water, it slows down. If the water's surface is not flat, the light is bent into a new path. The round shape bends the light outwards, and as it spreads out, the image you see gets larger.
A useful analogy in explaining the refraction of light would be to imagine a marching band as they march at an oblique angle from pavement (a fast medium) into mud (a slower medium). The marchers on the side that runs into the mud first will slow down first. This causes the whole band to pivot slightly toward the normal (make a smaller angle from the normal).
Clinical significance
In medicine, particularly optometry, ophthalmology and orthoptics, refraction (also known as refractometry) is a clinical test in which a phoropter may be used by the appropriate eye care professional to determine the eye's refractive error and the best corrective lenses to be prescribed. A series of test lenses in graded optical powers or focal lengths are presented to determine which provides the sharpest, clearest vision.[3]
Acoustics
In underwater acoustics, refraction is the bending or curving of a sound ray that results when the ray passes through a sound speed gradient from a region of one sound speed to a region of a different speed. The amount of ray bending is dependent upon the amount of difference between sound speeds, that is, the variation in temperature, salinity, and pressure of the water.[4] Similar acoustics effects are also found in the Earth's atmosphere. The phenomenon of refraction of sound in the atmosphere has been known for centuries;[5] however, beginning in the early 1970s, widespread analysis of this effect came into vogue through the designing of urban highways and noise barriers to
address the meteorological effects of bending of sound rays in the lower atmosphere.[6]
Gallery
The straw appears to be broken, due to refraction of light as it emerges into the air.
References
[5] Mary Somerville (1840), On the Connexion of the Physical Sciences, J. Murray Publishers, (originally by Harvard University)
External links
Java illustration of refraction (http://www.falstad.com/ripple/ex-refraction.html)
Java simulation of refraction through a prism (http://www.phy.hk/wiki/englishhtm/RefractionByPrism.htm)
Reflections and Refractions in Ray Tracing (http://www.flipcode.com/archives/reflection_transmission.pdf), a simple but thorough discussion of the mathematics behind refraction and reflection
Flash refraction simulation, includes source (http://www.interactagram.com/physics/optics/refraction/); explains refraction and Snell's law
Animations demonstrating optical refraction (http://qed.wikina.org/refraction/) by QED
Total internal reflection

The larger the angle to the normal, the smaller is the fraction of light transmitted rather than reflected, until the angle at which total internal reflection occurs. (The color of the rays is to help distinguish them, and is not meant to indicate any color dependence.)
Optical description
Total internal reflection of light can be demonstrated using a semi-circular block of glass or plastic. A "ray box" shines a narrow beam of light (a "ray") onto the glass. The semi-circular shape ensures that a ray pointing towards the centre of the flat face will hit the curved surface at a right angle; this prevents refraction at the air/glass boundary of the curved surface. At the glass/air boundary of the flat surface, what happens will depend on the angle \theta at which the ray hits the boundary, where \theta_c is the critical angle (both measured from the normal to the surface):
- If \theta < \theta_c, the ray will split: some of the ray will reflect off the boundary and some will refract as it passes through. This is not total internal reflection.
- If \theta > \theta_c, the entire ray reflects from the boundary; none passes through. This is called total internal reflection.

This physical property makes optical fibers useful and prismatic binoculars possible. It is also what gives diamonds their distinctive sparkle, as diamond has an unusually high refractive index.
Critical angle
The critical angle is the angle of incidence above which total internal reflection occurs. The angle of incidence is measured with respect to the normal at the refractive boundary (see diagram illustrating Snell's law). Consider a light ray passing from glass into air. The light emanating from the interface is bent towards the glass. When the incident angle is increased sufficiently, the transmitted angle (in air) reaches 90°; at this point no light is transmitted into the air. The critical angle is given by Snell's law,

n_1 \sin\theta_i = n_2 \sin\theta_t .

Illustration of Snell's law, n_1 \sin\theta_1 = n_2 \sin\theta_2 .

To find the critical angle, we find the value of \theta_i when \theta_t = 90° and thus \sin\theta_t = 1; the angle of incidence is then equal to the critical angle \theta_c. Solving for \theta_i gives

\theta_c = \theta_i = \arcsin\left(\frac{n_2}{n_1}\right) .
If the incident ray is precisely at the critical angle, the refracted ray is tangent to the boundary at the point of incidence. If, for example, visible light were traveling through acrylic glass (with an index of refraction of approximately 1.50) into air (with an index of refraction of 1.00), the calculation would give the critical angle for light from acrylic into air, which is \theta_c = \arcsin(1.00/1.50) \approx 41.8°. Light incident on the border at an angle less than 41.8° would be partially transmitted, while light incident at larger angles with respect to the normal would be totally internally reflected.
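As a quick numerical check (an added sketch, not from the original article; the function name is illustrative), the critical angle follows directly from the arcsine formula above:

    import math

    def critical_angle(n1, n2):
        """Critical angle (degrees) for light going from index n1 into n2.
        Only defined for n2 < n1 (dense into rare); returns None otherwise."""
        if n2 >= n1:
            return None  # arcsine argument would exceed 1: no total internal reflection
        return math.degrees(math.asin(n2 / n1))

    print(critical_angle(1.50, 1.00))  # acrylic into air: about 41.8 degrees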
If the fraction n_2 / n_1 is greater than 1, then the arcsine is not defined, meaning that total internal reflection does not occur even at very shallow or grazing incident angles. So the critical angle is only defined when n_2 / n_1 is less than 1.
Refraction of light at the interface between two media, including total internal reflection.
A special name is given to the angle of incidence that produces an angle of refraction of 90°: it is called the critical angle.
If \theta_i > \theta_c, then \sin\theta_t = (n_1/n_2)\sin\theta_i > 1. As a result of this, \cos\theta_t = \sqrt{1 - \sin^2\theta_t} becomes complex:

\cos\theta_t = i\,\sqrt{(n_1/n_2)^2 \sin^2\theta_i - 1} .

The electric field of the transmitted plane wave is given by E_t = E_0 \exp[i(\mathbf{k}\cdot\mathbf{r} - \omega t)], and evaluating the exponent with k_x = k \sin\theta_t and k_z = k \cos\theta_t one obtains the components along and normal to the boundary. Using the fact that \cos\theta_t is now imaginary, and Snell's law, one finally obtains

E_t = E_0\, e^{-\kappa z}\, e^{i(k x \sin\theta_t - \omega t)} ,

where \kappa = k\,\sqrt{(n_1/n_2)^2 \sin^2\theta_i - 1} and k is the wavenumber in the second medium.
This wave in the optically less dense medium is known as the evanescent wave. It is characterized by its propagation in the x direction and its exponential attenuation in the z direction. Although there is a field in the second medium, it can be shown that no energy flows across the boundary. The component of the Poynting vector in the direction normal to the boundary is finite, but its time average vanishes, whereas the other components of the Poynting vector (here the x-component only) have time-averaged values that are in general finite.
Frustrated total internal reflection

When a glass of water is held firmly, the ridges making up the fingerprints are made visible by frustrated total internal reflection: light tunnels from the glass into the ridges through the very short air gap.
The transmission coefficient for frustrated total internal reflection (FTIR) is highly sensitive to the spacing between the high-index media (the function is approximately exponential until the gap is almost closed), so this effect has often been used to modulate optical transmission and reflection with a large dynamic range. An example application of this principle is the multi-touch sensing technology for displays [1] developed at New York University's Media Research Lab.
Applications
- Total internal reflection is the operating principle of optical fibers, which are used in endoscopes and telecommunications.
- Total internal reflection is the operating principle of automotive rain sensors, which control automatic windscreen/windshield wipers.
- Another application of total internal reflection is the spatial filtering of light.[2]
- Prismatic binoculars use the principle of total internal reflection to produce a very clear image.
- Some multi-touch screens use frustrated total internal reflection in combination with a camera and appropriate software to pick up multiple targets.
Mirror-like effect
- Gonioscopy employs total internal reflection to view the anatomical angle formed between the eye's cornea and iris.
- A gait analysis instrument, CatWalk XT,[3] uses frustrated total internal reflection in combination with a high-speed camera to capture and analyze footprints of laboratory rodents.
- Optical fingerprinting devices use frustrated total internal reflection to record an image of a person's fingerprint without the use of ink.
- A total internal reflection fluorescence microscope uses the evanescent wave produced by TIR to excite fluorophores close to a surface. This is useful for the study of surface properties of biological samples.[4]
References
This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C" [5] (in support of MIL-STD-188).
[1] http://www.cs.nyu.edu/~jhan/ftirsense/
[5] http://www.its.bldrdoc.gov/fs-1037/fs-1037c.htm
External links
FTIR Touch Sensing (http://cs.nyu.edu/~jhan/ftirsense/index.html)
Multi-Touch Interaction Research (http://cs.nyu.edu/~jhan/ftirtouch/index.html)
Georgia State University (http://hyperphysics.phy-astr.gsu.edu/hbase/phyopt/totint.html)
Total Internal Reflection (http://demonstrations.wolfram.com/TotalInternalReflection/) by Michael Schreiber, Wolfram Demonstrations Project
Total Internal Reflection (http://www.stmary.ws/highschool/physics/home/notes/waves/TotalInternalReflection.htm) St. Mary's Physics Online Notes
Bowley, Roger (2009). "Total Internal Reflection" (http://www.sixtysymbols.com/videos/reflection.htm). Sixty Symbols. Brady Haran for the University of Nottingham.
Refraction, critical angle and total internal reflection of light at the interface between two media.
List of refractive indices
List
Some representative refractive indices
Material                                             n (at 589.29 nm unless noted)

Vacuum                                               1 (by definition)
Air at STP                                           1.000277

Gases at 0 °C and 1 atm
  Air                                                1.000293
  Carbon dioxide                                     1.00045
  Helium                                             1.000036
  Hydrogen                                           1.000132

Liquids at 20 °C
  Arsenic trisulfide and sulfur in methylene iodide  1.9 [1]
  Benzene                                            1.501
  Carbon disulfide                                   1.628
  Carbon tetrachloride                               1.461
  Ethyl alcohol (ethanol)                            1.361
  Silicone oil                                       1.52045 [2]
  Water                                              1.3330

Solids at room temperature
  Titanium dioxide (rutile phase)                    2.496 [3]
  Diamond                                            2.419
  Strontium titanate                                 2.41
  Amber                                              1.55
  Fused silica (also called fused quartz)            1.458
  Sodium chloride                                    1.544

Other materials
  Liquid helium                                      1.025
  Water ice                                          1.31
  Cornea (human)                                     1.373/1.380/1.401 [4]
  Lens (human)                                       1.386 - 1.406
  Acetone                                            1.36
  Ethanol                                            1.36
  Glycerol                                           1.4729
  Bromine                                            1.661
  Teflon AF                                          1.315
  Teflon                                             1.35 - 1.38
  Cytop                                              1.34
  Sylgard 184                                        1.4118
  PLA                                                1.46
  Acrylic glass                                      1.490 - 1.492
  Polycarbonate                                      1.584 - 1.586
  PMMA                                               1.4893 - 1.4899
  PETg                                               1.57
  PET                                                1.5750
  Crown glass (pure)                                 1.50 - 1.54
  Flint glass (pure)                                 1.60 - 1.62
  Crown glass (impure)                               1.485 - 1.755
  Flint glass (impure)                               1.523 - 1.925
  Pyrex (a borosilicate glass)                       1.470
  Cryolite                                           1.338
  Rock salt                                          1.516
  Sapphire                                           1.762 - 1.778
  Sugar solution, 25%                                1.3723
  Sugar solution, 50%                                1.4200
  Sugar solution, 75%                                1.4774
  Cubic zirconia                                     2.15 - 2.18
  Potassium niobate (KNbO3)                          2.28
  Silicon carbide                                    2.65 - 2.69
  Cinnabar (mercury sulfide)                         3.02
  Gallium(III) phosphide                             3.5
  Gallium(III) arsenide                              3.927
  Zinc oxide (at 390 nm)                             2.4
  Germanium                                          4.01
  Silicon (at 590 nm)                                3.96
References
[1] Meyrowitz, R., A compilation and classification of immersion media of high index of refraction, American Mineralogist 40: 398 (1955) (http://www.minsocam.org/ammin/AM40/AM40_398.pdf)
[2] Silicon and Oil Refractive Index Standards (http://www.dcglass.com/htm/p-ri-oil.htm)
[3] RefractiveIndex.INFO - Refractive index and related constants (http://refractiveindex.info/?group=CRYSTALS&material=TiO2)
External links
International Association for the Properties of Water and Steam (http://www.iapws.org/relguide/rindex.pdf)
Ioffe institute, Russian Federation (http://www.ioffe.ru/SVA/NSM/nk/index.html)
Crystran, United Kingdom (http://www.crystran.co.uk/)
Jena University, Germany (http://www.astro.uni-jena.de/Laboratory/Database/jpdoc/f-dbase.html)
Hyperphysics list of refractive indices (http://hyperphysics.phy-astr.gsu.edu/hbase/tables/indrf.html#c1)
Luxpop: Index of refraction values and photonics calculations (http://www.luxpop.com/)
Kaye and Laby Online (http://www.kayelaby.npl.co.uk/general_physics/2_5/2_5_8.html), provided by the National Physical Laboratory, UK
List of Refractive Indices of Solvents (http://macro.lsu.edu/HowTo/solvents/Refractive Index.htm)
Schlick's approximation
In 3D computer graphics, Schlick's approximation is a formula for approximating the contribution of the Fresnel term in the specular reflection of light from a non-conducting interface (surface) between two media. According to Schlick's model, the specular reflection coefficient R can be approximated by:
R(\theta) = R_0 + (1 - R_0)(1 - \cos\theta)^5

where

R_0 = \left(\frac{n_1 - n_2}{n_1 + n_2}\right)^2

and \theta is the angle between the viewing direction and the half-angle direction, which is halfway between the incident light direction and the viewing direction, hence \cos\theta = (H \cdot V). Here n_1 and n_2 are the indices of refraction of the two media at the interface, and R_0 is the reflection coefficient for light incoming parallel to the normal (i.e. the value of the Fresnel term when \theta = 0, or minimal reflection). In computer graphics, one of the interfaces is usually air, meaning that n_1 can be approximated as 1.
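A direct Python transcription of the two formulas above (an added sketch; the default index values are illustrative):

    def schlick(cos_theta, n1=1.0, n2=1.5):
        """Schlick's approximation of the Fresnel reflection coefficient."""
        r0 = ((n1 - n2) / (n1 + n2)) ** 2
        return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

    print(schlick(1.0))  # head-on (theta = 0): about 0.04 for air into glass
    print(schlick(0.0))  # grazing incidence: 1.0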
References

Schlick, C. (1994). "An Inexpensive BRDF Model for Physically-based Rendering". Computer Graphics Forum 13 (3): 233. doi:10.1111/1467-8659.1330233 (http://dx.doi.org/10.1111%2F1467-8659.1330233).
Bidirectional scattering distribution function
The term BSDF is sometimes used in a slightly different context: for the function describing the amount of scatter (not the scattered light), simply as a function of the incident light angle. An example to illustrate this context: for a perfectly Lambertian surface, BSDF(angle) = const. This approach is used, for instance, by manufacturers of glossy surfaces to verify the output quality of their products. Another recent usage of the term BSDF can be seen in some 3D packages, where vendors use it as a 'smart' category to encompass simple well-known CG algorithms like Phong, Blinn-Phong, etc.
BTDF (Bidirectional transmittance distribution function) is similar to BRDF but for the opposite side of the surface. (see the top image).
References
[1] http://en.wikipedia.org/wiki/Bidirectional_scattering_distribution_function#endnote_endnote_veach1997_a
[2] http://en.wikipedia.org/wiki/Bidirectional_scattering_distribution_function#endnote_nicodemus_1977

1. ^ Eric Veach (1997), "Robust Monte Carlo Methods for Light Transport Simulation" (http://graphics.stanford.edu/papers/veach_thesis/thesis-bw.pdf), page 86 (http://graphics.stanford.edu/papers/veach_thesis/chapter3.ps), citing Paul Heckbert (1991), "Simulating Global Illumination Using Adaptive Meshing", PhD thesis, University of California, Berkeley, page 26.
2. ^ Randall Rauwendaal, "Rendering General BSDFs and BSSDFs" (http://graphics.cs.ucdavis.edu/~bcbudge/ecs298_2004/General_BSDFs_BSSDFs.ppt)
3. ^ The original definition in Nicodemus et al. 1977 has "scattering surface", but somewhere along the way the word ordering was reversed.
Object Intersection
Line-sphere intersection
In analytic geometry, a line and a sphere can intersect in three ways: no intersection at all, at exactly one point, or in two points. Methods for distinguishing these cases, and determining equations for the points in the latter cases, are useful in a number of circumstances. For example, this is a common calculation to perform during ray tracing (Eberly 2006:698).
The three possible line-sphere intersections: 1. No intersection. 2. Point intersection. 3. Two point intersection.
In vector notation, using a sphere with center c and radius r, and a line with origin o and unit direction l, the points on the line are

x = o + d l

where
- d = distance along line from starting point
- l = direction of line (a unit vector)
- o = origin of the line
- x = points on the line

and the points on the sphere satisfy

\lVert x - c \rVert^2 = r^2 .
Searching for points that are on the line and on the sphere means combining the equations and solving for d.

Equations combined:

\lVert o + d l - c \rVert^2 = r^2

Expanded:

d^2 (l \cdot l) + 2 d\, (l \cdot (o - c)) + (o - c) \cdot (o - c) = r^2

Rearranged:

d^2 (l \cdot l) + 2 d\, (l \cdot (o - c)) + (o - c) \cdot (o - c) - r^2 = 0

The form of a quadratic equation in d is now observable. (This quadratic equation is an example of Joachimsthal's Equation.[1]) Since l is a unit vector, l \cdot l = 1, and the solutions are

d = -(l \cdot (o - c)) \pm \sqrt{(l \cdot (o - c))^2 - \lVert o - c \rVert^2 + r^2} .

If the value under the square root (the discriminant) is less than zero, then no
solutions exist, i.e. the line does not intersect the sphere (case 1). If it is zero, then exactly one solution exists, i.e. the line just touches the sphere in one point (case 2). If it is greater than zero, two solutions exist, and thus the line touches the sphere in two points (case 3).
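A minimal Python sketch of the three cases (an addition to the text; names are illustrative):

    import math

    def line_sphere_intersect(o, l, c, r):
        """Distances d along unit direction l from origin o at which the line
        meets the sphere (center c, radius r); empty list when it misses."""
        oc = tuple(oi - ci for oi, ci in zip(o, c))
        b = sum(li * oci for li, oci in zip(l, oc))      # l . (o - c)
        disc = b * b - sum(x * x for x in oc) + r * r    # the discriminant
        if disc < 0:
            return []                                    # case 1: no intersection
        if disc == 0:
            return [-b]                                  # case 2: tangent point
        s = math.sqrt(disc)
        return [-b - s, -b + s]                          # case 3: two points

    # A ray along +z from (0, 0, -5) hits the unit sphere at d = 4 and d = 6.
    print(line_sphere_intersect((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0))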
References

David H. Eberly (2006), 3D Game Engine Design: A Practical Approach to Real-Time Computer Graphics, 2nd edition, Morgan Kaufmann. ISBN 0-12-229063-1
[1] http://mathworld.wolfram.com/JoachimsthalsEquation.html
Line-plane intersection
In analytic geometry, the intersection of a line and a plane can be the empty set, a point, or a line. Distinguishing these cases, and determining equations for the point and line in the latter cases have use, for example, in computer graphics, motion planning, and collision detection.
The three possible plane-line intersections: 1. No intersection. 2. Point intersection. 3. Line intersection.
Parametric form
A line is described by all points that are a given direction from a point. Thus a general point on a line can be represented as

p = l_a + (l_b - l_a)\, t, \quad t \in \mathbb{R},

where l_a and l_b are two distinct points on the line. Similarly, a general point on a plane can be represented as

p = p_0 + (p_1 - p_0)\, u + (p_2 - p_0)\, v, \quad u, v \in \mathbb{R},

where p_0, p_1 and p_2 are three points in the plane which are not co-linear.

The intersection of line and plane.

The point at which the line intersects the plane is therefore described by setting the point on the line equal to the point on the plane, giving the parametric equation:

l_a + (l_b - l_a)\, t = p_0 + (p_1 - p_0)\, u + (p_2 - p_0)\, v .

If the line is parallel to the plane then the vectors l_a - l_b, p_1 - p_0 and p_2 - p_0 are linearly dependent and the matrix will be singular. This situation will also occur when the line lies in the plane.

If the solution satisfies the condition t \in [0, 1], then the intersection point is on the line between l_a and l_b.

If the solution satisfies u, v \in [0, 1] and u + v \le 1, then the intersection point is in the plane inside the triangle spanned by the three points p_0, p_1 and p_2.

This problem is typically solved by expressing it in matrix form, and inverting it:

\begin{bmatrix} t \\ u \\ v \end{bmatrix} = \begin{bmatrix} l_a - l_b & p_1 - p_0 & p_2 - p_0 \end{bmatrix}^{-1} (l_a - p_0) .
Algebraic form
In vector notation, a plane can be expressed as the set of points p for which

(p - p_0) \cdot n = 0,

where n is a normal vector to the plane and p_0 is a point on the plane. The line is described as

p = d\, l + l_0,

where l is a vector in the direction of the line, l_0 is a point on the line, and d is a scalar. Substituting the line into the plane equation gives

(d\, l + l_0 - p_0) \cdot n = 0 .

Distributing gives

d\, (l \cdot n) + (l_0 - p_0) \cdot n = 0 ,

and solving for d:

d = \frac{(p_0 - l_0) \cdot n}{l \cdot n} .

If the line starts outside the plane and is parallel to the plane, there is no intersection. In this case, the above denominator will be zero and the numerator will be non-zero. If the line starts inside the plane and is parallel to the plane, the line intersects the plane everywhere. In this case, both the numerator and denominator above will be zero. In all other cases, the line intersects the plane once, and d represents the intersection as the distance along the line from l_0; that is, the intersection point is p = d\, l + l_0.
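The algebraic form translates almost line-for-line into code. A hedged Python sketch (an addition; names are illustrative):

    def line_plane_intersect(l0, l, p0, n, eps=1e-9):
        """Point where the line l0 + d*l meets the plane through p0 with
        normal n; None when the line is parallel to the plane."""
        denom = sum(li * ni for li, ni in zip(l, n))            # l . n
        if abs(denom) < eps:
            return None                                         # parallel (or in-plane)
        d = sum((pi - li0) * ni for pi, li0, ni in zip(p0, l0, n)) / denom
        return tuple(li0 + d * li for li0, li in zip(l0, l))

    # A ray from above the xy-plane pointing down hits z = 0 at the origin.
    print(line_plane_intersect((0, 0, 5), (0, 0, -1), (0, 0, 0), (0, 0, 1)))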
Uses
In the ray tracing method of computer graphics a surface can be represented as a set of pieces of planes. The intersection of a ray of light with each plane is used to produce an image of the surface. In vision-based 3D reconstruction, a subfield of computer vision, depth values are commonly measured by the so-called triangulation method, which finds the intersection between the light plane and the ray reflected toward the camera. The algorithm can be generalised to cover intersection with other planar figures, in particular the intersection of a polyhedron with a line.
Point in polygon
In computational geometry, the point-in-polygon (PIP) problem asks whether a given point in the plane lies inside, outside, or on the boundary of a polygon. It is a special case of point location problems and finds applications in areas that deal with processing geometrical data, such as computer graphics, computer vision, geographical information systems (GIS), motion planning, and CAD. An early description of the problem in computer graphics shows two common approaches (ray casting and angle summation) in use as early as 1974.[1] An attempt by computer graphics veterans to trace the history of the problem, and some tricks for its solution, can be found in an issue of the Ray Tracing News.[2]
An example of a simple polygon
In the ray casting algorithm, one counts how many times a ray starting from the point crosses the polygon boundary: the point is outside if the count is even, and inside if it is odd. A subtlety arises when the ray passes exactly through a vertex, since the crossing could be counted for both edges that meet at the vertex. If the polygon is specified by its vertices, then this problem is eliminated by checking the y-coordinates of the ray and of the ends of the tested polygon side before the actual computation of the intersection. In other cases, when polygon sides are computed from other types of data, other tricks must be applied for the numerical robustness of the algorithm. The sketch below illustrates the even-odd test.
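A compact Python version of the even-odd ray casting test (an added sketch; the half-open comparison on y is the usual trick for the vertex problem described above):

    def point_in_polygon(x, y, poly):
        """Even-odd rule: cast a horizontal ray to the right and count
        edge crossings. poly is a list of (x, y) vertices."""
        inside = False
        n = len(poly)
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            # Half-open test on y avoids double-counting crossings at vertices.
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    square = [(0, 0), (2, 0), (2, 2), (0, 2)]
    print(point_in_polygon(1, 1, square), point_in_polygon(3, 1, square))  # True False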
Comparison
For simple polygons, both algorithms will always give the same results for all points. However, for complex polygons, the algorithms may give different results for points in the regions where the polygon intersects itself, where the polygon does not have a clearly defined inside and outside. In this case, the former algorithm is called the even-odd rule. One solution is to transform (complex) polygons into simpler, but even-odd-equivalent, ones before the intersection check.[3]
Special cases
Simpler algorithms are possible for monotone polygons, star-shaped polygons and convex polygons.
References
[1] Ivan Sutherland et al., "A Characterization of Ten Hidden-Surface Algorithms", 1974, ACM Computing Surveys vol. 6 no. 1.
[2] "Point in Polygon, One More Time..." (http://jedi.ks.uiuc.edu/~johns/raytracer/rtn/rtnv3n4.html#art22), Ray Tracing News, vol. 3 no. 4, October 1, 1990.
Efficiency Schemes
Spatial index
A spatial database is a database that is optimized to store and query data that represents objects defined in a geometric space. Most spatial databases allow representing simple geometric objects such as points, lines and polygons. Some spatial databases handle more complex structures such as 3D objects, topological coverages, linear networks, and TINs. While typical databases are designed to manage various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types efficiently. These are typically called geometry or feature. The Open Geospatial Consortium created the Simple Features specification and sets standards for adding spatial functionality to database systems.[1]
Spatial index
Spatial indices are used by spatial databases (databases which store information related to objects in space) to optimize spatial queries. Conventional index types do not efficiently handle spatial queries such as how far two points differ, or whether points fall within a spatial area of interest. Common spatial index methods include the following (a toy implementation of the simplest of these, the grid, is sketched after this list):
- Grid (spatial index)
- Z-order (curve)
- Quadtree
- Octree
- UB-tree
- R-tree: typically the preferred method for indexing spatial data. Objects (shapes, lines and points) are grouped using the minimum bounding rectangle (MBR). Objects are added to an MBR within the index that will lead to the smallest increase in its size.
- R+ tree
- R* tree
- Hilbert R-tree
- X-tree
- kd-tree
- m-tree: an m-tree index can be used for the efficient resolution of similarity queries on complex objects compared using an arbitrary metric.
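The following Python sketch (an addition, not from the original article) shows a toy uniform-grid index, the simplest structure in the list above; class and method names are illustrative.

    from collections import defaultdict

    class GridIndex:
        """Toy uniform-grid spatial index: each point lands in the bucket of
        the cell containing it, and range queries only visit overlapping cells."""

        def __init__(self, cell=10.0):
            self.cell = cell
            self.buckets = defaultdict(list)

        def _key(self, x, y):
            return (int(x // self.cell), int(y // self.cell))

        def insert(self, obj, x, y):
            self.buckets[self._key(x, y)].append((x, y, obj))

        def query(self, xmin, ymin, xmax, ymax):
            kx0, ky0 = self._key(xmin, ymin)
            kx1, ky1 = self._key(xmax, ymax)
            for kx in range(kx0, kx1 + 1):
                for ky in range(ky0, ky1 + 1):
                    for x, y, obj in self.buckets.get((kx, ky), []):
                        if xmin <= x <= xmax and ymin <= y <= ymax:
                            yield obj

    idx = GridIndex(cell=5.0)
    idx.insert("a", 1.0, 1.0)
    idx.insert("b", 12.0, 3.0)
    print(list(idx.query(0, 0, 6, 6)))  # ['a']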
Database systems offering spatial support include:
- Oracle Spatial
- Microsoft SQL Server, which has support for spatial types since version 2008
- PostgreSQL DBMS (database management system), which uses the spatial extension PostGIS to implement the standardized datatype geometry and corresponding functions
- MySQL DBMS, which implements the datatype geometry plus some spatial functions implemented according to the OpenGIS specifications.[4] However, in MySQL version 5.5 and earlier, functions that test spatial relationships are limited to working with minimum bounding rectangles rather than the actual geometries. MySQL versions earlier than 5.0.16 only supported spatial data in MyISAM tables; as of MySQL 5.0.16, InnoDB, NDB, BDB, and ARCHIVE also support spatial features.
- Neo4j, a graph database that can build 1D and 2D indexes as B-tree, quadtree and Hilbert curve directly in the graph
- AllegroGraph, a graph database that provides a novel mechanism for efficient storage and retrieval of two-dimensional geospatial coordinates for Resource Description Framework data, including an extension syntax for SPARQL queries
- MongoDB, which supports geospatial indexes in 2D
- Esri, which has a number of both single-user and multiuser geodatabases
- SpaceBase [5], a real-time spatial database[6]
- CouchDB, a document-based database system that can be spatially enabled by a plugin called GeoCouch
- CartoDB [7], a cloud-based geospatial database on top of PostgreSQL with PostGIS
- StormDB [8], an upcoming cloud-based database on top of PostgreSQL with geospatial capabilities
- SpatialDB [9] by MineRP, the world's first open-standards (OGC) spatial database with spatial type extensions for the mining industry[10]
References
[1] OGC Homepage (http://www.opengeospatial.org)
[2] All Registered Products at opengeospatial.org (http://www.opengeospatial.org/resources/?page=products)
[3] Open Source GIS website (http://opensourcegis.org/)
[4] http://dev.mysql.com/doc/refman/5.5/en/gis-introduction.html
[5] http://paralleluniverse.co
[6] SpaceBase product page on the Parallel Universe website (http://paralleluniverse.co/product)
[7] http://cartodb.com/
[8] http://www.stormdb.com
[9] http://www.minerpsolutions.com/en/software/enterprise/spatialDB
[10] SpatialDB product page on the MineRP website (http://www.minerpsolutions.com/en/software/enterprise/spatialDB)
Further reading
Spatial Databases: A Tour (http://www.spatial.cs.umn.edu/Book/), Shashi Shekhar and Sanjay Chawla, Prentice Hall, 2003 (ISBN 0-13-017480-7)
ESRI Press (http://gis.esri.com/esripress/). ESRI Press titles include Modeling Our World: The ESRI Guide to Geodatabase Design, and Designing Geodatabases: Case Studies in GIS Data Modeling, 2005, a Ben Franklin Award (http://www.pma-online.org/benfrank2005_winnerfinalist.cfm) winner, PMA, The Independent Book Publishers Association.
Spatial Databases - With Application to GIS (http://mkp.com/books/data-management), Philippe Rigaux, Michel Scholl and Agnes Voisard, Morgan Kaufmann Publishers, 2002 (ISBN 1-55860-588-6)
External links
An introduction to PostgreSQL PostGIS (http://www.mapbender.org/presentations/Spatial_Data_Management_Arnulf_Christl/html/)
PostgreSQL PostGIS as components in a Service Oriented Architecture (SOA) (http://www.gisdevelopment.net/magazine/years/2006/jan/18_1.htm)
A Trigger Based Security Alarming Scheme for Moving Objects on Road Networks (http://www.springerlink.com/content/vn7446g28924jv5v/), Sajimon Abraham, P. Sojan Lal, published by Springer Berlin / Heidelberg, 2008.
Grid
In the context of a spatial index, a grid (a.k.a. "mesh", also "global grid" if it covers the entire surface of the globe) is a regular tessellation of a manifold or 2-D surface that divides it into a series of contiguous cells, which can then be assigned unique identifiers and used for spatial indexing purposes. A wide variety of such grids have been proposed or are currently in use, including grids based on "square" or "rectangular" cells, triangular grids or meshes, hexagonal grids, grids based on diamond-shaped cells, and possibly more. The range is broad and the possibilities are expanding.
Types of grids
"Square" or "rectangular" grids are frequently the simplest in use, i.e. for translating spatial information expressed in Cartesian coordinates (latitude and longitude) into and out of the grid system. Such grids may or may not be aligned with the gridlines of latitude and longitude; for example, Marsden squares, World Meteorological Organization squares, c-squares and others are aligned, while UTM, and various national (=local) grid based systems such as the British national grid reference system are not. In general, these grids fall into two classes, those that are "equal angle", that have cell sizes that are constant in degrees of latitude and longitude but are unequal in area (particularly with varying latitude), or those that are "equal area", that have cell sizes that are constant in distance on the ground (e.g. 100km, 10km) but not in degrees of longitude, in particular. The most influential triangular grid is the "Quaternary Triangular Mesh" or QTM that was developed by Geoffrey Dutton in the early 1980s. It eventually resulted in a thesis entitled "A Hierarchical Coordinate System for Geoprocessing and Cartography" that was published in 1999 (see publications list on Dutton's SpatialEffects [1] website). This grid was also employed as the basis of the rotatable globe that forms part of the Microsoft Encarta product. For a discussion of Discrete Global Grid Systems featuring hexagonal and other grids (including diamond-shaped), the paper of Sahr et al. (2003)[2] is recommended reading. In general, triangular and hexagonal grids are constructed so as to better approach the goals of equal-area (or nearly so) plus more seamless coverage across the poles, which tends to be a problem area for square or rectangular grids since in these cases, the cell width diminishes to nothing at the pole and those cells adjacent to the pole then become 3- rather than 4-sided. Criteria for optimal discrete global gridding have been proposed by both Goodchild and Kimerling[3] in which equal area cells are deemed of prime importance. Quadtrees are a specialised form of grid in which the resolution of the grid is varied according to the nature and/or complexity of the data to be fitted, across the 2-d space, and are considered separately under that heading. Polar grids utilize the polar coordinate system. In polar grids, intervals of a prescribed radius (circles) that are divided into sectors of a certain angle. Coordinates are given as the radius and angle from the center of the grid
Grid (pole).
145
Other uses
The individual cells of a grid system can also be useful as units of aggregation, for example as a precursor to data analysis, presentation, mapping, etc. For some applications (e.g. statistical analysis), equal-area cells may be preferred, although for others this may not be a prime consideration. In computer science, one often needs to find all the cells a ray passes through in a grid (for ray tracing or collision detection); this is called grid traversal, as sketched below.
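A hedged 2-D sketch of grid traversal in Python (an addition in the spirit of the Amanatides & Woo algorithm; names and the example segment are illustrative):

    import math

    def grid_traverse(start, end, cell=1.0):
        """Cells of a uniform 2-D grid visited by the segment start->end,
        stepping across one cell boundary at a time (a DDA)."""
        x, y = int(start[0] // cell), int(start[1] // cell)
        x_end, y_end = int(end[0] // cell), int(end[1] // cell)
        dx, dy = end[0] - start[0], end[1] - start[1]
        step_x = 1 if dx > 0 else -1
        step_y = 1 if dy > 0 else -1
        # Parametric distance to the next vertical/horizontal cell boundary.
        t_max_x = ((x + (step_x > 0)) * cell - start[0]) / dx if dx else math.inf
        t_max_y = ((y + (step_y > 0)) * cell - start[1]) / dy if dy else math.inf
        t_dx = cell / abs(dx) if dx else math.inf
        t_dy = cell / abs(dy) if dy else math.inf
        cells = [(x, y)]
        while (x, y) != (x_end, y_end):
            if t_max_x < t_max_y:
                t_max_x += t_dx
                x += step_x
            else:
                t_max_y += t_dy
                y += step_y
            cells.append((x, y))
        return cells

    print(grid_traverse((0.5, 0.5), (2.5, 1.5)))  # [(0,0), (1,0), (1,1), (2,1)]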
References
[1] http://www.spatial-effects.com/SE-papers1.html
[2] Kevin Sahr, Denis White, and A. Jon Kimerling. 2003. Geodesic Discrete Global Grid Systems. Cartography and Geographic Information Science, 30(2), 121-134. (http://www.sou.edu/cs/sahr/dgg/pubs/gdggs03.pdf)
[3] Criteria and Measures for the Comparison of Global Geocoding Systems, Keith C. Clarke, University of California (http://www.ncgia.ucsb.edu/globalgrids-book/comparison)
[4] Rigaux, P., Scholl, M., and Voisard, A. 2002. Spatial Databases - with Application to GIS. Morgan Kaufmann, San Francisco, 410pp.
Indexing the Sky - Clive Page (http://www.star.le.ac.uk/~cgp/ag/skyindex.html) - Grid indices for astronomy
External links
Grid Traversal implementation details and applet demonstration (http://www.gamerendering.com/2009/07/20/grid-traversal/)
PYXIS Discrete Global Grid System using the ISEA3H Grid (http://www.pyxisinnovation.com/pyxwiki/index.php?title=How_PYXIS_Works)
Octree
An octree is a tree data structure in which each internal node has exactly eight children. Octrees are most often used to partition a three dimensional space by recursively subdividing it into eight octants. Octrees are the three-dimensional analog of quadtrees. The name is formed from oct + tree, but note that it is normally written "octree" with only one "t". Octrees are often used in 3D graphics and 3D game engines.
Left: Recursive subdivision of a cube into octants. Right: The corresponding octree.
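To make the recursive subdivision concrete, here is a minimal Python point-octree sketch (an addition, not from the original article; the class design and the capacity threshold are illustrative):

    class Octree:
        """Minimal point octree: each node covers a cube (center, half-size)
        and splits into eight children once it holds too many points."""

        def __init__(self, center, half, capacity=4):
            self.center, self.half, self.capacity = center, half, capacity
            self.points, self.children = [], None

        def _octant(self, p):
            # Index 0..7 built from one "above center?" bit per axis.
            return sum((p[i] > self.center[i]) << i for i in range(3))

        def insert(self, p):
            if self.children is not None:
                self.children[self._octant(p)].insert(p)
                return
            self.points.append(p)
            if len(self.points) > self.capacity:
                h = self.half / 2.0
                self.children = [
                    Octree(tuple(self.center[i] + (h if (o >> i) & 1 else -h)
                                 for i in range(3)), h, self.capacity)
                    for o in range(8)
                ]
                pts, self.points = self.points, []
                for q in pts:
                    self.children[self._octant(q)].insert(q)

    tree = Octree((0.0, 0.0, 0.0), 1.0)
    for p in [(0.1, 0.2, 0.3), (-0.5, 0.4, -0.2), (0.9, -0.9, 0.1)]:
        tree.insert(p)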
History
The use of octrees for 3D computer graphics was pioneered by Donald Meagher at Rensselaer Polytechnic Institute, described in a 1980 report "Octree Encoding: A New Technique for the Representation, Manipulation and Display of Arbitrary 3-D Objects by Computer",[1] for which he holds a 1995 patent (with a 1984 priority date) "High-speed image generation of complex solid objects using octree encoding" [2]
External links
Octree Quantization in Microsoft Systems Journal (http://www.microsoft.com/msj/archive/S3F1.aspx)
Color Quantization using Octrees in Dr. Dobb's (http://www.ddj.com/184409805)
Color Quantization using Octrees in Dr. Dobb's Source Code (ftp://ftp.drdobbs.com/sourcecode/ddj/1996/9601.zip)
Octree Color Quantization Overview (http://web.cs.wpi.edu/~matt/courses/cs563/talks/color_quant/CQoctree.html)
Parallel implementation of octtree generation algorithm, P. Sojan Lal, A. Unnikrishnan, K. Poulose Jacob, ICIP 1997, IEEE Digital Library (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=727419)
Generation of Octrees from Raster Scan with Reduced Information Loss, P. Sojan Lal, A. Unnikrishnan, K. Poulose Jacob, IASTED International Conference VIIP 2001 (http://dblp.uni-trier.de/db/conf/viip/viip2001.html#LalUJ01) (http://www.actapress.com/catalogue2009/proc_series13.html#viip2001)
C++ implementation (GPL license) (http://nomis80.org/code/octree.html)
Parallel Octrees for Finite Element Applications (http://sc07.supercomputing.org/schedule/pdf/pap117.pdf)
Cube 2: Sauerbraten - a game written in the octree-heavy Cube 2 engine (http://www.sauerbraten.org/)
Ogre - a 3D object-oriented graphics rendering engine with an octree scene manager implementation (MIT license) (http://www.ogre3d.org)
Dendro: parallel multigrid for octree meshes (MPI/C++ implementation) (http://www.cc.gatech.edu/csela/dendro)
Video: Use of an octree in state estimation (http://www.youtube.com/watch?v=Jw4VAgcWruY)
Global Illumination
Global illumination
Rendering without global illumination. Areas that lie outside of the ceiling lamp's direct light lack definition. For example, the lamp's housing appears completely uniform. Without the ambient light added into the render, it would appear uniformly black.
Rendering with global illumination. Light is reflected by surfaces, and colored light transfers from one surface to another. Notice how color from the red wall and green wall (not visible) reflects onto other surfaces in the scene. Also notable is the caustic projected onto the red wall from light passing through the glass sphere.

Global illumination is a general name for a group of algorithms used in 3D computer graphics that are meant to add more realistic lighting to 3D scenes. Such algorithms take into account not only the light which comes directly from a light source (direct illumination), but also subsequent cases in which light rays from the same source are reflected by other surfaces in the scene, whether reflective or not (indirect illumination). Theoretically, reflections, refractions, and shadows are all examples of global illumination, because when simulating them, one object affects the rendering of another object (as opposed to an object being affected only by a direct light). In practice, however, only the simulation of diffuse inter-reflection or caustics is called global illumination.
Images rendered using global illumination algorithms often appear more photorealistic than images rendered using only direct illumination algorithms. However, such images are computationally more expensive and consequently much slower to generate. One common approach is to compute the global illumination of a scene and store that information with the geometry, e.g. radiosity. The stored data can then be used to generate images from different viewpoints, producing walkthroughs of a scene without having to go through expensive lighting calculations repeatedly.

Radiosity, ray tracing, beam tracing, cone tracing, path tracing, Metropolis light transport, ambient occlusion, photon mapping, and image-based lighting are examples of algorithms used in global illumination, some of which may be used together to yield results that are not fast, but accurate. These algorithms model diffuse inter-reflection, which is a very important part of global illumination; most of them (excluding radiosity) also model specular reflection, which makes them more accurate algorithms to solve the lighting equation and provide a more realistically illuminated scene. The algorithms used to calculate the distribution of light energy between surfaces of a scene are closely related to heat transfer simulations performed using finite-element methods in engineering design.

In real-time 3D graphics, the diffuse inter-reflection component of global illumination is sometimes approximated by an "ambient" term in the lighting equation, which is also called "ambient lighting" or "ambient color" in 3D software packages. Though this method of approximation (also known as a "cheat", because it is not really a global illumination method) is easy to perform computationally, when used alone it does not provide an adequately realistic effect. Ambient lighting is known to "flatten" shadows in 3D scenes, making the overall visual effect more bland. However, used properly, ambient lighting can be an efficient way to make up for a lack of processing power.
Procedure
Increasingly, specialized algorithms that can effectively simulate global illumination are used in 3D programs. These algorithms are numerical approximations to the rendering equation. Well-known algorithms for computing global illumination include path tracing, photon mapping and radiosity. The following approaches can be distinguished here:
- Inversion: not applied in practice
- Expansion: bi-directional approach: photon mapping + distributed ray tracing, bi-directional path tracing, Metropolis light transport
- Iteration: radiosity

In light path notation, global illumination corresponds to paths of the type L(D|S)*E.
Image-based lighting
Another way to simulate real global illumination is the use of high dynamic range images (HDRIs), also known as environment maps, which encircle the scene and illuminate it. This process is known as image-based lighting.
List of methods
Method                           Description/Notes
Ray tracing                      Several enhanced variants exist for solving problems related to sampling, aliasing and soft shadows: distributed ray tracing, cone tracing, beam tracing.
Path tracing                     Unbiased; variant: bi-directional path tracing.
Photon mapping                   Consistent[1]; enhanced variants: progressive photon mapping, stochastic progressive photon mapping.
Lightcuts                        Enhanced variants: multidimensional lightcuts, bidirectional lightcuts.
Point-based global illumination  Extensively used in movie animations.[2][3]
Radiosity                        Finite element method, very good for precomputations.
Metropolis light transport       Builds upon bi-directional path tracing; unbiased.
References
[1] http://www.luxrender.net/wiki/SPPM
[2] http://graphics.pixar.com/library/PointBasedGlobalIlluminationForMovieProduction/paper.pdf
[3] http://www.karstendaemen.com/thesis/files/intro_pbgi.pdf
External links
- SSRT (http://www.nirenstein.com/e107/page.php?11): C++ source code for a Monte Carlo path tracer (supporting GI), written with ease of understanding in mind.
- Video demonstrating global illumination and the ambient color effect (http://www.archive.org/details/MarcC_AoI-Global_Illumination)
- Real-time GI demos (http://realtimeradiosity.com/demos): survey of practical real-time GI techniques as a list of executable demos.
- Global Illumination Compendium (http://www.cs.kuleuven.be/~phil/GI/): an effort to bring together most of the useful formulas and equations for global illumination algorithms in computer graphics.
- GI Tutorial (http://www.youtube.com/watch?v=K5a-FqHz3o0): video tutorial on faking global illumination within 3D Studio Max, by Jason Donati.
Rendering equation
In computer graphics, the rendering equation is an integral equation in which the equilibrium radiance leaving a point is given as the sum of emitted plus reflected radiance under a geometric optics approximation. It was simultaneously introduced into computer graphics by David Immel et al.[1] and James Kajiya[2] in 1986. The various realistic rendering techniques in computer graphics attempt to solve this equation.
The physical basis for the rendering equation is the law of conservation of energy. Assuming that L denotes radiance, we have that at each particular position and direction, the outgoing light (Lo) is the sum of the emitted light (Le) and the reflected light. The reflected light itself is the sum of the incoming light (Li) from all directions, multiplied by the surface reflection and cosine of the incident angle.
The rendering equation describes the total amount of light emitted from a point x along a particular viewing direction, given a function for incoming light and a BRDF.
Equation form
The rendering equation may be written in the form

$$L_{\mathrm{o}}(\mathbf{x},\,\omega_{\mathrm{o}},\,\lambda,\,t) \;=\; L_{\mathrm{e}}(\mathbf{x},\,\omega_{\mathrm{o}},\,\lambda,\,t) \;+\; \int_{\Omega} f_{\mathrm{r}}(\mathbf{x},\,\omega_{\mathrm{i}},\,\omega_{\mathrm{o}},\,\lambda,\,t)\, L_{\mathrm{i}}(\mathbf{x},\,\omega_{\mathrm{i}},\,\lambda,\,t)\, (\omega_{\mathrm{i}} \cdot \mathbf{n})\, \mathrm{d}\omega_{\mathrm{i}}$$

where:
- λ is a particular wavelength of light
- t is time
- x is the location in space
- ω_o is the direction of the outgoing light
- ω_i is the negative direction of the incoming light
- L_o(x, ω_o, λ, t) is the total spectral radiance of wavelength λ directed outward along direction ω_o at time t, from the position x
- L_e(x, ω_o, λ, t) is emitted spectral radiance
- Ω is the unit hemisphere centered around the surface normal n, containing all possible values for ω_i
- ∫_Ω … dω_i is an integral over Ω
- f_r(x, ω_i, ω_o, λ, t) is the bidirectional reflectance distribution function, the proportion of light reflected from ω_i to ω_o at position x, time t, and wavelength λ
- L_i(x, ω_i, λ, t) is the spectral radiance of wavelength λ coming inward toward x from direction ω_i at time t
- (ω_i · n) is the weakening factor of inward irradiance due to incident angle, as the light flux is smeared across a surface whose area is larger than the projected area perpendicular to the ray
Two noteworthy features are: its linearity (it is composed only of multiplications and additions), and its spatial homogeneity (it is the same in all positions and orientations). These mean a wide range of factorings and rearrangements of the equation are possible.
Note this equation's spectral and time dependence: L_o may be sampled at or integrated over sections of the visible spectrum to obtain, for example, a trichromatic color sample. A pixel value for a single frame in an animation may be obtained by fixing t; motion blur can be produced by averaging L_o over some given time interval (by integrating over the time interval and dividing by the length of the interval).[3]
Applications
Solving the rendering equation for any given scene is the primary challenge in realistic rendering. One approach to solving the equation is based on finite element methods, leading to the radiosity algorithm. Another approach using Monte Carlo methods has led to many different algorithms including path tracing, photon mapping, and Metropolis light transport, among others.
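All of these approaches ultimately estimate the same quantity. As a sketch (standard Monte Carlo notation, not tied to any one algorithm), the reflected term of the rendering equation can be estimated from a single random direction ω_k drawn with probability density p:

$$\int_{\Omega} f_{\mathrm{r}}(\mathbf{x},\,\omega_{\mathrm{i}},\,\omega_{\mathrm{o}})\, L_{\mathrm{i}}(\mathbf{x},\,\omega_{\mathrm{i}})\,(\omega_{\mathrm{i}} \cdot \mathbf{n})\,\mathrm{d}\omega_{\mathrm{i}} \;\approx\; \frac{f_{\mathrm{r}}(\mathbf{x},\,\omega_k,\,\omega_{\mathrm{o}})\, L_{\mathrm{i}}(\mathbf{x},\,\omega_k)\,(\omega_k \cdot \mathbf{n})}{p(\omega_k)}$$

Averaging many such samples converges to the integral; choosing p to resemble the integrand (importance sampling) reduces the variance of the estimate.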
Limitations
Although the equation is very general, it does not capture every aspect of light reflection. Some missing aspects include the following:
- Transmission, which occurs when light is transmitted through the surface, for example when it hits a glass object or a water surface.
- Subsurface scattering, where the spatial locations for incoming and departing light are different. Surfaces rendered without accounting for subsurface scattering may appear unnaturally opaque; however, it is not necessary to account for this if transmission is included in the equation, since that will effectively include light scattered under the surface as well.
- Polarization, where different light polarizations will sometimes have different reflection distributions, for example when light bounces off a water surface.
- Phosphorescence, which occurs when light or other electromagnetic radiation is absorbed at one moment in time and emitted at a later moment, usually with a longer wavelength (unless the absorbed electromagnetic radiation is very intense).
- Interference, where the wave properties of light are exhibited.
- Fluorescence, where the absorbed and emitted light have different wavelengths.
- Non-linear effects, where very intense light can raise an electron to an energy level higher than that attainable from a single photon (this can occur if the electron is hit by two photons at the same time), making emission of light with a higher frequency than that of the incident light possible.
- The relativistic Doppler effect, where light that bounces off an object moving at very high speed has its wavelength changed: if the object is moving toward the light, the light is blueshifted and the photons are packed more closely, so the photon flux is increased; if the object is moving away, the light is redshifted and the photons are packed more sparsely, so the photon flux is decreased.
For scenes that are either not composed of simple surfaces in a vacuum or for which the travel time for light is an important factor, researchers have generalized the rendering equation to produce a volume rendering equation[4] suitable for volume rendering and a transient rendering equation[5] for use with data from a time-of-flight camera.
Distributed ray tracing
More advanced effects are also possible using the same framework. For instance, depth of field can be achieved by distributing ray origins over the lens area. In an animated scene, motion blur can be simulated by distributing rays in time. Distributing rays in the spectrum allows for the rendering of dispersion effects, such as rainbows and prisms.

Mathematically, in order to evaluate the rendering equation, one must evaluate several integrals. Conventional ray tracing estimates these integrals by sampling the value of the integrand at a single point in the domain, which is clearly a very bad approximation. Distributed ray tracing samples the integrand at many randomly chosen points and averages the results to obtain a better approximation. It is essentially an application of the Monte Carlo method to 3D computer graphics, and for this reason is also called stochastic ray tracing.

The term distributed ray tracing also sometimes refers to the application of distributed computing techniques to ray tracing, but because of this ambiguity that is more properly called parallel ray tracing (in reference to parallel computing).
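As a sketch of the sample-averaging idea (the function names below, such as tracePixelSample, are illustrative rather than taken from any particular renderer), a distributed ray tracer replaces the single sample per pixel with an average over randomly jittered samples; jittering in time as well as in the pixel area yields motion blur from the same mechanism:

    #include <random>

    struct Color { float r = 0, g = 0, b = 0; };

    // Stand-in for the real scene query: radiance seen through the sample point
    // (x + dx, y + dy) at shutter time t. A real renderer would cast a ray here.
    Color tracePixelSample(int x, int y, float dx, float dy, float t) {
        (void)x; (void)y; (void)dx; (void)dy; (void)t;
        return {0.5f, 0.5f, 0.5f};
    }

    Color renderPixel(int x, int y, int numSamples, float shutterOpen, float shutterClose) {
        std::mt19937 rng(12345);
        std::uniform_real_distribution<float> uni(0.0f, 1.0f);
        Color sum;
        for (int k = 0; k < numSamples; ++k) {
            float dx = uni(rng), dy = uni(rng);                                // jitter within the pixel area
            float t  = shutterOpen + uni(rng) * (shutterClose - shutterOpen);  // jitter within the shutter interval
            Color c = tracePixelSample(x, y, dx, dy, t);
            sum.r += c.r; sum.g += c.g; sum.b += c.b;
        }
        sum.r /= numSamples; sum.g /= numSamples; sum.b /= numSamples;         // Monte Carlo average
        return sum;
    }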
External links
Stochastic rasterization [1]
References
[1] http://research.nvidia.com/publication/real-time-stochastic-rasterization-conventional-gpu-architectures
Monte Carlo method
Monte Carlo methods (or Monte Carlo experiments) are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results; that is, they run simulations many times over in order to estimate the probabilities of interest heuristically, much like playing and recording outcomes in a real casino, hence the name. They are often used in physical and mathematical problems and are most suited to situations where it is impossible to obtain a closed-form expression or infeasible to apply a deterministic algorithm. Monte Carlo methods are mainly used in three distinct classes of problem: optimization, numerical integration, and generation of samples from a probability distribution.

Monte Carlo methods are especially useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model). They are used to model phenomena with significant uncertainty in inputs, such as the calculation of risk in business. They are widely used in mathematics, for example to evaluate multidimensional definite integrals with complicated boundary conditions. Where Monte Carlo simulations have been applied in space exploration and oil exploration, their predictions of failures, cost overruns and schedule overruns are routinely better than human intuition or alternative "soft" methods.[1]

The modern version of the Monte Carlo method was invented in the late 1940s by Stanislaw Ulam, while he was working on nuclear weapons projects at the Los Alamos National Laboratory. It was named, by Nicholas Metropolis, after the Monte Carlo Casino, where Ulam's uncle often gambled. Immediately after Ulam's breakthrough, John von Neumann understood its importance and programmed the ENIAC computer to carry out Monte Carlo calculations.
Introduction
Monte Carlo methods vary, but tend to follow a particular pattern:
1. Define a domain of possible inputs.
2. Generate inputs randomly from a probability distribution over the domain.
3. Perform a deterministic computation on the inputs.
4. Aggregate the results.
For example, consider a circle inscribed in a unit square. Given that the circle and the square have a ratio of areas that is π/4, the value of π can be approximated using a Monte Carlo method:
1. Draw a square on the ground, then inscribe a circle within it.
2. Uniformly scatter some objects of uniform size (grains of rice or sand) over the square.
3. Count the number of objects inside the circle and the total number of objects.
4. The ratio of the two counts is an estimate of the ratio of the two areas, which is π/4. Multiply the result by 4 to estimate π.
Monte Carlo method applied to approximating the value of π. After placing 30000 random points, the estimate for π is within 0.07% of the actual value. This happens with an approximate probability of 20%.
In this procedure the domain of inputs is the square that circumscribes our circle. We generate random inputs by scattering grains over the square then perform a computation on each input (test whether it falls within the circle). Finally, we aggregate the results to obtain our final result, the approximation of . If grains are purposely dropped into only the center of the circle, they are not uniformly distributed, so our approximation is poor. Second, there should be a large number of inputs. The approximation is generally poor if only a few grains are randomly dropped into the whole square. On average, the approximation improves as more grains are dropped.
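A minimal sketch of the procedure just described, drawing points in the unit square instead of scattering grains (the seed and sample count are arbitrary):

    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> uni(0.0, 1.0);

        const int n = 1000000;   // number of random points
        int inside = 0;
        for (int i = 0; i < n; ++i) {
            double x = uni(rng), y = uni(rng);     // random point in the unit square
            // Testing against a quarter circle of radius 1 gives the same
            // area ratio, pi/4, as the inscribed circle described above.
            if (x * x + y * y < 1.0) ++inside;
        }
        std::printf("pi is approximately %f\n", 4.0 * inside / n);
    }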
History
Before the Monte Carlo method was developed, simulations tested a previously understood deterministic problem, and statistical sampling was used to estimate uncertainties in the simulations. Monte Carlo simulations invert this approach, solving deterministic problems using a probabilistic analog (see simulated annealing). An early variant of the Monte Carlo method can be seen in the Buffon's needle experiment, in which π can be estimated by dropping needles on a floor made of parallel strips of wood. In the 1930s, Enrico Fermi first experimented with the Monte Carlo method while studying neutron diffusion, but did not publish anything on it.

In 1946, physicists at Los Alamos Scientific Laboratory were investigating radiation shielding and the distance that neutrons would likely travel through various materials. Despite having most of the necessary data, such as the average distance a neutron would travel in a substance before it collided with an atomic nucleus, and how much energy the neutron was likely to give off following a collision, the Los Alamos physicists were unable to solve the problem using conventional, deterministic mathematical methods. Stanislaw Ulam had the idea of using random experiments. He recounts his inspiration as follows:

"The first thoughts and attempts I made to practice [the Monte Carlo Method] were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than "abstract thinking" might not be to lay it out say one hundred times and simply observe and count the number of successful plays. This was already possible to envisage with the beginning of the new era of fast computers, and I immediately thought of problems of neutron diffusion and other questions of mathematical physics, and more generally how to change processes described by certain differential equations into an equivalent form interpretable as a succession of random operations. Later [in 1946], I described the idea to John von Neumann, and we began to plan actual calculations." (Stanislaw Ulam[2])

Being secret, the work of von Neumann and Ulam required a code name. Von Neumann chose the name Monte Carlo. The name refers to the Monte Carlo Casino in Monaco where Ulam's uncle would borrow money to gamble.[3][4] Using lists of "truly random" random numbers was extremely slow, but von Neumann developed a way to calculate pseudorandom numbers, using the middle-square method. Though this method has been criticized as crude, von Neumann was aware of this: he justified it as being faster than any other method at his disposal, and also noted that when it went awry it did so obviously, unlike methods that could be subtly incorrect.

Monte Carlo methods were central to the simulations required for the Manhattan Project, though severely limited by the computational tools of the time. In the 1950s they were used at Los Alamos for early work relating to the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The Rand Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and the methods began to find wide application in many different fields. The use of Monte Carlo methods requires large amounts of random numbers, and it was their use that spurred the development of pseudorandom number generators, which were far quicker to use than the tables of random numbers that had been previously used for statistical sampling.
Definitions
There is no consensus on how Monte Carlo should be defined. For example, Ripley defines most probabilistic modeling as stochastic simulation, with Monte Carlo being reserved for Monte Carlo integration and Monte Carlo statistical tests. Sawilowsky distinguishes between a simulation, a Monte Carlo method, and a Monte Carlo simulation: a simulation is a fictitious representation of reality, a Monte Carlo method is a technique that can be used to solve a mathematical or statistical problem, and a Monte Carlo simulation uses repeated sampling to determine the properties of some phenomenon (or behavior). Examples:
- Simulation: Drawing one pseudo-random uniform variable from the interval (0,1] can be used to simulate the tossing of a coin: if the value is less than or equal to 0.50 designate the outcome as heads, but if the value is greater than 0.50 designate the outcome as tails. This is a simulation, but not a Monte Carlo simulation.
- Monte Carlo method: The area of an irregular figure inscribed in a unit square can be determined by throwing darts at the square and computing the ratio of hits within the irregular figure to the total number of darts thrown. This is a Monte Carlo method of determining area, but not a simulation.
- Monte Carlo simulation: Drawing a large number of pseudo-random uniform variables from the interval (0,1], and assigning values less than or equal to 0.50 as heads and greater than 0.50 as tails, is a Monte Carlo simulation of the behavior of repeatedly tossing a coin.
Kalos and Whitlock point out that such distinctions are not always easy to maintain. For example, the emission of radiation from atoms is a natural stochastic process. It can be simulated directly, or its average behavior can be described by stochastic equations that can themselves be solved using Monte Carlo methods. "Indeed, the same computer code can be viewed simultaneously as a 'natural simulation' or as a solution of the equations by natural sampling."
Applications
Monte Carlo methods are especially useful for simulating phenomena with significant uncertainty in inputs and systems with a large number of coupled degrees of freedom. Areas of application include:
Physical sciences
Monte Carlo methods are very important in computational physics, physical chemistry, and related applied fields, and have diverse applications from complicated quantum chromodynamics calculations to designing heat shields and aerodynamic forms. In statistical physics, Monte Carlo molecular modeling is an alternative to computational molecular dynamics, and Monte Carlo methods are used to compute statistical field theories of simple particle and polymer systems.[8] Quantum Monte Carlo methods solve the many-body problem for quantum systems. In experimental particle physics, Monte Carlo methods are used for designing detectors, understanding their behavior and comparing experimental data to theory. In astrophysics, they are used in such diverse manners as to model both the evolution of galaxies[9] and the transmission of microwave radiation through a rough planetary surface.[10] Monte Carlo methods are also used in the ensemble models that form the basis of modern weather forecasting.
Engineering
Monte Carlo methods are widely used in engineering for sensitivity analysis and quantitative probabilistic analysis in process design. The need arises from the interactive, co-linear and non-linear behavior of typical process simulations. For example:
- In microelectronics engineering, Monte Carlo methods are applied to analyze correlated and uncorrelated variations in analog and digital integrated circuits.
- In geostatistics and geometallurgy, Monte Carlo methods underpin the design of mineral processing flowsheets and contribute to quantitative risk analysis.
- In wind energy yield analysis, the predicted energy output of a wind farm during its lifetime is calculated at different levels of uncertainty (P90, P50, etc.).
- Impacts of pollution are simulated, and diesel compared with petrol.
- In autonomous robotics, Monte Carlo localization can determine the position of a robot. It is often applied to stochastic filters such as the Kalman filter or particle filter that form the heart of the SLAM (simultaneous localization and mapping) algorithm.
- In aerospace engineering, Monte Carlo methods are used to ensure that multiple parts of an assembly will fit into an engine component.
Computational biology
Monte Carlo methods are used in computational biology, such as for Bayesian inference in phylogeny. Biological systems such as proteins,[11] membranes,[12] and images of cancer[13] are being studied by means of computer simulations. The systems can be studied in coarse-grained or ab initio frameworks depending on the desired accuracy. Computer simulations allow us to monitor the local environment of a particular molecule, for instance to see whether some chemical reaction is happening. We can also conduct thought experiments when the physical experiments are not feasible, for instance breaking bonds, introducing impurities at specific sites, changing the local or global structure, or introducing external fields.
Computer graphics
Path Tracing, occasionally referred to as Monte Carlo Ray Tracing, renders a 3D scene by randomly tracing samples of possible light paths. Repeated sampling of any given pixel will eventually cause the average of the samples to converge on the correct solution of the rendering equation, making it one of the most physically accurate 3D graphics rendering methods in existence.
Applied statistics
In applied statistics, Monte Carlo methods are generally used for two purposes:
1. To compare competing statistics for small samples under realistic data conditions. Although Type I error and power properties of statistics can be calculated for data drawn from classical theoretical distributions (e.g., normal curve, Cauchy distribution) under asymptotic conditions (i.e., infinite sample size and infinitesimally small treatment effect), real data often do not have such distributions.[14]
2. To provide implementations of hypothesis tests that are more efficient than exact tests such as permutation tests (which are often impossible to compute) while being more accurate than critical values for asymptotic distributions.
Monte Carlo methods are also a compromise between approximate randomization tests and permutation tests. An approximate randomization test is based on a specified subset of all permutations (which entails potentially enormous housekeeping of which permutations have been considered). The Monte Carlo approach is based on a specified number of randomly drawn permutations, exchanging a minor loss in precision (if a permutation is drawn twice or more often) for the efficiency of not having to track which permutations have already been selected.
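As a concrete illustration of the Monte Carlo approach to permutation tests, here is a minimal sketch; the data and group sizes are invented for the example, and a real test would use the study's own statistic:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <numeric>
    #include <random>
    #include <vector>

    // Difference of means between the first nA values and the rest.
    double meanDiff(const std::vector<double>& v, size_t nA) {
        double a = std::accumulate(v.begin(), v.begin() + nA, 0.0) / nA;
        double b = std::accumulate(v.begin() + nA, v.end(), 0.0) / (v.size() - nA);
        return a - b;
    }

    int main() {
        std::vector<double> data = {4.1, 5.0, 6.2, 5.5, 3.0, 3.4, 2.8, 3.9}; // group A then group B
        const size_t nA = 4;
        const double observed = meanDiff(data, nA);

        std::mt19937 rng(7);
        const int trials = 100000;       // random permutations instead of all of them
        int asExtreme = 0;
        for (int t = 0; t < trials; ++t) {
            std::shuffle(data.begin(), data.end(), rng);   // random relabelling
            if (std::abs(meanDiff(data, nA)) >= std::abs(observed)) ++asExtreme;
        }
        // Fraction of random permutations at least as extreme as the observed split.
        std::printf("estimated p-value: %f\n", double(asExtreme) / trials);
    }

Nothing here tracks which permutations have already been drawn; occasionally re-drawing the same permutation costs a little precision but saves all of that bookkeeping.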
The net effect, over the course of many simulated games, is that the value of a node representing a move will go up or down, hopefully corresponding to whether or not that node represents a good move. Monte Carlo Tree Search has been used successfully to play games such as Go,[17] Tantrix,[18] Battleship,[19] Havannah,[20] and Arimaa.[21]
Telecommunications
When planning a wireless network, the design must be proved to work for a wide variety of scenarios that depend mainly on the number of users, their locations and the services they want to use. Monte Carlo methods are typically used to generate these users and their states. The network performance is then evaluated and, if the results are not satisfactory, the network design goes through an optimization process.
Use in mathematics
In general, Monte Carlo methods are used in mathematics to solve various problems by generating suitable random numbers and observing that fraction of the numbers that obeys some property or properties. The method is useful for obtaining numerical solutions to problems too complicated to solve analytically. The most common application of the Monte Carlo method is Monte Carlo integration.
Integration
Deterministic numerical integration algorithms work well in a small number of dimensions, but encounter two problems when the functions have many variables. First, the number of function evaluations needed increases rapidly with the number of dimensions. For example, if 10 evaluations provide adequate accuracy in one dimension, then 10^100 points are needed for 100 dimensions, far too many to be computed. This is called the curse of dimensionality. Second, the boundary of a multidimensional region may be very complicated, so it may not be feasible to reduce the problem to a series of nested one-dimensional integrals. 100 dimensions is by no means unusual, since in many physical problems a "dimension" is equivalent to a degree of freedom.

Monte Carlo methods provide a way out of this exponential increase in computation time. As long as the function in question is reasonably well-behaved, it can be estimated by randomly selecting points in 100-dimensional space and taking some kind of average of the function values at these points. By the central limit theorem, this method displays 1/√N convergence; i.e., quadrupling the number of sampled points halves the error, regardless of the number of dimensions.

A refinement of this method, known as importance sampling in statistics, involves sampling the points randomly, but more frequently where the integrand is large. To do this precisely one would have to already know the integral, but one can approximate the integral by an integral of a similar function, or use adaptive routines such as stratified sampling, recursive stratified sampling, adaptive umbrella sampling[23][24] or the VEGAS algorithm.
Monte-Carlo integration works by comparing random points with the value of the function
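A minimal sketch of plain Monte Carlo integration over the unit hypercube, with an arbitrary test integrand whose exact value is known; by the argument above the error shrinks as 1/√N whatever the dimension:

    #include <cmath>
    #include <cstdio>
    #include <random>

    int main() {
        const int d = 10;            // dimension of the integration domain [0,1]^d
        const int n = 200000;        // number of sample points
        std::mt19937 rng(1);
        std::uniform_real_distribution<double> uni(0.0, 1.0);

        double sum = 0.0;
        for (int i = 0; i < n; ++i) {
            // f(x) = product of cos(x_j); its exact integral over [0,1]^d is sin(1)^d.
            double f = 1.0;
            for (int j = 0; j < d; ++j) f *= std::cos(uni(rng));
            sum += f;
        }
        double estimate = sum / n;   // average of f over uniform samples estimates the integral
        std::printf("estimate %.6f, exact %.6f\n", estimate, std::pow(std::sin(1.0), d));
    }

Note that the inner loop costs only d evaluations per sample, so the total work grows linearly, not exponentially, with the dimension.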
A similar approach, the quasi-Monte Carlo method, uses low-discrepancy sequences. These sequences "fill" the area better and sample the most important points more frequently, so quasi-Monte Carlo methods can often converge on the integral more quickly. Another class of methods for sampling points in a volume is to simulate random walks over it (Markov chain Monte Carlo). Such methods include the Metropolis-Hastings algorithm, Gibbs sampling and the Wang and Landau algorithm.
Inverse problems
Probabilistic formulation of inverse problems leads to the definition of a probability distribution in the model space. This probability distribution combines prior information with new information obtained by measuring some observable parameters (data). As, in the general case, the theory linking data with model parameters is nonlinear, the posterior probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.). When analyzing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as we normally also wish to have information on the resolution power of the data. In the general case we may have a large number of model parameters, and an inspection of the marginal probability densities of interest may be impractical, or even useless. But it is possible to pseudorandomly generate a large collection of models according to the posterior probability distribution and to analyze and display the models in such a way that information on the relative likelihoods of model properties is conveyed to the spectator. This can be accomplished by means of an efficient Monte Carlo method, even in cases where no explicit formula for the a priori distribution is available. The best-known importance sampling method, the Metropolis algorithm, can be generalized, and this gives a method that allows analysis of (possibly highly nonlinear) inverse problems with complex a priori information and data with an arbitrary noise distribution.[25][26]
Computational mathematics
Monte Carlo methods are useful in many areas of computational mathematics, where a "lucky choice" can find the correct result. A classic example is Rabin's algorithm for primality testing: for any n that is not prime, a random x has at least a 75% chance of proving that n is not prime. Hence, if n is not prime, but x says that it might be, we have observed at most a 1-in-4 event. If 10 different random x say that "n is probably prime" when it is not, we have observed a one-in-a-million event. In general a Monte Carlo algorithm of this kind produces one answer with a guarantee (n is composite, and x proves it so) and another answer without a guarantee, but with a bound on how often the unguaranteed answer is wrong: in this case, at most 25% of the time per trial. See also Las Vegas algorithm for a related, but different, idea.
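A minimal sketch of a Rabin-style randomized primality test; each random witness x that fails to prove n composite cuts the error probability by a factor of at least 4. To keep the modular arithmetic simple, the sketch restricts n to 32 bits so intermediate products fit in 64-bit integers:

    #include <cstdint>
    #include <cstdio>
    #include <random>

    uint64_t powMod(uint64_t b, uint64_t e, uint64_t m) {
        uint64_t r = 1; b %= m;
        for (; e > 0; e >>= 1) {
            if (e & 1) r = r * b % m;   // safe: r, b < m < 2^32
            b = b * b % m;
        }
        return r;
    }

    // One Miller-Rabin round: returns false if x proves n composite.
    bool witnessPasses(uint32_t n, uint64_t x) {
        uint32_t d = n - 1, s = 0;
        while (d % 2 == 0) { d /= 2; ++s; }    // n - 1 = d * 2^s with d odd
        uint64_t y = powMod(x, d, n);
        if (y == 1 || y == n - 1) return true;
        for (uint32_t i = 1; i < s; ++i) {
            y = y * y % n;
            if (y == n - 1) return true;
        }
        return false;                          // x is a witness: n is certainly composite
    }

    bool probablyPrime(uint32_t n, int rounds, std::mt19937& rng) {
        if (n < 4) return n == 2 || n == 3;
        if (n % 2 == 0) return false;
        std::uniform_int_distribution<uint32_t> pick(2, n - 2);
        for (int i = 0; i < rounds; ++i)
            if (!witnessPasses(n, pick(rng))) return false;  // guaranteed answer
        return true;   // unguaranteed: each round wrongly passes with chance <= 1/4
    }

    int main() {
        std::mt19937 rng(2024);
        for (uint32_t n : {561u, 104729u, 2147483647u})  // Carmichael number, prime, Mersenne prime
            std::printf("%u: %s\n", n, probablyPrime(n, 10, rng) ? "probably prime" : "composite");
    }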
Notes
[11] [15] [16] [17] [18] [19] [20] [21]:
http://sander.landofsand.com/publications/Monte-Carlo_Tree_Search_-_A_New_Framework_for_Game_AI.pdf
http://mcts.ai/about/index.html
http://link.springer.com/chapter/10.1007/978-3-540-87608-3_6
http://www.tantrix.com:4321/Tantrix/TRobot/MCTS%20Final%20Report.pdf
http://www0.cs.ucl.ac.uk/staff/D.Silver/web/Publications_files/pomcp.pdf
http://link.springer.com/chapter/10.1007/978-3-642-17928-0_10
http://www.arimaa.com/arimaa/papers/ThomasJakl/bc-thesis.pdf
References
Anderson, H. L. (1986). "Metropolis, Monte Carlo and the MANIAC" (http://library.lanl.gov/cgi-bin/getfile?00326886.pdf). Los Alamos Science 14: 96-108.
Baeurle, Stephan A. (2009). "Multiscale modeling of polymer materials using field-theoretic methodologies: a survey about recent developments". Journal of Mathematical Chemistry 46 (2): 363-426. doi:10.1007/s10910-008-9467-3.
Berg, Bernd A. (2004). Markov Chain Monte Carlo Simulations and Their Statistical Analysis (With Web-Based Fortran Code). Hackensack, NJ: World Scientific. ISBN 981-238-935-0.
Binder, Kurt (1995). The Monte Carlo Method in Condensed Matter Physics. New York: Springer. ISBN 0-387-54369-4.
Caflisch, R. E. (1998). Monte Carlo and Quasi-Monte Carlo Methods. Acta Numerica 7. Cambridge University Press. pp. 1-49.
Davenport, J. H. (1992). "Primality testing revisited". Proceedings ISSAC '92, International Symposium on Symbolic and Algebraic Computation: 123-129. doi:10.1145/143242.143290. ISBN 0-89791-489-9.
Doucet, Arnaud; de Freitas, Nando; Gordon, Neil (2001). Sequential Monte Carlo Methods in Practice. New York: Springer. ISBN 0-387-95146-6.
Eckhardt, Roger (1987). "Stan Ulam, John von Neumann, and the Monte Carlo method" (http://www.lanl.gov/history/admin/files/Stan_Ulam_John_von_Neumann_and_the_Monte_Carlo_Method.pdf). Los Alamos Science, Special Issue (15): 131-137.
Fishman, G. S. (1995). Monte Carlo: Concepts, Algorithms, and Applications. New York: Springer. ISBN 0-387-94527-X.
Forastero, C.; Zamora, L.; Guirado, D.; Lallena, A. (2010). "A Monte Carlo tool to simulate breast cancer screening programmes". Physics in Medicine and Biology 55 (17): 5213. doi:10.1088/0031-9155/55/17/021.
Golden, Leslie M. (1979). "The effect of surface roughness on the transmission of microwave radiation through a planetary surface". Icarus 38 (3): 451. doi:10.1016/0019-1035(79)90199-4.
Gould, Harvey; Tobochnik, Jan (1988). An Introduction to Computer Simulation Methods, Part 2, Applications to Physical Systems. Reading: Addison-Wesley. ISBN 0-201-16504-X.
Grinstead, Charles; Snell, J. Laurie (1997). Introduction to Probability. American Mathematical Society. pp. 10-11.
Hammersley, J. M.; Handscomb, D. C. (1975). Monte Carlo Methods. London: Methuen. ISBN 0-416-52340-4.
Hartmann, A. K. (2009). Practical Guide to Computer Simulations (http://www.worldscibooks.com/physics/6988.html). World Scientific. ISBN 978-981-283-415-7.
Hubbard, Douglas (2007). How to Measure Anything: Finding the Value of Intangibles in Business. John Wiley & Sons. p. 46.
Hubbard, Douglas (2009). The Failure of Risk Management: Why It's Broken and How to Fix It. John Wiley & Sons.
Kahneman, D.; Tversky, A. (1982). Judgement under Uncertainty: Heuristics and Biases. Cambridge University Press.
Kalos, Malvin H.; Whitlock, Paula A. (2008). Monte Carlo Methods. Wiley-VCH. ISBN 978-3-527-40760-6.
Kroese, D. P.; Taimre, T.; Botev, Z. I. (2011). Handbook of Monte Carlo Methods (http://www.montecarlohandbook.org). New York: John Wiley & Sons. p. 772. ISBN 0-470-17793-4.
MacGillivray, H. T.; Dodd, R. J. (1982). "Monte-Carlo simulations of galaxy systems" (http://www.springerlink.com/content/rp3g1q05j176r108/fulltext.pdf). Astrophysics and Space Science (Springer Netherlands) 86 (2).
MacKeown, P. Kevin (1997). Stochastic Simulation in Physics. New York: Springer. ISBN 981-3083-26-3.
Metropolis, N. (1987). "The beginning of the Monte Carlo method" (http://library.lanl.gov/la-pubs/00326866.pdf). Los Alamos Science (1987 Special Issue dedicated to Stanislaw Ulam): 125-130.
Metropolis, Nicholas; Rosenbluth, Arianna W.; Rosenbluth, Marshall N.; Teller, Augusta H.; Teller, Edward (1953). "Equation of state calculations by fast computing machines". Journal of Chemical Physics 21 (6): 1087. doi:10.1063/1.1699114.
Metropolis, N.; Ulam, S. (1949). "The Monte Carlo method". Journal of the American Statistical Association 44 (247): 335-341. doi:10.2307/2280232. JSTOR 2280232. PMID 18139350.
Milik, M.; Skolnick, J. (Jan 1993). "Insertion of peptide chains into lipid membranes: an off-lattice Monte Carlo dynamics model". Proteins 15 (1): 10-25. doi:10.1002/prot.340150104. PMID 8451235.
Mosegaard, Klaus; Tarantola, Albert (1995). "Monte Carlo sampling of solutions to inverse problems". Journal of Geophysical Research 100 (B7): 12431-12447. doi:10.1029/94JB03097.
Ojeda, P.; Garcia, M.; Londono, A.; Chen, N. Y. (Feb 2009). "Monte Carlo simulations of proteins in cages: influence of confinement on the stability of intermediate states". Biophysical Journal (Biophysical Society) 96 (3): 1076-1082. doi:10.1529/biophysj.107.125369.
Int Panis, L.; De Nocker, L.; De Vlieger, I.; Torfs, R. (2001). "Trends and uncertainty in air pollution impacts and external costs of Belgian passenger car traffic". International Journal of Vehicle Design 27 (1-4): 183-194. doi:10.1504/IJVD.2001.001963.
Int Panis, L.; Rabl, A.; De Nocker, L.; Torfs, R. (2002). "Diesel or petrol? An environmental comparison hampered by uncertainty". In P. Sturm. Mitteilungen Institut für Verbrennungskraftmaschinen und Thermodynamik (Technische Universität Graz, Austria), Heft 81, Vol 1: 48-54.
Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (1996) [1986]. Numerical Recipes in Fortran 77: The Art of Scientific Computing. Fortran Numerical Recipes 1 (2nd ed.). Cambridge University Press. ISBN 0-521-43064-X.
Ripley, B. D. (1987). Stochastic Simulation. Wiley & Sons.
Robert, C. P.; Casella, G. (2004). Monte Carlo Statistical Methods (2nd ed.). New York: Springer. ISBN 0-387-21239-6.
Rubinstein, R. Y.; Kroese, D. P. (2007). Simulation and the Monte Carlo Method (2nd ed.). New York: John Wiley & Sons. ISBN 978-0-470-17793-8.
Savvides, Savvakis C. (1994). "Risk analysis in investment appraisal". Project Appraisal Journal 9 (1). doi:10.2139/ssrn.265905.
Sawilowsky, Shlomo S.; Fahoome, Gail C. (2003). Statistics via Monte Carlo Simulation with Fortran. Rochester Hills, MI: JMASM. ISBN 0-9740236-0-4.
Sawilowsky, Shlomo S. (2003). "You think you've got trivials?" (http://education.wayne.edu/jmasm/sawilowsky_effect_size_debate.pdf). Journal of Modern Applied Statistical Methods 2 (1): 218-225.
Silver, David; Veness, Joel (2010). "Monte-Carlo planning in large POMDPs" (http://books.nips.cc/papers/files/nips23/NIPS2010_0740.pdf). In Lafferty, J.; Williams, C. K. I.; Shawe-Taylor, J.; Zemel, R. S.; Culotta, A. Advances in Neural Information Processing Systems 23. Neural Information Processing Systems Foundation.
Szirmay-Kalos, László (2008). Monte Carlo Methods in Global Illumination: Photo-realistic Rendering with Randomization. VDM Verlag Dr. Mueller e.K. ISBN 978-3-8364-7919-6.
Tarantola, Albert (2005). Inverse Problem Theory (http://www.ipgp.jussieu.fr/~tarantola/Files/Professional/SIAM/index.html). Philadelphia: Society for Industrial and Applied Mathematics. ISBN 0-89871-572-5.
Vose, David (2008). Risk Analysis: A Quantitative Guide (3rd ed.). John Wiley & Sons.
External links
- Overview and reference list (http://mathworld.wolfram.com/MonteCarloMethod.html), MathWorld
- Café math: Monte Carlo Integration (http://www.cafemath.fr/mathblog/article.php?page=MonteCarlo.php): a blog article describing Monte Carlo integration (principle, hypothesis, confidence interval)
- Feynman-Kac models and particle Monte Carlo algorithms (http://www.math.u-bordeaux1.fr/~delmoral/simulinks.html): website on the applications of particle Monte Carlo methods in signal processing, rare event simulation, molecular dynamics, financial mathematics, optimal control, computational physics, and biology
- Introduction to Monte Carlo Methods (http://www.phy.ornl.gov/csep/CSEP/MC/MC.html), Computational Science Education Project
- The Basics of Monte Carlo Simulations (http://www.chem.unl.edu/zeng/joy/mclab/mcintro.html), University of Nebraska-Lincoln
- Introduction to Monte Carlo simulation (http://office.microsoft.com/en-us/excel-help/introduction-to-monte-carlo-simulation-HA010282777.aspx) (for Microsoft Excel), Wayne L. Winston
- Monte Carlo Simulation for MATLAB and Simulink (http://www.mathworks.com/discovery/monte-carlo-simulation.html)
- Monte Carlo Methods Overview and Concept (http://www.brighton-webs.co.uk/montecarlo/concept.htm), brighton-webs.co.uk
- Monte Carlo techniques applied in physics (http://personal-pages.ps.ic.ac.uk/~achremos/Applet1-page.htm)
- Monte Carlo Method Example (http://waqqasfarooq.com/waqqasfarooq/index.php?option=com_content&view=article&id=47:monte-carlo&catid=34:statistics&Itemid=53): a step-by-step guide to creating a Monte Carlo Excel spreadsheet
- Approximate And Double Check Probability Problems Using Monte Carlo method (http://orcik.net/programming/approximate-and-double-check-probability-problems-using-monte-carlo-method/) at Orcik Dot Net
Unbiased rendering
In computer graphics, unbiased rendering refers to a rendering technique that does not introduce any systematic error, or bias, into the radiance approximation. Because of this, unbiased techniques are often used to generate the reference image to which other rendering techniques are compared. Mathematically speaking, the expected value of the unbiased estimator will always be the correct value, for any number of samples. Error found in an unbiased rendering is due to variance, which manifests itself as high-frequency noise in the resultant image. Variance is reduced by 1/N and standard deviation by 1/√N for N samples, meaning that four times as many samples are needed to halve the error. This makes unbiased rendering techniques less attractive for real-time or interactive-rate applications. Conversely, an image produced by an unbiased renderer that appears smooth and noiseless is probabilistically correct.

A biased rendering method is not necessarily wrong, and it can still converge to the correct answer if the estimator is consistent. It does, however, introduce a certain bias error, usually in the form of a blur, in an effort to reduce the variance (high-frequency noise). It is important to note that an unbiased technique may not consider all possible paths. Path tracing cannot handle caustics generated from a point light source, as it is impossible to randomly generate the path that reflects directly into the point. Progressive photon mapping (PPM), a biased rendering technique, can handle caustics quite well. PPM is also provably consistent, meaning that as the number of samples goes to infinity, the bias error goes to zero, and the probability that the estimate is correct reaches one.

An example of an unbiased render using Indigo Renderer.

Unbiased rendering methods include:
- Path tracing
- Light tracing
- Bidirectional path tracing
- Metropolis light transport (and the derived energy redistribution path tracing)
- Stochastic progressive photon mapping[1]
Unbiased renderers
- Arnold
- Blender Cycles
- LuxRender
- Fryrender
- Indigo Renderer
- Maxwell Render
- Octane Render
- NOX renderer
- Thea Render
- Kerkythea (hybrid)
- mental ray (optional)
- V-Ray (optional)
References
[1] http://www.luxrender.net/wiki/SPPM
Bibliography
"fryrender F.A.Q." (http://randomcontrol.com/component/rcfaq/?faqid=1&id=3). RandomControl, SLU. Retrieved 2010-05-20. Mike Farnsworth. "Biased vs Unbiased Rendering" (http://renderspud.blogspot.com/2006/10/ biased-vs-unbiased-rendering.html). RenderSpud. Retrieved 2010-05-20. "How to choose rendering software" (http://www.3dworldmag.com/2010/01/15/ how_to_choose_rendering_software_part_2/).
Path tracing
Path tracing is a computer graphics method of rendering images of three-dimensional scenes such that the global illumination is faithful to reality. Fundamentally, the algorithm integrates over all the illuminance arriving at a single point on the surface of an object. This illuminance is then reduced by a surface reflectance function to determine how much of it will go towards the viewpoint camera. This integration procedure is repeated for every pixel in the output image. When combined with physically accurate models of surfaces, accurate models of real light sources (light bulbs), and optically correct cameras, path tracing can produce still images that are indistinguishable from photographs.

Path tracing naturally simulates many effects that have to be specifically added to other methods (conventional ray tracing or scanline rendering), such as soft shadows, depth of field, motion blur, caustics, ambient occlusion, and indirect lighting. Implementation of a renderer including these effects is correspondingly simpler. Due to its accuracy and unbiased nature, path tracing is used to generate reference images when testing the quality of other rendering algorithms. In order to get high-quality images from path tracing, a large number of rays must be traced to avoid visible noisy artifacts.
History
The rendering equation and its use in computer graphics was presented by James Kajiya in 1986.[1] Path tracing was introduced then as an algorithm to find a numerical solution to the integral of the rendering equation. A decade later, Lafortune suggested many refinements, including bidirectional path tracing.[2] Metropolis light transport, a method of perturbing previously found paths in order to increase performance for difficult scenes, was introduced in 1997 by Eric Veach and Leonidas J. Guibas.

More recently, CPUs and GPUs have become powerful enough to render images more quickly, causing more widespread interest in path tracing algorithms. Tim Purcell first presented a global illumination algorithm running on a GPU in 2002.[3] In February 2009 Austin Robison of Nvidia demonstrated the first commercial implementation of a path tracer running on a GPU,[4] and other implementations have followed, such as that of Vladimir Koylazov in August 2009.[5] This was aided by the maturing of GPGPU programming toolkits such as CUDA and OpenCL and GPU ray tracing SDKs such as OptiX.
Description
The rendering equation of Kajiya adheres to three particular principles of optics: the principle of global illumination, the principle of equivalence (reflected light is equivalent to emitted light), and the principle of direction (reflected light and scattered light have a direction).

In the real world, objects and surfaces are visible because they reflect light. This reflected light then illuminates other objects in turn. From that simple observation, two principles follow.

I. For a given indoor scene, every object in the room must contribute illumination to every other object.

II. There is no distinction to be made between illumination emitted from a light source and illumination reflected from a surface.

Invented in 1984, a rather different method called radiosity was faithful to both principles. However, radiosity equates the illuminance falling on a surface with the luminance that leaves the surface. This forced all surfaces to be Lambertian, or "perfectly diffuse". While radiosity received a lot of attention at its introduction, perfectly diffuse surfaces do not exist in the real world. The realization that illumination scattering throughout a scene must also scatter with a direction was the focus of research throughout the 1990s, since accounting for direction always exacted a price of steep increases in calculation times on desktop computers. Principle III follows.
III. The illumination coming from surfaces must scatter in a particular direction that is some function of the incoming direction of the arriving illumination and the outgoing direction being sampled.

Kajiya's equation is a complete summary of these three principles, and path tracing, which approximates a solution to the equation, remains faithful to them in its implementation. There are other principles of optics which are not the focus of Kajiya's equation, and which are therefore often difficult or incorrect to simulate with the algorithm. Path tracing is confounded by optical phenomena not contained in the three principles, for example:
- Bright, sharp caustics; radiance scales by the density of illuminance in space.
- Subsurface scattering; a violation of principle III above.
- Chromatic aberration, fluorescence, iridescence; light is a spectrum of frequencies.
The core of a path tracer is a recursive estimate of the rendering equation:

Color TracePath(Ray r, int depth) {
    if (depth == MaxDepth)
        return Black;                 // Bounced enough times.

    r.FindNearestObject();
    if (!r.hitSomething)
        return Black;                 // Nothing was hit.

    Material m = r.thingHit->material;
    Color emittance = m.emittance;

    // Pick a random direction from here and keep going.
    Ray newRay;
    newRay.origin = r.pointWhereObjWasHit;
    // This is NOT a cosine-weighted distribution!
    newRay.direction = RandomUnitVectorInHemisphereOf(r.normalWhereObjWasHit);

    // Compute the BRDF for this ray (assuming Lambertian reflection).
    // The factor of 2 comes from dividing the BRDF normalization 1/pi by the
    // pdf 1/(2*pi) of uniform hemisphere sampling.
    float cos_theta = DotProduct(newRay.direction, r.normalWhereObjWasHit);
    Color BRDF = 2 * m.reflectance * cos_theta;
    Color reflected = TracePath(newRay, depth + 1);

    // Apply the rendering equation here.
    return emittance + (BRDF * reflected);
}
All these samples must then be averaged to obtain the output color. Note that this method of always sampling a random ray in the normal's hemisphere only works well for perfectly diffuse surfaces. For other materials, one generally has to use importance sampling, i.e. probabilistically select a new ray according to the BRDF's distribution. For instance, a perfectly specular (mirror) material would not work with the method above, as the probability of the new ray being the correct reflected ray, which is the only ray through which any radiance will be reflected, is zero. In these situations, one must divide the reflectance by the probability density function of the sampling scheme, as per Monte Carlo integration (in the naive case above, the pdf of uniform hemisphere sampling is 1/(2π), which is where the factor of 2 in the code comes from).

There are other considerations to take into account to ensure conservation of energy. In particular, in the naive case, the reflectance of a diffuse BRDF must not exceed 1 or the object will reflect more light than it receives (this, however, depends on the sampling scheme used, and can be difficult to get right).
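As an illustration of importance sampling against the BRDF's distribution (a standard sketch, not code from any particular renderer): for a Lambertian surface one can sample directions with probability density cos θ / π, after which the cosine and π factors cancel in BRDF × cos θ / pdf and the per-bounce weight is simply the reflectance.

    #include <cmath>
    #include <random>

    struct Vec3 { float x, y, z; };

    // Build any orthonormal basis (t, b, n) around the normal n.
    void buildOnb(const Vec3& n, Vec3& t, Vec3& b) {
        if (std::fabs(n.x) > 0.5f) t = {n.y, -n.x, 0};
        else                       t = {0, n.z, -n.y};
        float len = std::sqrt(t.x*t.x + t.y*t.y + t.z*t.z);
        t = {t.x/len, t.y/len, t.z/len};
        b = {n.y*t.z - n.z*t.y, n.z*t.x - n.x*t.z, n.x*t.y - n.y*t.x};
    }

    // Sample a direction with pdf cos(theta)/pi around n (Malley's method:
    // pick a uniform point on the unit disk and project it up to the hemisphere).
    Vec3 sampleCosineHemisphere(const Vec3& n, std::mt19937& rng) {
        std::uniform_real_distribution<float> uni(0.0f, 1.0f);
        float r = std::sqrt(uni(rng)), phi = 6.2831853f * uni(rng);
        float x = r * std::cos(phi), y = r * std::sin(phi);
        float z = std::sqrt(std::fmax(0.0f, 1.0f - x*x - y*y));  // z = cos(theta)
        Vec3 t, b;
        buildOnb(n, t, b);
        return { x*t.x + y*b.x + z*n.x,
                 x*t.y + y*b.y + z*n.y,
                 x*t.z + y*b.z + z*n.z };
    }

    // With this scheme, BRDF * cos_theta / pdf = (rho/pi) * cos / (cos/pi) = rho,
    // so the Lambertian path weight reduces to the surface reflectance rho.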
Performance
A path tracer continuously samples pixels of an image. The image starts to become recognisable after only a few samples per pixel, perhaps 100. However, for the image to "converge" and reduce noise to acceptable levels usually takes around 5000 samples for most images, and many more for pathological cases. Noise is particularly a problem for animations, giving them a normally unwanted "film grain" quality of random speckling.

The central performance bottleneck in path tracing is the complex geometrical calculation of casting a ray. Importance sampling is a technique motivated to cast fewer rays through the scene while still converging correctly to the outgoing luminance at the surface point. This is done by casting more rays in directions in which the luminance would have been greater anyway. If the density of rays cast in certain directions matches the strength of contributions in those directions, the result is identical, but far fewer rays are actually cast. Importance sampling is used to match ray density to Lambert's cosine law, and also used to match BRDFs.

Metropolis light transport can result in a lower-noise image with fewer samples. This algorithm was created in order to get faster convergence in scenes in which the light must pass through odd corridors or small holes in order to reach the part of the scene that the camera is viewing. It has also shown promise in correctly rendering pathological situations with caustics. Instead of generating random paths, new sampling paths are created as slight mutations of existing ones. In this sense, the algorithm "remembers" the successful paths from light sources to the camera.
In real time
An example of an advanced path tracing engine capable of real-time graphics is Brigade[6] by Jacco Bikker. The first version of this highly optimized, game-oriented engine was released on January 26, 2012. It is the successor of the Arauna real-time ray tracing engine by the same author, and it requires Nvidia's CUDA architecture to run.
Notes
[1] http://en.wikipedia.org/wiki/Path_tracing#endnote_kajiya1986rendering
[2] http://en.wikipedia.org/wiki/Path_tracing#endnote_lafortune1996mathematical
[3] http://en.wikipedia.org/wiki/Path_tracing#endnote_purcell2002ray
[4] http://en.wikipedia.org/wiki/Path_tracing#endnote_robisonNVIRT
[5] http://en.wikipedia.org/wiki/Path_tracing#endnote_pathGPUimplementations
[6] http://igad.nhtv.nl/~bikker/
1. ^ Kajiya, J. T. (1986). "The rendering equation". Proceedings of the 13th annual conference on Computer graphics and interactive techniques. ACM. CiteSeerX: 10.1.1.63.1402 (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.63.1402).
2. ^ Lafortune, E. Mathematical Models and Monte Carlo Algorithms for Physically Based Rendering (http://www.graphics.cornell.edu/~eric/thesis/index.html) (PhD thesis), 1996.
3. ^ Purcell, T. J.; Buck, I.; Mark, W.; Hanrahan, P. "Ray Tracing on Programmable Graphics Hardware". Proc. SIGGRAPH 2002, 703-712. See also Purcell, T. Ray Tracing on a Stream Processor (http://graphics.stanford.edu/papers/tpurcell_thesis/) (PhD thesis), 2004.
4. ^ Robison, Austin. "Interactive Ray Tracing on the GPU and NVIRT Overview" (http://realtimerendering.com/downloads/NVIRT-Overview.pdf), slide 37, I3D 2009.
5. ^ Vray demo (http://www.youtube.com/watch?v=eRoSFNRQETg); other examples include Octane Render, Arion, and Luxrender.
6. ^ Veach, E.; Guibas, L. J. Metropolis Light Transport (http://graphics.stanford.edu/papers/metro/metro.pdf). In SIGGRAPH 97 (August 1997), pp. 65-76.
7. This "Introduction to Global Illumination" (http://www.thepolygoners.com/tutorials/GIIntro/GIIntro.htm) has some good example images, demonstrating the image noise, caustics and indirect lighting properties of images rendered with path tracing methods. It also discusses possible performance improvements in some detail.
8. SmallPt (http://www.kevinbeason.com/smallpt/) is an educational path tracer by Kevin Beason. It uses 99 lines of C++ (including scene description). This page has a good set of examples of noise resulting from this technique.
Radiosity
Radiosity is a global illumination algorithm used in 3D computer graphics rendering. Radiosity is an application of the finite element method to solving the rendering equation for scenes with surfaces that reflect light diffusely. Unlike rendering methods that use Monte Carlo algorithms (such as path tracing), which handle all types of light paths, typical radiosity methods only account for paths which leave a light source and are reflected diffusely some number of times (possibly zero) before hitting the eye; such paths are represented by the code "LD*E". Radiosity is a global illumination algorithm in the sense that the illumination arriving at the eye comes not just from the light sources, but from all the scene surfaces interacting with each other as well. Radiosity calculations are viewpoint independent, which increases the computations involved but makes them useful for all viewpoints.

Screenshot of a scene rendered with RRV[1] (a simple implementation of a radiosity renderer based on OpenGL), 79th iteration.

Radiosity methods were first developed in about 1950 in the engineering field of heat transfer. They were later refined specifically for application to the problem of rendering computer graphics in 1984 by researchers at Cornell University.[2] Notable commercial radiosity engines are Enlighten by Geomerics (used for games including Battlefield 3 and Need for Speed: The Run), 3D Studio Max, formZ, LightWave 3D and the Electric Image Animation System.
Visual characteristics
The inclusion of radiosity calculations in the rendering process often lends an added element of realism to the finished scene, because of the way it mimics real-world phenomena. Consider a simple room scene. The image on the left was rendered with a typical direct illumination renderer. There are three types of lighting in this scene which have been specifically chosen and placed by the artist in an attempt to create realistic lighting: spot lighting with shadows (placed outside the window to create the light shining on the floor), ambient lighting (without which any part of the room not lit directly by a light source would be totally dark), and omnidirectional lighting without shadows (to reduce the flatness of the ambient lighting).

Difference between standard direct illumination without shadow umbra, and radiosity with shadow umbra.
The image on the right was rendered using a radiosity algorithm. There is only one source of light: an image of the sky placed outside the window. The difference is marked. The room glows with light. Soft shadows are visible on the floor, and subtle lighting effects are noticeable around the room. Furthermore, the red color from the carpet has bled onto the grey walls, giving them a slightly warm appearance. None of these effects were specifically chosen or designed by the artist.
Mathematical formulation
The basic radiosity method has its basis in the theory of thermal radiation, since radiosity relies on computing the amount of light energy transferred among surfaces. In order to simplify computations, the method assumes that all scattering is perfectly diffuse. Surfaces are typically discretized into quadrilateral or triangular elements over which a piecewise polynomial function is defined. After this breakdown, the amount of light energy transfer can be computed by using the known reflectivity of the reflecting patch, combined with the view factor of the two patches. This dimensionless quantity is computed from the geometric orientation of two patches, and can be thought of as the fraction of the total possible emitting area of the first patch which is covered by the second patch.

More correctly, radiosity B is the energy per unit area leaving the patch surface per discrete time interval, and is the combination of emitted and reflected energy:

$$B(x)\,\mathrm{d}A \;=\; E(x)\,\mathrm{d}A \;+\; \rho(x)\,\mathrm{d}A \int_{S} B(x')\, \frac{\cos\theta_x \cos\theta_{x'}}{\pi r^2}\, \mathrm{Vis}(x, x')\, \mathrm{d}A'$$

where:
- B(x) dA is the total energy leaving a small area dA around a point x
- E(x) dA is the emitted energy
- ρ(x) is the reflectivity of the point, giving reflected energy per unit area by multiplying by the incident energy per unit area (the total energy which arrives from other patches)
- S denotes that the integration variable x' runs over all the surfaces in the scene
- r is the distance between x and x'
- θ_x and θ_x' are the angles between the line joining x and x' and the vectors normal to the surface at x and x' respectively
- Vis(x, x') is a visibility function, defined to be 1 if the two points x and x' are visible from each other, and 0 if they are not

If the surfaces are approximated by a finite number of planar patches, each of which is taken to have a constant radiosity B_i and reflectivity ρ_i, the above equation gives the discrete radiosity equation

$$B_i \;=\; E_i \;+\; \rho_i \sum_{j=1}^{n} F_{ij}\, B_j$$

where F_ij is the geometrical view factor for the radiation leaving patch j and hitting patch i. This equation can then be applied to each patch. The equation is monochromatic, so color radiosity rendering requires calculation for each of the required colors.
Solution methods
The equation can formally be solved as a matrix equation, to give the vector solution:

$$B \;=\; (I - \rho F)^{-1} E$$

where I is the identity matrix, ρ is the diagonal matrix of patch reflectivities, and F is the matrix of view factors.
This gives the full "infinite bounce" solution for B directly. However, the number of calculations to compute the matrix solution scales according to n^3, where n is the number of patches. This becomes prohibitive for realistically large values of n.

Instead, the equation can more readily be solved iteratively, by repeatedly applying the single-bounce update formula above. Formally, this is a solution of the matrix equation by Jacobi iteration. Because the reflectivities ρ_i are less than 1, this scheme converges quickly, typically requiring only a handful of iterations to produce a reasonable solution. Other standard iterative methods for matrix equation solutions can also be used, for example the Gauss-Seidel method, where updated values for each patch are used in the calculation as soon as they are computed, rather than all being updated synchronously at the end of each sweep.

The solution can also be tweaked to iterate over each of the sending elements in turn in its main outermost loop for each update, rather than each of the receiving patches. This is known as the shooting variant of the algorithm, as opposed to the gathering variant. Using the view factor reciprocity, A_i F_ij = A_j F_ji, the update equation can also be re-written in terms of the view factor F_ji seen by each sending patch A_j:

$$A_i B_i \;=\; A_i E_i \;+\; \rho_i \sum_{j=1}^{n} F_{ji}\, A_j B_j$$
The geometrical form factor (or "projected solid angle") F_ij can be obtained by projecting the element A_j onto the surface of a unit hemisphere, and then projecting that in turn onto a unit circle around the point of interest in the plane of A_i. The form factor is then equal to the proportion of the unit circle covered by this projection. Form factors obey the reciprocity relation A_i F_ij = A_j F_ji.
This is sometimes known as the "power" formulation, since it is now the total transmitted power of each element that is being updated, rather than its radiosity. The view factor Fij itself can be calculated in a number of ways. Early methods used a hemicube (an imaginary cube centered upon the first surface to which the second surface was projected, devised by Cohen and Greenberg in 1985).
The surface of the hemicube was divided into pixel-like squares, for each of which a view factor can be readily calculated analytically. The full form factor could then be approximated by adding up the contribution from each of the pixel-like squares. The projection onto the hemicube, which could be adapted from standard methods for determining the visibility of polygons, also solved the problem of intervening patches partially obscuring those behind. However, all this was quite computationally expensive, because ideally form factors must be derived for every possible pair of patches, leading to a quadratic increase in computation as the number of patches increased. This can be reduced somewhat by using a binary space partitioning tree to reduce the amount of time spent determining which patches are completely hidden from others in complex scenes; but even so, the time spent to determine the form factor still typically scales as n log n. Newer methods include adaptive integration.[3]
Sampling approaches
The form factors F_ij themselves are not in fact explicitly needed in either of the update equations; neither to estimate the total intensity Σ_j F_ij B_j gathered from the whole view, nor to estimate how the power A_j B_j being radiated is distributed. Instead, these updates can be estimated by sampling methods, without ever having to calculate form factors explicitly. Since the mid-1990s such sampling approaches have been the methods most predominantly used for practical radiosity calculations. The gathered intensity can be estimated by generating a set of samples in the unit circle, lifting these onto the hemisphere, and then finding the radiosity of the element on which a ray incoming from that direction would have originated. The estimate for the total gathered intensity is then just the average of the radiosities discovered by each ray. Similarly, in the power formulation, power can be distributed by generating a set of rays from the radiating element in the same way, and spreading the power to be distributed equally between each element a ray hits. This is essentially the same distribution that a path-tracing program would sample in tracing back one diffuse reflection step, or that a bidirectional ray tracing program would sample to achieve one forward diffuse reflection step when light source mapping forwards. The sampling approach therefore to some extent represents a convergence between the two techniques, the key difference remaining that the radiosity technique aims to build up a sufficiently accurate map of the radiance of all the surfaces in the scene, rather than just a representation of the current view.
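A sketch of this form-factor-free gathering estimate, under the assumption of a hypothetical trace(x, d) helper that returns the index of the patch a ray from x in direction d lands on (or None if it escapes the scene):

```python
import numpy as np

def cosine_sample_hemisphere(n, rng):
    """Sample a direction with pdf cos(theta)/pi about the unit normal n.

    As in the text: generate a sample on the unit disc, then "lift" it
    onto the hemisphere.
    """
    n = np.asarray(n, dtype=float)
    r, phi = np.sqrt(rng.random()), 2.0 * np.pi * rng.random()
    x, y = r * np.cos(phi), r * np.sin(phi)
    z = np.sqrt(max(0.0, 1.0 - x * x - y * y))
    # Build an orthonormal basis (t, b, n) around the normal.
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t = np.cross(a, n); t /= np.linalg.norm(t)
    b = np.cross(n, t)
    return x * t + y * b + z * n

def gather(x, n, trace, B, rng, n_rays=256):
    """Estimate the gathered intensity sum_j F_ij B_j without form factors."""
    total = 0.0
    for _ in range(n_rays):
        j = trace(x, cosine_sample_hemisphere(n, rng))
        if j is not None:
            total += B[j]       # the cosine weighting cancels against the pdf
    return total / n_rays
```

With rng = np.random.default_rng(), the average of the sampled radiosities is the desired estimate; no F_ij is ever computed.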
Advantages
One of the advantages of the radiosity algorithm is that it is relatively simple to explain and implement. This makes it a useful algorithm for teaching students about global illumination algorithms. A typical direct illumination renderer already contains nearly all of the algorithms (perspective transformations, texture mapping, hidden surface removal) required to implement radiosity. A strong grasp of mathematics is not required to understand or implement this algorithm.
Limitations
A modern render of the iconic Utah teapot. Radiosity was used for all diffuse illumination in this scene.
Typical radiosity methods only account for light paths of the form LD*E, i.e., paths which start at a light source and make multiple diffuse bounces before reaching the eye. Although there are several approaches to integrating other illumination effects such as specular[4] and glossy[5] reflections, radiosity-based methods are generally not used to solve the complete rendering equation. Basic radiosity also has trouble resolving sudden changes in visibility (e.g., hard-edged shadows), because coarse, regular discretization into piecewise constant elements corresponds to a low-pass box filter in the spatial domain. Discontinuity meshing[6] uses knowledge of visibility events to generate a more intelligent discretization.
References
[2] "Cindy Goral, Kenneth E. Torrance, Donald P. Greenberg and B. Battaile, Modeling the interaction of light between diffuse surfaces (http:/ / www. cs. rpi. edu/ ~cutler/ classes/ advancedgraphics/ S07/ lectures/ goral. pdf)",, Computer Graphics, Vol. 18, No. 3. [3] G Walton, Calculation of Obstructed View Factors by Adaptive Integration, NIST Report NISTIR-6925 (http:/ / www. bfrl. nist. gov/ IAQanalysis/ docs/ NISTIR-6925. pdf), see also http:/ / view3d. sourceforge. net/ [4] http:/ / portal. acm. org/ citation. cfm?id=37438& coll=portal& dl=ACM [5] http:/ / www. cs. huji. ac. il/ labs/ cglab/ papers/ clustering/ [6] http:/ / www. cs. cmu. edu/ ~ph/ discon. ps. gz
Further reading
Radiosity Overview, from HyperGraph of SIGGRAPH (http://www.siggraph.org/education/materials/HyperGraph/radiosity/overview_1.htm) (provides full matrix radiosity algorithm and progressive radiosity algorithm)
Radiosity, by Hugo Elias (http://freespace.virgin.net/hugo.elias/radiosity/radiosity.htm) (also provides a general overview of lighting algorithms, along with programming examples)
Radiosity, by Allen Martin (http://web.cs.wpi.edu/~matt/courses/cs563/talks/radiosity.html) (a slightly more mathematical explanation of radiosity)
ROVER, by Tralvex Yeap (http://www.tralvex.com/pub/rover/abs-mnu.htm) (Radiosity Abstracts & Bibliography Library)
External links
RADical, by Parag Chaudhuri (http://www.cse.iitd.ernet.in/~parag/projects/CG2/asign2/report/RADical.shtml) (an implementation of the shooting & sorting variant of the progressive radiosity algorithm with OpenGL acceleration, extending from GLUTRAD by Colbeck)
Radiosity Renderer and Visualizer (http://dudka.cz/rrv) (simple implementation of radiosity renderer based on OpenGL)
Enlighten (http://www.geomerics.com) (licensed software code that provides realtime radiosity for computer game applications; developed by the UK company Geomerics)
Photon mapping
In computer graphics, photon mapping is a two-pass global illumination algorithm developed by Henrik Wann Jensen that approximately solves the rendering equation. Rays from the light source and rays from the camera are traced independently until some termination criterion is met, then they are connected in a second step to produce a radiance value. It is used to realistically simulate the interaction of light with different objects. Specifically, it is capable of simulating the refraction of light through a transparent substance such as glass or water, diffuse interreflection between illuminated objects, the subsurface scattering of light in translucent materials, and some of the effects caused by particulate matter such as smoke or water vapor. It can also be extended to more accurate simulations of light such as spectral rendering. Unlike path tracing, bidirectional path tracing and Metropolis light transport, photon mapping is a "biased" rendering algorithm, which means that averaging many renders using this method does not converge to a correct solution to the rendering equation. However, since it is a consistent method, a correct solution can be achieved by increasing the number of photons.
Effects
Caustics
Light refracted or reflected causes patterns called caustics, usually visible as concentrated patches of light on nearby surfaces. For example, as light rays pass through a wine glass sitting on a table, they are refracted and patterns of light are visible on the table. Photon mapping can trace the paths of individual photons to model where these concentrated patches of light will appear.
Diffuse interreflection
Diffuse interreflection is apparent when light from one diffuse object is reflected onto another. Photon mapping is particularly adept at handling this effect because the algorithm reflects photons from one surface to another based on that surface's bidirectional reflectance distribution function (BRDF), and thus light from one object striking another is a natural result of the method. Diffuse interreflection was first modeled using radiosity solutions. Photon mapping differs though in that it separates the light transport from the nature of the geometry in the scene. Color bleed is an example of diffuse interreflection.

A model of a wine glass ray traced with photon mapping to show caustics.
Subsurface scattering
Subsurface scattering is the effect evident when light enters a material and is scattered before being absorbed or reflected in a different direction. Subsurface scattering can accurately be modeled using photon mapping. This was the original way Jensen implemented it; however, the method becomes slow for highly scattering materials, and bidirectional surface scattering reflectance distribution functions (BSSRDFs) are more efficient in these situations.
Usage
Construction of the photon map (1st pass)
With photon mapping, light packets called photons are sent out into the scene from the light sources. Whenever a photon intersects with a surface, the intersection point and incoming direction are stored in a cache called the photon map. Typically, two photon maps are created for a scene: one especially for caustics and a global one for other light. After intersecting the surface, a probability for either reflecting, absorbing, or transmitting/refracting is given by the material. A Monte Carlo method called Russian roulette is used to choose one of these actions. If the photon is absorbed, no new direction is given, and tracing for that photon ends. If the photon reflects, the surface's bidirectional reflectance distribution function is used to determine the ratio of reflected radiance. Finally, if the photon is transmitted, a new direction is given by a function that depends upon the nature of the transmission. Once the photon map is constructed (or during construction), it is typically arranged in a manner that is optimal for the k-nearest neighbor algorithm, as photon look-up time depends on the spatial distribution of the photons. Jensen advocates the usage of kd-trees. The photon map is then stored on disk or in memory for later usage.
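A sketch of the first pass under these rules; the scene and material interfaces used here (scene.intersect, p_reflect, p_transmit, sample_brdf, sample_refraction, is_diffuse) are hypothetical stand-ins for whatever the host renderer provides:

```python
import random

def trace_photon(photon_map, scene, origin, direction, power, max_bounces=16):
    """Follow one photon through the scene and cache its diffuse hits."""
    for _ in range(max_bounces):
        hit = scene.intersect(origin, direction)
        if hit is None:
            return
        mat = hit.material
        if mat.is_diffuse:
            # Store position, incoming direction and power in the photon map.
            photon_map.store(hit.point, direction, power)
        # Russian roulette: choose reflection, transmission or absorption.
        xi = random.random()
        if xi < mat.p_reflect:
            direction = mat.sample_brdf(direction, hit.normal)
        elif xi < mat.p_reflect + mat.p_transmit:
            direction = mat.sample_refraction(direction, hit.normal)
        else:
            return                  # absorbed: tracing for this photon ends
        origin = hit.point
```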
Optimizations
To avoid emitting unneeded photons, the initial direction of the outgoing photons is often constrained. Instead of simply sending out photons in random directions, they are sent in the direction of a known object that is a desired photon manipulator to either focus or diffuse the light. There are many other refinements that can be made to the algorithm: for example, choosing the number of photons to send, and where and in what pattern to send them. It would seem that emitting more photons in a specific direction would cause a higher density of photons to be stored in the photon map around the position where the photons hit, and thus measuring this density would give an inaccurate value for irradiance. This is true; however, the algorithm used to compute radiance does not depend on irradiance estimates. For soft indirect illumination, if the surface is Lambertian, then a technique known as irradiance caching may be used to interpolate values from previous calculations. To avoid unnecessary collision testing in direct illumination, shadow photons can be used. During the photon mapping process, when a photon strikes a surface, in addition to the usual operations performed, a shadow photon is emitted in the same direction the original photon came from that goes all the way through the object. The next object it collides with causes a shadow photon to be stored in the photon map. Then during the direct illumination calculation, instead of sending out a ray from the surface to the light that tests collisions with objects, the photon map is queried for shadow photons. If none are present, then the object has a clear line of sight to the light source and additional calculations can be avoided. To optimize image quality, particularly of caustics, Jensen recommends use of a cone filter. Essentially, the filter gives weight to photons' contributions to radiance depending on how far they are from ray-surface intersections.
This can produce sharper images. Image space photon mapping[1] achieves real-time performance by computing the first and last scattering using a GPU rasterizer.
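A sketch of a radiance (density) estimate weighted by a cone filter of the kind described above, assuming a hypothetical photon_map.nearest(x, n) helper that returns the n nearest photons together with the radius r of the sphere enclosing them; the normalisation constant is the one appropriate to a cone filter with characteristic k:

```python
import math

def distance(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def radiance_estimate(photon_map, x, k=1.1, n_nearest=50):
    """Estimate reflected flux density at x, weighting photons by a cone filter."""
    photons, r = photon_map.nearest(x, n_nearest)   # hypothetical query interface
    total = 0.0
    for p in photons:
        # Cone filter: photons near x contribute more than photons near radius r.
        w = max(0.0, 1.0 - distance(p.position, x) / (k * r))
        total += w * p.power
    # Normalisation of the cone filter over a disc of radius r.
    return total / ((1.0 - 2.0 / (3.0 * k)) * math.pi * r * r)
```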
Variations
Although photon mapping was designed to work primarily with ray tracers, it can also be extended for use with scanline renderers.
External links
Global Illumination using Photon Maps [2]
Realistic Image Synthesis Using Photon Mapping [3] ISBN 1-56881-147-0
Photon mapping introduction [4] from Worcester Polytechnic Institute
Bias in Rendering [5]
Siggraph Paper [6]
References
[1] http://research.nvidia.com/publication/hardware-accelerated-global-illumination-image-space-photon-mapping
[2] http://graphics.ucsd.edu/~henrik/papers/photon_map/global_illumination_using_photon_maps_egwr96.pdf
[3] http://graphics.ucsd.edu/~henrik/papers/book/
[4] http://www.cs.wpi.edu/~emmanuel/courses/cs563/write_ups/zackw/photon_mapping/PhotonMapping.html
[5] http://www.cgafaq.info/wiki/Bias_in_rendering
[6] http://www.cs.princeton.edu/courses/archive/fall02/cs526/papers/course43sig02.pdf
Other Topics
Anti-aliasing
In digital signal processing, spatial anti-aliasing is the technique of minimizing the distortion artifacts known as aliasing when representing a high-resolution image at a lower resolution. Anti-aliasing is used in digital photography, computer graphics, digital audio, and many other applications. Anti-aliasing means removing signal components that have a higher frequency than the recording (or sampling) device can properly resolve. This removal is done before (re)sampling at a lower resolution. When sampling is performed without removing this part of the signal, it causes undesirable artifacts such as the black-and-white noise near the top of figure 1-a below. In signal acquisition and audio, anti-aliasing is often done using an analog anti-aliasing filter to remove the out-of-band component of the input signal prior to sampling with an analog-to-digital converter. In digital photography, optical anti-aliasing filters are made of birefringent materials, and smooth the signal in the spatial optical domain. The anti-aliasing filter essentially blurs the image slightly in order to reduce the resolution to or below that achievable by the digital sensor (the larger the pixel pitch, the lower the achievable resolution at the sensor level).
Examples
Figure 1: the same scene rendered (a) without anti-aliasing, (b) with anti-aliasing, and (c) with a sinc-filter-based anti-aliasing algorithm.
In computer graphics, anti-aliasing improves the appearance of polygon edges, so they are not "jagged" but are smoothed out on the screen. However, it incurs a performance cost for the graphics card and uses more video memory. The level of anti-aliasing determines how smooth polygon edges are (and how much video memory it consumes). Figure 1-a illustrates the visual distortion that occurs when anti-aliasing is not used. Near the top of the image, where the checkerboard is very small, the image is both difficult to recognize and not aesthetically appealing. In contrast, Figure 1-b shows an anti-aliased version of the scene. The checkerboard near the top blends into gray, which is usually the desired effect when the resolution is insufficient to show the detail. Even near the bottom of the image, the edges appear much smoother in the anti-aliased image. Figure 1-c shows another anti-aliasing algorithm, based on the sinc filter, which is considered better than the algorithm used in 1-b. Figure 2 shows magnified portions (interpolated using the nearest neighbor algorithm) of Figure 1-a (left) and 1-c (right) for comparison. In Figure 1-c, anti-aliasing has interpolated the brightness of the pixels at the boundaries to produce gray pixels, since the space is occupied by both black and white tiles. These help make Figure 1-c appear much smoother than Figure 1-a at the original magnification.

In Figure 3, anti-aliasing was used to blend the boundary pixels of a sample graphic; this reduced the aesthetically jarring effect of the sharp, step-like boundaries that appear in the aliased graphic at the left. Anti-aliasing is often applied in rendering text on a computer screen, to suggest smooth contours that better emulate the appearance of text produced by conventional ink-and-paper printing. Particularly with fonts displayed on typical LCD screens, it is common to use subpixel rendering techniques like ClearType. Subpixel rendering requires special color-balanced anti-aliasing filters to turn what would be severe color distortion into barely-noticeable color fringes. Equivalent results can be had by making individual subpixels addressable as if they were full pixels, and supplying a hardware-based anti-aliasing filter as is done in the OLPC XO-1 laptop's display controller. Pixel geometry affects all of this, whether the anti-aliasing and subpixel addressing are done in software or hardware.
Figure 3: Above left: an aliased version of a simple shape; above right: an anti-aliased version of the same shape; right: the anti-aliased graphic at 5x magnification.
Signal processing approach to anti-aliasing

In this approach, the ideal image is regarded as a signal, decomposed into frequency components, waves of the form

\cos(2\pi j x)\cos(2\pi k y)

where j and k are arbitrary non-negative integers. There are also frequency components involving the sine functions in one or both dimensions, but for the purpose of this discussion, the cosine will suffice. The numbers j and k together are the frequency of the component: j is the frequency in the x direction, and k is the frequency in the y direction. The goal of an anti-aliasing filter is to greatly reduce frequencies above a certain limit, known as the Nyquist frequency, so that the signal will be accurately represented by its samples, or nearly so, in accordance with the sampling theorem; there are many different choices of detailed algorithm, with different filter transfer functions. Current knowledge of human visual perception is not sufficient, in general, to say what approach will look best.
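As an illustration of this sampling-theorem viewpoint, a minimal supersampling sketch: rendering several sub-pixel samples and box-filtering them attenuates frequency components above the pixel grid's limit. Here shade(x, y) is a hypothetical function returning the scene value at continuous image coordinates:

```python
import numpy as np

def supersample(shade, width, height, n=4):
    """Box-filtered supersampling: average an n x n grid of samples per pixel."""
    image = np.zeros((height, width))
    offsets = [(i + 0.5) / n for i in range(n)]
    for y in range(height):
        for x in range(width):
            # Average n*n evenly spaced samples inside the pixel's footprint.
            image[y, x] = sum(shade(x + dx, y + dy)
                              for dy in offsets for dx in offsets) / (n * n)
    return image
```

The box filter is only one (fairly crude) choice of transfer function; sinc or Gaussian weights over the same samples give different trade-offs, as the text notes.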
Mipmapping
There is also an approach specialized for texture mapping called mipmapping, which works by creating lower resolution, prefiltered versions of the texture map. When rendering the image, the appropriate-resolution mipmap is chosen and hence the texture pixels (texels) are already filtered when they arrive on the screen. Mipmapping is generally combined with various forms of texture filtering in order to improve the final result.
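A minimal sketch of mip chain construction by repeated 2x2 box filtering, assuming a square, power-of-two-sized, single-channel texture (real implementations also handle rectangular multi-channel textures, gamma, and better filters):

```python
import numpy as np

def build_mipmaps(texture):
    """Build a mip chain; each level is a prefiltered, half-resolution copy."""
    levels = [np.asarray(texture, dtype=np.float64)]
    while levels[-1].shape[0] > 1:
        t = levels[-1]
        # Average each 2x2 block of texels into one texel of the next level.
        t = (t[0::2, 0::2] + t[1::2, 0::2] + t[0::2, 1::2] + t[1::2, 1::2]) / 4.0
        levels.append(t)
    return levels
```

At render time the level whose texel size best matches the on-screen pixel footprint is chosen, so the texels arrive on screen already filtered.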
It happens that, in this case, there is additional information that can be used. By re-calculating with the distance estimator, points were identified that are very close to the edge of the set, so that unusually fine detail is aliased in from the rapidly changing escape times near the edge of the set. The colors derived from these calculated points were identified as unusually unrepresentative of their pixels. Those points were replaced, in the third image, by interpolating the points around them. This reduces the noisiness of the image but has the side effect of brightening the colors. So this image is not exactly the same as would be obtained with an even larger set of calculated points. To show what was discarded, the rejected points, blended into a grey background, are shown in the fourth image. Finally, "Budding Turbines" is so regular that systematic (Moiré) aliasing can clearly be seen near the main "turbine axis" when it is downsized by taking the nearest pixel. The aliasing in the first image appears random because it comes from all levels of detail below the pixel size. When the lower level aliasing is suppressed to make the third image, and that is then down-sampled once more, without anti-aliasing, to make the fifth image, the order on the scale of the third image appears as systematic aliasing in the fifth image.

The best anti-aliasing and down-sampling method here depends on one's point of view. When fitting the most data into a limited array of pixels, as in the fifth image, sinc function anti-aliasing would seem appropriate. In obtaining the second and third images, the main objective is to filter out aliasing "noise", so a rotationally symmetrical function may be more appropriate. Pure down-sampling of an image, taking the nearest pixel without anti-aliasing, produces similar systematic aliasing effects.
Object-based anti-aliasing
A graphics rendering system creates an image based on objects constructed of polygonal primitives; the aliasing effects in the image can be reduced by applying an anti-aliasing scheme only to the areas of the image representing silhouette edges of the objects. The silhouette edges are anti-aliased by creating anti-aliasing primitives which vary in opacity. These anti-aliasing primitives are joined to the silhouetted edges, and create a region in the image where the objects appear to blend into the background. The method has some important advantages over classical methods based on the accumulation buffer, since it generates full-scene anti-aliasing in only two passes and does not require the additional memory used by the accumulation buffer. Object-based anti-aliasing was first developed at Silicon Graphics for their Indy workstation.
History
Important early works in the history of anti-aliasing include:
Freeman, H. (March 1974). "Computer processing of line drawing images". ACM Computing Surveys 6 (1): 57-97. doi:10.1145/356625.356627 [5].
Crow, Franklin C. (November 1977). "The aliasing problem in computer-generated shaded images". Communications of the ACM 20 (11): 799-805. doi:10.1145/359863.359869 [6].
Catmull, Edwin (August 23-25, 1978). "A hidden-surface algorithm with anti-aliasing". Proceedings of the 5th annual conference on Computer graphics and interactive techniques. pp. 6-11.
References
[4] http://www.4p8.com/eric.brasseur/gamma.html
[5] http://dx.doi.org/10.1145%2F356625.356627
[6] http://dx.doi.org/10.1145%2F359863.359869
External links
Antialiasing and Transparency Tutorial (http://lunaloca.com/tutorials/antialiasing/): explains interaction between antialiasing and transparency, especially when dealing with web graphics
Interpolation and Gamma Correction (http://web.archive.org/web/20050408053948/http://home.no.net/dmaurer/~dersch/gamma/gamma.html): in most real-world systems, gamma correction is required to linearize the response curve of the sensor and display systems. If this is not taken into account, the resultant non-linear distortion will defeat the purpose of anti-aliasing calculations based on the assumption of a linear system response.
The Future of Anti-Aliasing (http://www.eurogamer.net/articles/digital-foundry-future-of-anti-aliasing): a comparison of the different algorithms MSAA, MLAA, DLAA and FXAA
(French) Le rôle du filtre anti-aliasing dans les APN (the function of the anti-aliasing filter in dSLRs) (http://www.astrosurf.com/luxorion/apn-anti-aliasing.htm)
Ambient occlusion
In computer graphics, ambient occlusion attempts to approximate the way light radiates in real life, especially off what are normally considered non-reflective surfaces. Unlike local methods like Phong shading, ambient occlusion is a global method, meaning the illumination at each point is a function of other geometry in the scene. However, it is a very crude approximation to full global illumination. The soft appearance achieved by ambient occlusion alone is similar to the way an object appears on an overcast day.
Implementation
Ambient occlusion is related to accessibility shading, which determines appearance based on how easy it is for a surface to be touched by various elements (e.g., dirt, light, etc.). It has been popularized in production animation due to its relative simplicity and efficiency. In the industry, ambient occlusion is often referred to as "sky light".
The ambient occlusion shading model has the nice property of offering a better perception of the 3D shape of the displayed objects. This was shown in a paper where the authors report the results of perceptual experiments showing that depth discrimination under diffuse uniform sky lighting is superior to that predicted by a direct lighting model.[1]

The occlusion A_p at a point p on a surface with normal N can be computed by integrating the visibility function over the hemisphere Ω with respect to the projected solid angle:

A_p = \frac{1}{\pi} \int_{\Omega} V_{p,\omega}\,(N \cdot \omega)\,\mathrm{d}\omega

where V_{p,ω} is the visibility function at p, defined to be zero if p is occluded in the direction ω and one otherwise, and dω is the infinitesimal solid angle step of the integration variable ω. A variety of techniques are used to approximate this integral in practice: perhaps the most straightforward way is to use the Monte Carlo method by casting rays from the point p and testing for intersection with other scene geometry (i.e., ray casting). Another approach (more suited to hardware acceleration) is to render the view from p by rasterizing black geometry against a white background and taking the (cosine-weighted) average of rasterized fragments. This approach is an example of a "gathering" or "inside-out" approach, whereas other algorithms (such as depth-map ambient occlusion) employ "scattering" or "outside-in" techniques.

In addition to the ambient occlusion value, a "bent normal" vector is often generated, which points in the average direction of unoccluded samples. The bent normal can be used to look up incident radiance from an environment map to approximate image-based lighting. However, there are some situations in which the direction of the bent normal is a misrepresentation of the dominant direction of illumination, as in the examples below.
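A Monte Carlo sketch of the integral above, assuming a hypothetical occluded(p, d) ray-casting helper; uniform hemisphere sampling is used, so each sample is weighted by the integrand divided by the sample density:

```python
import numpy as np

def ambient_occlusion(p, n, occluded, rng, n_samples=64):
    """Estimate A_p = (1/pi) * integral over the hemisphere of V * (N . w) dw.

    occluded(p, d) is assumed to return True when a ray from p in direction d
    hits scene geometry. Following the visibility convention above, the result
    is 1.0 for a fully open point and 0.0 for a fully occluded one.
    """
    total = 0.0
    for _ in range(n_samples):
        d = rng.normal(size=3)          # uniform direction on the sphere...
        d /= np.linalg.norm(d)
        if np.dot(d, n) < 0.0:          # ...flipped into the hemisphere of n
            d = -d
        if not occluded(p, d):
            # integrand (N . d)/pi divided by the pdf 1/(2*pi) gives 2 (N . d)
            total += 2.0 * np.dot(d, n)
    return total / n_samples
```

Accumulating the unoccluded directions d alongside this sum and averaging them would yield the "bent normal" discussed next.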
In this example the bent normal Nb has an unfortunate direction, since it is pointing at an occluded surface.
In this example, light may reach the point p only from the left or right sides, but the bent normal points to the average of those two sources, which is, unfortunately, directly toward the obstruction.
Recognition
In 2010, Hayden Landis, Ken McGaugh and Hilmar Koch were awarded a Scientific and Technical Academy Award for their work on ambient occlusion rendering.[2]
References
[2] Oscar 2010: Scientific and Technical Awards (http://www.altfg.com/blog/awards/oscar-2010-scientific-and-technical-awards-489/), Alt Film Guide, Jan 7, 2010
External links
Depth Map based Ambient Occlusion (http://www.andrew-whitehurst.net/amb_occlude.html)
NVIDIA's accurate, real-time Ambient Occlusion Volumes (http://research.nvidia.com/publication/ambient-occlusion-volumes)
Assorted notes about ambient occlusion (http://www.cs.unc.edu/~coombe/research/ao/)
Ambient Occlusion Fields (http://www.tml.hut.fi/~janne/aofields/): real-time ambient occlusion using cube maps
PantaRay ambient occlusion used in the movie Avatar (http://research.nvidia.com/publication/pantaray-fast-ray-traced-occlusion-caching-massive-scenes)
Fast Precomputed Ambient Occlusion for Proximity Shadows (http://hal.inria.fr/inria-00379385): real-time ambient occlusion using volume textures
Dynamic Ambient Occlusion and Indirect Lighting (http://download.nvidia.com/developer/GPU_Gems_2/GPU_Gems2_ch14.pdf): a real-time self ambient occlusion method from Nvidia's GPU Gems 2 book
GPU Gems 3: Chapter 12. High-Quality Ambient Occlusion (http://http.developer.nvidia.com/GPUGems3/gpugems3_ch12.html)
ShadeVis (http://vcg.sourceforge.net/index.php/ShadeVis): an open source tool for computing ambient occlusion
xNormal (http://www.xnormal.net): a free normal mapper/ambient occlusion baking application
3dsMax Ambient Occlusion Map Baking (http://www.mrbluesummers.com/893/video-tutorials/baking-ambient-occlusion-in-3dsmax-monday-movie): demo video about preparing ambient occlusion in 3dsMax
Caustics
In optics, a caustic or caustic network[1] is the envelope of light rays reflected or refracted by a curved surface or object, or the projection of that envelope of rays on another surface. The caustic is a curve or surface to which each of the light rays is tangent, defining a boundary of an envelope of rays as a curve of concentrated light. Therefore, in the image to the right, the caustics can be seen as the patches of light or their bright edges. These shapes often have cusp singularities.
Caustics produced by a glass of water
Explanation
Concentration of light, especially sunlight, can burn. The word caustic, in fact, comes from the Greek καυστός, burnt, via the Latin causticus, burning. A common situation where caustics are visible is when light shines on a drinking glass. The glass casts a shadow, but also produces a curved region of bright light. In ideal circumstances (including perfectly parallel rays, as if from a point source at infinity), a nephroid-shaped patch of light can be produced.[2] Rippling caustics are commonly formed when light shines through waves on a body of water. Another familiar caustic is the rainbow.[3][4] Scattering of light by raindrops causes different wavelengths of light to be refracted into arcs of differing radius, producing the bow.
Computer graphics
In computer graphics, most modern rendering systems support caustics. Some of them even support volumetric caustics. This is accomplished by ray tracing the possible paths of the light beam through the glass, accounting for the refraction and reflection. Photon mapping is one implementation of this. The focus of most computer graphics systems is aesthetics rather than physical accuracy. Some computer graphics systems work by "forward ray tracing", wherein photons are modeled as coming from a light source and bouncing around the environment according to rules. Caustics are formed in the regions where sufficient photons strike a surface, causing it to be brighter than the average area in the scene. Backward ray tracing works in the reverse manner, beginning at the surface and determining if there is a direct path to the light source.[5] Some examples of 3D ray-traced caustics can be found here [6].

A computer-generated image of a wine glass ray traced with photon mapping to simulate caustics.
References
[1] Lynch DK and Livingston W (2001). Color and Light in Nature. Cambridge University Press. ISBN 978-0-521-77504-5. Chapter 3.16, The caustic network; Google books preview (http://books.google.com/books?id=4Abp5FdhskAC&pg=PA93&lpg=PA93&dq=Caustic+Network&source=bl&ots=bN-ULVuyq3&sig=VHt0Y8UFxFOaoDBL8E_gmVxgoIg&hl=en&ei=7qRgSpufMtW-lAedlpnRCQ&sa=X&oi=book_result&ct=result&resnum=4)
[2] Circle Catacaustic (http://mathworld.wolfram.com/CircleCatacaustic.html). Wolfram MathWorld. Retrieved 2009-07-17.
[3] Rainbow caustics (http://atoptics.co.uk/fz552.htm)
[4] Caustic fringes (http://atoptics.co.uk/fz564.htm)
[5] http://http.developer.nvidia.com/GPUGems/gpugems_ch02.html
[6] http://www.theeshadow.com/h/caustic/
Born, Max; and Wolf, Emil (1999). Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light (7th ed.). Cambridge University Press. ISBN 0-521-64222-1.
Nye, John (1999). Natural Focusing and Fine Structure of Light: Caustics and Wave Dislocations. CRC Press. ISBN 978-0-7503-0610-2.
Further reading
Ferraro, Pietro (1996). "What a caustic!". The Physics Teacher 34 (9): 572. Bibcode: 1996PhTea..34..572F (http://adsabs.harvard.edu/abs/1996PhTea..34..572F). doi: 10.1119/1.2344572 (http://dx.doi.org/10.1119/1.2344572).
Subsurface scattering
Subsurface scattering (or SSS) is a mechanism of light transport in which light penetrates the surface of a translucent object, is scattered by interacting with the material, and exits the surface at a different point. The light will generally penetrate the surface and be reflected a number of times at irregular angles inside the material, before passing back out of the material at an angle other than the angle it would have if it had been reflected directly off the surface. Subsurface scattering is important in 3D computer graphics, being necessary for the realistic rendering of materials such as marble, skin, and milk.
Direct surface scattering (left), plus subsurface scattering (middle), create the final image on the right.
Rendering Techniques
Most materials used in real-time computer graphics today only account for the interaction of light at the surface of an object. In reality, many materials are slightly translucent: light enters the surface; is absorbed, scattered and re-emitted, potentially at a different point. Skin is a good case in point; only about 6% of reflectance is direct, 94% is from subsurface scattering. An inherent property of semitransparent materials is absorption. The further through the material light travels, the greater the proportion absorbed. To simulate this effect, a measure of the distance the light has traveled through the material must be obtained.

Example of subsurface scattering made in Blender software.
Combined with other more traditional lighting models, this depth-based measure allows the creation of different materials such as marble, jade and wax. Potentially, problems can arise if models are not convex, but depth peeling can be used to avoid the issue. Similarly, depth peeling can be used to account for varying densities beneath the surface, such as bone or muscle, to give a more accurate scattering model. As can be seen in the image of the wax head to the right, light isn't diffused when passing through an object using this technique; back features are clearly shown. One solution to this is to take multiple samples at different points on the surface of the depth map. Alternatively, a different approach to approximation can be used, known as texture-space diffusion.
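The absorption described above lends itself to a Beer-Lambert-style falloff; a minimal sketch, assuming the through-material distance has already been obtained (e.g. from a depth map) and a per-channel absorption coefficient chosen by the artist:

```python
import math

def transmitted_color(light_rgb, depth, sigma_a_rgb):
    """Attenuate light passing through a homogeneous absorbing material.

    depth       : distance the light travelled inside the material
    sigma_a_rgb : per-channel absorption coefficients (per unit length);
                  wavelength-dependent absorption is what tints materials
                  like wax, jade or marble.
    """
    return tuple(c * math.exp(-s * depth) for c, s in zip(light_rgb, sigma_a_rgb))
```

For example, transmitted_color((1.0, 1.0, 1.0), 0.3, (0.5, 2.0, 4.0)) yields a reddish result, since the (hypothetical) coefficients absorb the blue channel fastest.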
Motion blur
Motion blur is the apparent streaking of rapidly moving objects in a still image or a sequence of images such as a movie or animation. It results when the image being recorded changes during the recording of a single frame, either due to rapid movement or long exposure.
An example of motion blur showing a London bus passing a telephone box in London
Animation
In computer animation (2D or 3D), motion blur is simulated per frame, as if the rendering had been made with a real video camera during fast motion of the camera or of the animated objects, in order to make the motion look more natural or smoother. Without this simulated effect each frame shows a perfect instant in time (analogous to a camera with an infinitely fast shutter), with zero motion blur. This is why a video game with a frame rate of 25-30 frames per second will seem staggered, while natural motion filmed at the same frame rate appears rather more continuous. Many modern video games feature motion blur, especially vehicle simulation games. Some of the better-known games that utilise this are the recent Need for Speed titles, Unreal Tournament III, and The Legend of Zelda: Majora's Mask, among many others. There are two main methods used in video games to achieve motion blur: cheaper full-screen effects, which typically take only camera movement into account (and sometimes how fast the camera is moving in 3-D space, to create a radial blur), and more "selective" or "per-object" motion blur, which typically uses a shader either to create a velocity buffer marking motion intensity for a motion blurring effect, or to perform geometry extrusion. In pre-rendered computer animation, such as CGI movies, realistic motion blur can be drawn because the renderer has more time to draw each frame. Temporal anti-aliasing produces frames as a composite of many instants, as sketched below. Motion lines in cel animation are drawn in the same direction as motion blur and perform much the same duty. Go motion is a variant of stop motion animation that moves the models during the exposure to create a less staggered effect.
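A sketch of that temporal compositing idea, assuming a hypothetical render(t) function that produces the image of the scene (e.g. as a numpy array) at time t:

```python
def blur_frame(render, t0, shutter, n_samples=8):
    """Composite several instants within the shutter interval [t0, t0 + shutter).

    Averaging sub-frame renders approximates the light a real camera would
    integrate while its shutter is open; more samples give smoother blur.
    """
    times = [t0 + shutter * (i + 0.5) / n_samples for i in range(n_samples)]
    frames = [render(t) for t in times]
    return sum(frames) / n_samples
```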
Two animations rotating around a figure, with motion blur (left) and without
Computer graphics
In 2D computer graphics, motion blur is an artistic filter that converts the digital image[1]/bitmap[2]/raster image in order to simulate the effect. Many graphical software products (e.g. Adobe Photoshop or GIMP) offer simple motion blur filters. However, for advanced motion blur filtering including curves or non-uniform speed adjustment, specialized software products are necessary.[3]
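A minimal 2D filter in the spirit of the simple motion blur filters mentioned above: an axis-aligned moving average, standing in for the arbitrary-angle blurs real packages offer. Note that np.roll wraps at the image borders, whereas production filters pad or clamp:

```python
import numpy as np

def motion_blur(image, length=9, axis=1):
    """Average `length` shifted copies of the image along one axis,
    simulating uniform motion in that direction during the exposure."""
    out = np.zeros_like(image, dtype=np.float64)
    for k in range(length):
        out += np.roll(image, k - length // 2, axis=axis)
    return out / length
```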
Biology
When an animal's eye is in motion, the image will suffer from motion blur, resulting in an inability to resolve details. To cope with this, humans generally alternate between saccades (quick eye movements) and fixation (focusing on a single point). Saccadic masking makes motion blur during a saccade invisible. Similarly, smooth pursuit allows the eye to track a target in rapid motion, eliminating motion blur of that target instead of the scene.
Sometimes, motion blur can be removed from images with the help of deconvolution. Some video game players claim that artificial motion blur causes headaches.[4] For some games, it is recommended to disable motion blur and to play at a high frame rate on a high-refresh-rate screen, which makes it easier to pinpoint objects on the screen (useful when reacting to them within small time windows). Some players argue that motion blur should come naturally from the eyes, and that screens shouldn't need to simulate the effect.
Restoration
An example of blurred image restoration with Wiener deconvolution:
From left: original image, blurred image and de-blurred image. Notice some artifacts in de-blurred image.
References
[1] Motion Blur Effect (http://www.tutorialsroom.com/tutorials/graphics/motion_blur.html), TutorialsRoom
[2] Photoshop - Motion Blur (http://artist.tizag.com/photoshopTutorial/motionblur.php), tizag.com
[3] Traditional motion blur methods (http://www.virtualrig-studio.com/traditional-motion-blur.php), virtualrig-studio.com
Gallery
Motion blur is frequently employed in sports photography (particularly motor sports) to convey a sense of speed. To achieve this effect it is necessary to use a slow shutter speed and pan the lens of the camera in time with the motion of the object.
Taken aboard an airplane turning above San Jose at night. The city lights form concentric strips.
The traffic on this street leaves brilliant streaks due to the low shutter speed of the camera and the cars' relatively fast speed.
Strickland Falls in Tasmania, Australia, taken using a neutral density filter. ND filters reduce light of all colors or wavelengths equally, allowing an increase in aperture and decrease in shutter speed without overexposing the image. To create the motion blur seen here, the shutter must be kept open for a relatively long time, making it necessary to reduce the amount of light coming through the lens.
Beam tracing
Beam tracing is an algorithm to simulate wave propagation. It was developed in the context of computer graphics to render 3D scenes, but it has also been used in other similar areas such as acoustics and electromagnetism simulations. Beam tracing is a derivative of the ray tracing algorithm that replaces rays, which have no thickness, with beams. Beams are shaped like unbounded pyramids, with (possibly complex) polygonal cross sections. Beam tracing was first proposed by Paul Heckbert and Pat Hanrahan.[1]

In beam tracing, a pyramidal beam is initially cast through the entire viewing frustum. This initial viewing beam is intersected with each polygon in the environment, typically from nearest to farthest. Each polygon that intersects with the beam must be visible, and is removed from the shape of the beam and added to a render queue. When a beam intersects with a reflective or refractive polygon, a new beam is created in a similar fashion to ray tracing. A variant of beam tracing casts a pyramidal beam through each pixel of the image plane. This is then split up into sub-beams based on its intersection with scene geometry. Reflection and transmission (refraction) rays are also replaced by beams. This sort of implementation is rarely used, as the geometric processes involved are much more complex and therefore expensive than simply casting more rays through the pixel. Cone tracing is a similar technique using a cone instead of a complex pyramid.

Beam tracing solves certain problems related to sampling and aliasing, which can plague conventional ray tracing approaches.[2] Since beam tracing effectively calculates the path of every possible ray within each beam[3] (which can be viewed as a dense bundle of adjacent rays), it is not as prone to under-sampling (missing rays) or over-sampling (wasted computational resources). The computational complexity associated with beams has made them unpopular for many visualization applications. In recent years, Monte Carlo algorithms like distributed ray tracing and Metropolis light transport have become more popular for rendering calculations.

A 'backwards' variant of beam tracing casts beams from the light source into the environment. Similar to backwards ray tracing and photon mapping, backwards beam tracing may be used to efficiently model lighting effects such as caustics.[4] Recently the backwards beam tracing technique has also been extended to handle glossy to diffuse material interactions (glossy backward beam tracing), such as from polished metal surfaces.[5]

Beam tracing has been successfully applied to the fields of acoustic modelling[6] and electromagnetic propagation modelling.[7] In both of these applications, beams are used as an efficient way to track deep reflections from a source to a receiver (or vice versa). Beams can provide a convenient and compact way to represent visibility. Once a beam tree has been calculated, one can use it to readily account for moving transmitters or receivers. Beam tracing is related in concept to cone tracing.
References
[1] P. S. Heckbert and P. Hanrahan, "Beam tracing polygonal objects" (http://www.eng.utah.edu/~cs7940/papers/p119-heckbert.pdf), Computer Graphics 18(3), 119-127 (1984).
[2] A. Lehnert, "Systematic errors of the ray-tracing algorithm", Applied Acoustics 38, 207-221 (1993).
[3] Steven Fortune, "Topological Beam Tracing", Symposium on Computational Geometry 1999: 59-68.
[4] M. Watt, "Light-water interaction using backwards beam tracing", in Proceedings of the 17th annual conference on Computer graphics and interactive techniques (SIGGRAPH '90), 377-385 (1990).
[5] B. Duvenhage, K. Bouatouch, and D.G. Kourie, "Exploring the use of Glossy Light Volumes for Interactive Global Illumination", in Proceedings of the 7th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa, 2010.
[6] T. Funkhouser, I. Carlbom, G. Elko, G. Pingali, M. Sondhi, and J. West, "A beam tracing approach to acoustic modelling for interactive virtual environments", in Proceedings of the 25th annual conference on Computer graphics and interactive techniques (SIGGRAPH '98), 21-32 (1998).
[7] Steven Fortune, "A Beam-Tracing Algorithm for Prediction of Indoor Radio Propagation", in WACG 1996: 157-166.
Cone tracing
Principles
Cone tracing[1] and beam tracing are derivatives of the ray tracing algorithm that replace rays, which have no thickness, with thick rays. This is done to better model the finite area of each pixel and to address the sampling and aliasing problems that can plague conventional ray tracing.
References
[1] Amanatides, Ray Tracing with Cones, Siggraph '84. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.129.582
[2] Homan Igehy, Tracing Ray Differentials. http://www-graphics.stanford.edu/papers/trd/
[3] Fabrice Neyret, Modeling Animating and Rendering Complex Scenes using Volumetric Textures. http://hal.inria.fr/inria-00537523
[4] Cyril Crassin, Fabrice Neyret, Miguel Sainz, Simon Green, Elmar Eisemann, Interactive Indirect Illumination Using Voxel Cone Tracing. http://www.icare3d.org/research-cat/publications/interactive-indirect-illumination-using-voxel-cone-tracing.html
[5] http://www.youtube.com/watch?v=fAsg_xNzhcQ
Ray tracing hardware

Implementations
Various implementations of ray tracing hardware have been created, both experimental and commercial:
(2002-2009) ART VPS company (founded 2002[12]), situated in the UK, sold ray tracing hardware for off-line rendering. The hardware used multiple specialized processors that accelerated ray-triangle intersection tests. Software provided integration with Maya (see Autodesk Maya) and Max (see Autodesk 3ds Max) data formats, and utilized the Renderman scene description language for sending data to the processors (the .RIB or Renderman Interface Bytestream file format).[13] As of 2010, ARTVPS no longer produces ray tracing hardware but continues to produce rendering software.[12]
(2002) The computer graphics laboratory at Saarland University headed by Dr.-Ing. Slusallek has produced prototype ray tracing hardware, including the FPGA-based fixed-function data-driven SaarCOR (Saarbrücken's Coherence Optimized Ray Tracer) chip[14][15][16] and a more advanced programmable (2005) processor, the Ray Processing Unit (RPU).[17]
(1996) Researchers at Princeton University proposed using DSPs to build a hardware unit for ray tracing acceleration, named "TigerSHARK".[18]
Implementations of volume rendering using ray tracing algorithms on custom hardware have also been proposed (2002: VIZARD II[19]) or built (1999: the vg500 / VolumePro ASIC-based system[20][21]).
Caustic Graphics[22] produced a plug-in card, the "CausticOne" (2010), that accelerates global illumination and other ray-based rendering processes when coupled to a PC CPU and GPU. The hardware is designed to organize scattered rays (typically produced by global illumination problems) into more coherent sets (lower spatial or angular spread) for further processing by an external processor.[23]
Siliconarts[24] developed dedicated real-time ray tracing hardware (2010). RayCore (2011), described as the first real-time ray tracing semiconductor IP, was announced.
Ray Tracers
3Delight
Developer(s): DNA Research
Stable release: 9.0.84 / February 2, 2011
Operating system: Windows, Mac OS X, Linux
Type: 3D computer graphics
Licence: Proprietary
Website: www.3delight.com [1]
3Delight is 3D computer graphics software that runs on Microsoft Windows, OS X and Linux. It is developed by DNA Research, commonly shortened to DNA, a subsidiary of Taarna Studios. It is a photorealistic, RenderMan-compliant offline renderer.
History
Work on 3Delight started in 1999. The renderer first became publicly available in 2000.[2] 3Delight was the first RenderMan-compliant renderer combining the REYES algorithm with on-demand ray tracing. The only other RenderMan-compliant renderer capable of ray tracing at the time was BMRT; BMRT was not a REYES renderer, though. 3Delight was meant to be a commercial product from the beginning. However, DNA decided to make it available free of charge from August 2000 to March 2005 in order to build a user base. During this time, customers using a large number of licenses on their sites or requiring extensive support were asked to work out an agreement with DNA that specified some form of fiscal compensation. In March 2005, the license was changed. The first license was still free; from the second license onwards, the renderer cost 1,000 USD per two-thread node and 1,500 USD per four-thread node. The first company to license 3Delight commercially, in early 2005, was Rising Sun Pictures. The current licensing scheme is based on the number of threads or cores. The first license, limited to two cores, is free.
Features
3Delight primarily uses the REYES algorithm but is also well capable of doing ray tracing and global illumination. The renderer is fully multi-threaded and also supports distributed rendering. This allows for accelerated rendering on multi-CPU hosts or environments where a large number of computers are joined into a grid. It implements all required capabilities for a RenderMan-compliant renderer and also the following optional ones:[3]
Area light sources
Depth of field
Displacement mapping
Environment mapping
Global illumination
Level of detail
Motion blur
Programmable shading
Special camera projections (through the "ray trace hider")
Ray tracing
Shadow depth mapping
Solid modeling
Texture mapping
Volume shading
3Delight also supports the following capabilities, which are not part of any capabilities list:
Photon mapping
Point clouds
Hierarchical subdivision surfaces
NURB curves
Brick maps (3-dimensional, mip-mapped textures)
(RIB) Conditionals
Modules
3Delight is based on modules. The primary module is the REYES module, which implements a REYES-based renderer. Another module, called 'Sabretooth', is used for ray tracing and also supports global illumination calculations through certain shadeops. 3Delight supports explicit ray tracing of camera rays by selecting a different hider, essentially turning the renderer from a hybrid REYES/ray tracing one into a full ray tracer. Other features include:
Extended display subset functionality to allow rendering of geometric primitives, writing to the same display variable, to different images. For example, display subsets could be used to render the skin and fur of a creature to two separate images at once without the fur matting the skin passes.
Memory efficient point clouds. Like brick maps, point clouds are organized in a spatial data structure and are loaded lazily, keeping the memory requirements as low as possible.
Procedural geometry is instanced lazily even during ray tracing, keeping the memory requirements as low as possible.
Displacement shaders can be stacked.
Displacement shaders can (additionally) be run on the vertices of a geometric primitive, before that primitive is even shaded.
The gather() shadeop can be used on point clouds and to generate sample distributions from (high dynamic range) images, e.g. for easily combining photon mapping with image based lighting.
First order ray differentials on any ray fired from within a shader.
A read/write disk cache that allows the renderer to take strain off the network, when heavy scene data needs to be repeatedly distributed to clients on a render farm or image data sent back from such clients to a central storage server.
A C API that allows running RenderMan Shading Language (RSL) code on arbitrary data, e.g. inside a modelling application.
Supported platforms
Apple Mac OS X on the PowerPC and x86 architectures
GNU/Linux on the x86, x86-64 and Cell architectures
Microsoft Windows on the x86 and x86-64 architectures
Operating environments
The renderer comes in both 32-bit and 64-bit versions, the latter allowing the processing of very large scene datasets.
Discontinued platforms
Platforms supported in the past included:
Digital Equipment Corporation Digital UNIX on the Alpha architecture
Silicon Graphics IRIX on the MIPS architecture (might still be supported, on request)
Sun Microsystems Solaris on the SPARC architecture
Film credits
3Delight has been used for visual effects work on many films. Some notable examples are:
Assault on Precinct 13
Bailey's Billions
Black Christmas
Blades of Glory
The Blood Diamond
Charlotte's Web
CJ7 / Cheung Gong 7 hou
The Chronicles of Narnia: The Lion, the Witch and the Wardrobe
The Chronicles of Riddick
Cube Zero
District 9
Fantastic Four
Fantastic Four: Rise of the Silver Surfer
Final Destination 3
Harry Potter and the Half-Blood Prince
Harry Potter and the Order of the Phoenix
Hulk
The Incredible Hulk
The Last Mimzy
The Ruins
The Seeker: The Dark is Rising
Terminator Salvation
Superman Returns
The Woods
X-Men: The Last Stand
X-Men Origins: Wolverine
It was also used to render the following full CG features:
Adventures in Animation (Imax 3D featurette)
Happy Feet Two
Free Jimmy
References
[1] http://www.3delight.com/
[3] 3Delight Technical Specifications (http://www.3delight.com/en/uploads/docs/3delight/3delight_tech_specs.pdf)
[4] http://www.3delight.com/en/uploads/press-release/3dsp-10.pdf
[5] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/e1f4c1a9891aede7
[6] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/a8615ea70c31b587
[7] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/ab754ad521d41035
[8] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/c389d5e90c943d24
[9] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/8f4d252ff97b8aaa
[10] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/8454e655e24588c8
[11] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/96054503e7fd242a
[12] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/22cac22ce089a235
[13] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/c8c7c6337e998e55
[14] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/8b7f2b432aad4e21
[15] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/6ed1bad3e15a9c07
[16] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/8857c89e2856a1de
[17] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/e0b9c83a8ef7e433
[18] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/c292a7283ae98b0d
[19] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/c9a112d87632314c
[20] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/0b2cbf41ec7f1c95
[21] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/ffd884b847b3f7cc
[22] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/fb1bf705bb874588
[23] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/fb13237fd2bf20ad
External links
3Delight home page (http://www.3delight.com/)
Rodmena Network (http://www.rodmena.com/)
Amiga Reflections
Amiga Reflections is 3D modeling and rendering software developed by Carsten Fuchs for the Commodore Amiga. It was later renamed Monzoom. The first Bookware release was in 1989, and contained a book and a floppy disk. The book was the manual and had some tutorials explaining how a raytracer works. The floppy contained the software with some models and examples. Carsten Fuchs extended the software with a more advanced modeler and an animation module in 1992, the Reflections-Animator. As of version 4.3, in 1998,[1] Amiga Reflections was renamed Monzoom or Monzoom 3D and distributed by Oberland Computer.[2] Monzoom Pro was available on CD with the March/April 2008 issue of the German print magazine Amiga Future.[3][4][5] Monzoom also became available for PC as shareware.[6]
Rendered with Reflections-Animator
Publications
Books
Fuchs, Carsten (1992). Amiga reflections animator. Haar bei München: Markt-&-Technik-Verlag. ISBN 978-3-87791-166-2.
Fuchs, Carsten (January 1989). Amiga reflections. Haar bei München: Markt-&-Technik-Verlag. ISBN 978-3-89090-727-7.
Scientific articles
Glittenberg, Carl (2004). "Computer-assisted 3D design software for teaching neuro-ophthalmology of the oculomotor system and training new retinal surgery techniques" [7]. Proceedings of SPIE. Ophthalmic Technologies XIV. San Jose, CA, USA. pp. 275-285. doi:10.1117/12.555626 [8]. Retrieved 2010-03-01. Discusses using 3D software, including Reflektions 4.3 (an alternate name for Reflections/Monzoom), to teach ophthalmology and train for retinal surgery.
References
[1] Listing of Carsten Fuchs's software at Amiga Future: http://www.amigafuture.de/asd_search.php?submit=1&cat_id=13&e=Carsten+Fuchs
[2] Exhibition history, Computer '98, in issue 10/98 of AMIGA-Magazin by Jörn-Erik Burkert (in German): http://www.amiga-magazin.de/magazin/a10-98/koeln.html
[3] Contents listing of March/April 2008, Number 71 of Amiga Future magazine (in German): http://www.amigafuture.de/kb.php?mode=article&k=2357&highlight=monzoom
[4] Cover of March/April 2008, Number 71 of Amiga Future magazine: http://www.amigafuture.de/album_page.php?pic_id=4526
[5] Alternate cover and ToC listing in English (but not from the publisher): http://www.vesalia.de/e_af71.htm
[6] Monzoom for Windows (in German): http://software.magnus.de/grafik-video-audio/download/3d-programm-monzoom-s.html
[7] http://spie.org/x648.html?pf=true%0D%0A%09%09%09%09&product_id=555626
[8] http://dx.doi.org/10.1117%2F12.555626
External links
Objects, plugins, images
Aminet objects and images (http://aminet.net/search?path=pix&desc=reflections)
Plugins, scripts, etc. for Monzoom (http://www.geoxis.de/monzoom/) (in German)
Autodesk 3ds Max
Operating system: Windows 7 and Windows 8[1]
Platform: x64
Type: 3D computer graphics
License: Trialware
Website: www.autodesk.com/3dsmax [2]
Autodesk 3ds Max, formerly 3D Studio Max, is 3D computer graphics software for making 3D animations, models, and images. It was developed and produced by Autodesk Media and Entertainment. It has modeling capabilities and a flexible plugin architecture, and runs on the Microsoft Windows platform. It is frequently used by video game developers (for titles such as the popular children's online game Roblox and the Trainz franchise), by many TV commercial studios, and by architectural visualization studios. It is also used for movie effects and movie pre-visualization. In addition to its modeling and animation tools, the latest version of 3ds Max also features shaders (such as ambient occlusion and subsurface scattering), dynamic simulation, particle systems, radiosity, normal map creation and rendering, global illumination, a customizable user interface, and its own scripting language.[3]
Features
MAXScript
MAXScript is a built-in scripting language that can be used to automate repetitive tasks, combine existing functionality in new ways, develop new tools and user interfaces, and much more. Plugin modules can be created entirely within MAXScript.
Character Studio
Character Studio was a plugin which, since version 4 of Max, has been integrated into 3D Studio Max; it helps users animate virtual characters. The system works using a character rig or "Biped" skeleton, which has stock settings that can be modified and customized to fit character meshes and animation needs. The tool also includes robust editing tools for IK/FK switching, pose manipulation, layer and keyframing workflows, and sharing of animation data across different Biped skeletons. These "Biped" objects have other useful features that help accelerate the production of walk cycles and movement paths, as well as secondary motion.
Scene Explorer
Scene Explorer, a tool that provides a hierarchical view of scene data and analysis, facilitates working with more complex scenes. Scene Explorer can sort, filter, and search a scene by any object type or property (including metadata). Added in 3ds Max 2008, it was the first component to facilitate .NET managed code in 3ds Max outside of MAXScript.
DWG import
3ds Max supports both import and linking of DWG files. Improved memory management in 3ds Max 2008 enables larger scenes with multiple objects to be imported.
Texture assignment/editing
3ds Max offers operations for creative texture and planar mapping, including tiling, mirroring, decals, angle, rotate, blur, UV stretching, and relaxation; Remove Distortion; Preserve UV; and UV template image export. The texture workflow includes the ability to combine an unlimited number of textures, a material/map browser with support for drag-and-drop assignment, and hierarchies with thumbnails. UV workflow features include Pelt mapping, which defines custom seams and enables users to unfold UVs according to those seams; copy/paste of materials, maps, and colors; and access to quick mapping types (box, cylindrical, spherical).
General keyframing
Two keying modes, set key and auto key, support different keyframing workflows. Fast and intuitive controls for keyframing, including cut, copy, and paste, let the user create animations with ease. Animation trajectories may be viewed and edited directly in the viewport.
Constrained animation
Objects can be animated along curves with controls for alignment, banking, velocity, smoothness, and looping, and along surfaces with controls for alignment. Path-controlled animation can be weighted between multiple curves, and the weighting itself can be animated. Objects can be constrained to animate with other objects in many ways, including look-at, orientation in different coordinate spaces, and linking at different points in time. These constraints also support animated weighting between more than one target. All resulting constrained animation can be collapsed into standard keyframes for further editing.
Skinning
Either the Skin or Physique modifier may be used to achieve precise control of skeletal deformation, so the character deforms smoothly as joints are moved, even in the most challenging areas, such as shoulders. Skin deformation can be controlled using direct vertex weights, volumes of vertices defined by envelopes, or both. Capabilities such as weight tables, paintable weights, and saving and loading of weights offer easy editing and proximity-based transfer between models, providing the accuracy and flexibility needed for complicated characters. The rigid-bind skinning option is useful for animating low-polygon models or as a diagnostic tool for regular skeleton animation. Additional modifiers, such as Skin Wrap and Skin Morph, can be used to drive meshes with other meshes and make targeted weighting adjustments in tricky areas.
Skeletons and inverse kinematics (IK)
Characters can be rigged with custom skeletons using 3ds Max bones, IK solvers, and rigging tools powered by motion capture data. All animation tools, including expressions, scripts, list controllers, and wiring, can be used along with a set of utilities specific to bones to build rigs of any structure and with custom controls, so animators see only the UI necessary to get their characters animated. Four plug-in IK solvers ship with 3ds Max: the history-independent solver, the history-dependent solver, the limb solver, and the spline IK solver. These solvers reduce the time it takes to create high-quality character animation. The history-independent solver delivers smooth blending between IK and FK animation and uses preferred angles to give animators more control over the positioning of affected bones. The history-dependent solver can solve within joint limits and is used for machine-like animation. IK limb is a lightweight two-bone solver, optimized for real-time interactivity, ideal for working with a character's arm or leg (a sketch of the two-bone case follows this feature list). The spline IK solver provides a flexible animation system with nodes that can be moved anywhere in 3D space; it allows for efficient animation of skeletal chains, such as a character's spine or tail, and includes easy-to-use twist and roll controls.
Integrated Cloth solver
In addition to reactor's cloth modifier, 3ds Max has an integrated cloth-simulation engine that enables the user to turn almost any 3D object into clothing, or to build garments from scratch. Collision solving is fast and accurate even in complex simulations. Local simulation lets artists drape cloth in real time to set up an initial clothing state before setting animation keys. Cloth simulations can be used in conjunction with other 3ds Max dynamic forces, such as Space Warps, and multiple independent cloth systems can be animated with their own objects and forces. Cloth deformation data can be cached to the hard drive to allow for nondestructive iterations and to improve playback performance.
Integration with Autodesk Vault
The Autodesk Vault plug-in, which ships with 3ds Max, consolidates users' 3ds Max assets in a single location, enabling them to automatically track files and manage work in progress. Users can easily and safely share, find, and reuse 3ds Max (and design) assets in a large-scale production or visualization environment.
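The two-bone limb case described above has a closed-form solution via the law of cosines. The following Python sketch is illustrative only (3ds Max's solvers are not public here, and the function name and clamping epsilon are my own choices); it solves a planar two-bone chain rooted at the origin.

```python
import math

def two_bone_ik(l1, l2, tx, ty):
    """Analytic planar two-bone IK: return (shoulder, elbow) angles in
    radians so a chain of segment lengths l1, l2 reaches target (tx, ty).
    Illustrative sketch, not 3ds Max code."""
    d = math.hypot(tx, ty)
    # Clamp the target distance onto the reachable annulus [|l1-l2|, l1+l2].
    d = max(abs(l1 - l2) + 1e-9, min(l1 + l2 - 1e-9, d))
    # Law of cosines gives the interior angle at the elbow;
    # a bend of 0 means the limb is fully straight.
    cos_int = (l1**2 + l2**2 - d**2) / (2.0 * l1 * l2)
    elbow = math.pi - math.acos(cos_int)
    # Shoulder angle = direction to target minus the triangle correction
    # (the minus sign picks one of the two mirror-image solutions).
    cos_corr = (l1**2 + d**2 - l2**2) / (2.0 * l1 * d)
    shoulder = math.atan2(ty, tx) - math.acos(cos_corr)
    return shoulder, elbow

print(two_bone_ik(1.0, 1.0, 1.2, 0.8))
```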
Industry usage
Many recent films have made use of 3ds Max (or earlier versions of the program under its previous names) for CGI animation, such as Avatar and 2012, which combine computer-generated graphics with live-action footage. 3ds Max has also been used in the development of 3D computer graphics for a number of video games. Architectural and engineering design firms use 3ds Max for developing concept art and previsualization.
Apples made with 3ds Max
Educational usage
Educational programs at secondary and tertiary levels use 3ds Max in their courses on 3D computer graphics and computer animation. Students in the FIRST competition for 3D animation are known to use 3ds Max.
Modeling techniques
Polygon modeling
Polygon modeling is more common in game design than any other modeling technique, as the very specific control over individual polygons allows for extreme optimization. Usually, the modeler begins with one of the 3ds Max primitives and, using tools such as bevel and extrude, adds detail to and refines the model. Versions 4 and up feature the Editable Polygon object, which simplifies most mesh editing operations and provides subdivision smoothing at customizable levels. Version 7 introduced the Edit Poly modifier, which allows the tools of the Editable Polygon object to be used higher in the modifier stack (i.e., on top of other modifications).
212
Predefined primitives
This is a basic method, in which one models something using only boxes, spheres, cones, cylinders and other predefined objects from the list of Predefined Standard Primitives or the list of Predefined Extended Primitives. One may also apply Boolean operations, including subtract, cut and connect. For example, one can make two spheres that behave as blobs and connect with each other; these are called metaballs.[citation needed] (See the field-function sketch below.)
Some of the 3ds Max primitives as they appear in the wireframe view of 3ds Max 9.
3ds Max Standard Primitives: Box (top right), Cone (top center), Pyramid (top left), Sphere (bottom left), Tube (bottom center) and Geosphere (bottom right).
3ds Max Extended Primitives: Torus Knot (top left), ChamferCyl (top center), Hose (top right), Capsule (bottom left), Gengon (bottom, second from left), OilTank (bottom, second from right) and Prism (bottom right).
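A minimal sketch of the metaball idea mentioned above: each ball contributes a falloff field to a scalar function, and the blended surface is an isosurface of the sum, so nearby balls merge smoothly. The inverse-square falloff below is a generic choice, not necessarily the formula 3ds Max uses.

```python
# Two "blobby" spheres: each contributes r^2 / |p - c|^2 to a scalar field;
# points where the summed field exceeds a threshold (here 1.0) lie inside
# the blended surface.
balls = [((0.0, 0.0, 0.0), 1.0), ((1.5, 0.0, 0.0), 1.0)]  # (center, radius)

def field(p):
    total = 0.0
    for (cx, cy, cz), r in balls:
        d2 = (p[0] - cx)**2 + (p[1] - cy)**2 + (p[2] - cz)**2
        total += r * r / max(d2, 1e-12)  # avoid division by zero at centers
    return total

# The midpoint between the two balls lies inside the blend:
print(field((0.75, 0.0, 0.0)) > 1.0)  # True -> the blobs have connected
```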
Standard primitives
Box: Produces a rectangular prism. An alternative variation of Box, called Cube, proportionally constrains the length, width and height.
Cylinder: Produces a cylinder.
Torus: Produces a torus, a ring with a circular cross section, sometimes referred to as a doughnut.
Teapot: Produces a Utah teapot. Since the teapot is a parametric object, the user can choose which parts of the teapot to display after creation; these parts include the body, handle, spout and lid.
Cone: Produces upright or inverted cones.
Sphere: Produces a full sphere, hemisphere, or other portion of a sphere.
Tube: Produces round or prismatic tubes; the tube is similar to the cylinder with a hole in it.
Pyramid: Produces a pyramid with a square or rectangular base and triangular sides.
Plane: Produces a special type of flat polygon mesh that can be enlarged by any amount at render time. The user can specify factors to magnify the size or number of segments, or both. Modifiers such as Displace can be added to a plane to simulate hilly terrain.
Geosphere: Produces spheres and hemispheres based on three classes of regular polyhedrons.
Extended primitives
Hedra: Produces objects from several families of polyhedra.
ChamferBox: Produces a box with beveled or rounded edges.
OilTank: Creates a cylinder with convex caps.
Spindle: Creates a cylinder with conical caps.
Gengon: Creates an extruded, regular-sided polygon with optionally filleted side edges.
Prism: Creates a three-sided prism with independently segmented sides.
Torus knot: Creates a complex or knotted torus by drawing 2D curves in the normal planes around a 3D curve. The 3D curve (called the base curve) can be either a circle or a torus knot. The result can be converted from a torus knot object to a NURBS surface.
ChamferCyl: Creates a cylinder with beveled or rounded cap edges.
Capsule: Creates a cylinder with hemispherical caps.
L-Ext: Creates an extruded L-shaped object.
C-Ext: Creates an extruded C-shaped object.
Hose: Creates a flexible object, similar to a spring.
Rendering
Scanline rendering
The default rendering method in 3ds Max is scanline rendering. Several advanced features have been added to the scanline renderer over the years, such as global illumination, radiosity, and ray tracing.
mental ray
mental ray is a production-quality renderer developed by mental images. It is integrated into the later versions of 3ds Max and is a powerful ray-tracing renderer with bucket rendering, a technique that efficiently distributes the rendering task for a single image between several computers using the TCP network protocol (see the sketch after this list).
RenderMan
A third-party connection tool to RenderMan pipelines is available for those who need to integrate Max into RenderMan render farms. RenderMan is used by Pixar for rendering several of their CGI animated films.
V-Ray
A third-party render engine plug-in for 3D Studio MAX. It is widely used, frequently substituting for the standard and mental ray renderers bundled with 3ds Max. V-Ray continues to be compatible with older versions of 3ds Max.
Brazil R/S
A third-party high-quality photorealistic rendering system created by SplutterFish, LLC, capable of fast ray tracing and global illumination.
FinalRender
Another third-party ray-tracing render engine, created by Cebas. Capable of simulating a wide range of real-world physical phenomena.
Fryrender
A third-party photorealistic, physically based, unbiased and spectral renderer created by RandomControl, capable of very high quality and realism.
Arion Render
A third-party hybrid GPU+CPU interactive, unbiased ray tracer created by RandomControl, based on Nvidia CUDA.
Indigo Renderer
A third-party photorealistic renderer with plugins for 3ds Max.
Maxwell Render
A third-party photorealistic rendering system created by Next Limit Technologies, providing robust materials and highly accurate unbiased rendering.
Octane Render
A third-party unbiased GPU ray tracer with plugins for 3ds Max, based on Nvidia CUDA.
BIGrender 3.0
Another third-party rendering plugin, capable of overcoming 3ds Max rendering memory limitations when rendering huge pictures.
LuxRender
An open-source ray tracer supporting 3ds Max, Cinema 4D, Softimage, and Blender. Focuses on photorealism by simulating real light physics as much as possible.
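The bucket idea mentioned under mental ray can be illustrated generically: carve the frame into tiles and hand them out to workers. The Python sketch below uses hypothetical host names and a naive round-robin dispatch; it is not mental ray's actual protocol.

```python
# Split a frame into fixed-size buckets (tiles), clipping at the borders.
def buckets(width, height, size=64):
    for y in range(0, height, size):
        for x in range(0, width, size):
            yield (x, y, min(size, width - x), min(size, height - y))

workers = ["node-a", "node-b", "node-c"]      # hypothetical render hosts
for i, tile in enumerate(buckets(1920, 1080)):
    host = workers[i % len(workers)]          # naive round-robin dispatch
    # In a real system each host would render its tile and send pixels back.
    if i < 3:
        print(host, "renders tile", tile)
```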
Licensing
Earlier versions (up to and including 3D Studio Max R3.1) required a special copy-protection device (called a dongle) to be plugged into the parallel port while the program ran, but later versions incorporated software-based copy prevention instead. Current versions require online registration. Because of the high price of the commercial version of the program, Autodesk also offers a free student version, which explicitly states that it is to be used for "educational purposes only". The student version has identical features to the full version, but is for single use only and cannot be installed on a network. The student license expires after three years, at which time the user, if still a student, may download the latest version, renewing the license for another three years. Autodesk also sells a perpetual student license, which allows 3ds Max to be used for the lifetime of the original purchaser, who must be a student at the time of purchase. The software cannot be used commercially, but Autodesk offers a significant discount when upgrading to a commercial license. Because final renders are watermark-free, the perpetual student license is suitable for portfolio creation. Typically, the perpetual student license version of 3ds Max is bundled with Maya, Softimage XSI, MotionBuilder, Mudbox and SketchBook Pro as a complete package.
Notes
[1] http://usa.autodesk.com/adsk/servlet/ps/dl/item?siteID=123112&id=15458146&linkID=9241177
[2] http://www.autodesk.com/3dsmax
[3] "Autodesk 3ds Max Detailed Features" (http://usa.autodesk.com/adsk/servlet/pc/index?siteID=123112&id=13567426), 2008-03-25
[4] History of Autodesk 3ds Max (http://area.autodesk.com/maxturns20/history)
[5] http://usa.autodesk.com/adsk/servlet/ps/dl/index?id=2334435&linkID=9241178&siteID=123112
External links
3ds Max official site (http://www.autodesk.com/3dsmax)
3ds Max resource site (http://area.autodesk.com/)
3ds Max free video tutorials (http://www.cgmeetup.net/home/tutorials/autodesk-3d-studio-max/)
Autodesk 3ds Max 2014 New Features video (http://www.cgmeetup.net/home/autodesk-maya-max-and-softimage-2014-sneak-peek-videos/)
3ds Max exporter (http://blog.sketchfab.com/3ds-max-exporter)
3D Studio Max (http://www.dmoz.org/Computers/Software/Graphics/3D/Rendering_and_Modelling/3D_Studio_Max/) at the Open Directory Project
History of 3ds Max (http://wiki.cgsociety.org/index.php/3ds_Max_History)
Pre-history of 3ds Max (http://www.asterius.com/atari/)
Anim8or
Anim8or screenshot
Developer(s): R. Steven Glanville
Initial release: July 20, 1999
Stable release: 0.95c / April 2, 2007
Preview release: 0.97d / September 21, 2008
Development status: Active [1]
Operating system: Microsoft Windows
Type: 3D modeling and animation
License: Freeware
Website: www.anim8or.com [2]
Anim8or is a freeware OpenGL-based 3D modeling and animation program by R. Steven Glanville, a software engineer at Nvidia. Currently at version 0.97, it is a compact program with several tools that would normally be expected only in high-end, paid software. To date, every version released has been under 2 MB, even though it does not make full use of the native Windows interface and carries some graphical elements of its own. Although few official tutorials have been posted by the author, many other users have posted their own on sites such as YouTube and the Anim8or home page. While Anim8or was once comparable to other freeware 3D animation software such as Blender, it has seen less progression in recent years.
Development
On July 20, 1999, a message was posted to the newsgroup comp.graphics.packages.3dstudio, introducing the first version of Anim8or to the public.[3] In its first week, the original version was downloaded almost 100 times.[4] The next version, 0.2, was released on September 6, 1999, containing bug fixes and the ability to save images as JPEG files. In the years since, newer versions have introduced features such as undo and redo commands, keyboard shortcuts, an improved renderer and morph targets. With each new version, the popularity of Anim8or has grown; it has been featured in several magazines, including 3D User, Freelog, c't and the Lockergnome newsletter. Anim8or's latest stable version, 0.95, was released to the public on November 4, 2006, although beta versions were available earlier for users wanting to test them and provide feedback. This version introduced features such as graphic material shaders, the ASL scripting language, plug-in support and numerous bug fixes. Version 0.95a was posted on December 2, 2006 and contains further bug fixes. Anim8or's mascot is a simple red robin, aptly named Robin, which most users learn to model and animate in Anim8or's "A Simple Walk Tutorial". Users are often also very familiar with the eggplant, a model first designed by Steven to demonstrate 3D printers at SIGGRAPH. It is likely the first model most Anim8or modellers have ever created, as it is taught in the introductory tutorial to demonstrate the basics of the modeler and the tools available.
Layout
Anim8or's interface is separated into four sections, each with its own tool set:
Object editor: individual objects are stored and edited within the object editor. Objects may be composed of primitives such as spheres, or more complex shapes made by extruding polygons along the z axis and adjusting the vertices. Materials are then applied, per face if desired. The user also has the option to make morph targets for each object.
Figure editor: in order to animate more complex models, they can be given a skeleton. Users can give each "bone" the ability to rotate on all three axes within certain limits and attach individual objects to each bone.
Sequence editor: an extension of the figure editor, allowing the use of keyframe animation to animate individual bones with an accuracy of 0.1.
Scene editor: elements from the three other sections are imported and arranged in the scene editor. The keyframes from the sequence editor can be modified, along with other variables, such as a figure's position in 3D space or the state of a morph target.
An image can be rendered in any of the four editors, but only in the scene editor can lights and other graphical elements be used. The interface is a mixture of Windows' native interface, for elements such as the right-click context menu, and one specific to Anim8or, such as the graphical icons in the left-hand toolbar.
Features
Although it is not as powerful as high-end commercial programs, it contains many features that are important to a 3D computer graphics package while remaining free. Such features include:
3D modeler with primitives such as spheres, cubes, and cylinders
Mesh modification and subdivision
Splines, extrusions, lathing, modifiers, bevelling and warping
TrueType font support, allowing for 2D and 3D text
The ability to import .3DS, .LWO and .OBJ files for modification
The ability to export .3DS, .OBJ, .VTX and .C files for use in external programs
Plug-in support, using the Anim8or Scripting Language (ASL)
3D object browser to allow the user to view 3D files in a specified directory
Textures in .BMP, .GIF and .JPG formats
Environment maps, bump maps, transparency and specularity, amongst others
Character editor with joints
Morph targets
Renderer supporting fog; infinite, local and spot lights; anti-aliasing; alpha channels and depth channels
Printing directly from the program
Volumetric shadows as well as ray-traced hard and soft shadows
A plain-text file format, allowing for the development of external tools such as Terranim8or
Hierarchies
A basic feature list can also be found at the Anim8or website [5], although the list is incomplete.
System requirements
As multimedia software goes, Anim8or has very low system requirements. Note, however, that certain features, particularly shadows, anti-aliasing and Anim8or's resident ray tracer, can quickly become burdens on a computer's resources. While originally designed for Windows, users have reported running it successfully on Apple computers with Connectix Virtual PC and on Linux with Wine. This may be partially due to Anim8or's stand-alone design: it can be copied onto a USB memory stick or other removable media and run directly from it on any computer that meets the minimum specification. The minimum requirements are:
300 MHz processor
Windows 95 or higher
OpenGL graphics card with full ICD support
64 MB of RAM (128 MB recommended, 256 MB with Windows XP)
5 MB of hard drive space (the application is less than 2 MB, but the manual and project/texture files can occupy several times this space)
Render made in Anim8or 0.97D utilizing new features including reflections and Ambient Occlusion
Future releases
Not much is known about which features will be modified or included in future versions, although users have posted suggestions on related forums. Inverse kinematics will likely be added,[6] as it was included in the latest release but disabled because it was not quite ready for use.[7] According to the Anim8or forums, an administrator heard back from the creator in 2011 and reported that a new release was not to be expected for quite a while.[] Suspected planned features are:
Fast AVI creation using OpenGL
Advanced material manager
Some of these features may not be included in the next release. "Anim8or has come a long way since the first release called v0.1. There are still many areas that need improvement, primarily the renderer, but it's getting close to what I had originally imagined as the magic v1.0. I don't plan on stopping there, but it'll be a nice milestone along the way." - R. Steven Glanville[8]
More recent discussions have suggested that a new version is in development; however, it may be a while before results are seen.[] On May 23, 2013, a v0.97e pre-beta was released, which included many new features. The Anim8or forum's administrator said that a stable 0.97 release would come any time from June to November.
Community
The Anim8or community is hosted on two forums: the official forum on the Anim8or.com website, and the user-run forum at Anim8orWorld [9]. The latter site has a built-in chat/shoutbox, a media gallery and a modelling workshop, and incorporates both Animanon and Dotan8, the community magazine. There are many fan sites hosted by community members, with user-created tutorials, image galleries and programs.
References
[1] http://www.anim8or.com/smf/index.php?topic=4644.0
[2] http://www.anim8or.com/main
[3] Glanville, R. Steven (July 20, 1999). "Anim8or: New free animation software available". comp.graphics.packages.3dstudio. Archived by Google Groups. (http://groups.google.ca/group/comp.graphics.packages.3dstudio/msg/cc12cc557013bb?hl=en&lr=)
[4] Glanville, R. Steven. Thanks for the support (http://www.anim8or.com/news/index.html). July 27, 1999. URL accessed at 03:02, 20 January 2006 (UTC)
[5] http://www.anim8or.com/main/welcome.html#Features
[8] Glanville, R. Steven. Anim8or.com home page (http://www.anim8or.com/main/welcome.html). January 29, 2005. URL accessed at 03:02, 20 January 2006 (UTC)
[9] http://Anim8orWorld.com
External links
Official Anim8or site, containing tutorials, news, a gallery, etc. (http://www.anim8or.com/)
Anim8orWorld (http://anim8orWorld.com/Forum)
Anim8or User Manual (http://www.anim8or.com/manual/index.html)
Terranim8or: an external tool for developing terrain and special effects (http://www.biederman.net/leslie/terranim8or/terranim8or.htm)
Original newsgroup message introducing Anim8or, archived by Google Groups (http://groups.google.ca/groups?hl=en&lr=&selm=379564AD.181C6D85@anim8or.com)
Description of Anim8or's file format, .AN8 (http://www.anim8or.com/resources/an8_format.txt)
User-run Anim8or wiki (http://wiki.anim8or.org/)
User-led movie projects, "Anim8or: the Movie" (http://anim8orstudios.com/)
Specification of the Anim8or Scripting Language (http://www.anim8or.com/scripts/spec/Anim8or_Scripting_Language.html)
List of available plugins (http://homepage.ntlworld.com/w.watson3/main/parameteric.html)
Web browser plugin for showing .an8 files on webpages (http://www.3dmlw.com)
ASAP
Advanced Systems Analysis Program
Development status: Maintained
Operating system: Windows
Type: CAD software
License: Proprietary. Copyright (c) 1982-2010, Breault Research Organization, Inc.
Website: [1]
The Advanced Systems Analysis Program (ASAP) is optical engineering software used to simulate optical systems. ASAP can handle coherent as well as incoherent light sources. It is a non-sequential ray tracing tool, which means that it can be used not only to analyze lens systems but also for stray light analysis. It uses a Gaussian beam approximation for the analysis of coherent sources.
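For orientation, the standard textbook Gaussian beam relation (not ASAP's internal model) gives the beam radius as w(z) = w0 * sqrt(1 + (z/zR)^2), with Rayleigh range zR = pi * w0^2 / lambda. A minimal Python sketch:

```python
import math

def beam_radius(z, w0, wavelength):
    """1/e^2 radius of a TEM00 Gaussian beam a distance z from its waist.
    Standard textbook relation; ASAP's internals are not reproduced here."""
    z_r = math.pi * w0**2 / wavelength   # Rayleigh range
    return w0 * math.sqrt(1.0 + (z / z_r)**2)

# Example: HeNe laser (632.8 nm), 0.5 mm waist radius, 1 m downstream.
print(beam_radius(1.0, 0.5e-3, 632.8e-9))  # about 0.00064 m
```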
External links
NASA Tech Briefs article on ASAP [2]
Breault Research Organization website [3]
References
[1] http://www.breault.com/software/asap.php
[2] http://www.techbriefs.com/component/option,com_docman/task,doc_details/gid,2813/Itemid,41
[3] http://www.breault.com/
Blender
Blender 2.66 welcome screen
Developer(s): Blender Foundation
Initial release: 1995
Stable release: 2.68 / July 18, 2013
Written in: C, C++ and Python[]
Operating system: FreeBSD, GNU/Linux, Mac OS X and Microsoft Windows[1]
Type: 3D computer graphics software
License: GNU General Public License v2 or later
Website: www.blender.org [2]
Blender is a free and open-source 3D computer graphics software product used for creating animated films, visual effects, art, 3D printed models, interactive 3D applications and video games. Blender's features include 3D modeling, UV unwrapping, texturing, rigging and skinning, fluid and smoke simulation, particle simulation, soft body simulation, animating, match moving, camera tracking, rendering, video editing and compositing. It also features a built-in game engine.
History
The Dutch animation studio Neo Geo and Not a Number Technologies (NaN) developed Blender as an in-house application. The primary author was Ton Roosendaal, who previously wrote a ray tracer called Traces for Amiga in 1989. The name Blender was inspired by a song by Yello, from the album Baby.[3] Roosendaal founded NaN in June 1998 to further develop and distribute the program. They initially distributed the program as shareware until NaN went bankrupt in 2002.
Blender The creditors agreed to release Blender under the GNU General Public License, for a one-time payment of 100,000 (US$100,670 at the time). On July 18, 2002, Roosendaal started a Blender funding campaign to collect donations, and on September 7, 2002, announced that they had collected enough funds and would release the Blender source code. Today, Blender is free, open-source software and isapart from the Blender Institute's two half-time and two full-time employeesdeveloped by the community.[4] The Blender Foundation initially reserved the right to use dual licensing, so that, in addition to GNU GPL, Blender would have been available also under the Blender License that did not require disclosing source code but required payments to the Blender Foundation. However, they never exercised this option and suspended it indefinitely in 2005.[5] Currently, Blender is solely available under GNU GPL.
Suzanne
In January and February 2002 it was clear that NaN could not survive and would close its doors in March. Nevertheless, they put out one more release: 2.25. As a sort of easter egg, and a last personal tag, the artists and developers decided to add a 3D model of a chimpanzee. It was created by Willem-Paul van Overbruggen (SLiD3), who named it Suzanne after the orangutan in the Kevin Smith film Jay and Silent Bob Strike Back. Suzanne is Blender's alternative to more common test models such as the Utah Teapot and the Stanford Bunny. A low-polygon model with only 500 faces, Suzanne is often used as a quick and easy way to test material, animation, rigs, texture, and lighting setups, and is also frequently used in joke images[citation needed]. Suzanne is still included in Blender. The largest Blender contest gives out an award called the Suzanne Awards.
Features
Blender has a relatively small installation size of about 70 megabytes for builds and 115 megabytes for official releases. Official versions of the software are released for Linux, Mac OS X, Microsoft Windows, and FreeBSD[6] in both 32-bit and 64-bit variants. Though it is often distributed without the extensive example scenes found in some other programs,[7] the software contains features that are characteristic of high-end 3D software.[8] Among its capabilities are:
Support for a variety of geometric primitives, including polygon meshes, fast subdivision surface modeling, Bézier curves, NURBS surfaces, metaballs, multi-resolution digital sculpting (including map baking, remeshing, resymmetrization and decimation), outline fonts, and a new n-gon modeling system called B-mesh.
Internal render engine with scanline ray tracing, indirect lighting, and ambient occlusion that can export in a wide variety of formats.
A path-tracer render engine called Cycles, which can use the GPU to assist rendering; Cycles has supported Open Shading Language since Blender 2.65.[9]
Integration with a number of external render engines through plugins.
Keyframed animation tools including inverse kinematics, armature (skeletal), hook, curve and lattice-based deformations, shape keys (morphing), non-linear animation, constraints, and vertex weighting.
Simulation tools for soft-body dynamics (including mesh collision detection), LBM fluid dynamics, smoke simulation, Bullet rigid-body dynamics, and an ocean generator with waves.
A particle system that includes support for particle-based hair.
Modifiers to apply non-destructive effects.
Python scripting for tool creation and prototyping, game logic, importing and/or exporting from other formats, task automation and custom tools (see the sketch after this list).
Basic non-linear video/audio editing.
The Blender Game Engine, a sub-project, offering interactivity features such as collision detection, a dynamics engine, and programmable logic; it also allows the creation of stand-alone, real-time applications ranging from architectural visualization to video game construction.
A fully integrated node-based compositor within the rendering pipeline, accelerated with OpenCL.
Procedural and node-based textures, as well as texture painting, projective painting, vertex painting, weight painting and dynamic painting.
Realtime control during physics simulation and rendering.
Camera and object tracking.
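As an illustration of the Python scripting mentioned in the list above, here is a minimal script against Blender's bundled Python API (bpy). It assumes a Blender of roughly the 2.6x era; exact operator names and defaults can vary between versions, and the script only runs inside Blender, not a stand-alone interpreter.

```python
# Run inside Blender's Python console or text editor; bpy ships with
# Blender and is not available outside it.
import bpy

# Add a UV sphere to the current scene and give it a recognizable name.
bpy.ops.mesh.primitive_uv_sphere_add(location=(0.0, 0.0, 2.0))
obj = bpy.context.active_object
obj.name = "ScriptedSphere"

# Keyframe its location on frames 1 and 50 for a simple drop animation.
obj.keyframe_insert(data_path="location", frame=1)
obj.location.z = 0.0
obj.keyframe_insert(data_path="location", frame=50)
```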
A 3D rendering with ray tracing and ambient occlusion using Blender and YafaRay
Sintel and her dragon rendered with Blender. Blender offers the ability to create realistic-looking character models.
User interface
Blender has had a reputation of being difficult to learn for users accustomed to other 3D graphics software[citation needed]. Nearly every function has a direct keyboard shortcut, and there can be several different shortcuts per key. Since Blender became free software, there has been effort to add comprehensive contextual menus as well as to make the tool usage more logical and streamlined. There have also been efforts to visually enhance the user interface, with the introduction of color themes, transparent floating widgets, a new and improved object tree overview, and other small improvements (such as a color picker widget); Blender's user interface underwent a significant update during the 2.5x series. Blender's user interface incorporates the following concepts:
Editing modes
The two primary modes of work are Object Mode and Edit Mode, which are toggled with the Tab key. Object Mode is used to manipulate individual objects as a unit, while Edit Mode is used to manipulate the actual object data. For example, Object Mode can be used to move, scale, and rotate entire polygon meshes, and Edit Mode can be used to manipulate the individual vertices of a single mesh. There are also several other modes, such as Vertex Paint, Weight Paint, and Sculpt Mode. The 2.45 release also had the UV Mapping Mode, but it was merged with Edit Mode in 2.46 Release Candidate 1.[10]
Hotkey utilization
Most of the commands are accessible via hotkeys. Until the 2.x and especially the 2.3x versions, this was in fact the only way to give commands, and this was largely responsible for creating Blender's reputation as a difficult-to-learn program. The newer versions have more comprehensive GUI menus.
Numeric input
Numeric buttons can be "dragged" to change their value directly without the need to aim at a particular widget, saving screen real estate and time. Both sliders and number buttons can be constrained to various step sizes with modifiers like the Ctrl and Shift keys. Python expressions can also be typed directly into number entry fields, allowing mathematical expressions to specify values.
Workspace management
The Blender GUI is made up of one or more screens, each of which can be divided into sections and subsections that can be of any of Blender's view or window types. Each window type's own GUI elements can be controlled with the same tools that manipulate the 3D view; for example, one can zoom in and out of GUI buttons in the same way one zooms in and out in the 3D viewport. The GUI viewport and screen layout are fully user-customizable. It is possible to set up the interface for specific tasks such as video editing, UV mapping or texturing by hiding features not used for the task.[11]
Hardware requirements
Blender has very low hardware requirements compared to other 3D suites.[12][13] However, for advanced effects and high-poly editing, a more powerful system is needed.
Input
Three-button mouse
File format
Blender features an internal file system that can pack multiple scenes into a single file (called a ".blend" file). All of Blender's ".blend" files are forward, backward, and cross-platform compatible with other versions of Blender, with the following exceptions:
Loading animations stored in post-2.5 files in Blender pre-2.5. This is due to the reworked animation subsystem introduced in Blender 2.5 being inherently incompatible with older versions.
Loading meshes stored in post-2.63 files. This is due to the introduction of BMesh [15], a more versatile and feature-rich mesh format.
Snapshot ".blend" files can be auto-saved periodically by the program, making it easier to survive a program crash. All scenes, objects, materials, textures, sounds, images, and post-production effects for an entire animation can be stored in a single ".blend" file. Data loaded from external sources, such as images and sounds, can also be stored externally and referenced through either an absolute or relative pathname. Likewise, ".blend" files themselves can also be used as libraries of Blender assets. Interface configurations are retained in the ".blend" files, such that what you save is what you get upon load. This file can be stored as "user defaults" so that this screen configuration, as well as all the objects stored in it, is used every time you load Blender.
The actual ".blend" file is similar to the EA Interchange File Format, starting with its own header (for example, BLENDER_v248) that specifies the version, endianness and pointer size, followed by the file's DNA (a full specification of the data format used) and, finally, a collection of binary blocks storing the actual data. The presence of the DNA block in .blend files means the format is self-descriptive: any software able to decode the DNA can read any .blend file, even if some fields or data block types must be ignored. Although it is relatively difficult to read and convert a ".blend" file to another format using external tools, there are several software packages able to do this, for example readblend. A wide variety of import/export scripts that extend Blender's capabilities (accessing the object data via an internal API) make it possible to interoperate with other 3D tools. CAD software uses surface description models that are significantly different from the ones used in Blender, because Blender is not designed for CAD; therefore, the direct import or export of CAD files is not possible. Jeroen Bakker documented the Blender file format to allow interoperation with other tooling; the document can be found at The Mystery of the Blend website.[16] A DNA structure browser[17] is also available on this site.
Blender organizes data as various kinds of "data blocks", such as Objects, Meshes, Lamps, Scenes, Materials, Images and so on. An object in Blender consists of multiple data blocks: for example, what the user would describe as a polygon mesh consists of at least an Object and a Mesh data block, and usually also a Material and many more, linked together. This allows various data blocks to refer to each other. There may be, for example, multiple Objects that refer to the same Mesh; this lets Blender keep a single copy of the mesh data in memory, so that editing the shared mesh changes the shape of all Objects using it. This data-sharing approach is fundamental to Blender's philosophy and its user interface, and can be applied to multiple data types. Objects, meshes, materials, textures etc. can also be linked to from other .blend files, allowing the use of .blend files as reusable resource libraries.
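Based on the header description above and the community file-format documentation it cites, a minimal Python sketch that reads the 12-byte .blend header might look as follows. The byte meanings (pointer-size and endianness flags) are taken from that documentation, so treat this as illustrative rather than normative.

```python
def read_blend_header(path):
    """Parse the 12-byte .blend header ('BLENDER_v248' style): magic,
    pointer size, endianness, version. Sketch only, per the community
    file-format documentation; compressed .blend files are not handled."""
    with open(path, "rb") as f:
        head = f.read(12)
    if head[:7] != b"BLENDER":
        raise ValueError("not an uncompressed .blend file")
    pointer_size = 8 if head[7:8] == b"-" else 4           # '-' 64-bit, '_' 32-bit
    endianness = "little" if head[8:9] == b"v" else "big"  # 'v' / 'V'
    version = head[9:12].decode("ascii")                   # e.g. '248' -> 2.48
    return pointer_size, endianness, version

# Example (with a file of your own):
# print(read_blend_header("scene.blend"))  # e.g. (8, 'little', '268')
```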
Development
Since the source was opened, Blender has experienced significant refactoring of the initial codebase and major additions to its feature set. Improvements include an animation system refresh;[24] a stack-based modifier system;[25] an updated particle system[26] (which can also be used to simulate hair and fur); fluid dynamics; soft-body dynamics; GLSL shader support[27] in the game engine; advanced UV unwrapping;[28] a fully recoded render pipeline, allowing separate render passes and "render to texture"; node-based material editing and compositing; and projection painting.[29]
Part of these developments were fostered by Google's Summer of Code program, in which the Blender Foundation has participated since 2005. The current stable release version is 2.67b[] (as of 30 May 2013), the second bugfix release to version 2.67, which was released on May 7, 2013 and updated as 2.67a on May 22, 2013. New features in 2.67 included:[]
A new Freestyle rendering engine
An improved paint system
Improved compositing and nodes, and improvements to the Cycles render engine, the motion tracker, the mesh editing tools, the 3D printing add-on, and other add-ons
Besides the new features, 260 bugs from previous releases were fixed.
Support
In the month following the release of Blender v2.44, it was downloaded 800,000 times;[30] this worldwide user base forms the core of the support mechanisms for the program. Most users learn Blender through community tutorials and discussion forums on the internet, such as Blender Artists;[31] another learning method is to download and inspect ready-made Blender models. Numerous other sites, for example BlenderArt Magazine[32] (a free, downloadable magazine, with each issue handling a particular area of 3D development) and BlenderNation [33], provide information on everything surrounding Blender, showcase new techniques and features, and provide tutorials and other guides.
Tears of Steel promotional poster
Blender started out as an in-house tool for a Dutch commercial animation company, NeoGeo.[34] Blender has been used for television commercials in several parts of the world, including Australia,[35] Iceland,[36] Brazil,[37][38] Russia[39] and Sweden.[40] The first large professional project that used Blender was Spider-Man 2, where it was primarily used to create animatics and pre-visualizations for the storyboard department.
"As an animatic artist working in the storyboard department of Spider-Man 2, I used Blender's 3D modeling and character animation tools to enhance the storyboards, re-creating sets and props, and putting into motion action and camera moves in 3D space to help make Sam Raimi's vision as clear to other departments as possible."[41] - Anthony Zierhut,[42] animatic artist, Los Angeles.
The French-language film Friday or Another Day (Vendredi ou un autre jour) was the first 35 mm feature film to use Blender for all the special effects, made on GNU/Linux workstations.[43] It won a prize at the Locarno International Film Festival. The special effects were by Digital Graphics[44] of Belgium. Blender has also been used for shows on the History Channel, alongside many other professional 3D graphics programs.[45]
Tomm Moore's The Secret of Kells, which was partly produced in Blender by the Belgian studio Digital Graphics, was nominated for an Oscar in the category Best Animated Feature Film.[46]
Plumferos, a commercial animated feature film created entirely in Blender,[47] premiered in February 2010 in Argentina. Its main characters are anthropomorphic talking animals.
Open Projects
Every one to two years, the Blender Foundation announces a new creative project to help drive innovation in Blender.
References
[2] http://www.blender.org
[10] New features currently in SVN (http://web.archive.org/web/20080512001842/http://www.blender.org/development/current-projects/changes-since-245/). Blender.org
[11] Using Blender with multiple monitors (http://www.blenderguru.com/quick-tip-use-blender-on-multiple-monitors/). Blenderguru.com. Retrieved on 2012-07-06.
[15] http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.63/BMesh
[16] Jeroen Bakker. The mystery of the blend. The Blender file format explained (http://www.atmind.nl/blender/mystery_ot_blend.html). Atmind.nl (2009-03-27). Retrieved on 2012-07-06.
[17] Blender SDNA 249. Internal SDNA structures (http://www.atmind.nl/blender/blender-sdna-249.html). Atmind.nl. Retrieved on 2012-07-06.
[18] Benoît Saint-Moulin. 3D softwares comparisons table (http://www.tdt3d.be/articles_viewer.php?art_id=99), TDT 3D, November 7, 2007
[19] The Big CG Survey 2010, Industry Perspective (http://cgenie.com/articles/1289-big-cg-survey-2010-industry-perspective.html), CGenie, 2010
[20] The Big CG Survey 2010, Initial Results (http://cgenie.com/articles/1158-cgenies-big-cg-survey-is-now-open-have-your-say.html), CGenie, 2010
[21] Scientific Visualization Lecture 7: Other Visualization software, Patrik Malm, Centre for Image Analysis (http://www.it.uu.se/edu/course/homepage/vetvis/ht09/Handouts/Lecture07.pdf), Swedish University of Agricultural Sciences and Uppsala University, p. 32
[22] Blender's 2.56 release log, "What to Expect" and "User Interface" details (http://www.blender.org/development/release-logs/blender-256-beta/). Blender.org. Retrieved on 2012-07-06.
[23] The Making of Sintel (http://web.archive.org/web/20110707063540/http://www.3dworldmag.com/2011/02/09/the-making-of-sintel/4/). 3D World magazine (2011-02-09)
[24] Blender Animation system refresh project (http://wiki.blender.org/index.php/BlenderDev/AnimationUpdate). Wiki.blender.org. Retrieved on 2012-07-06.
[25] Modifiers (http://wiki.blender.org/index.php/Blenderdev/Modifiers). Wiki.blender.org. Retrieved on 2012-07-06.
[26] New Particle options and Guides (http://www.blender.org/development/release-logs/blender-240/new-particle-options-and-guides/). Blender.org. Retrieved on 2012-07-06.
[27] GLSL Pixel and Vertex shaders (http://www.blender.org/development/release-logs/blender-241/glsl-pixel-and-vertex-shaders/). Blender.org. Retrieved on 2012-07-06.
[28] Subsurf UV Mapping (http://www.blender.org/development/release-logs/blender-241/subsurf-uv-mapping/). Blender.org. Retrieved on 2012-07-06.
[31] Blenderartists.org (http://www.blenderartists.org/forum/). Blenderartists.org. Retrieved on 2012-07-06.
[32] Blenderart.org (http://blenderart.org/). Blenderart.org. Retrieved on 2012-07-06.
[33] http://www.blendernation.com/
[34] History (http://www.blender.org/blenderorg/blender-foundation/history/). Blender.org (2002-10-13). Retrieved on 2012-07-06.
[35] Blender in TV Commercials (http://www.studiorola.com/news/blender-in-tv-commercials/). Studiorola.com (2009-09-26). Retrieved on 2012-07-06.
[36] Midstraeti Showreel on the Blender Foundation's official YouTube channel (http://www.youtube.com/watch?v=TWSAdAO6ynU). Youtube.com (2010-11-02). Retrieved on 2012-07-06.
[39] Russian Soda Commercial by ARt DDs (http://www.blendernation.com/2010/08/25/russian-soda-commercial-by-art-dds/). Blendernation.com (2010-08-25). Retrieved on 2012-07-06.
[40] Apoteksgruppen ELW TV Commercial made with Blender (http://vimeo.com/21344454). Vimeo.com (2011-03-22). Retrieved on 2012-07-06.
[41] Testimonials
[42] Anthonyzierhut.com (http://www.anthonyzierhut.com/). Anthonyzierhut.com. Retrieved on 2012-07-06.
[44] Digitalgraphics.be (http://www.digitalgraphics.be). Digitalgraphics.be. Retrieved on 2012-07-06.
[46] The Secret of Kells nominated for an Oscar! (http://web.archive.org/web/20100618021840/http://www.blendernation.com/the-secret-of-kells-nominated-for-an-oscar/). Blendernation.com (2010-02-04).
[48] Peach.blender.org (http://peach.blender.org/). Peach.blender.org (2008-10-03). Retrieved on 2012-07-06.
[49] Durian.blender.org (http://durian.blender.org/). Durian.blender.org. Retrieved on 2012-07-06.
[50] How long is the movie? (http://durian.blender.org/news/how-long-is-the-movie/). Durian.blender.org (2010-04-15). Retrieved on 2012-07-06.
[51] Sintel Official Premiere (http://durian.blender.org/news/sintel-official-premiere/). Durian.blender.org (2010-08-16). Retrieved on 2012-07-06.
[52] Sintel The Game announcement (http://blenderartists.org/forum/showthread.php?t=186893). Blenderartists.org. Retrieved on 2012-07-06.
[53] Sintel The Game website (http://sintelgame.org/). Sintelgame.org. Retrieved on 2012-07-06.
Further reading
Van Gumster, Jason (2009). Blender For Dummies. Indianapolis, Indiana: Wiley Publishing, Inc. p. 408. ISBN 978-0-470-40018-0.
"Blender 3D Design, Spring 2008" (http://ocw.tufts.edu/Course/57). Tufts OpenCourseWare. Tufts University. 2008. Retrieved July 23, 2011.
"Release Logs" (http://www.blender.org/development/release-logs/). Blender.org. Blender Foundation. Retrieved July 23, 2011.
External links
Official website (http://www.blender.org/)
Blender Artists Community (http://www.blenderartists.org/forum/)
BlenderNation: Blender news site (http://www.blendernation.com/)
Blender (http://www.dmoz.org/Computers/Software/Graphics/3D/Rendering_and_Modelling/Blender/) at the Open Directory Project
BlenderArt Magazine: a bi-monthly magazine for Blender learners (http://www.blenderart.org/)
Blender NPR: dedicated to stylized and non-photorealistic rendering (http://blendernpr.org)
Brazil R/S
Brazil Rendering System was a proprietary commercial plugin for 3D Studio Max and Autodesk VIZ. Steve Blackmon and Scott Kirvan started developing Brazil R/S while working as the R&D team of Blur Studio, and formed the company SplutterFish to sell and market Brazil. It was capable of photorealistic rendering using fast ray tracing and global illumination. It was used by computer graphics artists to generate content for print, online content, broadcast solutions and feature films. Some major examples are Star Wars Episode III: Revenge of the Sith,[1] Sin City,[2] Superman Returns[3] and The Incredibles.[4] Splutterfish was acquired by Caustic Graphics in 2008,[5] which was later acquired by Imagination Technologies in December 2010.[6] Imagination Technologies announced Brazil's end-of-life, effective May 14, 2012.[7]
BRL-CAD
Original author(s): Mike Muuss
Developer(s): Army Research Laboratory
Initial release: 1984
Stable release: 7.22.0 / June 26, 2012
Operating system: Cross-platform
Type: CAD
License: BSD, LGPL
Website: www.brlcad.org [1]
BRL-CAD is a constructive solid geometry (CSG) solid modeling computer-aided design (CAD) system. It includes an interactive geometry editor, ray tracing support for graphics rendering and geometric analysis, computer network distributed framebuffer support, scripting, and image-processing and signal-processing tools. The entire package is distributed in source code and binary form. Although BRL-CAD can be used for a variety of engineering and graphics applications, the package's primary purpose continues to be the support of ballistic and electromagnetic analyses. In keeping with the Unix philosophy of developing independent tools to perform single, specific tasks and then linking them together, BRL-CAD is basically a collection of libraries, tools, and utilities that work together to create, raytrace, and interrogate geometry and to manipulate files and data. In contrast to many other 3D modelling applications, BRL-CAD uses constructive solid geometry rather than boundary representation.[2] This means BRL-CAD can "study physical phenomena such as ballistic penetration and thermal, radiative, neutron, and other types of transport".[3] The BRL-CAD libraries are designed primarily for the geometric modeler who also wants to tinker with software and design custom tools. Each library is designed for a specific purpose: creating, editing, and ray tracing geometry, and image handling. The application side of BRL-CAD also offers a number of tools and utilities that are primarily concerned with geometric conversion, interrogation, image format conversion, and command-line-oriented image manipulation.
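One reason CSG pairs naturally with ray-based interrogation is that Boolean operations can be evaluated directly on the intervals where a ray overlaps each primitive. The Python sketch below is a generic illustration of that idea, not BRL-CAD code: each solid reports [t_in, t_out] spans along a ray, and intersection/difference combine those spans.

```python
# CSG booleans on ray/solid overlap intervals (each interval is (t_in, t_out)).
def intersect(a, b):
    """Spans where the ray is inside both solids."""
    out = []
    for a0, a1 in a:
        for b0, b1 in b:
            lo, hi = max(a0, b0), min(a1, b1)
            if lo < hi:
                out.append((lo, hi))
    return out

def subtract(a, b):
    """Spans inside solid a but outside solid b."""
    out = list(a)
    for b0, b1 in b:
        nxt = []
        for a0, a1 in out:
            if a0 < b0:                      # piece before the cut
                nxt.append((a0, min(a1, b0)))
            if b1 < a1:                      # piece after the cut
                nxt.append((max(a0, b1), a1))
        out = nxt
    return out

# A ray crossing a slab (t = 1..5) with a hole (t = 2..3) drilled out:
print(subtract([(1.0, 5.0)], [(2.0, 3.0)]))   # [(1.0, 2.0), (3.0, 5.0)]
print(intersect([(1.0, 5.0)], [(2.0, 3.0)]))  # [(2.0, 3.0)]
```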
History
In 1979, the U.S. Army Ballistic Research Laboratory (BRL) (now the United States Army Research Laboratory) expressed a need for tools that could assist with the computer simulation and engineering analysis of combat vehicle systems and environments. When no existing computer-aided design (CAD) package was found to be adequate for this purpose, BRL software developers, led by Mike Muuss, began assembling a suite of utilities capable of interactively displaying, editing, and interrogating geometric models. This suite became known as BRL-CAD. Development of BRL-CAD as a package began in 1983; the first public release was made in 1984. BRL-CAD became an open source project in December 2004.
Lead developer Mike Muuss works on the XM-1 tank in BRL-CAD at a PDP-11/70 terminal, circa 1980.
The BRL-CAD source code repository is believed to be the oldest public version-controlled codebase in the world that is still under active development, dating back to 1983-12-16 00:10:31 UTC.[4]
References
[1] http://www.brlcad.org
[3] BRL-CAD overview on their wiki (http://brlcad.org/wiki/Overview#Why_CSG_Modeling.3F)
External links
Official website (http://brlcad.org) Future ideas (http://brlcad.org/~sean/ideas.html) BRL-CAD (http://sourceforge.net/projects/brlcad/) on SourceForge.net
Free support
#brlcad on irc.freenode.net (irc://irc.freenode.net/brlcad) Mailing lists (http://sourceforge.net/mail/?group_id=105292) Support requests (http://sourceforge.net/tracker/?group_id=105292&atid=640803)
Commercial support
SURVICE Engineering, Inc. (http://www.survice.com) at www.BRLCAD.com (http://www.brlcad.com)
Form-Z
Developer(s): AutoDesSys
Stable release: 6.6 / February 2008
Operating system: Microsoft Windows, Mac OS X
Type: 3D computer graphics
License: Proprietary
Website: www.formz.com [1]
formZ is a computer-aided design (CAD) tool developed by AutoDesSys for all design fields that deal with the articulation of 3D spaces and forms; it is used for 3D modeling, drafting, animation and rendering.
Overview
formZ is a general-purpose[2] solid and surface modeler with an extensive set of 2D/3D form-manipulating and sculpting capabilities. It is a design tool for architects, landscape architects, urban designers, engineers, animators and illustrators, movie makers, industrial and interior designers, and all other design areas. formZ runs on Windows as well as on Macintosh computers, and in addition to English it is also available in German, Italian, Spanish, French, Greek, Korean and Japanese.
Modeling
In general, formZ allows design in 3D or in 2D, using numeric or interactive graphic input through either line or smooth-shaded (OpenGL) drawings, across its drafting, modeling, rendering, and animation environments. Key modeling features include Boolean solids for generating complex composite objects; the ability to create curved surfaces from a variety of splines, including NURBS and Bézier/Coons patches, and mechanical or organic forms using metaformz, nurbz, patches, subdivisions, displacements, or skinning; and specialty tools such as terrain models, Platonic solids, geodesic spheres, double-line wall objects, staircases, helixes, screws, and bolts. In addition, formZ modeling supports methods for transforming and morphing 3D shapes and allows the production of both animated visualizations of scenes and the capture of 3D shapes as they morph into other forms, introducing modeling methods that explore 3D forms beyond traditional means. Technical, output-oriented modeling allows users to refine a design with double-precision CAD accuracy to full structural detail, for 3D visualization, the production of 2D construction drawings, 3D printing, rapid prototyping, and CNC milling, and offers information management of bills of materials and spreadsheet support for construction documents.
Animation
formZ offers a seamlessly integrated animation environment, where objects, lights, cameras, and surface styles (colors) can be animated and transformed over time. The animation features are object-centric and are applied as modeling operations; in addition to supporting the production of animated visualizations, they also support dynamic modeling and the creation of forms that go significantly beyond the repertoire of conventional modeling tools. This offers a powerful avenue for design exploration.
Rendering
RenderZone Plus provides photorealistic rendering with global illumination based on final gather (ray tracing), ambient occlusion, and radiosity, for advanced simulation of lighting effects; the illumination of a scene takes into account the accurate distribution of light in the environment, producing highly realistic renderings. Good results can be achieved in a short time, with little setup effort and easy control. Key rendering features include multiple light types: distant (sun), cone, point, projector, area, custom, line, environment, and atmospheric, where the environment and atmospheric lights are advanced light types especially optimized for global illumination. Both procedural and pre-captured textures are offered and can be mapped onto the surfaces of objects using six different mapping methods: flat, cubic, cylindrical, spherical, parametric, or UV coordinates (a sketch of the spherical case follows). Decals can be attached on top of other surface styles to produce a variety of rendering effects, such as labels on objects, graffiti on walls, partially reflective surfaces, masked transparencies, and more. State-of-the-art shaders are used to render surfaces and other effects: a surface style is defined by up to four layers of shaders, which produce color, reflection, transparency, and bump effects, applied independently or correlated. Libraries with many predefined materials are included and can be easily extended and customized. Also available is a sketch rendering mode that produces non-photorealistic images, which appear as if they were drawn by manual rendering techniques such as oil painting, water color, or pencil hatching.
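As an illustration of one of the mapping methods listed above, the following Python sketch computes standard spherical-projection (u, v) texture coordinates from a direction relative to the object's center. This is the common textbook convention; formZ's exact conventions (seam placement, pole handling) may differ.

```python
import math

def spherical_uv(x, y, z):
    """Map a direction from the object's center to (u, v) in [0, 1]^2
    using the standard spherical projection (longitude/latitude)."""
    r = math.sqrt(x*x + y*y + z*z) or 1.0       # guard the zero vector
    u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)  # longitude
    v = 0.5 - math.asin(y / r) / math.pi          # latitude
    return u, v

print(spherical_uv(1.0, 0.0, 0.0))  # (0.5, 0.5): a point on the equator
```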
Literature
Pierluigi Serraino: History of form Z, Birkhäuser, 2002, ISBN 3-7643-6563-3
External links
AutoDesSys, the publishers of formZ [9]
formZ User Gallery [10]
form.Z Videos [11]
formZ User Forum [12]
Notes
[1] http://www.formz.com
[2] http://blog.novedge.com/2007/05/an_interview_wi_2.html
[3] http://www.formz.com/forum2/messages/16/40392.html?1262203691
[4] http://www.cgw.com/Publications/CGW/2007/Volume-30-Issue-7-July-2007-/Ship-Shape.aspx
[5] http://www.formz.com/gallery2/gallery.php?A=206&L=M
[6] http://www.formz.com/gallery2/gallery.php?A=231&L=R
[7] http://www.oliverscholl.com/portfolio/Selects.html#3
[8] http://www.formz.com/gallery2/gallery.php?A=255&L=T
[9] http://www.autodessys.com
[10] http://www.formz.com/gallery/user_page.php
[11] http://www.formz.com/products/formz/formzVideo_320.php?startMovie=formZ_intro_p1_320.flv
[12] http://www.formz.com/forum/discus41/discus.cgi?pg=topics
Holomatix Rendition
Developer(s): Holomatix Ltd
Stable release: 1.0.458 / 8 October 2010
Holomatix Rendition was a raytracing renderer, broadly compatible with mental ray. Its rendering method was similar to that of FPrime: it displayed a continuously refined image until a final production-quality image was achieved. This differs from traditional rendering methods, where the rendered image is built up block by block. It was developed by Holomatix Ltd, based in London, UK. As of December 2011, the Rendition product has been retired and is no longer available or being updated; the product is no longer mentioned on the developer's web site, either. Its successor is SprayTrace.
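The contrast between progressive refinement and block-by-block (bucket) rendering can be illustrated with a short sketch. This is a minimal conceptual example under stated assumptions, not Rendition's actual code; shade() stands in for whatever expensive per-sample computation the renderer performs.

#include <cstdint>
#include <random>
#include <vector>

// Stand-in for an expensive per-sample radiance computation (placeholder).
double shade(int x, int y, std::mt19937& rng) {
    std::uniform_real_distribution<double> jitter(0.0, 1.0);
    return 0.5 + 0.5 * jitter(rng);
}

int main() {
    const int W = 64, H = 64, PASSES = 32;
    std::vector<double> accum(W * H, 0.0);
    std::mt19937 rng(42);

    // Progressive refinement: every pass adds one sample to every pixel,
    // so a complete (noisy) image exists after the first pass and the
    // running average converges toward the final image over time.
    for (int pass = 1; pass <= PASSES; ++pass) {
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                accum[y * W + x] += shade(x, y, rng);
        // display(accum, 1.0 / pass);  // a viewer would refresh here
    }
    // A bucket renderer would instead loop over tiles, finishing each
    // tile at final quality before moving on to the next one.
}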
Rendering Features
As it used the same shader and lighting model as mental ray, Rendition supported the same rendering and ray tracing features as mental ray, including:
Final Gather
Global Illumination (through Photon Mapping)
Polygonal and Parametric Surfaces (NURBS, Subdivision)
Displacement Mapping
Motion Blur
Lens Shaders
Supported platforms
Autodesk Maya, up to and including 2011 SAP
Autodesk 3ds Max, up to and including 2010
Softimage XSI, up to and including 2011, but not 2011 SP1
External links
Holomatix Rendition home page (WayBack Machine archived page; no longer served by Holomatix) [1]
Interview discussing Rendition with the Holomatix CTO on 3d-test.com [2]
SprayTrace home page (the successor to Rendition) [3]
References
[1] http://web.archive.org/web/20110712213838/http://www.holomatix.com/products/rendition/about/
[2] http://www.3d-test.com/interviews/holomatix_2.htm
[3] http://www.spraytrace.com/
Imagine
Imagine was a 3D modeling and ray tracing program, originally for the Amiga computer and later also for MS-DOS and Microsoft Windows. It was created by Impulse, Inc. and used the .iob extension for its objects. Imagine was a derivative of TurboSilver, also written by Impulse for the Amiga. The Windows version of the program was abandoned when Impulse dropped out of the 3D software market, but the Amiga version is still maintained and sold by CAD Technologies [1]. The Windows and DOS versions have been made available in full, along with other freely distributed products such as Organica, at the fansite Imagine 3D [2], which also has a forum, gallery, and downloads section.
External links
Program for reading IOB files [3]
Aminet Imagine traces [4]
Imagine 3D fan site [2]
References
[1] http://www.imaginefa.com
[2] http://www.imagine3d.org
[3] http://www.pygott.demon.co.uk/prog.htm
[4] http://aminet.net/pix/imagi
Indigo Renderer
A photorealistic image rendered with Indigo.
Developer(s): Glare Technologies
Stable release: 3.4 (November 5, 2012)[1]
Operating system: Linux, Mac OS X and Microsoft Windows
Type: Rendering system
License: Proprietary commercial software
Website: www.indigorenderer.com [2]
Indigo Renderer is a 3D rendering application that uses unbiased rendering technologies to create photo-realistic images. In doing so, Indigo uses equations that simulate the behaviour of light, with no approximations or guesses taken. By accurately simulating all the interactions of light, Indigo is capable of producing effects such as:
Depth of field, as when a camera is focused on one object and the background is blurred
Spectral effects, as when a beam of light goes through a prism and a rainbow of colours is produced
Refraction, as when light enters a pool of water and the objects in the pool seem to be bent
Reflections, from subtle reflections on a polished concrete floor to the pure reflection of a silvered mirror
Caustics, as in light that has been focused through a magnifying glass and has made a pattern of brightness on a surface
Indigo uses methods such as Metropolis light transport (MLT), spectral light calculus, and a virtual camera model. Scene data is stored in XML or IGS format.
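The Metropolis idea behind MLT can be shown in one dimension: the current sample is repeatedly mutated, and each mutation is accepted with probability proportional to the ratio of the target function's values, so over time the samples are distributed in proportion to that function. A toy sketch of the principle, not Indigo's implementation; f() is a stand-in for a path's light contribution.

#include <cmath>
#include <cstdio>
#include <random>

// Toy "path contribution" function standing in for light transport.
double f(double x) { return std::exp(-100.0 * (x - 0.5) * (x - 0.5)); }

int main() {
    std::mt19937 rng(1);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    double x = u(rng), fx = f(x);
    int histogram[10] = {0};

    for (int i = 0; i < 100000; ++i) {
        // Propose a small mutation of the current sample.
        double y = x + 0.05 * (u(rng) - 0.5);
        if (y < 0.0 || y > 1.0) y = u(rng);   // fall back to a fresh sample
        double fy = f(y);
        if (fy >= fx || u(rng) < fy / fx) {   // Metropolis acceptance test
            x = y; fx = fy;
        }
        ++histogram[static_cast<int>(x * 10.0)];
    }
    for (int b = 0; b < 10; ++b)
        std::printf("bin %d: %d\n", b, histogram[b]); // peaks near x = 0.5
}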
Indigo features Monte Carlo path tracing, bidirectional path tracing and MLT on top of bidirectional path tracing, distributed render capabilities, and progressive rendering (the image gradually becomes less noisy as rendering progresses). Indigo also supports subsurface scattering and has its own image format (.igi). Indigo was originally released as freeware until the 2.0 release, when it became a commercial product.
References
[1] http://en.wikipedia.org/w/index.php?title=Template:Latest_stable_software_release/Indigo_Renderer&action=edit
[2] http://www.indigorenderer.com
External links
Official website (http://www.indigorenderer.com)
Glare Technologies website (http://www.glaretechnologies.com/)
Kerkythea
Kerkythea is capable of rendering photorealistic caustics and global illumination.
Developer(s): Ioannis Pantazopoulos
Stable release: Kerkythea 2008 Echo / October 18, 2008
Operating system: Microsoft Windows, Linux, Mac OS X
Type: 3D graphics software
License: Freeware
Kerkythea is a standalone rendering system that supports raytracing and Metropolis light transport, uses physically accurate materials and lighting, and is distributed as freeware. Currently, the program can be integrated with any software that can export files in OBJ and 3DS formats, including 3ds Max, Blender, LightWave 3D, SketchUp, Silo and Wings3D. The developer ceased developing Kerkythea to focus on the development of a commercial renderer named Thea Render.[citation needed]
History
Kerkythea started development in 2004 and released its first version in April 2005. Initially it was only compatible with Microsoft Windows, but an updated release in October 2005 added Linux compatibility. It is now also available for Mac OS X. In May 2009 it was announced that the development team had started a new commercial renderer, although Kerkythea will continue to be updated and will stay free.
Exporters
There are six official exporters for Kerkythea:
Blender: Blend2KT, exporter to XML format
3D Studio Max: 3dsMax2KT, 3dsMax exporter
GMax: GMax2KT, GMax exporter
SketchUp: SU2KT, SketchUp exporter
SketchUp: SU2KT Light Components
Features
Supported 3D File Formats
3DS format
OBJ format
XML (internal) format
SIA (Silo) format (partially supported)
Supported Image Formats
All formats supported by the FreeImage library (JPEG, BMP, PNG, TGA and HDR included)
Supported Materials
Matte
Perfect Reflections/Refractions
Blurry Reflections/Refractions
Translucency (SSS)
Dielectric Material
Thin Glass Material
Phong Shading Material
Ward Anisotropic Material
Anisotropic Ashikhmin Material
Lafortune Material
Layered Material (additive combination of the above with use of alpha maps)
Supported Shapes
Triangulated Meshes
Spheres
Planes
Supported Lights
Omni Light
Spot Light
Projector Light
Point Diffuse
Area Diffuse
Point Light Spherical Soft Shadows
Ambient Lighting
Sky Lighting [Physical Sky, SkySphere Bitmap (Normal or HDRI)]
Supported Textures
Constant Colors
Bitmaps (Normal and HDRI)
Procedurals [Perlin Noise, Marble, Wood, Windy, Checker, Wireframe, Normal Ramp, Fresnel Ramp]
Any Weighted or Multiplicative Combination of the Above
Supported Features
Bump Mapping
Normal Mapping
Clip Mapping
Bevel Mapping (an innovative KT feature)
Edge Outlining
Depth of Field
Fog
Isotropic Volume Scattering
Faked Caustics
Faked Translucency
Dispersion
Anti-aliasing [Texture Filtering, Edge Antialiasing]
Selection Rendering
Surface and Material Instancing
Supported Camera Types
Planar Projection [Pinhole, Thin Lens]
Cylindrical Pinhole
Spherical Pinhole
Supported Rendering Techniques
Classic Ray Tracing
Path Tracing (Kajiya)
Bidirectional Path Tracing (Veach & Guibas)
Metropolis Light Transport (Kelemen, Kalos et al.)
Photon Mapping (Jensen) [mesh maps, photon maps, final gathering, irradiance caching, caustics]
Diffuse Interreflection (Ward)
Depth Rendering
Mask Rendering
Clay Rendering
Application Environment
OpenGL Real-Time Viewer [basic staging capabilities]
Integrated Material Editor
Easy Rendering Customization
Sun/Sky Customization
Script System
Command Line Mode
External links
Official website [1]
Kerkythea's Forum [2]
Kerkythea's Wiki [3]
References
[1] http://www.kerkythea.net/joomla/
[2] http://www.kerkythea.net/phpBB2/index.php
[3] http://wikihost.org/wikis/kerkythea/wiki/start
LightWave 3D
Operating system: Amiga, IRIX, Mac OS X, Windows
Type: 3D computer graphics
License: Proprietary
Website: http://www.lightwave3d.com
LightWave 3D is a 3D computer graphics software program developed by NewTek. The latest release of LightWave runs on Windows and Mac OS X.
Overview
LightWave is a software package used for rendering 3D images, both animated and static. It includes a rendering engine that supports such advanced features as realistic reflection and refraction, radiosity, and caustics. The 3D modeling component supports both polygon modeling and subdivision surfaces. The animation component has features such as forward and inverse kinematics for character animation, particle systems, and dynamics. Programmers can expand LightWave's capabilities using an included SDK, which offers LScript scripting (a proprietary scripting language) and common C language interfaces.
History
In 1988, Allen Hastings created a rendering and animation program called Videoscape, and his friend Stuart Ferguson created a complementary 3D modeling program called Modeler, both sold by Aegis Software. NewTek planned to incorporate Videoscape and Modeler into its video editing suite, Video Toaster. Originally intended to be called "NewTek 3D Animation System for the Amiga", Hastings later came up with the name "LightWave 3D", inspired by two contemporary high-end 3D packages: Intelligent Light and Wavefront. In 1990, the Video Toaster suite was released, incorporating LightWave 3D and running on the Commodore Amiga computer. LightWave 3D has been available as a standalone application since 1994, and version 9.3 runs on both Mac OS X and Windows platforms. Starting with the release of version 9.3, the Mac OS X version has been updated to be a Universal Binary. The last known standalone revision for the Amiga was LightWave 5.0, released in 1995. Shortly after the release of the first PC version, NewTek discontinued the Amiga version, citing the platform's uncertain future.
LightWave was used to create special effects for the Babylon 5, Star Trek: Voyager, Space: Above and Beyond and seaQuest DSV science fiction television series; the program was also used in the production of Titanic, as well as the more recent Battlestar Galactica TV series and the Sin City, Star Trek, 300 and Star Wars movies. The short film 405 was produced by two artists from their homes using LightWave. In the Finnish Star Trek parody Star Wreck: In the Pirkinning, most of the visual effects were done in LightWave by Finnish filmmaker Samuli Torssonen, who also produced the VFX work for the feature film Iron Sky. The film Jimmy Neutron: Boy Genius was made entirely in LightWave 6 and messiah:Studio. In 2007, Flatland the Film by Ladd Ehlinger Jr. made its debut as the first feature film to be 3D-animated entirely by one person; it was animated entirely in LightWave 3D 7.5 and 8.0.
In its ninth version, the market for LightWave ranged from hobbyists to high-end deployment in video games, television and cinema. NewTek shipped a 64-bit version of LightWave 3D as part of the fifth free update of LightWave 3D 8; it was featured in a keynote speech by Bill Gates at WinHEC 2005. On February 4, 2009, NewTek announced "LightWave CORE", its next-generation 3D application, via a streamed live presentation to 3D artists around the world. It featured a highly customizable and modernized user interface, Python scripting integration offering realtime code and view previews, an updated file format based on the industry-standard Collada format, substantial revisions to its modeling technologies, and a realtime iterative viewport renderer. It was also to be the first LightWave product available on the Linux operating system. CORE was eventually cancelled as a standalone product, and NewTek announced that the CORE advancements would become part of the ongoing LightWave platform, starting with LightWave 10. On February 20, 2012, NewTek began shipping LightWave 11, the latest version of its professional 3D modeling, animation, and rendering software.[1] LightWave 11 incorporates many new features, such as instancing, flocking and fracturing tools, flexible Bullet dynamics, Pixologic ZBrush support, and more. LightWave 11 is used for all genres of 3D content creation, from film and broadcast visual effects production to architectural visualization and game design.[2][3][4]
Features
Dynamics
LightWave includes the standard dynamics types: hard body, soft body, and cloth. Hard-body dynamics lets the user simulate effects like rockslides, building demolitions, and sand, using forces such as gravity and collisions. Soft-body dynamics can simulate jelly or jiggling fat on overweight characters, and can also be applied to characters for a dynamic hair effect. Cloth can be applied to clothing for characters, and can likewise be used to simulate more realistic hair movement. The CORE subsystem of LightWave 11 includes a new rigid-body dynamics engine called Bullet.
Hypervoxels
Hypervoxels are a means to render different particle animation effects. Different modes of operation have the ability to generate appearances that mimic:
Blobby metaballs, for things like water or mercury, including reflection or refraction surface settings
Sprites, which are able to reproduce effects like fire or flocking birds
Volume shading, for simulating clouds or fog-type effects
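The "blobby" mode can be illustrated by the classic metaball construction: each particle contributes a smooth radial field, the contributions are summed, and the surface is taken where the sum crosses a threshold. A minimal sketch of the field evaluation follows; it is a generic illustration of the technique, not LightWave's code.

#include <cmath>
#include <cstdio>

struct Ball { double x, y, z, r; };

// Smooth falloff kernel: full strength at the centre, zero beyond radius r.
double fieldAt(double px, double py, double pz, const Ball* balls, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double dx = px - balls[i].x, dy = py - balls[i].y, dz = pz - balls[i].z;
        double d2 = dx * dx + dy * dy + dz * dz;
        double r2 = balls[i].r * balls[i].r;
        if (d2 < r2) {
            double t = 1.0 - d2 / r2;
            sum += t * t;          // (1 - d^2/r^2)^2 kernel
        }
    }
    return sum;
}

int main() {
    Ball balls[2] = {{0, 0, 0, 1.0}, {0.8, 0, 0, 1.0}};
    // Points where the field exceeds the threshold are "inside" the blobby
    // surface; two nearby balls merge smoothly, giving water/mercury looks.
    const double threshold = 0.25;
    for (double x = -1.5; x <= 2.5; x += 0.25) {
        double f = fieldAt(x, 0, 0, balls, 2);
        std::printf("x=%5.2f field=%.3f %s\n", x, f,
                    f > threshold ? "inside" : "outside");
    }
}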
Material shaders
LightWave comes with a nodal texture editor that includes a collection of special-purpose material shaders. Some of the types of surface for which these shaders have been optimized include:
general-purpose subsurface scattering materials, for materials like wax or plastics
realistic skin, including subsurface scattering and multiple skin layers
metallic, reflective materials using energy-conservation algorithms
transparent, refractive materials, including accurate total internal reflection algorithms
Nodes
With LightWave 9, NewTek added node editors to the Surface Editor and Mesh Displacement parts of LightWave. NewTek also releases the Node SDK with the software, so any developer can add their own node editors via plug-ins, and a few have done so; notably, Denis Pontonnier has created free-to-download node editors and many other utility nodes for all of the SDK classes in LightWave. Users can therefore employ nodes for modifying images and renders, creating procedural textures, modifying the shape of hypervoxels, controlling motions of objects, driving animation channels, and using elements like particles and other meshes to drive these functions. This has greatly enhanced the abilities of standalone LightWave. The node areas of LightWave continue to expand, with volumetric lights now controllable with nodes.
LScript
LScript is one of LightWave's scripting languages. It provides a comprehensive set of prebuilt functions you can use when scripting how LightWave behaves.
Python
With LightWave 11, NewTek added Python support as an option for custom scripting.
Bullet Physics
As of LightWave 11, NewTek has added Bullet physics support.
Lightwave SDK
The SDK (Software Development Kit) provides a set of C-language interfaces for writing native plugins for LightWave.
Licensing
Prior to being made available as a stand-alone product in 1994, LightWave required the presence of a Video Toaster in an Amiga to run. Until version 11.0.3[17][18], LightWave licenses were bound to a hardware dongle (e.g. Safenet USB or legacy parallel port models). Without a dongle LightWave would operate in "Discovery Mode" which severely restricts functionality. One copy of LightWave supports distributed rendering on up to 999 nodes.
References
[1] NewTek Ships LightWave 11 Software (http://www.newtek.com/newtek-now/press/25-lightwave-news/465.html)
[2] LightWave 11 - Features List (https://www.lightwave3d.com/new_features/)
[3] LightWave official gallery (https://www.lightwave3d.com/community/gallery/)
[4]
[5] "Lightwave projects list" (http://www.newtek.com/lightwave/projects.php), Newtek.com. Retrieved on 2008-07-18.
[6] http://www.ex-astris-scientia.org/database/cgi.htm
[7] http://www.ex-astris-scientia.org/database/cgi.htm
[8] http://www.ex-astris-scientia.org/database/cgi.htm
[9] http://www.fxguide.com/article395.html
[10] http://www.twin-pixels.com/software-used-making-of-avatar/
[11] http://www.cgtantra.com/index.php?option=com_content&task=view&id=394&Itemid=35
[12] http://www.newtek.com/lightwave/featured_300.php
[13] http://news.creativecow.net/story/860343
[14] http://www.newtek.com/lightwave/newsletter.php?pageNum_monthlynews=1&totalRows_monthlynews=16#nick
[15] http://www.fxguide.com/article583.html
[16] Iron Sky Signal E21 - Creating the Visual Effects (http://www.youtube.com/watch?v=czpYwqV22p4&feature=player_embedded). Energiaproductions YouTube channel. At 3:45.
[17] http://forums.newtek.com/archive/index.php/t-129707.html
[18] https://www.lightwave3d.com/buy/
External links
LightWave's official site (http://www.lightwave3d.com/)
NewTek's official site (http://www.newtek.com/)
LightWave 3D Community (http://forums.newtek.com/), an active user community
Liberty3D, a 3D community focused on LightWave 3D and complementary tools (http://www.liberty3d.com/forums/), an active user community
LuxRender
A screenshot of LuxRender 0.7 rendering a Desert Eagle.
Developer(s): Jean-Philippe Grimaldi, Jean-Francois Romang, David Bucciarelli, Ricardo Lipas Augusto, Asbjorn Heid and others
Stable release: 1.2 [1] / February 24, 2013
Operating system: Cross-platform
Type: 3D computer graphics
License: GPLv3
Website: www.luxrender.net [2]
LuxRender is a free and open source software rendering system for physically correct image synthesis. The program runs on GNU/Linux, Mac OS X, and Microsoft Windows.
Overview
LuxRender is only a renderer; it relies on other programs (3D modeling applications) to create the scenes to render, including the models, materials, lights and cameras. This content can then be exported from the application it was created in and rendered with LuxRender. Fully functional exporters are available for Blender, DAZ Studio and Autodesk 3ds Max; partially functional ones exist for Cinema 4D, Maya, SketchUp and XSI.[3] After opening an exported file, LuxRender simply renders the scene, although various post-processing settings can be tweaked from the program's graphical interface.[4]
History
LuxRender is based on PBRT [5], a physically based ray tracing program. Although very capable and well structured, PBRT focuses on academic use and is not easily usable by digital artists. As PBRT is licensed under the GPL, it was possible to start a new program based on PBRT's source code. With the blessing of the original authors, a small group of programmers took this step in September 2007. The new program was named LuxRender and was to focus on artistic use. Since its initial stage, the program has attracted the interest of various programmers around the world. On 24 June 2008, the first official release was announced.[6] This was the first release considered usable by the general public.
Features
The main features of LuxRender as of version 0.8 include:
Biased and unbiased rendering: users can choose between physical accuracy (unbiased) and speed (biased).
Full spectral rendering: instead of the RGB colour space, full spectra are used for internal calculations (see the sketch after this list).
Hierarchical procedural and image-based texture system: procedural and image-based textures can be mixed in various ways, making it possible to create complex materials.
Displacement mapping and subdivision: object surfaces can be transformed based on procedural or image textures.
Network and co-operative rendering: rendering time can be reduced by combining the processing power of multiple computers. IPv6 is also supported.
Perspective (including shift lens), orthographic and environment cameras.
HDR output: render output can be saved in various file formats, including .png, .tga and .exr.
Instances: instancing significantly saves system resources, in particular memory consumption, by reusing mesh data in duplicated objects.
Built-in post-processing: while rendering, post-processed effects like bloom, glare, chromatic aberration and vignetting can be added.
Motion blur, depth of field and lens effects: true motion blur, both for the camera and individual objects, and physically accurate lens effects, including depth of field.
Light groups: by using light groups, one can output various lighting situations from a single rendering, or adjust the balance between light sources in real time.
Tone mapping.
Image denoising.
Fleximage (virtual film): allows renders to be paused and continued. The current state of the render can be written to a file, so that any system can continue the render at a later moment.
GPU acceleration for path tracing when sampling one light at a time.
Film response curves to emulate traditional cameras' colour response (some curves are for black-and-white films too).
Volumetric rendering using homogeneous volumes, by defining an interior and an exterior volume.
Subsurface scattering.

Rendering of a school interior with LuxRender. Modelled in Blender.
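Full spectral rendering means colours are carried as sampled spectra and converted to a displayable tristimulus value only at the end, by integrating the spectrum against the CIE colour-matching functions. The sketch below shows the shape of that final conversion; the single-Gaussian matching functions are crude placeholders standing in for the real CIE 1931 tables, which an actual renderer would use.

#include <cmath>
#include <cstdio>

// Crude single-Gaussian stand-ins for the CIE x̄, ȳ, z̄ tables (placeholders).
double gauss(double l, double mu, double sigma) {
    double t = (l - mu) / sigma;
    return std::exp(-0.5 * t * t);
}
double xbar(double l) { return gauss(l, 600.0, 40.0); }
double ybar(double l) { return gauss(l, 555.0, 50.0); }
double zbar(double l) { return gauss(l, 450.0, 30.0); }

int main() {
    const int N = 16;                       // spectral samples per pixel
    const double lambda0 = 380.0, lambda1 = 730.0;
    double X = 0, Y = 0, Z = 0;
    double dl = (lambda1 - lambda0) / N;
    for (int i = 0; i < N; ++i) {
        double l = lambda0 + (i + 0.5) * dl;
        double s = 1.0;                     // flat spectrum as example input
        // Riemann-sum integration of spectrum times matching function.
        X += s * xbar(l) * dl;
        Y += s * ybar(l) * dl;
        Z += s * zbar(l) * dl;
    }
    std::printf("XYZ = %.2f %.2f %.2f\n", X, Y, Z);
    // A final 3x3 matrix then maps XYZ into the display's RGB space.
}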
Planned/Implemented Features
The new features planned for LuxRender version 1.0 include:
Improved speeds for hybrid path GPU rendering: the path GPU rendering engine has various speed and stability improvements.
New hybrid bidirectional rendering: a GPU-accelerated version of the LuxRender bidirectional path tracer is in development, although it does not yet support all of LuxRender's traditional bidirectional path tracing features.
Networking: improvements to LuxRender's networking mode.
A new layered material: allows the layering of multiple materials on one object.
A new glossy coating material: allows a glossy material to be placed on an object, coating the material underneath.
New features and developments are announced on the LuxRender Development Blog [7].
References
[1] http://www.luxrender.net/forum/viewtopic.php?f=12&t=9653
[2] http://www.luxrender.net/
[3] http://www.luxrender.net/wiki/Exporter_Status
[4] http://www.luxrender.net/wiki/index.php?title=Luxrender_Manual
[5] http://www.pbrt.org
[7] http://www.luxrender.net/en_GB/development_blog
External links
Official website (http://www.luxrender.net/)
LuxRender coverage on cgindia.org (http://www.cgindia.org/2007/10/introducing-luxrender-free-unbiased-3d.html)
Examples of LuxRender output (http://www.luxrender.net/forum/gallery2.php)
Manta Interactive Ray Tracer
References
James Bigler, Abraham Stephens, and Steven G. Parker (2006). "Design for Parallel Interactive Ray Tracing Systems". In Proceedings of the IEEE Symposium on Interactive Ray Tracing. (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4061561)
Abraham Stephens, Solomon Boulos, James Bigler, Ingo Wald, and Steven G. Parker (2006). "An Application of Scalable Massive Model Interaction using Shared Memory Systems". In Proceedings of the Eurographics Symposium on Parallel Graphics and Visualization. (http://www.cs.utah.edu/~boulos/papers/manta-egpgv06.pdf)
External links
Official page of the Manta interactive ray tracer (http://mantawiki.sci.utah.edu/manta/index.php/Main_Page)
Maxwell Render
Sample Maxwell Render image output by Benjamin Brosdaux.
Developer(s): Next Limit Technologies
Initial release: 26 April 2006
Stable release: 2.7 / 19 June 2012
Operating system: Linux (on x86-64), Mac OS X (on IA-32 and PowerPC), Microsoft Windows (on IA-32 and x86-64)
Available in: English
Type: Raytracer
License: Proprietary commercial software
Website: www.maxwellrender.com [1]
Maxwell Render is a software package that aids in the production of photorealistic images from computer 3D model data; in other words, a 3D renderer. It was introduced as an early alpha in December 2004 (after two years of internal development) and utilized a global illumination (GI) algorithm based on a variation of Metropolis light transport.
Overview
Maxwell Render was among the first widely available implementations of unbiased rendering and its G.I. algorithm was linked directly to a physical camera paradigm to provide a simplified rendering experience wherein the user was not required to adjust arbitrary illumination parameter settings, as was typical of scanline renderers and raytracers of the time.[citation needed] Maxwell Render was developed by Madrid-based Next Limit Technologies, which was founded in 1998 by engineers Victor Gonzalez and Ignacio Vargas.
Software components
Maxwell Render (core render engine)
Maxwell Studio
Material Editor
Maxwell FIRE (Fast Interactive Rendering)
Network components (Maxwell Monitor, Manager and Node)
Plugins (modeling-software specific)
History
The first alpha was released on 4 December 2004; the first release candidate on 2 December 2005.[2] The latter also marked the first appearance of Maxwell Studio and the Material Editor. Further releases were:
Version 1.0 (26 April 2006)
Version 1.1 (4 July 2006)
Version 1.5 (23 May 2007)
Version 1.6 (22 November 2007)
Version 1.7 (19 June 2008)
Version 2.0 (23 September 2009)
Version 2.1 (18 July 2010)
Version 2.5 (13 December 2010)
Version 2.6 (2 November 2011)
Version 2.7 (19 June 2012)
References
[1] http://www.maxwellrender.com
[2] Next Limit, Maxwell Render announcements (http://www.maxwellrender.com/forum/viewforum.php?f=1)
External links
Official Website (http://www.maxwellrender.com)
Next Limit Technologies (http://www.nextlimit.com) (parent company)
Official Forum (http://www.maxwellrender.com/forum/index.php)
Maxwell Render Resources (http://resources.maxwellrender.com/)
Maxwell Render Tutorials (http://think.maxwellrender.com/)
Maxwell Render on Facebook (http://www.facebook.com/pages/Maxwell-Render-The-Light-Simulator/66133283904)
Maxwell Render on Twitter (http://twitter.com/MaxwellRender)
Mental Ray
Original author(s): Mental Images
Developer(s): Nvidia
Stable release: 3.11
Preview release: 3.11
Type: Renderer
License: Proprietary
Website: www.nvidia-arc.com/products/nvidia-mental-ray/ [1]
Mental Ray (stylized as mental ray) is a production-quality rendering application developed by Mental Images (Berlin, Germany). Mental Images was bought in December 2007 by NVIDIA. As the name implies, it supports ray tracing to generate images. Mental Ray has been used in many feature films, including Hulk, The Matrix Reloaded & Revolutions, Star Wars Episode II: Attack of the Clones, The Day After Tomorrow and Poseidon. [2][3]
Features
The primary feature of Mental Ray is the achievement of high performance through parallelism, on both multiprocessor machines and across render farms. The software uses acceleration techniques such as scanline rendering for primary visible surface determination and binary space partitioning for secondary rays. It also supports caustics and physically correct simulation of global illumination employing photon maps. Any combination of diffuse, glossy (soft or scattered), and specular reflection and transmission can be simulated. Mental Ray was designed to be integrated into a third-party application using an API, or to be used as a standalone program using the .mi scene file format for batch-mode rendering. Many programs integrate this renderer, including Autodesk Maya, 3D Studio Max, AutoCAD, Cinema 4D, Revit, Softimage|XSI, Side Effects Software's Houdini, SolidWorks and Dassault Systèmes' CATIA. Most of these software front-ends provide their own library of custom shaders (described below); however, assuming the required shaders are available to Mental Ray, any .mi file can be rendered, regardless of the software that generated it. Mental Ray is fully programmable, supporting linked subroutines, called shaders, written in C or C++. This feature can be used to create geometric elements at runtime of the renderer, procedural textures, bump and displacement maps, atmosphere and volume effects, environments, camera lenses, and light sources.
An image rendered using Mental Ray which demonstrates global illumination, photon maps, depth of field, ambient occlusion, glossy reflections, soft shadows and bloom.
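The idea of shaders as linked subroutines can be sketched generically: the renderer defines a calling convention (inputs describing the shading point, an output colour), and user code compiled into a shared library supplies the behaviour. The following is a conceptual C++ sketch of that contract, not the actual mental ray shader API; all type and function names here are illustrative assumptions.

#include <cstdio>

// What a renderer hands a shader at each shading point (simplified).
struct ShadeState {
    double nx, ny, nz;   // surface normal
    double lx, ly, lz;   // direction to a light
};
struct Color { double r, g, b; };

// A user-written "shader": here, simple Lambertian diffuse.
Color lambert(const ShadeState& s, const Color& albedo) {
    double ndotl = s.nx * s.lx + s.ny * s.ly + s.nz * s.lz;
    if (ndotl < 0.0) ndotl = 0.0;
    return {albedo.r * ndotl, albedo.g * ndotl, albedo.b * ndotl};
}

// A composite node that looks like a single shader from the outside;
// mental ray's Phenomena (described below) package shader graphs similarly.
Color redPlastic(const ShadeState& s) {
    Color base = lambert(s, {0.8, 0.1, 0.1});
    // ... a specular shader's result would be combined here ...
    return base;
}

int main() {
    ShadeState s{0, 0, 1, 0.0, 0.7071, 0.7071};
    Color c = redPlastic(s);
    std::printf("shaded colour: %.3f %.3f %.3f\n", c.r, c.g, c.b);
}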
Supported geometric primitives include polygons, subdivision surfaces, and trimmed free-form surfaces such as NURBS, Bézier, and Taylor monomial surfaces. Phenomena consist of one or more shader trees (DAGs). A phenomenon looks like a regular shader to the user, and in fact may be a regular shader, but generally it contains a link to a shader DAG, which may include the introduction or modification of geometry, the introduction of lenses, environments, and compile options. The idea of a phenomenon is to package elements and hide complexity. In 2003, Mental Images was awarded an Academy Award for their contributions to the Mental Ray rendering software for motion pictures.
An image of a diamond rendered using Mental Ray in CATIA V5R19 Photo Studio.
Notes
[1] http://www.nvidia-arc.com/products/nvidia-mental-ray/
[2] "mental images Software Developers Receive Academy Award" (http://www.mentalimages.com/news/detail/article//mental-image-22.html). Mental Images Press Release, April 23, 2011.
[3] "Large as Life: Industrial Light & Magic Looks to mental ray to Create 'Poseidon'" (http://www.mentalimages.com/news/detail/article//large-as-lif.html). Mental Images Press Release, April 23, 2011.
Further reading
Driemeyer, Thomas: Rendering with Mental Ray, SpringerWienNewYork, ISBN 3-211-22875-6
Driemeyer, Thomas: Programming mental ray, SpringerWienNewYork, ISBN 3-211-24484-0
Kopra, Andy: Writing mental ray Shaders: A perceptual introduction, SpringerWienNewYork, ISBN 978-3-211-48964-2
External links
Mental Ray home page (http://www.nvidia-arc.com/products/nvidia-mental-ray.html)
Mental Ray user wiki (http://www.mymentalray.com/wiki/index.php/Mental_ray_cookbook)
Mental Ray user blog (http://elementalray.wordpress.com/)
mental ray and iray for CINEMA 4D (http://www.m4d.info) by at GmbH
MetaSL Material Library (http://materials.mentalimages.com/)
Modo
Operating system: Windows, Linux, Mac OS X
Type: 3D computer graphics
License: Proprietary
Website: www.luxology.com/modo/ [2]
modo is a polygon and subdivision surface modeling, sculpting, 3D painting, animation and rendering package developed by Luxology, LLC. The program incorporates features such as n-gons and edge weighting, and runs on Microsoft Windows, Linux and Mac OS X platforms.
History
modo was created by the same core group of software engineers who previously created the pioneering 3D application LightWave 3D, originally developed on the Amiga platform and bundled with the Amiga-based Video Toaster workstations that were popular in television studios in the late 1980s and early 1990s. They are based in Mountain View, California.
In 2001, senior management at NewTek (makers of LightWave) and their key LightWave engineers disagreed over the notion of a complete rewrite of LightWave's workflow and technology.[3] NewTek's Vice President of 3D Development, Brad Peebler, eventually left NewTek to form Luxology, and was joined by Allen Hastings and Stuart Ferguson (the lead developers of LightWave), along with some of the LightWave programming team members. After more than three years of development work, modo was demonstrated at Siggraph 2004 and released in September of the same year.
In April 2005, the high-end visual effects studio Digital Domain integrated modo into their production pipeline. Other studios to adopt modo include Pixar, Industrial Light & Magic, Zoic Studios, id Software, Eden FX, Studio ArtFX, The Embassy Visual Effects, Naked Sky Entertainment and Spinoff Studios.
At Siggraph 2005, modo 201 was pre-announced. This promised many new features, including the ability to paint in 3D (à la ZBrush and BodyPaint 3D), multi-layer texture blending as seen in LightWave and, most significantly, a rendering solution which promised physically based shading, true lens distortion, anisotropic reflection blurring and built-in polygon instancing. modo 201 was released on May 24, 2006, and was the winner of the Apple Design Award for Best Use of Mac OS X Graphics for 2006. In October 2006, modo also won "Best 3D/Animation Software" from MacUser magazine. In January 2007, modo won the Game Developer Frontline Award for "Best Art Tool".
modo 202 was released on August 1, 2006. It offered faster rendering speed and several new tools, including the ability to add thickness to geometry. A 30-day full-function trial version of the software was made available. modo was used in the production of the feature films Stealth, The Ant Bully, and WALL-E.
In March 2007, Luxology released modo 203 as a free update. It included new UV editing tools, faster rendering and a new DXF translator. The release of modo 301 on September 10, 2007 added animation and sculpting to the toolset. The animation tools allow cameras, lights, morphs and geometry to be animated, and .mdd files to be imported. Sculpting in modo 301 is done through mesh-based and image-based sculpting (vector displacement maps), or a layered combination of both. modo 302 was released on April 3, 2008 with some tool updates, more rendering and animation features, and a physical sky and sun model. modo 302 was a free upgrade for existing users. modo 303 was skipped in favor of the development of modo 401. modo 401 shipped on June 18, 2009. This release had many animation and rendering enhancements and became newly available on 64-bit Windows. On October 6, 2009, modo 401 SP2 was released, followed by modo 401 SP3 on January 26, 2010 and SP5 on July 14 of the same year.[4] modo 501 shipped on December 15, 2010. This version was the first to run on 64-bit Mac OS X. It contains support for Pixar Subdivision Surfaces, faster rendering and a visual connection editor for creating re-usable animation rigs. modo 601 shipped on February 29, 2012. This release offers additional character animation tools, dynamics, a general-purpose system of deformers, support for retopology modeling and numerous rendering enhancements. modo 701 shipped on March 25, 2013. This is the current version and offers audio support, a Python API for writing plugins, additional animation tools and layout, more tightly integrated dynamics, and a procedural particle system, along with other rendering enhancements such as render proxies and environment importance sampling.
Workflow
modo's workflow differs substantially from many other mainstream 3D applications. While Maya and 3ds Max stress using the right tool for the job, modo artists typically use a much smaller number of basic tools and combine them to create new tools using the Tool Pipe and customizable action centers and falloffs.
Action Centers
modo allows an artist to choose the "pivot point" of a tool or action in realtime simply by clicking somewhere. Thus, modo avoids making the artist invoke a separate "adjust pivot point" mode. In addition, the artist can tell modo to derive a tool's axis orientation from the selected or clicked-on element, bypassing the need for a separate "adjust tool axis" mode.
Falloffs
Any tool can be modified with customizable falloff, which modifies its influence and strength according to geometric shapes. Radial falloff will make the current tool affect elements in the center of a resizable sphere most strongly, while elements at the edges will be barely affected at all. Linear falloff will make the tool affect elements based on a gradient that lies along a user-chosen line, etc.
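Falloff behaviour amounts to weighting a tool's effect per element. A minimal sketch of radial and linear falloff weights follows, using hypothetical names rather than anything from modo's SDK.

#include <algorithm>
#include <cmath>
#include <cstdio>

// Radial falloff: weight 1 at the sphere's centre, 0 at its edge and beyond.
double radialWeight(double dx, double dy, double dz, double radius) {
    double d = std::sqrt(dx * dx + dy * dy + dz * dz);
    return std::max(0.0, 1.0 - d / radius);
}

// Linear falloff: weight given by position t along a user-chosen line,
// where t = 0 is one end of the gradient and t = 1 the other.
double linearWeight(double t) {
    return std::clamp(t, 0.0, 1.0);
}

int main() {
    // Move a vertex by a tool offset scaled by the falloff weight.
    double w = radialWeight(0.5, 0.0, 0.0, 2.0); // vertex 0.5 units from centre
    double move = 1.0 * w;                       // full tool strength is 1.0
    std::printf("weight %.3f -> displaced by %.3f\n", w, move);
}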
3D painting
modo allows an artist to paint directly onto 3D models, and even to paint instances of existing meshes onto the surface of an object. The paint system allows users to combine tools, brushes and inks to achieve many different paint effects and styles. Examples of the paint tools in modo are airbrush, clone, smudge, and blur. These tools are paired with a choice of "brush" (such as soft or hard edge, or procedural), and finally an ink, an example of which is image ink, where an existing image is painted onto a 3D model. Pressure-sensitive tablets are supported. The results of painting are stored in a bitmap, and that map can drive anything in modo's Shader Tree. Thus one can paint into a map that is acting as a bump map and see the bumps in real time in the viewport.
Renderer
modo's renderer is multi-threaded and scales nearly linearly with the addition of processors or processor cores; that is, an 8-core machine will render a given image approximately eight times as fast as a single-core machine with the same per-core speed. modo runs on up to 32 cores and offers the option of network rendering. In addition to the standard renderer, which can take a long time with a complex scene even on a fast machine, modo has a progressive preview renderer which renders to final quality if left alone. modo's user interface allows the user to configure a work space that includes a preview render panel, which renders continuously in the background, restarting the render every time the model changes. This gives a more accurate preview of the work in progress than the typical hardware shading options; in practice, it means fewer full test renders are needed along the way toward completion of a project. The preview renderer in modo 401 offers progressive rendering, meaning the image resolves to near final quality if left running. modo material assignment is done via a shader tree that is layer-based rather than node-based. modo's renderer is a physically based ray tracer. It includes features like caustics, dispersion, stereoscopic rendering, Fresnel effects, subsurface scattering, blurry refractions (e.g. frosted glass), volumetric lighting (the "smoky bar" effect), and Pixar-patented deep shadows.
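Near-linear scaling with core count is typical of tile-based renderers: pixels are independent, so tiles can be handed to worker threads from a shared queue. A generic sketch of that pattern, not modo's implementation:

#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const int W = 512, H = 512, TILE = 64;
    const int tilesX = W / TILE, tilesY = H / TILE;
    std::vector<float> image(W * H);
    std::atomic<int> nextTile{0};

    auto worker = [&] {
        // Each thread repeatedly claims the next unrendered tile.
        for (int t; (t = nextTile.fetch_add(1)) < tilesX * tilesY; ) {
            int tx = (t % tilesX) * TILE, ty = (t / tilesX) * TILE;
            for (int y = ty; y < ty + TILE; ++y)
                for (int x = tx; x < tx + TILE; ++x)
                    image[y * W + x] = float((x ^ y) & 0xFF); // stand-in shading
        }
    };

    // One worker per hardware core; since tiles are independent, doubling
    // the core count roughly halves wall-clock time until memory bandwidth
    // or uneven tile costs start to dominate.
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4;
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n; ++i) pool.emplace_back(worker);
    for (auto& th : pool) th.join();
    std::printf("rendered %dx%d with %u threads\n", W, H, n);
}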
Select features
N-gon modeling and rendering (subdivided polygons with more than 4 points)
Tool Pipe for creating customized tools
Edges and edge weighting
User-specified navigation controls for zoom and pan
Macros
Scripting (Perl, Python, Lua)
Customizable user interface
Extensive file input and output, including X3D file export
IEEE floating point accuracy
Transparency (can vary with absorption distance)
Subsurface scattering
Anisotropic blurred reflections
Instance rendering
Render baking to color and normal maps
True lens distortion
Physically based shading model
Fresnel effects
Motion blur
Volumetrics
Depth of field
IES (photometric) light support
Walkthrough mode providing a steady GI solution over a range of frames
Network rendering
Numerous render outputs
Render passes
modo once included imageSynth, a plug-in for creating seamless textures in Adobe Photoshop CS1 or later. This bundle ended with the release of modo 301, and Luxology has announced that the imageSynth plugin for Photoshop has been retired.[5]
Books
The Official Luxology modo Guide by Dan Ablan, ISBN 1-59863-068-7 (October 2006)
Le Mans C9 Experience by Andy Brown (video-based modo tutorials) (January 2007)
Sports Shoe Tutorials by Andy Brown (video-based modo tutorials) (March 2007)
Wrist Watch Tutorials by Andy Brown (video-based modo tutorials) (April 2007)
modo 301 Signature Courseware DVD by Dan Ablan (October 2007)
Seahorse (sculpting) Tutorial by Andy Brown (video-based modo tutorials) (August 2007)
The Alley Tutorial by Andy Brown (game asset creation) (October 2007)
modo in Focus Tutorials by Andy Brown (November 2007), introductory videos and a 30-day trial version
Real World modo: The Authorized Guide: In the Trenches with modo by Wes McDermott (paperback) (September 2009)
References
[1] http://www.luxology.com/modo/technicalspecifications/index.aspx
[2] http://www.luxology.com/modo/
[3] "Modo What Lightwave Should Have Become." (http://forums.luxology.com/discussion/topic.aspx?id=18008) Luxology.com (http://forums.luxology.com). Accessed February 2012.
[4] (January 26, 2010.) @luxology on Twitter (http://twitter.com/Luxology/status/8253045571). Accessed February 2012.
Additional sources
Cohen, Peter (June 10, 2005). "Luxology modo ready for Intel switch" (http://www.macworld.com/article/45268/2005/06/modo.html). Macworld.com. Retrieved February 22, 2012.
Cohen, Peter (October 8, 2007). "Luxology licenses Pixar graphics tech" (http://www.macworld.com/article/60420/2007/10/luxology.html). Macworld.com. Retrieved February 22, 2012.
Sheridan Perry, Todd (August 11, 2008). "Luxology's Modo 302" (http://www.animationmagazine.net/tech_reviews/luxologys-modo-302/). Animation Magazine. Retrieved February 22, 2012.
External links
Modo (http://www.luxology.com/modo/) on Luxology.com
Luxology's Modo 501 at GDC 2011 (http://software.intel.com/en-us/videos/luxologys-modo-501-at-gdc-2011) from Intel.com
OptiX
NVIDIA OptiX (officially, the OptiX Application Acceleration Engine) is a real-time ray tracing engine for CUDA-based video cards such as the GeForce, Quadro, and Tesla series. According to NVIDIA, OptiX is designed to be flexible enough for "procedural definitions and hybrid rendering approaches." Aside from computer graphics rendering, OptiX is also used in optical and acoustical design, radiation research, and collision analysis.
External links
NVIDIA OptiX Application Acceleration Engine [1]
References
[1] http://www.nvidia.com/object/optix.html
PhotoRealistic RenderMan
Developer(s): Pixar Animation Studios
Type: Rendering system
License: Proprietary commercial software
Website: renderman.pixar.com/view/renderman [1]
PhotoRealistic RenderMan, or PRMan for short, is a proprietary photorealistic RenderMan-compliant renderer. It primarily uses the Reyes algorithm but is also fully capable of ray tracing and global illumination. PRMan is produced by Pixar Animation Studios and used to render all of their in-house 3D animated movie productions. It is also available as a commercial product licensed to third parties, sold as part of a bundle called RenderMan Pro Server, a RenderMan-compliant rendering software system developed by Pixar based on their own interface specification. RenderMan for Maya is a full version of PRMan designed to be completely integrated with the Maya high-end 3D computer graphics package; however, it is still in its infancy.[citation needed]
Awards
RenderMan is often used to create digital visual effects for Hollywood blockbuster movies such as Titanic, the Star Wars prequels, and The Lord of the Rings. As part of the 73rd Scientific and Technical Academy Awards ceremony on March 3, 2001, the Academy of Motion Picture Arts and Sciences Board of Governors honored Ed Catmull, Loren Carpenter, and Rob Cook with an Academy Award of Merit (Oscar) "for significant advancements to the field of motion picture rendering as exemplified in Pixar's RenderMan." This was the first Oscar awarded to a software package for its outstanding contributions to the field.
External links
Pixar's official PRMan website [2]
References
[1] http://renderman.pixar.com/view/renderman
[2] http://renderman.pixar.com/products/tools/rps.html
Picogen
Operating system: GNU/Linux, Windows
Platform: Cross-platform, Sourcecode
Type: Rendering system, 3D graphics software
License: GPL, Version 3 or newer
Website: http://picogen.org
Picogen is a rendering system for the creation and rendering of artificial terrain, based on ray tracing. It is free software.
Overview
While the primary purpose of picogen is to render realistic 3D terrain, both in terms of terrain formation and image plausibility, it is also a heightmap-creation tool,[1] in which heightmaps are programmed in a syntax reminiscent of LISP.[2] The shading system is partially programmable.[3]
Example features
Whitted-style ray tracer for quick previews
Rudimentary path tracer for high-quality results
Partial implementation of Preetham's sun/skylight model [4]
Procedural heightmaps, which are tessellated before rendering (see the sketch below)

An alpine landscape.
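Tessellating a procedural heightmap before rendering means sampling the height function on a grid and emitting two triangles per grid cell. A generic sketch with a stand-in height function; picogen users would express the function in its LISP-like height language instead.

#include <cmath>
#include <cstdio>

// Stand-in procedural height function h(x, z) (placeholder).
double height(double x, double z) {
    return 0.3 * std::sin(3.0 * x) * std::cos(2.0 * z);
}

int main() {
    const int N = 8;                 // grid resolution
    int triangles = 0;
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < N; ++j) {
            double x0 = i / double(N), x1 = (i + 1) / double(N);
            double z0 = j / double(N), z1 = (j + 1) / double(N);
            // Four corners of the cell, displaced by the height function.
            double y00 = height(x0, z0), y10 = height(x1, z0);
            double y01 = height(x0, z1), y11 = height(x1, z1);
            // A real tessellator would emit two triangles per cell here:
            // (00, 10, 11) and (00, 11, 01).
            (void)y00; (void)y10; (void)y01; (void)y11;
            triangles += 2;
        }
    }
    std::printf("tessellated %d triangles\n", triangles); // 2 * N * N
}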
Frontends
Currently there is one frontend to picogen, called picogen-wx (based on wxWidgets). It is encapsulated from picogen and thus communicates with it at the command-line level. picogen-wx provides several panels for designing the different aspects of a landscape, e.g. the sun/sky panel or the terrain-texture panel. Each panel has its own preview window, though each preview window can be reached from any other panel. Landscapes can be loaded and saved through picogen's own simple XML-based file format, and images of arbitrary size (including antialiasing) can be saved.
External links
Project website [5]
picogen's DeviantArt group page [6]
References
[1] Introduction to mkheightmap (http://picogen.org/wiki/index.php?title=Introduction_to_mkheightmap)
[2] Height Language Reference (http://picogen.org/wiki/index.php?title=Height_Slang_Reference)
[3] Shaders in picogen (http://ompf.org/forum/viewtopic.php?f=6&t=1050)
[4] A Practical Analytical Model for Daylight, Preetham, et al. (http://www.cs.utah.edu/vissim/papers/sunsky/)
[5] http://picogen.org/
[6] http://picogen.deviantart.com/
Pixie
Developer(s): Okan Arikan et al.
Stable release: 2.2.6 / 15 June 2009
Operating system: Windows, Mac OS X, Linux
Type: 3D computer graphics
License: GPL and LGPL
Website: www.renderpixie.com [1]
Pixie is a free (open source) photorealistic raytracing renderer, developed by Okan Arikan in the Department of Computer Science at the University of Texas at Austin. It is RenderMan-compliant (meaning it reads conformant RIB and supports full RenderMan Shading Language shaders) and is based on the Reyes rendering architecture, but it also supports raytracing for hidden surface determination. Like the proprietary BMRT, Pixie is popular with students learning the RenderMan Interface, and is a suitable replacement for it. Contributions to Pixie are facilitated by SourceForge and the Internet, where it can also be downloaded free of charge as source code or precompiled. It compiles on Windows (using Visual Studio 2005), Linux, and Mac OS X (using Xcode or a Unix-style configure script). Key features include:
64-bit capable
Fast multi-threaded execution
The possibility to distribute the rendering process across several machines
Motion blur and depth of field
Programmable shading (using the RenderMan Shading Language), including full displacement support
Scalable, multi-resolution raytracing using ray differentials
Global illumination
Support for conditional RIB
Point-cloud baking and 3D textures
External links
Home page [1]
Pixie Wiki [2]
Okan Arikan's home page [3]
Blender - Open Source 3D Creator [4]
Rib Mosaic - Blender RIB Export [5]
References
[1] http://www.renderpixie.com/
[2] http://www.renderpixie.com/pixiewiki/Main_Page
[3] http://www.cs.utexas.edu/~okan/
[4] http://www.blender.org/
[5] http://sourceforge.net/projects/ribmosaic/
POV-Ray
Operating system: Cross-platform
Type: Ray tracer
License: POV-Ray License [3]
Website: www.povray.org [4]
The Persistence of Vision Raytracer, or POV-Ray, is a ray tracing program available for a variety of computer platforms. It was originally based on DKBTrace, written by David Kirk Buck and Aaron A. Collins. There are also influences from the earlier Polyray raytracer contributed by its author Alexander Enzmann. POV-Ray is freeware with the source code available.
History
Sometime in the 1980s, David Kirk Buck downloaded the source code for a Unix raytracer to his Amiga. He experimented with it for a while, eventually deciding to write his own raytracer, named DKBTrace after his initials. He posted it to the "You Can Call Me Ray" bulletin board system in Chicago, thinking others might be interested in it. In 1987, Aaron A. Collins downloaded DKBTrace and began working on an x86-based port of it. He and David Buck collaborated to add several more features. When the program proved to be more popular than anticipated, they could not keep up with demand for more features. Thus, in July 1991 David turned over the project to a team of programmers working in the GraphDev forum on CompuServe. At the same time, he felt that it was inappropriate to use his initials on a program he no longer maintained. The name "STAR" (Software Taskforce on Animation and Rendering) was considered, but eventually the name became the "Persistence of Vision Raytracer", or "POV-Ray" for short.[5] POV-Ray was the first ray tracer to render an image in orbit, rendered by Mark Shuttleworth inside the International Space Station.[6] Features of the application and a summary of its history are discussed in an interview with David Kirk Buck and Chris Cason on episode 24 of FLOSS Weekly.[7]
Features
POV-Ray has matured substantially since it was created. Recent versions of the software include the following features:
A Turing-complete scene description language (SDL) that supports macros and loops[8]
Library of ready-made scenes, textures, and objects
Support for a number of geometric primitives and constructive solid geometry
Several kinds of light sources
Atmospheric effects such as fog and media (smoke, clouds)
Reflections, refractions, and light caustics using photon mapping
Glass scene rendered in POV-Ray, demonstrating radiosity, photon mapping, focal blur, and other photorealistic capabilities.
Surface patterns such as wrinkles, bumps, and ripples, for use in procedural textures and bump mapping
Radiosity
Image format support for textures and rendered output, including TGA, PNG, and JPEG (input only), among others
Extensive user documentation

One of POV-Ray's main attractions is its large collection of third-party support. A large number of tools, textures, models, scenes, and tutorials can be found on the web. It is also a useful reference for those wanting to learn how ray tracing and related geometry and graphics algorithms work.
Current version
The current official version of POV-Ray is 3.6. Some of the main features of this release:
Extends UV mapping to more primitives
Adds 16- and 32-bit integer data to density files
Various bugfixes and speed-ups
Improved 64-bit compatibility
Beta-testing of version 3.7 is underway as of July 2008. The main improvement over 3.6 will be SMP support to allow the renderer to take advantage of multiple processors. Additionally, support has been added for HDRI, including the OpenEXR and Radiance file formats, and improved bounding using BSP trees. In July 2006, Intel Corporation started using the beta version to demonstrate their new dual-core Conroe processor due to the efficiency of the 3.7 beta's SMP implementation.
Primitives
POV-Ray, in addition to standard geometric shapes like tori, spheres and heightfields, supports mathematically defined primitives such as the isosurface (a finite approximation of an arbitrary function), the polynomial primitive (an infinite object defined by a 15th order or lower polynomial), the julia fractal (a 3-dimensional slice of a 4-dimensional fractal), the superquadratic ellipsoid (intermediate between a sphere and a cube), and the parametric primitive (using equations that represent its surface, rather than its interior). POV-Ray internally represents objects using their mathematical definitions; all POV-Ray primitive objects can be described by mathematical functions. This is different from many 3D computer modeling packages, which typically use triangle meshes to compose all objects. This fact provides POV-Ray with several advantages and disadvantages over other rendering / modeling systems. POV-Ray primitives are more accurate than their polygonal counterparts. Objects that can be described in terms of spheres, planar surfaces, cylinders, tori and the like are perfectly smooth and mathematically accurate in POV-Ray renderings, whereas polygonal artifacts may be visible in mesh-based modeling software. POV-Ray primitives are also simpler to define than most of their polygonal counterparts. In POV-Ray, a sphere is described simply by its center and radius; in a mesh-based environment, a sphere must be described by a multitude of small polygons.
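The advantage of mathematical definitions is concrete at intersection time: a ray hits a sphere exactly where the quadratic |o + t d - c|^2 = r^2 has a positive root, with no tessellation error. The following is a standard sketch of that test (in C++ here; within POV-Ray's own SDL the sphere is simply written as sphere { center, radius }), not code from POV-Ray itself.

#include <cmath>
#include <cstdio>

// Returns the nearest positive ray parameter t where the ray o + t*d
// meets the sphere |p - c|^2 = r^2, or -1 if the ray misses.
double hitSphere(const double o[3], const double d[3],
                 const double c[3], double r) {
    double oc[3] = {o[0] - c[0], o[1] - c[1], o[2] - c[2]};
    double a = d[0]*d[0] + d[1]*d[1] + d[2]*d[2];
    double b = 2.0 * (oc[0]*d[0] + oc[1]*d[1] + oc[2]*d[2]);
    double k = oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2] - r*r;
    double disc = b*b - 4.0*a*k;
    if (disc < 0.0) return -1.0;                  // ray misses the sphere
    double t = (-b - std::sqrt(disc)) / (2.0*a);  // nearer root first
    return t > 0.0 ? t : (-b + std::sqrt(disc)) / (2.0*a);
}

int main() {
    double o[3] = {0, 0, -4}, d[3] = {0, 0, 1};   // camera ray along +z
    double c[3] = {0, 0, 0};
    double t = hitSphere(o, d, c, 1.0);
    std::printf("hit at t = %.3f\n", t);          // exactly 3.000
}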
Venn diagram of 4 spheres created with CSG.
Some dice rendered in POV-Ray. CSG, refraction and focal blur are demonstrated.
On the other hand, primitive-, script-based modeling is not always a practical method to create objects such as realistic characters or complex man-made artifacts like cars. Those objects have to be created in mesh-based modeling applications such as Wings 3D or Blender and then converted to POV-Ray's own mesh format.
#version 3.6;

// Includes a separate file defining a number of common colours
#include "colors.inc"

global_settings { assumed_gamma 1.0 }

// Sets a background colour for the image (dark grey)
background { color rgb <0.25, 0.25, 0.25> }

// Places a camera
// direction: sets, among other things, the field of view of the camera
// right: sets the aspect ratio of the image
// look_at: tells the camera where to look
camera {
  location <0.0, 0.5, -4.0>
  direction 1.5*z
  right x*image_width/image_height
  look_at <0.0, 0.0, 0.0>
}

// Places a light source
// color: sets the color of the light source (white)
// translate: moves the light source to a desired location
light_source {
  <0, 0, 0>
  color rgb <1, 1, 1>
  translate <-5, 5, -5>
}

// Places another light source
// color: sets the color of the light source (dark grey)
// translate: moves the light source to a desired location
light_source {
  <0, 0, 0>
  color rgb <0.25, 0.25, 0.25>
  translate <6, -6, -6>
}

// Sets a box
// pigment: sets a color for the box ("Red" as defined in "colors.inc")
// finish: sets how the surface of the box reflects light
// normal: sets a bumpiness for the box using the "agate" in-built model
// rotate: rotates the box
box {
  <-0.5, -0.5, -0.5>, <0.5, 0.5, 0.5>
  texture {
    pigment { color Red }
    finish { specular 0.6 }
    normal { agate 0.25 scale 1/2 }
  }
  rotate <45, 46, 47>
}

The following script fragment shows the use of variable declaration, assignment, comparison and the while loop construct:
#declare the_angle = 0;

#while (the_angle < 360)
  box {
    <-0.5, -0.5, -0.5>, <0.5, 0.5, 0.5>
    texture {
      pigment { color Red }
      finish { specular 0.6 }
      normal { agate 0.25 scale 1/2 }
    }
    rotate the_angle
  }
  #declare the_angle = the_angle + 45;
#end
Modeling
The POV-Ray program itself does not include a modeling feature; it is essentially a pure renderer with a sophisticated scene description language. To fill this gap, third parties have developed a large variety of modeling software, some specialized for POV-Ray, others supporting import and export of its data structures. A number of POV-Ray-compatible modelers are linked from Povray.org: Modelling Programs.[9]
Software
Development and maintenance
Official modifications to the POV-Ray source tree are made or approved by the POV-Team. Most patch submission and bug reporting takes place in the POV-Ray newsgroups on the news.povray.org news server (a Web interface is also available[10]). Since POV-Ray's source is available, there are unofficial forks and patched versions of POV-Ray from third parties; however, these are not officially supported by the POV-Team. Official POV-Ray versions currently do not support shader plug-ins.[11] Some features, such as radiosity and splines, are still in development and may be subject to syntactical change.
Platform support
POV-Ray is distributed in compiled form for Macintosh, Windows and Linux. The Macintosh version does not support Intel Macs, but since Mac OS X is a version of Unix, the Linux version can be compiled on it; Intel Mac users can also use the MegaPOV fork, which is compiled as a universal binary.[citation needed] POV-Ray can also be ported to any platform with a compatible C++ compiler. The 3.7 beta versions, which add SMP support, are however still available only for Windows and Linux.
Licensing
POV-Ray is distributed under the POV-Ray License, which permits free distribution of the program source code and binaries but restricts commercial distribution and the creation of derivative works other than fully functional versions of POV-Ray. Although the source code is available for modification, due to specific restrictions it is not open source according to the OSI definition of the term. One of the reasons POV-Ray is not licensed under the free-software GNU General Public License (GPL) or another open-source license is that POV-Ray was developed before GPL-style licenses became widely used; the developers wrote their own license for its release, and contributors to the software have worked under the assumption that their contributions would be licensed under the POV-Ray License. A complete rewrite of POV-Ray ("POV-Ray 4.0") is currently under discussion, which would use a more liberal license, most likely the GPL v3.[12]
References
[3] http://www.povray.org/povlegal.html
[4] http://www.povray.org/
[5] POV-Ray: Documentation: 1.1.5 The Early History of POV-Ray (http://www.povray.org/documentation/view/3.6.0/7/)
[6] Reach for the stars (http://www.oyonale.com/iss.php)
[7] The TWiT Netcast Network with Leo Laporte (http://twit.tv/floss24)
[8] Paul Bourke: Supershape in 3D (http://local.wasp.uwa.edu.au/~pbourke/geometry/supershape3d/); examples of POV-Ray images made with very short code
[9] http://www.povray.org/resources/links/3D_Programs/Modelling_Programs/
[10] http://news.povray.org/groups/
[11] For such an implementation, see e.g. http://www.aetec.ee/fv/vkhomep.nsf/pages/povman2
External links
POV-Ray homepage (http://www.povray.org/)
POV-Ray (http://www.dmoz.org/Computers/Software/Graphics/3D/Animation_and_Design_Tools/POV-Ray/) at the Open Directory Project
Radiance
Developer(s): Greg Ward
Stable release: 4.1 (2011-11-01)
Preview release: None
Written in: C
Operating system: Unix, Linux, Mac OS X, Windows
License: Project-specific open source
Website: http://radsite.lbl.gov/radiance/ [3]
Radiance is a suite of tools for performing lighting simulation, originally written by Greg Ward. It includes a renderer as well as many other tools for measuring the simulated light levels. It uses ray tracing to perform all lighting calculations, accelerated by an octree data structure. It pioneered the concept of high dynamic range imaging, in which light levels are (theoretically) open-ended values instead of a decimal proportion of a maximum (e.g. 0.0 to 1.0) or an integer fraction of a maximum (e.g. 0 to 255 out of 255). It also implements global illumination using the Monte Carlo method to sample the light falling on a point.

Greg Ward started developing Radiance in 1985 while at Lawrence Berkeley National Laboratory. The source code was originally distributed under a license forbidding further redistribution; in January 2002, Radiance 3.4 was relicensed under a less restrictive license. One study found Radiance to be the most generally useful software package for architectural lighting simulation, and noted that Radiance often serves as the underlying simulation engine for many other packages.[4]
This can then be arrayed in another file using the xform program (described later):

scene.rad:

  void metal chrome
  0
  0
  5 0.8 0.8 0.9 0.8 0.0

  !xform -a 5 -t 20 0 0 myball.rad

This creates a chrome material and five chrome spheres spaced 20 units apart along the X-axis. Before a scene can be used, it must be compiled into an octree file ('.oct') using the oconv tool. Most of the rendering tools (see below) use an octree file as input.
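As a usage sketch (file names are illustrative), compiling the scene above into an octree is a single command:

  oconv scene.rad > scene.oct

The resulting 'scene.oct' is what the rendering tools described below take as input.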
Tools
The Radiance suite includes over 50 tools. They were designed for use on Unix and Unix-like systems. Many of the tools act as filters, taking input on standard input and sending the processed result to standard output. These can be used on the Unix command line and piped to a new file, or included in Radiance scene files ('.rad') themselves, as shown above.
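A sketch of this filter idiom, assuming a material named 'pine' has been defined elsewhere (genbox and xform are described in the sections below; xform reads standard input when no file argument is given):

  # generate a box and rotate it 45 degrees about the Z-axis in one pipeline
  genbox pine crate 2 1 0.5 | xform -rz 45 > crate.rad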
Geometry manipulation
Several Radiance programs manipulate Radiance scene data by reading from either a specified file or their standard input, and writing to standard output.
xform allows an arbitrary number of transformations to be performed on a '.rad' file. The transformations include translation, rotation (around any of the three axes) and scaling; it can also perform multi-dimensional arraying (see the sketch after this list).
replmarks replaces certain triangles in a scene with objects from another file; it is used to simplify a scene when modelling in a 3D modeller.
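A one-line sketch (file names are illustrative); xform applies its transformations in the order given on the command line:

  # rotate a chair 90 degrees about the Z-axis, then move it 3 units along X
  xform -rz 90 -t 3 0 0 chair.rad > chair_placed.rad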
Radiance
277
Generators
Generators simplify the task of modelling a scene; they create certain types of geometry from supplied parameters (an example follows the list).
genbox creates a box.
genprism extrudes a given 2D polygon along the Z-axis.
genrev creates a surface of revolution from a given function.
genworm creates a worm given four functions: the (x, y, z) coordinates of the path, and the radius of the worm.
gensurf creates a tessellated surface from a given function.
gensky creates a description of a CIE standard sky distribution.
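As an example of gensky (the date, time and sunny-sky option are arbitrary example values):

  # CIE clear sky with sun (+s) for March 21st at 12:00 solar time
  gensky 3 21 12 +s > sky.rad

The output is ordinary scene description text, so it can be included in a scene like any other '.rad' file.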
Geometry converters
Radiance includes a number of programs for converting scene geometry from other formats. These include:
nff2rad converts NFF objects to Radiance geometry.
obj2rad converts Wavefront .obj files to Radiance geometry.
obj2mesh converts Wavefront .obj files to a Radiance compiled mesh, which can then be included in a scene using the relatively recent mesh primitive; this is more efficient than using obj2rad and preserves texture coordinates (an example follows the list).
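A sketch of the obj2mesh route (file names are illustrative):

  # compile a Wavefront model, taking material definitions from materials.rad
  obj2mesh -a materials.rad model.obj model.rtm

The compiled mesh can then be referenced from a scene file with the mesh primitive:

  void mesh model
  1 model.rtm
  0
  0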
Rendering
rpict is the renderer, producing a Radiance image on its standard output.
rvu is an interactive renderer; it opens an X11 window showing the render in progress and allows the view to be altered.
rtrace traces specific rays into a scene: it reads the parameters for these rays on its standard input and returns the computed light values on standard output. rtrace is used by other tools, and can even render images on its own when the vwrays program is used to generate view rays to pipe into it.
dayfact is an interactive script that computes luminance values and daylight factors on a grid.
findglare takes an image or scene and finds bright sources that would cause discomforting glare to human eyes.
mkillum takes a surface (e.g. a window or lamp shade) and computes the lighting contribution passing through it; this data is then used by the illum material modifier to make lighting from such secondary sources more accurate and efficient to compute.
Example invocations of rpict and rtrace follow the list.
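Hedged examples of the two main tools (view parameters and file names are illustrative):

  # render a 512x512 image of the compiled octree from a given viewpoint
  rpict -vp 0 0 1.5 -vd 0 1 0 -x 512 -y 512 scene.oct > scene.hdr

  # trace a single ray from (0, 0, 1) along +Y and print the computed value
  echo 0 0 1 0 1 0 | rtrace -ov scene.oct

rpict's -vp and -vd options set the view point and view direction; rtrace's -ov requests the radiance value as output.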
Radiance
278
Integration
rad is a front end that reads a '.rif' file describing a scene and multiple camera views (previously, make and a makefile were used in a similar role). rad coordinates oconv, mkillum, rpict/rvu and other programs to render an image (or preview) from the source scene file(s); a sketch of such a file follows the list.
trad is a GUI front end to rad, written in Tcl/Tk.
ranimate is a front end that coordinates many programs to generate virtual walk-through animations, i.e. the camera moves but the scene is static.
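A minimal sketch of a '.rif' control file (the variable names follow rad's conventions; the values are illustrative):

  # scene.rif
  scene= scene.rad
  materials= materials.rad
  view= -vp 0 0 1.5 -vd 0 1 0
  QUALITY= Medium
  RESOLUTION= 512
  PICTURE= out

Running 'rad scene.rif' then compiles the octree and renders the view with option settings derived from the quality variables.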
References
[3] http://radsite.lbl.gov/radiance/
[4] Geoffrey G. Roy, A Comparative Study of Lighting Simulation Packages Suitable for use in Architectural Design, Murdoch University, October 2000.
Sources
Greg Ward Larson and Rob Shakespeare, Rendering with Radiance, Morgan Kaufmann, 1998. ISBN 1-55860-499-5
External links
Radiance homepage (http://radsite.lbl.gov/radiance/HOME.html)
Radiance online (http://www.radiance-online.org/)
Rendering with Radiance online (http://radsite.lbl.gov/radiance/book/)
Anyhere Software, Greg Ward's consulting firm (http://anyhere.com/)
Real3D
Realsoft 3D
Developer(s): Realsoft Oy
Initial release: 1990
Stable release: 7.0.41 / 2010
Written in: C
Operating system: IRIX, Linux, Microsoft Windows, Mac OS X
Available in: English
Type: 3D software
License: Proprietary
Website: http://www.realsoft.fi
Realsoft 3D is a modeling and ray-tracing application created by Realsoft Graphics Oy. Originally called Real 3D, it was developed for the Amiga computer and later also for Linux, Irix, Mac OS X and Microsoft Windows. It was initially written in 1983 on the Commodore 64 by two Finnish brothers, Juha and Vesa Meskanen. Development of Real 3D began in earnest in 1985, when Juha Meskanen started his studies at the Lahti University of Applied Sciences, Finland. Juha's brother Vesa joined the development and left his university career to found the Realsoft company in 1989.
Version history
Real 3D v1
The first commercial Real 3D version was released on the Amiga. It used the IFF REAL format for its objects, and featured constructive solid geometry, support for smooth curved quadric surfaces, and a ray tracer for photo-realistic rendering.[1] Version 1.2, released in 1990, was already distributed in several European countries. Version 1.3 was released in early 1991, and version 1.4 in December 1991; the latter was the last version derived from the original Real3D design. Despite the small version-number increments, v1, v1.2 and v1.3 were all major releases, with new manuals and packaging.
Real 3D v2
Version 2 was released in 1993, redesigned on a new source code base. It introduced ground-breaking features: Booleans, CSG primitives, B-spline curves, physically based animation simulation, morph-based animation techniques, and high-quality rendered output. It took full advantage of the multitasking abilities of the Amiga, allowing the user to continue editing a scene in another window while it rendered. A Microsoft Windows version was released in 1994.
Real 3D v3
Version 3.5 was released in 1996 and was the last version based on the v2 architecture. Realsoft had started a new major development project around 1994, involving a complete software rewrite: new object-oriented source code, platform-independent design, modularity, and the adoption of several other state-of-the-art development methods. Amiga version 3.5, the last version for that system, is now freely available through AmiKit.[2]
Realsoft 3D v4
Version 4 was released in 2000. Beginning with this release, Realsoft renamed the product Realsoft 3D. It was released on multiple platforms, including Microsoft Windows, Linux and SGI Irix. A 4.1 trial version was released on 2001-02-21,[3] and the retail version on 2001-05-25.[4] The 4.2 upgrade was released on 2001-07-19,[5] with Linux (Intel, AMD) and SGI Irix (MIPS) versions following on 2001-07-24.[6] Version 4.5, released on 2002-10-23,[7] introduced caustics and global illumination rendering features, among others; the Linux version followed on 2002-11-03,[8] and an Irix version of 4.5/SP1 (build 26.41b) on 2003-06-03.[9]
Realsoft 3D v5
The Windows version was released on 2004-11-15.[10] It expanded the feature set in all program areas and was also an important visual step forward, supporting full-color 32-bit icons. Service Pack 1 (5.1) for Windows was released on 2005-02-03,[11] Service Pack 2 (5.2) on 2005-10-02,[12] and Service Pack 3 (5.3) on 2006-11-01.[13] The Mac OS X version was released on 2006-12-18.[14] Service Pack 4 (5.4) was released on 2007-05-01,[15] and the Irix version on 2007-12-16.[16]
Realsoft 3D v6
The Windows version was released on 2007-12-18.[17] It introduced powerful parametric tools for plant modeling and building construction, and added support for 64-bit platforms. Service Pack 1 (6.1) for Windows was released on 2008-05-15,[18] the Linux version on 2008-07-01,[19] and the Mac OS X version on 2008-11-11.[20]
Realsoft 3D v7
Version 7 was announced on 2008-12-13, with a projected release in the second or third quarter of 2009. It was released on December 7, 2009 for 32-bit and 64-bit Microsoft Windows.
Reception

Jeff Paries of Computer Graphics World called version 4 "an excellent addition to your toolbox at a reasonable price".[21]
References
[1] Realsoft 3D - History (http://www.realsoft.com/daemon/History.htm)
[2] AmiKit Add-ons (http://amikit.amiga.sk/add-ons.htm)
[3] Realsoft 3D V.4.1 trial version released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2001-02-25)
[4] Realsoft 3D V.4.1 released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2001-05-08)
[5] Realsoft 4.2 upgrade available for registered users (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2001-07-21)
[6] Realsoft 3D v.4.2 released on Linux and SGI Irix (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2001-07-29)
[7] Realsoft 3D v.4.5 shipping (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2002-10-29)
[8] Realsoft 3D v.4.5 for Linux released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2002-12-31)
[9] New version of Realsoft 3D for Irix available (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2003-08-08)
[10] Realsoft 3D V.5 for Windows Shipping (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2004-12-22)
[11] Service Pack 1 for V5 Released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2005-04-06)
[12] Service Pack 2 (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2005-12-23)
[13] Service Pack 3 (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2006-12-18)
[14] Realsoft 3D for Mac OSX released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2007-05-01)
[15] Realsoft 3D V5.1 Service Pack 4 (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2007-05-04)
[16] Realsoft 3D V5 Service Pack 4 for Irix (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2007-12-17)
[17] Realsoft 3D Version 6 for Windows Platforms Released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2007-12-18)
[18] SP1 for Version 6 Released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2008-06-11)
[19] Realsoft 3D v6 for Linux Released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2008-11-11)
[20] Realsoft 3D Version 6 for Mac OS X (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2008-12-23)
[21] Realsoft 3D Version 4 (http://www.cgw.com/ME2/dirmod.asp?sid=&nm=&type=Publishing&mod=Publications::Article&mid=8F3A7027421841978F18BE895F87F791&tier=4&id=25D8808747CB446EB79361C510813D50)
External links
Official website (http://www.realsoft.com)
Render Daemon, additional Realsoft 3D resources (http://www.realsoft.com/daemon/index.html)
Forum and image gallery (http://www.realsoft.org)
Info site (http://www.realsoft.info)
Aminet Real3D traces (http://aminet.net/pix/real3)
Realsoft 3D Wiki (http://rs3dwiki.the-final.info)
Sunflow

A raytraced rendered image produced by Sunflow.

Developer(s): Christopher Kulla
Stable release: 0.07.2 / February 9, 2007 [1]
Operating system: Cross-platform
Type: Ray tracer
License: MIT
Website: http://sunflow.sourceforge.net/ [2]
Sunflow is an open source global illumination rendering system written in Java. The project's current status is unknown; the last announcement on its official page was made in 2007.
References
[1] http://sourceforge.net/projects/sunflow/
[2] http://sunflow.sourceforge.net/
External links
Sunflow Rendering System website (http://sunflow.sourceforge.net/)
Sunflow Forum (http://sunflow.sourceforge.net/phpbb2/)
Sunflow documentation wiki (http://sfwiki.geneome.net/)
TurboSilver
TurboSilver was one of the original 3D ray-tracing software packages for the Amiga, and for personal computers in general. It was first revealed by its creator, Impulse, at the October 1986 AmiEXPO, having been beaten to market by Sculpt 3D, released in July 1986. Version 2.0 followed in November 1987, and version 3.0 in January 1988. In 1991, Impulse released its replacement, named Imagine.[citation needed]
References

Ken Polsson, Chronology of Amiga Computers (http://pctimeline.info/amiga/index.htm)
V-Ray

Render created using V-Ray for Rhinoceros 3D, demonstrating the advanced effects V-Ray is capable of, such as refraction and caustics.

Developer(s): Chaos Group
Stable release: 2.0 / December 6, 2010
Operating system: Linux, Mac OS X and Microsoft Windows
Type: Rendering system
License: Proprietary commercial software
Website: www.chaosgroup.com [1]
Folded paper: SketchUp drawing rendered using V-Ray, demonstrating shading and global illumination
Render created using V-Ray for Rhinoceros 3D, demonstrating the advanced effects V-Ray is capable of, such as reflection, depth of field, and the shape of the aperture (in this case, a hexagon)
V-Ray is a rendering engine used as an extension of certain 3D computer graphics software. Its core developers are Vladimir Koylazov and Peter Mitev of the Chaos Software production studio, established in 1997 and based in Sofia, Bulgaria. V-Ray uses advanced global illumination algorithms such as path tracing, photon mapping, irradiance maps and directly computed global illumination. These techniques often make it preferable to the conventional renderers bundled with 3D software: renders produced with them generally appear more photo-realistic, as actual lighting effects are emulated more faithfully. V-Ray is used in the film and video game industries, and extensively in realistic 3D renderings for architecture.
References
[1] http://www.chaosgroup.com/

Francesco Legrenzi, V-Ray - The Complete Guide, 2008.
Markus Kuhlo and Enrico Eggert, Architectural Rendering with 3ds Max and V-Ray: Photorealistic Visualization, Focal Press, 2010.
External links
V-Ray Essentials Crash Course - Free Tutorial (http://www.cgmeetup.net/home/v-ray-essentials-crash-course/)
V-Ray - Free Video Tutorials (http://www.cgmeetup.net/home/tutorials/chaos-group-vray/)
V-Ray Help (http://www.spot3d.com)
Chaos Group Home Page (http://www.chaosgroup.com)
V-Ray at rhino3d.com (http://www.rhino3d.com/resources/display.asp?language=en&listing=870)
A Closer Look At V-Ray: Architectural Review of V-Ray (http://www.cgarchitect.com/news/Reviews/Review007_1.asp)
ASGVIS, makers of V-Ray for Rhino and V-Ray for SketchUp (http://www.asgvis.com)
VRAYforC4D, the website of V-Ray for Cinema 4D, made by LAUBlab KG (http://www.vrayforc4d.com)
V-Ray/Blender development (http://vray.cgdo.ru/)
YafaRay

Developer(s): YafaRay developers
Stable release: 0.1.1 / June 23, 2009
Written in: C++
Operating system: Cross-platform
Type: Raytracer
License: LGPL
Website: www.yafaray.org [1]
YafaRay is a free, open source ray tracing program that uses an XML scene description language. It has been integrated into the 2.49 version of the 3D modelling software Blender, but requires an exporter for the redesigned 2.5 version of Blender. It is licensed under the GNU Lesser General Public License (LGPL).
History
A YafaRay rendering of piston engine parts modelled in Blender.

YafaRay's predecessor YafRay (Yet Another Free Raytracer) was written by Alejandro Conty Estévez and first released in July 2002. The last version of YafRay was 0.0.9, released in 2006. Due to limitations of the original design, the raytracer was completely rewritten by Mathias Wein. The first stable version of the new raytracer, YafaRay 0.1.0, was released in October 2008.
Features
Rendering
Global illumination: YafaRay uses global illumination to produce realistically lit renderings of 3D scenes, using Monte Carlo-derived approximations.
Skydome illumination: this illumination system is based mainly on light coming from an emitting sky, taking the associated soft-shadow calculations into account. The illumination can be obtained from a high-dynamic-range image.
Caustics: YafaRay uses photon mapping to render caustics (light concentration produced by reflection or transmission, such as through a burning-glass). For simulating translucent materials, a subsurface scattering shader is also under development.
Depth of field: the effect of a focal depth of field can be reproduced. With a focal point fixed in the scene, objects farther from it appear out of focus.
Blurry reflections: if a surface is not a perfect reflector, the reflected light is distorted. This distortion grows as the reflected object is moved further away. YafaRay can simulate this phenomenon.
Architecture
Modular framework: YafaRay has a modular structure, with a kernel to which the other render elements (scene loader, lights and shaders) connect. This, together with an API, allows the development of rendering plug-ins so that YafaRay can be used from any program or 3D suite; supported suites include Blender, Wings 3D and Aztec.
Cross-platform: YafaRay is written entirely in C++, which gives good portability; precompiled binaries exist for the most common platforms: GNU/Linux, Windows 9x/XP/2000, Mac OS X and Irix. YafaRay can also be used as a stand-alone render engine with its own scene description format, so it can be driven directly from the command line, by a script, and so on. There are also provisions for parallel or distributed rendering.
External links
Official website [2]
Material Library [3]
Material Search [4]
Tutorial on how to use YafaRay materials [5]
References
[1] http://www.yafaray.org/
[2] http://www.yafaray.org
[3] http://yafaray.manojky.net/legacy/index_Rate.php
[4] http://yafaray.manojky.net
[5] http://tutorials.manojky.net/yafaray/YafarayMaterialUse.pdf
License

Creative Commons Attribution-Share Alike 3.0 Unported (http://creativecommons.org/licenses/by-sa/3.0/)