3D computer graphics
3D computer graphics are works of graphic art that were created with the aid of digital computers and specialized 3D software. In general, the term may also refer to the process of creating such graphics, or the field of study of 3D computer graphic techniques and its related technology.
3D computer graphics are distinct from 2D computer graphics in that a three-dimensional virtual representation of objects is stored in the computer for the purposes of performing calculations and rendering images. In general, the art of 3D graphics is akin to sculpting or photography, while the art of 2D graphics is analogous to painting. In computer graphics software, this distinction is occasionally blurred; some 2D applications use 3D techniques to achieve certain effects such as lighting, while some primarily 3D applications make use of 2D visual techniques.
OpenGL and Direct3D are two popular APIs for generating 3D imagery on the fly. Many modern graphics cards provide some degree of hardware acceleration based on these APIs, frequently enabling the display of complex 3D graphics in real time. However, it is not necessary to use either of these APIs to create 3D imagery.
Creation of 3D computer graphics
The process of creating 3D computer graphics can be sequentially divided into three basic phases:
- Modelling
- Scene layout setup
- Rendering
Modelling
The modelling stage can be described as shaping the individual objects that are later used in the scene. A number of modelling techniques exist:
- constructive solid geometry
- NURBS modelling
- polygonal modelling
- subdivision surfaces
- implicit surfaces
Modelling processes may also include editing an object's surface or material properties (e.g., color; luminosity; diffuse and specular shading components, more commonly called roughness and shininess; reflection characteristics; transparency or opacity; and index of refraction), as well as adding textures, bump maps and other features.
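As an illustrative sketch, the material properties listed above can be collected into a simple record. The class and field names here are assumptions for illustration only and do not correspond to any particular package's API:

```python
from dataclasses import dataclass

@dataclass
class Material:
    # All field names are illustrative, not tied to any particular package.
    color: tuple = (1.0, 1.0, 1.0)   # base RGB colour, each channel in [0, 1]
    diffuse: float = 0.8             # "roughness" contribution to shading
    specular: float = 0.5            # "shininess" contribution to shading
    reflectivity: float = 0.0        # 0 = matte, 1 = perfect mirror
    opacity: float = 1.0             # 1 = fully opaque, 0 = fully transparent
    ior: float = 1.0                 # index of refraction (1.0 = vacuum/air)

# A glass-like material: mostly transparent, shiny, with a typical glass IOR.
glass = Material(color=(0.9, 0.95, 1.0), diffuse=0.1, specular=0.9,
                 reflectivity=0.1, opacity=0.2, ior=1.5)
```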
Modelling may also include various activities related to preparing a 3D model for animation (although in a complex character model this becomes a stage of its own, known as rigging). Objects may be fitted with a skeleton, a central framework with the capability of affecting the shape or movements of the object. This aids the process of animation, in that movement of the skeleton automatically affects the corresponding portions of the model. See also forward kinematic animation and inverse kinematic animation. At the rigging stage, the model can also be given specific controls to make animation easier and more intuitive, such as facial expression controls and mouth shapes (phonemes) for lip syncing.
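The way a skeleton drives a model can be sketched with planar forward kinematics: each bone's angle is stored relative to its parent, so rotating a parent bone automatically moves every child joint. This is a minimal sketch, not the algorithm of any particular package:

```python
import math

def forward_kinematics(lengths, angles):
    """Position of each joint of a planar bone chain.

    `angles` are relative to the parent bone, so rotating a parent
    automatically moves every child joint, as described above.
    """
    x = y = total = 0.0
    joints = [(x, y)]
    for length, angle in zip(lengths, angles):
        total += angle                 # accumulate relative rotations
        x += length * math.cos(total)
        y += length * math.sin(total)
        joints.append((x, y))
    return joints

# Two-bone "arm": upper arm rotated 90 degrees up, forearm bent back 90 degrees.
arm = forward_kinematics([2.0, 1.0], [math.pi / 2, -math.pi / 2])
```

Inverse kinematics solves the opposite problem: given a desired position for the end of the chain, find the joint angles that reach it.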
Modelling can be performed by means of a dedicated program (e.g., Lightwave Modeler, Rhinoceros 3D, Moray), an application component (Shaper, Lofter in 3D Studio) or some scene description language (as in POV-Ray). In some cases, there is no strict distinction between these phases; in such cases modelling is just part of the scene creation process (this is the case, for example, with Caligari trueSpace).
Scene layout setup
Scene setup involves arranging virtual objects, lights, cameras and other entities in a scene which will later be used to produce a still image or an animation. If used for animation, this phase usually makes use of a technique called "keyframing", which facilitates the creation of complicated movement in the scene. With the aid of keyframing, instead of having to fix an object's position, rotation, or scaling for every frame of an animation, one need only set up a number of key frames; the states in between are interpolated automatically.
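The interpolation step behind keyframing can be sketched in a few lines; this minimal version interpolates linearly, while real animation packages typically offer smoother curves (e.g., splines) as well:

```python
def interpolate_keyframes(keys, t):
    """Linearly interpolate a value between keyframes.

    `keys` is a sorted list of (frame, value) pairs; frames between two
    keys receive a blend of the surrounding values, as in keyframe animation.
    """
    if t <= keys[0][0]:
        return keys[0][1]
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= t <= f1:
            u = (t - f0) / (f1 - f0)   # fraction of the way between the keys
            return v0 + u * (v1 - v0)
    return keys[-1][1]

# An object's x position, keyed at frames 0, 10 and 30.
keys = [(0, 0.0), (10, 5.0), (30, 5.0)]
print(interpolate_keyframes(keys, 5))   # halfway between frames 0 and 10 -> 2.5
```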
Lighting is an important aspect of scene setup. As is the case in real-world scene arrangement, lighting is a significant contributing factor to the resulting aesthetic and visual quality of the finished work. As such, it can be a difficult art to master. Lighting effects can contribute greatly to the mood and emotional response evoked by a scene, a fact which is well known to photographers and theatrical lighting technicians.
Tessellation and meshes
The process of transforming object representations, such as the center-point coordinate of a sphere and a point on its circumference, into a polygon representation of that sphere is called tessellation. This step is used in polygon-based rendering, where objects are broken down from abstract representations ("primitives") such as spheres, cones, etc., into so-called meshes, which are nets of interconnected triangles.
Polygon representations are not used in all rendering techniques, and in these cases the tessellation step is not included in the transition from abstract representation to rendered scene.
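A common way to tessellate a sphere is the latitude/longitude ("UV sphere") scheme sketched below: sample points on the surface in rings, then connect neighbouring rings with triangles. This is one simple approach among several (icosphere subdivision is another), shown here only to make the primitive-to-mesh step concrete:

```python
import math

def tessellate_sphere(radius, slices, stacks):
    """Approximate a sphere (an abstract primitive) as a triangle mesh."""
    verts = []
    for i in range(stacks + 1):
        phi = math.pi * i / stacks            # latitude: 0 (top) .. pi (bottom)
        for j in range(slices):
            theta = 2 * math.pi * j / slices  # longitude around the sphere
            verts.append((radius * math.sin(phi) * math.cos(theta),
                          radius * math.cos(phi),
                          radius * math.sin(phi) * math.sin(theta)))
    tris = []
    for i in range(stacks):
        for j in range(slices):
            a = i * slices + j                # quad corners on two adjacent rings
            b = i * slices + (j + 1) % slices
            c = a + slices
            d = b + slices
            tris.append((a, c, b))            # split each quad into two triangles
            tris.append((b, c, d))
    return verts, tris

verts, tris = tessellate_sphere(1.0, slices=8, stacks=4)
# Finer slices/stacks -> more triangles -> a smoother-looking sphere.
```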
Rendering
Rendering is the final process of creating the actual 2D image or animation from the prepared scene. This can be compared to taking a photo or filming the scene after the setup is finished in real life. Photo-realistic image quality is often the desired outcome, and to this end several different, and often specialized, rendering methods have been developed. These range from the distinctly non-realistic wireframe rendering through polygon-based rendering to more modern techniques such as scanline rendering, raytracing and radiosity.
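The basic building block of a raytracer is the intersection test between a ray and an object: for each pixel of the image, a ray is cast into the scene and tested against every object. A minimal sketch for the ray-sphere case:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance along a ray to the nearest hit with a sphere, or None.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic in t.
    """
    ox, oy, oz = (origin[k] - center[k] for k in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                       # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer of the two intersection points
    return t if t > 0 else None

# Camera at the origin looking down +z; unit sphere centred 5 units away.
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0
```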
Rendering software may simulate such cinematographic effects as lens flares, depth of field or motion blur. These artifacts are, in reality, a by-product of the mechanical imperfections of physical photography, but as the human eye is accustomed to their presence, the simulation of such artifacts can lend an element of realism to a scene. Techniques have been developed for the purpose of simulating other naturally-occurring effects, such as the interaction of light with atmosphere, smoke, or particulate matter. Examples of such techniques include particle systems (which can simulate rain, smoke, or fire), volumetric sampling (to simulate fog, dust and other spatial atmospheric effects), and caustics (to simulate light focusing by uneven light-refracting surfaces, such as the light ripples seen on the bottom of a swimming pool).
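A particle system, mentioned above for effects like rain, smoke and fire, boils down to advancing many short-lived points through time and discarding the expired ones. A minimal sketch, with a dict-based particle layout chosen purely for illustration:

```python
import random

def step_particles(particles, dt, gravity=-9.8):
    """Advance a simple particle system by one time step.

    Each particle is a dict with position, velocity and remaining life;
    expired particles are dropped. Rendering many such short-lived points
    or sprites is how effects like rain, smoke and fire are simulated.
    """
    alive = []
    for p in particles:
        p["vel"][1] += gravity * dt        # gravity pulls particles down
        p["pos"] = [x + v * dt for x, v in zip(p["pos"], p["vel"])]
        p["life"] -= dt
        if p["life"] > 0:
            alive.append(p)
    return alive

# A burst of "spark" particles with random upward velocities.
sparks = [{"pos": [0.0, 0.0, 0.0],
           "vel": [random.uniform(-1, 1), random.uniform(2, 5), 0.0],
           "life": random.uniform(0.5, 2.0)}
          for _ in range(100)]
sparks = step_particles(sparks, dt=0.1)
```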
The rendering process is computationally expensive, given the complex variety of physical processes being simulated. Computer processing power has increased rapidly over the years, allowing for a progressively higher degree of realism in rendering. Film studios that produce computer-generated animations typically make use of a render farm to generate images in a timely manner. However, falling hardware costs mean that it is entirely feasible to create small amounts of 3D animation on a home computer system.
Often renderers are included in 3D software packages, but there are some rendering systems that are used as plugins to popular 3D applications. These rendering systems include Final-Render, Brazil r/s, V-Ray, Mental Ray and Pixar's RenderMan.
The output of the renderer is often used as only one small part of a completed motion-picture scene. Many layers of material may be rendered separately and combined into the final version using traditional cinematic tools.
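Combining separately rendered layers is typically done with alpha compositing; the standard "over" operator (from Porter and Duff's compositing algebra) stacks a foreground layer onto a background, back to front. A minimal per-pixel sketch:

```python
def over(fg, bg):
    """Composite a foreground RGBA pixel onto a background ("over" operator).

    Channels are floats in [0, 1]; alpha is coverage. Separately rendered
    layers (characters, effects, backgrounds) are combined this way,
    back to front, into the final frame.
    """
    fr, fgreen, fb, fa = fg
    br, bgreen, bb, ba = bg
    a = fa + ba * (1 - fa)                 # combined coverage
    if a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    def blend(f, b):
        return (f * fa + b * ba * (1 - fa)) / a
    return (blend(fr, br), blend(fgreen, bgreen), blend(fb, bb), a)

# A half-transparent red layer over an opaque blue background.
print(over((1, 0, 0, 0.5), (0, 0, 1, 1.0)))  # -> (0.5, 0.0, 0.5, 1.0)
```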
Reflection and shading models
In refraction of light, an important concept is the refractive index. In most 3D programming implementations, the term for this value is "index of refraction," usually abbreviated "IOR."
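The refractive index enters rendering through Snell's law, n1·sin(θ1) = n2·sin(θ2), which gives the angle at which a ray bends when crossing the boundary between two media. A minimal sketch:

```python
import math

def refract_angle(theta_incident, n1, n2):
    """Refraction angle from Snell's law: n1*sin(t1) = n2*sin(t2).

    n1 and n2 are the indices of refraction (IOR) of the two media.
    Returns None when total internal reflection occurs.
    """
    s = n1 * math.sin(theta_incident) / n2
    if abs(s) > 1:
        return None          # total internal reflection: no transmitted ray
    return math.asin(s)

# Light entering glass (IOR ~1.5) from air (IOR ~1.0) at 30 degrees
# bends toward the surface normal, to roughly 19.47 degrees.
t2 = refract_angle(math.radians(30), 1.0, 1.5)
print(round(math.degrees(t2), 2))
```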
Popular reflection rendering techniques in 3D computer graphics include:
- Flat shading: A technique that shades each polygon of an object based on the polygon's "normal" and the position and intensity of a light source.
- Gouraud shading: Invented by H. Gouraud in 1971, a fast and resource-conscious vertex shading technique used to simulate smoothly shaded surfaces.
- Texture mapping: A technique for simulating a large amount of surface detail by mapping images (textures) onto polygons.
- Phong shading: Invented by Bui Tuong Phong, used to simulate specular highlights and smooth shaded surfaces.
- Bump mapping: Invented by Jim Blinn, a normal-perturbation technique used to simulate wrinkled surfaces.
- Cel shading: A technique used to imitate the look of hand-drawn animation.
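The flat shading entry above can be made concrete: the brightness of a polygon is the Lambert term max(0, N·L), computed once per polygon from its normal N and the light direction L, which is what gives flat shading its faceted look. A minimal sketch:

```python
def flat_shade(normal, light_dir, intensity):
    """Flat-shade one polygon: brightness from its normal and the light.

    Implements the Lambert term max(0, N . L); every pixel of the polygon
    gets this single value. Both vectors are assumed to be unit length,
    with light_dir pointing from the surface toward the light.
    """
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return intensity * max(0.0, dot)

# Polygon facing straight up, light shining straight down onto it: fully lit.
print(flat_shade((0, 1, 0), (0, 1, 0), 1.0))   # -> 1.0
# The same polygon edge-on to the light receives nothing.
print(flat_shade((0, 1, 0), (1, 0, 0), 1.0))   # -> 0.0
```

Gouraud and Phong shading refine this by evaluating the lighting per vertex or per pixel rather than once per polygon.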
3D graphics APIs
3D graphics have become so popular, particularly in computer games, that specialized APIs (application programming interfaces) have been created to ease the processes in all stages of computer graphics generation. These APIs have also proved vital to computer graphics hardware manufacturers, as they provide a way for programmers to access the hardware in an abstract way, while still taking advantage of the special features of any particular graphics card.
Particularly popular APIs for 3D computer graphics include OpenGL and Direct3D, both mentioned above.
There are also higher-level 3D scene-graph APIs, such as OpenSceneGraph and Java 3D, which provide additional functionality on top of the lower-level rendering API.