advanced graphics study guide

Coordinate systems

3D computer graphics rely on a number of coordinate systems. These include: World, Object, Local, Viewpoint and Screen coordinates.

The graphics pipeline

Computers have a graphics pipeline: the sequence of steps used to create a 2D representation of a 3D scene. Figure 1.1 gives an overview

The polygon mesh defines the outer surface of the object

A model or scene made in this manner is known as a wireframe. Figure 3.3 shows an object created from a triangle mesh.

Trigonometry

The main trigonometric functions are sine (sin), cosine (cos) and tangent (tan), as well as their inverse functions arcsine (sin−1 or asin), arccosine (cos−1 or acos) and arctangent (tan−1 or atan).

Displays usually index pixels with an ordered pair (x, y) indicating the column and row of the pixel. If a display has nx columns and ny rows of pixels, the bottom-left pixel is (0, 0) and the top-right is (nx − 1, ny − 1).

Vector addition:

v + u = (vx + ux, vy + uy, vz + uz)

The definition of a vector in 3D space:

v = (x, y, z)

Dot (scalar) product:

v · u = vxux + vyuy + vzuz

Vector subtraction:

v − u = (vx − ux, vy − uy, vz − uz)

Position and direction vectors

The length of a vector:

|v| = √(x² + y² + z²)

Normalisation of a vector

v̂ = v / |v|
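
As a concrete illustration, here is a minimal sketch of these vector operations using Processing's built-in PVector class (the numeric values are arbitrary):

PVector v = new PVector(1, 2, 3);
PVector u = new PVector(4, 5, 6);

PVector sum = PVector.add(v, u);   // (vx + ux, vy + uy, vz + uz)
PVector diff = PVector.sub(v, u);  // (vx - ux, vy - uy, vz - uz)
float dot = v.dot(u);              // vx*ux + vy*uy + vz*uz
float length = v.mag();            // sqrt(x^2 + y^2 + z^2)
PVector unit = v.copy();
unit.normalize();                  // scales the copy of v to length 1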

Below is an example of a (very simple) vertex shader program in GLSL. It transforms the vertices by the transform matrix and then passes the colour of the vertices to the fragment shader (this is the colour set using the fill command in Processing). We will describe it in more detail in the following sections.

#define PROCESSING_COLOR_SHADER // Note the US spelling

uniform mat4 transform;

attribute vec4 vertex;
attribute vec4 color; // Note the US spelling

varying vec4 col;

void main(){
  gl_Position = transform*vertex;
  col = color;
}

Our example fragment shader starts with some code that ensures compatibility between different versions of OpenGL (do not worry too much about this but remember to include it in your code):

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

This is the corresponding fragment shader. It simply colours the pixel according to the vertex colour.

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

varying vec4 col;

void main() {
  gl_FragColor = col; // Note US spelling
}
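
A minimal sketch of how these shaders might be loaded and used from a Processing sketch; the file names vert.glsl and frag.glsl are assumptions for illustration:

PShader myShader;

void setup() {
  size(640, 480, P3D);
  // loadShader takes the fragment shader file first, then the vertex shader file
  myShader = loadShader("frag.glsl", "vert.glsl");
}

void draw() {
  shader(myShader);  // subsequent geometry is drawn with our shader
  background(255);
  fill(255, 0, 0);   // this colour is passed to the shader as the vertex colour
  box(100);
  resetShader();     // return to Processing's default shaders
}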

Other methods are aimed at making the animation look more realistic and expressive.

'Squash and stretch' is about changing the shape of an object to emphasise its motion; in particular, stretching it along the direction of movement.

Digital photographs are often encoded in a camera's raw image format, because 8 bit JPEG encoding does not offer enough values to allow fine transitions (and introduces undesirable effects due to the lossy compression).

HDR images do not use integer values to represent the single colour channels (for example, [0 . . . 255] in an 8 bit per pixel interval for R, G and B) but instead use a floating point representation. Three of the most common file formats are as follows.

Some surfaces are more difficult to texture map: think of the problems encountered when gift-wrapping a cylinder or a spherical object. In this case, an (x, y, z) value is converted to cylindrical coordinates (radius, theta, height). The radius is not used. To find the required colour in the 2D texture map, theta is converted into an x-coordinate and height is converted into a y-coordinate, meaning the texture map is wrapped around the object.

Each of the stages in this pipeline can be further subdivided and have their own pipelines too, but at this level the sequence of steps encompasses everything from the very beginning of the modelling process to the final output - the rendered image - seen by the user.

a spline

A spline is a curve that connects two or more specific points: the term comes from mechanical drafting, where a flexible strip was used to trace a curve.

Transformations are manipulations such as translate, rotate and scale that are performed on objects within a modelled scene or environment.

In 3D graphics we have objects in an arbitrary 3D space which then need to be mapped onto a 2D screen. We need coordinate systems for vectors and transforms to make sense.

example of a drawback of forward kinematics?

When animating, you often want to do things like make a character's hand touch a door handle. Trying to get the joint angles right so as to accurately place the hand can be a long and difficult process.

This works for simple positive line slopes where the slope is less than 45°, which corresponds to the first octant. For other octants, it is simply a matter of using the appropriate increment/decrement.

This is an empirical model: it is not based on physics but on physical observation; that is, it fits empirical observations but with no particular theoretical justification. Phong Bui-Tuong observed that for very shiny surfaces the specular highlight was small and its intensity fell off rapidly, while for duller surfaces it was larger and fell off more slowly. The model consists of three reflection components: the diffuse component, the specular component and the ambient component.

Bresenham's circle algorithm

Derived from the midpoint circle algorithm, it takes advantage of a circle's symmetry to plot eight points, reflecting each calculated point around each 45° axis.

There is an alternative to calculating viewpoint-dependent illumination interactions. Radiosity simulates the diffuse propagation of light starting at the light sources. It is independent of what we see: it calculates 3D data rather than working on the pixels in an image-plane projection, so the solution will be the same regardless of the viewpoint. This would be very time consuming with ray tracing, where for each change in viewing position the scene needs to be recalculated.

Not only does it incorporate direct illumination - light that travels directly from a light source to a surface - but it can also handle light that originates from within the scene environment: the indirect light that is present due to light bouncing off other surfaces and reaching other objects.

Objects are stretched when going fast and squashed when changing direction. 'Slow in slow out' is about controlling the speed of an animation to make it seem smoother: start slow, speed up in the middle and slow to a stop.

computer-generated image components

All of these computer-generated images are made from a number of very simple primitives: points, lines and polygons.

A more principled approach is to model each of the muscles of the face. In fact this can be done using one of the techniques just discussed

Each muscle could have a morph target, or a bone, or there could be a more complex physical simulation as mentioned for multi-layered body animation.

GPUs are often part of dedicated graphics cards, but can also be integrated on a single chip with the CPU. Early GPUs simply implemented a fixed set of graphics functionality in hardware, but modern GPUs are programmable: you can write short programs that run directly on the GPU hardware and change the way your objects are drawn. These programs are traditionally called shaders.

The procedure is to compute a normal for each vertex of the polygon, then to compute a normal for each pixel using bi-linear interpolation. For each pixel normal, compute an intensity for that pixel of the polygon and paint the pixel with the corresponding shade.
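
For instance, a minimal sketch of the per-pixel intensity step, assuming the interpolated pixel normal and light direction are unit-length PVectors and kd is a hypothetical diffuse reflection coefficient:

float diffuseIntensity(PVector pixelNormal, PVector lightDir, float kd) {
  // Lambertian term: proportional to the cosine of the angle between N and L
  float nDotL = max(0, pixelNormal.dot(lightDir));
  return kd * nDotL;
}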

The Cohen-Sutherland algorithm uses a divide-and-conquer strategy. The line segment's endpoints are tested to see if the line can be trivially accepted or rejected (Figure 3.17). If the line cannot be trivially accepted or rejected, an intersection of the line with a window edge is determined and the trivial reject/accept test is repeated. This process is continued until the remaining segment can be trivially accepted or rejected.

Light gives us colour, and we need to reproduce that colour in our virtual image by giving each pixel on the screen a certain colour value. The colour of a pixel is usually recorded as three values: red, green and blue (RGB). Mixing these three primary lights is sufficient to give the impression that a full rainbow spectrum is reproducible. This is known as the RGB colour model. This is additive colour mixing; it differs from subtractive mixing used in colour photography and printing, where cyan, magenta and yellow are the primaries.

Psychophysical experiments are a way of measuring psychological responses in a quantitative way so that they correspond to actual physical values. Psychophysics is a branch of experimental psychology that examines the relationship between the physical world and people's reactions to and experience of that world. Psychophysical experiments can be used to determine responses such as sensitivity to a stimulus. In the field of computer graphics, this information can then be used to design systems that are finely attuned to the perceptual attributes of the visual system.

rasterisation stage

Rasterisation. The vertices are assembled into primitives (normally polygons but they could also be lines or dots). These polygons are then rasterised: that is, they are drawn as a set of individual pixels (or fragments).

In order to display an image on screen we simply need to choose what colour each pixel will be

The more accurate our colour choice, the more faithful the image.

This means a mathematical simulation of the equations of physics. The most important equation is Newton's second law:

f = ma (9.1)

To state it in words: force (f) equals mass (m) times acceleration (a).

These systems are less constrained in terms of the space in which you do the capture. They also have the benefit of directly outputting joint angles rather than marker positions. However, they can be bulky, particularly the cheap systems, and can be uncomfortable and constraining to wear, resulting in less realistic motion. Lighter-weight systems have recently been developed but they can be expensive.

They take three types of input: uniform variables (in the same way as the vertex shader), varying values passed in from the vertex shader (e.g. transformed position, colour and texture coordinates), and texture maps. They output colour and depth values that are used to draw the fragments to the framebuffer.

In terms of local illumination, to shade an image we need to set each pixel in a scene to a certain colour. To determine that colour we need to combine the effects of the lights with the surface properties of the polygon visible in that pixel.

In recent years, visual perception has increased in importance in computer graphics, predominantly due to the demand for realistic computer generated images. The goal of perceptually-based rendering is to produce imagery that evokes the same responses as an observer would have when viewing a real-world equivalent

To this end, work has been carried out on exploiting the behaviour of the human visual system (HVS). For this information to be measured quantitatively, a branch of perception known as psychophysics is employed, where quantitative relations between physical stimuli and psychological events can be established.

transform stage

Transform. All vertices are transformed into view coordinates by the ModelView matrix. This stage can also include other operations such as lighting vertices.

what does visual perception deal with?

Visual perception deals with the information that reaches the brain through the eyes.

Drawing a shape is as simple as calling a command shape and passing your shape object as a parameter:

// draw simply has to draw the shape that has been defined.
void draw(){
  background(255);
  lights();
  // draw the shape
  shape(myShape);
}

In Processing a graphics object is represented by a class called a PShape. This is a built in class and you can create a variable of type PShape without using any libraries, like this:

// declare a variable to hold our shape
PShape myShape;

Implementing morph targets.

// iterate over all children of the base shape
for (int i = 0; i < base.getChildCount(); i++) {
  // iterate over all vertices of the current child
  for (int j = 0; j < base.getChild(i).getVertexCount(); j++) {
    // create a PVector to represent the new
    // vertex position
    PVector vert = new PVector(0, 0, 0);
    // iterate over all the morph targets
    for (int morph = 0; morph < morphs.length; morph++) {
      // get the corresponding vertex in the morph target
      // i.e. the same child and vertex number
      PVector v = morphs[morph].getChild(i).getVertex(j);
      // multiply the morph vertex by the morph weight
      // and add it to our new vertex position
      vert.add(PVector.mult(v, weights[morph]));
    }
    // set the vertex position of the base object
    // to the newly calculated vertex position
    base.getChild(i).setVertex(j, vert);
  }
}

A PShape object has methods beginShape, endShape and vertex that act much like their immediate mode equivalents. However, they do not draw the vertices directly, they simply add the vertices to the shape. This only needs to be done once, when we create the shape, rather than whenever we draw it:

// most of the work is done in setup,
// which creates the shape object
void setup(){
  size(640, 480, P3D);
  // create the shape object
  // it starts off empty, with
  // no vertices
  myShape = createShape();
  // add vertices to the shape using
  // beginShape and the vertex method
  // this works just like immediate mode
  myShape.beginShape(TRIANGLE_STRIP);
  {
    myShape.vertex(-100, -100, 50);
    myShape.vertex(100, -100, 0);
    myShape.vertex(100, 100, -50);
    myShape.vertex(50, 100, 0);
    myShape.vertex(-100, 50, 50);
  }
  myShape.endShape(CLOSE);
}

Detecting collisions

// check if bodies are intersecting
int numManifolds = physics.world.getDispatcher().getNumManifolds();
for (int i = 0; i < numManifolds; i++) {
  PersistentManifold contactManifold =
    physics.world.getDispatcher().getManifoldByIndexInternal(i);
  int numCon = contactManifold.getNumContacts(); // change and use this number
  if (numCon > 0) {
    RigidBody rA = (RigidBody) contactManifold.getBody0();
    RigidBody rB = (RigidBody) contactManifold.getBody1();
    if (rA == droid.physicsObject.rigidBody) {
      for (int j = 0; j < crates.length; j++) {
        if (rB == crates[j].physicsObject.rigidBody) {
          score += 1;
        }
      }
    }
    if (rB == droid.physicsObject.rigidBody) {
      for (int j = 0; j < crates.length; j++) {
        if (rA == crates[j].physicsObject.rigidBody) {
          score += 1;
        }
      }
    }
  }
}

In BRigid, creating a world requires you to set the extents of the world; that is, the minimum and maximum values for x, y and z. These are used to create a BPhysics object which represents the world as shown in Code example 9

// extents of physics world
Vector3f min = new Vector3f(-120, -250, -120);
Vector3f max = new Vector3f(120, 250, 120);
// create a rigid physics engine with a bounding box
physics = new BPhysics(min, max);

There are six major elements to a graphics system:

1. Input devices
2. Central processing unit (CPU)
3. Graphics processing unit (GPU)
4. Memory
5. Framebuffer
6. Output devices.

There are two general types of procedural texture

1. Those that use regular geometric patterns.
2. Those that use random patterns.

This is computed by specifying the coordinates as follows:

1st octant: x, y
2nd octant: y, x
3rd octant: −y, x
4th octant: −x, y
5th octant: −x, −y
6th octant: −y, −x
7th octant: y, −x
8th octant: x, −y
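
A minimal sketch of plotting the eight symmetric points for a circle centred at (cx, cy), following the octant table above; plot8 is a hypothetical helper name:

void plot8(int cx, int cy, int x, int y) {
  point(cx + x, cy + y);  // 1st octant
  point(cx + y, cy + x);  // 2nd octant
  point(cx - y, cy + x);  // 3rd octant
  point(cx - x, cy + y);  // 4th octant
  point(cx - x, cy - y);  // 5th octant
  point(cx - y, cy - x);  // 6th octant
  point(cx + y, cy - x);  // 7th octant
  point(cx + x, cy - y);  // 8th octant
}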

There are two general ways in which shading can be applied in polygon-based systems: flat shading and interpolation shading. They provide increasing realism at a higher computational cost.

An environment can be viewed as an infinity of light sources, and a map can represent any arbitrary geometry of light sources. For example, in the environment map, striplights are just rectangles of high-intensity white values.

Bidirectional Reflectance Distribution Function or BRDF

A BRDF is essentially the description of how a surface reflects.

If a material is opaque then the majority of incident light is transformed into reflected light and absorbed light, and so what an observer sees when its surface is illuminated is the reflected light. The degree to which light is reflected (or transmitted) depends on the viewer and light position relative to the surface normal and tangent.

A BRDF is therefore a function of the incoming light direction and the outgoing direction (the direction of the viewer). As well as that, since light interacting with a surface absorbs, reflects, and transmits wavelengths depending upon the physical properties of the material, this means that a BRDF is also a function of wavelength. It can be considered as an impulse-response of a linear system.

Pierre Bézier came up with a way of constructing a curve based on a number of control points.

A Bézier curve is a polynomial curve that approximates its control points (often known as knots), coming close to each of the points in a principled and smooth fashion.

Physics world

A Physics World is a structure that defines properties of the whole simulation. This typically includes the size of the volume to be simulated as well as other parameters such as gravity. Most physics engines require you to create a world before setting up any other element of the simulation, and to explicitly add objects to that world

Pixels

A computer image is usually represented as a discrete grid of picture elements a.k.a. pixels. The number of pixels determines the resolution of the image

Curve

A curve is made up of a number of points that are related by some function. Any point on the curve has two neighbours, except for the endpoints which have only one neighbour

Projections

A display device (such as a screen) is generally 2D, so how do we map 3D objects to 2D space? This is a question that artists and engineers alike over the centuries have had to contemplate. The answer is that we need to transform 3D objects on to a 2D plane using projections.

The surfaces of the scene to be rendered are each divided up into one or more smaller surfaces known as patches.

A form factor (also known as a view factor) - a coefficient describing how well the patches can see each other - is computed for each patch. The form factors can be calculated in a number of ways. The early way of doing this was the hemicube method - a way to represent a 180◦ view from a surface or point in space. This, however, is quite computationally expensive, so a more efficient approach is to use a BSP tree to reduce the amount of time spent.

A global illumination model adds to the local model the light that is reflected from other surfaces to the current surface.

A global illumination model is more comprehensive, more physically correct, and produces more realistic images. It is also a great deal more computationally expensive. Global illumination is covered in Chapter 7 of this subject guide.

The most common way to represent bumps is by the heightfield method

A greyscale texture map is used, where the brightness of each pixel represents how much it sticks out from the surface (black is minimum height, white is maximum height). Bump mapping changes the brightness of the pixels on the surface in response to the heightmap that is specified for each surface

In a wireframe drawing, all the edges of the polygons making up the model are drawn

A hidden surface drawing procedure attempts to colour in the drawing so that joins between polygons and polygons that are out of sight (hidden by others) are not shown. For example, if you look at a solid cube you can only see three of its facets at any one time, or from any one viewpoint.

physics simulation

A particularly popular approach is to simulate the laws of physics to get realistic movement and interaction between objects

Physics engines

A physics engine is a piece of software for simulating physics in an interactive 3D graphics environment. It will perform the simulation behind the scenes and make it easy to set up complex simulations.

Simulating physics

A physics engine is a piece of software that simulates the laws of physics (at least within a certain domain).

polygon back facing

A polygon is back-facing if its normal, N, is facing away from the viewing direction, V

Ordered seed fill

A seed pixel is chosen inside the region to be filled. Each pixel on that row, to the left and right of the seed, is filled pixel by pixel until a boundary is encountered. Extra seeds are placed on the rows above and below and are processed in the same manner. This is also recursive but the number of recursive calls is dramatically reduced.

works on the same principle

A sequence of images is displayed at 25 frames per second (the minimum to make it appear like smooth motion)

Facial bones are similar to bones in body animation

A set of underlying objects that can be moved to control the mesh. Each bone affects a number of vertices with weights in a similar way to smooth skinning for body animation

Skinning

A skeleton is a great way of animating a character but it does not necessarily look very realistic when rendered

A natural way to construct a curve from a set of given points is to force the curve to pass through the points, or interpolate the points.

A spline curve is usually approximated by a set of short line segments. It is rare that a single function can be constructed to pass smoothly through all given points; therefore, it is generally necessary to use a series of curves end-to-end.

Spot light

A spot light is a point source whose intensity falls off away from a given direction (Figure 5.2). The beam of the spotlight is normally assumed to have a graduated edge so that the illumination is at its maximum inside a cone, falling to zero intensity outside a cone

Triangle fans

A triangle fan is similar to a triangle strip but uses the vertices in a different order. The first vertex is part of every single triangle: each subsequent vertex forms a triangle together with the previous vertex and the first vertex of the overall shape.

a variety of input devices are currently available

A variety of input devices are currently available but the most commonly encountered in everyday use are the computer mouse and keyboard, with touchscreen having recently gained prominence due to the increased use of mobile devices

Vectors

A vector is composed of N number of components. The number of components determines the dimension of the vector

Filling

A wireframe drawing usually needs to be filled in some way and there are several ways of achieving this. The following methods consider the simple problem of filling a closed polygonal region with a single colour

24-bit colour

Actual RGB levels are often specified with an integer for each component. This is most commonly a one byte integer, so each of the three RGB components is an integer between 0 and 255.
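
A minimal sketch of how Processing packs these three one-byte components (plus alpha) into a single int, and how they can be read back with bit operations:

color c = color(200, 120, 30);  // one byte per component
int r = (c >> 16) & 0xFF;       // red occupies bits 16-23
int g = (c >> 8) & 0xFF;        // green occupies bits 8-15
int b = c & 0xFF;               // blue occupies bits 0-7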

Psychologist Paul Ekman defined a set of six universal human emotions (joy, sadness, surprise, anger, disgust, fear), which he called the basic emotions.

All are independent of culture and each has a distinctive facial expression. They are a common basis for morph targets but can be very unsubtle.

Directional light

All of the rays from a directional light source have a common direction and no point of origin (Figure 5.2). It is as if the light source was infinitely far away from the surface that it is illuminating

Tone mapping (also known as tone reproduction) provides a method of scaling (or mapping) luminance values in the real world to a displayable range

Although it is tempting to use a straightforward linear scaling, this is not an adequate solution as many details can be lost (Figure 10.3). The mapping must be tailored in some non-linear way.

Rendering equation

An integral equation in which the equilibrium radiance leaving a point is given as the sum of emitted plus reflected radiance under a geometric optics approximation.

hierarchical transform system

As in most graphics systems, transforms are hierarchical, so FK can easily be implemented using the existing functionality of the engine.

benefit of Hermite curves

As we need to go through the keyframes we use Hermite curves instead (Figure 8.2(b)). These are equivalent to Bézier curves but, rather than specifying four control points, you specify two end points and tangents at those end points. In the case of interpolating positions the tangents are velocities.

Quads and quad strips

As well as triangles it is possible to create shapes out of quadrilaterals (quads), four-sided shapes, and there is a type of shape called a quad strip which is analogous to a triangle strip. The quads are automatically split into triangles before rendering.

Global tone mapping operators apply the same scale to every pixel, regardless of whether they are located in a bright or dark area. This often results in a tone mapped image that looks 'flat', having lost its local details. Conversely, local operators apply a different scale to different parts of an image. Local tone mapping operators consider pixel neighbourhood information for each individual pixel, which simulates the adaptive and local properties of the human vision system. This can lead to better results but this will take longer for the computer to process.

At present, it is a matter of choosing the best tool for the job, although the development of high dynamic range displays means operators can be compared more accurately. Where required, for the purposes of realistic image generation, perceptually-accurate operators are the best choice.

The shape, mass and position are used to create a BObject, which contains the rigid body:

BObject physicsShape = new BObject(this, mass, box, pos, true);

The BObject has a member variable rigidBody which represents the rigid body. Finally, you add the body to the physics world so it will be simulated:

physics.addBody(physicsShape);

BSP trees build a structure that captures some relative depth information between objects. This involves partitioning a space into two (binary) parts using a separating plane. Polygons on the same side of that plane as the eye can obscure - but cannot be obscured by - polygons on the other side. This is then repeated for both resulting subspaces, giving a BSP tree.

BSP trees split up objects so that the painter's algorithm will draw them correctly without the need for a Z-buffer, and they eliminate the need to sort the objects, as a simple tree traversal will yield them in the correct order.

At the end of the array there is no next keyframe. Finally, we are using the break statement to break out of the loop when we have found the right keyframe

Because of the way loops work, if we never break out of the loop, currentKeyframe will be set to the last index of the array, which is what we want, because it means that the current time is after the last keyframe.

The painter's algorithm is quite limited as time is wasted drawing objects that will be overdrawn later. However, the z-buffer algorithm also has its drawbacks as it is expensive in terms of memory use

Binary Space Partitioning (BSP) trees provide an elegant, efficient method for sorting polygons by building a structure that captures some relative depth information between objects

Character animation is normally divided into Body animation and Facial animation each of which uses different techniques. Within each there are a number of important topics

Body animation:
• skeletal animation
• skinning
• motion capture

Facial animation:
• morph targets
• facial bones
• facial motion capture.

Motion capture can also be used for facial movement. Again the movement of an actor can be captured and applied to a character in a similar way to body motion capture

Body motion capture relies on putting markers on the body and recording their movement but generally these are too bulky to use on the face. The only real option is to use optical methods.

BRigid has a number of primitive collision shapes:

Boxes (BBox)
Spheres (BSphere)
Planes (BPlane)

Displacement mapping

Bump mapping does not alter an object's geometry. Because of this, a bump-mapped object will cast shadows that show the underlying geometry; that is, shadows will have smooth edges, not bumpy. However, a similar technique known as displacement mapping does actually alter an object's geometry. Displacement mapping changes the position of points over the textured surface, often along the local surface normal, according to the value from the texture map

The surface of a 2D texture is flat and therefore the surface normals go straight up and the surface appears flat

Bump mapping evaluates the current light intensity at any given pixel on the texture. Rather than altering the geometry, it adds 'fake' depth by modifying the surface normals. The technique uses the colour from an image map to change the direction of the surface normal (Figure 6.6)

Bump mapping

Bump mapping is a lot like texture mapping. However, where texture mapping adds colour to a polygon, bump mapping adds what appears to be surface roughness.

This is desirable, because high dynamic range display devices are being developed that will allow this data to be displayed directly.

By capturing and storing as much of the real scene as possible, and only reducing the data to a displayable form just before display, the image becomes future-proof. HDR images store a depiction of the scene in a range of intensities commensurate with the real-world scene. These images may be rendered images or photographs

The process for this is as follows:

CR = IasR + Ic(Is + IdsR)IR
CG = IasG + Ic(Is + IdsG)IG
CB = IasB + Ic(Is + IdsB)IB

where
C is the value for surface colour and illumination
sR, sG, sB represent the surface colour
IR, IG, IB represent the colour of the light
Ia, Ic, Id, Is refer to the ambient reflection, depth cueing, diffuse reflection and specular reflection.

Translation

Changes the position of an object by adding a vector (x, y, z) to that position. In Processing the translate command implements translation.

Scale

Changes the size of the shape. This can be a uniform scale in which the size is changed equally in all directions according to a number. Numbers less than 1 reduce the size and more than 1 increase it. A scale can also be non-uniform meaning the x, y and z directions are scaled differently. In Processing, the scale command can either take one parameter for a uniform scale or three for a non-uniform scale
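
As an illustration, a minimal sketch combining these transformations in a Processing P3D sketch; the numbers are arbitrary illustration values:

pushMatrix();
translate(100, 50, 0);   // move by the vector (100, 50, 0)
rotateY(radians(45));    // rotate 45 degrees about the y axis
scale(2, 1, 1);          // non-uniform scale: double the size along x only
box(40);                 // draw a cube with these transforms applied
popMatrix();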

If a polygon lies partially outside the viewport then there is no need for the program to spend any CPU time doing any of the calculations for the part that is not going to be seen

Clipping is generally used to mean avoiding drawing anything outside the camera's field of view. To clip a line we only need to consider its endpoints.

Compound shapes

Compound shapes. If an object cannot be represented as a single primitive object, it may be possible to represent it as a number of primitive objects joined together: a compound shape.

Computer animation

Computer animation (both 2D and 3D) is quite a lot like Stop Motion Animation.

Why study computer graphics?

Computer graphics is a well-established field of computer science. It has a huge range of applications, from entertainment to cutting-edge science

Objects are the 3D models that exist (mathematically-described in 3D space) in their own coordinate systems; images are the 2D realisations of objects which are displayed on screen

Converting 3D information for display on a 2D screen involves a process known as rasterisation. Rasterisation algorithms take a 3D scene described as polygons, and render it as discrete 2D primitives - using shaded pixels that convince you that the scene you are looking at is still a 3D world rather than a 2D image.

Environment mapping

Creating true specular highlights and reflections in a generated image is discussed in Chapter 7 of this subject guide. It is time consuming and computationally expensive; therefore, a shortcut is needed for realtime graphics. An environment map, also known as a reflection map, can satisfy this requirement.

HDR capture and storage

Current state-of-the-art image capturing techniques allow much of the luminance values to be recorded in high dynamic range (HDR) images

steps of the z-buffer algorithm

Declare an array z_buffer(x, y) with one entry for each pixel position.
Initialise the array to the maximum depth.

for each polygon P
    for each pixel (x, y) in P
        compute z_depth at x, y
        if z_depth < z_buffer(x, y) then
            set_pixel(x, y, colour)
            z_buffer(x, y) <= z_depth

Vector-Point Relationship

The difference between two points is a vector: v = Q − P.

display stage

Display. The appearance of the fragments is determined based on the properties of the polygons. The colour of the vertices is often interpolated to get the fragment colour. Alternatively, the colour could be determined by a new operation on the individual fragment, such as applying a texture. The final step is to draw pixels to a frame buffer, a piece of memory that is mapped to the screen.

The process is as follows

Draw the pixel at (x, y), making sure it is the closest pixel to the point that belongs on the line. Move across the x-axis and decide at each step what will be the next point on the y-axis to draw. Since the point on the line may not be in the centre of the pixel we must find the second best point - the one where the distance from the line is the smallest possible. The decision is: do we draw (x + 1, y) or do we draw (x + 1, y + 1)? In the example shown, d1 is smaller than d2, so the next point after (x, y) that should be drawn is (x + 1, y + 1). Every iteration we increment the x coordinate and calculate whether or not we should increment the y coordinate.
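
A minimal sketch of this incremental decision for the first octant (slope between 0 and 1, x0 < x1), using the common integer form of Bresenham's line algorithm; not necessarily the exact formulation used in the guide:

void bresenhamLine(int x0, int y0, int x1, int y1) {
  int dx = x1 - x0;
  int dy = y1 - y0;
  int d = 2 * dy - dx;   // initial decision value
  int y = y0;
  for (int x = x0; x <= x1; x++) {
    point(x, y);         // draw the closest pixel to the true line
    if (d > 0) {
      y = y + 1;         // the upper pixel is closer: increment y
      d = d - 2 * dx;
    }
    d = d + 2 * dy;      // move one step along x
  }
}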

In animation and graphics software, a layer refers to the different levels on which you place your drawings, animations and objects. The layers are stacked one on top of another.

Each layer contains its own graphics or effects, which can be worked on and changed independently of the other layers.

Morph targets are one of the most popular methods. Each facial expression is represented by a separate mesh.

Each of these meshes must have the same number of vertices as the original mesh but with different positions.

New facial expressions are created from these base expressions (called Morph targets) by smoothly blending between them

Each target is given a weight between 0 and 1 and a weighted sum is performed on all of the vertices in all of the targets to get the output mesh:

vi = Σt∈morph targets wt vti, where Σt wt = 1

The mesh is handled on a vertex by vertex basis.

Each vertex can be associated with more than one bone. The effect on each vertex is a combination of the transformations of the different bones. The effect of a bone on a vertex is specified by a weight, a number between 0 and 1. All weights on a vertex sum to 1.
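
A minimal sketch of this weighted blending for a single vertex, assuming each bone's current transform is available as a PMatrix3D; boneTransforms, weights and restPosition are hypothetical names:

PVector skinVertex(PVector restPosition, PMatrix3D[] boneTransforms, float[] weights) {
  PVector result = new PVector(0, 0, 0);
  for (int i = 0; i < boneTransforms.length; i++) {
    // transform the rest-pose vertex by this bone's matrix
    PVector transformed = boneTransforms[i].mult(restPosition, null);
    // add the bone's contribution, scaled by its weight
    result.add(PVector.mult(transformed, weights[i]));
  }
  return result;
}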

Joints are generally represented as full three degrees of freedom rotations but human joints cannot handle that range

Either you build rotation limits into the animation system or you can rely on the methods generating joint angles to give reasonable values (as motion capture normally will)

engine

Engines are very important: programming high-quality simulations is extremely difficult, so engines make simulation available much more widely.

Consider a cube: at any one time three of the sides of the cube will face away from the user and therefore will not be visible

Even if these faces were drawn they would be obscured by the three 'forward' facing sides. Back-face culling therefore reduces the number of faces drawn from twelve to six. Removing unseen polygons can give a big increase in speed for complex scenes. Processing supports back-face culling internally, so it can be enabled to get this effect

For any endpoint (x, y) of a line, the code can be determined that identifies the region in which the endpoint lies. The code's bits are set according to the following conditions:

First bit set to 1: point lies to the left of the window, x < xmin
Second bit set to 1: point lies to the right of the window, x > xmax
Third bit set to 1: point lies below (bottom of) the window, y < ymin
Fourth bit set to 1: point lies above (top of) the window, y > ymax

The sequence for reading the codes' bits is LRBT (Left, Right, Bottom, Top).

rotation in the bones and skeletal animation

First choose a position on a bone (the end point). This position is rotated by the rotation of the joint above the bone. Translate by the length (offset) of the parent bone and then rotate by its joint. Go up its parent and iterate until you get to the root. Rotate and translate by the root position.
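
A minimal sketch of this procedure for a two-bone arm using Processing's transform stack (which applies the same rotations and translations from the root downwards); all names and values here are illustrative:

void drawArm(float rootX, float rootY, float rootAngle,
             float joint1, float len1, float joint2, float len2) {
  pushMatrix();
  translate(rootX, rootY);   // root position
  rotate(rootAngle);         // root rotation
  rotate(joint1);            // first joint
  line(0, 0, len1, 0);       // first bone
  translate(len1, 0);        // move to the end of the first bone
  rotate(joint2);            // second joint
  line(0, 0, len2, 0);       // second bone
  popMatrix();
}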

The key step in texturing is mapping from coordinates in the object space to particular points in the image

For each pixel in an object, we have to ask where to look in the texture map to find the colour. To be able to answer this, we need to consider two things: map shape and map entity

The logical OR of the endpoint codes determines if the line is completely inside the window. If the logical OR is zero, the line can be trivially accepted.

For example, if the endpoint codes are 0000 and 0000, the logical OR is 0000 - the line can be trivially accepted. If the endpoint codes are 0000 and 0110, the logical OR is 0110 and the line cannot be trivially accepted

Once the codes for each endpoint of a line are determined, the logical AND operation of the codes determines if the line is completely outside of the window. If the logical AND of the endpoint codes is not zero, the line can be trivially rejected

For example, if an endpoint had a code of 1001 and the other endpoint had a code of 1010, the logical AND would be 1000, indicating the line segment lies outside of the window. If endpoints had codes of 1001 and 0110, the logical AND would be 0000, and the line could not be trivially rejected.
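
A minimal sketch of computing the 4-bit outcode and the two trivial tests; xmin, xmax, ymin and ymax are hypothetical window bounds:

final int LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8;

int outcode(float x, float y, float xmin, float xmax, float ymin, float ymax) {
  int code = 0;
  if (x < xmin) code |= LEFT;    // first bit: left of the window
  if (x > xmax) code |= RIGHT;   // second bit: right of the window
  if (y < ymin) code |= BOTTOM;  // third bit: below the window
  if (y > ymax) code |= TOP;     // fourth bit: above the window
  return code;
}

// trivial accept: both endpoints inside, so the OR of the codes is zero
boolean triviallyAccepted(int codeA, int codeB) {
  return (codeA | codeB) == 0;
}

// trivial reject: both endpoints share an outside region, so the AND is non-zero
boolean triviallyRejected(int codeA, int codeB) {
  return (codeA & codeB) != 0;
}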

Lightness constancy is the term used to describe the phenomenon whereby a surface appears to look the same regardless of any differences in the illumination.

For example, white paper with black text maintains its appearance when viewed indoors in a dark environment or outdoors in bright sunlight, even if the black ink on a page viewed outdoors actually reflects more light than the white paper viewed indoors

example of directional light

For outdoor scenes, the sun is so far away that its illumination is simulated as a directional light source with all rays arriving at the scene in a parallel direction.

how are images arranged in order to make an animation?

For the purpose of creating animation these images are arranged in a 'time line'. In traditional animation this is a set of images with frame numbers drawn by the side

Inverse kinematics is a way of doing this automatically so that you can animate in terms of hand and foot positions rather than joint angles

Given a desired position for a part of the body (end effector) inverse kinematics is the process of calculating the required joint angles to achieve that position (in the above diagram, given Pt IK will calculate R0 and R1).

There are two main types of interpolation shading

Gouraud and Phong

The following sections describe a number of different types of force

Gravity is a force that acts to pull objects towards the ground. It is proportional to the mass of an object. The mass term in gravitational force cancels out the mass term in Newton's second law of motion (Equation 9.1) so as to produce a constant downward acceleration. Gravity is typically a global parameter of the physics world.
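
A minimal sketch of gravity as a force object in the style of the integration loop shown later in this guide (forces[i].calculate()); the Force interface and the Gravity class here are hypothetical, not part of BRigid:

interface Force {
  PVector calculate();
}

class Gravity implements Force {
  float mass;       // mass of the object the force acts on
  float g = 9.81;   // gravitational acceleration
  Gravity(float mass) { this.mass = mass; }
  PVector calculate() {
    // force = mass * g, acting downwards (positive y is down in Processing)
    return new PVector(0, mass * g, 0);
  }
}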

This results in the loss of detail in bright or dark areas of a picture, depending on whether the camera had a low or high exposure setting.

HDR compensates for this loss of detail by taking multiple pictures at different exposure levels and intelligently stitching them together to produce a picture that is representative in both dark and bright areas. Figure 10.4 demonstrates how varying levels of exposure reveal different details. By combining the various exposure levels and tone mapping them, a better overall image can be achieved.

OpenEXR (.exr, 48 bits per pixel)

High colour precision at the expense of some dynamic range; can be compressed.

Comparing these equations we can see that our code corresponds to the first two terms of the Taylor series. This means that it is a valid approximation, because for small (δt) the later terms of the Taylor series become smaller

However, it is just an approximation, and it is only a valid approximation for small values of δt. That means it can lead to noticeable errors if the rate at which we update the simulation is slow compared to our objects' velocities or accelerations. More accurate simulations can be created by including higher order derivatives of the function and through other sophisticated techniques.

CCD is very general and powerful; it can work for any number and combinations of bones

However, there are problems. It does not know anything about the human body. It can put you in unrealistic or impossible configurations (for example, elbow bent backwards). To avoid this we need to introduce joint constraints. CCD makes this easy; you constrain the joints after each step.

hidden surface removal

Identifying and discarding points in a scene that are blocked from view.

if polygon facing away

If a polygon is facing away from the camera and is part of a solid object, then it can't be seen

If the skeleton is in the bind pose the mesh should be in its default location.

If the bind pose is not zero rotation you need to subtract the inverse bind pose from the current pose.

As you proceed around the window, nine regions are created - the eight outside regions and the one inside region (Figure 3.16). Each of the nine regions associated with the window is assigned a 4-bit code to identify the region. Each bit in the code is set to either a 1(true) or a 0(false

If the region is to the left of the window, the first bit of the code is set to 1. If the region is to the right of the window, the second bit of the code is set to 1. If to the bottom, the third bit is set, and if to the top, the fourth bit is set. The 4 bits in the code then identify each of the nine regions.

Map shape

If we are using a map shape that is planar, we take an (x, y, z) value from the object and project (that is, discard) one of the components.

radiosity (radiant energy)

With ray tracing, if you want to change the position of the viewer it involves recalculating the lighting by re-ray-tracing the entire scene to update it to the new position.

When applied to 3D graphics, adding this information is often collectively referred to as shading. Note the distinction in the following terms that you will encounter on this course:

Illumination is the calculation of light intensity at a particular point on a surface. Shading uses these calculated intensities to shade the whole surface or the whole scene.

Image-based lighting

Image-based lighting (IBL) is the process of illuminating scenes and objects (real or synthetic) with images of light from the real world. This increases the level of realism of a scene and is a technique that is becoming more commonplace given its potential for applications such as motion-picture visual effects, where light from a CG scene can be integrated with light from a real world scene

Texturing in Processing

Images in Processing are represented using the PImage class and loaded from file using the loadImage command:

PImage myTexture = loadImage("texture.jpg");
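
A minimal sketch of applying the loaded image as a texture in immediate mode, assuming a P3D sketch; the vertex positions and (u, v) coordinates are illustration values:

textureMode(NORMAL);        // interpret texture coordinates in the 0..1 range
beginShape(QUADS);
texture(myTexture);         // use the PImage as the texture for this shape
vertex(-50, -50, 0, 0, 0);  // x, y, z, u, v
vertex( 50, -50, 0, 1, 0);
vertex( 50,  50, 0, 1, 1);
vertex(-50,  50, 0, 0, 1);
endShape();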

Vertex shapes

In 3D graphics solid shapes are actually represented simply by their outer surface and in most cases this surface is represented as a number of polygons. A polygon is a flat shape made up of a finite number of straight sides

Creating a rigid body

In BRigid, creating a rigid body involves a number of steps. Firstly, you need to create a shape:

box = new BBox(this, 1, 50, 50, 50);

Drag is a force that models air resistance, which slows down moving objects. It is proportional to the speed of an object and in the opposite direction so it will always act to reduce the speed.

In BRigid, drag is included in a general damping parameter which includes all damping forces; there is both a linear damping which reduces linear (positional) velocity and an angular damping which reduces angular (rotational) velocity. The damping applied to an object can be set using the setDamping method of a RigidBody:

body.rigidBody.setDamping(linearDamping, angularDamping);

The quality of a ray traced scene depends on the number of 'bounces' - the more bounces (that is, the deeper the recursion), the better the quality. Ray traced images have a tendency to appear very clean and sharp with hard shadows

In fact, they can look so clean that they appear unrealistic. Introducing randomness can counter this. Distributed ray tracing, a refinement of ray tracing that generates 'softer' images, is one way of doing this.

Local versus global illumination

In general, light leaves a light source, is reflected from many surfaces and then finally reflected to our eyes, or through an image plane of a camera. Illumination models are able to model the interaction of light with the surface and range from simple to very complex.

Happily, these two problems can have a common solution. Most high level graphics engines include the concept of a graphics object. A graphics object contains the vertices required to draw it (called the Geometry) as well as colour and style properties (called the Material).

In most cases it will also include the transformations applied to the geometry. These properties are stored in memory and stay the same over time, so the same geometry, material and transformations are used every time an object is drawn. A graphics object corresponds much more closely to our understanding of a real world object. It can also be more efficient. All of the vertices can be set to the graphics card at once, thus reducing transfer overheads. In fact, it is even possible to optimise further by storing the geometry and materials directly on the graphics card almost eliminating transfer costs

Lighting in a GPU shader

In order to perform lighting in a shader we must first tell Processing we will be using lights by adding the following definition to our vertex shader (instead of PROCESSING_COLOR_SHADER):

#define PROCESSING_LIGHT_SHADER

Texturing in a shader

In order to perform texturing in a shader we must first tell Processing we will be using textures by adding the following definition to our vertex shader (instead of PROCESSING_COLOR_SHADER):

#define PROCESSING_TEXTURE_SHADER

This requires significant effort to achieve, and one of the key properties of this problem is that the overall performance of a photorealistic rendering system is only as good as its worst component.

In the field of computer graphics, the actual image synthesis algorithms - from scanline techniques to global illumination methods - are constantly being reviewed and improved, but weaknesses in a system in both areas can make any improvements in the underlying rendering algorithm insignificant.

These are the types available in GLSL:

Integers: int, short (a small integer) and long (a large integer).
Floating point: float, double and half (which have, respectively, more or less precision than a float).
Vectors: a single type representing multiple values; for example, a vec4 is a vector of 4 floats and ivec3 is a vector of 3 integers.
Matrices: a 2-dimensional matrix of values; for example, a mat4 represents a 4 by 4 matrix of floats.
Texture samplers: represent a texture, for example sampler2D.
Structures: a set of values that are grouped together (like a class without methods), for example a structure containing an integer and a vector of floats.

Motion capture can give you potentially very realistic motion but is often ruined by noise, bad handling, etc

It can also be very tricky to work with. Whether you choose motion capture or hand animation depends on what you want out of your animation: a computer graphics character that is to be inserted into a live action film is likely to need the realism of motion capture, while a children's animation might require the more stylised movement of hand animation.

culling operation

It involves removing any polygons behind the projection plane, removing any polygons projected to lie outside the clip rectangle, and culling any back-faces (that is, any polygon that faces away from the viewport)

Hand animation tends to be very expressive but has a less realistic more cartoon-like style.

It is easy to get exactly what you want with hand animated data and to tweak it to your requirements.

These need to have an exactly identical structure to each other and to the base shape. That means they need to have exactly the same number of child shapes and each child shape must have exactly the same number of vertices

It is generally a good idea to start with one basic shape (morph[0]) and edit it to create the other morphs. The same shape can be initially loaded into base and morph[0] (but they must be loaded separately, not simply two variables pointing to the same shape otherwise editing base will also change morph[0])

An image displayed on a standard LCD screen is greatly restricted in terms of tonality, perhaps achieving at most 1000:1 cd/m2

It is therefore necessary that the image be altered in some way, usually through some form of scaling, to fit a display device that is only capable of outputting a low dynamic range.

The z-buffer algorithm

It is very impractical to carry out a depth sort among every polygon for every pixel. The basic idea of the z-buffer algorithm is to test the z-depth of each surface to determine the closest (visible) surface

It loads the shader to the GPU then sends all the vertex/texture data

It loads the shader to the GPU then sends all the vertex/texture data. The GPU programs are written in a specialist shader programming language. In our examples we will use GL Shading Language (GLSL), which is part of OpenGL but other languages include HLSL developed by Microsoft and Cg developed by nVidia.

The most basic form of animation is the flip book.

It presents a sequence of images in quick succession, each of which is a page of the book

BRDF

It simply describes how much light is reflected when light makes contact with a certain material

There are many methods that can be used to clip a line. The most widely-used algorithm for clipping is the Cohen-Sutherland line-clipping algorithm. It is fast, reliable, and easy to understand

It uses what is termed 'inside-outside window codes'. To determine whether endpoints are inside or outside a window, the algorithm sets up a half-space code for each endpoint. Each edge of the window defines an infinite line that divides the whole space into two half-spaces, the inside half-space and the outside half-space

advantage of ray tracing?

Its big advantage is that it combines hidden surface removal with shading due to direct illumination, shading due to global illumination, and shadow computation within a single model

what does joint represent?

Joints are represented as transforms

A timeline would essentially be an array of these keyframe objects

Keyframe [] timeline;
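
A minimal sketch of what such a Keyframe class might look like, with the time and position fields used by the interpolation code later in this guide; the constructor is just for convenience:

class Keyframe {
  float time;        // time at which the keyframe occurs (e.g. in milliseconds)
  PVector position;  // value of the transform at that time

  Keyframe(float time, PVector position) {
    this.time = time;
    this.position = position;
  }
}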

Motion capture

Keyframe animation is based on sets of movement data which can come from one of two sources: hand animation or motion capture.

The computer does the inbetweening automatically

Keyframes are a tuple consisting of the time at which the keyframe occurs and the value of the transforms. These will be set up in a timeline, which is just an array of keyframes

Keyframing in animation

Keyframing can reduce this effort even more. The animator only needs to define the 'key frames' of a movement (which will be values for transforms)

Depth cueing

Light is attenuated as it travels away from its source. In theory, light intensity should be attenuated using an inverse square law. In practice, a linear fall-off looks much more realistic. Fade distances can be set; for example, OpenGL uses the attenuation coefficient.

rendering equation formula

Lo(x, ωo, λ, t) = Le(x, ωo, λ, t) + ∫Ω fr(x, ωi, ωo, λ, t) Li(x, ωi, λ, t) (ωi · n) dωi

The dynamic range in the real world is far greater than the range that can be produced on most electronic displays

Luminous intensity - the power of a light source - is measured in candelas per square metre (cd/m2 ). The human visual system can accommodate a wide range of luminance in a single view, around 10 000:1 cd/m2 , and our eye adapts to our surroundings, changing what we see over time

Definition of a matrix:

M = ( m11  m12  m13 )
    ( m21  m22  m23 )

There are a number of approaches to performing inverse kinematics

Matrix (Jacobian) methods.
Cyclic Coordinate Descent (CCD).
Specific analytic methods for the human body.

model stage

Model. Objects in the world are modelled out of vertices by the CPU program. These vertices are sent to the GPU

GPU shader programming is an integral part of modern graphics programming, but it is very different from normal CPU programming because GPUs and CPUs rely on a very different programming model. CPUs are sequential, they perform one operation at a time, in rapid succession and programs are generally made up of a sequence of instructions. GPUs on the other hand are massively parallel and are able to perform many simple instructions simultaneously

Modern CPUs are somewhat parallel because they are multi-core: they might be made up of, for example, four cores, each of which is essentially an independent CPU. However, the model of parallelism used for CPU programming is generally multithreading: multiple independent programs running at the same time with possibly a small amount of interaction between them. GPUs work differently; they are designed to run many copies of the same program, each of which runs on different data and each of which must run independently of the others. This can make GPU programs extremely fast. However, it means that programming a GPU means thinking in a different way about programming, using different techniques and different programming languages that are designed specifically for GPUs. Luckily graphics programs are very well suited to parallelisation: they perform the same action independently on large numbers of vertices or pixels. (GPUs can also be used for non-graphics computation.)

Using morph targets

Morph targets are a good low-level animation technique. To use them effectively we need ways of choosing morph targets. We could let the animator choose (nothing wrong with that) but there are also more principled ways.

This is very useful as it means you only have one method in your animation code (one shader).

Morphs are very convenient from an animator's point of view, but bones are easier in the engine.

keyframes and finding the current frame

Note that we are adding keyframes in the correct time order. We will use the ordering later when we have to find the current keyframe

Transformations

Objects in 3D can be moved and positioned using Transformations

Simple primitives.

Often objects are represented as simple primitive shapes for which simple collision equations are defined; for example, boxes or spheres. These are typically only rough approximations of the appearance of the object but are very efficient and are often close enough that the differences in an object's movement are not noticeable.

2D polygons

Once lines and curves have been drawn on the screen, there is a sequence of operations that takes place in order to portray polygons on the screen.

While research into ways of rendering images provides us with better and faster methods, we do not necessarily see their full effect due to limitations of the display hardware. To ensure that the scene as it was created closely resembles the scene as it is displayed, it is necessary to be aware of any factors that might adversely influence the display medium.

One major problem is that computer screens are limited in the range of luminance they can display. Most are not yet capable of producing anywhere near the range of light in the real world. This means the realistic images we have carefully created are not being properly displayed

Collision shape

One of the most important properties of a rigid body is its shape. From the point of view of a physics engine the shape controls how it collides and interacts with other objects

Is this a valid thing to do? The equations given above are continuous relationships where position and velocity are varying continuously over time. In the code shown above time is split into discrete frames and velocities and positions are updated only at those time steps. Will this introduce errors?

One way of looking at this is through the Taylor series. This states that a small change (δt) in a function can be represented as an infinite series like this:

y(t + δt) = y(t) + δt (dy/dt)(t) + ((δt)²/2!) (d²y/dt²)(t) + ...

A model in which the only light considered is that travelling directly from the light source to the surface and reflected towards the viewer is called a local illumination model; the shading of any surface is independent of the shading of all other surfaces

Only the interaction between the light source and the point on the surface being shaded is considered. Light that takes an indirect path to the surface is not considered. This means that each object is lit individually, regardless of what objects surround it. Most real-time graphics rendering systems use local illumination.

The initial camera ray is tested for intersection with the 3D scene geometry. If the ray does not hit anything, then we can colour the pixel to some specified 'background' colour

Otherwise, we want to know the first thing that the ray hits - it is possible that the ray will hit several surfaces, but we only care about the closest one to the camera. For the intersection, we need to know the position, normal, colour, texture coordinate, material, and any other relevant information about that exact location. If we hit somewhere in the centre of a polygon, for example, then this information would get computed by interpolating the vertex data

The simplest approach is to interpolate them in straight lines between the keyframes (Figure 8.1(b)). The position is interpolated linearly between keyframes using the following equation:

P(t) = tP(tk) + (1 − t)P(tk−1)

The formula for a Hermite curve is:

P(t) = (-2s^3 + 3s^2)\,P(t_k) + (s^3 - s^2)\,T(t_k) + (2s^3 - 3s^2 + 1)\,P(t_{k-1}) + (s^3 - 2s^2 + s)\,T(t_{k-1})

We also need an array of vertex shapes to represent the morph targets:

PShape [] morphs;

To implement morph targets we need a vertex shape that we want to animate

PShape base;

The simplest way to update velocity and position is simply to add on the current acceleration, like this:

// sum all of the forces acting on the object
PVector acceleration = new PVector(0, 0, 0);
for (int i = 0; i < forces.length; i++) {
  acceleration.add(forces[i].calculate());
}
// a = F / m
acceleration.div(mass);
// Euler update: add acceleration to velocity, then velocity to position
velocity.add(PVector.mult(acceleration, deltaTime));
position.add(PVector.mult(velocity, deltaTime));

To play an animation back effectively we need to be able to find the current keyframe based on time. We can use the millis command in Processing to get the current time

PVector pos = timeline[0].position;
pushMatrix();
translate(pos.x, pos.y);
ellipse(0, 0, 20, 20);
popMatrix();

Once we have found the current keyframe we can use it to get a position:

PVector pos = timeline[currentKeyframe].position;

Interpolating keyframes

PVector pos;
// first we check whether we have reached the last keyframe
if (currentKeyframe == timeline.length-1) {
  // if we have reached the last keyframe,
  // use that keyframe as the position (no interpolation)
  pos = timeline[currentKeyframe].position;
} else {
  // This part does interpolation for all keyframes before the last one.
  // Get the position and time of the keyframe before and after the current time
  PVector p1 = timeline[currentKeyframe].position;
  PVector p2 = timeline[currentKeyframe+1].position;
  float t1 = timeline[currentKeyframe].time;
  float t2 = timeline[currentKeyframe+1].time;
  // multiply each position by the interpolation factors
  // as given in the linear interpolation equation
  p1 = PVector.mult(p1, 1.0-(t-t1)/(t2-t1));
  p2 = PVector.mult(p2, (t-t1)/(t2-t1));
  // add the results together to get the interpolated position
  pos = PVector.add(p1, p2);
}

The most visible elements of a simulation are the objects that move and interact with each other. There are a number of different types of objects:

Particles Rigid bodies Compound bodies Soft bodies and cloth

Particle objects

Particles are the simplest type of object. They have a position, velocity and mass but zero size and no shape (at least from the point of view of the simulation). They can move, but they do not have a rotation. They are typically used for very small objects.

Visual perception: an overview

Perception is the process that enables humans to make sense of the stimuli that surround them

Points, Scalars and Vectors

Points and vectors are defined relative to a coordinate system

projection stage

Projection. The vertices are projected into screen space using the Projection matrix (perspective or orthographic).

RGB is a device-dependent colour model: different devices will reproduce a given RGB value differently

An RGB value is therefore not guaranteed to look the same across all devices without some kind of colour management being used.

Radiosity

Ray tracing follows all rays from the eye of the viewer back to the light sources. It is entirely dependent on what the viewer can see from their given position

What is the function of ray tracing in computer graphics?

Ray tracing is the most complete simulation of an illumination-reflection model in computer graphics

Ray tracing

Ray-tracing simulates the path of light in a scene, but it does so in reverse. A ray of light is traced backwards through the scene, starting from what the eye or camera sees. When it intersects with objects in the scene its reflection, refraction, or absorption is calculated.

The processes behind IBL are those of global illumination and also high dynamic range imaging (HDRI), which is discussed in Chapter 10 of this subject guide.

Real-world illumination is captured as an omni-directional light probe image. In essence, a video camera captures a series of images of reflections in a mirrored sphere. The computer generated scene geometry is created and objects are assigned material values. The captured images are mapped onto a representation of the environment (for example, as a sphere encompassing the modelled scene). The light in that environment is simulated using ray tracing with the light probe image as the light source.

Related effects

Replication of visual effects that are related to the area of tone reproduction include the modelling of glare. Psychophysically-based algorithms have been produced that will add glare to digital images, simulating the flare and bloom seen around very bright objects. Psychophysical tests have demonstrated that these effects increase the apparent brightness of a light source in an image. While highly effective, glare simulation is computationally expensive.

Rotation

Rotates shapes by a certain angle around a certain axis. In Processing the most common way of using rotation is to use the rotateX, rotateY and rotateZ commands which rotate by angle (in radians) around each of the x, y and z axes.

Soft bodies and cloth are much more complex

as they can change their shape as well as move. Many modern physics engines are starting to include soft as well as rigid bodies, but they are out of the scope of this subject guide.

The viewing transformation is the operation that maps a perspective view of an object in world coordinates into a physical device's display space.

Screen space refers to a coordinate system attached to a display device with the xy plane coincident with the display surface

Triangles are the simplest possible polygon having only three sides. That means that they cannot have some of the more complex configurations that other polygons can have. Firstly, they cannot be concave.

Secondly, they are always planar, meaning they are always flat. All three of their vertices are always in the same plane. More complex shapes can be bent, with vertices in different planes. These two features can greatly simplify the graphics calculations required to render triangles and current graphics hardware is designed to work with triangles

To use a shader in Processing we must use a PShader object which represents a shader program:

PShader myShader;

We can then load in the fragment and vertex shaders from file:

myShader = loadShader("frag.glsl", "vert.glsl");

Meshes. If the shape of an object is too complex to represent out of primitive objects it is possible to represent its physics shape as a polygon mesh, in the same way as a graphics object

Simulating meshes is much more expensive than simulating primitives, so the meshes must be simple. They are usually a different, and much lower resolution, mesh from the one used to render the graphics. They are typically created by starting with the graphics mesh and greatly reducing the number of polygons.

Reflection

Specular reflection is the direct reflection of light by a surface. Most light is reflected in a narrow range of angles. Shiny surfaces reflect almost all incident light and therefore have bright specular highlights or 'hot spots'. For a perfect mirror the angle of reflection is equal to the angle of incidence.

A convenient method for drawing Bézier curves is to use a recursive procedure via the de Casteljau algorithm (Figure 3.15), using a sequence of linear interpolations to compute the positions along the curve. The steps are as follows:

Start at the beginning and end points, P0 and P3. Calculate the halfway point (in other words, at the first pass the halfway point will be at t = 0.5). If the angle formed by the two line segments is smaller than a threshold value, then add that point as a drawing point. Now recursively repeat with each half of the curve. Stop the algorithm when no more division is possible, or the line segments reach a minimal length
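As a rough illustration only, the sketch below evaluates a single point on a cubic curve by repeated linear interpolation, which is the core de Casteljau construction (the function name bezierPoint2 and its arguments are ours, not from the guide). A drawing routine could call it recursively, or for many values of t, and join the resulting points with line segments:

// One de Casteljau evaluation: repeatedly lerp between control points.
// p0..p3 are the four control points of a cubic Bezier curve.
PVector bezierPoint2(PVector p0, PVector p1, PVector p2, PVector p3, float t) {
  PVector a = PVector.lerp(p0, p1, t);   // first level of interpolation
  PVector b = PVector.lerp(p1, p2, t);
  PVector c = PVector.lerp(p2, p3, t);
  PVector d = PVector.lerp(a, b, t);     // second level
  PVector e = PVector.lerp(b, c, t);
  return PVector.lerp(d, e, t);          // final point on the curve at parameter t
}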

There are two types of friction: static and dynamic friction.

Static friction Dynamic friction

In the previous formula the weights have to sum to 1. If they do not, the size of the overall mesh will increase (if the weights sum to more than 1) or decrease (if they sum to less than 1).

Subtracting a neutral mesh from all the targets allows us to lift the restriction because we are adding together differences not absolute positions. This can allow more extreme versions of a target or the use of two complete morph targets simultaneously (for example, a smile together with closed eyes).

Radiance RGBE (.hdr), 32 bits per pixel:

Superlative dynamic range; sacrifices some colour precision but results in smaller file size

To a first approximation this is the motion of rigid bones linked by rotational joints

(the terms joints and bones are often used interchangeably in animation; although, of course, they mean something different, the same data structure is normally used to represent both).

This distance is divided by the time between the previous and next keyframe in order to get the correct speed of movement:

T(t_k) = \frac{P(t_{k+1}) - P(t_{k-1})}{t_{k+1} - t_{k-1}}

The vertices are transformed individually by their associated bones. The resulting position is a weighted sum of the individual joint transforms.

T(v_i) = \sum_{j \in \text{joints}} w_{ij}\, R_j(v_i)

The advantages of radiosity are that it gives good results for effects such as colour bleeding.

Interactivity is achieved since the scene calculations are pre-computed and only need to be done once. It is good for scenes (especially indoor scenes) where most lighting is indirect and where ray tracing copes badly. However, it is not very good for scenes involving transparency and non-diffuse reflection as it cannot handle specular objects and mirrors

5 Texture mapping

Texture mapping uses an image map, which is literally just an image; that is, a 2D array of intensities. Any image may be used as the source for the image map. Digital photographs or 2D artwork are the usual sources.

Procedural texturing

Texture maps are a good solution for adding realism but they have one major drawback: a fixed amount of detail. They cannot be scaled larger without losing resolution.

This is quite a complex loop with a number of unusual features. Firstly, we are defining the loop variable currentKeyframe outside the loop

That is because we want to use it later in the program. Secondly, we are not going to the end of the array but to the position before the end. This is because we are checking both the current keyframe and the next keyframe

These commands function on the graphics state, which is the combination of all the current graphics settings that control drawing, including colours and line drawing settings.

That means that the commands do not change the colour of particular shapes, but change the state of the renderer. Calling fill(255, 0, 0) will put the renderer into a state where it will draw shapes in red. That means that any shape drawn after that command, until fill is called again, will be drawn in red

the graphics processing unit (GPU)

The GPU is a dedicated, highly-parallel, processor that is ideal for executing mathematically-intensive tasks such as 3D graphics instructions quickly and efficiently. The GPU is covered in more detail in Chapter 4 of this subject guide

Phong model for specular reflection

The Phong reflection model (also called Phong illumination model or Phong lighting model) is widely used for real-time computer graphics to approximate specular reflection
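In its usual form, the specular term is computed from the reflected light direction R, the view direction V and a shininess exponent n (a larger n gives a tighter, shinier highlight):

I_s = k_s\, I_L\, (\mathbf{R} \cdot \mathbf{V})^{n}

where k_s is the specular reflection coefficient of the surface and I_L is the intensity of the light source.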

Lightness and colour constancy

The ability to judge a surface's reflectance properties despite any changes in illumination is known as colour constancy

A force acts on an object to create a change of movement (acceleration).

The acceleration depends both on the force and on the mass of the object. Force and acceleration are written in bold because they are both vectors having both a magnitude and a direction

mechanism of midpoint

The algorithm works incrementally from a starting point (x0, y0) and the first step is to identify in which one of eight octants the direction of the end point lies

The animation can be jerky,

The animation can be jerky, as the object changes direction of movement rapidly at each keyframe

Circles are rotational joints, lines are rigid links (bones). Joints are represented as rotations

The black circle is the root, the position and rotation offset from the origin. The root is (normally) the only element of the skeleton that has a translation. The character is animated by rotating joints and translating and rotating the root

Skinning is well suited to implementing in a shader

The bone weights are passed in as vertex attributes and an array of joint transforms is passed in as a uniform variable.
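The guide does this calculation in a vertex shader; purely as an illustration of the same weighted-sum calculation, here is a CPU-side sketch in Processing (the names skinVertex, jointTransforms and weights are ours, and joint transforms are assumed to be stored as PMatrix3D objects):

// Linear blend skinning for one vertex: a weighted sum of the vertex
// transformed by each joint it is attached to.
PVector skinVertex(PVector v, PMatrix3D[] jointTransforms, float[] weights) {
  PVector result = new PVector(0, 0, 0);
  for (int j = 0; j < jointTransforms.length; j++) {
    PVector transformed = new PVector();
    jointTransforms[j].mult(v, transformed);            // R_j(v_i)
    result.add(PVector.mult(transformed, weights[j]));  // w_ij * R_j(v_i)
  }
  return result;
}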

This happens when two objects are in contact and are already moving relative to each other. It acts against the relative velocity of the objects and is in the opposite direction to that velocity, so it tends to slow the objects down (like drag). It is proportional to the velocity, the contact reaction force and a coefficient of friction

The coefficients of friction are different in the two cases. Both depend on the two materials involved in a complex way. Most physics engines do not deal with these complexities; each object has only a coefficient of friction. The coefficients of the two objects are multiplied to get the coefficient of their interaction. Some physics engines have separate parameters for static and dynamic friction coefficients, but BRigid has only one, which can be set using the setFriction method: body.rigidBody.setFriction(frictionCoefficient)

Types of shape

The commands given above create a single polygon out of all of the vertices provided. However, most 3D shapes are made up of several polygons. Normally these polygons would only be triangles

degree of reflected light

The degree to which light is reflected (or transmitted) depends on the viewer and light position relative to the surface normal and tangent

Friction is a force that acts on two bodies that are in contact with each other. Friction depends on the surface texture of the objects involved. Slippery surfaces like ice have very low friction, while rough surfaces like sandpaper have very high friction

The difference between these surfaces is represented by a number called the coefficient of friction. Friction also depends on the contact force between two objects; that is, the force that is keeping them together. For one object lying on top of another this contact force would be the gravity acting on the top object as shown in Figure 9.5; this is why heavy objects have more friction than light objects

the downside of ray tracing

The downside of ray tracing is that although it generates incredibly realistic images, it is computationally expensive with extremely high processing overheads - scenes can take minutes, hours or days to render.

Facial animation

The face does not have a common underlying structure like a skeleton. Faces are generally animated as meshes of vertices, either by moving individual vertices or by using a number of types of rig.

The first change is to use a 3D

The first change is to use a 3D rather than a 2D renderer (the software that creates images from your graphics commands). This is done by passing a third argument, P3D, to the size command
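For example (window size and drawing chosen arbitrarily), a minimal 3D sketch might begin like this:

void setup() {
  // The third argument, P3D, selects the 3D renderer
  size(640, 480, P3D);
}

void draw() {
  background(0);
  lights();                          // a standard set of lights (covered later)
  translate(width/2, height/2, -100);
  box(100);                          // a 3D primitive
}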

Real-time ray tracing

The first implementation of a 'real-time' ray-tracer occurred in 2005. It was a parallel implementation limited to just a few frames per second. Since then there has been much focus on achieving faster real-time ray tracing but it is limited by the computational power available. It is not yet practical but moves are being made in this direction and as the hardware and software improves, so too does the likelihood of usable, practicable real-time ray tracing within the next few years.

A triangle strip is a type of shape that makes it more efficient to create shapes out of connected triangles, by ensuring that each triangle is automatically joined to the previous one

The first three vertices of a triangle strip are joined into a triangle, but after that each triangle is formed by taking the next vertex and joining it to the last two vertices of the previous triangle

Impulse

The forces listed above are all typically included as standard in a physics engine, but sometimes you will need to apply a force that is not included. Most physics engines allow you to apply a force directly to an object through code. This will take the form either of a constant force that acts over time (such as gravity); or of an impulse which is a force that happens at one instant of time and then stops (such as a collision)
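BRigid exposes the underlying jBullet rigid body (as in the setFriction and setRestitution examples in this chapter), so an impulse could plausibly be applied as below; treat this as a sketch, since the exact vector and the moment at which you apply it depend on your program:

import javax.vecmath.Vector3f;

// Apply a one-off impulse to a BRigid body, for example when it is struck.
// Negative y is 'up' if your world follows Processing's screen coordinates.
Vector3f impulse = new Vector3f(0, -200, 0);
body.rigidBody.applyCentralImpulse(impulse);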

human body motion

The fundamental aspect of human body motion is the motion of the skeleton

The head animator

The head animator for a particular character draws the most important frames (keyframes). An assistant draws the in-between frames (inbetweening).

The human visual system

The human visual system receives and processes electromagnetic energy in the form of light waves reaching the eye. This starts with the path of light through the pupil (Figure 10.1)

With a sphere, the (x, y, z) value of a point is converted into spherical coordinates to gain the latitude and the longitude information

The latitude is then converted into an x-coordinate and the longitude is converted into a y-coordinate. As might be expected, the object's 'North Pole' and 'South Pole' show the texture map squeezed into pie-wedge shapes

Facial bones essentially use the same mechanisms as skeletal animation and skinning

The main difference is that facial bones do not correspond to any real structures. Facial bones are normally animated by translation rather than rotation as it is easier

Map entity

The map entity determines what we use as the (x, y, z) value. It could be a point on the object, the surface normal, a vector running from the object's centroid through the point, or perhaps the reflection vector at the current point

The deformation of a human body does not just depend on the motion of the skeleton

The movement of muscle and fat also affects the appearance. These soft tissues need different techniques from rigid bones. More advanced character animation systems use multiple layers to model skeleton, muscle and fat

The example above implements keyframes but the animation is not at all smooth

The object instantly jumps from one keyframe position to the next, rather than gradually moving between the keyframes

Stop motion animation is a very different process. It involves taking many still photographs of real objects instead of drawing images.

The object is moved very slightly after each photograph to get the appearance of movement. More work is put into creating characters. You can have characters with a lot of detail and character creators will spend a lot of effort making characters easy to move.

Point light

The point light source emits rays in radial directions from its source (Figure 5.2). A point light source is a fair approximation of a local light source such as a light bulb. For many scenes a point light gives the best approximation to lighting conditions.

wireframe definition

The polygon mesh defines the outer surface of the object. A model or scene made in this manner is known as a wireframe

Graphics objects

The process described above is called Immediate Mode graphics and is a low level approach to graphics that closely corresponds to how the graphics card works. Transforms and styles are maintained in a state machine

cross products

The cross product of two 3D vectors v and u is a third vector perpendicular to both: v × u = (v_y u_z − v_z u_y, v_z u_x − v_x u_z, v_x u_y − v_y u_x)

Graphics Processing Units

The rapid improvement in the quality of 3D computer graphics over the last two decades has largely been down to the increasing use and power of dedicated graphics hardware in modern computers. These are custom chips called Graphics Processing Units (GPUs), that are designed specifically for creating images from 3D shapes represented as a collection of polygons made out of vertices

Shading

The representation of light and shade on a sketch or map.

Flat shading

The simplest shading model for a polygon is flat shading, also known as constant or faceted shading. Each polygon is shaded uniformly over its surface. Illumination is calculated at a single point for each polygon. Usually only diffuse and ambient components are used.

Gamut mapping

The term gamut is used to indicate the range of colours that the human visual system can detect, or that display devices can reproduce.

Rasterisation is the process of computing the mapping from scene geometry to pixels.

The term rasterisation is derived from the fact that an image described in a vector graphics format (shapes) is converted into a raster image (pixels or dots)

What if we want to move two objects independently? We can use two commands pushMatrix and popMatrix. The above description was a slight simplification. The renderer does not maintain a single current matrix, it maintains a stack of matrices. A stack is a list of objects, where objects can be added and removed. The last object to be added is always the first one to be removed.

The transform stack contains all of the matrices that affect the current objects being drawn. The pushMatrix command adds a new matrix to the stack. Any transforms after the call to pushMatrix are applied to this new matrix. When popMatrix is called the last matrix to be added to the stack is removed. This has the effect of cancelling any transform that was called after the previous call to pushMatrix, but still keeping any matrices that were active before the call to pushMatrix. This makes it simple to move two objects independently.
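For instance (positions and sizes chosen arbitrarily), two boxes can be translated independently by giving each its own matrix on the stack:

pushMatrix();
translate(-100, 0, 0);   // only affects the first box
box(50);
popMatrix();             // discard the first box's transform

pushMatrix();
translate(100, 0, 0);    // only affects the second box
box(50);
popMatrix();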

High dynamic range imaging

The ultimate aim of realistic graphics is the creation of images that provoke the same responses that a viewer would have to a real scene

Perspective projection

The visual effect of perspective projection is similar to the human visual system: it has perspective foreshortening, in that the size of an object varies inversely with distance

z buffer values

The z-buffer stores values [0..ZMAX] corresponding to the depth of each surface. If the new surface is closer than one in the buffers, it will replace the buffered values:
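A minimal sketch of that update rule, assuming a 2D depth array zbuffer and a colour array framebuffer (names ours) and that smaller z means closer to the camera, is:

// For each pixel (x, y) covered by the new surface, with depth z and colour c:
if (z < zbuffer[x][y]) {       // the new surface is closer than the stored one
  zbuffer[x][y] = z;           // replace the buffered depth
  framebuffer[x][y] = c;       // and the buffered colour
}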

Typically use of a physics engine consists primarily of setting up the elements of a simulation then letting it run, with maybe some elements of interaction

There are many good physics engines available; we will use an engine for Processing, called BRigid which is based on the jBullet engine, which is itself based on the C++ engine Bullet.

Vertex shaders

These act on vertices and take the place of the standard transform stage of the pipeline.

Paul Ekman invented a system of classifying facial expressions called Facial Action Parameters (FAPs), which is used by psychologists to observe expressions in people. It consists of a number of parameters, each representing a minimal degree of freedom of the face.

These parameters can be used for animation. FAPs really correspond to the underlying muscles so they are basically a standardised muscle model. Again they can be implemented as morph targets or bones

Markerless optical

These systems use advanced computer vision techniques to track the body without needing to attach markers. These have the potential to provide an ideal tracking environment that is completely unconstrained

Other capture systems are too bulky to use on the face. The only real option is to use optical methods. Markerless systems tend to work better for facial capture as there are fewer problems of occlusion and there are more obvious features on the face (eyes, mouth, nose)

They do, however, have problems if you move or rotate your head too much.

Rigid bodies are slightly more complex.

They have a size and shape. They can move around and rotate but they cannot change their shape or deform in any way; they are rigid. This makes them relatively easy to simulate and means that they are the most commonly used type of object in most physics engines.

They take two types of input: uniform variables, such as the transform matrix or parameters of light sources, and vertex attributes, such as the position, normal or colour of a vertex.

They then calculate varying values as output; for example, the colour of the vertex after lighting or the transformed (world space) position. These are interpolated and passed to the fragment shader.

The process is therefore repeated a second time to take into account those patches that are now lit

They, in turn, will light other patches, and so on, and so on, and the process is repeated until a given number of passes or bounces are achieved - with each pass, the process converges on a stable image. Because these calculations are pre-processed the results can be presented interactively as they are view-independent.

The three integers together take up three bytes, which is 24 bits: thus a system with 24-bit colour

has 256 possible levels for each of the three primary colours. This is defined as 'true colour' or 'millions of colours' - a total of 16,777,216 colour variations, allowing for perceptually smooth transitions between colours. Many modern desktop systems have options for 24-bit true colour with an additional 8 bits for an alpha (transparency) channel, which is referred to as '32-bit colour' - an RGBA colour space

this 'perceptual' or 'photometric' correction

This 'perceptual' or 'photometric' correction may avoid the above artefacts, but conversely there are many different ways in which such remapping may be accomplished. As such, there is no standard way to map one gamut into another, more constrained gamut

Once we have the key intersection information (position, normal, colour, texture coordinates, and so on) we can apply any lighting model we want

This can include procedural shaders, lighting computations, texture lookups, texture combining, bump mapping, and more. However, the most interesting forms of lighting involve spawning off additional rays and tracing them recursively

Graphics programming

This chapter covers programming environments, which often develop rapidly. We will be using Processing for this course. The most up to date resources will be on the Processing website

Each call to vertex adds a new vertex to the shape, with its x, y and z position given by the three parameters.

This draws an open shape: the start and end point do not join up. To close the shape we can pass an argument, CLOSE, to endShape:

beginShape();
{
  vertex(100, 100, -100);
  vertex(200, 100, -150);
  vertex(200, 200, -200);
  vertex(150, 200, -150);
  vertex(100, 150, -100);
}
endShape(CLOSE);

The radiosity for each patch is then calculated. The radiosity equation is a matrix equation or set of simultaneous linear equations derived by approximations to the rendering equation.

This gives a single value for each patch. Gouraud shading is then used to interpolate these values across all patches. Now that some parts of the scene are lit, they themselves have become sources of light, and they could possibly cast light onto other parts of the scene

The procedure is first to calculate the intensity of light at each vertex. Next, interpolate the RGB values between the vertical vertices.

This gives us the RGB components for the left and right edges of each scan line (pixel row). We then display each row of pixels by horizontally interpolating the RGB values between that row's left and right edges.

Static friction

This happens when two objects are in contact with each other but not moving relative to each other. It acts to stop objects starting to move across each other and is equal and opposite to the forces parallel to the plane of contact of the objects

Once we have all of these in place we can modify the base shape, by iterating through all the vertices and calculating a new vertex position

This implementation assumes that the shape is composed of a number of child shapes (this is often the case when a shape is loaded from an obj file).
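A rough sketch of that loop, assuming a single-level shape (no children) and the base, morphs and weights variables described above, might look like this; in a real program you would keep an untouched copy of the base vertices and write the blended result into the shape you draw, rather than overwriting base every frame:

// Blend the base shape towards the morph targets:
// newPos = basePos + sum_i weights[i] * (targetPos_i - basePos)
for (int v = 0; v < base.getVertexCount(); v++) {
  PVector basePos = base.getVertex(v);
  PVector newPos = basePos.copy();
  for (int m = 0; m < morphs.length; m++) {
    PVector target = morphs[m].getVertex(v);
    PVector difference = PVector.sub(target, basePos);   // difference from the neutral mesh
    newPos.add(PVector.mult(difference, weights[m]));
  }
  base.setVertex(v, newPos);
}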

Magnetic

This involves putting magnetic transmitters on the body. The positions of these transmitters are tracked by a base station. These methods are very accurate but expensive for large numbers of markers. The markers also tend to be relatively heavy. They are generally used for tracking small numbers of body parts rather than whole body capture.

Mechanical

This involves putting strain gauges or mechanical sensors on the body. These are self contained and do not require cameras or a base station making them less

Recursive seed fill

This is a simple but slow method. A pixel lying within the region to be filled is taken as a seed and set to record the chosen colour. The seed's nearest neighbours are found and, with each in turn, this procedure is carried out recursively. The process continues until the boundary is reached
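As an illustration only (working on a PImage, with the function name and the fill and boundary colour arguments being ours), a recursive seed fill could be sketched like this; call img.loadPixels() before and img.updatePixels() after using it:

// Fill outward from (x, y) until the boundary colour is reached.
void seedFill(PImage img, int x, int y, color fillColour, color boundaryColour) {
  if (x < 0 || y < 0 || x >= img.width || y >= img.height) return;   // off the image
  color current = img.pixels[y*img.width + x];
  if (current == boundaryColour || current == fillColour) return;    // hit the edge, or already filled
  img.pixels[y*img.width + x] = fillColour;
  seedFill(img, x+1, y, fillColour, boundaryColour);   // recurse into the four nearest neighbours
  seedFill(img, x-1, y, fillColour, boundaryColour);
  seedFill(img, x, y+1, fillColour, boundaryColour);
  seedFill(img, x, y-1, fillColour, boundaryColour);
}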

Cyclic Coordinate Descent

This is an iterative geometric method. You start with the final link and rotate it towards the target.

An object typically has a different shape representation for physics than it has for graphics.

This is because physics shapes need to be fairly simple so that the collision calculations can be done efficiently; while graphics shapes are typically much more complex so the object looks good.

Finally, rather than using a 2D primitive object such as a rectangle, we can use 3D primitives; in this case a box. The box command takes three parameters. These are the three dimensions of the box: width, depth and height, or to describe them in standard mathematical notation x, y, and z

This is probably the most obvious difference of writing 3D graphics; we must work with three rather than two dimensions. In most cases where 2D code would require us to specify an x and a y we must also specify a z for depth. Another difference is that the box command, unlike rect does not allow us to specify the position of the box
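Instead, the box is always drawn at the current origin, so we position it with a transform first; for example (dimensions chosen arbitrarily):

pushMatrix();
translate(200, 150, -50);   // move the origin to where we want the box
box(100, 60, 40);           // the three dimensions of the box
popMatrix();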

Optical

This is the most commonly used system in the film and games industries. Coloured or reflective balls (markers) are put on the body and the positions of these balls are tracked by multiple cameras.

The painter's algorithm

This is the simplest of all the hidden surface rendering techniques, although it is not a 'true' hidden surface as it cannot handle polygons which intersect or overlap in certain ways. It relies on the observation that if you paint on a surface, you paint over anything that had previously been there, thus hiding it; namely, paint from furthest to nearest polygons

We need to add a graphical 'skin' around the character. The simplest way is to make each bone a transform and hang a separate piece of geometry off each bone

This works but the body is broken up (how most games worked 15 years ago). We want to represent a character as a single smooth mesh (a 'skin'). This should deform smoothly based on the motion of the skeleton.

Rotation

To derive a rotation matrix, we can begin in 2D by rotating a vector with only an x component (x, 0).
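Rotating (x, 0) anticlockwise by an angle θ gives (x cos θ, x sin θ); doing the same for a vector (0, y) and combining the two results gives the familiar 2D rotation matrix:

\begin{pmatrix} x' \\ y' \end{pmatrix} =
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}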

Tone mapping

To ensure a true representation of tonal values, some form of scaling or mapping is required to convey the range of a light in a real-world scene on a display with limited capabilities.

An ideal diffuse surface is, at the microscopic level, a very rough surface (for example, chalk or cardboard). Because of the microscopic variations in the surface, an incoming ray of light is equally likely to be reflected in any direction

To model the effect we assume that a polygon is most brightly illuminated when the incident light strikes the surface at right angles. Illumination falls to zero when the beam of light is parallel to the surface
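This is captured by Lambert's cosine law: the diffuse intensity depends on the cosine of the angle between the surface normal N and the light direction L, clamped to zero so that surfaces facing away from the light receive nothing:

I_d = k_d\, I_L\, \max(0, \mathbf{N} \cdot \mathbf{L})

where k_d is the diffuse reflection coefficient of the surface and I_L is the intensity of the light source.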

What are the 5 types of animation?

Traditional Animation. (2D, Cel, Hand Drawn) 2D Animation. (Vector-Based) 3D Animation. (CGI, Computer Animation) Motion Graphics. (Typography, Animated Logos) Stop Motion. (Claymation, Cut-Outs)

The transform nearest the vector is applied first: that is, the rightmost one. In this case it is the scale. Transforms can therefore be read right to left, in the opposite order we would expect from reading English

Transforms in programming languages have a similar backwards effect with the last one to be called applied first to any object.

The rendering equation

Using the definition of a BRDF we can describe surface radiance in terms of incoming radiance from all different directions. The rendering equation, introduced independently by D. Immel et al. and J. T. Kajiya in 1986, is a way of describing how light moves through an environment

Variables in shader programs

Variables in a CPU program are pieces of memory that can be read from or written to freely at any time. This is because the programs are sequential: as only one instruction can happen at a time we can be sure that there will not be two instructions changing a variable at the same time

Floating point TIFF/PSD (.tiff, .psd), 96 bits per pixel:

Very accurate with large dynamic range but results in huge file sizes and wasted internal data space.

Transformation means changing some graphics into something else by applying rules

We can have various types of transformations such as translation, scaling up or down, rotation, shearing, etc. When a transformation takes place on a 2D plane, it is called 2D transformation.

We can describe a Bézier curve by a function which takes a parameter t as a value. The value t ranges from 0 to 1, where 0 corresponds to the start point of the curve and 1 corresponds to the endpoint of the curve

We can think of t as the time taken to draw a curve on a piece of paper with a pen from start to finish: t = 0 when the pen first touches the paper, and t = 1 when we have drawn the curve. Values in between correspond to other points on the curve. The formula for cubic Bézier curves is:

[x, y] = (1 - t)^3 P_0 + 3(1 - t)^2 t\, P_1 + 3(1 - t) t^2 P_2 + t^3 P_3

Secondly, to view a 3D scene effectively requires lighting

We will cover lighting in detail in later chapters, but for the moment we will use the lights command which provides a standard set of lights

Culling and clipping

When displaying images on the screen, we want to restrict polygons to what will actually appear within the field of view and the confines of the screen. The default clipping rectangle is the full canvas (the screen)

Ambient reflection

When there are no lights in a scene the picture will be blank. By including a small fraction of the surface colour we can simulate the effect of light reflected from around the scene. Ambient reflection is a gross approximation of multiple reflections from indirect light sources. By itself, ambient reflection produces very little realism.

Collision

When two objects collide they produce forces on each other to stop them penetrating each other and to separate them.

Hidden surfaces

When you want to make a wireframe more realistic you need to take away the edges you cannot see; that is, you need to make use of hidden line and hidden surface algorithms. The facets in a model determine what is and what is not visible

P(t) = tP(tk) + (1 − t)P(tk−1)

Where t is the time parameter. The equation interpolates between keyframe P(tk−1) and keyframe P(tk) as t goes from 0 to 1. This simple equation assumes that the keyframe times are 0 and 1. The following equation takes different keyframe values and normalises them so they are between 0 and 1
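The normalised parameter then replaces t in the interpolation, exactly as the interpolation code in this chapter does with (t - t1)/(t2 - t1):

s = \frac{t - t_{k-1}}{t_k - t_{k-1}}, \qquad P(t) = s\,P(t_k) + (1 - s)\,P(t_{k-1})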

Colour reproduction

While the previous section deals with the range of image intensities that can be displayed, devices are also limited in the range of colours that may be shown. Tone mapping compresses luminance values rather than colour values.

translation matrix

X' = X + tx
Y' = Y + ty
P' = P + T

What benefits does layering offer to animation?

You only have to animate the bits that move. Next time you watch an animation, notice that the background is always more detailed than the characters. Asian animation often uses camera pans across static images

Shader programming

You therefore have two (or more) programs in your graphics software: your main (CPU) program and one or more shader programs. The CPU program is an ordinary program written in a standard programming language such as Processing, Java or C++. This will define and prepare all of the vertex and texture data

You put a lot of effort into creating a (virtual) model of a character and then when animating it you move it frame by frame.

You will spend a lot of time making easy-to-use controls for a character, a process called rigging

The acceleration is the rate of change of velocity (v), in mathematical terms:

\mathbf{a} = \frac{d\mathbf{v}}{dt}

keyframe in animation and filmmaking

a drawing that defines the starting and ending points of any smooth transition. The drawings are called "frames" because their position in time is measured in frames on a strip of film

Chromatic colour constancy extends this to colour:

a plant seems as green when it is outside in the sun as it does if it is taken indoors under artificial light

A framebuffer (also known as a framestore) is

a portion of memory representing a frame that holds the properties such as the colours of that frame and sends it to an output device, most typically a screen

Re-arranging the equation we can see that the total acceleration of an object is the sum of all the forces acting on the object divided by its mass:

\mathbf{a} = \frac{1}{m} \sum_i \mathbf{F}_i

To use the shader, we need to call the shader command before any drawing that we want to use the shader for:

shader(myShader);
noLights();
fill(100);
noStroke();
translate(20, 20, 50);
beginShape(TRIANGLE_STRIP);
{
  vertex(-100, -100, 50);
  vertex(100, -100, 0);
  vertex(100, 100, -50);
  vertex(50, 100, 0);
  vertex(-100, 50, 50);
}
endShape();

Because the image is represented by a discrete array of pixels

aliasing problems may occur. The most classical form of aliasing is the jaggy appearance of lines (see figure below).

Matrices

an arrangement of numbers in rows and columns

For basic objects these will just be transforms like translations and rotation but human characters will have complex skeletons

analogous to the metal skeletons Aardman Animations use (more on this later). Once you have completed this set-up effectively, animation becomes much simpler

The nearest object hit spawns secondary rays that also intersect with every object in the scene (except the current one)

and this continues recursively until the end result is reached: a value that is used to set the pixel colour

Now we have the timeline we can get hold of the positions at a certain key frame

and use them to translate our object. This is an example of how to get keyframe 0:

Procedural textures

are calculated mathematically to provide realism at any resolution

In theory, Bézier curves can be constructed for any number of points, but four control points (a cubic Bézier)

are commonly used because increasing the control points leads to polynomials of a very high order, making things computationally inefficient. The curve is defined by four points: the initial position, P0, and the terminating position, P3, and two separate middle points, P1 and P2; the curve itself does not generally pass through the two middle points

7 Lighting

The way in which light interacts with a scene is the most significant effect that we can simulate to provide visual realism. Light can be sent into a 3D computer model of a scene in a number of ways:

as a directional or parallel light source that shines in a particular direction but does not emanate from any particular location; as a point source that illuminates in all directions and diminishes with distance; as a spotlight that is limited to a small cone-shaped region of the scene; or as ambient light, a constant which is everywhere in the scene

It is one of the most-used methods for free-form curves in computer graphics

as the mathematical descriptions are compact and easy to compute and they can be chained together to represent many different shapes

In the case of the 1st octant, the line is drawn by stepping one horizontal pixel at a time from x0 to x1 and

at each step making the decision whether or not to step +1 pixel in the y direction
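A minimal sketch of that idea for the first octant (gentle slope, x0 < x1), using the integer Bresenham-style decision variable, might look like this in Processing; point() stands in for whatever pixel-setting routine is used:

// Draw a line in the first octant by stepping one pixel in x at a time
// and deciding at each step whether to also step one pixel in y.
void lineFirstOctant(int x0, int y0, int x1, int y1) {
  int dx = x1 - x0;
  int dy = y1 - y0;
  int d = 2*dy - dx;        // decision variable
  int y = y0;
  for (int x = x0; x <= x1; x++) {
    point(x, y);            // set the pixel
    if (d > 0) {
      y = y + 1;            // step in y as well
      d = d - 2*dx;
    }
    d = d + 2*dy;
  }
}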

This will give us access to a new vertex attribute, texCoord, which we should declare in our vertex shader (Processing will pass it to our shader automatically):

attribute vec2 texCoord;

It also gives us access to vertex normals via the normal vertex attribute:

attribute vec3 normal;

Vertex attributes use the attribute modifier. Custom vertex attributes are less common and not easily supported in Processing. We will therefore normally only use the default attributes that are automatically passed into the shader by Processing; for example, the position (vertex) and colour

attribute vec4 vertex; attribute vec4 color; // Note US spelling

If v = (x,y,z) is a vector and a is a number, then the scalar product of a and v is defined as

av = ( a*x, a*y, a*z );

Multiplication by a scalar:

av = (ax, ay, az)

In Processing it is possible to make a shape out of multiple triangles by passing a parameter TRIANGLES to the beginShape command. If this is done, every set of three vertices is formed into a triangle as shown below:

beginShape(TRIANGLES);
{
  // triangle 1
  vertex(100, 100, -100);
  vertex(200, 100, -150);
  vertex(200, 200, -200);
  // triangle 2
  vertex(150, 200, -150);
  vertex(100, 150, -100);
  vertex(100, 200, -100);
}
endShape();

Triangle strips

beginShape(TRIANGLE_STRIP);
{
  vertex(100, 100, -100);  // triangle 1
  vertex(200, 100, -150);  // triangle 1 and 2
  vertex(200, 200, -200);  // triangle 1, 2 and 3
  vertex(150, 200, -150);  // triangle 2, 3 and 4
  vertex(100, 150, -100);  // triangle 3 and 4
  vertex(100, 200, -100);  // triangle 4
}
endShape();

If the restitution is 1, the objects will bounce back with the same speed and if it is 0 they will remain stuck together. The restitution is a combined property of the two objects. In most physics engines each object has its own restitution and they are combined to get the restitution of the collision. In BRigid you can set the restitution on a rigid body:

body.rigidBody.setRestitution(restitutionCoefficient);

A physics engine will typically handle all collisions without you needing to write any code for them. However, it is often useful to be able to tell when a collision has happened; for example, to increase a score when a ball hits a goal or to take damage when a weapon hits an enemy. Code example 9.3 gives an example of how to detect collisions in BRigid.

However, it involves very difficult computer vision techniques. The Microsoft Kinect has recently made markerless motion capture more feasible by using a depth camera,

but it is still less reliable and accurate than marker based capture. In particular, the Kinect only tends to be able to capture a relatively constrained range of movements (reasonably front on and standing up or sitting down).

We can create a polygon

by simply specifying the positions in 3D space of each of the corners where its sides meet. In 3D graphics we call these corner points vertices (singular vertex).

A character is generally rigged with the skeleton in a default pose

called the bind pose, but not necessarily zero rotation on each bone.

viewpoint coordinate system

camera coordinate system.

The areas in which computer graphics is used include:

cartography, visualization of measurement data (2D and 3D), visualization of computer simulations, medical diagnostics, drafting and computer design, preparation of publications, special effects in movies, and computer games.

Hierarchical graphics objects It is often useful to make graphics objects out of a collection of other objects. For example, a table is a top and four legs. Most graphics engines enable you to do this by allowing graphics objects to have other graphics objects as children: a table would have its top and legs as children. The children inherit the transforms of their parents, so moving the parent will move all of the children together.

class GraphicsObject {
  PVector position;
  PVector rotation;
  PVector scale;
  PShape shape;
  GraphicsObject [] children;

  GraphicsObject(String filename) {
    shape = loadShape(filename);
    position = new PVector(0, 0, 0);
    rotation = new PVector(0, 0, 0);
    scale = new PVector(1, 1, 1);
    children = new GraphicsObject[0];
  }

  void addChild(GraphicsObject obj) {
    children = (GraphicsObject[])append(children, obj);
  }

  void display() {
    pushMatrix();
    translate(position.x, position.y, position.z);
    rotateX(rotation.x);
    rotateY(rotation.y);
    rotateZ(rotation.z);
    scale(scale.x, scale.y, scale.z);
    shape(shape);
    for (int i = 0; i < children.length; i++) {
      children[i].display();
    }
    popMatrix();
  }
}

The PShape class is able to handle shapes which contain other shapes as children, but this feature is also easy to implement using our custom GraphicsObject class by adding an array of children, as shown in the code above.

Luckily it is straightforward to wrap a PShape in a custom class that has this functionality

class GraphicsObject {
  PVector position;
  PVector rotation;
  PVector scale;
  PShape shape;

  GraphicsObject(String filename) {
    shape = loadShape(filename);
    position = new PVector(0, 0, 0);
    rotation = new PVector(0, 0, 0);
    scale = new PVector(1, 1, 1);
  }

  void display() {
    pushMatrix();
    translate(position.x, position.y, position.z);
    rotateX(rotation.x);
    rotateY(rotation.y);
    rotateZ(rotation.z);
    scale(scale.x, scale.y, scale.z);
    shape(shape);
    popMatrix();
  }
}

traditional animation

(also called classical animation, cel animation or hand-drawn animation) is an animation technique in which each frame is drawn by hand on a physical medium

The result is the light intensity which can then be multiplied by the vertex colour (it has to be converted to a vector first) to get the lit colour of the vertex:

col = vec4(light, light, light, 1) * color;

The position of a bone is calculated by

concatenating rotations and offsets; this process is called forward kinematics (FK)

constant and local

const: variables are compile-time constant, they cannot be changed for a particular shader. local: these behave like standard local variables in a CPU program. They can both be written to and read from in a shader, but only exist within the scope of a single run of a single shader

three-dimensional vector

contains three components defining a displacement along the x, y and z axis. Mathematically, a three-dimensional vector is defined as:

We can think of a transform matrix as providing a new coordinate system, as described in Chapter 3 of this subject guide. The matrix transforms coordinates in one coordinate system into another coordinate system. For example, it could convert from the coordinate system of an object into a coordinate system of the world

The matrix stack can therefore be thought of as a stack of coordinate systems. For example, the top of the stack could be the world coordinate system, below that would be the coordinate system of an object (a table) and below that the coordinate system of sub-parts of the object (table legs). This allows us to create the type of hierarchical coordinate system shown in Figure 3.8 in chapter 3 of this subject guide. The commands pushMatrix and popMatrix allow us to move from one coordinate system to another.

definition of culling

In computer graphics, culling is the process of removing polygons or objects that will not be visible in the final image (for example, those facing away from the viewer or lying outside the field of view) so that they do not have to be processed further

The second problem is efficiency. In immediate mode, each vertex is sent to the graphics card as soon as the vertex command is called.

Transfers to the graphics card require a lot of overhead and transferring vertices one at a time can be expensive

Our example vertex shader starts with a definition, which tells us it only uses colour, not lighting or texture (see above):

#define PROCESSING_COLOR_SHADER // Note US spelling

It then defines the uniform, attribute and varying variables:

uniform mat4 transform;
attribute vec4 vertex;
attribute vec4 color; // Note US spelling

Texturing can be combined with lighting. To use both in a shader we need the following definition in order to have all the variables available:

#define PROCESSING_TEXLIGHT_SHADER

Lighting and texturing can be combined by multiplying the texture colour by the calculated light colour:

gl_FragColor = lightColour*texture2D(texture, outpuTexCoord.xy);

From model to screen

describe the key components of the rasterisation process; identify different coordinate systems; explain the concept of projection; describe and explain the main algorithms involved in creating a 2D image from a 3D scene.

5 BRDFs

A surface looks different when viewed from different angles, and when lit from different directions. This is what is known as a Bi-directional Reflectance Distribution Function or BRDF. A BRDF is essentially the description of how a surface reflects. It simply describes how much light is reflected when light makes contact with a certain material

Clipping

In graphics, clipping is the process of cutting away the parts of primitives (lines or polygons) that fall outside the viewing region, so that only the visible portions are drawn

dynamic range

Due to the limitations of current technology, this is rarely the case. The ratio between the darkest and the lightest values in a scene is known as the dynamic range

The cameras use infra-red to avoid problems of colour. Problems of occlusion (markers being blocked from the cameras by other parts of the body) are partly solved by using many cameras spread around a room

The markers themselves are lightweight and cheap although the cameras can be expensive and require a large area.

What can be done to improve interpolation between keyframes?

To improve this we can use spline interpolation, which uses smooth curves to interpolate positions (Figure 8.1(b))

What is more, in order to make the final animation look consistent

each character should always be drawn by the same animator. Disney and other animation houses developed techniques to make the process more efficient. Without these methods, full-length films like Snow White would not have been possible.

A colour in the RGB colour model is described by indicating how much of each of the red, green, and blue is included. The colour is expressed as an RGB triplet (red value, green value, blue value);

each component of which can vary from zero to a defined maximum value - on a normalised scale [0.0 . . . 1.0] or actual byte values in the range [0 . . . 255]. If all the components are at zero the result is black; if all are at maximum, the result is the brightest representable white

lightCount holds the number of active lights, so we can just loop over the active lights and perform the lighting calculation on them:

vec3 vertexCamera = vec3(modelview * vertex);
vec3 transformedNormal = normalize(normalMatrix * normal);

If both endpoints of a line lie inside the clip rectangle, the entire line lies inside the clip

rectangle and no clipping is required. If one endpoint lies inside and one outside, we must compute the intersection point. If both endpoints are outside the clip rectangle, the line may or may not intersect with the clip rectangle
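As a rough sketch of the trivial-accept test only (the function names and the clip-rectangle variables left, right, top and bottom are ours), in Processing:

// Returns true if a point is inside the clip rectangle.
boolean insideClip(float x, float y, float left, float right, float top, float bottom) {
  return x >= left && x <= right && y >= top && y <= bottom;
}

// A line can be accepted without clipping if both of its endpoints are inside.
boolean triviallyAccept(float x0, float y0, float x1, float y1,
                        float left, float right, float top, float bottom) {
  return insideClip(x0, y0, left, right, top, bottom) &&
         insideClip(x1, y1, left, right, top, bottom);
}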

To texture a shape in Processing we can pass a PImage object into the texture command, which must be called inside beginShape and endShape:

beginShape();
texture(myTexture);
vertex(100, 100, -100, 0, 0);
vertex(200, 100, -150, 1, 0);
vertex(200, 200, -200, 1, 1);
vertex(150, 200, -150, 0.5, 1);
vertex(100, 150, -100, 0, 0.5);
endShape();

The rendering equation describes the total amount of light

emitted from a point along a particular viewing direction, given a function for incoming light and a BRDF
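In its common form (ignoring wavelength and time), the outgoing radiance L_o at a point x in direction ω_o is the emitted radiance plus the incoming radiance from every direction ω_i, weighted by the BRDF f_r and the cosine of the angle of incidence:

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot \mathbf{n})\, d\omega_i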

Physics simulation

explain the basic principles of physics simulation for computer animation; explain the role of rigid bodies, forces and constraints in a physics simulation; demonstrate the use of a physics engine to create a basic simulation; create a simulated scene using rigid bodies and manipulate the properties of those rigid bodies to create appropriate effects; demonstrate the use of forces and constraints to control the behaviour of rigid bodies

A BRDF is therefore a function of

the incoming light direction and the outgoing direction (the direction of the viewer)

Other approaches to facial animation There is plenty more to facial animation than morph targets, often related to body animation techniques

facial bones, muscle models, facial action parameters, and facial motion capture.

We can change the style of a shape using commands such as fill which changes the colour of the body of a shape; stroke which changes the colour of the edges, and strokeWeight which changes the width of the edges:

fill(255, 0, 0);   // make the shape red
strokeWeight(6);   // make the edges thick
beginShape(TRIANGLE_STRIP);
{
  vertex(-100, -100, 50);
  vertex(100, -100, 0);
  vertex(100, 100, -50);
  vertex(50, 100, 0);
  vertex(-100, 50, 50);
}
endShape();

We also need an array of weights:

float [] weights;

We then calculate the dot product of the light direction and the vertex normal (equivalent to calculating the cosine of the angle between them):

float light = max(0.0, dot(direction, transformedNormal));

Then you need to assign a mass and initial position to the object. Positions are represented as Vector3f objects (a different representation of a vector from a Processing PVector):

float mass = 100;
Vector3f pos = new Vector3f(random(30), -150, random(1));

As well as creating shapes out of vertices, it is possible to create PShape objects based on primitives

float shapeParameters [] = new float []{100, 100, 100};
myShape = createShape(BOX, shapeParameters);

We can now search for the current keyframe. We need to find the keyframe just before the current time. We can do that by finding the position in the timeline where the keyframe is less than t but the next keyframe is more than t:

float t = float(millis())/1000.0; // convert time from milliseconds to seconds
int currentKeyframe;
for (currentKeyframe = 0; currentKeyframe < timeline.length-1; currentKeyframe++) {
  if (timeline[currentKeyframe].time < t && timeline[currentKeyframe+1].time > t)
    break;
}
PVector pos = timeline[currentKeyframe].position;

The above code shows how to create PShapes out of vertices, but it is also possible to edit the vertices of an existing object. This makes it possible to animate the shape of an object:

for (int i = 0; i < myShape.getVertexCount(); i++) {
  PVector v = myShape.getVertex(i);
  v.x += random(-1, 1);
  v.y += random(-1, 1);
  myShape.setVertex(i, v);
}

Line drawing needs to be fast and as simple as possible

while giving a continuous appearance. Commonly used procedures are the Bresenham algorithm and the midpoint algorithm.
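
As a minimal sketch of the idea (not the subject guide's own listing), Bresenham's algorithm for a line whose slope is between 0 and 1 uses only integer arithmetic and an incremental decision variable; set() writes a single pixel in Processing:

void bresenhamLine(int x0, int y0, int x1, int y1, color c) {
  int dx = x1 - x0;
  int dy = y1 - y0;
  int d = 2*dy - dx; // decision variable
  int y = y0;
  for (int x = x0; x <= x1; x++) {
    set(x, y, c);
    if (d > 0) {
      y++;
      d -= 2*dx;
    }
    d += 2*dy;
  }
}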

The fragment shader uses the transformed texture coordinates to sample the texture:

gl_FragColor = texture2D(texture, outputTexCoord.xy);

We can do many things with a vertex shader. One example is changing the vertex position. A simple example is to apply a sine function to the vertex position:

gl_Position.x *= (sin(0.04*gl_Position.y) + 2.0)/2.0;

This will create a wavy effect on the shape.

This results in a two-dimensional (planar) coordinate which can be used to look up the colour from the texture map.

Figure 6.2 shows some texture-mapped objects that have a planar map shape. No rotation has occurred. It is the z-coordinate - namely, the depth - that has been discarded. You can work out which component has been projected by looking for colour changes in coordinate directions.

Interpolation shading

We need a shade value at each point. This is done quickly by interpolation: 1. Compute a shade value at each vertex. 2. Interpolate to find the shade value at the boundary. 3. Interpolate to find the shade values in the middle.
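
Step 3 amounts to a linear blend along each scanline. A minimal sketch (the names xLeft, xRight, shadeLeft and shadeRight are just illustrative):

float shadeAt(float x, float xLeft, float xRight, float shadeLeft, float shadeRight) {
  float s = (x - xLeft) / (xRight - xLeft); // position along the span, in [0, 1]
  return lerp(shadeLeft, shadeRight, s);
}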

Instead, the mapping must be specifically tailored in a non-linear manner, permitting the luminance to be compressed in an appropriate way. Algorithmic solutions, known as tone mapping operators, or tone reproduction operators

have been devised to compress certain features of an image and produce a result with a reduced dynamic range that appears plausible or appealing on a computer monitor.

First, the environment itself has to be created or captured - either computer generated or captured using a probe, such as a chrome sphere, which captures real world reflections.

The object is then surrounded by a closed three-dimensional surface onto which the environment is projected. Imagine the mapping process as one where the reflected surface is located at the centre of an infinitely large hollow cube, with the image map painted on the inside, where each inner face of the cube is a 2D texture map representing a view of the environment.

The main idea of the method is to store illumination values on the surfaces of the objects. It has its basis in the field of thermal heat transfer: the radiosity of a surface is the rate at which energy leaves that surface,

and this includes energy emitted by the surface as well as energy reflected from other surfaces. In other words, the light that contributes to the scene comes not only from the light source itself but also from the surfaces that receive and then reflect light. It uses the finite element method to solve the rendering equation for scenes whose surfaces are purely diffuse.

Firstly, it does not correspond well to how we normally think about graphics. We do not normally think in terms of graphics states and vertices. We think in terms of objects. Vertices are not ephemeral things that are re-created in each frame;

they form stable objects that exist from frame to frame (for example, a teapot, or a table). Colours and positions are not graphics states, they are properties of objects. In the real world there is no abstract 'red' state; there are red teapots and red tables.

There are different ways to determine the value of a BRDF.

They can be measured directly from real objects using specially calibrated cameras and light sources, or they can be models derived from empirical measurements of real-world surfaces.

In a collision, momentum is conserved, so the sum of the momenta (mass times velocity) of the two objects stays the same. However, this does not tell us anything about what the two individual objects do.

They might join together and move with a velocity that is the result of combining their momenta, or they might bounce back from each other perfectly, without losing much velocity at all. What exactly happens depends on a number called the coefficient of restitution.
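
As a sketch of the idea for a one-dimensional collision (the masses and velocities here are just illustrative), conservation of momentum plus the restitution relation (v2 − v1) = −e(u2 − u1) gives the velocities after the collision:

float m1 = 2.0, m2 = 1.0;  // masses
float u1 = 3.0, u2 = -1.0; // velocities before the collision
float e = 0.8;             // coefficient of restitution (1 = perfectly bouncy, 0 = the objects stick together)

float v1 = (m1*u1 + m2*u2 + m2*e*(u2 - u1)) / (m1 + m2);
float v2 = (m1*u1 + m2*u2 + m1*e*(u1 - u2)) / (m1 + m2);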

To construct a hidden surface view, each polygon is projected onto the viewing plane. Instead of drawing the edges, pixels lying inside the boundary formed by the projected edges of a polygon are given an appropriate colour.

This is not a trivial task. Given a typically high polygon count in a scene, any procedure to implement the filling task must be very efficient.

This fourth component is, in most cases, 1. This new 4D coordinate system is called

homogeneous coordinates

the rendering equation defines

how much light leaves from any point in a 3D scene

Texture mapping places

images on top of the polygons and thus can make simple models look incredibly realistic and is the easiest way to add fine detail

If you flip through the pages fast enough the images are presented one by one and the small changes no longer seem like a sequence

of individual images but a continuous sequence. In film, this becomes a sequence of frames, which are also images, but they are shown automatically on a film projector.

Making an image appear realistic

involves setting pixels to the correct colour, depending on the material properties and how those materials are lit

λ

is a particular wavelength of light (without this, everything would be greyscale)

The rendering equation....

is a way of describing how light moves through an environment

Stop motion

is an animated filmmaking technique in which objects are physically manipulated in small increments between individually photographed frames so that they appear to exhibit independent motion when the series of frames is played back as a fast sequence.

Le(x, ωo, λ, t)

is emitted spectral radiance (and since most surfaces tend not to emit light there is not usually any contribution here)

In terms of computer-generated imagery, one of the aims

is often to achieve photorealism; that is, images that look as convincing as a photograph of a real scene

Li(x, ωi , λ, t)

is spectral radiance of wavelength λ coming inward towards x from direction ωi at time t. The incoming light does not have to come from a direct light source - it may be indirect, having been reflected or refracted from another point in the scene

One goal of realistic computer graphics

is that if a virtual scene is viewed under the same conditions as a corresponding real-world scene, the two images should have the same luminance levels, or tones.

The advantage of the painter's algorithm

is that it is very easy to implement for simple cases (but not so good for more complex surface topologies such as certain overlapping polygons or the presence of holes). However, it's not very efficient - all polygons are rendered, even when they become invisible

fr(x, ωi , ωo, λ, t)

is the bidirectional reflectance distribution function (BRDF) -the proportion of light reflected from ωi to ωo at position x, time t, and at wavelength λ

Solving the rendering equation for a given scene

is the main challenge in physically-based rendering, where we try to model the way in which light behaves in the real world

Lo(x, ωo, λ, t)

is the total spectral radiance of wavelength λ directed outward along direction ωo at time t, from a particular position x. In other words, the rendering equation is a function which gives you the outgoing light in a particular direction ωo from a point x on a surface

ωi · n

is the weakening factor of inward irradiance due to incident angle, as the light flux is smeared across a surface whose area is larger than the projected area perpendicular to the ray. This attenuates the incoming light.

Forward kinematics is a simple and powerful system, but it has drawbacks:

it can be fiddly to animate with. Making sure that a hand is in contact with an object can be difficult

Some tone mapping operators focus on preserving aspects such as detail or brightness, some concentrate on producing a subjectively pleasing image, while others focus on providing a perceptually-accurate representation of the real-world equivalent. In addition to compressing the range of luminance

it can be used to mimic perceptual qualities, resulting in an image which provokes the same responses as someone would have when viewing the scene in the real world. For example, a tone reproduction operator may try to preserve aspects of an image such as contrast, brightness or fine detail - aspects that might be lost through compression.

The downside of ray tracing is that although it generates incredibly realistic images,

it is computationally expensive with extremely high processing overheads - scenes can take minutes, hours or days to render

In the real world, the interaction of light on surfaces gives shading, which humans use as an important depth cue. When creating a 3D computer generated image we can model the appearance of real-world lighting. As light-material interactions cause each point to have a different colour, we therefore need to know a number of different properties, namely:

light sources; material properties; location of viewer; surface orientation.

The advantage of the z-buffer algorithm

is that it is easy to implement, particularly in hardware (it is a standard in many graphics packages such as OpenGL). Also, there is no need to sort the objects and no need to calculate object-object intersections. However, it is somewhat inefficient, as pixels in polygons nearer the viewer will be drawn over polygons at greater depth.

Transforms are implemented as matrices. Each type of transform corresponds to a particular form of matrix as described in Chapter 2 of this subject guide. The transform system works as a state machine in the same way as the style system.

The renderer maintains a current matrix which represents the current transform state. Each time a translate, rotate or scale command is called this current matrix is multiplied by the relevant transform matrix. Whenever a vertex is rendered it is transformed by the current transform matrix.

Multiplication of two matrices

| m11 m12 | | n11 n12 |   | m11n11 + m12n21  m11n12 + m12n22 |
| m21 m22 | | n21 n22 | = | m21n11 + m22n21  m21n12 + m22n22 |

Multiplication of a vector by a matrix:

| m11 m12 | | v1 |   | m11v1 + m12v2 |
| m21 m22 | | v2 | = | m21v1 + m22v2 |
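
A minimal sketch of the same calculation in code (a 2x2 matrix stored row-major as a float[][]):

float[] matVec(float[][] m, float[] v) {
  return new float[] {
    m[0][0]*v[0] + m[0][1]*v[1],
    m[1][0]*v[0] + m[1][1]*v[1]
  };
}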

Ray tracing

models the variations in colour and intensity that would be produced by light falling on an object from multiple directions, by tracing the paths of light rays through the scene

Diffuse reflection Diffuse lighting is the most significant component of an illumination model. Light reflected from a diffuse surface is scattered in all directions

A material that is perfectly diffuse follows Lambert's Cosine Law, and so the surface looks the same from all directions; that is, the reflected energy from a small surface area in a particular direction is proportional to the cosine of the angle between that direction and the surface normal.

consists of a number of stages, most of which occur on the GPU and some of which are programmable

model transform; projection; rasterization; display.

This sets the three components of the tint variable. The set command can take a variable number of parameters for different types of uniform variables. For example, a float variable can be set with one parameter:

myShader.set("intensity", 1.0);

We can pass a texture from the CPU to GPU by using the set command of PShader, passing in a PImage object:

myShader.set("texture", myTexture);

If we are using custom uniform variables we need to set them from within our CPU program using the set command of PShader

myShader.set("tint", 1.0, 0.0, 0.0);

It is also possible to load objects from files using the 'OBJ' format. This is useful as it allows you to create 3D objects in an external modelling tool such as 'Blender' or 'Autodesk Maya' and use them in your program:

myShape = loadShape("Sphere.obj");

A shape also has style properties. These are changed using commands such as fill and stroke, just like in immediate mode. (Note that, unlike immediate mode, these commands must be called between beginShape and endShape):

myShape.beginShape(TRIANGLE_STRIP);
{
  myShape.fill(100);
  myShape.noStroke();
  myShape.vertex(-100, -100, 50);
  myShape.vertex(100, -100, 0);
  myShape.vertex(100, 100, -50);
  myShape.vertex(50, 100, 0);
  myShape.vertex(-100, 50, 50);
}
myShape.endShape();

Shapes also have transform properties

myShape.translate(-50, -50, -50); myShape.rotateY(radians(30));

Vertices are the basic building blocks of our 3D graphics shapes.

In Processing we can make a shape out of vertices using the commands beginShape, vertex and endShape:

beginShape();
{
  vertex(100, 100, -100);
  vertex(200, 100, -150);
  vertex(200, 200, -200);
  vertex(150, 200, -150);
  vertex(100, 150, -100);
}
endShape();

Forces

In a physics simulation objects are affected by forces that change their movement in a number of ways. These can be forces that act on objects from the world (for example, gravity and drag); that act between objects (for example, collisions and friction); or that are triggered on objects from scripts (for example, impulses).

As shown in the image above a physics simulation consists of a World that contains a number of Objects which can have a number of Forces acting on them (including forces that are due to the interaction between objects).

In addition there can be a number of Constraints that restrict the movement of objects. Each of these elements will be covered in the following sections.

the rasterisation of a circle beginning with a point in the first octant and proceeding anticlockwise

As with the line-drawing algorithm, the procedure is incremental and at each step one of two candidate pixels is chosen by means of a decision variable d, where the best approximation of a true circle is described by the pixels that fall the least distance from the true circle.
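
A minimal sketch of the midpoint circle algorithm (the function and variable names are just illustrative); only the first octant is computed and the other seven points follow by symmetry:

void rasterCircle(int cx, int cy, int r) {
  int x = 0;
  int y = r;
  int d = 1 - r; // decision variable
  while (x <= y) {
    point(cx + x, cy + y);  point(cx + y, cy + x);
    point(cx - x, cy + y);  point(cx - y, cy + x);
    point(cx + x, cy - y);  point(cx + y, cy - x);
    point(cx - x, cy - y);  point(cx - y, cy - x);
    if (d < 0) {
      d += 2*x + 3;
    } else {
      d += 2*(x - y) + 5;
      y--;
    }
    x++;
  }
}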

Environment mapping

Environment mapping is the process of reflecting the surrounding environment in a shiny object - it is a cheap way to create reflections. When you look at a shiny object, what you see is not the object itself but how the object reflects its environment. What you see when you look at a reflection is not the surface itself but what the environment looks like in the direction of the reflected ray.

Culling refers to

discarding any complete polygons that lie outside the clip rectangle. It is a relatively quick way of deciding whether to draw a triangle or not.

The vertex shader has to transform the texture coordinates of the vertex (we must first convert them to a vec4):

void main() {
  gl_Position = transform*vertex;
  vec4 tc;
  tc.xy = texCoord;
  tc.zw = vec2(1.0, 1.0);
  outputTexCoord = texMatrix*tc;
  col = color;
}

What is the benefit of keyframing in animation?

You only need a few images to get the entire feel of a sequence of animation, but you would need many more to make the final animation look smooth.

Culling and clipping

operations that take place in order to portray polygons on the screen. These operations involve getting rid of back-facing polygons (culling) that the viewer will not see, removing polygons outside of the viewing volume (clipping)
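
A back-facing polygon can be detected from the angle between its normal and the direction to the viewer. A minimal sketch of that test (the names are illustrative, not part of any particular API):

boolean isBackFacing(PVector normal, PVector toViewer) {
  return normal.dot(toViewer) <= 0; // facing away from the viewer
}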

A computer-generated image begins with a 3D geometric model. 3D models are used throughout 3D graphics. The geometric models themselves may have been created in a modelling package

or may be the automatically generated output from a data capture device such as a laser scanner, or from techniques such as 3D scene reconstruction from images

These could use geometric methods (for example, free form deformations based on NURBS)

or simulation methods (model physical properties of fat and muscle using physics engines). Hair is also normally modelled using physics simulation.

Mathematics for graphics

perform calculations using trigonometry and vectors perform calculations on 3x3 and 4x4 affine transform matrices.

The BPhysics object is used to simulate the world. In Processing's draw function you must update the physics object so that the simulation runs

physics.update();

In BRigid you set gravity on the physics world like this

physics.world.setGravity(new Vector3f(0, 500, 0));

Note that gravity in physics engines typically just models gravity on the surface of the earth (or other planet) where the pull of the planet is the only significant gravitational force and gravity is constant. For simulations in outer space that include multiple planets and orbits you would need to implement your own gravitational force using a custom force script and Newton's law of gravity.

A Bézier curve only passes through the first and last points (Figure 3.14). The first and last control

points interpolate the curve; the rest approximate the curve. The shape of a Bézier curve can be altered by moving the middle points.

As an example we could define a basic keyframe class that had keyframes on position and would look something like this:

public class Keyframe {
  PVector position;
  float time;

  public Keyframe(float t, float x, float y, float z) {
    time = t;
    position = new PVector(x, y, z);
  }
}

Then go to the next link up and rotate it so that the end effector

points towards the target; you then move up to the next joint. Once you reach the top of the hierarchy you need to go back down to the bottom and iterate the whole procedure again until you hit the correct end effector position.
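
A minimal 2D sketch of this cyclic coordinate descent idea, under the assumption that angles[] holds each joint's rotation relative to its parent, lengths[] the link lengths and joints[] the joint positions (the last entry being the end effector); none of these names come from the subject guide:

void forwardKinematics(PVector[] joints, float[] angles, float[] lengths) {
  float a = 0;
  for (int i = 1; i < joints.length; i++) {
    a += angles[i-1]; // accumulate relative rotations
    joints[i] = new PVector(joints[i-1].x + lengths[i-1]*cos(a),
                            joints[i-1].y + lengths[i-1]*sin(a));
  }
}

void ccdIteration(PVector[] joints, float[] angles, float[] lengths, PVector target) {
  for (int i = joints.length - 2; i >= 0; i--) { // from the last joint up to the root
    PVector end = joints[joints.length - 1];
    float toEnd = atan2(end.y - joints[i].y, end.x - joints[i].x);
    float toTarget = atan2(target.y - joints[i].y, target.x - joints[i].x);
    angles[i] += toTarget - toEnd;              // rotate this link towards the target
    forwardKinematics(joints, angles, lengths); // update positions before moving to the next joint
  }
}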

Combining transforms

Transform matrices can be combined by multiplying them together (remembering that order matters in matrix multiplication).

When it intersects with objects in the scene its

reflection, refraction, or absorption is calculated

When a ray hits a surface, it can generate reflection, refraction, and shadow rays. A reflection ray is traced in the mirror-reflection direction and the object it intersects is what will be seen in the reflection. Refraction rays work similarly

For reflections and refractions only a single ray is traced. For shadows, a shadow ray is traced toward each light. If the ray intersects with a light then the point is illuminated based on the light's settings. If it instead intersects with another object, that object casts a shadow on the point.

Scanline fill The polygon is filled by stepping across each row and colouring any pixels on that row if they lie within the polygon:

remove any horizontal edges from the polygon
find the max and min y values of all polygon edges
make a list of scanlines that intersect the polygon
for each scanline:
    for each polygon edge, list the pixels where the edge intersects the scanline
    order them in ascending x value (p0, p1, p2, p3)
    step along the scanline filling pairwise (p0 --> p1, p2 --> p3)

transformation stack

The renderer does not maintain a single current matrix; it maintains a stack of matrices. A stack is a list of objects, where objects can be added and removed. The last object to be added is always the first one to be removed. The transform stack contains all of the matrices that affect the current objects being drawn. The pushMatrix command adds a new matrix to the stack. Any transforms after the call to pushMatrix are applied to this new matrix. When popMatrix is called the last matrix to be added to the stack is removed. This has the effect of cancelling any transform that was called after the previous call to pushMatrix, but still keeping any matrices that were active before the call to pushMatrix. This makes it simple to move two objects independently.

In the above example we are passing an extra two parameters to vertex; these are the texture coordinates. There are two modes for handling texture coordinates in Processing (set using the textureMode command): in IMAGE mode (the default) the coordinates

represent pixels in the image, while in NORMAL mode (used in the example) the texture coordinates are between 0 (top or left of the image) and 1 (the bottom or right of the image).

Procedural textures take an entirely different approach, creating the texture itself. Procedural textures are textures that are defined mathematically. You provide a formula and the computer is able to create the texture at any scale

or orientation. Instead of using an intermediate map shape, the (x, y, z) coordinate is used to compute the colour directly - equivalent to carving an object out of a solid substance. Rather than storing a value for each coordinate, 3D texture functions use a mathematical procedure to compute a value based on the coordinate, hence the name 'procedural texture'.
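
A minimal sketch of a procedural solid texture: a 3D checkerboard whose colour is computed directly from the (x, y, z) coordinate (cellSize is just an illustrative parameter controlling the scale):

color solidChecker(float x, float y, float z, float cellSize) {
  int sum = int(floor(x / cellSize)) + int(floor(y / cellSize)) + int(floor(z / cellSize));
  return (sum % 2 == 0) ? color(255) : color(0); // alternate white and black cells
}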

Transforms can also be used to animate objects. The following code rotates based on a variable angle which is increased slightly with each frame, resulting in a spinning object:

rotateY(angle); angle += 0.05;

This implies two things. Firstly, each call to a transform is applied on top of the last. A call to translate does not replace the previous call to translate; the two accumulate to produce a combined transform. This can result in some quite complex transformations. For example, an existing rotation will apply to a new translation command:

rotateZ(PI/2.0); translate(100, 0, 0);

Where do we get the tangents (velocities) from? We could directly set them; they act as an extra control on the behaviour

s = (t − tk−1) / (tk − tk−1)
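
Given the two keyframe positions, their tangents and the normalised time s, the value between the keyframes is given by the cubic Hermite blend. A minimal sketch (assuming p0 and p1 are the keyframe positions and m0 and m1 their tangents scaled by the keyframe interval):

PVector hermite(PVector p0, PVector p1, PVector m0, PVector m1, float s) {
  float s2 = s*s;
  float s3 = s2*s;
  PVector result = PVector.mult(p0, 2*s3 - 3*s2 + 1);
  result.add(PVector.mult(m0, s3 - 2*s2 + s));
  result.add(PVector.mult(p1, -2*s3 + 3*s2));
  result.add(PVector.mult(m1, s3 - s2));
  return result;
}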

layering in animation

is a background image that does not move, with foreground images on a transparent slide in front of it.

Skeletal animation

is a technique in computer animation in which a character (or other articulated object) is represented in two parts: a surface representation used to draw the character (called the skin or mesh) and a hierarchical set of interconnected bones (called the skeleton or rig) used to animate (pose and keyframe) the mesh.

A number of theories have been put forward regarding constancy. Early explanations involved adaptational theories, suggesting that the visual system adjusts in sensitivity to accommodate changes.

However, this would require a longer time than is needed for lightness constancy to occur, and adaptational mechanisms cannot account for shadow effects. Other proposed theories include unconscious inference (where the visual system 'knows' the relationship between reflectance and illumination and discounts it).

The shader transforms each vertex by each bone transform in turn and then adds together the results multiplied by the weights

In order to limit the number of vertex attributes we normally have a limit of four bones per vertex and use a vec4 to represent the bone weights. This means you also need to know which bones correspond to which weights, so you also have a second vec4 attribute specifying bone indices.
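
A minimal CPU-side sketch of this blending (in practice it runs in the vertex shader; boneMatrix, indices and weights are illustrative names for the per-bone transforms and the per-vertex bone indices and weights):

PVector skinVertex(PVector v, PMatrix3D[] boneMatrix, int[] indices, float[] weights) {
  PVector result = new PVector();
  for (int i = 0; i < 4; i++) {                  // up to four bones per vertex
    PVector transformed = new PVector();
    boneMatrix[indices[i]].mult(v, transformed); // transform the vertex by bone i
    transformed.mult(weights[i]);                // scale by the bone weight
    result.add(transformed);
  }
  return result;
}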

Phong Phong shading was introduced by Phong in 1975 and it is used by OpenGL. (Note: it is not the same as the Phong reflection model, described above.) It linearly interpolates a normal vector across the surface of the polygon from the polygon's vertex normals

The surface normal is interpolated and normalised at each pixel and then used to obtain the final pixel colour. Phong shading is more computationally expensive than Gouraud shading since the reflection model must be computed at each pixel instead of at each vertex. It is slower but provides more accurate modelling of specular highlights.

In this code any object will appear to move by 100 units along the y-axis, not the x-axis, because the translation has been affected by the rotation. Similarly, this code will result in a movement of 200 units, not 100:

scale(2.0); translate(100, 0, 0);

Its big advantage is that it combines hidden surface removal with shading due to direct illumination

shading due to global illumination, and shadow computation within a single model.

Assuming that some of the colours to be displayed in an image are outside a screen's gamut, the image's colours may be remapped to bring all its colours within displayable range. This process is referred to as gamut mapping

A simple mapping would only map out-of-range colours directly inward towards the screen's triangular gamut. Such a 'colorimetric' correction produces visible artefacts. A better solution is to re-map the whole gamut of an image to the screen's gamut, thus remapping all colours in an image.

The painter's algorithm steps

sort the list of polygons by distance from the viewpoint (furthest away at the start of the list)
repeat the following for all polygons in the ordered list: draw the projected polygon
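
A minimal sketch of those steps (ScenePolygon is a hypothetical class with a depth() method giving the distance from the viewpoint and a draw() method):

void paintersAlgorithm(ArrayList<ScenePolygon> polygons) {
  // sort so that the furthest polygon comes first
  polygons.sort((a, b) -> Float.compare(b.depth(), a.depth()));
  for (ScenePolygon p : polygons) {
    p.draw(); // nearer polygons overwrite further ones
  }
}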

Structure composition

struct MyStruct { int a; vec4 b; };

From this equation we can see that the basic function of a physics engine is to evaluate all of the forces on an object

sum them to calculate the acceleration and then use the acceleration to update the velocity and position.
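
A minimal sketch of that update step using semi-implicit Euler integration (force, velocity, position and mass are just illustrative globals; a real engine sums many forces before this step):

PVector force = new PVector(0, 9.8, 0); // example: gravity only
PVector velocity = new PVector();
PVector position = new PVector();
float mass = 1.0;

void integrate(float dt) {
  PVector acceleration = PVector.div(force, mass); // a = F / m
  velocity.add(PVector.mult(acceleration, dt));    // v += a * dt
  position.add(PVector.mult(velocity, dt));        // x += v * dt
}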

t

t is time (for ease, let us assume it is constant for now)

What does visual perception link?

It links the physical environment with the physiological and psychological properties of the brain, transforming sensory input into meaningful information.

Textures repeat themselves, just like giftwrap with a repeating motif. In general, mapping a 2D image to a polygon is just a 2D transformation. This is where knowledge of coordinate systems is necessary (see Chapter 3 of the subject guide for a reminder of this). In the OCS the coordinates are fixed relative to the object

Most mapping techniques will therefore use object coordinates to keep the texture in the correct place when the object moves. If the texture was mapped using the WCS then the pattern would shift as the object moves.

if you can follow the paths that light

takes around a scene then you can simulate real world lighting and generate a very realistic image indeed

Fragment shaders Fragment shaders act on individual fragments. Fragments are the elements of a polygon that will be drawn to specific pixels on screen. Fragments are created by the rasteriser, by interpolating the outputs of the vertex shader. Examples of what they do include

texture mapping; bump/normal mapping; generic texture operations; fog.

Texturing can be divided into two categories:

texture mapping (also called image mapping) and procedural texturing.

Not only does it incorporate direct illumination - light

that travels directly from a light source to a surface - but it can also handle light that originates from within the scene environment - the indirect light that is present due to light bouncing off other surfaces and reaching other objects.

Compound bodies are objects

that are made out of a number of rigid bodies linked together by joints or other methods. They are a way of creating objects with more complex movement while maintaining the simplicity of rigid body simulation. We will describe a number of ways of joining rigid bodies below

We could try to work out an exact (analytic) formula, but this would be specific to a given number of links. It would also be underconstrained for more than two links;

that is, there is more than one solution (you can hold your shoulder and wrist still but still rotate your elbow into different positions)

When this object is moved to a point in the WCS, it is really the origin of the object (in the OCS)

that is moved to the new world coordinates, and all other points in the model are moved by an equal amount. Figure 3.4 shows the WCS and OCS.

Aardman Animations use metal skeletons underneath their clay models

so that the characters can be moved easily and robustly without breaking. Each individual movement is then less work (though still a lot).

In rasterizing a curve

the challenge is to construct a spline from some given points in the plane.

What is the most important method in animation?

the most important method is keyframing

Mathematically-speaking, the light exiting any point in a particular direction is

the sum of the amount of light it emits in that direction and the amount of light it reflects in that direction from any incoming light

Even with 24-bit colour, although indicated as 'millions of colours' or 'true colour',

there are many colours within the visible spectrum that screens cannot reproduce. To show the extent of this limitation for particular display devices, chromaticity diagrams are often used. Here, the Yxy colour space is used, where Y is a luminance channel (which ranges from black to white via all greys), and x and y are two chromatic channels representing all colours.

Fragment shaders

these act on fragments (pixels) and take the place of standard display, after the rasterisation stage and prior to writing pixels to the frame buffer

The frames between the keyframes have

to be filled in (interpolated). For example, if you have the following positions of the ball.

this would be a hopeless task as it would be computationally impossible

to calculate every interaction between light and multiple surfaces as it travels around a scene

If a material is opaque then the majority of incident light is

transformed into reflected light and absorbed light, and so what an observer sees when its surface is illuminated is the reflected light

The other implication of the way transforms work is that any transform will apply to all objects that are drawn after it has been applied

translate(100, 0, 0);
// both boxes are translated by 100 units:
box(10, 10, 10);
box(10, 10, 10);

The following code shows an example of using transforms to position and rotate an object

translate(20, 20, 50);
rotateY(radians(30));
rotateX(radians(45));
beginShape(TRIANGLE_STRIP);
{
  vertex(-100, -100, 50);
  vertex(100, -100, 0);
  vertex(100, 100, -50);
  vertex(50, 100, 0);
  vertex(-100, 50, 50);
}
endShape();

This interference between transforms can be confusing. In general, the interference can be avoided by always applying transforms in the following order:

translate(x,y,z); rotate(a); scale(s);

Translation

translation moves an object to a different position on the screen. You can translate a point in 2D by adding translation coordinate (tx, ty) to the original coordinate (X, Y) to get the new coordinate (X', Y').

There are four main types of transformations

translation, rotation, reflection and dilation. These transformations fall into two categories: rigid transformations that do not change the shape or size of the preimage and non-rigid transformations that change the size but not the shape of the preimage

attribute:

attribute: These variables have a different value for each vertex. These are sent from the CPU to the shader as part of the vertex data (typically in an array). They can be read in a vertex shader but not written to. Examples include vertex position and normal.

The above example only uses one light, but we can access more lights by using an array uniform for the lights (see the Processing shader tutorials on processing.org for a full list of the light variables that Processing supports in shaders)

uniform int lightCount;
uniform vec4 lightPosition[8];

Instead of transforming the normal by the model view matrix, we must transform it by a special normal matrix

uniform mat3 normalMatrix;

It will also provide a new uniform variable, texMatrix, which is the matrix that is used to transform the texture coordinates:

uniform mat4 texMatrix;

In a GLSL shader a uniform variable is defined within the shader using similar syntax to Java, with a type and a name but also the modifier uniform:

uniform mat4 transform;

The texturing itself is done in the fragment shader. We must declare a uniform variable for the texture. This is of type sampler2D. It is called a sampler because it samples from a texture (in this case a 2D image texture).

uniform sampler2D texture;

transform is a built in uniform variable that is defined by Processing for the current transform matrix. We have used it in the above example. We could also define our own custom uniform variables; for example, a colour to use to tint our shapes (it is a vec3 for r, g, b colour):

uniform vec3 tint;

This example is very simple, but we can do many more things in the fragment shader. We will cover some of these later in the course, but here we will give an example of tinting the colour. We can define a custom uniform variable as defined above (this should go in the fragment shader):

uniform vec3 tint;

We can then use this to calculate the fragment colour, for example:

gl_FragColor = vec4(col.xyz*tint, 1);

The syntax col.xyz means the x, y and z components of a vector. Shader programs can achieve many more complex effects, such as lighting and textures, but these will be covered in future chapters of this subject guide.

This gives us access to a new uniform variable lightPosition, which we must declare in our vertex shader before we can use it:

uniform vec4 lightPosition;

uniform

uniform: These are variables that have a single value passed per object or render. They are set by the CPU program and do not change until the CPU program starts a new shader. They can be read in any shader but not written to. Examples include light positions and transform matrices

Gouraud

Gouraud shading was invented by Gouraud in 1971. The method simulates smooth shading across a polygon by interpolating the light intensity (that is, the colour) across polygons. It is a fast method and is supported by graphics accelerator cards. The downside is that it cannot model specular components accurately, since we do not have the normal vector at each point on a polygon.

Sum of point and vector = point

v + P = Q

where velocity is the rate of change of position (v):

v = dx/dt

Varying variables are also defined in a similar way to standard variables, but use the varying modifier

varying vec4 col;

It then defines the same varying variable that we defined in our vertex shader:

varying vec4 col;

We also need to provide a varying variable to pass the transformed texture coordinate to the fragment shader:

varying vec4 outputTexCoord;

varying:

varying: These are values that are passed from the vertex shader to the fragment shader. The vertex shader computes their values for each vertex and the rasteriser interpolates them to get a value for each fragment, which is passed to the fragment shader. They can be written to in the vertex shader and read in the fragment shader.

We calculate the direction from the vertex to the light:

vec3 dir = normalize(lightPosition.xyz - vertexCamera);

This code implements the diffuse lighting equation. First it calculates the vertex position in camera coordinates by transforming by the model view matrix:

vec3 vertexCamera = vec3(modelview * vertex);

Then we transform the normal:

vec3 transformedNormal = normalize(normalMatrix * normal);

Dot Product

Multiplying a vector by a vector results in a scalar quantity: v · u = |v| |u| cos(θ), where θ is the angle between the two vectors.

Ambient light

Even though an object in a scene is not directly lit it will still be visible (Figure 5.2). This is because light is reflected indirectly from nearby objects. Ambient light does not model a 'true' light source; it is a workaround for 3D modelling - a way of faking things - consisting of a constant value that mimics indirect lighting.

Vertex shaders act on individual vertices. They act on the basic vertex data to produce values that are interpolated by the rasteriser and then sent to the fragment shader. They perform a number of functions:

vertex and normal transformations; texture coordinate calculation; lighting; material application.

As with the vertex shader, the code is contained in the main function

void main() {
  gl_FragColor = col; // Note US spelling
}

The code itself is defined in the main function. It calculates the screen space position of the vertex by multiplying it by the transform matrix. It puts the value into gl Position, which is a built in varying variable which will be automatically passed to the fragment shader. It also passes the vertex colour to our custom varying variable col:

void main() {
  gl_Position = transform*vertex;
  col = color; // Note US spelling
}

Lighting calculations are often performed in the vertex shader. Code example 5.1 shows an example of a shader program with lighting.

void main() {
  gl_Position = transform*vertex;
  vec3 vertexCamera = vec3(modelview * vertex);
  vec3 transformedNormal = normalize(normalMatrix * normal);
  vec3 dir = normalize(lightPosition.xyz - vertexCamera);
  float light = max(0.0, dot(dir, transformedNormal));
  col = vec4(light, light, light, 1) * color;
}

In the following example code we are applying an impulse to simulate a catapult. The player can drag the object about with the mouse and when the mouse is released, an impulse is applied to it that is proportional to the distance to the catapult. The impulse is calculated as the vector from the object to the catapult and is then scaled by a factor forceScale. The result is applied to the rigid body using the applyCentralImpulse command (there is also a command applyImpulse which can apply an impulse away from the centre of the object).

void mouseReleased() {
  PVector impulse = new PVector();
  impulse.set(startPoint);
  impulse.sub(droid.getPosition());
  impulse.mult(forceScale);
  droid.physicsObject.rigidBody.applyCentralImpulse(
      new Vector3f(impulse.x, impulse.y, impulse.z));
}

In a full program this would be bundled into a complete timeline class, but for a basic demo we can just use the array directly by adding keyframes to it:

void setup() {
  size(640, 480);
  timeline = new Keyframe[5];
  timeline[0] = new Keyframe(0, 0, 0, 0);
  timeline[1] = new Keyframe(2, 0, 100, 0);
  timeline[2] = new Keyframe(4, 100, 100, 0);
  timeline[3] = new Keyframe(6, 200, 200, 0);
  timeline[4] = new Keyframe(10, 0, 0, 0);
}

3D Graphics in Processing

Below is a basic two dimensional program in Processing that draws a rectangle in grey on a white background

void setup() {
  size(640, 480);
}

void draw() {
  background(255);
  fill(100);
  rect(100, 100, 100, 200);
}

Turning this program into a 3D one is fairly straightforward, but requires some changes, which are marked with comments below:

void setup() {
  size(640, 480, P3D); // use the P3D renderer
}

void draw() {
  background(255);
  lights(); // add lighting
  fill(100);
  box(100, 100, 100); // 3D primitive
}

An important problem is how to animate people talking. In particular, how to animate appropriate mouth shapes for what is being said (Lip-sync). Each sound (phoneme) has a distinctive mouth shape

we can create a morph target for each sound (visemes). We analyse the speech or text into phonemes (this can be done automatically by a text-to-speech engine), match phonemes to visemes and generate morph target weights.

The two basic types of projections - perspective and parallel -

were designed to solve two mutually exclusive demands: showing an object as it looks in real life, and preserving its true size and shape

Creating realistic images is not just a goal for the entertainment industry

where special effects rely on looking as convincing as possible.

Digital Differential Analyzer (DDA) algorithm

which uses floating point arithmetic and is slower, less efficient and less accurate

B´ezier curves and animation

would be an obvious choice of curve to use as they are smooth, but they do not go through all the control points; we need to go through all the keyframes. As we need to go through the keyframes we use Hermite curves instead.

allows us to calculate x 0 :

x′ = x cos(θ)

x

x is the location in space

A ray in a 3D scene generally uses a 3D vector for the origin and a normalised 3D vector for the direction. We begin by shooting rays from the camera out into the scene (Figure 7.2). The pixels can be rendered in any order (even randomly), but it is easiest to go from top to bottom, and left to right.

We generate an initial primary ray (also called a camera ray or eye ray) and loop over all of the pixels. The ray origin is simply the camera's position in world space.
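
A minimal sketch of generating the direction of a primary ray for a pixel, assuming a camera at the origin looking down the negative z-axis with a vertical field of view fov (the names and conventions are illustrative, not from the subject guide):

PVector primaryRayDir(int px, int py, int w, int h, float fov) {
  float aspect = float(w) / float(h);
  float scale = tan(fov * 0.5);
  // convert the pixel centre to normalised device coordinates in [-1, 1]
  float x = (2 * (px + 0.5) / w - 1) * aspect * scale;
  float y = (1 - 2 * (py + 0.5) / h) * scale;
  PVector dir = new PVector(x, y, -1);
  dir.normalize();
  return dir;
}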

If tangents are calculated in this way the curves are called Catmull-Rom splines

If you set the tangents at the first and last frame to zero you get 'slow in, slow out'.
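
A common formulation of that tangent calculation, as a sketch (p[] holds the keyframe positions and times[] the keyframe times):

// Catmull-Rom tangent (velocity) at keyframe k
PVector tangent = PVector.sub(p[k+1], p[k-1]);
tangent.div(times[k+1] - times[k-1]);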

Lo(x, ωo, λ, t) = Le(x, ωo, λ, t) + ∫Ω fr(x, ωi, ωo, λ, t) Li(x, ωi, λ, t) (ωi · n) dωi

λ is a particular wavelength of light (without this, everything would be greyscale)
t is time (for ease, let us assume it is constant for now)
x is the location in space
ωo is the direction of the outgoing light
ωi is the negative direction of the incoming light
Lo(x, ωo, λ, t) is the total spectral radiance of wavelength λ directed outward along direction ωo at time t, from a particular position x. In other words, the rendering equation is a function which gives you the outgoing light in a particular direction ωo from a point x on a surface.
Le(x, ωo, λ, t) is emitted spectral radiance (and since most surfaces tend not to emit light there is not usually any contribution here)
Ω is the unit hemisphere (see Figure 7.3) containing all possible values for ωi
∫Ω . . . dωi is an integral over Ω. The enclosed functions need to be integrated.

ωi

ωi is the negative direction of the incoming light

ωo

ωo is the direction of the outgoing light

Ω is the unit hemisphere (see Figure 7.3) containing all possible values for ωi

Screen coordinate system

A coordinate system used by most programming languages in which the origin is in the upper-left corner of the screen, window, or panel, and the y values increase toward the bottom of the drawing area.

Basic drawing actions

All drawing functions simply place pixel values into memory; that is, they set a pixel on the screen to a specific colour: SetPixel(x, y, R, G, B), with x and y representing the coordinates of the pixel and R, G and B the colour value.

Graphics hardware

Graphics hardware: input/output devices, specialised chips, specialised architectures

function of input devices

Input devices take a signal from the user and send this signal to the central processing unit (CPU) - the hardware in a computer that carries out a sequence of stored instructions kept in computer memory.

camera coordinate system

This is based upon the viewpoint of the observer, and changes as they change their view. Moving an object 'forward' in this coordinate system moves it along the direction that the viewer happens to be looking at the time.

Coordinate Systems

Most 2D geometry is pictured with the first coordinate on the horizontal axis and the second coordinate on the vertical axis

Subfields of computer graphics

Mathematical structures: spaces, points, vectors, dusts, curves, surfaces, solids. Modelling: i.e. the description of objects and their attributes, including: primitives (pixels, polygons), intrinsic geometry, attributes (colour, texture), dynamics (motion, morphing). Techniques for object modelling, including: polygon meshes, patches, solid geometry, fractals, particle systems.

Transforms

Matrices can be used to define transforms. In this section we describe how to define a number of different transforms

pixels and points

Pixels are not points so we have to find the pixel that is closest to the actual point

Computer memory stores computer programs.

This is directly accessible to the processing unit. Dedicated video memory is located on, and only accessible by, the graphics card. The more video memory, the more capable the computer will be at handling complex graphics at a faster rate.

These models describe 3D objects using mathematical primitives such as spheres, cubes, cones and polygons

The most common type is a triangle or quadrilateral mesh composed of polygons with shared vertices. Hardware-based rasterisers use triangles because all of the lines of a triangle are guaranteed to be in the same plane, which makes the process less complicated

Identity

The simplest transform is the one that leaves a vector unchanged, called the identity matrix. This is a matrix with 1s along the main diagonal and 0s elsewhere

Geometry and other additions to images

These describe the geometry of an object; we can then build on that geometry by adding information about materials, textures, colours and light in order to produce a resulting image that looks 'real'

User interfaces and graphics

User interfaces: human factors, input/output devices, colour theory, workstations, interactive techniques, dialogue design, animation, metaphors for object manipulation, virtual reality.

When each object is created in a modelling program, a point must be picked to be the origin of that particular object, and the orientation of the object to a set of model axes.

Vertices are defined relative to the specific object. For example, when modelling a car, the modeller might choose a point in the centre of the car for the origin, or the point in the back of the car, or the front left wheel

Multiplication by a scalar:

    | m11 m12 |   | a·m11 a·m12 |
a · | m21 m22 | = | a·m21 a·m22 |

translation matrix

a matrix that can be added to the vertex matrix of a figure to find the coordinates of the translated image

identity matrix

a square matrix that, when multiplied by another matrix, equals that same matrix

As we can see from Figure 2.1, when the vector (x, 0) is rotated by θ it still has length x but it has new coordinates (x′, y′). x, x′ and y′ form a right-angled triangle, so:

cos(θ) = x′ / x

In this course we are most interested in perspective projection as this is what is required to make our modelled scenes look 'real',

enhancing realism by providing a type of depth cue. Projection from 3D to 2D is defined by straight projection rays (projectors) emanating from the centre of projection, passing through each point of the object, and intersecting the projection plane to form a projection

Three-dimensional (3D) computer graphics

have a number of applications; the main categories include computer-aided design (CAD), scientific visualisation and the entertainment industries.

A two-dimensional array of pixels is an image: a raster of pixels

The raster is organised into rows and each row holds a number of pixels. The quality of an image is related to the accuracy of the pixel colour value and the number of pixels in the raster.

a vector image is converted

into a raster graphics image, which maps bits directly to a display space (and is sometimes called a bitmap)

An input device

is any device that allows information from outside the computer to be communicated to the computer.

Everything you see on your computer's screen, from text to pictures,

is simply a two-dimensional grid of pixels, a term which comes from the words 'picture element'

Drawing vertical and horizontal lines

is straightforward: it is simply a matter of setting the pixels nearest the line endpoints and all the pixels in between
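
A minimal sketch of that special case for a horizontal line (set() writes one pixel in Processing):

void horizontalLine(int x0, int x1, int y, color c) {
  for (int x = min(x0, x1); x <= max(x0, x1); x++) {
    set(x, y, c);
  }
}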

Rasterisation (or rasterization)

is the task of taking an image described in a vector graphics format (shapes) and converting it into a raster image (pixels or dots). The rasterised image may then be displayed on a video display or printer, or stored in a bitmap file format.

Usually a model is built in its own object coordinate system (OCS)

later this model is placed into a scene in the WCS.

Even wireframe objects

look 'real' if perspective is used - that is, distant objects look smaller. A feature of perspective drawings is that sets of parallel lines appear to meet at a vanishing point - like train tracks running into the distance

The primary frame is the world coordinate system (WCS)

also known as the universe or global or sometimes model coordinate system. This is the base reference system for the overall model (generally in 3D), to which all other model coordinates relate.

Transpose

(Mᵀ)ij = Mji

There are two kinds of computer graphics

raster (composed of pixels) and vector (composed of paths). Raster images are more commonly called bitmap images. A bitmap image uses a grid of individual pixels where each pixel can be a different color or shade. Bitmaps are composed of pixels.

Parallel Projection

requires that the object be positioned at infinity and viewed from multiple points on an imaginary line parallel to the object

Computer graphics

is a term that describes the use of computers to create or manipulate images

The actual part of the scene in world coordinates

that is to be displayed is called a window. On a screen, the picture inside this window is mapped onto the viewport - the available display area

World Coordinate System (WCS)

the common X-Y coordinate system that is the default; if it is modified, it becomes a User coordinate System (UCS)

Negating a vector:

−v = (−x, −y, −z)

You should be familiar with all of these operations, both in terms of performing calculations using them and how they affect points in spaces.

For example, you should be comfortable using vector subtraction to obtain a vector between two points or using the dot product to calculate the angle between two vectors. If you do not feel comfortable with any of the following exercises please revise these topics in an introductory mathematics textbook.
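
A minimal sketch of those two operations using Processing's PVector class:

PVector a = new PVector(1, 0, 0);
PVector b = new PVector(0, 1, 0);
PVector between = PVector.sub(b, a);                // vector from point a to point b
float angle = acos(a.dot(b) / (a.mag() * b.mag())); // angle between the two vectors, in radians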

Local coordinate systems (LCS) are a further subdivision

For example, when modelling a car the centre of each wheel can be described with respect to the car's coordinate system, but each wheel can be specified in terms of a local coordinate system

How to draw a line

For general screen coordinates (x0, y0) and (x1, y1) the algorithm should draw a set of pixels that approximates a line between them

graphics software applications

Graphics software: graphics APIs; paint, draw, CAD and animation software; modelling and image databases; iconic operating systems; software standards

Scale Matrix

If, instead of 1s along the diagonal, we have a different scalar a we get a matrix that is equivalent to multiplying the vector by a:

Line drawing

Line drawing involves taking two endpoints in screen coordinates and drawing a line between them.

Non-commutativity of matrix multiplication:

MN ≠ NM (in general, the two products are not equal)

Vectors

Magnitude; direction; no position. Can be added, scaled, rotated. CG vectors have 2, 3 or 4 dimensions.

