Advanced Graphics Study Guide 3


Other methods are aimed at making the animation look more realistic and expressive.

'Squash and stretch' is about changing the shape of an object to emphasise its motion; in particular, stretching it along the direction of movement.

Digital photographs are often encoded in a camera's raw image format, because 8 bit JPEG encoding does not offer enough values to allow fine transitions (and introduces undesirable effects due to the lossy compression).

HDR images do not use integer values to represent the single colour channels (for example, [0...255] in an 8 bit per pixel interval for R, G and B) but instead use a floating point representation. Three of the most common file formats are as follows:

Example of a drawback of forward kinematics?

When animating, you often want to do things like make a character's hand touch a door handle. Trying to get the joint angles right so as to accurately place the hand can be a long and difficult process.

Stretch an object when it is going fast, and squash it when it changes direction.

'Slow in slow out' is about controlling the speed of an animation to make it seem smoother: start slow, speed up in the middle and slow to a stop.

A more principled approach is to model each of the muscles of the face. In fact this can be done using one of the techniques just discussed

Each muscle could have a morph target, or a bone, or there could be a more complex physical simulation as mentioned for multi-layered body animation.

Psychophysical experiments are a way of measuring psychological responses in a quantitative way so that they correspond to actual physical values. Psychophysics is a branch of experimental psychology that examines the relationship between the physical world and people's reactions to, and experience of, that world.

Psychophysical experiments can be used to determine responses such as sensitivity to a stimulus. In the field of computer graphics, this information can then be used to design systems that are finely attuned to the perceptual attributes of the visual system.

This means a mathematical simulation of the equations of physics

The most important equation is Newton's second law: f = ma (9.1). In words: force (f) equals mass (m) times acceleration (a).

These systems are not constrained in terms of the space in which you work. They also have the benefit of directly outputting joint angles rather than marker positions.

They can be bulky, particularly the cheap systems. They can be uncomfortable and constraining to wear, resulting in less realistic motion. Lighter-weight systems have recently been developed but they can be expensive.

In recent years, visual perception has increased in importance in computer graphics, predominantly due to the demand for realistic computer generated images. The goal of perceptually-based rendering is to produce imagery that evokes the same responses as an observer would have when viewing a real-world equivalent

To this end, work has been carried out on exploiting the behaviour of the human visual system (HVS). For this information to be measured quantitatively, a branch of perception known as psychophysics is employed, where quantitative relations between physical stimuli and psychological events can be established.

What does visual perception deal with?

Visual perception deals with the information that reaches the brain through the eyes.

Implementing morph targets.

// iterate over all children of the base shape
for (int i = 0; i < base.getChildCount(); i++) {
  // iterate over all vertices of the current child
  for (int j = 0; j < base.getChild(i).getVertexCount(); j++) {
    // create a PVector to represent the new vertex position
    PVector vert = new PVector(0, 0, 0);
    // iterate over all the morph targets
    for (int morph = 0; morph < morphs.length; morph++) {
      // get the corresponding vertex in the morph target
      // i.e. the same child and vertex number
      PVector v = morphs[morph].getChild(i).getVertex(j);
      // multiply the morph vertex by the morph weight
      // and add it to our new vertex position
      vert.add(PVector.mult(v, weights[morph]));
    }
    // set the vertex position of the base object
    // to the newly calculated vertex position
    base.getChild(i).setVertex(j, vert);
  }
}

Friction

// check if bodies are intersecting
int numManifolds = physics.world.getDispatcher().getNumManifolds();
for (int i = 0; i < numManifolds; i++) {
  PersistentManifold contactManifold =
    physics.world.getDispatcher().getManifoldByIndexInternal(i);
  int numCon = contactManifold.getNumContacts(); // change and use this number
  if (numCon > 0) {
    RigidBody rA = (RigidBody) contactManifold.getBody0();
    RigidBody rB = (RigidBody) contactManifold.getBody1();
    if (rA == droid.physicsObject.rigidBody) {
      for (int j = 0; j < crates.length; j++) {
        if (rB == crates[j].physicsObject.rigidBody) {
          score += 1;
        }
      }
    }
    if (rB == droid.physicsObject.rigidBody) {
      for (int j = 0; j < crates.length; j++) {
        if (rA == crates[j].physicsObject.rigidBody) {
          score += 1;
        }
      }
    }
  }
}

In BRigid, creating a world requires you to set the extents of the world; that is, the minimum and maximum values for x, y and z. These are used to create a BPhysics object which represents the world as shown in Code example 9

// extents of physics world
Vector3f min = new Vector3f(-120, -250, -120);
Vector3f max = new Vector3f(120, 250, 120);
// create a rigid physics engine with a bounding box
physics = new BPhysics(min, max);

Physics world

A Physics World is a structure that defines properties of the whole simulation. This typically includes the size of the volume to be simulated as well as other parameters such as gravity. Most physics engines require you to create a world before setting up any other element of the simulation, and to explicitly add objects to that world

Physics simulation

A particularly popular approach is to simulate the laws of physics to get realistic movement and interaction between objects

Physics engines

A physics engine is a piece of software for simulating physics in an interactive 3D graphics environment. It will perform the simulation behind the scenes and make it easy to set up complex simulations.

Simulating physics

A physics engine is a piece of software that simulates the laws of physics (at least within a certain domain).

works on the same principle

A sequence of images is displayed at 25 frames per second (the minimum to make it appear like smooth motion)

Facial bones are similar to bones in body animation

A set of underlying objects that can be moved to control the mesh. Each bone affects a number of vertices with weights in a similar way to smooth skinning for body animation

Skinning

A skeleton is a great way of animating a character but it does not necessarily look very realistic when rendered

Psychologist Paul Ekman defined a set of six universal human emotions (joy, sadness, surprise, anger, disgust, fear), which he called the basic emotions.

All are independent of culture and each has a distinctive facial expression. They are a common basis for morph targets but can be very unsubtle.

Tone mapping (also known as tone reproduction) provides a method of scaling (or mapping) luminance values in the real world to a displayable range

Although it is tempting to use a straightforward linear scaling, this is not an adequate solution as many details can be lost (Figure 10.3). The mapping must be tailored in some non-linear way.
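A tiny sketch of what a non-linear global operator can look like, using the well-known Reinhard operator L' = L/(1 + L) as an illustration. The operator choice and class names are assumptions for illustration, not something prescribed by this guide:

```java
public class ToneMap {
    // Map a real-world luminance value (0..infinity) to display range (0..1)
    // using the Reinhard global operator: L' = L / (1 + L).
    public static double reinhard(double luminance) {
        return luminance / (1.0 + luminance);
    }
    // A linear scale for comparison: everything above the displayable
    // maximum is simply clipped, losing detail in bright areas.
    public static double linear(double luminance, double maxDisplayable) {
        return Math.min(luminance / maxDisplayable, 1.0);
    }
}
```

Unlike the linear scale, the Reinhard curve compresses very bright values smoothly towards 1 rather than clipping them.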

Hierarchical transform system

As in most graphics systems, transforms are hierarchical, so FK can easily be implemented using the existing functionality of the engine.

Benefit of Hermite curves

As we need to go through the keyframes we use Hermite curves instead (Figure 8.2(b)). These are equivalent to Bézier curves, but rather than specifying four control points you specify two end points and the tangents at those end points. In the case of interpolating positions the tangents are velocities.

Global operators apply the same scale to every pixel, regardless of whether pixels are located in a bright or dark area. This often results in a tone mapped image that looks 'flat', having lost its local details. Conversely, local operators apply a different scale to different parts of an image. Local tone mapping operators consider pixel neighbourhood information for each individual pixel, which simulates the adaptive and local properties of the human visual system. This can lead to better results but takes longer for the computer to process.

At present, it is a matter of choosing the best tool for the job, although the development of high dynamic range displays means operators can be compared more accurately. Where required, for the purposes of realistic image generation, perceptually-accurate operators are the best choice.

The shape, mass and position are used to create a BObject, which contains the rigid body:

BObject physicsShape = new BObject(this, mass, box, pos, true);

The BObject has a member variable rigidBody which represents the rigid body. Finally, you add the body to the physics world so it will be simulated:

physics.addBody(physicsShape);

At the end of the array there is no next keyframe. Finally, we are using the break statement to break out of the loop when we have found the right keyframe

Because of the way loops work, if we never break out of the loop currentKeyframe will be set to the last index of the array, which is what we want, because it means that the current time is after the last keyframe.

Character animation is normally divided into body animation and facial animation, each of which uses different techniques. Within each there are a number of important topics:

Body animation:
• skeletal animation
• skinning
• motion capture

Facial animation:
• morph targets
• facial bones
• facial motion capture.

Motion capture can also be used for facial movement. Again the movement of an actor can be captured and applied to a character in a similar way to body motion capture

Body motion capture relies on putting markers on the body and recording their movement, but these markers are generally too bulky to use on the face. The only real option is to use optical methods.

BRigid has a number of primitive collision shapes:

• Boxes (BBox)
• Spheres (BSphere)
• Planes (BPlane)

This is desirable, because high dynamic range display devices are being developed that will allow this data to be displayed directly.

By capturing and storing as much of the real scene as possible, and only reducing the data to a displayable form just before display, the image becomes future-proof. HDR images store a depiction of the scene in a range of intensities commensurate with the real-world scene. These images may be rendered images or photographs

Compound shapes

Compound shapes. If an object cannot be represented as a single primitive object, it may be possible to represent it as a number of primitive objects joined together: a compound shape.

Computer animation

Computer animation (both 2D and 3D) is quite a lot like Stop Motion Animation.

HDR capture and storage

Current state-of-the-art image capturing techniques allow much of the luminance values to be recorded in high dynamic range (HDR) images

In animation and graphics software, 'layer' refers to the different levels on which you place your drawings, animations, and objects. The layers are stacked one on top of another.

Each layer contains its own graphics or effects, which can be worked on and changed independently of the other layers.

Morph targets are one of the most popular methods. Each facial expression is represented by a separate mesh.

Each of these meshes must have the same number of vertices as the original mesh but with different positions.

New facial expressions are created from these base expressions (called Morph targets) by smoothly blending between them

Each target is given a weight between 0 and 1 and a weighted sum is performed on all of the vertices in all of the targets to get the output mesh:

vᵢ = Σ_{t ∈ morph targets} w_t v_{t,i}, where Σ_t w_t = 1

The mesh is handled on a vertex by vertex basis.

Each vertex can be associated with more than one bone. The effect on each vertex is a combination of the transformations of the different bones. The effect of a bone on a vertex is specified by a weight, a number between 0 and 1. All weights on a vertex sum to 1.

Joints are generally represented as full three-degree-of-freedom rotations, but human joints do not have that full range of movement.

Either you build rotation limits into the animation system or you can rely on the methods generating joint angles to give reasonable values (as motion capture normally will)

Engines

Engines are very important: programming high-quality simulations is extremely difficult, so engines make simulation available much more widely.

Rotation in bones and skeletal animation

First choose a position on a bone (the end point). This position is rotated by the rotation of the joint above the bone. Translate by the length (offset) of the parent bone and then rotate by its joint. Go up to its parent and iterate until you get to the root. Finally, rotate and translate by the root position.
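The iteration described above can be sketched for a 2D chain, where each joint's rotation is accumulated from the root down. Class and parameter names are illustrative, not from the guide:

```java
public class FK2D {
    // Returns the world-space (x, y) of the end of the last bone.
    // rootX/rootY: root translation; angles[i]: rotation of joint i;
    // lengths[i]: length of bone i.
    public static double[] endEffector(double rootX, double rootY,
                                       double[] angles, double[] lengths) {
        double x = rootX, y = rootY, totalAngle = 0;
        for (int i = 0; i < angles.length; i++) {
            totalAngle += angles[i]; // each joint rotates everything below it
            x += lengths[i] * Math.cos(totalAngle);
            y += lengths[i] * Math.sin(totalAngle);
        }
        return new double[]{x, y};
    }
}
```

For example, two bones of length 1 with joint angles 0 and 90 degrees place the end effector at (1, 1).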

Lightness constancy is the term used to describe the phenomenon whereby a surface appears to look the same regardless of any differences in the illumination.

For example, white paper with black text maintains its appearance when viewed indoors in a dark environment or outdoors in bright sunlight, even if the black ink on a page viewed outdoors actually reflects more light than the white paper viewed indoors

How are images arranged in order to make an animation?

For the purpose of creating animation these images are arranged in a 'time line'. In traditional animation this is a set of images with frame numbers drawn by the side

Inverse kinematics is a way of doing this automatically so that you can animate in terms of hand and foot positions rather than joint angles

Given a desired position for a part of the body (the end effector), inverse kinematics is the process of calculating the required joint angles to achieve that position (in the above diagram, given Pt, IK will calculate R0 and R1).

The following sections describe a number of different types of force

Gravity is a force that acts to pull objects towards the ground. It is proportional to the mass of an object. The mass term in gravitational force cancels out the mass term in Newton's second law of motion (Equation 9.1), so as to produce a constant downward acceleration. Gravity is typically a global parameter of the physics world.
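A minimal sketch of this mass cancellation, assuming the usual g = 9.81 m/s² (class and method names are illustrative):

```java
public class Gravity {
    static final double G = 9.81; // downward acceleration, m/s^2

    // Gravitational force on an object: f = m * g.
    public static double force(double mass) {
        return mass * G;
    }
    // Newton's second law rearranged: a = f / m.
    // The mass cancels, so every object accelerates at g.
    public static double acceleration(double mass) {
        return force(mass) / mass;
    }
}
```

A 1 kg object and a 100 kg object experience very different forces but the same acceleration.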

This results in the loss of detail in bright or dark areas of a picture, depending on whether the camera had a low or high exposure setting.

HDR compensates for this loss of detail by taking multiple pictures at different exposure levels and intelligently stitching them together to produce a picture that is representative in both dark and bright areas. Figure 10.4 demonstrates how varying levels of exposure reveal different details. By combining the various exposure levels and tone mapping them, a better overall image can be achieved.

OpenEXR (.exr, 48 bits per pixel): high colour precision at the expense of some dynamic range; can be compressed.

Comparing these equations we can see that our code corresponds to the first two terms of the Taylor series. This means that it is a valid approximation, because for small (δt) the later terms of the Taylor series become smaller

However, it is just an approximation, and it is only a valid approximation for small values of δt. That means it can lead to noticeable errors if the rate at which we update the simulation is slow compared to our objects' velocities or accelerations. More accurate simulations can be created by including higher order derivatives of the function and through other sophisticated techniques.
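A self-contained sketch of this effect: the velocity-then-position update scheme used above is run with two different timesteps against the analytic solution ½at² for constant acceleration, and the error shrinks as δt shrinks. Names are illustrative:

```java
public class EulerDemo {
    // Simulate position under constant acceleration a for total time T,
    // starting at rest at position 0, using the simple update scheme:
    // velocity += a * dt; position += velocity * dt.
    public static double simulate(double a, double T, double dt) {
        double pos = 0, vel = 0;
        int steps = (int) Math.round(T / dt);
        for (int i = 0; i < steps; i++) {
            vel += a * dt;
            pos += vel * dt;
        }
        return pos;
    }
    // The exact answer for constant acceleration from rest: 0.5 * a * T^2.
    public static double analytic(double a, double T) {
        return 0.5 * a * T * T;
    }
}
```

With a = 10 and T = 1 the analytic position is 5; a timestep of 0.1 gives 5.5 while a timestep of 0.01 gives 5.05, so a tenfold smaller δt gives roughly a tenfold smaller error.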

CCD is very general and powerful; it can work for any number and combination of bones.

However, there are problems. It does not know anything about the human body. It can put you in unrealistic or impossible configurations (for example, elbow bent backwards). To avoid this we need to introduce joint constraints. CCD makes this easy; you constrain the joints after each step.
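A minimal 2D CCD sketch under these assumptions: a planar chain rooted at the origin, with no joint limits (the comment marks where constraints would be applied after each step, as the text suggests). Names are illustrative:

```java
public class CCD {
    // Repeatedly rotate each joint, from the tip towards the root, so that
    // the end effector points at the target - the basic CCD step.
    // angles[i]: joint angles (modified in place); lengths[i]: bone lengths.
    public static void solve(double[] angles, double[] lengths,
                             double tx, double ty, int iterations) {
        for (int it = 0; it < iterations; it++) {
            for (int j = angles.length - 1; j >= 0; j--) {
                double[] jointPos = position(angles, lengths, j);
                double[] endPos = position(angles, lengths, angles.length);
                double toEnd = Math.atan2(endPos[1] - jointPos[1],
                                          endPos[0] - jointPos[0]);
                double toTarget = Math.atan2(ty - jointPos[1],
                                             tx - jointPos[0]);
                angles[j] += toTarget - toEnd;
                // (joint limits would be enforced here, after each step)
            }
        }
    }
    // World position of joint j (or the end effector if j == angles.length),
    // computed by forward kinematics from the root.
    public static double[] position(double[] angles, double[] lengths, int j) {
        double x = 0, y = 0, total = 0;
        for (int i = 0; i < j; i++) {
            total += angles[i];
            x += lengths[i] * Math.cos(total);
            y += lengths[i] * Math.sin(total);
        }
        return new double[]{x, y};
    }
}
```

For a two-bone chain of unit lengths the reachable target (1, 1) is found quickly.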

If the skeleton is in the bind pose, the mesh should be in its default location.

If the bind pose is not zero rotation you need to apply the inverse bind pose to the current pose (conceptually, 'subtracting' the bind pose).

Creating a rigid body

In BRigid, creating a rigid body involves a number of steps. Firstly, you need to create a shape:

box = new BBox(this, 1, 50, 50, 50);

Drag is a force that models air resistance, which slows down moving objects. It is proportional to the speed of an object and in the opposite direction so it will always act to reduce the speed.

In BRigid, drag is included in a general damping parameter which includes all damping forces; there is both a linear damping which reduces linear (positional) velocity and an angular damping which reduces angular (rotational) velocity. The damping applied to an object can be set using the setDamping method of a RigidBody:

body.rigidBody.setDamping(linearDamping, angularDamping);

This requires significant effort to achieve, and one of the key properties of this problem is that the overall performance of a photorealistic rendering system is only as good as its worst component.

In the field of computer graphics, the actual image synthesis algorithms - from scanline techniques to global illumination methods - are constantly being reviewed and improved, but weaknesses elsewhere in the system can make any improvements in the underlying rendering algorithm insignificant.

Motion capture can give you potentially very realistic motion but is often ruined by noise, bad handling, etc

It can also be very tricky to work with. Whether you choose motion capture or hand animation depends on what you want out of your animation: a computer graphics character that is to be inserted into a live action film is likely to need the realism of motion capture, while a children's animation might require the more stylised movement of hand animation.

Hand animation tends to be very expressive but has a less realistic more cartoon-like style.

It is easy to get exactly what you want with hand animated data and to tweak it to your requirements.

These need to have an exactly identical structure to each other and to the base shape. That means they need to have exactly the same number of child shapes and each child shape must have exactly the same number of vertices

It is generally a good idea to start with one basic shape (morph[0]) and edit it to create the other morphs. The same shape can be initially loaded into base and morph[0] (but they must be loaded separately, not simply two variables pointing to the same shape otherwise editing base will also change morph[0])

An image displayed on a standard LCD screen is greatly restricted in terms of tonality, perhaps achieving a contrast ratio of at most 1000:1.

It is therefore necessary that the image be altered in some way, usually through some form of scaling, to fit a display device that is only capable of outputting a low dynamic range.

The most basic form of animation is the flip book.

It presents a sequence of images in quick succession, each of which is a page of the book

What does a joint represent?

Joints are represented as transforms

A timeline would essentially be an array of these keyframe objects

Keyframe [] timeline;

Motion capture

Keyframe animation is based on sets of movement data which can come from one of two sources: hand animation or motion capture.

The computer does the inbetweening automatically

Keyframes are a tuple consisting of the time at which the keyframe occurs and the value of the transforms. These will be set up in a timeline, which is just an array of keyframes
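A minimal Keyframe class matching this description might look as follows. The guide's examples store a PVector position; plain x/y fields are used here as an assumption to keep the sketch self-contained:

```java
public class Keyframe {
    public float time; // when this keyframe occurs, in seconds
    public float x, y; // the transform value at that time (here, a 2D position)

    public Keyframe(float time, float x, float y) {
        this.time = time;
        this.x = x;
        this.y = y;
    }
}
```

A timeline is then just an array of these, e.g. Keyframe[] timeline, added in time order.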

Keyframing in animation

Keyframing can reduce this effort even more. The animator only needs to define the 'key frames' of a movement (which will be values for transforms)

The dynamic range in the real world is far greater than the range that can be produced on most electronic displays

Luminance - the intensity of light coming from a surface - is measured in candelas per square metre (cd/m2). The human visual system can accommodate a wide range of luminance in a single view, a contrast ratio of around 10 000:1, and our eyes adapt to our surroundings, changing what we see over time.

There are a number of approaches to performing inverse kinematics

• Matrix (Jacobian) methods
• Cyclic Coordinate Descent (CCD)
• Specific analytic methods for the human body

Using morph targets

Morph targets are a good low-level animation technique. To use them effectively we need ways of choosing morph targets. We could let the animator choose (nothing wrong with that) but there are also more principled ways.

This is very useful as it means you only have one method in your animation code (one shader).

Morphs are very convenient from an animator point of view, but bones are easier in the engine

Keyframes and finding the current frame

Note that we are adding keyframes in the correct time order. We will use the ordering later when we have to find the current keyframe

Simple primitives.

Often objects are represented as simple primitive shapes for which simple collision equations are defined; for example, boxes or spheres. These are typically only rough approximations of the appearance of the object but are very efficient and are often close enough that the differences in an object's movement are not noticeable.
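As an illustration of why primitives are so cheap, a sphere-sphere test needs only a distance comparison. This is a sketch, not BRigid's internal code:

```java
public class SphereCollision {
    // Two spheres intersect when the distance between their centres is
    // less than the sum of their radii. Comparing squared distances
    // avoids the square root entirely.
    public static boolean intersects(double x1, double y1, double z1, double r1,
                                     double x2, double y2, double z2, double r2) {
        double dx = x2 - x1, dy = y2 - y1, dz = z2 - z1;
        double sum = r1 + r2;
        return dx * dx + dy * dy + dz * dz < sum * sum;
    }
}
```

A mesh-versus-mesh test, by contrast, may need to consider every pair of polygons.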

While research into ways of rendering images provides us with better and faster methods, we do not necessarily see their full effect due to limitations of the display hardware. To ensure that the scene as it was created closely resembles the scene as it is displayed, it is necessary to be aware of any factors that might adversely influence the display medium.

One major problem is that computer screens are limited in the range of luminance they can display. Most are not yet capable of producing anywhere near the range of light in the real world. This means the realistic images we have carefully created are not being properly displayed

Collision shape

One of the most important properties of a rigid body is its shape. From the point of view of a physics engine the shape controls how it collides and interacts with other objects

Is this a valid thing to do? The equations given above are continuous relationships where position and velocity are varying continuously over time. In the code shown above time is split into discrete frames and velocities and positions are updated only at those time steps. Will this introduce errors?

One way of looking at this is through the Taylor series. This states that a small change (δt) in a function can be represented as an infinite series like this:

y(t + δt) = y(t) + δt (dy/dt)(t) + ((δt)²/2!) (d²y/dt²)(t) + …

The simplest approach is to interpolate them in straight lines between the keyframes (Figure 8.1(b)). The position is interpolated linearly between keyframes using the following equation:

P(t) = s P(tk) + (1 − s) P(tk−1), where s = (t − tk−1)/(tk − tk−1)
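The equation can be sketched as a small function for a single coordinate (class and parameter names are illustrative):

```java
public class Lerp {
    // Linear interpolation between the keyframe value p1 (at time t1)
    // and p2 (at time t2), evaluated at time t, with t1 <= t <= t2.
    public static float interpolate(float p1, float t1,
                                    float p2, float t2, float t) {
        float s = (t - t1) / (t2 - t1); // normalised time in [0, 1]
        return (1 - s) * p1 + s * p2;
    }
}
```

At t = t1 this returns p1, at t = t2 it returns p2, and halfway through it returns the midpoint.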

The formula for a Hermite curve is:

P(t) = (−2s³ + 3s²)P(tk) + (s³ − s²)T(tk) + (2s³ − 3s² + 1)P(tk−1) + (s³ − 2s² + s)T(tk−1)

where s = (t − tk−1)/(tk − tk−1) is the normalised time between the two keyframes.
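A sketch of evaluating this formula for a single coordinate, with p0/t0 the position and tangent at tk−1 and p1/t1 those at tk (names are illustrative):

```java
public class Hermite {
    // Evaluate the Hermite curve at normalised parameter s in [0, 1].
    // p0, t0: start point and tangent; p1, t1: end point and tangent.
    public static double eval(double s, double p0, double t0,
                              double p1, double t1) {
        double s2 = s * s, s3 = s2 * s;
        return (2 * s3 - 3 * s2 + 1) * p0   // start point basis
             + (s3 - 2 * s2 + s) * t0       // start tangent basis
             + (-2 * s3 + 3 * s2) * p1      // end point basis
             + (s3 - s2) * t1;              // end tangent basis
    }
}
```

At s = 0 the curve passes through p0 and at s = 1 through p1, whatever the tangents are, which is exactly the interpolation property we need for keyframes.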

We also need an array of vertex shapes to represent the morph targets:

PShape [] morphs;

To implement morph targets we need a vertex shape that we want to animate

PShape base;

The simplest way to update velocity and position is simply to add on the current acceleration, like this:

PVector acceleration = new PVector(0, 0, 0);
for (int i = 0; i < forces.length; i++) {
  acceleration.add(forces[i].calculate());
}
acceleration.div(mass);
velocity.add(PVector.mult(acceleration, deltaTime));
position.add(PVector.mult(velocity, deltaTime));

To play an animation back effectively we need to be able to find the current keyframe based on time. We can use the millis command in Processing to get the current time.

PVector pos = timeline[0].position;
pushMatrix();
translate(pos.x, pos.y);
ellipse(0, 0, 20, 20);
popMatrix();

Once we have found the current keyframe we can use it to get a position:

PVector pos = timeline[currentKeyframe].position;

Interpolating keyframes

PVector pos;
// first we check whether we have reached the last keyframe
if (currentKeyframe == timeline.length - 1) {
  // if we have reached the last keyframe,
  // use that keyframe as the position (no interpolation)
  pos = timeline[currentKeyframe].position;
} else {
  // This part does interpolation for all keyframes before the last one.
  // Get the position and time of the keyframe before
  // and after the current time.
  PVector p1 = timeline[currentKeyframe].position;
  PVector p2 = timeline[currentKeyframe + 1].position;
  float t1 = timeline[currentKeyframe].time;
  float t2 = timeline[currentKeyframe + 1].time;
  // multiply each position by the interpolation factors
  // as given in the linear interpolation equation
  p1 = PVector.mult(p1, 1.0 - (t - t1) / (t2 - t1));
  p2 = PVector.mult(p2, (t - t1) / (t2 - t1));
  // add the results together to get the interpolated position
  pos = PVector.add(p1, p2);
}

The most visible elements of a simulation are the objects that move and interact with each other. There are a number of different types of objects:

• Particles
• Rigid bodies
• Compound bodies
• Soft bodies and cloth

Particle objects

Particles are the simplest type of object. They have a position, velocity and mass but zero size and no shape (at least from the point of view of the simulation). They can move, but they do not have a rotation. They are typically used for very small objects.

Visual perception: an overview

Perception is the process that enables humans to make sense of the stimuli that surround them

Related effects

Replication of visual effects that are related to the area of tone reproduction include the modelling of glare. Psychophysically-based algorithms have been produced that will add glare to digital images, simulating the flare and bloom seen around very bright objects. Psychophysical tests have demonstrated that these effects increase the apparent brightness of a light source in an image. While highly effective, glare simulation is computationally expensive.

Soft bodies and cloth are much more complex, as they can change their shape as well as moving. Many modern physics engines are starting to include soft as well as rigid bodies, but they are out of the scope of this subject guide.

Meshes. If the shape of an object is too complex to represent out of primitive objects it is possible to represent its physics shape as a polygon mesh, in the same way as a graphics object

Simulating meshes is much more expensive than simulating primitives, so the meshes must be simple. They are usually a different, and much lower resolution, mesh from the one used to render the graphics. They are typically created by starting with the graphics mesh and greatly reducing the number of polygons.

There are two types of friction: static and dynamic friction.

• Static friction
• Dynamic friction

In the previous formula the weights have to sum to 1. If they do not, the size of the overall mesh will increase (if the weights sum to more than 1) or decrease (if they sum to less than 1).

Subtracting a neutral mesh from all the targets allows us to lift the restriction because we are adding together differences not absolute positions. This can allow more extreme versions of a target or the use of two complete morph targets simultaneously (for example, a smile together with closed eyes).
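A sketch of this delta formulation for a flat array of vertex coordinates (names are illustrative; in this guide's Processing code it would operate on PShape vertices instead):

```java
public class DeltaMorph {
    // Blend morph targets as offsets from a neutral mesh:
    // out = neutral + sum over targets of w_t * (target_t - neutral).
    // Because we add differences, the weights no longer need to sum to 1.
    public static double[] blend(double[] neutral, double[][] targets,
                                 double[] weights) {
        double[] out = neutral.clone();
        for (int t = 0; t < targets.length; t++) {
            for (int v = 0; v < out.length; v++) {
                out[v] += weights[t] * (targets[t][v] - neutral[v]);
            }
        }
        return out;
    }
}
```

With two targets both at full weight 1 (say, a smile and closed eyes) the two differences simply add to the neutral face, which the weighted-absolute-sum formulation cannot do.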

Radiance RGBE (.hdr, 32 bits per pixel): superlative dynamic range; sacrifices some colour precision but results in a smaller file size.

To a first approximation this is the motion of rigid bones linked by rotational joints (the terms joints and bones are often used interchangeably in animation; although, of course, they mean something different, the same data structure is normally used to represent both).

This distance is divided by the time between the previous and next keyframe in order to get the correct speed of movement:

T(tk) = (P(tk+1) − P(tk−1)) / (tk+1 − tk−1)
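As a small function for a single coordinate (names are illustrative):

```java
public class Tangent {
    // Tangent (velocity) at a keyframe, from its neighbours:
    // the difference between the next and previous positions, divided
    // by the time between the previous and next keyframes.
    public static double at(double pPrev, double tPrev,
                            double pNext, double tNext) {
        return (pNext - pPrev) / (tNext - tPrev);
    }
}
```

For example, moving from position 0 at time 0 to position 10 at time 2 gives a tangent (speed) of 5 at the keyframe in between.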

The vertices are transformed individually by their associated bones. The resulting position is a weighted sum of the individual joint transforms.

T(vᵢ) = Σ_{j ∈ joints} w_{ij} R_j(vᵢ)
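A sketch of this weighted sum for one vertex in 2D, with each joint transform simplified to a rotation about the origin plus a translation. This simplification, and all names, are illustrative assumptions; a full skeleton would use the hierarchical joint transforms described earlier:

```java
public class Skinning {
    // Linear blend skinning for one vertex (vx, vy): each joint j
    // contributes its transformed copy of the vertex, weighted by w[j].
    // The weights are assumed to sum to 1.
    public static double[] skin(double vx, double vy, double[] angle,
                                double[] tx, double[] ty, double[] w) {
        double x = 0, y = 0;
        for (int j = 0; j < w.length; j++) {
            // rotate the vertex by the joint's angle, then translate
            double rx = vx * Math.cos(angle[j]) - vy * Math.sin(angle[j]) + tx[j];
            double ry = vx * Math.sin(angle[j]) + vy * Math.cos(angle[j]) + ty[j];
            x += w[j] * rx; // accumulate the weighted contribution
            y += w[j] * ry;
        }
        return new double[]{x, y};
    }
}
```

A vertex weighted half-and-half between two joints ends up midway between the two transformed positions, which is what makes the mesh deform smoothly around a joint.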

This is quite a complex loop with a number of unusual features. Firstly, we are defining the loop variable currentKeyframe outside the loop

That is because we want to use it later in the program. Secondly, we are not going to the end of the array but to the position before the end. This is because we are checking both the current keyframe and the next keyframe.

Lightness and colour constancy

The ability to judge a surface's reflectance properties despite any changes in illumination is known as colour constancy

A force acts on an object to create a change of movement (acceleration).

The acceleration depends both on the force and on the mass of the object. Force and acceleration are written in bold because they are both vectors having both a magnitude and a direction


The animation can be jerky, as the object changes direction of movement rapidly at each keyframe

Circles are rotational joints, lines are rigid links (bones). Joints are represented as rotations

The black circle is the root, the position and rotation offset from the origin. The root is (normally) the only element of the skeleton that has a translation. The character is animated by rotating joints and translating and rotating the root

Skinning is well suited to implementing in a shader

The bone weights are passed in as vertex attributes and an array of joint transforms is passed in as a uniform variable.

This happens when two objects are in contact and are already moving relative to each other. It acts against the relative velocity of the objects and is in the opposite direction to that velocity, so it tends to slow the objects down (like drag). It is proportional to the velocity, the contact reaction force and a coefficient of friction

The coefficients of friction are different in the two cases. Both depend on the two materials involved in a complex way. Most physics engines do not deal with these complexities; each object has only a coefficient of friction. The coefficients of the two objects are multiplied to get the coefficient of their interaction. Some physics engines have separate parameters for static and dynamic friction coefficients, but BRigid has only one, which can be set using the setFriction method:

body.rigidBody.setFriction(frictionCoefficient);
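A sketch of these two rules. The speed-proportional model follows this guide's description of dynamic friction, and all names are illustrative:

```java
public class Friction {
    // Combined coefficient as most engines compute it: the product
    // of the two bodies' individual coefficients.
    public static double combined(double muA, double muB) {
        return muA * muB;
    }
    // Dynamic friction magnitude in this guide's model: proportional to
    // the coefficient, the contact (normal) force and the relative speed.
    public static double dynamicFriction(double mu, double normalForce,
                                         double relSpeed) {
        return mu * normalForce * relSpeed;
    }
}
```

Two slippery objects (small coefficients) give a very small product, so almost no friction acts between them.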

Friction is a force that acts on two bodies that are in contact with each other. Friction depends on the surface texture of the objects involved. Slippery surfaces like ice have very low friction, while rough surfaces like sandpaper have very high friction

The differences between these surfaces is represented by a number called the coefficient of friction. Friction also depends on the contact force between two objects; that is, the force that is keeping them together. For one object lying on top of another this contact force would be the gravity acting on the top object as shown in Figure 9.5; this is why heavy objects have more friction than light objects

Facial animation

The face does not have a common underlying structure like a skeleton. Faces are generally animated as meshes of vertices, either by moving individual vertices or by using a number of types of rig.

Impulse

The forces listed above are all typically included as standard in a physics engine, but sometimes you will need to apply a force that is not included. Most physics engines allow you to apply a force directly to an object through code. This will take the form either of a constant force that acts over time (such as gravity); or of an impulse which is a force that happens at one instant of time and then stops (such as a collision)

human body motion

The fundamental aspect of human body motion is the motion of the skeleton

The head animator

The head animator for a particular character draws the most important frames (keyframes). An assistant draws the in-between frames (inbetweening).

The human visual system

The human visual system receives and processes electromagnetic energy in the form of light waves reaching the eye. This starts with the path of light through the pupil (Figure 10.1)

Facial bones essentially use the same mechanisms as skeletal animation and skinning

The main difference is that facial bones do not correspond to any real structures. Facial bones are normally animated by translation rather than rotation, as this is easier.

The deformation of a human body does not just depend on the motion of the skeleton

The movement of muscle and fat also affects the appearance. These soft tissues need different techniques from rigid bones. More advanced character animation systems use multiple layers to model the skeleton, muscle and fat.

The example above implements keyframes but the animation is not at all smooth

The object instantly jumps from one keyframe position to the next, rather than gradually moving between the keyframes

Stop motion animation is a very different process. It involves taking many still photographs of real objects instead of drawing images.

The object is moved very slightly after each photograph to give the appearance of movement. More work is put into creating characters: they can have a lot of detail, and character creators spend a lot of effort making them easy to move.

Gamut mapping

The term gamut is used to indicate the range of colours that the human visual system can detect, or that display devices can reproduce.

High dynamic range imaging

The ultimate aim of realistic graphics is the creation of images that provoke the same responses that a viewer would have to a real scene

Typical use of a physics engine consists primarily of setting up the elements of a simulation and then letting it run, perhaps with some elements of interaction.

There are many good physics engines available; we will use an engine for Processing, called BRigid which is based on the jBullet engine, which is itself based on the C++ engine Bullet.

Paul Ekman invented a system of classifying facial expressions called Facial Action Parameters (FAPs), which is used by psychologists to observe expressions in people. It consists of a number of parameters, each representing a minimal degree of freedom of the face.

These parameters can be used for animation. FAPs really correspond to the underlying muscles, so they are basically a standardised muscle model. Again they can be implemented as morph targets or bones.

Markerless optical

These systems use advanced computer vision techniques to track the body without needing to attach markers. They have the potential to provide an ideal tracking environment that is completely unconstrained.

Most of the body capture systems described above are too bulky to use on the face. The only real option is to use optical methods. Markerless systems tend to work better for facial capture, as there are fewer problems of occlusion and there are more obvious features on the face (eyes, mouth, nose).

They do, however, have problems if you move or rotate your head too much.

Rigid bodies are slightly more complex.

They have a size and shape. They can move around and rotate but they cannot change their shape or deform in any way; they are rigid. This makes them relatively easy to simulate and means that they are the most commonly used type of object in most physics engines.

Static friction

This happens when two objects are in contact with each other but not moving relative to each other. It acts to stop objects starting to move across each other and is equal and opposite to the forces parallel to the plane of contact of the objects

Once we have all of these in place we can modify the base shape, by iterating through all the vertices and calculating a new vertex position

This implementation assumes that the shape is composed of a number of child shapes (this is often the case when a shape is loaded from an obj file).

Magnetic

This involves putting magnetic transmitters on the body. The positions of these transmitters are tracked by a base station. These methods are very accurate but expensive for large numbers of markers. The markers also tend to be relatively heavy. They are generally used for tracking small numbers of body parts rather than whole body capture.

Mechanical

This involves putting strain gauges or mechanical sensors on the body. These are self-contained and do not require cameras or a base station, making them less restrictive about where capture can take place.

Cyclic Coordinate Descent

This is an iterative geometric method. You start with the final link and rotate it towards the target.

An object typically has a different shape representation for physics than it has for graphics.

This is because physics shapes need to be fairly simple so that the collision calculations can be done efficiently; while graphics shapes are typically much more complex so the object looks good.

Optical

This is the most commonly used system in the film and games industries. Coloured or reflective balls (markers) are put on the body and the positions of these balls are tracked by multiple cameras.

We need to add a graphical 'skin' around the character. The simplest way is to make each bone a transform and hang a separate piece of geometry off each bone

This works but the body is broken up (how most games worked 15 years ago). We want to represent a character as a single smooth mesh (a 'skin'). This should deform smoothly based on the motion of the skeleton.

Tone mapping

To ensure a true representation of tonal values, some form of scaling or mapping is required to convey the range of a light in a real-world scene on a display with limited capabilities.

What are the 5 types of animation?

Traditional animation (2D, cel, hand-drawn); 2D animation (vector-based); 3D animation (CGI, computer animation); motion graphics (typography, animated logos); stop motion (claymation, cut-outs).

Floating point TIFF/PSD (.tiff, .psd), 96 bits per pixel:

Very accurate with a large dynamic range, but results in huge file sizes and wasted internal data space.

Collision

When two objects collide they produce forces on each other to stop them penetrating each other and to separate them.

P(t) = t·P(tk) + (1 − t)·P(tk−1)

Where t is the time parameter. The equation interpolates between keyframe P(tk−1) and keyframe P(tk) as t goes from 0 to 1. This simple equation assumes that the keyframe times are 0 and 1. The following equation takes different keyframe times and normalises them so that they lie between 0 and 1.
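Putting the two equations together gives a small helper that, given two keyframes at arbitrary times, first remaps the current time into [0, 1] and then blends the positions. This is a minimal sketch using one coordinate; in Processing you would apply it to each component of a PVector.

```java
// Linear keyframe interpolation with time normalisation, matching the
// equations above: s is time remapped to [0,1] between the keyframes,
// and positions are blended with weights (1 - s) and s.
public class KeyframeLerp {
    public static float normalise(float t, float tPrev, float tNext) {
        return (t - tPrev) / (tNext - tPrev);
    }

    public static float lerp(float pPrev, float pNext, float s) {
        return s * pNext + (1 - s) * pPrev;
    }

    public static void main(String[] args) {
        // Keyframes: position 0 at t = 2 s, position 100 at t = 4 s.
        float s = normalise(3.0f, 2.0f, 4.0f); // 0.5: half way between them
        System.out.println(lerp(0f, 100f, s)); // 50.0
    }
}
```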

Colour reproduction

While the previous section deals with the range of image intensities that can be displayed, devices are also limited in the range of colours that may be shown. Tone mapping compresses luminance values rather than colour values.

What benefits does layering offer to animation?

You only have to animate the bits that move. Next time you watch an animation, notice that the background is always more detailed than the characters. Asian animation often uses camera pans across static images.

You put a lot of effort into creating a (virtual) model of a character and then when animating it you move it frame by frame.

You will spend a lot of time making easy-to-use controls for a character, a process called rigging

The acceleration is the rate of change of velocity (v), in mathematical terms:

a = dv/dt

keyframe in animation and filmmaking

a drawing that defines the starting and ending points of any smooth transition. The drawings are called "frames" because their position in time is measured in frames on a strip of film

Chromatic colour constancy extends this to colour:

a plant seems as green when it is outside in the sun as it does if it is taken indoors under artificial light

Re-arranging the equation, we can see that the total acceleration of an object is the sum of all the forces acting on the object divided by its mass:

a = (1/m) ΣF

For basic objects these will just be transforms like translations and rotations, but human characters will have complex skeletons

analogous to the metal skeletons Aardman Animations use (more on this later). Once you have completed this set-up effectively, animation becomes much simpler.

Now we have the timeline we can get hold of the positions at a certain key frame

and use them to translate our object. This is an example of how to get keyframe 0:

If the restitution is 1, the objects will bounce back with the same speed, and if it is 0 they will remain stuck together. The restitution is a combined property of the two objects. In most physics engines each object has its own restitution and they are combined to get the restitution of the collision. In BRigid you can set the restitution on a rigid body:

body.rigidBody.setRestitution(restitutionCoefficient);

A physics engine will typically handle all collisions without you needing to write any code for them. However, it is often useful to be able to tell when a collision has happened; for example, to increase a score when a ball hits a goal or to take damage when a weapon hits an enemy. Code example 9.3 gives an example of how to detect collisions in BRigid.
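The effect of the coefficient of restitution can be sketched with a 1D head-on collision: momentum is conserved, and the relative velocity after the collision is −e times the relative velocity before it. This is a standalone illustration of the physics, not BRigid code.

```java
// Sketch of a 1D collision resolved with a coefficient of restitution e.
// Momentum is conserved; the relative separation speed is e times the
// relative approach speed.
public class Restitution {
    // Returns the post-collision velocities {v1', v2'} of the two bodies.
    public static float[] resolve(float m1, float v1, float m2, float v2, float e) {
        float v1p = (m1 * v1 + m2 * v2 + m2 * e * (v2 - v1)) / (m1 + m2);
        float v2p = (m1 * v1 + m2 * v2 + m1 * e * (v1 - v2)) / (m1 + m2);
        return new float[] { v1p, v2p };
    }

    public static void main(String[] args) {
        // Equal masses, e = 1: the bodies swap velocities (perfect bounce).
        float[] v = Restitution.resolve(1f, 5f, 1f, -5f, 1f);
        System.out.println(v[0] + " " + v[1]); // -5.0 5.0
        // e = 0: they stick together and share the momentum.
        v = Restitution.resolve(1f, 5f, 1f, -5f, 0f);
        System.out.println(v[0] + " " + v[1]); // 0.0 0.0
    }
}
```

Note that in both cases the total momentum (here zero) is unchanged; e only controls how much kinetic energy survives the collision.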

However, it involves very difficult computer vision techniques. The Microsoft Kinect has recently made markerless motion capture more feasible by using a depth camera,

but it is still less reliable and accurate than marker based capture. In particular, the Kinect only tends to be able to capture a relatively constrained range of movements (reasonably front on and standing up or sitting down).

A character is generally rigged with the skeleton in a default pose

called the bind pose, but not necessarily zero rotation on each bone.

traditional animation

(also called classical animation, cel animation or hand-drawn animation) is an animation technique in which each frame is drawn by hand on a physical medium

The position of a bone is calculated by

concatenating rotations and offsets; this process is called forward kinematics (FK)
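A minimal 2D sketch of forward kinematics: walking down the chain, each joint's rotation is accumulated (concatenated) and each bone's length is applied as an offset in the current direction. This is an illustration of the idea, not a full 3D transform hierarchy.

```java
// 2D forward kinematics: the end-effector position is found by
// concatenating the rotations and offsets (bone lengths) down the chain.
public class ForwardKinematics {
    // angles[i] is the rotation of bone i relative to its parent;
    // lengths[i] is that bone's length. Returns the end-effector position.
    public static float[] endEffector(float[] angles, float[] lengths) {
        float x = 0, y = 0, total = 0;
        for (int i = 0; i < angles.length; i++) {
            total += angles[i];                // concatenate rotations
            x += lengths[i] * Math.cos(total); // then apply the bone offset
            y += lengths[i] * Math.sin(total);
        }
        return new float[] { x, y };
    }

    public static void main(String[] args) {
        // Two bones of length 1; both joints rotated 90 degrees.
        float[] p = endEffector(
            new float[] { (float) Math.PI / 2, (float) Math.PI / 2 },
            new float[] { 1, 1 });
        System.out.println(p[0] + " " + p[1]); // approx (-1, 1)
    }
}
```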

dynamic range

Due to the limitations of current technology, this is rarely the case. The ratio between the darkest and the lightest values in a scene is known as the dynamic range.

The cameras use infra-red to avoid problems of colour. Problems of occlusion (markers being blocked from the cameras by other parts of the body) are partly solved by using many cameras spread around a room

The markers themselves are lightweight and cheap, although the cameras can be expensive and require a large area.

What can be done to improve interpolation between keyframes?

To improve this we can use spline interpolation, which uses smooth curves to interpolate positions (Figure 8.1(b)).

What is more, in order to make the final animation look consistent

each character should always be drawn by the same animator. Disney and other animation houses developed techniques to make the process more efficient. Without these methods, full-length films like Snow White would not have been possible.

Physics simulation

explain the basic principles of physics simulation for computer animation; explain the role of rigid bodies, forces and constraints in a physics simulation; demonstrate the use of a physics engine to create a basic simulation; create a simulated scene using rigid bodies and manipulate the properties of those rigid bodies to create appropriate effects; demonstrate the use of forces and constraints to control the behaviour of rigid bodies.

Other approaches to facial animation There is plenty more to facial animation than morph targets, often related to body animation techniques

facial bones, muscle models, facial action parameters and facial motion capture.

We also need an array of weights:

float [] weights;

Then you need to assign a mass and initial position to the object. Positions are represented as Vector3f objects (a different representation of a vector from a Processing PVector):

float mass = 100;
Vector3f pos = new Vector3f(random(30), -150, random(1));

We can now search for the current keyframe. We need to find the keyframe just before the current time. We can do that by finding the position in the timeline where the keyframe is less than t but the next keyframe is more than t:

float t = float(millis())/1000.0; // convert time from milliseconds to seconds
int currentKeyframe;
for (currentKeyframe = 0; currentKeyframe < timeline.length-1; currentKeyframe++) {
  if (timeline[currentKeyframe].time < t && timeline[currentKeyframe+1].time > t) break;
}
PVector pos = timeline[currentKeyframe].position;

Instead, the mapping must be specifically tailored in a non-linear manner, permitting the luminance to be compressed in an appropriate way. Algorithmic solutions, known as tone mapping operators, or tone reproduction operators

have been devised to compress certain features of an image and produce a result with a reduced dynamic range that appears plausible or appealing on a computer monitor.

In collision, momentum is conserved, so the sum of the velocities of the two objects stays the same. However, this does not tell us anything about what the two individual objects do.

They might join together and move with a velocity that is the result of combining their momentum, or they might bounce back from each other perfectly, without losing much velocity at all. What exactly happens depends on a number called the coefficient of restitution.

If you flip through the pages fast enough, the images are presented one by one and the small changes no longer seem like a sequence of individual images but a continuous movement. In film, this becomes a sequence of frames, which are also images, but they are shown automatically on a film projector.

Stop motion

is an animated-film making technique in which objects are physically manipulated in small increments between individually photographed frames so that they will appear to exhibit independent motion when the series of frames is played back as a fast sequence.

One goal of realistic computer graphics

is such that if a virtual scene is viewed under the same conditions as a corresponding real-world scene, the two images should have the same luminance levels, or tones.

Forward kinematics is a simple and powerful system, but it has drawbacks:

it can be fiddly to animate with. Making sure that a hand is in contact with an object can be difficult.

Some tone mapping operators focus on preserving aspects such as detail or brightness, some concentrate on producing a subjectively pleasing image, while others focus on providing a perceptually-accurate representation of the real-world equivalent. In addition to compressing the range of luminance

it can be used to mimic perceptual qualities, resulting in an image which provokes the same responses as someone would have when viewing the scene in the real world. For example, a tone reproduction operator may try to preserve aspects of an image such as contrast, brightness or fine detail - aspects that might be lost through compression.
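As a sketch of the simplest kind of global operator, the well-known Reinhard curve L/(1+L) compresses an unbounded luminance range into [0, 1): dark values pass through almost unchanged while very bright values saturate towards white. Real tone mapping operators are considerably more sophisticated than this.

```java
// Minimal sketch of a global tone mapping operator: the simple Reinhard
// curve L / (1 + L), which compresses HDR luminance into [0, 1).
public class ToneMap {
    public static float reinhard(float luminance) {
        return luminance / (1.0f + luminance);
    }

    public static void main(String[] args) {
        float[] scene = { 0.01f, 1.0f, 100.0f, 10000.0f }; // HDR luminances
        for (float l : scene)
            System.out.println(l + " -> " + reinhard(l));
        // Dark values are nearly unchanged; bright ones saturate near 1.
    }
}
```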

Forces

In a physics simulation, objects are affected by forces that change their movement in a number of ways. These can be forces that act on objects from the world (for example, gravity and drag); forces that act between objects (for example, collisions and friction); or forces that are triggered on objects from scripts (for example, impulses).

As shown in the image above a physics simulation consists of a World that contains a number of Objects which can have a number of Forces acting on them (including forces that are due to the interaction between objects).

In addition there can be a number of Constraints that restrict the movement of objects. Each of these elements will be covered in the following sections.

What is the benefit of keyframing in animation?

You only need a few images to get the entire feel of a sequence of animation, but you would need many more to make the final animation look smooth.

These could use geometric methods (for example, free-form deformations based on NURBS)

or simulation methods (model physical properties of fat and muscle using physics engines). Hair is also normally modelled using physics simulation.

The BPhysics object is used to simulate the world. In Processing's draw function you must update the physics object so that the simulation runs

physics.update();

In BRigid you set gravity on the physics world like this

physics.world.setGravity(new Vector3f(0, 500, 0));

Note that gravity in physics engines typically just models gravity on the surface of the earth (or another planet), where the pull of the planet is the only significant gravitational force and gravity is constant. For simulations in outer space that include multiple planets and orbits you would need to implement your own gravitational force using a custom force script and Newton's law of gravity.

As an example we could define a basic keyframe class that had keyframes on position and would look something like this:

public class Keyframe {
  PVector position;
  float time;
  public Keyframe (float t, float x, float y, float z) {
    time = t;
    position = new PVector (x, y, z);
  }
}

Then go to the next link up and rotate it so that the end effector points towards the target; you then move up to the next joint. Once you reach the top of the hierarchy you go back down to the bottom and iterate the whole procedure again until you hit the correct end effector position.
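The procedure above can be sketched in 2D: for each joint, from the last link up to the root, compare the direction from that joint to the end effector with the direction to the target, and rotate the joint by the difference; then repeat the whole pass until close enough. This is a minimal illustration with angles and lengths only, not a production IK solver.

```java
// Sketch of Cyclic Coordinate Descent in 2D: starting from the last
// link, rotate each joint so the end effector points towards the
// target, and iterate the whole pass until it converges.
public class CCD {
    float[] angles;   // joint angles relative to the parent link
    float[] lengths;  // bone lengths

    CCD(float[] angles, float[] lengths) { this.angles = angles; this.lengths = lengths; }

    // Position of joint i (i = angles.length gives the end effector).
    float[] jointPos(int i) {
        float x = 0, y = 0, total = 0;
        for (int j = 0; j < i; j++) {
            total += angles[j];
            x += lengths[j] * Math.cos(total);
            y += lengths[j] * Math.sin(total);
        }
        return new float[] { x, y };
    }

    void solve(float tx, float ty, int iterations) {
        for (int it = 0; it < iterations; it++) {
            for (int i = angles.length - 1; i >= 0; i--) {
                float[] joint = jointPos(i);
                float[] end = jointPos(angles.length);
                // Angle from this joint to the end effector and to the target.
                double toEnd = Math.atan2(end[1] - joint[1], end[0] - joint[0]);
                double toTarget = Math.atan2(ty - joint[1], tx - joint[0]);
                angles[i] += (float) (toTarget - toEnd); // rotate towards target
            }
        }
    }

    public static void main(String[] args) {
        CCD ik = new CCD(new float[] { 0.3f, 0.3f }, new float[] { 1, 1 });
        ik.solve(1.0f, 1.0f, 20);
        float[] end = ik.jointPos(2);
        System.out.println(end[0] + " " + end[1]); // close to (1, 1)
    }
}
```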

Where do we get the tangents (velocities) from? We could directly set them; they act as an extra control on the behaviour

s = (t − tk−1) / (tk − tk−1)

layering in animation

There is a background image that does not move, and you put foreground images on a transparent slide in front of it.

Skeletal animation

is a technique in computer animation in which a character (or other articulated object) is represented in two parts: a surface representation used to draw the character (called the skin or mesh) and a hierarchical set of interconnected bones (called the skeleton or rig) used to animate (pose and keyframe) the character.

A number of theories have been put forward regarding constancy. Early explanations involved adaptational theories, suggesting that the visual system adjusts in sensitivity to accommodate changes.

However, this would require a longer time than is needed for lightness constancy to occur, and adaptational mechanisms cannot account for shadow effects. Other proposed theories include unconscious inference (where the visual system 'knows' the relationship between reflectance and illumination and discounts it)

The shader transforms each vertex by each bone transform in turn and then adds together the results multiplied by the weights. In order to limit the number of vertex attributes we normally have a limit of four bones per vertex and use a vec4 to represent the bone weights. This means you also need to know which bones correspond to which weights, so you also have a second vec4 attribute specifying bone indices.
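The same weighted blend can be sketched on the CPU for a single vertex. To keep the example short the bone 'transforms' are reduced to 2D translations; a real skinning shader would use full 4x4 bone matrices.

```java
// CPU sketch of linear blend skinning for one vertex: transform the
// vertex by each bone and combine the results using the bone weights.
// Bone transforms are simplified here to 2D translations.
public class Skinning {
    // boneOffsets[i] = {dx, dy}: translation of bone i in the current pose.
    public static float[] skin(float[] vertex, float[][] boneOffsets, float[] weights) {
        float x = 0, y = 0;
        for (int i = 0; i < weights.length; i++) {
            x += weights[i] * (vertex[0] + boneOffsets[i][0]);
            y += weights[i] * (vertex[1] + boneOffsets[i][1]);
        }
        return new float[] { x, y };
    }

    public static void main(String[] args) {
        float[] v = { 1, 0 };
        float[][] bones = { { 0, 0 }, { 0, 2 } }; // bone 0 still, bone 1 moved up
        float[] weights = { 0.5f, 0.5f };         // vertex follows both equally
        float[] out = skin(v, bones, weights);
        System.out.println(out[0] + " " + out[1]); // 1.0 1.0
    }
}
```

A vertex weighted half-and-half between a still bone and a moved bone ends up half way between the two transformed positions, which is what gives a smooth skin at joints.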

Assuming that some of the colours to be displayed in an image are outside a screen's gamut, the image's colours may be remapped to bring all its colours within displayable range. This process is referred to as gamut mapping

A simple mapping would only map out-of-range colours directly inward towards the screen's triangular gamut. Such a 'colorimetric' correction produces visible artefacts. A better solution is to re-map the whole gamut of an image to the screen's gamut, thus remapping all the colours in the image. This 'perceptual' or 'photometric' correction may avoid the above artefacts, but conversely there are many different ways in which such remapping may be accomplished. As such, there is no standard way to map one gamut into another, more constrained gamut.

From this equation we can see that the basic function of a physics engine is to evaluate all of the forces on an object

sum them to calculate the acceleration and then use the acceleration to update the velocity and position.
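That basic update step can be sketched for a single 1D object: sum the forces, divide by the mass to get the acceleration, then integrate velocity and position. This sketch uses semi-implicit Euler integration (velocity first, then position), a common choice in physics engines; it is an illustration, not any particular engine's code.

```java
// The basic update step of a physics engine, sketched for one 1D body:
// sum the forces, compute a = (1/m) * sum F, then update velocity and
// position with semi-implicit Euler integration.
public class PhysicsStep {
    float mass, position, velocity;

    PhysicsStep(float mass) { this.mass = mass; }

    void update(float[] forces, float dt) {
        float total = 0;
        for (float f : forces) total += f; // sum all the forces
        float a = total / mass;            // a = (1/m) * sum of forces
        velocity += a * dt;                // update velocity first...
        position += velocity * dt;         // ...then position from velocity
    }

    public static void main(String[] args) {
        PhysicsStep body = new PhysicsStep(1.0f);
        float gravity = -9.81f;
        for (int i = 0; i < 100; i++)      // simulate 1 second of free fall
            body.update(new float[] { gravity }, 0.01f);
        System.out.println("fell to " + body.position); // approx -4.95
    }
}
```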

what does the visual perception links?

It links the physical environment with the physiological and psychological properties of the brain, transforming sensory input into meaningful information.

Compound bodies are objects

that are made out of a number of rigid bodies linked together by joints or other methods. They are a way of creating objects with more complex movement while maintaining the simplicity of rigid body simulation. We will describe a number of ways of joining rigid bodies below

We could try to work out an exact (analytic) formula, but this would be specific to a given number of links. It would also be underconstrained for more than two links;

that is, there is more than one solution (you can hold your shoulder and wrist still but still rotate your elbow into different positions)

Aardman Animations use metal skeletons underneath their clay models so that the characters can be moved easily and robustly without breaking. Each individual movement is then less work (though still a lot).

What is the most important method in animation?

the most important method is keyframing

Even with 24-bit colour, although indicated as 'millions of colours' or 'true colour',

there are many colours within the visible spectrum that screens cannot reproduce. To show the extent of this limitation for particular display devices, chromaticity diagrams are often used. Here, the Yxy colour space is used, where Y is a luminance channel (which ranges from black to white via all greys), and x and y are two chromatic channels representing all colours.

The frames between the keyframes have

to be filled in (interpolated). For example, if you have the following positions of the ball.

where velocity is the rate of change of position (v):

v = dx/dt

In the following example code we are applying an impulse to simulate a catapult. The player can drag the object about with the mouse and when the mouse is released, an impulse is applied to it that is proportional to the distance to the catapult. The impulse is calculated as the vector from the object to the catapult and is then scaled by a factor forceScale. The result is applied to the rigid body using the applyCentralImpulse command (there is also a command applyImpulse which can apply an impulse away from the centre of the object).

void mouseReleased() {
  PVector impulse = new PVector();
  impulse.set(startPoint);
  impulse.sub(droid.getPosition());
  impulse.mult(forceScale);
  droid.physicsObject.rigidBody.applyCentralImpulse(
    new Vector3f(impulse.x, impulse.y, impulse.z));
}

In a full program this would be bundled into a complete timeline class, but for a basic demo we can just use the array directly by adding keyframes to it:

void setup() {
  size(640, 480);
  timeline = new Keyframe [5];
  timeline[0] = new Keyframe(0, 0, 0, 0);
  timeline[1] = new Keyframe(2, 0, 100, 0);
  timeline[2] = new Keyframe(4, 100, 100, 0);
  timeline[3] = new Keyframe(6, 200, 200, 0);
  timeline[4] = new Keyframe(10, 0, 0, 0);
}

An important problem is how to animate people talking. In particular, how to animate appropriate mouth shapes for what is being said (Lip-sync). Each sound (phoneme) has a distinctive mouth shape

so we can create a morph target for each sound (visemes). Analyse the speech or text into phonemes (automatically done by a text-to-speech engine), match phonemes to visemes and generate morph target weights.
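Applying the generated weights can be sketched as standard morph target blending: each vertex is the base shape plus a weighted sum of the offsets to each viseme target. The tiny two-value 'mesh' and the viseme names here are illustrative only.

```java
// Sketch of blending morph targets (visemes) for lip-sync: each vertex
// is the base shape plus a weighted sum of the differences between the
// base and each target shape.
public class Visemes {
    public static float[] blend(float[] base, float[][] targets, float[] weights) {
        float[] out = base.clone();
        for (int t = 0; t < targets.length; t++)
            for (int v = 0; v < base.length; v++)
                out[v] += weights[t] * (targets[t][v] - base[v]);
        return out;
    }

    public static void main(String[] args) {
        float[] base = { 0, 0 };          // mouth closed (toy 2-value "mesh")
        float[][] targets = { { 0, 2 },   // hypothetical "aa" viseme: mouth open
                              { 1, 0 } }; // hypothetical "oo" viseme: lips rounded
        // Half way into an "aa" sound:
        float[] frame = blend(base, targets, new float[] { 0.5f, 0f });
        System.out.println(frame[0] + " " + frame[1]); // 0.0 1.0
    }
}
```

Animating the weight of each viseme over time, in sync with the phoneme timing, produces the mouth movement.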

Bézier curves and animation

would be an obvious choice of curve to use as they are smooth, but they do not go through all the control points, and we need a curve that goes through all the keyframes. As we need to go through the keyframes we use Hermite curves instead.

If tangents are calculated in this way the curves are called Catmull-Rom splines

If you set the tangents at the first and last keyframes to zero you get 'slow in slow out'.
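A Catmull-Rom segment can be sketched as a cubic Hermite curve whose tangents at the two keyframes are computed from the neighbouring keyframes, so the curve passes smoothly through every keyframe. This is a one-dimensional illustration; for positions you would apply it per component.

```java
// Sketch of Catmull-Rom spline interpolation between keyframes: a cubic
// Hermite curve whose tangent at each keyframe is half the difference
// of the neighbouring keyframe values.
public class CatmullRom {
    // Interpolates between p1 and p2 (p0, p3 are the neighbours); s in [0,1].
    public static float interpolate(float p0, float p1, float p2, float p3, float s) {
        float t1 = 0.5f * (p2 - p0); // tangent at p1
        float t2 = 0.5f * (p3 - p1); // tangent at p2
        float s2 = s * s, s3 = s2 * s;
        // Standard cubic Hermite basis functions.
        return (2 * s3 - 3 * s2 + 1) * p1 + (s3 - 2 * s2 + s) * t1
             + (-2 * s3 + 3 * s2) * p2 + (s3 - s2) * t2;
    }

    public static void main(String[] args) {
        // The curve passes exactly through the keyframes at s = 0 and s = 1.
        System.out.println(interpolate(0, 10, 20, 30, 0f));   // 10.0
        System.out.println(interpolate(0, 10, 20, 30, 1f));   // 20.0
        System.out.println(interpolate(0, 10, 20, 30, 0.5f)); // 15.0
    }
}
```

Setting t1 and t2 to zero at the first and last keyframes instead of using the Catmull-Rom tangents gives the 'slow in slow out' behaviour described above.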

