Computer Graphics Final


3-D Noise

- 3-D or solid texture has value at every point (x, y, z) --> makes texture mapping easy - Simple solid texture generator noise function on lattice: noise(x, y, z) = random() - Interpolate for points in between - This is "value" noise; "Perlin" noise is based on random gradients - Example: box with holes, spheres
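A minimal C sketch of lattice value noise with trilinear interpolation for the in-between points, assuming a hash-based stand-in for random() so each lattice point always returns the same value; lattice_value and its constants are illustrative, not from the notes:

```c
#include <math.h>

/* Illustrative hash: repeatable pseudo-random value in [0, 1] per lattice
   point -- stands in for noise(x, y, z) = random() on the lattice. */
static float lattice_value(int x, int y, int z) {
    unsigned int h = (unsigned int)x * 73856093u
                   ^ (unsigned int)y * 19349663u
                   ^ (unsigned int)z * 83492791u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (float)(h & 0xFFFFFFu) / (float)0xFFFFFFu;
}

/* Value noise: trilinearly interpolate the eight surrounding lattice values. */
float value_noise(float x, float y, float z) {
    int xi = (int)floorf(x), yi = (int)floorf(y), zi = (int)floorf(z);
    float fx = x - xi, fy = y - yi, fz = z - zi;
    float c000 = lattice_value(xi, yi, zi),     c100 = lattice_value(xi+1, yi, zi);
    float c010 = lattice_value(xi, yi+1, zi),   c110 = lattice_value(xi+1, yi+1, zi);
    float c001 = lattice_value(xi, yi, zi+1),   c101 = lattice_value(xi+1, yi, zi+1);
    float c011 = lattice_value(xi, yi+1, zi+1), c111 = lattice_value(xi+1, yi+1, zi+1);
    float x00 = c000 + fx * (c100 - c000), x10 = c010 + fx * (c110 - c010);
    float x01 = c001 + fx * (c101 - c001), x11 = c011 + fx * (c111 - c011);
    float y0 = x00 + fy * (x10 - x00), y1 = x01 + fy * (x11 - x01);
    return y0 + fz * (y1 - y0);   /* smooth value in [0, 1] */
}
```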

Bump Mapping Representations

- 3-D vector m(u,v) added directly to normal vector n - Think of as 2-D vector of coefficients (bu, bv) that scale u, v vectors tangent to surface (changes direction to create bump effect) - Partial derivatives give how much normal should be moved horizontally and vertically

Bounding Boxes for Multiple Objects

- A box around each object, or a single box surrounding all objects, are not optimally efficient schemes - Nested boxes impose a hierarchy that allow more efficient recursive tree search

Corner-Cutting Subdivision

- Algorithm to achieve smooth curves - Repeatedly chop off corners of polygon - Each line segment is replaced by two shorter segments - Limit curve is shape that would be reached after an infinite series of such subdivisions

Issues with Basic Ray Tracing

- Aliasing (jaggies) - Shadows have sharp edges, unrealistic - No diffuse reflection from other objects - Intersection calculations are expensive (especially for complex objects) - Not suitable for real-time (i.e. games), but it's getting better with GPU acceleration

Glossy Reflections with DRT

- The analog of hard shadows is "sharp reflections" - every reflective surface acts as a perfect mirror - To get glossy or blurry reflections, send out multiple jittered reflection rays and average their colors - Reflection is sharper closer to the object being reflected - Example: butterfly stained glass

Surface Subdivision

- Analogous to curve subdivision: - Refine mesh: Choose new vertices to make smaller polygons, update connectivity - Smooth mesh: Move vertices to fit underlying object - Depend on mesh type (triangular, quadrilateral) - Treat vertices with different valences (# of neighbors) differently - Odd and even vertices (odd: just-added vertices, even: inherited from previous step)

Uniform Spatial Subdivision

- Another bounding volumes approach is to divide space equally, such as into boxes - Each object belongs to every box it intersects - Trace ray through boxes sequentially, check every object belonging to current box - Tracing ray here is a little like rasterizing a line - must keep track of intersections with current box sides to know which box will be entered next - Must make sure that only hits inside current box are reported

What is Texture Mapping?

- Apply a 2-D image to a 3-D object (e.g. a textured cube) - Sticker or decal that you are putting on an otherwise texture-less object - Spatially-varying modification of surface appearance (e.g. color, transparency) at the pixel level

Cross-Fading

- Approach for image morphing - Animate image blending as alpha varies from 1 to 0 smoothly - Issues: features don't line up exactly and we get a double image, pixels changing intensity - Shifting/scaling one entire image doesn't fix the problem - Can handle more situations by applying different warps to different pieces of image (manually chosen, takes care of feature correspondences) - Overlay mesh over image to establish correspondence between features (vertices of mesh move!)
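A sketch of the per-pixel blend itself, assuming 8-bit grayscale images of equal size (the warping/feature-alignment part described above is separate and harder):

```c
/* Cross-fade: out = alpha * A + (1 - alpha) * B, with alpha animated
   from 1 (all A) down to 0 (all B) over the morph. */
void crossfade(const unsigned char *A, const unsigned char *B,
               unsigned char *out, int npix, float alpha) {
    for (int i = 0; i < npix; i++)
        out[i] = (unsigned char)(alpha * A[i] + (1.0f - alpha) * B[i] + 0.5f);
}
```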

Refraction

- Bending of light ray as it crosses interface between media (e.g. air to glass or vice versa) - Example: balls

Quadratic Blending Functions

- Bernstein polynomials - Magnitude of each proportional to control point's influence on the shape of the curve (note that each is non-zero along the entire curve)

BRDF

- Bidirectional Reflectance Distribution Function - Ratio of outgoing radiance in one direction to incident irradiance from another - Can view BRDF as probability that incoming photon will leave in a particular direction (given its incoming direction)

Bilinear Interpolation

- Blend four pixel values surrounding source, weighted by nearness (mix colors in texture space together) - Gives smoother color transitions in texture --> reduces aliasing!
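A sketch of the weighting in C, assuming a single-channel float texture in row-major order with (s, t) already non-negative and in texel units; clamping at the border is one of several possible conventions:

```c
/* Blend the four texels around (s, t), weighted by nearness. */
float bilinear(const float *tex, int W, int H, float s, float t) {
    int s0 = (int)s, t0 = (int)t;
    float fs = s - s0, ft = t - t0;          /* fractional offsets */
    int s1 = s0 + 1 < W ? s0 + 1 : s0;       /* clamp at right/bottom edge */
    int t1 = t0 + 1 < H ? t0 + 1 : t0;
    float a = tex[t0 * W + s0], b = tex[t0 * W + s1];
    float c = tex[t1 * W + s0], d = tex[t1 * W + s1];
    float top = a + fs * (b - a);            /* blend along s */
    float bot = c + fs * (d - c);
    return top + ft * (bot - top);           /* then along t */
}
```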

1st Pass in Photon Mapping

- Build photon map (analog of rexes) - Shoot random rays from light(s) into scene - Each photon carries fraction of light's power - Follow specular bounces but store photons in map at each diffuse surface hit (or scattering event) - Probabilistically decide on photon reflection, transmission, or absorption based on material properties of object hit - Specular surface: send new photon with scaled-down power in reflection/refraction direction just like ray tracing - Diffuse surface: if at least one bounce has occurred, store photon in photon map and send new photon in random direction - usually cosine distribution (so do not store photon at specular surface) - Arbitrary BRDF: use BRDF as probability distribution on new photon's direction
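A sketch of the probabilistic reflect/transmit/absorb decision (Russian roulette), assuming scalar diffuse and specular reflectances kd, ks with kd + ks <= 1; rand01 is an assumed helper:

```c
/* Assumed helper: uniform random float in [0, 1). */
float rand01(void);

typedef enum { DIFFUSE_BOUNCE, SPECULAR_BOUNCE, ABSORB } PhotonEvent;

/* Decide what a photon does at a hit, based on material reflectances. */
PhotonEvent photon_event(float kd, float ks) {
    float r = rand01();
    if (r < kd)      return DIFFUSE_BOUNCE;  /* store in map, send cosine-distributed ray */
    if (r < kd + ks) return SPECULAR_BOUNCE; /* send scaled photon like ray tracing */
    return ABSORB;                           /* photon terminates */
}
```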

Height Map

- Bump representation - Store just scalar "altitude" at each pixel - Get bu, bv from partial derivatives - Approximate with finite differencing - Red and green values change based on height differences, blue is constant (1) - Example: converting height maps to normal displacements, penguins
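A sketch of converting a height-map texel to a normal-map texel by forward finite differences; the wrap-around indexing and the 0.5 scale/bias packing (which assumes slopes in [-1, 1]) are illustrative assumptions:

```c
/* Approximate bu, bv as height differences, pack into RGB: red and green
   encode the u and v slopes, blue is constant (1). */
void height_to_normal_texel(const float *height, int W, int H,
                            int u, int v, unsigned char rgb[3]) {
    float h0 = height[v * W + u];
    float bu = height[v * W + (u + 1) % W] - h0;   /* finite difference in u */
    float bv = height[((v + 1) % H) * W + u] - h0; /* finite difference in v */
    rgb[0] = (unsigned char)(255.0f * (0.5f + 0.5f * bu)); /* red: u slope */
    rgb[1] = (unsigned char)(255.0f * (0.5f + 0.5f * bv)); /* green: v slope */
    rgb[2] = 255;                                          /* blue: constant 1 */
}
```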

Issues with Bump Mapping

- Bumps don't cast shadows - Geometry doesn't change, so silhouette of object is unaffected (just changes how the object is lit)

Improving Interpolation

- C^n continuity: nth derivative is continuous everywhere on the curve - Linear interpolation over multiple connected line segments has C^0 continuity, but not C^1 or higher continuity, which would make for a smoother curve

Adaptive Supersampling

- Sample more densely in areas where image is changing more quickly - Whitted's method - Shoot rays through 4 pixel corners and collect colors - Provisional color for entire pixel is average of corner contributions - If any corner's color is too different (more than 25% different from average), subdivide pixel into quadrants and recurse on quadrants (maximum depth of 2 sufficient)
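A recursive sketch of the method, assuming a trace_ray_through(x, y) helper for eye rays and a crude absolute-difference color metric; a real implementation would cache corner rays shared between neighboring pixels and quadrants instead of recomputing them:

```c
#include <math.h>

typedef struct { float r, g, b; } Color;

/* Assumed helper: traces an eye ray through image-plane point (x, y). */
Color trace_ray_through(float x, float y);

static float color_diff(Color a, Color b) {
    return fabsf(a.r - b.r) + fabsf(a.g - b.g) + fabsf(a.b - b.b);
}

/* Shoot rays through the 4 corners of a (sub)pixel; if any corner differs
   from the average by more than 25%, recurse on quadrants (max depth 2). */
Color adaptive_sample(float x, float y, float size, int depth) {
    Color c0 = trace_ray_through(x, y);
    Color c1 = trace_ray_through(x + size, y);
    Color c2 = trace_ray_through(x, y + size);
    Color c3 = trace_ray_through(x + size, y + size);
    Color avg = { (c0.r + c1.r + c2.r + c3.r) / 4.0f,
                  (c0.g + c1.g + c2.g + c3.g) / 4.0f,
                  (c0.b + c1.b + c2.b + c3.b) / 4.0f };
    if (depth < 2 &&
        (color_diff(c0, avg) > 0.25f || color_diff(c1, avg) > 0.25f ||
         color_diff(c2, avg) > 0.25f || color_diff(c3, avg) > 0.25f)) {
        float h = size / 2.0f;
        Color q0 = adaptive_sample(x, y, h, depth + 1);
        Color q1 = adaptive_sample(x + h, y, h, depth + 1);
        Color q2 = adaptive_sample(x, y + h, h, depth + 1);
        Color q3 = adaptive_sample(x + h, y + h, h, depth + 1);
        avg.r = (q0.r + q1.r + q2.r + q3.r) / 4.0f;
        avg.g = (q0.g + q1.g + q2.g + q3.g) / 4.0f;
        avg.b = (q0.b + q1.b + q2.b + q3.b) / 4.0f;
    }
    return avg;
}
```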

How does bump mapping work?

- Change interpolated normal based on bump map - Alter pixel normals n(u,v) derived from object geometry to get additional detail for shading - Compute lighting per pixel (like Phong)

Characteristics of Texture Mapping

- Color - Transparency - Shininess (e.g. leaf on dome, sword) --> use texture map to specify different reflectance in different areas of object - Bumpiness (bumpy sphere, brick wall) - Etc.

Ray Casting

- Compute illumination at first intersected surface point only - Takes care of hidden surface illumination - Simulation of irradiance (incoming light ray) at each pixel - Iterates through every pixel in image to determine if ray from focal point through pixel intersects object in scene (background color if nothing hit) - Local shading model applied to first point hit - Easy to apply exact rather than faceted shading model to objects for which we have an analytic description (spheres, cones, cylinders, etc.)

Perspective-Correct Texture Coordinate Interpolation

- Compute at each vertex after perspective transformation (numerators: s/w and t/w, denominator: 1/w) - Linearly interpolate s/w, t/w, and 1/w across triangle - At each pixel, perform perspective division of interpolated texture coordinates (s/w, t/w) by interpolated 1/w (i.e. numerator over denominator) to get (s, t) - GPU takes care of this for us - Alternative: use regular linear interpolation with small enough polygons that effect is not noticeable - Linear interpolation for Z-buffering is correct
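A minimal sketch of the per-pixel step, assuming s/w, t/w, and 1/w are interpolated with barycentric weights (a, b, c) that sum to 1; the array names are illustrative:

```c
/* Interpolate the three "divided" quantities linearly across the triangle,
   then divide at the pixel (numerators s/w, t/w over denominator 1/w)
   to recover the true texture coordinates (s, t). */
void perspective_correct_st(const float sw[3], const float tw[3],
                            const float ow[3],          /* 1/w per vertex */
                            float a, float b, float c,  /* a + b + c = 1 */
                            float *s, float *t) {
    float sw_i = a * sw[0] + b * sw[1] + c * sw[2];
    float tw_i = a * tw[0] + b * tw[1] + c * tw[2];
    float ow_i = a * ow[0] + b * ow[1] + c * ow[2];
    *s = sw_i / ow_i;   /* perspective division */
    *t = tw_i / ow_i;
}
```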

Caustic

- Concentrated specular reflection/refraction onto a diffuse surface - Follows an LS+DE path - Standard ray tracing cannot handle caustics - Example: glass sphere on table, copper ring on table

Spherical Projector

- Convert rectangular coordinates (x, y, z) to spherical (r, theta, phi), use only (theta, phi) - Example: rainbow sphere surrounding teapot

Steps of Texture Mapping

- Creation: where does the texture image come from? - Geometry: transformation from 3-D shape locations to 2-D texture image coordinates - Rasterization: what to draw at each pixel (since texture coordinates are floats) --> bilinear interpolation vs. nearest neighbor

Bezier Curves

- Curve approximation through recursive application of linear interpolations - Linear: 2 control points, 2 linear Bernstein polynomials - Quadratic: 3 control points, 3 quadratic Bernstein polynomials - Cubic: 4 control points, 4 cubic polynomials - N control points = N - 1 degree curve - Only endpoints are interpolated (i.e. on the curve) - Curve is tangent to linear segments at endpoints - Every control point affects every point on curve (makes modeling harder) - For surfaces, multiply two blending functions (one for each dimension) together --> bilinear patch: 2 x 2 control points, biquadratic Bezier patch: 3 x 3 control points, bicubic patch: 4 x 4 control points

Chaikin's Subdivision Scheme

- Defines quadratic B-splines - Cut each corner pi by joining the point ¾ of the way along the edge into pi with the point ¼ of the way along the edge out of pi
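One round of the scheme as a C sketch on an open polyline; the Pt type and output-buffer convention are illustrative:

```c
typedef struct { float x, y; } Pt;

/* Each edge (in[i], in[i+1]) contributes two new points, 1/4 and 3/4 of
   the way along it; out must have room for 2*(n-1) points. */
int chaikin_once(const Pt *in, int n, Pt *out) {
    int m = 0;
    for (int i = 0; i + 1 < n; i++) {
        out[m].x = 0.75f * in[i].x + 0.25f * in[i + 1].x;
        out[m].y = 0.75f * in[i].y + 0.25f * in[i + 1].y;
        m++;
        out[m].x = 0.25f * in[i].x + 0.75f * in[i + 1].x;
        out[m].y = 0.25f * in[i].y + 0.75f * in[i + 1].y;
        m++;
    }
    return m;   /* iterate to approach the quadratic B-spline limit curve */
}
```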

Catmull-Rom Spline

- Different from Bezier curves in that we can have arbitrary number of control points, but only 4 of them at a time influence each section of curve - And it's interpolating (goes through points) instead of approximating (goes "near" points) - Four points define curve between 2nd and 3rd - n + 1 control points for polygon with n sides - constraints allow you to solve blending function - increasing t moves forward along spline - Yields C0, C1 continuous curve which goes through every control point (not C2 continuous) - Unlike Bezier, C-R spline curve does not necessarily lie within convex hull of control points - Example: smooth camera paths (e.g. roller coaster), image morphing

Ray-Triangle Intersection

- Direct barycentric coordinates expression: p(u, v) = (1 - u - v) v0 + u v1 + v v2 - Set this equal to parametric form of ray o + td and solve for intersection point (t, u, v) - Only inside triangle if u, v, and 1 - u - v are all between 0 and 1
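The Moller-Trumbore formulation is one standard way to solve exactly this system; a C sketch with the Vec3 helpers defined inline (illustrative, not necessarily the course's code):

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}

/* Solve o + t*d = (1-u-v) v0 + u v1 + v v2 for (t, u, v). Returns 1 on hit. */
int ray_triangle(Vec3 o, Vec3 d, Vec3 v0, Vec3 v1, Vec3 v2,
                 float *t, float *u, float *v) {
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(d, e2);
    float det = dot(e1, p);
    if (fabsf(det) < 1e-8f) return 0;          /* ray parallel to triangle */
    float inv = 1.0f / det;
    Vec3 s = sub(o, v0);
    *u = dot(s, p) * inv;
    if (*u < 0.0f || *u > 1.0f) return 0;      /* outside: u not in [0, 1] */
    Vec3 q = cross(s, e1);
    *v = dot(d, q) * inv;
    if (*v < 0.0f || *u + *v > 1.0f) return 0; /* outside: v or 1 - u - v */
    *t = dot(e2, q) * inv;
    return *t > 0.0f;                          /* hit must be in front of eye */
}
```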

DRT

- Distributed ray tracing - Solves problems with basic ray tracing - Use multiple eye rays for each pixel rendered or multiple recursive rays at intersections

Noise

- Easiest texture to make (random values for texels) --> noise(x, y) = random() - If random() has limited range (e.g. [0, 1]), can control maximum value via amplitude --> a * noise(x, y) - Results usually aren't very exciting visually - Example: arch, "Wobbly Chrome"

Bounding Volumes

- Enclose complex objects (i.e. object models) in simpler ones (i.e. spheres, boxes) and test simple intersection before complex intersection - Want bounds as tight as possible - Example: some creature, bunny

Ray-Polygon Intersection

- Express point p on a ray as some distance t along direction d from origin o: p = o + td - Use plane equation n ⋅ x + d = 0, substitute o + td for x, and solve for t - Only positive t's mean the intersection is in front of the eye - Then plug t back into p = o + td to get p - Is the 2-D location of p on the plane inside the 2-D polygon? - For convex polys, Cohen-Sutherland-style outcode test will work
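A C sketch of the ray-plane step (the 2-D point-in-polygon test is separate); the Vec3 helpers and the epsilon are illustrative assumptions:

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Plane: n . x + d = 0. Substituting p = o + t*dir gives
   t = -(n . o + d) / (n . dir). Returns 1 if a point in front of the eye. */
int ray_plane(Vec3 o, Vec3 dir, Vec3 n, float d, float *t, Vec3 *p) {
    float denom = dot(n, dir);
    if (fabsf(denom) < 1e-8f) return 0;   /* ray parallel to plane */
    *t = -(dot(n, o) + d) / denom;
    if (*t <= 0.0f) return 0;             /* intersection behind the eye */
    p->x = o.x + *t * dir.x;              /* plug t back into p = o + t*dir */
    p->y = o.y + *t * dir.y;
    p->z = o.z + *t * dir.z;
    return 1;  /* then do the 2-D point-in-polygon test on p */
}
```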

Ambient Occlusion

- Extension of shadow ray idea - not every point should get full ambient illumination - Distinguish points that are less or more blocked by objects in scenery - Cast multiple random rays from each rendered surface point to estimate percent of sky hemisphere that is available (limit length of rays so distant objects have no effect; cosine weighting/distribution for foreshortening) - Example: box and donut, pile of Legos

Mipmaps

- Filtering for minification is expensive, and different areas must be averaged depending on the amount of minification - Precompute reduced size version of original texture - Prefilter entire image at different resolutions - For each screen pixel, pick texture in mipmap at level of detail (LOD) that minimizes minification (i.e. pre-image area closest to 1) - Do nearest or linear filtering in appropriate LOD texture image - Example: shell

Interpolating Interpolants

- For 3 points a, b, and c, we can define a smoother curve by linearly interpolating along the line between points d and e, linearly interpolated between a, b and b, c respectively - This curve approximates a, b, and c, because it doesn't go through them all - True interpolating curves include all of the original points

Soft Shadows with DRT

- For point light sources, sending a single shadow ray toward each is reasonable, but this gives hard-edged shadows - Simulate soft shadows by modeling each light source as sphere and sending multiple jittered shadow rays toward a light sphere, using fraction that reach it to attenuate color - Similar to ambient occlusion, but using list of light sources instead of single hemisphere - Example: heart shadow, rectangular prism - Creates discrete shadow points - need post-processing to smooth into contiguous region

What is the problem with LS+DE paths for ray tracing?

- For specular surfaces, we know where the photon will go (or where it came from, if going backwards) - For diffuse surfaces, there's much more uncertainty - If we're tracing a ray from the eye and we hit a diffuse surface, this uncertainty means that the source of the photon could be anywhere in the hemisphere - Conventional ray tracing just looks for lights at this point, but for LS+DE paths we need to look for other specular surfaces

FTIR for Multi-Touch Surfaces

- Frustrated Total Internal Reflection - Putting material with higher refractive index against interface can allow light to escape/scatter - Example: fingers on touch screen

Why use bump mapping?

- Gives impression of height variation by only using lighting (e.g. world height map) - Can get a lot more surface detail without expense of more object vertices to light, transform

Ray-Sphere Intersection

- If equation equals 0, p is on sphere: |p - pc|^2 - r^2 = 0 where pc is center of sphere and r is radius - Intersection is found by satisfying both equations (point on sphere and point on ray): |o + td - pc|^2 - r^2 = 0; with delta p = pc - o and d normalized, solve for t: t = d · delta p ± sqrt((d · delta p)^2 - (|delta p|^2 - r^2)) --> can have zero, one, or two solutions - Testing must be done after t is computed (e.g. negative t means behind camera) - KNOW HOW TO MAKE C-CODE THAT DOES MATH
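A C sketch of exactly this math (in the spirit of the all-caps note), assuming a normalized direction d; the Vec3 type and helper are illustrative:

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* With dp = pc - o and unit d:
   t = d.dp +/- sqrt((d.dp)^2 - (|dp|^2 - r^2)).
   Returns the nearest positive t, or -1 if the ray misses. */
float ray_sphere(Vec3 o, Vec3 d, Vec3 pc, float r) {
    Vec3 dp = { pc.x - o.x, pc.y - o.y, pc.z - o.z };
    float b = dot(d, dp);
    float disc = b * b - (dot(dp, dp) - r * r);
    if (disc < 0.0f) return -1.0f;       /* zero solutions: miss */
    float s = sqrtf(disc);
    float t = b - s;                     /* try the near root first */
    if (t <= 0.0f) t = b + s;            /* near root is behind camera */
    return t > 0.0f ? t : -1.0f;         /* negative t means behind camera */
}
```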

Shadow Maps

- If we render scene from point of view of light source, all visible surfaces are lit and hidden surfaces are in shadow (camera parameters here = spotlight characteristics) - Shadows are things that the light "doesn't see" - z-buffer used to see if pixel is in light or in shadow - When rasterizing scene from eye view, transform each pixel to get 3-D position with respect to the light --> project pixel to (i, j, depth) with respect to light, compare depth to value in shadow buffer (aka light's z-buffer) at (i, j) to see if it is visible to light (not shadowed)
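A sketch of the per-pixel depth compare, assuming a light_transform helper that projects a world-space point into the light's (i, j, depth) frame; the small depth bias is an illustrative guard against self-shadowing:

```c
typedef struct { float x, y, z; } Vec3;

/* Assumed helper: project a world-space point into the light's view,
   giving shadow-map texel (i, j) and depth from the light. */
void light_transform(Vec3 p_world, int *i, int *j, float *depth);

/* Compare against the light's z-buffer (the shadow buffer). */
int in_shadow(Vec3 p_world, const float *shadow_buf, int W, int H) {
    int i, j;
    float depth;
    light_transform(p_world, &i, &j, &depth);
    if (i < 0 || i >= W || j < 0 || j >= H) return 0;  /* outside light's view */
    return depth > shadow_buf[j * W + i] + 0.001f;     /* farther than what light sees */
}
```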

Applications of DRT

- Improve image quality via anti-aliasing - Supersampling - Uniform vs. adaptive

IOR

- Index of refraction - Ratio of speed of light in vacuum to speed in that medium (wavelength dependent --> prisms) - The bigger the IOR, the more the light is bent

Illumination Models

- Interaction between light sources and objects in scene that results in perception of intensity and color at eye - Local vs. global models

Light Paths

- Interactions between light source (L), diffuse (D) and specular (S) objects, and eye (E) can be described with the regular expression L (D|S)* E - If a surface is a mix of D and S, the combination is additive so it is still okay to treat in this manner

Texture Rasterization

- Linear texture coordinate interpolation doesn't work! - Triangle in image does not map to triangle in texture space - Equally-spaced pixels do not project to equally spaced texels under perspective projection - No problem with 2-D affine transforms (rotation, scaling, shear, etc.) but different depths change things due to foreshortening - Example: cubes with incorrect perspective

Surface Subdivison Schemes

- Loop (C2): approximating triangular meshes - Catmull-Clark (C2): approximating quadrilateral meshes (e.g. Tetris piece, car, "Geri's Game") - Modified Butterfly (C1): interpolating triangular meshes - Kobbelt (C1): interpolating quadrilateral meshes

What visual phenomena does ray casting not account for?

- Mirror-like surfaces should reflect other objects in scene - Transparent surfaces should refract scene objects behind them - Use ray tracing for more realism

Why use texture mapping?

- More efficient --> more detail without the cost of more complicated geometry (modeling, display) - Layer multiple texture maps, can modulate color + other surface characteristics (transparency) - "Lookup table" for pre-computed quantities (lighting, shadows, etc.), results for lighting/shadows/etc. stored in textures and then rendered using texture maps

Sphere Map for Environment Textures

- Most often constructed with two photographs of mirrored sphere taken 90 degrees apart - Example: sphere map placed onto reflective bear object, pilot's reflection in reflective body from Terminator II, relighting (same scene, different lighting)

Shadow Rays

- Next step in ray casting after intersection determination - Figure out where lights are - For point being locally shaded, spawn new ray in each light direction and check for intersection to make sure light is visible - Only add diffuse and specular components for light if light is not blocked - Do intersection test on light vector: tests to see if another object is in the way, if so light is left out of Phong lighting equation, ambient light is unblockable - Example: columns, balls

Issues with Environment Mapping

- Not a substitute for real reflection, it's an approximation - Only physically correct under assumptions that object shape is convex and radiance comes from infinite distance - Object concavities mean self-reflections, which won't show up - Other objects won't be reflected - Parallel reflection vectors access same environment texel (works for objects that are far from environment but not close)

Cylindrical Projection

- Oblique - Convert rectangular coordinates (x, y, z) to cylindrical (r, h, theta), use only (h, theta) to index texture image - Scale h and theta (2D projection coordinates) to get texture pixel coordinates from model units - Example: rainbow cylinder surrounding teapot

Backward Ray Tracing

- Only consider rays that create image - Types: ray casting and ray tracing

Planar Projection

- Orthographic projection - Pick point and follow with color to specified plane - Example: rainbow objects on table with colors projected in different places based on whether they are being projected onto the XY plane, the YZ plane, or the XZ plane.

Fragment Shader

- Output: out vec3 color; (color that you're drawing at each pixel) - Input: in vec2 UV; (texture coordinates) - sampler2D: uniform sampler2D myTextureSampler; (two-dimensional image/texture type, uniform means getting set from outside program, set on .cpp side) - UV from teapot picture drawn on board

Parametric Lines

- Parametric definition of a line segment: p(t) = (1 - t) p0 + t p1 - Like a blend of the two endpoints
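As code, the blend is a one-liner (scalar version; apply per coordinate for 2-D or 3-D points):

```c
/* p(t) = (1 - t) * p0 + t * p1 -- a blend of the two endpoints. */
float lerp(float p0, float p1, float t) {
    return (1.0f - t) * p0 + t * p1;
}
```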

Bump Mapping

- Per pixel variation in the apparent normal - Alters eye reflection vector - "Smooth" normal still interpolated, then perturbed by the bump map - Less complicated than specifying distance of vertices to create bumps - Example: checkered cylinder, bump texture applied to teapot

Ray Tracing Model

- Perceived color at point p is an additive combination of local illumination (e.g. Phong) + reflection + refraction effects - Compute reflection, refraction contributions by tracing respective rays back from p to surfaces they came from and evaluating local illumination at those locations - Apply operation recursively to some maximum depth

Projecting in Non-Standard Directions

- Perspective projection - Texture projector function doesn't have to project ray from object center through position (x, y, z) - can use any attribute of that position - Example: ray comes from another location, ray is surface normal n at (x, y, z), ray is reflection-from-eye vector r at (x, y, z) --> used to fake reflective object - This can lead to interesting or informative effects (e.g. teapot with different ray directions for a spherical projector)

kd-tree

- Photon map - Decoupling from scene geometry allows fewer photons than scene objects/triangles (no texture maps, no meshes) - Each point parameterizes axis-aligned splitting plane; rotate which axis is split - But balance is important to get O(log N) efficiency for nearest-neighbor queries

Forward Ray Tracing

- Proper global illumination means simulation of physics of light - Rays emitted from light, bounce off objects, and some hit our eyes forming image - Problem: not many rays make it to the image, wastes computation

Ray Tracing

- Recursively spawns rays at hit points to simulate reflection, refraction, etc. - Example: glass sphere, camera, cars, explorer's desk

2nd Pass in Photon Mapping

- Render scene - Modified ray tracing (follow eye rays into scene) - Use photons near each intersection to compute light - For each eye ray intersection, estimate irradiance as function of nearby photons - Each photon stores position, power, incident direction - can treat like mini-light source - Use filtering (cone or Gaussian) to weight nearer photons more - Can use discs instead of spheres to only get photons from same planar surface - For soft, indirect illumination, irradiance estimates are combined with standard local illumination calculations after final gathering (which shoots more rays to bring back irradiance estimates from other diffuse surfaces) - just like ray tracing adds reflection/refraction components to local color - As usual, more accurate with more photons --> use multiple maps for different phenomena

Texturing Step 1: Creation

- Reproductions (photographs, handpainted) - Directly computed functions (lightmaps, visibility maps) - Procedurally-built (synthesize with randomness, pattern-generating rules, etc.) --> e.g. happy Buddha

Aliasing

- Shadow edges have aliasing depending on shadow map resolution and scene geometry - Resolution problem - Shadow edges are "hard" by default (very blocky), real shadows typically have soft edges - Solutions to both problems typically involve multiple offset shadow buffer lookups

Caustic Photon Map

- Shoot photons only at specular objects (aimed sort of like shadow rays) - Example: water

Minification

- Single screen pixel area maps to area greater than one texel - Tough to do with bilinear interpolation, which ignores lots of texels - Filtering for minification has aliasing problem much like line rasterization --> pixel maps to quadrilateral (pre-image) in texel space

Loop Subdivision

- Smooths triangle mesh - Subdivision replaces 1 triangle with 4 - Approximating scheme (original vertices not guaranteed to be in subdivided mesh) - Example: snake

DRT Effects

- Soft shadows - Glossy reflections - Depth of field (atom model) - Motion blur (moving balls)

3-D Noise Applications

- Solid textures such as marble, wood, 3-D clouds - Animated 2-D textures (flesh, slime, copper, mercury, stucco, amber, sparks)

Subdivision for Sphere

- Start with icosahedron with triangular faces - Compute midpoint of each edge of triangle (this defines 4 new triangles) - Renormalize midpoints (new vertices) so that they lie on sphere (this makes original triangle non-planar) - Recurse on each new face until a desired depth is reached (at leaf of recursion, draw triangle)
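A recursive C sketch of these steps for a unit sphere, with draw_triangle left as an assumed renderer hook:

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

void draw_triangle(Vec3 a, Vec3 b, Vec3 c);   /* assumed renderer hook */

/* Edge midpoint, renormalized so it lies on the unit sphere. */
static Vec3 midpoint_on_sphere(Vec3 a, Vec3 b) {
    Vec3 m = { (a.x + b.x) / 2, (a.y + b.y) / 2, (a.z + b.z) / 2 };
    float len = sqrtf(m.x*m.x + m.y*m.y + m.z*m.z);
    m.x /= len; m.y /= len; m.z /= len;
    return m;
}

/* Split each edge at its renormalized midpoint (4 new triangles), recurse
   to desired depth; draw at the leaves. Start from icosahedron faces. */
void subdivide(Vec3 a, Vec3 b, Vec3 c, int depth) {
    if (depth == 0) { draw_triangle(a, b, c); return; }
    Vec3 ab = midpoint_on_sphere(a, b);
    Vec3 bc = midpoint_on_sphere(b, c);
    Vec3 ca = midpoint_on_sphere(c, a);
    subdivide(a, ab, ca, depth - 1);
    subdivide(ab, b, bc, depth - 1);
    subdivide(ca, bc, c, depth - 1);
    subdivide(ab, bc, ca, depth - 1);   /* center triangle */
}
```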

Cube Map for Environment Textures

- Straightforward to make - Render or photograph six rotated views of environment (4 side views at compass points, 1 straight-up view, 1 straight-down view)

Geometry Shaders

- Take primitive as input (possibly with adjacency information): point, line, triangle - Amplifies/decimates primitives - Generate zero or more new primitives (change primitives into other primitives) --> e.g. point into quadrilateral - Run after vertex shaders - Good for subdivision, level of detail (LoD) control, billboards/impostors, procedural terrain/plants

Ray Intersection

- Test each primitive in scene for intersection individually - Different methods for different kinds of primitives (polygon, sphere, cylinder, torus, etc.) - Make sure intersection point is in front of eye and the nearest one

Applications of Texture Mapping

- Text (each letter is rectangle with texture applied to it) - Billboards/Impostors (clouds/trees are impostors with transparent background, rotated so that they always face viewer/parallel to viewing plane)

Lightmaps

- Texture mapping application - Expensive to do lighting computation in real time - Precompute expensive static lighting effects (such as ambient occlusion or diffuse reflectance) and "bake" them in color texture - Scene looks more realistic as camera moves without expense of recomputing effects - Example: light on tiles, torches/lights in buildings for video games

Displacement Maps

- Textures can be used to modify underlying geometry (unlike bump maps) - Generally in direction of surface normal - Must have enough vertices (not like globe example) - Take altitude map to use as scale factor to move vertices out or in - Example: bumpy sphere, penguins

Environment/Reflection Mapping

- To render pixel on mirrored surface correctly, we need to follow reflection of eye vector back to first intersection with another surface and get its color - This is an expensive procedure with ray tracing, but can be easily done by approximating with texture mapping - Key idea is to render 360 degree view of environment from center of object with sphere or box as intermediate surface - Intersection of eye reflection vector with intermediate surface provides texture coordinates for reflection/environment mapping

Bidirectional Ray Tracing

- Trace forward light rays into scene as well as backward eye rays - At diffuse surfaces, light rays additively deposit photons in rexes where they are accessed by eye rays - Summation approximates integral term in radiance computation - Light rays carry information on specular surface locations - they have no uncertainty - Simulates LS*DS*E paths - Photons deposited in rexes are sparse, so they must be interpolated (use density estimation, still have noise issues) - Storage of illumination only on surfaces means that we ignore fog and other volume-based scattering/absorption (aka "participating media") - Example: glass spheres

2-D Noise Applications

- Traditional "wrappable" textures (clouds, water, fire, bumps, specularity, blending maps) - Height maps (fractal terrain)

Fractal Noise

- Turbulence - FBM (fractional Brownian motion) - Many frequencies present, looks more natural - Can get this by summing noise at different magnifications: turb(x, y, z) = Σ_i a_i · noise_i(x, y, z) - Typical (but adjustable) parameters: magnification doubles at each level (octave), amplitude drops by half - Example: spheres
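A sketch of the octave sum using the doubling/halving defaults from the notes; value_noise is assumed (e.g. the 3-D value noise sketch earlier):

```c
float value_noise(float x, float y, float z);   /* assumed noise source */

/* turb(x, y, z) = sum_i a_i * noise_i(x, y, z): each octave doubles the
   frequency (magnification) and halves the amplitude. */
float turbulence(float x, float y, float z, int octaves) {
    float sum = 0.0f, amp = 1.0f, freq = 1.0f;
    for (int i = 0; i < octaves; i++) {
        sum += amp * value_noise(x * freq, y * freq, z * freq);
        freq *= 2.0f;
        amp  *= 0.5f;
    }
    return sum;
}
```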

Photon Mapping

- Two-pass algorithm somewhat like bidirectional ray tracing, but photons stored differently - Example: glass balls in room with red and blue walls, room with table

Vertex Shader

- UV is interpolated downstream, sends information about texture coordinates - Mapping between texture and object

Interpolating Splines

- Use key frames to indicate a series of positions that must be hit (camera location, path for character to follow, animation of walking/gesturing/facial expressions - morphing) - Use splines for smooth interpolation (must not be approximating!)

Supersampling

- Using more than bilinear interpolation's 4 texels - Solves minification problems - Shoot multiple nearby eye rays per pixel and combine colors - Rasterize at higher resolution (regular grid pattern around each normal image pixel; irregular jittered sampling pattern reduces artifacts) - Combine multiple samples into one pixel via weighted average (e.g. "box" filter, Gaussian/cone filter)

Projector Functions

- Want a way to get from 3-D point to 2-D surface coordinates as an intermediate step - Project complex object onto simple object's surface with parallel or perspective projection (focal point inside object) --> plane, cylinder, sphere, cube, mesh (piecewise planar in OpenGL)

3-D Noise Interpolation

- f(x, y, z) can be evaluated at non-lattice points with a straightforward extension of 2-D bilinear interpolation (to trilinear interpolation) - Other interpolation methods (quadratic, cubic, etc.) also applicable

Basic Steps for Ray Casting

1. Loop over (i, j)
2. Compute 3-D ray into scene for each 2-D image pixel
3. Compute 3-D position of ray's intersection with nearest object and normal at that point
4. Apply lighting model to get color (closest gets Phong lighting)
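The four steps as a C skeleton; all types and helpers here are assumed placeholders, not a specific implementation:

```c
#define WIDTH  640
#define HEIGHT 480

typedef struct { float x, y, z; } Vec3;
typedef struct { float r, g, b; } Color;
typedef struct { Vec3 o, d; } Ray;          /* origin + direction */
typedef struct { Vec3 pos, normal; } Hit;   /* intersection record */

Ray   compute_ray(int i, int j);            /* assumed: eye ray through pixel */
int   intersect_scene(Ray ray, Hit *hit);   /* assumed: nearest positive hit */
Color shade(const Hit *hit);                /* assumed: e.g. Phong lighting */

void render(Color image[HEIGHT][WIDTH]) {
    const Color background = { 0.0f, 0.0f, 0.0f };
    for (int j = 0; j < HEIGHT; j++) {      /* 1. loop over (i, j) */
        for (int i = 0; i < WIDTH; i++) {
            Ray ray = compute_ray(i, j);    /* 2. 3-D ray for 2-D pixel */
            Hit hit;
            if (intersect_scene(ray, &hit)) /* 3. nearest hit + normal */
                image[j][i] = shade(&hit);  /* 4. apply lighting model */
            else
                image[j][i] = background;   /* background if nothing hit */
        }
    }
}
```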

Texturing Steps 2 and 3: Geometry + Rasterization

1. Run fragment shader at each pixel
2. Compute object space location (x, y, z) from screen space location (i, j) of given pixel
3. Use projector function to obtain object surface coordinates (u, v) (3-D to 2-D projection)
4. Use corresponder function to find texel coordinates (s, t) (2-D to 2-D transformation) --> scale, shift, wrap like view transform in geometry pipeline (in texture coordinates)
5. Filter texel at (s, t) --> do we combine/interpolate values around that texel?
6. Modify pixel (shadow, color, etc.) --> rasterization

Curve Subdivision

Algorithmically obtain smooth curves starting from small number of line segments

"Box" Filter

All samples associated with a pixel have equal weight (i.e. directly take their average)

4-D Noise Applications

Animated solid textures

Noise Frequency

By selecting larger spaces between lattice points, we are increasing the magnification of the noise texture and hence reducing its frequency

Linear Interpolation as Blending

Consider each point on the line segment as a sum of control points pi weighted by blending functions Bi: p(t) = Σ_{i=0}^{n} B_i^n(t) pi
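In LaTeX form, with the Bernstein basis written out (a standard identity):

```latex
p(t) = \sum_{i=0}^{n} B_i^n(t)\, p_i,
\qquad
B_i^n(t) = \binom{n}{i}\, t^i (1 - t)^{\,n - i}
```

For n = 1 this reduces to p(t) = (1 - t) p0 + t p1, the parametric line segment above.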

Uniform Supersampling

Constant number of rays per pixel (vs. adaptive)

Local Illumination

Depends only on light sources directly affecting object, neglecting other objects in the scene (e.g. geometry, material properties)

Parametric Surfaces

If we have a surface patch already parametrized by some natural (u, v) such that x = f(u, v), y = g(u, v), z = h(u, v), we can use parametric coordinates u, v without a projector

Impostors

Image-aligned polygons in 3-D

Lighting Components

L = Ldirect + Lspecular + Lindirect + Lcaustic - Can get Ldirect and Lspecular using ray-casting, ray-tracing respectively - Lindirect is main reason we're looking at photon mapping (LD*E paths) - Lcaustic from special caustic photon map

What are the possible light paths that local illumination can create?

LDE, LSE

What is the light path for a direct visualization of the light?

LE

What are the possible light paths that ray tracing can create?

LS*E, LDS*E

Volume Photon Map

Photon interactions with participating media such as fog or smoke

Texture Coordinates

Polygons can be treated as parametric patches by assigning texture coordinates to vertices

Gaussian/Cone Filter

Sample weights inversely proportional to distance from associated pixel

Global Photon Map

Shoot photons everywhere for diffuse, indirect illumination

Magnification

Single screen pixel maps to area less than or equal to one texel

Global Illumination

Takes indirect effect of other objects into account (e.g. shadows cast, light reflected/refracted)

C1 Continuity

Tangents of splines match on either side of a point

Snell's Law

The relationship between the angle of incidence and the angle of refraction is given by: n1 sin(theta1) = n2 sin(theta2)

Color Bleeding

Transfer of color between diffuse surfaces via reflection

Mesh Warping Algorithm

for f = 0 to 1 do
1. Linearly interpolate mesh vertices between MS and MT to get Mf
2. Warp image IS to IfS using MS and Mf
3. Warp IT to IfT using MT and Mf
4. Linearly interpolate morphed image If between images IfS and IfT (i.e., blend them together with α = 1 - f)
end
- f is degree of transformation - vertical = color change, horizontal = geometry change - For steps 2 & 3, use cubic splines to interpolate new pixel locations between warped mesh vertices (e.g. Catmull-Rom) - Could use bilinear patch for each piece, but wouldn't have C1 continuity of intensity at borders (i.e. could get a faceted effect akin to Gouraud shading without normal averaging) - Example: kid to adult, man to cat

Midpoint Corner-Cutting Algorithm

function midpoint_subdivide(p0, p1, p2, depth) {
    if (depth > 0) {
        p01 = (p0 + p1) / 2;
        p12 = (p1 + p2) / 2;
        pm  = (p01 + p12) / 2;
        midpoint_subdivide(p0, p01, pm, depth - 1);
        midpoint_subdivide(pm, p12, p2, depth - 1);
    } else {
        // draw lines (p0, p1) and (p1, p2)
    }
}
- Subdivision definition of quadratic Bezier curves

Reflectance Equation

I_total = I_amb + I_diff + I_spec

Computing the Transmission Direction t

With η = n1 / n2 (the IOR ratio):
c1 = cos(theta1) = -v · n
c2 = cos(theta2) = sqrt(1 - η^2 (1 - c1^2))
t = η v + (η c1 - c2) n
- Total internal reflection happens when the term inside the square root isn't positive
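The same formulas as a C sketch, with the IOR ratio written eta to avoid clashing with the normal n; v is the unit incoming ray direction:

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Compute transmission direction t; returns 0 on total internal reflection. */
int refract(Vec3 v, Vec3 n, float n1, float n2, Vec3 *t) {
    float eta = n1 / n2;
    float c1 = -dot(v, n);                          /* cos(theta1) */
    float k = 1.0f - eta * eta * (1.0f - c1 * c1);  /* term under the root */
    if (k <= 0.0f) return 0;                        /* total internal reflection */
    float c2 = sqrtf(k);                            /* cos(theta2) */
    t->x = eta * v.x + (eta * c1 - c2) * n.x;
    t->y = eta * v.y + (eta * c1 - c2) * n.y;
    t->z = eta * v.z + (eta * c1 - c2) * n.z;
    return 1;
}
```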

Reflection Direction for Phong Model

r = 2 (n · l) n - l where r is reflection direction, n is normal, and l is light direction

Ray Tracing Reflection Formula

r = v - 2 (n · v) n - Substituting v = -l recovers the Phong formula (here v points toward the surface, while l points away from it)

Critical Angle

theta critical = sin^-1 (n2/n1) - If you exceed critical angle, don't send out refraction ray - No light escapes, all light is reflected internally - Example: turtle

