CS450 Final Exam - Computer Graphics
A "stereomirror" display system uses: one very wide monitor and a half-silvered mirror 2 LCD monitors and a half-silvered mirror 2 LCD monitors and 2 normal reflective mirrors one very wide monitor and 2 normal reflective mirrors
2 LCD monitors and a half-silvered mirror
A Bezier cubic curve like we looked at contains:
* 2 end points and 3 intermediate control points
* 2 end points and 2 intermediate control points
* 2 end points and 1 intermediate control point
* 2 end points and 4 intermediate control points
Answer: 2 end points and 2 intermediate control points
A cubic Bezier curve requires, as input:
* 4 points
* An arbitrary number of points
* 5 points
* 3 points
Answer: 4 points
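For concreteness, here is a minimal C++ sketch (the names BezierCubic and NUMPOINTS are illustrative, not from the notes) that evaluates a cubic Bezier from its 4 input points: the 2 end points p0 and p3 plus the 2 intermediate control points p1 and p2. The sample count is up to you, which is also why a Bezier curve can emit an arbitrary number of output vertices (see the later question).

    #include <cstdio>

    struct Point { float x, y; };

    // Evaluate a cubic Bezier at parameter t using the Bernstein form:
    // B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3
    Point BezierCubic( Point p0, Point p1, Point p2, Point p3, float t )
    {
        float omt = 1.f - t;
        float b0 = omt*omt*omt;
        float b1 = 3.f*omt*omt*t;
        float b2 = 3.f*omt*t*t;
        float b3 = t*t*t;
        return { b0*p0.x + b1*p1.x + b2*p2.x + b3*p3.x,
                 b0*p0.y + b1*p1.y + b2*p2.y + b3*p3.y };
    }

    int main( )
    {
        // 2 end points (p0, p3) and 2 intermediate control points (p1, p2):
        Point p0{ 0.f, 0.f }, p1{ 1.f, 2.f }, p2{ 3.f, 2.f }, p3{ 4.f, 0.f };
        const int NUMPOINTS = 20;   // any sample count works -- hence
                                    // "an arbitrary number" of output vertices
        for( int i = 0; i <= NUMPOINTS; i++ )
        {
            float t = (float)i / (float)NUMPOINTS;
            Point p = BezierCubic( p0, p1, p2, p3, t );
            printf( "%.3f  (%.3f, %.3f)\n", t, p.x, p.y );
        }
        return 0;
    }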
If, in a framebuffer, the green component is stored using 6 bits, the number of shades of green that are possible is:
* 4
* 64
* 256
* 16
Answer: 64
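Reasoning: an n-bit field encodes $2^n$ distinct values, so 6 bits of green give $2^6 = 64$ shades (8 bits would give 256).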
In Inverse Kinematics (IK), the inputs are:
* A desired velocity
* A desired acceleration
* A desired position
* The animation parameters
Answer: A desired position
Doing the computer graphics projection for left and right stereo views requires:
* A complete paradigm change
* A slight variation on gluPerspective( )
* A sophisticated game engine
* A call to gluLeftEye( ) and gluRightEye( )
Answer: A slight variation on gluPerspective( )
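A hedged sketch of that "slight variation" (parameter names are assumptions and the sign convention can differ; this is not the exact code from the notes): instead of gluPerspective's symmetric frustum, each eye gets a slightly asymmetric glFrustum, shifted by half the eye separation. Objects at zeroParallaxDist land on the Plane of Zero Parallax, which is also why the left and right views coincide there (see the stereographics questions below).

    #include <cmath>
    #include <GL/gl.h>

    void SetStereoProjection( float fovyDeg, float aspect,
                              float zNear, float zFar,
                              float eyeSep, float zeroParallaxDist,
                              bool leftEye )
    {
        float top   = zNear * tanf( fovyDeg * 3.14159265f / 360.f );
        float right = top * aspect;
        // shift the frustum horizontally, in opposite directions per eye:
        float shift = ( eyeSep / 2.f ) * zNear / zeroParallaxDist;
        if( leftEye )
            shift = -shift;
        glMatrixMode( GL_PROJECTION );
        glLoadIdentity( );
        glFrustum( -right + shift, right + shift, -top, top, zNear, zFar );
        // ...then offset the ModelView matrix by -/+ eyeSep/2 before drawing
    }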
The number of output vertices from a Bezier curve is:
* 5
* 3
* 4
* An arbitrary number
Answer: An arbitrary number
Using 3D Venn diagrams to create and edit geometry is called:
* Constructive Solid Geometry (CSG)
* 3D Venn Diagrams (3VD)
* UnionIntersectionDifference (UID)
Answer: Constructive Solid Geometry (CSG)
The framebuffer's Z-buffer is used to hold:
* Depth
* Refresh duration
* Output to the Video Driver
* Which double-buffered framebuffer you are using
Answer: Depth
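Typical OpenGL depth-buffer usage (standard calls, though not quoted from the exam): the Z-buffer stores one depth value per pixel so that nearer fragments overwrite farther ones.

    #include <GL/glut.h>

    // in initialization:
    glutInitDisplayMode( GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH );  // request a Z-buffer
    glEnable( GL_DEPTH_TEST );                                    // compare incoming depth per pixel

    // at the top of each frame:
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );         // clear color and depth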
The 3 major steps in running a particle system are:
* Enter, Do-It, Update
* Enter, Display, Un-Do-It
* Emit, Display, Update
* Emit, Do-It, Un-Do-It
Answer: Emit, Display, Update
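A minimal Emit / Display / Update sketch in C++ (the structure and names are illustrative, not from the notes):

    #include <vector>
    #include <algorithm>
    #include <cstdlib>
    #include <GL/gl.h>

    struct Particle
    {
        float x, y, z;      // position
        float vx, vy, vz;   // velocity
        float life;         // seconds remaining
    };

    std::vector<Particle> Particles;

    float Rand( float lo, float hi )
    {
        return lo + ( hi - lo ) * (float)rand( ) / (float)RAND_MAX;
    }

    void Emit( int n )              // 1. create new particles at the source
    {
        for( int i = 0; i < n; i++ )
            Particles.push_back( { 0.f, 0.f, 0.f,
                                   Rand(-1.f,1.f), Rand(2.f,5.f), Rand(-1.f,1.f),
                                   2.f } );
    }

    void Display( )                 // 2. draw every live particle
    {
        glBegin( GL_POINTS );
        for( const Particle& p : Particles )
            glVertex3f( p.x, p.y, p.z );
        glEnd( );
    }

    void Update( float dt )         // 3. move and age particles, cull the dead
    {
        for( Particle& p : Particles )
        {
            p.x += p.vx * dt;  p.y += p.vy * dt;  p.z += p.vz * dt;
            p.vy -= 9.8f * dt;                      // gravity
            p.life -= dt;
        }
        Particles.erase( std::remove_if( Particles.begin( ), Particles.end( ),
                         []( const Particle& p ){ return p.life <= 0.f; } ),
                         Particles.end( ) );
    }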
The simplified Euler equation that relates the number of Edges, Faces, and Vertices is:
* V - F + E = 2
* F - E + V = 2
* E - V + F = 2
Answer: F - E + V = 2
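Check it on a cube: F = 6, E = 12, V = 8, and $F - E + V = 6 - 12 + 8 = 2$.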
Particle Systems can be used to create the appearance of all of these except:
* Sand
* Fire
* Water
* Falling dominos
Answer: Falling dominos
The full Rendering Equation describes:
* How light travels through air
* How light travels in a vacuum
* How light is emitted from a surface
* How multiple light beams interfere with each other
Answer: How light is emitted from a surface
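In standard notation (Kajiya's form), the light leaving a surface point $x$ in direction $\omega_o$ is the surface's own emission plus all incoming light reflected by the surface's BRDF $f_r$:

    L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (n \cdot \omega_i) \, d\omega_i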
The process of having someone else (e.g., a computer, an underling) create the intermediate frames is called:
* Underling-framing
* Intern-framing
* In-betweening
* Minor-framing
Answer: In-betweening
The value of the Radiosity Shape Factor is obtained by:
* Looking it up in a table
* Integrating multiple light paths between surfaces
* Googling for it
Answer: Integrating multiple light paths between surfaces
Radiosity:
* Is actually just Ambient-Diffuse-Specular under a different name
* Is only useful for reflections and refractions
* Is a super-fast way of creating rendered images
* Is essentially an energy balance
Answer: Is essentially an energy balance
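The energy balance in standard notation: each patch $i$'s radiosity $B_i$ is its own emission $E_i$ plus its reflectivity $\rho_i$ times the light arriving from every other patch $j$, weighted by the shape (form) factor $F_{ij}$ -- the quantity obtained by integrating the light paths between the two surfaces:

    B_i = E_i + \rho_i \sum_j F_{ij} B_j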
All of the following are true about Ray-tracing except:
* It can easily handle shadows
* It can easily represent color bleeding between surfaces
* It can easily handle reflection
* It can easily handle transparency with refraction
Answer: It can easily represent color bleeding between surfaces
Each of the following is true about Radiosity except:
* It treats surfaces as light sources
* It easily produces reflections and refractions
* It handles color bleeding between surfaces
Answer: It easily produces reflections and refractions
The process of having a human create the major frames in an animation and having someone else (e.g., a computer, an underling) create the intermediate frames is called:
* Key Framing
* Minor Framing
* Major Framing
* Master Framing
Answer: Key Framing
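A minimal in-betweening sketch in C++ (illustrative names): the human supplies the key frames; the computer fills in each intermediate frame by interpolating the animation parameters.

    struct KeyFrame
    {
        float time;
        float value;    // one animation parameter (an angle, a coordinate, ...)
    };

    // Linear in-between of a parameter at time t, between two key frames:
    float InBetween( const KeyFrame& k0, const KeyFrame& k1, float t )
    {
        float u = ( t - k0.time ) / ( k1.time - k0.time );  // 0..1 between the keys
        return ( 1.f - u ) * k0.value + u * k1.value;
    }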
The Specular lighting depends on the location of the point you are lighting, plus:
* Light location, Surface normal, Eye position
* Light location, Eye position
* Surface normal, Eye position
* None of these -- it's a constant
* Light location, Surface normal
Answer: Light location, Surface normal, Eye position
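Why all three are needed, as a hedged C++ sketch of a Phong-style specular term (illustrative code, not the exact formula from the notes): the light direction is reflected about the surface normal and compared against the direction toward the eye.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3  sub( Vec3 a, Vec3 b )    { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
    static float dot( Vec3 a, Vec3 b )    { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  scale( Vec3 a, float s ) { return { a.x*s, a.y*s, a.z*s }; }
    static Vec3  normalize( Vec3 a )      { return scale( a, 1.f / sqrtf( dot(a,a) ) ); }

    float Specular( Vec3 point, Vec3 normal, Vec3 lightPos, Vec3 eyePos, float shininess )
    {
        Vec3 L = normalize( sub( lightPos, point ) );   // needs the light location
        Vec3 E = normalize( sub( eyePos,   point ) );   // needs the eye position
        Vec3 N = normalize( normal );                   // needs the surface normal
        Vec3 R = sub( scale( N, 2.f * dot(N,L) ), L );  // reflect L about N
        float d = dot( R, E );
        return d > 0.f  ?  powf( d, shininess )  :  0.f;
    }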
Normal OpenGL drawing (like you have been doing all along) is:
* Local illumination
* Global illumination
* Freudian illumination
* Fractal illumination
Answer: Local illumination
In ray-tracing, why is it a good idea to limit the number of reflective bounces?
* Otherwise, you run out of floating-point precision
* Otherwise, you look like a graphics nerd
* Otherwise, you get weird-looking artifacts
* Otherwise, you run the risk of computing forever
Answer: Otherwise, you run the risk of computing forever
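A structural sketch of both ray-tracing answers in this exam (every type and helper here is a hypothetical stand-in, not a real renderer): rays are traced through each pixel, and a MAX_DEPTH cap keeps a scene like two facing mirrors from recursing forever.

    const int MAX_DEPTH = 5;        // cap on reflective bounces

    struct Ray   { float ox, oy, oz, dx, dy, dz; };
    struct Color { float r, g, b; };
    struct Hit   { bool reflective; float kr; /* ...surface info... */ };

    bool  Intersect( const Ray&, Hit* );        // find the nearest surface hit
    Color LocalShade( const Hit& );             // ambient/diffuse/specular there
    Ray   Reflect( const Ray&, const Hit& );    // bounce the ray
    Ray   RayThroughPixel( int col, int row );  // eye through pixel (col,row)

    Color Trace( const Ray& ray, int depth )
    {
        if( depth > MAX_DEPTH )                 // otherwise two facing mirrors
            return Color{ 0.f, 0.f, 0.f };      // could recurse forever

        Hit hit;
        if( !Intersect( ray, &hit ) )
            return Color{ 0.f, 0.f, 0.f };      // background

        Color c = LocalShade( hit );
        if( hit.reflective )
        {
            Color r = Trace( Reflect( ray, hit ), depth + 1 );
            c.r += hit.kr * r.r;  c.g += hit.kr * r.g;  c.b += hit.kr * r.b;
        }
        return c;
    }

    void Render( Color* image, int width, int height )
    {
        // the image is produced by tracing a light ray through each pixel:
        for( int row = 0; row < height; row++ )
            for( int col = 0; col < width; col++ )
                image[ row*width + col ] = Trace( RayThroughPixel( col, row ), 0 );
    }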
To turn a shader (called Pattern) on, the C++ class shown in the notes has to say:
* Pattern->TurnShaderOn( )
* Pattern->ShaderOn( )
* Pattern->Use( )
* Pattern->UseShader( )
Answer: Pattern->Use( )
To turn a shader off and go back to the fixed-function (default OpenGL) pipeline functionality, using the C++ class shown in the notes, one says:
* Pattern->TurnShaderOff( )
* Pattern->DontUseShader( )
* Pattern->Use( 0 )
* Pattern->ShaderOff( )
Answer: Pattern->Use( 0 )
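A guess at what those Use( ) methods wrap (the real class in the notes may differ): glUseProgram( handle ) switches the pipeline over to the shader program, and glUseProgram( 0 ) restores the fixed-function pipeline.

    #include <GL/glew.h>

    class GLSLProgram
    {
        GLuint program;                              // handle from glCreateProgram( )
    public:
        void Use( )          { glUseProgram( program ); }  // shader on
        void Use( GLuint p ) { glUseProgram( p ); }        // Use( 0 ): fixed-function
    };

Usage, matching the exam's calls:

    Pattern->Use( );        // turn the Pattern shader on
    // ... draw the shaded geometry ...
    Pattern->Use( 0 );      // back to the fixed-function pipeline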
In Forward Kinematics, the inputs are:
* The animation parameters
* A desired acceleration
* A desired velocity
* A desired position
Answer: The animation parameters
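A Forward Kinematics sketch in C++ (an illustrative two-link arm): the inputs are the animation parameters (joint angles), and the output is where the end of the chain lands. Inverse Kinematics, as in the earlier question, runs the other way: given a desired end position, solve for the angles.

    #include <cmath>

    void ForwardKinematics2Link( float theta1, float theta2,   // animation parameters
                                 float len1, float len2,       // link lengths
                                 float* xEnd, float* yEnd )    // resulting position
    {
        *xEnd = len1*cosf(theta1) + len2*cosf(theta1 + theta2);
        *yEnd = len1*sinf(theta1) + len2*sinf(theta1 + theta2);
    }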
In Ray-tracing, the image is produced by:
* Tracing light spheres through each pixel
* Tracing light spheres from the origin
* Tracing light rays through each pixel
* Tracing light rays from the origin
Answer: Tracing light rays through each pixel
The framebuffer's Alpha value is used to specify:
* Saturation
* Intensity
* Lightness
* Transparency
Answer: Transparency
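The alpha channel drives blending. A typical OpenGL transparency setup (standard calls, not quoted from the exam):

    glEnable( GL_BLEND );
    glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );  // mix source and destination by alpha
    glColor4f( 1.f, 0.f, 0.f, 0.25f );                    // red at 25% opacity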
A 3D Printer accepts geometry in the form of a:
* Pentagonal mesh
* Triangle mesh
* Quadrilateral mesh
* Any form of mesh
Answer: Triangle mesh
In what order do these appear in the shader-enabled graphics pipeline?
* Fragment shader
* Rasterizer
* Vertex shader
Answer: Vertex shader, Rasterizer, Fragment shader (see the shader pair sketched at the end of this section)
Subsurface Scattering is a good way to realistically render materials like:
* Brushed metal
* Shiny plastic
* Smooth metal
* Wax
Answer: Wax
In Stereographics, you can tell how deep the Plane of Zero Parallax is in the scene by noticing:
* Where the left and right eye views are the same
* Where the left and right eye views are vastly different
* Where the left and right eye views get less lighting
Answer: Where the left and right eye views are the same
The built-in shader function texture( ) performs texture mapping by using:
* an st coordinate pair
* a texture unit and an st coordinate pair
* a texture unit
Answer: a texture unit and an st coordinate pair
The (r,g,b) output from the Fragment shader is in the variable:
* gl_FragColor
* gl_Color
* gl_FragColor3f
* gl_Color3f
Answer: gl_FragColor
The (x,y,z) output from the Vertex shader is in the variable:
* gl_Position
* gl_Vertex3f
* gl_Vertex
* gl_Position3f
Answer: gl_Position
The (x,y,z) input to the Vertex shader is in the variable:
* gl_Vertex3f
* gl_Position3f
* gl_Position
* gl_Vertex
Answer: gl_Vertex
The built-in variable gl_FragColor:
* is a vec3 that writes to the framebuffer
* is a vec4 that writes to the rasterizer
* is a vec4 that writes to the framebuffer
* is a vec3 that writes to the rasterizer
Answer: is a vec4 that writes to the framebuffer
A vertex shader creates an out variable. A fragment shader creates the same variable, but as an in variable. The step that connects the two is:
* the rasterizer
* the color blender
* the matrix multiplier
* the texture lookup
Answer: the rasterizer
The parts of your 3D scene that you place between the Plane of Zero Parallax and your eye appear:
* to pop out of the screen
* to be buried in the screen
* to appear on the left-hand side of the screen
* to appear on the right-hand side of the screen
Answer: to pop out of the screen
The input and output variables in the Vertex shader are of type:
* vec4f
* vec3f
* vec3
* vec4
Answer: vec4
The output from the Fragment shader is of type:
* vec4
* vec3f
* vec3
* vec4f
Answer: vec4
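Several of the shader questions above fit together in one minimal vertex/fragment pair, sketched here as C++ string constants (compatibility-profile GLSL; the names vST and uTexUnit are illustrative, not from the notes). The flow shows the pipeline order: the vertex shader reads the vec4 gl_Vertex and writes the vec4 gl_Position plus an out variable; the rasterizer interpolates that variable across the primitive; the fragment shader receives it as an in variable, calls texture( ) with a texture unit and an st coordinate pair, and writes the vec4 gl_FragColor to the framebuffer.

    // Vertex shader: vec4 gl_Vertex in, vec4 gl_Position out,
    // plus an 'out' variable handed to the rasterizer.
    const char* VERTEX_SHADER = R"(
    #version 330 compatibility
    out vec2 vST;                   // interpolated by the rasterizer
    void main( )
    {
        vST = gl_MultiTexCoord0.st;                                 // st coordinates
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;     // vec4 in, vec4 out
    }
    )";

    // Fragment shader: receives the rasterizer-interpolated 'in' variable
    // and writes the vec4 gl_FragColor to the framebuffer.
    const char* FRAGMENT_SHADER = R"(
    #version 330 compatibility
    in vec2 vST;                    // same name as the vertex shader's 'out'
    uniform sampler2D uTexUnit;     // the texture unit
    void main( )
    {
        gl_FragColor = texture( uTexUnit, vST );    // vec4 into the framebuffer
    }
    )";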