Final Exam Geog 361


Thermal Conductivity

(K) is the rate at which heat passes through a material, measured as the number of calories that will pass through a 1-cm cube of the material in 1 second when two opposite faces are maintained at a 1 °C difference in temperature (cal cm^-1 sec^-1 °C^-1). The conductivity of a material varies with soil moisture and particle size. Many rocks and soils are extremely poor conductors of heat
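The definition above can be sketched as a short calculation. This is a minimal illustration, not from the text; the K value and slab geometry below are hypothetical:

```python
# Heat conducted through a slab: Q = K * A * dT * t / d
# Units follow the definition above: K in cal cm^-1 sec^-1 degC^-1.
def heat_conducted(K, area_cm2, dT_c, seconds, thickness_cm):
    """Calories conducted through a slab of the given geometry."""
    return K * area_cm2 * dT_c * seconds / thickness_cm

# A 1-cm cube held at a 1 degC face-to-face difference for 1 second
# conducts exactly K calories: the definition of conductivity.
q = heat_conducted(K=0.005, area_cm2=1.0, dT_c=1.0, seconds=1.0, thickness_cm=1.0)
```

With the unit geometry of the definition, the result is simply the assumed K.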

Components of Landsat Multispectral Scanner(MSS) System on Landsat 1 through 5

Landsats 1, 2, 3: Sun-synchronous, descending equatorial crossing at 9:30 am; altitude 913 km; repeat period 18 days; FOV 11.6°; swath 185 km; GIFOV 79 m (237 m for the thermal band). Landsats 4, 5: Sun-synchronous, descending equatorial crossing at 9:45 am; altitude 705 km; repeat period 16 days; FOV 14.9°; swath 185 km; GIFOV 82 m

problems/solutions associated with soft-copy photogrammetry derived orthoimagery

Accuracy of a digital orthoimage is a function of: 1. image quality, 2. ground control, 3. photogrammetric triangulation, 4. the DEM (or field-surveyed points or digitized contours) used in orthoimage generation → DEM metadata is important. Large-scale urban DEMs → distortion of building edges. Check: drape the orthoimage back onto the DEM used to create it. Traditional orthorectification does not eliminate radial distortion and relief displacement of tall structures. Method for true orthophoto generation, traditional method: the BV of pixel a is obtained by starting at the x,y ground position of a, interpolating its elevation from the DEM, tracing up through the math model to the image, and interpolating the proper shade of gray. This works when there is no obstruction between a and the exposure station; the problem is with pixel b (obstructed). A solution to the problem is shown in the figure

Advantages of Radar

Active microwave energy penetrates clouds, making radar an all-weather remote sensing system. • Synoptic views of large areas, for mapping at 1:25,000 to 1:400,000; cloud-shrouded countries may be imaged. • Coverage can be obtained at user-specified times, even at night. • Permits imaging at shallow look angles, resulting in different perspectives that cannot always be obtained using aerial photography. • Senses in wavelengths outside the visible and infrared regions of the electromagnetic spectrum, providing information on surface roughness, dielectric properties, and moisture content

Additive color theory

Additive color theory: when light is mixed, white light = all colors of the visible spectrum; black = absence of light. Primary colors: blue, green, red (B, G, R). G + R = yellow; B + G = cyan; B + R = magenta. Complementary colors: yellow, cyan, magenta (C, M, Y) → when paired with the opposite primary, they produce white light (Y + B, M + G, C + R). Additive color theory is used with TV and CRT displays (RGB color guns): each color gun's intensity for each pixel is modulated according to the amount of that primary color present in the scene
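The additive mixtures above can be checked with 8-bit RGB triples (a simple sketch; full-intensity primaries are assumed):

```python
# Additive mixing of 8-bit RGB primaries (values are full-intensity guns).
def add_light(*colors):
    """Mix light sources additively, clipping each channel at 255."""
    return tuple(min(255, sum(ch)) for ch in zip(*colors))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

yellow  = add_light(GREEN, RED)     # G + R
cyan    = add_light(BLUE, GREEN)    # B + G
magenta = add_light(BLUE, RED)      # B + R
white   = add_light(yellow, BLUE)   # complementary pair -> white light
```

Pairing each complementary color with its opposite primary yields (255, 255, 255), i.e., white.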

Landsat Multispectral Scanner (MSS)

Bands 4, 5, 6, 7, 8; λ ranges 0.5-0.6, 0.6-0.7, 0.7-0.8, 0.8-1.1, 10.4-12.6 μm. Whiskbroom scanner; 79 × 79 m resolution for bands 4-7 (56 × 79 m is the Landsat picture element); 240 × 240 m for band 8. Data rate: 15 Mb/s. 6-bit (0-63) brightness values (quantization). 18-day temporal resolution for Landsats 1-3 (except poleward of 81°); 16 days for Landsats 4-5. Swath width 185 km; altitude 919 km (570 mi); orbital inclination 99° (near-polar), crossed the equator at ~9° from normal; orbited Earth every 103 min (14 orbits/day). Sun-synchronous orbit (the orbital plane precessed around Earth at the same angular rate at which Earth revolves around the Sun → crossed the equator at approximately the same local time, 9:30 to 10:00 am, on the illuminated side of Earth). On Landsats 1-5, the MSS mirror oscillates through an angular displacement of ±5.78° off-nadir → 11.56° FOV → 185 km swath

Landsat 4-5

Band λ ranges (μm): 1: 0.45-0.52; 2: 0.52-0.60; 3: 0.63-0.69; 4: 0.76-0.90; 5: 1.55-1.75; 6: 10.40-12.5; 7: 2.08-2.35. 30 × 30 m (ground-projected IFOV) for bands 1 to 5 and 7; 120 × 120 m for band 6. 8-bit quantization; 16-day repeat; altitude = 705 km; swath width = 185 km; orbital inclination = 98.2°. Whiskbroom scanner with visible, reflective-IR, mid-IR, and thermal-IR bands

Atmospheric Window

Beyond the visible region of the electromagnetic spectrum, we encounter the reflective infrared region from 0.7-3 μm and the thermal infrared region from 3-14 μm. The only reason we can use remote sensing devices to detect infrared energy in these regions is that the atmosphere allows a portion of the infrared energy to be transmitted from the terrain to the detectors. Regions that pass energy are called atmospheric windows; regions that absorb most of the infrared energy are called absorption bands. Water vapor, carbon dioxide, and ozone (O3) are responsible for most of the absorption. For example, atmospheric water vapor absorbs most of the energy exiting the terrain in the region from 5 to 7 μm, making it almost useless for remote sensing

Black and White photographic emulsion

Black and white photographic emulsions; various sensitivities are possible. Orthochromatic: sensitive to B and G light. Panchromatic: sensitive to UV, B, G, and R light (most common). Near-IR: sensitive to B, G, R, and near-IR light. Healthy vegetation: dark on a pan image because it absorbs 80-90% of B, G, and R energy; bright on near-IR photos because it reflects 40-70% of near-IR EMR

oblique aerial photograph

The camera's optical axis deviates more than ±3 degrees from vertical, so the sides of objects can be seen. Low oblique aerial photograph: the horizon is not visible. High oblique aerial photograph: the horizon is visible. Image interpretation cues: low oblique → angles within the FOV; high oblique → the horizon

Color photographic emulsion

Color photographic emulsions; normal color aerial photography. 1st step: just like that of B&W photography → exposed silver halide crystals in each layer are turned into black crystals of silver. The rest of the process depends on whether the film is a color negative or color reversal film; some kind of dye is used. Normal color film has a yellow filter to block blue energy; color infrared film focuses on the infrared band. 2nd step, uses dyes: the silver halides in each layer that turned black in step 1 are replaced with dyes of the complementary color of that layer. Black crystals in the G layer → replaced with magenta dye; black crystals in the R layer → replaced with cyan dye; black crystals in the B layer → replaced with yellow dye. Thus the negative is composed of CMY dyes. To produce a color positive print: white light is projected through the negative to expose a 3-layer color printing emulsion

Types of geometric correction

Commercial remote sensor data (SPOT, DigitalGlobe, Space Imaging) already have much of the systematic error removed. Unless otherwise processed, however, unsystematic random error remains in the image, making it non-planimetric (the pixels are not in their correct x,y planimetric map positions). Two common geometric correction procedures are used by scientists to make digital remote sensor data of value: image-to-map rectification and image-to-image registration. The general rule of thumb is to rectify remotely sensed data to a standard map projection so that it may be used in conjunction with other spatial information in a GIS to solve problems. Therefore, most of the discussion will focus on image-to-map rectification

Emissivity pt 3

Compaction: the degree of soil compaction can affect emissivity. Field of view (FOV): a single leaf measured with a very high resolution thermal radiometer will have a different emissivity than an entire tree crown viewed using a coarse spatial resolution radiometer. Wavelength: the emissivity of an object is generally considered to be wavelength dependent; for example, while the emissivity of an object is often considered constant throughout the 8-14 μm region, its emissivity in the 3-5 μm region may be different. Viewing angle: the emissivity of an object can vary with sensor viewing angle. We must take an object's emissivity into account when using remote radiant temperature measurements to determine the object's true kinetic temperature. This is done by applying Kirchhoff's radiation law

Scale of vertical aerial photography over flat terrain method 2

Computing scale by relating focal length to altitude AGL. Scale can be expressed in terms of focal length, f, and altitude above ground level, H, by equating geometrically similar triangles: S = f/H. If AGL is constant, increasing f yields larger images of objects at the film plane; if f is constant, images of objects become smaller as AGL increases. Example: 12-in focal length, AGL = 60,000 ft → S = f/H = 1 ft/60,000 ft; RF = 1:60,000; verbal scale: 1 in = 5,000 ft. Also note: H = f/S and f = H × S
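The worked example above can be reproduced directly from S = f/H (a minimal sketch; the function name is illustrative):

```python
def photo_scale(focal_length_ft, agl_ft):
    """Representative fraction of a vertical photo: S = f / H (same units)."""
    return focal_length_ft / agl_ft

# 12-inch (1 ft) focal length flown at 60,000 ft AGL
rf = photo_scale(1.0, 60000.0)        # representative fraction 1:60,000
ground_ft_per_inch = (1 / rf) / 12    # verbal scale: 1 in = 5,000 ft
```

Dividing the scale denominator by 12 converts it from "1 in = n in" to "1 in = n/12 ft", giving the verbal scale quoted in the text.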

Heat Transfer

Conduction occurs when a molecule or atom of one body transfers its kinetic energy to another by colliding with it; this is how a pan is heated on a stove. In convection, the kinetic energy of bodies is transferred from one place to another by physically moving the bodies; an example is the convective heating of air in the atmosphere in the early afternoon. The transfer of energy by electromagnetic radiation is of primary interest to remote sensing because it is the only form of energy transfer that can take place in a vacuum, such as the region between the Sun and the Earth

Aerial photographic positive

Creating a positive black-and-white photographic print from a black-and-white negative. Analysts do not commonly interpret negatives → reversal of tone and geometry of the real world; thus, a positive print is created from the negative. A dense (dark) beach area on the negative allows very little radiant flux (light) to pass through, whereas a clear (ocean) area allows much radiant flux through. EMR is focused through a lens onto undeveloped photographic paper. Perceived shades of gray in the image → actually variations in the abundance of very small grains of silver in the processed film. Development: the exposed film is bathed in an alkaline chemical → reduces exposed silver halide crystals. On the denser side, more crystals will have been exposed

Advanced Very High Resolution Radiometer(AVHRR) Sensor-System Characteristics

Day/night cloud and surface mapping; snow and ice discrimination; cloud and surface temperature; land/water interface

Types of Aerial Cameras 2

Different lenses are used for missions with varying objectives. Narrow lens: angular FOV < 60 degrees. Normal lens: angular FOV 60-75 degrees. Wide-angle lens: angular FOV 75-100 degrees. Super-wide-angle lens: angular FOV > 100 degrees. The wider the field of view (FOV), the greater the amount of Earth recorded on film at a given altitude AGL; the higher the altitude AGL, the greater the amount of Earth recorded on film by each lens

Film Digitization

Digitizing black-and-white and color films. Densitometer: used to measure the density characteristics of negative or positive transparency film. Various types of densitometers: flatbed and drum microdensitometers; linear or area array CCD densitometers

Magnitude of relief displacement d is:

Directly proportional to the difference in elevation, h, between the top of the displaced object and the local datum. Directly proportional to the radial distance, r, between the top of the displaced image and the PP. Inversely proportional to the altitude, H, of the camera above the local datum; thus, relief displacement of an object can be reduced by increasing flying height. h/H = d/r. Thus the amount of displacement, d, is directly proportional to object height, h, and its distance from the PP, r, and inversely proportional to altitude above the local datum, H: d = (h × r)/H. Solving for object height: h = (d × H)/r. Object height can thus be computed from relief displacement on a single vertical aerial photograph
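The two rearrangements above can be sketched as follows (the measurement values are hypothetical, chosen only to show the unit logic):

```python
def relief_displacement(h, r, H):
    """d = (h * r) / H: displacement on the photo, in the units of r."""
    return h * r / H

def object_height(d, r, H):
    """h = (d * H) / r: object height, in the units of H."""
    return d * H / r

# Hypothetical: 2 mm of displacement measured 80 mm from the principal
# point, with the camera 1,500 m above the local datum.
h = object_height(d=2.0, r=80.0, H=1500.0)   # height in meters
```

Note that d and r only need to share photo units (e.g., mm), while the result takes the units of H; the ratios cancel everything else.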

Relief displacement

Displacement on a truly vertical aerial photograph. Relief displacement: outward from the principal point (PP) for objects whose elevations are above the local datum, and inward for objects below the local datum. The direction of relief displacement is radial from the PP

Diurnal Temperature Cycle of Typical Materials

During the 24-hour diurnal cycle: Short wavelength energy from the Sun is intercepted by the Earth from sunrise to sunset (0.4 - 0.7 μm). Terrain reflects some of this energy back into the atmosphere during the day. Some absorbed energy is re-radiated as long wavelength thermal infrared radiation (3 - 14 μm). Outgoing longwave radiation peaks during the day when surface temperatures are highest. After sunset, short wavelength radiation stops, but outgoing longwave radiation continues throughout the night.

Earth Observer (EO-1)

Earth Observer (EO-1): launched November 21, 2000; deactivated March 30, 2017. • Sun-synchronous orbit, 98.7° inclination; follows one (1) minute behind Landsat 7 in the same ground track → satellite image intercomparisons. • Advanced Land Imager (ALI) (linear array, 10 bands, 0.4 to 2.35 microns, 30-m GSD). • Hyperion (hyperspectral) (242 bands, VNIR through SWIR, 30-m GSD, across-track ground swath of ~7.7 km). • Linear Etalon Imaging Spectrometer Array (LEISA) Atmospheric Corrector (hyperspectral) (256 bands, 0.9-1.6 microns, 250-m GSD); facilitates correction for atmospheric water vapor

Image offset(skew) caused by earth rotation effects

Earth-observing sun-synchronous satellites are normally in fixed orbits that collect a path (or swath) of imagery as the satellite makes its way from north to south in descending mode. Meanwhile, the Earth below rotates on its axis from west to east, making one complete revolution every 24 hours. This interaction between the fixed orbital path of the remote sensing system and the Earth's rotation on its axis skews the geometry of the imagery collected

Emissivity

Emissivity, ε, is the ratio between the radiant flux exiting a real-world selective radiating body (Mr) and that of a blackbody at the same temperature (Mb). All selectively radiating bodies have emissivities ranging from 0 to <1 that fluctuate depending upon the wavelengths of energy being considered. A graybody has a constant emissivity that is less than one at all wavelengths
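A short sketch of the ratio, plus the common Stefan-Boltzmann corollary that a selective radiator appears cooler than its kinetic temperature (the corollary and the sample values are assumptions, not stated in this card):

```python
def emissivity(M_real, M_blackbody):
    """e = Mr / Mb, both measured at the same kinetic temperature."""
    return M_real / M_blackbody

def radiant_temperature(e, T_kin_kelvin):
    """Apparent radiant temperature of a selective radiator:
    T_rad = e**0.25 * T_kin (follows from the Stefan-Boltzmann law)."""
    return e ** 0.25 * T_kin_kelvin

# A near-blackbody like water (e ~ 0.99) at 300 K appears only
# slightly cooler than its true kinetic temperature.
t_rad = radiant_temperature(0.99, 300.0)
```

For a graybody, e is the same constant at every wavelength, so this single correction factor applies across the spectrum.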

External Geometric error

External geometric errors are usually introduced by phenomena that vary in nature through space and time. The most important external variables that can cause geometric error in remote sensor data are random movements by the aircraft (or spacecraft) at the exact time of data collection, which usually involve altitude changes and/or attitude changes (roll, pitch, and yaw)

Microdensitometry Digitization

Flatbed microdensitometer: in a dense portion of the film → very little light is transmitted to the receiver; in a clear portion of the film → most of the light is transmitted to the receiver. A-to-D conversion to BV i,j,k (usually 8-bit bytes, 0-255); may not be high resolution

Pixel Size

Flight altitude for MSS surveys is determined by the desired pixel size and the size of the study area. The diameter of the spot size on the ground (D; nominal spatial resolution) is a function of the instantaneous field of view (β) and the altitude above ground level (H) of the sensor system, i.e., D = β × H
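D = β × H can be sketched in a few lines (the IFOV and altitude values are hypothetical; β is assumed to be given in milliradians, a common convention):

```python
def ground_spot_diameter(ifov_mrad, altitude_agl_m):
    """Nominal spatial resolution D = beta * H, with beta converted
    from milliradians to radians and H in meters."""
    return (ifov_mrad / 1000.0) * altitude_agl_m

# A 2.5-mrad IFOV flown at 1,000 m AGL yields 2.5-m ground pixels;
# doubling the altitude doubles the pixel size.
d = ground_spot_diameter(2.5, 1000.0)
```

Solving the same relation for H (H = D/β) is how the required flight altitude is chosen for a desired pixel size.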

Forward-looking infrared (FLIR) systems

For decades, military organizations throughout the world have funded the development of FLIR-type systems that look obliquely ahead of the aircraft and acquire high-quality thermal infrared imagery, especially at night. FLIR systems collect infrared energy based on the same principles as the across-track scanner previously discussed, except that the mirror points forward about 45° and projects terrain energy during a single sweep of the mirror onto a linear array of thermal infrared detectors

Radiant Energy

Fortunately for us, an object's internal kinetic heat is also converted to radiant energy (often called external or apparent energy): the energy emitted or reflected by objects on the Earth's surface and in the atmosphere, which is detected by remote sensing instruments. This energy includes electromagnetic radiation across a range of wavelengths, from visible light to infrared and beyond.

Inverse-square law

Halving the distance of a remote sensing detector from a point source quadruples the infrared energy received by that detector. The inverse-square law states that: The intensity of radiation emitted from a point source varies as the inverse square of the distance between source and receiver Thus, we can obtain a more intense, strong thermal infrared signal if we can get the remote sensor detector as close to the ground as practical.
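The law above is easy to verify numerically (a minimal sketch; distances are in arbitrary units relative to a reference):

```python
def relative_intensity(distance, reference_distance=1.0):
    """Received intensity relative to that at reference_distance,
    per the inverse-square law: I proportional to 1 / d**2."""
    return (reference_distance / distance) ** 2

ratio = relative_intensity(0.5)   # halving the distance -> 4x the energy
```

The same function shows the penalty of flying higher: doubling the distance cuts the received signal to one quarter, which is why the text recommends getting the detector as close to the ground as practical.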

Haze and Wratten filters

Haze filter (e.g., HF3): absorbs wavelengths shorter than 400 nm; energy shorter than 400 nm (ultraviolet) does not need to be collected. Band-pass filtering: a filter/film combination used to record EM energy in very specific wavelengths. Polarizing filter: passes vibration of EMR in just one plane at a time; application: water bodies. Transmission characteristics of selected Kodak Wratten filters used in aerial photography: the Wratten 12 (minus blue), used in color-IR photography, absorbs light shorter than 500 nm

Shadow length

The height of an object, h, may be computed by measuring the length of the shadow it casts, L, on vertical aerial photography. The tangent of the sun's elevation angle a equals the opposite side, h, over the adjacent side, the shadow length L: tan a = h/L. Solving for height → h = L × tan a. The sun's elevation angle, a, above the local horizon can be predicted using a solar ephemeris table, or the solar altitude may be empirically computed if well-defined shadows are on the photograph. Complicating factors: shadows cast on non-level (variable) terrain; shadows created by leaning objects; shadows not cast from the true top of the object; snow or other ground cover obscuring true ground level
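h = L × tan a in code (a minimal sketch; the shadow length and sun angle below are made up for illustration):

```python
import math

def height_from_shadow(shadow_length, sun_elevation_deg):
    """h = L * tan(a), with the Sun's elevation a above the local horizon."""
    return shadow_length * math.tan(math.radians(sun_elevation_deg))

# A 20-m shadow under a 45-degree sun implies a 20-m object.
h = height_from_shadow(20.0, 45.0)
```

The shadow length would itself be scaled from photo units to ground units using the photo scale before applying this formula.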

Aerial Photography mission planning

Ideal time of day to obtain aerial images: when the sun is between 30 and 52 degrees above the horizon (within 2 hours of solar noon); a sun angle greater than 52 degrees may produce hot spots (overly bright areas). Weather: aerial photography is ideally collected a few days after a frontal system passes; avoid strong winds and clouds. Flightline layout. Must know: desired photo scale; scale of base map; x,y coordinates of the four corner points of the study area; geographic size of the study area; average forward overlap of each frame (60%); average sidelap of each frame (20%); film format (9 × 9 in); camera focal length. Can then compute: necessary altitude AGL; number of flightlines; map distance between flightlines; ground distance between exposures; total number of exposures required

Thermal Crossover

If all materials had identical radiant temperatures, thermal infrared remote sensing would be ineffective due to lack of contrast. Fortunately, only during specific times (after sunrise and near sunset) do some materials, like soils, rocks, and water, exhibit identical radiant temperatures, making data acquisition unwise during these periods. However, materials differ in their thermal capacity, with water having a higher capacity than soils and rocks. This results in water experiencing minimal diurnal temperature fluctuations compared to the significant fluctuations seen in soils and rocks over a 24-hour period.

Spatial interpolation using coordinate transformation

Image-to-map rectification requires that a polynomial equation be fit to the GCP data using least-squares criteria to model the corrections directly in the image domain without explicitly identifying the source of the distortion. Depending on the distortion in the imagery, the number of GCPs used, and the degree of topographic relief displacement in the area, higher-order polynomial equations may be required to geometrically correct the data. The order of the rectification is simply the highest exponent used in the polynomial; higher orders incur greater computation cost. Generally, for moderate distortion in a relatively small area of an image (e.g., a quarter of a Landsat TM scene), a first-order, six-parameter, affine (linear) transformation is sufficient to rectify the imagery to a geographic frame of reference. This type of transformation can model six kinds of distortion in the remotely sensed data: translation in x and y; scale changes in x and y; skew; and rotation. Rectification is typically implemented as output-to-input (inverse) mapping
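The six-parameter transform named above can be written out directly (a sketch with illustrative coefficient names; in practice a0..b2 come from the least-squares fit to the GCPs):

```python
def affine(x, y, a0, a1, a2, b0, b1, b2):
    """First-order (six-parameter) transform mapping output map
    coordinates (x, y) back to input-image coordinates (x', y'):
        x' = a0 + a1*x + a2*y
        y' = b0 + b1*x + b2*y
    Translation, x/y scale changes, skew, and rotation are all
    expressible with these six coefficients."""
    return a0 + a1 * x + a2 * y, b0 + b1 * x + b2 * y

# Pure translation by (10, 20): unit scale, no skew, no rotation.
xp, yp = affine(100.0, 50.0, 10.0, 1.0, 0.0, 20.0, 0.0, 1.0)
```

This is the inverse-mapping direction: for each output (map) pixel, the transform locates the input-image position whose brightness should fill it.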

Scanning System one-dimensional Relief Displacement

Images acquired using an across-track scanning system also contain relief displacement. However, instead of being radial from a single principal point as in a vertical aerial photograph, the displacement takes place in a direction that is perpendicular to the line of flight for each and every scan line. In effect, the ground-resolution element at nadir functions like a principal point for each scan line. At nadir, the scanning system looks directly down on a tank, and it appears as a perfect circle. The greater the height of the object above the local terrain and the greater the distance of the top of the object from nadir (i.e., the line of flight), the greater the amount of one-dimensional relief displacement present. One-dimensional relief displacement is introduced in both directions away from nadir for each sweep of the across-track mirror.

Thermal infrared detection

In:Sb (indium antimonide), with a peak sensitivity near 5 μm; Ge:Hg (mercury-doped germanium), with a peak sensitivity near 10 μm; or Hg:Cd:Te (mercury-cadmium-telluride), sensitive over the range from 8-14 μm. The detectors are cooled to low temperatures (e.g., −196 °C to −243 °C) using liquid helium or liquid nitrogen. Cooling the detectors ensures that the radiant energy (photons) recorded by the detectors comes from the terrain and not from the ambient temperature of objects within the scanner itself.

Landsat 9--Operational Land Imager 2 (OLI-2) and--Thermal InfraRed Sensor 2 (TIRS-2)

Instruments onboard Landsat 9 are improved replicas of those on Landsat 8. Spectral bands: Band 1 Visible (0.43-0.45 μm) 30 m; Band 2 Visible (0.45-0.51 μm) 30 m; Band 3 Visible (0.53-0.59 μm) 30 m; Band 4 Red (0.64-0.67 μm) 30 m; Band 5 Near-Infrared (0.85-0.88 μm) 30 m; Band 6 SWIR 1 (1.57-1.65 μm) 30 m; Band 7 SWIR 2 (2.11-2.29 μm) 30 m; Band 8 Panchromatic (PAN) (0.50-0.68 μm) 15 m; Band 9 Cirrus (1.36-1.38 μm) 30 m; Band 10 TIRS 1 (10.6-11.19 μm) 100 m; Band 11 TIRS 2 (11.5-12.51 μm) 100 m. Radiometric resolution: 14-bit quantization (improved over 12-bit quantization for Landsat 8); improved SNR relative to Landsat 8. OLI-2 GSD: 30 m for all bands except the pan band (15-m GSD); TIRS-2 GSD: 100 m. Pushbroom sensors (both OLI-2 and TIRS-2); launched September 27, 2021

Internal geometric error

Internal geometric errors are introduced by the remote sensing system itself or in combination with Earth rotation or curvature characteristics. These distortions are often systematic (predictable) and may be identified and corrected using pre-launch or in-flight platform ephemeris (i.e., information about the geometric characteristics of the sensor system and the Earth at the time of data acquisition). Geometric distortions in imagery that can sometimes be corrected through analysis of sensor characteristics and ephemeris data include: skew caused by Earth rotation effects; scanning system-induced variation in ground resolution cell size; scanning system one-dimensional relief displacement; scanning system tangential scale distortion

Flight line of vertical aerial photography

Intervalometer: exposes film at particular time intervals, dependent on speed and AGL, resulting in the appropriate level of endlap for stereoscopic coverage. Usually 60% overlap (endlap), with 20-30% sidelap between flightlines. Multiple flightlines with 20-30% sidelap → a block of aerial photography

Orthophotography and DEMs

Orthoimages → created from RS imagery; each part of the terrain is independently corrected during rectification (rather than through a single perspective center). In orthoimagery, topographic relief displacement and variations in camera altitude are removed, so an orthophoto has good planimetric accuracy. Traditional versus digital photogrammetry; advances in digital elevation model (DEM) creation. Digital elevation model (DEM): a regular array of terrain elevations (x,y,z), normally obtained in a grid or hexagonal pattern. DEMs are created using 1. ground survey, 2. cartographic digitization of contours, and/or 3. photogrammetric measurements

Stereoscopic Measurement of Object Height or Terrain Elevation

Parallax: the apparent displacement in the position of an object, with respect to a frame of reference, caused by a shift in the position of observation → the basis for 3-D viewing. Stereoscopy: the science of perceiving depth using two eyes. Humans → two eyes (binocular vision), parallactic angle, eye base (interpupillary distance). Overlapping aerial photographs (usually 60% endlap) obtained at exposure stations along a flightline contain stereoscopic parallax

Linear or area array CCD Digitization

Personal flatbed/desktop linear array CCD digitizers for negatives, prints, or transparencies → often 300 to >3000 pixels per inch; inexpensive. The size of the scanner may be a problem for standard 9 × 9 in aerial photos. Some are specifically designed for RS digitization: large format; high dpi (dots per inch) possible; scans film in a series of rectangular image segments/tiles; radiometric calibration algorithms applied. For color digitization → scans through all filters sequentially for each tile; requires 3 different filters/bands

Intensity interpolation

Pixel brightness values must be determined. Unfortunately, there is no direct one-to-one relationship between the movement of input pixel values to output pixel locations. A pixel in the rectified output image often requires a value from the input pixel grid that does not fall neatly on a row and column coordinate. When this occurs, there must be some mechanism for determining the brightness value (BV) to be assigned to the output rectified pixel. This process is called intensity interpolation

Image to map rectification

Process by which the geometry of an image is made planimetric. Whenever accurate area, direction, and distance measurements are required, image-to-map geometric rectification should be performed. It may not, however, remove all the distortion caused by topographic relief displacement in images. The image-to-map rectification process normally involves pairing selected GCP image pixel coordinates (row and column) with their map coordinate counterparts (e.g., meters northing and easting in a Universal Transverse Mercator map projection). Image-to-image registration, by contrast, does not require a map projection

Radar history

RADAR: Taylor and Young (1922), experiments with a high-frequency radio transmitter on one side of the Anacostia River near Washington, DC and a receiver on the other side. RADAR ("radio detection and ranging"); RADAR systems now use microwave EMR instead of radio waves, but the acronym has persisted. 1936: antenna transmitter and receiver combined in the same instrument. Development of RADAR during WWII: navigation, target location. 1950s: continuous-strip imaging RADAR imagery (e.g., from SLAR); long-range standoff (Jensen, 2000); the military began using SLARs. 1960s: some systems declassified

Pre-dawn image acquisition

Reasons for acquiring pre-dawn thermal imagery: 1. Short-wavelength reflected energy can create annoying/problematic shadows in daytime imagery. 2. By 4:00 am, most of the materials in/on the terrain have relatively stable equilibrium temperatures → also maximum thermal contrast between land and water. 3. Convective wind currents usually calm down by the early morning, resulting in more accurate flight lines (less atmospheric turbulence, less crabbing of the aircraft into the wind) → minimized wind smear or wind streaks in the imagery

Kirchhoff's radiation law

Remember that the terrain intercepts incident (incoming) radiant flux (Φi). This incident energy interacts with terrain materials. The amount of radiant flux reflected from the surface (Φr), the amount of radiant flux absorbed by the surface (Φα), and the amount of radiant flux transmitted through the surface (Φτ) can be carefully measured as we apply the principle of conservation of energy and attempt to keep track of what happens to all the incident energy. The general equation for the interaction of spectral (λ) radiant flux with the terrain is Φiλ = Φrλ + Φαλ + Φτλ, or, dividing through by the incident flux, 1 = ρλ + αλ + τλ
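The dimensionless energy-balance form can be sketched as a small bookkeeping function (the reflectance/absorptance values are hypothetical):

```python
def transmitted_fraction(reflectance, absorptance):
    """Conservation of energy: 1 = r + a + t (all spectral, dimensionless),
    so the transmitted fraction is whatever the other two leave over."""
    return 1.0 - reflectance - absorptance

# For an opaque surface t = 0, so a = 1 - r; by Kirchhoff's radiation
# law the spectral emissivity of such a surface equals its absorptance.
t = transmitted_fraction(reflectance=0.1, absorptance=0.9)
```

This is why a strongly absorbing (low-reflectance, opaque) surface is also a strong emitter in the thermal infrared.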

Internal and external geometric error

Remotely sensed imagery typically exhibits internal and external geometric error. It is important to recognize the source of the internal and external error and whether it is systematic (predictable) or nonsystematic(random). Systematic geometric error is generally easier to identify and correct than random geometric error.

Aerial Cameras compared to human eyes

The retina in the human eye is analogous to film located at the film plane in the back of a camera. The lens (in both eyes and cameras) → used to focus reflected light onto the retina or film. The iris (eye): controls the amount of light allowed to reach/illuminate the retina, much like the adjustable diaphragm of a camera lens

Scale of Vertical Aerial Photography over Variable Terrain

Scale at any point whose elevation above sea level is h, photographed from an altitude above sea level of H, may be expressed as: S = f/(H − h). Compute scale at multiple locations (at multiple elevations); usually an average, or nominal, scale is computed: Savg = f/(H − havg)
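S = f/(H − h) in code (a minimal sketch; the lens and altitude values are hypothetical):

```python
def scale_at_elevation(f, H, h):
    """S = f / (H - h): f = focal length, H = flying altitude above
    sea level, h = terrain elevation, all in the same units."""
    return f / (H - h)

# 6-inch (0.5 ft) lens, 10,000 ft above sea level, terrain at 1,000 ft
s = scale_at_elevation(0.5, 10000.0, 1000.0)   # representative fraction
```

Higher terrain (larger h) shrinks the denominator, so scale is larger on ridgetops than in valleys on the same photograph; averaging h over the frame gives the nominal scale.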

Scale of vertical aerial photography over flat terrain

Scale of vertical aerial photography over flat terrain; two methods: 1. comparing the size of objects measured in the real world (or on a map) with the same objects on an aerial photo; 2. computing the relationship between camera focal length and aircraft altitude AGL. Method 1: comparing real-world object size versus photographic image size. The scale, S, of a vertical aerial photograph obtained over nearly level terrain is the ratio of the size of an object as measured on the aerial photograph, ab, to its actual measured length in the real world, AB: S = ab/AB (from similar triangles Lab and LAB). There may be a difference between the actual scale and the nominal scale of an aerial photograph

Silver Halide Crystals

Silver halide grains are created by combining silver nitrate and halide salts (chloride, bromide, and iodide) → resulting in a range of crystal sizes, shapes, and compositions. The grains are then chemically modified on the surface to increase light sensitivity. Density and size of silver halide crystals: film B is faster than film A since it requires less light for proper exposure. Resolution test; IMC (image motion compensation) → quality of the image

Emissivity pt 2

Some materials, like distilled water, have emissivities close to one (0.99) over the wavelength interval from 8-14 μm. Others, such as polished aluminum (0.08) and stainless steel (0.16), have very low emissivities. Two rocks lying next to one another on the ground could have the same true kinetic temperature but different apparent temperatures when sensed by a thermal radiometer, simply because their emissivities are different. The emissivity of an object may be influenced by a number of factors. Color: darker-colored objects are usually better absorbers and emitters (higher emissivity) than lighter-colored objects, which tend to reflect more of the incident energy. Moisture content: the more moisture an object contains, the greater its ability to absorb energy and become a good emitter; wet soil particles have a high emissivity similar to water

Spot 6 and 7 Sensors

SPOT 6 launched on September 9, 2012; SPOT 7 launched on June 30, 2014. Image product resolution: panchromatic 1.5 m; colour merge 1.5 m; multispectral 6 m. Spectral bands (simultaneous panchromatic and multispectral acquisitions): Panchromatic (450-745 nm), Blue (450-525 nm), Green (530-590 nm), Red (625-695 nm), Near-infrared (760-890 nm). Footprint: 60 km × 60 km

Problems/Solutions associated with Soft-copy photogrammetry derived DEMS

Tall structures and trees affect the creation of a photogrammetrically derived DEM, and the methods used for DEM editing affect accuracy. To remove all buildings, trees, etc., elevation postings must be manually edited. Most accurate for leaf-off rural scenes (which also require less editing)

Thermal infrared multispectral scanners

The Daedalus DS-1260, DS-1269, and Airborne Multispectral Scanner (AMS) are instruments used for environmental monitoring, particularly for gathering high spatial and spectral resolution thermal infrared data. The DS-1260 records data in 10 bands, including a thermal infrared channel ranging from 8.5 to 13.5 micrometers (μm). The DS-1269 incorporates thematic mapper middle-infrared bands, covering 1.55-1.75 μm and 2.08-2.35 μm. The AMS includes both a standard thermal infrared detector (8.5 to 12.5 μm) and a hot-target thermal infrared detector (3.0-5.5 μm). The diameter of the circular ground area viewed by the sensor, D, is determined by the instantaneous field of view (β, measured in milliradians) of the scanner and the altitude of the scanner above ground level (H): D = H × β

Hybrid approach to image rectification/registration

The difference is that in image-to-map rectification the reference is a map in a standard map projection, while in image-to-image registration the reference is another image. If a rectified image is used as the reference base (rather than a traditional map), any image registered to it will inherit the geometric errors existing in the reference image. Because of this characteristic, most serious Earth science remote sensing research is based on analysis of data that have been rectified to a map base. However, when conducting rigorous change detection between two or more dates of remotely sensed data, it may be useful to select a hybrid approach involving both image-to-map rectification and image-to-image registration

Radar Sar Satellites:

The discussion is based initially on the system components and functions of a real aperture side-looking airborne radar (SLAR). The discussion then expands to include synthetic aperture radars (SAR) that have improved capabilities. 1950s: military began using SLARs. 1960s: some systems declassified. Aperture means antenna. Real aperture radars use an antenna of fixed length (e.g., 1-2 m). SAR also uses a 1-2 m antenna but can synthesize a much larger antenna (e.g., 600 m). Ex: 11-m SAR antenna, synthetic length = 15 km

Radiant Flux

The electromagnetic radiation exiting an object is called radiant flux and is measured in watts. The concentration of the amount of radiant flux exiting (emitted from) an object is its radiant temperature (Trad)

Spatial interpolation

The geometric relationship between the input pixel coordinates (column and row, referred to as x', y') and the associated map coordinates of this same point (x,y) must be identified. A number of GCP pairs are used to establish the nature of the geometric coordinate transformation that must be applied to rectify or fill every pixel in the output image (x,y) with a value from a pixel in the unrectified input image (x',y'). This process is called spatial interpolation
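A minimal sketch of the six-parameter (affine) coordinate transformation this describes, fitted by least squares from GCP pairs. The GCP coordinates are invented for illustration, and numpy is assumed to be available:

```python
import numpy as np

# Each GCP pairs a map coordinate (x, y) with an input-image coordinate (x', y').
map_xy = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
image_xy = np.array([[10, 5], [110, 7], [12, 105], [112, 107]], dtype=float)

# Fit x' = a0 + a1*x + a2*y and y' = b0 + b1*x + b2*y by least squares.
A = np.column_stack([np.ones(len(map_xy)), map_xy])
coeffs, *_ = np.linalg.lstsq(A, image_xy, rcond=None)  # 3x2 matrix of a's, b's

def to_image(x, y):
    """Map an output (rectified) coordinate back to the input image."""
    xp, yp = coeffs.T @ np.array([1.0, x, y])
    return float(xp), float(yp)

print(to_image(50.0, 50.0))
```

In practice the fitted coefficients are then used to fill every output pixel (x, y) from the unrectified input (x', y'), exactly as the card describes.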

Sending and Receiving a Pulse of Microwave EMR - System Components

The pulse of electromagnetic radiation sent out by the transmitter through the antenna is of a specific wavelength and duration (i.e., it has a pulse length measured in microseconds, μsec). The wavelengths are much longer than the visible, near-infrared, mid-infrared, or thermal infrared energy used in other remote sensing systems; therefore, microwave energy is usually measured in centimeters rather than micrometers. The unusual names associated with the radar wavelengths (K, Ka, Ku, X, C, S, L, and P) are an artifact of the original secret work on radar remote sensing, when it was customary to use an alphabetic descriptor instead of the actual wavelength or frequency

Wien's displacement law

The relation between the true temperature of a blackbody (T) in kelvin and its peak spectral exitance (dominant wavelength): λmax = k / T, where k = 2898 μm K. The dominant wavelength provides valuable information about which part of the thermal spectrum we might want to sense in. For example, if we are looking for 800 K forest fires, which have a dominant wavelength of approximately 3.62 um, then the most appropriate remote sensing system might be a 3-5 um thermal infrared detector. If we are interested in soil, water, and rock with ambient temperatures on the Earth's surface of 300 K and a dominant wavelength of 9.66 um, then a thermal infrared detector operating in the 8-14 um region might be most appropriate.
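Wien's displacement law (λmax = 2898/T, with T in kelvin and λ in μm) reproduces both numbers used in the examples above:

```python
# Wien's displacement law: dominant wavelength (um) = 2898 / T (K).
def dominant_wavelength_um(temp_k):
    return 2898.0 / temp_k

print(round(dominant_wavelength_um(800.0), 2))  # 800 K forest fire -> 3.62
print(round(dominant_wavelength_um(300.0), 2))  # 300 K ambient terrain -> 9.66
```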

Scanning System tangential scale distortion

The tangential scale distortion and compression in the far range cause linear features such as roads, railroads, and utility rights-of-way to have an s-shaped or sigmoid distortion when recorded on scanner imagery. Interestingly, if the linear feature is parallel with or perpendicular to the line of flight, it does not experience sigmoid distortion.

Stefan-Boltzmann Law

The total spectral radiant exitance (Mb), measured in watts m^-2, leaving a blackbody is proportional to the fourth power of its temperature (T): Mb = σT^4. This is the Stefan-Boltzmann law. The sun produces more spectral radiant exitance at 6,000 K than the Earth at 300 K. As the temperature increases, the total amount of radiant energy measured in watts m^-2 (the area under the curve) increases, and the radiant energy peak shifts to shorter wavelengths.
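The T^4 dependence can be made concrete; σ is the Stefan-Boltzmann constant, and the 6,000 K versus 300 K comparison mirrors the card:

```python
# Stefan-Boltzmann law: M = sigma * T**4 in W m^-2 for a blackbody.
SIGMA = 5.6697e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_exitance(temp_k):
    return SIGMA * temp_k ** 4

sun = blackbody_exitance(6000.0)    # ~7.3e7 W m^-2
earth = blackbody_exitance(300.0)   # ~459 W m^-2
print(round(sun / earth))           # a 20x temperature ratio -> 20**4 = 160000
```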

Spatial interpolation using coordinate transformation RMSE

This process involves refining a set of six parameters and constants used to correct an input image. Initially, all 20 ground control points (GCPs) are used to compute these parameters, resulting in an initial set of coefficients. The error associated with each GCP is then calculated, and the ones contributing the most error are progressively removed through iterations. After each iteration, a new set of coefficients is computed with the remaining GCPs until the root mean squared error (RMSE) reaches a user-specified threshold. The aim is to eliminate GCPs that introduce the most error into the coefficient computation. Once the threshold is met, the final coefficients are used to rectify the input image to an output image in a standard map projection.

Image to map geometric rectification logic

Two basic operations must be performed to geometrically rectify a remotely sensed image to a map coordinate system: spatial interpolation and intensity interpolation

Measurement of object height from single aerial photographs

Two methods: (1) measurement of image relief displacement, and (2) measurement of shadow length
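Both methods reduce to simple geometry. A hedged sketch using the standard relations h = d·H/r (relief displacement) and h = L·tan(sun elevation) (shadow length); all input values below are invented for illustration:

```python
import math

def height_from_relief(d, r, flying_height_m):
    """h = d * H / r; d = relief displacement and r = radial distance from the
    principal point, both measured on the photo in the same units."""
    return d * flying_height_m / r

def height_from_shadow(shadow_len_m, sun_elev_deg):
    """h = shadow length * tan(sun elevation angle)."""
    return shadow_len_m * math.tan(math.radians(sun_elev_deg))

print(height_from_relief(2.0, 80.0, 1200.0))     # -> 30.0 (m)
print(round(height_from_shadow(40.0, 45.0), 1))  # -> 40.0 (m)
```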

Mosaics

Uncontrolled mosaic: formed by placing photographs together such that continuous coverage of an area is provided, without concern for preservation of consistent scale and correct positional relationships. Controlled mosaic: formed from individual photographs assembled such that the mosaic preserves correct positional relationships between features

Compute the root-mean-squared error of the inverse mapping function

Using the six coordinate transform coefficients that model distortions in the original scene, it is possible to use the output-to-input (inverse) mapping logic to transfer (relocate) pixel values from the original distorted image (x', y') to the grid of the rectified output image (x, y). However, before applying the coefficients to create the rectified output image, it is important to determine how well the six coefficients derived from the least-squares regression of the initial GCPs account for the geometric distortion in the input image. The method used most often involves the computation of the root-mean-square error (RMS error) for each of the ground control points
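The per-GCP RMS error described above is just the Euclidean distance between where the fitted transform predicts a GCP should fall in the input image and where it was actually measured; the coordinates below are invented for illustration:

```python
import math

def gcp_rms_error(predicted_xy, measured_xy):
    """RMS error for one GCP: sqrt((x'hat - x')**2 + (y'hat - y')**2)."""
    dx = predicted_xy[0] - measured_xy[0]
    dy = predicted_xy[1] - measured_xy[1]
    return math.sqrt(dx * dx + dy * dy)

print(round(gcp_rms_error((251.3, 417.8), (250.0, 419.0)), 3))  # -> 1.769
```

The total RMSE over all GCPs is then compared against the user-specified threshold, and the worst-contributing GCPs are dropped iteratively, as described in the spatial interpolation card.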

Vertical aerial photography

Vertical aerial photograph: camera's optical axis within ±3° of being vertical (perpendicular to Earth's level surface). Overlapping vertical photography enables photogrammetric derivation of: accurate planimetric (x,y location) base maps; topographic (z - elevation/height above sea level) base maps; accurate orthophotographs (aerial photographs that are geometrically accurate in x,y for all pixels). May obtain the vertical photo first; misalignment may occur; used in GIS

Fiducial marks and principal points and geometry

Vertical photographs: obtained by a camera pointed directly at the ground. Very few aerial photographs are truly vertical. Fiducial marks (4 or 8 of them) at edges and/or corners of photograph; lines that connect opposite pairs of fiducial marks intersect at the principal point (pp) → optical center of image. Ground nadir: point on ground vertically beneath center of camera lens at instant of exposure of photograph. Usually aerial photography is acquired with a certain degree of forward overlap, which is duplicate coverage by successive frames in a flightline (50%-60%)

Thermal properties of Terrain

Water, rocks, soil, vegetation, the atmosphere, and human tissue all have the ability to conduct heat directly through them (thermal conductivity) onto another surface and to store heat (thermal capacity). Some materials respond to changes in temperature more rapidly or slowly than others (thermal inertia)

NOAA GOES I-M Imager Sensor System Characteristics

Cloud, pollution, and haze detection; discriminates between water clouds and snow or ice during daytime; detects fires and volcanoes; cloud-drift winds, severe storms, cloud-top height, heavy rainfall; low-level water vapor

Color IR- Aerial Photography

Color-IR film records reflected EMR from just below 0.3 um to just above 0.9 um (UV-NIR) in 3 emulsion layers. CIR film is exposed through a dark yellow filter. Dyes are "offset by one" compared with standard color photography: G layer gets Y dye; R layer gets magenta dye; IR layer gets cyan dye. White light is projected through the negative. Exhibits a color balance shift compared with normal color photos. Near-infrared energy is recorded throughout all three layers

Filters

Filters out certain wavelengths before they reach the film plane; subtracts some light before it exposes the film. Ex: a red filter absorbs (subtracts) blue and green light. A yellow filter absorbs blue (allows green and red to pass). Since blue scatters easily in the atmosphere, a yellow (i.e., minus-blue) filter is usually used to remove some path radiance (esp. UV and blue light); very important for NIR photos. Haze filter: absorbs wavelengths shorter than 400 nm. Band-pass filtering: filter/film combo used to record EM energy in very specific wavelengths. Polarizing filter: passes vibration of EMR in just one plane at a time; application: water bodies. Transmission characteristics of selected Kodak Wratten filters used in aerial photography: HF3 haze filter absorbs light shorter than 400 nm (we don't need to collect energy shorter than 400 nm because it is ultraviolet); Wratten 12 (minus-blue), used in color IR photography, absorbs light shorter than 500 nm

Multispectral imaging using discrete detectors and scanning mirrors

Landsat Multispectral Scanner (MSS); Landsat Thematic Mapper (TM); Landsat Enhanced Thematic Mapper Plus (ETM+) - most popular. NOAA Geostationary Operational Environmental Satellite (GOES). NOAA Advanced Very High Resolution Radiometer (AVHRR). NASA and ORBIMAGE, Inc., Sea-viewing Wide Field-of-view Sensor (SeaWiFS). Daedalus, Inc., Aircraft Multispectral Scanner (AMS). NASA Airborne Terrestrial Applications Scanner (ATLAS)

quantization

Refers to the process of assigning discrete digital values (brightness values) to the continuous analog signal collected by a remote sensing instrument: the sensor's radiometric range is divided into a fixed number of levels (e.g., 256 for 8-bit data), and each continuous measurement is assigned the integer level into which it falls.
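A minimal sketch of the idea, assuming an example 0-5 V analog range and 8-bit output (both values are illustrative, not from the card):

```python
# Quantization: map a continuous sensor voltage onto integer brightness values.
def quantize(voltage, v_min=0.0, v_max=5.0, bits=8):
    levels = 2 ** bits                        # 256 levels for 8-bit data
    frac = (voltage - v_min) / (v_max - v_min)
    return min(levels - 1, max(0, int(frac * levels)))

print(quantize(2.5))  # mid-scale -> DN 128
print(quantize(5.0))  # full-scale -> DN 255
```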

Cross sections

see lecture slide #4. Black-and-white film: panchromatic (blue-, green-, and red-sensitive) emulsion of silver halide crystals; base; anti-halation layer. Normal color film: blue-sensitive layer [yellow dye-forming layer]; yellow internal filter (blocks blue light); green- (and blue-) sensitive layer [magenta dye-forming layer]; red- (and blue-) sensitive layer [cyan dye-forming layer]; base; anti-halation layer. Black-and-white infrared film: near-infrared sensitive layer; base; anti-halation layer. Color infrared film: near-infrared (and blue) sensitive layer [cyan dye-forming layer]; green- (and blue-) sensitive layer [yellow dye-forming layer]; red- (and blue-) sensitive layer [magenta dye-forming layer]; base; anti-halation layer

NASA Advanced Spaceborne Thermal Emission and ReflectionRadiometer (ASTER) Characteristics

see slide show

SPOT 1,2,&3 High Resolution Visible (HRV)sensors and SPOT 4 High Resolution VisibleInfrared (HRVIR) and VEGETATION sensor

see slide show The SPOT (Satellite Pour l'Observation de la Terre) satellite series is a constellation of Earth observation satellites operated by Airbus Defence and Space. These satellites are used primarily for civilian applications such as environmental monitoring, urban planning, agriculture, forestry management, and disaster management. Here are some key points about the SPOT satellite program: Purpose: SPOT satellites are designed to provide high-resolution optical imagery of Earth's surface. The imagery captured by these satellites is used for various applications in fields like cartography, agriculture, environmental monitoring, and urban planning. Imaging Capabilities: SPOT satellites are equipped with high-resolution optical sensors capable of capturing detailed images of the Earth's surface. They can provide imagery with different spectral bands, allowing for analysis of vegetation, land cover, and other environmental features.

Types of Aerial Cameras

Single-lens mapping (metric) cameras; multiple-lens or multiple-band cameras; panoramic cameras; digital cameras; misc. cameras. Single-lens metric cameras obtain most aerial photos. Cartographic cameras → calibrated to acquire photos of the best geometric and radiometric quality. A single-lens mapping camera consists of camera body, lens cone assembly, shutter, film feed and take-up motorized transport assembly at the film plane, and aircraft mounting platform. Filters are placed in front of the lens. Lens cone: consists of a single, multielement lens that projects undistorted images of the real world onto the film plane

Thermal Capacity

(c) - the ability of a material to store heat. It is measured as the number of calories required to raise the temperature of 1 g of material by 1° C (cal g^-1 °C^-1). Water has the highest thermal capacity (1.00); it stores heat very well relative to all the other materials

Aerial Photographic negative

1st step: creating a black-and-white aerial photography negative. When light exposes a silver halide crystal suspended in the film emulsion, the entire crystal becomes exposed, regardless of size. The bond between silver and halide is weakened when exposed to light. Emulsion exposed to light contains an invisible image of the object (latent image). To convert the latent image on the emulsion into a negative, it must be developed. When the latent image is developed via chemicals, areas of emulsion exposed to intense light turn to free silver and become black (dense or opaque). Areas receiving no light become clear if the support is, typically, transparent plastic film. The degree of darkness of the developed negative is a function of the total exposure that caused the emulsion to form the latent image: more light, more dark crystals

What is a pixel

A single point in a graphic image; the term is a contraction of "picture element"

Earth Observing System - Terra Instruments

ASTER - Advanced Spaceborne Thermal Emission and Reflection Radiometer CERES - Clouds and the Earth's Radiant Energy System MISR - Multi-angle Imaging Spectroradiometer MODIS - Moderate-resolution Imaging Spectroradiometer MOPITT - Measurement of Pollution in the Troposphere

Imaging spectrometers

NASA Jet Propulsion Lab (JPL/Caltech) Airborne Visible/Infrared Imaging Spectrometer (AVIRIS); Canadian Compact Airborne Spectrographic Imager-2 (CASI-2); NASA Terra Moderate Resolution Imaging Spectroradiometer (MODIS); NASA Earth Observer

Scanning System-induced variation in Ground Resolution cell size

An orbital multispectral scanning system scans through just a few degrees off-nadir as it collects data hundreds of kilometers above the Earth's surface (Landsat 7 data are collected at 705 km AGL). This configuration minimizes the amount of distortion introduced by the scanning system. Conversely, a suborbital multispectral scanning system may be operating just tens of kilometers AGL with a scan field of view of perhaps 70°. This introduces numerous types of geometric distortions that can be difficult to correct.
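The off-nadir growth of the ground resolution cell follows the standard scanner geometry (along-scan ≈ H·β·sec²θ, along-track ≈ H·β·secθ); the altitude and IFOV below are illustrative values, not from the card:

```python
import math

def cell_size(h_m, beta_mrad, theta_deg):
    """(along-scan, along-track) cell dimensions in meters at scan angle theta."""
    beta = beta_mrad / 1000.0
    sec = 1.0 / math.cos(math.radians(theta_deg))
    return h_m * beta * sec * sec, h_m * beta * sec

print([round(v, 1) for v in cell_size(10000.0, 2.5, 0.0)])   # nadir: 25 m square
print([round(v, 1) for v in cell_size(10000.0, 2.5, 35.0)])  # near edge of 70-deg FOV
```

The cell stays square at nadir and grows fastest in the along-scan direction, which is why wide-FOV suborbital scanners show strong far-range distortion.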

Landsat TM

Band 1 (0.45-0.52 um) (blue): penetration of water bodies, land use, soil and vegetation. Band 2 (0.52-0.60 um) (green): covers spectral region between blue and red chlorophyll absorption bands; sensitive to green reflectance of healthy green vegetation. Band 3 (0.63-0.69 um) (red): red chlorophyll absorption band of healthy green vegetation; vegetation discrimination; soil and geologic formation boundary discrimination; less atmospheric attenuation than with bands 1 and 2; upper bound (0.69 um cutoff) likely chosen due to the 0.68-0.75 um region (vegetation reflectance crossover). Band 4 (0.76-0.90 um) (reflective infrared): lower bound is above the upper bound of the vegetation reflectance crossover region (0.75 um); vegetation biomass estimation, crop identification, land/water and soil/crop contrasts. Band 5 (1.55-1.75 um) (mid-infrared): plant turgidity (plant water content); discriminates clouds, snow, and ice. Band 6 (10.4-12.5 um) (thermal infrared): soil moisture estimation, geothermal activity detection, vegetation-stress detection, vegetation classification, geologic thermal inertia mapping; provides information about temperature. Band 7 (2.08-2.35 um) (mid-infrared): discriminates geologic formations; rock hydrothermal alteration mapping.

Types of aerial cameras 3

Intervalometer - exposes film at particular time intervals, dependent on speed and AGL, that will result in the appropriate level of endlap for stereoscopic coverage. Aerial cameras typically expose film 24 cm (9.5 in) wide in rolls >= 500 ft in length, depending on film thickness. Typical exposure = 9 x 9 in (23 x 23 cm). Image motion compensation (IMC) magazines: move film across the focal plane quickly (0-60 mm/sec); IMC shifts the platen pressure plate, with film attached via a vacuum, in the flight direction in accordance with the velocity-to-height ratio (v/h) and focal length of the lens. Multiple-lens (multiband) cameras → multiple wavelengths sensed using differing film and/or filter combos → photographs taken simultaneously. Panoramic cameras: use a rotating lens (or prism) to create a thin strip of imagery perpendicular to the line of flight → exposures typically vertical in center and oblique near ends; for low-altitude photography, pan across the flight line from horizon to horizon (180° FOV). Spectral bands = channels; numbers in parentheses are the wavelengths for the specific spectrum. Digital camera: a CCD area array is used; CCDs are more radiometrically sensitive than silver halide crystals in film; using an RGB filter wheel (or a dichroic grating) at the instant of exposure, the camera records three versions of the scene using 3 filters

Image to image registration

Is the translation and rotation alignment process by which two images of like geometry and of the same geographic area are positioned coincident with respect to one another so that corresponding elements of the same ground area appear in the same place on the registered images This type of geometric correction is used when it is not necessary to have each pixel assigned a unique x,y coordinate in a map projection. For example, we might want to make a cursory examination of two images obtained on different dates to see if any change has taken place.

Kinetic heat

Kinetic heat, also known as internal or true heat, refers to the energy possessed by particles of matter in random motion. All objects with a temperature above absolute zero exhibit this motion. When these particles collide, they change their energy state and emit electromagnetic radiation. The amount of heat can be measured in calories, which represent the energy required to raise the temperature of 1 gram of water by 1 degree Celsius. Temperature, or the concentration of this heat, can be measured using a thermometer. This in situ measurement is commonly used in medical contexts, such as when assessing illness. Additionally, the true kinetic internal temperature of substances like soil or water can be measured by physically touching them with a thermometer.

EO-1 Sensor Characteristics

MS bands 1-7 (30 x 30 m), with band 8 being panchromatic (10 x 10 m). EO-1 Hyperion hyperspectral sensor: 220 bands from 0.4 to 2.4 um at 30 x 30 m. EO-1 LEISA Atmospheric Corrector (LAC): 256 bands from 0.9 to 1.6 um at 250 x 250 m. The Advanced Land Imager (ALI) is a pushbroom radiometer; Hyperion is a pushbroom spectroradiometer; LAC uses area arrays. Swath widths: ALI = 37 km; Hyperion = 7.5 km; LAC = 185 km. 16-day revisit. 705 km, sun-synchronous; inclination = 98.2°; equatorial crossing = Landsat 7 + 1 min

Landsat Enhanced Thematic Mapper plus (EMT+) SLC-off problem

May 31, 2003: the scan line corrector (SLC), which compensates for the forward motion of Landsat 7, failed. Landsat 7 ETM+ is still capable of acquiring useful image data with the SLC turned off, esp. within the central portion of the scene. Middle of scene (~22 km wide swath on a Level 1 (L1G, L1Gt, L1T) product): minimal duplication or data loss, very similar quality compared with previous ("SLC-on") images. 22% of a Landsat 7 scene is lost due to the SLC failure. USGS/NASA developed ETM+ gap-filling methods: merge multiple ETM+ images

Whiskbroom Scanners

• Mirror scans terrain perpendicular to flight direction • As it scans, focuses radiant flux from terrain onto discrete detector elements • Detectors convert reflected solar radiant flux measured within each IFOV in scene into electrical signal • Detector elements placed behind filters that pass portions of spectrum • MSS: 4 sets of filters and detectors; TM has 7 • Problem: short residence time of detector in each IFOV • Use of broad spectral bands

Subtractive color theory

Mixing equal proportions of RGB pigments or dyes (not light) → black (not white). Applies to paint/ink or filters. Based on complementary color dyes (CMY). A yellow filter absorbs all blue light, a magenta filter absorbs all green light, and a cyan filter absorbs all red light. Could combine/superimpose filters: magenta dye filter + cyan dye filter → all but blue light is subtracted (magenta filter subtracts G; cyan filter subtracts R). Y + C filters → only G light remains. Y + M filters → only R light is perceived. C + M + Y → subtracts/filters out all white light → black

