Notes for this presentation on remote sensing

Satellite and Aerial Image Analysis

Start off with images from the Mesopotamian Marshlands in Iraq: a composite Landsat image from the 1970s, and a Landsat 7 image from 2000. Multispectral (multiple sensors, each attuned to take images at some small portion of the spectrum), false color. Vegetation in red, water in blue.

Massive reduction of the marshlands (to 15% of their former extent); the Tigris and Euphrates were diverted by Saddam Hussein in retaliation for the Marsh Arabs' uprising in 1991. Reputed site of the Garden of Eden, and part of the Middle East flyway. The natural hydrology regime has been partially restored, with good results, though much is left to be worked out on the ground.

Landsat 7 ... latest in the Landsat series of earth observation satellites (the first was launched in 1972). 15 m panchromatic pixel size, 30 m for the visible and near-infrared multispectral bands, 60 m for thermal infrared. 16-day orbital cycle (full earth coverage), 57,784 scenes, 3.8 gigabytes per scene.

Sources: EO Newsroom, Landsat Info

Remote Sensing

This is a huge field, multidisciplinary, with a wide range of applications.

Remote Sensing = "Acquiring information without touching".
Earth Observation = "Remote Sensing of the Earth" (rather than other planets)
Photogrammetry = "Science or art of obtaining reliable measurements or information from photographs or other sensing systems."

Applications ... anything that happens on the planet.
Geology, agriculture, meteorology, land management, oceanography, urban planning

Earthrise from the Apollo missions. The paradox of much modern technology: systems designed to help blow up the earth 1000 times over are redeployed to understand and aid the planet.

Sources: Basic Materials

Imaging Tech

Satellites and aerial photography. Aerial photography is more detailed, but logistically troublesome.
Sensors mounted on a satellite platform are expensive, but offer continuous coverage, massive data volumes, and reach into inaccessible areas.

Satellites in geosynchronous orbit (speed matched to the earth's rotation) are usually weather satellites. Others use a sun-synchronous or near-polar orbit: the earth revolves beneath the satellite, allowing complete coverage. Orbits are corrected from ground control.
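
As a quick aside (not in the slides), the geosynchronous altitude falls straight out of Kepler's third law; a minimal check in Python, using standard values for Earth's gravitational parameter and the sidereal day:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.378137e6        # equatorial radius, m
T_SIDEREAL = 86164.1        # one sidereal day, s

# Kepler's third law: T = 2*pi*sqrt(a^3/mu)  =>  a = (mu*T^2 / 4*pi^2)^(1/3)
a = (MU_EARTH * T_SIDEREAL ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
altitude_km = (a - R_EARTH) / 1000
print(f"geostationary altitude ~ {altitude_km:,.0f} km")  # ~35,786 km
```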

Passive sensors receive solar reflected or emitted electromagnetic radiation - visible, infrared, microwave. Active sensors project and receive radiation - radar, laser, microwave. Each has its advantages/disadvantages (radar is weather independent but difficult to process).

Panchromatic sensors - sensitive to all of the visible spectrum. Multispectral - multiple sensors, each sensitive to a different, small portion of the spectrum.
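
A minimal sketch of how a false-color composite like the marshland images is assembled: route the near-infrared band to the red channel so vegetation shows up red. The band arrays here are hypothetical inputs; real Landsat data handling is omitted.

```python
import numpy as np

def false_color(nir, red, green):
    """Stack NIR/red/green bands into an RGB array so vegetation
    (strong NIR reflectance) appears red, as in the Landsat
    marshland images. Inputs are 2-D arrays of reflectance."""
    def stretch(band):
        lo, hi = np.percentile(band, (2, 98))   # simple contrast stretch
        return np.clip((band - lo) / (hi - lo), 0, 1)
    return np.dstack([stretch(nir), stretch(red), stretch(green)])
```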

Various complicated processing and distribution steps for the raw data. Calibration, validation. Corrections made for atmospheric effects. Etc.

Interpretation ... Manual, Semi-Automated, Automatic ... the last is what we're talking about.

Sources: Basic Materials

Pixel Classification

The primary step of interpretation is pixel classification. Each Landsat pixel covers a 30 m square on the ground. Classification is based on the reflectance profile across multiple bands. This doesn't mean the entire pixel falls in the same class, but the majority should.

Made easier by prior knowledge: the orientation of the satellite, and the actual place being examined (ocean, land, desert, ice cap). Vegetation typically gives a high response at infrared wavelengths, soil gives a uniform response. Microwave radiation is sensitive to the amount of snow cover. There are detailed profiles of different phenomena.
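
A toy illustration of profile-based classification, in the minimum-distance style: assign each pixel to the class whose profile is nearest in spectral space. The class profiles below are made-up numbers for the sketch, not real spectral library values.

```python
import numpy as np

# Hypothetical mean reflectance profiles per class across 4 bands
# (blue, red, NIR, thermal) -- real profiles come from training data
# or published spectral libraries.
PROFILES = {
    "water":      np.array([0.08, 0.05, 0.03, 0.10]),
    "vegetation": np.array([0.05, 0.06, 0.45, 0.15]),  # high NIR response
    "soil":       np.array([0.15, 0.18, 0.20, 0.22]),  # fairly uniform
}

def classify(pixels):
    """Minimum-distance classifier: each row of `pixels` holds one
    pixel's band values; return the nearest class name for each."""
    names = list(PROFILES)
    means = np.stack([PROFILES[n] for n in names])        # (classes, bands)
    d = np.linalg.norm(pixels[:, None, :] - means, axis=2)
    return [names[i] for i in d.argmin(axis=1)]

print(classify(np.array([[0.06, 0.07, 0.50, 0.16]])))     # -> ['vegetation']
```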

This is an image of vegetation classification in China, with spectral profiles.

Sources: Basic Materials

MODIS Rapid Response Fire Maps

MODIS "Moderate Resolution Imaging Spectroradiometer" instrument on board
Terra/Aqua .. new earth observation satellites with very frequent coverage, school bus sized.

These images are from the Southern California wildfires last summer. The areas outlined in red are active fire zones. This information is made available to various agencies to coordinate accurate fire-fighting responses, and recovery actions following the fire (mudslides).

Algorithm: brightness temperature is examined in multiple channels; certain conditions identify potential fire pixels and mask out cloud. The context and surrounding pixels of each potential fire pixel are then examined, to screen out false alarms and detect low-temperature fires. The algorithm must also deal with sun glint, and with effects from desert and coastal regions.
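
A heavily simplified sketch of the contextual idea: flag a pixel when its 4 µm brightness temperature stands out from the background statistics of surrounding non-fire pixels. The thresholds are illustrative, not the exact values from the paper.

```python
import numpy as np

def detect_fires(t4, t11, window=21, day=True):
    """t4, t11: 2-D arrays of 4um and 11um brightness temperatures (K).
    Returns a boolean fire mask. Cloud/sun-glint masking omitted."""
    thresh = 310.0 if day else 305.0            # absolute test, Kelvin
    candidate = (t4 > thresh) & (t4 - t11 > 10.0)

    fires = np.zeros_like(candidate)
    half = window // 2
    for i, j in zip(*np.nonzero(candidate)):
        win = t4[max(0, i - half):i + half + 1, max(0, j - half):j + half + 1]
        bg = win[win < thresh]                  # background = non-fire pixels
        if bg.size and t4[i, j] > bg.mean() + 3 * bg.std():
            fires[i, j] = True                  # survives the contextual test
    return fires
```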

Sources: "An Enhanced Contextual Fire Detection Algorithm for MODIS", Volume 87, Issues 2-3 , 15 October 2003, Pages 273-282 Remote Sensing of Environment

Remote Sensing at Sussex

There is some remote sensing activity at Sussex, at a similar level of algorithmic complexity. Dominic Kniveton in Geography participates in the WetNet project. The goal is rainfall estimation for regions where little ground data is available, like the Okavango watershed in Angola, where warfare has disrupted scientific activity. The latest work uses infrared and microwave data to measure cloud density and height, leading to a rough estimate of rainfall. Better than nothing.
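
For flavour, the simplest scheme in this family, the infrared-only GOES Precipitation Index (not necessarily what the Sussex work uses), fits in a few lines: rainfall is taken proportional to how often cloud tops are colder than 235 K.

```python
import numpy as np

def gpi_rainfall(ir_temps_k, hours):
    """GOES Precipitation Index-style estimate: accumulated rainfall
    over a grid box is proportional to the fraction of observations
    with cloud tops colder than 235 K (high, dense convective cloud).
    ir_temps_k: array of IR brightness temperatures over the period."""
    cold_fraction = np.mean(ir_temps_k < 235.0)
    return 3.0 * cold_fraction * hours   # mm, with the canonical 3 mm/h rate
```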

Sources: http://www.geog.ucl.ac.uk/~mtodd/papers/jhyd_2003/2003_Kidd_JoH.pdf
http://www.geog.susx.ac.uk/research/cec/as.html

Ice Edge and Icebergs

Finally we start to see some computer vision type techniques.

The Northern View is a project of the European Space Agency, offering products about the "North". This is a vast, largely inaccessible region of global importance in climate change. The melting of polar ice caps will have unpredictable results (decreasing salinity in oceans, decreased albedo, positive or negative feedback).

It's an important shipping channel, so information is needed on ice thickness and iceberg locations.

An interesting application is ice edge monitoring. The ice edge is an active location in the food chain, and attracts wildlife and Inuit hunters. Traditional knowledge for safely and efficiently navigating to the ice edge has become less effective, possibly due to climate change. The product supplies information on landfast ice, moving ice, and ice at risk of fracturing.

Pixel categorization techniques are similar to the previous applications. From there, edge and motion detection are applied to find the ice edge and rifts and fractures in progress, and to make predictions about iceberg flow.
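
A crude stand-in for the edge detection stage, assuming pixel classification has already produced a binary ice/water mask:

```python
import numpy as np

def ice_edge(ice_mask):
    """Mark edge pixels: ice pixels with at least one water neighbour.
    A toy stand-in for the edge detection stage of a chain like
    Northern View's. (np.roll wraps at borders; good enough here.)"""
    m = ice_mask.astype(bool)
    neighbours = [np.roll(m, 1, 0), np.roll(m, -1, 0),
                  np.roll(m, 1, 1), np.roll(m, -1, 1)]
    all_ice_around = np.logical_and.reduce(neighbours)
    return m & ~all_ice_around
```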

Sources: Northern View

Neural Networks

Neural networks are very commonly applied in geography and remote sensing. This example employs neural networks to estimate snow depth and snow water equivalent (SWE) from microwave radiation measurements. Traditionally, theoretical equations or empirical algorithms have been used: SPD and Chang are rule-based algorithms run on observed microwave data in several frequency bands; HUT is based on theoretical models of how snow scatters microwaves, inverted into an iterative formula to obtain SWE and depth from microwave data.

The neural network was a multilayer perceptron, trained using back-propagation to minimize mean square error. The ANN was trained on simulated data generated from the HUT model, and on empirical data. There were 4 input neurons: the 19 and 37 GHz vertical and horizontal brightness temperatures. Several networks were trained, and the best selected by statistical measures on the validation set. The network trained on experimental data performed as well as or better than the traditional techniques. Note, this was snowfall in Finland, not in Boston. This work was just published last week!
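
A toy version of the setup: a one-hidden-layer perceptron trained by gradient descent on mean square error, with the same 4 inputs. The training data below is a synthetic stand-in, not the paper's HUT simulations or SSM/I observations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 4 inputs are 19/37 GHz V and H brightness
# temperatures (K), target is SWE via a made-up toy relation.
X = rng.uniform(180, 280, size=(500, 4))
y = (0.5 * (X[:, 0] - X[:, 2]) + 0.3 * (X[:, 1] - X[:, 3]))[:, None]
Xn = (X - X.mean(0)) / X.std(0)        # normalise inputs
yn = (y - y.mean()) / y.std()          # ...and targets

# One hidden layer; plain gradient descent on mean square error
W1 = rng.normal(0, 0.1, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(2000):
    h = np.tanh(Xn @ W1 + b1)          # forward pass
    pred = h @ W2 + b2
    err = pred - yn                    # gradient of MSE w.r.t. pred
    gW2 = h.T @ err / len(Xn); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)   # back-propagate through tanh
    gW1 = Xn.T @ dh / len(Xn); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print("final MSE (normalised units):", float(np.mean(err ** 2)))
```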

Sources: "Artificial neural network-based techniques for the retrieval of SWE and snow depth from SSM/I data", Remote Sensing of Environment, Volume 90, Issue 1, 15 March 2004, Pages 76-85

Automated Road Extraction

As in the rest of computer vision, methods are sometimes better suited to the built environment.

Road extraction from aerial photography is an active research area. Many places in the world, particularly in developing countries, are very badly mapped. Research systems have been produced, but none are fully automated and ready for wide deployment.

Research at the Swiss Federal Institute of Technology Zurich takes a very utilitarian approach: it employs existing spatial databases, rules and models of roads, and stereoscopic images if available. The goal is to update existing road maps.

ATOMI (Automated reconstruction of Topographic Objects from aerial images using vectorized Map Information).

A very involved process, so here are some highlights. Canny edge detection extracts segments in a pair of images. Stereoscopic images are used to find matches. These are kept in a "road object space". Pixels are classified by a set of rules into road, vegetation, shaded areas, and buildings. Known topographic data is applied against the stereo images to remove above-ground objects (roads are on the ground). Road markings are also used: zebra crossings are found first by morphological operations (blobs of a certain size), then by analysis of clustered objects. From this, the relevant edges extracted earlier give the width of the road, with corrections for occlusions and shadows. Knowledge of road layout constraints, and similarity to existing databases, are used to fill in the gaps.
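
Two of the early steps can be sketched with standard OpenCV calls: Canny edge extraction, and bright stripe-sized blobs as zebra-crossing candidates. All thresholds below are illustrative placeholders, not ATOMI's actual parameters.

```python
import cv2
import numpy as np

def road_cues(gray):
    """gray: 8-bit grayscale aerial image. Returns (edge map,
    zebra-crossing candidate centroids)."""
    edges = cv2.Canny(gray, 50, 150)                     # step 1: edges

    # step 2: bright blobs of plausible stripe size
    _, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    opened = cv2.morphologyEx(bright, cv2.MORPH_OPEN,
                              np.ones((3, 3), np.uint8))  # drop speckle
    n, _, stats, centroids = cv2.connectedComponentsWithStats(opened)
    stripes = [centroids[i] for i in range(1, n)
               if 80 <= stats[i, cv2.CC_STAT_AREA] <= 600]
    return edges, stripes
```

Clusters of such stripe blobs would then be grouped into crossing candidates, as described above.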

Great results, lots of future work.

Automated 3D City Models

Also from ATOMI. Employs known information on buildings, and above-ground object extraction from stereo images, to create 3D architectural models. Aerial images are then texture-mapped onto the models.
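
The underlying photogrammetric relation is the standard stereo formula Z = f·B/d; a tiny sketch with assumed, made-up camera numbers (not ATOMI's actual procedure):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard stereo relation: distance Z = f * B / d, with focal
    length f in pixels, camera baseline B in metres and disparity d
    in pixels. Building height falls out by comparing a roof pixel's
    Z with the surrounding ground's Z."""
    return focal_px * baseline_m / disparity_px

# Assumed numbers: 10,000 px focal length, 600 m flight-line baseline.
ground = depth_from_disparity(10000, 600, 4000)  # 1500 m to the ground
roof = depth_from_disparity(10000, 600, 4054)    # slightly closer roof
print(f"building height ~ {ground - roof:.1f} m")  # ~20.0 m
```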

References

Here they are. All available in the library. There's tons, sometimes not easy to get at.