Photogrammetry

Photogrammetry is defined by the American Society for Photogrammetry and Remote Sensing (ASPRS) as “the art, science, and technology of obtaining reliable information about physical objects and the environment, through processes of recording, measuring, and interpreting imagery and digital representations of energy patterns derived from noncontact sensor systems” (Colwell, 1997:3).

From: Introduction to Environmental Forensics (Third Edition), 2015

Photogrammetry

James S. Aber, ... Johannes B. Ries, in Small-Format Aerial Photography, 2010

3.1 Introduction

Photogrammetry is the art, science, and technology of obtaining reliable information about physical objects and the environment through processes of recording, measuring, and interpreting photographic images and patterns of recorded radiant electromagnetic energy and other phenomena (Wolf and Dewitt, 2000; McGlone, 2004). Photogrammetry is nearly as old as photography itself. Since its development approximately 150 years ago, photogrammetry has moved from a purely analog, optical–mechanical technique to analytical methods based on computer-aided solution of mathematical algorithms and finally to digital or softcopy photogrammetry based on digital imagery and computer vision, which is devoid of any opto-mechanical hardware. Photogrammetry is primarily concerned with making precise measurements of three-dimensional objects and terrain features from two-dimensional photographs. Applications include the measuring of coordinates; the quantification of distances, heights, areas, and volumes; the preparation of topographic maps; and the generation of digital elevation models and orthophotographs.

Two general types of photogrammetry exist: aerial (with the camera in the air) and terrestrial (with the camera handheld or on a tripod). Terrestrial photogrammetry dealing with object distances up to ca. 200 m is also termed close-range photogrammetry. Small-format aerial photogrammetry occupies a middle ground between these two types, combining the aerial vantage point with close object distances and high image detail.

This book is not a photogrammetry textbook and can only scratch the surface of a continuously developing technology that encompasses a wealth of principles and techniques, ranging from quite simple to highly mathematical. This chapter introduces those concepts and techniques most likely to be of interest to readers who plan to use small-format aerial photography but have little or no previous knowledge of the subject. For a deeper understanding, the reader is referred to the technical literature on photogrammetry, for example, the textbooks by Wolf and Dewitt (2000), Kasser and Egels (2002), Konecny (2003), Luhmann (2003), Kraus (2004), McGlone (2004), Kraus et al. (2007), and Luhmann et al. (2007).

The basic principle behind all photogrammetric measurements is the geometrical–mathematical reconstruction of the paths of rays from the object to the sensor at the moment of exposure. The most fundamental element therefore is the knowledge of the geometric characteristics of a single photograph.
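The excerpt stops short of stating these relations explicitly. For reference, the reconstruction of the ray paths is conventionally expressed by the collinearity equations, given here in a standard textbook formulation (following the notation of Wolf and Dewitt, 2000; sign conventions vary slightly between texts):

$$x_a = x_o - f\,\frac{m_{11}(X_A - X_L) + m_{12}(Y_A - Y_L) + m_{13}(Z_A - Z_L)}{m_{31}(X_A - X_L) + m_{32}(Y_A - Y_L) + m_{33}(Z_A - Z_L)}$$

$$y_a = y_o - f\,\frac{m_{21}(X_A - X_L) + m_{22}(Y_A - Y_L) + m_{23}(Z_A - Z_L)}{m_{31}(X_A - X_L) + m_{32}(Y_A - Y_L) + m_{33}(Z_A - Z_L)}$$

Here $(x_a, y_a)$ are the image coordinates of a point, $(x_o, y_o)$ the principal point, $f$ the focal length, $(X_A, Y_A, Z_A)$ the object point, $(X_L, Y_L, Z_L)$ the exposure station (camera position), and $m_{ij}$ the elements of the rotation matrix describing the camera orientation.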

URL: https://www.sciencedirect.com/science/article/pii/B9780444532602100031

Investigations

J. Horswell, in Encyclopedia of Forensic Sciences (Second Edition), 2013

Photogrammetry

Photogrammetry, as its name implies, is a three-dimensional coordinate measuring technique that uses photographs as the fundamental medium for metrology or measurement. The fundamental principle used in photogrammetry is triangulation. By taking photographs from at least two different locations, so-called ‘lines of sight’ can be developed from each camera to points on the object. These lines of sight, sometimes called rays owing to their optical nature, are mathematically intersected to produce the three-dimensional coordinates of the points of interest.

Triangulation is also the principle used by theodolites for coordinate measurement. Crime scene investigators familiar with these instruments will find many similarities, and some differences, between photogrammetry and theodolites. Triangulation is also the way the two human eyes work together to gauge distance, a process called depth perception.
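As an illustration only (not part of the encyclopedia entry), the ray-intersection principle reduces to a small least-squares problem once each line of sight is written as a camera position plus a direction vector. A minimal sketch in Python with NumPy:

```python
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares intersection of 3D lines of sight (rays).

    Each ray is a camera position (origin) plus a direction vector
    toward the object point. The returned point minimizes the sum of
    squared perpendicular distances to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        # Projector onto the plane perpendicular to the ray direction
        P = np.eye(3) - np.outer(d, d)
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two cameras 1 m apart, both sighting a point at (0.5, 2.0, 0.0)
origins = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
target = np.array([0.5, 2.0, 0.0])
directions = [target - o for o in origins]
print(intersect_rays(origins, directions))  # -> [0.5, 2.0, 0.0]
```

With more than two camera locations, the same solve simply accumulates additional rays; this redundancy is what improves the coordinates of the points of interest.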

The choice of equipment to draw crime scenes is one of ‘practice’ and ‘procedure’ of crime scene investigators within a given jurisdiction.

I will now, however, go back to the basics for crime scene sketching.

URL: https://www.sciencedirect.com/science/article/pii/B9780123821652002075

Multimodality Brain Image Registration Using a Three-Dimensional Photogrammetrically Derived Surface

OSAMA R. MAWLAWI, ... RONALD G. BLASBERG, in Quantitative Functional Brain Imaging with Positron Emission Tomography, 1998

A Photogrammetry

Photogrammetry allows 3D coordinates, in this case defining a surface, to be derived from points within two 2D pictures that have been taken from slightly different positions using charge-coupled device (CCD) cameras. In this case, the points within each of the 2D pictures (i.e., the points known to be the same in the two pictures) were determined by the intersection of horizontal and vertical laser lines that were scanned across the patient's face (Fig. 1). Instead of individual points or a grid, lines were used as a compromise between speed of acquisition and the ease and accuracy of automated identification of the points.

FIGURE 1. Configuration for measurement of photogrammetric information in the PET scanner. Two CCD cameras are positioned behind the PET gantry, aimed toward a mirror that is oriented to view the patient's face while the patient lies within the PET field of view. Vertical and horizontal laser lines are scanned across the face to generate the surface of the face.
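The chapter excerpt does not reproduce the authors' reconstruction mathematics, which involved calibrated camera models. For the idealized case of two rectified (parallel) cameras, however, depth follows from the disparity between matched points as Z = fB/d. A hedged sketch with hypothetical values:

```python
# Illustrative only: depth from disparity for an idealized parallel
# two-camera (stereo) geometry; all values are hypothetical.
focal_length_px = 1200.0   # focal length expressed in pixel units
baseline_m = 0.25          # separation between the two CCD cameras (m)

def depth_from_disparity(disparity_px: float) -> float:
    """Z = f * B / d for a rectified stereo pair."""
    return focal_length_px * baseline_m / disparity_px

print(depth_from_disparity(30.0))  # a 30-pixel shift -> 10.0 m depth
```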

URL: https://www.sciencedirect.com/science/article/pii/B9780121613402500160

Geographic Information and Land Classification in Support of Forest Planning

Pete Bettinger, ... Donald L. Grebner, in Forest Management and Planning (Second Edition), 2017

4 Aerial Photogrammetry

Aerial photogrammetry is technically a subset of remote sensing that primarily involves visible light waves in the electromagnetic spectrum; there are some excellent applications in the near-infrared portion of the spectrum as well. Aerial photography is perhaps the most widely used method for creating geographic databases in forestry and natural resource management. Interpretation techniques that involve geometry, trigonometry, optics, and familiarity with natural resources allow us both to identify objects on the ground and to estimate their size, length, or height. Aerial photogrammetry requires the use of vertical aerial photographs (those where the axis of the camera was no more than 3° from vertical), and most often requires the use of stereo pairs (overlapping photos), although reasonable measurements can be made from single vertical aerial photographs if the scale of the photo can be determined. Many of the base maps used by natural resource management organizations were initially made with aerial photographs that were interpreted by natural resource managers. This information can be collected from the photographs using stereo compilers, which allow us to correct a large portion of the inherent error from sources of distortion and displacement. Alternatively, the detail associated with the interpreted photos can be transferred to maps using hand-drawing processes or traditional digitizing processes. Further, information about the camera and terrain can be combined in an analytical model that can compute the coordinates of features using softcopy (personal computer-based) photogrammetry techniques.
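As a hedged illustration of the scale relationship mentioned above (not an example from the chapter), the scale of a truly vertical photograph over terrain of average elevation h is f/(H − h), where f is the camera focal length and H the flying height above datum. All values below are hypothetical:

```python
# Minimal sketch of vertical-photo scale and ground-distance conversion.
focal_length_m = 0.152      # 152 mm mapping-camera focal length
flying_height_m = 3200.0    # flying height above mean sea level (H)
terrain_elev_m = 400.0      # average ground elevation (h)

# scale = f / (H - h); expressed as a representative fraction below
scale = focal_length_m / (flying_height_m - terrain_elev_m)
print(f"photo scale = 1:{1 / scale:,.0f}")        # -> 1:18,421

# Converting a distance measured on the photo to a ground distance
photo_dist_m = 0.034                              # 3.4 cm on the photo
print(f"{photo_dist_m / scale:.0f} m on the ground")  # -> 626 m
```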

One product that is developed from vertical aerial photographs is the georeferenced digital orthophotograph. Digital orthophotographs (Fig. 3.1) commonly are used in GIS as a background image on top of which delineated forested stands, roads, or streams are draped or laid. To create an orthophotograph, vertical aerial photographs first are scanned using very high spatial resolution scanners. Vertical images can also be acquired from digital aerial photo cameras. Much of the topographic displacement and other distortions are removed from the vertical aerial photographs analytically. Finally, the photographs may be combined, and are then georeferenced to allow their correct placement on the landscape. Orthophotographs are hard-copy versions, commonly printed on mylar maps or glass plates. Digital orthophotographs are soft-copy versions that can be used in conjunction with computer software.

Figure 3.1. A digital orthophotograph of land in South Carolina.

URL: https://www.sciencedirect.com/science/article/pii/B9780128094761000035

Drones in agriculture

Deon van der Merwe, ... Ajay Sharda, in Advances in Agronomy, 2020

4.1 Orthomosaicking

Unprocessed digital aerial images are files that contain data in the form of numerical values associated with pixel locations. It is not possible to interpret these data, or to derive useful information for decision-making, without the generation of an information product in an accessible format. Image processing is an essential step in the extraction of information from aerial images, and ultimately in the extraction of knowledge that can form the basis of decision-making.

Aerial image processing consists of several possible workflows aimed at preparing data for analysis in image processing software or a geographic information system (GIS). One of the most commonly used workflows in agricultural applications of aerial imaging is known as orthomosaicking. Individual images collected by an aerial camera are subject to geometric distortion, meaning that distances and areas cannot be accurately measured from an uncorrected aerial image. Orthomosaicking composites aerial images into a single, seamless, geometrically corrected image. Following georegistration, it can be used for measurements of attributes typically derived from maps, including distances between objects, the geographic locations of objects, and measurement of areas. It can also be used in a GIS environment in conjunction with other data sources.
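As a minimal sketch (not from the chapter), once an orthomosaic is georegistered in a projected coordinate system with units of meters, such map-style measurements reduce to planar geometry. The coordinates below are hypothetical easting/northing pairs:

```python
import math

def distance_m(p, q):
    """Straight-line ground distance between two projected points."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def area_m2(polygon):
    """Shoelace formula for the area enclosed by a field boundary."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

well = (435120.0, 4320450.0)   # hypothetical UTM coordinates
gate = (435320.0, 4320600.0)
print(distance_m(well, gate))  # -> 250.0 m

field = [(435000.0, 4320000.0), (435400.0, 4320000.0),
         (435400.0, 4320300.0), (435000.0, 4320300.0)]
print(area_m2(field) / 10_000)  # -> 12.0 ha
```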

Orthomosaicking is typically performed using one of several desktop software packages designed for this purpose. A variety of cloud-based orthomosaic processing software options are also available. These software packages utilize a machine vision technique known as structure-from-motion photogrammetry to align aerial images, stitch them together into a larger, seamless image, and produce a point cloud, or three-dimensional model, of the imaged area. Using this model, the imagery is orthorectified such that artifacts such as lens distortion, tilt, and elevation effects are removed from the output dataset. If the imagery was collected without geotags (metadata tags containing the geographic coordinates where each image was recorded during flight), the orthomosaic must also be georegistered, or correctly aligned with its real-world coordinates, before it can be used in a GIS.
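The chapter names no particular package, and full structure-from-motion also recovers camera poses and a dense point cloud; the sketch below illustrates only the feature-matching and pairwise-alignment step that such pipelines build on, using the OpenCV library. File names are hypothetical:

```python
import cv2
import numpy as np

# Two overlapping aerial frames (hypothetical file names)
img1 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_002.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect and describe local image features in each frame
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2. Match descriptors, keeping only unambiguous matches (ratio test)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# 3. Robustly estimate the transform aligning frame 1 to frame 2;
#    RANSAC discards the mismatches that moving vegetation, open
#    water, or repetitive planting patterns tend to produce
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(f"{int(mask.sum())} of {len(good)} matches geometrically consistent")
```

The inlier count from step 3 is a useful diagnostic: the failure factors listed below typically show up as a low ratio of consistent matches.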

The success of structure-from-motion photogrammetry for production of aerial orthomosaics depends on the characteristics of the aerial imagery used and the environmental conditions during image capture. Factors that may reduce the success of structure-from-motion photogrammetry, each with a common practical example, include:

Surface movement (for example movement of tall vegetation due to wind)

Inconsistent and excessive variation in camera angle and altitude above ground level (for example an inconsistent flight pattern caused by excessive and variable wind, autopilot malfunction, or camera gimbal malfunction)

Highly reflective surfaces (for example open water)

Consistent, repetitive patterns (for example homogeneous fields with precisely planted row crops, or smooth surfaces)

Images with very low pixel density or low effective spatial resolution (for example low-end thermal images, or images that are out of focus)

Excessive variation in ambient light intensity during image collection (for example partial cloud conditions)

URL: https://www.sciencedirect.com/science/article/pii/S0065211320300328

Remote Sensing

J. Estes, ... E. Collins, in International Encyclopedia of the Social & Behavioral Sciences, 2001

The definition of remote sensing used here, provided by the American Society for Photogrammetry and Remote Sensing (ASPRS), is:

In the broadest sense, the measurement or acquisition of information of some property of an object or phenomena, by a recording device that is not in physical or intimate contact with the object or phenomenon under study; e.g., the utilization at a distance (as from aircraft, spacecraft, or ship) of any device and its attendant display for gathering information pertinent to the environment, such as measurements of force fields, electromagnetic radiation, or acoustic energy. The technique employs such devices as the camera, lasers, and radio frequency receivers, radar systems, sonar, seismographs, magnetometers, and scintillation counters. (Reeves et al. 1975)

URL: https://www.sciencedirect.com/science/article/pii/B0080430767025262

Geospatial Data Discovery, Management, and Analysis at National Aeronautics and Space Administration (NASA)

Manzhu Yu, Min Sun, in Federal Data Science, 2018

3 Big Geospatial Data Management

Geospatial data are growing at tremendous speed, collected in various ways including photogrammetry and remote sensing, and more recently through laser scanning, mobile mapping, geo-located sensors, geo-tagged web contents, volunteered geographic information, and simulations (Li et al., 2016a,b). Part of this growth is due to the increasingly powerful computational capabilities available to handle big geospatial data. For example, in climate science, simulations are conducted at increasingly high spatiotemporal resolutions, which relies largely on the capabilities of parallel computing. However, higher-resolution simulation has brought researchers a new challenge: the efficient management, analysis, and visualization of the simulation output. To efficiently store, manage, and query big geospatial data sets, one must consider the data structure, modeling, and indexing for a case-dependent, customized big geospatial data management solution.

In climate science, NetCDF/HDF is one of the most commonly used data formats; it consists of multidimensional variables within their coordinate systems and some of their named auxiliary attributes (Rew and Davis, 1990). However, the classic data model has two obvious limitations: (1) a lack of support for nested structures, ragged arrays, unsigned data types, and user-defined types; and (2) limited scalability due to the flat name space for dimensions and variables.

To address the limitations of the classic data model, Li et al. (2017) proposed an improved data model based on NetCDF/HDF that can contain additional named variables, dimensions, attributes, groups, and types. The variables can be divided into different groups by a certain characteristic, such as the model groups that generated the data. When storing these variables in a physical file, each two-dimensional grid is decomposed into a one-dimensional byte stream and stored separately, one by one, in a data file.
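Li et al.'s (2017) model is summarized here only at a high level. As an illustration of the kind of hierarchical grouping the NetCDF-4 data model makes possible, the following hedged sketch uses the netCDF4 Python library; the group and variable names are hypothetical:

```python
from netCDF4 import Dataset
import numpy as np

# Create a NetCDF-4 file whose hierarchical model supports groups,
# in contrast to the flat name space of the classic data model.
ds = Dataset("climate_sims.nc", "w", format="NETCDF4")

for model in ("modelA", "modelB"):        # hypothetical model groups
    grp = ds.createGroup(model)           # one group per climate model
    grp.createDimension("time", None)     # unlimited (appendable) axis
    grp.createDimension("lat", 180)
    grp.createDimension("lon", 360)
    temp = grp.createVariable("temperature", "f4", ("time", "lat", "lon"))
    temp.units = "K"                      # named auxiliary attribute
    temp[0, :, :] = np.random.rand(180, 360).astype("f4")

ds.close()
```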

With this improved data model, efficient management of these data is still challenging because of the large data volume, as well as the intrinsically high-dimensional nature of geoscience data. In addition, distributed computing frameworks such as MapReduce have become increasingly popular for managing and processing big data, but there is a gap between the MapReduce model and the efficient handling of the traditional climate data formats (NetCDF/HDF). To tackle this challenge, Li et al. (2017) also proposed a spatiotemporal indexing approach to efficiently manage and process big climate data with MapReduce in a highly scalable environment. Using this approach, big climate data are stored directly in a Hadoop Distributed File System in their original, native file format. A spatiotemporal index is built to bridge the logical array-based data model and the physical data layout, which enables fast data retrieval when performing spatiotemporal queries (Fig. 11.2). Based on the index, a data-partitioning algorithm is applied to enable MapReduce to achieve high data locality, as well as balance the workload. The proposed indexing approach is evaluated using the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis data set. The experimental results show that the index can significantly accelerate querying and processing (∼10× speedup compared with the baseline test using the same computing cluster), while keeping the index-to-data ratio small (0.0328%). The applicability of the indexing approach is demonstrated by a climate anomaly detection application deployed on a NASA Hadoop cluster. This approach is also able to support efficient processing of general array-based spatiotemporal data in various geoscience domains without special configuration on a Hadoop cluster.

Figure 11.2. The structure of spatiotemporal index.
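The following is a toy illustration of the indexing idea, not Li et al.'s implementation: a spatiotemporal index maps logical array coordinates to the physical location of the corresponding byte stream, so a query can seek directly to the data it needs instead of scanning whole files. All paths and values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class IndexEntry:
    path: str      # HDFS file holding the grid, in its native format
    offset: int    # byte offset of the decomposed 2D grid
    length: int    # length of the grid's byte stream

# Index keyed by logical coordinates: (variable name, timestamp)
index: dict[tuple[str, str], IndexEntry] = {}

# Register one grid per (variable, timestamp); values are hypothetical.
index[("temperature", "2015-07-01T00")] = IndexEntry(
    "/merra/tavg1_2d_slv_2015.nc4", offset=4096, length=180 * 360 * 4)

def locate(variable: str, timestamp: str) -> IndexEntry:
    """Resolve a spatiotemporal query to a physical byte range."""
    return index[(variable, timestamp)]

print(locate("temperature", "2015-07-01T00"))
```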

URL: https://www.sciencedirect.com/science/article/pii/B9780128124437000119

Satellite Oceanography, History, and Introductory Concepts

W.S. Wilson, ... J.R. Apel, in Encyclopedia of Ocean Sciences (Second Edition), 2009

The Early Era

The origins of satellite oceanography can be traced back to World War II – radar, photogrammetry, and the V-2 rocket. By the early 1960s, a few scientists had recognized the possibility of deriving useful oceanic information from the existing aerial sensors. These included (1) the polar-orbiting meteorological satellites, especially in the 10–12-μm thermal infrared band; and (2) color photography taken by astronauts in the Mercury, Gemini, and Apollo manned spaceflight programs. Examples of the kinds of data obtained from the National Aeronautics and Space Administration (NASA) flights collected in the 1960s are shown in Figures 1 and 2.

Figure 1. Thermal infrared image of the US southeast coast showing warmer waters of the Gulf Stream and cooler slope waters closer to shore taken in the early 1960s. While the resolution and accuracy of the TV on Tiros were not ideal, they were sufficient to convince oceanographers of the potential usefulness of infrared imagery. The advanced very high resolution radiometer (AVHRR) scanner (see text) has improved images considerably. Courtesy of NASA.

Figure 2. Color photograph of the North Carolina barrier islands taken during the Apollo-Soyuz Mission (AS9-20-3128). Capes Hatteras and Lookout, shoals, sediment- and chlorophyll-bearing flows emanating from the coastal inlets are visible, and to the right, the blue waters of the Gulf Stream. Cloud streets developing offshore the warm current suggest that a recent passage of a cold polar front has occurred, with elevated air–sea evaporative fluxes. Later instruments, such as the coastal zone color scanner (CZCS) on Nimbus-7 and the SeaWiFS imager have advanced the state of the art considerably. Courtesy of NASA.

Such early imagery held the promise of deriving interesting and useful oceanic information from space, and led to three important conferences on space oceanography during the same time period.

In 1964, NASA sponsored a conference at the Woods Hole Oceanographic Institution (WHOI) to examine the possibilities of conducting scientific research from space. The report from the conference, entitled Oceanography from Space and edited by Dr. Gifford Ewing, summarized findings to that time; it clearly helped to stimulate a number of NASA projects in ocean observations and sensor development. Moreover, with the exception of the synthetic aperture radar (SAR), all instruments flown through the 1980s used techniques described in this report. Dr. Ewing has since come to be justifiably regarded as the father of oceanography from space.

A second important step occurred in 1969 when the Williamstown Conference was held at Williams College in Massachusetts. The ensuing Kaula report set forth the possibilities for a space-based geodesy mission to determine the equipotential figure of the Earth using a combination of (1) accurate tracking of satellites and (2) precision measurement of satellite elevation above the sea surface using radar altimeters. Dr. William Von Arx of WHOI realized the possibilities for determining large-scale oceanic currents with precision altimeters in space. The requirement was articulated for a measurement precision of 10-cm height error in the elevation of the sea surface with respect to the geoid. NASA scientists and engineers felt that such accuracy could be achieved in the long run, and the agency initiated the Earth and Ocean Physics Applications Program, the first formal oceans-oriented program to be established within the organization. The required accuracy was not realized until TOPEX/Poseidon in 1992, the culmination of a 25-year period of incremental progress that saw the flights of five US altimetric satellites of steadily increasing capability: Skylab, Geos-3, Seasat, Geosat, and TOPEX/Poseidon (see Figure 3 for representative satellites).

Figure 3. Some representative satellites: (1) Seasat, the first dedicated oceanographic satellite, was the first of three major launches in 1978; (2) the Tiros series of operational meteorological satellites carried the advanced very high resolution radiometer (AVHRR) surface temperature sensor; Tiros-N, the first of this series, was the second major launch in 1978; (3) Nimbus-7, carrying the CZCS color scanner, was the third major launch in 1978; (4) NROSS, an oceanographic satellite approved as an operational demonstration in 1985, was later cancelled; (5) Geosat, an operational altimetric satellite, was launched in 1985; and (6) this early version of TOPEX was reconfigured to include the French Poseidon; the joint mission TOPEX/Poseidon was launched in 1992. Courtesy of NASA.

A third conference, focused on sea surface topography from space, was convened by the National Oceanic and Atmospheric Administration (NOAA), NASA, and the US Navy in Miami in 1972, with ‘sea surface topography’ being defined as undulations of the ocean surface with scales ranging from approximately 5000 km down to 1 cm. The conference identified several data requirements in oceanography that could be addressed with space-based radar and radiometers. These included determination of surface currents, Earth and ocean tides, the shape of the marine geoid, wind velocity, wave refraction patterns and spectra, and wave height. The conference established a broad scientific justification for space-based radar and microwave radiometers, and it helped to shape subsequent national programs in space oceanography.

URL: https://www.sciencedirect.com/science/article/pii/B9780123744739007372

Tools for Monitoring Global Deforestation

C. Davis, R. Petersen, in Reference Module in Earth Systems and Environmental Sciences, 2016

Airborne

Remote sensing started with cameras mounted on planes taking aerial photographs. This technique of making maps has been used since the first flights, when photogrammetry and photointerpretation were used to carry out forest mapping and monitoring. Aerial methods are still utilized today when the purpose of monitoring requires much more detailed information than we can see from space. In this case, airborne instruments – sensors attached to planes, or cameras mounted on drones – are used to capture detailed information about a specific area of forest at high resolutions. Their applications range from private companies counting the number of trees harvested in a plantation to a government conducting a detailed carbon inventory of its forests.

For example, airborne light detection and ranging (LiDAR) sensors can capture detailed information about the physical structure of forests (Asner, 2009). Airborne LiDAR can collect information at resolutions ranging from 1 m down to 10 cm, detailed enough to see individual tree crowns and delineate tree species (Baldeck et al., 2015). This method is very well suited for calculating detailed aboveground biomass measurements, useful in forest carbon inventories.

Small, recreational unmanned aerial vehicles are also increasingly popular technologies for quickly capturing images of an area of forest. These low-cost drones, with small cameras attached, can be quickly flown over areas to visually confirm deforestation and even illegal activities (see e.g., Paneque-Gálvez et al., 2014). However, using drone imagery to monitor forests can be challenging; the equipment can be difficult to land in rough terrain and has a short battery life. The biggest barrier to utilizing drones in many countries is receiving the necessary permits or permissions to fly these vehicles in regulated airspace.

Overall, airborne tools are very well suited for detailed mapping of small areas of interest. Their limited coverage and high costs render them inefficient tools for systematic monitoring of forest change at scale.

URL: https://www.sciencedirect.com/science/article/pii/B9780124095489095270

Cameras for Small-Format Aerial Photogrammetry

James S. Aber, ... Johannes B. Ries, in Small-Format Aerial Photography, 2010

6.6.4 Camera Type

As a consequence of the above items, SLR cameras and their recent mirror-free equivalents must be considered more suitable for small-format aerial photogrammetry than compact cameras. At the time of writing, there were only two compact cameras on the market that could meet all of these requirements (Leica M8 and Sigma DP1/DP2), but they have other disadvantages for SFAP (high price and slow shutter speeds and frame rates, respectively). The disadvantage of SLR cameras, although they are becoming increasingly smaller and lighter, is their considerably larger size and weight. This might—in addition to considerations of photogrammetric survey issues such as stereo coverage, navigability, and stability—also influence the choice of platforms used for small-format aerial photogrammetry (see Chapter 8).

URL: https://www.sciencedirect.com/science/article/pii/B9780444532602100067