Marsh Baseline Survey Report
SUMMARY MEMORANDUM
City of Atlantic Beach Adaptation Planning Services
TO: Steve Swann, PE; Kimberly Flower; Abrielle Genest
City of Atlantic Beach
FROM: Jarrod Hirneise, PE
Department Manager
DATE: June 17, 2025
SUBJECT: RFQ 22-01 Adaptation Planning Services –
Task 1.4 – Marsh Baseline Survey - Summary Memorandum
Jones Edmunds Project No.: 08505-012-01
INTRODUCTION
Changes in the areal extent of coastal vegetated habitats can indicate the health and
vitality of coastal ecosystems and their potential resilience to sea-level rise. The City of
Atlantic Beach contracted Jones Edmunds to conduct a baseline aerial survey to map the
extent of saltmarsh along the Atlantic Intracoastal Waterway within the City’s limits.
Jones Edmunds hired a subconsultant, TerraData Unmanned, to acquire high-resolution red,
green, blue (RGB) imagery via drone/unmanned aerial vehicle and obtained high-resolution
multispectral satellite imagery from Apollo Mapping. The drone imagery has
relative accuracy and was georeferenced based on tie points (building corners, road
intersections, etc.) that are unlikely to change over time. Jones Edmunds received an RGB
orthomosaic, digital terrain model, and digital surface model from the drone subconsultant
and received an orthomosaic from the satellite imagery vendor.
In addition, Jones Edmunds conducted field investigations to characterize unique vegetative
signatures and developed training points for mapping polygon generation. The imagery was
post-processed in ArcGIS to generate polygons for unique aerial imagery vegetation
signatures.
This Technical Memorandum summarizes the methodology and results of this Marsh
Baseline Survey.
1 AERIAL IMAGERY DATA COLLECTION
1.1 HIGH-RESOLUTION AERIAL IMAGERY
TerraData collected high-resolution RGB aerial imagery on October 30, 2024, and on
May 24, 2025. Figure 1 shows the areas where imagery was collected on each date. The
data collection limits were extended outside the City’s limits north of Dutton Island Preserve
because this area is managed by the City.
Figure 1 Summary of Data Collection Limits
The data were collected each day around low tide using a 45-megapixel CMOS
35.9-mm-by-24-mm full-frame sensor with a 35-mm f/2.8 lens. The sensor
was gimbaled for a stabilized nadir orientation and was flown at a constant altitude of
400 feet above ground level from takeoff, with no takeoff point more than 1 meter
above the waterline. To provide sufficient data overlap, a 70-percent front and side overlap
was configured in the mission planning software. The sensor was flown autonomously along
a serpentine path, with separate missions overlapping. All flights were conducted in an
east-to-west orientation.
The imagery collected has relative accuracy and was georeferenced based on tie points
(building corners, road intersections, etc.) that are unlikely to change over time. Spatial
accuracy can be estimated, but it would need to be confirmed via surveying methods
because no ground control was used for this collection effort. Given the parameters of the
missions and the sensor used, 3-centimeter (cm) horizontal accuracy and 5-cm vertical
accuracy are achievable.
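For context, the approximate ground sample distance (GSD) implied by these flight parameters can be checked with a short calculation. The sketch below is illustrative only; the image width in pixels is an assumed value for a 45-megapixel full-frame sensor, not a figure from the collection report.

```python
# Hypothetical ground-sample-distance (GSD) check for the flight parameters above.
# The 8192-pixel image width is an assumed value for a 45-MP full-frame sensor and
# should be replaced with the actual sensor specification.

SENSOR_WIDTH_MM = 35.9        # full-frame sensor width from the text
FOCAL_LENGTH_MM = 35.0        # lens focal length from the text
IMAGE_WIDTH_PX = 8192         # assumed pixel count across the sensor width (~45 MP)
ALTITUDE_M = 400 * 0.3048     # 400 ft above ground level, converted to meters

# GSD (m/pixel) = flight height x sensor width / (focal length x image width in pixels)
gsd_m = (ALTITUDE_M * SENSOR_WIDTH_MM) / (FOCAL_LENGTH_MM * IMAGE_WIDTH_PX)
print(f"Approximate GSD: {gsd_m * 100:.1f} cm/pixel")  # roughly 1.5 cm/pixel
```

A GSD of roughly 1.5 cm/pixel is consistent with the 3-cm horizontal accuracy cited above being achievable.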
The geotagged imagery was input into a terrestrial and photogrammetry mapping software.
The processing hardware used was developed for use with the mapping software. Initial
processing was conducted using standard settings in the software, allowing review of the
data such as relative difference, median keypoints per image, median matches per
calibrated image, image position, etc. A custom processing template was developed
considering the subject matter of the data capture, lighting conditions, desired resolution,
etc. This template determined which outputs were generated in full processing, at what
resolution, and in which file format. Outputs from the processing included the
following:
▪ Digital Surface Model – This output was generated using a standard 100-cm grid spacing
between three-dimensional (3D) points and was output in .laz file format.
▪ Digital Terrain Model – This output is a 2.5-dimensional (2.5D) model of the mapped area
and was output in .tif file format.
▪ Orthomosaic – This output is the full-resolution orthomosaic as defined by the
parameters of the processing template and was output in .tif file format.
▪ Compressed Orthomosaic – This output is a compressed .tif file of the full-sized
orthomosaic and was provided as a version that is more easily opened and manipulated.
Figure 2 shows the output of the compressed orthomosaic.
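As an illustration of how these deliverables could be opened for review, the following is a minimal sketch using the open-source laspy and rasterio packages; the file names are placeholders, not the actual deliverable names.

```python
# A minimal sketch for opening the drone deliverables described above; file names are
# placeholders. Requires the rasterio and laspy packages (laspy needs the lazrs or
# laszip backend to read .laz files).
import laspy
import numpy as np
import rasterio

# Digital Surface Model point cloud (.laz)
dsm_points = laspy.read("drone_dsm.laz")
print(f"DSM points: {len(dsm_points.points)}, mean elevation: {np.mean(dsm_points.z):.2f}")

# Digital Terrain Model and orthomosaic (.tif)
with rasterio.open("drone_dtm.tif") as dtm:
    print(f"DTM resolution: {dtm.res}, CRS: {dtm.crs}")
with rasterio.open("drone_ortho.tif") as ortho:
    print(f"Orthomosaic bands: {ortho.count}, size: {ortho.width} x {ortho.height}")
```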
1.2 MULTISPECTRAL SATELLITE IMAGERY
Jones Edmunds obtained multispectral satellite imagery that covers the entire City
limits from Apollo Mapping. The imagery was collected from the WorldView-2 commercial
earth observation satellite on December 2, 2024, at 4:09 pm. The orthorectified
multispectral imagery that was delivered has the following specifications:
▪ Resolution – 50 cm.
▪ Bands – Eight.
▪ Off-Nadir Angle – 23.3 degrees.
▪ Cloud Cover – 0 percent.
▪ Sun Elevation – 35.5 degrees.
▪ Sun Azimuth – 161.2 degrees.
Figure 3 shows the orthorectified multispectral aerial imagery.
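As a hedged sketch of how an NDVI raster can be derived from this delivered product (as was later done for the marsh mapping inputs in Section 3), the following assumes the standard WorldView-2 band order (band 5 = Red, band 7 = NIR1) and placeholder file names; both should be confirmed against the vendor metadata.

```python
# A hedged sketch of deriving NDVI from the 8-band WorldView-2 orthomosaic; the file
# names are placeholders and the band order (5 = Red, 7 = NIR1) follows the standard
# WorldView-2 layout, which should be confirmed against the vendor metadata.
import numpy as np
import rasterio

with rasterio.open("worldview2_multispectral.tif") as src:
    red = src.read(5).astype("float32")
    nir = src.read(7).astype("float32")
    profile = src.profile

# NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero
ndvi = np.where((nir + red) > 0, (nir - red) / (nir + red), np.nan)

profile.update(count=1, dtype="float32", nodata=np.nan)
with rasterio.open("worldview2_ndvi.tif", "w", **profile) as dst:
    dst.write(ndvi.astype("float32"), 1)
```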
Figure 2 Orthomosaic Output
Figure 3 Multispectral Imagery
2 FIELD INVESTIGATION
Wetland scientists from Jones Edmunds reviewed the aerial imagery and conducted a field
investigation on January 14, 2025, to identify the primary wetland plant species signatures
in the imagery. The scientists used a handheld global positioning system (GPS) unit to note
the spatial location of observed dominant wetland species and took photographs
documenting the species in each unique vegetation polygon. Observation points were
collected at 46 locations and 208 photographs were taken.
Based on the field observations, four primary wetland species habitat signatures were
identified: black rush (Juncus roemerianus) (Figure 4), saltmarsh cordgrass (Spartina
alterniflora) (Figure 5), saltgrass (Distichlis spicata) (Figure 6), and a mixture of black rush
and saltmarsh cordgrass (Figure 7).
Figure 4 Photograph of Black Rush with Corresponding Aerial Signature
Figure 5 Photograph of Saltmarsh Cordgrass with Corresponding Aerial
Signature
Figure 6 Photograph of Saltgrass with Corresponding Aerial Signature
Figure 7 Photograph of Black Rush and Saltmarsh Cordgrass Mixture with
Corresponding Aerial Signature
3 MARSH MAPPING
Jones Edmunds used the aerial imagery products and the findings from the field
investigations to map the primary marsh habitats in ArcGIS. A combination of automated
mapping techniques and manual review was used to create the marsh mapping coverage.
The automated mapping approach was developed based on the methodology presented in
Section 4.3, Percent Cover Analysis, of the standard operating procedure (SOP)
(Attachment 1). The SOP was developed by the National Estuarine Research Reserve
System (NERRS) System-Wide Monitoring Program (SWMP) to assess changes in ecological
characteristics and areal extent of vegetated habitats. The automated mapping analysis
was performed in ArcMap using the following inputs:
▪ High-resolution RGB orthomosaic aerial imagery.
▪ Digital Surface Model raster.
▪ Normalized Difference Vegetation Index raster, which was generated from the
orthorectified multispectral aerial imagery.
▪ Training polygons that were generated by Jones Edmunds for each of the primary
wetland species habitat signatures discussed in Section 2. Polygons were also generated
for open water and wooded or non-marsh grass areas.
The automated mapping methodology produced a draft shapefile coverage that classified
the City’s marsh into the following primary habitats (a simplified scripted analogue is
sketched after this list):
▪ Saltmarsh cordgrass.
▪ Black rush.
▪ Saltmarsh cordgrass and black rush mixture.
▪ Saltgrass.
▪ Water.
▪ Wooded or Non-Marsh Grass Areas.
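The classification itself was performed with the ArcGIS/ArcMap tools described in the SOP. Purely as an illustration of the same supervised-classification idea, the sketch below trains a random forest on the pixels inside the training polygons and predicts a habitat class for every pixel of a stacked predictor raster; the file names, the class_id field, and the pre-built predictor stack are hypothetical.

```python
# Illustrative only: the project classification was performed with the ArcGIS tools
# described in the SOP, not this script. File names, the class_id field, and the
# pre-built predictor stack are hypothetical. Requires rasterio, geopandas, scikit-learn.
import geopandas as gpd
import rasterio
from rasterio.features import rasterize
from sklearn.ensemble import RandomForestClassifier

# Predictor stack: RGB orthomosaic bands, DSM, and NDVI resampled to a common grid
with rasterio.open("predictor_stack.tif") as src:
    stack = src.read()                      # shape: (bands, rows, cols)
    transform = src.transform
    shape = (src.height, src.width)

# Burn the training polygons (integer habitat codes) into a label raster
training = gpd.read_file("training_polygons.shp")
labels = rasterize(
    zip(training.geometry, training["class_id"]),
    out_shape=shape, transform=transform, fill=0, dtype="int16")

# Fit a random forest on the labeled pixels, then predict a class for every pixel
X = stack.reshape(stack.shape[0], -1).T     # (pixels, bands)
y = labels.ravel()
model = RandomForestClassifier(n_estimators=200, n_jobs=-1)
model.fit(X[y > 0], y[y > 0])
predicted = model.predict(X).reshape(shape)
```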
The automated mapping results were manually reviewed and refined to improve the
accuracy of the mapping where the automated methodology misclassified habitats. A
seventh mapping category was also added to identify mangroves on the island in the middle
of the main channel of the Atlantic Intracoastal Waterway north of the Atlantic Boulevard
Bridge. Table 1 provides a breakdown by area of the marsh habitats that were mapped and
Figure 8 shows the final marsh mapping results.
Table 1 Marsh Habitat Mapping Area Breakdown

Marsh Habitat                                   Mapped Area (acres)
Saltmarsh Cordgrass                             137
Black Rush                                      103
Saltmarsh Cordgrass and Black Rush Mixture      62
Saltgrass                                       2.5
Water                                           189
Wooded or Non-Marsh Grass                       138
Mangrove                                        0.01
4 CONCLUSION
This dataset provides a valuable baseline extent of coastal marsh within the City limits. We
recommend that the City acquire high-resolution aerial imagery every five years and compare
each future dataset to this baseline dataset. This comparison will allow the City to
determine trends in coastal marsh extent, identify and quantify areas of loss or gain, and
investigate causation.
Figure 8 Marsh Habitat Mapping
Attachment 1
Automated Marsh Mapping SOP
A Protocol for Monitoring Coastal Wetlands with Drones:
Image Acquisition, Processing, and Analysis Workflows
1. Introduction and Objectives
2. Operational Protocol
2.1. Image Acquisition
Table 1. Mission Planning Parameters
2.1.1. Ground Control Points (GCPs)
Table 2. Preflight Checklist
2.1.2. Vertical ground-truth points (optional, but recommended)
Figure 1. GCP, Checkpoint and Vegetation Plot Locations
2.1.3. Mission Planning
2.1.4. Preflight and Equipment Checklists
2.1.5. Collecting Multispectral Imagery
2.2. Ground-based Vegetation Surveys
2.2.1. Species-Specific Percent Cover
Figure 2. Reference Percent Cover Guide (using 10% intervals)
2.2.2. Canopy Height
2.2.3. Above-ground biomass
2.2.4. Delineating ecotones
2.3. RTK GPS surveys
2.4. Recommended Sampling Schedule
2.5. Data File Structure and Naming Convention
3. Image Processing in Pix4D
3.1. Adding Photos/Camera Calibration
New Project
Select Images
Image Properties
Output Coordinate System
Processing Options Template
3.2. Photo Alignment and Optimization
3.2.1. Processing Step 1: Initial Processing
General
Matching
Calibration
Running Step 1 and Checking Outputs
Initial Quality Check
3.2.2. GCP Registration
Importing GCPs
Registering GCPs
Output Quality Check
3.3. Creating Densified Point Cloud, Orthomosaic and Elevation Models
3.3.1. Processing Step 2. Point Cloud and Mesh
Point Cloud
3D Textured Mesh
Advanced
Running Step 2 and Checking Outputs
Point Deletion
3.3.2. Processing Step 3. DSM, Orthomosaic and Index
Table 3. Deriving Outputs from Different Sensors (RGB vs. Multispectral (MS))
DSM and Orthomosaic
Additional Outputs
Index Calculator
Figure 3. Radiometric Calibration Card
3.3.3. Running Step 3 and Generating Outputs
Exporting Products
4. Post Processing and Analysis
Figure 4. Data to Analysis Flowchart
4.1 Assessing Accuracy of Elevation Models and Efficacy for Estimating Canopy Height
4.2 Ecotone Delineation
4.3 Percent Cover Analysis
4.3.1 Total percent cover (vegetated vs unvegetated classification)
Figure 5. Segmented Classification Workflow
Accuracy assessment
Comparison Between Drone vs. Field-based Total Percent Cover Estimation
4.3.2 Species-specific Percent Cover Analysis (multiple vegetation species classification)
4.4 Assessing Efficacy of Vegetation Indices to Estimate Above-ground Plant Biomass
Appendices
Appendix 1. Ground control point construction instructions.
Appendix 2. Image Processing in Drone2Map version 2.3
Adding Imagery
1.1. New Project
1.2. Select Images
1.3. Image Properties
Defining Processing Options
2.1. 2D Products tab
2.2. 3D Products tab
2.3. Initial tab
2.4. Dense tab
2.5. Coordinate Systems tab
2.6. Resources tab
2.7. Processing multispectral imagery
3. Add Ground Control
3.1. Import Control
3.2. Control Manager
3.3. Image Links Editor
3.4. Export Control
3.5. Start Processing
3.6. Troubleshooting
3.6.1. Fixing a distorted orthomosaic or DSM
3.6.2. Creating new GCPs to improve georeferencing of surface using water level as a reference
4. Products
4.1. Indices
5. Drone2Map Processing Report
5.1 Drone2Map Quality Check Table
Appendix 3. GCP Caveats
Manually Adjusting GCP Height to Facilitate Registration Process
Creating New GCPs to Improve Georeferencing of Surface
Creating New GCPs Using Original GCP as Reference
Creating New GCPs Using Water Level as Reference
Appendix 4. Pix4D Output Quality check table
Appendix 5. Create Alternative Vegetation Indices with RGB Imagery in ArcGIS Pro
Appendix 6. Creating RGB Orthomosaic from Multispectral Imagery in ArcGIS Pro
1. Introduction and Objectives
Monitoring plays a central role in detecting climate and anthropogenic impacts on coastal ecosystems.
Tidal wetlands exhibit high spatial complexity and temporal variability. Monitoring programs to measure
the impacts of stressors and, ultimately, inform management must be designed accordingly. Many
modern-day monitoring programs combine ground-based measurements and remotely sensed (e.g.,
satellite) observations for describing change associated with small- and large-scale spatiotemporal
processes. The NERRS System-Wide Monitoring Program (SWMP) uses a similar approach designed, in
part, to assess changes in the ecological characteristics and areal extent of vegetated habitats as an
indicator of health and vitality of coastal ecosystems.
To date, a huge investment has been made across the NERRS to assess changes in tidal wetland
vegetation through SWMP biotic and sentinel site monitoring, as well as habitat mapping. Biotic and
sentinel site monitoring are conducted at the spatial scale of square-meter permanent plots every 1-3
years. Habitat mapping is conducted at reserve-wide, watershed scales via imagery from satellites or
manned flights with 1-30 m pixel resolution every 5-10 years. While both approaches have strengths,
important processes at intermediate spatial (i.e., marsh platform) and finer temporal (i.e., storm events)
scales may be missed. For instance, permanent plots may miss important spatial heterogeneity such as
marsh die-offs. Moreover, repeated ground-based sampling along permanent transects can result in
substantial damage to vegetation from trampling. Satellite-derived imagery reduces flexibility in timing
(e.g., season, tide stage), can be obstructed by cloud cover, or may have insufficient resolution to
delineate and detect changes occurring at important ecotones.
Bridging the spatial and temporal scales that limit current monitoring programs is key to improving our
understanding of the drivers of change in tidal wetlands. In this context, Unmanned Aerial Systems
(UAS, i.e., drones) have considerable potential to radically improve tidal wetland monitoring programs,
including SWMP. UAS-mounted sensors offer an extraordinary opportunity to bridge the existing gap
between ground-based observations and traditional remote sensing, by providing high spatial detail
over relatively large areas in a cost-effective way, with customizable sensors, all at an entirely new,
user-defined temporal capacity. However, the published research using UAS rarely documents the
methodology, workflow, and practical information in sufficient detail to allow others, with little remote
pilot and image analysis experience, to replicate them or to learn from their mistakes. A major challenge
in the utilization of UAS is that operational standards and data-collection techniques are developed
independently, which is neither efficient nor optimal for large-scale data archiving, sharing and the
reproducibility of measurements, which are all critical pillars of SWMP.
The objectives for this project were to conduct a regionally coordinated effort, working in salt marshes
and mangroves within six National Estuarine Research Reserves in the Southeast and Caribbean to
develop, assess and collaboratively refine a UAS-based tidal wetlands monitoring protocol that details
image acquisition, post-processing, and analysis workflows. End users have indicated that barriers to the
use of UAS technology for applications such as tidal wetlands monitoring span the entire data collection
workflow from flight planning best practices, to image analysis procedures, to metadata standards. To
address these barriers and to ensure consistent and coordinated acquisition, analysis, and management
of data, this document comprises three distinct yet interrelated protocols:
1) Operational protocol covering image acquisition and ground-based vegetation surveys.
2) Image processing protocol covering post-processing of imagery following acquisition. Post-
processing details are provided for Pix4D and Drone2Map photogrammetry software.
3) Image analysis protocol covering canopy height estimation, ecotone delineation, total and
species-specific percent cover estimation, and NDVI-based estimates of above-ground biomass.
This document is targeted at entry-level UAS users with UAS training and required certifications (e.g.,
an FAA Part 107 certificate) who are interested in exploring UAS as a tool for monitoring coastal wetlands.
The document does not include UAS operation guidance or airspace rules and regulations.
2. Operational Protocol
2.1. Image Acquisition
The image acquisition protocol is targeted at users who have UAS training and required certifications
(e.g., an FAA Part 107 certificate) and, therefore, will not include UAS operation guidance or airspace rules
and regulations. All users must ensure all required federal, state and local certifications have been
acquired prior to conducting UAS flights. The protocol is written for image acquisition with RGB (Red-
Green-Blue spectrum) and multispectral sensors to generate and analyze orthomosaics, digital elevation
models, and reflectance-based indices.
Conduct UAS flights to obtain imagery prior to in-situ vegetation monitoring, delineation of ecotones, or
harvesting of aboveground biomass. Flight planning parameters (Table 1) are based on project team
consensus of general best practices. Use a pre-flight checklist to ensure best results and equipment
longevity (Table 2). If using a multispectral sensor with a calibration card, take photos of the reflectance
panel before and after each flight (as well as before and after battery swaps), ensuring that the entire
reflectance panel is included in the image. Depending on the sensor setup, collection of these additional
near infrared and red-edge bands can occur concurrently with RGB sensor collection or with a follow-up
flight (i.e., within 30 minutes). A flight log datasheet (S1 in supplementary documents) should be
completed concurrently with the flight to capture relevant metadata. The numbered ‘flights’ in the log
(with time of day, airframe, etc.) are meant to capture each paired takeoff and landing (i.e., battery swaps
or sensor swaps).
Table 1. Mission Planning Parameters

● Altitude – 50 m altitude; preferably over an area covering ≥ 3 vegetation transects used for ground-based validation surveys.
● Time of day – After 7:00 am, before 10:00 am. Generally too much reflection from the marsh platform in the middle of the day; low light too early or too late in the day; often windier later in the day. If shading is a problem at your site (e.g., upland trees), consider later in the window. Make sure to avoid shadowing on reflectance panels used for multispectral sensor calibration.
● Tide – Low tide (greatest exposure of ground); can probably fly c. 1 hr on each side of low tide.
● Wind conditions – Must be < 15 mph, preferably lower than 10 mph. Less movement of vegetation will result in better image processing results.
● Cloud conditions – If cloudy, lower wind speed is important. Clouds can help reduce shadowing. If clouds are patchy, make sure to use reflectance panels for calibration before and after each takeoff-landing.
● Flight lines – Set up flight lines to be parallel to creeks and perpendicular to the elevation gradient; on a rising tide, plan flights to begin at the water, moving up the elevation gradient, and vice versa on falling tides. Check that at least 3 transects of photos are being captured on the smallest dimension of the mission box and that flight lines extend beyond the area of interest.
● Ground control and checkpoints – One ground control point (GCP) per hectare is a reasonable goal (2 GCPs per hectare is ~optimal). Spread GCPs fairly evenly across the area being flown. While surveying GCPs, take ~50 RTK ground checkpoints haphazardly distributed throughout the flight area (spaced from transects and GCPs to avoid redundancy).
● Image overlap – 75% front and side overlap.
● Flight speed – Auto-adjust speed on the mid setting (recommend 15 mph (6.7 m/s) or under), slower if cloudy (under 10 mph (4.4 m/s)).
● Camera settings – Users can set camera settings if comfortable doing so, but use of auto settings, which are often optimized for mapping, is generally sufficient. Tips for improving image quality are available for DroneDeploy and Pix4DCapture if the user is adjusting the camera settings.
● Image file – JPEG.
2.1.1. Ground Control Points (GCPs)
A minimum of 3 GCPs is required in an image set for photogrammetry software to include them in
image processing. A minimum of 5 GCPs is recommended in order to see a significant increase in the
absolute accuracy of the project. Having at least 5 GCPs minimizes the measurement inaccuracies and
helps to detect mistakes that may occur when inserting the points. Strive for 1 GCP per hectare, but 2
per hectare is preferable. Distribution should be uniform, not linear or clumped. Best results
are typically obtained when adhering to a quincunx pattern (i.e., how dots are arranged on the 5-side of
a die) for GCP distribution to capture the corners and center of the landscape (see the red squares in Figure
1).
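As a small illustration of the quincunx layout described above, the sketch below returns five GCP locations (four inset corners plus the center) for a rectangular survey block; the block dimensions and inset distance are hypothetical.

```python
# A small sketch of the quincunx GCP layout described above; block dimensions and the
# corner inset are hypothetical.
def quincunx_gcps(x_min, y_min, x_max, y_max, inset=10.0):
    """Return five (x, y) GCP locations: four inset corners plus the center."""
    corners = [
        (x_min + inset, y_min + inset),
        (x_max - inset, y_min + inset),
        (x_min + inset, y_max - inset),
        (x_max - inset, y_max - inset),
    ]
    center = ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)
    return corners + [center]

# Example: a 200 m x 100 m (2-ha) block, which also satisfies the ~1 GCP/ha guideline
print(quincunx_gcps(0, 0, 200, 100))
```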
Table 2. Preflight Checklist (mark each item ✔ or NA)

● Check area of operation for obstructions, people, and property
● Create takeoff/landing exclusion if necessary (signage in public areas)
● Check airframe for signs of damage
● Check aircraft motor bearings spin freely without unusual noise or resistance. Check props for damage, cracks, and tightness
● Install SD card with sufficient space available
● Remove lens cap/gimbal protection
● Clean lens if necessary
● Power on transmitter. Check transmitter operation and battery level. All switches neutral
● Power up aircraft
● Connect to aircraft
● Check fail safes are appropriate for mission – Return-to-Home altitude, fence radius, max altitude, minimum battery alert
● Check transmitter/aircraft signal strength
● Check GPS signal
● Check sensor errors/warnings
● Upload appropriate flight plan to UAV
● Calibrate multispectral sensors using radiometric targets (for MicaSense sensors)
● Complete flight log upon landing
When installing GCPs, strive for a relatively stable target. When using PVC poles to support GCPs, the
poles should generally be driven c. 1-2 ft into the sediment (and perhaps even deeper in really soft
sediment). The height of GCPs above the surface depends on vegetation characteristics: in bare areas,
GCPs can be placed directly on the sediment surface. In very dense vegetation, the GCPs will need to be
placed at an elevation approximately even with the canopy height. Plan to take various lengths of PVC
poles for these different settings. Try not to put all GCPs at the same elevation. GCPs should be
‘surveyed-in’ by collecting x,y coordinates, as well as elevation by occupying the point of reference
(often the center) of the GCP using real-time kinematic (RTK) GPS. Use of virtual reference stations
(VRS), as opposed to a base station, should provide sufficient accuracy for our purposes.
See GCP Construction Instructions (Appendix 1) for guidance on how to construct GCPs.
Quality Note: It is a best practice to QA/QC real-time kinematic (RTK) GPS data by using nearby benchmarks
(e.g., surface elevation tables) before and after surveys for assessing the horizontal and vertical position
accuracy reported by the instrument during the survey.
Quality Note: If an adequate number of ground control points cannot be distributed at a study site, it is
recommended that either a UAS equipped with an RTK system is used to improve data collection
accuracy or a water level measurement is taken at a marsh site to improve accuracy during post-
processing (see GCP Caveat 2b for details).
2.1.2. Vertical ground-truth points (optional, but recommended):
Vertical ground-truth points, which are referred to as ‘checkpoints’ in this protocol, provide an
additional means to validate and assess the accuracy of the digital elevation models produced from
image processing. If time permits, it is IDEAL to collect ~5-10 ground checkpoints per hectare using RTK
GPS. The checkpoints should be haphazardly distributed throughout the entire area over which imagery
will be collected (see Figure 1). Checkpoints can be taken during the course of surveying GCPs but
should not all be concentrated near GCPs or permanent plots along transects, to avoid redundancy.
Figure 1. GCP, Checkpoint and Vegetation Plot Locations
Figure 1. Location of ground control points, checkpoints, and vegetation/biomass plots at NCNERR in
February 2021. Notice, checkpoints are distributed throughout the survey area and elevation gradient.
Vegetation and biomass plots were also surveyed using RTK GPS, so checkpoints were offset from those
plots in many cases to reduce redundancy.
2.1.3. Mission Planning
Mission planning is critical to acquiring quality imagery. A number of parameters must be considered
during the mission planning process including, but not limited to, flight altitude, weather conditions, and
flight lines. The mission planning parameters agreed on by the project team based on experience and
general best practices are provided in Table 1.
2.1.4. Preflight and Equipment Checklists
Begin preparing for image acquisition at least 1-2 days prior to actually conducting the flights (see
Recommended Sampling Schedule below). During this time, it is a good idea to check for
software/firmware updates for your specific drone and ensure flight patterns and camera settings are
correct for the mission to be conducted. Remember to charge all batteries including: UAS (airframe and
controller) batteries, RTK antenna and receiver batteries, and ground station (i.e., iPad) batteries. Also
remember to check memory cards and check airspace (NOTAM or LAANC). Be sure to develop an
equipment checklist (S2-S5 in supplementary documents includes several examples) to ensure all
necessary items are packed for field work. A general preflight checklist to be used while onsite prior to
conducting the mission is provided in Table 2. In areas with high humidity, it is often necessary to allow
equipment, particularly sensor lenses, to acclimate prior to flights.
2.1.5. Collecting Multispectral Imagery
There are different methods of collecting and calibrating multispectral imagery; the process depends
on the sensor used and the light-sensing equipment available.
Calibration. The use of a radiometric calibration target enables Pix4D to calibrate and correct the images
to reflectance according to the values given by the reflectance target. When using reflectance targets,
their images must be imported for processing, like regular images, in order to be used for radiometric
correction. Calibration images in the field should be stored in a separate subfolder. Sensors that require
pre and post-flight calibration images (e.g., most MicaSense sensors) require further separation of
calibration images into respective subfolders.
See MicaSense documentation on calibration and Pix4D’s guidance on Radiometric Calibration Targets
for more information.
Targetless workflow. Setups such as the Parrot Sequoia+ provide absolute reflectance measurements
without reflectance targets. See Parrot Sequoia+ and Pix4D Documentation to learn more.
Sunshine sensors. The use of a sunshine sensor improves the overall correction results by including more
information about the illumination on the field (sun irradiance and, when supported by the hardware,
sun angle). For supported camera models, this information is stored in the image EXIF tags and
automatically found by Pix4Dfields.
2.2. Ground-based Vegetation Surveys
Ground-based vegetation surveys serve as a validation of image-based estimates for the parameters of
interest (e.g., percent cover). Generally follow the standard SWMP protocols1 for quantifying total
percent cover of vegetation, percent cover by species, and canopy height. The general approach used
here consists of sampling permanent plots located along fixed transects. Transects are often oriented
across a gradient (e.g., elevation). For the purposes of validating image-based estimates, aim to survey ≥
30 1m2 plots (or sub-plots in the case of mangrove sampling design2).
Quality Note: Vegetation surveys MUST be conducted within 1 week following image acquisition (see
Recommended Sampling Schedule for full timeline).
1 Moore, K. 2009. NERRS SWMP Bio-Monitoring Protocol: Long-term Monitoring of Estuarine Submersed and Emergent Vegetation
Communities. NERRS Technical Report. 14pp.
2 Moore, K. 2009.
2.2.1. Species-Specific Percent Cover
To quantify species-specific percent cover, each permanent plot (or sub-plot) is sampled non-
destructively. Percent cover can be quantified visually or using the point intercept method (visual
estimates are required; point intercept is optional).
For visual percent cover, cover estimates for species and other cover types should use 10% cover
intervals, except at the low end where a 5% interval should be used (see Figure 2 for a reference percent
cover guide). For species or cover types that are present, but lower than 5%, indicate their presence by
designating a percent cover of 1%.
Quality Note: Include cover estimates for ‘dead cover’ as well, which will include plants/wrack with no
live (green or yellow) plant tissue.
Figure 2. Reference Percent Cover Guide (using 10% intervals)
If also using the point intercept method, lower a thin rod perpendicular to the substrate at 50
systematically spaced grid ‘nodes’ within a 1m2 quadrat. Each species or cover type, including
unvegetated cover types (e.g., bare ground, oyster shell) that the rod intercepts at each node is
recorded as a ‘hit’. Multiple species can be present at a given node; record a hit for every species the
rod intercepts at each node. Only record bare ground if the rod hits no vegetation or other cover type at
all (e.g., oyster shell). After sampling all 50 nodes, the total number of ‘hits’ for each species and cover
type is tallied and multiplied by 2 to give percent cover (0-100%). Be sure to visually inspect the entire
plot and indicate all species and cover types that are present within the plot, but do not intersect a node
by designating a percent cover of 1% (vs lowest cover for a hit = 2%).
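A minimal sketch of the point-intercept arithmetic described above (50 nodes per 1-m2 quadrat, 2% per hit, and 1% for species present but not intercepted); the species names and hit counts below are illustrative.

```python
# A minimal sketch of the point-intercept arithmetic above: 50 nodes per 1-m2 quadrat,
# each hit worth 2 percent, and present-but-unhit species recorded at 1 percent.
# Species names and hit counts are illustrative.
def percent_cover(hits_per_species, species_present):
    """hits_per_species: {name: nodes hit, 0-50}; species_present: everything seen in the plot."""
    cover = {sp: hits * 2 for sp, hits in hits_per_species.items()}
    for sp in species_present:
        cover.setdefault(sp, 1)   # present in the plot but intercepted at no node
    return cover

example = percent_cover({"Spartina alterniflora": 32, "bare ground": 9},
                        ["Spartina alterniflora", "bare ground", "Salicornia spp."])
# {'Spartina alterniflora': 64, 'bare ground': 18, 'Salicornia spp.': 1}
```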
2.2.2. Canopy Height
Canopy Height should be measured following percent cover sampling. Measure maximum canopy height
and average canopy height in each plot or sub-plot. Measure maximum canopy height by measuring the
height above the sediment surface of the three tallest points in the canopy within each plot. Measure
maximum canopy height in two ways: 1) straightening the stems and stretching the leaves (i.e., pulling
the plant slightly upwards) of marsh grasses to follow the SWMP guidelines and 2) not straightening
stems nor stretching leaves, which is more reflective of what the sensor ‘sees’ and the digital surface
model depicts.
To estimate average canopy height (marsh vegetation only), measure the height above sediment of ten
randomly selected plants of the dominant species in each plot. Again, do this in two ways:
1) straightening the stems and stretching the leaves (i.e., pulling the plant slightly upwards) of marsh
grasses to follow the SWMP guidelines and 2) not straightening stems nor stretching leaves.
After quantifying cover and canopy height, obtain latitude, longitude, and elevation at the center of
each permanent vegetation plot using RTK GPS. Use of virtual reference stations (VRS) during RTK
surveys should provide sufficient accuracy for our purposes. If this sampling is being combined with
regularly scheduled SWMP biomonitoring sampling, stem density data should also be collected. This
project did not use stem density data.
2.2.3. Above-ground biomass
Harvest salt marsh vegetation to measure above-ground biomass (g/m2) for two vegetation types:
monoculture stands of S. alterniflora, and the mixed species vegetation stands that tend to occur at
higher elevations in southeastern salt marshes (e.g., short-form S. alterniflora mixed with Distichlis
spicata, Salicornia spp., and Spartina patens). Harvest biomass in an area proximal to one or more of
your transects to ensure representativeness, but far enough removed such that biomass harvesting does
not interfere with your long-term monitoring site.
Quality Note: Biomass harvest MUST be conducted within 1 week following image acquisition (see
Recommended Sampling Schedule for full timeline).
Within each vegetation type, collect aboveground biomass within a 0.25m2 quadrat by clipping all
standing vegetation to the soil surface, excluding fallen litter. No need to count or measure plants
before clipping. During the target time period (early summer), most biomass should be ‘live’ (i.e., any
rooted plant with green or yellow tissue). In the event that there is dead biomass within plots (i.e., no
live tissue on plant), it should be clipped, stored, processed, and weighed separately from ‘live’ biomass.
Collect a minimum of 12 quadrats, located along the marsh elevation gradient to span a gradient of
stem density and plant height for each vegetation type. Aim to get at least three plots for each
vegetation type that cover each of the extremes: extremely low biomass and extremely high biomass
(leaving c. 6 plots for each vegetation type with ~ average biomass). Store clippings from each plot in
separate, labeled bags (e.g., trash bags). Using RTK GPS, obtain the latitude, longitude, and elevation
from the center of each biomass plot to allow direct plot-level comparisons with derived spectral data.
Upon returning to the lab, freeze clippings unless samples are planned to be processed within 24 hours.
To process clipped vegetation from each plot, wash plants using ≤ 2mm mesh to remove sediments, but
retain plant material. Allow plant material from each plot to air dry in separate trays before bagging in
brown paper bags (e.g., grocery bags). Dry at 60 °C for 72 h. It is likely that plants will have to be dried in
2-3 batches to avoid overcrowding a single drying oven. Once dried, weigh plant material from each plot
to the nearest 0.1 g to calculate grams dry weight per m2.
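A small sketch of the biomass arithmetic above: oven-dry weight from a 0.25-m2 clip plot scaled to grams dry weight per m2, with live and dead material tracked separately; the example weights are hypothetical.

```python
# A small sketch of the biomass arithmetic above: oven-dry weight from a 0.25-m2 clip
# plot scaled to grams dry weight per m2; the example weights are hypothetical.
QUADRAT_AREA_M2 = 0.25

def biomass_g_per_m2(dry_weight_g):
    """Convert oven-dry weight (g) from one clip plot to g dry weight per m2."""
    return dry_weight_g / QUADRAT_AREA_M2

# Example plot: 181.3 g live and 12.6 g dead material from one 0.25-m2 quadrat
print(biomass_g_per_m2(181.3), biomass_g_per_m2(12.6))   # 725.2 and 50.4 g/m2
```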
2.2.4. Delineating ecotones
Using RTK GPS, survey the following ecotones: wetland-water edge, low marsh-high marsh, and
wetland-upland.
The wetland-water edge ecotone is defined as the most landward point where vegetation is absent.
Wetland-water edge delineation is not required if doing so will be overly destructive at your site, but
please conduct this activity if possible. One option may be to survey this ecotone via boat during high
tide and/or using an offset pole kit to avoid walking near this ecotone.
The low marsh-high marsh ecotone will be defined where the dominant species shifts from low marsh
species (e.g., Spartina alterniflora) to high marsh species (e.g., S. patens). This is likely to be a somewhat
subjective determination on the fly (no pun intended!), so use your expert knowledge of the site for
delineating this boundary as accurately as possible.
The wetland-upland ecotone will be defined as the location where wetland species are no longer the
dominant species (based on cover). If there are additional ecotones of importance at your sentinel site
(e.g., wetland vegetation-salt pannes or wetland vegetation-ponds) delineate those with RTK GPS as
well.
Set RTK units to record at 0.5m intervals. To delineate boundaries and ecotones, we will walk parallel to
the boundary of interest for at least 50 meters. It will be helpful to mount your RTK receiver on a
backpack (example backpack) for ecotone delineation. We are most concerned with horizontal
coordinate accuracy during ecotone delineations, so position the receiver on the side of the backpack
closest to the ecotone of interest and do your best to ensure the receiver is directly over the ecotone
while walking. The elevation of RTK receivers mounted on backpacks will not be very accurate given
points will be recorded at different parts of the surveyor’s stride, as we sink in the mud, etc. Accurate
elevations are not a priority for ecotone delineations.
2.3. RTK GPS surveys
RTK surveys are conducted to get x,y coordinates and elevation of the following:
1) Ground control points
2) Ground checkpoints
3) Center of permanent plots along transects
4) Center of biomass plots
5) Along select ecotones (x,y coordinates only)
Quality Note: It is a best practice to QA/QC real-time kinematic (RTK) GPS data by using nearby benchmarks
(e.g., surface elevation tables) before and after surveys for assessing the horizontal and vertical position
accuracy reported by the instrument during survey.
It is important that RTK surveys be conducted in association with vegetation surveys using the same
coordinate system, datum and geoid used to survey GCPs and checkpoints during image acquisition. This
information should be included in the readme.txt file (S6 in supplemental docs folder) so that it can be
later incorporated into the georeferencing step of the image processing (an example readme.txt file is
provided in S6 in supplementary documents). The contents of the readme file should include the
following:
1. UAS platform used
2. Sensor(s) used
3. Coordinate system used for RTK surveys of GCPs and checkpoints. Coordinate system can
be obtained from GPS controller (Trimble units) or Software/App (Emlid units).
a) Coordinate system: e.g., United States/US Continental, US State Plane, or UTM
b) Zone (if applicable)
c) Datum: e.g., WGS 1984 or NAD 1983
d) Geoid: e.g., G12AUS
e) Units (distance): e.g., meters
General Note: Using an RTK GPS to derive a NAVD88 or other orthometric height requires both an
ellipsoid height (referenced to a geometric datum, e.g., NAD 83) and a geoid height (referenced to a
hybrid geoid model, e.g., GEOID12A).
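As a worked illustration of this note, the orthometric height is the ellipsoid height minus the geoid height (H = h - N); the numeric values below are hypothetical.

```python
# A worked sketch of the note above: orthometric height H = h - N, where h is the
# ellipsoid height and N is the geoid height (undulation). The numbers are hypothetical.
def orthometric_height(ellipsoid_height_m, geoid_height_m):
    """H = h - N, with h referenced to the ellipsoid (e.g., NAD 83 / GRS 80) and
    N taken from a hybrid geoid model (e.g., GEOID12A)."""
    return ellipsoid_height_m - geoid_height_m

# Example: RTK reports h = -23.45 m and the geoid model gives N = -24.12 m
print(orthometric_height(-23.45, -24.12))   # 0.67 m orthometric height (hypothetical)
```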
2.4. Recommended Sampling Schedule
To maximize efficiency while in the field, an example sampling schedule is provided below. Site
specificity will determine if this schedule can work at your site. For instance, at large sites where
multiple flights have to be conducted to cover the entire site, it may not be possible to, for example,
conduct flights and harvest biomass in the same day. Likewise, if your site consists of multiple transects
spaced out over large distances, a single flight day might be required for each transect. However,
vegetation surveys and biomass harvest MUST be conducted within 1 week following image acquisition
in order for UAS imagery to reflect the conditions measured in the ground-based surveys.
The recommended sampling schedule involves conducting all flights and vegetation sampling within the
same week, starting with UAS flights, then proceeding with biomass harvesting and other vegetation
sampling. Details are as follows:
a) 1-2 days before field work:
i) Check for software/firmware updates for drone, make sure flight patterns and camera
settings are correct. Charge UAS and RTK batteries, check memory cards, and check
airspace.
b) Field day 1-2:
i) Deploy GCPs and survey with RTK GNSS
ii) Obtain ~50 randomly distributed checkpoints across the entire site with RTK GNSS
iii) Perform pre-flight checklist
iv) Fly sentinel site at 50m
v) Harvest biomass plots, surveying center of plot with RTK
1) S. alterniflora monoculture plots x 12
2) Mixed spp. vegetation plots x 12
c) Field day 2-3 (MUST be within 1 week of imagery acquisition):
i) Permanent plot transect monitoring
1) Species specific percent cover
2) Maximum canopy height
3) Average canopy height
ii) Ecotone delineation
1) Wetland-water
2) Low marsh-high marsh
3) Wetland-upland
2.5. Data File Structure and Naming Convention
For data management purposes, it is important that a consistent file structure and naming convention is
established to prepare for the image processing phase of the workflow. The data, data products, and
metadata are archived using the file naming conventions and file structure below:
Abbreviations used in this naming convention (a small sketch of composing these names follows this list):
● Date = 6-digit date (YYMMDD; e.g., 210914 for September 14, 2021)
● Reserve = 3-letter abbreviation for NERR site (XXX; e.g., SAP for Sapelo Island NERR)
● UAS designation = 1-letter abbreviation for airframe (z; e.g., v - Mavic, t - Matrice, p - Phantom, i -
Inspire)
● UAS sensor type = 1-letter abbreviation for sensor type (e.g., m - Multispectral, o - Optical (RGB))
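A small sketch of how these abbreviations compose into the prefixes used in the structure below, assuming a Sapelo Island Mavic RGB flight on September 14, 2021:

```python
# A small sketch of composing the file prefix from the abbreviations above; the reserve
# code, airframe, and sensor letters are illustrative.
from datetime import date

def uas_prefix(survey_date, reserve, airframe, sensor):
    """Build the YYMMDDXXXzs prefix, e.g., 210914SAPvo for a Sapelo Mavic RGB flight."""
    return f"{survey_date:%y%m%d}{reserve}{airframe}{sensor}"

print(uas_prefix(date(2021, 9, 14), "SAP", "v", "o") + "_ortho.tif")  # 210914SAPvo_ortho.tif
```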
Suggested File Structure:
● Drone_the_SWMP/
○ NERRS_drone_marsh_monitoring_SOP.docx
○ /XXX_Field_and_UAS_Survey_Archive
■ /Field_Vegetation_Survey [contains field survey metadata, collected data, RTK
survey data and select documents]
● FieldMetadata_XXX_YYMMDD.m.docx
● /field_measurements
○ YYMMDDXXX_permanent_plot_veg_survey.xlsx
○ YYMMDDXXX_above_ground_biomass.xlsx
○ /ecotones
■ YYMMDDXXX_wetland-water_rtk.csv
■ YYMMDDXXX_low-high_rtk.csv
■ YYMMDDXXX_wetland-upland_rtk.csv
● /field_rtk_data
○ YYMMDDXXX_bio_plots_rtk.csv
○ YYMMDDXXX_veg_plots_rtk.csv
○ YYMMDDXXX_checkpt_rtk.csv
○ YYMMDDXXX_gcp_rtk.csv
● /field_documents
○ YYMMDDXXX_readme.txt
○ YYMMDDXXX_flight_log.docx
■ /UAS_Survey [contains image metadata, raw imagery, image products, and
select documents]
● ImageMetadata_XXX_YYMMDD.m.docx
● /uas_imagery
○ /YYMMDDXXXzo
■ /img
● (raw rgb images (.tif files))
○ /YYMMDDXXXzm
■ /img
● (raw multispectral images (.tif files))
■ /calibration
● calibration_coefficients.txt
● /pre-flight
○ (Pre-flight calibration images (.tif files))
● /post-flight
○ (Post-flight calibration images (.tif files))
● /uas_products
General Note: Upload the four files associated with each
georeferenced TIFF (.tif, .prj, .tfw and .ovr). Use the same
naming convention for all files.
Multispectral Note: For multispectral imagery, include all files
for each band (e.g., blue, red edge, nir, etc.)
○ /orthomosaic
■ YYMMDDXXXzo_ortho.tif
■ YYMMDDXXXzo_ortho.prj
■ YYMMDDXXXzo_ortho.tfw
■ YYMMDDXXXzo_ortho.ovr
○ /elevation_models
■ YYMMDDXXXzo_dsm.tif
■ YYMMDDXXXzo_dsm.prj
■ YYMMDDXXXzo_dsm.tfw
■ YYMMDDXXXzo_dsm.ovr
■ YYMMDDXXXzo_dtm.tif
■ YYMMDDXXXzo_dtm.prj
■ YYMMDDXXXzo_dtm.tfw
■ YYMMDDXXXzo_dtm.ovr
○ /ndvi
■ YYMMDDXXXzm_ndvi.tif
■ YYMMDDXXXzm_ndvi.prj
■ YYMMDDXXXzm_ndvi.tfw
■ YYMMDDXXXzm_ndvi.ovr
● /uas_documents
○ /flight_plans
■ YYMMDDXXXzo_flight_plan.pdf
■ YYMMDDXXXzm_flight_plan.pdf
○ /quality_reports
■ YYMMDDXXXzo_quality_report.pdf
■ YYMMDDXXXzm_quality_report.pdf
○ /field_documents
■ YYMMDDXXX_readme.txt
■ YYMMDDXXX_flight_log.docx
3. Image Processing in Pix4D
The image processing protocol for Drone2Map is available in Appendix 2.
3.1. Adding Photos/Camera Calibration
New Project
● Create a new project in a specified file location. Ensure naming convention is clear.
General Note: Once the file name and location have been established, making changes to either will
result in Pix4D no longer being able to recognize the project or input images.
Select Images
● Add images using the Add Images... button to add images individually or Add Directories… to
add an entire folder of images. If the drone has separate RGB and multispectral sensors, the two
imagery sets should be processed independently.
○ If multiple flights were flown to cover one study area, all images from all flights can be
imported at the same time, as long as the areas covered by the flights are continuous
and there is sufficient overlap between the flights (if this is not the case, the software
will not be able to stitch the images together).
Multispectral Note: For multispectral imagery with calibration card images, calibration
images should be added along with the flight images. One calibration card image per
sensor is required. If multiple calibration images were taken (for example, pre-flight and
post-flight images), a set (one image per sensor) must be selected to serve as the
calibration images for the project (the image set with lighting conditions most like the
majority of the flight conditions should be chosen). If the calibration image set is in its
own directory, this directory can be added by clicking Add Directories. If individual
calibration images need to be hand selected, this can be done by clicking Add Images.
Once added, Pix4D should automatically recognize the QR code in each calibration
image and extract the correct reflectance value. Calibration settings should be checked
in step 3 before processing final outputs to ensure reflectance values are correctly
carried over. (See Processing Step 3 > Index Calculator options for further instructions).
Image Properties
● Pix4D will read EXIF metadata from images upon loading them in; metadata including camera
model information and geolocation information will be displayed in the Image Properties
window.
● Pix4D uses the image geolocation information to position the cameras correctly relative to the
modeled surface; it is recommended that the automatically recognized image geolocation
coordinate system is left as is.
● Click the Next button to proceed to the next step where the GCP and output coordinate systems
can be selected.
Output Coordinate System
There are different coordinate system setting options that vary based on whether image geolocation
and GCPs were used during image acquisition. See Pix4D’s guidance on Selecting Coordinate Systems for
more information. The following steps are written for a process that uses both image geolocation and
GCPs.
General Note: It is not necessary, but recommended that the output coordinate system be designated
the same as the GCP coordinate system. Establishing the desired output coordinate system at this stage
avoids having to adjust, reoptimize and reprocess outputs in later stages. It is possible, however, for
both the horizontal and vertical coordinate systems to be edited at later stages, see Pix4D’s
documentation on how to access editing options for image geolocation, GCPs and output coordinate
system information.
● The image geolocation coordinate system recognized in the previous step will be displayed as
the Selected Coordinate System in the Select Output Coordinate System window.
● If the automatically detected coordinate system (the image geolocation coordinate system) is
not the desired output coordinate system, both horizontal coordinate systems can be adjusted
at this stage.
○ The horizontal coordinate system can be specified by selecting Known Coordinate
System and typing its name. It can also be selected from a list or inputted from a PRJ file
by checking the Advanced Coordinate Options box and choosing one of the search
options.
○ The vertical coordinate system can be specified by selecting the Advanced Coordinate
Options box.
■ If the desired vertical output coordinate system is based on one of the MSL
(mean sea level) geoids listed in the first option, one of these can be selected.
■ If the desired coordinate system is not one of the listed MSL geoids, but has a
known height above the GRS 1980 ellipsoid (based off of NAD83(2011)), that
height can be manually inputted by clicking the second option. NOAA’s online
vertical datum transformation (VDATUM) tool is a useful means of determining
the offset between a common coordinate system or datum (e.g., NAVD88) and
NAD83(2011) (the datum the GRS 1980 ellipsoid is based off of).
■ If none of the geoids in the first option match the desired vertical coordinate
system, and the height above the GRS 1980 ellipsoid is not known, the Arbitrary
option can be selected, which will result in no adjustments being made to the
inputted Z values. An arbitrary vertical coordinate system will display as (2D) in
the coordinate system name.
General Note: Due to the limited options Pix4D provides for vertical coordinate systems, most projects
require designating the vertical coordinate system as arbitrary and then doing a manual adjustment of
GCP height. This workaround comes into play after GCPs are imported. The full workaround is
documented in the GCP caveat #1 section of this protocol (Appendix 3).
Processing Options Template
● Select the appropriate template for the type of imagery being processed. The 3D Maps template
is standard for processing RGB imagery, while the Ag Multispectral template is standard for
processing multispectral imagery. The outputs used to conduct analyses in this protocol are
orthomosaic, digital terrain model, digital surface model, and normalized difference vegetation
index (NDVI; multispectral sensors only) rasters. Note that specific sensors may have Pix4D
templates available online to import (e.g., the Sentera Double 4K sensor).
○ To learn more about which outputs each template produces, refer to Pix4D’s
documentation.
Quality Note: Ensure that Start Processing Now is NOT selected in the bottom, right corner. If it is, the
project will begin processing all 3 steps immediately. Settings should be adjusted prior to processing.
Click Finish to proceed to Processing Options.
3.2. Photo Alignment and Optimization
Set up the initial processing of the project (Step 1) by navigating to Process > Processing Options. The
default settings for this step are recommended; however, each option for both RGB and MS
(multispectral) imagery is explained below. Read more about what happens in the initial processing step
in Pix4D’s documentation on Step 1.
3.2.1. Processing Step 1: Initial Processing
General
● Keypoints Image Scale: The Keypoint Image Scale defines the image size at which keypoints are
extracted, with Full using the full image scale and Rapid using a reduced image scale for faster
processing.
○ RGB and MS default: Full
● Quality Report Orthomosaic: A preview of the orthomosaic can be displayed in the quality report,
a record that outputs processing details and results.
○ RGB and MS default: Checked box
Matching
● Matching Image Pairs: Matching Image Pairs allows the user to optimize the processing for
flights flown in an aerial grid (Option: Aerial Grid or Corridor), free-flown (Option: Free Flight or
Terrestrial), or with other specific parameters (Option: Custom).
○ RGB and MS default: Aerial Grid or Corridor
● Matching Strategy: Geometrically Verified Matching is more computationally expensive, but can
be more rigorous by excluding geometrically inconsistent matches.
○ RGB default: UNchecked box
○ MS default: Checked box
Quality Note: Although Geometrically Verified Matching is not a default option for RGB imagery,
this is one of Pix4D’s recommendations for fixing a project that doesn’t get enough matches
after the first processing step (this can occur in study sites with homogeneous features like
dense canopies or grass fields). See Camera Optimization section of Quality Check Table for
more details (Appendix 4).
Calibration
● Targeted Number of Keypoints: Keypoints are distinguishable features used to tie overlapping
images together.
○ RGB default: Automatic
○ MS default: Custom (10,000)
● Calibration:
○ Calibration Method: Standard calibration, the default in most processing templates;
Alternative calibration, recommended for aerial nadir images with accurate geolocation,
low texture content, and relatively flat terrain (Alternative is the default setting when
using the Ag multispectral template); and Accurate Geolocation and Orientation,
recommended for projects with very accurate geolocation and orientation information
attached to all images.
● RGB default: Standard
● MS default: Alternative
○ Internal Camera Optimization:
● RGB and MS default: All
Quality Note: Selecting All Prior and reprocessing output can help with camera
optimization quality if the quality report indicates a greater than 5% relative difference
in internal camera parameter optimization. The setting forces the internal parameters
to be closer to the initial values. See Camera Optimization section of Quality Check Table
for details (Appendix 4).
○ External Camera Optimization:
● RGB and MS default: All
○ Rematch:
● RGB default: Automatic
● MS default: Custom (Rematch box checked)
Quality Note: The Rematch option can be used to improve reconstruction if a project
has an error in the Dataset section of the quality report. See Dataset section of the
Quality Check Table for further details (Appendix 4).
● Pre-Processing:
○ RGB and MS default: not used except with Parrot Bebop images
● Export:
○ RGB and MS default:
○ Camera Internals and Externals, AAT, BBA box checked
○ Undistorted Images box UNchecked
Running Step 1 and Checking Outputs
It is recommended that Step 1 is run first on its own and the Initial Quality Check (below) is done prior to
running steps 2 and 3 to ensure best results.
● Run Step 1 by checking the 1. Initial Processing box, unchecking boxes for steps 2 and 3, and
clicking Start.
Initial Quality Check
Once the initial processing step is complete, a rayCloud view becomes available (visible when rayCloud
icon is selected along the left sidebar) and can be used to visualize and spot check the generated point
surface (shown as a broken surface of points floating in space) as well as initial and computed camera
positions (shown as blue and green circles). These features can be visualized or hidden by checking or
unchecking boxes within the left sidebar Layers menu. The rayCloud view can be navigated and adjusted
by selecting the different View and Navigation options in the top icon bar (see Pix4D’s guidance on How
to Navigate 3D View).
A quality report will also be generated at this stage and will provide processing details and accuracy
measures for each output. The quality report can be accessed either by selecting the checkmark icon in
the Process top icon bar or navigating to Process > Quality Report. (If quality report is not a selectable
option, navigate to Process > Generate Quality Report).
Quality Note: If any settings were adjusted following the initial quality check, the project should be
Reoptimized OR Rematched and Optimized. Any disabled or uncalibrated images will not be taken into
account in either of these reconstruction options. See below for specifications on which option to use:
● Reoptimize: (Process > Reoptimize) This reoptimizes the camera positions and internal camera
parameters. It does not compute more matches between the images; therefore, it is a ‘fast step’
that improves the accuracy of the project.
○ When to Use:
■ After adding GCP’s, MTP’s, and/or Checkpoints
■ After changing the coordinate systems
■ After disabling Images
● Rematch and Optimize: (Process > Rematch and Reoptimize) This computes more matches
between the images, thus creating more Automatic Tie Points, and reoptimizes the internal and
external camera parameters. This option is more time consuming but can improve the
percentage of camera calibration and the reconstruction of the model. Using this feature for
large projects (500+ images) will significantly increase the processing time.
○ When to Use:
■ After manually calibrating cameras that were not initially calibrated
■ For difficult projects where few matches were initially found
■ To merge individual projects that do not share common images
■ To optimize Step 1. Initial Processing by rematching images.
General Note: If Step 2. Point Cloud and Mesh and Step 3. DSM, Orthomosaic and Index have been
processed, their result files will be deleted. These files should either be saved in a different folder/file
location or the steps should be repeated as necessary.
3.2.2. GCP Registration
Importing GCPs
After Step 1 is complete, ground control points (GCPs) can be imported and incorporated into the
project. Refer to Pix4D’s documentation to learn more about GCPs.
● Navigate to the GTP/MTP Manager icon in the Project toolbar.
● Ensure that the GCP coordinate system is correct. Click Edit… to adjust the horizontal and/or
vertical coordinate system.
● Import an existing spreadsheet (CSV or TXT file) containing GCP locations (X, Y and Z) using the
Import GCPs… button in the GCP manager.
Quality Note: The imported spreadsheet should contain only longitude/easting (X), latitude/northing (Y), and altitude (Z) values (the units should match that of the coordinate system being used, e.g., decimal degrees for WGS84). Note that GCPs can also be entered manually at this stage. Refer to
Pix4D’s documentation on Importing GCPs for more information.
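For reference, the snippet below writes a spreadsheet in the expected X, Y, Z layout from Python; the file name and coordinate values are hypothetical, and the exact format accepted by a given Pix4D version should be confirmed against the Importing GCPs documentation.

import csv

# Hypothetical surveyed GCPs as (X, Y, Z) tuples; units must match the GCP
# coordinate system chosen in the GCP/MTP Manager (e.g., decimal degrees for WGS84).
gcps = [
    (-81.412345, 30.334567, 1.42),
    (-81.413001, 30.335010, 1.37),
]

# Write a plain X, Y, Z file that can be loaded with the Import GCPs... button.
with open("gcps_for_pix4d.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for x, y, z in gcps:
        writer.writerow([x, y, z])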
● Once GCPs are imported into the GCP manager, click OK to proceed to GCP registration.
● GCP icons will display in rayCloud view as blue circles with vertical lines coming out of the center
of them. Each GCP will be listed in the left sidebar, in the Layers menu nested under the Tie
Points > GCPs / MTPs drop downs.
Quality Note: Correctly positioned GCP icons allow Pix4D to triangulate and identify the actual GCP
targets in the aerial imagery, which is necessary for GCP registration. If GCPs are positioned too high
above the surface (i.e., above the camera positions), no images will show up in the right sidebar upon
selecting a GCP in the left layer menu. If this is the case, the GCP height should be manually adjusted in
order to proceed with GCP registration. See GCP Caveat # 1 for further instructions.
● Upon selecting one of the GCPs from the Layers menu on the left, both Selection (GCP location
and other metadata) and Images (images in which Pix4D has found the GCP target) information
should be shown in the right hand sidebar.
Registering GCPs
The following process should be followed for each registration image, for each GCP:
● With GCP selected (either in the Layers sidebar menu on the left or by clicking on the GCP icon
in the rayCloud view), hover mouse over an image displayed in the Images window in the right
sidebar.
● Zoom out (either by using the mouse or the zoom buttons in the Images window) until the GCP
target (usually a black and white cross) becomes visible in the image. Once the GCP is visible,
zoom closer to it by centering the mouse over it and zooming in. Zoom in until the center of the
target can be confidently identified with a single mouse click.
Quality Note: The higher the zoom level on the GCP, the higher confidence Pix4D assigns to the location
of that GCP and the more it is taken into account when modeling the surface. The size of the yellow
circle reflects the zoom and confidence level.
● Once at the desired zoom level, click once in the center of the target. A yellow cross centered in
a yellow circle will appear in the image and the Number of Marked Images (in the Selection
window) will increase by one. An image can be clicked more than once to adjust the cross’s
position.
Quality Note: It is recommended that at least 5 images are registered for each GCP.
Multispectral Note: When processing multispectral imagery, it is important that images from a variety
of ‘image sets’ are registered. In the file naming system, an image set is usually the number before the
file extension (e.g., in file ‘DJI_0713.JPG’, 0713 is the image set number). An image set number
represents a unique camera position above the ground. When capturing multispectral imagery, each
lens of a camera will snap an image at each camera position. Each GCP must be registered in images
from different image sets/camera positions in order for Pix4D to be able to triangulate the GCP
locations. One band from each image set can be registered and counted in the minimum of 5 registered
GCP images, it does not matter which band is registered. Also note that thermal imagery is often not
visually sharp enough to identify GCP target centers, so thermal images can be ignored in this process.
● Once this process is repeated for enough images (varies for each project, but usually after 2-3
images have been registered), Pix4D will begin to automatically recognize the location of the
GCP target in the other images (Pix4D’s estimate of the center of the GCP target is indicated by a
green cross). For best results, continue registering images (i.e., clicking the visible center of the
GCP target, thereby adjusting Pix4D’s estimate) until the green cross appears as close to the
center of the GCP targets as possible.
Quality Note: If there is a thumbnail image where the GCP target is not visible (e.g., the image was taken
near the GCP but the target was not captured in the image), do not click in the image, simply ignore it
and move onto another image.
● Once enough images have been registered, click Apply in the Selection window in the right
sidebar to apply changes. A green GCP icon should appear, representing an adjusted GCP
location based on the registration.
● Repeat the registration process for each GCP.
Quality Note: If a registration process was completed after manually adjusting GCP height (see GCP
Caveat #1), the vertical GCP coordinate system must be set back to Arbitrary before reoptimizing.
● Once all GCPs are registered and GCP coordinate system is set appropriately, the project can be
reoptimized by selecting Process > Reoptimize. A message will pop up stating that the results from the first step (the tie point cloud) will be overwritten and regenerated. This is expected and desired, as the registered GCPs should increase the accuracy of the tie point positions. Click OK.
Output Quality Check
Once the tie point cloud has been reoptimized, examine the outputted surface, camera positions and
GCP icons in the rayCloud view.
● The point surface should match the general topography of the study site (e.g., flat or hilly).
○ Altitudes can be spot checked by clicking on individual tie points on the modeled
surface, and reading the Z value reported in the Computed Position output (in the right
sidebar under the Selection dropdown).
Quality Note: If the point surface does not reflect the expected topography of the study
site, this may be due to an insufficient number and/or distribution of GCPs (e.g., having
three total GCPs for a site arranged in a line, resulting in accurate tie point elevations for
a strip of the study site but inaccurate elevations in the rest of the site). If this is the
case, manual GCPs should be added. See the GCP Caveat #2 for full details.
● The camera position icons should be positioned above the point surface at roughly the same
altitude as the drone was flown (e.g., if images were collected via a 50-m flight, seeing that the point surface is ~0.5 m and the computed camera position is ~51 m is a good indication that cameras are
positioned correctly relative to surface).
○ Camera position altitudes can be spot checked by clicking either the green (computed)
or blue (initial) camera position icons in the rayCloud view and reading the Z value
reported in the Computed Position output (in the right sidebar under the Selection
dropdown).
● The GCP icons should be positioned on top of the point surface at their expected altitudes.
○ GCP icon altitudes can be spot checked by selecting a GCP from the left Layers menu or
a GCP icon in the rayCloud view, and reading the Z value reported in the Computed
Position output (in the right sidebar under the Selection dropdown).
● Once the point cloud has been spot checked, the georeferencing section of the Quality Check
Table (Appendix 4) in the Quality Report should be checked to ensure there is no significant GCP
error.
3.3. Creating Densified Point Cloud, Orthomosaic and Elevation Models
3.3.1. Processing Step 2. Point Cloud and Mesh
Processing Step 2 increases the density of the points of the 3D model created in step 1, which leads to
higher accuracy of both the DSM and orthomosaic. Processing options allow the user to define parameters for point cloud densification, classification, and export. Most of the default
settings are recommended for Processing Step 2. However, alternate settings may be desired and are
briefly described below. See Pix4D’s documentation on Processing Step 2. Point Cloud and Mesh for
further details.
Point Cloud
● Point Cloud Densification
○ Image Scale: The Image Scale defines the image scale at which dense cloud points are
generated. The multiscale option computes additional 3D points on multiple image
scales; this option is useful for computing additional points in vegetated areas.
● RGB and MS default: ½ (Half image size), Multiscale box checked
○ Point Density: The Point Density describes the desired density of the point cloud, higher
density being more computationally expensive.
● RGB default: Optimal
● MS default: Low (Fast)
○ Minimum Number of Matches:
● RGB and MS default: 3
● Point Cloud Classification: Point cloud classification is recommended when generating a Digital
Terrain Model (DTM). Classifying the point cloud will classify each point into the following
categories: Ground, Road Surface, High Vegetation, Building, Human Made Object.
● RGB and MS recommendation for generating DTM: Classify Point Cloud box
checked
● Export: The point cloud can be exported to various formats based on user preferences; the analysis conducted for this protocol does not use any of the outputs listed at this step. See Pix4D’s documentation on Point Cloud Export Options for further details on the options.
● RGB and MS default: none selected
3D Textured Mesh
● Generation: A 3D textured mesh can be optionally generated. See Pix4D’s documentation on 3D
Textured Mesh for further details.
○ RGB and MS default: Generate 3D Textured Mesh box UNchecked
● Settings:
○ RGB and MS default: Medium Resolution
● Export:
○ RGB and MS default: none used
Advanced
● Point Cloud Densification:
○ RGB and MS default: 7x7 pixels
● Image Groups:
○ RGB and MS default: dependent on band structure, Pix4D automatically populates this
● Point Cloud Filters:
○ RGB and MS default:
○ Use Processing Area box checked
○ Use Annotations box checked
○ Limit Camera Depth Automatically box UNchecked
● 3D Textured Mesh Settings:
○ RGB and MS default: Sample Density Divider = 1
Running Step 2 and Checking Outputs
Run Step 2 by checking the 2. Point Cloud and Mesh box and clicking Start to generate the Densified Point Cloud.
Point Deletion
Point deletion is an optional process that allows the user to remove unwanted features and/or noise
from the point cloud surface. If point deletion is desired, use the Edit Densified Point Cloud button in the
upper menu bar and change the class to Disabled. A processing area can also be set (rayCloud > New
Processing Area) to select only a given area for further processing. See Pix4D documentation on How to
Edit the Point Cloud for more details.
3.3.2. Processing Step 3. DSM, Orthomosaic and Index
Processing Step 3 creates the final outputs of the project (orthomosaic, DSM, DTM, reflectance maps
and vegetation indices). See Pix4D’s documentation on Processing Step 3. DSM, Orthomosaic and Index
for further details on project outputs.
Different outputs are better suited for RGB vs multispectral imagery. If only one type of imagery (either
RGB or multispectral) was collected at a study site, options are available to generate appropriate
outputs for analyses. See Table 3 below for a breakdown of how to generate each output for each
equipment scenario:
Table 3. Deriving Outputs from Different Sensors (RGB vs. Multispectral (MS))
● Both RGB and MS Imagery
○ Elevation Model Generation (DSM, DTM): Process outputs in Step 3 using RGB imagery
○ Orthomosaic Generation: Process output in Step 3 using RGB imagery
○ Reflectance Map and Vegetation Index Generation (NDVI, individual band reflectance, etc.): Process outputs in Step 3 using MS imagery
● RGB only
○ Elevation Model Generation (DSM, DTM): Process outputs in Step 3 using RGB imagery
○ Orthomosaic Generation: Process output in Step 3 using RGB imagery
○ Reflectance Map and Vegetation Index Generation: Create custom indices using the alternative veg index process (documented in Appendix 5)
● MS only
○ Elevation Model Generation (DSM, DTM): Process outputs in Step 3 using MS imagery (Quality Note: the resolutions of MS-generated elevation models are not as fine as those generated with RGB imagery)
○ Orthomosaic Generation: Create custom orthomosaic using the multiband raster process (documented in Appendix 6)
○ Reflectance Map and Vegetation Index Generation: Process outputs in Step 3 using MS imagery
DSM and Orthomosaic
● Resolution: A standard GSD of 1 is used unless downsampling is desired, in which case the
Custom box can be used to input a custom GSD size.
○ RGB and MS default: Automatic (GSD = 1)
● DSM Filters: Noise filtering and surface smoothing will remove artifacts and noise from the DSM
surface.
○ RGB and MS defaults:
○ Use Noise Filtering box checked
○ Use Surface Smoothing box checked
○ Type: Sharp
● Raster DSM: The Raster DSM options allow the user to generate the DSM using an Inverse
Distance Weighting (slower and recommended for a lot of elevation change) or Triangulation
(faster and recommended for flatter surfaces) interpolation method.
○ RGB defaults:
○ GeoTIFF box checked
■ Method = Inverse Distance Weighting
■ Merge Tiles box checked
○ MS defaults:
○ GeoTIFF box UNchecked
● Orthomosaic
○ RGB defaults:
○ GeoTIFF box checked
■ Merge Tiles box checked
■ GeoTIFF without Transparency box UNchecked
○ Google Maps Tiles and KML box UNchecked
○ MS defaults:
○ GeoTIFF box UNchecked
Additional Outputs
● Grid DSM:
○ RGB and MS default: none used
● Raster DTM:
○ RGB and MS (settings when generating DTM):
○ GeoTIFF box checked
○ Merge Tiles box checked
● Raster DTM Resolution:
○ RGB and MS (settings when generating DTM): Automatic (GSD = 5)
● Contour Lines
○ RGB and MS default: none used
Index Calculator
Multispectral vs RGB Note: If multispectral imagery is being processed in addition to RGB imagery, no
index calculator settings need to be adjusted for the RGB project. If only RGB imagery is available, the
multispectral settings listed below for Radiometric Processing, Resolution, Reflectance Map, Indices and
Export sections can be inputted into the RGB project. Note that NDVI will not (and cannot) be generated
from R, G and B bands. See alternative veg index process (Appendix 5) for instructions on how to use
these three bands to create other vegetation indices.
● Radiometric Processing and Calibration: Confirm that the Correction Type and Calibration values are correct before proceeding. Pix4D usually recognizes and populates this information correctly, but errors sometimes occur.
○ Correction Type options vary based on the calibration equipment used (i.e., reflectance
panel, sunshine sensor, etc.). See Pix4D’s documentation on Radiometric Corrections for
more information on each of the options.
○ No Correction: no radiometric correction will be performed.
○ Camera Only: requires image EXIF metadata
○ Camera and Sun Irradiance: requires a sun irradiance sensor, image XMP tags, and EXIF metadata (light sensor data is most effective in overcast, completely cloudy conditions)
○ Camera, Sun Irradiance, and Sun Angle: requires known geometry of sensor and
camera to be embedded in EXIF, along with XMP tags and EXIF metadata (this
option should only be selected for flights that were performed in clear sky
conditions)
○ Camera, Sun Irradiance and Sun Angle using DLS IMU: requires an IMU
embedded in the sun sensor and the orientation to be tagged in XMP (this
option should only be used for flights that were performed in clear sky
conditions)
Quality Note: If the Correction Type is highlighted red in the dropdown menu (in Processing
Options > Step 3 > Index Calculator > Radiometric Processing and Calibration), that means Pix4D
could not find any EXIF/XMP metadata written in the project and that option cannot be used for
processing. To fix this, click the Calibrate… box and import the image of the calibration panel
corresponding to the band that was selected. Click and drag to draw a box around the gray
square in the calibration card image (see Figure 3). The correct reflectance value for the
particular band should be inputted into the Reflectance Factor box. Reflectance values should
be between 0 and 1.0. The sensor company can be contacted for the correct reflectance values
if they are not known.
Figure 3. Radiometric Calibration Card
● Use the coefficient for that band associated with the type of sensor. If the coefficient is
unknown, contact the sensor company for panel coefficients.
● Resolution
○ MS default: Automatic (GSD =1)
● Reflectance Map
○ MS default:
○ GeoTIFF box checked
○ Merge Tiles box checked
● Indices
○ MS recommendation: selecting boxes for all bands and indices available
● Export
○ MS default:
○ Index Values as Point Shapefiles box UNchecked
○ Index Values and Rates as Polygon Shapefiles box checked
3.3.3. Running Step 3 and Generating Outputs
Run Step 3 by checking the 3. DSM, Orthomosaic and Index box and clicking Start. Outputs will be
generated (usually over multiple hours) and the quality report will be updated with additional
information on outputs.
Exporting Products
Pix4D will store outputs within the home project folder using the following file structure:
● 1_initial/ → Quality Report
● 2_densification/ → Point Cloud (creates outputs only if specified in settings)
● 3_dsm_ortho/ → Orthomosaic, DSM, DTM (plus other extras if specified in settings)
● 4_index/ → Indices, Reflectance Maps, Project Data (additional info about GSD, etc.)
General Note: Both the Reflectance and Indices output folders contain rasters (one for each respective band/index); pixel values in both of these raster outputs represent reflectance values. The difference between the two types of outputs is that the index rasters can be visualized in Pix4D’s index calculator interface; when the files are opened in ArcGIS or other programs, the two sets of rasters are redundant. Only outputs from the Indices folder will be used for this analysis.
4. Post Processing and Analysis
The analysis sections below provide instructions for analyzing drone imagery outputs with the goal of estimating four ground-based measurements traditionally collected in many wetland monitoring programs, including two parameters that are measured as part of the emergent vegetation biomonitoring component of the NERR System Wide Monitoring Program (Moore 2009): canopy height, percent cover, ecotones, and above-ground biomass. The analyses for this project were written such that, for each parameter, drone imagery-based measurements are compared to ground-based measurements for accuracy assessment. All analyses were done in ArcGIS (see the software-specific notes below):
Software specific notes:
● In ArcGIS Pro, all tools and toolboxes referenced here can be accessed via the Geoprocessing
pane, accessed by clicking the Tools button under the Analysis ribbon tab. For the percent cover
analysis, the Spatial Analyst license is required to complete the process (license can be checked
by going to the Project menu, selecting Licensing and inspecting the Esri Extensions box).
● In ArcGIS Desktop, all tools and toolboxes referenced here can be accessed via the ArcToolbox
(under the Geoprocessing menu). For the percent cover analysis, be sure to enable the Spatial
Analyst toolbox by checking the appropriate box in the Customize>Extensions menu.
● For those new to ArcGIS, it is typically faster to find tools by using the search function (in the
Geoprocessing pane in ArcGIS Pro and in the Search pane in ArcGIS Desktop) than by manually
going through the toolbox menus.
Coordinate System Standardization:
It is important for the quality of results that all data used for analysis is in the same coordinate system
(both horizontal and vertical) to ensure consistency among comparisons. The coordinate systems of the
image processing outputs should match those of the field data for each comparison. The horizontal
coordinate system varies by the region/field GPS system used; the standard vertical coordinate system used in this project is NAVD88.
● Horizontal CS - If the output coordinate systems were set consistently in image processing, all
outputs should already be in the same coordinate system (and should match the coordinate
system used for field data collection). If not, they can be reprojected in ArcGIS.
○ To check the coordinate system of an output, bring the file into an ArcGIS map, right
click on the layer in the Contents pane and select Properties > Source > Spatial
Reference.
○ To change the coordinate system of a layer, choose the Project tool (for shapefiles) or the Project Raster tool (for rasters) and enter the input data and output coordinate system accordingly (see the arcpy sketch after this list).
● Vertical CS - All outputs should be projected to NAVD88, for consistency.
○ To set all layers in map to the same vertical coordinate system, right click the name of
the map in the Contents pane, then select Properties > Coordinate Systems > Current Z.
NAVD88 is located under Vertical Coordinate Systems > Gravity Related > North
America.
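The reprojection steps above can also be scripted; the following is a minimal arcpy sketch, assuming hypothetical file names and an example target projection (WKID 26917, NAD83 UTM Zone 17N). The actual coordinate system should match the field data for the site.

import arcpy

# Hypothetical target projection; substitute the coordinate system used for field data collection.
target_cs = arcpy.SpatialReference(26917)  # NAD83 UTM Zone 17N

# Report the current coordinate system of a layer (equivalent to Properties > Source > Spatial Reference).
print(arcpy.Describe("ecotone_field_points.shp").spatialReference.name)

# Reproject a shapefile with the Project tool.
arcpy.management.Project("ecotone_field_points.shp", "ecotone_field_points_utm.shp", target_cs)

# Reproject a raster with the Project Raster tool.
arcpy.management.ProjectRaster("site_orthomosaic.tif", "site_orthomosaic_utm.tif", target_cs)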
Data to Analysis Workflow:
Figure 4 shows which image processing outputs and field data feed into each of the four analyses.
Remember that the sensor(s) and resultant imagery for a study site will dictate how the Image
Processing Outputs are derived (the standard RGB and Multispectral output breakdown shown can be
used when a study site has both RGB and MS imagery). Refer to Table 3 for guidance on how to derive
outputs when using only an RGB sensor, only a multispectral sensor, or a combination of the two.
Figure 4. Data to Analysis Flowchart
4.1 Assessing Accuracy of Elevation Models and Efficacy for Estimating
Canopy Height
Caveat
At the time this protocol was written, the image processing software packages we used (Pix4D and Drone2Map) were not able to produce surface models (DSMs and DTMs) of tidal wetland landscapes with the accuracy needed to measure canopy height. We have still provided our process for checking the accuracy of the DSM and DTM rasters individually and estimating canopy height, for the purposes of documenting the methods used in this effort.
The following data are used in the canopy height analysis (also see Figure 4):
● Digital Surface Model (DSM) raster (modeled surface (vegetation, exposed ground) elevations,
generated during image processing)
● Digital Terrain Model (DTM) raster (modeled terrain (ground) elevations, generated during
image processing)
● Canopy Height Values (measured maximum and average stem heights, acquired in the field)
● Checkpoint, Vegetation and Biomass Plot Elevation Values (Z values at each X,Y plot location,
acquired in the field)
General Note: The DSM and DTM accuracy check processes described below are not required for obtaining an estimate of canopy height; the purpose of the accuracy checks is to get a sense of the error associated with the DSM and DTM separately.
Check Accuracy of DSM
The field-based canopy height measurements taken at vegetation plots can be used as ‘true values’ to
compare the modeled surface (DSM) elevation values against. The difference between average DSM
elevation and canopy elevation (field-based canopy height + ground elevation at each plot) can be taken
at each available plot to give a sense of DSM error.
● Import the plot number, ground-measured canopy height values for each plot, and the plot X, Y, Z data into ArcGIS Pro and convert them to a canopy elevation shapefile. The canopy elevation shapefile should contain the four canopy height measurements collected at each plot: average canopy height with leaf/stem straightening, max canopy height with leaf/stem straightening, and average and max canopy heights without leaf/stem straightening. Create a CSV file pulling the necessary data from the permanent plot vegetation survey data. For plots without vegetation, enter a canopy height of 0. Create a new column labeled for each variation on the canopy height measurement (e.g., ‘canopy_elevation_avg_straight’, ‘canopy_elev_max_straight’, etc.) and add the plot elevation (the Z value of each plot location) to the canopy height for each column.
○ To create the canopy elevation shapefile, feed the CSV into the XY Table to Point tool
(designate the longitude column as the X Field, the latitude column as the Y Field, and
the Coordinate System used to collect the coordinates in the field).
● The DSM TIFF file derived from image processing should be used to extract DSM elevation
values at each vegetation plot to create the mean, min and max DSM table (the table should
contain plot number and mean, min, and max DSM elevation for each vegetation plot).
○ Create a shapefile with the square footprints of the vegetation plot locations
■ Use the Buffer tool (found in the Analysis Toolbox) to create a circular buffer
around the center points in the canopy elevation shapefile, input 0.5 meters
Distance for a 1 x 1 meter vegetation quadrat. Leave the Method and Dissolve
Type inputs at their default (Planar and No Dissolve).
■ Use the Minimum Bounding Geometry tool from the Data Management toolbox
to turn the circle buffers into squares. Use the circular buffers as the input
feature, and set the Geometry Type to Rectangle by Area and Group Option to
None.
○ To create the DSM table, use the Zonal Statistics as Table tool (in Image Analyst
Toolbox) to extract the DSM value at each plot (designate the vegetation plot quadrat
footprint shapefile as the feature zone data, designate the plot number column as the
zone field, designate the DSM raster as the input value raster, choose a name and
location for the output table and designate the statistics type as Minimum, Maximum,
and Mean).
● The canopy elevation data and the DSM elevation data should be added to the same table in
order to quantify error and assess accuracy.
○ To join the DSM data table to the canopy elevation shapefile attribute table, use the
Join Field tool (in Data Management Toolbox) (designate the canopy elevation shapefile
as the input table, the plot number as the input join field, the DSM table as the join
table, the plot number as the join table field, and the min, max, and mean DSM value
field as the transfer fields). Note, the DSM values will be added to the attribute table of
the designated input table (e.g., canopy elevation shapefile).
○ Open the canopy elevation shapefile attribute table, right click on the newly added MIN column, select Fields, and update the aliases for MIN, MAX, and MEAN (e.g., dsm_min, dsm_max) so the naming convention is clear.
○ Subtract the DSM statistics from the corresponding canopy elevation columns (for example, with the Calculate Field approach described for the DTM below). The resulting values are the differences between DSM elevation and field-based canopy elevation at each plot; an arcpy sketch of this workflow follows.
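The DSM accuracy check above can be scripted end to end. The sketch below is a minimal arcpy example that assumes a Spatial Analyst license and hypothetical file, field, and column names (e.g., ‘plot_id’, ‘lon’, ‘lat’); adjust these to match the actual field data.

import arcpy
from arcpy.sa import ZonalStatisticsAsTable

arcpy.CheckOutExtension("Spatial")
field_sr = arcpy.SpatialReference(26917)  # hypothetical field-data projection

# 1. Canopy elevation points from the field CSV (plot_id, lon, lat, canopy elevation columns).
arcpy.management.XYTableToPoint("canopy_elevation.csv", "canopy_elev_pts.shp",
                                x_field="lon", y_field="lat", coordinate_system=field_sr)

# 2. Square 1 x 1 m quadrat footprints: 0.5 m circular buffer, then rectangle-by-area envelopes.
arcpy.analysis.Buffer("canopy_elev_pts.shp", "plot_buffers.shp", "0.5 Meters",
                      dissolve_option="NONE")
arcpy.management.MinimumBoundingGeometry("plot_buffers.shp", "plot_quadrats.shp",
                                         geometry_type="RECTANGLE_BY_AREA", group_option="NONE")

# 3. Min, max, and mean DSM elevation within each quadrat footprint.
ZonalStatisticsAsTable("plot_quadrats.shp", "plot_id", "dsm.tif", "dsm_stats.dbf",
                       statistics_type="MIN_MAX_MEAN")

# 4. Join the DSM statistics back onto the canopy elevation points for differencing.
arcpy.management.JoinField("canopy_elev_pts.shp", "plot_id", "dsm_stats.dbf", "plot_id",
                           ["MIN", "MAX", "MEAN"])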
Check Accuracy of DTM
The field-based elevation measurements taken at checkpoints, vegetation plots, and biomass plots (all
referred to as checkpoints for the DTM accuracy exercise) can be used as True Values to compare the
modeled terrain (DTM) elevation values against. The difference between average DTM elevation and
field-based ground elevations can be taken at each available point to get a sense of DTM error.
● Ground-measured checkpoints acquired in the field should be inputted into a CSV file, imported
into ArcGIS Pro and converted to a checkpoints shapefile (the checkpoints shapefile should
contain unique checkpoint ID, longitude, latitude and elevation values for each point).
○ To create checkpoints shapefile, feed the CSV into the XY Table to Point tool (designate
the longitude column as the X Field, the latitude column as the Y Field, and the
Coordinate System used to collect the coordinates in the field).
● The DTM TIFF file derived from image processing should be imported into ArcGIS Pro as a raster.
Average DTM values should be extracted at each checkpoint plot to create the mean DTM table
(the mean DTM table should contain checkpoint plot id and mean DTM value for each
checkpoint location).
○ To create the mean DTM table, use the Zonal Statistics as Table tool to extract the mean
DTM value at each checkpoint location (designate the checkpoint plot shapefile as the
feature zone data, designate the checkpoint ID column as the zone field, designate the
DTM raster as the input value raster, choose a name and location for the output table,
and designate the statistics type as mean).
● The checkpoint elevation data and the mean DTM data at each checkpoint should be added to
the same table in order to quantify error and assess accuracy.
○ To join the mean DTM data table to the checkpoint shapefile attribute table, use the
Join Field tool to add the mean DTM column to the checkpoint shapefile attribute table
(designate the checkpoint shapefile as the input table, the plot number as the input join
field, the mean DTM table as the join table, the checkpoint ID as the join table field, and
the mean DTM value field as the transfer field). Note, the mean DTM values will be
added to the attribute table of the designated input table (e.g., checkpoint shapefile
attribute table).
● The DTM values should be subtracted from the checkpoint elevation values for each point in
order to obtain height differences for each point that can then be used to quantify error.
○ To quantify differences between mean DTM and checkpoint elevation values at each
point in ArcGIS Pro, open Fields View within the attribute table of the checkpoint
shapefile attribute table (right click on a column header and select Fields).
○ In Fields View, add a new numeric field (named DIFF or something distinguishable), save
changes and exit field view.
○ In the checkpoint shapefile attribute table, right click the new field and select calculate
field. Enter an expression that subtracts the values in the mean DTM field from the
checkpoint elevation values field. In the Fields box, double click the field representing
the checkpoint elevation values, then click the subtraction sign, and then click the field
representing DTM mean elevation values. Click OK.
○ Repeat this same exercise for vegetation plots and biomass plots if ground-based RTK
elevations were also taken at these plots.
○ The resulting values are the differences between average DTM elevations and ground-
based elevations at each measured point.
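The DTM error calculation can similarly be sketched in arcpy, again assuming a Spatial Analyst license and hypothetical file and field names (‘ckpt_id’, ‘elev_rtk’).

import arcpy
from arcpy.sa import ZonalStatisticsAsTable

arcpy.CheckOutExtension("Spatial")

# Mean DTM elevation at each checkpoint location.
ZonalStatisticsAsTable("checkpoints.shp", "ckpt_id", "dtm.tif", "dtm_mean.dbf",
                       statistics_type="MEAN")

# Join the mean DTM value onto the checkpoint shapefile attribute table.
arcpy.management.JoinField("checkpoints.shp", "ckpt_id", "dtm_mean.dbf", "ckpt_id", ["MEAN"])

# DIFF = field-measured (RTK) elevation minus mean DTM elevation at each point.
arcpy.management.AddField("checkpoints.shp", "DIFF", "DOUBLE")
arcpy.management.CalculateField("checkpoints.shp", "DIFF", "!elev_rtk! - !MEAN!", "PYTHON3")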
Calculate Estimated Canopy Height
The difference between DTM elevation and DSM elevations at each vegetation plot can be used to
estimate canopy height (with the caveat described above in mind).
● It is recommended that the DSM and DTM values be compared at the resolution of the DSM
raster (the DSM resolution is equal to the ground sampling distance (GSD) of the drone, while
the DTM resolution is more coarse).
○ Use the Resample tool to align the two rasters and ensure they have the same cell size.
Designate the DTM raster as the Input Raster, and the DSM raster as both the Snap
Raster (Environment Setting) and the Output Cell Size (set the Snap Raster Environment
Setting first, then the Output Cell Size setting).
Quality Note: The resolution of the output raster should be checked before proceeding
to the next step (right click the raster and select Properties > Source > Raster
Information to ensure Cell Size X and Y are as expected). Note that running the
Resample tool within an ArcGIS Pro geoprocessing model has been shown to be problematic;
the tool can be run outside the model if errors occur.
○ Once the DTM raster is resampled and aligned with the DSM raster, use the Raster
Calculator to subtract the DTM raster from the DSM raster. The result will be a
difference raster referred to as ‘DSM-DTM raster’ going forward.
● Use the DSM-DTM raster to extract estimated canopy height at each vegetation plot (The
square quadrat footprints should be used here, not the veg plot points). The canopy height table
should contain plot number and mean, min, and max canopy height values within each 1 m^2
vegetation plot.
○ To create the mean DSM-DTM table, use the Zonal Statistics as Table tool to extract the
canopy height value at each plot (designate the veg plot quadrat footprint shapefile as
the feature zone data, designate the plot number column as the zone field, designate
the DSM-DTM raster as the input value raster, choose a name and location for the
output table and designate the statistics type as Minimum, Maximum, and Mean).
● The resulting values represent an estimation of canopy height at each plot.
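A minimal arcpy sketch of the canopy height estimation follows, assuming hypothetical raster and shapefile names and a Spatial Analyst license; the snap raster and cell size settings mirror the Resample guidance above, and the bilinear interpolation choice is an assumption suitable for continuous elevation data.

import arcpy
from arcpy.sa import Raster, ZonalStatisticsAsTable

arcpy.CheckOutExtension("Spatial")
arcpy.env.snapRaster = "dsm.tif"  # align the resampled DTM cells to the DSM grid

# Resample the coarser DTM to the DSM cell size (read from the DSM itself).
cell = arcpy.Describe("dsm.tif").meanCellWidth
arcpy.management.Resample("dtm.tif", "dtm_resampled.tif",
                          cell_size="{0} {0}".format(cell),
                          resampling_type="BILINEAR")  # assumed interpolation choice

# DSM minus DTM = estimated canopy height surface (the 'DSM-DTM raster').
canopy_raster = Raster("dsm.tif") - Raster("dtm_resampled.tif")
canopy_raster.save("dsm_minus_dtm.tif")

# Min, max, and mean estimated canopy height within each 1 x 1 m quadrat footprint.
ZonalStatisticsAsTable("plot_quadrats.shp", "plot_id", "dsm_minus_dtm.tif",
                       "canopy_height_est.dbf", statistics_type="MIN_MAX_MEAN")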
4.2 Ecotone Delineation
The following data are used in the ecotone analysis (also see Figure 4):
● Orthomosaic raster (true color image, generated during image processing)
● Digital Surface Model (DSM) raster (modeled surface (vegetation, exposed ground) elevations,
generated during image processing)
● NDVI raster (Normalized Difference Vegetation Index, generated in image processing)
● Field ecotones (X,Y locations of each ecotone line, acquired in the field)
To start
● Add the RGB orthomosaic to a new map. Select the Measure tool in the Map ribbon tab in the
Inquiry group (called the Measure tool in ArcGIS Desktop, accessed from the Tools toolbar). Use the defaults (planar measurements in metric units). Zoom in on the orthomosaic on an ecotone
of interest, start clicking to add a multi-point line. With each click, the tool will populate a list of
the distance between each point. Continue clicking along any line or boundary, aiming to get the
distance between each point to approximately one meter. This exercise is simply to calibrate
point-clicking to a fairly consistent distance, so continue until points can be reliably added at
roughly one-meter intervals.
Manually delineating ecotones
● Boundaries will need to be delineated for (1) the water-wetland edge, (2) the low marsh-high
marsh edge, and (3) the wetland-upland edge (see Delineating Field Ecotones section for
descriptions of ecotones). To avoid a biased interpretation, do not view the field-delineated
ecotone shapefiles prior to manually delineating ecotones.
● In the Catalog pane, navigate to the folder or geodatabase that will contain the boundary files,
right-click, and under the New pop-out, select Shapefile (if working in a folder) or Feature Class
(if working in a geodatabase). Follow through the prompts, selecting Line/Polyline as the
geometry type, and being sure to choose the same projection as the other files in the project.
● Using the Create Features pane (under the Edit ribbon tab in the Features group in ArcGIS Pro;
right-click the layer and select Start Editing in ArcGIS Desktop), select the feature class you just
created under the Templates menu, select the Line/Polyline tool, and digitize one of the
ecotones, aiming for roughly one meter in between points to match the sampling interval of
ground-based RTK delineations. Try to begin and end the line for each ecotone at the
approximate location the field surveys of each ecotone started and ended.
○ Use the arrow keys on the keyboard to navigate along the orthomosaic while using the
Create tool, or hold the ‘C’ key while clicking to temporarily switch back to using the
cursor to navigate the map.
○ In ArcGIS Pro, click Save in the Edit tab in the Manage Edits group to save the edits.
○ In ArcGIS Desktop, click Save Edits from the Editor menu in the pop-up toolbar, then
click Stop Editing.
● Repeat these steps with all three of the ecotones of interest, creating a new Shapefile/Feature
Class file for each ecotone.
● Be sure to save all edits regularly.
Compare the hand-delineated boundary to field-measured boundary
● Run the Near tool from the Analysis toolbox, using the field-measured RTK delineation of the
ecotone (a point shapefile of the GPS points from the ecotone) as the Input Features, and using
the digitally delineated ecotone (the Line/Polyline feature class) as the Near Features. Set the
Method to Geodesic, and update the field names to e.g., NEAR_FID_RGB and NEAR_DIST_RGB
(change these to _NDVI and _DSM when using NDVI and DSM rasters to delineate ecotones) and
leave the other options at their defaults.
○ This modifies the input point shapefile by adding two new columns. There is nothing
wrong with that, but if preserving the original file in an unmodified format is preferable,
then make a copy of the point shapefile and use that as the Input Features. Additionally,
ArcGIS Pro has an Enable Undo option for the tool (next to the Run button) that enables
an edit session, which can be very useful, but be sure to save the edits before
proceeding.
● Open the attribute table of the now-modified GPS points that were used as the Input Features in
the previous step. Right-click on the NEAR_DIST column and select statistics. This will open a
new panel that contains the mean and standard deviation (among other statistics), which serve
as a metric of how close the field-measured boundaries and digitally delineated boundaries are
to each other.
○ Repeat this process for each ecotone.
● Repeat all steps using the NDVI raster instead of the RGB orthomosaic to delineate the
ecotones.
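The ecotone comparison can also be scripted; the sketch below uses arcpy with hypothetical file names and summarizes the NEAR_DIST field with the Summary Statistics tool rather than the attribute-table Statistics panel.

import arcpy

# Copy the field-measured RTK ecotone points so the original shapefile stays unmodified.
arcpy.management.CopyFeatures("rtk_ecotone_points.shp", "rtk_ecotone_near.shp")

# Geodesic distance from each RTK point to the hand-delineated ecotone line.
arcpy.analysis.Near("rtk_ecotone_near.shp", "ecotone_rgb_delineation.shp", method="GEODESIC")

# Mean and standard deviation of the point-to-line distances.
arcpy.analysis.Statistics("rtk_ecotone_near.shp", "ecotone_rgb_near_stats.dbf",
                          [["NEAR_DIST", "MEAN"], ["NEAR_DIST", "STD"]])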
Extracting ecotones from classified rasters
● To extract ecotones from species-specific classified rasters, first complete the percent cover
analyses (below).
● The following steps were written to extract the low marsh-high marsh ecotone, but should work
just as well for the water-wetland and wetland-upland ecotones (or the border of any
species/zone in the species specific classified raster).
○ First, run the Reclassify tool to reclassify the species-specific classified raster into just
two classes, using classvalue as the Reclass field (it may be necessary to press the
Unique button to repopulate the table). Set the unvegetated and low marsh species to
0, and set all of the high marsh species to 1, adjusting the classes for different ecotones.
○ Use the reclassified raster as input for the Resample tool, and set the sampling
technique option to Majority. The cell size strongly affects the shape of the final ecotone: coarser resolutions create simpler ecotones with fewer patches, while finer resolutions create more complex ecotones with more patches. A cell size of 0.25 m x 0.25 m generally provides good results, with resolutions of 0.1 m x 0.1 m and 1 m x 1 m providing less and more linear ecotones, respectively.
○ Use the resampled raster as input for the Boundary Clean tool, leaving the default
settings. If desired, the Sort Type option can be changed to Descending to preferentially
shrink and dissolve habitat patches, or to Ascending to preferentially expand and
connect habitat patches.
○ Use the output raster from Boundary Clean as input to the Contour List tool. Specify the
numeric identity of the first class from the reclassified raster (i.e. 0) as the Contour
Value.
○ Optional: to create a smoother, potentially more realistic ecotone, take the output
contour shapefile and use it as input to the Smooth Line tool. Set Smoothing Tolerance
to 5 meters, and leave all other options as defaults. Changing the Smoothing Tolerance
results in a line that conforms more or less sharply to the hard edges of the initial
contour shapefile.
● It should be possible to modify this process to create multiple ecotones at once, for instance
obtaining the wetland-upland boundary at the same time by including an ‘upland’ category in
the Reclass table in the Reclassify tool, and providing the Contour List tool with multiple values.
The process presented here only extracts a single ecotone for simplicity.
Quality Note: If comparing the extracted ecotone to a field-measured ecotone or one
delineated by hand in ArcGIS, the cell size used in the Resample tool should be carefully
considered. Smaller cell sizes (finer resolutions) tend to result in more patches of habitat, as
opposed to a single block, so if an ecotone with many circular patches is being compared to a
single linear ecotone (e.g. with the Near tool), its accuracy may be artificially inflated by the
presence of the patches. The cell size in the Resample tool can be used to change the number of
resulting habitat patches, and the Sort Type option in the Boundary Clean tool can be used to
dictate whether those patches are preferentially removed or expanded, depending on the
preference of the user.
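The extraction steps above can be chained together in arcpy, as in the sketch below. It assumes a Spatial Analyst license and hypothetical file names and class values; the 0.25 m cell size and 5 m smoothing tolerance follow the guidance in this subsection, and the PAEK smoothing algorithm is an assumed choice.

import arcpy
from arcpy.sa import Reclassify, RemapValue, BoundaryClean, ContourList

arcpy.CheckOutExtension("Spatial")

# 1. Collapse the species-specific classes to two values: 0 = unvegetated/low marsh, 1 = high marsh.
#    The class values below are hypothetical; match them to the Classvalue field of the raster.
two_class = Reclassify("species_classified.tif", "Classvalue",
                       RemapValue([[0, 0], [1, 0], [2, 1], [3, 1]]))
two_class.save("two_class.tif")

# 2. Resample to 0.25 m using the Majority technique.
arcpy.management.Resample("two_class.tif", "two_class_25cm.tif", cell_size="0.25 0.25",
                          resampling_type="MAJORITY")

# 3. Clean the class boundaries, then trace the class-0 edge as a contour line.
cleaned = BoundaryClean("two_class_25cm.tif")
cleaned.save("two_class_clean.tif")
ContourList("two_class_clean.tif", "low_high_ecotone.shp", [0])

# 4. Optional: smooth the extracted ecotone (5 m tolerance).
arcpy.cartography.SmoothLine("low_high_ecotone.shp", "low_high_ecotone_smooth.shp",
                             "PAEK", "5 Meters")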
4.3 Percent Cover Analysis
4.3.1 Total percent cover (vegetated vs unvegetated classification)
The following data are used in the percent cover analysis (also see Figure 4):
● Orthomosaic raster (true color image, generated from image processing)
● Digital Surface Model (DSM) raster (modeled surface (vegetation, exposed ground) elevations,
generated from image processing)
● NDVI raster (Normalized Difference Vegetation Index, generated from image processing)
● Ground vegetation data (per plot percent cover estimates, acquired in the field)
● Ground vegetation plot locations (X, Y and Z locations of vegetation plots, acquired in the field)
This process requires a Spatial Analyst license in both ArcGIS Pro and Desktop. See ArcGIS Pro’s
documentation on Image Analysis for an in-depth overview of the classification process. See Gray et al.
2018 for an example of this technique as used in the NC NERR.³
³ P. C. Gray, J. T. Ridge, S. K. Poulin, A. C. Seymour, A. M. Schwantes, J. J. Swenson, and D. W. Johnston, “Integrating drone imagery into high resolution satellite remote sensing assessments of estuarine environments,” Remote Sens. 10, 1257 (2018).
There will be six output rasters from this analysis. The first three output rasters will be the total percent cover rasters derived from the RGB orthomosaic alone, from the RGB orthomosaic plus the NDVI raster,
and from the RGB orthomosaic plus the DSM raster. The second three output rasters will be derived
from the same imagery/raster inputs, but will be multi-species percent cover rasters. The general
classification workflow necessary to estimate percent cover is provided in Figure 5.
Figure 5. Segmented Classification Workflow
Clip the imagery to just the wetland area
Quality Note: This step is not strictly necessary; all of the Percent Cover Analysis can be conducted on
the full orthomosaic, but following this step significantly improves the accuracy of the classification, and
cuts down on processing times.
● In the Catalog pane, navigate to the folder or geodatabase that will contain the boundary files,
right-click, and under the New pop-out, select Shapefile (if working in a folder) or Feature Class
(if working in a geodatabase). Follow through the prompts, selecting Polygon as the geometry
type, and being sure to choose the same projection as the other files in the project.
● Using the Create Features pane (under the Edit ribbon tab in the Features group in ArcGIS Pro;
right-click the layer and select Start Editing in ArcGIS Desktop), select the feature class you just
created under the Templates menu, select the Polygon tool, and digitize a polygon surrounding
the wetland area, from the wetland-upland ecotone to the wetland-water ecotone. This polygon
will be used to clip out any area of the image that isn’t wetland (i.e. remove water and upland
areas), so aim to make the boundaries conform as closely as possible to the extent of wetland
vegetation.
○ Be sure to include all of the percent cover plots inside the polygon, even if they fall
beyond the water-wetland or wetland-upland ecotones.
○ Those who completed the Ecotone Delineation section (above) can use the Trace tool in
the Create Features pane to follow their previously delineated ecotones, switching to
the Polygon tool when necessary to connect ecotones and fill in gaps.
○ If the upland-wetland boundary is more of a transition zone than a clean divide, some
upland vegetation may be included in the boundary; just be sure to add an extra class
for the upland vegetation in the Training Samples step later on.
○ If the wetland area is not contiguous, it will be necessary to create separate features (in
the same shapefile/feature class) and then merge them together into a single feature.
■ To merge the features in ArcGIS Pro, select all of the features, open the Modify
Features pane (from the Edit tab in the ribbon), and select Merge from the
Construct section
■ To merge the features in ArcGIS Desktop, select all of the features, and under
the Editor menu in the pop-up toolbar, select Merge
○ Save all edits.
■ In ArcGIS Pro, click Save in the Edit tab in the Manage Edits group to save the
edits.
■ In ArcGIS Desktop, click Save Edits from the Editor menu in the pop-up toolbar,
then click Stop Editing.
● Run the Extract by Mask tool from the Spatial Analyst toolbox, using the RGB orthomosaic for
the Input Raster option, and using the wetland boundary polygon for the Input Raster or Feature
Mask Data option.
○ The output of this step will be the input for the following section.
Stretch raster function - increases contrast between features of interest
● ArcGIS Pro: select the layer to be classified from the contents pane (the one created by the
extract by mask tool above), navigate to the Imagery tab in the top ribbon, and select Raster
Functions. In the pane that appears, expand the Appearance section, and select Stretch.
● ArcGIS Desktop: Under the Window menu, open the Image Analysis window, and select the
imagery raster. In the Processing section, select Add Function. In the window that pops up,
right-click the raster name, and select Stretch Function from the Insert Function pop-out.
● In the tool inputs, make sure the correct input raster is selected, change Type to
PercentMinMax, leave the Output Minimum and Maximum at their defaults (0 and 255), and
change the Percent Clip Minimum and Maximum each to 2.
○ Gamma: affects appearance of mid-level brightness. Gamma values less than 1 increase
contrast in lighter areas of the image, while gamma values greater than 1 increase
contrast in darker areas of the image. At first, it is recommended to leave it untouched
and accept the default, but if the end-product classified raster is struggling to distinguish
features in darker or lighter areas, come back and change gamma, making sure that the
Use Gamma box is checked. A gamma value of 0.2 was used (for the first three entries, i.e., the RGB bands) for analyzing the NC imagery, where the low marsh appeared brighter due to
reflection off of the wet mud, so enhancing contrast among the lighter areas in the
image enhanced the ability of the classifier to distinguish Spartina alterniflora from
unvegetated mud.
■ When setting Gamma, it is recommended to set bands to the same gamma
value (unless there is a reason to do otherwise). ArcGIS Pro will automatically
add an extra band if the gamma for all bands is modified; this can be ignored.
General Note: Raster functions, including Stretch, create a new layer in the active ArcGIS project/map,
but do not save a new raster file, so if the entire project is not saved when closed, the layer will be lost.
This generally is not an issue, but if desired, it can be avoided altogether by saving the layer as a
separate file. To do so, right click on the layer in the Contents pane, and under Data, select Export Raster.
Segmentation - breaks image up into objects or ‘segments’ by combining adjacent pixels based on their
similarity
● ArcGIS Pro: select the layer to be classified from the contents pane (the one created by the
stretch raster function above), navigate to the Imagery tab in the top ribbon, and under
Classification Tools, select Segmentation.
● ArcGIS Desktop: Use the Segment Mean Shift tool from the Spatial Analyst toolbox.
● In the tool inputs, increase Spectral Detail to the maximum (20), decrease Spatial Detail to the
minimum (1), and leave the minimum segment size at its default (20).
Quality Note: The above settings for spectral and spatial detail consistently yield the best
results. Minimum segment size can be increased if the classified output imagery appears noisy
and overfitted, but it should not be decreased below 20. For instance, at North Inlet-Winyah Bay NERR, trial and error suggested that a minimum segment size of 75 provided the best result.
● Change the Output Dataset name for clarity if desired.
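The clipping and segmentation steps can be sketched in arcpy as shown below, assuming a Spatial Analyst license and hypothetical file names; the Stretch raster function is easiest to apply interactively as described above and is omitted from the sketch.

import arcpy
from arcpy.sa import ExtractByMask, SegmentMeanShift

arcpy.CheckOutExtension("Spatial")

# Clip the orthomosaic to the digitized wetland boundary polygon.
wetland_only = ExtractByMask("rgb_orthomosaic.tif", "wetland_boundary.shp")
wetland_only.save("rgb_ortho_wetland.tif")

# Segment the clipped imagery: maximum spectral detail (20), minimum spatial detail (1),
# and the default minimum segment size (20), per the settings above.
segmented = SegmentMeanShift("rgb_ortho_wetland.tif", spectral_detail=20,
                             spatial_detail=1, min_segment_size=20)
segmented.save("rgb_ortho_segmented.tif")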
Training Samples - the distribution of the training sample selection strongly affects the quality of the
output classified image, so this step requires detailed attention.
● Ensure the segmented output from the previous step is selected in the contents pane, click the
classification tools dropdown (from imagery tab, image classification group) and select Training
Samples Manager.
● When opening the Training Samples Manager, schemas (which define the classes that training
samples may belong to) are displayed in the upper pane, and training samples are displayed in
the lower pane.
● Start by creating a new schema (the icon that looks like a list on paper), then add ‘vegetated’
and ‘unvegetated’ classes by right clicking on New Schema that was just created, and selecting
Add New Class. For each new class, specify a unique value for the required Value field (e.g. 0 for
unvegetated and 1 for vegetated). Vegetated and unvegetated classes will be sufficient for
estimating Total Percent Cover of vegetation; estimating species-specific percent cover will be
discussed below. Once the schema is complete with all desired classes, save the schema.
○ Any area not covered by live vegetation should be considered unvegetated, including
areas covered by water, wrack, or debris (see Percent Cover Ground-based vegetation
surveys subsection for more details).
● To start adding training samples, select one of the classes in the schema pane, then select a
drawing tool (rectangle, circle, polygon, or freehand) from above the schema pane, and start
drawing training samples on the image that are good representatives of the selected class. The
polygon and freehand tools take more time, but could theoretically make better/more precise
training samples. Generally, using the rectangle tool has produced satisfactory results. Save
these training samples regularly. If you make a training sample that overlaps with another class,
you can delete it using the red ‘x’.
Quality Note: Capturing representative training samples is very important! Make sure to select
areas of the image that represent the full range of colors and brightness of each class across the
image. For instance, vegetation may appear green on clean growth, but may also appear grayish, purplish, or brownish if it is muddy or reflecting sunlight (be sure to use training
samples from monospecific stands, short form, tall form, mixed-species stands, etc.). Likewise,
bare ground may appear brown, light gray, dark gray, or purplish depending on soil moisture
and sunlight reflection, bright white if covered by wrack, or black when shadowed by
vegetation. It is important to capture all colors present for each class, or the classifier will
struggle to interpret the imagery. Try to provide training samples for each class that span the
geography and the elevation gradient.
○ Aim for ~ 100 training samples each for vegetated and unvegetated classes.
■ For reference, the wetland area of the North Carolina NERR covered 14,000 m2,
and used 187 training samples (73 vegetated and 114 unvegetated). Training
samples were rectangular, varied greatly in size, and in some instances included
non-comforming objects (e.g. a small bare patch in a vegetated area, or a couple
of Spartina alterniflora stems on a mudflat).
○ Once the training samples have been saved, they can be added into the map, as with
any shapefile (when saving the Schema, add ‘.ecs’ to the end of your file name; when
saving the training samples add ‘.shp’. If you are unable to locate the saved data, save it
in a separate folder that is not a geodatabase). Once added to the map, right-click on
the layer in the Contents pane, and select Attribute Table. This allows the user to more
easily obtain a count of the number of samples within each class, or to reassign classes if
needed (by changing the ‘Classname’ and ‘Classvalue’ fields for an entry).
Classification - can be supervised or unsupervised, though this protocol uses only supervised methods.
There are three desired products from this section - one classification derived from the RGB orthomosaic
alone, one from the RGB orthomosaic in combination with the NDVI raster, and one from the RGB
orthomosaic in combination with the Digital Surface Model (DSM).
● ArcGIS has three supervised classifiers available: random trees, support vector machine, and
maximum likelihood.
Quality Note: The Random Trees classifier provided the most accurate classification at North
Carolina NERR, so it is recommended to start with Random Trees, then try the Maximum
Likelihood or Support Vector Machine classifiers if the classification from Random Trees is
unsatisfactory.
● Run the Train ___ Classifier from the Spatial Analyst toolbox (it is recommended to start with
Random Trees). Use output from the Segmentation step above as the Input Raster and the
saved training samples shapefile as the input training sample file. Leave the Dimension Value Field
blank, and leave all other inputs as their defaults, except for Additional Input Raster.
○ When naming the output file, be sure to indicate which classifier was used by adding
‘_RT’ or ‘_SVM’ or ‘_ML’ for the Random Trees, Support Vector Machine, or Maximum
Likelihood classifiers, respectively.
○ In both the Train Classifier and Classify Raster tools, there is an option to provide a
secondary raster. Classification results will be significantly improved by providing the
corresponding NDVI or DSM raster.
■ It will be necessary to run this step three times, once with no additional input
raster, once with the NDVI raster, and once with the DSM raster.
● Next, run the Classify Raster tool, using the output from the Segmentation step above for the
Input Raster and each output of the Train ___ Classifier tool as the input classifier definition.
○ The Additional Input Raster input should match the same additional raster (NDVI, DSM,
or none) that was used in the Train ___ Classifier tool.
○ Be sure to indicate which inputs were used in the output file name by adding ‘_RGB’ or
‘_withDSM’ or ‘_withNDVI’ to the end of the file name.
● Conduct a visual inspection of the classified output raster. Toggle between the RGB orthomosaic
and the classified raster, taking mental note if the classified raster appears to be fairly accurate,
or if it has consistent, obvious, and significant errors. Be sure to inspect different areas of the
image, e.g. high marsh, low marsh, transition zones, darker and lighter areas, etc.
● If the classified raster visually appears to be fairly accurate (relative to site knowledge and the
site orthomosaic), proceed to the next section (accuracy assessment). If the classified raster
appears to be deficient, try running a different classifier (one of the Train ___ Classifier tools) to
see if that provides a more accurate result before proceeding.
Quality Note: If the output classified image from all three classifiers has low accuracy, it may
help to add more training samples and then run the classifiers again.
Quality Note: The output raster will sometimes create extraneous rectangular blocks outside of
the imagery. If this happens, run the Extract By Mask tool to clip the classified raster to just the
area of interest, as at the beginning of this section.
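The training and classification runs (one per input combination) can be scripted as well; the sketch below is a minimal arcpy example for the RGB + DSM case with the Random Trees classifier, assuming hypothetical file names and a Spatial Analyst license. Running it with no additional raster, with the NDVI raster, and with the DSM raster yields the three classified outputs described above.

import arcpy
from arcpy.sa import TrainRandomTreesClassifier, ClassifyRaster

arcpy.CheckOutExtension("Spatial")

segmented = "rgb_ortho_segmented.tif"   # output of the Segmentation step
training = "training_samples.shp"       # saved training samples
additional = "dsm.tif"                  # secondary raster (swap in the NDVI raster, or omit for RGB only)

# Train a Random Trees classifier definition (.ecd) with the DSM as an additional input raster.
TrainRandomTreesClassifier(segmented, training, "classifier_RT_withDSM.ecd", additional)

# Classify the segmented raster using the same additional raster and save the result.
classified = ClassifyRaster(segmented, "classifier_RT_withDSM.ecd", additional)
classified.save("classified_RT_withDSM.tif")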
Accuracy assessment
The following quantitative accuracy assessment should be done following a visual inspection of the
classified rasters. This method of accuracy assessment creates random points across a classified image,
which will be compared to the class those points should belong to (which will be manually identified
from the segmented raster). Repeat this for each of the three classified rasters from the previous
section.
● Run the Create Accuracy Assessment Points tool from the Spatial Analyst toolbox, with the
classified raster as the input. Select Stratified Random or Equalized Stratified Random for the
Sampling Strategy option, and specify at least 100 points in the Number of Random Points
option.
○ Stratified random sampling distributes the points proportionally to the area of each
class, while equalized stratified random sampling distributes the random points evenly
between each class. Stratified random sampling is generally recommended, unless there
are concerns about the accuracy of classes that cover relatively little area (as may be the case with a multiple-species classification), in which case equalized stratified random sampling is recommended.
○ A greater number of random points increases confidence in the accuracy estimates, but
increases the amount of work required. Where possible, use 50 points per category.
○ Visually inspect the generated points on the map to ensure that the points are not
clustered together on a per-class basis. If they are, run the tool again, switching
between Sampling Strategy options if necessary.
● Identify the correct class for each accuracy point by manually editing the entry in the ‘GrndTruth’
column of the accuracy point shapefile, being sure to save edits.
○ The quickest way to do this is to right-click on an entry in the attribute table, select
‘Zoom To,’ and identify which segment in the segmented raster that point overlays in the
RGB imagery raster. Be sure to right-click on the ‘Classified’ column header and select
Hide so as to not bias your classifications in the ‘GrndTruth’ column (to un-hide, right-click
a column header, select Fields, and add a check under Visible for the hidden field).
● Run the Compute Confusion Matrix tool from the Spatial Analyst toolbox with the fully updated
accuracy point shapefile as the input.
○ The output table provides a matrix of which points were classified as each class, and
which classes they should have been assigned to. The diagonal of the matrix is the
number of points that were correctly classified, and the off-diagonal cells represent
misclassified points.
○ The ‘P_accuracy’ row (producer’s accuracy) represents the proportion of the actual
(ground-truth) area of each class that was classified correctly (e.g., “80 percent of the
vegetated area was correctly identified as vegetation by the classifier”)
○ The ‘U_accuracy’ column (user’s accuracy) represents the proportion of the area assigned
to each class in the classified raster that truly belongs to that class (e.g., “80% of the area
classified as vegetation was actually vegetation”)
○ The cell at the intersection of the ‘P_accuracy’ row and the ‘U_accuracy’ column
represents the overall accuracy of the classified map.
○ Ideally, the error will be randomly distributed (producer’s and user’s accuracies are both
consistently similar across all classes), and the overall accuracy will be greater than 75%.
If this is not the case, then return to the previous section to try to create a more
accurate classification.
■ If it isn’t possible to achieve these conditions after multiple attempts, it is fine to
proceed to the next section, comparing the classified imagery to field-measured
percent cover, but the resulting relationship is unlikely to be strong.
● Repeat the accuracy assessment for each of the output classified rasters (RGB alone, RGB+DSM,
and RGB+NDVI). This will eventually result in a total of 6 confusion matrix tables - three for the
Total Percent Cover analysis, and another three for the Multi-Species Percent Cover analysis.
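If scripting is preferred, the following is a minimal arcpy sketch of this accuracy-assessment loop. File names are hypothetical, and the ‘GrndTruth’ field must still be populated manually in the attribute table between the two tool calls.

```python
# Minimal arcpy sketch (hypothetical file names): create stratified-random points,
# manually ground-truth them, then compute a confusion matrix per classified raster.
import arcpy
from arcpy.sa import CreateAccuracyAssessmentPoints, ComputeConfusionMatrix

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"D:\marsh_project"      # assumed workspace

for tag in ("RGB", "withNDVI", "withDSM"):
    classified = f"classified_{tag}.tif"
    points = f"accuracy_points_{tag}.shp"
    # At least 100 points, distributed with stratified random sampling
    CreateAccuracyAssessmentPoints(classified, points, "CLASSIFIED", 100,
                                   "STRATIFIED_RANDOM")
    # ...pause here and manually populate the GrndTruth field for every point...
    ComputeConfusionMatrix(points, f"confusion_{tag}.dbf")
```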
Comparison Between Drone- and Field-based Total Percent Cover Estimation
After confirming that the accuracy assessment from the previous section yielded an overall accuracy of
75% or better (or that achieving such accuracy is unrealistic for the site), proceed to comparing the
classified imagery to field-measured percent cover. This process compares the total percent cover of all
vegetation estimated from 1-m² quadrats in the field to the coverage from the classified image within the
footprint of each quadrat. The process is the same in ArcGIS Pro and ArcGIS Desktop.
Repeat this for each of the three classified rasters from the classification section.
● Create a shapefile with the footprints of the square quadrat locations
○ Load the center points of the quadrat locations into ArcGIS as a new point shapefile
○ Use the Buffer tool (found in the Analysis toolbox) to create a circular buffer around the
center points in the canopy elevation shapefile; enter a Distance of 0.5 meters for a 1 x 1
meter vegetation quadrat. Leave the Method and Dissolve Type inputs at their defaults
(Planar and No Dissolve).
○ Use the Minimum Bounding Geometry tool from the Data Management toolbox to turn
the circle buffers into squares. Use the circular buffers as the input feature, and set the
Geometry Type to Rectangle by Area and Group Option to None.
● Use the Tabulate Area tool from the Spatial Analyst toolbox to calculate vegetated/unvegetated
coverage. Use the square quadrat locations (output from Minimum Bounding Geometry) as the
‘Input raster or feature zone data,’ and use the classified raster as the ‘Input raster or feature
class data.’ If they don’t come up as the defaults, set the ‘Zone’ field to ‘Point_name’ (or
whatever field identifies the individual quadrat locations), and set the ‘Class field’ to
‘Class_name.’ The processing cell size should default to the same as the classified raster.
Quality Note: The Tabulate Area tool can be buggy. It may need to be run twice to get an
appropriate output if the first output is blank or missing data.
● Right-click on the output table, and under Data, select Export Table. Specify the output location
(not in a geodatabase) and name it with ‘.csv’ to specify a CSV output file.
● If the quadrat size was 1 m², the data in the output table represent fractional coverage (multiply
by 100 to get percent coverage). If the quadrat size was not 1 m², then the output data (areal
coverage) will need to be divided by the quadrat area and multiplied by 100 to calculate percent coverage.
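For users scripting this comparison, the following is a minimal arcpy sketch of the quadrat-footprint and Tabulate Area steps above. The paths, field names, and 1-m² quadrat size are assumptions to adapt to your data.

```python
# Minimal arcpy sketch (hypothetical names): build 1 x 1 m square quadrat footprints
# from center points, then tabulate classified area within each footprint.
import arcpy
from arcpy.sa import TabulateArea

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"D:\marsh_project"    # assumed workspace

centers = "quadrat_centers.shp"              # quadrat center points
arcpy.analysis.Buffer(centers, "quadrat_buffer.shp", "0.5 Meters")  # radius = half the side length
arcpy.management.MinimumBoundingGeometry("quadrat_buffer.shp", "quadrat_squares.shp",
                                         "RECTANGLE_BY_AREA", "NONE")

# Tabulate vegetated/unvegetated area within each quadrat footprint
TabulateArea("quadrat_squares.shp", "Point_name",   # field identifying each quadrat
             "classified_RGB.tif", "Class_name",
             "quadrat_cover.dbf")

# For a 1-m2 quadrat, area in m2 equals fractional cover; multiply by 100 for percent cover.
```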
4.3.2 Species-specific Percent Cover Analysis (multiple vegetation species
classification)
The only difference for attempting to distinguish different vegetation species (instead of just vegetated
vs. unvegetated, as above) is the need to create a new classification schema that contains classes for
each different species to be identified, being sure to create enough training samples within that class for
the classifier to reliably pick out the species. Err on the side of having too many training
samples rather than too few, but it won’t be necessary (or perhaps even possible) to get 100 training
samples (as suggested above) for some of the less prominent species.
Larger species, especially those that form dense monospecific stands (e.g. Juncus roemerianus, Spartina
alterniflora), will be easier for the supervised classifiers to distinguish. Mixed-species stands are unlikely
to be accurately classified into their individual species, so a ‘mixed species’ training class made up of
similar-looking/co-occurring species can be useful to improve output quality. Small, sparsely distributed
understory species (e.g. Limonium carolinianum or Salicornia spp. mixed with S. alterniflora) are unlikely
to be reliably detected and should not be assigned classes unless good training samples can be provided.
At the North Carolina site, the following classes were specified: unvegetated, Spartina alterniflora,
Borrichia frutescens, and Spartina patens + Distichlis spicata together (since they co-occur and are barely
distinguishable when estimating percent cover in the field). While vegetative wrack was treated as
unvegetated area in the total percent cover analysis above, here it can be assigned its own class and
training samples.
General Note: The training samples from the vegetated-unvegetated classification can be modified for
reuse, with new classes added. To do so, navigate to the location of the previously used training samples
in the Catalog pane, right-click, select Copy, then right-click again and select Paste. Rename the new
shapefile, and add it to the map. Inspect the samples on the map, and delete any of the ‘vegetated’
samples that are not exclusively the dominant species (e.g. Spartina alterniflora), leaving the shapefile
with only samples representing the dominant species and unvegetated area (In ArcGIS Desktop, be sure
to click Start Editing first). Save the edits, then open the shapefile with the Training Sample Manager.
Add new classes that represent the other species present, then begin populating each class with training
samples.
As with the vegetated-unvegetated classification, when running the Train ___ Classifier and Classify
Raster tools, run each tool using as inputs the RGB orthoimagery with the NDVI raster, RGB
orthoimagery with the DSM raster, and RGB orthoimagery alone. This will again yield three separate
output rasters.
If analyzing the entire vegetated area of the image produces poor results when classifying for multiple
species, then it may be worth extracting just the multi-species area of the imagery (with the Extract by
Mask tool) and analyzing just that area of the imagery.
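If that approach is taken, a minimal arcpy sketch of the clipping step is shown below; the mask polygon delineating the multi-species area and the file paths are hypothetical.

```python
# Minimal arcpy sketch (hypothetical names): clip the orthomosaic to the
# multi-species area before re-running the classification workflow on it.
import arcpy
from arcpy.sa import ExtractByMask

arcpy.CheckOutExtension("Spatial")

multi_species_ortho = ExtractByMask(r"D:\marsh_project\ortho_rgb.tif",
                                    r"D:\marsh_project\multispecies_area.shp")
multi_species_ortho.save(r"D:\marsh_project\ortho_rgb_multispecies.tif")
```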
Quality Note: For quantifying the accuracy of the classification, it may only be possible to use the
method outlined in the Accuracy Assessment section above, unless the site has a significant number of
field measurements (10 or more plots) that are representative of the areas containing multiple species,
in which case the regression method of comparing to field-measured percent cover should also be used
(the Comparison Between Drone- and Field-based Total Percent Cover Estimation section above).
Accuracy assessment will again need to be completed for each of the three classified rasters.
4.4 Assessing Efficacy of Vegetation Indices to Estimate Above-ground
Plant Biomass
This biomass analysis was conducted to assess how well drone-collected vegetation indices (NDVI)
correlated with ground-based biomass measurements.
The following data are used in the above-ground biomass analysis:
● NDVI raster (Normalized Difference Vegetation Index, generated in image processing)
● Total biomass CSV (clipped biomass measurements in g/m², acquired in the field)
● Biomass plot shapefile (X,Y locations of biomass plots, acquired in the field)
Extract mean NDVI value at each Biomass Plot
● Ground-measured biomass plot locations and measurements should be organized into a CSV,
imported into ArcGIS Pro and converted to a biomass shapefile (the biomass shapefile should
contain plot number, longitude, latitude and field-derived biomass values).
○ To create the biomass shapefile, feed the CSV into the XY Table to Point tool (designate
the longitude column as the X Field, the latitude column as the Y Field, and the
Coordinate System used to collect the coordinates in the field).
● The points in the biomass shapefile should be buffered and bounded to create plots that
emulate the 0.25-m² square plots where biomass measurements were taken in the field. To
create a shapefile with the footprints of the square quadrat locations:
○ Load the center points of the quadrat locations into ArcGIS as a new point shapefile
○ Use the Buffer tool to create a circular buffer around the center points (0.25 meters for
a 0.25-m² biomass quadrat). Leave the Method and Dissolve Type inputs at their defaults
(Planar and No Dissolve).
○ Use the Minimum Bounding Geometry tool from the Data Management toolbox to turn
the circle buffers into squares. Use the circular buffers as the input feature, and set the
Geometry Type to Rectangle by Area and Group Option to None.
● The NDVI TIFF file created in processing step 3 of Pix4D should be imported into ArcGIS Pro as a
raster. Average NDVI values should be extracted at each biomass plot to create the mean NDVI
table (the mean NDVI table should contain plot number and mean NDVI value for each plot).
○ To create the mean NDVI table, use the Zonal Statistics as Table tool to extract the mean
NDVI value at each biomass plot (designate the square buffer output from the previous
step as the feature zone data, designate the plot number as the zone field, designate
the NDVI TIFF as the input value raster, choose a name and location for the output
table, and designate mean as the statistics type). Running this should output a table
containing a mean NDVI value associated with each 0.25-m² biomass plot.
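The same steps can be scripted; a minimal arcpy sketch follows, with hypothetical file and field names and the assumption that the field coordinates were recorded in WGS84.

```python
# Minimal arcpy sketch (hypothetical names): build 0.5 x 0.5 m biomass plot footprints
# from field-recorded coordinates and extract mean NDVI within each footprint.
import arcpy
from arcpy.sa import ZonalStatisticsAsTable

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"D:\marsh_project"    # assumed workspace

# Biomass plot centers from the field CSV (WGS84 assumed for field coordinates)
arcpy.management.XYTableToPoint("biomass_plots.csv", "biomass_points.shp",
                                "longitude", "latitude",
                                coordinate_system=arcpy.SpatialReference(4326))

# 0.25 m buffer -> bounding square -> 0.5 x 0.5 m (0.25 m2) plot footprint
arcpy.analysis.Buffer("biomass_points.shp", "biomass_buffer.shp", "0.25 Meters")
arcpy.management.MinimumBoundingGeometry("biomass_buffer.shp", "biomass_squares.shp",
                                         "RECTANGLE_BY_AREA", "NONE")

# Mean NDVI within each plot footprint
ZonalStatisticsAsTable("biomass_squares.shp", "plot_num",   # field identifying each plot
                       "ndvi.tif", "mean_ndvi.dbf",
                       "DATA", "MEAN")
```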
Explore Correlation Between NDVI and Ground Biomass
● By plotting the data from the NDVI table (mean NDVI value at each biomass plot) against the
data from the ground-based biomass table (ground-measured biomass at each biomass plot),
the correlation between NDVI and ground biomass can be explored.
○ Join the NDVI data to the ground biomass data using the plot numbers as the join key.
The resulting CSV should have an average NDVI value and a measure of biomass (total
biomass (live + dead) was used in this analysis) for each plot.
○ Perform a linear regression analysis to model the relationship between ground biomass
and NDVI.
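A short Python sketch of this join and regression is given below; the column names are hypothetical and it assumes pandas and SciPy are available.

```python
# Minimal sketch (hypothetical column names): join mean NDVI to field biomass by plot
# number and fit a simple linear regression.
import pandas as pd
from scipy import stats

ndvi = pd.read_csv("mean_ndvi.csv")          # columns: plot_num, MEAN (mean NDVI)
biomass = pd.read_csv("total_biomass.csv")   # columns: plot_num, biomass_g_m2 (live + dead)

joined = ndvi.merge(biomass, on="plot_num")

result = stats.linregress(joined["MEAN"], joined["biomass_g_m2"])
print(f"biomass = {result.slope:.1f} * NDVI + {result.intercept:.1f}, "
      f"r2 = {result.rvalue**2:.2f}")
```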
Appendices
Appendix 1. Ground control point construction instructions.
Ground control points (GCPs) can be constructed in many ways; two examples are as follows:
● A relatively cheap option is to paint 5-gallon bucket lids and install mounting hardware for
attachment to PVC poles (full Bucket Lid GCP construction instructions are documented below)
● A more expensive GCP option provides increased durability and a target with sharper
edges for improved accuracy during image processing. Guidance is as follows:
○ Construct GCPs by fastening (screws or heat annealing) black and white starboard
together using precut 12” white and 6” black squares (at least ¼” thick).
○ Mounting hardware (part 1 [screwed to the GCP] and part 2 [glued to the PVC
pole]) can also be installed on starboard GCPs for attachment of the GCP to PVC poles.
Bucket Lid ground control points (GCPs) Construction Instructions
Supplies needed:
-Standard, 5-gallon white or blue bucket lids (one for each GCP)
-Drill with wire brush OR angle grinder with wire brush OR sanding sponge
-Spray paint (black for white bucket lids or white for blue bucket lids). Use paints that are
specifically formulated to adhere to plastics. There are several available on the market such
as Krylon Fusion All-in-one, Valspar Plastic Spray Paint, and Rust-Oleum Specialty Paint for
Plastic Spray. Rustoleum appliance epoxy spray paint works well also.
-thin cardboard (from a soda box)
-scissors
-1 PVC floor flange per GCP
-1 PVC adapter fitting per GCP
-2 stainless steel bolts per GCP (we probably used 3/8” x 1.5”)
-drill bit (match to bolt diameter)
-4 stainless steel washers per GCP
-2 stainless steel lock nuts per GCP
-1/2” PVC poles—we cut poles at varying lengths to provide flexibility in application (c. 24”, 36”
and 48”)
-PVC cutter OR Miter saw or Sawzall
-2-4 landscape stakes per GCP (if you anticipate putting any GCPs directly on the ground). They
clip nicely on the edge of the bucket lids to keep lids from moving.
1) Use the drill with wire brush (or angle grinder or sanding sponge) to scuff and roughen the
bottom side of the lid to improve adherence of the spray paint. Metal brushes work better
and faster than a sanding sponge. See the picture below for how we ‘scuffed’ the bucket lids.
2) Cut out cardboard triangles to mask ~1/2 of the bucket lid as shown in the picture below.
You may need to add tape around the edges of the cutout to provide reinforcement for
repeated spray paintings.
3) Spray paint the parts of the bucket lid not covered by cardboard masks. We had the best
luck weighing down the two triangle masks with a couple of rocks and spraying quickly. We
also tried using a variety of tapes (including painter’s tape), but were unable to get
straight edges because the spray paint bled under the tape due to the roughing (step 1).
Allow the lid to dry.
4) Attach PVC pole mounting hardware. We used a PVC floor flange, which has 4 holes for
bolting through (we only used two holes in the flange; using all 4 seemed unnecessary).
Put the flat side of the flange in the middle of the non-painted side of the bucket lid (see
picture below). Mark location for drilling two holes (opposite each other) ensuring that
the holes and washers will not cover up important color-change edges on the GCP. Drill
holes through the bucket lid. Put a washer on each side of the bucket lid (pictures 1 and 3)
and tighten the lock nut. Thread PVC adapter fitting on flange (see picture below). Insert
PVC pole into PVC adapter.
Appendix 2. Image Processing in Drone2Map version 2.3
General Note: Drone2Map software should be updated to version 2.3 or beyond. Some of the steps and
functionality detailed below are not available in earlier versions.
Basic workflow for 2D processing (used to process Nadir images): Add Imagery and Source Data →
Define Processing Options → Add Ground Control → Process Image Collection → Generate Output
Products (e.g., orthomosaics, elevation models).
1. Adding Imagery
1.1. New Project
● Select project template. For wetland monitoring, select 2D Full.
○ Name the project and specify the file path. Ensure naming convention is clear (e.g.,
2Dfull_rgb_ncnerr_05062021).
General Note: Once the file name and location have been established, they should remain constant. If
changed, Drone2Map will no longer recognize the project, which can result in having to reprocess the images.
1.2. Select Images
● Add images using the Add Images… button to add images individually or Add Folder to add an
entire folder of images.
○ If your drone has separate RGB and multispectral sensors, the two imagery sets should
be processed separately. Create the project.
○ If multiple flights were flown to cover one study area, all images from all flights can be
imported at the same time, as long as the areas covered by the flights are continuous
and there is sufficient overlap between the flights (if this is not the case, the software
will not be able to stitch the images together).
General Note: Drone2Map will NOT import images with the same filename. Some sensors start over on
image naming after each battery change. You must rename images with the same name prior to
importing into Drone2Map.
● To rename a batch of images quickly in Windows, hold Shift and right-click in open space in the
File Explorer window where the image files to be renamed are located, and select Open PowerShell
Window Here. In the resulting window, use the following command: ls | Rename-Item
-NewName {"I" + $_.name}, which adds the letter I in front of each filename.
○ The ls command is passing its output (list of files in that directory) to the Rename-Item
command, where the name structure is specified in the braces {}. In PowerShell $_ is a
placeholder for whatever object is currently being processed, in this case each file. In
this way we can quickly and easily prepend any string to the filename. This command
will make this change to all files contained in this folder.
General Note: Drone2Map v2.3 is unable to process calibration card images used for calibrating
multispectral sensors.
1.3. Image Properties
Drone2Map will read EXIF metadata from images upon loading them in; metadata including altitude and
geolocation information will be displayed in the Image Attribute Table (right click Images in the Contents
pane, then select Attribute Table to view).
Drone2Map uses the image geolocation information to position the cameras correctly relative to the
modeled surface; it is recommended that the automatically recognized image geolocation coordinate
system is left as is.
2. Defining Processing Options
Processing options can be adjusted in Drone2Map. Steps can be run independently, minimizing the time
required to generate the desired products; however, the initial step must be run at least once.
Use the Processing Options window to configure which steps will run, the settings for each step, and
which products will be created. To view image processing options, click Options on the Home tab in the
Processing group. The default settings for this step are generally recommended, with some notable
exceptions (detailed below). For more information on all options, see Drone2Map’s documentation.
2.1. 2D Products tab
Make sure Create Orthomosaic, Create Digital Surface Model, and Create Digital Terrain Model are all
selected.
● Create orthomosaic: Automatic Resolution should be selected. Resolution defines the spatial
resolution used to generate the orthomosaic and DSM. Automatic = 1 x ground sampling
distance.
● Create Digital Surface Model: Select Triangulation Method as the method used for the raster
DSM generation. The method affects the processing time and the quality of the results. The
triangulation algorithm is recommended for flat areas (agriculture fields) and stockpiles.
● Create Digital Terrain Model: Select Automatic Resolution (5 x GSD)
2.2. 3D Products tab
We do not need to select output formats for the point cloud or set parameters for mesh generation,
so none of the boxes in the Create Point Clouds and Create Textured Meshes sections should be
checked.
● General 3D Options: make sure Classify Point Clouds is checked. This enables the generation of
the point cloud classification and, when used for the DTM generation, it significantly improves
the DTM. Make sure Merge LAS Tiles is checked. This option produces a single file with all the
points.
2.3. Initial tab
Initial processing options change the way Drone2Map calculates keypoints and matching image pairs.
● Run Initial: Enables the Initial Processing step. Make sure this is checked.
● Keypoints Image Scale: The Keypoint Image Scale defines the image size at which keypoints are
extracted, with Full using the full image scale and Rapid using a reduced image scale for faster
processing. Full requires longer processing time, but is best when creating GIS-ready products.
Make sure Full is selected.
● Matching Image Pairs: allows the user to optimize the processing for flights flown in an aerial
grid (Option: Aerial Grid or Corridor), free-flown (Option: Free Flight or Terrestrial), or with other
specific parameters (Option: Custom). We will use Aerial Grid or Corridor.
● Matching Strategy: Geometrically Verified Matching is more computationally expensive, but can be
more rigorous by excluding geometrically inconsistent matches. Geometrically Verified
Matching is recommended for applications like ours with repeated features (e.g. dense
mangrove canopy, homogeneous field).
● Targeted Number of Keypoints: select Automatic for the number of keypoints to be extracted.
For multispectral imagery, select Custom and enter 10,000 keypoints.
● Calibration Method: determines how the camera’s internal and external parameters are
optimized. Select Alternative from the drop-down, which is optimized for aerial nadir images
with accurate geolocation, low texture content, and relatively flat terrain.
● Camera Optimization: The internal camera parameters can be optimized in the following ways:
○ All (recommended for UAS)
○ None (recommended for using large cameras already calibrated)
○ Leading (optimizes the most important internal parameters)
○ All Prior (forcing the internal parameters to be close to the initial values)
Quality Note: Toggling All Prior and reprocessing output can help with camera
optimization quality if the quality report indicates a greater than 5% relative difference
in internal camera parameter optimization. See Quality Check Table for more details
(Appendix 4).
○ The External Camera Parameters can be optimized in the following ways:
■ All (optimizes the rotation and position of the cameras)
■ None (no optimization)
■ Orientation (optimizes the orientation only; It is recommended only when the
camera position is known and very accurate, and the camera orientation is not
as accurate as the camera position)
● Rematch: make sure Automatic is selected if <500 images in the project. Select Custom if > 500
images.
○ Select the Rematch option to allow for rematching after the first part of the initial
processing which may improve the reconstruction (see Drone2Map Quality Check Table
below for further details).
2.4. Dense tab
This step increases the density of the points of the Point Cloud, which leads to higher accuracy of both
the DSM and orthomosaic. Processing options allow the user to define parameters for the point cloud
densification.
● Check the Run Dense box. This option increases processing times, but improves the accuracy of
the output orthomosaic.
● Image Scale: use the default (Half Image Size), but make sure Multiscale is checked. When this
option is used, additional 3D points are computed on multiple image scales, starting with the
chosen scale from the Image Scale drop-down list and going to the 1/8 scale (eighth image size,
tolerant). For example, if 1/2 (half image size, default) is selected, the additional 3D points are
computed on images with half, quarter, and eighth image size. This is useful for computing
additional 3D points on vegetation areas as well as keeping details about areas without
vegetation.
● Toggle off Limit Camera Depth Automatically: this option prevents the reconstruction of background
objects. When toggled on, it is useful for oblique/terrestrial projects around objects.
2.5. Coordinate Systems tab
This step defines the horizontal and vertical coordinate system for the images and the project outputs.
● Image Coordinate System: The default horizontal coordinate system for images is WGS84.
Drone2Map should pull this from the imagery EXIF header. Drone GPS
settings should be checked to confirm this is carried over correctly. To change the image
horizontal coordinate system, click the globe button and select the appropriate coordinate
system. The default vertical reference for images is EGM96 (most image heights are referenced
to the EGM96 geoid).
● Project Coordinate System: This can only be modified if ground control points are NOT included
in the project. Since we used GCPs, do not worry about changing the Project Coordinate System
at this step. If GCPs are not used, the coordinate system and vertical reference model are
determined by the coordinate system and vertical reference of the images themselves.
2.6. Resources tab
Ensure the project location, image location, and Log File location are correct. Adjust the number of CPU
threads (computer resources) used during image processing; the more CPU threads, the faster the
processing (more threads are recommended if the computer is not going to be used simultaneously for
other tasks). Use CUDA uses the computer’s graphics processing unit during processing (recommended).
2.7. Processing multispectral imagery
While most camera models are well defined in the camera database, other camera models, such as
those that store each band as an individual image (e.g., Micasense Altum), need additional information
defined to process correctly.
When a camera stores each band as an individual image, it may be necessary to assign group names.
Drone2Map uses group names to correctly assign each drone image to its correct single band
orthomosaic, which will be composited into a multiband orthomosaic during post processing.
To select images and group them into logical single orthomosaics that are ready for compositing during
post processing, complete the following steps (see Drone2Map documentation for more details):
1. Make a note of the group names that need to be defined. For example, bands 1 through 6
on the Micasense Altum are Blue, Green, Red, Red edge, NIR, and LWIR, respectively. Use these
names for each group name.
2. For each group, you'll select images and assign a name.
3. On the Flight Data tab, open the Images Table.
4. Find the unique image names and determine how to select all images in one group. This is
typically done by filtering by character strings in the file names or file paths. Using the Altum as
an example, images within an image set are named IMG_0000_1.tif, IMG_0000_2.tif,
IMG_0000_3.tif, IMG_0000_4.tif, IMG_0000_5.tif, IMG_0000_6.tif; one image for each band.
Use the Select by Attributes tool (Flight Data tab, Selection group). In the Select by Attributes
window (in the right panel of Drone2Map), click Add Clause and build a query such as Where -- File --
ends with -- _1.tif to select all the Blue band images. Click Select.
5. Verify that your query selected the desired images in the Images Table. Then use the Group
Names tool on the Flight Data tab to enter the new group name (e.g., Blue).
6. Clear the previous selection (using Clear on Image Table). Repeat the above steps until all
images are assigned to the correct group.
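The filename-based selection logic can also be previewed outside Drone2Map. The short Python sketch below assumes a hypothetical folder path and Altum-style names, and simply counts the images that would fall in each band group.

```python
# Hypothetical sketch: group Altum-style filenames (IMG_xxxx_1.tif ... IMG_xxxx_6.tif)
# by band suffix before assigning group names in Drone2Map.
from collections import defaultdict
from pathlib import Path

BAND_NAMES = {"1": "Blue", "2": "Green", "3": "Red", "4": "Red edge", "5": "NIR", "6": "LWIR"}

groups = defaultdict(list)
for tif in Path(r"D:\flight01\multispectral").glob("IMG_*_?.tif"):  # assumed folder
    band_id = tif.stem.rsplit("_", 1)[-1]          # "1" ... "6"
    groups[BAND_NAMES.get(band_id, "Unknown")].append(tif.name)

for band, files in sorted(groups.items()):
    print(f"{band}: {len(files)} images")
```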
3. Add Ground Control
Drone2Map obtains GPS information from images or an external geolocation file during project setup.
Where projects require better accuracy than the GPS can provide, you can add controls to your project.
Control refers to points on the earth's surface with a known location that can be used to georeference
your images to the correct position. Drone2Map provides the capability to import control from a file or
manually add control from the map. See Drone2Map’s documentation on managing control for more detail.
Quality Note: At least 3 control points (preferably 5 or more) must be included for them to be taken into
account during image processing. For each control point, a minimum of two image links is required,
but 5 or more links are recommended.
3.1. Import Control
To import control points from a file, click the Control dropdown in the Control group within the Home
Tab. Select Import Control → Import from CSV or text file.
● Import Control From: Click browse to select the pathname from your control file. Make sure
Data Contains Headers is checked, and Comma Delimiter is selected if importing a .csv file.
○ Make sure there are no letters included at the end of the coordinates in the .csv file (e.g., N
for latitude North or W for longitude West). Make sure coordinates are negative when
appropriate. For instance, GCP surveys using WGS84 as the horizontal coordinate
system will need to report longitudes as negative (in the US).
● Control Coordinate System: If necessary, update the Horizontal Coordinate System. This should
match the coordinate system used for RTK GNSS surveys (#3 on readme.txt file you submitted
with your data). If the updated horizontal coordinate system is one you anticipate using
frequently for other GCP surveys, right click the coordinate system and select add to favorites.
○ Vertical Reference: The vertical reference indicates the vertical model to which your
ground control elevation values refer. Note that the vertical reference choice may be
different for images than it is for control points. You have the following vertical
reference options:
● EGM 84—For altitudes based on the EGM84 geoid.
● EGM 96—For altitudes based on the EGM96 geoid.
● EGM 2008—For altitudes based on the EGM2008 geoid.
● Ellipsoid—For altitudes based on the ellipsoid specified in the horizontal
coordinate system.
● Height Above Ellipsoid—For altitudes based on the ellipsoid specified in the
horizontal coordinate system; it allows you to provide a height above the
applicable ellipsoid.
The default is the EGM96 geoid, but it is common to conduct RTK surveys of control
points (and vegetation plots) using NAVD88 orthometric heights. If this is the case, you will need to
select Height Above Ellipsoid as your vertical reference. NOAA’s online vertical datum
transformation (VDatum) tool is a useful means of determining the offset between the
NAVD88 orthometric height you reported in your GCP file and the ellipsoid you used for
the horizontal coordinate system.
For instance, at NCNERR, we used the NAD83(2011) ellipsoid for our horizontal
coordinate system and the NAVD88 GEOID12A model for estimating the orthometric height of
our ground control points. The vertical offset between the ellipsoid and geoid we used at
our site is -37.343 m (i.e., the NAVD88 GEOID12A model is 37.343 m below the
NAD83(2011) ellipsoid).

Use VDatum to calculate the vertical offset. Select the appropriate Region. In the
Vertical Information section, enter the Source Reference Frame based on the vertical
reference your ground control points are reported in (for the NCNERR example, this is
NAVD88). The Target Reference Frame is the ellipsoid used in the horizontal coordinate
system (for the NCNERR example, this is NAD83(2011)). Choose the appropriate units. Select
Height. Check GEOID model and select the model used for the source data (for the NCNERR
example, this is GEOID12A). Repeat for the Target GEOID model. Scroll down to the map and
click where your study site is located to populate the latitude and longitude inputs (for
the NCNERR example, this is 34.168226, -77.828171). Enter a Height of 0. Click Convert. At
the bottom of the Ellipsoidal Transformation Epoch Inputs, select month-day-year, enter
the date of your ground control surveys, and specify the reference date of the
output positions (for the NCNERR example, this is 5, 6, 2021 for both). Click OK. The value
reported in the Height Output cell is the offset between the orthometric elevation of
your control points and the ellipsoid (for the NCNERR example, this is -37.343 m).

In Drone2Map, enter this number (-37.343) in the Geoid Height field in the Import
Control window. Make sure the vertical units are set correctly. (A small worked example of
applying this offset appears after this list.)
● Control Photos: Provides an option to import your GCP photos. Skip this option.
● Control Field Information: Make sure the appropriate fields within your GCP .csv file are selected
for Lat (Y), Long (X), Elevation (Z), and Label. Click Enter Accuracy Values Manually next to the
Accuracy Horizontal field (it should default to 0.02m, which is fine for RTK) unless accuracy
values are included in the .csv file. Click OK. GCPs should be visible on the map (green + sign)
and should be located within the footprint of your flight lines (orange lines) and images (blue
circles).
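As noted above, the following is a small worked example of what the geoid offset represents. It uses the NCNERR value of -37.343 m from the text; your VDatum result will differ, and the heights shown are hypothetical.

```python
# Worked example of the geoid/ellipsoid offset entered in the Geoid Height field.
# Ellipsoid height = orthometric (NAVD88) height + geoid offset (h = H + N).
GEOID_OFFSET_M = -37.343   # NCNERR example: NAVD88 GEOID12A minus NAD83(2011) ellipsoid

def orthometric_to_ellipsoid(h_navd88_m):
    """Convert a NAVD88 orthometric height to a height above the ellipsoid."""
    return h_navd88_m + GEOID_OFFSET_M

# A GCP surveyed at 1.250 m NAVD88 sits about -36.093 m relative to the ellipsoid.
print(orthometric_to_ellipsoid(1.250))
```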
Revisit the Options window (Home tab → Processing group), select Coordinate Systems, and make sure the
project coordinate system (both the horizontal coordinate system and vertical reference) matches what you
selected above for the Control Coordinate System.
Export the selected processing options as a template by selecting Export Template (bottom left of
options window), then browse to and designate the output location for your template, and click Save.
When you create your next project to process wetland imagery, choose your exported template, and
these settings and options are loaded into Drone2Map.
3.2. Control Manager
The control manager is located in the Home tab in the control group. The control manager displays
information about the placed control points and provides quick access to common control operations.
3.3. Image Links Editor
To apply control, links are created between each control point and corresponding images. Links can be
created either manually or computer assisted. Manually linking control points to images is the preferred
method when processing projects unattended, such as overnight processing, in which there is no
intervention by the user throughout processing. Assisted linking of control points requires initial
processing to be run, and significantly reduces the amount of time required to link control points to
images. The steps below are for creating manual links.
Click the Image Links Editor button in the control manager (left icon or via the Home tab in the Control
group). The image links editor opens and displays information about the selected control point (Lat,
Long, Elevation), Control Point name, Control Point Type and a list of the images on the left. The list of
images is ordered by distance from the GCP. Note, not all the images listed will contain the GCP.
General Note: Control can be set as a Ground Control Point (GCP) or Check Point.
Ground control points (GCPs) are used to georeference the model. If there are more GCPs in a project
than necessary to accurately scale, rotate, and locate, some of the GCPs can be used as checkpoints to
assess the accuracy of the project. GCPs improve the relative and the absolute accuracy of the model.
Checkpoints (CPs) are used to assess the absolute accuracy of the model. For our purposes, we will
specify controls as Ground Control Points.
From the image list, select an image to which control should be linked. The selected image will appear in
the preview window. Either the built-in map controls or mouse can be used to navigate the image (zoom
and pan).
1) Zoom in on the GCP and place the crosshairs of the pointer in the image that corresponds to the
location on the GCP where the RTK was placed for surveying (often at the center of the target).
Click to create the link. A yellow X marks the point on the image. To redo the link, click another
location. To remove the link, click Remove Link (right hand corner above the image). Repeat this
process for at least 5 images for each GCP. It is a best practice to NOT link the first 5 images
listed to a single GCP. Rather, try to spread the links among images that are not directly in
sequence (i.e., rather than linking a GCP to images 190, 191, 192, 193…, link it to a set of
non-sequential images).
Multispectral Note: When processing multispectral imagery, it is important that images from a
variety of image sets are registered. In the file naming system, an image set is usually the
number before the file extension (e.g., in file DJI_0713.JPG, 0713 is the image set number). An
image set number represents a unique camera position above the ground. When capturing
multispectral imagery, you will have an image within each image set for every band captured on
the multispectral sensor. Each GCP must be registered in images from different image
sets/camera positions in order for Drone2Map to be able to triangulate the GCP locations (i.e.,
do not link DJI_071_1, DJI_071_2, DJI_071_3; rather, link DJI_071_1, DJI_045_2, DJI_88_3, etc.).
One band from each image set can be registered and counted toward the minimum of 5 registered
GCP images; it does not matter which band is registered. Also note that thermal imagery is often
not visually sharp enough to identify GCP target centers, so thermal images can be ignored in
this process.
2) To move to the next control point, click the dropdown in the Control Point list. Repeat step 1 for
each control point. If two GCPs appear in a single image, make sure the image is being linked to
the correct GCP.
3.4. Export Control
The Export Control tool allows you to save your control and associated linked images as an external file.
This way, you can reuse them in future projects. To export your control, use the Select tool to select the
control points you want to export (or you can select GCPs by clicking on the leftmost column of the
control manager, holding shift and clicking on the last row to select all GCPs for export). On the Home
tab, click the Control drop-down list → Export Control. Choose a location to save the .zip file. Click Save.
3.5. Start Processing
Click the Start button on the Home Tab in the Processing Group to begin processing the imagery.
Depending on the number of images, homogeneity of the landscape, your CPU specs and the CPU
resources you allotted in the Options window, this will likely take hours.
3.6. Troubleshooting
3.6.1. Fixing a distorted orthomosaic or DSM
If the output orthomosaic and/or DSM is spatially distorted (compressed, stretched, etc.), this may be
because the GCPs were in a poor configuration. To remedy this, open the Control Manager and delete
one of the GCPs. More than one GCP can be deleted here, but each GCP removed reduces accuracy, so it
is recommended to start by only removing one GCP. Start processing again, and the resulting products
will hopefully not be distorted.
After running processing without the full set of GCPs, it is important to inspect the DSM for ‘doming’,
where the elevation of the DSM appears to form a dome, with the remaining GCPs at the center. This
may also appear as ‘banding’ if the remaining GCPs are in a linear arrangement, where the DSM is
steeply sloped and the colors in the DSM appear as a rainbow. If this occurs, refer to the following
section to add additional GCPs using water level as a reference to improve the georeferencing of the
surface. When re-processing with the additional GCPs, be sure to re-enable any GCPs that were
removed.
3.6.2. Creating new GCPs to improve georeferencing of surface using water level as a reference
This method is used when there is a need for additional GCPs but there is no ground-checked reference
available (e.g., another GCP or checkpoint), but there is at least one area in the study site that contains a
distinguishable feature at or very close to water level. Manual tie points can be created at these
locations using water level as the assumed elevation value. These manual tie points can then be
transformed into 3D GCPs and used to georeference the modeled surface.
Quality Note: Ideally the points created will be evenly distributed across the study site surface (points
arranged in a quincunx pattern (i.e., how dots are arranged on the 5-side of a die) results in much higher
quality georeferencing as compared to points arranged in a straight line).
Quality Note: It is ideal that a water level measurement is taken in the field at the time imagery was
captured. Alternatively, a water level estimate can be extrapolated from initial DSM. If this alternate
method is used, note that points in open water have variable elevation values, so it is ideal to identify a
point at the edge of land and water to serve as the water level reference. This method is done under the
assumption that all points in areas considered at water level (i.e., tide pools, mouths of tributaries, etc.)
are at the same or close to the same elevation (the height of the water).
● Run processing as normal with all available GCPs.
○ If the distribution of GCPs is causing distortion in the orthomosaic and/or DSM, and it is
necessary to remove some GCPs, be sure to leave a GCP adjacent to the water enabled.
● Once processing is completed, toggle between viewing the DSM and the orthomosaic, and click
around on points as close to the water’s edge as is possible near whichever GCP is closest to the
water. Try a few different points around the water’s edge and try to estimate an average
elevation. This value will be used as the water level elevation.
● Once the water level elevation is established, use the Add From Map option from the Control
dropdown to create at least two manual GCPs that are selected in spots that correspond to the
water level elevation estimated (i.e. at the water’s edge). Place them on a memorable feature
that can be picked out later (e.g. a clump of oysters, a piece of debris, a hole in the mud).
● It is recommended to zoom in on each new GCP and take a screenshot to refer back to when
linking images to each new GCP because there won’t be a physical GCP in the image to link to.
● The new GCPs will appear in the Control Manager, and the X, Y, and Z coordinates can be viewed
in the Image Links Editor; however, those coordinates are derived from the current (inaccurate)
DSM.
○ It is currently not possible to edit those coordinates in Drone2Map (as of version 2.3.2),
so it will be necessary to manually edit the CSV file that the initial GCPs were imported
from. Open that CSV file, and add new entries for each new GCP, copying the Latitude
and Longitude from the Image Links Editor (but not the elevation), and enter the ‘water
level’ elevation as each new GCP’s elevation.
○ If the CSV file does not already have columns for horizontal accuracy and vertical
accuracy, add them. For the GCPs that were surveyed in the field, assign horizontal and
vertical accuracy values of 0.02m. For the newly created GCPs, assign horizontal and
vertical accuracies of 0.1m so that Drone2Map doesn’t assign too much weight to this
GCP location, as the elevation value is not exact.
● Save the CSV file.
● Once the CSV file has been edited and saved, remove the new GCPs from the Control Manager
(they currently have inaccurate elevations).
● Follow the steps from the Import Control section above to re-import the CSV file containing the
GCPs (including the newly created GCPs).
● In the Control Manager remove all duplicates (each of the original GCPs should appear twice
now - remove the ones that are not already linked to images to save time).
● Using the Image Links Editor, link the new GCPs to 5 to 8 drone images, referring back to the
screenshot to make sure that they’re being linked to the same feature every time.
● Once all GCPs are linked to enough images, press the Start Processing button again. The
resulting DSM should have much better accuracy.
Quality Note: After re-processing, make sure to refer to the Drone2Map Quality Check Table to ensure
there are no georeferencing errors.
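The CSV edits described above can also be scripted. The following pandas sketch uses hypothetical column names, labels, coordinates, and water-level elevation; adapt them to your own GCP file.

```python
# Minimal sketch (hypothetical column names/values): append water-level GCPs to the
# original GCP CSV with looser accuracy values, then re-import the file in Drone2Map.
import pandas as pd

gcps = pd.read_csv("gcps.csv")   # columns assumed: Label, Lat, Long, Elev, AccH, AccV

# Surveyed GCPs keep RTK-level accuracy (0.02 m)
gcps["AccH"] = gcps.get("AccH", 0.02)
gcps["AccV"] = gcps.get("AccV", 0.02)

water_level_elev = -0.35   # estimated water-level elevation (m) from the DSM inspection

new_rows = pd.DataFrame([
    # Lat/Long copied from the Image Links Editor; elevation set to water level;
    # 0.1 m accuracies so Drone2Map weights these points less heavily.
    {"Label": "WL-1", "Lat": 34.168300, "Long": -77.828100,
     "Elev": water_level_elev, "AccH": 0.1, "AccV": 0.1},
    {"Label": "WL-2", "Lat": 34.168050, "Long": -77.828400,
     "Elev": water_level_elev, "AccH": 0.1, "AccV": 0.1},
])

pd.concat([gcps, new_rows], ignore_index=True).to_csv("gcps_with_waterlevel.csv", index=False)
```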
4. Products
Drone2Map will store outputs within the home project folder specified when initiating the project. The
following file structure will be used:
● .gdb (geodatabase)
● Products
○ 2D
■ DEM
■ Ortho
○ 3D
● Project
○ Data
○ Logs
○ process
● Report
○ Quality Report (see Drone2Map Processing Report below for details)
● .d2mx (Drone2Map project file)
● .tbx (ArcGIS Toolbox)
4.1. Indices
● After processing is completed, generate multispectral indices by selecting Indices in the Analysis
tab in the Tools group. Click the drop-down menu and select NDVI (normalized difference
vegetation index). Feel free to explore the other indices, but for our purposes, we will only use
NDVI for analyses.
○ NDVI is a standardized index allowing you to generate an orthomosaic displaying
greenness, also known as relative biomass. This index takes advantage of the contrast of
characteristics between two bands from a multispectral raster dataset—the chlorophyll
pigment absorption in the red band and the high reflectivity of plant material in the
near-infrared (NIR) band.
The documented and default NDVI equation is as follows:
NDVI = ((NIR - Red)/(NIR + Red))
● NIR = pixel values from the near-infrared band
● Red = pixel values from the red band
This index outputs values between -1.0 and 1.0.
● Export the NDVI as a raster by right clicking on the NDVI layer in the Contents pane → Data →
Export Raster. The Export Raster window will open. Make sure the file path of the output raster
dataset is correct. The Coordinate System is automatically populated with the coordinate system
of the source raster layer that is being exported. Use the default values for the other options
(for details, see the Drone2Map documentation).
General Note: Drone2Map does not allow coefficients to be applied to specific bands, which
is necessary for sensors such as the Sentera d4k ndvi + ndre to generate meaningful index values.
For instance, the Red and NIR bands for the Sentera d4k sensor are calculated as follows:
Red = -0.966*DNblue + 1.000*DNred
NIR = 4.350*DNblue - 0.286*DNred
● Application of band specific corrections for calculating indices can be done in ArcGIS Pro. Using
the Catalog Pane in ArcGIS Pro, add the raster to be corrected to a new map. In the case of the
NDVI generated from Sentera d4k ndvi + ndre, you should see red, green, and blue (NIR) bands.
Use the Raster Functions tool (Imagery tab, Analysis group). In Raster Functions, click on the
Math dropdown and select Band Arithmetic. Select the Raster to be corrected in the Raster
dropdown. For Method, select User Defined (top of list). Enter the sensor-specific formula in the
Band Indexes field. For instance, for the Sentera d4k sensor Band 3 (B3) is the NIR band and
Band 1 (B1) is the Red band. Substitute the band equations above into the NDVI formula:
((4.350*B3 - 0.286*B1)-(-0.966*B3+1.000*B1))/((4.350*B3 - 0.286*B1)+(-0.966*B3+1.000*B1)).
● Running the Band Arithmetic function creates a new layer. To save the new layer, right-click the new
layer in the Contents pane and select Data → Export Raster.
● To bound the NDVI index between 1 and -1, use the following equation in the raster calculator in
ArcGIS Pro (image analyst tools):
○ Con(“raster.tif” < -1, -1, “raster.tif”), where “raster.tif” is the name of your NDVI raster.
Repeat with Con(“raster.tif” > 1, 1, “raster.tif”) to bound values to 1.
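The band correction and clamping steps can also be combined in a short arcpy map-algebra sketch. The raster path is hypothetical, the coefficients are the Sentera d4k values given above, and the band order (Band 1 = red, Band 3 = blue/NIR) is assumed to match the description above.

```python
# Minimal arcpy map-algebra sketch: apply the Sentera d4k band corrections from the
# text, compute NDVI, and clamp the result to [-1, 1]. The raster path is hypothetical.
import arcpy
from arcpy.sa import Raster, Con

arcpy.CheckOutExtension("Spatial")

ortho = r"D:\marsh_project\sentera_ortho.tif"
b1 = Raster(ortho + r"\Band_1")   # red band (DNred)
b3 = Raster(ortho + r"\Band_3")   # blue band recording NIR (DNblue)

red = -0.966 * b3 + 1.000 * b1
nir = 4.350 * b3 - 0.286 * b1

ndvi = (nir - red) / (nir + red)
ndvi_clamped = Con(ndvi < -1, -1, Con(ndvi > 1, 1, ndvi))
ndvi_clamped.save(r"D:\marsh_project\ndvi_corrected.tif")
```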
5. Drone2Map Processing Report
Every ArcGIS Drone2Map project includes a detailed processing report that displays the results. To access the report once the initial processing
step is completed, on the Home tab, in the Processing section, click Report. You can also access the processing report at any time in the project
folder in PDF and HTML format.
5.1 Drone2Map Quality Check Table
The following table provides guidance for using the Quality Check table on page 1 of the quality report. For additional guidance, see
Drone2Map’s documentation on quality reports.
Images
● Definition and conceptual framework: Median number of keypoints per image. Keypoints are points of
interest (high contrast, interesting texture) on the images that can be easily distinguished. The number of
keypoints identified depends on the size of the images and the visual content. Drone2Map’s ability to
reconstruct an accurate 3D surface depends on the number of keypoints that can be identified in multiple,
overlapping images (i.e., matched keypoints).
● Quality report value check: At least 10,000 keypoints extracted per image is recommended for optimal
quality.
● Visual quality check: Quality Report – 2D keypoint matches can be visualized in figure 5 of the quality
report. Images – image quality can be assessed by scanning through the images and checking for
appropriate exposure levels and lighting, crispness, etc.
● Troubleshooting: Not having enough keypoints per image can be the result of repetitive visual content
(e.g., a uniform area of grass or water), lack of image overlap, poor image quality, and/or too many changes
in the scene during image acquisition. These issues can be addressed in the following ways:
○ Increase image overlap during image acquisition
○ Adjust camera settings
○ Increase image size

Dataset
● Definition and conceptual framework: Number of enabled images that have been calibrated. Calibrated
images are images that contain adequate numbers of keypoints to be used for surface reconstruction; in
order for an image to calibrate, it needs a minimum of 25 keypoint matches. Enabled images are
incorporated into the project, while disabled images are recognized as not useful to surface reconstruction;
this can happen automatically (e.g., Drone2Map recognizing calibration card images) or manually by the
user. A block is a set of images that are calibrated together. Ideally, all or most of the images are calibrated
in one block. Having multiple blocks indicates that there were not enough matches between blocks to
provide global optimization.
● Quality report value check: At least 95% of images calibrated in one block is recommended for optimal
quality.
● Visual quality check: Quality Report – image overlap can be visualized in figures 4 and 5 of the Quality
Report, and the distribution of project blocks can be visualized in figure 3. The uncertainty ellipses describe
how precisely each image is located with respect to the other images by means of the Manual and
Automatic Tie Points. Ideally, the ellipses in the center of the project are smaller than at the outside, as
these images have more matches that bind them to the surrounding images. Large ellipses in parts of the
project may indicate problems calibrating these parts and typically correspond to areas with few matches.
● Troubleshooting: Not having enough images calibrated can be the result of an image calibration error
and/or an instance of multiple blocks. The presence of uncalibrated images in a project can be resolved in
the following ways:
○ Increase image overlap during image acquisition
○ Process the project with a lower keypoint image scale
○ Adjust camera parameters to improve image quality

Camera Optimization
● Definition and conceptual framework: Percentage representing the difference between the initial camera
model and the optimized camera model. When using a perspective lens, the camera optimization value is
the percentage difference between the initial and optimized focal lengths. The focal length transformation
parameters are a property of the camera’s sensor and optics and vary with temperature, shocks, altitude,
and time. The calibration process starts from an initial camera model and optimizes the parameters.
● Quality report value check: Less than 5% difference between the initial and optimized camera model
values is recommended for optimal quality.
● Troubleshooting: A large difference between the initial and optimized camera models can be due to flat or
homogeneous areas not capturing enough visual information, images having significant rolling shutter
distortion, and/or wrong initial internal camera parameters. A project with homogeneous or flat areas not
capturing enough visual information can be resolved in the following ways:
○ Process with a lower keypoint image scale
○ Enable geometrically verified matching
○ Set internal calibration parameters to All Prior

Matching
● Definition and conceptual framework: Common keypoints identified in multiple images. This information
is used to correctly orient and stitch individual images together. Higher numbers of matches increase the
processing time and the quality of the results.
● Quality report value check: More than 1,000 matches computed per calibrated image (for keypoint image
scale > ¼) and 100 matches per calibrated image (for keypoint image scale < ⅛) is recommended for
optimal quality.
● Visual quality check: Quality Report – figure 5 in the quality report is useful for assessing the strength and
quality of matches.
● Troubleshooting: A low number of matches suggests the results may be unreliable. This is often related to
low overlap between images, but can also be attributed to the initial camera model parameters. Address
this issue by doing the following:
○ See the Dataset quality check (above) to improve the results
○ Restart the calibration a few times with different settings (camera model, Manual Tie Points) to get
more matches
○ Increase image overlap

Georeferencing
● Definition and conceptual framework: Information about how the project was georeferenced and what
error is associated with the GCPs. Ground Sampling Distance (GSD) is the distance between two consecutive
pixel centers measured on the ground; a higher GSD value translates to a lower spatial and image
resolution. The Root Mean Square (RMS) error is reported in each direction (X, Y, Z); this error calculation
takes into account the systematic error. If the Mean error is equal to 0 (zero), the RMS error will be equal to
the Sigma Z error. The comparison of the RMS error and Sigma error indicates the systematic error. Of the
3 indicators, the RMS error is the most representative of the error in the project since it takes into account
both the mean error and variance.
● Quality report value check: Optimal accuracy is obtained when 5-10 GCPs are distributed evenly across the
study site. Ideally, the GCP error is less than 2 times the average GSD.
● Troubleshooting: A GCP error greater than 4 times the GSD could indicate a severe issue with the dataset
or an error with marking or specifying the GCPs. In a project where GCPs are used, georeferencing errors
can be addressed in the following ways:
○ Adding additional GCPs
○ Adjusting GCP accuracy values
○ Remarking images
In a project where no GCPs were used, the project is georeferenced using the computed image positions;
error could be the result of the GPS device used to geolocate the original images suffering from a global
shift. There could also be cases where GCPs are discarded by the software due to errors in the GCPs.
Appendix 3. GCP Caveats
1. Manually Adjusting GCP Height to Facilitate Registration Process
If GCP icons are visible above the camera positions, or too high off the point surface for GCPs to be
identified in any imagery, follow the steps below to manually adjust the height of the GCPs so they are
close to the modeled surface:
● Navigate to GCP/MTP Manager and click Edit… to open GCP coordinate system options.
● Check the Advanced Coordinate Options box to expand the vertical coordinate system options.
○ Select the Geoid Height Above GRS 1980 Ellipsoid [m] option and input an altitude that
will adjust the GCPs to be close to the level of the tie point cloud (e.g., if the altitudes of
the tie points are around -80 m, and the altitude of the GCP is 1.5 m, input a height of
-80 into the box to bring the GCP icons down to just above the point surface).
● Click OK in the coordinate system options box; the vertical GCP coordinate system (the
information in parentheses) will update to reflect the change. Click OK in the GCP/MTP Manager
window and the following message will display: Reoptimize the project in the new output
coordinate system for higher accuracy. Click OK.
● The GCP icons will have adjusted based on the height above the ellipsoid that was entered.
Select one of the GCPs either by clicking on the icon in the rayCloud view or by selecting one in
the Layers menu list.
○ If image thumbnails appear in the right sidebar, proceed to next steps. If no images
appear, repeat the previous step to adjust the height until the GCP icons are as close to
the height of the point surface as possible.
● Once images appear in the right sidebar, the standard GCP Registration steps can be completed.
However, because the vertical GCP coordinate system was adjusted to facilitate the registration
process, once all GCPs are registered, the vertical GCP coordinate system must be set back to
Arbitrary (in the GCP Manager > Advanced Options) before reoptimizing.
2. Creating New GCPs to Improve Georeferencing of Surface
There are various instances that necessitate manually creating GCPs. Two instances are described in the
steps below.
Manual tie points must be created and then transformed into 3D GCPs. See Pix4D’s documentation
about the different types of Tie Points to learn more.
A. Creating New GCPs Using Original GCP as Reference
This method is used when something goes wrong with an original GCP (e.g., the nearby GCP #4 target was accidentally tagged as GCP #5, producing a georeferencing error in the quality report). The following process explains how to create a fresh GCP icon in the rayCloud, populate it with the original GCP information, and delete the original/erroneous GCP.
● In the rayCloud view, select a tie point very close to the erroneous GCP.
● Upon selecting a tie point in the proximity of the GCP, both Selection information and Images
should be shown in the right sidebar.
● Hover the mouse over the first image displayed and zoom out until the GCP target is visible in the
image. Once the desired zoom (confidence) level has been reached, select the New Tie Point
icon (the leftmost icon in the Images icon bar), then click once in the center of the GCP target to
establish the tie point.
● Continue registering this GCP following the Registering GCPs instructions.
● In the Selection section of the right sidebar, change the Type from Manual Tie Point to 3D GCP.
Change the Label to something distinguishable if desired.
● Populate the Selection information (X, Y, Z, Horizontal and Vertical Accuracy) by copying and pasting the values from the original GCP Selection information.
● Once the GCP is established and the Selection information is entered, click Apply to save changes. A new
GCP icon should appear in the rayCloud view and the name of the new GCP should appear in the
Layers menu list.
● Now that a new GCP (essentially a copy of the original) has been created, the original/erroneous
GCP can be deleted by right clicking it in the Layers menu list and selecting Remove.
B. Creating New GCPs Using Water Level as Reference
This method is used when additional GCPs are needed and no ground-checked reference (e.g., another GCP or checkpoint) is available, but there is at least one area in the study site that contains a distinguishable feature at or very close to 'water level'. Manual tie points can be created at these
locations using ‘water level’ as the assumed elevation value. These manual tie points can then be
transformed into 3D GCPs and used to georeference the modeled surface.
Quality Note: Ideally, the points created will be evenly distributed across the study site surface; points arranged in a quincunx pattern (i.e., how dots are arranged on the 5-side of a die) produce much higher quality georeferencing than points arranged in a straight line.
Quality Note: Ideally, a water level measurement is taken in the field at the time the imagery is captured. Alternatively, a water level estimate can be extrapolated from the point cloud. If this alternate method is used, note that tie points in open water have variable elevation values, so it is best to identify a point at the edge of land and water to serve as the water level reference. This method assumes that all points in areas considered at 'water level' (e.g., tide pools, mouths of tributaries) are at or very close to the same elevation (the height of the water).
● Run step 2 and generate the densified point cloud before starting the GCP creation process. With the densified point cloud available, points close to the water's edge can be selected to determine the most accurate elevation value for water level at this site when the images were taken.
● Once the densified point cloud is generated, turn it on in the rayCloud view (check the Point Cloud box in the left Layers menu) and select a point near the boat that is as close to the water's edge as possible. Try a few different points around the water's edge and see if an 'average' elevation is discernible. This value will be used as the 'water level' elevation (a scripted sketch of this averaging approach appears after these steps).
● Once the water level elevation is established, create at least two manual tie points (that will become GCPs) in spots with enough water to be considered 'water level' (in this case,
one at the edge of the tidepool on the east side of the site and the other at the edge of the
tributary in the northwest corner of the map).
● To create manual tie points:
○ Select a point in the point cloud surface where the new tie point/GCP will be placed.
○ A series of thumbnail images will pop up in the right sidebar under the Images
dropdown.
○ In the first image, zoom out until a distinguishable feature becomes visible (e.g., a sharp
angle in the shoreline, a cluster of oysters at the water’s edge) then zoom in on this
feature. Ideally, the feature is distinguishable and as close to the water’s edge as
possible.
○ Once the distinguishable feature is visually located in one image, ensure that the feature
can be located in multiple images.
○ Once the feature has been visually located in multiple images, return to the first image
and zoom in on the feature (the higher the zoom level, the more confidence Pix4D assigns to its location). Once at the appropriate zoom level, select the New Tie Point
button (the leftmost icon in the Images icon bar), then click once on the feature in the
image to establish the tie point.
○ Repeat this zooming and clicking process in multiple images (as explained in the GCP
Registration Process). Register at least 5 images, or as many as it takes for Pix4D to
automatically place the green cross in the correct position in unregistered images.
○ In the Selection section of the right sidebar, change the Type from Manual Tie Point to
3D GCP. Change the Label to something distinguishable if desired.
○ Input the established water level elevation value derived from the previous step into the
Z value. The X and Y values should auto populate after changes are saved.
○ Change both the Horizontal and Vertical Accuracy to 0.1 so that Pix4D does not assign too much 'weight' to this GCP location, as the elevation value is not exact.
○ Click Apply. This will save the changes and the GCP will show up in the GCP list under the
Layers menu and as a GCP icon in the rayCloud view. Any GCP can be removed by right
clicking its name in the Layer menu and clicking Remove.
○ Repeat this process for all manual tie points / new GCPs.
● Once all manual tie points are created and designated as 3D GCPs, the project should be
reoptimized and existing outputs should be reprocessed to incorporate changes.
Quality Note: After reoptimization, make sure to refer to the Output Quality Check step and the Quality
Check Tables (for Pix4D or Drone2Map, respectively) to ensure there are no georeferencing errors.
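Where the water level is extrapolated from the point cloud, the averaging described above can also be approximated outside Pix4D. The following is a minimal sketch, assuming the densified point cloud has been exported to a LAS file and that approximate X/Y coordinates of a few land/water-edge locations are known; the file name, coordinates, and search radius are illustrative placeholders, not values from this project.

import numpy as np
import laspy

# Minimal sketch: estimate a 'water level' elevation by averaging densified
# point-cloud elevations near a few picked land/water-edge locations.
# The LAS path, edge coordinates, and search radius below are placeholders.
las = laspy.read("densified_point_cloud.las")
x, y, z = np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)

edge_xy = [(431250.0, 3356120.0),   # e.g., edge of the tidepool (placeholder)
           (431080.0, 3356410.0)]   # e.g., mouth of the tributary (placeholder)
radius = 0.5  # meters; only points this close to a picked location are averaged

samples = []
for ex, ey in edge_xy:
    near = (x - ex) ** 2 + (y - ey) ** 2 <= radius ** 2
    if near.any():
        samples.append(float(z[near].mean()))

if samples:
    print(f"Estimated water-level elevation: {np.mean(samples):.2f} m")
else:
    print("No points found near the supplied edge locations; increase the radius.")

The resulting value serves only as the assumed Z for the new GCPs; the GCPs themselves are still created and registered in Pix4D as described above.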
Appendix 4. Pix4D Output Quality check table
The following table provides guidance for using the Quality Check table on page 1 of the quality report. For additional guidance, see Pix4D's documentation on quality reports:
● Basic guidance on analyzing the quality report
● Specifications of quality check table terms, symbols and quality report figures
● Comprehensive guidance on understanding and troubleshooting using the quality report
Each entry below corresponds to one row of the Quality Check table and is organized by its five columns: Quality Check Item, Definitions and Conceptual Framework, Quality Report Value Check, Visual Quality Check, and Troubleshooting.
Quality Check Item: Images

Definitions and Conceptual Framework: Median number of keypoints per image. Keypoints are points of interest (high contrast, interesting texture) on the images that can be easily distinguished. The number of keypoints identified depends on the size of the images and the visual content. Pix4D's ability to reconstruct an accurate 3D surface depends on the number of keypoints that can be identified in multiple, overlapping images (aka matched keypoints).

Quality Report Value Check: At least 10,000 keypoints extracted per image is recommended for optimal quality.

Visual Quality Check: Quality Report: 2D keypoint matches can be visualized in figure 5 of the quality report. Images: Image quality can be viewed by scanning through the images and checking for appropriate exposure levels and lighting, crispness, etc.

Troubleshooting: Not having enough keypoints per image can be the result of repetitive visual content (e.g., a uniform area of grass or water), lack of image overlap, poor image quality, and/or too many changes in the scene during image acquisition. These issues can be addressed in the following ways:
● Increase image overlap during image acquisition
● Adjust camera settings
● Increase image size
Quality Check Item: Dataset

Definitions and Conceptual Framework: Number of enabled images that have been calibrated. Calibrated images are images that contain adequate numbers of keypoints to be used for surface reconstruction. In order for an image to calibrate, it needs to have a minimum number of 25 keypoint matches. Enabled images are incorporated into the project, while disabled images are recognized as not useful to surface reconstruction; this could happen automatically (e.g., Pix4D recognizing calibration card images) or manually by the user. Uncalibrated images display in the rayCloud as red icons. A block is a set of images that are calibrated together. Ideally, all or most of the images are calibrated in one block. Having multiple blocks indicates that there were not enough matches between blocks to provide global optimization.

Quality Report Value Check: At least 95% of images calibrated in one block is recommended for optimal quality.

Visual Quality Check: rayCloud: Uncalibrated images are displayed as red camera position icons in the rayCloud. Uncalibrated images are not used for processing and can arise from a variety of causes. Quality Report: Image overlap can be visualized in figures 4 and 5 of the Quality Report. The distribution of project blocks can be visualized in figure 3 of the Quality Report. The Uncertainty Ellipses describe how precisely each image is located with respect to the other images by means of the Manual and Automatic Tie Points. Ideally, the ellipses in the center of the project are smaller than at the outside, as these images have more matches that bind them to the surrounding images. Large ellipses in parts of the project may indicate problems calibrating these parts and typically correspond to areas with few matches.

Troubleshooting: Not having enough images calibrated can be the result of an image calibration error and/or an instance of multiple blocks.
The presence of uncalibrated images in a project can be resolved in the following ways:
● Increase image overlap during image acquisition
● Process project with lower keypoint image scale
● Adjust camera parameters to improve image quality
A project with multiple blocks can be resolved in the following ways:
● Enabling the 'Rematch' option
● Adding Manual Tie Points between blocks
Quality Check Item: Camera Optimization

Definitions and Conceptual Framework: Percentage representing the difference between the initial camera model and the optimized camera model. When using a perspective lens, the camera optimization value is the percentage difference between the initial and optimized focal lengths. When using a fisheye lens, the value is the percentage difference between the initial and optimized affine transformation parameters C and F. The focal length/affine transformation parameters are a property of the camera's sensor and optics and vary with temperature, shocks, altitude, and time. The calibration process starts from an initial camera model and optimizes the parameters.

Quality Report Value Check: Less than 5% difference between the initial and optimized camera model value is recommended for optimal quality.

Visual Quality Check: rayCloud: By viewing the rayCloud from the side, the shape of the point cloud surface can be inspected. A relatively level surface is expected for marsh landscapes; a dome-like shape may be due to poor camera optimization quality. By viewing the rayCloud from above, the camera optimization quality can be visualized. The closer the blue (initial) and green (computed or optimized) camera position icons are to each other, the better the camera optimization quality. A 'doming effect' in the point cloud surface can be attributed to poor camera optimization quality if a pattern of better camera optimization is observed in the center of the surface compared to the periphery (i.e., the camera position icons forming a green circle with a blue ring around it when viewed from above).

Troubleshooting: A large difference between the initial and optimized camera models can be due to flat or homogeneous areas not capturing enough visual information, images having significant rolling shutter distortion, and/or wrong initial internal camera parameters.
A project with homogeneous or flat areas not capturing enough visual information can be resolved in the following ways:
● Set internal calibration parameters to All Prior
● Process with lower keypoints image scale
● Enable geometrically verified matching
Images with rolling shutter distortion can be corrected in the following ways:
● Calculate the Vertical Pixel Displacement and enable linear shutter optimization in the Image Properties Editor if needed
Incorrect initial internal camera parameters can be resolved in the following ways:
● Edit camera model
● Generate parameter values for perspective lens or fisheye lens
Quality Check Item: Matching

Definitions and Conceptual Framework: Common keypoints identified in multiple images. Pix4D uses a SIFT algorithm to identify the same unique pixel clusters (keypoints) in multiple images; this information is used to correctly orient and stitch individual images together. Higher numbers of matches will increase the processing time and the quality of the results.

Quality Report Value Check: More than 1,000 matches computed per calibrated image is recommended for optimal quality.

Visual Quality Check: Quality Report: Figure 5 in the quality report is useful for assessing the strength and quality of matches.

Troubleshooting: A low number of matches suggests the results may be unreliable. This is often related to low overlap between images, but can also be attributed to initial camera model parameters. Address this issue by doing the following:
● Increase image overlap during image acquisition
Quality Check Item: Georeferencing (Quality Note: Initial information will be displayed in the Georeferencing section of the quality check table of the quality report after step 1 has been processed, but the user should make sure to check the updated georeferencing information in the quality check table after GCPs have been registered; see the GCP Registration section.)

Definitions and Conceptual Framework: Information about how the project was georeferenced and what error is associated with the GCPs. Ground Sampling Distance (GSD) is the distance between two consecutive pixel centers measured on the ground. A higher GSD value translates to a lower spatial and image resolution. The Root Mean Square (RMS) error is reported in each direction (X, Y, Z); this error calculation takes into account the systematic error. If the Mean error is equal to 0 (zero), the RMS error will be equal to the Sigma Z error. The comparison of the RMS error and the Sigma error indicates whether systematic error is present. Of the 3 indicators, the RMS error is the most representative of the error in the project since it takes into account both the mean error and variance (the relationship among these indicators is summarized after this table).

Quality Report Value Check: Optimal accuracy is obtained when 5-10 GCPs are distributed evenly across the study site. Ideally, the GCP error is less than 2 times the average GSD (for example, with a 3-cm average GSD, GCP errors under about 6 cm are ideal).

Troubleshooting: A GCP error greater than 4 times the GSD could indicate a severe issue with the dataset or an error with marking or specifying the GCPs. In a project where GCPs are used, georeferencing errors can be addressed in the following ways:
● Adding additional GCPs
● Adjusting GCP accuracy values
● Remarking images
In a project where no GCPs were used, the project is georeferenced using the position of the computed image positions. Error could be a result of a GPS device used to geolocate the original images suffering from global shift. There could also be cases where GCPs are discarded by the software due to errors in the GCPs. See Pix4D's documentation on Ground Control Points for more information.
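For reference, the relationship among the three error indicators reported for each direction (X, Y, Z) follows from their standard statistical definitions; this is a general identity rather than a Pix4D-specific formula. Writing the GCP residuals in one direction as e_1, ..., e_n:

\[
\text{Mean} = \frac{1}{n}\sum_{i=1}^{n} e_i, \qquad
\text{RMS} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} e_i^{2}}, \qquad
\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(e_i - \text{Mean}\bigr)^{2}},
\]
\[
\text{RMS}^{2} = \text{Mean}^{2} + \sigma^{2}.
\]

When the mean error is zero, the RMS error equals the sigma error; when the RMS error is noticeably larger than the sigma error, the difference points to a systematic (bias) component rather than random scatter.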
Appendix 5. Create Alternative Vegetation Indices with RGB Imagery in
ArcGIS Pro
Process for creating alternate vegetation indices using RGB imagery:
1. Once multispectral imagery has been processed and outputs have been created, create a new
ArcGIS Pro project.
2. Search for and open the Raster Calculator geoprocessing tool (under Analysis > Tools) and use
the following steps to generate each vegetation index:
a. For the Rasters input field, use the browse icon to navigate to the Pix4D project output
folder called Indices (located in ‘Project Folder Name’ > 4_index > indices). From here,
each of the .tif files within the Red, Green and Blue index folders can be selected, which
will add them to the Rasters list in the tool.
b. Once the three rasters are listed in the Rasters list, create the equation for each vegetation index by selecting the appropriate operators from the Tools list, using the keyboard, and double-clicking raster files from the list (a scripted sketch of these calculations appears after the numbered steps). Each index can be created using
the following equations:
i. Excess Green (ExG) = 2*Green - Red - Blue
ii. Vegetative Index Green (VIg) = (Green - Red) / (Green + Red)
c. Once an equation is entered into the tool's equation box, choose an output location for
the vegetation index raster being created (it is best practice to store rasters in a fresh
folder, rather than a geodatabase) and name the output appropriately.
d. The Environment settings can be left as they are.
e. Repeat and run this process for each vegetation index raster.
3. Search for and open the Composite Bands geoprocessing tool (under Analysis > Tools) and input
the following:
a. For the Input Rasters field, use the browse icon to navigate to the Pix4D project output
folder called Indices (located in ‘Project Folder Name’ > 4_index > indices). From within
the indices folder, navigate to the .tif file within each of the Blue, Red, and Green
folders. The blue, green, and red .tif files are the inputs that will be combined into one multiband raster.
b. For the Output Raster field, select an output file location (make sure it is not inside a
geodatabase) and name the output appropriately.
c. The Environment settings can be left as they are.
4. Once the raster is created, verify the output is as desired by right clicking on the raster and
examining bands (in symbology) and raster information (in properties).
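For users who prefer to script the Raster Calculator step, the following is a minimal sketch using arcpy Spatial Analyst map algebra (a Spatial Analyst license is required). The input and output paths are placeholders and must be pointed at the project's 4_index > indices Red, Green, and Blue .tif files.

import arcpy
from arcpy.sa import Raster, Float

arcpy.CheckOutExtension("Spatial")  # Spatial Analyst is required for map algebra

# Placeholder paths - replace with the actual Pix4D index rasters.
red = Raster(r"C:\project\4_index\indices\red\red.tif")
green = Raster(r"C:\project\4_index\indices\green\green.tif")
blue = Raster(r"C:\project\4_index\indices\blue\blue.tif")

# Excess Green (ExG) = 2*Green - Red - Blue
exg = 2 * green - red - blue
exg.save(r"C:\project\vegetation_indices\ExG.tif")  # store in a folder, not a geodatabase

# Vegetative Index Green (VIg) = (Green - Red) / (Green + Red)
# Float() keeps the division from being truncated if the inputs are integer rasters.
vig = Float(green - red) / Float(green + red)
vig.save(r"C:\project\vegetation_indices\VIg.tif")

The saved rasters can then be added to the ArcGIS Pro project and checked in the same way as the Raster Calculator outputs.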
Appendix 6. Creating RGB Orthomosaic from Multispectral Imagery in
ArcGIS Pro
1. Once multispectral imagery has been processed and outputs have been created, create a new
ArcGIS Pro project.
2. Search for and open the Composite Bands geoprocessing tool (under Analysis > Tools) and use
the following steps to generate the RGB orthomosaic:
a. For the Input Rasters field, use the browse icon to navigate to the Pix4D project output
folder called Indices (located in ‘Project Folder Name’ > 4_index > indices). From within
the indices folder, navigate to the .tif file within each of the Red, Green and Blue folders.
The red, green and blue .tif files are the inputs that will be combined into one multiband
raster.
b. For the Output Raster field, select an output file location (make sure it is not inside a
geodatabase) and name the output appropriately.
c. The Environment settings can be left as they are.
3. Once the raster is created, verify that the bands are in the desired order by right-clicking the composite raster and selecting Symbology. The RGB option should be selected, and the red, green, and blue bands should be designated as bands 1, 2, and 3, respectively. A scripted sketch of this compositing step follows.
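As a minimal scripted alternative to steps 2 and 3, the Composite Bands tool can also be run from Python. The paths below are placeholders, and the order of the inputs determines the band order of the output.

import arcpy

# Placeholder paths - replace with the actual Pix4D index rasters.
# The list order sets the output band order: red -> band 1, green -> band 2, blue -> band 3.
in_rasters = [
    r"C:\project\4_index\indices\red\red.tif",
    r"C:\project\4_index\indices\green\green.tif",
    r"C:\project\4_index\indices\blue\blue.tif",
]
out_raster = r"C:\project\orthomosaics\rgb_orthomosaic.tif"  # folder output, not a geodatabase

arcpy.management.CompositeBands(in_rasters, out_raster)

# Quick check that three bands were written.
print(arcpy.management.GetRasterProperties(out_raster, "BANDCOUNT"))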