Posts

Lab 07, Hyperspectral Remote Sensing

Background and goals:
    The goal of this lab was to gain knowledge and technical skills in hyperspectral imagery and its analysis. The second goal was to introduce the user to FLAASH processing, including atmospherically correcting a hyperspectral image and calculating vegetation metrics.
    The results portion of this report is omitted, as there was no major result from this lab; all deliverables appear in the methods section.
Methods:
    First was an introduction to spectral processing. An ROI file was added to a hyperspectral image and used to extract the mean spectra within the ROI. These were then compared against the spectral library, as seen below.
The ROI statistics dialogue with results from the spectral library
    Next, 20 bands were animated within a greyscale image. This was done to make the spatial occurrence...
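The ROI mean-spectrum step described above can be sketched in Python. This is a minimal NumPy illustration, not ENVI's actual tooling: the toy data cube, the boolean mask, and the `roi_mean_spectrum` helper are all hypothetical.

```python
import numpy as np

def roi_mean_spectrum(cube, roi_mask):
    """Mean spectrum of the pixels inside a binary ROI mask.

    cube: (rows, cols, bands) hyperspectral array
    roi_mask: (rows, cols) boolean array, True inside the ROI
    """
    pixels = cube[roi_mask]        # (n_pixels, bands)
    return pixels.mean(axis=0)     # one mean value per band

# Toy 4x4 scene with 3 bands and a 2x2 ROI in the middle
cube = np.arange(48, dtype=float).reshape(4, 4, 3)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

spectrum = roi_mean_spectrum(cube, mask)
print(spectrum)  # → [22.5 23.5 24.5]
```

The resulting vector is what a spectral-library comparison would then be run against, one mean reflectance value per band.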

Lab 06, Flood simulation modeling

Background and goals:
    The goal of this lab was to gain skills and experience in flood simulation modeling. This was achieved through a tutorial created by Esri.
Methods:
    The first step was to determine which areas would be affected by the flooding. This was found using LAS data and flood height information. Using that information, we determined that roads and buildings would be affected by the flooding.
Results:
    It was found that in the Baltimore area, in places of 5-meter flood depth, 57% of bridges will be affected.
Sources:
    Esri. (2021). [LAS and geodatabase dataset for Baltimore]. Esri. Provided by Cyril Wilson.
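The core comparison in this workflow, checking ground elevation against a flood stage to find affected cells, can be sketched as follows. This is a minimal NumPy illustration under assumed inputs; the elevation values and the `flooded_mask` helper are hypothetical, not the tutorial's actual data or tools.

```python
import numpy as np

def flooded_mask(ground_elev, flood_stage):
    """Return (wet, depth): which cells are inundated and how deep.

    ground_elev: raster of ground elevations (same units as flood_stage)
    flood_stage: scalar water surface height
    """
    depth = np.maximum(flood_stage - ground_elev, 0.0)  # negative depths clipped
    return depth > 0, depth

# Toy 2x2 elevation raster, 5 m flood stage
elev = np.array([[1.0, 4.0],
                 [6.0, 2.0]])
wet, depth = flooded_mask(elev, flood_stage=5.0)
print(wet)    # which cells flood
print(depth)  # water depth in flooded cells
```

Overlaying such a mask with road, building, or bridge features is what yields summary statistics like the percentage of bridges affected.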

Lab 05, LiDAR vegetation metrics modeling

Background and Goals:
    The main goal of this lab was to gain an understanding of how to extract forest metrics from LiDAR point cloud data. This was achieved through data extraction (namely canopy height) from LiDAR data in LP360, then processing that data with ArcPro.
Methods:
    To begin this lab, we extracted vegetation height and ground elevation from LiDAR point data. This was done using the LP360 software. Using those two rasters, the canopy height was then calculated, as seen below.
Canopy Height Raster
    From there, we then had to calculate the AGB. To achieve this, we first took a land use map of the area and reclassed it so that it contained only our areas of interest. That reclassed raster was then used to create a mask in the model below.
AGB calculation model
    The below constants and equatio...
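The canopy-height calculation from the two extracted rasters can be sketched as a simple raster difference. This is a minimal NumPy illustration with hypothetical elevation values; the `canopy_height` helper stands in for the raster-math step, not for LP360 or ArcPro tooling.

```python
import numpy as np

def canopy_height(surface_elev, ground_elev):
    """Canopy height model: vegetation surface minus bare ground.

    Small negative differences (noise where ground and surface
    returns nearly coincide) are clipped to zero.
    """
    return np.clip(surface_elev - ground_elev, 0.0, None)

# Toy 2x2 rasters (elevations in meters)
dsm = np.array([[310.0, 305.5],
                [300.0, 298.0]])  # vegetation surface
dtm = np.array([[290.0, 301.0],
                [300.5, 295.0]])  # ground elevation

chm = canopy_height(dsm, dtm)
print(chm)  # per-cell canopy height in meters
```

The resulting canopy height raster is the input that the AGB model then consumes, restricted to the areas of interest by the reclassed land-use mask.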

Lab 04, Classification accuracy assessment

Background and Goal:
    The goal of this lab was to gain experience and knowledge in evaluating classification results through accuracy assessment. Accuracy assessment is a necessary process that must occur in order for a classification to be used in any application.
Methods:
    The accuracy assessment was done by adding random points to the high-resolution image of Eau Claire and then manually classifying those points. These points were then checked against the classified image. The tool for creating those points is shown below.
Add Random Points Tool
    The points were added using the stratified random parameters. Once added, they resulted in the image below.
Points on the Map
    After that, the accuracy assessment is run, producing data that is used to fill out the error matrix.
Results:
    ...
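Filling out an error matrix from the checked points can be sketched as follows. This is a minimal NumPy illustration with hypothetical reference and classified labels, not the lab's actual assessment data or software output.

```python
import numpy as np

def error_matrix(reference, classified, n_classes):
    """Error (confusion) matrix: rows = classified label, cols = reference label."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for ref, cls in zip(reference, classified):
        m[cls, ref] += 1
    return m

def overall_accuracy(m):
    """Correctly classified points (diagonal) over all points."""
    return np.trace(m) / m.sum()

# Toy sample: 8 random points, 3 classes (0, 1, 2)
ref = np.array([0, 0, 1, 1, 2, 2, 2, 1])  # manually assigned reference labels
cls = np.array([0, 1, 1, 1, 2, 2, 1, 1])  # labels from the classified image

m = error_matrix(ref, cls, n_classes=3)
print(m)
print(overall_accuracy(m))  # → 0.75
```

Producer's and user's accuracies follow the same pattern, dividing each diagonal entry by its column or row sum respectively.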

Lab Three, Object-based image analysis & machine learning classifiers

Background and Goal:
    The goal of this lab was twofold: one, to develop knowledge of object-based image analysis, and two, to gain experience with the eCognition software. The object-based image analysis was done with machine learning, namely SVM and Random Forest classifiers.
Methods:
    Through the lab, classification was done three times. However, as the general process is the same, I will highlight the methods of only one classification.
Drone imagery in false-color
    The first step is to segment the image into objects. Above can be seen the base image. We achieved segmentation using the multiresolution segmentation process in eCognition.
Drone imagery objects
    From here, we then selected training samples to train the classifier.
Training Samples
    Now all that is left to do is to train a...
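The train-and-classify step can be sketched outside eCognition with scikit-learn's Random Forest. This is a minimal illustration under assumed inputs: the per-object feature values (hypothetical mean NIR and red reflectance) and class names are made up, and in the real workflow the features come from the segmented image objects.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-object features: [mean_NIR, mean_red]
X = np.array([[0.80, 0.10], [0.75, 0.15],   # vegetation-like objects
              [0.20, 0.60], [0.25, 0.55],   # soil-like objects
              [0.10, 0.10], [0.15, 0.05]])  # water-like objects
y = np.array(["veg", "veg", "soil", "soil", "water", "water"])

# Train on the labeled sample objects, then classify unseen objects
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
preds = clf.predict([[0.78, 0.12], [0.12, 0.08]])
print(preds)  # → ['veg' 'water']
```

Swapping in `sklearn.svm.SVC` trains the SVM variant with the same fit/predict interface, which mirrors how the lab compared the two classifiers.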

Lab Two, Pixel-based supervised classification

Background and Goal:
    The main goal of this lab was to gain experience in extracting LC/LU (land cover/land use) information from remotely sensed images using pixel-based supervised classification. Another goal was to understand how to select and properly add training data to a supervised classification.
Methods:
    Simply put, the method was to collect training samples, combine them into their separate classes, and then run the supervised classification.
    As for a more detailed method, we collected 12 training samples for water, 11 for forest, 10 for agriculture, 5 for urban areas, and 12 for areas of bare soil. These samples were then viewed in the mean data plot window, and outliers were removed and replaced. Below is the mean plot of all of the signatures used for this exercise.
Signature mean plot used for this exercise
    From there, a separability report was r...
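The signature means behind the mean plot can be sketched as follows. This is a minimal NumPy illustration with hypothetical two-band reflectance samples; the Euclidean distance shown is only a crude stand-in for the transformed-divergence or similar measures that a real separability report uses.

```python
import numpy as np

def class_mean_signatures(samples):
    """Mean spectral signature per class.

    samples: dict mapping class name -> (n_samples, n_bands) array
    """
    return {name: arr.mean(axis=0) for name, arr in samples.items()}

def euclidean_separability(sig_a, sig_b):
    """Crude separability proxy: distance between two mean signatures."""
    return float(np.linalg.norm(sig_a - sig_b))

# Hypothetical 2-band training samples for two classes
samples = {
    "water": np.array([[0.02, 0.01], [0.04, 0.03]]),
    "urban": np.array([[0.30, 0.28], [0.34, 0.32]]),
}
means = class_mean_signatures(samples)
print(means["water"], means["urban"])
print(euclidean_separability(means["water"], means["urban"]))
```

Classes whose mean signatures sit close together in this plot are the ones a separability report flags as likely to be confused.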

Lab One, surface temperature extraction from thermal remote sensed data

Background and goal:
    The goal of this lab was to gain skills in and an understanding of extracting land surface temperature information from the thermal bands of satellite images and drone imagery.
Methods:
    Starting off, we brought different types of imagery into Erdas Imagine to look for tonal quality differences, as well as to practice identifying relatively warm and cool features.
    From there, we began to convert DNs (digital numbers) to radiant surface temperature on 2000 thermal imagery of Eau Claire. That was done by determining the Grescale and Brescale values of the data, and then using the model maker to run two separate models to get the radiant temperature.
    Lastly, we took 2014 thermal imagery of Eau Claire and applied a similar process to it as with the 2000 data. The exception was that the data was first trimmed down using an AOI file, leaving the area surround...
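The two-model conversion described above, DN to radiance via Grescale/Brescale and radiance to brightness temperature, can be sketched as follows. The K1/K2 constants are the published Landsat 7 ETM+ band 6 calibration values, but the Grescale/Brescale numbers and the sample DN here are illustrative assumptions, not the lab's actual values.

```python
import numpy as np

def dn_to_radiance(dn, grescale, brescale):
    """Model 1: spectral radiance L = Grescale * DN + Brescale."""
    return grescale * dn + brescale

def radiance_to_temperature(radiance, k1, k2):
    """Model 2: brightness temperature (Kelvin) from the inverted Planck relation."""
    return k2 / np.log(k1 / radiance + 1.0)

# Published Landsat 7 ETM+ band 6 calibration constants
K1, K2 = 666.09, 1282.71

# Illustrative Grescale/Brescale and a single sample DN
L = dn_to_radiance(np.array([120.0]), grescale=0.067087, brescale=-0.06709)
T = radiance_to_temperature(L, K1, K2)
print(T)  # brightness temperature in Kelvin, roughly 289 K here
```

Subtracting 273.15 converts the result to Celsius, which is usually how the final surface temperature map is presented.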