Information

Program Title: PHIL LiDAR 1. Hazard Mapping of the Philippines using LiDAR
               Program B. LiDAR Data Processing & Validation by SUCs and HEIs

Project Title: Project 8. LiDAR Data Processing & Validation in the Visayas: Central Visayas Region 7

Period Covered: January 1, 2015 - December 31, 2015

Implementing Agency: University of San Carlos

Project Leader: Roland Emerito S. Otadoy, Ph.D.

Source of Fund: DOST-GIA

Amount of Grant for this Period: P 9,135,089.00

Monitoring Agency: Philippine Council for Industry, Energy, and Emerging Technology Research and Development

 

 

Figure 11 The organizational structure of USC Phil-LiDAR 1.

The project is officially known within the University of San Carlos (USC) as USC Phil-LiDAR 1. It is housed at the USC Phil-LiDAR Research Center, Josef Baumgartner Learning Resource Center, University of San Carlos-Talamban Campus. Our organizational structure is shown in Figure 11.

Included in this report are the editing of digital surface models (DSM) and digital terrain models (DTM), the mosaicking of LiDAR data, the integration of bathymetric and validation data into the DEM, ocular surveys, field measurements, and the generation of flood maps. The summary of our accomplishments against the workplan is provided as an appendix.

 

A workflow background review

 

The current practice in the LiDAR-1 subgroup of the Philippine National LiDAR Project for flood hazard exposure feature extraction requires the selection of 5 km x 5 km blocks. These blocks are taken from areas that are historically known to experience flooding brought about by natural hydrological processes, which may be seasonal. Each block is delineated following the processing workflow depicted in Figure 5-1 on the following page. For the most part, the feature analyst or editor for the floodplain has to use his or her own judgement to select features from a true-color image, such as an orthophoto, or from a DSM derived from the usual (d,v,s,t) ASCII data supplied by the data acquisition group, which preprocesses the flight mission data. A constraint is imposed as an accept criterion, for example that a structure must rise at least 2 m above ground and cover at least 25 sq.m in area; anything below these thresholds is discarded. The analyst or data processor then draws by hand the outline of the structure based on his or her visual perception of depth or rise from the input photo of the area under study or from a featureless DSM raster.
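As a purely illustrative sketch of this accept criterion (the attribute names and example values are hypothetical and not part of the project's scripts), the filter amounts to a simple threshold test in Python:

# Illustrative sketch only: keep a candidate structure polygon when it rises at
# least 2 m above ground and covers at least 25 sq.m, as described above.
# The attribute names "height_m" and "area_sqm" are hypothetical.
MIN_HEIGHT_M = 2.0
MIN_AREA_SQM = 25.0

def accept(polygon):
    return polygon["height_m"] >= MIN_HEIGHT_M and polygon["area_sqm"] >= MIN_AREA_SQM

candidates = [
    {"id": 1, "height_m": 3.4, "area_sqm": 60.0},   # kept
    {"id": 2, "height_m": 1.2, "area_sqm": 80.0},   # discarded: too low
    {"id": 3, "height_m": 4.0, "area_sqm": 18.0},   # discarded: too small
]
kept = [p for p in candidates if accept(p)]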

 

Figure 5-1. Basic workflow for feature extraction for floodplain data processing and integration

 

The proposed new workflow concept

 

In the interest of computational efficiency and cost, as well as the more objective character of software-driven feature extraction that does not depend entirely on the skill and experience of an analyst or data processor, we have proposed an enhancement to the current process by introducing a modification of the workflow. Our test case is based on the unavailability of orthophotos or similar true-color imagery at a time when LAS point cloud data is already available and floodplain planning has started. This paper proposes instead the use of the LiDAR LAS laser points and the LAS point cloud processing program LAStools as input, in contrast with the workflow shown in the previous section. We show in the later development the notable differences in terms of processing time and the level of structural detail obtained.

 

Figure 5-2. Proposed workflow modification requires the use of the LAS laser point file only as input

In principle, the hand editing from orthophotos, which gives the analyst or data processor his or her visual perception from in front of the operator's monitor, is replaced with a script that runs an arrangement of executable programs called in sequence, with parameters passed from placeholders or variables in a command interface (CMD) batch file in a Windows OS environment. This is a custom Pipeline Algorithm that can programmatically differentiate, for example, buildings or structures from vegetation.
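A minimal sketch of this idea, written here in Python rather than as the actual Windows CMD batch file and using placeholder executable names, is a driver that substitutes parameter values into a fixed sequence of command-line calls:

# Sketch only: the executables and their flags below are placeholders, not the
# project's actual tools. The point is the mechanism: a scripted sequence of
# programs with parameters filled in from variables, replacing hand editing.
import subprocess

params = {"input": "pt000218.las", "height_break": 2.0, "step": 0.5}

pipeline = [
    "ground_filter.exe -i {input} -o ground.las",
    "classify.exe -i ground.las -height_break {height_break} -step {step} -o classified.las",
    "export_shapes.exe -i classified.las -o buildings.shp",
]

for template in pipeline:
    cmd = template.format(**params)             # substitute the placeholder values
    subprocess.run(cmd.split(), check=True)     # stop the pipeline if a step fails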

 

A COMPUTATIONAL SET-UP FOR AUTOMATING FEATURE EXTRACTION FROM THE LAS DATA SET

 

Selection of study area and set-up specifications for prototyping the new method

 

For our initial test and proof of concept we take three areas from the flight missions for Cebu_Blk47A, namely the laser point files pt000218.las, pt000219.las, and pt000220.las, which were specially selected for the planned floodplain feature extraction and field validation under actual study. We had our analysts and data processors perform the usual hand editing, to the best of their ability, to generate the polygons from a provided DSM, with a constraint placed on structures to drop 2 m downwards and to enclose smaller structures in a single lumped 25 sq.m polygon. From the recorded start and finish times, the manual editing took 4.5 hrs on average, up to the shape file generation from the finished polygons, using iMac desktop computers with i7 quad-core class CPUs and 2 TB hybrid HDDs running on 6 cores. Our most recent versions of the scripts for the pipeline algorithm run for an average of 1 hr 10 min when all the mission flights of Cebu_Blk47A in a folder are processed, while the time per point file is just under 12 minutes. At the end of the LAStools-based pipeline algorithm execution we will have produced one complete self-contained zipped folder for each of the three point files, consisting of .shp, .tiff, .kml, and other associated files. The shape files are then read into ArcGIS, and the overlap between the results of hand editing and the results of LAStools processing is again visually compared. The process is iterated to determine which height break and step size show the optimal match by visual inspection. Both of these shape files are overlaid on the DSM and visually checked again for the quality of their alignment; all of these files are then stored in a server database. We summarize this scheme in Table 5-1, shown on the next page.
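As a small illustrative sketch of the per-tile packaging step only (file patterns, paths, and names are assumptions, not the project's actual script), the self-contained zipped folder per point file could be produced as follows:

# Sketch: gather the .shp (with its sidecar files), .tif and .kml outputs of one
# tile into a single zip archive, one archive per point file. Paths and
# extensions are assumptions for illustration.
import glob
import zipfile

def package_tile(tile_name, out_dir="."):
    archive = f"{tile_name}_outputs.zip"
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for pattern in ("*.shp", "*.shx", "*.dbf", "*.prj", "*.tif", "*.kml"):
            for path in glob.glob(f"{out_dir}/{tile_name}{pattern}"):
                zf.write(path)
    return archive

for tile in ("pt000218", "pt000219", "pt000220"):
    package_tile(tile)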

 


 
 

Description of the Pipeline Algorithm for discriminating buildings and structures

We adapted and modified for our purposes a publicly shared example on the use of LAStools for discriminating buildings and vegetation from rapidlasso, the makers of LAStools for LAS point cloud processing; we refer to that work for the development of our custom script. At the core of this algorithm are successive steps that lead to the generation of shape files and classified TIFF images for both buildings and vegetation. In our application we are interested only in the structures or buildings that lie in the areas of the floodplains.
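As a sketch only, written as a Python driver rather than the project's CMD batch script, the adapted chain follows the usual LAStools sequence of ground classification, height computation, and building/vegetation classification before exporting building outlines; the exact options in our script may differ:

# Sketch of the LAStools chain adapted from the rapidlasso example referenced
# above; option values are indicative, not the exact ones used in our script.
import subprocess

def classify_tile(las_file):
    # classify ground points
    subprocess.run(["lasground", "-i", las_file, "-o", "ground.las"], check=True)
    # compute the height of each point above the ground
    subprocess.run(["lasheight", "-i", "ground.las", "-o", "height.las"], check=True)
    # classify buildings (class 6) and high vegetation (class 5)
    subprocess.run(["lasclassify", "-i", "height.las", "-o", "classified.las"], check=True)
    # export the building class as polygon outlines in a shape file
    subprocess.run(["lasboundary", "-i", "classified.las",
                    "-keep_class", "6", "-disjoint",
                    "-o", "buildings.shp"], check=True)

classify_tile("pt000218.las")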

 

Figure 5-4. Flowchart for running the pipeline algorithm iteratively to generate successive files with changing height breaks

To determine how fast each tile is processed, a code profiling measurement was devised: for each pass of the loop, at each height break change, the CPU time is captured and logged to a text file. The same profiling is done for each executable program that is run internally in the Pipeline Algorithm routine. At the code level, we show in Figure 5.0 on the next page a code snippet of the Runloop.bat script; the boxed part highlights where the time capture happens and is logged. We do not show the Pipeline Algorithm script itself here, but it is made available by the authors at the download link given in the references and can be freely downloaded and modified by the reader using their own LiDAR LAS data set.

 

At the code level, a loop script calls a main script and passes a variable value as a counter for the height breaks; because of a limitation of the command interface (CMD) in Windows, only integer values can be used, so the fractional step size is set inside the batch file first before the loop is run. The code snippet is shown in Figure 5.0.

 

Figure 5.0. Code snippet of the Runloop.bat loop script, with the boxed part marking where the CPU time capture is logged
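Since the batch snippet itself is not reproduced in this text, the following is a minimal Python sketch of the equivalent logic of Runloop.bat, with the actual pipeline call left as a placeholder: an integer loop counter is scaled by the fractional step size to obtain the height break, and the elapsed time of each pass is appended to a log file.

# Sketch of the Runloop.bat logic: CMD loops only over integers, so an integer
# counter is scaled by the fractional step size to get the height break, and
# the run time of every pass is logged to a text file for profiling.
import time

STEP = 0.5          # fractional step size, set before the loop is run
START_BREAK = 1.0   # first height break to try (illustrative value)

def run_pipeline(las_file, height_break):
    """Placeholder for the call to the actual pipeline algorithm."""
    ...

with open("profiling_log.txt", "a") as log:
    for count in range(0, 10):                      # integer loop counter
        height_break = START_BREAK + count * STEP   # fractional height break
        t0 = time.perf_counter()
        run_pipeline("pt000218.las", height_break)
        elapsed = time.perf_counter() - t0
        log.write(f"height_break={height_break:.2f} elapsed_s={elapsed:.1f}\n")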

 

Results and Discussion

Evolution of structural detail from height breaks iteration

Table 5-2. Evolution of height structures from selected height breaks from the laser point file


 
 

Comparison of manually edited shape files with shape files computationally derived from LAS

 

We also compared the results of the two methods by overlaying their corresponding generated shape files on the same DSM and making a visual inspection. For our purposes, we define as the reference or template the shape file hand edited by the analyst and data processor from an orthophoto, and as the input shape file the computer-generated shape file derived from the LAS data set. Table 5-3, shown on the successive pages, gives the results of overlaying the hand-edited shape files onto the DSM and of doing the same for the LAS-derived shape files, at selected heights and step sizes, for our test case of Cebu_Blk47A with laser point files pt000218, pt000219, and pt000220.

 

 

Table 5-3. Test case results for Cebu_Blk47A LAS laser point files pt000218, pt000219, pt000220


 

Measurement of similarity by 2-dimensional Normalized Cross Correlation

 

We developed an objective test of similarity between an arbitrary template, here the hand-generated shape file produced by an analyst and data processor, and the shape files computationally generated from the LAS data set using the pipeline algorithm. The basic concept of correlation as we apply it to images is that one image is a sub-image I1(x, y) of size K x L within an image I2(x, y) of size M x N, where K ≤ M and L ≤ N. The normalized cross correlation between I1(x, y) and I2(x, y) at a point (i, j) is then the mathematical operation given by the following equation:

C(i, j) = \frac{\sum_{x=0}^{K-1}\sum_{y=0}^{L-1} I_1(x, y)\, I_2(x + i, y + j)}{\sqrt{\sum_{x=0}^{K-1}\sum_{y=0}^{L-1} I_1(x, y)^2 \;\sum_{x=0}^{K-1}\sum_{y=0}^{L-1} I_2(x + i, y + j)^2}} \qquad (1)
 

Because an image I(x, y) is represented as a 2D data matrix whose elements are the light intensity values of the R, G, and B color planes, we can see from equation (1) above that it is essentially a discrete multiplication of two matrices in which one image is offset by an amount (x + i) in the row index and (y + j) in the column index. A perfect cross correlation between two images means a perfect match between the template and the image under comparison, while the result goes to zero if they are completely off. We use the readily available Matlab function normxcorr2 to perform the 2D cross correlation; the script in Figure 6 below compares, in a graphical manner, the manually edited shape file with the shape file derived from the LAS laser point file, shown here for the test case of pt000218.las. The results for each test case are shown on the next page.

 
Figure 6. Matlab script using normxcorr2 to compare the hand-edited shape file with the LAS-derived shape file (test case pt000218.las)
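The original Matlab script is not reproduced in this text; the following is a minimal Python sketch of an equivalent comparison, assuming the two shape files have been exported as rasters (the file names are illustrative) and using scikit-image's match_template, which computes the normalized cross correlation:

# Sketch of an equivalent comparison in Python: the hand-edited raster is used
# as the template and correlated against the LAS-derived raster. File names
# are illustrative only.
import numpy as np
from skimage.io import imread
from skimage.feature import match_template

template = imread("pt000218_handedit.tif", as_gray=True)   # hand-edited result
image = imread("pt000218_las.tif", as_gray=True)            # LAS-derived result

ncc = match_template(image, template, pad_input=True)       # values in [-1, 1]
peak = np.unravel_index(np.argmax(ncc), ncc.shape)
print(f"peak correlation {ncc.max():.3f} at offset {peak}")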
 

Table 5-4. Result of 2D cross correlation between the hand-generated shape file (template) and the 30 m LAS-derived shape file using the pipeline algorithm


 

When the result of the manual editing is taken as the reference or template in the 2D cross correlation process, we see some spread in the peaking. This can be explained by the fact that the pipeline algorithm captures more detail at the exact height breaks compared with the editing done by hand. If the reverse is assumed, that is, when the computer-generated shape file is taken as the template, then the manual editing actually misses much more of the height detail than it should. This demonstrates better accuracy and precision when the height or elevation of structures above ground becomes the most important criterion in feature extraction for floodplain modeling.