World Journal of Geomatics and Geosciences
Article | Open Access | 10.31586/wjgg.2025.1242

Exploring LiDAR Applications for Urban Feature Detection: Leveraging AI for Enhanced Feature Extraction from LiDAR Data

Olly Harouni1, Alan Forghani2,*, Maria Rashidi1 and Payam Rahnamayiezekavat1
1 School of Engineering, Design and Built Environment, University of Western Sydney (UWS), Sydney, Australia
2 Science, Technology, Engineering and Mathematics (STEM), University of South Australia, Adelaide, Australia; Faculty of Science & Technology, University of Canberra, Canberra, Australia

Abstract

The integration of LiDAR and Artificial Intelligence (AI) has revolutionized feature detection in urban environments. LiDAR systems, which utilize pulsed laser emissions and reflection measurements, produce detailed 3D maps of urban landscapes. When combined with AI, this data enables accurate identification of urban features such as buildings, green spaces, and infrastructure. This synergy is crucial for enhancing urban development, environmental monitoring, and advancing smart city governance. LiDAR, known for its high-resolution 3D data capture capabilities, paired with AI, particularly deep learning algorithms, facilitates advanced analysis and interpretation of urban areas. This combination supports precise mapping, real-time monitoring, and predictive modeling of urban growth and infrastructure. For instance, AI can process LiDAR data to identify patterns and anomalies, aiding in traffic management, environmental oversight, and infrastructure maintenance. These advancements not only improve urban living conditions but also contribute to sustainable development by optimizing resource use and reducing environmental impacts. Furthermore, AI-enhanced LiDAR is pivotal in advancing autonomous navigation and sophisticated spatial analysis, marking a significant step forward in urban management and evaluation. This review highlights the geometric properties of LiDAR data, derived from spatial point positioning, and underscores the effectiveness of machine learning algorithms in object extraction from point clouds. The study also covers concepts related to LiDAR imaging, feature selection methods, and the identification of outliers in LiDAR point clouds. Findings demonstrate that AI algorithms, especially deep learning models, excel in analyzing high-resolution 3D LiDAR data for accurate urban feature identification and classification. These models leverage extensive datasets to detect patterns and anomalies, improving the detection of buildings, roads, vegetation, and other elements. Automating feature extraction with AI minimizes the need for manual analysis, thereby enhancing urban planning and management efficiency. Additionally, AI methods continually improve with more data, leading to increasingly precise feature detection. The results also indicate that the pulse emitted by continuous-wave LiDAR sensors is distorted when it encounters obstacles, causing discrepancies in the measured physical parameters.

1. Introduction

Urbanization and the influx of people into major cities in developing countries have led to population growth and significant spatial changes in urban and suburban areas. Uncontrolled expansion of population centers can result in adverse effects such as poverty, inadequate facilities, diminished community cohesion, social damage, weak social security, crime, traffic congestion, inefficient urban transportation, and environmental pollution. Excessive urban development poses a serious threat to the quality of life and social stability within communities. To address these challenges, 3D spatial data infrastructure is emerging as a crucial tool for managing crises and coordinating societal responses. Smart, knowledge-based management is essential for effective crisis management [1]. Modern research is focusing on creating 3D urban models to track changes in buildings and urban expansion. Historically, monitoring transformations involved manual methods and visual comparisons, which were time-consuming and required skilled operators. Advances in Earth observation data and Machine Learning (ML) techniques are increasingly used for extracting urban and natural features and monitoring their changes [2]. While satellite and aerial imagery have been employed for mapping and tracking horizontal changes in cities, vertical profile extraction necessitates technologies like optical image pairs, radar image interferometry, or 3D LiDAR point clouds.

1.1. LiDAR Technology and Applications

LiDAR is a laser scanning technology that emits light pulses or continuous wave signals towards objects and records the intensity and travel time of the reflected waves. Compared to photogrammetry, LiDAR is an active system capable of producing high-density point clouds (one point per square meter or better) with reasonable accuracy (0.15-0.25 m horizontally and 0.1-0.3 m vertically) over relatively large areas in a short time [3]. LiDAR systems describe each point on the Earth's surface using X, Y, and Z coordinates, after correcting for platform deviations and laser scanner errors. LiDAR functions similarly to radar but with optical pulses [4].

Key advantages of LiDAR include: a) Operation in any lighting condition and weather; b) Ability to penetrate vegetation; c) Rapid and extensive digital land surveying.

LiDAR technology works by transmitting laser pulses to the ground, which reflect back to the sensor. The distance to the ground is calculated from the pulse travel time and the speed of light (a worked sketch of this calculation follows the component list below). LiDAR provides highly accurate measurements of both the canopy and ground surface, allowing for the detection of individual trees and their dimensions. Airborne LiDAR systems consist of four main components: (1) a laser scanner, (2) differential global positioning systems (GPS) for the aircraft and ground units, (3) a sensitive inertial measurement unit (IMU) attached to the scanner, and (4) an onboard computer that controls the system and stores data from the other components. The GPS and IMU data determine the scanner's position and orientation during pulse emission. Early LiDAR systems in the 1990s used lasers coupled with GPS for mapping but lacked inertial measurements, which limited the accuracy and usability of the resulting ground points. A typical LiDAR system includes:

  1. A differential global positioning system to locate the laser source;
  2. An inertial measurement system to measure laser transmission angles;
  3. A laser scanning system to measure distances between the laser source and the ground;
  4. A computer for system control and data storage.
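The range computation at the heart of this process is simple: the sensor records the round-trip travel time of each pulse and converts it to distance using the speed of light, halving the result because the pulse travels to the target and back. Below is a minimal sketch of this time-of-flight calculation; the function name and the example travel time are illustrative rather than taken from any particular sensor.

```python
# Time-of-flight range calculation for a pulsed LiDAR return (illustrative sketch).
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_travel_time(travel_time_s: float) -> float:
    """Return the one-way distance in meters for a round-trip travel time in seconds."""
    # The pulse travels to the target and back, so the one-way range is half the path.
    return SPEED_OF_LIGHT * travel_time_s / 2.0

if __name__ == "__main__":
    # A hypothetical return received 6.67 microseconds after emission
    # corresponds to a target roughly 1 km from the sensor.
    print(f"{range_from_travel_time(6.67e-6):.1f} m")
```
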
1.2. Machine Learning Integration in Urban Feature Detection

Additionally, machine learning (ML) techniques can efficiently process extensive LiDAR point clouds to automatically extract ground features such as buildings, streets, bridges, trees, and ground surfaces for 3D modeling [6, 7, 8, 9, 10]. While these techniques offer significant benefits, challenges such as measurement errors and the inability to clearly display edges persist. This has driven researchers to explore new approaches in knowledge engineering and ML, including the integration of LiDAR data with other sources like aerial photography and satellite imagery [7].

Over the past twenty years, LiDAR technology has become widely used for creating digital and surface models of the Earth, providing 3D points with high geometric accuracy [8]. The preparation and production of these highly accurate models from unstructured 3D point clouds are reliable, fast, and precise [9]. A notable application of aerial laser scanners is the automatic extraction of urban buildings for urban modeling [8]. This automatic generation of 3D models of man-made structures is crucial in photogrammetry research, where aerial laser scanners play a vital role [10].

Initially, rule-based algorithms were employed to automate point cloud processing, relying on a series of procedures and workflows based on the physical structure of the point cloud. However, there has been a recent shift towards using ML algorithms for topographic LiDAR data processing [11]. This review aims to evaluate the effectiveness of integrating LiDAR with artificial intelligence (AI) and ML techniques for identifying and extracting building characteristics in urban areas.

This study focuses on:

  1. A comprehensive analysis of existing research on evaluating LiDAR images using ML methods.
  2. Proposing a deep learning model for analyzing high-density aerial LiDAR data, which assigns sufficient points to each polygon for more accurate feature calculation. Here, a "polygon" refers to a geometric shape used to represent specific areas or features in LiDAR data. By connecting vertices in a closed shape, polygons help delineate and define urban features such as buildings, roads, and vegetation in aerial LiDAR analysis (a point-to-polygon assignment sketch follows this list).
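To make the polygon notion concrete, the sketch below assigns LiDAR returns to building footprint polygons and counts how many points fall inside each footprint, which is the kind of point-to-polygon density check the proposed model depends on. It assumes the shapely library, and the footprints and point coordinates are purely hypothetical.

```python
# Count LiDAR points falling inside each footprint polygon (illustrative sketch).
from shapely.geometry import Point, Polygon

# Hypothetical building footprints as closed polygons (x, y vertices in meters).
footprints = {
    "building_A": Polygon([(0, 0), (10, 0), (10, 8), (0, 8)]),
    "building_B": Polygon([(20, 5), (28, 5), (28, 12), (20, 12)]),
}

# Hypothetical planimetric (x, y) coordinates of LiDAR returns.
points = [(2.5, 3.0), (9.0, 7.5), (21.0, 6.0), (15.0, 4.0), (27.5, 11.0)]

# Assign each point to the first polygon that contains it.
counts = {name: 0 for name in footprints}
for x, y in points:
    p = Point(x, y)
    for name, poly in footprints.items():
        if poly.contains(p):
            counts[name] += 1
            break

print(counts)  # e.g. {'building_A': 2, 'building_B': 2}
```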

Object recognition from images has significantly enhanced human-machine interaction systems, impacting military operations, autonomous vehicles, and security surveillance. The aim of object detection is to localize and classify objects within an image [12, 13]. Satellite remote sensing, with its broad view and repeatability, represents a new advancement in aerial imaging and processing. It offers timely, accurate, and stable information about the Earth's surface for cost-effective monitoring of changes [14].

Researchers have explored the use of multi-sensor approaches to generate regional or national terrain surface roughness maps that comply with wind loading standards, such as the Australian/New Zealand wind loading standard. For instance, [57] developed a method to derive terrain surface roughness from various multi-source satellite images. Multispectral broadband data is classified into high (e.g., IKONOS), medium (e.g., Landsat), and coarse (e.g., MODIS) spatial resolutions. In a study in New South Wales, an object-based image segmentation and classification technique was applied to MODIS, Landsat Thematic Mapper, and IKONOS bands. This technique identified eleven terrain categories with classification accuracies of 79% for metropolitan Sydney and 93% for rural or urban areas. It was found that object-based classification improves the quality of terrain products compared to traditional spectral-based methods. To enhance terrain roughness classification, [59] employed an integrated textural-spectral analysis, merging Synthetic Aperture Radar and optical datasets. This approach achieved about a 35% improvement over previous classification methods.

Automatic feature extraction remains a complex task in photogrammetry and remote sensing. Recent advancements in LiDAR system capabilities, coupled with the growing application of LiDAR data across various fields, along with its low cost, high speed, and advanced technology, drive ongoing research into automatic detection using 3D LiDAR sensors [15, 16].

1.3. Feature Selection

To address the inefficiencies in traditional methods for building feature extraction, a common approach is to use a broad range of features for classification. However, not all features hold equal practical value. This has led to the development of feature selection techniques for LiDAR point cloud processing, aimed at enhancing classification accuracy while optimizing computational efficiency and reducing memory usage [17]. Feature selection methods are generally divided into two categories: filter and wrapper methods.

Filter Methods: These assess features based on intrinsic properties, such as their correlation with the target variable. They do not consider feature interactions, which can lead to missing important synergies. Despite their speed and effectiveness in reducing the feature subset, filter methods may overlook crucial feature interactions [17, 18].

Wrapper Methods: These evaluate feature subsets by training a model, considering interactions between features. Although more computationally demanding, wrapper methods can provide superior performance due to their consideration of feature interactions. However, they may not be practical for large datasets due to their high computational costs.
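The contrast between the two families can be illustrated with a short scikit-learn sketch, assuming a synthetic stand-in for a per-point feature table: the filter method ranks features by mutual information with the class label without training a classifier, while the wrapper method (recursive feature elimination around a random forest) repeatedly retrains the model on candidate subsets. Parameter values are illustrative, not recommendations.

```python
# Filter vs. wrapper feature selection on a synthetic point-cloud-like feature table.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif, RFE

# Synthetic stand-in for per-point features (height, intensity, roughness, ...).
X, y = make_classification(n_samples=500, n_features=12, n_informative=4,
                           random_state=0)

# Filter method: rank features by mutual information with the class label,
# without training any classifier.
filt = SelectKBest(score_func=mutual_info_classif, k=4).fit(X, y)
print("filter-selected features:", np.where(filt.get_support())[0])

# Wrapper method: recursive feature elimination retrains a random forest and
# drops the weakest feature each round, so feature interactions are considered.
wrap = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
           n_features_to_select=4).fit(X, y)
print("wrapper-selected features:", np.where(wrap.get_support())[0])
```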

Traditional approaches have limitations in handling complex, high-dimensional data. Filter methods may miss key feature interactions, while wrapper methods, though comprehensive, are often impractical for large datasets. This has led to the adoption of more advanced feature selection techniques.

Among these advanced techniques, meta-heuristic algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Simulated Annealing (SA), and Ant Colony Optimization (ACO) are commonly used for feature selection [60, 61, 62]. Recently, the Equilibrium Optimizer algorithm has emerged as a promising method. This algorithm, inspired by a physical volume-mass balance model used for estimating dynamic and equilibrium states, helps minimize feature dimensions and prevent the “Hughes” phenomenon, resulting in a low-dimensional, multi-scale neighborhood spatial-spectral feature subset [63]. Research has shown that this algorithm outperforms other methods such as PSO, GA, Gray Wolf Optimization (GWO), the Gravitational Search Algorithm (GSA), the Salp Swarm Algorithm (SSA), and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [18].

A typical feature selection procedure comprises four components, described in turn below: a producer (subset generation) function, an evaluation function, a termination condition, and a validation step.

Producer Function

This function identifies candidate subsets of features. It can start with no features, all features, or a random subset. In the bottom-up approach, features are added one by one to form the desired subset. Conversely, the top-down approach starts with all features and iteratively removes them to create the desired subset.

Narendra and Fukunaga posited that for a set A with n features, there exists a subset B of size m, among all 2^n possible subsets of A, that maximizes the value of the fitting function. There are C(n, m) = n!/(m!(n - m)!) such subsets of size m to consider; for example, with n = 10 features and m = 3 there are 120 candidate subsets [18]. Various feature selection methods aim to find the best subset among these candidates through complete, heuristic, or random search strategies.

Evaluation Function

This function assesses the accuracy of the desired subset and assigns a fit value based on the feature extraction method. The fit value is compared with previous subsets, and the lowest fitting value determines the optimal subset. The evaluation function is crucial in feature selection, as it measures how well a feature contributes to the model’s performance. Evaluation criteria may include information distance, consistency dependency, and classifier error.

Termination Condition

This defines when the algorithm should stop, which could be based on the number of selected features, the number of iterations, or the stability of the fitting function’s value.

Validation

The validity of the selected subset is assessed by the validation function, which operates outside the feature selection process. Research into feature selection for point clouds has shown that methods using classification approaches and classifier error criteria are effective [20, 21]. For example, Guislain et al. [19] and Chehata et al. [21] employed Random Forest (RF) for classifying and selecting features in LiDAR point clouds. RF not only classifies the dataset but also evaluates the importance of features for classification [21, 22].

Feature selection is essentially a multi-objective optimization problem, focusing on achieving the highest accuracy with the minimum number of features. The most trusted feature subsets are those that deliver the highest accuracy based on validation functions.
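As a deliberately simplified illustration of how the producer function, evaluation function, termination condition, and validation fit together, the sketch below runs a bottom-up (sequential forward) search: the producer adds one candidate feature per iteration, the evaluation function scores each candidate subset by cross-validated classifier error, and the loop terminates once a target subset size is reached. The classifier and synthetic data are placeholders, not the methods of the cited studies.

```python
# Bottom-up (sequential forward) feature selection with a classifier-error criterion.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, n_informative=3,
                           random_state=1)

def evaluation(subset):
    """Evaluation function: cross-validated error of a classifier on the subset."""
    acc = cross_val_score(DecisionTreeClassifier(random_state=1),
                          X[:, subset], y, cv=5).mean()
    return 1.0 - acc  # lower error means a better subset

selected, remaining = [], list(range(X.shape[1]))
target_size = 3  # stop once m features have been selected

while len(selected) < target_size:              # termination condition
    # Producer: generate candidate subsets by adding one unused feature.
    candidates = [selected + [f] for f in remaining]
    errors = [evaluation(c) for c in candidates]
    best = candidates[int(np.argmin(errors))]   # keep the lowest-error candidate
    selected = best
    remaining.remove(best[-1])

print("selected features:", selected, "error:", round(evaluation(selected), 3))
```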

2. Literature Review

Various techniques have been utilized in data detection to analyze LiDAR scans with artificial intelligence, aiming to determine the features and geometry of urban buildings. This paper reviews studies published in the past five years, focusing on Light Detection and Ranging (LiDAR) analysis, machine learning applications in LiDAR, urban area detection, LiDAR observations, and remote sensing.

Gharineiat et al. [10] explored the automatic processing of topographic and surface features in LiDAR data using machine learning (ML). Their review covers various ML methods, including Decision Trees (DT), Support Vector Machines (SVM), and Deep Learning (DL) models such as Convolutional Neural Networks (CNN). However, this study's limitation is the lack of precision in assessing the accuracy of these methods and the reliance on classical ML techniques.

Zhang et al. [22] examined 3D urban building extraction using airborne LiDAR and photogrammetric point cloud fusion with the U-Net deep learning model. Their approach involves an initial geographic localization of photogrammetric point clouds followed by segmentation with the U-Net model. The U-Net model achieved 87% accuracy in building extraction, with an F-score of 0.89 and an Intersection over Union (IoU) of 0.80. A noted weakness is the failure to distinguish building features using color information, despite the reliable geometric accuracy provided by fused point clouds.
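For readers less familiar with these metrics, the sketch below shows how IoU and the F-score reported by Zhang et al. can be computed from a predicted and a reference building mask; the tiny binary masks are invented for illustration and are unrelated to the cited dataset.

```python
# Computing Intersection over Union (IoU) and F-score from binary building masks.
import numpy as np

# Hypothetical 4x4 reference and predicted building masks (1 = building pixel).
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]])
pred  = np.array([[0, 1, 1, 1],
                  [0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 0]])

tp = np.logical_and(pred == 1, truth == 1).sum()   # true positives
fp = np.logical_and(pred == 1, truth == 0).sum()   # false positives
fn = np.logical_and(pred == 0, truth == 1).sum()   # false negatives

iou = tp / (tp + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f_score = 2 * precision * recall / (precision + recall)

print(f"IoU = {iou:.2f}, F-score = {f_score:.2f}")
```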

Wang et al. [23] proposed a multi-stage method for urban building damage extraction by analyzing spectral, height, and angle information from Very High Resolution (VHR) satellite images and airborne LiDAR data. The method involves extracting height and angle features from pre- and post-event data, isolating vegetation, ground, and shadows, and assessing building damage through height and point cloud differences. The method, tested in earthquake-damaged Port-au-Prince, Haiti, faces issues with false positives in damage detection and limited sample size.

Deng et al. [24] introduced a hierarchical data mining model for identifying urban building functions using multi-source data in Guangzhou, China. This model classifies buildings into categories such as residential, commercial, and industrial with 85% overall accuracy. A limitation is the mislabeling of some points of interest (POIs), affecting classification accuracy.

Cao et al. [25] discussed three-dimensional multi-scale detection of urban buildings with aerial LiDAR data. Their method integrates surface smoothness, variance in the normal direction, and the gray-level co-occurrence matrix into a graph-cutting algorithm for building labeling. The method demonstrated high accuracy in extracting building parameters but lacks detection of additional urban features and multi-source data synthesis.

Ojogbane et al. [26] utilized deep learning with airborne LiDAR and high-resolution aerial imagery for automatic building detection. Their approach, featuring parallel channels for high-resolution imagery and DSM, achieved over 80% overall accuracy. However, it lacks sensitivity evaluation for large and complex buildings.

Zhou and Chang [27] focused on automatic building classification using machine learning and LiDAR images. They tested twelve ML algorithms, finding that gradient-boosted decision trees performed best. This study's primary limitation is its exclusion of deep learning methods and its focus on only commercial and residential buildings.

Nahhas et al. [28] proposed a deep learning-based framework for building recognition integrating LiDAR data and orthophotos. Their method employs object-based analysis, dimensionality reduction, and CNNs, achieving modest improvements in accuracy. A shortcoming is the lack of consideration for meta-heuristic algorithms to enhance feature selection.

Guo et al. [29] evaluated building extraction methods using photogrammetric and LiDAR point clouds, comparing DSM images and point clouds. Their threshold segmentation method yielded high accuracy for building footprints but experienced accuracy reduction with increased threshold settings.

Cooner et al. [30] assessed urban damage detection from the 2010 Haiti earthquake using remote sensing and machine learning. The study compared various neural network and random forest algorithms, achieving around 90% accuracy. However, it did not explore deep learning capabilities like CNNs, which could enhance feature detection.

Shirowzhan et al. [31] compared machine learning and point-based algorithms for detecting building changes over time using two sets of LiDAR data. Their study identified strengths and weaknesses of different algorithms, noting that M3C2 outperforms C2C in detecting height changes but is limited by the availability of suitable datasets.

Hartling et al. [32] investigated urban tree species classification using data fusion of WorldView-2/3 and LiDAR with DenseNet deep learning. DenseNet showed superior performance with 82.6% accuracy compared to RF and SVM. The study’s limitations include low accuracy due to limited training data.

Cetin and Yastikli [33] classified tree species from 3D LiDAR data using machine learning algorithms. Despite achieving decent accuracy with SVM, RF, and MLP classifiers, the study’s main weakness is its lower detection accuracy compared to other methods and lack of feature reduction in preprocessing.

Zhou and Gong [34] proposed a deep neural network approach for detecting residential buildings from airborne LiDAR data. This method demonstrated high accuracy across different point cloud datasets but is limited by the need for dense point clouds and insufficient high-level data infrastructure.

Vakalopoulou et al. [35] developed an automatic building recognition framework using high-resolution multispectral data and deep convolutional neural networks. While promising, the study's key limitation is its focus on single-class data without multi-class consideration.

Park and Guldmann [36] created 3D city models with building footprints using ML and LiDAR point cloud classification. Their Random Forest algorithm achieved 96.5% accuracy in classifying building points but did not address variations across building types.

Li et al. [37] reviewed solid-state LiDAR techniques but did not explore machine learning’s role in enhancing LiDAR image evaluation.

Zamanakos et al. [38] reviewed LIDAR-based 3D object detection methods, highlighting gaps in integrating machine learning insights to capture trends in LIDAR-based object recognition.

Su et al. [39] utilized deep learning (KPConv) for building recognition from aerial LiDAR data, finding that RGB features had limited impact. The study’s main drawback is the lack of multi-class feature consideration and sufficient point density.

Overall, machine learning approaches have significantly influenced LiDAR aerial image evaluation, with deep learning methods like CNNs showing notable advancements in object recognition. However, challenges remain, including accuracy, feature point considerations, and data density. Advances in deep learning continue to enhance remote sensing techniques, improving satellite image quality and analysis.

3. Outliers in LiDAR Point Clouds

Outliers in LiDAR point clouds can be categorized into high-altitude and low-altitude outliers. High-altitude outliers are caused by factors such as aircraft, birds, or erroneous LiDAR pulses, while low-altitude outliers result from repeated pulses impacting ground features. Removing these outliers is crucial as they can introduce errors in filters, especially those assuming the minimum height corresponds to the Earth's surface. Additionally, remote and cluster outliers require expert interpretation. Identifying outliers is often referred to as "anomaly detection," "fault detection," "novelty detection," or "one-class classification" in various scientific and technological contexts.
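A crude but common first pass for flagging the most extreme high- and low-altitude outliers is a simple statistical test on point heights, such as the z-score sketch below; the tile heights are synthetic and the threshold of three standard deviations is an illustrative choice rather than a recommendation from the cited literature.

```python
# Flag high/low elevation outliers in a LiDAR tile using a z-score threshold.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical tile: mostly ground-like heights around 50 m plus a few spurious returns.
z = np.concatenate([rng.normal(50.0, 2.0, 1000),   # plausible surface heights
                    [310.0, 295.0],                # high outliers (e.g. birds, aircraft)
                    [-45.0]])                      # low outlier (e.g. multipath return)

z_scores = (z - z.mean()) / z.std()
outlier_mask = np.abs(z_scores) > 3.0   # a common, adjustable threshold

print("flagged heights:", np.sort(z[outlier_mask]))
```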

3.1. Response of LiDAR Data

The registered LiDAR points are affected by three components, namely the bare ground surface, above-ground features, and noise:

M_sensor = E_ground + E_non-ground + E_noise

In this equation, M_sensor is the value recorded by the LiDAR sensor, E_ground is the height of the ground (the minimum measured local height), and E_non-ground is the height of above-ground features such as trees, bushes, buildings, and bridges. E_noise is unwanted measured noise, such as sensor noise or spurious returns from low-flying planes or birds [40].

Points on the earth's surface can be distinguished from other features based on the following four physical characteristics:

3.2. The Lowest Height

In LiDAR data analysis, the lowest height of cloud points within a local area is typically regarded as indicative of the ground surface. This characteristic is widely utilized in various methods for filtering and identifying the earth's surface [41, 42, 43, 44].
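The sketch below illustrates how this lowest-height assumption is typically exploited: points are binned into a coarse planimetric grid, the minimum height in each cell is taken as a local ground estimate, and points close to that minimum are labeled as ground candidates. The cell size, height tolerance, and synthetic terrain are illustrative values, not parameters from the cited filters.

```python
# Grid-based ground candidate extraction using the local minimum height.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(0, 100, n)
y = rng.uniform(0, 100, n)
# Hypothetical heights: gently varying ground plus scattered above-ground features.
ground = 20.0 + 0.05 * x
z = ground + np.where(rng.random(n) < 0.3, rng.uniform(2, 15, n), rng.normal(0, 0.1, n))

cell = 10.0          # planimetric grid cell size in meters (illustrative)
tol = 0.5            # height tolerance above the local minimum (illustrative)
ix = (x // cell).astype(int)
iy = (y // cell).astype(int)

is_ground = np.zeros(n, dtype=bool)
for cx, cy in {(a, b) for a, b in zip(ix, iy)}:
    in_cell = (ix == cx) & (iy == cy)
    local_min = z[in_cell].min()            # lowest return taken as the local ground level
    is_ground[in_cell] = z[in_cell] <= local_min + tol

print(f"{is_ground.sum()} of {n} points labeled as ground candidates")
```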

3.3. The Surface Slope

The slope between two neighboring points helps distinguish gently varying bare ground from more complex terrain and features elevated above the surface. Many advanced filtering techniques classify points whose slope relative to the surrounding surface exceeds the maximum expected ground slope as features unrelated to the ground surface [45, 46]. In urban environments, sloping surfaces related to the ground usually have a slope of less than 30 degrees.
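A minimal version of this slope rule is sketched below: for each point, the slope to its nearest neighbors is computed, and a point that sits above some neighbor with a slope steeper than the 30-degree threshold quoted above is treated as non-ground. It assumes scipy for the neighbor search; the synthetic point cloud and the neighbor count are illustrative.

```python
# Slope-based non-ground labeling: points connected to a lower neighbor by a slope
# steeper than a ground-slope threshold are treated as above-ground features.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
n = 1500
xy = rng.uniform(0, 100, (n, 2))
z = 0.02 * xy[:, 0] + rng.normal(0, 0.05, n)        # near-flat synthetic ground
roof = rng.random(n) < 0.1
z[roof] += 8.0                                      # hypothetical elevated roof returns

max_ground_slope = np.tan(np.radians(30.0))         # maximum expected ground slope

tree = cKDTree(xy)
dist, idx = tree.query(xy, k=6)                     # each point plus its 5 nearest neighbors
# Slope to each neighbor = height difference / horizontal distance (column 0 is the point itself).
dz = z[:, None] - z[idx[:, 1:]]                     # positive where the point sits above its neighbor
slope = dz / np.maximum(dist[:, 1:], 1e-6)
non_ground = slope.max(axis=1) > max_ground_slope   # too steep above some neighbor -> not bare ground

print(f"{non_ground.sum()} of {n} points labeled non-ground")
```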

3.4. Ground Height Difference

Points on a bare earth surface typically exhibit minimal sharp elevation changes. Conversely, the height difference between the earth's surface and overlying features, such as trees or buildings, is usually more pronounced. Consequently, if a point’s height deviation from the ground surface surpasses a specific threshold, it is classified as a feature other than the ground surface [47, 48, 49, 50].

3.5. Homogeneity of Earth's Surface

The bare ground surface generally displays smooth and continuous changes, whereas obstructions like buildings and trees disrupt this uniformity. Trees, in particular, present less surface homogeneity compared to buildings, leading to the use of morphological characteristics for their identification and filtering [51, 61].

These principles form the foundation of many existing filters designed to distinguish between earth surface points and other features. However, it is important to recognize that these characteristics do not always apply, and exceptions such as cliffs and rocks with extreme height variations can cause filtering errors. A comprehensive review of current filters for earth surface point separation and noise removal is available in [51, 62, 63, 64].

4. Conclusions

This paper reviewed various feature selection methods for classifying 3D point clouds from LiDAR sensors. Our survey indicated that research in 3D point cloud classification is limited. Some researchers focus on processing return wave signals for LiDAR data classification, while others enhance feature vectors by integrating LiDAR data with additional sources like optical images. Machine learning, particularly deep learning techniques, has proven effective for automatic 3D point cloud classification without requiring signal processing or extensive resources, yielding satisfactory results. The physical characteristics of LiDAR data significantly impact classification outcomes, as distortions in the return pulse due to interactions with objects can alter the extracted physical parameters.

Abbreviations

  • AI: Artificial Intelligence
  • BCA: Building Code of Australia
  • LiDAR: Light Detection and Ranging
  • ML: Machine Learning
  • GPS: Global Positioning System
  • IMU: Inertial Measurement Unit
  • DSM: Digital Surface Model
  • SVM: Support Vector Machines
  • CNN: Convolutional Neural Networks
  • VHR: Very High Resolution
  • POI: Point of Interest
  • 3D: Three Dimensions
  • DT: Decision Trees
  • OD: Object Detection
  • U-Net: U-Net Deep Learning Model
  • RF: Random Forest
  • GA: Genetic Algorithm
  • PSO: Particle Swarm Optimization
  • ACO: Ant Colony Optimization
  • SA: Simulated Annealing
  • M3C2: Multi-Scale Comparison of 3D Point Clouds
  • GWO: Gray Wolf Optimization
  • GSA: Gravitational Search Algorithm
  • SSA: Salp Swarm Algorithm
  • CMA-ES: Covariance Matrix Adaptation Evolution Strategy

Acknowledgments

We appreciate the anonymous reviewers for their valuable feedback on the manuscript. The views expressed herein are those of the authors and do not necessarily reflect the views of their organization.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. M. Hameed, F. Yang, S. U. Bazai, M. I. Ghafoor, A. Alshehri, I. Khan, ... and F. H. Jaskani, "Urbanization detection using LiDAR-based remote sensing images of Azad Kashmir using novel 3D CNNs," Journal of Sensors, 2022, pp. 1-9, 2022.[CrossRef]
  2. D. C. Mesta, J. T. Van Stan, S. A. Yankine, J. F. Cote, M. T. Jarvis, A. Hildebrandt, ... and G. Maldonado, "Canopy rainfall partitioning across an urbanization gradient in forest structure as characterized by terrestrial LiDAR," in AGU Fall Meeting Abstracts, vol. 2017, pp. H11D-1197, Dec. 2017.
  3. W. Wen, Y. Zhou, G. Zhang, S. Fahandezh-Saadi, X. Bai, W. Zhan, ... and L. T. Hsu, "UrbanLoco: A full sensor suite dataset for mapping and localization in urban scenes," in Proc. 2020 IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 2310-2316, May 2020.[CrossRef]
  4. L. Cheng, S. Chen, X. Liu, H. Xu, Y. Wu, M. Li, and Y. Chen, "Registration of laser scanning point clouds: A review," Sensors, vol. 18, no. 5, 1641, 2018.[CrossRef] [PubMed]
  5. Kumar V. Forest inventory parameters and carbon mapping from airborne LIDAR (Master's thesis, University of Twente).
  6. M. N. Bazezew, Y. A. Hussin, and E. H. Kloosterman, "Integrating Airborne LiDAR and Terrestrial Laser Scanner forest parameters for accurate above-ground biomass/carbon estimation in Ayer Hitam tropical forest, Malaysia," International Journal of Applied Earth Observation and Geoinformation, vol. 73, pp. 638-652, 2018; W. W. Wen, G. Zhang, and L. T. Hsu, "GNSS NLOS exclusion based on dynamic object detection using LiDAR point cloud," IEEE Trans. on Intell. Transp. Syst., vol. 22, no. 2, pp. 853-862, 2019.[CrossRef]
  7. F. Ackermann, "Airborne laser scanning-present status and future expectations," ISPRS J. of Photogramm. and Remote Sens., vol. 54, no. 2-3, pp. 64-67, 1999.[CrossRef]
  8. C. Mallet, U. Soergel, and F. Bretar, "Analysis of full-waveform lidar data for classification of urban areas," in ISPRS Congress 2008, July 2008.
  9. A. Mancini, E. Frontoni, and P. Zingaretti, "Automatic extraction of urban objects from multi-source aerial data," in Proc. of CMRT09: Object Extraction for 3D City Models, Road Databases and Traffic Monitoring—Concepts, Algorithms and Evaluation, vol. 38, pp. 13-18, 2009.
  10. Z. Gharineiat, F. Tarsha Kurdi, and G. Campbell, "Review of Automatic Processing of Topography and Surface Feature Identification LiDAR Data Using Machine Learning Techniques," Remote Sens., vol. 14, no. 19, pp. 4685, 2022.[CrossRef]
  11. N. Bustos, M. Mashhadi, S. K. Lai-Yuen, S. Sarkar, and T. K. Das, "A systematic literature review on object detection using near-infrared and thermal images," Neurocomputing, 126804, 2023.[CrossRef]
  12. J. Kaur and W. Singh, "A systematic review of object detection from images using deep learning," Multimedia Tools and Applications, pp. 1-86, 2023.
  13. Y. Ban and O. Yousif, "Change detection techniques: A review," Multitemporal Remote Sensing: Methods and Applications, pp. 19-43, 2016.[CrossRef]
  14. G. Heritage and A. Large, Eds., Laser Scanning for the Environmental Sciences. John Wiley & Sons, 2009.[CrossRef]
  15. S. Y. Alaba and J. E. Ball, "A survey on deep-learning-based lidar 3d object detection for autonomous driving," Sensors, vol. 22, no. 24, 9577, 2022.[CrossRef] [PubMed]
  16. J. Shan and C. K. Toth, Eds., Topographic Laser Ranging and Scanning: Principles and Processing. CRC Press, 2018, pp. 1-9.[CrossRef]
  17. Shi S, Bi S, Gong W, Chen B, Chen B, Tang X, Qu F, Song S. Land cover classification with multispectral LiDAR based on multi-scale spatial and spectral feature selection. Remote Sensing. 2021 Oct 14; 13(20): 4118.[CrossRef]
  18. P. M. Narendra and K. Fukunaga, "A branch and bound algorithm for feature subset selection," IEEE Transactions on Computers, vol. 26, no. 09, pp. 917-922, 1977.[CrossRef]
  19. M. Guislain, J. Digne, R. Chaine, and G. Monnier, "Fine-scale image registration in large-scale urban LIDAR point sets," Computer Vision and Image Understanding, vol. 157, pp. 90-102, 2017.[CrossRef]
  20. S. Warnke, "Variable selection for road segmentation in aerial images," 2017.[CrossRef]
  21. N. Chehata, L. Guo, and C. Mallet, "Airborne lidar feature selection for urban classification using random forests," in Laserscanning, Sep. 2009.
  22. P. Zhang, H. He, Y. Wang, Y. Liu, H. Lin, L. Guo, and W. Yang, "3D urban buildings extraction based on airborne lidar and photogrammetric point cloud fusion according to U-Net deep learning model segmentation," IEEE Access, vol. 10, pp. 20889-20897, 2022.[CrossRef]
  23. X. Wang, and P. Li, "Extraction of urban building damage using spectral, height, and corner information from VHR satellite images and airborne LiDAR data," ISPRS J. of Photogramm. and Remote Sens., vol. 159, pp. 322-336, 2020.[CrossRef]
  24. Y. Deng, R. Chen, J. Yang, Y. Li, H. Jiang, W. Liao, and M. Sun, "Identify urban building functions with multisource data: A case study in Guangzhou, China," International Journal of Geographical Information Science, vol. 36, no. 10, pp. 2060-2085, 2022.[CrossRef]
  25. S. Cao, Q. Weng, M. Du, B. Li, R. Zhong, and Y. Mo, "Multi-scale three-dimensional detection of urban buildings using aerial LiDAR data," GIScience & Remote Sensing, vol. 57, no. 8, pp. 1125-1143, 2020.[CrossRef]
  26. S. S. Ojogbane, S. Mansor, B. Kalantar, Z. B. Khuzaimah, H. Z. M. Shafri, and N. Ueda, "Automated building detection from airborne LiDAR and very high-resolution aerial imagery with deep neural network," Remote Sensing, vol. 13, no. 23, pp. 4803, 2021.[CrossRef]
  27. P. Zhou and Y. Chang, "Automated classification of building structures for urban built environment identification using machine learning," Journal of Building Engineering, vol. 43, 103008, 2021.[CrossRef]
  28. F. H. Nahhas, H. Z. Shafri, M. I. Sameen, B. Pradhan, and S. Mansor, "Deep learning approach for building detection using lidar–orthophoto fusion," Journal of Sensors, 2018, pp. 1-10.[CrossRef]
  29. L. Guo, X. Deng, Y. Liu, H. He, H. Lin, G. Qiu, and W. Yang, "Extraction of dense urban buildings from photogrammetric and LiDAR point clouds," IEEE Access, vol. 9, pp. 111823-111832, 2021.[CrossRef]
  30. A. J. Cooner, Y. Shao, and J. B. Campbell, "Detection of urban damage using remote sensing and machine learning algorithms: Revisiting the 2010 Haiti earthquake," Remote Sensing, vol. 8, no. 10, pp. 868, 2016.[CrossRef]
  31. S. Shirowzhan, S. M. Sepasgozar, H. Li, J. Trinder, and P. Tang, "Comparative analysis of machine learning and point-based algorithms for detecting 3D changes in buildings over time using bi-temporal lidar data," Automation in Construction, vol. 105, 102841, 2019.[CrossRef]
  32. S. Hartling, V. Sagan, P. Sidike, M. Maimaitijiang, and J. Carron, "Urban tree species classification using a WorldView-2/3 and LiDAR data fusion approach and deep learning," Sensors, vol. 19, no. 6, pp. 1284, 2019.[CrossRef] [PubMed]
  33. Z. Cetin and N. Yastikli, "The use of machine learning algorithms in urban tree species classification," ISPRS International Journal of Geo-Information, vol. 11, no. 4, 226, 2022.[CrossRef]
  34. Z. Zhou and J. Gong, "Automated residential building detection from airborne LiDAR data with deep neural networks," Advanced Engineering Informatics, vol. 36, pp. 229-241, 2018.[CrossRef]
  35. M. Vakalopoulou, K. Karantzalos, N. Komodakis, and N. Paragios, "Building detection in very high-resolution multispectral data with deep learning features," in 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pp. 1873-1876, July 2015.[CrossRef]
  36. Y. Park and J. M. Guldmann, "Creating 3D city models with building footprints and LIDAR point cloud classification: A machine learning approach," Computers, Environment, and Urban Systems, vol. 75, pp. 76-89, 2019.[CrossRef]
  37. N. Li, C. P. Ho, J. Xue, L. W. Lim, G. Chen, Y. H. Fu, and L. Y. T. Lee, "A progress review on solid-state LiDAR and nanophotonics-based LiDAR sensors," Laser & Photonics Reviews, vol. 16, no. 11, 2100511, 2022.[CrossRef]
  38. G. Zamanakos, L. Tsochatzidis, A. Amanatiadis, and I. Pratikakis, "A comprehensive survey of LIDAR-based 3D object detection methods with deep learning for autonomous driving," Computers & Graphics, vol. 99, pp. 153-181, 2021.[CrossRef]
  39. S. Su, K. Nakano, and K. Wakabayashi, "Building Detection from Aerial LIDAR Point Cloud Using Deep Learning," The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 43, pp. 291-296, 2022.[CrossRef]
  40. D. M. Hawkins, Identification of Outliers, vol. 11. London: Chapman and Hall, 1980.[CrossRef]
  41. X. Meng, N. Currit, and K. Zhao, "Ground filtering algorithms for airborne LiDAR data: A review of critical issues," Remote Sensing, vol. 2, no. 3, pp. 833-860, 2010.[CrossRef]
  42. H. Masaharu and K. Ohtsubo, "A filtering method of airborne laser scanner data for complex terrain," International Archives of Photogrammetry Remote Sensing and Spatial Information Sciences, vol. 34, no. 3/B, pp. 165-169, 2002.
  43. J. L. Silván-Cardenás and L. Wang, "A multi-resolution approach for filtering LiDAR altimetry data," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 61, no. 1, pp. 11-22, 2006.[CrossRef]
  44. K. Zhao, S. Popescu, and R. Nelson, "Lidar remote sensing of forest biomass: A scale-invariant estimation approach using airborne lasers," Remote Sensing of Environment, vol. 113, no. 1, pp. 182-196, 2009.[CrossRef]
  45. X. Wang and G. Xu, "Application of LiDAR Remote Sensing Forest Leaf Area Index Extraction Method Based on Big Data Network," in Journal of Physics: Conference Series, vol. 1881, no. 2, p. 022012, Apr. 2021.[CrossRef]
  46. J. Shan and A. Sampath, "Urban DEM generation from raw LiDAR data: A labeling algorithm and its performance," Photogrammetric Engineering & Remote Sensing, vol. 71, no. 2, pp. 217-226, 2005.[CrossRef]
  47. K. Zhang and D. Whitman, "Comparison of three algorithms for filtering airborne LiDAR data," Photogrammetric Engineering & Remote Sensing, vol. 71, no. 3, pp. 313-324, 2005.[CrossRef]
  48. G. Vosselman, "Slope-based filtering of laser altimetry data," International Archives of Photogrammetry and Remote Sensing, vol. 33, B3/2; PART 3, pp. 935-942, 2000.
  49. M. Okagawa, "Algorithm of multiple filters to extract DSM from LiDAR data," in 2001 ESRI International User Conference, Jul. 2001, pp. 193-203.
  50. R. Passini, D. Betzner, and K. Jacobsen, "Filtering of digital elevation models," in ASPRS Annual Convention, Washington, Apr. 2002.
  51. G. Vosselman, "Slope-based filtering of laser altimetry data," International Archives of Photogrammetry and Remote Sensing, vol. 33, B3/2; PART 3, pp. 935-942, 2000.
  52. M. Jaboyedoff, T. Oppikofer, A. Abellán, M. H. Derron, A. Loye, R. Metzger, and A. Pedrazzini, "Use of LIDAR in landslide investigations: a review," Natural Hazards, vol. 61, pp. 5-28, 2012.[CrossRef]
  53. W. Y. Yan, A. Shaker, and N. El-Ashmawy, "Urban land cover classification using airborne LiDAR data: A review," Remote Sensing of Environment, vol. 158, pp. 295-310, 2015.[CrossRef]
  54. R. W. Kulawardhana, S. C. Popescu, and R. A. Feagin, "Airborne lidar remote sensing applications in non-forested short stature environments: a review," Annals of Forest Research, vol. 60, no. 1, pp. 173-196, 2017.[CrossRef]
  55. A. Forghani, K. Nadimpalli, and R. P. Cechet, "Extracting terrain categories from multi-source satellite imagery," International Journal of Geoinformatics, June 2018, pp. 1-10.
  56. Forghani A, Nadimpalli K, Cechet R. Extracting terrain categories from multi-source satellite imagery. International Journal of Geoinformatics. 2018 Apr; 14(2): 1-0.
  57. Marshall M, Thenkabail P. Advantage of hyperspectral EO-1 Hyperion over multispectral IKONOS, GeoEye-1, WorldView-2, Landsat ETM+, and MODIS vegetation indices in crop biomass estimation. ISPRS Journal of Photogrammetry and Remote Sensing. 2015 Oct 1; 108: 205-18.[CrossRef]
  58. Forghani A, Cechet B, Nadimpalli K. Object-based classification of multi-sensor optical imagery to generate terrain surface roughness information for input to wind risk simulation. In2007 IEEE International Geoscience and Remote Sensing Symposium 2007 Jul 23 (pp. 3090-3095). IEEE.[CrossRef]
  59. Carballeira López J. Global Localization based on Evolutionary Optimization Algorithms for Indoor and Underground Environments (Doctoral dissertation).
  60. He, G., Du, Y., Liang, Q., Zhou, Z., & Shu, L. (2023). Modeling and optimization method of laser cladding based on GA-ACO-RFR and GNSGA-II. International Journal of Precision Engineering and Manufacturing-Green Technology, 10(5), 1207-1222.[CrossRef]
  61. Sameen MI, Pradhan B, Shafri HZ, Mezaal MR, bin Hamid H. Integration of ant colony optimization and object-based analysis for LiDAR data classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2017 Feb 2; 10(5): 2055-66.[CrossRef]
  62. Shi S, Bi S, Gong W, Chen B, Chen B, Tang X, Qu F, Song S. Land cover classification with multispectral LiDAR based on multi-scale spatial and spectral feature selection. Remote Sensing. 2021 Oct 14; 13(20): 4118.[CrossRef]
  63. Zhou L, Meng R, Tan Y, Lv Z, Zhao Y, Xu B, Zhao F. Comparison of UAV-based LiDAR and digital aerial photogrammetry for measuring crown-level canopy height in the urban environment. Urban Forestry & Urban Greening. 2022 Mar 1; 69: 127489.[CrossRef]
  64. Shao J, Yao W, Wang P, He Z, Luo L. Urban GeoBIM construction by integrating semantic LiDAR point clouds with as-designed BIM models. IEEE Transactions on Geoscience and Remote Sensing. 2024 Jan 25.[CrossRef]

Copyright

© 2025 by authors and Scientific Publications. This is an open access article and the related PDF distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
