Review Article Open Access November 30, 2022

A Review of Application of LiDAR and Geospatial Modeling for Detection of Buildings Using Artificial Intelligence Approaches

1. Faculty of Science & Technology, University of Canberra (UC), Canberra, Australia
2. Science, Technology, Engineering and Mathematics (STEM), University of South Australia (UniSA), Adelaide, Australia
Page(s): 47-59
Received: October 06, 2022
Revised: November 20, 2022
Accepted: November 28, 2022
Published: November 30, 2022

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.
Copyright: Copyright © The Author(s), 2022. Published by Scientific Publications

Abstract

Today, the presentation of three-dimensional models of real-world features is very important and widely used, and it has attracted the attention of researchers in various fields, including surveying and spatial information systems, particularly those interested in the three-dimensional reconstruction of buildings. Buildings are the key component of a three-dimensional city model, so extracting and modeling buildings from remote sensing data is an important step in building a digital model of a city. LiDAR technology, with its ability to map in one-dimensional, two-dimensional, and three-dimensional modes, is a suitable solution for providing comprehensive, high-resolution representations of buildings in an urban environment. In this review article, a comprehensive review of the methods used to identify buildings, from the past to the present, is presented, and appropriate directions for the future are discussed.

1. Introduction

Buildings are the most fundamental structures in urban areas due to their abundance, diversity, and complexity. Therefore, automatic identification of buildings has become an important issue for creating and updating spatial information maps and databases for better performance in change detection, land use analysis, and urban monitoring programs [32]. Accurate and timely information about buildings is required for better management of urban and rural areas. Manual extraction of buildings requires skilled operators and increases time and cost. For this reason, in recent decades, automating the extraction of buildings from aerial and satellite images has been the subject of much research. Building detection and extraction algorithms have been developed for various data sources, the most common of which are optical, LiDAR, and radar images. Different data sources provide benefits in different situations. Nevertheless, they still suffer from problems such as shadowing, occlusion, heterogeneity of building roofs, and spectral similarities among urban features. The unique features of each data source, however, suggest how to take advantage of each of them and improve the results by combining them [26]. Many attempts have been made to speed up the identification and extraction of building characteristics with semi-automatic and automatic algorithms, but this remains challenging owing to the geometric complexity of building structures, radiometric variation, the presence of shadows, and other factors [23]. Accordingly, this paper examines the methods used for building detection from the past to the present and evaluates the LiDAR method and its specifications for identifying buildings quickly, accurately, and automatically, positioning it as an ideal method for future use.
Therefore, the structure of this paper is as follows: In section 2, the literature and the history of using different methods for building detection are reviewed. In section 3, the LiDAR method and its unique features for building detection are reviewed and introduced. Finally, in section 4, the conclusion is presented.

2. Literature Review

The first approaches to automatic detection of buildings relied mainly on the use of a single aerial or satellite image. For example, [40] proposed a knowledge-based interpretation method for identifying buildings from aerial imagery. This is known as the first article in this field, and it paved the way for improving the performance of knowledge-based methods, although it has shortcomings such as the incompleteness of the proposed model. [38] later developed an expert system based on artificial intelligence to automatically interpret remote sensing images of buildings. In general, two types of knowledge, namely knowledge of objects and of analysis tools (for example, image processing techniques), were used to realize versatile photo-interpretation systems. This approach played a significant role in the development of remote sensing satellite image processing of buildings but suffers from low detection accuracy. [62] proposed a simple method for extracting the three-dimensional shape of buildings from high-resolution digital elevation models (DEMs) with grid resolutions of 0.5 to 5 m. The knowledge used is object-related, consistent, simple, and transparent, and works with data of varying density and resolution; however, the most important weakness of this article is the simplification of images, which ignores some basic image features. [51] proposed a method for classifying pan-sharpened images using edge-based and object-based methods. In that study, the image was first classified by a pixel-based classifier, the image was then segmented, and attributes were extracted from the segmented image. Finally, the classification results were improved by a fuzzy pixel-based classifier.
One issue that should be considered in pixel classification with edge-based and object-based methods is the selection and extraction of optimal features in the preprocessing stage, which reduces the computational volume and processing time; unfortunately, this is not considered in that paper. [11] identified urban areas using a combination of texture features and fuzzy neural classifiers. [9], to identify buildings, first extracted three-dimensional planes from the point clouds and then identified the approximate edges of the building from the LiDAR data with the Canny edge detection algorithm. Then, based on the approximate edges, they extracted the exact edges of the building in image space through the Hough transform. In that paper, no feature selection technique is used in the preprocessing stage, which leads to a long processing time. [27] proposed a method based on differential morphological profiles to extract buildings from high-resolution panchromatic images. The main drawbacks of this article are the low accuracy of 72% and the obtained quality percentage of 58.8%. [46] used snake models to distinguish buildings in aerial images. An important disadvantage of snake-based models is the reduction of the feature space, which causes some spatial features of aerial images to be ignored; therefore, this model is not recommended for evaluating aerial images. [30], in order to extract building roofs, developed a method for fitting planar surfaces to elevation data within segmented aerial image regions. The proposed method is based on normalized thresholding, which, despite a good detection rate and reliability, also has a high rate of false-positive errors. [49] evaluated the Dempster-Shafer algorithm based on a combination of aerial images and LiDAR data to identify buildings.
Using these two data sources as complements is a good idea for addressing the shortcomings of each while exploiting the capabilities of both at the same time. But the results of [30] showed that feature extraction at both pixel and object levels performed better than the Dempster-Shafer and AdaBoost methods; therefore, those models cannot be fully relied upon. [39] proposed a semi-automatic method for extracting buildings from panchromatic images using active contour and radial casting algorithms. One problem in this article is the lack of noise removal from the aerial images. [1] developed an active contour model to extract building boundaries from aerial imagery. Given the appearance of buildings in such data, its useful features can be used to identify and differentiate buildings from other features. However, these approaches are exposed to problems with complex buildings and vegetation, mainly because a single image does not provide enough information for the algorithms. Moreover, buildings and some other land features may be about the same height; in this case, it is necessary to introduce image features such as spectral characteristics and image texture to distinguish them. Recently, with widespread access to height data and multi-band optical images, the use of data integration methods to identify buildings has attracted much attention [7].
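Edge-based pipelines like the Canny-plus-Hough approach mentioned above vote for line parameters (rho, theta) in an accumulator array. The following minimal pure-NumPy sketch on a synthetic edge image (an illustration of the general technique, not the implementation of [9]) shows how a straight roof edge produces a dominant accumulator peak:

```python
import numpy as np

def hough_lines(edge_img, n_theta=180):
    """Vote for lines (rho, theta) given a binary edge image."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))          # largest possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))      # 0..179 degrees
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        # rho = x*cos(theta) + y*sin(theta), shifted so indices are >= 0
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    return acc, diag

# Synthetic edge map: one horizontal roof edge along row y = 10
edges = np.zeros((32, 32), dtype=np.uint8)
edges[10, 4:28] = 1

acc, diag = hough_lines(edges)
rho_idx, theta_idx = np.unravel_index(np.argmax(acc), acc.shape)
rho, theta_deg = rho_idx - diag, theta_idx
print(rho, theta_deg)   # dominant line: rho near 10, theta near 90 degrees
```

In a real pipeline the binary edge map would come from a Canny detector applied to the LiDAR-derived image, and peak extraction would consider several local maxima rather than a single global one.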

[20] developed a Generalisation Expert System (GES) in recent studies, with four key parts: (a) knowledge representation; (b) an inference engine; (c) knowledge acquisition; and (d) a user interface for semi-automated road network generalization [28, 29]. The GES capabilities of [20] were demonstrated in a case study involving the simplification of 1:250,000 national topographic data (a line and polyline database) to 1:500,000 scale over Canberra, Australia. The GES has a simple Graphical User Interface (GUI) that can assist users without requiring a high level of technical skill or knowledge of spatial data management. The GES system was developed in Java, Python, and C programming environments for the delivery of generalized geographical features. The results of the trials utilizing GES were analyzed: a series of generalization routines was performed to assess the quality of simplification results for different spatial layers. The test results show that GES generalizes line features accurately while still maintaining their geometric relations.

[52] used optical and radar images to extract three-dimensional buildings in three stages: detecting buildings, estimating their heights, and verifying the results. This method was only able to extract large, rectangular buildings. The article uses SAR images, whose most important problem is the presence of speckle noise; however, no action was taken to remove this noise, which indicates a weakness of the paper. [47] produced and updated buildings in a database using combined optical and synthetic aperture radar images. In this method, the only feature derived from the synthetic aperture radar image was obtained from the mean backscatter and the shadow ratio of the area, and the other features used were extracted from the optical image. The Dempster-Shafer evidence theory employed in this paper performs poorly on LiDAR images. [13] used the Square Root Pair Difference (SRPD) and Gi* descriptors to derive texture properties, and through these two descriptors they extracted buildings in urban areas from TerraSAR-X images. In this method, noise disturbances, especially around metal fences, and confusion between the medium texture of SRPD and Gi* and the rough texture of the urban area caused a decrease in accuracy, leading to the low accuracy of the proposed method. [57] proposed a hybrid method for 3D extraction of buildings using WorldView-2 satellite imagery acquired at different viewing angles. In their proposed method, they extracted the relative heights of the buildings using pattern matching on the pan-sharpened image, and then used an SVM classifier to extract the buildings. One unsuitable aspect of the SVM method in feature extraction is its binary approach; accordingly, in this paper, 3D images are converted to 2D images so that the SVM classifier can be used to classify them, which has distorted the results.
[64] reconstructed buildings three-dimensionally using mathematical operators. [6] used texture information to identify buildings by classifying high-resolution TerraSAR-X images. [65] used a marker-controlled watershed transform, combining features and texture information, to identify buildings from synthetic aperture radar images. This method is applicable only to buildings with simple shapes and also depends on threshold selection. The proposed method was applied to high-resolution SAR images of various scenes, and the results confirm that it is effective, with a high detection rate, low false-alarm rate, and good localization performance. However, the main drawback of the article lies in the techniques used to remove noise from the images, which leave considerable residual noise. [22] proposed a new method to improve the spectral quality of the IHS algorithm for fusing radar and optical images of urban areas. The image with the higher spatial resolution and the intensity image were combined based on their statistical information; the combined image then replaced the intensity component, and an inverse IHS transform was applied. Despite the improvements in classification accuracy reported in this paper, its best accuracy is 85.6%, which is much lower than newer methods. [58] proposed a method for extracting buildings based on primitive geometric shapes such as lines and line intersections; in three preprocessing stages they extracted the edge lines and finally, using a search graph, extracted the rectangular buildings. The disadvantage of this paper is its long processing time, which requires powerful hardware. Also, [56] proposed an automatic method for extracting rectangular and circular buildings from high-resolution optical images using the Hough transform, a support vector machine classifier, and perceptual grouping.
In that study, to overcome the binary nature of SVM classification, the 3D images were reduced in dimensionality, which also causes the loss of many of the main image features.
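The IHS-style fusion idea described above can be illustrated with a small sketch. The following simplified, hypothetical NumPy example (not the algorithm of [22]) replaces the intensity component of a low-resolution RGB image with a statistically matched high-resolution panchromatic band and inverts the transform additively:

```python
import numpy as np

def ihs_pansharpen(rgb, pan):
    """Simplified IHS-style fusion: swap the RGB intensity component for a
    statistically matched high-resolution panchromatic band, then invert."""
    intensity = rgb.mean(axis=2)                 # forward step: I = (R+G+B)/3
    # Match the pan band's mean and spread to the intensity band
    pan_adj = (pan - pan.mean()) * (intensity.std() / pan.std()) + intensity.mean()
    delta = pan_adj - intensity                  # spatial detail to inject
    return rgb + delta[..., None]                # additive inverse transform

# Toy data: a smooth low-resolution RGB patch and a sharper pan band
base = np.linspace(0.3, 0.6, 16).reshape(4, 4)
rgb = np.stack([base, 0.9 * base, 1.1 * base], axis=2)
pan = np.linspace(0.2, 0.8, 16).reshape(4, 4)

fused = ihs_pansharpen(rgb, pan)
print(fused.shape)   # (4, 4, 3); global mean brightness is preserved
```

The statistical matching step is what keeps the fused image's overall brightness consistent with the original multispectral data while importing the pan band's spatial detail.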

In many studies, various types of features, such as the normalized difference vegetation index (NDVI) to distinguish buildings from trees and elevation features to distinguish elevated from non-elevated objects, have been defined and studied, and research has focused mainly on creating feature spaces. For example, [37] used a Gini-based classification tree to identify buildings from aerial imagery and LiDAR data. In this method, the DSM resulting from the last return of the LiDAR data was first divided into ground and elevated parts. The buildings and trees were then separated using different combinations of features extracted from the DSMs derived from the first and last LiDAR returns and from the aerial imagery. The accuracy obtained with this model is 89.97%, which shows its weakness. [53] used local spatial correlation features and morphological methods to distinguish built-up areas from the background. The main weaknesses of the proposed method are its dependence on the choice of threshold and its requirement for a region with strong scatterers. In addition, building identification and extraction techniques can be divided into low-level and high-level techniques [25]. Low-level techniques are mainly based on edge detection and image feature extraction, followed by processes of defining rules and hypotheses to identify buildings. These methods have the advantage of a relatively simple design and low computational cost but are not reliable due to their inherent methodological limitations. In contrast, high-level techniques seek to mimic the human cognition process and decision-making skills based on information analysis. Most high-level building detection methods are based on image classification, and classification usually depends on the type and number of features used [2].
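A common rule of this kind combines a vegetation index with a normalized height layer: pixels that are both elevated and non-vegetated are building candidates, while elevated vegetated pixels are trees. A minimal sketch with illustrative, made-up band values and thresholds:

```python
import numpy as np

# Synthetic per-pixel values for a 2x2 patch: red and near-infrared
# reflectance, plus normalised height (nDSM, metres above ground).
red  = np.array([[0.30, 0.28], [0.05, 0.25]])
nir  = np.array([[0.35, 0.30], [0.60, 0.27]])
ndsm = np.array([[7.0,  0.2 ], [6.5,  0.1 ]])

ndvi = (nir - red) / (nir + red)   # vegetation index in [-1, 1]
is_high = ndsm > 2.5               # elevated objects (buildings, trees)
is_veg  = ndvi > 0.3               # vegetated pixels (trees, grass)

building = is_high & ~is_veg       # elevated and non-vegetated
tree     = is_high & is_veg        # elevated and vegetated
print(building)
print(tree)
```

Here the top-left pixel (high, low NDVI) is labelled building and the bottom-left pixel (high, high NDVI) is labelled tree; the two low pixels are neither. Real studies tune the thresholds per scene and per sensor.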

[60] proposed a method based on baseline and edge-region information using high-resolution PolSAR data. The most important weakness of this article is that it does not remove noise from the SAR images used. Continuing the above research, [50] examined the performance of different classification trees for identifying buildings from a combination of aerial imagery and LiDAR data. In that research, they produced features that could be extracted from the data, including spectral and textural image features, LiDAR intensity, and normalized elevation data. This paper has all the requirements of a suitable article for this field, because in the first step, after filtering, it removes noise, and it applies different classification approaches to images obtained from four different regions; its only drawback is the average accuracy of 95%, which could be increased. [42] systematically examined the LiDAR ground-filtering algorithms used in the process of creating digital elevation models. That paper discusses filtering methods for a variety of different terrain features, along with criteria for site selection, accuracy assessment, and algorithm classification. The review examines three categories of cases that challenge current ground-filtering algorithms: surfaces with rough terrain or discontinuous slopes, dense forest areas where laser beams cannot penetrate, and regions with low vegetation that is often missed by ground filters. In the method proposed by [24], maximum likelihood classification was used to separate building points from tall vegetation, with training data prepared from the classified LiDAR data. Finally, to improve the final classification, rules extracted from the LiDAR data were applied and the classification results were refined in post-processing.
That study examined automatic classification by integrating airborne LiDAR data and aerial imagery in three steps, but it unfortunately ignores the filtering and noise-removal phase. [63] evaluated progress in integrating LiDAR point clouds and optical images in photogrammetry and remote sensing. Their paper provides a systematic overview of the latest integration methodologies used in various applications such as registration, true orthophoto generation, pan-sharpening, classification, detection of key targets, three-dimensional reconstruction, and change detection. The detection accuracy rate is 90.25%, which is low and needs to be improved. [4], using the concept of levels of detail (LODs) in point clouds, proposed an approach to classify point clouds at LOD0 (the lowest possible level of detail) using airborne LiDAR data and machine learning techniques to distinguish between urban and non-urban buildings. The accuracy obtained in all tests is about 90%, and the Cohen's kappa index reached 81% at best, which makes the results of this article not very satisfactory. [8] confirmed the effectiveness of the hyperspectral Compact Airborne Spectrographic Imager (CASI) method in an observation area in the middle reaches of the Heihe River in China. By combining features of the CASI data with LiDAR data, various features were extracted for data fusion and classification of terrestrial objects around the river. The overall classification accuracy of the proposed hierarchical-fusion residual network reached 97.89%, which is 10.13% and 5.68% higher than the results of a CNN and a Deep Residual Network (DRN), respectively. But the most important weakness of this paper is that it does not consider the spatial-spectral and textural features of the CASI hyperspectral images together with the airborne LiDAR data.
These complementary features can provide richer and more accurate information than individual features for classifying terrestrial objects and can therefore perform better in a remote sensing method. [41] combined airborne LiDAR point clouds and aerial imagery to evaluate heterogeneous urban maps. The proposed method used three machine learning algorithms, namely ML (Maximum Likelihood), SVM (Support Vector Machines), and MLP (Multilayer Perceptron neural network), to classify the LiDAR point clouds of a residential urban area after georeferencing them to the aerial photographs. Despite the high 97% classification accuracy achieved in this paper, incorrect classifications between different classes occurred due to the independent acquisition of the aerial and LiDAR data, as well as problems with shadowing and the imperfect correction of the aerial images.

According to the studies reviewed in this section, it is clear that machine learning methods and LiDAR point clouds have performed well across research from the past to the present. Also, the use of noise-removal techniques in the preprocessing stage, the consideration of the spatial-spectral and textural features of hyperspectral images alongside LiDAR data, and the selection of optimal features can be identified as existing research gaps; addressing them can both increase accuracy and help ensure the reliability of the results. Therefore, the next section examines the LiDAR approach.

3. LiDAR Technology and its Applications

During the 1930s and 1940s, researchers such as [20, 21] and [61] were the first to introduce the basic principles of using searchlight beams to measure air density in the upper atmosphere, principally for meteorological purposes. In 1938, light pulses were first used to measure cloud base heights. Middleton and Spilhaus (1953) coined the term "lidar" for this measurement method. [44] invented the laser, paving the way for the combination of laser and LiDAR and revolutionizing active optical remote sensing forever. [40] then developed the giant-pulse, or Q-switched, laser, which was used the following year by [15] to detect scattering layers in the upper atmosphere. Following these developments, the first book on LiDAR appeared, edited by [17]. Since then, LiDAR has been at the forefront of optical research [5].

Recently, LiDAR imaging technology has been used in many different fields. Over time, LiDAR system design has also improved significantly, resulting in very low size, weight, power, and cost (SWaP-C) designs. Owing to these savings in weight and energy, LiDAR's role in aerospace and mobile operating systems has grown, facilitating mapping and obstacle avoidance tasks that were traditionally challenging. LiDAR systems can collect spatial information in one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D) modes with the help of optical deflection systems. LiDAR architecture is defined as "the art of LiDAR instrumentation on LiDAR hardware and software". As shown in Figure 1, a fully functional LiDAR system consists of four main subsystems: the laser rangefinder, beam deflection, power management, and main controller units [48].

The output of a LiDAR system is a cloud of points in a three-dimensional reference system, and each point carries information about the surface it was reflected from [35]. In mapping applications, the samples include bare ground as well as features such as buildings and trees. Pre-processing, or filtering, of LiDAR data is performed to separate bare-ground samples from elevated (off-ground) samples. Filtering is a fundamental process because the quality of the filtered data directly affects the modeling quality [45]. Therefore, when producing a digital terrain model (DTM) from LiDAR data, selecting the appropriate filtering method is much more critical than the DTM generation method itself [10].
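As a deliberately minimal illustration of this filtering step (a sketch of the block-minimum idea, not any of the published algorithms), the following labels a return as ground when it lies close to the lowest return in its grid cell:

```python
import numpy as np
from collections import defaultdict

def ground_filter(points, cell=5.0, h_thresh=0.5):
    """Label a point as ground (True) when its height is within h_thresh
    of the lowest return in its grid cell -- a minimal block-minimum filter."""
    keys = np.floor(points[:, :2] / cell).astype(int)   # assign each point to a cell
    cells = defaultdict(list)
    for i, key in enumerate(map(tuple, keys)):
        cells[key].append(i)
    labels = np.zeros(len(points), dtype=bool)
    for idx in cells.values():
        zmin = points[idx, 2].min()                     # lowest return in the cell
        for i in idx:
            labels[i] = points[i, 2] - zmin <= h_thresh
    return labels

# Three returns in the same 5 m cell: bare earth (z = 100.0 m),
# a roof (z = 106.0 m), and a low shrub (z = 100.3 m)
pts = np.array([[1.0, 1.0, 100.0],
                [2.0, 2.0, 106.0],
                [3.0, 1.0, 100.3]])
print(ground_filter(pts))   # roof rejected; ground and shrub kept
```

Production filters (progressive morphological, TIN densification, cloth simulation) refine this idea to handle slopes and discontinuities, which is precisely where the review cited above reports the remaining difficulties.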

LiDAR is an object-level data collection technique based on laser distance measurement. The system was first used around 1970 by NASA and other American, Canadian, and Australian organizations (Figure 2). The vector L is the position vector of the laser transmission center, which is determined by GPS observations. The laser rangefinder also measures the range vector r. The vector P, which contains the coordinates of the ground points, can then be calculated from r and L [59].
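In essence, the ground coordinate is the vector sum P = L + r. A small numeric sketch (all values hypothetical, chosen only for illustration):

```python
import numpy as np

# L: laser-firing position from GPS/INS (map coordinates, metres)
L = np.array([480000.0, 6090000.0, 1200.0])

# r: range vector = measured range times the unit pointing direction
rng = 850.0
direction = np.array([0.1, -0.2, -0.9747])   # mostly downward-looking pulse
direction = direction / np.linalg.norm(direction)
r = rng * direction

P = L + r        # coordinates of the illuminated ground point
print(np.round(P, 1))
```

In a real system the pointing direction comes from the scanner angle combined with the platform's IMU attitude, so the accuracy of P depends on the GPS, IMU, and ranging errors together.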

LiDAR data has a regular grid structure, and the distance between grid points is 1 meter; the images associated with the LiDAR data are 692 × 472 pixels. LiDAR directs intense laser light at the target, and its wavelength can lie in three regions: ultraviolet, visible, and infrared. The pulse generated by the laser hits the target; the returned wave is then collected by the telescope system and the LiDAR sensor and analyzed relative to the originally emitted wave [36], as shown in Figure 3.

The Digital Surface Model (DSM) needs to be extended and densified for more complex activities; this is precisely the case for three-dimensional building models. The uses of three-dimensional building models are very diverse: visualization, community planning, environmental monitoring, and electromagnetic wave propagation modeling for telecommunications applications. Such reconstructions can be produced quickly, since classifying the raw point cloud need not depend on complex algorithms. Finally, buildings are reconstructed to footprint positions, and the shapes of those footprints can be used to measure the exterior walls of the buildings correctly. Multiple data sets are an ideal way to manage dynamic functions [12].

Automated cadastral maps and 3D land plans are also easy to access, but alternative data sources, such as digital ground plans, are not available for every zone. Data enabling the processing of elevation information can also be obsolete or may come from untrusted sources. The positioning of buildings on maps is likewise uncertain because of map inaccuracy and generalization; although these errors are typically no greater than 0.5 m, they influence the accuracy and dimensions of the resulting diagrams [54]. The initial type of map is therefore essential. There are three stages in scan-to-BIM modelling [33]:

  1. Modelling the geometry of the components,
  2. Assigning object categories and material properties to them, and
  3. Establishing relationships between components.

[3] have described several issues that arise when using laser scanner data to create BIM. First, the size of the smallest modelled object is determined by the point cloud density, which describes the average spacing between points; the density must be chosen based on the level of detail specified in the project requirements. Second, as point clouds are the source of information for model generation, measurement discrepancies may affect the derived geometry. Measurement errors are attributed, in particular, to the characteristics of the scanned material [58] and the acquisition geometry, including the laser beam width, the distance to the object, the angle of incidence, and the scanned surface. Third, although laser scanning can cover extensive areas, the occlusion of objects inhibits acquisition of the entire region; this is especially evident in point clouds of heavily congested spaces. However, specific occlusions can be circumvented by optimizing the locations of successive laser scanner stations, in particular where appliances and furniture occupy the indoor area [55].

3.1. Future of LiDAR

Based on the studies performed and the desirable features of LiDAR technology, efforts are being made to use this technology in combination with drones in order to image and map places that are inaccessible or where the global positioning system is unavailable.

Today, the use of laser scanning systems installed in cars with the aim of measuring road and urban environments is very widespread. Multi-platform systems extend the use of Mobile Laser Scanning in natural environments, industrial facilities, and urban environments that are not easily accessible by a system installed on a vehicle. With the development of algorithms that enable simultaneous localization and mapping (SLAM), mobile laser scanning has also been developed to provide 3D data from environments deprived of the Global Navigation System (GNSS), indoor locations, and industrial sites [34].

In this regard, the most important challenges in applying these sensor technologies are significant reductions in size and price. Nevertheless, the efficiency and accuracy of providing precise 3D information in tunnels, on roads, and at urban and industrial sites have improved. While some industrial scanners were unable to synchronize with an external positioning system a few years ago, integrating current sensors into multi-sensor platforms is now usually much easier, and this challenge has largely been overcome. Small size and easy integration allow systems to adapt to a variety of 3D measurement needs. MLS has already been installed on cars, trains, all-terrain vehicles (ATVs), boats, and tractors, and no doubt new applications using kinematic data collection will emerge in the future. With the development of science and technology, small sensors such as the RIEGL miniVUX-1UAV and Velodyne Puck LITE among conventional scanners, and the Cepton SORA200 among solid-state scanners, are available for UAS-LiDAR applications, depending on the scale of the drone [34].

However, today's world is moving towards automated systems and real-time data processing. Longer operating times for drones are being achieved through improvements in airframes and battery life and through novel concepts such as the Avartek Boxer hybrid drone, with a flight time of 2-4 hours. Small but high-performance sensors and real-time data are the most important requirements for drones, and project constraints do not always require a GNSS-IMU: data can be processed in a local coordinate system using techniques common in the robotics community. However, much smaller and more powerful GNSS-IMUs, such as the NovAtel CPT7 or SBG Ellipse2-D, are available, and as prices fall, direct georeferencing will reduce the ground-control effort [34].

LiDAR is a real-time sensing technology for remote sensing applications. LiDAR measurement is based on the principle that the coordinates of each point on the ground can be calculated by specifying the coordinates of the laser transmission point, measuring the distance between the pulse transmission point and the ground surface, and measuring the transmission angle of the wave from the pulse transmission point to the ground surface. As the technology advances, it is used for various applications such as hydrography, forestry, urban development, and photogrammetry and remote sensing. Owing to significant advantages, such as high accuracy and precision, the ability to survey ground with different vegetation cover, vertical accuracy better than 15 cm, and a high density of scanned points per square meter, high-resolution LiDAR devices will tend to be used even more in the future than in the past. Among the most important factors that can affect the accuracy and performance of LiDAR data are high winds, wet snow, rain, fog, humid weather, and the presence of clouds at low altitudes; new LiDAR devices are trying to overcome these challenges.

4. Conclusions

In this article, considering the importance of identifying and recognizing buildings due to the variety in size, shape, design, and location of urban planning and urban planning points, we examine the different methods used to identify buildings in urban spaces using different air. Images and LiDAR were paid. Remote sensing satellites have several limitations; therefore, air and ground remote sensing platforms and sensors are needed to cover time and space distances for comprehensive snow cover research. Optical Detection and Targeting Antennas (LiDAR) are a group of active remote sensing sensors and can be easily deployed on all three platforms: satellite, aerial, and ground. Generating altitude data for glacial lands and snow cover from photogrammetry requires high contrast of different reflective surfaces (ice, snow, snow ice and watery snow). Conventional optical remote sensing sensors do not provide the required accuracy, especially due to the lack of access to valid control points. However, active LiDAR sensors can fill this research gap and provide high-quality, accurate digital elevation models (DEMs). Due to the obvious advantages of LiDAR over conventional passive remote sensing sensors, the number of LiDAR-based snow cover studies has increased in recent years. Accordingly, by studying more than 70 articles from a long time ago, the methods used in various articles were evaluated, which showed that the use of the machine learning method can have a high impact on the accuracy and speed of building detection. On the other hand, due to various challenges in the field of data processing and mapping of different places, the use of LiDAR technology using cloud metropolises was identified as a suitable solution in this field. In general, in optical imaging, only amplitude information is recorded, but other useful information is also recorded by active sensors such as LIDAR. 
Although challenges related to sensor size and cost remain unresolved, this technology has attracted the attention of many researchers. Looking to the future, it is expected that LiDAR will be deployed on drones and UAVs, enabling imaging and mapping of indoor and otherwise inaccessible places.

Acknowledgments

We would like to express our gratitude to anonymous reviewers for their valuable comments and feedback on the manuscript. The views expressed in this paper are the authors' and not necessarily the views of their organisation.

References

  1. Ahmadi, S., Zoej, M. V., Ebadi, H., Moghaddam, H. A., & Mohammadzadeh, A. (2010). Automatic urban building boundary extraction from high-resolution aerial images using an innovative model of active contours. International Journal of Applied Earth Observation and Geoinformation, 12(3), 150-157.[CrossRef]
  2. Aladeemy, M., Tutun, S., & Khasawneh, M. T. (2017). A new hybrid approach for feature selection and support vector machine model selection based on self-adaptive cohort intelligence. Expert Systems with Applications, 88, 118-131.[CrossRef]
  3. Anil, E.B., Tang, P., Akinci, B. and Huber, D., 2013. Deviation analysis method for the assessment of the quality of the as-is Building Information Models generated from point cloud data. Automation in Construction, 35, pp.507-516.[CrossRef]
  4. Balado, J., Díaz-Vilariño, L., Arias, P., & González-Desantos, L. M. (2018). Automatic LOD0 classification of airborne LiDAR data in urban and non-urban areas. European Journal of Remote Sensing, 51(1), 978-990.[CrossRef]
  5. Bhardwaj, A., Sam, L., Bhardwaj, A., & Martín-Torres, F. J. (2016). LiDAR remote sensing of the cryosphere: Present applications and future prospects. Remote Sensing of Environment, 177, 125-143.[CrossRef]
  6. Cao, Y., Su, C., & Liang, J. (2012, October). High-resolution SAR building detection with scene context priming. In 2012 IEEE 11th International Conference on Signal Processing (Vol. 3, pp. 1791-1794). IEEE.[CrossRef]
  7. Chai, D. (2016). A probabilistic framework for building extraction from airborne color image and DSM. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 10(3), 948-959.[CrossRef]
  8. Chang, Z., Yu, H., Zhang, Y., & Wang, K. (2020). Fusion of hyperspectral CASI and airborne LiDAR data for ground object classification through the residual network. Sensors, 20(14), 3961.[CrossRef] [PubMed]
  9. Chen, L. C., Teo, T. A., Shao, Y. C., Lai, Y. C., & Rau, J. Y. (2004). Fusion of LIDAR data and optical imagery for building modeling. International Archives of Photogrammetry and Remote Sensing, 35(B4), 732-737.
  10. Chen, Z., Gao, B. and Devereux, B., 2017. State-of-the-art: DTM generation using airborne LIDAR data. Sensors, 17(1), p.150.[CrossRef] [PubMed]
  11. Dell'Acqua, F., & Gamba, P. (2003). Texture-based characterization of urban environments on satellite SAR images. IEEE Transactions on Geoscience and Remote Sensing, 41(1), 153-159.[CrossRef]
  12. Dibs, H., Al-Hedny, S. and Karkoosh, H.A., 2018. Extracting Detailed Buildings 3D Model with Using High-Resolution Satellite Imagery by Remote Sensing and GIS Analysis; Al-Qasim Green University a Case Study. International Journal of Civil Engineering and Technology, 9(7), pp.1097-1108.
  13. Dong, Y., Chen, H., Yu, D., Pan, Y., & Zhang, J. (2011). Building extraction from high-resolution SAR imagery in urban areas. Geo-spatial Information Science, 14(3), 164.[CrossRef]
  14. Dursun, S., Sagir, D., Büyüksalih, G., Buhur, S., Kersten, T. and Jacobsen, K., 2008, June. 3D city modeling of Istanbul historic peninsula by a combination of aerial images and terrestrial laser scanning data. In 4th EARSel Workshop on Remote Sensing for Developing Countries/GISDECO (Vol. 8, pp. 4-7).
  15. Johnson, E. A., Meyer, R. C., Hopkins, R. E., & Mock, W. H. (1939). The measurement of light scattered by the upper atmosphere from a search-light beam. Journal of the Optical Society of America, 29, 512-517.[CrossRef]
  16. Synge, E. H. (1930). XCI. A method of investigating the higher atmosphere. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 9(60), 1014-1020.[CrossRef]
  17. Hinkley, E. D. (Ed.). (1976). Laser Monitoring of the Atmosphere. Springer, Berlin.[CrossRef]
  18. Fiocco, G., & Thompson, E. (1963). Physical Review Letters, 10, 89.[CrossRef]
  19. Forghani, A., 1998. A Knowledge-Based Approach to Mapping Roads from Aerial Imagery Using a GIS Database. Ph.D. Dissertation, Surveying, and Spatial Information Science, the University of Tasmania, November 1998, Hobart, Australia, pp.1-300.
  20. Forghani, A., Kazemi, S., and D., Bruce, 2021. A Machine-Learning Approach to Generalization of GIS Data. International Journal of Geoinformatics, Vol. 17, No. 2, April 2021. pp. 41-59.[CrossRef]
  21. Forlani, G., Nardinocchi, C., Scaioni, M., & Zingaretti, P. (2006). A complete classification of raw LIDAR data and 3D reconstruction of buildings. Pattern analysis and applications, 8(4), 357-374.[CrossRef]
  22. Ghanbari, Z., & Sahebi, M. R. (2014). Improved IHS algorithm for fusing high-resolution satellite images of urban areas. Journal of the Indian Society of Remote Sensing, 42(4), 689-699.[CrossRef]
  23. Ghanea, M., Moallem, P., & Momeni, M. (2014). Automatic building extraction in dense urban areas through GeoEye multispectral imagery. International journal of remote sensing, 35(13), 5094-5119.[CrossRef]
  24. Guan, H., Ji, Z., Zhong, L., Li, J., & Ren, Q. (2013). Partially supervised hierarchical classification for urban features from lidar data with aerial imagery. International Journal of Remote Sensing, 34(1), 190-210.[CrossRef]
  25. Hermosilla, T., Ruiz, L. A., Recio, J. A., & Estornell, J. (2011). Evaluation of automatic building detection approaches combining high-resolution images and LiDAR data. Remote Sensing, 3(6), 1188-1210.[CrossRef]
  26. Ji, S., Wei, S., & Lu, M. (2019). A scale robust convolutional neural network for automatic building extraction from aerial and satellite imagery. International journal of remote sensing, 40(9), 3308-3322.[CrossRef]
  27. Jin, X., & Davis, C. H. (2005). Automated building extraction from high-resolution satellite imagery in urban areas using structural, contextual, and spectral information. EURASIP Journal on Advances in Signal Processing, 2005(14), 1-11.[CrossRef]
  28. Kazemi, S., and A. Forghani, 2016. Knowledge–Base Generalization of Road Networks, International Journal of Geoinformatics, Vol. 12, No. 1, March 2016, pp. 1-13.
  29. Kazemi, S., and A., Forghani, 2016. Knowledge-based generalization of Spatial Data. LAP LAMBERT Academic Publishing, AV Akademikerverlag GmbH & Co. KG, Heinrich-Böcking-Straße 6-8, 66121 Saarbrücken, Germany, pp. 1-320.
  30. Khoshelham, K., Li, Z., & King, B. (2005). A split-and-merge technique for automated reconstruction of roof planes. Photogrammetric Engineering & Remote Sensing, 71(7), 855-862.[CrossRef]
  31. Khoshelham, K., Nardinocchi, C., Frontoni, E., Mancini, A., & Zingaretti, P. (2010). Performance evaluation of automated approaches to building detection in multi-source aerial data. ISPRS Journal of Photogrammetry and Remote Sensing, 65(1), 123-133.[CrossRef]
  32. Kim, D. J., & Manjusha, P. L. (2017). Building detection in high resolution remotely sensed images based on automatic histogram-based fuzzy c-means algorithm. Asia-pacific Journal of Convergent Research Interchange, 3(1), 57-62.[CrossRef]
  33. Kim, M.K., Cheng, J.C., Sohn, H. and Chang, C.C., 2015. A framework for dimensional and surface quality assessment of precast concrete elements using BIM and 3D laser scanning. Automation in Construction, 49, pp.225-238.[CrossRef]
  34. Kukko, A., Kaartinen, H., & Hyyppä, J. (2019). Technologies for the Future: A Lidar Overview: Building the Capability for High-density 3D Data. GIM International.
  35. Lindsay, J.B. and Dhun, K., 2015. Modeling surface drainage patterns in altered landscapes using LiDAR. International Journal of Geographical Information Science, 29(3), pp.397-411.[CrossRef]
  36. Lohani, B. and Ghosh, S., 2017. Airborne LiDAR technology: a review of data collection and processing systems. Proceedings of the National Academy of Sciences, India Section A: Physical Sciences, 87(4), pp.567-579.[CrossRef]
  37. Matikainen, L., Kaartinen, H., & Hyyppä, J. (2007). Classification tree-based building detection from the laser scanner and aerial image data. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 36(Part 3), W52.
  38. Matsuyama, T. (1987). Knowledge-based aerial image understanding systems and expert systems for image processing. IEEE Transactions on Geoscience and Remote Sensing, (3), 305-316.[CrossRef]
  39. Mayunga, S. D., Coleman, D. J., & Zhang, Y. (2007). A semi‐automated approach for extracting buildings from QuickBird imagery applied to informal settlement mapping. International Journal of Remote Sensing, 28(10), 2343-2357.[CrossRef]
  40. McKeown Jr, D. M., & Harvey, W. A. (1987, June). Automating knowledge acquisition for aerial image interpretation. In Image Understanding and the Man-Machine Interface (Vol. 758, pp. 144-164). International Society for Optics and Photonics.[CrossRef]
  41. Megahed, Y., Shaker, A., & Yan, W. Y. (2021). Fusion of Airborne LiDAR Point Clouds and Aerial Images for Heterogeneous Land-Use Urban Mapping. Remote Sensing, 13(4), 814.[CrossRef]
  42. Meng, X., Currit, N., & Zhao, K. (2010). Ground filtering algorithms for airborne LiDAR data: A review of critical issues. Remote Sensing, 2(3), 833-860.[CrossRef]
  43. Middleton, W. E. K., & Spilhaus, A. F. (1953). Meteorological Instruments. University of Toronto Press, Toronto, Canada, p. 208.
  44. Maiman, T. H. (1960). Stimulated optical radiation in ruby. Nature 187, 493–494.[CrossRef]
  45. Mongus, D., Lukač, N. and Žalik, B., 2014. Ground and building extraction from LiDAR data based on differential morphological profiles and locally fitted surfaces. ISPRS Journal of Photogrammetry and Remote Sensing, 93, pp.145-156.[CrossRef]
  46. Peng, J., Zhang, D., & Liu, Y. (2005). An improved snake model for building detection from urban aerial images. Pattern Recognition Letters, 26(5), 587-595.[CrossRef]
  47. Poulain, V., Inglada, J., Spigai, M., Tourneret, J. Y., & Marthon, P. (2011). High-resolution optical and SAR image fusion for building database updating. IEEE Transactions on Geoscience and Remote Sensing, 49(8), 2900-2910.[CrossRef]
  48. Raj, T., Hashim, F. H., Huddin, A. B., Ibrahim, M. F., & Hussain, A. (2020). A Survey on LiDAR Scanning Mechanisms. Electronics, 9(5), 741.[CrossRef]
  49. Rottensteiner, F., Trinder, J., Clode, S., & Kubik, K. (2007). Building detection by fusion of airborne laser scanner data and multi-spectral images: Performance evaluation and sensitivity analysis. ISPRS Journal of Photogrammetry and Remote Sensing, 62(2), 135-149.[CrossRef]
  50. Salah, M., Trinder, J. C., & Shaker, A. (2011). Performance evaluation of classification trees for building detection from aerial images and LiDAR data: a comparison of classification trees models. International journal of remote sensing, 32(20), 5757-5783.[CrossRef]
  51. Shackelford, A. K., & Davis, C. H. (2003). A combined fuzzy pixel-based and object-based approach for classification of high-resolution multispectral data over urban areas. IEEE Transactions on GeoScience and Remote sensing, 41(10), 2354-2363.[CrossRef]
  52. Sportouche, H., Tupin, F., & Denise, L. (2011). Extraction and three-dimensional reconstruction of isolated buildings in urban scenes from high-resolution optical and SAR spaceborne images. IEEE transactions on Geoscience and remote sensing, 49(10), 3932-3946.[CrossRef]
  53. Stasolla, M., & Gamba, P. (2008). Spatial indexes for the extraction of formal and informal human settlements from high-resolution SAR images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 1(2), 98-106.[CrossRef]
  54. Suveg, I. and Vosselman, G., 2004. Reconstruction of 3D building models from aerial images and maps. ISPRS Journal of Photogrammetry and remote sensing, 58(3-4), pp.202-224.[CrossRef]
  55. Thomson, C. and Boehm, J., 2015. Automatic geometry generation from point clouds for BIM. Remote Sensing, 7(9), pp.11753-11775.[CrossRef]
  56. Turker, M., & Koc-San, D. (2015). Building extraction from high-resolution optical spaceborne images using the integration of support vector machine (SVM) classification, Hough transformation, and perceptual grouping. International Journal of Applied Earth Observation and Geoinformation, 34, 58-69.[CrossRef]
  57. Turlapaty, A., Gokaraju, B., Du, Q., Younan, N. H., & Aanstoos, J. V. (2012). A hybrid approach for building extraction from spaceborne multi-angular optical imagery. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 5(1), 89-100.[CrossRef]
  58. Wang, J., Yang, X., Qin, X., Ye, X., & Qin, Q. (2014). An efficient approach for automatic rectangular building extraction from very high-resolution optical satellite imagery. IEEE Geoscience and Remote Sensing Letters, 12(3), 487-491.[CrossRef]
  59. Wang, R., Peethambaran, J. and Chen, D., 2018. LiDAR point clouds to 3-D urban models: a review. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 11(2), pp.606-627.[CrossRef]
  60. Wang, Y., Tupin, F., Han, C., & Nicolas, J. M. (2008, July). Building detection from high-resolution PolSAR data by combining region and edge information. In IGARSS 2008-2008 IEEE International Geoscience and Remote Sensing Symposium (Vol. 4, pp. IV-153). IEEE.[CrossRef]
  61. Waldram, J. M. (1945). Transactions of the Illuminating Engineering Society, 10, 147-188.[CrossRef]
  62. Weidner, U., & Förstner, W. (1995). Towards automatic building extraction from high-resolution digital elevation models. ISPRS Journal of Photogrammetry and Remote Sensing, 50(4), 38-49.[CrossRef]
  63. Zhang, J., & Lin, X. (2017). Advances in the fusion of optical imagery and LiDAR point cloud applied to photogrammetry and remote sensing. International Journal of Image and Data Fusion, 8(1), 1-31.[CrossRef]
  64. Zhao, J., Zhu, Q., Du, Z., Feng, T., & Zhang, Y. (2012). Mathematical morphology-based generalization of complex 3D building models incorporating semantic relationships. ISPRS Journal of Photogrammetry and Remote Sensing, 68, 95-111.[CrossRef]
  65. Zhao, L., Zhou, X., & Kuang, G. (2013). Building detection from urban SAR image using building characteristics and contextual information. EURASIP Journal on Advances in Signal Processing, 2013(1), 1-16.[CrossRef]

Cite This Article

APA Style
Harouni, O. , & Forghani, A. (2022). A Review of Application of LiDAR and Geospatial Modeling for Detection of Buildings Using Artificial Intelligence Approaches. World Journal of Geomatics and Geosciences, 2(1), 47-59. https://doi.org/10.31586/wjgg.2022.477