Research Article

Performance Evaluation of Jaccard-Dice Coefficient on Building Segmentation from High Resolution Satellite Images

Year 2023, Volume 11, Issue 1, 100-106, 30.01.2023
https://doi.org/10.17694/bajece.1212563

Abstract

In remote sensing applications, segmenting an input satellite image according to its semantic content and assigning each pixel a semantic category from a given set of labels are of great importance for automated monitoring tasks. This is relevant to applications such as building detection from high-resolution satellite images, city planning, environmental preparation, and disaster management. Because buildings in metropolitan areas are dense and irregular, their detection from high-resolution satellite imagery needs to be automated. Segmentation of remote sensing images with deep learning has therefore become a widely studied research area. The Fully Convolutional Network (FCN), a popular segmentation model, is used for pixel-level building detection from satellite images. In this study, the U-Net model, originally developed for biomedical image segmentation, was modified, and its training, accuracy, and test performance were compared using customized loss functions based on the Dice Coefficient and the Jaccard Index. A Dice Coefficient loss score of 84% and a Jaccard Index loss score of 70% were obtained. In addition, using the Batch Normalization (BN) method instead of Dropout in the model increased the Dice Coefficient loss score from 84% to 87%.
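The Dice Coefficient and Jaccard Index named above are overlap measures between a predicted building mask and its ground-truth mask; used as losses, they are typically minimized as one minus the corresponding score. The sketch below shows how such customized losses are commonly defined in Keras/TensorFlow (the framework cited in [25]); it is an illustrative reconstruction under stated assumptions, not the authors' exact implementation, and the smoothing constant is an assumption added to avoid division by zero on empty masks.

import tensorflow as tf

def dice_coefficient(y_true, y_pred, smooth=1.0):
    # 2*|A ∩ B| / (|A| + |B|) computed over the flattened masks
    y_true_f = tf.reshape(y_true, [-1])
    y_pred_f = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth)

def jaccard_index(y_true, y_pred, smooth=1.0):
    # |A ∩ B| / |A ∪ B|, with the union expanded as |A| + |B| - |A ∩ B|
    y_true_f = tf.reshape(y_true, [-1])
    y_pred_f = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    union = tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) - intersection
    return (intersection + smooth) / (union + smooth)

def dice_loss(y_true, y_pred):
    return 1.0 - dice_coefficient(y_true, y_pred)

def jaccard_loss(y_true, y_pred):
    return 1.0 - jaccard_index(y_true, y_pred)

# Hypothetical usage with a compiled U-Net-style segmentation model:
# model.compile(optimizer="adam", loss=dice_loss,
#               metrics=[dice_coefficient, jaccard_index])

The Dropout-versus-Batch-Normalization comparison reported in the abstract would, in this setting, correspond to replacing keras.layers.Dropout with keras.layers.BatchNormalization inside the U-Net encoder and decoder blocks.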

References

  • [1] Q. Han, Q. Yin, X. Zheng, Z. Chen, “Remote sensing image building detection method based on Mask R-CNN.” Complex & Intelligent Systems, 8(3), 1847-1855, 2022.
  • [2] M. Ataş, “Fıstık sınıflandırma sistemi için Siirt fıstığı imgelerinden gürbüz özniteliklerin çıkarılması [Extraction of robust features from Siirt pistachio images for a pistachio classification system].” Dicle Üniversitesi Mühendislik Fakültesi Mühendislik Dergisi, 7(1), 93-102, 2016.
  • [3] E. Acar, “Detection of unregistered electric distribution transformers in agricultural fields with the aid of Sentinel-1 SAR images by machine learning approaches.” Computers and Electronics in Agriculture, 175, 105559, 2020.
  • [4] A. D. Yetis, M. I. Yesilnacar, M. Atas, “A machine learning approach to dental fluorosis classification.” Arabian Journal of Geosciences, 14(2):1-12, 2021.
  • [5] M. Atas, Y. Dogan, İ. Atas, “Chess playing robotic arm.” In 2014 22nd Signal Processing and Communications Applications Conference (SIU) (pp. 1171-1174). IEEE, 2014.
  • [6] C. Özdemir, M. Ataş, A. B. Özer, “Classification of Turkish spam emails with artificial immune system.” 21st Signal Processing and Communications Applications Conference (SIU). IEEE, 2013.
  • [7] S. Ji, S. Wei, M. Lu, “Fully convolutional networks for multisource building extraction from an open aerial and satellite imagery dataset.” IEEE Transactions on Geoscience and Remote Sensing, 57(1), 574-586, 2018.
  • [8] Ç. Kaymak, A. Uçar, “Semantic Image Segmentation for Autonomous Driving Using Fully Convolutional Networks.” International Artificial Intelligence and Data Processing Symposium (IDAP), 2019. DOI: 10.1109/IDAP.2019.8875923.
  • [9] A. Valizadeh, M. Shariatee, “The Progress of Medical Image Semantic Segmentation Methods for Application in COVID-19 Detection.” Comput Intell Neurosci. 2021, DOI: 10.1155/2021/7265644.
  • [10] A. Mousavian, J. Kosecka, “Semantic Image Based Geolocation Given a Map.” arXiv preprint arXiv:1609.00278, 2016. DOI: 10.48550/arXiv.1609.00278.
  • [11] T. Anand, S. Sinha, M. Mandal, V. Chamola, F. R. Yu, “AgriSegNet: Deep aerial semantic segmentation framework for IoT-assisted precision agriculture.” IEEE Sensors Journal, 21(16), 17581-17590, 2021.
  • [12] W. Wu et al., “Building extraction from high resolution remote sensing imagery based on spatial-spectral method.”, Geomat Inf Sci Wuhan Univ 7:800–805, 2012.
  • [13] X. Huang et al., “Classification of high spatial resolution remotely sensed imagery based upon fusion of multiscale features and SVM.”, J Remote Sens 11:48–54, 2007.
  • [14] F. Xin, C. Shanxiong, “High-resolution remote sensing image building extraction in dense urban areas.” Bull Surv Mapp, 2019.
  • [15] H. Acar, M. S. Özerdem, E. Acar, “Soil moisture inversion via semiempirical and machine learning methods with full-polarization Radarsat-2 and polarimetric target decomposition data: A comparative study.” IEEE Access, 8, 197896-197907, 2020.
  • [16] W. Xu-dong, G. Jian-ming, J. Bai-jun et al., “Mixed-pixel classification of remote sensing images of cellular automata.”, J Surv Mapp 37(1):42–48, 2008.
  • [17] G. Wu, X. Shao, Z. Guo, Q. Chen, W. Yuan, X. Shi, et al. “Automatic building segmentation of aerial imagery using multi-constraint fully convolutional networks”, Remote Sensing, 10, p. 407, 2018.
  • [18] J. Yuan, “Learning building extraction in aerial scenes with convolutional networks.”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 40, pp. 2793-2798, 2017.
  • [19] Q. Chen, L. Wang, Y. Wu, G. Wu, Z. Guo, S. L. Waslander, “Aerial imagery for roof segmentation: A large-scale dataset towards automatic mapping of buildings.”, ISPRS Journal of Photogrammetry and Remote Sensing, 147, pp. 42-55, 2018.
  • [20] WHU Building Dataset. [Online]. Available: http://study.rsgis.whu.edu.cn/pages/download/
  • [21] O. Ronneberger, P. Fischer, T. Brox, “U-net: Convolutional networks for biomedical image segmentation.” International conference on medical image computing and computer-assisted intervention, (pp. 234–241). Springer, 2015.
  • [22] J. Long, E. Shelhamer, T. Darrell, “Fully Convolutional Networks for Semantic Segmentation”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3431-3440, 2015.
  • [23] N. Ketkar, “Stochastic gradient descent.”, In Deep learning with Python (pp. 113-132). Apress, Berkeley, CA, 2017.
  • [24] A. Radford, L. Metz, S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks.”, arXiv preprint arXiv:1511.06434, 2015.
  • [25] F. Chollet, "Keras: Deep learning library for theano and tensorflow", 2015, [online] Available: https://github.com/fchollet/keras.
  • [26] Google Colab. [Online]. Available: https://colab.research.google.com/, accessed 21 November 2022.
  • [27] G. Chhor, C. B. Aramburu, I. Bougdal-Lambert, “Satellite image segmentation for building detection using U-Net.” Web: http://cs229.stanford.edu/proj2017/final-reports/5243715.pdf, 2017.
  • [28] İ. Ataş, “Human gender prediction based on deep transfer learning from panoramic dental radiograph images.” Traitement du Signal, 39(5), 1585-1595, 2022. DOI:10.18280/ts.390515
  • [29] A. H. Murphy, "The Finley Affair: A Signal Event in the History of Forecast Verification." Weather and Forecasting. 11 (1): 3, 1996.
  • [30] P. Jaccard, “The Distribution of the Flora in the Alpine Zone.” New Phytologist, 11(2), 37-50, 1912. DOI: 10.1111/j.1469-8137.1912.tb05611.x.
  • [31] T. Sørensen, "A method of establishing groups of equal amplitude in plant sociology based on similarity of species and its application to analyses of the vegetation on Danish commons." Kongelige Danske Videnskabernes Selskab. 5 (4): 1–34, 1948.
  • [32] L. R. Dice, “Measures of the Amount of Ecologic Association Between Species.” Ecology, 26(3), 297-302, 1945. DOI: 10.2307/1932409.
  • [33] F. Milletari, N. Navab, S. A. Ahmadi, “V-Net: Fully convolutional neural networks for volumetric medical image segmentation.” In Proceedings of the Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25-28 October 2016, pp. 565-571.
  • [34] J. Zhang, et al., “Segmenting purple rapeseed leaves in the field from UAV RGB imagery using deep learning as an auxiliary means for nitrogen stress detection.” Remote Sensing, 12(9), 1403, 2020.
  • [35] J. Ma, et al., “Building Extraction of Aerial Images by a Global and Multi-Scale Encoder Decoder Network.” Remote Sens., 12, 2350, 2020.
  • [36] J. Lin, W. Jing, H. Song, G. Chen, “ESFNet: Efficient Network for Building Extraction from High-Resolution Aerial Images.” IEEE Access, 7, 54285–54294, 2019.
  • [37] V. Iglovikov, A. Shvets, “Ternausnet: U-net with vgg11 encoder pre-trained on imagenet for image segmentation.” arXiv preprint arXiv:1801.05746, 2018.
  • [38] G. Chhor, C. B. Aramburu, I. Bougdal-Lambert, “Satellite image segmentation for building detection using U-Net.” Web: http://cs229.stanford.edu/proj2017/final-reports/5243715.pdf, 2017.
  • [39] D. Patil, K. Patil, R. Nale, S. Chaudhari, "Semantic Segmentation of Satellite Images using Modified U-Net." IEEE Region 10 Symposium (TENSYMP), pp. 1-6, 2022.

Details

Primary Language English
Subjects Artificial Intelligence
Journal Section Research Articles
Authors

İsa Ataş (ORCID: 0000-0003-4094-9598)

Publication Date January 30, 2023
Published in Issue Year 2023, Volume 11, Issue 1

Cite

APA Ataş, İ. (2023). Performance Evaluation of Jaccard-Dice Coefficient on Building Segmentation from High Resolution Satellite Images. Balkan Journal of Electrical and Computer Engineering, 11(1), 100-106. https://doi.org/10.17694/bajece.1212563

All articles published by BAJECE are licensed under the Creative Commons Attribution 4.0 International License. This permits anyone to copy, redistribute, remix, transmit, and adapt the work, provided the original work and source are appropriately cited.