Research Article

Statistical Evaluation of Ensemble Learning Outcomes for the Missing Value Problem in Imbalanced Class Distribution (Dengesiz Sınıf Dağılımında Kayıp Gözlem Sorunu için Topluluk Öğrenmesi Sonuçlarının İstatistiksel Değerlendirmesi)

Year 2023, Volume 27, Issue 2, 181 - 190, 25.08.2023
https://doi.org/10.19113/sdufenbed.1090596

Abstract

In recent years, developing technology has brought with it continuously streaming data of varying structures and high dimensionality. This rapid change, together with the problems encountered in data sets, leaves traditional methods inadequate beyond a certain point. This study addresses two important data problems: data sets with i) missing observations and ii) imbalanced class distribution. The aim of this study is to fill data sets that suffer from both missing observations and imbalanced class distribution at the same time using various missing-value imputation methods, and to evaluate the performance of ensemble learning algorithms on the resulting data. In the application, the training portion of a data set collected via sensors contains 59,000 negative-class observations versus 1,000 positive-class observations. The resulting models were evaluated on test data with an imbalanced class distribution of 2.4%. In addition, approximately 99% of the variables in the data set contain missing data at rates reaching 82%. These missing observations were treated with hot deck imputation, mean, median, mode, multiple imputation, expectation maximization, and k-nearest-neighbour methods. The imputed data sets were comparatively tested with algorithms such as Extra Trees, Random Forest, Gradient Boosting, LightGBM, and XGBoost; the best result was obtained with the XGBoost algorithm.

Thanks

This study is derived from the M.Sc. thesis entitled “Kayıp Gözlem İçeren Dengesiz Veri Setlerinin Topluluk Öğrenme Algoritmaları ile Sınıflandırılması” (Classification of Imbalanced Data Sets Containing Missing Observations with Ensemble Learning Algorithms), completed by Enis Gümüştaş under the supervision of Assoc. Prof. Ayça Çakmak Pehlivanlı in the Statistics M.Sc. Programme of Mimar Sinan Güzel Sanatlar Üniversitesi Fen Bilimleri Enstitüsü. We thank the jury members for their contributions during the review and evaluation of the thesis.

References

  • [1] Rubin, D. B. 1976. Inference and missing data. Biometrika, 63(3), pp. 581-592.
  • [2] Dempster, A. P., Laird, N. M. and Rubin, D. B. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B: Methodological, 39(1), pp. 1-22.
  • [3] Little, R. J. 1988. A test of missing completely at random for multivariate data with missing values. Journal of the American Statistical Association, 83(404), pp. 1198-1202.
  • [4] Chan, P., and Stolfo, S. 1998. Toward scalable learning with non-uniform class and cost distributions: A case study in credit card fraud detection. In Proc. of Knowledge Discovery and Data Mining, pp:164–168.
  • [5] Fu K., Cheng D., Tu Y., Zhang L. 2016. Credit Card Fraud Detection Using Convolutional Neural Networks. Neural Information Processing. ICONIP 2016. Lecture Notes in Computer Science, vol 9949. Springer, Cham.
  • [6] Sanz, J. A., Bernardo, D., Herrera, F., Bustince, H., and Hagras, H. 2015. A compact evolutionary interval-valued fuzzy rule-based classification system for the modeling and prediction of real-world financial applications with imbalanced data. Fuzzy Systems, IEEE Transactions on, 23(4), pp. 973–990
  • [7] Mitchell P.S., Parkin R.K., Kroh E.M., et al. 2008. Circulating microRNAs as stable blood-based markers for cancer detection. Proc. of the National Academy of Sciences, 105(30) pp. 10513-8.
  • [8] Oh, S., Lee, M. S. And Zhang, B.T. 2011. Ensemble learning with active example selection for imbalanced biomedical data classification. IEEE-ACM Trans. on Computational Biology and Bioinformatics (TCBB), 8(2), pp. 316–325
  • [9] Li, Y., Sun, G., & Zhu, Y. 2010. Data imbalance problem in text classification. IEEE 2010 3rd Int. Symposium on Information Processing, pp. 301-305.
  • [10] Kubat, M., Holte, R.C. and Matwin, S. 1998. Machine learning for the detection of oil spills in satellite radar images. Machine Learning, 30, pp. 195–215.
  • [11] Chawla, N.V., Bowyer, K.W., Hall, L.O. and Kegelmeyer, W.P. 2002. SMOTE: Synthetic Minority Over-Sampling Technique. Journal of Artificial Intelligence Research 16, pp. 321–357.
  • [12] Drummond, C. and Holte, R. C. 2003. C4.5, class imbalance, and cost sensitivity: why under-sampling beats over-sampling. In Workshop on Learning from Imbalanced Datasets II, vol. 11, pp. 1-8.
  • [13] Han, H., Wang, W. Y. and Mao, B. H. 2005. Borderline-SMOTE: A new oversampling method in imbalanced data sets learning. In International Conference on Intelligent Computing (ICIC'05). Lecture Notes in Computer Science 3644, pp. 878-887, Springer-Verlag.
  • [14] Van Hulse, J., Khoshgoftaar, T.M. and Napolitano, A. 2007. Experimental perspectives on learning from imbalanced data. In Proc. of the 24th Int. Conf. on ML (ICML), pp. 17–23.
  • [15] He, H., Bai, Y., Garcia, E. A. and Li, S. 2008. ADASYN: Adaptive synthetic sampling approach for imbalanced learning. IEEE International Joint Conference on Neural Networks, pp. 1322-1328.
  • [16] He, H., Garcia, E. A. 2009. Learning from Imbalanced Data, IEEE Trans. Knowledge and Data Eng., 21(9), pp. 1263-1284.
  • [17] Batista, G. E. D. A. P. A., Silva, D. F. and Prati, R. C. 2012. An Experimental Design to Evaluate Class Imbalance Treatment Methods, 11th International Conference on Machine Learning and Applications, Boca Raton, FL, USA, pp. 95-101.
  • [18] Haixiang, G., Yijing, L., Shang, J., Mingyun, G., Yuanyue, H., and Bing, G. 2017. Learning from class-imbalanced data: Review of methods and applications. Expert Systems with Applications, 73, pp. 220-239.
  • [19] Schapire, R. E. 1990. The strength of weak learnability. Machine learning, 5(2), 197-227.
  • [20] Freund, Y., and Schapire, R. E. 1996. Experiments with a new boosting algorithm. Proc. of the 13th International Conference on International Conference on Machine Learning, ICML’ 96, pp. 148-156.
  • [21] Chawla N.V., Lazarevic A., Hall L.O. and Bowyer K.W. 2003. SMOTEBoost: Improving Prediction of the Minority Class in Boosting. Knowledge Discovery in Databases: PKDD 2003. Lecture Notes in Computer Science, vol 2838. Springer, Berlin, Heidelberg.
  • [22] Liu, X. Y., Wu, J. and Zhou, Z. H. 2009. Exploratory undersampling for class-imbalance learning. IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, 39(2), pp. 539-550.
  • [23] Seiffert, C., Khoshgoftaar, T. M., Van Hulse, J., and Napolitano, A. 2009. RUSBoost: A hybrid approach to alleviating class imbalance. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 40(1), pp. 185-197.
  • [24] Salem, M., Taheri, S., and Yuan, J. S. 2018. An Experimental Evaluation of Fault Diagnosis from Imbalanced and Incomplete Data for Smart Semiconductor Manufacturing. Big Data and Cognitive Computing, 2(4), 30.
  • [25] Liu, Z., Cao, W., Gao, Z., Bian, J., Chen, H., Chang, Y., and Liu, TY. 2020. Self-paced Ensemble for Highly Imbalanced Massive Data Classification, IEEE 36th International Conference on Data Engineering (ICDE), pp. 841-852.
  • [26] Razavi-Far, R., Farajzadeh-Zanajni, M., Wang, B., Saif, M. and Chakrabarti, S. 2021. Imputation-Based Ensemble Techniques for Class Imbalance Learning, IEEE Transactions on Knowledge and Data Engineering, vol. 33, no. 5, pp. 1988-2001.
  • [27] Zhou, Z.-H. and Liu, X.-Y. 2006. Training cost-sensitive neural networks with methods addressing the class imbalance problem, IEEE Trans. Knowledge. Data Eng., vol. 18, pp. 63–77.
  • [28] Zong, W., Huang, G.-B. and Chen, Y. 2013. Weighted extreme learning machine for imbalance learning, Neurocomputing, vol. 101, pp. 229–242.
  • [29] Wang, J., Zhao, P. and Hoi, S. C. H. 2014. Cost-sensitive online classification, IEEE Trans. Knowledge. Data Eng., vol. 26(10), pp. 2425–2438.
  • [30] Gümüştaş, E. 2019. Kayıp gözlem içeren dengesiz veri setlerinin topluluk öğrenme algoritmaları ile sınıflandırılması. Mimar Sinan Güzel Sanatlar Üniversitesi Fen Bilimleri Enstitüsü, Yüksek Lisans Tezi, 48s, İstanbul.
  • [31] Longford, N. T. 2004. Missing data and small area estimation in the UK Labour Force Survey. Journal of the Royal Statistical Society: Series A: Statistics in Society, 167(2), pp. 341-373.
  • [32] Little, R.J.A. and Rubin, D.B. 1987. Statistical Analysis with Missing Data. John Wiley & Sons, New York.
  • [33] Oğuzlar, A. 2001. Alan araştırmalarında kayıp değer problemi ve çözüm önerileri. Ulusal Ekonometri ve İstatistik Sempozyumu, Çukurova Üniversitesi Adana, 20(22), pp. 1-28.
  • [34] Allison, Paul. 2001. Missing data. Sage University Papers Series on Quantitative Applications in the Social Sciences. 07-136.
  • [35] Alpar, R. 2003. Uygulamalı Çok Değişkenli İstatistiksel Yöntemlere Giriş 1, Nobel Akademik Yayıncılık, 404s.
  • [36] Gümüştaş, E. ve Çakmak Pehlivanlı, A. 2021. In-Silico Mutajenisite Tahmininde İstatistiksel Öğrenme Modeli. Süleyman Demirel Üniversitesi Fen Bilimleri Enstitüsü Dergisi 25, pp. 365-370.
  • [37] Dietterich, T. G. 2000. An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. Machine Learning, 40(2), pp. 139-157.
  • [38] Breiman, L. 1996. Bagging predictors. Machine learning, 24(2), pp. 123-140
  • [39] Efron, B. 1979. Bootstrap Methods: Another Look at the Jackknife. Ann. Statist. 7(1), pp. 1-26.
  • [40] Efron, B. and Tibshirani, R. 1994. An introduction to the bootstrap. Chapman & Hall/CRC.
  • [41] Surowiecki, J. 2004. The Wisdom of Crowds: Why the Many are Smarter than the Few and How Collective Wisdom Shapes Business, Economics, Societies and Nations. Little, Brown.
  • [42] Freund, Y. and Schapire, R.E. 1997. A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting, Journal of Computer and System Sciences, 55(1), pp. 119-139.
  • [43] Wolpert, D. H., 1992. Stacked generalization, Neural Networks, 5(2), pp. 241-259.
  • [44] Maloof, M. A. 2003. Learning when data sets are imbalanced and when costs are unequal and unknown. Workshop on Learning from Imbalanced Datasets II vol. 2.
  • [45] Sun, Y., Kamel, M. S., Wong, A. K., & Wang, Y. 2007. Cost-sensitive boosting for classification of imbalanced data. Pattern Recognition, 40(12), pp. 3358-3378.
  • [46] Lipton, Z. C., Elkan, C., and Naryanaswamy, B. 2014. Optimal thresholding of classifiers to maximize F1 measure. Joint European Conf. on Machine Learning and Knowledge Discovery in Databases pp. 225-239. Springer, Berlin, Heidelberg.

Statistical Evaluation of Ensemble Learning Outcomes for Missing Value Problem in Imbalanced Class Distribution

Year 2023, Volume 27, Issue 2, 181 - 190, 25.08.2023
https://doi.org/10.19113/sdufenbed.1090596

Abstract

Rapid developments in technology have brought data of varying structures and high dimensionality in recent decades. Due to these rapid changes and the problems encountered in data sets, it has become inevitable that traditional methods are replaced with machine learning methods. Within the scope of this study, two important data problems are discussed: data sets with i) missing observations and ii) imbalanced class distribution. This study aims to fill data sets that have both the missing observation and the imbalanced class distribution problem at the same time by using various missing-value imputation methods, and to assess the performance of ensemble learning algorithms on the resulting data. In the data set collected through sensors for the application, the training portion contains 59,000 negative-class observations versus 1,000 positive-class observations. The resulting models were tested on test data with an imbalanced class distribution of 2.4%. In addition, approximately 99% of the features in the data set contain missing data at rates of up to 82%. These missing observations were addressed with hot deck imputation, mean, median, mode, multiple imputation, expectation maximization, and k-nearest-neighbour methods. Data sets completed with the imputation methods were comparatively tested with algorithms such as Extra Trees, Random Forest, Gradient Boosting, LightGBM, and XGBoost, and the most promising result was obtained with the XGBoost algorithm.
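The workflow the abstract describes (impute missing values with several methods, then compare ensemble classifiers on imbalanced data) can be sketched as below. This is a minimal illustration, not the study's pipeline: the synthetic data, missing-value rate, imputer set, and scikit-learn estimators are stand-ins for the paper's sensor data and settings, and XGBoost/LightGBM are omitted to keep the sketch dependency-free.

```python
# Illustrative sketch: compare imputation methods x ensemble classifiers
# on a synthetic imbalanced data set with missing entries.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.impute import KNNImputer, SimpleImputer
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# ~5% positive class, mimicking a strongly imbalanced distribution
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.95, 0.05], random_state=0)
# Knock out ~30% of the entries to mimic missing sensor readings
X[rng.random(X.shape) < 0.3] = np.nan

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

imputers = {
    "mean": SimpleImputer(strategy="mean"),
    "median": SimpleImputer(strategy="median"),
    "mode": SimpleImputer(strategy="most_frequent"),
    "knn": KNNImputer(n_neighbors=5),
}
models = {
    "ExtraTrees": ExtraTreesClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
    "GradientBoosting": GradientBoostingClassifier(random_state=0),
}

results = {}
for iname, imputer in imputers.items():
    Xtr_i = imputer.fit_transform(X_tr)  # fit the imputer on training data only
    Xte_i = imputer.transform(X_te)
    for mname, model in models.items():
        model.fit(Xtr_i, y_tr)
        # F1 on the minority class, as accuracy is misleading under imbalance
        results[(iname, mname)] = f1_score(y_te, model.predict(Xte_i))

best = max(results, key=results.get)
print("best imputer/model pair:", best, round(results[best], 3))
```

F1 on the minority class is used as the comparison metric here because, with a 95/5 split, plain accuracy rewards a classifier that never predicts the positive class.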


Details

Primary Language Turkish
Subjects Engineering
Journal Section Makaleler
Authors

Enis Gumustas 0000-0003-0220-4544

Ayça Çakmak Pehlivanlı 0000-0001-9884-6538

Publication Date August 25, 2023
Published in Issue Year 2023

Cite

APA Gumustas, E., & Çakmak Pehlivanlı, A. (2023). Dengesiz Sınıf Dağılımında Kayıp Gözlem Sorunu için Topluluk Öğrenmesi Sonuçlarının İstatistiksel Değerlendirmesi. Süleyman Demirel Üniversitesi Fen Bilimleri Enstitüsü Dergisi, 27(2), 181-190. https://doi.org/10.19113/sdufenbed.1090596
AMA Gumustas E, Çakmak Pehlivanlı A. Dengesiz Sınıf Dağılımında Kayıp Gözlem Sorunu için Topluluk Öğrenmesi Sonuçlarının İstatistiksel Değerlendirmesi. Süleyman Demirel Üniv. Fen Bilim. Enst. Derg. August 2023;27(2):181-190. doi:10.19113/sdufenbed.1090596
Chicago Gumustas, Enis, and Ayça Çakmak Pehlivanlı. “Dengesiz Sınıf Dağılımında Kayıp Gözlem Sorunu için Topluluk Öğrenmesi Sonuçlarının İstatistiksel Değerlendirmesi”. Süleyman Demirel Üniversitesi Fen Bilimleri Enstitüsü Dergisi 27, no. 2 (August 2023): 181-90. https://doi.org/10.19113/sdufenbed.1090596.
EndNote Gumustas E, Çakmak Pehlivanlı A (August 1, 2023) Dengesiz Sınıf Dağılımında Kayıp Gözlem Sorunu için Topluluk Öğrenmesi Sonuçlarının İstatistiksel Değerlendirmesi. Süleyman Demirel Üniversitesi Fen Bilimleri Enstitüsü Dergisi 27 2 181–190.
IEEE E. Gumustas and A. Çakmak Pehlivanlı, “Dengesiz Sınıf Dağılımında Kayıp Gözlem Sorunu için Topluluk Öğrenmesi Sonuçlarının İstatistiksel Değerlendirmesi”, Süleyman Demirel Üniv. Fen Bilim. Enst. Derg., vol. 27, no. 2, pp. 181–190, 2023, doi: 10.19113/sdufenbed.1090596.
ISNAD Gumustas, Enis - Çakmak Pehlivanlı, Ayça. “Dengesiz Sınıf Dağılımında Kayıp Gözlem Sorunu için Topluluk Öğrenmesi Sonuçlarının İstatistiksel Değerlendirmesi”. Süleyman Demirel Üniversitesi Fen Bilimleri Enstitüsü Dergisi 27/2 (August 2023), 181-190. https://doi.org/10.19113/sdufenbed.1090596.
JAMA Gumustas E, Çakmak Pehlivanlı A. Dengesiz Sınıf Dağılımında Kayıp Gözlem Sorunu için Topluluk Öğrenmesi Sonuçlarının İstatistiksel Değerlendirmesi. Süleyman Demirel Üniv. Fen Bilim. Enst. Derg. 2023;27:181–190.
MLA Gumustas, Enis and Ayça Çakmak Pehlivanlı. “Dengesiz Sınıf Dağılımında Kayıp Gözlem Sorunu için Topluluk Öğrenmesi Sonuçlarının İstatistiksel Değerlendirmesi”. Süleyman Demirel Üniversitesi Fen Bilimleri Enstitüsü Dergisi, vol. 27, no. 2, 2023, pp. 181-90, doi:10.19113/sdufenbed.1090596.
Vancouver Gumustas E, Çakmak Pehlivanlı A. Dengesiz Sınıf Dağılımında Kayıp Gözlem Sorunu için Topluluk Öğrenmesi Sonuçlarının İstatistiksel Değerlendirmesi. Süleyman Demirel Üniv. Fen Bilim. Enst. Derg. 2023;27(2):181-90.

e-ISSN :1308-6529
Linking ISSN (ISSN-L): 1300-7688