Research Article

DEVELOPMENT OF A FAULT INJECTION TOOL & DATASET FOR VERIFICATION OF CAMERA BASED PERCEPTION IN ROBOTIC SYSTEMS

Year 2022, Volume 30, Issue 3, 328-339, 21.12.2022
https://doi.org/10.31796/ogummf.1054761

Abstract

Camera-based perception is one of the most popular topics in robotic systems today, and verifying camera-based perception systems is both crucial and difficult with current tools and methods. This study proposes the Camera Fault Injection Tool (CamFITool), which enables different kinds of fault injection into RGB and TOF camera data in order to perform verification and validation activities on robotic systems. In addition, the Fault Injected Image Database created with CamFITool is introduced, and the study guides readers in creating new datasets by injecting faults into existing image libraries or live camera streams with CamFITool. As a result, CamFITool, an open-source fault injection tool that is critical for assessing the safety and security of fault-tolerant systems, is proposed, together with a fault-injected image dataset created by CamFITool for the verification of camera-based perception studies in robotic systems.
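The abstract describes injecting faults into stored images or live camera streams; the tool's actual fault models and interfaces are defined in the CamFITool repository. As an illustrative sketch only — the function name and the salt-and-pepper noise model below are assumptions for demonstration, not CamFITool's API — one common camera fault of this kind can be injected into an RGB frame with NumPy:

```python
import numpy as np

def inject_salt_pepper(image: np.ndarray, ratio: float = 0.05, seed: int = 0) -> np.ndarray:
    """Return a copy of `image` with roughly `ratio` of its pixels
    flipped to pure white ("salt") or pure black ("pepper")."""
    rng = np.random.default_rng(seed)
    faulty = image.copy()
    # Choose which pixels to corrupt, then split them 50/50 into salt/pepper.
    mask = rng.random(image.shape[:2]) < ratio
    salt = rng.random(image.shape[:2]) < 0.5
    faulty[mask & salt] = 255
    faulty[mask & ~salt] = 0
    return faulty

# A uniform gray 64x64 RGB frame standing in for a camera image.
clean = np.full((64, 64, 3), 128, dtype=np.uint8)
faulty = inject_salt_pepper(clean, ratio=0.1)
# Fraction of pixels that were actually corrupted (expected: about 0.1).
changed = float(np.mean(np.any(faulty != clean, axis=-1)))
```

Applying such a function over a whole image library, or frame by frame to a camera topic, is the kind of batch fault injection the tool automates.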

Supporting Institution

ECSEL Joint Undertaking (JU) and TÜBİTAK

Project Number

876852 and 120N803


ROBOTİK SİSTEMLERDE KAMERA TABANLI ALGININ DOĞRULANMASI İÇİN HATA ENJEKSİYON ARACI VE VERİ KÜMESİNİN GELİŞTİRİLMESİ



Details

Primary Language English
Subjects Computer Software
Section Research Articles
Authors

Uğur Yayan 0000-0003-1394-5209

Alim Kerem Erdoğmuş 0000-0001-5111-5965

Project Number 876852 and 120N803
Early View Date December 21, 2022
Publication Date December 21, 2022
Acceptance Date June 29, 2022
Published in Issue Year 2022, Volume 30, Issue 3

How to Cite

APA Yayan, U., & Erdoğmuş, A. K. (2022). DEVELOPMENT OF A FAULT INJECTION TOOL & DATASET FOR VERIFICATION OF CAMERA BASED PERCEPTION IN ROBOTIC SYSTEMS. Eskişehir Osmangazi Üniversitesi Mühendislik Ve Mimarlık Fakültesi Dergisi, 30(3), 328-339. https://doi.org/10.31796/ogummf.1054761
AMA Yayan U, Erdoğmuş AK. DEVELOPMENT OF A FAULT INJECTION TOOL & DATASET FOR VERIFICATION OF CAMERA BASED PERCEPTION IN ROBOTIC SYSTEMS. ESOGÜ Müh Mim Fak Derg. December 2022;30(3):328-339. doi:10.31796/ogummf.1054761
Chicago Yayan, Uğur, and Alim Kerem Erdoğmuş. “DEVELOPMENT OF A FAULT INJECTION TOOL & DATASET FOR VERIFICATION OF CAMERA BASED PERCEPTION IN ROBOTIC SYSTEMS”. Eskişehir Osmangazi Üniversitesi Mühendislik Ve Mimarlık Fakültesi Dergisi 30, no. 3 (December 2022): 328-39. https://doi.org/10.31796/ogummf.1054761.
EndNote Yayan U, Erdoğmuş AK (01 December 2022) DEVELOPMENT OF A FAULT INJECTION TOOL & DATASET FOR VERIFICATION OF CAMERA BASED PERCEPTION IN ROBOTIC SYSTEMS. Eskişehir Osmangazi Üniversitesi Mühendislik ve Mimarlık Fakültesi Dergisi 30 3 328–339.
IEEE U. Yayan and A. K. Erdoğmuş, “DEVELOPMENT OF A FAULT INJECTION TOOL & DATASET FOR VERIFICATION OF CAMERA BASED PERCEPTION IN ROBOTIC SYSTEMS”, ESOGÜ Müh Mim Fak Derg, vol. 30, no. 3, pp. 328–339, 2022, doi: 10.31796/ogummf.1054761.
ISNAD Yayan, Uğur - Erdoğmuş, Alim Kerem. “DEVELOPMENT OF A FAULT INJECTION TOOL & DATASET FOR VERIFICATION OF CAMERA BASED PERCEPTION IN ROBOTIC SYSTEMS”. Eskişehir Osmangazi Üniversitesi Mühendislik ve Mimarlık Fakültesi Dergisi 30/3 (December 2022), 328-339. https://doi.org/10.31796/ogummf.1054761.
JAMA Yayan U, Erdoğmuş AK. DEVELOPMENT OF A FAULT INJECTION TOOL & DATASET FOR VERIFICATION OF CAMERA BASED PERCEPTION IN ROBOTIC SYSTEMS. ESOGÜ Müh Mim Fak Derg. 2022;30:328–339.
MLA Yayan, Uğur and Alim Kerem Erdoğmuş. “DEVELOPMENT OF A FAULT INJECTION TOOL & DATASET FOR VERIFICATION OF CAMERA BASED PERCEPTION IN ROBOTIC SYSTEMS”. Eskişehir Osmangazi Üniversitesi Mühendislik Ve Mimarlık Fakültesi Dergisi, vol. 30, no. 3, 2022, pp. 328-39, doi:10.31796/ogummf.1054761.
Vancouver Yayan U, Erdoğmuş AK. DEVELOPMENT OF A FAULT INJECTION TOOL & DATASET FOR VERIFICATION OF CAMERA BASED PERCEPTION IN ROBOTIC SYSTEMS. ESOGÜ Müh Mim Fak Derg. 2022;30(3):328-39.
