Pengenalan Tulisan Tangan Huruf Hijaiyah Menggunakan Convolution Neural Network Dengan Augmentasi Data

Authors

  • Sunu Ilham Pradika, Universitas Pembangunan Nasional “Veteran” Jawa Timur
  • Budi Nugroho, Universitas Pembangunan Nasional “Veteran” Jawa Timur
  • Eva Yulia Puspaningrum, Universitas Pembangunan Nasional “Veteran” Jawa Timur

DOI:

https://doi.org/10.33005/santika.v1i0.35

Keywords:

Deep Learning, Convolutional Neural Network, Hijaiyah Letter Recognition, Handwriting, Supervised Learning, Unsupervised Learning.

Abstract

A handwritten Hijaiyah character recognition system is needed to automatically correct the writing of someone who is learning to write these letters. Implementing such a system poses several challenges: the wide variation in handwritten Hijaiyah letter shapes, the choice of a suitable architecture, and the large amount of training data required for the system to predict accurately. The Convolutional Neural Network (CNN) is a deep learning algorithm that is effective for image processing and can be trained with either supervised or unsupervised learning. The CNN model is trained on the Hijaiyah1SKFI dataset, which consists of 2,100 images in 30 classes (the letters alif through ya) written by 4 different people, with 80% used as training data and 20% as test data. Because the dataset is small, this paper applies data augmentation as an optimization, so that the variety of training examples increases despite the limited number of samples. The architecture proposed in this paper, named SIP-Net, achieves an accuracy of 99.7% on the test data.
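To make the setup described in the abstract concrete, the following is a minimal sketch in Keras, which the reference list suggests was the toolkit used (e.g. [20], [39], [42]–[45]). The image size, augmentation parameters, directory layout, and layer configuration below are assumptions for illustration only; they are not the published SIP-Net architecture.

```python
# Minimal sketch (assumed Keras/TensorFlow API). Layer sizes and augmentation
# settings are illustrative, not the actual SIP-Net configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 30          # alif through ya
IMG_SHAPE = (64, 64, 1)   # assumed grayscale input size

# Data augmentation: small rotations, shifts, and zooms increase the variety
# of a small training set, as described in the abstract. validation_split is
# used here as a stand-in for the paper's 80%/20% train/test split.
augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
    validation_split=0.2,
)

# Illustrative CNN classifier with a 30-way softmax output.
model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=IMG_SHAPE),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Adam optimizer and sparse categorical cross-entropy, matching the
# techniques cited in the reference list ([42]-[45]).
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Assumed (hypothetical) directory layout: one subfolder per Hijaiyah class.
# train_flow = augmenter.flow_from_directory(
#     "Hijaiyah1SKFI/", target_size=IMG_SHAPE[:2], color_mode="grayscale",
#     class_mode="sparse", subset="training")
# model.fit(train_flow, epochs=30)
```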

References

[1] T. M. Iqbal, “Huruf Hijaiyah: 30 Huruf Arab yang Luar Biasa [PENJELASAN LENGKAP],” 2020. [Online]. Available: https://hasana.id/huruf-hijaiyah/. [Accessed: 04-Oct-2020].
[2] R. M. Fauzi, Adiwijaya, and W. Maharani, “The recognition of Hijaiyah letter pronunciation using mel frequency cepstral coefficients and Hidden Markov Model,” Adv. Sci. Lett., 2016, doi: 10.1166/asl.2016.7769.
[3] R. Dharmawati and H. Destiana, “Interactive Animation Design of Hijaiyah Letters in Early Age Children at Al-Hidayah Kindergarten Bekasi,” SinkrOn, 2019, doi: 10.33395/sinkron.v3i2.10033.
[4] D. Doochin, “How Many People Speak Arabic Around The World, And Where?,” 2019. [Online]. Available: https://www.babbel.com/en/magazine/how-many-people-speak-arabic. [Accessed: 05-Oct-2020].
[5] I. Ghosh, “Ranked: The 100 Most Spoken Languages Around the World,” 2020. [Online]. Available: https://www.visualcapitalist.com/100-most-spoken-languages/. [Accessed: 05-Oct-2020].
[6] WHO, “Considerations for school-related public health measures in the context of COVID-19: Annex to considerations in adjusting public health and social measures in the context of COVID-19,” pp. 1–6, May 2020.
[7] WHO, “Coronavirus,” 2020. [Online]. Available: https://www.who.int/health-topics/coronavirus#tab=tab_2. [Accessed: 05-Oct-2020].
[8] A. El-sawy, M. Loey, and H. El-Bakry, “Arabic Handwritten Characters Recognition using Convolutional Neural Network,” WSEAS Trans. Comput. Res., 2017.
[9] K. Younis and A. Khateeb, “Arabic Hand-Written Character Recognition Based on Deep Convolutional Neural Networks,” Jordanian J. Comput. Inf. Technol., vol. 3, no. 3, p. 186, 2017, doi: 10.5455/jjcit.71-1498142206.
[10] N. Altwaijry and I. Al-Turaiki, “Arabic handwriting recognition system using convolutional neural network,” Neural Comput. Appl., vol. 8, 2020, doi: 10.1007/s00521-020-05070-8.
[11] G. Latif, J. Alghazo, L. Alzubaidi, M. M. Naseer, and Y. Alghazo, “Deep Convolutional Neural Network for Recognition of Unified Multi-Language Handwritten Numerals,” 2018 IEEE 2nd Int. Work. Arab. Deriv. Scr. Anal. Recognit., pp. 90–95, 2018, doi: 10.1109/ASAR.2018.8480289.
[12] A. Ashiquzzaman and A. K. Tushar, “Handwritten Arabic numeral recognition using deep learning neural networks,” in 2017 IEEE International Conference on Imaging, Vision and Pattern Recognition, icIVPR 2017, 2017, doi: 10.1109/ICIVPR.2017.7890866.
[13] N. Das, A. F. Mollah, S. Saha, and S. S. Haque, “Handwritten Arabic Numeral Recognition using a Multi Layer Perceptron,” pp. 200–203, 2006.
[14] A. Mars and G. Antoniadis, “Arabic Online Handwriting Recognition Using Neural Network,” Int. J. Artif. Intell. Appl., vol. 7, no. 5, pp. 51–59, 2016, doi: 10.5121/ijaia.2016.7504.
[15] A. A. Alani, “Arabic handwritten digit recognition based on restricted Boltzmann machine and convolutional neural networks,” Inf., 2017, doi: 10.3390/info8040142.
[16] A. Ashiquzzaman, A. K. Tushar, A. Rahman, and F. Mohsin, “An efficient recognition method for handwritten arabic numerals using CNN with data augmentation and dropout,” in Advances in Intelligent Systems and Computing, 2019, doi: 10.1007/978-981-13-1402-5_23.
[17] R. Dunford, Q. Su, and E. Tamang, “The Pareto Principle,” Plymouth Student Sci., vol. 7, no. 1, pp. 140–148, 2014.
[18] J. Park, E. S. Jang, and J. W. Chong, “Demosaicing method for digital cameras with white-RGB color filter array,” ETRI J., vol. 38, no. 1, pp. 164–173, 2016, doi: 10.4218/etrij.16.0114.1371.
[19] T. Wu and A. Toet, “Color-to-grayscale conversion through weighted multiresolution channel fusion,” J. Electron. Imaging, vol. 23, no. 4, p. 043004, 2014, doi: 10.1117/1.jei.23.4.043004.
[20] F. Chollet, “Building powerful image classification models using very little data,” 2016. [Online]. Available: https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html. [Accessed: 07-Oct-2020].
[21] A. Mikołajczyk and M. Grochowski, “Data augmentation for improving deep learning in image classification problem,” in 2018 International Interdisciplinary PhD Workshop, IIPhDW 2018, 2018, doi: 10.1109/IIPHDW.2018.8388338.
[22] S. Albawi, T. A. Mohammed, and S. Al-Zawi, “Understanding of a convolutional neural network,” in Proceedings of 2017 International Conference on Engineering and Technology, ICET 2017, 2018, doi: 10.1109/ICEngTechnol.2017.8308186.
[23] Q. Zhang, M. Zhang, T. Chen, Z. Sun, Y. Ma, and B. Yu, “Recent advances in convolutional neural network acceleration,” Neurocomputing, vol. 323, pp. 37–51, 2019, doi: 10.1016/j.neucom.2018.09.038.
[24] J. Guérin, O. Gibaru, S. Thiery, and E. Nyiri, “CNN Features are also Great at Unsupervised Classification,” pp. 83–95, 2018, doi: 10.5121/csit.2018.80308.
[25] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE, vol. 86, no. 11, pp. 2278–2323, 1998, doi: 10.1109/5.726791.
[26] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Handbook of approximation algorithms and metaheuristics,” Handb. Approx. Algorithms Metaheuristics, pp. 1–1432, 2007, doi: 10.1201/9781420010749.
[27] M. Lin, Q. Chen, and S. Yan, “Network in network,” 2nd Int. Conf. Learn. Represent. ICLR 2014 - Conf. Track Proc., pp. 1–10, 2014.
[28] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” 3rd Int. Conf. Learn. Represent. ICLR 2015 - Conf. Track Proc., pp. 1–14, 2015.
[29] C. Szegedy et al., “Going deeper with convolutions,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 07-12-June, pp. 1–9, 2015, doi: 10.1109/CVPR.2015.7298594.
[30] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016, doi: 10.1109/CVPR.2016.90.
[31] R. Yamashita, M. Nishio, R. K. G. Do, and K. Togashi, “Convolutional neural networks: an overview and application in radiology,” Insights Imaging, vol. 9, no. 4, pp. 611–629, 2018, doi: 10.1007/s13244-018-0639-9.
[32] M. E. Paoletti, J. M. Haut, J. Plaza, and A. Plaza, “A new deep convolutional neural network for fast hyperspectral image classification,” ISPRS J. Photogramm. Remote Sens., vol. 145, pp. 120–147, 2018, doi: 10.1016/j.isprsjprs.2017.11.021.
[33] Prabhu, “Understanding of Convolutional Neural Network (CNN) — Deep Learning,” 2018. [Online]. Available: https://medium.com/@RaghavPrabhu/understanding-of-convolutional-neural-network-cnn-deep-learning-99760835f148. [Accessed: 09-Oct-2020].
[34] C. Nwankpa, W. Ijomah, A. Gachagan, and S. Marshall, “Activation Functions: Comparison of trends in Practice and Research for Deep Learning,” pp. 1–20, 2018.
[35] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM, 2017, doi: 10.1145/3065386.
[36] V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., 2017, doi: 10.1109/TPAMI.2016.2644615.
[37] T. Wood, “What is the Softmax Function?,” 2020. [Online]. Available: https://deepai.org/machine-learning-glossary-and-terms/softmax-layer. [Accessed: 10-Oct-2020].
[38] V. Kohir, “Calculating Output dimensions in a CNN for Convolution and Pooling Layers with KERAS,” 2020. [Online]. Available: https://medium.com/@kvirajdatt/calculating-output-dimensions-in-a-cnn-for-convolution-and-pooling-layers-with-keras-682960c73870. [Accessed: 09-Oct-2020].
[39] Keras.io, “MaxPooling2D layer,” 2020. [Online]. Available: https://keras.io/api/layers/pooling_layers/max_pooling2d/. [Accessed: 09-Oct-2020].
[40] S. H. S. Basha, S. R. Dubey, V. Pulabaigari, and S. Mukherjee, “Impact of fully connected layers on performance of convolutional neural networks for image classification,” Neurocomputing, vol. 378, pp. 112–119, 2020, doi: 10.1016/j.neucom.2019.10.008.
[41] Z. Zhang and M. R. Sabuncu, “Generalized cross entropy loss for training deep neural networks with noisy labels,” Adv. Neural Inf. Process. Syst., pp. 8778–8788, 2018.
[42] T. Jethwani, “Difference Between Categorical and Sparse Categorical Cross Entropy Loss Function,” 2020. [Online]. Available: https://leakyrelu.com/2020/01/01/difference-between-categorical-and-sparse-categorical-cross-entropy-loss-function/. [Accessed: 10-Oct-2020].
[43] Chris, “How to use sparse categorical crossentropy in Keras?,” 2019. [Online]. Available: https://www.machinecurve.com/index.php/2019/10/06/how-to-use-sparse-categorical-crossentropy-in-keras/#categorical-crossentropy. [Accessed: 10-Oct-2020].
[44] V. Bushaev, “Adam — latest trends in deep learning optimization.,” 2018. [Online]. Available: https://towardsdatascience.com/adam-latest-trends-in-deep-learning-optimization-6be9a291375c. [Accessed: 10-Oct-2020].
[45] D. P. Kingma and J. L. Ba, “Adam: A method for stochastic optimization,” 3rd Int. Conf. Learn. Represent. ICLR 2015 - Conf. Track Proc., pp. 1–15, 2015.
[46] S. Ghoneim, “Accuracy, Recall, Precision, F-Score & Specificity, which to optimize on?,” 2019. [Online]. Available: https://towardsdatascience.com/accuracy-recall-precision-f-score-specificity-which-to-optimize-on-867d3f11124. [Accessed: 10-Oct-2020].
[47] R. Arthana, “Mengenal Accuracy, Precision, Recall dan Specificity serta yang diprioritaskan dalam Machine Learning,” 2019. [Online]. Available: https://medium.com/@rey1024/mengenal-accuracy-precission-recall-dan-specificity-serta-yang-diprioritaskan-b79ff4d77de8. [Accessed: 10-Oct-2020].

Published

2020-11-01

How to Cite

Pradika, S. I., Nugroho, B., & Puspaningrum, E. Y. (2020). Pengenalan Tulisan Tangan Huruf Hijaiyah Menggunakan Convolution Neural Network Dengan Augmentasi Data. Prosiding Seminar Nasional Informatika Bela Negara, 1, 129–136. https://doi.org/10.33005/santika.v1i0.35

Issue

Section

Articles
