COMPARATIVE ANALYSIS OF MODELS FOR FAKE NEWS DETECTION AND CLASSIFICATION USING GRU

Authors

DOI:

https://doi.org/10.31474/2786-9024/v2i2(34).313834

Keywords:

fake news detection, text classification, gated recurrent unit, GRU, deep learning, natural language processing, NLP, news classification system, machine learning, text data analysis

Abstract

The article presents a comparative analysis of models for detecting and classifying fake news using GRU (gated recurrent unit), a modern neural network architecture that serves as an alternative to LSTM. The aim of the study is to evaluate the efficiency of the GRU model in comparison with other popular natural language processing (NLP) models, such as BERT, RoBERTa, and LSTM, in the context of identifying fake news. The relevance of the topic is driven by the need for accurate and timely detection of disinformation in today’s information space, which significantly impacts societal processes and decision-making.

The research methodology is based on a comparative analysis against defined criteria, including classification accuracy, training speed, and computational resource requirements. GRU, as a recurrent neural network, has a simpler architecture than LSTM, making it less resource-intensive while retaining the ability to process long text sequences. The main focus is on comparing the performance of GRU with other models in fake news detection and classification tasks, taking contextual processing capabilities into account.
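The claim that GRU is less resource-intensive than LSTM can be made concrete by counting parameters: a GRU layer has three weight blocks (update gate, reset gate, candidate state) versus LSTM's four (input, forget, and output gates plus the cell candidate), so a GRU layer of the same width has roughly three quarters of the parameters. A minimal sketch (the layer sizes here are illustrative, not the authors' configuration):

```python
import torch.nn as nn

emb_dim, hidden = 100, 128          # illustrative sizes
gru = nn.GRU(emb_dim, hidden)
lstm = nn.LSTM(emb_dim, hidden)

def n_params(module):
    """Total number of trainable parameters in a module."""
    return sum(p.numel() for p in module.parameters())

# GRU: 3 gate blocks; LSTM: 4 gate blocks -> exactly a 3:4 parameter ratio
# for layers of equal input and hidden size.
print(n_params(gru), n_params(lstm))  # 88320 117760
```

The 3:4 ratio holds for any equal-width pair of layers, which is the architectural source of GRU's lower memory footprint and faster training step.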

The results of the comparative analysis show that GRU delivers competitive performance in terms of accuracy and training speed compared to LSTM and transformer-based models (BERT, RoBERTa), especially in resource-constrained environments. GRU proves effective when handling large volumes of text and analyzing complex contextual relationships. Due to its simpler architecture, GRU is a promising model for implementation in real-time fake news monitoring and detection systems.
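The kind of GRU-based classifier compared above can be sketched as a small embedding-to-GRU-to-linear pipeline. This is a hypothetical minimal architecture for binary (real/fake) classification, not the authors' exact model; vocabulary and layer sizes are assumed for illustration:

```python
import torch
import torch.nn as nn

class GRUFakeNewsClassifier(nn.Module):
    """Minimal sketch: token embedding -> GRU -> linear head (2 classes)."""
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # logits for real / fake

    def forward(self, token_ids):
        x = self.embed(token_ids)          # (batch, seq_len, emb_dim)
        _, h_n = self.gru(x)               # final hidden state: (1, batch, hidden)
        return self.head(h_n.squeeze(0))   # (batch, 2)

model = GRUFakeNewsClassifier()
dummy_batch = torch.randint(0, 10000, (4, 50))  # 4 sequences of 50 token ids
logits = model(dummy_batch)
print(logits.shape)
```

Because the whole forward pass is a single recurrent sweep over the token sequence, such a model is lightweight enough for the real-time monitoring scenario the article highlights.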

The scientific novelty of the article lies in the exploration of GRU’s effectiveness compared to other NLP models for text classification tasks, which can improve disinformation identification processes. The practical significance of the study is that the results can serve as recommendations for selecting a specific class of models to solve various tasks when developing systems for combating fake news in different domains, including media, social networks, and analytical centers.

Author Biographies

Vitalii Kovalenko, Donetsk National Technical University

Master's student, Department of Applied Mathematics and Informatics, DonNTU

Iaroslav Dorohyi, Donetsk National Technical University

Professor, Department of Applied Mathematics and Informatics, DonNTU

Katerina Doroshenko, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Senior lecturer, Department of Information Systems and Technologies, Igor Sikorsky Kyiv Polytechnic Institute


Published

2024-11-01

How to Cite

Kovalenko, V., Dorohyi, I., & Doroshenko, K. (2024). COMPARATIVE ANALYSIS OF MODELS FOR FAKE NEWS DETECTION AND CLASSIFICATION USING GRU. Scientific Papers of Donetsk National Technical University. Series: “Computer Engineering and Automation”, 2(2(34)), 39–57. https://doi.org/10.31474/2786-9024/v2i2(34).313834

Issue

Section

Artificial Intelligence