AI's Revolutionary Role in Cyber Defense and Social Engineering
DOI: 10.47709/ijmdsa.v3i4.4752
Keywords: Cybersecurity, Artificial Intelligence, Social Engineering, Incident Response, Machine Learning, Threat Detection, Data Quality, Ethical Considerations, Case Studies, Future Trends
Abstract
The growing sophistication of cyber threats, especially those rooted in social engineering, demands innovative approaches to cybersecurity. This review article examines the transformative role of artificial intelligence (AI) in reshaping cybersecurity practice. It begins by surveying social engineering attacks and the ways AI technologies improve threat identification and response. The study then addresses specific applications of AI across cybersecurity domains, including anomaly detection, fraud detection, and automated incident response. Despite its many advantages, applying AI to cybersecurity is not without difficulty: data quality and bias, adversarial attacks, ethical concerns, and resource requirements all pose significant challenges. Building comprehensive cybersecurity strategies therefore requires integrating AI with human expertise and maintaining human oversight and collaboration. AI technology is expected to continue advancing, particularly in machine learning algorithms and their integration with emerging platforms such as blockchain and the Internet of Things (IoT). Case studies across a range of industries show how organizations have successfully deployed AI to improve threat detection and response times. AI holds enormous potential to strengthen cybersecurity protocols; organizations that adopt it while addressing ethical concerns and fostering a culture of continuous learning will be better positioned to navigate the ever-changing landscape of cyber threats and secure a safer digital future.
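As an illustration of the anomaly-detection approach the abstract surveys, the following minimal sketch (a hypothetical example, not drawn from the article) flags outliers in hourly login counts using a simple z-score threshold; real deployments would use richer features and learned models:

```python
import statistics

def flag_anomalies(counts, threshold=2.0):
    """Return indices of values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    if stdev == 0:
        # All values identical: nothing stands out.
        return []
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]

# Hypothetical hourly login counts; the spike at index 5 is the anomaly.
logins = [12, 15, 11, 14, 13, 250, 12, 16]
print(flag_anomalies(logins))  # → [5]
```

The z-score rule is a deliberately simple stand-in for the statistical baseline that machine-learning detectors refine with behavioral features and adaptive thresholds.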
License
Copyright (c) 2024 Muhammad Ismaeel Khan, Aftab Arif, Ali Khan
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.