Engineering and Technology
Open Access | Cognitive Vulnerabilities in the Age of LLMs: Mitigating Generative AI-Driven Social Engineering Through Context-Aware Threat Detection
Dr. Elias Thorne, Department of Computer Science and Information Systems, Institute of Advanced Cybernetics
Abstract
The advent of Large Language Models (LLMs) has fundamentally altered the cybersecurity landscape, specifically within the domain of social engineering. While LLMs facilitate productivity, they also empower threat actors to generate hyper-personalized, grammatically perfect, and contextually relevant phishing campaigns at scale. This paper explores the intersection of generative AI, cognitive psychology, and intrusion detection to propose a novel defense framework. We investigate the efficacy of current AI-driven social engineering tactics, utilizing the Five-Factor Model of personality to map cognitive vulnerabilities exploited by generative agents. Furthermore, we introduce a Context-Aware Defense System (CADS) that leverages fine-tuned LLMs to detect semantic anomalies and psychological manipulation triggers in real-time communications. Our methodology involves simulating high-fidelity spear-phishing attacks against generative agent personas representing diverse psychological profiles. Results indicate that traditional signature-based detection fails against LLM-generated content, whereas the proposed semantic analysis approach improves detection rates significantly. We find that high Agreeableness and Neuroticism correlate with higher susceptibility to AI-generated pretexts. The study concludes that effective defense against the next generation of social engineering requires a paradigm shift from static filtering to dynamic, psychological, and semantic content analysis.
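The abstract does not publish the CADS implementation, which is described as a fine-tuned LLM. As a rough illustration of the underlying idea of flagging psychological manipulation triggers in message text, the following sketch uses a hypothetical keyword lexicon of influence cues (urgency, authority, scarcity) with made-up weights; it is a stand-in heuristic, not the paper's method.

```python
# Illustrative sketch only: a trigger-lexicon scorer for manipulation cues.
# The categories, patterns, and weights are hypothetical examples; the paper's
# CADS relies on fine-tuned LLMs rather than keyword rules.
import re

# Hypothetical lexicon: influence-cue category -> (regex pattern, weight)
TRIGGERS = {
    "urgency":   (r"\b(urgent|immediately|act now|within 24 hours)\b", 0.4),
    "authority": (r"\b(ceo|it department|compliance|legal team)\b", 0.3),
    "scarcity":  (r"\b(last chance|limited time|final notice)\b", 0.3),
}

def manipulation_score(message: str) -> float:
    """Return a score in [0, 1]: sum of weights for trigger categories present."""
    score = 0.0
    for _name, (pattern, weight) in TRIGGERS.items():
        if re.search(pattern, message, flags=re.IGNORECASE):
            score += weight
    return min(score, 1.0)

msg = "URGENT: the CEO needs gift cards immediately - last chance to comply."
print(manipulation_score(msg))  # all three categories fire -> capped at 1.0
```

A production detector would instead embed the message and score it semantically, but the shape of the output (a per-message risk score feeding a threshold or downstream classifier) is the same.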
Keywords
Large Language Models, Social Engineering, Generative AI, Intrusion Detection
Copyright License
Copyright (c) 2025 Dr. Elias Thorne

This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors retain the copyright of their manuscripts, and all Open Access articles are disseminated under the terms of the Creative Commons Attribution License 4.0 (CC-BY), which permits unrestricted use, distribution, and reproduction in any medium, provided that the original work is appropriately cited. The use of general descriptive names, trade names, trademarks, and so forth in this publication, even if not specifically identified, does not imply that these names are not protected by the relevant laws and regulations.

