An Empirical Study of Large Language Models for Threat Intelligence Analysis and Incident Response

Authors

  • Ruoxi Jia, Computer Science, University of Southern California, CA, USA
  • Jin Zhang, Computer Science, Illinois Institute of Technology, IL, USA
  • Julian Prescot, Computational Science, Princeton University, Princeton, NJ, USA

DOI:

https://doi.org/10.63575/CIA.2024.20109

Keywords:

Large Language Models, Threat Intelligence, Incident Response, Cybersecurity Automation

Abstract

The exponential growth of cyber threats necessitates advanced automation in threat intelligence analysis and incident response workflows. This empirical study investigates the application of Large Language Models (LLMs) to critical security operations tasks, including threat intelligence extraction, mapping of tactics, techniques, and procedures (TTPs), and automated response generation. Through systematic evaluation of multiple LLM architectures on real-world cybersecurity datasets comprising 1,000 threat intelligence reports and 500 incident records, we assess performance on entity extraction, threat actor attribution, and remediation recommendation. Our experimental results demonstrate that LLMs achieve F1 scores exceeding 0.88 for Indicator of Compromise (IoC) extraction and reduce incident response time by 64% while maintaining 82% accuracy in MITRE ATT&CK technique mapping. The findings reveal significant efficiency gains, with retrieval-augmented generation (RAG) configurations showing a 19% performance improvement over baseline approaches. This work provides empirical evidence supporting LLM deployment in security operations centers and identifies critical challenges for production environments.
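
The abstract does not spell out the extraction pipeline, so the following is a minimal Python sketch, not the authors' implementation, of how an LLM-based IoC extraction step is commonly built and scored: prompt a model for candidate indicators, validate candidates against type-specific regular expressions, and compute F1 against gold annotations. The `call_llm` stub, the prompt wording, and the regex patterns are all illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of LLM-based IoC
# extraction with regex validation and F1 scoring. `call_llm`, the
# prompt, and the patterns below are illustrative assumptions.
import json
import re

# Type-specific validation patterns; illustrative, not exhaustive.
IOC_PATTERNS = {
    "ipv4": re.compile(r"^(?:\d{1,3}\.){3}\d{1,3}$"),
    "md5": re.compile(r"^[a-fA-F0-9]{32}$"),
    "sha256": re.compile(r"^[a-fA-F0-9]{64}$"),
    "domain": re.compile(r"^(?:[a-z0-9-]+\.)+[a-z]{2,}$", re.IGNORECASE),
}

PROMPT = (
    "Extract every indicator of compromise from the report below. "
    'Return JSON: {"ipv4": [], "md5": [], "sha256": [], "domain": []}.\n\n'
)

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model client; returns a canned
    response so the sketch runs end to end."""
    return json.dumps({
        "ipv4": ["203.0.113.7", "203.0.113"],  # second is truncated, fails validation
        "md5": ["d41d8cd98f00b204e9800998ecf8427e"],
        "sha256": [],
        "domain": ["evil.example.com"],
    })

def extract_iocs(report_text: str) -> dict[str, list[str]]:
    """Ask the LLM for candidate IoCs, then keep only well-formed ones."""
    candidates = json.loads(call_llm(PROMPT + report_text))
    return {
        ioc_type: [v for v in candidates.get(ioc_type, []) if pattern.match(v)]
        for ioc_type, pattern in IOC_PATTERNS.items()
    }

def f1_score(predicted: set[str], gold: set[str]) -> float:
    """F1 over extracted indicators against gold annotations."""
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    report = "Beaconing to 203.0.113.7 and evil.example.com observed."
    extracted = extract_iocs(report)
    print(extracted)
    flat = {v for values in extracted.values() for v in values}
    gold = {"203.0.113.7", "evil.example.com",
            "d41d8cd98f00b204e9800998ecf8427e"}
    print(f"F1 = {f1_score(flat, gold):.2f}")
```

The regex pass matters because generative models occasionally emit malformed or hallucinated indicators; filtering candidates by syntactic validity before scoring is a standard hygiene step in extraction pipelines of this kind.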

Author Biography

  • Julian Prescot, Computational Science, Princeton University, Princeton, NJ, USA

Published

2024-02-05

How to Cite

[1] Ruoxi Jia, Jin Zhang, and Julian Prescot, “An Empirical Study of Large Language Models for Threat Intelligence Analysis and Incident Response”, Journal of Computing Innovations and Applications, vol. 2, no. 1, pp. 99–110, Feb. 2024, doi: 10.63575/CIA.2024.20109.