An Empirical Study of Large Language Models for Threat Intelligence Analysis and Incident Response
DOI: https://doi.org/10.63575/CIA.2024.20109

Keywords: Large Language Models, Threat Intelligence, Incident Response, Cybersecurity Automation

Abstract
The exponential growth of cyber threats necessitates advanced automation in threat intelligence analysis and incident response workflows. This empirical study investigates the application of Large Language Models (LLMs) across critical security operations tasks, including threat intelligence extraction, mapping of tactics, techniques, and procedures (TTPs), and automated response generation. Through systematic evaluation of multiple LLM architectures on real-world cybersecurity datasets comprising 1,000 threat intelligence reports and 500 incident records, we assess performance on entity extraction, threat actor attribution, and remediation recommendation tasks. Our experimental results demonstrate that LLMs achieve F1 scores exceeding 0.88 for Indicator of Compromise (IoC) extraction and reduce incident response time by 64% while maintaining 82% accuracy in MITRE ATT&CK technique mapping. The findings reveal significant efficiency gains, with Retrieval-Augmented Generation (RAG)-enhanced configurations showing a 19% performance improvement over baseline approaches. This work provides empirical evidence supporting LLM deployment in security operations centers and identifies critical challenges in production environments.
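
For context on the headline F1 figure, IoC extraction is conventionally scored set-wise against analyst-annotated ground truth. The sketch below is illustrative only, not the authors' evaluation code; the example indicators and the ioc_f1 helper are hypothetical.

```python
# Illustrative sketch (assumption, not from the paper): set-based F1
# scoring for IoC extraction, the metric family behind the 0.88 figure.
from typing import Set

def ioc_f1(predicted: Set[str], gold: Set[str]) -> float:
    """Compute F1 over predicted vs. gold Indicators of Compromise."""
    if not predicted or not gold:
        return 0.0
    true_positives = len(predicted & gold)  # indicators found in both sets
    precision = true_positives / len(predicted)
    recall = true_positives / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: LLM-extracted IoCs scored against ground truth.
gold = {"45.77.12.8", "evil-domain[.]com", "d41d8cd98f00b204e9800998ecf8427e"}
pred = {"45.77.12.8", "evil-domain[.]com", "benign-host[.]net"}
print(f"F1 = {ioc_f1(pred, gold):.2f}")  # prints F1 = 0.67
```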


