Blog Details

The Risks of Using LLMs in Security Operations

The Risks of Using LLMs in Security Operations

LLMs (large language models) are everywhere in security operations these days. They can scan logs, suggest fixes, even draft reports in seconds. They come with a few gotchas that can trip you up if you’re not careful.

1. They Don’t Really Understand
LLMs are fast, but they don’t “get it” the way a human does.
1. They could just flag something harmless as a threat.
2. They might miss subtle warning signs that a trained analyst would catch.
Treat them like a helpful assistant, not the boss.

2. Data Exposure Is Real
Putting sensitive info into an AI system especially a cloud one can be risky.
1. Logs, credentials, internal notes: if they leave your control, that’s a problem.
2. Even small leaks can snowball.
Always know where your AI lives and how it handles data.

3. Advice Isn’t Always Reliable
LLMs generate recommendations based on patterns, not experience.
1. Sometimes it’s spot-on.
2. Other times, it’s… questionable at best.
Always double-check scripts or fixes before running them. Your servers will thank you.

4. Overconfidence Creeps In
It’s tempting to trust an AI’s answer blindly.
1. “It says we’re safe, so we must be.”
2. That’s when mistakes slip through.
Humans still need to verify and validate.

5. They can be turned against you.
Attackers may trick LLMs as well:
1. Malicious prompts can push analysts into unsafe actions.
2. Poisoned inputs can skew outputs and create blind spots.
Even defensive AI, if misused, may present a weak point.

How to Use LLMs Safely
1. Have an individual approve all AI-generated recommendations before taking action.
2. Restrict and limit sensitive data that will be shared with the AI.
3. Monitoring and validation of all AI-generated suggestions.
4. Incorporating AI-generated insights into any tools that currently exist in your environment including Security Information and Event Management (SIEM) and Endpoint Detection & Response (EDR).
5. Provide training for your team members on the quirks and limitations associated with AI.

AI (Large Language Model(s)/LLM)) can be incredibly helpful to security operations because they allow security teams to become faster and more efficient. However, AIs, like humans, can also mislead and/or misfire and, if not handled carefully, may leak sensitive data. Think of AI as like a superintelligence intern: they are helpful and amazing, but you still must supervise what they do.

© 2016 - 2025 Red Secure Tech Ltd. Registered in England and Wales under Company Number: 15581067