False Positive

One-liner: A security alert triggered by benign activity that is incorrectly flagged as malicious.

🎯 What Is It?

A false positive (FP) occurs when a security detection system (SIEM, IDS, antivirus, etc.) generates an alert for activity that appears suspicious but is actually legitimate. False positives waste analyst time, cause alert fatigue, and can lead to real threats being missed.

⚠️ The Problem with False Positives

Alert Fatigue

High FP Rate → Analyst Burnout → Missed True Threats

When SOC analysts face thousands of alerts daily, with 80%+ being false positives, they become desensitized, triage more slowly, and start closing alerts without real investigation, which is exactly how genuine threats slip through.

Cost Impact

Every hour spent triaging benign alerts is analyst time and budget not spent on real incidents, and chronic alert noise contributes to SOC staff turnover.

🔍 Common Causes

| Cause | Example |
|---|---|
| Overly Broad Rules | Alert on any PowerShell execution (including admin scripts) |
| Lack of Context | Flagging admins accessing servers they manage |
| Poor Baselining | Not understanding normal network behavior |
| Signature Overlap | Benign software matches a malware signature |
| Misconfigured Thresholds | Alerting on a single failed login instead of 10+ |
| Outdated Rules | Detecting deprecated attack methods |

✅ Reducing False Positives

1. Baseline Normal Behavior

Understand what's normal in your environment: typical login times, commonly used admin tools, expected network flows, and scheduled jobs. Anything you alert on should be measured against that baseline.

2. Tune Detection Rules

Before: Alert on ANY PowerShell execution
After:  Alert on PowerShell + encoded command + no digital signature
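The tuned rule above can be sketched as a detection predicate. This is a minimal illustration, not a real SIEM rule: the event field names (`process`, `command_line`, `signed`) are hypothetical and will differ between log schemas.

```python
def should_alert(event: dict) -> bool:
    """Tuned rule: PowerShell + encoded command + no digital signature.

    Field names (process, command_line, signed) are illustrative;
    real SIEM schemas vary.
    """
    is_powershell = event.get("process", "").lower() in ("powershell.exe", "pwsh.exe")
    has_encoded = any(
        flag in event.get("command_line", "").lower()
        for flag in ("-enc", "-encodedcommand")
    )
    unsigned = not event.get("signed", False)
    # All three conditions must hold, so a signed admin script no longer fires.
    return is_powershell and has_encoded and unsigned
```

A signed admin script running `Get-Service` fails two of the three conditions and is suppressed, while an unsigned encoded command still alerts.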

3. Use Allowlists

Maintain allowlists of known-good source IPs, service accounts, and admin tooling so routine, expected activity is suppressed before it ever reaches an analyst.

4. Add Context

Enrich alerts with asset criticality, user role, and threat intelligence so analysts can judge at a glance whether an event warrants investigation.

5. Tiered Alerting

Informational → Low → Medium → High → Critical

Not everything needs immediate escalation.
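The severity ladder above can be expressed as a small escalation helper. This is an illustrative sketch, not a specific SIEM API; the tier names follow the ladder shown.

```python
# Tiers ordered from least to most urgent, matching the ladder above.
SEVERITY_LADDER = ["Informational", "Low", "Medium", "High", "Critical"]

def escalate(level: str) -> str:
    """Return the next tier up, capping at Critical."""
    i = SEVERITY_LADDER.index(level)
    return SEVERITY_LADDER[min(i + 1, len(SEVERITY_LADDER) - 1)]
```

For example, an alert that gains a corroborating indicator moves from Low to Medium rather than jumping straight to Critical.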

6. Continuous Feedback Loop

Analyst marks FP → Detection Engineer tunes rule → Deploy update

🆚 False Positive vs False Negative

| Type | Definition | Impact |
|---|---|---|
| False Positive | Alert fires, but activity is benign | Wasted time, alert fatigue |
| False Negative | No alert fires, but activity is malicious | BREACH (worst outcome) |

Security Trade-off: tightening rules to eliminate false positives risks creating false negatives, while loosening them buries analysts in noise.

Goal: Balance detection sensitivity with operational efficiency.

📊 Measuring FP Rate

Formula

FP Rate = (False Positives / Total Alerts) Γ— 100
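The formula translates directly into a small helper; multiplying before dividing keeps the result exact for whole-number inputs.

```python
def fp_rate(false_positives: int, total_alerts: int) -> float:
    """FP Rate = (False Positives / Total Alerts) x 100."""
    if total_alerts == 0:
        return 0.0  # no alerts means nothing to misclassify
    return false_positives * 100 / total_alerts

print(fp_rate(850, 1000))  # 85.0
```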

Industry Benchmarks

Tracking

Week 1: 1000 alerts → 850 FPs (85%)
Tune rules...
Week 4: 600 alerts → 180 FPs (30%)

🛠️ Detection Engineering Approach

Example: Tuning a Brute Force Alert

Initial Rule (High FP):

Trigger: 1 failed login attempt

Tuned Rule (Low FP):

Trigger: 
  - 10+ failed login attempts
  - Within 5 minutes
  - From same source IP
  - Against multiple accounts
  - Outside business hours
  - AND source IP not in admin allowlist

🎀 Interview Angles

Q: How do you handle a high false positive rate in your SOC?

STAR Example:
Situation: Our SOC had an 80% FP rate on PowerShell execution alerts.
Task: Reduce FPs without missing real threats.
Action:
- Baselined normal PowerShell usage across admin teams
- Added allowlist exclusions for signed scripts and known admin hosts
- Required additional indicators (encoded commands, missing digital signature) before alerting
Result: The FP rate dropped substantially with no loss of true-positive detections, freeing analyst time for real investigations.

Q: What's worse: false positives or false negatives?

A: A false negative is worse in isolation, since a missed breach is the worst possible outcome. But a high false positive rate indirectly causes false negatives through alert fatigue, so both must be managed together.