One vendor uses 5 patterns, the other uses 500 rules. What’s better?
Anyone who has configured a SIEM or UEBA (e.g., QRadar, Splunk, ArcSight, Nitro, AlienVault, Securonix, Exabeam) knows that tuning 500 rules is painful — and no one will run them in production. So, why even consider 500 rules?
More is not better.
It’s important not to get caught up in the vendor rules “arms race.” It’s also important not to assume that all vendor content is good and turn everything on straight out of the box. That could lead to high false-positive rates and degraded performance. And then what?
How about an alternative approach? Use five patterns, not 500 rules.
How to detect threats with 5 patterns instead of 500 rules, and why you don’t want all those rules
Again, more is not always better. Consider:
Do you want to manage false positives and the noise from alerts of 500 or more rules?
Can you detect everything you need in fewer rules?
Can you reduce the noise of false positives other ways, rather than tuning each rule?
How well will your SIEM/security analytics tool perform with 500 or more active rules?
Why not use rules for everything?
We tried that route 20 years ago and it failed. Remember when Gartner declared IDS dead? It isn’t really dead; it’s more of a zombie. It’ll never die, but its usefulness is limited. If rules could solve the security problem, then IDS/IPS (Intrusion Detection/Intrusion Prevention Systems) would be the answer.
Patterns are better than rules
With just five detection patterns (and a few variants of each), we can detect 99% of what most rule-based tools can detect and still find new and unknown threats.
Repeat Attacks – Everything counts in large amounts.
Success after Failed – They tried every door, and finally found one that was open (brute force).
Rare Events – User or machine did something it’s never done before – a change of behavior.
Spike in Events (or spike in total $/bytes/numeric field) – A change in quantity.
Peer Anomaly – Entity did something other similar entities don’t do.
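Each of the five patterns above can be expressed in a few lines of stream logic. The sketch below is illustrative only: the event shape, thresholds, and baseline/history inputs are all assumptions, not a real SIEM API.

```python
from collections import Counter, defaultdict

# Hypothetical event shape: {"entity": "alice", "action": "login_failed", "bytes": 0}
# All thresholds are illustrative, not recommendations.

def repeat_attacks(events, threshold=100):
    """Repeat Attacks: the same action from the same entity, in large amounts."""
    counts = Counter((e["entity"], e["action"]) for e in events)
    return [key for key, n in counts.items() if n >= threshold]

def success_after_failed(events, max_failures=5):
    """Success after Failed: a burst of failures, then a success (brute force)."""
    hits, failures = [], defaultdict(int)
    for e in events:  # events assumed sorted by time
        if e["action"] == "login_failed":
            failures[e["entity"]] += 1
        elif e["action"] == "login_success":
            if failures[e["entity"]] >= max_failures:
                hits.append(e["entity"])
            failures[e["entity"]] = 0
    return hits

def rare_events(events, history):
    """Rare Events: an entity does something it has never done before."""
    return [e for e in events if e["action"] not in history.get(e["entity"], set())]

def spike(events, baselines, factor=3):
    """Spike: total bytes (or any numeric field) far above the entity's baseline."""
    totals = defaultdict(int)
    for e in events:
        totals[e["entity"]] += e.get("bytes", 0)
    # Entities with no baseline never fire (factor * inf is never exceeded).
    return [ent for ent, t in totals.items()
            if t > factor * baselines.get(ent, float("inf"))]

def peer_anomaly(events, peers, history):
    """Peer Anomaly: an entity does something none of its peers have done."""
    hits = []
    for e in events:
        peer_actions = set().union(*(history.get(p, set())
                                     for p in peers.get(e["entity"], [])))
        if e["action"] not in peer_actions:
            hits.append((e["entity"], e["action"]))
    return hits
```

Note that every detector is parameterized per entity rather than hard-coded, which is what separates a pattern from a static rule.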
It’s not all rainbows and butterflies
Yes, I admit, some things do require simple matching rules. For instance, compliance-mandated alerts for audit purposes. There are some single conditions/single match events that we do need rules for, but these should be the exception, not the norm. We should expect them never to go off.
Access to a trusted/protected system from an untrusted network.
Use of default credentials (e.g., guest, anonymous, admin, root, database administrator, system administrator).
Known threats in/out of the network – TTP match/threat intel match on live events.
This is one of the few rules I do like. It’s a bit like supervised machine learning on your data. If known/scored data sets can match an event to an indicator of compromise (IOC), then yes, we want to know about it. But don’t count on this one to save your bacon. In pentests (or worse, in breach responses), I’ve found that only about 30% of the malicious activity triggers known-evil rules.
My favorite threat chains start with something unknown (Rare, Spike, Peer) and then later, trigger a known-evil rule. It’s beautiful confirmation of malicious intent.
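That chain logic can be sketched as a tiny correlator: hold unknown-pattern hits (Rare, Spike, Peer) as a first strike, and alert only when threat intel later confirms the same entity. The IOC set and event fields here are hypothetical stand-ins for a real feed.

```python
# Illustrative IOC feed; in practice this would come from threat intel.
KNOWN_BAD_IPS = {"203.0.113.7"}

def chain_alerts(events):
    """Alert when an entity with a prior anomaly later matches known evil."""
    suspects, alerts = set(), []
    for e in events:  # events assumed sorted by time
        if e.get("pattern") in {"rare", "spike", "peer"}:
            suspects.add(e["entity"])           # strike one: unknown behavior
        if e.get("dst_ip") in KNOWN_BAD_IPS and e["entity"] in suspects:
            alerts.append(e["entity"])          # confirmation: known evil
    return alerts
```

The ordering matters: a known-evil hit with no prior anomaly stays quiet, while the same hit after a Rare/Spike/Peer strike fires with high confidence.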
Patterns produce noise. So how do you eliminate false positives?
Yes, patterns are noisy, but remember what we learned as kids playing baseball: three strikes and you’re out. In the modern world, MITRE calls this the ATT&CK® framework and the U.S. Department of Defense calls it a kill chain.
Rather than tune each rule to never alert on a false positive — an exercise in abject futility that often blinds us to events that are part of a threat chain — we look for three strikes. It’s not about one event, it’s about multiple indicators; a preponderance of evidence! Three strikes, you’re out!
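The three-strikes idea reduces to a small accumulator: count distinct pattern hits per entity and alert only when the evidence piles up. A minimal sketch, assuming hits arrive as (entity, pattern) pairs:

```python
from collections import defaultdict

def three_strikes(pattern_hits, strikes=3):
    """Alert when an entity accumulates `strikes` distinct pattern hits.

    pattern_hits: iterable of (entity, pattern_name) tuples.
    """
    seen = defaultdict(set)
    alerts = []
    for entity, pattern in pattern_hits:
        seen[entity].add(pattern)               # dedupe: one pattern = one strike
        if len(seen[entity]) == strikes:
            alerts.append(entity)
    return alerts
```

A single noisy pattern can fire all day without paging anyone; only the convergence of multiple distinct indicators on one entity does.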
Rules and patterns are noisy. False-positive rates on individual events, even when rules are well-tuned, result in alerts that are about 33% true positives, 33% misconfiguration issues, and 33% unknown or undetermined. Tune them beyond that and we create a false-negative problem. Simply put, we blind ourselves and miss sh*t.
For instance, RULE - Alert if there are five or more failed logins in a one-hour interval for the same user.
Once the attacker learns the pattern, they adjust the attack and go “low and slow” with no more than three failed logins in an hour. The attack takes a bit longer, but the attacker still gets in.
Now consider the same threat expressed as a pattern: alert if any user has 3x more failed logins than their normal in an hour, day, week, or month. What’s the threshold? It varies by user.
Let’s face it, some people fat finger their password a lot. I don’t want my SOC wasting time investigating Betty Backhoe, who works outdoors with big gloves in the winter in Minnesota, just because she failed to log in again for the 30th time today. But when my CEO’s account logs in from China for the first time ever after three failed logins today (and he’s never had more than one failed login before), I’m awake and listening.
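A per-user adaptive threshold like this can be sketched as follows; the input dicts, field names, and the minimum floor are assumptions for illustration.

```python
def failed_login_spikes(hourly_failures, baselines, factor=3, floor=3):
    """Alert when a user's failed logins exceed `factor` times their own norm.

    hourly_failures: dict of user -> failed logins this hour.
    baselines: dict of user -> that user's normal hourly failure count.
    """
    alerts = []
    for user, count in hourly_failures.items():
        # Each user gets their own threshold; `floor` keeps a new user with a
        # zero baseline from alerting on a single typo.
        threshold = max(factor * baselines.get(user, 0), floor)
        if count > threshold:
            alerts.append(user)
    return alerts
```

With this sketch, Betty’s 30 failures against a baseline of 30 stay quiet (her threshold is 90), while the CEO’s four failures against a baseline of one fire immediately.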
Are you still skeptical?
I’m always skeptical. I’m a curmudgeon. For light reading and similar conclusions, consider:
From personal experience having run a few hundred SIEM and UEBA projects, I know that ArcSight, QRadar, Splunk, Nitro, and other SIEMs can be tuned to detect 90+% of what is needed with 20 rules. For UEBAs, an additional 10 rules for the five patterns and a few variations (rare IP, rare EXE, rare country) mentioned above can be implemented with low false-negative rates and high detection fidelity. It’s not magic, it’s math. More on this, as well as specific patterns and log sources, in my next blog.
And if I haven’t convinced you and you still want extensive rule sets, there’s always SOC Prime for about $6,000/month.
If you want to see what I mean in action with Netenrich Resolution Intelligence Cloud, here’s a quick 7-minute demo.