Security is a Holistic Proposition

Gorka Sadowski




Why Rule-Based Log Correlation Is Almost a Good Idea - Part 2

Modeling attack scenarios? Is it possible?

Rule-based log correlation is based on modeling attack scenarios
Back to the visibility aspect.

"By managing all your logs you get universal visibility into everything that is happening in your IT infrastructure." Yes, this is a true statement.

But claiming that you can easily flag security attacks using rule-based correlation is a major overstatement.

Rule-based correlation essentially automates the "If this is happening here" and "That is happening there" then "We have a problem." More precisely, "If this precise event is taking place at this particular time on this specific device" and "That precise event is taking place at that particular time on that specific device" then "We may have a problem." Of course, you can set a time window (not too wide, as we'll see later) and you can specify a group of devices (not too many, as we'll also see later), but at the core of the engine it all gets translated into a multitude of single, specific rules.
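The "if this is happening here, and that is happening there, then we may have a problem" logic above can be sketched as a toy correlation engine. This is a minimal illustration, not a real SIEM rule syntax; the event format, field names, and device names are all invented:

```python
# Toy rule-based correlation: fire when one named event is followed by
# another named event, within a time window, across a device group.
from dataclasses import dataclass

@dataclass
class Event:
    name: str       # e.g. "failed_login" (hypothetical event taxonomy)
    device: str     # e.g. "fw-01"
    timestamp: int  # seconds since epoch

def correlate(events, first, second, devices, window):
    """Return pairs where `first` on a listed device is followed by
    `second` on a listed device within `window` seconds."""
    alerts = []
    for a in events:
        if a.name != first or a.device not in devices:
            continue
        for b in events:
            if (b.name == second and b.device in devices
                    and 0 < b.timestamp - a.timestamp <= window):
                alerts.append((a, b))
    return alerts

# "This precise event here" followed by "that precise event there":
log = [
    Event("port_scan", "fw-01", 100),
    Event("failed_login", "srv-02", 130),
]
hits = correlate(log, "port_scan", "failed_login",
                 {"fw-01", "srv-02"}, window=60)
```

Note how narrow the match is: the engine only fires on these exact event names, these exact devices, and this exact window. That narrowness is precisely what forces the "multitude of single, specific rules."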

Rule-based log correlation automates the inference of information by looking at several sets of logs. Not a bad idea from a theoretical standpoint, but far, far from the be-all and end-all of security touted by many vendors as a "plug and play," easy-to-use, low-TCO solution.

We first need to model a specific attack scenario: understand the different steps involved in the attack, then program rules to synthesize these steps.

Once this phase is done, and only after it is properly done (and, as we'll see below, it is not an easy task), you can put together a set of correlation rules to automate the decision of whether or not an attack is taking place, and raise an alert if required.

Simple enough, right?

At a high level, yes.

But think about it. Are attacks deterministic in nature? No. So trying to model an attack as a series of discrete steps will just not work.

An outdated, static approach that doesn't scale
This reminds me of the early days of IDS, when detection was based on recognizing static patterns. Each small variation of a known attack required a new signature, and although thousands of new signatures were constantly added, existing attacks still went right through undetected, while at the same time generating numerous false positives.
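To see why static pattern matching is so brittle, consider this sketch. The signature and payloads below are invented for illustration, not real IDS signatures: a signature that matches a known attack string verbatim misses the very same attack with one trivial encoding change.

```python
# Static-signature matching, early-IDS style: look for known byte
# patterns verbatim in the payload. (Signature and payloads are
# hypothetical examples.)
SIGNATURES = [b"cat /etc/passwd"]

def matches_signature(payload: bytes) -> bool:
    return any(sig in payload for sig in SIGNATURES)

known_attack = b"GET /vuln.cgi?cmd=cat /etc/passwd"
# Same attack, with the space URL-encoded as %20 -- one byte of
# variation is enough to slip past the exact-match signature:
variant = b"GET /vuln.cgi?cmd=cat%20/etc/passwd"
```

Every such trivial variation demands yet another signature, which is exactly the treadmill described above.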

This static model is outdated, ineffective, expensive to set up and maintain, and it doesn't scale: a typical example of a bad idea.

Especially when it's positioned as an easy-to-set-up, easy-to-use, out-of-the-box, deploy-and-forget solution - because it's not.

In rule-based log correlation, each attack scenario needs to be precisely modeled, and then a set of rules needs to be defined to defend against this attack. Any small variation, any minute difference with this scenario, will require different rules, in a different order, with exceptions and extensions.

How many attack scenarios exist out there?

Is this a model that scales?

Ask your favorite SaaS provider and/or MSSP how many correlation rules they manage for their clients, and they'll give you figures in the tens of thousands.

I know large MSSPs that manage 60,000 correlation rules, others 80,000 and more. And there is no guarantee that an attack will be stopped. Yet there are plenty of false positives.
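A back-of-the-envelope calculation shows how exact-match rules pile up to these tens of thousands. The numbers below are hypothetical, chosen only to illustrate the multiplication:

```python
# Illustrative rule-explosion arithmetic (all figures hypothetical):
# each scenario has several discrete steps, and each step has several
# trivially different observable variants (encoding, port, tool).
scenarios = 500          # distinct attack scenarios modeled
steps_per_scenario = 4   # discrete steps per scenario
variants_per_step = 3    # trivial variations of each step

# Exact-match rules must cover every combination of step variants:
rules = scenarios * variants_per_step ** steps_per_scenario
# 500 * 3**4 = 40,500 rules -- the same order of magnitude as the
# 60,000-80,000 rules large MSSPs report managing.
```

Even with modest assumptions, the count lands squarely in MSSP territory, and every new scenario or variant multiplies it further.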

Is your organization ready to manage this number of correlation rules?

Let's see why so many rules are required, and you'll understand that behind the general "good idea" principle you are in fact betting on an approach that is doomed to failure.

More Stories By Gorka Sadowski

Gorka is a natural-born entrepreneur with a deep understanding of technology, IT security, and how these create value in the marketplace. Today he offers innovative European startups the opportunity to benefit from the Silicon Valley accelerator ecosystem. Gorka spent the last 20 years initiating, building, and growing businesses that provide technology solutions to industry. From General Manager for Spain, Italy, and Portugal at LogLogic, defining next-generation log management and security forensics, to Director at Unisys France, bringing cloud security service offerings to market; from Director of Emerging Technologies at NetScreen, defining the next-generation firewall, to Director of Performance Engineering at INS, removing WAN and Internet bottlenecks, Gorka has always been involved in innovative technology and IT security solutions, creating successful business units within established groups and helping launch breakthrough startups such as KOLA Kids OnLine America, a social network for safe computing for children; SourceFire, a leading network security solution provider; and Ibixis, a boutique European business accelerator.