Security is a Holistic Proposition

Gorka Sadowski




Logs for Better Clouds - Part 6

Not all Log Management solutions are created equal

Log Collection and Reporting requirements
So far in this series we have addressed:

Trust, visibility, transparency. SLA reports and service usage measurement.

Daisy chaining clouds. Transitive Trust.

Intelligent reports that don't give away confidential information.

Logs.  Log Management.

Now, not all Log Management solutions are created equal, so what are some high-level Log Collection and Reporting requirements that apply to Log Management solutions?

Log Collection
A sound Log Management solution needs to be flexible enough to collect logs from a wide variety of log sources, including bespoke applications and custom equipment. Universal collection is important because it lets us collect, track and measure all of the metrics in scope for our use case: for example, the number and size of mails scanned for viruses, the number and size of files encrypted and stored in a Digital Vault, the number of network packets processed, or the number of virtual CPUs consumed.

And collection needs to be as painless and transparent as possible. Log Management needs to be an enabler, not a disabler! In order for the solution to be operationally sound, it needs to be easily integrated even in complex environments.
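
As a sketch of what universal collection can look like in practice, the fragment below parses heterogeneous raw log lines into per-metric counters. The two log formats, field names and metric names are invented for illustration, not taken from any particular product.

```python
import re
from collections import defaultdict

# Hypothetical patterns for two bespoke log formats; a real deployment
# would register one parser per application or piece of equipment.
PATTERNS = {
    "mail_scan": re.compile(r"scanned mail id=(?P<id>\S+) size=(?P<size>\d+)"),
    "vault_store": re.compile(r"stored file id=(?P<id>\S+) size=(?P<size>\d+)"),
}

def collect(lines):
    """Turn heterogeneous raw log lines into per-metric counts and volumes."""
    metrics = defaultdict(lambda: {"count": 0, "bytes": 0})
    for line in lines:
        for name, pattern in PATTERNS.items():
            match = pattern.search(line)
            if match:
                metrics[name]["count"] += 1
                metrics[name]["bytes"] += int(match.group("size"))
                break
    return dict(metrics)

raw = [
    "2011-03-01T10:00:01 scanned mail id=m1 size=2048",
    "2011-03-01T10:00:02 stored file id=f9 size=4096",
    "2011-03-01T10:00:03 scanned mail id=m2 size=1024",
]
usage = collect(raw)  # per-metric counters, ready for pay-per-use reporting
```

The point of the sketch is the shape of the problem: one collection pipeline, many parsers, one normalized set of counters downstream.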

Open Reporting Platform
In addition to easy universal collection, the Log Management solution needs to be an Open Platform, allowing a Cloud Provider to define very specific custom reports on both standard and non-standard types of logs.

Many different types of reports will be used, but they fall into two categories.

External-facing reports are the ones shared with adjacent layers, for example service usage reporting, SLA compliance, and security traceability. These will have to show information about all the resources required to render a service while not disclosing information considered confidential.

Internal reports deal with internal "housekeeping" needs, for example security monitoring, operational efficiency, and business intelligence.

And for the sake of Trust, all of these reports need to be generated with the confidence that all data (all raw logs, in our case) has been accounted for and computed.

We can see that many internal and external-facing reports need to be generated and precisely customized, and again this needs to be easy to achieve: an open reporting platform.

This will allow several populations of users to generate their own sets of ad-hoc reports showing exactly what they need to see, based on their specific needs and requirements.
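
To make the two categories concrete, here is a minimal sketch in which the same raw log store feeds both an external-facing usage report and an internal housekeeping report. All field names, clients and node names are hypothetical.

```python
# Hypothetical raw logs; in practice these come from the collection layer.
raw_logs = [
    {"client": "acme",   "op": "encrypt", "bytes": 4096, "node": "vault-03"},
    {"client": "acme",   "op": "encrypt", "bytes": 2048, "node": "vault-07"},
    {"client": "globex", "op": "scan",    "bytes": 1024, "node": "av-01"},
]

def external_report(logs, client):
    """Client-facing: aggregate usage only; internal node names are withheld."""
    rows = [entry for entry in logs if entry["client"] == client]
    return {"client": client,
            "operations": len(rows),
            "total_bytes": sum(entry["bytes"] for entry in rows)}

def internal_report(logs):
    """Housekeeping: per-node load, including confidential topology details."""
    load = {}
    for entry in logs:
        load[entry["node"]] = load.get(entry["node"], 0) + 1
    return load

external_report(raw_logs, "acme")  # usage totals, no node names exposed
```

Both reports are computed from every relevant raw log, which is what lets each side trust that nothing was left out, while each audience sees only its own view.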

Operational Model
The following diagram depicts the high-level Operational Model of a Cloud Provider, with the Log Management solution and the associated flows of information.

 

Figure 6 - Log Management solution and interaction within a Cloud Provider

At Layer N, internal processing is comprised of processes A through F, each having logs collected by the Log Management solution at Layer N.

These "local" logs, i.e., logs about local processing, will be augmented by logs collected from the subcontracting layer, giving visibility into the complete lifecycle of end-to-end processing.

Logs are the data points that will be used 1) as "counters" of minute operations for pay-per-use purposes, 2) for SLA reporting, and 3) for traceability, as well as for security, operational efficiency, etc.

The requirement for inter-layer visibility means that there are logs and reports that a Cloud Provider (Layer N) needs from its subcontractor (its Layer N+1). Likewise, logs and reports from Layer N will need to be made available to its client (its Layer N-1). If logs are deemed confidential and a Cloud Provider does not want them collected by its client(s), then proper reports need to be put in place to give the client visibility into the mutually agreed-upon metrics without disclosing the confidential raw logs themselves.
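
One way to picture this flow: Layer N keeps its own raw logs, receives only an agreed-upon report from Layer N+1, and hands a combined report up to Layer N-1. The function below is an illustrative sketch; the metric names are assumptions, not a standard.

```python
def layer_view(local_logs, subcontractor_report):
    """Combine Layer N's own logs with the report received from Layer N+1
    into the report handed up to Layer N-1 (the client)."""
    return {
        "local_operations": len(local_logs),
        # Only the mutually agreed metrics from the layer below are passed
        # through; its confidential raw logs are never forwarded.
        "subcontracted_operations": subcontractor_report["operations"],
        "end_to_end_operations": (len(local_logs)
                                  + subcontractor_report["operations"]),
    }

# Layer N processed 2 operations itself and subcontracted 5 more.
report_up = layer_view(["log-a", "log-b"], {"operations": 5})
```

Each layer applies the same pattern recursively, which is what makes the daisy-chained trust model workable.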

Anti-inference solutions and approaches already exist in the database world and can be used in this situation.
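
One classic technique from that world is query-set-size control: suppress any aggregate computed over too few records, so that a client cannot reconstruct individual operations from the published figures. A minimal sketch, with an arbitrary threshold chosen for illustration:

```python
from collections import Counter

def safe_aggregate(rows, key, min_group=5):
    """Publish group counts only for groups of at least min_group records,
    suppressing small groups that would let a reader infer individual rows."""
    counts = Counter(row[key] for row in rows)
    return {group: n for group, n in counts.items() if n >= min_group}

rows = [{"op": "encrypt"}] * 6 + [{"op": "delete"}] * 2
safe_aggregate(rows, "op")  # the 2-record "delete" group is suppressed
```

The right threshold, and whether suppression alone is enough, depends on what an adversarial client could already know; this is only the simplest of the database anti-inference controls.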

Sounds complex?

Actually, it's not that bad: just understand what information you need from the layer below, and what you will need to give to the layer above. Work out your reports so that you get the information you need, and give the information that is required.

In case of a major dispute, when indisputable proof is required, all the raw logs are centralized and easily accessible via the layer in question anyway.

Next time, we'll talk about the requirements concerning integrity and proof of immutability of logs, and what this means for the end-to-end treatment of logs, especially storage.
