The Right Ingredients For Staying Ahead of The Bad Guys

One of the common threads in major data breaches these days is that the victim’s security team had alerts or events that should have clued them in to the fact that an attack was underway. In today’s complex security infrastructures it’s not unusual for security operators and analysts to receive tens of thousands of alerts per day! Security monitoring and incident response need to transition from a basic rules-driven, eyes-on-glass SIEM capability to a big data and data science solution. I frequently speak with customers about how IT Security needs to be able to handle a lot more information than current SIEM tools can support, and one question that always comes up is “what information needs to be collected and why?” – so here we go.

To start with, you still need to collect all of those alerts and events from your existing security tools. While maintaining eyes-on-glass analysis of each individual alert from every tool isn’t feasible, a security analytics tool can analyze and correlate those events into groups of related activities, helping an analyst understand the potential impact of a sequence of related events instead of having to slice and dice them manually.
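
To make that a bit more concrete, here is a minimal Python sketch of the kind of correlation I’m describing – grouping alerts that touch the same host within a time window into one activity sequence. The field names and alert records are hypothetical; real SIEM exports vary widely in shape.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical alert records pulled from existing security tools.
alerts = [
    {"time": datetime(2015, 3, 2, 9, 14), "host": "web01", "signature": "Suspicious login"},
    {"time": datetime(2015, 3, 2, 9, 16), "host": "web01", "signature": "Privilege escalation"},
    {"time": datetime(2015, 3, 2, 11, 3), "host": "db07",  "signature": "Port scan"},
]

WINDOW = timedelta(minutes=30)

def correlate(alerts, window=WINDOW):
    """Group alerts touching the same host into time-windowed activity sequences."""
    groups = defaultdict(list)              # host -> list of alert sequences
    for alert in sorted(alerts, key=lambda a: a["time"]):
        seqs = groups[alert["host"]]
        if seqs and alert["time"] - seqs[-1][-1]["time"] <= window:
            seqs[-1].append(alert)          # continue the current sequence
        else:
            seqs.append([alert])            # start a new sequence for this host
    return groups

for host, sequences in correlate(alerts).items():
    for seq in sequences:
        print(host, "->", [a["signature"] for a in seq])
```

A real analytics tool correlates across hosts, users, sessions and many other attributes at once, but even this toy grouping shows how two individually unremarkable alerts become one story when they are tied together.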

The second type of information is infrastructure context – what’s in the environment, how it’s configured, how it’s all related and what its impact is. The analytics system needs to understand which applications are running on which servers, connected to which networks and which storage. With access to these relationships, the analytics tool can identify the broad-based impact of an attack on a file server by understanding all of the applications that access that file server, and weight the alert accordingly. Which brings up another critical point – assets need to be classified based on their potential impact to the organization (aka security classification). If the tool identifies suspicious sequences of activity on both a SharePoint site used to exchange recipes and an Oracle database containing credit card numbers, but doesn’t understand the relative value of each impacted asset, it can only present both alerts as being of equal impact and let the operator decide which one to handle first. So a consolidated, accurate, up-to-date and classified system-of-record view of your environment is critical.
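
Here’s a rough sketch of that weighting idea in Python. The asset names, classification labels and weight values are all assumptions for illustration – the point is simply that the same raw severity scores very differently once asset classification is applied.

```python
# Hypothetical asset inventory with business-impact classifications.
ASSET_CLASSIFICATION = {
    "sharepoint-recipes": "low",
    "oracle-cardholder-db": "critical",
}

# Assumed weighting scheme; a real system of record would drive these values.
IMPACT_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def prioritized_score(base_severity, asset):
    """Scale a raw alert severity by the classification of the asset it touches."""
    classification = ASSET_CLASSIFICATION.get(asset, "medium")
    return base_severity * IMPACT_WEIGHT[classification]

# The same suspicious sequence lands very differently on the two assets.
print(prioritized_score(5, "sharepoint-recipes"))     # 5
print(prioritized_score(5, "oracle-cardholder-db"))   # 50
```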

Event logs from all of those infrastructure components are the third type of information – not just security events but ‘normal’ activity events as well. This means all possible event logs from operating systems, databases, applications, storage arrays, etc. Given that targeted attacks today can almost always succeed in getting into your infrastructure, these logs can help the analytics tool identify suspicious activity that may be occurring inside your infrastructure, even if the individual events don’t fall into the traditional bucket of security events. Here’s an example – a storage administrator makes an unscheduled snapshot of a LUN containing a database with sensitive data on a storage array, mounts it on an unsecured server and proceeds to dump the contents of the LUN onto a USB device. The storage array logs show that someone made an unauthorized complete copy of all of your sensitive data, but if you weren’t collecting and analyzing the logs from that storage array you would never know it happened.
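
A simplified Python sketch of how that storage example could be caught is below. The event records, field names and action labels are hypothetical – real array and operating system logs would need parsing and normalization first – but the idea is that a sequence of individually ‘normal’ administrative actions becomes suspicious when seen in order against the same object.

```python
# Hypothetical, already-normalized log events from the array and the mounting host.
events = [
    {"source": "array01",  "actor": "stor-admin", "action": "snapshot_create", "object": "lun-42"},
    {"source": "host-lab9", "actor": "stor-admin", "action": "lun_mount",       "object": "lun-42"},
    {"source": "host-lab9", "actor": "stor-admin", "action": "usb_write",       "object": "lun-42"},
]

# An ordered pattern of routine actions that together look like bulk exfiltration.
SUSPICIOUS_SEQUENCE = ["snapshot_create", "lun_mount", "usb_write"]

def matches_sequence(events, pattern, actor, obj):
    """True if the actions in `pattern` occur in order for the given actor and object."""
    relevant = (e["action"] for e in events
                if e["actor"] == actor and e["object"] == obj)
    it = iter(relevant)
    return all(action in it for action in pattern)

if matches_sequence(events, SUSPICIOUS_SEQUENCE, "stor-admin", "lun-42"):
    print("Possible bulk copy of a sensitive LUN to removable media")
```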

The fourth type of information a security analytics tool needs is threat intelligence – what the bad guys are doing in the world outside of your environment. A comprehensive threat intelligence feed into the security analytics tool will allow it to identify attempted communications with known command and control systems or drop sites, new attack tools and techniques, recently identified zero-day vulnerabilities, compromised identities and a host of other potentially relevant information. A subscription-based feed is a great way to cover this.
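
At its simplest, applying that intelligence is an indicator match – checking outbound traffic against known-bad addresses from the feed. The sketch below uses hypothetical, documentation-range IP addresses and made-up flow records purely to illustrate the idea; a real feed carries far richer indicators (domains, file hashes, tools, identities).

```python
# Hypothetical indicator feed: known command-and-control / drop-site addresses.
THREAT_INTEL_IOCS = {"203.0.113.45", "198.51.100.9"}

# Outbound connections pulled from firewall or flow logs (also hypothetical).
outbound = [
    {"src": "10.1.2.30", "dst": "93.184.216.34"},
    {"src": "10.1.2.87", "dst": "203.0.113.45"},
]

# Flag any internal host talking to an address the feed says is bad.
hits = [c for c in outbound if c["dst"] in THREAT_INTEL_IOCS]
for c in hits:
    print(f"Internal host {c['src']} contacted known C2 address {c['dst']}")
```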

The final type of information an analytics tool needs is network packets. Being able to identify a sequence of events that points to an infected server is only the first step – the analyst then needs to determine when the infection occurred and go back and replay the network session that initiated the infection to identify exactly what happened. Think in terms of a crime investigation – with a lot of effort and time the CSIs may be able to partially piece together what occurred based on individual clues, but being able to view a detailed replay of the network activities that led up to the infection is like having a complete video recording of the crime as it happened. Again, the goal is to provide the analyst and incident responder with complete information when the alert is raised, instead of having to spend hours manually digging for individual bits.
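
One way to picture that replay workflow is an index over full-packet-capture storage that lets the responder pull back every captured session involving a host in the window before the infection. The record layout, paths and times below are all assumptions for illustration, not any particular product’s format.

```python
from datetime import datetime, timedelta

# Hypothetical index over full-packet-capture storage: one record per captured session.
capture_index = [
    {"start": datetime(2015, 3, 2, 9, 12), "end": datetime(2015, 3, 2, 9, 13),
     "client": "10.1.2.30", "server": "203.0.113.45", "pcap": "/captures/0001.pcap"},
]

def sessions_before(host, infection_time, lookback=timedelta(hours=1)):
    """Return captured sessions involving `host` in the window before the infection."""
    window_start = infection_time - lookback
    return [s for s in capture_index
            if host in (s["client"], s["server"])
            and window_start <= s["start"] <= infection_time]

# Pull the sessions to replay once the analytics tool has pinpointed the infection time.
for s in sessions_before("10.1.2.30", datetime(2015, 3, 2, 9, 20)):
    print("Replay candidate:", s["pcap"])
```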

The volume of information and the amount of effort necessary to quickly identify and respond to security incidents in today’s environment are huge, which is why big data and data science-based tools are absolutely critical to staying ahead of the bad guys.

About the Author: John McDonald