CYBER SECURITY TREND #3 FOR 2016 AND 2017: OUT WITH THE OLD, IN WITH THE NEW

May 17, 2016

In our fourth submission for this series, we will explore Gartner’s third projection for 2016 and 2017.

Out with the Old, In with the New: 2016 will bring a shift toward more adaptive security architectures, says Gartner. Reliance on traditional and often overused “prevention and blocking” security tools will be relinquished in favor of threat detection and response mechanisms, which should begin to take the leading role in enterprise focus and investment. Moving into the future, these tools will become increasingly proactive and eventually even predictive, effectively moving customers away from the defensive, reactionary, “prevention-only” strategies they have clung to for years.

This prediction combines two major trends that RSM has been putting forth for several years regarding effective security: the move from a prevention focus to detection and response, and the move from manual methods to automation.

For the first point, organizations need only skim any news source from the last 24 months to see that the traditional focus on preventative controls is a strategy doomed to failure. Highly effective preventative controls (firewalls, patching, anti-virus, etc.) can limit the population of attackers capable of breaching an organization, but they can never reduce that number to zero. An organization is only one employee’s mistake or one unpatched system away from being breached. The goal of such controls should be to make compromising the organization as difficult a proposition as possible, but companies must then shift their focus to controls that allow them to know when a truly skilled attacker has bypassed those defenses.

This leads to the core of Gartner’s prediction: the shift in focus to security monitoring and incident response. The concept is relatively simple: harden preventative controls to the point that only a small set of attackers have the skill to breach the environment, maintain robust security monitoring that alerts the organization immediately when a breach occurs, and then respond effectively and push the attackers out before damage is done. Easy to say, hard to do. In reality, many organizations are already investing heavily in security monitoring, whether in the form of on-site SIEMs or outsourced managed security providers. While they often realize quick, noticeable security improvements from these efforts, those same organizations also hit a “wall” that is difficult to move beyond. The problem is essentially too much of a good thing. It is good to implement granular logging throughout your environment and bring those logs together for review, but once you are pulling from more than a handful of systems, your security team is quickly overwhelmed by “white noise”: too much data arriving too fast for any meaningful analysis.

This situation leads to the second, necessary component of Gartner’s prediction. If the monitoring process cannot be automated, it cannot be successful. It is that simple. A moderately sized mid-market organization can produce tens of millions of lines of logs in a day; the idea that a human being will manually review those logs and produce anything of value is laughable. Security monitoring platforms, whether on-site or outsourced, must be able to perform automated reviews of that material. Further, those automated reviews need to go beyond binary alerts such as “I saw a file that matched a signature that says the file is bad.” Attackers are too fast and too skilled, and they can alter their attack patterns faster than signature-based technologies can keep up. What is needed is an automated approach to developing a baseline of “normal” behavior within an environment. You cannot tell when something abnormal is occurring until you can define what is normal.
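To make that idea concrete, below is a minimal sketch of what a behavioral baseline might look like, assuming log data has already been reduced to per-source daily event counts. The source name, history window, and z-score threshold are illustrative assumptions, not a reference implementation of any particular monitoring platform.

```python
# A minimal sketch of baseline-driven anomaly detection. Assumes each source's
# activity has already been summarized as one event count per day.
from statistics import mean, stdev

def find_anomalies(daily_counts, min_history=14, z_threshold=3.0):
    """daily_counts: dict mapping source -> list of per-day event counts,
    ordered oldest to newest. Flags sources whose most recent day deviates
    sharply from their own historical baseline."""
    anomalies = []
    for source, counts in daily_counts.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < min_history:
            continue  # not enough data yet to say what "normal" looks like
        baseline, spread = mean(history), stdev(history)
        if spread == 0:
            spread = 1.0  # avoid division by zero for perfectly flat history
        z = (latest - baseline) / spread
        if z > z_threshold:
            anomalies.append((source, latest, round(z, 1)))
    return anomalies

# Hypothetical usage: a server that normally logs roughly 100 events per day
# suddenly logs 5,000.
history = {"app-server-01": [98, 102, 95, 110, 101, 99, 97, 105, 100, 96,
                             103, 99, 102, 98, 5000]}
print(find_anomalies(history))  # [('app-server-01', 5000, ...)] -> investigate
```

The point of the sketch is only that the alert is driven by deviation from the source’s own history rather than by any signature; real platforms would weigh far more context than a single daily count.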

Next-generation security monitoring solutions can start to build patterns based on the behaviors of networks, systems, or even individual users. Over time, these patterns become shockingly accurate and can very effectively alert on possible malicious activity. Consider these examples:

  • Network: Over the course of a year, the boundary firewalls have never observed the internal network transmitting anything more than a few bytes over random high ports. Suddenly one night, for some unknown reason, an outbound high port tries to move 10 GB of data to an external IP address in Russia. You do not need to know the details of how an attacker might have entered the environment; the behavior should be blocked, alerted on, and immediately investigated.
  • Systems: Over the course of a year, two servers have never interacted with each other even though they are on the same network segment. With no notice, one server suddenly mounts a network share on the other server and begins moving large quantities of data. Again, the underlying details are irrelevant. It is common sense that the activity is noteworthy and should be investigated.
  • Users: Since joining the organization, a user has only ever logged into 10 systems, even though their privileges allow them to access dozens of other systems and applications. One random afternoon, the user’s account is used to access over 30 systems in the space of a few minutes. Once again, the details of the activity do not need to be known up front; the behavior is obviously questionable enough to warrant an immediate alert and investigation (see the sketch after this list).
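
The user scenario above lends itself to a small illustration. The sketch below is a simplified, hypothetical model: it assumes login events arrive as (user, host) pairs, and the spike factor and minimum-history values are arbitrary placeholders rather than recommendations.

```python
# A minimal sketch of the "Users" scenario: alert when an account touches far
# more systems in a short window than its own history suggests is normal.
from collections import defaultdict

class LoginBaseline:
    def __init__(self, spike_factor=3, minimum_known=5):
        self.known_hosts = defaultdict(set)   # user -> hosts seen historically
        self.spike_factor = spike_factor
        self.minimum_known = minimum_known

    def train(self, events):
        """events: iterable of (user, host) pairs from the historical window."""
        for user, host in events:
            self.known_hosts[user].add(host)

    def check_window(self, user, hosts_in_window):
        """Flag the user if the distinct hosts touched in a short window
        dwarf everything seen during the baseline period."""
        known = self.known_hosts[user]
        if len(known) < self.minimum_known:
            return False  # too little history to judge
        return len(set(hosts_in_window)) > self.spike_factor * len(known)

# Hypothetical usage mirroring the bullet above: a user who normally logs in
# to 10 systems suddenly hits more than 30 within a few minutes.
baseline = LoginBaseline()
baseline.train([("jdoe", f"host-{i:02d}") for i in range(10)])
burst = [f"host-{i:02d}" for i in range(35)]
print(baseline.check_window("jdoe", burst))  # True -> raise an alert
```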

The above scenarios have two primary points in common. First, none of them is based on signatures in the way anti-virus or intrusion detection systems are. The core of the monitoring is deviation from normal behavior, whatever the underlying cause, and this type of monitoring is far, far more difficult for attackers to evade. I can create unique malware that bypasses anti-virus with minimal effort, but I cannot traverse a network without leaving some obvious clues for a diligent observer. Second, the creation of the models that define “normal” behavior depends on automated analysis, since they are developed from massive quantities of data gathered over extended periods of time. It is highly unlikely that a human being, or a team of human beings, could produce similar models effectively. Once these models exist, they allow the organization to detect the presence of an attacker much earlier and begin the incident response process faster, thereby reducing the chance of significant damage.
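The point about evading signatures can be shown in a few lines. The sketch below compares a hash-based “signature” check against the same payload changed by a single byte; the payload and blocklist are obviously placeholders, and the only claim is that an exact-match signature stops matching while the underlying behavior would be unchanged.

```python
# A minimal sketch of why exact-match signatures are brittle: altering one byte
# of a payload produces an entirely different hash, so a blocklist of known-bad
# hashes no longer fires. The "payload" here is placeholder data, not malware.
import hashlib

known_bad_hashes = set()

original = b"...pretend this is a known malicious payload..."
known_bad_hashes.add(hashlib.sha256(original).hexdigest())

# The attacker appends one harmless byte before redeploying the same tool.
variant = original + b"\x00"

def signature_match(payload):
    return hashlib.sha256(payload).hexdigest() in known_bad_hashes

print(signature_match(original))  # True  -> the signature catches the original
print(signature_match(variant))   # False -> the trivially modified copy slips by
```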

For the mid-market, these two trends highlight the need for organizations to identify and invest in technologies that give them the capability for automated, behavioral analysis of their networks. Building preventative controls strong enough to protect against all possible attacks is simply not possible, and planning to rely on signature-based, manual methods of detection and response is simply not realistic.
