Deceptions Everywhere

Insights on threat and cyber risk trends, use cases for deception technology and strategies for combatting targeted attacks

We Are Failing At Information Security! (part 1)

Recent reports point to a troubling reality – threat detection strategies aren’t identifying attackers early enough, and dwell times are stubbornly high. In this 2-part guest post series from Kevin Fiscus, SANS instructor and cybersecurity expert, we’ll take a deep dive into how deception technology can help.

According to the FireEye 2020 M-Trends report, most organizations learn they have been compromised via notification by a third party. Furthermore, according to the 2020 Ponemon Institute Cost of a Data Breach study, the average amount of time it takes for organizations to identify and contain a breach is over seven months.

These numbers are troubling but also quite confusing. After all, our networks are outfitted with detective capabilities galore: IDS, DLP, antivirus, system and application logs, SIEM, SOAR, etc. Why are attackers still able to penetrate a network and remain there undetected for so long?

All too often, detective controls only come into play after a breach has already been discovered, when they are used to sort out what happened. Attempting to use these technologies in a proactive or real-time manner can be difficult. Traditional detective controls focus on identifying evil. Unfortunately, attackers do an excellent job of appearing to be “innocent,” thereby evading detection, at least for a time. Not only are delays in detection troubling in principle, they also significantly increase risk to our organizations. To understand why, we need to understand risk in more detail.

Analyzing cyber risk to understand threat detection delays


Risk can be defined as the likelihood that a threat exploits a vulnerability, causing harm. This definition incorporates four factors into the risk equation: threats, vulnerabilities, harm and likelihood. Threats are agents that can cause us harm and include things like malware, hackers, natural disasters, fires and insiders. For a threat to affect our organizations, it must take advantage of a weakness, or vulnerability. These include technical issues such as a missing patch, operational issues such as a lack of security awareness training, or even physical problems such as inadequate fire suppression systems.

There is little most organizations can do to reduce threats, and while a good percentage of security operations are focused on reducing vulnerabilities, the quantity of breaches we’ve experienced has shown those efforts to be insufficient. Likelihood is a factor in the risk equation that can be mitigated somewhat by vulnerability management and other security techniques, but it is largely a function of where and how threats and vulnerabilities intersect. If the vulnerabilities are difficult to exploit and the threats are unskilled, the likelihood is low. If the threats are skilled or the vulnerabilities are easy to exploit, the likelihood is higher.

This leaves us with the final component of the risk equation: harm. Even after a breach, we can reduce risk by reducing the harm, or cost, of the breach. According to the Ponemon Institute Cost of a Data Breach Study, there is a direct relationship between the time it takes to detect and respond to a breach and the cost of the breach. The longer detection and response take, the more the breach will cost. In other words, reducing harm reduces risk, and one way to reduce harm is to detect and effectively respond to breaches more quickly.
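
To make that relationship concrete, here is a minimal sketch of how the four factors might combine into a rough risk figure. The 1–5 rating scales, the base cost and the per-day cost growth rate are illustrative assumptions, not figures from the studies cited above.

```python
# Illustrative only: a toy risk score built from the four factors above.
# The 1-5 rating scales and the per-day cost growth rate are assumptions.

def likelihood(threat_skill: int, vuln_exploitability: int) -> float:
    """Likelihood rises where skilled threats meet easily exploited
    vulnerabilities. Both inputs are rated from 1 (low) to 5 (high)."""
    return (threat_skill * vuln_exploitability) / 25.0  # normalized to 0..1

def harm(base_cost: float, dwell_time_days: int) -> float:
    """Harm grows the longer detection and response take.
    The 1% growth per day of dwell time is a made-up illustrative figure."""
    return base_cost * (1 + 0.01 * dwell_time_days)

def risk(threat_skill: int, vuln_exploitability: int,
         base_cost: float, dwell_time_days: int) -> float:
    """Risk = likelihood that a threat exploits a vulnerability x the harm caused."""
    return likelihood(threat_skill, vuln_exploitability) * harm(base_cost, dwell_time_days)

# Same threat, same vulnerability, same base cost; only the dwell time changes.
print(risk(4, 4, base_cost=1_000_000, dwell_time_days=7))    # ~684,800
print(risk(4, 4, base_cost=1_000_000, dwell_time_days=200))  # ~1,920,000
```

The sketch simply makes the point visible: with the threat, the vulnerability and the base cost held constant, shrinking dwell time is the one remaining lever, and it reduces harm and therefore risk.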

This is the heart of the problem. It takes us far too long to detect and respond to breaches. Understanding the problem is not hard, but what about finding a solution?

Planting traps to detect and observe attackers


Consider a situation that could happen in real life. It is a dark night and the power is out. Someone decides to break into your home. The sound of them breaking in is likely to alert you to their presence, but if it does not, things get more interesting. The intruder now finds themselves in unfamiliar surroundings. They can’t see and they don’t know where to go. In all likelihood they will kick a child’s toy, knock over a lamp or step on that squeaky floorboard in the hallway. When they do, you will be alerted to their presence and can take appropriate action.

In that scenario, you now have the advantage because you are in a familiar environment where you can observe what is going on, orient yourself to the situation, decide what to do and then act on that decision while the invader is still trying to figure out where they are and what they should do.

Why are we compromised for over six months before we discover the breach? The answer is simple: our networks don’t have kids’ toys to kick, lamps to knock over or squeaky boards to step on.

Obviously, in a literal sense, toys, lamps and boards are not going to be placed on our networks, so it is important to identify the qualities of toys, boards and lamps that make them excellent detection tools in the real world. It boils down to one word: normal. In the middle of the night it is not normal to hear lamps crashing to the ground, and so when it happens, we are alerted to a potential problem. It is not, however, the falling lamp specifically that we are on alert for. We don’t list out the specific actions that we believe an invader will take and set mental alarms should any of those actions be detected. Instead, we understand what is normal and are alert to anything that deviates from that norm.
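
In code terms, that “alert on deviation from normal” idea can be sketched very simply. This is an illustration of the concept only, not a description of any particular product; the metric, the baseline window and the three-sigma threshold are arbitrary assumptions.

```python
# Illustrative sketch: alert on deviation from a learned baseline rather than
# on a list of known-bad actions. The metric and threshold are assumptions.
from statistics import mean, stdev

def is_anomalous(baseline: list[float], observation: float, threshold: float = 3.0) -> bool:
    """Flag an observation more than `threshold` standard deviations from the
    baseline mean: the digital equivalent of a lamp crashing in the night."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observation != mu
    return abs(observation - mu) / sigma > threshold

# e.g. outbound megabytes per hour from one host during a quiet week
normal_hours = [120.0, 135.0, 118.0, 122.0, 130.0, 127.0, 125.0]
print(is_anomalous(normal_hours, 126.0))  # False: within the norm
print(is_anomalous(normal_hours, 900.0))  # True: something just fell over
```

The hard part, as the next section explains, is not the comparison; it is building and maintaining a trustworthy baseline in the first place.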

Defining “normal” in IT security is really hard


Yet, from a cybersecurity perspective, this approach has its challenges. Attempting to “normalize” all network activity is extremely difficult given the complexity and dynamic nature of today’s computing environments, and even more so in a cloud, multi-cloud or hybrid environment. Furthermore, in the new era of the remote workforce and forced digital transformation, defining what normal is – work hours, location, device – is incredibly difficult. The Cost of a Data Breach study notes that “[o]f organizations that required remote work as a result of COVID-19, 76% said it would increase the time to identify and contain a potential data breach.”

And what about AI-powered or machine learning-based approaches like Network Traffic Analysis (NTA)? Well, according to a recent report in MIT Technology Review, models trained on normal behavior are showing cracks – forcing humans to step in to set them straight. This is “causing hiccups for the algorithms that run behind the scenes in inventory management, fraud detection, marketing, and more. Machine-learning models trained on normal human behavior are now finding that normal has changed, and some are no longer working as they should.”

Fortunately, there is a better way.

In part 2, I explain how deception technology – a technique that has been around for years but is now far more sophisticated at early threat detection – can truly give security teams back the advantage and change the world of information security.