The standard posture for most security programs is often described as “assume breach.” In other words, no matter how hard you try, a hacker will get through your defenses (indeed, is already inside and working to exfiltrate your data or sabotage your systems), so you should dedicate a significant portion of your available security resources to finding the breach that has already happened and limiting the damage. That’s not an encouraging message. Some would say it’s a losing approach.
The ideal, of course, is prevention. Keeping threats outside of a network in the first place would stop hackers from gaining access to the systems and data they want to steal. To date, and despite billions of dollars invested in information security innovation, prevention has remained elusive. According to the Identity Theft Resource Center’s 2018 End-of-Year Data Breach Report, there were more than 1,200 data breaches and more than 446 million compromised records reported last year in the U.S. alone. Why is this the case?
Consider the following kill chain (Figure 1), where signatures and sandboxes are used to detect the initial incursion at the perimeter firewall. Attackers have developed simple yet highly effective strategies for evading these traditional detection controls, which are at the heart of modern network security tools. Malware can be obfuscated or repackaged on the fly, and malicious payloads are constantly shifted to new URLs and IP addresses before their signatures are cataloged. Malware can also be engineered to detect and evade a sandbox.
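To see why hash-based signatures are so brittle, consider a minimal sketch (the payload bytes here are a harmless stand-in, not real malware): appending a single junk byte yields a completely different hash, so a cataloged signature no longer matches the repackaged file.

```python
import hashlib

# Hypothetical payload: a harmless stand-in for a malicious file.
payload = b"MZ\x90\x00" + b"\x00" * 60 + b"stand-in-malicious-logic"

# A signature engine typically matches on a cryptographic hash of the file.
original_sig = hashlib.sha256(payload).hexdigest()

# Trivial "repacking": one appended byte changes the hash entirely,
# so the cataloged signature no longer matches the same logic.
repacked = payload + b"\x00"
repacked_sig = hashlib.sha256(repacked).hexdigest()

print(original_sig == repacked_sig)  # False: same behavior, unmatched signature
```

The same principle applies to URL and IP reputation lists: the attacker only has to change the indicator, while the defender has to re-catalog it.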
Unfortunately, when threats bypass these signature- and sandbox-based layers of security, threat detection becomes more manual and exponentially slower. The timescale shifts from milliseconds to hours or days, if not longer. The model shifts from fully automated to manual, human-in-the-loop responses built on layered, defense-in-depth products such as network traffic analysis (NTA), threat hunting tools, user and entity behavior analytics (UEBA), and security information and event management (SIEM) solutions. These defense-in-depth products, which focus on identifying the symptoms of an attack, are complex, disjointed, and must be meticulously managed to remain current with the latest threat intelligence. Even then, threat detection is often based on learning a normal baseline of behavior and flagging anomalies or deviations from that baseline. That generates too many false positives and becomes a burden for security analysts.
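The false-positive problem with baselining is easy to see in a toy sketch (synthetic numbers, not any particular UEBA product): a simple standard-deviation threshold flags unusual-but-legitimate activity just as readily as a real attack.

```python
import random
import statistics

random.seed(7)

# Toy baseline: 30 days of "normal" outbound traffic volume (GB/day).
baseline = [random.gauss(100, 10) for _ in range(30)]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(value, threshold=2.0):
    """Flag any day deviating more than `threshold` standard deviations."""
    return abs(value - mean) / stdev > threshold

# A legitimate but unusual event (say, a large off-site backup) is flagged
# exactly like a real exfiltration would be: a false positive an analyst
# must now triage by hand.
print(is_anomalous(150))  # backup day, far above baseline: flagged
print(is_anomalous(102))  # ordinary day: not flagged
```

The detector has no notion of intent; it only measures deviation, so every flagged deviation still requires a human to decide whether it is malicious.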
ESG’s Jon Oltsik described the problem in a recent column in the security journal CSO. “Think about the ramifications here: Each of these tools must be deployed, configured, and operated daily,” he says. “Furthermore, each tool provides its own myopic alerting and reporting. Security analysts are then called upon to stitch together a complete threat management picture across endpoint security tools, network security tools, threat intelligence, etc. This is a manual process slog that doesn’t scale. Little wonder then why malware is often present on a network for hundreds of days before being discovered.”
According to the 2019 Verizon Data Breach Investigation Report (DBIR) (fig. 28), the typical time to compromise is measured in minutes and time to exfiltration is measured in hours, whereas the typical time to discovery is measured in months. Once a breach is discovered, containment can take days or weeks.
In light of these statistics, it’s clear we are focusing on the wrong problem. Rather than ceding the perimeter to threat actors in hopes of disrupting the kill chain as they move laterally through the network, we should be going on offense by detecting and mitigating as many threats as possible at the perimeter, with tools that do not require human analyst involvement. The issue is that we’ve been tackling the perimeter problem with signature- and sandbox-based technologies that are no longer effective. We need capabilities that are transformational; we need solutions that are faster, smarter, and more accurate than the adversary. We believe we’ve found that in deep learning. Until deep learning, no security technology was both fast enough and accurate enough to handle modern threats.
Why is this? Deep learning is a type of artificial intelligence. It is a subset of machine learning that learns directly from raw data, unlike traditional machine learning, which requires substantial human expertise to engineer features and train for desired outcomes. Deep learning is attracting a lot of attention for its applications in medical science and diagnostics, voice recognition and translation, computer graphics, robotics, and self-driving cars.
At Blue Hexagon we’re putting deep learning’s problem-solving skills to work on the challenges of cybersecurity, and its capabilities turn out to be an ideal fit. Unlike signature- and sandbox-based threat detection systems, deep learning can recognize both known and unknown threats and deliver a verdict in seconds, meaning it can be deployed at the network perimeter, where it can effect true threat prevention at a speed and scale hackers can’t match. Deep learning also operates at 10Gbps wire speed with no resulting latency, so it can run anywhere in the network without loss of efficacy.
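As a toy illustration of why inference can be this fast (a from-scratch sketch on synthetic byte histograms, not Blue Hexagon’s actual architecture or training data), a small neural network can learn to separate text-like payloads from high-entropy, packed-looking ones, after which a verdict on a new payload costs just a couple of matrix multiplies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (NOT real traffic): each sample is a 256-bin
# byte-value histogram of a payload. "Benign" samples skew toward printable
# ASCII; "malicious" samples mimic packed/encrypted data (near-uniform bytes).
def synth(n, malicious):
    alpha = np.ones(256)            # near-uniform, high entropy
    if not malicious:
        alpha[32:127] = 50.0        # text-heavy byte distribution
    return rng.dirichlet(alpha, size=n) * 256  # scale features to O(1)

X = np.vstack([synth(200, False), synth(200, True)])
y = np.array([0] * 200 + [1] * 200)

# A minimal one-hidden-layer network with a sigmoid output.
W1 = rng.normal(0, 0.1, (256, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, 16); b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

for _ in range(500):                             # plain gradient descent
    h, p = forward(X)
    g = (p - y) / len(y)                         # d(cross-entropy)/d(logit)
    gh = np.outer(g, W2) * (1.0 - h ** 2)        # backprop through tanh
    W2 -= 0.5 * (h.T @ g); b2 -= 0.5 * g.sum()
    W1 -= 0.5 * (X.T @ gh); b1 -= 0.5 * gh.sum(axis=0)

# Inference is just two matrix multiplies per payload, so a verdict can be
# produced inline, without detonating the sample in a sandbox.
X_new = np.vstack([synth(50, False), synth(50, True)])
_, p_new = forward(X_new)
accuracy = ((p_new > 0.5).astype(int) == np.array([0] * 50 + [1] * 50)).mean()
print(round(float(accuracy), 2))
```

The training step is where the expertise lives; once trained, the model itself is a fixed, cheap computation, which is what makes deployment at the perimeter plausible.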
For far too long our industry has been losing ground to the global hacker community, but we’re optimistic that deep learning can play a major role in tipping the scales in favor of the enterprise. The numbers look good compared to traditional solutions, and the results from customer environments have been promising. In every customer deployment and proof of value, we’ve identified threats in less than a second, while traditional security solutions have taken a week or more. Average threat detection rates for traditional signature and sandbox solutions have been in the 12-20% range on first arrival, compared to 99.5-99.8% accuracy for Blue Hexagon, with verdicts delivered in less than a second.
Here’s an example from a May 10th announcement of a North Korean tunneling tool called Electric Fish. While Blue Hexagon detected this threat in less than a second on May 10th, VirusTotal showed detection by only 13 of 70 security vendors, or about 18% coverage.
As shown in Figure 3, Rich Mason, CSO of Critical Infrastructure and former CISO of Honeywell, tested the same attack about a week later and found that detection coverage had reached only 60%.
- The industry needs to reconsider the efficacy of signature- and sandbox-based threat detection.
- Threats will only grow more complex, especially as adversarial AI comes into play, and these legacy perimeter security controls will not keep up.
- Early prevention can save thousands of dollars and will be more effective than analysis tools that require baselining and human analyst support.
Blue Hexagon’s unique approach to inspecting protocols and payloads can deliver real-time verdicts that tip the balance in favor of the good guys. Check out our latest whitepaper, where we dive into our metrics-driven approach to threat detection. And follow us on our journey as we unveil more of the advantages and applications of deep learning-powered threat detection.