As a long-time CISO, security advisor, and researcher, I insist on an approach based on defined parameters and measurable results. That idea may sound like common sense, but it is disappointingly rare in our industry.
To be effective in security, or any pursuit, you must know precisely what your challenge is, what your goals are, and what success looks like. Yet too many organizations treat cybersecurity and securing data as a battle with ghosts and boogeymen: they chase shadows and things that go bump in the night without ever taking the time to understand the true nature of the challenge before them, or to measure whether things are moving in the right direction.
One reason metrics-based strategies are uncommon in cybersecurity is that organizations rely on data supplied by vendors who measure their results against competitors, not against the conditions today's enterprises face. That's why I recently moderated a panel discussion on the eve of the RSA 2019 Conference on the topic of metrics in cybersecurity. The event, The CISO Manifesto, was sponsored by security newcomer Blue Hexagon, and its premise was rethinking how our industry measures success. The other participants were Richard Seiersen, security consultant and author; Anne Marie Zettlemoyer, vice president of security engineering at Mastercard; Greg Shannon, chief scientist for the CERT division at Carnegie Mellon University; and Tom Baltis, CISO of Delta Dental. You can watch the entire event here or read the whitepaper summary here.
One area where we critically need metrics is network perimeter security. The perimeter is one of the first lines of defense for the enterprise, yet today we often overlook it and assume that signatures and sandboxes will provide the best defense possible. Given the new threat landscape, that is no longer true.
I believe the metrics that matter right now are Mean Time to Verdict, Mean Time to Orchestrate, and First Observation Verdict Efficacy. Mean Time to Verdict is the time a security solution takes to deliver a verdict upon observing the first sample, and it goes hand in hand with First Observation Verdict Efficacy. First observation matters because most security vendors deliver very poor efficacy the first time they see a threat sample; in fact, unknown threats bypass signatures and sandboxes the very first time they appear. Later, once sandbox analysis catches up and a signature for the unknown threat is created and tested, the verdict is updated. To borrow the analogy of a bulletproof vest: that is like letting the first bullet through and then stopping the next 99. You would be 100% dead, yet in cybersecurity terminology, vendors report this as 99% efficacy.
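To make these two definitions concrete, here is a minimal sketch of how Mean Time to Verdict and First Observation Verdict Efficacy could be computed from detection records. The `Observation` record and its fields are my own illustrative assumptions, not any vendor's data model:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Observation:
    is_malicious: bool             # ground truth for the sample
    flagged_on_first_sight: bool   # was it called malicious the first time it was seen?
    verdict_latency_ms: float      # time from observation to verdict

def mean_time_to_verdict(obs):
    """Average time from first observation of a sample to a verdict."""
    return mean(o.verdict_latency_ms for o in obs)

def first_observation_efficacy(obs):
    """Fraction of truly malicious samples caught the first time they were seen."""
    malicious = [o for o in obs if o.is_malicious]
    return sum(o.flagged_on_first_sight for o in malicious) / len(malicious)

# Hypothetical records: one malicious sample slips through on first sight,
# e.g. because a signature only existed after sandbox analysis caught up.
samples = [
    Observation(True, True, 40.0),
    Observation(True, False, 50.0),
    Observation(True, True, 45.0),
    Observation(False, False, 38.0),
]
print(mean_time_to_verdict(samples))        # 43.25 ms
print(first_observation_efficacy(samples))  # 2/3 of malicious samples caught first time
```

Note that a tool which misses on first sight but catches replays later would still score well on naive detection rates; first-observation efficacy is precisely the metric that exposes that gap.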
Mean Time to Orchestrate is another critical metric: as soon as you detect a threat, all alerts and indicators of compromise should be shared quickly with your existing security products to stop lateral movement. As we head toward a real-time, AI-enabled arms race, what matters is making fast, correct decisions the first time, and responding on them just as fast.
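Mean Time to Orchestrate can be measured the same way: record when a verdict lands and when the corresponding indicator of compromise reaches each downstream tool, then average the delay. A minimal sketch, with hypothetical timestamps in seconds:

```python
from statistics import mean

def mean_time_to_orchestrate(events):
    """events: (detect_ts, propagated_ts) pairs in seconds.
    Mean delay between a verdict and the IOC reaching downstream security tools."""
    return mean(prop - det for det, prop in events)

# Hypothetical detections and the time the IOC reached, e.g., the firewall or EDR.
events = [(0.0, 1.2), (10.0, 10.9), (25.0, 26.5)]
print(mean_time_to_orchestrate(events))  # 1.2 seconds on average
```

The point of tracking this separately from detection is that a fast verdict buys you nothing if the indicator sits in a queue while an attacker moves laterally.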
This is why I’m excited about Blue Hexagon and how it performs against the metrics I think are important.
Blue Hexagon’s approach to cybersecurity uses deep learning. I know everyone talks about AI, but here is how deep learning is different: a deep learning system learns from data and, in this case, determines what a “threat” is without being told what specific “attacker mal-intent” to look for. In the Blue Hexagon deployment, its deep learning neural networks sit at the network perimeter (behind your switch and firewall) and inspect the complete network flow. Deep learning inspection is performed on both payloads and headers, and a threat verdict is delivered in less than a second.
I personally tested Blue Hexagon’s capabilities. Against a sample of 300 new viruses and variants (as of April 16, 2019), their platform detected 299 as malicious. That is a 99.67% efficacy score against samples that had no previous signature. What’s more, those 299 verdicts were rendered in an average of 43.18 milliseconds. Near line speed.
Mean Time to Verdict is significant when 87% of breaches occur in under 87 seconds (Verizon Data Breach Report, 2018). A sub-second verdict means those threats are found and blocked before they can get through and have a chance to activate.
When I ran those same samples through VirusTotal, an average of 24% of its 71 engines missed those viruses completely, despite having the advantage of minutes of additional sandbox processing time.
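As a quick sanity check on the figures quoted above (the sample counts come from the text; this is just the arithmetic, not new data):

```python
# 299 of 300 previously unseen samples detected
detected, total = 299, 300
efficacy = detected / total
print(f"First-observation efficacy: {efficacy:.2%}")  # 99.67%

# VirusTotal comparison: on average ~24% of 71 engines missed each sample
engines = 71
avg_missed = round(0.24 * engines)
print(f"Roughly {avg_missed} of {engines} engines missed on average")  # ~17 engines
```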
There’s still much to learn and more tests to conduct, but I’m convinced that the application of deep learning to meet the huge challenges that face our industry—and our customers—is a major breakthrough. I’ve seen what Blue Hexagon has done thus far and I’m encouraged that we now have a technology and product that can deliver on the metrics our panel and I deemed important.
I invite you to try them out. Either reach out to me directly, or sign up for a demo here.