MITRE released the D3FEND framework today (6/22/21), an effort funded by the National Security Agency to create a knowledge graph of cybersecurity countermeasure techniques. The project serves as a mirror of the MITRE ATT&CK framework: its goal is to provide a standard vocabulary for countermeasures that gives security architects and decision makers a starting point for prioritizing security spend and breadth of defense.
D3FEND is NOT intended to prioritize, prescribe, or characterize the efficacy of specific countermeasures; it is intended to provide a standard vocabulary.
Each countermeasure is also tied to specific adversary/offensive techniques. This allows security decision makers to 1) select products for evaluation that address their perceived countermeasure gaps, and 2) design red team tests that exercise each countermeasure in a precise and repeatable manner.
At a high level, the framework essentially discusses countermeasures in two buckets (leaving aside deception and eviction):
1) Hardening and Hygiene
2) Detection
as shown in the countermeasure “periodic” table diagram below.
The first bucket is “hardening and hygiene,” which involves taking care of the basics and reducing the attack surface, then looking at isolation and segmentation of networks and applications: who can talk to whom, and what is allowed to run. Countermeasures in the same column are typically performed by the same tool.
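To make the “who can talk to whom” idea concrete, here is a minimal sketch of a segmentation policy check. The subnets, tiers, and rules are hypothetical examples, not anything prescribed by D3FEND:

```python
# Sketch of a network segmentation check against a simple allow-list of
# (source subnet, destination subnet, permitted ports) rules.
# All subnets and rules below are hypothetical examples.
import ipaddress

POLICY = [
    (ipaddress.ip_network("10.1.0.0/16"),   # user workstations
     ipaddress.ip_network("10.2.0.0/24"),   # web tier
     {443}),
    (ipaddress.ip_network("10.2.0.0/24"),   # web tier
     ipaddress.ip_network("10.3.0.0/24"),   # database tier
     {5432}),
]

def flow_allowed(src: str, dst: str, port: int) -> bool:
    """Return True only if the flow matches an explicit allow rule."""
    src_ip, dst_ip = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    return any(
        src_ip in src_net and dst_ip in dst_net and port in ports
        for src_net, dst_net, ports in POLICY
    )

# Workstation -> web tier over HTTPS is allowed...
print(flow_allowed("10.1.5.9", "10.2.0.10", 443))   # True
# ...but a workstation talking straight to the database is not.
print(flow_allowed("10.1.5.9", "10.3.0.10", 5432))  # False
```

The default-deny shape (anything not explicitly allowed is blocked) is what makes segmentation an attack-surface reduction rather than just a routing rule.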
The second bucket is detection. In this bucket, we have five categories of countermeasures which cleanly map to specific security product capabilities:
File Analysis
These countermeasures are all file-analysis focused: dynamic analysis of files (aka sandboxing), file hash checking against threat intelligence, and running rules on file contents using YARA or fuzzy-hashing techniques like ssdeep. One clear omission is the use of deep learning or machine learning techniques for detecting malicious files, which can be applied on the network, on the endpoint, for container scanning, or standalone as a detection tool.
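The hash-checking countermeasure is simple enough to sketch. The indicator set below is a made-up stand-in for a real threat-intelligence feed:

```python
# Minimal sketch of hash-based file analysis: compute a file's SHA-256
# and look it up in a set of known-bad hashes. The indicator set here
# is a hypothetical stand-in for a real threat-intel feed.
import hashlib

KNOWN_BAD_SHA256 = {
    hashlib.sha256(b"malicious payload").hexdigest(),  # fabricated indicator
}

def is_known_bad(file_bytes: bytes) -> bool:
    """Return True if the file's SHA-256 matches a known-bad indicator."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_BAD_SHA256

print(is_known_bad(b"malicious payload"))  # True
print(is_known_bad(b"benign document"))    # False
```

Exact-hash matching is precise but brittle (any byte change evades it), which is why the framework pairs it with content rules and sandboxing, and why fuzzy hashes like ssdeep exist.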
Secure Email Gateway
The countermeasures listed here focus on MTA reputation and sender reputation. This section would benefit from additional countermeasures such as phishing detection using computer vision and NLP techniques, and learning anomalous sending/receiving behavior to detect business email compromise (BEC) attempts.
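As a toy illustration of the behavioral idea, one could baseline which sender/recipient pairs are normal and flag pairs never seen before. The addresses (including the look-alike domain) are invented; a real gateway would score many signals, not hard-flag one:

```python
# Toy sketch of learning sending/receiving behavior: build a baseline of
# (sender -> recipients) from historical mail, then flag messages whose
# sender/recipient pair has never been seen. Addresses are made up.
from collections import defaultdict

def build_baseline(history):
    """Map each sender to the set of recipients they have mailed before."""
    baseline = defaultdict(set)
    for sender, recipient in history:
        baseline[sender].add(recipient)
    return baseline

def is_anomalous(baseline, sender, recipient):
    """A never-before-seen sender/recipient pair is treated as anomalous."""
    return recipient not in baseline.get(sender, set())

history = [
    ("ceo@example.com", "cfo@example.com"),
    ("ceo@example.com", "assistant@example.com"),
]
baseline = build_baseline(history)
print(is_anomalous(baseline, "ceo@example.com", "cfo@example.com"))   # False
# A BEC attempt from a look-alike domain ("exarnple") has no history:
print(is_anomalous(baseline, "ceo@exarnple.com", "cfo@example.com"))  # True
```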
Network Detection and Response
The two columns – identifier analysis and network traffic analysis – fit neatly into network detection and response (NDR) tool capabilities. The countermeasures here include items like URL analysis, looking for homoglyphs, DNS traffic profiling, and a host of anomaly checks such as upload/download ratio deviations. This section also calls out “file carving,” which reassembles and analyzes files being sent over the network. This is an interesting technique because many modern attacks aim to be “file-less” from an endpoint perspective, never storing downloaded payloads on disk, so the network may be the only place the file is visible. While there are countermeasures listed for encrypted traffic, such as certificate analysis, it would be good to add a specific countermeasure named encrypted traffic analytics, since multiple security tools can examine HTTPS traffic patterns to find potential C2. Another countermeasure that needs to be added here is “beaconing and DGA analysis,” a core capability of network tools watching DNS, HTTPS, HTTP, and other protocols over which beaconing occurs.
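The beaconing idea can be sketched in a few lines: C2 implants often call home on a fixed timer, so connection inter-arrival times to a destination show unusually low variance. The timestamps and the coefficient-of-variation threshold below are illustrative, not tuned values from any product:

```python
# Rough sketch of beaconing detection: flag a series of connection
# timestamps whose inter-arrival gaps are near-constant. The threshold
# and sample timestamps are illustrative assumptions.
from statistics import mean, pstdev

def looks_like_beacon(timestamps, cv_threshold=0.1):
    """Return True if the gaps between timestamps are regular enough
    (coefficient of variation below cv_threshold) to suggest a beacon."""
    if len(timestamps) < 4:
        return False  # too few samples to judge regularity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) / mean(gaps) < cv_threshold

# A callback every ~60 seconds with slight jitter looks beacon-like...
print(looks_like_beacon([0, 60, 121, 180, 241, 300]))  # True
# ...while bursty, human-driven browsing does not.
print(looks_like_beacon([0, 5, 90, 95, 400, 420]))     # False
```

Real detectors also account for deliberate jitter, sleep cycles, and data volume per connection; this sketch shows only the core timing signal.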
Endpoint Detection and Response
The two columns – platform monitoring and process analysis – fit neatly into EDR tool capabilities. The countermeasures here include items like process spawn analysis, process lineage analysis, script execution analysis, and system call analysis, along with countermeasures at the OS and firmware level.
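Process lineage analysis boils down to walking a process's ancestry and flagging parent/child pairs that rarely occur legitimately. The pair list below is a hypothetical example, not any EDR vendor's actual rule set:

```python
# Simple sketch of process lineage analysis: scan a root-first process
# ancestry for parent/child pairs that rarely occur legitimately, e.g.
# an Office app spawning a shell. The pair list is a hypothetical example.
SUSPICIOUS_PAIRS = {
    ("winword.exe", "powershell.exe"),
    ("excel.exe", "cmd.exe"),
    ("outlook.exe", "wscript.exe"),
}

def suspicious_lineage(lineage):
    """lineage is root-first, e.g. ["explorer.exe", "winword.exe", ...].
    Return the first suspicious (parent, child) pair, or None."""
    for parent, child in zip(lineage, lineage[1:]):
        if (parent.lower(), child.lower()) in SUSPICIOUS_PAIRS:
            return (parent, child)
    return None

print(suspicious_lineage(["explorer.exe", "winword.exe", "powershell.exe"]))
# ('winword.exe', 'powershell.exe')
print(suspicious_lineage(["explorer.exe", "chrome.exe"]))  # None
```

This is also exactly the kind of countermeasure a red team can test precisely and repeatably, per the framework's stated use case: spawn the pair and confirm the alert fires.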
User Behavior Analysis
The final column in detection is user behavior analysis, which fits best with SIEM/UEBA products, although some of these items are also possible with EDR and NDR products, many of which correlate users with endpoint and network activity. The countermeasures identified here include items like logins from unusual geolocations, data transfer anomalies for users, and users' authentication patterns.
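The unusual-geolocation countermeasure reduces to a per-user baseline of previously seen login locations. The usernames and country codes below are invented, and a real UEBA product would score risk (considering travel feasibility, VPNs, etc.) rather than hard-flag a new country:

```python
# Toy sketch of login geolocation analysis: track the countries each
# user has authenticated from and flag logins from anywhere new.
# Users and countries are invented examples.
from collections import defaultdict

class GeoLoginMonitor:
    def __init__(self):
        self.seen = defaultdict(set)  # user -> countries seen before

    def check(self, user, country):
        """Return True if this location is new for a user with an
        existing baseline, then record it so the baseline keeps learning."""
        unusual = bool(self.seen[user]) and country not in self.seen[user]
        self.seen[user].add(country)
        return unusual

monitor = GeoLoginMonitor()
print(monitor.check("alice", "US"))  # False (first login builds the baseline)
print(monitor.check("alice", "US"))  # False (known location)
print(monitor.check("alice", "KP"))  # True  (never seen before)
```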
Overall, I think this is a neat way for everyone to share a consistent vocabulary of countermeasures and relate them to specific adversary techniques. While the framework seems reasonably comprehensive, a few more recent countermeasures could be added; given the maturity level of the framework, this should happen with time.
However, the framework in its current form seems biased toward traditional on-premises security. While most of the countermeasures apply to cloud security, it would be good to have a D3FEND for Cloud to match up with ATT&CK for Cloud, since roughly 20-30% of adversary techniques are unique to the cloud. Also, hardening and hygiene in the cloud are split between the cloud service provider and the customer, so the burden on the security architect could be reduced and refocused toward detection, infrastructure-as-code (IaC), and configuration checking.
Stay tuned for my thoughts on what should be part of D3FEND for Cloud.