The Need for Evidence-Based Security
In recent years, cybersecurity spending has risen within most organizations, yet there has been no corresponding decline in data breaches. In fact, studies show that despite the increased expenditure, data breaches are on the rise. As cybersecurity professionals, these seemingly paradoxical trends occurring in tandem should make us pause and question how we are going about securing our environments. How can we truly evaluate what we are doing right and what we are doing wrong?
As security professionals, have we ever sat down and made a genuine effort to empirically determine which controls are actually effective in our environment and which do little to protect it or, worse yet, actively undermine our security? As a case in point, consider the recent changes to the NIST password guidance (SP 800-63B), in which it was determined that the established “best” practice of forcing a password change every so many days actually did more harm than good: it encouraged users to pick weaker passwords, because the passwords needed to be easy to remember to cope with the frequent changes. Likewise, consider how the industry-stalwart approach of blacklisting known bad, using antivirus and similar products, has been shown to be increasingly ineffective against malware and other malicious behaviors, while allowing only known-good behavior has proven far more effective at preventing and mitigating security threats. Yet, despite these findings, there are still compliance standards that dictate the use of traditional AV as a required control.
It’s time we as security professionals move beyond measuring our security programs in terms of compliance alone and take a more evidence-based approach to how we do security. Compliance might provide a useful starting point for organizations that lack a mature security program, but it should not be an end goal: large gaps often exist between what is required to be compliant and what is actually required to be secure. Just look at all of the HIPAA-compliant healthcare organizations that have been taken out by ransomware or other cyber threats. Compliance needs to be viewed as a minimum baseline, not an end goal; shooting for compliance alone is like shooting for a D grade in a class. Sure, you may pass, but you are not really doing a great job. As security professionals, we need to develop ways to empirically measure which controls protect our environment against a given threat and which do not. Even for controls that are proven effective, we need to empirically establish that they were deployed properly and to an adequate level. For example, several years ago the author simulated a mock malware outbreak within an organization, using a script that simulated the spread of malware by attempting to copy and execute the harmless EICAR test string on each computer in the environment. The exercise was conducted in an organization that was compliant with the necessary industry standards and had best practices such as network segmentation in place. The interesting result was that, as expected, the network segmentation, which consisted of ACLs restricting the flow of traffic between subnets, did its job and kept the threat contained, yet even with that segmentation in place, a real malware outbreak would still have had an unacceptable level of impact on whichever subnet it reached.
By empirically evaluating its control set, the organization learned that the network segmentation it had in place was not fine-grained enough to suit its risk tolerance, and it has since moved to a more zero-trust-based network architecture. A written walkthrough of this exercise is available from the AEHIS IR Committee (https://aehis.org/download/17368/).
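The containment behavior observed in that exercise can be sketched in code. The following is a minimal, hypothetical model; the host inventory, subnet layout, and ACL rules are invented for illustration (in the real exercise, each “infection” was an attempt to copy and execute the harmless EICAR test file on a target machine):

```python
from collections import deque

# Hypothetical lab inventory: hosts grouped by subnet.
HOSTS = {
    "10.0.1.0/24": ["ws-01", "ws-02", "ws-03"],
    "10.0.2.0/24": ["srv-01", "srv-02"],
}

# ACL model: set of (source_subnet, dest_subnet) pairs that traffic may
# cross. Intra-subnet traffic is always permitted.
ACL_ALLOWED = {("10.0.1.0/24", "10.0.2.0/24")}

def subnet_of(host):
    """Look up the subnet a host belongs to."""
    for subnet, hosts in HOSTS.items():
        if host in hosts:
            return subnet
    raise KeyError(host)

def simulate_outbreak(patient_zero, acl_allowed):
    """Breadth-first 'worm' spread: each compromised host tries every
    other host, succeeding only where the ACLs permit the traffic."""
    infected = {patient_zero}
    queue = deque([patient_zero])
    while queue:
        src = queue.popleft()
        src_net = subnet_of(src)
        for dst_net, hosts in HOSTS.items():
            if src_net != dst_net and (src_net, dst_net) not in acl_allowed:
                continue  # segmentation blocks this hop
            for dst in hosts:
                if dst not in infected:
                    infected.add(dst)
                    queue.append(dst)
    return infected
```

With no cross-subnet rules, an outbreak starting at `ws-01` stays inside its own subnet but still takes every host on it, which mirrors the finding above: the segmentation contained the threat, yet the blast radius within the affected subnet remained total.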
Just as the rise of techniques such as FAIR (Factor Analysis of Information Risk) has begun to allow for improvements in quantifying the risk aspects of security, a similarly empirical approach needs to be taken to evaluating the efficacy of the technical and administrative controls we deploy within our environments. Sure, having that new high-end next-gen AV package is great, but has it ever been tested in your environment to see which threats may be able to bypass it? No defense is 100% effective. For the threats that can bypass it, how effective are the other layers of controls in your environment at mitigating (e.g., network segmentation) or detecting (e.g., a DNS sinkhole) the threat of a now-compromised endpoint? Do you know how fast your staff can detect, mitigate, and otherwise respond to an incident? If you can’t concretely answer these questions, how do you really know how secure your environment truly is?
To begin to take a more evidence-based approach to securing your environment, the following needs to be considered:
- Use a quantitative risk assessment framework such as FAIR to accurately assess which threats pose the biggest risks to your organization.
- For each identified threat, determine:
  - A way to simulate the threat as realistically as possible without actually damaging any of the organization’s systems or assets (e.g., using the harmless EICAR string to simulate malware).
  - The metrics needed to quantify the impact when that threat materializes and what further risks you are then exposed to (e.g., a phishing email leading to the compromise of an endpoint that can now be used as a staging ground to attack other systems).
  - The metrics needed to empirically determine the efficacy of the various controls used to prevent, detect, or mitigate the threat.
  - The metrics needed to quantify staff detection, mitigation, and recovery time for the threat.
- Execute your simulated threat and collect the needed metrics while the simulation is underway. A pen test can be considered a form of simulation, but the key is to evaluate the pen test, or whatever simulation you choose, against the metrics you developed in the preceding step, not merely to note that the pen tester gained access via xyz exploit. Many organizations respond to pen test findings by patching or otherwise mitigating xyz, and while that vector should be addressed, the proper metrics may reveal that the vulnerability behind xyz exploit is far less of a problem than the fact that the compromised system could be used as a trivial pivot point into critical internal resources. Making such pivoting less trivial, or impossible, may be a better use of resources, as a zero-day variant of xyz exploit is always just around the corner. The proper metrics need to be established, collected, and evaluated to empirically answer these questions.
- Once the simulation of a threat concludes and the metrics are collected, analyze them to identify which security controls worked well and where the controls in place fell short. Use these metrics to identify the deficiencies that led to the greatest adverse outcomes, and target those control areas for remediation first through the addition or modification of controls.
- Once controls are added or modified, repeat the testing and empirically determine how well the newly implemented or modified controls actually work by comparing the before and after testing metrics. Was the difference significant? Is the threat to the organization quantifiably reduced? Are there areas still in need of improvement, or are the organization’s risk tolerances satisfied?
- The above cycle of testing needs to be a continuous process, as there are always new threats on the horizon to test your environment against. Even threats already tested should be periodically retested, since changes that negatively impact security can be introduced into an environment over time. The more we test our environments, the more control deficiencies we will unearth, and the more we will be able to empirically demonstrate that we are actually making our organizations more secure, not just implementing controls that provide a perception of security with little actual benefit.
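The before-and-after comparison in the retest step above can be made concrete with a small metrics structure. This is a minimal sketch; the metric names and sample numbers are hypothetical, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class SimulationMetrics:
    """Metrics gathered during one run of a threat simulation.
    Field names are illustrative only."""
    hosts_impacted: int        # endpoints the simulated threat reached
    minutes_to_detect: float   # time until staff noticed the activity
    minutes_to_contain: float  # time until the threat was contained

def improvement(before: SimulationMetrics, after: SimulationMetrics) -> dict:
    """Percent reduction per metric between a baseline run and a retest
    after controls were added or modified."""
    def pct(b, a):
        return round(100 * (b - a) / b, 1) if b else 0.0
    return {
        "hosts_impacted": pct(before.hosts_impacted, after.hosts_impacted),
        "minutes_to_detect": pct(before.minutes_to_detect, after.minutes_to_detect),
        "minutes_to_contain": pct(before.minutes_to_contain, after.minutes_to_contain),
    }

# Hypothetical numbers from a baseline run and a retest after remediation.
baseline = SimulationMetrics(hosts_impacted=120, minutes_to_detect=45.0,
                             minutes_to_contain=180.0)
retest = SimulationMetrics(hosts_impacted=12, minutes_to_detect=10.0,
                           minutes_to_contain=35.0)
```

Comparing `improvement(baseline, retest)` against the organization’s documented risk tolerances turns “did the new controls help?” into a question with a quantifiable answer rather than a matter of opinion.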
As security professionals, it is time we take a more evidence-based approach to our security decisions and seek empirical evidence that what we are doing actually works to make our organizations more secure. Continual simulation and testing should be a key part of your organization’s security strategy.