Whenever security folks are asked to evaluate risks, the first thing they usually do is create a threat model. Some dislike the term “threat model”, since it doesn’t really model anything, and suggest another name: “threat analysis”.
After going through many more or less formal threat modelling methodologies, I’ve realized that regardless of the chosen methodology, in the end what your customers want is a simple table where threats are described in plain human language, and a risk score is expressed in terms that everyone intuitively understands, e.g. high, medium, low (HML).
That’s why I prefer to call my threat analysis a “threat table”. What can be simpler than that? Anyway, I’m done with choosing “the best threat modelling framework” for today. Threat Table it is.
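To make the idea concrete, here is a minimal sketch of what such a table could look like if you kept it in code rather than a spreadsheet. The `Threat` structure, the example threats, and their scores are purely illustrative assumptions on my part, not part of any formal methodology:

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"

@dataclass
class Threat:
    description: str  # plain human language, no jargon
    risk: Risk        # the intuitive HML score

# A hypothetical threat table for some web application
threat_table = [
    Threat("Unpatched third-party library allows remote code execution", Risk.HIGH),
    Threat("Verbose error pages leak stack traces to users", Risk.MEDIUM),
    Threat("Internal admin panel reachable from office Wi-Fi", Risk.LOW),
]

# Print the table, one threat per row
for t in threat_table:
    print(f"{t.risk.value:6} | {t.description}")
```

The point is not the code itself but the shape of the artifact: two columns, readable by anyone, sortable by severity.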
It’s not really about methodology, though. It’s about what you put into that table, and this is where the art begins. No methodology will help you understand which threats are real and which ones are speculative or imaginary.
There are many ways to avoid the art component in this process: e.g. you can stick to the company’s security and privacy policies, compliance or other regulatory documents, and reduce your task to a simple checkbox-marking exercise. In this case you are no different from external security auditors, whose efficiency is questionable given recent significant security breaches in organizations that were formally considered “compliant”.
Following the company’s policies alone can be even less adequate. The policies can be very generic or, vice versa, very specific and obsolete because of that (technology evolves fast, and so does the threat posture).
Then there is the “common knowledge” problem, which is well known and is taught at business schools (I’ve personally learned it here): a fact or opinion known to many, no matter how questionable, will always prevail over a less known fact or opinion, no matter how valid and important it may be.
Since security is very subjective and speculative, “common knowledge” becomes an even bigger issue. In the pre-Internet era, securing your network and data centers was all you needed to keep external intruders out. There was no such thing as “connected services” or service-oriented architecture. Silos were the most common approach to building absolutely isolated and self-sufficient applications. To avoid inter-application data leaks, all you needed to do was segregate networks.
Contemporary applications are built differently. To be competitive and efficient, they need to be exposed not only to other internal networks but to the Internet as well. With that comes a completely different approach to security: the app should take care of its own safety, not the network. Contrary to what many think (here comes the “common knowledge” problem again), this is achievable, and the threat of insufficient network segregation becomes less valid. Moreover, grading that risk as “High” can completely disable a business deeply rooted in the connected-services paradigm.
Another hidden cost comes from the fact that while all security resources are thrown at imaginary threats, the real ones, such as those attributed to third-party vulnerabilities, remain unaddressed and can lead to gruesome consequences (examples are many and don’t even need to be quoted here).
There is no simple solution to this problem. Your threat table will change only after you move the needle on “common knowledge” in your organization.
Good luck with that!