Existing Approaches.
While everybody seems to agree that architecture and design reviews and threat modelling (TM) are very important for enforcing security in the SDLC process, reaching an agreement on how these activities should be conducted is not that simple.
There is a well-documented approach based on STRIDE within Microsoft’s SDL methodology, but as I wrote before, it’s probably too theoretical, doesn’t operate in terms of real-world threats (e.g. the ones described in the OWASP Top 10) and might not be very efficient when a quick security assessment is required. Besides, I think that sometimes a DFD-based approach is not applicable at all, e.g. for things like protocols and APIs.
Each time somebody tells me that I must use DFDs and STRIDE for a threat model, I refer them to the excellent TM work conducted within the IETF’s OAuth 2.0 project. What I like best about this approach is that it doesn’t use any pseudo-scientific terminology: it’s very straightforward, it simply describes real attack scenarios in plain English and provides actionable mitigation controls.
On the other hand, I’ve seen many TMs created using the formal SDL approach, with nice DFD diagrams, trust zones and threat classifications expressed in STRIDE terminology, that didn’t make much sense to me, mostly because describing an attack in abstract categories doesn’t really help a developer understand the threat and how it can be exploited.
Let’s try to understand why this simple approach works better than over-formalized, wanna-be-scientific methods. Here are the major factors as I see them:
- The authors of the OAuth 2.0 TM are active participants in that IETF project and understand the protocol very well.
- The authors of the TM are capable of de-composing a system built around the protocol to understand how it can be attacked.
- The authors are familiar with contemporary, widespread exploits and are capable of explaining how those exploits can be applied to a system that uses the protocol (session impersonation, clickjacking, CSRF, malicious redirects, etc.).
- All scenarios are described in simple terms that any average developer can understand.
- There is a “countermeasure” section for each threat that clearly explains how the threat can be mitigated (a small sketch of this threat-plus-countermeasure style follows the list).
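To make that threat-plus-countermeasure style concrete, here is a minimal sketch of one pairing from the OAuth 2.0 space: CSRF against the client’s redirect endpoint, mitigated by binding an unguessable `state` value to the user’s session. The helper names, the endpoint URL and the `session` dictionary are my own illustration, not code from the IETF document:

```python
import secrets

# Threat: an attacker tricks a victim's browser into completing an OAuth
# authorization response the victim never started (CSRF on the client's
# redirect URI). Countermeasure: bind an unguessable `state` value to the
# user's session and verify it when the authorization server redirects back.

def build_authorization_url(session: dict, client_id: str, redirect_uri: str) -> str:
    """Start the flow: generate and remember a one-time state token."""
    state = secrets.token_urlsafe(32)      # unguessable, per-request value
    session["oauth_state"] = state         # tied to this user's session only
    return (
        "https://auth.example.com/authorize"        # illustrative endpoint
        f"?response_type=code&client_id={client_id}"
        f"&redirect_uri={redirect_uri}&state={state}"
    )

def handle_callback(session: dict, returned_state: str, code: str) -> str:
    """Finish the flow: reject any response that doesn't carry our state token."""
    expected = session.pop("oauth_state", None)
    if expected is None or not secrets.compare_digest(expected, returned_state):
        raise PermissionError("state mismatch: possible CSRF, aborting")
    return code  # only now is it safe to exchange the code for a token
```

The point is not the code itself but the shape of the write-up: a real attack described plainly, followed by a mitigation a developer can actually implement.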
TM Process.
The interesting part, which cannot be deduced from the TM document itself, is how to come up with all those threats and make the list as comprehensive as possible. In other words, a threat modeller, especially one who doesn’t have much experience in this area, might need some guidance, structure and discipline around finding and documenting threats. This task becomes even more complicated when the threat modeller was not involved in designing the system under review. Here are some useful tips that can help with that:
- The most important thing is to understand the system, its components and how those components communicate with each other.
- System understanding should not be limited to technical details. It’s important to understand the business context as well, in order to assign the right severity and remediation priority to each threat.
- It’s important to know the most common and dangerous exploits and understand how they can be used to attack the system. OWASP resources can be very useful when it comes to web applications.
- To add some structure to the threat analysis, a threat modeller can divide threats into several categories (a sketch of a simple threat register built around such categories follows this list), e.g.
- Authentication and authorization
- Input/Output data validation
- Exception handling and logging
- Concurrency
- API abuse
- System’s availability
- Host and network level security
- Database security
- etc.
- Scheduling meetings and conducting interviews with all parties involved in the system’s design and build is critical for the TM process. Business owners, architects, developers, QA, system engineers, OPS and DBAs can add valuable details in the category areas listed above; e.g. business owners can help with understanding the impact of a system compromise or unavailability, which in its turn will help assign the right severity to each threat.
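One way to impose that structure, and to capture the business input mentioned above, is to keep the findings in a small, machine-readable threat register: one entry per threat, carrying its category, a plain-English scenario, a countermeasure, and a severity derived from business impact and likelihood. The field names and the 1–5 scoring scale below are assumptions of mine, not part of any standard:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    title: str
    category: str         # one of the categories listed above
    scenario: str         # plain-English description of the attack
    countermeasure: str   # actionable mitigation, as in the OAuth 2.0 TM
    impact: int           # 1-5 business impact (ask the business owners)
    likelihood: int       # 1-5 how easy/likely the exploit is

    @property
    def severity(self) -> int:
        # Naive impact x likelihood score; swap in whatever risk-rating
        # scheme your organisation already uses.
        return self.impact * self.likelihood

threats = [
    Threat(
        title="CSRF against the OAuth redirect URI",
        category="Authentication and authorization",
        scenario="An attacker lures a logged-in user into completing an "
                 "authorization flow the attacker initiated.",
        countermeasure="Bind the request to the session with a `state` "
                       "value and verify it on the callback.",
        impact=4,
        likelihood=3,
    ),
]

# Highest-severity threats first, so remediation priority falls out naturally.
for t in sorted(threats, key=lambda t: t.severity, reverse=True):
    print(f"[severity {t.severity:>2}] {t.category}: {t.title}")
```

The exact scoring scheme matters less than the discipline: every threat is forced to carry a category, a scenario a developer can follow, and a countermeasure.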
Required Skills.
A surprising fact that I’ve discovered after years of working in the TM space is that the most important skill required to be successful in this area is not related to security: it’s having a strong developer background that allows you to understand a system and the technologies used to build it.
De-composition is the second most important skill, and one that not all developers have, because building systems (composition) is not quite the same as breaking them into pieces (de-composition).
Since a threat modeller also needs to suggest remediation solutions and describe a new architecture with security controls in place, I think both composition and de-composition skills are important.
When it comes to pure security skills, it’s important for a threat modeller to be familiar with real exploits and their pervasiveness in the wild, and to be able to evaluate their applicability to the system under review.
Knowing the latest successful attacks against similar systems running in the same operational environment or company is very beneficial as well. For each successful attack, the root cause needs to be found and analyzed, and repeatable security patterns should be created and used as remediation controls in all new systems where they apply.
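As a hypothetical example of turning a root cause into a repeatable pattern: if the post-mortem of an incident points at string-concatenated SQL, the reusable control could be a single query helper that only accepts parameterized statements. The helper below is my own sketch, not a pattern taken from any particular standard:

```python
import sqlite3

def run_query(conn: sqlite3.Connection, sql: str, params: tuple = ()):
    """Single choke point for data access: every user-supplied value must
    travel as a bound parameter, never as concatenated SQL text."""
    if sql.count("?") != len(params):
        # crude guard: every placeholder needs a bound parameter and vice versa
        raise ValueError("query must use '?' placeholders for all inputs")
    return conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")  # static schema, no user input
run_query(conn, "INSERT INTO users VALUES (?, ?)", ("alice", "admin"))

# Safe lookup: the name travels as a parameter, so "alice' OR '1'='1" stays data.
print(run_query(conn, "SELECT role FROM users WHERE name = ?", ("alice",)))
```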
Conclusion.
Just like security in general, TM is not a science; it’s all about common sense, the right skills and experience. An over-formalized TM process will not necessarily help: it can be inefficient, produce a low-quality TM and create a false sense of security.