I don’t know what the most important thing in the OWASP Top 10 2017 is for you. For me, it’s the vulnerability ranking, i.e. the order in which the vulnerabilities appear in that list. There are a few reasons why it matters:
- Metrics and KPI believers, whose population in the business world is growing very fast, will immediately map the ranking to priorities
- I haven’t seen many businesses that can handle more than two or three priorities at a time well
- The OWASP Top 10 categories are very broad and cover almost everything in application security
From this point of view, I think the OWASP Top 10 is far from perfect, since the current ranking simply doesn’t match reality and doesn’t reflect the real security breaches that have caught everybody’s attention lately. Let’s take a look at a few entries: “Injection”, which has led the OWASP charts for many years, “Component Vulnerability”, ranked #9, and “Insecure Configuration”, ranked #5 in the 2017 list.
In reality, #9 and #5 should be #1 and #2 in my view: breaches like Home Depot, Target, and Equifax led to the loss of millions of records of highly sensitive data and have been attributed to third-party component vulnerabilities, while the latest SMB- and ransomware-related cases (Anthem, banks, governments) can be tied to both insecure components and insecure configurations. The multiple security incidents involving AWS S3 (Accenture, Verizon, Dow Jones) can be categorized as “Insecure Configuration” as well.
On the other hand, I can’t recall any significant security breach related to “Injection”.
I think the major reason for this disconnect is that OWASP’s methodology was based on surveys. As somebody who has interviewed many people and teams over the last 10 years, I can say that whatever you get from such interviews can be very inaccurate. In most cases people simply don’t know what they are talking about; in other cases the answers are skewed by personal preferences for certain security domains, or by ambiguous questions that lead to inaccurate answers. In other words, it’s all about opinions, not about data coming from real security breaches.
Another important factor is survey fatigue. I’ve been in situations where various security assessors asked me hundreds of very ambiguous questions, and answering all of them accurately was simply not possible.
A real Top 10 should be derived from real security breaches and their magnitude. For example, you could evaluate a vulnerability’s rank using a formula like:
(# of stolen records) x (record’s sensitivity)
or something along those lines.
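To make that concrete, here is a minimal sketch of how such a breach-driven ranking could be computed. The breach entries, record counts, and sensitivity weights below are hypothetical placeholders chosen only to illustrate the formula, not real statistics:

```python
# A minimal sketch of a breach-driven ranking, using the
# (# of stolen records) x (record's sensitivity) idea above.
# All figures below are hypothetical placeholders, not real data.
from collections import defaultdict

# (vulnerability category, records stolen, sensitivity weight 1-10)
breaches = [
    ("Component Vulnerability", 145_000_000, 9),  # e.g. large-scale PII loss
    ("Component Vulnerability", 56_000_000, 6),   # e.g. payment card data
    ("Insecure Configuration", 14_000_000, 5),    # e.g. exposed S3 bucket
    ("Injection", 450_000, 4),
]

def rank_vulnerabilities(breaches):
    """Aggregate (records x sensitivity) per category, sort descending."""
    scores = defaultdict(int)
    for category, records, sensitivity in breaches:
        scores[category] += records * sensitivity
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

for position, (category, score) in enumerate(rank_vulnerabilities(breaches), 1):
    print(f"#{position} {category}: {score:,}")
```

With data like this, the categories behind the biggest real-world losses float to the top of the list, rather than the ones that survey respondents happen to worry about.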