
The OWASP Benchmark & Kiuwan


Learn how to make your own OWASP Benchmark test with Kiuwan on our DIY Blog post.

What is the OWASP Benchmark?

Most of you are probably familiar with OWASP (the Open Web Application Security Project), or have at least heard of its famous Top 10 list of vulnerabilities affecting web applications today. If you develop software and haven’t come across either, start with the links above now; the security of your applications is at stake.

To give you a little context, OWASP operates as a non-profit and is not affiliated with any technology company, which puts it in a unique position to provide impartial, practical information about AppSec to individuals, corporations, universities, government agencies, and anyone else interested in security from the application perspective. It is a very reliable source of information, and Kiuwan, as an active OWASP contributor member, is very proud to align our products with the organization’s guidance. In fact, we implement a lot of functionality that covers specific aspects defined by OWASP, such as the aforementioned Top 10 vulnerability list.

But today I want to introduce another OWASP project that is not as well known but can be a very useful tool if you are in a position to decide which tools to use to ensure you develop secure applications: the OWASP Benchmark.

From their home page: “The OWASP Benchmark for Security Automation is a free and open test suite designed to evaluate the speed, coverage, and accuracy of automated software vulnerability detection tools and services. Without the ability to measure these tools, it is difficult to understand their strengths and weaknesses and compare them to each other. Each version of the OWASP Benchmark contains thousands of test cases that are fully runnable and exploitable, each of which maps to the appropriate CWE number for that vulnerability.”

As a Static Application Security Testing (SAST) provider, we just couldn’t resist seeing how our Kiuwan Code Security product performs on the OWASP Benchmark. It makes sense, and it gives you objective information about how Kiuwan compares with other tools out there.

So we set out to do it.

Thanks to Kiuwan’s ease of use, analyzing all the test cases was a walk in the park. The only “extra” thing we had to do was transform Kiuwan’s output into the Benchmark’s expected-results format (basically a CSV file) so we could use the automatic procedure provided by OWASP to generate the scorecard. If you have a Kiuwan account and are interested in running a Kiuwan analysis on the Benchmark yourself and generating the scorecard, stay tuned: we will be publishing a post with everything you need to reproduce the results I’m about to discuss.
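To give you an idea of what that transformation involves, here is a minimal sketch. It assumes a hypothetical JSON export of Kiuwan findings (the field names `file` and `cwe` are illustrative, not Kiuwan’s actual schema) and writes one CSV row per Benchmark test case with the CWE that was reported for it:

```python
import csv
import json

# Hypothetical input: a JSON export of Kiuwan findings. The field names
# ("file", "cwe") are illustrative assumptions, not Kiuwan's actual schema.
with open("kiuwan_findings.json") as f:
    findings = json.load(f)

# The Benchmark test cases are Java files named BenchmarkTest00001.java,
# BenchmarkTest00002.java, etc. Keep one (test name, CWE) pair per finding.
rows = []
for finding in findings:
    file_name = finding["file"]                # e.g. "BenchmarkTest00042.java"
    if not file_name.startswith("BenchmarkTest"):
        continue                               # skip non-Benchmark files
    test_name = file_name.removesuffix(".java")
    rows.append((test_name, finding["cwe"]))   # CWE number reported by the tool

# Write a simple CSV that a scorecard script can compare against the
# Benchmark's expected-results file.
with open("kiuwan_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["test_name", "cwe"])
    writer.writerows(sorted(rows))
```

The exact column layout depends on the scorecard generator you feed it to, but the point stands: it is a simple, mechanical transformation.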

The Kiuwan results

First things first. After running the analysis and generating the scorecard, this is what Kiuwan’s results look like:

[Figure: Kiuwan’s scorecard on the OWASP Benchmark]

These are the details corresponding to the points in the graph:

[Figure: Summary of Kiuwan’s OWASP Benchmark results by vulnerability category]

Not bad! Kiuwan scores an almost 100% True Positive Rate (TPR) with a False Positive Rate (FPR) just above 16%. Not bad at all. This means that Kiuwan reports almost all the real vulnerabilities in the benchmark code.

In plain English, Kiuwan is a very sensitive tool that finds almost all real vulnerabilities, but it is a little less specific, reporting some vulnerabilities that are not real. In other words, Kiuwan adds some “noise” to the results.
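For reference, these two rates come straight from the standard confusion-matrix definitions, where TP, FP, TN, and FN count true positives, false positives, true negatives, and false negatives over the Benchmark test cases:

```latex
\mathrm{TPR} = \frac{TP}{TP + FN} \quad \text{(sensitivity)}
\qquad
\mathrm{FPR} = \frac{FP}{FP + TN} \quad \text{(1 minus specificity)}
```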

How does Kiuwan compare to other tools?

Please note: the following comparison dates back to 2017. The OWASP benchmark site does not allow for tool comparisons anymore.

Now, if you have to select a SAST tool, what would you prefer: a more sensitive or a more specific tool? Well, obviously the answer should be the most sensitive AND the most specific, right? Let’s look at a comparative graph with the results of 17 tools (16 SAST and 1 DAST). These are the tools compared on the OWASP Benchmark site, to which we have added Kiuwan and the latest version of SonarQube. Notice that the other commercial tools are anonymized with names like SAST-0X. For your reference, you can see the original graph in the OWASP Benchmark wiki.

[Figure: OWASP Benchmark comparison of the 17 tools, plus Kiuwan and SonarQube]

Except for 3 of them (the ones not designed for security testing), all of the tools are above the “worse than guessing” line, which is good news. The further above the line, the better: that distance is related to the tool’s average score, or Youden’s index, which combines sensitivity and specificity. You can see that no tool is both very sensitive and very specific at the same time. So if you are in a position to decide which tool to use to analyze your code, there is always a tradeoff between sensitivity and specificity. Youden’s index can give you a hint (the higher, the better), but be careful: several different sensitivity/specificity pairs can produce the same Youden’s index.
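Youden’s index is simply the vertical distance between a tool’s point and the guessing line:

```latex
J = \mathrm{sensitivity} + \mathrm{specificity} - 1 = \mathrm{TPR} - \mathrm{FPR}
```

For example, a tool at (TPR = 0.9, FPR = 0.3) and one at (TPR = 0.7, FPR = 0.1) both score J = 0.6, even though the first is more sensitive and the second more specific, which is exactly the caveat above.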

If you ask me, I want sensitivity over specificity. I want my tool to report as many real vulnerabilities as possible, and if that means it generates some “noise,” that’s okay with me. As long as it is a reasonable amount, the noise can be managed. Undetected real vulnerabilities, on the other hand, just lie there in your code, and you have no information about them, none. If you have any suspicions, you need to manually review your code to find them, which is costly and time-consuming.

Taking all this into account, Kiuwan should be your tool of choice. But how fair is the benchmark?

Giving the Benchmark a little twist

By design, the OWASP Benchmark weighs every vulnerability equally. Is that fair? Well, OWASP itself maintains its famous Top 10 project precisely to prioritize security vulnerabilities. What happens if we put the Top 10 and the Benchmark together?

The Benchmark classifies all its test cases by vulnerability type, and each type is mapped to the corresponding CWE. Here are the classification and the mapping:

| Vulnerability type | CWE |
| --- | --- |
| Cross-site Scripting – XSS | 79 |
| OS Command Injection | 78 |
| SQL Injection | 89 |
| LDAP Injection | 90 |
| XPath Injection | 643 |
| Path Traversal | 22 |
| Reversible One-Way Hash | 328 |
| Trust Boundary Violation | 501 |
| Insufficiently Random Values | 330 |
| Use of a Broken or Risky Cryptographic Algorithm | 327 |
| Sensitive Cookie in HTTPS session without “Secure” attribute | 614 |

There are 11 vulnerability types, each mapping to a CWE. How do these map to the Top 10? That is not an easy question: some of the vulnerability types and their corresponding CWEs map perfectly onto a Top 10 entry, but others, like Trust Boundary Violation, are more challenging.
Anyway, we have done the exercise and mapped the 11 types in the Benchmark. Next, we assigned each a weight from 1 to 10 depending on its Top 10 position: 10 for A1, 9 for A2, and so on. I know the next table can be controversial, but it is completely open to discussion.

| Vulnerability type | CWE | Top 10 | Weight |
| --- | --- | --- | --- |
| Cross-site Scripting – XSS | 79 | A3 - Cross-site Scripting (XSS) | 8 |
| OS Command Injection | 78 | A1 - Injection | 10 |
| SQL Injection | 89 | A1 - Injection | 10 |
| LDAP Injection | 90 | A1 - Injection | 10 |
| XPath Injection | 643 | A1 - Injection | 10 |
| Path Traversal | 22 | A4 - Insecure Direct Object Reference | 7 |
| Reversible One-Way Hash | 328 | A6 - Sensitive Data Exposure | 5 |
| Trust Boundary Violation | 501 | A2 - Broken Authentication and Session Management | 9 |
| Insufficiently Random Values | 330 | A9 - Using Components with Known Vulnerabilities | 2 |
| Use of a Broken or Risky Cryptographic Algorithm | 327 | A6 - Sensitive Data Exposure | 5 |
| Sensitive Cookie in HTTPS session without “Secure” attribute | 614 | A5 - Security Misconfiguration | 6 |

You probably know where we are going with this, right? We basically think that a tool that handles the more critical vulnerabilities better than the less critical ones should rank better. Treating all vulnerabilities equally in the calculation does not give you this information. So instead of calculating the tool score using plain averages of the True Positive and False Positive rates over the 11 vulnerability types, we calculate weighted averages using the weights in the table above.
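As a minimal sketch of the idea (the per-type rates below are made up for illustration; only the weights come from the table above, keyed by shorthand category names of our own), the weighted score is just a weight-normalized average of the per-type rates:

```python
# Weights from the table above (vulnerability type -> OWASP Top 10 weight).
# The shorthand keys are our own labels for the 11 Benchmark categories.
WEIGHTS = {
    "xss": 8, "cmdi": 10, "sqli": 10, "ldapi": 10, "xpathi": 10,
    "pathtraver": 7, "hash": 5, "trustbound": 9, "weakrand": 2,
    "crypto": 5, "securecookie": 6,
}

def weighted_rate(per_type_rates: dict[str, float]) -> float:
    """Weighted average of per-vulnerability-type rates (TPR or FPR),
    normalized over the types actually present in the input."""
    total_weight = sum(WEIGHTS[t] for t in per_type_rates)
    return sum(WEIGHTS[t] * r for t, r in per_type_rates.items()) / total_weight

# Made-up example: a tool strong on injection but weak on crypto categories.
tpr_by_type = {"sqli": 1.0, "cmdi": 1.0, "xss": 0.9, "crypto": 0.4, "hash": 0.4}
print(f"unweighted TPR: {sum(tpr_by_type.values()) / len(tpr_by_type):.2f}")  # 0.74
print(f"weighted TPR:   {weighted_rate(tpr_by_type):.2f}")                    # 0.82
```

Because this hypothetical tool does well precisely on the heavily weighted injection types, its weighted TPR comes out higher than its plain average.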

We have done this only for the open-source tools whose detailed results are in the OWASP Benchmark GitHub repository, the latest version of SonarQube (we ran this one ourselves), and, of course, Kiuwan. We don’t have access to the other commercial tools, and their detailed results haven’t been published.
After crunching the numbers, we created the comparison scorecard for the 8 tools on what we call the weighted benchmark.

[Figure: Weighted OWASP Benchmark comparison scorecard for the 8 tools]

You can see that all the results get displaced in different directions. By taking weights into account, we are not only measuring sensitivity and specificity in general, but also the ability of each tool to handle the more critical vulnerabilities according to the OWASP Top 10 ranking.

For example, if a tool correctly handles, let’s say, half of the True Positives in the Benchmark, the plain average calculation places it somewhere halfway up the Y-axis in the graph. However, if those detected vulnerabilities are less critical than the 50% the tool is missing, it will be placed below the 50% line. If it is the other way around, it will be placed above it.
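A toy example with just two vulnerability types makes this concrete: a tool that detects all XSS cases (weight 8) but misses all SQL Injection cases (weight 10) scores 50% on the plain average but drops below it once weighted:

```latex
\mathrm{TPR}_{\text{unweighted}} = \frac{1}{2} = 50\%
\qquad
\mathrm{TPR}_{\text{weighted}} = \frac{8 \cdot 1 + 10 \cdot 0}{8 + 10} \approx 44\%
```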

I think it is fair. Maybe controversial, since, as I said, you may not agree with the mapping we’ve done, but fair if you broadly agree with it.

Regarding Kiuwan’s result with this approach, you can see that it is slightly displaced to the right in the graph. This means that its reported false positives correspond to the more critical vulnerabilities according to the Top 10, which I still think is not a bad result. Since prevention is better than cure, I prefer to be alerted about possible highly critical vulnerabilities than to have no information at all.

Another shortcoming of the Benchmark is that it is only available for Java. That is fine, but there are many tools out there, like Kiuwan, that support other languages. We are making the effort to use known benchmarks for other languages (like the ones available at NIST) and apply the scorecard mechanism of the OWASP Benchmark described here (and the weighted one too!) to come up with the same graph results for other languages.
Stay tuned!


