
Vulnerability States and the Over-Reliance on Numbers from Tools

I've just had a discussion with a fellow Head of Cybersecurity about vulnerabilities, and two things struck me that people need to understand: firstly, vulnerabilities have different states; secondly, scanning tools don't have all the context, so they can't actually tell you what risk you are running.

Our discussion started with their statement that vulnerabilities are black and white - they're either there or not. It's actually a bit more nuanced than that. Without making this more complicated than it needs to be, there are inherently two states a vulnerability can be in: exploitable or unexploitable (sometimes called active or dormant, with other subtleties). This doesn't mean the vulnerability is being exploited, but that it is either possible to exploit it in the current configuration (however hard that may be) or it isn't possible without some other change happening first.

In our discussion, my peer argued that if a vulnerability is unexploitable then there is no vulnerability. I disagree with that point of view. We are potentially only one configuration change or software upgrade away from converting an unexploitable vulnerability into an exploitable one. We would be best off remediating it if possible, but at a low priority. Even if we choose not to remediate it now, we should still track it so that it doesn't come back to bite us unexpectedly in the future.
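
To make the distinction concrete, here is a minimal sketch (in Python, with invented field names - your register will look different) of how an entry in a vulnerability register might carry its state and a correspondingly lower remediation priority when it is unexploitable:

from dataclasses import dataclass
from enum import Enum, auto

class VulnState(Enum):
    EXPLOITABLE = auto()    # possible to exploit in the current configuration
    UNEXPLOITABLE = auto()  # not exploitable today, but one change away from being so

@dataclass
class RegisterEntry:
    cve_id: str
    asset: str
    state: VulnState
    tool_severity: str  # what the scanner reported, e.g. "Critical"

    def remediation_priority(self) -> str:
        # Unexploitable findings are still tracked, just at a lower priority,
        # so they don't come back to bite us after a config change or upgrade.
        if self.state is VulnState.UNEXPLOITABLE:
            return "Low"
        return self.tool_severity

# Hypothetical example: a critical finding the current configuration makes unexploitable
entry = RegisterEntry("CVE-0000-0000", "recon-server-01", VulnState.UNEXPLOITABLE, "Critical")
print(entry.remediation_priority())  # -> "Low", but the entry stays on the register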

The second part of the discussion was more important, though. I stated that a critical vulnerability might actually be a fairly low risk in some circumstances, while a medium vulnerability might be an out-of-tolerance high risk in others. The response from my peer was surprising: they maintained that a critical was a critical and would be the top priority, because the tooling had a scoring system.

I tried to explain that I might have a system that stores confidential data, but the availability of that data may not be time critical. For example, I might be able to live with several hours' downtime on an end-of-day reconciliation service - as long as I can reconcile before the markets open again, my impact shouldn't be too high. If, on the other hand, all the positions became public knowledge, or the integrity of the data was tampered with, it would have a big impact.

Now consider two different vulnerabilities: a critical that only affects availability and a high that affects confidentiality and integrity. As long as I can recover within a matter of hours from an availability attack, the risk from the critical is likely to be lower than that associated with the high-rated vulnerability, because the impact of the latter could be significantly higher.
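
As a rough illustration, a simple impact-times-likelihood view of those two findings might look like the sketch below. The weights and likelihoods are invented for the example, not taken from any standard or real scan:

# Hypothetical impact ratings for the end-of-day reconciliation service (1 = low, 5 = severe).
# Availability matters least here because a few hours' downtime is tolerable.
asset_impact = {"confidentiality": 5, "integrity": 5, "availability": 2}

# (severity as reported by the tool, CIA properties the vulnerability affects, rough likelihood 0-1)
findings = [
    ("Critical, availability only", ["availability"], 0.6),
    ("High, confidentiality and integrity", ["confidentiality", "integrity"], 0.4),
]

for name, affects, likelihood in findings:
    impact = max(asset_impact[p] for p in affects)
    risk = impact * likelihood
    print(f"{name}: impact {impact} x likelihood {likelihood} = risk {risk:.1f}")

# Critical, availability only: impact 2 x likelihood 0.6 = risk 1.2
# High, confidentiality and integrity: impact 5 x likelihood 0.4 = risk 2.0

On this asset, the "critical" ends up below the "high" once the business impact profile is applied - which is exactly the conversation the tool's score alone won't have with you.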

The vulnerability rating is a static score for the general case, which doesn't take your context into account. The rating itself only really feeds the likelihood part of the risk calculation. Different vulnerabilities expose you to different threat scenarios, though, which may alter the impact. There are scoring mechanisms that take some of this into account - for example the full CVSS calculation, including the temporal and environmental modifiers - but even this still doesn't give you the full context.
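
For instance, CVSS v3.1 lets you append temporal and environmental metrics - including the confidentiality, integrity and availability requirements (CR, IR, AR) of the affected asset - to the base vector and rescore. The sketch below assumes the open-source Python cvss package is installed (pip install cvss); the vectors are illustrative, not from a real finding:

from cvss import CVSS3  # assumes the open-source 'cvss' package is available

# Base vector for a network-reachable flaw affecting availability only (illustrative).
base = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H"

# Same flaw, rescored for an asset where availability barely matters (AR:L)
# but confidentiality and integrity requirements are high (CR:H/IR:H).
environmental = base + "/CR:H/IR:H/AR:L"

for vector in (base, environmental):
    c = CVSS3(vector)
    print(vector)
    print("  (base, temporal, environmental) scores:", c.scores())

Even then, the environmental score only knows what you feed it about the asset - it still can't see compensating controls like the access restrictions in the next example.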

Let me give you another example. Say you have a vulnerability on your system that allows a really easy privilege escalation - this is bad. But what if that vulnerability requires a logged-in user to perform the attack? Well, this could still be an issue on your workstation estate, as users could escalate their privileges and attack the system. What if it's on a server protected by a privileged access management (PAM) solution where no other users can log in? Now the only people logging in already have elevated privileges, and we can restrict their access to approved sessions only and record them into the bargain. Suddenly this isn't quite so much of a problem.
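
Here is a sketch of the kind of contextual rule I mean. The asset attributes (pam_protected and so on) are assumptions about your own inventory - they come from a CMDB or access-management records, not from the scanner:

# Hypothetical asset context, e.g. pulled from a CMDB or PAM inventory.
assets = {
    "user-laptop-042": {"pam_protected": False, "interactive_standard_users": True},
    "app-server-007":  {"pam_protected": True,  "interactive_standard_users": False},
}

def contextual_priority(asset: str, tool_severity: str, needs_logged_in_user: bool) -> str:
    ctx = assets[asset]
    # A privilege escalation that needs a logged-in user is far less of a worry on a
    # PAM-protected server where the only people logging in are already privileged
    # and their sessions are restricted to approved activity and recorded.
    if needs_logged_in_user and ctx["pam_protected"] and not ctx["interactive_standard_users"]:
        return "Low"
    return tool_severity

print(contextual_priority("user-laptop-042", "Critical", needs_logged_in_user=True))  # Critical
print(contextual_priority("app-server-007", "Critical", needs_logged_in_user=True))   # Low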

Why don't the scanning tools that tell me I have a vulnerability know this? Well, how can they? Where would they get that context? It is possible in many cases to get more context, but only if you have multiple scanning technologies and cross-correlate all the findings, or if you involve a security/risk professional.
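
Even something as basic as joining findings from two scanners on the same host and CVE, and then attaching your own asset context, goes beyond what either tool reports on its own. A rough sketch - the tool outputs, field names and asset data are all made up for illustration:

# Findings from two hypothetical scanners, keyed by (host, CVE).
scanner_a = {("app-server-007", "CVE-0000-0001"): {"severity": "Critical"}}
scanner_b = {("app-server-007", "CVE-0000-0001"): {"exploit_available": False}}

# Context neither scanner has, e.g. from a CMDB.
asset_context = {"app-server-007": {"data_classification": "confidential", "pam_protected": True}}

correlated = {}
for key in set(scanner_a) | set(scanner_b):
    host, cve = key
    # Merge what each tool saw with what we know about the asset.
    correlated[key] = {**scanner_a.get(key, {}), **scanner_b.get(key, {}), **asset_context.get(host, {})}

print(correlated)
# One enriched record per (host, CVE), ready for a human to prioritise:
# severity, exploit availability, data classification and compensating controls together.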

The key point I'm trying to make is that you can't just believe the numbers coming out of a tool that doesn't have all the context. Relying on them blindly leads to focusing on what the tool thinks is the easiest vulnerability to exploit, rather than on the vulnerability carrying the highest risk.
