
Internal cyber attacks - more thoughts

Today I presented on a panel, 'Should you launch an internal cyber attack?', at the European Information Security Summit 2015. We only had 45 minutes, so I thought I'd share some of my thoughts here, along with what I didn't get to say.

Firstly, as we all know, the concept of a network perimeter is outdated, and the line between internal and external devices has blurred. It's not just BYOD: most organisations provide laptops for their employees, and those laptops get connected at home, in airports, in hotels and so on. Any number of things could have happened to them in that time, so when they are reconnected to the network they may already be compromised. For this reason it should, to a certain extent, be every system for itself on the network, i.e. assume that the internal machines are compromised and provide reasonable levels of security anyway.

Secondly, the user is the weakest link. It has been said many times that we spend our time (and budget) protecting the first 2,000 miles and forget about the last two feet. This is becoming less true, as security departments wake up to the fact that educating users is critical to the security of information assets. However, the fact remains that users make mistakes and can compromise even the best security.

So, should we launch internal cyber attacks against ourselves? Yes, in my opinion - for several reasons.

Internal testing is about audit and improvement. If we launch an internal pentest or phishing campaign, we can see the effectiveness of our controls, policies and user education. The critical point is not to use the results as an excuse to punish, or to name and shame - this is not Big Brother looking to catch people out. If a user clicks on a link in a phishing email, we should see it as our failure to educate them properly. If a user bypasses our controls, then either the controls haven't been explained properly or they are not appropriate (at the very least, there may be a better way).
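To make the audit-not-punish point concrete, a phishing simulation can report aggregate click rates per team rather than naming individuals, so the output drives education instead of blame. This is only a minimal sketch - the department names and campaign data below are invented for illustration:

```python
from collections import Counter

def summarise_clicks(events):
    """Aggregate simulated-phishing results into per-department click rates.

    Each event is a (department, clicked) pair. Individuals are never
    reported - only rates - so the results inform training, not discipline.
    """
    sent = Counter()
    clicked = Counter()
    for dept, did_click in events:
        sent[dept] += 1
        if did_click:
            clicked[dept] += 1
    return {dept: clicked[dept] / sent[dept] for dept in sent}

# Hypothetical campaign results: (department, clicked-the-link?)
events = [
    ("Finance", True), ("Finance", False), ("Finance", False),
    ("HR", True), ("HR", True),
]
print(summarise_clicks(events))  # per-department rates, e.g. Finance ~33%, HR 100%
```

A follow-up campaign after a round of training would then show whether the rates actually fall, which is the improvement the audit is for.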

An example discussed on the panel was people emailing a presentation to their home email account to work on it from home. In the example this was a breach of policy and, if the presentation is categorised confidential or secret, they shouldn't be doing it. However, rather than punishing the user immediately, try asking why they felt they needed to email it to their home computer. Do they not have a laptop? Is their laptop not capable enough? Or do they think they are doing a good thing by emailing it, so that they don't have to take their corporate laptop out of the office, because they know they're going to the pub for a couple of hours and are worried about it being stolen? There are motivations and context behind people's decisions. We see, and usually focus on, the effects without stopping to ask why they did it. Most people are rational and have reasons for acting as they do. We need to get to the heart of those reasons.

Education is critical to any security system, and as security professionals we need to learn to communicate better. Traditionally (and stereotypically) security people are not good at communicating in a clear, non-technical, jargon-free way. This has to change if we want people to act securely; we have to be able to explain it to them. In my opinion, you have to make the risks and downsides real to users in order to make them understand why we're asking them to do, or not do, something. If you just give someone a directive they don't understand, they will be antagonistic and won't follow it when it matters, because they don't see the point and it's a hassle. If they understand the reasoning, they are likely to be more sympathetic. Nothing demonstrates this better than showing what could happen. Hence the internal attacks.

The next question we have to ask ourselves is what constitutes the 'internal' part of an internal attack. Is it just our own systems, or does it include all the third-party systems that touch our data? I could happily write a whole blog post on outsourcing to third parties and the risks involved, so I won't delve into it here.

I also have to say that it worries me that we seem to be educating our users into clicking on exactly the types of unsolicited email that could easily be phishing attacks. An example I used was the satisfaction or staff survey that most companies run these days. These often come from external email addresses and contain obscured links. To my mind, we should be telling our users never to click on links like these and to report them to IT security instead. Why shouldn't they ask our advice about an email they're unsure of? We're the experts.
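One of the simplest checks behind that advice - flagging links whose visible text names one address while the underlying href points somewhere else - can be sketched with the standard library alone. The HTML snippet and hostnames below are invented for illustration:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkChecker(HTMLParser):
    """Flag <a> tags whose visible text names a different host than the href."""

    def __init__(self):
        super().__init__()
        self.in_link = False
        self.href_host = None
        self.text = ""
        self.suspicious = []  # (shown text, actual destination host) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True
            self.text = ""
            self.href_host = urlparse(dict(attrs).get("href", "")).hostname

    def handle_data(self, data):
        if self.in_link:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False
            shown = urlparse(self.text.strip()).hostname
            # Visible text looks like a URL but points at a different host
            if shown and shown != self.href_host:
                self.suspicious.append((self.text.strip(), self.href_host))

checker = LinkChecker()
checker.feed('<a href="http://evil.example.net/login">http://survey.example.com</a>')
print(checker.suspicious)
```

A survey email that trips a check like this is indistinguishable, from the user's point of view, from a phishing email - which is exactly why we shouldn't be training people to click on them.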

One final point, suggested by another speaker, which I think is a good idea: if we educate users about the security of their families, and help with their personal security incidents and attacks as if they were our company's own, we are likely to win strong advocates.
