cyberici
A blog about Information & Cyber Security from cyberici, a security consultancy. By Luke Hebbes.

<h3>Vulnerability States and the Over-Reliance on Numbers from Tools (15 June 2021)</h3>
<p>I've just had a discussion with a fellow Head of Cybersecurity about vulnerabilities, and two things struck me that people need to understand: firstly, vulnerabilities have different states and, secondly, scanning tools don't have all the context, so they can't actually tell you what risk you are running. </p><p>Our discussion started with a statement from them that vulnerabilities are black and white - they're either there or not. It's actually a bit more nuanced than that. Without making this more complicated than it needs to be, there are inherently two states a vulnerability can be in: exploitable or unexploitable (sometimes called active or dormant, with other subtleties). This doesn't mean that it is being exploited, but that either it is possible to exploit the vulnerability in the current configuration (however hard that may be), or it isn't until some other change happens first. </p><p>In our discussion, my peer argued that if it is unexploitable then there is no vulnerability. I disagree with this point of view. We are potentially only one configuration change or software upgrade away from converting the unexploitable vulnerability into an exploitable one. We would be best off remediating this vulnerability if possible, but at a low priority. Even if we choose not to remediate it now, we should still track it so that it doesn't come back to bite us unexpectedly in the future. </p><p>The second part of the discussion was more important though. I stated that a critical vulnerability might actually be a fairly low risk in some circumstances, but a medium vulnerability might be an out-of-tolerance high risk in others. The response from my peer was surprising. They maintained that a critical was a critical and would be the top priority because the tooling had a scoring system. </p><p>I tried to explain that I might have a system that stores confidential data, but that the availability of that data may not be time critical. For example, I might be able to live with several hours' downtime on an end-of-day reconciliation service - as long as I can reconcile before the markets open again, my impact shouldn't be too high. If, on the other hand, all the positions became public knowledge or the integrity was tampered with, it would have a big impact. </p><p>Now consider two different vulnerabilities: a critical vulnerability that only affects availability and a high that affects confidentiality and integrity. As long as I can recover within a matter of hours from an availability attack, this risk is likely to be lower than that associated with the high-rated vulnerability, because the impact could be significantly higher. </p><p>The vulnerability rating is a static score for the general case, which doesn't take your context into account, and it really only speaks to the likelihood part of the risk calculation. Different vulnerabilities expose you to different threat scenarios, though, which may alter the impact. There are scoring mechanisms that take some of this into account, e.g. using the full CVSS calculation including the environmental and temporal modifiers as well, but this still doesn't give you the full context. </p>
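<p>To make this concrete, here is a minimal Python sketch of context-weighted prioritisation. It deliberately uses toy weights, not the official CVSS environmental formula, and the findings, scores and weights are invented for illustration: </p>
<pre>
# Toy prioritisation, NOT the official CVSS environmental formula.
# It shows how the same scanner findings can reorder once the
# confidentiality/integrity/availability (CIA) weights of YOUR system
# are applied. All names, scores and weights are made up.

# End-of-day reconciliation service: C and I are critical, short
# outages are tolerable, so availability gets a low weight.
context = {"C": 1.0, "I": 1.0, "A": 0.2}

findings = [
    {"name": "critical vuln, availability only", "base": 9.8, "affects": ["A"]},
    {"name": "high vuln, confidentiality and integrity", "base": 7.5, "affects": ["C", "I"]},
]

def in_context_score(finding):
    """Scale the base score by the heaviest CIA weight the vuln touches."""
    return finding["base"] * max(context[p] for p in finding["affects"])

for f in sorted(findings, key=in_context_score, reverse=True):
    print(f"{f['name']}: base {f['base']}, in context {in_context_score(f):.2f}")

# Prints the 'high' first (7.50) and the 'critical' second (1.96):
# in this context the high-rated vulnerability carries the higher risk.
</pre>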
<p>Let me give you another example. Say that you have a vulnerability on your system that allows for a really easy privilege escalation - this is bad. But what if that vulnerability required a logged-in user to perform the attack? Well, this could still be an issue on your workstation estate, as users could escalate their privileges and attack the system. What if it's on a server protected by a privileged access management solution and no other users can log in? Now the only people logging in already have elevated privileges, and we can restrict their access to approved sessions only and record them into the bargain. Suddenly this isn't quite so much of a problem. </p><p>Why don't the scanning tools that tell me I have a vulnerability know this? Well, how can they? Where are they going to get that context? It is possible in many cases to get more context, but only if you have multiple scanning technologies and cross-correlate all the findings, or get a security/risk professional involved. </p><p>The key point I'm trying to make is that you can't just believe the numbers coming out of a tool that doesn't have all the context. This leads to focusing on what the tool thinks is the easiest vulnerability to exploit rather than the vulnerability carrying the highest risk. </p>

<h3>Security is a mindset not a technology (6 October 2019)</h3>
I often get asked what I look for when hiring security professionals and my answer is usually that I want the right attitude first and foremost - knowledge is easy to gain, and those that just collect pieces of paper should maybe think about gaining experience rather than yet more acronyms. However, it's difficult to get someone to change their mindset, so the right attitude is very important. But what is the right attitude?<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-TP2kD0XZHpE/XZoqboq7IbI/AAAAAAAAAJY/uNkPoW_iZxcDU77tdt0-oam__OnCWihPwCLcBGAsYHQ/s1600/Curiosity%2Bkilled%2Bthe%2Bcat%2B2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="360" src="https://1.bp.blogspot.com/-TP2kD0XZHpE/XZoqboq7IbI/AAAAAAAAAJY/uNkPoW_iZxcDU77tdt0-oam__OnCWihPwCLcBGAsYHQ/s640/Curiosity%2Bkilled%2Bthe%2Bcat%2B2.jpg" width="640" /></a></div>
<br />
Firstly, security professionals differ from developers and IT engineers in their outlook and approach, so shouldn't be lumped in with them, in my opinion. The mindset of a security professional is constantly thinking about what could go wrong (something that tends to spill over into my personal life as well, much to the annoyance of my wife). Contrast this with the mindset of a developer who is being measured on their delivery of new features. Most developers, or IT engineers, are looking at whether what they have delivered satisfies the requirements from the 'customer', the positive case, i.e. does it perform the function we intended? Security professionals look for the negative case, i.e. can I do anything other than the function intended? Of course, as a security professional, if you don't understand the intended function then you cannot set appropriate security controls or assess the potential impact if things go wrong, but your mind will immediately go to the 'what if' scenario. Therefore, expecting an IT engineer to deliver effective security is unrealistic.<br />
<br />
Secondly, security professionals have to be curious (and I don't mean odd), continuously learning and embracing change. The threat landscape is constantly changing and technology doesn't stand still, so it isn't possible, as a security professional, to know everything. What you have to be able to do is go back to first principles and work out what you should be worrying about, not just churning out the same solutions and technologies you always have in the past. Anyone who turns up for an interview with me pretending to know everything, or puts little effort into understanding the scenario, is going to get dismissed pretty quickly. Equally, I'm not interested in someone who knows one single technology inside-out and shows no interest in learning something new - their knowledge will be obsolete very soon and then they're of no use.<br />
<br />
Finally, a key identifier of a good security professional is whether they're interested in learning the business - if they're not, then they'll never understand the impact of what can go wrong and they'll probably default to deploying tried-and-tested technologies rather than embracing change and setting appropriate controls. Security professionals have to spend time understanding the business in order to gauge impact and assess risk correctly, so that work can be prioritised and the risk appetite of the business met.

<h3>You say it's 'Security Best Practice' - prove it! (11 March 2019)</h3>
Over the last few weeks I have had many conversations, and even attended presentations, where people talk about 'Security Best Practices' and how we should all follow them. However, 'Best Practice' is just another way of saying 'what everyone else does'! OK, so if everyone else does it and it's the right thing to do, you should be able to prove it. The trouble is that nobody ever measures best practice - why would you? If everyone's doing it, it must be right.<br />
<br />
Well, I don't agree with this sentiment. Don't get me wrong, many of the so-called best practices are good for most organisations, but blindly following them without thought for your specific business could cause as many problems as it solves. I see best practice as being like an off-the-peg suit - it will fit most people acceptably well if they are a fairly 'normal' size and shape. However, it will never fit as well as a tailored suit and isn't an option for those of us who are outside the bounds of 'normal' according to the retailers.<br />
<br />
The real problem is that no company is actually normal, i.e. exactly the same as other companies. Best practice is very useful for small to medium sized enterprises (SMEs), who can't afford to have an expensive security team on hand permanently - security architects and strategic security leaders that can actually turn security into a business enabler demand 6-figure salaries. In the absence of these people, you have little choice other than to follow everyone else or hire consultants in to advise on what really matters and what is right for your business.<br />
<br />
Large enterprises, however, can afford in-house security teams and should be demanding more from their security leadership than simple, formulaic repeating of the toolsets that everyone else deploys and that they've seen implemented in their previous organisations. So why do large enterprises follow best practice without much thought? To my mind it's for one of two reasons: it's either that they know no better, or it's so they can defend an audit and protect their jobs. For example, the <abbr title="Information Commissioner's Office">ICO</abbr> won't fine you after a breach if you've followed best practice, but if you've done something unusual then you'll have to justify it and defend it. If you have done the job properly though, this defence is easy as you will have gone through a logical set of steps to arrive at that solution. It is a much stronger defence to be able to justify your deployed capabilities rather than just saying that everyone else does it.<br />
<br />
Technology should be the last thing that you decide upon, once you know what your control objectives are - which you will only be able to articulate when you really understand the specific business in front of you and its strategic objectives. Then you have to look at the threat scenarios for your business and balance the risks accordingly. Don't follow the crowd blindly; I encourage you to strive for the best solution, not best practice.

<h3>Cyber Security Predictions for 2017 (24 February 2017)</h3>
<div style="font-family: Calibri; font-size: 11.0pt; margin: 0in;">
I was asked to sit on a panel of experts, gaze into the crystal ball and make my predictions for what 2017 holds in store for cyber security, which got me thinking. Let's start with more breaches, more ransomware, more cyber security jobs, wage increases for security professionals, more 'qualified' professionals who don't really know what they're doing but have a piece of paper and, of course, vendors making even more money out of Fear, Uncertainty and Doubt (FUD). However, none of those is terribly interesting or any different from 2016, or 2015 for that matter - they are simply ongoing trends in the industry. <br />
<br /></div>
<div style="font-family: Calibri; font-size: 11.0pt; margin: 0in;">
So what does 2017 hold in store for us in the security industry, and is there anything new to worry about? Well, an obvious one to call out is the EU's General Data Protection Regulation (GDPR). So what is GDPR? Well, GDPR replaces the previous data protection directive and aims to improve and harmonise data protection for EU citizens. It will impact non-EU companies that hold data on EU citizens, as well as EU companies and agencies. Why is this such a big thing? Well, the regulation increases accountability and responsibility on companies, makes it law to disclose breaches and increases potential fines to €20m or 4% of global turnover from the previous year, whichever is greater. <br />
<br /></div>
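<div style="font-family: Calibri; font-size: 11.0pt; margin: 0in;">
As a quick illustration of that 'whichever is greater' clause, here is a trivial Python sketch (the turnover figures are made up): </div>
<pre>
# The headline GDPR fine is the GREATER of a flat EUR 20m or 4% of the
# previous year's global turnover. Turnover figures below are made up.
def max_gdpr_fine(global_turnover_eur):
    return max(20_000_000, 0.04 * global_turnover_eur)

print(max_gdpr_fine(100_000_000))    # 20000000 - the flat EUR 20m floor applies
print(max_gdpr_fine(2_000_000_000))  # 80000000.0 - 4% of turnover dominates
</pre>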
<div style="font-family: Calibri; font-size: 11.0pt; margin: 0in;">
When does it come into effect? 25th May 2018. So why talk about it as a prediction in 2017? Companies will have to be prepared well before this date and vendors will start working towards selling services specifically aimed at GDPR compliance this year. The problem I have with this is that I believe companies will take their collective eye off the ball and be so busy with GDPR that they won't keep pace with the changes in technology and threat landscape. <br />
<br /></div>
<div style="font-family: Calibri; font-size: 11.0pt; margin: 0in;">
I also believe that fines should be handed out more readily. Too often we have companies suffering a breach saying that they were compliant and it must have been an 'advanced attack' or 'nation state' actor. This is mostly complete rubbish! What's actually happening is that people do whatever gives them a tick in the compliance box without paying any mind as to whether it actually makes them secure. They use compliance as an insurance policy instead of following the principles to make themselves more secure. Most breaches occur through the same broad issues as a decade ago (or more). Frankly, if, for example, you have an OWASP Top 10 vulnerability in your web app/service and you are breached, you should have the full fine thrown at you and those in charge should face negligence charges. There is simply no excuse for such well-known vulnerabilities to exist in live systems. Another point to remember with GDPR is that Brexit won't make us immune in Britain, as the Information Commissioner's Office (ICO) has already committed to it, so companies will have to prepare.</div>
<div style="font-family: Calibri; font-size: 11.0pt; margin: 0in;">
<br />
What else could we see in 2017? The IT industry is embracing DevOps, continuous integration, Platform as a Service (PaaS), software-defined networks and, of course, agile. Many of these systems and vendor offerings have poor or non-existent security models. The industry needs to catch up, fast. In my opinion, the reason why we haven't seen more issues with these technologies is that they haven't, until now, been adopted by the big target companies, e.g. the banks. This is changing, and I think we'll see more focus on these technologies over the course of this year in situations where security is of high importance. <br />
<br /></div>
<div style="font-family: Calibri; font-size: 11.0pt; margin: 0in;">
This isn't just about the technologies though, agile and the speed of deployment will change the way security professionals have to work. Gone are the days when the security professional has time to assess a solution at their leisure and fully test and assure it before go-live. I think threat modelling is going to become more important in this arena. Threat models can be built ahead of time and applied to new systems as they are developed. The emphasis then has to be on preventing the threat scenario as a whole (through a layered approach) not focusing on every single individual vulnerability/weakness. Basic security hygiene has to be brought up to an acceptable level across the board to enable this new way of working, as we can't rely on stopping a project whilst we fix every bit of it. </div>
<br />
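<div style="font-family: Calibri; font-size: 11.0pt; margin: 0in;">
To illustrate the idea of pre-built threat models being applied to new systems as they arrive, here is a minimal Python sketch; the components, threats and mitigations are invented placeholders, not a real threat library: </div>
<pre>
# A minimal sketch of a reusable, pre-built threat model that can be
# applied to new systems as they come through an agile pipeline.
# The component names, threats and mitigations are illustrative.

THREAT_LIBRARY = {
    "internet-facing web app": [
        ("credential stuffing", ["rate limiting", "MFA", "breached-password checks"]),
        ("injection attacks", ["input validation", "parameterised queries", "WAF"]),
    ],
    "internal API": [
        ("lateral movement", ["mutual TLS", "network segmentation"]),
    ],
}

def threats_for(components):
    """Assemble the applicable threat scenarios for a new system from the
    pre-built library, so review focuses on scenarios, not every finding."""
    report = []
    for component in components:
        for threat, mitigations in THREAT_LIBRARY.get(component, []):
            report.append((component, threat, mitigations))
    return report

for component, threat, mitigations in threats_for(["internet-facing web app", "internal API"]):
    print(f"{component}: {threat} -> layered mitigations: {', '.join(mitigations)}")
</pre>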
<div style="font-family: Calibri; font-size: 11.0pt; margin: 0in;">
Something else I think will become more prevalent is big data and behavioural analytics. Companies are now starting to realise the power of big data and this is spilling over into the security industry. Some security teams are now employing data analysts and setting them anomaly detection problems or running behavioural reports on their employees, which is one of the best ways to catch the rogue insider. These are interesting developments and this type of data analysis is the future of security (alongside more traditional technologies and policy as well). </div>
<br />
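<div style="font-family: Calibri; font-size: 11.0pt; margin: 0in;">
As a taste of the kind of behavioural check a security data analyst might run, here is a minimal Python sketch that flags logins deviating strongly from a user's historical pattern; the data and the z-score threshold are illustrative assumptions: </div>
<pre>
# A minimal behavioural-analytics sketch: flag a login hour that sits
# far outside a user's historical pattern using a simple z-score.
from statistics import mean, stdev

login_hours_history = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]   # typical login times

def is_anomalous(hour, history, threshold=3.0):
    mu, sigma = mean(history), stdev(history)
    return abs(hour - mu) > threshold * sigma

print(is_anomalous(9, login_hours_history))   # False - normal working pattern
print(is_anomalous(3, login_hours_history))   # True - a 3am login stands out
</pre>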
<div style="font-family: Calibri; font-size: 11.0pt; margin: 0in;">
What else? I think that third party suppliers, the supply chain and smaller businesses will start to become more heavily targeted as the main targets get harder to breach. Smaller businesses can't usually afford the experienced cyber security teams that are required to secure them. So, they turn to vendors to sell them a silver bullet... on a budget. That's not going to work. Actually, basic security hygiene doesn't have to cost that much and doesn't require huge pay-outs to vendors. It does take expertise though and that is in short supply. As an industry I think we could do more to help smaller businesses with things like best practices and Security Technical Implementation Guides (STIGs) before the epidemic hits.</div>
<br />
<div style="font-family: Calibri; font-size: 11.0pt; margin: 0in;">
Finally, my fifth prediction is that we will start to see more attacks on connected systems, such as connected vehicles, building management systems, IoT devices, etc. I have worked with vehicle manufacturers and those involved in smart cities and smart homes/offices, and I can safely say that security is not top of their agendas - safety may be, but not security. Unfortunately, a lack of security can lead to a lack of safety in these cases, but I think a few harsh events will happen before the lessons are learned. Will 2017 be the year for this? Possibly not, as I think adoption of the technologies may not quite be there yet, but if we don't start dealing with it now we'll be in for a whole world of pain later.</div>
<h3>The Threat Landscape Roundtable (8 February 2017)</h3>
I was invited along to SC Media's roundtable on The Threat Landscape last week and they have written an article on it. I was also interviewed and appear in their video summary. The article and video can be found here: <a href="https://www.scmagazineuk.com/roundtable-the-threat-landscape/article/635652/" target="_blank">https://www.scmagazineuk.com/roundtable-the-threat-landscape/article/635652/</a>

<h3>The one question to ask a security team that will tell you if their company is secure (1 February 2017)</h3>
Well, okay, it won't actually tell you whether they are secure or not, and there are other questions you could ask, but the point is that you can tell a lot about a company's security by how they answer security questions. I was recently at a security round table and the conversation turned to third parties and how you can assure yourself of their security. Some advocated scoring companies or certifications, while others advocated sending questionnaires. The argument against questionnaires is that they are a point-in-time view of the organisation. However, you can ask process- and policy-based questions, and you can tell a lot from how they answer.<br />
<br />
So, what is the question that will reveal all? Well, as I said it's not one question as such, more a type of question. It should be about something basic, some security control you're sure they have because everyone does. For example:<br />
<br />
<i>Why do you have a firewall?</i><br />
<br />
Probable answers:<br />
<ul>
<li>"because everyone has one"/"because the course I went on said I should have one"/"because my last organisation has one and they are very secure" - bad answer, you're not thinking about controls or security, but instead just buying popular products or whatever the vendor sells you and undoubtedly have a false sense of security</li>
<li>"because our PCI/ISO/HIPAA/Other certification says we have to" - bad answer, you're ticking boxes and chasing compliance rather than actually trying to be secure</li>
<li>"well, a firewall is part of a secure layered architecture and enables segregation at the network level, restricting the ingress and egress... etc." - okay answer, at least you know what it does and may understand its limitations</li>
<li>"our threat modelling has identified threat actors and attack scenarios that can be mitigated, in part, by introducing a firewall at this location in our network" - good answer, you understand the technology, you are thinking how to deploy it, what technologies could help you secure your assets and what are the best projects/controls you can spend your limited budget on to reduce risk</li>
</ul>
<br />
I have done (and still do) many third party assessments and I do advocate asking them questions rather than just trusting someone else's word or a rating/certification of some sort, but I'm mostly interested in how they answer questions. I've seen too many 'compliant' companies say "We're secure, the U.S. Government uses us!" or "All the high street banks use our service!", yet fail close inspection and have glaring weaknesses or vulnerabilities.<br />
<br />
Trust your own judgement; ask them a question. And if you're a third party, ask yourself the question... with all your controls.

<h3>File Deletion versus Secure Wiping (and how do I wipe an SSD?) (8 May 2016)</h3>
When is a deleted file actually removed from your device, or at least when does it become unrecoverable? It turns out that this question isn't always easy to answer, nor is secure file deletion easy to achieve in all circumstances. <br />
<br />
To better understand this we have to start from the basic principle that when you delete a file on your computer you are only deleting the pointer to the file, not the actual data. The data on your hard disk drive (HDD) is stored magnetically in sectors on platters that spin round inside the HDD (we'll come onto SSDs in a bit). So, how does the computer know where to look for your file? It has a table of indexes such as the File Allocation Table (FAT) or Master File Table (MFT) in NTFS. When you delete a file in your OS, all you are actually doing is removing its entries from the table of indexes so your OS can't find it any more and doesn't know it's there. However, all the data is still stored on the disk and IS STILL RECOVERABLE! Tools like Piriform's Recuva can scan your disk for orphaned files and file fragments and allow you to recover them. <br />
<br />
So, how do you actually securely delete a file so that it is unrecoverable? The most common way to securely delete a file is to overwrite it one or more times with other data before removing the entries in the index table. Different schemes for overwriting the data exist from NIST, the US DoD, HMG, Australian Government, etc. These usually consist of 1-3 rounds of writing all zeros, all ones or random patterns to the sectors, i.e. physically overwriting the data on the disk before 'deleting' it. There are many tools available to securely delete files and securely wipe drives according to these requirements. <br />
<br />
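In code, the overwrite-then-delete approach looks something like the following minimal Python sketch. It is a best-effort illustration only: as discussed below, on SSDs and on copy-on-write or journaling filesystems the new writes may land elsewhere, and real tools also deal with file metadata and free space. <br />
<pre>
import os

def overwrite_and_delete(path, passes=3):
    """Best-effort secure delete for a traditional HDD: overwrite the
    file's contents in place, force the writes to disk, then remove the
    index entry. No guarantee on SSDs or copy-on-write filesystems."""
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = length
            while remaining > 0:
                chunk = min(remaining, 1024 * 1024)
                f.write(os.urandom(chunk))        # random-pattern pass
                remaining -= chunk
            f.flush()
            os.fsync(f.fileno())                  # push through the OS cache
    os.remove(path)                               # finally drop the directory entry
</pre>
<br />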
Excellent, we've solved the problem of secure file deletion. Or have we? Well, no. There are usually some hidden areas of drives such as bad sectors that haven't actually failed, Host Protected Area (HPA), Device Configuration Overlay (DCO), etc. Interestingly, with DCO it is possible that you have a significantly bigger HDD capacity than is reported by the drive. Some manufacturers will sell bigger HDDs with the capacity artificially reduced for a variety of reasons. However, the important point here is that there are areas of the drive that you cannot normally access, but that may contain remnants of your data. <br />
<br />
What of Solid State Drives (SSD)? Are they easier or harder to securely wipe? It turns out that they are much harder to wipe. SSDs can store your data anywhere and the controllers are programmed to 'wear' the drive evenly by keeping track of areas that get a lot of use and moving data around on the drive. So, assuming you keep roughly the same file size, when you edit your file on an HDD the original physical sectors will usually get overwritten with the new version. However, with SSDs, it is likely that the new version will be written to new areas of the disk, leaving the originals intact. It is very difficult to know where an SSD actually writes your data. They also have many hidden areas as above, as well as capacity used to cope with failing sectors or evening up the wear. The long and short of it is that if you use software to overwrite the file, like you would on an HDD, you probably haven't overwritten the data at all, but you will have reduced the life of your drive. <br />
<br />
So how do we secure delete a file on an SSD? There aren't that many manufacturers of SSDs and most of them provide utilities to securely wipe their drives using the ATA Secure Erase (SE) command, which is a firmware supported command to securely wipe the whole drive, releasing electrons from the storage cells, thus wiping the storage. That's just wiped our whole drive though; how do I wipe just a file? Well, you can't really. You either wipe the drive or don't bother. <br />
<br />
There is a 'gotcha' here as well though. I said earlier that there aren't many SSD manufacturers, but if you go to buy one there seem to be loads. Well, people like HP and IBM rebrand other people's SSDs (I believe they use Crucial). What's the harm in this? Well, they will sometimes re-flash the firmware to have their own feature set. That means that the original manufacturer's Secure Erase software may not work on them and the IBMs and HPs don't always provide an alternative (other than the traditional overwriting you would do on an HDD). <br />
<br />
There must be something you can do though, surely? Well, yes there is. If you first encrypt your drive, or use file-level encryption, then the data that is on the drive should be unrecoverable (assuming you haven't stored the keys on the drive as well). This is actually your best bet for an SSD, but also does no harm on a traditional HDD. <br />
<br />
OK, so if I want to get rid of a drive that is End of Life, what should I do? If it's an HDD, you should secure wipe it by overwriting the whole drive several times as described above, degauss it (i.e. use electromagnets to wipe the magnetic data on the platters) and then shred the drive. Yes, I did say shred the drive... into tiny pieces. You can get some impressive machinery to do this, or use a service to shred them on site for you. What about SSDs? Use the ATA Secure Erase function from the manufacturer's software and then shred them as before (just make sure the shredding process actually destroys the chips so they can't be re-flowed onto another board to read them). <br />
<br />
<h3>SC: Video Interview: Bankers v hackers (4 January 2016)</h3>
Security professionals can't afford to work in isolated bubbles when the attackers are openly sharing information about system vulnerabilities...<br />
<br />
Watch my video interview for SCMagazine <a href="http://www.scmagazineuk.com/sc-video-interview-bankers-v-hackers-with-dr-luke-hebbes/article/462752/">here</a>.

<h3>Black Box versus White Box testing and when to use them (9 November 2015)</h3>
I have recently been speaking to many security professionals and asking them about black box and white box testing. I have used it as an interview question on many occasions as well. People's answers are varied and interesting, but I thought I would share my views briefly here.<br />
<br />
Firstly, what are black box testing and white box testing, or grey box testing for that matter? Simply put, a black box test is one where the tester has no knowledge of the internal structure or workings of the system and will usually test with security protections in place. They may not even be given credentials to a system that requires authentication. This would be equivalent to what a hacker would have access to. <br />
<br />
The opposite extreme is a white box test, where the tester has full knowledge of the system and access to the code, system settings and credentials for every role, including the administrator. The tester will likely be testing from inside the security perimeter. Grey box testing sits somewhere in the middle, where the tester will have knowledge of the functionality of the system and the overall components, but not detailed knowledge. They will usually have credentials, but may still test with some security controls in place. <br />
<br />
So, when would you use the different levels of testing? Personally, I think that grey box testing is neither one thing nor the other and holds little value. For me, the motivation behind black box testing is compliance, whereas the motivation behind white box testing is security. With a white box test you are far more likely to find security issues, understand them and be able to fix or mitigate them effectively, so why wouldn't you do it? The black box test is supposedly what a hacker would see, but they have far more time, so it isn't even representative. The only reason to perform a black box test is to pass some audit that you are afraid you might fail if you perform a full white box test, in my opinion. <br />
<br />
If you actually want to be secure, then make sure you always commission white box tests from your security testers.

<h3>Improving Usability AND Security - is it possible? (30 April 2015)</h3>
I believe so, but only if security teams start to listen to what's important to the usability experts and adapt the security provision accordingly. As many have said before, there is no such thing as 100% security, and we don't even necessarily want governmental levels of security for everything. Security provision should be appropriate to the system and the information it protects. <br />
<br />
I have worked on several projects with user experience designers and it has really changed my approach to securing systems. One particular project I was brought in to work on was having problems because the UX team were refusing to put in additional security measures and the security team were refusing to let them go live. To cut a long story short, it turns out that there are known drop-out rates for registrations and user journeys based on the number of fields people have to fill in and the number of clicks they have to make. The requirements from the security team meant that the drop-out rates would have been so high that the service wasn't going to work. How can you deliver a secure service in this instance? Well, we split the registration journey and allowed the first transaction with lighter-weight security information. This won't work in all cases, but the idea is the same - what security is appropriate for this system?<br />
<br />
The key here is to understand the user journey. Once you understand this, you can categorise the individual journeys and the information used. Not all journeys will access the same level of information and not all information has the same sensitivity. Authentication should be appropriate to the journey and information. Don't make the user enter loads of authentication information all the time or for the simplest tasks. Some user journeys won't actually need authentication at all. For those that do, you should consider step-up authentication - that is, simple authentication to begin with, but, as the user starts to access more sensitive information or make high-risk changes/transactions, ask them for additional credentials. For example, a simple username and password could be used for the majority of user journeys, but perhaps a one-time token for more high-risk journeys. <br />
<br />
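A minimal Python sketch of step-up authentication driven by journey categorisation and a background risk score follows; the journey names, levels and thresholds are illustrative assumptions, not a real product's API: <br />
<pre>
# A minimal sketch of risk-appropriate, step-up authentication.
AUTH_LEVELS = {"none": 0, "password": 1, "password+otp": 2}

# Each journey is categorised by the sensitivity of what it touches.
JOURNEY_REQUIREMENTS = {
    "browse_public_content": "none",
    "view_account_summary": "password",
    "change_payee_details": "password+otp",   # high-risk: step up
}

def required_step_up(journey, current_level, risk_score, risk_threshold=0.7):
    """Return the auth level to demand before allowing this journey.
    A background risk engine can force a step-up even on simple journeys."""
    needed = JOURNEY_REQUIREMENTS[journey]
    if risk_score > risk_threshold:
        needed = "password+otp"              # risk engine override
    if AUTH_LEVELS[current_level] >= AUTH_LEVELS[needed]:
        return None                          # already sufficiently authenticated
    return needed

print(required_step_up("view_account_summary", "password", risk_score=0.2))  # None
print(required_step_up("change_payee_details", "password", risk_score=0.2))  # password+otp
</pre>
<br />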
It is possible to have both usability and security. In order for this to work though, you have to:<br />
<ul>
<li>understand the user journeys</li>
<li>ensure that it is usable most of the time for most tasks</li>
<li>categorise the information and set appropriate access levels</li>
<li>use step-up authentication for high-risk tasks rather than make the whole service hard to use</li>
<li>use risk engines transparently in the background to force step-up authentication or decline transactions/tasks when risk is above the acceptable threshold</li>
</ul>
<br />
<h3>EU Commission Working Group looking at privacy concerns in IoT (20 February 2015)</h3>
The Article 29 Working Group advising the EU Commission on Data Protection has published their <a href="http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf" target="_blank">opinion</a> on the security and privacy concerns of the Internet of Things. A couple of interesting quotes come from this document, and it points to possible future laws and regulations.<br />
<blockquote>
"Many questions arise around the vulnerability of these devices, often deployed outside a traditional IT structure and lacking sufficient security built into them."</blockquote>
<blockquote>
"...users must remain in complete control of their personal data throughout the product lifecycle, and when organisations rely on consent as a basis for processing, the consent should be fully informed, freely given and specific."</blockquote>
One thing is for sure: privacy is likely to get eroded further with the widespread adoption of IoT devices and wearables. It is critical that these devices, and the services provided with them, have security built in from the start.

<h3>Internal cyber attacks - more thoughts (10 February 2015)</h3>
I presented today on a panel at the European Information Security Summit 2015 entitled 'Should you launch an internal cyber attack?' We only had 45 minutes, and I thought I'd share some of my thoughts, and what I didn't get to say, here. <br />
<br />
Firstly, as we all know, the concept of a network perimeter is outdated and there is a real blurring of whether devices should be considered internal or external these days. It's not just about <acronym title="Bring Your Own Device">BYOD</acronym>, but most organisations provide laptops for their employees. These laptops get connected at home, airports, hotels, etc. Any number of things could have happened to them during that time, so when they are reconnected to the network, they may have been compromised. For this reason, it should be every system for itself, to a certain extent, in the network, i.e. assume that the internal machines are compromised and try to provide reasonable levels of security anyway.<br />
<br />
Secondly, the user is the weakest link. It has been said many times that we spend our time (and budget) on protecting the first 2000 miles and forget about the last 2 feet. This is less and less true these days, as security departments are waking up to the fact that education of the users is critical to the security of the information assets. However, the fact still remains that users make mistakes and can compromise the best security. <br />
<br />
So, should we launch internal cyber attacks against ourselves? Yes, in my opinion - for several reasons. <br />
<br />
Internal testing is about audit and improvements. If we launch an internal Pentest or Phishing attack, we can see the effectiveness of our controls, policies and user education. The critical point is to not use the results as an excuse to punish or name and shame - this is not Big Brother looking to punish you. If a user does click on a link in a Phishing email then we should see it as our failure to educate properly. If a user bypasses our controls then our controls haven't been explained properly or they are not appropriate (at least there may be a better way). <br />
<br />
An example was discussed on the panel about people emailing a presentation to their home email account to work on it from home. In the example, this was a breach of policy and, if the categorisation of the presentation is confidential or secret, then they shouldn't be doing this. However, rather than punish the user immediately, try asking why they felt that they needed to email it to their home computer. Was it that they don't have a laptop? Or their laptop isn't capable enough? Or that they think they are doing a good thing by emailing it so that they don't have to take their corporate laptop out of the office, as they know they're going to the pub for a couple of hours and are worried about it getting stolen? There are motivations and context behind people's decisions. We see, and usually focus on, the effects without stopping to ask why they did it. Most people are rational and have reasons for acting as they do. We need to get to the heart of those reasons. <br />
<br />
Education is critical to any security system and as security professionals we need to learn to communicate better. Traditionally (and stereotypically) security people are not good at communicating in a clear, non-technical, non-jargon-filled way. This has to change if we want people to act in a secure way. We have to be able to explain it to them. In my opinion, you have to make the risks and downsides real to the user in order to make them understand why it is that we're asking them to do or not do something. If you just give someone a directive or order that they don't understand then they will be antagonistic and won't follow it when it is needed, because they don't see the point and it's a hassle. If they understand the reasoning then they are likely to be more sympathetic. Nothing does this better than demonstrating what could happen. Hence the internal attacks. <br />
<br />
The next question we have to ask ourselves is what constitutes the internal part of an internal attack. Is it just our systems, or does it include all those third party systems that touch our data? I could quite happily write a whole blog post on outsourcing to third parties and the risks, so I won't delve into it here. <br />
<br />
I do also have to say that it worries me that we seem to be educating our users into clicking on certain types of unsolicited emails that could easily be Phishing attacks. An example that I used was the satisfaction or staff survey that most companies perform these days. These often come from external email addresses and have obscured links. To my mind we should be telling our users to never click on any of these links and report them to IT security. Why shouldn't they ask our advice on an email they're unsure about? We're the experts. <br />
<br />
One final point was suggested by a speaker, which I think is a good idea: if we educate users about the security of their families and assist them with personal security incidents and attacks as if they were our company's own, then we are likely to win strong advocates.

<h3>Security groups should sit under Marketing, not IT (8 August 2014)</h3>
OK, so I'm being a little facetious, but I do think that putting Security departments under IT is a bad idea - not because they don't naturally fit well there, but because it usually gives the wrong impression and not enough visibility. <br />
<br />
Security is far more wide reaching than IT alone and touches every part of the business. By considering it as part of IT, and utilising IT budgets, it can be pigeonholed and ignored by anyone who wouldn't engage IT for their project or job. Security covers all information, from digital to paper-based and is concerned with aspects such as user education as much as technology. <br />
<br />
There is a clear conflict of interest between IT and Security as well. Part of the Security team's function is to monitor, audit and assess the systems put in place and maintained by the IT department. If the Security team sits within this department then there can be a question over the segregation of duties and responsibility. In addition to this, Security departments can end up competing with other parts of IT for budget. How well does this work when project budgets are allocated to one department responsible for producing new features and fixing the vulnerabilities in old ones? <br />
<br />
The Security department should answer directly to the board and communicate risk, not technology. It is important that they are involved with all aspects of the business from Marketing, through Procurement and Legal, to the IT department. You will, more often than not, get a much better idea of what the business does and what's important to it by sitting with the Marketing team than with the IT team. Hence the title of this post.

<h3>eBay's Weak Security Architecture (24 May 2014)</h3>
Well, eBay are in the news due to their breach of 145 million users' account details. There are a few worrying things about this breach, beyond the breach itself, that point to architectural issues in eBay's security.<br />
<br />
<div>
The first issue is that a spokeswoman (according to <a href="http://uk.reuters.com/article/2014/05/21/us-ebay-password-idUKBREA4K0B420140521" target="_blank">Reuters</a>) claimed "that it used 'sophisticated', proprietary hashing and salting technology to protect the passwords." This sounds very much like <a href="http://blog.rlr-uk.com/2009/05/security-through-obscurity.html">security through obscurity</a>, which doesn't work. So, either they are using a proprietary implementation of a publicly known algorithm, or they have created their own. Both of these situations are doomed. As always, no one person can think of all the attacks on an algorithm, which is why we have public scrutiny. Even the best cryptographers in the world can't create new algorithms with acceptable levels of security every time. Do eBay have the best cryptographers in the world working for them? I don't believe so, but I could be wrong.<br />
<br />
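For contrast, here is a minimal Python sketch of the boring, publicly scrutinised approach: a unique random salt per user and a slow, well-studied derivation function (PBKDF2 from the standard library). The iteration count is illustrative and should be tuned to your hardware.<br />
<pre>
import hashlib, hmac, os

def hash_password(password, iterations=600_000):
    salt = os.urandom(16)                       # unique per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest             # store all three

def verify_password(password, salt, iterations, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, n, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, n, stored))  # True
print(verify_password("guess1", salt, n, stored))                        # False
</pre>
<br />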
Also, if their argument is that hackers don't know the algorithm so can't attack it, then I'm fairly sure they're wrong there too. Even if the algorithm was secure enough to stand up to analysis of the hashes only, as hackers have eBay staff passwords perhaps they also have access to the code! If, on the other hand, they have their own implementation of a public algorithm I have to question why? Many examples are available of implementations that have gone wrong and introduced vulnerabilities, e.g. Heartbleed in OpenSSL. Do they think they know better?<br />
<br />
The second issue is that they don't seem to encrypt Personally Identifiable Information (<acronym title="Personally Identifiable Information">PII</acronym>). This is obviously an issue if a breach should occur but, admittedly, it doesn't solve all problems, as vulnerabilities in the web application could still expose the data. However, it would probably have helped in this situation.<br />
<br />
Finally, and most importantly, how did gaining access to eBay staff accounts give attackers access to the data? Database administrators shouldn't have access to read the data in the databases they manage. Why would they need it? Also, I would hope that there are <acronym title="Virtual Private Networks">VPNs</acronym> between the corporate and production systems with 2-factor authentication. So how did they get in? Well, either eBay don't use this standard simple layer of protection, they leave their machines logged into the VPN for extended periods or they protect the VPN with the same password as their account.<br />
<br />
Even if eBay do implement VPNs properly with 2-factor authentication, the production servers shouldn't have accounts on them that map to user accounts on the corporate network. Administrative accounts on production servers should have proper audited account control with single use passwords. Administrators should have to 'sign out' an account and be issued with a one-time password for it by the security group responsible for Identity and Access Management (IAM).<br />
<br />
All this leads me to think that eBay have implemented a weak security architecture. </div>
<h3>Denial of Service (DoS) and Brute-Force Protection (10 June 2013)</h3>
Recently it has become clear to me that, although the terms Denial of Service (DoS), Distributed Denial of Service (DDoS) and Brute-Force are used by many, people don't really understand them. This has caused confusion and problems on more than one project, so I thought I would write down my thoughts on their similarities, differences and protection mechanisms.<br />
<br />
A <strong>Denial of Service</strong> is anything that happens (usually on purpose, but not necessarily) that takes a service offline or makes it unavailable to legitimate users. This could range from a hacker exploiting a vulnerability and taking the service offline, to someone digging up a cable in the road. However, a Denial of Service could also be triggered by legitimate use of a service without any 'vulnerabilities'. Consider a service that performs operations on large sets of data that take a few seconds to complete. If I put in multiple requests for this service then I could tie it up and make it unresponsive for several minutes. Similarly, consider a website that has a page with a large video or Flash animation on it. Again, relatively few requests for this resource could make the server slow and unresponsive. DoS is not just about hackers finding vulnerabilities.<br />
<br />
<strong>Distributed Denial of Service</strong>, on the other hand, is a deliberate attempt by someone to deny service by performing large numbers of requests from a large number of hosts at once. Whilst it is relatively easy to spot a single host attempting a large number of requests and block them, it can be hard to pick up on many hosts making few requests and harder to block them. There are many solutions to combat DDoS by caching content and providing high bandwidth to large numbers of nodes, such as those available from the likes of <a href="http://www.akamai.com/" target="_blank">Akamai</a>. However, logic flaws or lengthy processing in the application can only really be fixed by the application developers. <br />
<br />
<strong>Brute-Force</strong>, by contrast, has nothing to do with DoS or making a service slow or unavailable. I was amazed that people didn't know this! Brute-Force is all about submitting a (usually large) number of requests to a service to obtain information that was not intended by the developer. An example would be having no account lockout after several incorrect login attempts: it would then be possible to try a whole dictionary, or even every character combination, to eventually find the password for a user. This is one example of Brute-Force, but there are many others, such as finding database versions, telephone numbers, transactions, <a href="http://blog.rlr-uk.com/2011/09/city-link-and-gathering-data-for-spear_4263.html">parcel delivery addresses</a>, etc. This can only really be stopped with application logic. <br />
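A minimal sketch of that application logic - an account lockout counter in Python - is below; the thresholds and in-memory storage are illustrative, and a real service would persist state and throttle by source IP as well. <br />
<pre>
import time

MAX_ATTEMPTS = 5
LOCKOUT_SECONDS = 15 * 60

failures = {}   # username -> (failure count, time of last failure)

def allow_login_attempt(username):
    count, last = failures.get(username, (0, 0.0))
    if count >= MAX_ATTEMPTS and time.time() - last < LOCKOUT_SECONDS:
        return False                    # locked out: don't even check the password
    return True

def record_failure(username):
    count, _ = failures.get(username, (0, 0.0))
    failures[username] = (count + 1, time.time())

def record_success(username):
    failures.pop(username, None)        # reset the counter on a good login
</pre>
<br />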
<h3>The Disconnect between Security and Senior Management (29 April 2013)</h3>
There is often a fundamental disconnect between security professionals and senior management. As I have stated in a <a href="http://blog.rlr-uk.com/2009/09/human-factors-in-information-security.html">previous post</a> about slips, mistakes and violations, if senior management don't 'buy in' to security then nor will the rest of the organisation, and ultimately it will fail. Middle management want to be senior management and will model themselves on them, often seeing the breaking of rules as a mark of status. So, it is vital that senior management lead by example.<br />
<br />
Unfortunately, it is often very hard to get senior management to 'buy in' to this concept and not have a 'them-and-us' attitude of there being those rules that apply to the rest of the organisation and those that apply to them. This is as much the fault of the security professionals as senior management though. Security professionals have spent so long saying "no" to everyone and stalwartly refusing to budge or see someone else's point of view that people have stopped listening and taking note. To be honest, rightly so. <br />
<br />
If you want someone to change their point of view or come round to your way of thinking, by far the easiest way is to sell it to them as a positive thing that will be beneficial to them and 'bring them with you' rather than dictate. Saying "no" all the time is not positive and will ultimately fail, as people will stop listening. Make it personal to them and put it in terms they understand. Relating security to risk and money will usually be more successful.

<h3>Pentests Don't Make You Secure (19 December 2012)</h3>
I was asked today to provide details of the 'Penetration Testing Phase' for a particular project by someone who was putting together a Test Approach Document. The categories I was asked to fill in were:<br />
<ul>
<li>Objective of the phase</li>
<li>Responsibility & Authority</li>
<li>Dependencies, risks & assumptions</li>
<li>Entry & Exit criteria</li>
</ul>
When discussing what they really wanted it became clear that they didn't know what a penetration test was or why we do them. The questions and document were set up expecting a deliverable from the pentest itself. The report was being treated as the deliverable without any thought of why a report was being produced or how it will be used. It was a tick in the box - "We require a pentest to be able to go live, so if we've had the report we can tick that box and move on." <br />
<br />
Pentesting is not an end in itself. A pentest is a finite, point-in-time snapshot of the security of a system which, taken in isolation as a goal, is fairly useless. Pentests don't make you secure. Performing a pentest and having a report with lots of pretty colours and charts saying that high and critical vulnerabilities exist is only any good if you then remediate or mitigate those vulnerabilities. You could pentest your system every month, but if you never change anything in the system, every report will be the same and you will be as much at risk as you were before you had the pentest done. Indeed, you are likely to get progressively worse results, as new vulnerabilities are discovered all the time. <br />
<br />
The test and report themselves don't do anything for security. A pentest is used by security professionals to inform and shape a project and its decisions. The actions taken based on the findings from a pentest are what improve your security and help you identify the best use of finite resources or, at the very least, enable you to understand the risk. Do you need to perform a pentest? Absolutely you do, in order to understand the threat landscape properly and identify vulnerabilities, but it's what you then do with that knowledge that is important and will make you more secure (or not).

<h3>Web Hosting Security Policy & Guidelines (16 November 2012)</h3>
I have seen so many websites hosted and developed insecurely that I have often thought I should write a guide of sorts for those wanting to commission a new website. Now I have actually been asked to develop a web hosting security policy and a set of guidelines to give to project managers for dissemination to developers and hosting providers. So, I thought I would share some of my advice here.<br />
<br />
Before I do, though, I have to answer why we need this policy in the first place. There are many types of attack on websites, but these can be broadly categorised as follows: Denial of Service (DoS), Defacement and Data Breaches/Information Stealing. Data breaches and defacements hurt businesses' reputations and customer confidence, as well as having direct financial impacts. <br />
<br />
But surely any hosting provider or solution developer will have these standards in place, yes? Well, in my experience the answer is no. It is true that they are mostly common sense and most providers will conform to many of my recommendations, but I have yet to find one that, by default, conforms to them all. <br />
<h4>
Site Categorisation</h4>
There are several different categories of hosting and several different ways to categorise sites, with different requirements. However, in my opinion, sites should be categorised based on the information that they contain and the level of interaction allowed. Sites should then be logically and physically separated into their categories. <br />
<br />
Sites can be categorised as brochure sites if they have static content or do not collect information. These sites can then further be categorised into public or private depending on whether the data that they contain is public or not. Sites within these categories may be co-hosted with other sites in the same category, but the two categories should be segregated. <br />
<br />
Sites can be classed as data collection apps if they collect sensitive or personally identifiable information (PII) from the user. Sites within this category should be hosted on their own servers with no co-hosting and be segregated from all other sites. The data must be stored on separate segregated database servers that are secured and firewalled off. <br />
<br />
Finally, any site with even more sensitive data on it or company secrets should be hosted internally if you have the expertise in house. <br />
<h4>
Hosted Environment</h4>
The following list is an example of the requirements for secure web hosting. It is not necessarily complete, but if you do not have these basics in place then you may well have issues in the future. All websites and web applications must:<br />
<ul>
<li>be hosted on a dedicated environment - the hosting machine may be virtual or physical, but must not be shared with any 3rd parties. Multiple websites and applications from the same company may be hosted on the same machines according to the categories above</li>
<li>have DDoS protection in place</li>
<li>have AV running and configured properly on the server along with appropriate responses and reporting</li>
<li>be hosted behind a Web Application Firewall (WAF) to protect against common attacks, plus allow the ability to configure it for specific services</li>
<li>be hosted on security hardened Operating Systems (OS) and services to an agreed build standard</li>
<li>be subject to regular and timely patching of the OS and services</li>
<li>be subject to regular security testing and patching of any Content Management System (CMS) in a timely manner if used</li>
<li>be subject to active monitoring and logging by the provider for security breaches and reporting to/from the organisation</li>
<li>have formal incident management processes for both identifying and responding to incidents</li>
<li>not be co-hosted with additional public services beyond HTTP/HTTPS (e.g. no public FTP)</li>
<li>not allow DNS Zone Transfers (a quick check for this is sketched after this list)</li>
<li>use proper public verified SSL certificates - with a preference for Extended Validation (EV) certificates</li>
<li>ensure that management services and ports are, preferably, on separate IP addresses and domain names; they must not be available through the normal login or be visible on the website</li>
<li>ensure that administrative interfaces and services are at least restricted to known IP addresses, and make use of client-side certificates or two-factor authentication (2FA) where possible</li>
<li>ensure staging servers are available for test and development; these must not be shared with live sites and should be securely wiped as soon as the site is deployed live</li>
<li>ensure staging and test environments are not available on the public Internet or, if there is no alternative, that they are devoid of branding and sensitive information and are restricted as above</li>
<li>be built on a tiered architecture; at a minimum, the database (DB) server must not sit on the same machine as the web front end, must not be accessible from the Internet and must be securely segregated from the front end</li>
<li>use encrypted storage for all sensitive information (e.g. passwords and PII)</li>
</ul>
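On the zone-transfer requirement above: this is easy to verify yourself. The sketch below uses the third-party dnspython library; the nameserver and domain are placeholders, and a production check would want rather more error handling.<br />
<pre>
# Sketch: does this nameserver allow a full zone transfer (AXFR)?
# Requires the third-party 'dnspython' package; names are placeholders.
import dns.query
import dns.zone

def allows_zone_transfer(nameserver, domain):
    try:
        zone = dns.zone.from_xfr(dns.query.xfr(nameserver, domain, timeout=5))
        return zone is not None  # the server handed over the whole zone
    except Exception:
        return False  # refused, timed out or otherwise failed

if allows_zone_transfer("ns1.example.com", "example.com"):
    print("Zone transfer allowed - the full DNS zone is exposed")
</pre>
If that prints the warning, anyone on the Internet can enumerate every record in your zone.<br />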
<h4>
Hosting Services</h4>
It is up to the hosting provider and third-party developers, backed by specific contractual clauses, to ensure that:<br />
<ul>
<li>the site is backed up regularly, off site, in a secure location, using encrypted media with the keys stored separately from the media; backups must be restorable in a reasonable time frame and subject to a suitable rotation and retention policy</li>
<li>hardware and media that has reached the end of its life is securely destroyed</li>
<li>all sites are made available for pentesting prior to going live and at regular intervals</li>
<li>all vulnerabilities considered of medium risk and above should be remediated prior to go-live</li>
<li>all sites are available for on-going regular automated Vulnerability Assessments</li>
<li>domain names, code and SSL certificates are registered to the company and not to a third party (a quick certificate check is sketched after this list)</li>
<li>there are agreed processes for identifying approved personnel to authorise changes</li>
<li>change management processes that track all changes are in place along with rollback and test plans</li>
<li>capacity and bandwidth are actively managed and monitored</li>
<li>all management actions are accountable (unique accounts allocated to individuals)</li>
<li>all management should be through secure ingress from trusted locations</li>
<li>egress filtering should be in place to block all non-legitimate traffic</li>
</ul>
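On the certificate-registration point above, it takes a few lines to pull a live site's certificate and confirm who it was issued to and when it expires. A sketch using only the Python standard library; the hostname is a placeholder:<br />
<pre>
# Sketch: fetch a site's certificate subject and expiry so you can
# confirm it is registered to your company, not to the provider.
import socket
import ssl

def certificate_details(host, port=443):
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return cert["subject"], cert["notAfter"]

subject, expires = certificate_details("example.com")
print(subject, expires)
</pre>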
<h3>
Here come the Security Police</h3>
Security teams often attract antagonism from the business that they are supposed to serve, appearing as self-appointed policemen in a police state. This is unhelpful and not what we should be aiming for. Security departments should be providing a secure environment in which business users are free to do what they want. Obviously this environment will have boundaries, but they must be agreed with the business and not just imposed arbitrarily. <br />
<br />
Take an example from children's play areas: children should be safe within the confines of the soft play area, and not too much harm will come to them. They can run around and play whatever game they like, as long as they stay within the boundaries. Children can't wear shoes in a soft play area because they might hurt another child, but this doesn't stop them from doing what they want: the play area has been engineered so that they don't need shoes to protect their feet or to keep them clean and dry. <br />
<br />
The same principles can be applied to security. If we build a safe and secure environment that already contains everything people need, then they are free to do what they want and need to do, and are far less likely to break the rules or circumvent security controls. The architecture has to be secure, and services should be tailored to the business functions rather than just imposed by the security teams. A good example is to provide a Choose Your Own device (CYO) offering to avoid the problems of Bring Your Own (BYO) or the restrictions of imposing a single device. It is possible to support a range of devices, and even to offer a restricted service on some further devices, while still allowing the users choice. <br />
<br />
In the end there will always be a certain amount of policing required, but if, as a security professional, you are spending most of your time in that role then your network, architecture and attitude are wrong.
<h3>
Bank Card Phone Scam - new version of an old technique</h3>
There is a new take on an old phone scam currently hitting people. The old scam was to pretend to be the telephone company and phone someone saying that they are about to be cut off if they don't pay a smallish amount by card over the phone immediately. If people don't believe them, they are actually encouraged to hang up and then try to make a call. When they hang up and then pick the phone up again, the line is dead. How do they do this? Well, it's actually very simple - the scammer doesn't hang up, they just put their phone on mute. The call was never torn down. <br />
<br />
So, what's the 'new take' on this scam? Well, they are now hitting bank and credit card customers. The scammers pretend to be from the bank and start asking for card details, etc. If you get suspicious (sometimes even prompted by the scammers themselves), you are encouraged to hang up and call them back on the telephone number shown on the back of your card. They then provide you with an extension number or a name to ask for. <br />
<br />
When you hang up, they do not, just as before. However, this time they play the sound of the dialling tone to you until you start 'dialling' the number. All they have to do is wait for you to finish dialling and then play the ringing tone. All the while they haven't hung up, and you haven't dialled your bank at all. The scammers then 'answer' the phone and pass you to the person you were speaking to before. You now think you're speaking to your bank. <br />
<br />
You did the right thing, but were still trapped. What can you do about this? My suggestion is to call back on a different line: call your bank back on your mobile, not the landline you first received the call on.
<h3>
HTTP Header Injection</h3>
Sometimes user input may be reflected in the HTTP response headers from the server. If this is the case, we may want to inject additional headers to perform other tasks, e.g. setting extra cookies or redirecting the user's browser to another site. One example of this is a file download from a website with a user-defined filename that I tested.<br />
<br />
The web application took a user-supplied description for a dataset, which was used in several places. It was passed through several layers of validation for output to the screen and to a CSV file for download. However, it was also used as the filename for the CSV download, where it was not subject to enough validation. The filename was written to the HTTP headers as an attachment, e.g.:<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">Content-Disposition: attachment; filename="output.csv"</span></blockquote>
However, if we want to add a redirect header to the response from the server then we have to manipulate the filename/description. If we add a CRLF (carriage return line feed – i.e. a new line) then we can add a new header, such as: <br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">Refresh: 0; url=http://www.google.com/#q="password.csv"</span></blockquote>
This will redirect the user's browser to the URL after 0 seconds, i.e. give them no chance to abort it. We need to send the CRLF ASCII character codes to the server to force it to put a new line in. This can be achieved by adding <span style="font-family: "courier new" , "courier" , monospace;">%0d%0a</span> (CRLF) into the description. In this case the <span style="font-family: "courier new" , "courier" , monospace;">.csv"</span> was added to the end automatically, which could be ignored by the malicious website or used as in this example above. So the full description becomes:<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">output.csv" %0d%0aRefresh: 0; url=http://www.google.com/#q="password</span></blockquote>
The output of this in the HTTP Header is:<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">Content-Disposition: attachment; filename="output.csv"</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">Refresh: 0; url=http://www.google.com/#q="password.csv"</span></blockquote>
In this case, though, I ran into a problem. If I used the above injection I got the following error: <br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">Error 500: Invalid LF not followed by whitespace</span></blockquote>
It turns out that the character set is not properly handled by the web server. You cannot just add a space after the codes either, as this will appear as a space at the beginning of the header line that we are injecting, which the browser interprets as a continuation of the previous header line. The solution came from <a href="https://www.aspectsecurity.com/blog/to-redacted-thanks-for-everything-utf-8/">https://www.aspectsecurity.com/blog/to-redacted-thanks-for-everything-utf-8/</a>, where overly long data is inserted in the knowledge that the server will truncate it down to the new line character. The following sequences will each be truncated to a line feed: <br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">%c4%8a</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">%c8%8a</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">%cc%8a</span></blockquote>
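As an aside, a couple of lines of Python show why these particular sequences work: each decodes to a single Unicode character whose low byte is 0x0A, so a server that forces the character into one byte emits the very line feed its validation tried to block.<br />
<pre>
# Each overlong sequence decodes to a character whose low byte is 0x0A.
for escaped, ch in (("%c4%8a", "\u010a"), ("%c8%8a", "\u020a"), ("%cc%8a", "\u030a")):
    print(escaped, hex(ord(ch)), "truncates to", hex(ord(ch) % 256))
</pre>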
Now the working attack payload becomes:<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">output.csv" %cc%8aRefresh: 0; url=http://www.google.com/#q="password</span> </blockquote>
The simplest way to fix this is to use a hardcoded output filename, e.g. output.csv; the user can rename the file after downloading it if they want. Otherwise, more sophisticated validation is required to look for these character codes and sequences - one approach is sketched below.
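A minimal sketch of that validation in Python (the function name and fallback filename are mine, not from the application under test). Whitelisting known-safe characters is far more robust than trying to blacklist every encoding of CR and LF:<br />
<pre>
import re

# Whitelist the characters a download filename may contain; reject
# everything else rather than chasing each CR/LF encoding (%0d%0a,
# overlong UTF-8 such as %c4%8a, and so on) individually.
SAFE_FILENAME = re.compile(r"^[A-Za-z0-9 ._-]{1,64}$")

def safe_content_disposition(requested_name):
    name = requested_name if SAFE_FILENAME.match(requested_name) else "output.csv"
    return 'Content-Disposition: attachment; filename="%s"' % name
</pre>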
<h3>
Security standards are like getting a driving license</h3>
When will people learn that compliance does NOT equal security? I <a href="http://blog.rlr-uk.com/2009/09/compliance-does-not-equal-security.html">blogged about this</a> back in September 2009. Recently, Global Payments suffered a breach despite being PCI-DSS compliant (article from <a href="http://www.theregister.co.uk/2012/03/30/visa_mastercard_breach/">The Register</a>).<br />
<br />
Security standards, and being assessed against them, are like getting a driving license. Passing your driving test means that you have achieved a minimum standard of driving, but it doesn't mean that you are a good driver or that you will never have an accident. The same is true of compliance to a particular standard - it doesn't mean that you can be any less vigilant about security or that you will never be compromised, it just means that you have met an agreed minimum level. <br />
<br />
People forget that the PCI-DSS is only concerned with payment card data and won't necessarily look at all systems and processes. It is perfectly possible for a system to be legitimately considered out of scope, yet for the compromise of that system to provide a platform from which to attack a system that is within scope. The penetration tests performed are usually more focused on external access to PCI data as well. What if I can compromise the administrator's laptop, though? Attacks from more adept hackers won't always go straight for the target; there are often easier ways in. <br />
<br />
PCI-DSS, and any other standard, should not even be considered the minimum requirement. It should be a given that the organisation will pass its compliance assessments, because it should be aiming far beyond the standard. I realise that resources are not unlimited, but that doesn't mean you should be satisfied with scraping through audits. If fewer resources were wasted trying to fudge results to pass compliance, more could be spent on actually securing the environment, and compliance would be practically automatic. <br />
<br />
<em><strong>The goal is a secure, trusted environment, not getting a bit of paper from the auditors.</strong></em>
<h3>
‘isSuperUser = true’ and other client-side mistakes</h3>
Recently I have tested a couple of commercial web-based applications that send configuration details to the client side to determine the functionality available to the user. These details were sent as XML to a Java applet or as JavaScript via Ajax. So, what’s the problem?<br />
<br />
The applications in question had several user roles associated with them, from low privilege users up to administrative users. All these users log into the same interface and features are enabled or disabled according to their role. In addition, access to the underlying data is also provided based on their role. However, in both cases, features were turned on and off in client-side code – either XML or JavaScript. One application actually sent<span style="font-family: "courier new";"> isSuperUser = true </span>for the administrative account and <span style="font-family: "courier new";">isSuperUser = false</span> for others. A simple change in my client-side proxy actually succeeded in giving me access to administrative features. <br />
<br />
The other application had several parameters that could be manipulated, such as <span style="font-family: "courier new";">AllowEdit</span>. This gave access to some features, but I noticed that there were other functions available in the code that weren’t called by the page. It was a simple matter of comparing the page delivered to the administrator with that delivered to a low-privilege user to find the missing code that called those functions. This was duly injected into the page via a local proxy, adding new buttons and menus that exposed administrative functionality, enabled by manipulating the parameters sent, as above. Some might argue that this attack isn’t realistic, as I needed an administrative account in the first place, but the injected code would work on every install of the application. You only need that access to one installation, which could be on your own machine; you can then copy and paste into any other instance (or simply Google for the code).<br />
<br />
It shouldn’t be this easy! Anything set on the client can be manipulated by the user. The security of a web application must reside on the server, not on the client. Web application developers must start treating the browser as a compromised client and code the security into the server accordingly. A sketch of what that looks like follows.
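For illustration, a minimal server-side authorisation check (the framework, Flask, and the route and role names are my own; neither application under test is being quoted here). The role comes from server-side session state, so nothing the client sends can flip it:<br />
<pre>
from functools import wraps

from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "change-me"  # signs the session cookie

def requires_role(role):
    def decorator(view):
        @wraps(view)
        def wrapped(*args, **kwargs):
            # The role was stored server-side at login; any isSuperUser
            # or AllowEdit flag arriving from the client is ignored.
            if session.get("role") != role:
                abort(403)
            return view(*args, **kwargs)
        return wrapped
    return decorator

@app.route("/admin/users")
@requires_role("admin")
def admin_users():
    return "admin-only functionality"
</pre>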
<h3>
Citrix & RemoteApp File upload and Breakout using MS Office</h3>
It is possible to deliver applications remotely to users via a solution such as Citrix or Microsoft RemoteApp (part of their Remote Desktop solution). This has the advantage of delivering only the application, rather than the whole desktop, to the user. The user isn't necessarily even aware that the application is running remotely, as it will appear like any locally installed application when running. An example of the type of application delivered in this way might be Microsoft Office. <br />
<br />
If, however, the Citrix or RemoteApp environment hasn't been set up properly, then this can lead to security problems such as arbitrary file upload and running commands remotely. I'm not going to look at macro security, even though this can lead to complete compromise of a system. However, what some people are not aware of is that you can upload files through the Open and Save As dialogs in Office. These files can then be executed on the remote system through the same dialogs. <br />
<br />
The figure below shows the options in the Open dialog of Word, with All Files (*.*) selected as the file type and having navigated into the Windows directory. Selecting either Open or, in this case, Run as administrator will execute the application. The same could be done with a batch file or script file after first uploading it by copying and pasting into this same dialog. Arbitrary files can be uploaded to a remote system and executed in this way.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-qpOY98RHy7M/TuSbHIewwFI/AAAAAAAAAFY/9HksUM3nOz0/s1600/WordRunRegedit.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="259" src="https://3.bp.blogspot.com/-qpOY98RHy7M/TuSbHIewwFI/AAAAAAAAAFY/9HksUM3nOz0/s320/WordRunRegedit.jpg" width="320" /></a></div>
<br />
What if you don't have direct access to Office applications? If they are installed on the system, you may still be able to exploit this. Consider Internet Explorer for instance. If this application is delivered remotely and Office is installed on the system, then you will probably have the option to edit the page in Office as the screenshot below shows, with the 'Export to Microsoft Excel' option in the context menu. <br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-5qrr-NOniRA/TuSdvAGxqLI/AAAAAAAAAFo/Pv7KwD6saWk/s1600/EditInExcelWebBrowser.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://1.bp.blogspot.com/-5qrr-NOniRA/TuSdvAGxqLI/AAAAAAAAAFo/Pv7KwD6saWk/s320/EditInExcelWebBrowser.jpg" width="305" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
In a remote application environment, this will open a new window to allow you to interact with the new application. You can then upload your file and execute it as before. If you are deploying remote applications, you will have to think carefully about what you are delivering and secure the deployment properly with group policies, etc., to make sure that you do not fall foul of such simple tricks.
<h3>
Encrypted ZIP Archives Leak Information</h3>
This post is just a quick note to remind people who use encrypted ZIP archives to store or transfer confidential information that the headers of the archive are not encrypted. The filenames, dates and sizes of all the files within the archive can therefore be read by anyone, without the key. Is this a problem?<br />
<br />
Well, I believe it is. Many people and organisations have naming conventions for files. How do you know which report to open if the filename doesn't give you some clue? Often filenames will include project names or codes, departments and even the names of the people writing the report. Would you give this information out to anyone walking down the street? I have seen targeted spear-phishing attacks whereby emails have been sent with what looked like project spreadsheets attached, using the correct naming conventions and project codes. These attacks were very convincing to an unsuspecting user. Filenames can leak enough data to launch social engineering attacks and to concentrate cracking effort on the most valuable files. <br />
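You can see the leak for yourself with the Python standard library: listing an encrypted archive needs no password at all, because only the file contents are encrypted, not the central directory (the archive name below is a placeholder).<br />
<pre>
# Listing a password-protected ZIP without the password: names, sizes
# and timestamps live in the unencrypted central directory.
import zipfile

with zipfile.ZipFile("secret.zip") as archive:  # no password supplied
    for info in archive.infolist():
        print(info.filename, info.file_size, info.date_time)
</pre>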
<br />
What can you do? Either don't use encrypted ZIP archives to send sensitive data, or rename every single file to a random name <em><strong>before</strong></em> adding it to the encrypted archive. Remember that you should really do this to every file, every time you add anything to an encrypted archive, even if the filename doesn't reveal anything, as otherwise you will be advertising exactly which files are the sensitive ones. One way to script the renaming is sketched below.
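A sketch of that renaming in Python. It uses the third-party pyzipper package, since the standard library cannot create encrypted archives; the paths, password and manifest name are all placeholders. A name map travels inside the archive, where it is encrypted along with everything else:<br />
<pre>
import json
import secrets
from pathlib import Path

import pyzipper  # third-party: AES-encrypted ZIP support

def write_anonymised_archive(archive_path, files, password):
    mapping = {}
    with pyzipper.AESZipFile(archive_path, "w",
                             encryption=pyzipper.WZ_AES) as archive:
        archive.setpassword(password)
        for path in map(Path, files):
            random_name = secrets.token_hex(8)  # leaks nothing in the header
            mapping[random_name] = path.name
            archive.write(path, arcname=random_name)
        # Only the name "manifest.json" is visible from outside; its
        # contents - the real filenames - are encrypted with the rest.
        archive.writestr("manifest.json", json.dumps(mapping))

write_anonymised_archive("report.zip", ["positions.xlsx"], b"correct horse")
</pre>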