
‘isSuperUser = true’ and other client-side mistakes

Recently I tested a couple of commercial web-based applications that send configuration details to the client side to determine which functionality is available to the user. These details were delivered as XML to a Java applet, or to JavaScript via Ajax. So, what's the problem?

The applications in question had several user roles associated with them, from low-privilege users up to administrative users. All these users log into the same interface, and features are enabled or disabled according to their role. Access to the underlying data is also granted based on that role. However, in both cases, features were turned on and off in client-side code, either XML or JavaScript. One application actually sent isSuperUser = true for the administrative account and isSuperUser = false for everyone else. A simple change in my intercepting proxy was enough to give me access to the administrative features.
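A minimal sketch of the anti-pattern, with invented names, shows how little effort the attack takes. The server ships a configuration blob, the client makes the access decision, and anyone sitting behind an intercepting proxy (Burp, ZAP, or similar) can rewrite the response before the page ever sees it:

```javascript
// Hypothetical config blob as sent by the server to a low-privilege user.
const configFromServer = '{"user":"alice","isSuperUser":false}';

// An attacker's proxy rewrites the response body in transit:
const tampered = configFromServer.replace(
  '"isSuperUser":false',
  '"isSuperUser":true'
);

const config = JSON.parse(tampered);

// The page's only "authorisation check" is a client-side flag,
// so it now happily renders the admin UI for a low-privilege user.
function showAdminMenu(cfg) {
  return cfg.isSuperUser === true;
}
```

Nothing here requires any exploit or credential theft; the flag is simply data under the user's control.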

The other application had several parameters that could be manipulated, such as AllowEdit. Flipping these gave access to some features, but I noticed that the code also contained functions that the page never called. It was a simple matter of diffing the page delivered to the administrator against the one delivered to a low-privilege user to find the missing code that called those functions. I injected this into the page via a local proxy, which added new buttons and menus exposing administrative functionality, again enabled by manipulating the parameters as above. Some might argue that this attack isn't realistic because I needed an administrative account in the first place, but the injected code works on every install of the application. You only need admin access to one installation, which could be on your own machine; then you can copy and paste the code into any other instance (or simply Google for it).
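The flaw above can be illustrated with a hypothetical sketch (all names invented): the server strips the admin *buttons* from a low-privilege user's page, but the functions those buttons call still ship to every role in a shared script, and the server performs no check of its own when they are invoked.

```javascript
// What the administrator's page contains:
const adminPage = '<button onclick="resetPassword()">Reset password</button>';

// The same page as delivered to a low-privilege user - the control
// has been stripped, but only the markup, not the capability:
const userPage = '';

// Diffing the two pages reveals exactly what to inject via a proxy:
const missingMarkup = adminPage;

// This function ships to every role because it lives in a shared
// script; calling it directly bypasses the "missing" button entirely.
function resetPassword() {
  return 'password reset request sent';
}
```

Hiding the button is cosmetic; the privileged operation itself was never protected.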

It shouldn’t be this easy! Anything set on the client can easily be manipulated by the user. The security of a web application must reside on the server, not on the client. Web application developers must start treating the browser as a compromised client and build their security into the server accordingly.
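Concretely, a minimal sketch of doing it properly (names and structure are assumptions, not any particular framework): the role lives in server-side session state keyed by an opaque cookie, and every privileged action re-checks it on the server, regardless of what flags or buttons the client claims to have.

```javascript
// Server-side session store: opaque session ID -> identity and role.
// The client only ever holds the ID, never the role itself.
const sessions = new Map();
sessions.set('abc123', { user: 'alice', role: 'user' });

// Hypothetical handler for a privileged operation. The authorisation
// decision is made here, on the server, on every single request.
function handleDeleteUser(sessionId, targetUser) {
  const session = sessions.get(sessionId);
  if (!session || session.role !== 'admin') {
    return { status: 403, body: 'forbidden' };
  }
  return { status: 200, body: 'deleted ' + targetUser };
}
```

With this shape, tampering with client-side configuration changes nothing: the worst an attacker can do is render buttons for operations the server will refuse.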
