I have been reminded recently, while looking at several products, that people still rely on the principle of 'security through obscurity': the belief that a system is secure because potential attackers don't know it exists or how it works. Popular as it is, this belief is false. Two groups hold it. The first is the SME who thinks it isn't a target for attack and that nobody knows about its machines, so it must be safe. This is misguided but forgivable; see my earlier post about logging attack attempts on a home broadband connection with no advertised services or machines.
The second group, the security vendors, is far less forgivable. History has shown that open systems and standards stand a far better chance of being secure in the long run: no one person can think of every possible attack on a system, so no one can secure a system alone. That is why we use RFCs to arrive at open standards that work. One product that failed for this reason was DiskLock. That was some years ago now, and although there are modern products built on a similar philosophy, it is not my intention to pick on any particular vendor or product. DiskLock encrypted files with the DES algorithm, which is fine in itself, but it stored the key with the file, relying on nobody knowing this or the scheme used to hide the key. Unfortunately, reverse engineering and chosen-key/chosen-plaintext attack techniques make such a scheme recoverable. The problem is that the secrecy won't last long, and once it has been stripped away the system should still remain secure. If it does, there was no need for the secrecy in the first place.
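To make the flaw concrete, here is a deliberately toy sketch of the key-with-the-file pattern. This is an assumption for illustration only, not DiskLock's actual file format or DES: the "encryption" is a simple XOR, and the key is "hidden" in the file header by XOR-ing it with a fixed pad. Once a reverse engineer discovers the pad, every file ever produced can be opened without knowing any user's key.

```python
# Toy illustration only (NOT DiskLock's real scheme): the key is stored in
# the file header, obscured by XOR with a fixed pad baked into the product.
OBFUSCATION_PAD = bytes([0x5A] * 8)  # the vendor's "secret" hiding scheme

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_file(key: bytes, plaintext: bytes) -> bytes:
    # Header = obfuscated key; body = plaintext XORed with the repeating key.
    header = xor_bytes(key, OBFUSCATION_PAD)
    body = bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))
    return header + body

def attack(blob: bytes) -> bytes:
    # An attacker who has reverse-engineered the pad needs nothing else:
    # recover the key from the header, then decrypt the body.
    key = xor_bytes(blob[:8], OBFUSCATION_PAD)
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(blob[8:]))

secret = b"attack at dawn"
blob = encrypt_file(b"8bytekey", secret)
assert attack(blob) == secret  # decrypted using only the public file
```

The cipher's strength is irrelevant here; the design stands or falls entirely on the secrecy of the hiding scheme, which is exactly the property that reverse engineering takes away.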
The only other time this phrase comes up is when discussing the level of security provided by NAT. Here the addresses of the internal machines are obscured, so an attacker doesn't know how many machines there are or what the internal topology is. Of course, NAT only allows outgoing connections, or incoming connections to specific ports via port forwarding, so it does reduce the chances of attacking some machines. However, a web server behind NAT still has ports 80 and 443 open to the outside and, if it isn't properly patched, will suffer in exactly the same way as if it weren't behind NAT.
I'm not saying that you should tell everyone exactly how you have implemented your security, but you can't rely on secrecy lasting. The important thing is to test your security thoroughly, preferably through an independent outside agency. This is particularly important if you want others to rely on your system, and the testing must include an audit of your code (for software) and your settings (for hardware). After all, are customers more likely to trust an independent testing agency, or a vendor trying to sell a product?