Friday, October 14, 2011

Top 16 Embedded Security Pitfalls

Every once in a while I'm asked to take a look at a scheme for ensuring that an embedded system is secure, for any of a variety of reasons. Generally the issue isn't really secrecy so much as making sure that an embedded device really is what you think it is, or that a license fee has been paid for a service, or that some bad guy hasn't broken into a system to issue bogus commands. These are hard problems for any system, but they can be made especially difficult for embedded systems, where severe cost constraints can discourage the use of bank-grade IT security solutions.

There is plenty out there on the web about security, so I'm not going to attempt a comprehensive tutorial here. (There is a chapter in my book that covers the basics as applied to embedded systems.) But one thing that can be difficult to find is all the high-level pitfalls for embedded security gathered into one compact list. So here it is. Ignore these hard-won lessons from the security community at your own peril.
  1. Security by obscurity never works against a serious adversary. Any sentence that contains the phrase "the bad guys will never figure out how" is false.
  2. Almost nobody is good enough to cook up their own cryptography. Use a good standard algorithm. If that costs too much, then realize you're just keeping out the clueless.
  3. Secret algorithms never remain secret. Any security that hinges upon the secrecy of an encryption algorithm is a bad idea. (Those sorts of systems are snake oil.)
  4. Tamper-proofing only raises the bar for stealing the secrets inside a chip. Tamper-proofing is worth doing, but don't count on it as a definitive security strategy.
  5. Even if an algorithm is secret, it can be broken.
  6. Weak algorithms will be broken, and probably publicized for bragging rights.
  7. Even if an algorithm isn't broken, it might be subject to simple cloning or a replay attack that captures and repeats an encrypted message such as a door unlock command. (A sketch of a replay-resistant challenge-response exchange appears after this list.)
  8. Never use a back-door key or a manufacturer secret key. The information will get out. Place your faith only in a large, unique-per-device random secret key.
  9. Make sure you don't generate weak or non-random keys. All the above pitfalls apply to the algorithm you use for generating random keys. (The second sketch after this list contrasts a guessable key with one drawn from a real entropy source.)
  10. Don't forget the possibility of a disgruntled employee or a rubber hose attack disclosing your algorithm or master key.
  11. Don't use encryption as your primary tool when what you really need is a secure hash, signature, or MAC for authentication. There are endless possibilities for getting it wrong if you use the wrong tool for the job. (The third sketch after this list shows how an attacker can modify an encrypted message without knowing the key.)
  12. Don't forget to have a plan for upgrading security or fixing bugs when they appear. Remember that update mechanisms can be attacked (for example, via a trojan horse update file).
  13. The system owner might be an attacker (heard of jailbreaking?).
  14. The attacker does not need to be sophisticated to break tough security (see "script kiddies").
  15. Figure out export control issues as you design your system, and adapt the design if required.
  16. Instances of poor security biting embedded system designers are ubiquitous. You just haven't heard about most of them because nobody is allowed to talk about their own company's problems.
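
A few of the pitfalls above deserve concrete illustration. For pitfall #7, here is a minimal sketch of a challenge-response exchange that defeats replay: the verifier issues a fresh random nonce, and the prover must return a MAC computed over the nonce and the command. This is illustrative only, not a vetted protocol; it assumes a Linux host with OpenSSL available (HMAC-SHA256 and getrandom()), and the key shown is a placeholder, not how you'd provision a real one.

```c
/*
 * Minimal challenge-response sketch (illustrative only, not a vetted
 * protocol). Assumes Linux getrandom() and OpenSSL's HMAC-SHA256.
 * Build: gcc challenge.c -lcrypto
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/random.h>     /* getrandom() -- Linux-specific */
#include <openssl/hmac.h>   /* HMAC(), EVP_sha256() */

#define NONCE_LEN 16
#define MAC_LEN   32
#define CMD_MAX   64

/* Constant-time compare so timing doesn't leak how many bytes matched. */
static int ct_equal(const uint8_t *a, const uint8_t *b, size_t n) {
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++) diff |= a[i] ^ b[i];
    return diff == 0;
}

/* Verifier side: issue a fresh random nonce for every transaction. */
static int make_nonce(uint8_t nonce[NONCE_LEN]) {
    return getrandom(nonce, NONCE_LEN, 0) == NONCE_LEN ? 0 : -1;
}

/* Both sides: MAC over (nonce || command) with the shared device key. */
static int mac_command(const uint8_t *key, size_t key_len,
                       const uint8_t nonce[NONCE_LEN],
                       const uint8_t *cmd, size_t cmd_len,
                       uint8_t mac[MAC_LEN]) {
    uint8_t buf[NONCE_LEN + CMD_MAX];
    unsigned int out_len = MAC_LEN;
    if (cmd_len > CMD_MAX) return -1;
    memcpy(buf, nonce, NONCE_LEN);
    memcpy(buf + NONCE_LEN, cmd, cmd_len);
    HMAC(EVP_sha256(), key, (int)key_len,
         buf, NONCE_LEN + cmd_len, mac, &out_len);
    return 0;
}

int main(void) {
    /* Placeholder key for the demo; a real device needs a large,
       unique-per-device random key (see pitfalls 8 and 9). */
    const uint8_t key[] = "per-device-secret";
    const uint8_t cmd[] = "UNLOCK";
    uint8_t nonce[NONCE_LEN], prover_mac[MAC_LEN], expected[MAC_LEN];

    if (make_nonce(nonce) != 0) return 1;

    /* Prover answers the challenge; verifier recomputes and compares.
       A captured (nonce, MAC) pair is useless later because the next
       transaction uses a fresh nonce. */
    mac_command(key, sizeof key - 1, nonce, cmd, sizeof cmd - 1, prover_mac);
    mac_command(key, sizeof key - 1, nonce, cmd, sizeof cmd - 1, expected);
    printf("command accepted: %d\n", ct_equal(prover_mac, expected, MAC_LEN));
    return 0;
}
```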
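For pitfall #9, the classic mistake is seeding a general-purpose PRNG with the clock, which makes every "random" key enumerable by anyone who can guess roughly when it was generated. The sketch below contrasts that with drawing key bytes from the operating system's entropy pool. getrandom() is Linux-specific and stands in here for whatever true entropy source (such as a hardware TRNG peripheral) your platform actually provides.

```c
/*
 * Sketch: weak vs. strong key generation (Linux example).
 * Build: gcc keygen.c
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/random.h>     /* getrandom() -- Linux-specific */

#define KEY_LEN 16

/* WRONG: the entire key is determined by one timestamp seed, so an
   attacker who can guess roughly when the device was provisioned can
   simply regenerate every candidate key. */
static void weak_key(uint8_t key[KEY_LEN]) {
    srand((unsigned)time(NULL));
    for (int i = 0; i < KEY_LEN; i++) key[i] = (uint8_t)(rand() & 0xFF);
}

/* BETTER: draw the key from the kernel's entropy pool. On a bare-metal
   embedded part, substitute a hardware TRNG peripheral. */
static int strong_key(uint8_t key[KEY_LEN]) {
    return getrandom(key, KEY_LEN, 0) == KEY_LEN ? 0 : -1;
}

int main(void) {
    uint8_t k1[KEY_LEN], k2[KEY_LEN];
    weak_key(k1);
    if (strong_key(k2) != 0) return 1;
    printf("weak key byte 0: %02x, strong key byte 0: %02x\n", k1[0], k2[0]);
    return 0;
}
```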
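And for pitfall #11, here is a toy demonstration of why encryption alone does not authenticate. The XOR keystream stands in for any stream-style cipher such as CTR mode, and the keystream bytes are made up for the example. An attacker who knows the message format can flip ciphertext bits to change what the receiver decrypts, without ever touching the key -- which is exactly the kind of forgery a MAC exists to catch.

```c
/*
 * Toy demo: encryption without authentication is malleable.
 * The XOR keystream stands in for a real stream cipher (e.g., CTR mode).
 * Build: gcc malleable.c
 */
#include <stdint.h>
#include <stdio.h>

static void xor_stream(uint8_t *buf, size_t n, const uint8_t *ks) {
    for (size_t i = 0; i < n; i++) buf[i] ^= ks[i];
}

int main(void) {
    /* Made-up keystream bytes, purely for illustration. */
    uint8_t keystream[16] = {0x3a,0x91,0x5f,0x07,0xc2,0x88,0x14,0x6d,
                             0x59,0xe0,0x7b,0x2c,0x90,0x45,0xaa,0x13};
    uint8_t msg[16] = "PAY alice $0001";

    xor_stream(msg, 16, keystream);          /* sender "encrypts" */

    /* Attacker flips bits in the ciphertext without knowing the key:
       XORing in ('1' ^ '9') at the amount digit turns $0001 into $0009. */
    msg[14] ^= '1' ^ '9';

    xor_stream(msg, 16, keystream);          /* receiver decrypts */
    printf("decrypted: %.15s\n", (const char *)msg);  /* PAY alice $0009 */
    return 0;
}
```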
The point of the above list is the following: if you don't understand what one of these bullets means and why I'm saying it, you should probably dig deeper into security before you ship a product that needs to be secure. There are always exceptions, but they are rare. This is one of those cases where 80% of ordinary designers think they're the 1% exception case that is clever enough to get away with bending the rules -- and it's just not so.


If you have suggestions for additional high-level pitfalls, I'd love to hear them.
