Monday, September 28, 2015

Open Source IoT Code Is Not The Entire Answer


Summary: Whether or not to open source embedded software is the wrong question. The right question is how we can ensure independent checks and balances on software safety and security. Independent certification agencies have been doing this for decades. So why not use them?

In the wake of the recent Volkswagen diesel software revelations, there has been a call from some that automotive software and even all Internet of Things software should be open source. The idea is that if the software is released publicly, then someone will notice if there is a security problem, a safety problem, or skulduggery of some sort. While open source can make sense, it is neither an economically realistic nor a necessary step to apply across the board.

The Pro list for open source is pretty straightforward: if you publish the code, someone will come and read it and find all the problems.

The Con list is, however, more reflective of how things really work. You have to assume that someone with enough technical skill will actually spend the time to look, and will actually find the problem. That doesn't always happen. The relatively simple Heartbleed bug was there for all to see in OpenSSL, and it stayed there for a couple years despite being a widely used, crucial piece of open source Internet infrastructure software. Presumably a lot more people care about OpenSSL than your toaster oven's software.
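
To make the Heartbleed point concrete, here is a minimal sketch in C of the class of flaw involved: trusting a length field supplied by the other end of the connection. This is emphatically not the actual OpenSSL code; the function and parameter names are illustrative assumptions. The single bounds check marked below is the kind of line that can go missing and sit unnoticed in plain sight for years, even in widely used code.

    /* Minimal sketch of the Heartbleed class of flaw: echoing back a
       peer-supplied number of bytes. Illustrative only; names and
       structure are assumptions, not the real OpenSSL code. */
    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>

    /* Echo back 'claimed_len' bytes of 'payload'. 'actual_len' is how
       many bytes were really received from the peer. */
    unsigned char *build_echo(const unsigned char *payload,
                              size_t claimed_len, size_t actual_len)
    {
        /* This is the check a Heartbleed-style bug omits; without it,
           the memcpy below reads past the received data and leaks
           whatever happens to sit in adjacent memory. */
        if (claimed_len > actual_len)
            return NULL;

        unsigned char *resp = malloc(claimed_len);
        if (resp == NULL)
            return NULL;
        memcpy(resp, payload, claimed_len);
        return resp;
    }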

Some of the opponents of open sourcing IoT software invoke the security bogeyman. They say that if you publish the source you'll be vulnerable to attacks. Well sure, it might make it easier to find a way to attack, but it doesn't make you "vulnerable." If your code was already full of vulnerabilities, publishing source code just might make it a little easier for someone to find them.  Did you notice that the automotive security exploits published recently did not rely on source code?  I can believe that exploits could, at least sometimes, be published more quickly for open source code, but I don't see this as a compelling argument for keeping code secret and un-reviewed.

A more fundamental point is that software is often the biggest competitive advantage in making products that would otherwise be commodities. Asking companies to reveal their most important trade secrets (their software) so that a hypothetical person with the time and skills might just happen to find a problem sounds like a hard sell to me.  Especially since there is the well-established alternative of having an external, independent certification agency look things over in private.

Safety critical systems have had standards and independent review systems in place for decades. Aviation uses DO-178C and other standards, and has a set of independent reviewers called Designated Engineering Representatives (DERs) who provide design reviews during the development cycle. Rail systems follow EN 50126/8/9 and typically involve oversight from acquisition consultants. The chemical process industry generally follows IEC 61508, and has long used independent certification organizations to check their work (the reviews I typically see have been done by Exida or TUV). The consumer appliance industry has long had Underwriters Laboratories (UL) certification, and is moving to a more comprehensive software safety standard approach based on IEC 60730, including external independent certification. There are also more recent domain-specific security standards that can be applied. (It is worth noting that ensuring safety and security requires a lot more than just source code, but that's a topic for another day.)

Cars have long had the option to use the MISRA software safety guidelines, and more recently the ISO 26262 safety standard. Historically, some companies have had external agencies certify automotive components to those standards. But at least some car companies have not taken advantage of this external audit opportunity, and thus there has been no independent check and balance on their software until their problems show up in the news. Software safety and security audits are not required to sell cars in the US. (There is some vehicle-level testing according to FMVSS requirements, but it's about vehicle behaviors, not the actual source code.)

For the Internet of Things, it will be interesting to see how things play out. As I understand it, the EU is already requiring IEC 60730 compliance, which means external safety checks for safety critical IoT applications. We could see that mandate spread to more IoT products sold in Europe if there are high-profile problems. And perhaps we'll see a push on automotive software regulation too.

So, there is a well-established alternative to open source in the form of external certifying organizations issuing compliance certificates based on international safety and security standards. Rather than get distracted by an open source debate, what we should be doing is asking "what's the most effective way to ensure adequate software safety and dependability in a way that doesn't put companies out of business?" Sometimes that might be open source, especially for underlying infrastructure. But other times, probably most times, independent review by a trusted certification party will be up to the task. The question is really what it will take to make companies produce verifiably adequate software.

Having checks and balances works. We should use them.

(For the record, I made some of my source code public domain before "open source" was even a buzzword, and have released other source code under an older version of GPL (Ballista robustness testing) and Creative Commons BY 4.0 (CRC Hamming Distance length calculation). Some code I copyright and release. And some I keep as a trade secret. My interest here is in the public being able to use safe and secure embedded software. We should focus on that, and not let things get sidetracked into another iteration of the open source vs. proprietary software debate.)

Monday, September 7, 2015

Essential Embedded Software Skills

I spend a lot of time trying to grapple with what makes embedded systems different from desktop computer systems in terms of skills and development processes.  Often the answer to this question on discussion groups ends up being something like "everything has to be super-optimized," or "you need to meet real-time deadlines." But those are technical measures that seem to me to be more symptoms of particular embedded system projects than root causes of the differences.  And, such answers tend to be a bit one-dimensional.

After some thought, perhaps the distinctive attributes of embedded systems can be summarized in the following way:

Interaction with the physical world:
Embedded systems generally have a primary goal of interacting with the physical world using sensors and actuators. This in turn encompasses various topics depending on the application (a small sketch of a fixed-rate sensing and control loop follows this list), including:
  - Real-time responsiveness (scheduling, concurrency management, timekeeping)
  - Analog & digital interfacing
  - Control approaches
  - Signal processing
  - Coordination via networked and Cloud services
  - Reliability, safety, system robustness
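
As a concrete (if simplified) illustration of where real-time responsiveness, sensing, actuation, and control meet, here is a minimal sketch of a fixed-rate control loop. It assumes a hypothetical bare-metal environment providing millis(), read_adc(), and set_pwm(); those names are placeholders, not any particular vendor's API.

    /* Minimal fixed-rate sense/compute/actuate loop (illustrative only).
       Assumes a hypothetical HAL: millis() returns a monotonic millisecond
       tick, read_adc() reads a sensor channel, set_pwm() drives an actuator. */
    #include <stdint.h>

    #define PERIOD_MS 10u   /* 100 Hz control rate, chosen for illustration */

    extern uint32_t millis(void);
    extern int16_t  read_adc(int channel);
    extern void     set_pwm(int channel, int16_t duty);

    void control_loop(void)
    {
        uint32_t next_release = millis();
        const int16_t setpoint = 512;       /* desired sensor reading */

        for (;;) {
            /* Wait for the next release time; a real system would sleep or
               use a timer interrupt rather than busy-waiting. */
            while ((int32_t)(millis() - next_release) < 0) { }
            next_release += PERIOD_MS;

            int16_t measured = read_adc(0);
            int16_t error = (int16_t)(setpoint - measured);

            /* Proportional-only control as a stand-in for whatever control
               approach (PID, model-based, etc.) the application needs. */
            set_pwm(0, (int16_t)(error / 4));
        }
    }

Even this toy loop touches scheduling (the release times), timekeeping (the wraparound-safe time comparison), analog interfacing, and control, which is why these topics tend to cluster together in practice.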

Special-purpose computing platform:
Most embedded systems don't use a general purpose computing platform (a desktop computer, laptop, tablet, smart phone, etc.).  Rather, they use a customized hardware platform that is permanently embedded into the product. (Even those that do use somewhat standardized hardware often have specialized I/O devices attached.)  This in turn encompasses various topics depending on the application, including:
  - Software optimization (squeezing to fit into a cost-constrained platform)
  - Close-to-hardware programming (interrupts, device interfacing; see the sketch after this list)
  - Hardware specialization (application-specific hardware, DSP platforms)
  - Specialized network protocols
  - Special-purpose human interaction devices
  - Hardware-dependent testing approaches
  - Customized operating system (or custom non-OS task manager)
  - Power management
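
To give a flavor of what close-to-hardware programming looks like, here is a minimal sketch of an interrupt-driven UART receive path. The memory-mapped register addresses, the RX_READY bit mask, and the handle_byte() function are made-up placeholders rather than values from any real part's datasheet; the point is the ISR/main-loop split and the use of volatile for data shared with an interrupt.

    /* Minimal interrupt-driven receive sketch (illustrative only).
       Register addresses, RX_READY mask, and handle_byte() are assumptions. */
    #include <stdint.h>

    #define UART_DATA   (*(volatile uint8_t *)0x40001000u)  /* received byte */
    #define UART_STATUS (*(volatile uint8_t *)0x40001004u)  /* status bits */
    #define RX_READY    0x01u

    extern void handle_byte(uint8_t b);   /* application-specific handler */

    static volatile uint8_t last_byte;    /* written in ISR, read in main loop */
    static volatile uint8_t byte_ready;   /* set by ISR, cleared by main loop */

    /* Interrupt service routine: keep it short, just capture the data. */
    void uart_rx_isr(void)
    {
        if (UART_STATUS & RX_READY) {
            last_byte = UART_DATA;
            byte_ready = 1u;
        }
    }

    /* Main loop polls the flag and does the slower processing. */
    void main_loop(void)
    {
        for (;;) {
            if (byte_ready) {
                byte_ready = 0u;
                handle_byte(last_byte);
            }
            /* ... other cooperative tasks would run here ... */
        }
    }

Whether this ends up structured as a bare-metal loop, a custom task manager, or an RTOS task is exactly the kind of platform-specific decision the list above is pointing at.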

Domain-centric development:
Outside the consumer electronics area, in my experience it is rare to meet a deeply embedded system developer with a primary college degree in computer engineering or computer science.  Generally they have a degree more relevant to their product domain. Nonetheless, here they are writing significant amounts of code for a living. Those trained in software development are missing a somewhat different set of pieces. Regardless of background, developers usually need to understand the following areas:
  - General software process and technical practice literacy (for domain experts) / Domain expertise (for software experts)
  - Life-cycle support for long-lived, hard-to-update products
  - Distributed and federated system architecture design
  - Domain-optimized development (e.g., model-based design for control systems)
  - Domain-specific aspects of security

Looking at this list, it becomes clear that skills such as knowing how to write super-optimized code are merely pieces of a larger puzzle. In general, you need to be at least literate in all the topics above to be a well-rounded embedded system developer.  Sure, not everyone and not every project needs deep expertise in everything. But if you're planning on a career in embedded systems you'll likely hit just about everything on the list -- I know that I certainly have. (And, if you're a hiring manager, now you have a shopping list of skills for your senior developers.)
