Monday, April 13, 2015

FAA CRC and Checksum Report


I've been working for several years on an FAA document covering good practices for CRC and checksum use.  At long last, this joint effort with Honeywell researchers has been issued by the FAA as an official report.  Those of you who have seen my previous tutorial slides will recognize much of the material, but this is the official FAA-released version.

Selection of Cyclic Redundancy Code and Checksum Algorithms to Ensure Critical Data Integrity
DOT/FAA/TC-14/49
Philip Koopman, Kevin Driscoll, Brendan Hall

http://users.ece.cmu.edu/~koopman/pubs/faa15_tc-14-49.pdf


Abstract:

This report explores the characteristics of checksums and cyclic redundancy codes (CRCs) in an aviation context. It includes a literature review, a discussion of error detection performance metrics, a comparison of various checksum and CRC approaches, and a proposed methodology for mapping CRC and checksum design parameters to aviation integrity requirements. Specific examples studied are Institute of Electrical and Electronics Engineers (IEEE) 802.3 CRC-32; Aeronautical Radio, Incorporated (ARINC)-629 error detection; ARINC-825 Controller Area Network (CAN) error detection; Fletcher checksum; and the Aeronautical Telecommunication Network (ATN)-32 checksum. Also considered are the use of multiple error codes together, as well as specific effects relevant to communication networks, memory storage, and transferring data from nonvolatile to volatile memory.

Key findings include:
  • (1) significant differences exist in effectiveness between error-code approaches, with CRCs being generally superior to checksums in a wide variety of contexts; 
  • (2) common practices and published standards may provide suboptimal (or sometimes even incorrect) information, requiring diligence in selecting practices to adopt in new standards and new systems; 
  • (3) error detection effectiveness depends on many factors, with the Hamming distance of the error code being of primary importance in many practical situations; 
  • (4) no one-size-fits-all error-coding approach exists, although this report does propose a procedure that can be followed to make a methodical decision as to which coding approach to adopt; and 
  • (5) a number of secondary considerations must be taken into account that can substantially influence the achieved error-detection effectiveness of a particular error-coding approach.
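
To make finding (1) concrete, here is a minimal illustrative sketch (mine, not the report's) contrasting a simple additive checksum with a bitwise IEEE 802.3 CRC-32 computation. The function names are placeholders; for a real system, use the report's guidance to pick a code that meets your Hamming distance and undetected-error requirements.

#include <stdint.h>
#include <stddef.h>

/* Simple additive checksum: cheap to compute, but misses many error
   patterns (e.g., reordered data and many multi-bit errors). */
uint32_t checksum32_add(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++) {
        sum += data[i];
    }
    return sum;
}

/* Bitwise IEEE 802.3 CRC-32 (reflected form of polynomial 0x04C11DB7).
   Costs more computation, but detects far more error patterns and gives
   a higher Hamming distance at typical message lengths. */
uint32_t crc32_ieee(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 1u) ? ((crc >> 1) ^ 0xEDB88320u) : (crc >> 1);
        }
    }
    return crc ^ 0xFFFFFFFFu;
}
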
You can see my other CRC and checksum posts via the CRC/Checksum label on this blog.

Official FAA site for the report is here: http://www.tc.faa.gov/its/worldpac/techrpt/tc14-49.pdf



Monday, February 2, 2015

Tester To Developer Ratio Should Be 1:1 For A Typical Embedded Project

Believe it or not, most high quality embedded software I've seen has been created by organizations that spend twice as much on validation as they do on software creation.  That's what it takes to get good embedded software. And it is quite common for organizations that don't have this ratio to be experiencing significant problems with their projects. But to get there, the head count ratio is often about 1:1, since developers should be spending a significant fraction of their time doing "testing" (in broad terms).

First, let's talk about head count. Good embedded software organizations tend to have about an equal number of testers and developers (i.e., 50% tester head count, 50% developer head count). Also, the testers are typically in a relatively independent part of the organization so as to reduce pressure to sign off on software that isn't really ready to ship. Ratios can vary significantly depending on the circumstances.  A 5:1 tester to developer ratio is something I'd expect in safety-critical flight controls for an aerospace application.  A 1:5 tester to developer ratio might be something I'd expect for ephemeral web application development. But for a typical embedded software project that is expected to be solid code with well defined functionality, I typically expect to see a 1:1 tester to developer staffing ratio.

Beyond the staffing ratio is how the team spends their time, which is not 1:1.  Typically I expect to see the developers each spend about two-thirds of their time on development and one-third of their time on verification and validation (V&V). I tend to think of V&V as falling within the "testing" bin in terms of effort, in that it is about checking the work product rather than creating it. Most of the V&V effort from developers is spent doing peer reviews and unit testing.

For the testing group, effort is split between software testing, system testing, and software quality assurance (SQA).  Confusingly, testing is often called "quality assurance," but by SQA what I mean is time spent creating, training the team on, and auditing the software processes themselves (i.e., checking whether you actually followed the process you were supposed to -- not actual testing). A common rule of thumb is that about 5%-6% of total project effort should be SQA, split about evenly between process creation/training and process auditing. That means on a team of 20 total developers+testers, you'd expect to see one full-time equivalent SQA person. And for a team size of 10, you'd expect to see one person half-time on SQA.

Taking all this into account, below is a rough cut at how a typical project should break down in terms of head count and effort.

Head count for a 20 person project:
  • 10 developers
  • 9 testers
  • 1 SQA (both training and auditing)
Note that this does not call out management, nor does it deal with after-release problem resolution, support, and so on. And it would be no surprise if more testers were loaded at the end of the project than the beginning, so these are average staffing numbers across the length of the project from requirements through product release.

Typical effort distribution (by hours of effort over the course of the project):
  • 33%: Developer team: Development (left-hand side of "V" including requirements through implementation)
  • 17%: Developer team: V&V (peer reviews, unit test)
  • 44%: V&V team: integration testing and system testing
  • 3%: V&V team: Process auditing
  • 3%: V&V team: Process training, intervention, improvement
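
As a quick sanity check on how those percentages translate into staffing, here is a minimal sketch (my own illustration, with made-up constant names) that scales the effort fractions above by total team size; for a 20-person team it lands close to the 10/9/1 head count split listed earlier.

#include <stdio.h>

/* Effort fractions from the list above (fractions of total project hours). */
static const double kDevCreate = 0.33;  /* developer team: development */
static const double kDevVandV  = 0.17;  /* developer team: peer reviews, unit test */
static const double kTestVandV = 0.44;  /* V&V team: integration + system test */
static const double kSqa       = 0.06;  /* V&V team: process auditing + training */

int main(void)
{
    const double teamSize = 20.0;  /* total developers + testers + SQA */

    printf("Developer creation effort: %4.1f FTE\n", teamSize * kDevCreate);
    printf("Developer V&V effort:      %4.1f FTE\n", teamSize * kDevVandV);
    printf("Test team V&V effort:      %4.1f FTE\n", teamSize * kTestVandV);
    printf("SQA effort:                %4.1f FTE\n", teamSize * kSqa);
    return 0;
}
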
If you're an agile team you might slice up responsibilities differently, and that may be OK. But in the end the message is that if you want to create very high quality software and ensure a rigorous process while doing so, you're probably only spending about a third of your total effort actually doing development, with the rest going to verification, validation, and SQA.  Agile teams I've seen that violated this rule of thumb often did so at the cost of not controlling their process rigor and software quality. (I didn't say the quality was necessarily bad; rather, they are flying blind and don't know what their software and process quality is until after they ship and have problems -- or don't.  I've seen it turn out both ways, so you're rolling the dice if you sacrifice V&V, and especially SQA, to achieve higher lines of code per hour.)

There is of course a lot more to running a project than the above. For example, there might need to be a cross-functional build team to ensure that product builds have the right configuration, are cleanly built, and so on. And running a regression test infrastructure could be assigned to either testers or developers (or both) depending on the type of testing that was included. But the above should set rough expectations.  If you differ by a few percent, probably no big deal. But if you have 1 tester for every 5 developers and your effort ratio is also 1:5, and you are expecting to ship a rock-solid industrial-control or similar embedded system, then think again.

(By way of comparison, Microsoft says they have a 1:1 tester to developer head count. See: How We Test Software at Microsoft, Page, Johnston & Rollison.)



Monday, November 17, 2014

Not Getting Software Wrong


The majority of the time spent creating software is typically spent making sure that you got the software right (or, if that's not where your time goes, it probably should be!).  But sometimes this focus on making software right gets in the way of actually ensuring the software works.  Instead, in many cases it's more important to make sure that software is not wrong.  This may seem like just a bit of word play, but I've come to believe that it reflects a fundamental difference in how one views software quality.

Consider the following code concept for stopping four wheel motors on a robot:

// Turn off all wheel motors
  for (uint_t motor = 1; motor < maxWheels; motor++)  {
    SetSpeed(motor, OFF);
  }

Now consider this code:
// Turn off all wheel motors
  SetSpeed(leftFront,  OFF);
  SetSpeed(rightFront, OFF);
  SetSpeed(leftRear,   OFF);
  SetSpeed(rightRear,  OFF);

(Feel free to make obvious assumptions, such as leftFront being a const uint_t, and modify to your favorite naming conventions and style guidelines. None of that is the point here.)

Which code is better?   Well, based on our intro to programming course, we probably want to write the first type of code. It is more flexible, "elegant," and uses the oh-so-cool concept of iteration.  On the other hand, the second type of code is likely to get our fingers smacked with a ruler in a freshman programming class.  Where's the iteration?  Where is the flexibility? What if there are more than four items that need to be turned off? Where is the dazzling display of (not-so-advanced) computer science concepts?

But hold on a moment. It's a four wheeled robot we're building.  Are you really going to change to a six wheeled robot?  (Well, maybe, and I've even worked a bit with such robots.  But sticking on two extra wheels isn't all that common after the frame has been welded!)  So what are you really buying by optimizing for a change that is unlikely to happen?

Let's look at it a different way. Many times I'm not interested in elegance, clever use of computer science concepts, or even the number of lines of code. What I am interested in is whether the code is actually correct. Have you ever written the loop type of code and gotten it wrong?  Be honest!  There is a reason that "off-by-one error" has its own Wikipedia entry. How long did it take you to look at the loop and make sure it was right? You had to assume or go look up the value of maxWheels, right?  If you are writing unit tests, you need to somehow figure out a way to test that all the motors got turned off -- and account for the fact that maxWheels might not even be set to the correct value.

But with the second set of code, it's pretty easy to see that all four wheels are getting turned off.  There's no loop to get wrong.  (You do have to make sure that the four wheel constants are set properly.)  There is no loop to test.  Cyclomatic complexity is lower because there is no looping branch.  You don't have to mentally execute the loop to make sure there is no off-by-one error. You don't have to ask whether the loop can (or should) execute zero times. Instead, with the second code segment you can just look at it and count that all four wheels are being turned off.

Now of course this is a pretty trivial example, and no doubt one that at least someone will take exception to.  But the point I'm making is the following.  The first code segment is what we were all taught to do, and arguably easier to write. But the second code segment is arguably easier to check for correctness.  In peer reviews someone might miss a bug in the first code. I'd say that missing a bug is less likely in the second code segment.

Am I saying that you should avoid loops if you have 10,000 things to initialize?  No, of course not. Am I saying you should cut and paste huge chunks of code to avoid a loop?  Nope. And maybe you actually do want to use the loop approach in your situation, even with 3 or 4 things to initialize, for a good reason; if avoiding the loop produces a lot of repetitive lines of code, that can be more error-prone than a well-considered loop. All I'm saying is that when you write code, carefully consider the tradeoff between writing concise, but clever, code and writing code that is hard to get wrong. Importantly, this includes the risk that a varying style increases the chance of getting things wrong. How that comes out will depend on your situation.  Think about what is likely to change, what is unlikely to change, and which path carries the least risk of bugs.

By the way, did you see the bug?  If you did, good for you.  If not, well, then I've already proved my point, haven't I?  The wheel number should start at 0, or the "<" should be "<=" (equivalently, the limit should be maxWheels+1), depending on whether wheel numbers are base 0 or base 1.  As it is, assuming maxWheels is 4 as you'd expect, only 3 out of 4 wheels actually get turned off.  This wasn't an intentional trick on the reader; as I was writing this article, after a pretty long day, that's how I wrote the code. (It doesn't help that I've spent a lot of time writing code in languages with base-1 arrays instead of base-0 arrays.) I didn't even realize I'd made that mistake until I went back to check it when writing this paragraph.  Yes, really.  And don't tell me you've never made that mistake!  We all have. Or you've never tried to write code at the end of a long day.  And thus are bugs born.  Most are caught, as this one would have been, but inevitably some aren't.  Isn't it better to write code without bugs to begin with instead of finding (most of) the bugs after you write the code?
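
For concreteness, here is what the repaired loop might look like under each numbering assumption (just a sketch, reusing the hypothetical uint_t, maxWheels, SetSpeed, and OFF from the example above):

// Base-0 wheel numbering: maxWheels == 4 turns off motors 0 through 3
  for (uint_t motor = 0; motor < maxWheels; motor++)  {
    SetSpeed(motor, OFF);
  }

// Base-1 wheel numbering: maxWheels == 4 turns off motors 1 through 4
  for (uint_t motor = 1; motor <= maxWheels; motor++)  {
    SetSpeed(motor, OFF);
  }
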

Instead of trying to write code and spending all our effort to prove that whatever code we have written is correct, I would argue we should spend some of our effort on writing code that can't be wrong.  (More precisely, code that is difficult to get wrong in the first place, and in which it is easy to detect when something is wrong.)  There are many tools at our disposal to do this beyond the simple example shown.  Writing code in a style that makes very rigorous static analysis tools happy is one way. Many other style practices are designed to help with this as well, chief among them being various type checking strategies, whether automated or implemented via naming conventions. Avoiding high cyclomatic complexity is another way that comes to mind. So is creating a statechart before actually writing the code, then using a switch statement that traces directly to the statechart. Avoiding premature optimization is also important in avoiding bugs.  The list goes on.
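
As one illustration of the statechart-to-switch idea, here is a minimal hypothetical sketch (the states and transition arcs are invented for this example). The point is that each case traces to exactly one state in the statechart, so a reviewer can check the code against the diagram arc by arc:

#include <stdbool.h>

// Hypothetical statechart states for a simple motor controller.
typedef enum { STATE_IDLE, STATE_RUNNING, STATE_FAULT } SystemState;

// Compute the next state; each case corresponds to one statechart state.
SystemState NextState(SystemState current, bool startRequested, bool faultDetected)
{
  switch (current) {
    case STATE_IDLE:       // Arcs: Idle -> Fault, Idle -> Running
      if (faultDetected)  { return STATE_FAULT; }
      if (startRequested) { return STATE_RUNNING; }
      return STATE_IDLE;
    case STATE_RUNNING:    // Arc: Running -> Fault
      if (faultDetected)  { return STATE_FAULT; }
      return STATE_RUNNING;
    case STATE_FAULT:      // Fault latches until reset (not shown)
    default:               // Defensive default for unknown states
      return STATE_FAULT;
  }
}
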

So the next time you are looking at a piece of code, don't ask yourself "do I think this is right?"  Instead, ask yourself "how easy is it to be sure that it is not wrong?"  If you have to think hard about it -- then maybe it really is incorrect code even if you can't see a bug.  Ask whether it could be rewritten in a way that is more obviously not wrong.  Then do it.

Certainly I'm not the first to have this idea. But I see small examples of this kind of thing so often that it's worth telling this story once in a while to make sure others have paused to consider it.  Which brings us to a quote that I've come to appreciate more and more over time.  Print it out and stick it on the water cooler at work:

"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies."

        — C.A.R. Hoare, The 1980 ACM Turing Award Lecture  (CACM 24(2), Feb 1981, p. 81)

(I will admit that this quote is a bit clever and therefore not a sterling example of making a statement easy to check for correctness.  But then again he is the one who got the Turing Award, so we'll allow some slack for clever wording in his acceptance essay.)

Monday, October 13, 2014

Safety Culture

A weak safety culture makes it extremely difficult to create safe systems.

Consequences:
A poor safety culture dramatically elevates the risk of creating an unsafe product. If an organization cuts corners on safety, one should reasonably expect the result to be an unsafe outcome.

Accepted Practices:
  • Establish a positive safety culture in which all stakeholders put safety first, rigorous adherence to process is expected, and all developers are incentivized to report and correct both process and product problems.
Discussion:
A “safety culture” is the set of attitudes and beliefs employees hold toward attaining safety. Key aspects of such a culture include a willingness to tell management that there are safety problems, and an insistence that all processes relevant to safety be followed rigorously.

Part of establishing a healthy safety culture in an organization is a commitment to improving processes and products over time. For example, when new practices become accepted in an industry (for example, the introduction of a new version of the MISRA C coding style, or the introduction of a new safety standard such as ISO 26262), the organization should evaluate and at least selectively adopt those practices while formally recording the rationale for excluding and/or slow-rolling the adoption of new practices. (In general, one expects substantially all new accepted practices in an industry to be adopted over time by a company, and it is simply a matter of how aggressively this is done and in what order.)

Ideally, organizations should identify practices that will improve safety proactively instead of reactively. But regardless, it is unacceptable for an organization building safety critical systems to ignore new safety-relevant accepted practices with an excuse such as “that way was good enough before, so there is no reason to improve” – especially in the absence of a compelling proof that the old practice really was “good enough.”

Another aspect of a healthy safety culture is aggressively pursuing every potential safety problem to root cause resolution. In a safety-critical system there is no such thing as a one-off failure.  If a system is observed to behave incorrectly, then that behavior must be presumed to be something that will happen again (probably frequently) on a large deployed fleet.  It is, however, acceptable to log faults in a hazard log and then prioritize their resolution based on risk analysis such as using a risk table (Koopman 2010, ch. 28).

Along these lines, blaming a person for a design defect is usually not an acceptable root cause. Since people (developers and system operators alike) make mistakes, saying something like “programmer X made a mistake, so we fired him and now the problem is fixed” is simply scapegoating. The new replacement programmer is similarly sure to make mistakes. Rather, if a bug makes it through a supposedly rigorous process, the fact that the process didn’t prevent, detect, and catch the bug is what is broken (for example, perhaps design reviews need to be modified to specifically look for the type of defect that escaped into the field). Similarly, it is all too easy to scapegoat operators when the real problem is a poor design or even when the real problem is a defective product. In short, blaming a person should be the last alternative when all other problems have been conclusively ruled out – not the first alternative to avoid fixing the problem with a broken process or broken safety culture.

Believing that certain classes of defects are impossible to the degree that there is no point even looking for them is a sure sign of a defective safety culture. For example, saying that software defects cannot possibly be responsible for safety problems and instead blaming problems on human operators (or claiming that repeated problems simply didn’t happen) is a sure sign of a defective safety culture. See, for example, the Therac 25 radiation accidents. No software is defect free (although good ones are nearly defect free to begin with, and are improved as soon as new hazards are identified). No system is perfectly safe under all possible operating conditions. An organization with a mature safety culture recognizes this and responds to an incident or accident in a manner that finds out what really happened (with no preconceptions as to whether it might be a software fault or not) so it can be truly fixed. It is important to note that both incidents and accidents must be addressed. A “near miss” must be sufficient to provoke corrective action. Waiting for people to die (or dozens of people to die) after multiple incidents have occurred and been ignored is unacceptable (for an example of this, consider the continual O-ring problems that preceded the Challenger space shuttle accident).

The creation of safe software requires adherence to a defined process with minimal deviation, and the only practical way to ensure this is by having a robust Software Quality Assurance (SQA) function. This is not the same as thorough testing, nor is it the same as manufacturing quality. Rather than being based on testing the product, SQA is based on defining and auditing how well the development process (and other aspects of ensuring system safety) have been followed. No matter how conscientious the workers, independent checks, balances, and quantifiable auditing results are required to ensure that the process is really being followed, and is being followed in a way that is producing the desired results. It is also necessary to make sure the SQA function itself is healthy and operational.

Selected Sources:
Making the transition from creating ordinary software to creating safety critical software is well known to require a cultural shift, typically involving a change from an all-testing approach to quality to one that balances testing and process management. Achieving this state is typically referred to as having a “safety culture,” and it is a necessary step in achieving safety. (Storey 1996, p. 107)  Without a safety culture it is extremely difficult, if not impossible, to create safe software. The concept of a “safety culture” is borrowed from other, non-software fields, such as nuclear power safety and occupational safety.

MISRA Software Guidelines Section 3.1.4 Assessment recommends an independent assessor to ensure that required practices are being followed (i.e., an SQA function).

MISRA provides a section on “human error management” that includes: “it is recommended that a fear free but responsible culture is engendered for the reporting of issues and errors” (MISRA Software Guidelines p. 58) and “It is virtually impossible to prevent human errors from occurring, therefore provision should be made in the development process for effective error detection and correction; for example, reviews by individuals other than the authors.”

References:
  • Koopman, P., Better Embedded System Software, Drumnadrochit Education, 2010.
  • MISRA, Development Guidelines for Vehicle Based Software (The MISRA Software Guidelines), Motor Industry Software Reliability Association, 1994.
  • Storey, N., Safety-Critical Computer Systems, Addison-Wesley, 1996.


Friday, October 3, 2014

A Case Study of Toyota Unintended Acceleration and Software Safety

Oct 3, 2014:  updated with video of the lecture

Here is my case study talk on the Toyota unintended acceleration cases that have been in the news and the courts the past few years.

The talk summary and embedded slides are below.  Additional pointers:
(Please see end of post for video download and copyright info.)

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

A Case Study of Toyota Unintended Acceleration and Software Safety 

Abstract:
Investigations into potential causes of Unintended Acceleration (UA) for Toyota vehicles have made news several times in the past few years. Some blame has been placed on floor mats and sticky throttle pedals. But a jury trial verdict was based on expert opinions that defects in Toyota's Electronic Throttle Control System (ETCS) software and safety architecture caused a fatal mishap.  This talk will outline key events in the still-ongoing Toyota UA litigation process, and pull together the technical issues that were discovered by NASA and other experts. The results paint a picture that should inform future designers of safety critical software in automobiles and other systems.

Bio:
Prof. Philip Koopman has served as a Plaintiff expert witness on numerous cases in Toyota Unintended Acceleration litigation, and testified in the 2013 Bookout trial.  Dr. Koopman is a member of the ECE faculty at Carnegie Mellon University, where he has worked in the broad areas of wearable computers, software robustness, embedded networking, dependable embedded computer systems, and autonomous vehicle safety. Previously, he was a submarine officer in the US Navy, an embedded CPU architect for Harris Semiconductor, and an embedded system researcher at United Technologies.  He is a senior member of IEEE, senior member of the ACM, and a member of IFIP WG 10.4 on Dependable Computing and Fault Tolerance. He has affiliations with the Carnegie Mellon Institute for Software Research (ISR) and the National Robotics Engineering Center (NREC).

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

I am getting an increasing number of requests to give this talk in person, both as a keynote speaker and for internal corporate audiences. Audiences tell me that while the video is nice, an in-person experience of both the presentation and small-group follow-up discussions has a lot more impact for organizations that need help coming to terms with creating high quality software and safety critical systems. If you are interested, please get in touch for details: koopman@cmu.edu

Other info:
  • Download a copy of the video file set for the talk (340 MB .zip file of a web directory. Experts only!  Please do not ask me for support -- it works for me, but I don't have any details about this format beyond saying to unzip it and open Default.html in a web browser.)
All materials (slides & video) are licensed under Creative Commons Attribution BY v. 4.0.
Please include "Prof. Philip Koopman, Carnegie Mellon University" as the attribution.
If you are planning on using the materials in a course or similar, I would appreciate it if you let me know so I can track adoption.  If you need a variation from the CC BY 4.0 license (for example, to incorporate materials in a situation that is at odds with the license terms) please contact me and it can usually be arranged.

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =