Showing posts with label tools.

Saturday, August 1, 2020

LINT does not do peer reviews

(Image credit: https://pixabay.com/vectors/code-programming-head-computer-2858768/)

Once in a while I run into developers who think that peer review can be completely automated by using a good static analysis tool (generically "lint," or even just compiler warnings).  In other words, run PC-LINT (or whatever), and when you have no warnings, peer review is done.

Nope.

But the reality has some nuance, so here's how I see it.

There are two critical aspects to coding style:
  (1) coding style for compilers (will the compiler generate the code you're expecting?)
  (2) coding style for humans (can a human read the code?)

A good static analysis tool is good at #1.  Should you run a static analysis tool?  Absolutely.  Pick a good tool, or at least do better than -Wall for gcc (hint: "all" doesn't mean what you think it means; *see the note below).  When your code compiles clean with all relevant warnings turned on, only then is it time for a human peer review.
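To make that concrete, here is a small hypothetical C snippet (file and function names invented for illustration).  Compiling it with "gcc -Wall -c lint_demo.c" is silent, but adding -Wextra -Wfloat-equal -Wconversion reports a problem in each function:

/* lint_demo.c -- hypothetical example of warnings that -Wall alone misses. */

/* -Wunused-parameter (enabled by -Wextra, not by -Wall alone) flags 'mode'. */
static int scale(int value, int mode)
{
    return value * 2;
}

/* -Wfloat-equal (not in -Wall) flags the == comparison of floating point. */
int nearly_done(double progress)
{
    return progress == 1.0;
}

/* -Wconversion (not in -Wall) flags the silent double-to-int truncation. */
int truncate_ratio(double ratio)
{
    int r = ratio * 100.0;
    return r + scale(r, 0);
}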

For #2, capabilities vary widely, and no automated tool can evaluate many aspects of good human-centric coding style.  (Can they use heuristics to help with #2?  Sure.  Can they replace a human?  Not anytime soon.)
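To illustrate the difference, consider this hypothetical fragment (function names invented).  Both versions are equally clean by aspect #1 and compile just as cleanly, but only the second is likely to get past a human reviewer without questions:

/* Aspect #1 only: the compiler is happy, but the reviewer has to guess the intent. */
int f(int x)
{
    return x * 9 / 5 + 32;
}

/* Aspects #1 and #2: the name and comment carry the intent to the reviewer. */
/* Convert a temperature in whole degrees Celsius to degrees Fahrenheit. */
int celsius_to_fahrenheit(int degrees_celsius)
{
    return (degrees_celsius * 9) / 5 + 32;
}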

My peer review checklist template has a number of items that fall into the #1 bin.  The reason is that it is common for embedded software teams to use no static analysis at all, or to use inadequate settings, so the basics are there.  As a team becomes more sophisticated with static analysis, they should delete those automated checks (subsuming them into item #0: has static analysis been done?) and then add items that experience shows are relevant to them, re-filling the list to a couple dozen total items.

Summarizing: static analysis tools don't automate peer reviews.  They automate a useful piece of the job once your code is warning-free, but they are no substitute for human judgment about whether your code is understandable and likely to meet its requirements.

Pointers:
* Note: in teaching I require these gcc flags for student projects:
-Werror -Wextra -Wall -Wfloat-equal -Wconversion -Wparentheses -pedantic -Wunused-parameter -Wunused-variable -Wreturn-type -Wunused-function -Wredundant-decls -Wunused-value -Wswitch-default -Wuninitialized -Winit-self






Friday, February 16, 2018

Robustness Testing of Autonomy Software (ASTAA Paper Published)

I'm very pleased that our research team will present a paper on Robustness Testing of Autonomy Software at the ICSE Software Engineering in Practice session in late May. You can see a preprint of the paper here:  https://goo.gl/Pkqxy6

The work summarizes what we've learned across several years of research stress testing many robots, including self-driving cars.

ABSTRACT
As robotic and autonomy systems become progressively more present in industrial and human-interactive applications, it is increasingly critical for them to behave safely in the presence of unexpected inputs. While robustness testing for traditional software systems is long-studied, robustness testing for autonomy systems is relatively uncharted territory. In our role as engineers, testers, and researchers we have observed that autonomy systems are importantly different from traditional systems, requiring novel approaches to effectively test them. We present Automated Stress Testing for Autonomy Architectures (ASTAA), a system that effectively, automatically robustness tests autonomy systems by building on classic principles, with important innovations to support this new domain. Over five years, we have used ASTAA to test 17 real-world autonomy systems, robots, and robotics-oriented libraries, across commercial and academic applications, discovering hundreds of bugs. We outline the ASTAA approach and analyze more than 150 bugs we found in real systems. We discuss what we discovered about testing autonomy systems, specifically focusing on how doing so differs from and is similar to traditional software robustness testing and other high-level lessons.

Authors:
Casidhe Hutchison
Milda Zizyte
Patrick Lanigan
David Guttendorf
Mike Wagner
Claire Le Goues
Philip Koopman


Monday, June 4, 2012

Cool Reliability Calculation Tools

I ran across a cool set of tools for computing reliability properties, including reliability improvements due to redundancy, MTBF based on testing data, availability, spares provisioning, and all sorts of things.  The interfaces are simple but useful, and the tools are a lot easier to use than looking up the math and doing the calculations from scratch.  If you need to do anything with reliability, it's worth a look:

http://reliabilityanalyticstoolkit.appspot.com

The one I like most right now tells you how long you have to test to determine MTBF from test data, even if you don't see any failures in testing:
http://reliabilityanalyticstoolkit.appspot.com/mtbf_test_calculator

Here is a nice rule of thumb based on the results of that tool. If you want to use testing to ensure that MTBF is at least some value X, you need to test about 3 times longer than X without ANY failures being observed. That's a lot of testing! If you do observe a failure, you have to test even longer to determine if it was an unlucky break or whether MTBF is smaller than it needs to be. (This rule of thumb assumes 95% confidence level and no testing failures observed -- as well as random independent failures. Play with the tool to evaluate other scenarios.)
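For the curious, the factor of about 3 falls out of the standard zero-failure demonstration test math for exponentially distributed failures: required test time T = -ln(1 - confidence) x target MTBF, and -ln(0.05) is about 3.0. Here is a minimal C sketch of that calculation (my own illustration, not part of the toolkit; the target MTBF value is made up):

/* zero_failure_test.c -- sketch of the zero-failure MTBF demonstration formula.
 * Assumes random, independent (exponentially distributed) failures.
 * Build with: gcc zero_failure_test.c -lm
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double target_mtbf = 10000.0;  /* hypothetical target MTBF, in hours */
    double confidence  = 0.95;     /* confidence level for the demonstration */

    /* Required failure-free test time: T = -ln(1 - C) * MTBF */
    double test_time = -log(1.0 - confidence) * target_mtbf;

    printf("Test %.0f hours with zero failures to claim MTBF >= %.0f hours\n"
           "at %.0f%% confidence (a factor of %.2f over the MTBF target).\n",
           test_time, target_mtbf, 100.0 * confidence,
           test_time / target_mtbf);
    return 0;
}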


