
Tuesday, January 5, 2021

62 Software Experience Lessons by Karl Wiegers

Karl Wiegers has written an essay about the lessons he learned over a long career in software development. The essay covers requirements, project management, quality, process improvement, and other insights; you can benefit from his experience.

https://medium.com/swlh/62-lessons-from-50-years-of-software-experience-2db0f400f706

A good example from the article is:

"You don’t have time to make every mistake that every software practitioner before you has already made. Read and respect the literature. Learn from your colleagues. Share your knowledge freely with others." 



Monday, February 13, 2017

Safety Requirements for Embedded Systems (Preview)

Here's a summary video on Embedded System Safety Requirements.


[Video: Safety Requirements Preview (ECR)]

Other pointers on this topic (my blog posts unless otherwise noted):
For more about Edge Case Research and how to subscribe to our video training channel, please see this blog posting.

Monday, October 18, 2010

Embedded Project Risk Areas -- Development Process Part 2

Series Intro: this is one of a series of posts summarizing the different red flag areas I've encountered in more than a decade of doing design reviews of industry embedded system software projects. You can read more about the study here. If one of these bullets applies to your project, you should consider whether that presents undue risk to project success (whether it does or not depends upon your specific project and goals). The results of this study inspired the chapters in my book.

Here are some of the Development Process red flags (part 2 of 2):
  • High requirements churn
Functionality required of the product changes so fast the software developers can’t keep up. This is likely to lead to missed deadlines and can result in developer burnout.
  • No SQA function
Software Quality Assurance (SQA) is, in essence, making sure that developers follow the development process they are supposed to follow. When nobody is formally assigned to the SQA function, there is a risk that processes (however light or heavy they might be) aren't actually being followed, regardless of the good intentions of the development team. If SQA is ineffective, it is possible (and in my experience likely) that some of the time spent on testing, design reviews, and other quality techniques is also ineffective.
  • No mechanism to capture technical and non-technical project lessons learned
There is no methodical effort to identify technical, process, and management problems encountered during the course of the project so that the causes of these problems can be corrected. As a result, mistakes are repeated in future projects.

Wednesday, October 13, 2010

Embedded Project Risk Areas -- Development Process Part 1

Series Intro: this is one of a series of posts summarizing the different red flag areas I've encountered in more than a decade of doing design reviews of industry embedded system software projects. You can read more about the study here. If one of these bullets applies to your project, you should consider whether that presents undue risk to project success (whether it does or not depends upon your specific project and goals). The results of this study inspired the chapters in my book.

Here are some of the Development Process red flags (part 1 of 2):

  •  Informal development process
The process used to create embedded software is ad hoc and not written down. The steps vary from project to project and from developer to developer. This can result in uneven quality.
  • Not enough paper
Too few development steps leave a paper trail. For example, test results may not be written down. Among other things, this can force tasks such as testing to be redone to confirm they were fully and correctly performed.
  • No written requirements
Software requirements are not written down, or are too informal. They may address only the changes for a new product version, with no written document stating the previous version's requirements. This can lead to misunderstandings about intended product functions and difficulty in designing adequate tests.
  • Requirements with poor measurability
Software requirements can’t be tested due to missing or subjective measurement criteria. As a result, it is difficult to know whether a requirement such as “product shall be user friendly” has been met.
  • Requirements omit extra-functional aspects
Product requirements may state hardware processing speed and hardware reliability, but omit software response times, software reliability, and other non-functional requirements. Implementing and testing these undefined aspects is left to the discretion of developers and might not meet market needs. (A sketch after this list shows how quantifying such a requirement makes it testable.)
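
As an illustration of the last two bullets, here is a minimal sketch (in C, assuming a POSIX environment) of how a vague requirement such as "the system shall respond quickly" becomes checkable once it is quantified, e.g., "process_event() shall complete within 100 ms." The function process_event() and the 100 ms budget are hypothetical placeholders invented for this example, not taken from any reviewed project.

    #include <stdio.h>
    #include <time.h>

    #define RESPONSE_BUDGET_MS 100L   /* hypothetical quantified requirement */

    /* Stub standing in for the real event handler under test. */
    static void process_event(void)
    {
        struct timespec work = {0, 50L * 1000000L};   /* simulate 50 ms of work */
        nanosleep(&work, NULL);
    }

    static long elapsed_ms(struct timespec start, struct timespec end)
    {
        return (end.tv_sec - start.tv_sec) * 1000L +
               (end.tv_nsec - start.tv_nsec) / 1000000L;
    }

    int main(void)
    {
        struct timespec start, end;
        long ms;

        clock_gettime(CLOCK_MONOTONIC, &start);
        process_event();
        clock_gettime(CLOCK_MONOTONIC, &end);

        ms = elapsed_ms(start, end);
        printf("response time: %ld ms (budget: %ld ms) -> %s\n",
               ms, RESPONSE_BUDGET_MS,
               (ms <= RESPONSE_BUDGET_MS) ? "PASS" : "FAIL");
        return (ms <= RESPONSE_BUDGET_MS) ? 0 : 1;
    }

Once the requirement carries a number, a test like this can pass or fail objectively; "user friendly" or "fast" cannot.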

Monday, June 21, 2010

Don't require perfection

A common problem with requirements is that they mandate perfection that is unattainable. For example, it is common for embedded software requirements to state that the software shall never crash, shall be perfectly safe, and shall be defect-free. (In truth, these things more often aren't even written down, but they are the answers you get when you ask what the requirements are for dependability, safety, and software defect rates in high quality embedded systems.)

Perfection doesn't ever happen. "Never" is longer than you have available to test for software dependability. And it is the rare everyday embedded system that has taken a rigorous approach to ensuring safety.

It is also true that in many areas it is too risky from a liability point of view to write down a concrete requirement for less than perfection. And you may be guessing as to a target of less than perfection even if you do specify one. (Is it OK for your system to crash once every 1000 years, or will 900 years do? Did you guess when you answered that, or do you have a concrete basis for making that tradeoff?)

If you can, and are permitted to, specify a concrete, non-perfect set of requirements for your product, you should. But if you can't, consider instead defining a set of acceptance tests that at least let you perform actual measurements to validate that your system is good enough. These can be either process requirements or actual tests. Some examples include:

  • System shall not crash during one full week of stress tests.
  • All sources of crashes during testing shall be tracked down to root cause, and eliminated if appropriate.
  • System shall perform an emergency shutdown if a defined safety requirement is violated at run time, as sketched after this list. (This assumes you are able to monitor these requirements effectively at run time.)
  • All system errors shall be logged for analysis in failed units returned for factory service.
None of these will get you to perfection. But they, and other possible criteria like them, will give you a concrete way of knowing whether you have worked hard enough in relevant areas before you release your software. You can find out more by reading Chapter 6 of my book, which discusses creating measurable requirements.
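
To make the emergency shutdown bullet concrete, here is a minimal sketch in C of a periodic run-time safety monitor. The pressure sensor, the 350 kPa limit, and the shutdown and logging actions are all hypothetical placeholders invented for illustration; a real system would tie these to its actual defined safety requirements and to nonvolatile error storage.

    #include <stdio.h>
    #include <stdlib.h>

    #define PRESSURE_LIMIT_KPA 350   /* hypothetical quantified safety limit */

    /* Stub sensor read; a real system would sample hardware here. */
    static int read_pressure_kpa(void)
    {
        return 372;   /* simulated out-of-range reading for demonstration */
    }

    /* Record the violation so failed units returned for factory service
       can be analyzed; a real system would use nonvolatile storage. */
    static void log_safety_event(const char *msg, int value)
    {
        fprintf(stderr, "SAFETY LOG: %s (value=%d)\n", msg, value);
    }

    /* Drive outputs to a safe state, then stop. */
    static void emergency_shutdown(void)
    {
        /* Real code would de-energize actuators before halting. */
        fprintf(stderr, "EMERGENCY SHUTDOWN\n");
        exit(EXIT_FAILURE);
    }

    /* Call periodically from the main loop or a timer tick. */
    static void safety_monitor_step(void)
    {
        int pressure = read_pressure_kpa();
        if (pressure > PRESSURE_LIMIT_KPA) {
            log_safety_event("pressure limit exceeded", pressure);
            emergency_shutdown();
        }
    }

    int main(void)
    {
        safety_monitor_step();   /* demo: one monitoring cycle */
        return 0;
    }

Note that the monitor logs the violation before shutting down, which also supports the error logging bullet above.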