Friday, 31 May 2013

Revolutionising Software Quality Assurance

Executive Summary

In this blog we examine current software quality assurance and software delivery methods, and look deeper to understand why software defects occur. Understanding the root causes of software defects is the only assured way to move to methods of software quality assurance and software delivery that guarantee increased quality with attendant delivery efficiency. Having identified the root causes, we go on to map out how we can revolutionise both our expectations of software quality and of software delivery by leveraging automation founded on mathematics and engineering practice.

The context

We are increasingly seeing terms such as "Software Quality Assurance", "Total Quality", "Zero Defects" and "CMMI" in our world of IT. From heads of procurement, quality directors and CIOs, the clarion cry for better quality can be heard both in the corridors and in the boardrooms with suppliers. We even hear it within the software services industry itself, as we all try to deliver faster and cheaper, with fewer defects, higher quality and lower risk of failure.

Over the past half century our software industry has come up with one approach to software delivery after another, from the standard waterfall, to the V-model, to agile delivery. And yet we are not happy. Waterfall projects don't seem to reflect the needs of the business fast enough in an ever-changing world, the V-model suffers similarly, and agile delivery is challenging our need for scale, visibility and predictability.

We all want to deliver faster, cheaper without compromising quality and yet we share the view of the great and the good that "quality is the pimple on the arse of progress".

In the IT world we do look at how quality and process efficiency can help manage the tension between velocity of delivery and quality of output in a changing world. The move towards "lean" execution derives from the automotive industry. The adoption of Quality Function Deployment (QFD) and its sister, the House of Quality (HoQ), into six-sigma (a sign of quality in itself) was a result of that same lean movement from Toyota.

A one-time examiner for the Institute of Quality Assurance, whose work applied largely to the aerospace industry, says "Quality Assurance is essentially about closing the loop between a customer’s requirements and what is delivered"8, something we often miss. When we test, we test against requirements, but are they a true reflection of what the customer wants and needs?

In the practice of Software Quality Assurance we often take a leaf out of the standard works on quality engineering, just as we do with lean. But we fail to understand that "both mechanical and electrical/electronic items or systems exhibit variance which does not exist in the digital software arena and much of traditional Quality Control is about that. A piece of code is non-variant. It isn’t right within defined tolerances; it is either right or wrong".

We need to truly understand why software defects occur in order to postulate any solution for software quality assurance and software delivery methods that support our desired notion of quality.

The root cause of software defects

This fundamental observation, which cries out to be heard, is that software is either right or wrong. It needs to be at the forefront of our desire for software that is delivered faster, such that quality is increased and, by extension, customer wants and needs are satisfied. All of the delivery methods and quality methods that get deployed fail to address this fundamental observation. Instead they rely on people to interpret customer wants and desires through a succession of refinements: from customer interviews to wireframes, to the written word, to the painted screen, into programming languages or configurations, into executable software that does stuff.

They say the devil is in the detail, and the devil in this detail is always rooted in translations "from one to another". The more ambiguous they are, the higher the propensity for errors. We say errors rather than defects because "defect" is too soft a word. A defect in metal casting may occur because air bubbles get trapped. In software we only have errors, because the software is wrong. In a perfect world, a computer program is just maths, and it can be proven right or wrong against a set of axioms. And if it is wrong it can be said to be erroneous.

Thus the root cause of software defects is ambiguity that leads to erroneous interpretations from one level of refinement to another, each of which moves away from the customer's requirements.

The way forward

If we step back and consider this problem of ambiguity, we find it lies in the fact that the language employed at any level does not have sufficient semantics and structure to support any formal abstraction or, indeed, refinement.

On the one hand, refinement is the process we enact when we add detail, but in adding detail we want to preserve the semantics. In our complex world the only languages that have rigour and truly precise semantics are mathematically based: Java has an interpreter and a runtime that enforce its semantics, and the translation of UML class diagrams into Java is based on formal semantics. On the other hand, abstraction is the ability to pull structure out of something more detailed; a code review often looks at the structure to see if it matches some higher-level description. Ideally we want languages that support requirements at different levels, so that we can show refinement and abstraction formally. We want the same for design, and we already have it for coding.

JT observed, "There are however methods such as mathematical methods that can provide high confidence that the system will be fault free and that is what QA is about". So what mathematical methods can we use? What are the limitations of those methods and will their use give us total confidence or are we missing something?

If we break the SDLC down into the standard phases and examine them one at time, we start to see that the axioms upon which proof relies are themselves statistical in nature.

Those axioms are an expression of a customer’s wants and needs, whereas designs, code and executables have a more formal algebraic relationship that lends itself to proof.

This latter step we take for granted, but consider a simple piece of Java (or any programming language). The computer on which that code executes does not understand the programming language; it understands only binary configured for its specific CPU and operating system. We use a compiler to do the translation, and we never question its correctness in doing so. Under the covers the compiler is running mini-proof checkers to make type judgments, and that ensures the translation is correct.
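To make the idea of a compiler's type judgments concrete, here is a toy type checker sketched in Python. The two-type expression language and its rules are invented purely for illustration; real compilers are far more elaborate, but the principle is the same: each judgment is a small proof step, and an ill-typed program is rejected outright.

```python
# A miniature version of the "mini-proof checker" a compiler runs when
# it makes type judgments. Expressions are either literals or tuples
# of the form (operator, lhs, rhs). Illustrative only.

def type_of(expr):
    """Return 'int' or 'bool' for a well-typed expression, else raise."""
    if isinstance(expr, bool):      # check bool before int: in Python,
        return "bool"               # bool is a subclass of int
    if isinstance(expr, int):
        return "int"
    op, left, right = expr
    lt, rt = type_of(left), type_of(right)
    if op == "+" and lt == rt == "int":
        return "int"                # judgment: int + int : int
    if op == "<" and lt == rt == "int":
        return "bool"               # judgment: int < int : bool
    raise TypeError(f"ill-typed: {op} applied to {lt} and {rt}")

print(type_of(("+", 1, 2)))             # int
print(type_of(("<", 1, ("+", 2, 3))))   # bool
```

There are no tolerance bands here: an expression either type-checks or it does not, which is exactly the right-or-wrong character of software discussed above.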

There is no reason, if we can find the right maths, why we cannot do the same between requirements and design, and between design and code. And if we could, and could hide it all away just as a compiler does, we could use it to ensure designs express requirements and code expresses designs. The maths we need to leverage, and hide, has to be "Turing complete" so that it expresses or captures all that can be computed. It has to be capable of dealing with the complexity of integration, so that it captures the way in which components and services talk to each other. It should also ensure, in this modern mobile and cloud age, that we can capture changing connections (i.e. moving from one cell to another in a mobile network, or from one cloud to another). It needs to be able to capture the very basics of business transactions, what they might mean and how we might understand them. The Pi-calculus, developed by the late Prof Robin Milner, a Turing Award winner, does all of this. We won't go into detail; it is sufficient to know that such a mathematics exists and can be used.
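To give a flavour of what the Pi-calculus captures, the sketch below represents send and receive processes as Python data structures and applies one communication step. Its defining feature, mobility, is that channel names are themselves the data sent over channels, so the communication topology can change as the system runs, which is exactly the cell-to-cell hand-over case above. The encoding, the channel names and the scenario are invented for illustration; the full calculus also has restriction, replication and parallel composition.

```python
# A minimal sketch of pi-calculus terms and one reduction step.
from dataclasses import dataclass

@dataclass
class Nil:                 # 0 : the inert process
    pass

@dataclass
class Send:                # x<y>.P : send name y on channel x, then run P
    chan: str
    msg: str
    cont: object

@dataclass
class Recv:                # x(z).P : receive a name on x, bind it to z in P
    chan: str
    var: str
    cont: object

def substitute(p, var, name):
    """Replace free occurrences of `var` with `name` in process p."""
    if isinstance(p, Nil):
        return p
    swap = lambda n: name if n == var else n
    if isinstance(p, Send):
        return Send(swap(p.chan), swap(p.msg), substitute(p.cont, var, name))
    return Recv(swap(p.chan), p.var,
                p.cont if p.var == var else substitute(p.cont, var, name))

def communicate(sender, receiver):
    """One reduction step: x<y>.P | x(z).Q  ->  P | Q[y/z]."""
    assert sender.chan == receiver.chan, "no matching channel"
    return sender.cont, substitute(receiver.cont, receiver.var, sender.msg)

# A cell hand-over in miniature: the network sends the mobile a *new*
# channel name "cell2"; afterwards the mobile talks on cell2, not cell1.
network = Send("cell1", "cell2", Nil())
mobile  = Recv("cell1", "link", Send("link", "data", Nil()))
_, mobile_after = communicate(network, mobile)
print(mobile_after)   # Send(chan='cell2', msg='data', cont=Nil())
```

The point is not the code but the shape of the idea: the connection the mobile uses next was not fixed in advance, it arrived as data.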

But such algebraic proofs are not sufficient if the axioms upon which they are based are statistical and not absolute. Thus proving that an executable implements some code, which implements some design, which meets some requirements, is all well and good if we can say that the requirements are correct, because then we know through proof, not testing, that the executable is correct. But, alas, we can never prove requirements are correct.

If the fundamental tenet of Software Quality is that the software can be said to meet the wants and needs of the customer who requested it, then we need to look deeper into what sort of requirements need to be met and how we gain confidence in those requirements. If we can do this, then leveraging an algebraic proof from requirements to executable code can reduce the time to deliver and reduce the number of errors, because we can get rid of the translation errors, and quality as a measure of confidence can rise to the same level we have for our requirements.

We mentioned HoQ before, and its adoption with QFD into six-sigma. But few six-sigma practitioners, even those that are black belts, use either QFD or HoQ. And yet the seeds of confidence in requirements being correct lie here. Joining up HoQ and the mathematics of the Pi-calculus provides what is needed. In effect we ensure that confidence is high that the requirements are correct, which means the axioms against which our algebraic proofs are made share the same confidence. This contrasts with how we do things today, where confidence that the requirements are correct is often not very high and the problem is compounded through design, code and test. This is why testing costs so much and why coding is inefficient, with a lot of re-work happening through iterations in test and back to coding.

HoQ works through a simple statistical approach that relates stakeholders to requirements and enables a clear alignment: as requirements are refined they are traced back to the stakeholders who are impacted by the change, and back to the higher-level requirements, business drivers and goals that necessitate change. Combining cloud technology and HTML5 enables HoQ to be used on tablets such as iPads and Android devices as well as on all other devices. This combination of mobility and low set-up cost provides a way to engage stakeholders in a structured discussion, to tease out requirements, to prioritise them based on the differing views of the stakeholders in a balanced way, and to capture dependencies along the way. This technique brings well-known statistical confidence to the process at each stage and through the refinement into actionable requirements. It encourages consensus across the stakeholders, which increases confidence in the actionable requirements.
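The prioritisation step at the heart of that balanced view can be sketched numerically: each stakeholder rates each requirement, stakeholders carry weights reflecting their stake, and the weighted sums give a consensus ordering. The stakeholders, weights and ratings below are invented for illustration; a full HoQ matrix also relates requirements to one another and to technical characteristics.

```python
# A sketch of the prioritisation step of a House of Quality matrix.
# Weights and ratings (1-9 scale) are illustrative only.

stakeholder_weights = {"end user": 0.5, "operations": 0.3, "finance": 0.2}

ratings = {   # ratings[stakeholder][requirement]
    "end user":   {"fast search": 9, "audit trail": 3, "low run cost": 2},
    "operations": {"fast search": 4, "audit trail": 8, "low run cost": 6},
    "finance":    {"fast search": 2, "audit trail": 5, "low run cost": 9},
}

def prioritise(weights, ratings):
    """Return requirements sorted by weighted stakeholder score."""
    reqs = next(iter(ratings.values())).keys()
    scores = {r: sum(weights[s] * ratings[s][r] for s in weights)
              for r in reqs}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for req, score in prioritise(stakeholder_weights, ratings):
    print(f"{req}: {score:.1f}")
# fast search: 6.1
# audit trail: 4.9
# low run cost: 4.6
```

The output makes the trade-offs explicit and auditable, which is what gives the resulting actionable requirements their statistical confidence.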

If our actionable requirements are now deemed to be correct at a high, balanced confidence level, we have confidence in the axioms against which we can use algebraic proof. If we can then show that a design is correct by proof against the requirements, we can assert that the confidence in the design is at the same level as the confidence in the requirements. Furthermore, we can then generate artefacts to drive coding, structure what is coded from that design, and check that when the code comes together in systems integration it is conformant to that design. This gives us the same level of confidence that what is coded is also correct against the requirements, as it can be algebraically checked against the design itself.
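One simple way to picture that conformance check: treat the design as a small state machine over allowed message exchanges, and test whether a recorded integration trace follows it. The result is binary right-or-wrong, with no tolerance band, as argued earlier. The protocol below is invented for illustration.

```python
# A sketch of design-conformance checking against an invented
# request/approve protocol expressed as a state machine.

design = {  # state -> {message: next_state}
    "start":   {"request": "pending"},
    "pending": {"approve": "done", "reject": "start"},
}

def conforms(trace, design, state="start"):
    """True iff every message in the trace is allowed by the design."""
    for msg in trace:
        allowed = design.get(state, {})
        if msg not in allowed:
            return False
        state = allowed[msg]
    return True

print(conforms(["request", "approve"], design))             # True
print(conforms(["request", "approve", "approve"], design))  # False
```

In practice the design would be richer (the Pi-calculus gives the formal footing), but the shape of the check is the same: the integrated system's observed behaviour either is or is not a behaviour the design permits.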

As JT points out, "The art of standing in the position of the customer or end user is so often missing. For some reason the independence of thought required to forget the processes that stand between requirement and assurance and to close that loop is often missing in both hardware and software." We can of course leverage all of this mathematics and automate things that were not automated before, but still deliver something that does not meet the customer's expectations. This stems from an inability to put ourselves in the position of the customer. To do this we can leverage HoQ, because one of the stakeholders is the customer or end user of what we produce. So at one level we can ensure that their needs are met and prioritised correctly. At another level, if we ensure alignment, both statistically and algebraically, of those customer and end-user needs, we can drive automation in testing and in conformance checking to ensure that the customer and end-user needs are shown to be met.


We have looked at the root cause of software defects (errors) and found it to be ambiguous communication and the multiple interpretations that result. We have made clear reference to the customer or end user and looked at how we can put ourselves in their position and so ensure their needs are met. We have taken the postulation that mathematics can provide a better solution for ridding ourselves of, or minimising, software defects, and we have shown how we might do this using statistical methods for requirements and algebraic proofs thereafter. Thus the use of mathematics, postulated by JT, is not a pipe dream but a practical reality that can help revolutionise both software quality assurance and delivery methods, as we use automation techniques based on both statistics and hard-core algebra to reduce ambiguity, speed up delivery and increase the quality of the result.

Last but not least, if you hadn't guessed, the ZDLC Platform does exactly what we have presented. And it's not just us that think this; look at Ovum too.


1. John R. Hauser (1993). How Puritan-Bennett used the House of Quality. Sloan Management Review, Spring, 61-70.
2. John R. Hauser & Don Clausing (1988). The House of Quality. Harvard Business Review, May-June, 63-73.
3. John Terninko (1997). Step-by-Step QFD: Customer-Driven Product Design, 2nd edition. St. Lucie Press.
4. Larry M. Shillito (1994). Advanced QFD: Linking Technology to Market and Company Needs. John Wiley & Sons, Inc.
5. Ronald G. Day (1993). Quality Function Deployment: Linking a Company with its Customers. ASQC Quality Press.
6. Jennifer Tapke, Allyson Muller, Greg Johnson & Josh Sieck. "House of Quality".
7. Juan A. Marin-Garcia & Tomas Bonavia. "Strategic Priorities and Lean Manufacturing Practices in Automotive Suppliers. Ten Years After."
8. Private letter from John Talbot (former examiner for the Institute of Quality Assurance).
9. Private copy of "The Book of Kimbleisms". Richard Kimble.
10. Robin Milner. "The Polyadic pi-Calculus: A Tutorial". LFCS report ECS-LFCS-91-180, School of Informatics, Edinburgh University.

Thursday, 30 May 2013

We have been busy doing cool stuff

For over three years, since I joined Cognizant, we have been busy building out a new platform for the engineering of software. We call it the Zero Deviation Lifecycle; it comprises all of the work I have been doing since about 2000, combined with thinking from Dr Bippin Makoond. Visit the ZDLC blog page.