
The Myth of Zero-Defect Software


I am sure that anyone who has gone through a complete software life cycle will understand what I am talking about. As the release date nears, discussions start about the number of defects that are still open and how we plan to address them. This debate continues almost until the date of the release, and many a time, as the software is released, a plan is put in place for the "Service Pack" that will address critical issues that have slipped through into the product.

Many of these discussions are painful, and one of the primary reasons for the pain is the differing perceptions of the "quality" of the software and whether one can say with a clear conscience that the product is ready to ship even while knowing that there are defects in it. Obviously, a good software practitioner will not want to cause undue pain to the end user of the software.

If one scans the literature on software quality, one finds two broad views: one leaning towards shipping defect-free software, and the other towards shipping software of "acceptable quality".

I am not a great votary of the "zero defect" software concept. In my view, the goal of zero defects is achievable in manufacturing and related fields but is a very difficult goal in software engineering. A "defect", by its very name, implies non-conformance to a specification. This means that to have zero defects, we need to have the specifications spelled out completely and correctly. How many software practitioners (especially people building commercial software) have ever been exposed to a complete and correct set of specifications?

Over releases, software evolves, and simultaneously the specifications evolve with it. Software engineering is heavily constrained by schedule and cost: businesses mandate more features, in less time, at lower cost. Given this reality, any goal of having a fully qualified specification will end up a mirage. My personal view is that having such a goal is itself absurd. A good software practitioner has to plan for change and should be able to control it.

In the absence of a complete specification, the goal of defect-free software likewise ends up a mirage and an absurd goal. A more useful goal is to define "acceptable quality" explicitly, for example as a budget of open defects per severity level that the stakeholders agree to ship with. Doing so removes the ambiguity and the pain people undergo when deciding whether or not to ship the software.
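To make the idea concrete, here is a minimal sketch in Python of what such an "acceptable quality" ship gate might look like. The severity names and thresholds are purely hypothetical illustrations, not a standard; each team would negotiate its own numbers before the release debate begins.

    # A minimal sketch of a ship/no-ship gate built on an explicit
    # "acceptable quality" definition. All names and thresholds here
    # are hypothetical illustrations, not a standard.

    from dataclasses import dataclass

    # Maximum open defects tolerated per severity at release time.
    ACCEPTABLE_QUALITY = {
        "critical": 0,   # no known critical defects may ship
        "major": 2,      # a small, agreed-upon budget of major defects
        "minor": 15,     # minor issues deferred to a service pack
    }

    @dataclass
    class Defect:
        id: str
        severity: str  # "critical", "major", or "minor"

    def ready_to_ship(open_defects: list[Defect]) -> bool:
        """Return True if the open defects fall within the agreed quality bar."""
        counts = {severity: 0 for severity in ACCEPTABLE_QUALITY}
        for defect in open_defects:
            counts[defect.severity] = counts.get(defect.severity, 0) + 1
        return all(counts.get(sev, 0) <= limit
                   for sev, limit in ACCEPTABLE_QUALITY.items())

    backlog = [Defect("D-101", "major"), Defect("D-102", "minor")]
    print(ready_to_ship(backlog))  # True: within the acceptable-quality bar

The value of such a gate lies less in the code than in forcing the thresholds to be written down and agreed upon in advance, so the ship decision stops being a matter of conscience and becomes a matter of record.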
