
Acceptable Quality


"Keep it simple, stupid..."
The best way to answer this question is to ask whether the quality with which the product is being shipped is sufficient for the customer. It goes back to our old theme: "Customer interest is paramount". Having said this, deciding whether the quality is sufficient from the customer's perspective is tricky. Many issues get raised about what is sufficient for the customer, and let us face it: it is a difficult call to make, given the pressure on time to deliver and the impact or side effects a fix can introduce.

Personally, I use the process listed below to evaluate the product quality.

Base Rules
- QA should have as much time as (if not more than) development to test the product. Most of the time QA lags development, and hence there is a learning curve for QA: they need to understand the features first, before they can start trying to break them.
- QA should have exercised all the features of the release in an integrated environment. This minimizes the occurrence of regressions and side effects.
- The defect rate should have consistently decreased and flattened out. If the curve is still on an upward trajectory, the product is in no shape for shipping.
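The third base rule can be checked mechanically from the defect-tracking data. Below is a minimal sketch, assuming we have weekly counts of newly reported defects; the `window` and `tolerance` thresholds are illustrative choices, not something the post prescribes:

```python
def defect_curve_flattened(weekly_new_defects, window=3, tolerance=1):
    """Check whether the defect discovery curve has flattened.

    Returns True when the last `window` weekly counts are non-increasing
    and vary by no more than `tolerance` defects overall.
    """
    if len(weekly_new_defects) < window:
        return False  # not enough history to judge the trend
    recent = weekly_new_defects[-window:]
    non_increasing = all(a >= b for a, b in zip(recent, recent[1:]))
    flat = max(recent) - min(recent) <= tolerance
    return non_increasing and flat

# Curve trending down and levelling off: acceptable to evaluate for shipping
print(defect_curve_flattened([30, 22, 12, 6, 5, 5]))   # True
# Curve still rising: the product is not ready
print(defect_curve_flattened([5, 8, 12, 15, 20, 26]))  # False
```

In practice the thresholds would be tuned per project; the point is only that "flattened out" can be made an objective, repeatable check rather than a judgment made under release pressure.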

Once the base rules are satisfied, we will be in a position to evaluate the set of open defects and make a call on whether the product is of acceptable quality for the customer.

The goal at this point is not to ship with zero defects, but to ship with an acceptable set of defects. The basic premise behind this goal is the underlying assumption that, at this stage, the pressure on time to deliver, the cost of addressing the defects, and the potential side effects of a bug fix are all high.

To make this decision, I usually use the two-by-two matrix displayed alongside and group the open defects into its four quadrants.

The ideal situation to be in is when most of the open defects are in Quadrant 2 or Quadrant 3. If you find that numerous defects fall in Quadrant 1, the call should be to stop the release. Quadrant 4 is tricky: if there are many defects in Quadrant 4, we need to evaluate each of them and move it to either Quadrant 3 or Quadrant 1, as the case may be, in order to make the right call.
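The matrix image itself is not reproduced here, so the axes below are an assumption: severity of customer impact on one axis and likelihood of the customer hitting the defect on the other, with Quadrant 1 as high-severity/high-likelihood and Quadrant 4 as severe-but-rare. Under that assumption, the grouping step can be sketched as:

```python
from collections import Counter

def quadrant(high_severity, high_likelihood):
    """Map a defect to a quadrant of the (assumed) severity x likelihood matrix."""
    if high_severity and high_likelihood:
        return 1  # stop the release if many defects land here
    if not high_severity and high_likelihood:
        return 2  # frequent but low-impact: acceptable
    if not high_severity and not high_likelihood:
        return 3  # rare and low-impact: acceptable
    return 4      # severe but rare: evaluate case by case

# Hypothetical open-defect list: (summary, high_severity, high_likelihood)
open_defects = [
    ("crash on save", True, True),
    ("typo in tooltip", False, True),
    ("slow export for very large files", True, False),
    ("misaligned icon on legacy browser", False, False),
]

counts = Counter(quadrant(sev, like) for _, sev, like in open_defects)
print(counts[1], counts[2], counts[3], counts[4])  # 1 1 1 1
```

Each Quadrant 4 defect then gets a closer look: if the customer is realistically exposed to it, it is effectively a Quadrant 1 defect and blocks the release; if not, it can be treated as Quadrant 3 and shipped with.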
