
Quality in Software - What is it?


Over the last few months, I have been privy to a lot of talks on software quality and how we need processes and tools in place to improve it. This discussion brings up a very interesting question: What is software quality?

Basics first. Let us start with the definition of quality.

The IEEE 610.12-1990 Standard Glossary of Software Engineering Terminology defines quality as:

"The degree to which a system, component, or process meets (1) specified requirements, and (2) customer or user needs or expectations."

ISO 9000-3:1991, Guidelines for the Application of ISO 9001 to the Development, Supply and Maintenance of Software, defines quality as:

"The totality of features and characteristics of a product or service that bear on its ability to satisfy specified or implied needs."

If we go by these definitions, one thing becomes very clear: quality is a function of user needs. As long as a product meets the specified requirements or needs of the customer or user, it is considered to be of good quality. But the catch here lies in words like "expectations" and "implied needs".

The question I would pose is this: are the above definitions exhaustive and inclusive enough to cover all the aspects of quality?

Let us take an example. Assume that you are a vendor of AC generators. A client requires 99.9% uptime from the AC generator he purchases. One way to meet this requirement is to deploy a person at the client site along with a backup AC generator; if and whenever there is a problem, the person is expected to swap the production generator with the backup. Through this mechanism, the customer's requirement can be satisfied.
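To make the 99.9% figure concrete, a quick back-of-the-envelope calculation (the function name here is my own, purely for illustration) shows how small a downtime budget the client is actually granting:

```python
# Hypothetical helper: translate an uptime percentage into the
# downtime it permits over a given period.
def downtime_budget_hours(uptime_pct, period_hours=365 * 24):
    """Return the hours of allowed downtime for a given uptime percentage."""
    return period_hours * (1 - uptime_pct / 100)

# 99.9% uptime over a year leaves only about 8.76 hours of downtime,
# which is why a manual swap-over process is a risky way to meet it.
print(downtime_budget_hours(99.9))
```

With a budget that tight, even one slow manual changeover could consume a large share of the year's allowance.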

But is this a quality solution to the customer's requirement? I would argue it is. The user does not care two hoots about how you solve his problem, as long as he does not perceive that he is incurring unnecessary additional cost or effort. All that a normal customer cares about is an efficient and reliable solution that correctly addresses his requirement, and if having a person next to the AC generator is the best way of addressing his need, he will be happy with that.

But the questions (for you) that remain are: Will this solution continue to be cost effective (what about wage increases for manual labor)? Is the solution repeatable and scalable (what happens if the same requirement is posed by multiple clients)? Is the solution reliable and efficient (since the process is manual, is there a chance of human error)?

There is definitely a need to augment the core definition of quality with some additional attributes. That is exactly the interpretation Steve McConnell (of Code Complete fame) provides. He specifies the different attributes of quality and classifies them into two broad categories:

  • External quality - which is customer facing, with attributes like correctness, efficiency, and reliability
  • Internal quality - which is directed inwards and deals with structure, testability, maintainability, scalability, reusability, and extensibility
The software quality iceberg serves as a good metaphor for this (image: thanks to the Software Quality blog). Many a time, testing efforts, which are essentially inspectional in nature, focus only on the attributes of external quality, while internal quality attributes, which can potentially be the root causes of many problems, go unnoticed.
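As a toy sketch of the distinction (all names hypothetical, not taken from McConnell's book), consider two functions with identical external behavior but very different internal quality. A black-box test cannot tell them apart, which is exactly why internal quality problems stay below the waterline:

```python
# Both functions compute the same tier discount, so any external,
# inspectional test passes for both. Only internal quality differs.

def discount_v1(t, a):
    # Poor internal quality: opaque names, magic numbers, duplicated logic.
    if t == "g": return a - a * 0.1
    elif t == "s": return a - a * 0.05
    else: return a

GOLD_RATE, SILVER_RATE = 0.10, 0.05

def discount_v2(customer_tier, amount):
    """Apply the tier discount; unknown tiers get no discount."""
    rates = {"g": GOLD_RATE, "s": SILVER_RATE}
    return amount * (1 - rates.get(customer_tier, 0.0))

# Externally indistinguishable: same inputs, same outputs.
assert abs(discount_v1("g", 100) - discount_v2("g", 100)) < 1e-9
```

Adding a new tier to `discount_v1` means another copy-pasted branch; in `discount_v2` it is one table entry, which is what attributes like maintainability and extensibility capture.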
