Iron Triangle of Software Quality


Whenever the subject of software quality is broached, arguments about the various constraints that bind it are immediately raised. So what are these constraints, and how deeply do they affect quality?

Before we take this discussion further, we need to root it in one golden rule - "the Customer/User is King" - and in the case of any trade-off, the trade-off will be made in the best interest of the Customer/User.

The second aspect we need to be clear on is whether quality is an independent or a dependent variable. I posit that quality is a dependent variable, always subject to the constraints placed by the other factors that the stakeholders of the software can control.

Extending this argument further, let us look at the factors that control quality. By doing so we also get back to the idea behind this post, i.e., the constraints on quality (in the context of software engineering). As I see it, the primary constraints that affect software quality are:

  • Requirements or Feature set - The laundry list of items that need to be built. Under this head we can also include the scope creep that takes place during development.

  • Schedule - The time available to build the feature set.

  • Cost - This would include factors like the skill level and number of resources that can be deployed, the tools that can be used, the budget available, etc.

These three constraints can be seen as the three vertices of the "Iron Triangle" of software engineering. Quality can be visualized as the area bound by these three constraints. Looking at the triangle, it becomes obvious that we need to be realistic with respect to all three constraints to get decent coverage for quality. In the absence of that, the area of the triangle shrinks and quality suffers.
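To make the area metaphor a little more concrete, here is a minimal sketch, under my own assumptions rather than anything stated above: each constraint is reduced to a single "realism" score between 0 and 1 and plotted on three axes 120 degrees apart (radar-chart style), and a hypothetical quality_area function returns the area of the triangle joining the three points.

```python
import math

def quality_area(requirements: float, schedule: float, cost: float) -> float:
    # Hypothetical toy model: each constraint is a "realism" score in (0, 1],
    # plotted on three axes 120 degrees apart. The area of the triangle joining
    # the three points stands in for the quality coverage we can achieve.
    a, b, c = requirements, schedule, cost
    # Sum of the three sub-triangles between adjacent axes: (1/2) * x * y * sin(120 deg)
    return 0.5 * math.sin(math.radians(120)) * (a * b + b * c + c * a)

print(quality_area(1.0, 1.0, 1.0))  # realistic on all three fronts -> largest area
print(quality_area(1.0, 0.3, 1.0))  # schedule squeezed -> the area (quality) shrinks
```

The exact geometry is not the point; the sketch simply shows that once any one side of the triangle becomes unrealistic, the whole area - and with it quality - contracts.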

This brings us to the next set of questions. What is acceptable quality? Who decides what is acceptable quality? Is there a method that helps us make these trade-offs?

We will discuss these in subsequent posts.
