
Scrum: Release Planning - Method in Madness


From a collection of notes I wrote in 2007, when I first explored this methodology.



Last week we held the release planning process for our next release. Since this will be the first release developed using agile, there were a lot of expectations about how to go about release planning and what to do in it. The general idea was to get everyone who would be a primary stakeholder for the release (which ironically ends up being senior leads, managers, architects, and product managers) into a single room and thrash out the skeleton of the next release. So we had people flown in from the US for a week to be with us in a room and thrash out the release contents. Listed below are my observations from the release planning experience.

The Process
- Get the backlog out - it should list the items desirable for the release, along with their priorities
- Have some offline discussions and research to determine what is involved in each item
- Give an approximate scope estimate for each backlog item
- Have the scrum teams set up
- During the meetings, discuss each of the backlog items (at least the top-level ones) to deepen the shared understanding of them
- Determine which side of the "golden triangle" is the variable - scope or time (the inherent assumption is that quality is not negotiable)
- Calculate the total velocity available across the scrum teams
- Have the scrum team owners pick up items (in order of backlog priority) based on the skill sets available in the team and the team's remaining velocity
- The release includes all items picked up until the velocity of all the scrum teams is consumed.
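The velocity-consumption step above can be sketched as a simple greedy selection: walk the backlog in priority order and take each item that still fits in the remaining velocity. This is only an illustrative simplification of the process described in the post - the names (`BacklogItem`, `plan_release`) are made up, and it ignores the skill-set matching between teams and items:

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    name: str
    points: int  # approximate scope estimate, e.g. in story points

def plan_release(backlog, velocity):
    """Greedily pick items in priority order until team velocity is consumed.

    `backlog` is assumed to be sorted by priority, highest first.
    Returns the selected items and any leftover (slack) velocity.
    """
    selected, remaining = [], velocity
    for item in backlog:
        if item.points <= remaining:  # skip items that no longer fit
            selected.append(item)
            remaining -= item.points
    return selected, remaining

# Hypothetical backlog and velocity, purely for illustration:
backlog = [
    BacklogItem("Single sign-on", 13),
    BacklogItem("Audit logging", 8),
    BacklogItem("Report export", 5),
    BacklogItem("Theming", 8),
]
chosen, slack = plan_release(backlog, velocity=22)
# chosen -> "Single sign-on" and "Audit logging"; 1 point of slack remains
```

In practice the cut-off is fuzzier than this - teams negotiate, split, or re-estimate items near the boundary - but the mechanics of "fill until velocity runs out" are the same.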

The Good
- It brings all the people into a common forum to share their perspectives and concerns, which cumulatively increases the amount of information we have about the features for a release.
- A dedicated time and place ensures that the people involved concentrate primarily on the forthcoming release without too much distraction. It goes without saying that a dedicated effort is more beneficial than people spending some time daily over a longer duration (add in the effects of different time zones, too).
- It gives a more holistic assessment of the features, their dependencies, the expectations, and the risks involved in the release.
- It gives a reasonable benchmark for what can be expected as part of the release.

The Bad
- I think the process is costly
- In any case, the scoping and velocity calculations are heuristics
- I am still not sure how the concept of release planning squares with the ability to change priorities between iterations
- The discussions can digress, and it takes a lot more effort to keep the meeting focused

The Ugly
- It's a people issue, stupid - any process is only as good as the people involved in it
- People's participation is the key - tuned-out and unprepared stakeholders are a burden
- The meek never inherit the world - they stand to lose even when they have valid points
- People get caught up in semantics rather than the philosophy itself
- If not contained, you can see political positioning in all its glory


Popular posts from this blog

Dilbert on Agile Programing

Dilbert on Agile and Extreme Programming -  Picked up from dilbert.com - Scott Adams.

Big Data: Why Traditional Data warehouses fail?

Over the years, have been involved with few of the data warehousing efforts.   As a concept, I believe that having a functional and active data  ware house is essential for an organization. Data warehouses facilitate easy analysis and help analysts in gathering insights about the business.   But my practical experiences suggest that the reality is far from the expectations. Many of the data warehousing initiatives end up as a high  cost, long gestation projects with questionable end results.   I have spoken to few of my associates who are involved in the area and it appears that  quite a few of them share my view. When I query the users and intended users of the data warehouses, I hear issues like: The system is inflexible and is not able to quickly adapt to changing business needs.  By the time, the changes get implemented on the system, the analytical need for which the changes were introduced is no longer relevant. The implementors of the datawarehouse are always look

Overview of Hadoop Ecosystem

Of late, have been looking into the Big Data space and Hadoop in particular.  When I started looking into it, found that there are so many products and tools related to Haddop.   Using this post summarize my discovery about Hadoop Ecosystem. Hadoop Ecosystem A small overview on each is listed below: Data Collection  - Primary objective of these is to move data into a Hadoop cluster Flume : - Apache Flume is a distributed, reliable, and available system for efficiently collecting, aggregating and moving large amounts of log data from many different sources to a centralized data store. Developed by cloudera and currently being incubated at Apache software foundaton. The details about the same can be found here . Scribe : Scribe is a server for aggregating streaming log data. It is designed to scale to a very large number of nodes and be robust to network and node failures. Dveloped by Facebook and can be found here .  Chuckwa : Chukwa is a Hadoop subproject dev