Using Velocity in Agile

Rodrigo Silveira
Head Of Engineering, Ohi
Posted on Feb 12, 2022
Updated on Sep 2, 2022

Introduction

None of us would hire a builder to build our dream home without estimating its cost and when we could move in. In much the same way, no matter how hard I’ve tried, I have yet to find an executive I respect who will let me build a large software system without estimating its cost and completion date.

Fortunately, over the last 30 years we have learned a lot from our experiments with project management (agile scrum) and estimation (size, not effort), and we have developed estimation practices that help us build software systems relatively close to budget and schedule - systems we can be proud of and that delight our customers.

This is a high-level discussion of how I leverage agile velocity to help the agile teams under my leadership build software systems without death marches.

Terminology

Velocity

Velocity is the average amount of work a software engineering team performs within a set period. Nowadays, most scrum teams practicing agile use two-week sprints, so we associate a team’s velocity with the average amount of work it completes within a two-week sprint. Notice that the focus is on size, not time, and on the team rather than the individual. Team velocity enables software engineering groups to estimate large systems using a collective understanding of the size of each of their parts, integrating the entire team’s skills, experience, and shared understanding of those systems’ complexity and uncertainty. Although managers may be tempted to track individual velocity, my experience has taught me that this is a fool’s errand that produces no productivity growth; in agile, it’s always best to focus on team rather than individual productivity.
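As a quick illustration, velocity is just a rolling average of the points the team has completed in its recent sprints. The sprint totals in this minimal sketch are made up purely for illustration:

```python
# Minimal sketch: team velocity as the average of recent sprint totals.
# The sprint totals below are hypothetical, not taken from a real team.
completed_points = [58, 61, 65, 60]  # story points completed in the last four sprints

velocity = sum(completed_points) / len(completed_points)
print(f"Team velocity: {velocity:.0f} points per sprint")  # -> Team velocity: 61 points per sprint
```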


Size

In Agile, we estimate the size of a requirement, not the effort it takes to implement it. Since estimating size rather than effort is counterintuitive, let’s imagine two friends standing at one end of a field, considering racing to the other end. They know both will run the same distance, around 100 yards. They see a fence with a gate midway through the course, so they will have to account for the complexity of overcoming this obstacle. They also notice that the field is full of little holes and large rocks; although they don’t know exactly where these are located, they will have to be careful to avoid them or they will get hurt, so they will also have to account for the uncertainty of these unknown obstacles’ locations. When all is said and done, the distance, complexity, and uncertainty - the size of the challenge - are the same for both friends. What might differ is the effort each has to put in: a skilled mountaineer might do better than a world-class sprinter! As with our friends, although the agile estimators agree on a requirement’s size, the effort required to implement it might vary, and that’s OK. What matters is the team’s aggregate size.

Which estimation methodologies I’ve tried before, what worked, what didn’t, and why

The common thread across all estimation methodologies I’ve used has been the analysis of the software we were going to build: forging a good understanding of its parts, how they related to each other, and how we planned to build them. I have been using UML for a long time to help me analyze systems and understand their parts and relationships; UML enabled us to create a context through which we could reliably and consistently evaluate and communicate complex systems.

Another common thread has been my inability to assess the effort required to implement these systems and to deliver them on time and on budget without compromising quality. I tried all kinds of techniques to estimate the number of hours or days required to implement our tasks, or the size (lines of code, function points) paired with a (mostly arbitrary) constant to transform size into time. In the early days, I relied on our experts for these estimates. With time, these highly inaccurate methods grew into attempts to use formulas based on historical data and regression analysis. Recently, we have been experimenting with combining experts and formulas, which I’ll discuss in more detail in this article.

There are not many planning tools available even today, and until the mid-2000s I used several Gantt chart approaches to collect these estimates into a plan. The bigger the project, the more unwieldy these charts became, and, invariably, toward the end of a project I would barely use them, relying instead on a prioritized list of things to do. From the mid-2000s onward, we abandoned Gantt charts and institutionalized our list of things to do as sprint backlogs!


One of the beautiful aspects of agile is that it integrates half a century of accumulated wisdom into a simple and easy-to-use project management mechanism, including a sophisticated and adaptable estimation approach: velocity.

Why it’s important to measure velocity, and how it affects the team and the business

The agile velocity metric gives our teams a simple, data-driven mechanism to estimate the workload and schedule for large amounts of work, to quickly assess the impact of new requirements, and to prioritize the tasks ahead, all based on the team’s recent performance. It is also a powerful mechanism for assessing the impact of losing a team member or onboarding a new one.


Agile Velocity Calculation

How to Use Velocity to Assess a New Requirement

Let’s take a look at how we use velocity to assess the impact of new requirements discovered halfway into the implementation of a set of features. For instance, consider a product roadmap (backlog) of 518 story points and a team velocity of 62 story points per sprint (always round sprint counts up).
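With these figures, the baseline forecast is a single division: 518 / 62 ≈ 8.4, which rounds up to roughly 9 sprints to complete the roadmap at the team’s current pace.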


After four sprints, the team has implemented 262 points.


If a new requirement emerges, amounting to 93 story points, we have at least two data-driven alternatives:

Alternative 1

Integrate the new requirement into the product roadmap.


Alternative 2

Complete the product roadmap as is, and implement the new feature later.

Alternatives Summary

We either bundle the new requirement into the existing effort and take 6 sprints to complete everything, or we finish the current roadmap in 4 sprints, perhaps 5, release it, and then come back to implement the new requirement in only 2 more sprints.
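To make the arithmetic behind these alternatives explicit, here is a minimal sketch using the numbers above, treating the remaining roadmap as 518 - 262 story points and always rounding sprint counts up:

```python
import math

# Worked example of the two alternatives, using the figures from this article.
velocity = 62            # story points per sprint
total_roadmap = 518      # original roadmap size in story points
completed = 262          # points implemented after four sprints
new_requirement = 93     # newly discovered requirement

remaining = total_roadmap - completed  # points left on the original roadmap

# Alternative 1: fold the new requirement into the current roadmap.
alt1_sprints = math.ceil((remaining + new_requirement) / velocity)

# Alternative 2: finish the roadmap first, then implement the new requirement.
alt2_roadmap_sprints = math.ceil(remaining / velocity)
alt2_new_feature_sprints = math.ceil(new_requirement / velocity)

print(f"Alternative 1: {alt1_sprints} sprints")                          # -> 6 sprints
print(f"Alternative 2: {alt2_roadmap_sprints} sprints to finish, "
      f"then {alt2_new_feature_sprints} more for the new requirement")   # -> 5 sprints, then 2 more
```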

I love how simple this is!

Resource Profile Change

Now let’s use the same project and assess the impact of losing a team member at the end of the fourth sprint, when the outstanding product backlog stands at 265 points, assuming that this individual contributed an average of 16 points per sprint.
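Assuming the remaining team members sustain their pace, the team’s velocity drops from 62 to roughly 46 points per sprint (62 - 16). The outstanding 265 points now take about 6 sprints (265 / 46 ≈ 5.8, rounded up) instead of the 5 sprints (265 / 62 ≈ 4.3, rounded up) the full team would have needed.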

This is a situation where it’s OK to leverage individual performance to assess the impact of losing a team member.

How does an engineering lead keep the team positive about estimates and prevent devs from fudging estimates just to "complete the sprint"?

Despite our best planning efforts, reality is stubborn: stakeholders discover new requirements, senior team members get sick or quit, software engineers uncover hidden complexity that affects many tasks, our assumption that a library would help proves wrong, and so on. Despite all of these challenges, our stakeholders still want us to deliver the project on budget and on time. Without a robust mechanism for data-driven conversations with our stakeholders showing them the impact of these events on our schedule, we have no option but to work extra hard, cut corners, and lose individual and team morale. No one likes death marches.

My practice has taught me that software engineers are more willing to embrace project changes when working for teams with a solid history of using sizing and velocity to estimate their work and changes to the project.

Thus, by embracing sizing and agile velocity, engineering leads and their software teams endow themselves with an elegant and robust data-driven mechanism to estimate large amounts of work and to re-estimate the rest of the effort as the project suffers unforeseen setbacks. As software teams and their stakeholders learn to leverage these tools to estimate deliverables, trust between them grows. We can only build amazing software systems when our stakeholders trust us, and vice versa.

About the author

Rodrigo Silveira
Head Of Engineering, Ohi
