By Andy Cleff
At the end of a Scrum team’s iteration, aka “sprint,” the team has a velocity. In some cases, it is based on the number of story points completed within the sprint’s duration; other teams simply count the number of user stories done.
Velocity (story points or user stories completed in a fixed time period) is one of the most commonly used metrics in agile software development. It is also one of the most abused. Teams and stakeholders fall into the trap of believing “increasing velocity” is a noble goal. It is not.
Why not? Because instead of focusing on delivering working software that has business value for stakeholders, the team will be concerned with simply delivering more story points (e.g., to meet a target velocity).
While team velocity might (arguably) be a good long-term predictor, if it becomes the team’s focus in the short term, it can become a negative influence. An increasing velocity doesn’t necessarily mean things are getting better.
For example, a team could achieve a higher sprint velocity by skipping tests or sacrificing quality, with the side effects of brittle code, technical debt and escaped-defect fix cycles. This will result in a lower velocity in the long term. (See the Hawthorne Effect: that which is measured will improve, at a cost, as well as Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure.)
Also, a Scrum team’s velocity is a lagging indicator – like unemployment. You don’t fix unemployment by focusing on the rate of employment. You fix unemployment by fixing the economy. In both cases, you are working in a complex emergent system, and the causes for increases or decreases in any metric are neither immediately obvious nor predictable.
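To make the arithmetic concrete, here is a minimal Python sketch of velocity as a lagging indicator: a sprint’s velocity is just the story points completed that sprint, and a rolling average over recent sprints is what teams typically use for long-range forecasting. The sprint history and backlog size below are made-up numbers for illustration.

```python
import math

# Hypothetical sprint history: story points completed in each past sprint.
sprint_history = [21, 18, 25, 19, 23, 20]

def rolling_velocity(history, window=3):
    """Average velocity over the last `window` sprints (a lagging indicator)."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def forecast_sprints(history, remaining_points, window=3):
    """Rough long-range forecast: sprints needed to finish the remaining backlog."""
    return math.ceil(remaining_points / rolling_velocity(history, window))

print(round(rolling_velocity(sprint_history), 2))  # -> 20.67
print(forecast_sprints(sprint_history, 120))       # -> 6
```

Note that nothing in this calculation rewards a rising number; it only smooths history into a forecast, which is the one thing velocity is actually good for.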
Measure Many Things
I’m not suggesting that an agile development team should never look at its velocity chart. But to help monitor experiments aimed at continual improvement, a team needs to measure more than just one thing.
Only when trends in multiple metrics are overlaid (technical as well as human) can a team begin to get a holistic perspective. And those multiple viewpoints, along with their variations, stability and trends, can serve to raise a flag that says “Look deeper here … what is going on?”
30+ Metrics for Agile Software Development Teams
To help jumpstart a “measure many things approach,” I have assembled below a listing of metrics for software development teams. (Shout out to Jason Tice @theagilefactor for the main buckets.)
The list is intended as a starting point, not an exhaustive inventory. Use the ideas as conversation openers for coming up with things that make sense for your development team. If you’re working outside of IT, perhaps on a sales team or a customer support team, you can use these buckets to stimulate your brainstorming.
Process Health Metrics
This category assesses day-to-day delivery team activities and evaluates process changes.
- Cumulative Flow Diagrams
- Control Charts
- Percent Complete and Accurate
- Flow Efficiency
- Time Blocked per Work Item
- Blocker Clustering
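As one example from this bucket, flow efficiency compares the time spent actively working on an item against its total lead time (active plus waiting/blocked time). A minimal sketch, using invented work items:

```python
# Illustrative work items: days of active work vs. days spent waiting or blocked.
items = [
    {"id": "A", "active_days": 2.0, "wait_days": 6.0},
    {"id": "B", "active_days": 1.5, "wait_days": 3.5},
    {"id": "C", "active_days": 3.0, "wait_days": 9.0},
]

def flow_efficiency(item):
    """Fraction of lead time spent on active work: active / (active + wait)."""
    return item["active_days"] / (item["active_days"] + item["wait_days"])

for item in items:
    print(item["id"], f"{flow_efficiency(item):.0%}")  # A 25%, B 30%, C 25%
```

Flow efficiencies well below 50% are common in practice; the waiting time, not the working time, is usually where the improvement opportunity lies.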
Release Metrics
This group focuses attention on identifying impediments to continuous delivery.
- Escaped Defects
- Escaped Defect Resolution Time
- Release Success Rate
- Release Time
- Time Since Last Release
- Cost Per Release
- Release Net Promoter Score
- Release Adoption / Install Rate
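Two of the release metrics above lend themselves to simple arithmetic: escaped defect resolution time (average days from a production defect being reported to it being fixed) and release success rate (the share of releases that did not need a rollback). A sketch with invented dates and counts:

```python
from datetime import date
from statistics import mean

# Hypothetical escaped defects found in production.
escaped_defects = [
    {"reported": date(2024, 3, 1),  "resolved": date(2024, 3, 4)},
    {"reported": date(2024, 3, 10), "resolved": date(2024, 3, 11)},
    {"reported": date(2024, 3, 15), "resolved": date(2024, 3, 22)},
]

def mean_resolution_days(defects):
    """Escaped Defect Resolution Time: average days from report to fix."""
    return mean((d["resolved"] - d["reported"]).days for d in defects)

def release_success_rate(attempted, rolled_back):
    """Fraction of releases that succeeded without a rollback."""
    return (attempted - rolled_back) / attempted

print(round(mean_resolution_days(escaped_defects), 1))  # -> 3.7
print(release_success_rate(20, 2))                      # -> 0.9
```

As with any of these, a single value means little; the trend across releases is what starts the conversation.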
Product Development Metrics
These help measure alignment of product features to user needs.
- Customer / Business Value Delivered
- Risk Burndown
- Push / Pull
- Product Forecast
- Product Net Promoter Score
- User Analytics
Technical / Code Metrics
The following help determine the quality of the implementation and architecture.
- Test Coverage
- Build Time
- Defect Density
- Code Churn
- Code Ownership
- Code Complexity
- Coding Standards Adherence
- Crash Rate
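As a small illustration of two of these, defect density is usually expressed as defects per thousand lines of code (KLOC), and code churn as the total lines added and deleted over a period. The counts below are invented:

```python
def defect_density(defect_count, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    return defect_count / (lines_of_code / 1000)

def code_churn(commits):
    """Total lines touched: additions plus deletions across commits."""
    return sum(c["added"] + c["deleted"] for c in commits)

commits = [{"added": 120, "deleted": 40}, {"added": 15, "deleted": 200}]
print(defect_density(18, 45_000))  # -> 0.4 defects per KLOC
print(code_churn(commits))         # -> 375 lines
```

Here too, trends matter more than absolute values: a churn spike or a climbing defect density is a prompt to look deeper, not a number to hit.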
People/Team: The Human Elements
This group of metrics reveals issues that impact a team’s sustainable pace and level of engagement.
- Team Happiness / Morale
- Learning Log
- Team Tenure
- Phone-a-Friend Stats
- Whole Team Contribution
- Transparency (access to data, access to customers, sharing of learning, successes and failures via sprint reviews)
- One of my favorites from Geoff Watts’ “Scrum Mastery”: imagine a team mapping themselves against the 12 agile principles over time
With so many great metrics to choose from, a few obvious questions arise: How many should a team use? For how long should they use the ones selected? And who gets to choose? My thoughts:
- Teams might decide on one or two important metrics to add to their current dashboard. They can then add one or two more over time. On a well-established team, three, five or perhaps up to seven maximum might be in use at any given time. Any more, and the team will likely fall into analysis paralysis.
- The useful lifespan of a single metric could range from a couple of months to a couple of quarters. Probably never shorter than three iterations.
- Coaches or managers should not mandate any specific metric, nor a minimum number of metrics. Metrics are for teams to learn and explore how they can improve themselves through inspect-and-adapt cycles. Teams should choose those metrics they think will be useful in that regard.
- While some metrics can scale to teams of teams, i.e., the larger organization where all teams “own the metrics” for their parts of the company pie, neither teams nor managers should compare metrics across teams. Sure, use metric comparisons to start conversations and to share knowledge and insights across teams. But never for “My X-metric is better than yours …” Never!
When selecting metrics, a team should be able to answer:
- Why “this metric?” – why does it matter?
- What insights might we gain from it?
- What is expected to change?
- How might it be gamed or misused? (Remember when Wells Fargo offered big bonuses to employees that cross-sold financial products to customers?)
- Is the metric measuring the team, and not individuals?
- What are some of the trade-offs / costs of improvement? Working to improve one thing may temporarily reduce another (e.g., predictability may increase at the expense of throughput).
- How often would we like to “take a data point?”
- How long will we run an experiment related to this metric? (What is the half-life?)
- How will we know when we’re “done” with this metric (it has served its purpose, and it’s time to retire it and consider another)?
- Is this metric a leading or lagging indicator?
- How will we make our measurements visible – to promote knowledge sharing, collaboration with other teams and trust with our sponsors?
A few guiding principles to keep in mind:
- That which is measured will improve, at a cost. Which metrics are used should be arrived at by team consensus – not mandated by management.
- When a measure becomes a target, it ceases to be a good measure. Look for and understand trends, not hitting magic numbers.
- Correlation may not mean causation, but it sure is a hint. (See Friedman’s Thermostat)
- Make your metrics visible.
- Expose how the team feels in near real time. Team happiness is a leading indicator of performance. Identify what’s going on now, and you’ll likely know what is going to happen soon.
Radiate the Information
Did you catch that I snuck visibility in there twice? Sharing your metrics with stakeholders can be a bit scary, especially if you’ve ever been in a place where metrics were used as a weapon. But take a leap. Be brave. Show them to others.
As Jos de Blok, director of the Buurtzorg Foundation, says in the foreword to Geoff Watts’ “Scrum Mastery”: “[Breed a culture of] trust and responsibility instead of control and suspicion.”
Only with transparency can we collectively look through enough lenses to build an integrated and holistic view. And then we can collaborate on improving “all the things.”
A team should not simply strive for ever-increasing values; sometimes slowing down might be called for. Instead, teams should look at variations in their metrics, and then dig to get to the root causes of that variability (or at least develop a few good hypotheses). By striving for consistency and stability (i.e., predictability), teams will find that increased performance – delivery of value – comes as a natural side effect.
Original Article: https://www.frontrowagile.com/blog/posts/69-30-metrics-for-agile-software-development-teams