The idea of optimizing software delivery performance is not new, and many have sought ways to do it. One common-sense conclusion everyone seems to agree with is: to improve something, you must be able to define it, split it into critical components, and then measure those. From here on, opinions on what to actually measure, and how, vary. Project management, velocity, and code quality may each be important components of the development process. One team at Google has dedicated years of academic research to this endeavor and has managed to back their hypothesis with real data. The results of this research are the DORA Metrics.
DORA metrics enable engineering managers to get a clear view of their software development and delivery processes and improve DevOps performance. At Waydev, we believe the best decisions are data-driven, and we help you track DORA DevOps Metrics in an easy-to-read report.
In this article, we will define what DORA Metrics are, show how valuable they prove to be, and explain what the groundbreaking research found. We'll also provide industry values for these metrics and show you the tools you can use to measure them.
DORA metrics are used by DevOps teams to measure their performance and find out where they fall on the spectrum from “low performers” to “elite performers”. The four metrics used are deployment frequency (DF), lead time for changes (LT), mean time to recovery (MTTR), and change failure rate (CFR).
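As a concrete illustration of how these four metrics are derived, here is a minimal sketch in Python. The deployment records and field names are hypothetical, invented for this example; a real pipeline would pull this data from your CI/CD tooling.

```python
from datetime import datetime

# Hypothetical deployment records: commit time, deploy time, whether the
# deploy caused a failure in production, and how long recovery took if so.
deployments = [
    {"committed": datetime(2022, 5, 2, 9), "deployed": datetime(2022, 5, 2, 15),
     "failed": False, "recovery_hours": 0},
    {"committed": datetime(2022, 5, 3, 10), "deployed": datetime(2022, 5, 4, 11),
     "failed": True, "recovery_hours": 2},
    {"committed": datetime(2022, 5, 5, 8), "deployed": datetime(2022, 5, 5, 12),
     "failed": False, "recovery_hours": 0},
]

days_observed = 7  # length of the observation window, in days

# Deployment frequency (DF): deploys per day over the window.
df = len(deployments) / days_observed

# Lead time for changes (LT): mean hours from commit to deploy.
lt = sum((d["deployed"] - d["committed"]).total_seconds() / 3600
         for d in deployments) / len(deployments)

# Change failure rate (CFR): share of deploys that caused a failure.
failures = [d for d in deployments if d["failed"]]
cfr = len(failures) / len(deployments)

# Mean time to recovery (MTTR): mean recovery time over failed deploys.
mttr = sum(d["recovery_hours"] for d in failures) / len(failures)

print(f"DF:   {df:.2f} deploys/day")
print(f"LT:   {lt:.1f} hours")
print(f"CFR:  {cfr:.0%}")
print(f"MTTR: {mttr:.1f} hours")
```

Each metric reduces to a simple aggregate; the hard part in practice is collecting clean commit, deploy, and incident data, which is what development analytics tooling automates.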
The acronym DORA stands for DevOps Research and Assessment (now part of Google Cloud). The team that defined the metrics surveyed over 31,000 engineering professionals on DevOps practices, over the course of 6 years, making DORA the longest-running academic project in the field. The project’s findings and evolution were compiled in the State of DevOps report.
DORA Metrics have become an industry standard for how effectively organizations deliver software, and are very useful for tracking improvement over time.
Did we get any better in the last year? Like most DevOps team leaders, you probably have to ask yourself this question a lot. Understanding DORA Metrics will help you assess your team’s current status, set goals to optimize performance, and understand how to reach them.
But this is by no means limited to them. As we’ll see in the following lines, the benefits of tracking DORA Metrics go well beyond team borders, and enable Engineering leaders to make a solid case for the business value of DevOps.
The origins of the DORA Metrics go a bit further back, when its three founders, Nicole Forsgren, Jez Humble, and Gene Kim, set out to answer a simple but powerful question: how can we apply technology to drive business value?
They argued that delivery performance can be a competitive edge in business and wanted to identify the proven best way to effectively measure and optimize it.
Between 2013 and 2017, they interviewed more than 2,000 tech companies and released their findings in a book titled Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations.
Accelerate identified four major metrics for measuring software delivery performance (you will find them under slightly different names in the book, but for clarity and consistency with the current DORA naming, we will use the names below):
While DF and LTTC are velocity metrics, CFR and MTTR measure stability and code quality. If they are consistently tracked, and steps are taken to improve them, together they can help DevOps leaders boost their team’s performance and bring real business results.
The 2019 Accelerate State of DevOps report shows that organizations are stepping up their game when it comes to DevOps expertise. According to Google, the proportion of elite performers has almost tripled, and they now make up 20% of all organizations.
Let’s take a closer look at what each of these metrics means and what the industry values are for each performer type.
A velocity metric, LTTC is the amount of time between a commit and its release. When tracked and optimized properly, it enables engineering managers to speed up deployments and, with them, software time to market.
What does LTTC look like for different performer types:
Pro tip: Companies that can fix bugs or make improvements faster tend to be more successful overall than those that take two to three months. To decrease LTTC, include testing in the development process. Your testers can teach your developers how to write and automate tests, removing an additional step. To be fast, you have to eliminate bottlenecks.
This metric indicates how often a team successfully releases software and is also a velocity metric.
How often should different performance types deploy:
Pro tip: If you release more often, and in smaller chunks, you will spend less time figuring out where a problem is. To minimize this risk, ship one pull request or change at a time.
For larger teams, where that’s not an option, you can create release trains, and ship code during fixed intervals throughout the day.
This metric measures downtime – the time needed to recover and fix all issues introduced by a release.
Measuring MTTR to evaluate your team’s performance:
Pro tip: It’s important to look at all the metrics together, and not just MTTR, so you don’t end up with ‘quick fixes’ that only aggravate the issue in the future. If releasing often is part of your team’s culture, fixing things quickly will be too. Use feature flags to add a toggle to changes, so that in the case of a failure, you can quickly turn that feature off and reduce your MTTR.
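The feature-flag idea can be sketched in a few lines. This is a minimal illustration, not Waydev’s implementation; the flag store, function names, and flows are invented for the example. In production, flags would live in a config service or database so they can be flipped without a redeploy.

```python
# Minimal feature-flag sketch (hypothetical names throughout).
FLAGS = {"new_checkout": True}

def is_enabled(flag: str) -> bool:
    """Return whether a feature flag is currently on."""
    return FLAGS.get(flag, False)

def new_checkout_flow(cart):
    # Stand-in for the risky new code path.
    return f"new:{len(cart)}"

def legacy_checkout_flow(cart):
    # Stand-in for the known-good fallback.
    return f"legacy:{len(cart)}"

def checkout(cart):
    # Route traffic based on the flag instead of hard-coding one path.
    if is_enabled("new_checkout"):
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)

# If the new flow misbehaves after a release, flip the flag instead of
# rolling back the whole deploy -- recovery takes seconds, not hours.
FLAGS["new_checkout"] = False
```

The point is that recovery becomes a configuration change rather than a redeploy, which is exactly what shrinks MTTR.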
Change failure rate is the metric that shows the percentage of releases that lead to downtime or serious issues. CFR is a code quality metric, giving you insights into your team’s performance levels:
Pro tip: Looking at the change failure rate instead of the total number of failures eliminates the false impression that the number of failures decreases with the number of releases. The more often you release, in small batches, the less serious, and easier to fix, the defects are. If possible, make sure the developer deploying is also involved in production, so they can easily understand the change and the bug, and the team can learn from them. This can greatly reduce the risk of running into that specific issue again.
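The rate-versus-count distinction is easy to see with numbers. The two teams and their figures below are hypothetical, chosen only to make the arithmetic clear:

```python
# Hypothetical teams: A ships a few big releases, B ships many small ones.
team_a = {"releases": 10, "failures": 2}
team_b = {"releases": 100, "failures": 5}

# Change failure rate = failures / releases.
cfr_a = team_a["failures"] / team_a["releases"]   # 0.20
cfr_b = team_b["failures"] / team_b["releases"]   # 0.05

# Team B has MORE total failures (5 vs 2), yet a far LOWER failure rate --
# counting failures alone would penalize the team that releases often.
```

This is why CFR, not the raw failure count, is the DORA stability metric.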
You can take the DevOps quick check to see the level of your team’s performance against industry benchmarks.
So what was so groundbreaking about the research? Well, for the first time in the engineering industry, it was able to collect thousands of real-life examples and data from engineers all across the globe and prove that:
The last task at hand, then, is how to measure DORA Metrics, and this is where Waydev, with its development analytics features, comes into play.
Before development analytics existed, you had to use a method similar to the one used in the research – ask your developers. Check Jira statuses, create reports, and spend daily standups and 1:1s asking for updates until you got the full picture. But engineering team managers are not (all) academics and have plenty of other things to think about, so this was a tiresome process with inaccurate results.
Now, let’s imagine for a second that the DORA team could connect all the data sources of the people interviewed to one single tool and analyze their work. Not possible in that scenario, of course, but it’s exactly what development analytics can do for you.
The Waydev platform analyzes data from your CI/CD tools, and automatically tracks and displays DORA Metrics in a single dashboard, without requiring you to aggregate individual release data.
Let’s look at Greg’s team. Greg is the DevOps team lead and opens Waydev to get ready for a weekly check-in with his manager. His team is now a high performer and has made significant progress over the past 4 months from medium performance values.
While his deployment frequency is that of an elite performer, with multiple deploys per day, and his lead time for changes is at the high-performer level (under a week), recovery time can still be significantly improved.
There are many more metrics you can track to gain more visibility into your team’s work. DORA metrics are a great starting point, but to truly understand your development teams’ performance, you need to dig deeper.
This is where Waydev’s reports come in handy for every engineering manager who wants to go deeper.
You can find a list of all available Waydev features here.
In the end, the real takeaway is: focus on your team and goals, not on the metrics. Empower your team and give them the tools they need – they will be the ones making the changes.
Metrics and tools help your developers understand how they’re doing and if they’re progressing. Instead of relying on hunches, and gut feelings, they will be able to visualize their progress, spot roadblocks, and pinpoint what they need to improve.
This will make them feel more satisfied with their own work, more motivated, and engaged.
If you want to find out more about how Waydev can help you, schedule a demo.