Review Collaboration

Understand how your engineering teams work collaboratively. Effectively communicate the healthy tension between speed and thoroughness in code review.

The Review Collaboration feature offers a unified view of submitter and reviewer metrics across the PR process, so you can see how your teams work together and keep speed and thoroughness in balance.

How to use the Review Collaboration feature

Review Collaboration shows code collaboration stats between submitters and reviewers; engineers can play both roles. You can select which team's review collaboration stats to view, which repositories to analyze, and what time frame to cover.

Submitter and Reviewer Metrics

Responsiveness and Reaction Time help you understand how quickly team members respond to and communicate with each other in reviews. A healthy code review workflow should aim to improve the velocity of code review communication, and these metrics will improve accordingly. Team members don't want to get stuck waiting on others for an answer, and as managers, we generally want to ensure the team has what it needs to move its work forward.

Involvement represents the percentage of PRs a reviewer participated in, providing a measure of engineering engagement in the code review process. It is a highly context-dependent metric: at an individual or team level, higher is not necessarily better, since it can point to people being overly involved in the review process. There are, however, situations where you would expect Involvement to be very high, sometimes for a particular person on the team and other times for a group working on a specific project.

Comments Addressed is the percentage of reviewer comments that were responded to with a comment or a code revision.
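To make the two percentages concrete, here is a minimal Python sketch. The PullRequest record, its field names, and the counting rules are illustrative assumptions, not the product's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class PullRequest:
    # Hypothetical record for illustration; field names are assumptions,
    # not the product's actual data model.
    author: str
    reviewers: list[str] = field(default_factory=list)  # users who reviewed this PR
    reviewer_comments: int = 0    # comments left by reviewers
    addressed_comments: int = 0   # comments answered by a reply or a follow-on commit

def involvement(prs: list[PullRequest], reviewer: str) -> float:
    """Percentage of PRs the given reviewer participated in."""
    if not prs:
        return 0.0
    reviewed = sum(1 for pr in prs if reviewer in pr.reviewers)
    return 100.0 * reviewed / len(prs)

def comments_addressed(prs: list[PullRequest]) -> float:
    """Percentage of reviewer comments answered with a comment or code revision."""
    total = sum(pr.reviewer_comments for pr in prs)
    if total == 0:
        return 0.0
    return 100.0 * sum(pr.addressed_comments for pr in prs) / total
```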

Receptiveness is the ratio of follow-on commits to comments. It is an indicator of openness to constructive feedback, but you should never expect this metric to reach 100%, as that would mean every comment led to a change.

Influence is the ratio of follow-on commits made after the reviewer commented. Viewed across a longer period of time, it provides insight into how likely a reviewer's comments are to lead to a follow-on commit, and therefore into their influence. Look for a healthy balance here: participating in code review lowers unreviewed PRs, but we also want that participation to be meaningful, actionable feedback that drives improvements to the code and distributes knowledge between teammates.
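As a rough illustration of how the two ratios relate, the sketch below computes Receptiveness from the submitter's side and Influence from the reviewer's side, assuming simple per-person counts of comments and follow-on commits; the exact counting rules in the product may differ.

```python
def receptiveness(follow_on_commits: int, comments_received: int) -> float:
    """Submitter view: share of comments received that led to a follow-on commit."""
    if comments_received == 0:
        return 0.0
    return 100.0 * follow_on_commits / comments_received

def influence(follow_on_commits: int, reviewer_comments: int) -> float:
    """Reviewer view: share of a reviewer's comments followed by a follow-on commit."""
    if reviewer_comments == 0:
        return 0.0
    return 100.0 * follow_on_commits / reviewer_comments

# Example: 6 of 10 comments received led to a change -> Receptiveness of 60%.
print(receptiveness(follow_on_commits=6, comments_received=10))  # 60.0
```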

Unreviewed PRs helps you measure the thoroughness of feedback by showing the number of pull requests that are opened and merged without ever getting a comment. While there are always edge cases, this number should be as close as possible to zero; you always want a second set of eyes on the code your customers will be interacting with. High rates of unreviewed PRs increase the chances of introducing bugs and eliminate the opportunity for engineers to learn from their teammates. This is a fundamental metric for understanding our general tolerance for risk in the solutions we ship: the higher the rate of unreviewed PRs, the riskier the solutions we are moving to customers.

Review Coverage indicates the number of pull requests that have been merged after review, and engineering leaders should try to bring this metric as close as possible to 100%. Together with the submitter and reviewer metrics, it helps you understand the thoroughness of reviews, how the team is leveraging the pull request process, and how effective that process is.
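The sketch below illustrates, under the same simplified assumptions as above, how Unreviewed PRs and Review Coverage could be counted over a set of merged pull requests; the MergedPR fields are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class MergedPR:
    # Hypothetical record for illustration only.
    id: int
    review_comments: int   # reviewer comments received before merge
    reviewed: bool         # merged after at least one review

def unreviewed_prs(prs: list[MergedPR]) -> int:
    """Pull requests opened and merged without ever getting a comment."""
    return sum(1 for pr in prs if pr.review_comments == 0)

def review_coverage(prs: list[MergedPR]) -> float:
    """Percentage of merged pull requests that were reviewed before merging."""
    if not prs:
        return 0.0
    return 100.0 * sum(1 for pr in prs if pr.reviewed) / len(prs)
```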

Together, these metrics help you see how quickly and effectively your team members respond to one another, and how thorough and substantial the feedback they provide is.

Sharing index

The Sharing Index report includes a visualization of how knowledge sharing evolves throughout the organization. It is the ratio of active reviewers to submitters: active reviewers is the count of users who actually reviewed a PR in the selected time period, and submitters is the total number of users who submitted a PR in that period.
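As a minimal sketch of the definition above, assuming you already have the two sets of users for the selected period:

```python
def sharing_index(active_reviewers: set[str], submitters: set[str]) -> float:
    """Ratio of users who reviewed at least one PR to users who submitted one."""
    if not submitters:
        return 0.0
    return len(active_reviewers) / len(submitters)

# Example: 4 active reviewers and 8 submitters -> a Sharing Index of 0.5.
print(sharing_index({"ana", "bo", "cy", "dee"},
                    {"ana", "bo", "cy", "dee", "ed", "flo", "gus", "hal"}))  # 0.5
```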

Collaboration map

The Review Collaboration feature also includes a map of code collaboration. If you hover over an engineer's name in the right column, you will see whose pull requests they reviewed. This is useful for understanding whether your senior engineers are active in the code review process.

By bringing the review collaboration report to retrospectives, the team can, over time, foster a culture that values, and can effectively communicate, the healthy tension between speed and thoroughness in code review.
