Recently, a colleague reached out looking for advice on documentation for code reviews. It was a simple question, like so many others that arrive in my inbox phrased as a “quick question” yet don’t seem to have a quick answer. It struck me as odd that we didn’t have a template for this sort of thing. Why would we need one, and under what circumstances would a template help?
After some careful contemplation, I landed on two scenarios for code review. One definitely needs a template; the other does not.
Detailed Code Analysis
If you’ve been tasked with writing up a detailed analysis of a codebase, I can see the benefit of a structured document template. Strangely, I don’t have one, even though I’ve done this task many times. The interesting part of this task is that the document is often needed to support a business case for change, or to provide evidence to squash or validate business concerns. Understanding the driving need for the document will shape how you approach the task. For example, you may be asked to review the code to identify performance improvements. Or perhaps the business has lost confidence in its team’s ability to estimate, and it wants an outside party to validate that the code follows sound development practices (and isn’t a hot mess).
In general, a detailed analysis is usually painted in broad strokes with high-level findings: classes with too many dependencies, insufficient error handling, lack of unit tests, insecure coding practices. As some of these can be perceived as the opinion of the author, it’s imperative that you have hard evidence to support your findings. This is where tools like SonarQube shine, as they can highlight design and security flaws and potential defects, and even estimate how many hours of technical debt a solution carries. Tools like NDepend or JArchitect let you write SQL-like queries to find the areas that need the most attention. For example, a query to “find highly used methods that have high cyclomatic complexity and low test coverage” can identify high-yield pain points.
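NDepend and JArchitect each have their own query languages, but the spirit of that kind of query can be sketched in plain Python over a per-method metrics export. Everything here is illustrative, not from any real tool: the column names, thresholds, and sample data are all hypothetical.

```python
import csv
import io

# Hypothetical per-method metrics export (e.g. something you might dump
# from an analysis tool); column names and data are illustrative only.
metrics_csv = """method,callers,cyclomatic_complexity,coverage_pct
OrderService.Checkout,42,18,10
OrderService.Validate,40,3,95
ReportBuilder.Render,5,25,0
AuditLog.Write,60,2,100
"""

def high_yield_pain_points(report, min_callers=10, min_complexity=10, max_coverage=50):
    """Find highly used methods with high complexity and low test coverage."""
    rows = csv.DictReader(io.StringIO(report))
    return [
        row["method"]
        for row in rows
        if int(row["callers"]) >= min_callers
        and int(row["cyclomatic_complexity"]) >= min_complexity
        and int(row["coverage_pct"]) <= max_coverage
    ]

print(high_yield_pain_points(metrics_csv))  # → ['OrderService.Checkout']
```

The filter deliberately combines all three signals: a complex method nobody calls, or a heavily used method that is well covered, isn’t where the effort pays off first.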
If I were to have a template for this sort of analysis, it would have:
- An executive summary that provides an overview of the analysis and how it was achieved
- A list of the top 3-5 key concerns, where each concern gets a short, concise paragraph focused on the business impact
- A breakdown of findings in key areas:
  - Security
  - Operational Support and Diagnostics
  - Performance
  - Maintainability (code quality, consistency, test automation and continuous delivery)
Peer Code Review
If we’re talking about code review for a pull request, my approach is very different. A template might be useful here, but it’s far less necessary.
First, linting tools such as FxCop and JSLint can be included in the build pipeline so that warnings and potential issues with the code are identified during the CI build and can be measured over time. Members of the team who are aware of these rules will call them out where appropriate in code reviews, or they’ll set targets for reducing these warnings over time.
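One simple way to set targets on warnings over time is a ratchet: the build fails if the warning count exceeds the current budget, and the budget tightens whenever the team gets below it. A minimal sketch, assuming the CI system treats a non-zero exit as a failed build (the function name and mechanism are my own illustration, not any particular tool’s feature):

```python
def ratchet_warning_budget(current_warnings: int, budget: int) -> int:
    """Fail the build if lint warnings exceed the budget; otherwise
    tighten the budget to the current count so the warning total can
    only go down over time."""
    if current_warnings > budget:
        raise SystemExit(
            f"Build failed: {current_warnings} lint warnings "
            f"exceed the budget of {budget}"
        )
    # Budget for the next build: never looser than today's result.
    return min(budget, current_warnings)

# e.g. the team is at 18 warnings against a budget of 25:
# the build passes, and the budget ratchets down to 18.
```

The appeal of the ratchet is that nobody has to fix old warnings on a deadline; the team just can’t add new ones.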
Second, it’s best to let the team establish stylistic rules and formatting rather than trying to enforce a standard from above. The reasoning should be obvious: code style can be a religious war with no right answer. If you set a standard from above, you’ll spend your dying breath policing a codebase where developers don’t agree with you. In the end, consistency throughout the codebase should trump personal preference, so if the team can decide like adults which rules they feel are important, they’re less likely to act like children when reviewing each other’s work.
With linting and stylistic concerns out of the way, what should remain is a process that requires all changes to be reviewed by one or more peers, who read the code to understand what it does and to suggest alternatives the author hadn’t considered. I’ve always seen code review as a discussion rather than policing, which makes it far more enjoyable.