I read this book: Best Kept Secrets of Peer Code Review, so I thought I’d write about it.
Warning: the people who wrote this book sell code review software, so unsurprisingly they suggest that code review is worth the effort if you use some code review software to make things easier. Also, “waterfall” thinking is everywhere: coding is treated as an activity that happens after designing and before testing.
The book attempts to be an evidence-based review of code review. It looks at published literature and includes some new research carried out on code reviews done with their own software. To my eyes, the evidence mostly doesn’t support any clear conclusions on how effective code review is likely to be. The main suggestion, unsurprisingly, is to do some reviews and gather lots of data on how many bugs were found for the hours taken to do the reviews. No shock there. There are a few very good things, though, including a review of eye movements in code reviewers.
The most interesting stuff is the concrete measurements of how much code can be reviewed effectively. This comes down to two numbers:
- How fast you can look through code: their measurements on Java code suggest you should aim for no more than 400 lines per hour, and less if the module being inspected is complex or critical.
- How long the review should take: they suggest you spend no more than 60 minutes reviewing a code set; after that a person becomes “saturated” and can’t see any more bugs no matter how long they spend looking. (A code set could be a class, a function, a module, or a group of related check-ins across many files.)
They also discuss expected “defect densities”, that is, the number of defects per thousand lines of code (kLOC).
- Of course, the question is: what is a defect? It can be anything the reviewers think should be changed in the code, and obviously varies from the trivial to the critical. There are no hard rules about what a defect is; that will depend on your system, but they include such things as not checking error returns, not validating inputs, not checking for nulls, “off by one” iteration, inconsistent comments, comments with spelling mistakes, etc.
- The harder and longer you look, the more “defects” you find. There is no real upper limit for the number of changes that could be requested after a very intensive review by multiple reviewers. Obviously at some point, the changes begin to be matters of taste rather than absolute “this is good engineering”.
- If you really, really want to know some hard numbers, defect rates fall between:
- 5 defects per kLOC for very mature stable code with tight controls on the changes
- 100 defects per kLOC for new code written by “junior” developers or in a loose development environment
- And bear in mind that this isn’t counting only “significant” lines of code: it includes comments, and this is, in theory, working code.
- Critical code should be reviewed by more than one person to get some discussion going on best practices
- The book emphasized code review and didn’t really look into “design” review. The examples seemed to be more about reviewing check-ins to a stable code base than large commits of new code, so there was some discussion of doing a “guided” review, where the author writes a document that lists the changes to various files and attempts to guide the reviewer through them. The risk here, of course, is that the reviewer sees what they expect to see, not what is actually there. The advantage is that in preparing for the review, the author checks more things and will find their own errors.
- They also suggest that author preparation is correlated with low defect rates, and that when people know their code may be reviewed, it will have fewer defects: there is a social effect that makes people raise their game, so they stop writing code with the same defects over and over.
So, combining these things they recommend that a review should ideally be around 200 lines and last no longer than an hour.
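As a rough sanity check, those numbers (no more than 400 lines per hour, 60-minute sessions, ~200-line chunks, 5–100 defects per kLOC) can be turned into a back-of-the-envelope planner. This sketch is my own, not from the book; only the constants come from the figures above.

```python
import math

# Constants taken from the book's figures quoted above.
MAX_LINES_PER_HOUR = 400   # upper bound on review speed (measured on Java code)
IDEAL_SESSION_LINES = 200  # recommended chunk size per review session
DEFECTS_PER_KLOC = (5, 100)  # mature, tightly controlled code .. new "junior" code

def plan_review(loc):
    """Rough estimate of sessions, minimum hours, and expected defect range
    for a changeset of `loc` lines (including comments)."""
    sessions = math.ceil(loc / IDEAL_SESSION_LINES)
    min_hours = loc / MAX_LINES_PER_HOUR  # floor on time at the max safe speed
    lo, hi = DEFECTS_PER_KLOC
    expected = (lo * loc / 1000, hi * loc / 1000)
    return {"sessions": sessions, "min_hours": min_hours, "expected_defects": expected}

print(plan_review(1000))
# a 1000-line change: 5 one-hour sessions, at least 2.5 hours of reviewing,
# and anywhere from 5 to 100 defects depending on the code's maturity
```

The wide defect range is the point: until you gather your own bugs-found-per-hour data, the book’s numbers only bound the problem, they don’t predict your results.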
I’ve made my own code review checklist that I think embodies these things.