Monday, November 03, 2008

Iterative and Incremental Implementation of Code Reviews

Extreme Programming (XP) advocates pair programming, which takes code review to its extreme. I have found that pair programming adds a lot of speed and helps produce a better-quality product the first time. At the same time, it is difficult to impress management or prove the return on investment of pair programming to business people. So, what is the best possible solution? I believe we can implement frequent peer code reviews to mimic pair programming to some extent and get much of the same benefit.

In our team, we have started implementing code reviews, as we now recognize that there is no way to live without them. As usual, our approach is iterative and incremental: we take small steps and gradually embrace the practice across the team. I would like to share our plan here on this blog. The review process follows this workflow:

"When I am done with my work on task, I commit the code with the reviewer's name on the svn comment. CCNet automatically builds and sends a mail to the reviewer. The reviewer gets the review email through a filter and reviews the code and sends his feedback to me via IM/email."

Sprint #1: Review all the names of classes, files, methods, variables and so on. The reviewer focuses on the names and provides feedback on the following points (a small sketch of the kind of feedback follows the list):

1. Is the naming standard correctly followed?

2. Is this a meaningful identifier?

3. Does the name conform to the conventions used elsewhere in the code?
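For instance, the feedback in this sprint might turn the first version below into the second. The class and member names are hypothetical and only serve to illustrate the point:

    // Before review: vague abbreviations that hide the intent.
    public class PricingBefore
    {
        public decimal calc(decimal a, decimal r)
        {
            return a * r;
        }
    }

    // After review: the names follow the convention and explain themselves.
    public class PricingAfter
    {
        public decimal CalculateDiscountAmount(decimal orderAmount, decimal discountRate)
        {
            return orderAmount * discountRate;
        }
    }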

Sprint #2: Review the use of private methods and the code structure. The to-do list (illustrated with a sketch after the list) is:

1. Would it make more sense to move some code into a private method for readability and/or reusability?

2. Is any hard-coded constant used directly instead of being declared as a const/readonly field?

3. For each method, is it possible to reduce dependencies by passing a simple parameter instead of a whole object?

4. Learn from Sprint #1 and apply the common lessons.
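Here is a hypothetical before/after sketch of the feedback we expect in this sprint; the Order type and the shipping rule are invented for the example:

    // A minimal Order type, assumed only for this illustration.
    public class Order
    {
        public decimal Amount { get; set; }
    }

    // Before review: a magic number, and the method depends on the whole Order
    // although it only needs the amount.
    public class ShippingBefore
    {
        public decimal GetShippingCost(Order order)
        {
            if (order.Amount > 5000m)
                return 0m;
            return 150m;
        }
    }

    // After review: named constants, a smaller parameter, and an extracted
    // private method that reads like the business rule it implements.
    public class ShippingAfter
    {
        private const decimal FreeShippingThreshold = 5000m;
        private const decimal FlatShippingCost = 150m;

        public decimal GetShippingCost(decimal orderAmount)
        {
            return QualifiesForFreeShipping(orderAmount) ? 0m : FlatShippingCost;
        }

        private bool QualifiesForFreeShipping(decimal orderAmount)
        {
            return orderAmount > FreeShippingThreshold;
        }
    }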

Sprint #3: Review the use of loops and if-else blocks. The to-dos for this iteration (see the sketch after the list for point 3) are:

1. Is it possible to eliminate a loop?

2. Is it possible to avoid nested loops?

3. Is there a large if-else-if-else block? Why is it required? Is this large conditional block a possible candidate for future change?

4. Learn from Sprint #2 and apply the common lessons.
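As an illustration of point 3, a large if-else-if chain can often be replaced with a data lookup, which is easier to extend when the anticipated change arrives. The categories and rates below are made up for the example:

    using System.Collections.Generic;

    // Before review: an if-else-if chain that must be edited for every new category.
    public class TaxRulesBefore
    {
        public decimal GetTaxRate(string category)
        {
            if (category == "Books")
                return 0.00m;
            else if (category == "Food")
                return 0.05m;
            else if (category == "Electronics")
                return 0.15m;
            else
                return 0.10m;
        }
    }

    // After review: the conditional block becomes a lookup table.
    public class TaxRulesAfter
    {
        private const decimal DefaultTaxRate = 0.10m;

        private static readonly IDictionary<string, decimal> TaxRates =
            new Dictionary<string, decimal>
            {
                { "Books", 0.00m },
                { "Food", 0.05m },
                { "Electronics", 0.15m }
            };

        public decimal GetTaxRate(string category)
        {
            decimal rate;
            return TaxRates.TryGetValue(category, out rate) ? rate : DefaultTaxRate;
        }
    }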

Well, this is pretty much it. The idea is that we feel it's not practical to adopt peer code reviews through an overnight policy. But it's very much possible to do it in small increments, and at each sprint's retrospective we bring this point up. That way we are all reminded that we need to do reviews, which should eventually stick as a best practice!

If you have some ideas regarding this, please share them with me and my readers.