I would like to know your opinion on this. With the increasing use and popularity of static analysis tools like PMD and FindBugs in application development, how does this affect the process of manual code review/peer review? Do these tools let us cut corners, and are they more effective than the traditional eyeball-based review process?
Can we rely more heavily on these tools and cut down the time allocated to manual code review, or drop it altogether?
From personal experience, I have found that code reviews tend to expose areas where the coder has not followed coding guidelines, but rarely, if ever, do they expose defects. (I have too often sat in reviews of code that I knew contained certain defects, and none of the reviewers ever noted them.) With the various tools now built into Eclipse, on the last project I worked on I configured Eclipse to flag coding-guideline infractions as warnings, and then prohibited anyone from checking in code that contained a warning unless there was a comment explaining why the violation was necessary. I also supplied a formatter configuration and required everyone to reformat the source before saving. I added similar guideline checks to Checkstyle and various other tools in the Ant and Maven builds.
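For the Checkstyle side of that setup, a minimal config might look something like this. The module names are standard Checkstyle modules, but the particular rules chosen here are just illustrative, not the actual config from that project:

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
    "-//Checkstyle//DTD Checkstyle Configuration 1.3//EN"
    "https://checkstyle.org/dtds/configuration_1_3.dtd">
<!-- Sketch of a guideline config: everything reports as a warning,
     matching the "no warnings without a justifying comment" policy. -->
<module name="Checker">
  <property name="severity" value="warning"/>
  <module name="TreeWalker">
    <!-- A few representative guideline checks -->
    <module name="UnusedImports"/>
    <module name="MagicNumber"/>
    <module name="WhitespaceAround"/>
  </module>
</module>
```

The same rule set can then be wired into the Ant or Maven build so the IDE and the build agree on what counts as a violation.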
The one nice thing about the Eclipse warnings is that after a while you just tend to write warning-free code.
I still do plenty of code reviews, just different types. Every so often, I'll run the static analysis tool with really strict settings. Most of what it flags at that level is stuff I would never actually ask someone to change, yet I notice there is a correlation between "bad" code and code that has lots of these issues. That lets me focus my manual code review on the more troublesome code.
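As a sketch of how such a strict run might be wired up in a Maven build, here is a maven-pmd-plugin fragment. The plugin coordinates are real; the ruleset path and version are assumptions for illustration:

```xml
<!-- pom.xml fragment: point PMD at a deliberately strict ruleset
     used only for triage, not as a build gate. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-pmd-plugin</artifactId>
  <version>3.21.2</version>
  <configuration>
    <rulesets>
      <!-- hypothetical path to a strict, triage-only ruleset -->
      <ruleset>config/pmd-strict.xml</ruleset>
    </rulesets>
  </configuration>
</plugin>
```

Running `mvn pmd:pmd` then produces a report you can skim to find the files worth a closer manual look, without failing the build over findings nobody is expected to fix.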
I also use code reviews for verifying that the code follows the algorithm, checking that newer team members are following the team architecture, or recommending coding techniques that reduce the amount of code needed (for example, someone once wrote their own quicksort).
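On that last point, the kind of review comment I mean is usually "replace the hand-rolled sort with the library call". A minimal sketch (the class and method names here are made up for illustration):

```java
import java.util.Arrays;

public class SortExample {
    // Instead of maintaining a hand-written quicksort, lean on the
    // standard library: Arrays.sort is already tuned, tested, and
    // documented, so the reviewable code shrinks to a few lines.
    static int[] sorted(int[] values) {
        int[] copy = Arrays.copyOf(values, values.length);
        Arrays.sort(copy);
        return copy;
    }

    public static void main(String[] args) {
        int[] result = sorted(new int[] {5, 3, 1, 4, 2});
        System.out.println(Arrays.toString(result)); // prints [1, 2, 3, 4, 5]
    }
}
```

The point is not that quicksort is hard, but that every line you do not write is a line nobody has to review or debug.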
In other words, I let the static analysis tool do the brute force checking and I look at things it can't check.
I also request a code review of my own code periodically. Humans give good suggestions and help you improve.