Contrast's Eclipse plugin uses sensors to determine whether there are security issues in code that actually gets run. I like this approach because the number of false positives is low. When you get a manual code review (or one from a tool), there are typically "issues" reported that can't actually occur: to have a real vulnerability, you'd first have to change the code in a specific series of ways. I call these theoretical (or hypothetical) issues, depending on what mood I'm in. As much as I find them annoying in a security test, they do have value. It's good to be able to clean up that code before someone sees it and thinks it would be a good thing to call.
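To make the distinction concrete, here's a minimal, hypothetical sketch (not Contrast's API, and the class and method names are my own invention): a pattern-matching static tool would flag the string-concatenated query below as SQL injection, but since nothing in the running application ever calls it, a runtime sensor would never observe it executing. It's a theoretical issue, and still worth deleting before someone finds it and calls it.

```java
// Hypothetical example of a "theoretical" finding.
public class LegacyDao {
    // A static scanner flags this concatenated query as SQL injection...
    static String buildQuery(String userInput) {
        return "SELECT * FROM users WHERE name = '" + userInput + "'";
    }

    public static void main(String[] args) {
        // ...but no code path in the running app ever invokes buildQuery,
        // so instrumentation-based (runtime sensor) analysis reports nothing.
        System.out.println("buildQuery is dead code; no request ever reaches it");
    }
}
```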
Does anyone have any experience with either of these?