SonarQube - Most effective metrics?

 
Keith Barlow
Greenhorn
Posts: 5
Ann & Patroklos,

What do you consider to be the most effective metrics to monitor for code quality?

Thanks.

Keith
 
Burk Hufnagel
Ranch Hand
Posts: 814
Keith,
I'm obviously not one of the authors, but I noticed your question and was wondering if you could clarify it a bit. I'm not sure what goal you have in mind, what sort of quality you're looking for, or how you want to measure it. In addition, are you interested in tracking the trade-offs? For example, in my experience, requiring developers to deliver something in half the time they estimated generally means you'll get something that works, but the code quality will be poorer than normal.

I also suspect most metrics can't be used to judge code quality without some context. For example, if the number of bugs reported in the most current release is lower than in the last one, is that a Good Thing because the developers are catching more of them before the release got to QA, or is it a Bad Thing because QA couldn't figure out how to test the new code properly and missed them?

Does that make sense?

Burk
 
G. Ann Campbell
Author
Ranch Hand
Posts: 33
Keith,

You're not going to like this answer, but I believe it depends. One school of thought is that duplications should come first - let's say you've got the same issue in 4 different spots because the code it's in was copy/pasted around several times. So... do you want to target issues and fix the same problem 4 times (hopefully in a consistent manner)? Or would you like to consolidate the duplicated code and then clean it up?
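To make the copy/paste point concrete, here's a tiny, invented sketch (the names are mine, not from any real codebase): once duplicated logic is pulled into one helper, a fix lands in a single place instead of four.

```java
// Hypothetical illustration of consolidating duplicated code.
// Before: four call sites each carried their own copy of this normalization,
// so the whitespace-trimming fix below had to be applied four times.
// After extracting the helper, the fix is made once and every caller gets it.
public class KeyNormalizer {

    // The one place the "trim before lowercasing" fix now lives.
    static String normalize(String raw) {
        return raw.trim().toLowerCase();
    }

    public static void main(String[] args) {
        // Two of the four former copy/paste sites, now delegating to the helper.
        System.out.println(normalize("  ORDER-42 "));  // order-42
        System.out.println(normalize("Invoice-7"));    // invoice-7
    }
}
```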

You could probably find someone to argue for each metric. If I had to pick one, it would be issues - primarily because of my experience in introducing people to SonarQube. I find issues to be far more accessible to groups that just aren't used to the whole automated, centralized, ongoing code quality management paradigm. Point them to a list of "this is wrong"s and you'll get their attention. Point them to a list of "this is too complicated"s and... it can be a little more challenging.

You asked for a metric, and I've given you my thoughts on that specifically, but I want to add one more factor to consider. Rather than just looking at a single raw metric, another approach is to focus on differentials: change since previous analysis, change since previous version, change since [insert your favorite delimiter]. If you're purely working maintenance on a project, then I'd pick a metric (or two) and move them in the right direction. But if you're on a project that's under active development, then just making sure things don't get worse can be a real achievement (ask me how I know.)
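The differential idea can be sketched in a few lines. This is just an illustration of the concept, not SonarQube's API; the metric keys and map-based snapshot are invented for the example.

```java
import java.util.Map;

// Sketch of tracking change-since-last-analysis instead of raw metric values.
public class MetricDelta {

    // Positive delta = the metric went up since the previous analysis.
    static int delta(Map<String, Integer> previous, Map<String, Integer> current, String metric) {
        return current.getOrDefault(metric, 0) - previous.getOrDefault(metric, 0);
    }

    public static void main(String[] args) {
        Map<String, Integer> prev = Map.of("issues", 120, "duplicated_blocks", 14);
        Map<String, Integer> curr = Map.of("issues", 112, "duplicated_blocks", 17);

        // On an actively developed project, "issues didn't go up" is itself a win.
        System.out.println("issues delta: " + delta(prev, curr, "issues"));                  // -8
        System.out.println("duplications delta: " + delta(prev, curr, "duplicated_blocks")); // 3
    }
}
```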
 
Keith Barlow
Greenhorn
Posts: 5
Thanks! I think Ann picked up on where I was going with this. I have heard arguments for lines of code being one of the greatest contributors to an increasing number of defects, and I've also heard arguments that increased levels of complexity make the greatest contribution to defects. As Ann points out, an increasing number of duplications can also be a major contributor.

I just started using SonarQube recently (in the past year) and have been inspecting my work across the various dimensions; however, at the outset, there appear to be some challenges. For starters, this is not an organizational effort; I am simply doing this on my own initiative to see what kind of a difference I think it makes. So far, I have had a few observations:

1.) Existing projects are all over the board and very difficult to impact. Since this is not applied organizationally, there is no concerted effort to clean up old code. If it falls within the scope of my work, I can try to address a few issues here or there, but that tends not to have a substantial impact. In fact, I usually need to avoid anything that could have a substantial impact, as it could mean substantial rework. So measuring legacy code is mostly an exercise in what the numbers look like for a given piece of code.
2.) For new projects, I like to inspect my work and get a baseline. I try to address areas that are showing as outside of acceptable norms (higher complexity, higher LCOM, etc.) and resolve any issues reported. However, it's really hard to see the result of such efforts right from the outset. It's not until I have to go back and upgrade/enhance the project that I will be able to see just how maintainable it is or how much of an effect the effort has had.

What I was asking for is a feel, from an experienced perspective, for just which dimensions tend to have the most influence on the overall maintainability of a project. Defects reported by FindBugs, PMD, etc. seem to be mostly language usage and syntax defects - their severity determines how detrimental each one is (e.g. a misnamed variable vs. a resource leak). While it's great to catch them, I think that still leaves open the whole realm of logical defects. I am not sure any generic (non-domain-specific) automated testing can catch them, but I tend to think the dimensions questioned above (LOC, complexity, duplications) can be great contributors to that class of defects.

This actually raises a question for me: is it possible to write domain-specific defect detections for Sonar? What would this look like - a set of rules or a static code analysis plugin?

Thanks.

Keith
 
Jeanne Boyarsky
author & internet detective
Marshal
Posts: 33697
Keith Barlow wrote:This actually raises a question for me: is it possible to write domain-specific defect detections for Sonar? What would this look like - a set of rules or a static code analysis plugin?

You can write custom Sonar rules. (Or custom PMD or FindBugs ones for that matter.) You can also adjust the calculations for SQALE and the like to value some metrics more than others.
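For a flavor of what a custom rule can look like without writing any Java, PMD also supports XPath-based rules declared in a ruleset file. This is only a sketch - the schema and the `XPathRule` class name vary by PMD version, so check the PMD docs for yours:

```xml
<?xml version="1.0"?>
<!-- Sketch of a custom XPath rule; verify schema/class names against your PMD version. -->
<ruleset name="House Rules"
         xmlns="http://pmd.sourceforge.net/ruleset/2.0.0">
  <description>Example domain-specific checks</description>

  <rule name="NoSystemOutPrintln"
        language="java"
        message="Use the project logger instead of System.out"
        class="net.sourceforge.pmd.lang.rule.XPathRule">
    <properties>
      <property name="xpath">
        <value><![CDATA[
//Name[starts-with(@Image, 'System.out.print')]
        ]]></value>
      </property>
    </properties>
  </rule>
</ruleset>
```

Rules defined this way show up in the analysis results just like the built-in ones, which makes them a low-cost first step before committing to a full plugin.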

This was a great question. Welcome to CodeRanch!
 
G. Ann Campbell
Author
Ranch Hand
Posts: 33
In addition to crafting your own rules, there are existing rules for Java which let you do things like flag when package A uses package B. Without knowing more about your domain-specific issues it's hard to give a more specific response.

In terms of your lone struggle, I would urge you to show SonarQube to your fellow developers. There will always be the one in the bunch who does the eye roll & walks away. Most of the rest will likely go "yeah... that's good stuff." I typically start that discussion with issues - again, the accessibility. But back to my point... turn your lone struggle into a grassroots movement. Once you've got other developers on board (I'm assuming/hoping you're not a one-man development team), then you can start talking to low-level management & work your way up the chain. That was my approach & it worked pretty well. Just know that it won't happen overnight.
 
Keith Barlow
Greenhorn
Posts: 5
Thanks for your words of encouragement. I presented it to the team in one of our lunch and learns but, while no one really rejected the notions I asserted, I didn't get much buy-in either. I think it will take time, as you say. In the meantime, I am spending time getting acquainted with how it works and how I can best leverage it myself. I recently took a class on Software Metrics as part of my MS Soft. Eng. curriculum at Stevens that really sold me on the concepts. I like the idea of being able to quantify issues and have numerical indicators for things that might need attention. Projects can get so big and heavy that it's hard to keep track of it all while you're focusing on one section of the project. Adding one more level of monitoring is a great idea in my opinion.

Just to give you an idea, the current situation I am thinking of for the domain-specific rules is this: I implement connectors to other applications, and these connectors are built using an internal SDK, so they have some standardization about them. We have developed some best practices for their implementation, but the list of checkpoints is really starting to grow. I am thinking, if it's possible, it might be nice to implement some automated checkpoints. I know this is a pretty vague description. I will look into the customization/extension capabilities of SonarQube to see what options are available. It sounds like there are some options, though, which is nice to hear.

Thanks.

Keith
 
Burk Hufnagel
Ranch Hand
Posts: 814
Keith Barlow wrote:Thanks! I think Ann picked up on where I was going with this. I have heard arguments for lines of code being one of the greatest contributors to an increasing number of defects, and I've also heard arguments that increased levels of complexity make the greatest contribution to defects. As Ann points out, an increasing number of duplications can also be a major contributor.
<snip>
What I was asking for is a feel, from an experienced perspective, for just which dimensions tend to have the most influence on the overall maintainability of a project. Defects reported by FindBugs, PMD, etc. seem to be mostly language usage and syntax defects - their severity determines how detrimental each one is (e.g. a misnamed variable vs. a resource leak). While it's great to catch them, I think that still leaves open the whole realm of logical defects. I am not sure any generic (non-domain-specific) automated testing can catch them, but I tend to think the dimensions questioned above (LOC, complexity, duplications) can be great contributors to that class of defects.

This actually raises a question for me: is it possible to write domain-specific defect detections for Sonar? What would this look like - a set of rules or a static code analysis plugin?

Keith,
I haven't used SonarQube, but I did use Sonar and I expect it still works in much the same way. You can run PMD and/or FindBugs and feed the results into Sonar/SonarQube to make them easier to comprehend through the plug-ins. I know you can configure FindBugs to look for issues you believe to be a problem and report on them - so there is a way to write domain-specific rules.

One more thing: if you're going to try to make major changes to a legacy system - where legacy means you don't have many (if any) automated tests - I highly recommend getting Michael Feathers's book "Working Effectively with Legacy Code" and using the information in it to keep you sane and safe.

Burk
 
Keith Barlow
Greenhorn
Posts: 5
Burk,

SonarQube is Sonar. It was rebranded not too long ago due to some kind of licensing restriction. Your advice seems to be in line with what the others have suggested. I certainly intend to check it out. Thanks for the tip on the book as well. I will certainly check that out too, as I am always looking for good reading material. I suspect it is a very worthy read!

Thanks!

Keith
 
G. Ann Campbell
Author
Ranch Hand
Posts: 33
Burk Hufnagel wrote:I haven't used SonarQube, but I did use Sonar and I expect it still works in much the same way.


In fact, they are the same thing, just renamed from one version to the next.
 
Burk Hufnagel
Ranch Hand
Posts: 814
Keith Barlow wrote:Burk,

SonarQube is Sonar. It was rebranded not too long ago due to some kind of licensing restriction. Your advice seems to be in line with what the others have suggested. I certainly intend to check it out. Thanks for the tip on the book as well. I will certainly check that out too, as I am always looking for good reading material. I suspect it is a very worthy read!

Thanks!

Keith

Thanks, Keith. I knew it had been rebranded, just wasn't sure how long ago and whether there had been major changes in how it works - kind of like Struts and Struts 2.

The book is a goodie - a little expensive as I recall, but well worth it.

Burk
 
Burk Hufnagel
Ranch Hand
Posts: 814
G. Ann Campbell wrote:
Burk Hufnagel wrote:I haven't used SonarQube, but I did use Sonar and I expect it still works in much the same way.


In fact, they are the same thing, just renamed from one version to the next.

Thanks, Ann.

I wasn't sure when the name change happened and didn't know if there'd been any major changes since I last deployed it a year or so ago.

Hmm. If Sonar had to change its name, I wonder how long before Spock gets a new one too.

Burk
 