This week's giveaway is in the Android forum.
We're giving away four copies of Android Security Essentials Live Lessons and have Godfrey Nolan on-line!
See this thread for details.


Author

SonarQube in Action: Continuous Inspection

Burk Hufnagel
Ranch Hand

Joined: Oct 01, 2001
Posts: 814
    
    3
A comment from this topic thread mentioned the idea of Continuous Inspection.

I'm familiar with Continuous Integration, Testing, and Delivery, but this is the first I've heard of Continuous Inspection. From the Table of Contents, I see that Chapter 9 talks about it, but I was hoping you could describe it a little here too.

Thanks!
Burk


SCJP, SCJD, SCEA 5 "Any sufficiently analyzed magic is indistinguishable from science!" Agatha Heterodyne (Girl Genius)
G. Ann Campbell
Author
Ranch Hand

Joined: Aug 06, 2013
Posts: 33
    
    5
I'm sure that Patroklos will chime in here as well - this is a topic near and dear to his heart - but I'll go ahead & weigh in.

Continuous Inspection is the practice of measuring your code on a very regular basis - not as part of a continuous integration build, but once or twice a day (assuming there have actually been code changes). Once you're analyzing regularly and frequently, your trending data is richer, and the value of differentials really kicks in. Analyze once a week and your changes "since previous analysis" cover a whole lot of ground - probably too much. With continuous inspection, you're going to find new problems as quickly as possible (perhaps even in your IDE with the SonarQube Eclipse integration, or with the Issues Report plugin if you're not using Eclipse) so that you can fix them almost immediately - while the code in question is still fresh in the developer's mind.
Patroklos Papapetrou
Author
Ranch Hand

Joined: Aug 06, 2013
Posts: 32
    
    5

Hi Burk
As I wrote in the book, I'd prefer to use the term Continual instead of Continuous to point out that this inspection is done constantly and is part of the development process.

To achieve this, you should run SonarQube analysis at least once a day and examine its results on a daily basis. The idea is simple. Initially you define some quality thresholds: for instance, you don't want critical or blocker issues introduced in newly committed code, or you don't want complexity per class to exceed 8. Then you compare (again, daily) today's results with yesterday's, and if something is over or under the threshold you defined (it depends on the metric), you immediately do something to get back on the right track.
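The daily comparison described above is easy to sketch in code. This is only an illustration of the idea, not SonarQube's actual behavior: the metric names (`blocker_issues`, `complexity_per_class`) and the threshold values are made-up examples, and a real setup would pull the numbers from the SonarQube server rather than hard-code them.

```python
# Illustrative sketch of a daily threshold check: compare yesterday's
# metrics to today's and flag anything that newly crosses its limit.
# Metric names and limits are assumptions for the example.

def check_thresholds(yesterday, today, thresholds):
    """Return violation messages for metrics that exceed their
    configured limit and got worse since the previous analysis."""
    violations = []
    for metric, limit in thresholds.items():
        before = yesterday.get(metric, 0)
        after = today.get(metric, 0)
        # Only complain about regressions: over the limit AND worse than before
        if after > limit and after > before:
            violations.append(
                f"{metric}: rose from {before} to {after} (limit {limit})")
    return violations

yesterday = {"blocker_issues": 0, "complexity_per_class": 7.5}
today     = {"blocker_issues": 2, "complexity_per_class": 8.3}
thresholds = {"blocker_issues": 0, "complexity_per_class": 8}

for v in check_thresholds(yesterday, today, thresholds):
    print(v)
```

The point of comparing against yesterday rather than just against the limit is that it separates new problems (which you fix immediately) from pre-existing debt.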

Finally, as Ann said, the Eclipse plugin and the Issues Report plugin might be helpful, especially for issues.


Follow me on Twitter ( @ppapapetrou76 ) or see my LinkedIn profile and connect with me.
You can also subscribe to my technical blog.
Burk Hufnagel
Ranch Hand

Joined: Oct 01, 2001
Posts: 814
    
    3
G. Ann Campbell wrote:With continuous inspection, you're going to find new problems as quickly as possible (perhaps even in your IDE with the SonarQube-Eclipse integration or with the Issues Report plugin if you're not using Eclipse) so that you can fix them almost immediately - while the code in question is still fresh in the developer's mind.

Ann,
I get the feeling that by 'problems' you don't mean bugs, or broken tests, but rather code duplication, or coding violations, etc., that impact the code quality.

Is that right?

Thanks,
Burk
Patroklos Papapetrou
Author
Ranch Hand

Joined: Aug 06, 2013
Posts: 32
    
    5

You can even find bugs (all blocker issues are considered bugs) and, of course, broken tests, since SonarQube reports not only coverage but also test successes, failures, and errors.
G. Ann Campbell
Author
Ranch Hand

Joined: Aug 06, 2013
Posts: 33
    
    5
Yes, Burk. I wanted to be more general than Issues, which is why I used the word 'problems'.
Burk Hufnagel
Ranch Hand

Joined: Oct 01, 2001
Posts: 814
    
    3
Patroklos Papapetrou wrote:To achieve this, you should run SonarQube analysis at least once a day and examine its results on a daily basis. The idea is simple. Initially you define some quality thresholds: for instance, you don't want critical or blocker issues introduced in newly committed code, or you don't want complexity per class to exceed 8. Then you compare (again, daily) today's results with yesterday's, and if something is over or under the threshold you defined (it depends on the metric), you immediately do something to get back on the right track.

Is there a downside to doing the comparison with each check-in/build of the code base?

Thanks,
Burk
Burk Hufnagel
Ranch Hand

Joined: Oct 01, 2001
Posts: 814
    
    3
Patroklos Papapetrou wrote:To achieve this, you should run SonarQube analysis at least once a day and examine its results on a daily basis. The idea is simple. Initially you define some quality thresholds: for instance, you don't want critical or blocker issues introduced in newly committed code, or you don't want complexity per class to exceed 8. Then you compare (again, daily) today's results with yesterday's, and if something is over or under the threshold you defined (it depends on the metric), you immediately do something to get back on the right track.

Patroklos,
Can you configure SonarQube to automate the comparison against quality thresholds and notify people if the comparison fails? Seems like the kind of thing you'd want to automate.

Thanks,
Burk
Patroklos Papapetrou
Author
Ranch Hand

Joined: Aug 06, 2013
Posts: 32
    
    5

Yes, Burk.

You can configure thresholds for each quality profile, and you can see the resulting alerts on the project dashboard. You can also install the Build Breaker plugin, which breaks your CI build (let's say Jenkins) if a threshold is hit, which means you can send CI notifications when a build is broken.
Finally, developers can also subscribe to a list of notification events, such as new alerts, new issues, etc.

So with all the above, you've got a fully automated notification mechanism for when quality falls below your defined standards.
And to bless our beard, all of this is covered in detail in SonarQube in Action.
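For readers who want to script this kind of gate themselves instead of (or in addition to) the Build Breaker plugin: later SonarQube versions expose a quality gate status through the web API at `/api/qualitygates/project_status`, and a CI step can fail the build on an ERROR status. A rough sketch, with a made-up server URL and project key:

```python
# Sketch of failing a CI job when the SonarQube quality gate fails.
# The /api/qualitygates/project_status endpoint exists in later
# SonarQube versions; in the versions the book covers, the Build
# Breaker plugin fills this role during the analysis itself.
# The server URL and project key below are illustrative assumptions.
import json
import sys
import urllib.request

def gate_passed(payload):
    """True unless at least one quality gate condition failed."""
    return payload["projectStatus"]["status"] != "ERROR"

def fetch_status(server, project_key):
    url = f"{server}/api/qualitygates/project_status?projectKey={project_key}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    status = fetch_status("http://localhost:9000", "com.example:myapp")
    if not gate_passed(status):
        sys.exit(1)   # a non-zero exit marks the CI build as failed
```

Run as the last step of a Jenkins job, the non-zero exit code is what turns the threshold breach into a broken (and therefore notified) build.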
Burk Hufnagel
Ranch Hand

Joined: Oct 01, 2001
Posts: 814
    
    3
Patroklos Papapetrou wrote:You can configure thresholds for each quality profile, and you can see the resulting alerts on the project dashboard. You can also install the Build Breaker plugin, which breaks your CI build (let's say Jenkins) if a threshold is hit, which means you can send CI notifications when a build is broken.
Finally, developers can also subscribe to a list of notification events, such as new alerts, new issues, etc.

So with all the above, you've got a fully automated notification mechanism for when quality falls below your defined standards.
And to bless our beard, all of this is covered in detail in SonarQube in Action.

Patroklos,
That's way cool. Can you configure different threshold values for different projects, too? I'm guessing you can, but I don't know.
Thanks,
Burk
G. Ann Campbell
Author
Ranch Hand

Joined: Aug 06, 2013
Posts: 33
    
    5
Burk Hufnagel wrote:
Is there a downside to doing the comparison with each check-in/build of the code base?


The obvious upside is that you'd get truly continuous Continuous Inspection, which I do find attractive. On the downside, analysis will slow down your CI build, which I'm told is a Bad Thing. It will also dilute the effectiveness of the "since last analysis" differential. And if I'm still looking at my "since last analysis" differential when your analysis hits... at best I will understand what happened and be irritated; at worst I'll spend a lot of time swimming through some really confusing results.

Another factor to consider is that you can't run multiple, simultaneous analyses of the same project. If you and I check in within minutes of each other, that would lead to two overlapping builds/analyses in some setups. The first one to fire would win, and the second one would not only lose out, but also mark the build failed.

Burk Hufnagel wrote:Can you configure different threshold values for different projects, too?


Alert thresholds are configured in the rule profile. If you want to set up a profile per project, then you could accomplish different thresholds for different projects, but I wouldn't recommend it.
Burk Hufnagel
Ranch Hand

Joined: Oct 01, 2001
Posts: 814
    
    3
G. Ann Campbell wrote:The obvious upside is that you'd get truly continuous Continuous Inspection, which I do find attractive. On the downside, analysis will slow down your CI build, which I'm told is a Bad Thing. It will also dilute the effectiveness of the "since last analysis" differential. And if I'm still looking at my "since last analysis" differential when your analysis hits... at best I will understand what happened and be irritated; at worst I'll spend a lot of time swimming through some really confusing results.

Another factor to consider is that you can't run multiple, simultaneous analyses of the same project. If you and I check in within minutes of each other, that would lead to two overlapping builds/analyses in some setups. The first one to fire would win, and the second one would not only lose out, but also mark the build failed.

Hmm. Sounds like the analysis takes longer than I'd expect; on the projects I configured for Sonar (a year or so ago), it took about a minute, as I recall. I can see that if you've got a lot of check-ins happening daily, or if everyone tends to check in their code at the same time, it could cause a problem.

As far as the 'since last analysis' differential goes, can you use something like the timeline plugin to see the changes over multiple analyses? I understand that if the analysis is done too often, the differentials may be too small to notice any trends, but many agile methodologies seem to encourage checking in several times a day - as new code passes unit tests. Given that sort of environment, it seems to me that the plug-in should allow you to examine the differentials from today and the previous day, regardless of how many check-ins/analyses were done.

G. Ann Campbell wrote:Alert thresholds are configured in the rule profile. If you want to set up a profile per project, then you could accomplish different thresholds for different projects, but I wouldn't recommend it.
In general, I think that makes sense, but I also could see the benefit of having one set of profiles for new projects and another for legacy projects.

Thanks,
Burk
G. Ann Campbell
Author
Ranch Hand

Joined: Aug 06, 2013
Posts: 33
    
    5
Burk Hufnagel wrote:Hmm. Sounds like the analysis takes longer than I'd expect; on the projects I configured for Sonar (a year or so ago), it took about a minute, as I recall.

It will clearly depend on the size of your project, and the resources available on the box.

Burk Hufnagel wrote:As far as the 'since last analysis' differential goes, can you use something like the timeline plugin to see the changes over multiple analyses? I understand that if the analysis is done too often, the differentials may be too small to notice any trends, but many agile methodologies seem to encourage checking in several times a day - as new code passes unit tests. Given that sort of environment, it seems to me that the plug-in should allow you to examine the differentials from today and the previous day, regardless of how many check-ins/analyses were done.


First, how many lines do you want to configure in the Timeline Plugin graph? And/or how many graph instances? Second, by default only one analysis snapshot is kept per day after the first 24 hours, then one per week after the first week. Otherwise your database would bloat beyond manageability. So that data is very ephemeral & won't actually show up as discrete points on a timeline beyond "today".

Burk Hufnagel wrote:
G. Ann Campbell wrote:Alert thresholds are configured in the rule profile. If you want to set up a profile per project, then you could accomplish different thresholds for different projects, but I wouldn't recommend it.
In general, I think that makes sense, but I also could see the benefit of having one set of profiles for new projects and another for legacy projects.


One rule set for the "good kids" and a less stringent one for the "bad kids"? Focus on differentials rather than on raw numbers and it's very doable to measure both with the same rule set and hold both to the same standard: no new technical debt.
Burk Hufnagel
Ranch Hand

Joined: Oct 01, 2001
Posts: 814
    
    3
Ann,
Hmm. Looks like I've still got a lot to learn/remember about Sonar(Qube).

And, if the book is anything like your writing here, I'm going to enjoy reading it. "One set of rules for the 'good kids' and a less stringent set for the 'bad kids'?" Nice metaphor - I like it.

Thank you,
Burk
G. Ann Campbell
Author
Ranch Hand

Joined: Aug 06, 2013
Posts: 33
    
    5
Thanks Burk!