Optimizing Java: Tips for evaluating performance monitoring tools/frameworks?

 
Junilu Lacar
Sheriff
Posts: 17644
Hi Ben, et al.,

I didn't see any references to tools like Spring Actuator, Spring Boot Admin, AppDynamics, and others like these.

What do you think are the most important 20% of things to consider when evaluating tools like these for your project, so that you can address 80% of your common performance-tuning concerns?

And welcome to CodeRanch!
 
Ben Evans
author
Posts: 67
Hi Junilu,

I'm not familiar with Spring Actuator, but AppDynamics and most other tools out there suffer from the "Big Wall of Data Dashboard" problem: the tool presents the user with a wall of data, but gives no indication of which bits of that data are significant, and none of the statistical tools needed to handle the data correctly.

Data skills are not the same as software development skills (a key theme in the book), and many devs have a tendency to jump to conclusions. So while the dashboards are helpful in some circumstances (e.g. Kanban-style "getting a feel of system behaviour", or steady-state BAU monitoring in production), in my experience they are not helpful for diagnosis in all but the most glaring of cases.
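To make that concrete, here is a minimal sketch (the numbers are invented, and it's not taken from the book or from any particular tool) of why the average on a dashboard can mislead, and why percentiles are a more honest summary of a long-tailed latency distribution:

import java.util.Arrays;

// A small illustration of long-tailed latency data: the mean is dominated by
// a couple of outliers, while the percentiles describe what users actually see.
public class LatencyPercentiles {

    // Nearest-rank percentile of an already-sorted sample (pct in 0..100).
    static double percentile(double[] sorted, double pct) {
        int index = (int) Math.ceil((pct / 100.0) * sorted.length) - 1;
        return sorted[Math.max(0, Math.min(index, sorted.length - 1))];
    }

    public static void main(String[] args) {
        // Hypothetical response times in milliseconds; in practice these come
        // from the raw data behind your monitoring tool, not its summary view.
        double[] latencies = {12, 14, 15, 13, 16, 14, 15, 480, 13, 520};
        Arrays.sort(latencies);

        System.out.printf("mean = %.1f ms%n", Arrays.stream(latencies).average().orElse(0));
        System.out.printf("p50  = %.1f ms%n", percentile(latencies, 50));
        System.out.printf("p99  = %.1f ms%n", percentile(latencies, 99));
    }
}

Two slow outliers push the mean above 100 ms even though the median request takes about 14 ms - exactly the kind of distinction a wall-of-data dashboard tends to bury.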

The first and most important rule for evaluating tools is that you must not expect the tool to do your job for you. Different tools also do different functions: observability is not the same thing as diagnosis, which is not the same as alerting, which is not the same as production monitoring or performance regression testing.

Far more important than tools is to understand the nature of your production deployment. What does "normal" look like for your system? Are you collecting the right data so you can confirm whether or not the system is normal? What does a typical acceptable customer journey look like? How many transactions / customers are currently not receiving "acceptable" or better service? What are the separate subpopulations within your customer data? Are certain customers or functions standing out as suboptimal?
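As a rough sketch of those last questions (the segment names, latencies and the 200 ms threshold below are invented for illustration, and it uses a Java 16+ record for brevity), even something this simple shows how many transactions miss an acceptable response time and whether one subpopulation is dragging the numbers down:

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Count transactions outside a hypothetical SLA, then break the same data
// down by customer segment to look for suboptimal subpopulations.
public class ServiceLevelCheck {

    record Txn(String segment, double millis) {}

    public static void main(String[] args) {
        double acceptableMillis = 200.0; // invented threshold for "acceptable" service

        List<Txn> txns = List.of(
                new Txn("web", 120), new Txn("web", 150), new Txn("web", 90),
                new Txn("mobile", 180), new Txn("mobile", 650),
                new Txn("batch", 950));

        long breaches = txns.stream().filter(t -> t.millis() > acceptableMillis).count();
        System.out.printf("%d of %d transactions above %.0f ms%n",
                breaches, txns.size(), acceptableMillis);

        // Average latency per segment - a single aggregate number would hide this.
        Map<String, Double> avgBySegment = txns.stream()
                .collect(Collectors.groupingBy(Txn::segment,
                        Collectors.averagingDouble(Txn::millis)));
        avgBySegment.forEach((seg, avg) -> System.out.printf("%-7s avg %.0f ms%n", seg, avg));
    }
}

In a real system the transactions would come from logged data rather than a hard-coded list, but the shape of the analysis - an explicit definition of "acceptable", plus a breakdown by subpopulation - is the same.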

Technical solutions are all well and good, but good data and some insight into the nature and behaviour of production systems (& the business) will usually beat tools.
 
Junilu Lacar
Sheriff
Posts: 17644

Ben Evans wrote: The first and most important rule for evaluating tools is that you must not expect the tool to do your job for you. Different tools also do different functions...
Far more important than tools is to understand the nature of your production deployment - What does "normal" look like for your system?


Thanks, Ben. This advice reminds me of The Book of Five Rings and knowing the proper applications of weapons in different settings. Also, to paraphrase JFK, "Ask not what your tools can do for you, ask what you can do with your tools."

Follow-up question: Are there any technologies (existing or emerging) that you think have good potential of opening up new opportunities for improving or even radically changing the nature of performance tuning practices and tooling?

There's been more and more chatter about AI and Machine Learning lately. I used to work for the networking giant that recently acquired AppDynamics, and there was a lot of activity around data analytics in the security and network monitoring groups I worked with in my role as a tech lead there. The last project I worked on there did involve a lot of the "Big Wall of Data Dashboard" you mentioned. However, my focus was on good software development practices and engineering; it's hard enough to get those basics right, let alone things like performance monitoring and tuning.
 
Ben Evans
author
Posts: 67
No problem!

In terms of tooling, I would be remiss if I didn't mention jClarity (the startup I founded with Martijn Verburg & Kirk Pepperdine) and our tool Illuminate (https://www.jclarity.com/), an ML/AI-based tool that does automatic diagnosis: you install a tiny daemon and it can spot and root-cause certain types of problems without human intervention.

I believe that this approach will change the nature of performance analysis - and more tooling companies are now starting to agree with us. Which is flattering in its own way, even if it does mean we might have some competition soon!

Aside from that - I think that the focus needs to be on getting fundamentals right - sound SDLC, proper engineering practices, good observability & awareness of what constitutes "normal operation" for your systems. Only then can you build up to looking at more high-level concerns.
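For the "what constitutes normal operation" part, one low-cost way to capture a baseline (a sketch only, assuming a JDK 11+ runtime with Flight Recorder available; the recording name, durations and file name are arbitrary) is to record the system while it is behaving normally, so later incidents have something to be compared against:

import java.nio.file.Path;
import java.time.Duration;
import jdk.jfr.Configuration;
import jdk.jfr.Recording;

// Capture a low-overhead JDK Flight Recorder baseline of "normal operation".
public class BaselineRecording {
    public static void main(String[] args) throws Exception {
        Configuration config = Configuration.getConfiguration("default"); // low-overhead settings
        try (Recording recording = new Recording(config)) {
            recording.setName("steady-state-baseline");
            recording.setMaxAge(Duration.ofMinutes(10)); // keep only the most recent data
            recording.start();

            // ... let the normal production-like workload run here ...
            Thread.sleep(Duration.ofSeconds(5).toMillis());

            recording.dump(Path.of("baseline.jfr")); // inspect later in JDK Mission Control
        }
    }
}

Comparing a recording like this against one taken during an incident is a much stronger starting point for diagnosis than staring at a dashboard and guessing.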
 
Junilu Lacar
Sheriff
Posts: 17644

Ben Evans wrote:
Aside from that - I think that the focus needs to be on getting fundamentals right - sound SDLC, proper engineering practices, good observability & awareness of what constitutes "normal operation" for your systems. Only then can you build up to looking at more high-level concerns.


In case anybody is wondering, I did not pose that question to set Ben up to plug jClarity; I honestly did not know that he had any connection to that tool.

So it's the same principle as with application security: security starts with well-designed software, because poorly designed systems are inherently insecure and it's hard to make them secure. Get the basic engineering right and all the other good things will follow. Thanks again for sharing your thoughts, Ben.
 