Hi Junilu,
I'm not familiar with Spring Actuator - but AppDynamics and most other tools out there suffer from the "Big Wall of Data Dashboard" problem: the tool presents the user with a wall of data, but gives no indication of which bits of data are significant, and none of the statistical tools needed to handle that data correctly.
Data skills are not the same as software development skills (a key theme in the book), and many devs have a tendency to jump to conclusions. So while the dashboards are helpful in some circumstances (e.g. for Kanban-style "getting a feel of system behaviour", or for steady-state BAU production monitoring), in my experience they are not helpful for diagnosis in all but the most glaring of cases.
The first and most important rule for evaluating tools is that you must not expect the tool to do your job for you. Different tools also serve different functions: observability is not the same thing as diagnosis, which is not the same as alerting, which is not the same as production monitoring, which is not the same as performance regression testing.
Far more important than tools is to understand the nature of your production deployment. What does "normal" look like for your system? Are you collecting the right data so you can confirm whether or not the system is behaving normally? What does a typical acceptable customer journey look like? How many transactions / customers are currently not receiving "acceptable" or better service? What are the separate subpopulations that exist within your customer data? Are certain customers or functions standing out as suboptimal?
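To make that concrete, here is a minimal sketch (my illustration, not anything from a specific tool) of the kind of baseline analysis those questions imply: given per-request latencies tagged by customer segment, compute percentiles and an SLO breach rate per segment, so that "normal" and "suboptimal subpopulation" are defined by your own data rather than by a dashboard's defaults. The segment names and the 500 ms acceptable-service threshold are hypothetical.

```python
from statistics import quantiles

def segment_percentiles(latencies_by_segment, slo_ms=500):
    """For each customer segment, report p50/p95/p99 latency and the
    fraction of requests breaching the (assumed) acceptable threshold."""
    report = {}
    for segment, samples in latencies_by_segment.items():
        # quantiles(..., n=100) returns the 99 percentile cut points
        qs = quantiles(samples, n=100)
        breaches = sum(1 for s in samples if s > slo_ms)
        report[segment] = {
            "p50": qs[49],
            "p95": qs[94],
            "p99": qs[98],
            "breach_rate": breaches / len(samples),
        }
    return report

# Hypothetical latency samples (ms), keyed by segment:
data = {
    "retail": [100] * 95 + [600] * 5,   # 5% of requests are slow
    "wholesale": [200] * 100,           # uniformly fine
}
print(segment_percentiles(data))
```

Even something this crude answers two of the questions above directly: how many customers are not receiving acceptable service, and which subpopulation is the outlier. The point is not the code - it is that you chose the threshold and the segments from knowledge of the business, which no tool can do for you.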
Technical solutions are all well and good, but good data and some insight into the nature and behaviour of production systems (& the business) will usually beat tools.