I think that from the development perspective, there isn't a great deal of difference between a monolith and microservices. Basically, once you have properly packaged your monolith along business
domains — invoicing, finance, HR, and so on — it's just a question of creating a bunch of jar files in the proper format. Of course, common modules may go into WEB-INF/lib so all the microservices can share that code.
However, what about monitoring after it goes to production? Do you believe the IT industry currently has a good set of tools to, say,
1) deploy one or two microservices in isolation,
2) administer a subset of services independently of the others, such as performing property updates,
3) restart one or two microservices and ensure that the app is still fine,
4) restart dependent services in the proper order?
As a comparison, for a monolith running in JBoss 7, all of the above could be done via the JMX-Console. Ideally something like that would suit microservices as well.
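For reference, the kind of data the JMX-Console surfaces can also be reached programmatically from inside any JVM via the platform MBean server; a minimal sketch using only the standard java.lang.management API (no JBoss assumed):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.RuntimeMXBean;

public class JmxPeek {
    public static void main(String[] args) {
        // The platform MBean server is the same registry a JMX console browses.
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();

        System.out.println("Uptime (ms):       " + runtime.getUptime());
        System.out.println("Heap used (bytes): " + memory.getHeapMemoryUsage().getUsed());
    }
}
```

A remote console does the same thing over a JMX connector; operations like restarting a deployment go through server-specific MBeans rather than these standard platform ones.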
Actually, JBoss started out as sort of a micro-services container itself. It consisted of a bunch of components wired together according to the plan laid out in the main JBoss configuration XML file and monitored/controlled via JMX. Tomcat has a similar approach, although it never got as deeply into JMX for everyday monitoring and control.
I've not been as deeply involved in Wildfly, but it seems some of that has been smoothed over lately.
A full-on microservice-based infrastructure for the enterprise is likely to consist of multiple types and instances of service providers, and ideally some of those services will be common to more than one application system, so no, it's not really that much like drawing up a monolithic system.
Monitoring and control of a heterogeneous microservice-based system would generally be difficult using simple JMX, since there'd be multiple servers involved and some of them might not even be Java servers. They might use SNMP or something more friendly to whatever platform they ran under.
More likely you'd want to use an external central service product. I worked in one shop that used Big Brother, I use Nagios myself, I've also worked with CA-Unicenter and there are many, many other free and commercial products willing to step in and help.
Ah, OK. You are right in the context of full-fledged microservices for an enterprise. That would definitely be something different from laying out a traditional web app. I was referring to the use case of splitting up just one monolith — say, a giant EAR application — into a bunch of small, reusable microservices as a pilot project. But I think even while doing so, one needs to think at the "enterprise" level you alluded to.
On a different note, when I did the TOGAF certification recently, I was reading about "common use applications". Per the TOGAF standard:
"Development of applications used across the enterprise is preferred over the development of similar or duplicative applications which are only provided to a particular organization."
Are microservices the same as the common use applications described above? If so, would it be correct to say that microservices are one way of realizing the TOGAF goal of "common use applications"?
So, monitoring. It's a subject important enough that I wrote a whole chapter about it.
Monitoring is more complex. The key, though, is two things:
1. Collect lots of data on a per node basis
2. Aggregate it together somewhere where you can zoom out and drill down
Get all your normal stats, and get them off your machine. Push metrics into an appropriate time-series store. Get your logs in one place. Once you've done that, it will be easier to spot high-level trends, and then drill into where the problem is.
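As a toy illustration of the aggregate/zoom-out/drill-down idea, here is an in-memory stand-in for a real time-series store (the node names are made up):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MetricStore {
    // node name -> CPU samples; a real store would also keep timestamps
    private final Map<String, List<Double>> samples = new HashMap<>();

    public void push(String node, double cpu) {
        samples.computeIfAbsent(node, k -> new ArrayList<>()).add(cpu);
    }

    // "Zoom out": one number across every node in the cluster
    public double clusterAverage() {
        return samples.values().stream()
                .flatMap(List::stream)
                .mapToDouble(Double::doubleValue)
                .average().orElse(0);
    }

    // "Drill down": the latest sample for a single node
    public double latest(String node) {
        List<Double> s = samples.getOrDefault(node, List.of());
        return s.isEmpty() ? 0 : s.get(s.size() - 1);
    }
}
```

A healthy-looking cluster average can hide one pegged node, which is exactly why you want both views in the same place.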
You can't just monitor a given service in isolation either. Imagine two services, A and B, where A talks to B. You can have situations where A says it is up, and B is up, but A and B can't talk to each other — because networks! So you also need to monitor ALL downstream connections on a per-node basis.
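One common shape for this is a per-node health check that probes each downstream dependency separately, so "A is up" and "A can reach B" are distinct answers. A sketch, with hypothetical dependency names:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

public class HealthCheck {
    // dependency name -> probe; a real probe would open an actual connection
    private final Map<String, Supplier<Boolean>> downstream = new LinkedHashMap<>();

    public void register(String name, Supplier<Boolean> probe) {
        downstream.put(name, probe);
    }

    // Reports the node's own status plus one entry per downstream link,
    // so a broken network path shows up even when both ends claim to be up.
    public Map<String, Boolean> report() {
        Map<String, Boolean> result = new LinkedHashMap<>();
        result.put("self", true);
        downstream.forEach((name, probe) -> result.put(name, probe.get()));
        return result;
    }
}
```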
Then think about alerting. With a more monolithic system, you can get away with just alerting if the CPU somewhere is pegged at 100%. But when you have multiple services, each of which might be scaled onto multiple nodes, and one of them is pegged at 100%, do you get someone out of bed at 3am? Instead you'll find yourself relying more on forms of semantic monitoring, like injecting synthetic transactions into your system to mimic key user journeys, and alerting if those fail. Then you jump into action and start looking for anomalies (like nodes at 100% CPU or whatever).
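A synthetic transaction can be as simple as scripting the steps of a key user journey and flagging the journey as failed when any step breaks; a bare-bones sketch (the steps here are placeholders for real service calls):

```java
import java.util.List;
import java.util.function.Supplier;

public class SyntheticMonitor {
    // Runs each step of a journey in order; the journey fails on the
    // first step that returns false.
    public static boolean runJourney(List<Supplier<Boolean>> steps) {
        for (Supplier<Boolean> step : steps) {
            if (!step.get()) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Hypothetical "place an order" journey: login, add to cart, checkout.
        boolean ok = runJourney(List.of(
                () -> true,   // login succeeded
                () -> true,   // item added to cart
                () -> false   // checkout failed — simulated outage
        ));
        if (!ok) {
            System.out.println("ALERT: key user journey failed");
        }
    }
}
```

The point is that the alert fires on the user-visible symptom, not on any one node's CPU; the per-node stats are then what you drill into afterwards.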