Stephan van Hulst wrote:In general I would say that breaking a system up into smaller chunks makes it more understandable, not less. However, any design methodology can be taken to the extreme. Making classes or services that do "too little work" can definitely impair the reader's understanding of the system. It's key to identify parts of the system that naturally lend themselves to modularization. Check out this cool article by Martin Fowler: http://martinfowler.com/bliki/MonolithFirst.html
In the article, Martin Fowler explains that it's costly to set up microservices from scratch. I think that is the primary disadvantage of this architectural style.
That is a very good article, and I agree that "divide and conquer" is generally a good idea when building systems (and can also make maintenance easier).
With some common sense the right granularity (i.e. building micro rather than, say, nano services) should be achievable, but what about the potential availability problems? Building very granular distributed systems already poses many new challenges and requires each part to have higher availability than a single monolithic system would need. Building a system from possibly hundreds (or more) of services takes this to the next level...
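To see why each part needs higher availability, consider that when a request must pass through a chain of services, the availabilities multiply. A back-of-the-envelope sketch (the 99.9% figure and service counts below are illustrative assumptions, not measurements):

```python
# If a request depends on every service in a chain being up, the overall
# availability is the product of the individual availabilities.

def chain_availability(per_service: float, count: int) -> float:
    """Availability of a request that depends on `count` services in series."""
    return per_service ** count

# A single monolith at 99.9% uptime:
print(f"{chain_availability(0.999, 1):.3%}")    # 99.900%

# 100 serially dependent services, each at the same 99.9%:
print(f"{chain_availability(0.999, 100):.3%}")  # roughly 90.5%
```

So to keep the same user-visible availability as the monolith, each of the hundred services must individually be far more reliable than the monolith ever had to be.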
Deploying in the cloud makes this problem somewhat easier to solve, since there is very good support (at least in AWS) for configuring, at the infrastructure level, that several (or even an elastic number of) instances should be created and that failed instances should be automatically restarted, making it easier to build services with very high availability. But when running in your own data centre you have to write scripts, or use frameworks like, say, ZooKeeper (which you must configure and manage yourself), to do these things, and many IT organisations lack the needed knowledge and experience for this today...
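The kind of thing those home-grown scripts have to do can be sketched in a few lines. This is only a toy supervision loop: `is_healthy` and `restart` are hypothetical placeholders for whatever health checks and process management your environment actually provides, and a real setup must also handle flapping, timeouts, and leader election (part of why tools like ZooKeeper exist):

```python
# Minimal watchdog sketch: restart every instance that fails its health check.

def supervise(instances, is_healthy, restart):
    """Check each instance; restart the unhealthy ones and return their ids."""
    restarted = []
    for instance in instances:
        if not is_healthy(instance):
            restart(instance)
            restarted.append(instance)
    return restarted

# Toy demonstration with in-memory state instead of real servers:
state = {"svc-a": True, "svc-b": False, "svc-c": True}
restarted = supervise(
    state,
    is_healthy=lambda i: state[i],
    restart=lambda i: state.__setitem__(i, True),
)
print(restarted)            # ['svc-b']
print(all(state.values()))  # True
```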
A similar concern applies, in a way, to scalability: if the services are so small that many different ones are deployed together in the same container, scaling becomes more complex. Let's say that microservice A (which is deployed together with B, C, etc.) starts to get a lot of traffic. If you scale at the infrastructure level (which, as mentioned, is easy to do in for instance AWS), you would create more identical containers (server instances) and in effect also create more copies of services B, C, etc. (co-deployed with A), even though this is not strictly necessary. This is in contrast to keeping each service large enough to be deployed on its own server instance, where scaling is more straightforward.
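The waste from cloning the whole container is easy to quantify with a toy model (the replica demands below are made-up illustrative numbers):

```python
# Compare replica counts when services are co-deployed in one container
# versus deployed independently.

demand = {"A": 5, "B": 1, "C": 1}  # replicas each service actually needs

# Co-deployed: the whole container is cloned to satisfy the hottest service,
# so every service ends up with as many copies as the maximum demand.
co_deployed = {svc: max(demand.values()) for svc in demand}

# Independent deployment: each service scales on its own.
independent = dict(demand)

print(co_deployed)   # {'A': 5, 'B': 5, 'C': 5}
print(independent)   # {'A': 5, 'B': 1, 'C': 1}
print(sum(co_deployed.values()) - sum(independent.values()))  # 8 surplus replicas
```

With only three services the overhead is eight idle replicas; with dozens of co-deployed services the surplus grows accordingly.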
Once again, cloud providers may (or perhaps already do?) offer advanced containers that can dynamically create additional containers containing only the "hot" microservices as needed, but in your own data centre this will again require "advanced plumbing" that is not trivial to set up and manage.
This all makes me feel that a good cloud platform is, if not a prerequisite, at least a big advantage for using microservices optimally!
This thesis is supported by the fact that Microsoft's new "microservice"-based architecture is heavily promoted towards Azure...