I think the way microservices have evolved over the years, the intent is to make every service more or less independent, so that an issue in one microservice doesn't affect the other parts of the overall application. Having said that, there's no one answer to your question around database architecture, as it also depends on which DB service is being used, what the interdependencies between services are, whether data is stored in a single region or multiple regions, and whether there are any specific data-localization regulations which need to be adhered to for the microservices.

In general, the most common pattern I have started to see is that microservices typically tend to use simple data storage and access patterns, for which NoSQL offerings like AWS DynamoDB are a good fit. If that's the case, then each microservice can have its own independent table, and that keeps proper isolation across the board. On the other hand, if there's a need for a relational database, like SQL Server, then most of the time I see customers use the same database but with different tables/schemas, to again create logical boundaries between microservices.

So net-net, there's no one single pattern which fits the bill, and it depends on the various factors I mentioned earlier.
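To make the "each service owns its own datastore" idea concrete, here's a minimal sketch. It uses sqlite3 purely as a stand-in for per-service DynamoDB tables or per-service schemas, and the service names are made up for illustration:

```python
import sqlite3

def open_service_db(service_name: str) -> sqlite3.Connection:
    """Each service gets its own database file: hard physical isolation."""
    return sqlite3.connect(f"{service_name}.db")

# The orders service owns and manages only its own tables.
orders = open_service_db("orders")
orders.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, total REAL)")
orders.execute("INSERT INTO orders (total) VALUES (19.99)")

# The billing service has its own datastore; it cannot query the
# orders table at all. Any data it needs from orders has to come
# through that service's API, not through a cross-database join.
billing = open_service_db("billing")
billing.execute("CREATE TABLE IF NOT EXISTS invoices (id INTEGER PRIMARY KEY, amount REAL)")
```

The point is that the isolation is physical, not just a naming convention, which is what a table-per-service setup in DynamoDB gives you as well.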
Hope this helps, and I look forward to your feedback on the book :-)
Speaking from my own experience, I prefer central databases that the services can share. I don't really want, for example, to bloat a Docker image with an embedded full-scale DBMS just to support SQL persistence. I'm not even that keen on linking in a separate image holding a DBMS. My primary database server has an entire VM to itself, and it's backed up and secured as an entity in its own right.
Large-scale database functions are best managed by a DBA, and most DBAs don't really want to go hop-scotching around the virtual site dealing with per-application DBMS instances. Excluding, perhaps, specialised DBMS instances like, say, Oracle Financials.
DBMSs can also be a nuisance when containerized. In Docker, for example, there's the static (base) image and the active (operating) container, and while they start out identical, a running container with an embedded database causes them to diverge quickly. I have run into situations where the Docker server went down, the operating containers hadn't been snapshotted into images in their own right, and a lot of work was lost. So I definitely have a preference for keeping my data both out of the container world and on discrete external storage. That is, something I can mount as a filesystem and run offline utilities on regardless of the state of the DBMS container.
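For anyone who does run a DBMS in a container anyway, the externalized-data approach I'm describing looks roughly like this in Compose terms. This is only a sketch: it assumes the stock postgres image (which keeps its data under /var/lib/postgresql/data), and the host path is illustrative:

```yaml
# Sketch: keep the database's data directory on the host, outside the
# container's writable layer, so it survives the container and can be
# backed up or inspected offline. Paths/names are illustrative.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder, not a real secret
    volumes:
      # Bind-mount a host directory into the image's data location.
      - /srv/pgdata:/var/lib/postgresql/data
```

With the data on a plain host directory, losing the container (or the whole Docker server) doesn't lose the database, and offline utilities can work on the files directly.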
Hey Tim -
I think we are saying the same thing, but using different words :-)
I fully agree that we don't need a separate DB per microservice, and in fact the methodology of including the DB in your Docker image itself is an anti-pattern to me. So, like I said earlier, the exact level of segregation will also depend on the use case and the target database platform. With DynamoDB there's no concept of a centralized DB, whereas in the case of an RDBMS it might be a different story. In fact, AWS has a pretty detailed whitepaper on this topic, which I am sure will be pretty useful to refer to in this context: