Recent posts by John Carnell

Hi Mark,

Great question. Microservices are very well suited for batch processing. Since microservices are small, independent, and easily deployable chunks of code, you can scale up to a large number of instances to process records coming from a batch. A few things to keep in mind:

1.  With Java-based microservices you can still leverage multi-threading to take advantage of multiple CPUs on your server, so you still get a lot of mileage out of the underlying machine (see the sketch after this list). My team works exclusively in the integration and back-end layer, so we often have to be creative with threading and queuing because our data consumption patterns can change wildly.

2.  If you are working in a cloud (e.g. AWS), you can spin up microservice instances on different classes of machines based on the workload you are processing. For instance, if you do relatively light batch processing with service X during the day and get your heavy volumes in the evening, it is very easy in a microservices environment to spin up additional service instances, even on machine classes with more CPU or higher I/O throughput. We use this approach in a slightly different manner: we get most of our volume during the day (we are message-based, not batch), so we spin up extra machines in our dev and test environments, then after hours we spin down to a very small footprint so that we are not burning dollars on unused machine capacity.

3.  Microservices are extremely well suited to horizontal scaling because their small footprint means instances can be spun up very quickly. If you are using Docker, additional service instances can be spun up in seconds to handle load.

4.  Batch processing can put a lot of pressure on your security infrastructure, but most security platforms allow you to perform near and far caching of application credentials so that you do not have to pound on your security servers. For instance, Apache Shiro will let you maintain a cache and a token after the initial authentication so that your services do not have to authenticate credentials on every request. One of the lessons I have learned with microservices is that caching is king, as it can greatly reduce the load on many of your services.
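
To illustrate point 1, here is a rough sketch of what I mean by leveraging multi-threading inside a single service instance. The class and method names are just illustrative; this is not code from the book:

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class BatchRecordProcessor {

        // Size the worker pool to the CPUs available on the host or container.
        private final ExecutorService workers =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        public void process(List<String> records) {
            // Fan the batch records out across the pool.
            for (String record : records) {
                workers.submit(() -> handle(record));
            }
        }

        private void handle(String record) {
            // Placeholder for the real per-record work: parse, validate, write downstream.
            System.out.println("processed " + record);
        }

        public void shutdown() {
            workers.shutdown();
        }
    }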


I hope I answered your questions.

    Thanks,
         John
6 years ago
Hi guys,

I was a J2EE developer for years and have a lot of thoughts on this space.

1.  Microservices by themselves can make an application more resilient. J2EE applications often became extremely bloated and were deployed as one single artifact to a cluster of servers; in the worst cases, I would see multiple applications deployed within the same container. A microservice is meant to be extremely small in size, and the container is often packaged right inside the microservice. In my experience, launching a microservice takes seconds versus minutes in some of the more "enterprise" containers.

2.  Service discovery makes microservices extremely resilient. Most service discovery engines include health-check monitoring to detect when a service instance becomes unresponsive and start routing traffic to other service instances (a minimal health-check sketch follows this list). If you combine this with container deployment technologies like Docker Swarm, Kubernetes, or even Amazon's Auto Scaling groups, a new service instance can be started automatically whenever a service fails its health check. In my organization we heavily leverage AWS Auto Scaling groups, so services are automatically restarted if they go down, and on several occasions we have automatically scaled up service instances to catch up on load after another part of our application came back up.

3.  While microservice applications are complex because of all of the distributed components involved, they are naturally resilient because the failure of one service is not likely to cause problems throughout the application. We actually use Netflix's Chaos Monkey to randomly kill our service instances to help identify and drive out dependencies that could cause entire outages.
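
As a rough illustration of the health checks in point 2, here is a minimal Spring Boot Actuator health indicator sketch. The class name and the downstream check are placeholders, not code from the book; a discovery engine or orchestrator would poll the health endpoint that Actuator builds from indicators like this:

    import org.springframework.boot.actuate.health.Health;
    import org.springframework.boot.actuate.health.HealthIndicator;
    import org.springframework.stereotype.Component;

    @Component
    public class DownstreamHealthIndicator implements HealthIndicator {

        @Override
        public Health health() {
            // Placeholder check; a real indicator might ping a database or a
            // downstream service the instance cannot live without.
            boolean reachable = pingDownstream();
            return reachable
                    ? Health.up().build()
                    : Health.down().withDetail("downstream", "unreachable").build();
        }

        private boolean pingDownstream() {
            return true;
        }
    }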

I hope that answers your questions.

    Thanks,
        John
6 years ago
Hi guys,

Thanks so much for having me on the Java Ranch. I hope I answered everyone's questions. I appreciate the kind words on the book. It took almost two years and a couple of false starts, but whenever I heard from a reader during the writing process, it would make my day. If anyone has questions about the book, please do not hesitate to drop me a line at john_carnell@yahoo.com. Also, I will occasionally be posting new material to my blog at Thoughtmechanix.com.


     Thanks,
         John
6 years ago
Hi Divya,

In the book I focus on basic microservice patterns (service decomposition, service discovery, routing, security), but I always provide code examples for each of the patterns. I tried to make this book a good blend of engineering and architectural points of view. Frankly, to be successful with a microservice application you need to look at microservices from three perspectives: the architect's (how do I decompose a microservice to the appropriate granularity, and how do my services communicate), the developer's (how do I build my applications to scale and be resilient), and DevOps (how do I build a pipeline that ensures the microservices are built, tested, packaged, and deployed in a repeatable, consistent, and immutable fashion).

  Thanks,
      John
6 years ago
Hi Takaya,

This is a question I get a lot.  The trick is not to bite off big pieces at first.   A few thoughts.

1.  Look for natural boundaries in your data model and strong relationships between the entities in your data model. Microservices tend to follow the natural layout of your data entities.
2.  Pay attention to granularity. It's better to start with a coarser-grained service that does too much and refactor it later than to have to consolidate services that were sliced too thin.
3.  Make sure your microservices do not just become simple CRUD services. Your services should have business logic in them, and if you find yourself with very simple CRUD services, you're too fine-grained in your approach.
4.  Use the URL to express relationships between data. For example, if you have a route /customers/{customer-id}/addresses/{address-id}, you naturally start defining a vocabulary. It also gives you a way of migrating your services: you might have customer information and address information under the same data model that your service talks to today, but by masking the data behind a URL you can always refactor the address data into a separate microservice later (if it warrants it), while having the original URL simply forward the request to the address service (see the sketch after this list).
5.  I personally like to put a version number for my services right in the URL. It makes the version very visible when invoking the service, and if you do make a breaking change between versions, you can still allow clients to call the old routes until they have successfully migrated.
6.  I mentioned there is a microservices patterns book in the works at Manning by Chris Richardson. I have gotten an early version of the book. It is a great book and goes into far more design patterns than I can cover here.
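
To make points 4 and 5 concrete, here is a small Spring sketch of a versioned, nested route (the names are illustrative only, not from the book). Even if the address data later moves into its own microservice, the route, and therefore the clients, can stay the same:

    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class CustomerAddressController {

        // Version number in the URL plus the customer -> address relationship.
        @GetMapping("/v1/customers/{customerId}/addresses/{addressId}")
        public String getAddress(@PathVariable("customerId") String customerId,
                                 @PathVariable("addressId") String addressId) {
            // Today this might read from the customer service's own data store;
            // later it could simply forward the request to a dedicated address service.
            return "address " + addressId + " for customer " + customerId;
        }
    }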

   Thanks,
       John
6 years ago
Hi Saket,  

Thank you for the kind words. You can build and run the services without Docker; however, you have to manually start each service and also make sure you have a Postgres database running. That being said, I tried to build all of the chapters so you could start all of the services without having to do everything I just described. That's why I used Docker and Docker Compose, so that all of the services could be started locally. You can run these exercises on a dev box. All of the instructions you need to build and run the code for each chapter can be found in the appendix and the READMEs in each individual chapter.

If you are space constrained on your work laptop, I would recommend you get an Amazon AWS account. When I was writing the book, I used the one-year free tier to run pretty much all of the infrastructure. I know that adds even more complexity if you are just beginning. An alternative is to run some of the services like Postgres, Kafka, and Redis on a dev server and then run and test your services locally. Frankly, that is one of the downsides of a microservice architecture: if you want to run the app locally, you end up running dozens of containers. At my company we make heavy use of unit and integration tests with mocks to build and test services locally. As long as you know what your remote service calls are going to return, it's usually pretty easy to mock out services using mock-server.
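
For example, stubbing a remote customer service with MockServer looks roughly like the sketch below; the port, path, and payload are made up for illustration:

    import static org.mockserver.model.HttpRequest.request;
    import static org.mockserver.model.HttpResponse.response;

    import org.mockserver.integration.ClientAndServer;

    public class CustomerServiceStub {

        public static void main(String[] args) {
            // Start an embedded MockServer and stub the remote call the service
            // under test would normally make.
            ClientAndServer mockServer = ClientAndServer.startClientAndServer(1080);

            mockServer.when(request()
                            .withMethod("GET")
                            .withPath("/v1/customers/42"))
                      .respond(response()
                            .withStatusCode(200)
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"customerId\":\"42\",\"name\":\"Jane Doe\"}"));

            // ... run local tests against http://localhost:1080 ...

            mockServer.stop();
        }
    }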

I hope I answered your question.

   Thanks,
       John
6 years ago
Hi Jesper,

I don't think Spring Boot is obsolete at all. I know several companies (including mine) that build a majority of their services in Spring Boot or are adopting it. Also, you have to remember that in the end Spring Boot just builds down to an executable jar or war; you still need a server or container to run the service. For our application we have Spring Boot-based services deployed both on standalone AWS servers and in Docker containers. Since we let the teams choose their own technologies, some teams have chosen to deploy their Spring Boot-based applications using Docker.

In my book, I build all of the services and the Spring Cloud services using Spring Boot and then deploy the Spring Boot-based services using Docker. In Chapter 10, I deploy all of the Spring Boot-based services that I built into Docker containers to Amazon ECS (Amazon's Docker container service). I do think microservices will become a dominant architecture. The only real question I see is whether the deployment model of a microservice will change: will development teams continue with a server/container model, or will more companies deploy their microservices as Functions-as-a-Service (e.g. Lambda)? The other wildcard in the deployment mix is unikernels. Last quarter, during our semi-annual hackathon, I actually worked on a team that deployed a Spring Boot service using a unikernel. A unikernel is a single memory address space with almost no operating system. Unikernels start incredibly fast, and because there is almost no operating system present, they have a very small attack surface from a security perspective.
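
For what it's worth, the whole service boils down to something like this minimal Spring Boot entry point (the class name is just an example); the resulting executable jar is the same artifact whether it runs on a plain server or inside a Docker container:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;

    @SpringBootApplication
    public class ExampleServiceApplication {

        public static void main(String[] args) {
            // Boots the embedded container and the service; the Spring Boot build
            // plugin packages everything as a single executable jar.
            SpringApplication.run(ExampleServiceApplication.class, args);
        }
    }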

   Thanks,
       John  
6 years ago
Paul and Paul :-)

Yes, I am talking about a cookie with some sort of session ID that is managed in a distributed cache or a database.
My book focuses on building the services and the surrounding technologies (like service discovery, routing, and messaging) that the application uses to consume the services.

Thanks.  Hope that was useful.

   - John
6 years ago
Hi German,

While my background for the last couple of years is cloud-based development, I have a pretty heavy background in traditional software development projects (my last job was eight years of in-house development and architecture at a financial services company). All of the material I write about in my book can be applied in the context of building applications that are going to be deployed within a corporate data center. In Chapter 10 of the book, I do demonstrate how to pick up and deploy the application to AWS, but there is nothing in the book that requires you to use a cloud-based solution.

I hope that helps.

   Thanks,
      John
6 years ago
Hi Paul,

No disagreements there on transaction management. That is the next step after you have learned the basics of the technologies. Frankly, one of the challenges I have seen with people learning the microservices approach is that while the act of writing a service is easy, building a full-blown application using truly distributed services is difficult. It's kind of like what I used to tell people I worked with when Ruby on Rails came out: yes, you can build a simple blog in 10 minutes, but real applications require real work. Thanks for the feedback on blog material. I just recently relaunched my blog (thoughtmechanix.com) and have been starting a list of material to write about :-).

    Thanks,
       John
6 years ago
Hi Claude,

Thanks for posting.   Let me see if I can answer your questions in order.

1.  Spring supports JSON serialization/deserialization through the Jackson serializer. Spring will automatically serialize incoming and outgoing request payloads to and from a POJO if you use the @RestController annotation when defining your service routes (see the sketch after this list). For most of my interfaces, I never have to go beyond the out-of-the-box serialization. I agree with you that JSON manipulation in Java can be a pain, especially since dynamic languages like Groovy and Clojure make it so easy.

2.  In a microservice application, the GUI is usually deployed as a completely separate application in a separate set of containers. The GUI should be communicating with the services only through their JSON/REST interfaces. One of the questions I often encounter is whether the GUI should handle orchestration across multiple services. I usually like to keep my microservice invocations from the GUI simple: if I have to invoke multiple services to complete a transaction, I will wrap the entire process with another microservice so that the GUI only has one service to call.

3.  Service discovery - Microservices do not have the concept of a SOAP-style service catalog. Instead, tools like Eureka allow for service discovery by letting individual service instances register themselves with a service discovery engine. Client libraries like Feign and Ribbon then allow you to query the service discovery engine using a service-name key. Service discovery lets you find the location of a service, but it tends not to provide the "cataloging" features advertised by SOAP-era UDDI registries.

4.  MTOM - We use messaging quite a bit in our microservices application, with Kafka as the backbone for the messages. Usually we use messages to notify of important system events and data state changes. Messaging is an important part of a microservice architecture because it allows us to run complex processes independently of one another. For example, one of the major pieces of functionality I am working on right now is our upgrade functionality for our customers. We use messaging heavily to convey the state of the upgrades for the customer and use Kafka, AWS SQS, and AWS SWF as the process choreographer.
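
To illustrate point 1, a bare-bones controller might look like the sketch below; the POJO is serialized to JSON by Jackson without any manual mapping code. The class and route names here are illustrative, not from the book:

    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class OrganizationController {

        @GetMapping("/v1/organizations/{orgId}")
        public Organization getOrganization(@PathVariable("orgId") String orgId) {
            // The returned POJO is converted to JSON automatically by Jackson;
            // no manual serialization code is needed for simple payloads.
            return new Organization(orgId, "Example Org");
        }
    }

    class Organization {
        private final String id;
        private final String name;

        Organization(String id, String name) {
            this.id = id;
            this.name = name;
        }

        public String getId() { return id; }
        public String getName() { return name; }
    }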

My book covers most of these topics and provides hands-on examples: Chapter 2 deals with microservice development, Chapter 4 with service discovery, and Chapter 8 with messaging.

I hope that answers your questions.

    Thanks,
        John
6 years ago
Hi Pranav,

Spring Cloud does not provide anything out of the box for handling transaction management across multiple microservices. In a microservice environment you are usually going to have to manage your own transactions and use a routing-slip pattern or a state database to keep track of your data changes. If an exception occurs, you have to orchestrate unwinding those changes across the services yourself.

There are good patterns for modeling and building this out. However, I felt it was out of scope for this book because I was focused on getting started with Spring microservices and using Spring Cloud to handle many of the operational capabilities needed to get your application up and running.
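
To give a flavor of what that unwinding can look like in code, here is a very simplified compensation sketch (the service calls are stubbed out, and this is my own illustration, not a pattern implementation from the book): each completed step registers a compensating action that is replayed in reverse order if a later step fails.

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class OrderWorkflow {

        public void placeOrder() {
            // Compensating actions pile up as steps succeed; on failure they are
            // replayed last-in-first-out to undo the earlier changes.
            Deque<Runnable> compensations = new ArrayDeque<>();
            try {
                reserveInventory();
                compensations.push(this::releaseInventory);

                chargePayment();
                compensations.push(this::refundPayment);

                scheduleShipment();
            } catch (RuntimeException e) {
                while (!compensations.isEmpty()) {
                    compensations.pop().run();
                }
                throw e;
            }
        }

        private void reserveInventory()  { /* call inventory service */ }
        private void releaseInventory()  { /* compensating call */ }
        private void chargePayment()     { /* call payment service */ }
        private void refundPayment()     { /* compensating call */ }
        private void scheduleShipment()  { /* call shipping service */ }
    }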

  Thanks,
      John
6 years ago
Hi Mohammed,

Thank you for the question. I personally tried to keep the book very engineering focused. Chapter 2 of the book introduces the basic concepts of microservices and how to think about them at a basic level. I then tried to cover topics like service discovery, service routing, client resiliency, event-based services, and log aggregation. What I did not cover is a lot of the more conceptual architectural topics like data replication, transaction and state management, and service orchestration. I think those are important topics, but they fell outside the immediate task of getting developers jump-started with Spring, Spring Cloud, and Spring microservices.

If you are looking for a patterns-based book, I highly recommend you take a look at Chris Richardson's MEAP from Manning, Microservices Patterns. He covers more of the conceptual topics, but does not get into as many implementation examples.

   Thanks,
      John
6 years ago
Hi Paul,

I do not cover session management in my book other than dealing with OAuth in Chapter 7. With a microservice architecture, you are usually striving to carry as little session information as possible as you make your service calls. Context is usually passed in HTTP header values or in a token of some form (OAuth with JWT (JSON Web Tokens) allows you to encode custom context in the token). Personally, in the application I work on we carry a few basic pieces of information in our HTTP headers (an organization ID (since we are multi-tenant), a user ID, and then the OAuth token). We then use a cache for storing any additional context. We do very little session management beyond that.

That's probably one of the bigger mind shifts in moving from being Java J2EE developers to multi-language microservice developers. State is not carried around in the container's session. Instead, it is captured as state within the data layer, and we use caching to help minimize the number of calls. Let me know if that answered your question.
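
As a rough sketch of pulling that context off the incoming request in a Spring service, something like the filter below works. The header names are made up for illustration, and a real application would stash the values in a request-scoped or thread-local context and propagate them on outbound calls:

    import java.io.IOException;

    import javax.servlet.FilterChain;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    import org.springframework.stereotype.Component;
    import org.springframework.web.filter.OncePerRequestFilter;

    @Component
    public class ContextFilter extends OncePerRequestFilter {

        @Override
        protected void doFilterInternal(HttpServletRequest request,
                                        HttpServletResponse response,
                                        FilterChain filterChain)
                throws ServletException, IOException {
            // Hypothetical header names; every application will define its own.
            request.setAttribute("contextOrgId", request.getHeader("x-organization-id"));
            request.setAttribute("contextUserId", request.getHeader("x-user-id"));
            request.setAttribute("contextAuthToken", request.getHeader("Authorization"));

            filterChain.doFilter(request, response);
        }
    }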

  Thanks,
     John
6 years ago
Hi Krystian,

I look at it like this. If you are already a heavy Spring shop, Spring Cloud makes a great deal of sense. If you are going to use other frameworks, the key thing you have to look at is how cleanly your Spring-based services integrate with the operational capabilities of the other framework. If you're evaluating Lightbend's Lagom, the question should be how much work your development teams need to do in order to use Lagom's service discovery, service routing, and logging capabilities.

The same question should be examined if you are writing non-Spring microservices. For example, if your company is heavily into Node.js, Python, etc., does Spring Cloud make sense if you need to do a bunch of work to integrate with, let's say, Spring Cloud Config? A well-thought-out microservice infrastructure will allow you to plug your services in regardless of the language you are building in. I like Spring Cloud for Spring services because it all just works well out of the box. I might not choose Spring Cloud if the languages I am using integrate better with other platforms.

I hope that answers your questions.

   Thanks,
       John
6 years ago