Can you please provide your suggestions on the below queries?
1) Which one is a better approach:
a single container in a pod, with a single pod per node, or
multiple containers in a pod, with a single pod per node, or
a single container in a pod, with multiple pods per node, or
multiple containers in a pod, with multiple pods per node?
Most of the references go with a single container in a pod, with the pod on its own dedicated node. But from the standpoint of cost, reliability, and efficiency, which approach would be best? If there are use cases for each paradigm, it would help me understand better.
2) Who does the actual load balancing of requests: the API Gateway, Kubernetes, or the ELB? Consider a scenario where microservices are deployed in an EKS (Kubernetes) environment with an API Gateway and an ELB. The API Gateway usually handles all requests, since it acts as a facade, and also does load balancing, authentication and authorisation, API metering, logging, etc., with HA assured. But Kubernetes also does load balancing. How do these three differ in how they handle requests?
3) How do you handle session management, or stateful sessions, in a Kubernetes environment when an in-memory engine like Redis is not available?
4) Where should service-related monitoring, metering, logging, configuration, security, etc. live: at the API Gateway or in a sidecar proxy?
These are a few of the confusing and overlapping components in the cloud. Clarifications on these would be much appreciated.
Thanks for your questions! I will try to give you brief answers, and I will flag where deeper research on your side is needed, especially where the questions don't carry enough context.
#1: It depends on your application architecture. The most common approach is one container per pod and multiple pods per node; however, this assumes your app follows a micro/macroservices architecture or something close to it. Other approaches, such as a single container per pod per node, can be useful when you have a monolithic app. I rarely see a solid use case for multiple containers in a single pod: if you exclude the sidecar pattern, there are not many use cases left for a pod with multiple containers.
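To make the common case concrete, here is a minimal sketch of a Deployment with one container per pod and several replicas that the scheduler can spread across nodes. The service name and image are placeholders I made up for illustration:

```yaml
# Illustrative only: one container per pod, multiple pods per node / across nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api              # hypothetical service name
spec:
  replicas: 3                   # multiple pods; the scheduler spreads them over nodes
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
      - name: orders-api        # a single container in each pod
        image: example.com/orders-api:1.0   # placeholder image
```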
#2: Load balancing happens at different layers; it depends on your network architecture and which types of load balancers you use. API Gateways are not primarily meant to do load balancing. In a typical AWS architecture, you go with either an NLB (layer 4) or an ALB (layer 7). Kubernetes itself has its own logic to route requests to the right Kubernetes Service, but we can't consider that a full load balancer.
You need either an AWS load balancer, another external load balancer, or a load balancer deployed inside your cluster.
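To show how the layers fit together on EKS: a Service of type LoadBalancer asks AWS to provision the external load balancer, while Kubernetes only routes traffic from the Service to matching pods. The annotation below is the real in-tree AWS provider annotation for requesting an NLB (newer setups use the AWS Load Balancer Controller instead); names and ports are illustrative:

```yaml
# Illustrative Service: AWS provisions the external LB, Kubernetes routes to pods.
apiVersion: v1
kind: Service
metadata:
  name: orders-api              # hypothetical service name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: orders-api             # pods this Service routes to
  ports:
  - port: 80                    # LB-facing port
    targetPort: 8080            # assumed container port
```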
#3: Kubernetes does not have a built-in option for session state; you need to deploy a service for that yourself, either Redis pods or a similar solution.
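As a minimal sketch of running Redis yourself as the session store (names are placeholders; a production setup would use a StatefulSet with persistence, auth, and a pinned image version):

```yaml
# Illustrative in-cluster Redis for session state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: session-redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: session-redis
  template:
    metadata:
      labels:
        app: session-redis
    spec:
      containers:
      - name: redis
        image: redis:7          # official image; pin an exact version in practice
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: session-redis           # apps reach it at session-redis:6379
spec:
  selector:
    app: session-redis
  ports:
  - port: 6379
```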
#4: Ideally, you delegate these concerns to a service mesh like Istio, which handles them in sidecar proxies alongside your containers.
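For example, with Istio installed in the cluster, labeling a namespace enables automatic sidecar injection, so every pod created in it gets an Envoy proxy that takes care of metrics, mTLS, and traffic policy without app changes; the namespace name is a placeholder:

```yaml
# With Istio installed, this label turns on automatic Envoy sidecar
# injection for all pods created in the namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: orders                  # hypothetical namespace
  labels:
    istio-injection: enabled
```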