
What are Services in the context of OpenShift?

We have all heard about load balancers and how important they are for application continuity. So, before I talk about Services in the context of OpenShift, let's understand load balancing briefly.

A load balancer acts as an intermediary between clients (devices, applications, networks) and backends (devices, applications, networks). This middleman receives requests and responses and decides where to forward the data, picking the server best able to handle each request (say, server A over server B).

Note: A load balancer can also be used for client-to-client or server-to-server communication.

In OpenShift, this mechanism goes by the name of Services. A Service acts as an internal load balancer: it tracks a set of replicated pods and proxies connections to them, internally deciding which pod to send the data to.

Remember that pods have their own IP addresses, and Services have their own IP addresses as well. A Service forwards data to the pod's listening port. Services are defined as REST objects, the same mechanism used for pods. Let's check out the example below.

Note: The comments after the # signs describe each field.

Example (template for defining a Service for the pod):

apiVersion: v1
kind: Service                # The object's kind is declared as Service
metadata:
  name: my-cute-registry     # Name used to map environment variables to the service IP in the same namespace
spec:
  selector:                  # The Service targets pods carrying this label
    docker-registry: default
  portalIP: 189.27.116.122   # Virtual IP of the service
  ports:
  - nodePort: 0
    port: 4530               # Port the Service listens on
    protocol: TCP
    targetPort: 4780         # Port on the backing pods to which the Service forwards connections
Those are the basics of Services; if you need to dive deeper into this topic, please follow the Red Hat OpenShift documentation.
