Fig 1: What deployment figuratively looks like, Source: https://entwickler.de/programmierung/deployment-automatisch-praktisch-gut-001

Step 0: Get a Project

I created a GitHub repository, person_grpc [1]: a Go package that serves as a simple person database, handling get / set person commands. Server and client communicate via gRPC (built on top of TCP).
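
To give a flavour of the interface, here is a hypothetical sketch of what such a service definition could look like in protobuf. The message and RPC names below are my own illustration; the actual definitions live in the repository [1]:

    // Hypothetical service definition for a person database over gRPC.
    syntax = "proto3";

    message Person {
      string name = 1;
      int32 age = 2;
    }

    message GetPersonRequest {
      string name = 1;
    }

    message SetPersonResponse {
      bool ok = 1;
    }

    service PersonService {
      // Look up a person by name.
      rpc GetPerson(GetPersonRequest) returns (Person);
      // Insert or update a person.
      rpc SetPerson(Person) returns (SetPersonResponse);
    }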

Step 1: Go Project => Docker Image

Requirements: Docker [2] (e.g., via Docker Desktop) is installed on your computer.

A Docker image ( = a program) is basically a blueprint for Docker containers ( = running processes).

Fig 2: Docker Images and Containers, Source: https://jfrog.com/knowledge-base/a-beginners-guide-to-understanding-and-building-docker-images/

Building a Docker Image

A Dockerfile [3] is used for building an image from a specific implementation. It consists of commands such as the following.

FROM <image-version> [AS <image-alias>]:

  • A parent image is fetched and used as the base for the following commands.
  • Until the next FROM command, all subsequent commands operate on this stage of the target image.

WORKDIR <path>:

  • The current working directory is changed to <path> inside the container.
  • It applies to the stage opened by the nearest preceding FROM command.

COPY [--from=<image-alias>|<image-version>] <src-path> <dest-path>:

  • It copies the specified file / folder from the source directory / image to the destination path.
  • Like WORKDIR, it applies to the stage opened by the preceding FROM command.
  • If an image alias is given via --from=<image-alias>, the source path is looked up in that image.
  • If the source image is used only once, its version can be given directly instead of an alias.

RUN <build-cmd>: When building the Docker image, this command is executed inside the target image.

ENTRYPOINT | CMD ["<run-cmd-arg1>", "<run-cmd-arg2>", ...]: When the Docker container starts, the combined command is executed. ENTRYPOINT fixes the part of the command that arguments to docker run cannot override, whereas CMD only provides overridable defaults.

An example of the expected behaviour: a program ./main inside the target image requires exactly one positional argument, $port:

ENTRYPOINT ["/bin/sh", "-c", "./main"]; CMD ["8080"]

  • docker run -it <image> => $port is set to 8080
  • docker run -it <image> 6379 => $port is set to 6379

ENTRYPOINT ["/bin/sh", "-c"]; CMD ["./main", "8080"]

  • docker run -it <image> => $port is set to 8080, ./main is running
  • docker run -it <image> ./main => ./main fails to start because no positional argument is given
  • docker run -it <image> ./main 6379 => $port is set to 6379, ./main is running

An example Dockerfile (with build and run stages):

Fig 3: Dockerfile (with build and run stages)
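
In case the figure is hard to read, here is a minimal sketch of such a two-stage Dockerfile. The Go base image, the source layout, and the output paths are my assumptions; only the final ./server -f config.yaml command is taken from the inspection output shown below:

    # Build stage: compile a static Go binary (base image and paths are assumptions).
    FROM golang:1.20 AS build
    WORKDIR /src
    COPY . .
    # CGO_ENABLED=0 yields a statically linked binary that also runs on alpine.
    RUN mkdir -p /out && CGO_ENABLED=0 go build -o /out/server ./server

    # Run stage: copy only the binary and its config into a small image.
    FROM alpine:latest
    WORKDIR /app
    COPY --from=build /out/server ./server
    COPY config.yaml ./config.yaml
    ENTRYPOINT ["./server", "-f", "config.yaml"]

The two-stage split keeps the final image small: the Go toolchain is only needed at build time, so it never ends up in the image that is shipped.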

A Docker image can be built using docker build . -t <image-name>. If you have multiple images in the same project, you can define more than one Dockerfile and select one using the flag -f <custom-path-to-Dockerfile>.

Running & Debugging a Docker Container

To run the Docker container in the background and expose it on the local host, enter: docker run [--rm] -d -p [<host>:]<port>:<intern-port> <image-name>:latest (a concrete example follows the list). This does the following:

  • --rm: Remove the container once it stops
  • -d: Detach the container process from the terminal and print the container id instead
  • -p [<host>:]<port>:<intern-port>: Expose the application, which actually listens on <intern-port> inside the container, on <port> of the host (<host> if specified, otherwise all interfaces)
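
For example, assuming the image is tagged person-server and the server listens on port 50051 (as the later figures show), a minimal invocation would be:

    # Run detached, clean up on stop, map host port 50051 to the container's port 50051.
    docker run --rm -d -p 50051:50051 person-server:latest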

docker inspect <container-id> prints all details of a container:

Fig 4: Inspecting a container
  • Running command is ./server -f config.yaml (see "Path" and "Args").
  • "50051/tcp" is successfully exposed in address: localhost:50051 (see "PortBindings").

docker run -it <image-name> /bin/sh opens a shell inside a new Docker container (if the image defines an ENTRYPOINT, override it with --entrypoint /bin/sh):

Fig 5: Inside the Docker container
  • config.yaml is copied successfully, so hosts and ports are configured correctly at each build.
  • ./server is there too; running ./server -f config.yaml inside this terminal has the same effect as running the container.

docker ps shows all the active Docker containers. With the -a flag, it shows stopped containers as well.

Fig 6: Showing all active Docker containers
  • The Docker container accepts TCP connections on port 50051 (on the wildcard host 0.0.0.0)

Let’s run the client:

Fig 7: Running the client

docker logs <container-id> prints all the messages logged by the server. With the -f flag, it follows the logs until terminated.

Fig 8: Logging inside Docker container

Now we’ll see how to deploy it on a local Kubernetes cluster [6].

Step 2: Docker Image => Kubernetes Deployment

Requirements: Minikube and Helm are installed on your computer.

Helm is an essential package manager for Kubernetes: it automates the lifecycle of Kubernetes applications [4] and includes many advanced features, as listed in [5]. It is, however, out of the scope of this article.

First, let’s start a Minikube cluster using minikube start:

Fig 9: Starting a local cluster

Creating a Kubernetes Deployment

Since the local cluster doesn’t know about the Docker image, let’s load the image into the cluster using minikube image load <image> (this may take a few minutes, depending on how big your image is).
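
Assuming the image tag from before, the cluster setup so far boils down to two commands:

    # Start a local cluster, then make the locally built image available inside it.
    minikube start
    minikube image load person-server:latest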

The Docker image is now available inside the cluster, but nothing is deployed yet. The Kubernetes equivalent of a Docker image is called a deployment.

To create a new deployment, we create a new <image>.yaml inside a new directory called deploy with the following content:

Fig 10: Deployment configuration

I copy-pasted it from the docs. Differences (a full sketch of the file follows the list):

  • imagePullPolicy: Never: We already loaded the image into the cluster manually. Setting it to Never guarantees that the kubelet won’t attempt to download it automatically.
  • replicas: 1: We want to keep it very simple. Only one pod (the Kubernetes equivalent of a Docker container) is needed.
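
Putting it together, here is a minimal sketch of deploy/<image>.yaml, assuming the person-server name and the gRPC port 50051 used throughout this article:

    # deploy/person-server.yaml (a sketch; names and ports are taken from the figures)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: person-server
      labels:
        app: person-server
    spec:
      replicas: 1                    # one pod is enough for this demo
      selector:
        matchLabels:
          app: person-server
      template:
        metadata:
          labels:
            app: person-server
        spec:
          containers:
          - name: person-server
            image: person-server:latest
            imagePullPolicy: Never   # the image was loaded manually via minikube
            ports:
            - containerPort: 50051   # the gRPC port of the server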

Then we create the deployment inside the cluster using kubectl apply -f deploy/<image>.yaml.

Enter kubectl get deployments to see that there is one deployment:

Fig 11: Deployments

and similarly kubectl get pods to see all the pods:

Fig 12: Pods

Or, if you have many pods from other deployments, you can filter them using labels. For example, -l app=person-server would select only the person-server pod.

If we delete a pod using kubectl delete pod <pod-name>, a new pod is created automatically, because the deployment constantly reconciles the actual state with the desired number of replicas:

Fig 13: New pod is created

A few analogies:

  • docker inspect <container-id> v. kubectl describe pod <pod-name>
  • docker build ... v. kubectl apply ...
  • docker run ... is handled automatically by Kubernetes after apply (that’s why it is called deployment)
  • docker exec -it <container-id> <command> v. kubectl exec -it <pod-name> -- <command>
  • docker logs <container-id> [-f] v. kubectl logs <pod-name> [-f]

Exposing the Application

Now the whole application is running in the local cluster: it can communicate with other pods inside the cluster, but not with the external world yet. Exposing the application can be done in numerous ways:

Option #1: Exposing a Pod

kubectl port-forward <pod-name> <port>:<intern-port>

Fig 14: Kubernetes Pod exposed locally
Fig 15: Client can connect
Fig 16: Kubernetes Pod handling the requests

Option #2: Exposing a (NodePort) Service

Let’s define a new service in Kubernetes in deploy/<image>-service.yaml:

Fig 17: Service configuration

The template can be downloaded from the docs as well. Differences (a sketch of the full file follows the list):

  • The service is of type NodePort, which serves as the external entry point for incoming requests to your app [7].
  • The service carries the same app label as the deployment, person-server, so they belong together.
  • The actual deployment listens on port 50051 (targetPort). Inside the cluster, the service is reachable on port 8080 (port), and on the node itself it is exposed for TCP connections on port 32222 (nodePort).
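
Under the same naming assumptions, a minimal sketch of deploy/<image>-service.yaml:

    # deploy/person-server-service.yaml (a sketch; ports are taken from the text above)
    apiVersion: v1
    kind: Service
    metadata:
      name: person-server
    spec:
      type: NodePort
      selector:
        app: person-server      # matches the pods of the person-server deployment
      ports:
      - protocol: TCP
        port: 8080              # cluster-internal service port
        targetPort: 50051       # container port of the gRPC server
        nodePort: 32222         # port exposed on the minikube node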

If we run kubectl port-forward svc/<service> <port>:<port>, we would get the same results as before.

If the application is able to handle HTTP requests, it can also be exposed using minikube service <service> --url.

Option #3: Exposing an Ingress

Exposing a NodePort may not be a solution by itself, but it can be seen as an intermediate step towards an Ingress [8]: an entry point in front of your services that makes them accessible from outside the local cluster via HTTP requests. Using an Ingress, the whole application would look like:

Fig 18: Ingress, Source: https://banzaicloud.com/blog/k8s-ingress/

I don’t want to go into detail, because an Ingress can only route HTTP requests (which does not fit our gRPC-over-TCP application). Hypothetically, if our application communicated over HTTP, an Ingress for it could be defined as below:

Fig 19: Ingress configuration
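
A sketch of what such a definition might look like (hypothetical, since our server does not speak HTTP; the path and port are my assumptions):

    # deploy/person-server-ingress.yaml (hypothetical)
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: person-server
    spec:
      rules:
      - http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: person-server
                port:
                  number: 8080   # the service port defined above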

Final Thoughts

This concludes our series From Zero to Microservice. We have built a very simple gRPC server, Dockerized it, and deployed it in a local Kubernetes cluster.

This article is not based on any particular documentation: I wrote it from my practical experience and what I have learned by trial and error at my job. That neither qualifies me as an expert nor makes this a flawless article; I am still new to the whole tech stack, so I may have made some mistakes. I’d love to receive your feedback. Thank you for reading!

References

[1] My repository: https://github.com/yamaceay/person_grpc
[2] What is Docker?: https://www.docker.com
[3] Dockerfile: https://docs.docker.com/engine/reference/builder/
[4] What is Helm?: https://sysdig.com/learn-cloud-native/kubernetes-101/what-is-helm-in-kubernetes/
[5] Using Helm: https://helm.sh/docs/intro/using_helm/
[6] What is Kubernetes?: https://kubernetes.io
[7] NodePort: https://cloud.ibm.com/docs/containers?topic=containers-nodeport
[8] Ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/
