From Zero to Microservice [2]
How to Dockerize Your App & Deploy using Kubernetes?
Previous articles:
Step 0: Get a Project
I created a GitHub repository person_grpc [1]: a Go package that serves as a simple personal database, handling get / set person commands. The communication between server and client is done via gRPC (which runs over HTTP/2 on top of TCP).
Step 1: Go Project => Docker Image
Requirements: Docker (e.g. via Docker Desktop) is installed on your computer.
A Docker image ( = a program) is essentially a blueprint for Docker containers ( = running processes).
Building a Docker Image
A Dockerfile [3] is used for building an image from a specific implementation. It consists of commands like the following:
FROM <image-version> [AS <image-alias>]:
- A parent image is fetched and used in the following commands.
- Until the next FROM command, the target image remains unchanged.
WORKDIR <path>:
- The current working directory is changed to <path> inside the container.
- It is linked to the nearest previous FROM command.
COPY [--from=<image-alias>|<image-version>] <src-path> <dest-path>:
- It copies the specified file / folder from the source directory / image to the destination path.
- Like WORKDIR, it is also linked to the previous FROM command.
- If an image alias is given via --from=<image-alias>, the source path is looked up in that image.
- If the source image is only used once, the image version can be used instead of an alias.
RUN <build-cmd>: When building the Docker image, this command is executed inside the target image.
ENTRYPOINT | CMD ["<run-cmd-arg1>", "<run-cmd-arg2>", ...]: When starting the Docker container, these commands are executed.
An example of expected behaviour: a program ./main inside the target image requires exactly one positional argument $port:

ENTRYPOINT ["/bin/sh", "-c", "./main"]; CMD ["8080"]
- docker run -it <image> => $port is set to 8080
- docker run -it <image> 6379 => $port is set to 6379

ENTRYPOINT ["/bin/sh", "-c"]; CMD ["./main", "8080"]
- docker run -it <image> => $port is set to 8080, ./main is running
- docker run -it <image> ./main => ./main fails to start because no positional argument is given
- docker run -it <image> ./main 6379 => $port is set to 6379, ./main is running
An example of a Dockerfile (with build and run stages):
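A minimal sketch of such a two-stage Dockerfile could look like this. The module layout, the binary name server, the config.yaml file and port 50051 are assumptions based on the rest of this article, not the exact contents of the repository:

```
# Build stage: compile a static Go binary
FROM golang:1.20 AS build
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /app/server ./server

# Run stage: copy only the binary and its config into a small image
FROM alpine:3.18
WORKDIR /app
COPY --from=build /app/server ./server
COPY config.yaml ./config.yaml
EXPOSE 50051
ENTRYPOINT ["./server"]
CMD ["-f", "config.yaml"]
```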
A Docker image can be built using docker build . -t <image-name>. If you have multiple images in the same project, you can define more than one Dockerfile and select one using the flag -f <custom-path-to-Dockerfile>.
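For the image in this article, the commands could look like this (the image name person-server is an assumption, and the custom Dockerfile path is a made-up example):

```
# build the image from the Dockerfile in the current directory
docker build . -t person-server

# or, with a Dockerfile at a custom location
docker build . -t person-server -f docker/server.Dockerfile
```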
Running & Debugging a Docker Container
To run the Docker container in the background and expose it on the local host, enter: docker run [--rm] -d -p [<host>:]<port>:<intern-port> <image-name>:latest. This does the following (a concrete example follows the list):
- --rm: Remove the container once it is stopped.
- -d: Detach the container process from the terminal and return the container id instead.
- -p [<host>:]<port>:<intern-port>: Expose the application actually running on <intern-port> to <port> on the current host (<host> if specified, otherwise any host).
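Applied to the image from this article, the command could look like this (the image name and the port 50051 are assumptions based on the values used later on):

```
docker run --rm -d -p 50051:50051 person-server:latest
```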
docker inspect <container-id> prints all details of a container:
- The running command is ./server -f config.yaml (see "Path" and "Args").
- "50051/tcp" is successfully exposed at the address localhost:50051 (see "PortBindings").
docker run -it <image-name> /bin/sh opens a shell terminal inside a Docker container:
- config.yaml is copied successfully, so hosts and ports are successfully configured at each build.
- ./server is there too; running ./server -f config.yaml inside the terminal has the same effect as running the container.
docker ps shows all the active Docker containers. If used with the -a flag, it shows all containers, including stopped ones.
- The Docker container is accessible on port 50051 for TCP connections (and the universal host 0.0.0.0).
Let’s run the client:
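One way to do this is to run the client locally against the exposed port. The package layout and the -addr flag below are only assumptions, so check the repository for the actual entry point and flags:

```
go run ./client -addr localhost:50051
```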
docker logs <container-id> prints all the messages logged by the server. If used with the -f flag, it follows the logs until termination.
Now we’ll see how to deploy it on a local Kubernetes cluster.
Step 2: Docker Image => Kubernetes Deployment
Requirements: Minikube and Helm installed on your computer.
Helm is an essential package manager for Kubernetes, which automates the lifecycle of Kubernetes applications [4] and includes many advanced features, as listed in [5]. It is, however, out of the scope of this article.
First, let’s start a Minikube cluster using minikube start:
Creating a Kubernetes Deployment
Since the local cluster doesn’t know about the Docker image, let’s load the image into the cluster using minikube image load <image> (this may take a few minutes, depending on how big your image is).
The Docker image is now loaded, but nothing is deployed yet. The Kubernetes equivalent of a Docker image is called a deployment.
To create a new deployment, we create a new <image>.yaml inside a new directory called deploy with the following content:
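A minimal sketch of such a manifest is shown below; the name person-server, the image tag and port 50051 are assumptions based on the rest of the article:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: person-server
  labels:
    app: person-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: person-server
  template:
    metadata:
      labels:
        app: person-server
    spec:
      containers:
      - name: person-server
        image: person-server:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 50051
```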
I copy-pasted it from the docs. Differences:
- imagePullPolicy: Never: We already loaded the image into the cluster manually. Setting it to Never guarantees that the kubelet won’t attempt to download it automatically.
- replicas: 1: We want to keep it very simple. Only one pod is needed (a pod is also the Kubernetes equivalent of a Docker container).
Then we create the deployment inside the cluster using kubectl apply -f deploy/<image>.yaml.
Enter kubectl get deployments to see that there is one deployment:
and similarly kubectl get pods to see all the pods:
Or, if you have many pods from other deployments, you can filter the pods using labels. For example, -l app=person-server would get the person-server pod.
If we delete a pod using kubectl delete pod <pod-name>, a new pod is created automatically:
A few analogies:
- docker inspect <container-id> vs. kubectl describe pod <pod-name>
- docker build ... vs. kubectl apply ...
- docker run ... is handled automatically by Kubernetes after apply (that’s why it is called a deployment)
- docker exec -it <container-id> <command> vs. kubectl exec -it <pod-name> -- <command>
- docker logs <container-id> [-f] vs. kubectl logs <pod-name> [-f]
Exposing the Application
Now the whole application is running in the local cluster, so it can communicate with other pods inside the local cluster, but not with the external world yet. Exposing the application can be done in numerous ways:
Option #1: Exposing a Pod
kubectl port-forward <pod-name> <port>:<intern-port>
Option #2: Exposing a (NodePort) Service
Let’s define a new service in Kubernetes in deploy/<image>-service.yaml:
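A minimal sketch of the service manifest could look like this; the service name is an assumption, while the port numbers follow the description below:

```
apiVersion: v1
kind: Service
metadata:
  name: person-server
spec:
  type: NodePort
  selector:
    app: person-server
  ports:
  - protocol: TCP
    port: 8080          # port of the service inside the cluster
    targetPort: 50051   # port the deployment actually listens on
    nodePort: 32222     # port exposed on the cluster node
```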
The template can be downloaded from the docs as well. Differences:
- The service is a NodePort, which serves as the external entry point for incoming requests for your app [7].
- The service has the same app name as the deployment, person-server, so they belong together.
- The actual deployment runs on port 50051. The service, which is actually running on port 32222 in the local cluster, is accessible for TCP connections on port 8080.
If we run kubectl port-forward svc/<service> <port>:<port>, we would get the same results as before.
If the application is able to handle HTTP requests, it can also be exposed using minikube service <service> --url.
Option #3: Exposing an Ingress
Exposing a NodePort may not be a complete solution in itself, but it can be seen as an intermediate step towards creating an Ingress: an entry point in front of your services, which makes them accessible from outside the local cluster via HTTP requests [8]. Using an Ingress, the whole application would look like this:
I don’t want to go into detail, because an Ingress can only route HTTP requests (and this does not fit our application). Hypothetically, if our application communicated over HTTP, an Ingress for it could be defined as below:
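The following is only a hypothetical sketch, assuming the service spoke plain HTTP on port 8080; the host and path are made-up placeholder values:

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: person-server-ingress
spec:
  rules:
  - host: person.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: person-server
            port:
              number: 8080
```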
Final Thoughts
This concludes our series From Zero to Microservice. We have built a very simple gRPC server, Dockerized it and deployed it in a local Kubernetes cluster.
This article is not based on any particular documentation: I wrote it based on my practical experience and what I have learned by trial and error in my job. Neither does it qualify me as an expert, nor did I write a flawless article. I am also new to the whole tech stack, so I may have made some mistakes. I’d love to receive your feedback. Thank you for reading!
References
[1] My repository: https://github.com/yamaceay/person_grpc
[2] What is Docker?: https://www.docker.com
[3] Dockerfile: https://docs.docker.com/engine/reference/builder/
[4] What is Helm?: https://sysdig.com/learn-cloud-native/kubernetes-101/what-is-helm-in-kubernetes/
[5] Using Helm: https://helm.sh/docs/intro/using_helm/
[6] What is Kubernetes?: https://kubernetes.io
[7] NodePort: https://cloud.ibm.com/docs/containers?topic=containers-nodeport
[8] Ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/