Rails on Kubernetes with TLS
Tags: rails, kubernetes, terraform, github, certmanager
I wanted to see how to really use kubernetes like I'm used to using heroku, so let's recreate everything using terraform, digital ocean, kubernetes and MAGIC!
Sample rails app
Build image
The first thing we'll do is create a docker image that we'll use to build our rails app.
Dockerfile.build:
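A minimal sketch of what this could look like; the ruby version and extra packages are assumptions, not the original file:

```dockerfile
# Dockerfile.build: just enough to run `rails new` inside a container
FROM ruby:3.0

# rails wants a javascript runtime, yarn, and the postgres client headers
RUN curl -fsSL https://deb.nodesource.com/setup_16.x | bash - && \
    apt-get update && \
    apt-get install -y nodejs libpq-dev && \
    npm install -g yarn

RUN gem install rails

WORKDIR /app
```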
Now we can build and run this image to generate our application:
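Something like the following, where the image tag is an arbitrary choice; mounting the current directory means the generated app ends up on the host:

```bash
docker build -f Dockerfile.build -t rails-builder .
docker run -it --rm -v "$PWD:/app" rails-builder bash
```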
Once you are inside the container, create a new rails app:
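favoriteapp is the name used throughout the rest of the post; the --database flag is an assumption based on the postgres setup later on:

```bash
rails new favoriteapp --database=postgresql
```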
Then quit out of it.
Developing the app
Now inside of the rails app, we'll create a Dockerfile.dev that will let us develop the app:
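A sketch of a development-oriented Dockerfile; the base image and packages are assumptions:

```dockerfile
FROM ruby:3.0

RUN apt-get update && apt-get install -y nodejs postgresql-client

WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install

COPY . .

EXPOSE 3000
CMD ["bin/rails", "server", "-b", "0.0.0.0"]
```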
Now we need to create a docker-compose.yml to set up the environment:
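A sketch along the lines of what the post needs (postgres, redis, the web app, and a sidekiq worker); the service names, password, and environment variables are assumptions:

```yaml
version: "3.8"
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: password
    volumes:
      - postgres:/var/lib/postgresql/data
  redis:
    image: redis:6
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    command: bin/rails server -b 0.0.0.0
    volumes:
      - .:/app
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://postgres:password@db:5432/favoriteapp_development
      REDIS_URL: redis://redis:6379/0
    depends_on:
      - db
      - redis
  sidekiq:
    build:
      context: .
      dockerfile: Dockerfile.dev
    command: bundle exec sidekiq
    volumes:
      - .:/app
    environment:
      DATABASE_URL: postgres://postgres:password@db:5432/favoriteapp_development
      REDIS_URL: redis://redis:6379/0
    depends_on:
      - db
      - redis
volumes:
  postgres:
```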
And a nice little .dockerignore file:
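A plausible minimal version:

```
.git
log/
tmp/
node_modules/
```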
Now we start it up:
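Nothing special here:

```bash
docker-compose up --build
```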
And then we need to create the database:
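Run it against the web service (rails db:prepare would also work):

```bash
docker-compose run web rails db:create
```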
Develop the app
We're going to do some basic stuff here that shows
- How to connect to a database
- How to connect to redis
- How to deploy sidekiq
Scaffold
Then let's create a scaffold for a database object:
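A Message with a processed flag matches the behaviour described later in the post; the exact field list is an assumption:

```bash
docker-compose run web rails generate scaffold message body:text processed:boolean
docker-compose run web rails db:migrate
```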
Sidekiq
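The gem needs to end up in the Gemfile; one way to do it (what the original step ran exactly is an assumption):

```bash
docker-compose run web bundle add sidekiq
```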
Let's turn on the :sidekiq adapter in config/application.rb:
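Inside the Application class:

```ruby
config.active_job.queue_adapter = :sidekiq
```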
Then let's create a simple job that will process the message.
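The generator gives us the skeleton:

```bash
docker-compose run web rails generate job process_message
```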
And the job itself, app/jobs/process_message_job.rb:
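A minimal sketch that just flips the processed flag; the original job may do more:

```ruby
class ProcessMessageJob < ApplicationJob
  queue_as :default

  def perform(message)
    # pretend to do some work, then mark the message as handled
    message.update(processed: true)
  end
end
```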
Then we schedule it in app/controllers/messages_controller.rb, inside of the create method:
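A one-line sketch; @message and the surrounding scaffold-generated create action are assumed:

```ruby
# inside the `if @message.save` branch of create
ProcessMessageJob.perform_later(@message)
```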
Finally we add the routes in config/routes.rb:
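Something like this, with the root route being an assumption so that http://localhost:3000 lands on the messages index:

```ruby
Rails.application.routes.draw do
  resources :messages
  root "messages#index"
end
```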
Testing
Now you can visit http://localhost:3000 to see your working rails app.
Add a message and you will see that it's processed = false; when you go back to the index, sidekiq should have processed the message in the background.
Production Image
Now that we've "developed" our application locally, lets spin it up and deploy it.
Then we need a Dockerfile to build the thing. Let's create a Dockerfile.prod to make it happen:
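A sketch of a production Dockerfile; the asset pipeline details and the way RAILS_MASTER_KEY is passed in at build time are assumptions:

```dockerfile
FROM ruby:3.0

RUN apt-get update && apt-get install -y nodejs postgresql-client

WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle config set without 'development test' && bundle install

COPY . .

ENV RAILS_ENV=production
ENV RAILS_LOG_TO_STDOUT=true
ENV RAILS_SERVE_STATIC_FILES=true

# the master key is needed at build time so assets:precompile can read credentials
ARG RAILS_MASTER_KEY
RUN bundle exec rake assets:precompile

EXPOSE 3000
CMD ["bin/rails", "server", "-b", "0.0.0.0"]
```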
Then build the container:
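yourusername is a placeholder for your Docker Hub username:

```bash
docker build -f Dockerfile.prod \
  --build-arg RAILS_MASTER_KEY=$(cat config/master.key) \
  -t yourusername/favoriteapp .
```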
Setting up continuous integration
But we don't want to do that all by hand, so let's set up GitHub Actions to build and push to Docker Hub.
First create a new repository on github. Once you have that, add the remote to the favoriteapp local git repository.
Now we need to add some secrets and environment variables.
First go to your Docker Hub security page and create a new access token. Copy this.
Then go to the settings on your github repo, and add the secrets:
| Secret | Value |
| --- | --- |
| DOCKERHUB_TOKEN | the copied token |
| DOCKERHUB_USERNAME | your username |
| RAILS_MASTER_KEY | what's in config/master.key |
Then we need to create a .github/workflows/build-and-push.yaml file that tells GitHub what to do:
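A sketch of the workflow using the docker/build-push-action from the references; the branch name and action versions are assumptions:

```yaml
name: Build and push

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: .
          file: Dockerfile.prod
          push: true
          tags: ${{ secrets.DOCKERHUB_USERNAME }}/favoriteapp:latest
          build-args: |
            RAILS_MASTER_KEY=${{ secrets.RAILS_MASTER_KEY }}
```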
If everything goes well, this will all be pushed to Docker Hub and we are ready to begin building out the infrastructure.
Note that by default these images are public.
Terraform: Provision the infrastructure
Now that we have a working application that's packaged up in a docker container, let's define the infrastructure that we will deploy it on.
We are going to use terraform to provision a kubernetes cluster and postgres cluster on digital ocean, and then inside that cluster we will set up a deployment of our application, a job to run the database migrations, and a service and ingress to present it to the outside world. We'll use helm (as part of terraform) to install a redis instance, cert-manager to handle certificates, and nginx-ingress on the cluster to expose the application.
Finally we will use dnsimple to make sure that our application has a name.
The providers
We need tokens from digital ocean and dnsimple (if that's not the provider you use, it's easy to swap in something else).
This section defines the terraform plugins that we will use to provision the platform:
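A sketch of the provider wiring; the provider versions and variable names are assumptions:

```hcl
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
    dnsimple = {
      source = "dnsimple/dnsimple"
    }
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
    helm = {
      source = "hashicorp/helm"
    }
  }
}

variable "do_token" {}
variable "dnsimple_token" {}
variable "dnsimple_account" {}

provider "digitalocean" {
  token = var.do_token
}

provider "dnsimple" {
  token   = var.dnsimple_token
  account = var.dnsimple_account
}
```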
Cluster
Now we can define the cluster itself.
digitalocean_kubernetes_cluster defines the kubernetes cluster itself, and here we are creating a 3 node cluster.
We also define the kubernetes and helm terraform providers here, using the host and certificates that we get from the digitalocean provider.
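A sketch of the cluster plus the kubernetes and helm providers that hang off of it; the region, version string, and node size are assumptions:

```hcl
resource "digitalocean_kubernetes_cluster" "cluster" {
  name    = "favoriteapp-cluster"
  region  = "nyc1"
  version = "1.21.2-do.0" # pick a currently supported version

  node_pool {
    name       = "worker-pool"
    size       = "s-2vcpu-2gb"
    node_count = 3
  }
}

provider "kubernetes" {
  host  = digitalocean_kubernetes_cluster.cluster.endpoint
  token = digitalocean_kubernetes_cluster.cluster.kube_config[0].token
  cluster_ca_certificate = base64decode(
    digitalocean_kubernetes_cluster.cluster.kube_config[0].cluster_ca_certificate
  )
}

provider "helm" {
  kubernetes {
    host  = digitalocean_kubernetes_cluster.cluster.endpoint
    token = digitalocean_kubernetes_cluster.cluster.kube_config[0].token
    cluster_ca_certificate = base64decode(
      digitalocean_kubernetes_cluster.cluster.kube_config[0].cluster_ca_certificate
    )
  }
}
```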
Datastores
We are going to set up two different datastores: one is a digitalocean_database_cluster of postgres with one node, and the other is redis running on the cluster that we defined (in standalone mode). We are using the bitnami redis helm chart.
I'm also setting a password on the redis instance as an example of how to do this. It's only accessible from within the cluster so I'm not sure it's strictly needed, but it can't hurt.
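A sketch of both datastores; the sizes, names, and the redis_password variable are assumptions:

```hcl
resource "digitalocean_database_cluster" "postgres" {
  name       = "favoriteapp-postgres"
  engine     = "pg"
  version    = "13"
  size       = "db-s-1vcpu-1gb"
  region     = "nyc1"
  node_count = 1
}

variable "redis_password" {}

resource "helm_release" "redis" {
  name       = "redis"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "redis"

  set {
    name  = "architecture"
    value = "standalone"
  }

  set {
    name  = "auth.password"
    value = var.redis_password
  }
}
```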
Ingress Controller
We are installing the ingress-nginx controller here, again using helm. This will set up the digital ocean load balancer. The data terraform block is there to expose the ip address of the load balancer, which we will use to set up the DNS name.
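A sketch of the helm release and the data block; the exact attribute path for reading the load balancer IP out of the service depends on the kubernetes provider version:

```hcl
resource "helm_release" "ingress_nginx" {
  name       = "ingress-nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
}

# reads back the LoadBalancer service the chart creates, so we can get its IP
data "kubernetes_service" "ingress_nginx" {
  metadata {
    name = "ingress-nginx-controller"
  }

  depends_on = [helm_release.ingress_nginx]
}
```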
DNS
I use dnsimple for my domain, and I'm calling this site k8. Why not.
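A sketch of the record; the domain is a placeholder, and older versions of the dnsimple provider call this resource dnsimple_record with a domain argument instead:

```hcl
resource "dnsimple_zone_record" "k8" {
  zone_name = "example.com" # your dnsimple-managed domain
  name      = "k8"
  type      = "A"
  ttl       = 300

  # the attribute path for the load balancer IP varies by kubernetes provider version
  value = data.kubernetes_service.ingress_nginx.status[0].load_balancer[0].ingress[0].ip
}
```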
Cert Manager
cert-manager keeps track of certificates as a custom resource within kubernetes. We will use this to get our TLS traffic good to go.
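A sketch of the install, letting the chart manage its own CRDs:

```hcl
resource "helm_release" "cert_manager" {
  name             = "cert-manager"
  repository       = "https://charts.jetstack.io"
  chart            = "cert-manager"
  namespace        = "cert-manager"
  create_namespace = true

  set {
    name  = "installCRDs"
    value = "true"
  }
}
```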
Config
Finally, we are going to stick the data that we just got from creating these endpoints into a kubernetes config map that our application will use to wire itself up.
We also create a namespace for all of our app stuff just to keep things organized.
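A sketch of the namespace and config map; the key names just need to match what the deployment reads, and the redis service name assumes the bitnami chart defaults from above:

```hcl
resource "kubernetes_namespace" "app" {
  metadata {
    name = "favoriteapp"
  }
}

resource "kubernetes_config_map" "app_config" {
  metadata {
    name      = "favoriteapp-config"
    namespace = kubernetes_namespace.app.metadata[0].name
  }

  data = {
    DATABASE_URL = digitalocean_database_cluster.postgres.uri
    REDIS_URL    = "redis://:${var.redis_password}@redis-master.default.svc.cluster.local:6379/0"
  }
}
```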
Option 1: ClusterIssuer custom resource definition
I had some trouble adding this resource before the cluster has started; hopefully they've fixed it in a later release. In the meantime you may want to only add this file after everything is up.
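A sketch of the kubernetes_manifest version, assuming the kubernetes-alpha provider is configured against the same cluster; the email is a placeholder and the issuer name matches the kubectl output further down:

```hcl
resource "kubernetes_manifest" "cluster_issuer" {
  provider = kubernetes-alpha

  manifest = {
    apiVersion = "cert-manager.io/v1"
    kind       = "ClusterIssuer"
    metadata = {
      name = "issuer-account-key"
    }
    spec = {
      acme = {
        email  = "you@example.com"
        server = "https://acme-v02.api.letsencrypt.org/directory"
        privateKeySecretRef = {
          name = "issuer-account-key"
        }
        solvers = [{
          http01 = {
            ingress = {
              class = "nginx"
            }
          }
        }]
      }
    }
  }
}
```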
Option 2: Setup using cluster-issuer.yml
Instead of using the kubernetes-alpha way of setting up the cluster issuer, we can do a simple yml file and do it the kubectl way.
cluster-issuer.yml:
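A sketch of the issuer; the email is a placeholder, and the names match the kubectl output below:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: issuer-account-key
spec:
  acme:
    email: you@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: issuer-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
```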
Then apply it:
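The file name matches the one above:

```bash
kubectl apply -f cluster-issuer.yml
```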
And we can look at it like so
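Presumably with something like:

```bash
kubectl get clusterissuer
```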
```
NAME                 READY   SECRET               AGE
issuer-account-key   True    issuer-account-key   34m
```
App deployment
Finally, we define our app itself. It has two moving pieces that can be scaled independently.
One is called favoriteapp and is initially set to have 2 replicas. We define two types of containers here: one is the init_container that basically runs on each pod startup to run the migration (command = ["rake", "db:migrate"]), and the other is the container itself that serves the rails application on port 3000.
The other is favoriteapp-workers, which runs the sidekiq command.
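A sketch of the two deployments; the image name is a placeholder, secret handling (for example RAILS_MASTER_KEY) is left out, and the container names are assumptions:

```hcl
resource "kubernetes_deployment" "favoriteapp" {
  metadata {
    name      = "favoriteapp"
    namespace = kubernetes_namespace.app.metadata[0].name
  }

  spec {
    replicas = 2

    selector {
      match_labels = { app = "favoriteapp" }
    }

    template {
      metadata {
        labels = { app = "favoriteapp" }
      }

      spec {
        # runs on every pod start to bring the schema up to date
        init_container {
          name    = "migrate"
          image   = "yourusername/favoriteapp:latest"
          command = ["rake", "db:migrate"]

          env_from {
            config_map_ref {
              name = kubernetes_config_map.app_config.metadata[0].name
            }
          }
        }

        container {
          name  = "favoriteapp"
          image = "yourusername/favoriteapp:latest"

          port {
            container_port = 3000
          }

          env_from {
            config_map_ref {
              name = kubernetes_config_map.app_config.metadata[0].name
            }
          }
        }
      }
    }
  }
}

resource "kubernetes_deployment" "favoriteapp_workers" {
  metadata {
    name      = "favoriteapp-workers"
    namespace = kubernetes_namespace.app.metadata[0].name
  }

  spec {
    replicas = 1

    selector {
      match_labels = { app = "favoriteapp-workers" }
    }

    template {
      metadata {
        labels = { app = "favoriteapp-workers" }
      }

      spec {
        container {
          name    = "favoriteapp-workers"
          image   = "yourusername/favoriteapp:latest"
          command = ["bundle", "exec", "sidekiq"]

          env_from {
            config_map_ref {
              name = kubernetes_config_map.app_config.metadata[0].name
            }
          }
        }
      }
    }
  }
}
```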
Now that we have the deployments running, we need to expose them first to the cluster as a service (basically this gives them a name and a port that other kubernetes services can access).
Once that service is defined, we define an ingress that lets the outside world connect to the internal service, which in turn connects to the pods running in the deployment.
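A sketch of the service and ingress; the hostname matches the DNS record above, the cluster-issuer annotation matches the ClusterIssuer name, and the older kubernetes_ingress schema is assumed:

```hcl
resource "kubernetes_service" "favoriteapp" {
  metadata {
    name      = "favoriteapp"
    namespace = kubernetes_namespace.app.metadata[0].name
  }

  spec {
    selector = { app = "favoriteapp" }

    port {
      port        = 80
      target_port = 3000
    }
  }
}

resource "kubernetes_ingress" "favoriteapp" {
  metadata {
    name      = "favoriteapp"
    namespace = kubernetes_namespace.app.metadata[0].name
    annotations = {
      "kubernetes.io/ingress.class"    = "nginx"
      "cert-manager.io/cluster-issuer" = "issuer-account-key"
    }
  }

  spec {
    tls {
      hosts       = ["k8.example.com"]
      secret_name = "favoriteapp-tls"
    }

    rule {
      host = "k8.example.com"

      http {
        path {
          path = "/"

          backend {
            service_name = kubernetes_service.favoriteapp.metadata[0].name
            service_port = 80
          }
        }
      }
    }
  }
}
```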
terraform and kubectl
Now we run terraform apply and, if you've entered all of your credentials correctly, the application should start up with all of the correct datasources, migrations run, and the whole thing.
You can walk through the flow to make sure that the app is working, that things get stored in the database, and that the sidekiq jobs processed what is needed.
You can also configure kubectl locally so that you can examine the cluster:
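One way to do it with doctl, assuming the cluster name from the terraform above:

```bash
doctl kubernetes cluster kubeconfig save favoriteapp-cluster
kubectl get pods -n favoriteapp
```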
Manually reissuing the certificate
First look to see what the status of your certificate is:
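Assuming the certificate got its name from the ingress tls block above:

```bash
kubectl get certificate -n favoriteapp
kubectl describe certificate favoriteapp-tls -n favoriteapp
```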
And you can also look at the certificate request itself to see if everything is good.
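Same idea, one level down:

```bash
kubectl get certificaterequest -n favoriteapp
kubectl describe certificaterequest -n favoriteapp
```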
Install the cert-manager plugin locally.
Looking at the deployment
Logs
Webapp:
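Assuming the deployment names from the terraform sketch above:

```bash
kubectl logs -n favoriteapp deployment/favoriteapp
```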
Workers:
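Same thing, different deployment:

```bash
kubectl logs -n favoriteapp deployment/favoriteapp-workers
```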
Migration:
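The migration runs in the init container, so point logs at that container (the container name here is the one assumed in the deployment sketch):

```bash
kubectl logs -n favoriteapp deployment/favoriteapp -c migrate
```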
Deploying a new version
First we make a change to the app, then check it in. Once things are finished building we can manually trigger a restart:
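Since the image tag doesn't change, a rollout restart will pull the new :latest and cycle the pods:

```bash
kubectl rollout restart -n favoriteapp deployment/favoriteapp deployment/favoriteapp-workers
```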
Setting up automatic deployment
We can also extend our github action to use kubectl itself to trigger the deployment. (You'll probably want to add a step in there to run tests also!) This is what that looks like.
First, you need to add your .kube/config to the repository's secrets: convert it to base64, then add a new secret named KUBE_CONFIG_DATA:
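Then paste the output into the secret:

```bash
cat ~/.kube/config | base64
```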
Then, add the following steps to build-and-push.yaml:
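A sketch of the extra steps using the steebchen/kubectl action from the references; the inputs here follow its older KUBE_CONFIG_DATA-style usage and may differ in newer versions of the action:

```yaml
- name: Restart web deployment
  uses: steebchen/kubectl@master
  env:
    KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA }}
  with:
    args: rollout restart -n favoriteapp deployment/favoriteapp

- name: Restart worker deployment
  uses: steebchen/kubectl@master
  env:
    KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA }}
  with:
    args: rollout restart -n favoriteapp deployment/favoriteapp-workers
```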
Final thoughts
What a journey this post has been! Stepping back a whole bunch, it's not really clear to me that this is an improvement. I have a lot of applications on Heroku, which has a much simpler workflow: heroku create, git push and there you go. It locks you into a certain way of doing things, and buildpacks, while a bit more constraining compared to Dockerfiles, are about a zillion times easier to work with.
And on the other side, you have things like cloud functions, either using something like OpenFaaS or even different deployment models altogether. If you are in the Node or Deno ecosystems, what's going on with Deno Deploy or even NextJS is a much easier way to actually get something up and running. The level of complexity for kubernetes is truly mind boggling, and a number of times during this write up I was muttering under my breath about a simpler world where we could FTP PHP files around…
Basically, I'm not sure that I often find myself with the problem where kubernetes is the right solution. It's certainly very cool, and the idea of having a bunch of resources that, with a little guidance, sort of manage and heal themselves is pretty amazing. But I also feel that there's way more going on than I properly understand, and it's a lot of ceremony to make stuff happen.
References
- https://docs.bitnami.com/tutorials/deploy-rails-application-kubernetes-helm/
- https://docs.openfaas.com/reference/ssl/kubernetes-with-cert-manager/
- https://dev.to/michaellalatkovic/deploying-on-kubernetes-part-1-a-rails-api-backend-2ojl
- https://cert-manager.io/docs/tutorials/acme/ingress/
- https://github.com/docker/build-push-action
- https://github.com/steebchen/kubectl