I wanted to see how to really use Kubernetes the way I'm used to using Heroku, so let's recreate everything using Terraform, DigitalOcean, Kubernetes and MAGIC!

Sample rails app

Build image

The first thing we'll do is create a Docker image that we'll use to generate our Rails app.

Dockerfile.build:

FROM ruby:3.0.1

WORKDIR /app

# nodejs and yarn and cloc
RUN curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN curl -sL https://deb.nodesource.com/setup_16.x | bash -
RUN apt-get update && apt-get install -y nodejs yarn

# install bundler
RUN gem install bundler:2.1.4 && gem install rails
CMD bash

Now we can build and run this image to generate our application:

  docker build . -f Dockerfile.build -t railsdev
  docker run --rm -it -v $(pwd):/app railsdev

Once you are inside the container, create a new Rails app:

  rails new favoriteapp -d=postgresql

Then quit out of it.

Developing the app

Now, inside the Rails app directory, we'll create a Dockerfile.dev that will let us develop the app:

FROM ruby:3.0.1

WORKDIR /app

# nodejs and yarn and cloc
RUN curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN curl -sL https://deb.nodesource.com/setup_16.x | bash -
RUN apt-get update && apt-get install -y nodejs yarn

# install bundler
RUN gem install bundler:2.1.4

# Bundle gems
COPY Gemfile* /app/
RUN bundle install

# Install node stuff
COPY package.json yarn.lock /app/
RUN yarn install --check-files
COPY . /app/

EXPOSE 3000

CMD rm -f tmp/pids/server.pid;bundle exec rails server -b 0.0.0.0

Now we need to create a docker-compose.yml to set up the environment.

  version: "3.7"

  services:
    postgres:
      image: postgres:13.1
      environment:
        POSTGRES_PASSWORD: awesome_password
      ports:
        - "5432:5432"

    pgadmin:
      image: dpage/pgadmin4:5.4
      environment:
        PGADMIN_DEFAULT_EMAIL: admin@example.com
        PGADMIN_DEFAULT_PASSWORD: SuperSecret
        GUNICORN_ACCESS_LOGFILE: /dev/null
      ports:
        - "4000:80"
      depends_on:
        - postgres

    favoriteapp:
      build:
        context: .
        dockerfile: Dockerfile.dev
      depends_on:
        - postgres
        - redis
      volumes:
        - type: bind
          source: ./
          target: /app
      ports:
        - "3000:3000"
      environment:
        - DATABASE_URL=postgresql://postgres:awesome_password@postgres:5432/favoriteapp?encoding=utf8&pool=5&timeout=5000
        - REDIS_URL=redis://redis:6379/0
        - RAILS_ENV=development

    sidekiq:
      build:
        context: .
        dockerfile: Dockerfile.dev
      command: bundle exec sidekiq
      depends_on:
        - postgres
        - redis
      environment:
        - DATABASE_URL=postgresql://postgres:awesome_password@postgres:5432/favoriteapp?encoding=utf8&pool=5&timeout=5000
        - REDIS_URL=redis://redis:6379/0
        - RAILS_ENV=development

    redis:
      image: redis:6.0.9
      ports:
        - '6379:6379'

And a nice little .dockerignore file:

# node_modules
tmp

Now we start it up:

docker-compose up --build

And then we need to create the database:

  docker-compose run --rm favoriteapp rails db:create db:migrate

Develop the app

We're going to do some basic stuff here that shows

  1. How to connect to a database
  2. How to connect to redis
  3. How to deploy sidekiq

Scaffold

Then let's create a scaffold for a database object:

  docker-compose run --rm favoriteapp rails g scaffold messages body:string processed:boolean
  docker-compose run --rm favoriteapp rake db:setup
  docker-compose run --rm favoriteapp rake db:migrate

Sidekiq

  docker-compose run --rm favoriteapp bundle add sidekiq

Let's turn on the :sidekiq queue adapter in config/application.rb:

  class Application < Rails::Application
    # ...
    config.active_job.queue_adapter = :sidekiq
  end

Then let's create a simple job that will process the message.

  docker-compose run --rm favoriteapp rails g job process_message

And the job itself app/jobs/process_message_job.rb:

  class ProcessMessageJob < ApplicationJob
    queue_as :default

    def perform(message)
      logger.info "Processing message #{message.id}"
      m = Message.find( message.id )
      m.processed = true
      m.save
    end
  end

Then we schedule it in app/controllers/messages_controller.rb, inside of the create method:

      if @message.save
        ProcessMessageJob.perform_later @message

Finally we add the routes in config/routes.rb:

  require 'sidekiq/web'

  Rails.application.routes.draw do
    mount Sidekiq::Web => "/sidekiq" # mount Sidekiq::Web in your Rails app
    resources :messages
    root "messages#index"
  end

Testing

docker-compose up --build

Now you can visit http://localhost:3000 to see your working Rails app. Add a message and you'll see that processed is false; when you go back to the index, Sidekiq should have processed the message in the background.
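
If you prefer to watch this from the command line, the scaffold also gives you JSON endpoints. A quick sketch (assuming the default scaffold routes) to check the processed flag:

  curl http://localhost:3000/messages.json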

Production Image

Now that we've "developed" our application locally, let's package it up and deploy it.

We need a Dockerfile to build the thing, so let's create a Dockerfile.prod to make it happen.

FROM ruby:3.0.1

WORKDIR /app

# nodejs and yarn and cloc
RUN curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN curl -sL https://deb.nodesource.com/setup_16.x | bash -
RUN apt-get update && apt-get install -y nodejs yarn

# install bundler
RUN gem install bundler:2.1.4

# Set up environment
RUN bundle config set without 'development test'
ENV RAILS_ENV production
ENV RAILS_SERVE_STATIC_FILES true
ENV RAILS_LOG_TO_STDOUT true

# Bundle gems
COPY Gemfile* /app/
RUN bundle install

# Install node stuff
COPY package.json yarn.lock /app/
RUN yarn install --check-files
COPY . /app/

#RUN yarn install --check-files
ARG RAILS_MASTER_KEY
RUN bundle exec rake assets:precompile

EXPOSE 3000

CMD rm -f tmp/pids/server.pid;bundle exec rails server -b 0.0.0.0

Then build the container:

docker build . -f Dockerfile.prod -t wschenk/favoriteapp --build-arg RAILS_MASTER_KEY=$(cat config/master.key)

Setting up continuous integration

But we don't want to do that all by hand, so let's set up GitHub Actions to build and push to Docker Hub.

First create a new repository on GitHub. Once you have that, add the remote to the favoriteapp local git repository.
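
Something like this, assuming your repository is also named favoriteapp (swap in your own GitHub user and URL):

  git remote add origin git@github.com:wschenk/favoriteapp.git
  git push -u origin master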

Now we need to add some secrets and environment variables.

First go to your Docker Hub security page and create a new access token, and copy it.

Then go to the settings on your GitHub repo and add the following secrets:

DOCKERHUB_TOKEN: the copied token
DOCKERHUB_USERNAME: your Docker Hub username
RAILS_MASTER_KEY: the contents of config/master.key

Then we need to create a .github/workflows/build-and-push.yaml file that tells GitHub what to do:

  name: Build and Push

  on: push

  jobs:
    docker:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v2
        -
          name: Login to DockerHub
          uses: docker/login-action@v1 
          with:
            username: ${{ secrets.DOCKERHUB_USERNAME }}
            password: ${{ secrets.DOCKERHUB_TOKEN }}
        -
          name: Docker meta
          id: meta
          uses: docker/metadata-action@v3
          with:
            images: wschenk/favoriteapp

        -
          name: Build and push
          id: docker_push
          uses: docker/build-push-action@v2
          with:
            push: true
            tags: ${{ steps.meta.outputs.tags }}
            labels: ${{ steps.meta.outputs.labels }}
            file: Dockerfile.prod
            build-args: RAILS_MASTER_KEY=${{ secrets.RAILS_MASTER_KEY }}

If everything goes well, the image will be built and pushed to Docker Hub, and we are ready to begin building out the infrastructure.

Note that by default these images are public.
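
You can sanity check that the push worked by pulling the image back down. The metadata action tags images by branch, so assuming you pushed to master:

  docker pull wschenk/favoriteapp:master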

Terraform: Provision the infrastructure

Now that we have a working application packaged up in a Docker container, let's define the infrastructure that we will deploy it on. We are going to use Terraform to provision a Kubernetes cluster and a Postgres cluster on DigitalOcean, and then inside that cluster we will set up a deployment of our application (with an init container to run the database migrations) along with a service and an ingress to present it to the outside world. We'll use Helm (as part of Terraform) to install a Redis instance, cert-manager to handle certificates, and ingress-nginx on the cluster to expose the application.

Finally we will use DNSimple to make sure that our application has a name.

The providers

We need tokens from DigitalOcean and DNSimple (if DNSimple isn't the DNS provider you use, it's easy to swap in something else).

This section defines the Terraform providers that we will use to provision the platform.

  terraform {
    required_providers {
      digitalocean = {
        source = "digitalocean/digitalocean"
        version = "~> 2.0"
      }
      dnsimple = {
        source = "dnsimple/dnsimple"
      }
    }
  }

  provider "digitalocean" {
    token   = var.do_token
  }

  provider "dnsimple" {
    token   = var.dnsimple_token
    account = var.dnsimple_account_id
  }

  variable "do_token" {
    description = "digitalocean access token"
    type        = string
  }

  variable "dnsimple_token" {
    description = "dnssimple api access token"
  }

  variable "dnsimple_account_id" {
    description = "dnsimple account id"
  }

  variable "dnsimple_domain" {
    description = "dnsimple domain"
  }
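
One way to feed these variables in is with TF_VAR_ environment variables, which is also what the kubeconfig download further down assumes. A sketch, with placeholder values:

  export TF_VAR_do_token=your_digitalocean_token
  export TF_VAR_dnsimple_token=your_dnsimple_token
  export TF_VAR_dnsimple_account_id=your_account_id
  export TF_VAR_dnsimple_domain=example.com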

Cluster

Now we can define the cluster itself.

digitalocean_kubernetes_cluster defines the Kubernetes cluster itself, and here we are creating a 3 node cluster.

We also define the kubernetes and helm Terraform providers here, using the host and credentials that we get from the digitalocean provider.

  resource "digitalocean_kubernetes_cluster" "gratitude" {
    name    = "gratitude"
    region  = "nyc1"
    version = "1.21.2-do.2" # or "latest"

    node_pool {
      name       = "worker-pool"
      size       = "s-2vcpu-2gb"
      node_count = 3
    }
  }

  output "cluster-id" {
    value = "${digitalocean_kubernetes_cluster.gratitude.id}"
  }

  provider "kubernetes" {
    host             = digitalocean_kubernetes_cluster.gratitude.endpoint
    token            = digitalocean_kubernetes_cluster.gratitude.kube_config[0].token
    cluster_ca_certificate = base64decode(
      digitalocean_kubernetes_cluster.gratitude.kube_config[0].cluster_ca_certificate
    )
  }

  provider "helm" {
    kubernetes {
      host = digitalocean_kubernetes_cluster.gratitude.endpoint
      cluster_ca_certificate = base64decode( digitalocean_kubernetes_cluster.gratitude.kube_config[0].cluster_ca_certificate )
      token = digitalocean_kubernetes_cluster.gratitude.kube_config[0].token
    }
  }

Datastores

We are going to set up two different datastores: one is a digitalocean_database_cluster running Postgres with a single node, and the other is Redis running in standalone mode on the Kubernetes cluster that we defined, using the Bitnami redis Helm chart.

I'm also setting a password on the Redis instance as an example of how to do this. It's only accessible from within the cluster, so I'm not sure it's strictly needed, but it can't hurt.

  resource "random_password" "redis_password" {
    length           = 16
    special          = false
  }

  resource "helm_release" "redis" {
    repository = "https://charts.bitnami.com/bitnami"
    chart = "redis"
    name = "redis"
  
    set {
      name = "auth.password"
      value = random_password.redis_password.result
    }
  
    set {
      name = "architecture"
      value = "standalone"
    }
  }

  resource "kubernetes_secret" "redispassword" {
    metadata {
      name = "redispassword"
    }
  
    data = {
      password = random_password.redis_password.result
    }
  }

  resource "digitalocean_database_cluster" "favoriteapp-postgres" {
    name       = "favoriteapp-postgres-cluster"
    engine     = "pg"
    version    = "11"
    size       = "db-s-1vcpu-1gb"
    region     = "nyc1"
    node_count = 1
  }
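
If you ever need to fish the generated Redis password back out of the cluster (once kubectl is configured, as described further down), the redispassword secret we just created holds it:

  kubectl get secret redispassword -o jsonpath='{.data.password}' | base64 -d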

Ingress Controller

We are installing the ingress-nginx controller here, again using Helm. This will set up the DigitalOcean load balancer. The data block is there to expose the IP address of the load balancer, which we will use to set up the DNS name.

  resource "helm_release" "ingress-nginx" {
    name = "ingress-nginx"
    repository = "https://kubernetes.github.io/ingress-nginx"
    chart = "ingress-nginx"
  
  }

  data "kubernetes_service" "ingress-nginx" {
    depends_on = [ helm_release.ingress-nginx ]
    metadata {
      name = "ingress-nginx-controller"
    }
  }

  output "cluster-ip" {
    value = data.kubernetes_service.ingress-nginx.status.0.load_balancer.0.ingress.0.ip
    #value = data.kubernetes_service.ingress-nginx.external_ips
  }

DNS

I use dnsimple for my domain, and I'm calling this site k8. Why not.

  resource "dnsimple_record" "k8" {
    domain = var.dnsimple_domain
    name   = "k8"
    value  = data.kubernetes_service.ingress-nginx.status.0.load_balancer.0.ingress.0.ip
    type   = "A"
    ttl    = 300
  }
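
Once terraform has created the record, you can check that it resolves to the load balancer IP (substituting your own domain):

  dig +short k8.willschenk.com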

Cert Manager

cert-manager keeps track of certificates as custom resources within Kubernetes. We will use this to get our TLS traffic good to go.

    resource "helm_release" "cert-manager" {
      repository = "https://charts.jetstack.io"
      chart = "cert-manager"
      name = "cert-manager"
      namespace = "cert-manager"
      create_namespace = true
  
      set {
        name = "installCRDs"
        value = "true"
      }
    }

Config

Finally, we are going to stick the data that we just got from creating these endpoints into a kubernetes config map that our application will use to wire itself up.

We also create a namespace for all of our app stuff just to keep things organized.

  # We use this for the rails master key, adjust to your location

  data "local_file" "masterkey" {
    filename = "favoriteapp/config/master.key"
  }

  resource "kubernetes_namespace" "favoriteapp" {
    metadata {
      name = "favoriteapp"
    }
  }

  resource "kubernetes_config_map" "favoriteapp-config" {
    metadata {
      name = "favoriteapp-config"
      namespace = "favoriteapp"
    }

    data = {
      RAILS_MASTER_KEY = data.local_file.masterkey.content
      RAILS_ENV = "production"
      DATABASE_URL = digitalocean_database_cluster.favoriteapp-postgres.private_uri
      REDIS_URL = "redis://user:${random_password.redis_password.result}@redis-master.default.svc.cluster.local:6379"
    }
  }
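
After an apply you can double check what ended up in the config map. Note that this will print your master key and database credentials to the terminal:

  kubectl get configmap favoriteapp-config --namespace favoriteapp -o yaml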

Option 1: ClusterIssuer custom resource definition

I had some trouble adding this resource before the cluster had started; hopefully they've fixed it in a later release. In the meantime you may want to only add this file after everything else is up.

  provider "kubernetes-alpha" {
    host             = digitalocean_kubernetes_cluster.gratitude.endpoint
    token            = digitalocean_kubernetes_cluster.gratitude.kube_config[0].token
    cluster_ca_certificate = base64decode(
      digitalocean_kubernetes_cluster.gratitude.kube_config[0].cluster_ca_certificate
      )
  }

  resource "kubernetes_manifest" "cluster_issuer" {
    depends_on = [ digitalocean_kubernetes_cluster.gratitude, helm_release.cert-manager ]
    provider = kubernetes-alpha

    manifest = {
      apiVersion = "cert-manager.io/v1"
      kind = "ClusterIssuer"
      metadata = {
        name = "letsencrypt-prod"
      }
      spec = {
        acme = {
          email = "wschenk@gmail.com"
          server = "https://acme-v02.api.letsencrypt.org/directory"
          privateKeySecretRef = {
            name = "issuer-account-key"
          }
          solvers = [
            {
              http01 = {
                ingress = {
                  class = "nginx"
                }
              }
            }
          ]
        }
      }
    }
  }

Option 2: Setup using cluster-issuer.yml

Instead of using the kubernetes-alpha provider to set up the cluster issuer, we can write a simple YAML file and do it the kubectl way.

cluster-issuer.yml:

  apiVersion: cert-manager.io/v1
  kind: ClusterIssuer
  metadata:
    name: letsencrypt-prod
  spec:
    acme:
      # You must replace this email address with your own.
      # Let's Encrypt will use this to contact you about expiring
      # certificates, and issues related to your account.
      email: wschenk@gmail.com
      server: https://acme-v02.api.letsencrypt.org/directory
      privateKeySecretRef:
        name: issuer-account-key
      # Add a single challenge solver, HTTP01 using nginx
      solvers:
      - http01:
          ingress:
            class: nginx

Then apply it:

  kubectl apply -f cluster-issuer.yml

And we can look at the issuer and the certificate like so:

  kubectl describe clusterissuer letsencrypt-prod
  kubectl get cert --namespace favoriteapp
NAME                 READY   SECRET               AGE
issuer-account-key   True    issuer-account-key   34m

App deployment

Finally, we define the app itself. It has two moving pieces that can be scaled independently.

One is called favoriteapp, which is initially set to 2 replicas. We define two types of containers here: an init_container that runs on each pod startup to run the migrations (command = ["rake", "db:migrate"]), and the main container that serves the Rails application on port 3000.

The other is favoriteapp-workers, which runs the sidekiq command.

  resource "kubernetes_deployment" "favoriteapp" {
    metadata {
      name = "favoriteapp"
      labels = {
        app = "favoriteapp"
      }
      namespace = "favoriteapp"
    }

    spec {
      replicas = 2

      selector {
        match_labels = {
          app = "favoriteapp"
        }
      }

      template {
        metadata {
          name = "favoriteapp"
          labels = {
            app = "favoriteapp"
          }
        }

        spec {
          init_container {
            image = "wschenk/favoriteapp:master"
            image_pull_policy = "Always"
            name = "favoriteapp-init"
            command = ["rake", "db:migrate"]
            env_from {
              config_map_ref {
                name = "favoriteapp-config"
              }
            }
          }
          container {
            image = "wschenk/favoriteapp:master"
            image_pull_policy = "Always"
            name = "favoriteapp"
            port {
              container_port = 3000
            }
            env_from {
              config_map_ref {
                name = "favoriteapp-config"
              }
            }
          }
        }
      }
    }
  }

  resource "kubernetes_deployment" "favoriteapp-workers" {
    metadata {
      name = "favoriteapp-workers"
      namespace = "favoriteapp"

    }
    spec {
      replicas = 1

      selector {
        match_labels = {
          app = "favoriteapp-workers"
        }
      }

      template {
        metadata {
          name = "favoriteapp-workers"
          labels = {
            app = "favoriteapp-workers"
          }
        }

        spec {
          container {
            image = "wschenk/favoriteapp:master"
            name = "favoriteapp-workers"
            command = ["sidekiq"]
            env_from {
              config_map_ref {
                name = "favoriteapp-config"
              }
            }
          }
        }
      }
    }
  }

Now that we have the deployments running, we need to expose the web deployment to the cluster as a service (basically this gives it a name and a port that other things in the cluster can reach).

Once that service is defined, we define an ingress that lets the outside world connect to the internal service, which in turn connects to the pods running in the deployment.

  resource "kubernetes_service" "favoriteapp-service" {
    metadata {
      name = "favoriteapp-service"
      namespace = "favoriteapp"
    }

    spec {
      port {
        port = 80
        target_port = 3000
      }

      selector = {
        app = "favoriteapp"
      }
    }
  }

  resource "kubernetes_ingress" "favoriteapp-ingress" {
    wait_for_load_balancer = true
    metadata {
      name = "favoriteapp-ingress"
      annotations = {
        "kubernetes.io/ingress.class" = "nginx"
        "cert-manager.io/cluster-issuer" = "letsencrypt-prod"
        "cert-manager.io/acme-challenge-type" = "http01"
      }
      namespace = "favoriteapp"
    }
    spec {
      rule {
        host = "k8.willschenk.com"
        http {
          path {
            path = "/"
            backend {
              service_name = "favoriteapp-service"
              service_port = 80
            }
          }
        }
      }

      tls {
        hosts = [ "k8.willschenk.com" ]
        secret_name = "issuer-account-key"
      }
    }
  }

terraform and kubectl

Now we run terraform apply and, if you've entered all of your credentials correctly, the application should start up with all of the correct datastores, migrations run, the whole thing.
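
The full loop, roughly, is the standard Terraform workflow:

  terraform init
  terraform plan
  terraform apply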

You can walk through the flow to make sure that the app is working, that things get stored in the database, and that the Sidekiq jobs process the messages.

You can also configure kubectl locally so that you can examine the cluster.

  export CLUSTER_ID=$(terraform output -raw cluster-id)
  mkdir -p ~/.kube/
  curl -X GET \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${TF_VAR_do_token}" \
  "https://api.digitalocean.com/v2/kubernetes/clusters/$CLUSTER_ID/kubeconfig" \
  > ~/.kube/config
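
From there you can poke around, for example:

  kubectl get nodes
  kubectl get pods --namespace favoriteapp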

Manually reissuing the certificate

First look to see what the status of your certificate is:

kubectl get cert --namespace favoriteapp

And you can also look at the certificate request itself to see if everything is good.

kubectl describe certificaterequest issuer-account-key --namespace favoriteapp

Install the cert-manager plugin locally:

  cd /tmp
  curl -L -o kubectl-cert-manager.tar.gz https://github.com/jetstack/cert-manager/releases/download/v1.4.0/kubectl-cert_manager-linux-amd64.tar.gz
  tar xzf kubectl-cert-manager.tar.gz
  sudo mv kubectl-cert_manager /usr/local/bin
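
With the plugin installed you can force a reissue of the certificate; I believe the subcommand looks something like this:

  kubectl cert-manager renew issuer-account-key --namespace favoriteapp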

Looking at the deployment

Logs

Webapp:

kubectl logs --namespace favoriteapp deployment/favoriteapp

Workers:

  kubectl logs --namespace favoriteapp deployment/favoriteapp-workers

Migration (which runs in the init container):

  kubectl logs --namespace favoriteapp deployment/favoriteapp -c favoriteapp-init

Deploying a new version

First we make a change to the app, then check it in. Once things are finished building we can manually trigger a restart:

  kubectl rollout restart --namespace favoriteapp deployment/favoriteapp
  kubectl rollout restart --namespace favoriteapp deployment/favoriteapp-workers
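
And you can watch the rollout complete with:

  kubectl rollout status --namespace favoriteapp deployment/favoriteapp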

Setting up automatic deployment

We can also extend our GitHub Action to use kubectl to trigger the deployment itself. (You'll probably want to add a step in there to run tests too!) This is what that looks like.

First you need to add your .kube/config to the repository's secrets. Convert it to base64, then add a new secret named KUBE_CONFIG_DATA:

cat $HOME/.kube/config | base64

Then add the following steps to build-and-push.yaml:

        -
          name: Deploy App
          id: k8app
          uses: steebchen/kubectl@v2.0.0
          with: # defaults to latest kubectl binary version
            config: ${{ secrets.KUBE_CONFIG_DATA }}
            command: rollout restart --namespace favoriteapp deployment/favoriteapp
        -
          name: Deploy workers
          id: k8workers
          uses: steebchen/kubectl@v2.0.0
          with: # defaults to latest kubectl binary version
            config: ${{ secrets.KUBE_CONFIG_DATA }}
            command: rollout restart --namespace favoriteapp deployment/favoriteapp-workers

Final thoughts

What a journey this post has been! Stepping back a bit, it's not really clear to me that this is an improvement. I have a lot of applications on Heroku, which has a much simpler workflow: heroku create, git push and there you go. It locks you into a certain way of doing things, and buildpacks, while a bit more constraining than Dockerfiles, are about a zillion times easier to work with.

And on the other side, you have things like cloud functions, either using something like OpenFaaS or even different deployment models altogether. If you are in the Node or Deno ecosystems, what's going on with Deno Deploy or even NextJS is a much easier way to actually get something up and running. The level of complexity of Kubernetes is truly mind boggling, and a number of times during this write up I was muttering under my breath about a simpler world where we could FTP PHP files around…

Basically, I'm not sure that I often find myself with a problem where Kubernetes is the right solution. It's certainly very cool, and the idea of having a bunch of resources that, with a little guidance, sort of manage and heal themselves is pretty amazing. But I also feel that there's way more going on than I properly understand, and it's a lot of ceremony to make stuff happen.
