We are going to look at how to use Terraform to deploy a Kubernetes cluster on DigitalOcean, add a managed Postgres database, and run Redis and OpenFaaS inside Kubernetes. This will show how to use Terraform to manage the configuration, and how we can access both cloud-managed and Kubernetes-managed services from OpenFaaS functions.

We are going to use the digitalocean, kubernetes, helm, and random Terraform providers.

The plan

  1. Provision a digitalocean_kubernetes_cluster
  2. Provision a digitalocean_database_cluster
  3. Provision 2 kubernetes_namespace for openfaas and openfaas-fn
  4. Provision a helm_release for openfaas
  5. Provision a helm_release for redis
  6. Provision 2 kubernetes_secret to point to the databases
  7. Deploy an OpenFaaS function that reads those secrets and talks to the database.

Let's go.

Install the software

I'm using Debian, your mileage may vary.

OpenFaaS

curl -sL https://cli.openfaas.com | sudo sh

Terraform

sudo apt-get install apt-transport-https --yes
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=$(dpkg --print-architecture)] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt update
sudo apt install terraform

kubectl

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl

helm

This is optional since we're using Terraform, but it's here for reference.

curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

Terraform

Providers

First we need to define our providers, which we will do in providers.tf:

  terraform {
    required_providers {
      digitalocean = {
        source = "digitalocean/digitalocean"
        version = "~> 2.0"
      }
    }
  }

  variable "do_token" {
    description = "digitalocean access token"
    type        = string
  }

  provider "digitalocean" {
    token             = var.do_token
  }

Then run

terraform init

To load them locally.

You should also define your do_token, most easily via the environment variable TF_VAR_do_token.
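A minimal sketch of that (the token value below is a placeholder; generate a real one under API > Tokens in the DigitalOcean control panel):

```shell
# Placeholder token -- substitute a real one from the DigitalOcean control panel
export TF_VAR_do_token="dop_v1_0000000000000000"

# Terraform maps any TF_VAR_<name> environment variable onto the variable <name>
printenv TF_VAR_do_token
```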

Digital Ocean Resources

Define digitalocean.tf:

  resource "digitalocean_kubernetes_cluster" "gratitude" {
    name    = "gratitude"
    region  = "nyc1"
    version = "1.20.2-do.0"

    node_pool {
      name       = "worker-pool"
      size       = "s-2vcpu-2gb"
      node_count = 3
    }
  }

  resource "digitalocean_database_cluster" "gratitude-postgres" {
    name       = "gratitude-postgres-cluster"
    engine     = "pg"
    version    = "11"
    size       = "db-s-1vcpu-1gb"
    region     = "nyc1"
    node_count = 1
  }

  output "cluster-id" {
    value = digitalocean_kubernetes_cluster.gratitude.id
  }

We can spin these up using terraform apply. This takes about 6 minutes for me.

Kubernetes

Now we can add our kubernetes namespaces. In another file called kubernetes.tf:

  provider "kubernetes" {
    host             = digitalocean_kubernetes_cluster.gratitude.endpoint
    token            = digitalocean_kubernetes_cluster.gratitude.kube_config[0].token
    cluster_ca_certificate = base64decode(
      digitalocean_kubernetes_cluster.gratitude.kube_config[0].cluster_ca_certificate
    )
  }

  resource "kubernetes_namespace" "openfaas" {
    metadata {
      name = "openfaas"
      labels = {
        role = "openfaas-system"
        access = "openfaas-system"
        istio-injection = "enabled"
      }
    }
  }

  resource "kubernetes_namespace" "openfaas-fn" {
    metadata {
      name = "openfaas-fn"
      labels = {
        role = "openfaas-fn"
        istio-injection = "enabled"
      }
    }
  }

We'll need to run terraform init again since we added a provider, and then we can terraform apply.

Helm

  provider "helm" {
    kubernetes {
      host = digitalocean_kubernetes_cluster.gratitude.endpoint
      cluster_ca_certificate = base64decode( digitalocean_kubernetes_cluster.gratitude.kube_config[0].cluster_ca_certificate )
      token = digitalocean_kubernetes_cluster.gratitude.kube_config[0].token
    }
  }

  resource "helm_release" "openfaas" {
    repository = "https://openfaas.github.io/faas-netes"
    chart = "openfaas"
    name = "openfaas"
    namespace = "openfaas"

    set {
      name = "functionNamespace"
      value = "openfaas-fn"
    }

    set {
      name = "generateBasicAuth"
      value = "true"
    }

    set {
      name = "ingress.enabled"
      value = "true"
    }
  }

  resource "random_password" "redis_password" {
    length           = 16
    special          = true
    override_special = "_%@"
  }

  resource "helm_release" "redis" {
    repository = "https://charts.bitnami.com/bitnami"
    chart = "redis"
    name = "redis"

    set {
      name = "auth.password"
      value = random_password.redis_password.result
    }

    set {
      name = "architecture"
      value = "standalone"
    }
  }

Once you have this file, do terraform init and then terraform apply and both OpenFaaS and Redis should be deployed to your cluster.

Secrets

  resource "kubernetes_secret" "redispassword" {
    metadata {
      name = "redispassword"
      namespace = "openfaas-fn"
    }

    data = {
      password = random_password.redis_password.result
    }
  }

  resource "kubernetes_secret" "postgresconnection" {
    metadata {
      name = "postgresconnection"
      namespace = "openfaas-fn"
    }

    data = {
       host     = digitalocean_database_cluster.gratitude-postgres.private_uri
    }
  }
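Once applied, you can sanity-check the secrets with kubectl (e.g. `kubectl -n openfaas-fn get secrets redispassword postgresconnection`). Kubernetes stores secret data base64-encoded, so you'll need to decode values on the way out; the round trip looks like this (`supersecret` is just an example value):

```shell
# Kubernetes base64-encodes secret data; the encode/decode round trip:
echo -n 'supersecret' | base64                   # prints c3VwZXJzZWNyZXQ=
echo -n 'c3VwZXJzZWNyZXQ=' | base64 --decode     # prints supersecret

# So to read the redis password back out of the cluster (assuming kubectl is configured):
# kubectl -n openfaas-fn get secret redispassword -o jsonpath='{.data.password}' | base64 --decode
```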

Verifying the deployment

Setup kubectl

  export CLUSTER_ID=$(terraform output -raw cluster-id)
  mkdir -p ~/.kube/
  curl -X GET \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${TF_VAR_do_token}" \
  "https://api.digitalocean.com/v2/kubernetes/clusters/$CLUSTER_ID/kubeconfig" \
  > ~/.kube/config

If you have your TF_VAR_do_token set up correctly, it should create a valid config file.

Test this with

kubectl cluster-info
Kubernetes control plane is running at https://39cef8c8-ca33-40f1-9454-3373707a22ef.k8s.ondigitalocean.com
CoreDNS is running at https://39cef8c8-ca33-40f1-9454-3373707a22ef.k8s.ondigitalocean.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Verifying OpenFaaS

We can then verify the deploy with:

kubectl -n openfaas get deployments -l "release=openfaas, app=openfaas"
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
alertmanager        1/1     1            1           19m
basic-auth-plugin   1/1     1            1           19m
gateway             1/1     1            1           19m
nats                1/1     1            1           19m
prometheus          1/1     1            1           19m
queue-worker        1/1     1            1           19m

Connecting to OpenFaaS

Setup the proxy

In a new window, let's set up port forwarding from your local machine so we can connect to the openfaas gateway in Kubernetes.

kubectl port-forward svc/gateway -n openfaas 8080:8080

Get the OpenFaaS login credentials and log in

# This command retrieves your password
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)

# This command logs in and saves a file to ~/.openfaas/config.yml
echo -n $PASSWORD | faas-cli login --username admin --password-stdin
Calling the OpenFaaS server to validate the credentials...
credentials saved for admin http://127.0.0.1:8080

And now we can list out our deployed functions:

faas-cli list
Function                        Invocations     Replicas

Not a whole lot there yet.

Testing out deploying a function

faas-cli store deploy nodeinfo
echo | faas-cli invoke nodeinfo

Deployed. 202 Accepted.
URL: http://127.0.0.1:8080/function/nodeinfo

Hostname: nodeinfo-8545846564-wpqm6

Arch: x64
CPUs: 2
Total mem: 1995MB
Platform: linux
Uptime: 361

Writing an OpenFaaS function that talks to Redis

Get the template running

Let's create our first function. We need to pull the templates locally, so let's do that with:

faas-cli template pull
Fetch templates from repository: https://github.com/openfaas/templates.git at 

And create our function; I'm going to use Ruby.

faas-cli new --lang ruby rubyredis

Change the image in rubyredis.yml to use your Docker Hub username, and then let's deploy it:
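If you'd like to script that edit, sed can do the swap; a rough sketch, where `yourname` is a placeholder for your Docker Hub username:

```shell
# `yourname` is a placeholder -- substitute your Docker Hub username
DOCKER_USER=yourname
sed -i "s|image: .*/rubyredis|image: ${DOCKER_USER}/rubyredis|" rubyredis.yml

# Confirm the swap took
grep 'image:' rubyredis.yml
```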

faas-cli up -f rubyredis.yml

And if that's successful, we can invoke it with:

echo | faas-cli invoke rubyredis
Hello world from the Ruby template

Adding redis

Now that we have it working, let's add Redis to the picture.

First we need to add the secret to the rubyredis.yml file, so that it references the secret we defined above in terraform:

  version: 1.0
  provider:
    name: openfaas
    gateway: http://127.0.0.1:8080
  functions:
    rubyredis:
      lang: ruby
      handler: ./rubyredis
      image: wschenk/rubyredis:latest
      secrets:
      - redispassword

In the Gemfile add the redis gem:

source 'https://rubygems.org'

gem "redis"

Now we need to change handler.rb to connect to the Redis service on the cluster, which is redis-master.default (default is the namespace it's in), with the password that we load from /var/openfaas/secrets/password.

  require 'redis'

  class Handler
    def run(req)
      @redis = Redis.new( host: "redis-master.default", password: File.read( '/var/openfaas/secrets/password' ) )

      return @redis.incr( 'mykey' )
    end
  end

We can then redeploy using

faas-cli up -f rubyredis.yml

And we can invoke it now using

echo | faas-cli invoke rubyredis

Each time you run this you should see the result increment.

Writing an OpenFaaS function that talks to Postgres

Start a template

faas-cli new --lang ruby rubypostgres

Then let's tweak the rubypostgres.yml file to add the secret (and your Docker username!):

  version: 1.0
  provider:
    name: openfaas
    gateway: http://127.0.0.1:8080
  functions:
    rubypostgres:
      lang: ruby
      handler: ./rubypostgres
      image: wschenk/rubypostgres:latest
      build_args:
        ADDITIONAL_PACKAGE: build-base postgresql-dev
      secrets:
      - postgresconnection

Then we need to add the 'pg' gem:

  source 'https://rubygems.org'

  gem "pg", "~> 1.2"
  gem "database_url"
  gem "json"

Then in the handler

  require 'json'
  require 'database_url'
  require 'pg'

  class Handler
    def run(req)
      c = DatabaseUrl.to_active_record_hash(File.read( '/var/openfaas/secrets/host' ) )

  #    {"adapter":"postgresql","host":"private-gratitude-postgres-cluster-do-user-1078430-0.b.db.ondigitalocean.com","database":"defaultdb","port":25060,"user":"doadmin","password":"ievezzbyz0a1stxa"}

      # Output a table of current connections to the DB
      conn = PG.connect(
        c[:host],
        c[:port],
        nil,
        nil,
        c[:database],
        c[:user],
        c[:password] )

      r = []
      conn.exec( "SELECT * FROM pg_stat_activity" ) do |result|
        r << "     PID | User             | Query"
        result.each do |row|
          r << " %7d | %-16s | %s " %
               row.values_at('pid', 'usename', 'query')
        end
      end

      return r.join( "\n" );
    end
  end

Now we build it:

faas-cli up -f rubypostgres.yml

And invoke:

echo | faas-cli invoke rubypostgres
     PID | User             | Query
      76 | postgres         | <insufficient privilege> 
      69 | postgres         | <insufficient privilege> 
      65 |                  | <insufficient privilege> 
      67 | postgres         | <insufficient privilege> 
      72 | postgres         | <insufficient privilege> 
      78 | _dodb            | <insufficient privilege> 
   22113 | doadmin          | SELECT * FROM pg_stat_activity 
      63 |                  | <insufficient privilege> 
      62 |                  | <insufficient privilege> 
      64 |                  | <insufficient privilege>

Conclusion

When you are done, you can use terraform destroy to remove everything. Don't do that in production!!!

Terraform is pretty nifty in that it lets you spin up the whole environment easily, and OpenFaaS is a very nice way to work with functions. Kubernetes is a bit daunting, but once it's up and running it gives you a great way to scale things up and down.

We set up the cluster and deployed OpenFaaS and Redis on it. We showed how to connect to Redis from OpenFaaS, as well as how to connect to a managed Postgres instance from an OpenFaaS function.
