Recently I came across an article on faasd, and I thought I'd give it a try and see how easy it is to use. Server templating makes it easy to spin up a server with DNS already pointed at it, so following the whole disposability principle, we'll whip something up and see how it goes.

What is faasd?

Functions as a service are a way to package up simple functions as an API with a minimal amount of overhead. Firing up a whole Rails project, configuring a server, or spinning up a Heroku instance, Kubernetes cluster, etc. may be overkill for a simple proof of concept, and that proof of concept may be enough to last you a long time. AWS Lambda is the more common form of serverless functions, but that means you have to sign up for the whole AWS console, which I can barely understand. faasd is a very stripped-down platform that you can run on your VPS or even a Raspberry Pi, which lets you throw up something simple without having to, you know, deploy Kubernetes.

Set up the server

We first need to set up a clean server. I'm going to use Debian since I always do, but the installation instructions say to use Ubuntu.

resource "dnsimple_record" "faas" {
  domain = var.dnsimple_domain
  name   = "faas"
  value  = digitalocean_droplet.faas.ipv4_address
  type   = "A"
  ttl    = 3600
}

resource "digitalocean_droplet" "faas" {
  name     = "faas"
#  image    = "ubuntu-18-04-x64"
  image    = "debian-10-x64"
  size     = "s-2vcpu-2gb"
  monitoring = true
  region   = var.do_region
  ssh_keys = [
      var.ssh_fingerprint
  ]
}
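This references a few Terraform variables. A minimal variables.tf for them might look something like this (the descriptions and the region default are just my placeholders, and the DNSimple and DigitalOcean providers still need to be configured separately):

variable "dnsimple_domain" {
  description = "Domain managed in DNSimple, e.g. willschenk.com"
  type        = string
}

variable "do_region" {
  description = "DigitalOcean region for the droplet"
  type        = string
  default     = "nyc1"
}

variable "ssh_fingerprint" {
  description = "Fingerprint of an SSH key already added to DigitalOcean"
  type        = string
}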

Then run terraform apply to create it.

Install faasd

SSH into your server as root and run the following commands:

  export LETSENCRYPT_EMAIL=wschenk@gmail.com # Change to yours
  export FAASD_DOMAIN=faas.willschenk.com    # Change to yours

  apt-get install -y git
  cd /tmp
  git clone https://github.com/openfaas/faasd --depth=1
  cd faasd

  ./hack/install.sh

  cat /var/lib/faasd/secrets/basic-auth-password
  echo

The last line prints out the auth password, which you will need for the UI and to use the CLI.

If we save these commands as install.sh, we can run them on the remote server using:

ssh root@faas.willschenk.com < install.sh

Just visit your domain to see the UI. The username is admin and the password was printed out by the previous command.
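You can also poke at the gateway's REST API directly to make sure it's up. Something like this should return a JSON list of deployed functions, which is empty at this point (this is the same /system/functions endpoint that faas-cli talks to):

  export PASSWORD=$(ssh root@faas.willschenk.com cat /var/lib/faasd/secrets/basic-auth-password)
  curl -s -u admin:$PASSWORD https://faas.willschenk.com/system/functions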

Install faas-cli

My server is faas.willschenk.com, so let's first install the CLI tool and then point it at the server.

  curl -sSL https://cli.openfaas.com | sudo sh
  export OPENFAAS_URL=https://faas.willschenk.com
  ssh root@faas.willschenk.com cat /var/lib/faasd/secrets/basic-auth-password | faas-cli login --username admin --password-stdin

We can then deploy our first function with:

  faas-cli store deploy nodeinfo

And run it with:

  curl https://faas.willschenk.com/function/nodeinfo
Hostname: localhost

Arch: x64
CPUs: 2
Total mem: 1998MB
Platform: linux
Uptime: 269013

Nice and easy. A list of functions available from the store can be found with:

  faas-cli store list

Templates and registries

The basic idea is that we pull down one of the templates, code it up, and then push it up to our server. The way that it works is that we build the container, push it to a registry, and then tell our faasd server to pull that container from a registry and deploy it.
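faas-cli up wraps those steps together, but you can also run them one at a time to see what's happening, against whichever stack file you're working with (starbot.yml here is the one we'll generate in the next section):

  faas-cli build -f starbot.yml   # build the container image locally
  faas-cli push -f starbot.yml    # push the image to the registry
  faas-cli deploy -f starbot.yml  # tell the gateway to pull and run it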

To get a list of available templates:

  faas-cli template store list

Writing a node12 function

We'll create a new one (I'm taking this code from the Serverless For Everyone Else book) and we are going to push it to Docker Hub. I'm already logged in as wschenk, so that's what I'm setting OPENFAAS_PREFIX to.

  export OPENFAAS_PREFIX=wschenk

  faas-cli new --lang node12 starbot
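
Alongside the starbot handler directory, this generates a starbot.yml stack file. With OPENFAAS_URL and OPENFAAS_PREFIX set as above, it should come out roughly like this (check yours, since the gateway and image fields pick up your own settings):

version: 1.0
provider:
  name: openfaas
  gateway: https://faas.willschenk.com
functions:
  starbot:
    lang: node12
    handler: ./starbot
    image: wschenk/starbot:latest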

Let's install the axios package.

  cd starbot
  npm install --save axios

Then let's change starbot/handler.js to:

const axios = require("axios")

module.exports = async (event, context) => {
    let res = await axios.get("http://api.open-notify.org/astros.json")
    let body = `There are currently ${res.data.number} astronauts in space.`
    return context
        .status(200)
        .headers({"Content-type": "application/json"})
        .succeed(body)
}

Then we deploy the function:

  faas-cli up -f starbot.yml

And we can run it with:

curl https://faas.willschenk.com/function/starbot
There are currently 7 astronauts in space.

Nice!

Command as a cloud function

Let's look at how to package up a command as a function.

faas-cli new --lang dockerfile fortune

Then we edit fortune/Dockerfile. All we are adding here is RUN apk add fortune and setting fprocess to fortune.

FROM ghcr.io/openfaas/classic-watchdog:0.1.4 as watchdog

FROM alpine:3.12

RUN mkdir -p /home/app
RUN apk add fortune

COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog
RUN chmod +x /usr/bin/fwatchdog

# Add non root user
RUN addgroup -S app && adduser app -S -G app
RUN chown app /home/app

WORKDIR /home/app

USER app

# Populate example here - i.e. "cat", "sha512sum" or "node index.js"
ENV fprocess="fortune"
# Set to true to see request in function logs
ENV write_debug="false"

EXPOSE 8080

HEALTHCHECK --interval=3s CMD [ -e /tmp/.lock ] || exit 1

CMD ["fwatchdog"]

Then you can build and deploy with:

faas-cli up -f fortune.yml

Which now is available as:

curl https://faas.willschenk.com/function/fortune
When I get real bored, I like to drive downtown and get a great
parking spot, then sit in my car and count how many people ask me if
I'm leaving.
		-- Steven Wright

The whole process took me under a minute the first time I did it, so that's super cool.
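
If a function ever misbehaves, you can also tail its logs through the gateway with faas-cli, which is worth knowing before we look at metrics:

  faas-cli logs fortune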

Monitoring

OpenFaaS uses what they call the PLONK stack, which is a goofy but fun name; the P stands for Prometheus. Let's see how that works.

First we set up an SSH tunnel so we can connect to the Prometheus instance running on the server.

  ssh -L 9090:127.0.0.1:9090 root@faas.willschenk.com

Once this tunnel is up, we can connect to Prometheus locally.
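
You can open http://127.0.0.1:9090 in a browser, or hit the Prometheus HTTP API through the tunnel. For example, the gateway exports an invocation counter; I believe it's called gateway_function_invocation_total, but check which metrics your version actually exposes:

  curl 'http://127.0.0.1:9090/api/v1/query?query=gateway_function_invocation_total'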

Grafana

Now on your faas server, edit /var/lib/faasd/docker-compose.yaml to add the grafana service.

  grafana:
    image: docker.io/grafana/grafana:latest
    environment:
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_BASIC_ENABLED=false
    volumes:
      # we assume cwd == /var/lib/faasd
      - type: bind
        source: ./grafana/
        target: /etc/grafana/provisioning/
    cap_add:
      - CAP_NET_RAW
    depends_on:
      - prometheus
    ports:
      - "127.0.0.1:3000:3000"

We need to create the grafana directory and restart everything:

  mkdir -p /var/lib/faasd/grafana/
  systemctl daemon-reload && systemctl restart faasd

Finally, on our local machine, we need to open up an SSH tunnel to Grafana so we can access it. In my case:

  ssh -L 3000:127.0.0.1:3000 root@faas.willschenk.com

Add your Prometheus data source, pointing to http://prometheus:9090, and you are good to start adding panels.
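
Since the compose entry bind-mounts ./grafana/ over /etc/grafana/provisioning/, you could instead provision the data source from a file and skip the clicking. Something like this in /var/lib/faasd/grafana/datasources/prometheus.yaml should work (standard Grafana provisioning format; the file name itself doesn't matter):

apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true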

In closing

This makes it really easy to quickly spin something up. Getting logging and monitoring built in "for free" is a huge leg up over rolling your own solutions. I haven't jumped on the Kubernetes bandwagon much, but this makes it so simple to expose a tiny bit of functionality in a way that seems easy to maintain.

If your use case is about gluing a couple of things together, this is really pretty fascinating!
