fswatch is a cross-platform command that lets you watch a directory for changes. Let's give it a try:

  brew install fswatch

And we can test it:

  mkdir tmp
  fswatch tmp

And then in another terminal:

  touch tmp/b

And see if the first terminal prints anything out!
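
If it's working, fswatch prints the absolute path of each file that changed, one per line, so you should see something along these lines (the actual path depends on where you ran it):

  /path/to/tmp/b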

Docker

Ok, so that's simple. Let's see if we can put it in a Docker container and see if it works across volumes.

  FROM debian:12

  RUN apt-get -q update && apt-get install -y fswatch

  CMD fswatch /data

Then build it:

  docker build . -t fswatch_test

Finally, start it up to test:

  docker run --rm -it -v ./tmp:/data fswatch_test
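
With the container running, touch a file inside ./tmp on the host and the container should print the matching path under /data, though the exact events reported can vary by platform:

  # On the host, in another terminal
  touch tmp/c

  # The container should print something like
  # /data/c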

Filtering on events

Events is a funny name here, because we can use them both to watch for specific changes, like the Created and Updated events, and to filter on file types with flags like IsFile or IsDirectory.

  fswatch -x --event Updated --event IsFile tmp
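
The -x flag prints the event flags after each path, so touching a file inside tmp should produce output roughly like this (the exact flags depend on the platform):

  /path/to/tmp/b Updated IsFile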

Simple job processing queue

Let's have two scripts: one that writes a file into a directory to record an event, and another that loads the files up as they come in and does something with them.

job_add:

  #!/bin/bash

  # Directory that holds the job files
  directory=queue
  mkdir -p ${directory}

  # Default to a ping event if no arguments are given
  if [ -z "$1" ]; then
      command="event=ping"
  else
      command="$@"
  fi

  time=$(date +"%Y-%m-%d-%H:%M:%S")
  outfile=${directory}/${time}.job

  # jo turns the key=value pairs into a JSON object
  jo ${command} > ${outfile}
  cat ${outfile}

We can run this and look at a few different outputs:

  bash job_add
  {"event":"ping"}

or

  bash job_add event=build id=1234
  {"event":"build","id":1234}

Watch for new files

Now we can implement job_watcher. This first looks in the queue directory for all of the job files and, for any that don't yet have a matching log file, calls process_job. After that, it starts up fswatch and processes each job file as it changes.

  #!/bin/bash

  function setup {
      directory=queue
      mkdir -p ${directory}
  }

  # Watch for file changes
  function watch_for_changes {
      fswatch --event Updated \
              --event IsFile \
              ${directory} | \
          while read -r line ;
          do
              # Only react to .job files
              if [[ $line == *.job ]]; then
                  if [ -f "$line" ]; then
                      process_job "$line"
                  fi
              fi
          done
  }

  # Look for all jobs that haven't been run yet
  function catch_up {
      ls -1 ${directory}/*.job 2> /dev/null | \
          while read -r job ;
          do
            outfile=$(echo $job | sed 's/\.job$/.log/')

            if [ ! -f "$outfile" ]; then
                echo Running $job
                process_job "$job"
            fi
          done
  }

  # Read the job file and write the result next to it as a .log file
  function process_job {
      type=$(jq -r '.event' "$1")
      outfile=$(echo $1 | sed 's/\.job$/.log/')

      if [ "$type" == 'ping' ]; then
          echo pong > "$outfile"
          echo Got ping event
      else
          echo error > "$outfile"
          echo Unknown event $type
      fi
  }

  setup
  catch_up
  watch_for_changes
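
Before putting the watcher in a container, we can sanity check it locally, assuming fswatch, jo, and jq are installed on the host:

  # Terminal 1: start the watcher
  bash job_watcher

  # Terminal 2: add a job
  bash job_add

  # Terminal 1 should print "Got ping event" and a matching
  # .log file should appear next to the .job in queue/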

Dockerfy

And let's see if we can communicate across containers! Here's a new Dockerfile.watcher file:

  FROM debian:12

  RUN apt-get -q update && apt-get install -y fswatch jo jq git

  WORKDIR /app
  COPY job_watcher .

  CMD bash /app/job_watcher

Easy build with:

  docker build . -f Dockerfile.watcher -t watcher_test

Then start it up:

  docker run --rm -it -v ./queue:/app/queue watcher_test

Then, if we add a couple of jobs from the host:

  bash job_add
  bash job_add event=test

The watcher container will respond with:

  Got ping event
  Unknown event test
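
The results also land next to the jobs as .log files in the shared queue directory, so from the host we should see something like:

  cat queue/*.log
  pong
  error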
