Manage deployment transitions for static applications

Posted on 2021-04-25 in Trucs et astuces

When you deploy a frontend app, the names of your assets usually contain their hash so the files can be cached aggressively. So instead of just main.js you will have something like main.1234.js. The problem is that your HTML references main.1234.js, so on the next deploy, if the file changed, it will be rebuilt under a new name such as main.5678.js. If the browser loaded all your files immediately when it opens your index.html, this wouldn't be a problem: your user either already has the old files or will load the new ones.

Note

Depending on your build process, this can concern JS, CSS, images or video files.

However, big JS apps are usually split into many small chunks that are loaded on demand. So you could end up in a situation where the user starts using the old version of the app, which references main.1234.js. You deploy your application and the old main.1234.js file is removed. Now your user's browser needs to download main.1234.js, but it doesn't exist any more. Ouch.

To avoid that, the idea is to keep serving the files from previous versions of your application. Instead of removing them right away, we keep them around for a given amount of time. I think a week is a good compromise between the number of assets to keep (which takes space) and the length of a user session. Of course, user sessions can be way longer than that; for these edge cases, we accept the breakage.

Depending on how you deploy your application, I think there are two main strategies for that:

  1. You copy the build files to a server: in this case, copy the new files and then run find . -mtime +7 -delete in the folder where you deployed them. This will delete all files that haven't been modified for more than one week. Since a file that hasn't changed gets the same hash on each build, the copy will refresh its modification time if it still exists, so it won't be deleted.
  2. You build the files into some kind of archive (a Docker image for instance) and deploy that: you must restore the old files from a cache so they are present in your next deployment. It is this more complex technique (coupled with Docker) that I will discuss in more detail below.
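The first strategy can be sketched as a small deploy script. This is a minimal sketch assuming a Linux server with GNU cp and find; the deploy function and the demo paths are hypothetical:

```shell
#!/usr/bin/env bash
# Sketch of strategy 1: copy the new build over the deployed folder,
# then purge files that no build has written for more than a week.
set -eu

deploy() {
    local build_dir="$1" deploy_dir="$2"
    # Copying refreshes the mtime of every file that is part of the
    # current build, including unchanged ones (same content, same hash),
    # which protects them from the cleanup below.
    cp -R "$build_dir"/. "$deploy_dir"/
    # Delete files that haven't been (re)written for more than one week.
    find "$deploy_dir" -type f -mtime +7 -delete
}

# Demo: an old chunk (9 days old) gets purged, current files survive.
workdir="$(mktemp -d)"
mkdir -p "$workdir/build" "$workdir/www"
echo 'new' > "$workdir/build/main.5678.js"
echo 'old' > "$workdir/www/main.1234.js"
touch -d '9 days ago' "$workdir/www/main.1234.js"

deploy "$workdir/build" "$workdir/www"
ls "$workdir/www"   # main.5678.js only
```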

Given my constraints, I decided to store the previous files in a Docker image and use a multi-stage build to copy them after building the new files. My Dockerfile looks like this:

FROM mydocker-registry.io/my-image:latest AS previous-assets
# Create the directory if it doesn't exist.
# This should only happen during the first build, when the cache is empty.
RUN mkdir -p /var/www/frontend-app/


# Build the application.
FROM node:14.16.0-slim AS builder
WORKDIR /app

COPY . ./
# Copy the previous assets into a dedicated folder so our new build won't delete
# them by accident.
COPY --from=previous-assets /var/www/frontend-app/ ./previous-builds/

RUN yarn install --frozen-lockfile
RUN yarn build && \
    # Restore static files from previous builds. Use --no-clobber so files from
    # the current build are never overridden (this option is not available in the
    # cp command shipped with alpine, nor in the COPY instruction). This way,
    # files of the current build keep their fresh modification time.
    # Use --archive to preserve the modification time of the restored files so we
    # can correctly delete the old ones later.
    cp -R --no-clobber --archive ./previous-builds/* ./build/ && \
    # Delete source maps.
    find build -name \*.js\*.map -type f -delete && \
    # Delete files older than 7 days.
    find build -mtime +7 -type f -delete


# Run the app in nginx.
FROM nginx:latest AS runner
RUN mkdir -p /var/www/frontend-app
WORKDIR /var/www/frontend-app

COPY --from=builder /app/build /var/www/frontend-app/
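The effect of those cp flags can be checked outside Docker. This is a minimal demo assuming GNU coreutils; all paths are made up:

```shell
#!/usr/bin/env bash
# Demo of the restore step: --no-clobber keeps the freshly built files,
# --archive preserves the mtime of restored ones so the find cleanup can
# still judge their age. Paths are hypothetical.
set -eu

demo="$(mktemp -d)"
mkdir -p "$demo/previous-builds" "$demo/build"

# A chunk only present in the previous deployment, 3 days old.
echo 'old chunk' > "$demo/previous-builds/chunk.1234.js"
touch -d '3 days ago' "$demo/previous-builds/chunk.1234.js"
# index.html exists in both; the current build must win.
echo 'old html' > "$demo/previous-builds/index.html"
echo 'new html' > "$demo/build/index.html"

# || true: some cp versions report skipped files through a nonzero exit status.
cp --archive --no-clobber "$demo/previous-builds/." "$demo/build/" || true

ls "$demo/build"   # chunk.1234.js index.html
```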

The build steps then look like this:

  1. Pull or create the assets image with a custom script: create-assets-image.sh mydocker-registry.io/my-image. The script is:

     #!/usr/bin/env bash
     
     set -e
     set -u
     set -o pipefail
     
     ASSETS_IMAGE_NAME="${1:-previous-assets}"
     ASSETS_IMAGE_TAG=latest
     readonly ASSETS_IMAGE_NAME
     readonly ASSETS_IMAGE_TAG
     
     # If we don't already have the image for previous assets, we create it to be
     # sure to have this base image.
     if ! docker pull "${ASSETS_IMAGE_NAME}:${ASSETS_IMAGE_TAG}"; then
         docker pull nginx:latest
         docker tag nginx:latest "${ASSETS_IMAGE_NAME}:${ASSETS_IMAGE_TAG}"
     fi
    
  2. Build the image and tag it twice: docker build --tag mydocker-registry.io/my-image:$COMMIT_SHA --tag mydocker-registry.io/my-image:latest .. I need two tags on the same content. One is the production image I deploy and contains the commit hash in its version. The other always has the same name, so I know which image to use to fetch the previous assets.

  3. Push both tags: docker push mydocker-registry.io/my-image:$COMMIT_SHA && docker push mydocker-registry.io/my-image:latest (docker push only accepts one image reference per invocation).
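Chaining these three steps is typically the job of a CI pipeline. As a hypothetical example (the job name, image names and resource_group value are assumptions, not part of my setup), a GitLab CI job could run them and serialize the builds, which also addresses the note below:

```yaml
# Hypothetical GitLab CI job wiring the three steps together.
# $CI_COMMIT_SHA is provided by GitLab.
build-frontend:
  image: docker:latest
  services:
    - docker:dind
  # Only one job at a time in this group, so `latest` always
  # holds the assets of the previous finished build.
  resource_group: frontend-deploy
  script:
    - ./create-assets-image.sh mydocker-registry.io/my-image
    - docker build --tag mydocker-registry.io/my-image:$CI_COMMIT_SHA --tag mydocker-registry.io/my-image:latest .
    - docker push mydocker-registry.io/my-image:$CI_COMMIT_SHA
    - docker push mydocker-registry.io/my-image:latest
```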

Note

To avoid issues with the latest tag and to always have the files from the previous build available, you must wait for a build to finish before starting a new one.