Enable basic authentication for all pages of a NextJS site

Posted on 2021-03-28 in Trucs et astuces. Last modified on: 2021-03-30

It's not as obvious as it seems. You can protect your API routes or some pages by following the documentation, but there is nothing to protect everything in one go with basic authentication (to shield your pre-production site from normal users, for instance). Despite NextJS having a server component, I didn't find a way to do it easily with a middleware. So I decided to put an nginx in front of NextJS to handle the authentication.
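For a single API route, the documented approach boils down to checking the Authorization header yourself. Here is a minimal sketch of what that looks like (the BASIC_AUTH_USER and BASIC_AUTH_PASSWORD environment variables are my own naming, not something NextJS provides):

import { NextApiRequest, NextApiResponse } from "next";

// Hypothetical env vars holding the expected credentials.
const USER = process.env.BASIC_AUTH_USER;
const PASSWORD = process.env.BASIC_AUTH_PASSWORD;

export default async (req: NextApiRequest, res: NextApiResponse) => {
  const header = req.headers.authorization ?? "";
  const [scheme, encoded] = header.split(" ");
  if (scheme === "Basic" && encoded) {
    // Basic auth credentials are "user:password" encoded in base64.
    const [user, password] = Buffer.from(encoded, "base64").toString().split(":");
    if (user === USER && password === PASSWORD) {
      return res.status(200).json({ message: "authenticated" });
    }
  }
  // Ask the client to prompt for credentials.
  res.setHeader("WWW-Authenticate", 'Basic realm="Restricted"');
  return res.status(401).json({});
};

Repeating this in every page and API route doesn't scale, hence the nginx approach below.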

Since this site is deployed in Kubernetes, I used the sidecar pattern to run an nginx container next to my NextJS container.

My nginx configuration looks like this:

upstream app_server {
  server 127.0.0.1:{{ .Values.container.port }} fail_timeout=0;
}

server {
    listen 80;
    root /var/www/website/;
    client_max_body_size 1G;

    access_log /dev/stdout;
    error_log  stderr;

    location / {
        # Only protect / with authentication and not @nextjs by placing the directives here.
        # If you don't, nginx will require you to authenticate for the /api/health route even
        # if you disable authentication for it since it's forwarded to @nextjs.
        {{ if .Values.sidecar.nginx.enableBasicAuth -}}
        auth_basic           "Pre-Production. Access Restricted";
        auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
        {{- end }}

        location /nghealth {
            {{ if .Values.sidecar.nginx.enableBasicAuth -}}
            auth_basic off;
            {{- end }}
            return 200;
        }

        location /api/health {
            {{ if .Values.sidecar.nginx.enableBasicAuth -}}
            auth_basic off;
            {{- end }}
            try_files $uri @nextjs;
        }

        try_files $uri @nextjs;
    }

    location @nextjs {
        proxy_connect_timeout 30;
        proxy_send_timeout 30;
        proxy_read_timeout 30;
        send_timeout 30;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # We have another proxy in front of this one. It will capture traffic
        # as HTTPS, so we must not set X-Forwarded-Proto here since it's already
        # set with the proper value.
        # proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_server;
    }
}

Tip

Always have one route to check that nginx is OK and another to check that your app is OK. This way, in case of failure, it will be easier to spot the faulty component.

Tip

Never protect the health routes with authentication: while you can configure your probes to pass the Authorization header, when I tried I encountered errors with my GCP load balancers, which also need to check that everything is fine in order to route traffic directly to the pod.
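Once deployed, it's easy to check the whole behavior with curl (preprod.example.com is a placeholder for your actual host):

# The protected pages must answer 401 without credentials...
curl -i http://preprod.example.com/
# ...and 200 with them.
curl -i -u user:password http://preprod.example.com/
# The health routes must answer 200 without any credentials.
curl -i http://preprod.example.com/nghealth
curl -i http://preprod.example.com/api/health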

As you can guess, I'm using Helm to deploy this, so this configuration file lives in a dedicated ConfigMap template like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: website-reverse-proxy
data:
  website.conf: |
    upstream app_server {
        server 127.0.0.1:{{ .Values.container.port }} fail_timeout=0;
    }
    [Cut for brevity]

Since the authentication is only there to prevent people who are not in the company from viewing the site, I decided to include the content of the .htpasswd file in the ConfigMap above. You probably don't want to do that if it's sensitive; rely on a Secret instead. For that, I just created the .htpasswd file locally with the htpasswd command and copied its content into my ConfigMap.
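If you've never generated one, the htpasswd utility from the Apache tools can print an entry without touching any file (admin and the password here are placeholders):

# -n prints the entry to stdout instead of updating a file,
# -b takes the password on the command line.
htpasswd -nb admin 'changeme'

The resulting line can then live as an extra key in the same ConfigMap, since nginx already looks for /etc/nginx/conf.d/.htpasswd in the mounted directory:

data:
  website.conf: |
    [Cut for brevity]
  .htpasswd: |
    admin:[hash generated by htpasswd]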

I can then mount both of these files into the container in my deployment.yaml template so nginx can use them directly:

containers:
    [NextJS omitted for brevity]
    - name: nginx-sidecar
      image: nginx:stable
      imagePullPolicy: Always
      ports:
        - name: http
          containerPort: {{ .Values.service.port }}
          protocol: TCP
      volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d
          readOnly: true
      {{ if .Values.sidecar.nginx.probe.enabled -}}
      livenessProbe:
        httpGet:
          # When we can access this route, nginx is alive, but not necessarily ready
          # (i.e. it may not be able to serve traffic yet).
          path: {{ .Values.sidecar.nginx.probe.path }}
          port: {{ .Values.service.port }}
        timeoutSeconds: {{ .Values.sidecar.nginx.probe.livenessTimeOut }}
      readinessProbe:
        httpGet:
          # The container cannot be ready (that is, accepting traffic) until nginx can talk
          # to the NextJS container. So we go through nginx (with its port) to the NextJS
          # container (with its health path) to check this.
          # Since it can take a few seconds, we have an initialDelaySeconds.
          path: {{ .Values.container.probe.path }}
          port: {{ .Values.service.port }}
        initialDelaySeconds: {{ .Values.sidecar.nginx.probe.initialDelaySeconds }}
        timeoutSeconds: {{ .Values.sidecar.nginx.probe.livenessTimeOut }}
      {{- end }}
volumes:
  - name: nginx-conf
    configMap:
      name: website-reverse-proxy

I need both probes:

  • livenessProbe to check that nginx is up and responding to requests.
  • readinessProbe to check that nginx can communicate with NextJS and can therefore serve actual traffic. For this one, I target the health route of NextJS through nginx, using nginx's port and not NextJS's. Hence the need for these two routes to be accessible without authentication.

Note

I also have both probes in my NextJS deployment, and they both target the /api/health route directly. Again, this seems required for GCP load balancers to work correctly.
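For reference, here is a sketch of the values these templates expect in values.yaml. The keys match the templates above; the numbers are plausible defaults, not necessarily what I use:

container:
  port: 3000
  probe:
    path: /api/health
service:
  port: 80
sidecar:
  nginx:
    enableBasicAuth: true
    probe:
      enabled: true
      path: /nghealth
      livenessTimeOut: 5
      initialDelaySeconds: 10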

Lastly, my NextJS health route in pages/api/health.ts:

import { NextApiRequest, NextApiResponse } from "next";

// If this handler runs, NextJS is up, so always answer 200.
export default async (req: NextApiRequest, res: NextApiResponse) => {
  return res.status(200).json({});
};
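Echoing the note above, the probes on the NextJS container itself could look like this sketch, both targeting the health route directly on the app's port:

livenessProbe:
  httpGet:
    path: {{ .Values.container.probe.path }}
    port: {{ .Values.container.port }}
readinessProbe:
  httpGet:
    path: {{ .Values.container.probe.path }}
    port: {{ .Values.container.port }}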

History

  • 2021-03-30: I fixed the configurations and added some notes to make it work better. With the previous implementation, some problems could occur, as explained in the body of the article. Go here to view the changes.