Protect an Application

This quickstart guide walks you through the steps required to protect an application with heimdall. You'll learn how to use heimdall as an authentication & authorization proxy in front of your application and also how to implement an Edge-level Authorization Architecture (EAA) with heimdall's help.

Overview

In this guide we’re going to configure two setups, both protecting a service which exposes a couple of endpoints:

  • The /public endpoint is, as the name implies, public. Every request to it should be forwarded as is.

  • The /user endpoint should only be accessible to users with the role user.

  • The /admin endpoint should only be accessible to users with the role admin.

  • The /private endpoint, as well as any other potentially exposed endpoint, should not be accessible at all; all requests to them should be rejected.

For authentication purposes, we're going to use JWTs containing the respective role. Authorization will happen with the help of the Open Policy Agent (OPA).
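
To give you an idea of what these tokens look like: the decoded payload of the admin demo token used later in this guide carries, besides standard claims like exp, iat and jti, the following entries relevant for our setup:

{
  "iss": "demo_issuer",
  "sub": "1",
  "role": "admin"
}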

In both setups, we’re going to create minimal but complete environments using docker compose with

  • traefik as an edge proxy,

  • containous/whoami (a service that just echoes back everything it receives), which mimics our service exposing the endpoints described above,

  • an NGINX server serving the public key used to verify the JWTs mentioned above (it just mimics the JWKS endpoint typically exposed by an OIDC provider),

  • OPA, which evaluates the authorization policy described above, and

  • heimdall, orchestrating everything to implement the requirements described above.

This and similar quickstarts, demonstrating other integration options, are also available on GitHub.

Prerequisites

To be able to follow this guide, you'll need the following tools installed locally:

  • Docker and Docker Compose, used to run the environments described below, and

  • curl, used to interact with the exposed endpoints.

Configure

  1. Heimdall can be configured via environment variables as well as via a configuration file. For simplicity, we'll use a configuration file here. Create a config file named heimdall-config.yaml with the following contents:

    log: (1)
      level: debug
    
    tracing:
      enabled: false
    
    metrics:
      enabled: false
    
    serve: (2)
      decision:
        trusted_proxies:
        - 0.0.0.0/0
    
    mechanisms: (3)
      authenticators:
        - id: deny_all (4)
          type: unauthorized
        - id: anon (5)
          type: anonymous
        - id: jwt_auth (6)
          type: jwt
          config:
            jwks_endpoint: http://idp:8080/.well-known/jwks
            assertions:
              issuers:
                - demo_issuer
      authorizers:
        - id: opa (7)
          type: remote
          config:
            endpoint: http://opa:8181/v1/data/{{ .Values.policy }}
            payload: "{}"
            expressions:
              - expression: |
                  Payload.result == true
      finalizers:
        - id: create_jwt (8)
          type: jwt
        - id: noop (9)
          type: noop
    
    default_rule: (10)
      methods:
        - GET
        - POST
      execute:
        - authenticator: deny_all
        - finalizer: create_jwt
    
    providers:
      file_system: (11)
        src: /etc/heimdall/rules.yaml
        watch: true
    1Since heimdall emits logs on error level by default, and we would like to see what is going on, we set the log level to debug. This way, we'll see not only the results of a particular rule execution (which is what you would see with the log level set to info), but also what is going on within a rule. In addition, we disable tracing and metrics collection, which are otherwise pushed to an OTEL agent by default, to avoid error statements related to the unavailability of that agent. You can find more information about available observability options in the Observability chapter.
    2This configuration instructs heimdall to trust X-Forwarded-* headers from any source. We need it here for integration with Traefik, which uses these headers while forwarding requests to heimdall and whose IP depends on your local docker configuration. Never do this in production and use allowed IPs instead (a minimal sketch follows these notes)! Please also take a look at the documentation of the trusted_proxies property and the Security Considerations for more details.
    3Here we define our catalogue of mechanisms to be used in upstream service specific rules. In this case we define authenticators, authorizers and finalizers.
    4These two lines define the unauthorized authenticator named deny_all. It rejects all requests.
    5These two lines define the anonymous authenticator named anon. It allows any request passing through and creates a subject with ID set to anonymous. You can find more information about the subject and other objects here.
    6This and the following lines define and configure the jwt authenticator named jwt_auth. With the given configuration it will check whether a request contains an Authorization header with a bearer token in JWT format and validate it using key material fetched from the JWKS endpoint. It will reject all requests without a valid JWT; otherwise it will create a subject with its ID set to the value of the sub claim from the token and also add all claims as a key-value map to the subject's Attributes property.
    7Here we define and configure a remote authorizer named opa. Please note how we allow particular settings to be overridden; you'll see this being used below, when we define the rules.
    8The following two lines define the jwt finalizer. Without any configuration, as used here, it will create a JWT out of the subject object with standard claims, setting the sub claim to the value of the subject's ID. The key material used for signing will be generated on start up. This is fine for our demo purposes; for real scenarios you should definitely configure the key material explicitly.
    9These two lines conclude the definition of our mechanisms catalogue and define the noop finalizer, which, as its type implies, does nothing.
    10With the above catalogue in place, we can now define a default rule, which kicks in if no other rule matches the request. In addition, it acts as a base for the definition of regular (upstream service specific) rules. In this case it allows only HTTP GET and POST requests and defines a secure default authentication & authorization pipeline, which refuses any request by making use of the deny_all authenticator. If a regular rule overrides that authenticator, a JWT is created for the upstream service thanks to the jwt finalizer.
    11The last few lines configure the file_system provider, which allows loading of regular rules from the file system. The provider is also configured to watch for changes, so you can modify the rules we're going to create while playing around.
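
    As noted above, trusting any source is only acceptable for local experiments. For a production setup you would restrict trusted_proxies to the network your edge proxy actually uses. A minimal sketch of such a configuration is shown below; the CIDR used is just an illustration and depends on your environment:

    serve:
      decision:
        trusted_proxies:
        - 172.19.0.0/16 # replace with the actual network of your proxy
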
  2. Now, create a rule file named upstream-rules.yaml, which will implement the authentication and authorization requirements of our service, and copy the following contents to it:

    version: "1alpha4"
    rules:
    - id: demo:public  (1)
      match:
        path: /public
      forward_to:
        host: upstream:8081
      execute:
      - authenticator: anon
      - finalizer: noop
    
    - id: demo:protected  (2)
      match:
        path: /:user
        with:
          path_glob: "{/user,/admin}"
      forward_to:
        host: upstream:8081
      execute:
      - authenticator: jwt_auth
      - authorizer: opa
        config:
          values:
            policy: demo/can_access
          payload: |
            {
              "input": {
                "role": {{ quote .Subject.Attributes.role }},
                "path": {{ quote .Request.URL.Path }}
              }
            }
    1This rule matches our /public endpoint and forwards the request to our upstream service. It doesn’t perform any kind of request verification or transformation.
    2This rule matches the /user and the /admin endpoints and performs the required authentication as well as authorization steps.
    Please note that we don't define any finalizer in the pipeline of the second rule. Since the default rule has a finalizer configured, it is reused here. There is no need for further rules, as the default rule will block requests to any other endpoints.
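
    To make the payload template of the second rule more tangible: for a request to /admin carrying a token whose role claim is set to admin (like the demo token used later in this guide), the template renders to the following document, which is then sent to OPA:

    {
      "input": {
        "role": "admin",
        "path": "/admin"
      }
    }
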
  3. Having everything related to heimdall configured, let us now create the policy OPA is going to use. Create a file named policy.rego with the following contents:

    package demo
    
    default can_access = false (1)
    
    can_access { split(input.path, "/")[1] == input.role } (2)

    Here we declare that our can_access policy is located in the demo package. The policy itself is pretty simple and evaluates to either true or false.

    1Per default, the can_access policy evaluates to false.
    2It evaluates to true only if the last path fragment of the request is equal to the user's role.
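
    If you want to sanity-check the policy in isolation, one option is to evaluate it with OPA's eval subcommand using the same container image we'll run below. The following is just a sketch and assumes it is executed from the directory containing policy.rego:

    $ echo '{"role": "admin", "path": "/admin"}' | \
        docker run --rm -i -v "$PWD/policy.rego:/policy.rego:ro" \
        openpolicyagent/opa:0.62.1 eval --stdin-input -d /policy.rego "data.demo.can_access"

    The JSON result printed by OPA should contain the value true for this input, and false if you change the role to, for example, user.
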
  4. Let us now configure NGINX to expose a static endpoint serving a JWKS document under the .well-known path, so heimdall is able to verify the JWTs we're going to use. Create a file named idp.nginx with the following content:

    worker_processes  1;
    user       nginx;
    pid        /var/run/nginx.pid;
    
    events {
      worker_connections  1024;
    }
    
    http {
        keepalive_timeout  65;
    
        server {
            listen 8080;
    
            location /.well-known/jwks {
                default_type  application/json;
                root /var/www/nginx;
                try_files /jwks.json =404;
            }
        }
    }

    In addition, create a file named jwks.json with the public key required to verify the tokens we’re going to use.

    {
      "keys": [{
        "use":"sig",
        "kty":"EC",
        "kid":"key-1",
        "crv":"P-256",
        "alg":"ES256",
        "x":"cv6F6SgBSNWMZKdApZXSuPD6QPtvQyMpk-iRfZxT-vo",
        "y":"C1r3OClUvyDgmDQdvxMdB-ucmZ28b8s4uM4Yg-0BZZ4"
      }]
    }

    We will mount it into the /var/www/nginx folder referenced above when we define our setup environments.
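
    The idp service will not be published on the host in the compose files below. If you would nevertheless like to inspect the served JWKS document manually, you could temporarily add a port mapping such as 8080:8080 to the idp service and then query it with:

    $ curl http://127.0.0.1:8080/.well-known/jwks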

  5. Time to configure the environment to play with. If you want to run heimdall as a proxy, create or copy the following docker-compose.yaml file and modify it to include the correct paths to your heimdall-config.yaml, upstream-rules.yaml, policy.rego, idp.nginx and the jwks.json files from above:

    version: '3.7'
    
    services:
      heimdall: (1)
        image: dadrus/heimdall:latest
        ports:
        - "9090:4455"
        volumes:
        - ./heimdall-config.yaml:/etc/heimdall/config.yaml:ro
        - ./upstream-rules.yaml:/etc/heimdall/rules.yaml:ro
        command: -c /etc/heimdall/config.yaml serve proxy
    
      upstream: (2)
        image: containous/whoami:latest
        command:
        - --port=8081
    
      idp: (3)
        image: nginx:1.25.4
        volumes:
        - ./idp.nginx:/etc/nginx/nginx.conf:ro
        - ./jwks.json:/var/www/nginx/jwks.json:ro
    
      opa: (4)
        image: openpolicyagent/opa:0.62.1
        command: run --server /etc/opa/policies
        volumes:
        - ./policy.rego:/etc/opa/policies/policy.rego:ro
    1These lines configure heimdall to use our config and rule file and to run in proxy operation mode.
    2Here, we configure the "upstream" service. As already written above, it is a very simple service, which just echoes back everything it receives.
    3This is our NGINX, which mimics an IDP system and exposes a JWKS endpoint with our key material.
    4And these lines configure our OPA instance to use our authorization policy.
  6. Alternatively, if you would like to implement EAA with heimdall, create or copy the following docker-compose-eaa.yaml file and modify it to include the correct paths to the heimdall-config.yaml, upstream-rules.yaml, policy.rego, idp.nginx and the jwks.json files from above as well:

    version: "3"
    
    services:
      proxy: (1)
        image: traefik:2.11.0
        ports:
        - "9090:9090"
        command: >
          --providers.docker=true
          --providers.docker.exposedbydefault=false
          --entryPoints.http.address=":9090"
          --accesslog --api=true --api.insecure=true
        volumes:
        - "/var/run/docker.sock:/var/run/docker.sock:ro"
        labels:
        - traefik.enable=true
        - traefik.http.routers.traefik_http.service=api@internal
        - traefik.http.routers.traefik_http.entrypoints=http
        - traefik.http.middlewares.heimdall.forwardauth.address=http://heimdall:4456  (2)
        - traefik.http.middlewares.heimdall.forwardauth.authResponseHeaders=Authorization
    
      heimdall:  (3)
        image: dadrus/heimdall:latest
        volumes:
        - ./heimdall-config.yaml:/etc/heimdall/config.yaml:ro
        - ./upstream-rules.yaml:/etc/heimdall/rules.yaml:ro
        command: -c /etc/heimdall/config.yaml serve decision
    
      upstream:  (4)
        image: containous/whoami:latest
        command:
        - --port=8081
        labels:
        - traefik.enable=true
        - traefik.http.services.whoami.loadbalancer.server.port=8081
        - traefik.http.routers.whoami.rule=PathPrefix("/")
        - traefik.http.routers.whoami.middlewares=heimdall
    
      idp: (5)
        image: nginx:1.25.4
        volumes:
        - ./idp.nginx:/etc/nginx/nginx.conf:ro
        - ./jwks.json:/var/www/nginx/jwks.json:ro
    
      opa: (6)
        image: openpolicyagent/opa:0.62.1
        command: run --server /etc/opa/policies
        volumes:
        - ./policy.rego:/etc/opa/policies/policy.rego:ro
    1These lines configure Traefik, which is used to dispatch the incoming requests and also forward all of them to heimdall before routing to the target service. We’re using the ForwardAuth middleware here, which requires an additional configuration on the route level.
    2Here we configure Traefik to forward the requests to heimdall.
    3These lines configure heimdall to use our config and rule file and to run in decision operation mode.
    4Here, we configure the "upstream" service. As already written above, it is a very simple service that just echoes back everything it receives. As also written above, we need to provide some route level configuration here to have the requests forwarded to heimdall. We could also have used a global configuration instead, but decided not to in order to avoid yet another configuration file.
    5This is our NGINX, which mimics an IDP system and exposes a JWKS endpoint with our key material.
    6And these lines configure our OPA instance to use our authorization policy.

Start Environment

Open your terminal, change to the directory the docker-compose.yaml file created above is located in, and start the services with

$ docker-compose up
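
If you went for the EAA setup and created the docker-compose-eaa.yaml file from step 6 instead, point docker compose at that file explicitly:

$ docker-compose -f docker-compose-eaa.yaml up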

Consume the API

Roll up your sleeves. We're going to play with our setup now. Open a new terminal window and place it next to the terminal you started the environment in. This way you'll see what is going on in the environment while you use it.
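
If you started the environment in the background (e.g. with docker-compose up -d), you can still follow heimdall's debug output with

$ docker-compose logs -f heimdall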

  1. Let’s try the /public endpoint first

    $ curl 127.0.0.1:9090/public

    You should see an output similar to the one shown below

    Hostname: 94e60bba8498
    IP: 127.0.0.1
    IP: 172.19.0.3
    RemoteAddr: 172.19.0.4:53980
    GET /public HTTP/1.1
    Host: upstream:8081
    User-Agent: curl/8.2.1
    Accept: */*
    Accept-Encoding: gzip
    Forwarded: for=172.19.0.1;host=127.0.0.1:9090;proto=http

    That was obviously expected as we’ve sent a request to our public endpoint.

  2. Let’s try some other endpoints:

    $ curl -v 127.0.0.1:9090/admin

    The -v flag has been added to the curl command intentionally. Without it, we won't see any output. With it, we'll see the response shown below:

    * processing: 127.0.0.1:9090/admin
    *   Trying 127.0.0.1:9090...
    * Connected to 127.0.0.1 (127.0.0.1) port 9090
    > GET /admin HTTP/1.1
    > Host: 127.0.0.1:9090
    > User-Agent: curl/8.2.1
    > Accept: */*
    >
    < HTTP/1.1 401 Unauthorized
    < Date: Wed, 06 Mar 2024 16:14:05 GMT
    < Content-Length: 0
    <
    * Connection #0 to host 127.0.0.1 left intact

    That is: unauthorized. Requests to any endpoint other than /public will result in the same output.

  3. Let us now use a proper JWT, which will allow us to send requests to the /admin or the /user endpoint. Below, you'll find a new curl request to our /admin endpoint. This time, however, it contains an Authorization header with a bearer token, which should grant us access. Try it out.

    $ curl -H "Authorization: Bearer eyJhbGciOiJFUzI1NiIsImtpZCI6ImtleS0xIiwidHlwIjoiSldUIn0.eyJleHAiOjIwMjUxMDA3NTEsImlhdCI6MTcwOTc0MDc1MSwiaXNzIjoiZGVtb19pc3N1ZXIiLCJqdGkiOiI0NjExZDM5Yy00MzI1LTRhMWYtYjdkOC1iMmYxMTE3NDEyYzAiLCJuYmYiOjE3MDk3NDA3NTEsInJvbGUiOiJhZG1pbiIsInN1YiI6IjEifQ.mZZ_UqC8RVzEKBPZbPs4eP-MkXLK22Q27ZJ34UwJiioFdaYXqYJ4ZsatP0TbpKeNyF83mkrrCGL_pWLFTho7Gg" 127.0.0.1:9090/admin

    We can now access the endpoint and see the following output

    Hostname: 94e60bba8498
    IP: 127.0.0.1
    IP: 172.19.0.2
    RemoteAddr: 172.19.0.4:43688
    GET /admin HTTP/1.1
    Host: upstream:8081
    User-Agent: curl/8.2.1
    Accept: */*
    Accept-Encoding: gzip
    Authorization: Bearer eyJhbGciOiJFUzM4NCIsImtpZCI6IjEzNTQxODg3NGFiNzQwN2I3ZWQ0MmU5MmM4NWIzY2ZkNDJmZDk5NDgiLCJ0eXAiOiJKV1QifQ.eyJleHAiOjE3MDk3NDI2NzYsImlhdCI6MTcwOTc0MjM3NiwiaXNzIjoiaGVpbWRhbGwiLCJqdGkiOiJhYTZkZDE1MC0yMzhiLTQ2YWEtOTIzMi00MDRjMWNiMGM4ZDMiLCJuYmYiOjE3MDk3NDIzNzYsInN1YiI6IjEifQ.84QF4F7-WKSAV4KcC2Z_7SG4VkiEXg0fUu1hLS-zKR8-SfpM3XVphLz3QVg4aDXe4AxiNIfqyA5rE9ZEFnYAlFfWOIt2R7i0PZh2gf1PBOQMj6cMLbDSUw_YZ9x1XWcf
    Forwarded: for=172.19.0.1;host=127.0.0.1:9090;proto=http

    Take a closer look at the JWT echoed by our service, e.g. by making use of https://jwt.io. It has been issued by heimdall and is not the token you’ve sent using curl.

  4. Guess what happens when we try the same request against the /user endpoint? You're right: it will be refused due to the wrong role. So let us use another JWT. Try the request shown below; it contains a token which should give us access.

    $ curl -H "Authorization: Bearer eyJhbGciOiJFUzI1NiIsImtpZCI6ImtleS0xIiwidHlwIjoiSldUIn0.eyJleHAiOjIwMjUxMDA3NTEsImlhdCI6MTcwOTc0MDc1MSwiaXNzIjoiZGVtb19pc3N1ZXIiLCJqdGkiOiIzZmFmNDkxOS0wZjUwLTQ3NGItOGExMy0yOTYzMjEzNThlOTMiLCJuYmYiOjE3MDk3NDA3NTEsInJvbGUiOiJ1c2VyIiwic3ViIjoiMiJ9.W5xCpwsFShS0RpOtrm9vrV2dN6K8pRr5gQnt0kluzLE6oNWFzf7Oot-0YLCPa64Z3XPd7cfGcBiSjrzKZSAj4g" 127.0.0.1:9090/user

    That was successful, right? We omitted the output for brevity; this guide is already long enough.

  5. Try sending requests to the /private endpoint using any of the tokens from above, e.g. as shown below. Yep. Useless. Heimdall will not let us through.
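
    The token value in the command below is just a placeholder for any of the tokens used in the previous steps:

    $ curl -v -H "Authorization: Bearer <any of the tokens from above>" 127.0.0.1:9090/private

    Since no rule matches the /private endpoint, the default rule applies and its deny_all authenticator answers with a 401 Unauthorized, just like in step 2.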

Cleanup

Just stop the environment with CTRL-C and delete the created files. If you started docker compose in the background, tear the environment down with

$ docker-compose down
