version: "1alpha4"
name: my-rule-set
rules:
- id: rule:1
match:
routes:
- path: /**
hosts:
- type: exact
value: my-service1.local
methods: [ "GET" ]
forward_to:
host: ${UPSTREAM_HOST:="default-backend:8080"}
execute:
- authorizer: foobar
Rule Providers
Rule providers manage rules in heimdall. They load, reload or remove rules when new rule sets appear, changes are detected, or rule sets are deleted.
Providers define the sources to load rule sets from and are what makes heimdall's behavior dynamic. All providers you want to enable for a heimdall instance must be configured within the providers section of heimdall's configuration.
Below, you can find the description and configuration options for currently supported providers.
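Before diving into the individual providers, here is a minimal sketch of how such a providers section could look, combining two of the providers described below; the paths, URL and interval are placeholders, and which providers you enable depends entirely on your setup.
providers:
  file_system:
    src: /path/to/rules.yaml
    watch: true
  http_endpoint:
    watch_interval: 5m
    endpoints:
      - url: https://foo.bar/ruleset1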
Filesystem
The filesystem provider allows loading of regular rule sets in JSON or YAML format from a file system.
Configuration
The configuration of this provider goes into the file_system property. This provider is handy, e.g. when starting to play around with heimdall locally or with a container runtime, as well as if your deployment strategy foresees deploying a heimdall instance as a sidecar for each of your services.
The following configuration options are supported:
src
: string (mandatory)
Can either be a single file, containing a rule set, or a directory with files, each containing a rule set.

watch
: boolean (optional)
Whether the configured src should be watched for updates. Defaults to false. If src has been configured to a single file, the provider will watch for changes in that file. Otherwise, if src has been configured to a directory, the provider will watch for files appearing and disappearing in this directory, as well as for changes in each particular file in this directory. Recursive lookup is not supported. That is, if the configured directory contains further directories, these, as well as their contents, are ignored.

env_vars_enabled
: boolean (optional)
Whether to enable access to environment variables in the rule set files. Defaults to false. If set to true, environment variables can be referenced using Bash syntax, as with the static configuration. All environment variables used in the rule set files must be known to the heimdall process when it starts. In addition, the usage of this functionality might lead to security issues: if an adversary is somehow able to add new or update existing rule sets, it could theoretically exfiltrate environment variables available to the heimdall process by crafting contextualizers or authorizers which forward the corresponding values to a service under its control. So use this option with caution, disable the watching of rule set updates, and try to avoid it.

Example 1. Rule set making use of environment variables

version: "1alpha4"
name: my-rule-set
rules:
- id: rule:1
  match:
    routes:
      - path: /**
    hosts:
      - type: exact
        value: my-service1.local
    methods: [ "GET" ]
  forward_to:
    host: ${UPSTREAM_HOST:="default-backend:8080"}
  execute:
    - authorizer: foobar
Examples
Here, the provider is configured to load rule sets from the files in the /path/to/rules/dir directory and to watch it for changes.

file_system:
  src: /path/to/rules/dir
  watch: true
Here, the provider is configured to load a rule set from the /path/to/rules.yaml file without watching it for changes.

file_system:
  src: /path/to/rules.yaml
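Following the same pattern, a minimal sketch enabling environment variable substitution for a single rule set file (the path is a placeholder); watching is deliberately left disabled here, in line with the caution given for env_vars_enabled above.

file_system:
  src: /path/to/rules.yaml
  env_vars_enabled: true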
HTTP Endpoint
This provider allows loading of regular rule sets in JSON or YAML format from any remote endpoint accessible via HTTP(S). The format selection happens based on the Content-Type set in the response from the endpoint, which must be either application/yaml or application/json; otherwise, an error is logged and the response from the endpoint is ignored.
The loading and removal of rules happens as follows:
if the response status code is HTTP 200 OK and contains a rule set in a known format (see above), the corresponding rules are loaded (if the definitions are valid)
in case of network issues, like DNS errors, timeouts and the like, the rule sets previously received from the corresponding endpoints are preserved.
in any other case (e.g. a non-200 status code, an empty response body, an unsupported format, etc.), the corresponding rules are removed if previously loaded.
Configuration
The configuration of this provider goes into the http_endpoint property. In contrast to the Filesystem provider, it can be configured with as many endpoints to load rule sets from as required for the particular use case.
The following configuration options are supported:

HTTP caching according to RFC 7234 is enabled by default. It can be disabled for a particular endpoint by setting http_cache.enabled to false.
Examples
Here the provider is configured to load a rule set from one endpoint without polling it for changes.
http_endpoint:
  endpoints:
    - url: https://foo.bar/ruleset1
Here, the provider is configured to poll the two defined rule set endpoints for changes every 5 minutes.
The configuration for both endpoints instructs heimdall to disable HTTP caching. The configuration of the second endpoint uses a couple of additional properties: one to make the communication with that endpoint more resilient by setting the retry options, and, since this endpoint is protected by an API key, it defines the corresponding auth options as well.
http_endpoint:
  watch_interval: 5m
  endpoints:
    - url: https://foo.bar/ruleset1
      http_cache:
        enabled: false
    - url: https://foo.bar/ruleset2
      http_cache:
        enabled: false
      retry:
        give_up_after: 5s
        max_delay: 250ms
      auth:
        type: api_key
        config:
          name: X-Api-Key
          value: super-secret
          in: header
Cloud Blob
This provider allows loading of regular rule sets from cloud blobs, like AWS S3 buckets, Google Cloud Storage, Azure Blobs, or other API-compatible implementations, and supports rule sets in YAML as well as in JSON format. The format selection happens based on the Content-Type set in the metadata of the loaded blob, which must be either application/yaml or application/json; otherwise, an error is logged and the blob is ignored.
The loading and removal of rules happens as follows:
if the response status code is HTTP 200 OK and contains a rule set in a known format (see above), the corresponding rules are loaded (if the definitions are valid)
in case of network issues, like DNS errors, timeouts and the like, the rule sets previously received from the corresponding buckets are preserved.
in any other case (e.g. a non-200 status code, an empty response body, an unsupported format, etc.), the corresponding rules are removed if previously loaded.
Configuration
The configuration of this provider goes into the cloud_blob property. As with the HTTP Endpoint provider, it can be configured with as many buckets/blobs to load rule sets from as required for the particular use case.
The following configuration options are supported:

watch_interval
: Duration (optional)
Whether the configured buckets should be polled for updates. Defaults to 0s (polling disabled).

buckets
: BlobReference array (mandatory)
Each BlobReference entry in that array supports the following properties:

url
: string (mandatory)
The actual URL to the bucket or to a specific blob in the bucket.

prefix
: string (optional)
Indicates that only blobs with a key starting with this prefix should be retrieved.
Which storage is used is determined by the URL scheme. These are:
s3 for AWS S3 buckets,
gs for Google Cloud Storage, and
azblob for Azure Blob Storage.
Other API-compatible storage services, like Minio, Ceph, SeaweedFS, etc., can be used as well. The corresponding and other options can be found in the Go CDK Blob documentation, which the implementation of this provider is based on.
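As a rough illustration only: the bucket URL below points the s3 scheme at a hypothetical self-hosted, S3-compatible service using Go CDK query parameters (endpoint, region, s3ForcePathStyle, disableSSL). Whether these exact parameters apply depends on the Go CDK version heimdall is built with, so treat them as assumptions and consult the Go CDK Blob documentation referenced above.

cloud_blob:
  buckets:
    - url: s3://my-bucket?endpoint=minio.local:9000&region=us-east-1&s3ForcePathStyle=true&disableSSL=true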
Communication with the storage services requires an active session with the corresponding cloud provider. The session information is taken from the vendor-specific environment variables or configuration. See AWS Session, GC Application Default Credentials and Azure Storage Access for more information.
Examples
Here the provider is configured to load rule sets from all blobs stored on the Google Cloud Storage bucket named "my-bucket" without polling for changes.
cloud_blob:
  buckets:
    - url: gs://my-bucket
Here, the provider is configured to poll multiple buckets with rule sets for changes every 2 minutes.
The first two bucket references actually point to the same bucket on Google Cloud Storage, but to different blobs, based on the configured blob prefix. The first one lets heimdall load only those blobs whose keys start with service1, the second only those whose keys start with service2.
The last one instructs heimdall to load a rule set from a specific blob, namely a blob named my-rule-set, which resides in the my-bucket AWS S3 bucket located in the us-west-1 AWS region.

cloud_blob:
  watch_interval: 2m
  buckets:
    - url: gs://my-bucket
      prefix: service1
    - url: gs://my-bucket
      prefix: service2
    - url: s3://my-bucket/my-rule-set?region=us-west-1
Kubernetes
This provider is only supported if heimdall is running within Kubernetes and allows usage (validation and loading) of RuleSet custom resources deployed to the same Kubernetes environment.
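To give an impression of what such a resource looks like, below is a hedged sketch of a RuleSet custom resource; the apiVersion and the rule contents are assumptions modelled on Example 1 above and may differ between heimdall releases, so consult the RuleSet CRD shipped with your version.

apiVersion: heimdall.dadrus.github.com/v1alpha4  # assumed API group/version; check your heimdall release
kind: RuleSet
metadata:
  name: my-rule-set
spec:
  authClassName: foo
  rules:
    - id: rule:1
      match:
        routes:
          - path: /**
        methods: [ "GET" ]
      forward_to:
        host: default-backend:8080
      execute:
        - authorizer: foobar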
Configuration
The configuration of this provider goes into the kubernetes property and supports the following configuration options:
auth_class
: string (optional)
By making use of this property, you can specify which rule sets should be used by this particular heimdall instance. If specified, heimdall will consider the value of the authClassName attribute of each RuleSet resource deployed to the cluster and will validate, respectively load, only those rules whose authClassName value matches the value of auth_class. If not set, all RuleSet resources will be used.

tls
: TLS (optional)
If configured, heimdall will start and expose a validating admission controller service on port 4458, listening on all interfaces. This service allows integration with the Kubernetes API server, enabling validation of the applied RuleSet resources before these are made available to heimdall for loading. This way you get direct feedback about issues without the need to look into heimdall's logs if a RuleSet resource could not be loaded (see also the API documentation for more details). To let the Kubernetes API server use the admission controller, a properly configured ValidatingWebhookConfiguration is required. The Helm Chart shipped with heimdall creates it automatically as soon as this property is configured. It does, however, need a caBundle to be set or injected. Otherwise, the Kubernetes API server won't trust the configured TLS certificate and won't use the endpoint.
Since multiple heimdall deployments with different configured auth_class values can coexist in the same cluster, the admission controller of a particular deployment only validates those RuleSet resources whose authClassName matches its own auth_class. That also means, if there is no heimdall deployment feeling responsible for the given RuleSet resource, that resource will neither be validated nor loaded.
Examples
Here, the provider is just enabled. Since no auth_class is configured, it will load all RuleSet resources deployed to the Kubernetes environment.
kubernetes: {}
With auth_class set

Here, the provider is configured to consider only those RuleSet resources whose authClassName is set to foo.
kubernetes:
  auth_class: foo
With auth_class set and the validating admission controller enabled

As with the previous example, the provider is configured to consider only those RuleSet resources whose authClassName is set to foo. The admission controller is enabled as well and will validate RuleSet resources before these are made available for loading.
kubernetes:
  auth_class: foo
  tls:
    # below is the minimal required configuration
    key_store:
      path: /path/to/file.pem
This provider requires the RuleSet CRD to be deployed; otherwise heimdall will not be able to monitor the corresponding resources and will emit error messages to the log. If you have used the Helm Chart to install heimdall, this CRD is already installed. You can, however, also install it separately.