version: "1alpha3"
name: my-rule-set
rules:
- id: rule:1
  match:
    url: https://my-service1.local/<**>
  forward_to:
    host: ${UPSTREAM_HOST:="default-backend:8080"}
  methods: [ "GET" ]
  execute:
    - authorizer: foobar
Rule Providers
Providers define the sources to load the rule sets from. These make heimdall's behavior dynamic. All providers you want to enable for a heimdall instance must be configured within the providers section of heimdall's configuration.
The supported providers, including their corresponding configuration options, are described below.
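The provider-specific snippets shown in the examples below go into that providers section. A minimal sketch, using the filesystem provider described next and an illustrative path, could look as follows:

providers:
  file_system:
    src: /path/to/rules.yaml
    watch: true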
Filesystem
The filesystem provider allows loading of rule sets, in the format defined in Rule Sets, from a file system. The configuration of this provider goes into the file_system property. This provider is handy for getting started with heimdall, e.g. locally or with a container runtime, as well as if your deployment strategy is to deploy a heimdall instance as a sidecar for each of your services.
The following configuration options are supported:

src
: string (mandatory)
Can either be a single file, containing a rule set, or a directory with files, each containing a rule set.

watch
: boolean (optional)
Whether the configured src should be watched for updates. Defaults to false. If the src has been configured to a single file, the provider will watch for changes in that file. Otherwise, if the src has been configured to a directory, the provider will watch for files appearing and disappearing in this directory, as well as for changes in each particular file in this directory. Recursive lookup is not supported. That is, if the configured directory contains further directories, these, as well as their contents, are ignored.

env_vars_enabled
: boolean (optional)
Whether to enable access to environment variables in the rule set files. Defaults to false. If set to true, environment variables can be referenced using Bash syntax, as with the static configuration. All environment variables used in the rule set files must be known to the heimdall process when it starts. In addition, the usage of this functionality might lead to security issues: if an adversary is somehow able to add new or update existing rule sets, it could exfiltrate environment variables available to the heimdall process by crafting contextualizers or authorizers, which forward the corresponding values to a service under its control. So, use with caution, disable watching for rule set updates, and try to avoid it.

Example 1. Rule set which makes use of environment variables (see the rule set shown at the beginning of this section)
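Loading such a rule set requires the provider to have environment variable access enabled. A minimal sketch of the corresponding provider configuration, with an illustrative path, could look as follows:

file_system:
  src: /path/to/rules.yaml
  env_vars_enabled: true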
Example 2. Load rule sets from the /path/to/rules/dir directory and watch for changes.

file_system:
  src: /path/to/rules/dir
  watch: true
Example 3. Load a rule set from the /path/to/rules.yaml file without watching it for changes.

file_system:
  src: /path/to/rules.yaml
HTTP Endpoint
This provider allows loading of rule sets, in the format defined in Rule Sets, from any remote endpoint accessible via HTTP(S) and supports rule sets in YAML as well as in JSON format. The differentiation happens based on the Content-Type header set in the response from the endpoint, which must be either application/yaml or application/json; otherwise an error is logged and the response from the endpoint is ignored.
The loading and removal of rules happens as follows:

- if the response status code is HTTP 200 OK and the body contains a rule set in a known format (see above), the corresponding rules are loaded (provided the definitions are valid)
- in case of network issues, like DNS errors, timeouts and alike, the rule sets previously received from the corresponding endpoints are preserved
- in any other case (e.g. a non-200 status code, an empty response body, an unsupported format, etc.), the corresponding rules are removed if they were previously loaded
The configuration of this provider goes into the http_endpoint property. In contrast to the filesystem provider, it can be configured with as many endpoints to load rule sets from as required for the particular use case.
The following configuration options are supported:

watch_interval
: Duration (optional)
Whether the configured endpoints should be polled for updates. Defaults to 0s (polling disabled).

endpoints
: RuleSetEndpoint array (mandatory)
Each entry of that array supports all the properties defined by Endpoint, except method, which is always GET. As with the Endpoint type, at least the url must be configured. The following properties are defined in addition:

rule_path_match_prefix
: string (optional)
This property can be used to create a kind of namespace for the rule sets retrieved from the different endpoints. If set, the provider checks whether the urls specified in all rules retrieved from the referenced endpoint have the defined path prefix. If not, a warning is emitted and the rule set is ignored. This can be used to ensure a rule retrieved from one endpoint does not collide with a rule from another endpoint.

Note: HTTP caching according to RFC 7234 is enabled by default. It can be disabled by setting enable_http_cache to false.
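For illustration, disabling the cache for a single endpoint could look like this (the url is the same placeholder used in the examples below):

http_endpoint:
  endpoints:
    - url: http://foo.bar/ruleset1
      enable_http_cache: false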
Here the provider is configured to load a rule set from one endpoint without polling it for changes.

http_endpoint:
  endpoints:
    - url: http://foo.bar/ruleset1
Here, the provider is configured to poll the two defined rule set endpoints for changes every 5 minutes.
The configuration for the first endpoint instructs heimdall to ensure that all urls defined in the rules coming from that endpoint match the defined path prefix.
The configuration for the second endpoint defines a rule_path_match_prefix as well. In addition, it makes the communication with that endpoint more resilient by setting the retry options and, since this endpoint is protected by an API key, defines the corresponding auth options as well.
http_endpoint:
  watch_interval: 5m
  endpoints:
    - url: http://foo.bar/ruleset1
      rule_path_match_prefix: /foo/bar
    - url: http://foo.bar/ruleset2
      rule_path_match_prefix: /bar/foo
      retry:
        give_up_after: 5s
        max_delay: 250ms
      auth:
        type: api_key
        config:
          name: X-Api-Key
          value: super-secret
          in: header
Cloud Blob
This provider allows loading of rule sets, in the format defined in Rule Sets, from cloud blobs, like AWS S3 buckets, Google Cloud Storage, Azure Blobs, or other API-compatible implementations, and supports rule sets in YAML as well as in JSON format. The differentiation happens based on the Content-Type set in the metadata of the loaded blob, which must be either application/yaml or application/json; otherwise an error is logged and the blob is ignored.
The loading and removal of rules happens as follows:

- if the response status code is HTTP 200 OK and the blob contains a rule set in a known format (see above), the corresponding rules are loaded (provided the definitions are valid)
- in case of network issues, like DNS errors, timeouts and alike, the rule sets previously received from the corresponding buckets are preserved
- in any other case (like a non-200 status code, an empty blob, an unsupported format, etc.), the corresponding rules are removed if they were previously loaded
The configuration of this provider goes into the cloud_blob property. As with the HTTP Endpoint provider, it can be configured with as many buckets/blobs to load rule sets from as required for the particular use case.
The following configuration options are supported:

watch_interval
: Duration (optional)
Whether the configured buckets should be polled for updates. Defaults to 0s (polling disabled).

buckets
: BlobReference array (mandatory)
Each BlobReference entry in that array supports the following properties:

url
: string (mandatory)
The actual url to the bucket or to a specific blob in the bucket.

prefix
: string (optional)
Indicates that only blobs with a key starting with this prefix should be retrieved.

rule_path_match_prefix
: string (optional)
Creates a kind of namespace for the rule sets retrieved from the blobs. If set, the provider checks whether the url patterns specified in all rules retrieved from the referenced bucket have the defined path prefix. If that condition is violated, a warning is emitted and the rule set is ignored. This can be used to ensure a rule retrieved from one bucket does not override a rule from another bucket.
The differentiation of which storage is used is based on the URL scheme. These are:

- s3 for AWS S3 buckets,
- gs for Google Cloud Storage, and
- azblob for Azure Blob Storage.

Other API-compatible storage services, like Minio, Ceph, SeaweedFS, etc. can be used as well. The corresponding and other options can be found in the Go CDK Blob documentation, which the implementation of this provider is based on.
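As an illustration, an S3-compatible service like Minio would typically be addressed via the s3 scheme with additional query parameters on the bucket url. The parameters below (endpoint, disableSSL, s3ForcePathStyle) come from the Go CDK s3blob URL opener and should be treated as an assumption to verify against that documentation; the host and bucket names are placeholders:

cloud_blob:
  buckets:
    - url: s3://my-bucket?endpoint=minio.local:9000&disableSSL=true&s3ForcePathStyle=true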
Note: The communication with the storage services requires an active session to the corresponding cloud provider. The session information is taken from the vendor-specific environment variables or configuration. See AWS Session, GC Application Default Credentials and Azure Storage Access for more information.
Here the provider is configured to load rule sets from all blobs stored on the Google Cloud Storage bucket named "my-bucket" without polling for changes.

cloud_blob:
  buckets:
    - url: gs://my-bucket
Here, the provider is configured to poll multiple buckets with rule sets for changes every 2 minutes.

cloud_blob:
  watch_interval: 2m
  buckets:
    - url: gs://my-bucket
      prefix: service1
      rule_path_match_prefix: /service1
    - url: gs://my-bucket
      prefix: service2
      rule_path_match_prefix: /service2
    - url: s3://my-bucket/my-rule-set?region=us-west-1

The first two bucket references actually reference the same bucket on Google Cloud Storage, but different blobs based on the configured blob prefix. The first one lets heimdall load only those blobs whose keys start with service1, the second only those whose keys start with service2.
As a rule_path_match_prefix is defined for both as well, heimdall will ensure that rule sets loaded from the corresponding blobs do not overlap in their url matching definitions.
The last entry instructs heimdall to load a rule set from a specific blob, namely a blob named my-rule-set, which resides in the my-bucket AWS S3 bucket located in the us-west-1 AWS region.
Kubernetes
This provider is only supported if heimdall is running within Kubernetes and allows usage (validation and loading) of RuleSet resources deployed to the same Kubernetes environment. The configuration of this provider goes into the kubernetes property and supports the following configuration options:
auth_class
: string (optional)
By making use of this property, you can specify which RuleSets should be used by this particular heimdall instance. If specified, heimdall will consider the value of the authClassName attribute of each RuleSet deployed to the cluster and will validate, respectively load, only those rules whose authClassName value matches the value of auth_class. If not set, all RuleSets will be used.

tls
: TLS (optional)
If configured, heimdall will start and expose a validating admission controller service on port 4458, listening on all interfaces. This service allows integration with the Kubernetes API server, enabling validation of the applied RuleSet resources before these are made available to heimdall for loading. This way you get direct feedback about RuleSet issues without the need to look into heimdall's logs if a RuleSet could not be loaded (see also the API documentation for more details). To let the Kubernetes API server use the admission controller, a properly configured ValidatingWebhookConfiguration is required. The Helm Chart shipped with heimdall creates it automatically as soon as this property is configured. It does, however, need a caBundle to be set or injected. Otherwise, the Kubernetes API server won't trust the configured TLS certificate and won't use the endpoint.
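For orientation, such a ValidatingWebhookConfiguration could look roughly like the sketch below. The Helm Chart generates the actual resource; the webhook name, service name, namespace and path used here are placeholders, only the port and the resource/group data are taken from this page:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: heimdall-webhook                # placeholder
webhooks:
  - name: admission.heimdall.local      # placeholder
    admissionReviewVersions: [ "v1" ]
    sideEffects: None
    clientConfig:
      caBundle: <base64 encoded CA certificate(s)>   # must be set or injected
      service:
        name: heimdall                  # placeholder
        namespace: heimdall             # placeholder
        port: 4458
        path: /validate-ruleset         # placeholder
    rules:
      - apiGroups: [ "heimdall.dadrus.github.com" ]
        apiVersions: [ "v1alpha3" ]
        operations: [ "CREATE", "UPDATE" ]
        resources: [ "rulesets" ]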
Note: Since multiple heimdall deployments with different configured auth_class values may coexist, each admission controller validates only those RuleSets whose authClassName matches its own auth_class. That also means, if there is no heimdall deployment feeling responsible for the given RuleSet (due to a mismatching auth_class), that RuleSet is not validated at all.
Here, the provider is just enabled. Since no auth_class
is configured, it will load all RuleSets deployed to the Kubernetes environment.
kubernetes: {}
With auth_class set

Here, the provider is configured to consider only those RuleSets whose authClassName is set to foo.

kubernetes:
  auth_class: foo
With auth_class set and the validating admission controller enabled

As with the previous example, the provider is configured to consider only those RuleSets whose authClassName is set to foo. The admission controller is enabled as well and will validate RuleSet resources before these are made available for loading.

kubernetes:
  auth_class: foo
  tls:
    # below is the minimal required configuration
    key_store:
      path: /path/to/file.pem
Note: This provider requires a RuleSet CRD being deployed, otherwise heimdall will not be able to monitor the corresponding resources and will emit error messages to the log. If you have used the Helm Chart to install heimdall, this CRD is already installed; it can, however, also be installed separately.
RuleSet resource
As written above, the kubernetes provider supports only rules deployed as custom RuleSet resources.
Each RuleSet has the following attributes:

name
: string (mandatory)
The name of the rule set.

authClassName
: string (optional)
References the heimdall instance which should use this RuleSet.

rules
: Rule Configuration array (mandatory)
List of the actual rules.
apiVersion: heimdall.dadrus.github.com/v1alpha3
kind: RuleSet
metadata:
  name: "<some name>"
spec:
  authClassName: "<optional auth_class reference (see above)>"
  rules:
    - id: "<identifier of a rule 1>"
      match:
        url: http://127.0.0.1:9090/foo/<**>
      execute:
        - authenticator: foo
        - authorizer: bar
In addition to the configuration attributes described above, a RuleSet resource has a status stanza, which provides information about the usage status as soon as the RuleSet has been loaded by at least one heimdall instance.
By making use of kubectl get -n <your namespace> rulesets.heimdall.dadrus.github.com you'll get an overview of the RuleSet resources deployed in a particular namespace, as shown below:
NAME         ACTIVE IN   AGE
test-rules   2/2         32m
The value 2/2 in the ACTIVE IN column means <active in heimdall instances>/<matching instances>, with

- "matching instances" being those heimdall instances whose auth_class matches the authClassName in the RuleSet, and
- "active in heimdall instances" being those of the "matching instances" which were able to load the RuleSet.
In addition, you can get further information about the reconciliations executed by the deployed heimdall instances by taking a look at the .status.conditions field. The reconciliation status of every matching instance is present there. That also means, if there were errors while loading the RuleSet, these are present in this condition list.
E.g.
$ kubectl describe -n test rulesets.heimdall.dadrus.github.com test-rules
Name: test-rules
Namespace: test
...
Status:
Conditions:
Last Transition Time: 2023-11-08T21:55:36Z
Message: heimdall-6fb66c47bc-kwqqn instance successfully loaded RuleSet
Observed Generation: 1
Reason: RuleSetActive
Status: True
Type: heimdall-6fb66c47bc-kwqqn/Reconciliation
Last Transition Time: 2023-11-08T21:55:36Z
Message: heimdall-6fb66c47bc-l7skn instance successfully loaded RuleSet
Observed Generation: 1
Reason: RuleSetActive
Status: True
Type: heimdall-6fb66c47bc-l7skn/Reconciliation
Active In: 2/2
Events: <none>