Audit Logging is one of the key security features provided by Kubernetes. Once enabled in the Kubernetes API server, it provides a chronological record of operations performed on the cluster by users, administrators and other cluster components.
Audit logging is also a key requirement of the Kubernetes CIS benchmark.
For more details, you can refer to the upstream documentation.
KKP provides two levels of support for Audit Logging:
- Audit Logging on user-cluster level
- Audit Logging on a datacenter level
Kubernetes Audit Logging is optional and is not enabled by default, since it requires additional memory and storage resources, depending on the specific configuration used.
Audit logs, if enabled, are emitted by a sidecar container called audit-logs in the kubernetes-apiserver Pods on the Seed Cluster in your cluster namespace. Setting up the MLA stack on Master / Seed will allow storing the audit logs alongside other Pod logs collected by the MLA stack.
If you do not choose an audit policy preset, KKP will set up a minimal audit policy for you. This policy file is stored in a ConfigMap named audit-config on the Seed Cluster in your cluster namespace. To modify the default policy, you can edit this ConfigMap using kubectl:
$ kubectl edit -n cluster-<YOUR CLUSTER ID> configmap audit-config
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
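If you only need small adjustments, the default policy can be extended with additional rules following the upstream audit policy syntax. As a sketch, the policy below stops logging health and version probes while keeping metadata logging for everything else (the specific endpoints are illustrative):
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# do not log requests to health/version endpoints (illustrative)
- level: None
  nonResourceURLs:
  - "/healthz*"
  - "/version"
# keep logging metadata for everything else
- level: Metadata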
Audit Policy Presets
KKP supports a set of maintained audit policies as presets in case you do not want to tune the audit policy yourself.
A preset can be selected during cluster creation in the UI or by setting the field auditLogging.policyPreset on a user-cluster spec (when audit logging is enabled). The preset selection can be unset by setting the field to an empty string.
Enabling an audit policy preset on your user-cluster will override any manual changes to the audit-config ConfigMap.
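For example, a Cluster spec snippet enabling audit logging with the minimal preset could look like this:
# Cluster spec snippet, not a complete configuration
spec:
  auditLogging:
    enabled: true
    policyPreset: minimal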
The following presets are available right now:
- metadata: Logs metadata for any request (matches the default policy configured when using no policy preset)
- minimal: Is considered the bare minimum that allows auditing of key operations on the cluster. Logs the following operations:
  - any modification to Pods, Deployments, StatefulSets, DaemonSets and ReplicaSets (complete request and response bodies)
  - any access to Pods via shell (by using exec to spawn a process) or port-forwarding/proxy (complete request and response bodies)
  - access to container logs (metadata only)
  - any access (read, write or delete) to Secrets and ConfigMaps (metadata only, as the request body could include sensitive information)
- recommended: Logs everything in minimal plus metadata for any other request. This is the most verbose audit policy preset, but it is recommended due to its extended coverage of security recommendations like the CIS Benchmark.
Custom Output Configuration
In some situations, the default behaviour of writing the audit logs to standard output and processing them alongside regular container logs might not be desirable. For those cases, Cluster objects support custom configuration for the fluentbit sidecar via spec.auditLogging.sidecar (also see the CRD reference).
Specifically, spec.auditLogging.sidecar.config has three fields that allow custom elements in the fluent-bit configuration. All sections in this configuration are maps, which means any key and value can be given to set specific values.
Configuration options are not validated before being passed to fluentbit, so it is strongly recommended to test settings on non-production user clusters before applying them.
The available options are:
- service: Configures the [SERVICE] section, enabling fine-tuning of fluentbit. Note that some options alter fluentbit’s behaviour and can cause issues in some cases (for example, adjusting the daemon setting).
- filters: Configures one or several [FILTER] sections. To match audit logs, put a match: '*' directive in your filter definition. See the fluentbit documentation for available filters.
- outputs: Configures one or several [OUTPUT] sections. See the fluentbit documentation for available outputs.
Since this setting is part of the cluster specification, you might have the requirement to avoid disclosing credentials used to access your log output targets (a company-wide logging system, for example). In those situations, it is recommended to set up a central forwarder in your seed cluster that is then used by fluentbit outputs. This is possible by e.g. setting up fluentd and using a forward output.
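As a sketch, such a forwarder could be a fluentd instance whose configuration is mounted from a ConfigMap like the one below; the namespace, names, port and the downstream host are placeholders for illustration:
# Sketch: fluentd forwarder configuration in the seed cluster;
# names, namespace and the downstream host are placeholders
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-forwarder
  namespace: audit-forward
data:
  fluent.conf: |
    # accept forward-protocol traffic from the fluentbit sidecars
    <source>
      @type forward
      port 24224
      bind 0.0.0.0
    </source>
    # relay all records to the central logging system (placeholder host)
    <match **>
      @type forward
      <server>
        host logging.example.com
        port 24224
      </server>
    </match>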
Be aware that the API server network policy feature will block the sidecar from sending logs to an external output by default. You will need to set up a custom egress NetworkPolicy that matches the app=apiserver Pod label. The specific policy depends on where you are planning to send your logs (for example, an in-cluster service can be targeted via label matching, while external services will need to be allowed by IP address).
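For the in-cluster case, such an egress policy could look like the following sketch; the target namespace (audit-forward), Pod label (app: fluentd) and port are assumptions matching the forwarder example above:
# Sketch: egress NetworkPolicy in the cluster namespace; target labels
# and port are assumptions
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: apiserver-to-fluentd
  namespace: cluster-<YOUR CLUSTER ID>
spec:
  podSelector:
    matchLabels:
      app: apiserver
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: audit-forward
          podSelector:
            matchLabels:
              app: fluentd
      ports:
        - protocol: TCP
          port: 24224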
An example sidecar configuration could look like this:
# Cluster spec snippet, not a complete configuration
spec:
  auditLogging:
    enabled: true
    sidecar:
      config:
        service:
          Flush: 10
        filters:
          - Name: grep
            Match: "*"
            Regex: "user@example.com"
        outputs:
          - Name: forward
            Match: "*"
            Host: "fluentd.audit-forward.svc.cluster.local"
This configures the fluentbit sidecar to flush incoming audit logs every 10 seconds, filter them by a string (user@example.com) and write them to a manually deployed fluentd service available in-cluster.
Audit Logs Source Identification
Depending on your architecture, it might be advisable to use the sidecar configuration options to enrich logs with metadata, e.g. the cluster name. This is likely necessary to differentiate the source of your audit logs in a central storage location. This can be done via a filter plugin, like this:
# snippet, needs to be added to spec.auditLogging.sidecar.config
filters:
  - Name: record_modifier
    Match: "*"
    Record: cluster <CLUSTER ID>
Replace <CLUSTER ID> with the ID of your cluster.
Future KKP releases may add an environment variable to automatically get the cluster ID or even enrich records with this information by default.
User Cluster Level Audit Logging
To enable user-cluster level Audit Logging, simply check Audit Logging in the KKP dashboard Create Cluster page. You can either select “custom” to be able to edit the ConfigMap for audit logging later on or set your cluster up with a preset.
For existing clusters, you can go to the cluster page, edit your cluster and enable (or disable) Audit Logging.
Datacenter Level Audit Logging
KKP also supports enabling Audit Logging on the datacenter level. In this case, the option is enforced on all user-clusters in the datacenter and the user-cluster level flag is ignored.
To enable this, you will need to edit your datacenter definitions in a Seed and set enforceAuditLogging to true in the datacenter spec.
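A sketch of such a datacenter definition in a Seed (the datacenter name example-dc is a placeholder):
# Seed spec snippet, not a complete configuration
spec:
  datacenters:
    example-dc:
      spec:
        enforceAuditLogging: true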
Webhook Backend For Audit Logs
User clusters can also be configured to send audit logs to a webhook backend. A KKP admin needs to create a Kubernetes Secret on the seed cluster that holds the audit webhook backend configuration, which can then be used to enable the webhook backend at the cluster or the datacenter level.
The Kubernetes api-server expects the webhook configuration file to have a format similar to a kubeconfig.
apiVersion: v1
kind: Config
clusters:
  - name: audit-webhook
    cluster:
      server: http://<webhook-server>
contexts:
  - name: audit-webhook
    context:
      cluster: audit-webhook
      user: ""
current-context: audit-webhook
users: []
preferences: {}
Once we have the audit webhook configuration, we can base64-encode it and create the Secret.
apiVersion: v1
data:
  webhook.yaml: <base64 encoded audit webhook configuration>
kind: Secret
metadata:
  name: audit-webhook-backend
  namespace: cluster-e9s4w7jk6t
type: Opaque
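Alternatively, assuming the configuration is saved as webhook.yaml, kubectl can create the Secret directly from the file and handle the base64 encoding for you:
$ kubectl create secret generic audit-webhook-backend --from-file=webhook.yaml -n cluster-<YOUR CLUSTER ID>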
User Cluster Level Audit Webhook Backend
To enable the webhook backend for an existing user cluster, first create the Secret holding the webhook backend configuration in the cluster namespace (cluster-<cluster-id>) on the seed cluster, and then edit the cluster from the KKP GUI to specify the Secret.
Datacenter Level Audit Webhook Backend
The audit webhook backend can be enabled at the datacenter level as well; this enforces the audit webhook backend on all user clusters in the datacenter. To do so, specify enforcedAuditWebhookSettings for the datacenter where you want to enable the webhook backend.
enforcedAuditWebhookSettings:
  auditWebhookConfig:
    name: audit-webhook-backend-secret
    namespace: kubermatic
  auditWebhookInitialBackoff: 15s
Existing user clusters in the datacenter aren’t updated to enable the audit webhook backend; only clusters created after the webhook backend settings are applied to the datacenter come up with the audit webhook backend enabled.
Network Policy For Accessing Audit Webhook Backend Server
The egress of the user cluster’s Kubernetes api-server is restricted with the help of network policies, so once the audit webhook backend is enabled, a network policy also needs to be created to allow api-server egress to the webhook backend server. For example, for a webhook backend server running on 172.31.43.54 on port 30001, the network policy may look like the one below.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: audit-webhook-allow
  namespace: cluster-e9s4w7jk6t
spec:
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 172.31.43.54/32
      ports:
        - protocol: TCP
          port: 30001
  podSelector:
    matchLabels:
      app: apiserver