The expose strategy defines the entry point for the control plane of the user clusters managed by Kubermatic Kubernetes Platform (KKP). This guide explains how to configure the expose strategy for your KKP clusters.
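As an illustration, the strategy is selected via the `exposeStrategy` field of the cluster specification. The sketch below is hedged: the resource name and `apiVersion` follow recent KKP releases, and should be verified against the KKP version in use.

```yaml
# Sketch only: selecting an expose strategy for a user cluster.
# Verify apiVersion and field placement against your KKP version.
apiVersion: kubermatic.k8c.io/v1
kind: Cluster
metadata:
  name: example-user-cluster   # placeholder name
spec:
  # One of: NodePort, LoadBalancer, Tunneling
  exposeStrategy: NodePort
```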
The kubelets of the worker nodes and the pods running on them will reach the
Kubernetes API Server (KAS) in different ways depending on the chosen expose
strategy. The components that are exposed for each user cluster include the KAS.
Currently, the supported expose strategies are:

- `NodePort`
- `LoadBalancer`
- `Tunneling`
A service of type `NodePort` will be created for every exposed service on the user cluster.
Clients will use the combination of the FQDN and the allocated node port to connect.
A wildcard DNS record (A record) should be created and maintained by the KKP operator for each of the seed clusters, using the following pattern:
It must point to one or more of the seed cluster node IPs.
Note that as clients will target the seed nodes directly, the IPs used in the DNS entries should be routable from the user cluster worker networks.
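As an illustration, a BIND-style wildcard entry might look like the following. The domain and IPs here are placeholders, not the actual KKP naming pattern:

```
; Hypothetical wildcard A records for one seed cluster,
; pointing at two of the seed cluster node IPs.
*.seed1.kkp.example.com.    IN    A    203.0.113.10
*.seed1.kkp.example.com.    IN    A    203.0.113.11
```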
An extension of the previous strategy that simplifies operations is to use
one load balancer per seed cluster. Routing to the right user cluster and
its exposed services is based on the port. Services of type `NodePort` are used
to guarantee the uniqueness of the port allocation.
When using this strategy, the NodePortProxy will be deployed into the seed
cluster. It will create a Kubernetes Service of type `LoadBalancer`.
The advantage of this solution is that it provides a single point of entry. The requirement in terms of DNS configuration is to set up a wildcard entry (A or CNAME record) pointing to the static IPv4 address or FQDN associated with the load balancer. The DNS entry should follow this pattern:
The NodePortProxy is composed of a set of Envoy proxies and a control plane that
configures them dynamically when clusters are added or removed, and when the
exposed service endpoints change (e.g. when KAS pods are created or terminated).
The Envoy proxies are needed because chaining Kubernetes Services is not allowed.
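As an illustration of the wildcard entry mentioned above, a BIND-style sketch with placeholder names, using a CNAME towards the load balancer's FQDN:

```
; Hypothetical wildcard CNAME pointing at the FQDN of the seed's load balancer.
*.seed1.kkp.example.com.    IN    CNAME    lb-1234.cloud-provider.example.net.
```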
A third option is to create one load balancer per user cluster.
This will result in one Service of type `LoadBalancer` being
created per user cluster.
The NodePortProxy is used in this strategy too, to avoid
creating one load balancer per exposed service.
This is simple to set up, but results in one Service of type `LoadBalancer` for every cluster
KKP manages. This may result in additional charges from your cloud provider.
This strategy is based on a single load balancer, like the aforementioned
NodePort with Global LoadBalancer strategy. The main difference is that it does
not rely on Services of type `NodePort`. Instead, traffic is routed based on
SNI and on tunneling techniques (e.g. HTTP/2 CONNECT).
There are two reasons why we cannot rely solely on SNI routing:

- Pods running in the user cluster access the KAS through the `kubernetes` Service in the `default` namespace, using its ClusterIP. This means that no SNI information will be present in the `Client Hello` during the TLS handshake.
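To make the SNI point concrete, the following self-contained sketch uses only the Python standard library to generate the TLS `Client Hello` a client would send, and then extracts the SNI (`server_name`) extension from it. The hostname is a placeholder; a client connecting by IP (as with a ClusterIP) would send no such hostname.

```python
import ssl

def client_hello_bytes(server_hostname: str) -> bytes:
    """Produce the raw TLS Client Hello a client would send for the given SNI name."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
    tls = ctx.wrap_bio(incoming, outgoing, server_hostname=server_hostname)
    try:
        tls.do_handshake()          # no server reply yet, so this cannot finish
    except ssl.SSLWantReadError:
        pass
    return outgoing.read()          # the Client Hello record

def extract_sni(hello: bytes):
    """Walk the Client Hello and return the server_name (SNI) value, if any."""
    p = 5 + 4                       # skip TLS record header and handshake header
    p += 2 + 32                     # client_version + random
    p += 1 + hello[p]               # session_id
    p += 2 + int.from_bytes(hello[p:p + 2], "big")   # cipher suites
    p += 1 + hello[p]               # compression methods
    end = p + 2 + int.from_bytes(hello[p:p + 2], "big")
    p += 2
    while p < end:
        ext_type = int.from_bytes(hello[p:p + 2], "big")
        ext_len = int.from_bytes(hello[p + 2:p + 4], "big")
        p += 4
        if ext_type == 0:           # server_name extension (RFC 6066)
            name_len = int.from_bytes(hello[p + 3:p + 5], "big")
            return hello[p + 5:p + 5 + name_len].decode()
        p += ext_len
    return None                     # e.g. when the client connected by IP address
```

A router in front of several KAS endpoints can only use `extract_sni`-style information when the client dialed a hostname; for ClusterIP traffic it returns nothing, hence the need for tunneling.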
The traffic that cannot be routed based on SNI will be tunneled through agents
running on the user cluster worker nodes.
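As a simplified illustration of the tunneling technique, the sketch below uses HTTP/1.1 CONNECT (rather than HTTP/2) over plain sockets: a minimal proxy establishes a tunnel to an upstream echo server and then relays opaque bytes in both directions. All hosts and ports are local placeholders, not KKP components.

```python
import socket
import threading

def start_echo_server() -> int:
    """Start a one-shot TCP echo server; return its port."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        conn.sendall(conn.recv(1024))   # echo a single chunk back
        conn.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

def start_connect_proxy() -> int:
    """Start a one-shot HTTP CONNECT proxy; return its port."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    def serve():
        client, _ = srv.accept()
        req = b""
        while b"\r\n\r\n" not in req:   # read the CONNECT request
            req += client.recv(1024)
        host, port = req.split(b" ")[1].decode().split(":")
        upstream = socket.create_connection((host, int(port)))
        client.sendall(b"HTTP/1.1 200 Connection Established\r\n\r\n")

        def pipe(src, dst):
            while True:
                chunk = src.recv(1024)
                if not chunk:
                    break
                dst.sendall(chunk)

        # once the tunnel is up, bytes are relayed opaquely in both directions
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        pipe(upstream, client)

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

def tunnel_through_proxy(payload: bytes) -> bytes:
    """Send payload to the echo server through the CONNECT tunnel."""
    echo_port = start_echo_server()
    proxy_port = start_connect_proxy()
    c = socket.create_connection(("127.0.0.1", proxy_port))
    c.sendall(f"CONNECT 127.0.0.1:{echo_port} HTTP/1.1\r\n\r\n".encode())
    resp = b""
    while b"\r\n\r\n" not in resp:      # wait for the proxy's 200 reply
        resp += c.recv(1024)
    c.sendall(payload)
    return c.recv(1024)
```

Because the proxy relays raw bytes after the CONNECT handshake, it never needs to inspect the tunneled traffic, which is what lets agents forward SNI-less (e.g. ClusterIP-addressed) connections.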
When using this strategy, the NodePortProxy will be deployed into the seed
cluster. It will also create a Kubernetes Service of type `LoadBalancer`
pointing to it.
The requirement in terms of DNS configuration is to set up a wildcard entry (A or CNAME record) pointing to the static IPv4 address or FQDN associated with the load balancer.
The DNS entry should follow this pattern: