Add Seed Cluster to Kubermatic Kubernetes Platform (KKP) CE
This document describes how a new seed cluster can be added to an existing KKP master cluster.
It expects that all steps from Install Kubermatic Kubernetes Platform (KKP) CE
have been completed.
For smaller scale setups it’s possible to use the existing master cluster as a seed cluster (a “shared”
cluster installation). In this case both master and seed components will run on the same cluster and in
the same namespace. It is however not possible to use the same cluster for multiple seeds.
Please note that the Community Edition is limited to a single seed with a fixed name. To run a multi-seed
environment, please refer to the Enterprise Edition.
Please refer to the architecture diagrams for more information
about the cluster relationships.
Terminology
In this chapter, you will find the following KKP-specific terms:
- Master Cluster – A Kubernetes cluster which is responsible for storing central information about users, projects and SSH keys. It hosts the KKP master components and might also act as a seed cluster.
- Seed Cluster – A Kubernetes cluster which is responsible for hosting the control plane components (kube-apiserver, kube-scheduler, kube-controller-manager, etcd and more) of a User Cluster.
- User Cluster – A Kubernetes cluster created and managed by KKP, hosting applications managed by users.
Overview
The setup procedure for seed clusters happens in multiple stages:
- You have to prepare the Kubernetes cluster to be used as seed (set up MinIO for cluster backups and ensure that usable storage is available).
- You create a Seed resource on the master cluster.
- The KKP Operator checks if the configured Seed cluster is valid and installs all KKP components like the seed-controller-manager and necessary CRDs. This is an automated process.
Skip to Installation if you plan to use a different storage backend for cluster backups or
do not want to configure cluster backups at all.
KKP can perform regular backups of User Clusters by snapshotting the etcd of each cluster to an S3-compatible
storage backend. If no storage backend outside the seed cluster exists, an in-cluster MinIO
service can be installed via the minio
Helm chart provided with the KKP installer.
For more details on cluster backups, see Automatic Etcd Backups and Restore.
The following content assumes you are using the provided minio
Helm chart.
Create Backup StorageClass
MinIO requires a storage class which will be used as a backend for the exposed object storage. You can view the
storage classes available on the cluster using the following command:
kubectl get storageclasses
#NAME                 PROVISIONER             AGE
#kubermatic-fast      kubernetes.io/aws-ebs   195d
#kubermatic-backup    kubernetes.io/aws-ebs   195d
#standard (default)   kubernetes.io/aws-ebs   2y43d
It’s recommended that MinIO uses a separate storage class with a different location/security level, but you can also use the default one if you desire. As MinIO does not require any of the SSD’s advantages, you can use cheaper, HDD-backed storage. The following provides examples for several cloud providers:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kubermatic-backup
provisioner: kubernetes.io/aws-ebs
parameters:
  type: sc1
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kubermatic-backup
provisioner: kubernetes.io/azure-disk
parameters:
  kind: Managed
  storageaccounttype: Standard_LRS
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kubermatic-backup
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kubermatic-backup
provisioner: csi.vsphere.vmware.com
For other providers, please refer to the respective CSI driver documentation. It should guide you through setting up a StorageClass. Ensure that the StorageClass you create is named kubermatic-backup. The final resource should look something like this:
# snippet, this is not a valid StorageClass!
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kubermatic-backup
provisioner: csi.example.com
# CSI driver specific parameters
parameters:
  parameter1: value1
  parameter2: value2
You can copy and adjust the correct StorageClass to a file (e.g. kubermatic-backup.yaml) and apply it via kubectl apply -f ./kubermatic-backup.yaml.
Prepare MinIO Configuration
If you are setting up MinIO, make sure to refer to the later section MinIO Backup Location when creating your Seed resource.
To configure the storage class to use and the size of backing storage, edit the minio
section in your values.yaml
file.
For more information about the MinIO options, take a look at the minio chart’s values.yaml.
minio:
  storeSize: '200Gi'
  # specified storageClass will be used as a storage provider for minio,
  # which will be used to store the etcd backups of the seed-hosted User Clusters
  storageClass: kubermatic-backup
  # access key/secret for the exposed minio S3 gateway
  credentials:
    # generated access key length should be at least 3 characters
    accessKey: "YOUR-ACCESS-KEY"
    # generated secret key length should be at least 8 characters
    secretKey: "YOUR-SECRET-KEY"
As a good practice, we also recommend installing the s3-exporter
Helm chart, which provides metrics regarding user
cluster backups.
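The installer (see Option 1 below) deploys this chart for you; if you prefer a manual installation, it can be installed via helm, for example (assuming the chart directory layout of the KKP release archive, as also shown in Option 2 below):
helm --namespace kube-system upgrade --install --wait --values /path/to/your/helm-values.yaml s3-exporter charts/s3-exporter/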
Installation
The Kubermatic Installer is the recommended way to set up new seed clusters. A manual installation is possible if you do not want to use the installer.
Applying the Kubermatic CRDs manually is no longer necessary in recent KKP releases, since the logic to install them has moved to the Kubermatic Operator. The installer only ensures that a suitable StorageClass exists and that the MinIO charts are installed.
Create a StorageClass
Apart from the previously mentioned StorageClass for MinIO, a seed setup also needs the same kubermatic-fast StorageClass that was already set up as part of the master installation. Please refer to those instructions for setting up the StorageClass on your seed as well. Note that you might need to pass a different value to the --storageclass flag if your seed runs on a different cloud provider than your master.
You should skip this step (by not passing the --storageclass flag at all) if you are setting up a shared master/seed setup, as the StorageClass has already been created during the master installation.
If you do not want to install MinIO, the only thing to do is ensure that a suitable StorageClass named kubermatic-fast exists on the Seed Cluster (choose Option 3 below). This StorageClass should fulfill the performance requirements as explained in the master installation documentation. The installer is capable of setting up a suitable StorageClass and is therefore still the recommended approach.
Option 1: Use the Installer
Similar to how the Master Cluster can be installed with the installer, run the deploy kubermatic-seed command. You still need to manually ensure that the StorageClass you configured for MinIO already exists.
export KUBECONFIG=/path/to/seed-cluster/kubeconfig
./kubermatic-installer deploy kubermatic-seed \
  # uncomment the line below after updating it to your respective provider; remove flag if provider is not supported or cluster is shared with master (see above)
  # --storageclass aws \
  --config kubermatic.yaml \
  --helm-values values.yaml
The command above will take care of installing/updating the CRDs and setting up MinIO and the S3-exporter, and will attempt to provide you with the necessary DNS settings after the installation has completed.
Option 2: Manual Installation
If you want to install MinIO charts manually, you can install them via helm:
helm --namespace minio upgrade --install --create-namespace --wait --values /path/to/your/helm-values.yaml minio charts/minio/
helm --namespace kube-system upgrade --install --wait --values /path/to/your/helm-values.yaml s3-exporter charts/s3-exporter/
You will also need to manually ensure that a suitable StorageClass called kubermatic-fast exists.
Option 3: No Installation
If you have manually ensured that a suitable StorageClass called kubermatic-fast exists (see Create a StorageClass) and do not want to install MinIO, no installation step is needed here. Everything else will be set up by the Kubermatic Operator once the Seed resource has been created.
Set Up MinIO Bucket (only needed with MinIO)
If you are using MinIO, a bucket needs to be created for cluster backups to be stored in. This can be done for example via a Job resource that spawns a Pod running the mc command against the freshly deployed MinIO service. Below you find an example Job definition. If you want to change the bucket name, replace src/kkpbackup with src/YOUR_BUCKET_NAME in the args part of the template.
apiVersion: batch/v1
kind: Job
metadata:
  name: create-minio-backup-bucket
  namespace: minio
spec:
  backoffLimit: 2
  template:
    spec:
      containers:
        - name: mc
          image: quay.io/kubermatic/util:2.4.0
          args:
            - /bin/sh
            - -c
            - mc --insecure config host add src http://minio.minio.svc.cluster.local:9000 "$MINIO_ACCESS_KEY" "$MINIO_SECRET_KEY" && mc --insecure mb src/kkpbackup
          env:
            - name: MINIO_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: minio
                  key: accessKey
            - name: MINIO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: minio
                  key: secretKey
          securityContext:
            runAsNonRoot: false
            runAsUser: 0
      restartPolicy: Never
Put this into a file (e.g. called minio-bucket-job.yaml) and create it via:
kubectl create -f ./minio-bucket-job.yaml
#Job/create-minio-backup-bucket created
Supervise the Job to ensure the bucket gets created successfully.
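For example, you can wait for the Job to complete and inspect its logs (the commands assume the Job name and namespace from the manifest above):
kubectl --namespace minio wait --for=condition=complete job/create-minio-backup-bucket --timeout=120s
kubectl --namespace minio logs job/create-minio-backup-bucket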
Add Seed Resource
Next you need to prepare and apply the Seed resource that will connect the master cluster to the new seed cluster to finalize the seed setup (applying CRDs and creating seed-specific workloads).
Create Seed Kubeconfig
To connect the new Seed Cluster with the Master, you need to create a kubeconfig Secret and a Seed resource
on the Master Cluster. This allows the KKP components in the Master Cluster to communicate with the Seed Cluster and
reconcile user-cluster control planes.
If you have any potential networking restrictions (like firewalls) in place, make sure that your Master Cluster is allowed
to connect to your Seed Cluster’s Kubernetes API endpoint.
The separate kubeconfig Secret needs to be provided even when a shared Master/Seed Cluster is being set up. Make sure that
the kubeconfig you provide to the Seed resource has a Kubernetes API endpoint configured that is reachable from within the cluster.
To make sure that the kubeconfig stays valid forever, it must not contain temporary login tokens. Depending on the cloud provider, the default kubeconfig that is provided may not contain a username/password or a client certificate, but instead try to talk to local token helper programs like aws-iam-authenticator for AWS or gcloud for Google Cloud (GKE). Such kubeconfig files will not work for setting up Seeds.
The kubermatic-installer tool provides a convert-kubeconfig command that can be used to prepare a kubeconfig for usage in Kubermatic. The command will create a ServiceAccount in the seed cluster, bind it to the cluster-admin role and then put the ServiceAccount’s token into the kubeconfig file. Afterwards the file can be used in KKP.
./kubermatic-installer convert-kubeconfig <ORIGINAL-KUBECONFIG-FILE> > my-kubeconfig-file
Seed Resource Snippet
The Seed resource itself must be called kubermatic (for the Community Edition) and needs to reference the new kubeconfig Secret. Below you find a starting point for your Seed:
apiVersion: v1
kind: Secret
metadata:
  name: kubeconfig-kubermatic
  namespace: kubermatic
type: Opaque
data:
  # You can use `base64 -w0 my-kubeconfig-file` to encode the
  # kubeconfig properly for inserting into this Secret.
  kubeconfig: <base64 encoded kubeconfig>
---
apiVersion: kubermatic.k8c.io/v1
kind: Seed
metadata:
  # The Seed *must* be named "kubermatic".
  name: kubermatic
  namespace: kubermatic
spec:
  # These two fields are only informational.
  country: DE
  location: Hamburg
  # List of datacenters where this seed cluster is allowed to create clusters in; see below for examples.
  datacenters: {}
  # etcd backup and restore configuration. See below for how to configure this section, depending
  # on the storage backend you chose. Omit this field if you do not wish to configure etcd backups.
  etcdBackupRestore: {}
  # Reference to the kubeconfig to use when connecting to this seed cluster.
  kubeconfig:
    name: kubeconfig-kubermatic
    namespace: kubermatic
Refer to the Seed resource example for a
complete example of the Seed CustomResource and all possible datacenters. You can also check the CRD documentation
for a full reference.
Key considerations for creating your Seed resource are:
- Configure appropriate datacenters (see below).
- Configure backup locations if you wish to use cluster backups (see below).
- Some global settings such as the expose strategy can be overridden on a per-seed level (see the snippet below).
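For example, overriding the expose strategy for this seed only might look like the following sketch (check the CRD documentation for the exact field name and the allowed values):
# snippet, not a valid seed resource!
apiVersion: kubermatic.k8c.io/v1
kind: Seed
metadata:
  name: kubermatic
  namespace: kubermatic
spec:
  [...]
  # overrides the globally configured expose strategy for User Clusters on this seed only
  exposeStrategy: LoadBalancer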
Each Seed has a map of so-called Datacenters (under .spec.datacenters), which define the cloud provider locations that User Clusters can be deployed to. Every datacenter name is globally unique in a KKP setup. Users will select from a list of datacenters when creating User Clusters and their clusters will automatically get scheduled to the seed that defines that datacenter.
Check the CRD reference for a full reference of all possible
fields for a datacenter definition. Below you can find a few examples as a starting point to define your
datacenters:
# Datacenter for AWS 'eu-central-1' region
aws-eu-central-1a:
  country: DE
  location: EU (Frankfurt)
  spec:
    aws:
      region: eu-central-1
# Datacenter for AWS 'eu-west-1' region
aws-eu-west-1a:
  country: IE
  location: EU (Ireland)
  spec:
    aws:
      region: eu-west-1
# Datacenter for Azure 'westeurope' location
azure-westeurope:
  country: NL
  location: West Europe
  spec:
    azure:
      location: westeurope
# Datacenter for GCP 'europe-west3' region
# this is configured to use three availability zones and spread cluster resources across them
gce-eu-west-3:
  country: DE
  location: Frankfurt
  spec:
    gcp:
      region: europe-west3
      regional: true
      zoneSuffixes: [a, b, c]
# Datacenter for a vSphere setup available under https://vsphere.hamburg.example.com
vsphere-hamburg:
  country: DE
  location: vSphere Hamburg
  spec:
    vsphere:
      cluster: Hamburg
      datacenter: Hamburg
      datastore: hamburg1
      endpoint: "https://vsphere.hamburg.example.com"
      rootPath: /Hamburg/vm/kubernetes
      templates:
        ubuntu: ubuntu-20.04-server-cloudimg-amd64
Note that for many private datacenter providers (such as OpenStack, vSphere or Nutanix), the templates section is mandatory for providing default images to use for various OSes.
Within your Seed resource, the .spec.etcdBackupRestore key configures cluster backup locations. Depending on which storage backend you chose to work with before, this will look slightly different. Below you will find two examples. Be aware that you can configure multiple destinations and as such could configure both the MinIO backup location and another S3-compatible storage backend at the same time.
Omit defaultDestination if you do not wish to enable default etcd backups on all clusters.
Additional backup locations can also be added after installation, either by updating the Seed resource or via the UI.
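For example, the Seed resource can be edited in place on the master cluster:
kubectl --namespace kubermatic edit seed kubermatic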
MinIO Backup Location
If MinIO was installed from the provided Helm chart, the etcd backup location configuration should look like this (the credentials Secret is created by the minio Helm chart):
# snippet, not a valid seed resource!
apiVersion: kubermatic.k8c.io/v1
kind: Seed
metadata:
  name: kubermatic
  namespace: kubermatic
spec:
  [...]
  etcdBackupRestore:
    defaultDestination: minio
    destinations:
      minio:
        # use the bucket name chosen for the create-minio-backup-bucket Job from above.
        bucketName: kkpbackup
        credentials:
          name: kubermatic-s3-credentials
          namespace: kube-system
        endpoint: http://minio.minio.svc.cluster.local:9000
Any S3-compatible Storage Backend
If another S3-compatible storage backend is supposed to be used for cluster backups, ensure you have an endpoint, access key ID and secret access key available. Put the access key information in a Secret like the one below (the name is given as an example and does not have to be s3-backup-credentials):
apiVersion: v1
kind: Secret
metadata:
  name: s3-backup-credentials
  namespace: kube-system
stringData:
  ACCESS_KEY_ID: <YOUR_ACCESS_KEY_ID>
  SECRET_ACCESS_KEY: <YOUR_SECRET_ACCESS_KEY>
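Apply it via kubectl, for example (assuming you saved the manifest as s3-backup-credentials.yaml):
kubectl apply -f ./s3-backup-credentials.yaml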
Afterwards, update your .spec.etcdBackupRestore to reference your Secret and the storage backend’s endpoint (replacing bucket, secret reference and endpoint as appropriate):
# snippet, not a valid seed resource!
apiVersion: kubermatic.k8c.io/v1
kind: Seed
metadata:
  name: kubermatic
  namespace: kubermatic
spec:
  [...]
  etcdBackupRestore:
    defaultDestination: s3
    destinations:
      s3:
        bucketName: examplebucketname
        credentials:
          name: s3-backup-credentials
          namespace: kube-system
        endpoint: https://s3.amazonaws.com
Create Seed on Master Cluster
Apply the manifest above in the master cluster and KKP will pick up the new Seed and begin to reconcile it by installing the
required KKP components.
kubectl apply -f seed-with-secret.yaml
#Secret/kubeconfig-kubermatic created.
#Seed/kubermatic created.
You can watch the progress by using kubectl and watch on the master cluster:
watch kubectl -n kubermatic get seeds
#NAME         CLUSTERS   LOCATION   KKP VERSION   CLUSTER VERSION   PHASE     AGE
#kubermatic   0          Hamburg    v2.21.2       v1.24.8           Healthy   5m
Watch the PHASE column until it shows “Healthy”. If it does not become healthy after a couple of minutes, you can check the kubermatic namespace on the new seed cluster and verify whether there are any Pods showing signs of issues:
kubectl get pods -n kubermatic
#NAME                                                    READY   STATUS    RESTARTS   AGE
#kubermatic-api-55765568f7-br9jl                         1/1     Running   0          5m4s
#kubermatic-api-55765568f7-xbvz2                         1/1     Running   0          5m13s
#kubermatic-dashboard-5d784d586b-f46f8                   1/1     Running   0          35m
#kubermatic-dashboard-5d784d586b-rgl29                   1/1     Running   0          35m
#kubermatic-master-controller-manager-f58d4df59-w7rkz    1/1     Running   0          5m13s
#kubermatic-operator-7f6957869d-89g55                    1/1     Running   0          5m37s
#nodeport-proxy-envoy-6d8bb6fbff-9z57l                   2/2     Running   0          5m6s
#nodeport-proxy-envoy-6d8bb6fbff-dl58l                   2/2     Running   0          4m54s
#nodeport-proxy-envoy-6d8bb6fbff-k4gp8                   2/2     Running   0          4m44s
#nodeport-proxy-updater-7fd55f948-cll8n                  1/1     Running   0          4m44s
#seed-proxy-kubermatic-6dd5cc95cf-r6wvb                  1/1     Running   0          80m
If you experience issues with the seed cluster setup, for example nothing happening in the kubermatic namespace, check the Kubermatic Operator logs on the master cluster, for example via:
kubectl --namespace kubermatic logs -l app.kubernetes.io/name=kubermatic-operator -f
Update DNS
Depending on the chosen Expose Strategy, the control planes of all User Clusters running in the Seed Cluster will be exposed by the nodeport-proxy or using services of type NodePort directly.
By default each User Cluster gets a virtual domain name like [cluster-id].[seed-name].[kubermatic-domain], e.g. hdu328tr.kubermatic.kkp.example.com for the Seed from the previous step, with kkp.example.com being the main domain where the KKP dashboard/API are available.
A wildcard DNS record *.[seed-name].[kubermatic-domain] must be created. The target of the DNS wildcard record should be the EXTERNAL-IP of the nodeport-proxy service in the kubermatic namespace, or a set of seed node IPs.
With LoadBalancers
When your cloud provider supports LoadBalancers, you can find the target IP / hostname by looking at the nodeport-proxy Service:
kubectl -n kubermatic get services
#NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
#nodeport-proxy   LoadBalancer   10.47.248.232   8.7.6.5       80:32014/TCP,443:30772/TCP   449d
The EXTERNAL-IP is what you need to put into the DNS record.
Without LoadBalancers
Without a LoadBalancer, you will need to point to one or more of the seed cluster’s nodes. You can get a list of external IPs like so:
kubectl get nodes -o wide
#NAME                        STATUS   ROLES    AGE     VERSION         INTERNAL-IP   EXTERNAL-IP
#worker-node-cbd686cd-50nx   Ready    <none>   3h36m   v1.15.8-gke.3   10.156.0.36   8.7.6.4
#worker-node-cbd686cd-59s2   Ready    <none>   21m     v1.15.8-gke.3   10.156.0.14   8.7.6.3
#worker-node-cbd686cd-90j3   Ready    <none>   45m     v1.15.8-gke.3   10.156.0.22   8.7.6.2
DNS Record
Create an A or CNAME record as needed pointing to the target:
*.kubermatic.kkp.example.com. IN A 8.7.6.5
; or for a CNAME:
*.kubermatic.kkp.example.com. IN CNAME myloadbalancer.example.com.
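To check whether the record has propagated, you can query any name under the wildcard, for example with dig (using the example domain from above; replace it with your own):
dig +short test.kubermatic.kkp.example.com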
Once your DNS settings have propagated (this takes a few minutes depending on your environment), your seed setup is complete.
Next Steps
After your seed has been set up successfully, your KKP setup is functional and can be used to create User Clusters on that seed. Here are a couple of suggestions for what to do next: