Install Kubermatic Kubernetes Platform (KKP) CE
This chapter explains the installation procedure of Kubermatic Kubernetes Platform (KKP) into a pre-existing Kubernetes cluster.
Terminology
- User/Customer cluster – A Kubernetes cluster created and managed by KKP
- Seed cluster – A Kubernetes cluster which is responsible for hosting the master components of a customer cluster
- Master cluster – A Kubernetes cluster which is responsible for storing the information about users, projects and SSH keys. It hosts the KKP components and might also act as a seed cluster.
- Seed datacenter – A definition/reference to a seed cluster
- Node datacenter – A definition/reference of a datacenter/region/zone at a cloud provider (aws=zone, digitalocean=region, openstack=zone)
Requirements
Before installing, make sure your Kubernetes cluster meets the minimal requirements
and make yourself familiar with the requirements for your chosen cloud provider.
For this guide you will need kubectl and Helm (version 2) installed locally.
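If you want to double-check your local setup, both tools print their versions; the Helm client should report a v2.x release (this assumes the kubectl and helm binaries are already on your PATH):
kubectl version --client
helm version --client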
Installation
To begin the installation, make sure you have a kubeconfig at hand, with a user context that grants cluster-admin
permissions.
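A quick, optional way to verify that the current context really has cluster-admin-level access is to ask the API server directly; it should answer yes:
kubectl auth can-i '*' '*' --all-namespaces
#yes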
Download the Installer
Download the tarball (e.g. kubermatic-ce-vX.Y.tar.gz) containing the Helm charts for the appropriate release (vX.Y) and extract it, e.g.:
# For latest version:
VERSION=$(curl -w '%{url_effective}' -I -L -s -S https://github.com/kubermatic/kubermatic/releases/latest -o /dev/null | sed -e 's|.*/v||')
# For specific version set it explicitly:
# VERSION=2.14.x
wget https://github.com/kubermatic/kubermatic/releases/download/v${VERSION}/kubermatic-ce-v${VERSION}.tar.gz
tar -xzvf kubermatic-ce-v${VERSION}.tar.gz
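After extracting the archive you should find the Helm charts referenced throughout this guide (cert-manager, nginx-ingress-controller, oauth, kubermatic, kubermatic-operator, among others) inside the charts/ directory; the exact contents depend on the release:
ls charts/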
Create a StorageClass
KKP uses a custom storage class for the volumes created for user clusters. This class, kubermatic-fast, needs to be manually created during the installation and its parameters depend highly on the environment where KKP is installed.
It’s highly recommended to use SSD-based volumes, as etcd is very sensitive to slow disk I/O. If your cluster already provides a default SSD-based storage class, you can simply copy and re-create it as kubermatic-fast. For a cluster running on AWS, an example class could look like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kubermatic-fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
Store the above YAML snippet in a file and then apply it using kubectl:
kubectl apply -f aws-storageclass.yaml
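Before continuing, you can verify that the class was created and uses the expected provisioner:
kubectl get storageclass kubermatic-fast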
Please consult the Kubernetes documentation
for more information about the possible parameters for your storage backend.
Install Helm’s Tiller
It’s required to set up Tiller inside the cluster. This requires creating a ServiceAccount and a ClusterRoleBinding before installing Tiller itself. If your cluster already has Tiller installed in another namespace, you
can re-use it, but an installation dedicated to KKP is preferred.
kubectl create namespace kubermatic
kubectl create serviceaccount -n kubermatic tiller
kubectl create clusterrolebinding tiller-cluster-role --clusterrole=cluster-admin --serviceaccount=kubermatic:tiller
helm --service-account tiller --tiller-namespace kubermatic init
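Before installing any charts, wait for the Tiller deployment to become ready; a quick check could look like this (helm init creates a deployment named tiller-deploy in the chosen namespace):
kubectl -n kubermatic rollout status deployment tiller-deploy
helm --tiller-namespace kubermatic version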
Prepare Configuration
KKP ships with a number of Helm charts that need to be installed into the master or seed clusters. These are built so they can be configured using a single, shared values.yaml file. The required charts are:
- Master cluster: cert-manager, nginx-ingress-controller, oauth
There are also a number of optional charts that are not required for this guide.
In addition to the values.yaml for configuring the charts, a number of options will later be configured in a special KubermaticConfiguration resource.
A minimal configuration for the Helm charts sets the options shown below; you can find it in the examples/ directory of the tarball.
The secret keys mentioned below can be generated using any password generator or on the shell using cat /dev/urandom | tr -dc A-Za-z0-9 | head -c32. On macOS, use brew install coreutils and cat /dev/urandom | gtr -dc A-Za-z0-9 | head -c32.
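For example, you could generate the Dex client secret up front and keep it in a shell variable (the variable name below is just for illustration), then paste the printed value into the secret field of the values.yaml that follows:
# illustrative only: store the generated secret in a variable and print it
KKP_DEX_CLIENT_SECRET=$(cat /dev/urandom | tr -dc A-Za-z0-9 | head -c32)
echo $KKP_DEX_CLIENT_SECRET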
# Dex is the OpenID Provider for KKP.
dex:
  ingress:
    # configure your base domain, under which the KKP dashboard shall be available
    host: kubermatic.example.com

  clients:
  # The "kubermatic" client is used for logging into the KKP dashboard. It always
  # needs to be configured.
  - id: kubermatic
    name: Kubermatic
    # generate a secure secret key
    secret: <dex-kubermatic-oauth-secret-here>
    RedirectURIs:
    # ensure the URLs below use the dex.ingress.host configured above
    - https://kubermatic.example.com
    - https://kubermatic.example.com/projects

  # Depending on your chosen login method, you need to configure either an OAuth provider like
  # Google or GitHub, or configure a set of static passwords. Check the `charts/oauth/values.yaml`
  # for an overview over all available connectors.

  # For testing purposes, we configure a single static user/password combination.
  staticPasswords:
  - email: "kubermatic@example.com"
    # bcrypt hash of the string "password", can be created using recent versions of htpasswd:
    # `htpasswd -bnBC 10 "" PASSWORD_HERE | tr -d ':\n' | sed 's/$2y/$2a/'`
    hash: "$2a$10$2b2cU8CPhOTaGrs1HRQuAueS7JTT5ZHsHSzYiFPm1leZck7Mc8T4W"

    # these are used within KKP to identify the user
    username: "admin"
    userID: "08a8684b-db88-4b73-90a9-3cd1661f5466"
Install Dependencies
With the configuration prepared, it’s now time to install the required Helm charts into the master cluster. Take note of where you placed your values.yaml and then run the following commands in your shell:
helm upgrade --tiller-namespace kubermatic --install --values YOUR_VALUES_YAML_PATH --namespace nginx-ingress-controller nginx-ingress-controller charts/nginx-ingress-controller/
helm upgrade --tiller-namespace kubermatic --install --values YOUR_VALUES_YAML_PATH --namespace cert-manager cert-manager charts/cert-manager/
helm upgrade --tiller-namespace kubermatic --install --values YOUR_VALUES_YAML_PATH --namespace oauth oauth charts/oauth/
Please make sure that cert-manager is available before continuing with the installation of oauth, by waiting a minute for its pods to be running (see the Validation section below).
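Optionally, you can confirm that all three releases were deployed successfully by listing them through Tiller:
helm --tiller-namespace kubermatic list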
Validation
Before continuing, make sure the charts we just installed are functioning correctly. Check that the pods inside the nginx-ingress-controller, oauth and cert-manager namespaces are in status Running:
kubectl -n nginx-ingress-controller get pods
#NAME READY STATUS RESTARTS AGE
#nginx-ingress-controller-55dd87fc7f-5q4zb 1/1 Running 0 17m
#nginx-ingress-controller-55dd87fc7f-l492k 1/1 Running 0 4h56m
#nginx-ingress-controller-55dd87fc7f-rwcwf 1/1 Running 0 5h33m
kubectl -n oauth get pods
#NAME READY STATUS RESTARTS AGE
#dex-7795d657ff-b4fmq 1/1 Running 0 4h59m
#dex-7795d657ff-kqbk8 1/1 Running 0 20m
kubectl -n cert-manager get pods
#NAME READY STATUS RESTARTS AGE
#cainjector-5dc8ccbd45-gk6xp 1/1 Running 0 5h36m
#cert-manager-799ccc8b5-m7wxk 1/1 Running 0 20m
#webhook-575b887-zb6m2 1/1 Running 0 5h36m
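Instead of polling manually, you can also let kubectl block until the pods are ready, for example (this assumes kubectl v1.11 or newer, which supports kubectl wait):
kubectl -n cert-manager wait --for=condition=Ready pods --all --timeout=120s
kubectl -n oauth wait --for=condition=Ready pods --all --timeout=120s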
You should also have a working LoadBalancer service created by nginx:
Not all cloud providers support LoadBalancer services. In these environments the nginx-ingress-controller chart can be configured to use a NodePort service instead, which would open ports 80 and 443 on every node of the cluster. Refer to charts/nginx-ingress-controller/values.yaml for more information.
kubectl -n nginx-ingress-controller get services
#NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
#nginx-ingress-controller LoadBalancer 10.47.248.232 1.2.3.4 80:32014/TCP,443:30772/TCP 449d
Take note of the EXTERNAL-IP of this service (1.2.3.4 in the example above). You will need to configure a DNS record pointing to this in a later step.
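If you prefer to script this step, the external IP can be extracted with a JSONPath query (for providers that hand out hostnames instead of IPs, query .hostname rather than .ip):
kubectl -n nginx-ingress-controller get service nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'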
If any of the pods above are not working, check their logs and describe them (kubectl -n nginx-ingress-controller describe pod ...) to see what’s causing the issues.
Install KKP Operator
Before installing the KKP Operator, the KKP CRDs need to be installed. You can install them like so:
kubectl apply -f charts/kubermatic/crd/
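You can confirm that the CRDs were registered before installing the operator:
kubectl get crds | grep kubermatic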
After this, the operator chart can be installed like the previous Helm charts:
helm upgrade --tiller-namespace kubermatic --install --values YOUR_VALUES_YAML_PATH --namespace kubermatic kubermatic-operator charts/kubermatic-operator/
Validation
Once again, let’s check that the operator is working properly:
kubectl -n kubermatic get pods
#NAME READY STATUS RESTARTS AGE
#kubermatic-operator-769986fc8b-7gpsc 1/1 Running 0 28m
Create KKP Configuration
It’s now time to configure KKP itself. This will be done in a KubermaticConfiguration resource, for which a full example with all options is available, but for the purpose of this document we only need to configure a few things:
apiVersion: operator.kubermatic.io/v1alpha1
kind: KubermaticConfiguration
metadata:
  name: kubermatic
  namespace: kubermatic
spec:
  ingress:
    # this domain must match what you configured as dex.ingress.host
    # in the values.yaml
    domain: kubermatic.example.com

  # These secret keys configure the way components communicate with Dex.
  auth:
    # this must match the secret configured for the KKP client from
    # the values.yaml.
    issuerClientSecret: <dex-kubermatic-oauth-secret-here>

    # these need to be randomly generated. Those can be generated on the
    # shell using:
    # cat /dev/urandom | tr -dc A-Za-z0-9 | head -c32
    issuerCookieKey: <a-random-key>
    serviceAccountKey: <another-random-key>

  # this needs to match the one in the values.yaml file.
  imagePullSecret: |
    {
      "auths": {
        "quay.io": {....}
      }
    }
You can find the YAML above under examples/kubermatic.example.ce.yaml. Apply it using kubectl:
kubectl apply -f examples/kubermatic.example.ce.yaml
This will now cause the operator to begin provisioning a master cluster for KKP. You can observe the progress by running watch kubectl -n kubermatic get pods:
watch kubectl -n kubermatic get pods
#NAME READY STATUS RESTARTS AGE
#kubermatic-api-cfcd95746-5r9z2 1/1 Running 0 24m
#kubermatic-api-cfcd95746-tsqjc 1/1 Running 0 28m
#kubermatic-master-controller-manager-7d97bb887d-8nb74 1/1 Running 0 3m23s
#kubermatic-master-controller-manager-7d97bb887d-z8t9w 1/1 Running 0 28m
#kubermatic-operator-769986fc8b-7gpsc 1/1 Running 0 28m
#kubermatic-ui-7fc858fb4b-dq5b5 1/1 Running 0 85m
#kubermatic-ui-7fc858fb4b-s8fnn 1/1 Running 0 24m
Note that because there is no TLS certificate and no DNS record configured yet, some of the pods will crashloop until this is fixed.
Create DNS Records
In order to acquire a valid certificate, a DNS name needs to point to your cluster. Depending on your environment,
this can mean a LoadBalancer service or a NodePort service.
With Load Balancers
When your cloud provider supports Load Balancers, you can find the target IP / hostname by looking at the nginx-ingress-controller Service:
kubectl -n nginx-ingress-controller get services
#NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
#nginx-ingress-controller LoadBalancer 10.47.248.232 1.2.3.4 80:32014/TCP,443:30772/TCP 449d
The EXTERNAL-IP
is what we need to put into the DNS record.
Without Load Balancers
Without a LoadBalancer, you will need to use the NodePort service (refer to charts/nginx-ingress-controller/values.yaml for more information) and set up the DNS records to point to one or many of your cluster’s nodes. You can get a list of external IPs like so:
kubectl get nodes -o wide
#NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP
#worker-node-cbd686cd-50nx Ready <none> 3h36m v1.15.8-gke.3 10.156.0.36 1.2.3.4
#worker-node-cbd686cd-59s2 Ready <none> 21m v1.15.8-gke.3 10.156.0.14 1.2.3.5
#worker-node-cbd686cd-90j3 Ready <none> 45m v1.15.8-gke.3 10.156.0.22 1.2.3.6
Some cloud providers list the external IP as the INTERNAL-IP and show no value for the EXTERNAL-IP. In this case, use the internal IP.
For this example we choose the second node, and so 1.2.3.5
is our DNS record target.
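If you want to collect the external node addresses programmatically, a JSONPath query along these lines works (adjust the address type to InternalIP for providers that only report internal addresses):
kubectl get nodes -o jsonpath='{range .items[*]}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'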
DNS Records
The main DNS record must connect the kubermatic.example.com domain with the target IP / hostname. Depending on whether your LoadBalancer/node uses an IP or a hostname (as AWS ELBs do), create either an A or a CNAME record, respectively.
kubermatic.example.com. IN A 1.2.3.4
or, for a CNAME:
kubermatic.example.com. IN CNAME myloadbalancer.example.com.
Identity Aware Proxy
It’s a common step to later set up an identity-aware proxy (IAP) to securely access other Kubermatic components, such as those from the logging or monitoring stacks. This involves setting up either individual DNS records per IAP deployment or simply creating a single wildcard record: *.kubermatic.example.com.
Whatever you choose, the DNS record needs to point to the same endpoint (IP or hostname, meaning A or CNAME
records respectively) as the previous record, i.e. 1.2.3.4
.
*.kubermatic.example.com. IN A 1.2.3.4
; or for a CNAME:
*.kubermatic.example.com. IN CNAME myloadbalancer.example.com.
If wildcard records are not possible, you can configure individual records instead:
prometheus.kubermatic.example.com. IN A 1.2.3.4
alertmanager.kubermatic.example.com. IN A 1.2.3.4
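Once the records are created, you can check that they resolve to the expected target with any DNS lookup tool, e.g. dig; both queries should return the IP (or hostname) you configured:
dig +short kubermatic.example.com
#1.2.3.4
dig +short prometheus.kubermatic.example.com
#1.2.3.4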
Validation
With the two DNS records configured, it’s now time to wait for the certificate to be acquired. You can watch the progress by running watch kubectl -n kubermatic get certificates until it shows READY=True:
watch kubectl -n kubermatic get certificates
#NAME READY SECRET AGE
#kubermatic True kubermatic-tls 1h
If the certificate does not become ready, describe
it and follow the chain from Certificate to Order to Challenges.
Typical faults include bad DNS records or a misconfigured KubermaticConfiguration pointing to a different domain.
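A few commands that can help with following that chain (this assumes the cert-manager CRDs for Order and Challenge resources are installed, which the cert-manager chart takes care of):
kubectl -n kubermatic describe certificate kubermatic
kubectl -n kubermatic get orders,challenges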
Have a Break
With all this in place, you should be able to access https://kubermatic.example.com/ and log in either with your static password from the values.yaml or using any of your chosen connectors. All pods inside the kubermatic namespace should now be running; if they are not, check their logs to find out what’s broken.
Next Steps