The Kubernetes vSphere driver contains bugs related to detaching volumes from offline nodes. See the Volume detach bug section for more details.
When creating worker nodes for a user cluster, the user can specify an existing image. Defaults may be set in the seed cluster.
Supported operating systems
# Enable disk UUIDs so Kubernetes can reliably identify the VM's disks
govc vm.change -e="disk.enableUUID=1" -vm='/PATH/TO/VM'
# Convert the CentOS cloud image from qcow2 to VMDK so it can be uploaded to vSphere
qemu-img convert -f qcow2 -O vmdk CentOS-7-x86_64-GenericCloud.qcow2 CentOS-7-x86_64-GenericCloud.vmdk
Modifications like network or disk size must be made in the OVA template before creating a worker node from it. If user clusters have dedicated networks, each user cluster therefore needs a custom template.
During creation of a user cluster, Kubermatic Kubernetes Platform (KKP) creates a dedicated VM folder in the root path on the Datastore (defined in the seed cluster). That folder will contain all worker nodes of the user cluster.
Kubernetes needs to talk to vSphere to enable storage inside the cluster. For this, Kubernetes needs a dedicated cloud configuration. This config contains all details required to connect to a vCenter installation, including credentials. As this config must also be deployed onto each worker node of a user cluster, it is recommended to have individual credentials for each user cluster.
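For illustration, the cloud config consumed by the in-tree vSphere cloud provider is an INI file along the lines of the following sketch. All server names, credentials, and paths below are placeholders; KKP generates this file for each cluster.

```ini
; Illustrative values only; KKP fills these in per user cluster.
[Global]
user          = "cust-user-cluster"
password      = "changeme"
port          = "443"
insecure-flag = "0"

[VirtualCenter "vcenter.example.com"]
datacenter = "dc-1"

[Workspace]
server            = "vcenter.example.com"
datacenter        = "dc-1"
folder            = "kubernetes"
default-datastore = "datastore-1"

[Disk]
scsicontrollertype = pvscsi
```

Because this file contains the vCenter credentials in plain text, it is another reason to use per-cluster technical users rather than shared administrator accounts.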
The vSphere user must have the following permissions on the corresponding resources:
For provisioning actions of the KKP seed cluster, a technical user (e.g. cust-seed-cluster) is needed:
For provisioning actions of KKP in the scope of a user cluster, a technical user (e.g. cust-user-cluster) is needed:
cloud-init.iso (Ubuntu and CentOS) or defining the Ignition config via Guestinfo (CoreOS)
The described permissions have been tested with vSphere 6.7 and might differ for other vSphere versions.
It’s also possible to create the roles with a Terraform script. The following repository can be used as a reference:
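As a sketch of what such a Terraform script might contain, the vSphere provider’s `vsphere_role` resource can create a role from a list of privilege IDs. The role name and the (shortened) privilege list below are illustrative; the actual list must match the permission tables above.

```hcl
# Illustrative sketch: create a vSphere role for the user-cluster
# technical user. The privilege IDs shown are only a small sample.
resource "vsphere_role" "kkp_user_cluster" {
  name = "cust-user-cluster"

  role_privileges = [
    "Datastore.AllocateSpace",
    "Datastore.Browse",
    "VirtualMachine.Config.AddNewDisk",
  ]
}
```

The role can then be assigned to the technical user on the relevant resources (datacenter, datastore, VM folder) via permissions in vCenter.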
After a node is powered-off, the Kubernetes vSphere driver doesn’t detach disks associated with PVCs mounted on that node. This makes it impossible to reschedule pods using these PVCs until the disks are manually detached in vCenter.
Upstream Kubernetes has been working on the issue for a long time and is tracking it under the following tickets:
A Datastore in VMware vSphere is an abstraction for storage. A Datastore Cluster is a collection of datastores with shared resources and a shared management interface.
In KKP, Datastores are used for two purposes:
default-datastore value, which is the default datastore for dynamic volume provisioning.
Datastore Clusters can only be used for the first purpose, as they cannot be specified directly in the vSphere cloud configuration.
There are two places where Datastores and Datastore Clusters can be configured in KKP:
At the moment of writing this document, Datastore and Datastore Cluster are not yet supported at the Cluster level by the Kubermatic UI.
It is possible to specify a Datastore or a Datastore Cluster in a preset.
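As an illustration, the vSphere section of a cluster spec or preset might reference either option. The field and resource names below are placeholders; the exact field names should be verified against the KKP CRD reference for your version.

```yaml
# Illustrative sketch; verify field names against the KKP CRD reference.
# A Datastore and a Datastore Cluster are mutually exclusive: set one,
# not both.
cloud:
  vsphere:
    datastore: example-datastore
    # datastoreCluster: example-datastore-cluster
```

When a Datastore Cluster is used, KKP places the VMs in it, but dynamic volume provisioning still falls back to a concrete datastore, per the limitation described above.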