At its heart, Kubermatic Virtualization uses KubeVirt, a Kubernetes add-on. KubeVirt allows you to run virtual machines (VMs) right alongside your containers, and it’s built to heavily use Kubernetes’ existing storage model. The Container Storage Interface (CSI) driver is a crucial component in this setup because it allows KubeVirt to leverage the vast and diverse storage ecosystem of Kubernetes for its VMs.
The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Container Orchestration Systems (COs) like Kubernetes. Before CSI, storage integrations were tightly coupled with Kubernetes’ core code. CSI revolutionized this by providing a pluggable architecture, allowing storage vendors to develop drivers that can integrate with Kubernetes without modifying Kubernetes itself.
KubeVirt’s integration with CSI drivers is fundamental to how it manages VM storage. This document explains how CSI enables dynamic volume provisioning, image importing, and advanced VM disk features in KubeVirt.
KubeVirt does not directly interact with the underlying storage backend (e.g., SAN, NAS, cloud block storage). Instead, it uses Kubernetes’ PersistentVolumeClaim (PVC) abstraction: when a VM is defined, KubeVirt requests a PVC.
PVCs reference a StorageClass, which is configured to use a specific CSI driver as its “provisioner”.
The CSI driver associated with the StorageClass handles the provisioning of persistent storage by interfacing with external systems (e.g., vCenter, Ceph, cloud providers).
Once the resulting PersistentVolume (PV) is bound to the claim, KubeVirt uses the virt-launcher pod to attach the volume to the VM as a virtual disk.
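For orientation, here is a minimal sketch of the kind of PVC KubeVirt ends up requesting. The claim name is a placeholder, and my-fast-storage stands in for any CSI-backed StorageClass (a full StorageClass definition follows later in this document):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-vm-root-disk              # placeholder claim name
spec:
  storageClassName: my-fast-storage  # StorageClass whose provisioner is a CSI driver
  accessModes:
  - ReadWriteOnce                    # or ReadWriteMany for live migration
  volumeMode: Block                  # Block or Filesystem; choose based on workload needs
  resources:
    requests:
      storage: 20Gi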
KubeVirt works with the Containerized Data Importer (CDI) project to import disk images (e.g., .qcow2, .raw) from HTTP, S3, and other sources into PVCs.
CDI relies on CSI drivers to provision the PVCs that will store the imported images. After import, KubeVirt consumes the PVC as a disk.
The DataVolume custom resource, provided by CDI, ties image importing and PVC creation together in a declarative way, as sketched below.
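As a sketch of what such a declarative import looks like, a standalone DataVolume that pulls a qcow2 image over HTTP could be written roughly as follows. The resource name is a placeholder, and the URL and StorageClass are the same example values used further below:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: imported-disk                # placeholder name
spec:
  source:
    http:
      url: "http://example.com/my-vm-image.qcow2"   # image to import
  pvc:
    storageClassName: my-fast-storage               # CSI-backed StorageClass
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi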
CSI drivers bring features to VM setups that were previously complex to achieve:
- Snapshots: VolumeSnapshot objects for point-in-time backups (see the sketch after the example flow below).
- Volume expansion: StorageClasses with allowVolumeExpansion: true let a VM disk grow by resizing its PVC.
- Volume modes: Filesystem and Block. Choose based on workload performance needs.
A typical end-to-end flow looks like this. First, the admin creates a StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-fast-storage
provisioner: csi.my-storage-vendor.com # This points to the specific CSI driver
parameters:
  type: "ssd"
volumeBindingMode: WaitForFirstConsumer # Important for VM scheduling
allowVolumeExpansion: true
Next, the user defines a VirtualMachine with a DataVolume template:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-vm
spec:
  dataVolumeTemplates:
  - metadata:
      name: my-vm-disk
    spec:
      source:
        http:
          url: "http://example.com/my-vm-image.qcow2"
      pvc:
        storageClassName: my-fast-storage # References the StorageClass
        accessModes:
        - ReadWriteOnce # Or ReadWriteMany for live migration
        resources:
          requests:
            storage: 20Gi
  template:
    spec:
      domain:
        devices:
          disks:
          - name: my-vm-disk
            disk:
              bus: virtio
        # ... other VM specs
      volumes:
      - name: my-vm-disk
        dataVolume:
          name: my-vm-disk
In this flow:
KubeVirt sees the dataVolumeTemplates entry and creates a DataVolume named my-vm-disk; CDI picks it up and requests a PVC using my-fast-storage.
The my-fast-storage StorageClass directs the request to csi.my-storage-vendor.com (the CSI driver).
The CSI driver provisions a 20Gi volume on the backend storage.
CDI then imports my-vm-image.qcow2 into this newly provisioned PVC.
Once the data import is complete, KubeVirt starts the VM, and the PVC is attached as the VM’s disk.
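The snapshot capability mentioned earlier plugs into the same PVC. As a hedged sketch (not part of the example flow above), and assuming the CSI driver provides a VolumeSnapshotClass, here called my-snapshot-class, a point-in-time backup of the VM disk could be requested like this:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-vm-disk-snapshot                      # placeholder name
spec:
  volumeSnapshotClassName: my-snapshot-class     # assumed VolumeSnapshotClass for the CSI driver
  source:
    persistentVolumeClaimName: my-vm-disk        # the PVC created from the DataVolume template

Restoring works the other way around: a new PVC can reference the snapshot as its dataSource and be handed to a VM as a fresh disk.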
In summary, KubeVirt uses CSI to:
- dynamically provision persistent storage for VM disks through StorageClasses and PVCs,
- store disk images that CDI imports from HTTP, S3, and other sources,
- attach the provisioned volumes to VMs as virtual disks via the virt-launcher pod, and
- expose advanced capabilities such as snapshots, volume expansion, and block or filesystem volume modes.