This guide provides comprehensive instructions for deploying and managing Kubermatic Virtualization using the declarative apply command with YAML configuration files.
The Kubermatic Virtualization apply command enables declarative cluster management through infrastructure-as-code practices, so the cluster can be deployed, scaled, and repaired from a single version-controlled configuration file.
Before beginning, ensure you have the kubermatic-virtualization CLI installed and SSH access, with key-based authentication, to all target hosts.
The declarative installation uses a YAML configuration file following the Kubermatic Virtualization API schema. This file serves as the single source of truth for your cluster’s desired state.
apiVersion: virtualization.k8c.io/v1alpha1
kind: KubeVCluster
networkConfiguration:
  dnsServerIP: "8.8.8.8"
  networkCIDR: "10.244.0.0/16"
  serviceCIDR: "10.96.0.0/12"
  gatewayIP: "10.244.0.1"
controlPlane:
  hosts:
    - address: "192.168.1.10"
      sshUsername: "ubuntu"
      sshPrivateKeyFile: "/home/user/.ssh/id_rsa"
loadBalancer:
  none: {}
storage:
  none: {}
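Before running apply against a configuration like the one above, a quick local pre-flight check can catch a missing top-level section early. The helper below is a sketch, not a kubermatic-virtualization feature; it only greps for the required section names.

```shell
# Local pre-flight sketch (not part of the CLI): confirm the required
# top-level sections exist in a configuration file before applying it.
check_config() {
  local file="$1" key missing=0
  for key in networkConfiguration controlPlane loadBalancer storage; do
    grep -q "^${key}:" "$file" || { echo "missing section: ${key}"; missing=1; }
  done
  return "$missing"
}
```

Used as a gate, for example: `check_config cluster.yaml && kubermatic-virtualization apply -f cluster.yaml`.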
apiVersion: virtualization.k8c.io/v1alpha1
kind: KubeVCluster

# Network configuration defines the fundamental connectivity layer
networkConfiguration:
  # DNS server for name resolution
  dnsServerIP: "8.8.8.8"
  # Pod network CIDR (default: 10.244.0.0/16)
  networkCIDR: "10.244.0.0/16"
  # Service network CIDR (default: 10.96.0.0/12)
  serviceCIDR: "10.96.0.0/12"
  # Gateway IP for pod network (default: 10.244.0.1)
  gatewayIP: "10.244.0.1"

# Control plane configuration
controlPlane:
  hosts:
    - address: "192.168.1.10"
      sshUsername: "ubuntu"
      sshPrivateKeyFile: "/home/user/.ssh/cluster-key"

# Worker nodes configuration
staticWorkers:
  hosts:
    - address: "192.168.1.11"
      sshUsername: "ubuntu"
      sshPrivateKeyFile: "/home/user/.ssh/cluster-key"
    - address: "192.168.1.12"
      sshUsername: "ubuntu"
      sshPrivateKeyFile: "/home/user/.ssh/cluster-key"

# Load balancer configuration (exactly one option required)
loadBalancer:
  # Option 1: Enable MetalLB
  metallb:
    ipRange: "192.168.1.100-192.168.1.150"
  # Option 2: Disable load balancer (uncomment to use)
  # none: {}

# Storage configuration (exactly one option required)
storage:
  # Option 1: Enable Longhorn distributed storage
  longhorn: {}
  # Option 2: Disable managed storage (uncomment to use)
  # none: {}
Create a YAML configuration file (e.g., cluster.yaml) with your cluster specifications:
# Create configuration file
cat > cluster.yaml <<EOF
apiVersion: virtualization.k8c.io/v1alpha1
kind: KubeVCluster
networkConfiguration:
  dnsServerIP: "8.8.8.8"
  networkCIDR: "10.244.0.0/16"
  serviceCIDR: "10.96.0.0/12"
  gatewayIP: "10.244.0.1"
controlPlane:
  hosts:
    - address: "192.168.1.10"
      sshUsername: "ubuntu"
      sshPrivateKeyFile: "/home/user/.ssh/id_rsa"
staticWorkers:
  hosts:
    - address: "192.168.1.11"
      sshUsername: "ubuntu"
      sshPrivateKeyFile: "/home/user/.ssh/id_rsa"
loadBalancer:
  metallb:
    ipRange: "192.168.1.100-192.168.1.150"
storage:
  longhorn: {}
EOF
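Because this file is the single source of truth for the cluster, it is worth keeping under version control so every change is reviewed and auditable. A minimal sketch using a plain git workflow follows; the directory name and identity flags are illustrative, and the stub file is only a fallback when no cluster.yaml exists yet.

```shell
# Track the cluster definition in git (standard git workflow; the
# directory name and commit identity below are illustrative).
mkdir -p cluster-config && cd cluster-config
# Copy your real config; the printf stub is only a fallback for illustration.
cp ../cluster.yaml . 2>/dev/null || printf 'kind: KubeVCluster\n' > cluster.yaml
git init -q .
git add cluster.yaml
git -c user.email=ops@example.com -c user.name=ops commit -qm "initial cluster definition"
```

From then on, every scaling or repair change to cluster.yaml can be committed before it is applied.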
Run the apply command to see what will be installed:
kubermatic-virtualization apply -f cluster.yaml
The command will display:
INFO[12:01:02 CET] ╔══════════════════════════════════════════════════════════════╗
INFO[12:01:02 CET] ║                                                              ║
INFO[12:01:02 CET] ║          KubeV - Kubermatic Virtualization Platform          ║
INFO[12:01:02 CET] ║                                                              ║
INFO[12:01:02 CET] ╚══════════════════════════════════════════════════════════════╝
INFO[12:01:02 CET]
INFO[12:01:02 CET] Starting cluster apply process configFile=cluster.yaml
INFO[12:01:02 CET] Loading configuration file file=cluster.yaml
INFO[12:01:02 CET] Configuration loaded and validated successfully loadBalancer=metallb masterNodes=1 storage=longhorn workerNodes=1
Do you want to proceed (yes/no):
Confirm the installation by typing yes and pressing Enter. The installation will proceed through multiple phases:
[KubeV] Identifying the operating system...
[KubeV] Setting up required software components...
[KubeV] Creating configuration files...
[KubeV] Performing initial system checks...
...
[KubeV] Configuring virtualization support...
[KubeV] ✓ Cluster installation completed successfully
The apply command is not limited to initial installation; it manages your cluster's entire lifecycle.
Add new nodes to the staticWorkers section and apply:
staticWorkers:
  hosts:
    - address: "192.168.1.11"
      sshUsername: "ubuntu"
      sshPrivateKeyFile: "/home/user/.ssh/id_rsa"
    - address: "192.168.1.12"
      sshUsername: "ubuntu"
      sshPrivateKeyFile: "/home/user/.ssh/id_rsa"
    # New node
    - address: "192.168.1.13"
      sshUsername: "ubuntu"
      sshPrivateKeyFile: "/home/user/.ssh/id_rsa"
kubermatic-virtualization apply -f cluster.yaml
If a node becomes unhealthy or is removed from the cluster, simply run apply again:
kubermatic-virtualization apply -f cluster.yaml
The command will detect any drift between the configuration file and the actual cluster, then repair or rejoin the affected nodes to restore the desired state.
To check current cluster state without making changes:
kubermatic-virtualization apply -f cluster.yaml --verbose
This shows detailed information about the configuration being applied and the current state of each node.
Error: configuration validation failed: invalid IP address
Solution: Verify all IP addresses are valid IPv4 addresses:
controlPlane:
  hosts:
    - address: "192.168.1.10"  # ✓ Valid
    # - address: "192.168.1"   # ✗ Invalid
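Malformed addresses can also be caught locally before running apply. The bash sketch below is only a rough format check (it does not verify that each octet is ≤ 255) and is not part of the CLI's own validation.

```shell
# Rough IPv4 format check (bash regex; does not validate octet ranges)
is_ipv4() {
  [[ "$1" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ ]]
}

is_ipv4 "192.168.1.10" && echo "192.168.1.10 valid"
is_ipv4 "192.168.1"    || echo "192.168.1 invalid"
```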
Error: SSH key file does not exist
Solution: Ensure SSH key paths are absolute and files exist:
# Check key file exists
ls -la /home/user/.ssh/id_rsa
# Fix permissions if needed
chmod 600 /home/user/.ssh/id_rsa
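When several keys are in play, the permission check can be automated with a small helper. This is a sketch using GNU coreutils `stat -c`; on macOS the equivalent is `stat -f '%Lp'`.

```shell
# Warn when a private key is readable by group or others
# (GNU coreutils stat; on macOS use: stat -f '%Lp')
key_perms_ok() {
  local mode
  mode=$(stat -c '%a' "$1") || return 1
  [ "$mode" = "600" ] || [ "$mode" = "400" ]
}
```

For example: `key_perms_ok /home/user/.ssh/id_rsa || chmod 600 /home/user/.ssh/id_rsa`.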
Error: exactly one load balancer option must be specified
Solution: Set either metallb or none, not both:
# Choose one:
loadBalancer:
  metallb:
    ipRange: "192.168.1.100-192.168.1.150"
  # OR
  # none: {}
Error: failed to connect to host: connection refused
Solutions:
1. Verify SSH connectivity:
   ssh -i /home/user/.ssh/id_rsa ubuntu@192.168.1.10
2. Check that firewall rules allow SSH (port 22).
3. Verify the SSH service is running on the target nodes:
   systemctl status sshd
Error: repair and upgrade are not supported simultaneously
Solution: This occurs when trying to upgrade while nodes are unhealthy. First repair the cluster:
# Step 1: Repair cluster with current version
kubermatic-virtualization apply -f cluster.yaml
# Step 2: After repair completes, upgrade
# (update version in cluster.yaml)
kubermatic-virtualization apply -f cluster.yaml
Detailed logs are available at /tmp/kubermatic-virtualization.log:
# View recent logs
tail -f /tmp/kubermatic-virtualization.log
# Search for errors
grep -i error /tmp/kubermatic-virtualization.log
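For a quicker overview than scanning the full log, the grep-based helper below summarizes it: an error count followed by the last few error lines with their line numbers. It is a local convenience sketch built on plain POSIX tools, not a kubermatic-virtualization subcommand.

```shell
# Summarize a log file: error count, then the last five error lines
# with line numbers (plain grep/tail; helper sketch, not a CLI feature).
log_error_summary() {
  local log="$1"
  printf 'errors: %s\n' "$(grep -ci error "$log")"
  grep -in error "$log" | tail -n 5
}
```

For example: `log_error_summary /tmp/kubermatic-virtualization.log`.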
After successful installation, verify cluster health and begin workload deployment.
# Set KUBECONFIG
export KUBECONFIG=kubev-cluster-kubeconfig
# Check nodes
kubectl get nodes
# Expected output:
# NAME    STATUS   ROLES           AGE   VERSION
# node1   Ready    control-plane   5m    v1.33.0
# node2   Ready    <none>          4m    v1.33.0
# Check system pods
kubectl get pods --all-namespaces
# Check storage (if Longhorn enabled)
kubectl get pods -n longhorn-system
# Check load balancer (if MetalLB enabled)
kubectl get pods -n metallb-system
For production deployments, enterprise support, or licensing inquiries, contact Kubermatic at sales@kubermatic.com.