Frequently asked questions about machine-controller.
Machine-controller is a Kubernetes controller that manages the lifecycle of worker nodes across multiple cloud providers. It implements the Cluster API specification, allowing you to create, update, and delete machines using Kubernetes resources.
Machine-controller is an implementation of the Cluster API specification, specifically focused on managing worker nodes. While Cluster API provides a complete solution including control plane management, machine-controller focuses solely on worker node lifecycle management and is commonly used with KubeOne for control plane management.
Machine-controller supports multiple cloud providers. See Cloud Providers for detailed configuration.
Machine-controller supports several operating systems; support varies by cloud provider. See the OS support matrix for details.
The recommended way is through KubeOne, which automatically installs and configures machine-controller. For manual installation, see the Installation Guide.
Machine-controller runs as a Deployment in the kube-system namespace, typically on control plane nodes or dedicated infrastructure nodes.
Credentials can be provided via environment variables on the machine-controller deployment or via Kubernetes Secrets.
Example using secrets:
kubectl create secret generic cloud-credentials \
  -n kube-system \
  --from-literal=token=<your-token>
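The secret can then be surfaced to machine-controller as an environment variable. A minimal sketch, assuming the variable name `CLOUD_TOKEN` (hypothetical; use whatever name your provider configuration references):

```yaml
# Sketch: env entry on the machine-controller Deployment container,
# pulling the token from the secret created above.
env:
  - name: CLOUD_TOKEN              # hypothetical variable name
    valueFrom:
      secretKeyRef:
        name: cloud-credentials    # secret from the command above
        key: token
```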
Yes! You can create MachineDeployments for different cloud providers in the same cluster. Each MachineDeployment specifies its own provider configuration.
Create a MachineDeployment resource:
kubectl apply -f machinedeployment.yaml
See the Usage Guide for detailed examples.
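As a sketch, a minimal `machinedeployment.yaml` might look like the following (API version, names, and field values are illustrative; check the schema for your machine-controller release):

```yaml
apiVersion: cluster.k8s.io/v1alpha1    # API group used by machine-controller
kind: MachineDeployment
metadata:
  name: aws-workers                    # illustrative name
  namespace: kube-system
spec:
  replicas: 3
  template:
    spec:
      providerSpec:
        value:
          cloudProvider: "aws"
          cloudProviderSpec:
            instanceType: "t3.medium"  # example instance type
          operatingSystem: "ubuntu"
      versions:
        kubelet: "1.29.0"              # example kubelet version
```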
Use kubectl to scale:
kubectl scale machinedeployment <name> --replicas=5 -n kube-system
Yes! Scaling to zero is supported and useful for temporarily removing workers while preserving configuration:
kubectl scale machinedeployment <name> --replicas=0 -n kube-system
Update the kubelet version in the MachineDeployment spec:
kubectl patch machinedeployment <name> -n kube-system --type merge -p '
{
  "spec": {
    "template": {
      "spec": {
        "versions": {
          "kubelet": "<YOUR-UPGRADED-KUBERNETES-VERSION>"
        }
      }
    }
  }
}'
This triggers a rolling update.
Edit the MachineDeployment and update the cloud provider spec:
kubectl edit machinedeployment <name> -n kube-system
Change the instance type field (e.g., instanceType, machineType, serverType) and save. A rolling update will occur.
Provisioning time varies by cloud provider and OS; factors such as instance boot time and cloud-init execution affect overall speed.
Yes! Machine-controller is fully compatible with Kubernetes cluster-autoscaler. Annotate your MachineDeployments with min/max sizes:
metadata:
  annotations:
    cluster.k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.k8s.io/cluster-api-autoscaler-node-group-max-size: "10"
See the cluster-autoscaler documentation for details.
kubectl describe machine <name> -n kube-system
kubectl logs -n kube-system deployment/machine-controller
See the Troubleshooting Guide for detailed steps.
Common causes:
Check kubelet logs on the instance. SSH into the instance, then run:
journalctl -u kubelet -f
For provisioning output, check cloud-init:
sudo journalctl -u cloud-init
cat /var/log/cloud-init.log
For more detail from machine-controller itself, raise its log verbosity (e.g. -v=6).
Yes, but be careful:
# Remove finalizers to force delete
kubectl patch machine <name> -n kube-system -p '{"metadata":{"finalizers":[]}}' --type=merge
# Delete the machine
kubectl delete machine <name> -n kube-system
Manually clean up cloud resources if they still exist.
Yes! Specify a custom AMI/image ID in your provider configuration:
cloudProviderSpec:
  # AWS example
  ami: "ami-xxxxx"

  # Azure example
  imageReference:
    publisher: "my-publisher"
    offer: "my-offer"
    sku: "my-sku"
    version: "latest"
Ensure the image is compatible with the selected OS.
Custom cloud-init can be added in the Machine spec alongside the provider configuration:
spec:
  providerSpec:
    value:
      cloudProvider: "aws"
      # ... other config
      network:
        cidr: ""
        gateway: ""
        dns:
          servers: []
      # Custom cloud-init
      cloudInit: |
        #cloud-config
        runcmd:
          - echo "Custom initialization"
Yes, for supported providers:
AWS:
cloudProviderSpec:
  isSpotInstance: true
GCP:
cloudProviderSpec:
  preemptible: true
Note: spot instances can be terminated at any time; ensure your workloads tolerate interruption.
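One way to handle this, sketched under the assumption that you add your own taint to spot machines, is to keep only interruption-tolerant workloads on that capacity:

```yaml
# On the spot MachineDeployment: taint the nodes (key/value are assumptions)
spec:
  template:
    spec:
      taints:
        - key: "spot"
          value: "true"
          effect: "NoSchedule"
```

Workloads that can handle interruption then add a matching toleration (`key: spot`, `effect: NoSchedule`) in their pod spec.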
Configure private networking in your provider spec:
AWS:
cloudProviderSpec:
  assignPublicIP: false
  subnetId: "subnet-private"
Hetzner:
cloudProviderSpec:
  networks:
    - "my-private-network"
Ensure nodes can reach the API server and download packages.
Yes, specify the zone/availability zone in the provider config:
cloudProviderSpec:
  # AWS
  availabilityZone: "us-east-1a"
  # GCP
  zone: "us-central1-a"
  # Azure
  zone: "1"
Specify them in the Machine spec:
spec:
  template:
    metadata:
      labels:
        environment: production
        workload: compute
    spec:
      taints:
        - key: "dedicated"
          value: "gpu"
          effect: "NoSchedule"
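A workload meant for these machines would then select the labels and tolerate the taint from the example above:

```yaml
# Pod scheduling sketch matching the labels/taints shown above
spec:
  nodeSelector:
    environment: production
    workload: compute
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "gpu"
      effect: "NoSchedule"
```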
Yes, machine-controller automatically drains nodes before deletion. Configure drain behavior:
spec:
  template:
    metadata:
      annotations:
        "machine.k8s.io/exclude-node-draining": "false" # Enable draining (default)
Machine-controller can manage thousands of machines. Performance depends on several factors, including the configured worker count and cloud provider API latency.
For large deployments, increase worker count:
kubectl edit deployment machine-controller -n kube-system
# Add: -worker-count=20
Typical requirements:
Each cloud provider enforces its own API rate limits. Reduce the worker count if you hit them.
Credentials should be stored in Kubernetes Secrets with appropriate RBAC permissions. Machine-controller only needs get and list permissions on specific secrets.
Yes, on cloud platforms that support instance metadata-based credentials (e.g. IAM instance profiles).
Machine-controller needs permissions to create, query, and delete instances and related resources in your cloud account.
See cloud provider documentation for exact IAM policies/roles needed.
Update the image version:
kubectl set image deployment/machine-controller \
machine-controller=quay.io/kubermatic/machine-controller:<LATEST-VERSION> \
-n kube-system
Check release notes for breaking changes.
Yes, but it requires manual steps; there is no automated migration path.
Create MachineDeployments for the new cloud provider, migrate workloads, then scale down and remove the old MachineDeployments.
Open an issue on GitHub with logs, your (redacted) configuration, and steps to reproduce.
Yes! Contributions are welcome. See the Development Guide for details.
Yes, Kubermatic offers commercial support for machine-controller as part of their products.
Use machine-controller with KubeOne for a similar experience.
Machine-controller uses kubeadm internally for node provisioning.
Machine-controller is designed for self-managed clusters. For managed Kubernetes (EKS, GKE, AKS), use the provider’s native node group/pool management.
Yes. Use labels to categorize machines:
metadata:
  labels:
    environment: production
    pool: general
    region: us-east
    cost-center: engineering
Use node selectors and affinity rules to schedule workloads appropriately.
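For example, a pod template fragment pinned to the general pool (label values taken from the snippet above):

```yaml
# Pod template fragment: schedule only onto the matching machine pool
spec:
  nodeSelector:
    pool: general
    environment: production
```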