GCE

How To Install A Kubernetes Cluster On GCE Using KubeOne

In this quick start we’re going to show how to get started with KubeOne on GCE. We’ll cover how to create the needed infrastructure using our example Terraform scripts and then install Kubernetes. Finally, we’re going to show how to destroy the cluster along with the infrastructure.

As a result, you’ll get a highly available (HA) Kubernetes cluster with three control plane nodes and one worker node.

Prerequisites

To follow this quick start, you’ll need:

  • KubeOne v0.11.1 or newer installed, which can be done by following the Installing KubeOne section of the README
  • Terraform v0.12.0 or newer installed. Older releases are not compatible. The binaries for Terraform can be found on the Terraform website

Setting Up Credentials

The provided credentials are deployed to the cluster so that machine-controller can use them to create worker nodes. You may want to consider providing non-administrator credentials to increase security.

In order for Terraform to successfully create the infrastructure and for machine-controller to create worker nodes, you need a Service Account with the appropriate permissions. These are:

  • Compute Admin: roles/compute.admin
  • Service Account User: roles/iam.serviceAccountUser
  • Viewer: roles/viewer

If the gcloud CLI is installed, a service account can be created as follows:

# create new service account
gcloud iam service-accounts create k1-cluster-provisioner

# get your service account id
gcloud iam service-accounts list
# get your project id
gcloud projects list

# create policy binding
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID --member 'serviceAccount:YOUR_SERVICE_ACCOUNT_ID' --role='roles/compute.admin'
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID --member 'serviceAccount:YOUR_SERVICE_ACCOUNT_ID' --role='roles/iam.serviceAccountUser' 
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID --member 'serviceAccount:YOUR_SERVICE_ACCOUNT_ID' --role='roles/viewer'
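
Optionally, you can verify that the bindings were applied by inspecting the project’s IAM policy (this only reads the policy, it doesn’t change anything):

# show the project's IAM policy, including the new bindings
gcloud projects get-iam-policy YOUR_PROJECT_ID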

A Google Service Account for the platform has to be created; see Creating and managing service accounts. The result is a JSON file containing the following fields:

  • type
  • project_id
  • private_key_id
  • private_key
  • client_email
  • client_id
  • auth_uri
  • token_uri
  • auth_provider_x509_cert_url
  • client_x509_cert_url

# create a new json key for your service account
gcloud iam service-accounts keys create --iam-account YOUR_SERVICE_ACCOUNT k1-cluster-provisioner-sa-key.json

Once you have the Service Account key, you need to set the GOOGLE_CREDENTIALS environment variable:

# export JSON file content of created service account json key
export GOOGLE_CREDENTIALS=$(cat ./k1-cluster-provisioner-sa-key.json)

Also, the Compute Engine API has to be enabled for the project in the Google APIs Console.
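
If you have the gcloud CLI installed, the Compute Engine API can also be enabled from the command line instead of the console (YOUR_PROJECT_ID is your project ID, as above):

# enable the Compute Engine API for the project
gcloud services enable compute.googleapis.com --project YOUR_PROJECT_ID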

Creating Infrastructure

KubeOne is based on the Bring-Your-Own-Infra approach, which means that you have to provide the machines and other needed resources yourself. To make this task easier, we provide example Terraform scripts that you can use to get started. You’re free to use your own scripts or any other preferred approach.

The Terraform scripts for GCE are located in the ./examples/terraform/gce directory.

KubeOne comes with the Terraform integration that can source information about the infrastructure directly from the Terraform output. If you decide not to use our Terraform scripts, but you still want to use the Terraform integration, you must ensure that your Terraform output (output.tf) is using the same format as ours. Alternatively, if you decide not to use Terraform, you can provide needed information about the infrastructure manually in the KubeOne configuration file.

First, we need to switch to the directory with Terraform scripts:

cd ./examples/terraform/gce

Before we can use Terraform to create the infrastructure for us, Terraform needs to download the GCE plugin. This is done by running the init command:

terraform init

You need to run this command only once, before using the scripts for the first time.

You may want to configure the provisioning process by setting variables that define the cluster name, GCE region, instance size, and similar. The easiest way is to create the terraform.tfvars file and store the variables there. This file is automatically read by Terraform.

nano terraform.tfvars

For the list of available settings along with their names, please see the variables.tf file. You should consider setting:

Variable            | Required | Default Value     | Description
--------------------|----------|-------------------|------------
cluster_name        | yes      |                   | cluster name and prefix for cloud resources
project             | yes      |                   | GCP Project ID
region              | no       | europe-west3      | GCP region to use for all resources
ssh_public_key_file | no       | ~/.ssh/id_rsa.pub | path to your SSH public key that’s deployed on instances
control_plane_type  | no       | n1-standard-1     | control plane instance type (note that you should have at least 2 GB RAM and 2 CPUs for Kubernetes to work properly)

The terraform.tfvars file can look like:

cluster_name = "demo"

project = "kubeone-demo-project"

region = "europe-west1"
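
The same file can also override other variables from the table above, for example the SSH public key path and the control plane instance type (the values below are illustrative):

ssh_public_key_file = "~/.ssh/id_rsa.pub"

control_plane_type = "n1-standard-2"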

Now that you’ve configured Terraform, you can use the plan command to see what changes will be made:

terraform plan

Finally, if you agree with the changes, you can proceed and provision the infrastructure:

terraform apply -var control_plane_target_pool_members_count=1

The control_plane_target_pool_members_count variable is needed in order to bootstrap the control plane successfully. Once the installation is done, it’s recommended to include all control plane VMs in the load balancer (this is covered a bit later in this document).

Shortly after, you’ll be asked to enter yes to confirm your intention to provision the infrastructure.

Infrastructure provisioning takes around 5 minutes.

Once the provisioning is done, you need to export the Terraform output using the following command. This Terraform output file will be used by KubeOne to source information about the infrastructure and worker nodes.

terraform output -json > tf.json

The generated output is based on the output.tf file. If you want to change any settings, such as how worker nodes are created, you can modify the output.tf file. Make sure to run terraform apply and terraform output again after modifying the file.

Installing Kubernetes

Now that you have the infrastructure, you can proceed with provisioning your Kubernetes cluster using KubeOne.

Before you start, you’ll need a configuration file that defines how Kubernetes will be installed, e.g. what version will be used and what features will be enabled. For the configuration file reference run kubeone config print --full.

To get started you can use the following configuration file:

apiVersion: kubeone.io/v1alpha1
kind: KubeOneCluster
versions:
  kubernetes: '1.18.0'
cloudProvider:
  name: 'gce'
  cloudConfig: |
    [global]
    regional = true

This configuration manifest instructs KubeOne to provision a Kubernetes 1.18.0 cluster on GCE. Other properties, including information about the infrastructure and how to create worker nodes, are sourced from the Terraform output. As KubeOne uses Kubermatic machine-controller for creating worker nodes, see the GCE example manifest for available options.

If control plane nodes are created in multiple zones, you must configure kube-controller-manager to support regional clusters by setting regional to true. Otherwise, kube-controller-manager will fail to create the needed routes and other cloud resources, without which the cluster can’t function properly. The example Terraform configuration creates control plane nodes in multiple zones by default.

Finally, we’re going to install Kubernetes by using the install command and providing the configuration file and the Terraform output:

kubeone install -m config.yaml --tfjson <DIR-WITH-tfstate-FILE>

Alternatively, if the Terraform state file is in the current working directory, --tfjson . can be used as well.
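
For example, with the configuration file from above and the Terraform output in the current directory:

kubeone install -m config.yaml --tfjson .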

The installation process takes some time, usually 5-10 minutes. The output should look like the following one:

INFO[17:24:41 EET] Installing prerequisites…
INFO[17:24:42 EET] Determine operating system…                   node=35.198.117.209
INFO[17:24:42 EET] Determine operating system…                   node=35.246.186.88
INFO[17:24:42 EET] Determine operating system…                   node=35.198.129.205
INFO[17:24:42 EET] Determine hostname…                           node=35.198.117.209
INFO[17:24:42 EET] Creating environment file…                    node=35.198.117.209
INFO[17:24:42 EET] Installing kubeadm…                           node=35.198.117.209 os=ubuntu
INFO[17:24:43 EET] Deploying configuration files…                node=35.198.117.209 os=ubuntu
INFO[17:24:43 EET] Determine hostname…                           node=35.246.186.88
INFO[17:24:43 EET] Creating environment file…                    node=35.246.186.88
INFO[17:24:43 EET] Installing kubeadm…                           node=35.246.186.88 os=ubuntu
INFO[17:24:43 EET] Determine hostname…                           node=35.198.129.205
INFO[17:24:43 EET] Deploying configuration files…                node=35.246.186.88 os=ubuntu
INFO[17:24:43 EET] Creating environment file…                    node=35.198.129.205
INFO[17:24:43 EET] Installing kubeadm…                           node=35.198.129.205 os=ubuntu
INFO[17:24:43 EET] Deploying configuration files…                node=35.198.129.205 os=ubuntu
INFO[17:24:44 EET] Generating kubeadm config file…
INFO[17:24:45 EET] Configuring certs and etcd on first controller…
INFO[17:24:45 EET] Ensuring Certificates…                        node=35.246.186.88
INFO[17:24:47 EET] Downloading PKI files…                        node=35.246.186.88
INFO[17:24:49 EET] Creating local backup…                        node=35.246.186.88
INFO[17:24:49 EET] Deploying PKI…
INFO[17:24:49 EET] Uploading files…                              node=35.198.117.209
INFO[17:24:49 EET] Uploading files…                              node=35.198.129.205
INFO[17:24:52 EET] Configuring certs and etcd on consecutive controller…
INFO[17:24:52 EET] Ensuring Certificates…                        node=35.198.117.209
INFO[17:24:52 EET] Ensuring Certificates…                        node=35.198.129.205
INFO[17:24:54 EET] Initializing Kubernetes on leader…
INFO[17:24:54 EET] Running kubeadm…                              node=35.246.186.88
INFO[17:25:09 EET] Joining controlplane node…
INFO[17:26:36 EET] Copying Kubeconfig to home directory…         node=35.198.117.209
INFO[17:26:36 EET] Copying Kubeconfig to home directory…         node=35.246.186.88
INFO[17:26:36 EET] Copying Kubeconfig to home directory…         node=35.198.129.205
INFO[17:26:37 EET] Building Kubernetes clientset…
INFO[17:26:39 EET] Applying canal CNI plugin…
INFO[17:26:43 EET] Installing machine-controller…
INFO[17:26:46 EET] Installing machine-controller webhooks…
INFO[17:26:47 EET] Waiting for machine-controller to come up…
INFO[17:27:12 EET] Creating worker machines…

Once the installation is finished, include the other two control plane VMs in the load balancer by running terraform apply again:

terraform apply

KubeOne automatically downloads the Kubeconfig file for the cluster. It’s named <cluster_name>-kubeconfig, where <cluster_name> is the name provided in the terraform.tfvars file. You can use it with kubectl, for example:

kubectl --kubeconfig=<cluster_name>-kubeconfig

or export the KUBECONFIG environment variable:

export KUBECONFIG=$PWD/<cluster_name>-kubeconfig
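
With KUBECONFIG exported, you can quickly verify that the cluster is up; all three control plane nodes should be listed (the worker node appears once machine-controller has created it):

kubectl get nodes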

You can check the Configure Access To Multiple Clusters document to learn more about managing access to your clusters.

Scaling Worker Nodes

Worker nodes are managed by machine-controller. By default, it creates one MachineDeployment object. That object can be scaled up and down (including to 0) using the Kubernetes API. To do so, first retrieve the MachineDeployments by running:

kubectl get machinedeployments -n kube-system

The names of the MachineDeployments are generated. You can scale the worker nodes they manage using:

kubectl --namespace kube-system scale machinedeployment/<MACHINE-DEPLOYMENT-NAME> --replicas=3

The kubectl scale command does not work as expected with kubectl v1.15. If you want to use the scale command, please use kubectl v1.16 or newer.
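
After scaling, you can watch the new Machine objects and the corresponding Nodes come up; machine-controller creates the Machine objects in the kube-system namespace:

kubectl get machines -n kube-system
kubectl get nodes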

Deleting The Cluster

Before deleting a cluster, you should clean up all MachineDeployments, so that all worker nodes are deleted. You can do it with the kubeone reset command:

kubeone reset config.yaml --tfjson <DIR-WITH-tfstate-FILE>

This command waits for all worker nodes to be gone. Once it’s done, you can proceed and destroy the GCE infrastructure using Terraform:

terraform destroy

You’ll be asked to enter yes to confirm your intention to destroy the cluster.

Congratulations! You’re now running a Kubernetes HA cluster with three control plane nodes and one worker node. If you want to learn more about KubeOne and its features, make sure to check our documentation.