Prerequisites

  • A Linux environment to work from (personal machine, VM, EC2)
  • kops, kubectl, and the AWS CLI must be installed

    It may be possible to install these dependencies from your package manager; if not, a rough manual install is sketched below. The documentation mentions generating an SSH key before cluster creation. I didn't see an SSH key used in any of the later commands, but I did already have an id_rsa key in my ~/.ssh folder.
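
If your package manager doesn't have recent versions, something along these lines should work on a typical Linux machine (the kops version and install paths here are just examples, and the AWS CLI is easiest to install following AWS's own docs):

# kops binary from the GitHub releases page
curl -Lo kops https://github.com/kubernetes/kops/releases/download/v1.23.2/kops-linux-amd64
chmod +x kops && sudo mv kops /usr/local/bin/

# kubectl from the official release bucket
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/

# generate an ssh key if ~/.ssh/id_rsa doesn't already exist
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa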

IAM

The official instructions have you create a dedicated IAM user for kops and then configure the AWS CLI to default to that user. While that is closer to best practice, I don't want my AWS CLI defaulting to the kops user, so I configured it for my personal IAM user, which already has sufficient privileges. To do this:

  • Create an access key for your IAM user.
  • Run aws configure from the terminal, entering your new access key ID and secret access key at the prompts. I use the us-west-2 region as the default because it is closest to me and json as the default output format, but both are a matter of preference.
    • Test the CLI with:
        aws ec2 describe-availability-zones
      

      The output should show the availability zones for your default region in the ZoneName field. Take note of them; you'll need at least one when creating the cluster.
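
      If you only want the zone names, a --query filter trims the output down (purely optional):

        aws ec2 describe-availability-zones \
        --query 'AvailabilityZones[].ZoneName' \
        --output text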

Configure DNS

My goal in this was to provision a cluster quickly and cheaply for labs/fun. To avoid having to purchase a domain and provision route53 records I decided to go with the gossip-based-dns option. So there is no DNS configuration!

Create State Bucket

Kops uses S3 to manage state. You can think of the bucket as the etcd of kops. Create a bucket for state; the bucket name has to be globally unique across S3, not just within your account or region. Note that when creating a bucket outside the us-east-1 region you need the --create-bucket-configuration option:

aws s3api create-bucket \
--bucket <state-bucket-name> \
--region us-west-2 \
--create-bucket-configuration LocationConstraint=us-west-2
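
It's also worth enabling versioning on the state bucket (the kops docs recommend it) so you can recover a previous version of the cluster state if something gets clobbered:

aws s3api put-bucket-versioning \
--bucket <state-bucket-name> \
--versioning-configuration Status=Enabled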

Create OIDC Bucket

I’d never heard of OIDC before this, so I’ll have to read more to understand how kops is using it here. From what I can tell, kops publishes the cluster’s OIDC discovery documents to this bucket, which is why it is created with a public-read ACL; in any case, kops needs a bucket to work from.

aws s3api create-bucket \
--bucket <oidc-bucket-name> \
--region us-west-2 \
--create-bucket-configuration LocationConstraint=us-west-2 \
--acl public-read
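
To sanity-check the ACL, you can read it back; the Grants in the output should include READ for the AllUsers group:

aws s3api get-bucket-acl --bucket <oidc-bucket-name>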

Create a Cluster

Kops will automatically use gossip for DNS when it sees .k8s.local at the end of the cluster name. Without the --node-count parameter the default is one master and one worker node.

kops create cluster \
--name=<cluster-name.k8s.local> \
--cloud=aws \
--zones=<aws-az> \
--state=s3://<state-bucket-name> \
--discovery-store=s3://<oidc-bucket-name>/<cluster-name.k8s.local>/discovery \
--node-count=<number-of-desired-nodes> \
--yes

Make sure to read the documentation, as there are many other options. Some notable ones are --target to generate Terraform or CloudFormation templates, --dry-run with --output to print the cluster spec in a given format without changing state, and --cloud-labels to add tags to the created resources. Omitting the --yes option prevents the cluster from being created immediately.
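
As an example of those options, this dry run (same placeholders as above) just prints the cluster spec as YAML without touching AWS:

kops create cluster \
--name=<cluster-name.k8s.local> \
--cloud=aws \
--zones=<aws-az> \
--state=s3://<state-bucket-name> \
--dry-run \
--output yaml > cluster.yaml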

Validate

Run the kops validate command to make sure the cluster comes up okay.

kops validate cluster --wait 10m
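
If validate can't find the cluster, it's usually because it doesn't know which state bucket to use; either pass --state or export it once (an extra step on my part, not from the original instructions):

export KOPS_STATE_STORE=s3://<state-bucket-name>
kops validate cluster --wait 10m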

The first time around I ran into a recurring authorization error, which I fixed with:

kops export kubecfg --admin

The docs make it seem as if my kubeconfig should have been updated automatically, so I’m not sure what happened there, but after that I was able to run kubectl from my machine without issue.
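
A quick check that kubectl is really talking to the new cluster:

kubectl get nodes -o wide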

Delete

When you’re done, just run kops delete cluster <cluster-name> --yes to remove the resources provisioned by the create command.
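
Spelled out with the same placeholders and the state bucket, that looks like:

kops delete cluster \
--name=<cluster-name.k8s.local> \
--state=s3://<state-bucket-name> \
--yes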

Note: This write-up is based on, but does not take precedence over, the official documentation. My intention is to simplify these instructions for my own understanding. The kops version used was 1.23.2.