k8s with kops and terraform

Posted on September 4, 2021 (updated September 7, 2021) by Andy Jenkins
What you need:

  • kOps
  • kubectl (the kubernetes CLI)
  • An AWS account
  • Source control is nice to have also… (a quick install sketch follows the links below)

Links:

  • https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md
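Before any of this, the kops and kubectl binaries need to be on the box. The getting-started doc linked above covers installation properly; as a rough sketch for Linux (the version here is pinned only as an example, grab whatever is current):

curl -Lo kops https://github.com/kubernetes/kops/releases/download/v1.21.1/kops-linux-amd64
chmod +x kops && sudo mv kops /usr/local/bin/kops

curl -LO https://dl.k8s.io/release/v1.21.1/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl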

I assume you are at a console with the AWS CLI installed and configured for an account.
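A quick sanity check that the CLI really is pointed at the account you expect (standard AWS CLI calls, nothing specific to this post):

aws sts get-caller-identity    # account ID and IAM identity in use
aws configure list             # active profile, region, and credential source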

Create a Route 53 hosted zone for your cluster

Already completed… need to populate.
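I still need to write this section up, but the short version is that kOps needs a Route 53 hosted zone to put the cluster's DNS records in. A minimal sketch with the AWS CLI, assuming the zone is myezbrew.com (the DNS zone used later in this post):

# create the hosted zone (skip if it already exists); the caller reference just has to be unique
aws route53 create-hosted-zone --name myezbrew.com --caller-reference myezbrew-$(date +%s)

# confirm the zone and note its name servers (the domain's registrar must point at these)
aws route53 list-hosted-zones-by-name --dns-name myezbrew.com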

 

Create an S3 bucket to store your cluster's state

Already completed… need to populate.
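Also still to be written up, but it boils down to a bucket matching the KOPS_STATE_STORE value used later in this post, with versioning turned on so earlier cluster state can be recovered. A rough sketch, assuming us-west-2:

aws s3api create-bucket --bucket my-ezbrew-state-store --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2

# versioning is recommended for the kOps state store
aws s3api put-bucket-versioning --bucket my-ezbrew-state-store \
  --versioning-configuration Status=Enabled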

Build your cluster configuration
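The command that builds the initial configuration is pulled from my notes at the end of this post; with the state store exported it looks like this (the create secret line only matters if you want to push your own SSH public key):

export KOPS_STATE_STORE=s3://my-ezbrew-state-store

kops create cluster --zones=us-west-2a uswest2.k8s.myezbrew.com
kops create secret --name uswest2.k8s.myezbrew.com sshpublickey admin -i ~/.ssh/id_rsa.pub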

 

To keep costs down I modify the InstanceGroups for the nodes and the master to use smaller machine types (t2.small for the nodes, t3.small for the master, per the files below). The "kops edit ig" command launches vi, where we search and replace the machineType: value.

Commands and files after edit:

				
kops edit ig --name=uswest2.k8s.myezbrew.com nodes-us-west-2a
kops edit ig --name=uswest2.k8s.myezbrew.com master-us-west-2a
				
			
				
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2021-09-05T19:57:01Z"
  labels:
    kops.k8s.io/cluster: uswest2.k8s.myezbrew.com
  name: nodes-us-west-2a
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210720
  machineType: t2.small
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-us-west-2a
  role: Node
  subnets:
  - us-west-2a
				
			
				
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2021-09-05T19:57:01Z"
  generation: 1
  labels:
    kops.k8s.io/cluster: uswest2.k8s.myezbrew.com
  name: master-us-west-2a
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210720
  machineType: t3.small
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-west-2a
  role: Master
  subnets:
  - us-west-2a
				
			
				
bitnami@ip-172-26-15-161:~$ kops update cluster --name uswest2.k8s.myezbrew.com --yes --admin
I0905 20:15:38.450332   16836 executor.go:111] Tasks: 0 done / 79 total; 43 can run
W0905 20:15:38.559050   16836 vfs_castore.go:612] CA private key was not found
I0905 20:15:38.563469   16836 keypair.go:195] Issuing new certificate: "etcd-clients-ca"
I0905 20:15:38.563919   16836 keypair.go:195] Issuing new certificate: "etcd-peers-ca-main"
I0905 20:15:38.567267   16836 keypair.go:195] Issuing new certificate: "etcd-manager-ca-events"
I0905 20:15:38.567324   16836 keypair.go:195] Issuing new certificate: "apiserver-aggregator-ca"
I0905 20:15:38.695015   16836 keypair.go:195] Issuing new certificate: "etcd-manager-ca-main"
W0905 20:15:38.718208   16836 vfs_castore.go:612] CA private key was not found
I0905 20:15:38.718754   16836 keypair.go:195] Issuing new certificate: "ca"
I0905 20:15:39.079517   16836 keypair.go:195] Issuing new certificate: "etcd-peers-ca-events"
I0905 20:15:39.413215   16836 keypair.go:195] Issuing new certificate: "service-account"
I0905 20:15:40.948295   16836 executor.go:111] Tasks: 43 done / 79 total; 18 can run
I0905 20:15:41.795362   16836 executor.go:111] Tasks: 61 done / 79 total; 16 can run
I0905 20:15:42.911994   16836 executor.go:111] Tasks: 77 done / 79 total; 2 can run
I0905 20:15:44.231630   16836 executor.go:155] No progress made, sleeping before retrying 2 task(s)
I0905 20:15:54.232653   16836 executor.go:111] Tasks: 77 done / 79 total; 2 can run
I0905 20:15:56.245380   16836 executor.go:111] Tasks: 79 done / 79 total; 0 can run
I0905 20:15:56.245420   16836 dns.go:157] Checking DNS records
I0905 20:15:56.517230   16836 dns.go:219] Pre-creating DNS records
I0905 20:15:56.928999   16836 update_cluster.go:313] Exporting kubecfg for cluster
kOps has set your kubectl context to uswest2.k8s.myezbrew.com
Cluster is starting.  It should be ready in a few minutes.
Suggestions:
 * validate cluster: kops validate cluster --wait 10m
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa ubuntu@api.uswest2.k8s.myezbrew.com
 * the ubuntu user is specific to Ubuntu. If not using Ubuntu please use the appropriate user based on your OS.
 * read about installing addons at: https://kops.sigs.k8s.io/operations/addons.
bitnami@ip-172-26-15-161:~$ 
bitnami@ip-172-26-15-161:~$ kops validate cluster --wait 10m
Using cluster from kubectl context: uswest2.k8s.myezbrew.com
Validating cluster uswest2.k8s.myezbrew.com
INSTANCE GROUPS
NAME                    ROLE    MACHINETYPE     MIN     MAX     SUBNETS
master-us-west-2a       Master  t3.small        1       1       us-west-2a
nodes-us-west-2a        Node    t2.small        1       1       us-west-2a
NODE STATUS
NAME                                            ROLE    READY
ip-172-20-38-182.us-west-2.compute.internal     master  True
ip-172-20-43-42.us-west-2.compute.internal      node    True
Your cluster uswest2.k8s.myezbrew.com is ready
bitnami@ip-172-26-15-161:~$
				
			

Clean-up

				
andy@Andrews-Mini .aws % kops delete cluster --name ${NAME} --yes
TYPE			NAME										ID
autoscaling-config	master-us-west-2a.masters.k8.myezbrew.com					lt-09a2248a3b71893a4
autoscaling-config	nodes-us-west-2a.k8.myezbrew.com						lt-0b5a70ead46ab35cc
autoscaling-group	master-us-west-2a.masters.k8.myezbrew.com					master-us-west-2a.masters.k8.myezbrew.com
autoscaling-group	nodes-us-west-2a.k8.myezbrew.com						nodes-us-west-2a.k8.myezbrew.com
dhcp-options		k8.myezbrew.com									dopt-09b97d77c8b779af6
iam-instance-profile	masters.k8.myezbrew.com								masters.k8.myezbrew.com
iam-instance-profile	nodes.k8.myezbrew.com								nodes.k8.myezbrew.com
iam-role		masters.k8.myezbrew.com								masters.k8.myezbrew.com
iam-role		nodes.k8.myezbrew.com								nodes.k8.myezbrew.com
instance		master-us-west-2a.masters.k8.myezbrew.com					i-052f57b6f5082f64b
instance		nodes-us-west-2a.k8.myezbrew.com						i-0b538a88f541c7bde
internet-gateway	k8.myezbrew.com									igw-0fceec7e7e59f04e5
keypair			kubernetes.k8.myezbrew.com-a8:4e:e0:0a:1b:1c:ce:bd:2e:85:6c:3e:32:b2:6f:c9	key-0af414765f8e1f034
route-table		k8.myezbrew.com									rtb-0605c4674927e178e
route53-record		api.internal.k8.myezbrew.com.							Z05399871A9JAY5VUT692/api.internal.k8.myezbrew.com.
route53-record		api.k8.myezbrew.com.								Z05399871A9JAY5VUT692/api.k8.myezbrew.com.
route53-record		kops-controller.internal.k8.myezbrew.com.					Z05399871A9JAY5VUT692/kops-controller.internal.k8.myezbrew.com.
security-group		masters.k8.myezbrew.com								sg-004c295361a311a47
security-group		nodes.k8.myezbrew.com								sg-03e266ab2d6c44de6
subnet			us-west-2a.k8.myezbrew.com							subnet-042aadc7b94abdec6
volume			a.etcd-events.k8.myezbrew.com							vol-07b8d62fff888a03c
volume			a.etcd-main.k8.myezbrew.com							vol-0e40969503e40b934
volume			master-us-west-2a.masters.k8.myezbrew.com					vol-003433e38eef4f4c3
volume			nodes-us-west-2a.k8.myezbrew.com						vol-076ac6017bfae285e
vpc			k8.myezbrew.com									vpc-0d329293bb06a4a18
...
Deleted kubectl config for k8.myezbrew.com
Deleted cluster: "k8.myezbrew.com"
andy@Andrews-Mini .aws % 

				
			

You can stop here and just use kOps, or you can use Terraform to save and maintain your state. This is more of an exercise to learn: I deleted the previous cluster before doing this, and I always delete the resources as soon as I am finished. The whole point is to get used to automation, and part of that is learning where the pain points are.

Using kOps-generated Terraform

To make it a bit easier, we export our variables and define the size of the EC2 instances up front, so we don't have to modify the instance groups as we did before.

The first set are the export statements you need to run; modify them to suit your needs.

Then we create an SSH key in case you don't have one.

The last command generates the Terraform configuration in the current directory (--target=terraform with --out=.).

				
# environment variables that we need to set
export KOPS_STATE_STORE="s3://my-ezbrew-state-store"
export CLOUD="aws"
export ZONE="us-west-2a"
export MASTER_ZONES="us-west-2"
export NAME="k8s.myezbrew.com"
export K8S_VERSION="1.6.4"
export NETWORKCIDR="10.240.0.0/16"
export MASTER_SIZE="t3.small"
export WORKER_SIZE="t3.small"
export DNS_ZONE="myezbrew.com"
#create an ssh key
ssh-keygen -t rsa -C "andy@myezbrew.com"
# this is one command split across multiple lines; copy it all
kops create cluster \
  --name=$NAME \
  --state="$KOPS_STATE_STORE" \
  --zones="$ZONE" \
  --node-count=1 \
  --node-size="$WORKER_SIZE" \
  --master-size="$MASTER_SIZE" \
  --dns-zone="$DNS_ZONE" \
  --out=. \
  --target=terraform \
  --ssh-public-key=$HOME/.ssh/id_rsa.pub
				
			

Now we will init, plan, and apply the Terraform. I also create a main.tf file, which for now only contains the Terraform S3 backend configuration.

  • In your terraform folder create a file main.tf with the following contents

             terraform {
               backend "s3" {
                 bucket  = "my-ezbrew-state-store"
                 key     = "terraform/k8s"
                 encrypt = true
                 region  = "us-east-1"
               }
             }

  • terraform init
  • terraform plan
  • terraform apply

You will need to type "yes" to approve the creation.

				
bitnami@ip-172-26-15-161:~/repo/terraform$ vi main.tf
# ... (in vi, press i to enter insert mode, paste the following, and edit what you need to)
terraform {
  backend "s3" {
    bucket = "my-ezbrew-state-store"
    key    = "terraform/k8s"
    encrypt = true
    region = "us-east-1"
  }
}
# ... (in vi, press Esc and type :wq! to save and exit the file)
terraform init
...
terraform apply
....
Plan: 35 to add, 0 to change, 0 to destroy.
....
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.
  Enter a value: yes
...
Apply complete! Resources: 35 added, 0 changed, 0 destroyed.
Outputs:
cluster_name = "k8s.myezbrew.com"
master_autoscaling_group_ids = [
  "master-us-west-2a.masters.k8s.myezbrew.com",
]
master_security_group_ids = [
  "sg-0a1196e5e3bf292a7",
]
masters_role_arn = "arn:aws:iam::xxx:role/masters.k8s.myezbrew.com"
masters_role_name = "masters.k8s.myezbrew.com"
node_autoscaling_group_ids = [
  "nodes-us-west-2a.k8s.myezbrew.com",
]
node_security_group_ids = [
  "sg-0fd28fa182197e23a",
]
node_subnet_ids = [
  "subnet-01021f309211d87e7",
]
nodes_role_arn = "arn:aws:iam::xxx:role/nodes.k8s.myezbrew.com"
nodes_role_name = "nodes.k8s.myezbrew.com"
region = "us-west-2"
route_table_public_id = "rtb-0fd4f96ed8098aa3d"
subnet_us-west-2a_id = "subnet-01021f309211d87e7"
vpc_cidr_block = "172.20.0.0/16"
vpc_id = "vpc-0179255f5df6f8c1d"
bitnami@ip-172-26-15-161:~/repo/terraform$ 

				
			

Last but not least, we destroy it all with terraform destroy.
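A minimal sketch of the full teardown, assuming you run it from the directory holding the kOps-generated Terraform with the same NAME and KOPS_STATE_STORE exported as above. Terraform removes the AWS resources; the kops delete afterwards clears the cluster definition out of the S3 state store:

terraform destroy                            # type "yes" when prompted
kops delete cluster --name ${NAME} --yes     # removes the cluster spec from the kOps state store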

				

For reference, the raw command notes for the myezbrew cluster:

				
export NAME=k8.myezbrew.com
export KOPS_STATE_STORE=s3://my-ezbrew-state-store
ssh-keygen -t rsa -C "andy@myezbrew.com"
kops create secret --name uswest2.k8s.myezbrew.com sshpublickey admin -i ~/.ssh/id_rsa.pub
kops edit ig --name=uswest2.k8s.myezbrew.com nodes-us-west-2a
kops edit ig --name=uswest2.k8s.myezbrew.com master-us-west-2a
kops create cluster --zones=us-west-2a uswest2.k8s.myezbrew.com
kops delete cluster --name ${NAME} --yes
Suggestions:
 * list clusters with: kops get cluster
 * edit this cluster with: kops edit cluster uswest2.k8s.myezbrew.com
 * edit your node instance group: kops edit ig --name=uswest2.k8s.myezbrew.com nodes-us-west-2a
 * edit your master instance group: kops edit ig --name=uswest2.k8s.myezbrew.com master-us-west-2a
Finally configure your cluster with: kops update cluster --name uswest2.k8s.myezbrew.com --yes --admin
Suggestions:
 * validate cluster: kops validate cluster --wait 10m
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa ubuntu@api.uswest2.k8s.myezbrew.com
 * the ubuntu user is specific to Ubuntu. If not using Ubuntu please use the appropriate user based on your OS.
 * read about installing addons at: https://kops.sigs.k8s.io/operations/addons.
				
			
