
Getting started with Kubernetes and GKE

This post takes a look at Google's container engine, GKE. Why the K? Well, GCE was already taken by Google Compute Engine, so that hopefully explains the name.

GKE is now generally available and is very polished. Using it for the first time gave me the buzz of playing with something cool and fresh, like I got in the early days of cloud in 2006/2007. Those cloud tools now seem like a mainstay, but using GKE feels like playing with the next generation.

The polished feeling is largely down to how tightly Kubernetes is integrated into GKE by default, minimising the effort of setting these tools up by hand, or the sometimes fiddly cloud-configs when you roll it yourself. To get running, the first thing you need to do is configure the gcloud tool.

Configuring gcloud

gcloud is the tool you use to interact with Google's services, be that GCE, BigQuery or, in this case, GKE. I run this tool from a small jump box I spun up in the Google Cloud Platform console. If you don't have an account, head to http://cloud.google.com, sign up, and enable GCE and GKE on your account right now.

I launched a Debian Jessie instance in europe-west1-b; this is the machine I'll run all the commands from later in this post. The quickest way to do this is via the web console; you can then also click the SSH button to launch a browser window that automatically SSHes you into the system.


Once logged in, there are a few things we need to set up and update:

sudo apt-get -qq update
sudo apt-get -qq -y upgrade
sudo gcloud components update
sudo gcloud components update alpha
sudo gcloud auth login

The final command may ask you to copy and paste a URL into your browser and then enter the resulting key back into the shell. This authorises the server you are SSH'd into to execute commands against your cloud platform account.

Note: You may also want to run this to prevent errors later on:

export PATH=$PATH:/usr/local/share/google/google-cloud-sdk/bin/
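If you put this in your shell profile, a small guard like the following avoids appending the directory more than once (a sketch; the path is taken from the export above, which is where the Debian package places the SDK):

```shell
# Append the Cloud SDK bin directory to PATH, but only if it isn't already there
SDK_BIN=/usr/local/share/google/google-cloud-sdk/bin
case ":$PATH:" in
  *":$SDK_BIN:"*) ;;                  # already present, nothing to do
  *) export PATH="$PATH:$SDK_BIN" ;;  # append exactly once
esac
```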

I also found it necessary to run the following to set my zone:

gcloud config set compute/zone europe-west1-b

Creating a container cluster

Now, this step is ridiculously easy, but under the bonnet it's doing some pretty clever stuff. You get the requested number of nodes created in GCE with all the Kubernetes components installed, so the nodes act as a cluster; something that can be fiddly when done by hand. So let's get it going:

gcloud container clusters create demo-cluster --num-nodes 4

And that's really all there is to it. If you check your web console (you may need to refresh) you should see some new servers under VM instances in Compute Engine, and under Container Engine you should see your new cluster, demo-cluster.
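If you prefer to confirm this from the command line rather than the console, something like the following should list the new cluster and its underlying VMs (a sketch; exact output columns vary by SDK version):

```shell
# List container clusters in the configured project/zone
gcloud container clusters list

# The cluster nodes also show up as ordinary GCE instances
gcloud compute instances list
```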

Testing Kubernetes

Another very cool thing that's happened is that gcloud has set up the kubectl tool on your host, allowing you to run commands against your cluster. Give it a go:

kubectl version

This should return the API version from the local tool along with the cluster version.

Let's create!

Now let's use Kubernetes to create something! In this case we are going to spin up a replication controller running NGINX servers. Our initial replica count is going to be 2; we'll try scaling later.

First, let's start these servers:

kubectl run nginx --replicas=2 --image=nginx --port=80

So what has this done, you may ask? It has instructed Kubernetes and Docker to spin up two containers from the nginx image and open them on port 80. One thing to note: unlike plain Docker, Kubernetes assigns an internal IP to each pod, which allows you to have multiple containers all running on port 80, for example.
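Under the hood, that kubectl run invocation is roughly equivalent to creating a replication controller from a manifest like this (a sketch assuming the v1 API; the run=nginx label is the default that kubectl run applies):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2            # keep two pods running at all times
  selector:
    run: nginx           # manage pods carrying this label
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

You could create the same thing yourself with kubectl create -f against a file containing this manifest.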

Let's check our containers:

kubectl get pods -o wide

This is going to return something like this:

NAME          READY     STATUS    RESTARTS   AGE   NODE
nginx-baekc   1/1       Running   0          1h    gke-demo-cluster-987df1db-node-huxk
nginx-fqbmw   1/1       Running   0          1h    gke-demo-cluster-987df1db-node-cfe7

We used the -o wide flag to show that the containers are running on different VMs.

Now, an interesting feature of replication controllers is that if you kill a pod it gets replaced, so that you always have the two replicas we specified at launch:

kubectl stop pod nginx-baekc

Now if we look at the pods again:

kubectl get pods -o wide

We get a different output:

NAME          READY     STATUS    RESTARTS   AGE   NODE
nginx-envls   1/1       Running   0          1m    gke-demo-cluster-987df1db-node-eb8h
nginx-fqbmw   1/1       Running   0          1h    gke-demo-cluster-987df1db-node-cfe7

Notice one container has been replaced and a new one is in its place, on a different host.

Testing the container

Currently your containers have a private IP that is not routable from the internet. You can test from the command line (if you are running kubectl from a GCE instance):

curl http://$(kubectl get pod nginx-envls -o=template -t={{.status.podIP}})

You'll need to change nginx-envls to whatever your pods are called in the get pods output. You can test both instances if you like.
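To hit every pod without copying names by hand, you could combine get pods with a small loop. This is only a sketch: the run=nginx label is an assumption based on what kubectl run applies by default, and the exact Go-template fields accepted by -t vary between kubectl versions.

```shell
# Curl each pod's internal IP in turn (run from a GCE instance on the cluster's network)
for ip in $(kubectl get pods -l run=nginx -o=template \
    -t='{{range .items}}{{.status.podIP}} {{end}}'); do
  curl -s "http://$ip" | head -n 4
done
```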

External access

I have a firewall rule on my project that allows all traffic on ports 80 and 443 into my instances. You may want to create a similar rule using the web console; doing this from the command line is beyond the scope of this post.
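For reference, a rule along these lines can also be created with gcloud (a sketch only; the rule name is made up, it applies to the default network project-wide unless you add target tags, and you should check the flags against your SDK version):

```shell
# Allow inbound HTTP and HTTPS from anywhere to instances on the default network
gcloud compute firewall-rules create allow-http-https \
  --allow tcp:80,tcp:443 \
  --source-ranges 0.0.0.0/0
```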
However, I'll show you how to add a load balancer using kubectl to allow external traffic.

kubectl expose rc nginx --create-external-load-balancer=true

This sets up a load balancer and adds your containers to it. Run the following to check the status of the load balancer; it takes a few minutes to spin up.

kubectl get services nginx

Initially this will return something like the following:

NAME      LABELS      SELECTOR    IP(S)            PORT(S)
nginx     run=nginx   run=nginx   10.27.250.81     80/TCP

Yet again a non-internet-routable IP is returned, but if you wait a short while and run the command again you should get something like this:

NAME      LABELS      SELECTOR    IP(S)            PORT(S)
nginx     run=nginx   run=nginx   10.27.250.81     80/TCP
                                  104.155.39.233

If you now open your browser and head to http://104.155.39.233 you should see the nginx default page.
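For reference, the expose command above is roughly equivalent to a service manifest like this (a sketch written in the modern type: LoadBalancer form rather than the older create-external-load-balancer flag; the run=nginx labels come from the run command earlier):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  type: LoadBalancer     # ask the cloud provider for an external IP
  selector:
    run: nginx           # route traffic to pods carrying this label
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
```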


Scaling

If you are expecting lots of visitors, you may want to add more resources to your website. This is where kubectl can help once again.

kubectl scale --current-replicas=2 --replicas=3 replicationcontrollers nginx

Now when you run kubectl get pods, you'll see an extra pod.

NAME          READY     STATUS    RESTARTS   AGE   NODE
nginx-envls   1/1       Running   0          16m   gke-demo-cluster-987df1db-node-eb8h
nginx-fqbmw   1/1       Running   0          2h    gke-demo-cluster-987df1db-node-cfe7
nginx-jg3qm   1/1       Running   0          24s   gke-demo-cluster-987df1db-node-huxk

Clean Up

The following command deletes the service and the external load balancer:

kubectl delete services nginx

Now let's stop the replication controller:

kubectl stop rc nginx

The last thing to do is shutdown your cluster:

gcloud container clusters delete demo-cluster

Conclusion

Hopefully you now have a basic understanding of using GKE and running services on it, and you should be able to connect those services to the outside world via Google's load balancer service. There's lots more to learn with Kubernetes, and we'll look to cover it in future posts.

Ric Harvey

Ric leads engineering and technical architecture for Ngineered. He has a vast amount of experience in cloud computing, having been responsible for the delivery of large-scale cloud migration projects at companies like Ticketmaster and Channel 4.
