1. Overview & Objectives

⏱ 2 min read 🟢 Beginner

Overview

This codelab walks you through deploying your very first application on Google Kubernetes Engine (GKE). You'll go from zero to a live, internet-accessible app scaled across multiple replicas using just a handful of commands.

No prior Kubernetes or cloud experience is required. We'll explain every concept before you use it.

What you'll learn

  • Why containers exist and what problem they solve.
  • What Kubernetes and GKE are (in plain English).
  • The meaning of Pods, Deployments, Services, and Nodes.
  • How to create a GKE cluster using Cloud Shell.
  • How to deploy a containerized app and expose it to the internet.
  • How to scale your app to multiple replicas with one command.

What you'll build

Your finished architecture
Internet 🌐
Load Balancer Service
GKE Cluster
Pod 1 🟢
hello-app:1.0
Pod 2 🟢
hello-app:1.0
Pod 3 🟢
hello-app:1.0
Managed by a Deployment

Prerequisites

  • A Google Account (Gmail works).
  • Access to the Google Cloud Console: console.cloud.google.com
  • No software installation needed; we'll use Cloud Shell (a browser-based terminal).

2. Before You Begin

⏱ 5 min

Set up your Google Cloud project

  1. In the Google Cloud Console, go to the project selector page and create a new project. Name it something like gke-codelab.
  2. Make sure billing is enabled for your project. New accounts get $300 free credits.
  3. Enable the Kubernetes Engine API for your project from the APIs & Services page in the console.
⚠️
Cost Warning

Running a GKE cluster incurs charges. This codelab costs approximately $0.10–0.30 to complete. You'll delete all resources at the end. New GCP accounts receive $300 in free credits, which more than covers this.

Open Cloud Shell

Cloud Shell is a free browser-based terminal with all the tools pre-installed. No setup on your machine required!

  1. Go to console.cloud.google.com
  2. Click the Cloud Shell icon (>_) in the top-right corner of the toolbar
  3. A terminal opens at the bottom of your browser. Wait for it to initialize.

Verify your setup

Run this in Cloud Shell to confirm your project is set correctly:

gcloud config get-value project

You should see your project ID printed. If not, set it manually:

gcloud config set project YOUR_PROJECT_ID

Set your default zone

A zone is a specific datacenter location within a Google Cloud region. We'll use us-central1-a throughout this codelab.

gcloud config set compute/zone us-central1-a

3. Containers & Why GKE

⏱ 5 min read

The problem containers solve

You've probably experienced this: an app works perfectly on your laptop but breaks the moment you run it somewhere else. The reason? Different environments have different versions of Python, Node.js, system libraries, and OS settings.

The Shipping Container Analogy 🚢

Before standardized shipping containers, loading a ship was chaotic: every package was different, and workers had to figure out how to stack and handle each one individually.

Software containers do for code what shipping containers did for cargo: they package everything your app needs (code, runtime, libraries, config) into one standardized unit that runs the same way everywhere.

So what is Kubernetes?

Once you have containers, you face a new challenge: how do you run hundreds of them across dozens of servers, keep them healthy, scale them up under load, and roll out updates without downtime?

Kubernetes (often abbreviated as K8s) is an open-source system that automates this. Think of it as the operating system for your cluster of machines.

The Head Chef Analogy 👨‍🍳

Kubernetes is like the head chef of a restaurant. You don't tell each cook what to do minute by minute. You tell the head chef: "I always need 5 portions of this dish ready." The chef assigns cooks, restarts anyone who gets sick, and adjusts staffing when it gets busy.

Where does GKE fit in?

Running Kubernetes yourself requires deep expertise: you'd need to install it, configure networking, manage upgrades, set up monitoring, handle security patches... Google Kubernetes Engine (GKE) does all of that for you.

Without GKE (DIY)                   | With GKE
Install Kubernetes on VMs manually  | Cluster ready in ~2 minutes
Apply security patches yourself     | Google auto-patches the control plane
Set up monitoring from scratch      | Built-in Cloud Monitoring & Logging
Scale nodes manually                | Cluster Autoscaler does it automatically
Manage etcd backups                 | Fully managed and backed up by Google
💡
Key insight

GKE lets you focus on building your application, not managing infrastructure. The hard parts of Kubernetes become Google's problem, not yours.

4. Core GKE Concepts

⏱ 7 min read

The building blocks demystified

Kubernetes has a lot of new vocabulary. Here's a plain-English guide to the terms you'll encounter in this codelab.

Cluster & Nodes 🖥️

A cluster is a group of computers (virtual machines) working together as one. Each individual machine in the cluster is called a node. GKE manages these nodes for you on Google Cloud infrastructure.

Pod 📦

A Pod is the smallest unit Kubernetes manages: a bubble that wraps your container(s). Every Pod gets its own IP address inside the cluster. Pods are designed to be temporary and replaceable: if one crashes, Kubernetes starts a fresh one.
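
For the curious, here's what a Pod looks like written down. This is a minimal sketch with an example name, using the sample image from this codelab:

```yaml
# Minimal Pod sketch: the smallest deployable unit, wrapping one container.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod                                  # example name
spec:
  containers:
    - name: hello
      image: gcr.io/google-samples/hello-app:1.0   # the sample image used later in this codelab
```

In practice you rarely create bare Pods; you let a Deployment create and replace them for you.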

Deployment 🎛️

A Deployment is a standing instruction: "always keep N copies of this Pod running." It watches your Pods and automatically replaces any that fail. It also handles rolling updates: deploying a new version gradually so users never experience downtime.
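
As a sketch (all names and the image are placeholders), the "always keep N copies" instruction is literally a field in the Deployment manifest:

```yaml
# Deployment sketch: Kubernetes keeps `replicas` copies of the Pod template running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                # "always keep 3 copies of this Pod running"
  selector:
    matchLabels:
      app: my-app            # which Pods this Deployment owns
  template:                  # the Pod template that gets replicated
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0  # placeholder image
```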

Service 🚪

Since Pods come and go (and their IPs change), a Service provides a stable address that always points to the right Pods. It acts as a load balancer, distributing requests across all healthy Pods behind it.
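
A sketch of a Service manifest (names are placeholders) shows how the stable address finds its Pods: by label, never by Pod IP:

```yaml
# Service sketch: one stable address in front of many interchangeable Pods.
apiVersion: v1
kind: Service
metadata:
  name: my-service         # example name; clients use this instead of Pod IPs
spec:
  selector:
    app: my-app            # route to every healthy Pod carrying this label
  ports:
    - port: 80             # the stable port clients connect to
      targetPort: 8080     # the port the Pods actually listen on
```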

Term       | Real-world analogy                | What it does
Cluster    | A datacenter building             | Group of machines running your workloads
Node       | One computer in the building      | A VM that runs Pods
Pod        | One worker at a desk              | One running instance of your app
Deployment | HR policy ("always have 3 staff") | Maintains desired number of Pods
Service    | The reception desk / switchboard  | Stable entry point that routes to Pods
Namespace  | Different floors in the building  | Logical isolation within a cluster

How they connect

User request
Service
stable IP / Load Balancer
Pod A
Pod B
Pod C
↑ All three Pods managed by one Deployment, all running on Nodes inside your Cluster
📖
Remember this

You never talk to Pods directly. You talk to Services. Services talk to Pods. This is what makes your app resilient: if a Pod dies, the Service instantly routes to the remaining ones.

5. Create a GKE Cluster

⏱ 5 min

Create your first cluster

In your Cloud Shell, run the following command to create a small GKE cluster with 2 nodes:

gcloud container clusters create my-first-cluster \
  --num-nodes=2 \
  --zone=us-central1-a

This takes 2–4 minutes

Google is provisioning 2 VMs, installing Kubernetes, and setting up networking. This is work that would take a DevOps engineer several hours to do manually. Grab a coffee ☕

When it's done, you'll see output ending with something like:

NAME               LOCATION       STATUS
my-first-cluster   us-central1-a  RUNNING

Connect kubectl to your cluster

kubectl (pronounced "kube-control") is the command-line tool for interacting with Kubernetes. Run this to configure it to point at your new cluster:

gcloud container clusters get-credentials my-first-cluster \
  --zone=us-central1-a
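
Under the hood, this command writes a new entry into your ~/.kube/config file. Roughly, with the endpoint and credential values abbreviated, the structure looks like this sketch:

```yaml
# Sketch of the kubeconfig entry created by get-credentials (values abbreviated).
clusters:
  - name: gke_YOUR_PROJECT_ID_us-central1-a_my-first-cluster
    cluster:
      server: https://<cluster-endpoint-ip>
      certificate-authority-data: <base64 CA certificate>
contexts:
  - name: gke_YOUR_PROJECT_ID_us-central1-a_my-first-cluster
    context:
      cluster: gke_YOUR_PROJECT_ID_us-central1-a_my-first-cluster
      user: gke_YOUR_PROJECT_ID_us-central1-a_my-first-cluster
current-context: gke_YOUR_PROJECT_ID_us-central1-a_my-first-cluster
```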

Verify the connection

List the nodes in your cluster:

kubectl get nodes

You should see 2 nodes with STATUS: Ready:

NAME                                              STATUS   AGE
gke-my-first-cluster-default-pool-xxxxx-abcd      Ready    2m
gke-my-first-cluster-default-pool-xxxxx-efgh      Ready    2m

Your cluster is live!

Two virtual machines are now running Kubernetes, waiting to run your application. You're ready to deploy.

6. Deploy Your First App

⏱ 5 min

Create a Deployment

We'll deploy a simple "Hello World" web app using a pre-built Google sample container image. This command creates a Deployment with one Pod running your app:

kubectl create deployment hello-app \
  --image=gcr.io/google-samples/hello-app:1.0
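
If you prefer the declarative style, the same Deployment can be expressed as a manifest and applied with kubectl apply -f deployment.yaml. A sketch equivalent to the command above:

```yaml
# Declarative equivalent of `kubectl create deployment hello-app ...` (a sketch).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 1              # `kubectl create deployment` starts with a single Pod
  selector:
    matchLabels:
      app: hello-app       # the same label the imperative command generates
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: gcr.io/google-samples/hello-app:1.0
```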

Verify the Pod is running

kubectl get pods

Wait until STATUS shows Running:

NAME                         READY   STATUS    RESTARTS
hello-app-xxxxxxxxxx-xxxxx   1/1     Running   0
💡
What just happened?

Kubernetes pulled the container image from Google Container Registry, scheduled it onto one of your nodes, and started it. The Deployment is now watching this Pod if it crashes, a replacement starts automatically.

Expose your app to the internet

Right now your app is running but only reachable inside the cluster. Create a Load Balancer Service to give it a public IP:

kubectl expose deployment hello-app \
  --type=LoadBalancer \
  --port=80 \
  --target-port=8080
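
For reference, here is a sketch of the Service that kubectl expose generates, with each flag mapped to its field:

```yaml
# Declarative equivalent of the `kubectl expose` command above (a sketch).
apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  type: LoadBalancer       # --type: provisions a cloud load balancer with a public IP
  selector:
    app: hello-app         # routes to the Pods labeled by the Deployment
  ports:
    - port: 80             # --port: the public port
      targetPort: 8080     # --target-port: the port the container listens on
```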

Get your public IP

Watch for the EXTERNAL-IP to appear (this takes about 60 seconds):

kubectl get service hello-app --watch
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)
hello-app   LoadBalancer   10.12.10.100   <pending>      80/TCP
hello-app   LoadBalancer   10.12.10.100   34.68.X.X      80/TCP  ← your IP!

Once the external IP appears, press Ctrl+C to stop watching, then open that IP in your browser. You'll see:

Hello, world!
Version: 1.0.0
Hostname: hello-app-xxxxxxxxxx-xxxxx
🎉
Your app is live on the internet!

You just deployed a containerized application to Google's global infrastructure. Anyone in the world can now access it at that IP address.

7. Scale & Update Your App

⏱ 5 min

Scale to 3 replicas

This is where Kubernetes becomes magical. With a single command, you run 3 copies of your app and GKE automatically load balances traffic across all of them:

kubectl scale deployment hello-app --replicas=3
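
If you manage the Deployment from a manifest instead, the same scale-out is a one-line edit followed by kubectl apply. A sketch of the relevant fragment:

```yaml
# In a manifest, scaling is just a change to the replicas field:
spec:
  replicas: 3    # was 1; re-apply the manifest to roll it out
```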

Check that all 3 Pods are running:

kubectl get pods
NAME                         READY   STATUS    RESTARTS
hello-app-xxxxxxxxxx-aaaaa   1/1     Running   0
hello-app-xxxxxxxxxx-bbbbb   1/1     Running   0
hello-app-xxxxxxxxxx-ccccc   1/1     Running   0
💡
What this used to take

Before Kubernetes: provision 2 more VMs, install your app on each, configure a load balancer, update DNS. With Kubernetes: one command. The Service automatically routes to all 3 Pods.

Deploy a new version (rolling update)

Update your app to version 2.0. Kubernetes will roll this out one Pod at a time so there is zero downtime:

kubectl set image deployment/hello-app \
  hello-app=gcr.io/google-samples/hello-app:2.0
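
In a manifest-driven workflow, kubectl set image corresponds to editing the container's image field and re-applying. The relevant fragment, as a sketch:

```yaml
# `kubectl set image` is equivalent to editing this field in the Deployment:
spec:
  template:
    spec:
      containers:
        - name: hello-app
          image: gcr.io/google-samples/hello-app:2.0   # was :1.0
```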

Watch the rollout happen in real time:

kubectl rollout status deployment/hello-app
Waiting for deployment "hello-app" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "hello-app" rollout to finish: 2 out of 3 new replicas have been updated...
deployment "hello-app" successfully rolled out

Reload your app's IP in the browser; you'll now see Version: 2.0.0.

Clean up (important!)

Delete all resources to avoid ongoing charges:

# Delete the Service (releases the external IP)
kubectl delete service hello-app

# Delete the cluster (deletes all nodes and workloads)
gcloud container clusters delete my-first-cluster \
  --zone=us-central1-a
⚠️
Don't skip the cleanup

The LoadBalancer Service reserves an external IP (billed per hour) and the cluster runs 2 VMs. Always delete when done to avoid unexpected charges.

Congratulations!

🚀

You finished the codelab!

You've successfully deployed, scaled, and updated a containerized app on Google Kubernetes Engine.

What you accomplished

  • Understood what containers, Kubernetes, and GKE are.
  • Created a GKE cluster in under 4 minutes.
  • Deployed a containerized app with kubectl.
  • Exposed it to the internet via a LoadBalancer.
  • Scaled to 3 replicas with a single command.
  • Performed a zero-downtime rolling update.

What to learn next

Topic                 | Why it matters                                  | Where to learn
ConfigMaps & Secrets  | Store configuration and credentials securely    | kubernetes.io/docs
Ingress & Gateway API | Route traffic by domain/path, add TLS           | GKE docs
Cluster Autoscaler    | Add/remove nodes automatically based on demand  | cloud.google.com/kubernetes-engine
Helm                  | Package and deploy complex apps with one command| helm.sh
CKA Certification     | Prove your Kubernetes expertise                 | cncf.io/certification/cka

Resources