Create Private GKE Standard Cluster using gcloud CLI¶
This tutorial guides you through creating a Private, VPC-native GKE Standard cluster.
In GKE Standard mode, you have full control over the underlying infrastructure (nodes). You manage the node pools, machine types, and upgrades, giving you maximum flexibility.
Prerequisites¶
- GCP Project: Billing enabled.
- gcloud CLI: Installed and authorized.
- Permissions: `roles/container.admin`, `roles/compute.networkAdmin`.
Network Requirements¶
We will create a VPC-native cluster, which is the recommended network mode.
- Subnet Primary Range (Nodes): `/24` (256 IPs).
- Pod Secondary Range: `/14` (allows for many pods per node).
- Service Secondary Range: `/20`.
CIDR Ranges and Limits¶
When planning your network, keep these constraints in mind:
| Component | Recommended CIDR | Minimum CIDR | Maximum CIDR | Notes |
|---|---|---|---|---|
| Nodes (Primary) | /24 (256 IPs) | /29 (8 IPs) | /20 (4096 IPs) | Must be large enough for the max node count plus upgrade surge. Can overlap with other subnets in the VPC if they are not peered. |
| Pods (Secondary) | /14 - /21 | /21 | /9 | Determines the max number of pods and nodes. Pod CIDRs are allocated to nodes in blocks (e.g., /24 per node). |
| Services (Secondary) | /20 | /27 | /16 | Used for ClusterIPs. A /20 provides 4096 Service IPs. |
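The IP counts in the table follow directly from the prefix length: a `/N` IPv4 range contains 2^(32-N) addresses. A quick sketch (the `cidr_size` helper is hypothetical, for illustration only):

```shell
# Hypothetical helper: number of addresses in a /N IPv4 range.
cidr_size() {
  echo $(( 2 ** (32 - $1) ))
}

cidr_size 24   # -> 256     (recommended node range)
cidr_size 20   # -> 4096    (recommended service range)
cidr_size 14   # -> 262144  (recommended pod range)
```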
Do Pod/Service ranges have to be secondary ranges?
Yes, for VPC-native clusters (recommended).
In a VPC-native cluster, Pod and Service IP ranges must be defined as secondary IP ranges on the same subnet used by the cluster nodes.
- Nodes: Use the Subnet's Primary CIDR range.
- Pods: Use a Secondary CIDR range on that subnet.
- Services: Use another Secondary CIDR range on that subnet.
Step 1: Network Setup¶
Create a dedicated VPC and Subnet.
- Create VPC:

    ```bash
    export PROJECT_ID=$(gcloud config get-value project)
    export REGION=us-central1
    export NETWORK_NAME=gke-private-std-net
    export SUBNET_NAME=gke-private-std-subnet

    gcloud compute networks create $NETWORK_NAME \
      --subnet-mode=custom \
      --bgp-routing-mode=regional
    ```

- Create Subnet with Secondary Ranges:

    ```bash
    gcloud compute networks subnets create $SUBNET_NAME \
      --network=$NETWORK_NAME \
      --region=$REGION \
      --range=10.0.0.0/25 \
      --secondary-range=pods=10.1.0.0/21,services=10.0.0.128/25 \
      --enable-private-ip-google-access
    ```
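As a sanity check, the ranges chosen above imply concrete capacity limits. The arithmetic below is a sketch; it assumes GKE's default behavior of carving one /24 pod block per node (which supports the default maximum of 110 pods per node):

```shell
# Capacity implied by the subnet created above (sketch, bash arithmetic only).
node_ips=$(( 2 ** (32 - 25) ))       # primary range 10.0.0.0/25
pod_ips=$(( 2 ** (32 - 21) ))        # pods secondary range 10.1.0.0/21
svc_ips=$(( 2 ** (32 - 25) ))        # services secondary range 10.0.0.128/25
max_nodes=$(( 2 ** (24 - 21) ))      # one /24 pod block per node by default

echo "node IPs: $node_ips, pod IPs: $pod_ips, service IPs: $svc_ips"
echo "max nodes supported by the pod range: $max_nodes"
```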
Step 2: Create GKE Standard Cluster¶
Create the cluster with private nodes.
- `--enable-private-nodes`: Nodes have only internal IPs.
- `--enable-ip-alias`: Enables VPC-native traffic (creates Alias IPs for pods).
- `--num-nodes`: Number of nodes per zone.
- `--machine-type`: The Compute Engine machine type for the nodes.
Control Plane IP Allocation
With Private Service Connect (PSC), the Control Plane endpoint takes an IP address from your Subnet's Primary Range.
```bash
export CLUSTER_NAME=my-private-standard

gcloud container clusters create $CLUSTER_NAME \
  --region=$REGION \
  --network=$NETWORK_NAME \
  --subnetwork=$SUBNET_NAME \
  --cluster-secondary-range-name=pods \
  --services-secondary-range-name=services \
  --enable-private-nodes \
  --enable-ip-alias \
  --enable-private-endpoint \
  --enable-master-authorized-networks \
  --master-authorized-networks=10.0.0.0/24 \
  --num-nodes=1 \
  --machine-type=e2-medium
```
Optional: Public Endpoint for Testing
If you want to access the cluster from outside the VPC (e.g., from your local machine) for testing, remove the --enable-private-endpoint flag.
- Private Nodes: Yes (Nodes have internal IPs only).
- Master Access: Public (Open to internet).
```bash
gcloud container clusters create $CLUSTER_NAME \
  --region=$REGION \
  --network=$NETWORK_NAME \
  --subnetwork=$SUBNET_NAME \
  --cluster-secondary-range-name=pods \
  --services-secondary-range-name=services \
  --enable-private-nodes \
  --enable-ip-alias \
  --num-nodes=1 \
  --machine-type=e2-medium
```
- Note: We simply omitted `--enable-private-endpoint`.

- `--enable-private-endpoint`: The control plane has only a private IP address and is not accessible from the public internet.
- `--master-authorized-networks`: Restricts access to the control plane to specific IP ranges. We allow 10.0.0.0/24, which covers the subnet's primary range (10.0.0.0/25) where our Bastion Host will reside.
Understanding Private Cluster Flags
- Private Cluster (`--enable-private-nodes`): Makes the worker nodes private (no public IPs). This is the main requirement for a "Private Cluster".
- Private Endpoint (`--enable-private-endpoint`): Makes the control plane private. If omitted (defaults to false), the control plane remains accessible via a public endpoint.
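Conceptually, `--master-authorized-networks` is a source-IP allowlist: the control plane accepts a connection only if the client's address falls inside one of the configured CIDRs. A rough sketch of that match (the `ip_to_int` and `in_cidr` helpers are hypothetical, not part of gcloud):

```shell
# Hypothetical helpers illustrating the authorized-networks CIDR match.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

in_cidr() {  # usage: in_cidr IP NETWORK PREFIX
  local ip net mask
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "$2")
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

in_cidr 10.0.0.5 10.0.0.0 24    && echo "bastion IP allowed"   # inside 10.0.0.0/24
in_cidr 192.168.1.5 10.0.0.0 24 || echo "outside range denied"
```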
Step 3: Create Bastion Host¶
Since the cluster control plane is private, we need a Bastion Host (a VM inside the VPC) to run kubectl commands.
- Create the VM:

    ```bash
    gcloud compute instances create gke-bastion \
      --zone=${REGION}-a \
      --network=$NETWORK_NAME \
      --subnet=$SUBNET_NAME \
      --machine-type=e2-micro \
      --tags=bastion
    ```

- Allow SSH Access: Create a firewall rule to allow SSH (port 22) into the Bastion Host (via IAP).

    ```bash
    gcloud compute firewall-rules create allow-ssh-bastion \
      --network=$NETWORK_NAME \
      --allow=tcp:22 \
      --source-ranges=35.235.240.0/20 \
      --target-tags=bastion
    ```
Step 4: Access the Cluster¶
Now, we will log in to the Bastion Host and access the cluster.
- SSH into Bastion:

    ```bash
    gcloud compute ssh gke-bastion --zone=${REGION}-a --tunnel-through-iap
    ```

- Install kubectl and Auth Plugin (inside the Bastion):

    ```bash
    sudo apt-get update
    sudo apt-get install -y kubectl google-cloud-sdk-gke-gcloud-auth-plugin
    ```

    - Note: The `gcloud` CLI is pre-installed on standard Google Cloud VM images. We only need to install `kubectl` and the auth plugin.
Other Installation Methods
If your Bastion is not Debian/Ubuntu, or you are running this locally:
- Using gcloud components (Recommended): `gcloud components install gke-gcloud-auth-plugin`
- Red Hat/CentOS: `sudo yum install google-cloud-sdk-gke-gcloud-auth-plugin`
- Get Credentials:

    ```bash
    export CLUSTER_NAME=my-private-standard
    export REGION=us-central1
    gcloud container clusters get-credentials $CLUSTER_NAME --region $REGION --internal-ip
    ```

    - `--internal-ip`: Tells `kubectl` to communicate with the cluster's private IP address.
    - This command updates your local `kubeconfig` file with the cluster's authentication details and endpoint information.
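Before running `kubectl` against the cluster, it can be worth confirming that the auth plugin it delegates to is actually on the PATH. A minimal sketch (the `status` variable is just for illustration):

```shell
# Sketch: confirm the GKE auth plugin binary is visible to kubectl.
if command -v gke-gcloud-auth-plugin >/dev/null 2>&1; then
  status=found
else
  status=missing
fi
echo "gke-gcloud-auth-plugin: $status"
```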
Step 5: Configure Artifact Registry (Remote Repo)¶
Since the cluster is private (no internet access for nodes), we need a way to pull images. We will create a Remote Artifact Registry repository that acts as a pull-through cache for Docker Hub.
- Enable Artifact Registry API:

    ```bash
    gcloud services enable artifactregistry.googleapis.com
    ```

- Create Remote Repository:

    ```bash
    gcloud artifacts repositories create docker-hub-remote \
      --project=$PROJECT_ID \
      --repository-format=docker \
      --location=$REGION \
      --mode=remote-repository \
      --remote-repo-config-desc="Docker Hub" \
      --remote-docker-repo=DOCKER_HUB
    ```
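With the remote repository in place, a Docker Hub image such as `nginx` is pulled through Artifact Registry by rewriting its path. A small sketch of that mapping (`remote_image` is a hypothetical helper; the project ID below is a placeholder):

```shell
# Sketch: map a Docker Hub image name onto the remote-repo path used in Step 6.
REGION=us-central1
PROJECT_ID=my-project   # placeholder; use your real project ID

remote_image() {
  echo "${REGION}-docker.pkg.dev/${PROJECT_ID}/docker-hub-remote/$1"
}

remote_image nginx    # -> us-central1-docker.pkg.dev/my-project/docker-hub-remote/nginx
remote_image redis:7
```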
Step 6: Verify Cluster (From Bastion)¶
- Check Nodes:

    ```bash
    kubectl get nodes -o wide
    ```

    You should see nodes with an `INTERNAL-IP` but no `EXTERNAL-IP`.

- Deploy a Test App (Using Remote Repo): Refer to the image using the Artifact Registry path.

    ```bash
    # Format: LOCATION-docker.pkg.dev/PROJECT-ID/REPOSITORY-ID/IMAGE
    IMAGE_PATH=${REGION}-docker.pkg.dev/${PROJECT_ID}/docker-hub-remote/nginx
    kubectl create deployment nginx --image=$IMAGE_PATH --port=80
    ```

    Regional Load Balancer Controller

    The GKE Ingress Controller (`ingress-gce`) is installed by default.

- Check Pods:

    ```bash
    kubectl get pods -w
    ```

- Expose via Regional External Load Balancer: Create a `Service` and `Ingress` to expose the app.

    ```bash
    # Create the Service
    kubectl expose deployment nginx --type=NodePort --target-port=80 --port=80

    # Create the Ingress
    # 1. Save the following manifest as ingress.yaml
    cat <<EOF > ingress.yaml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: nginx-ingress
      annotations:
        kubernetes.io/ingress.class: "gce-regional-external"
    spec:
      defaultBackend:
        service:
          name: nginx
          port:
            number: 80
    EOF

    # 2. Apply the manifest
    kubectl apply -f ingress.yaml
    ```

    - `kubernetes.io/ingress.class: "gce-regional-external"`: Tells GKE to provision a Regional External HTTP(S) Load Balancer.
Quiz¶
1. Which flag enables VPC-native networking (Alias IPs) in a GKE Standard cluster?
2. In GKE Standard, who is responsible for upgrading the worker nodes?
3. What is the effect of setting `--enable-private-endpoint`?
Cleanup¶
The bastion VM, firewall rule, and Artifact Registry repository created above can be deleted the same way with their respective `delete` commands.

```bash
gcloud container clusters delete $CLUSTER_NAME --region=$REGION --quiet
gcloud compute networks subnets delete $SUBNET_NAME --region=$REGION --quiet
gcloud compute networks delete $NETWORK_NAME --quiet
```