Playing around with Workload Identity on GKE


Google recently released Workload Identity in beta: a way for workloads in Kubernetes to use Google Service Accounts (GSA) via Kubernetes Service Accounts (KSA).

Setting this up requires some steps and using it with other technologies requires some wiring. My current setup involves Terraform for cloud infrastructure setup, Helm charts to bundle applications and FluxCD to manage GKE clusters in a GitOps way.
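For completeness, enabling Workload Identity itself is a one-off change on the cluster and node pool. A hedged sketch with today's gcloud (cluster, pool, zone and project names are placeholders; the beta-era flag names differed):

```shell
# Enable Workload Identity on an existing cluster (placeholder names)
gcloud container clusters update my-cluster \
  --zone europe-west1-b \
  --workload-pool=my-project.svc.id.goog

# Switch an existing node pool to the GKE metadata server
gcloud container node-pools update my-pool \
  --cluster my-cluster \
  --zone europe-west1-b \
  --workload-metadata=GKE_METADATA
```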

Setup of GSA and KSA

So you have a GKE cluster with Workload Identity enabled on the cluster and node pool. Now, how do you create a KSA that can use a GSA? Of course: write a Terraform module!

provider "google" {}
provider "kubernetes" {}

variable "name" {
  description = "Name of the Google and Kubernetes Account that is created"
  type        = string
}

variable "namespace" {
  description = "Kubernetes namespace where the SA is created"
  type        = string
}

data "google_project" "this" {}

resource "google_service_account" "this" {
  account_id = var.name
}

resource "kubernetes_service_account" "this" {
  metadata {
    annotations = {
      managed_by_terraform = true

      "iam.gke.io/gcp-service-account" = google_service_account.this.email
    }

    name      = var.name
    namespace = var.namespace
  }

  automount_service_account_token = true
}

resource "google_service_account_iam_member" "workload_identity_user" {
  service_account_id = google_service_account.this.name
  role               = "roles/iam.workloadIdentityUser"
  member             = "serviceAccount:${data.google_project.this.project_id}.svc.id.goog[${var.namespace}/${var.name}]"
}

output "kubernetes_service_account" {
  description = "The created kubernetes service account"
  value       = kubernetes_service_account.this
}

output "google_service_account" {
  description = "The created google service account"
  value       = google_service_account.this
}
Now you can use this module wherever you need to create a KSA for a workload that has to access some Google API:

module "workload_identity_service_account" {
  source = "../workload_identity_service_account" # or something else

  providers = {
    google     = google
    kubernetes = kubernetes
  }

  name      = "workload-name"
  namespace = "default"
}
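Before wiring this into Helm, you can sanity-check the binding with a throwaway pod that runs as the new KSA and asks the GKE metadata server who it is. A sketch, assuming the names from the module call above:

```shell
kubectl run wi-test -it --rm --restart=Never \
  --namespace default \
  --image=curlimages/curl \
  --overrides='{"spec":{"serviceAccountName":"workload-name"}}' \
  -- curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email
```

If everything is wired up, this prints the GSA's email instead of the node's default service account.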

And now to the really fun part: automagically making a Helm chart use this KSA when it is managed via Flux!

Sharing Terraform knowledge with Flux

So what's the problem? Terraform knows how our KSA is named, but our Helm chart needs this information. Hopefully our Helm chart allows us to pass something akin to serviceAccount.name (if not, make it so!).
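Many charts follow a common convention of a serviceAccount block in their values; the key names below are an assumption about your chart, not a given:

```yaml
# values.yaml of the chart (assumed convention)
serviceAccount:
  create: false   # don't let the chart create its own KSA
  name: ""        # set per release

# and in templates/deployment.yaml the pod spec would reference it:
#   serviceAccountName: {{ .Values.serviceAccount.name }}
```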

Thankfully FluxCD allows us to take values from one (or multiple) configMaps and pass them to a helmRelease:

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: workload
  namespace: default
spec:
  releaseName: workload
  chart:
    git: SOME_GIT_REPO
    path: charts/workload
  valuesFrom:
    - configMapKeyRef:
        name: workload-values
    - configMapKeyRef:
        name: other-values
  values:
    otherStuff: true

So let’s also create this configMap from Terraform:

resource "kubernetes_config_map" "workload_values" {
  metadata {
    name      = "workload-values"
    namespace = "default"
  }

  data = {
    "values.yaml" = <<-YAML
      serviceAccount:
        create: false
        name: ${module.workload_identity_service_account.kubernetes_service_account.metadata[0].name}
    YAML
  }
}

With this setup the following happens:

  • Terraform creates a KSA and a GSA, the KSA is allowed to impersonate the GSA.
  • Terraform will also create a configMap which holds values for the Helm chart.
  • FluxCD picks up these values, merges them with others and deploys the helmRelease.
  • The workload will now identify as the GSA when calling Google APIs.

Therefore you can now also use Terraform to grant IAM permissions to the GSA, e.g.:

resource "google_project_iam_member" "storage_object_viewer" {
  project = data.google_project.this.project_id
  role    = "roles/storage.objectViewer"
  member  = "serviceAccount:${module.workload_identity_service_account.google_service_account.email}"
}