Kubernetes has its advantages when you’re big enough and require the flexibility it offers. But when all you’re after is running services and you don’t need stateful sets, Google Cloud Run can be a good alternative (provided you’re using Google Cloud Platform). This article provides a few pointers for migrating Kubernetes deployments to Google Cloud Run.

Migrating YAML deployment descriptors

There’s a guide for migrating deployment descriptors, which outlines the major syntax differences between a Kubernetes deployment and a Cloud Run service.

What the guide doesn’t mention, however, is that the feature set of Cloud Run YAML descriptors is fairly limited. For example:

  • you can’t have more than one deployment per YAML file, so if your service consists of multiple deployments, you’ll need to create one file for each of them
  • there’s no way to substitute environment variables into the descriptors. I think this is the biggest caveat, because it means hardcoding everything. It is possible to inject values from Secret Manager, but the secrets have to be named explicitly (see the sketch after this list).
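
To illustrate, a minimal Cloud Run service descriptor (Cloud Run uses the Knative serving format) might look roughly like the sketch below; all names, paths and values are placeholders, and note that everything apart from the secret reference ends up hardcoded:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service
spec:
  template:
    spec:
      containers:
        - image: europe-west1-docker.pkg.dev/my-project/example-service/example:latest # no variable substitution, the full image path is hardcoded
          ports:
            - containerPort: 8080
          env:
            - name: PLAIN_VAR
              value: "hardcoded-value"
            - name: SECRET_VAR
              valueFrom:
                secretKeyRef:
                  name: example-service_SECRET_VAR # the Secret Manager secret has to be named explicitly
                  key: latest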

If you happen to be using terraform to manage your infrastructure, there’s a nice way around this by:

  • initially deploying a service using terraform
  • subsequently deploying new images to it in the CI/CD using the gcloud CLI.

This approach assumes that you already have a build pipeline in place and would only like to move the deployment of the services. If you want to go one step further and also port the building of the projects, there’s Google Cloud Build for that.

Defining services in terraform

In what follows, we’ll define a terraform module to deploy services to Cloud Run. The module assumes:

  • that the service image is available in a Google Artifact Registry repository
  • that we want to inject a number of environment variables, which we’ll read from Secret Manager

Let’s start by creating a repository to hold the images:

resource "google_artifact_registry_repository" "example-service-repository" {
  location      = var.gcp_region
  repository_id = "example-service-${var.environment}"
  format        = "DOCKER"

  labels = {
    terraform : "true"
    environment : var.environment
  }
}

This will create the Docker image repository to which we’ll later publish our images.
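
For reference, pushing an image to that repository could look roughly like this (region, project and image names are placeholders, and the shell is assumed to already be authenticated with gcloud):

# let docker authenticate against the Artifact Registry host
gcloud auth configure-docker europe-west1-docker.pkg.dev

# build and publish the image into the repository created above
docker build -t europe-west1-docker.pkg.dev/my-project/example-service-dev/example:latest .
docker push europe-west1-docker.pkg.dev/my-project/example-service-dev/example:latest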

Next, we can create the service itself by using the google_cloud_run_v2_service resource.

resource "google_cloud_run_v2_service" "service" {
  name     = "${var.name}-${var.environment}"
  location = var.gcp_region
  ingress  = var.ingress


  template {
    service_account = var.gcp_service_account_email
    containers {
      image   = "${var.gcp_region}-docker.pkg.dev/${var.gcp_project_id}/${var.name}-${var.environment}/${var.image_tag}"
      name    = "${var.name}-${var.environment}"
      command = var.command
      args    = var.args

      ports {
        container_port = var.port
      }

      resources {
        limits = {
          memory = var.memory_limit
        }
      }

      startup_probe {
        http_get {
          path = var.startup_probe_path
          port = var.port
        }
        initial_delay_seconds = 10
        period_seconds        = 30
        timeout_seconds       = 5
      }

      liveness_probe {
        http_get {
          path = var.liveness_probe_path
          port = var.port
        }
        initial_delay_seconds = 30
        period_seconds        = 60
        timeout_seconds       = 5
      }

      // note: we're assuming that secret IDs use the naming convention <service_name>-<environment>_<ENV_VAR_NAME>
      dynamic "env" {
        for_each = var.environment_vars
        content {
          name = env.value
          value_source {
            secret_key_ref {
              secret  = "${var.name}-${var.environment}_${env.value}"
              version = "latest"
            }
          }
        }
      }

    }

    vpc_access {
      connector = var.connector_id
      egress    = "ALL_TRAFFIC"
    }


  }

  labels = {
    terraform   = "true"
    service     = var.name
    environment = var.environment
  }

  # this is necessary in order to allow redeployment from outside terraform
  lifecycle {
    ignore_changes = [template[0].containers[0].image]
  }

}

And this is pretty much it! If the services you’re deploying are homogeneous (similar startup / liveness probe paths and similar ways of running a service), it’s a good idea to provide those as defaults in the module’s variables.tf.
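
A sketch of what the relevant parts of that variables.tf could look like (the defaults below are assumptions, adapt them to your services):

variable "port" {
  default = 8080
}

variable "memory_limit" {
  default = "512Mi"
}

variable "startup_probe_path" {
  default = "/health"
}

variable "liveness_probe_path" {
  default = "/health"
}

variable "ingress" {
  default = "INGRESS_TRAFFIC_ALL"
}

variable "command" {
  type    = list(string)
  default = null
}

variable "args" {
  type    = list(string)
  default = null
}

variable "environment_vars" {
  type    = list(string)
  default = []
}

A possible module usage then looks like this: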

module "example-service" {
  source         = "../modules/cloud-run-service"
  environment    = var.environment
  gcp_project_id = var.gcp_project_id
  gcp_region     = var.gcp_region
  connector_id   = google_vpc_access_connector.connector-dev.id

  name              = "example-service"
  image_tag         = "example:latest"
  environment_vars  = ["VAR_1", "VAR_2"]
}
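
Note that the env block in the service resource assumes that secrets following the <service_name>-<environment>_<ENV_VAR_NAME> convention (e.g. example-service-dev_VAR_1 for a dev environment) already exist, and that the runtime service account is allowed to read them. If the secrets are managed in terraform as well, granting that access could look roughly like this (resource names are illustrative; the secret values themselves can be added manually or via google_secret_manager_secret_version):

resource "google_secret_manager_secret" "var-1" {
  secret_id = "example-service-${var.environment}_VAR_1"

  # on older provider versions this block is replication { automatic = true } instead
  replication {
    auto {}
  }
}

resource "google_secret_manager_secret_iam_member" "var-1-access" {
  secret_id = google_secret_manager_secret.var-1.secret_id
  role      = "roles/secretmanager.secretAccessor"
  member    = "serviceAccount:${var.gcp_service_account_email}"
}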

Deploying services in the CI/CD pipeline

To deploy new versions of the service in the CI/CD pipeline, we can use the gcloud CLI. Note how, in the lifecycle settings of the terraform service resource, we ignore changes to the image value - this allows us to deploy different image tags without terraform complaining about the image having changed.

After pushing the new version of the service image, we can deploy it like so:

gcloud auth activate-service-account --key-file=$GCP_SA
gcloud config set project $GCP_PROJECT_ID
gcloud run deploy example-service-$ENVIRONMENT \
  --image $GCP_REGION-docker.pkg.dev/$GCP_PROJECT_ID/$CI_PROJECT_NAME-$ENVIRONMENT/$CI_PROJECT_NAME:latest \
  --region $GCP_REGION

And that’s about it. Happy deployment!