
[Bug]: Provisioning runners fails #34176

Open · 1 of 17 tasks
damccorm opened this issue Mar 4, 2025 · 2 comments

damccorm commented Mar 4, 2025

What happened?

Right now, following the steps in https://github.com/apache/beam/blob/master/.github/gh-actions-self-hosted-runners/arc/README.md fails. Running:

gcloud auth login
gcloud auth application-default login
terraform init -backend-config="bucket=beam-arc-state"

terraform apply -var-file=environments/beam.env

leads to the following error:

│ Error: googleapi: Error 400: At least one of ['node_version', 'image_type', 'updated_node_pool', 'locations', 'workload_metadata_config', 'upgrade_settings', 'kubelet_config', 'linux_node_config', 'tags', 'taints', 'labels', 'node_network_config', 'gcfs_config', 'gvnic', 'confidential_nodes', 'logging_config', 'fast_socket', 'resource_labels', 'accelerators', 'windows_node_config', 'machine_type', 'disk_type', 'disk_size_gb', 'storage_pools', 'containerd_config', 'resource_manager_tags', 'performance_monitoring_unit', 'queued_provisioning', 'max_run_duration', 'flex_start'] must be specified.
│ Details:
│ [
│   {
│     "@type": "type.googleapis.com/google.rpc.DebugInfo",
│     "detail": "INVALID_ARGUMENT: at least one of ['node_version', 'image_type', 'updated_node_pool', 'locations', 'workload_metadata_config', 'upgrade_settings', 'kubelet_config', 'linux_node_config', 'tags', 'taints', 'labels', 'node_network_config', 'gcfs_config', 'gvnic', 'confidential_nodes', 'logging_config', 'fast_socket', 'resource_labels', 'accelerators', 'windows_node_config', 'machine_type', 'disk_type', 'disk_size_gb', 'storage_pools', 'containerd_config', 'resource_manager_tags', 'performance_monitoring_unit', 'queued_provisioning', 'max_run_duration', 'flex_start'] must be specified",
│     "stackEntries": [
│       "cloud/kubernetes/engine/common/error_desc.go:430 +0x26 google3/cloud/kubernetes/engine/common/errdesc.(*GKEErrorDescriptor).createErr(0xc002175c80, {0x56278ddf7528, 0xc0833d24b0})",
│       "cloud/kubernetes/engine/common/error_desc.go:302 +0x4c google3/cloud/kubernetes/engine/common/errdesc.(*GKEErrorDescriptor).WithMsgCtx(0x56278ddf7528?, {0x56278ddf7528?, 0xc0833d24b0?}, {0x56277cca6475, 0x239}, {0x0, 0x0, 0x0})",
│       "cloud/kubernetes/server/v1alpha1/input_validation_updates.go:196 +0x305 google3/cloud/kubernetes/server/v1alpha1/validate.(*Validator).V1alpha1UpdateNodePoolRequest(0xc02346b528, {0x56278ddf7528, 0xc0833d24b0}, 0xc038d56c00)",
│       "cloud/kubernetes/server/v1alpha1/server.go:564 +0x32 google3/cloud/kubernetes/server/v1alpha1/server.(*ClusterServer).updateNodePool.func1({0x56278ddf7528?, 0xc0833d24b0?})",
│       "cloud/kubernetes/engine/requests/stage.go:188 +0x78 google3/cloud/kubernetes/engine/requests/stage.Record.func1({0x56278ddf7528, 0xc0833d24b0})",
│       "cloud/kubernetes/engine/common/stage/stage.go:212 +0x9df google3/cloud/kubernetes/engine/common/stage/stage.Record({0x56278ddf7528, 0xc072069c80}, {0x56277ca24f63?, 0x562798610720?}, {0x56277c9f6dfe, 0x3}, 0xc0440f6d78, {0xc0440f6db8, 0x1, 0x1})",
│       "cloud/kubernetes/engine/requests/stage.go:221 +0x4d2 google3/cloud/kubernetes/engine/requests/stage.Record({0x56278ddf7528, 0xc072069c80}, {0x56277ca24f63, 0xf}, {0x56277c9f6dfe, 0x3}, 0xc0440f6f00)",
│       "cloud/kubernetes/server/v1alpha1/server.go:563 +0x98 google3/cloud/kubernetes/server/v1alpha1/server.(*ClusterServer).updateNodePool(0xc02d314200, {0x56278ddf7528, 0xc072069c80}, 0xc038d56c00, 0xc01f072900, 0xc0bfd03ec0)",
│       "cloud/kubernetes/server/v1alpha1/server.go:542 +0x38d google3/cloud/kubernetes/engine/server/api/v1/server.(*ClusterServer).UpdateNodePool.(*ClusterServer).UpdateNodePool.func1({0x56278ddf7528, 0xc072069c80})",
│       "cloud/kubernetes/engine/requests/stage.go:188 +0x78 google3/cloud/kubernetes/engine/requests/stage.Record.func1({0x56278ddf7528, 0xc072069c80})",
│       "cloud/kubernetes/engine/common/stage/stage.go:212 +0x9df google3/cloud/kubernetes/engine/common/stage/stage.Record({0x56278ddf7528, 0xc072069a10}, {0x56277ca59da3?, 0x101?}, {0x56277c9f6dfe, 0x3}, 0xc0440f7620, {0xc0440f7660, 0x1, 0x1})",
│       "cloud/kubernetes/engine/requests/stage.go:221 +0x4d2 google3/cloud/kubernetes/engine/requests/stage.Record({0x56278ddf7528, 0xc072069a10}, {0x56277ca59da3, 0x16}, {0x56277c9f6dfe, 0x3}, 0xc0440f7708)",
│       "cloud/kubernetes/server/v1alpha1/server.go:518 google3/cloud/kubernetes/server/v1alpha1/server.(*ClusterServer).UpdateNodePool(...)",
│       "cloud/kubernetes/engine/server/api/v1/server.go:99 +0x14e google3/cloud/kubernetes/engine/server/api/v1/server.(*ClusterServer).UpdateNodePool(0xc02346b538, {0x56278ddf7528, 0xc072069a10}, 0xc06878ae00, 0xc01f0727e0)",
│       "blaze-out/k8-opt/bin/google/container/v1/cluster_service.pb.go:36751 +0xeb google3/google/container/v1_cluster_service_go_proto._ClusterManager_UpdateNodePool_Handler({0x56278da46e80, 0xc02346b538}, 0xc090ab3808, {0x56278dc7d380?, 0xc06878ae00})",
│       "cloud/kubernetes/engine/common/interceptors/stubby_interceptor.go:149 +0x3fe google3/cloud/kubernetes/engine/common/interceptors/stubbyinterceptor.(*Hook).handleRPCWithCall(0xc02d09ba60, {0x56278ddf7b20, 0xc01280cc00}, 0xc095bfaac0, 0xc06a87eb80)",
│       "cloud/kubernetes/engine/common/interceptors/stubby_interceptor.go:99 +0xb2 google3/cloud/kubernetes/engine/common/interceptors/stubbyinterceptor.(*Hook).handleRPC(0xc02d09ba60, {0x56278ddf7b20, 0xc01280cc00}, 0xc06a87eb80)"
│     ]
│   },
│   {
│     "@type": "type.googleapis.com/google.rpc.RequestInfo",
│     "requestId": "0x18abaffb8e82791f"
│   }
│ ]
│ , badRequest
│ 
│   with google_container_node_pool.additional_runner_pools["highmem-runner-22"],
│   on gke.tf line 53, in resource "google_container_node_pool" "additional_runner_pools":
│   53: resource "google_container_node_pool" "additional_runner_pools" {

This is basically failing to deploy the changes from #34170, but it seems like the issue is actually with the underlying Terraform config.
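As a possible debugging step (not from the README; the resource address is copied from the error above and the plan-file name is just a placeholder), the planned change for the failing pool could be isolated and dumped to see which node-pool attributes Terraform is actually trying to send in the UpdateNodePool call:

```sh
# Hypothetical debugging sketch: plan only the failing node pool and write the plan
# to a file; the plan-file name is arbitrary.
terraform plan -var-file=environments/beam.env \
  -target='google_container_node_pool.additional_runner_pools["highmem-runner-22"]' \
  -out=highmem22.tfplan

# Dump the proposed change set for that resource (requires jq).
terraform show -json highmem22.tfplan | jq '.resource_changes[] | {address, actions: .change.actions}'
```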

Issue Priority

Priority: 2 (default / most bugs should be filed as P2)

Issue Components

  • Component: Python SDK
  • Component: Java SDK
  • Component: Go SDK
  • Component: Typescript SDK
  • Component: IO connector
  • Component: Beam YAML
  • Component: Beam examples
  • Component: Beam playground
  • Component: Beam katas
  • Component: Website
  • Component: Infrastructure
  • Component: Spark Runner
  • Component: Flink Runner
  • Component: Samza Runner
  • Component: Twister2 Runner
  • Component: Hazelcast Jet Runner
  • Component: Google Cloud Dataflow Runner

damccorm commented Mar 4, 2025

@Amar3tto @mrshakirov @akashorabek could one of you take a look at this at some point? It is possible something is busted with my local config and this will just work for you.

Not urgent, but would be nice to have at some point in the next few weeks.

@claudevdm (Collaborator) commented:

It looks like maybe the command did something. I can see there are now 8 highmem22 nodes.

From GKE:

  • Size (number of nodes): 8
  • Auto-scaling: On (0-8 nodes)
  • Node zones: us-central1-b
  • Location policy: Balanced
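For reference, a sketch of checking the pool directly with gcloud; the cluster name is a placeholder, and the GKE pool name is assumed to match the Terraform key `highmem-runner-22`:

```sh
# Hypothetical verification command; <cluster-name> is a placeholder, and use
# --region instead of --zone if the cluster is regional.
gcloud container node-pools describe highmem-runner-22 \
  --cluster=<cluster-name> --zone=us-central1-b \
  --format='yaml(autoscaling, initialNodeCount, locations, version)'
```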
