5.10.3 Consider GKE Sandbox for running untrusted workloads

Information

Use GKE Sandbox to restrict untrusted workloads as an additional layer of protection when running in a multi-tenant environment.

GKE Sandbox provides an extra layer of security to prevent untrusted code from affecting the host kernel on your cluster nodes.

When you enable GKE Sandbox on a Node pool, a sandbox is created for each Pod running on a node in that Node pool. In addition, nodes running sandboxed Pods are prevented from accessing other GCP services or cluster metadata. Each sandbox uses its own userspace kernel.
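For illustration, a sandboxed node pool on GKE exposes gVisor through a RuntimeClass named gvisor, and a workload opts into the sandbox by setting runtimeClassName in its Pod specification. A minimal sketch (the Pod name and nginx image below are placeholder values) could look like:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-example    # placeholder name
spec:
  runtimeClassName: gvisor   # run this Pod in the gVisor sandbox on a sandbox-enabled node pool
  containers:
  - name: app
    image: nginx             # placeholder image
EOF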

Multi-tenant clusters and clusters whose containers run untrusted workloads are more exposed to security vulnerabilities than other clusters. Examples include SaaS providers, web-hosting providers, and other organizations that allow their users to upload and run code. A flaw in the container runtime or in the host kernel could allow a process running within a container to 'escape' the container and affect the node's kernel, potentially bringing down the node.

The potential also exists for a malicious tenant to gain access to and exfiltrate another tenant's data in memory or on disk, by exploiting such a defect.

Solution

GKE Sandbox cannot be enabled on an existing node pool; a new node pool must be created with GKE Sandbox enabled. The default node pool (the first node pool in the cluster, created when the cluster is created) cannot use GKE Sandbox.

Using Google Cloud Console:

- Go to Kubernetes Engine by visiting https://console.cloud.google.com/kubernetes/.
- Select a cluster and click ADD NODE POOL.
- Configure the Node pool with the following settings:
  - For the node version, select v1.12.6-gke.8 or higher.
  - For the node image, select Container-Optimized OS with Containerd (cos_containerd) (default).
  - Under Security, select Enable sandbox with gVisor.
- Configure other Node pool settings as required.
- Click SAVE.

Using Command Line:

To enable GKE Sandbox on an existing cluster, create a new Node pool using:

gcloud container node-pools create <node_pool_name> --zone <compute-zone> --cluster <cluster_name> --image-type=cos_containerd --sandbox="type=gvisor"
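For example, with placeholder values for the pool, zone, and cluster names, the command and a follow-up check might look like the following; the config.sandboxConfig.type field path used for verification is an assumption, not taken from the benchmark text:

# Create a sandboxed node pool (gvisor-pool, us-central1-a, and example-cluster are placeholders)
gcloud container node-pools create gvisor-pool \
  --zone us-central1-a \
  --cluster example-cluster \
  --image-type=cos_containerd \
  --sandbox="type=gvisor"

# Confirm gVisor is configured on the new pool (field path is assumed; expected value is GVISOR)
gcloud container node-pools describe gvisor-pool \
  --zone us-central1-a \
  --cluster example-cluster \
  --format="value(config.sandboxConfig.type)"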

Impact:

Using GKE Sandbox requires the node image to be set to Container-Optimized OS with containerd (cos_containerd).

It is not currently possible to use GKE Sandbox along with the following Kubernetes features:

- Accelerators such as GPUs or TPUs
- Istio
- Monitoring statistics at the level of the Pod or container
- Hostpath storage
- Per-container PID namespace
- CPU and memory limits are only applied for Guaranteed Pods and Burstable Pods, and only when CPU and memory limits are specified for all containers running in the Pod (see the example after this list)
- Pods using PodSecurityPolicies that specify host namespaces, such as hostNetwork, hostPID, or hostIPC
- Pods using PodSecurityPolicy settings such as privileged mode
- VolumeDevices
- Portforward
- Linux kernel security modules such as Seccomp, AppArmor, or SELinux, and settings such as Sysctl, NoNewPrivileges, bidirectional MountPropagation, FSGroup, or ProcMount
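As a sketch of the resource limits caveat above, a sandboxed Pod only has its CPU and memory limits enforced when every container in the Pod declares them. An illustrative manifest (names, image, and values are placeholders) might look like:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-limits-example   # placeholder name
spec:
  runtimeClassName: gvisor
  containers:
  - name: app
    image: nginx                   # placeholder image
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
      limits:                      # limits set on every container so enforcement applies
        cpu: 500m
        memory: 512Mi
EOF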

See Also

https://workbench.cisecurity.org/benchmarks/19166

Item Details

Category: SYSTEM AND COMMUNICATIONS PROTECTION

References: 800-53|SC-7, CSCv7|18.9

Plugin: GCP

Control ID: ae4b145c41137ea5b7f99f6b3697d0597bb31a7fbd1c5194b60dfcad3c9fb6e9