AWS - S3 Unauthenticated Enum
S3 Public Buckets
A bucket is considered "public" if any user can list the contents of the bucket, and "private" if the bucket's contents can only be listed or written by certain users.
Companies might have bucket permissions misconfigured, giving access either to everything or to everyone authenticated in AWS in any account (so to anyone). Note that even with such misconfigurations some actions might still not be possible, as buckets might have their own access control lists (ACLs).
Learn about AWS-S3 misconfiguration here: http://flaws.cloud and http://flaws2.cloud/
Finding AWS Buckets
Different methods to find when a webpage is using AWS to store some resources:

Enumeration & OSINT:

- Using the Wappalyzer browser plugin
- Using Burp (spidering the web) or by manually navigating through the page; all loaded resources will be saved in the History.
- Check for resources in domains like:
  - http://s3.amazonaws.com/[bucket_name]/
  - http://[bucket_name].s3.amazonaws.com/
- Check for CNAMEs, as resources.domain.com might have the CNAME bucket.s3.amazonaws.com (see the dig example after this list).
- s3dns: A lightweight DNS server that passively identifies cloud storage buckets (S3, GCP, Azure) by analyzing DNS traffic. It detects CNAMEs, follows resolution chains, and matches bucket patterns, offering a quiet alternative to brute-force or API-based discovery. Perfect for recon and OSINT workflows.
- Check https://buckets.grayhatwarfare.com, a web with already discovered open buckets.
- The bucket name and the bucket domain name need to be the same.
  - flaws.cloud is at IP 52.92.181.107, and if you go there it redirects you to https://aws.amazon.com/s3/. Also, dig -x 52.92.181.107 gives s3-website-us-west-2.amazonaws.com. To check it's a bucket you can also visit https://flaws.cloud.s3.amazonaws.com/.
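For example, a minimal sketch of the CNAME check (resources.domain.com is a placeholder host for illustration):

# Follow the CNAME of a suspected host
dig +short CNAME resources.domain.com
# An answer like <bucket>.s3.amazonaws.com or s3-website-<region>.amazonaws.com
# means the host is backed by an S3 bucket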
Brute-Force
You can find buckets by brute-forcing names related to the company you are pentesting:
- https://github.com/sa7mon/S3Scanner
- https://github.com/clario-tech/s3-inspector
- https://github.com/jordanpotti/AWSBucketDump (Contains a list with potential bucket names)
- https://github.com/fellchase/flumberboozle/tree/master/flumberbuckets
- https://github.com/smaranchand/bucky
- https://github.com/tomdev/teh_s3_bucketeers
- https://github.com/RhinoSecurityLabs/Security-Research/tree/master/tools/aws-pentest-tools/s3
- https://github.com/Eilonh/s3crets_scanner
- https://github.com/belane/CloudHunter
# Generate a wordlist to create permutations
curl -s https://raw.githubusercontent.com/cujanovic/goaltdns/master/words.txt > /tmp/words-s3.txt.temp
curl -s https://raw.githubusercontent.com/jordanpotti/AWSBucketDump/master/BucketNames.txt >>/tmp/words-s3.txt.temp
cat /tmp/words-s3.txt.temp | sort -u > /tmp/words-s3.txt
# Generate a wordlist based on the domains and subdomains to test
## Write those domains and subdomains in subdomains.txt
cat subdomains.txt > /tmp/words-hosts-s3.txt
cat subdomains.txt | tr "." "-" >> /tmp/words-hosts-s3.txt
cat subdomains.txt | tr "." "\n" | sort -u >> /tmp/words-hosts-s3.txt
# Create permutations based on a list with the domains and subdomains to attack
goaltdns -l /tmp/words-hosts-s3.txt -w /tmp/words-s3.txt -o /tmp/final-words-s3.txt.temp
## The previous tool is specialized in creating permutations for subdomains, let's filter that list
### Remove lines ending with "."
cat /tmp/final-words-s3.txt.temp | grep -Ev "\.$" > /tmp/final-words-s3.txt.temp2
### Create list without TLD
cat /tmp/final-words-s3.txt.temp2 | sed -E 's/\.[a-zA-Z0-9]+$//' > /tmp/final-words-s3.txt.temp3
### Create list without dots
cat /tmp/final-words-s3.txt.temp3 | tr -d "." > /tmp/final-words-s3.txt.temp4
### Create list with dots replaced by hyphens
cat /tmp/final-words-s3.txt.temp3 | tr "." "-" > /tmp/final-words-s3.txt.temp5
## Generate the final wordlist
cat /tmp/final-words-s3.txt.temp2 /tmp/final-words-s3.txt.temp3 /tmp/final-words-s3.txt.temp4 /tmp/final-words-s3.txt.temp5 | grep -v -- "-\." | awk '{print tolower($0)}' | sort -u > /tmp/final-words-s3.txt
## Call s3scanner
s3scanner --threads 100 scan --buckets-file /tmp/final-words-s3.txt | grep bucket_exists
Loot S3 Buckets
Given open S3 buckets, BucketLoot can automatically search them for interesting information.
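If you prefer a manual approach, a minimal sketch (the bucket name, output directory, and grep patterns are placeholders):

# Mirror the whole bucket unauthenticated and grep the loot for secrets
aws s3 sync s3://<BUCKETNAME> ./loot --no-sign-request
grep -rEi "(api[_-]?key|secret|passw(or)?d|token)" ./loot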
Find the Region
You can find all the regions supported by AWS at https://docs.aws.amazon.com/general/latest/gr/s3.html
By DNS
You can get the region of a bucket with dig and nslookup by resolving the domain and then doing a reverse DNS lookup of the discovered IP:
dig flaws.cloud
;; ANSWER SECTION:
flaws.cloud. 5 IN A 52.218.192.11
nslookup 52.218.192.11
Non-authoritative answer:
11.192.218.52.in-addr.arpa name = s3-website-us-west-2.amazonaws.com.
Check that the resolved domain has the word "website" in it.
You can access the static website going to: flaws.cloud.s3-website-us-west-2.amazonaws.com
or you can access the bucket visiting: flaws.cloud.s3-us-west-2.amazonaws.com
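A quick sanity check of both endpoints (using the region discovered above):

# Static website endpoint (returns 200 if website hosting is enabled)
curl -s -o /dev/null -w "%{http_code}\n" http://flaws.cloud.s3-website-us-west-2.amazonaws.com/
# REST endpoint of the bucket (returns an XML object listing if the bucket is public)
curl -s http://flaws.cloud.s3-us-west-2.amazonaws.com/ | head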
By Trying
If you try to access a bucket, but in the domain name you specify another region (for example the bucket is in bucket.s3.amazonaws.com but you try to access bucket.s3-website-us-west-2.amazonaws.com), then the error response will point you to the correct location.
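You can also read the region straight from the x-amz-bucket-region response header, which S3 returns regardless of the region you queried:

curl -sI https://flaws.cloud.s3.amazonaws.com/ | grep -i x-amz-bucket-region
# x-amz-bucket-region: us-west-2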
Enumerating the bucket
To test the openness of the bucket a user can just enter the URL in their web browser. A private bucket will respond with "Access Denied". A public bucket will list the first 1,000 objects that have been stored.
Open to everyone: the object listing is returned.
Private: an "Access Denied" error is returned.
You can also check this with the CLI:

# Use --no-sign-request to check Everyone's permissions
# Use --profile <PROFILE_NAME> to indicate the AWS profile (keys) that you want to use: check for "Any Authenticated AWS User" permissions
# Use --recursive if you want to list recursively
# Optionally you can specify the region if you know it
aws s3 ls s3://flaws.cloud/ [--no-sign-request] [--profile <PROFILE_NAME>] [--recursive] [--region us-west-2]
If the bucket doesn't have a domain name, when trying to enumerate it only put the bucket name and not the whole AWS S3 domain. Example: s3://<BUCKETNAME>
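A minimal sketch distinguishing the two cases mentioned in the comments above (the bucket name and profile are placeholders):

# "Everyone" permissions: no credentials at all
aws s3 ls s3://<BUCKETNAME> --no-sign-request
# "Any Authenticated AWS User" permissions: any valid AWS keys, from any account
aws s3 ls s3://<BUCKETNAME> --profile <ANY_PROFILE>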
Public URL template
https://{user_provided}.s3.amazonaws.com
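A quick existence check against that template (the bucket name is a placeholder):

curl -s -o /dev/null -w "%{http_code}\n" "https://<BUCKETNAME>.s3.amazonaws.com/"
# 404 => the bucket does not exist; 403 => it exists but is private; 200 => listable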
Get Account ID from public Bucket
It's possible to determine the AWS account a bucket belongs to by taking advantage of the S3 condition key s3:ResourceAccount. This condition restricts access based on the account the S3 bucket is in (other account-based policies restrict based on the account the requesting principal is in).
And because the policy can contain wildcards, it's possible to find the account number one digit at a time.
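A minimal sketch of a single step of the technique, assuming you control a role you can assume (the role ARN and target bucket are placeholders): attach a session policy that only allows S3 access when the bucket's account matches a wildcard, then test whether listing still works.

# Only allow S3 actions if the target bucket's account ID starts with "1"
aws sts assume-role \
  --role-arn arn:aws:iam::<YOUR_ACCOUNT_ID>:role/<YOUR_ROLE> \
  --role-session-name s3-account-enum \
  --policy '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"s3:*","Resource":"*","Condition":{"StringLike":{"s3:ResourceAccount":["1*"]}}}]}'
# Export the returned temporary credentials, then run:
#   aws s3 ls s3://<TARGET_BUCKET>
# Success means the account ID starts with "1"; repeat to recover it digit by digit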
This tool automates the process:
# Installation
pipx install s3-account-search
pip install s3-account-search
# With a bucket
s3-account-search arn:aws:iam::123456789012:role/s3_read s3://my-bucket
# With an object
s3-account-search arn:aws:iam::123456789012:role/s3_read s3://my-bucket/path/to/object.ext
This technique also works with API Gateway URLs, Lambda URLs, Data Exchange data sets and even to get the value of tags (if you know the tag key). You can find more information in the original research and the tool conditional-love to automate this exploitation.
Confirming a bucket belongs to an AWS account
As explained in this blog post, if you have permissions to list a bucket, it's possible to confirm the account ID the bucket belongs to by sending a request like:
curl -X GET "https://[bucketname].s3.amazonaws.com/" \
  -H "x-amz-expected-bucket-owner: [correct-account-id]"
<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">...</ListBucketResult>
If the response is an "Access Denied" error, it means that the account ID was wrong.
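A small sketch to test a list of candidate account IDs (the bucket name and candidates.txt are placeholders):

# candidates.txt holds one 12-digit account ID per line
while read -r acc; do
  code=$(curl -s -o /dev/null -w "%{http_code}" \
    "https://<BUCKETNAME>.s3.amazonaws.com/" \
    -H "x-amz-expected-bucket-owner: $acc")
  [ "$code" = "200" ] && echo "Bucket owner: $acc"
done < candidates.txt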
Using emails for root account enumeration
As explained in this blog post, it's possible to check if an email address is related to any AWS account by trying to grant that email permissions over an S3 bucket via ACLs. If this doesn't trigger an error, it means that the email is a root user of some AWS account:
import boto3

# A bucket you own; setting its ACL requires s3:PutBucketAcl on it
bucket_name = '<YOUR_BUCKET_NAME>'
s3_client = boto3.client('s3')

s3_client.put_bucket_acl(
    Bucket=bucket_name,
    AccessControlPolicy={
        'Grants': [
            {
                'Grantee': {
                    'EmailAddress': 'some@emailtotest.com',
                    'Type': 'AmazonCustomerByEmail',
                },
                'Permission': 'READ'
            },
        ],
        'Owner': {
            'DisplayName': 'Whatever',
            # Canonical user ID of the bucket owner
            'ID': 'c3d78ab5093a9ab8a5184de715d409c2ab5a0e2da66f08c2f6cc5c0bdeadbeef'
        }
    }
)
# If the email is not linked to any AWS root account, the call fails with
# an UnresolvableGrantByEmailAddress error
References
- https://www.youtube.com/watch?v=8ZXRw4Ry3mQ
- https://cloudar.be/awsblog/finding-the-account-id-of-any-public-s3-bucket/