Today, we will discuss a real-world scenario for securely sharing files using AWS S3. We will provide detailed information about the architecture and configurations required for each step. 

However, before diving into the topic, it is essential to understand what AWS S3 is. 

What is AWS S3?

AWS S3 is one of the most popular and widely used services provided by AWS. It is an object storage service used for file storage, backup and disaster recovery, data archiving, data lakes for analytics, and hybrid cloud storage.

As storage needs continue to grow, building and maintaining your own storage system becomes increasingly difficult and complex. AWS S3 is an ideal solution for these needs thanks to its scalability, high availability, low latency, and low cost.

It is also very easy to use: you can create an AWS S3 bucket, upload any files you want, and integrate it with other AWS services such as AWS Lambda, AWS API Gateway, and AWS CloudFront.

How to Securely Share AWS S3 Files

In addition to being attractive to developers, AWS S3 is also an attractive target for attackers. If the necessary AWS S3 configurations are not made, data leaks can be extremely damaging for companies.

Therefore, it is necessary to consider some security points when creating your own AWS S3 bucket (a configuration sketch follows the list):

  • Always consider blocking public access first.
  • Use AWS S3 encryption.
  • Limit your IAM permissions to AWS S3 buckets. Always follow the principle of least privilege.
  • Enforce HTTPS to prevent attacks like MITM (Man-in-the-Middle).
  • Enhance S3 security using logging.
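
As a starting point, here is a minimal boto3 sketch of the first, second, and fourth points (the bucket name is hypothetical; adjust it to your own):

import json
import boto3

BUCKET = "company-invoice-reports"  # hypothetical bucket name

s3 = boto3.client('s3')

# Block every form of public access to the bucket.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        'BlockPublicAcls': True,
        'IgnorePublicAcls': True,
        'BlockPublicPolicy': True,
        'RestrictPublicBuckets': True,
    })

# Enable default server-side encryption (SSE-S3) for new objects.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        'Rules': [{'ApplyServerSideEncryptionByDefault': {'SSEAlgorithm': 'AES256'}}]
    })

# Deny any request that does not arrive over HTTPS.
https_only_policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Deny',
        'Principal': '*',
        'Action': 's3:*',
        'Resource': [f'arn:aws:s3:::{BUCKET}', f'arn:aws:s3:::{BUCKET}/*'],
        'Condition': {'Bool': {'aws:SecureTransport': 'false'}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(https_only_policy))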

Now, it’s time to move on to our real-world scenario. 

A Real-World Scenario of Secure Sharing

A company wants to share invoice reports weekly. Here are the conditions:

  • After a week, the reports should no longer be reachable by anyone.
  • Reports contain critical data such as financial details, so they should not be accessible from a public URL.
  • The process should be simple, since the people preparing the reports are not developers.
  • The company manager does not want to pay much for the process.

Solution: AWS S3 pre-signed URL generation for every upload!

Let’s review the architecture for AWS S3 pre-signed URL generation together:

  1. First, we need to create a private AWS S3 bucket for the reports. Then we need to create a new API Gateway in front of it, so that reports can be uploaded to the bucket with API requests.

Our AWS API Gateway should look like below:

 AWS API Gateway

We need to get the invoke URL from the created AWS API Gateway:

 AWS API Gateway

Note: Keep this invoke URL for the next steps. You will use it to put objects into the AWS S3 bucket.

Send an API request to the invoke URL to upload reports to AWS S3 (we’re using Postman for this):

API request to invoke the URL

API request body:
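
If you prefer scripting the upload instead of using Postman, a minimal Python sketch could look like the following. It assumes the API Gateway proxies the PUT body straight through to the bucket at a /{bucket}/{key} path; the invoke URL, path, and file name are all hypothetical.

import requests

INVOKE_URL = 'https://abc123.execute-api.eu-west-1.amazonaws.com/prod'  # hypothetical

# Upload the report: the raw file bytes form the request body.
with open('invoice-report.pdf', 'rb') as report:
    response = requests.put(
        f'{INVOKE_URL}/reports-bucket/invoice-report.pdf',  # /{bucket}/{key}
        data=report,
        headers={'Content-Type': 'application/pdf'})

print(response.status_code)  # 200 means the object landed in the bucket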

Note: This API is accessible to anyone. Attackers love unauthorized access. To prevent this, we need to allow only specific IP addresses to access our Amazon API Gateway; these can be the company’s VPN IPs. You can use resource policies for this, as sketched below.
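
A minimal sketch of such a resource policy, attached with boto3 (the API ID and the VPN CIDR range are hypothetical; the change takes effect after you redeploy the stage):

import json
import boto3

REST_API_ID = 'abc123'            # hypothetical API ID
ALLOWED_CIDR = '203.0.113.0/24'   # hypothetical company VPN range

# Allow invocation only from the VPN range; deny everything else.
policy = {
    'Version': '2012-10-17',
    'Statement': [
        {'Effect': 'Allow', 'Principal': '*', 'Action': 'execute-api:Invoke',
         'Resource': f'arn:aws:execute-api:*:*:{REST_API_ID}/*'},
        {'Effect': 'Deny', 'Principal': '*', 'Action': 'execute-api:Invoke',
         'Resource': f'arn:aws:execute-api:*:*:{REST_API_ID}/*',
         'Condition': {'NotIpAddress': {'aws:SourceIp': ALLOWED_CIDR}}},
    ],
}

boto3.client('apigateway').update_rest_api(
    restApiId=REST_API_ID,
    patchOperations=[{'op': 'replace', 'path': '/policy', 'value': json.dumps(policy)}])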

  2. Check that the reports are uploaded to AWS S3 successfully.
  3. We have uploaded reports to the AWS S3 bucket via API requests. Now we need an AWS Lambda function that is invoked for every uploaded report. This function will generate a pre-signed URL that expires after 7 days and send it to the employees defined on AWS SNS. Employees need to be subscribed to the AWS SNS topic before this operation.

Note: All S3 objects are private by default; only the object owner can access them. By creating a pre-signed URL, the object owner can share objects with others. To create a pre-signed URL, use your own credentials and grant time-limited permission to download the objects. The maximum expiration time for a pre-signed URL is one week from the time of creation, and there is no way to create a pre-signed URL without an expiry time.
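
Subscribing an employee to the topic is a one-time step, for example (hypothetical topic ARN and address; AWS sends a confirmation email that the employee must accept):

import boto3

boto3.client('sns').subscribe(
    TopicArn='arn:aws:sns:eu-west-1:111122223333:report-urls',  # hypothetical
    Protocol='email',
    Endpoint='employee@example.com')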

You can use the Lambda function code below:

import boto3
from urllib.parse import unquote_plus

def lambda_handler(event, context):
    # The S3 event notification carries the bucket and key of the uploaded report.
    # Object keys arrive URL-encoded, so decode them before use.
    s3_bucket_name = event['Records'][0]['s3']['bucket']['name']
    s3_report_key = unquote_plus(event['Records'][0]['s3']['object']['key'])

    # Generate a pre-signed GET URL for the report, valid for 7 days
    # (604800 seconds), which is the maximum allowed lifetime.
    s3_client = boto3.client('s3')
    report_presigned_url = s3_client.generate_presigned_url(
        'get_object',
        Params={'Bucket': s3_bucket_name, 'Key': s3_report_key},
        ExpiresIn=604800)

    # Publish the URL to the SNS topic so every subscribed employee receives it.
    MY_SNS_TOPIC_ARN = "{SNS_topic_ARN}"
    sns_client = boto3.client('sns')
    sns_client.publish(
        TopicArn=MY_SNS_TOPIC_ARN,
        Subject='Reports Presigned URLs',
        Message="Your Reports are ready:\n%s" % report_presigned_url)
    return {'message': 'it works'}

We need to attach the required policies to the AWS Lambda function. For testing purposes, full access policies are attached here. However, if you use this code in production, you should create your own policies following the principle of least privilege.
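
For instance, a sketch of a tighter inline policy for the Lambda execution role (all names and ARNs are hypothetical; s3:GetObject is needed because a pre-signed URL only works if the credentials that signed it may read the object):

import json
import boto3

ROLE_NAME = 'report-presigner-lambda-role'  # hypothetical execution role

least_privilege_policy = {
    'Version': '2012-10-17',
    'Statement': [
        # Read-only access to the report objects, nothing else in S3.
        {'Effect': 'Allow', 'Action': 's3:GetObject',
         'Resource': 'arn:aws:s3:::company-invoice-reports/*'},
        # Publish permission on the single notification topic only.
        {'Effect': 'Allow', 'Action': 'sns:Publish',
         'Resource': 'arn:aws:sns:eu-west-1:111122223333:report-urls'},
    ],
}

boto3.client('iam').put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName='report-presigner-least-privilege',
    PolicyDocument=json.dumps(least_privilege_policy))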

  4. Now, we should add the event notification that will trigger the Lambda function when a report is uploaded to the S3 bucket. We need to navigate to “Amazon S3 → {bucket_name} → Properties → Event notifications” and create a new event notification:

We need to select the Lambda function’s ARN that we created before:
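
The console step above is equivalent to the following boto3 call (hypothetical names; note that S3 must also be allowed to invoke the function, a permission the console adds automatically):

import boto3

boto3.client('s3').put_bucket_notification_configuration(
    Bucket='company-invoice-reports',  # hypothetical
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'LambdaFunctionArn': 'arn:aws:lambda:eu-west-1:111122223333:function:report-presigner',
            'Events': ['s3:ObjectCreated:*'],  # fire on every new upload
        }]
    })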

  5. We should also check the reports in the S3 bucket and delete those whose creation date is more than 7 days old. For this, we will create a Lambda function and trigger it every 7 days from AWS CloudWatch. You can use the Lambda function code below:
import boto3
from datetime import datetime, timedelta, timezone

BUCKET_NAME = "{bucket_name}"

def lambda_handler(event, context):
    s3 = boto3.client('s3')
    # Anything last modified more than 7 days ago is considered expired.
    cutoff = datetime.now(timezone.utc) - timedelta(days=7)
    try:
        files = s3.list_objects_v2(Bucket=BUCKET_NAME)['Contents']
    except KeyError:
        # list_objects_v2 returns no 'Contents' key when the bucket is empty.
        return {'statusCode': 200, 'body': "No Expired Object to Delete!"}
    old_files = [{'Key': file['Key']} for file in files
                 if file['LastModified'] < cutoff]
    if old_files:
        # delete_objects removes up to 1000 keys per call.
        s3.delete_objects(Bucket=BUCKET_NAME, Delete={'Objects': old_files})
        return {'statusCode': 200, 'body': "Expired Objects Deleted!"}
    return {'statusCode': 200, 'body': "No Expired Object to Delete!"}

When creating the CloudWatch Event Rule, we must select the event source (a 7-day schedule) and the target (the cleanup Lambda function).
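
Behind the console, the rule comes down to two calls; here is a sketch with hypothetical names (CloudWatch must also be permitted to invoke the function, which the console handles for you):

import boto3

events = boto3.client('events')

# Run the cleanup once every 7 days.
events.put_rule(Name='delete-expired-reports', ScheduleExpression='rate(7 days)')

# Point the rule at the cleanup Lambda function.
events.put_targets(
    Rule='delete-expired-reports',
    Targets=[{
        'Id': 'cleanup-lambda',
        'Arn': 'arn:aws:lambda:eu-west-1:111122223333:function:delete-expired-reports',
    }])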

  6. All processes are completed and ready for use. To upload reports, we make an API request; after it succeeds, we will receive an email containing the pre-signed URL.

When we click on the pre-signed URL, we will be able to download the report we have uploaded with the API request.

Also, the event rule we created in CloudWatch will check our S3 bucket weekly and delete any reports that have been there for more than one week.

To sum up

In this blog, we’ve summarized what AWS S3 is and walked through a real-world use case for sharing AWS S3 files securely, providing the architecture and configurations for every step. We hope you enjoyed it.

In conclusion, sharing AWS S3 files securely is crucial to prevent unauthorized access or data breaches. By following these guidelines, organizations can ensure the protection of their sensitive data while securely sharing files through AWS S3.

Check out our Cloud Security services to stay secure in the cloud!