Simplifying Amazon S3: A Must-Read for AWS Associate Certification

by Mai Thoa, January 11th, 2025

Too Long; Didn't Read

This comprehensive guide explores Amazon S3 essentials for AWS Associate Solution Architects, covering storage classes, cost considerations, availability, and retrieval times. Learn about bucket policies, versioning, lifecycle management, encryption, and event notifications. The article also dives into advanced topics like S3 Transfer Acceleration, static website hosting, and data protection using Object Lock and Glacier Vault Lock. With practical AWS CLI examples, you'll gain hands-on insights to confidently architect scalable and secure cloud solutions.

Amazon S3 is a cornerstone of AWS cloud storage, offering unmatched scalability, reliability, and versatility for modern applications. For AWS Associate Solution Architects, mastering S3 is essential to designing efficient, secure, and cost-effective solutions.


From understanding storage classes and lifecycle management to leveraging features like encryption, versioning, and event notifications, this guide covers the critical knowledge you need to optimize your S3 usage and succeed in your AWS journey.


  1. Storage Classes: 7 classes

| S3 Storage Class | Use cases | Cost | Availability | Retrieval Time |
| --- | --- | --- | --- | --- |
| S3 Standard | Frequently accessed data. Ideal for big data analytics, mobile and gaming apps, and content delivery. | High | 99.99% | Immediate |
| S3 Intelligent-Tiering | Data with unknown or changing access patterns. Automatically moves objects between tiers. | Depends on tier | 99.9% | Immediate |
| S3 Standard-IA (Infrequent Access) | Infrequently accessed data that requires rapid access. Great for backups and disaster recovery. | Lower than S3 Standard | 99.9% | Immediate |
| S3 One Zone-IA | Infrequently accessed data that does not require multi-AZ resilience. One AZ only. | Lower than Standard-IA | 99.5% | Immediate |
| S3 Glacier Instant Retrieval | Archive data that needs millisecond access. | Lower than Standard-IA | 99.9% | Milliseconds |
| S3 Glacier Flexible Retrieval | Long-term, cost-sensitive archival with infrequent access. Suitable for compliance archives. | Very Low | 99.99% | Minutes to hours |
| S3 Glacier Deep Archive | Lowest-cost archival for rarely accessed data. | Lowest | 99.99% | 12-48 hours |


  2. Durability and Availability:

    Amazon S3 offers 99.999999999% (11 nines) durability across all storage classes, with designed availability ranging from 99.5% (One Zone-IA) to 99.99% depending on the storage class.


    By default, S3 replicates your data across at least three Availability Zones within the same Region, providing built-in redundancy to ensure the durability of most S3 storage classes. This setup does not require additional configuration, except for S3 One Zone-IA, which stores data in a single Availability Zone.


  3. Bucket Policies and Access Control:

    When an S3 bucket is created, it is private by default: all objects in the bucket are private, and only the bucket owner and AWS accounts or IAM principals granted appropriate permissions can access them.
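
    As a sketch of how access is granted explicitly, the bucket policy below allows a single IAM user read-only access to objects; the account ID, user name, and bucket name are placeholders, and the file name is mine.

    ```shell
    # hypothetical policy: allow one IAM user to read objects in the bucket
    cat > allow-read.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowUserReadObjects",
          "Effect": "Allow",
          "Principal": { "AWS": "arn:aws:iam::<ACCOUNT-ID>:user/<USER-NAME>" },
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::<BUCKET-NAME>/*"
        }
      ]
    }
    EOF
    # attach it (requires credentials and real values):
    # aws s3api put-bucket-policy --bucket <BUCKET-NAME> --policy file://allow-read.json
    ```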


  4. Versioning and Lifecycle Management:

    Amazon S3 Versioning is a valuable feature in many scenarios, particularly when data protection, audit trails, disaster recovery, or compliance with regulatory requirements are needed. It is especially useful for use cases involving frequently changing data, data that needs to be retained over time, or historical data that is important for recovery and analysis.


    It is a best practice to use S3 Versioning alongside Lifecycle Management in use cases such as data backup and recovery, compliance, archiving, and storage management. This combination ensures optimal storage costs, data protection, and simplifies operations by automating versioning and data transitions.


    Below are examples of lifecycle rules you can define for objects in a bucket (objects can also be filtered by prefix or tag). You can define rules that transition objects to different storage classes, or that delete them after a specified period. For example: move objects to Glacier after 90 days and permanently delete them after 365 days.
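
    The 90/365-day example above can be written as a lifecycle configuration; a minimal sketch (the rule ID and file name are arbitrary, the bucket name is a placeholder):

    ```shell
    # lifecycle rule: transition to Glacier after 90 days, delete after 365 days
    cat > lifecycle.json <<'EOF'
    {
      "Rules": [
        {
          "ID": "archive-then-expire",
          "Status": "Enabled",
          "Filter": { "Prefix": "" },
          "Transitions": [ { "Days": 90, "StorageClass": "GLACIER" } ],
          "Expiration": { "Days": 365 }
        }
      ]
    }
    EOF
    # apply it (requires credentials):
    # aws s3api put-bucket-lifecycle-configuration --bucket <BUCKET-NAME> --lifecycle-configuration file://lifecycle.json
    ```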


  5. Data Protection and Encryption:

    When a bucket is created, all objects stored in it are encrypted with SSE-S3 by default. SSE-S3 is a free server-side encryption option for encrypting data at rest.


    Granular permissions with SSE-KMS: By using SSE-KMS (AWS Key Management Service), you can control who has access to the encryption keys, enabling fine-grained control over who can decrypt your data. With SSE-KMS, you can create, rotate, and revoke encryption keys, giving you full control over the life cycle of the keys used to encrypt your data.
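
    A sketch of switching a bucket's default encryption from SSE-S3 to SSE-KMS; the KMS key ARN and bucket name are placeholders, and the file name is mine:

    ```shell
    # default encryption rule using a customer-managed KMS key
    cat > encryption.json <<'EOF'
    {
      "Rules": [
        {
          "ApplyServerSideEncryptionByDefault": {
            "SSEAlgorithm": "aws:kms",
            "KMSMasterKeyID": "<KMS-KEY-ARN>"
          },
          "BucketKeyEnabled": true
        }
      ]
    }
    EOF
    # apply it (requires credentials):
    # aws s3api put-bucket-encryption --bucket <BUCKET-NAME> --server-side-encryption-configuration file://encryption.json
    ```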


    Try it: check whether server-side encryption is enabled on a bucket.


    (Note: region, account-id, and bucket names in the examples are placeholders; replace them with your own values when trying these commands with the AWS CLI.)

    • Use the command below.
    aws s3api get-bucket-encryption --bucket <BUCKET-NAME>
    


    • A bucket with server-side encryption enabled returns a response similar to the one below.
    {
        "ServerSideEncryptionConfiguration": {
            "Rules": [
                {
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "AES256"
                    },
                    "BucketKeyEnabled": false
                }
            ]
        }
    }
    


    Note: to encrypt data in transit, make sure to use HTTPS for secure data transfers.
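
    One common way to enforce HTTPS is a bucket policy that denies any request made without TLS, using the aws:SecureTransport condition key; a sketch with placeholder names:

    ```shell
    # deny all S3 actions on requests that do not use TLS
    cat > require-tls.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DenyInsecureTransport",
          "Effect": "Deny",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": [
            "arn:aws:s3:::<BUCKET-NAME>",
            "arn:aws:s3:::<BUCKET-NAME>/*"
          ],
          "Condition": { "Bool": { "aws:SecureTransport": "false" } }
        }
      ]
    }
    EOF
    # apply it (requires credentials):
    # aws s3api put-bucket-policy --bucket <BUCKET-NAME> --policy file://require-tls.json
    ```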


  6. Event Notifications:

    S3 can trigger notifications to AWS services like Lambda, SNS, or SQS when specific events occur (e.g., object creation or deletion).

    It is used for workflows such as image processing, logging, or custom alerts.


    Try it: create an S3 bucket and configure an SNS topic to send an email notification whenever a file is uploaded to the bucket.


    1. Create an S3 bucket for uploading the file.

      aws s3api create-bucket --bucket <BUCKET-NAME> --region eu-north-1 --create-bucket-configuration LocationConstraint=eu-north-1
      
    2. Create an SNS Topic.

      aws sns create-topic --name MySNSTopic
      
      • The output will contain the Topic ARN (e.g., arn:aws:sns:region:account-id:MySNSTopic).

      • Note down the Topic ARN for later steps.


    3. Subscribe to the SNS topic to receive emails from it.

      aws sns subscribe --topic-arn arn:aws:sns:region:account-id:MySNSTopic --protocol email --notification-endpoint youremail@example.com
      


    4. Confirm the subscription by checking your email and clicking the confirmation link, or copy the token from the confirmation link and confirm the subscription via the AWS CLI:

      aws sns confirm-subscription --topic-arn arn:aws:sns:region:account-id:MySNSTopic --token [a very long token you got in your email]
      

      The confirmation email contains the link (and token) for confirming the subscription to the SNS topic.


    5. Once the subscription is confirmed, you can test whether you receive an email from the SNS topic by publishing a message to it.

      • use the command below to publish a test message

        aws sns publish --topic-arn arn:aws:sns:region:account-id:MySNSTopic --message "Test notification"
        
    6. Grant S3 Permission to Publish to SNS topic

      • Create a file named sns-policy.json to define the policy that allows S3 to publish messages.

        {
          "Version": "2008-10-17",
          "Id": "__default_policy_ID",
          "Statement": [
            {
              "Sid": "__default_statement_ID",
              "Effect": "Allow",
              "Principal": {
                "Service": "s3.amazonaws.com"
              },
              "Action": "SNS:Publish",
              "Resource": "arn:aws:sns:region:account-id:MySNSTopic",
              "Condition": {
                "StringEquals": {
                  "AWS:SourceAccount": "account-id"
                },
                "ArnLike": {
                  "AWS:SourceArn": "arn:aws:s3:::<BUCKET-NAME>"
                }
              }
            }
          ]
        }
        
      • Attach the policy to the SNS topic

        aws sns set-topic-attributes --topic-arn arn:aws:sns:region:account-id:MySNSTopic --attribute-name Policy --attribute-value file://sns-policy.json
        
      • Good to know, to save time with the policy definition (the policy itself applied without errors, but the link between the S3 bucket and the SNS topic could not be established, and it took a long time to track down the cause):


        • use the uppercase AWS: prefix (AWS:SourceAccount, AWS:SourceArn) in the condition keys


        • use SourceAccount instead of SourceOwner


    7. Configure S3 to Send Notifications to SNS

      • Create a bucket notification configuration file notification.json with the rule below, which makes the S3 bucket fire an event when an object is created and send the notification to the SNS topic.

        {
          "TopicConfigurations": [
            {
              "TopicArn": "arn:aws:sns:region:account-id:MySNSTopic",
              "Events": ["s3:ObjectCreated:*"]
            }
          ]
        }
        
      • Attach the configuration to the S3 bucket

        aws s3api put-bucket-notification-configuration --bucket <BUCKET-NAME> --notification-configuration file://notification.json
        

        The result can also be checked in the bucket's event notification settings in the S3 console.





    8. Test the configuration: upload a file and check your email for the notification.

      • touch my-local-file.txt
      • aws s3 cp my-local-file.txt s3://<BUCKET-NAME>/


  7. Data Transfer Acceleration:

    S3 Transfer Acceleration speeds up uploads by routing traffic through AWS edge locations using the CloudFront network. Useful for globally distributed users with high-latency connections. To use Transfer Acceleration, your S3 bucket must be created in a region that supports this feature.


    • Uses a distinct Transfer Acceleration endpoint:
      • Format: bucket-name.s3-accelerate.amazonaws.com
    • Bucket name must follow DNS-compliant naming conventions.
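
    Enabling acceleration and deriving the accelerate endpoint can be sketched as follows; the bucket name is a hypothetical example:

    ```shell
    BUCKET="my-demo-bucket"  # hypothetical bucket name
    # enable Transfer Acceleration (requires credentials):
    # aws s3api put-bucket-accelerate-configuration --bucket "$BUCKET" --accelerate-configuration Status=Enabled
    # uploads then target the distinct accelerate endpoint:
    ACCEL_URL="https://${BUCKET}.s3-accelerate.amazonaws.com"
    echo "$ACCEL_URL"
    ```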


  8. Static Website Hosting:

    S3 can host static websites by enabling Static Website Hosting on a bucket. You can configure an index.html and an error.html file, and combine the bucket with Amazon Route 53 for custom domains. Common use cases are marketing websites, landing pages, and documentation portals. Simple, scalable, and cost-effective.


    Try it: create a static website from an s3 bucket

    • Create an s3 bucket

      aws s3api create-bucket --bucket <bucket-name> --region eu-north-1  --create-bucket-configuration LocationConstraint=eu-north-1
      
    • Enable static website hosting

      aws s3 website s3://<bucket-name>/ --index-document index.html --error-document error.html
      
    • Set Bucket Policy for Public Access

      • Create a bucket-policy.json with the following content and replace the bucket-name with your s3 bucket name.

        {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::bucket-name/*"
              }
            ]
          }
        


    • Apply the bucket policy to the s3 bucket.

      aws s3api put-bucket-policy --bucket <bucket-name> --policy file://bucket-policy.json
      
    • Upload the content of the static website (for example files index.html and error.html)


    • Verify static website hosting URL

      aws s3api get-bucket-website --bucket <bucket-name>
      

      The response echoes the website configuration you set (the index and error documents).

    • Browse the website using the website endpoint, typically http://<bucket-name>.s3-website.<region>.amazonaws.com (some regions use a hyphen instead: s3-website-<region>).

  9. S3 Object Lock and Glacier Vault Lock:

    S3 Object Lock enables write-once-read-many (WORM) protection, preventing objects from being deleted or modified for a fixed retention period. Glacier Vault Lock enforces compliance controls for Glacier storage.

| Scenario | Recommended Service | Why? |
| --- | --- | --- |
| Short-term retention with compliance needs | S3 Object Lock | Provides object-level retention and flexibility for managing versions. |
| Protecting objects from accidental deletion | S3 Object Lock | Can easily be applied to specific objects for day-to-day needs. |
| Regulatory archival storage (decades-long) | Glacier Vault Lock | Designed for long-term, high-security archival compliance. |
| Immutable backups for ransomware recovery | S3 Object Lock | Combines immutability with faster access than Glacier Vault Lock. |
| Archiving corporate records or tax data | Glacier Vault Lock | Cost-efficient and secure for storing large volumes of archival data. |
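
As a sketch, Object Lock must be enabled when the bucket is created, after which a default retention rule can be set; the retention mode and period below are illustrative, and the bucket name is a placeholder:

```shell
# Object Lock can only be enabled at bucket creation (requires credentials):
# aws s3api create-bucket --bucket <BUCKET-NAME> --region eu-north-1 \
#   --create-bucket-configuration LocationConstraint=eu-north-1 \
#   --object-lock-enabled-for-bucket
# illustrative default retention: protect every new object version for 30 days
cat > object-lock.json <<'EOF'
{
  "ObjectLockEnabled": "Enabled",
  "Rule": {
    "DefaultRetention": { "Mode": "COMPLIANCE", "Days": 30 }
  }
}
EOF
# aws s3api put-object-lock-configuration --bucket <BUCKET-NAME> --object-lock-configuration file://object-lock.json
```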

  10. Bonus Knowledge:

    Performance Optimization: S3 scales automatically and supports very high request throughput, so there is no need to partition data across buckets for performance.

    Replication: Use Cross-Region Replication (CRR) or Same-Region Replication (SRR) to replicate objects automatically between buckets.

    Multi-Part Upload: For large files (over 100 MB), break uploads into parts for faster and more resilient uploads.
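
    With the high-level aws s3 commands, multipart upload happens automatically once a file crosses the configured threshold; a sketch (the file name and threshold here are illustrative):

    ```shell
    # create a 100 MB sample file locally (name is illustrative)
    dd if=/dev/zero of=big-file.bin bs=1M count=100 2>/dev/null
    # the high-level CLI splits uploads into parts above this threshold:
    # aws configure set default.s3.multipart_threshold 100MB
    # aws s3 cp big-file.bin s3://<BUCKET-NAME>/
    wc -c < big-file.bin
    ```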


  11. Only when you try will you know:

    Try to create a bucket, and you will discover that not all names are valid:

    aws s3api create-bucket --bucket <BUCKET-NAME> --region eu-north-1 --create-bucket-configuration LocationConstraint=eu-north-1
    

    Rules for Naming S3 Buckets:

    1. Globally Unique:
      • The bucket name must be globally unique across all AWS accounts (i.e., no two buckets can have the same name).
    2. Length:
      • The bucket name must be between 3 and 63 characters in length.
    3. Allowed Characters:
      • The bucket name can contain lowercase letters, numbers, hyphens (-), and periods (.).
      • Uppercase letters, spaces, and other special characters (like underscores _) are not allowed.
    4. Bucket Name Format:
      • The name must start and end with a lowercase letter or number.
      • It must not contain consecutive periods (..).
      • Periods are best avoided anyway, because they interfere with SSL certificate verification for virtual-hosted-style HTTPS requests.
    5. No IP Address Format:
      • The bucket name cannot be in the format of an IP address (e.g., 192.168.1.1).
    6. DNS-Compatibility:
      • Bucket names must be DNS-compliant because S3 uses the name as part of the bucket's URL; the name must be compatible with domain names (RFC 1123).

      • For example, my-bucket-name is valid, but my..bucket is invalid due to consecutive periods.
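
    The rules above can be collected into a small validation helper; a sketch (the function name is mine, and note it accepts periods, which S3 allows even though they are discouraged):

    ```shell
    # hypothetical helper: checks the basic S3 bucket naming rules
    is_valid_bucket_name() {
      local name="$1"
      # 3-63 characters
      [[ ${#name} -ge 3 && ${#name} -le 63 ]] || return 1
      # lowercase letters, digits, hyphens, periods; alphanumeric at both ends
      [[ "$name" =~ ^[a-z0-9][a-z0-9.-]*[a-z0-9]$ ]] || return 1
      # no consecutive periods
      [[ "$name" == *..* ]] && return 1
      # must not look like an IP address
      [[ "$name" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]] && return 1
      return 0
    }

    is_valid_bucket_name "my-bucket-name" && echo "valid"
    is_valid_bucket_name "my..bucket" || echo "invalid"
    ```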


    Private by default? You can check by trying to browse the bucket's URL from a browser: you cannot see the bucket's content.

    After the bucket is created, you can also verify in the bucket's permissions settings that Block all public access is ON.


    In conclusion, mastering Amazon S3's essential features—from storage classes to encryption and event notifications—lays a strong foundation for AWS Solutions Architects at the associate level. Whether you’re managing data, optimizing costs, or securing your storage, understanding S3’s capabilities will help you design scalable, efficient, and compliant cloud solutions. As you continue to explore AWS, these fundamentals will guide your approach to building resilient, cost-effective cloud architectures.