AWS S3 API Examples and Best Practices

Amazon Simple Storage Service (S3) is a highly scalable, durable, and secure object storage service offered by AWS. It’s a cornerstone of many cloud-based applications, providing a reliable way to store and retrieve any amount of data, at any time, from anywhere on the web. The power of S3 lies not only in its infrastructure but also in its robust and versatile API (Application Programming Interface), which allows developers to interact with S3 programmatically.

This article delves deep into the AWS S3 API, providing practical examples using various SDKs (Software Development Kits) and the AWS CLI (Command Line Interface), along with a detailed discussion of best practices to ensure security, performance, and cost-effectiveness.

1. Introduction to the AWS S3 API

The AWS S3 API is primarily a RESTful API, meaning it uses standard HTTP methods (GET, PUT, POST, DELETE, HEAD) to interact with S3 resources. While you can interact with the S3 API directly using raw HTTP requests, it’s far more common and efficient to use one of the AWS SDKs or the AWS CLI. These tools handle the complexities of authentication, request signing, error handling, and retries, making development significantly easier.
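
To make the REST underpinnings concrete, the sketch below fetches an object with a plain HTTP GET. It assumes a hypothetical bucket and key that are publicly readable; requests to private objects must additionally carry a Signature Version 4 signature, which the SDKs and CLI compute for you:

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical, publicly readable bucket and key, purely for illustration.
bucket = "my-public-bucket"
key = "hello.txt"

# S3 maps each object to a URL; a GET on that URL returns the object's bytes.
response = requests.get(f"https://{bucket}.s3.amazonaws.com/{key}")
response.raise_for_status()
print(response.content)

# HEAD returns only the metadata headers (Content-Type, Content-Length, ETag, ...).
head = requests.head(f"https://{bucket}.s3.amazonaws.com/{key}")
print(head.headers.get("Content-Type"), head.headers.get("Content-Length"))
```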

Key S3 Concepts:

Before diving into API examples, it’s crucial to understand the fundamental concepts of S3:

  • Buckets: Buckets are the top-level containers for storing objects in S3. Bucket names are globally unique (across all AWS accounts). You can think of buckets as root-level directories.
  • Objects: Objects are the fundamental entities stored in S3. Each object consists of data (the file itself) and metadata (information about the object, such as content type, size, and last modified date).
  • Keys: The key is the unique identifier for an object within a bucket. It acts like a filename, but S3 treats it as a flat namespace (there are no real folders, although you can use prefixes to simulate a folder structure; see the short listing sketch after this list). The combination of a bucket name and a key uniquely identifies an object in S3.
  • Regions: S3 buckets are created in a specific AWS Region. Choosing the right region is important for latency, cost, and compliance reasons.
  • Access Control Lists (ACLs): ACLs are a legacy access control mechanism. While still supported, AWS recommends using bucket policies and IAM policies for more granular and flexible access control.
  • Bucket Policies: Bucket policies are JSON-based documents that define permissions for a specific bucket. They allow you to grant access to other AWS accounts, services, or even the public.
  • IAM Policies: IAM (Identity and Access Management) policies are used to control access to AWS resources, including S3. You can attach IAM policies to users, groups, or roles to grant specific permissions.
  • Versioning: S3 Versioning allows you to keep multiple versions of an object in the same bucket. This protects against accidental deletions or overwrites.
  • Lifecycle Policies: Lifecycle policies automate the transition of objects to different storage classes (e.g., from Standard to Glacier) or their deletion based on predefined rules. This is crucial for cost optimization.
  • Storage Classes: S3 offers various storage classes designed for different use cases and cost requirements. Key storage classes include:
    • S3 Standard: For frequently accessed data.
    • S3 Intelligent-Tiering: Automatically transitions objects between frequent, infrequent, and archive access tiers based on access patterns.
    • S3 Standard-IA (Infrequent Access): For data accessed less frequently but requiring rapid access when needed.
    • S3 One Zone-IA: Similar to Standard-IA but stores data in a single Availability Zone, offering lower cost but reduced redundancy.
    • S3 Glacier Instant Retrieval: Archive storage that supports millisecond retrieval times.
    • S3 Glacier Flexible Retrieval: Cost-effective archive storage with retrieval times ranging from minutes to hours.
    • S3 Glacier Deep Archive: The lowest-cost storage class, designed for long-term data archiving with retrieval times of 12 hours or more.
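
To make the flat namespace concrete, here is a short listing sketch using Boto3 (introduced in section 4). Keys such as reports/2024/jan.csv are plain strings, but listing with a Prefix and Delimiter groups them like folders; the bucket name and keys here are hypothetical:

```python
import boto3

s3 = boto3.client('s3')

# Hypothetical bucket containing keys like "reports/2024/jan.csv" and
# "reports/2023/dec.csv" -- single strings, not nested directories.
response = s3.list_objects_v2(
    Bucket='my-unique-bucket-name',
    Prefix='reports/',   # only keys beginning with "reports/"
    Delimiter='/'        # group the next path segment, like subdirectories
)

# CommonPrefixes plays the role of subdirectory names: "reports/2023/", "reports/2024/"
for prefix in response.get('CommonPrefixes', []):
    print(prefix['Prefix'])

# Contents holds any objects sitting directly under "reports/" itself.
for obj in response.get('Contents', []):
    print(obj['Key'])
```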

2. Setting Up Your Environment

Before you can use the S3 API, you need to set up your environment. This typically involves:

  1. AWS Account: You need an active AWS account.
  2. IAM User: Create an IAM user with the necessary permissions to access S3. It’s best practice not to use your root AWS account credentials for programmatic access. The IAM user should have, at minimum, permissions like s3:ListBucket, s3:GetObject, s3:PutObject, and s3:DeleteObject, depending on your application’s needs (a minimal example policy is sketched after this list).
  3. AWS Credentials: Configure your environment with the IAM user’s access key ID and secret access key. You can do this in several ways:
    • Environment Variables: Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
    • AWS CLI Configuration: Use the aws configure command to set up a profile.
    • IAM Roles (for EC2 instances, Lambda functions, etc.): If your code is running on an AWS service, it’s best to use IAM roles. The role provides temporary credentials, eliminating the need to manage access keys directly.
  4. Install an SDK or the AWS CLI: Choose the appropriate SDK for your programming language (e.g., Boto3 for Python, AWS SDK for Java, AWS SDK for .NET, etc.) or install the AWS CLI.
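
For reference, a minimal IAM policy granting exactly those permissions might look like the following; the bucket name is a placeholder, and your application may need a narrower or broader set of actions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListTheBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-unique-bucket-name"
    },
    {
      "Sid": "ReadWriteObjects",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-unique-bucket-name/*"
    }
  ]
}
```

Note that s3:ListBucket applies to the bucket ARN itself, while the object-level actions apply to the objects inside it (the /* suffix).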

3. AWS CLI Examples

The AWS CLI provides a command-line interface for interacting with S3. It’s excellent for quick tasks, scripting, and automation.
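
For example, a small backup script might mirror a directory into a dated prefix; the bucket name and paths below are placeholders:

```bash
#!/usr/bin/env bash
# Hypothetical nightly backup: mirror a local directory into a dated S3 prefix.
set -euo pipefail

BUCKET="my-unique-bucket-name"   # placeholder bucket
SOURCE_DIR="/var/app/data"       # placeholder local directory
PREFIX="backups/$(date +%Y-%m-%d)"

# --delete keeps the S3 prefix an exact mirror of the local directory.
aws s3 sync "$SOURCE_DIR" "s3://$BUCKET/$PREFIX/" --delete
```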

  • Create a Bucket:

    ```bash
    aws s3 mb s3://my-unique-bucket-name --region us-east-1
    ```

    • aws s3 mb: The command to make a bucket.
    • s3://my-unique-bucket-name: The globally unique bucket name.
    • --region us-east-1: Specifies the AWS Region where the bucket will be created.
  • List Buckets:

    ```bash
    aws s3 ls
    ```

  • Upload a File:

    ```bash
    aws s3 cp my-local-file.txt s3://my-unique-bucket-name/my-object-key.txt
    ```

    • aws s3 cp: The command to copy files.
    • my-local-file.txt: The local file to upload.
    • s3://my-unique-bucket-name/my-object-key.txt: The destination in S3, including the bucket name and object key.
  • Download a File:

    ```bash
    aws s3 cp s3://my-unique-bucket-name/my-object-key.txt my-local-file.txt
    ```

  • Delete an Object:

    ```bash
    aws s3 rm s3://my-unique-bucket-name/my-object-key.txt
    ```

  • Delete a Bucket (must be empty):

    ```bash
    aws s3 rb s3://my-unique-bucket-name
    ```

    If the bucket has files, you can use:
    ```bash
    aws s3 rb s3://my-unique-bucket-name --force
    ```

  • List Objects in a Bucket:

    ```bash
    aws s3 ls s3://my-unique-bucket-name/
    ```

    To list with a prefix (simulating a folder):
    ```bash
    aws s3 ls s3://my-unique-bucket-name/my-prefix/
    ```

  • Sync a local directory with an S3 bucket:
    ```bash
    aws s3 sync . s3://my-unique-bucket-name/
    ```

    This command synchronizes the contents of the current local directory (".") with the specified S3 bucket. It uploads new or modified files; by default it does not delete files from the bucket that are absent locally. Add the --delete flag (aws s3 sync . s3://my-unique-bucket-name/ --delete) to remove objects in the bucket that don’t exist locally.

  • Set Object Metadata:

    ```bash
    aws s3 cp s3://my-unique-bucket-name/my-object-key.txt s3://my-unique-bucket-name/my-object-key.txt --metadata "key1=value1,key2=value2" --content-type "text/plain" --metadata-directive REPLACE
    ```

    This example copies the object onto itself while replacing its metadata and content type; the --metadata-directive REPLACE flag is required so S3 uses the new metadata instead of copying the old. You can also set other headers like --cache-control, --content-disposition, --content-encoding, etc.

  • Enable Versioning:

    ```bash
    aws s3api put-bucket-versioning --bucket my-unique-bucket-name --versioning-configuration Status=Enabled
    ```

  • Get Object (with Version ID):

    ```bash
    aws s3api get-object --bucket my-unique-bucket-name --key my-object-key.txt --version-id <version-id> my-local-file.txt
    ```

  • Create a Pre-signed URL (for temporary access):

    ```bash
    aws s3 presign s3://my-unique-bucket-name/my-object-key.txt --expires-in 3600
    ```

    This generates a pre-signed URL that allows anyone with the URL to download the object for a limited time (3600 seconds, or 1 hour, in this case). This is extremely useful for sharing private objects securely without granting permanent access.

  • Configure a Static Website:

    ```bash
    aws s3 website s3://my-unique-bucket-name/ --index-document index.html --error-document error.html
    ```

    This command configures the S3 bucket to serve a static website. index.html is the default page, and error.html is served for 404 errors. You’ll also need to set a bucket policy to allow public read access.

  • Set a Bucket Policy (using a JSON file):

    ```bash
    aws s3api put-bucket-policy --bucket my-unique-bucket-name --policy file://bucket-policy.json
    ```

    Where bucket-policy.json contains the JSON policy document. For example, to make the bucket publicly readable (for a static website):

    ```json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "PublicReadGetObject",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::my-unique-bucket-name/*"
        }
      ]
    }
    ```

  • Set a Lifecycle Configuration (using a JSON file):

    ```bash
    aws s3api put-bucket-lifecycle-configuration --bucket my-unique-bucket-name --lifecycle-configuration file://lifecycle.json
    ```

    Where lifecycle.json contains the lifecycle rules. For example, to transition objects to Glacier after 30 days and delete them after 365 days:

    ```json
    {
      "Rules": [
        {
          "ID": "TransitionToGlacierAndDelete",
          "Filter": {
            "Prefix": ""
          },
          "Status": "Enabled",
          "Transitions": [
            {
              "Days": 30,
              "StorageClass": "GLACIER"
            }
          ],
          "Expiration": {
            "Days": 365
          }
        }
      ]
    }
    ```

4. Python (Boto3) Examples

Boto3 is the AWS SDK for Python. It provides a clean, object-oriented interface to interact with S3.

```python
import boto3
import botocore

# Create an S3 client
s3 = boto3.client('s3')

# Create a bucket.
# Note: for us-east-1 you must omit CreateBucketConfiguration; for any other
# region, pass CreateBucketConfiguration={'LocationConstraint': '<region>'}.
try:
    s3.create_bucket(Bucket='my-unique-bucket-name-python')
    print("Bucket created successfully")
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == 'BucketAlreadyOwnedByYou':
        print("Bucket already exists and is owned by you.")
    elif e.response['Error']['Code'] == 'BucketAlreadyExists':
        print("Bucket already exists (and is owned by someone else).")
    else:
        print(f"Error creating bucket: {e}")

# List buckets
response = s3.list_buckets()
print("Existing buckets:")
for bucket in response['Buckets']:
    print(f"  {bucket['Name']}")

# Upload a file
try:
    with open('my-local-file.txt', 'rb') as data:
        s3.upload_fileobj(data, 'my-unique-bucket-name-python', 'my-object-key.txt')
    print("File uploaded successfully")
except FileNotFoundError:
    print("The file was not found")
except botocore.exceptions.ClientError as e:
    print(f"Error uploading file: {e}")

# Upload a file with additional metadata
try:
    with open('image.jpg', 'rb') as data:
        s3.upload_fileobj(data, 'my-unique-bucket-name-python', 'image.jpg',
                          ExtraArgs={'ContentType': 'image/jpeg', 'Metadata': {'mykey': 'myvalue'}})
    print("File uploaded with metadata successfully")
except FileNotFoundError:
    print("The file was not found")
except botocore.exceptions.ClientError as e:
    print(f"Error uploading file: {e}")

# Download a file
try:
    with open('downloaded-file.txt', 'wb') as data:
        s3.download_fileobj('my-unique-bucket-name-python', 'my-object-key.txt', data)
    print("File downloaded successfully")
except botocore.exceptions.ClientError as e:
    print(f"Error downloading file: {e}")

# List objects in a bucket (list_objects_v2 is the current listing API;
# see the pagination note in the method list below)
response = s3.list_objects_v2(Bucket='my-unique-bucket-name-python')
print("Objects in bucket:")
if 'Contents' in response:
    for obj in response['Contents']:
        print(f"  {obj['Key']}")

# List objects with a prefix
response = s3.list_objects_v2(Bucket='my-unique-bucket-name-python', Prefix='my-prefix/')
print("\nObjects with prefix 'my-prefix/':")
if 'Contents' in response:
    for obj in response['Contents']:
        print(f"  {obj['Key']}")

# Delete an object
try:
    s3.delete_object(Bucket='my-unique-bucket-name-python', Key='my-object-key.txt')
    print("Object deleted successfully")
except botocore.exceptions.ClientError as e:
    print(f"Error deleting object: {e}")

# Generate a pre-signed URL
url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': 'my-unique-bucket-name-python',
        'Key': 'image.jpg'
    },
    ExpiresIn=3600
)
print(f"\nPre-signed URL: {url}")

# Enable versioning on a bucket
try:
    s3.put_bucket_versioning(
        Bucket='my-unique-bucket-name-python',
        VersioningConfiguration={'Status': 'Enabled'}
    )
    print("Versioning enabled successfully")
except botocore.exceptions.ClientError as e:
    print(f"Error enabling versioning: {e}")

# Get an object by version ID (assuming versioning is enabled)
try:
    # First, you need a version ID. You can list versions:
    versions = s3.list_object_versions(Bucket='my-unique-bucket-name-python', Prefix='image.jpg')

    if 'Versions' in versions:
        latest_version_id = versions['Versions'][0]['VersionId']  # The latest version is first

        with open('downloaded-image-version.jpg', 'wb') as data:
            s3.download_fileobj('my-unique-bucket-name-python', 'image.jpg', data,
                                ExtraArgs={'VersionId': latest_version_id})
        print(f"Downloaded image with version ID: {latest_version_id}")
    else:
        print("No versions found.")

except botocore.exceptions.ClientError as e:
    print(f"Error getting object version: {e}")

# Delete a bucket (must be empty)
# IMPORTANT: empty the bucket first!
try:
    # List and delete all objects, object versions, and delete markers
    response = s3.list_object_versions(Bucket='my-unique-bucket-name-python')
    if 'Versions' in response:
        for version in response['Versions']:
            s3.delete_object(Bucket='my-unique-bucket-name-python',
                             Key=version['Key'], VersionId=version['VersionId'])
    if 'DeleteMarkers' in response:
        for marker in response['DeleteMarkers']:
            s3.delete_object(Bucket='my-unique-bucket-name-python',
                             Key=marker['Key'], VersionId=marker['VersionId'])

    # Now delete the bucket
    s3.delete_bucket(Bucket='my-unique-bucket-name-python')
    print("Bucket deleted successfully")

except botocore.exceptions.ClientError as e:
    print(f"Error deleting bucket: {e}")

# Create a pre-signed POST (for uploading directly from a browser)
post_url = s3.generate_presigned_post(
    Bucket='my-unique-bucket-name-python',
    Key='uploads/${filename}',  # ${filename} is substituted by S3 at upload time
    Fields={"acl": "public-read", "Content-Type": "image/jpeg"},  # Example fields
    Conditions=[
        {"acl": "public-read"},
        {"Content-Type": "image/jpeg"},
        ["content-length-range", 1024, 10485760]  # Limit file size (1 KB - 10 MB)
    ],
    ExpiresIn=3600
)

print(f"\nPre-signed POST: {post_url}")

# This outputs a dictionary with 'url' and 'fields'. Use them in an HTML form,
# roughly like this (one hidden <input> per entry in post_url['fields']):
#
#   <form action="{post_url['url']}" method="post" enctype="multipart/form-data">
#     <!-- hidden inputs for each key/value in post_url['fields'] -->
#     <input type="file" name="file" />
#     <input type="submit" value="Upload" />
#   </form>

# Using the S3 Resource (higher-level abstraction)
s3_resource = boto3.resource('s3')
bucket = s3_resource.Bucket('my-unique-bucket-name-python')  # Assumes the bucket exists

# Upload using the resource
try:
    bucket.upload_file('my-local-file.txt', 'my-object-key-resource.txt')
    print("File uploaded using resource successfully")
except FileNotFoundError:
    print("The file was not found (resource)")
except botocore.exceptions.ClientError as e:
    print(f"Error uploading file using resource: {e}")

# Iterate through objects using the resource
print("Objects in bucket (using resource):")
for obj in bucket.objects.all():
    print(f"  {obj.key}")

# Iterate through objects with a prefix using the resource
print("\nObjects with prefix 'my-prefix/' (using resource):")
for obj in bucket.objects.filter(Prefix='my-prefix/'):
    print(f"  {obj.key}")

# Download a file using the resource
try:
    bucket.download_file('my-object-key-resource.txt', 'downloaded-resource.txt')
    print("File downloaded using resource successfully.")
except botocore.exceptions.ClientError as e:
    print(f"Error downloading file using resource: {e}")
```

Key Boto3 Methods:

  • boto3.client('s3'): Creates a low-level S3 client. This gives you direct access to all S3 API operations.
  • boto3.resource('s3'): Creates a higher-level S3 resource object. This provides a more object-oriented and convenient way to interact with S3.
  • create_bucket(): Creates a new bucket.
  • list_buckets(): Lists all buckets owned by the authenticated user.
  • upload_fileobj(): Uploads a file-like object (e.g., an open file) to S3.
  • upload_file(): Uploads a local file to S3 (using the resource object).
  • download_fileobj(): Downloads an object from S3 to a file-like object.
  • download_file(): Downloads an object from S3 to a local file (using the resource object).
  • list_objects_v2(): Lists objects in a bucket. Use this version instead of the legacy list_objects(). Note that each call returns at most 1,000 objects; use a paginator or the ContinuationToken to retrieve the rest (see the paginator sketch after this list).
  • delete_object(): Deletes an object.
  • delete_bucket(): Deletes a bucket (the bucket must be empty).
  • generate_presigned_url(): Creates a pre-signed URL for a given client method (most commonly get_object).
  • generate_presigned_post(): Creates a pre-signed URL and associated form fields for POST requests (direct uploads from a browser).
  • put_bucket_versioning(): Enables or disables versioning for a bucket.
  • list_object_versions(): Lists the versions of objects in a bucket.
  • put_bucket_policy(): Sets the bucket policy.
  • put_bucket_lifecycle_configuration(): Sets lifecycle rules.
  • Bucket.objects.all(): Iterates through all objects in a bucket (using the resource object).
  • Bucket.objects.filter(Prefix='...'): Iterates through objects with a specific prefix (using the resource object).
  • get_object(): Retrieves the object from S3.
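
Because list_objects_v2() returns at most 1,000 keys per response, listing large buckets is easiest with Boto3’s built-in paginators, which follow the continuation tokens for you. A minimal sketch (bucket and prefix are placeholders):

```python
import boto3

s3 = boto3.client('s3')

# The paginator issues repeated list_objects_v2 calls under the hood,
# following NextContinuationToken until every page has been returned.
paginator = s3.get_paginator('list_objects_v2')

total = 0
for page in paginator.paginate(Bucket='my-unique-bucket-name-python', Prefix='my-prefix/'):
    for obj in page.get('Contents', []):
        print(obj['Key'], obj['Size'])
        total += 1

print(f"{total} objects listed")
```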

5. Java (AWS SDK for Java v2) Examples

The AWS SDK for Java v2 provides a modern and fluent API for interacting with S3.

```java
import software.amazon.awssdk.auth.credentials.EnvironmentVariableCredentialsProvider;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.core.sync.ResponseTransformer;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.*;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;
import software.amazon.awssdk.services.s3.presigner.model.PutObjectPresignRequest;

import java.nio.file.Paths;
import java.time.Duration;

public class S3Example {

public static void main(String[] args) {

    // Create an S3 client (using environment variables for credentials)
    S3Client s3 = S3Client.builder()
            .region(Region.US_EAST_1) // Specify your region
            .credentialsProvider(EnvironmentVariableCredentialsProvider.create())
            .build();

    String bucketName = "my-unique-bucket-name-java";
    String objectKey = "my-object-key.txt";
    String localFilePath = "my-local-file.txt";
    String downloadFilePath = "downloaded-file.txt";

    // Create a bucket
    try {
        CreateBucketRequest createBucketRequest = CreateBucketRequest.builder()
                .bucket(bucketName)
                .build();
        s3.createBucket(createBucketRequest);
        System.out.println("Bucket created successfully");
    } catch (BucketAlreadyExistsException e) {
        System.out.println("Bucket already exists.");
    } catch (S3Exception e) {
        System.err.println("Error creating bucket: " + e.awsErrorDetails().errorMessage());
    }

    // List buckets
    ListBucketsRequest listBucketsRequest = ListBucketsRequest.builder().build();
    ListBucketsResponse listBucketsResponse = s3.listBuckets(listBucketsRequest);
    System.out.println("Existing buckets:");
    for (Bucket bucket : listBucketsResponse.buckets()) {
        System.out.println("  " + bucket.name());
    }

    // Upload a file
    try {
       PutObjectRequest putObjectRequest = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .contentType("text/plain") // Set content type
                .build();

       s3.putObject(putObjectRequest, RequestBody.fromFile(Paths.get(localFilePath)));
        System.out.println("File uploaded successfully");

    } catch (S3Exception e) {
        System.err.println("Error uploading file: " + e.awsErrorDetails().errorMessage());
    }

    // Download a file
    try{
        GetObjectRequest getObjectRequest = GetObjectRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .build();

        s3.getObject(getObjectRequest, ResponseTransformer.toFile(Paths.get(downloadFilePath)));
        System.out.println("File downloaded successfully");
    } catch (S3Exception e) {
        System.err.println("Error downloading file: " + e.awsErrorDetails().errorMessage());
    }

    // List objects in a bucket
    ListObjectsV2Request listObjectsRequest = ListObjectsV2Request.builder()
            .bucket(bucketName)
            .build();
    ListObjectsV2Response listObjectsResponse = s3.listObjectsV2(listObjectsRequest);
    System.out.println("Objects in bucket:");
    for (S3Object obj : listObjectsResponse.contents()) {
        System.out.println("  " + obj.key());
    }

     // List objects with a prefix
     listObjectsRequest = ListObjectsV2Request.builder()
            .bucket(bucketName)
            .prefix("my-prefix/")
            .build();

    listObjectsResponse = s3.listObjectsV2(listObjectsRequest);
    System.out.println("\nObjects with prefix 'my-prefix/':");
    for (S3Object obj : listObjectsResponse.contents()) {
        System.out.println("  " + obj.key());
    }

    // Delete an object
    try {
        DeleteObjectRequest deleteObjectRequest = DeleteObjectRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .build();
        s3.deleteObject(deleteObjectRequest);
        System.out.println("Object deleted successfully");
    } catch (S3Exception e) {
        System.err.println("Error deleting object: " + e.awsErrorDetails().errorMessage());
    }

    // Generate pre-signed URLs. In SDK v2, pre-signing is done with S3Presigner,
    // not the S3Client itself (s3.utilities().getUrl() returns an unsigned URL).
    try (S3Presigner presigner = S3Presigner.builder()
            .region(Region.US_EAST_1)
            .credentialsProvider(EnvironmentVariableCredentialsProvider.create())
            .build()) {

        // Pre-signed URL for GetObject
        GetObjectRequest getForPresign = GetObjectRequest.builder()
                .bucket(bucketName)
                .key("my-object-key.txt")
                .build();
        GetObjectPresignRequest getPresignRequest = GetObjectPresignRequest.builder()
                .signatureDuration(Duration.ofMinutes(10))  // Valid for 10 minutes
                .getObjectRequest(getForPresign)
                .build();
        String presignedGetUrl = presigner.presignGetObject(getPresignRequest).url().toString();
        System.out.println("Pre-signed URL (for GetObject): " + presignedGetUrl);

        // Pre-signed URL for PutObject
        PutObjectRequest putForPresign = PutObjectRequest.builder()
                .bucket(bucketName)
                .key("upload-test.txt")
                .contentType("text/plain")
                .build();
        PutObjectPresignRequest putPresignRequest = PutObjectPresignRequest.builder()
                .signatureDuration(Duration.ofMinutes(10))
                .putObjectRequest(putForPresign)
                .build();
        String presignedPutUrl = presigner.presignPutObject(putPresignRequest).url().toString();
        System.out.println("Pre-signed URL (for PutObject): " + presignedPutUrl);
    } catch (S3Exception e) {
        System.err.println("Error generating pre-signed URLs: " + e.awsErrorDetails().errorMessage());
    }

    // Enable versioning on the bucket
    try {
        PutBucketVersioningRequest putVersioningRequest = PutBucketVersioningRequest.builder()
                .bucket(bucketName)
                .versioningConfiguration(VersioningConfiguration.builder()
                        .status(BucketVersioningStatus.ENABLED)
                        .build())
                .build();
        s3.putBucketVersioning(putVersioningRequest);
        System.out.println("Versioning enabled");
    } catch (S3Exception e) {
        System.err.println("Error enabling versioning: " + e.awsErrorDetails().errorMessage());
    }
    // Delete a bucket (must be empty)
    // IMPORTANT:  Empty the bucket first!
    try{

        // List and delete all objects and object versions
        ListObjectVersionsRequest listVersionsRequest = ListObjectVersionsRequest.builder()
                .bucket(bucketName)
                .build();
        ListObjectVersionsResponse listVersionsResponse = s3.listObjectVersions(listVersionsRequest);

        if(listVersionsResponse.hasVersions()){
            for(ObjectVersion version : listVersionsResponse.versions()){
                 DeleteObjectRequest deleteVersionRequest = DeleteObjectRequest.builder()
                        .bucket(bucketName)
                        .key(version.key())
                        .versionId(version.versionId())
                        .build();
                s3.deleteObject(deleteVersionRequest);
            }
        }

        if(listVersionsResponse.hasDeleteMarkers()){
          for(DeleteMarkerEntry marker : listVersionsResponse.deleteMarkers()){
              DeleteObjectRequest deleteMarkerRequest = DeleteObjectRequest.builder()
                    .bucket(bucketName)
                    .key(marker.key())
                    .versionId(marker.versionId())
                    .build();
              s3.deleteObject(deleteMarkerRequest);
            }
        }
        DeleteBucketRequest deleteBucketRequest = DeleteBucketRequest.builder()
                .bucket(bucketName)
                .build();
        s3.deleteBucket(deleteBucketRequest);
        System.out.println("Bucket deleted successfully");
    } catch (S3Exception e) {
        System.err.println("Error deleting bucket: " + e.awsErrorDetails().errorMessage());
    }


    s3.close(); // Close the client when done
}

}
```

Key Java SDK v2 Concepts:

  • S3Client: The main client for interacting with S3. You configure it with your AWS credentials and region.
  • Request Objects: Each S3 operation has a corresponding request object (e.g., CreateBucketRequest, PutObjectRequest, GetObjectRequest). You use builder methods to configure the request.
  • RequestBody: Represents the body of a PUT request (e.g., RequestBody.fromFile() to upload a file).
  • ResponseTransformer: Handles the response of a GET request (e.g., ResponseTransformer.toFile() to download to a file).
  • S3Utilities: Utility helpers accessed via s3.utilities(), such as getUrl(), which returns an object’s URL (unsigned).
  • S3Presigner: The SDK v2 class for generating pre-signed URLs, via presignGetObject() and presignPutObject() with GetObjectPresignRequest / PutObjectPresignRequest.
  • Error Handling: The SDK throws exceptions for errors (e.g., BucketAlreadyExistsException, S3Exception). You should handle these exceptions appropriately.
  • Asynchronous Operations: The Java SDK v2 also supports asynchronous operations using S3AsyncClient. This is crucial for high-throughput applications (a brief sketch follows this list).
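
As a brief illustration of the asynchronous client, here is a minimal sketch of a non-blocking upload; the bucket and key are placeholders, and credentials/region come from the default provider chains:

```java
import software.amazon.awssdk.core.async.AsyncRequestBody;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.PutObjectResponse;

import java.util.concurrent.CompletableFuture;

public class S3AsyncExample {
    public static void main(String[] args) {
        // Uses the default region and credential provider chains.
        S3AsyncClient s3Async = S3AsyncClient.create();

        PutObjectRequest request = PutObjectRequest.builder()
                .bucket("my-unique-bucket-name-java")  // placeholder bucket
                .key("async-hello.txt")
                .build();

        // putObject returns immediately with a CompletableFuture.
        CompletableFuture<PutObjectResponse> future =
                s3Async.putObject(request, AsyncRequestBody.fromString("hello, async S3"));

        future.whenComplete((response, error) -> {
            if (error != null) {
                System.err.println("Async upload failed: " + error.getMessage());
            } else {
                System.out.println("Async upload done, ETag: " + response.eTag());
            }
        }).join();  // block only so this demo doesn't exit before completion

        s3Async.close();
    }
}
```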

6. .NET (AWS SDK for .NET) Examples

The AWS SDK for .NET provides a comprehensive set of classes for interacting with S3.

```csharp
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;

public class S3Example
{
private const string bucketName = "my-unique-bucket-name-dotnet";
private const string objectKey = "my-object-key.txt";
private const string localFilePath = "my-local-file.txt";
private const string downloadFilePath = "downloaded-file.txt";
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USEast1; // Replace with your region

public static async Task Main(string[] args)
{
    // Create an S3 client (using environment variables for credentials)
    using (var s3Client = new AmazonS3Client(bucketRegion))
    {
        // Create a bucket
        try
        {
            var putBucketRequest = new PutBucketRequest
            {
                BucketName = bucketName,
                UseClientRegion = true // Use the client's region
            };
            await s3Client.PutBucketAsync(putBucketRequest);
            Console.WriteLine("Bucket created successfully");
        }
        catch (AmazonS3Exception e)
        {
            if (e.ErrorCode == "BucketAlreadyOwnedByYou")
            {
                Console.WriteLine("Bucket already exists and is owned by you.");
            }
            else if (e.ErrorCode == "BucketAlreadyExists")
            {
                Console.WriteLine("Bucket already exists (and is owned by someone else).");
            }
            else
            {
                Console.WriteLine($"Error creating bucket: {e.Message}");
            }
        }

        // List buckets
        var listBucketsResponse = await s3Client.ListBucketsAsync();
        Console.WriteLine("Existing buckets:");
        foreach (var bucket in listBucketsResponse.Buckets)
        {
            Console.WriteLine($"  {bucket.BucketName}");
        }

        // Upload a file
        try
        {
           var fileTransferUtility = new TransferUtility(s3Client);
           await fileTransferUtility.UploadAsync(localFilePath, bucketName, objectKey);
           Console.WriteLine("File uploaded successfully.");
        }
        catch (AmazonS3Exception e)
        {
            Console.WriteLine($"Error uploading file: {e.Message}");
        }
        catch (FileNotFoundException)
        {
            Console.WriteLine("The file was not found.");
        }


        // Download a file
        try
        {
            var fileTransferUtility = new TransferUtility(s3Client);
            await fileTransferUtility.DownloadAsync(downloadFilePath, bucketName, objectKey);
            Console.WriteLine("File downloaded successfully.");
        }
        catch (AmazonS3Exception e)
        {
            Console.WriteLine($"Error downloading file: {e.Message}");
        }
    }
}
}
```