January 20, 2018

Scripted S3 Site Deployment with AWS CLI and PowerShell, Part 1

Amazon Web Services, through its Simple Storage Service (S3), provides an inexpensive and flexible solution for hosting static websites. These would normally be sites that are developed in a local environment (using a generator such as Hugo or Jekyll), then pushed to either a web server or - in this case - a cloud-based storage platform configured to make the generated pages public. This post discusses a scripted approach to building, from scratch, an S3 bucket configured to host a website.

AWS CLI

Before doing anything that entails making scripted requests to AWS resources, the AWS Command Line Interface will be required on your system. Refer to Amazon's documentation for more details.
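If you haven't set up credentials yet, the interactive aws configure command stores an access key pair and a default region for you. The values shown below are placeholders, not real keys:

```powershell
aws configure
#   AWS Access Key ID [None]: AKIA................
#   AWS Secret Access Key [None]: ....................
#   Default region name [None]: us-east-1
#   Default output format [None]: json

# confirm the CLI can see your profile settings
aws configure list
```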

Once you have the AWS CLI configured, with keys set up for an account that has permission to manage S3 resources and policies, we can begin writing a script to build a new S3 bucket for site hosting. The finished project can be found at this Github repository.

Stubbing the Cmdlet

For this example, we'll be building a PowerShell script that accepts the name of a bucket you'd like to create. We also specify a parameter to indicate which path on our local system to copy files from.
  1. First, a check is performed to see whether the bucket exists. If no bucket exists with the provided name, the script proceeds.
  2. Next, we run the AWS CLI command to create the bucket, checking that the operation succeeded.
  3. Finally, we apply a default policy so the bucket's contents are publicly available. While S3 does offer a highly programmable way to define bucket resource access and permissions, we can assume for our purposes that all parts of this website should be publicly readable. Default contents for a website can be uploaded as well.

Using the text editor of our choice, we create a new file, New-S3Site.ps1, and add the following contents. While any editor will do, the PowerShell Integrated Scripting Environment (ISE) gives us the added bonus of being able to run our script while writing it.

    [CmdletBinding()]
    Param(
        # Name of the bucket being configured  
        [Parameter(Mandatory=$true,Position=1)]
        [String]
        $Name,
        # Name of the path on the local computer to copy files from, assume current working directory by default
        [String]
        $Path = $pwd.Path
    )
    $bucketName = "s3://$($Name)"
    # make sure the bucket doesn't already exist, checking for an ErrorRecord
    # if your bucket was created successfully, proceed to configure/upload defaults
        # configure default minimal viable policy, using Set-Content
            # upload your site contents

Check for the S3 Bucket

The Name parameter's value is used immediately to create a legal S3 URL, prefixed with the value s3://. We can pass this new variable, $bucketName, to our first AWS CLI command, aws s3 ls. Much as you'd expect from ls on a Unix system, this attempts to list the contents of the bucket.
...
    $bucketName = "s3://$($Name)"
    # make sure the bucket doesn't already exist
    aws s3 ls $bucketName
...

While this gets us our check very quickly, to better shape the behavior of our script we should capture the result of running this command in a variable that can then be used to determine the next step to take, rather than simply printing it to the screen. If you run this command with a nonexistent bucket name, the ls call fails with an error.


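As a hypothetical illustration (the exact wording varies by AWS CLI version), listing a bucket that doesn't exist produces output along these lines:

```powershell
PS> aws s3 ls s3://some-nonexistent-bucket-name
# An error occurred (NoSuchBucket) when calling the ListObjectsV2 operation:
# The specified bucket does not exist
```

Note that the AWS CLI writes this message to standard error, not standard out, which is what the next step takes advantage of.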
Now let’s capture the contents of this error message by doing two things:

  1. Redirect the command's output from standard error to standard out
  2. Store the output in a new variable
...
    $bucketName = "s3://$($Name)"
    # make sure the bucket doesn't already exist
    $result = aws s3 ls $bucketName 2>&1
...

The $result variable will now contain an ErrorRecord object if ls fails. (Strictly speaking, it's an array with one object of type ErrorRecord inside it.) If the result of the ls command is an ErrorRecord, we know the bucket does not exist and it's safe to create one. Conversely, if ls succeeds, $result will either hold the bucket listing or be null, since an existing but empty bucket produces no output at all. In either of those cases, a bucket with that name already exists, and we should stop the script because there's a bucket there we don't want to tamper with!

  1. Wrap the code that checks for the bucket in a try block
  2. In the try block, check the type of the variable $result
  3. If $result is null, or its first element is NOT an ErrorRecord, throw an exception. Handle the exception in a catch block immediately after the try, where we exit from our script.

    $bucketName = "s3://$($Name)"
    # make sure the bucket doesn't already exist, checking for an ErrorRecord
    try {
        $result = aws s3 ls $bucketName 2>&1
        if ($null -eq $result -or $result[0].GetType().Name -ne "ErrorRecord") {
            throw [System.Exception]::new("The specified bucket $bucketName already exists.");
        }
    } catch {
        # if the bucket already exists, don't do anything and exit noisily!
        Write-Error $PSItem.Exception.Message
        Exit
    } 

Create the S3 Bucket

The next AWS CLI command creates our bucket, again using the constructed $bucketName variable. The command is aws s3 mb, where mb is short for “make bucket.”

The output of this command is also written to standard error, so we again use redirection to standard out and place the contents in a variable, which we then check for a specific pattern. If the mb operation succeeds, the contents of the variable will contain “make_bucket”. Here’s how we check for that:

    $bucketName = "s3://$($Name)"
    # make sure the bucket doesn't already exist, checking for an ErrorRecord
    try {
        $result = aws s3 ls $bucketName 2>&1
        if ($null -eq $result -or $result[0].GetType().Name -ne "ErrorRecord") {
            throw [System.Exception]::new("The specified bucket $bucketName already exists.");
        }
    } catch {
        # if the bucket already exists, don't do anything and exit noisily!
        Write-Error $PSItem.Exception.Message
        Exit
    } 

    $bucket = aws s3 mb $bucketName 2>&1

    # if your bucket was created successfully, proceed to configure/upload defaults
    if ($bucket | Select-String -Pattern "make_bucket") {
        # configure default minimal viable policy, using Set-Content
            # upload your site contents
    }
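The pattern check relies on the confirmation message mb prints on success. For a hypothetical bucket named acme.org, that output looks roughly like this (the exact format can vary between CLI versions):

```powershell
PS> aws s3 mb s3://acme.org
# make_bucket: acme.org
```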

Configure a Public Policy for the Bucket

To configure a policy that makes all the files in our bucket publicly readable, we allow the GetObject action for any principal (denoted by "*") on all resources under our bucket, which for a hypothetical bucket named acme.org would be the Amazon Resource Name (ARN) arn:aws:s3:::acme.org/*. We can use the AWS Policy Generator to generate such a policy for us.

{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadForGetBucketObjects",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME/*"
            ]
        }
    ]
}

We paste the resulting policy into a file called static-site-policy.json, in the same directory as our script. This file becomes our template; all we need to do to make it work for our new bucket is replace the placeholder BUCKET_NAME with the bucket name we passed in as a parameter. At this point, we have the following as our complete script:

    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$true,Position=1)]
        [String]
        $Name,
        # Name of the path on the local computer to copy files from, assume current working directory by default
        [String]
        $Path = $pwd.Path
    )
    $bucketName = "s3://$($Name)"
    # make sure the bucket doesn't already exist, checking for an ErrorRecord
    try {
        $result = aws s3 ls $bucketName 2>&1
        if ($null -eq $result -or $result[0].GetType().Name -ne "ErrorRecord") {
            throw [System.Exception]::new("The specified bucket $bucketName already exists.");
        }
    } catch {
        # if the bucket already exists, don't do anything and exit noisily!
        Write-Error $PSItem.Exception.Message
        Exit
    } 

    $bucket = aws s3 mb $bucketName 2>&1

    # if your bucket was created successfully, proceed to configure/upload defaults
    if ($bucket | Select-String -Pattern "make_bucket") {
        # configure default minimal viable policy, using Set-Content
        (Get-Content .\static-site-policy.json).replace('BUCKET_NAME', $Name) | Set-Content .\policy.json
        if (Test-Path -Path "policy.json") {
            # apply the generated policy to the bucket
            Write-Host (Get-Content "policy.json")
            aws s3api put-bucket-policy --bucket $Name --policy file://policy.json
            # upload your site contents
            .\Upload-S3Site.ps1 -Name $Name -Path $Path
        }
    }

At this point, our complete script can create a bucket (if one doesn’t already exist) using a name we provide, generate a policy for the bucket by swapping out the BUCKET_NAME placeholder with PowerShell’s Get-Content and Set-Content cmdlets, and apply that policy with aws s3api put-bucket-policy, which reads the rendered policy file, policy.json.
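To tie it together, here is a hypothetical invocation; the bucket name and local path below are placeholders (remember that S3 bucket names must be globally unique):

```powershell
# create the bucket, apply the public-read policy, and hand off to the upload script
.\New-S3Site.ps1 -Name acme.org -Path C:\sites\acme\public

# optionally, confirm the policy was applied
aws s3api get-bucket-policy --bucket acme.org
```

If the bucket already exists, the script instead exits noisily with the message from our catch block.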

In the next post in this series, we'll discuss the final piece of our puzzle - the Upload-S3Site script, which uploads all of our website’s files to the bucket. The Path parameter we defined here will play an important role in that second script.
