February 11 2018

Scripted S3 Site Deployment with AWS CLI and Powershell, Part 2

In this final post of a two-part series, we'll go over an additional script that uploads all of a local static site's contents to a previously configured Amazon Web Services S3 bucket.

Not Stubbing the Commandlet

As previously mentioned, the complete version of the scripts shown here can be found at this GitHub repository. For this example, we'll be building a PowerShell script that accepts the name of the bucket to push files to. We've already provisioned that bucket using New-S3Site.ps1, so you would pass the same name you used when running that script.

It turns out that writing this script is much easier: we're going to use the same AWS S3 CLI command, repeated once per file type, to upload all of our site's contents. This leaves us with multiple, very similar lines where the only differences are the file extension and content type for each kind of file we upload (HTML, CSS, JS, and so on). We'll explain how the bulk of this script works by walking through the command used. The complete script is shown below:

    [CmdletBinding()]
    Param(
        # Name of the bucket being configured  
        [Parameter(Mandatory=$true,Position=1)]
        [String]
        $Name,

        # Name of the path on the local computer to copy files from, assume current working directory by default
        [String]
        $Path = $pwd.Path
    )

    # [https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html] :: upload HTML files
    aws s3 cp $Path s3://$($Name) --exclude "*" --include "*.html" --recursive --metadata-directive REPLACE `
        --content-type text/html --cache-control "public,max-age=604800"

    # Upload stylesheets
    aws s3 cp $Path s3://$($Name) --exclude "*" --include "*.css" --recursive --metadata-directive REPLACE `
        --content-type text/css --cache-control "public,max-age=604800"

    # Upload JavaScript files
    aws s3 cp $Path s3://$($Name) --exclude "*" --include "*.js" --recursive --metadata-directive REPLACE `
        --content-type text/javascript --cache-control "public,max-age=604800"

    # Upload PNG images
    aws s3 cp $Path s3://$($Name) --exclude "*" --include "*.png" --recursive --metadata-directive REPLACE `
        --content-type image/png --cache-control "public,max-age=604800"

    # Upload JPG images
    aws s3 cp $Path s3://$($Name) --exclude "*" --include "*.jpg" --recursive --metadata-directive REPLACE `
        --content-type image/jpeg --cache-control "public,max-age=604800"

    # Set the website configuration for the bucket, setting the index and error pages.
    aws s3 website s3://$($Name) --index-document index.html --error-document error.html

AWS Copy Command

The command aws s3 cp copies files to, from, and between S3 buckets. In addition to copying files, it provides a variety of parameters that change how the copy operation behaves, as well as the attributes and metadata applied to each copied file.

The simplest example we could start with copies the files in the current directory to a bucket we previously provisioned, where the name of the bucket is substituted from the $Name parameter in the required format. To copy the files inside the directory, we also need to include the --recursive parameter:

    aws s3 cp . s3://$($Name) --recursive

Exclude and Include Parameters

Now it's worth noting that this command will copy everything in the current directory to the specified bucket. We need to change that behavior so that the command only copies over files of a specific type. This requires two parameters, --exclude and --include.

The first of these is --exclude, which can accept a wildcard value. This causes the copy command to exclude everything, i.e. all the files inside the current directory. Once we've added this parameter, the second parameter, --include, is used with a wildcard pattern: anything whose filename matches the specified pattern gets picked up and copied to the bucket. The net effect is that only the HTML files inside the current directory are copied:

    aws s3 cp . s3://$($Name) --exclude "*" --include "*.html" --recursive

Content Type and Cache-Control Metadata

The --metadata-directive parameter accepts one of two values, COPY or REPLACE. We're interested in the REPLACE option, which always replaces the metadata values on the bucket's copy of the file with whatever we specify in our command. Here, we're only going to specify two metadata values: one to tell S3 that we're uploading a file with a content type of HTML, and another to set the cache lifetime of the file to 7 days. After 7 days, visitors to our website will need to download the file again.

As a minor formatting concern, we'll break this command into two lines using the backtick (`) character, which PowerShell interprets as a line continuation:

    aws s3 cp . s3://$($Name) --exclude "*" --include "*.html" --recursive --metadata-directive REPLACE `
        --content-type text/html --cache-control "public,max-age=604800"

The --content-type parameter is set to the appropriate MIME type value corresponding to an HTML page. We can get the required value from MDN's "Incomplete list of MIME Types" page. Finally, the --cache-control parameter is set to indicate an expiration of 7 days, expressed in seconds. MDN also has a page describing possible values for this metadata attribute.
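
As a quick sanity check of that number, the 604800 figure is just 7 days expressed in seconds, which is easy to confirm at a PowerShell prompt:

    # 7 days * 24 hours * 60 minutes * 60 seconds
    7 * 24 * 60 * 60    # outputs 604800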

Choosing our File Types

At this point we have a command we can use to upload all the HTML files in the current directory. Copying other types of files over is as easy as copying this line and substituting the file pattern used in --include, as well as the --content-type metadata attribute for that file type (easily found in the previous MIME types list). This leaves us with:

    # [https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html] :: upload HTML files
    aws s3 cp . s3://$($Name) --exclude "*" --include "*.html" --recursive --metadata-directive REPLACE `
        --content-type text/html --cache-control "public,max-age=604800"

    # Upload stylesheets
    aws s3 cp . s3://$($Name) --exclude "*" --include "*.css" --recursive --metadata-directive REPLACE `
        --content-type text/css --cache-control "public,max-age=604800"

    # Upload JavaScript files
    aws s3 cp . s3://$($Name) --exclude "*" --include "*.js" --recursive --metadata-directive REPLACE `
        --content-type text/javascript --cache-control "public,max-age=604800"

    # Upload PNG images
    aws s3 cp . s3://$($Name) --exclude "*" --include "*.png" --recursive --metadata-directive REPLACE `
        --content-type image/png --cache-control "public,max-age=604800"

    # Upload JPG images
    aws s3 cp . s3://$($Name) --exclude "*" --include "*.jpg" --recursive --metadata-directive REPLACE `
        --content-type image/jpeg --cache-control "public,max-age=604800"

AWS Website Command

We're nearing the end of the script. After running it, our bucket will contain all the files copied over from the current working directory. Now we just need to tell AWS which files (already in the bucket) to use as the index and error pages for this static website. Fortunately, there's a command for that too: the aptly named aws s3 website command:

    aws s3 website s3://$($Name) --index-document index.html --error-document error.html

Finishing Touches

To make our commandlet a bit more flexible, as well as cooperative with the other commandlet calling it (New-S3Site.ps1), we replace any reference to the current working directory (.) with the $Path parameter value. This is the same code we used to define the parameter in that other script: it defaults to the current working directory but also lets us specify another location. Whether using the default value or one we set, the AWS copy command will now look there for the files to copy to the bucket:

    [CmdletBinding()]
    Param(
        # Name of the bucket being configured  
        [Parameter(Mandatory=$true,Position=1)]
        [String]
        $Name,

        # Name of the path on the local computer to copy files from, assume current working directory by default
        [String]
        $Path = $pwd.Path
    )

    # [https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html] :: upload HTML files
    aws s3 cp $Path s3://$($Name) --exclude "*" --include "*.html" --recursive --metadata-directive REPLACE `
        --content-type text/html --cache-control "public,max-age=604800"

    # Upload stylesheets
    aws s3 cp $Path s3://$($Name) --exclude "*" --include "*.css" --recursive --metadata-directive REPLACE `
        --content-type text/css --cache-control "public,max-age=604800"

    # Upload JavaScript files
    aws s3 cp $Path s3://$($Name) --exclude "*" --include "*.js" --recursive --metadata-directive REPLACE `
        --content-type text/javascript --cache-control "public,max-age=604800"

    # Upload PNG images
    aws s3 cp $Path s3://$($Name) --exclude "*" --include "*.png" --recursive --metadata-directive REPLACE `
        --content-type image/png --cache-control "public,max-age=604800"

    # Upload JPG images
    aws s3 cp $Path s3://$($Name) --exclude "*" --include "*.jpg" --recursive --metadata-directive REPLACE `
        --content-type image/jpeg --cache-control "public,max-age=604800"

    # Set the website configuration for the bucket, setting the index and error pages.
    aws s3 website s3://$($Name) --index-document index.html --error-document error.html

The Scripts in Action

Running the scripts produces output showing the bucket being created and the files in our local 'public' folder being uploaded to it.
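
For a concrete sense of how the two scripts fit together, an invocation might look something like the lines below. The post doesn't name the upload script, so Publish-S3Site.ps1 is only a placeholder filename here, and my-static-site stands in for whatever bucket name you chose:

    # Provision the bucket, then push the local 'public' folder's contents to it
    # (Publish-S3Site.ps1 and my-static-site are placeholder names)
    .\New-S3Site.ps1 -Name my-static-site
    .\Publish-S3Site.ps1 -Name my-static-site -Path .\public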

Once created, we have the bucket available in the S3 management console with the appropriate access configured.

And visiting the S3 bucket's website URL works as well, with both our index and error pages functioning.
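
If you'd rather verify from the shell than the browser, a quick request against the bucket's website endpoint does the same check. This is only a sketch: the hostname format below assumes the bucket lives in us-east-1 (website endpoints vary by region), and my-static-site is again a placeholder bucket name:

    # Request the index page from the S3 website endpoint (region assumed to be us-east-1)
    Invoke-WebRequest "http://my-static-site.s3-website-us-east-1.amazonaws.com/" |
        Select-Object StatusCode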
