Amazon Web Services, through its Simple Storage Service (S3), provides an inexpensive and flexible way to host static websites. These are typically sites developed in a local environment (using a generator such as Hugo or Jekyll) and then pushed to either a web server or, in this case, a cloud-based storage platform configured to make the generated pages public. This post discusses a scripted approach to building, from scratch, a bucket you can use to host an S3 website.
Once you have the AWS CLI configured, with keys set up for an account that has permission to manage S3 resources and policies, we can begin writing a script that builds a new S3 bucket for site hosting. The finished project can be found at this Github repository.
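Before we start, it's worth confirming that the CLI is resolving the credentials we expect. The two commands below are standard AWS CLI calls that only read configuration and identity information; nothing in your account is changed:

# show the credentials, profile and region the AWS CLI has resolved
aws configure list

# confirm the keys belong to the account you intend to use
aws sts get-caller-identity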
Using the text editor of our choice, we create a new file, New-S3Site.ps1, and add the following contents. While any editor will do, using the PowerShell Integrated Scripting Environment (ISE) gives us the added bonus of being able to run our script while writing it.
[CmdletBinding()]
Param(
    # Name of the bucket being configured
    [Parameter(Mandatory=$true,Position=1)]
    [String]
    $Name,

    # Name of the path on the local computer to copy files from, assume current working directory by default
    [String]
    $Path = $pwd.Path
)

$bucketName = "s3://$($Name)"

# make sure the bucket doesn't already exist, checking for an ErrorRecord

# if your bucket was created successfully, proceed to configure/upload defaults

# configure default minimal viable policy, using Set-Content

# upload your site contents
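Once finished, the script will be invoked with a bucket name and, optionally, a source path; the values below are hypothetical placeholders:

.\New-S3Site.ps1 -Name "acme.org" -Path "C:\sites\acme\public"

The first piece we fill in is the existence check. The AWS CLI command aws s3 ls lists the contents of a bucket, and fails if the bucket doesn't exist: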
...
$bucketName = "s3://$($Name)"

# make sure the bucket doesn't already exist
aws s3 ls $bucketName
...
While this gets us our check very quickly, to better shape the behavior of our script we should capture the result of this command in a variable that can then be used to determine the next step to take, rather than simply printing it to the screen. If you run this command with a non-existent bucket name, the AWS CLI writes an error message to standard error. Now let's capture the contents of this error message by doing two things:
...
$bucketName = "s3://$($Name)"

# make sure the bucket doesn't already exist
$result = aws s3 ls $bucketName 2>&1
...
The $result variable will now contain an ErrorRecord object if ls fails (more precisely, it turns out to be an array with a single object of type ErrorRecord inside it). We know that if the result of this ls command is an ErrorRecord, the bucket does not exist, which is exactly the check we need before proceeding with the creation process. Conversely, if we don't get an ErrorRecord, nothing came from standard error: $result will either be null or contain the bucket listing, meaning the ls succeeded. In that case we should stop the script, because there's a bucket there we don't want to tamper with!
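If you want to convince yourself of this behavior before writing the check, you can poke at $result from an interactive prompt. The bucket name below is a placeholder, and depending on how many lines of error output the CLI emits, $result may be a single ErrorRecord rather than an array; indexing with [0] works in either case:

# run the listing against a bucket name that (hopefully) doesn't exist
$result = aws s3 ls "s3://a-bucket-that-does-not-exist-hopefully" 2>&1

# inspect what came back when ls failed
$result[0].GetType().Name        # ErrorRecord
$result[0].Exception.Message     # the text the CLI wrote to standard error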
$bucketName = "s3://$($Name)"

# make sure the bucket doesn't already exist, checking for an ErrorRecord
try {
    $result = aws s3 ls $bucketName 2>&1
    if ($result -eq $null -or $result[0].GetType().Name -ne "ErrorRecord") {
        throw [System.Exception]::new("The specified bucket $bucketName already exists.")
    }
}
catch {
    # if the bucket already exists, don't do anything and exit noisily!
    Write-Error $PSItem.Exception.Message
    Exit
}
The next AWS CLI command creates our bucket, using the bucketName variable we constructed earlier. The command is aws s3 mb, where mb is short for "make bucket."
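Run by hand against a hypothetical bucket, it looks something like this (the exact output can vary by CLI version, but a successful run includes the make_bucket prefix):

aws s3 mb s3://acme.org
# make_bucket: acme.org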
This command can also write to standard error (for example, when the bucket name is already taken or invalid), so we again use redirection to standard out and place the combined output in a variable, which we then check for a specific pattern: if the mb operation succeeds, the contents of the variable will contain "make_bucket". Here's how we check for that:
$bucketName = "s3://$($Name)"

# make sure the bucket doesn't already exist, checking for an ErrorRecord
try {
    $result = aws s3 ls $bucketName 2>&1
    if ($result -eq $null -or $result[0].GetType().Name -ne "ErrorRecord") {
        throw [System.Exception]::new("The specified bucket $bucketName already exists.")
    }
}
catch {
    # if the bucket already exists, don't do anything and exit noisily!
    Write-Error $PSItem.Exception.Message
    Exit
}

$bucket = aws s3 mb $bucketName 2>&1

# if your bucket was created successfully, proceed to configure/upload defaults
if ($bucket | Select-String -Pattern "make_bucket") {
    # configure default minimal viable policy, using Set-Content

    # upload your site contents
}
To configure a policy that makes all the files in our bucket publicly readable, we allow the s3:GetObject action for any principal (denoted by "*") on all resources under our bucket, which for a hypothetical bucket named acme.org would be the Amazon Resource Name (ARN) arn:aws:s3:::acme.org/*. We use the AWS Policy Generator to generate such a policy for us.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadForGetBucketObjects",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME/*"
            ]
        }
    ]
}
We paste the resulting policy into a file called static-site-policy.json, in the same directory as our script. This file becomes our template, and all we need to do to make it work for our new bucket is replace the placeholder BUCKET_NAME with the bucket name we passed in as a parameter. At this point, we would have the following as our complete script:
[CmdletBinding()]
Param(
    [Parameter(Mandatory=$true,Position=1)]
    [String]
    $Name,

    # Name of the path on the local computer to copy files from, assume current working directory by default
    [String]
    $Path = $pwd.Path
)

$bucketName = "s3://$($Name)"

# make sure the bucket doesn't already exist, checking for an ErrorRecord
try {
    $result = aws s3 ls $bucketName 2>&1
    if ($result -eq $null -or $result[0].GetType().Name -ne "ErrorRecord") {
        throw [System.Exception]::new("The specified bucket $bucketName already exists.")
    }
}
catch {
    # if the bucket already exists, don't do anything and exit noisily!
    Write-Error $PSItem.Exception.Message
    Exit
}

$bucket = aws s3 mb $bucketName 2>&1

# if your bucket was created successfully, proceed to configure/upload defaults
if ($bucket | Select-String -Pattern "make_bucket") {
    # configure default minimal viable policy, using Set-Content
    (Get-Content .\static-site-policy.json).replace('BUCKET_NAME', $Name) | Set-Content .\policy.json

    if (Test-Path -Path "policy.json") {
        # upload your site contents
        Write-Host (Get-Content "policy.json")
        aws s3api put-bucket-policy --bucket $Name --policy file://policy.json
        .\Upload-S3Site.ps1 -Name $Name -Path $Path
    }
}
At this point, our complete script creates a bucket with the name we provide (as long as one doesn't already exist) and sets a policy on it. The policy file, policy.json, is generated with PowerShell's Get-Content and Set-Content cmdlets (replacing the BUCKET_NAME placeholder in the template) and then applied with aws s3api put-bucket-policy.
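If you want to verify the policy actually landed on the bucket, you can read it back with the s3api; the bucket name below is hypothetical:

# read back the policy we just applied; the response contains the policy document as JSON
aws s3api get-bucket-policy --bucket acme.org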
In the next post in this series, we'll cover the final piece of the puzzle: the Upload-S3Site script, which uploads all of our website's files to the bucket. The Path parameter we defined here will play an important role in that second script.