Building a new AWS account – Part 2
May 12, 2020
Building the S3 buckets
This is going to be more of a ‘getting-started’ type of post. Knowing what I know now about all the AWS work I have done (read as: “mistakes I have made”):
How would I go about setting up a new account?
I would put a few restrictions on myself. The first would be to automate infrastructure deployment. This means that I wouldn’t build anything manually that could be built using CloudFormation. I would also set up a pipeline for each of the CloudFormation templates, to force myself to make changes only through code and have them deployed for me.
There have just been too many times where things started without this automation and, because of time constraints or certain pieces not being set up, it has ‘bitten’ me. If the pipelines and templates are set up beforehand, then changes are fast and easy. Most importantly, they can be rolled back or not applied if there is a problem.
So here’s the first problem. Everything ‘starts’ with S3. One of the things I want to avoid is AWS creating buckets for me. For example, let’s say I have a CloudFormation template to create a bucket. If I go to the CloudFormation console and upload the template, CloudFormation is going to create a bucket with a ‘random’ name and put my template in it.
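To make this concrete, a bucket template can be as small as the sketch below. The logical ID and bucket name are just placeholders, not values from my account:

```yaml
# Minimal bucket template. Names here are placeholders.
AWSTemplateFormatVersion: '2010-09-09'
Description: Artifacts-store bucket managed through the pipeline.
Resources:
  ArtifactsStoreBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-artifacts-store   # hypothetical name
      VersioningConfiguration:
        Status: Enabled
Outputs:
  BucketName:
    Value: !Ref ArtifactsStoreBucket
```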
The first thing I want to build is a CodePipeline that will deploy a bucket for me. But if I upload the CodePipeline template through the console, a bucket will get created for me. I want to control everything in my account, so I want to build the initial bucket. This means I need to do it by hand. So I will create a ‘throw-away’ bucket first. Manually. Then I will upload my CodePipeline template into that bucket and build the pipeline that will build a bucket for me. This way I’ll also have an artifacts-store bucket for my pipeline.
So that I’m not tempted to keep the bucket around, I’ll give it a name that will remind me to delete it, ‘deleteme-firstbucket’.
Now that I have my bucket, I’ll upload my pipeline CloudFormation template into it and use that as the location when I create my stack.
So I had to create a bucket manually. No big deal; at least no bucket was created for me. Now I’ll run through the stack creation. This particular stack creates a pipeline that executes CloudFormation pulled from a GitHub repository. Because the pipeline needs an artifacts-store location in S3, I give it the location of the manually created bucket. Later, before deleting ‘deleteme-firstbucket’, I will update the stack to use the bucket created by the pipeline.
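I won’t paste my exact template here, but a rough sketch of that pipeline template could look like the following, assuming a version-1 GitHub source action and passing the artifacts bucket and IAM role ARNs in as parameters. All of the names below are placeholders, and the role definitions are left out for brevity:

```yaml
# Sketch of a pipeline that pulls from GitHub and deploys a CloudFormation stack.
AWSTemplateFormatVersion: '2010-09-09'
Description: Pipeline that deploys the bucket stack from a GitHub repository.
Parameters:
  ArtifactStoreBucket:
    Type: String        # starts as 'deleteme-firstbucket', updated later
  PipelineRoleArn:
    Type: String        # role CodePipeline assumes (definition omitted here)
  CloudFormationRoleArn:
    Type: String        # role CloudFormation assumes for the deploy action
  GitHubOwner:
    Type: String
  GitHubRepo:
    Type: String
  GitHubBranch:
    Type: String
    Default: master
  GitHubOAuthToken:
    Type: String
    NoEcho: true
Resources:
  BucketPipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !Ref PipelineRoleArn
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactStoreBucket
      Stages:
        - Name: Source
          Actions:
            - Name: GitHubSource
              ActionTypeId:
                Category: Source
                Owner: ThirdParty
                Provider: GitHub
                Version: '1'
              Configuration:
                Owner: !Ref GitHubOwner
                Repo: !Ref GitHubRepo
                Branch: !Ref GitHubBranch
                OAuthToken: !Ref GitHubOAuthToken
              OutputArtifacts:
                - Name: SourceOutput
        - Name: Deploy
          Actions:
            - Name: DeployBucketStack
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Provider: CloudFormation
                Version: '1'
              Configuration:
                ActionMode: CREATE_UPDATE
                StackName: artifacts-store-bucket     # hypothetical stack name
                TemplatePath: SourceOutput::bucket.yaml
                RoleArn: !Ref CloudFormationRoleArn
              InputArtifacts:
                - Name: SourceOutput
```

Keeping the artifacts bucket as a parameter is what makes the later switch easy: updating the stack with a new parameter value is all it takes to move off the throw-away bucket.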
Once my stack is complete, I have a pipeline that creates my bucket. The nice thing about this is that the pipeline will kick off after the stack is created.
Once the pipeline is finished running, I’ll have a new CloudFormation stack with my new artifacts-store bucket.
I usually keep any templates that have to be run from S3, along with other deployment ‘things’, in a bucket labeled ‘deployment-artifacts’. So now, using the same process, I’ll create a deployment-artifacts bucket. For this pipeline, I’ll set the artifacts-store bucket to be the one I just created through the first pipeline. When done, I’ll update the first pipeline stack to use the pipeline-created artifacts-store bucket as well.
So now I have four CloudFormation stacks: one pipeline stack for each S3 bucket and one stack for each bucket.
The last step will be updating the original pipeline stack to use the artifacts-store bucket and deleting the ‘deleteme-firstbucket’ bucket. Once done, I’ll have a code-managed base infrastructure. Any changes I make to the buckets will be done via the CloudFormation templates in source control and deployed via webhooks that trigger the pipelines! Since everything starts with the pipeline CloudFormation template, I’ll upload it to the deployment-artifacts bucket for the next pipeline.
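Since I mentioned webhooks: as a sketch, wiring one up is just one more resource in the same template as the pipeline above (same placeholder names as before, and in practice a dedicated secret token is better than reusing the OAuth token):

```yaml
  # Webhook that triggers the pipeline on pushes to the tracked branch.
  PipelineWebhook:
    Type: AWS::CodePipeline::Webhook
    Properties:
      Authentication: GITHUB_HMAC
      AuthenticationConfiguration:
        SecretToken: !Ref GitHubOAuthToken   # placeholder; use a separate secret
      Filters:
        - JsonPath: '$.ref'
          MatchEquals: 'refs/heads/{Branch}'
      TargetPipeline: !Ref BucketPipeline
      TargetAction: GitHubSource
      TargetPipelineVersion: !GetAtt BucketPipeline.Version
      RegisterWithThirdParty: true
```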
Now that we have the buckets, we’ll stop here. In the next few posts, I’ll start adding more things and build up to a good base. For now, this is a good start.