Deploy multi-container Docker app with CI/CD to Elastic Beanstalk with AWS ECR, CodeBuild and CodeDeploy
In this article, I will walk through the process of creating a Dockerized boilerplate web app that is deployed to Elastic Beanstalk.
The app consists of three containers: a client running React, a backend API running Node.js, and an Nginx server that proxies requests to each of them. The project uses GitHub for source control, and has a CI/CD pipeline: pushing to the master branch automatically re-deploys the application to Elastic Beanstalk.
This is NOT a Docker tutorial. I will briefly go over how the build process works, and docker-compose, but this is mostly just to clarify how to make multi-container deployments to Elastic Beanstalk.
The boilerplate code can be found on my GitHub here: https://github.com/ashwin9798/node-react-nginx-docker-boilerplate. This tutorial walks through configuring the project and creating the necessary AWS services in the console so that everything works with your AWS account.
Create the Github Repo
Download the zipped version of the repository found at this link: https://github.com/ashwin9798/node-react-nginx-docker-boilerplate
Make sure to download the zip instead of cloning the repository, because you will eventually push your version of the code to your own GitHub account (for CI/CD).
Unzip the folder, initialize a git repository inside it, and stage the files:
git init
git add -A
Create a new repository on your GitHub account for your app.
Back in your terminal, create an initial commit and push the changes to your new repository using the following commands (replace <link_to_repository> with the URL of your GitHub repo).
git commit -m "first commit"
git remote add origin <link_to_repository>.git
git push origin master
Great! Now that you have the boilerplate code in your own repository, we can begin creating the necessary AWS services through the console.
Log in to your AWS console at https://console.aws.amazon.com/
Once you are in, click the Services dropdown and click on ‘ECR’, which is under the Containers section.
ECR is a service similar to Docker Hub that allows you to store your Docker images on the cloud.
Once you’re in ECR, click on ‘Create Repository’. We are going to create three separate ECR repositories: one for each container that we are going to deploy.
Type in the desired name and create the repository. Make sure to do this 3 times. In my case, I just named them test-client, test-api, and test-nginx.
Go back to the ECR repositories tab and verify that 3 container repositories were created. Notice that each repository has a URI; we will need to add these to the buildspec.yml and Dockerrun.aws.json files in the project.
Open up each file and replace the appropriate ECR_URL placeholders with the actual URIs from the ECR console.
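For reference, here is a sketch of what a multi-container Dockerrun.aws.json looks like once the placeholders are filled in. The <AWS_ACCOUNT_ID> placeholder, memory values, and port mapping below are illustrative assumptions, not necessarily the repo's exact values:

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "client",
      "image": "<AWS_ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com/test-client",
      "essential": false,
      "memory": 128
    },
    {
      "name": "api",
      "image": "<AWS_ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com/test-api",
      "essential": false,
      "memory": 128
    },
    {
      "name": "nginx",
      "image": "<AWS_ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com/test-nginx",
      "essential": true,
      "memory": 128,
      "portMappings": [{ "hostPort": 80, "containerPort": 80 }],
      "links": ["client", "api"]
    }
  ]
}
```

The nginx container is marked essential and exposes port 80, since it is the single entry point that proxies to the other two containers.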
Commit the changes and push to GitHub.
Click on the services dropdown and navigate to Elastic Beanstalk under the Compute section.
Step 1: Click on ‘Create a new environment’, and select ‘Web server environment’.
Step 2: Create a name for your application and environment (and description if you’d like).
Step 3: For the platform, make sure you select ‘Docker’, and for Platform branch, select ‘Multi-container Docker running on 64bit Amazon Linux’.
Go ahead and create the environment, and move on to the next step.
Now that we have the Elastic Beanstalk environment and the ECR configured, we can move on to CI/CD.
Essentially, the behavior we want is as follows: whenever we make changes to any of our apps, we want to rebuild those Docker containers and push them to our ECR repositories. Then after the new images are pushed to ECR, Elastic Beanstalk will be redeployed to use the newest version of each container.
So where are these steps defined?
- The buildspec.yml file defines the stages that AWS CodeBuild will go through when building the Docker containers and deploying them to Elastic Beanstalk.
- The Dockerrun.aws.json file is used by Elastic Beanstalk when it needs to read from ECR and provision the necessary EC2 instances to get all containers running.
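To make the build stages concrete, a buildspec along these lines would log in to ECR, build the three images, push them, and hand the Dockerrun file to Elastic Beanstalk as the deploy artifact. This is a sketch under assumptions (region us-west-2, source directories client/, server/, and nginx/, and the modern `aws ecr get-login-password` login flow); the buildspec.yml in the repo is the source of truth:

```yaml
version: 0.2
phases:
  pre_build:
    commands:
      # Authenticate the Docker client against the account's ECR registry
      - aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.us-west-2.amazonaws.com
  build:
    commands:
      # Build one image per container
      - docker build -t $AWS_ACCOUNT_ID.dkr.ecr.us-west-2.amazonaws.com/test-client ./client
      - docker build -t $AWS_ACCOUNT_ID.dkr.ecr.us-west-2.amazonaws.com/test-api ./server
      - docker build -t $AWS_ACCOUNT_ID.dkr.ecr.us-west-2.amazonaws.com/test-nginx ./nginx
  post_build:
    commands:
      # Push the freshly built images to their ECR repositories
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.us-west-2.amazonaws.com/test-client
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.us-west-2.amazonaws.com/test-api
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.us-west-2.amazonaws.com/test-nginx
artifacts:
  files:
    - Dockerrun.aws.json
```

The artifacts section is what hands Dockerrun.aws.json to the deploy stage, so Elastic Beanstalk knows which ECR images to pull.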
Step 1: Navigate to ‘CodeBuild’ under the Developer Tools section of the Services dropdown. Then click on ‘Create build project’. Give the project a name and optionally a description.
Step 2: Set GitHub as the source provider and connect to the repository that you just created. This allows CodeBuild to look at the buildspec in our repo and perform the build steps.
- Environment Image — Managed Image
- Operating System — Ubuntu
- Runtime — Standard
- Image — aws/codebuild/standard:4.0
- Image Version — Always use the latest image
- Environment Type — Linux
- Privileged (enable this flag if you want to build Docker images or want your builds to get elevated privileges) — Yes (checked box)
- Service role — New service role
- Build specifications — Use a buildspec file
After the CodeBuild project is created, you need to add a policy that enables CodeBuild to access ECR.
Step 3: Navigate to the ‘IAM’ console (again through the Services dropdown) and click on Roles. Search for the CodeBuild service role that you just created; it has the naming format codebuild-NameOfCodeBuildProject-service-role.
Click on the service role, then click on ‘Attach policies’.
Step 4: Click on ‘Create Policy’.
When presented with the list of services to create the policy for, select Elastic Container Registry. Once you’ve selected ECR, move on to the ‘Actions’ section, right under the Service selection. You can view all the access levels by clicking the ‘Expand all’ link on the right side of this section. We’re going to grant the following permissions in our policy:
- BatchCheckLayerAvailability (Read) — Grants permission to check the availability of multiple image layers in a specified registry and repository
- GetAuthorizationToken (Read) — Grants permission to retrieve a token that is valid for a specified registry for 12 hours
- InitiateLayerUpload (Write) — Grants permission to notify Amazon ECR that you intend to upload an image layer
- PutImage (Write) — Grants permission to create or update the image manifest associated with an image
- CompleteLayerUpload (Write) — Grants permission to inform Amazon ECR that the image layer upload for a specified registry, repository name, and upload ID, has completed
- UploadLayerPart (Write) — Grants permission to upload an image layer part to Amazon ECR
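If you prefer the JSON tab of the policy editor, the six permissions above correspond to a policy document roughly like the following (with Resource left as "*", matching the All Resources choice in the next step):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage"
      ],
      "Resource": "*"
    }
  ]
}
```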
Step 5: Move on to the Resources section right below and select All Resources this time around. Then click the ‘Review Policy’ button in the bottom right of the screen, which takes you to the last step: naming your policy. I named my policy SampleCodeBuildToECR.
Navigate to ‘CodePipeline’ under the Developer Tools section of the Services dropdown. CodePipeline will listen for pushes to the GitHub repository, trigger the CodeBuild project, deploy the containers to Elastic Beanstalk, and show the progress of builds in a nice GUI.
Step 1- Settings: Click on ‘Create new pipeline’, and use the default pipeline settings
Step 2- Source: Add GitHub as the source stage and click on ‘Connect to GitHub’. Then select the repository you created, and select ‘master’ as branch.
Step 3- Build: Add your CodeBuild project where it asks for the Project name. Create an environment variable called AWS_ACCOUNT_ID, and for its value use the numeric prefix of the ECR URIs (the number before dkr.ecr.us-west-2…).
Step 4- Deploy: This is where we will use our Elastic Beanstalk environment. Remember to select the relevant application and environment.
Step 5- Add ECR Policy to Service Role: We created the SampleCodeBuildToECR policy earlier, and need to attach it to the CodeBuild service role. Search for the service role in IAM (it has the format codebuild-NameOfCodeBuildProject-service-role) and attach the SampleCodeBuildToECR policy to it.
And that’s it!
The build should get triggered automatically when you push any change to GitHub, and you can view the status of each stage of the pipeline in the CodePipeline console.
When the deploy stage is green, the app is running on Elastic Beanstalk, and you can visit it by clicking on the ‘AWS Elastic Beanstalk’ link inside the Deploy stage box and then clicking on the environment URL.
The client is the React boilerplate project created by create-react-app, and the API is the most basic Express server, printing Hello World on the root URL.
Now, when you make changes to the client and API locally, you can test them using
docker-compose up --build. This spins up the containers on your local machine (make sure you have Docker Desktop installed).
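For reference, a minimal docker-compose.yml for a three-container setup like this one might look like the following. The service names, build paths, and host port here are assumptions for illustration; the compose file in the repo may differ:

```yaml
version: "3"
services:
  client:
    build: ./client      # React app
  api:
    build: ./server      # Node.js/Express API
  nginx:
    build: ./nginx       # reverse proxy in front of client and api
    ports:
      - "3050:80"        # visit http://localhost:3050
    depends_on:
      - client
      - api
```

This mirrors the production layout: locally Nginx is still the single entry point, just as it is on Elastic Beanstalk.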
When you are ready to push to production, simply push to your GitHub repo, and the CI/CD pipeline will take care of the rest.
Hope this was useful!