Set Up a Continuous Deployment Pipeline for AWS Elastic Beanstalk
In Part 1: Deploy the application to AWS EB using Docker platform, we already covered how to deploy the dockerized application manually, running awsebcli commands each time you want to trigger a new deployment.
In this tutorial, we will set up automated deployment to the AWS Elastic Beanstalk environment using AWS CodePipeline or GitHub Actions.
Scenario 1: Use AWS CodePipeline
Manage IAM user permissions:
Before using CodePipeline, ensure that your IAM user has the necessary permissions. It is recommended to assign these permissions within a user group:

- AWSCodeStarConnectionsFullAccess, custom inline policy content:
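The policy document itself is not reproduced here. As a minimal sketch, assuming the intent is full access to CodeStar Connections (the exact actions and resources in the original may differ):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CodeStarConnectionsFullAccessSketch",
      "Effect": "Allow",
      "Action": "codestar-connections:*",
      "Resource": "*"
    }
  ]
}
```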
- CodeBuildServiceRoleManagement, custom inline policy content:
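Likewise, a sketch assuming this policy lets the user create and pass the CodeBuild service role; the exact actions and the role ARN pattern are assumptions, so adjust them to your own setup:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CodeBuildServiceRoleManagementSketch",
      "Effect": "Allow",
      "Action": [
        "iam:CreateRole",
        "iam:AttachRolePolicy",
        "iam:PutRolePolicy",
        "iam:PassRole"
      ],
      "Resource": "arn:aws:iam::*:role/service-role/codebuild-*"
    }
  ]
}
```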
Additionally, ensure that your IAM user has the ServiceRoleManagement permission, which was already created in the previous guide.
Define build steps:
Create a buildspec.yml file in the root directory with the following content. This file defines the build phases and commands that AWS CodeBuild will execute when building your project:
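The original buildspec content is not shown here. Below is a minimal sketch consistent with the notes that follow: it logs in to ECR, derives the image tag from the commit hash, builds and pushes the image, and exports the Elastic Beanstalk bundle files as artifacts (the artifact list is an assumption based on the rest of this series):

```yaml
version: 0.2

phases:
  pre_build:
    commands:
      # Log in to Amazon ECR
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
      # Derive the image tag from the commit hash resolved by CodeBuild
      - IMAGE_TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
  build:
    commands:
      # Build and tag the Docker image
      - docker build -t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPOSITORY:$IMAGE_TAG .
  post_build:
    commands:
      # Push the image to ECR
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPOSITORY:$IMAGE_TAG

artifacts:
  files:
    # Files Elastic Beanstalk needs to deploy the new version (assumed bundle layout)
    - .ebextensions/**/*
    - docker-compose.yml
    - nginx.conf
```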
Notes:
- CODEBUILD_RESOLVED_SOURCE_VERSION is a value provided by AWS CodeBuild (ref doc).
- AWS_DEFAULT_REGION, AWS_ACCOUNT_ID, and IMAGE_REPOSITORY must be set in the AWS CodeBuild environment variables configuration.
- IMAGE_TAG is generated from the commit hash.
Set up AWS CodePipeline
Open the CodePipeline console and choose Create pipeline:
1. Choose pipeline settings.
2. Add source stage: Choose a source provider (GitHub, GitLab, Bitbucket, … are supported). If you don't have a source provider connection yet, create a new one: click the Connect to GitHub/GitLab/Bitbucket button, then authorize provider access. Finally, choose the repository and the branch to be used by the pipeline.
3. Add build stage: Choose AWS CodeBuild as the build provider, then create a new CodeBuild project, which runs the commands defined in the buildspec.yml we created in the previous step.
Create a new AWS CodeBuild project: remember to set the required environment variables. In our case, we need values for AWS_DEFAULT_REGION, AWS_ACCOUNT_ID, and IMAGE_REPOSITORY.
After creating the CodeBuild project successfully, you need to add the AmazonEC2ContainerRegistryPowerUser permission to the associated service role. In our case, it is named codebuild-crewcall-production-service-role.
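If you prefer the AWS CLI over the console for this step, attaching the managed policy might look like this (using the role name from this example):

```bash
# Attach the ECR power-user managed policy to the CodeBuild service role
aws iam attach-role-policy \
  --role-name codebuild-crewcall-production-service-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser
```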
4. Add deploy stage: Choose the target AWS Elastic Beanstalk environment that you want to deploy to.
Finally, review all configurations again, then choose Create pipeline. Once the pipeline is created, it will execute automatically. You can use View logs to follow the job progress and identify potential errors.
That’s all for creating an AWS CodePipeline project to automate AWS Elastic Beanstalk environment deployment. Push a new commit to the source repository to trigger the pipeline again and verify the workflow.
Scenario 2: Use GitHub Actions
It’s much easier to go with GitHub Actions. First, you need to define the deployment workflow in .github/workflows/cd.yml like this:
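The original workflow file is not reproduced here; the sketch below is one possible version, consistent with the environment secrets, variables, and branch rules described after it. The action versions, step names, and short-SHA image tag are assumptions:

```yaml
name: CD

on:
  push:
    branches: [main, develop]

jobs:
  deploy:
    runs-on: ubuntu-latest
    # Pick the GitHub environment based on the branch (production for main, staging otherwise)
    environment: ${{ github.ref == 'refs/heads/main' && 'production' || 'staging' }}
    steps:
      - name: Checkout source code
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Log in to Amazon ECR
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build and push Docker image
        run: |
          # Tag the image with the short commit SHA (assumed convention)
          IMAGE_TAG=${GITHUB_SHA::7}
          docker build -t ${{ vars.ECR_REGISTRY }}/${{ vars.ECR_REPOSITORY }}:$IMAGE_TAG .
          docker push ${{ vars.ECR_REGISTRY }}/${{ vars.ECR_REPOSITORY }}:$IMAGE_TAG

      - name: Create and upload application bundle
        run: |
          # Unique version label built from branch name and short SHA (assumed convention)
          VERSION_LABEL=${GITHUB_REF_NAME}-${GITHUB_SHA::7}
          zip -r "$VERSION_LABEL.zip" .ebextensions docker-compose.yml nginx.conf
          aws s3 cp "$VERSION_LABEL.zip" "s3://${{ vars.EB_BUCKET }}/$VERSION_LABEL.zip"
          echo "VERSION_LABEL=$VERSION_LABEL" >> "$GITHUB_ENV"

      - name: Create new application version and deploy
        run: |
          aws elasticbeanstalk create-application-version \
            --application-name ${{ vars.EB_APP }} \
            --version-label "$VERSION_LABEL" \
            --source-bundle S3Bucket=${{ vars.EB_BUCKET }},S3Key="$VERSION_LABEL.zip"
          aws elasticbeanstalk update-environment \
            --environment-name ${{ vars.EB_ENV }} \
            --version-label "$VERSION_LABEL"
```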
With the GitHub Actions workflow above, you must set values for the following environment secrets and variables in your GitHub repository settings.
Notes: With the above settings, GitHub Actions uses the production environment when the workflow runs for the main branch; otherwise it uses the staging environment. Therefore, configure the appropriate values for each environment.
- Environment secrets:
  - AWS_ACCESS_KEY_ID
  - AWS_SECRET_ACCESS_KEY
  - AWS_REGION
- Environment variables:
  - EB_APP: crewcall-api
  - EB_ENV: crewcall-api-production
  - EB_BUCKET: elasticbeanstalk-[AWS_REGION]-[AWS_ACCOUNT_ID]
  - ECR_REGISTRY: [AWS_ACCOUNT_ID].dkr.ecr.[AWS_REGION].amazonaws.com
  - ECR_REPOSITORY: crewcall-api-production
After adding the correct values for the environment secrets and variables, commit the GitHub Actions workflow definition and push it to the target branch (main or develop) to trigger the workflow.
Other Scenarios
If you’re using an alternative CI/CD tool like GitLab CI, Jenkins, …, you should be able to define the workflow easily by following these steps (a shell sketch of the last three steps follows the list):
- Check out the source code.
- Install awscli and configure AWS credentials.
- Build the Docker image and push it to AWS ECR.
- Compress the needed application files/directories (the .ebextensions directory, plus the docker-compose.yml and nginx.conf files) with a unique version label, then upload the archive to the Elastic Beanstalk S3 bucket.
- Create a new application version with the S3 key of the uploaded file.
- Trigger the deployment with the newly created application version.
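As a rough illustration of the packaging and deployment steps, the snippet below uses the AWS CLI; the EB_APP, EB_ENV, and EB_BUCKET variables are assumed to match the values from the GitHub Actions scenario:

```bash
#!/usr/bin/env bash
# Illustrative sketch; variable names assumed from the GitHub Actions scenario above.
set -euo pipefail

VERSION_LABEL="$(git rev-parse --short HEAD)"
BUNDLE="app-${VERSION_LABEL}.zip"

# Compress the Elastic Beanstalk source bundle
zip -r "$BUNDLE" .ebextensions docker-compose.yml nginx.conf

# Upload the bundle to the Elastic Beanstalk S3 bucket
aws s3 cp "$BUNDLE" "s3://${EB_BUCKET}/${BUNDLE}"

# Register a new application version pointing at the uploaded bundle
aws elasticbeanstalk create-application-version \
  --application-name "$EB_APP" \
  --version-label "$VERSION_LABEL" \
  --source-bundle "S3Bucket=${EB_BUCKET},S3Key=${BUNDLE}"

# Deploy the new version to the target environment
aws elasticbeanstalk update-environment \
  --environment-name "$EB_ENV" \
  --version-label "$VERSION_LABEL"
```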
That's all for the second part of this series. In the next part, we will talk about SSL and DNS configuration.