Immutable Server Pattern

Published on 08 Mar 2023

What is the Immutable Server Pattern?

The immutable server pattern uses disposable components for everything that makes up an application except its data. Once the application is deployed, nothing changes on the server: no scripts are run on it, and no configuration is done on it. The packaged code and any deploy scripts are baked into the server, and no outside process can modify its contents after deployment. For example, if you deploy your code with Docker containers, everything the application needs lives in the Docker image, which you then use to create and run a container. You cannot modify the image once it has been created; if any changes need to take place, you build a new image and work with that one.

In our case, we use Amazon Machine Images (AMIs) to accomplish the same thing. We make heavy use of Amazon Linux machines, which are Red Hat-based, so we package the code into RPMs. Each RPM defines all the dependencies for running the application, the code itself, and any startup scripts to run on boot. The RPM is installed on a clean base image of Amazon Linux, and an image is captured from the result, producing an AMI. This AMI is synonymous with “immutable server”: it cannot be changed once it is created. The AMI is then deployed into an Auto Scaling Group (ASG) and attached to an Elastic Load Balancer (ELB).

In this post, I’ll take a closer look at every step of this immutable server deploy pipeline. I’ll then go into how and why we embedded planned failures into this system. At the end, I’ll share the insights we’ve gained into the pros and cons of deploying this way.
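The core idea can be sketched in a few lines of Python. This is purely illustrative — the names `ServerImage` and `bake_new_image` are mine, not part of any AWS API — but it captures the rule: a deployed image is frozen, and a “change” always produces a new image rather than mutating the existing one.

```python
from dataclasses import dataclass, replace

# A frozen dataclass models an immutable server image: once built,
# its attributes cannot be reassigned.
@dataclass(frozen=True)
class ServerImage:
    name: str          # application name
    version: str       # version of the baked code (e.g. the RPM version)
    base_image: str    # clean base the image was built from

def bake_new_image(current: ServerImage, new_version: str) -> ServerImage:
    """'Changing' an immutable server means baking a new image;
    the running one is never edited in place."""
    return replace(current, version=new_version)

v1 = ServerImage(name="myapp", version="1.0.0", base_image="amazon-linux-2")
v2 = bake_new_image(v1, "1.0.1")

print(v2.version)  # the new image carries the change
print(v1.version)  # the old image is untouched, ready for rollback
```

Keeping the old image around is what makes rollback trivial: redeploying `v1` is the same operation as deploying `v2`.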

How do you build immutable servers?

Stage 1: Build the application

In this stage, CodePipeline reads source code from CodeCommit and stores the latest version in an S3 bucket. CodeBuild runs the build process. At the end of this stage, the build artifacts are in an S3 bucket. This diagram was created with the assumption that you own the source code for the application and that you make regular changes that must be compiled first. If that is not the case, you can skip this step and store installation files in an S3 bucket. If your organization uses Atlassian Bitbucket or GitHub instead of AWS CodeCommit, you can edit the pipeline accordingly.
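One practical detail in this stage is how build artifacts are versioned in S3 so that later stages can reference an exact build. A minimal sketch, assuming a hypothetical key layout of my own (app/commit/build — not an AWS convention):

```python
def artifact_key(app: str, commit_id: str, build_number: int) -> str:
    """Build an S3 object key that uniquely identifies one build:
    short commit hash plus a monotonically increasing build number."""
    return f"{app}/builds/{commit_id[:8]}/{build_number}/{app}.zip"

key = artifact_key("myapp", "3f9c2d1ab4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9", 42)
print(key)  # myapp/builds/3f9c2d1a/42/myapp.zip
```

Any scheme works as long as a key never points at two different builds; that uniqueness is what lets the AMI stage bake a precisely known artifact.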

Stage 2: Build a fully installed AMI

In this stage, CodePipeline invokes an AWS Lambda function and then applies an AWS CloudFormation template. The purpose of the Lambda function is to prepare input parameters for CloudFormation. The input parameters consist of a reference to the build artifacts from the previous stage and a version number. The CloudFormation template creates EC2 Image Builder resources. EC2 Image Builder creates a fully installed AMI.
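The Lambda function here is simple parameter plumbing. A minimal sketch of its core logic — the parameter names `ArtifactBucket`, `ArtifactKey`, and `Version` are illustrative assumptions, but the `ParameterKey`/`ParameterValue` pair shape is the form CloudFormation expects:

```python
def build_cfn_parameters(artifact_bucket: str, artifact_key: str,
                         version: str) -> list:
    """Shape the Stage 1 outputs into CloudFormation stack parameters:
    a list of ParameterKey/ParameterValue dicts."""
    values = {
        "ArtifactBucket": artifact_bucket,  # bucket holding the build artifacts
        "ArtifactKey": artifact_key,        # object key of the exact build
        "Version": version,                 # version number for the AMI name
    }
    return [{"ParameterKey": k, "ParameterValue": v} for k, v in values.items()]

params = build_cfn_parameters("my-build-bucket", "myapp/builds/3f9c2d1a/42/myapp.zip",
                              "1.0.42")
print(params[0])  # {'ParameterKey': 'ArtifactBucket', 'ParameterValue': 'my-build-bucket'}
```

The returned list is what the Lambda would pass along when the pipeline applies the CloudFormation template that drives EC2 Image Builder.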

Stage 3: Deploy the image to your test environment using a rolling update

In this final stage, CodePipeline applies a second CloudFormation template. This template performs a rolling deployment using an EC2 Auto Scaling group. A rolling deployment removes an old instance only after creating a new instance to avoid downtime. The Auto Scaling group represents your development environment. This template is separate from the previous CloudFormation template to make the solution modular. If you want to extend the pipeline to deploy to more environments, you can replicate this deployment mechanism.
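The create-before-remove behavior can be sketched as a small simulation. This is purely illustrative — in the real pipeline the Auto Scaling group performs the rollout — but it shows why capacity never drops below the original fleet size:

```python
def rolling_update(old_instances: list, new_instances: list) -> tuple:
    """Replace old instances one at a time, always launching the
    replacement before terminating an old instance."""
    fleet = list(old_instances)
    capacity_log = []
    for new in new_instances:
        fleet.append(new)              # launch the replacement first
        fleet.pop(0)                   # only then terminate an old instance
        capacity_log.append(len(fleet))  # capacity after each step
    return fleet, capacity_log

fleet, capacity_log = rolling_update(["old-1", "old-2"], ["new-1", "new-2"])
print(fleet)         # ['new-1', 'new-2']
print(capacity_log)  # [2, 2] -- never below the original size
```

Because every step ends at full capacity, the load balancer always has healthy instances to route to, which is what makes the update zero-downtime.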


When you treat your EC2 instances as immutable servers, you get a repeatable and reliable process for creating instances. You reduce the risk of human error and the risk of inconsistencies in automated installations. When you need to resolve an issue, you can rule out environmental differences as a potential cause, at least as far as the EC2 instance is concerned. Immutable servers require a different way of working. You need a mechanism for creating fully installed AMIs and rolling them out across environments. In this blog post, I showed you how to launch a pipeline in CodePipeline that orchestrates the entire process—from source control to an up-and-running development environment.