DJANGO DEVOPS

Deploying your Django static files to AWS (Part 3)

Load balancing architecture

30/01/2019


This is the next part of my 4-part series (yes, I need to add an additional part) on how to deploy your static files to a production environment in a Django app. In the past 2 posts, we saw how to use a CDN and Whitenoise to serve your static files with a single-instance setup. In this post, I'll show how you need to change this setup if you want your EC2 instances in an autoscaling group. You'll learn how to use AWS Elastic File System as a central location for your static files, shared by all instances in your autoscaling group.

 8 min read

Part 3: Moving our static files to a central location accessible by multiple instances

The goal of this part is to set up auto-scaling. Auto-scaling means that you direct your traffic to a load balancer, behind which one or more EC2 instances running Django receive your traffic.

Most Django tutorials focus on S3 as the best way to store your static files. However, that implies changing your default file storage so that Python knows how to handle the S3 file system via boto3 (django-storages is a popular package for this).

By choosing EFS (Elastic File System) instead, you continue to store your files just as if they were on a local disk drive, i.e. like on EBS. You don't need to change anything in the way you store your files. EFS is more expensive than S3, so it might be less appropriate for user-uploaded (media) files if those grow large. But for static files, which are typically just a small set of files, it's perfect.
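To make this concrete, here is a minimal sketch of the static files settings this setup assumes. The CDN host, directory layout and Whitenoise storage backend are based on my setup from the earlier parts, so treat the exact values as placeholders and adapt them to yours.

    # settings.py -- sketch of the static files configuration (placeholder values).
    # Nothing here is EFS-specific: STATIC_ROOT simply points at the folder next to
    # the project repository that will later become the EFS mount point.
    import os

    BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))  # repository root

    STATIC_URL = "https://cdn.example.com/static/"  # hypothetical CDN host from part 1
    STATIC_ROOT = os.path.join(os.path.dirname(BASE_DIR), "static")  # folder next to the repo

    # Whitenoise (from part 2) keeps serving the collected files; no S3 backend needed.
    STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"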

Here's a diagram of the situation we want to end up with:

Instead of fetching our static files from the local EBS volume, we store our static folder centrally on Amazon EFS, which is shared between all instances.

Steps to create an EFS volume

  1. Go to your AWS Console, choose EFS under Services and then "Create file system". 
  2. It's easiest to create the new file system in the same VPC as your current EC2 instance. Just follow the instructions with the defaults and your EC2 instances should be able to mount the file system.
  3. Call your file system "production static" for example, to clearly indicate this is a production system that will be hosting your static files.
  4. Once your file system is created, you should see mount instructions. You can also see this under the details of your file system. Click on "Amazon EC2 mount instructions (from local VPC)" to see the instructions.
  5. I'm using Ubuntu (not AWS Linux) so I use NFS to mount the volume. ssh into your EC2 instance and run:
    sudo apt-get -y install nfs-common
  6. Then, cd into the directory that contains your project (not inside your project repository, next to it). If you followed the previous parts of this tutorial, you should have a static folder here where collectstatic puts your static files. Empty its contents, since this folder will become the mount point for your EFS file system.
  7. Now mount your EFS file system as your static folder:
    sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-eb934a9c.efs.eu-central-1.amazonaws.com:/ static
  8. You can check that the volume is mounted with df -T
  9. To ensure that the volume is automatically mounted every time your EC2 instance is restarted, add this line to the file /etc/fstab:
    fs-eb934a9c.efs.eu-central-1.amazonaws.com:/ /var/www/my_django_site.com/static nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev,nofail 0 0
  10. cd into your project (my_project folder) and run ../virtualenv/bin/python manage.py collectstatic --settings=my_project.settings to fill up your static folder with your static files again.

Your static files are now collected on an EFS file system! So now we're all set: if you restart your EC2 instance, it will remount the file system and your static files will be served just as before.
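If you script your deployments, a small sanity check before running collectstatic can prevent files from silently being written to the local EBS disk when the mount is missing. Here's a sketch; the mount path is the one from my fstab entry above, so adjust it to yours.

    # check_static_mount.py -- sketch: refuse to collect static files when the EFS
    # volume is not mounted. The path below is taken from my setup; adjust it.
    import os
    import sys

    STATIC_MOUNT = "/var/www/my_django_site.com/static"

    if not os.path.ismount(STATIC_MOUNT):
        sys.exit(STATIC_MOUNT + " is not a mounted file system -- mount EFS first")

    print(STATIC_MOUNT + " is mounted, safe to run collectstatic")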

Summary of steps for creating an autoscaling group

Now it's time to create an autoscaling group. We start by creating a launch template that will be used to launch new EC2 instances in the autoscaling group. We then create an Application Load Balancer with our existing EC2 instance in its target group. Next, we use the launch template to create an autoscaling group. Finally, we attach the autoscaling group to the target group used by the load balancer.

Creating a launch template

  1. In the EC2 console, select your EC2 instance and, from the Actions menu, choose Image -> Create Image. This creates an AMI for your EC2 instance. Any new EC2 instance created from this image will be an exact replica of your first EC2 instance.
  2. Now create a Launch Template. Still in the EC2 console, select Launch Templates, then Create Launch Template.
  3. Give your template a name (e.g. my-project-production) and a version description (e.g. version-3.4.15 if that's the version of your app). Select the Auto Scaling guidance option, so that AWS helps you fill in the template correctly.
  4. Choose the AMI you created previously.
  5. Select the instance type (same as your current EC2 instance, e.g. t3.small), Key pair (the one you used for your EC2 instance), use VPC as the networking platform and select the security groups of your EC2 instance (I have a webserver group for HTTP/HTTPS access and a management group for restricted SSH access).
  6. Add an EBS volume to the template, select the same values as for your EC2 instance (you can open a tab in your browser to look at your EC2 instance to copy the values).
  7. Add tags, e.g. I use the key/value pair environment/production and name/my-app-autoscaling so that I can easily recognise EC2 instances launched with this launch template.
  8. Finally, save your new launch template (a scripted equivalent is sketched below).
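The console is fine for a one-off, but if you prefer to script this step, a boto3 sketch along the following lines creates an equivalent launch template. All IDs and names below are placeholders for the values you picked in the console.

    # create_launch_template.py -- sketch using boto3; every ID, name and tag is a
    # placeholder for the values chosen in the console steps above.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-central-1")

    ec2.create_launch_template(
        LaunchTemplateName="my-project-production",
        VersionDescription="version-3.4.15",
        LaunchTemplateData={
            "ImageId": "ami-0123456789abcdef0",  # the AMI created from your instance
            "InstanceType": "t3.small",
            "KeyName": "my-keypair",
            "SecurityGroupIds": ["sg-0aaa111122223333a", "sg-0bbb444455556666b"],
            "TagSpecifications": [{
                "ResourceType": "instance",
                "Tags": [
                    {"Key": "environment", "Value": "production"},
                    {"Key": "name", "Value": "my-app-autoscaling"},
                ],
            }],
        },
    )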

Creating a load balancer and target group

Now create the load balancer following these step-by-step instructions given by AWS. There are a few things you may want to change in these steps:

  • In step 2, when configuring your listeners, you probably also want to add an HTTPS listener on port 443. You'll need to have a TLS certificate for your domain to do that.
  • In step 3, when adding security groups: if you already have a security group for your EC2 instances that allows incoming traffic on ports 80 and 443, use that one; it keeps things clean to reuse existing security groups. There's no harm in creating a new one as instructed, though.
  • Steps 4 and 5 describe how you create a target group and add your existing EC2 instance to the target group. We'll see later how to assign this target group to your autoscaling group so that new instances are automatically launched inside the target group.
  • Step 6 tells you how to test your load balancer. If you use the DNS name of the load balancer, that hostname is passed on to your EC2 instance, and depending on the configuration of your nginx server and Django settings.py, this normally won't work. You'd need to add that hostname to the server_name directive in your nginx configuration and to the ALLOWED_HOSTS list in your settings.py file. Better, change the CNAME record for your website to point to the load balancer. If you don't want to disrupt www traffic to your EC2 instance for now, add a new subdomain entry, e.g. management.example.com, pointing to the load balancer, and add this to your nginx configuration and ALLOWED_HOSTS as well (see the sketch below). It'll be useful for later anyway.
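For example, assuming the hypothetical subdomain management.example.com from the last bullet, the Django side of that change is just an extra entry in ALLOWED_HOSTS (your nginx server_name needs the same addition):

    # settings.py -- sketch: hostnames Django will accept requests for.
    # management.example.com is a hypothetical subdomain pointing at the load balancer.
    ALLOWED_HOSTS = [
        "example.com",
        "www.example.com",
        "management.example.com",  # for testing via the load balancer
    ]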

Creating the autoscaling group

Finally create your autoscaling group following these instructions by AWS. I've used the following options:

  • In step 5c, I've used the "Latest" setting for the version of the launch template to use. We'll see how we can update the launch template with a new version when deploying, so you want your autoscaling group to pick up the latest version automatically.
  • In step 8, if you don't have subnets in multiple availability zones, open a new browser tab and create subnets in all the availability zones of the region where you have your VPC. I have 3 subnets, in eu-central-1a, eu-central-1b and eu-central-1c. You can create and change subnets in the VPC dashboard.
  • Perform step 10, to register your load balancer with your autoscaling group.
  • In step 11, set minimum and desired capacity to 1 and maximum capacity to 2 or 3 (for now). I have set my desired capacity to 2, so that if one instance fails and gets replaced, the other instance is always available to take over all traffic; otherwise you might have a couple of minutes of unavailability. But this comes at a cost of course, since you'll always have 2 EC2 instances running even at low traffic.
  • Also in step 11b, I've set an autoscaling policy based on an average CPU utilisation of 50%: if average CPU utilisation goes above 50%, a new instance is added; if it goes below, any extra instances are removed one by one (see the sketch after this list).
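For reference, the same target-tracking policy can also be set up with boto3; this is a sketch, and the group and policy names are placeholders for whatever you chose above.

    # scaling_policy.py -- sketch: target-tracking policy keeping average CPU around 50%.
    # Group and policy names are placeholders.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="eu-central-1")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-app-autoscaling",
        PolicyName="cpu-50-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": 50.0,  # add instances above 50% average CPU, remove below
        },
    )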

We're done! You have an autoscaling group that will automatically use your launch template to launch new instances when needed.

Now you should point your www record (or whatever your website's hostname is) to the load balancer, so that all traffic goes through the load balancer to your EC2 instances.

Check that everything works

If you started with only 1 EC2 instance in your target group and set the desired capacity to 2, you should see an additional EC2 instance being launched (it takes a few minutes); verify that it works as expected. You can change the desired capacity back to 1 after this check.
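If you'd rather do this check from a script than from the console, a boto3 sketch could look like this (the group name is a placeholder):

    # check_scaling.py -- sketch: raise desired capacity to 2, then list the instances
    # in the group to confirm the second one comes up. Group name is a placeholder.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="eu-central-1")
    GROUP = "my-app-autoscaling"

    autoscaling.set_desired_capacity(AutoScalingGroupName=GROUP, DesiredCapacity=2)

    group = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[GROUP])["AutoScalingGroups"][0]
    for instance in group["Instances"]:
        print(instance["InstanceId"], instance["LifecycleState"], instance["HealthStatus"])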

Problems with the current situation

There are still a few issues with the current situation that we need to solve:

  • Health checks: Your load balancer is responsible for checking the health of your instances. It automatically removes unhealthy instances from the target group (i.e. it directs traffic to healthy instances only). Your autoscaling group also uses the load balancer's health check to replace instances if needed. So we need a better health check.
  • Static files manifest on EFS: When we deploy a new version of our code, we'll run collectstatic, which will replace the static files manifest on our centralised EFS storage. When an updated instance receives a request, it'll use the new version of a static file. The CDN will try to fetch that new version and might hit an old EC2 instance (where Whitenoise is running) that doesn't know about the new version yet and returns a 404.
  • Deploying is now a bit trickier: it involves creating an instance with the correct code, collecting the static files, creating an AMI for the instance, updating our launch template with the new AMI and replacing all instances in the auto-scaling group.

These issues will be addressed in part 4 of this series.


Dirk Groten is a respected figure in the Dutch tech and startup scene, having run some of the earliest mobile internet services at KPN and Talpa, and a well-known pioneer of AR on smartphones. He was CTO at Layar and VP of Engineering at Blippar. He now runs and develops Dedico.

Things to remember

  • EFS is an easy alternative to S3 for storing your static files
  • Use an application load balancer to direct traffic to multiple EC2 instances
  • Set up autoscaling to let AWS automatically replace unhealthy instances and add instances when your traffic grows.
  • A Launch Template allows you to store your instance configuration
