The Overlooked Benefits of Migrating to the Cloud

Nov 30, 2023

It is often debated whether migrating to the cloud saves money or adds cost. Both arguments can be valid depending on the desired solution; however, there are many other factors, often overlooked, that should be considered when deciding whether to move a product to the cloud. There is already widespread acceptance that backups and snapshots of databases and infrastructure are easier and more stable in the cloud, and that scaling and high availability are easier to achieve.

Introduction

Okay, so the buzzword "automation" was used; what's the big deal? It can be argued that Puppet, Ansible, Jenkins, and the like could be used to do these things locally, so what? Good luck finding someone who is good and affordable (we are) to tackle any of that. There's a reason Puppet is losing market share, and if you want to use Ansible in the same way, you need to shell out money for Ansible Tower. Jenkins is an entirely different use case and pairs quite well with cloud solutions, but to narrow down the scope of everything the cloud has to offer, a specific example will be used throughout the rest of this article.

The clients that typically approach us are IT professionals at mid- to large-sized companies who have attempted to migrate a custom application to the cloud but have not succeeded. They usually get a decent amount of the way there but need us to lift them over the hurdle, and while we're at it, we sprinkle some automation on their solution.

Use Case

A client approached us about an application they were running as a VM using Vagrant, with customers standing it up themselves. The application itself was customized logic and content for a popular CMS. The desire was to push this to Amazon Web Services (AWS) or Azure; we opted for AWS because that was where the client had already attempted their own migration.

Docker

It's safe to say that any time you can migrate something to a Docker container it's a win (I know this is a generalization, so don't get upset). The first task tackled was getting away from Vagrant and creating a Dockerfile for this project to ready it for deployment to Elastic Container Service (ECS) on either Fargate or EC2 instances; at that point it wasn't quite clear which direction we would take. For those who only have experience spinning up pre-created Docker images rather than creating a custom image: every image is created from a base image, which is typically a version of Alpine because of how lightweight it is.

Alpine is what was chosen in our scenario, pinned to a specific version so that builds would not pull a newer image behind a floating tag each time they ran; this avoids breaking backwards compatibility and increases stability. To pull the CMS code provided by the client, git needed to be added. To perform typical admin-related functions in Alpine, BusyBox and OpenRC needed to be added. To install Composer, curl needed to be added. And to run the code, PHP needed to be added.
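
A minimal sketch of what such a Dockerfile might look like is shown below; the pinned Alpine tag, PHP version, and package names are illustrative assumptions, not the client's actual configuration:

    # Hypothetical sketch; the pinned tag and package names are assumptions.
    FROM alpine:3.18

    # git to clone the CMS code, curl to fetch the Composer installer,
    # openrc/busybox-extras for typical admin tasks, php82 to run the code.
    RUN apk add --no-cache git curl openrc busybox-extras \
        php82 php82-phar php82-openssl php82-mbstring

    # Install Composer via the official installer script.
    RUN curl -sS https://getcomposer.org/installer | php82 -- \
        --install-dir=/usr/local/bin --filename=composer

    WORKDIR /var/www/html

Pinning the base image to a specific tag like 3.18, rather than latest, is what keeps rebuilds reproducible.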

Cloud

Now that the bare minimum has been completed to make the transition easy, the real work can begin. Because the desire was to automate standing up resources in AWS, Terraform was utilized. Terraform Cloud was chosen because of our comfort level with it and because it is agnostic to any cloud platform. It allows users to set up Workspaces that link to GitHub repositories, letting you version control your Terraform code base.
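
The VCS link itself is configured in the Terraform Cloud UI; pointing a code base at such a workspace can be done with a cloud block along these lines (the organization and workspace names here are assumptions):

    # Hypothetical sketch; organization and workspace names are assumptions.
    terraform {
      cloud {
        organization = "example-org"
        workspaces {
          name = "cms-aws"
        }
      }
    }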

The first step was getting a simple skeleton environment stood up for this container. To do this, the provider (AWS), Application Load Balancer (ALB), ALB listener, ECS, ECS tasks, and Elastic Container Registry (ECR) had to be established. Once Terraform had successfully stood these up, the Docker image was uploaded and security groups were added to allow HTTP, HTTPS, and SSH. Next, AWS Secrets Manager was set up to store credentials randomly generated by Terraform for SSH and CMS login. Additionally, a map variable was created to hold the information used to create dynamic resources in Terraform. Unfortunately, the AWS provider does not allow the aws_ecs_service to be a dynamic resource. As a workaround, we moved the majority of the creation of client-specific resources into an external module so that we could loop over it with the clients variable, dynamically creating and tagging resources; a sketch of that loop follows.
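
This is roughly what looping a module over a clients map looks like; the variable shape and module path are assumptions rather than the client's actual code:

    # Hypothetical sketch; variable shape and module path are assumptions.
    variable "clients" {
      type = map(object({
        environment = string
        cms_version = string
      }))
    }

    module "client_env" {
      source   = "./modules/client-env"
      for_each = var.clients

      client_name = each.key
      environment = each.value.environment
      cms_version = each.value.cms_version
    }

Note that for_each on a module block requires Terraform 0.13 or newer.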

Next, the trick was getting these variables into the Docker container. To do that, Docker labels were set within Terraform's container_definition; they are retrieved inside the image by curling the container metadata URI endpoint, which the container already has access to since it resides in the AWS environment. Unfortunately, a manual process was needed to parse the JSON and set the variables inside the container.

Typically, Elastic File System (EFS) would be used to provide container persistence. However, issues prevented a clean mapping to this particular CMS container use case: the performance hit, due to PHP needing to run from that directory, was too great. Luckily, the CMS chosen has built-in GitHub integration that could be enabled with a simple flag, so EFS was not needed at this point. This caused a slight change in logic, from using Composer as the main way to install everything to using git (added via the Dockerfile), which required dynamically generating an SSH key for cloning and pushing the repository. The key was created in Terraform via the tls_private_key resource and added to the repository using the GitHub provider's github_repository_deploy_key resource.

After enabling GitHub integration and testing it, we noticed that the CMS users and credentials were not being pushed to the repository for persistence (rightfully so). This caused us to go back and reevaluate EFS. The decision was to mount EFS to a directory and use a cron'd rsync (also added to the Dockerfile) to sync changes to the mounted EFS directory for persistence. Two pieces of this flow lend themselves to short sketches, shown below.
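
First, reading the labels inside the container might look something like this; jq and the label name are assumptions (the actual parsing in our case was a manual process):

    # Hypothetical sketch; jq and the label name are assumptions.
    # ECS injects ECS_CONTAINER_METADATA_URI into the container's environment.
    METADATA=$(curl -s "$ECS_CONTAINER_METADATA_URI")
    # Docker labels set in the container_definition appear under .Labels.
    CMS_REPO=$(echo "$METADATA" | jq -r '.Labels.cms_repo')
    export CMS_REPO

Second, generating the deploy key and attaching it to the repository; the repository name here is an assumption:

    # Hypothetical sketch; the repository name is an assumption.
    resource "tls_private_key" "cms_deploy" {
      algorithm = "RSA"
      rsa_bits  = 4096
    }

    resource "github_repository_deploy_key" "cms" {
      repository = "cms-content"
      title      = "terraform-generated deploy key"
      key        = tls_private_key.cms_deploy.public_key_openssh
      read_only  = false
    }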

The next hurdle was enabling HTTPS, which was slightly tricky to fold into a good automation pipeline because the client wanted subscribers to handle DNS on their end. A certificate was created using AWS Certificate Manager (ACM), and Simple Notification Service (SNS) was used to send the client an email with the required DNS entries. Since SNS requires the subscriber to click a confirmation link, the hashicorp/time provider was used to sleep for 30 seconds to give the client time to click it. And since AWS's SNS topic does not support the email protocol through Terraform, a null_resource was used as a workaround to manually subscribe the client to the topic, and another null_resource was used to send the email with the DNS information.
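
Wired together, that workaround might look roughly like this; the topic name, email address, and message contents are assumptions, and the AWS CLI is assumed to be available where Terraform runs:

    # Hypothetical sketch; topic name, email, and message are assumptions.
    resource "aws_sns_topic" "dns_info" {
      name = "cms-dns-info"
    }

    # The email protocol is not supported through Terraform,
    # so subscribe via the AWS CLI instead.
    resource "null_resource" "subscribe_email" {
      provisioner "local-exec" {
        command = "aws sns subscribe --topic-arn ${aws_sns_topic.dns_info.arn} --protocol email --notification-endpoint client@example.com"
      }
    }

    # Give the client time to click the confirmation link.
    resource "time_sleep" "wait_for_confirmation" {
      depends_on      = [null_resource.subscribe_email]
      create_duration = "30s"
    }

    resource "null_resource" "send_dns_email" {
      depends_on = [time_sleep.wait_for_confirmation]
      provisioner "local-exec" {
        command = "aws sns publish --topic-arn ${aws_sns_topic.dns_info.arn} --message 'Please add the ACM DNS validation records to your zone.'"
      }
    }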

To reduce costs, the decision was made to move all dev environments behind a single Application Load Balancer (ALB). This was done by first moving the creation of the ALB outside the module so that it was only created once, then passing its ID into the module via a Terraform variable so the module could interact with it. A depends_on clause was utilized to ensure the module was not accessed until the dev ALB had finished being created. To let all the environments share a single load balancer, traffic needed to be directed to each of the development target groups, which was accomplished with an aws_lb_listener_rule using a host_header condition mapped to the client's dev URL. Finally, a wildcard certificate, validated through Route 53, was used on the dev environment ALB to enable HTTPS on all dev sites.
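
The host-based routing rule might look like the following; the listener ARN variable, target group resource, and domain are assumptions:

    # Hypothetical sketch; listener ARN, target groups, and domain are assumptions.
    resource "aws_lb_listener_rule" "dev_host" {
      for_each     = var.clients
      listener_arn = var.dev_listener_arn

      action {
        type             = "forward"
        target_group_arn = aws_lb_target_group.dev[each.key].arn
      }

      condition {
        host_header {
          values = ["${each.key}.dev.example.com"]
        }
      }
    }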

Conclusion

After reading this, it may be easy for the eyes to glaze over. What was accomplished, however, is some heavy automation: the client only has to add a few variables whenever a company or individual wants to use this service, and then forward the email they receive with the DNS information on to the customer. After those two very easy tasks are completed, a persistent, scalable, production-ready CMS environment is ready to be developed on.

Written By: Noel Hahn