Deploying AWS Infrastructure…with Terraform…using Azure DevOps?

Welcome to my first mash-up of cloud providers and tools post! I guess this can be considered some new age “multi-cloud” stuff, but my first venture into the world of Azure DevOps opened my eyes to how powerful and surprisingly accessible this tool can be. Let me frame that comment with some background.

More Dev and Less Ops

I cut my teeth in the world of virtualization, storage and converged infrastructure. A while back I started digging into AWS and realized (like many others certainly have by this point) that cloud wasn't just some buzzword that would fade away in a few years. Even if my customers at the time were only talking about cloud, I knew that building up my cloud skills would be valuable. Fast forward to the present, where I'm on a cloud-focused team and customers can't seem to get enough cloud. If it wasn't already evident a few years ago, the Dev half of DevOps is becoming a very important skill set for those of us who built careers on the Ops side of the fence.

A long time ago, I actually got my Bachelor's degree in Computer Science. It feels like a lifetime now, but I once did a lot of C++ object-oriented programming and actually passed classes (even while seemingly getting in my own way) like Operating Systems and Computer Graphics. While picking that side of things up again isn't necessarily quite like riding a bike, I've always known that foundation would help with my immediate goals of flexing the Dev part of my brain and building the skills required by this newer IT landscape.

Getting started is the hardest part

I listened to a couple of Day Two Cloud podcast episodes fairly recently, one titled "Making the Transition to Cloud and DevOps" and another titled "Building Your First CI/CD Pipeline." These conversations hit home: while I understood these concepts at a high level, I really needed to get my hands dirty and build something. One thing that has stuck with me since my old Comp Sci days is that simply finding a way to start is the hardest hill to climb. It's still true today for things like writing a blog post, but once I get started, it is much easier to plow through whatever is in front of me.

My starting point for using Azure DevOps (ADO) presented itself by way of a customer project. Being forced to learn something that comes with a deadline is sometimes the best way to reach that starting point. This project required an environment to be built within Azure, where the customer already had some infrastructure and had started developing new apps. There was also an ask to have a similar environment built in AWS, even though it wasn't widely used at the moment. Terraform made perfect sense in this case, as we could deliver Infrastructure as Code (IaC) using one platform, rather than using both Azure Resource Manager and CloudFormation.

Azure DevOps from the ground up

It was proposed that Azure DevOps would be used to store and deploy the Terraform code for this project. Thankfully, I was working with another team member who had a lot of ADO experience. When it comes to learning new tech, it is much easier having someone step you through the process rather than starting from scratch. ADO is a very powerful tool that can do many different things, which was fairly intimidating to me. One hour of knowledge transfer with someone to step through the basics was enough to spark my interest and show me that it was actually within my grasp.

I took a more passive role for the deployment of a landing zone into Azure, but soaked in as much as I could during that process. That gave me enough confidence to volunteer for the AWS portion of the deliverable. Since we had already set up ADO to deliver an Azure environment, all I had to do was “AWS-itize” the Terraform code, right? Kinda, sorta…

After translating the basic Azure infrastructure design over to AWS, I also translated the Terraform code. That was the easy part of the process. Once that was complete, I began to dig into the ADO config and noticed that there was wiggle room to build upon it and learn something new.

Repositories, Pipelines and Workspaces, oh my!

For those as unfamiliar with ADO as I was, I want to highlight some of the basics. ADO is Microsoft's end-to-end DevOps toolchain as a service. Yes, it has Azure in the name, but it is a SaaS platform that is separate from the Azure public cloud. ADO is cloud and platform agnostic, and can integrate or connect to just about anything under the sun. ADO's major components are:

  • Azure Pipelines – platform agnostic CI/CD pipeline
  • Azure Boards – work tracking, reporting and visualization
  • Azure Artifacts – package management for Maven, npm and NuGet package feeds
  • Azure Repos – ADO-hosted private Git repositories
  • Azure Test Plans – integrated planned and exploratory testing solution

I focused solely on Azure Repos and Azure Pipelines for this project. Azure Repos is fairly simple, as it is just ADO's version of a Git repo. It is functionally the same as anything else that uses Git, so if you are familiar with GitHub, you know how to use it.

It is nice to be able to store code in the same platform that runs your CI/CD pipeline, but a pipeline can still connect to GitHub, among others, if you so desire. I also like the ease of cloning directly into VS Code from Azure Repos. It makes pushing changes up to Azure Repos a breeze.

The biggest learning curve for me was developing the Azure Pipeline. Pipelines in ADO use a YAML file to define the tasks a pipeline will perform. YAML is pretty common in IaC and the world of cloud DevOps, so the biggest hurdle is really understanding the ADO pipeline components. Thankfully I had a working pipeline to start with, and plenty of Microsoft documentation to help fill the gaps.

One of the really cool things about Azure Pipelines is the ability to use a Microsoft hosted agent to perform tasks for you. By defining an agent pool (Linux, Windows or Mac), your pipeline can spin up a VM on the fly for you to do the actual work.

The meat of a pipeline (at least the pipeline I created) is defining jobs for the pipeline to run. In this case I only had one job: create AWS infrastructure using Terraform. That job consisted of a number of tasks. Thinking of it like a workflow made it easier to understand how to best configure the tasks. Pipeline tasks can be chosen from a wide variety of options that are pre-defined within ADO, or custom built programmatically.
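To make that structure concrete, here is a minimal sketch of what a pipeline like this looks like in YAML. This is not the actual customer pipeline; the job name is made up, and the TerraformInstaller task assumes the marketplace Terraform extension is installed:

```yaml
trigger:
  - main

pool:
  vmImage: ubuntu-latest        # Microsoft-hosted Linux agent, spun up on the fly

jobs:
  - job: aws_infra
    displayName: Build AWS infrastructure with Terraform
    steps:
      # Tasks run in order, like a workflow
      - task: TerraformInstaller@0      # pre-defined task from the Terraform extension
        inputs:
          terraformVersion: 'latest'
      - script: terraform version
        displayName: Sanity check
```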

The initial Azure version of this pipeline used a Linux agent VM to install Terraform, then install the Azure CLI. We used Azure as a backend for the Terraform state, so the next tasks were simply bash scripts that used the Azure CLI to log in to the proper Azure environment, create a Resource Group, Storage Account and Container, and configure the Terraform backend.
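Those bash tasks boil down to a handful of Azure CLI calls. A rough sketch, with placeholder resource names and the service principal credentials assumed to come from pipeline variables:

```shell
# Log in as a service principal (credentials held as pipeline variables)
az login --service-principal -u "$ARM_CLIENT_ID" -p "$ARM_CLIENT_SECRET" --tenant "$ARM_TENANT_ID"

# Create the resources that will hold the Terraform state
az group create --name tfstate-rg --location eastus
az storage account create --name tfstatedemo123 --resource-group tfstate-rg --sku Standard_LRS
az storage container create --name tfstate --account-name tfstatedemo123
```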

While I had done a bit of Terraform on my own, it was never for anything that was shared by an organization. The backend config was new to me, but makes a ton of sense when collaboration is taken into account. We also utilized a Terraform workspace so that this code could be re-used for multiple hubs/spokes across different regions. Diving into the collaboration side of Terraform helped tie some of the IaC CI/CD concepts together for me. Once the backend was configured, all that was left were quick Terraform init, validate, plan and apply tasks to get the ball rolling.
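For reference, an azurerm backend block looks something like this (all values are placeholders, not the real project's names):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatedemo123"
    container_name       = "tfstate"
    key                  = "aws-spoke.terraform.tfstate"
  }
}
```

With the backend in place, a `terraform workspace new us-east-1` (or `workspace select`) keeps a separate state per hub/spoke, which is what lets the same code be re-used across regions.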

From a Terraform perspective, the switchover to deploying AWS stuff was super simple. Even though I was using an Azure tool, all it took was a change of the Terraform provider and some access keys to create infrastructure directly in AWS!
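That switch is really just the provider block. A sketch, with an illustrative region and the keys fed in from variables rather than hard-coded:

```hcl
provider "aws" {
  region     = "us-east-1"               # placeholder region
  access_key = var.aws_access_key_id     # passed in from pipeline secrets
  secret_key = var.aws_secret_access_key
}
```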

Learning some new tricks

I noticed that the task to install the Azure CLI on the agent VM took 45-60 seconds to complete, so I dug into the built-in ADO tasks and saw that I could simply use - task: AzureCLI@2 within the pipeline and not need to wait a whole minute(!) for the CLI install to take place…thanks MS documentation!
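The built-in task collapses the install-then-script pair into a single step. A sketch of how it can be used (the service connection name here is made up):

```yaml
- task: AzureCLI@2
  displayName: Create Terraform backend resources
  inputs:
    azureSubscription: 'my-azure-service-connection'   # placeholder service connection
    scriptType: pscore                                 # PowerShell Core
    scriptLocation: inlineScript
    inlineScript: |
      az group create --name tfstate-rg --location eastus
```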

Another cool part of ADO is the ability to store environment variables within the pipeline itself. This is an easy way to keep things like secrets and access keys both safe and readily available for use within the pipeline. One interesting rabbit hole I went down: I wanted to add conditionals to the scripts that created the Terraform backend, checking whether the Azure Storage Account and friends exist before actually creating them. I'm not very familiar with bash scripting and for the life of me could not figure out how to get these conditionals to work. I pivoted to PowerShell instead of bash, which is supported by the AzureCLI task and was more familiar to me. It also resulted in learning that environment variables have different syntax across different systems!
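In PowerShell those conditionals finally clicked. A sketch of the check-before-create logic, again with placeholder names, and not necessarily the exact check the real pipeline used:

```powershell
# Storage account names are globally unique, so check name availability first
$available = az storage account check-name --name tfstatedemo123 --query nameAvailable -o tsv
if ($available -eq "true") {
    az storage account create --name tfstatedemo123 --resource-group tfstate-rg --sku Standard_LRS
}
else {
    Write-Output "Storage account already exists, skipping creation"
}
```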

Once I got the PowerShell working, I also figured out how to pass a variable between tasks. The script to create the backend had to grab a secret key from the Azure Storage Account, and that variable was required in a later task for the Terraform init -backend-config command. Since I was no longer installing the Azure CLI and defining that variable directly within the agent, I had to first define the variable as an empty string in the pipeline and use a Write-Output "##vso[task.setvariable variable=KEY]$KEY" command to have the PowerShell script task pass the value it grabbed back out to the pipeline itself. Google to the rescue again!
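Putting those pieces together, the backend script fetches the key and echoes the logging command so later tasks can read it as $(KEY). A sketch with placeholder resource names:

```powershell
# Fetch the storage account key, then hand it back to the pipeline as variable KEY
$KEY = az storage account keys list --resource-group tfstate-rg --account-name tfstatedemo123 --query "[0].value" -o tsv
Write-Output "##vso[task.setvariable variable=KEY]$KEY"
```

A later task can then run something along the lines of terraform init -backend-config="access_key=$(KEY)".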

The last cool thing I figured out was how to use parameters within the pipeline. In this case, once a Terraform apply builds the stuff, what happens next? Well, if you are testing and running it a number of times to validate, you need to throw a Terraform destroy in for good measure. I found out that you can add parameters that essentially let the person running the pipeline inject values at queue time. In this case, a simple boolean check box for running either apply or destroy allowed me to add conditionals that define which Terraform command the pipeline runs at the end.
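One way to sketch that in the pipeline YAML is a runtime boolean parameter plus template expressions that decide which step makes it into the compiled pipeline (this may not match the exact conditional mechanism I used, and the names are illustrative):

```yaml
parameters:
  - name: destroy
    displayName: Run terraform destroy instead of apply
    type: boolean
    default: false

steps:
  # Only one of these two steps ends up in the expanded pipeline
  - ${{ if eq(parameters.destroy, false) }}:
      - script: terraform apply -auto-approve
        displayName: Terraform apply
  - ${{ if eq(parameters.destroy, true) }}:
      - script: terraform destroy -auto-approve
        displayName: Terraform destroy
```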

virtualBonzo’s take

Azure DevOps is an extremely powerful and far-reaching tool. My use case and initial demo with Terraform/AWS only begin to scratch the surface of what is possible with ADO. Regardless, it was one little victory for me. There are also many other tools, and possibly more effective ways, to do what I did in this case. I was very lucky (and thankful) to be given something to work with, and that kick in the pants is exactly what I needed to get my hands dirty and learn something new. I am sure there will be more to come in this space as I continue to explore the world of DevOps. All it takes is hours' worth of effort and a bunch of green check boxes to get excited, only to realize there is still so much more out there to learn…

