A lot has happened since the initial release of Terraform in 2014: new providers, changes to the CLI, and new built-in functions. But I think we are yet to see one of its biggest improvements, the Cloud Development Kit. I won't assume that you are familiar with Terraform or even with Infrastructure as Code, so I'll explain both very briefly.
Infrastructure as Code
Working with infrastructure was always a complicated process: people had to put servers in place, configure them, set up the operating system, and finally install the dependencies to get an application up and running. Even with documentation and experience, the process was never without its problems, and as a result, provisioning was inconsistent, slow, and prone to human error.
Infrastructure as Code came with the idea of defining all the configuration in files, so that we could have easy-to-read, consistent, and reusable templates. There are multiple solutions that can help us with this, among them Azure Resource Manager, AWS CloudFormation, and Terraform.
Terraform configuration files are written in HashiCorp Configuration Language (HCL). In those configuration files, you declare the resources you are going to manage within your provider.
To give you an idea of how it is done, here is an example of how you declare your resources.
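A minimal sketch using the IBM Cloud provider this article targets (names and the CIDR block are illustrative):

```hcl
resource "ibm_is_vpc" "vpc" {
  name = "tutorial-vpc"
}

resource "ibm_is_subnet" "subnet" {
  name            = "tutorial-subnet"
  vpc             = ibm_is_vpc.vpc.id
  zone            = "us-south-1"
  ipv4_cidr_block = "10.240.0.0/24"
}
```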
With something as simple as that you can create a VPC and a subnet. Seems easy, right? Of course you can improve it by adding variables and more resources, and even include modules, but that's not in the scope of this article. You can learn more about Terraform here: https://learn.hashicorp.com/terraform.
Since last year we have a new alternative for working with Terraform: the Cloud Development Kit (CDK). HashiCorp collaborated with the AWS CDK team to support the ability to declare our resources in general-purpose programming languages and then generate the Terraform configuration files needed for provisioning.
In this article, I'll show you how to use the Terraform CDK with TypeScript to create resources on IBM Cloud, and finally configure a CI/CD pipeline with Travis CI.
Disclaimer: We will be using IBM Cloud, but to show you how to include a backend I won't be using IBM Schematics, which is the IBM IaC service based on Terraform. If you are following along, don't forget to delete everything you create at the end.
Creating the Project
First, install the CDK CLI to be able to run all the commands; then run cdktf init to initialize the project. The init command will create all the files you need to start writing some code.
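The setup steps look roughly like this (the template flags may vary between cdktf versions, so check the CLI help for yours):

```shell
# Install the CDK for Terraform CLI (requires Node.js and npm)
npm install -g cdktf-cli

# Create the project folder and scaffold a TypeScript project
mkdir cdk-tutorial && cd cdk-tutorial
cdktf init --template=typescript --local
```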
Before we continue, this is what we are going to create with the CDK:
- 1 VPC.
- 2 Subnets.
- 2 Public Gateways.
- 1 Public Load Balancer.
- 1 Security Group.
- 1 Instance group to handle our instances.
In the cdk-tutorial folder, open cdktf.json. In this file, we declare the providers we are going to use; in this case, the IBM Cloud provider.
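A sketch of what cdktf.json might look like (the provider source string and version constraint are illustrative; check the IBM Cloud provider's registry page for the current ones):

```json
{
  "language": "typescript",
  "app": "npm run --silent compile && node main.js",
  "terraformProviders": [
    "IBM-Cloud/ibm@~> 1.21.0"
  ]
}
```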
Then, run the cdktf get command to download all the dependencies to use IBM Cloud. All the dependencies will be located in the .gen folder.
Configuring the Provider and creating your first resource
With the dependencies in place, your IDE autocomplete feature will make your life easier than ever before. Configurations in CDK are represented as classes, so all we have to do is create instances of the resources we want to work with, inside the MyStack class. Open the main.ts file and let’s declare the provider and a VPC.
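A sketch of main.ts, assuming the generated bindings from `cdktf get` expose classes named `IbmProvider` and `IsVpc` (the exact names depend on the provider version you downloaded):

```typescript
import { Construct } from "constructs";
import { App, TerraformStack } from "cdktf";
// Generated bindings from `cdktf get`
import { IbmProvider, IsVpc } from "./.gen/providers/ibm";

class MyStack extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);

    // Configure the IBM Cloud provider; the API key is read from an
    // environment variable, so it never appears in the code.
    new IbmProvider(this, "ibm", {
      region: "us-south",
      generation: 2,
    });

    // Declare a VPC, just like the HCL resource block
    new IsVpc(this, "vpc", {
      name: "tutorial-vpc",
    });
  }
}

const app = new App();
new MyStack(app, "cdk-tutorial");
app.synth();
```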
If we compare the TypeScript code, we'll see it is very similar to what we do with HCL.
As in HCL, we can use variables here as well; we just need to create a new instance of the TerraformVariable class. This variable will be used in all the resources as a prefix for the name.
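A sketch of the variable, declared inside the stack constructor (the variable name and default are illustrative; its value can also come from a TF_VAR_base_name environment variable):

```typescript
const baseName = new TerraformVariable(this, "base_name", {
  type: "string",
  default: "tutorial",
  description: "Prefix for the name of every resource",
});

// Then use it when naming resources:
const vpc = new IsVpc(this, "vpc", {
  name: `${baseName.value}-vpc`,
});
```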
To create the public gateways we need the ID of the previously declared VPC, so we will create two instances of the public gateway and assign the ID of the VPC.
It's important to mention that we need to assign the instance of the class to a constant only when a reference is needed; otherwise, the cdktf CLI will report it as an error.
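A sketch of the two gateways, assuming a generated class named `IsPublicGateway` and the `vpc` and `baseName` constants from earlier (zone names are illustrative). We keep the constants because the subnets will reference them later:

```typescript
const gateway1 = new IsPublicGateway(this, "gateway-1", {
  name: `${baseName.value}-gateway-1`,
  vpc: vpc.id,          // reference to the VPC declared earlier
  zone: "us-south-1",
});

const gateway2 = new IsPublicGateway(this, "gateway-2", {
  name: `${baseName.value}-gateway-2`,
  vpc: vpc.id,
  zone: "us-south-2",
});
```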
We won't be using the default address prefixes for this article; instead, we will create two, one for each zone. But since there is no implicit dependency between the subnets and the address prefixes, we need to declare it explicitly using the dependsOn property. That way we can be sure the subnets are created after the address prefixes.
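A sketch for the first zone (class names and CIDRs are illustrative; the second zone follows the same pattern):

```typescript
// One address prefix per zone
const prefix1 = new IsVpcAddressPrefix(this, "prefix-1", {
  name: `${baseName.value}-prefix-1`,
  vpc: vpc.id,
  zone: "us-south-1",
  cidr: "10.240.0.0/24",
});

// The subnet does not reference the prefix directly, so we make the
// dependency explicit with dependsOn.
const subnet1 = new IsSubnet(this, "subnet-1", {
  name: `${baseName.value}-subnet-1`,
  vpc: vpc.id,
  zone: "us-south-1",
  ipv4CidrBlock: "10.240.0.0/24",
  publicGateway: gateway1.id,
  dependsOn: [prefix1],
});
```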
Sometimes, you have resources already created on your Cloud Service Provider account that you might want to reference. We will create an SSH Key required by the instance template we will declare later.
On the IBM Cloud Console menu click on VPC Infrastructure — SSH Keys, then click on the Create button, name your key “tutorial” and complete the rest of the form.
Data Source declaration is similar to resource declaration, you just need to create an instance of the data source class.
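A sketch, assuming the generated data source class is named `DataIbmIsSshKey` (as with the resources, the exact name comes from the generated bindings):

```typescript
// Look up the "tutorial" key we created through the console
const sshKey = new DataIbmIsSshKey(this, "ssh-key", {
  name: "tutorial",
});
```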
Load Balancer and Security Groups
For the Load Balancer and Security Groups there's nothing new to explain in terms of CDK. Our security group will allow outgoing traffic to ports 80 (HTTP), 443 (HTTPS), and 53 (DNS).
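A sketch of both, assuming the two subnets are available as `subnet1` and `subnet2` and that the generated bindings expose classes like these (rule properties are illustrative):

```typescript
const securityGroup = new IsSecurityGroup(this, "sg", {
  name: `${baseName.value}-sg`,
  vpc: vpc.id,
});

// One outbound rule per port: 80, 443, and 53
for (const port of [80, 443, 53]) {
  new IsSecurityGroupRule(this, `sg-rule-${port}`, {
    group: securityGroup.id,
    direction: "outbound",
    remote: "0.0.0.0/0",
    tcp: [{ portMin: port, portMax: port }],
  });
}

const loadBalancer = new IsLb(this, "lb", {
  name: `${baseName.value}-lb`,
  type: "public",
  subnets: [subnet1.id, subnet2.id],
});
```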
Built-in Functions?
We want to install nginx on instances at the time they launch. The instance templates include a property called User Data which allows us to execute some commands at launch time.
HCL has some built-in functions that allow us to perform different operations with files, strings, numbers, and so on. Normally we would have a script file to use as user data, and to read the contents of that file we would use the file function.
At the time of writing, the built-in functions have not been abstracted into the CDK, so we can't use them, but we can rely on the libraries of the language of our choice.
To read the contents of a script we will use the nodejs filesystem library.
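A self-contained sketch of the relevant call (here we write a sample script to a temp file first so the example runs on its own; in the stack you would read your real script, e.g. one that installs nginx, and pass the string to the template's user data property):

```typescript
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// A sample of the script we want to run at launch time
const script = "#!/bin/bash\napt-get update -y\napt-get install -y nginx\n";
const scriptPath = path.join(os.tmpdir(), "install-nginx.sh");
fs.writeFileSync(scriptPath, script);

// The relevant call: with an encoding, readFileSync returns the file
// contents as a string, ready to use as user data.
const userData = fs.readFileSync(scriptPath, "utf8");
console.log(userData.split("\n")[0]); // prints "#!/bin/bash"
```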
When everything is created, we might need an easy way to get the load balancer url. We can do that with output values.
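A sketch of the output, assuming the load balancer constant from earlier exposes a `hostname` attribute:

```typescript
// Expose the load balancer URL after deployment
new TerraformOutput(this, "lb_url", {
  value: `http://${loadBalancer.hostname}`,
});
```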
Where’s the backend?
We are almost ready, we just need a backend to keep the current state of the resources handled by Terraform. For demonstration purposes, we will create an etcd database.
On your IBM Cloud account, go to the top search bar, enter "Databases for etcd", and click on the result; then click on the Create button. Name your database "Terraform-etcd" and complete the rest of the form.
Click on the create button and wait until provisioning is done. Now click on your database, then go to the service credentials, click on New Credential and finally click on Add.
On your new credential take note of the endpoint, username and password, we will use those later.
Unlike resources and data sources, the backend is not declared inside the MyStack class. The etcd backend requires the endpoints, username, and password of your database, but of course, we don't want to put those in plain text. The good news is we can use environment variables.
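A sketch of attaching the backend to the stack instance, assuming cdktf exposes an `EtcdV3Backend` class and that the certificate files are generated into a certs folder (property names mirror the Terraform etcdv3 backend arguments, camelCased):

```typescript
const app = new App();
const stack = new MyStack(app, "cdk-tutorial");

// Credentials come from environment variables, never plain text
new EtcdV3Backend(stack, {
  endpoints: [process.env.ETCD_ENDPOINTS ?? ""],
  username: process.env.ETCD_USERNAME,
  password: process.env.ETCD_PASSWORD,
  cacertPath: "certs/ca.pem",
  certPath: "certs/client.pem",
  keyPath: "certs/client-key.pem",
});

app.synth();
```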
Before deploying you must consider the following:
For Terraform to be able to create resources on your account, it needs an API Key.
- In the IBM Cloud console, go to Manage — Access (IAM) — API keys.
- Click Create an IBM Cloud API key.
- Enter a name and description for your API key.
- Click Create.
- Take note of the API Key.
Finally, set all the required environment variables.
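Something along these lines (the variable names for the backend are whatever your code reads; IC_API_KEY is the variable the IBM Cloud provider picks up for authentication):

```shell
# IBM Cloud provider authentication
export IC_API_KEY="<your-api-key>"

# etcd backend credentials, taken from the service credential
export ETCD_ENDPOINTS="https://<host>:<port>"
export ETCD_USERNAME="<username>"
export ETCD_PASSWORD="<password>"
```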
Configuring etcdv3 as the backend requires you to provide the cacert_path, cert_path, and key_path options.
- To obtain the CA certificate we can use ibmcloud CLI.
- To obtain the certificate and its key, we can use openssl to generate a self-signed certificate.
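Roughly like this (the ibmcloud command and its output format are illustrative; check `ibmcloud cdb help` for the exact subcommand and options in your plugin version):

```shell
# Download the CA certificate for the etcd deployment (cdb plugin)
ibmcloud cdb deployment-cacert Terraform-etcd > certs/ca.pem

# Generate a self-signed client certificate and key
openssl req -x509 -newkey rsa:4096 -nodes \
  -keyout certs/client-key.pem -out certs/client.pem \
  -days 365 -subj "/CN=terraform"
```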
There are two choices to deploy when it comes to CDK:
- We can deploy with a single command using cdktf deploy.
- We can run the workflow in smaller steps by running cdktf synth to generate the configuration file in JSON format, which then we can use to run terraform commands.
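In commands, the two options look like this (cdktf synth writes the JSON configuration into the cdktf.out folder):

```shell
# One step: synthesize and apply in a single command
cdktf deploy

# Or step by step: generate the JSON configuration, then run Terraform
cdktf synth
cd cdktf.out
terraform init && terraform plan
```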
What if... we go to the next step and set up a pipeline? This is where Travis CI comes in to help us.
If you have been following this article, you already have the project almost complete; you are just missing the Travis configuration file. If this is your case:
- Create a GitHub repository.
- Initialize a local git repository on the cdk-tutorial folder and push its contents to the repository.
Otherwise, you can import my repository https://github.com/jonathanbc92/TerraformCDKTutorial in your account.
If you don't have a Travis CI account, you can sign up here: https://travis-ci.com/.
When we sign up, we’ll see that Travis CI asks for authorization on our GitHub account.
Then, it will ask us to activate all our repositories.
If you don't want to have all of them activated, go to your account settings and click on Manage repositories on GitHub.
On the Repository access section select “Only select repositories” and then select the repositories you want to work with.
Back on the dashboard, let’s see the repository settings.
We need to add the environment variables required by this article.
For Travis CI to work with your repository, you need to create the build configuration file .travis.yml. You can learn more about it here: https://docs.travis-ci.com/user/customizing-the-build/
In this case, we are setting up an Ubuntu 18.04 (Bionic) environment to run our build. We also set three environment variables: one for the Terraform version, a second for the terraform apply command options, and the last to set the base_name variable we declared before.
The branch configuration will tell our build to run only on the master branch.
We also define the commands that must be executed before the jobs:
- Download and install Terraform.
- Install npm.
- Download and install ibmcloud CLI.
- Login and install the Cloud Databases (cdb) plugin.
- Download the etcd CA certificate and generate our self-signed certificate to be able to communicate with the etcd backend.
Now we define two jobs:
- The first one, called terraform plan, will execute only on pull requests. Let's say you work on a development branch; when you create the pull request to merge the development branch into the master branch, terraform plan will execute, letting you see the changes to your resources so you can validate whether they meet your expectations.
- The second one, called terraform apply, will execute on pushes to the master branch; this includes approved pull requests and direct commits to master. In case your expectations were met and you decide to merge development into master, this job will execute, applying your changes.
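Putting all of the above together, a sketch of the .travis.yml (versions, the ibmcloud commands, and file paths are illustrative; adjust them to your setup):

```yaml
os: linux
dist: bionic
language: node_js
node_js:
  - 12

env:
  global:
    - TERRAFORM_VERSION=0.13.5
    - TERRAFORM_OPTIONS=-auto-approve
    - TF_VAR_base_name=tutorial

branches:
  only:
    - master

before_script:
  # Download and install Terraform
  - wget -q https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip
  - unzip -o terraform_${TERRAFORM_VERSION}_linux_amd64.zip -d $HOME/bin
  # Install the project dependencies and generate the provider bindings
  - npm install -g cdktf-cli
  - npm install
  - cdktf get
  # Install the IBM Cloud CLI, log in, and add the Cloud Databases plugin
  - curl -fsSL https://clis.cloud.ibm.com/install/linux | sh
  - ibmcloud login --apikey "$IC_API_KEY" -r us-south
  - ibmcloud plugin install cloud-databases -f
  # Fetch the etcd CA certificate and generate a self-signed client cert
  - ibmcloud cdb deployment-cacert Terraform-etcd > ca.pem
  - openssl req -x509 -newkey rsa:4096 -nodes -keyout client-key.pem -out client.pem -days 1 -subj "/CN=terraform"

jobs:
  include:
    - name: terraform plan
      if: type = pull_request
      script:
        - cdktf synth
        - cd cdktf.out && terraform init && terraform plan
    - name: terraform apply
      if: type = push AND branch = master
      script:
        - cdktf synth
        - cd cdktf.out && terraform init && terraform apply $TERRAFORM_OPTIONS
```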
To test the pipeline, we just need to create a new branch on the GitHub repository.
Then, on the main.ts file add a comment.
And commit the changes.
On the development branch click on Compare & pull request to create a new pull request.
Enter some description and click on Create pull request.
If we go back to the Travis dashboard, we will see a new build being created. Wait until the build finishes and then check the log; we should see the output of the terraform plan command at the bottom.
Back on the pull request window, we should see the build passed. Now we can click on Merge pull request.
This will trigger the terraform apply job, which will create our resources. On the Travis dashboard we should see another build being created; wait until it finishes and check the logs.
At the bottom, you will find our output value with the URL of the load balancer.
Wait a couple of minutes until the instances are ready and then open the URL on your favorite browser. You should see the nginx welcome page.
In this post, we walked through the Terraform CDK to create some resources on IBM Cloud and defined a CI/CD pipeline with Travis CI that has two jobs: the first for pull requests and the second for pushes directly to master. The Terraform CDK is still in alpha, so it will probably change quite often, but it is a good start.
I hope you keep building some great stuff, and give feedback to the community as well!