Continuous Delivery Pipelines for Infrastructure Code

  • August 24, 2018

Pipelines are not only for application code. Continuous Delivery matters for infrastructure too.

In modern application and infrastructure management, programmers and sysadmins work not only with application code but also with infrastructure code. The latter includes all the code that deploys the essentials needed to run your application: cloud resources (e.g. instances, databases), operating system configuration, base software, frameworks, and so on. Managing infrastructure by hand is an outdated approach, and it no longer works.

When you read about “Continuous Delivery Pipelines,” you usually find descriptions centered on application code (and sometimes infrastructure code), with specific instructions about the stages your code goes through before it is deemed production-ready. The basic CD pipeline, as described by Jez Humble and David Farley in their book [1], includes the following three stages:

Commit stage:

  1. Compile and integrate the software.
  2. Run the unit tests.
  3. Run code analysis.
  4. Publish artifacts (the packages used to deploy the code).

Acceptance test:

  1. Configure the environment using infrastructure code.
  2. Deploy the artifacts.
  3. Run automated functional and non-functional tests.

Production:

  1. Configure the environment using infrastructure code.
  2. Deploy the binaries.
  3. Run smoke tests.

These stages can vary depending on the specific needs of your project; you could also add UAT and capacity test stages, which are both very common. One point to emphasize about this approach is that infrastructure code is part of the application CD pipeline: application and infrastructure code are delivered together.
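
These stages are essentially a chain of gates: each one runs only if the previous one succeeded. As a rough, purely illustrative sketch (the Rake task names and placeholder commands below are ours, not from the book or from any specific project), the ordering could be expressed like this:

    # Rakefile -- illustrative sketch of pipeline stage ordering only.
    # A failing command aborts the task, so a broken stage stops the pipeline.

    task :commit_stage do
      sh './build.sh'              # compile / integrate (placeholder command)
      sh './run_unit_tests.sh'     # run the unit tests
      sh './run_analysis.sh'       # code analysis
      sh './publish_artifacts.sh'  # publish artifacts to the repository
    end

    task acceptance_stage: :commit_stage do
      sh './provision_env.sh acceptance'  # configure environment with infrastructure code
      sh './deploy.sh acceptance'         # deploy the published artifacts
      sh './acceptance_tests.sh'          # automated functional and non-functional tests
    end

    task production_stage: :acceptance_stage do
      sh './provision_env.sh production'
      sh './deploy.sh production'
      sh './smoke_tests.sh production'
    end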

Over the last several months we have been working on a big project for a large company. The assignment includes automating the delivery of about 60 applications to be deployed on premises. There are different development teams and several different pipelines: each project manages its own delivery of artifacts to internal repositories, and our project then takes those artifacts, builds the pipeline and installs the applications in real customer environments.

Our pipeline differs from the one described above in that it handles only infrastructure code, but the concepts and stages are the same. The tools used to build this code are listed below (a small Berksfile sketch follows the list):

  • Chef: configuration management and automation.
  • Kitchen (Test Kitchen): local development and testing of Chef cookbooks.
  • Berkshelf: cookbook dependency management.
  • ServerSpec: testing framework used to verify what the cookbooks install and configure.
  • Foodcritic: Chef code linter.
  • Python QA: internal tool.
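
As a small example of the dependency-management piece, a Berksfile for a hypothetical application cookbook could look like the following (the cookbook names and version constraints are invented for illustration):

    # Berksfile -- tells Berkshelf where to resolve this cookbook's dependencies from.
    source 'https://supermarket.chef.io'

    # Take the dependency list from this cookbook's metadata.rb.
    metadata

    # Additional cookbooks pinned explicitly (hypothetical names and versions).
    cookbook 'java', '~> 4.0'
    cookbook 'nginx', '~> 9.0'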

Commit stage

Each infrastructure developer works on a local workstation using Chef and Kitchen, writing the code that installs a given application in a specific cookbook and writing and running tests locally before the code is pushed to a Git repository. Jenkins Continuous Integration servers watch this repository and trigger the integration and tests for the modified cookbook. The process is identical to the one each developer executes on their own workstation, running “kitchen test” to integrate and test the code; the entire process runs on OpenStack. We do it in a central location so that everybody is notified if an issue arises. The code is checked after every push to make sure it is always correct and, if there is an issue, it is reported and fixed as soon as possible.
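
To give a feel for what that cookbook code looks like, here is a minimal, hypothetical recipe (the cookbook, package and service names are invented; the real cookbooks are obviously more involved):

    # cookbooks/myapp/recipes/default.rb -- hypothetical cookbook that installs one application.

    package 'myapp'

    template '/etc/myapp/myapp.conf' do
      source 'myapp.conf.erb'
      variables(port: node['myapp']['port'])
      notifies :restart, 'service[myapp]'
    end

    service 'myapp' do
      action [:enable, :start]
    end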

This stage is no different from the unit test step described in the application code pipeline; in this case the unit is the cookbook that installs a specific application, and we test each one individually.
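
A unit-level test for such a cookbook is a small ServerSpec suite that Kitchen runs inside the converged instance. The sketch below matches the hypothetical recipe above rather than the project's actual tests:

    # test/integration/default/serverspec/default_spec.rb
    require 'serverspec'

    # Run the checks directly on the converged test instance.
    set :backend, :exec

    describe package('myapp') do
      it { should be_installed }
    end

    describe service('myapp') do
      it { should be_enabled }
      it { should be_running }
    end

    describe file('/etc/myapp/myapp.conf') do
      it { should be_file }
      its(:content) { should match(/port/) }
    end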

In this stage we also run Foodcritic to check the quality of the code and, finally, publish the cookbook to an internal repository (a Chef server in this case).
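
Foodcritic documents a Rake task that can drive this lint step, and the upload can be done with knife; the Rakefile below is our own sketch, with an assumed cookbook name and fail tags:

    # Rakefile -- sketch of the lint-and-publish step (cookbook name is hypothetical).
    require 'foodcritic'

    # Fail the build on any Foodcritic warning.
    FoodCritic::Rake::LintTask.new(:lint) do |task|
      task.options = { fail_tags: ['any'] }
    end

    # Publish the cookbook to the internal Chef server once linting passes.
    task publish: :lint do
      sh 'knife cookbook upload myapp --freeze'
    end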

Acceptance tests

After the previous phase we know that each cookbook works properly, both independently and together with the other cookbooks it depends on. To be sure that the whole solution works, we need to combine our cookbooks into scenarios similar to those used on premises to validate the product. In this phase we have a set of predefined scenarios, created by the QA team, where we deploy the software and run tests that verify everything works as expected.
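
One simple way to express such a scenario (this is our sketch, with invented cookbook names) is a thin wrapper cookbook whose only job is to pull the application cookbooks together in the right order:

    # cookbooks/scenario_web_stack/metadata.rb -- hypothetical scenario cookbook.
    name    'scenario_web_stack'
    version '0.1.0'
    depends 'mydb'
    depends 'myapp'

    # cookbooks/scenario_web_stack/recipes/default.rb
    # Converge the database first, then the application that uses it.
    include_recipe 'mydb::default'
    include_recipe 'myapp::default'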

Unit (cookbook) tests are not enough to validate the solution; we need to try real combinations. In one specific case, during our initial manual tests, we spent a lot of time tracking down a bug caused by a cookbook overwriting an attribute that several other cookbooks relied on.
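
The class of bug was roughly the following (a reconstructed, hypothetical example rather than the actual cookbooks): one cookbook defines a shared attribute, another cookbook overrides it at a higher precedence, and every other cookbook that reads the attribute silently gets the unexpected value:

    # cookbooks/mydb/attributes/default.rb -- hypothetical
    default['stack']['data_dir'] = '/var/lib/stack'

    # cookbooks/other_app/attributes/default.rb -- hypothetical
    # Overriding at a higher precedence changes the value for *every* cookbook
    # on the node that reads node['stack']['data_dir'], not just this one.
    override['stack']['data_dir'] = '/opt/other_app/data'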

Production

In this project there is no production stage directly connected to the pipeline: as mentioned, the production environment is installed on premises. But most of the tasks we do there are very similar: we configure the environment, deploy the cookbooks and binaries, and run smoke tests to verify that everything is installed properly.
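
The smoke tests are deliberately shallow; they only confirm that the deployed services actually came up. Sketched in the same ServerSpec style as before (the service name, port and health endpoint are assumptions):

    # Smoke test: is the application actually up after deployment?
    require 'serverspec'
    set :backend, :exec

    describe service('myapp') do
      it { should be_running }
    end

    describe port(8080) do
      it { should be_listening }
    end

    describe command('curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/health') do
      its(:stdout) { should eq '200' }
    end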

In cases where production is connected to the pipeline, delivery to production is done by pushing a button.

The purpose of this entire process is to produce and deliver fully working software that is available to the end user at any given time.

Conclusion

Infrastructure management has undergone dramatic changes over the last few years. Developers have adopted agile methods, but for sysadmins the changes have been even more significant: we need to learn to code, work in an agile way and build pipelines. These pipelines are the core of an IT production system, and you cannot deliver value without them in place. Developers introduce changes all the time, and you need to be sure those changes won't break anything, so that the application is ready to go to production at any time.

References

  1. Jez Humble and David Farley. Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation. Addison-Wesley, 2010.