Walking the talk

How we containerized our new WordPress website and continuously deliver it on Azure

When we set out to design and build a new website for Xpirit, we knew that we wanted to use WordPress: a fully featured, widely used, stable and easily accessible technology. While we were at it, we thought: why not walk our own talk and follow the same advice and wisdom we apply and implement for our customers?

This meant we wanted to run our website on Azure, using cloud native building blocks, set up continuous delivery and deploy the whole thing as Docker containers.

There were some requirements we set for ourselves:

  • Make sure our web designer could easily develop and test the HTML and CSS for our WordPress theme
  • Website content and media must be preserved when the site is redeployed
  • Support upgrade scenarios: new WordPress and plugin versions
  • Follow security best practices (again, walk our own talk)

The Dev in DevOps

Containerization is a very suitable technique for enabling local development, testing and distribution, using the same software on the developer's computer and in production. As it happens, WordPress lends itself to containerization pretty well, with an official Docker image available on Docker Hub.

We used this as a starting point for our website, together with the official MySQL Docker image for running the WordPress backend database. A local development environment is as easy as setting up one container with the latest WordPress image and one with the latest MySQL image, and hooking them together. We can capture this very nicely in a docker-compose definition like this:

version: '2'

services:
  db:
    image: mysql:5.7
    restart: always
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  website:
    image: wordpress:latest
    working_dir: /var/www/html
    depends_on:
      - db
    ports:
      - "8001:80"
    volumes:
      - "./Xpirit.com/wp-content:/var/www/html/wp-content/"
    restart: always
    environment:
      - WORDPRESS_DB_HOST=db:3306
      - WORDPRESS_DB_PASSWORD=wordpress
      - WP_DEBUG=true
      - WP_DEBUG_LOG=true
      - WP_DEBUG_DISPLAY=true

And starting up is as simple as:

docker-compose -f docker-compose.dev.yaml up  

When you set this up from scratch and navigate to localhost:8001, you are asked to install WordPress. The installer creates the database tables in the MySQL container and you're good to go. As long as you don't remove or rebuild the containers, your WordPress instance will stay around.

The nice thing about containers though is that they're disposable: they can easily be rebuilt using the images you reference. New WordPress version? No problem, just rebuild your website container using wordpress:latest.
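
For example, picking up a new WordPress image locally is just a matter of pulling the latest image and recreating the container against the compose file above:

# Pull the newest wordpress:latest image and recreate the website container
docker-compose -f docker-compose.dev.yaml pull website
docker-compose -f docker-compose.dev.yaml up -d website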

Here's the catch: WordPress keeps part of its content inside its own folder structure, specifically inside the wp-content folder. This includes plugins you may have installed, website themes and media files you upload. Of course these are not disposable, and you'll want them to still be there when you rebuild the website Docker image.

Luckily, Docker knows about something called volume mapping, which is the trick we use to “swap in” these custom files into the boilerplate WordPress image. These files reside on your hard drive, e.g. in your source code repository.

Our Git repository contains the definition of our Docker image and only the source files we want to keep track of: specific versions of the plugins we installed and our own Xpirit theme. We host it on Visual Studio Team Services, which makes it easy to hook it up to our build and release pipelines. More on these later.

That leaves two types of content: media files and the WordPress database. For production, we knew we wanted to use only cloud native PaaS services. For the database, we chose the new MySQL Database as a Service, and for the website, we went with Azure Web App on Linux, which supports Docker containers.

The database sits safely in Azure, with a backup strategy in place to prevent data loss. In production we don't run a container for the database engine, since we have a fully managed PaaS service for this. Every now and then, we export the production WordPress database so we can import that data into our local development database. The production MySQL database sits behind Azure's firewall and is only accessible to our website's App Service. So when the site is (re)deployed, we don't touch the database and leave the content intact.
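
A sketch of that export/import round trip; the server name, container name and credentials below are placeholders, not our real ones:

# Export the production database (Azure MySQL uses the user@servername login format)
mysqldump -h ourserver.mysql.database.azure.com -u wordpress@ourserver -p wordpress > wordpress.sql

# Import the dump into the local MySQL container from the compose file above
docker exec -i <db-container> mysql -u wordpress -pwordpress wordpress < wordpress.sql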

But what about the other content?

I've already discussed how we handle plugins and themes. We just version these under source control and they're packaged along with the website. In case a plugin or WordPress itself needs to do a database update, it's just a matter of running the database upgrade manually after the site has been deployed.
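
As a rough sketch of that manual step: WordPress ships with a built-in upgrade script you can trigger once the new container is live (the URL is illustrative; you may prefer to click through /wp-admin instead):

# Trigger WordPress' built-in database upgrade after a deployment
curl "https://www.xpirit.com/wp-admin/upgrade.php?step=1"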

Media files are part of the dynamic content; they just happen to be stored in the wp-content/uploads folder by default. To avoid losing them upon (re)deployment, we need to store them somewhere else. This is where a CDN (Content Delivery Network) comes into play. There are several options here. We looked at Cloudinary for example, a nice service that provides server side resizing and caching of images. But there's also a convenient option right inside Azure: Azure Blob storage. This suited our needs perfectly and allowed us to keep the media files alongside our other data within the same Azure Resource Group. A WordPress plugin intercepts file uploads and redirects them to our Azure blob container, where they survive redeployments of our website.
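
For reference, setting up such a blob container takes only a couple of Azure CLI calls; the resource group, account and container names here are made up for illustration:

# Create a storage account and a publicly readable blob container for the media files
az storage account create --name xpiritmedia --resource-group xpirit-web --sku Standard_LRS --location westeurope
az storage container create --name media --account-name xpiritmedia --account-key <storage-key> --public-access blob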

The website theme itself was developed by our web designer using Timber. Timber essentially brings the Twig templating language to content management systems such as WordPress, giving you a domain specific language for building site templates. As mentioned, we just keep the Timber templates, CSS and JavaScript in our Git repo.

Summarized, we end up with a high level architecture that looks like this:

Architecture diagram

From Dev to Ops: Continuous integration and deployment

Whenever a developer updates a plugin or changes the site theme, we want it to be baked into a new Docker image and shipped to the live website. Assuming the developer has tested their changes locally of course :)

Continuous Integration

Our build pipeline is in VSTS, and is set up to trigger upon a push to our repo. It’s fairly simple, and consists of a few steps:

Xpirit.com build pipeline

Although our local development workflow already includes SASS compilation of our stylesheets on the developer machine, we do it again during the CI build to make sure the generated CSS files are always up to date. After this, we can build the Docker image using the following Dockerfile:

FROM wordpress:latest

COPY ./wp-content/ /var/www/html/wp-content

RUN rm -r -f /var/www/html/wp-content/themes/starter-theme-master && \
    rm -r -f /var/www/html/wp-content/themes/twenty* && \
    chown -v -R -L www-data:www-data /var/www/html/ && \
    chmod -v -R 777 /var/www/html/ && \
    ls -al /var/www/html/wp-content/plugins

EXPOSE 443

We build on top of the wordpress:latest image, copy in our own content (theme and plugins), and remove some of the default themes that are always present when you install WordPress. Then we make sure that ownership of the entire /var/www/html folder is set to the www-data user that serves the site, so it is writable.

After packaging, the Docker image has to be published to a location from which it can be deployed to the web app. Of course we could use Docker Hub for this, but that is a public registry, and probably no one outside Xpirit would be interested in running a copy of our website. So we need a more private solution, in the form of an Azure Container Registry. Our VSTS build pushes the image out to this registry, where it's ready to be picked up.
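
Stripped of the VSTS task plumbing, those build steps come down to something like this (the registry name and credentials are placeholders):

# Build the image, tag it for our Azure Container Registry and push it
docker build -t xpirit.azurecr.io/website:latest .
docker login xpirit.azurecr.io -u <registry-user> -p <registry-password>
docker push xpirit.azurecr.io/website:latest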

Deployment

Let's focus on our production environment to see how this container image ends up on our website.

The web app running our site is a Web App for Linux, a Linux-specific version of the Web App PaaS service we already know in Azure. At the time of writing, this service is still in preview, but we felt comfortable enough to run our website on it. After all, it's just a WordPress site.

It can run all sorts of workloads, such as Node.js, PHP and .NET Core, but also a Docker image. All we need to do is point it to a container registry and specify the name and version of the image we want to run:

Xpirit.com container config
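
If you'd rather script this than click through the portal, the equivalent Azure CLI call looks roughly like this (all names and credentials are placeholders):

# Point the Web App at the image in our container registry
az webapp config container set --name xpirit-web --resource-group xpirit-web \
  --docker-custom-image-name xpirit.azurecr.io/website:latest \
  --docker-registry-server-url https://xpirit.azurecr.io \
  --docker-registry-server-user <registry-user> \
  --docker-registry-server-password <registry-password>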

Whenever the Web App starts up, the container is pulled from the registry and started. So all we need to do to update the site is restart the Web App. This means there will be a brief moment of downtime, which is acceptable in our case. You'll probably want to set up a rolling upgrade if you have multiple nodes, or use a staging environment if you want to minimize downtime.

Note that there is now an option to enable continuous delivery, which automatically deploys the container as soon as it is pushed to the registry. We chose a VSTS Release pipeline instead because we wanted a little more control. It's utterly simple: basically it stops and then restarts our Web App.
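
In Azure CLI terms, that release step boils down to something like this (names are placeholders):

# Bounce the Web App so it pulls the freshly pushed image
az webapp stop --name xpirit-web --resource-group xpirit-web
az webapp start --name xpirit-web --resource-group xpirit-web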

Xpirit.com release pipeline Xpirit.com release pipeline steps

We have one extra step in there though: the LinkCrawler. It's a little command line tool we built on top of the Chromium Embedded Framework. It crawls the entire site to warm it up and checks for issues such as violations of our Content Security Policy settings. Which brings us to our final topic of interest...

Security

An important underpinning of our service offerings at Xpirit is security. Whether we're doing Cloud, Mobile or ALM/DevOps work, security is of crucial importance. So again, when it comes to our own website, we should walk our talk.

There may not be a lot of rich interactivity on our site, and virtually all of the dynamic content comes from our side via the WordPress backend, but it makes sense to protect ourselves from attacks like script injection and other hacks. And at the very least, where visitors leave their contact details for us, this has to happen over a secure connection.

If you go to securityheaders.io and scan our site, you'll see that we score a neat A+ for having our shit together.

Hacker

Most of it is covered by using the HTTP Headers Plugin by Dimitar Ivanov. This plugin gives us a template we can fill out so it generates proper HTTP headers for our Content Security Policy.
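
If you want to check the result yourself, inspecting the response headers is enough; the policy shown in the comment below is a simplified illustration, not our actual configuration:

# Fetch the response headers and look at the Content-Security-Policy
curl -sI https://www.xpirit.com | grep -i content-security-policy

# Example of what a (simplified) policy could look like:
# Content-Security-Policy: default-src 'self'; img-src 'self' https://*.blob.core.windows.net;
#   script-src 'self' 'sha256-...'; report-uri https://example.report-uri.io/r/default/csp/enforce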

Some more handy tools we used:

  • CSP Fiddler Extension: if you don't know where to start, set up this Fiddler extension and browse your "secure" website. Afterwards, this extension will tell you what your CSP header should look like; or at least it will give you a good start.
  • CSP Validator: use this site to check if your CSP is set up correctly.
  • Report URI: you can set up an account here to which browsers can report any violations of your CSP whenever they occur. This helps you to stay on top of it.
  • Chrome Developer Tools: offers a nice console that will give you feedback on why your CSP isn't working, and will also inform you about which SHA-256 hash to use to identify the JavaScript on your site.

Conclusion

It was a fun exercise to develop and deliver our website using containerization, while paying extra attention to the details that matter. We learned a lot about running a relatively simple WordPress site like this on Azure and about putting all these pieces together. Perhaps this blog post will give you some ideas for doing something similar.
