Not long ago, developers had to set up their databases, caches, integrations, and possibly bootstrap scripts, and then perform the magic setup dance to get a project up and running. The setup not only took time but was also quite frustrating in more complex projects.
Sometimes everything went well: the project was easy to set up, or there was even a setup script that made things simple. Often, though, setting up a complex project required involving multiple people with in-depth knowledge of it.
Then, along came Docker
The magic solution that would fix all of these hassles with containers. Well, it did not go that smoothly in the beginning: setting Docker up on anything but Linux was a bit of a hassle because of the extra virtual machines, and when things did not work as expected, it sometimes felt like more work than before.
This has changed in recent years, however. Docker now runs without any extensive setup steps on both macOS and Windows; it may still use underlying virtualization platforms such as HyperKit and Hyper-V, but these require no action from the developer. These changes led us at Anders to start using Docker in earnest a couple of years ago, and together with Docker Compose, we can now spin up development environments for many of our older projects as well as all of our new ones.
Docker Compose made it possible to run not only the application we wanted but also all the services around it, such as databases, caches, and search engines, as well as multi-part applications whose components work together to form one whole. Just running "docker-compose up" has been a breath of fresh air for many when it comes to development, but what about Docker in production?
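As an illustration, a minimal docker-compose.yml for a hypothetical web application with a database and a cache could look like the following (the service names and images are assumptions for the example, not taken from an actual Anders project):

```yaml
version: "3.7"
services:
  app:
    build: .             # build the application image from the local Dockerfile
    ports:
      - "8080:8080"      # expose the app on localhost:8080
    depends_on:
      - db
      - cache
  db:
    image: postgres:11   # database service
    environment:
      POSTGRES_PASSWORD: example
  cache:
    image: redis:5       # cache service
```

With a file like this in the project root, a single "docker-compose up" starts all three services together.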
Choosing an orchestrator
To run Docker in production, there needs to be a way to manage all the containers. Management can, of course, be done by hand using the Docker CLI, but this is not a robust long-term solution. This is where container orchestrators come in: they handle deployment and management of containers, take care of scaling, and make sure that all services stay up if a hardware node goes down.
There are several main contenders for running Docker containers in production. One is Docker's own Swarm, also known as SwarmKit or Swarm mode; another is Kubernetes, also known as k8s (8 for the number of letters between k and s). Alongside these, there are also Apache Mesos and HashiCorp's Nomad. When googling around, it is clear that Kubernetes is the one getting all of the hype currently, and the rest are picking up the scraps.
Technology strategy of Anders
We wanted to focus our technology strategy on one of the big four cloud providers (Amazon, Google, Microsoft, IBM) and quite quickly chose Google, due to them opening their data center in Hamina in the summer of 2018. Google is the creator of Kubernetes, and there are few good managed options for Docker Swarm on the big providers unless you host and manage everything yourself, which is one of the main reasons Swarm was out of the running quite fast.
Frank is in charge of planning our technology strategy. It seems to be fun. :)
While Mesos and Nomad seemed like interesting technologies, they lacked either the features or the maturity of Kubernetes. Mesos, for instance, is a cluster manager that can itself run Kubernetes, so it is not a direct equivalent, although it can provide similar features through Marathon. Nomad does not handle service discovery or secrets and would require additional services for those features. While there are pros and cons to spreading features across several services, we chose not to go that route, mainly to save maintenance time.
In the end, Kubernetes was the winner for us due to its maturity, backing by multiple large companies as well as for its feature set.
It also meant that we could move our setup quite easily between cloud providers, since all of them offer managed Kubernetes clusters and the Kubernetes API is the same across the board, with only the initial cluster setup differing a bit between providers.
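That portability comes from the fact that workloads are described in provider-agnostic manifests. For example, a basic Deployment manifest like the one below (the application name and image URL are hypothetical) works unchanged on any managed Kubernetes cluster, whether on Google, Amazon, or Microsoft:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3            # Kubernetes keeps three copies of the container running
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.0.0
          ports:
            - containerPort: 8080
```

Only the steps for creating the cluster itself differ between providers; the manifest and the kubectl workflow stay the same.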
From code to Kubernetes
Since we chose Kubernetes as our platform of choice for running our containers, we also had to rethink our deployment and integration strategies. Travis CI has long been the continuous integration (CI) platform of choice at Anders, but we see a steady shift away from the service among several other tech companies. The reason is that Travis is not as invested in continuous deployment (CD) or in adding new features to their platform. Because of this, competitors such as CircleCI and GitLab CI have gotten a strong foothold in the CI/CD market.
For us, this has not been a significant issue, since we had not been doing CD via Travis before, but due to the changes in infrastructure, we were now looking to change that. At the same time, we were also looking for a source code management (SCM) platform with support for single sign-on (SSO). While GitHub has served us well for many years, getting SSO support would have required switching to the $21/user enterprise version of GitHub, which would not provide many features beyond SSO over the $9 Pro plan we were using.
For a couple of dollars less, at $19/user, GitLab would provide us with built-in CI/CD pipelines, Kubernetes deployment boards, application monitoring, Docker and NPM registries, as well as the possibility to integrate with our user management system. The choice was quite easy at this point; a move to GitLab was imminent.
GitLab is betting big on Kubernetes as well, with several Kubernetes-specific features such as the previously mentioned deployment boards. It was not just the individual features that were tempting; it was also the unification of all of them into a single platform where developers could get as much as possible done without jumping between services.
GitLab has a ready-made Auto DevOps CI/CD setup that is easy to use; however, it did not entirely fit our needs, which led us to create our own modified version with support for mono-repositories, custom deployments, and multi-cluster setups.
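As a rough sketch of what such a pipeline looks like, a minimal .gitlab-ci.yml that builds a Docker image and rolls it out to Kubernetes could be structured as follows (the deployment name and kubectl image are assumptions; the $CI_* values are GitLab's predefined CI variables):

```yaml
stages:
  - build
  - deploy

build:
  stage: build
  image: docker:stable
  services:
    - docker:dind        # Docker-in-Docker, needed to build images inside CI
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    # point the existing Deployment at the freshly built image
    - kubectl set image deployment/web-app web-app="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
  environment:
    name: production
```

A real setup would add test stages, review apps, and per-cluster configuration on top of this, which is essentially what our modified Auto DevOps variant does.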
The future? Better service for customers
We have been hosting services for a long time at Anders; with over ten years of experience in hosting and building software, this is the next step in that journey. Moving to a DevOps environment not only makes it easier for us to develop excellent software for our customers but also makes it possible for us to deploy the same infrastructure and deployment pipeline for others.
The infrastructure we are building is not specific to Anders' workflow but created so that any other company could use the same setup. We strive to spread the knowledge of how to build better software and get it deployed in a fast and straightforward manner.
Are you interested in moving your infrastructure into the future? Contact us - we are ready to guide you there!
Chief Technology Officer
Scientia potentia est. (Knowledge is power.)