Jun 14, 2019
In today’s fast-paced economy, where time is an increasingly valuable commodity, tech companies are expected to deliver software releases within extremely short cycles: from discovering a need (e.g. a new feature or a bug fix) to deploying an update into production.
The traditional on-premises approach for providing infrastructure during development and production is slowly becoming a thorn in the side of this process because it usually involves multiple departments, takes time (purchase, deployment, configuration) and in turn kills productivity. Cloud-based Infrastructure as a Service (IaaS) aims to solve these problems by providing a simple and quick way to deploy new servers, provision new services, scale existing ones, and much, much more.
When people say “cloud”, most developers think of cloud-native applications designed from the ground up to leverage all the capabilities of the underlying platform. This usually involves microservices, highly decoupled systems, asynchronicity, and so on. But it’s not only about that: if we look at the cloud and its IaaS capabilities from an operational standpoint, we can see that it can be leveraged for far more than just hosting our application.
We’re all familiar with what is arguably a pretty standard setup for automated builds or Continuous Integration (CI). Most commonly we would use a build management tool (e.g. TeamCity, Travis, GitLab CI, …) and a fixed infrastructure of N servers that support our build process. Those servers have a pre-installed bundle of packages that are needed for our procedures to run.

In our experience, this type of setup works quite well most of the time, but one peculiar thing can sometimes be observed. 90% of the time the infrastructure handles all builds within acceptable times and jobs don’t wait in the pipeline for a free agent. The servers are not fully utilised, and most of them are not even doing anything. However, within the remaining 10%, when a big milestone is being completed or a regular release is being prepared, the servers can all become fully loaded processing build requests, and new ones keep piling up to the point where developers have to wait an hour (or more) for their job to start.
If we utilise something like IaaS we can programmatically (e.g. via a script) deploy a new server that would join the party and help process builds. Of course, we can do the same with our on-premises infrastructure, but with IaaS a new server can be fully operational in a matter of seconds instead of hours. Most major cloud providers will also offer the ability to define a virtual machine (VM) template that can be used for all newly deployed servers. This means all the aforementioned packages are pre-installed and the server is immediately configured to start processing our requests.
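The idea of template-based provisioning can be sketched in a few lines. This is a minimal illustration with names of our own invention (VmTemplate, provision, the package list), not any provider’s actual API; the point is simply that the template carries the pre-installed packages, so every freshly deployed server starts out ready to accept builds.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VmTemplate:
    """An image definition with the tooling baked in."""
    name: str
    packages: tuple  # e.g. compilers, SDKs, build runners

@dataclass
class BuildServer:
    hostname: str
    installed: tuple

def provision(template: VmTemplate, hostname: str) -> BuildServer:
    """Clone the template: the new server inherits every pre-installed package."""
    return BuildServer(hostname=hostname, installed=template.packages)

# A hypothetical CI agent image and a server stamped out from it:
ci_template = VmTemplate("ci-agent", ("git", "jdk-11", "node-10", "docker"))
agent = provision(ci_template, "build-agent-07")
```

With a real provider the `provision` call would be an API request or CLI invocation against a stored machine image, but the shape of the workflow is the same: define the template once, stamp out identical servers on demand.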
Even vendors that provide build management solutions have jumped on the bandwagon, and most major players nowadays offer something called auto-scaling. This mechanism monitors our infrastructure and, based on the load, decides whether a new agent needs to be deployed so that all processes keep running within acceptable time frames. Better still, this works in both directions: if the current infrastructure is under-utilised, the auto-scaler will shut down and delete the unrequired servers. With this, the appropriate number of build servers can be online at any given moment.
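The decision at the heart of such an auto-scaler can be reduced to a small function. This is a sketch under our own assumptions (the name `desired_agents`, the capacity of two concurrent jobs per agent, and the min/max bounds are all illustrative), not any vendor’s implementation:

```python
def desired_agents(queued_jobs: int, running_jobs: int,
                   jobs_per_agent: int = 2,
                   min_agents: int = 1, max_agents: int = 10) -> int:
    """How many build agents should be online for the current load?

    Scales up when jobs pile up, scales down when load drops, and
    always stays within the configured floor and ceiling.
    """
    total = queued_jobs + running_jobs
    needed = -(-total // jobs_per_agent)  # ceiling division
    return max(min_agents, min(max_agents, needed))

# Quiet period: scale down to the minimum.
desired_agents(queued_jobs=0, running_jobs=0)    # 1
# Milestone crunch: scale up, but respect the ceiling.
desired_agents(queued_jobs=100, running_jobs=4)  # 10
```

A production auto-scaler would also smooth the signal over time (to avoid thrashing servers up and down on short spikes), but the core scale-up/scale-down decision is this simple comparison of load against capacity.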
Another fun aspect of using cloud-based infrastructure is setting up different environments for acceptance testing and staging our application. In essence, this is a simple process even for on-premises deployments. A server is set up and our application is deployed for testing to verify all newly developed functionality, and regression testing is done to verify that everything has remained intact.
This got us thinking: could we enable our testing teams to verify all functionality before it is merged into the main development branch in source control? We quickly noticed that with IaaS this should be fairly simple. Issue a request to deploy a new server, install our application, run the tests, and once the functionality is merged, drop the server. If we also utilise a cluster orchestrator and an auto-scaler, it all becomes a breeze.
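The per-branch environment lifecycle described above can be sketched as follows. The `BranchEnvironment` class, its methods, and the hostname scheme are all illustrative assumptions, not a real orchestrator’s API; in practice `deploy` and `teardown` would call out to the cloud provider or orchestrator.

```python
class BranchEnvironment:
    """An ephemeral test environment tied to one source-control branch."""

    def __init__(self, branch: str):
        self.branch = branch
        self.server = None  # no infrastructure exists until deploy()

    def deploy(self) -> str:
        """Provision a server for this branch and return its hostname."""
        safe_name = self.branch.replace("/", "-")
        self.server = f"test-{safe_name}.example.internal"  # hypothetical host
        return self.server

    def teardown(self) -> None:
        """The branch was merged: drop the server and stop paying for it."""
        self.server = None

env = BranchEnvironment("feature/new-pricing")
host = env.deploy()   # testers verify the branch against this host
env.teardown()        # merged: the environment disappears
```

The key property is that the environment’s lifetime matches the branch’s lifetime: it exists only while there is something to test, which is exactly the kind of short-lived infrastructure that IaaS makes cheap.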
The benefits of using the cloud for running production workloads should be quite obvious to anyone already familiar with this, and almost all of the same benefits also apply to cloud-native applications. To highlight just a few of them:

- new servers and services can be provisioned in seconds rather than hours;
- capacity scales up and down automatically to match the current load;
- you pay only for the infrastructure you actually use;
- high availability is far easier to achieve for critical environments.
It can be seen that the cloud is not reserved only for applications that run on it natively: its powerful IaaS capabilities can also be utilised to augment existing processes and infrastructure. A high level of flexibility can be achieved, which lets us always have exactly what is needed, no more and no less. From a financial standpoint, any company using the cloud could also save money on its infrastructure. Most of the major providers bill only for what you’re actually using, and prices have come down in the last few years. If we compare these operational costs with the cost of running the same kind of infrastructure on-premises (even with a private IaaS setup), we can see that even large companies can save money by leveraging cloud-based infrastructure.
Here at Adacta we're leveraging IaaS in building our next-gen AdInsure insurance software by gradually transitioning parts of our infrastructure to the cloud. This will simplify end-to-end testing, ensure high availability (HA) for our environments, enable automatic scaling of our CI infrastructure (“have what you need”) and, in general, ensure the availability of all our services.
We're also putting more and more work into cloud deployments and PaaS, which would allow us to offer our system to small insurance companies and other customers that don't have the internal IT resources needed to run and manage our software in-house.
We think the cloud is a future that is worth investing in.