With the evolution of Dynamic Infrastructure and Microservices Architecture, our applications tend to take on more technical functionality than they used to. In the Monolith era, these concerns barely existed since everything lived in a single place. Let me give you some examples so that you can better understand what I am referring to:
SSL Certificates

Requesting an SSL certificate for one application is not much overhead, but when you have 10 applications, things can get out of hand. Back then, it was also fine if those certificates were renewed every couple of years. Nowadays, the recommendation is to renew them at least every year ( preferably every month or every couple of months ). So just do the math: 10 certificates every 3 months means a lot of effort and time spent making sure you won't run into SSL errors. If only there were something that could renew the certificates automatically before they expire!
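In Kubernetes, tools such as cert-manager do exactly this. As a sketch ( all names here are illustrative, and it assumes cert-manager and a ClusterIssuer called letsencrypt are already installed ), a Certificate resource that is renewed automatically could look like:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-app-cert          # illustrative name
  namespace: default
spec:
  secretName: my-app-tls     # cert-manager stores the issued cert/key here
  duration: 2160h            # 90-day certificate lifetime
  renewBefore: 360h          # renew 15 days before expiry, automatically
  dnsNames:
    - my-app.example.com
  issuerRef:
    name: letsencrypt        # assumed pre-configured issuer
    kind: ClusterIssuer
```

cert-manager watches this resource and keeps the Secret up to date, so the renewal math from above disappears entirely.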
Service to Service AuthN & AuthZ
Since we’re still in the realm of security, let’s talk a bit about Service to Service Authentication and Authorization. Let’s say that Service A needs to call a REST API on Service B. How can Service B make sure that Service A is who it says it is ( Authentication ) and that it is allowed to call that endpoint ( Authorization )? For authentication, there are 2 options: we can either use a token-based approach ( service tokens, e.g. OIDC ) or Mutual TLS ( inspect the Common Name of Service A’s certificate and match it against a pre-defined list of authorized services ). For authorization, we can be even more restrictive if Service A should only be allowed to call a subset of the APIs exposed by Service B. In this scenario, a token-based solution is more favorable as well ( e.g. OAuth2 ).
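As a toy sketch of the Mutual TLS flavour ( the endpoint and service names below are made up ): after the TLS handshake authenticates the caller, the receiving side matches the client certificate’s Common Name against a per-endpoint allowlist.

```python
# Toy authorization check for the mTLS approach: each endpoint maps to the
# set of client Common Names allowed to call it (names are illustrative).
ALLOWED_CALLERS = {
    "/orders": {"service-a", "checkout"},
    "/invoices": {"billing"},
}

def is_authorized(client_cn: str, endpoint: str) -> bool:
    """Return True if the authenticated CN may call this endpoint."""
    return client_cn in ALLOWED_CALLERS.get(endpoint, set())

print(is_authorized("service-a", "/orders"))    # service-a may call /orders
print(is_authorized("service-a", "/invoices"))  # but not /invoices
```

A real setup would read the CN from the verified peer certificate; the point is only that authorization is a lookup performed after authentication has succeeded.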
Log Collection

We’ve come a long way since the days when we manually dumped logs into files on the same file system the application runs on. Just as the 12-factor app methodology suggests, we should treat logs as event streams rather than plain text dumped into a file. There are tons of tools nowadays that let us do just that, like Logstash, Amazon CloudWatch, Azure Monitor, etc., but they do require very specific integration in order to achieve meaningful results.
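The "event stream" idea can be as simple as writing one self-describing JSON object per line to stdout, which shippers like Logstash can then parse and route. A minimal stdlib-only sketch ( field names are illustrative ):

```python
import datetime
import json
import sys

def log_event(level: str, message: str, **fields) -> str:
    """Emit one JSON event per line on stdout and return the line."""
    event = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "level": level,
        "message": message,
        **fields,  # arbitrary structured context travels with the event
    }
    line = json.dumps(event)
    sys.stdout.write(line + "\n")
    return line

log_event("INFO", "order created", order_id=42)
```

Because each line is machine-readable on its own, the collector needs no knowledge of the application's log layout.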
Monitoring

Making sure the application is running within expected parameters is a must! Otherwise, some very bad things can happen. All the monitoring data is stored in a time-series database ( e.g. Prometheus, InfluxDB, etc. ), but how do we get it there? If we want to keep the impact on the application to a minimum, we need a pull-based solution ( the monitoring stack pulls the metrics from the app ), but it’s not always easy to configure one.
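To make the pull model concrete, here is a stdlib-only sketch of an app exposing a metrics endpoint that a scraper such as Prometheus could poll ( the metric name and port handling are illustrative; real apps would use a client library ):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

REQUESTS_HANDLED = 0  # the counter our app maintains

class MetricsHandler(BaseHTTPRequestHandler):
    """Serves current metric values in the Prometheus text format."""

    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = f"app_requests_total {REQUESTS_HANDLED}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # keep demo output clean
        pass

server = HTTPServer(("127.0.0.1", 0), MetricsHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

REQUESTS_HANDLED += 3  # pretend the app handled some traffic

with urllib.request.urlopen(
    f"http://127.0.0.1:{server.server_port}/metrics"
) as resp:
    payload = resp.read().decode()
print(payload)  # this is what the monitoring stack would scrape
```

The app does nothing but answer a cheap GET when asked, which is exactly why the pull approach keeps the instrumentation overhead low.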
Service Discovery

You’ve just deployed another instance of your application due to some peak traffic. Is your infrastructure able to detect that a new instance is there? Or do you have to manually add its IP to your load balancer? With Service Discovery, the app is automatically registered into a Service Registry and starts receiving traffic as soon as it is considered healthy.
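Kubernetes gives you this for free: a Service's label selector acts as the registry, and a readiness probe decides when an instance is "healthy". An illustrative sketch ( names and image are hypothetical ):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app            # any healthy Pod with this label becomes an endpoint
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app-instance
  labels:
    app: my-app
spec:
  containers:
    - name: app
      image: example/my-app:latest   # hypothetical image
      ports:
        - containerPort: 8080
      readinessProbe:                # traffic flows only once this passes
        httpGet:
          path: /health
          port: 8080
        periodSeconds: 5
```

No IPs are ever added by hand; scaling up simply creates more Pods matching the selector.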
Externalised Configuration

Externalised configuration should be your go-to approach when building a cloud-native application. The most advanced option for service configuration is a Configuration Server that notifies your application whenever the configuration changes. When this happens, the app should automatically refresh its context and use the new version of the configuration.
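The refresh mechanism boils down to: fetch the config, compare versions, and swap the in-memory copy only when something changed. A toy sketch ( `fetch_remote` stands in for a real Configuration Server; all names are made up ):

```python
import json

class ConfigClient:
    """Toy client that reloads its in-memory config when the
    server-side version changes."""

    def __init__(self, fetch_remote):
        self._fetch = fetch_remote  # returns (version, json_payload)
        self._version = -1
        self.config = {}
        self.refresh()

    def refresh(self) -> bool:
        """Re-read the remote config; return True if a new version loaded."""
        version, payload = self._fetch()
        if version == self._version:
            return False  # nothing changed, keep current context
        self._version = version
        self.config = json.loads(payload)
        return True

# Simulated remote source holding (version, payload)
remote = {"version": 1, "payload": '{"feature_x": false}'}
client = ConfigClient(lambda: (remote["version"], remote["payload"]))
print(client.config)

remote.update(version=2, payload='{"feature_x": true}')
client.refresh()  # in practice triggered by the server's change notification
print(client.config)
```

In a real setup the server pushes the change notification; the client-side logic of versioned refresh stays the same.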
These are only a few target areas that I could quickly think of, but there are many more out there!
Behold … Sidecars
As you’ve probably guessed by now, all the above-mentioned problems can be solved using sidecars. But what is a sidecar?
A sidecar is a micro-application designed to reduce the technical effort required to implement a production-ready service by externalizing some of its technical concerns.
It sounds really complex, but essentially a sidecar is nothing more than some code extracted from your application. Let’s take the SSL problem as an example. You want your application to talk only SSL. Using the sidecar pattern, we spin up a small application next to our service that acts as a proxy. Anyone who wants to talk to our service first has to go through the sidecar.
SSL Sidecar Setup
The sidecar takes on the responsibility of establishing an SSL connection with clients, while the communication between the sidecar and the service remains plaintext. Now, there is a key requirement without which this whole setup becomes useless: both the Service and the Sidecar must reside in an isolated network, and the only way to communicate with the application is through the Sidecar. Creating such a setup with low-level networking tools ( e.g. iptables ) would be quite difficult, but it is extremely easy in a Kubernetes environment.
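To make the proxy part tangible: the sidecar could be something as ordinary as nginx terminating TLS and forwarding to the app over the loopback interface. A sketch, with illustrative paths and ports:

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/tls/tls.crt;   # mounted into the sidecar container
    ssl_certificate_key /etc/tls/tls.key;

    location / {
        # Plaintext hop to the app over the Pod-local loopback interface
        proxy_pass http://127.0.0.1:8080;
    }
}
```

The application itself never touches a certificate; rotating certs means restarting or reloading only the sidecar.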
Pods, Containers, and Sidecars
Kubernetes uses Pods to wrap containers in such a way that they can be managed by the Kubernetes control plane. There is a lot to say about Pods, so if you feel the need for a refresher, check out the official documentation. In essence, we can use a pod to wrap not one, but multiple containers. These containers are allowed to talk to each other through the localhost interface, but no outside connection is allowed unless we specifically expose it through a Kubernetes Service.
So, the general approach is to separate the main application and the sidecar(s) into different containers. The sidecar’s port is exposed through the pod and can be accessed through the Kubernetes Service, whereas the main application is only accessible to the sidecar container through the localhost interface. Easy enough, right?!
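Putting the pieces together, a sketch of such a Pod and its Service might look like this ( names and the app image are hypothetical, and it assumes the app binds only to 127.0.0.1 ):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
    - name: app                      # listens only on localhost:8080
      image: example/my-app:latest   # hypothetical image
    - name: ssl-sidecar              # the only container reachable from outside
      image: nginx:1.25
      ports:
        - containerPort: 443
      volumeMounts:
        - name: tls
          mountPath: /etc/tls
          readOnly: true
  volumes:
    - name: tls
      secret:
        secretName: my-app-tls       # assumed pre-provisioned TLS secret
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 443        # exposes the sidecar, never the app container directly
      targetPort: 443
```

All external traffic enters via port 443 on the sidecar; the plaintext hop to the app never leaves the Pod.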
How many Sidecars?
Essentially, there is no limit on the number of sidecars. As long as you are not hitting any hard limits imposed by the machine your app runs on, nothing stops you from adding more.
In the above example, each sidecar tackles a different problem: the SSL Sidecar is responsible for establishing SSL connections with clients, the Log Collector for pushing the logs to a Kafka topic, and so on. If one sidecar per target area is too much for you, you can also combine several functionalities into a single sidecar.
The good and the bad
There are a lot of good things coming out from this approach:
- The app contains only business functionality -> less code to maintain;
- Ease of development: not having to disable certain functionalities during development can save a lot of time;
- Testability: we test only the business logic.
However, even though we externalize the code related to technical functions, we still have to put it somewhere. This means that we now have to maintain even more applications, each with a different lifecycle. On the upside, you only have to write a sidecar once, and it can then be reused by every application.