September 26, 2023 by Caroline Morton and Maxwell Flitton

Rust Nanoservices


Introduction

We are big fans of microservices, and our products and consulting services are built around them. We have been using microservices for a few years now and we have seen the benefits they can bring to a project. Isolated microservices that are loosely coupled and independently deployable are a great way to build a system. You can concentrate on building a single service, and you can test and deploy it in isolation. This makes it easier for several developers to build and maintain a system together. It also enforces the isolation of code and reduces the probability of tech debt and “big ball of mud” systems. If bad coding practices take hold in one service, the damage and rewrite requirements are contained.

One big advantage that we have leaned into is the ability to thoroughly test a microservice in isolation. We use unit tests to test individual functions and integration tests using Postman (and Newman in GitHub Actions) to test the API endpoints. This means that we can be confident that a microservice works as expected before we deploy it, with no surprises in production. This approach has saved us many times as we convert our existing Python microservices to Rust: the endpoints should work the same way, and we can verify this before we deploy.

Two problems

We have identified two problems with our collection of microservices. The first is that our microservices are not as small as they could be, and they repeat a lot of the same dependencies. For example, most of our Rust microservices pull in the same range of crates, from database drivers to HTTP clients and servers, plus an async runtime. This means that we must compile and ship these dependencies in every microservice.

Secondly, HTTP requests between microservices are slow and add latency to the system. When we make a request, the data must be serialised, pass through the network card, travel over the network, be received by the other network card, and then be deserialised. This happens multiple times per HTTP request and is at least ten times slower than an in-memory function call.
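As a rough illustration of the difference, here is the same lookup written as a direct function call and as an HTTP round trip. This is a hypothetical sketch, not code from our services: the `User` type, the `get_user` functions, and the service URL are made up, and we assume `serde` and `reqwest` (with its `json` feature) as dependencies.

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct User {
    id: u32,
    name: String,
}

// In-memory: a plain function call with no serialisation and no network hop.
fn get_user(id: u32) -> User {
    User { id, name: "example".into() }
}

// Over HTTP: the data is serialised, crosses the network stack twice,
// and is deserialised again on the other side - for every single call.
async fn get_user_over_http(id: u32) -> Result<User, reqwest::Error> {
    reqwest::get(format!("http://users-service/users/{id}"))
        .await?
        .json::<User>()
        .await
}
```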

The Solution

We have been experimenting with a new approach to building microservices. We call them nanoservices. They are very small microservices that can be compiled into a single Rust binary. The services are still independently coded in their own codebases with full unit and integration tests, but they can also be compiled together and deployed as a single container that holds every service we need. Thanks to compilation flags, the nanoservices call each other in-memory, so we avoid the latency of HTTP requests between services. For those who remember the move to microservices several years ago, this might sound familiar, but we are not advocating a move back to monoliths. The nanoservices are still independently coded and tested, and they can still be deployed independently. The difference is that they can also be deployed together.

We have also embedded the SQL files for database migrations into the Rust binary, so one binary performs migrations on all the databases. It spins up a listener for each server on its own port, so specific servers can still be hit from the outside if wanted, but when the servers call each other they do so in-memory. Using layered distroless builds, an entire cluster can sit in one container that is roughly 60 MB. This also means you don’t need Docker to run your cluster; it can just run on the metal of the server.

Furthermore, we are looking into using embedded runtimes like Embassy so the cluster can run on a chip without any operating system. The Embassy async runtime also powers down the chip when it is not being used, so you can have your entire cluster on a chip with no operating system that only powers up when a request comes in. Both Maxwell Flitton and Caroline Morton are currently writing the O’Reilly Async Rust book, where different async approaches are documented.
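To make the compile-time switch concrete, here is a minimal sketch of how a Cargo feature flag can decide whether a call to another nanoservice stays in-memory or goes over HTTP. The crate layout, function names, and the `http` feature are hypothetical examples rather than our actual codebase, and the HTTP path assumes `reqwest`:

```rust
use std::error::Error;

// The core handler lives in the service's own crate and is unit tested there.
pub mod users_core {
    pub fn get_username(id: u32) -> String {
        format!("user-{id}")
    }
}

// Other services always call this function; the transport behind it is
// chosen at compile time by the (hypothetical) `http` feature flag.
#[cfg(not(feature = "http"))]
pub async fn get_username(id: u32) -> Result<String, Box<dyn Error>> {
    // Compiled into the single nanoservices binary: a direct in-memory call.
    Ok(users_core::get_username(id))
}

#[cfg(feature = "http")]
pub async fn get_username(id: u32) -> Result<String, Box<dyn Error>> {
    // Deployed as a standalone service: the same call goes over HTTP.
    let body = reqwest::get(format!("http://users-service/usernames/{id}"))
        .await?
        .text()
        .await?;
    Ok(body)
}
```

Embedding the migrations works along similar lines: a tool such as sqlx’s `migrate!` macro can compile the SQL files into the binary so that the one executable can run migrations against each database on start-up (again, this names one possible approach rather than our exact setup).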

The Benefits

There are a number of benefits to this approach:

  • The nanoservices’ single binary is very small - typically less than 20 MB. This is because when we combine five Rust servers into one static binary, we only compile the async runtime and database driver once.
  • The single binary is very fast, because we are not making HTTP requests between microservices.
  • We get to use the power of the Rust compiler to find type errors. Whilst we do have standardised interfaces between microservices, errors can still creep in because you need to remember to update the interface in both microservices. With nanoservices, we can use the Rust compiler to ensure that both sides of the interface stay in sync (see the sketch after this list).
  • The nanoservices are still independently coded and tested, meaning that developers can work on them in isolation and they can be tested in isolation. For us, this is the main advantage of microservices, as we strive for 100% test coverage and no bugs ending up in production.
  • The single binary is very secure. We are not making HTTP requests between microservices, and we typically use distroless builds, so there is really no way to get into the container; it doesn’t even have a terminal if someone does get access to it.
  • The nanoservices can still be deployed independently where needed, which is a great tool for prototyping products and developing an MVP.
  • Nanoservices could be run at the edge on IoT devices with no operating system if combined with Embassy.
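As a sketch of the compiler-checked interface point above, the idea is that the server and every caller depend on the same request and response types, so changing the contract breaks the build rather than a deployment. The `NewPatient` and `PatientCreated` types and the function names here are hypothetical, and `serde` is assumed for serialisation at any HTTP boundary:

```rust
use serde::{Deserialize, Serialize};

// One shared crate owns the contract used by the server and all callers.
#[derive(Serialize, Deserialize)]
pub struct NewPatient {
    pub name: String,
    pub age: u8,
}

#[derive(Serialize, Deserialize)]
pub struct PatientCreated {
    pub id: u32,
}

// The server implements the contract against the shared types...
pub fn create_patient(req: NewPatient) -> PatientCreated {
    PatientCreated { id: req.age as u32 }
}

// ...and a caller in another nanoservice uses the same types, so renaming
// or adding a field fails compilation instead of failing in production.
pub fn register_demo_patient() -> PatientCreated {
    create_patient(NewPatient { name: "demo".into(), age: 30 })
}
```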

Why we went this route

We love microservices and, over the last few years, have steadily moved our products and consulting services to a microservice architecture. This has been a great move for us, allowing productivity to skyrocket and letting us build and maintain complex systems. In our product Clinical Metrics, we have to deal with exceptional demand when classes of medical students all log in at the same time and generate unique virtual patients, so we had to build a system that could scale to meet it. We have been able to do this with microservices deployed in Kubernetes with horizontal scaling. A key point is that we also converted most of our Python microservices to Rust to get the speed of Rust and allow us to adapt and scale the system as needed. We are now in a situation where the majority of our system is in Rust, and this opens up the possibility of compiling the microservices into a single binary.
