Engineering • 5 min read
By Juraj Zachar
27.01.2021
Here at Bitpanda Pro, we love building reactive systems that deliver uncompromising levels of security and service quality to our customers. Since the core of our business is to process as many financial transactions as possible, with zero inaccuracies and in the shortest possible time, we take our job very seriously.
Back in 2018, when we set out to build the next-generation JVM stack to power our growing business, we defined some goals. We quickly learned that the fastest human eye-blink takes roughly 200 milliseconds, and apparently there are many fast eye-blinkers who want to jump on the Bitcoin bandwagon. As the business goals and the first set of requirements became known, a more refined set of software-engineering goals was identified:
We were not sold on orchestrating tens, possibly hundreds of microservices. Instead, we wanted to start with just one service and add more if the need arose.
Each service implementation should have well-understood and testable unhappy path handling. Whenever this path is encountered and the system cannot self-heal, all errors are transparently propagated to a supervisory entity and the faulty instance is removed from operation. Finally, faults in one part of the system must not propagate across context boundaries and bring our whole operation to a halt.
Each dependency comes with hidden costs. One is potential exposure to security vulnerabilities. The other is losing control altogether, which makes performance tuning, and simply keeping our application maintainable, scalable and predictable, a much more difficult job.
Enter Vert.x and its 650 kB of asynchronous, reactive Java framework goodness.
At the core of Vert.x is the principle of event handling. Traditionally, Java applications scaled by encapsulating workloads in threads. While in modern multi-core systems threads are cheap and operating systems are built to support enormous numbers of concurrent workloads, in Java it is still quite complex to rein in the orchestration of threads.
This has partly to do with Java's memory model (JLS chapter 17.4), which governs how threads access shared memory on the heap. Get this wrong and you are in for some pretty frustrating hours of debugging out-of-memory errors, deadlocks, livelocks and the other joys of Java concurrency.
Now imagine we fix the number of available threads in the system up front. We don't create any new threads while the application is running, and we design workload execution to take place on any of these threads. Furthermore, the application instructs the execution layer on what to do by emitting events and registering handlers (who does it), which execute in a so-called event loop. When the processing of one event is finished, the next event handler is called to occupy the underlying thread.
In summary, the life cycle of threads and the entire cost of concurrency management is handled entirely by the framework.
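As a minimal sketch of this model (not taken from our codebase; the pool size and handler bodies are purely illustrative), handing workloads over to Vert.x event loops looks roughly like this:

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class EventLoopSketch {
  public static void main(String[] args) {
    // fix the number of event-loop threads up front; no new threads are created later
    Vertx vertx = Vertx.vertx(new VertxOptions().setEventLoopPoolSize(2));

    // register a handler: it runs on an event-loop thread when the timer event fires
    vertx.setTimer(100, timerId ->
        System.out.println("handled on " + Thread.currentThread().getName()));

    // another handler; Vert.x schedules both on the same small, fixed pool of threads
    vertx.setPeriodic(500, timerId ->
        System.out.println("periodic event on " + Thread.currentThread().getName()));
  }
}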
At this point you might think this is all well and good, but how does it relate to the goals we set out to achieve? Well, imagine the service you've been commissioned to build is an HTTP server. With Vert.x you can seamlessly hook your event handlers into Netty's ChannelHandler (another multi-reactor) and your final setup could look like this:
Notice the time axis at the bottom, where every slice represents concurrent processing of user requests using registered event handlers. Some requests may be very cheap to complete - fetching the current account balance - while others - processing a new client order - may take longer. This means that while one thread is occupied creating a new order to buy Bitcoin, other registered event handlers may complete requests to deposit or withdraw funds on another available thread.
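A minimal sketch of such an HTTP server setup, assuming the vertx-web Router is on the classpath (the routes and handler bodies are purely illustrative), might look like this:

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Promise;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.RoutingContext;

public class HttpApiVerticle extends AbstractVerticle {

  @Override
  public void start(Promise<Void> start) {
    Router router = Router.router(vertx);

    // a cheap request: fetch the current account balance
    router.get("/balance").handler(this::fetchBalance);

    // a more expensive request: accept and process a new client order
    router.post("/orders").handler(this::createOrder);

    vertx.createHttpServer()
        .requestHandler(router)
        // complete start-up only once the server is actually listening
        .listen(8080, listening -> {
          if (listening.succeeded()) {
            start.complete();
          } else {
            start.fail(listening.cause());
          }
        });
  }

  private void fetchBalance(RoutingContext ctx) {
    ctx.response().end("{\"balance\":\"0.0\"}"); // illustrative only
  }

  private void createOrder(RoutingContext ctx) {
    ctx.response().setStatusCode(202).end(); // illustrative only
  }
}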
Verticles are the entry point to the Vert.x multi-reactor. Each verticle is spawned with its own Context which is not visible to other verticles. This forces developers to organise code in a perfect share-nothing encapsulation. Moreover, each context allows for perfect testability of the business logic. Integrations with a database or message-oriented middleware, which reside in other contexts, can be mocked and eventually tested separately.
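Deploying verticles is then a one-liner per instance, each receiving its own isolated context (the verticle names below are hypothetical):

Vertx vertx = Vertx.vertx();
// each deployment gets its own Context, invisible to the other
vertx.deployVerticle(new OrderProcessingVerticle());
vertx.deployVerticle(new AccountBalanceVerticle());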
A good practice for writing verticles is to register an exception handler, which nicely aids our goal of resilience by design:
@Override
public void start(Promise<Void> start) {
  // do this before anything else
  context.exceptionHandler(error -> {
    if (error instanceof FatalError) {
      escalateAndShutdown(error);
    } else {
      log.error("this error is ignored", error);
    }
  });
  registerServiceHealthCheck();
  // complete start-up only when successfully connected to database
  registerHandlerWhenConnectedToDatabase().setHandler(start.future());
}
and clean up when you're done.
@Override
public void stop(Promise<Void> stop) {
  databaseConnectionPool.close().setHandler(stop.future());
}
Since verticles operate with their own share-nothing contexts, direct method calls of this sort are not possible:
@RequiredArgsConstructor
public final class OrderProcessingVerticle extends AbstractVerticle {

  // never reference another Verticle
  private final OrderMatchingVerticle matching;
  // or another context than your own
  private final Context parentContext;

  private MatchResult match(Order order) {
    return matching.handle(order);
  }
}
Instead, if one verticle wishes to communicate with another, it does so by passing a message over the EventBus to a specific address.
Using ask:
private Future<MatchResult> askToMatch(Order order) {
  // an empty envelope, in which the system relays the final outcome
  Promise<MatchResult> whenReplied = Promise.promise();
  // emit event by passing a message and register a response handler
  vertx.eventBus().<MatchResult>request("handle-this-order-for-me", order, reply -> {
    if (reply.succeeded()) {
      whenReplied.complete(reply.result().body());
    } else {
      whenReplied.fail(reply.cause());
    }
  });
  // return async result of the whole event-handling
  return whenReplied.future();
}
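On the receiving side, the matching verticle registers a consumer on the same address and replies with the result. A minimal sketch, assuming the same hypothetical address and types (note that custom types such as Order and MatchResult need a registered MessageCodec on the event bus, even for local delivery):

// inside the matching verticle's start(): register a consumer for the address used above
vertx.eventBus().<Order>consumer("handle-this-order-for-me", message -> {
  MatchResult result = matchingEngine.match(message.body()); // matchingEngine is hypothetical
  message.reply(result);
});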
Using broadcast:
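With broadcast, a verticle publishes a message to an address and every consumer registered on that address receives its own copy, fire-and-forget. A minimal sketch, assuming a hypothetical order-filled event:

private void broadcastFill(MatchResult result) {
  // publish: every consumer registered on this address receives the event, no reply expected
  vertx.eventBus().publish("order-filled", result);
}

// elsewhere, any interested verticle subscribes independently
vertx.eventBus().<MatchResult>consumer("order-filled", message ->
    log.info("order filled: {}", message.body()));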
By chaining event handlers which may operate in many fully independent contexts, the Vert.x framework provides building blocks for creating a complex and performant system which at the same time allows for a clean architecture. We argue that such a reactive system is easier to reason about than its many alternatives and leaves room to grow your business in times of emergent requirements.
In future posts, we will look at how we solved service encapsulation and why we at Bitpanda Pro love Apache Kafka.