Engineering
By Juraj Zachar
27.01.2021
Here at Bitpanda Pro, we love building reactive systems that deliver uncompromising levels of security and service quality to our customers. Since the core of our business is to process as many financial transactions as possible, with zero inaccuracies and in the shortest possible time, we take our job very seriously.
Back in 2018, when we set out to build the next-generation JVM stack to power our growing business, we started from a set of business goals.
We quickly learned that even the fastest human eye blink takes roughly 200 milliseconds, and that there are apparently many fast eye-blinkers who want to jump on the Bitcoin bandwagon. As the business goals and the first set of requirements became known, a more refined set of software-engineering goals was identified:
We were not sold on orchestrating tens, possibly hundreds of microservices. Instead, we wanted to start with just one service and add more if the need arose.
Each service implementation should have well-understood and testable unhappy-path handling. Whenever this path is encountered and the system cannot self-heal, all errors are transparently propagated to a supervisory entity and the faulty instance is removed from operation. Finally, if there are faults in the system, they must not propagate across context boundaries and bring our whole operation to a halt.
Each dependency comes with some hidden cost. One is its potential exposure to security vulnerabilities. The other is losing control altogether: performance tuning and simply keeping our application maintainable, scalable and predictable becomes a much more difficult job.
Enter Vert.x and its 650 kB of asynchronous, reactive Java framework goodness.
At the core of Vert.x is the principle of event handling. Traditionally, Java applications scaled by encapsulating workloads on threads. While in modern multi-core systems threads are cheap and operating systems are built to support enormous amounts of concurrent workloads, in Java it is still quite complex to rein in the orchestration of threads.
This has partly to do with Java's memory model (JLS chapter 17.4) when threads have to access shared memory, the heap. Get this wrong and you are in for some pretty frustrating hours of debugging out-of-memory errors, deadlocks, livelocks and the other delights of Java concurrency.
Now imagine we fix the number of available threads in the system up front. We don't create any new threads while running the application, and we design workload execution to take place on any of these threads. Furthermore, the application instructs the execution layer on what to do by emitting events and registering handlers (who does it), which execute in a so-called event loop. When the processing of one event is finished, the next event handler is called to occupy the underlying thread.
In summary, the thread life cycle and the entire burden of concurrency management are handled by the framework.
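As a minimal illustration of that model (the delays and printed messages are arbitrary), the sketch below registers two unrelated handlers; we never create threads ourselves, and Vert.x multiplexes both handlers onto its fixed pool of event-loop threads:
import io.vertx.core.Vertx;

public class EventLoopSketch {
  public static void main(String[] args) {
    // the event-loop threads are created once, up front
    Vertx vertx = Vertx.vertx();

    // register a handler: it runs later, on an event-loop thread
    vertx.setTimer(100, id ->
        System.out.println("timer handled on " + Thread.currentThread().getName()));

    // register another handler for a different event source;
    // both share the same fixed set of threads, one event at a time
    vertx.setPeriodic(1000, id ->
        System.out.println("tick on " + Thread.currentThread().getName()));
  }
}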
At this point you might think this is all well and good, but how does it relate to the goals we set out to achieve? Well, imagine the service you've been commissioned to build is an HTTP server. In the case of Vert.x you can seamlessly hook your event handlers into Netty's ChannelHandler (another multi-reactor) and your final setup could look like this:
Notice the time axis at the bottom of the diagram, where every slice represents concurrent processing of user requests using registered event handlers. Some requests may be very cheap to complete, such as fetching the current account balance, while others, such as processing a new client order, may take longer. This means that while one thread is occupied creating a new order to buy Bitcoin, the other registered event handlers may complete requests to deposit or withdraw funds on the other available thread.
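In code, the skeleton of such an HTTP service might look roughly like the following sketch (the route, port and EventBus address are made up for illustration); a cheap request is answered directly on the event loop, while a longer-running one is handed off as an event and answered once the other handler finishes:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Promise;

public class HttpApiVerticle extends AbstractVerticle {

  @Override
  public void start(Promise<Void> start) {
    vertx.createHttpServer()
        // every incoming request is dispatched as an event to this handler
        .requestHandler(request -> {
          if ("/balance".equals(request.path())) {
            // cheap request: answer straight away on the event loop
            request.response().end("{\"balance\":\"0.0\"}");
          } else {
            // more expensive request: hand it off as an event and
            // reply to the client once the other handler has finished
            vertx.eventBus().request("handle-this-order-for-me", request.getParam("order"), reply -> {
              if (reply.succeeded()) {
                request.response().end(reply.result().body().toString());
              } else {
                request.response().setStatusCode(500).end();
              }
            });
          }
        })
        .listen(8080, listening -> {
          if (listening.succeeded()) {
            start.complete();
          } else {
            start.fail(listening.cause());
          }
        });
  }
}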
Verticles are the entry point to the Vert.x multi-reactor. Each verticle is spawned with its own Context which is not visible to other verticles. This forces developers to organise code in a perfect share-nothing encapsulation. Moreover, each context allows for perfect testability of the business logic: integration with a database or message-oriented middleware which resides in other contexts can be mocked and tested separately.
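Deploying a verticle is what creates that context. A minimal sketch (the verticle class names simply reuse the examples from this post and assume no-argument constructors):
import io.vertx.core.Vertx;

public class Main {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    // each deployment gets its own Context; the instances share nothing
    // and can only talk to each other over the EventBus
    vertx.deployVerticle(new OrderProcessingVerticle());
    vertx.deployVerticle(new OrderMatchingVerticle());
  }
}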
A good practice for writing verticles is to register an exception handler, which nicely aids our goal of resilience by design.
@Override
public void start(Promise<Void> start) {
  // do this before anything else
  context.exceptionHandler(error -> {
    if (error instanceof FatalError) {
      escalateAndShutdown(error);
    } else {
      log.error("this error is ignored", error);
    }
  });
  registerServiceHealthCheck();
  // complete start-up only when successfully connected to database
  registerHandlerWhenConnectedToDatabase().setHandler(start.future());
}
and clean up when you're done.
@Override
public void stop(Promise<Void> stop) {
  databaseConnectionPool.close().setHandler(stop.future());
}
Since verticles operate with their own share-nothing contexts, it is not possible to make direct method calls of this sort:
@RequiredArgsConstructor
public final class OrderProcessingVerticle extends AbstractVerticle {

  // never reference another Verticle
  private final OrderMatchingVerticle matching;
  // or a context other than your own
  private final Context parentContext;

  private MatchResult match(Order order) {
    return matching.handle(order);
  }
}
Instead, if one verticle wishes to communicate with another, it does so by passing a message over the EventBus to a specific address.
Using ask:
private Future<MatchResult> askToMatch(Order order) {
  // an empty envelope, in which the system relays the final outcome
  Promise<MatchResult> whenReplied = Promise.promise();
  // emit an event by sending a message and registering a reply handler
  vertx.eventBus().<MatchResult>request("handle-this-order-for-me", order, reply -> {
    if (reply.succeeded()) {
      whenReplied.complete(reply.result().body());
    } else {
      whenReplied.fail(reply.cause());
    }
  });
  // return the async result of the whole event handling
  return whenReplied.future();
}
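On the other side of the ask, a consumer registered in the matching verticle's own context handles the message and replies. A minimal sketch, assuming a suitable EventBus codec is registered for the Order and MatchResult types and that a match method like the one shown earlier exists:
@Override
public void start() {
  // register a consumer on the shared EventBus address;
  // the handler runs on this verticle's own context
  vertx.eventBus().<Order>consumer("handle-this-order-for-me", message -> {
    MatchResult result = match(message.body());
    // replying completes the asker's Future
    message.reply(result);
  });
}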
Using broadcast:
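The corresponding snippet is not shown here, but under the same assumptions a broadcast is simply a publish on the EventBus: the message is delivered to every consumer registered on the address rather than to a single replier (the "order-matched" address is illustrative):
// inside the matching verticle: broadcast the result, no reply expected
private void broadcastMatch(MatchResult result) {
  vertx.eventBus().publish("order-matched", result);
}

// inside any other verticle: every subscriber receives its own copy,
// handled in that verticle's own context
@Override
public void start() {
  vertx.eventBus().<MatchResult>consumer("order-matched",
      message -> log.info("observed match: {}", message.body()));
}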
By chaining event handlers which may operate in many fully independent contexts, the Vert.x framework provides building blocks for creating a complex and performant system that at the same time allows for a clean architecture. We argue such a reactive system is easier to reason about than its many alternatives and leaves room to grow your business in times of emergent requirements.
In future posts, we will look at how we solved service encapsulation and why we at Bitpanda Pro love Apache Kafka.