What is eventual consistency?

Continuing to include interesting practical examples in our Apache Kafka courses for developers, today we talk about consistency in highly available distributed systems. Read on to learn what eventual consistency is, why it matters for microservice architecture, how the constraints of the CAP theorem come into play, and how Kafka Streams helps solve the problems of achieving eventual consistency.

What is eventual consistency, or how to hack the CAP theorem in distributed microservices

The simplicity of implementing microservice solutions often comes at the cost of design complexity. In particular, if one service changes the state of some object, the other services should learn about it as soon as possible (ideally, immediately) to avoid data inconsistency. For example, picking and shipping an order should begin only after the order has been paid, and preferably right after that event occurs. Meanwhile, order creation, payment, picking, and delivery are handled by different services.

However, achieving such data consistency in distributed systems is not easy. Not by chance, the CAP theorem for distributed systems states that of the three possible properties (Consistency, Availability, and Partition tolerance), only two can hold at the same time. For microservices, achieving maximum consistency is the most expensive option, because it requires distributed transactions with two-phase commit (2PC). Such transactions have low performance and limited scalability, and they tightly couple services to each other. Recall that consistency is one of the four key ACID properties (Atomicity, Consistency, Isolation, Durability) that characterize transactions. With microservices, the challenge is to guarantee the ACID properties of transactions in a distributed system at low cost while preserving a loosely coupled architecture.

Of all the consistency models for distributed systems, modern microservice architectures most often choose eventual consistency [2].

This provides high availability by guaranteeing that, in the absence of further data changes, some time after the last update, that is, eventually, all queries will return the last updated value. For example, an updated DNS record propagates across servers according to the configured caching intervals, so eventually all clients see the update, although not at the same instant. Thus, eventual consistency guarantees that changes are applied asynchronously with some time delay [3].
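As a rough illustration, this convergence guarantee can be modeled in a few lines of Python. This is a toy in-memory sketch, not any real database's replication protocol; the class names are purely illustrative:

```python
class Replica:
    """Holds a value plus updates that have not been applied yet."""
    def __init__(self):
        self.value = None
        self.pending = []

class EventuallyConsistentStore:
    """Toy model: writes land on the primary immediately and reach
    the replicas only when propagate() runs (e.g. on a timer)."""
    def __init__(self, n_replicas=3):
        self.replicas = [Replica() for _ in range(n_replicas)]

    def write(self, value):
        self.replicas[0].value = value      # primary sees it at once
        for r in self.replicas[1:]:
            r.pending.append(value)         # replicas only queue it

    def propagate(self):
        for r in self.replicas[1:]:         # asynchronous replication
            for v in r.pending:
                r.value = v
            r.pending.clear()

store = EventuallyConsistentStore()
store.write("v2")
stale = [r.value for r in store.replicas]   # replicas still return old data
store.propagate()                           # "eventually"...
fresh = [r.value for r in store.replicas]   # ...all reads agree
print(stale, fresh)
```

Reads between the write and the propagation step may return stale values; only after propagation do all replicas agree, which is exactly the "eventually" in eventual consistency.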

The main problem with asynchronous messaging, however, is that consumers get no guarantees about the logical order in which events from producers arrive. For example, a message about an order payment may be received before the message about the new order itself, triggering an attempt to process a seemingly non-existent object. This can be solved by saving the payment message and reprocessing it after the new-order message has been received and processed. Such an approach can be implemented with a messaging or event-processing platform, splitting the system and using a database (cache) to track the processing state of objects, or with Apache Kafka Streams applications. Why the latter is the simplest, easiest-to-implement, and most scalable solution is what we examine next.
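A minimal sketch of this "save and replay" idea (the event shapes and handler below are illustrative, not a Kafka Streams API):

```python
orders = {}   # order_id -> order state
parked = {}   # order_id -> payment events that arrived too early

def handle(event):
    etype, order_id = event["type"], event["order_id"]
    if etype == "order_created":
        orders[order_id] = {"paid": False}
        for early in parked.pop(order_id, []):   # replay saved events
            handle(early)
    elif etype == "order_paid":
        if order_id not in orders:
            parked.setdefault(order_id, []).append(event)  # save for later
        else:
            orders[order_id]["paid"] = True

# The payment arrives before the order it refers to:
handle({"type": "order_paid", "order_id": 1})
handle({"type": "order_created", "order_id": 1})
print(orders[1]["paid"])  # True: the parked payment was replayed
```

The consumer never rejects an out-of-order event; it parks it until the prerequisite state exists, then replays it.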

How Kafka Streams provides eventual consistency for distributed microservices

The Kafka Streams API is a client library for developing applications that process records from Apache Kafka topics. A Kafka Streams application processes data streams (records) through processor topologies built from topics, state stores, and processor nodes. A Kafka Streams application consumes and produces messages that are durably stored in topics and state stores. Kafka Streams can be scaled by partitioning topics so that multiple tasks and threads process data in parallel, including stateful processing, where records are processed based on their historical state. A topic partition is processed by a dedicated task or thread. The partition assignment algorithm of the Kafka Streams API guarantees that records with the same primary key are processed sequentially, and that the state store for a partition is used exclusively by the assigned task or thread. Records with the same primary key are guaranteed to land in the same partition, which allows Kafka Streams to process them sequentially by event time.
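The key-to-partition invariant can be sketched as follows. Note that crc32 here merely stands in for Kafka's actual murmur2-based partitioner, so the partition numbers are illustrative:

```python
import zlib

NUM_PARTITIONS = 4

def partition_for(key: str) -> int:
    # Kafka hashes the serialized key (murmur2); crc32 stands in here.
    return zlib.crc32(key.encode()) % NUM_PARTITIONS

records = [("order-42", "created"), ("order-7", "created"),
           ("order-42", "paid"), ("order-42", "shipped")]

partitions = {}
for key, event in records:
    partitions.setdefault(partition_for(key), []).append((key, event))

# Every event for order-42 lands in the same partition, so the single
# task that owns that partition sees them in order:
p = partition_for("order-42")
print([event for key, event in partitions[p] if key == "order-42"])
# → ['created', 'paid', 'shipped']
```

Because the hash of a key is deterministic, all records for one key hit one partition, and one task owns that partition, which is what makes per-key sequential processing possible.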

Thus, Kafka Streams is well suited for implementing eventual consistency in distributed microservice systems, additionally providing fault tolerance, scalability, and performance [1]:

That is why the developers at the international fintech company BlackRock, whom we discussed yesterday in the context of extended Apache Kafka security, used the Kafka Streams API for their liquidity management web platform, approved by major banks and asset management corporations. The Cachematrix Cloud Connector application for the Cachematrix Liquidity Trading Portal is a real-time "Integration as a Service" offering that connects various external banking and Transfer Agent (TA) systems to the multi-tenant Cachematrix portal. Apache Kafka serves as the message broker and event streaming platform, while the required message persistence and stateful processing functions are implemented with the Kafka Streams API, guaranteeing eventual consistency with high fault tolerance, stability, and performance [1].

Learn all the details of developing distributed big data streaming analytics applications and administering Apache Kafka clusters in specialized courses at our licensed training and professional development center for developers, managers, architects, engineers, administrators, Data Scientists, and Big Data analysts in Moscow.

Source

Eventually Consistent Databases: State of the Art

Introduction

Eventual consistency [1] is a consistency model used in many large distributed databases. Such databases require that all changes to a replicated piece of data eventually reach all affected replicas. Conflict resolution is not handled by these databases; in the event of conflicting updates, the responsibility is pushed up to the application authors. Eventual consistency is a specific form of weak consistency: the storage system guarantees that if no new updates are made to the object, eventually all accesses will return the last updated value [1]. If no failures occur, the maximum size of the inconsistency window can be determined from factors such as communication delays, the load on the system, and the number of replicas involved in the replication scheme. We earlier studied in https://blog.mariadb.org/mariadb-eventually-consistent/ whether MariaDB is eventually consistent and found that in most configurations it can be.

DEFINITION: Eventual consistency.

EXAMPLE: Consider a case where data item R=0 on all three nodes. Assume that we have the following sequence of writes and commits on node 0: W(R=3) C W(R=5) C W(R=7) C. Now a read on node 1 could return R=5 and a read from node 2 could return R=7. This is eventually consistent as long as reads from all nodes eventually return the same value. Note that this final value could be R=5, since eventual consistency does not restrict the order in which the writes must be executed.

In this blog we briefly introduce database management systems using eventual consistency and evaluate the reviewed databases based on popularity, maturity, consistency, and use cases. Based on this, we present the advantages and disadvantages of eventual consistency. A longer and more thorough version of this research has been published in [13].

Databases using eventual consistency

MongoDB [2] is a cross-platform document-oriented NoSQL database system and uses BSON as its data model for storing data. MongoDB is free and open source software, and has official drivers for a variety of popular programming languages and development environments. The web programming language Opa also has built-in support for MongoDB, offering a type-safety layer on top of it. There are also a large number of unofficial or community-supported drivers for other programming languages and frameworks.

CouchDB [3] is an open source NoSQL database that uses JSON as its data model, JavaScript as its query language, and HTTP as its API. CouchDB was first released in 2005 and later became an Apache project in 2008. One of CouchDB’s distinguishing features is multi-master replication.

Amazon SimpleDB [4] is a distributed database written in Erlang by Amazon.com. It is used as a web service with Amazon Elastic Compute Cloud (EC2) and Amazon S3, and is part of Amazon Web Services. It was announced on December 13, 2007.

Amazon DynamoDB [5,6] is a fully managed proprietary NoSQL database service that is offered by Amazon.com as part of the Amazon Web Services portfolio. DynamoDB uses a similar data model as Dynamo, and derives its name also from Dynamo, but has a different underlying implementation: DynamoDB has a single master design. DynamoDB was announced by Amazon CTO Werner Vogels on January 18, 2012.

Riak [7] is an open-source, fault-tolerant key-value NoSQL database that implements the principles from Amazon’s Dynamo. Riak uses consistent hashing to distribute data across nodes, and buckets to store data.

DeeDS [8] is a prototype of a distributed, active real-time database system. It aims to provide data storage for real-time applications, which may have hard or firm real-time requirements. As its database, DeeDS uses OBST (Object Management system of STONE) and TDBM (DBM with transactions), where TDBM replaces the OBST storage manager. One main reason for introducing TDBM is to add support for nested transactions to DeeDS. TDBM is a transaction-processing data store with a layered architecture, and provides DeeDS with:

The ZATARA database [9] is a distributed database engine that features an abstract query interface and plug-in-able internal data structures. ZATARA is designed as a framework flexible enough to be used by any software application, and it guarantees data integrity while achieving high performance and scalability.

Both DeeDS and ZATARA are the results of research projects and are not yet mature enough for production use.

We use the following criteria to evaluate the database systems that support eventual consistency:

Popularity

We evaluate the popularity of the presented database systems based on the DB-Engines ranking (http://db-engines.com/en/ranking), which ranks database management systems according to their popularity. At the beginning of 2014, MongoDB was ranked 7th with a score of 96.1; in June 2014, it is ranked 5th with a score of 231.44. At the beginning of 2014, CouchDB was ranked 16th; in June 2014, it is ranked 21st with a score of 22.78. At the beginning of 2014, Riak was ranked 27th; in June 2014, it is ranked 30th with a score of 10.82. At the beginning of 2014, DynamoDB was ranked 35th with a score of 7.20; in June 2014, it is ranked 32nd with a score of 9.58. At the beginning of 2014, SimpleDB was ranked 46th; in June 2014, it is ranked 52nd with a score of 2.94.

According to this ranking, MongoDB is clearly the most popular and widely known database system supporting eventual consistency. As a reference, MySQL is ranked 2nd in June 2014 and MariaDB is ranked 28th.

Maturity

Based on the authors’ research, MongoDB is clearly the most mature database system using eventual consistency. It has a large user and customer base and is actively developed. MongoDB has official drivers for several popular programming languages and development environments. There are also a huge number of unofficial or community-supported drivers for other programming languages and frameworks.

Riak is available for free under the Apache 2 License. In addition, Basho Technologies offers commercial Riak licenses with subscription support and the ability to use MDC (Multi Data Center) Replication. Riak has official drivers for Ruby, Java, Erlang, Python, PHP, and C/C++. There are also many community-supported drivers for other programming languages and frameworks.

CouchDB is a NoSQL database that uses JSON to store data and supports MapReduce query functions in JavaScript and Erlang. CouchDB was first released in 2005 and became an Apache project in 2008. The replication and synchronization features of CouchDB make it ideal for mobile devices, where a network connection is not guaranteed but the application must keep working offline. CouchDB is also suited for applications with accumulating, occasionally changing data on which pre-defined queries are to be run and where versioning is important (CRM and CMS systems, for example). Master-master replication is an especially interesting feature of CouchDB, as it allows easy multi-site deployments. CouchDB is clearly a mature system and is used in production environments.

Amazon SimpleDB is in the Beta phase and thus we do not suggest its use in production. ZATARA and DeeDS are in the research phase and there are no publicly available systems for testing. Therefore, they are at most in the Alpha phase and we do not recommend their use in production either.

Consistency

From earlier research, we know that Amazon SimpleDB’s inconsistency window for eventually consistent reads was almost always less than 500ms [10], while another study found that Amazon S3’s inconsistency window lasted up to 12 seconds [10]. However, to the authors’ knowledge, there is no widely known and accepted workload for benchmarking databases that use eventual consistency. Therefore, the comparison of consistency or inconsistency must be based solely on system features. From the point of view of consistency, Riak offers the most configurable consistency feature, which allows selecting the consistency level.

MongoDB, SimpleDB and DynamoDB offer the possibility to read the latest version of the data item, thus providing strong consistency as well as eventual consistency. All the other systems reviewed offer only eventual consistency, and may return an old version of the data when performing read operations.

Use Cases

MongoDB has been successfully used in operational intelligence, especially for storing log data, creating pre-aggregated reports, and hierarchical aggregation. Furthermore, MongoDB has been used in product management systems to store product catalogs and to manage inventory and category hierarchies. In content management systems, MongoDB is used to store metadata and assets and to hold user comments on content such as blog posts and media.

Riak has been successfully used in simple high read-write applications for session storage, serving advertisements, and storing log data and sensor data. Furthermore, Riak has been used in content management and social applications for storing user accounts, user settings and preferences, user events and timelines, and articles and blog posts.

The replication and synchronization capabilities of CouchDB are well suited in mobile environment, where network connection is not guaranteed, but the application must keep on working offline. CouchDB is also ideal for the applications with accumulating, occasionally changing data, on which pre-defined queries are to be run, and where versioning is important. CRM, CMS systems are the examples of such applications. CouchDB has an especially interesting feature: master-master replication, which allows easy multi-site deployments.

SimpleDB is well suited for logging, online games, and metadata indexing. However, one cannot use SimpleDB for aggregate reporting: there are no aggregate functions such as SUM, AVERAGE, or MIN in SimpleDB. Metadata indexing is a very good use case for SimpleDB: one can have data stored in S3 and use SimpleDB domains to store pointers to S3 objects along with more information about them. A SimpleDB item is limited in size, but one can use S3 for storing bigger objects, such as images and videos, and point to them from SimpleDB. Another class of applications for which SimpleDB is ideal is sharing information between isolated components of an application, since SimpleDB provides a way to share indexed information, i.e., information that can be searched.

Advantages and Disadvantages

Advantages

Eventual consistency is easy to achieve and provides some consistency for the clients [11]. Building an eventually consistent database has two advantages over building a strongly consistent one: (1) it is much easier to build a system with weaker guarantees, and (2) database servers separated from the larger database cluster by a network partition can still accept writes from applications. Unsurprisingly, the second justification is the one given by the creators of the first-generation NoSQL systems that adopted eventual consistency.

In practice, eventual consistency is often strongly consistent: several recent projects have verified the consistency of real-world eventually consistent stores [10].

Disadvantages

While eventual consistency is easy to achieve, the current definition is not precise [11]. Firstly, it is not clear from the current definition what the eventual state of the database is. A database that always returns the value 42 is eventually consistent, even if 42 was never written. One possible refinement is that eventually all accesses return the last updated value, so that the database cannot converge to an arbitrary value [1]. Even this new definition has another problem: what values can be returned before the eventual state of the database is reached? If the replicas have not yet converged, what guarantees can be made on the data returned? In this case, the only possible solution would be to return the last known consistent value. The problem here is how to know which version of a data item has converged to the same state on all replicas [1].

Eventual consistency requires that writes to one replica will eventually appear at other replicas, and that if all replicas have received the same set of writes, they will have the same values for all data. This weak form of consistency does not restrict the ordering of operations on different keys in any way, thus forcing programmers to reason about all possible orderings and exposing many inconsistencies to users. For example, under eventual consistency, after Alice updates her profile, she might not see that update after a refresh. Or, if Alice and Bob are commenting back and forth on a blog post, Carol might see a random non-contiguous subset of that conversation. When an engineer builds an application on an eventually consistent database, the engineer needs to answer several tough questions every time data is accessed from the database:

That is a hard list, and developers must work very hard to answer these questions. Essentially, an engineer needs to do the work manually to make sure that multiple clients do not introduce inconsistencies between nodes.

One way to address these questions at least partly is to use a stronger version of eventual consistency.

DEFINITION: Strong Eventual consistency.

To the authors’ knowledge, there is currently no database system that uses strong eventual consistency, possibly because it is harder to implement. Eventual consistency represents a clear weakening of the guarantees that traditional databases provide, and places a burden on software developers. Designing applications that maintain correct behavior even when the accuracy of the database cannot be relied on is hard.

In fact, Google addressed the pain points of eventual consistency in a recent paper on its F1 database [12] and noted: “We also have a lot of experience with eventual consistency systems at Google. In all such systems, we find developers spend a significant fraction of their time building extremely complex and error-prone mechanisms to cope with eventual consistency and handle data that may be out of date. We think this is an unacceptable burden to place on developers and that consistency problems should be solved at the database level.”

Conclusions

Clearly, there are several very mature and popular database systems using eventual consistency. Most of these are actively developed and there is a strong community behind them. We believe that we will see more database systems in the future using eventual consistency or strong eventual consistency.

References

[1] Vogels, W.: Scalable Web services: Eventually Consistent, ACM Queue, vol. 6, no. 6, pp. 14-16, October 2009.

[3] Anderson, C. J., Lehnardt, J., and Slater, N.: CouchDB: The Definitive Guide, First Edition. O’Reilly Media, January 2010.

[6] Sivasubramanian, S.: Amazon DynamoDB: a seamlessly scalable non-relational database service. In Proceedings of the SIGMOD International Conference on Management of Data, ACM, New York, NY, USA, 2012, pp. 729-730.

[8] Andler, F., Hansson, J., Mellin, J., Eriksson, J., and Eftring, B.: An overview of the DeeDS real-time database architecture. In Proceedings of the 6th International Workshop on Parallel and Distributed Real-Time Systems, 1998.

[9] Bogdan Carstoiu and Dorin Carstoiu: Zatara, the Plug-in-able Eventually Consistent Distributed Database. AISS, 2(3), 2010.

[10] Bermbach, D. and Tai, S.: Eventual Consistency: How soon is eventual? In Proceedings of the 6th Workshop on Middleware for Service Oriented Computing, pp. 1-5, 2011.

[11] Bailis, P. and Ghodsi, A.: Eventual consistency today: limitations, extensions, and beyond. Communications of the ACM, vol. 56, no. 5, pp. 55-63, May 2013.

[12] Shute, J., Vingralek, R., Samwel, B., Handy, B., Whipkey, C., Rollins, E., Oancea, M., Littlefield, K., Menestrina, D., Ellner, S., Cieslewicz, J., Rae, I., Stancescu, T., and Apte, H.: F1: A Distributed SQL Database That Scales, VLDB, 2013.

Source

Handling Eventual Consistency with Distributed Systems


At SSENSE, we employ microservices to provide functionality for both our customers and back-office operators.

To operate at scale and provide resilience in a distributed environment, we leverage several patterns, including:

Using any of the methods above means accepting that the systems are eventually consistent, which presents some challenges that are often overlooked by developers.

This article aims to discuss some of these challenges, while sharing potential approaches you can take to handle the eventual consistency aspect in your applications.

Defining Eventual Consistency

The term eventual consistency is used to describe the model, commonly found in distributed systems, that guarantees that if an item does not receive new updates, then eventually all accesses to that item will return the same value matching the last update, therefore providing a consistent result. Systems that present this behavior are also referred to as convergent.

Perhaps the most common example is when you leverage asynchronous replication between two or more instances of an RDBMS (Relational Database Management System). In this setup, the application is scaled by driving the read operations to the replica while keeping the writes to the single main instance.

Because of this asynchronicity, when the write is performed it is only persisted locally; later, another process takes care of copying and applying the same operation on the replica(s). As this process is independent of the original write operation and is not instantaneous, there is a period of time during which the replica does not contain the updated information.

Figure 1 illustrates, in a simplified way, the replication that happens between the main instance and its replica. In this example, if someone queries the replica at instant T3, where T1 < T3 < T2, the result it provides would not match the one that already exists in the main instance.

Another example happens in an event-driven system, where asynchronous messages, also known as events, would be published to inform of changes just to be picked up by other parts of the system. These events would in turn trigger additional actions in the systems that consume them, such as updating local storage with copies of the data.

Similarly to replication, there is a temporal disconnect between the part of the system that originally registered the state change and the other part(s) that are interested in those changes.

This disconnect is something intrinsic to this model and comes with a specific set of challenges. I will cover two of the most common ones next.

Common Challenges

Read After Write

This is perhaps the most common problem that we face. You execute a write operation on an entity of your system, and right after, you attempt to retrieve it from the persistence layer, just to find out it is not there!

Figure 2 illustrates the case where your application is writing — creating or updating — an entity and in the same execution flow attempts to read it back. Because the replication takes time, depending on the current load of your application and persistence layer, by the time you execute the query, the changes have not yet been propagated.

The “funny” part is that during development this will not manifest itself. You normally do not have a replication setup in your development environment, hence the reads and writes go to the same persistence layer. By the time you deploy your application and have real users interacting with it, you will find ‘strange’ behaviors that you can’t consistently reproduce.

Concurrency Control

Almost as a corollary of the read-after-write problem, with eventually consistent systems you often find yourself retrieving an entity from a projection (a read-only model) in order to perform additional changes. All goes well if the projection is up to date with the source, but if that is not the case you may end up losing previously updated information.

Figure 3 illustrates the case where your order has been updated by a Customer Care Agent but, before the change has been propagated to the read-only model, you decide to edit your order. The system ends up in an inconsistent state because it does not take the concurrent access into account.

Of course, this is not exclusive to eventually consistent systems, but it is an important aspect to not be neglected.

Solutions

The solutions I will explore next try to address the challenges in different ways. Depending on the case, you may use one or more combined to handle eventual consistency issues in a graceful way.

Fake It

Arguably the best way to avoid the read-after-write issue is not to perform a read at all. Imagine the situation illustrated in figure 4.

You just placed an order and want to show a confirmation page to your customer. Because we are trying to read the order right after placing it, we will not see it if there is any delay — replica lagging due to higher load, message broker with a backlog of messages, etc.

A solution would be not to read the order back at all, and simply show the order information you have in memory at the moment of placing it.

If your use case generates more data while executing the operation you will have to consider showing just the information you already have and signaling that the operation is still in progress.

Set an Expected Version

If you have a use case where you are making changes to an existing entity and trying to retrieve the updated entity right after, one solution is to define the expected version of the entity you want to receive.

In this solution, you are embracing the fact that your system may not have converged by the time you requested the entity. If you do not receive the version you expect or a later one, you know you have to do something and can’t use the information you just received. At this moment you can show a custom message to the user or use a spinner — see the UI Poller approach next — and retry fetching until you succeed.

This is a more sophisticated approach that requires some form of versioning for the entities you are manipulating, but it is still simple to implement.

Having the version also helps with concurrency control, because the server can simply reject a change whose expected version is older than the one that already exists.
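A minimal sketch of this version check on the server side (the class and field names are illustrative, not any specific framework's API):

```python
class VersionConflict(Exception):
    pass

class Store:
    def __init__(self):
        self.data = {}  # key -> (version, value)

    def write(self, key, value, expected_version):
        current, _ = self.data.get(key, (0, None))
        if expected_version != current:
            # Someone else updated the entity first: reject the change.
            raise VersionConflict(f"expected {expected_version}, found {current}")
        self.data[key] = (current + 1, value)

store = Store()
store.write("order-1", {"qty": 2}, expected_version=0)  # creates version 1
store.write("order-1", {"qty": 3}, expected_version=1)  # updates to version 2
try:
    store.write("order-1", {"qty": 9}, expected_version=1)  # stale version
except VersionConflict as exc:
    print("rejected:", exc)
```

The client that read version 1 from a lagging projection cannot silently overwrite version 2; it gets a conflict and can re-read, merge, or ask the user.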

If your use case allows, you may want to consider even merging changes or using conflict-free replicated data types (CRDTs).
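For example, a last-writer-wins register, one of the simplest CRDTs, can be sketched in a few lines (the timestamps here are logical and illustrative). Its merge is commutative and idempotent, so replicas converge no matter the order in which they exchange state:

```python
def merge(a, b):
    """Each state is (timestamp, value); the later write wins."""
    return a if a[0] >= b[0] else b

replica_1 = (5, "shipping: express")    # written at logical time 5
replica_2 = (8, "shipping: standard")   # written at logical time 8

# Merging in either order yields the same converged state:
assert merge(replica_1, replica_2) == merge(replica_2, replica_1)
print(merge(replica_1, replica_2))  # (8, 'shipping: standard')
```

Real CRDT libraries offer richer types (counters, sets, maps), but the same merge properties are what make conflict resolution automatic.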

UI Poller

In the two previous solutions, I attempted to avoid the eventual consistency issues by not retrieving any information or detecting if the one I retrieved was good to be used.

Although those can be applied to many situations, what happens when it is not the case? For example, you want to return a previously placed order and the system generates a return code after processing the request.

You can’t use the Fake It approach because this code is not part of the submitted data, and the version you expect is still not available. A simple approach is to use the (in)famous spinner and, behind the scenes, keep polling for the updated information.

As illustrated in figure 9, after receiving the information you stop the spinner and provide the information the user needs to see.

The downside of this solution is that it increases the load on your backend with potentially useless requests. To alleviate the situation, you should at least set up a client-side retry mechanism that uses a back-off strategy and a maximum number of retries before giving up.

If you are monitoring your application, as you should, you can even set the initial retry interval based on the Nth percentile of the execution time for the use case you are trying to handle. If figure 11 illustrates the execution time for the use case, you could choose, for example, the 75th percentile and make the first attempt after 500ms, the second after 750ms, and the third and final one at 1300ms.
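A sketch of such a poller using the illustrative intervals above; the fetch function simulates a backend that only converges after about one second:

```python
import time

def poll_until_ready(fetch, delays_ms=(500, 750, 1300)):
    """Call fetch() after each back-off delay; give up after the last try."""
    for delay in delays_ms:
        time.sleep(delay / 1000)
        result = fetch()
        if result is not None:
            return result
    return None  # caller shows an error or a "still processing" message

# Simulated backend that only converges after about one second:
start = time.monotonic()
def fetch_return_code():
    return "RC-123" if time.monotonic() - start > 1.0 else None

print(poll_until_ready(fetch_return_code))  # → RC-123
```

The bounded delay list gives you both the back-off and the maximum number of retries in one place; tuning the delays to your percentiles keeps most users on the first or second attempt.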

If your use case is more complex, or the number of potential polling requests is unacceptable at your scale, the next solution can help.

WebSockets

This is by far the most complex solution on the list, but also the most powerful and likely the most flexible. It consists of leveraging WebSockets as the way you communicate with the client from the backend of your application.

WebSockets, or the WebSocket API, enable bi-directional communication between the client (usually the browser) and a server. This way, the client can send messages to the server and receive responses without having to rely on polling the server.

The problems it tries to solve are the same as the UI Poller’s, but it addresses that solution's limitations when it comes to scale (such as too many clients) and the potential variability of the execution time.

Figure 12 illustrates the solution using WebSockets and has the following flow:

This solves the limitations of the previous solution: there are no additional, useless polling requests from the client, because once the information is ready, the server pushes it to the client. Additionally, it enables more complex cases where a single operation has many individual stages, as seen in Figure 12.

In this example, a Customer Care Agent is updating an Order and as part of the operation, you are expected to capture a new amount, reserve the stock, and update the shipping information at the warehouse.
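The push flow can be simulated without a real network by letting an asyncio.Queue stand in for the WebSocket connection. The stage names follow the example above; this is a sketch of the interaction pattern, not the AWS-based implementation:

```python
import asyncio

async def process_order(socket: asyncio.Queue):
    # The backend pushes progress for each stage as it completes.
    for stage in ("amount captured", "stock reserved", "shipping updated"):
        await asyncio.sleep(0.01)   # stand-in for the real work
        await socket.put(stage)
    await socket.put(None)          # done marker

async def client(socket: asyncio.Queue):
    updates = []
    while (msg := await socket.get()) is not None:
        updates.append(msg)         # the UI would render each update
    return updates

async def main():
    socket = asyncio.Queue()
    _, updates = await asyncio.gather(process_order(socket), client(socket))
    return updates

print(asyncio.run(main()))
# → ['amount captured', 'stock reserved', 'shipping updated']
```

The client receives every intermediate stage as it happens instead of polling, which is exactly the continuous feedback a real WebSocket connection would deliver.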

Now let’s look at the components that could be associated with enabling this solution while leveraging AWS.

As you can see, it is a much more complex solution than the previous ones but extremely powerful as it allows you to send continuous feedback to the client as the execution takes place.

It is important to note that although not shown, the code behind a production application is expected to handle failures, such as when the client is no longer connected or attempts to reconnect.

Wrap Up

Eventual consistency is a reality in our systems and in most cases unavoidable. I presented some approaches you can leverage that do not try to remove the eventual consistency, but instead acknowledge its existence and incorporate ways to handle it.

When choosing which one, or which ones, to apply, always take your context into account and select the approach that best serves the business needs. This will help you avoid introducing unnecessary complexity or cost into your solution.

Source
