A Food for Thought: How to Share Data Among Services in a Microservice World (part 2)

A microservice architecture is all about transferring data from one service to another.

Sharing is caring!

Overview

In the previous article, we discussed data sharing between services in the microservice architecture world, covering the Direct Database Connection and HTTP call mechanisms.

Now we continue with the third and last option for this topic.

Messaging and/or Streams

The last option in today's article is using a messaging system. Because messaging and streams are so similar, I will put them into one sub-section.

tl;dr: I am just too lazy to split them.

Data Sharing using Stream API

In this approach, every Payment Service writes its transactions into a specific topic. Let's say CC-PAYMENT for credit card payments and BANK-TRANSFER-PAYMENT for bank transfers. Moreover, each Payment Service has its own stream API. The main purpose of each stream API is to transform the different payment payloads into a single common payment format, then store the result back into the PAYMENT-OUTPUT topic.

After the payment data is stored in that topic, we can build another stream API or a KSQL query to transform and aggregate all of it and produce the last five transactions per customer. Then we store this information back into another topic, let's say PAYMENT-TRX-AGGR. Customer Service can fetch the information later and store it in its own database (or elsewhere).

De-coupling Architecture

With this approach, we can resolve the issues we saw with the HTTP call mechanism. For instance, whenever there is a new payment channel, we only focus on developing the new one. From the Customer Service's perspective, it does not even know whether a new payment channel exists. It is totally transparent. As long as the new service writes its information into the PAYMENT-OUTPUT topic, the aggregation takes place by itself.

Whenever one of the Payment Services is down (or both), we can still display transaction information to our customers. This is the main beauty of the stream approach: you have almost independent services that are transparent to one another and experience almost no impact when another service is having a problem. By combining the messaging and data streaming approaches, we keep data flowing from one service to another.

Lastly, you do not need to worry about flooding the network by polling every 30 minutes. Messaging and stream APIs already implement a reactive architecture. Instead of polling at a fixed interval, they react whenever a new message arrives.

All of this is possible because this communication style is purely asynchronous, and our data flows into and out of Kafka. The only component everything depends on in this mechanism is Kafka itself.

Single Point of Failure?

Since every service connects to Kafka, there is a possibility that Kafka becomes the new single point of failure. Well, we cannot escape the reality that at some point, every service and integration will face a critical point. But the good news is that Kafka was designed for distribution, scalability, and fault tolerance, so there is less to worry about here.

Hard to Implement

Achieving this approach is not an easy task. First, we need a proper Kafka setup and infrastructure. Next, we need a team of developers who are ready to learn Kafka, whether as a pub-sub system or as a stream API. Most critical of all, we need people with a strong vision and a collaborative mindset from the beginning so that our services run seamlessly in the future: someone who can oversee the flow of data within the organization and coordinate between multiple product development streams.

Eventual Consistency

Another important aspect of this approach is eventual consistency. There will be some delay before data reaches its final stage, so our customers may retrieve stale data for a certain amount of time. Usually it takes less than five minutes to reach consistency, but it depends on how you configure your stream API. In my opinion, this is still acceptable, because stale data is still better than inconsistent data.

Code Sample

Below, I provide some code that implements the stream API approach.

Credit Card Stream API
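The original snippet is in the GitHub repository. As a rough sketch only, the transformation step for credit card payments might look like the plain Python function below. The field names (cc_number, customer_id, and so on) are my own illustrative assumptions; in production, this logic would run inside a Kafka Streams or consumer/producer application reading from CC-PAYMENT and writing to PAYMENT-OUTPUT.

```python
def transform_cc_payment(event: dict) -> dict:
    """Map a credit-card-specific payload to the common payment format
    destined for the PAYMENT-OUTPUT topic. Field names are illustrative."""
    return {
        "customer_id": event["customer_id"],
        "payment_type": "CREDIT_CARD",
        "amount": event["amount"],
        "currency": event["currency"],
        "timestamp": event["timestamp"],
        # Keep only the last four digits of the card number as a reference.
        "reference": "CC-" + event["cc_number"][-4:],
    }
```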

Bank Transfer Stream API
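Likewise, a sketch of the bank transfer mapping into the same common format (again, the payload field names such as bank_code and account_number are assumptions, not taken from the actual repository):

```python
def transform_bank_transfer(event: dict) -> dict:
    """Map a bank-transfer-specific payload to the same common payment
    format as the credit card stream. Field names are illustrative."""
    return {
        "customer_id": event["customer_id"],
        "payment_type": "BANK_TRANSFER",
        "amount": event["amount"],
        "currency": event["currency"],
        "timestamp": event["timestamp"],
        # Reference combines the bank code with the last four account digits.
        "reference": "BT-" + event["bank_code"] + "-" + event["account_number"][-4:],
    }
```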

Payment Aggregation Stream API
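The aggregation that a Kafka Streams or KSQL job would perform over PAYMENT-OUTPUT can be approximated, as a sketch only, with an in-memory structure that keeps the last five payments per customer. The class and method names below are illustrative; a real Streams topology would keep this state in a fault-tolerant state store rather than in process memory.

```python
from collections import defaultdict, deque


class PaymentAggregator:
    """Keeps the last N common-format payments per customer, approximating
    the stateful aggregation a Kafka Streams/KSQL job would perform."""

    def __init__(self, size: int = 5):
        self.size = size
        # deque(maxlen=N) automatically evicts the oldest payment.
        self.by_customer = defaultdict(lambda: deque(maxlen=size))

    def apply(self, payment: dict) -> list:
        """Process one PAYMENT-OUTPUT record and return the updated
        aggregate that would be written to the PAYMENT-TRX-AGGR topic."""
        window = self.by_customer[payment["customer_id"]]
        window.append(payment)
        return list(window)
```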

Sample Output
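For illustration, a record written to the PAYMENT-TRX-AGGR topic, keyed by customer, might look something like this (field names are assumptions, not taken from the actual repository):

```json
{
  "customer_id": "C1",
  "last_transactions": [
    {"payment_type": "CREDIT_CARD", "amount": 150.0, "currency": "USD", "reference": "CC-1111", "timestamp": 1596000000},
    {"payment_type": "BANK_TRANSFER", "amount": 75.5, "currency": "USD", "reference": "BT-BCA-6789", "timestamp": 1596000300}
  ]
}
```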

 

The full sample is in my GitHub repository.

Conclusion

I deliberately considered only three approaches. There are still plenty of options out there that you may want to explore, such as traditional messaging, dumb pipes, etc.

As stated at the beginning, there is no silver bullet for microservice communication. Each approach brings its pros and cons. Remember, there is no black-and-white conclusion. In the end, it is you who decides which solution best fits your organization.

Author: ru rocker

I have been a professional software developer since 2004. Java, Python, NodeJS, and Go-lang are my favorite programming languages. I also have an interest in DevOps. I hold professional certifications: SCJP, SCWCD, PSM 1, AWS Solution Architect Associate, and AWS Solution Architect Professional.
