After building a blueprint for our solution in part 1, it is now time to implement it in code. We use AWS Cloud Development Kit (CDK) as Infrastructure as Code and Spring Boot for our custom job implementation.
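For a taste of the CDK side, here is a minimal sketch of a CDK v2 app in Java; `JobStack` is a hypothetical name, and the actual resources backing the Spring Boot job are left as a placeholder:

```java
import software.amazon.awscdk.App;
import software.amazon.awscdk.Stack;
import software.amazon.awscdk.StackProps;
import software.constructs.Construct;

// Hypothetical stack for the custom job; a minimal CDK v2 sketch.
public class JobStack extends Stack {

    public JobStack(final Construct scope, final String id, final StackProps props) {
        super(scope, id, props);
        // AWS resources for the job (e.g. an ECS task running the
        // Spring Boot image) would be declared here.
    }

    public static void main(final String[] args) {
        App app = new App();
        new JobStack(app, "JobStack", StackProps.builder().build());
        app.synth(); // emits the CloudFormation template
    }
}
```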
An example of provisioning EC2 instances in a private subnet using AWS SSM, Ansible Dynamic Inventory, and the AWS community collections.
Part 2 will cover mapping the Gherkin feature file into Cucumber Step Definitions and implementing them using Kafka Streams.
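As a rough illustration of that mapping, here is a minimal sketch of Cucumber step definitions in Java; the step texts are assumptions, and the in-place transformation stands in for the real Kafka Streams topology under test:

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical step definitions binding Gherkin steps to test code.
public class OrderStepDefinitions {

    private String input;
    private String output;

    @Given("an order event {string}")
    public void anOrderEvent(String event) {
        this.input = event;
    }

    @When("the topology processes the event")
    public void theTopologyProcessesTheEvent() {
        // in a real test this would pipe `input` through a TopologyTestDriver;
        // the uppercase call is a stand-in for the actual transformation
        this.output = input.toUpperCase();
    }

    @Then("the result should be {string}")
    public void theResultShouldBe(String expected) {
        assertEquals(expected, output);
    }
}
```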
Part 1 will cover building a user story and how to translate it into a Gherkin feature file.
This is part 2 of 2 articles on unit testing a Kafka Streams application. In this second part, I talk about testing the Processor API using MockProcessorContext, as well as how to test the processor scheduler with two types of Punctuator: STREAM_TIME and WALL_CLOCK_TIME.
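As a preview, here is a minimal sketch of what the punctuator test looks like with `MockProcessorContext` from kafka-streams-test-utils; the 30-second interval and the punctuator body are assumptions (a real test would let the processor's `init()` register the punctuator):

```java
import java.time.Duration;
import org.apache.kafka.streams.processor.MockProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;

// A minimal sketch: capture a scheduled punctuator and fire it manually.
public class PunctuatorTestSketch {
    public static void main(String[] args) {
        MockProcessorContext context = new MockProcessorContext();

        // the processor under test would call context.schedule(...) in init();
        // here we schedule directly to show what the test later inspects
        context.schedule(Duration.ofSeconds(30), PunctuationType.WALL_CLOCK_TIME,
                timestamp -> System.out.println("punctuate at " + timestamp));

        // fetch the captured punctuator and advance "time" by hand
        MockProcessorContext.CapturedPunctuator captured =
                context.scheduledPunctuators().get(0);
        captured.getPunctuator().punctuate(0L);
    }
}
```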
This is part 1 of 2 articles on unit testing a Kafka Streams application. The first part covers testing DSL transformations, both stateless and stateful, including joining and windowing.
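For a flavor of the approach, here is a minimal sketch of testing a stateless DSL transformation with `TopologyTestDriver`; the topic names and the uppercase `mapValues` are assumptions:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.TestInputTopic;
import org.apache.kafka.streams.TestOutputTopic;
import org.apache.kafka.streams.TopologyTestDriver;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

// A minimal sketch: drive a topology without a running broker.
public class MapValuesTestSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
               .mapValues(v -> v.toUpperCase())
               .to("output-topic", Produced.with(Serdes.String(), Serdes.String()));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "test-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");

        try (TopologyTestDriver driver = new TopologyTestDriver(builder.build(), props)) {
            TestInputTopic<String, String> in = driver.createInputTopic(
                    "input-topic", new StringSerializer(), new StringSerializer());
            TestOutputTopic<String, String> out = driver.createOutputTopic(
                    "output-topic", new StringDeserializer(), new StringDeserializer());

            in.pipeInput("key", "hello");
            System.out.println(out.readValue()); // HELLO
        }
    }
}
```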
Painless is a simple, secure scripting language designed specifically for use with Elasticsearch. It is the default scripting language for Elasticsearch and can safely be used for inline and stored scripts.
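For a flavor of what that looks like from Java, here is a minimal sketch that sends an inline Painless script to the Elasticsearch `_update` API over plain HTTP; the index, document id, and `counter` field are assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// A minimal sketch: increment a document field with an inline Painless script.
public class PainlessUpdateSketch {
    public static void main(String[] args) throws Exception {
        String body = """
                {
                  "script": {
                    "lang": "painless",
                    "source": "ctx._source.counter += params.count",
                    "params": { "count": 4 }
                  }
                }""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/my-index/_update/1"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```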
Yet another Kafka feature, Kafka Streams, allows us to join two streams into a single stream. Kafka Streams provides one-to-many and many-to-one join types.
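As a quick preview, here is a minimal sketch of a windowed KStream-KStream inner join; the topic names, the five-minute window, and the value-joining logic are assumptions:

```java
import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.StreamJoined;

// A minimal sketch: join two streams on key within a time window.
public class StreamJoinSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        KStream<String, String> orders = builder.stream("orders",
                Consumed.with(Serdes.String(), Serdes.String()));
        KStream<String, String> payments = builder.stream("payments",
                Consumed.with(Serdes.String(), Serdes.String()));

        // records with the same key arriving within 5 minutes of each other are joined
        orders.join(payments,
                        (order, payment) -> order + "|" + payment,
                        JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5)),
                        StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()))
              .to("orders-with-payments", Produced.with(Serdes.String(), Serdes.String()));
    }
}
```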
Sensitive data always needs to be handled with extra care. Thus, in some cases, we need to encrypt messages before delivering them to a Kafka topic.
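As a rough illustration, here is a minimal sketch that encrypts a payload with AES/GCM before producing it; the topic name is an assumption, and key management is deliberately simplified (a real system would load the key from a KMS or vault, not generate it in place):

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Properties;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// A minimal sketch: encrypt the message value, then send the ciphertext.
public class EncryptingProducerSketch {
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey(); // simplified

        byte[] iv = new byte[12];                 // 96-bit IV recommended for GCM
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal("card=4111".getBytes(StandardCharsets.UTF_8));

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
            // the IV must travel with the message so the consumer can decrypt
            ProducerRecord<String, byte[]> record =
                    new ProducerRecord<>("payments", ciphertext);
            record.headers().add("iv", iv);
            producer.send(record);
        }
    }
}
```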
After we secured the Kafka broker and ZooKeeper with SASL/SCRAM, it is time for the client (Java + Spring) to connect to the secured Kafka cluster.
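Here is a minimal sketch of the client-side SASL/SCRAM settings; the broker address, username, and password are placeholder assumptions:

```java
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.config.SaslConfigs;

// A minimal sketch: the properties a client needs to authenticate via SCRAM.
public class ScramClientSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "broker:9093");
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-512");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"alice\" password=\"alice-secret\";");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // the producer is now authenticated via SCRAM; send records as usual
        }
    }
}
```

In a Spring application the same four settings would typically live under `spring.kafka.*` configuration instead of a hand-built `Properties` object.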