How to Secure Confluent Kafka with SSL and SASL/SCRAM


Overview

When I decided to include Apache Kafka in our technology stack, I never imagined the demand would be this huge. At first, I had only two requirements: something faster than ActiveMQ for our MQTT needs, and a foundation for a future event-driven approach. It started well in late 2017 and has remained one of our most important technologies until now.

As a consequence, the requirements for using and storing messages inside Kafka keep getting bigger and more complex. One of these requirements is security. Even though we put Kafka inside our private network and behind a firewall, that alone is not sufficient. People started to ask how secure Kafka is: are the connections encrypted, and what about authentication and authorization for each Kafka topic?

Therefore, this article starts by securing the Apache Kafka system itself. In a follow-up article, I will write about client connections to the Kafka brokers based on a role-based access mechanism.

This article is a summary of the security tutorial from the Confluent documentation. That tutorial covers a lot of security methods, and I chose the SASL/SCRAM method for my use case.

Since my VM installation uses Confluent version 5.1.2, this article is based on that version.

SSL

First of all, I will secure the connections using the SSL protocol. Each machine in the cluster has a public-private key pair and a certificate as its identity. Because the machines are located inside a private network, I decided to create a self-signed certificate for each machine. Therefore, I have to register all of the self-signed certificates into each machine's trust store (JKS).

SASL/SCRAM and JAAS

Salted Challenge Response Authentication Mechanism (SCRAM) is a family of modern, password-based challenge-response mechanisms providing authentication of a user to a server. It can be used for password-based login to services¹. Apache Kafka itself supports SCRAM-SHA-256 and SCRAM-SHA-512.

JAAS (Java Authentication and Authorization Service) is the Java implementation of the Pluggable Authentication Module (PAM) framework. We are going to use JAAS for inter-broker authentication and zookeeper-broker authentication.

Use Case

The use case for this article is to upgrade the existing Kafka installation from a past article by adding a security layer on top of it: SASL/SCRAM with the SCRAM-SHA-256 mechanism.

As a pre-existing setup, we already have three servers, whose respective IPs are 172.20.20.10 (node-1), 172.20.20.20 (node-2), and 172.20.20.30 (node-3). Each node runs both a zookeeper and a broker.

Certificate

Because of a host-naming issue in my Kafka environment (Vagrant), I disabled host-name verification for this tutorial. However, you can always enable it and put the FQDN into the CN or SAN field while generating the certificate.

One more thing: the certificate validity is set to 10 years because I am too lazy to generate a new one each year (not a good practice).

For each node, we will generate one key store and two trust stores. The key store holds the private key for the broker, and the trust stores hold the public certificate for the broker and the client (spring-kafka, for instance). Just as a reminder, we need the trust stores because this is a self-signed certificate (not a trusted authority from the JVM's point of view).
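
Below is a minimal sketch of these steps using keytool; the store file names, the alias, and the password "secret" are placeholders of mine, so adapt them to your environment:

    # 1. Generate a key pair and a self-signed certificate (10 years validity)
    #    in the broker key store; add -ext SAN=DNS:<fqdn> if you want to keep
    #    host-name verification enabled
    keytool -genkeypair -keystore kafka.server.keystore.jks -alias localhost \
      -keyalg RSA -validity 3650 -storepass secret -keypass secret \
      -dname "CN=node-1"

    # 2. Export the certificate so it can be shared with the other machines
    keytool -exportcert -keystore kafka.server.keystore.jks -alias localhost \
      -file node-1.crt -storepass secret

    # 3. Import every node's certificate into the broker and client trust stores
    keytool -importcert -keystore kafka.server.truststore.jks -alias node-1 \
      -file node-1.crt -storepass secret -noprompt
    keytool -importcert -keystore kafka.client.truststore.jks -alias node-1 \
      -file node-1.crt -storepass secret -noprompt

Repeat the import step on each node until every trust store contains the certificates of all three machines.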

Zookeeper

For zookeeper, authentication uses DIGEST-MD5, with JAAS as the implementation.

Configure JAAS for zookeeper under /etc/kafka/zookeeper_jaas.conf.
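
A minimal sketch of that file, assuming the username kafka and the placeholder password kafka-secret:

    Server {
        org.apache.zookeeper.server.auth.DigestLoginModule required
        user_kafka="kafka-secret";
    };

Zookeeper also needs the SASL authentication provider enabled, so add lines along these to /etc/kafka/zookeeper.properties:

    authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
    requireClientAuthScheme=sasl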

Because we are using systemd to start zookeeper, set a runtime configuration that stores the KAFKA_OPTS environment variable, and then update the systemd unit for zookeeper.
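
A sketch of how this can look with a systemd drop-in, assuming Confluent's packaged service name confluent-zookeeper; your unit name and paths may differ:

    # /etc/systemd/system/confluent-zookeeper.service.d/override.conf
    [Service]
    Environment="KAFKA_OPTS=-Djava.security.auth.login.config=/etc/kafka/zookeeper_jaas.conf"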

Broker

I divided the broker configuration into four steps: SSL configuration, JAAS configuration, SASL configuration, and listener configuration.

SSL

As a first step, configure the keystore, the truststore, and the inter-broker security protocol for each broker under /etc/kafka/server.properties.
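
A sketch of those properties, reusing the store names and the placeholder password from the certificate step (the /var/ssl/private path is my assumption; put the stores wherever suits you):

    security.inter.broker.protocol=SASL_SSL
    # an empty value disables host-name verification (see the note below)
    ssl.endpoint.identification.algorithm=
    ssl.keystore.location=/var/ssl/private/kafka.server.keystore.jks
    ssl.keystore.password=secret
    ssl.key.password=secret
    ssl.truststore.location=/var/ssl/private/kafka.server.truststore.jks
    ssl.truststore.password=secret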

Note: please be careful if your certificate does not contain an FQDN. In that case you have to disable host-name verification with the empty ssl.endpoint.identification.algorithm property shown above.

JAAS

Now we configure authentication for the brokers. This configuration covers communication between zookeeper and the brokers, as well as between the brokers themselves. Put the configuration in a new file, /etc/kafka/kafka_server_jaas.conf.
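
A sketch of that file; the admin credentials are placeholders and must match the SCRAM user we create in the SASL step below:

    KafkaServer {
        org.apache.kafka.common.security.scram.ScramLoginModule required
        username="admin"
        password="admin-secret";
    };

    Client {
        org.apache.zookeeper.server.auth.DigestLoginModule required
        username="kafka"
        password="kafka-secret";
    };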

Note: in the Client section, the username and password must match the user_kafka entry in the zookeeper JAAS config. The admin user will be created later in the SASL step.

Because we are using systemd to start the broker, set a runtime configuration that stores the KAFKA_OPTS environment variable, and then update the systemd unit for the broker.
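
A sketch of the broker drop-in, mirroring the zookeeper one and again assuming Confluent's packaged service name confluent-kafka:

    # /etc/systemd/system/confluent-kafka.service.d/override.conf
    [Service]
    Environment="KAFKA_OPTS=-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"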

SASL Authentication

Before we configure SASL/SCRAM authentication on the brokers, we will create the admin user first. We can execute this command on any broker.
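
For example, with the kafka-configs tool shipped with Confluent (the password admin-secret is a placeholder):

    # create SCRAM-SHA-256 credentials for the admin user in zookeeper
    kafka-configs --zookeeper localhost:2181 --alter \
      --add-config 'SCRAM-SHA-256=[password=admin-secret]' \
      --entity-type users --entity-name admin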

Configure /etc/kafka/server.properties.
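
A sketch of the SASL additions, enabling SCRAM-SHA-256 for clients and for inter-broker traffic:

    sasl.enabled.mechanisms=SCRAM-SHA-256
    sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256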

Listener

Because we are going to do a rolling update and want to minimize the impact on existing topics, we will open three listeners with different protocols and ports on each broker.
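
A sketch of the listener lines for node-1 in /etc/kafka/server.properties:

    listeners=PLAINTEXT://172.20.20.10:9092,SSL://172.20.20.10:9093,SASL_SSL://172.20.20.10:9094
    advertised.listeners=PLAINTEXT://172.20.20.10:9092,SSL://172.20.20.10:9093,SASL_SSL://172.20.20.10:9094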

Note: change the IP address accordingly for every broker.

Restart Zookeeper and Kafka

After all the configurations are finished, it is time to restart the zookeepers and brokers. First we need to reload systemd because we changed its configuration.
Now restart the zookeepers one by one, followed by the brokers. You should then see three open ports ready to receive client connections: port 9092 for PLAINTEXT connections, 9093 for SSL connections, and 9094 for SASL_SSL connections (of course, you need to provide authentication to access port 9094). Later, I will show how to connect to Kafka with SCRAM authentication using Spring Boot.
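
A sketch of the sequence, again assuming Confluent's systemd service names; run it on each node, one node at a time:

    # pick up the changed unit files and drop-ins
    sudo systemctl daemon-reload
    # restart zookeeper first, then the broker
    sudo systemctl restart confluent-zookeeper
    sudo systemctl restart confluent-kafka
    # verify the three listener ports are open
    ss -tlnp | grep -E '9092|9093|9094'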

And here is the final properties file.
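
As a consolidated sketch for node-1, combining the snippets above (placeholder values as before; broker.id and zookeeper.connect follow the three-node setup described earlier):

    broker.id=1
    zookeeper.connect=172.20.20.10:2181,172.20.20.20:2181,172.20.20.30:2181
    listeners=PLAINTEXT://172.20.20.10:9092,SSL://172.20.20.10:9093,SASL_SSL://172.20.20.10:9094
    advertised.listeners=PLAINTEXT://172.20.20.10:9092,SSL://172.20.20.10:9093,SASL_SSL://172.20.20.10:9094
    security.inter.broker.protocol=SASL_SSL
    sasl.enabled.mechanisms=SCRAM-SHA-256
    sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
    ssl.endpoint.identification.algorithm=
    ssl.keystore.location=/var/ssl/private/kafka.server.keystore.jks
    ssl.keystore.password=secret
    ssl.key.password=secret
    ssl.truststore.location=/var/ssl/private/kafka.server.truststore.jks
    ssl.truststore.password=secret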

Conclusion

This is only the first part, which secures Zookeeper and Kafka. The next article will cover how to connect to Kafka using the SASL/SCRAM mechanism. If any configuration is missing, please provide feedback. For now, enjoy playing with the server configuration.

References

¹ Salted Challenge Response Authentication Mechanism, Wikipedia.

