
Hello, in this step-by-step guide, I will show you how to set up multiple Kafka brokers with Docker Compose 🙂
Unlike the previous article about Kafka, I would like to start this one by showing the complete configuration you'll need to set up 3 Kafka brokers. After that, we will go through it as thoroughly as possible to understand it even better, and in the end, we will learn how to verify and test our config.
In my articles, I focus on the practice and getting things done.
However, if you would like to gain a strong understanding of DevOps concepts, which are more and more in demand nowadays, then I highly recommend checking out KodeKloud courses, like the Docker and Kubernetes learning paths.
With that being said, let’s see the docker-compose.yaml:
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.2.1
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka1:
    image: confluentinc/cp-kafka:7.2.1
    container_name: kafka1
    ports:
      - "8097:8097"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: EXTERNAL:PLAINTEXT,INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: EXTERNAL://localhost:8097,INTERNAL://kafka1:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
  kafka2:
    image: confluentinc/cp-kafka:7.2.1
    container_name: kafka2
    ports:
      - "8098:8098"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: EXTERNAL:PLAINTEXT,INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: EXTERNAL://localhost:8098,INTERNAL://kafka2:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
  kafka3:
    image: confluentinc/cp-kafka:7.2.1
    container_name: kafka3
    ports:
      - "8099:8099"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: EXTERNAL:PLAINTEXT,INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: EXTERNAL://localhost:8099,INTERNAL://kafka3:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
Firstly, let's take a look at the Docker image versions and container names:
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.2.1
    container_name: zookeeper
    ...
  kafka1:
    image: confluentinc/cp-kafka:7.2.1
    container_name: kafka1
    ...
  kafka2:
    image: confluentinc/cp-kafka:7.2.1
    container_name: kafka2
    ...
  kafka3:
    image: confluentinc/cp-kafka:7.2.1
    container_name: kafka3
As we can see, in our example we will use the Confluent Community Docker image for Apache Kafka and the Confluent Docker image for ZooKeeper, both in version 7.2.1.
Additionally, we explicitly set names for our containers, which makes them discoverable at a hostname identical to the container name. Simply put, port 9092 of the kafka1 container will be visible to other containers at kafka1:9092.
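To see this in action once the whole cluster is up (we will start it later in this article), we could, for example, query kafka1's internal listener from another container. kafka-broker-api-versions is one of the standard Kafka CLI tools shipped with the Confluent image, so this is just a quick connectivity check:

docker exec -it kafka2 kafka-broker-api-versions --bootstrap-server kafka1:9092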
Next, let's have a look at the exposed ports:
...
kafka1:
  ports:
    - "8097:8097"
...
kafka2:
  ports:
    - "8098:8098"
...
kafka3:
  ports:
    - "8099:8099"
As we can see, we did this for each service except zookeeper.
You might already know this, but the ports section is responsible for exposing container ports to the host machine. Simply put, when running Docker Compose on our computer with a 123:456 mapping, we will be able to access the container's port 456 through localhost:123 on the host.
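In our setup, this means that once the cluster is running, a client on the host machine can reach the first broker through its published port. For instance, assuming the Kafka CLI tools are also installed on the host (they are not required for the rest of this guide):

kafka-topics --bootstrap-server localhost:8097 --list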
The last thing I would like to cover before we head to the environment settings of our Kafka brokers is depends_on.
As we can see, we set it for each Kafka service, specifying the zookeeper service as the dependency. With this one, we simply make sure that when running the docker compose up command, the zookeeper service will be started before the brokers. (Note that depends_on only controls the start order; by default it does not wait for ZooKeeper to be fully ready.)
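By the way, if we wanted Compose to also wait until ZooKeeper is actually ready (not just started), a minimal sketch could combine a healthcheck with the long form of depends_on. Please treat it as an assumption-laden example: it requires a Compose version that supports condition: service_healthy, nc being available inside the cp-zookeeper image, and the ruok four-letter-word command being whitelisted in ZooKeeper:

zookeeper:
  ...
  healthcheck:
    test: ["CMD-SHELL", "echo ruok | nc localhost 2181 | grep imok"]
    interval: 10s
    retries: 5
kafka1:
  ...
  depends_on:
    zookeeper:
      condition: service_healthy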
If you've seen my previous article on how to set up a single-node cluster with Docker Compose, then you will notice that this configuration looks a bit different. The main reason is that with multiple Kafka brokers in our cluster, they have to know how to communicate with each other.
As the first step, let’s take a look at the ZOOKEEPER_CLIENT_PORT.
To put it simply, this one instructs ZooKeeper where to listen for connections from clients – the Kafka brokers in our example.
Next, check the KAFKA_BROKER_ID variable.
By default, when we don't set this one, a unique broker identifier will be assigned automatically. Nevertheless, if we would like to specify it explicitly, then we can take care of it with this environment variable. Of course, please keep in mind that this value has to be unique across our Kafka brokers.
As the next thing, we set KAFKA_ZOOKEEPER_CONNECT in each service to point to ZooKeeper.
As the value, we have to set our ZooKeeper container name together with the port configured via the ZOOKEEPER_CLIENT_PORT directive.
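For every broker in our file, this resolves to the same value:

KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181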
Finally, let’s see the rest of the environment variables responsible for communication:
Unlike the previous ones, I believe it makes more sense to take a look at all of them at once to get a better understanding.
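Here is the relevant fragment for kafka1 (kafka2 and kafka3 follow the same pattern with their own ports and hostnames):

KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: EXTERNAL:PLAINTEXT,INTERNAL:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: EXTERNAL://localhost:8097,INTERNAL://kafka1:9092
KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL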
With KAFKA_ADVERTISED_LISTENERS, we set a comma-separated list of listeners with their host/IP and port. This value must be set in a multi-node environment because otherwise, clients would attempt to connect to the internal host address. Please keep in mind that the listener names (EXTERNAL and INTERNAL in our case) must be unique here.
When it comes to the naming here, we can specify our own values, just as I did with EXTERNAL/INTERNAL. Nevertheless, in such a case, we have to specify the correct mapping with KAFKA_LISTENER_SECURITY_PROTOCOL_MAP:
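In our case, both listener names map to the plaintext (unencrypted) protocol:

KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: EXTERNAL:PLAINTEXT,INTERNAL:PLAINTEXT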
If you are wondering why we have done this, the reason is mentioned above – listener names have to be unique, so we can't specify PLAINTEXT://localhost:8099,PLAINTEXT://kafka3:9092.
Finally, with KAFKA_INTER_BROKER_LISTENER_NAME we explicitly specify which listener to use for inter-broker communication.
With all of that being said, we can finally make use of our docker-compose.yaml file.
To do so, let’s build and run our services with the following command:
docker compose up
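Alternatively, if we prefer to get the terminal back right away, we can start everything in detached mode and follow the logs separately:

docker compose up -d
docker compose logs -f kafka1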
Depending on our machine, it may take a while until everything is up and running.
In the end, we should see the following:
...Recorded new controller, from now on will use broker...
As the next step, let's open up a new terminal window and list the running containers with docker ps:
# Output:
CONTAINER ID   IMAGE                             COMMAND                  CREATED          STATUS          PORTS                              NAMES
cf892aeb5793   confluentinc/cp-kafka:7.2.1       "/etc/confluent/dock…"   54 minutes ago   Up 54 minutes   0.0.0.0:8098->8098/tcp, 9092/tcp   kafka2
a10bef7b6a54   confluentinc/cp-kafka:7.2.1       "/etc/confluent/dock…"   54 minutes ago   Up 54 minutes   0.0.0.0:8099->8099/tcp, 9092/tcp   kafka3
46c9ca4862b6   confluentinc/cp-kafka:7.2.1       "/etc/confluent/dock…"   54 minutes ago   Up 54 minutes   0.0.0.0:8097->8097/tcp, 9092/tcp   kafka1
eef5ebec8519   confluentinc/cp-zookeeper:7.2.1   "/etc/confluent/dock…"   54 minutes ago   Up 54 minutes   2181/tcp, 2888/tcp, 3888/tcp       zookeeper
As we can clearly see, all Kafka containers have their external ports published to our host, and ZooKeeper is reachable through 2181 from within the Docker network.
Next, let's run zookeeper-shell inside the zookeeper container:
docker exec -it zookeeper /bin/zookeeper-shell localhost:2181

# Output:
Connecting to localhost:2181
Welcome to ZooKeeper!
JLine support is disabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
What's important to mention here is that we run the zookeeper-shell command from within the container. That's the reason why the host value is set to localhost and the port is accessible, even though we haven't published it to the host.
Thanks to the -it flags, we can use the shell from our terminal.
So, to get the identifiers of currently active Kafka Brokers, we can use the following:
ls /brokers/ids

# Output:
[1, 2, 3]
The above output clearly indicates that everything is working as expected.
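While we are still in the shell, we can also peek at a single broker's registration data; the endpoints should match our advertised listeners. The exact JSON differs between versions, so the output below is abbreviated:

get /brokers/ids/1

# Example output (abbreviated):
{"listener_security_protocol_map":{"EXTERNAL":"PLAINTEXT","INTERNAL":"PLAINTEXT"},"endpoints":["EXTERNAL://localhost:8097","INTERNAL://kafka1:9092"],...}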
Note: to close the zookeeper-shell, please hit Ctrl + C (and please do it before heading to the Producer/Consumer parts)
After we know all of the above, we can finally send messages using the Console Producer.
As the first step, let’s run bash from a randomly picked Kafka broker container:
docker exec -it kafka2 /bin/bash
Next, let's create a new topic called randomTopic:
kafka-topics --bootstrap-server localhost:9092 --create --topic randomTopic

# Output:
Created topic randomTopic.
Please note that since the kafka-topics script is run inside the container, we point to port 9092 on localhost.
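As a side note, now that we have three brokers, we could also spread a topic across all of them. For instance (replicatedTopic is just an illustrative name):

kafka-topics --bootstrap-server localhost:9092 --create --topic replicatedTopic --partitions 3 --replication-factor 3
kafka-topics --bootstrap-server localhost:9092 --describe --topic replicatedTopic

The --describe output should show the leaders and replicas of the three partitions distributed across broker ids 1, 2, and 3.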
Next, we can produce some messages to our newly created randomTopic:
kafka-console-producer --bootstrap-server localhost:9092 --topic randomTopic
> lorem
> ipsum

# When we finish, we have to hit Ctrl + D
Of course, we could do that from any other broker container (and in order to exit bash, we need to type exit in the command prompt).
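For example, we could skip the interactive bash session entirely and run the producer through a single docker exec call, which is functionally equivalent to the steps above:

docker exec -it kafka1 kafka-console-producer --bootstrap-server localhost:9092 --topic randomTopic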
As the last step, let’s connect to another Apache Kafka broker and read previously sent messages:
docker exec -it kafka3 /bin/bash

kafka-console-consumer --bootstrap-server localhost:9092 --topic randomTopic --from-beginning

# Output:
lorem
ipsum
As we can clearly see, the output contains all the messages we've sent using the Console Producer.
And that would be all for this article on how to set up multiple Apache Kafka brokers using Docker Compose and produce/consume messages. As a follow-up, I would recommend checking out this Confluent article about connecting to Kafka with different configurations.
Finally, let me know your thoughts in the comment section below, dear friend! 😀