- Type: Bug
- Resolution: Works as Designed
- Priority: Major - P3
- Affects Version/s: 1.1
- Component/s: Source
- Environment: Windows 10, 8GB RAM, AMD Ryzen 5 2500U, 64-bit, HDD
I am trying to set up the MongoDB Kafka connector as a source listening to the change stream of one collection on my Windows machine. I am running three worker nodes on localhost on ports 8083, 8084, and 8085.
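Roughly, I register the connector against one of the workers like this (the connector name, connection URI, database, and collection below are placeholders, not my real values):

```python
import requests

# Register the MongoDB source connector with one of the Connect workers.
# connection.uri, database, and collection are placeholder values.
connector = {
    "name": "mongo-source-connector",
    "config": {
        "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
        "tasks.max": "10",
        "connection.uri": "mongodb://localhost:27017",
        "database": "mydb",
        "collection": "mycollection",
    },
}

resp = requests.post("http://localhost:8083/connectors", json=connector)
resp.raise_for_status()
print(resp.json())
```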
I am able to create the connector with one task and receive change stream events successfully, and failover to another worker node also works fine. But the number of tasks spawned is always one. I have set tasks.max=10 and used multiple threads to generate a high volume of change stream events, but even then the number of tasks remains one.
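Querying the connector status on any of the workers always shows a single task, e.g. along these lines (same placeholder connector name as above):

```python
import requests

# Ask the Connect REST API which tasks this connector is running;
# the returned list always contains exactly one task.
status = requests.get(
    "http://localhost:8083/connectors/mongo-source-connector/status"
).json()

print(len(status["tasks"]))           # -> 1, despite tasks.max=10
for task in status["tasks"]:
    print(task["id"], task["state"])  # e.g. 0 RUNNING
```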
This makes the Kafka producer side (the MongoDB source connector) not scalable at all, whereas consumers scale with multiple partitions on the same topic. Is this an issue on my end, or is the MongoDB Kafka source connector designed this way?