Details
- Type: Bug
- Resolution: Works as Designed
- Priority: Major - P3
- Fix Version/s: None
- Affects Version/s: 3.6.2
- Component/s: None
- Backwards Compatibility: Fully Compatible
- Operating System: ALL
Description
Opening many (10+) change stream cursors causes massive delays (up to several minutes) between a database write and the arrival of the corresponding change notification. A single change stream (or 2-3 of them) does not exhibit the problem.
In a synthetic test, I wrote 100 small documents per second into a database and listened for changes using change streams. With 50 change streams open over a 100-second run, the average delay between a write and the arrival of its change event was 7.1 seconds; the largest delay was 205 seconds (not a typo, over three minutes).
MongoDB version: 3.6.2
Test setup #1: MongoDB Atlas M10 (3-member replica set)
Test setup #2: DigitalOcean Ubuntu box running a single-instance mongod in Docker
The client was Node.js; CPU and memory usage were minimal.
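For reference, a minimal sketch of the kind of harness described above, assuming the 3.x Node.js driver. The "room" field comes from the report's own $match stage; the connection string, collection name, and the measurement approach (embedding the client-side write time in each document) are illustrative:
{code:javascript}
// Hypothetical reconstruction of the synthetic test: 50 change streams,
// 100 small writes per second for 100 seconds, delay measured per event.
const { MongoClient } = require("mongodb");

async function run() {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const collection = client.db("test").collection("messages");

  // Open 50 change streams, each filtered to a single room.
  for (let room = 0; room < 50; room++) {
    collection
      .watch([{ $match: { "fullDocument.room": room } }])
      .on("change", event => {
        // Delay between the write and the notification's arrival.
        const delayMs = Date.now() - event.fullDocument.writtenAt;
        console.log(`room ${event.fullDocument.room}: ${delayMs} ms`);
      });
  }

  // Write 100 small documents per second for 100 seconds.
  const writer = setInterval(() => {
    for (let i = 0; i < 100; i++) {
      collection.insertOne({ room: i % 50, writtenAt: Date.now() });
    }
  }, 1000);
  setTimeout(() => clearInterval(writer), 100 * 1000);
}

run().catch(console.error);
{code}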
I tried two ways to set up change streams:
{{let cursor = collection.watch([
  {$match: {"fullDocument.room": roomId}},
]);
cursor.stream().on("data", doc => {...});}}
and
{{let cursor = collection.aggregate([
  {$changeStream: {}},
  {$match: {"fullDocument.room": roomId}},
]);
cursor.forEach(doc => {...});}}
Both had the same effect.
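The "Works as Designed" resolution lines up with the linked DOCS-11270 ("Large number of change streams requires large pool size"): each open change stream occupies a connection from the driver's pool while it waits for events, so 50 streams contend for the Node.js driver's default pool (5 connections at the time) and notifications queue behind one another. A hedged sketch of the workaround, again assuming the 3.x Node.js driver; the poolSize value is illustrative and should be at least the number of concurrent streams:
{code:javascript}
// Sketch: size the connection pool to cover every concurrent change
// stream, per DOCS-11270. poolSize 60 is illustrative for 50 streams.
const { MongoClient } = require("mongodb");

MongoClient.connect("mongodb://localhost:27017", { poolSize: 60 })
  .then(client => {
    const collection = client.db("test").collection("messages");
    const roomId = 1; // illustrative
    // Each watch() cursor can now hold its own pooled connection
    // without starving the other streams.
    collection
      .watch([{ $match: { "fullDocument.room": roomId } }])
      .on("change", doc => { /* handle the event */ });
  })
  .catch(console.error);
{code}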
Attachments
Issue Links
- depends on
  - DOCS-11270 [Server] Large number of change streams requires large pool size (Closed)
- related to
  - NODE-1305 Document potential performance degradation when using change streams (Backlog)