Type: Task
Resolution: Fixed
Priority: Major - P3
Affects Version/s: None
Component/s: None
Atlas Streams
Fully Compatible
Sprint 42, Sprint 43
s = { $source: { connectionName: 'kbtestkafka', topic: 'topic_0',
      config: { group_id: "testkb2", auto_offset_reset: 'earliest' } } }
af1 = { $addFields: { upper: { $toUpper: "$data" } } }
af2 = { $addFields: { lower: { $toLower: "$upper" } } }
m = { $merge: { into: { connectionName: 'jsncluster0', db: 'test', coll: '60kbcasetest' }, on: '_id' } }
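For readability, the same stages can be written as plain JavaScript objects and collected into the pipeline array the processor runs in order. (A sketch only: in the Atlas Stream Processing shell this array would typically be passed to a helper such as `sp.createStreamProcessor('sixtykbtest', pipeline)`, which is assumed here rather than shown in the thread.)

```javascript
// Plain-JavaScript sketch of the stages above; each stage is an
// ordinary object, executed in array order by the processor.
const s = {
  $source: {
    connectionName: 'kbtestkafka',
    topic: 'topic_0',
    config: { group_id: 'testkb2', auto_offset_reset: 'earliest' },
  },
};
const af1 = { $addFields: { upper: { $toUpper: '$data' } } };
const af2 = { $addFields: { lower: { $toLower: '$upper' } } };
const m = {
  $merge: {
    into: { connectionName: 'jsncluster0', db: 'test', coll: '60kbcasetest' },
    on: '_id',
  },
};

// The full pipeline: Kafka source -> two $addFields stages -> $merge sink.
const pipeline = [s, af1, af2, m];
```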
Joe N 4 minutes ago
oh, it didn't like that one
{ id: '65cfcfd9389dc4cce00f695a',
  name: 'sixtykbtest',
  lastModified: ISODate("2024-02-16T21:12:57.337Z"),
  state: 'STARTED',
  errorMsg: 'code: 247 reason: cost of item (135184140) larger than maximum queue size (134217728)',
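For context, the queue limit in that errorMsg, 134217728 bytes, is 128 MiB; a quick plain-JavaScript check (values copied straight from the error above, nothing else assumed) shows how far the item overshoots it:

```javascript
// Values taken verbatim from the errorMsg above.
const MAX_QUEUE_BYTES = 134217728; // 128 * 1024 * 1024, i.e. 128 MiB
const ITEM_COST = 135184140;       // reported cost of the rejected item

// How far over the sink queue's capacity the item is.
const overBy = ITEM_COST - MAX_QUEUE_BYTES;
console.log(MAX_QUEUE_BYTES === 128 * 1024 * 1024); // true
console.log(overBy); // 966412 bytes over the limit
```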
matthew.normyle 3 minutes ago
ouch. we’ll fix it. that means we’re sink limited but that shouldn’t be happening
Joe N 3 minutes ago
I can't call stats() on it
AtlasStreamProcessing> sp.sixtykbtest.stats()
BSONError: bad embedded document length in bson
Joe N 1 minute ago
yep it froze
matthew.normyle < 1 minute ago
yeah, it's basically in a restart loop now, assuming it keeps hitting the buffer size limit on the sink queue.
matthew.normyle < 1 minute ago
bad bug, thanks for finding it. I'll work on this now