[KAFKA-111] Failed to resume change stream: Bad resume token: _data of missing or of wrong type Created: 03/Jun/20 Updated: 28/Oct/23 Resolved: 23/Jun/20 |
|
| Status: | Closed |
| Project: | Kafka Connector |
| Component/s: | Source |
| Affects Version/s: | 1.1 |
| Fix Version/s: | 1.2.0 |
| Type: | Bug | Priority: | Critical - P2 |
| Reporter: | Rajaramesh Yaramati | Assignee: | Ross Lawley |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Environment: | Kafka version 2.4.0; source MongoDB 3.6.8 |
| Attachments: | bad_resume_token_error.log |
| Issue Links: | None |
| Description |
|
I am testing the source and sink MongoDB Kafka connectors. After the initial sync completes and the source connector starts reading from the oplog using change streams, I get the failure below and it stops copying new changes from the source. Please take a look.

Source connector config:

```
curl -X POST -H "Accept:application/json" -H "Content-Type: application/json" localhost:9083/connectors/ --data '{
  "name": "mongo-source-assets-shard1oplog2",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "false",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "connection.uri": "mongodb://xxx.xxx.xxx.xxx:27017",
    "database": "oz_next",
    "collection": "assets",
    "publish.full.document.only": "true",
    "topic.prefix": "oplog.oz_mongo",
    "batch.size": "5000",
    "copy.existing": "true",
    "copy.existing.max.threads": "3",
    "copy.existing.queue.size": "64000"
  }
}'
```

Sink connector config:

```
curl -X POST -H "Accept:application/json" -H "Content-Type: application/json" localhost:9083/connectors/ --data '{
  "name": "mongo-sink-assets-shard1oplog2",
  "config": {
    "topics": "oplog.oz_mongo.oz_next.assets",
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "tasks.max": "1",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "false",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "connection.uri": "mongodb://10.74.3.104:27017",
    "database": "poc_oz_next",
    "collection": "poc_assets",
    "max.num.retries": "3",
    "retries.defer.timeout": "5000",
    "session.timeout.ms": "25000"
  }
}'
```

Connector log:

```
[2020-05-29 08:40:55,565] INFO WorkerSourceTask{id=mongo-source-assets-shard1oplog2-0} Finished commitOffsets successfully in 8872 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:515)
```
|
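For context on the error itself: "Bad resume token: _data of missing or of wrong type" is raised by the MongoDB server when a change stream is opened with a `resumeAfter` document that lacks a well-formed `_data` field. A minimal sketch that provokes the same server error with the MongoDB Java driver (the connection string and namespace are placeholders, not values from this ticket):

```java
import com.mongodb.MongoCommandException;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.BsonDocument;
import org.bson.Document;

public class BadResumeTokenDemo {
    public static void main(String[] args) {
        // Placeholder URI; point at any MongoDB 3.6+ replica set.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> coll =
                    client.getDatabase("oz_next").getCollection("assets");
            // A real resume token is a document whose "_data" field holds the
            // server-encoded stream position. Passing a document without it
            // makes the server reject the $changeStream stage.
            BsonDocument notAToken = new BsonDocument();
            try {
                coll.watch().resumeAfter(notAToken).first();
            } catch (MongoCommandException e) {
                // e.g. "Bad resume token: _data of missing or of wrong type"
                System.out.println(e.getErrorMessage());
            }
        }
    }
}
```

In other words, the failure indicates the offset the connector tried to resume from was not a valid change stream token, rather than a connectivity problem.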
| Comments |
| Comment by Ross Lawley [ 22/Apr/22 ] |
|
Apologies abdul.basith.kj@gmail.com - I see you opened |
| Comment by Ross Lawley [ 22/Apr/22 ] |
|
Can you please open a new ticket if you haven't already resolved your issue? Please note the error message you have reported comes from the source connector and not the sink connector. Ross |
| Comment by Abdul Basith [ 21/Apr/22 ] |
|
Hi. I am deploying version 0.28.0 of the Strimzi Kafka operator https://artifacthub.io/packages/helm/strimzi/strimzi-kafka-operator with the MongoDB sink connector to stream data from a Kafka topic to a MongoDB database. I get the same error whenever new data is added to the Kafka topic. This is the error in the logs:
|
| Comment by Githook User [ 23/Jun/20 ] |
|
Author: Ross Lawley &lt;ross.lawley@gmail.com&gt; (rozza)
Message: Fix Source connector copying existing resumability
|
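The commit message above does not include the patch itself. As a rough illustration of the resumability problem it addresses (a sketch only, not the connector's actual code; the class, method name, and fallback behavior are assumptions): offsets written while `copy.existing` is in progress are not real change stream tokens, so a task that blindly passes a cached offset to `resumeAfter` gets the server error above. A defensive version would validate the token first:

```java
import com.mongodb.client.ChangeStreamIterable;
import com.mongodb.client.MongoCollection;
import org.bson.BsonDocument;
import org.bson.Document;

public final class ResumeGuard {
    private ResumeGuard() {}

    // Sketch: resume only when the cached offset looks like a genuine resume
    // token; otherwise open a fresh change stream instead of letting the
    // server reject it with "Bad resume token: _data of missing or of wrong type".
    public static ChangeStreamIterable<Document> openStream(
            MongoCollection<Document> coll, BsonDocument cachedOffset) {
        // On MongoDB 3.6 "_data" is BinData; on 4.0.7+ it is a hex string.
        boolean looksLikeToken = cachedOffset != null
                && cachedOffset.containsKey("_data")
                && (cachedOffset.isString("_data") || cachedOffset.isBinary("_data"));
        if (looksLikeToken) {
            return coll.watch().resumeAfter(cachedOffset);
        }
        // Start from "now"; any changes during the gap are not replayed.
        return coll.watch();
    }
}
```

The trade-off in the fallback branch is deliberate in this sketch: starting a fresh stream keeps the task alive at the cost of potentially missed events, whereas the pre-fix behavior failed hard.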
| Comment by Ross Lawley [ 15/Jun/20 ] |
|
Thanks yaramati@adobe.com, I'll review it, try to reproduce the issue locally, and fix it. |
| Comment by Rajaramesh Yaramati [ 13/Jun/20 ] |
|
Ross Lawley, I was sure the connector did not restart during the copy. Just to be certain, I tried again from scratch and still hit the same issue. Full log attached: bad_resume_token_error.log. The sequence of steps:

Step 1: Created a new sharded collection.
Step 2: Imported sample data (10,000 docs) into the sharded collection.
Step 3: Started the source connector task via the REST API, as shown in the attached log.

As soon as the source connector finished fetching the 10,000 docs, the "Failed to resume change stream: Bad resume token: _data of missing or of wrong type" message started appearing in the log. I can reproduce this error repeatedly on my test server. Thanks, Rajaramesh
|
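For anyone retracing the steps above, step 2 can be scripted with the MongoDB Java driver along these lines (a hedged sketch; the document shape, batch size, and URI are placeholders, not taken from the ticket):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.util.ArrayList;
import java.util.List;

public class LoadSampleAssets {
    public static void main(String[] args) {
        // Placeholder URI; in the repro this would be the sharded cluster's mongos.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> assets =
                    client.getDatabase("oz_next").getCollection("assets");
            List<Document> batch = new ArrayList<>();
            for (int i = 0; i < 10_000; i++) {
                batch.add(new Document("assetId", i).append("name", "asset-" + i));
                if (batch.size() == 1_000) { // flush in 1k-document batches
                    assets.insertMany(batch);
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) {
                assets.insertMany(batch); // flush any remainder
            }
        }
    }
}
```

With `copy.existing=true`, the source connector snapshots these 10,000 documents first and only then switches to tailing the change stream, which is exactly the transition point where the error appears.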
| Comment by Ross Lawley [ 09/Jun/20 ] |
|
It looks like there was a restart of the connector during the copying-data phase. That is an error scenario, so stopping the connector is expected. However, more logs would be needed to confirm whether that was the case. The error messaging should be clearer, and that will be improved in a future release. Ross |
| Comment by Rajaramesh Yaramati [ 08/Jun/20 ] |
|
Can anyone please confirm if this is a known issue? |