[KAFKA-43] Connector is killed without any information in console Created: 10/Jul/19  Updated: 11/Sep/19  Resolved: 24/Jul/19

Status: Closed
Project: Kafka Connector
Component/s: None
Affects Version/s: 0.1
Fix Version/s: None

Type: Task Priority: Major - P3
Reporter: Vu Le Assignee: Ross Lawley
Resolution: Cannot Reproduce Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: CentOS Linux release 7.5.1804

 Description   

Hi team,

I set up an environment for streaming data from Kafka to MongoDB.

My Kafka topic contains more than 4 million messages.

I started the connector for the first time, and 178,337 records were inserted into MongoDB successfully.

But then the connector was killed. The console displayed the following (I tried setting the log level to TRACE in kafka/config/log4j.properties):

[2019-07-10 14:02:20,285] INFO WorkerSinkTask{id=mongo-sink-0} Sink task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask:301)
[2019-07-10 14:02:20,301] INFO Cluster ID: h0zp0aY6RG2dg59xJWCLFA (org.apache.kafka.clients.Metadata:365)
[2019-07-10 14:02:20,302] INFO [Consumer clientId=consumer-1, groupId=connect-mongo-sink] Discovered group coordinator 192.168.1.45:9092 (id: 2147483647 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675)
[2019-07-10 14:02:20,304] INFO [Consumer clientId=consumer-1, groupId=connect-mongo-sink] Revoking previously assigned partitions [] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:459)
[2019-07-10 14:02:20,305] INFO [Consumer clientId=consumer-1, groupId=connect-mongo-sink] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491)
[2019-07-10 14:02:20,314] INFO [Consumer clientId=consumer-1, groupId=connect-mongo-sink] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491)
[2019-07-10 14:02:20,326] INFO [Consumer clientId=consumer-1, groupId=connect-mongo-sink] Successfully joined group with generation 7 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:455)
[2019-07-10 14:02:20,332] INFO [Consumer clientId=consumer-1, groupId=connect-mongo-sink] Setting newly assigned partitions: CDCDataStream-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:290)
[2019-07-10 14:02:20,625] INFO Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500} (org.mongodb.driver.cluster:71)
[2019-07-10 14:02:20,668] INFO Cluster description not yet available. Waiting for 30000 ms before timing out (org.mongodb.driver.cluster:71)
[2019-07-10 14:02:20,770] INFO Opened connection [connectionId{localValue:1, serverValue:8300}] to localhost:27017 (org.mongodb.driver.connection:71)
[2019-07-10 14:02:20,774] INFO Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 4, 16]}, minWireVersion=0, maxWireVersion=5, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=null, roundTripTimeNanos=1474521} (org.mongodb.driver.cluster:71)
[2019-07-10 14:02:20,786] INFO Opened connection [connectionId{localValue:2, serverValue:8301}] to localhost:27017 (org.mongodb.driver.connection:71)
Killed

The configuration file, MongoSinkConnector.properties:

name=mongo-sink
topics=test
connector.class=com.mongodb.kafka.connect.MongoSinkConnector
tasks.max=1
key.ignore=true
 
# Specific global MongoDB Sink Connector configuration
connection.uri=mongodb://localhost:27017
database=test_kafka
collection=transaction
max.num.retries=3
retries.defer.timeout=5000
type.name=kafka-connect
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
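
For context, a standalone worker with this config is typically launched along these lines (a sketch; the worker properties filename here is an assumption, and the ticket does not say which mode was used):

# hypothetical standalone launch; the first file configures the worker itself
bin/connect-standalone.sh config/connect-standalone.properties MongoSinkConnector.properties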

Expectation: all messages from Kafka should be inserted into MongoDB successfully.



 Comments   
Comment by Ross Lawley [ 24/Jul/19 ]

Hi leanhvu1989,

I hope you were able to get to the bottom of this. As it's not a message from the connector, I'm closing this as 'Cannot Reproduce'. I also found a great blog post about Kafka Connect logging which may help.
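
For standalone workers, a minimal sketch of raising the Connect log level, assuming the stock config/connect-log4j.properties layout shipped with Kafka:

# config/connect-log4j.properties -- raise INFO to DEBUG (or TRACE)
log4j.rootLogger=DEBUG, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c:%L)%n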

Ross

Comment by Vu Le [ 10/Jul/19 ]

Hi Ross,

Thanks for your response.

I checked the kafka/logs folder, but there are no logs for Kafka Connect (I only see server.log and controller.log). How do I enable logging for Kafka Connect?

It seems that the Kafka node has a problem; maybe that causes the connector to stop after streaming a number of messages.

I also double-checked with another node (200k messages), but I ran into the problem I raised in KAFKA-42.

Thank you.

Comment by Ross Lawley [ 10/Jul/19 ]

Hi leanhvu1989,

Thanks for the ticket. The "Killed" message didn't come from the MongoDB Kafka connector codebase, so the cause of the connector being killed is unclear. Is there any logging from Kafka Connect? Could the node have been killed, or the connector itself have been stopped?
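
A bare "Killed" on the console usually means the operating system sent SIGKILL to the JVM, for example via the Linux OOM killer. A quick check on CentOS 7 (a diagnostic sketch for the host, not connector functionality):

# look for OOM-killer activity in the kernel log
dmesg | grep -iE 'killed process|out of memory'
sudo journalctl -k | grep -i oom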

Ross
