[jira] [Commented] (KAFKA-7248) Kafka creating topic with no leader. Issue started showing up after unkerberizing the cluster

2019-08-20 Thread Igor (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911303#comment-16911303
 ] 

Igor commented on KAFKA-7248:
-

Hello, I ran into the same problem after mass-deleting topics; some of them 
got stuck as marked for deletion. For now, restarting all brokers fixed it for me.
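
In case it helps, a quick way to check for topics stuck in deletion (a sketch, 
assuming the ZooKeeper CLI shipped with this Confluent package and a 
placeholder zk-host):
{noformat}
# List topics still pending deletion; stuck topics linger here.
zookeeper-shell zk-host:2181 ls /admin/delete_topics
# After a rolling restart of all brokers the list should drain to [].
{noformat}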

confluent-kafka-2.11-2.1.1cp1-1.noarch

Topic count: 578

Partition count: 4250

The authorizer is disabled; my broker config:
{noformat}
auto.create.topics.enable=true
controlled.shutdown.enable=true
default.replication.factor=3
delete.topic.enable=true
inter.broker.protocol.version=2.1
log.flush.interval.messages=2
log.flush.interval.ms=1
log.flush.scheduler.interval.ms=2000
log.message.format.version=2.1
log.retention.minutes=1
min.insync.replicas=2
num.partitions=2
num.recovery.threads.per.data.dir=6
offsets.retention.minutes=10080
unclean.leader.election.enable=false{noformat}
 

kafka-topics --describe output:
{noformat}
Topic:topic_v1 PartitionCount:2 ReplicationFactor:3 Configs:
Topic: topic_v1 Partition: 0 Leader: none Replicas: 1,2,3 Isr:
Topic: topic_v1 Partition: 1 Leader: none Replicas: 2,3,1 Isr:{noformat}
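
With unclean.leader.election.enable=false and an empty ISR, the controller 
cannot elect a leader for these partitions. A quick way to list every 
partition in this state (a sketch; zk-host is a placeholder, and the flags 
are those of the 2.1 tooling):
{noformat}
# Print only the partitions that currently have no leader.
kafka-topics --zookeeper zk-host:2181 --describe --unavailable-partitions
{noformat}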
 

What I found in the logs. broker1:

 
{noformat}
[2019-08-20 11:55:01,678] INFO Topic creation Map(topic_v1-1 -> ArrayBuffer(2, 
3, 1), topic_v1-0 -> ArrayBuffer(1, 2, 3)) (kafka.zk.AdminZkClient)
[2019-08-20 11:55:01,680] INFO [KafkaApi-1] Auto creation of topic topic_v1 
with 2 partitions and replication factor 3 is successful 
(kafka.server.KafkaApis)
[2019-08-20 12:06:26,114] INFO [Admin Manager on Broker 1]: Error processing 
create topic request for topic topic_v1 with arguments (numPartitions=15, 
replicationFactor=1, replicasAssignments={}, configs={}) 
(kafka.server.AdminManager)
org.apache.kafka.common.errors.TopicExistsException: Topic 'topic_v1' already 
exists.
[2019-08-20 12:13:31,640] INFO [Admin Manager on Broker 1]: Error processing 
create topic request for topic topic_v1 with arguments (numPartitions=15, 
replicationFactor=1, replicasAssignments={}, configs={}) 
(kafka.server.AdminManager)
org.apache.kafka.common.errors.TopicExistsException: Topic 'topic_v1' already 
exists.{noformat}
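
The TopicExistsException entries above look like a client retrying creation 
of a topic that was already auto-created. As an aside (not from the original 
report), the CLI can make such creation idempotent with --if-not-exists 
(zk-host is a placeholder):
{noformat}
# Create the topic only if it does not already exist; re-runs are silent.
kafka-topics --zookeeper zk-host:2181 --create --topic topic_v1 \
  --partitions 15 --replication-factor 1 --if-not-exists
{noformat}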
broker2:

 
{noformat}
[2019-08-20 12:51:33,169] INFO [ReplicaFetcherManager on broker 2] Removed 
fetcher for partitions Set(...topic_v1-0,topic_v1-1,...
[2019-08-20 12:51:33,288] INFO [Log partition=topic_v1-0, dir=/app/kafka/log] 
Loading producer state till offset 0 with message format version 2 
(kafka.log.Log)
[2019-08-20 12:51:33,288] INFO [Log partition=topic_v1-0, dir=/app/kafka/log] 
Completed load of log with 1 segments, log start offset 0 and log end offset 0 
in 0 ms (kafka.log.Log)
[2019-08-20 12:51:33,288] INFO Created log for partition topic_v1-0 in 
/app/kafka/log with properties {compression.type -> producer, 
message.format.version -> 2.1-IV2, file.delete.delay.ms -> 6, 
max.message.bytes -> 112, min.compaction.lag.ms -> 0, 
message.timestamp.type -> CreateTime, message.downconversion.enable -> true, 
min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, 
min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, 
unclean.leader.election.enable -> false, retention.bytes -> -1, 
delete.retention.ms -> 8640, cleanup.policy -> [delete], flush.ms -> 1, 
segment.ms -> 60480, segment.bytes -> 1073741824, retention.ms -> 
6, message.timestamp.difference.max.ms -> 9223372036854775807, 
segment.index.bytes -> 10485760, flush.messages -> 2}. 
(kafka.log.LogManager)
[2019-08-20 12:51:33,292] INFO [Partition topic_v1-0 broker=2] No checkpointed 
highwatermark is found for partition topic_v1-0 (kafka.cluster.Partition)
[2019-08-20 12:51:33,292] INFO Replica loaded for partition topic_v1-0 with 
initial high watermark 0 (kafka.cluster.Replica)
[2019-08-20 12:51:33,292] INFO Replica loaded for partition topic_v1-0 with 
initial high watermark 0 (kafka.cluster.Replica)
[2019-08-20 12:51:33,292] INFO Replica loaded for partition topic_v1-0 with 
initial high watermark 0 (kafka.cluster.Replica)
[2019-08-20 12:51:33,292] INFO [Partition topic_v1-0 broker=2] topic_v1-0 
starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 
(kafka.cluster.Partition)
[2019-08-20 12:51:33,458] INFO [Log partition=topic_v1-1, dir=/app/kafka/log] 
Loading producer state till offset 0 with message format version 2 
(kafka.log.Log)
[2019-08-20 12:51:33,459] INFO [Log partition=topic_v1-1, dir=/app/kafka/log] 
Completed load of log with 1 segments, log start offset 0 and log end offset 0 
in 1 ms (kafka.log.Log)
[2019-08-20 12:51:33,459] INFO Created log for partition topic_v1-1 in 
/app/kafka/log with properties {compression.type -> producer, 
message.format.version -> 2.1-IV2, file.delete.delay.ms -> 6, 
max.message.bytes -> 112, min.compaction.lag.ms -> 0, 
message.timestamp.type -> CreateTime, message.downconversion.enable -> true, 
min.insync.replicas -> 2, segment.jitte{noformat}

[jira] [Commented] (KAFKA-9747) No tasks created for a connector

2020-09-28 Thread Igor (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203203#comment-17203203
 ] 

Igor commented on KAFKA-9747:
-

Same issue here with Debezium (SQL Server connector).
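
In case it helps with triage, the Connect REST API can confirm whether any 
tasks were created at all (a sketch; connect-host is a placeholder, 10083 is 
the port from the report below, and the pipe in the connector name must be 
URL-encoded as %7C):
{noformat}
# Connector state plus the state of each task; an empty "tasks" array
# reproduces the symptom described below.
curl -s http://connect-host:10083/connectors/qa-s3-sink-task%7C1/status
# Task configurations, if any were generated.
curl -s http://connect-host:10083/connectors/qa-s3-sink-task%7C1/tasks
{noformat}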

> No tasks created for a connector
> 
>
> Key: KAFKA-9747
> URL: https://issues.apache.org/jira/browse/KAFKA-9747
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.4.0
> Environment: OS: Ubuntu 18.04 LTS
> Platform: Confluent Platform 5.4
> HW: The same behaviour on various AWS instances - from t3.small to c5.xlarge
>Reporter: Vit Koma
>Priority: Major
> Attachments: connect-distributed.properties, connect.log
>
>
> We are running Kafka Connect in a distributed mode on 3 nodes using Debezium 
> (MongoDB) and Confluent S3 connectors. When adding a new connector via the 
> REST API the connector is created in RUNNING state, but no tasks are created 
> for the connector.
> Pausing and resuming the connector does not help. When we stop all workers 
> and then start them again, the tasks are created and everything runs as it 
> should.
> The issue does not show up if we run only a single node.
> The issue is not caused by the connector plugins, because we see the same 
> behaviour for both Debezium and S3 connectors. Also in debug logs I can see 
> that Debezium is correctly returning a task configuration from the 
> Connector.taskConfigs() method.
> Connector configuration examples
> Debezium:
> {
>   "name": "qa-mongodb-comp-converter-task|1",
>   "config": {
> "connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
> "mongodb.hosts": 
> "mongodb-qa-001:27017,mongodb-qa-002:27017,mongodb-qa-003:27017",
> "mongodb.name": "qa-debezium-comp",
> "mongodb.ssl.enabled": true,
> "collection.whitelist": "converter[.]task",
> "tombstones.on.delete": true
>   }
> }
> S3 Connector:
> {
>   "name": "qa-s3-sink-task|1",
>   "config": {
> "connector.class": "io.confluent.connect.s3.S3SinkConnector",
> "topics": "qa-debezium-comp.converter.task",
> "topics.dir": "data/env/qa",
> "s3.region": "eu-west-1",
> "s3.bucket.name": "",
> "flush.size": "15000",
> "rotate.interval.ms": "360",
> "storage.class": "io.confluent.connect.s3.storage.S3Storage",
> "format.class": 
> "custom.kafka.connect.s3.format.plaintext.PlaintextFormat",
> "schema.generator.class": 
> "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
> "partitioner.class": 
> "io.confluent.connect.storage.partitioner.DefaultPartitioner",
> "schema.compatibility": "NONE",
> "key.converter": "org.apache.kafka.connect.json.JsonConverter",
> "value.converter": "org.apache.kafka.connect.json.JsonConverter",
> "key.converter.schemas.enable": false,
> "value.converter.schemas.enable": false,
> "transforms": "ExtractDocument",
> 
> "transforms.ExtractDocument.type":"custom.kafka.connect.transforms.ExtractDocument$Value"
>   }
> }
> The connectors are created using curl: {{curl -X POST -H "Content-Type: 
> application/json" --data @ http:/:10083/connectors}}
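
For reference, the usual shape of that request (the config file name and host 
below are placeholders; both were elided in the original message):
{noformat}
curl -X POST -H "Content-Type: application/json" \
  --data @connector-config.json \
  http://connect-host:10083/connectors
{noformat}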



--
This message was sent by Atlassian Jira
(v8.3.4#803005)