[jira] [Comment Edited] (KAFKA-9774) Create official Docker image for Kafka Connect

2023-09-27 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17068519#comment-17068519
 ] 

Jordan Moore edited comment on KAFKA-9774 at 9/27/23 4:36 PM:
--

Happy to take this on. Already maintaining at 
https://github.com/OneCricketeer/apache-kafka-connect-docker 


was (Author: cricket007):
Happy to take this on. And already working on a POC here (ignore the README and 
the project name, I started working on only the pom and the main class in this 
branch) - 
[https://github.com/cricket007/kafka-streams-jib-example/tree/feature/connect-distributed]

> Create official Docker image for Kafka Connect
> --
>
> Key: KAFKA-9774
> URL: https://issues.apache.org/jira/browse/KAFKA-9774
> Project: Kafka
>  Issue Type: Task
>  Components: build, KafkaConnect, packaging
>Affects Versions: 2.4.1
>Reporter: Jordan Moore
>Priority: Major
>  Labels: build, features, needs-kip
> Attachments: image-2020-03-27-05-04-46-792.png, 
> image-2020-03-27-05-05-59-024.png
>
>
> This is a ticket for creating an *official* apache/kafka-connect Docker 
> image. 
> Does this need a KIP?  -  I don't think so. This would be a new feature, not 
> any API change. 
> Why is this needed?
>  # Kafka Connect is stateless. I believe this is why a Kafka image is not 
> created?
>  # It scales much more easily with Docker and orchestrators. It operates much 
> like any other serverless / "microservice" web application 
>  # People struggle with deploying it because it is packaged _with Kafka_, 
> which leads some to believe it needs to _*run* with Kafka_ on the same 
> machine. 
> I think there is a separate ticket for creating an official Docker image for 
> Kafka, but clearly none exists. I reached out to Confluent about this, but 
> have heard nothing yet.
> !image-2020-03-27-05-05-59-024.png|width=740,height=196!
>  
> Zookeeper already has one, by the way.
> !image-2020-03-27-05-04-46-792.png|width=739,height=288!
> *References*: 
> [Docs for Official Images|https://docs.docker.com/docker-hub/official_images/]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (KAFKA-9774) Create official Docker image for Kafka Connect

2023-09-27 Thread Jordan Moore (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-9774 ]


Jordan Moore deleted comment on KAFKA-9774:
-

was (Author: cricket007):
Done here 
https://github.com/OneCricketeer/apache-kafka-connect-docker#containerized-apache-kafka-connect

> Create official Docker image for Kafka Connect
> --
>
> Key: KAFKA-9774
> URL: https://issues.apache.org/jira/browse/KAFKA-9774
> Project: Kafka
>  Issue Type: Task
>  Components: build, KafkaConnect, packaging
>Affects Versions: 2.4.1
>Reporter: Jordan Moore
>Priority: Major
>  Labels: build, features, needs-kip
> Attachments: image-2020-03-27-05-04-46-792.png, 
> image-2020-03-27-05-05-59-024.png
>
>
> This is a ticket for creating an *official* apache/kafka-connect Docker 
> image. 
> Does this need a KIP?  -  I don't think so. This would be a new feature, not 
> any API change. 
> Why is this needed?
>  # Kafka Connect is stateless. I believe this is why a Kafka image is not 
> created?
>  # It scales much more easily with Docker and orchestrators. It operates much 
> like any other serverless / "microservice" web application 
>  # People struggle with deploying it because it is packaged _with Kafka_, 
> which leads some to believe it needs to _*run* with Kafka_ on the same 
> machine. 
> I think there is a separate ticket for creating an official Docker image for 
> Kafka, but clearly none exists. I reached out to Confluent about this, but 
> have heard nothing yet.
> !image-2020-03-27-05-05-59-024.png|width=740,height=196!
>  
> Zookeeper already has one, by the way.
> !image-2020-03-27-05-04-46-792.png|width=739,height=288!
> *References*: 
> [Docs for Official Images|https://docs.docker.com/docker-hub/official_images/]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15445) KIP-975: Docker Image for Apache Kafka

2023-09-27 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17769698#comment-17769698
 ] 

Jordan Moore commented on KAFKA-15445:
--

Adding backlink to KAFKA-9774

> KIP-975: Docker Image for Apache Kafka
> --
>
> Key: KAFKA-15445
> URL: https://issues.apache.org/jira/browse/KAFKA-15445
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Krishna Agarwal
>Assignee: Krishna Agarwal
>Priority: Major
>  Labels: KIP-975
>
> [KIP-975: Docker Image for Apache 
> Kafka|https://cwiki.apache.org/confluence/display/KAFKA/KIP-975%3A+Docker+Image+for+Apache+Kafka]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-14011) Reduce retention.ms from default 7 days to 1 day (Make it configurable) for the dead letter topic created for error handling via sink connector

2022-06-20 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17556337#comment-17556337
 ] 

Jordan Moore edited comment on KAFKA-14011 at 6/20/22 11:32 AM:


"topic.creation" configs are only available for source connectors, for topics 
that source connectors will write to, not dead letter topics. "admin" prefix 
will only attempt to modify the AdminClient config, which does not include any 
topic configs, such as retention time. 

If you are allowing the broker to auto-create topics with defaults, then it's 
recommended that you disable this, and create all necessary topics ahead of 
time, with the topic settings you need. 

For already existing topics, you can alter their configs. 

Both operations can be done with the kafka-topics script. 
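
For illustration, a rough sketch of both operations (topic name, partition and 
replication counts, and bootstrap address are made up; on recent releases the 
alter step is done with kafka-configs rather than kafka-topics):

{code}
# Create the dead letter topic ahead of time with 1-day retention (86400000 ms)
kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic error_dlq --partitions 3 --replication-factor 3 \
  --config retention.ms=86400000

# Alter retention on an already existing topic
kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name error_dlq \
  --add-config retention.ms=86400000
{code}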




was (Author: cricket007):
"topic.creation" configs are only available for source connectors, for topics 
that source connectors will write to, not dead letter topics.

If you are allowing the broker to auto-create topics with defaults, then it's 
recommended that you disable this, and create all necessary topics ahead of 
time, with the topic settings you need. 

For already existing topics, you can alter their configs. 

Both operations can be done with kafka-topics script 



> Reduce retention.ms from default 7 days to 1 day (Make it configurable) for 
> the dead letter topic created for error handling via sink connector
> ---
>
> Key: KAFKA-14011
> URL: https://issues.apache.org/jira/browse/KAFKA-14011
> Project: Kafka
>  Issue Type: Improvement
>  Components: config, KafkaConnect
>Affects Versions: 3.0.1
>Reporter: Souvik
>Priority: Major
>
> We are creating a sink connector along with an error handling mechanism, so 
> that if there is a bad record it is routed to an error queue. The 
> properties used while creating the sink connector are as follows:
>  
>  'errors.tolerance'='all',
>  'errors.deadletterqueue.topic.name' = 'error_dlq',
>  'errors.deadletterqueue.topic.replication.factor'= -1,
>  'errors.log.include.messages' = true,
>  'errors.log.enable' = true,
>  'errors.deadletterqueue.context.headers.enable' = true
> *Now, is there any way we can configure the retention.ms for the dead letter 
> queue topic, i.e. from the default 7 days to 1 day, or is there any way we can 
> change the compaction policy by providing any property while creating the 
> source connector?*
> *We have tried the following properties in the connector:*
> {code:java}
> 'topic.creation.default.retention.ms'='3000',
> 'admin.topic.creation.default.retention.ms'='3000',
> 'admin.retention.ms' = '3000',
> 'admin.topic.retention.ms' = '3000',
> 'admin.topic.creation.retention.ms' = '3000',
> 'error.topic.creation.default.retention.ms'='3000',
> 'error.deadletterqueue.topic.creation.default.retention.ms'='3000',
> 'error.deadletterqueue.topic.retention.ms'='3000',
> 'admin.topic.creation.default.retention.ms' = 3000, {code}
> but it did not work.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (KAFKA-14011) Reduce retention.ms from default 7 days to 1 day (Make it configurable) for the dead letter topic created for error handling via sink connector

2022-06-20 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17556337#comment-17556337
 ] 

Jordan Moore commented on KAFKA-14011:
--

"topic.creation" configs are only available for source connectors, for topics 
that source connectors will write to, not dead letter topics.

If you are allowing the broker to auto-create topics with defaults, then it's 
recommended that you disable this, and create all necessary topics ahead of 
time, with the topic settings you need. 

For already existing topics, you can alter their configs. 

Both operations can be done with kafka-topics script 



> Reduce retention.ms from default 7 days to 1 day (Make it configurable) for 
> the dead letter topic created for error handling via sink connector
> ---
>
> Key: KAFKA-14011
> URL: https://issues.apache.org/jira/browse/KAFKA-14011
> Project: Kafka
>  Issue Type: Improvement
>  Components: config, KafkaConnect
>Affects Versions: 3.0.1
>Reporter: Souvik
>Priority: Major
>
> We are creating a sink connector along with an error handling mechanism, so 
> that if there is a bad record it is routed to an error queue. The 
> properties used while creating the sink connector are as follows:
>  
>  'errors.tolerance'='all',
>  'errors.deadletterqueue.topic.name' = 'error_dlq',
>  'errors.deadletterqueue.topic.replication.factor'= -1,
>  'errors.log.include.messages' = true,
>  'errors.log.enable' = true,
>  'errors.deadletterqueue.context.headers.enable' = true
> *Now, is there any way we can configure the retention.ms for the dead letter 
> queue topic, i.e. from the default 7 days to 1 day, or is there any way we can 
> change the compaction policy by providing any property while creating the 
> source connector?*
> *We have tried the following properties in the connector:*
> {code:java}
> 'topic.creation.default.retention.ms'='3000',
> 'admin.topic.creation.default.retention.ms'='3000',
> 'admin.retention.ms' = '3000',
> 'admin.topic.retention.ms' = '3000',
> 'admin.topic.creation.retention.ms' = '3000',
> 'error.topic.creation.default.retention.ms'='3000',
> 'error.deadletterqueue.topic.creation.default.retention.ms'='3000',
> 'error.deadletterqueue.topic.retention.ms'='3000',
> 'admin.topic.creation.default.retention.ms' = 3000, {code}
> but it did not work.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (KAFKA-13571) Enabling MirrorMaker 2.0 with TLS

2022-04-07 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17518846#comment-17518846
 ] 

Jordan Moore edited comment on KAFKA-13571 at 4/7/22 12:21 PM:
---

Please lower the priority as I don't believe this is a project blocker. It 
should work just fine. Please check the startup logs for the process to verify 
the consumer and producer configs have loaded the properties you've defined. 
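
For reference, a rough sketch of how the cluster-prefixed TLS client settings 
usually look in connect-mirror-maker.properties (aliases, hosts, and paths 
below are placeholders); note that security.protocol is the client-side 
setting, while security.inter.broker.protocol is a broker-side config:

{code}
clusters = source, target
source.bootstrap.servers = source-broker:9093
target.bootstrap.servers = target-broker:9093

# TLS client settings, prefixed with the cluster alias
source.security.protocol = SSL
source.ssl.truststore.location = /home/kafka.truststore.jks
source.ssl.truststore.password = changeit

target.security.protocol = SSL
target.ssl.truststore.location = /home/kafka.truststore.jks
target.ssl.truststore.password = changeit
{code}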


was (Author: cricket007):
It should work just fine. Please check the startup logs for the process to 
verify the consumer and producer configs have loaded the properties you've 
defined. 

> Enabling MirrorMaker 2.0 with TLS
> -
>
> Key: KAFKA-13571
> URL: https://issues.apache.org/jira/browse/KAFKA-13571
> Project: Kafka
>  Issue Type: Task
>  Components: mirrormaker
>Affects Versions: 3.0.0
>Reporter: Bharath Reddy
>Priority: Blocker
>
> Hi All,
>  
> We are trying to enable TLS for MirrorMaker 2.0 (connect-mirror-maker.sh) for 
> Apache Kafka 3.0. I have used the below parameters but it has not succeeded.
>  
> Please confirm the points below.
>  
>  - Is the TLS feature available for MirrorMaker 2.0 (connect-mirror-maker.sh)? If 
> yes, can you please share a blog/configuration to enable it.
>  
> source.ssl.truststore.location=/home/kafka.truststore.jks
> source.ssl.truststore.password=
> source.ssl.keystore.location=/home/kafka.keystore.jks
> source.ssl.keystore.password=**
> source.ssl.key.password=**
> source.security.inter.broker.protocol=SSL
> source.ssl.endpoint.identification.algorithm=
> target.ssl.truststore.location=/home//kafka.truststore.jks
> target.ssl.truststore.password=***
> target.ssl.keystore.location=/home/kafka.keystore.jks
> target.ssl.keystore.password=**
> target.ssl.key.password=**
> target.security.inter.broker.protocol=SSL
> target.ssl.endpoint.identification.algorithm=
>  
> Thanks,
> Bharath Reddy



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (KAFKA-13571) Enabling MirrorMaker 2.0 with TLS

2022-04-07 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17518846#comment-17518846
 ] 

Jordan Moore commented on KAFKA-13571:
--

It should work just fine. Please check the startup logs for the process to 
verify the consumer and producer configs have loaded the properties you've 
defined. 

> Enabling MirrorMaker 2.0 with TLS
> -
>
> Key: KAFKA-13571
> URL: https://issues.apache.org/jira/browse/KAFKA-13571
> Project: Kafka
>  Issue Type: Task
>  Components: mirrormaker
>Affects Versions: 3.0.0
>Reporter: Bharath Reddy
>Priority: Blocker
>
> Hi All,
>  
> We are trying to enable TLS for MirrorMaker 2.0 (connect-mirror-maker.sh) for 
> Apache Kafka 3.0. I have used the below parameters but it has not succeeded.
>  
> Please confirm the points below.
>  
>  - Is the TLS feature available for MirrorMaker 2.0 (connect-mirror-maker.sh)? If 
> yes, can you please share a blog/configuration to enable it.
>  
> source.ssl.truststore.location=/home/kafka.truststore.jks
> source.ssl.truststore.password=
> source.ssl.keystore.location=/home/kafka.keystore.jks
> source.ssl.keystore.password=**
> source.ssl.key.password=**
> source.security.inter.broker.protocol=SSL
> source.ssl.endpoint.identification.algorithm=
> target.ssl.truststore.location=/home//kafka.truststore.jks
> target.ssl.truststore.password=***
> target.ssl.keystore.location=/home/kafka.keystore.jks
> target.ssl.keystore.password=**
> target.ssl.key.password=**
> target.security.inter.broker.protocol=SSL
> target.ssl.endpoint.identification.algorithm=
>  
> Thanks,
> Bharath Reddy



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (KAFKA-13762) Kafka brokers are not coming up

2022-04-07 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17518845#comment-17518845
 ] 

Jordan Moore commented on KAFKA-13762:
--

[~kkameshm90] Please lower the priority as it is not a project blocker.

Your error is caused by the Prometheus JMX exporter you've added to 
your broker startup process, and so is really unrelated to any issue with 
the Kafka project.

You need to stop whatever other process has already bound to the exporter port 
or change the port the exporter runs on 
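
As a hypothetical illustration (jar path, config path, and port numbers are 
made up), the port the Prometheus JMX exporter binds is the number after the = 
in the javaagent argument, so each broker on the same VM needs its own value:

{code}
# Broker 1 on the VM
export KAFKA_OPTS="-javaagent:/opt/jmx_prometheus_javaagent.jar=7071:/opt/jmx-exporter.yml"

# Broker 2 on the same VM must use a different port
export KAFKA_OPTS="-javaagent:/opt/jmx_prometheus_javaagent.jar=7072:/opt/jmx-exporter.yml"
{code}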

> Kafka brokers are not coming up 
> 
>
> Key: KAFKA-13762
> URL: https://issues.apache.org/jira/browse/KAFKA-13762
> Project: Kafka
>  Issue Type: Bug
>Reporter: Kamesh
>Priority: Blocker
>
> Out of 9 brokers, only 3 brokers are coming up. There are 3 VMs in total, each 
> VM running 3 brokers.
> We are getting the below error: 
> Exception in thread "main" java.lang.reflect.InvocationTargetException
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:386)
>         at 
> sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:401)
> Caused by: java.net.BindException: Address already in use
>         at sun.nio.ch.Net.bind0(Native Method)
>         at sun.nio.ch.Net.bind(Net.java:433)
>         at sun.nio.ch.Net.bind(Net.java:425)
>         at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>         at sun.net.httpserver.ServerImpl.<init>(ServerImpl.java:100)
>         at sun.net.httpserver.HttpServerImpl.<init>(HttpServerImpl.java:50)
>         at 
> sun.net.httpserver.DefaultHttpServerProvider.createHttpServer(DefaultHttpServerProvider.java:35)
>         at com.sun.net.httpserver.HttpServer.create(HttpServer.java:130)
>         at 
> io.prometheus.jmx.shaded.io.prometheus.client.exporter.HTTPServer.<init>(HTTPServer.java:179)
>         at 
> io.prometheus.jmx.shaded.io.prometheus.jmx.JavaAgent.premain(JavaAgent.java:31)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (KAFKA-13486) Kafka Connect: Failed to start task due to NPE

2022-01-19 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17479033#comment-17479033
 ] 

Jordan Moore commented on KAFKA-13486:
--

You seem to be missing part of the stacktrace. The workers need to know about 
each other via their advertised REST listener settings... I believe the Strimzi 
Operator's KafkaConnect CRD, for example, configures this correctly.  
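
For context, a rough sketch of the distributed worker settings involved (the 
hostname shown is a placeholder; on Kubernetes it would typically be a stable 
pod or service address rather than a pod IP):

{code}
# connect-distributed.properties
listeners=http://0.0.0.0:8083
# Address this worker advertises to the other workers for forwarded REST requests
rest.advertised.host.name=connect-0.connect-svc.my-namespace.svc
rest.advertised.port=8083
{code}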

> Kafka Connect: Failed to start task due to NPE
> --
>
> Key: KAFKA-13486
> URL: https://issues.apache.org/jira/browse/KAFKA-13486
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.7.1
> Environment: Kubernetes, custom docker image
>Reporter: Geliba Uilte
>Priority: Major
>
> I have a Kafka Connect cluster with three workers running on Kubernetes. The 
> workers communicate with each other using pod's IP (internal IP 192.X.X.X). 
> Sometimes, pods are redistributed to a different node. I am not sure if it has 
> anything to do with the issue, but I think it causes the pod's IP to change 
> and Kafka Connect needs to rebalance.
> Occasionally, tasks fail due to NPE.
> From the connectors/:connector/status REST API, I can see this trace:
>  
> {code:java}
> at org.apache.kafka.connect.runtime.Worker.startTask(Worker.java:517)
> at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startTask(DistributedHerder.java:1258)
> at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1700(DistributedHerder.java:127)
> at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder$10.call(DistributedHerder.java:1273)
> at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder$10.call(DistributedHerder.java:1269)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834){code}
>  
> It looks like the issue is similar to KAFKA-10323, and
> it seems the NPE is thrown from 
> [here|https://github.com/apache/kafka/blob/2.7.1/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java#L517].
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (KAFKA-13497) Debug Log RegexRouter transform

2022-01-19 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17479029#comment-17479029
 ] 

Jordan Moore commented on KAFKA-13497:
--

While extra logging would be useful, creating a Matcher and calling 
replaceFirst with the transform's replacement value against the topic name itself 
is easy to debug outside of Connect.
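
For example, a small standalone sketch of that idea (the regex, replacement, 
and topic name are made up), using the same matches()/replaceFirst() sequence 
the transform applies:

{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexRouterDebug {
    public static void main(String[] args) {
        Pattern regex = Pattern.compile("(.*)-raw");   // hypothetical transforms.route.regex
        String replacement = "$1-processed";           // hypothetical transforms.route.replacement
        String topic = "orders-raw";

        Matcher matcher = regex.matcher(topic);
        if (matcher.matches()) {
            // Same call the transform uses to compute the renamed topic
            System.out.println(matcher.replaceFirst(replacement)); // prints orders-processed
        } else {
            System.out.println("No match; topic unchanged: " + topic);
        }
    }
}
{code}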

> Debug Log RegexRouter transform
> ---
>
> Key: KAFKA-13497
> URL: https://issues.apache.org/jira/browse/KAFKA-13497
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Spencer Carlson
>Priority: Minor
>
> It is hard to troubleshoot regex transformations without knowing exactly what 
> the new topic transformation is.
>  
> Can we add a debug log in the 
> [RegexRouter.java#L59|https://github.com/apache/kafka/blob/99b9b3e84f4e98c3f07714e1de6a139a004cbc5b/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/RegexRouter.java#L59]
>  to print out the transformed topic



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (KAFKA-13163) MySQL Sink Connector - JsonConverter - DataException: Unknown schema type: null

2021-08-09 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395955#comment-17395955
 ] 

Jordan Moore commented on KAFKA-13163:
--

Like I said, JDBC or any Confluent connector isn't supported here; they 
have their own GitHub issue repository for issues specific to them, or you can 
move back to Stack Overflow.

Overall, this seems like a duplicate of KAFKA-10477

> MySQL Sink Connector - JsonConverter - DataException: Unknown schema type: 
> null
> ---
>
> Key: KAFKA-13163
> URL: https://issues.apache.org/jira/browse/KAFKA-13163
> Project: Kafka
>  Issue Type: Task
>  Components: KafkaConnect
>Affects Versions: 2.1.1
> Environment: PreProd
>Reporter: Muddam Pullaiah Yadav
>Priority: Major
>
> Please help with the following issue. Really appreciate it! 
>  
> We are using Azure HDInsight Kafka cluster 
> My sink Properties:
>  
> cat mysql-sink-connector
>  {
>  "name":"mysql-sink-connector",
>  "config":
> { "tasks.max":"2", "batch.size":"1000", "batch.max.rows":"1000", 
> "poll.interval.ms":"500", 
> "connector.class":"org.apache.kafka.connect.file.FileStreamSinkConnector", 
> "connection.url":"jdbc:mysql://moddevdb.mysql.database.azure.com:3306/db_test_dev",
>  "table.name":"db_test_dev.tbl_clients_merchants", "topics":"test", 
> "connection.user":"grabmod", "connection.password":"#admin", 
> "auto.create":"true", "auto.evolve":"true", 
> "value.converter":"org.apache.kafka.connect.json.JsonConverter", 
> "value.converter.schemas.enable":"false", 
> "key.converter":"org.apache.kafka.connect.json.JsonConverter", 
> "key.converter.schemas.enable":"true" }
> }
>  
> [2021-08-04 11:18:30,234] ERROR WorkerSinkTask\{id=mysql-sink-connector-0} 
> Task threw an uncaught and unrecoverable exception 
> (org.apache.kafka.connect.runtime.WorkerTask:177)
>  org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in 
> error handler
>  at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
>  at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
>  at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:514)
>  at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:491)
>  at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
>  at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:226)
>  at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:194)
>  at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
>  at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  Caused by: org.apache.kafka.connect.errors.DataException: Unknown schema 
> type: null
>  at 
> org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:743)
>  at 
> org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:363)
>  at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:514)
>  at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
>  at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
>  ... 13 more
>  [2021-08-04 11:18:30,234] ERROR WorkerSinkTask\{id=mysql-sink-connector-0} 
> Task is being killed and will not recover until manually restarted 
> (org.apache.kafka.connect.runtime.WorkerTask:178)
>  [2021-08-04 11:18:30,235] INFO [Consumer clientId=consumer-18, 
> groupId=connect-mysql-sink-connector] Sending LeaveGroup request to 
> coordinator 
> wn2-grabde.fkgw2p1emuqu5d21xcbqrhqqbf.rx.internal.cloudapp.net:9092 (id: 
> 2147482646 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:782)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-13163) MySQL Sink Connector - JsonConverter - DataException: Unknown schema type: null

2021-08-09 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395922#comment-17395922
 ] 

Jordan Moore commented on KAFKA-13163:
--

While useful information, showing only one record doesn't provide much detail. 
How exactly is data getting into the topic? Are you testing each config change 
you're making on a brand new topic? I still think you have at least one null 
message in your topic and the connector is failing on it. The 
MySqlSinkConnector isn't supported here, so are you able to reproduce the 
problem using a different Connector such as the FileSink since the same 
Converter logic will be used by it? 

> MySQL Sink Connector - JsonConverter - DataException: Unknown schema type: 
> null
> ---
>
> Key: KAFKA-13163
> URL: https://issues.apache.org/jira/browse/KAFKA-13163
> Project: Kafka
>  Issue Type: Task
>  Components: KafkaConnect
>Affects Versions: 2.1.1
> Environment: PreProd
>Reporter: Muddam Pullaiah Yadav
>Priority: Major
>
> Please help with the following issue. Really appreciate it! 
>  
> We are using Azure HDInsight Kafka cluster 
> My sink Properties:
>  
> cat mysql-sink-connector
>  {
>  "name":"mysql-sink-connector",
>  "config":
> { "tasks.max":"2", "batch.size":"1000", "batch.max.rows":"1000", 
> "poll.interval.ms":"500", 
> "connector.class":"org.apache.kafka.connect.file.FileStreamSinkConnector", 
> "connection.url":"jdbc:mysql://moddevdb.mysql.database.azure.com:3306/db_test_dev",
>  "table.name":"db_test_dev.tbl_clients_merchants", "topics":"test", 
> "connection.user":"grabmod", "connection.password":"#admin", 
> "auto.create":"true", "auto.evolve":"true", 
> "value.converter":"org.apache.kafka.connect.json.JsonConverter", 
> "value.converter.schemas.enable":"false", 
> "key.converter":"org.apache.kafka.connect.json.JsonConverter", 
> "key.converter.schemas.enable":"true" }
> }
>  
> [2021-08-04 11:18:30,234] ERROR WorkerSinkTask\{id=mysql-sink-connector-0} 
> Task threw an uncaught and unrecoverable exception 
> (org.apache.kafka.connect.runtime.WorkerTask:177)
>  org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in 
> error handler
>  at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
>  at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
>  at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:514)
>  at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:491)
>  at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
>  at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:226)
>  at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:194)
>  at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
>  at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  Caused by: org.apache.kafka.connect.errors.DataException: Unknown schema 
> type: null
>  at 
> org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:743)
>  at 
> org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:363)
>  at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:514)
>  at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
>  at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
>  ... 13 more
>  [2021-08-04 11:18:30,234] ERROR WorkerSinkTask\{id=mysql-sink-connector-0} 
> Task is being killed and will not recover until manually restarted 
> (org.apache.kafka.connect.runtime.WorkerTask:178)
>  [2021-08-04 11:18:30,235] INFO [Consumer clientId=consumer-18, 
> groupId=connect-mysql-sink-connector] Sending LeaveGroup request to 
> coordinator 
> wn2-grabde.fkgw2p1emuqu5d21xcbqrhqqbf.rx.internal.cloudapp.net:9092 (id: 
> 2147482646 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:782)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-13163) Issue with MySql worker sink connector

2021-08-06 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17394622#comment-17394622
 ] 

Jordan Moore commented on KAFKA-13163:
--

As the error suggests, your incoming messages are null, and the JSONConverter 
cannot process them because there is no schema for null records. 

You may set schemas.enable=false, and this will prevent schemas from being 
made. 

As an aside: You appear to have configured FileStreamSinkConnector, which does 
not require schemas, but are using unrelated JDBC properties for an unknown 
reason and called this a "mysql-sink-connector", which it is not. 
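
For instance, a hypothetical minimal FileStreamSinkConnector config (name, 
topic, and file path are made up) that exercises the same JsonConverter 
settings without any JDBC properties, which makes it easy to isolate whether 
the converter or the data is the problem:

{code:java}
{
  "name": "filesink-json-debug",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
    "tasks.max": "1",
    "topics": "test",
    "file": "/tmp/test-sink.txt",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "false",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false"
  }
}
{code}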

> Issue with MySql worker sink connector
> --
>
> Key: KAFKA-13163
> URL: https://issues.apache.org/jira/browse/KAFKA-13163
> Project: Kafka
>  Issue Type: Task
>  Components: KafkaConnect
>Affects Versions: 2.1.1
> Environment: PreProd
>Reporter: Muddam Pullaiah Yadav
>Priority: Major
>
> Please help with the following issue. Really appreciate it! 
>  
> We are using Azure HDInsight Kafka cluster 
> My sink Properties:
>  
> cat mysql-sink-connector
>  {
>  "name":"mysql-sink-connector",
>  "config":
> { "tasks.max":"2", "batch.size":"1000", "batch.max.rows":"1000", 
> "poll.interval.ms":"500", 
> "connector.class":"org.apache.kafka.connect.file.FileStreamSinkConnector", 
> "connection.url":"jdbc:mysql://moddevdb.mysql.database.azure.com:3306/db_test_dev",
>  "table.name":"db_test_dev.tbl_clients_merchants", "topics":"test", 
> "connection.user":"grabmod", "connection.password":"#admin", 
> "auto.create":"true", "auto.evolve":"true", 
> "value.converter":"org.apache.kafka.connect.json.JsonConverter", 
> "value.converter.schemas.enable":"false", 
> "key.converter":"org.apache.kafka.connect.json.JsonConverter", 
> "key.converter.schemas.enable":"true" }
> }
>  
> [2021-08-04 11:18:30,234] ERROR WorkerSinkTask\{id=mysql-sink-connector-0} 
> Task threw an uncaught and unrecoverable exception 
> (org.apache.kafka.connect.runtime.WorkerTask:177)
>  org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in 
> error handler
>  at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
>  at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
>  at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:514)
>  at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:491)
>  at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
>  at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:226)
>  at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:194)
>  at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
>  at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  Caused by: org.apache.kafka.connect.errors.DataException: Unknown schema 
> type: null
>  at 
> org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:743)
>  at 
> org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:363)
>  at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:514)
>  at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
>  at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
>  ... 13 more
>  [2021-08-04 11:18:30,234] ERROR WorkerSinkTask\{id=mysql-sink-connector-0} 
> Task is being killed and will not recover until manually restarted 
> (org.apache.kafka.connect.runtime.WorkerTask:178)
>  [2021-08-04 11:18:30,235] INFO [Consumer clientId=consumer-18, 
> groupId=connect-mysql-sink-connector] Sending LeaveGroup request to 
> coordinator 
> wn2-grabde.fkgw2p1emuqu5d21xcbqrhqqbf.rx.internal.cloudapp.net:9092 (id: 
> 2147482646 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:782)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12534) kafka-configs does not work with ssl enabled kafka broker.

2021-05-12 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17343202#comment-17343202
 ] 

Jordan Moore commented on KAFKA-12534:
--

According to this documentation, the listener name (in lowercase) "must" be 
used to set the ssl properties

So, for listeners=SSL://:9093

{code} 
kafka-configs --command-config /etc/kafka/client.properties --bootstrap-server 
hostname:port --entity-type brokers --entity-name  --alter 
--add-config listener.name.ssl.ssl.keystore.location=
{code}

https://docs.confluent.io/platform/current/kafka/dynamic-config.html#updating-ssl-keystore-of-an-existing-listener

> kafka-configs does not work with ssl enabled kafka broker.
> --
>
> Key: KAFKA-12534
> URL: https://issues.apache.org/jira/browse/KAFKA-12534
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: kaushik srinivas
>Priority: Critical
>
> We are trying to change the trust store password on the fly using the 
> kafka-configs script for a ssl enabled kafka broker.
> Below is the command used:
> kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-name 1001 --alter --add-config 'ssl.truststore.password=xxx'
> But we see below error in the broker logs when the command is run.
> {"type":"log", "host":"kf-2-0", "level":"INFO", 
> "neid":"kafka-cfd5ccf2af7f47868e83473408", "system":"kafka", 
> "time":"2021-03-23T12:14:40.055", "timezone":"UTC", 
> "log":\{"message":"data-plane-kafka-network-thread-1002-ListenerName(SSL)-SSL-2
>  - org.apache.kafka.common.network.Selector - [SocketServer brokerId=1002] 
> Failed authentication with /127.0.0.1 (SSL handshake failed)"}}
>  How can anyone configure ssl certs for the kafka-configs script and succeed 
> with the ssl handshake in this case ? 
> Note : 
> We are trying with a single listener i.e SSL: 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12534) kafka-configs does not work with ssl enabled kafka broker.

2021-05-12 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17343196#comment-17343196
 ] 

Jordan Moore commented on KAFKA-12534:
--

It would appear my original comment has solved your error and the title of this 
issue (you no longer get SSL handshake errors)

I'm not sure about the remaining issues; however, the properties you posted do not 
include /etc/kafka/secrets, nor does the error message refer to a JKS file, 
so it's difficult to reproduce your error if you obfuscate 
filenames differently in the logs and properties. 

> kafka-configs does not work with ssl enabled kafka broker.
> --
>
> Key: KAFKA-12534
> URL: https://issues.apache.org/jira/browse/KAFKA-12534
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: kaushik srinivas
>Priority: Critical
>
> We are trying to change the trust store password on the fly using the 
> kafka-configs script for a ssl enabled kafka broker.
> Below is the command used:
> kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-name 1001 --alter --add-config 'ssl.truststore.password=xxx'
> But we see below error in the broker logs when the command is run.
> {"type":"log", "host":"kf-2-0", "level":"INFO", 
> "neid":"kafka-cfd5ccf2af7f47868e83473408", "system":"kafka", 
> "time":"2021-03-23T12:14:40.055", "timezone":"UTC", 
> "log":\{"message":"data-plane-kafka-network-thread-1002-ListenerName(SSL)-SSL-2
>  - org.apache.kafka.common.network.Selector - [SocketServer brokerId=1002] 
> Failed authentication with /127.0.0.1 (SSL handshake failed)"}}
>  How can anyone configure ssl certs for the kafka-configs script and succeed 
> with the ssl handshake in this case ? 
> Note : 
> We are trying with a single listener i.e SSL: 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12534) kafka-configs does not work with ssl enabled kafka broker.

2021-04-14 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17320979#comment-17320979
 ] 

Jordan Moore commented on KAFKA-12534:
--

The config warnings should be fixed with KAFKA-10090

The error says it failed to load the keystore, so the properties are definitely 
being read, and something about your files is incorrect; for example, maybe the 
double slash in the listed file path or the file permissions themselves. 



> kafka-configs does not work with ssl enabled kafka broker.
> --
>
> Key: KAFKA-12534
> URL: https://issues.apache.org/jira/browse/KAFKA-12534
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: kaushik srinivas
>Priority: Critical
>
> We are trying to change the trust store password on the fly using the 
> kafka-configs script for a ssl enabled kafka broker.
> Below is the command used:
> kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-name 1001 --alter --add-config 'ssl.truststore.password=xxx'
> But we see below error in the broker logs when the command is run.
> {"type":"log", "host":"kf-2-0", "level":"INFO", 
> "neid":"kafka-cfd5ccf2af7f47868e83473408", "system":"kafka", 
> "time":"2021-03-23T12:14:40.055", "timezone":"UTC", 
> "log":\{"message":"data-plane-kafka-network-thread-1002-ListenerName(SSL)-SSL-2
>  - org.apache.kafka.common.network.Selector - [SocketServer brokerId=1002] 
> Failed authentication with /127.0.0.1 (SSL handshake failed)"}}
>  How can anyone configure ssl certs for the kafka-configs script and succeed 
> with the ssl handshake in this case ? 
> Note : 
> We are trying with a single listener i.e SSL: 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12510) Unable to access Kafka connection on a machine behind the company firewall

2021-04-11 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319012#comment-17319012
 ] 

Jordan Moore commented on KAFKA-12510:
--

Please update the issue with more details. 

For example, can you telnet / netcat the broker from the machine you're trying 
to connect from? 

Does the broker set up its advertised.listeners config in such a way that it is 
actually resolvable from "outside" that firewall?

If neither is true, I'm unsure this is really an issue that can be fixed 
here; rather, it should be addressed by your networking admins. 
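
For example (the host, port, and listener value below are placeholders), a 
quick reachability check from the client machine, plus the broker-side setting 
that has to resolve from outside the firewall:

{code}
# From the machine behind the firewall
nc -vz broker.example.com 9092

# Broker-side server.properties: clients connect to whatever is advertised here
advertised.listeners=PLAINTEXT://broker.example.com:9092
{code}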

> Unable to access Kafka connection on a machine behind the company firewall
> --
>
> Key: KAFKA-12510
> URL: https://issues.apache.org/jira/browse/KAFKA-12510
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 2.6.1
>Reporter: satyaveni
>Priority: Major
>
> Hi Team,
> I want to consume messages on Kafka Topic and there is an error which says 
> Connection Failed; Caused by: Unknown failure: 
> org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> fetching topic metadata; Caused by: Timeout expired while fetching topic 
> metadata



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12517) Kafka's readme should provide Run configurations in the BUILD with IDE Section

2021-04-11 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319011#comment-17319011
 ] 

Jordan Moore commented on KAFKA-12517:
--

I think the recommendation is to actually use kafka-server-start for running 
the broker with your code changes. 

You can find the main class (and how the classpath is constructed) by looking 
at that shell script + kafka-run-class.sh

If you need to set up a debugger, the KAFKA_DEBUG env var can be set - 
https://github.com/apache/kafka/blob/trunk/bin/kafka-run-class.sh#L235-L252
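
As a rough sketch (paths are assumed, and the exact variable names can be 
checked in the linked script), running a locally built broker with a remote 
debugger port open for the IDE to attach to:

{code}
export KAFKA_DEBUG=y
export JAVA_DEBUG_PORT=5005   # default used by kafka-run-class.sh when unset
./bin/kafka-server-start.sh config/server.properties
{code}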

> Kafka's readme should provide Run configurations in the BUILD with IDE Section
> --
>
> Key: KAFKA-12517
> URL: https://issues.apache.org/jira/browse/KAFKA-12517
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Aviral Srivastava
>Priority: Major
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> I want to contribute to Kafka and for that, I want to set up IntelliJ IDEA 
> for Kafka so that I can debug and run it(Kafka) using an IDE. 
> However, I am not able to configure my "Run" settings and the documentation 
> does not contain any such information.
> I dived deeper into how Kafka is normally executed and could not deduce the 
> basics such as Classpath, etc.
> I have asked a specific question regarding the same here: 
> [https://stackoverflow.com/questions/66739363/where-is-the-main-class-of-kafka-located]
> I am ready to contribute (add Run configs to the configuration) but I need 
> some help from the community in setting up my own IDE.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12530) kafka-configs.sh does not work while changing the sasl jaas configurations.

2021-04-11 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319006#comment-17319006
 ] 

Jordan Moore commented on KAFKA-12530:
--

Duplicates 
KAFKA-12529 
KAFKA-12528

> kafka-configs.sh does not work while changing the sasl jaas configurations.
> ---
>
> Key: KAFKA-12530
> URL: https://issues.apache.org/jira/browse/KAFKA-12530
> Project: Kafka
>  Issue Type: Bug
>Reporter: kaushik srinivas
>Priority: Major
>
> We are trying to use kafka-configs script to modify the sasl jaas 
> configurations, but unable to do so.
> Command used:
> ./kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-name 59 --alter --add-config 'sasl.jaas.config=KafkaServer \{\n 
> org.apache.kafka.common.security.plain.PlainLoginModule required \n 
> username=\"test\" \n password=\"test\"; \n };'
> error:
> requirement failed: Invalid entity config: all configs to be added must be in 
> the format "key=val".
> command 2:
> kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-name 59 --alter --add-config 
> 'sasl.jaas.config=[username=test,password=test]'
> output:
> the command does not return, but the kafka broker logs the below error:
> DEBUG", "neid":"kafka-cfd5ccf2af7f47868e83471a5b603408", "system":"kafka", 
> "time":"2021-03-23T08:29:00.946", "timezone":"UTC", 
> "log":\{"message":"data-plane-kafka-network-thread-1001-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-2
>  - org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - 
> Set SASL server state to FAILED during authentication"}}
>  {"type":"log", "host":"kf-kaudynamic-0", "level":"INFO", 
> "neid":"kafka-cfd5ccf2af7f47868e83471a5b603408", "system":"kafka", 
> "time":"2021-03-23T08:29:00.946", "timezone":"UTC", 
> "log":{"message":"data-plane-kafka-network-thread-1001-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-2
>  - org.apache.kafka.common.network.Selector - [SocketServer brokerId=1001] 
> Failed authentication with /127.0.0.1 (Unexpected Kafka request of type 
> METADATA during SASL handshake.)"}}
> We have below issues:
>  1. If one installs a kafka broker with a SASL mechanism and wants to change the 
> SASL jaas config via the kafka-configs script, how is it supposed to be done? 
> Is one supposed to provide the kafka-configs script credentials to get 
> authenticated with the kafka broker?
>  Does kafka-configs need client credentials to do the same? 
>  2. Can anyone point us to example commands of kafka-configs to alter the 
> sasl.jaas.config property of kafka broker. We do not see any documentation or 
> examples for the same.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12534) kafka-configs does not work with ssl enabled kafka broker.

2021-04-11 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319001#comment-17319001
 ] 

Jordan Moore commented on KAFKA-12534:
--

Include --command-config parameter with a file that includes the SSL properties 
for the AdminClient

https://kafka.apache.org/documentation/#adminclientconfigs
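
A rough sketch of what that might look like (paths, host, and passwords are 
placeholders):

{code}
# client.properties - AdminClient TLS settings
security.protocol=SSL
ssl.truststore.location=/etc/kafka/secrets/client.truststore.jks
ssl.truststore.password=changeit
{code}

{code}
kafka-configs.sh --bootstrap-server hostname:9093 \
  --command-config client.properties \
  --entity-type brokers --entity-name 1001 \
  --alter --add-config 'ssl.truststore.password=xxx'
{code}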

> kafka-configs does not work with ssl enabled kafka broker.
> --
>
> Key: KAFKA-12534
> URL: https://issues.apache.org/jira/browse/KAFKA-12534
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: kaushik srinivas
>Priority: Critical
>
> We are trying to change the trust store password on the fly using the 
> kafka-configs script for a ssl enabled kafka broker.
> Below is the command used:
> kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-name 1001 --alter --add-config 'ssl.truststore.password=xxx'
> But we see below error in the broker logs when the command is run.
> {"type":"log", "host":"kf-2-0", "level":"INFO", 
> "neid":"kafka-cfd5ccf2af7f47868e83473408", "system":"kafka", 
> "time":"2021-03-23T12:14:40.055", "timezone":"UTC", 
> "log":\{"message":"data-plane-kafka-network-thread-1002-ListenerName(SSL)-SSL-2
>  - org.apache.kafka.common.network.Selector - [SocketServer brokerId=1002] 
> Failed authentication with /127.0.0.1 (SSL handshake failed)"}}
>  How can anyone configure ssl certs for the kafka-configs script and succeed 
> with the ssl handshake in this case ? 
> Note : 
> We are trying with a single listener i.e SSL: 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12621) Kafka setup with Zookeeper- specifying an alternate znode creates the configuration at the wrong znode

2021-04-11 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17318999#comment-17318999
 ] 

Jordan Moore commented on KAFKA-12621:
--

Based on various examples, you only need the znode on the very last entry

This is also stated as such in the docs - 
https://kafka.apache.org/documentation/#brokerconfigs_zookeeper.connect
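
For example, reusing the IPs from the description, the chroot path goes once, 
after the final host:port pair:

{code}
zookeeper.connect=10.114.103.207:2181,10.114.103.206:2181,10.114.103.205:2181/kafka_secondary_cluster
{code}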

> Kafka setup with Zookeeper- specifying an alternate znode creates the 
> configuration at the wrong znode
> --
>
> Key: KAFKA-12621
> URL: https://issues.apache.org/jira/browse/KAFKA-12621
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 2.6.1
> Environment: Linux
> OS: 16.04.1-Ubuntu SMP 
> Architecture: x86_64
> Kernel Version: 4.15.0-1108-azure
>Reporter: Jibitesh Prasad
>Priority: Major
>  Labels: config, newbie, server.properties
>
> While configuring kafka with a znode other than "/", the configuration is 
> created in the wrong znode. For example, I have the following entry in my 
> server.properties:
> _zookeeper.connect=10.114.103.207:2181/kafka_secondary_cluster,10.114.103.206:2181/kafka_secondary_cluster,10.114.103.205:2181/kafka_secondary_cluster_
> The IPs are the IP addresses of the nodes of zookeeper cluster. I expect the 
> kafka server to use _kafka_secondary_cluster_ as the znode in the zookeeper 
> nodes. But, the znode which is created is actually
> _/kafka_secondary_cluster,10.114.103.206:2181/kafka_secondary_cluster,10.114.103.205:2181/kafka_secondary_cluster_
> Executing ls on the above path shows me the necessary znodes being created in 
> that path
> _[zk: localhost:2181(CONNECTED) 1] ls 
> /kafka_secondary_cluster,10.114.103.206:2181/kafka_secondary_cluster,10.114.103.205:2181/kafka_secondary_cluster_
> Output:
>  _[admin, brokers, cluster, config, consumers, controller, controller_epoch, 
> isr_change_notification, latest_producer_id_block, 
> log_dir_event_notification]_
> Shouldn't these configurations be created in _/kafka_secondary_cluster_? It 
> seems the comma-separated values are not being split correctly. Or am I doing 
> something wrong?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9774) Create official Docker image for Kafka Connect

2020-05-18 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17109910#comment-17109910
 ] 

Jordan Moore commented on KAFKA-9774:
-

Done here 
https://github.com/OneCricketeer/apache-kafka-connect-docker#containerized-apache-kafka-connect

> Create official Docker image for Kafka Connect
> --
>
> Key: KAFKA-9774
> URL: https://issues.apache.org/jira/browse/KAFKA-9774
> Project: Kafka
>  Issue Type: Task
>  Components: build, KafkaConnect, packaging
>Affects Versions: 2.4.1
>Reporter: Jordan Moore
>Priority: Major
>  Labels: build, features, needs-kip
> Attachments: image-2020-03-27-05-04-46-792.png, 
> image-2020-03-27-05-05-59-024.png
>
>
> This is a ticket for creating an *official* apache/kafka-connect Docker 
> image. 
> Does this need a KIP?  -  I don't think so. This would be a new feature, not 
> any API change. 
> Why is this needed?
>  # Kafka Connect is stateless. I believe this is why a Kafka image is not 
> created?
>  # It scales much more easily with Docker and orchestrators. It operates much 
> like any other serverless / "microservice" web application 
>  # People struggle with deploying it because it is packaged _with Kafka_, 
> which leads some to believe it needs to _*run* with Kafka_ on the same 
> machine. 
> I think there is a separate ticket for creating an official Docker image for 
> Kafka, but clearly none exists. I reached out to Confluent about this, but 
> have heard nothing yet.
> !image-2020-03-27-05-05-59-024.png|width=740,height=196!
>  
> Zookeeper already has one, by the way.
> !image-2020-03-27-05-04-46-792.png|width=739,height=288!
> *References*: 
> [Docs for Official Images|https://docs.docker.com/docker-hub/official_images/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9660) KAFKA-1 build a kafka-exporter by java

2020-05-15 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17108652#comment-17108652
 ] 

Jordan Moore commented on KAFKA-9660:
-

Can someone please explain why this should be used over Jolokia JVM or 
Prometheus JMX Exporter agents?

In particular, those work great, and in production, for most people I work 
with. Secondly, why be "native" to Kafka? If you're able to filter down to just 
export the Kafka MBeans, then great; however, I don't know if I personally am a 
fan of making something so specific when the options listed seem to work fine 
for me and many others.

>  KAFKA-1 build a kafka-exporter by java
> ---
>
> Key: KAFKA-9660
> URL: https://issues.apache.org/jira/browse/KAFKA-9660
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, metrics
>Affects Versions: 0.10.2.0, 1.1.0, 2.0.0
> Environment: java8+
>Reporter: francis lee
>Assignee: Sujay Hegde
>Priority: Major
>  Labels: newbie
>
> [KIP-575|https://cwiki.apache.org/confluence/display/KAFKA/KIP-575%3A+build+a+Kafka-Exporter+by+Java]
> Kafka is an excellent MQ running on the JVM, but there are no JVM-based exporters. For a 
> better future of the Kafka ecosystem,
> Apache needs a formal exporter like 
> [https://github.com/apache/kafka-exporter].
> I wrote one for work, and hope to give it to Apache. There are a lot of 
> metrics in JMX; they can be configured in the exporter config.
>  
> if you are interested in it , join me!
> if you are interested in it , join me!
> if you are interested in it , join me!
>  
> for some metric list here:
> kafka_AddPartitionsToTxn_50thPercentile
> kafka_AddPartitionsToTxn_95thPercentile
> kafka_AddPartitionsToTxn_999thPercentile
> kafka_AddPartitionsToTxn_99thPercentile
> kafka_AddPartitionsToTxn_Count
> kafka_AddPartitionsToTxn_Max
> kafka_AddPartitionsToTxn_Mean
> kafka_AddPartitionsToTxn_MeanRate
> kafka_AddPartitionsToTxn_Min
> kafka_AddPartitionsToTxn_OneMinuteRate
> kafka_AddPartitionsToTxn_StdDev
> kafka_BrokerTopicMetrics_BytesInPerSec_Count
> kafka_BrokerTopicMetrics_BytesInPerSec_MeanRate
> kafka_BrokerTopicMetrics_BytesInPerSec_OneMinuteRate
> kafka_BrokerTopicMetrics_BytesOutPerSec_Count
> kafka_BrokerTopicMetrics_BytesOutPerSec_MeanRate
> kafka_BrokerTopicMetrics_BytesOutPerSec_OneMinuteRate
> kafka_BrokerTopicMetrics_BytesRejectedPerSec_Count
> kafka_BrokerTopicMetrics_BytesRejectedPerSec_MeanRate
> kafka_BrokerTopicMetrics_BytesRejectedPerSec_OneMinuteRate
> kafka_BrokerTopicMetrics_FailedFetchRequestsPerSec_Count
> kafka_BrokerTopicMetrics_FailedFetchRequestsPerSec_MeanRate
> kafka_BrokerTopicMetrics_FailedFetchRequestsPerSec_OneMinuteRate
> kafka_BrokerTopicMetrics_FailedProduceRequestsPerSec_Count
> kafka_BrokerTopicMetrics_FailedProduceRequestsPerSec_MeanRate
> kafka_BrokerTopicMetrics_FailedProduceRequestsPerSec_OneMinuteRate
> kafka_BrokerTopicMetrics_MessagesInPerSec_Count
> kafka_BrokerTopicMetrics_MessagesInPerSec_MeanRate
> kafka_BrokerTopicMetrics_MessagesInPerSec_OneMinuteRate
> kafka_BrokerTopicMetrics_ProduceMessageConversionsPerSec_Count
> kafka_BrokerTopicMetrics_ProduceMessageConversionsPerSec_MeanRate
> kafka_BrokerTopicMetrics_ProduceMessageConversionsPerSec_OneMinuteRate
> kafka_BrokerTopicMetrics_ReplicationBytesInPerSec_Count
> kafka_BrokerTopicMetrics_ReplicationBytesInPerSec_MeanRate
> kafka_BrokerTopicMetrics_ReplicationBytesInPerSec_OneMinuteRate
> kafka_BrokerTopicMetrics_ReplicationBytesOutPerSec_Count
> kafka_BrokerTopicMetrics_ReplicationBytesOutPerSec_MeanRate
> kafka_BrokerTopicMetrics_ReplicationBytesOutPerSec_OneMinuteRate
> kafka_BrokerTopicMetrics_TotalFetchRequestsPerSec_Count
> kafka_BrokerTopicMetrics_TotalFetchRequestsPerSec_MeanRate
> kafka_BrokerTopicMetrics_TotalFetchRequestsPerSec_OneMinuteRate
> kafka_BrokerTopicMetrics_TotalProduceRequestsPerSec_Count
> kafka_BrokerTopicMetrics_TotalProduceRequestsPerSec_MeanRate
> kafka_BrokerTopicMetrics_TotalProduceRequestsPerSec_OneMinuteRate
> kafka_BytesInPerSec_Count
> kafka_BytesInPerSec_FifteenMinuteRate
> kafka_BytesInPerSec_FiveMinuteRate
> kafka_BytesInPerSec_MeanRate
> kafka_BytesInPerSec_OneMinuteRate
> kafka_BytesOutPerSec_Count
> kafka_BytesOutPerSec_FifteenMinuteRate
> kafka_BytesOutPerSec_FiveMinuteRate
> kafka_BytesOutPerSec_MeanRate
> kafka_BytesOutPerSec_OneMinuteRate
> kafka_BytesRejectedPerSec_Count
> kafka_BytesRejectedPerSec_FifteenMinuteRate
> kafka_BytesRejectedPerSec_FiveMinuteRate
> kafka_BytesRejectedPerSec_MeanRate
> kafka_BytesRejectedPerSec_OneMinuteRate
> kafka_CreatePartitions_50thPercentile
> kafka_CreatePartitions_95thPercentile
> kafka_CreatePartitions_999thPercentile
> kafka_CreatePartitions_99thPercentile
> kafka_CreatePartitions_Count
> kafka_CreatePartitions_Max
> kafka_CreatePartitions_Mean
> 

[jira] [Commented] (KAFKA-9813) __consumer_offsets loaded cost very long time

2020-04-08 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17077931#comment-17077931
 ] 

Jordan Moore commented on KAFKA-9813:
-

Also, if you could [dump the log 
segments|https://cwiki.apache.org/confluence/display/KAFKA/System+Tools#SystemTools-DumpLogSegment], 
that would be appreciated.

> __consumer_offsets loaded cost very long time
> -
>
> Key: KAFKA-9813
> URL: https://issues.apache.org/jira/browse/KAFKA-9813
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.1.1
>Reporter: leibo
>Priority: Major
> Attachments: 2020-04-03_163556.png
>
>
> __consumer_offsets takes a very long time to load.
>  
> After I restarted Kafka (3-node HA), one __consumer_offsets partition took 41256909 
> ms (about 11 hours) to load, and the __consumer_offsets partitions scheduled to load after 
> __consumer_offsets-14 could not be loaded, so many consumers could not 
> commit offsets or consume.
> Restart time: 2020-04-02 19:06
> __consumer_offsets-14 loaded time: 2020-04-03 06:32 (41256909 ms)
> Other info: 
>     1. There are 72408 consumer groups in the cluster, and most of them 
> (70498) are empty.
>     2. No exceptions could be found in server.log.
>     3. Disk I/O, CPU, and memory load are all very low.
>   



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9813) __consumer_offsets loaded cost very long time

2020-04-08 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17077930#comment-17077930
 ] 

Jordan Moore commented on KAFKA-9813:
-

[~lbdai3190]

Thanks, but that wasn't what I was looking for. Please run
{code}du{code}
or
{code}ls -lh{code}
on the directories in
{code}kafka.log.dirs{code}
and share the output.
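
For example (a sketch only - the paths below are placeholders for whatever 
kafka.log.dirs resolves to on your brokers):
{code}
# Placeholder paths; point these at your actual log directories
du -sh /var/lib/kafka/data/__consumer_offsets-*
ls -lh /var/lib/kafka/data/__consumer_offsets-14
{code}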

> __consumer_offsets loaded cost very long time
> -
>
> Key: KAFKA-9813
> URL: https://issues.apache.org/jira/browse/KAFKA-9813
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.1.1
>Reporter: leibo
>Priority: Major
> Attachments: 2020-04-03_163556.png
>
>
> __consumer_offsets takes a very long time to load.
>  
> After I restarted Kafka (3-node HA), one __consumer_offsets partition took 41256909 
> ms (about 11 hours) to load, and the __consumer_offsets partitions scheduled to load after 
> __consumer_offsets-14 could not be loaded, so many consumers could not 
> commit offsets or consume.
> Restart time: 2020-04-02 19:06
> __consumer_offsets-14 loaded time: 2020-04-03 06:32 (41256909 ms)
> Other info: 
>     1. There are 72408 consumer groups in the cluster, and most of them 
> (70498) are empty.
>     2. No exceptions could be found in server.log.
>     3. Disk I/O, CPU, and memory load are all very low.
>   



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9829) Kafka brokers are unregistered on Zookeeper node replacement

2020-04-07 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17077839#comment-17077839
 ] 

Jordan Moore commented on KAFKA-9829:
-

I feel like autoscaling Zookeeper or Kafka is a bad idea.

A 3-node ZooKeeper ensemble needs a quorum (2 of 3) healthy at all times to be 
considered "stable", so you can only afford to lose one node for fault tolerance.

Similarly, unless you have some external tooling around partition reassignment, 
horizontal scaling of brokers isn't really possible; adding a broker does not move 
any existing partitions onto it.
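
To make that concrete, here is a sketch of the kind of external step that is still 
required after adding brokers (hostnames and broker ids below are placeholders):
{code}
# Generate a reassignment plan to move existing partitions onto the new brokers
./kafka-reassign-partitions.sh --zookeeper zk1:2181 \
  --topics-to-move-json-file topics-to-move.json --broker-list "1,2,3,4" --generate
{code}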

> Kafka brokers are unregistered on Zookeeper node replacement
> 
>
> Key: KAFKA-9829
> URL: https://issues.apache.org/jira/browse/KAFKA-9829
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.1
>Reporter: Pradeep
>Priority: Major
>
> We have a Kafka cluster with 3 nodes connected to a Zookeeper (3.4.14) 
> cluster of 3 nodes in AWS. We make use of the auto-scaling group to provision 
> nodes upon failures. We are seeing an issue where the Kafka brokers are 
> getting un-registered when all the Zookeeper nodes are replaced one after the 
> other. Every Zookeeper node is terminated from AWS console and we wait for a 
> replacement node to be provisioned with Zookeeper initialized before 
> terminating the other node.
> On every Zookeeper node replacement, the /broker/ids path show all the Kafka 
> broker ids in the cluster. But only on the final Zookeeper node replacement, 
> the content in /broker/ids become empty. Because of this issue we are not 
> able to create any new topic or do any other operations.
> We are seeing below logs in one of the replaced Zookeeper nodes when all of 
> the original nodes are replaced.
> {{2020-03-26 20:29:20,303 [myid:3] - INFO 
> [[SessionTracker:ZooKeeperServer@355|sessiontracker:ZooKeeperServer@355]] - 
> Expiring session 0x10003b973b50016, timeout of 6000ms exceeded}}
> {{2020-03-26 20:29:20,303 [myid:3] - INFO 
> [[SessionTracker:ZooKeeperServer@355|sessiontracker:ZooKeeperServer@355]] - 
> Expiring session 0x10003b973b5000e, timeout of 6000ms exceeded}}
> {{2020-03-26 20:29:20,303 [myid:3] - INFO 
> [[SessionTracker:ZooKeeperServer@355|sessiontracker:ZooKeeperServer@355]] - 
> Expiring session 0x30003a126690002, timeout of 6000ms exceeded}}
> {{2020-03-26 20:29:20,307 [myid:3] - DEBUG [CommitProcessor:3:DataTree@893] - 
> Deleting ephemeral node /brokers/ids/1002 for session 0x10003b973b50016}}
> {{2020-03-26 20:29:20,307 [myid:3] - DEBUG [CommitProcessor:3:DataTree@893] - 
> Deleting ephemeral node /brokers/ids/1003 for session 0x10003b973b5000e}}
> {{2020-03-26 20:29:20,307 [myid:3] - DEBUG [CommitProcessor:3:DataTree@893] - 
> Deleting ephemeral node /controller for session 0x30003a126690002}}
> {{2020-03-26 20:29:20,307 [myid:3] - DEBUG [CommitProcessor:3:DataTree@893] - 
> Deleting ephemeral node /brokers/ids/1001 for session 0x30003a126690002}}
>  
> I am not sure if the issue is related to KAFKA-5473.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9813) __consumer_offsets loaded cost very long time

2020-04-07 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17077837#comment-17077837
 ] 

Jordan Moore commented on KAFKA-9813:
-

How large is each segment file on disk? How large is each partition?

> __consumer_offsets loaded cost very long time
> -
>
> Key: KAFKA-9813
> URL: https://issues.apache.org/jira/browse/KAFKA-9813
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.1.1
>Reporter: leibo
>Priority: Major
> Attachments: 2020-04-03_163556.png
>
>
> __consumer_offsets takes a very long time to load.
>  
> After I restarted Kafka (3-node HA), one __consumer_offsets partition took 41256909 
> ms (about 11 hours) to load, and the __consumer_offsets partitions scheduled to load after 
> __consumer_offsets-14 could not be loaded, so many consumers could not 
> commit offsets or consume.
> Restart time: 2020-04-02 19:06
> __consumer_offsets-14 loaded time: 2020-04-03 06:32 (41256909 ms)
> Other info: 
>     1. There are 72408 consumer groups in the cluster, and most of them 
> (70498) are empty.
>     2. No exceptions could be found in server.log.
>     3. Disk I/O, CPU, and memory load are all very low.
>   



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9805) Running MirrorMaker in a Connect cluster,but the task not running

2020-04-07 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17077836#comment-17077836
 ] 

Jordan Moore commented on KAFKA-9805:
-

Please share your worker configurations. I suspect you have misconfigured the 
REST listeners and advertised host names.
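
For reference, these are roughly the worker settings I would check first; the values 
below are placeholders, not recommendations:
{code}
# connect-distributed.properties (illustrative values only)
listeners=http://0.0.0.0:8083
rest.advertised.host.name=worker-1.example.com
rest.advertised.port=8083
group.id=mm2-connect-cluster
{code}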

> Running MirrorMaker in a Connect cluster,but the task not running
> -
>
> Key: KAFKA-9805
> URL: https://issues.apache.org/jira/browse/KAFKA-9805
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.4.1
> Environment: linux
>Reporter: ZHAO GH
>Priority: Major
>   Original Estimate: 5h
>  Remaining Estimate: 5h
>
> When I am running MirrorMaker in a Connect cluster, sometimes the tasks run, but 
> sometimes the tasks cannot be assigned.
> I posted the connector config to the Connect cluster; here is my config:
> http://99.12.98.33:8083/connectors
> {
>   "name": "kafka->kafka241-3",
>   "config": {
>     "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
>     "topics": "MM2-3",
>     "key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
>     "value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
>     "tasks.max": 8,
>     "sasl.mechanism": "PLAIN",
>     "security.protocol": "SASL_PLAINTEXT",
>     "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin\";",
>     "source.cluster.alias": "kafka",
>     "source.cluster.bootstrap.servers": "55.13.104.70:9092,55.13.104.74:9092,55.13.104.126:9092",
>     "source.admin.sasl.mechanism": "PLAIN",
>     "source.admin.security.protocol": "SASL_PLAINTEXT",
>     "source.admin.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin\";",
>     "target.cluster.alias": "kafka241",
>     "target.cluster.bootstrap.servers": "55.14.111.22:9092,55.14.111.23:9092,55.14.111.25:9092",
>     "target.admin.sasl.mechanism": "PLAIN",
>     "target.admin.security.protocol": "SASL_PLAINTEXT",
>     "target.admin.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin\";",
>     "producer.sasl.mechanism": "PLAIN",
>     "producer.security.protocol": "SASL_PLAINTEXT",
>     "producer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin\";",
>     "consumer.sasl.mechanism": "PLAIN",
>     "consumer.security.protocol": "SASL_PLAINTEXT",
>     "consumer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin\";",
>     "consumer.group.id": "mm2-1"
>   }
> }
>  
> But when I get the connector status, I find no tasks running:
> http://99.12.98.33:8083/connectors/kafka->kafka241-3/status
> {
>  "name": "kafka->kafka241-3",
>  "connector": {
>  "state": "RUNNING",
>  "worker_id": "99.12.98.34:8083"
>  },
>  "tasks": [],
>  "type": "source"
> }
>  
> But sometimes the tasks do run successfully:
> http://99.12.98.33:8083/connectors/kafka->kafka241-1/status
> {
>  "name": "kafka->kafka241-1",
>  "connector": {
>  "state": "RUNNING",
>  "worker_id": "99.12.98.34:8083"
>  },
>  "tasks": [
>  {
>  "id": 0,
>  "state": "RUNNING",
>  "worker_id": "99.12.98.34:8083"
>  },
>  {
>  "id": 1,
>  "state": "RUNNING",
>  "worker_id": "99.12.98.33:8083"
>  },
>  {
>  "id": 2,
>  "state": "RUNNING",
>  "worker_id": "99.12.98.34:8083"
>  }
>  ],
>  "type": "source"
> }
> Has anybody met this problem? How can it be fixed? Is it a bug?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-9794) JMX metrics produce higher memory and CPU consumption in Kafka docker

2020-04-07 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17077835#comment-17077835
 ] 

Jordan Moore edited comment on KAFKA-9794 at 4/8/20, 5:00 AM:
--

Why use a separate Github with configs when Prometheus already maintains Kafka 
examples?

 

[https://github.com/prometheus/jmx_exporter/tree/master/example_configs]

 

Additionally, I suspect this bug is actually in the JMX Exporter itself, since it is 
actively trying to connect to and process JMX metrics.

Docker shouldn't matter, but you should definitely try the same setup outside of it.


was (Author: cricket007):
Why use a separate Github with configs when Prometheus already maintains Kafka 
examples?

 

[https://github.com/prometheus/jmx_exporter/tree/master/example_configs]

 

Additionally, I suspect this bug is actually in the JMX Exporter and due to the 
fact it is actively trying to connect and process JMX metrics

 

> JMX metrics produce higher memory and CPU consumption in Kafka docker
> -
>
> Key: KAFKA-9794
> URL: https://issues.apache.org/jira/browse/KAFKA-9794
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.2.0
> Environment: kafka version: kafka_2.12-2.2.0.tgz using docker
>Reporter: Hariprasad
>Priority: Major
>  Labels: performance
>
> We have implemented the Kafka metrics using this github: 
> [https://github.com/oded-dd/prometheus-jmx-kafka]
> we have enabled the metrics using KAFKA_OPTS. export 
> KAFKA_OPTS='-javaagent:/etc/kafka/jmx_prometheus_javaagent-0.3.1.jar=9097:/etc/kafka/kafka-jmx-metrics.yaml'
> After starting Kafka with this, we have seen CPU and memory usage shoot up in 
> the Docker container.
> CPU usage sometimes goes as high as 300%; we are running with 4 CPUs.
> Kindly suggest a solution for how to fix this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9794) JMX metrics produce higher memory and CPU consumption in Kafka docker

2020-04-07 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17077835#comment-17077835
 ] 

Jordan Moore commented on KAFKA-9794:
-

Why use a separate Github with configs when Prometheus already maintains Kafka 
examples?

 

[https://github.com/prometheus/jmx_exporter/tree/master/example_configs]

 

Additionally, I suspect this bug is actually in the JMX Exporter and due to the 
fact it is actively trying to connect and process JMX metrics

 

> JMX metrics produce higher memory and CPU consumption in Kafka docker
> -
>
> Key: KAFKA-9794
> URL: https://issues.apache.org/jira/browse/KAFKA-9794
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.2.0
> Environment: kafka version: kafka_2.12-2.2.0.tgz using docker
>Reporter: Hariprasad
>Priority: Major
>  Labels: performance
>
> We have implemented the Kafka metrics using this github: 
> [https://github.com/oded-dd/prometheus-jmx-kafka]
> we have enabled the metrics using KAFKA_OPTS. export 
> KAFKA_OPTS='-javaagent:/etc/kafka/jmx_prometheus_javaagent-0.3.1.jar=9097:/etc/kafka/kafka-jmx-metrics.yaml'
> After starting Kafka with this, we have seen CPU and memory usage shoot up in 
> the Docker container.
> CPU usage sometimes goes as high as 300%; we are running with 4 CPUs.
> Kindly suggest a solution for how to fix this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9787) The single node broker is not coming up after unclean shutdown

2020-04-07 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17077834#comment-17077834
 ] 

Jordan Moore commented on KAFKA-9787:
-

How did you manage to uncleaning terminate a broker? Did the machine lose power?

> The single node  broker is not coming up after unclean shutdown 
> 
>
> Key: KAFKA-9787
> URL: https://issues.apache.org/jira/browse/KAFKA-9787
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 2.3.0
> Environment: Kafka version 2.3.0
> zookeeper.version=3.4.14
> OS: ubuntu 16.94
> Kernel version - 4.4.0-131-generi
>Reporter: etaven
>Priority: Major
> Fix For: 2.3.0
>
>
> After unclean shutdown , the kakfa  broker single node cluster is not coming 
> up. The logs are shown below. Requesting you to explain the reason for this 
> behaviour 
> ___
> [2020-02-12 12:43:13,532] INFO [ProducerStateManager 
> partition=__consumer_offsets-23] Loading producer state from snapshot file 
> '/bitnami/kafka/data/__consumer_offsets-23/0005.snapshot' 
> (kafka.log.ProducerStateManager) [2020-02-12 12:43:13,532] INFO [Log 
> partition=__consumer_offsets-23, dir=/bitnami/kafka/data] Completed load of 
> log with 1 segments, log start offset 0 and log end offset 5 in 60 ms 
> (kafka.log.Log) [2020-02-12 12:43:13,537] INFO Logs loading complete in 1809 
> ms. (kafka.log.LogManager) [2020-02-12 12:43:13,548] INFO Starting log 
> cleanup with a period of 30 ms. (kafka.log.LogManager) [2020-02-12 
> 12:43:13,549] INFO Starting log flusher with a default period of 
> 9223372036854775807 ms. (kafka.log.LogManager) [2020-02-12 12:43:13,912] INFO 
> Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor) 
> [2020-02-12 12:43:13,952] INFO [SocketServer brokerId=1001] Created 
> data-plane acceptor and processors for endpoint : 
> EndPoint(null,9092,ListenerName(PLAINTEXT),PLAINTEXT) 
> (kafka.network.SocketServer) [2020-02-12 12:43:13,954] INFO [SocketServer 
> brokerId=1001] Started 1 acceptor threads for data-plane 
> (kafka.network.SocketServer) [2020-02-12 12:43:13,981] INFO 
> [ExpirationReaper-1001-Produce]: Starting 
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2020-02-12 
> 12:43:13,984] INFO [ExpirationReaper-1001-Fetch]: Starting 
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2020-02-12 
> 12:43:13,986] INFO [ExpirationReaper-1001-DeleteRecords]: Starting 
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2020-02-12 
> 12:43:13,988] INFO [ExpirationReaper-1001-ElectPreferredLeader]: Starting 
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2020-02-12 
> 12:43:14,024] ERROR [KafkaServer id=1001] Fatal error during KafkaServer 
> startup. Prepare to shutdown (kafka.server.KafkaServer) 
> java.nio.file.FileSystemException: 
> /bitnami/kafka/data/replication-offset-checkpoint: Input/output error at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:91) at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) at 
> sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
>  at java.nio.file.Files.newByteChannel(Files.java:361) at 
> java.nio.file.Files.createFile(Files.java:632) at 
> kafka.server.checkpoints.CheckpointFile.(CheckpointFile.scala:45) at 
> kafka.server.checkpoints.OffsetCheckpointFile.(OffsetCheckpointFile.scala:56)
>  at kafka.server.ReplicaManager$$anonfun$6.apply(ReplicaManager.scala:191) at 
> kafka.server.ReplicaManager$$anonfun$6.apply(ReplicaManager.scala:190) at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> kafka.server.ReplicaManager.(ReplicaManager.scala:190) at 
> kafka.server.ReplicaManager.(ReplicaManager.scala:165) at 
> kafka.server.KafkaServer.createReplicaManager(KafkaServer.scala:356) at 
> kafka.server.KafkaServer.startup(KafkaServer.scala:258) at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38) at 
> kafka.Kafka$.main(Kafka.scala:84) at kafka.Kafka.main(Kafka.scala) 
> [2020-02-12 12:43:14,027] INFO [KafkaServer id=1001] shutting down 
> (kafka.server.KafkaServer) [2020-02-12 12:43:14,028] INFO [SocketServer 
> brokerId=1001] Stopping socket server request 

[jira] [Comment Edited] (KAFKA-9787) The single node broker is not coming up after unclean shutdown

2020-04-07 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17077834#comment-17077834
 ] 

Jordan Moore edited comment on KAFKA-9787 at 4/8/20, 4:57 AM:
--

How did you manage to uncleanly terminate a broker? Did the machine lose power?


was (Author: cricket007):
How did you manage to uncleaning terminate a broker? Did the machine lose power?

> The single node  broker is not coming up after unclean shutdown 
> 
>
> Key: KAFKA-9787
> URL: https://issues.apache.org/jira/browse/KAFKA-9787
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 2.3.0
> Environment: Kafka version 2.3.0
> zookeeper.version=3.4.14
> OS: ubuntu 16.94
> Kernel version - 4.4.0-131-generi
>Reporter: etaven
>Priority: Major
> Fix For: 2.3.0
>
>
> After unclean shutdown , the kakfa  broker single node cluster is not coming 
> up. The logs are shown below. Requesting you to explain the reason for this 
> behaviour 
> ___
> [2020-02-12 12:43:13,532] INFO [ProducerStateManager 
> partition=__consumer_offsets-23] Loading producer state from snapshot file 
> '/bitnami/kafka/data/__consumer_offsets-23/0005.snapshot' 
> (kafka.log.ProducerStateManager) [2020-02-12 12:43:13,532] INFO [Log 
> partition=__consumer_offsets-23, dir=/bitnami/kafka/data] Completed load of 
> log with 1 segments, log start offset 0 and log end offset 5 in 60 ms 
> (kafka.log.Log) [2020-02-12 12:43:13,537] INFO Logs loading complete in 1809 
> ms. (kafka.log.LogManager) [2020-02-12 12:43:13,548] INFO Starting log 
> cleanup with a period of 30 ms. (kafka.log.LogManager) [2020-02-12 
> 12:43:13,549] INFO Starting log flusher with a default period of 
> 9223372036854775807 ms. (kafka.log.LogManager) [2020-02-12 12:43:13,912] INFO 
> Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor) 
> [2020-02-12 12:43:13,952] INFO [SocketServer brokerId=1001] Created 
> data-plane acceptor and processors for endpoint : 
> EndPoint(null,9092,ListenerName(PLAINTEXT),PLAINTEXT) 
> (kafka.network.SocketServer) [2020-02-12 12:43:13,954] INFO [SocketServer 
> brokerId=1001] Started 1 acceptor threads for data-plane 
> (kafka.network.SocketServer) [2020-02-12 12:43:13,981] INFO 
> [ExpirationReaper-1001-Produce]: Starting 
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2020-02-12 
> 12:43:13,984] INFO [ExpirationReaper-1001-Fetch]: Starting 
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2020-02-12 
> 12:43:13,986] INFO [ExpirationReaper-1001-DeleteRecords]: Starting 
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2020-02-12 
> 12:43:13,988] INFO [ExpirationReaper-1001-ElectPreferredLeader]: Starting 
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2020-02-12 
> 12:43:14,024] ERROR [KafkaServer id=1001] Fatal error during KafkaServer 
> startup. Prepare to shutdown (kafka.server.KafkaServer) 
> java.nio.file.FileSystemException: 
> /bitnami/kafka/data/replication-offset-checkpoint: Input/output error at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:91) at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) at 
> sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
>  at java.nio.file.Files.newByteChannel(Files.java:361) at 
> java.nio.file.Files.createFile(Files.java:632) at 
> kafka.server.checkpoints.CheckpointFile.(CheckpointFile.scala:45) at 
> kafka.server.checkpoints.OffsetCheckpointFile.(OffsetCheckpointFile.scala:56)
>  at kafka.server.ReplicaManager$$anonfun$6.apply(ReplicaManager.scala:191) at 
> kafka.server.ReplicaManager$$anonfun$6.apply(ReplicaManager.scala:190) at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> kafka.server.ReplicaManager.(ReplicaManager.scala:190) at 
> kafka.server.ReplicaManager.(ReplicaManager.scala:165) at 
> kafka.server.KafkaServer.createReplicaManager(KafkaServer.scala:356) at 
> kafka.server.KafkaServer.startup(KafkaServer.scala:258) at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38) at 
> kafka.Kafka$.main(Kafka.scala:84) at kafka.Kafka.main(Kafka.scala) 
> [2020-02-12 12:43:14,027] INFO 

[jira] [Comment Edited] (KAFKA-9782) Kafka Connect InsertField transform - Add the ability to insert event's Key into Value

2020-04-07 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17077833#comment-17077833
 ] 

Jordan Moore edited comment on KAFKA-9782 at 4/8/20, 4:55 AM:
--

This already exists as an external SMT (ignore the fact it mentions S3)

[https://github.com/jcustenborder/kafka-connect-transform-archive]


was (Author: cricket007):
This already exists as an external SMT

https://github.com/jcustenborder/kafka-connect-transform-archive

> Kafka Connect InsertField transform - Add the ability to insert event's Key 
> into Value
> --
>
> Key: KAFKA-9782
> URL: https://issues.apache.org/jira/browse/KAFKA-9782
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Ryan Tomczik
>Priority: Major
>
> I'm using Debezium to pull change data capture events from a Mongo DB and 
> write them to S3 with the Confluent S3 Sink. The problem is Debezium stores 
> the document's key in each event's key and the S3 connector discards this 
> key. I need the ability to insert the key as a new field in the event value. 
> It seems that this would fit in perfectly into the InsertField transform or 
> create a new transform KeyToValue.
> Here is an example of someone else running into this same limitation and 
> creating a custom transform.
> [https://gist.github.com/shashidesai/aaf72489165c6a0fd73a3b51e5a8892a]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9782) Kafka Connect InsertField transform - Add the ability to insert event's Key into Value

2020-04-07 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17077833#comment-17077833
 ] 

Jordan Moore commented on KAFKA-9782:
-

This already exists as an external SMT

https://github.com/jcustenborder/kafka-connect-transform-archive

> Kafka Connect InsertField transform - Add the ability to insert event's Key 
> into Value
> --
>
> Key: KAFKA-9782
> URL: https://issues.apache.org/jira/browse/KAFKA-9782
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Ryan Tomczik
>Priority: Major
>
> I'm using Debezium to pull change data capture events from a Mongo DB and 
> write them to S3 with the Confluent S3 Sink. The problem is Debezium stores 
> the document's key in each event's key and the S3 connector discards this 
> key. I need the ability to insert the key as a new field in the event value. 
> It seems that this would fit in perfectly into the InsertField transform or 
> create a new transform KeyToValue.
> Here is an example of someone else running into this same limitation and 
> creating a custom transform.
> [https://gist.github.com/shashidesai/aaf72489165c6a0fd73a3b51e5a8892a]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9822) java kafka-client use "props.put("retries", "5")" ,why print 2 log ? Should 6 log !

2020-04-07 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17077832#comment-17077832
 ] 

Jordan Moore commented on KAFKA-9822:
-

If you are looking for support, this question would be better suited for 
StackOverflow.

 

Besides, you have not mentioned your OS, Java version, Kafka version, etc.

> java kafka-client use "props.put("retries", "5")" ,why print 2 log ? Should 6 
> log !
> ---
>
> Key: KAFKA-9822
> URL: https://issues.apache.org/jira/browse/KAFKA-9822
> Project: Kafka
>  Issue Type: Test
>Reporter: startjava
>Priority: Major
>
> package test2;
>
> import java.util.Properties;
> import org.apache.kafka.clients.producer.KafkaProducer;
> import org.apache.kafka.clients.producer.Producer;
> import org.apache.kafka.clients.producer.ProducerRecord;
>
> public class ProduceMessage {
>     public static void main(String[] args) {
>         Properties props = new Properties();
>         props.put("bootstrap.servers", "192.168.1.113:9091"); // wrong ip address
>         props.put("acks", "1");
>         props.put("retries", "5");
>         props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
>         props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
>         Producer producer = new KafkaProducer<>(props);
>         for (int i = 0; i < 1; i++) {
>             producer.send(new ProducerRecord("myTopic1", "key" + (i + 1), "value" + (i + 1)));
>         }
>         producer.close();
>     }
> }
>  
> console print result:
> [kafka-producer-network-thread | producer-1] WARN 
> org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] 
> Connection to node -1 (/192.168.1.113:9091) could not be established. Broker 
> may not be available.
>  [kafka-producer-network-thread | producer-1] WARN 
> org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] 
> Bootstrap broker 192.168.1.113:9091 (id: -1 rack: null) disconnected
>  [kafka-producer-network-thread | producer-1] WARN 
> org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] 
> Connection to node -1 (/192.168.1.113:9091) could not be established. Broker 
> may not be available.
>  [kafka-producer-network-thread | producer-1] WARN 
> org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] 
> Bootstrap broker 192.168.1.113:9091 (id: -1 rack: null) disconnected
>  [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer 
> clientId=producer-1] Closing the Kafka producer with timeoutMillis = 
> 9223372036854775807 ms.
>  
>  
> log4j.properties:
> log4j.rootLogger=INFO,console
>  log4j.logger.com.demo.kafka=DEBUG,kafka
>  log4j.appender.kafka=kafka.producer.KafkaLog4jAppender
>  log4j.appender.console=org.apache.log4j.ConsoleAppender
>  log4j.appender.console.target=System.out
>  log4j.appender.console.layout=org.apache.log4j.PatternLayout
>  log4j.appender.console.layout.ConversionPattern=%d [%-5p] [%t] - [%l] %m%n



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9774) Create official Docker image for Kafka Connect

2020-03-29 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070641#comment-17070641
 ] 

Jordan Moore commented on KAFKA-9774:
-

Hi [~ewencp]. Thanks for the response. 

{quote}
it's a significant public support, compatibility, and maintenance commitment 
for the project
{quote}

1. Public support and maintenance can be tracked in JIRA, mailing list, and 
other mediums as currently done, no? 
2. Compatibility is tied to the dependency versions pinned in Gradle. 
The existing Kafka Compatibility Matrix page is good enough for all clients, 
including Connect. If you mean compatibility of various Java base images or 
OSes, then there should be a vote on that, perhaps?

{quote}
what's the commitment to publication during releases
{quote}

Well, the jib-gradle-plugin can be wired in so that the image is published as part 
of a standard AK release; so sure, the release process would gain one new build 
artifact that gets pushed to DockerHub. The RM would maintain a DockerHub credential, 
maybe initialized by a PMC member, I assume. 

{quote}
periodic updates to get base image CVE updates
{quote}

I'd say this would be done per AK release, as needed. Base images I can think 
of would be openjdk , adoptopenjdk, or the new RedHat UBI images. 

{quote}
verification, testing
{quote}

What kinds of testing would be needed outside of the existing Connect test 
framework? A Docker image is just a build artifact like a JAR, no? After JARs 
are built, what extra testing is being done? 



> Create official Docker image for Kafka Connect
> --
>
> Key: KAFKA-9774
> URL: https://issues.apache.org/jira/browse/KAFKA-9774
> Project: Kafka
>  Issue Type: Task
>  Components: build, KafkaConnect, packaging
>Affects Versions: 2.4.1
>Reporter: Jordan Moore
>Priority: Major
>  Labels: build, features
> Attachments: image-2020-03-27-05-04-46-792.png, 
> image-2020-03-27-05-05-59-024.png
>
>
> This is a ticket for creating an *official* apache/kafka-connect Docker 
> image. 
> Does this need a KIP?  -  I don't think so. This would be a new feature, not 
> any API change. 
> Why is this needed?
>  # Kafka Connect is stateless. I believe this is why a Kafka image is not 
> created?
>  # It scales much more easily with Docker and orchestrators. It operates much 
> like any other serverless / "microservice" web application 
>  # People struggle with deploying it because it is packaged _with Kafka_ , 
> which leads some to believe it needs to _*run* with Kafka_ on the same 
> machine. 
> I think there is separate ticket for creating an official Docker image for 
> Kafka but clearly none exist. I reached out to Confluent about this, but 
> heard nothing yet.
> !image-2020-03-27-05-05-59-024.png|width=740,height=196!
>  
> Zookeeper already has one , btw  
> !image-2020-03-27-05-04-46-792.png|width=739,height=288!
> *References*: 
> [Docs for Official 
> Images|[https://docs.docker.com/docker-hub/official_images/]]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9774) Create official Docker image for Kafka Connect

2020-03-27 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17069144#comment-17069144
 ] 

Jordan Moore commented on KAFKA-9774:
-

[~rhauch] & [~kkonstantine], let me know your thoughts here. 

Here's the current design
 # Use Jib to build the image 
 # Follow similar practices as the confluentinc/cp-docker image in that the 
CONNECT_ variables are templated into the worker property file. I'd planned on 
using FreeMarker to do this, but later realized that's probably overkill.
 # Wrap ConnectDistributed with a small class that just initializes that file 
from the environment variables (see the sketch after this list).
 # Maybe add some additional verification around expected variables.
 # Write up some documentation that would be hosted in DockerHub
 # Ping someone in Docker official-images repo to get this put up there.
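
A minimal sketch of the wrapper in step 3, just to make the idea concrete (the class 
name and property-mapping details here are illustrative, not the proposed final design):
{code:java}
import java.io.FileWriter;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.Properties;

// Translates CONNECT_* environment variables into a worker properties file,
// then hands off to the standard Kafka Connect entry point.
public class EnvConnectDistributed {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        for (Map.Entry<String, String> e : System.getenv().entrySet()) {
            if (e.getKey().startsWith("CONNECT_")) {
                // e.g. CONNECT_BOOTSTRAP_SERVERS -> bootstrap.servers
                String key = e.getKey().substring("CONNECT_".length())
                        .toLowerCase().replace('_', '.');
                props.setProperty(key, e.getValue());
            }
        }
        Path workerFile = Files.createTempFile("connect-distributed", ".properties");
        try (Writer w = new FileWriter(workerFile.toFile())) {
            props.store(w, "generated from CONNECT_ environment variables");
        }
        org.apache.kafka.connect.cli.ConnectDistributed.main(new String[]{workerFile.toString()});
    }
}
{code}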

Yes, right now the above repo is built with Maven and is just a branch of some 
Kafka Streams work I did before. That is all fixable, of course, no problem.

> Create official Docker image for Kafka Connect
> --
>
> Key: KAFKA-9774
> URL: https://issues.apache.org/jira/browse/KAFKA-9774
> Project: Kafka
>  Issue Type: Task
>  Components: build, KafkaConnect, packaging
>Affects Versions: 2.4.1
>Reporter: Jordan Moore
>Priority: Major
>  Labels: build, features
> Attachments: image-2020-03-27-05-04-46-792.png, 
> image-2020-03-27-05-05-59-024.png
>
>
> This is a ticket for creating an *official* apache/kafka-connect Docker 
> image. 
> Does this need a KIP?  -  I don't think so. This would be a new feature, not 
> any API change. 
> Why is this needed?
>  # Kafka Connect is stateless. I believe this is why a Kafka image is not 
> created?
>  # It scales much more easily with Docker and orchestrators. It operates much 
> like any other serverless / "microservice" web application 
>  # People struggle with deploying it because it is packaged _with Kafka_ , 
> which leads some to believe it needs to _*run* with Kafka_ on the same 
> machine. 
> I think there is separate ticket for creating an official Docker image for 
> Kafka but clearly none exist. I reached out to Confluent about this, but 
> heard nothing yet.
> !image-2020-03-27-05-05-59-024.png|width=740,height=196!
>  
> Zookeeper already has one , btw  
> !image-2020-03-27-05-04-46-792.png|width=739,height=288!
> *References*: 
> [Docs for Official 
> Images|[https://docs.docker.com/docker-hub/official_images/]]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-1206) allow Kafka to start from a resource negotiator system

2020-03-27 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-1206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17068526#comment-17068526
 ] 

Jordan Moore commented on KAFKA-1206:
-

How relevant is this in light of Mesos or Kubernetes?

> allow Kafka to start from a resource negotiator system
> --
>
> Key: KAFKA-1206
> URL: https://issues.apache.org/jira/browse/KAFKA-1206
> Project: Kafka
>  Issue Type: Bug
>  Components: packaging
>Reporter: Joe Stein
>Priority: Major
>  Labels: mesos
> Attachments: KAFKA-1206_2014-01-16_00:40:30.patch
>
>
> We need a generic implementation to hold the property information for 
> brokers, producers and consumers.  We want the resource negotiator to store 
> this information however it wants and give it respond with a 
> java.util.Properties.  This can get used then in the Kafka.scala as 
> serverConfigs for the KafkaServerStartable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10) Kafka deployment on EC2 should be WHIRR based, instead of current contrib/deploy code based solution

2020-03-27 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17068525#comment-17068525
 ] 

Jordan Moore commented on KAFKA-10:
---

This should be closed. Whirr is in the Apache Attic. 

> Kafka deployment on EC2 should be WHIRR based, instead of current 
> contrib/deploy code based solution
> 
>
> Key: KAFKA-10
> URL: https://issues.apache.org/jira/browse/KAFKA-10
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Affects Versions: 0.6
>Priority: Major
>
> Apache Whirr is a a set of libraries for running cloud services 
> http://incubator.apache.org/whirr/ 
> It is desirable that Kafka's integration with EC2 be Whirr based, rather than 
> the code based solution we currently have in contrib/deploy. 
> The code in contrib/deploy will be deleted in 0.6 release



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-5306) Official init.d scripts

2020-03-27 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-5306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17068521#comment-17068521
 ] 

Jordan Moore commented on KAFKA-5306:
-

I feel like systemd has won that battle for the most part going forward.

> Official init.d scripts
> ---
>
> Key: KAFKA-5306
> URL: https://issues.apache.org/jira/browse/KAFKA-5306
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Affects Versions: 0.10.2.1
> Environment: Ubuntu 14.04
>Reporter: Shahar
>Priority: Minor
>
> It would be great to have an officially supported init.d script for starting 
> and stopping Kafka as a service.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9774) Create official Docker image for Kafka Connect

2020-03-27 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17068519#comment-17068519
 ] 

Jordan Moore commented on KAFKA-9774:
-

Happy to take this on. And already working on a POC here (ignore the README and 
the project name, I started working on only the pom and the main class in this 
branch) - 
[https://github.com/cricket007/kafka-streams-jib-example/tree/feature/connect-distributed]

> Create official Docker image for Kafka Connect
> --
>
> Key: KAFKA-9774
> URL: https://issues.apache.org/jira/browse/KAFKA-9774
> Project: Kafka
>  Issue Type: Task
>  Components: build, KafkaConnect, packaging
>Affects Versions: 2.4.1
>Reporter: Jordan Moore
>Priority: Major
>  Labels: build, features
> Attachments: image-2020-03-27-05-04-46-792.png, 
> image-2020-03-27-05-05-59-024.png
>
>
> This is a ticket for creating an *official* apache/kafka-connect Docker 
> image. 
> Does this need a KIP?  -  I don't think so. This would be a new feature, not 
> any API change. 
> Why is this needed?
>  # Kafka Connect is stateless. I believe this is why a Kafka image is not 
> created?
>  # It scales much more easily with Docker and orchestrators. It operates much 
> like any other serverless / "microservice" web application 
>  # People struggle with deploying it because it is packaged _with Kafka_ , 
> which leads some to believe it needs to _*run* with Kafka_ on the same 
> machine. 
> I think there is separate ticket for creating an official Docker image for 
> Kafka but clearly none exist. I reached out to Confluent about this, but 
> heard nothing yet.
> !image-2020-03-27-05-05-59-024.png|width=740,height=196!
>  
> Zookeeper already has one , btw  
> !image-2020-03-27-05-04-46-792.png|width=739,height=288!
> *References*: 
> [Docs for Official 
> Images|[https://docs.docker.com/docker-hub/official_images/]]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9774) Create official Docker image for Kafka Connect

2020-03-27 Thread Jordan Moore (Jira)
Jordan Moore created KAFKA-9774:
---

 Summary: Create official Docker image for Kafka Connect
 Key: KAFKA-9774
 URL: https://issues.apache.org/jira/browse/KAFKA-9774
 Project: Kafka
  Issue Type: Task
  Components: build, KafkaConnect, packaging
Affects Versions: 2.4.1
Reporter: Jordan Moore
 Attachments: image-2020-03-27-05-04-46-792.png, 
image-2020-03-27-05-05-59-024.png

This is a ticket for creating an *official* apache/kafka-connect Docker image. 

Does this need a KIP?  -  I don't think so. This would be a new feature, not 
any API change. 

Why is this needed?
 # Kafka Connect is stateless. I believe this is why a Kafka image is not 
created?
 # It scales much more easily with Docker and orchestrators. It operates much 
like any other serverless / "microservice" web application 
 # People struggle with deploying it because it is packaged _with Kafka_ , 
which leads some to believe it needs to _*run* with Kafka_ on the same machine. 

I think there is separate ticket for creating an official Docker image for 
Kafka but clearly none exist. I reached out to Confluent about this, but heard 
nothing yet.

!image-2020-03-27-05-05-59-024.png|width=740,height=196!

 

Zookeeper already has one , btw  
!image-2020-03-27-05-04-46-792.png|width=739,height=288!

*References*: 

[Docs for Official Images|[https://docs.docker.com/docker-hub/official_images/]]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (KAFKA-8310) CommandLineUtils.parseKeyValueArgs --property flag too greedy when splitting key=value pairs

2019-05-01 Thread Jordan Moore (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jordan Moore closed KAFKA-8310.
---

> CommandLineUtils.parseKeyValueArgs --property flag too greedy when splitting 
> key=value pairs
> 
>
> Key: KAFKA-8310
> URL: https://issues.apache.org/jira/browse/KAFKA-8310
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 1.1.0
>Reporter: Jordan Moore
>Priority: Trivial
> Fix For: 2.1.0
>
>
> The CommandLineUtils split apart all equal signs in the key-value pair rather 
> than just the first, therefore making something like this fail.
> {code:java}
> --property value.schema='{"name":"foo", "type":"string", "doc": 
> "key=value"}'{code}
> as it tries to assign these two properties separately
> {code:java}
> {"name":"foo", "type":"string", "doc": "key{code}
> {code:java}
> value"}{code}
> Issue here
> [https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/utils/CommandLineUtils.scala#L99]
> Ideally, the code should do something like indexOf("="), to get the first 
> one, then substring from there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8310) CommandLineUtils.parseKeyValueArgs --property flag too greedy when splitting key=value pairs

2019-05-01 Thread Jordan Moore (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jordan Moore updated KAFKA-8310:

Affects Version/s: (was: 2.2.0)
   1.1.0

> CommandLineUtils.parseKeyValueArgs --property flag too greedy when splitting 
> key=value pairs
> 
>
> Key: KAFKA-8310
> URL: https://issues.apache.org/jira/browse/KAFKA-8310
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 1.1.0
>Reporter: Jordan Moore
>Priority: Trivial
> Fix For: 2.1.0
>
>
> The CommandLineUtils split apart all equal signs in the key-value pair rather 
> than just the first, therefore making something like this fail.
> {code:java}
> --property value.schema='{"name":"foo", "type":"string", "doc": 
> "key=value"}'{code}
> as it tries to assign these two properties separately
> {code:java}
> {"name":"foo", "type":"string", "doc": "key{code}
> {code:java}
> value"}{code}
> Issue here
> [https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/utils/CommandLineUtils.scala#L99]
> Ideally, the code should do something like indexOf("="), to get the first 
> one, then substring from there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8310) CommandLineUtils.parseKeyValueArgs --property flag too greedy when splitting key=value pairs

2019-05-01 Thread Jordan Moore (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jordan Moore resolved KAFKA-8310.
-
   Resolution: Duplicate
Fix Version/s: 2.1.0

Whoops. Need to upgrade my CLI clients.

Duplicates KAFKA-7388

> CommandLineUtils.parseKeyValueArgs --property flag too greedy when splitting 
> key=value pairs
> 
>
> Key: KAFKA-8310
> URL: https://issues.apache.org/jira/browse/KAFKA-8310
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 2.2.0
>Reporter: Jordan Moore
>Priority: Trivial
> Fix For: 2.1.0
>
>
> The CommandLineUtils split apart all equal signs in the key-value pair rather 
> than just the first, therefore making something like this fail.
> {code:java}
> --property value.schema='{"name":"foo", "type":"string", "doc": 
> "key=value"}'{code}
> as it tries to assign these two properties separately
> {code:java}
> {"name":"foo", "type":"string", "doc": "key{code}
> {code:java}
> value"}{code}
> Issue here
> [https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/utils/CommandLineUtils.scala#L99]
> Ideally, the code should do something like indexOf("="), to get the first 
> one, then substring from there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8310) CommandLineUtils.parseKeyValueArgs --property flag too greedy when splitting key=value pairs

2019-05-01 Thread Jordan Moore (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jordan Moore updated KAFKA-8310:

Affects Version/s: 2.2.0

> CommandLineUtils.parseKeyValueArgs --property flag too greedy when splitting 
> key=value pairs
> 
>
> Key: KAFKA-8310
> URL: https://issues.apache.org/jira/browse/KAFKA-8310
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 2.2.0
>Reporter: Jordan Moore
>Priority: Trivial
>
> The CommandLineUtils split apart all equal signs in the key-value pair rather 
> than just the first, therefore making something like this fail.
> {code:java}
> --property value.schema='{"name":"foo", "type":"string", "doc": 
> "key=value"}'{code}
> as it tries to assign these two properties separately
> {code:java}
> {"name":"foo", "type":"string", "doc": "key{code}
> {code:java}
> value"}{code}
> Issue here
> [https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/utils/CommandLineUtils.scala#L99]
> Ideally, the code should do something like indexOf("="), to get the first 
> one, then substring from there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8310) CommandLineUtils.parseKeyValueArgs --property flag too greedy when splitting key=value pairs

2019-05-01 Thread Jordan Moore (JIRA)
Jordan Moore created KAFKA-8310:
---

 Summary: CommandLineUtils.parseKeyValueArgs --property flag too 
greedy when splitting key=value pairs
 Key: KAFKA-8310
 URL: https://issues.apache.org/jira/browse/KAFKA-8310
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Reporter: Jordan Moore


The CommandLineUtils split apart all equal signs in the key-value pair rather 
than just the first, therefore making something like this fail.
{code:java}
--property value.schema='{"name":"foo", "type":"string", "doc": 
"key=value"}'{code}
as it tries to assign these two properties separately
{code:java}
{"name":"foo", "type":"string", "doc": "key{code}
{code:java}
value"}{code}
Issue here

[https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/utils/CommandLineUtils.scala#L99]

Ideally, the code should do something like indexOf("="), to get the first one, 
then substring from there.
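
A quick illustration of that suggestion (a sketch only, not the actual patch - the 
real fix would live in the Scala CommandLineUtils; this just shows the splitting logic):
{code:java}
public class FirstEqualsSplit {
    public static void main(String[] args) {
        // Split only on the first '=' so the value may itself contain '=' characters
        String arg = "value.schema={\"name\":\"foo\", \"type\":\"string\", \"doc\": \"key=value\"}";
        int idx = arg.indexOf('=');
        String key = arg.substring(0, idx);
        String value = arg.substring(idx + 1);
        System.out.println(key + " -> " + value);
        // Equivalent one-liner: String[] parts = arg.split("=", 2);
    }
}
{code}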



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8148) Kafka Group deletion

2019-04-29 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16829681#comment-16829681
 ] 

Jordan Moore commented on KAFKA-8148:
-

[~AtulRenapurkar], perhaps you misunderstood the comment. 

You used _--bootstrap-server_, and so the command *does* *not expect* a 
Zookeeper port.

If you had used the _--zookeeper_ option instead (with a Zookeeper port), it 
would attempt to delete consumer groups that have stored offsets in Zookeeper, 
whereas the _--bootstrap-server_ path attempts to delete from the 
___consumer_offsets_ topic within Kafka.

Can you clarify which you need?
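
Something along these lines is what the _--bootstrap-server_ flag expects (broker 
addresses below are placeholders):
{code}
# Note: Kafka broker listeners (typically 9092), not Zookeeper (2181)
./kafka-consumer-groups.sh --bootstrap-server broker1:9092,broker2:9092,broker3:9092 \
  --delete --group groupName
{code}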

> Kafka Group deletion
> 
>
> Key: KAFKA-8148
> URL: https://issues.apache.org/jira/browse/KAFKA-8148
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 2.0.0
>Reporter: Atul T Renapurkar
>Priority: Critical
>
> When I was trying to delete group with below command 
>  
> ./kafka-consumer-groups.sh   --bootstrap-server zookeeper1:2181 
> zookeeper2:2181 zookeeper3:2181 --delete --group groupName
>  
> I am facing 
>  
> Error: Deletion of some consumer groups failed:
> * Group 'groupName' could not be deleted due to: COORDINATOR_NOT_AVAILABLE
> Please feel free If you required any additional information
> Thanks and Regards,
> Atul



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-5983) Cannot mirror Avro-encoded data using the Apache Kafka MirrorMaker

2019-03-14 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793084#comment-16793084
 ] 

Jordan Moore commented on KAFKA-5983:
-

Following up here
{quote}add the CachedSchemaRegistryClient class to the handler, then it won't 
be doing a HTTP call for every message
{quote}
You can see an example of this in my project that is built on the Connect API, 
not MirrorMaker - [https://github.com/cricket007/schema-registry-transfer-smt]

Using Connect, you can target specific topics rather than copying every schema 
with its original ID. I mostly did it in preparation for 
[KIP-382|https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0]

Alternatively, you should set the second Registry as a follower of the first, 
or you will also need to mirror the {{_schemas}} topic using *StringSerializer* 
or *ByteArraySerializer*, not the AvroSerializer (and again, Converter classes 
don't work for MirrorMaker, only Connect).

> Cannot mirror Avro-encoded data using the Apache Kafka MirrorMaker
> --
>
> Key: KAFKA-5983
> URL: https://issues.apache.org/jira/browse/KAFKA-5983
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.11.0.0
> Environment: OS: Linux CentOS 7 and Windows 10
>Reporter: Giulio Vito de Musso
>Priority: Major
>  Labels: windows
>
> I'm installing an Apache Kafka MirrorMaker instance to replicate one cluster 
> data to one another cluster. Both on the source and on the target clusters 
> I'm using the Confluent Avro schema registry and the data is binarized with 
> Avro.
> I'm using the latest released version of Confluent 3.3.0 (kafka 0.11). 
> Moreover, the source broker is on a Windows machine while the target broker 
> is on a Linux machine.
> The two Kafka clusters are independent, thus they have different schema 
> registries.
> This are my configuration files for the MirrroMaker
> {code:title=consumer.properties|borderStyle=solid}
> group.id=test-mirrormaker-group
> bootstrap.servers=host01:9092
> exclude.internal.topics=true
> client.id=mirror_maker_consumer0
> auto.commit.enabled=false
> # Avro schema registry properties
> key.converter=io.confluent.connect.avro.AvroConverter
> key.converter.schema.registry.url=http://host01:8081
> value.converter=io.confluent.connect.avro.AvroConverter
> value.converter.schema.registry.url=http://host01:8081
> internal.key.converter=org.apache.kafka.connect.json.JsonConverter
> internal.value.converter=org.apache.kafka.connect.json.JsonConverter
> internal.key.converter.schemas.enable=false
> internal.value.converter.schemas.enable=false
> {code}
> {code:title=producer.properties|borderStyle=solid}
> bootstrap.servers=host02:9093
> compression.type=none
> acks=1
> client.id=mirror_maker_producer0
> # Avro schema registry properties
> key.converter=io.confluent.connect.avro.AvroConverter
> key.converter.schema.registry.url=http://host02:8081
> value.converter=io.confluent.connect.avro.AvroConverter
> value.converter.schema.registry.url=http://host02:8081
> internal.key.converter=org.apache.kafka.connect.json.JsonConverter
> internal.value.converter=org.apache.kafka.connect.json.JsonConverter
> internal.key.converter.schemas.enable=false
> internal.value.converter.schemas.enable=false
> {code}
> I run the MirrorMaker on the host01 Windows machine with this command
> {code}
> C:\kafka>.\bin\windows\kafka-mirror-maker.bat --consumer.config 
> .\etc\kafka\consumer.properties --producer.config 
> .\etc\kafka\producer.properties --whitelist=MY_TOPIC
> [2017-09-26 10:09:58,555] WARN The configuration 
> 'internal.key.converter.schemas.enable' was supplied but isn't a known 
> config. (org.apache.kafka.clients.producer.ProducerConfig)
> [2017-09-26 10:09:58,555] WARN The configuration 
> 'value.converter.schema.registry.url' was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2017-09-26 10:09:58,571] WARN The configuration 'internal.key.converter' was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2017-09-26 10:09:58,586] WARN The configuration 
> 'internal.value.converter.schemas.enable' was supplied but isn't a known 
> config. (org.apache.kafka.clients.producer.ProducerConfig)
> [2017-09-26 10:09:58,602] WARN The configuration 'internal.value.converter' 
> was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2017-09-26 10:09:58,633] WARN The configuration 'value.converter' was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2017-09-26 10:09:58,649] WARN The configuration 'key.converter' was supplied 
> but isn't a known config. 

[jira] [Commented] (KAFKA-7914) LDAP

2019-02-15 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769766#comment-16769766
 ] 

Jordan Moore commented on KAFKA-7914:
-

I'll assume this is about having an LDAPAuthorizer in AK like Confluent has
https://docs.confluent.io/current/confluent-security-plugins/kafka/introduction.html

> LDAP
> 
>
> Key: KAFKA-7914
> URL: https://issues.apache.org/jira/browse/KAFKA-7914
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chris Bogan
>Priority: Major
>
> Entry



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7815) SourceTask should expose ACK'd offsets, metadata

2019-01-17 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745281#comment-16745281
 ] 

Jordan Moore commented on KAFKA-7815:
-

So, I was thinking about putting something like "recordLogged" into SinkTasks 
as well - a callback that the record reached the sink.

I understand that might need a new KIP, but could the scope here be expanded to 
include that?

One use case: with the HDFS / S3 connectors, when a file is written, I want a 
callback so I can register a Hive Metastore partition.
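
Something shaped roughly like this - purely hypothetical, nothing like it exists in the Connect API today, and the interface name, method name, and parameters are made up here:

{code}
import java.util.Map;
import org.apache.kafka.connect.sink.SinkRecord;

/**
 * Hypothetical callback sketch for the idea above -- NOT part of the Connect
 * API; only illustrates the shape of the contract being suggested.
 */
public interface SinkRecordAckListener {
    /**
     * Would be invoked once a record (or the file containing it) has durably
     * reached the target system, e.g. after the HDFS/S3 connector closes a
     * file, so the task could register a Hive Metastore partition for it.
     */
    void recordLogged(SinkRecord record, Map<String, Object> sinkMetadata);
}
{code}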

> SourceTask should expose ACK'd offsets, metadata
> 
>
> Key: KAFKA-7815
> URL: https://issues.apache.org/jira/browse/KAFKA-7815
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Ryanne Dolan
>Assignee: Ryanne Dolan
>Priority: Minor
> Fix For: 2.2.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Add a new callback method, recordLogged(), to notify SourceTasks when a 
> record is ACK'd by the downstream broker. Include offsets and metadata of 
> ACK'd record.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-6738) Kafka Connect handling of bad data

2019-01-11 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740572#comment-16740572
 ] 

Jordan Moore edited comment on KAFKA-6738 at 1/11/19 5:38 PM:
--

Hi [~wicknicks] / [~rhauch], 

-I can't seem to find documentation on Apache or Confluent site how these 
features can be used, or what classnames are available for the configurations-. 

*Edit*: Nevermind found it. Was looking at the wrong section - 
https://kafka.apache.org/documentation/#sinkconnectconfigs


was (Author: cricket007):
Hi [~wicknicks] / [~rhauch], 

I can't seem to find documentation on Apache or Confluent site how these 
features can be used, or what classnames are available for the configurations. 

> Kafka Connect handling of bad data
> --
>
> Key: KAFKA-6738
> URL: https://issues.apache.org/jira/browse/KAFKA-6738
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 1.1.0
>Reporter: Randall Hauch
>Assignee: Arjun Satish
>Priority: Critical
> Fix For: 2.0.0
>
>
> Kafka Connect connectors and tasks fail when they run into an unexpected 
> situation or error, but the framework should provide more general "bad data 
> handling" options, including (perhaps among others):
> # fail fast, which is what we do today (assuming connector actually fails and 
> doesn't eat errors)
> # retry (possibly with configs to limit)
> # drop data and move on
> # dead letter queue
> This needs to be addressed in a way that handles errors from:
> # The connector itself (e.g. connectivity issues to the other system)
> # Converters/serializers (bad data, unexpected format, etc)
> # SMTs
> # Ideally the framework as well, though we obviously want to fix known bugs 
> anyway



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6738) Kafka Connect handling of bad data

2019-01-11 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740572#comment-16740572
 ] 

Jordan Moore commented on KAFKA-6738:
-

Hi [~wicknicks] / [~rhauch], 

I can't seem to find documentation on Apache or Confluent site how these 
features can be used, or what classnames are available for the configurations. 

> Kafka Connect handling of bad data
> --
>
> Key: KAFKA-6738
> URL: https://issues.apache.org/jira/browse/KAFKA-6738
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 1.1.0
>Reporter: Randall Hauch
>Assignee: Arjun Satish
>Priority: Critical
> Fix For: 2.0.0
>
>
> Kafka Connect connectors and tasks fail when they run into an unexpected 
> situation or error, but the framework should provide more general "bad data 
> handling" options, including (perhaps among others):
> # fail fast, which is what we do today (assuming connector actually fails and 
> doesn't eat errors)
> # retry (possibly with configs to limit)
> # drop data and move on
> # dead letter queue
> This needs to be addressed in a way that handles errors from:
> # The connector itself (e.g. connectivity issues to the other system)
> # Converters/serializers (bad data, unexpected format, etc)
> # SMTs
> # Ideally the framework as well, though we obviously want to fix known bugs 
> anyway



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7581) Issues in building kafka using gradle on a Ubuntu based docker container

2018-12-19 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16725620#comment-16725620
 ] 

Jordan Moore commented on KAFKA-7581:
-

[~songxinlei], in that case, please don't hijack the issue titled "_Issues in 
building kafka using gradle on a Ubuntu based docker container_"

> Issues in building kafka using gradle on a Ubuntu based docker container
> 
>
> Key: KAFKA-7581
> URL: https://issues.apache.org/jira/browse/KAFKA-7581
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1, 2.0.2
> Environment: Ubuntu 16.04.3 LTS
>Reporter: Sarvesh Tamba
>Priority: Blocker
>
> The following issues are seen when kafka is built using gradle on a Ubuntu 
> based docker container:-
> /kafka-gradle/kafka-2.0.0/core/src/main/scala/kafka/coordinator/transaction/TransactionStateManager.scala:177:
>  File name too long
>  This can happen on some encrypted or legacy file systems. Please see SI-3623 
> for more details.
>  .foreach { txnMetadataCacheEntry =>
>  ^
>  56 warnings found
>  one error found
> > Task :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
>  Execution failed for task ':core:compileScala'.
>  > Compilation failed



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6090) Upgrade the Scala recommendation to 2.12

2018-12-19 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16725592#comment-16725592
 ] 

Jordan Moore commented on KAFKA-6090:
-

Resolved in KAFKA-7524

> Upgrade the Scala recommendation to 2.12
> 
>
> Key: KAFKA-6090
> URL: https://issues.apache.org/jira/browse/KAFKA-6090
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Lionel Cons
>Priority: Minor
>
> Currently, the download page contains for the latest Kafka version (0.11.0.1):
> {quote}
> We build for multiple versions of Scala. This only matters if you are using 
> Scala and you want a version built for the same Scala version you use. 
> Otherwise any version should work (2.11 is recommended).
> {quote}
> Scala 2.11 is not supported anymore. Version 2.11.11 (released 6 months ago) 
> indicates:
> {quote}
> The 2.11.11 release concludes the 2.11.x series, with no further releases 
> planned. Please consider upgrading to 2.12!
> {quote}
> So it seems it is time to recommend 2.12 for Kafka usage and (soon) start to 
> build for Scala 2.13...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7581) Issues in building kafka using gradle on a Ubuntu based docker container

2018-12-19 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16725589#comment-16725589
 ] 

Jordan Moore commented on KAFKA-7581:
-

If this problem is with *building Kafka* in a Docker container, then no 
external system is "linked" to the build process. 

That being said, I don't think this is a blocker, and I cannot reproduce it. 
Here is even a Dockerfile that I just made:

{code}
FROM ubuntu:16.04

ARG JAVA_VERSION=8.0.192-zulu
ARG GRADLE_VERSION=4.8.1
ARG KAFKA_VERSION=2.0.0
ARG SCALA_VERSION=2.11

RUN apt-get update && apt-get install -y \
  curl \
  zip \
  unzip \
  && rm -rf /var/lib/apt/lists/*

# Install a JDK and Gradle via SDKMAN
RUN curl -s "https://get.sdkman.io" | bash
RUN ["/bin/bash", "-c", "source /root/.sdkman/bin/sdkman-init.sh; \
  sdk i java $JAVA_VERSION && sdk i gradle $GRADLE_VERSION"]

# Download and unpack the Kafka source release
RUN mkdir /kafka-src \
  && curl https://archive.apache.org/dist/kafka/$KAFKA_VERSION/kafka-$KAFKA_VERSION-src.tgz \
  | tar -xvzC /kafka-src --strip-components=1

WORKDIR /kafka-src
# Build the release tarball, skipping artifact signing
RUN ["/bin/bash", "-c", "source /root/.sdkman/bin/sdkman-init.sh; \
  gradle && ./gradlew -PscalaVersion=$SCALA_VERSION releaseTarGz -x signArchives"]
{code}

And it builds the release tarball, which you could use in a multi-stage build, 
for example

{code}
docker run --rm -ti kafka-7581:latest bash -c "ls -ltr /kafka-src/core/build/distributions/"
total 57888
-rw-r--r-- 1 root root  3343041 Dec 20 04:57 kafka_2.11-2.0.0-site-docs.tgz
-rw-r--r-- 1 root root 55928942 Dec 20 04:57 kafka_2.11-2.0.0.tgz
{code}

> Issues in building kafka using gradle on a Ubuntu based docker container
> 
>
> Key: KAFKA-7581
> URL: https://issues.apache.org/jira/browse/KAFKA-7581
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1, 2.0.2
> Environment: Ubuntu 16.04.3 LTS
>Reporter: Sarvesh Tamba
>Priority: Blocker
>
> The following issues are seen when kafka is built using gradle on a Ubuntu 
> based docker container:-
> /kafka-gradle/kafka-2.0.0/core/src/main/scala/kafka/coordinator/transaction/TransactionStateManager.scala:177:
>  File name too long
>  This can happen on some encrypted or legacy file systems. Please see SI-3623 
> for more details.
>  .foreach { txnMetadataCacheEntry =>
>  ^
>  56 warnings found
>  one error found
> > Task :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
>  Execution failed for task ':core:compileScala'.
>  > Compilation failed



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7581) Issues in building kafka using gradle on a Ubuntu based docker container

2018-12-15 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722349#comment-16722349
 ] 

Jordan Moore commented on KAFKA-7581:
-

I'm not sure I understand the issue here.

Are you able to build other Kafka Docker images? 
 - Confluent- 
[https://github.com/confluentinc/cp-docker-images/blob/5.0.1-post/debian/kafka/Dockerfile]
 - Bitnami - [https://github.com/bitnami/bitnami-docker-kafka]
 - wurstmeister - [https://github.com/wurstmeister/kafka-docker]

Besides, installing Gradle or Git inside a container is considered bad "Docker 
etiquette" 

> Issues in building kafka using gradle on a Ubuntu based docker container
> 
>
> Key: KAFKA-7581
> URL: https://issues.apache.org/jira/browse/KAFKA-7581
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1, 2.0.2
> Environment: Ubuntu 16.04.3 LTS
>Reporter: Sarvesh Tamba
>Priority: Blocker
>
> The following issues are seen when kafka is built using gradle on a Ubuntu 
> based docker container:-
> /kafka-gradle/kafka-2.0.0/core/src/main/scala/kafka/coordinator/transaction/TransactionStateManager.scala:177:
>  File name too long
>  This can happen on some encrypted or legacy file systems. Please see SI-3623 
> for more details.
>  .foreach { txnMetadataCacheEntry =>
>  ^
>  56 warnings found
>  one error found
> > Task :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
>  Execution failed for task ':core:compileScala'.
>  > Compilation failed



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6890) Add connector level configurability for producer/consumer client configs

2018-10-25 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16664027#comment-16664027
 ] 

Jordan Moore commented on KAFKA-6890:
-

[~rhauch], nice to meet you at Kafka Summit.
When we spoke, I mentioned that we currently have one large Connect cluster, 
primarily for S3 Connect, in each of our development environments. For some of 
the 100+ connector configurations we have loaded, the topics being sunk have 
very different throughput, so it would be beneficial to be able to tune at 
least some of the producer/consumer properties per connector.

I know you brought up concerns about connectors being able to set 
"consumer.bootstrap.servers", for example, so perhaps "bootstrap.servers" (and 
other sensitive properties, such as SSL certificates for those servers) could 
be blacklisted from per-connector overrides in some way?
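
For context, this is what the knobs look like today at the worker level (values are only illustrative) - the ask here is to be able to scope them per connector instead:

{code}
# connect-distributed.properties (applies to every connector on the worker)

# applied to the consumer embedded in every *sink* connector
consumer.max.poll.records=500
consumer.fetch.max.bytes=52428800

# applied to the producer embedded in every *source* connector
producer.compression.type=lz4
{code}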

Also, as I mentioned on GitHub, this looks like it might duplicate KAFKA-4159 

> Add connector level configurability for producer/consumer client configs
> 
>
> Key: KAFKA-6890
> URL: https://issues.apache.org/jira/browse/KAFKA-6890
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Allen Tang
>Priority: Minor
>
> Right now, each source connector and sink connector inherit their client 
> configurations from the worker properties. Within the worker properties, all 
> configurations that have a prefix of "producer." or "consumer." are applied 
> to all source connectors and sink connectors respectively.
> We should also provide connector-level overrides whereby connector properties 
> that are prefixed with "producer." and "consumer." are used to feed into the 
> producer and consumer clients embedded within source and sink connectors 
> respectively. The prefixes will be removed via a String#substring() call, and 
> the remainder of the connector property key will be used as the client 
> configuration key. The value is fed directly to the client as the 
> configuration value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6293) Support for Avro formatter in ConsoleConsumer With Confluent Schema Registry

2018-09-15 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16616508#comment-16616508
 ] 

Jordan Moore commented on KAFKA-6293:
-

[~ethiebaut], since the Github issue was closed, should this be closed?

> Support for Avro formatter in ConsoleConsumer With Confluent Schema Registry
> 
>
> Key: KAFKA-6293
> URL: https://issues.apache.org/jira/browse/KAFKA-6293
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Eric Thiebaut-George
>Priority: Major
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Add the ability the display Avro payloads when listening for messages in 
> kafka-console-consumer.sh.
> The proposed PR will display Avro payloads (in JSON) when executed with the 
> following parameters:
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic 
> mytopic --confluent-server localhost:8081 --formatter 
> kafka.tools.AvroMessageFormatter
> PR: https://github.com/apache/kafka/pull/4282



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7249) Provide an official Docker Hub image for Kafka

2018-09-15 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16616401#comment-16616401
 ] 

Jordan Moore commented on KAFKA-7249:
-

Hello [~timhigins], do you have any issues using the confluentinc/cp-kafka and 
confluentinc/cp-zookeeper images? 

Confluent, as the enterprise support company for Kafka, currently maintains 
up-to-date production-ready images that closely follow each Apache release, and 
those images are also supported for Kubernetes deployments from the Confluent 
Helm Charts. The other components of the Confluent Platform are not required. 

I will point out that the current official Zookeeper image on DockerHub is 
maintained by a third-party as well, not part of the Zookeeper project release 
cycle. 

> Provide an official Docker Hub image for Kafka
> --
>
> Key: KAFKA-7249
> URL: https://issues.apache.org/jira/browse/KAFKA-7249
> Project: Kafka
>  Issue Type: New Feature
>  Components: build, documentation, packaging, tools, website
>Affects Versions: 1.0.1, 1.1.0, 1.1.1, 2.0.0
>Reporter: Timothy Higinbottom
>Priority: Major
>  Labels: build, distribution, docker, packaging
>
> It would be great if there was an official Docker Hub image for Kafka, 
> supported by the Kafka community, so we knew that the image was trusted and 
> stable for use in production. Many organizations and teams are now using 
> Docker, Kubernetes, and other container systems that make deployment easier. 
> I think Kafka should move into this space and encourage this as an easy way 
> for beginners to get started, but also as a portable and effective way to 
> deploy Kafka in production. 
>  
> Currently there are only Kafka images maintained by third parties, which 
> seems like a shame for a big Apache project like Kafka. Hope you all consider 
> this.
>  
> Thanks,
> Tim



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7315) Streams serialization docs contain a broken link for Avro

2018-09-12 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16613030#comment-16613030
 ] 

Jordan Moore commented on KAFKA-7315:
-

So, I've been noticing the reorganization of the Confluent documentation, and 
honestly, I feel like a lot more effort is going into that than into the Kafka 
documentation :( 

For example, everything that I would want to add to the Kafka docs is summed up 
in the top section of https://docs.confluent.io/current/avro.html

If we ignore all the Confluent references and Avro details below, those two 
bullets hit the nail on the head for me, plus the references for 
"cross-language serialization libraries". 

All I could really suggest to be added to this is some DIY reference to a 
custom serializer / deserializer for some obscure object type like the 
[PriorityQueueSerializer|https://github.com/confluentinc/kafka-streams-examples/blob/5.0.0-post/src/main/java/io/confluent/examples/streams/utils/PriorityQueueSerializer.java]

> Streams serialization docs contain a broken link for Avro
> -
>
> Key: KAFKA-7315
> URL: https://issues.apache.org/jira/browse/KAFKA-7315
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: John Roesler
>Priority: Major
>  Labels: docuentation, newbie
>
> https://kafka.apache.org/documentation/streams/developer-guide/datatypes.html#avro



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-5983) Cannot mirror Avro-encoded data using the Apache Kafka MirrorMaker

2018-08-29 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596998#comment-16596998
 ] 

Jordan Moore edited comment on KAFKA-5983 at 8/30/18 2:36 AM:
--

First, MirrorMaker isn't Connect, so I don't think the "key/value converter" 
settings will work. The internal.* values definitely aren't recognized.

You could try to make the destination registry a slave of the source one. 
Otherwise, the only reason I see to have two registries would be if you want 
topics of the same name in two clusters with different schemas. 

My suggestion for a "fix", and something I've worked on before, would be to 
implement a MessageHandler that reads the schema ID of the source message, does 
a lookup in the source registry, gets the schema, and then registers it in the 
destination registry for your topic. 

If you use the CachedSchemaRegistryClient class in the handler, it won't be 
making an HTTP call for every message. 

See an example handler that renames a topic here: 
https://github.com/gwenshap/kafka-examples/tree/master/MirrorMakerHandler
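
Roughly, the lookup-and-re-register part could look like the sketch below (untested; the registry URLs are placeholders, the exact client method names may differ by schema-registry client version - older releases spell the lookup {{getByID()}} - and note the destination registry may assign a *different* id, so the embedded id has to be rewritten before forwarding). It would then be wired into a MirrorMakerMessageHandler like the linked example.

{code}
import java.nio.ByteBuffer;
import org.apache.avro.Schema;
import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;

public class SchemaForwarder {
    private final CachedSchemaRegistryClient source;
    private final CachedSchemaRegistryClient destination;

    public SchemaForwarder(String sourceUrl, String destUrl) {
        // second argument = how many schemas the client caches
        this.source = new CachedSchemaRegistryClient(sourceUrl, 100);
        this.destination = new CachedSchemaRegistryClient(destUrl, 100);
    }

    /** Re-registers the record's schema downstream and rewrites the embedded id. */
    public byte[] forwardValue(String topic, byte[] value) throws Exception {
        ByteBuffer in = ByteBuffer.wrap(value);
        if (in.get() != 0x0) {
            return value;                           // not Confluent wire format; pass through
        }
        int sourceId = in.getInt();                 // id under the source registry
        Schema schema = source.getById(sourceId);   // cached after the first lookup
        int destId = destination.register(topic + "-value", schema);  // idempotent
        ByteBuffer out = ByteBuffer.allocate(value.length);
        out.put((byte) 0x0);
        out.putInt(destId);                         // ids can differ between registries
        out.put(value, 5, value.length - 5);        // copy the Avro payload unchanged
        return out.array();
    }
}
{code}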


was (Author: cricket007):
You could try to make the destination registry be a slave. Otherwise, the only 
reason I see have two registries would if you want topics of the same name in 
two clusters with different schemas. 

My suggestion for a "fix" and something I've worked on before would be to 
implement a MessageHandler that reads the schema ID of the source message, does 
a lookup in the source registry, gets the schema, and then uploads it to the 
destination registry for your topic. 

If you add the CachedSchemaRegistryClient class to the handler, then it won't 
be doing a HTTP call for every message. 

See example handler that renames a topic here. 
https://github.com/gwenshap/kafka-examples/tree/master/MirrorMakerHandler

> Cannot mirror Avro-encoded data using the Apache Kafka MirrorMaker
> --
>
> Key: KAFKA-5983
> URL: https://issues.apache.org/jira/browse/KAFKA-5983
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.11.0.0
> Environment: OS: Linux CentOS 7 and Windows 10
>Reporter: Giulio Vito de Musso
>Priority: Major
>  Labels: windows
>
> I'm installing an Apache Kafka MirrorMaker instance to replicate one cluster 
> data to one another cluster. Both on the source and on the target clusters 
> I'm using the Confluent Avro schema registry and the data is binarized with 
> Avro.
> I'm using the latest released version of Confluent 3.3.0 (kafka 0.11). 
> Moreover, the source broker is on a Windows machine while the target broker 
> is on a Linux machine.
> The two Kafka clusters are independent, thus they have different schema 
> registries.
> This are my configuration files for the MirrroMaker
> {code:title=consumer.properties|borderStyle=solid}
> group.id=test-mirrormaker-group
> bootstrap.servers=host01:9092
> exclude.internal.topics=true
> client.id=mirror_maker_consumer0
> auto.commit.enabled=false
> # Avro schema registry properties
> key.converter=io.confluent.connect.avro.AvroConverter
> key.converter.schema.registry.url=http://host01:8081
> value.converter=io.confluent.connect.avro.AvroConverter
> value.converter.schema.registry.url=http://host01:8081
> internal.key.converter=org.apache.kafka.connect.json.JsonConverter
> internal.value.converter=org.apache.kafka.connect.json.JsonConverter
> internal.key.converter.schemas.enable=false
> internal.value.converter.schemas.enable=false
> {code}
> {code:title=producer.properties|borderStyle=solid}
> bootstrap.servers=host02:9093
> compression.type=none
> acks=1
> client.id=mirror_maker_producer0
> # Avro schema registry properties
> key.converter=io.confluent.connect.avro.AvroConverter
> key.converter.schema.registry.url=http://host02:8081
> value.converter=io.confluent.connect.avro.AvroConverter
> value.converter.schema.registry.url=http://host02:8081
> internal.key.converter=org.apache.kafka.connect.json.JsonConverter
> internal.value.converter=org.apache.kafka.connect.json.JsonConverter
> internal.key.converter.schemas.enable=false
> internal.value.converter.schemas.enable=false
> {code}
> I run the MirrorMaker on the host01 Windows machine with this command
> {code}
> C:\kafka>.\bin\windows\kafka-mirror-maker.bat --consumer.config 
> .\etc\kafka\consumer.properties --producer.config 
> .\etc\kafka\producer.properties --whitelist=MY_TOPIC
> [2017-09-26 10:09:58,555] WARN The configuration 
> 'internal.key.converter.schemas.enable' was supplied but isn't a known 
> config. (org.apache.kafka.clients.producer.ProducerConfig)
> [2017-09-26 10:09:58,555] WARN The configuration 
> 'value.converter.schema.registry.url' was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2017-09-26 

[jira] [Commented] (KAFKA-5983) Cannot mirror Avro-encoded data using the Apache Kafka MirrorMaker

2018-08-29 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596998#comment-16596998
 ] 

Jordan Moore commented on KAFKA-5983:
-

You could try to make the destination registry be a slave. Otherwise, the only 
reason I see have two registries would if you want topics of the same name in 
two clusters with different schemas. 

My suggestion for a "fix" and something I've worked on before would be to 
implement a MessageHandler that reads the schema ID of the source message, does 
a lookup in the source registry, gets the schema, and then uploads it to the 
destination registry for your topic. 

If you add the CachedSchemaRegistryClient class to the handler, then it won't 
be doing a HTTP call for every message. 

See example handler that renames a topic here. 
https://github.com/gwenshap/kafka-examples/tree/master/MirrorMakerHandler

> Cannot mirror Avro-encoded data using the Apache Kafka MirrorMaker
> --
>
> Key: KAFKA-5983
> URL: https://issues.apache.org/jira/browse/KAFKA-5983
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.11.0.0
> Environment: OS: Linux CentOS 7 and Windows 10
>Reporter: Giulio Vito de Musso
>Priority: Major
>  Labels: windows
>
> I'm installing an Apache Kafka MirrorMaker instance to replicate one cluster 
> data to one another cluster. Both on the source and on the target clusters 
> I'm using the Confluent Avro schema registry and the data is binarized with 
> Avro.
> I'm using the latest released version of Confluent 3.3.0 (kafka 0.11). 
> Moreover, the source broker is on a Windows machine while the target broker 
> is on a Linux machine.
> The two Kafka clusters are independent, thus they have different schema 
> registries.
> This are my configuration files for the MirrroMaker
> {code:title=consumer.properties|borderStyle=solid}
> group.id=test-mirrormaker-group
> bootstrap.servers=host01:9092
> exclude.internal.topics=true
> client.id=mirror_maker_consumer0
> auto.commit.enabled=false
> # Avro schema registry properties
> key.converter=io.confluent.connect.avro.AvroConverter
> key.converter.schema.registry.url=http://host01:8081
> value.converter=io.confluent.connect.avro.AvroConverter
> value.converter.schema.registry.url=http://host01:8081
> internal.key.converter=org.apache.kafka.connect.json.JsonConverter
> internal.value.converter=org.apache.kafka.connect.json.JsonConverter
> internal.key.converter.schemas.enable=false
> internal.value.converter.schemas.enable=false
> {code}
> {code:title=producer.properties|borderStyle=solid}
> bootstrap.servers=host02:9093
> compression.type=none
> acks=1
> client.id=mirror_maker_producer0
> # Avro schema registry properties
> key.converter=io.confluent.connect.avro.AvroConverter
> key.converter.schema.registry.url=http://host02:8081
> value.converter=io.confluent.connect.avro.AvroConverter
> value.converter.schema.registry.url=http://host02:8081
> internal.key.converter=org.apache.kafka.connect.json.JsonConverter
> internal.value.converter=org.apache.kafka.connect.json.JsonConverter
> internal.key.converter.schemas.enable=false
> internal.value.converter.schemas.enable=false
> {code}
> I run the MirrorMaker on the host01 Windows machine with this command
> {code}
> C:\kafka>.\bin\windows\kafka-mirror-maker.bat --consumer.config 
> .\etc\kafka\consumer.properties --producer.config 
> .\etc\kafka\producer.properties --whitelist=MY_TOPIC
> [2017-09-26 10:09:58,555] WARN The configuration 
> 'internal.key.converter.schemas.enable' was supplied but isn't a known 
> config. (org.apache.kafka.clients.producer.ProducerConfig)
> [2017-09-26 10:09:58,555] WARN The configuration 
> 'value.converter.schema.registry.url' was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2017-09-26 10:09:58,571] WARN The configuration 'internal.key.converter' was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2017-09-26 10:09:58,586] WARN The configuration 
> 'internal.value.converter.schemas.enable' was supplied but isn't a known 
> config. (org.apache.kafka.clients.producer.ProducerConfig)
> [2017-09-26 10:09:58,602] WARN The configuration 'internal.value.converter' 
> was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2017-09-26 10:09:58,633] WARN The configuration 'value.converter' was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2017-09-26 10:09:58,649] WARN The configuration 'key.converter' was supplied 
> but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
> [2017-09-26 10:09:58,649] WARN The configuration 
> 'key.converter.schema.registry.url' 

[jira] [Comment Edited] (KAFKA-7314) MirrorMaker example in documentation does not work

2018-08-29 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596995#comment-16596995
 ] 

Jordan Moore edited comment on KAFKA-7314 at 8/30/18 2:28 AM:
--

The MirrorMaker documentation could be expanded on a bit, but I have also 
tested it with two instances, and it works as expected. 

What is this port for though?
{code}
-p 9094:9094 \
{code}

And this isn't necessary either, and might be the source of the issue 
{code}
-h `hostname` \
{code}

You're only running a single Kafka instance using those commands and those 
guides. Please show your config files. 

Also DNS shouldn't work from your host to the Docker network. You also don't 
need to edit any /etc/hosts file either. If you need to expose the Kafka 
advertised listener outside of the Docker network, you need to register them as 
such. 

{code}
  -p 29092:29092 \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP="PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT" \
  -e KAFKA_ADVERTISED_LISTENERS="PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092"
{code}

The Confluent Quickstart assumes all Kafka Clients (including MirrorMaker) will 
be *within* the Docker network. 

If you try to run MirrorMaker outside of Docker with the above environment, you 
will now need to use {{localhost:29092}} to connect to this broker. You need to 
run a second Kafka container and change the port to {{29093}}, for example, 
then connect and mirror the two like that. 


was (Author: cricket007):
The MirrorMaker documentation could be expanded on a bit, but I have also 
tested it with two instances, and it works as expected. 

What is this port for though?
{code}
-p 9094:9094 \
{code}

And this isn't necessary either, and might be the source of the issue 
{code}
-h `hostname` \
{code}

You're only running a single Kafka instance using those commands and those 
guides. Please show your config files. 

Also DNS shouldn't work from your host to the Docker network. You also don't 
need to edit any /etc/hosts file either. If you need to expose the Kafka 
advertised listener outside of the Docker network, you need to register them as 
such. 

{code}
  -p 29092:29092 \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP="PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT" \
  -e KAFKA_ADVERTISED_LISTENERS="PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092"
{code}

The Confluent Quickstart assumes all Kafka Clients (including MirrorMaker) will 
be *within* the Docker network. 

If you try to run MirrorMaker outside of Docker with the above environment, you 
will now need to use {{localhost:29092}} to connect to this broker. You need to 
run a second Kafka container and change the port to {{29093}}, for example, 
then connect mirror the two like that. 

> MirrorMaker example in documentation does not work
> --
>
> Key: KAFKA-7314
> URL: https://issues.apache.org/jira/browse/KAFKA-7314
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Reporter: John Wilkinson
>Priority: Critical
>
> Kafka MirrorMaker as described in the documentation 
> [here|https://kafka.apache.org/documentation/#basic_ops_mirror_maker] does 
> not work. Instead of pulling messages from the consumer-defined 
> {{bootstrap.servers}} and pushing to the producer-defined 
> {{bootstrap.servers}}, it consumes and producers on the same topic on the 
> same host repeatedly.
> To replicate, set up two instances of kafka following 
> [this|https://docs.confluent.io/current/installation/docker/docs/installation/recipes/single-node-client.html]
>  guide. The schema registry and rest proxy are unnecessary. 
> [Here|https://hub.docker.com/r/confluentinc/cp-kafka/] is the DockerHub page 
> for the image.  The Kafka version is 2.0.0.
> Using those two kafka instances, go {{docker exec}} into one and set up the 
> {{consumer.properties}} and the {{producer.properties}} following the 
> MirrorMaker guide.
> Oddly, if you put in garbage unresolvable server addresses in the config, 
> there will be an error, despite the configs not getting used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-7314) MirrorMaker example in documentation does not work

2018-08-29 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596995#comment-16596995
 ] 

Jordan Moore edited comment on KAFKA-7314 at 8/30/18 2:27 AM:
--

The MirrorMaker documentation could be expanded on a bit, but I have also 
tested it with two instances, and it works as expected. 

What is this port for though?
{code}
-p 9094:9094 \
{code}

And this isn't necessary either, and might be the source of the issue 
{code}
-h `hostname` \
{code}

You're only running a single Kafka instance using those commands and those 
guides. Please show your config files. 

Also DNS shouldn't work from your host to the Docker network. You also don't 
need to edit any /etc/hosts file either. If you need to expose the Kafka 
advertised listener outside of the Docker network, you need to register them as 
such. 

{code}
  -p 29092:29092 \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP="PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT" \
  -e KAFKA_ADVERTISED_LISTENERS="PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092"
{code}

The Confluent Quickstart assumes all Kafka Clients (including MirrorMaker) will 
be *within* the Docker network. 

If you try to run MirrorMaker outside of Docker with the above environment, you 
will now need to use {{localhost:29092}} to connect to this broker. You need to 
run a second Kafka container and change the port to {{29093}}, for example, 
then connect mirror the two like that. 


was (Author: cricket007):
The MirrorMaker documentation could be expanded on a bit, but I have also 
tested it with two instances, and it works as expected. 

What is this port for though?
{code}
-p 9094:9094 \
{code}

And this isn't necessary either, and might be the source of the issue 
{code}
-h `hostname` \
{code}

You're only running a single Kafka instance using those commands and those 
guides. Please show your config files. 

Also DNS shouldn't work from your host to the Docker network. You also don't 
need to edit any /etc/hosts file either. If you need to expose the Kafka 
advertised listener outside of the Docker network, you need to register them as 
such. 

{code}
  -p 29092:29092 \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=" PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT" \
  -e KAFKA_ADVERTISED_LISTENERS="PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092"
{code}

The Confluent Quickstart assumes all Kafka Clients (including MirrorMaker) will 
be *within* the Docker network. 

If you try to run MirrorMaker outside of Docker with the above environment, you 
will now need to use {{localhost:29092}} to connect to this broker. You need to 
run a second Kafka container and change the port to {{29093}}, for example, 
then connect mirror the two like that. 

> MirrorMaker example in documentation does not work
> --
>
> Key: KAFKA-7314
> URL: https://issues.apache.org/jira/browse/KAFKA-7314
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Reporter: John Wilkinson
>Priority: Critical
>
> Kafka MirrorMaker as described in the documentation 
> [here|https://kafka.apache.org/documentation/#basic_ops_mirror_maker] does 
> not work. Instead of pulling messages from the consumer-defined 
> {{bootstrap.servers}} and pushing to the producer-defined 
> {{bootstrap.servers}}, it consumes and producers on the same topic on the 
> same host repeatedly.
> To replicate, set up two instances of kafka following 
> [this|https://docs.confluent.io/current/installation/docker/docs/installation/recipes/single-node-client.html]
>  guide. The schema registry and rest proxy are unnecessary. 
> [Here|https://hub.docker.com/r/confluentinc/cp-kafka/] is the DockerHub page 
> for the image.  The Kafka version is 2.0.0.
> Using those two kafka instances, go {{docker exec}} into one and set up the 
> {{consumer.properties}} and the {{producer.properties}} following the 
> MirrorMaker guide.
> Oddly, if you put in garbage unresolvable server addresses in the config, 
> there will be an error, despite the configs not getting used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7314) MirrorMaker example in documentation does not work

2018-08-29 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596995#comment-16596995
 ] 

Jordan Moore commented on KAFKA-7314:
-

The MirrorMaker documentation could be expanded on a bit, but I have also 
tested it with two instances, and it works as expected. 

What is this port for though?
{code}
-p 9094:9094 \
{code}

And this isn't necessary either, and might be the source of the issue 
{code}
-h `hostname` \
{code}

You're only running a single Kafka instance using those commands and those 
guides. Please show your config files. 

Also DNS shouldn't work from your host to the Docker network. You also don't 
need to edit any /etc/hosts file either. If you need to expose the Kafka 
advertised listener outside of the Docker network, you need to register them as 
such. 

{code}
  -p 29092:29092 \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=" PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT" \
  -e KAFKA_ADVERTISED_LISTENERS="PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092"
{code}

The Confluent Quickstart assumes all Kafka Clients (including MirrorMaker) will 
be *within* the Docker network. 

If you try to run MirrorMaker outside of Docker with the above environment, you 
will now need to use {{localhost:29092}} to connect to this broker. You need to 
run a second Kafka container and change the port to {{29093}}, for example, 
then connect mirror the two like that. 

> MirrorMaker example in documentation does not work
> --
>
> Key: KAFKA-7314
> URL: https://issues.apache.org/jira/browse/KAFKA-7314
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Reporter: John Wilkinson
>Priority: Critical
>
> Kafka MirrorMaker as described in the documentation 
> [here|https://kafka.apache.org/documentation/#basic_ops_mirror_maker] does 
> not work. Instead of pulling messages from the consumer-defined 
> {{bootstrap.servers}} and pushing to the producer-defined 
> {{bootstrap.servers}}, it consumes and producers on the same topic on the 
> same host repeatedly.
> To replicate, set up two instances of kafka following 
> [this|https://docs.confluent.io/current/installation/docker/docs/installation/recipes/single-node-client.html]
>  guide. The schema registry and rest proxy are unnecessary. 
> [Here|https://hub.docker.com/r/confluentinc/cp-kafka/] is the DockerHub page 
> for the image.  The Kafka version is 2.0.0.
> Using those two kafka instances, go {{docker exec}} into one and set up the 
> {{consumer.properties}} and the {{producer.properties}} following the 
> MirrorMaker guide.
> Oddly, if you put in garbage unresolvable server addresses in the config, 
> there will be an error, despite the configs not getting used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7315) Streams serialization docs contain a broken link for Avro

2018-08-21 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16587602#comment-16587602
 ] 

Jordan Moore commented on KAFKA-7315:
-

I understand the "Serde" class itself is a Streams class. I use the SerDe term 
loosely to mean serializer/deserializer. 

I wasn't trying to imply that there needed to be "how to (de)serialize to 
datatype X,Y,Z", maybe just mention some popular options. 

My main point was that while the Serde class itself is specific to the Streams 
API, the mention of custom Serializers (and the SerDe classes that are already 
implemented, which wrap their respective serializer) seems 
"hidden" under the Streams docs page; on the "front page", I'm only able to 
find the term "serializer" in the config sections, and it's not really clear 
what needs to be done to implement a custom one. E.g. saying "a class that 
implements the org.apache.kafka.common.serialization.Serializer interface", 1) 
really only applies to JVM-based clients, 2) doesn't specifically mention that 
the Serializer needs to return byte[] and Deserializer converts from byte[]

Like I said, though, maybe that is implicitly said/known already in some other 
section of the docs? 
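
For what it's worth, a minimal DIY pair could look like this (sketch only; {{java.util.UUID}} is just an arbitrary example type):

{code}
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.UUID;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serializer;

// The whole contract: Serializer turns the type into byte[], Deserializer reverses it.
public class UuidSerializer implements Serializer<UUID> {
    @Override public void configure(Map<String, ?> configs, boolean isKey) { }

    @Override public byte[] serialize(String topic, UUID data) {
        if (data == null) return null;
        return ByteBuffer.allocate(16)
                .putLong(data.getMostSignificantBits())
                .putLong(data.getLeastSignificantBits())
                .array();
    }

    @Override public void close() { }
}

class UuidDeserializer implements Deserializer<UUID> {
    @Override public void configure(Map<String, ?> configs, boolean isKey) { }

    @Override public UUID deserialize(String topic, byte[] data) {
        if (data == null) return null;
        ByteBuffer buf = ByteBuffer.wrap(data);
        return new UUID(buf.getLong(), buf.getLong());
    }

    @Override public void close() { }
}
{code}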

> Streams serialization docs contain a broken link for Avro
> -
>
> Key: KAFKA-7315
> URL: https://issues.apache.org/jira/browse/KAFKA-7315
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: John Roesler
>Priority: Major
>  Labels: docuentation, newbie
>
> https://kafka.apache.org/documentation/streams/developer-guide/datatypes.html#avro



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-7315) Streams serialization docs contain a broken link for Avro

2018-08-21 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16587602#comment-16587602
 ] 

Jordan Moore edited comment on KAFKA-7315 at 8/21/18 3:38 PM:
--

I understand the "Serde" class itself is a Streams class. I use the SerDe term 
loosely to mean serializer/deserializer. 

I wasn't trying to imply that there needed to be "how to (de)serialize to 
datatype X,Y,Z", maybe just mention some popular options, and say something 
along the lines of "as long one can translate a datatype into bytes, it can be 
transferred with Kafka."

My main point was that while the Serde class itself is specific to the Streams 
API, the mention of custom Serializers (and the SerDe classes that are already 
implemented, which wrap their respective serializer) seems 
"hidden" under the Streams docs page; on the "front page", I'm only able to 
find the term "serializer" in the config sections, and it's not really clear 
what needs to be done to implement a custom one. E.g. saying "a class that 
implements the org.apache.kafka.common.serialization.Serializer interface", 1) 
really only applies to JVM-based clients, 2) doesn't specifically mention that 
the Serializer needs to return byte[] and Deserializer converts from byte[]

Like I said, though, maybe that is implicitly said/known already in some other 
section of the docs? 


was (Author: cricket007):
I understand the "Serde" class itself is a Streams class. I use the SerDe term 
loosely to mean serializer/deserializer. 

I wasn't trying to imply that there needed to be "how to (de)serialize to 
datatype X,Y,Z", maybe just mention some popular options. 

My main point was that the while the Serde class itself is specific to Streams 
API, the mention of custom Serializers (and the SerDe classes that are already 
implemented, which wrap their respective serializer) seems 
"hidden" under the Streams docs page; on the "front page", I'm only able to 
find the term "serializer" in the config sections, and it's not really clear 
what needs to be done to implement a custom one. E.g. saying "a class that 
implements the org.apache.kafka.common.serialization.Serializer interface", 1) 
really only applies to JVM-based clients, 2) doesn't specifically mention that 
the Serializer needs to return byte[] and Deserializer converts from byte[]

Like I said, though, maybe that is implicitly said/known already in some other 
section of the docs? 

> Streams serialization docs contain a broken link for Avro
> -
>
> Key: KAFKA-7315
> URL: https://issues.apache.org/jira/browse/KAFKA-7315
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: John Roesler
>Priority: Major
>  Labels: docuentation, newbie
>
> https://kafka.apache.org/documentation/streams/developer-guide/datatypes.html#avro



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-7315) Streams serialization docs contain a broken link for Avro

2018-08-20 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16586650#comment-16586650
 ] 

Jordan Moore edited comment on KAFKA-7315 at 8/20/18 11:12 PM:
---

What about a separate section on SerDe's outside of the Streams docs?

I've seen examples of people using MsgPack and ProtoBuf as well. 

Sure, it's documented implicitly (or briefly under "Message Format") that Kafka 
uses only byte[] for keys/values, but lots of "getting started guides" are 
sending strings and not really discussing the benefits of alternative, more 
compact "structured" formats. 


was (Author: cricket007):
What about a separate section on SerDe's outside of the Streams docs?

I've seen examples of people using MsgPack and ProtoBuf as well. 

Sure, it's documented implicitly (or briefly under "Message Format") that Kafka 
uses only byte[] for keys/values, but lots of "getting started guides" are 
sending strings and not really discussing the values of alternative, more 
compact "structured" formats. 

> Streams serialization docs contain a broken link for Avro
> -
>
> Key: KAFKA-7315
> URL: https://issues.apache.org/jira/browse/KAFKA-7315
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: John Roesler
>Priority: Major
>  Labels: docuentation, newbie
>
> https://kafka.apache.org/documentation/streams/developer-guide/datatypes.html#avro



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7315) Streams serialization docs contain a broken link for Avro

2018-08-20 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16586650#comment-16586650
 ] 

Jordan Moore commented on KAFKA-7315:
-

What about a separate section on SerDe's outside of the Streams docs?

I've seen examples of people using MsgPack and ProtoBuf as well. 

Sure, it's documented implicitly (or briefly under "Message Format") that Kafka 
uses only byte[] for keys/values, but lots of "getting started guides" are 
sending strings and not really discussing the values of alternative, more 
compact "structured" formats. 

> Streams serialization docs contain a broken link for Avro
> -
>
> Key: KAFKA-7315
> URL: https://issues.apache.org/jira/browse/KAFKA-7315
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: John Roesler
>Priority: Major
>  Labels: docuentation, newbie
>
> https://kafka.apache.org/documentation/streams/developer-guide/datatypes.html#avro



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-7315) Streams serialization docs contain a broken link for Avro

2018-08-20 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16586629#comment-16586629
 ] 

Jordan Moore edited comment on KAFKA-7315 at 8/20/18 10:56 PM:
---

bq.  should be vendor agnostic and not mention any of them

Fair enough. Though, in its place I feel it would be fair to mention at least 
_somewhere_ the alternatives to primitive Java types, byte arrays and JSON. 

The only mentions of Avro elsewhere in the docs that I can easily find are in 
the Connect sections. 


was (Author: cricket007):
bq.  should be vendor agnostic and not mention any of them

Fair enough. Though, in it's place I feel it would be fair to mention at least 
_somewhere_ the alternatives to primitive Java types, byte arrays and JSON. 

The only mentions of Avro elsewhere in the docs that I can easily find are in 
the Connect sections. 

> Streams serialization docs contain a broken link for Avro
> -
>
> Key: KAFKA-7315
> URL: https://issues.apache.org/jira/browse/KAFKA-7315
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: John Roesler
>Priority: Major
>  Labels: docuentation, newbie
>
> https://kafka.apache.org/documentation/streams/developer-guide/datatypes.html#avro



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7315) Streams serialization docs contain a broken link for Avro

2018-08-20 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16586629#comment-16586629
 ] 

Jordan Moore commented on KAFKA-7315:
-

bq.  should be vendor agnostic and not mention any of them

Fair enough. Though, in it's place I feel it would be fair to mention at least 
_somewhere_ the alternatives to primitive Java types, byte arrays and JSON. 

The only mentions of Avro elsewhere in the docs that I can easily find are in 
the Connect sections. 

> Streams serialization docs contain a broken link for Avro
> -
>
> Key: KAFKA-7315
> URL: https://issues.apache.org/jira/browse/KAFKA-7315
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: John Roesler
>Priority: Major
>  Labels: docuentation, newbie
>
> https://kafka.apache.org/documentation/streams/developer-guide/datatypes.html#avro



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7315) Streams serialization docs contain a broken link for Avro

2018-08-20 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16586476#comment-16586476
 ] 

Jordan Moore commented on KAFKA-7315:
-

Thanks for opening [~vvcephei]

Thoughts:

List out the options in the documentation, don't force a developer towards 
using a Schema Registry (since this is AK docs, not Confluent or Hortonworks 
for example). 

So, options:
 # Take an Avro object and push it as raw bytes directly; [example 
here|http://aseigneurin.github.io/2016/03/04/kafka-spark-avro-producing-and-consuming-avro-messages.html]
 # Use a Schema Registry, and document how it works, in general (using 
referential lookups to external sources within the SerDe interfaces)
 ## Possibly link to (or simply mention) known implementations
--- [Confluent Schema 
Registry|https://www.confluent.io/confluent-schema-registry/]
--- [Hortonworks Registry|https://registry-project.readthedocs.io/en/latest/]
--- 
[Cloudera|http://blog.cloudera.com/blog/2018/07/robust-message-serialization-in-apache-kafka-using-apache-avro-part-2/]

> Streams serialization docs contain a broken link for Avro
> -
>
> Key: KAFKA-7315
> URL: https://issues.apache.org/jira/browse/KAFKA-7315
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: John Roesler
>Priority: Major
>  Labels: docuentation, newbie
>
> https://kafka.apache.org/documentation/streams/developer-guide/datatypes.html#avro



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7077) KIP-318: Make Kafka Connect Source idempotent

2018-07-01 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529130#comment-16529130
 ] 

Jordan Moore commented on KAFKA-7077:
-

Is this related to KAFKA-6340 ?

> KIP-318: Make Kafka Connect Source idempotent
> -
>
> Key: KAFKA-7077
> URL: https://issues.apache.org/jira/browse/KAFKA-7077
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.0.0
>Reporter: Stephane Maarek
>Assignee: Stephane Maarek
>Priority: Major
>
> KIP Link: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-318%3A+Make+Kafka+Connect+Source+idempotent



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7060) Command-line overrides for ConnectDistributed worker properties

2018-06-30 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528950#comment-16528950
 ] 

Jordan Moore commented on KAFKA-7060:
-

Personally, I would prefer {{--property key=value}} flags, just as the 
consumer/producer CLI tools have.
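
e.g. something like this (hypothetical syntax - KIP-316 is not implemented, so none of these flags exist today):

{code}
bin/connect-distributed.sh config/connect-distributed.properties \
  --property group.id=connect-cluster-2 \
  --property offset.storage.topic=connect-offsets-2
{code}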

> Command-line overrides for ConnectDistributed worker properties
> ---
>
> Key: KAFKA-7060
> URL: https://issues.apache.org/jira/browse/KAFKA-7060
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Kevin Lafferty
>Priority: Minor
>  Labels: needs-kip
>
> This Jira is for tracking the implementation for 
> [KIP-316|https://cwiki.apache.org/confluence/display/KAFKA/KIP-316%3A+Command-line+overrides+for+ConnectDistributed+worker+properties].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6932) Avroconverter does not propagate subjectname stratergy

2018-06-18 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16516374#comment-16516374
 ] 

Jordan Moore commented on KAFKA-6932:
-

Shouldn't this be posted to the [Schema Registry Issues 
Page|https://github.com/confluentinc/schema-registry/issues]? 

The problem you are having doesn't affect core Kafka

> Avroconverter does not propagate subjectname stratergy
> --
>
> Key: KAFKA-6932
> URL: https://issues.apache.org/jira/browse/KAFKA-6932
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.1.0
>Reporter: Masthan Shaik
>Priority: Major
>
> The customer is using 
> io.confluent.kafka.serializers.subject.TopicRecordNameStrategy for the 
> AvroConverter, but the AvroConverter is defaulting to TopicNameStrategy.
> It looks like the AvroConverter doesn't initialize or pass the strategy into 
> the deserializer.
> POST with input 
> {"schema":"{\"type\":\"record\",\"name\":\"Channel\",\"namespace\":\"com.thing\",\"doc\":\"Channel
>  Object\",\"fields\":[
> {\"name\":\"channelId\",\"type\":\"string\",\"doc\":\"Unique channel id for a 
> single time series\"}
> ]}"} to
> in the above log it is using newgen.changelog.v1-key instead of the 
> newgen.changelog.v1-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6942) Connect connectors api doesn't show versions of connectors

2018-06-18 Thread Jordan Moore (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16516369#comment-16516369
 ] 

Jordan Moore commented on KAFKA-6942:
-

Doesn't this already exist?

GET /connector-plugins
{code}
[
  {
"class": "io.confluent.connect.replicator.ReplicatorSourceConnector",
"type": "source",
"version": "4.0.1"
  },
  {
"class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
"type": "sink",
"version": "1.0.0-cp1"
  },
  {
"class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
"type": "source",
"version": "1.0.0-cp1"
  }
]
{code}

> Connect connectors api doesn't show versions of connectors
> --
>
> Key: KAFKA-6942
> URL: https://issues.apache.org/jira/browse/KAFKA-6942
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Affects Versions: 1.1.0
>Reporter: Antony Stubbs
>Priority: Minor
>  Labels: needs-kip
>
> Would be very useful to have the connector list API response also return the 
> version of the installed connectors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)