[jira] [Resolved] (KAFKA-8614) Rename the `responses` field of IncrementalAlterConfigsResponse to match AlterConfigs

2019-07-12 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8614.

Resolution: Fixed

> Rename the `responses` field of IncrementalAlterConfigsResponse to match 
> AlterConfigs
> -
>
> Key: KAFKA-8614
> URL: https://issues.apache.org/jira/browse/KAFKA-8614
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.3.0
>Reporter: Bob Barrett
>Assignee: Bob Barrett
>Priority: Minor
>
> IncrementalAlterConfigsResponse and AlterConfigsResponse have an identical 
> structure for per-resource error codes, but in AlterConfigsResponse it is 
> named `Resources` while in IncrementalAlterConfigsResponse it is named 
> `responses`.
> AlterConfigsResponse:
> {code:java}
> { "name": "Resources", "type": "[]AlterConfigsResourceResponse", "versions": 
> "0+", "about": "The responses for each resource.", "fields": [
> { "name": "ErrorCode", "type": "int16", "versions": "0+", "about": "The 
> resource error code." },
> { "name": "ErrorMessage", "type": "string", "nullableVersions": "0+", 
> "versions": "0+", "about": "The resource error message, or null if there was 
> no error." },
> { "name": "ResourceType", "type": "int8", "versions": "0+", "about": "The 
> resource type." },
> { "name": "ResourceName", "type": "string", "versions": "0+", "about": 
> "The resource name." }
> ]}{code}
>  
> IncrementalAlterConfigsResponse:
>  
> {code:java}
> { "name": "responses", "type": "[]AlterConfigsResourceResult", "versions": 
> "0+", "about": "The responses for each resource.", "fields": [
> { "name": "ErrorCode", "type": "int16", "versions": "0+", "about": "The 
> resource error code." },
> { "name": "ErrorMessage", "type": "string", "nullableVersions": "0+", 
> "versions": "0+", "about": "The resource error message, or null if there was 
> no error." },
> { "name": "ResourceType", "type": "int8", "versions": "0+", "about": "The 
> resource type." },
> { "name": "ResourceName", "type": "string", "versions": "0+", "about": 
> "The resource name." }
> ]}
> {code}
> We should change the field in IncrementalAlterConfigsResponse to be 
> consistent with AlterConfigsResponse.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8614) Rename the `responses` field of IncrementalAlterConfigsResponse to match AlterConfigs

2019-07-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884247#comment-16884247
 ] 

ASF GitHub Bot commented on KAFKA-8614:
---

hachikuji commented on pull request #7022: KAFKA-8614: Rename the `responses` 
field of IncrementalAlterConfigsRe…
URL: https://github.com/apache/kafka/pull/7022
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Rename the `responses` field of IncrementalAlterConfigsResponse to match 
> AlterConfigs
> -
>
> Key: KAFKA-8614
> URL: https://issues.apache.org/jira/browse/KAFKA-8614
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.3.0
>Reporter: Bob Barrett
>Assignee: Bob Barrett
>Priority: Minor
>
> IncrementalAlterConfigsResponse and AlterConfigsResponse have an identical 
> structure for per-resource error codes, but in AlterConfigsResponse it is 
> named `Resources` while in IncrementalAlterConfigsResponse it is named 
> `responses`.
> AlterConfigsResponse:
> {code:java}
> { "name": "Resources", "type": "[]AlterConfigsResourceResponse", "versions": 
> "0+", "about": "The responses for each resource.", "fields": [
> { "name": "ErrorCode", "type": "int16", "versions": "0+", "about": "The 
> resource error code." },
> { "name": "ErrorMessage", "type": "string", "nullableVersions": "0+", 
> "versions": "0+", "about": "The resource error message, or null if there was 
> no error." },
> { "name": "ResourceType", "type": "int8", "versions": "0+", "about": "The 
> resource type." },
> { "name": "ResourceName", "type": "string", "versions": "0+", "about": 
> "The resource name." }
> ]}{code}
>  
> IncrementalAlterConfigsResponse:
>  
> {code:java}
> { "name": "responses", "type": "[]AlterConfigsResourceResult", "versions": 
> "0+", "about": "The responses for each resource.", "fields": [
> { "name": "ErrorCode", "type": "int16", "versions": "0+", "about": "The 
> resource error code." },
> { "name": "ErrorMessage", "type": "string", "nullableVersions": "0+", 
> "versions": "0+", "about": "The resource error message, or null if there was 
> no error." },
> { "name": "ResourceType", "type": "int8", "versions": "0+", "about": "The 
> resource type." },
> { "name": "ResourceName", "type": "string", "versions": "0+", "about": 
> "The resource name." }
> ]}
> {code}
> We should change the field in IncrementalAlterConfigsResponse to be 
> consistent with AlterConfigsResponse.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-7500) MirrorMaker 2.0 (KIP-382)

2019-07-12 Thread Sam Deo (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884198#comment-16884198
 ] 

Sam Deo commented on KAFKA-7500:


Hello,

Tentatively, when is 2.4.0 expected to be released?

Will Kafka version 0.10.2.1 work with MM 2.0, or should we plan to upgrade to 
0.11.0 at a minimum?

Thanks

> MirrorMaker 2.0 (KIP-382)
> -
>
> Key: KAFKA-7500
> URL: https://issues.apache.org/jira/browse/KAFKA-7500
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect, mirrormaker
>Reporter: Ryanne Dolan
>Priority: Minor
> Fix For: 2.4.0
>
> Attachments: Active-Active XDCR setup.png
>
>
> Implement a drop-in replacement for MirrorMaker leveraging the Connect 
> framework.
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0]
> [https://github.com/apache/kafka/pull/6295]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8635) Unnecessary wait when looking up coordinator before transactional request

2019-07-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884194#comment-16884194
 ] 

ASF GitHub Bot commented on KAFKA-8635:
---

bob-barrett commented on pull request #7085: KAFKA-8635: Skip client poll in 
Sender loop when no request is sent
URL: https://github.com/apache/kafka/pull/7085
 
 
   This patch breaks up maybeSendTransactionalRequest() and changes it to 
return false if a FindCoordinatorRequest is enqueued. If this is the case, we 
no longer poll because no request was actually sent.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Unnecessary wait when looking up coordinator before transactional request
> -
>
> Key: KAFKA-8635
> URL: https://issues.apache.org/jira/browse/KAFKA-8635
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.3.0, 2.2.1
>Reporter: Denis Washington
>Assignee: Bob Barrett
>Priority: Major
>
> In our Kafka Streams applications (with EOS enabled), we were seeing 
> mysterious long delays between records being produced by a stream task and 
> the same records being consumed by the next task. These delays turned out to 
> always be around {{retry.backoff.ms}} long; reducing that value reduced the 
> delays by about the same amount.
> After digging further, I pinned down the problem to the following lines in 
> {{org.apache.kafka.clients.producer.internals.Sender#runOnce}}:
> {code:java}
> } else if (transactionManager.hasInFlightTransactionalRequest() || maybeSendTransactionalRequest()) {
>     // as long as there are outstanding transactional requests, we simply wait for them to return
>     client.poll(retryBackoffMs, time.milliseconds());
>     return;
> }
> {code}
> This code seems to assume that, if {{maybeSendTransactionalRequest}} returns 
> true, a transactional request has been sent out that should be waited for. 
> However, this is not true if the request requires a coordinator lookup:
> {code:java}
> if (nextRequestHandler.needsCoordinator()) {
>     targetNode = transactionManager.coordinator(nextRequestHandler.coordinatorType());
>     if (targetNode == null) {
>         transactionManager.lookupCoordinator(nextRequestHandler);
>         break;
>     }
>     ...
> {code}
> {{lookupCoordinator()}} does not actually send anything, but just enqueues a 
> coordinator lookup request for the {{Sender}}'s next run loop iteration. 
> {{maybeSendTransactionalRequest}} still returns true, though (the {{break}} 
> jumps to a {{return true}} at the end of the method), leading the {{Sender}} 
> to needlessly wait via {{client.poll()}} although there is actually no 
> request in-flight.
> I _think_ the fix is to let {{maybeSendTransactionalRequest}} return false if 
> it merely enqueues the coordinator lookup instead of actually sending 
> anything. But I'm not sure, hence the bug report instead of a pull request.
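
To make the reported control flow concrete, here is a minimal, self-contained model of 
the idea (the class and field names below are invented for illustration; this is not 
Kafka's actual Sender code): returning false when only a coordinator lookup was enqueued 
lets the run loop skip the retry.backoff.ms wait.

{code:java}
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of the Sender run-loop decision described in this issue.
public class SenderLoopSketch {
    private final Queue<String> pendingRequests = new ArrayDeque<>();
    private String coordinator = null;              // coordinator not yet known

    // Returns true only when a request was actually "sent" in this iteration.
    boolean maybeSendTransactionalRequest() {
        if (coordinator == null) {
            pendingRequests.add("FindCoordinator"); // merely enqueued for the next iteration
            return false;                           // nothing in flight, so do not block in poll()
        }
        pendingRequests.add("TransactionalRequest(" + coordinator + ")");
        return true;                                // a real request went out, waiting is justified
    }

    void runOnce() {
        if (maybeSendTransactionalRequest()) {
            System.out.println("poll(retryBackoffMs) while waiting for: " + pendingRequests.peek());
        } else {
            System.out.println("no request sent, skip the backoff wait");
        }
    }

    public static void main(String[] args) {
        SenderLoopSketch sender = new SenderLoopSketch();
        sender.runOnce();                 // coordinator unknown: lookup enqueued, no wait
        sender.coordinator = "broker-1";  // pretend the lookup completed
        sender.runOnce();                 // real request sent: waiting is fine
    }
}
{code}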



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-7016) Reconsider the "avoid the expensive and useless stack trace for api exceptions" practice

2019-07-12 Thread Patrick Taylor (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884127#comment-16884127
 ] 

Patrick Taylor commented on KAFKA-7016:
---

I very much agree and find this code rather unbelievable.  A stack trace is 
expensive and useless?  How is it either one of those?

> Reconsider the "avoid the expensive and useless stack trace for api 
> exceptions" practice
> 
>
> Key: KAFKA-7016
> URL: https://issues.apache.org/jira/browse/KAFKA-7016
> Project: Kafka
>  Issue Type: Bug
>Reporter: Martin Vysny
>Priority: Major
>
> I am trying to write a Kafka Consumer; upon running it only prints out:
> {{org.apache.kafka.common.errors.InvalidGroupIdException: The configured 
> groupId is invalid}}
> Note that the stack trace is missing, so that I have no information which 
> part of my code is bad and need fixing; I also have no information which 
> Kafka Client method has been called. Upon closer examination I found this in 
> ApiException:
>  
> {code:java}
> /* avoid the expensive and useless stack trace for api exceptions */
> @Override
> public Throwable fillInStackTrace() {
>     return this;
> }
> {code}
>  
> I think it is a bad practice to hide all useful debugging info and trade it 
> for dubious performance gains. Exceptions are for exceptional code flow, 
> which is allowed to be slow.
>  
> This applies to kafka-clients 1.1.0
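
As a small, self-contained demonstration of what this override does (the exception class 
below is invented for the demo; Kafka's real class is 
org.apache.kafka.common.errors.ApiException):

{code:java}
// Demonstrates that overriding fillInStackTrace() to return 'this' suppresses the frames.
public class NoStackTraceDemo {

    static class QuietException extends RuntimeException {
        QuietException(String message) {
            super(message);
        }

        // Same trick as ApiException: skip capturing the stack trace.
        @Override
        public synchronized Throwable fillInStackTrace() {
            return this;
        }
    }

    public static void main(String[] args) {
        try {
            throw new QuietException("The configured groupId is invalid");
        } catch (QuietException e) {
            // Prints only the class name and message, with no "at ..." frames,
            // so the caller cannot tell which code path triggered it.
            e.printStackTrace();
        }
    }
}
{code}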



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8322) Flaky test: SslTransportLayerTest.testListenerConfigOverride

2019-07-12 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884123#comment-16884123
 ] 

Matthias J. Sax commented on KAFKA-8322:


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/252/testReport/junit/org.apache.kafka.common.network/SslTransportLayerTest/testListenerConfigOverride/]

> Flaky test: SslTransportLayerTest.testListenerConfigOverride
> 
>
> Key: KAFKA-8322
> URL: https://issues.apache.org/jira/browse/KAFKA-8322
> Project: Kafka
>  Issue Type: Test
>  Components: core, unit tests
>Reporter: Dhruvil Shah
>Priority: Major
>
> java.lang.AssertionError: expected: but 
> was: at org.junit.Assert.fail(Assert.java:89) at 
> org.junit.Assert.failNotEquals(Assert.java:835) at 
> org.junit.Assert.assertEquals(Assert.java:120) at 
> org.junit.Assert.assertEquals(Assert.java:146) at 
> org.apache.kafka.common.network.NetworkTestUtils.waitForChannelClose(NetworkTestUtils.java:111)
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/4250/testReport/junit/org.apache.kafka.common.network/SslTransportLayerTest/testListenerConfigOverride/]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-4222) Transient failure in QueryableStateIntegrationTest.queryOnRebalance

2019-07-12 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-4222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884122#comment-16884122
 ] 

Matthias J. Sax commented on KAFKA-4222:


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/252/testReport/junit/org.apache.kafka.streams.integration/QueryableStateIntegrationTest/queryOnRebalance/]

> Transient failure in QueryableStateIntegrationTest.queryOnRebalance
> ---
>
> Key: KAFKA-4222
> URL: https://issues.apache.org/jira/browse/KAFKA-4222
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams, unit tests
>Reporter: Jason Gustafson
>Priority: Major
> Fix For: 0.11.0.0
>
>
> Seen here: https://builds.apache.org/job/kafka-trunk-jdk8/915/console
> {code}
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> queryOnRebalance[1] FAILED
> java.lang.AssertionError: Condition not met within timeout 3. waiting 
> for metadata, store and value to be non null
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyAllKVKeys(QueryableStateIntegrationTest.java:263)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:342)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8122) Flaky Test EosIntegrationTest#shouldNotViolateEosIfOneTaskFailsWithState

2019-07-12 Thread Boyang Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884062#comment-16884062
 ] 

Boyang Chen commented on KAFKA-8122:


Failed again: 
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/251/console]

 

> Flaky Test EosIntegrationTest#shouldNotViolateEosIfOneTaskFailsWithState
> 
>
> Key: KAFKA-8122
> URL: https://issues.apache.org/jira/browse/KAFKA-8122
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.4.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3285/testReport/junit/org.apache.kafka.streams.integration/EosIntegrationTest/shouldNotViolateEosIfOneTaskFailsWithState/]
> {quote}java.lang.AssertionError: Expected: <[KeyValue(0, 0), KeyValue(0, 1), 
> KeyValue(0, 3), KeyValue(0, 6), KeyValue(0, 10), KeyValue(0, 15), KeyValue(0, 
> 21), KeyValue(0, 28), KeyValue(0, 36), KeyValue(0, 45)]> but: was 
> <[KeyValue(0, 0), KeyValue(0, 1), KeyValue(0, 3), KeyValue(0, 6), KeyValue(0, 
> 10), KeyValue(0, 15), KeyValue(0, 21), KeyValue(0, 28), KeyValue(0, 36), 
> KeyValue(0, 45), KeyValue(0, 55), KeyValue(0, 66), KeyValue(0, 78), 
> KeyValue(0, 91), KeyValue(0, 105)]> at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6) at 
> org.apache.kafka.streams.integration.EosIntegrationTest.checkResultPerKey(EosIntegrationTest.java:212)
>  at 
> org.apache.kafka.streams.integration.EosIntegrationTest.shouldNotViolateEosIfOneTaskFailsWithState(EosIntegrationTest.java:414){quote}
> STDOUT
> {quote}[2019-03-17 01:19:51,971] INFO Created server with tickTime 800 
> minSessionTimeout 1600 maxSessionTimeout 16000 datadir 
> /tmp/kafka-10997967593034298484/version-2 snapdir 
> /tmp/kafka-5184295822696533708/version-2 
> (org.apache.zookeeper.server.ZooKeeperServer:174) [2019-03-17 01:19:51,971] 
> INFO binding to port /127.0.0.1:0 
> (org.apache.zookeeper.server.NIOServerCnxnFactory:89) [2019-03-17 
> 01:19:51,973] INFO KafkaConfig values: advertised.host.name = null 
> advertised.listeners = null advertised.port = null 
> alter.config.policy.class.name = null 
> alter.log.dirs.replication.quota.window.num = 11 
> alter.log.dirs.replication.quota.window.size.seconds = 1 
> authorizer.class.name = auto.create.topics.enable = false 
> auto.leader.rebalance.enable = true background.threads = 10 broker.id = 0 
> broker.id.generation.enable = true broker.rack = null 
> client.quota.callback.class = null compression.type = producer 
> connection.failed.authentication.delay.ms = 100 connections.max.idle.ms = 
> 60 connections.max.reauth.ms = 0 control.plane.listener.name = null 
> controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 
> controlled.shutdown.retry.backoff.ms = 5000 controller.socket.timeout.ms = 
> 3 create.topic.policy.class.name = null default.replication.factor = 1 
> delegation.token.expiry.check.interval.ms = 360 
> delegation.token.expiry.time.ms = 8640 delegation.token.master.key = null 
> delegation.token.max.lifetime.ms = 60480 
> delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = 
> true fetch.purgatory.purge.interval.requests = 1000 
> group.initial.rebalance.delay.ms = 0 group.max.session.timeout.ms = 30 
> group.max.size = 2147483647 group.min.session.timeout.ms = 0 host.name = 
> localhost inter.broker.listener.name = null inter.broker.protocol.version = 
> 2.2-IV1 kafka.metrics.polling.interval.secs = 10 kafka.metrics.reporters = [] 
> leader.imbalance.check.interval.seconds = 300 
> leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = 
> PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL 
> listeners = null log.cleaner.backoff.ms = 15000 
> log.cleaner.dedupe.buffer.size = 2097152 log.cleaner.delete.retention.ms = 
> 8640 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 
> log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 
> 1.7976931348623157E308 log.cleaner.min.cleanable.ratio = 0.5 
> log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 
> log.cleanup.policy = [delete] log.dir = 
> /tmp/junit16020146621422955757/junit17406374597406011269 log.dirs = null 
> log.flush.interval.messages = 9223372036854775807 log.flush.interval.ms = 
> null log.flush.offset.checkpoint.interval.ms = 6 
> log.flush.scheduler.interval.ms = 9223372036854775807 
> log.flush.start.offset.checkpoint.interval.ms = 6 
> log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 
> log.m

[jira] [Resolved] (KAFKA-7157) Connect TimestampConverter SMT doesn't handle null values

2019-07-12 Thread Randall Hauch (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-7157.
--
   Resolution: Fixed
Fix Version/s: 2.3.1
   2.4.0
   2.2.2
   2.1.2
   2.0.2
   1.1.2
   1.0.3

> Connect TimestampConverter SMT doesn't handle null values
> -
>
> Key: KAFKA-7157
> URL: https://issues.apache.org/jira/browse/KAFKA-7157
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Randall Hauch
>Assignee: Valeria Vasylieva
>Priority: Major
> Fix For: 1.0.3, 1.1.2, 2.0.2, 2.1.2, 2.2.2, 2.4.0, 2.3.1
>
>
> TimestampConverter SMT is not able to handle null values (in any version), 
> so it always tries to apply the transformation to the value. Instead, it 
> needs to check for null and use the default value for the new schema's field.
> {noformat}
> [2018-07-03 02:31:52,490] ERROR Task MySourceConnector-2 threw an uncaught 
> and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask) 
> java.lang.NullPointerException 
> at 
> org.apache.kafka.connect.transforms.TimestampConverter$2.toRaw(TimestampConverter.java:137)
>  
> at 
> org.apache.kafka.connect.transforms.TimestampConverter.convertTimestamp(TimestampConverter.java:440)
>  
> at 
> org.apache.kafka.connect.transforms.TimestampConverter.applyValueWithSchema(TimestampConverter.java:368)
>  
> at 
> org.apache.kafka.connect.transforms.TimestampConverter.applyWithSchema(TimestampConverter.java:358)
>  
> at 
> org.apache.kafka.connect.transforms.TimestampConverter.apply(TimestampConverter.java:275)
>  
> at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:38)
>  
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:435)
>  
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:264) 
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:182)
>  
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:150)
>  
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146) 
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190) 
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  
> at java.lang.Thread.run(Thread.java:748) 
> [2018-07-03 02:31:52,491] ERROR Task is being killed and will not recover 
> until manually restarted (org.apache.kafka.connect.runtime.WorkerTask) 
> {noformat}
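
A minimal, self-contained illustration of the null handling described above (a toy 
helper, not the actual org.apache.kafka.connect.transforms.TimestampConverter code):

{code:java}
import java.util.Date;

public class NullSafeTimestampConvert {

    // Convert epoch millis to a Date, passing null through instead of dereferencing it.
    static Date toDateOrNull(Long epochMillis) {
        if (epochMillis == null) {
            return null;                  // tombstones / missing fields stay null
        }
        return new Date(epochMillis);
    }

    public static void main(String[] args) {
        System.out.println(toDateOrNull(1519939449000L)); // normal conversion
        System.out.println(toDateOrNull(null));           // no NullPointerException, prints "null"
    }
}
{code}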



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (KAFKA-8643) Incompatible MemberDescription constructor change

2019-07-12 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8643.

Resolution: Fixed

> Incompatible MemberDescription constructor change
> -
>
> Key: KAFKA-8643
> URL: https://issues.apache.org/jira/browse/KAFKA-8643
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> Accidentally deleted the existing public constructor interface in the 
> MemberDescription. Need to bring back the old constructors for compatibility.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8570) Downconversion could fail when log contains out of order message formats

2019-07-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884010#comment-16884010
 ] 

ASF GitHub Bot commented on KAFKA-8570:
---

hachikuji commented on pull request #7071: KAFKA-8570: Grow buffer to hold down 
converted records if it was insufficiently sized
URL: https://github.com/apache/kafka/pull/7071
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Downconversion could fail when log contains out of order message formats
> 
>
> Key: KAFKA-8570
> URL: https://issues.apache.org/jira/browse/KAFKA-8570
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dhruvil Shah
>Assignee: Dhruvil Shah
>Priority: Major
> Fix For: 2.0.2, 2.1.2, 2.2.2, 2.3.1
>
>
> When the log contains out-of-order message formats (for example a v2 message 
> followed by a v1 message), it is possible for down-conversion to fail in 
> certain scenarios where batches are compressed and greater than 1kB in size. 
> Down-conversion fails with a stack trace like the following:
> java.lang.IllegalArgumentException
> at java.nio.Buffer.limit(Buffer.java:275)
> at 
> org.apache.kafka.common.record.FileLogInputStream$FileChannelRecordBatch.writeTo(FileLogInputStream.java:176)
> at 
> org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:107)
> at 
> org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:242)
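
The pull request above describes growing the destination buffer when the initial size 
estimate is too small. A self-contained sketch of that idea (the sizing and names are 
illustrative, not Kafka's actual down-conversion code):

{code:java}
import java.nio.ByteBuffer;

public class GrowableBufferSketch {

    // Return a buffer that can accept 'required' more bytes, copying existing content if needed.
    static ByteBuffer ensureCapacity(ByteBuffer buffer, int required) {
        if (buffer.remaining() >= required) {
            return buffer;
        }
        int newCapacity = Math.max(buffer.capacity() * 2, buffer.position() + required);
        ByteBuffer grown = ByteBuffer.allocate(newCapacity);
        buffer.flip();          // switch the old buffer to read mode
        grown.put(buffer);      // copy what was already written
        return grown;
    }

    public static void main(String[] args) {
        ByteBuffer buffer = ByteBuffer.allocate(1024);  // initial estimate, e.g. ~1kB
        buffer.put(new byte[800]);
        buffer = ensureCapacity(buffer, 2048);          // a larger batch would not have fit
        System.out.println("capacity after growth: " + buffer.capacity());
    }
}
{code}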



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (KAFKA-8570) Downconversion could fail when log contains out of order message formats

2019-07-12 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-8570:
---
Fix Version/s: 1.1.2

> Downconversion could fail when log contains out of order message formats
> 
>
> Key: KAFKA-8570
> URL: https://issues.apache.org/jira/browse/KAFKA-8570
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dhruvil Shah
>Assignee: Dhruvil Shah
>Priority: Major
> Fix For: 1.1.2, 2.0.2, 2.1.2, 2.2.2, 2.3.1
>
>
> When the log contains out-of-order message formats (for example a v2 message 
> followed by a v1 message), it is possible for down-conversion to fail in 
> certain scenarios where batches are compressed and greater than 1kB in size. 
> Down-conversion fails with a stack trace like the following:
> java.lang.IllegalArgumentException
> at java.nio.Buffer.limit(Buffer.java:275)
> at 
> org.apache.kafka.common.record.FileLogInputStream$FileChannelRecordBatch.writeTo(FileLogInputStream.java:176)
> at 
> org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:107)
> at 
> org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:242)



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (KAFKA-8637) WriteBatch objects leak off-heap memory

2019-07-12 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-8637:
--

Assignee: Sophie Blee-Goldman

> WriteBatch objects leak off-heap memory
> ---
>
> Key: KAFKA-8637
> URL: https://issues.apache.org/jira/browse/KAFKA-8637
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.3.0
>Reporter: Sophie Blee-Goldman
>Assignee: Sophie Blee-Goldman
>Priority: Blocker
> Fix For: 2.1.2, 2.2.2, 2.4.0, 2.3.1
>
>
> In 2.1 we did some refactoring that led to the WriteBatch objects in 
> RocksDBSegmentedBytesStore#restoreAllInternal being created in a separate 
> method, rather than in a try-with-resources statement as used elsewhere. This 
> causes a memory leak as the WriteBatches are no longer closed automatically.
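
A sketch of the try-with-resources pattern the description refers to, using the RocksDB 
Java API (the method and parameters are illustrative, not the actual 
RocksDBSegmentedBytesStore code):

{code:java}
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.WriteBatch;
import org.rocksdb.WriteOptions;

public class WriteBatchRestoreSketch {

    // try-with-resources guarantees WriteBatch.close() runs and its off-heap memory is
    // released even if db.write(...) throws -- the guarantee the refactoring lost.
    static void restoreBatch(RocksDB db, byte[][] keys, byte[][] values) throws RocksDBException {
        try (WriteBatch batch = new WriteBatch();
             WriteOptions options = new WriteOptions()) {
            for (int i = 0; i < keys.length; i++) {
                batch.put(keys[i], values[i]);
            }
            db.write(options, batch);
        } // batch and options are closed here, so no native memory leaks
    }
}
{code}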



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (KAFKA-8637) WriteBatch objects leak off-heap memory

2019-07-12 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8637:
---
Fix Version/s: 2.4.0

> WriteBatch objects leak off-heap memory
> ---
>
> Key: KAFKA-8637
> URL: https://issues.apache.org/jira/browse/KAFKA-8637
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.3.0
>Reporter: Sophie Blee-Goldman
>Priority: Blocker
> Fix For: 2.1.2, 2.2.2, 2.4.0, 2.3.1
>
>
> In 2.1 we did some refactoring that led to the WriteBatch objects in 
> RocksDBSegmentedBytesStore#restoreAllInternal being created in a separate 
> method, rather than in a try-with-resources statement as used elsewhere. This 
> causes a memory leak as the WriteBatches are no longer closed automatically.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8637) WriteBatch objects leak off-heap memory

2019-07-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884004#comment-16884004
 ] 

ASF GitHub Bot commented on KAFKA-8637:
---

mjsax commented on pull request #7077: KAFKA-8637: WriteBatch objects leak 
off-heap memory
URL: https://github.com/apache/kafka/pull/7077
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> WriteBatch objects leak off-heap memory
> ---
>
> Key: KAFKA-8637
> URL: https://issues.apache.org/jira/browse/KAFKA-8637
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.3.0
>Reporter: Sophie Blee-Goldman
>Priority: Blocker
> Fix For: 2.1.2, 2.2.2, 2.3.1
>
>
> In 2.1 we did some refactoring that led to the WriteBatch objects in 
> RocksDBSegmentedBytesStore#restoreAllInternal being created in a separate 
> method, rather than in a try-with-resources statement as used elsewhere. This 
> causes a memory leak as the WriteBatches are no longer closed automatically.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-7157) Connect TimestampConverter SMT doesn't handle null values

2019-07-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883997#comment-16883997
 ] 

ASF GitHub Bot commented on KAFKA-7157:
---

rhauch commented on pull request #7070: KAFKA-7157: Fix handling of nulls in 
TimestampConverter
URL: https://github.com/apache/kafka/pull/7070
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Connect TimestampConverter SMT doesn't handle null values
> -
>
> Key: KAFKA-7157
> URL: https://issues.apache.org/jira/browse/KAFKA-7157
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Randall Hauch
>Assignee: Valeria Vasylieva
>Priority: Major
>
> TimestampConverter SMT is not able to handle null values (in any version), 
> so it always tries to apply the transformation to the value. Instead, it 
> needs to check for null and use the default value for the new schema's field.
> {noformat}
> [2018-07-03 02:31:52,490] ERROR Task MySourceConnector-2 threw an uncaught 
> and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask) 
> java.lang.NullPointerException 
> at 
> org.apache.kafka.connect.transforms.TimestampConverter$2.toRaw(TimestampConverter.java:137)
>  
> at 
> org.apache.kafka.connect.transforms.TimestampConverter.convertTimestamp(TimestampConverter.java:440)
>  
> at 
> org.apache.kafka.connect.transforms.TimestampConverter.applyValueWithSchema(TimestampConverter.java:368)
>  
> at 
> org.apache.kafka.connect.transforms.TimestampConverter.applyWithSchema(TimestampConverter.java:358)
>  
> at 
> org.apache.kafka.connect.transforms.TimestampConverter.apply(TimestampConverter.java:275)
>  
> at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:38)
>  
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:435)
>  
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:264) 
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:182)
>  
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:150)
>  
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146) 
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190) 
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  
> at java.lang.Thread.run(Thread.java:748) 
> [2018-07-03 02:31:52,491] ERROR Task is being killed and will not recover 
> until manually restarted (org.apache.kafka.connect.runtime.WorkerTask) 
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (KAFKA-6605) Flatten SMT does not properly handle fields that are null

2019-07-12 Thread Randall Hauch (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-6605.
--
   Resolution: Fixed
 Reviewer: Randall Hauch
Fix Version/s: 2.3.1
   2.4.0
   2.2.2
   2.1.2
   2.0.2
   1.1.2
   1.0.3

> Flatten SMT does not properly handle fields that are null
> -
>
> Key: KAFKA-6605
> URL: https://issues.apache.org/jira/browse/KAFKA-6605
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Randall Hauch
>Assignee: Michal Borowiecki
>Priority: Major
> Fix For: 1.0.3, 1.1.2, 2.0.2, 2.1.2, 2.2.2, 2.4.0, 2.3.1
>
>
> When a message has a null field, the `Flatten` SMT does not properly handle 
> this and throws an NPE. Consider this message from Debezium:
> {code}
> {
>   "before": null,
>   "after": {
> "dbserver1.mydb.team.Value": {
>   "id": 1,
>   "name": "kafka",
>   "email": "ka...@apache.org",
>   "last_modified": 1519939449000
> }
>   },
>   "source": {
> "version": {
>   "string": "0.7.3"
> },
> "name": "dbserver1",
> "server_id": 0,
> "ts_sec": 0,
> "gtid": null,
> "file": "mysql-bin.03",
> "pos": 154,
> "row": 0,
> "snapshot": {
>   "boolean": true
> },
> "thread": null,
> "db": {
>   "string": "mydb"
> },
> "table": {
>   "string": "team"
> }
>   },
>   "op": "c",
>   "ts_ms": {
> "long": 1519939520285
>   }
> }
> {code}
> Note how `before` is null; this event represents a row that was INSERTED and 
> thus there is no `before` state of the row. This results in an NPE:
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.kafka.connect.transforms.Flatten.buildWithSchema(Flatten.java:219)
> at 
> org.apache.kafka.connect.transforms.Flatten.buildWithSchema(Flatten.java:234)
> at 
> org.apache.kafka.connect.transforms.Flatten.applyWithSchema(Flatten.java:151)
> at org.apache.kafka.connect.transforms.Flatten.apply(Flatten.java:75)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:38)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:211)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:187)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
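
As a minimal, self-contained illustration of the missing null guard (a toy Map-based 
flattener, not the actual org.apache.kafka.connect.transforms.Flatten code), a null 
`before` should be emitted as-is rather than recursed into:

{code:java}
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class FlattenSketch {

    static Map<String, Object> flatten(String prefix, Map<String, Object> struct, Map<String, Object> out) {
        for (Map.Entry<String, Object> field : struct.entrySet()) {
            String name = prefix.isEmpty() ? field.getKey() : prefix + "_" + field.getKey();
            Object value = field.getValue();
            if (value == null) {
                out.put(name, null);            // keep the null instead of recursing into it (the NPE case)
            } else if (value instanceof Map) {
                @SuppressWarnings("unchecked")
                Map<String, Object> nested = (Map<String, Object>) value;
                flatten(name, nested, out);
            } else {
                out.put(name, value);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> event = new LinkedHashMap<>();
        event.put("before", null);              // the INSERT case from the description
        event.put("after", Map.of("id", 1, "name", "kafka"));
        System.out.println(flatten("", event, new HashMap<>()));
    }
}
{code}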
> Here's the connector configuration that was used:
> {code}
> {
> "name": "debezium-connector-flatten",
> "config": {
> "connector.class": "io.debezium.connector.mysql.MySqlConnector",
> "tasks.max": "1",
> "database.hostname": "mysql",
> "database.port": "3306",
> "database.user": "debezium",
> "database.password": "dbz",
> "database.server.id": "223345",
> "database.server.name": "dbserver-flatten",
> "database.whitelist": "mydb",
> "database.history.kafka.bootstrap.servers": 
> "kafka-1:9092,kafka-2:9092,kafka-3:9092",
> "database.history.kafka.topic": "schema-flatten.mydb",
> "include.schema.changes": "true",
> "transforms": "flatten",
> "transforms.flatten.type": 
> "org.apache.kafka.connect.transforms.Flatten$Value",
> "transforms.flatten.delimiter": "_"
>   }
> }
> {code}
> Note that the above configuration sets the delimiter to `_`. The default 
> delimiter is `.`, which is not a valid character within an Avro field, and 
> doing this results in the following exception:
> {noformat}
> org.apache.avro.SchemaParseException: Illegal character in: source.version
>   at org.apache.avro.Schema.validateName(Schema.java:1151)
>   at org.apache.avro.Schema.access$200(Schema.java:81)
>   at org.apache.avro.Schema$Field.(Schema.java:403)
>   at 
> org.apache.avro.SchemaBuilder$FieldBuilder.completeField(SchemaBuilder.java:2124)
>   at 
> org.apache.avro.SchemaBuilder$FieldBuilder.completeField(SchemaBuilder.java:2116)
>   at 
> org.apache.avro.SchemaBuilder$FieldBuilder.access$5300(SchemaBuilder.java:2034)
>   at 
> org.apache.avro.SchemaBuilder$GenericDefault.withDefault(SchemaBuilder.java:2423)
>   at 
> io.co

[jira] [Commented] (KAFKA-6605) Flatten SMT does not properly handle fields that are null

2019-07-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883986#comment-16883986
 ] 

ASF GitHub Bot commented on KAFKA-6605:
---

rhauch commented on pull request #5706: KAFKA-6605 fix NPE in Flatten when 
optional Struct is null - backport…
URL: https://github.com/apache/kafka/pull/5706
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Flatten SMT does not properly handle fields that are null
> -
>
> Key: KAFKA-6605
> URL: https://issues.apache.org/jira/browse/KAFKA-6605
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Randall Hauch
>Assignee: Michal Borowiecki
>Priority: Major
>
> When a message has a null field, the `Flatten` SMT does not properly handle 
> this and throws an NPE. Consider this message from Debezium:
> {code}
> {
>   "before": null,
>   "after": {
> "dbserver1.mydb.team.Value": {
>   "id": 1,
>   "name": "kafka",
>   "email": "ka...@apache.org",
>   "last_modified": 1519939449000
> }
>   },
>   "source": {
> "version": {
>   "string": "0.7.3"
> },
> "name": "dbserver1",
> "server_id": 0,
> "ts_sec": 0,
> "gtid": null,
> "file": "mysql-bin.03",
> "pos": 154,
> "row": 0,
> "snapshot": {
>   "boolean": true
> },
> "thread": null,
> "db": {
>   "string": "mydb"
> },
> "table": {
>   "string": "team"
> }
>   },
>   "op": "c",
>   "ts_ms": {
> "long": 1519939520285
>   }
> }
> {code}
> Note how `before` is null; this event represents a row that was INSERTED and 
> thus there is no `before` state of the row. This results in an NPE:
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.kafka.connect.transforms.Flatten.buildWithSchema(Flatten.java:219)
> at 
> org.apache.kafka.connect.transforms.Flatten.buildWithSchema(Flatten.java:234)
> at 
> org.apache.kafka.connect.transforms.Flatten.applyWithSchema(Flatten.java:151)
> at org.apache.kafka.connect.transforms.Flatten.apply(Flatten.java:75)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:38)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:211)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:187)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Here's the connector configuration that was used:
> {code}
> {
> "name": "debezium-connector-flatten",
> "config": {
> "connector.class": "io.debezium.connector.mysql.MySqlConnector",
> "tasks.max": "1",
> "database.hostname": "mysql",
> "database.port": "3306",
> "database.user": "debezium",
> "database.password": "dbz",
> "database.server.id": "223345",
> "database.server.name": "dbserver-flatten",
> "database.whitelist": "mydb",
> "database.history.kafka.bootstrap.servers": 
> "kafka-1:9092,kafka-2:9092,kafka-3:9092",
> "database.history.kafka.topic": "schema-flatten.mydb",
> "include.schema.changes": "true",
> "transforms": "flatten",
> "transforms.flatten.type": 
> "org.apache.kafka.connect.transforms.Flatten$Value",
> "transforms.flatten.delimiter": "_"
>   }
> }
> {code}
> Note that the above configuration sets the delimiter to `_`. The default 
> delimiter is `.`, which is not a valid character within an Avro field, and 
> doing this results in the following exception:
> {noformat}
> org.apache.avro.SchemaParseException: Illegal character in: source.version
>   at org.apache.avro.Schema.validateName(Schema.java:1151)
>   at org.apache.avro.Schema.access$200(Schema.java:81)
>   at org.apache.avro.Schema$Field.(Schema.java:403)
>   at 
> org.apache.avro.SchemaBuilder$FieldBuilder.completeField(SchemaBuilder.java:2124)
>   at 
> org.apache.avro.SchemaBuilder$FieldBuilder.complete

[jira] [Assigned] (KAFKA-8360) Docs do not mention RequestQueueSize JMX metric

2019-07-12 Thread Ankit Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Kumar reassigned KAFKA-8360:
--

Assignee: Ankit Kumar

> Docs do not mention RequestQueueSize JMX metric
> ---
>
> Key: KAFKA-8360
> URL: https://issues.apache.org/jira/browse/KAFKA-8360
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation, metrics, network
>Reporter: Charles Francis Larrieu Casias
>Assignee: Ankit Kumar
>Priority: Major
>  Labels: documentation
>
> In the [monitoring 
> documentation|https://kafka.apache.org/documentation/#monitoring] there is 
> no mention of the `kafka.network:type=RequestChannel,name=RequestQueueSize` 
> JMX metric. This is an important metric because it can indicate that there 
> are too many requests in the queue and suggest either increasing 
> `queued.max.requests` (along with perhaps memory) or increasing 
> `num.io.threads`.
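
For reference, a self-contained sketch of reading this metric over JMX (the host/port, 
the assumption that the broker exposes a remote JMX port such as JMX_PORT=9999, and the 
gauge attribute name "Value" are illustrative assumptions):

{code:java}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RequestQueueSizeReader {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            ObjectName name = new ObjectName("kafka.network:type=RequestChannel,name=RequestQueueSize");
            Object queueSize = connection.getAttribute(name, "Value");
            System.out.println("RequestQueueSize = " + queueSize);
        }
    }
}
{code}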



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (KAFKA-8657) Client-side Automatic Topic Creation on Producer

2019-07-12 Thread Justine Olshan (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justine Olshan updated KAFKA-8657:
--
Summary: Client-side Automatic Topic Creation on Producer  (was: Automatic 
Topic Creation on Producer)

> Client-side Automatic Topic Creation on Producer
> 
>
> Key: KAFKA-8657
> URL: https://issues.apache.org/jira/browse/KAFKA-8657
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Justine Olshan
>Assignee: Justine Olshan
>Priority: Minor
>
> Kafka has a feature that allows for the auto-creation of topics. Usually this 
> is done through a metadata request to the broker. KIP-487 aims to give the 
> producer the functionality to auto-create topics through a separate request. 
> See 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-487%3A+Automatic+Topic+Creation+on+Producer]
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8646) Materialized.withLoggingDisabled() does not disable changelog topics creation

2019-07-12 Thread Bill Bejeck (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883951#comment-16883951
 ] 

Bill Bejeck commented on KAFKA-8646:


[~jmhostalet], thanks for reporting this; we'll look into it.

> Materialized.withLoggingDisabled() does not disable changelog topics creation
> -
>
> Key: KAFKA-8646
> URL: https://issues.apache.org/jira/browse/KAFKA-8646
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: jmhostalet
>Assignee: Bill Bejeck
>Priority: Minor
>
> I have a cluster with 3 brokers running version 0.11.
> My kafka-streams app was using kafka-client 0.11.0.1, but recently I've 
> migrated to 2.3.0.
> I have not executed any migration as my data is disposable; therefore I have 
> deleted all intermediate topics, except the input and output topics.
> My streams config is: 
> {code:java}
> application.id = consumer-id-v1.00
> application.server =
> bootstrap.servers = [foo1:9092, foo2:9092, foo3:9092]
> buffered.records.per.partition = 1000
> cache.max.bytes.buffering = 524288000
> client.id =
> commit.interval.ms = 3
> connections.max.idle.ms = 54
> default.deserialization.exception.handler = class 
> org.apache.kafka.streams.errors.LogAndFailExceptionHandler
> default.key.serde = class 
> org.apache.kafka.common.serialization.Serdes$StringSerde
> default.production.exception.handler = class 
> org.apache.kafka.streams.errors.DefaultProductionExceptionHandler
> default.timestamp.extractor = class com.acme.stream.TimeExtractor
> default.value.serde = class com.acme.serde.MyDtoSerde
> max.task.idle.ms = 0
> metadata.max.age.ms = 30
> metric.reporters = []
> metrics.num.samples = 2
> metrics.recording.level = INFO
> metrics.sample.window.ms = 3
> num.standby.replicas = 0
> num.stream.threads = 25
> partition.grouper = class 
> org.apache.kafka.streams.processor.DefaultPartitionGrouper
> poll.ms = 100
> processing.guarantee = at_least_once
> receive.buffer.bytes = 32768
> reconnect.backoff.max.ms = 1000
> reconnect.backoff.ms = 50
> replication.factor = 1
> request.timeout.ms = 4
> retries = 0
> retry.backoff.ms = 100
> rocksdb.config.setter = null
> security.protocol = PLAINTEXT
> send.buffer.bytes = 131072
> state.cleanup.delay.ms = 60
> state.dir = /tmp/kafka-streams
> topology.optimization = none
> upgrade.from = null
> windowstore.changelog.additional.retention.ms = 8640
> {code}
> in my stream I am using withLoggingDisabled 
> {code:java}
> stream.filter((key, val) -> val!=null)
> .selectKey((key, val) -> getId(val))
> .groupByKey(Grouped.as("key-grouper").with(Serdes.String(), new 
> MyDtoSerde()))
> .windowedBy(TimeWindows.of(aggregationWindowSizeDuration)
>.grace(windowRetentionPeriodDuration))
> .aggregate(MyDto::new,
>new MyUpdater(),
>Materialized.as("aggregation-updater")
>.withLoggingDisabled()
>.with(Serdes.String(), new MyDtoSerde()))
> .toStream((k, v) -> k.key())
> .mapValues(val -> { ...
> {code}
> but changelog topics are created (KSTREAM-AGGREGATE-STATE-STORE), no matter 
> whether I delete them before running the app again or change the 
> application.id.
> With a new application.id, the topics are recreated with the new prefix.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8555) Flaky test ExampleConnectIntegrationTest#testSourceConnector

2019-07-12 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883934#comment-16883934
 ] 

Matthias J. Sax commented on KAFKA-8555:


[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/23372/testReport/junit/org.apache.kafka.connect.integration/ExampleConnectIntegrationTest/testSourceConnector/]

> Flaky test ExampleConnectIntegrationTest#testSourceConnector
> 
>
> Key: KAFKA-8555
> URL: https://issues.apache.org/jira/browse/KAFKA-8555
> Project: Kafka
>  Issue Type: Bug
>Reporter: Boyang Chen
>Priority: Major
> Attachments: log-job139.txt, log-job141.txt, log-job23145.txt, 
> log-job23215.txt, log-job6046.txt
>
>
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/22798/console]
> *02:03:21* 
> org.apache.kafka.connect.integration.ExampleConnectIntegrationTest.testSourceConnector
>  failed, log available in 
> /home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.11/connect/runtime/build/reports/testOutput/org.apache.kafka.connect.integration.ExampleConnectIntegrationTest.testSourceConnector.test.stdout*02:03:21*
>  *02:03:21* 
> org.apache.kafka.connect.integration.ExampleConnectIntegrationTest > 
> testSourceConnector FAILED*02:03:21* 
> org.apache.kafka.connect.errors.DataException: Insufficient records committed 
> by connector simple-conn in 15000 millis. Records expected=2000, 
> actual=1013*02:03:21* at 
> org.apache.kafka.connect.integration.ConnectorHandle.awaitCommits(ConnectorHandle.java:188)*02:03:21*
>  at 
> org.apache.kafka.connect.integration.ExampleConnectIntegrationTest.testSourceConnector(ExampleConnectIntegrationTest.java:181)*02:03:21*



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-4222) Transient failure in QueryableStateIntegrationTest.queryOnRebalance

2019-07-12 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-4222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883932#comment-16883932
 ] 

Matthias J. Sax commented on KAFKA-4222:


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/224/testReport/junit/org.apache.kafka.streams.integration/QueryableStateIntegrationTest/queryOnRebalance/]

> Transient failure in QueryableStateIntegrationTest.queryOnRebalance
> ---
>
> Key: KAFKA-4222
> URL: https://issues.apache.org/jira/browse/KAFKA-4222
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams, unit tests
>Reporter: Jason Gustafson
>Priority: Major
> Fix For: 0.11.0.0
>
>
> Seen here: https://builds.apache.org/job/kafka-trunk-jdk8/915/console
> {code}
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> queryOnRebalance[1] FAILED
> java.lang.AssertionError: Condition not met within timeout 3. waiting 
> for metadata, store and value to be non null
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyAllKVKeys(QueryableStateIntegrationTest.java:263)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:342)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (KAFKA-8661) Flaky Test RebalanceSourceConnectorsIntegrationTest#testStartTwoConnectors

2019-07-12 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8661:
--

 Summary: Flaky Test 
RebalanceSourceConnectorsIntegrationTest#testStartTwoConnectors
 Key: KAFKA-8661
 URL: https://issues.apache.org/jira/browse/KAFKA-8661
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect, unit tests
Affects Versions: 2.3.0
Reporter: Matthias J. Sax
 Fix For: 2.4.0, 2.3.1


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/224/testReport/junit/org.apache.kafka.connect.integration/RebalanceSourceConnectorsIntegrationTest/testStartTwoConnectors/]
{quote}java.lang.AssertionError: Condition not met within timeout 3. 
Connector tasks did not start in time. at 
org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:376) at 
org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:353) at 
org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest.testStartTwoConnectors(RebalanceSourceConnectorsIntegrationTest.java:120){quote}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (KAFKA-8646) Materialized.withLoggingDisabled() does not disable changelog topics creation

2019-07-12 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-8646:
--

Assignee: Bill Bejeck

> Materialized.withLoggingDisabled() does not disable changelog topics creation
> -
>
> Key: KAFKA-8646
> URL: https://issues.apache.org/jira/browse/KAFKA-8646
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: jmhostalet
>Assignee: Bill Bejeck
>Priority: Minor
>
> I have a cluster with 3 brokers running version 0.11.
> My kafka-streams app was using kafka-client 0.11.0.1, but recently I've 
> migrated to 2.3.0.
> I have not executed any migration as my data is disposable; therefore I have 
> deleted all intermediate topics, except the input and output topics.
> My streams config is: 
> {code:java}
> application.id = consumer-id-v1.00
> application.server =
> bootstrap.servers = [foo1:9092, foo2:9092, foo3:9092]
> buffered.records.per.partition = 1000
> cache.max.bytes.buffering = 524288000
> client.id =
> commit.interval.ms = 3
> connections.max.idle.ms = 54
> default.deserialization.exception.handler = class 
> org.apache.kafka.streams.errors.LogAndFailExceptionHandler
> default.key.serde = class 
> org.apache.kafka.common.serialization.Serdes$StringSerde
> default.production.exception.handler = class 
> org.apache.kafka.streams.errors.DefaultProductionExceptionHandler
> default.timestamp.extractor = class com.acme.stream.TimeExtractor
> default.value.serde = class com.acme.serde.MyDtoSerde
> max.task.idle.ms = 0
> metadata.max.age.ms = 30
> metric.reporters = []
> metrics.num.samples = 2
> metrics.recording.level = INFO
> metrics.sample.window.ms = 3
> num.standby.replicas = 0
> num.stream.threads = 25
> partition.grouper = class 
> org.apache.kafka.streams.processor.DefaultPartitionGrouper
> poll.ms = 100
> processing.guarantee = at_least_once
> receive.buffer.bytes = 32768
> reconnect.backoff.max.ms = 1000
> reconnect.backoff.ms = 50
> replication.factor = 1
> request.timeout.ms = 4
> retries = 0
> retry.backoff.ms = 100
> rocksdb.config.setter = null
> security.protocol = PLAINTEXT
> send.buffer.bytes = 131072
> state.cleanup.delay.ms = 60
> state.dir = /tmp/kafka-streams
> topology.optimization = none
> upgrade.from = null
> windowstore.changelog.additional.retention.ms = 8640
> {code}
> in my stream I am using withLoggingDisabled 
> {code:java}
> stream.filter((key, val) -> val!=null)
> .selectKey((key, val) -> getId(val))
> .groupByKey(Grouped.as("key-grouper").with(Serdes.String(), new 
> MyDtoSerde()))
> .windowedBy(TimeWindows.of(aggregationWindowSizeDuration)
>.grace(windowRetentionPeriodDuration))
> .aggregate(MyDto::new,
>new MyUpdater(),
>Materialized.as("aggregation-updater")
>.withLoggingDisabled()
>.with(Serdes.String(), new MyDtoSerde()))
> .toStream((k, v) -> k.key())
> .mapValues(val -> { ...
> {code}
> but changelog topics are created (KSTREAM-AGGREGATE-STATE-STORE), no matter 
> whether I delete them before running the app again or change the 
> application.id.
> With a new application.id, the topics are recreated with the new prefix.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-6605) Flatten SMT does not properly handle fields that are null

2019-07-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883910#comment-16883910
 ] 

ASF GitHub Bot commented on KAFKA-6605:
---

rhauch commented on pull request #5705: KAFKA-6605 fix NPE in Flatten when 
optional Struct is null
URL: https://github.com/apache/kafka/pull/5705
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Flatten SMT does not properly handle fields that are null
> -
>
> Key: KAFKA-6605
> URL: https://issues.apache.org/jira/browse/KAFKA-6605
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Randall Hauch
>Assignee: Michal Borowiecki
>Priority: Major
>
> When a message has a null field, the `Flatten` SMT does not properly handle 
> this and throws an NPE. Consider this message from Debezium:
> {code}
> {
>   "before": null,
>   "after": {
> "dbserver1.mydb.team.Value": {
>   "id": 1,
>   "name": "kafka",
>   "email": "ka...@apache.org",
>   "last_modified": 1519939449000
> }
>   },
>   "source": {
> "version": {
>   "string": "0.7.3"
> },
> "name": "dbserver1",
> "server_id": 0,
> "ts_sec": 0,
> "gtid": null,
> "file": "mysql-bin.03",
> "pos": 154,
> "row": 0,
> "snapshot": {
>   "boolean": true
> },
> "thread": null,
> "db": {
>   "string": "mydb"
> },
> "table": {
>   "string": "team"
> }
>   },
>   "op": "c",
>   "ts_ms": {
> "long": 1519939520285
>   }
> }
> {code}
> Note how `before` is null; this event represents a row that was INSERTED and thus 
> there is no `before` state of the row. This results in an NPE:
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.kafka.connect.transforms.Flatten.buildWithSchema(Flatten.java:219)
> at 
> org.apache.kafka.connect.transforms.Flatten.buildWithSchema(Flatten.java:234)
> at 
> org.apache.kafka.connect.transforms.Flatten.applyWithSchema(Flatten.java:151)
> at org.apache.kafka.connect.transforms.Flatten.apply(Flatten.java:75)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:38)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:211)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:187)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
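A sketch of the kind of null guard such a fix needs (not necessarily the actual patch); `buildWithSchema`, `flattenedName` and `newValue` are illustrative names:

{code:java}
// Hedged sketch of the idea: when flattening a struct, recurse into a nested struct
// only if it is present; a null optional struct (like Debezium's "before" on an
// INSERT) simply contributes nothing to the flattened record.
if (field.schema().type() == Schema.Type.STRUCT) {
    final Struct nested = (Struct) value.get(field);
    if (nested != null) {
        buildWithSchema(nested, flattenedName, newValue);
    }
} else {
    newValue.put(flattenedName, value.get(field));
}
{code}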
> Here's the connector configuration that was used:
> {code}
> {
> "name": "debezium-connector-flatten",
> "config": {
> "connector.class": "io.debezium.connector.mysql.MySqlConnector",
> "tasks.max": "1",
> "database.hostname": "mysql",
> "database.port": "3306",
> "database.user": "debezium",
> "database.password": "dbz",
> "database.server.id": "223345",
> "database.server.name": "dbserver-flatten",
> "database.whitelist": "mydb",
> "database.history.kafka.bootstrap.servers": 
> "kafka-1:9092,kafka-2:9092,kafka-3:9092",
> "database.history.kafka.topic": "schema-flatten.mydb",
> "include.schema.changes": "true",
> "transforms": "flatten",
> "transforms.flatten.type": 
> "org.apache.kafka.connect.transforms.Flatten$Value",
> "transforms.flatten.delimiter": "_"
>   }
> }
> {code}
> Note that the above configuration sets the delimiter to `_`. The default 
> delimiter is `.`, which is not a valid character within an Avro field name, and 
> using the default results in the following exception:
> {noformat}
> org.apache.avro.SchemaParseException: Illegal character in: source.version
>   at org.apache.avro.Schema.validateName(Schema.java:1151)
>   at org.apache.avro.Schema.access$200(Schema.java:81)
>   at org.apache.avro.Schema$Field.(Schema.java:403)
>   at 
> org.apache.avro.SchemaBuilder$FieldBuilder.completeField(SchemaBuilder.java:2124)
>   at 
> org.apache.avro.SchemaBuilder$FieldBuilder.completeField(Schema

[jira] [Commented] (KAFKA-8198) KStreams testing docs use non-existent method "pipe"

2019-07-12 Thread Bill Bejeck (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883908#comment-16883908
 ] 

Bill Bejeck commented on KAFKA-8198:


cherry-picked to 2.3, 2.2, 2.1, 2.0 and 1.1

> KStreams testing docs use non-existent method "pipe"
> 
>
> Key: KAFKA-8198
> URL: https://issues.apache.org/jira/browse/KAFKA-8198
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation, streams
>Affects Versions: 1.1.1, 2.0.1, 2.2.0, 2.1.1
>Reporter: Michael Drogalis
>Assignee: Slim Ouertani
>Priority: Minor
>  Labels: documentation, newbie
>
> In [the testing docs for 
> KStreams|https://kafka.apache.org/20/documentation/streams/developer-guide/testing.html],
>  we use the following code snippet:
> {code:java}
> ConsumerRecordFactory<String, Integer> factory = new 
> ConsumerRecordFactory<>("input-topic", new StringSerializer(), new 
> IntegerSerializer());
> testDriver.pipe(factory.create("key", 42L));
> {code}
> We should correct the docs to use the pipeInput method.
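For reference, a sketch of the corrected snippet against the 2.x test-utils API (the value is written as an int to match the IntegerSerializer):

{code:java}
// Hedged sketch of the corrected docs snippet: pipeInput(...) is the method that exists
// on TopologyTestDriver; the generics match the String/Integer serializers.
ConsumerRecordFactory<String, Integer> factory =
    new ConsumerRecordFactory<>("input-topic", new StringSerializer(), new IntegerSerializer());
testDriver.pipeInput(factory.create("key", 42));
{code}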



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (KAFKA-8198) KStreams testing docs use non-existent method "pipe"

2019-07-12 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8198:
---
Fix Version/s: 1.1.0
   2.0.0
   2.1.0
   2.2.0
   2.3.0
   2.4.0

> KStreams testing docs use non-existent method "pipe"
> 
>
> Key: KAFKA-8198
> URL: https://issues.apache.org/jira/browse/KAFKA-8198
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation, streams
>Affects Versions: 1.1.1, 2.0.1, 2.2.0, 2.1.1
>Reporter: Michael Drogalis
>Assignee: Slim Ouertani
>Priority: Minor
>  Labels: documentation, newbie
> Fix For: 1.1.0, 2.0.0, 2.1.0, 2.2.0, 2.3.0, 2.4.0
>
>
> In [the testing docs for 
> KStreams|https://kafka.apache.org/20/documentation/streams/developer-guide/testing.html],
>  we use the following code snippet:
> {code:java}
> ConsumerRecordFactory<String, Integer> factory = new 
> ConsumerRecordFactory<>("input-topic", new StringSerializer(), new 
> IntegerSerializer());
> testDriver.pipe(factory.create("key", 42L));
> {code}
> We should correct the docs to use the pipeInput method.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (KAFKA-8198) KStreams testing docs use non-existent method "pipe"

2019-07-12 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-8198:
--

Assignee: Slim Ouertani  (was: Victoria Bialas)

> KStreams testing docs use non-existent method "pipe"
> 
>
> Key: KAFKA-8198
> URL: https://issues.apache.org/jira/browse/KAFKA-8198
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation, streams
>Affects Versions: 1.1.1, 2.0.1, 2.2.0, 2.1.1
>Reporter: Michael Drogalis
>Assignee: Slim Ouertani
>Priority: Minor
>  Labels: documentation, newbie
>
> In [the testing docs for 
> KStreams|https://kafka.apache.org/20/documentation/streams/developer-guide/testing.html],
>  we use the following code snippet:
> {code:java}
> ConsumerRecordFactory<String, Integer> factory = new 
> ConsumerRecordFactory<>("input-topic", new StringSerializer(), new 
> IntegerSerializer());
> testDriver.pipe(factory.create("key", 42L));
> {code}
> We should correct the docs to use the pipeInput method.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (KAFKA-8198) KStreams testing docs use non-existent method "pipe"

2019-07-12 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8198.

Resolution: Fixed

> KStreams testing docs use non-existent method "pipe"
> 
>
> Key: KAFKA-8198
> URL: https://issues.apache.org/jira/browse/KAFKA-8198
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation, streams
>Affects Versions: 1.1.1, 2.0.1, 2.2.0, 2.1.1
>Reporter: Michael Drogalis
>Assignee: Slim Ouertani
>Priority: Minor
>  Labels: documentation, newbie
>
> In [the testing docs for 
> KStreams|https://kafka.apache.org/20/documentation/streams/developer-guide/testing.html],
>  we use the following code snippet:
> {code:java}
> ConsumerRecordFactory<String, Integer> factory = new 
> ConsumerRecordFactory<>("input-topic", new StringSerializer(), new 
> IntegerSerializer());
> testDriver.pipe(factory.create("key", 42L));
> {code}
> We should correct the docs to use the pipeInput method.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8602) StreamThread Dies Because Restore Consumer is not Subscribed to Any Topic

2019-07-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883864#comment-16883864
 ] 

ASF GitHub Bot commented on KAFKA-8602:
---

cadonna commented on pull request #7008: KAFKA-8602: Fix bug in stand-by task 
creation
URL: https://github.com/apache/kafka/pull/7008
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> StreamThread Dies Because Restore Consumer is not Subscribed to Any Topic
> -
>
> Key: KAFKA-8602
> URL: https://issues.apache.org/jira/browse/KAFKA-8602
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Critical
>
> StreamThread dies with the following exception:
> {code:java}
> java.lang.IllegalStateException: Consumer is not subscribed to any topics or 
> assigned any partitions
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1206)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1199)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeUpdateStandbyTasks(StreamThread.java:1126)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:923)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:805)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:774)
> {code}
> The reason is that the restore consumer is not subscribed to any topic. This 
> happens when a StreamThread gets assigned standby tasks for sub-topologies 
> with just state stores with disabled logging.
> To reproduce the bug start two applications with one StreamThread and one 
> standby replica each and the following topology. The input topic should have 
> two partitions:
> {code:java}
> final StreamsBuilder builder = new StreamsBuilder();
> final String stateStoreName = "myTransformState";
> final StoreBuilder<KeyValueStore<Integer, Integer>> keyValueStoreBuilder =
> Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore(stateStoreName),
> Serdes.Integer(),
> Serdes.Integer())
> .withLoggingDisabled();
> builder.addStateStore(keyValueStoreBuilder);
> builder.stream(INPUT_TOPIC, Consumed.with(Serdes.Integer(), Serdes.Integer()))
> .transform(() -> new Transformer<Integer, Integer, KeyValue<Integer, Integer>>() {
> private KeyValueStore<Integer, Integer> state;
> @SuppressWarnings("unchecked")
> @Override
> public void init(final ProcessorContext context) {
> state = (KeyValueStore<Integer, Integer>) context.getStateStore(stateStoreName);
> }
> @Override
> public KeyValue<Integer, Integer> transform(final Integer key, final Integer value) {
> final KeyValue<Integer, Integer> result = new KeyValue<>(key, value);
> return result;
> }
> @Override
> public void close() {}
> }, stateStoreName)
> .to(OUTPUT_TOPIC);
> {code}
> Both StreamThreads should die with the above exception.
> The root cause is that standby tasks are created although all state stores of 
> the sub-topology have logging disabled.  
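A sketch of the kind of guard the fix needs, with illustrative names only (`subTopology`, `changelogTopicNameFor` and `createStandbyTask` are not the real internals): only create a standby task when the sub-topology has at least one changelogged store to restore.

{code:java}
// Hedged sketch, not the actual patch; helper names are illustrative.
final boolean hasChangeloggedStore = subTopology.stateStores().stream()
        .anyMatch(store -> changelogTopicNameFor(store) != null);
if (hasChangeloggedStore) {
    standbyTasks.put(taskId, createStandbyTask(taskId, partitions));
}
// Otherwise there is nothing to restore, so the restore consumer is never polled
// without a subscription and the IllegalStateException above cannot occur.
{code}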



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8602) StreamThread Dies Because Restore Consumer is not Subscribed to Any Topic

2019-07-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883863#comment-16883863
 ] 

ASF GitHub Bot commented on KAFKA-8602:
---

cadonna commented on pull request #7008: KAFKA-8602: Fix bug in stand-by task 
creation
URL: https://github.com/apache/kafka/pull/7008
 
 
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> StreamThread Dies Because Restore Consumer is not Subscribed to Any Topic
> -
>
> Key: KAFKA-8602
> URL: https://issues.apache.org/jira/browse/KAFKA-8602
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Critical
>
> StreamThread dies with the following exception:
> {code:java}
> java.lang.IllegalStateException: Consumer is not subscribed to any topics or 
> assigned any partitions
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1206)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1199)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeUpdateStandbyTasks(StreamThread.java:1126)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:923)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:805)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:774)
> {code}
> The reason is that the restore consumer is not subscribed to any topic. This 
> happens when a StreamThread gets assigned standby tasks for sub-topologies 
> with just state stores with disabled logging.
> To reproduce the bug start two applications with one StreamThread and one 
> standby replica each and the following topology. The input topic should have 
> two partitions:
> {code:java}
> final StreamsBuilder builder = new StreamsBuilder();
> final String stateStoreName = "myTransformState";
> final StoreBuilder<KeyValueStore<Integer, Integer>> keyValueStoreBuilder =
> Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore(stateStoreName),
> Serdes.Integer(),
> Serdes.Integer())
> .withLoggingDisabled();
> builder.addStateStore(keyValueStoreBuilder);
> builder.stream(INPUT_TOPIC, Consumed.with(Serdes.Integer(), Serdes.Integer()))
> .transform(() -> new Transformer<Integer, Integer, KeyValue<Integer, Integer>>() {
> private KeyValueStore<Integer, Integer> state;
> @SuppressWarnings("unchecked")
> @Override
> public void init(final ProcessorContext context) {
> state = (KeyValueStore<Integer, Integer>) context.getStateStore(stateStoreName);
> }
> @Override
> public KeyValue<Integer, Integer> transform(final Integer key, final Integer value) {
> final KeyValue<Integer, Integer> result = new KeyValue<>(key, value);
> return result;
> }
> @Override
> public void close() {}
> }, stateStoreName)
> .to(OUTPUT_TOPIC);
> {code}
> Both StreamThreads should die with the above exception.
> The root cause is that standby tasks are created although all state stores of 
> the sub-topology have logging disabled.  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8555) Flaky test ExampleConnectIntegrationTest#testSourceConnector

2019-07-12 Thread Bill Bejeck (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883862#comment-16883862
 ] 

Bill Bejeck commented on KAFKA-8555:


Seen again [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/23365/]

PR [https://github.com/apache/kafka/pull/7054]
h3. Error Message

org.apache.kafka.connect.errors.DataException: Insufficient records committed 
by connector simple-conn in 15000 millis. Records expected=2000, actual=407
h3. Stacktrace

org.apache.kafka.connect.errors.DataException: Insufficient records committed 
by connector simple-conn in 15000 millis. Records expected=2000, actual=407 at 
org.apache.kafka.connect.integration.ConnectorHandle.awaitCommits(ConnectorHandle.java:188)
 at 
org.apache.kafka.connect.integration.ExampleConnectIntegrationTest.testSourceConnector(ExampleConnectIntegrationTest.java:181)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498) at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
 at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328) at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:65) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292) at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305) at 
org.junit.runners.ParentRunner.run(ParentRunner.java:412) at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
 at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
 at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
 at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
 at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498) at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
 at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
 at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
 at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
 at com.sun.proxy.$Proxy2.processTestClass(Unknown Source) at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498) at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
 at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
 at 
org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:175)
 at 
org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:157)
 at 
org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:404)
 at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExe

[jira] [Commented] (KAFKA-5998) /.checkpoint.tmp Not found exception

2019-07-12 Thread Bill Bejeck (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883856#comment-16883856
 ] 

Bill Bejeck commented on KAFKA-5998:


cherry-picked to 2.3 and 2.2

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 0.11.0.1, 2.1.1
>Reporter: Yogesh BG
>Assignee: John Roesler
>Priority: Critical
> Attachments: 5998.v1.txt, 5998.v2.txt, Kafka5998.zip, Topology.txt, 
> exc.txt, props.txt, streams.txt
>
>
> I have one Kafka broker and one Kafka Streams instance running... I have been running it 
> for two days under a load of around 2500 msgs per second. On the third day I am 
> getting the below exception for some of the partitions; I have 16 partitions and only 
> 0_0 and 0_1 give this error
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internal

[jira] [Updated] (KAFKA-5998) /.checkpoint.tmp Not found exception

2019-07-12 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-5998:
---
Fix Version/s: 2.3.1
   2.4.0
   2.2.2

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 0.11.0.1, 2.1.1
>Reporter: Yogesh BG
>Assignee: John Roesler
>Priority: Critical
> Fix For: 2.2.2, 2.4.0, 2.3.1
>
> Attachments: 5998.v1.txt, 5998.v2.txt, Kafka5998.zip, Topology.txt, 
> exc.txt, props.txt, streams.txt
>
>
> I have one Kafka broker and one Kafka Streams instance running... I have been running it 
> for two days under a load of around 2500 msgs per second. On the third day I am 
> getting the below exception for some of the partitions; I have 16 partitions and only 
> 0_0 and 0_1 give this error
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache

[jira] [Updated] (KAFKA-8660) Make ValueToKey SMT work only on a whitelist of topics

2019-07-12 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-8660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marc Löhe updated KAFKA-8660:
-
Description: 
For source connectors that publish on multiple topics it is essential to be 
able to configure transforms to be active only for certain topics. I'll add a 
PR to implement this on the example of the ValueToKey SMT.

I'm also interested in opinions on whether this would make sense to add as a 
configurable option to all packaged SMTs or even as a capability for SMTs in 
general.

  was:
For source connectors that publish on multiple topics it is essential to be 
able to configure transforms to be active only for certain topics. I'll add a 
PR to implement this on the example of the ValueToKey SMT.

I'm also interested if this would make sense to add as a configurable option to 
all packaged SMTs or even as a capability for SMTs in general.


> Make ValueToKey SMT work only on a whitelist of topics
> --
>
> Key: KAFKA-8660
> URL: https://issues.apache.org/jira/browse/KAFKA-8660
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Marc Löhe
>Priority: Minor
>
> For source connectors that publish on multiple topics it is essential to be 
> able to configure transforms to be active only for certain topics. I'll add a 
> PR to implement this on the example of the ValueToKey SMT.
> I'm also interested in opinions on whether this would make sense to add as a 
> configurable option to all packaged SMTs or even as a capability for SMTs in 
> general.
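For illustration, a connector-side sketch of what the proposed scoping could look like; the `transforms.valueToKey.topics` key is the hypothetical new option from the PR, not an existing Kafka config:

{code:java}
// Hedged sketch of a connector config using the proposed whitelist option.
// "type" and "fields" are existing ValueToKey settings; "topics" is hypothetical.
Map<String, String> connectorConfig = new HashMap<>();
connectorConfig.put("transforms", "valueToKey");
connectorConfig.put("transforms.valueToKey.type", "org.apache.kafka.connect.transforms.ValueToKey");
connectorConfig.put("transforms.valueToKey.fields", "id");
connectorConfig.put("transforms.valueToKey.topics", "orders,customers"); // only these topics would be transformed
{code}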



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8660) Make ValueToKey SMT work only on a whitelist of topics

2019-07-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883835#comment-16883835
 ] 

ASF GitHub Bot commented on KAFKA-8660:
---

bfncs commented on pull request #7084: KAFKA-8660: Make ValueToKey SMT work 
only on a whitelist of topics
URL: https://github.com/apache/kafka/pull/7084
 
 
   For source connectors that publish on multiple topics it is essential to be 
able to configure transforms to be active only for certain topics. This PR adds 
this config option for the ValueToKey SMT.
   
   I'm also interested in opinions if this would make sense to add as a 
configurable option to all packaged SMTs or even as a capability for SMTs in 
general.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Make ValueToKey SMT work only on a whitelist of topics
> --
>
> Key: KAFKA-8660
> URL: https://issues.apache.org/jira/browse/KAFKA-8660
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Marc Löhe
>Priority: Minor
>
> For source connectors that publish on multiple topics it is essential to be 
> able to configure transforms to be active only for certain topics. I'll add a 
> PR to implement this on the example of the ValueToKey SMT.
> I'm also interested if this would make sense to add as a configurable option 
> to all packaged SMTs or even as a capability for SMTs in general.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (KAFKA-8660) Make ValueToKey SMT work only on a whitelist of topics

2019-07-12 Thread JIRA
Marc Löhe created KAFKA-8660:


 Summary: Make ValueToKey SMT work only on a whitelist of topics
 Key: KAFKA-8660
 URL: https://issues.apache.org/jira/browse/KAFKA-8660
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Marc Löhe


For source connectors that publish on multiple topics it is essential to be 
able to configure transforms to be active only for certain topics. I'll add a 
PR to implement this on the example of the ValueToKey SMT.

I'm also interested if this would make sense to add as a configurable option to 
all packaged SMTs or even as a capability for SMTs in general.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (KAFKA-5998) /.checkpoint.tmp Not found exception

2019-07-12 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-5998.

Resolution: Fixed

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 0.11.0.1, 2.1.1
>Reporter: Yogesh BG
>Assignee: John Roesler
>Priority: Critical
> Attachments: 5998.v1.txt, 5998.v2.txt, Kafka5998.zip, Topology.txt, 
> exc.txt, props.txt, streams.txt
>
>
> I have one Kafka broker and one Kafka Streams instance running... I have been running it 
> for two days under a load of around 2500 msgs per second. On the third day I am 
> getting the below exception for some of the partitions; I have 16 partitions and only 
> 0_0 and 0_1 give this error
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.

[jira] [Assigned] (KAFKA-5998) /.checkpoint.tmp Not found exception

2019-07-12 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-5998:
--

Assignee: John Roesler  (was: Bill Bejeck)

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 0.11.0.1, 2.1.1
>Reporter: Yogesh BG
>Assignee: John Roesler
>Priority: Critical
> Attachments: 5998.v1.txt, 5998.v2.txt, Kafka5998.zip, Topology.txt, 
> exc.txt, props.txt, streams.txt
>
>
> I have one Kafka broker and one Kafka Streams instance running... I have been running it 
> for two days under a load of around 2500 msgs per second. On the third day I am 
> getting the below exception for some of the partitions; I have 16 partitions and only 
> 0_0 and 0_1 give this error
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.jav

[jira] [Commented] (KAFKA-5998) /.checkpoint.tmp Not found exception

2019-07-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883821#comment-16883821
 ] 

ASF GitHub Bot commented on KAFKA-5998:
---

bbejeck commented on pull request #7030: KAFKA-5998: fix checkpointableOffsets 
handling
URL: https://github.com/apache/kafka/pull/7030
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 0.11.0.1, 2.1.1
>Reporter: Yogesh BG
>Assignee: Bill Bejeck
>Priority: Critical
> Attachments: 5998.v1.txt, 5998.v2.txt, Kafka5998.zip, Topology.txt, 
> exc.txt, props.txt, streams.txt
>
>
> I have one Kafka broker and one Kafka Streams instance running... I have been running it 
> for two days under a load of around 2500 msgs per second. On the third day I am 
> getting the below exception for some of the partitions; I have 16 partitions and only 
> 0_0 and 0_1 give this error
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171)

[jira] [Commented] (KAFKA-8482) alterReplicaLogDirs should be better documented

2019-07-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883701#comment-16883701
 ] 

ASF GitHub Bot commented on KAFKA-8482:
---

dongjinleekr commented on pull request #7083: KAFKA-8482: alterReplicaLogDirs 
should be better documented
URL: https://github.com/apache/kafka/pull/7083
 
 
   Here is the draft fix. In summary:
   
   1. `AdminClient#alterReplicaLogDirs`
   - Add documentation on when `InterruptedException` is thrown.
   - Add a note on the returned `AlterReplicaLogDirsResult` instance.
   2. `AlterReplicaLogDirsResult`
   - Add a guide on retrieving the results to the class documentation.
   - Add a detailed guide to `AlterReplicaLogDirsResult#values` describing what it 
returns or what is thrown in which situation.
   - Add a detailed guide to `AlterReplicaLogDirsResult#all` describing what it 
returns or what is thrown in which situation.
   
   cc/ @cmccabe
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> alterReplicaLogDirs should be better documented
> ---
>
> Key: KAFKA-8482
> URL: https://issues.apache.org/jira/browse/KAFKA-8482
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Reporter: Colin P. McCabe
>Priority: Major
>  Labels: newbie
> Fix For: 2.4.0
>
>
> alterReplicaLogDirs should be better documented.  In particular, it should 
> document what exceptions it throws in {{AdminClient.java}}
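For context, a typical call to the method under discussion looks roughly like the sketch below; the broker address, topic, partition, broker id and log-dir path are all illustrative:

{code:java}
// Hedged usage sketch of AdminClient#alterReplicaLogDirs; values are made up.
Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
try (AdminClient admin = AdminClient.create(props)) {
    Map<TopicPartitionReplica, String> assignment = Collections.singletonMap(
            new TopicPartitionReplica("my-topic", 0, 1),   // topic, partition, broker id
            "/data/kafka-logs-2");                         // target log dir on that broker
    AlterReplicaLogDirsResult result = admin.alterReplicaLogDirs(assignment);
    result.all().get();  // an ExecutionException here wraps the per-replica errors the docs should list
}
{code}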



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (KAFKA-8482) alterReplicaLogDirs should be better documented

2019-07-12 Thread Lee Dongjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lee Dongjin reassigned KAFKA-8482:
--

Assignee: Lee Dongjin

> alterReplicaLogDirs should be better documented
> ---
>
> Key: KAFKA-8482
> URL: https://issues.apache.org/jira/browse/KAFKA-8482
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Reporter: Colin P. McCabe
>Assignee: Lee Dongjin
>Priority: Major
>  Labels: newbie
> Fix For: 2.4.0
>
>
> alterReplicaLogDirs should be better documented.  In particular, it should 
> document what exceptions it throws in {{AdminClient.java}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-7214) Mystic FATAL error

2019-07-12 Thread Seweryn Habdank-Wojewodzki (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883690#comment-16883690
 ] 

Seweryn Habdank-Wojewodzki commented on KAFKA-7214:
---

I have found the problem. It is the following.

If the value of _max.block.ms_ is too small compared to what the client-broker 
connection can actually deliver, then after failing to obtain metadata from the 
broker the streaming API does not try again, but only throws an exception and 
ends the life of the stream.

Finally, I observe this in two situations:
# When the client really lost the connection to the broker without intending to, 
for example during a network downtime.
# When the broker is so loaded with other work that it does not respond to the 
streaming client. This was mostly my case.

IMHO this is a bug. The streaming API shall not give up - or the number of retries 
to obtain metadata shall be configurable. This is different from the number of 
retries to send a message; from a logical point of view a metadata request has a 
completely different function than normal messages.

To reproduce it, one can set _max.block.ms_ to a low value like 10-100 ms and 
start a streaming app connected to a broker which is loaded, or disconnect the 
connection after start.
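A minimal repro sketch based on the comment above; the application id, bootstrap address and topology are placeholders:

{code:java}
// Hedged repro sketch: a deliberately tiny max.block.ms makes metadata fetches give up
// quickly when the broker is loaded or unreachable. Values are illustrative only.
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "qns-repro");                     // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");                // placeholder
props.put(StreamsConfig.producerPrefix(ProducerConfig.MAX_BLOCK_MS_CONFIG), 50); // 10-100 ms range from the comment
KafkaStreams streams = new KafkaStreams(topology, props);
streams.start();  // with a loaded broker, the thread throws instead of retrying metadata
{code}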

> Mystic FATAL error
> --
>
> Key: KAFKA-7214
> URL: https://issues.apache.org/jira/browse/KAFKA-7214
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.3, 1.1.1, 2.3.0, 2.2.1
>Reporter: Seweryn Habdank-Wojewodzki
>Priority: Critical
> Attachments: qns-1.1.zip, qns-1.zip
>
>
> Dears,
> Very often at startup of the streaming application I get an exception:
> {code}
> Exception caught in process. taskId=0_1, processor=KSTREAM-SOURCE-00, 
> topic=my_instance_medium_topic, partition=1, offset=198900203; 
> [org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:212),
>  
> org.apache.kafka.streams.processor.internals.AssignedTasks$2.apply(AssignedTasks.java:347),
>  
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:420),
>  
> org.apache.kafka.streams.processor.internals.AssignedTasks.process(AssignedTasks.java:339),
>  
> org.apache.kafka.streams.processor.internals.StreamThread.processAndPunctuate(StreamThread.java:648),
>  
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:513),
>  
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:482),
>  
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:459)]
>  in thread 
> my_application-my_instance-my_instance_medium-72ee1819-edeb-4d85-9d65-f67f7c321618-StreamThread-62
> {code}
> and then (without shutdown request from my side):
> {code}
> 2018-07-30 07:45:02 [ar313] [INFO ] StreamThread:912 - stream-thread 
> [my_application-my_instance-my_instance-72ee1819-edeb-4d85-9d65-f67f7c321618-StreamThread-62]
>  State transition from PENDING_SHUTDOWN to DEAD.
> {code}
> What is this?
> How to correctly handle it?
> Thanks in advance for help.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (KAFKA-7214) Mystic FATAL error

2019-07-12 Thread Seweryn Habdank-Wojewodzki (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seweryn Habdank-Wojewodzki updated KAFKA-7214:
--
Affects Version/s: 2.3.0

> Mystic FATAL error
> --
>
> Key: KAFKA-7214
> URL: https://issues.apache.org/jira/browse/KAFKA-7214
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.3, 1.1.1, 2.3.0, 2.2.1
>Reporter: Seweryn Habdank-Wojewodzki
>Priority: Critical
> Attachments: qns-1.1.zip, qns-1.zip
>
>
> Dears,
> Very often at startup of the streaming application I get an exception:
> {code}
> Exception caught in process. taskId=0_1, processor=KSTREAM-SOURCE-00, 
> topic=my_instance_medium_topic, partition=1, offset=198900203; 
> [org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:212),
>  
> org.apache.kafka.streams.processor.internals.AssignedTasks$2.apply(AssignedTasks.java:347),
>  
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:420),
>  
> org.apache.kafka.streams.processor.internals.AssignedTasks.process(AssignedTasks.java:339),
>  
> org.apache.kafka.streams.processor.internals.StreamThread.processAndPunctuate(StreamThread.java:648),
>  
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:513),
>  
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:482),
>  
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:459)]
>  in thread 
> my_application-my_instance-my_instance_medium-72ee1819-edeb-4d85-9d65-f67f7c321618-StreamThread-62
> {code}
> and then (without shutdown request from my side):
> {code}
> 2018-07-30 07:45:02 [ar313] [INFO ] StreamThread:912 - stream-thread 
> [my_application-my_instance-my_instance-72ee1819-edeb-4d85-9d65-f67f7c321618-StreamThread-62]
>  State transition from PENDING_SHUTDOWN to DEAD.
> {code}
> What is this?
> How to correctly handle it?
> Thanks in advance for help.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (KAFKA-6882) Wrong producer settings may lead to DoS on Kafka Server

2019-07-12 Thread Seweryn Habdank-Wojewodzki (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seweryn Habdank-Wojewodzki resolved KAFKA-6882.
---
Resolution: Won't Fix

As there are no improvement proposals, I am closing it. :-)

> Wrong producer settings may lead to DoS on Kafka Server
> ---
>
> Key: KAFKA-6882
> URL: https://issues.apache.org/jira/browse/KAFKA-6882
> Project: Kafka
>  Issue Type: Bug
>  Components: core, producer 
>Affects Versions: 1.0.1, 1.1.0
>Reporter: Seweryn Habdank-Wojewodzki
>Priority: Major
>
> The documentation of the “linger.ms” and “batch.size” parameters is a bit 
> confusing. In fact, setting those parameters wrongly on the producer side can 
> completely destroy BROKER throughput.
> I see that smart developers read the documentation of those parameters.
> Then they want maximum performance and maximum safety, so they set 
> something like this:
> {code}
> kafkaProps.put(ProducerConfig.LINGER_MS_CONFIG, 1);
> kafkaProps.put(ProducerConfig.BATCH_SIZE_CONFIG, 0);
> {code}
> With those settings each and every message is sent separately. The TCP/IP 
> stack is very busy in that case, and where high throughput was expected the 
> result is much lower throughput, because every message travels on its own and 
> the per-message network and TCP/IP overhead becomes significant.
> Those settings are good only if someone sends critical messages once in a 
> while (e.g. one message per minute), not when throughput matters and 
> thousands of messages are sent per second (see the sketch below).
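> For contrast, a minimal sketch of a throughput-oriented configuration 
> (illustrative values only, to be tuned to the real message size and latency 
> budget; they are not taken from the Kafka documentation):
> {code:java}
> // Illustrative values, not a recommendation:
> kafkaProps.put(ProducerConfig.LINGER_MS_CONFIG, 20);         // wait up to 20 ms so records can share a batch
> kafkaProps.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024); // 64 KiB batches instead of one record per request
> kafkaProps.put(ProducerConfig.ACKS_CONFIG, "1");             // leader acknowledgement only
> {code}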
> The situation is even worse when developers read that, for safety, they need 
> acknowledgements from all cluster nodes, and so they add:
> {code}
> kafkaProps.put(ProducerConfig.ACKS_CONFIG, "all");
> {code}
> And this is the end of Kafka performance!
> Even worse, it is not a problem for the Kafka producer; the problem remains 
> on the server (cluster, broker) side. The server is so busy acknowledging 
> *each and every* message from all nodes that other work is NOT performed, so 
> the end-to-end performance is almost none.
> I would like to ask you to improve the documentation of these parameters, and 
> to cover the corner cases by providing detailed information on how extreme 
> values of the parameters (namely the lowest and the highest) may influence 
> the work of the cluster.
> That was the documentation part of the issue.
> On the other hand, it is also a security/safety matter.
> Technically, the problem is that the __commit_offsets topic is loaded with an 
> enormous amount of messages. This leads to a situation where the Kafka broker 
> is exposed to *DoS* due to the producer settings: three lines of code plus a 
> bit of load and the Kafka cluster is dead.
> I suppose there are ways to prevent such a situation on the cluster side, but 
> it requires some logic to be implemented to detect such a simple but 
> efficient DoS.
> BTW, do the Kafka admin tools provide any kind of "kill connection" for the 
> case when one or another producer causes problems?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8659) SetSchemaMetadata SMT fails on records with null value and schema

2019-07-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883656#comment-16883656
 ] 

ASF GitHub Bot commented on KAFKA-8659:
---

bfncs commented on pull request #7082: KAFKA-8659: SetSchemaMetadata SMT fails 
on records with null value and schema
URL: https://github.com/apache/kafka/pull/7082
 
 
   Make SetSchemaMetadata SMT ignore records with null `value` and 
`valueSchema` or `key` and `keySchema`.
   
   The transform has been unit tested for handling null values gracefully while 
still providing the necessary validation for non-null values.
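   For illustration, a rough sketch of such a guard written as a standalone 
   value-side transform (hypothetical class, not the actual patch in this PR):

{code:java}
import java.util.Map;

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.transforms.Transformation;

/**
 * Hypothetical sketch, not the actual Kafka patch: a value-side transform
 * that lets tombstone records (null value and null value schema) pass
 * through instead of failing the schema requirement.
 */
public class NullSafeValueMetadata<R extends ConnectRecord<R>> implements Transformation<R> {

    @Override
    public R apply(R record) {
        if (record.value() == null && record.valueSchema() == null) {
            return record; // tombstone record: nothing to update, pass it through unchanged
        }
        // Non-tombstone records would be validated and have their schema
        // metadata rewritten here, as SetSchemaMetadata does today.
        return record;
    }

    @Override
    public ConfigDef config() {
        return new ConfigDef();
    }

    @Override
    public void configure(Map<String, ?> configs) {
    }

    @Override
    public void close() {
    }
}
{code}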
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> SetSchemaMetadata SMT fails on records with null value and schema
> -
>
> Key: KAFKA-8659
> URL: https://issues.apache.org/jira/browse/KAFKA-8659
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Marc Löhe
>Priority: Minor
>
> If you use the {{SetSchemaMetadata}} SMT with records for which the key or 
> value and the corresponding schema are {{null}} (i.e. tombstone records from 
> [Debezium|https://debezium.io/]), the transform will fail.
> {code:java}
> org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error 
> handler
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:293)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:229)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.connect.errors.DataException: Schema required for 
> [updating schema metadata]
> at 
> org.apache.kafka.connect.transforms.util.Requirements.requireSchema(Requirements.java:31)
> at 
> org.apache.kafka.connect.transforms.SetSchemaMetadata.apply(SetSchemaMetadata.java:67)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
> ... 11 more
> {code}
>  
> I don't see any problem with passing those records through as-is instead of 
> failing, and I will shortly add this in a PR.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8659) SetSchemaMetadata SMT fails on records with null value and schema

2019-07-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883602#comment-16883602
 ] 

ASF GitHub Bot commented on KAFKA-8659:
---

bfncs commented on pull request #7080: KAFKA-8659: SetSchemaMetadata SMT fails 
on records with null value and schema
URL: https://github.com/apache/kafka/pull/7080
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> SetSchemaMetadata SMT fails on records with null value and schema
> -
>
> Key: KAFKA-8659
> URL: https://issues.apache.org/jira/browse/KAFKA-8659
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Marc Löhe
>Priority: Minor
>
> If you use the {{SetSchemaMetadata}} SMT with records for which the key or 
> value and the corresponding schema are {{null}} (i.e. tombstone records from 
> [Debezium|https://debezium.io/]), the transform will fail.
> {code:java}
> org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error 
> handler
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:293)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:229)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.connect.errors.DataException: Schema required for 
> [updating schema metadata]
> at 
> org.apache.kafka.connect.transforms.util.Requirements.requireSchema(Requirements.java:31)
> at 
> org.apache.kafka.connect.transforms.SetSchemaMetadata.apply(SetSchemaMetadata.java:67)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
> ... 11 more
> {code}
>  
> I don't see any problem with passing those records through as-is instead of 
> failing, and I will shortly add this in a PR.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8659) SetSchemaMetadata SMT fails on records with null value and schema

2019-07-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883600#comment-16883600
 ] 

ASF GitHub Bot commented on KAFKA-8659:
---

bfncs commented on pull request #7080: KAFKA-8659: SetSchemaMetadata SMT fails 
on records with null value and schema
URL: https://github.com/apache/kafka/pull/7080
 
 
   Make SetSchemaMetadata SMT ignore records with null `value` and 
`valueSchema` or `key` and `keySchema`.
   
   The transform has been unit tested for handling null values gracefully while 
still providing the necessary validation for non-null values.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> SetSchemaMetadata SMT fails on records with null value and schema
> -
>
> Key: KAFKA-8659
> URL: https://issues.apache.org/jira/browse/KAFKA-8659
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Marc Löhe
>Priority: Minor
>
> If you use the {{SetSchemaMetadata}} SMT with records for which the key or 
> value and the corresponding schema are {{null}} (i.e. tombstone records from 
> [Debezium|https://debezium.io/]), the transform will fail.
> {code:java}
> org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error 
> handler
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:293)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:229)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.connect.errors.DataException: Schema required for 
> [updating schema metadata]
> at 
> org.apache.kafka.connect.transforms.util.Requirements.requireSchema(Requirements.java:31)
> at 
> org.apache.kafka.connect.transforms.SetSchemaMetadata.apply(SetSchemaMetadata.java:67)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
> ... 11 more
> {code}
>  
> I don't see any problem with passing those records through as-is instead of 
> failing, and I will shortly add this in a PR.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (KAFKA-8659) SetSchemaMetadata SMT fails on records with null value and schema

2019-07-12 Thread JIRA
Marc Löhe created KAFKA-8659:


 Summary: SetSchemaMetadata SMT fails on records with null value 
and schema
 Key: KAFKA-8659
 URL: https://issues.apache.org/jira/browse/KAFKA-8659
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Marc Löhe


If you use the {{SetSchemaMetadata}} SMT with records for which the key or 
value and the corresponding schema are {{null}} (i.e. tombstone records from 
[Debezium|https://debezium.io/]), the transform will fail.
{code:java}
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error 
handler
at 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
at 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
at 
org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
at 
org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:293)
at 
org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:229)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.DataException: Schema required for 
[updating schema metadata]
at 
org.apache.kafka.connect.transforms.util.Requirements.requireSchema(Requirements.java:31)
at 
org.apache.kafka.connect.transforms.SetSchemaMetadata.apply(SetSchemaMetadata.java:67)
at 
org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
at 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
at 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
... 11 more

{code}
 

I don't see any problem with passing those records through as-is instead of 
failing, and I will shortly add this in a PR.
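
For reference, a minimal connector configuration fragment that applies this SMT 
to record values and therefore hits the code path above (connector-specific 
settings omitted; the schema name and version are placeholders):

{code}
transforms=setSchema
transforms.setSchema.type=org.apache.kafka.connect.transforms.SetSchemaMetadata$Value
transforms.setSchema.schema.name=com.example.MyValueSchema
transforms.setSchema.schema.version=1
{code}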



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (KAFKA-8658) A way to configure the jmx rmi port

2019-07-12 Thread Agostino Sarubbo (JIRA)
Agostino Sarubbo created KAFKA-8658:
---

 Summary: A way to configure the jmx rmi port
 Key: KAFKA-8658
 URL: https://issues.apache.org/jira/browse/KAFKA-8658
 Project: Kafka
  Issue Type: Improvement
  Components: metrics
Affects Versions: 1.0.0
 Environment: Centos 7
Reporter: Agostino Sarubbo


Hello,

I'm on kafka-1.0.0, so I'm not sure whether this is already addressed in the 
current version.

At the moment we use the following in the service script to enable JMX:
Environment=JMX_PORT=7666

However, there is no equivalent way to set the JMX RMI port. When no RMI port 
is specified, the JVM assigns a random one, which complicates the way we 
manage the firewall.

It would be great if the RMI port could be set in the same way, e.g.:
Environment=JMX_RMI_PORT=7667

The JVM flag used during startup is:
-Dcom.sun.management.jmxremote.rmi.port=
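
For illustration, one possible workaround (assuming the standard 
kafka-run-class.sh script, which passes KAFKA_JMX_OPTS to the JVM) is to pin 
the RMI port alongside the JMX port in the systemd unit; the port values here 
are only examples:

{code}
Environment=JMX_PORT=7666
Environment="KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.rmi.port=7666"
{code}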



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)