Build failed in Jenkins: kafka-trunk-jdk7 #2294

2017-05-30 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5211; Do not skip a corrupted record in consumer

--
[...truncated 904.24 KB...]

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
STARTED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.FetcherTest > testFetcher STARTED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride SKIPPED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
STARTED

kafka.integration.UncleanLeaderEle

[jira] [Commented] (KAFKA-4785) Records from internal repartitioning topics should always use RecordMetadataTimestampExtractor

2017-05-30 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030732#comment-16030732
 ] 

Ismael Juma commented on KAFKA-4785:


Is this still planned for 0.11.0.0?

> Records from internal repartitioning topics should always use 
> RecordMetadataTimestampExtractor
> --
>
> Key: KAFKA-4785
> URL: https://issues.apache.org/jira/browse/KAFKA-4785
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Matthias J. Sax
>Assignee: Jeyhun Karimov
> Fix For: 0.11.0.0
>
>
> Users can specify which timestamp extractor should be used to decode the 
> timestamp of input topic records. As long as RecordMetadataTimestamp or 
> WallclockTime is used, this is fine. 
> However, for custom timestamp extractors it might be invalid to apply this 
> custom extractor to records received from internal repartitioning topics. The 
> reason is that Streams explicitly sets the current "stream time" as the record 
> metadata timestamp before writing to intermediate repartitioning topics, 
> because this timestamp should be used by downstream subtopologies. A custom 
> timestamp extractor might return something different, breaking this assumption.
> Thus, when reading data from intermediate repartitioning topics, the configured 
> timestamp extractor should not be used; instead, the record's metadata timestamp 
> should be extracted as the record timestamp.
> In order to get the same behavior for intermediate user topics (i.e., used 
> in {{through()}}), we can leverage KAFKA-4144 and internally set an extractor 
> for those "intermediate sources" that returns the record's metadata timestamp, 
> overriding the global extractor from {{StreamsConfig}} (i.e., set 
> {{FailOnInvalidTimestampExtractor}}).
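
For illustration, a minimal sketch of how a non-default extractor is configured globally today (the application id, broker address, and use of {{WallclockTimestampExtractor}} are placeholders); the proposal above would make Streams ignore this global setting for records read back from internal repartitioning topics and use the record metadata timestamp there instead:

{code}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.WallclockTimestampExtractor;

public class GlobalExtractorConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder application id and broker address.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "timestamp-extractor-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // A non-default extractor configured globally; today it also applies to
        // records consumed from internal repartitioning topics, which is what
        // this ticket argues should change.
        props.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
                WallclockTimestampExtractor.class.getName());
        // The Properties would then be passed to the KafkaStreams constructor.
    }
}
{code}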





[jira] [Updated] (KAFKA-4686) Null Message payload is shutting down broker

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4686:
---
Priority: Critical  (was: Major)

> Null Message payload is shutting down broker
> 
>
> Key: KAFKA-4686
> URL: https://issues.apache.org/jira/browse/KAFKA-4686
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.1
> Environment: Amazon Linux AMI release 2016.03 kernel 
> 4.4.19-29.55.amzn1.x86_64
>Reporter: Rodrigo Queiroz Saramago
>Priority: Critical
> Fix For: 0.11.0.1
>
> Attachments: KAFKA-4686-NullMessagePayloadError.tar.xz, 
> kafkaServer.out
>
>
> Hello, I have a test environment with 3 brokers and 1 ZooKeeper node, in 
> which clients connect using two-way SSL authentication. I use Kafka version 
> 0.10.1.1. The system works as expected for a while, but if a broker goes 
> down and is then restarted, something gets corrupted and it is not possible to 
> start the broker again; it always fails with the same error. What does this 
> error mean? What can I do in this case? Is this the expected behavior?
> [2017-01-23 07:03:28,927] ERROR There was an error in one of the threads 
> during logs loading: kafka.common.KafkaException: Message payload is null: 
> Message(magic = 0, attributes = 1, crc = 4122289508, key = null, payload = 
> null) (kafka.log.LogManager)
> [2017-01-23 07:03:28,929] FATAL Fatal error during KafkaServer startup. 
> Prepare to shutdown (kafka.server.KafkaServer)
> kafka.common.KafkaException: Message payload is null: Message(magic = 0, 
> attributes = 1, crc = 4122289508, key = null, payload = null)
> at 
> kafka.message.ByteBufferMessageSet$$anon$1.<init>(ByteBufferMessageSet.scala:90)
> at 
> kafka.message.ByteBufferMessageSet$.deepIterator(ByteBufferMessageSet.scala:85)
> at kafka.message.MessageAndOffset.firstOffset(MessageAndOffset.scala:33)
> at kafka.log.LogSegment.recover(LogSegment.scala:223)
> at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:218)
> at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:179)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
> at kafka.log.Log.loadSegments(Log.scala:179)
> at kafka.log.Log.<init>(Log.scala:108)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun$apply$10$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:151)
> at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:58)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> [2017-01-23 07:03:28,946] INFO shutting down (kafka.server.KafkaServer)
> [2017-01-23 07:03:28,949] INFO Terminate ZkClient event thread. 
> (org.I0Itec.zkclient.ZkEventThread)
> [2017-01-23 07:03:28,954] INFO EventThread shut down for session: 
> 0x159bd458ae70008 (org.apache.zookeeper.ClientCnxn)
> [2017-01-23 07:03:28,954] INFO Session: 0x159bd458ae70008 closed 
> (org.apache.zookeeper.ZooKeeper)
> [2017-01-23 07:03:28,957] INFO shut down completed (kafka.server.KafkaServer)
> [2017-01-23 07:03:28,959] FATAL Fatal error during KafkaServerStartable 
> startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> kafka.common.KafkaException: Message payload is null: Message(magic = 0, 
> attributes = 1, crc = 4122289508, key = null, payload = null)
> at 
> kafka.message.ByteBufferMessageSet$$anon$1.<init>(ByteBufferMessageSet.scala:90)
> at 
> kafka.message.ByteBufferMessageSet$.deepIterator(ByteBufferMessageSet.scala:85)
> at kafka.message.MessageAndOffset.firstOffset(MessageAndOffset.scala:33)
> at kafka.log.LogSegment.recover(LogSegment.scala:223)
> at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:218)
> at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:179)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
> at kafka.log.Log.loadSegments(Log.scala:179)
> at kafka.log.Log.<init>(Log.scala:108)
> at 
> kafka.log.LogManager$$ano

[jira] [Updated] (KAFKA-4686) Null Message payload is shutting down broker

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4686:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.0.1

> Null Message payload is shutting down broker
> 
>
> Key: KAFKA-4686
> URL: https://issues.apache.org/jira/browse/KAFKA-4686
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.1
> Environment: Amazon Linux AMI release 2016.03 kernel 
> 4.4.19-29.55.amzn1.x86_64
>Reporter: Rodrigo Queiroz Saramago
> Fix For: 0.11.0.1
>
> Attachments: KAFKA-4686-NullMessagePayloadError.tar.xz, 
> kafkaServer.out
>
>
> Hello, I have a test environment with 3 brokers and 1 ZooKeeper node, in 
> which clients connect using two-way SSL authentication. I use Kafka version 
> 0.10.1.1. The system works as expected for a while, but if a broker goes 
> down and is then restarted, something gets corrupted and it is not possible to 
> start the broker again; it always fails with the same error. What does this 
> error mean? What can I do in this case? Is this the expected behavior?
> [2017-01-23 07:03:28,927] ERROR There was an error in one of the threads 
> during logs loading: kafka.common.KafkaException: Message payload is null: 
> Message(magic = 0, attributes = 1, crc = 4122289508, key = null, payload = 
> null) (kafka.log.LogManager)
> [2017-01-23 07:03:28,929] FATAL Fatal error during KafkaServer startup. 
> Prepare to shutdown (kafka.server.KafkaServer)
> kafka.common.KafkaException: Message payload is null: Message(magic = 0, 
> attributes = 1, crc = 4122289508, key = null, payload = null)
> at 
> kafka.message.ByteBufferMessageSet$$anon$1.<init>(ByteBufferMessageSet.scala:90)
> at 
> kafka.message.ByteBufferMessageSet$.deepIterator(ByteBufferMessageSet.scala:85)
> at kafka.message.MessageAndOffset.firstOffset(MessageAndOffset.scala:33)
> at kafka.log.LogSegment.recover(LogSegment.scala:223)
> at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:218)
> at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:179)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
> at kafka.log.Log.loadSegments(Log.scala:179)
> at kafka.log.Log.<init>(Log.scala:108)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun$apply$10$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:151)
> at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:58)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> [2017-01-23 07:03:28,946] INFO shutting down (kafka.server.KafkaServer)
> [2017-01-23 07:03:28,949] INFO Terminate ZkClient event thread. 
> (org.I0Itec.zkclient.ZkEventThread)
> [2017-01-23 07:03:28,954] INFO EventThread shut down for session: 
> 0x159bd458ae70008 (org.apache.zookeeper.ClientCnxn)
> [2017-01-23 07:03:28,954] INFO Session: 0x159bd458ae70008 closed 
> (org.apache.zookeeper.ZooKeeper)
> [2017-01-23 07:03:28,957] INFO shut down completed (kafka.server.KafkaServer)
> [2017-01-23 07:03:28,959] FATAL Fatal error during KafkaServerStartable 
> startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> kafka.common.KafkaException: Message payload is null: Message(magic = 0, 
> attributes = 1, crc = 4122289508, key = null, payload = null)
> at 
> kafka.message.ByteBufferMessageSet$$anon$1.<init>(ByteBufferMessageSet.scala:90)
> at 
> kafka.message.ByteBufferMessageSet$.deepIterator(ByteBufferMessageSet.scala:85)
> at kafka.message.MessageAndOffset.firstOffset(MessageAndOffset.scala:33)
> at kafka.log.LogSegment.recover(LogSegment.scala:223)
> at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:218)
> at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:179)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
> at kafka.log.Log.loadSegments(Log.scala:179)
> at kafka.log.Log.<init>(Log.scala:108)
> at 
> kafka.log.LogManager$$anonf

[jira] [Updated] (KAFKA-4808) send of null key to a compacted topic should throw error back to user

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4808:
---
Status: Patch Available  (was: Open)

> send of null key to a compacted topic should throw error back to user
> -
>
> Key: KAFKA-4808
> URL: https://issues.apache.org/jira/browse/KAFKA-4808
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.2.0
>Reporter: Ismael Juma
>Assignee: Mayuresh Gharat
> Fix For: 0.11.1.0
>
>
> If a message with a null key is produced to a compacted topic, the broker 
> returns `CorruptRecordException`, which is a retriable exception. As such, 
> the producer keeps retrying until retries are exhausted or request.timeout.ms 
> expires and eventually throws a TimeoutException. This is confusing and not 
> user-friendly.
> We should throw a meaningful error back to the user. From an implementation 
> perspective, we would have to use a non-retriable error code to avoid this 
> issue.
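
A minimal sketch of the current user-facing behavior (broker address and topic name are placeholders; the topic is assumed to be configured with {{cleanup.policy=compact}}):

{code}
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class NullKeyToCompactedTopicSketch {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Null key on a topic assumed to have cleanup.policy=compact.
            producer.send(new ProducerRecord<>("compacted-topic", null, "some-value")).get();
        } catch (ExecutionException e) {
            // Today the failure surfaces only after retries are exhausted or the
            // request times out, typically as a TimeoutException that says nothing
            // about the null key being the problem.
            System.err.println("send failed: " + e.getCause());
        }
    }
}
{code}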





Re: Jira-Spam on Dev-Mailinglist

2017-05-30 Thread Michal Borowiecki

+1 agree with Jeff,

Michał


On 31/05/17 06:25, Jeff Widman wrote:

I'm hugely in favor of this change as well...

Although I actually find the Github pull request emails less useful than
the jirabot ones since Jira typically has more info when I'm trying to
figure out if the issue is relevant to me or not...

On Tue, May 30, 2017 at 2:28 PM, Guozhang Wang  wrote:


I actually do not know... Maybe Jun knows better than me?


Guozhang

On Mon, May 29, 2017 at 12:58 PM, Gwen Shapira  wrote:


I agree.

Guozhang, do you know how to implement the suggestion? JIRA to Apache
Infra? Or is this something we can do ourselves somehow?

On Mon, May 29, 2017 at 9:33 PM Guozhang Wang wrote:

I share your pain. Right now I use filters on my email accounts and it has 
been down to about 25 per day.

I think setting up a separate mailing list for jirabot and Jenkins 
auto-generated emails is a good idea.


Guozhang


On Mon, May 29, 2017 at 12:58 AM,  wrote:


Hello everyone

I find it hard to follow this mailing list due to all the mails generated 
by Jira. Just over this weekend there are 240 new mails.
Would it be possible to set up something like j...@kafka.apache.org where 
everyone interested in those Jira mails can subscribe?

Right now I am going to set up a filter which just deletes the jira-tagged 
mails, but I think the current setup also makes it hard to read through 
the archives.

regards
Marc




--
-- Guozhang




--
-- Guozhang







[jira] [Commented] (KAFKA-5351) Broker clean bounce test puts the broker into a 'CONCURRENT_TRANSACTIONS' state permanently

2017-05-30 Thread Apurva Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030729#comment-16030729
 ] 

Apurva Mehta commented on KAFKA-5351:
-

I saw the following in the logs for two of the brokers. At 06:12:05 the first 
broker sends COORDINATOR_NOT_AVAILABLE, at which point the client sends a 
FindCoordinator request which routes it back to the same broker (knode09) 
below. It then keeps retrying the EndTxn request and getting the 
CONCURRENT_TRANSACTIONS error code. 

At 06:12:17, it gets a NOT_COORDINATOR error, does a 'FindCoordinator' request, 
and gets routed to knode03, at which point it keeps getting 
CONCURRENT_TRANSACTIONS until the process is terminated. 

{noformat}
root@f82eb723774d:/opt/kafka-dev/results/latest/TransactionsTest/test_transactions/failure_mode=clean_bounce.bounce_target=brokers/1/KafkaService-0-140388221024592/knode09#
 find . -name 'server.log' -exec grep -Hni 'my-second-transactional-id' {} \;
./info/server.log:956:[2017-05-31 06:12:05,724] INFO TransactionalId 
my-second-transactional-id prepare transition from Ongoing to 
TxnTransitMetadata(producerId=1001, producerEpoch=0, txnTimeoutMs=6, 
txnState=PrepareCommit, topicPartitions=Set(output-topic-2, 
__consumer_offsets-47, output-topic-0, output-topic-1), 
txnStartTimestamp=1496211125265, txnLastUpdateTimestamp=1496211125723) 
(kafka.coordinator.transaction.TransactionMetadata)
./info/server.log:963:[2017-05-31 06:12:05,736] INFO [Transaction Log Manager 
3]: Appending transaction message TxnTransitMetadata(producerId=1001, 
producerEpoch=0, txnTimeoutMs=6, txnState=PrepareCommit, 
topicPartitions=Set(output-topic-2, __consumer_offsets-47, output-topic-0, 
output-topic-1), txnStartTimestamp=1496211125265, 
txnLastUpdateTimestamp=1496211125723) for my-second-transactional-id failed due 
to org.apache.kafka.common.errors.NotEnoughReplicasException,returning 
COORDINATOR_NOT_AVAILABLE to the client 
(kafka.coordinator.transaction.TransactionStateManager)
./info/server.log:964:[2017-05-31 06:12:05,737] INFO [Transaction Coordinator 
3]: Updating my-second-transactional-id's transaction state to 
TxnTransitMetadata(producerId=1001, producerEpoch=0, txnTimeoutMs=6, 
txnState=PrepareCommit,topicPartitions=Set(output-topic-2, 
__consumer_offsets-47, output-topic-0, output-topic-1), 
txnStartTimestamp=1496211125265, txnLastUpdateTimestamp=1496211125723) with 
coordinator epoch 1 for my-second-transactional-id failed since the transaction 
message cannot be appended to the log. Returning error code 
COORDINATOR_NOT_AVAILABLE to the client 
(kafka.coordinator.transaction.TransactionCoordinator)
root@f82eb723774d:/opt/kafka-dev/results/latest/TransactionsTest/test_transactions/failure_mode=clean_bounce.bounce_target=brokers/1/KafkaService-0-140388221024592/knode09#
 cd ..
root@f82eb723774d:/opt/kafka-dev/results/latest/TransactionsTest/test_transactions/failure_mode=clean_bounce.bounce_target=brokers/1/KafkaService-0-140388221024592#
 cd knode03/
root@f82eb723774d:/opt/kafka-dev/results/latest/TransactionsTest/test_transactions/failure_mode=clean_bounce.bounce_target=brokers/1/KafkaService-0-140388221024592/knode03#
 find . -name 'server.log' -exec grep -Hni 'my-second-transactional-id' {} \;
./info/server.log:1737:[2017-05-31 06:12:17,906] INFO TransactionalId 
my-second-transactional-id prepare transition from Ongoing to 
TxnTransitMetadata(producerId=1001, producerEpoch=0, txnTimeoutMs=6, 
txnState=PrepareCommit, topicPartitions=Set(output-topic-2, 
__consumer_offsets-47, output-topic-0, output-topic-1), 
txnStartTimestamp=1496211125265, txnLastUpdateTimestamp=1496211137906) 
(kafka.coordinator.transaction.TransactionMetadata)
./info/server.log:1741:[2017-05-31 06:12:17,926] INFO [Transaction Log Manager 
1]: Appending transaction message TxnTransitMetadata(producerId=1001, 
producerEpoch=0, txnTimeoutMs=6, txnState=PrepareCommit, 
topicPartitions=Set(output-topic-2, __consumer_offsets-47, output-topic-0, 
output-topic-1), txnStartTimestamp=1496211125265, 
txnLastUpdateTimestamp=1496211137906) for my-second-transactional-id failed due 
to org.apache.kafka.common.errors.NotEnoughReplicasException, returning 
COORDINATOR_NOT_AVAILABLE to the client 
(kafka.coordinator.transaction.TransactionStateManager)
./info/server.log:1742:[2017-05-31 06:12:17,933] INFO [Transaction Coordinator 
1]: Updating my-second-transactional-id's transaction state to 
TxnTransitMetadata(producerId=1001, producerEpoch=0, txnTimeoutMs=6, 
txnState=PrepareCommit, topicPartitions=Set(output-topic-2, 
__consumer_offsets-47, output-topic-0, output-topic-1), 
txnStartTimestamp=1496211125265, txnLastUpdateTimestamp=1496211137906) with 
coordinator epoch 2 for my-second-transactional-id failed since the transaction 
message cannot be appended to the log. Returning error code 
COORDINATOR_NOT_AVAILABLE to the client 
(kafka.coordinator.transact

[jira] [Updated] (KAFKA-4808) send of null key to a compacted topic should throw error back to user

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4808:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.1.0

> send of null key to a compacted topic should throw error back to user
> -
>
> Key: KAFKA-4808
> URL: https://issues.apache.org/jira/browse/KAFKA-4808
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.2.0
>Reporter: Ismael Juma
>Assignee: Mayuresh Gharat
> Fix For: 0.11.1.0
>
>
> If a message with a null key is produced to a compacted topic, the broker 
> returns `CorruptRecordException`, which is a retriable exception. As such, 
> the producer keeps retrying until retries are exhausted or request.timeout.ms 
> expires and eventually throws a TimeoutException. This is confusing and not 
> user-friendly.
> We should throw a meaningful error back to the user. From an implementation 
> perspective, we would have to use a non-retriable error code to avoid this 
> issue.





[jira] [Updated] (KAFKA-5031) Additional validation in validateMessagesAndAssignOffsets

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5031:
---
Status: Patch Available  (was: Open)

> Additional validation in validateMessagesAndAssignOffsets
> -
>
> Key: KAFKA-5031
> URL: https://issues.apache.org/jira/browse/KAFKA-5031
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Ismael Juma
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> In validateMessagesAndAssignOffsets(), when validating the 
> DefaultRecordBatch, we should also validate:
> 1. Message count matches the actual number of messages in the array
> 2. The header count matches the actual number of headers
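
A schematic sketch of the kind of consistency check being asked for; the class, parameters, and collections below are illustrative stand-ins, not the actual {{DefaultRecordBatch}} API:

{code}
import java.util.List;

final class BatchCountValidationSketch {
    /**
     * declaredRecordCount and declaredHeaderCounts stand in for the counts read
     * from the batch/record headers; headersPerRecord stands in for what was
     * actually parsed out of the batch.
     */
    static void validateCounts(int declaredRecordCount,
                               List<Integer> declaredHeaderCounts,
                               List<List<String>> headersPerRecord) {
        if (declaredRecordCount != headersPerRecord.size()) {
            throw new IllegalStateException("Batch declares " + declaredRecordCount
                    + " records but contains " + headersPerRecord.size());
        }
        for (int i = 0; i < headersPerRecord.size(); i++) {
            if (declaredHeaderCounts.get(i) != headersPerRecord.get(i).size()) {
                throw new IllegalStateException("Record " + i + " declares "
                        + declaredHeaderCounts.get(i) + " headers but contains "
                        + headersPerRecord.get(i).size());
            }
        }
    }
}
{code}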





[jira] [Updated] (KAFKA-5051) Avoid DNS reverse lookup in security-critical TLS code path

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5051:
---
Status: Patch Available  (was: Open)

> Avoid DNS reverse lookup in security-critical TLS code path
> ---
>
> Key: KAFKA-5051
> URL: https://issues.apache.org/jira/browse/KAFKA-5051
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.2.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.11.0.0
>
>
> At the moment the SSL engine is created using the hostname obtained via 
> {{InetAddress#getHostName}}, which performs unnecessary reverse DNS lookups.
> h2. Scenarios
> h3. Server-side
> h4. Scenario: Server accepts connection from a client
> The broker knows only the client IP address. At the moment the broker does a 
> reverse lookup. This is unnecessary since the server does not verify or use the 
> client hostname. It can block the network thread for several seconds in some 
> configurations. The IP address should be used directly.
> h3. Client-side
> h4. Scenario: Client connects to server using hostname
> No lookup is necessary and the hostname is used to create the SSL engine. 
> This hostname is validated against the hostname in SubjectAltName (dns) or 
> CommonName in the certificate if hostname verification is enabled. 
> Authentication fails if hostname does not match. This is handled correctly in 
> the current code.
> h4. Scenario: Client connects to server using IP address, but certificate 
> contains only SubjectAltName (dns)
> The current code does hostname verification using the hostname obtained 
> through reverse name lookup. But use of reverse DNS lookup to determine 
> hostname introduces a security vulnerability since authentication would be 
> reliant on a secure DNS. Hence hostname verification should fail in this 
> case. 
> h4. Scenario: Client connects to server using IP address and certificate 
> contains SubjectAltName (ipaddress).
> This could be used when Kafka is on a private network. The current code uses 
> reverse DNS lookup to determine hostname. If reverse lookup succeeds, 
> authentication fails since the hostname is matched against the IP address in 
> the certificate. But if reverse lookup fails, SSL engine is created with the 
> IP address and authentication succeeds. For consistency and to avoid 
> dependency on a potentially insecure DNS, reverse DNS lookup should be 
> avoided and the IP address specified by the client for connection should be 
> used to create the SSL engine.
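
A minimal client-side sketch of the approach described above; {{sslContext}} is assumed to be initialised elsewhere, and the point is that the host string the caller supplied is used as-is rather than a reverse-resolved name:

{code}
import java.net.InetSocketAddress;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

final class SslEngineWithoutReverseLookup {
    /** peer is the address the connection was opened to, as supplied by the caller. */
    static SSLEngine clientEngine(SSLContext sslContext, InetSocketAddress peer) {
        // getHostString() returns the hostname or literal IP string the caller used,
        // without triggering a reverse DNS lookup (unlike getAddress().getHostName()).
        String peerHost = peer.getHostString();
        SSLEngine engine = sslContext.createSSLEngine(peerHost, peer.getPort());
        engine.setUseClientMode(true);
        return engine;
    }
}
{code}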





[jira] [Resolved] (KAFKA-5059) Implement Transactional Coordinator

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-5059.

Resolution: Fixed

Marking this as resolved since all but one of the tasks have been completed, and 
the remaining task will be done in a subsequent release.

> Implement Transactional Coordinator
> ---
>
> Key: KAFKA-5059
> URL: https://issues.apache.org/jira/browse/KAFKA-5059
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.11.0.0
>
>
> This covers the implementation of the transaction coordinator to support 
> transactions, as described in KIP-98: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+Exactly+Once+Delivery+and+Transactional+Messaging





[jira] [Updated] (KAFKA-5212) Consumer ListOffsets request can starve group heartbeats

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5212:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.1.0

> Consumer ListOffsets request can starve group heartbeats
> 
>
> Key: KAFKA-5212
> URL: https://issues.apache.org/jira/browse/KAFKA-5212
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.11.1.0
>
>
> The consumer is not able to send heartbeats while it is awaiting a 
> ListOffsets response. Typically this is not a problem because ListOffsets 
> requests are handled quickly, but in the worst case if the request takes 
> longer than the session timeout, the consumer will fall out of the group.
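
For context, a minimal sketch of a consumer call that issues a ListOffsets request (broker address, group id, and topic are placeholders); while such a call is blocked the consumer cannot heartbeat, which is the starvation described above:

{code}
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ListOffsetsStarvationSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "list-offsets-demo");       // placeholder group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("some-topic", 0); // placeholder topic
            // endOffsets() sends a ListOffsets request and blocks until the response
            // arrives; no heartbeats are sent in the meantime, so a response slower
            // than session.timeout.ms can push this consumer out of its group.
            Map<TopicPartition, Long> end = consumer.endOffsets(Collections.singletonList(tp));
            System.out.println("end offset: " + end.get(tp));
        }
    }
}
{code}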





[jira] [Commented] (KAFKA-5308) TC should handle UNSUPPORTED_FOR_MESSAGE_FORMAT in WriteTxnMarker response

2017-05-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030711#comment-16030711
 ] 

ASF GitHub Bot commented on KAFKA-5308:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3152


> TC should handle UNSUPPORTED_FOR_MESSAGE_FORMAT in WriteTxnMarker response
> --
>
> Key: KAFKA-5308
> URL: https://issues.apache.org/jira/browse/KAFKA-5308
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Damian Guy
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> It could be the case that one of the topics added to a transaction is on a 
> lower message format version. Because of 
> https://github.com/apache/kafka/pull/3118, the producer won't be able to send 
> any data to that topic, but the TC will nevertheless try to write the 
> commit/abort marker to the log. Like the Produce request, the WriteTxnMarker 
> request should return the UNSUPPORTED_FOR_MESSAGE_FORMAT error. Instead of 
> retrying, we should log a warning and remove the partition from the set of 
> partitions awaiting marker completion.
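
A schematic sketch of the handling proposed above; the class, pending set, method, and error name are illustrative stand-ins, not the actual transaction coordinator code:

{code}
import java.util.Set;
import org.apache.kafka.common.TopicPartition;

final class TxnMarkerErrorHandlingSketch {
    private final Set<TopicPartition> partitionsAwaitingMarkers;

    TxnMarkerErrorHandlingSketch(Set<TopicPartition> partitionsAwaitingMarkers) {
        this.partitionsAwaitingMarkers = partitionsAwaitingMarkers;
    }

    /** errorName stands in for the error code carried in the WriteTxnMarker response. */
    void onMarkerResponse(TopicPartition tp, String errorName) {
        if ("UNSUPPORTED_FOR_MESSAGE_FORMAT".equals(errorName)) {
            // Don't retry: log a warning and drop the partition from the set of
            // partitions awaiting marker completion, as proposed above.
            System.err.println("WARN: marker for " + tp + " not supported by message format, skipping");
            partitionsAwaitingMarkers.remove(tp);
        }
        // Other errors would keep their existing (retry) handling.
    }
}
{code}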





[GitHub] kafka pull request #3152: KAFKA-5308: TC should handle UNSUPPORTED_FOR_MESSA...

2017-05-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3152




[jira] [Updated] (KAFKA-1482) Transient test failures for kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1482:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.1.0

> Transient test failures for  
> kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic
> ---
>
> Key: KAFKA-1482
> URL: https://issues.apache.org/jira/browse/KAFKA-1482
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Guozhang Wang
>Assignee: Sriharsha Chintalapani
>  Labels: newbie
> Fix For: 0.11.1.0
>
> Attachments: kafka_delete_topic_test.log, kafka_tests.log
>
>
> {code}
> Stacktrace
> kafka.common.TopicAlreadyMarkedForDeletionException: topic test is already 
> marked for deletion
>   at kafka.admin.AdminUtils$.deleteTopic(AdminUtils.scala:324)
>   at 
> kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic(DeleteTopicTest.scala:117)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 

[jira] [Updated] (KAFKA-5308) TC should handle UNSUPPORTED_FOR_MESSAGE_FORMAT in WriteTxnMarker response

2017-05-30 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-5308:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 3152
[https://github.com/apache/kafka/pull/3152]

> TC should handle UNSUPPORTED_FOR_MESSAGE_FORMAT in WriteTxnMarker response
> --
>
> Key: KAFKA-5308
> URL: https://issues.apache.org/jira/browse/KAFKA-5308
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Damian Guy
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> It could be the case that one of the topics added to a transaction is on a 
> lower message format version. Because of 
> https://github.com/apache/kafka/pull/3118, the producer won't be able to send 
> any data to that topic, but the TC will nevertheless try to write the 
> commit/abort marker to the log. Like the Produce request, the WriteTxnMarker 
> request should return the UNSUPPORTED_FOR_MESSAGE_FORMAT error. Instead of 
> retrying, we should log a warning and remove the partition from the set of 
> partitions awaiting marker completion.





[jira] [Updated] (KAFKA-4972) Kafka 0.10.0 Found a corrupted index file during Kafka broker startup

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4972:
---
Fix Version/s: (was: 0.12.0.0)
   (was: 0.11.0.0)
   0.11.0.1

> Kafka 0.10.0  Found a corrupted index file during Kafka broker startup
> --
>
> Key: KAFKA-4972
> URL: https://issues.apache.org/jira/browse/KAFKA-4972
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.0
> Environment: JDK: HotSpot  x64  1.7.0_80
> Tag: 0.10.0
>Reporter: fangjinuo
>Priority: Critical
> Fix For: 0.11.0.1
>
> Attachments: Snap3.png
>
>
> After force-shutting down all Kafka brokers one by one and restarting them one 
> by one, one broker failed to start up.
> The following WARN-level log was found in the log file:
> found a corrupted index file,  .index , deleting it  ...
> You can view the details in the attachment.
> I looked at some code in the core module and found out that
> the non-thread-safe method LogSegment.append(offset, messages) has two callers:
> 1) Log.append(messages)  // guarded by a synchronized 
> lock 
> 2) LogCleaner.cleanInto(topicAndPartition, source, dest, map, retainDeletes, 
> messageFormatVersion)   // not guarded 
> So I guess this may be the reason for the repeated offsets in the 0xx.log file 
> (the log segment's .log). 
> Although this is just my inference, I hope that this problem can be 
> repaired quickly.
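
A generic sketch of the reporter's reasoning (illustrative only, not the actual broker code): if a non-thread-safe append has two call paths, both need to hold the same lock, otherwise interleaved writes can corrupt the segment and its index:

{code}
final class GuardedAppendSketch {
    private final Object lock = new Object();

    // Stand-in for the non-thread-safe LogSegment.append(offset, messages).
    private void appendUnsafe(long offset, byte[] messages) {
        // writes to the segment's .log and .index files; unsafe to call concurrently
    }

    // Call path 1 (the produce path), which the reporter notes is synchronized.
    void appendFromLog(long offset, byte[] messages) {
        synchronized (lock) {
            appendUnsafe(offset, messages);
        }
    }

    // Call path 2 (the cleaner path), which the reporter suspects is unguarded;
    // taking the same lock here is what their inference suggests is missing.
    void appendFromCleaner(long offset, byte[] messages) {
        synchronized (lock) {
            appendUnsafe(offset, messages);
        }
    }
}
{code}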





[jira] [Updated] (KAFKA-4669) KafkaProducer.flush hangs when NetworkClient.handleCompletedReceives throws exception

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4669:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.0.1

> KafkaProducer.flush hangs when NetworkClient.handleCompletedReceives throws 
> exception
> -
>
> Key: KAFKA-4669
> URL: https://issues.apache.org/jira/browse/KAFKA-4669
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: Cheng Ju
>Priority: Critical
>  Labels: reliability
> Fix For: 0.11.0.1
>
>
> There is no try/catch in NetworkClient.handleCompletedReceives.  If an 
> exception is thrown after inFlightRequests.completeNext(source), then the 
> corresponding RecordBatch's done will never get called, and 
> KafkaProducer.flush will hang on this RecordBatch.
> I've checked 0.10 code and think this bug does exist in 0.10 versions.
> A real case: first, a correlation-id mismatch exception happens:
> 13 Jan 2017 17:08:24,059 ERROR [kafka-producer-network-thread | producer-21] 
> (org.apache.kafka.clients.producer.internals.Sender.run:130)  - Uncaught 
> error in kafka producer I/O thread: 
> java.lang.IllegalStateException: Correlation id for response (703766) does 
> not match request (703764)
>   at 
> org.apache.kafka.clients.NetworkClient.correlate(NetworkClient.java:477)
>   at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:440)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:265)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
>   at java.lang.Thread.run(Thread.java:745)
> Then jstack shows the thread is hanging on:
>   at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
>   at 
> org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:57)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:425)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:544)
>   at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:224)
>   at 
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
>   at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
>   at java.lang.Thread.run(Thread.java:745)
> client code 
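
A generic, self-contained sketch of the hazard described in this report (illustrative only, not the actual NetworkClient or producer code, and not the reporter's truncated client code): if the response handler can throw before the batch is marked done, anything blocked in flush() waits forever; completing the result unconditionally avoids that:

{code}
import java.util.concurrent.CountDownLatch;

final class CompleteOnErrorSketch {
    // Stand-in for the latch that KafkaProducer.flush() ultimately awaits
    // (ProduceRequestResult.await() in the stack trace above).
    private final CountDownLatch done = new CountDownLatch(1);

    void handleCompletedReceive(Runnable parseAndCompleteResponse) {
        try {
            // If this throws (e.g. a correlation id mismatch) and nothing marks
            // the batch as done, flush() below never returns.
            parseAndCompleteResponse.run();
        } catch (RuntimeException e) {
            System.err.println("failed to handle response: " + e);
        } finally {
            done.countDown(); // complete the result even on error
        }
    }

    void flush() throws InterruptedException {
        done.await();
    }
}
{code}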





[jira] [Commented] (KAFKA-4595) Controller send thread can't stop when broker change listener event trigger for dead brokers

2017-05-30 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030705#comment-16030705
 ] 

Ismael Juma commented on KAFKA-4595:


[~junrao], is this fixed given that deletion no longer happens in a separate 
thread?

> Controller send thread can't stop when broker change listener event trigger 
> for  dead brokers
> -
>
> Key: KAFKA-4595
> URL: https://issues.apache.org/jira/browse/KAFKA-4595
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.10.1.1
>Reporter: Pengwei
>Priority: Critical
>  Labels: reliability
> Fix For: 0.11.0.0
>
>
> In our test env, we found the controller is not working after a delete-topic 
> operation and a network issue; the stack is below:
> "ZkClient-EventThread-15-192.168.1.3:2184,192.168.1.4:2184,192.168.1.5:2184" 
> #15 daemon prio=5 os_prio=0 tid=0x7fb76416e000 nid=0x3019 waiting on 
> condition [0x7fb76b7c8000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xc05497b8> (a 
> java.util.concurrent.CountDownLatch$Sync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>   at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
>   at 
> kafka.utils.ShutdownableThread.awaitShutdown(ShutdownableThread.scala:50)
>   at kafka.utils.ShutdownableThread.shutdown(ShutdownableThread.scala:32)
>   at 
> kafka.controller.ControllerChannelManager.kafka$controller$ControllerChannelManager$$removeExistingBroker(ControllerChannelManager.scala:128)
>   at 
> kafka.controller.ControllerChannelManager.removeBroker(ControllerChannelManager.scala:81)
>   - locked <0xc0258760> (a java.lang.Object)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply$mcVI$sp(ReplicaStateMachine.scala:369)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(ReplicaStateMachine.scala:369)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(ReplicaStateMachine.scala:369)
>   at scala.collection.immutable.Set$Set1.foreach(Set.scala:79)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:369)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:358)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:356)
>   at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:842)
>   at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
>Locked ownable synchronizers:
>   - <0xc02587f8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> "Controller-1001-to-broker-1003-send-thread" #88 prio=5 os_prio=0 
> tid=0x7fb778342000 nid=0x5a4c waiting on condition [0x7fb761de]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xc02587f8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(Abs

[jira] [Updated] (KAFKA-3955) Kafka log recovery doesn't truncate logs on non-monotonic offsets, leading to failed broker boot

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3955:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.0.1

> Kafka log recovery doesn't truncate logs on non-monotonic offsets, leading to 
> failed broker boot
> 
>
> Key: KAFKA-3955
> URL: https://issues.apache.org/jira/browse/KAFKA-3955
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2.0, 0.8.2.1, 0.8.2.2, 
> 0.9.0.0, 0.9.0.1, 0.10.0.0, 0.10.1.1
>Reporter: Tom Crayford
>Priority: Critical
>  Labels: reliability
> Fix For: 0.11.0.1
>
>
> Hi,
> I've found a bug impacting kafka brokers on startup after an unclean 
> shutdown. If a log segment is corrupt and has non-monotonic offsets (see the 
> appendix of this bug for a sample output from {{DumpLogSegments}}), then 
> {{LogSegment.recover}} throws an {{InvalidOffsetException}} error: 
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/log/OffsetIndex.scala#L218
> That code is called by {{LogSegment.recover}}: 
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/log/LogSegment.scala#L191
> Which is called in several places in {{Log.scala}}. Notably it's called four 
> times during recovery:
> Thrice in Log.loadSegments
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/log/Log.scala#L199
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/log/Log.scala#L204
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/log/Log.scala#L226
> and once in Log.recoverLog
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/log/Log.scala#L268
> Of these, only the very last one has a {{catch}} for 
> {{InvalidOffsetException}}. When that catches the issue, it truncates the 
> whole log (not just this segment): 
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/log/Log.scala#L274
>  to the start segment of the bad log segment.
> However, this code can't be hit on recovery, because of the code paths in 
> {{loadSegments}} - they mean we'll never hit truncation here, as we always 
> throw this exception and that goes all the way to the toplevel exception 
> handler and crashes the JVM.
> As {{Log.recoverLog}} is always called during recovery, I *think* a fix for 
> this is to move this crash recovery/truncate code inside a new method in 
> {{Log.scala}}, and call that instead of {{LogSegment.recover}} in each place. 
> That code should return the number of {{truncatedBytes}} like we do in 
> {{Log.recoverLog}} and then truncate the log. The callers will have to be 
> notified "stop iterating over files in the directory", likely via a return 
> value of {{truncatedBytes}} like {{Log.recoverLog}} does right now.
> I'm happy working on a patch for this. I'm aware this recovery code is tricky 
> and important to get right.
> I'm also curious (and currently don't have good theories as of yet) as to how 
> this log segment got into this state with non-monotonic offsets. This segment 
> is using gzip compression, and is under 0.9.0.1. The same bug with respect to 
> recovery exists in trunk, but I'm unsure if the new handling around 
> compressed messages (KIP-31) means the bug where non-monotonic offsets get 
> appended is still present in trunk.
> As a production workaround, one can manually truncate that log folder 
> yourself (delete all .index/.log files including and after the one with the 
> bad offset). However, kafka should (and can) handle this case well - with 
> replication we can truncate in broker startup.
> stacktrace and error message:
> {code}
> pri=WARN  t=pool-3-thread-4 at=Log Found a corrupted index file, 
> /$DIRECTORY/$TOPIC-22/14306536.index, deleting and rebuilding 
> index...
> pri=ERROR t=main at=LogManager There was an error in one of the threads 
> during logs loading: kafka.common.InvalidOffsetException: Attempt to append 
> an offset (15000337) to position 111719 no larger than the last offset 
> appended (15000337) to /$DIRECTORY/$TOPIC-8/14008931.index.
> pri=FATAL t=main at=KafkaServer Fatal error during KafkaServer startup. 
> Prepare to shutdown kafka.common.InvalidOffsetException: Attempt to append an 
> offset (15000337) to position 111719 no larger than the last offset appended 
> (15000337) to /$DIRECTORY/$TOPIC-8/14008931.index.
> at 
> kafka.log.OffsetIndex$$anonfun$append$1.apply$mcV$sp(OffsetIndex.scala:207)
> at 
> kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:197)
> at 
> kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:197)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)

[jira] [Updated] (KAFKA-1694) KIP-4: Command line and centralized operations

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1694:
---
Fix Version/s: (was: 0.11.0.0)

> KIP-4: Command line and centralized operations
> --
>
> Key: KAFKA-1694
> URL: https://issues.apache.org/jira/browse/KAFKA-1694
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Joe Stein
>Assignee: Grant Henke
>Priority: Critical
> Attachments: KAFKA-1694_2014-12-24_21:21:51.patch, 
> KAFKA-1694_2015-01-12_15:28:41.patch, KAFKA-1694_2015-01-12_18:54:48.patch, 
> KAFKA-1694_2015-01-13_19:30:11.patch, KAFKA-1694_2015-01-14_15:42:12.patch, 
> KAFKA-1694_2015-01-14_18:07:39.patch, KAFKA-1694_2015-03-12_13:04:37.patch, 
> KAFKA-1694.patch, KAFKA-1772_1802_1775_1774_v2.patch
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Command+Line+and+Related+Improvements





[jira] [Commented] (KAFKA-5321) MemoryRecords.filterTo can return corrupt data if output buffer is not large enough

2017-05-30 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030703#comment-16030703
 ] 

Ismael Juma commented on KAFKA-5321:


[~hachikuji], this was fixed as part of KAFKA-5316, right?

> MemoryRecords.filterTo can return corrupt data if output buffer is not large 
> enough
> ---
>
> Key: KAFKA-5321
> URL: https://issues.apache.org/jira/browse/KAFKA-5321
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> Due to KAFKA-5316, it is possible for a record set to grow during cleaning 
> and overflow the output buffer allocated for writing. When we reach the 
> record set which is doomed to overflow the buffer, there are two 
> possibilities:
> 1. No records were removed and the original entry is directly appended to the 
> log. This results in the overflow reported in KAFKA-5316.
> 2. Records were removed and a new record set is built. 
> Here we are concerned with the latter case. The problem is that the builder 
> code automatically allocates a new buffer when we reach the end of the 
> existing buffer and does not reset the position in the original buffer. Since 
> {{MemoryRecords.filterTo}} continues using the old buffer, this can lead to 
> data corruption after cleaning (the data left in the overflowed buffer is 
> garbage). 
> Note that this issue could get fixed as part of a general solution 
> to KAFKA-5316, but if that seems too risky, we might fix this separately. A 
> simple solution is to make both paths consistent and ensure that we raise an 
> exception.
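> For illustration only, a minimal, hypothetical ByteBuffer sketch of the 
> failure mode (generic code, not the actual MemoryRecordsBuilder 
> implementation): if the writer silently switches to a larger buffer when the 
> next record no longer fits, and never resets the position of the original 
> buffer, a caller that keeps reading the original buffer sees a half-written 
> batch.
> {code}
> import java.nio.ByteBuffer;
> 
> public class OverflowSketch {
>     public static void main(String[] args) {
>         ByteBuffer original = ByteBuffer.allocate(8); // caller-provided output buffer
>         ByteBuffer target = original;
> 
>         target.put(new byte[6]);                      // part of the rebuilt batch already written
> 
>         byte[] next = new byte[12];                   // next record no longer fits
>         if (next.length > target.remaining()) {
>             target = ByteBuffer.allocate(next.length); // writer switches buffers silently;
>                                                        // 'original' keeps its position and bytes
>         }
>         target.put(next);
> 
>         original.flip();                              // caller still reads 'original'
>         System.out.println("caller sees " + original.remaining() + " bytes of a half-written batch");
>     }
> }
> {code}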



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4693) Consumer subscription change during rebalance causes exception

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4693:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.1.0

> Consumer subscription change during rebalance causes exception
> --
>
> Key: KAFKA-4693
> URL: https://issues.apache.org/jira/browse/KAFKA-4693
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Jason Gustafson
>Priority: Minor
> Fix For: 0.11.1.0
>
>
> After every rebalance, the consumer validates that the assignment received 
> contains only partitions from topics that were subscribed. If not, then we 
> raise an exception to the user. It is possible for a wakeup or an interrupt 
> to leave the consumer with a rebalance in progress (e.g. with a JoinGroup to 
> the coordinator in-flight). If the user then changes the topic subscription, 
> then this validation upon completion of the rebalance will fail. We should 
> probably detect the subscription change, eat the exception, and request 
> another rebalance. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4950) ConcurrentModificationException when iterating over Kafka Metrics

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4950:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.0.1

> ConcurrentModificationException when iterating over Kafka Metrics
> -
>
> Key: KAFKA-4950
> URL: https://issues.apache.org/jira/browse/KAFKA-4950
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.1
>Reporter: Dumitru Postoronca
>Assignee: Vahid Hashemian
>Priority: Minor
> Fix For: 0.11.0.1
>
>
> It looks like when calling {{PartitionStates.partitionSet()}}, while the 
> resulting HashMap is being built, the internal state of the allocations can 
> change, which leads to a ConcurrentModificationException during the copy 
> operation.
> {code}
> java.util.ConcurrentModificationException
> at 
> java.util.LinkedHashMap$LinkedHashIterator.nextNode(LinkedHashMap.java:719)
> at 
> java.util.LinkedHashMap$LinkedKeyIterator.next(LinkedHashMap.java:742)
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:119)
> at 
> org.apache.kafka.common.internals.PartitionStates.partitionSet(PartitionStates.java:66)
> at 
> org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedPartitions(SubscriptionState.java:291)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$ConsumerCoordinatorMetrics$1.measure(ConsumerCoordinator.java:783)
> at 
> org.apache.kafka.common.metrics.KafkaMetric.value(KafkaMetric.java:61)
> at 
> org.apache.kafka.common.metrics.KafkaMetric.value(KafkaMetric.java:52)
> {code}
> {code}
> // client code:
> import java.util.Collections;
> import java.util.HashMap;
> import java.util.Map;
> import com.codahale.metrics.Gauge;
> import com.codahale.metrics.Metric;
> import com.codahale.metrics.MetricSet;
> import org.apache.kafka.clients.consumer.KafkaConsumer;
> import org.apache.kafka.common.MetricName;
> import static com.codahale.metrics.MetricRegistry.name;
>
> public class KafkaMetricSet implements MetricSet {
>
>     private final KafkaConsumer<?, ?> client;
>
>     public KafkaMetricSet(KafkaConsumer<?, ?> client) {
>         this.client = client;
>     }
>
>     @Override
>     public Map<String, Metric> getMetrics() {
>         final Map<String, Metric> gauges = new HashMap<String, Metric>();
>         Map<MetricName, ? extends org.apache.kafka.common.Metric> m = client.metrics();
>         for (final Map.Entry<MetricName, ? extends org.apache.kafka.common.Metric> e : m.entrySet()) {
>             gauges.put(name(e.getKey().group(), e.getKey().name(), "count"), new Gauge<Double>() {
>                 @Override
>                 public Double getValue() {
>                     return e.getValue().value(); // exception thrown here
>                 }
>             });
>         }
>         return Collections.unmodifiableMap(gauges);
>     }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5218) New Short serializer, deserializer, serde

2017-05-30 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030701#comment-16030701
 ] 

Ismael Juma commented on KAFKA-5218:


Is this still planned for 0.11.0.0?

> New Short serializer, deserializer, serde
> -
>
> Key: KAFKA-5218
> URL: https://issues.apache.org/jira/browse/KAFKA-5218
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, streams
>Affects Versions: 0.10.1.1, 0.10.2.0
>Reporter: Mario Molina
>Assignee: Mario Molina
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> There is no Short serializer/deserializer in the current clients component.
> It could be useful when using Kafka Connect to write data to databases with 
> SMALLINT fields (or similar), avoiding conversions to int and slightly 
> improving performance in terms of memory and network usage.
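> For illustration, a minimal sketch of what such a serializer/deserializer 
> could look like against the existing Serializer/Deserializer interfaces 
> (class names and encoding are assumptions, not the eventual implementation):
> {code}
> import java.util.Map;
> import org.apache.kafka.common.errors.SerializationException;
> import org.apache.kafka.common.serialization.Deserializer;
> import org.apache.kafka.common.serialization.Serializer;
> 
> public class ShortSerdeSketch {
> 
>     public static class ShortSerializer implements Serializer<Short> {
>         @Override public void configure(Map<String, ?> configs, boolean isKey) { }
> 
>         @Override
>         public byte[] serialize(String topic, Short data) {
>             if (data == null)
>                 return null;
>             // two bytes, big-endian
>             return new byte[] { (byte) (data >>> 8), (byte) data.shortValue() };
>         }
> 
>         @Override public void close() { }
>     }
> 
>     public static class ShortDeserializer implements Deserializer<Short> {
>         @Override public void configure(Map<String, ?> configs, boolean isKey) { }
> 
>         @Override
>         public Short deserialize(String topic, byte[] data) {
>             if (data == null)
>                 return null;
>             if (data.length != 2)
>                 throw new SerializationException("Size of data received by ShortDeserializer is not 2");
>             return (short) (((data[0] & 0xFF) << 8) | (data[1] & 0xFF));
>         }
> 
>         @Override public void close() { }
>     }
> }
> {code}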



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-2875) Class path contains multiple SLF4J bindings warnings when using scripts under bin

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2875:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.1.0

> Class path contains multiple SLF4J bindings warnings when using scripts under 
> bin
> -
>
> Key: KAFKA-2875
> URL: https://issues.apache.org/jira/browse/KAFKA-2875
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: jin xing
>Priority: Minor
>  Labels: patch
> Fix For: 0.11.1.0
>
>
> This adds a lot of noise when running the scripts, see example when running 
> kafka-console-producer.sh:
> {code}
> ~/D/s/kafka-0.9.0.0-src ❯❯❯ ./bin/kafka-console-producer.sh --topic topic 
> --broker-list localhost:9092 ⏎
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/tools/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/api/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/runtime/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/file/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/json/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-3733) Avoid long command lines by setting CLASSPATH in environment

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3733:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.1.0

> Avoid long command lines by setting CLASSPATH in environment
> 
>
> Key: KAFKA-3733
> URL: https://issues.apache.org/jira/browse/KAFKA-3733
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Adrian Muraru
>Assignee: Adrian Muraru
>Priority: Minor
> Fix For: 0.11.1.0
>
>
> {{kafka-run-class.sh}} sets the JVM classpath in the command line via {{-cp}}.
> This generates long command lines that get truncated in the output of commands 
> like ps, pgrep, etc. 
> An alternative is to set CLASSPATH in the environment.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4064) Add support for infinite endpoints for range queries in Kafka Streams KV stores

2017-05-30 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030700#comment-16030700
 ] 

Ismael Juma commented on KAFKA-4064:


Is this still planned for 0.11.0.0?

> Add support for infinite endpoints for range queries in Kafka Streams KV 
> stores
> ---
>
> Key: KAFKA-4064
> URL: https://issues.apache.org/jira/browse/KAFKA-4064
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0, 0.10.2.0
>Reporter: Roger Hoover
>Assignee: Xavier Léauté
>Priority: Minor
>  Labels: needs-kip
> Fix For: 0.11.0.0
>
>
> In some applications, it's useful to iterate over the key-value store either:
> 1. from the beginning up to a certain key
> 2. from a certain key to the end
> We can add two new methods rangeUntil() and rangeFrom() easily to support this.
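> As a purely hypothetical sketch (names and semantics to be settled by the 
> KIP), the additions could sit next to the existing range(from, to) on the 
> read-only key-value store interface:
> {code}
> import org.apache.kafka.streams.state.KeyValueIterator;
> 
> // Illustrative only: open-ended variants of the existing range(K from, K to).
> public interface OpenEndedRangeQueries<K, V> {
> 
>     // 1. from the beginning of the store up to a certain key
>     KeyValueIterator<K, V> rangeUntil(K to);
> 
>     // 2. from a certain key to the end of the store
>     KeyValueIterator<K, V> rangeFrom(K from);
> }
> {code}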



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-1548) Refactor the "replica_id" in requests

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1548:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.1.0

> Refactor the "replica_id" in requests
> -
>
> Key: KAFKA-1548
> URL: https://issues.apache.org/jira/browse/KAFKA-1548
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Balint Molnar
>Priority: Minor
>  Labels: newbie
> Fix For: 0.11.1.0
>
>
> Today, many requests like fetch and offset have an integer replica_id 
> field. If the request is from a follower replica, it is the broker id of 
> that follower; if it is from a regular consumer, it can be one of 
> two values: "-1" for an ordinary consumer, or "-2" for a debugging consumer. 
> Hence this replica_id field is used in two ways:
> 1) Logging for troubleshooting in request logs, which is helpful only 
> when the request is from a follower replica, 
> 2) Deciding whether it is from a consumer or a replica so the request can be 
> handled differently. For this purpose we do not really care about the 
> actual id value.
> We would probably like to make the following improvements:
> 1) Rename "replica_id" to something less confusing?
> 2) Change the request.toString() function based on the replica_id, whether it 
> is a positive integer (meaning from a broker replica fetcher) or -1/-2 
> (meaning from a regular consumer).
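> A tiny, purely illustrative helper (not broker code) capturing the two-fold 
> use described above:
> {code}
> public final class ReplicaIdSemantics {
>     // Sentinel values currently overloaded onto the replica_id field.
>     public static final int ORDINARY_CONSUMER_ID = -1;
>     public static final int DEBUGGING_CONSUMER_ID = -2;
> 
>     private ReplicaIdSemantics() { }
> 
>     // Use 2): decide whether the request came from a follower broker or a client consumer.
>     public static boolean isFromFollower(int replicaId) {
>         return replicaId >= 0;
>     }
> 
>     // Use 1): only a non-negative id identifies a concrete follower in request logs.
>     public static String describe(int replicaId) {
>         if (isFromFollower(replicaId))
>             return "follower broker " + replicaId;
>         return replicaId == DEBUGGING_CONSUMER_ID ? "debugging consumer" : "ordinary consumer";
>     }
> }
> {code}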



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-1899) Update code coverage report generation once gradle-scoverage starts publishing scoverage report

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1899:
---
Fix Version/s: (was: 0.11.0.0)

> Update code coverage report generation once gradle-scoverage starts 
> publishing scoverage report
> ---
>
> Key: KAFKA-1899
> URL: https://issues.apache.org/jira/browse/KAFKA-1899
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ashish Singh
>Assignee: Ashish Singh
>Priority: Minor
>  Labels: newbie
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-1900) Add documentation on usage of code coverage in the project and how it works

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1900:
---
Fix Version/s: (was: 0.11.0.0)

> Add documentation on usage of code coverage in the project and how it works
> ---
>
> Key: KAFKA-1900
> URL: https://issues.apache.org/jira/browse/KAFKA-1900
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ashish Singh
>Assignee: Ashish Singh
>Priority: Minor
>  Labels: newbie
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-2394) Use RollingFileAppender by default in log4j.properties

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2394:
---
Fix Version/s: (was: 0.11.0.0)
   0.12.0.0

> Use RollingFileAppender by default in log4j.properties
> --
>
> Key: KAFKA-2394
> URL: https://issues.apache.org/jira/browse/KAFKA-2394
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Dustin Cote
>Priority: Minor
>  Labels: newbie
> Fix For: 0.12.0.0
>
> Attachments: log4j.properties.patch
>
>
> The default log4j.properties bundled with Kafka uses ConsoleAppender and 
> DailyRollingFileAppender, which offer no protection to users from spammy 
> logging. In extreme cases (such as when issues like KAFKA-1461 are 
> encountered), the logs can exhaust the local disk space. This could be a 
> problem for Kafka adoption since new users are less likely to adjust the 
> logging properties themselves, and are more likely to have configuration 
> problems which result in log spam. 
> To fix this, we can use RollingFileAppender, which offers two settings for 
> controlling the maximum space that log files will use.
> maxBackupIndex: how many backup files to retain
> maxFileSize: the max size of each log file
> One open question is whether this change poses a compatibility concern. The backup 
> strategy and filenames used by RollingFileAppender are different from those 
> used by DailyRollingFileAppender, so any tools which depend on the old format 
> will break. If we think this is a serious problem, one solution would be to 
> provide two versions of log4j.properties and add a flag to enable the new 
> one. Another solution would be to include the RollingFileAppender 
> configuration in the default log4j.properties, but commented out.
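> A minimal sketch of what such a commented-out RollingFileAppender block could 
> look like in log4j.properties (appender name, file location, and limits are 
> placeholders):
> {code}
> # log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
> # log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
> # log4j.appender.kafkaAppender.MaxFileSize=100MB
> # log4j.appender.kafkaAppender.MaxBackupIndex=10
> # log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
> # log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
> {code}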



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3155: KAKFA-5334: Allow rocksdb.config.setter to be spec...

2017-05-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3155


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-5334) rocksdb.config.setter must be a class instance, not a class name

2017-05-30 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-5334.
--
   Resolution: Fixed
Fix Version/s: 0.11.0.0

Issue resolved by pull request 3155
[https://github.com/apache/kafka/pull/3155]

> rocksdb.config.setter must be a class instance, not a class name
> 
>
> Key: KAFKA-5334
> URL: https://issues.apache.org/jira/browse/KAFKA-5334
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
>Reporter: Tommy Becker
>Assignee: Tommy Becker
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> Unlike other config properties that are classes, {{rocksdb.config.setter}} 
> cannot be a class _name_; it must be a class _instance_. This is because the 
> raw config Map gets passed to RocksDBStore instead of the typed 
> StreamsConfig, which is where the String -> Class conversion is handled. This 
> means the config setter cannot be set via a config file, which is pretty 
> limiting.
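> For context, a minimal sketch (names are illustrative) of a setter and the 
> only way it can currently be wired in, i.e. as a class object rather than a 
> String read from a config file:
> {code}
> import java.util.Map;
> import java.util.Properties;
> import org.apache.kafka.streams.StreamsConfig;
> import org.apache.kafka.streams.state.RocksDBConfigSetter;
> import org.rocksdb.Options;
> 
> public class CustomRocksDBConfig implements RocksDBConfigSetter {
>     @Override
>     public void setConfig(String storeName, Options options, Map<String, Object> configs) {
>         options.setMaxWriteBufferNumber(2); // example tweak
>     }
> 
>     public static void main(String[] args) {
>         Properties props = new Properties();
>         // Works today: the value is a Class object...
>         props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, CustomRocksDBConfig.class);
>         // ...whereas a plain String class name (what a properties file would give you)
>         // reaches RocksDBStore unconverted and fails:
>         // props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, "com.example.CustomRocksDBConfig");
>     }
> }
> {code}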



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3114: KAFKA-5211: Do not skip a corrupted record in cons...

2017-05-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3114


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-5211) KafkaConsumer should not skip a corrupted record after throwing an exception.

2017-05-30 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-5211:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 3114
[https://github.com/apache/kafka/pull/3114]

> KafkaConsumer should not skip a corrupted record after throwing an exception.
> -
>
> Key: KAFKA-5211
> URL: https://issues.apache.org/jira/browse/KAFKA-5211
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>  Labels: clients, consumer
> Fix For: 0.11.0.0
>
>
> In 0.10.2, when there is a corrupted record, KafkaConsumer.poll() will throw 
> an exception and block on that corrupted record. In the latest trunk this 
> behavior has changed to skip the corrupted record (which is the old consumer 
> behavior). With KIP-98, skipping corrupted messages would be a little 
> dangerous as the message could be a control message for a transaction. We 
> should fix the issue to let the KafkaConsumer block on the corrupted messages.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5211) KafkaConsumer should not skip a corrupted record after throwing an exception.

2017-05-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030683#comment-16030683
 ] 

ASF GitHub Bot commented on KAFKA-5211:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3114


> KafkaConsumer should not skip a corrupted record after throwing an exception.
> -
>
> Key: KAFKA-5211
> URL: https://issues.apache.org/jira/browse/KAFKA-5211
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>  Labels: clients, consumer
> Fix For: 0.11.0.0
>
>
> In 0.10.2, when there is a corrupted record, KafkaConsumer.poll() will throw 
> an exception and block on that corrupted record. In the latest trunk this 
> behavior has changed to skip the corrupted record (which is the old consumer 
> behavior). With KIP-98, skipping corrupted messages would be a little 
> dangerous as the message could be a control message for a transaction. We 
> should fix the issue to let the KafkaConsumer block on the corrupted messages.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Build failed in Jenkins: kafka-trunk-jdk8 #1615

2017-05-30 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5251; Producer should cancel unsent AddPartitions and Produce

--
[...truncated 904.12 KB...]

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
STARTED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.FetcherTest > testFetcher STARTED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride SKIPPED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
STARTED

kafka.integration.Unc

Re: Jira-Spam on Dev-Mailinglist

2017-05-30 Thread Jeff Widman
I'm hugely in favor of this change as well...

Although I actually find the Github pull request emails less useful than
the jirabot ones since Jira typically has more info when I'm trying to
figure out if the issue is relevant to me or not...

On Tue, May 30, 2017 at 2:28 PM, Guozhang Wang  wrote:

> I actually do not know.. Maybe Jun knows better than me?
>
>
> Guozhang
>
> On Mon, May 29, 2017 at 12:58 PM, Gwen Shapira  wrote:
>
> > I agree.
> >
> > Guozhang, do you know how to implement the suggestion? JIRA to Apache
> > Infra? Or is this something we can do ourselves somehow?
> >
> > On Mon, May 29, 2017 at 9:33 PM Guozhang Wang 
> wrote:
> >
> > > I share your pain. Right now I use filters on my email accounts and it
> > > has been down to about 25 per day.
> > >
> > > I think setting up a separate mailing list for jirabot and jenkins 
> > > auto-generated emails is a good idea.
> > >
> > >
> > > Guozhang
> > >
> > >
> > > On Mon, May 29, 2017 at 12:58 AM,  wrote:
> > >
> > > > Hello everyone
> > > >
> > > > I find it hard to follow this mailing list due to all the mails
> > > > generated by Jira. Just over this weekend there are 240 new mails.
> > > > Would it be possible to set up something like j...@kafka.apache.org
> > > > where everyone interested in those Jira mails can subscribe?
> > > >
> > > > Right now I am going to set up a filter which just deletes the
> > > > jira-tagged mails, but I think the current setup also makes it hard
> > > > to read through the archives.
> > > >
> > > > regards
> > > > Marc
> > >
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>
>
>
> --
> -- Guozhang
>


[jira] [Assigned] (KAFKA-5351) Broker clean bounce test puts the broker into a 'CONCURRENT_TRANSACTIONS' state permanently

2017-05-30 Thread Apurva Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apurva Mehta reassigned KAFKA-5351:
---

Assignee: Apurva Mehta

> Broker clean bounce test puts the broker into a 'CONCURRENT_TRANSACTIONS' 
> state permanently
> ---
>
> Key: KAFKA-5351
> URL: https://issues.apache.org/jira/browse/KAFKA-5351
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> In the broker clean bounce test, sometimes the consumer just hangs on a 
> request to the transactional coordinator because it keeps getting a 
> `CONCURRENT_TRANSACTIONS` error. This continues for 30 seconds, until the 
> process is killed. 
> {noformat}
> [2017-05-31 04:54:14,053] DEBUG TransactionalId my-second-transactional-id -- 
> Received FindCoordinator response with error NONE 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-05-31 04:54:14,053] DEBUG TransactionalId: my-second-transactional-id 
> -- Sending transactional request (transactionalId=my-second-transactional-id, 
> producerId=2000, producerEpoch=0, result=COMMIT) 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2017-05-31 04:54:14,053] TRACE TransactionalId: my-second-transactional-id 
> -- Waiting 100ms before resending a transactional request 
> (transactionalId=my-second-transactional-id, producerId=2000, 
> producerEpoch=0, result=COMMIT) 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2017-05-31 04:54:14,154] TRACE TransactionalId: my-second-transactional-id 
> -- Sending transactional request (transactionalId=my-second-transactional-id, 
> producerId=2000, producerEpoch=0, result=COMMIT) to node 1 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2017-05-31 04:54:14,191] TRACE Got transactional response for 
> request:(transactionalId=my-second-transactional-id, producerId=2000, 
> producerEpoch=0, result=COMMIT) 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-05-31 04:54:14,191] DEBUG TransactionalId my-second-transactional-id -- 
> Received EndTxn response with error COORDINATOR_NOT_AVAILABLE 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-05-31 04:54:14,192] DEBUG TransactionalId: my-second-transactional-id 
> -- Sending transactional request (type=FindCoordinatorRequest, 
> coordinatorKey=my-second-transactional-id, coordinatorType=TRANSACTION) 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2017-05-31 04:54:14,192] TRACE TransactionalId: my-second-transactional-id 
> -- Sending transactional request (type=FindCoordinatorRequest, 
> coordinatorKey=my-second-transactional-id, coordinatorType=TRANSACTION) to 
> node 3 (org.apache.kafka.clients.producer.internals.Sender)
> [2017-05-31 04:54:14,193] TRACE Got transactional response for 
> request:(type=FindCoordinatorRequest, 
> coordinatorKey=my-second-transactional-id, coordinatorType=TRANSACTION) 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-05-31 04:54:14,193] DEBUG TransactionalId my-second-transactional-id -- 
> Received FindCoordinator response with error NONE 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-05-31 04:54:14,193] DEBUG TransactionalId: my-second-transactional-id 
> -- Sending transactional request (transactionalId=my-second-transactional-id, 
> producerId=2000, producerEpoch=0, result=COMMIT) 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2017-05-31 04:54:14,193] TRACE TransactionalId: my-second-transactional-id 
> -- Waiting 100ms before resending a transactional request 
> (transactionalId=my-second-transactional-id, producerId=2000, 
> producerEpoch=0, result=COMMIT) 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2017-05-31 04:54:14,294] TRACE TransactionalId: my-second-transactional-id 
> -- Sending transactional request (transactionalId=my-second-transactional-id, 
> producerId=2000, producerEpoch=0, result=COMMIT) to node 1 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2017-05-31 04:54:14,294] TRACE Got transactional response for 
> request:(transactionalId=my-second-transactional-id, producerId=2000, 
> producerEpoch=0, result=COMMIT) 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-05-31 04:54:14,295] DEBUG TransactionalId my-second-transactional-id -- 
> Received EndTxn response with error CONCURRENT_TRANSACTIONS 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-05-31 04:54:14,295] DEBUG TransactionalId: my-second-transactional-id 
> -- Sending transactional request (transactionalId=my-

[jira] [Created] (KAFKA-5351) Broker clean bounce test puts the broker into a 'CONCURRENT_TRANSACTIONS' state permanently

2017-05-30 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-5351:
---

 Summary: Broker clean bounce test puts the broker into a 
'CONCURRENT_TRANSACTIONS' state permanently
 Key: KAFKA-5351
 URL: https://issues.apache.org/jira/browse/KAFKA-5351
 Project: Kafka
  Issue Type: Sub-task
Reporter: Apurva Mehta
Priority: Blocker


In the broker clean bounce test, sometimes the consumer just hangs on a request 
to the transactional coordinator because it keeps getting a 
`CONCURRENT_TRANSACTIONS` error. This continues for 30 seconds, until the 
process is killed. 

{noformat}
[2017-05-31 04:54:14,053] DEBUG TransactionalId my-second-transactional-id -- 
Received FindCoordinator response with error NONE 
(org.apache.kafka.clients.producer.internals.TransactionManager)
[2017-05-31 04:54:14,053] DEBUG TransactionalId: my-second-transactional-id -- 
Sending transactional request (transactionalId=my-second-transactional-id, 
producerId=2000, producerEpoch=0, result=COMMIT) 
(org.apache.kafka.clients.producer.internals.Sender)
[2017-05-31 04:54:14,053] TRACE TransactionalId: my-second-transactional-id -- 
Waiting 100ms before resending a transactional request 
(transactionalId=my-second-transactional-id, producerId=2000, producerEpoch=0, 
result=COMMIT) (org.apache.kafka.clients.producer.internals.Sender)
[2017-05-31 04:54:14,154] TRACE TransactionalId: my-second-transactional-id -- 
Sending transactional request (transactionalId=my-second-transactional-id, 
producerId=2000, producerEpoch=0, result=COMMIT) to node 1 
(org.apache.kafka.clients.producer.internals.Sender)
[2017-05-31 04:54:14,191] TRACE Got transactional response for 
request:(transactionalId=my-second-transactional-id, producerId=2000, 
producerEpoch=0, result=COMMIT) 
(org.apache.kafka.clients.producer.internals.TransactionManager)
[2017-05-31 04:54:14,191] DEBUG TransactionalId my-second-transactional-id -- 
Received EndTxn response with error COORDINATOR_NOT_AVAILABLE 
(org.apache.kafka.clients.producer.internals.TransactionManager)
[2017-05-31 04:54:14,192] DEBUG TransactionalId: my-second-transactional-id -- 
Sending transactional request (type=FindCoordinatorRequest, 
coordinatorKey=my-second-transactional-id, coordinatorType=TRANSACTION) 
(org.apache.kafka.clients.producer.internals.Sender)
[2017-05-31 04:54:14,192] TRACE TransactionalId: my-second-transactional-id -- 
Sending transactional request (type=FindCoordinatorRequest, 
coordinatorKey=my-second-transactional-id, coordinatorType=TRANSACTION) to node 
3 (org.apache.kafka.clients.producer.internals.Sender)
[2017-05-31 04:54:14,193] TRACE Got transactional response for 
request:(type=FindCoordinatorRequest, 
coordinatorKey=my-second-transactional-id, coordinatorType=TRANSACTION) 
(org.apache.kafka.clients.producer.internals.TransactionManager)
[2017-05-31 04:54:14,193] DEBUG TransactionalId my-second-transactional-id -- 
Received FindCoordinator response with error NONE 
(org.apache.kafka.clients.producer.internals.TransactionManager)
[2017-05-31 04:54:14,193] DEBUG TransactionalId: my-second-transactional-id -- 
Sending transactional request (transactionalId=my-second-transactional-id, 
producerId=2000, producerEpoch=0, result=COMMIT) 
(org.apache.kafka.clients.producer.internals.Sender)
[2017-05-31 04:54:14,193] TRACE TransactionalId: my-second-transactional-id -- 
Waiting 100ms before resending a transactional request 
(transactionalId=my-second-transactional-id, producerId=2000, producerEpoch=0, 
result=COMMIT) (org.apache.kafka.clients.producer.internals.Sender)
[2017-05-31 04:54:14,294] TRACE TransactionalId: my-second-transactional-id -- 
Sending transactional request (transactionalId=my-second-transactional-id, 
producerId=2000, producerEpoch=0, result=COMMIT) to node 1 
(org.apache.kafka.clients.producer.internals.Sender)
[2017-05-31 04:54:14,294] TRACE Got transactional response for 
request:(transactionalId=my-second-transactional-id, producerId=2000, 
producerEpoch=0, result=COMMIT) 
(org.apache.kafka.clients.producer.internals.TransactionManager)
[2017-05-31 04:54:14,295] DEBUG TransactionalId my-second-transactional-id -- 
Received EndTxn response with error CONCURRENT_TRANSACTIONS 
(org.apache.kafka.clients.producer.internals.TransactionManager)
[2017-05-31 04:54:14,295] DEBUG TransactionalId: my-second-transactional-id -- 
Sending transactional request (transactionalId=my-second-transactional-id, 
producerId=2000, producerEpoch=0, result=COMMIT) 
(org.apache.kafka.clients.producer.internals.Sender)
[2017-05-31 04:54:14,295] TRACE TransactionalId: my-second-transactional-id -- 
Waiting 100ms before resending a transactional request 
(transactionalId=my-second-transactional-id, producerId=2000, producerEpoch=0, 
result=COMMIT) (org.apache.kafka.clients.producer.internals.Sender)
[2017-05-31 04:54:14,395] TRACE TransactionalId: my-second-transactional-id -- 
Sending transactional request (tran

Build failed in Jenkins: kafka-0.11.0-jdk7 #53

2017-05-30 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5251; Producer should cancel unsent AddPartitions and Produce

--
[...truncated 902.76 KB...]

kafka.producer.ProducerTest > testUpdateBrokerPartitionInfo PASSED

kafka.producer.ProducerTest > testSendWithDeadBroker STARTED

kafka.producer.ProducerTest > testSendWithDeadBroker PASSED

kafka.tools.MirrorMakerIntegrationTest > testCommaSeparatedRegex STARTED

kafka.tools.MirrorMakerIntegrationTest > testCommaSeparatedRegex PASSED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum 
STARTED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit STARTED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails STARTED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile STARTED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset PASSED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer STARTED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp STARTED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs STARTED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage STARTED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.common.InterBrokerSendThreadTest > 
shouldCreateClientRequestAndSendWhenNodeIsReady STARTED

kafka.common.InterBrokerSendThreadTest > 
shouldCreateClientRequestAndSendWhenNodeIsReady PASSED

kafka.common.InterBrokerSendThreadTest > 
shouldCallCompletionHandlerWithDisconnectedResponseWhenNodeNotReady STARTED

kafka.common.InterBrokerSendThreadTest > 
shouldCallCompletionHandlerWithDisconnectedResponseWhenNodeNotReady PASSED

kafka.common.InterBrokerSendThreadTest > shouldNotSendAnythingWhenNoRequests 
STARTED

kafka.common.InterBrokerSendThreadTest > shouldNotSendAnythingWhenNoRequests 
PASSED

kafka.common.ConfigTest > testInvalidGroupIds STARTED

kafka.common.ConfigTest > testInvalidGroupIds PASSED

kafka.common.ConfigT

Request a permission to KIP

2017-05-30 Thread Ma Tianchi
Hi,
I want to get permission to add a KIP. My wiki username is marktcma.
Thanks.

[jira] [Commented] (KAFKA-5251) Producer should drop queued sends when transaction is aborted

2017-05-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030591#comment-16030591
 ] 

ASF GitHub Bot commented on KAFKA-5251:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3161


> Producer should drop queued sends when transaction is aborted
> -
>
> Key: KAFKA-5251
> URL: https://issues.apache.org/jira/browse/KAFKA-5251
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> As an optimization, if a transaction is aborted, we can drop any records 
> which have not yet been sent to the brokers. However, to avoid the sequence 
> number getting out of sync, we need to continue sending any request which has 
> been sent at least once.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3161: KAFKA-5251: Producer should cancel unsent AddParti...

2017-05-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3161


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-5251) Producer should drop queued sends when transaction is aborted

2017-05-30 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-5251:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 3161
[https://github.com/apache/kafka/pull/3161]

> Producer should drop queued sends when transaction is aborted
> -
>
> Key: KAFKA-5251
> URL: https://issues.apache.org/jira/browse/KAFKA-5251
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> As an optimization, if a transaction is aborted, we can drop any records 
> which have not yet been sent to the brokers. However, to avoid the sequence 
> number getting out of sync, we need to continue sending any request which has 
> been sent at least once.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Jenkins build is back to normal : kafka-trunk-jdk7 #2291

2017-05-30 Thread Apache Jenkins Server
See 




[jira] [Commented] (KAFKA-5293) Do not apply exponential backoff if users have overridden reconnect.backoff.ms

2017-05-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030503#comment-16030503
 ] 

ASF GitHub Bot commented on KAFKA-5293:
---

GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/3174

KAFKA-5293. Do not apply exponential backoff if users have overridden…

… reconnect.backoff.ms

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-5293

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3174.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3174


commit 41b1839be4f87516bf771ccd0802c068f8da067c
Author: Colin P. Mccabe 
Date:   2017-05-31T02:15:14Z

KAFKA-5293. Do not apply exponential backoff if users have overridden 
reconnect.backoff.ms




> Do not apply exponential backoff if users have overridden reconnect.backoff.ms
> --
>
> Key: KAFKA-5293
> URL: https://issues.apache.org/jira/browse/KAFKA-5293
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Colin P. McCabe
>Priority: Critical
> Fix For: 0.11.0.0
>
>
> The PR for KAFKA-3878 implemented KIP-144 with one exception: it 
> automatically enables exponential backoff for the producer and consumer even 
> if reconnect.backoff.ms is set by users. The KIP stated that this would not 
> be the case.
> As part of this JIRA, we should also add a few unit tests for connectionDelay 
> and perhaps consider enabling exponential backoff for Connect and Streams as 
> well.
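> For reference, a minimal sketch of the user-facing expectation (values are 
> placeholders): an explicit override of reconnect.backoff.ms should pin the 
> backoff to a constant value instead of enabling the exponential policy.
> {code}
> import java.util.Properties;
> 
> public class ReconnectBackoffSketch {
>     public static void main(String[] args) {
>         Properties props = new Properties();
>         props.put("bootstrap.servers", "localhost:9092");
>         // User override: per KIP-144 this should result in a fixed 50 ms reconnect
>         // backoff, with no exponential growth applied on top of it.
>         props.put("reconnect.backoff.ms", "50");
>     }
> }
> {code}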



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3174: KAFKA-5293. Do not apply exponential backoff if us...

2017-05-30 Thread cmccabe
GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/3174

KAFKA-5293. Do not apply exponential backoff if users have overridden…

… reconnect.backoff.ms

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-5293

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3174.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3174


commit 41b1839be4f87516bf771ccd0802c068f8da067c
Author: Colin P. Mccabe 
Date:   2017-05-31T02:15:14Z

KAFKA-5293. Do not apply exponential backoff if users have overridden 
reconnect.backoff.ms




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : kafka-0.11.0-jdk7 #51

2017-05-30 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-5324) AdminClient: add close with timeout, fix some timeout bugs

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-5324.

   Resolution: Fixed
Fix Version/s: 0.11.0.0

> AdminClient: add close with timeout, fix some timeout bugs
> --
>
> Key: KAFKA-5324
> URL: https://issues.apache.org/jira/browse/KAFKA-5324
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.11.0.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.0
>
>
> * Add a close method with a timeout, similar to some other APIs.  Close waits 
> for all calls to complete.  Once the timeout expires, all calls are aborted.
> * Fix a minor bug which made it impossible to have per-call timeouts which 
> were longer than the default timeout.
> * Fix a minor bug where we could oversleep short timeouts because we didn't 
> adjust the polling interval based on the extant timeouts.
> * Add some unit tests for timeouts



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5324) AdminClient: add close with timeout, fix some timeout bugs

2017-05-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030494#comment-16030494
 ] 

ASF GitHub Bot commented on KAFKA-5324:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3141


> AdminClient: add close with timeout, fix some timeout bugs
> --
>
> Key: KAFKA-5324
> URL: https://issues.apache.org/jira/browse/KAFKA-5324
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.11.0.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>
> * Add a close method with a timeout, similar to some other APIs.  Close waits 
> for all calls to complete.  Once the timeout expires, all calls are aborted.
> * Fix a minor bug which made it impossible to have per-call timeouts which 
> were longer than the default timeout.
> * Fix a minor bug where we could oversleep short timeouts because we didn't 
> adjust the polling interval based on the extant timeouts.
> * Add some unit tests for timeouts



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3173: MINOR: Traverse plugin path recursively in connect

2017-05-30 Thread kkonstantine
GitHub user kkonstantine opened a pull request:

https://github.com/apache/kafka/pull/3173

MINOR: Traverse plugin path recursively in connect



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kkonstantine/kafka 
MINOR-Traverse-plugin-path-recursively-in-Connect

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3173.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3173


commit 20978460c18a85ae263e28ab0578dccc9c90f366
Author: Konstantine Karantasis 
Date:   2017-05-30T23:43:24Z

Add depth-first search traversal to load plugin classes.

commit 256154c6a08925f929459ff58e9fade797707617
Author: Konstantine Karantasis 
Date:   2017-05-31T02:05:00Z

Remove iterative search among plugin loaders.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #3141: KAFKA-5324: AdminClient: add close with timeout, f...

2017-05-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3141


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-5293) Do not apply exponential backoff if users have overridden reconnect.backoff.ms

2017-05-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reassigned KAFKA-5293:
--

Assignee: Colin P. McCabe  (was: Ismael Juma)

> Do not apply exponential backoff if users have overridden reconnect.backoff.ms
> --
>
> Key: KAFKA-5293
> URL: https://issues.apache.org/jira/browse/KAFKA-5293
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Colin P. McCabe
>Priority: Critical
> Fix For: 0.11.0.0
>
>
> The PR for KAFKA-3878 implemented KIP-144 with one exception: it 
> automatically enables exponential backoff for the producer and consumer even 
> if reconnect.backoff.ms is set by users. The KIP stated that this would not 
> be the case.
> As part of this JIRA, we should also add a few unit tests for connectionDelay 
> and perhaps consider enabling exponential backoff for Connect and Streams as 
> well.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Build failed in Jenkins: kafka-trunk-jdk7 #2290

2017-05-30 Thread Apache Jenkins Server
See 


Changes:

[junrao] MINOR: onControllerResignation should be invoked if

--
[...truncated 904.48 KB...]

kafka.server.ReplicaManagerTest > testClearPurgatoryOnBecomingFollower PASSED

kafka.server.ReplicaManagerTest > testHighwaterMarkRelativeDirectoryMapping 
STARTED

kafka.server.ReplicaManagerTest > testHighwaterMarkRelativeDirectoryMapping 
PASSED

kafka.server.ReplicaManagerTest > testReadCommittedFetchLimitedAtLSO STARTED

kafka.server.ReplicaManagerTest > testReadCommittedFetchLimitedAtLSO PASSED

kafka.server.KafkaMetricReporterClusterIdTest > testClusterIdPresent STARTED

kafka.server.KafkaMetricReporterClusterIdTest > testClusterIdPresent PASSED

kafka.server.CreateTopicsRequestWithPolicyTest > testValidCreateTopicsRequests 
STARTED

kafka.server.CreateTopicsRequestWithPolicyTest > testValidCreateTopicsRequests 
PASSED

kafka.server.CreateTopicsRequestWithPolicyTest > testErrorCreateTopicsRequests 
STARTED

kafka.server.CreateTopicsRequestWithPolicyTest > testErrorCreateTopicsRequests 
PASSED

kafka.server.OffsetCommitTest > testUpdateOffsets STARTED

kafka.server.OffsetCommitTest > testUpdateOffsets PASSED

kafka.server.OffsetCommitTest > testLargeMetadataPayload STARTED

kafka.server.OffsetCommitTest > testLargeMetadataPayload PASSED

kafka.server.OffsetCommitTest > testOffsetsDeleteAfterTopicDeletion STARTED

kafka.server.OffsetCommitTest > testOffsetsDeleteAfterTopicDeletion PASSED

kafka.server.OffsetCommitTest > testCommitAndFetchOffsets STARTED

kafka.server.OffsetCommitTest > testCommitAndFetchOffsets PASSED

kafka.server.OffsetCommitTest > testOffsetExpiration STARTED

kafka.server.OffsetCommitTest > testOffsetExpiration PASSED

kafka.server.OffsetCommitTest > testNonExistingTopicOffsetCommit STARTED

kafka.server.OffsetCommitTest > testNonExistingTopicOffsetCommit PASSED

kafka.server.ReplicaManagerQuotasTest > shouldGetBothMessagesIfQuotasAllow 
STARTED

kafka.server.ReplicaManagerQuotasTest > shouldGetBothMessagesIfQuotasAllow 
PASSED

kafka.server.ReplicaManagerQuotasTest > 
shouldExcludeSubsequentThrottledPartitions STARTED

kafka.server.ReplicaManagerQuotasTest > 
shouldExcludeSubsequentThrottledPartitions PASSED

kafka.server.ReplicaManagerQuotasTest > 
shouldGetNoMessagesIfQuotasExceededOnSubsequentPartitions STARTED

kafka.server.ReplicaManagerQuotasTest > 
shouldGetNoMessagesIfQuotasExceededOnSubsequentPartitions PASSED

kafka.server.ReplicaManagerQuotasTest > shouldIncludeInSyncThrottledReplicas 
STARTED

kafka.server.ReplicaManagerQuotasTest > shouldIncludeInSyncThrottledReplicas 
PASSED

kafka.server.MetadataRequestTest > testReplicaDownResponse STARTED

kafka.server.MetadataRequestTest > testReplicaDownResponse PASSED

kafka.server.MetadataRequestTest > testRack STARTED

kafka.server.MetadataRequestTest > testRack PASSED

kafka.server.MetadataRequestTest > testIsInternal STARTED

kafka.server.MetadataRequestTest > testIsInternal PASSED

kafka.server.MetadataRequestTest > testControllerId STARTED

kafka.server.MetadataRequestTest > testControllerId PASSED

kafka.server.MetadataRequestTest > testAllTopicsRequest STARTED

kafka.server.MetadataRequestTest > testAllTopicsRequest PASSED

kafka.server.MetadataRequestTest > testClusterIdIsValid STARTED

kafka.server.MetadataRequestTest > testClusterIdIsValid PASSED

kafka.server.MetadataRequestTest > testNoTopicsRequest STARTED

kafka.server.MetadataRequestTest > testNoTopicsRequest PASSED

kafka.server.MetadataRequestTest > testClusterIdWithRequestVersion1 STARTED

kafka.server.MetadataRequestTest > testClusterIdWithRequestVersion1 PASSED

kafka.server.SessionExpireListenerTest > testSessionExpireListenerMetrics 
STARTED

kafka.server.SessionExpireListenerTest > testSessionExpireListenerMetrics PASSED

kafka.server.LogOffsetTest > testFetchOffsetsBeforeWithChangingSegmentSize 
STARTED

kafka.server.LogOffsetTest > testFetchOffsetsBeforeWithChangingSegmentSize 
PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime STARTED

kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime PASSED

kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic STARTED

kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic PASSED

kafka.server.LogOffsetTest > testEmptyLogsGetOffsets STARTED

kafka.server.LogOffsetTest > testEmptyLogsGetOffsets PASSED

kafka.server.LogOffsetTest > testFetchOffsetsBeforeWithChangingSegments STARTED

kafka.server.LogOffsetTest > testFetchOffsetsBeforeWithChangingSegments PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeLatestTime STARTED

kafka.server.LogOffsetTest > testGetOffsetsBeforeLatestTime PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeNow STARTED

kafka.server.LogOffsetTest > testGetOffsetsBeforeNow PASSED

kafka.server.LogOffsetTest > testGetOffsetsAfterDeleteRecords STARTED

kafka.server.LogOffsetTest > testGetOffsetsAfterDeleteRecords P

[jira] [Commented] (KAFKA-5150) LZ4 decompression is 4-5x slower than Snappy on small batches / messages

2017-05-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030468#comment-16030468
 ] 

ASF GitHub Bot commented on KAFKA-5150:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2967


> LZ4 decompression is 4-5x slower than Snappy on small batches / messages
> 
>
> Key: KAFKA-5150
> URL: https://issues.apache.org/jira/browse/KAFKA-5150
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.2.2, 0.9.0.1, 0.11.0.0, 0.10.2.1
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
> Fix For: 0.11.0.0
>
>
> I benchmarked RecordsIterator.DeepRecordsIterator instantiation on small batch 
> sizes with small messages after observing some performance bottlenecks in the 
> consumer. 
> For batch sizes of 1 with messages of 100 bytes, LZ4 heavily underperforms 
> compared to Snappy (see benchmark below). Most of our time is currently spent 
> allocating memory blocks in KafkaLZ4BlockInputStream, due to the fact that we 
> default to larger 64kB block sizes. Some quick testing shows we could improve 
> performance by almost an order of magnitude for small batches and messages if 
> we re-used buffers between instantiations of the input stream.
> [Benchmark 
> Code|https://github.com/xvrl/kafka/blob/small-batch-lz4-benchmark/clients/src/test/java/org/apache/kafka/common/record/DeepRecordsIteratorBenchmark.java#L86]
> {code}
> Benchmark                                          (compressionType)  (messageSize)   Mode  Cnt       Score       Error  Units
> DeepRecordsIteratorBenchmark.measureSingleMessage                LZ4            100  thrpt   20   84802.279 ±  1983.847  ops/s
> DeepRecordsIteratorBenchmark.measureSingleMessage             SNAPPY            100  thrpt   20  407585.747 ±  9877.073  ops/s
> DeepRecordsIteratorBenchmark.measureSingleMessage               NONE            100  thrpt   20  579141.634 ± 18482.093  ops/s
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
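
For context on the fix referenced above, the JIRA's suggestion is to stop allocating a fresh 64 kB block buffer every time the LZ4 input stream is constructed. Below is a minimal, self-contained sketch of that buffer-reuse idea; the class and method names are illustrative assumptions and are not the actual KafkaLZ4BlockInputStream internals from the merged patch.

{code}
import java.nio.ByteBuffer;

/**
 * Illustrative sketch of reusing decompression buffers across input-stream
 * instantiations. Each decompressing thread keeps one 64 kB block buffer
 * alive and clears it for reuse, instead of allocating a new buffer for
 * every small batch. Names and the fixed block size are assumptions.
 */
public final class ReusableBlockBuffer {
    private static final int BLOCK_SIZE = 64 * 1024;

    private static final ThreadLocal<ByteBuffer> CACHED =
            ThreadLocal.withInitial(() -> ByteBuffer.allocate(BLOCK_SIZE));

    private ReusableBlockBuffer() { }

    /** Returns the calling thread's cached buffer, cleared for the next block. */
    public static ByteBuffer acquire() {
        ByteBuffer buffer = CACHED.get();
        buffer.clear();
        return buffer;
    }
}
{code}

A stream constructor would then call ReusableBlockBuffer.acquire() instead of ByteBuffer.allocate(...), removing the per-batch allocation that dominates the single-message benchmark above.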


[GitHub] kafka pull request #2967: KAFKA-5150 reduce lz4 decompression overhead

2017-05-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2967


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2054: kafka-4295: ConsoleConsumer does not delete the te...

2017-05-30 Thread amethystic
Github user amethystic closed the pull request at:

https://github.com/apache/kafka/pull/2054


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2317: kafka-4592: Kafka Producer Metrics Invalid Value

2017-05-30 Thread amethystic
Github user amethystic closed the pull request at:

https://github.com/apache/kafka/pull/2317


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2576: kafka-4767: KafkaProducer is not joining its IO th...

2017-05-30 Thread amethystic
Github user amethystic closed the pull request at:

https://github.com/apache/kafka/pull/2576


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #2289

2017-05-30 Thread Apache Jenkins Server
See 


Changes:

[junrao] MINOR: Fix doc for producer throttle time metrics

--
[...truncated 905.43 KB...]

kafka.server.DelayedOperationTest > testRequestPurge PASSED

kafka.server.DelayedOperationTest > testRequestExpiry STARTED

kafka.server.DelayedOperationTest > testRequestExpiry PASSED

kafka.server.DelayedOperationTest > 
shouldReturnNilOperationsOnCancelForKeyWhenKeyDoesntExist STARTED

kafka.server.DelayedOperationTest > 
shouldReturnNilOperationsOnCancelForKeyWhenKeyDoesntExist PASSED

kafka.server.DelayedOperationTest > 
shouldCancelForKeyReturningCancelledOperations STARTED

kafka.server.DelayedOperationTest > 
shouldCancelForKeyReturningCancelledOperations PASSED

kafka.server.DelayedOperationTest > testRequestSatisfaction STARTED

kafka.server.DelayedOperationTest > testRequestSatisfaction PASSED

kafka.server.MultipleListenersWithDefaultJaasContextTest > testProduceConsume 
STARTED

kafka.server.MultipleListenersWithDefaultJaasContextTest > testProduceConsume 
PASSED

kafka.server.ThrottledResponseExpirationTest > testThrottledRequest STARTED

kafka.server.ThrottledResponseExpirationTest > testThrottledRequest PASSED

kafka.server.ThrottledResponseExpirationTest > testExpire STARTED

kafka.server.ThrottledResponseExpirationTest > testExpire PASSED

kafka.server.ReplicaManagerQuotasTest > shouldGetBothMessagesIfQuotasAllow 
STARTED

kafka.server.ReplicaManagerQuotasTest > shouldGetBothMessagesIfQuotasAllow 
PASSED

kafka.server.ReplicaManagerQuotasTest > 
shouldExcludeSubsequentThrottledPartitions STARTED

kafka.server.ReplicaManagerQuotasTest > 
shouldExcludeSubsequentThrottledPartitions PASSED

kafka.server.ReplicaManagerQuotasTest > 
shouldGetNoMessagesIfQuotasExceededOnSubsequentPartitions STARTED

kafka.server.ReplicaManagerQuotasTest > 
shouldGetNoMessagesIfQuotasExceededOnSubsequentPartitions PASSED

kafka.server.ReplicaManagerQuotasTest > shouldIncludeInSyncThrottledReplicas 
STARTED

kafka.server.ReplicaManagerQuotasTest > shouldIncludeInSyncThrottledReplicas 
PASSED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion STARTED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion PASSED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestBeforeSaslHandshakeRequest STARTED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestBeforeSaslHandshakeRequest PASSED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestAfterSaslHandshakeRequest STARTED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestAfterSaslHandshakeRequest PASSED

kafka.server.ControlledShutdownLeaderSelectorTest > testSelectLeader STARTED

kafka.server.ControlledShutdownLeaderSelectorTest > testSelectLeader PASSED

kafka.server.MetadataCacheTest > 
getTopicMetadataWithNonSupportedSecurityProtocol STARTED

kafka.server.MetadataCacheTest > 
getTopicMetadataWithNonSupportedSecurityProtocol PASSED

kafka.server.MetadataCacheTest > getTopicMetadataIsrNotAvailable STARTED

kafka.server.MetadataCacheTest > getTopicMetadataIsrNotAvailable PASSED

kafka.server.MetadataCacheTest > getTopicMetadata STARTED

kafka.server.MetadataCacheTest > getTopicMetadata PASSED

kafka.server.MetadataCacheTest > getTopicMetadataReplicaNotAvailable STARTED

kafka.server.MetadataCacheTest > getTopicMetadataReplicaNotAvailable PASSED

kafka.server.MetadataCacheTest > getTopicMetadataPartitionLeaderNotAvailable 
STARTED

kafka.server.MetadataCacheTest > getTopicMetadataPartitionLeaderNotAvailable 
PASSED

kafka.server.MetadataCacheTest > getAliveBrokersShouldNotBeMutatedByUpdateCache 
STARTED

kafka.server.MetadataCacheTest > getAliveBrokersShouldNotBeMutatedByUpdateCache 
PASSED

kafka.server.MetadataCacheTest > getTopicMetadataNonExistingTopics STARTED

kafka.server.MetadataCacheTest > getTopicMetadataNonExistingTopics PASSED

kafka.server.FetchRequestTest > testBrokerRespectsPartitionsOrderAndSizeLimits 
STARTED

kafka.server.FetchRequestTest > testBrokerRespectsPartitionsOrderAndSizeLimits 
PASSED

kafka.server.FetchRequestTest > testFetchRequestV2WithOversizedMessage STARTED

kafka.server.FetchRequestTest > testFetchRequestV2WithOversizedMessage PASSED

kafka.server.checkpoints.OffsetCheckpointFileTest > 
shouldReturnEmptyMapForEmptyFile STARTED

kafka.server.checkpoints.OffsetCheckpointFileTest > 
shouldReturnEmptyMapForEmptyFile PASSED

kafka.server.checkpoints.OffsetCheckpointFileTest > 
shouldThrowIfVersionIsNotRecognised STARTED

kafka.server.checkpoints.OffsetCheckpointFileTest > 
shouldThrowIfVersionIsNotRecognised PASSED

kafka.server.checkpoints.OffsetCheckpointFileTest > 
shouldPersistAndOverwriteAndReloadFile STARTED

kafka.server.checkpoints.OffsetCheckpointFileTest > 
shouldPersistAndOverwriteAndReloadFile PASSED

kafka.server.checkpoints.OffsetCheckpointFileTest > shouldHandleMultipleLin

Build failed in Jenkins: kafka-0.11.0-jdk7 #50

2017-05-30 Thread Apache Jenkins Server
See 


Changes:

[junrao] MINOR: Fix doc for producer throttle time metrics

--
[...truncated 906.15 KB...]

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent PASSED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentWithOfflineReplicaHaltingProgress STARTED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentWithOfflineReplicaHaltingProgress PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerEpochPersistsWhenAllBrokersDown STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerEpochPersistsWhenAllBrokersDown PASSED

kafka.controller.ControllerIntegrationTest > 
testTopicCreationWithOfflineReplica STARTED

kafka.controller.ControllerIntegrationTest > 
testTopicCreationWithOfflineReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentResumesAfterReplicaComesOnline STARTED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentResumesAfterReplicaComesOnline PASSED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionDisabled STARTED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionDisabled PASSED

kafka.controller.ControllerIntegrationTest > 
testTopicPartitionExpansionWithOfflineReplica STARTED

kafka.controller.ControllerIntegrationTest > 
testTopicPartitionExpansionWithOfflineReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testPreferredReplicaLeaderElectionWithOfflinePreferredReplica STARTED

kafka.controller.ControllerIntegrationTest > 
testPreferredReplicaLeaderElectionWithOfflinePreferredReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testAutoPreferredReplicaLeaderElection STARTED

kafka.controller.ControllerIntegrationTest > 
testAutoPreferredReplicaLeaderElection PASSED

kafka.controller.ControllerIntegrationTest > testTopicCreation STARTED

kafka.controller.ControllerIntegrationTest > testTopicCreation PASSED

kafka.controller.ControllerIntegrationTest > testPartitionReassignment STARTED

kafka.controller.ControllerIntegrationTest > testPartitionReassignment PASSED

kafka.controller.ControllerIntegrationTest > testTopicPartitionExpansion STARTED

kafka.controller.ControllerIntegrationTest > testTopicPartitionExpansion PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveIncrementsControllerEpoch STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveIncrementsControllerEpoch PASSED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled STARTED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled PASSED

kafka.controller.ControllerIntegrationTest > testEmptyCluster STARTED

kafka.controller.ControllerIntegrationTest > testEmptyCluster PASSED

kafka.controller.ControllerIntegrationTest > testPreferredReplicaLeaderElection 
STARTED

kafka.controller.ControllerIntegrationTest > testPreferredReplicaLeaderElection 
PASSED

kafka.controller.ControllerFailoverTest > testMetadataUpdate STARTED

kafka.controller.ControllerFailoverTest > testMetadataUpdate SKIPPED

kafka.tools.ConsoleProducerTest > testParseKeyProp STARTED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs STARTED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit STARTED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails STARTED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile STARTED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset PASSED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer STARTED

kafka.tool

[GitHub] kafka-site pull request #59: Add auto syntax highlighter

2017-05-30 Thread derrickdoo
GitHub user derrickdoo opened a pull request:

https://github.com/apache/kafka-site/pull/59

Add auto syntax highlighter

Added plugin to do auto syntax highlighting for code snippets in our docs.

Here's an example of how to syntax highlight some java code:
``

Here's what this would look like:

![image](https://cloud.githubusercontent.com/assets/271961/26610631/a5e4341a-455d-11e7-9fe4-7e6bc801f8a9.png)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/derrickdoo/kafka-site syntax-highlighting

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/59.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #59


commit 8a7f7baa2ac3aa54a3b32f5aa2fdd25bed2b4c45
Author: Derrick Or 
Date:   2017-05-31T00:27:07Z

add auto syntax highlighter




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-108: Create Topic Policy

2017-05-30 Thread Ismael Juma
Yes, and the only way to enforce it is by restricting ZK access to brokers
only. If ZK access is available to users, they can get around the broker
config proposal as well. And then the question is about benefit versus
cost. We can have that discussion when the KIP is proposed.

Ismael

On Wed, May 31, 2017 at 12:48 AM, Dong Lin  wrote:

> Certainly. I think it is reasonable to create a separate KIP to enforce the
> topic creation policy. After all the administrator needs a guarantee that
> the policy that they have specified in the broker will be enforced --
> otherwise the feature doesn't seem complete.
>
> On Tue, May 30, 2017 at 4:45 PM, Ismael Juma  wrote:
>
> > I am not sure if the additional complexity in the Controller is worth it
> > for this use case. It seems like it would be better to swap the tools to
> > use AdminClient and then restrict access to ZK (via ACLs and/or network
> > segmentation). Either way, that proposal should be done via a separate
> KIP
> > as KIP-108 was specifically about create topic requests that are done via
> > the Kafka protocol.
> >
> > Ismael
> >
> > On Wed, May 31, 2017 at 12:37 AM, Dong Lin  wrote:
> >
> > > On Tue, May 30, 2017 at 4:26 PM, Colin McCabe 
> > wrote:
> > >
> > > > On Tue, May 30, 2017, at 15:55, Dong Lin wrote:
> > > > > Hey Colin,
> > > > >
> > > > > I think one big advantage of the broker side config is that it can
> > not
> > > be
> > > > > ignored by the malicious client, right?
> > > >
> > > > Hi Dong,
> > > >
> > > > The scenario I was thinking of is where a malicious client
> communicates
> > > > directly with ZooKeeper, bypassing the broker.  As far as I can see,
> > > > nothing that we do on the broker can prevent this from happening.  It
> > > > has to be blocked by ZooKeeper itself.
> > > >
> > >
> > > I see. I agree that malicious client can still create the topic via
> > > zookeeper if we don't have ACL. The approach using the new config can
> > > prevent non-malicious client from using old script to create topic via
> > > zookeeper.
> > >
> > >
> > > >
> > > > >
> > > > > Thanks,
> > > > > Dong
> > > > >
> > > > > On Tue, May 30, 2017 at 3:53 PM, Dong Lin 
> > wrote:
> > > > >
> > > > > > Do we have an old version of bin/kafka-topics.sh which creates
> > topic
> > > > via
> > > > > > ZK and still allows user to access ZK with ACL? Another concern
> is
> > > that
> > > > > > some user may not have ACL service deployed in their cluster. If
> > > > neither of
> > > > > > these is issue,  then I would prefer the zookeeper approach
> instead
> > > of
> > > > > > adding a new broker config if the zookeeper approach is doable.
> > > >
> > > > Unfortunately, the latest version of kafka-topics.sh still creates
> > > > topics by talking directly to ZK.  It has not been converted to use
> the
> > > > new AdminClient, although that is planned.
> > > >
> > >
> > > Yep. Thus the new config-based approach still has its advantage over
> > > ZK-based approach because it ensures that non-malicious user will not
> > > create topic via zookeeper.
> > >
> > >
> > > >
> > > > best,
> > > > Colin
> > > >
> > > >
> > > > > >
> > > > > > However, regardless of whether we secure the zookeeper from
> > > > unauthorized
> > > > > > user, I think KIP-108 should provide a solution to guarantee that
> > all
> > > > topic
> > > > > > creation logic goes through the topic creation policy.
> > > > > >
> > > > > > Thanks,
> > > > > > Dong
> > > > > >
> > > > > > On Tue, May 30, 2017 at 3:39 PM, Colin McCabe <
> cmcc...@apache.org>
> > > > wrote:
> > > > > >
> > > > > >> It seems like, to make it really secure, we need the enforcement
> > to
> > > be
> > > > > >> done at the ZooKeeper level.  Any broker or client-side
> > > configuration
> > > > > >> can just be ignored by a malicious client.  Do we have
> > documentation
> > > > or
> > > > > >> code that configures ZK to prevent unprivileged users from
> > modifying
> > > > the
> > > > > >> topic configurations?
> > > > > >>
> > > > > >> best,
> > > > > >> Colin
> > > > > >>
> > > > > >>
> > > > > >> On Tue, May 30, 2017, at 15:02, Dong Lin wrote:
> > > > > >> > Hey Ismael,
> > > > > >> >
> > > > > >> > I agree that it makes sense not to cover ZK-based topic
> creation
> > > > with
> > > > > >> the
> > > > > >> > topic creation policy and limit ZK access to brokers only
> going
> > > > forward.
> > > > > >> > My
> > > > > >> > point is that we need a way to disable ZK-based topic creation
> > so
> > > > that
> > > > > >> > all
> > > > > >> > topic creation goes through the topic creation policy as
> > specified
> > > > in
> > > > > >> > KIP-108. Does this make sense?
> > > > > >> >
> > > > > >> > One example solution is to add a broker-side config
> > > > > >> > "enable.zookeeper.topic.creation"
> > > > > >> > which defaults to "true". If user has overridden this config
> to
> > be
> > > > > >> > "false",
> > > > > >> > then controller will delete the znode /brokers/topics/{topic}
> > that
> > > > is
> > > > > >> not
> >

Jenkins build is back to normal : kafka-trunk-jdk8 #1612

2017-05-30 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #3172: KAFKA-5350: Modify unstable annotations in Streams...

2017-05-30 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/3172

KAFKA-5350: Modify unstable annotations in Streams API



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka 
K5350-compatibility-annotations

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3172.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3172


commit 03e2974d9e359beeec9244e912b2d4377acf8991
Author: Guozhang Wang 
Date:   2017-05-30T23:59:40Z

modify compatbility annotations in Streams API




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-5350) Modify Unstable annotations in Streams API

2017-05-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030395#comment-16030395
 ] 

ASF GitHub Bot commented on KAFKA-5350:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/3172

KAFKA-5350: Modify unstable annotations in Streams API



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka 
K5350-compatibility-annotations

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3172.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3172


commit 03e2974d9e359beeec9244e912b2d4377acf8991
Author: Guozhang Wang 
Date:   2017-05-30T23:59:40Z

modify compatbility annotations in Streams API




> Modify Unstable annotations in Streams API
> --
>
> Key: KAFKA-5350
> URL: https://issues.apache.org/jira/browse/KAFKA-5350
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.11.0.0
>
>
> As discussed in the email thread: 
> https://www.mail-archive.com/dev@kafka.apache.org/msg72581.html
> We are going to make the following changes to the Streams API compatibility 
> annotations:
> 1. For "o.a.k.streams.errors" and "o.a.k.streams.state" packages: remove the 
> annotations except `Stores`.
> 2. For "o.a.k.streams.kstream": remove the annotations except "KStream", 
> "KTable", "GroupedKStream", "GroupedKTable", "GlobalKTable" and 
> "KStreamBuilder".
> 3. For all the other public classes, including "o.a.k.streams.processor", 
> change the annotation to "Evolving", which means "we might break 
> compatibility at minor releases (i.e. 0.12.x, 0.13.x, 1.0.x etc) only". 
> The ultimate goal is to make sure we won't break anything going forward, 
> hence in the future we should remove all the annotations to make that clear. 
> The above changes in 0.11.0.0 are to give us some "buffer time" in case there 
> are some major API change proposals after the release.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2935: MINOR: onControllerResignation should be invoked i...

2017-05-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2935


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3761) Controller has RunningAsBroker instead of RunningAsController state

2017-05-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030397#comment-16030397
 ] 

ASF GitHub Bot commented on KAFKA-3761:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2935


> Controller has RunningAsBroker instead of RunningAsController state
> ---
>
> Key: KAFKA-3761
> URL: https://issues.apache.org/jira/browse/KAFKA-3761
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Roger Hoover
> Fix For: 0.10.1.0
>
>
> In `KafkaServer.start`, we start `KafkaController`:
> {code}
> /* start kafka controller */
> kafkaController = new KafkaController(config, zkUtils, brokerState, 
> kafkaMetricsTime, metrics, threadNamePrefix)
> kafkaController.startup()
> {code}
> Which sets the state to `RunningAsController` in 
> `KafkaController.onControllerFailover`:
> `brokerState.newState(RunningAsController)`
> And this later gets set to `RunningAsBroker`.
> This doesn't match the diagram in `BrokerStates`. [~junrao] suggested that we 
> should start the controller after we register the broker in ZK, but this 
> seems tricky as we need the controller in `KafkaApis`.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5350) Modify Unstable annotations in Streams API

2017-05-30 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-5350:


 Summary: Modify Unstable annotations in Streams API
 Key: KAFKA-5350
 URL: https://issues.apache.org/jira/browse/KAFKA-5350
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Guozhang Wang
Assignee: Guozhang Wang
 Fix For: 0.11.0.0


As discussed in the email thread: 
https://www.mail-archive.com/dev@kafka.apache.org/msg72581.html

We are going to make the following changes to the Streams API compatibility 
annotations:

1. For "o.a.k.streams.errors" and "o.a.k.streams.state" packages: remove the 
annotations except `Stores`.

2. For "o.a.k.streams.kstream": remove the annotations except "KStream", 
"KTable", "GroupedKStream", "GroupedKTable", "GlobalKTable" and 
"KStreamBuilder".

3. For all the other public classes, including "o.a.k.streams.processor", 
change the annotation to "Evolving", which means "we might break compatibility 
at minor releases (i.e. 0.12.x, 0.13.x, 1.0.x etc) only". 

The ultimate goal is to make sure we won't break anything going forward, hence 
in the future we should remove all the annotations to make that clear. The 
above changes in 0.11.0.0 are to give us some "buffer time" in case there are 
some major API change proposals after the release.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
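
To make the annotation change concrete, the sketch below shows what an "Evolving" marking on a public Streams class looks like. The class name is made up for illustration; the InterfaceStability annotations live in org.apache.kafka.common.annotation in the Kafka codebase.

{code}
import org.apache.kafka.common.annotation.InterfaceStability;

// Illustrative only: a hypothetical public Streams class carrying the
// "Evolving" annotation, meaning compatibility may be broken at minor
// releases (0.12.x, 0.13.x, 1.0.x, ...) but not at patch releases.
@InterfaceStability.Evolving
public class ExamplePublicStreamsClass {
    // public methods whose signatures may still change at minor releases
}
{code}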


Re: [DISCUSS] KIP-108: Create Topic Policy

2017-05-30 Thread Dong Lin
Certainly. I think it is reasonable to create a separate KIP to enforce the
topic creation policy. After all the administrator needs a guarantee that
the policy that they have specified in the broker will be enforced --
otherwise the feature doesn't seem complete.

On Tue, May 30, 2017 at 4:45 PM, Ismael Juma  wrote:

> I am not sure if the additional complexity in the Controller is worth it
> for this use case. It seems like it would be better to swap the tools to
> use AdminClient and then restrict access to ZK (via ACLs and/or network
> segmentation). Either way, that proposal should be done via a separate KIP
> as KIP-108 was specifically about create topic requests that are done via
> the Kafka protocol.
>
> Ismael
>
> On Wed, May 31, 2017 at 12:37 AM, Dong Lin  wrote:
>
> > On Tue, May 30, 2017 at 4:26 PM, Colin McCabe 
> wrote:
> >
> > > On Tue, May 30, 2017, at 15:55, Dong Lin wrote:
> > > > Hey Colin,
> > > >
> > > > I think one big advantage of the broker side config is that it can
> not
> > be
> > > > ignored by the malicious client, right?
> > >
> > > Hi Dong,
> > >
> > > The scenario I was thinking of is where a malicious client communicates
> > > directly with ZooKeeper, bypassing the broker.  As far as I can see,
> > > nothing that we do on the broker can prevent this from happening.  It
> > > has to be blocked by ZooKeeper itself.
> > >
> >
> > I see. I agree that malicious client can still create the topic via
> > zookeeper if we don't have ACL. The approach using the new config can
> > prevent non-malicious client from using old script to create topic via
> > zookeeper.
> >
> >
> > >
> > > >
> > > > Thanks,
> > > > Dong
> > > >
> > > > On Tue, May 30, 2017 at 3:53 PM, Dong Lin 
> wrote:
> > > >
> > > > > Do we have an old version of bin/kafka-topics.sh which creates
> topic
> > > via
> > > > > ZK and still allows user to access ZK with ACL? Another concern is
> > that
> > > > > some user may not have ACL service deployed in their cluster. If
> > > neither of
> > > > > these is issue,  then I would prefer the zookeeper approach instead
> > of
> > > > > adding a new broker config if the zookeeper approach is doable.
> > >
> > > Unfortunately, the latest version of kafka-topics.sh still creates
> > > topics by talking directly to ZK.  It has not been converted to use the
> > > new AdminClient, although that is planned.
> > >
> >
> > Yep. Thus the new config-based approach still has its advantage over
> > ZK-based approach because it ensures that non-malicious user will not
> > create topic via zookeeper.
> >
> >
> > >
> > > best,
> > > Colin
> > >
> > >
> > > > >
> > > > > However, regardless of whether we secure the zookeeper from
> > > unauthorized
> > > > > user, I think KIP-108 should provide a solution to guarantee that
> all
> > > topic
> > > > > creation logic goes through the topic creation policy.
> > > > >
> > > > > Thanks,
> > > > > Dong
> > > > >
> > > > > On Tue, May 30, 2017 at 3:39 PM, Colin McCabe 
> > > wrote:
> > > > >
> > > > >> It seems like, to make it really secure, we need the enforcement
> to
> > be
> > > > >> done at the ZooKeeper level.  Any broker or client-side
> > configuration
> > > > >> can just be ignored by a malicious client.  Do we have
> documentation
> > > or
> > > > >> code that configures ZK to prevent unprivileged users from
> modifying
> > > the
> > > > >> topic configurations?
> > > > >>
> > > > >> best,
> > > > >> Colin
> > > > >>
> > > > >>
> > > > >> On Tue, May 30, 2017, at 15:02, Dong Lin wrote:
> > > > >> > Hey Ismael,
> > > > >> >
> > > > >> > I agree that it makes sense not to cover ZK-based topic creation
> > > with
> > > > >> the
> > > > >> > topic creation policy and limit ZK access to brokers only going
> > > forward.
> > > > >> > My
> > > > >> > point is that we need a way to disable ZK-based topic creation
> so
> > > that
> > > > >> > all
> > > > >> > topic creation goes through the topic creation policy as
> specified
> > > in
> > > > >> > KIP-108. Does this make sense?
> > > > >> >
> > > > >> > One example solution is to add a broker-side config
> > > > >> > "enable.zookeeper.topic.creation"
> > > > >> > which defaults to "true". If user has overridden this config to
> be
> > > > >> > "false",
> > > > >> > then controller will delete the znode /brokers/topics/{topic}
> that
> > > is
> > > > >> not
> > > > >> > created by the controller. We probably need some trick to
> > > differentiate
> > > > >> > between znode created by controller and znode created by
> outdated
> > > tools.
> > > > >> > For example, the new controller code can add a new field
> > > "isController"
> > > > >> > in
> > > > >> > the znode /brokers/topics/{topic} when it creates this new
> znode.
> > > Then
> > > > >> if
> > > > >> > the znode doesn't have this field AND there is no child under
> this
> > > > >> znode,
> > > > >> > controller can be sure it is created by outdated tools and
> remove
> > > this
> > > > >> > znode from zookeeper. Users who are using outda

Re: [DISCUSS] KIP-108: Create Topic Policy

2017-05-30 Thread Ismael Juma
I am not sure if the additional complexity in the Controller is worth it
for this use case. It seems like it would be better to swap the tools to
use AdminClient and then restrict access to ZK (via ACLs and/or network
segmentation). Either way, that proposal should be done via a separate KIP
as KIP-108 was specifically about create topic requests that are done via
the Kafka protocol.

Ismael

On Wed, May 31, 2017 at 12:37 AM, Dong Lin  wrote:

> On Tue, May 30, 2017 at 4:26 PM, Colin McCabe  wrote:
>
> > On Tue, May 30, 2017, at 15:55, Dong Lin wrote:
> > > Hey Colin,
> > >
> > > I think one big advantage of the broker side config is that it can not
> be
> > > ignored by the malicious client, right?
> >
> > Hi Dong,
> >
> > The scenario I was thinking of is where a malicious client communicates
> > directly with ZooKeeper, bypassing the broker.  As far as I can see,
> > nothing that we do on the broker can prevent this from happening.  It
> > has to be blocked by ZooKeeper itself.
> >
>
> I see. I agree that malicious client can still create the topic via
> zookeeper if we don't have ACL. The approach using the new config can
> prevent non-malicious client from using old script to create topic via
> zookeeper.
>
>
> >
> > >
> > > Thanks,
> > > Dong
> > >
> > > On Tue, May 30, 2017 at 3:53 PM, Dong Lin  wrote:
> > >
> > > > Do we have an old version of bin/kafka-topics.sh which creates topic
> > via
> > > > ZK and still allows user to access ZK with ACL? Another concern is
> that
> > > > some user may not have ACL service deployed in their cluster. If
> > neither of
> > > > these is issue,  then I would prefer the zookeeper approach instead
> of
> > > > adding a new broker config if the zookeeper approach is doable.
> >
> > Unfortunately, the latest version of kafka-topics.sh still creates
> > topics by talking directly to ZK.  It has not been converted to use the
> > new AdminClient, although that is planned.
> >
>
> Yep. Thus the new config-based approach still has its advantage over
> ZK-based approach because it ensures that non-malicious user will not
> create topic via zookeeper.
>
>
> >
> > best,
> > Colin
> >
> >
> > > >
> > > > However, regardless of whether we secure the zookeeper from
> > unauthorized
> > > > user, I think KIP-108 should provide a solution to guarantee that all
> > topic
> > > > creation logic goes through the topic creation policy.
> > > >
> > > > Thanks,
> > > > Dong
> > > >
> > > > On Tue, May 30, 2017 at 3:39 PM, Colin McCabe 
> > wrote:
> > > >
> > > >> It seems like, to make it really secure, we need the enforcement to
> be
> > > >> done at the ZooKeeper level.  Any broker or client-side
> configuration
> > > >> can just be ignored by a malicious client.  Do we have documentation
> > or
> > > >> code that configures ZK to prevent unprivileged users from modifying
> > the
> > > >> topic configurations?
> > > >>
> > > >> best,
> > > >> Colin
> > > >>
> > > >>
> > > >> On Tue, May 30, 2017, at 15:02, Dong Lin wrote:
> > > >> > Hey Ismael,
> > > >> >
> > > >> > I agree that it makes sense not to cover ZK-based topic creation
> > with
> > > >> the
> > > >> > topic creation policy and limit ZK access to brokers only going
> > forward.
> > > >> > My
> > > >> > point is that we need a way to disable ZK-based topic creation so
> > that
> > > >> > all
> > > >> > topic creation goes through the topic creation policy as specified
> > in
> > > >> > KIP-108. Does this make sense?
> > > >> >
> > > >> > One example solution is to add a broker-side config
> > > >> > "enable.zookeeper.topic.creation"
> > > >> > which defaults to "true". If user has overridden this config to be
> > > >> > "false",
> > > >> > then controller will delete the znode /brokers/topics/{topic} that
> > is
> > > >> not
> > > >> > created by the controller. We probably need some trick to
> > differentiate
> > > >> > between znode created by controller and znode created by outdated
> > tools.
> > > >> > For example, the new controller code can add a new field
> > "isController"
> > > >> > in
> > > >> > the znode /brokers/topics/{topic} when it creates this new znode.
> > Then
> > > >> if
> > > >> > the znode doesn't have this field AND there is no child under this
> > > >> znode,
> > > >> > controller can be sure it is created by outdated tools and remove
> > this
> > > >> > znode from zookeeper. Users who are using outdated tools to create
> > topic
> > > >> > will find that the topic is not created.
> > > >> >
> > > >> > Dong
> > > >> >
> > > >> > On Tue, May 30, 2017 at 2:24 PM, Ismael Juma 
> > wrote:
> > > >> >
> > > >> > > Hi Dong,
> > > >> > >
> > > >> > > No, ZK-based topic creation doesn't go through the policy since
> it
> > > >> doesn't
> > > >> > > go through the broker. Given that, I am not sure how the broker
> > config
> > > >> > > would work. Can you please elaborate? It seems like the way
> > forward
> > > >> is to
> > > >> > > limit ZK access to brokers only.
> > > >> > >
> > > >> > > Ismael
> > > >> >

[GitHub] kafka pull request #3171: MINOR: A few cleanups in KafkaApis and Transaction...

2017-05-30 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/3171

MINOR: A few cleanups in KafkaApis and TransactionMarkerChannelManager



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka minor-txn-channel-cleanups

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3171.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3171


commit 4b5717a935abc1c6e9ceccf51f7a0fdf91aee763
Author: Jason Gustafson 
Date:   2017-05-30T23:09:57Z

MINOR: A few cleanups in KafkaApis and TransactionMarkerChannelManager




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-108: Create Topic Policy

2017-05-30 Thread Dong Lin
On Tue, May 30, 2017 at 4:26 PM, Colin McCabe  wrote:

> On Tue, May 30, 2017, at 15:55, Dong Lin wrote:
> > Hey Colin,
> >
> > I think one big advantage of the broker side config is that it can not be
> > ignored by the malicious client, right?
>
> Hi Dong,
>
> The scenario I was thinking of is where a malicious client communicates
> directly with ZooKeeper, bypassing the broker.  As far as I can see,
> nothing that we do on the broker can prevent this from happening.  It
> has to be blocked by ZooKeeper itself.
>

I see. I agree that malicious client can still create the topic via
zookeeper if we don't have ACL. The approach using the new config can
prevent non-malicious client from using old script to create topic via
zookeeper.


>
> >
> > Thanks,
> > Dong
> >
> > On Tue, May 30, 2017 at 3:53 PM, Dong Lin  wrote:
> >
> > > Do we have an old version of bin/kafka-topics.sh which creates topic
> via
> > > ZK and still allows user to access ZK with ACL? Another concern is that
> > > some user may not have ACL service deployed in their cluster. If
> neither of
> > > these is issue,  then I would prefer the zookeeper approach instead of
> > > adding a new broker config if the zookeeper approach is doable.
>
> Unfortunately, the latest version of kafka-topics.sh still creates
> topics by talking directly to ZK.  It has not been converted to use the
> new AdminClient, although that is planned.
>

Yep. Thus the new config-based approach still has its advantage over
ZK-based approach because it ensures that non-malicious user will not
create topic via zookeeper.


>
> best,
> Colin
>
>
> > >
> > > However, regardless of whether we secure the zookeeper from
> unauthorized
> > > user, I think KIP-108 should provide a solution to guarantee that all
> topic
> > > creation logic goes through the topic creation policy.
> > >
> > > Thanks,
> > > Dong
> > >
> > > On Tue, May 30, 2017 at 3:39 PM, Colin McCabe 
> wrote:
> > >
> > >> It seems like, to make it really secure, we need the enforcement to be
> > >> done at the ZooKeeper level.  Any broker or client-side configuration
> > >> can just be ignored by a malicious client.  Do we have documentation
> or
> > >> code that configures ZK to prevent unprivileged users from modifying
> the
> > >> topic configurations?
> > >>
> > >> best,
> > >> Colin
> > >>
> > >>
> > >> On Tue, May 30, 2017, at 15:02, Dong Lin wrote:
> > >> > Hey Ismael,
> > >> >
> > >> > I agree that it makes sense not to cover ZK-based topic creation
> with
> > >> the
> > >> > topic creation policy and limit ZK access to brokers only going
> forward.
> > >> > My
> > >> > point is that we need a way to disable ZK-based topic creation so
> that
> > >> > all
> > >> > topic creation goes through the topic creation policy as specified
> in
> > >> > KIP-108. Does this make sense?
> > >> >
> > >> > One example solution is to add a broker-side config
> > >> > "enable.zookeeper.topic.creation"
> > >> > which defaults to "true". If user has overridden this config to be
> > >> > "false",
> > >> > then controller will delete the znode /brokers/topics/{topic} that
> is
> > >> not
> > >> > created by the controller. We probably need some trick to
> differentiate
> > >> > between znode created by controller and znode created by outdated
> tools.
> > >> > For example, the new controller code can add a new field
> "isController"
> > >> > in
> > >> > the znode /brokers/topics/{topic} when it creates this new znode.
> Then
> > >> if
> > >> > the znode doesn't have this field AND there is no child under this
> > >> znode,
> > >> > controller can be sure it is created by outdated tools and remove
> this
> > >> > znode from zookeeper. Users who are using outdated tools to create
> topic
> > >> > will find that the topic is not created.
> > >> >
> > >> > Dong
> > >> >
> > >> > On Tue, May 30, 2017 at 2:24 PM, Ismael Juma 
> wrote:
> > >> >
> > >> > > Hi Dong,
> > >> > >
> > >> > > No, ZK-based topic creation doesn't go through the policy since it
> > >> doesn't
> > >> > > go through the broker. Given that, I am not sure how the broker
> config
> > >> > > would work. Can you please elaborate? It seems like the way
> forward
> > >> is to
> > >> > > limit ZK access to brokers only.
> > >> > >
> > >> > > Ismael
> > >> > >
> > >> > > On Tue, May 30, 2017 at 10:19 PM, Dong Lin 
> > >> wrote:
> > >> > >
> > >> > > > Hey Ismael,
> > >> > > >
> > >> > > > Thanks for the KIP. This is definitely useful.
> > >> > > >
> > >> > > > Does the KIP apply the topic creation policy to ZK-based topic
> > >> creation?
> > >> > > If
> > >> > > > not, which seems to be the case from my understanding, should we
> > >> have a
> > >> > > new
> > >> > > > broker config to disable ZK-based topic creation? This seems
> > >> necessary to
> > >> > > > prevent user from using stray builds to evade the topic creation
> > >> policy.
> > >> > > >
> > >> > > > Thanks,
> > >> > > > Dong
> > >> > > >
> > >> > > >
> > >> > > >
> > >> > > >
> > >> > > >
> > >> > > >
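
To make the proposal quoted in this thread concrete: the idea is that, with "enable.zookeeper.topic.creation" set to false, the controller would delete any /brokers/topics/{topic} znode that lacks the proposed "isController" marker and has no partition children, on the assumption that such a znode came from an outdated ZK-based tool. The thread does not settle on this approach; the sketch below is purely illustrative, using the plain ZooKeeper client API with hypothetical names that are not part of Kafka.

{code}
import java.util.List;

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

// Hypothetical helper sketching the proposed controller-side check.
// The "isController" field and this class are illustrative only.
public final class StrayTopicZnodeCheck {

    public static void deleteIfCreatedByOutdatedTool(ZooKeeper zk, String topic)
            throws KeeperException, InterruptedException {
        String path = "/brokers/topics/" + topic;
        byte[] data = zk.getData(path, false, null);
        List<String> children = zk.getChildren(path, false);

        boolean hasControllerMarker =
                data != null && new String(data).contains("\"isController\"");

        // Only znodes without the marker and without partition children are
        // assumed to come from an outdated ZK-based topic creation tool.
        if (!hasControllerMarker && children.isEmpty()) {
            zk.delete(path, -1); // -1 matches any znode version
        }
    }

    private StrayTopicZnodeCheck() { }
}
{code}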

[GitHub] kafka pull request #3169: MINOR: Fix doc for producer throttle time metrics

2017-05-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3169


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-108: Create Topic Policy

2017-05-30 Thread Colin McCabe
On Tue, May 30, 2017, at 15:55, Dong Lin wrote:
> Hey Colin,
> 
> I think one big advantage of the broker side config is that it can not be
> ignored by the malicious client, right?

Hi Dong,

The scenario I was thinking of is where a malicious client communicates
directly with ZooKeeper, bypassing the broker.  As far as I can see,
nothing that we do on the broker can prevent this from happening.  It
has to be blocked by ZooKeeper itself.

> 
> Thanks,
> Dong
> 
> On Tue, May 30, 2017 at 3:53 PM, Dong Lin  wrote:
> 
> > Do we have an old version of bin/kafka-topics.sh which creates topic via
> > ZK and still allows user to access ZK with ACL? Another concern is that
> > some user may not have ACL service deployed in their cluster. If neither of
> > these is issue,  then I would prefer the zookeeper approach instead of
> > adding a new broker config if the zookeeper approach is doable.

Unfortunately, the latest version of kafka-topics.sh still creates
topics by talking directly to ZK.  It has not been converted to use the
new AdminClient, although that is planned.

best,
Colin


> >
> > However, regardless of whether we secure the zookeeper from unauthorized
> > user, I think KIP-108 should provide a solution to guarantee that all topic
> > creation logic goes through the topic creation policy.
> >
> > Thanks,
> > Dong
> >
> > On Tue, May 30, 2017 at 3:39 PM, Colin McCabe  wrote:
> >
> >> It seems like, to make it really secure, we need the enforcement to be
> >> done at the ZooKeeper level.  Any broker or client-side configuration
> >> can just be ignored by a malicious client.  Do we have documentation or
> >> code that configures ZK to prevent unprivileged users from modifying the
> >> topic configurations?
> >>
> >> best,
> >> Colin
> >>
> >>
> >> On Tue, May 30, 2017, at 15:02, Dong Lin wrote:
> >> > Hey Ismael,
> >> >
> >> > I agree that it makes sense not to cover ZK-based topic creation with
> >> the
> >> > topic creation policy and limit ZK access to brokers only going forward.
> >> > My
> >> > point is that we need a way to disable ZK-based topic creation so that
> >> > all
> >> > topic creation goes through the topic creation policy as specified in
> >> > KIP-108. Does this make sense?
> >> >
> >> > One example solution is to add a broker-side config
> >> > "enable.zookeeper.topic.creation"
> >> > which defaults to "true". If user has overridden this config to be
> >> > "false",
> >> > then controller will delete the znode /brokers/topics/{topic} that is
> >> not
> >> > created by the controller. We probably need some trick to differentiate
> >> > between znode created by controller and znode created by outdated tools.
> >> > For example, the new controller code can add a new field "isController"
> >> > in
> >> > the znode /brokers/topics/{topic} when it creates this new znode. Then
> >> if
> >> > the znode doesn't have this field AND there is no child under this
> >> znode,
> >> > controller can be sure it is created by outdated tools and remove this
> >> > znode from zookeeper. Users who are using outdated tools to create topic
> >> > will find that the topic is not created.
> >> >
> >> > Dong
> >> >
> >> > On Tue, May 30, 2017 at 2:24 PM, Ismael Juma  wrote:
> >> >
> >> > > Hi Dong,
> >> > >
> >> > > No, ZK-based topic creation doesn't go through the policy since it
> >> doesn't
> >> > > go through the broker. Given that, I am not sure how the broker config
> >> > > would work. Can you please elaborate? It seems like the way forward
> >> is to
> >> > > limit ZK access to brokers only.
> >> > >
> >> > > Ismael
> >> > >
> >> > > On Tue, May 30, 2017 at 10:19 PM, Dong Lin 
> >> wrote:
> >> > >
> >> > > > Hey Ismael,
> >> > > >
> >> > > > Thanks for the KIP. This is definitely useful.
> >> > > >
> >> > > > Does the KIP apply the topic creation policy to ZK-based topic
> >> creation?
> >> > > If
> >> > > > not, which seems to be the case from my understanding, should we
> >> have a
> >> > > new
> >> > > > broker config to disable ZK-based topic creation? This seems
> >> necessary to
> >> > > > prevent user from using stray builds to evade the topic creation
> >> policy.
> >> > > >
> >> > > > Thanks,
> >> > > > Dong
> >> > > >
> >> > > >
> >> > > >
> >> > > >
> >> > > >
> >> > > >
> >> > > > On Mon, Jan 9, 2017 at 1:42 PM, Roger Hoover <
> >> roger.hoo...@gmail.com>
> >> > > > wrote:
> >> > > >
> >> > > > > Got it.  Thanks, Ismael.
> >> > > > >
> >> > > > > On Mon, Jan 9, 2017 at 10:42 AM, Ismael Juma 
> >> > > wrote:
> >> > > > >
> >> > > > > > Hi Roger,
> >> > > > > >
> >> > > > > > That's a good question. The server defaults are passed via the
> >> > > > > `configure`
> >> > > > > > method of the `Configurable` interface that is implemented by
> >> > > > > > `CreateTopicPolicy`. I'll mention this explicitly in the KIP.
> >> > > > > >
> >> > > > > > Ismael
> >> > > > > >
> >> > > > > > On Mon, Jan 9, 2017 at 6:04 PM, Roger Hoover <
> >> roger.hoo...@gmail.com
> >> > > >
> >> > > > > > wrote:
> >>

[GitHub] kafka-site pull request #58: [PLEASE DO NOT MERGE] Further break apart Strea...

2017-05-30 Thread derrickdoo
GitHub user derrickdoo opened a pull request:

https://github.com/apache/kafka-site/pull/58

[PLEASE DO NOT MERGE] Further break apart Streams docs

This is just to illustrate the changes needed to restructure the Streams 
documentation. 

Please don't merge yet.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/derrickdoo/kafka-site 
streams-routing-and-styles

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/58.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #58


commit 432ece78379fa09b749a01658b9809c095f56704
Author: Derrick Or 
Date:   2017-05-30T23:03:27Z

structure and style adjustments for streams docs

remove debugger




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-108: Create Topic Policy

2017-05-30 Thread Onur Karaman
@Colin check out kafka.admin.ZkSecurityMigrator. This is what I meant in my
earlier comment on "acl off zookeeper"
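
For readers who have not used it: ZkSecurityMigrator (exposed as bin/zookeeper-security-migration.sh) walks Kafka's ZooKeeper paths and sets SASL-based ACLs so that only brokers can modify them, which is the "restrict ZK access to brokers" option discussed above. Its effect on a single path is roughly the sketch below against the plain ZooKeeper client API; the broker principal name is an assumption, and the real tool handles the whole tree plus migration in both directions.

{code}
import java.util.Arrays;
import java.util.List;

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;

// Illustrative only: lock down one path so the broker's SASL principal has
// full access while everyone else is read-only.
public final class RestrictTopicZnodes {

    public static void restrict(ZooKeeper zk, String path, String brokerPrincipal)
            throws KeeperException, InterruptedException {
        List<ACL> acls = Arrays.asList(
                new ACL(ZooDefs.Perms.ALL, new Id("sasl", brokerPrincipal)),
                new ACL(ZooDefs.Perms.READ, ZooDefs.Ids.ANYONE_ID_UNSAFE));
        zk.setACL(path, acls, -1); // -1 matches any ACL version
    }

    private RestrictTopicZnodes() { }
}
{code}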

On Tue, May 30, 2017 at 3:39 PM Colin McCabe  wrote:

> It seems like, to make it really secure, we need the enforcement to be
> done at the ZooKeeper level.  Any broker or client-side configuration
> can just be ignored by a malicious client.  Do we have documentation or
> code that configures ZK to prevent unprivileged users from modifying the
> topic configurations?
>
> best,
> Colin
>
>
> On Tue, May 30, 2017, at 15:02, Dong Lin wrote:
> > Hey Ismael,
> >
> > I agree that it makes sense not to cover ZK-based topic creation with the
> > topic creation policy and limit ZK access to brokers only going forward.
> > My
> > point is that we need a way to disable ZK-based topic creation so that
> > all
> > topic creation goes through the topic creation policy as specified in
> > KIP-108. Does this make sense?
> >
> > One example solution is to add a broker-side config
> > "enable.zookeeper.topic.creation"
> > which defaults to "true". If user has overridden this config to be
> > "false",
> > then controller will delete the znode /brokers/topics/{topic} that is not
> > created by the controller. We probably need some trick to differentiate
> > between znode created by controller and znode created by outdated tools.
> > For example, the new controller code can add a new field "isController"
> > in
> > the znode /brokers/topics/{topic} when it creates this new znode. Then if
> > the znode doesn't have this field AND there is no child under this znode,
> > controller can be sure it is created by outdated tools and remove this
> > znode from zookeeper. Users who are using outdated tools to create topic
> > will find that the topic is not created.
> >
> > Dong
> >
> > On Tue, May 30, 2017 at 2:24 PM, Ismael Juma  wrote:
> >
> > > Hi Dong,
> > >
> > > No, ZK-based topic creation doesn't go through the policy since it
> doesn't
> > > go through the broker. Given that, I am not sure how the broker config
> > > would work. Can you please elaborate? It seems like the way forward is
> to
> > > limit ZK access to brokers only.
> > >
> > > Ismael
> > >
> > > On Tue, May 30, 2017 at 10:19 PM, Dong Lin 
> wrote:
> > >
> > > > Hey Ismael,
> > > >
> > > > Thanks for the KIP. This is definitely useful.
> > > >
> > > > Does the KIP apply the topic creation policy to ZK-based topic
> creation?
> > > If
> > > > not, which seems to be the case from my understanding, should we
> have a
> > > new
> > > > broker config to disable ZK-based topic creation? This seems
> necessary to
> > > > prevent user from using stray builds to evade the topic creation
> policy.
> > > >
> > > > Thanks,
> > > > Dong
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On Mon, Jan 9, 2017 at 1:42 PM, Roger Hoover  >
> > > > wrote:
> > > >
> > > > > Got it.  Thanks, Ismael.
> > > > >
> > > > > On Mon, Jan 9, 2017 at 10:42 AM, Ismael Juma 
> > > wrote:
> > > > >
> > > > > > Hi Roger,
> > > > > >
> > > > > > That's a good question. The server defaults are passed via the
> > > > > `configure`
> > > > > > method of the `Configurable` interface that is implemented by
> > > > > > `CreateTopicPolicy`. I'll mention this explicitly in the KIP.
> > > > > >
> > > > > > Ismael
> > > > > >
> > > > > > On Mon, Jan 9, 2017 at 6:04 PM, Roger Hoover <
> roger.hoo...@gmail.com
> > > >
> > > > > > wrote:
> > > > > >
> > > > > > > This is great.  Thanks, Ismael.
> > > > > > >
> > > > > > > One question.  When TopicDetails are passed to the policy
> > > > > implementation,
> > > > > > > would the server defaults already have been merged?  If not, I
> > > think
> > > > > the
> > > > > > > policy also needs access to the server defaults.
> > > > > > >
> > > > > > > Cheers,
> > > > > > >
> > > > > > > Roger
> > > > > > >
> > > > > > > On Fri, Jan 6, 2017 at 9:26 AM, Ismael Juma  >
> > > > wrote:
> > > > > > >
> > > > > > > > Thanks for the review Jun. Yes, that's a good point, I have
> > > updated
> > > > > the
> > > > > > > > KIP.
> > > > > > > >
> > > > > > > > Ismael
> > > > > > > >
> > > > > > > > On Fri, Jan 6, 2017 at 5:15 PM, Jun Rao 
> > > wrote:
> > > > > > > >
> > > > > > > > > Hi, Ismael,
> > > > > > > > >
> > > > > > > > > Thanks for the KIP. Looks reasonable to me. To be
> consistent
> > > with
> > > > > the
> > > > > > > > > pattern used in other pluggable interfaces, we probably
> should
> > > > make
> > > > > > the
> > > > > > > > new
> > > > > > > > > interface configurable and closable?
> > > > > > > > >
> > > > > > > > > Jun
> > > > > > > > >
> > > > > > > > > On Fri, Jan 6, 2017 at 4:16 AM, Ismael Juma <
> ism...@juma.me.uk
> > > >
> > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Thanks Dan and Colin for the feedback. I updated the KIP
> to
> > > > > include
> > > > > > > the
> > > > > > > > > > addition of a validation mode. Since we need to bump the
> > > > protocol
> > > > > > > > version
> > > > > > > > > > for that, I also added a

Re: [DISCUSS] KIP-108: Create Topic Policy

2017-05-30 Thread Dong Lin
Hey Colin,

I think one big advantage of the broker-side config is that it cannot be
ignored by a malicious client, right?

Thanks,
Dong

On Tue, May 30, 2017 at 3:53 PM, Dong Lin  wrote:

> Do we have an old version of bin/kafka-topics.sh which creates topic via
> ZK and still allows user to access ZK with ACL? Another concern is that
> some user may not have ACL service deployed in their cluster. If neither of
> these is issue,  then I would prefer the zookeeper approach instead of
> adding a new broker config if the zookeeper approach is doable.
>
> However, regardless of whether we secure the zookeeper from unauthorized
> user, I think KIP-108 should provide a solution to guarantee that all topic
> creation logic goes through the topic creation policy.
>
> Thanks,
> Dong
>
> On Tue, May 30, 2017 at 3:39 PM, Colin McCabe  wrote:
>
>> It seems like, to make it really secure, we need the enforcement to be
>> done at the ZooKeeper level.  Any broker or client-side configuration
>> can just be ignored by a malicious client.  Do we have documentation or
>> code that configures ZK to prevent unprivileged users from modifying the
>> topic configurations?
>>
>> best,
>> Colin
>>
>>
>> On Tue, May 30, 2017, at 15:02, Dong Lin wrote:
>> > Hey Ismael,
>> >
>> > I agree that it makes sense not to cover ZK-based topic creation with
>> the
>> > topic creation policy and limit ZK access to brokers only going forward.
>> > My
>> > point is that we need a way to disable ZK-based topic creation so that
>> > all
>> > topic creation goes through the topic creation policy as specified in
>> > KIP-108. Does this make sense?
>> >
>> > One example solution is to add a broker-side config
>> > "enable.zookeeper.topic.creation"
>> > which defaults to "true". If user has overridden this config to be
>> > "false",
>> > then controller will delete the znode /brokers/topics/{topic} that is
>> not
>> > created by the controller. We probably need some trick to differentiate
>> > between znode created by controller and znode created by outdated tools.
>> > For example, the new controller code can add a new field "isController"
>> > in
>> > the znode /brokers/topics/{topic} when it creates this new znode. Then
>> if
>> > the znode doesn't have this field AND there is no child under this
>> znode,
>> > controller can be sure it is created by outdated tools and remove this
>> > znode from zookeeper. Users who are using outdated tools to create topic
>> > will find that the topic is not created.
>> >
>> > Dong
>> >
>> > On Tue, May 30, 2017 at 2:24 PM, Ismael Juma  wrote:
>> >
>> > > Hi Dong,
>> > >
>> > > No, ZK-based topic creation doesn't go through the policy since it
>> doesn't
>> > > go through the broker. Given that, I am not sure how the broker config
>> > > would work. Can you please elaborate? It seems like the way forward
>> is to
>> > > limit ZK access to brokers only.
>> > >
>> > > Ismael
>> > >
>> > > On Tue, May 30, 2017 at 10:19 PM, Dong Lin 
>> wrote:
>> > >
>> > > > Hey Ismael,
>> > > >
>> > > > Thanks for the KIP. This is definitely useful.
>> > > >
>> > > > Does the KIP apply the topic creation policy to ZK-based topic
>> creation?
>> > > If
>> > > > not, which seems to be the case from my understanding, should we
>> have a
>> > > new
>> > > > broker config to disable ZK-based topic creation? This seems
>> necessary to
>> > > > prevent user from using stray builds to evade the topic creation
>> policy.
>> > > >
>> > > > Thanks,
>> > > > Dong
>> > > >
>> > > >
>> > > >
>> > > >
>> > > >
>> > > >
>> > > > On Mon, Jan 9, 2017 at 1:42 PM, Roger Hoover <
>> roger.hoo...@gmail.com>
>> > > > wrote:
>> > > >
>> > > > > Got it.  Thanks, Ismael.
>> > > > >
>> > > > > On Mon, Jan 9, 2017 at 10:42 AM, Ismael Juma 
>> > > wrote:
>> > > > >
>> > > > > > Hi Roger,
>> > > > > >
>> > > > > > That's a good question. The server defaults are passed via the
>> > > > > `configure`
>> > > > > > method of the `Configurable` interface that is implemented by
>> > > > > > `CreateTopicPolicy`. I'll mention this explicitly in the KIP.
>> > > > > >
>> > > > > > Ismael
>> > > > > >
>> > > > > > On Mon, Jan 9, 2017 at 6:04 PM, Roger Hoover <
>> roger.hoo...@gmail.com
>> > > >
>> > > > > > wrote:
>> > > > > >
>> > > > > > > This is great.  Thanks, Ismael.
>> > > > > > >
>> > > > > > > One question.  When TopicDetails are passed to the policy
>> > > > > implementation,
>> > > > > > > would the server defaults already have been merged?  If not, I
>> > > think
>> > > > > the
>> > > > > > > policy also needs access to the server defaults.
>> > > > > > >
>> > > > > > > Cheers,
>> > > > > > >
>> > > > > > > Roger
>> > > > > > >
>> > > > > > > On Fri, Jan 6, 2017 at 9:26 AM, Ismael Juma <
>> ism...@juma.me.uk>
>> > > > wrote:
>> > > > > > >
>> > > > > > > > Thanks for the review Jun. Yes, that's a good point, I have
>> > > updated
>> > > > > the
>> > > > > > > > KIP.
>> > > > > > > >
>> > > > > > > > Ismael
>> > > > > > > >
>> > > > > > > > On Fri, Jan 6, 2

Re: [DISCUSS] KIP-108: Create Topic Policy

2017-05-30 Thread Dong Lin
Do we have an old version of bin/kafka-topics.sh that creates topics via ZK
and still allows users to access ZK when ACLs are enabled? Another concern is
that some users may not have an ACL service deployed in their cluster. If
neither of these is an issue, then I would prefer the ZooKeeper approach to
adding a new broker config, assuming the ZooKeeper approach is doable.

However, regardless of whether we secure ZooKeeper against unauthorized users,
I think KIP-108 should provide a solution that guarantees all topic creation
goes through the topic creation policy.

Thanks,
Dong

On Tue, May 30, 2017 at 3:39 PM, Colin McCabe  wrote:

> It seems like, to make it really secure, we need the enforcement to be
> done at the ZooKeeper level.  Any broker or client-side configuration
> can just be ignored by a malicious client.  Do we have documentation or
> code that configures ZK to prevent unprivileged users from modifying the
> topic configurations?
>
> best,
> Colin
>
>
> On Tue, May 30, 2017, at 15:02, Dong Lin wrote:
> > Hey Ismael,
> >
> > I agree that it makes sense not to cover ZK-based topic creation with the
> > topic creation policy and limit ZK access to brokers only going forward.
> > My
> > point is that we need a way to disable ZK-based topic creation so that
> > all
> > topic creation goes through the topic creation policy as specified in
> > KIP-108. Does this make sense?
> >
> > One example solution is to add a broker-side config
> > "enable.zookeeper.topic.creation"
> > which defaults to "true". If user has overridden this config to be
> > "false",
> > then controller will delete the znode /brokers/topics/{topic} that is not
> > created by the controller. We probably need some trick to differentiate
> > between znode created by controller and znode created by outdated tools.
> > For example, the new controller code can add a new field "isController"
> > in
> > the znode /brokers/topics/{topic} when it creates this new znode. Then if
> > the znode doesn't have this field AND there is no child under this znode,
> > controller can be sure it is created by outdated tools and remove this
> > znode from zookeeper. Users who are using outdated tools to create topic
> > will find that the topic is not created.
> >
> > Dong
> >
> > On Tue, May 30, 2017 at 2:24 PM, Ismael Juma  wrote:
> >
> > > Hi Dong,
> > >
> > > No, ZK-based topic creation doesn't go through the policy since it
> doesn't
> > > go through the broker. Given that, I am not sure how the broker config
> > > would work. Can you please elaborate? It seems like the way forward is
> to
> > > limit ZK access to brokers only.
> > >
> > > Ismael
> > >
> > > On Tue, May 30, 2017 at 10:19 PM, Dong Lin 
> wrote:
> > >
> > > > Hey Ismael,
> > > >
> > > > Thanks for the KIP. This is definitely useful.
> > > >
> > > > Does the KIP apply the topic creation policy to ZK-based topic
> creation?
> > > If
> > > > not, which seems to be the case from my understanding, should we
> have a
> > > new
> > > > broker config to disable ZK-based topic creation? This seems
> necessary to
> > > > prevent user from using stray builds to evade the topic creation
> policy.
> > > >
> > > > Thanks,
> > > > Dong
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On Mon, Jan 9, 2017 at 1:42 PM, Roger Hoover  >
> > > > wrote:
> > > >
> > > > > Got it.  Thanks, Ismael.
> > > > >
> > > > > On Mon, Jan 9, 2017 at 10:42 AM, Ismael Juma 
> > > wrote:
> > > > >
> > > > > > Hi Roger,
> > > > > >
> > > > > > That's a good question. The server defaults are passed via the
> > > > > `configure`
> > > > > > method of the `Configurable` interface that is implemented by
> > > > > > `CreateTopicPolicy`. I'll mention this explicitly in the KIP.
> > > > > >
> > > > > > Ismael
> > > > > >
> > > > > > On Mon, Jan 9, 2017 at 6:04 PM, Roger Hoover <
> roger.hoo...@gmail.com
> > > >
> > > > > > wrote:
> > > > > >
> > > > > > > This is great.  Thanks, Ismael.
> > > > > > >
> > > > > > > One question.  When TopicDetails are passed to the policy
> > > > > implementation,
> > > > > > > would the server defaults already have been merged?  If not, I
> > > think
> > > > > the
> > > > > > > policy also needs access to the server defaults.
> > > > > > >
> > > > > > > Cheers,
> > > > > > >
> > > > > > > Roger
> > > > > > >
> > > > > > > On Fri, Jan 6, 2017 at 9:26 AM, Ismael Juma  >
> > > > wrote:
> > > > > > >
> > > > > > > > Thanks for the review Jun. Yes, that's a good point, I have
> > > updated
> > > > > the
> > > > > > > > KIP.
> > > > > > > >
> > > > > > > > Ismael
> > > > > > > >
> > > > > > > > On Fri, Jan 6, 2017 at 5:15 PM, Jun Rao 
> > > wrote:
> > > > > > > >
> > > > > > > > > Hi, Ismael,
> > > > > > > > >
> > > > > > > > > Thanks for the KIP. Looks reasonable to me. To be
> consistent
> > > with
> > > > > the
> > > > > > > > > pattern used in other pluggable interfaces, we probably
> should
> > > > make
> > > > > > the
> > > > > > > > new
> > > > > > > > > interface configurable and closable?
> > >

Re: [DISCUSS] KIP-140: Add administrative RPCs for adding, deleting, and listing ACLs

2017-05-30 Thread Colin McCabe
Hi all,

In the course of reviewing the AdminClient#apiVersions API, we found out
that it was using an internal class,
org.apache.kafka.clients.NodeApiVersions.  This internal class
references more internal stuff, including things in
org.apache.kafka.common.protocol.  So we filed KAFKA-5214 to create a
public, stable API for this.

The main changes in KAFKA-5214 are:
* Create a public NodeVersions class to be returned by AdminClient
* Split the private ApiKeys enum into a public ApiKey enum, and some
internal data in ApiKeys.java

The idea here is to keep a clean separation between the API and the
implementation, so that if we need to change internal details later, we
easily can.  For example, we should be able to refactor the protocol
classes without breaking users of AdminClient.

best,
Colin
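
For readers skimming the thread, the rough shape being discussed might look
like the sketch below. Only the names ApiKey and NodeVersions come from the
message above; the fields, methods, and values are assumptions for
illustration, not the final KAFKA-5214 API.

{code:java}
import java.util.Collections;
import java.util.EnumMap;
import java.util.Map;

// Hypothetical sketch only: a public, stable facade so AdminClient callers never
// depend on org.apache.kafka.common.protocol internals. Everything except the
// class names ApiKey and NodeVersions is an assumption.
enum ApiKey {
    PRODUCE(0),
    FETCH(1),
    METADATA(3),
    CREATE_TOPICS(19);   // subset shown for brevity

    private final short id;

    ApiKey(int id) { this.id = (short) id; }

    public short id() { return id; }
}

/** Per-broker view of which API versions that broker advertises. */
public final class NodeVersions {
    private final Map<ApiKey, Short> maxVersions;

    public NodeVersions(Map<ApiKey, Short> maxVersions) {
        EnumMap<ApiKey, Short> copy = new EnumMap<>(ApiKey.class);
        copy.putAll(maxVersions);
        this.maxVersions = Collections.unmodifiableMap(copy);
    }

    /** Highest version of the given API the broker supports, or -1 if unsupported. */
    public short maxVersion(ApiKey key) {
        return maxVersions.getOrDefault(key, (short) -1);
    }
}
{code}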


On Thu, May 4, 2017, at 16:13, Colin McCabe wrote:
> That's a good point.  It's worth mentioning that there is a KIP in
> progress, KIP-133: List and Alter Configs Admin APIs, that should help
> with those.
> 
> In the long term, it would be nice if we could deprecate
> 'allow.everyone.if.no.acl.found', along with topic auto-creation.  It
> should be possible to get the functionality of
> 'allow.everyone.if.no.acl.found' by just adding a few ALLOW * ACLs. 
> Maybe we could even do it in an upgrade script?  It will take a while to
> get there.
> 
> best,
> Colin
> 
> 
> On Thu, May 4, 2017, at 15:09, dan wrote:
> > what about the configs for: `allow.everyone.if.no.acl.found` and `
> > super.users`?
> > 
> > i understand they are an implementation detail of
> > `SimpleAclAuthorizer` configs,
> > but without these its difficult to make sense of what permissions a
> > `ListAclsResponse`
> > actually represents.
> > 
> > maybe something for another kip.
> > 
> > dan
> > 
> > On Thu, May 4, 2017 at 2:36 PM, Colin McCabe  wrote:
> > 
> > > On Thu, May 4, 2017, at 13:46, Magnus Edenhill wrote:
> > > > Hey Colin,
> > > >
> > > > good KIP!
> > > >
> > > > Some comments:
> > > >
> > > > 1a. For operation, permission_type and resource_type: is there any 
> > > > reason
> > > > for having the any and unknown enums as negative values?
> > > > Since neither of these fields has an integer significance (unlike for
> > > > example offsets which use negative offsets for logical offsets) I dont
> > > > really see a reason to do this. It might also trick client developers to
> > > > make assumptions on future negative values (if resource_type < 0:
> > > > treat_as_invalid()...), unless that's the reason :). This might be my
> > > > personal preference but encoding extended meaning into types should be
> > > > avoided unless needed, and I dont think it is needed for enums.
> > >
> > > Hi Magnus,
> > >
> > > That's a fair question.  I don't have a strong preference either way.
> > > If it is more convenient or consistent to start at 0, we can certainly
> > > do that.
> > >
> > > >
> > > > but..
> > > >
> > > > 1b. Since most clients implementing the ACL requests probably wont make
> > > > much use of this API themselves but rather treat it as a straight
> > > > pass-through between the protocol and the client's public API to the
> > > > application, could we save ourselves some work (current and future) to
> > > > make
> > > > the enums as nullable_strings instead of integer enums? This would cut
> > > > down
> > > > on the number of enum-to-str and vice versa conversions needed, and 
> > > > would
> > > > also make the APIs more future proof since an added resource_type (et.al
> > > )
> > > > would not need a client, or even tool, update, and the new type will not
> > > > show up as UNKNOWN but of its real value.
> > > > From a broker-side verification perspective there should really be no
> > > > difference since the enum values will need to be interpreted anyway.
> > > > So instead of int enum { unknown = -2, any = -1, deny, allow }, we have 
> > > > {
> > > > null, "deny", "allow" }.
> > >
> > > Strings use much, much more space, though.  An INT8 is one byte, whereas
> > > the string "clusterAction" is 13 bytes, plus a 2 byte length field (if I
> > > remember our serialization correctly)  A 10x or 15x RPC space penalty
> > > starts to really hurt, especially when you have hundreds or thousands of
> > > ACLs, and each one has 6 fields.
> > >
> > > >
> > > >
> > > > 2. "Each of the arguments to ListAclsRequest acts as a filter."
> > > > Specify if these filters are OR:ed or AND:ed.
> > >
> > > Yeah, they are ANDed.  I'll add a note.
> > >
> > > >
> > > > 3. "CreateAclsRequest and CreateAclsResponse"
> > > > What happens if a client attempts to create an ACL entry which is
> > > > identical
> > > > to one already created in the cluster?
> > > > Is this an error? silently ignored? resulting in duplicates?
> > >
> > > Unfortunately, we are somewhat at the mercy of the
> > > kafka.security.auth.Authorizer implementation here.  The default
> > > SimpleAclAuthorizer does not allow duplicates to be added.
> > > If the Authorizer doesn't de-d

[GitHub] kafka pull request #3164: KAFKA-5150: Reduce lz4 decompression overhead (wit...

2017-05-30 Thread ijuma
Github user ijuma closed the pull request at:

https://github.com/apache/kafka/pull/3164


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-5150) LZ4 decompression is 4-5x slower than Snappy on small batches / messages

2017-05-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030291#comment-16030291
 ] 

ASF GitHub Bot commented on KAFKA-5150:
---

Github user ijuma closed the pull request at:

https://github.com/apache/kafka/pull/3164


> LZ4 decompression is 4-5x slower than Snappy on small batches / messages
> 
>
> Key: KAFKA-5150
> URL: https://issues.apache.org/jira/browse/KAFKA-5150
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.2.2, 0.9.0.1, 0.11.0.0, 0.10.2.1
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
> Fix For: 0.11.0.0
>
>
> I benchmarked RecordsIterator.DeepRecordsIterator instantiation on small batch 
> sizes with small messages after observing some performance bottlenecks in the 
> consumer. 
> For batch sizes of 1 with messages of 100 bytes, LZ4 heavily underperforms 
> compared to Snappy (see benchmark below). Most of our time is currently spent 
> allocating memory blocks in KafkaLZ4BlockInputStream, due to the fact that we 
> default to larger 64kB block sizes. Some quick testing shows we could improve 
> performance by almost an order of magnitude for small batches and messages if 
> we re-used buffers between instantiations of the input stream.
> [Benchmark 
> Code|https://github.com/xvrl/kafka/blob/small-batch-lz4-benchmark/clients/src/test/java/org/apache/kafka/common/record/DeepRecordsIteratorBenchmark.java#L86]
> {code}
> Benchmark                                          (compressionType)  (messageSize)   Mode  Cnt       Score       Error  Units
> DeepRecordsIteratorBenchmark.measureSingleMessage                LZ4            100  thrpt   20   84802.279 ±  1983.847  ops/s
> DeepRecordsIteratorBenchmark.measureSingleMessage             SNAPPY            100  thrpt   20  407585.747 ±  9877.073  ops/s
> DeepRecordsIteratorBenchmark.measureSingleMessage               NONE            100  thrpt   20  579141.634 ± 18482.093  ops/s
> {code}
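
The direction described above (re-using buffers between instantiations of the
input stream) could look roughly like this; a hedged sketch only, with
illustrative names and sizes rather than the actual KafkaLZ4BlockInputStream
change:

{code:java}
import java.nio.ByteBuffer;

// Hedged sketch of "re-use buffers between instantiations": cache one decompression
// buffer per thread, sized for the 64 kB block size mentioned above, so every
// small batch no longer pays a fresh allocation. Names are illustrative.
final class DecompressionBuffers {
    private static final int BLOCK_SIZE = 64 * 1024;
    private static final ThreadLocal<ByteBuffer> CACHED =
            ThreadLocal.withInitial(() -> ByteBuffer.allocate(BLOCK_SIZE));

    /** Returns the caller thread's cached buffer, cleared and ready to fill. */
    static ByteBuffer borrow() {
        ByteBuffer buffer = CACHED.get();
        buffer.clear();
        return buffer;
    }

    private DecompressionBuffers() { }
}
{code}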



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] KIP-108: Create Topic Policy

2017-05-30 Thread Colin McCabe
It seems like, to make it really secure, we need the enforcement to be
done at the ZooKeeper level.  Any broker or client-side configuration
can just be ignored by a malicious client.  Do we have documentation or
code that configures ZK to prevent unprivileged users from modifying the
topic configurations?

best,
Colin
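
For reference, the ZooKeeper-level lockdown Colin asks about amounts to putting
ACLs on the topic znodes so that only the brokers' principal can write them. A
minimal sketch with the plain ZooKeeper Java client follows; the connect
string, the "kafka" SASL principal, and applying it only to /brokers/topics are
illustrative assumptions (Kafka's zookeeper.set.acl setting plus the ZooKeeper
security migration tool handle the whole tree).

{code:java}
import java.util.Arrays;
import java.util.List;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;

// Hedged sketch: make /brokers/topics writable only by the brokers' SASL principal
// ("kafka" here is an assumption) while leaving it world-readable. A real setup
// would apply this recursively and rely on Kafka's own ZK security tooling.
public class LockDownTopicZnodes {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30_000, event -> { });
        List<ACL> brokerOnlyWrites = Arrays.asList(
                new ACL(ZooDefs.Perms.ALL, new Id("sasl", "kafka")),
                new ACL(ZooDefs.Perms.READ, ZooDefs.Ids.ANYONE_ID_UNSAFE));
        zk.setACL("/brokers/topics", brokerOnlyWrites, -1);  // -1 = any ACL version
        zk.close();
    }
}
{code}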


On Tue, May 30, 2017, at 15:02, Dong Lin wrote:
> Hey Ismael,
> 
> I agree that it makes sense not to cover ZK-based topic creation with the
> topic creation policy and limit ZK access to brokers only going forward.
> My
> point is that we need a way to disable ZK-based topic creation so that
> all
> topic creation goes through the topic creation policy as specified in
> KIP-108. Does this make sense?
> 
> One example solution is to add a broker-side config
> "enable.zookeeper.topic.creation"
> which defaults to "true". If user has overridden this config to be
> "false",
> then controller will delete the znode /brokers/topics/{topic} that is not
> created by the controller. We probably need some trick to differentiate
> between znode created by controller and znode created by outdated tools.
> For example, the new controller code can add a new field "isController"
> in
> the znode /brokers/topics/{topic} when it creates this new znode. Then if
> the znode doesn't have this field AND there is no child under this znode,
> controller can be sure it is created by outdated tools and remove this
> znode from zookeeper. Users who are using outdated tools to create topic
> will find that the topic is not created.
> 
> Dong
> 
> On Tue, May 30, 2017 at 2:24 PM, Ismael Juma  wrote:
> 
> > Hi Dong,
> >
> > No, ZK-based topic creation doesn't go through the policy since it doesn't
> > go through the broker. Given that, I am not sure how the broker config
> > would work. Can you please elaborate? It seems like the way forward is to
> > limit ZK access to brokers only.
> >
> > Ismael
> >
> > On Tue, May 30, 2017 at 10:19 PM, Dong Lin  wrote:
> >
> > > Hey Ismael,
> > >
> > > Thanks for the KIP. This is definitely useful.
> > >
> > > Does the KIP apply the topic creation policy to ZK-based topic creation?
> > If
> > > not, which seems to be the case from my understanding, should we have a
> > new
> > > broker config to disable ZK-based topic creation? This seems necessary to
> > > prevent user from using stray builds to evade the topic creation policy.
> > >
> > > Thanks,
> > > Dong
> > >
> > >
> > >
> > >
> > >
> > >
> > > On Mon, Jan 9, 2017 at 1:42 PM, Roger Hoover 
> > > wrote:
> > >
> > > > Got it.  Thanks, Ismael.
> > > >
> > > > On Mon, Jan 9, 2017 at 10:42 AM, Ismael Juma 
> > wrote:
> > > >
> > > > > Hi Roger,
> > > > >
> > > > > That's a good question. The server defaults are passed via the
> > > > `configure`
> > > > > method of the `Configurable` interface that is implemented by
> > > > > `CreateTopicPolicy`. I'll mention this explicitly in the KIP.
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Mon, Jan 9, 2017 at 6:04 PM, Roger Hoover  > >
> > > > > wrote:
> > > > >
> > > > > > This is great.  Thanks, Ismael.
> > > > > >
> > > > > > One question.  When TopicDetails are passed to the policy
> > > > implementation,
> > > > > > would the server defaults already have been merged?  If not, I
> > think
> > > > the
> > > > > > policy also needs access to the server defaults.
> > > > > >
> > > > > > Cheers,
> > > > > >
> > > > > > Roger
> > > > > >
> > > > > > On Fri, Jan 6, 2017 at 9:26 AM, Ismael Juma 
> > > wrote:
> > > > > >
> > > > > > > Thanks for the review Jun. Yes, that's a good point, I have
> > updated
> > > > the
> > > > > > > KIP.
> > > > > > >
> > > > > > > Ismael
> > > > > > >
> > > > > > > On Fri, Jan 6, 2017 at 5:15 PM, Jun Rao 
> > wrote:
> > > > > > >
> > > > > > > > Hi, Ismael,
> > > > > > > >
> > > > > > > > Thanks for the KIP. Looks reasonable to me. To be consistent
> > with
> > > > the
> > > > > > > > pattern used in other pluggable interfaces, we probably should
> > > make
> > > > > the
> > > > > > > new
> > > > > > > > interface configurable and closable?
> > > > > > > >
> > > > > > > > Jun
> > > > > > > >
> > > > > > > > On Fri, Jan 6, 2017 at 4:16 AM, Ismael Juma  > >
> > > > > wrote:
> > > > > > > >
> > > > > > > > > Thanks Dan and Colin for the feedback. I updated the KIP to
> > > > include
> > > > > > the
> > > > > > > > > addition of a validation mode. Since we need to bump the
> > > protocol
> > > > > > > version
> > > > > > > > > for that, I also added an error message per topic to the
> > > > response.
> > > > > I
> > > > > > > had
> > > > > > > > > the latter as "Future Work", but I actually felt that it
> > should
> > > > be
> > > > > in
> > > > > > > the
> > > > > > > > > first version (good to have feedback confirming that).
> > > > > > > > >
> > > > > > > > > Let me know if the changes look good to you.
> > > > > > > > >
> > > > > > > > > Ismael
> > > > > > > > >
> > > > > > > > > On Thu, Jan 5, 2017 at 9:54 PM, Colin McCabe <
> > > cmcc...@apache.org
> > 

Re: [DISCUSS] KIP-108: Create Topic Policy

2017-05-30 Thread Onur Karaman
Just for completeness, I think one option is to:
1. Upgrade broker-side auto topic creation to send CreateTopicsRequest.
This is just to unify the topic creation flow and policy enforcement.
2. Migrate all clients away from ZooKeeper-based topic creation and have them
send CreateTopicsRequest instead (a minimal example follows after this list)
3. Lock ZooKeeper down with ACLs
Bonus: disable auto topic creation.
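
A minimal example of the client-side path in step 2, creating a topic through
the broker so that the KIP-108 policy and any ACLs apply, using the new
AdminClient; the broker address and topic settings are placeholders:

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

// Topic creation via CreateTopicsRequest instead of writing to ZooKeeper directly.
// Placeholder values; the point is only which code path the request takes.
public class CreateTopicViaBroker {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        AdminClient admin = AdminClient.create(props);
        try {
            NewTopic topic = new NewTopic("example-topic", 3, (short) 2); // name, partitions, replication
            admin.createTopics(Collections.singleton(topic)).all().get(); // policy/ACLs enforced broker-side
        } finally {
            admin.close();
        }
    }
}
{code}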

On Tue, May 30, 2017 at 3:03 PM Dong Lin  wrote:

> Hey Ismael,
>
> I agree that it makes sense not to cover ZK-based topic creation with the
> topic creation policy and limit ZK access to brokers only going forward. My
> point is that we need a way to disable ZK-based topic creation so that all
> topic creation goes through the topic creation policy as specified in
> KIP-108. Does this make sense?
>
> One example solution is to add a broker-side config
> "enable.zookeeper.topic.creation"
> which defaults to "true". If user has overridden this config to be "false",
> then controller will delete the znode /brokers/topics/{topic} that is not
> created by the controller. We probably need some trick to differentiate
> between znode created by controller and znode created by outdated tools.
> For example, the new controller code can add a new field "isController" in
> the znode /brokers/topics/{topic} when it creates this new znode. Then if
> the znode doesn't have this field AND there is no child under this znode,
> controller can be sure it is created by outdated tools and remove this
> znode from zookeeper. Users who are using outdated tools to create topic
> will find that the topic is not created.
>
> Dong
>
> On Tue, May 30, 2017 at 2:24 PM, Ismael Juma  wrote:
>
> > Hi Dong,
> >
> > No, ZK-based topic creation doesn't go through the policy since it
> doesn't
> > go through the broker. Given that, I am not sure how the broker config
> > would work. Can you please elaborate? It seems like the way forward is to
> > limit ZK access to brokers only.
> >
> > Ismael
> >
> > On Tue, May 30, 2017 at 10:19 PM, Dong Lin  wrote:
> >
> > > Hey Ismael,
> > >
> > > Thanks for the KIP. This is definitely useful.
> > >
> > > Does the KIP apply the topic creation policy to ZK-based topic
> creation?
> > If
> > > not, which seems to be the case from my understanding, should we have a
> > new
> > > broker config to disable ZK-based topic creation? This seems necessary
> to
> > > prevent user from using stray builds to evade the topic creation
> policy.
> > >
> > > Thanks,
> > > Dong
> > >
> > >
> > >
> > >
> > >
> > >
> > > On Mon, Jan 9, 2017 at 1:42 PM, Roger Hoover 
> > > wrote:
> > >
> > > > Got it.  Thanks, Ismael.
> > > >
> > > > On Mon, Jan 9, 2017 at 10:42 AM, Ismael Juma 
> > wrote:
> > > >
> > > > > Hi Roger,
> > > > >
> > > > > That's a good question. The server defaults are passed via the
> > > > `configure`
> > > > > method of the `Configurable` interface that is implemented by
> > > > > `CreateTopicPolicy`. I'll mention this explicitly in the KIP.
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Mon, Jan 9, 2017 at 6:04 PM, Roger Hoover <
> roger.hoo...@gmail.com
> > >
> > > > > wrote:
> > > > >
> > > > > > This is great.  Thanks, Ismael.
> > > > > >
> > > > > > One question.  When TopicDetails are passed to the policy
> > > > implementation,
> > > > > > would the server defaults already have been merged?  If not, I
> > think
> > > > the
> > > > > > policy also needs access to the server defaults.
> > > > > >
> > > > > > Cheers,
> > > > > >
> > > > > > Roger
> > > > > >
> > > > > > On Fri, Jan 6, 2017 at 9:26 AM, Ismael Juma 
> > > wrote:
> > > > > >
> > > > > > > Thanks for the review Jun. Yes, that's a good point, I have
> > updated
> > > > the
> > > > > > > KIP.
> > > > > > >
> > > > > > > Ismael
> > > > > > >
> > > > > > > On Fri, Jan 6, 2017 at 5:15 PM, Jun Rao 
> > wrote:
> > > > > > >
> > > > > > > > Hi, Ismael,
> > > > > > > >
> > > > > > > > Thanks for the KIP. Looks reasonable to me. To be consistent
> > with
> > > > the
> > > > > > > > pattern used in other pluggable interfaces, we probably
> should
> > > make
> > > > > the
> > > > > > > new
> > > > > > > > interface configurable and closable?
> > > > > > > >
> > > > > > > > Jun
> > > > > > > >
> > > > > > > > On Fri, Jan 6, 2017 at 4:16 AM, Ismael Juma <
> ism...@juma.me.uk
> > >
> > > > > wrote:
> > > > > > > >
> > > > > > > > > Thanks Dan and Colin for the feedback. I updated the KIP to
> > > > include
> > > > > > the
> > > > > > > > > addition of a validation mode. Since we need to bump the
> > > protocol
> > > > > > > version
> > > > > > > > > for that, I also added an error message per topic to the
> > > > response.
> > > > > I
> > > > > > > had
> > > > > > > > > the latter as "Future Work", but I actually felt that it
> > should
> > > > be
> > > > > in
> > > > > > > the
> > > > > > > > > first version (good to have feedback confirming that).
> > > > > > > > >
> > > > > > > > > Let me know if the changes look good to you.
> > > > > > > > >
> > > > > > > > > Ismael
> > > > > > > > >
> > > > > > > > > 

[jira] [Commented] (KAFKA-5349) KafkaConsumer occasionally hits IllegalStateException

2017-05-30 Thread Apurva Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030276#comment-16030276
 ] 

Apurva Mehta commented on KAFKA-5349:
-

cc [~hachikuji]

> KafkaConsumer occasionally hits IllegalStateException
> -
>
> Key: KAFKA-5349
> URL: https://issues.apache.org/jira/browse/KAFKA-5349
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>
> I have noticed the following while debugging system tests. Sometimes a plain 
> old console consumer hits the following exception when reading from a topic:
> {noformat}
> [2017-05-30 22:16:55,686] ERROR Unknown error when running consumer:  
> (kafka.tools.ConsoleConsumer$)
> java.lang.IllegalStateException: Invalid attempt to complete a request future 
> which is already complete
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.raise(RequestFuture.java:145)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.raise(RequestFuture.java:158)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.handleListOffsetResponse(Fetcher.java:744)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.access$2000(Fetcher.java:91)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$3.onSuccess(Fetcher.java:688)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$3.onSuccess(Fetcher.java:683)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:488)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:348)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:184)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.retrieveOffsetsByTimes(Fetcher.java:451)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.resetOffsets(Fetcher.java:409)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.updateFetchPositions(Fetcher.java:282)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateFetchPositions(KafkaConsumer.java:1614)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1055)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1015)
> at kafka.consumer.NewShinyConsumer.(BaseConsumer.scala:58)
> at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:72)
> at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:53)
> at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Jenkins build is back to normal : kafka-trunk-jdk7 #2287

2017-05-30 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-5349) KafkaConsumer occasionally hits IllegalStateException

2017-05-30 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-5349:
---

 Summary: KafkaConsumer occasionally hits IllegalStateException
 Key: KAFKA-5349
 URL: https://issues.apache.org/jira/browse/KAFKA-5349
 Project: Kafka
  Issue Type: Bug
Reporter: Apurva Mehta


I have noticed the following while debugging system tests. Sometimes a plain 
old console consumer hits the following exception when reading from a topic:

{noformat}
[2017-05-30 22:16:55,686] ERROR Unknown error when running consumer:  
(kafka.tools.ConsoleConsumer$)
java.lang.IllegalStateException: Invalid attempt to complete a request future 
which is already complete
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.raise(RequestFuture.java:145)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.raise(RequestFuture.java:158)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.handleListOffsetResponse(Fetcher.java:744)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.access$2000(Fetcher.java:91)
at 
org.apache.kafka.clients.consumer.internals.Fetcher$3.onSuccess(Fetcher.java:688)
at 
org.apache.kafka.clients.consumer.internals.Fetcher$3.onSuccess(Fetcher.java:683)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:488)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:348)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:184)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.retrieveOffsetsByTimes(Fetcher.java:451)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.resetOffsets(Fetcher.java:409)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.updateFetchPositions(Fetcher.java:282)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.updateFetchPositions(KafkaConsumer.java:1614)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1055)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1015)
at kafka.consumer.NewShinyConsumer.(BaseConsumer.scala:58)
at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:72)
at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:53)
at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)

{noformat}
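
The error above is the consumer's internal request future refusing to be
completed twice. A stripped-down illustration of that guard (a sketch, not the
actual RequestFuture code) looks like this:

{code:java}
import java.util.concurrent.atomic.AtomicReference;

// Minimal illustration of a single-completion future: the second complete()/raise()
// fails with the same message as the report above. Sketch only, not Kafka's
// o.a.k.clients.consumer.internals.RequestFuture.
final class OneShotFuture<T> {
    private static final Object INCOMPLETE = new Object();
    private final AtomicReference<Object> result = new AtomicReference<>(INCOMPLETE);

    void complete(T value) {
        if (!result.compareAndSet(INCOMPLETE, value))
            throw new IllegalStateException(
                    "Invalid attempt to complete a request future which is already complete");
    }

    void raise(RuntimeException cause) {
        if (!result.compareAndSet(INCOMPLETE, cause))
            throw new IllegalStateException(
                    "Invalid attempt to complete a request future which is already complete");
    }

    boolean isDone() {
        return result.get() != INCOMPLETE;
    }
}
{code}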



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] KIP-108: Create Topic Policy

2017-05-30 Thread Dong Lin
Hey Ismael,

I agree that it makes sense not to cover ZK-based topic creation with the
topic creation policy and limit ZK access to brokers only going forward. My
point is that we need a way to disable ZK-based topic creation so that all
topic creation goes through the topic creation policy as specified in
KIP-108. Does this make sense?

One example solution is to add a broker-side config
"enable.zookeeper.topic.creation" which defaults to "true". If the user has
overridden this config to "false", then the controller will delete any znode
/brokers/topics/{topic} that was not created by the controller. We probably
need some trick to differentiate between znodes created by the controller and
znodes created by outdated tools. For example, the new controller code can add
a new field "isController" to the znode /brokers/topics/{topic} when it creates
the znode. Then, if the znode doesn't have this field AND there is no child
under the znode, the controller can be sure it was created by an outdated tool
and remove it from ZooKeeper. Users who create topics with outdated tools will
find that the topic is not created.

Dong
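
A rough sketch of the controller-side check described above, written against
the plain ZooKeeper client; the "isController" marker, the crude JSON probe,
and the deletion policy are assumptions taken from this email, not existing
Kafka code:

{code:java}
import java.nio.charset.StandardCharsets;
import org.apache.zookeeper.ZooKeeper;

// Hedged sketch: with the proposed enable.zookeeper.topic.creation=false, a topic
// znode that lacks the "isController" marker and has no partition children yet
// would be treated as created by an outdated tool and removed.
final class StrayTopicZnodeCheck {
    private final ZooKeeper zk;

    StrayTopicZnodeCheck(ZooKeeper zk) {
        this.zk = zk;
    }

    void maybeRemoveStrayTopic(String topic) throws Exception {
        String path = "/brokers/topics/" + topic;
        String data = new String(zk.getData(path, false, null), StandardCharsets.UTF_8);
        boolean createdByController = data.contains("\"isController\"");  // crude probe, sketch only
        boolean hasChildren = !zk.getChildren(path, false).isEmpty();
        if (!createdByController && !hasChildren) {
            zk.delete(path, -1);  // -1 = any version; topic came from an outdated tool
        }
    }
}
{code}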

On Tue, May 30, 2017 at 2:24 PM, Ismael Juma  wrote:

> Hi Dong,
>
> No, ZK-based topic creation doesn't go through the policy since it doesn't
> go through the broker. Given that, I am not sure how the broker config
> would work. Can you please elaborate? It seems like the way forward is to
> limit ZK access to brokers only.
>
> Ismael
>
> On Tue, May 30, 2017 at 10:19 PM, Dong Lin  wrote:
>
> > Hey Ismael,
> >
> > Thanks for the KIP. This is definitely useful.
> >
> > Does the KIP apply the topic creation policy to ZK-based topic creation?
> If
> > not, which seems to be the case from my understanding, should we have a
> new
> > broker config to disable ZK-based topic creation? This seems necessary to
> > prevent user from using stray builds to evade the topic creation policy.
> >
> > Thanks,
> > Dong
> >
> >
> >
> >
> >
> >
> > On Mon, Jan 9, 2017 at 1:42 PM, Roger Hoover 
> > wrote:
> >
> > > Got it.  Thanks, Ismael.
> > >
> > > On Mon, Jan 9, 2017 at 10:42 AM, Ismael Juma 
> wrote:
> > >
> > > > Hi Roger,
> > > >
> > > > That's a good question. The server defaults are passed via the
> > > `configure`
> > > > method of the `Configurable` interface that is implemented by
> > > > `CreateTopicPolicy`. I'll mention this explicitly in the KIP.
> > > >
> > > > Ismael
> > > >
> > > > On Mon, Jan 9, 2017 at 6:04 PM, Roger Hoover  >
> > > > wrote:
> > > >
> > > > > This is great.  Thanks, Ismael.
> > > > >
> > > > > One question.  When TopicDetails are passed to the policy
> > > implementation,
> > > > > would the server defaults already have been merged?  If not, I
> think
> > > the
> > > > > policy also needs access to the server defaults.
> > > > >
> > > > > Cheers,
> > > > >
> > > > > Roger
> > > > >
> > > > > On Fri, Jan 6, 2017 at 9:26 AM, Ismael Juma 
> > wrote:
> > > > >
> > > > > > Thanks for the review Jun. Yes, that's a good point, I have
> updated
> > > the
> > > > > > KIP.
> > > > > >
> > > > > > Ismael
> > > > > >
> > > > > > On Fri, Jan 6, 2017 at 5:15 PM, Jun Rao 
> wrote:
> > > > > >
> > > > > > > Hi, Ismael,
> > > > > > >
> > > > > > > Thanks for the KIP. Looks reasonable to me. To be consistent
> with
> > > the
> > > > > > > pattern used in other pluggable interfaces, we probably should
> > make
> > > > the
> > > > > > new
> > > > > > > interface configurable and closable?
> > > > > > >
> > > > > > > Jun
> > > > > > >
> > > > > > > On Fri, Jan 6, 2017 at 4:16 AM, Ismael Juma  >
> > > > wrote:
> > > > > > >
> > > > > > > > Thanks Dan and Colin for the feedback. I updated the KIP to
> > > include
> > > > > the
> > > > > > > > addition of a validation mode. Since we need to bump the
> > protocol
> > > > > > version
> > > > > > > > for that, I also added an error message per topic to the
> > > response.
> > > > I
> > > > > > had
> > > > > > > > the latter as "Future Work", but I actually felt that it
> should
> > > be
> > > > in
> > > > > > the
> > > > > > > > first version (good to have feedback confirming that).
> > > > > > > >
> > > > > > > > Let me know if the changes look good to you.
> > > > > > > >
> > > > > > > > Ismael
> > > > > > > >
> > > > > > > > On Thu, Jan 5, 2017 at 9:54 PM, Colin McCabe <
> > cmcc...@apache.org
> > > >
> > > > > > wrote:
> > > > > > > >
> > > > > > > > > Yeah, I agree... having a validation mode would be nice.
> We
> > > > should
> > > > > > be
> > > > > > > > > explicit that passing validation doesn't 100% guarantee
> that
> > a
> > > > > > > > > subsequent call to create the topic will succeed, though.
> > > There
> > > > is
> > > > > > an
> > > > > > > > > obvious race condition there-- for example, with a plugin
> > which
> > > > > > > consults
> > > > > > > > > some external authentication system, there could be a
> change
> > to
> > > > the
> > > > > > > > > privileges in between validation and attempted creation.
> > > > > > > > >
> > > > > >

[jira] [Commented] (KAFKA-5202) Handle topic deletion for ongoing transactions

2017-05-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030217#comment-16030217
 ] 

ASF GitHub Bot commented on KAFKA-5202:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3130


> Handle topic deletion for ongoing transactions
> --
>
> Key: KAFKA-5202
> URL: https://issues.apache.org/jira/browse/KAFKA-5202
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Guozhang Wang
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> If a topic is deleted, we should remove its partitions from ongoing 
> transactions.  If the transaction has already begun rolling forward, we have 
> to let it continue for the rest of the included partitions, but we have the 
> option of failing a transaction which is yet to be committed or aborted.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3130: KAFKA-5202: Handle topic deletion while trying to ...

2017-05-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3130


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-5202) Handle topic deletion for ongoing transactions

2017-05-30 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-5202:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 3130
[https://github.com/apache/kafka/pull/3130]

> Handle topic deletion for ongoing transactions
> --
>
> Key: KAFKA-5202
> URL: https://issues.apache.org/jira/browse/KAFKA-5202
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Guozhang Wang
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> If a topic is deleted, we should remove its partitions from ongoing 
> transactions.  If the transaction has already begun rolling forward, we have 
> to let it continue for the rest of the included partitions, but we have the 
> option of failing a transaction which is yet to be committed or aborted.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Build failed in Jenkins: kafka-trunk-jdk8 #1611

2017-05-30 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5316; Follow-up with ByteBufferOutputStream and other misc

--
[...truncated 2.96 MB...]

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testCsvList STARTED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testReadInt STARTED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testUrlSafeBase64EncodeUUID STARTED

kafka.utils.UtilsTest > testUrlSafeBase64EncodeUUID PASSED

kafka.utils.UtilsTest > testCsvMap STARTED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock STARTED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow STARTED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic STARTED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired STARTED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents STARTED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize STARTED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents STARTED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize STARTED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner STARTED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration STARTED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition STARTED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker STARTED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed STARTED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer STARTED

kafka.producer.AsyncProducerTest > testJavaProducer PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder STARTED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED

kafka.producer.SyncProducerTest > testReachableServer STARTED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas STARTED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout STARTED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse STARTED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest STARTED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse STARTED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse PASSED

kafka.producer.ProducerTest > testSendToNewTopic STARTED

kafka.producer.ProducerTest > testSendToNewTopic PASSED

kafka.producer.ProducerTest > testAsyncSendCanCorrectlyFailWithTimeout STARTED

kafka.producer.ProducerTest > testAsyncSendCanCorrectlyFailWithTimeout PASSED

kafka.producer.ProducerTest > testSendNullMessage STARTED

kafka.producer.ProducerTest > testSendNullMessage PASSED

kafka.producer.ProducerTest > testUpdateBrokerPartitionInfo STARTED

kafka.producer.ProducerTest > testUpdateBrokerPartitionInfo PASSED

kafka.producer.ProducerTest > testSendWithDeadBroker STARTED

kafka.producer.ProducerTest > testSendWithDeadBroker PASSED

unit.kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleTxnOffsetCommitRequestWhenInterBrokerProtocolNotSupported
 STARTED

unit.kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleTxnOffsetCommitRequestWhenInterBrokerProtocolNotSupported
 PASSED

unit.kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddPartitionsToTxnRequestWhenInterBrokerProtocolNotSupported
 STARTED

unit.kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddPartitionsToTxnRequestWhenInterBrokerProtocolNotSupported
 PASSED

unit.kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleWriteTxnMarker

Re: Jira-Spam on Dev-Mailinglist

2017-05-30 Thread Guozhang Wang
I actually do not know... Maybe Jun knows better than me?


Guozhang

On Mon, May 29, 2017 at 12:58 PM, Gwen Shapira  wrote:

> I agree.
>
> Guozhang, do you know how to implement the suggestion? JIRA to Apache
> Infra? Or is this something we can do ourselves somehow?
>
> On Mon, May 29, 2017 at 9:33 PM Guozhang Wang  wrote:
>
> > I share your pains. Right now I use filters on my email accounts and it
> has
> > been down to about 25 per day.
> >
> > I think setup a separate mailing list for jirabot and jenkins auto
> > generated emails is a good idea.
> >
> >
> > Guozhang
> >
> >
> > On Mon, May 29, 2017 at 12:58 AM,  wrote:
> >
> > > Hello everyone
> > >
> > > I find it hard to follow this mailinglist due to all the mails
> generated
> > > by Jira. Just over this weekend there are 240 new mails.
> > > Would it be possible to setup something like j...@kafka.apache.org
> where
> > > everyone can subscribe interested in those Jira mails?
> > >
> > > Right now I am going to setup a filter which just deletes the
> jira-tagged
> > > mails, but I think the current setup also makes it hard to read through
> > > the archives.
> > >
> > > regards
> > > Marc
> >
> >
> >
> >
> > --
> > -- Guozhang
> >
>



-- 
-- Guozhang


[jira] [Issue Comment Deleted] (KAFKA-5339) Transactions system test with hard broker bounces fails sporadically

2017-05-30 Thread Apurva Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apurva Mehta updated KAFKA-5339:

Comment: was deleted

(was: Another puzzling one: 

{noformat}
 21:02:37,450] ERROR Uncaught error in kafka producer I/O thread:  
(org.apache.kafka.clients.producer.internals.Sender)
java.lang.NullPointerException
at 
org.apache.kafka.clients.producer.internals.Sender.maybeSendTransactionalRequest(Sender.java:305)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:193)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:154)
at java.lang.Thread.run(Thread.java:748)
[2017-05-30 21:03:00,433] INFO Closing the Kafka producer with timeoutMillis = 
9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
{noformat}

This corresponds to this line in the Sender at the time the test was run: 
{code:java}
if (nextRequestHandler.isEndTxn() &&
        transactionManager.isCompletingTransaction() &&
        accumulator.hasUnflushedBatches()) {
    ...
}
{code}

It should be impossible to hit an NPE there, because we already null-check
`nextRequestHandler` and `transactionManager` before entering that line. )

> Transactions system test with hard broker bounces fails sporadically
> 
>
> Key: KAFKA-5339
> URL: https://issues.apache.org/jira/browse/KAFKA-5339
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> The transactions hard bounce test occasionally fails because the 
> transactional message copy just seems to hang. In one of the client logs, I 
> noticed: 
> {noformat}
> [2017-05-27 20:36:12,596] WARN Got error produce response with correlation id 
> 124 on topic-partition output-topic-0, retrying (2147483646 attempts left). 
> Error: NOT_LEADER_FOR_PARTITION 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2017-05-27 20:36:15,386] ERROR Uncaught error in kafka producer I/O thread:  
> (org.apache.kafka.clients.producer.internals.Sender)
> java.lang.NullPointerException
> at 
> org.apache.kafka.clients.producer.internals.TransactionManager$1.compare(TransactionManager.java:146)
> at 
> org.apache.kafka.clients.producer.internals.TransactionManager$1.compare(TransactionManager.java:143)
> at 
> java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:721)
> at java.util.PriorityQueue.siftDown(PriorityQueue.java:687)
> at java.util.PriorityQueue.poll(PriorityQueue.java:595)
> at 
> org.apache.kafka.clients.producer.internals.TransactionManager.nextRequestHandler(TransactionManager.java:351)
> at 
> org.apache.kafka.clients.producer.internals.Sender.maybeSendTransactionalRequest(Sender.java:303)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:193)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:154)
> at java.lang.Thread.run(Thread.java:748)
> [2017-05-27 20:36:52,007] INFO Closing the Kafka producer with timeoutMillis 
> = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
> [2017-05-27 20:36:52,036] INFO Marking the coordinator knode02:9092 (id: 
> 2147483645 rack: null) dead for group transactions-test-consumer-group 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> root@7dcd60017519:/opt/kafka-dev/results/latest/TransactionsTest/test_transactions/failure_mode=hard_bounce.bounce_target=brokers/1#
> {noformat}
> This suggests that the client has gotten to a bad state which is why it stops 
> processing messages, causing the tests to fail. 
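
As a stand-alone illustration of the failure shape in the stack trace above
(not Kafka code): if the field a PriorityQueue comparator reads can become null
after enqueue, the NullPointerException surfaces later, inside siftDown during
poll, exactly as seen here.

{code:java}
import java.util.Comparator;
import java.util.PriorityQueue;

// Demo only: mutate an enqueued element into a "bad" (null-priority) state and the
// comparator NPEs during a later poll(), mirroring the siftDownUsingComparator frames.
public class ComparatorNpeDemo {
    static final class Handler {
        Integer priority;                       // null models the bad client state
        Handler(Integer priority) { this.priority = priority; }
    }

    public static void main(String[] args) {
        PriorityQueue<Handler> queue =
                new PriorityQueue<>(Comparator.comparingInt((Handler h) -> h.priority));
        queue.add(new Handler(1));
        Handler midHandler = new Handler(2);
        queue.add(midHandler);
        queue.add(new Handler(3));
        midHandler.priority = null;             // state changes after the element is enqueued
        queue.poll();                           // NullPointerException thrown from the comparator
    }
}
{code}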



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] KIP-108: Create Topic Policy

2017-05-30 Thread Ismael Juma
Hi Dong,

No, ZK-based topic creation doesn't go through the policy since it doesn't
go through the broker. Given that, I am not sure how the broker config
would work. Can you please elaborate? It seems like the way forward is to
limit ZK access to brokers only.

Ismael
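
For readers joining the thread here: the policy being discussed is a
broker-side plugin, and a minimal implementation might look like the sketch
below. The interface name and the configure() hand-off of broker configs come
from the quoted discussion; the exact RequestMetadata accessors and the config
key are assumptions.

{code:java}
import java.util.Map;
import org.apache.kafka.common.errors.PolicyViolationException;
import org.apache.kafka.server.policy.CreateTopicPolicy;

// Hedged sketch of a KIP-108 policy plugin. Broker configs arrive via configure(),
// as discussed in this thread; the "example.policy.max.partitions" key and the
// RequestMetadata accessors used here are illustrative assumptions.
public class MaxPartitionsPolicy implements CreateTopicPolicy {
    private int maxPartitions = 100;  // fallback used only by this sketch

    @Override
    public void configure(Map<String, ?> configs) {
        Object value = configs.get("example.policy.max.partitions");  // hypothetical key
        if (value != null)
            maxPartitions = Integer.parseInt(value.toString());
    }

    @Override
    public void validate(RequestMetadata metadata) throws PolicyViolationException {
        if (metadata.numPartitions() > maxPartitions)
            throw new PolicyViolationException("Topic " + metadata.topic()
                    + " requests more than " + maxPartitions + " partitions");
    }

    @Override
    public void close() { }
}
{code}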

On Tue, May 30, 2017 at 10:19 PM, Dong Lin  wrote:

> Hey Ismael,
>
> Thanks for the KIP. This is definitely useful.
>
> Does the KIP apply the topic creation policy to ZK-based topic creation? If
> not, which seems to be the case from my understanding, should we have a new
> broker config to disable ZK-based topic creation? This seems necessary to
> prevent user from using stray builds to evade the topic creation policy.
>
> Thanks,
> Dong
>
>
>
>
>
>
> On Mon, Jan 9, 2017 at 1:42 PM, Roger Hoover 
> wrote:
>
> > Got it.  Thanks, Ismael.
> >
> > On Mon, Jan 9, 2017 at 10:42 AM, Ismael Juma  wrote:
> >
> > > Hi Roger,
> > >
> > > That's a good question. The server defaults are passed via the
> > `configure`
> > > method of the `Configurable` interface that is implemented by
> > > `CreateTopicPolicy`. I'll mention this explicitly in the KIP.
> > >
> > > Ismael
> > >
> > > On Mon, Jan 9, 2017 at 6:04 PM, Roger Hoover 
> > > wrote:
> > >
> > > > This is great.  Thanks, Ismael.
> > > >
> > > > One question.  When TopicDetails are passed to the policy
> > implementation,
> > > > would the server defaults already have been merged?  If not, I think
> > the
> > > > policy also needs access to the server defaults.
> > > >
> > > > Cheers,
> > > >
> > > > Roger
> > > >
> > > > On Fri, Jan 6, 2017 at 9:26 AM, Ismael Juma 
> wrote:
> > > >
> > > > > Thanks for the review Jun. Yes, that's a good point, I have updated
> > the
> > > > > KIP.
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Fri, Jan 6, 2017 at 5:15 PM, Jun Rao  wrote:
> > > > >
> > > > > > Hi, Ismael,
> > > > > >
> > > > > > Thanks for the KIP. Looks reasonable to me. To be consistent with
> > the
> > > > > > pattern used in other pluggable interfaces, we probably should
> make
> > > the
> > > > > new
> > > > > > interface configurable and closable?
> > > > > >
> > > > > > Jun
> > > > > >
> > > > > > On Fri, Jan 6, 2017 at 4:16 AM, Ismael Juma 
> > > wrote:
> > > > > >
> > > > > > > Thanks Dan and Colin for the feedback. I updated the KIP to
> > include
> > > > the
> > > > > > > addition of a validation mode. Since we need to bump the
> protocol
> > > > > version
> > > > > > > for that, I also added an error message per topic to the
> > response.
> > > I
> > > > > had
> > > > > > > the latter as "Future Work", but I actually felt that it should
> > be
> > > in
> > > > > the
> > > > > > > first version (good to have feedback confirming that).
> > > > > > >
> > > > > > > Let me know if the changes look good to you.
> > > > > > >
> > > > > > > Ismael
> > > > > > >
> > > > > > > On Thu, Jan 5, 2017 at 9:54 PM, Colin McCabe <
> cmcc...@apache.org
> > >
> > > > > wrote:
> > > > > > >
> > > > > > > > Yeah, I agree... having a validation mode would be nice.  We
> > > should
> > > > > be
> > > > > > > > explicit that passing validation doesn't 100% guarantee that
> a
> > > > > > > > subsequent call to create the topic will succeed, though.
> > There
> > > is
> > > > > an
> > > > > > > > obvious race condition there-- for example, with a plugin
> which
> > > > > > consults
> > > > > > > > some external authentication system, there could be a change
> to
> > > the
> > > > > > > > privileges in between validation and attempted creation.
> > > > > > > >
> > > > > > > > It also seems like we should try to provide a helpful
> exception
> > > > > message
> > > > > > > > for the cases where topic creation fails.  This might involve
> > > > adding
> > > > > > > > more detail about error conditions to CreateTopicsRequest...
> > > right
> > > > > now
> > > > > > > > it just returns an error code, but a text message would be a
> > nice
> > > > > > > > addition.
> > > > > > > >
> > > > > > > > cheers,
> > > > > > > > Colin
> > > > > > > >
> > > > > > > >
> > > > > > > > On Thu, Jan 5, 2017, at 13:41, dan wrote:
> > > > > > > > > it would be nice to have a dry-run or validate ability
> added
> > to
> > > > > this
> > > > > > > kip.
> > > > > > > > > since we are offloading validation to a 3rd party
> > implementor a
> > > > > > random
> > > > > > > > > user
> > > > > > > > > can't know a priori (based solely on kafka configs)
> whether a
> > > > call
> > > > > > > should
> > > > > > > > > succeed without actually creating the topic.
> > > > > > > > >
> > > > > > > > > a similar case is in connect where there is a separate
> > endpoint
> > > > > > > > >  > > > > > > > runtime/src/main/java/org/apache/kafka/connect/runtime/rest/
> > > > > resources/
> > > > > > > > ConnectorPluginsResource.java#L49-L58>
> > > > > > > > > to attempt to validate a connect configuration without
> > actually
> > > > > > > creating
> > > > > > > > > the connector.
> > > > > > > > >
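
For reference, the pluggable policy discussed in this thread can be sketched as follows. This is a minimal illustration assuming the interface that eventually shipped as `org.apache.kafka.server.policy.CreateTopicPolicy` (which extends `Configurable` and `AutoCloseable`, so the broker hands it the server configuration via `configure`); the minimum-replication rule and the `policy.min.replication.factor` config key are hypothetical, not part of the KIP.

{code:java}
// Minimal sketch of a create-topic policy. The broker passes its configuration
// (including server defaults) to the plugin via configure(), as discussed above.
import java.util.Map;

import org.apache.kafka.common.errors.PolicyViolationException;
import org.apache.kafka.server.policy.CreateTopicPolicy;

public class MinReplicationPolicy implements CreateTopicPolicy {

    private short minReplicationFactor = 2;

    @Override
    public void configure(Map<String, ?> configs) {
        // Hypothetical config key; broker defaults such as
        // default.replication.factor are also available in this map.
        Object min = configs.get("policy.min.replication.factor");
        if (min != null)
            minReplicationFactor = Short.parseShort(min.toString());
    }

    @Override
    public void validate(RequestMetadata request) throws PolicyViolationException {
        // Illustrative rule only. A production policy would also handle requests
        // that specify explicit replica assignments instead of a replication factor.
        if (request.replicationFactor() != null
                && request.replicationFactor() < minReplicationFactor)
            throw new PolicyViolationException("Topic " + request.topic()
                    + " must have a replication factor of at least " + minReplicationFactor);
    }

    @Override
    public void close() {}
}
{code}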

Re: [DISCUSS] KIP-108: Create Topic Policy

2017-05-30 Thread Dong Lin
Hey Ismael,

Thanks for the KIP. This is definitely useful.

Does the KIP apply the topic creation policy to ZK-based topic creation? If
not, which seems to be the case from my understanding, should we have a new
broker config to disable ZK-based topic creation? This seems necessary to
prevent user from using stray builds to evade the topic creation policy.

Thanks,
Dong






On Mon, Jan 9, 2017 at 1:42 PM, Roger Hoover  wrote:

> Got it.  Thanks, Ismael.
>
> On Mon, Jan 9, 2017 at 10:42 AM, Ismael Juma  wrote:
>
> > Hi Roger,
> >
> > That's a good question. The server defaults are passed via the
> `configure`
> > method of the `Configurable` interface that is implemented by
> > `CreateTopicPolicy`. I'll mention this explicitly in the KIP.
> >
> > Ismael
> >
> > On Mon, Jan 9, 2017 at 6:04 PM, Roger Hoover 
> > wrote:
> >
> > > This is great.  Thanks, Ismael.
> > >
> > > One question.  When TopicDetails are passed to the policy
> implementation,
> > > would the server defaults already have been merged?  If not, I think
> the
> > > policy also needs access to the server defaults.
> > >
> > > Cheers,
> > >
> > > Roger
> > >
> > > On Fri, Jan 6, 2017 at 9:26 AM, Ismael Juma  wrote:
> > >
> > > > Thanks for the review Jun. Yes, that's a good point, I have updated
> the
> > > > KIP.
> > > >
> > > > Ismael
> > > >
> > > > On Fri, Jan 6, 2017 at 5:15 PM, Jun Rao  wrote:
> > > >
> > > > > Hi, Ismael,
> > > > >
> > > > > Thanks for the KIP. Looks reasonable to me. To be consistent with
> the
> > > > > pattern used in other pluggable interfaces, we probably should make
> > the
> > > > new
> > > > > interface configurable and closable?
> > > > >
> > > > > Jun
> > > > >
> > > > > On Fri, Jan 6, 2017 at 4:16 AM, Ismael Juma 
> > wrote:
> > > > >
> > > > > > Thanks Dan and Colin for the feedback. I updated the KIP to
> include
> > > the
> > > > > > addition of a validation mode. Since we need to bump the protocol
> > > > version
> > > > > > for that, I also added an error message per topic to the
> response.
> > I
> > > > had
> > > > > > the latter as "Future Work", but I actually felt that it should
> be
> > in
> > > > the
> > > > > > first version (good to have feedback confirming that).
> > > > > >
> > > > > > Let me know if the changes look good to you.
> > > > > >
> > > > > > Ismael
> > > > > >
> > > > > > On Thu, Jan 5, 2017 at 9:54 PM, Colin McCabe  >
> > > > wrote:
> > > > > >
> > > > > > > Yeah, I agree... having a validation mode would be nice.  We
> > should
> > > > be
> > > > > > > explicit that passing validation doesn't 100% guarantee that a
> > > > > > > subsequent call to create the topic will succeed, though.
> There
> > is
> > > > an
> > > > > > > obvious race condition there-- for example, with a plugin which
> > > > > consults
> > > > > > > some external authentication system, there could be a change to
> > the
> > > > > > > privileges in between validation and attempted creation.
> > > > > > >
> > > > > > > It also seems like we should try to provide a helpful exception
> > > > message
> > > > > > > for the cases where topic creation fails.  This might involve
> > > adding
> > > > > > > more detail about error conditions to CreateTopicsRequest...
> > right
> > > > now
> > > > > > > it just returns an error code, but a text message would be a
> nice
> > > > > > > addition.
> > > > > > >
> > > > > > > cheers,
> > > > > > > Colin
> > > > > > >
> > > > > > >
> > > > > > > On Thu, Jan 5, 2017, at 13:41, dan wrote:
> > > > > > > > it would be nice to have a dry-run or validate ability added
> to
> > > > this
> > > > > > kip.
> > > > > > > > since we are offloading validation to a 3rd party
> implementor a
> > > > > random
> > > > > > > > user
> > > > > > > > can't know a priori (based solely on kafka configs) whether a
> > > call
> > > > > > should
> > > > > > > > succeed without actually creating the topic.
> > > > > > > >
> > > > > > > > a similar case is in connect where there is a separate
> endpoint
> > > > > > > >  > > > > > > runtime/src/main/java/org/apache/kafka/connect/runtime/rest/
> > > > resources/
> > > > > > > ConnectorPluginsResource.java#L49-L58>
> > > > > > > > to attempt to validate a connect configuration without
> actually
> > > > > > creating
> > > > > > > > the connector.
> > > > > > > >
> > > > > > > > thanks
> > > > > > > > dan
> > > > > > > >
> > > > > > > >
> > > > > > > > On Thu, Jan 5, 2017 at 7:34 AM, Ismael Juma <
> ism...@juma.me.uk
> > >
> > > > > wrote:
> > > > > > > >
> > > > > > > > > Hi all,
> > > > > > > > >
> > > > > > > > > We've posted "KIP-108: Create Topic Policy" for discussion:
> > > > > > > > >
> > > > > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > > > > > 108%3A+Create+Topic+Policy
> > > > > > > > >
> > > > > > > > > Please take a look. Your feedback is appreciated.
> > > > > > > > >
> > > > > > > > > Thanks,
> > > > > > > > > Ismael
> > > > > > > > >
> > > > > > >
> > > > > >
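
The validation mode discussed above is exposed client-side, in the Java AdminClient that was added after this thread, as a `validateOnly` option on `createTopics`. The following is a sketch under that assumption; the bootstrap address and topic settings are placeholders. As Colin notes, passing validation does not guarantee a later creation will succeed.

{code:java}
// Dry-run a topic creation against the broker's CreateTopicPolicy without
// actually creating the topic. Assumes the Java AdminClient API.
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.CreateTopicsOptions;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.PolicyViolationException;

public class ValidateTopicCreation {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic topic = new NewTopic("my-topic", 1, (short) 1);  // placeholder
            CreateTopicsOptions options = new CreateTopicsOptions().validateOnly(true);
            try {
                // Only validation is performed on the broker; no topic is created.
                admin.createTopics(Collections.singleton(topic), options).all().get();
                System.out.println("Passed validation (creation may still fail later).");
            } catch (ExecutionException e) {
                if (e.getCause() instanceof PolicyViolationException)
                    // The per-topic error message discussed in the thread surfaces here.
                    System.out.println("Rejected by policy: " + e.getCause().getMessage());
                else
                    throw e;
            }
        }
    }
}
{code}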

[jira] [Commented] (KAFKA-5339) Transactions system test with hard broker bounces fails sporadically

2017-05-30 Thread Apurva Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030147#comment-16030147
 ] 

Apurva Mehta commented on KAFKA-5339:
-

Another puzzling one: 

{noformat}
 21:02:37,450] ERROR Uncaught error in kafka producer I/O thread:  
(org.apache.kafka.clients.producer.internals.Sender)
java.lang.NullPointerException
at 
org.apache.kafka.clients.producer.internals.Sender.maybeSendTransactionalRequest(Sender.java:305)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:193)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:154)
at java.lang.Thread.run(Thread.java:748)
[2017-05-30 21:03:00,433] INFO Closing the Kafka producer with timeoutMillis = 
9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
{noformat}

This corresponds to this line in the Sender at the time the test was run: 
{code:java}
if (nextRequestHandler.isEndTxn() &&
        transactionManager.isCompletingTransaction() &&
        accumulator.hasUnflushedBatches()) {
    ...
}
{code}

It should be impossible to hit an NPE there, because we already null-check both 
`nextRequestHandler` and `transactionManager` before reaching that line.

> Transactions system test with hard broker bounces fails sporadically
> 
>
> Key: KAFKA-5339
> URL: https://issues.apache.org/jira/browse/KAFKA-5339
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> The transactions hard bounce test occasionally fails because the 
> transactional message copy just seems to hang. In one of the client logs, I 
> noticed: 
> {noformat}
> [2017-05-27 20:36:12,596] WARN Got error produce response with correlation id 
> 124 on topic-partition output-topic-0, retrying (2147483646 attempts left). 
> Error: NOT_LEADER_FOR_PARTITION 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2017-05-27 20:36:15,386] ERROR Uncaught error in kafka producer I/O thread:  
> (org.apache.kafka.clients.producer.internals.Sender)
> java.lang.NullPointerException
> at 
> org.apache.kafka.clients.producer.internals.TransactionManager$1.compare(TransactionManager.java:146)
> at 
> org.apache.kafka.clients.producer.internals.TransactionManager$1.compare(TransactionManager.java:143)
> at 
> java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:721)
> at java.util.PriorityQueue.siftDown(PriorityQueue.java:687)
> at java.util.PriorityQueue.poll(PriorityQueue.java:595)
> at 
> org.apache.kafka.clients.producer.internals.TransactionManager.nextRequestHandler(TransactionManager.java:351)
> at 
> org.apache.kafka.clients.producer.internals.Sender.maybeSendTransactionalRequest(Sender.java:303)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:193)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:154)
> at java.lang.Thread.run(Thread.java:748)
> [2017-05-27 20:36:52,007] INFO Closing the Kafka producer with timeoutMillis 
> = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
> [2017-05-27 20:36:52,036] INFO Marking the coordinator knode02:9092 (id: 
> 2147483645 rack: null) dead for group transactions-test-consumer-group 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> root@7dcd60017519:/opt/kafka-dev/results/latest/TransactionsTest/test_transactions/failure_mode=hard_bounce.bounce_target=brokers/1#
> {noformat}
> This suggests that the client has gotten into a bad state, which is why it 
> stops processing messages and causes the test to fail.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5332) When resize the index file, maybe caused the content disappear

2017-05-30 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030120#comment-16030120
 ] 

Guozhang Wang commented on KAFKA-5332:
--

This does look like a bug. [~junrao], could you take a look at the PR?

> When resize the index file, maybe caused the content disappear
> --
>
> Key: KAFKA-5332
> URL: https://issues.apache.org/jira/browse/KAFKA-5332
> Project: Kafka
>  Issue Type: Bug
>  Components: core, log
>Affects Versions: 0.10.2.1
>Reporter: xuzq
>
> Resizing the index file can cause its contents to disappear.
> While the Kafka server is running, if someone manually removes the index file 
> from disk and the function AbstractIndex.resize(newSize: Int) is then 
> triggered, it creates a new .index file of size roundedNewSize whose content 
> is empty.
> After this, the contents of the mmap are also empty.
> Looking up the position for a specific offset then always returns zero, so 
> the index file no longer helps to locate positions.
> I think that if the ".index" file does not exist, we should copy the contents 
> of the old "mmap" into the new "mmap" to avoid ending up with an empty file.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

