[jira] [Commented] (KAFKA-4164) Kafka produces excessive logs when publishing message to non-existent topic

2016-09-15 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15495450#comment-15495450
 ] 

Manikumar Reddy commented on KAFKA-4164:


This is a valid concern. Currently the metadata TimeoutException doesn't include 
the underlying cause (UNKNOWN_TOPIC_OR_PARTITION, LEADER_NOT_AVAILABLE, etc.). 
It may be tricky to add this, since the wait and the metadata fetch run in 
different threads.
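The threading difficulty Manikumar describes can be illustrated with a small sketch (this is not Kafka's actual code; the class and method names here are invented for illustration): the network thread records the last metadata error in a shared cell, and the sending thread, when its wait times out, attaches that recorded error to the timeout it raises instead of losing it.

```python
import threading

class MetadataWaiter:
    """Conceptual sketch: the fetch (network) thread records the last
    metadata error so the waiting (sending) thread can report it as the
    cause of the timeout, instead of raising a bare TimeoutError."""

    def __init__(self):
        self._ready = threading.Event()
        self._last_error = None  # e.g. "UNKNOWN_TOPIC_OR_PARTITION"

    def record_error(self, error):
        # Called from the network thread on each failed metadata response.
        self._last_error = error

    def mark_ready(self):
        # Called from the network thread once metadata is available.
        self._ready.set()

    def await_metadata(self, timeout):
        # Called from the sending thread; surfaces the recorded cause.
        if not self._ready.wait(timeout):
            raise TimeoutError(
                f"Failed to update metadata after {timeout}s "
                f"(last error: {self._last_error})")

waiter = MetadataWaiter()

def network_thread():
    waiter.record_error("UNKNOWN_TOPIC_OR_PARTITION")

t = threading.Thread(target=network_thread)
t.start()
t.join()
try:
    waiter.await_metadata(0.05)
except TimeoutError as e:
    print(e)  # the message now names the underlying cause
```

The shared cell is what makes this tricky in practice: the error is produced asynchronously, so the waiting thread can only report the *last observed* cause, which may already be stale by the time the timeout fires.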

> Kafka produces excessive logs when publishing message to non-existent topic
> ---
>
> Key: KAFKA-4164
> URL: https://issues.apache.org/jira/browse/KAFKA-4164
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vimal Sharma
>
> When a message is published to a topic that does not already exist (and 
> auto.create.topics.enable is set to false), Kafka produces excessive WARN 
> logs stating that metadata could not be fetched. Below are the logs:
> 2016-08-22 06:43:47,655 WARN [kafka-producer-network-thread | producer-1]: 
> clients.NetworkClient (NetworkClient.java:handleResponse(600)) - Error while 
> fetching metadata with correlation id 1177 :
> {ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION}
> 2016-08-22 06:43:47,756 WARN [kafka-producer-network-thread | producer-1]: 
> clients.NetworkClient (NetworkClient.java:handleResponse(600)) - Error while 
> fetching metadata with correlation id 1178 : 
> {ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION}
> 2016-08-22 06:43:47,858 WARN [kafka-producer-network-thread | producer-1]: 
> clients.NetworkClient (NetworkClient.java:handleResponse(600)) - Error while 
> fetching metadata with correlation id 1179 :
> {ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION}
> 2016-08-22 06:43:47,961 WARN [kafka-producer-network-thread | producer-1]: 
> clients.NetworkClient (NetworkClient.java:handleResponse(600)) - Error while 
> fetching metadata with correlation id 1180 : 
> {ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION}
> 2016-08-22 06:43:48,062 WARN [kafka-producer-network-thread | producer-1]: 
> clients.NetworkClient (NetworkClient.java:handleResponse(600)) - Error while 
> fetching metadata with correlation id 1181 :
> {ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION}
> 2016-08-22 06:43:48,165 WARN [kafka-producer-network-thread | producer-1]: 
> clients.NetworkClient (NetworkClient.java:handleResponse(600)) - Error while 
> fetching metadata with correlation id 1182 : 
> {ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION}
> 2016-08-22 06:43:48,265 WARN [kafka-producer-network-thread | producer-1]: 
> clients.NetworkClient (NetworkClient.java:handleResponse(600)) - Error while 
> fetching metadata with correlation id 1183 :
> {ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION}
> 2016-08-22 06:43:48,366 WARN [kafka-producer-network-thread | producer-1]: 
> clients.NetworkClient (NetworkClient.java:handleResponse(600)) - Error while 
> fetching metadata with correlation id 1184 : 
> {ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION}
> 2016-08-22 06:43:48,467 WARN [kafka-producer-network-thread | producer-1]: 
> clients.NetworkClient (NetworkClient.java:handleResponse(600)) - Error while 
> fetching metadata with correlation id 1185 :
> {ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION}
> The error is not communicated to the caller, so if these logs are suppressed 
> by setting the Kafka log level to ERROR, there is no way to debug the issue. It 
> would be helpful if the error message (for example 
> {ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION}) could be communicated to the caller.
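The flood of identical WARN lines above (one per retried metadata request, roughly every 100 ms) is the kind of output that a deduplicating log filter can tame without hiding the first occurrence. A minimal sketch, using Python's standard `logging.Filter` as a stand-in for whatever the client's logging framework offers (the window length and logger name are illustrative):

```python
import logging
import time

class DeduplicatingFilter(logging.Filter):
    """Sketch: emit the first occurrence of a message, then drop repeats
    of the exact same message for a configurable window, so per-retry
    metadata warnings don't flood the log."""

    def __init__(self, window_seconds=30.0):
        super().__init__()
        self.window = window_seconds
        self._last_seen = {}  # formatted message -> time of last emission

    def filter(self, record):
        now = time.monotonic()
        key = record.getMessage()
        last = self._last_seen.get(key)
        if last is not None and now - last < self.window:
            return False  # suppress the duplicate
        self._last_seen[key] = now
        return True

logger = logging.getLogger("NetworkClient")
logger.addFilter(DeduplicatingFilter())
```

Note this addresses only the log volume; the issue's actual request, surfacing the error to the caller, still requires a client-side change.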



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1464) Add a throttling option to the Kafka replication tool

2016-09-15 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1464:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1776
[https://github.com/apache/kafka/pull/1776]

> Add a throttling option to the Kafka replication tool
> -
>
> Key: KAFKA-1464
> URL: https://issues.apache.org/jira/browse/KAFKA-1464
> Project: Kafka
>  Issue Type: New Feature
>  Components: replication
>Affects Versions: 0.8.0
>Reporter: mjuarez
>Assignee: Ben Stopford
>Priority: Minor
>  Labels: replication, replication-tools
> Fix For: 0.10.1.0
>
>
> When performing replication on new nodes of a Kafka cluster, the replication 
> process will use all available resources to replicate as fast as possible. 
> This causes performance issues (mostly disk I/O and sometimes network 
> bandwidth) in a production environment, where you are trying to serve 
> downstream applications while performing maintenance on the Kafka cluster.
> An option to throttle the replication to a specific rate (in either MB/s or 
> activities/second) would help production systems to better handle maintenance 
> tasks while still serving downstream applications.
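The throttling requested here (and delivered by KIP-73 per the resolution above) is commonly built on a token bucket: replication fetches consume tokens at the configured byte rate, and when the bucket is empty the fetcher must back off. A minimal sketch of that mechanism (illustrative only, not Kafka's implementation):

```python
class TokenBucket:
    """Sketch of a MB/s replication throttle: each fetch of `nbytes`
    consumes tokens; tokens refill continuously at the configured rate,
    capped at a burst size."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def try_acquire(self, nbytes, now):
        # Refill proportionally to elapsed time, capped at capacity.
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # caller should delay the fetch

# e.g. throttle replication to ~10 MB/s with a 1 MB burst
bucket = TokenBucket(rate_bytes_per_sec=10 * 1024 * 1024,
                     burst_bytes=1024 * 1024)
```

A time-based refill like this smooths the rate rather than enforcing it per fixed interval, which is what lets production traffic keep its share of disk and network bandwidth during a rebalance.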





[GitHub] kafka pull request #1776: KIP-73 - Replication Quotas

2016-09-15 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1776


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Issue Comment Deleted] (KAFKA-4126) No relevant log when the topic is non-existent

2016-09-15 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-4126:
---
Comment: was deleted

(was: Yes. Another issue related to excessive producer WARN logging is 
KAFKA-4164.)

> No relevant log when the topic is non-existent
> --
>
> Key: KAFKA-4126
> URL: https://issues.apache.org/jira/browse/KAFKA-4126
> Project: Kafka
>  Issue Type: Bug
>Reporter: Balázs Barnabás
>Assignee: Vahid Hashemian
>Priority: Minor
>
> When a producer sends a ProducerRecord into a Kafka topic that doesn't 
> exist, there is no relevant debug/error log that points out the error.





[jira] [Commented] (KAFKA-4126) No relevant log when the topic is non-existent

2016-09-15 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15495420#comment-15495420
 ] 

Manikumar Reddy commented on KAFKA-4126:


Yes. Another issue related to excessive producer WARN logging is 
KAFKA-4164.

> No relevant log when the topic is non-existent
> --
>
> Key: KAFKA-4126
> URL: https://issues.apache.org/jira/browse/KAFKA-4126
> Project: Kafka
>  Issue Type: Bug
>Reporter: Balázs Barnabás
>Assignee: Vahid Hashemian
>Priority: Minor
>
> When a producer sends a ProducerRecord into a Kafka topic that doesn't 
> exist, there is no relevant debug/error log that points out the error.





[jira] [Commented] (KAFKA-4126) No relevant log when the topic is non-existent

2016-09-15 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15495422#comment-15495422
 ] 

Manikumar Reddy commented on KAFKA-4126:


Yes. Another issue related to excessive producer WARN logging is 
KAFKA-4164.

> No relevant log when the topic is non-existent
> --
>
> Key: KAFKA-4126
> URL: https://issues.apache.org/jira/browse/KAFKA-4126
> Project: Kafka
>  Issue Type: Bug
>Reporter: Balázs Barnabás
>Assignee: Vahid Hashemian
>Priority: Minor
>
> When a producer sends a ProducerRecord into a Kafka topic that doesn't 
> exist, there is no relevant debug/error log that points out the error.





[jira] [Assigned] (KAFKA-2700) delete topic should remove the corresponding ACL and configs

2016-09-15 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy reassigned KAFKA-2700:
--

Assignee: Manikumar Reddy  (was: Parth Brahmbhatt)

> delete topic should remove the corresponding ACL and configs
> 
>
> Key: KAFKA-2700
> URL: https://issues.apache.org/jira/browse/KAFKA-2700
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Manikumar Reddy
>
> After a topic is successfully deleted, we should also remove any ACLs, 
> configs, and perhaps committed offsets associated with the topic.





Build failed in Jenkins: kafka-trunk-jdk8 #885

2016-09-15 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-4131; Multiple Regex KStream-Consumers cause Null pointer

--
[...truncated 3507 lines...]

kafka.security.auth.OperationTest > testFromString PASSED

kafka.security.auth.PermissionTypeTest > testFromString STARTED

kafka.security.auth.PermissionTypeTest > testFromString PASSED
:test_core_2_11
Building project 'core' with Scala version 2.11.8
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:clients:compileTestJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processTestResources UP-TO-DATE
:kafka-trunk-jdk8:clients:testClasses UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
support was removed in 8.0

:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:505:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (offsetAndMetadata.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:40:
 no valid targets for annotation on variable _file - it is discarded unused. 
You may specify targets with meta-annotations, e.g. @(volatile @param)
abstract class AbstractIndex[K, V](@volatile private[this] var _file: File, val 
baseOffset: Long, val maxIndexSize: Int = -1)
^
:309:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (partitionData.timestamp == 
OffsetCommitRequest.DEFAULT_TIMESTAMP)
 ^
:248:
 method readLine in class DeprecatedConsole is deprecated: Use the method in 
scala.io.StdIn
Console.readLine().equalsIgnoreCase("y")
^
:377:
 method readLine in class DeprecatedConsole is deprecated: Use the method in 
scala.io.StdIn
if (!Console.readLine().equalsIgnoreCase("y")) {
 ^
:93:
 class ProducerConfig in package producer is deprecated: This class has been 
deprecated and will be removed in a future release. Please use 
org.apache.kafka.clients.producer.ProducerConfig instead.
val producerConfig = new ProducerConfig(props)
 ^
:94:
 method fetchTopicMetadata in object ClientUtils is deprecated: This method has 
been deprecated and will be removed in a future release.
fetchTopicMetadata(topics, brokers, producerConfig, correlationId)
^

Jenkins build is back to normal : kafka-trunk-jdk7 #1542

2016-09-15 Thread Apache Jenkins Server
See 



Re: [DISCUSS] KIP-48 Support for delegation tokens as an authentication mechanism

2016-09-15 Thread Harsha Chintalapani
The only pending update for the KIP is to write up the protocol changes
like we did for KIP-4. I'll update the wiki.

On Thu, Sep 15, 2016 at 4:27 PM Ashish Singh  wrote:

> I think we decided not to support secret rotation; I guess this can be
> stated clearly in the KIP. Also, more details on how clients will perform
> token distribution and what the CLI will look like would be helpful.
>
> On Thu, Sep 15, 2016 at 3:20 PM, Gwen Shapira  wrote:
>
> > Hi Guys,
> >
> > This discussion was dead for a while. Are there still contentious
> > points? If not, why are there no votes?
> >
> > On Tue, Aug 23, 2016 at 1:26 PM, Jun Rao  wrote:
> > > Ashish,
> > >
> > > Yes, I will send out a KIP invite for next week to discuss KIP-48 and
> > other
> > > remaining KIPs.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Tue, Aug 23, 2016 at 1:22 PM, Ashish Singh 
> > wrote:
> > >
> > >> Thanks Harsha!
> > >>
> > >> Jun, can we add KIP-48 to next KIP hangout's agenda. Also, we did not
> > >> actually make a call on when we should have next KIP call. As there
> are
> > a
> > >> few outstanding KIPs that could not be discussed this week, can we
> have
> > a
> > >> KIP hangout call next week?
> > >>
> > >> On Tue, Aug 23, 2016 at 1:10 PM, Harsha Chintalapani  >
> > >> wrote:
> > >>
> > >>> Ashish,
> > >>> Yes we are working on it. Lets discuss in the next KIP
> meeting.
> > >>> I'll join.
> > >>> -Harsha
> > >>>
> > >>> On Tue, Aug 23, 2016 at 12:07 PM Ashish Singh 
> > >>> wrote:
> > >>>
> > >>> > Hello Harsha,
> > >>> >
> > >>> > Are you still working on this? Wondering if we can discuss this in
> > next
> > >>> KIP
> > >>> > meeting, if you can join.
> > >>> >
> > >>> > On Mon, Jul 18, 2016 at 9:51 AM, Harsha Chintalapani <
> > ka...@harsha.io>
> > >>> > wrote:
> > >>> >
> > >>> > > Hi Grant,
> > >>> > >   We are working on it. Will add the details to KIP about
> > the
> > >>> > > request protocol.
> > >>> > >
> > >>> > > Thanks,
> > >>> > > Harsha
> > >>> > >
> > >>> > > On Mon, Jul 18, 2016 at 6:50 AM Grant Henke  >
> > >>> wrote:
> > >>> > >
> > >>> > > > Hi Parth,
> > >>> > > >
> > >>> > > > Are you still working on this? If you need any help please
> don't
> > >>> > hesitate
> > >>> > > > to ask.
> > >>> > > >
> > >>> > > > Thanks,
> > >>> > > > Grant
> > >>> > > >
> > >>> > > > On Thu, Jun 30, 2016 at 4:35 PM, Jun Rao 
> > wrote:
> > >>> > > >
> > >>> > > > > Parth,
> > >>> > > > >
> > >>> > > > > Thanks for the reply.
> > >>> > > > >
> > >>> > > > > It makes sense to only allow the renewal by users that
> > >>> authenticated
> > >>> > > > using
> > >>> > > > > *non* delegation token mechanism. Then, should we make the
> > >>> renewal a
> > >>> > > > list?
> > >>> > > > > For example, in the case of rest proxy, it will be useful for
> > >>> every
> > >>> > > > > instance of rest proxy to be able to renew the tokens.
> > >>> > > > >
> > >>> > > > > It would be clearer if we can document the request protocol
> > like
> > >>> > > > >
> > >>> > > > >
> > >>> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >>> > > 4+-+Command+line+and+centralized+administrative+operations#KIP-4-
> > >>> > > Commandlineandcentralizedadministrativeoperations-
> > >>> > > CreateTopicsRequest(KAFKA-2945):(VotedandPlannedforin0.10.1.0)
> > >>> > > > > .
> > >>> > > > >
> > >>> > > > > It would also be useful to document the client APIs.
> > >>> > > > >
> > >>> > > > > Thanks,
> > >>> > > > >
> > >>> > > > > Jun
> > >>> > > > >
> > >>> > > > > On Tue, Jun 28, 2016 at 2:55 PM, parth brahmbhatt <
> > >>> > > > > brahmbhatt.pa...@gmail.com> wrote:
> > >>> > > > >
> > >>> > > > > > Hi,
> > >>> > > > > >
> > >>> > > > > > I am suggesting that we will only allow the renewal by
> users
> > >>> that
> > >>> > > > > > authenticated using *non* delegation token mechanism. For
> > >>> example,
> > >>> > If
> > >>> > > > > user
> > >>> > > > > > Alice authenticated using kerberos and requested delegation
> > >>> tokens,
> > >>> > > > only
> > >>> > > > > > user Alice authenticated via non delegation token mechanism
> > can
> > >>> > > renew.
> > >>> > > > > > Clients that have  access to delegation tokens can not
> issue
> > >>> > renewal
> > >>> > > > > > request for renewing their own token and this is primarily
> > >>> > important
> > >>> > > to
> > >>> > > > > > reduce the time window for which a compromised token will
> be
> > >>> valid.
> > >>> > > > > >
> > >>> > > > > > To clarify, Yes any authenticated user can request
> delegation
> > >>> > tokens
> > >>> > > > but
> > >>> > > > > > even here I would recommend to avoid creating a chain
> where a
> > >>> > client
> > >>> > > > > > authenticated via delegation token request for more
> > delegation
> > >>> > > tokens.
> > >>> > > > > > Basically anyone can request delegation token, as long as
> > they
> > >>> > > 
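The renewal rule Parth describes above (only principals authenticated via a *non* delegation-token mechanism may renew, possibly extended to a list of renewers as Jun suggests) reduces to a small policy check. A hedged sketch with invented names, not the KIP's actual API:

```python
from dataclasses import dataclass

DELEGATION_TOKEN = "DELEGATION_TOKEN"

@dataclass
class Session:
    principal: str
    auth_mechanism: str  # e.g. "GSSAPI", "PLAIN", "DELEGATION_TOKEN"

def may_renew(session, token_owner, renewers):
    """A client that authenticated *via* a delegation token may never renew,
    even its own token; the owner (or a listed renewer, e.g. each REST proxy
    instance) must have authenticated some other way. This caps the lifetime
    of a compromised token."""
    if session.auth_mechanism == DELEGATION_TOKEN:
        return False
    return session.principal == token_owner or session.principal in renewers

# Alice authenticated with Kerberos can renew her own token...
assert may_renew(Session("alice", "GSSAPI"), "alice", []) is True
# ...but a client holding only the token cannot extend it.
assert may_renew(Session("alice", DELEGATION_TOKEN), "alice", []) is False
```

The `renewers` list is what Jun's rest-proxy example motivates: every proxy instance can appear there and keep tokens alive without any of them holding Alice's primary credentials.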

[jira] [Commented] (KAFKA-4131) Multiple Regex KStream-Consumers cause Null pointer exception in addRawRecords in RecordQueue class

2016-09-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15494913#comment-15494913
 ] 

ASF GitHub Bot commented on KAFKA-4131:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1843


> Multiple Regex KStream-Consumers cause Null pointer exception in 
> addRawRecords in RecordQueue class
> ---
>
> Key: KAFKA-4131
> URL: https://issues.apache.org/jira/browse/KAFKA-4131
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
> Environment: Servers: Confluent Distribution 3.0.0 (i.e. kafka 0.10.0 
> release)
> Client: Kafka-streams and Kafka-client... commit: 
> 6fb33afff976e467bfa8e0b29eb82770a2a3aaec
>Reporter: David J. Garcia
>Assignee: Bill Bejeck
> Fix For: 0.10.1.0
>
>
> When you start two consumer processes with a regex topic (with 2 or more
> partitions for the matching topics), the second (i.e. nonleader) consumer
> will fail with a null pointer exception.
> Exception in thread "StreamThread-4" java.lang.NullPointerException
>  at org.apache.kafka.streams.processor.internals.
> RecordQueue.addRawRecords(RecordQueue.java:78)
>  at org.apache.kafka.streams.processor.internals.
> PartitionGroup.addRawRecords(PartitionGroup.java:117)
>  at org.apache.kafka.streams.processor.internals.
> StreamTask.addRecords(StreamTask.java:139)
>  at org.apache.kafka.streams.processor.internals.
> StreamThread.runLoop(StreamThread.java:299)
>  at org.apache.kafka.streams.processor.internals.
> StreamThread.run(StreamThread.java:208)
> The issue may be in the TopologyBuilder line 832:
> String[] topics = (sourceNodeFactory.pattern != null) ?
> sourceNodeFactory.getTopics(subscriptionUpdates.getUpdates()) :
> sourceNodeFactory.getTopics();
> Because the 2nd consumer joins as a follower, “getUpdates” returns an
> empty collection and the regular expression doesn’t get applied to any
> topics.
> Steps to Reproduce:
> 1.) Create at least two topics with at least 2 partitions each, and start 
> sending messages to them.
> 2.) Start a single-threaded Regex KStream consumer (i.e. it becomes the leader).
> 3.) Start a new instance of this consumer (i.e. it should receive some of the 
> partitions).
> The second consumer will die with the above exception.
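The failure mode around the TopologyBuilder logic quoted above can be modeled in a few lines: a regex source resolves its topics from the subscription updates, and on a freshly joined follower those updates are still empty. A sketch (Python stand-in for the Java code; `resolve_topics` is an invented name) showing that the resolution should yield an empty list rather than something null-like:

```python
import re

def resolve_topics(pattern, static_topics, subscription_updates):
    """Sketch of the topic-resolution step: a pattern-based source takes
    its topics from the subscription updates seen so far, which can be
    empty on a newly joined (follower) instance. Returning an empty list,
    never None, lets downstream record-queue code handle the case instead
    of hitting a NullPointerException."""
    if pattern is not None:
        return [t for t in subscription_updates
                if re.fullmatch(pattern, t)]
    return list(static_topics)

# Leader has already observed the updates; follower joins with none yet.
assert resolve_topics(r"topic-.*", [], ["topic-a", "topic-b"]) == ["topic-a", "topic-b"]
assert resolve_topics(r"topic-.*", [], []) == []  # follower: empty, not None
```

This mirrors the report's diagnosis: the regex itself is fine, but it is applied to a collection that the follower has not yet populated.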





[GitHub] kafka pull request #1843: KAFKA-4131 :Multiple Regex KStream-Consumers cause...

2016-09-15 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1843




[jira] [Updated] (KAFKA-4131) Multiple Regex KStream-Consumers cause Null pointer exception in addRawRecords in RecordQueue class

2016-09-15 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4131:
-
   Resolution: Fixed
Fix Version/s: 0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1843
[https://github.com/apache/kafka/pull/1843]

> Multiple Regex KStream-Consumers cause Null pointer exception in 
> addRawRecords in RecordQueue class
> ---
>
> Key: KAFKA-4131
> URL: https://issues.apache.org/jira/browse/KAFKA-4131
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
> Environment: Servers: Confluent Distribution 3.0.0 (i.e. kafka 0.10.0 
> release)
> Client: Kafka-streams and Kafka-client... commit: 
> 6fb33afff976e467bfa8e0b29eb82770a2a3aaec
>Reporter: David J. Garcia
>Assignee: Bill Bejeck
> Fix For: 0.10.1.0
>
>
> When you start two consumer processes with a regex topic (with 2 or more
> partitions for the matching topics), the second (i.e. nonleader) consumer
> will fail with a null pointer exception.
> Exception in thread "StreamThread-4" java.lang.NullPointerException
>  at org.apache.kafka.streams.processor.internals.
> RecordQueue.addRawRecords(RecordQueue.java:78)
>  at org.apache.kafka.streams.processor.internals.
> PartitionGroup.addRawRecords(PartitionGroup.java:117)
>  at org.apache.kafka.streams.processor.internals.
> StreamTask.addRecords(StreamTask.java:139)
>  at org.apache.kafka.streams.processor.internals.
> StreamThread.runLoop(StreamThread.java:299)
>  at org.apache.kafka.streams.processor.internals.
> StreamThread.run(StreamThread.java:208)
> The issue may be in the TopologyBuilder line 832:
> String[] topics = (sourceNodeFactory.pattern != null) ?
> sourceNodeFactory.getTopics(subscriptionUpdates.getUpdates()) :
> sourceNodeFactory.getTopics();
> Because the 2nd consumer joins as a follower, “getUpdates” returns an
> empty collection and the regular expression doesn’t get applied to any
> topics.
> Steps to Reproduce:
> 1.) Create at least two topics with at least 2 partitions each, and start 
> sending messages to them.
> 2.) Start a single-threaded Regex KStream consumer (i.e. it becomes the leader).
> 3.) Start a new instance of this consumer (i.e. it should receive some of the 
> partitions).
> The second consumer will die with the above exception.





[jira] [Updated] (KAFKA-4126) No relevant log when the topic is non-existent

2016-09-15 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-4126:
---
Status: Patch Available  (was: Open)

> No relevant log when the topic is non-existent
> --
>
> Key: KAFKA-4126
> URL: https://issues.apache.org/jira/browse/KAFKA-4126
> Project: Kafka
>  Issue Type: Bug
>Reporter: Balázs Barnabás
>Assignee: Vahid Hashemian
>Priority: Minor
>
> When a producer sends a ProducerRecord into a Kafka topic that doesn't 
> exist, there is no relevant debug/error log that points out the error.





[jira] [Commented] (KAFKA-4126) No relevant log when the topic is non-existent

2016-09-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15494874#comment-15494874
 ] 

ASF GitHub Bot commented on KAFKA-4126:
---

GitHub user vahidhashemian reopened a pull request:

https://github.com/apache/kafka/pull/1863

KAFKA-4126: Log a server-side warning when a record is sent to a missing 
topic with auto create disabled



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-4126

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1863.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1863


commit a3d3f588d743b5457b507b8107ba04f68f9812db
Author: Vahid Hashemian 
Date:   2016-09-15T23:27:51Z

KAFKA-4126: Add a server-side warning message when a record is sent to a 
non-existing topic with auto create disabled
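The behavior this commit describes can be sketched as a guard in the broker's metadata handling (illustrative Python, not the PR's actual Scala change; the function name and return values are invented):

```python
import logging

logger = logging.getLogger("kafka.server")

def handle_topic_metadata(topic, known_topics, auto_create_enabled):
    """Sketch of the server-side warning: when a client references a
    topic the broker doesn't know and auto topic creation is disabled,
    log why the request cannot succeed instead of failing silently."""
    if topic in known_topics:
        return "OK"
    if auto_create_enabled:
        return "CREATING"
    logger.warning(
        "Received a request for non-existent topic '%s' while auto "
        "topic creation is disabled", topic)
    return "UNKNOWN_TOPIC_OR_PARTITION"
```

A single server-side WARN per unknown topic gives operators a trail even when client-side logging (see KAFKA-4164) has been raised to ERROR.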




> No relevant log when the topic is non-existent
> --
>
> Key: KAFKA-4126
> URL: https://issues.apache.org/jira/browse/KAFKA-4126
> Project: Kafka
>  Issue Type: Bug
>Reporter: Balázs Barnabás
>Assignee: Vahid Hashemian
>Priority: Minor
>
> When a producer sends a ProducerRecord into a Kafka topic that doesn't 
> exist, there is no relevant debug/error log that points out the error.





[GitHub] kafka pull request #1863: KAFKA-4126: Log a server-side warning when a recor...

2016-09-15 Thread vahidhashemian
GitHub user vahidhashemian reopened a pull request:

https://github.com/apache/kafka/pull/1863

KAFKA-4126: Log a server-side warning when a record is sent to a missing 
topic with auto create disabled



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-4126

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1863.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1863


commit a3d3f588d743b5457b507b8107ba04f68f9812db
Author: Vahid Hashemian 
Date:   2016-09-15T23:27:51Z

KAFKA-4126: Add a server-side warning message when a record is sent to a 
non-existing topic with auto create disabled






[jira] [Commented] (KAFKA-4126) No relevant log when the topic is non-existent

2016-09-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15494870#comment-15494870
 ] 

ASF GitHub Bot commented on KAFKA-4126:
---

Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/1863


> No relevant log when the topic is non-existent
> --
>
> Key: KAFKA-4126
> URL: https://issues.apache.org/jira/browse/KAFKA-4126
> Project: Kafka
>  Issue Type: Bug
>Reporter: Balázs Barnabás
>Assignee: Vahid Hashemian
>Priority: Minor
>
> When a producer sends a ProducerRecord into a Kafka topic that doesn't 
> exist, there is no relevant debug/error log that points out the error.





[GitHub] kafka pull request #1863: KAFKA-4126: Log a server-side warning when a recor...

2016-09-15 Thread vahidhashemian
Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/1863




[GitHub] kafka pull request #1863: Log a server-side warning when a record is sent to...

2016-09-15 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/1863

Log a server-side warning when a record is sent to a missing topic with 
auto create disabled



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-4126

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1863.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1863


commit f742643af1d2835ebc846fa238a3737054804b7d
Author: Vahid Hashemian 
Date:   2016-09-15T23:27:51Z

Add a server-side warning message when a record is sent to a non-existing 
topic with auto create disabled






Re: [DISCUSS] KIP-48 Support for delegation tokens as an authentication mechanism

2016-09-15 Thread Ashish Singh
I think we decided not to support secret rotation; I guess this can be
stated clearly in the KIP. Also, more details on how clients will perform
token distribution and what the CLI will look like would be helpful.

On Thu, Sep 15, 2016 at 3:20 PM, Gwen Shapira  wrote:

> Hi Guys,
>
> This discussion was dead for a while. Are there still contentious
> points? If not, why are there no votes?
>
> On Tue, Aug 23, 2016 at 1:26 PM, Jun Rao  wrote:
> > Ashish,
> >
> > Yes, I will send out a KIP invite for next week to discuss KIP-48 and
> other
> > remaining KIPs.
> >
> > Thanks,
> >
> > Jun
> >
> > On Tue, Aug 23, 2016 at 1:22 PM, Ashish Singh 
> wrote:
> >
> >> Thanks Harsha!
> >>
> >> Jun, can we add KIP-48 to next KIP hangout's agenda. Also, we did not
> >> actually make a call on when we should have next KIP call. As there are
> a
> >> few outstanding KIPs that could not be discussed this week, can we have
> a
> >> KIP hangout call next week?
> >>
> >> On Tue, Aug 23, 2016 at 1:10 PM, Harsha Chintalapani 
> >> wrote:
> >>
> >>> Ashish,
> >>> Yes we are working on it. Lets discuss in the next KIP meeting.
> >>> I'll join.
> >>> -Harsha
> >>>
> >>> On Tue, Aug 23, 2016 at 12:07 PM Ashish Singh 
> >>> wrote:
> >>>
> >>> > Hello Harsha,
> >>> >
> >>> > Are you still working on this? Wondering if we can discuss this in
> next
> >>> KIP
> >>> > meeting, if you can join.
> >>> >
> >>> > On Mon, Jul 18, 2016 at 9:51 AM, Harsha Chintalapani <
> ka...@harsha.io>
> >>> > wrote:
> >>> >
> >>> > > Hi Grant,
> >>> > >   We are working on it. Will add the details to KIP about
> the
> >>> > > request protocol.
> >>> > >
> >>> > > Thanks,
> >>> > > Harsha
> >>> > >
> >>> > > On Mon, Jul 18, 2016 at 6:50 AM Grant Henke 
> >>> wrote:
> >>> > >
> >>> > > > Hi Parth,
> >>> > > >
> >>> > > > Are you still working on this? If you need any help please don't
> >>> > hesitate
> >>> > > > to ask.
> >>> > > >
> >>> > > > Thanks,
> >>> > > > Grant
> >>> > > >
> >>> > > > On Thu, Jun 30, 2016 at 4:35 PM, Jun Rao 
> wrote:
> >>> > > >
> >>> > > > > Parth,
> >>> > > > >
> >>> > > > > Thanks for the reply.
> >>> > > > >
> >>> > > > > It makes sense to only allow the renewal by users that
> >>> authenticated
> >>> > > > using
> >>> > > > > *non* delegation token mechanism. Then, should we make the
> >>> renewal a
> >>> > > > list?
> >>> > > > > For example, in the case of rest proxy, it will be useful for
> >>> every
> >>> > > > > instance of rest proxy to be able to renew the tokens.
> >>> > > > >
> >>> > > > > It would be clearer if we can document the request protocol
> like
> >>> > > > >
> >>> > > > >
> >>> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >>> > > 4+-+Command+line+and+centralized+administrative+operations#KIP-4-
> >>> > > Commandlineandcentralizedadministrativeoperations-
> >>> > > CreateTopicsRequest(KAFKA-2945):(VotedandPlannedforin0.10.1.0)
> >>> > > > > .
> >>> > > > >
> >>> > > > > It would also be useful to document the client APIs.
> >>> > > > >
> >>> > > > > Thanks,
> >>> > > > >
> >>> > > > > Jun
> >>> > > > >
> >>> > > > > On Tue, Jun 28, 2016 at 2:55 PM, parth brahmbhatt <
> >>> > > > > brahmbhatt.pa...@gmail.com> wrote:
> >>> > > > >
> >>> > > > > > Hi,
> >>> > > > > >
> >>> > > > > > I am suggesting that we will only allow the renewal by users
> >>> that
> >>> > > > > > authenticated using *non* delegation token mechanism. For
> >>> example,
> if
> >>> > > > > user
> >>> > > > > > Alice authenticated using Kerberos and requested delegation
> >>> tokens,
> >>> > > > only
> >>> > > > > > user Alice authenticated via non delegation token mechanism
> can
> >>> > > renew.
> >>> > > > > > Clients that have access to delegation tokens cannot issue
> >>> > renewal
> >>> > > > > > request for renewing their own token and this is primarily
> >>> > important
> >>> > > to
> >>> > > > > > reduce the time window for which a compromised token will be
> >>> valid.
> >>> > > > > >
> >>> > > > > > To clarify, Yes any authenticated user can request delegation
> >>> > tokens
> >>> > > > but
> >>> > > > > > even here I would recommend to avoid creating a chain where a
> >>> > client
> >>> > > > > > authenticated via delegation token requests more
> delegation
> >>> > > tokens.
> >>> > > > > > Basically anyone can request delegation token, as long as
> they
> >>> > > > > authenticate
> >>> > > > > > via a non delegation token mechanism.
> >>> > > > > >
> >>> > > > > > Aren't classes listed here
> >>> > > > > > <
> >>> > > > > >
> >>> > > > >
> >>> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >>> > > 48+Delegation+token+support+for+Kafka#KIP-48Delegationtokens
> >>> upportforKaf
> >>> > > ka-PublicInterfaces
> >>> > > > > > >
> >>> > > > > > sufficient?
> >>> > > > > >
> >>> > > > > > Thanks
> >>> > > > > > Parth
> >>> > > > > >
> >>> > > > > >

Re: [DISCUSS] KIP-48 Support for delegation tokens as an authentication mechanism

2016-09-15 Thread Becket Qin
According to the meeting minutes of the KIP hangout on 8/30, it seems the KIP
wiki needs some updates?

KIP48 (delegation tokens): Harsha will update the wiki with more details on
how to use delegation tokens and how to configure it.

Not sure if that has been done or not.

On Thu, Sep 15, 2016 at 3:20 PM, Gwen Shapira  wrote:

> Hi Guys,
>
> This discussion was dead for a while. Are there still contentious
> points? If not, why are there no votes?
>
> On Tue, Aug 23, 2016 at 1:26 PM, Jun Rao  wrote:
> > Ashish,
> >
> > Yes, I will send out a KIP invite for next week to discuss KIP-48 and
> other
> > remaining KIPs.
> >
> > Thanks,
> >
> > Jun
> >
> > On Tue, Aug 23, 2016 at 1:22 PM, Ashish Singh 
> wrote:
> >
> >> Thanks Harsha!
> >>
> >> Jun, can we add KIP-48 to next KIP hangout's agenda. Also, we did not
> >> actually make a call on when we should have next KIP call. As there are
> a
> >> few outstanding KIPs that could not be discussed this week, can we have
> a
> >> KIP hangout call next week?
> >>
> >> On Tue, Aug 23, 2016 at 1:10 PM, Harsha Chintalapani 
> >> wrote:
> >>
> >>> Ashish,
> >>> Yes we are working on it. Let's discuss in the next KIP meeting.
> >>> I'll join.
> >>> -Harsha
> >>>
> >>> On Tue, Aug 23, 2016 at 12:07 PM Ashish Singh 
> >>> wrote:
> >>>
> >>> > Hello Harsha,
> >>> >
> >>> > Are you still working on this? Wondering if we can discuss this in
> next
> >>> KIP
> >>> > meeting, if you can join.
> >>> >
> >>> > On Mon, Jul 18, 2016 at 9:51 AM, Harsha Chintalapani <
> ka...@harsha.io>
> >>> > wrote:
> >>> >
> >>> > > Hi Grant,
> >>> > >   We are working on it. Will add the details to KIP about
> the
> >>> > > request protocol.
> >>> > >
> >>> > > Thanks,
> >>> > > Harsha
> >>> > >
> >>> > > On Mon, Jul 18, 2016 at 6:50 AM Grant Henke 
> >>> wrote:
> >>> > >
> >>> > > > Hi Parth,
> >>> > > >
> >>> > > > Are you still working on this? If you need any help please don't
> >>> > hesitate
> >>> > > > to ask.
> >>> > > >
> >>> > > > Thanks,
> >>> > > > Grant
> >>> > > >
> >>> > > > On Thu, Jun 30, 2016 at 4:35 PM, Jun Rao 
> wrote:
> >>> > > >
> >>> > > > > Parth,
> >>> > > > >
> >>> > > > > Thanks for the reply.
> >>> > > > >
> >>> > > > > It makes sense to only allow the renewal by users that
> >>> authenticated
> >>> > > > using
> >>> > > > > *non* delegation token mechanism. Then, should we make the
> >>> renewal a
> >>> > > > list?
> >>> > > > > For example, in the case of rest proxy, it will be useful for
> >>> every
> >>> > > > > instance of rest proxy to be able to renew the tokens.
> >>> > > > >
> >>> > > > > It would be clearer if we can document the request protocol
> like
> >>> > > > >
> >>> > > > >
> >>> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >>> > > 4+-+Command+line+and+centralized+administrative+operations#KIP-4-
> >>> > > Commandlineandcentralizedadministrativeoperations-
> >>> > > CreateTopicsRequest(KAFKA-2945):(VotedandPlannedforin0.10.1.0)
> >>> > > > > .
> >>> > > > >
> >>> > > > > It would also be useful to document the client APIs.
> >>> > > > >
> >>> > > > > Thanks,
> >>> > > > >
> >>> > > > > Jun
> >>> > > > >
> >>> > > > > On Tue, Jun 28, 2016 at 2:55 PM, parth brahmbhatt <
> >>> > > > > brahmbhatt.pa...@gmail.com> wrote:
> >>> > > > >
> >>> > > > > > Hi,
> >>> > > > > >
> >>> > > > > > I am suggesting that we will only allow the renewal by users
> >>> that
> >>> > > > > > authenticated using *non* delegation token mechanism. For
> >>> example,
> if
> >>> > > > > user
> >>> > > > > > Alice authenticated using Kerberos and requested delegation
> >>> tokens,
> >>> > > > only
> >>> > > > > > user Alice authenticated via non delegation token mechanism
> can
> >>> > > renew.
> >>> > > > > > Clients that have access to delegation tokens cannot issue
> >>> > renewal
> >>> > > > > > request for renewing their own token and this is primarily
> >>> > important
> >>> > > to
> >>> > > > > > reduce the time window for which a compromised token will be
> >>> valid.
> >>> > > > > >
> >>> > > > > > To clarify, Yes any authenticated user can request delegation
> >>> > tokens
> >>> > > > but
> >>> > > > > > even here I would recommend to avoid creating a chain where a
> >>> > client
> >>> > > > > > authenticated via delegation token requests more
> delegation
> >>> > > tokens.
> >>> > > > > > Basically anyone can request delegation token, as long as
> they
> >>> > > > > authenticate
> >>> > > > > > via a non delegation token mechanism.
> >>> > > > > >
> >>> > > > > > Aren't classes listed here
> >>> > > > > > <
> >>> > > > > >
> >>> > > > >
> >>> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >>> > > 48+Delegation+token+support+for+Kafka#KIP-48Delegationtokens
> >>> upportforKaf
> >>> > > ka-PublicInterfaces
> >>> > > > > > >
> >>> > > > > > sufficient?
> >>> > > > > >
> >>> > > > > > 

Re: [DISCUSS] KIP-48 Support for delegation tokens as an authentication mechanism

2016-09-15 Thread Gwen Shapira
Hi Guys,

This discussion was dead for a while. Are there still contentious
points? If not, why are there no votes?

On Tue, Aug 23, 2016 at 1:26 PM, Jun Rao  wrote:
> Ashish,
>
> Yes, I will send out a KIP invite for next week to discuss KIP-48 and other
> remaining KIPs.
>
> Thanks,
>
> Jun
>
> On Tue, Aug 23, 2016 at 1:22 PM, Ashish Singh  wrote:
>
>> Thanks Harsha!
>>
>> Jun, can we add KIP-48 to next KIP hangout's agenda. Also, we did not
>> actually make a call on when we should have next KIP call. As there are a
>> few outstanding KIPs that could not be discussed this week, can we have a
>> KIP hangout call next week?
>>
>> On Tue, Aug 23, 2016 at 1:10 PM, Harsha Chintalapani 
>> wrote:
>>
>>> Ashish,
>>> Yes we are working on it. Let's discuss in the next KIP meeting.
>>> I'll join.
>>> -Harsha
>>>
>>> On Tue, Aug 23, 2016 at 12:07 PM Ashish Singh 
>>> wrote:
>>>
>>> > Hello Harsha,
>>> >
>>> > Are you still working on this? Wondering if we can discuss this in next
>>> KIP
>>> > meeting, if you can join.
>>> >
>>> > On Mon, Jul 18, 2016 at 9:51 AM, Harsha Chintalapani 
>>> > wrote:
>>> >
>>> > > Hi Grant,
>>> > >   We are working on it. Will add the details to KIP about the
>>> > > request protocol.
>>> > >
>>> > > Thanks,
>>> > > Harsha
>>> > >
>>> > > On Mon, Jul 18, 2016 at 6:50 AM Grant Henke 
>>> wrote:
>>> > >
>>> > > > Hi Parth,
>>> > > >
>>> > > > Are you still working on this? If you need any help please don't
>>> > hesitate
>>> > > > to ask.
>>> > > >
>>> > > > Thanks,
>>> > > > Grant
>>> > > >
>>> > > > On Thu, Jun 30, 2016 at 4:35 PM, Jun Rao  wrote:
>>> > > >
>>> > > > > Parth,
>>> > > > >
>>> > > > > Thanks for the reply.
>>> > > > >
>>> > > > > It makes sense to only allow the renewal by users that
>>> authenticated
>>> > > > using
>>> > > > > *non* delegation token mechanism. Then, should we make the
>>> renewal a
>>> > > > list?
>>> > > > > For example, in the case of rest proxy, it will be useful for
>>> every
>>> > > > > instance of rest proxy to be able to renew the tokens.
>>> > > > >
>>> > > > > It would be clearer if we can document the request protocol like
>>> > > > >
>>> > > > >
>>> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>>> > > 4+-+Command+line+and+centralized+administrative+operations#KIP-4-
>>> > > Commandlineandcentralizedadministrativeoperations-
>>> > > CreateTopicsRequest(KAFKA-2945):(VotedandPlannedforin0.10.1.0)
>>> > > > > .
>>> > > > >
>>> > > > > It would also be useful to document the client APIs.
>>> > > > >
>>> > > > > Thanks,
>>> > > > >
>>> > > > > Jun
>>> > > > >
>>> > > > > On Tue, Jun 28, 2016 at 2:55 PM, parth brahmbhatt <
>>> > > > > brahmbhatt.pa...@gmail.com> wrote:
>>> > > > >
>>> > > > > > Hi,
>>> > > > > >
>>> > > > > > I am suggesting that we will only allow the renewal by users
>>> that
>>> > > > > > authenticated using *non* delegation token mechanism. For
>>> example,
> if
>>> > > > > user
>>> > > > > > Alice authenticated using Kerberos and requested delegation
>>> tokens,
>>> > > > only
>>> > > > > > user Alice authenticated via non delegation token mechanism can
>>> > > renew.
>>> > > > > > Clients that have access to delegation tokens cannot issue
>>> > renewal
>>> > > > > > request for renewing their own token and this is primarily
>>> > important
>>> > > to
>>> > > > > > reduce the time window for which a compromised token will be
>>> valid.
>>> > > > > >
>>> > > > > > To clarify, Yes any authenticated user can request delegation
>>> > tokens
>>> > > > but
>>> > > > > > even here I would recommend to avoid creating a chain where a
>>> > client
>>> > > > > > authenticated via delegation token requests more delegation
>>> > > tokens.
>>> > > > > > Basically anyone can request delegation token, as long as they
>>> > > > > authenticate
>>> > > > > > via a non delegation token mechanism.
>>> > > > > >
>>> > > > > > Aren't classes listed here
>>> > > > > > <
>>> > > > > >
>>> > > > >
>>> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>>> > > 48+Delegation+token+support+for+Kafka#KIP-48Delegationtokens
>>> upportforKaf
>>> > > ka-PublicInterfaces
>>> > > > > > >
>>> > > > > > sufficient?
>>> > > > > >
>>> > > > > > Thanks
>>> > > > > > Parth
>>> > > > > >
>>> > > > > >
>>> > > > > >
>>> > > > > > On Tue, Jun 21, 2016 at 4:33 PM, Jun Rao 
>>> wrote:
>>> > > > > >
>>> > > > > > > Parth,
>>> > > > > > >
>>> > > > > > > Thanks for the reply. A couple of comments inline below.
>>> > > > > > >
>>> > > > > > > On Tue, Jun 21, 2016 at 10:36 AM, parth brahmbhatt <
>>> > > > > > > brahmbhatt.pa...@gmail.com> wrote:
>>> > > > > > >
>>> > > > > > > > 1. Who / how are tokens renewed? By original requester
>>> only? or
>>> > > > using
>>> > > > > > > > Kerberos
>>> > > > > > > > auth only?
>>> > > > > > > > My recommendation is to do this only 
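The renewal rule discussed in this thread, combined with Jun's suggestion of a renewer list, can be sketched as a simple check: a principal may renew a token only if it did not itself authenticate with a delegation token, and only if it is the token's owner or a listed renewer. This is illustrative only; the class, enum, and method names below are hypothetical and are not KIP-48's actual API:

```java
import java.util.Set;

public class TokenRenewalPolicy {
    enum AuthMechanism { KERBEROS, SSL, SCRAM, DELEGATION_TOKEN }

    // A requester may renew only if it did NOT authenticate with a delegation
    // token itself, and it is the token's owner or one of the listed renewers.
    static boolean canRenew(AuthMechanism requesterMechanism, String requester,
                            String owner, Set<String> renewers) {
        if (requesterMechanism == AuthMechanism.DELEGATION_TOKEN) {
            return false; // a token cannot be used to extend its own lifetime
        }
        return requester.equals(owner) || renewers.contains(requester);
    }
}
```

The renewer list covers the REST-proxy case raised above: every proxy instance can be listed as a renewer without any of them holding Alice's Kerberos credentials.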

[jira] [Updated] (KAFKA-4151) Update public docs for KIP-78

2016-09-15 Thread Sumit Arrawatia (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sumit Arrawatia updated KAFKA-4151:
---
Description: Add documentation to include details on Cluster Id in 
"Implementation" section. The actual implementation is tracked in KAFKA-4093.   
(was: System tests for KIP-78. The actual implementation is tracked in 
KAFKA-4093. )

> Update public docs for KIP-78
> -
>
> Key: KAFKA-4151
> URL: https://issues.apache.org/jira/browse/KAFKA-4151
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Sumit Arrawatia
>Assignee: Sumit Arrawatia
>
> Add documentation to include details on Cluster Id in "Implementation" 
> section. The actual implementation is tracked in KAFKA-4093. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4151) Update public docs for KIP-78

2016-09-15 Thread Sumit Arrawatia (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sumit Arrawatia updated KAFKA-4151:
---
Summary: Update public docs for KIP-78  (was: System tests for KIP-78 )

> Update public docs for KIP-78
> -
>
> Key: KAFKA-4151
> URL: https://issues.apache.org/jira/browse/KAFKA-4151
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Sumit Arrawatia
>Assignee: Sumit Arrawatia
>
> System tests for KIP-78. The actual implementation is tracked in KAFKA-4093. 





[jira] [Commented] (KAFKA-4126) No relevant log when the topic is non-existent

2016-09-15 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15494627#comment-15494627
 ] 

Vahid Hashemian commented on KAFKA-4126:


[~omkreddy] Do you also get repeating log messages (like below) until the 
timeout is reached?
{code}
[2016-09-15 14:49:05,691] WARN Error while fetching metadata with correlation 
id 0 : {new_topic=UNKNOWN_TOPIC_OR_PARTITION} 
(org.apache.kafka.clients.NetworkClient)
[2016-09-15 14:49:05,787] WARN Error while fetching metadata with correlation 
id 1 : {new_topic=UNKNOWN_TOPIC_OR_PARTITION} 
(org.apache.kafka.clients.NetworkClient)
[2016-09-15 14:49:05,889] WARN Error while fetching metadata with correlation 
id 2 : {new_topic=UNKNOWN_TOPIC_OR_PARTITION} 
(org.apache.kafka.clients.NetworkClient)
.
.
.
[2016-09-15 14:50:05,356] WARN Error while fetching metadata with correlation 
id 585 : {new_topic=UNKNOWN_TOPIC_OR_PARTITION} 
(org.apache.kafka.clients.NetworkClient)
[2016-09-15 14:50:05,458] WARN Error while fetching metadata with correlation 
id 586 : {new_topic=UNKNOWN_TOPIC_OR_PARTITION} 
(org.apache.kafka.clients.NetworkClient)
[2016-09-15 14:50:05,559] WARN Error while fetching metadata with correlation 
id 587 : {new_topic=UNKNOWN_TOPIC_OR_PARTITION} 
(org.apache.kafka.clients.NetworkClient)
[2016-09-15 14:50:05,619] ERROR Error when sending message to topic new_topic 
with key: null, value: 1 bytes with error: 
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
after 6 ms.
[2016-09-15 14:50:05,661] WARN Error while fetching metadata with correlation 
id 588 : {new_topic=UNKNOWN_TOPIC_OR_PARTITION} 
(org.apache.kafka.clients.NetworkClient)
{code}
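The cadence of the warnings above matches the producer defaults: the metadata fetch is retried roughly every retry.backoff.ms (100 ms by default) until max.block.ms (60000 ms by default) expires, which is why the correlation ids climb to ~589 over the one-minute window before the TimeoutException. A minimal sketch of that arithmetic, assuming those two default values:

```java
public class MetadataWarningEstimate {
    // One metadata fetch (and one WARN line) per retry.backoff.ms interval,
    // until max.block.ms expires and the send fails with a TimeoutException.
    static long expectedWarnings(long maxBlockMs, long retryBackoffMs) {
        return maxBlockMs / retryBackoffMs;
    }

    public static void main(String[] args) {
        // Producer defaults assumed: max.block.ms=60000, retry.backoff.ms=100
        System.out.println(expectedWarnings(60_000L, 100L)); // prints 600
    }
}
```

Lowering max.block.ms shortens the flood, at the cost of waiting less time for the topic to appear.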

> No relevant log when the topic is non-existent
> --
>
> Key: KAFKA-4126
> URL: https://issues.apache.org/jira/browse/KAFKA-4126
> Project: Kafka
>  Issue Type: Bug
>Reporter: Balázs Barnabás
>Assignee: Vahid Hashemian
>Priority: Minor
>
> When a producer sends a ProducerRecord into a Kafka topic that doesn't 
> exist, there is no relevant debug/error log that points out the error.





[jira] [Commented] (KAFKA-1464) Add a throttling option to the Kafka replication tool

2016-09-15 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15494596#comment-15494596
 ] 

Jiangjie Qin commented on KAFKA-1464:
-

It seems the PR title did not start with "KAFKA-1464", so the PR link was not 
updated automatically. Anyway, the PR link is https://github.com/apache/kafka/pull/1776

> Add a throttling option to the Kafka replication tool
> -
>
> Key: KAFKA-1464
> URL: https://issues.apache.org/jira/browse/KAFKA-1464
> Project: Kafka
>  Issue Type: New Feature
>  Components: replication
>Affects Versions: 0.8.0
>Reporter: mjuarez
>Assignee: Ben Stopford
>Priority: Minor
>  Labels: replication, replication-tools
> Fix For: 0.10.1.0
>
>
> When performing replication on new nodes of a Kafka cluster, the replication 
> process will use all available resources to replicate as fast as possible.  
> This causes performance issues (mostly disk IO and sometimes network 
> bandwidth) when doing this in a production environment, in which you're 
> trying to serve downstream applications, at the same time you're performing 
> maintenance on the Kafka cluster.
> An option to throttle the replication to a specific rate (in either MB/s or 
> activities/second) would help production systems to better handle maintenance 
> tasks while still serving downstream applications.
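The requested behavior amounts to a rate limiter on replica fetch traffic. A token-bucket sketch of that mechanism (illustrative only; the class name is made up and this is not Kafka's implementation of the feature):

```java
public class ReplicationThrottle {
    private final double bytesPerSecond;
    private double available;      // bytes currently permitted (the "bucket")
    private long lastRefillNanos;

    ReplicationThrottle(double bytesPerSecond, long nowNanos) {
        this.bytesPerSecond = bytesPerSecond;
        this.available = bytesPerSecond; // allow up to one second's burst
        this.lastRefillNanos = nowNanos;
    }

    // Returns how long (ms) a replica fetcher should pause before sending
    // `bytes` more, refilling the bucket based on elapsed time. Sketch only.
    long throttleTimeMs(long bytes, long nowNanos) {
        double elapsedSec = (nowNanos - lastRefillNanos) / 1e9;
        available = Math.min(bytesPerSecond, available + elapsedSec * bytesPerSecond);
        lastRefillNanos = nowNanos;
        available -= bytes;
        if (available >= 0) return 0;
        return (long) Math.ceil(-available / bytesPerSecond * 1000);
    }
}
```

Capping the bucket at one second's budget keeps bursts bounded, so replication can never starve disk IO or network bandwidth for more than that window.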





[jira] [Work started] (KAFKA-3856) Move inner classes accessible only functions in TopologyBuilder out of public APIs

2016-09-15 Thread Jeyhun Karimov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3856 started by Jeyhun Karimov.
-
> Move inner classes accessible only functions in TopologyBuilder out of public 
> APIs
> --
>
> Key: KAFKA-3856
> URL: https://issues.apache.org/jira/browse/KAFKA-3856
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Jeyhun Karimov
>  Labels: api
>
> In {{TopologyBuilder}} there are a couple of public functions that are 
> actually only used in the internal classes such as StreamThread and 
> StreamPartitionAssignor, and some accessible only in high-level DSL inner 
> classes, examples include {{addInternalTopic}}, {{sourceGroups}} and 
> {{copartitionGroups}}, etc. But they are still listed in Javadocs since this 
> class is part of public APIs.
> We should think about moving them out of the public API. Unfortunately 
> there is no "friend" access mode as in C++, so we need to think of another 
> way.
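One common Java workaround, in the absence of C++-style friend access, is to move the internal-only state behind a separate internal class reached through a package-private accessor, so it disappears from the public Javadoc surface while runtime code in the same package can still use it. A sketch with hypothetical names (not the actual TopologyBuilder code):

```java
public class TopologyBuilderSketch {
    private final Internal internal = new Internal();

    // Package-private accessor: visible to runtime classes in the same
    // package (the StreamThread role), absent from the public API docs.
    Internal internal() { return internal; }

    // Public builder methods delegate to the internal state.
    public TopologyBuilderSketch addSource(String name) {
        internal.sources.add(name);
        return this;
    }

    // Internal-only operations (the sourceGroups/copartitionGroups role)
    // live here instead of on the public class.
    static class Internal {
        final java.util.List<String> sources = new java.util.ArrayList<>();

        java.util.Set<String> sourceGroups() {
            return new java.util.TreeSet<>(sources);
        }
    }
}
```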





[jira] [Updated] (KAFKA-3825) Allow users to specify different types of state stores in Streams DSL

2016-09-15 Thread Jeyhun Karimov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeyhun Karimov updated KAFKA-3825:
--
Status: Patch Available  (was: Open)

> Allow users to specify different types of state stores in Streams DSL
> -
>
> Key: KAFKA-3825
> URL: https://issues.apache.org/jira/browse/KAFKA-3825
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Jeyhun Karimov
>  Labels: api
>
> Today the high-level Streams DSL uses hard-coded types of state stores (i.e. 
> persistent RocksDB) for its stateful operations. But advanced users 
> should be able to specify different types of state stores (in-memory, 
> persistent, customized) also in the DSL, instead of resorting to the 
> lower-level APIs.
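The request boils down to letting the DSL accept a store supplier per stateful operation instead of hard-coding one. A minimal sketch of that shape (the interface and factory names here are hypothetical, not Kafka's API):

```java
public class StoreChoiceSketch {
    // Hypothetical supplier: the DSL would take one of these per stateful
    // operation instead of always building a persistent RocksDB store.
    interface StateStoreSupplier {
        String name();
        boolean persistent();
    }

    static StateStoreSupplier inMemory(final String name) {
        return new StateStoreSupplier() {
            public String name() { return name; }
            public boolean persistent() { return false; }
        };
    }

    static StateStoreSupplier persistent(final String name) {
        return new StateStoreSupplier() {
            public String name() { return name; }
            public boolean persistent() { return true; }
        };
    }
}
```

A customized store would just be another implementation of the same interface, which keeps the lower-level Processor API unnecessary for this use case.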





[jira] [Commented] (KAFKA-3825) Allow users to specify different types of state stores in Streams DSL

2016-09-15 Thread Jeyhun Karimov (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15494574#comment-15494574
 ] 

Jeyhun Karimov commented on KAFKA-3825:
---

I am closing this issue as the PR has already been submitted. 

https://github.com/apache/kafka/pull/1588/commits/98a58d8241dbd95bbe10da220639ad0362259852



> Allow users to specify different types of state stores in Streams DSL
> -
>
> Key: KAFKA-3825
> URL: https://issues.apache.org/jira/browse/KAFKA-3825
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Jeyhun Karimov
>  Labels: api
>
> Today the high-level Streams DSL uses hard-coded types of state stores (i.e. 
> persistent RocksDB) for its stateful operations. But advanced users 
> should be able to specify different types of state stores (in-memory, 
> persistent, customized) also in the DSL, instead of resorting to the 
> lower-level APIs.





[jira] [Updated] (KAFKA-3184) Add Checkpoint for In-memory State Store

2016-09-15 Thread Jeyhun Karimov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeyhun Karimov updated KAFKA-3184:
--
Assignee: Guozhang Wang  (was: Jeyhun Karimov)

> Add Checkpoint for In-memory State Store
> 
>
> Key: KAFKA-3184
> URL: https://issues.apache.org/jira/browse/KAFKA-3184
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: user-experience
> Fix For: 0.10.1.0
>
>
> Currently Kafka Streams does not make a checkpoint of the persistent state 
> store upon committing, which would be expensive since it is "stopping the 
> world" and writing to disk: for example, RocksDB would require you to copy the 
> file directory to make a copy naively. 
> However, for in-memory stores checkpointing may be doable in an asynchronous 
> manner, hence it can be done quickly. And the benefit of having an intermediate 
> checkpoint is to avoid restoring from scratch if standby tasks are not 
> present.
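The asynchronous idea can be sketched for a simple in-memory map: freeze a cheap copy on commit, then persist it off the processing thread so the world is never stopped. Illustrative only; in a real implementation the background task would serialize the snapshot to a checkpoint file:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

public class AsyncCheckpointSketch {
    private final Map<String, Long> store = new HashMap<>();

    void put(String key, long value) { store.put(key, value); }

    // On commit: copy the map synchronously (fast for in-memory state), then
    // hand the frozen copy to a background thread for persistence.
    CompletableFuture<Map<String, Long>> checkpoint() {
        Map<String, Long> snapshot = new HashMap<>(store); // freeze current state
        return CompletableFuture.supplyAsync(() -> {
            // a real implementation would write `snapshot` to a checkpoint file
            return snapshot;
        });
    }
}
```

Because the snapshot is an independent copy, writes that land after the commit cannot leak into the checkpoint.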





[jira] [Updated] (KAFKA-3184) Add Checkpoint for In-memory State Store

2016-09-15 Thread Jeyhun Karimov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeyhun Karimov updated KAFKA-3184:
--
Assignee: (was: Guozhang Wang)

> Add Checkpoint for In-memory State Store
> 
>
> Key: KAFKA-3184
> URL: https://issues.apache.org/jira/browse/KAFKA-3184
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: user-experience
> Fix For: 0.10.1.0
>
>
> Currently Kafka Streams does not make a checkpoint of the persistent state 
> store upon committing, which would be expensive since it is "stopping the 
> world" and writing to disk: for example, RocksDB would require you to copy the 
> file directory to make a copy naively. 
> However, for in-memory stores checkpointing may be doable in an asynchronous 
> manner, hence it can be done quickly. And the benefit of having an intermediate 
> checkpoint is to avoid restoring from scratch if standby tasks are not 
> present.





Re: [VOTE] KIP-54: Sticky Partition Assignment Strategy

2016-09-15 Thread Bill Bejeck
+1

On Thu, Sep 15, 2016 at 5:16 AM, Rajini Sivaram <
rajinisiva...@googlemail.com> wrote:

> +1 (non-binding)
>
> On Wed, Sep 14, 2016 at 12:37 AM, Jason Gustafson 
> wrote:
>
> > Thanks for the KIP. +1 from me.
> >
> > On Tue, Sep 13, 2016 at 12:05 PM, Vahid S Hashemian <
> > vahidhashem...@us.ibm.com> wrote:
> >
> > > Hi all,
> > >
> > > Thanks for providing feedback on this KIP so far.
> > > The KIP was discussed during the KIP meeting today and there doesn't
> seem
> > > to be any unaddressed issue at this point.
> > >
> > > So I would like to initiate the voting process.
> > >
> > > The KIP can be found here:
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 54+-+Sticky+Partition+Assignment+Strategy
> > > And the full discussion thread is here:
> > > https://www.mail-archive.com/dev@kafka.apache.org/msg47607.html
> > >
> > > Thanks.
> > > --Vahid
> > >
> > >
> >
>
>
>
> --
> Regards,
>
> Rajini
>
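The stickiness being voted on can be summarized as: keep each partition with its previous owner when that consumer is still in the group, and hand orphaned partitions to the least-loaded member. A toy illustration of that goal (not KIP-54's actual algorithm, which also rebalances for fairness):

```java
import java.util.*;

public class StickySketch {
    static Map<String, List<Integer>> assign(List<String> consumers,
                                             List<Integer> partitions,
                                             Map<Integer, String> previousOwner) {
        Map<String, List<Integer>> assignment = new LinkedHashMap<>();
        for (String c : consumers) assignment.put(c, new ArrayList<>());
        List<Integer> orphans = new ArrayList<>();
        for (int p : partitions) {
            String owner = previousOwner.get(p);
            if (owner != null && assignment.containsKey(owner)) {
                assignment.get(owner).add(p); // sticky: preserve prior ownership
            } else {
                orphans.add(p); // owner left the group, or partition is new
            }
        }
        for (int p : orphans) {
            // balance: give each unowned partition to the least-loaded consumer
            String target = Collections.min(assignment.keySet(),
                    Comparator.comparingInt(c -> assignment.get(c).size()));
            assignment.get(target).add(p);
        }
        return assignment;
    }
}
```

Preserving ownership matters because a moved partition forces the new owner to rebuild any partition-local state or caches.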


Build failed in Jenkins: kafka-trunk-jdk8 #884

2016-09-15 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] HOTFIX: fix KafkaStreams SmokeTest

--
[...truncated 10897 lines...]

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testSubscriptionOnEmptyTopic PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testWakeupWithFetchDataAvailable STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testWakeupWithFetchDataAvailable PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testSubscriptionOnNullTopicCollection STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testSubscriptionOnNullTopicCollection PASSED

org.apache.kafka.clients.consumer.ConsumerConfigTest > 
testDeserializerToPropertyConfig STARTED

org.apache.kafka.clients.consumer.ConsumerConfigTest > 
testDeserializerToPropertyConfig PASSED

org.apache.kafka.clients.consumer.ConsumerConfigTest > 
testDeserializerToMapConfig STARTED

org.apache.kafka.clients.consumer.ConsumerConfigTest > 
testDeserializerToMapConfig PASSED

org.apache.kafka.clients.consumer.RoundRobinAssignorTest > 
testOneConsumerNoTopic STARTED

org.apache.kafka.clients.consumer.RoundRobinAssignorTest > 
testOneConsumerNoTopic PASSED

org.apache.kafka.clients.consumer.RoundRobinAssignorTest > 
testTwoConsumersTwoTopicsSixPartitions STARTED

org.apache.kafka.clients.consumer.RoundRobinAssignorTest > 
testTwoConsumersTwoTopicsSixPartitions PASSED

org.apache.kafka.clients.consumer.RoundRobinAssignorTest > 
testOneConsumerOneTopic STARTED

org.apache.kafka.clients.consumer.RoundRobinAssignorTest > 
testOneConsumerOneTopic PASSED

org.apache.kafka.clients.consumer.RoundRobinAssignorTest > 
testMultipleConsumersMixedTopics STARTED

org.apache.kafka.clients.consumer.RoundRobinAssignorTest > 
testMultipleConsumersMixedTopics PASSED

org.apache.kafka.clients.consumer.RoundRobinAssignorTest > 
testTwoConsumersOneTopicOnePartition STARTED

org.apache.kafka.clients.consumer.RoundRobinAssignorTest > 
testTwoConsumersOneTopicOnePartition PASSED

org.apache.kafka.clients.consumer.RoundRobinAssignorTest > 
testOneConsumerMultipleTopics STARTED

org.apache.kafka.clients.consumer.RoundRobinAssignorTest > 
testOneConsumerMultipleTopics PASSED

org.apache.kafka.clients.consumer.RoundRobinAssignorTest > 
testOnlyAssignsPartitionsFromSubscribedTopics STARTED

org.apache.kafka.clients.consumer.RoundRobinAssignorTest > 
testOnlyAssignsPartitionsFromSubscribedTopics PASSED

org.apache.kafka.clients.consumer.RoundRobinAssignorTest > 
testTwoConsumersOneTopicTwoPartitions STARTED

org.apache.kafka.clients.consumer.RoundRobinAssignorTest > 
testTwoConsumersOneTopicTwoPartitions PASSED

org.apache.kafka.clients.consumer.RoundRobinAssignorTest > 
testOneConsumerNonexistentTopic STARTED

org.apache.kafka.clients.consumer.RoundRobinAssignorTest > 
testOneConsumerNonexistentTopic PASSED

org.apache.kafka.clients.consumer.MockConsumerTest > testSimpleMock STARTED

org.apache.kafka.clients.consumer.MockConsumerTest > testSimpleMock PASSED

org.apache.kafka.clients.consumer.SerializeCompatibilityOffsetAndMetadataTest > 
testSerializationRoundtrip STARTED

org.apache.kafka.clients.consumer.SerializeCompatibilityOffsetAndMetadataTest > 
testSerializationRoundtrip PASSED

org.apache.kafka.clients.consumer.SerializeCompatibilityOffsetAndMetadataTest > 
testOffsetMetadataSerializationCompatibility STARTED

org.apache.kafka.clients.consumer.SerializeCompatibilityOffsetAndMetadataTest > 
testOffsetMetadataSerializationCompatibility PASSED

org.apache.kafka.clients.consumer.ConsumerRecordsTest > iterator STARTED

org.apache.kafka.clients.consumer.ConsumerRecordsTest > iterator PASSED

org.apache.kafka.clients.ClientUtilsTest > testOnlyBadHostname STARTED

org.apache.kafka.clients.ClientUtilsTest > testOnlyBadHostname PASSED

org.apache.kafka.clients.ClientUtilsTest > testParseAndValidateAddresses STARTED

org.apache.kafka.clients.ClientUtilsTest > testParseAndValidateAddresses PASSED

org.apache.kafka.clients.ClientUtilsTest > testNoPort STARTED

org.apache.kafka.clients.ClientUtilsTest > testNoPort PASSED
:clients:determineCommitId UP-TO-DATE
:clients:createVersionFile
:clients:jar UP-TO-DATE
:core:compileJava UP-TO-DATE
:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0

:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

Build failed in Jenkins: kafka-0.10.0-jdk7 #200

2016-09-15 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] HOTFIX: set sourceNodes to null for selectKey

--
[...truncated 5757 lines...]
org.apache.kafka.streams.kstream.internals.KTableSourceTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamKTableLeftJoinTest > 
testNotJoinable PASSED

org.apache.kafka.streams.kstream.internals.KStreamKTableLeftJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamForeachTest > testForeach 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
testToWithNullValueSerdeDoesntNPE PASSED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > testNumProcesses 
PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > testStateStore 
PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > testRepartition 
PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
testStateStoreLazyEval PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testNotSendingOldValues PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testSendingOldValues PASSED

org.apache.kafka.streams.kstream.internals.KStreamFilterTest > testFilterNot 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamFilterTest > testFilter PASSED

org.apache.kafka.streams.kstream.internals.KStreamMapValuesTest > 
testFlatMapValues PASSED

org.apache.kafka.streams.kstream.internals.KStreamTransformTest > testTransform 
PASSED

org.apache.kafka.streams.kstream.internals.KTableForeachTest > testForeach 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapTest > testFlatMap 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testOuterJoin PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testWindowing PASSED

org.apache.kafka.streams.kstream.internals.KStreamTransformValuesTest > 
testTransform PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testMerge PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testFrom PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testNewName PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > afterBelowLower PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > nameMustNotBeEmpty PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > beforeOverUpper PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > nameMustNotBeNull PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
shouldHaveSaneEqualsAndHashCode PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > validWindows PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
timeDifferenceMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > nameMustNotBeEmpty 
PASSED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > 
startTimeMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > 
shouldIncludeRecordsThatHappenedOnWindowStart PASSED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > nameMustNotBeNull PASSED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > startTimeCanBeZero 
PASSED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > 
shouldIncludeRecordsThatHappenedAfterWindowStart PASSED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > 
shouldExcludeRecordsThatHappenedBeforeWindowStart PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartOnceClosed PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCleanupIsolation PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStartAndClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice PASSED

org.apache.kafka.streams.processor.DefaultPartitionGrouperTest > testGrouping 
PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > 
testStorePartitions PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > testUpdateKTable 
PASSED


Build failed in Jenkins: kafka-trunk-jdk7 #1541

2016-09-15 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] HOTFIX: fix KafkaStreams SmokeTest

--
[...truncated 6686 lines...]

kafka.network.SocketServerTest > tooBigRequestIsRejected STARTED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopicWithCollision 
STARTED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAliveBrokerListWithNoTopics 
STARTED

kafka.integration.SaslSslTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslSslTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.SaslSslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslSslTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.SaslSslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig STARTED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED


[jira] [Commented] (KAFKA-2700) delete topic should remove the corresponding ACL and configs

2016-09-15 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15493958#comment-15493958
 ] 

Parth Brahmbhatt commented on KAFKA-2700:
-

all yours.

> delete topic should remove the corresponding ACL and configs
> 
>
> Key: KAFKA-2700
> URL: https://issues.apache.org/jira/browse/KAFKA-2700
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Parth Brahmbhatt
>
> After a topic is successfully deleted, we should also remove any ACL, configs 
> and perhaps committed offsets associated with topic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1862: KAFKA-4175: Can't have StandbyTasks in KafkaStream...

2016-09-15 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/1862

KAFKA-4175: Can't have StandbyTasks in KafkaStreams where 
NUM_STREAM_THREADS_CONFIG > 1

Standby tasks should be assigned per consumer, not per process.
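
The per-consumer assignment described above can be sketched as a round-robin distribution of standby tasks across the consumers (threads) of an instance, so that no two threads of the same KafkaStreams process ever hold the same task. This is an illustration only, not the actual `StreamPartitionAssignor` logic; the class and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class StandbyAssignmentSketch {
    /**
     * Give each standby task to exactly one consumer (thread). Because no task
     * is duplicated across the threads of one instance, two threads never
     * contend for the same state directory lock.
     */
    static Map<String, List<String>> assignStandbys(List<String> consumers,
                                                    List<String> standbyTasks) {
        Map<String, List<String>> assignment = new LinkedHashMap<>();
        for (String c : consumers)
            assignment.put(c, new ArrayList<>());
        int i = 0;
        for (String task : standbyTasks) {
            // round-robin over consumers, not over processes
            assignment.get(consumers.get(i % consumers.size())).add(task);
            i++;
        }
        return assignment;
    }
}
```

With consumers ["c1", "c2"] and tasks ["1_0", "1_1", "1_2"], each task lands on exactly one consumer, e.g. c1 gets 1_0 and 1_2, c2 gets 1_1.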

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kafka-4175

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1862.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1862


commit 4e4019f7143f27cf942c69f9eb2942cf4f83c01b
Author: Damian Guy 
Date:   2016-09-15T16:14:35Z

assign standby tasks per consumer not per process




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4175) Can't have StandbyTasks in KafkaStreams where NUM_STREAM_THREADS_CONFIG > 1

2016-09-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15493818#comment-15493818
 ] 

ASF GitHub Bot commented on KAFKA-4175:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/1862

KAFKA-4175: Can't have StandbyTasks in KafkaStreams where 
NUM_STREAM_THREADS_CONFIG > 1

Standby tasks should be assigned per consumer, not per process.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kafka-4175

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1862.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1862


commit 4e4019f7143f27cf942c69f9eb2942cf4f83c01b
Author: Damian Guy 
Date:   2016-09-15T16:14:35Z

assign standby tasks per consumer not per process




> Can't have StandbyTasks in KafkaStreams where NUM_STREAM_THREADS_CONFIG > 1
> ---
>
> Key: KAFKA-4175
> URL: https://issues.apache.org/jira/browse/KAFKA-4175
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.1.0
>
>
> When we have StandbyTasks in a Kafka Streams app and we have > 1 threads per 
> instance we can run into:
> {code}
> Caused by: java.io.IOException: task [1_0] Failed to lock the state 
> directory: /private/tmp/kafka-streams-smoketest/2/SmokeTest/1_0
> {code}
> This is because the same StandbyTask has been assigned to each thread in the 
> same KafkaStreams instance





[jira] [Updated] (KAFKA-4175) Can't have StandbyTasks in KafkaStreams where NUM_STREAM_THREADS_CONFIG > 1

2016-09-15 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy updated KAFKA-4175:
--
Description: 
When we have StandbyTasks in a Kafka Streams app and we have > 1 threads per 
instance we can run into:
{code}
Caused by: java.io.IOException: task [1_0] Failed to lock the state directory: 
/private/tmp/kafka-streams-smoketest/2/SmokeTest/1_0
{code}

This is because the same StandbyTask has been assigned to each thread in the 
same KafkaStreams instance

  was:
When we have StandbyTasks in a Kafka Streams app and we have > 1 threads per 
instance we can run into:
{code}
Caused by: java.io.IOException: task [1_0] Failed to lock the state directory: 
/private/tmp/kafka-streams-smoketest/2/SmokeTest/1_0
{code}

This is because the StandbyTask has been assigned to the same KafkaStreams 
instance as the active task. We need to either use a different base state 
directory for the standby tasks or we need to change the assignment such that 
they don't end up in the same instance. 


> Can't have StandbyTasks in KafkaStreams where NUM_STREAM_THREADS_CONFIG > 1
> ---
>
> Key: KAFKA-4175
> URL: https://issues.apache.org/jira/browse/KAFKA-4175
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Guozhang Wang
> Fix For: 0.10.1.0
>
>
> When we have StandbyTasks in a Kafka Streams app and we have > 1 threads per 
> instance we can run into:
> {code}
> Caused by: java.io.IOException: task [1_0] Failed to lock the state 
> directory: /private/tmp/kafka-streams-smoketest/2/SmokeTest/1_0
> {code}
> This is because the same StandbyTask has been assigned to each thread in the 
> same KafkaStreams instance





[jira] [Assigned] (KAFKA-4175) Can't have StandbyTasks in KafkaStreams where NUM_STREAM_THREADS_CONFIG > 1

2016-09-15 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy reassigned KAFKA-4175:
-

Assignee: Damian Guy  (was: Guozhang Wang)

> Can't have StandbyTasks in KafkaStreams where NUM_STREAM_THREADS_CONFIG > 1
> ---
>
> Key: KAFKA-4175
> URL: https://issues.apache.org/jira/browse/KAFKA-4175
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.1.0
>
>
> When we have StandbyTasks in a Kafka Streams app and we have > 1 threads per 
> instance we can run into:
> {code}
> Caused by: java.io.IOException: task [1_0] Failed to lock the state 
> directory: /private/tmp/kafka-streams-smoketest/2/SmokeTest/1_0
> {code}
> This is because the same StandbyTask has been assigned to each thread in the 
> same KafkaStreams instance





[jira] [Commented] (KAFKA-4175) Can't have StandbyTasks in KafkaStreams where NUM_STREAM_THREADS_CONFIG > 1

2016-09-15 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15493763#comment-15493763
 ] 

Guozhang Wang commented on KAFKA-4175:
--

[~damianguy] Thanks for finding this. I think this was overlooked originally 
when we handled {{OverlappingFileLockException}}: since {{FileChannel.tryLock}} 
is not bound to a thread but to the whole JVM, we thought returning null was 
sufficient. But that is obviously not the case.
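
The JVM-wide behaviour of {{FileChannel.tryLock}} can be seen directly: when a second channel in the same JVM tries to lock an already-locked file, the call throws {{OverlappingFileLockException}} rather than returning null (null is only returned when another process holds the lock). A minimal sketch, unrelated to Kafka's actual locking code:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class StateDirLockSketch {
    /** Lock the file once, then try again from a second channel in the same JVM. */
    static String secondTryLock(Path lockFile) throws IOException {
        try (FileChannel first = FileChannel.open(lockFile,
                 StandardOpenOption.CREATE, StandardOpenOption.WRITE);
             FileChannel second = FileChannel.open(lockFile,
                 StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            FileLock held = first.tryLock();   // first "thread" wins the lock
            try {
                FileLock l = second.tryLock(); // same JVM: throws, does not return null
                return l == null ? "null" : "acquired";
            } catch (OverlappingFileLockException e) {
                return "OverlappingFileLockException";
            } finally {
                held.release();
            }
        }
    }
}
```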

> Can't have StandbyTasks in KafkaStreams where NUM_STREAM_THREADS_CONFIG > 1
> ---
>
> Key: KAFKA-4175
> URL: https://issues.apache.org/jira/browse/KAFKA-4175
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Guozhang Wang
> Fix For: 0.10.1.0
>
>
> When we have StandbyTasks in a Kafka Streams app and we have > 1 threads per 
> instance we can run into:
> {code}
> Caused by: java.io.IOException: task [1_0] Failed to lock the state 
> directory: /private/tmp/kafka-streams-smoketest/2/SmokeTest/1_0
> {code}
> This is because the StandbyTask has been assigned to the same KafkaStreams 
> instance as the active task. We need to either use a different base state 
> directory for the standby tasks or we need to change the assignment such that 
> they don't end up in the same instance. 





[GitHub] kafka pull request #1861: HOTFIX: fix KafkaStreams SmokeTest

2016-09-15 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1861




[GitHub] kafka pull request #1858: HOTFIX: set sourceNodes to null for selectKey

2016-09-15 Thread guozhangwang
Github user guozhangwang closed the pull request at:

https://github.com/apache/kafka/pull/1858




[jira] [Commented] (KAFKA-3964) Metadata update requests are sometimes received after LeaderAndIsrRequests

2016-09-15 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15493523#comment-15493523
 ] 

Jun Rao commented on KAFKA-3964:


[~krishna97], it seems that this is the same issue as in 
https://issues.apache.org/jira/browse/KAFKA-3042?

> Metadata update requests are sometimes received after LeaderAndIsrRequests
> --
>
> Key: KAFKA-3964
> URL: https://issues.apache.org/jira/browse/KAFKA-3964
> Project: Kafka
>  Issue Type: Bug
>Reporter: Maysam Yabandeh
>Priority: Minor
>
> The broker needs metadata of the leader before being able to process 
> LeaderAndIsrRequest from the controller. For this reason on broker startup 
> the controller first sends the metadata update requests and AFTER that it 
> sends the LeaderAndIsrRequests:
> {code}
>  def onBrokerStartup(newBrokers: Seq[Int]) {
> info("New broker startup callback for 
> %s".format(newBrokers.mkString(",")))
> val newBrokersSet = newBrokers.toSet
> // send update metadata request to all live and shutting down brokers. 
> Old brokers will get to know of the new
> // broker via this update.
> // In cases of controlled shutdown leaders will not be elected when a new 
> broker comes up. So at least in the
> // common controlled shutdown case, the metadata will reach the new 
> brokers faster
> 
> sendUpdateMetadataRequest(controllerContext.liveOrShuttingDownBrokerIds.toSeq)
> // the very first thing to do when a new broker comes up is send it the 
> entire list of partitions that it is
> // supposed to host. Based on that the broker starts the high watermark 
> threads for the input list of partitions
> val allReplicasOnNewBrokers = 
> controllerContext.replicasOnBrokers(newBrokersSet)
> replicaStateMachine.handleStateChanges(allReplicasOnNewBrokers, 
> OnlineReplica)
> {code}
> However, this protocol is not followed when a node becomes the controller: it 
> sends LeaderAndIsrRequests BEFORE sending the metadata update requests:
> {code}
>   def onControllerFailover() {
> ...
>   replicaStateMachine.startup()
> ...
>   /* send partition leadership info to all live brokers */  
> sendUpdateMetadataRequest(controllerContext.liveOrShuttingDownBrokerIds.toSeq)
> {code}
> ReplicaStateMachine::startup
> {code}
>   def startup() {
> ...
> // move all Online replicas to Online
> handleStateChanges(controllerContext.allLiveReplicas(), 
> OnlineReplica){code}
> which trigger LeaderAndIsrRequest messages.
> Here are the symptoms one would observe when this problem manifests:
> # The first set of messages that the broker receives from the controller is 
> LeaderAndIsrRequests
> # The broker fails to become the follower as requested by the controller
> {code}
> 2016-07-12 21:03:53,081 ERROR change.logger: Broker 14 received 
> LeaderAndIsrRequest with correlation id 0 from controller 21 epoch 290 for 
> partition [topicxyz,7] but cannot become follower since the new leader 22 is 
> unavailable.
> {code}
> # The fetcher hence does not start and the partition remains under-replicated.
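
The symptom can be modelled with a toy broker that, like the real one, refuses to follow a leader it has no metadata for, which shows why the metadata update must arrive before the LeaderAndIsrRequest. The class and method names below are hypothetical, not Kafka's actual request handlers:

```java
import java.util.HashSet;
import java.util.Set;

public class BrokerMetadataSketch {
    private final Set<Integer> knownBrokers = new HashSet<>();

    /** Controller's metadata update: tells this broker which brokers are alive. */
    void handleUpdateMetadata(Set<Integer> aliveBrokers) {
        knownBrokers.addAll(aliveBrokers);
    }

    /** LeaderAndIsr: become a follower of `leader`; fails if the leader is unknown. */
    String handleLeaderAndIsr(int leader) {
        if (!knownBrokers.contains(leader))
            return "cannot become follower since the new leader " + leader
                    + " is unavailable";
        return "following " + leader;
    }
}
```

Delivering LeaderAndIsr before the metadata update reproduces the "cannot become follower" error quoted above; swapping the order makes the broker follow successfully.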





[GitHub] kafka pull request #1861: HOTFIX: fix KafkaStreams SmokeTest

2016-09-15 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/1861

HOTFIX: fix KafkaStreams SmokeTest

Set the NUM_STREAM_THREADS_CONFIG = 1 in SmokeTestClient as we get locking 
issues when we have NUM_STREAM_THREADS_CONFIG > 1 and we have Standby Tasks, 
i.e., replicas. This is because the Standby Tasks can be assigned to the same 
KafkaStreams instance as the active task, hence the directory is locked

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka fix-smoketest

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1861.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1861


commit abbab326e99edaa12b9e34c875f2a0426228725f
Author: Damian Guy 
Date:   2016-09-15T14:17:35Z

change NUM_STREAMS_THREAD_CONFIG to 1 in SmokeTestClient






[jira] [Updated] (KAFKA-4175) Can't have StandbyTasks in KafkaStreams where NUM_STREAM_THREADS_CONFIG > 1

2016-09-15 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy updated KAFKA-4175:
--
Description: 
When we have StandbyTasks in a Kafka Streams app and we have > 1 threads per 
instance we can run into:
{code}
Caused by: java.io.IOException: task [1_0] Failed to lock the state directory: 
/private/tmp/kafka-streams-smoketest/2/SmokeTest/1_0
{code}

This is because the StandbyTask has been assigned to the same KafkaStreams 
instance as the active task. We need to either use a different base state 
directory for the standby tasks or we need to change the assignment such that 
they don't end up in the same instance. 

  was:
When we have {code}StandbyTasks{code} in a Kafka Streams app and we have > 1 
threads per instance we can run into:
{code}
Caused by: java.io.IOException: task [1_0] Failed to lock the state directory: 
/private/tmp/kafka-streams-smoketest/2/SmokeTest/1_0
{code}

This is because the StandbyTask has been assigned to the same 
{code}KafkaStreams{code} instance as the active task. We need to either use a 
different base state directory for the standby tasks or we need to change the 
assignment such that they don't end up in the same instance. 


> Can't have StandbyTasks in KafkaStreams where NUM_STREAM_THREADS_CONFIG > 1
> ---
>
> Key: KAFKA-4175
> URL: https://issues.apache.org/jira/browse/KAFKA-4175
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Guozhang Wang
> Fix For: 0.10.1.0
>
>
> When we have StandbyTasks in a Kafka Streams app and we have > 1 threads per 
> instance we can run into:
> {code}
> Caused by: java.io.IOException: task [1_0] Failed to lock the state 
> directory: /private/tmp/kafka-streams-smoketest/2/SmokeTest/1_0
> {code}
> This is because the StandbyTask has been assigned to the same KafkaStreams 
> instance as the active task. We need to either use a different base state 
> directory for the standby tasks or we need to change the assignment such that 
> they don't end up in the same instance. 





[jira] [Created] (KAFKA-4175) Can't have StandbyTasks in KafkaStreams where NUM_STREAM_THREADS_CONFIG > 1

2016-09-15 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-4175:
-

 Summary: Can't have StandbyTasks in KafkaStreams where 
NUM_STREAM_THREADS_CONFIG > 1
 Key: KAFKA-4175
 URL: https://issues.apache.org/jira/browse/KAFKA-4175
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.1.0
Reporter: Damian Guy
Assignee: Guozhang Wang
 Fix For: 0.10.1.0








[jira] [Updated] (KAFKA-4175) Can't have StandbyTasks in KafkaStreams where NUM_STREAM_THREADS_CONFIG > 1

2016-09-15 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy updated KAFKA-4175:
--
Description: 
When we have {code}StandbyTasks{code} in a Kafka Streams app and we have > 1 
threads per instance we can run into:
{code}
Caused by: java.io.IOException: task [1_0] Failed to lock the state directory: 
/private/tmp/kafka-streams-smoketest/2/SmokeTest/1_0
{code}

This is because the StandbyTask has been assigned to the same 
{code}KafkaStreams{code} instance as the active task. We need to either use a 
different base state directory for the standby tasks or we need to change the 
assignment such that they don't end up in the same instance. 

> Can't have StandbyTasks in KafkaStreams where NUM_STREAM_THREADS_CONFIG > 1
> ---
>
> Key: KAFKA-4175
> URL: https://issues.apache.org/jira/browse/KAFKA-4175
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Guozhang Wang
> Fix For: 0.10.1.0
>
>
> When we have {code}StandbyTasks{code} in a Kafka Streams app and we have > 1 
> threads per instance we can run into:
> {code}
> Caused by: java.io.IOException: task [1_0] Failed to lock the state 
> directory: /private/tmp/kafka-streams-smoketest/2/SmokeTest/1_0
> {code}
> This is because the StandbyTask has been assigned to the same 
> {code}KafkaStreams{code} instance as the active task. We need to either use a 
> different base state directory for the standby tasks or we need to change the 
> assignment such that they don't end up in the same instance. 





[jira] [Updated] (KAFKA-4055) Add system tests for secure quotas

2016-09-15 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-4055:
--
Status: Patch Available  (was: Open)

> Add system tests for secure quotas
> --
>
> Key: KAFKA-4055
> URL: https://issues.apache.org/jira/browse/KAFKA-4055
> Project: Kafka
>  Issue Type: Test
>  Components: system tests
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.1.0
>
>
> Add system tests for quotas for authenticated users and (user, client-id) 
> (corresponding to KIP-55). Implementation is being done under KAFKA-3492.





[jira] [Commented] (KAFKA-4055) Add system tests for secure quotas

2016-09-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15493002#comment-15493002
 ] 

ASF GitHub Bot commented on KAFKA-4055:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/1860

KAFKA-4055: System tests for secure quotas

Fix the existing client-id quota test, which currently doesn't configure quota 
overrides correctly. Add new tests for user and (user, client-id) quota 
overrides and default quotas.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-4055

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1860.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1860


commit 01497fdbf6e3883423eb752fbb02ce75f840727c
Author: Rajini Sivaram 
Date:   2016-09-15T08:40:38Z

KAFKA-4055: System tests for secure quotas




> Add system tests for secure quotas
> --
>
> Key: KAFKA-4055
> URL: https://issues.apache.org/jira/browse/KAFKA-4055
> Project: Kafka
>  Issue Type: Test
>  Components: system tests
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.1.0
>
>
> Add system tests for quotas for authenticated users and (user, client-id) 
> (corresponding to KIP-55). Implementation is being done under KAFKA-3492.





[GitHub] kafka pull request #1860: KAFKA-4055: System tests for secure quotas

2016-09-15 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/1860

KAFKA-4055: System tests for secure quotas

Fix the existing client-id quota test, which currently doesn't configure quota 
overrides correctly. Add new tests for user and (user, client-id) quota 
overrides and default quotas.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-4055

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1860.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1860


commit 01497fdbf6e3883423eb752fbb02ce75f840727c
Author: Rajini Sivaram 
Date:   2016-09-15T08:40:38Z

KAFKA-4055: System tests for secure quotas






Re: [VOTE] KIP-54: Sticky Partition Assignment Strategy

2016-09-15 Thread Rajini Sivaram
+1 (non-binding)

On Wed, Sep 14, 2016 at 12:37 AM, Jason Gustafson 
wrote:

> Thanks for the KIP. +1 from me.
>
> On Tue, Sep 13, 2016 at 12:05 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
>
> > Hi all,
> >
> > Thanks for providing feedback on this KIP so far.
> > The KIP was discussed during the KIP meeting today and there doesn't seem
> > to be any unaddressed issue at this point.
> >
> > So I would like to initiate the voting process.
> >
> > The KIP can be found here:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 54+-+Sticky+Partition+Assignment+Strategy
> > And the full discussion thread is here:
> > https://www.mail-archive.com/dev@kafka.apache.org/msg47607.html
> >
> > Thanks.
> > --Vahid
> >
> >
>



-- 
Regards,

Rajini


[jira] [Assigned] (KAFKA-4072) improving memory usage in LogCleaner

2016-09-15 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy reassigned KAFKA-4072:
--

Assignee: Manikumar Reddy

> improving memory usage in LogCleaner
> 
>
> Key: KAFKA-4072
> URL: https://issues.apache.org/jira/browse/KAFKA-4072
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Manikumar Reddy
>
> This is a followup jira from KAFKA-3894.
> We can potentially make the allocation of the dedup buffer more dynamic. We 
> can start with something small like 100MB. If needed, we can grow the dedup 
> buffer up to the configured size. This will allow us to set a larger default 
> dedup buffer size (say 1GB). If there are not lots of keys, the broker won't 
> be using that much memory. This will allow the default configuration to 
> accommodate more keys.
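
The growth strategy suggested above can be sketched as a buffer that starts small and doubles on demand up to the configured maximum. This is an illustration of the idea only, not the log cleaner's actual offset-map code; all names are hypothetical:

```java
import java.nio.ByteBuffer;

public class GrowableDedupBuffer {
    private final int maxBytes;
    private ByteBuffer buffer;

    GrowableDedupBuffer(int initialBytes, int maxBytes) {
        this.maxBytes = maxBytes;
        this.buffer = ByteBuffer.allocate(initialBytes);  // start small, e.g. 100MB
    }

    /** Grow (by doubling) until `needed` bytes fit, never past the configured max. */
    boolean ensureCapacity(int needed) {
        if (needed > maxBytes)
            return false;                      // cannot satisfy: caller must clean in chunks
        int size = buffer.capacity();
        while (size < needed)
            size = Math.min(size * 2, maxBytes);
        if (size > buffer.capacity()) {
            ByteBuffer bigger = ByteBuffer.allocate(size);
            buffer.flip();
            bigger.put(buffer);                // carry over existing entries
            buffer = bigger;
        }
        return true;
    }

    int capacity() { return buffer.capacity(); }
}
```

With this scheme a large default maximum (say 1GB) costs nothing when there are few keys, because memory is only allocated as the key set grows.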





[jira] [Commented] (KAFKA-2700) delete topic should remove the corresponding ACL and configs

2016-09-15 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15492770#comment-15492770
 ] 

Manikumar Reddy commented on KAFKA-2700:


[~parth.brahmbhatt], are you working on this JIRA? If not, do you mind if I 
take it up?


> delete topic should remove the corresponding ACL and configs
> 
>
> Key: KAFKA-2700
> URL: https://issues.apache.org/jira/browse/KAFKA-2700
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Parth Brahmbhatt
>
> After a topic is successfully deleted, we should also remove any ACLs, configs 
> and perhaps committed offsets associated with the topic.
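The cleanup the ticket asks for amounts to a single hook fired on topic deletion. The sketch below is hypothetical and uses plain in-memory maps standing in for Kafka's real authorizer, config, and offset stores; `TopicCleanup` and `onTopicDeleted` are not Kafka APIs.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: drop a deleted topic's ACLs, per-topic configs,
// and committed offsets in one place. The stores are illustrative only.
public class TopicCleanup {
    final Set<String> acls = new HashSet<>();                   // "topic:principal:op"
    final Map<String, String> topicConfigs = new HashMap<>();   // topic -> config overrides
    final Map<String, Long> committedOffsets = new HashMap<>(); // "group/topic/partition" -> offset

    void onTopicDeleted(String topic) {
        // Remove every ACL entry scoped to this topic.
        acls.removeIf(acl -> acl.startsWith(topic + ":"));
        // Remove the topic's config overrides.
        topicConfigs.remove(topic);
        // Remove committed offsets for all groups/partitions of the topic.
        committedOffsets.keySet().removeIf(k -> k.contains("/" + topic + "/"));
    }
}
```

In the real broker each of these lives in a different subsystem, which is why the ticket calls deletion of committed offsets "perhaps": it may need separate coordination.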



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1540

2016-09-15 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-4160: Ensure rebalance listener not called with coordinator 
lock

--
[...truncated 5917 lines...]
kafka.log.FileMessageSetTest > testPreallocateTrue STARTED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent STARTED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testTruncateIfSizeIsDifferentToTargetSize STARTED

kafka.log.FileMessageSetTest > testTruncateIfSizeIsDifferentToTargetSize PASSED

kafka.log.FileMessageSetTest > testFormatConversionWithPartialMessage STARTED

kafka.log.FileMessageSetTest > testFormatConversionWithPartialMessage PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition STARTED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead STARTED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo STARTED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse STARTED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown STARTED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testMessageFormatConversion STARTED

kafka.log.FileMessageSetTest > testMessageFormatConversion PASSED

kafka.log.FileMessageSetTest > testSearch STARTED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes STARTED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogConfigTest > testFromPropsEmpty STARTED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testKafkaConfigToProps STARTED

kafka.log.LogConfigTest > testKafkaConfigToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid STARTED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED


[jira] [Resolved] (KAFKA-4174) Delete a Config that does not exist in ConsumerConfig

2016-09-15 Thread shunichi ishii (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shunichi ishii resolved KAFKA-4174.
---
Resolution: Not A Bug

> Delete a Config that does not exist in ConsumerConfig
> -
>
> Key: KAFKA-4174
> URL: https://issues.apache.org/jira/browse/KAFKA-4174
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.0, 0.10.0.1
>Reporter: shunichi ishii
>Assignee: Guozhang Wang
>Priority: Trivial
>  Labels: easyfix
>
> $ ./bin/kafka-run-class.sh org.apache.kafka.streams.examples.pipe.PipeDemo
> [2016-09-15 13:10:49,789] WARN The configuration 'replication.factor' was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.consumer.ConsumerConfig)
> [2016-09-15 13:10:49,790] WARN The configuration 
> 'windowstore.changelog.additional.retention.ms' was supplied but isn't a 
> known config. (org.apache.kafka.clients.consumer.ConsumerConfig)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4174) Delete a Config that does not exist in ConsumerConfig

2016-09-15 Thread shunichi ishii (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15492652#comment-15492652
 ] 

shunichi ishii commented on KAFKA-4174:
---

Sorry, it is not a bug.

> Delete a Config that does not exist in ConsumerConfig
> -
>
> Key: KAFKA-4174
> URL: https://issues.apache.org/jira/browse/KAFKA-4174
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.0, 0.10.0.1
>Reporter: shunichi ishii
>Assignee: Guozhang Wang
>Priority: Trivial
>  Labels: easyfix
>
> $ ./bin/kafka-run-class.sh org.apache.kafka.streams.examples.pipe.PipeDemo
> [2016-09-15 13:10:49,789] WARN The configuration 'replication.factor' was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.consumer.ConsumerConfig)
> [2016-09-15 13:10:49,790] WARN The configuration 
> 'windowstore.changelog.additional.retention.ms' was supplied but isn't a 
> known config. (org.apache.kafka.clients.consumer.ConsumerConfig)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #883

2016-09-15 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-4160: Ensure rebalance listener not called with coordinator 
lock

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on jenkins-test-ee0 (Ubuntu ubuntu jenkins-cloud-8GB 
jenkins-cloud-4GB cloud-slave) in workspace 

Cloning the remote Git repository
Cloning repository https://git-wip-us.apache.org/repos/asf/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision f197ad4997032a848540a7d577b5846f76a26bfb 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f197ad4997032a848540a7d577b5846f76a26bfb
 > git rev-list 084a19e9acb43666cfcaa2ca155a775d47cd8b39 # timeout=10
Unpacking https://services.gradle.org/distributions/gradle-2.4-rc-2-bin.zip to 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
 on jenkins-test-ee0
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson5640170997317946108.sh
+ rm -rf 
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Download 
https://repo1.maven.org/maven2/org/ajoberstar/grgit/1.5.0/grgit-1.5.0.pom
Download 
https://jcenter.bintray.com/com/github/ben-manes/gradle-versions-plugin/0.12.0/gradle-versions-plugin-0.12.0.pom
Download 
https://repo1.maven.org/maven2/org/scoverage/gradle-scoverage/2.1.0/gradle-scoverage-2.1.0.pom
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/4.1.1.201511131810-r/org.eclipse.jgit-4.1.1.201511131810-r.pom
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit-parent/4.1.1.201511131810-r/org.eclipse.jgit-parent-4.1.1.201511131810-r.pom
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/4.1.1.201511131810-r/org.eclipse.jgit.ui-4.1.1.201511131810-r.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.9/jsch.agentproxy.jsch-0.0.9.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy/0.0.9/jsch.agentproxy-0.0.9.pom
Download 
https://repo1.maven.org/maven2/org/sonatype/oss/oss-parent/6/oss-parent-6.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.9/jsch.agentproxy.pageant-0.0.9.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.9/jsch.agentproxy.sshagent-0.0.9.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-jna/0.0.9/jsch.agentproxy.usocket-jna-0.0.9.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-nc/0.0.9/jsch.agentproxy.usocket-nc-0.0.9.pom
Download 
https://repo1.maven.org/maven2/org/slf4j/slf4j-api/1.7.12/slf4j-api-1.7.12.pom
Download 
https://repo1.maven.org/maven2/org/slf4j/slf4j-parent/1.7.12/slf4j-parent-1.7.12.pom
Download 
https://repo1.maven.org/maven2/com/thoughtworks/xstream/xstream/1.4.7/xstream-1.4.7.pom
Download 
https://repo1.maven.org/maven2/com/thoughtworks/xstream/xstream-parent/1.4.7/xstream-parent-1.4.7.pom
Download https://repo1.maven.org/maven2/com/jcraft/jsch/0.1.53/jsch-0.1.53.pom
Download 
https://repo1.maven.org/maven2/com/googlecode/javaewah/JavaEWAH/0.7.9/JavaEWAH-0.7.9.pom
Download 
https://repo1.maven.org/maven2/org/eclipse/jdt/org.eclipse.jdt.annotation/1.1.0/org.eclipse.jdt.annotation-1.1.0.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.core/0.0.9/jsch.agentproxy.core-0.0.9.pom
Download https://repo1.maven.org/maven2/net/java/dev/jna/jna/4.1.0/jna-4.1.0.pom
Download 

Re: [VOTE] 0.10.1 Release Plan

2016-09-15 Thread Neha Narkhede
+1 (binding)

On Tue, Sep 13, 2016 at 6:58 PM Becket Qin  wrote:

> +1 (non-binding)
>
> On Tue, Sep 13, 2016 at 5:33 PM, Dana Powers 
> wrote:
>
> > +1
> >
>
-- 
Thanks,
Neha