Re: [DISCUSS] KIP-54 Sticky Partition Assignment Strategy

2016-04-29 Thread Edoardo Comar
Hi 

why is the calculation of the partition assignments to group members being 
executed by the client (the leader of the group), 
rather than by the server (e.g. by the group coordinator)?

This question came up while working with Vahid Hashemian on 
https://issues.apache.org/jira/browse/KAFKA-2273
We have implemented the propagation of the overall assignment solution to 
every consumer in a group 
by using the userData field in PartitionAssignor.Assignment.
This way, even if the leader dies, any other consumer that becomes the 
leader has access to the last computed assignment for everyone.
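
Roughly, the mechanism looks like this (a minimal sketch with a toy string 
encoding, not the actual KAFKA-2273 patch, which would use a proper 
serialization format):

{code}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Sketch: serialize the last computed global assignment into the userData
// bytes carried in every member's PartitionAssignor.Assignment, so that a
// newly elected leader can recover it from any member.
public class AssignmentUserData {

    // Toy encoding of a global assignment, e.g. "consumerA=t-0,t-1;consumerB=t-2"
    static ByteBuffer encode(String globalAssignment) {
        return ByteBuffer.wrap(globalAssignment.getBytes(StandardCharsets.UTF_8));
    }

    static String decode(ByteBuffer userData) {
        byte[] bytes = new byte[userData.remaining()];
        userData.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer wire = encode("consumerA=t-0,t-1;consumerB=t-2");
        System.out.println(decode(wire)); // round-trips through the userData field
    }
}
{code}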

However, the fact that these pluggable assignment strategies execute on the 
client makes the implementation of clients in other languages more 
laborious.
If they executed in the broker, every language could take advantage 
of the available strategies.

Would it be feasible to move the execution to the server? Is this worth a 
new KIP?

thanks,
Edo
--
Edoardo Comar
MQ Cloud Technologies
eco...@uk.ibm.com
+44 (0)1962 81 5576 
IBM UK Ltd, Hursley Park, SO21 2JN

IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU



[jira] [Commented] (KAFKA-2729) Cached zkVersion not equal to that in zookeeper, broker not recovering.

2016-04-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15263678#comment-15263678
 ] 

Stig Rohde Døssing commented on KAFKA-2729:
---

We hit this on 0.9.0.1 today:
{code}
[2016-04-28 19:18:22,834] INFO Partition [dce-data,13] on broker 3: Shrinking 
ISR for partition [dce-data,13] from 3,2 to 3 (kafka.cluster.Partition)
[2016-04-28 19:18:22,845] INFO Partition [dce-data,13] on broker 3: Cached 
zkVersion [304] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition)
[2016-04-28 19:18:32,785] INFO Partition [dce-data,16] on broker 3: Shrinking 
ISR for partition [dce-data,16] from 3,2 to 3 (kafka.cluster.Partition)
[2016-04-28 19:18:32,803] INFO Partition [dce-data,16] on broker 3: Cached 
zkVersion [312] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition)
{code}
which continued until we rebooted broker 3. The ISR at this time in Zookeeper 
had only broker 2, and there was no leader for the affected partitions. I 
believe the preferred leader for these partitions was 3.

> Cached zkVersion not equal to that in zookeeper, broker not recovering.
> ---
>
> Key: KAFKA-2729
> URL: https://issues.apache.org/jira/browse/KAFKA-2729
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Danil Serdyuchenko
>
> After a small network wobble where zookeeper nodes couldn't reach each other, 
> we started seeing a large number of under-replicated partitions. The zookeeper 
> cluster recovered; however, we continued to see a large number of 
> under-replicated partitions. Two brokers in the kafka cluster were showing this 
> in the logs:
> {code}
> [2015-10-27 11:36:00,888] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Shrinking ISR for 
> partition [__samza_checkpoint_event-creation_1,3] from 6,5 to 5 
> (kafka.cluster.Partition)
> [2015-10-27 11:36:00,891] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Cached zkVersion [66] 
> not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
> {code}
> This happened for all of the topics on the affected brokers. Both brokers only 
> recovered after a restart. Our own investigation yielded nothing; I was hoping 
> you could shed some light on this issue. It may be related to 
> https://issues.apache.org/jira/browse/KAFKA-1382; however, we're using 
> 0.8.2.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1880) Add support for checking binary/source compatibility

2016-04-29 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-1880:
---
Attachment: compatibilityReport-only-incompatible.html

Adding a more complete sample report that only includes breaking changes. 

I think we need to define more tightly what's "public" and what's not, and 
also what should be serializable. We have a decent number of serialization 
breaks in common.

> Add support for checking binary/source compatibility
> 
>
> Key: KAFKA-1880
> URL: https://issues.apache.org/jira/browse/KAFKA-1880
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Ashish K Singh
>Assignee: Grant Henke
> Attachments: compatibilityReport-only-incompatible.html, 
> compatibilityReport.html
>
>
> Recent discussions around compatibility show how important compatibility is 
> to users. [Java API Compliance 
> Checker|http://ispras.linuxbase.org/index.php/Java_API_Compliance_Checker] is 
> a tool for checking backward binary and source-level compatibility of a Java 
> library API. Kafka can leverage the tool to find and fix existing 
> incompatibility issues and avoid new issues from getting into the product.





[DISCUSS] mbeans overwritten with identical clients on a single jvm

2016-04-29 Thread Onur Karaman
Hey everyone. I think we might need to have an actual discussion on an
issue I brought up a while ago in
https://issues.apache.org/jira/browse/KAFKA-3494. It seems like client-ids
are being used for too many things today:
1. kafka-request.log. This helps if you ever want to associate a client
with a specific request. Maybe you're looking for a badly behaved client.
Maybe the client has reported unexpectedly long response times from the
broker and you want to figure out what was happening.
2. quotas. Quotas today are implemented on a (client-id, broker)
granularity.
3. metrics. KafkaConsumer and KafkaProducer metrics only go as granular as
the client-id.

The reason I'm bringing this up is because it looks like there's a conflict
in intent for client-ids between the quota and metrics scenarios. One of
the motivating factors for choosing the client-id for quotas was that it
allows for flexibility in the granularity of the quota enforcement. For
instance, entire services can share the same id to get some form of
(service, broker) granularity quotas. From my understanding, client-id was
chosen as the quota id because it's a property that already exists on the
clients, so we'd be able to quota older clients with no additional work,
and reusing it had relatively low impact.

So while quotas encourage reuse of client-ids across client instances,
there is a common scenario where the metrics fall apart and mbeans get
overwritten. It looks like if there are two KafkaConsumers or two
KafkaProducers with the same client-id in the same jvm, then JmxReporter
will unregister the first client's mbeans while registering the second
client's mbeans.
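
To make the collision concrete, here is a toy illustration using plain JMX
rather than the Kafka clients (names are illustrative): two clients that
derive the same ObjectName from a shared client-id cannot both be registered,
so one set of metrics is lost either way.

{code}
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MbeanCollision {
    public interface CounterMBean { long getCount(); }
    public static class Counter implements CounterMBean {
        public long getCount() { return 42L; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Mirrors a name like kafka.consumer:type=consumer-metrics,client-id=shared-id
        ObjectName name = new ObjectName("demo:type=client-metrics,client-id=shared-id");
        server.registerMBean(new Counter(), name);
        // A second client with the same client-id maps to the same ObjectName.
        // Plain JMX throws InstanceAlreadyExistsException here; JmxReporter
        // instead ends up replacing the first client's mbeans.
        server.registerMBean(new Counter(), name);
    }
}
{code}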

It seems like for the three use cases noted above (kafka-request.log,
metrics, quotas), there are different desirable characteristics:
1. kafka-request.log at the very least would want an id that could
distinguish individual client instances, but it might be nice to go even
more granular at say a per connection level.
2. quotas would want an id that's shareable among a group of clients that
wish to be quota'd together. This id can be defined by the user.
3. metrics would want an id that could distinguish individual client
instances. This id can be defined by the user. We expect it to stay the same
across process restarts so we can potentially associate metrics across
them.

To resolve this, I think we'd want metrics to have another tag to
differentiate mbeans from instances with the same client-id. Another
alternative is to make quotas depend on a quota id instead of client-id (as
brought up in KIP-55), but this means we no longer can quota older clients
out of the box.

Other suggestions are welcome!


[jira] [Assigned] (KAFKA-3637) Add method that checks if streams are initialised

2016-04-29 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liquan Pei reassigned KAFKA-3637:
-

Assignee: Liquan Pei

> Add method that checks if streams are initialised
> -
>
> Key: KAFKA-3637
> URL: https://issues.apache.org/jira/browse/KAFKA-3637
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Eno Thereska
>Assignee: Liquan Pei
>  Labels: newbie
> Fix For: 0.10.1.0
>
>
> Currently, when streams are initialised and started with streams.start(), 
> there is no way for the caller to know whether the initialisation procedure 
> (including starting tasks) is complete or not. Hence, the caller is forced to 
> guess how long to wait. It would be good to have a way to return the 
> state of the streams to the caller.
> One option would be to follow an approach similar to the one in Kafka Server 
> (BrokerStates.scala).
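
A hypothetical sketch of what such a state-reporting API could look like (all
names are illustrative, not the eventual Kafka Streams API):

{code}
// Hypothetical sketch, in the spirit of BrokerStates.scala.
public class StreamsStateSketch {
    public enum State { CREATED, REBALANCING, RUNNING, PENDING_SHUTDOWN, NOT_RUNNING }

    private volatile State state = State.CREATED;

    public State state() { return state; }

    // Callers could poll this instead of guessing how long start() takes.
    public boolean isInitialised() { return state == State.RUNNING; }

    void transitionTo(State newState) { state = newState; }
}
{code}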





[jira] [Assigned] (KAFKA-3213) [CONNECT] It looks like we are not backing off properly when reconfiguring tasks

2016-04-29 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liquan Pei reassigned KAFKA-3213:
-

Assignee: Liquan Pei  (was: Ewen Cheslack-Postava)

> [CONNECT] It looks like we are not backing off properly when reconfiguring 
> tasks
> 
>
> Key: KAFKA-3213
> URL: https://issues.apache.org/jira/browse/KAFKA-3213
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Gwen Shapira
>Assignee: Liquan Pei
>
> Looking at logs of attempt to reconfigure connector while leader is 
> restarting, I see:
> {code}
> [2016-01-29 20:31:01,799] ERROR IO error forwarding REST request:  
> (org.apache.kafka.connect.runtime.rest.RestServer)
> java.net.ConnectException: Connection refused
> [2016-01-29 20:31:01,802] ERROR Request to leader to reconfigure connector 
> tasks failed (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: IO Error 
> trying to forward REST request: Connection refused
> [2016-01-29 20:31:01,802] ERROR Failed to reconfigure connector's tasks, 
> retrying after backoff: 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: IO Error 
> trying to forward REST request: Connection refused
> [2016-01-29 20:31:01,803] DEBUG Sending POST with input 
> [{"tables":"bar","table.poll.interval.ms":"1000","incrementing.column.name":"id","connection.url":"jdbc:mysql://worker1:3306/testdb?user=root","name":"test-mysql-jdbc","tasks.max":"3","task.class":"io.confluent.connect.jdbc.JdbcSourceTask","poll.interval.ms":"1000","topic.prefix":"test-","connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector","mode":"incrementing"},{"tables":"foo","table.poll.interval.ms":"1000","incrementing.column.name":"id","connection.url":"jdbc:mysql://worker1:3306/testdb?user=root","name":"test-mysql-jdbc","tasks.max":"3","task.class":"io.confluent.connect.jdbc.JdbcSourceTask","poll.interval.ms":"1000","topic.prefix":"test-","connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector","mode":"incrementing"}]
>  to http://worker2:8083/connectors/test-mysql-jdbc/tasks 
> (org.apache.kafka.connect.runtime.rest.RestServer)
> [2016-01-29 20:31:01,803] ERROR IO error forwarding REST request:  
> (org.apache.kafka.connect.runtime.rest.RestServer)
> java.net.ConnectException: Connection refused
> [2016-01-29 20:31:01,804] ERROR Request to leader to reconfigure connector 
> tasks failed (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: IO Error 
> trying to forward REST request: Connection refused
> {code}
> Note that it looks like we are retrying every 1ms, while I'd expect a retry 
> every 250ms.
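
For comparison, a sketch of the behaviour Gwen expects (a fixed 250 ms pause
between forwarding attempts; illustrative only, not the DistributedHerder code):

{code}
// Illustrative only: a 250 ms retry backoff. Per the logs above, the actual
// wait between attempts appears to be ~1 ms.
public class BackoffRetry {
    static final long BACKOFF_MS = 250;

    public static void main(String[] args) throws InterruptedException {
        for (int attempt = 1; attempt <= 5; attempt++) {
            if (forwardToLeader())
                return;
            System.out.println("attempt " + attempt + " failed, sleeping " + BACKOFF_MS + " ms");
            Thread.sleep(BACKOFF_MS);
        }
    }

    // Stand-in for the REST call that is getting "Connection refused".
    static boolean forwardToLeader() { return false; }
}
{code}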





[GitHub] kafka pull request: KAFKA-3613: Consolidate TumblingWindows and Ho...

2016-04-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1277


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3613) Consolidate tumbling windows and hopping windows

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264134#comment-15264134
 ] 

ASF GitHub Bot commented on KAFKA-3613:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1277


> Consolidate tumbling windows and hopping windows
> 
>
> Key: KAFKA-3613
> URL: https://issues.apache.org/jira/browse/KAFKA-3613
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Michael Noll
>Assignee: Michael Noll
>
> We currently have two separate implementations for tumbling windows and 
> hopping windows, even though tumbling windows are simply a specialization of 
> hopping windows.  We should thus consolidate/merge the two separate 
> implementations into a new TimeWindows / TimeWindow.





[jira] [Resolved] (KAFKA-3613) Consolidate tumbling windows and hopping windows

2016-04-29 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3613.
--
Resolution: Fixed
Fix Version/s: 0.10.0.0

Issue resolved by pull request 1277
[https://github.com/apache/kafka/pull/1277]

> Consolidate tumbling windows and hopping windows
> 
>
> Key: KAFKA-3613
> URL: https://issues.apache.org/jira/browse/KAFKA-3613
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Michael Noll
>Assignee: Michael Noll
> Fix For: 0.10.0.0
>
>
> We currently have two separate implementations for tumbling windows and 
> hopping windows, even though tumbling windows are simply a specialization of 
> hopping windows.  We should thus consolidate/merge the two separate 
> implementations into a new TimeWindows / TimeWindow.
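
For reference, a sketch of what the consolidation enables, assuming the
0.10-era TimeWindows factories (if memory serves, windows took a name then): a
tumbling window is just a hopping window whose advance interval equals its size.

{code}
import org.apache.kafka.streams.kstream.TimeWindows;

public class WindowsDemo {
    public static void main(String[] args) {
        // Tumbling: 5-minute windows advancing by their own size (no overlap).
        TimeWindows tumbling = TimeWindows.of("tumbling", 5 * 60 * 1000L);

        // Hopping: 5-minute windows advancing every minute (overlapping).
        TimeWindows hopping = TimeWindows.of("hopping", 5 * 60 * 1000L)
                                         .advanceBy(60 * 1000L);
    }
}
{code}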





[GitHub] kafka pull request: KAFKA-3598: Improve JavaDoc of public API

2016-04-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1250




[jira] [Commented] (KAFKA-3598) Improve JavaDoc of public API

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264226#comment-15264226
 ] 

ASF GitHub Bot commented on KAFKA-3598:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1250


> Improve JavaDoc of public API
> -
>
> Key: KAFKA-3598
> URL: https://issues.apache.org/jira/browse/KAFKA-3598
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Trivial
>  Labels: docs
> Fix For: 0.10.0.0
>
>
> Add missing JavaDoc to all {{public}} methods of the public API. Related to 
> KAFKA-3440 and KAFKA-3574.





[jira] [Updated] (KAFKA-3598) Improve JavaDoc of public API

2016-04-29 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3598:
-
Resolution: Fixed
Fix Version/s: 0.10.0.0
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1250
[https://github.com/apache/kafka/pull/1250]

> Improve JavaDoc of public API
> -
>
> Key: KAFKA-3598
> URL: https://issues.apache.org/jira/browse/KAFKA-3598
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Trivial
>  Labels: docs
> Fix For: 0.10.0.0
>
>
> Add missing JavaDoc to all {{public}} methods of the public API. Related to 
> KAFKA-3440 and KAFKA-3574.





[jira] [Updated] (KAFKA-2693) Run relevant ducktape tests with SASL/PLAIN and multiple mechanisms

2016-04-29 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2693:
-
Resolution: Fixed
Fix Version/s: 0.10.1.0  (was: 0.10.0.0)
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1282
[https://github.com/apache/kafka/pull/1282]

> Run relevant ducktape tests with SASL/PLAIN and multiple mechanisms
> ---
>
> Key: KAFKA-2693
> URL: https://issues.apache.org/jira/browse/KAFKA-2693
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.10.1.0
>
>
> KAFKA-2644 runs sanity test, replication tests and benchmarks with SASL using 
> mechanism GSSAPI. For SASL/PLAIN, run sanity test and replication tests.





[GitHub] kafka pull request: KAFKA-2693: Ducktape tests for SASL/PLAIN and ...

2016-04-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1282




[jira] [Commented] (KAFKA-2693) Run relevant ducktape tests with SASL/PLAIN and multiple mechanisms

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264312#comment-15264312
 ] 

ASF GitHub Bot commented on KAFKA-2693:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1282


> Run relevant ducktape tests with SASL/PLAIN and multiple mechanisms
> ---
>
> Key: KAFKA-2693
> URL: https://issues.apache.org/jira/browse/KAFKA-2693
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.10.1.0
>
>
> KAFKA-2644 runs sanity test, replication tests and benchmarks with SASL using 
> mechanism GSSAPI. For SASL/PLAIN, run sanity test and replication tests.





Re: [VOTE] KIP-45: Standardize KafkaConsumer API to use Collection

2016-04-29 Thread Jason Gustafson
Hey Harsha,

One issue with adding back subscribe(List), but marking it deprecated is
that it may confuse some users if they use the typical Arrays.asList()
pattern. You'd have to cast to a Collection to avoid the deprecation
warning, which is awkward. Maybe it would be better in that case to keep
the List alternatives forever?
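
To illustrate the awkwardness (a minimal sketch, assuming both overloads
existed side by side with subscribe(List) deprecated):

{code}
import java.util.Arrays;
import java.util.Collection;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SubscribeCompat {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

        // Arrays.asList returns a List, so javac binds this call to the
        // subscribe(List) overload; if that overload is @Deprecated, the most
        // common usage pattern emits a deprecation warning.
        consumer.subscribe(Arrays.asList("topic-a", "topic-b"));

        // Avoiding the warning requires an explicit upcast to Collection,
        // which forces the subscribe(Collection) overload instead.
        consumer.subscribe((Collection<String>) Arrays.asList("topic-a", "topic-b"));

        consumer.close();
    }
}
{code}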

In general, I'm not opposed to adding the methods back. When we voted on
KIP-45, I think many of us were on the fence anyway. It would be nice to
hear what others think.

-Jason

On Thu, Apr 28, 2016 at 6:30 PM, Harsha  wrote:

> Hi Jason,
> "t. I think what you're
> saying is that the KafkaSpout has been compiled against the 0.9 client,
> but
> it may need to be to run against 0.10 (if the user depends on that
> version
> instead). Is that correct?"
>
> Yes, that's true.
>
> " Is that correct? In general, are you expecting that KafkaSpout
> > > will work with any kafka-clients greater than 0.9?"
>
> In general, yes. But given that the interface is marked unstable, it's
> probably not reasonable to expect it to work across new versions
> of Kafka.
>
> "Another question that
> > > comes to mind is whether we would also need to revert to the old
> versions
> > > of subscribe() and assign().
> Yes, you are right on these methods. We would need to add these two as
> well.
>
>
> My issue is that users who built their clients using the 0.9.x Java API will
> have to change once the 0.10 release is out. The alternative I am proposing
> is to give these users time to move onto the newly added API and to keep the
> old methods with a deprecated tag for at least one version.
>
> Thanks,
> Harsha
>
>
> On Thu, Apr 28, 2016, at 04:41 PM, Grant Henke wrote:
> > FYI. I have attached a sample clients API change/compatibility report to
> > KAFKA-1880 . The
> report
> > shows changes in the public apis between the 0.9 and trunk branches. Some
> > of them are expected per KIP-45 obviously.
> >
> > Thanks,
> > Grant
> >
> >
> > On Thu, Apr 28, 2016 at 6:33 PM, Jason Gustafson 
> > wrote:
> >
> > > Hey Harsha,
> > >
> > > We're just trying to understand the problem first. I think what you're
> > > saying is that the KafkaSpout has been compiled against the 0.9
> client, but
> > > it may need to be to run against 0.10 (if the user depends on that
> version
> > > instead). Is that correct? In general, are you expecting that
> KafkaSpout
> > > will work with any kafka-clients greater than 0.9? Another question
> that
> > > comes to mind is whether we would also need to revert to the old
> versions
> > > of subscribe() and assign(). The argument type was changed from List to
> > > Collection, which is not binary compatible, right?
> > >
> > > Thanks,
> > > Jason
> > >
> > > On Thu, Apr 28, 2016 at 1:41 PM, Harsha  wrote:
> > >
> > > > Hi Ismael,
> > > > This will solve both binary and source compatibility. Storm has a new
> > > > KafkaSpout that uses the 0.9.x KafkaConsumer API. As part of that spout
> > > > we used KafkaConsumer.seekToBeginning and other methods, whose
> > > > signatures changed as part of KIP-45. If we update the version to 0.10,
> > > > we break the KafkaConsumer calls in our Storm spout. In Storm's case we
> > > > ask users to create an uber jar with all the required dependencies, and
> > > > users are free to choose which version of Kafka goes into that uber
> > > > jar. If they use the Storm 1.0 release of storm-kafka with Kafka 0.10,
> > > > it will create issues without the patch.
> > > > I am still not getting a clear answer here. What exactly is the issue
> > > > in keeping these methods with a deprecated tag? We keep the interface
> > > > as it is.
> > > >
> > > > Thanks,
> > > > Harsha
> > > >
> > > > On Thu, Apr 28, 2016, at 01:27 PM, Ismael Juma wrote:
> > > > > Hi Harsha,
> > > > >
> > > > > What is the aim of the PR, is it to fix binary compatibility,
> source
> > > > > compatibility or both? I think it only fixes source compatibility,
> so I
> > > > > am
> > > > > interested in what testing has been done to ensure that this fix
> solves
> > > > > the
> > > > > Storm issue.
> > > > >
> > > > > Thanks,
> > > > > Ismael
> > > > >
> > > > > On Thu, Apr 28, 2016 at 12:58 PM, Harsha  wrote:
> > > > >
> > > > > > Hi,
> > > > > > We missed this vote earlier and realized that it's breaking the
> > > > > > 0.9.x client API compatibility. I opened a JIRA here:
> > > > > > https://issues.apache.org/jira/browse/KAFKA-3633 . Can we keep
> > > > > > the old methods with a deprecated tag in the 0.10 release?
> > > > > >
> > > > > > Thanks,
> > > > > > Harsha
> > > > > >
> > > > > > On Fri, Mar 18, 2016, at 01:51 PM, Jason Gustafson wrote:
> > > > > > > Looks like th

[GitHub] kafka pull request: KAFKA-3559: lazy initialisation of state store...

2016-04-29 Thread enothereska
Github user enothereska closed the pull request at:

https://github.com/apache/kafka/pull/1223




[jira] [Commented] (KAFKA-3559) Task creation time taking too long in rebalance callback

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264350#comment-15264350
 ] 

ASF GitHub Bot commented on KAFKA-3559:
---

Github user enothereska closed the pull request at:

https://github.com/apache/kafka/pull/1223


> Task creation time taking too long in rebalance callback
> 
>
> Key: KAFKA-3559
> URL: https://issues.apache.org/jira/browse/KAFKA-3559
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Eno Thereska
>  Labels: architecture
> Fix For: 0.10.1.0
>
>
> Currently in Kafka Streams, we create stream tasks upon getting newly 
> assigned partitions in the rebalance callback {{onPartitionsAssigned}}, 
> which also involves initialization of the processor state stores 
> (including opening RocksDB, restoring the stores from their changelogs, 
> etc., which takes time).
> With a large number of state stores, the initialization time itself could 
> take tens of seconds, which is usually larger than the consumer session 
> timeout. As a result, by the time the callback completes, the consumer has 
> already been treated as failed by the coordinator and a rebalance is 
> triggered again.
> We need to consider whether we can optimize the initialization process, or 
> move it out of the callback function and, while initializing the stores 
> one by one, use poll calls to send heartbeats to avoid being kicked out by 
> the coordinator.
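
A sketch of the mitigation described above (names are made up, not the actual
streams internals):

{code}
import java.util.List;

// Initialise stores one at a time, heartbeating in between, rather than doing
// all the work inside the rebalance callback.
public class IncrementalInit {
    interface StateStore { void init(); }

    static void initialiseStores(List<StateStore> stores) {
        for (StateStore store : stores) {
            store.init();  // open RocksDB, restore from the changelog, etc.
            heartbeat();   // keep the coordinator from declaring this member dead
        }
    }

    static void heartbeat() { /* e.g. a poll-based heartbeat to the coordinator */ }
}
{code}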





[jira] [Updated] (KAFKA-3559) Task creation time taking too long in rebalance callback

2016-04-29 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska updated KAFKA-3559:

Fix Version/s: 0.10.1.0  (was: 0.10.0.0)

> Task creation time taking too long in rebalance callback
> 
>
> Key: KAFKA-3559
> URL: https://issues.apache.org/jira/browse/KAFKA-3559
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Eno Thereska
>  Labels: architecture
> Fix For: 0.10.1.0
>
>
> Currently in Kafka Streams, we create stream tasks upon getting newly 
> assigned partitions in the rebalance callback {{onPartitionsAssigned}}, 
> which also involves initialization of the processor state stores 
> (including opening RocksDB, restoring the stores from their changelogs, 
> etc., which takes time).
> With a large number of state stores, the initialization time itself could 
> take tens of seconds, which is usually larger than the consumer session 
> timeout. As a result, by the time the callback completes, the consumer has 
> already been treated as failed by the coordinator and a rebalance is 
> triggered again.
> We need to consider whether we can optimize the initialization process, or 
> move it out of the callback function and, while initializing the stores 
> one by one, use poll calls to send heartbeats to avoid being kicked out by 
> the coordinator.





Jenkins build is back to normal : kafka-trunk-jdk7 #1232

2016-04-29 Thread Apache Jenkins Server
See 



Re: [VOTE] KIP-45: Standardize KafkaConsumer API to use Collection

2016-04-29 Thread Grant Henke
I think you are right Jason. People were definitely on the fence about this
and we went back and forth for quite some time.

I think the main point in the KIP discussion that made this decision is
that the Consumer was annotated with the Unstable annotation. Given how
new the Consumer is, we wanted to leverage that to make sure the interface
is clean. The same will be true for KafkaStreams in the upcoming release.

We did agree that we should discuss the annotations and what our
compatibility story is in the future. However, for now the documentation of
the Unstable annotation says, "No guarantee is provided as to reliability
or stability across any level of release granularity."  If we can't
leverage the Unstable annotation to make breaking changes where necessary,
it will be tough to vet new apis without generating a lot of deprecated
code.

Note: We did remove the Unstable annotation from the Consumer interface for
0.10 implying that it is now stable. (KAFKA-3435
)

Thanks,
Grant

On Fri, Apr 29, 2016 at 12:05 PM, Jason Gustafson 
wrote:

> Hey Harsha,
>
> One issue with adding back subscribe(List), but marking it deprecated is
> that it may confuse some users if they use the typical Arrays.asList()
> pattern. You'd have to cast to a Collection to avoid the deprecation
> warning, which is awkward. Maybe it would be better in that case to keep
> the List alternatives forever?
>
> In general, I'm not opposed to adding the methods back. When we voted on
> KIP-45, I think many of us were on the fence anyway. It would be nice to
> hear what others think.
>
> -Jason
>
> On Thu, Apr 28, 2016 at 6:30 PM, Harsha  wrote:
>
> > Hi Jason,
> > "t. I think what you're
> > saying is that the KafkaSpout has been compiled against the 0.9 client,
> > but
> > it may need to be to run against 0.10 (if the user depends on that
> > version
> > instead). Is that correct?"
> >
> > Yes, that's true.
> >
> > " Is that correct? In general, are you expecting that KafkaSpout
> > > > will work with any kafka-clients greater than 0.9?"
> >
> > In general, yes. But given that the interface is marked unstable, it's
> > probably not reasonable to expect it to work across new versions
> > of Kafka.
> >
> > "Another question that
> > > > comes to mind is whether we would also need to revert to the old
> > versions
> > > > of subscribe() and assign().
> > Yes, you are right on these methods. We would need to add these two as
> > well.
> >
> >
> > My issue is that users who built their clients using the 0.9.x Java API
> > will have to change once the 0.10 release is out. The alternative I am
> > proposing is to give these users time to move onto the newly added API and
> > to keep the old methods with a deprecated tag for at least one version.
> >
> > Thanks,
> > Harsha
> >
> >
> > On Thu, Apr 28, 2016, at 04:41 PM, Grant Henke wrote:
> > > FYI. I have attached a sample clients API change/compatibility report
> to
> > > KAFKA-1880 . The
> > report
> > > shows changes in the public apis between the 0.9 and trunk branches.
> Some
> > > of them are expected per KIP-45 obviously.
> > >
> > > Thanks,
> > > Grant
> > >
> > >
> > > On Thu, Apr 28, 2016 at 6:33 PM, Jason Gustafson 
> > > wrote:
> > >
> > > > Hey Harsha,
> > > >
> > > > We're just trying to understand the problem first. I think what
> you're
> > > > saying is that the KafkaSpout has been compiled against the 0.9
> > client, but
> > > > it may need to be to run against 0.10 (if the user depends on that
> > version
> > > > instead). Is that correct? In general, are you expecting that
> > KafkaSpout
> > > > will work with any kafka-clients greater than 0.9? Another question
> > that
> > > > comes to mind is whether we would also need to revert to the old
> > versions
> > > > of subscribe() and assign(). The argument type was changed from List
> to
> > > > Collection, which is not binary compatible, right?
> > > >
> > > > Thanks,
> > > > Jason
> > > >
> > > > On Thu, Apr 28, 2016 at 1:41 PM, Harsha  wrote:
> > > >
> > > > > Hi Ismael,
> > > > > This will solve both binary and source compatibility. Storm has a new
> > > > > KafkaSpout that uses the 0.9.x KafkaConsumer API. As part of that
> > > > > spout we used KafkaConsumer.seekToBeginning and other methods, whose
> > > > > signatures changed as part of KIP-45. If we update the version to
> > > > > 0.10, we break the KafkaConsumer calls in our Storm spout. In Storm's
> > > > > case we ask users to create an uber jar with all the required
> > > > > dependencies, and users are free to choose which version of Kafka
> > > > > goes into that uber jar. If they use the Storm 1.0 release of
> > > > > storm-kafka with Kafka 0.10, it will create issues without the patch.

[GitHub] kafka pull request: KAFKA-3418: add javadoc section describing con...

2016-04-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1129




[jira] [Updated] (KAFKA-3418) Add section on detecting consumer failures in new consumer javadoc

2016-04-29 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3418:
-
Resolution: Fixed
Fix Version/s: 0.10.1.0  (was: 0.10.0.0)
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1129
[https://github.com/apache/kafka/pull/1129]

> Add section on detecting consumer failures in new consumer javadoc
> --
>
> Key: KAFKA-3418
> URL: https://issues.apache.org/jira/browse/KAFKA-3418
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.1.0
>
>
> There still seems to be a lot of confusion about the design of the poll() 
> loop in regard to consumer liveness. We do mention it in the javadoc, but 
> it's a little hidden and we aren't very clear on what the user should do to 
> limit the potential for the consumer to fall out of the group (such as 
> tweaking max.poll.records). We should pull this into a separate section (e.g. 
> Jay suggests "Detecting Consumer Failures") and give it a more complete 
> treatment.
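
For reference, the kind of loop such a section would describe (a sketch
against the 0.10-era consumer, where poll() itself is the liveness signal):

{code}
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Bounding the batch size bounds per-iteration processing time, which
        // keeps the gap between poll() calls under the session timeout.
        props.put("max.poll.records", "100");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Arrays.asList("demo-topic"));
            while (true) {
                // In this era heartbeats are sent from within poll(); stop
                // calling it for longer than the session timeout and the
                // coordinator evicts this member from the group.
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records)
                    System.out.println(record.key() + " -> " + record.value());
            }
        }
    }
}
{code}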





[jira] [Commented] (KAFKA-3418) Add section on detecting consumer failures in new consumer javadoc

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264384#comment-15264384
 ] 

ASF GitHub Bot commented on KAFKA-3418:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1129


> Add section on detecting consumer failures in new consumer javadoc
> --
>
> Key: KAFKA-3418
> URL: https://issues.apache.org/jira/browse/KAFKA-3418
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.1.0
>
>
> There still seems to be a lot of confusion about the design of the poll() 
> loop in regard to consumer liveness. We do mention it in the javadoc, but 
> it's a little hidden and we aren't very clear on what the user should do to 
> limit the potential for the consumer to fall out of the group (such as 
> tweaking max.poll.records). We should pull this into a separate section (e.g. 
> Jay suggests "Detecting Consumer Failures") and give it a more complete 
> treatment.





[GitHub] kafka pull request: KAFKA-3615: Exclude test jars in kafka-run-cla...

2016-04-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1263




[jira] [Created] (KAFKA-3642) Fix NPE from ProcessorStateManager when the changelog topic not exists

2016-04-29 Thread Yuto Kawamura (JIRA)
Yuto Kawamura created KAFKA-3642:


 Summary: Fix NPE from ProcessorStateManager when the changelog 
topic not exists
 Key: KAFKA-3642
 URL: https://issues.apache.org/jira/browse/KAFKA-3642
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.0.1
Reporter: Yuto Kawamura
Assignee: Yuto Kawamura
 Fix For: 0.10.1.0


# Fix NPE from ProcessorStateManager when the changelog topic does not exist

When the following two conditions are satisfied, ProcessorStateManager throws 
an NPE:

- a state store is configured with logging enabled but the corresponding 
-changelog topic does not exist, and
- zookeeper.connect wasn't supplied in the streams config.

So Streams should:
- expect that the -changelog topic may not exist and throw a more meaningful 
exception, and
- warn users if there's no -changelog topic prepared and zookeeper.connect 
wasn't supplied either.

BTW, I think making zookeeper.connect a mandatory argument could be another 
option, if it doesn't hurt.
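
A sketch of the "more meaningful exception" suggestion (class and method names
are illustrative, not the actual patch; the null comes from
KafkaConsumer#partitionsFor returning null for a missing topic):

{code}
import java.util.List;

public class ChangelogCheck {

    // Stand-in for KafkaConsumer#partitionsFor, which returns null when the
    // topic does not exist.
    static List<Integer> partitionsFor(String topic) {
        return null;
    }

    static List<Integer> changelogPartitions(String changelogTopic) {
        List<Integer> partitions = partitionsFor(changelogTopic);
        if (partitions == null)
            throw new IllegalStateException(
                "Changelog topic " + changelogTopic + " does not exist; "
                + "create it up front or supply zookeeper.connect so Streams can create it");
        return partitions;
    }

    public static void main(String[] args) {
        changelogPartitions("my-store-changelog"); // fails with a clear message
    }
}
{code}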

{code}
$ git diff  

diff --git 
a/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountProcessorDemo.java
 
b/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountProcessorDemo.java
index 34c35b7..c5339f1 100644
--- 
a/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountProcessorDemo.java
+++ 
b/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountProcessorDemo.java
@@ -108,7 +108,7 @@ public class WordCountProcessorDemo {
 Properties props = new Properties();
 props.put(StreamsConfig.APPLICATION_ID_CONFIG, 
"streams-wordcount-processor");
 props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
-props.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "localhost:2181");
+// props.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "localhost:2181");
 props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, 
Serdes.String().getClass());
 props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, 
Serdes.String().getClass());
 

$ ./bin/kafka-topics.sh --zookeeper localhost:2181 --list 2>/dev/null | grep 
'\-changelog'

$ ./bin/kafka-run-class.sh 
org.apache.kafka.streams.examples.wordcount.WordCountProcessorDemo 
...
[2016-04-30 02:25:04,960] ERROR User provided listener 
org.apache.kafka.streams.processor.internals.StreamThread$1 for group 
streams-wordcount-processor failed on partition assignment 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
java.lang.NullPointerException
at 
org.apache.kafka.streams.processor.internals.ProcessorStateManager.register(ProcessorStateManager.java:189)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.register(ProcessorContextImpl.java:116)
at 
org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStore.init(InMemoryKeyValueLoggedStore.java:64)
at 
org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:85)
at 
org.apache.kafka.streams.processor.internals.AbstractTask.initializeStateStores(AbstractTask.java:81)
at 
org.apache.kafka.streams.processor.internals.StreamTask.(StreamTask.java:115)
at 
org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:582)
at 
org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:609)
at 
org.apache.kafka.streams.processor.internals.StreamThread.access$000(StreamThread.java:71)
at 
org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:126)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:220)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:226)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:221)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture$2.onSuccess(RequestFuture.java:182)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:430)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHa

[jira] [Resolved] (KAFKA-3615) Exclude test jars in CLASSPATH of kafka-run-class.sh

2016-04-29 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3615.
--
Resolution: Fixed
Fix Version/s: 0.10.1.0  (was: 0.10.0.0)

Issue resolved by pull request 1263
[https://github.com/apache/kafka/pull/1263]

> Exclude test jars in CLASSPATH of kafka-run-class.sh
> 
>
> Key: KAFKA-3615
> URL: https://issues.apache.org/jira/browse/KAFKA-3615
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, build
>Affects Versions: 0.10.0.0
>Reporter: Liquan Pei
>Assignee: Liquan Pei
>  Labels: newbie
> Fix For: 0.10.1.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>






[jira] [Commented] (KAFKA-3615) Exclude test jars in CLASSPATH of kafka-run-class.sh

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264394#comment-15264394
 ] 

ASF GitHub Bot commented on KAFKA-3615:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1263


> Exclude test jars in CLASSPATH of kafka-run-class.sh
> 
>
> Key: KAFKA-3615
> URL: https://issues.apache.org/jira/browse/KAFKA-3615
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, build
>Affects Versions: 0.10.0.0
>Reporter: Liquan Pei
>Assignee: Liquan Pei
>  Labels: newbie
> Fix For: 0.10.1.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>






Re: [VOTE] KIP-45: Standardize KafkaConsumer API to use Collection

2016-04-29 Thread Grant Henke
If anyone wants to review the KIP call discussion we had on this just
before the vote, here is a link to the relevant session (6 minutes in):
https://youtu.be/Hcjur17TjBE?t=6m

On Fri, Apr 29, 2016 at 12:21 PM, Grant Henke  wrote:

> I think you are right Jason. People were definitely on the fence about
> this and we went back and forth for quite some time.
>
> I think the main point in the KIP discussion that made this decision is
> that the Consumer was annotated with the Unstable annotation. Given how
> new the Consumer is, we wanted to leverage that to make sure the interface
> is clean. The same will be true for KafkaStreams in the upcoming release.
>
> We did agree that we should discuss the annotations and what our
> compatibility story is in the future. However, for now the documentation of
> the Unstable annotation says, "No guarantee is provided as to reliability
> or stability across any level of release granularity."  If we can't
> leverage the Unstable annotation to make breaking changes where necessary,
> it will be tough to vet new apis without generating a lot of deprecated
> code.
>
> Note: We did remove the Unstable annotation from the Consumer interface
> for 0.10 implying that it is now stable. (KAFKA-3435
> )
>
> Thanks,
> Grant
>
> On Fri, Apr 29, 2016 at 12:05 PM, Jason Gustafson 
> wrote:
>
>> Hey Harsha,
>>
>> One issue with adding back subscribe(List), but marking it deprecated is
>> that it may confuse some users if they use the typical Arrays.asList()
>> pattern. You'd have to cast to a Collection to avoid the deprecation
>> warning, which is awkward. Maybe it would be better in that case to keep
>> the List alternatives forever?
>>
>> In general, I'm not opposed to adding the methods back. When we voted on
>> KIP-45, I think many of us were on the fence anyway. It would be nice to
>> hear what others think.
>>
>> -Jason
>>
>> On Thu, Apr 28, 2016 at 6:30 PM, Harsha  wrote:
>>
>> > Hi Jason,
>> > "t. I think what you're
>> > saying is that the KafkaSpout has been compiled against the 0.9 client,
>> > but
>> > it may need to be to run against 0.10 (if the user depends on that
>> > version
>> > instead). Is that correct?"
>> >
>> > Yes, that's true.
>> >
>> > " Is that correct? In general, are you expecting that KafkaSpout
>> > > > will work with any kafka-clients greater than 0.9?"
>> >
>> > In general, yes. But given that the interface is marked unstable, it's
>> > probably not reasonable to expect it to work across new versions
>> > of Kafka.
>> >
>> > "Another question that
>> > > > comes to mind is whether we would also need to revert to the old
>> > versions
>> > > > of subscribe() and assign().
>> > Yes, you are right on these methods. We would need to add these two as
>> > well.
>> >
>> >
>> > My issue is that users who built their clients using the 0.9.x Java API
>> > will have to change once the 0.10 release is out. The alternative I am
>> > proposing is to give these users time to move onto the newly added API
>> > and to keep the old methods with a deprecated tag for at least one version.
>> >
>> > Thanks,
>> > Harsha
>> >
>> >
>> > On Thu, Apr 28, 2016, at 04:41 PM, Grant Henke wrote:
>> > > FYI. I have attached a sample clients API change/compatibility report
>> to
>> > > KAFKA-1880 . The
>> > report
>> > > shows changes in the public apis between the 0.9 and trunk branches.
>> Some
>> > > of them are expected per KIP-45 obviously.
>> > >
>> > > Thanks,
>> > > Grant
>> > >
>> > >
>> > > On Thu, Apr 28, 2016 at 6:33 PM, Jason Gustafson 
>> > > wrote:
>> > >
>> > > > Hey Harsha,
>> > > >
>> > > > We're just trying to understand the problem first. I think what
>> you're
>> > > > saying is that the KafkaSpout has been compiled against the 0.9
>> > client, but
>> > > > it may need to be to run against 0.10 (if the user depends on that
>> > version
>> > > > instead). Is that correct? In general, are you expecting that
>> > KafkaSpout
>> > > > will work with any kafka-clients greater than 0.9? Another question
>> > that
>> > > > comes to mind is whether we would also need to revert to the old
>> > versions
>> > > > of subscribe() and assign(). The argument type was changed from
>> List to
>> > > > Collection, which is not binary compatible, right?
>> > > >
>> > > > Thanks,
>> > > > Jason
>> > > >
>> > > > On Thu, Apr 28, 2016 at 1:41 PM, Harsha  wrote:
>> > > >
>> > > > > Hi Ismael,
>> > > > > This will solve both binary and source compatibility. Storm has a
>> > > > > new KafkaSpout that uses the 0.9.x KafkaConsumer API. As part of
>> > > > > that spout we used KafkaConsumer.seekToBeginning and other methods,
>> > > > > whose signatures changed as part of KIP-45. If we update the
>> > > > > version to 0.10 we are breaking the KafkaConsumer calls 

Jenkins build is back to normal : kafka-trunk-jdk8 #568

2016-04-29 Thread Apache Jenkins Server
See 



Re: [VOTE] KIP-56 Allow cross origin HTTP requests on all HTTP methods

2016-04-29 Thread Ewen Cheslack-Postava
This passes w/ 5 binding and 2 non-binding votes (it wasn't explicit, but
I'm assuming Liquan was an implicit +1 by starting the thread).

Thanks for voting everyone!

-Ewen

On Wed, Apr 27, 2016 at 10:04 PM, Ismael Juma  wrote:

> +1
>
> On Wed, Apr 27, 2016 at 1:54 PM, Grant Henke  wrote:
>
> > +1 (non-binding)
> >
> > On Wed, Apr 27, 2016 at 3:35 PM, Harsha  wrote:
> >
> > > +1
> > > -Harsha
> > >
> > > On Wed, Apr 27, 2016, at 01:29 PM, Guozhang Wang wrote:
> > > > +1
> > > >
> > > > On Wed, Apr 27, 2016 at 1:21 PM, Ewen Cheslack-Postava
> > > > 
> > > > wrote:
> > > >
> > > > > +1
> > > > >
> > > > > On Thu, Apr 21, 2016 at 10:30 AM, Liquan Pei 
> > > wrote:
> > > > >
> > > > > > Hi
> > > > > >
> > > > > > I would like to start vote on KIP-56.
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-56%3A+Allow+cross+origin+HTTP+requests+on+all+HTTP+methods
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > --
> > > > > > Liquan Pei
> > > > > > Software Engineer, Confluent Inc
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Thanks,
> > > > > Ewen
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > >
> >
> >
> >
> > --
> > Grant Henke
> > Software Engineer | Cloudera
> > gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
> >
>



-- 
Thanks,
Ewen


[GitHub] kafka pull request: KAFKA-3597: Query ConsoleConsumer and Verifiab...

2016-04-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1278




[jira] [Updated] (KAFKA-3597) Enable query ConsoleConsumer and VerifiableProducer if they shutdown cleanly

2016-04-29 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3597:

Resolution: Fixed
Fix Version/s: 0.10.1.0  (was: 0.10.0.0)
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1278
[https://github.com/apache/kafka/pull/1278]

> Enable query ConsoleConsumer and VerifiableProducer if they shutdown cleanly
> 
>
> Key: KAFKA-3597
> URL: https://issues.apache.org/jira/browse/KAFKA-3597
> Project: Kafka
>  Issue Type: Test
>Reporter: Anna Povzner
>Assignee: Anna Povzner
> Fix For: 0.10.1.0
>
>
> It would be useful for some tests to check whether ConsoleConsumer and 
> VerifiableProducer shut down cleanly or not. 
> Add methods to ConsoleConsumer and VerifiableProducer that return true if all 
> producers/consumers shut down cleanly, and false otherwise. 





[jira] [Commented] (KAFKA-3597) Enable query ConsoleConsumer and VerifiableProducer if they shutdown cleanly

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264425#comment-15264425
 ] 

ASF GitHub Bot commented on KAFKA-3597:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1278


> Enable query ConsoleConsumer and VerifiableProducer if they shutdown cleanly
> 
>
> Key: KAFKA-3597
> URL: https://issues.apache.org/jira/browse/KAFKA-3597
> Project: Kafka
>  Issue Type: Test
>Reporter: Anna Povzner
>Assignee: Anna Povzner
> Fix For: 0.10.1.0
>
>
> It would be useful for some tests to check whether ConsoleConsumer and 
> VerifiableProducer shut down cleanly or not. 
> Add methods to ConsoleConsumer and VerifiableProducer that return true if all 
> producers/consumers shut down cleanly, and false otherwise. 





[jira] [Resolved] (KAFKA-3578) Allow cross origin HTTP requests on all HTTP methods

2016-04-29 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3578.
--
Resolution: Fixed
Fix Version/s: 0.10.1.0  (was: 0.10.0.0)

Issue resolved by pull request 1288
[https://github.com/apache/kafka/pull/1288]

> Allow cross origin HTTP requests on all HTTP methods
> 
>
> Key: KAFKA-3578
> URL: https://issues.apache.org/jira/browse/KAFKA-3578
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Liquan Pei
>Assignee: Liquan Pei
>Priority: Blocker
> Fix For: 0.10.1.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Currently, Kafka Connect only allows requests from the same domain as the 
> Kafka Connect cluster. To allow Kafka Connect to process requests from other 
> domains, we need to allow cross-origin HTTP requests.
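
As far as I can tell from KIP-56, the new worker settings look like the
excerpt below (treat the exact names and defaults as something to verify
against the KIP page):

{code}
# Hypothetical worker-config excerpt per KIP-56
access.control.allow.origin=http://cors-client.example.com
access.control.allow.methods=GET,POST,PUT,DELETE,OPTIONS
{code}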





[GitHub] kafka pull request: KAFKA-3642: Fix NPE from ProcessorStateManager...

2016-04-29 Thread kawamuray
GitHub user kawamuray opened a pull request:

https://github.com/apache/kafka/pull/1289

KAFKA-3642: Fix NPE from ProcessorStateManager when the changelog topic not 
exists

Issue: https://issues.apache.org/jira/browse/KAFKA-3642

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kawamuray/kafka KAFKA-3642-streams-NPE

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1289.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1289


commit 189571f9ab6555cc420190a9fb38ab2064ce42ab
Author: Yuto Kawamura 
Date:   2016-04-29T16:53:38Z

KAFKA-3642: Fix MockConsumer#partitionsFor to behave as same as 
KafkaConsumer

KafkaConsumer#partitionsFor returns null when the topic not exists.

commit f8d96209c97eef4328f6255f6a43ae0c2c70543b
Author: Yuto Kawamura 
Date:   2016-04-29T16:22:00Z

KAFKA-3642: Make ProcessorStateManager throw meaningful exception instead 
of NPE when topic not exists

commit f1cae8eb977965ec82a60ea45bdbe5c1ecee869a
Author: Yuto Kawamura 
Date:   2016-04-29T16:23:50Z

KAFKA-3642: Warn if expected internal topic not exists when 
zookeeper.connect isn't supplied

commit 4f7c6dc9becb547368f5dac6d508bd071bdfec91
Author: Yuto Kawamura 
Date:   2016-04-29T16:26:39Z

MINOR: Remove meaningless branching argument

- It doesn't hurts anything even always return filled list






[GitHub] kafka pull request: KAFKA-3578: Allow cross origin HTTP requests o...

2016-04-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1288




[jira] [Commented] (KAFKA-3578) Allow cross origin HTTP requests on all HTTP methods

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264431#comment-15264431
 ] 

ASF GitHub Bot commented on KAFKA-3578:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1288


> Allow cross origin HTTP requests on all HTTP methods
> 
>
> Key: KAFKA-3578
> URL: https://issues.apache.org/jira/browse/KAFKA-3578
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Liquan Pei
>Assignee: Liquan Pei
>Priority: Blocker
> Fix For: 0.10.1.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Currently, Kafka Connect only allows requests from the same domain as the 
> Kafka Connect cluster. To let Kafka Connect process requests from other 
> domains, we need to allow cross-origin HTTP requests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3642) Fix NPE from ProcessorStateManager when the changelog topic does not exist

2016-04-29 Thread Yuto Kawamura (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuto Kawamura updated KAFKA-3642:
-
Status: Patch Available  (was: Open)

> Fix NPE from ProcessorStateManager when the changelog topic does not exist
> --
>
> Key: KAFKA-3642
> URL: https://issues.apache.org/jira/browse/KAFKA-3642
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Yuto Kawamura
>Assignee: Yuto Kawamura
> Fix For: 0.10.1.0
>
>
> # Fix NPE from ProcessorStateManager when the changelog topic does not exist
> When the following two conditions are satisfied, ProcessorStateManager throws 
> an NPE:
> - A state is configured with logging enabled but the corresponding -changelog 
> topic does not exist,
> - zookeeper.connect wasn't supplied in the streams config.
> So Streams should:
> - detect that the -changelog topic does not exist and throw a more meaningful 
> exception (a sketch of such a guard follows after this message),
> - warn users if there is no -changelog topic prepared and zookeeper.connect 
> wasn't supplied either.
> BTW, I think making zookeeper.connect a mandatory argument could be another 
> option if it doesn't hurt anything.
> {code}
> $ git diff
>   
> diff --git 
> a/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountProcessorDemo.java
>  
> b/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountProcessorDemo.java
> index 34c35b7..c5339f1 100644
> --- 
> a/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountProcessorDemo.java
> +++ 
> b/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountProcessorDemo.java
> @@ -108,7 +108,7 @@ public class WordCountProcessorDemo {
>  Properties props = new Properties();
>  props.put(StreamsConfig.APPLICATION_ID_CONFIG, 
> "streams-wordcount-processor");
>  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
> -props.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "localhost:2181");
> +// props.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, 
> "localhost:2181");
>  props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, 
> Serdes.String().getClass());
>  props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, 
> Serdes.String().getClass());
>  
> $ ./bin/kafka-topics.sh --zookeeper localhost:2181 --list 2>/dev/null | grep 
> '\-changelog'
> $ ./bin/kafka-run-class.sh 
> org.apache.kafka.streams.examples.wordcount.WordCountProcessorDemo 
> ...
> [2016-04-30 02:25:04,960] ERROR User provided listener 
> org.apache.kafka.streams.processor.internals.StreamThread$1 for group 
> streams-wordcount-processor failed on partition assignment 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
> java.lang.NullPointerException
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.register(ProcessorStateManager.java:189)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.register(ProcessorContextImpl.java:116)
> at 
> org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStore.init(InMemoryKeyValueLoggedStore.java:64)
> at 
> org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:85)
> at 
> org.apache.kafka.streams.processor.internals.AbstractTask.initializeStateStores(AbstractTask.java:81)
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.(StreamTask.java:115)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:582)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:609)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$000(StreamThread.java:71)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:126)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:220)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:226)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:221)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$2.onSuccess(RequestFuture.java:182)
> at 
> org.apache.kafka.clients.consumer.internals
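
A minimal sketch of the guard described in the issue above (hypothetical 
helper and exception message, not the actual patch):
{code}
import java.util.List;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.PartitionInfo;

// Sketch only: turn the NPE into a descriptive error. Consumer#partitionsFor
// returns null when the requested topic does not exist.
final class ChangelogGuard {
    static List<PartitionInfo> partitionsOrThrow(Consumer<?, ?> consumer, String changelogTopic) {
        List<PartitionInfo> partitions = consumer.partitionsFor(changelogTopic);
        if (partitions == null) {
            throw new IllegalStateException("Changelog topic " + changelogTopic
                    + " does not exist; create it before starting the application,"
                    + " or supply zookeeper.connect so it can be created automatically.");
        }
        return partitions;
    }
}
{code}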

[jira] [Commented] (KAFKA-3642) Fix NPE from ProcessorStateManager when the changelog topic does not exist

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264430#comment-15264430
 ] 

ASF GitHub Bot commented on KAFKA-3642:
---

GitHub user kawamuray opened a pull request:

https://github.com/apache/kafka/pull/1289

KAFKA-3642: Fix NPE from ProcessorStateManager when the changelog topic does 
not exist

Issue: https://issues.apache.org/jira/browse/KAFKA-3642

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kawamuray/kafka KAFKA-3642-streams-NPE

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1289.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1289


commit 189571f9ab6555cc420190a9fb38ab2064ce42ab
Author: Yuto Kawamura 
Date:   2016-04-29T16:53:38Z

KAFKA-3642: Fix MockConsumer#partitionsFor to behave the same as 
KafkaConsumer

KafkaConsumer#partitionsFor returns null when the topic does not exist.

commit f8d96209c97eef4328f6255f6a43ae0c2c70543b
Author: Yuto Kawamura 
Date:   2016-04-29T16:22:00Z

KAFKA-3642: Make ProcessorStateManager throw a meaningful exception instead 
of an NPE when the topic does not exist

commit f1cae8eb977965ec82a60ea45bdbe5c1ecee869a
Author: Yuto Kawamura 
Date:   2016-04-29T16:23:50Z

KAFKA-3642: Warn if an expected internal topic does not exist when 
zookeeper.connect isn't supplied

commit 4f7c6dc9becb547368f5dac6d508bd071bdfec91
Author: Yuto Kawamura 
Date:   2016-04-29T16:26:39Z

MINOR: Remove meaningless branching argument

- It doesn't hurt anything to always return a filled list




> Fix NPE from ProcessorStateManager when the changelog topic does not exist
> --
>
> Key: KAFKA-3642
> URL: https://issues.apache.org/jira/browse/KAFKA-3642
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Yuto Kawamura
>Assignee: Yuto Kawamura
> Fix For: 0.10.1.0
>
>
> # Fix NPE from ProcessorStateManager when the changelog topic does not exist
> When the following two conditions are satisfied, ProcessorStateManager throws 
> an NPE:
> - A state is configured with logging enabled but the corresponding -changelog 
> topic does not exist,
> - zookeeper.connect wasn't supplied in the streams config.
> So Streams should:
> - detect that the -changelog topic does not exist and throw a more meaningful 
> exception,
> - warn users if there is no -changelog topic prepared and zookeeper.connect 
> wasn't supplied either.
> BTW, I think making zookeeper.connect a mandatory argument could be another 
> option if it doesn't hurt anything.
> {code}
> $ git diff
>   
> diff --git 
> a/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountProcessorDemo.java
>  
> b/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountProcessorDemo.java
> index 34c35b7..c5339f1 100644
> --- 
> a/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountProcessorDemo.java
> +++ 
> b/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountProcessorDemo.java
> @@ -108,7 +108,7 @@ public class WordCountProcessorDemo {
>  Properties props = new Properties();
>  props.put(StreamsConfig.APPLICATION_ID_CONFIG, 
> "streams-wordcount-processor");
>  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
> -props.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "localhost:2181");
> +// props.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, 
> "localhost:2181");
>  props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, 
> Serdes.String().getClass());
>  props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, 
> Serdes.String().getClass());
>  
> $ ./bin/kafka-topics.sh --zookeeper localhost:2181 --list 2>/dev/null | grep 
> '\-changelog'
> $ ./bin/kafka-run-class.sh 
> org.apache.kafka.streams.examples.wordcount.WordCountProcessorDemo 
> ...
> [2016-04-30 02:25:04,960] ERROR User provided listener 
> org.apache.kafka.streams.processor.internals.StreamThread$1 for group 
> streams-wordcount-processor failed on partition assignment 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
> java.lang.NullPointerException
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.register(ProcessorStateManager.java:189)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.register(ProcessorContextImpl.java:116)
> at 
> org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedS

[GitHub] kafka pull request: KAFKA-3634: Upgrade tests for SASL authenticat...

2016-04-29 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/1290

KAFKA-3634: Upgrade tests for SASL authentication

Add a test for changing SASL mechanism using rolling upgrade and a test for 
rolling upgrade from 0.9.0.x to 0.10.0 with SASL/GSSAPI.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-3634

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1290.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1290


commit def35d33525d7fb5384b6a7af2757f6afcd7428c
Author: Rajini Sivaram 
Date:   2016-04-28T14:56:48Z

KAFKA-3634: Upgrade tests for SASL authentication




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3634) Add ducktape tests for upgrade with SASL

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264439#comment-15264439
 ] 

ASF GitHub Bot commented on KAFKA-3634:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/1290

KAFKA-3634: Upgrade tests for SASL authentication

Add a test for changing SASL mechanism using rolling upgrade and a test for 
rolling upgrade from 0.9.0.x to 0.10.0 with SASL/GSSAPI.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-3634

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1290.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1290


commit def35d33525d7fb5384b6a7af2757f6afcd7428c
Author: Rajini Sivaram 
Date:   2016-04-28T14:56:48Z

KAFKA-3634: Upgrade tests for SASL authentication




> Add ducktape tests for upgrade with SASL
> 
>
> Key: KAFKA-3634
> URL: https://issues.apache.org/jira/browse/KAFKA-3634
> Project: Kafka
>  Issue Type: Test
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.0.0
>
>
> Add SASL upgrade tests (moved out of KAFKA-2693):
>   - 0.9.0.x to 0.10.0 with GSSAPI as inter-broker SASL mechanism
>   - Rolling upgrade with change in SASL mechanism



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3128) Add metrics for ZooKeeper events

2016-04-29 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264445#comment-15264445
 ] 

Jun Rao commented on KAFKA-3128:


[~fpj], yes, it's a good idea to track both session expirations and connection 
losses. Anything else worth tracking?

[~ijuma], I recommend that we continue using kafka.metrics.KafkaMetricsGroup on 
the broker side for now, until we want to migrate all metrics to 
org.apache.kafka.common.metrics.Metrics.

> Add metrics for ZooKeeper events
> 
>
> Key: KAFKA-3128
> URL: https://issues.apache.org/jira/browse/KAFKA-3128
> Project: Kafka
>  Issue Type: Improvement
>  Components: core, zkclient
>Reporter: Flavio Junqueira
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> It would be useful to report via Kafka metrics the number of ZK event 
> notifications, such as connection loss events, session expiration events, 
> etc., as a way of spotting potential issues with the communication with the 
> ZK ensemble.
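
A minimal sketch of the counters under discussion (plain Java; how they are 
registered, KafkaMetricsGroup vs. the common Metrics, is the open question in 
the comment above):
{code}
import java.util.concurrent.atomic.AtomicLong;

import org.apache.zookeeper.Watcher.Event.KeeperState;

// Sketch: tally the ZooKeeper session events worth exposing as metrics.
public class ZkEventCounters {
    private final AtomicLong connectionLosses = new AtomicLong();
    private final AtomicLong sessionExpirations = new AtomicLong();

    public void onStateChange(KeeperState state) {
        if (state == KeeperState.Disconnected)
            connectionLosses.incrementAndGet();       // connection loss event
        else if (state == KeeperState.Expired)
            sessionExpirations.incrementAndGet();     // session expiration event
    }

    public long connectionLosses() { return connectionLosses.get(); }
    public long sessionExpirations() { return sessionExpirations.get(); }
}
{code}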



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1880) Add support for checking binary/source compatibility

2016-04-29 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-1880:
---
Description: Recent discussions around compatibility show how important 
compatibility is to users. Kafka should leverage a tool to find, report, and 
avoid incompatibility issues in public methods.  (was: Recent discussions 
around compatibility show how important compatibility is to users. [Java API 
Compliance 
Checker|http://ispras.linuxbase.org/index.php/Java_API_Compliance_Checker] is a 
tool for checking backward binary and source-level compatibility of a Java 
library API. Kafka can leverage the tool to find and fix existing 
incompatibility issues and avoid new issues from getting into the product.)

> Add support for checking binary/source compatibility
> 
>
> Key: KAFKA-1880
> URL: https://issues.apache.org/jira/browse/KAFKA-1880
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Ashish K Singh
>Assignee: Grant Henke
> Attachments: compatibilityReport-only-incompatible.html, 
> compatibilityReport.html
>
>
> Recent discussions around compatibility show how important compatibility is 
> to users. Kafka should leverage a tool to find, report, and avoid 
> incompatibility issues in public methods.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3634) Add ducktape tests for upgrade with SASL

2016-04-29 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264450#comment-15264450
 ] 

Rajini Sivaram commented on KAFKA-3634:
---

[~ijuma] The upgrade test from 0.9.0.x was being run only with PLAINTEXT. I have 
added SASL/GSSAPI (I don't think this upgrade is being tested elsewhere).

> Add ducktape tests for upgrade with SASL
> 
>
> Key: KAFKA-3634
> URL: https://issues.apache.org/jira/browse/KAFKA-3634
> Project: Kafka
>  Issue Type: Test
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.0.0
>
>
> Add SASL upgrade tests (moved out of KAFKA-2693):
>   - 0.9.0.x to 0.10.0 with GSSAPI as inter-broker SASL mechanism
>   - Rolling upgrade with change in SASL mechanism



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3634) Add ducktape tests for upgrade with SASL

2016-04-29 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-3634:
--
Status: Patch Available  (was: Open)

> Add ducktape tests for upgrade with SASL
> 
>
> Key: KAFKA-3634
> URL: https://issues.apache.org/jira/browse/KAFKA-3634
> Project: Kafka
>  Issue Type: Test
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.0.0
>
>
> Add SASL upgrade tests (moved out of KAFKA-2693):
>   - 0.9.0.x to 0.10.0 with GSSAPI as inter-broker SASL mechanism
>   - Rolling upgrade with change in SASL mechanism



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1880) Add support for checking binary/source compatibility

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264455#comment-15264455
 ] 

ASF GitHub Bot commented on KAFKA-1880:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/1291

WIP - KAFKA-1880: Add support for checking binary/source compatibility

This is a WIP pull request to show how I am generating the reports attached 
to the Jira. I am putting it up now so that we understand what has been 
changed/broken before the 0.10 release. 

At some point we may want to leverage something like this to break the 
build too, but I think generating a report is a good start.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka api-check

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1291.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1291


commit 5af41e2ab52bb62a4bf9d13d516b4d2789e357da
Author: Grant Henke 
Date:   2016-04-29T18:00:39Z

WIP - KAFKA-1880: Add support for checking binary/source compatibility




> Add support for checking binary/source compatibility
> 
>
> Key: KAFKA-1880
> URL: https://issues.apache.org/jira/browse/KAFKA-1880
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Ashish K Singh
>Assignee: Grant Henke
> Attachments: compatibilityReport-only-incompatible.html, 
> compatibilityReport.html
>
>
> Recent discussions around compatibility show how important compatibility is 
> to users. Kafka should leverage a tool to find, report, and avoid 
> incompatibility issues in public methods.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: WIP - KAFKA-1880: Add support for checking bin...

2016-04-29 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/1291

WIP - KAFKA-1880: Add support for checking binary/source compatibility

This is a WIP pull request to show how I am generating the reports attached 
to the Jira. I am putting it up now so that we understand what has been 
changed/broken before the 0.10 release. 

At some point we may want to leverage something like this to break the 
build too, but I think generating a report is a good start.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka api-check

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1291.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1291


commit 5af41e2ab52bb62a4bf9d13d516b4d2789e357da
Author: Grant Henke 
Date:   2016-04-29T18:00:39Z

WIP - KAFKA-1880: Add support for checking binary/source compatibility




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-3618: Handle ApiVersionsRequest before S...

2016-04-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1286


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3618) Handle ApiVersionRequest before SASL handshake

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264471#comment-15264471
 ] 

ASF GitHub Bot commented on KAFKA-3618:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1286


> Handle ApiVersionRequest before SASL handshake
> --
>
> Key: KAFKA-3618
> URL: https://issues.apache.org/jira/browse/KAFKA-3618
> Project: Kafka
>  Issue Type: Task
>  Components: security
>Affects Versions: 0.9.0.1
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.0.0
>
>
> Handle ApiVersionRequest in the SaslServer authenticator before 
> SaslHandshakeRequest, to enable clients to obtain the handshake request 
> version from the server. This should be implemented after KAFKA-3307, which 
> adds support for version requests after authentication, and KAFKA-3149, which 
> adds handshake requests.
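
For clarity, the request ordering this enables looks roughly like the 
following (an illustrative sketch, not an actual wire trace):
{noformat}
1. client -> ApiVersionsRequest          (now answered before the handshake)
2. server -> ApiVersionsResponse         (includes the SaslHandshakeRequest version)
3. client -> SaslHandshakeRequest(mechanism)
4. server -> SaslHandshakeResponse
5. SASL tokens are exchanged until authentication completes
{noformat}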



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3618) Handle ApiVersionRequest before SASL handshake

2016-04-29 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3618:

   Resolution: Fixed
Fix Version/s: (was: 0.10.0.0)
   0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1286
[https://github.com/apache/kafka/pull/1286]

> Handle ApiVersionRequest before SASL handshake
> --
>
> Key: KAFKA-3618
> URL: https://issues.apache.org/jira/browse/KAFKA-3618
> Project: Kafka
>  Issue Type: Task
>  Components: security
>Affects Versions: 0.9.0.1
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.1.0
>
>
> Handle ApiVersionRequest in the SaslServer authenticator before 
> SaslHandshakeRequest, to enable clients to obtain the handshake request 
> version from the server. This should be implemented after KAFKA-3307, which 
> adds support for version requests after authentication, and KAFKA-3149, which 
> adds handshake requests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #569

2016-04-29 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-2693: Ducktape tests for SASL/PLAIN and multiple mechanisms

[me] KAFKA-3418: add javadoc section describing consumer failure detection

[me] KAFKA-3615: Exclude test jars in kafka-run-class.sh

--
[...truncated 4397 lines...]

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6Partitions PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6PartitionsAnd3Brokers PASSED

kafka.admin.AdminRackAwareTest > 
testGetRackAlternatedBrokerListAndAssignReplicasToBrokers PASSED

kafka.admin.AdminRackAwareTest > testMoreReplicasThanRacks PASSED

kafka.admin.AdminRackAwareTest > testSingleRack PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWithRackAwareWithRandomStartIndex PASSED

kafka.admin.AdminRackAwareTest > testLargeNumberPartitionsAssignment PASSED

kafka.admin.AdminRackAwareTest > testLessReplicasThanRacks PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupWideDeleteInZKDoesNothingForActiveConsumerGroup PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKDoesNothingForActiveGroupConsumingMultipleTopics 
PASSED

kafka.admin.DeleteConsumerGroupTest > 
testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > testTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingOneTopic PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingMultipleTopics PASSED

kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK PASSED

kafka.admin.ConfigCommandTest > testArgumentParse PASSED

kafka.admin.TopicCommandTest > testCreateIfNotExists PASSED

kafka.admin.TopicCommandTest > testCreateAlterTopicWithRackAware PASSED

kafka.admin.TopicCommandTest > testTopicDeletion PASSED

kafka.admin.TopicCommandTest > testConfigPreservationAcrossPartitionAlteration 
PASSED

kafka.admin.TopicCommandTest > testAlterIfExists PASSED

kafka.admin.TopicCommandTest > testDeleteIfExists PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers PASSED

kafka.admin.AddPartitionsTest > testWrongReplicaCount PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementPartialServers PASSED

kafka.admin.AddPartitionsTest > testTopicDoesNotExist PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.AclCommandTest > testInvalidAuthorizerProperty PASSED

kafka.admin.AclCommandTest > testAclCli PASSED

kafka.admin.AclCommandTest > testProducerConsumerCli PASSED

kafka.admin.ReassignPartitionsCommandTest > testRackAwareReassign PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithCleaner PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicOnControllerFailover PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicWithRecoveredFollower PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicAlreadyMarkedAsDeleted PASSED

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteNonExistingTopic PASSED

kafka.admin.DeleteTopicTest > testRecreateTopicAfterDeletion PASSED

kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithAllAliveReplicas PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicDuringAddPartition PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

kafka.KafkaTest > testKafkaSslPasswords PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

[GitHub] kafka pull request: KAFKA-3641: Fix RecordMetadata constructor bac...

2016-04-29 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/1292

KAFKA-3641: Fix RecordMetadata constructor backward compatibility



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka recordmeta-compat

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1292.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1292


commit 7ab70a562df5b971d129434bbf0c57c699507a6a
Author: Grant Henke 
Date:   2016-04-29T18:31:41Z

KAFKA-3641: Fix RecordMetadata constructor backward compatibility




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3641) Fix RecordMetadata constructor backward compatibility

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264502#comment-15264502
 ] 

ASF GitHub Bot commented on KAFKA-3641:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/1292

KAFKA-3641: Fix RecordMetadata constructor backward compatibility



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka recordmeta-compat

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1292.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1292


commit 7ab70a562df5b971d129434bbf0c57c699507a6a
Author: Grant Henke 
Date:   2016-04-29T18:31:41Z

KAFKA-3641: Fix RecordMetadata constructor backward compatibility




> Fix RecordMetadata constructor backward compatibility 
> --
>
> Key: KAFKA-3641
> URL: https://issues.apache.org/jira/browse/KAFKA-3641
> Project: Kafka
>  Issue Type: Bug
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Blocker
>
> The old RecordMetadata constructor from 0.9.0 should be added back and 
> deprecated in order to maintain backward compatibility.
> {noformat}
> public RecordMetadata(TopicPartition topicPartition, long baseOffset, long 
> relativeOffset)
> {noformat}
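
A sketch of the pattern being proposed (an illustrative class, not the real 
RecordMetadata source):
{code}
import org.apache.kafka.common.TopicPartition;

// Sketch: keep the old 0.9.0-style constructor, deprecate it, and delegate to
// the new one, so existing callers stay source- and binary-compatible.
class MetadataCompatExample {
    private final TopicPartition topicPartition;
    private final long offset;
    private final long timestamp;

    // New constructor carrying an extra field (illustrative).
    MetadataCompatExample(TopicPartition tp, long baseOffset, long relativeOffset, long timestamp) {
        this.topicPartition = tp;
        this.offset = baseOffset + relativeOffset;
        this.timestamp = timestamp;
    }

    // Old signature restored and deprecated for backward compatibility.
    @Deprecated
    MetadataCompatExample(TopicPartition tp, long baseOffset, long relativeOffset) {
        this(tp, baseOffset, relativeOffset, -1L);
    }
}
{code}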



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] mbeans overwritten with identical clients on a single jvm

2016-04-29 Thread Onur Karaman
fixing the cc for navina.

On Fri, Apr 29, 2016 at 1:06 AM, Onur Karaman 
wrote:

> Hey everyone. I think we might need to have an actual discussion on an
> issue I brought up a while ago in
> https://issues.apache.org/jira/browse/KAFKA-3494. It seems like
> client-ids are being used for too many things today:
> 1. kafka-request.log. This helps if you ever want to associate a client
> with a specific request. Maybe you're looking for a badly behaved client.
> Maybe the client has reported unexpectedly long response times from the
> broker and you want to figure out what was happening.
> 2. quotas. Quotas today are implemented on a (client-id, broker)
> granularity.
> 3. metrics. KafkaConsumer and KafkaProducer metrics only go as granular as
> the client-id.
>
> The reason I'm bringing this up is because it looks like there's a
> conflict in intent for client-ids between the quota and metrics scenarios.
> One of the motivating factors for choosing the client-id for quotas was
> that it allows for flexibility in the granularity of the quota enforcement.
> For instance, entire services can share the same id to get some form of
> (service, broker) granularity quotas. From my understanding, client-id was
> chosen as the quota id because it's a property that already exists on the
> clients, so we'd be able to quota older clients with no additional work,
> and reusing it had relatively low impact.
>
> So while quotas encourage reuse of client-ids across client instances,
> there is a common scenario where the metrics fall apart and mbeans get
> overwritten. It looks like if there are two KafkaConsumers or two
> KafkaProducers with the same client-id in the same jvm, then JmxReporter
> will unregister the first client's mbeans while registering the second
> client's mbeans.
>
> It seems like for the three use cases noted above (kafka-request.log,
> metrics, quotas), there are different desirable characteristics:
> 1. kafka-request.log at the very least would want an id that could
> distinguish individual client instances, but it might be nice to go even
> more granular at say a per connection level.
> 2. quotas would want an id that's sharable among a group of clients that
> wish to be quota'd together. This id can be defined by the user.
> 3. metrics would want an id that could distinguish individual client
> instances. This id can be defined by the user. We expect it to stay the same
> across process restarts so we can potentially correlate metrics across
> restarts.
>
> To resolve this, I think we'd want metrics to have another tag to
> differentiate mbeans from instances with the same client-id. Another
> alternative is to make quotas depend on a quota id instead of client-id (as
> brought up in KIP-55), but this means we can no longer quota older clients
> out of the box.
>
> Other suggestions are welcome!
>
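
A minimal way to reproduce the mbean collision described above (a sketch; the 
mbean name in the comment is my assumption of the producer's metric naming):
{code}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringSerializer;

// Sketch: two producers in one JVM sharing a client.id. Per the discussion
// above, JmxReporter unregisters the first instance's mbeans while
// registering the second's, so only one set of metrics remains visible.
public class MbeanCollisionDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("client.id", "shared-service-id"); // same id for both instances
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        KafkaProducer<String, String> first = new KafkaProducer<>(props);
        KafkaProducer<String, String> second = new KafkaProducer<>(props);
        // Inspecting kafka.producer:type=producer-metrics,client-id=shared-service-id
        // over JMX now shows only the second producer's metrics.
        second.close();
        first.close();
    }
}
{code}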


[jira] [Updated] (KAFKA-3641) Fix RecordMetadata constructor backward compatibility

2016-04-29 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3641:
---
Affects Version/s: 0.10.0.0

> Fix RecordMetadata constructor backward compatibility 
> --
>
> Key: KAFKA-3641
> URL: https://issues.apache.org/jira/browse/KAFKA-3641
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> The old RecordMetadata constructor from 0.9.0 should be added back and 
> deprecated in order to maintain backward compatibility.
> {noformat}
> public RecordMetadata(TopicPartition topicPartition, long baseOffset, long 
> relativeOffset)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3641) Fix RecordMetadata constructor backward compatibility

2016-04-29 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3641:
---
Status: Patch Available  (was: Open)

> Fix RecordMetadata constructor backward compatibility 
> --
>
> Key: KAFKA-3641
> URL: https://issues.apache.org/jira/browse/KAFKA-3641
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> The old RecordMetadata constructor from 0.9.0 should be added back and 
> deprecated in order to maintain backward compatibility.
> {noformat}
> public RecordMetadata(TopicPartition topicPartition, long baseOffset, long 
> relativeOffset)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3641) Fix RecordMetadata constructor backward compatibility

2016-04-29 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3641:
---
Fix Version/s: 0.10.0.0

> Fix RecordMetadata constructor backward compatibility 
> --
>
> Key: KAFKA-3641
> URL: https://issues.apache.org/jira/browse/KAFKA-3641
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> The old RecordMetadata constructor from 0.9.0 should be added back and 
> deprecated in order to maintain backward compatibility.
> {noformat}
> public RecordMetadata(TopicPartition topicPartition, long baseOffset, long 
> relativeOffset)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: HOTFIX: wrong keyvalue equals logic when keys ...

2016-04-29 Thread enothereska
GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/1293

HOTFIX: wrong KeyValue equals logic when keys are not equal but values are equal

With the previous logic, if the keys do NOT match but the values DO match, 
equals returns TRUE.
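
For reference, a minimal sketch of the corrected check (a generic pair class 
standing in for the Streams KeyValue; not the actual patch):
{code}
import java.util.Objects;

// Sketch: equals must require BOTH key and value to match. The bug was that
// a value match alone could make equals return true.
final class Pair<K, V> {
    final K key;
    final V value;

    Pair(K key, V value) { this.key = key; this.value = value; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Pair)) return false;
        Pair<?, ?> other = (Pair<?, ?>) o;
        return Objects.equals(key, other.key) && Objects.equals(value, other.value);
    }

    @Override
    public int hashCode() {
        return Objects.hash(key, value);
    }
}
{code}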


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka hotfix-keyvalue-equals

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1293.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1293


commit dfeacc0360d7fb718f6cfd79a5f023ed8712f405
Author: Eno Thereska 
Date:   2016-04-29T18:38:17Z

Fixed return value

commit c9eda62c0ab07fe3f7a7d5c55f08c9e466d82465
Author: Eno Thereska 
Date:   2016-04-29T18:49:50Z

Added unit test




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-3643) Data Duplication on clean restart of Kafka Broker

2016-04-29 Thread Arun Mathew (JIRA)
Arun Mathew created KAFKA-3643:
--

 Summary: Data Duplication on clean restart of Kafka Broker
 Key: KAFKA-3643
 URL: https://issues.apache.org/jira/browse/KAFKA-3643
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.9.0.1
Reporter: Arun Mathew


We observed event duplication while partition leadership was being restored to 
the preferred leader from the new leader upon restart of the preferred leader.

Steps to Reproduce

- Three-broker Kafka cluster (B1, B2, B3)
- Create a topic with 3 replicas and 1 partition. 
- [B1 is assigned the (preferred) leader; B2 and B3 are in the ISR]
- Start sending events with the performance producer, using a number of events 
large enough (say 4 million) that the run lasts a few minutes and covers the 
broker restart interval
- Set producer batch size = 1
- Cleanly shut down the leader broker B1
- Event sending continues
- Now B2 is the new leader and B3 is in the ISR.
- Restart broker B1 (the preferred leader for partition 0)
- The replica on B1 catches up and becomes the leader for P-0
- Wait for the producer to finish
- Use the get-offset command to get the event count in the partition, which is 
higher than the number of events sent (4M)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3643) Data Duplication on clean restart of Kafka Broker

2016-04-29 Thread Arun Mathew (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264543#comment-15264543
 ] 

Arun Mathew commented on KAFKA-3643:


[~gwenshap] This is the issue I talked to you about during Kafka Summit 2016.

> Data Duplication on clean restart of Kafka Broker
> -
>
> Key: KAFKA-3643
> URL: https://issues.apache.org/jira/browse/KAFKA-3643
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.1
>Reporter: Arun Mathew
>
> We observed event duplication while partition leadership was being restored to 
> the preferred leader from the new leader upon restart of the preferred leader.
> Steps to Reproduce
> - Three-broker Kafka cluster (B1, B2, B3)
> - Create a topic with 3 replicas and 1 partition. 
>   - [B1 is assigned the (preferred) leader; B2 and B3 are in the ISR]
> - Start sending events with the performance producer, using a number of events 
> large enough (say 4 million) that the run lasts a few minutes and covers the 
> broker restart interval
>   - Set producer batch size = 1
> - Cleanly shut down the leader broker B1
>   - Event sending continues
>   - Now B2 is the new leader and B3 is in the ISR.
> - Restart broker B1 (the preferred leader for partition 0)
>   - The replica on B1 catches up and becomes the leader for P-0
> - Wait for the producer to finish
> - Use the get-offset command to get the event count in the partition, which is 
> higher than the number of events sent (4M)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3209) Support single message transforms in Kafka Connect

2016-04-29 Thread Nisarg Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264548#comment-15264548
 ] 

Nisarg Shah commented on KAFKA-3209:


Hey, I'm looking to contribute to open source and want to take this up. :) 

> Support single message transforms in Kafka Connect
> --
>
> Key: KAFKA-3209
> URL: https://issues.apache.org/jira/browse/KAFKA-3209
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Neha Narkhede
>
> Users should be able to perform light transformations on messages between a 
> connector and Kafka. This is needed because some transformations must be 
> performed before the data hits Kafka (e.g. filtering certain types of events 
> or PII filtering). It's also useful for very light, single-message 
> modifications that are easier to perform inline with the data import/export.
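
A sketch of the kind of hook being discussed (a hypothetical interface; the 
real API would need to come out of a KIP discussion):
{code}
import org.apache.kafka.connect.source.SourceRecord;

// Hypothetical single-message transform, applied between a source connector
// and Kafka. Returning null drops the record (e.g. PII filtering).
interface SourceTransform {
    SourceRecord apply(SourceRecord record);
}

// Example: drop records routed to topics that are marked as containing PII.
class DropPiiTopics implements SourceTransform {
    @Override
    public SourceRecord apply(SourceRecord record) {
        return record.topic().startsWith("pii.") ? null : record;
    }
}
{code}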



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-trunk-jdk8 #570

2016-04-29 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk7 #1233

2016-04-29 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-2693: Ducktape tests for SASL/PLAIN and multiple mechanisms

[me] KAFKA-3418: add javadoc section describing consumer failure detection

[me] KAFKA-3615: Exclude test jars in kafka-run-class.sh

--
[...truncated 2351 lines...]

kafka.server.KafkaConfigTest > testInvalidCompressionType PASSED

kafka.server.KafkaConfigTest > testAdvertiseHostNameDefault PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeMinutesProvided PASSED

kafka.server.KafkaConfigTest > testValidCompressionType PASSED

kafka.server.KafkaConfigTest > testUncleanElectionInvalid PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeBothMinutesAndMsProvided 
PASSED

kafka.server.KafkaConfigTest > testLogRollTimeMsProvided PASSED

kafka.server.KafkaConfigTest > testUncleanLeaderElectionDefault PASSED

kafka.server.KafkaConfigTest > testInvalidAdvertisedListenersProtocol PASSED

kafka.server.KafkaConfigTest > testUncleanElectionEnabled PASSED

kafka.server.KafkaConfigTest > testAdvertisePortDefault PASSED

kafka.server.KafkaConfigTest > testVersionConfiguration PASSED

kafka.server.KafkaConfigTest > testEqualAdvertisedListenersProtocol PASSED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresMultipleLogSegments 
PASSED

kafka.server.LogRecoveryTest > testHWCheckpointWithFailuresMultipleLogSegments 
PASSED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresSingleLogSegment PASSED

kafka.server.LogRecoveryTest > testHWCheckpointWithFailuresSingleLogSegment 
PASSED

kafka.server.SaslPlaintextReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.ServerStartupTest > testBrokerCreatesZKChroot PASSED

kafka.server.ServerStartupTest > testConflictBrokerRegistration PASSED

kafka.server.ServerStartupTest > testBrokerSelfAware PASSED

kafka.server.ApiVersionsRequestTest > testApiVersionsRequest PASSED

kafka.server.IsrExpirationTest > testIsrExpirationForSlowFollowers PASSED

kafka.server.IsrExpirationTest > testIsrExpirationForStuckFollowers PASSED

kafka.server.IsrExpirationTest > testIsrExpirationIfNoFetchRequestMade PASSED

kafka.server.AdvertiseBrokerTest > testBrokerAdvertiseToZK PASSED

kafka.server.MetadataRequestTest > testReplicaDownResponse PASSED

kafka.server.MetadataRequestTest > testRack PASSED

kafka.server.MetadataRequestTest > testIsInternal PASSED

kafka.server.MetadataRequestTest > testControllerId PASSED

kafka.server.MetadataRequestTest > testAllTopicsRequest PASSED

kafka.server.MetadataRequestTest > testNoTopicsRequest PASSED

kafka.server.MetadataCacheTest > 
getTopicMetadataWithNonSupportedSecurityProtocol PASSED

kafka.server.MetadataCacheTest > getTopicMetadataIsrNotAvailable PASSED

kafka.server.MetadataCacheTest > getTopicMetadata PASSED

kafka.server.MetadataCacheTest > getTopicMetadataReplicaNotAvailable PASSED

kafka.server.MetadataCacheTest > getTopicMetadataPartitionLeaderNotAvailable 
PASSED

kafka.server.MetadataCacheTest > getAliveBrokersShouldNotBeMutatedByUpdateCache 
PASSED

kafka.server.MetadataCacheTest > getTopicMetadataNonExistingTopics PASSED

kafka.server.SaslSslReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig PASSED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization PASSED

kafka.api.RequestResponseSerializationTest > testFetchResponseVersion PASSED

kafka.api.RequestResponseSerializationTest > testProduceResponseVersion PASSED

kafka.api.RackAwareAutoTopicCreationTest > testAutoCreateTopic PASSED

kafka.api.AdminClientTest > testDescribeGroup PASSED

kafka.api.AdminClientTest > testDescribeConsumerGroup PASSED

kafka.api.AdminClientTest > testListGroups PASSED

kafka.api.AdminClientTest > testDescribeConsumerGroupForNonExistentGroup PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsume PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslSslConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.SaslSslConsumerTest > testUnsubscribeTopic PASSED

kafka.api.SaslSslConsumerTest > testListTopics PASSED

kafka.api.SaslSslConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.SaslSslConsumerTest > testSimpleConsumption PASSED

kafka.api.SaslSslConsumerTest > testPartitio

"nag" PR 1143

2016-04-29 Thread Zack Dever
Just a friendly reminder for this minor PR
https://github.com/apache/kafka/pull/1143 as per the instructions on
http://kafka.apache.org/contributing.html.

It received a +1, but had a request for a minor test change which I made.

Thanks!
Zack


[GitHub] kafka pull request: KAFKA-3440: update JavaDoc

2016-04-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1287


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3440) Add Javadoc for KTable (changelog stream) and KStream (record stream)

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264626#comment-15264626
 ] 

ASF GitHub Bot commented on KAFKA-3440:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1287


> Add Javadoc for KTable (changelog stream) and KStream (record stream)
> -
>
> Key: KAFKA-3440
> URL: https://issues.apache.org/jira/browse/KAFKA-3440
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>  Labels: docs
> Fix For: 0.10.0.0
>
>
> Currently we only have a one-liner in the {code}KTable{code} and 
> {code}KStream{code} classes describing the changelog and record streams. We 
> should have a more detailed explanation in the Javadocs as well, matching the 
> web docs.
> We also want some more description for windowed {code}KTable{code}.
> As a side task: in many classes, method JavaDocs lack the {{@return}} tag, 
> which should always be used.
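
A short sketch of the documentation convention being requested (an 
illustrative interface, not actual KStream/KTable signatures):
{code}
// Sketch: every public method documents its parameters and return value.
interface ExampleStream<K> {
    /**
     * Counts records in this stream per key, as the web docs describe.
     *
     * @param storeName the name of the state store backing the result (illustrative)
     * @return a changelog view holding the current count per key
     */
    ExampleTable<K, Long> countByKey(String storeName);
}

interface ExampleTable<K, V> { }
{code}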



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3209) Support single message transforms in Kafka Connect

2016-04-29 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264627#comment-15264627
 ] 

Gwen Shapira commented on KAFKA-3209:
-

Thank you Nisarg. I believe [~ewencp] already started working on this one...

> Support single message transforms in Kafka Connect
> --
>
> Key: KAFKA-3209
> URL: https://issues.apache.org/jira/browse/KAFKA-3209
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Neha Narkhede
>
> Users should be able to perform light transformations on messages between a 
> connector and Kafka. This is needed because some transformations must be 
> performed before the data hits Kafka (e.g. filtering certain types of events 
> or PII filtering). It's also useful for very light, single-message 
> modifications that are easier to perform inline with the data import/export.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3440) Add Javadoc for KTable (changelog stream) and KStream (record stream)

2016-04-29 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3440:
-
Resolution: Fixed
  Reviewer: Ewen Cheslack-Postava
Status: Resolved  (was: Patch Available)

> Add Javadoc for KTable (changelog stream) and KStream (record stream)
> -
>
> Key: KAFKA-3440
> URL: https://issues.apache.org/jira/browse/KAFKA-3440
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>  Labels: docs
> Fix For: 0.10.0.0
>
>
> Currently we only have a one-liner in the {code}KTable{code} and 
> {code}KStream{code} classes describing the changelog and record streams. We 
> should have a more detailed explanation in the Javadocs as well, matching the 
> web docs.
> We also want some more description for windowed {code}KTable{code}.
> As a side task: in many classes, method JavaDocs lack the {{@return}} tag, 
> which should always be used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3209) Support single message transforms in Kafka Connect

2016-04-29 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264633#comment-15264633
 ] 

Ewen Cheslack-Postava commented on KAFKA-3209:
--

[~snisarg] I haven't started work in earnest. This JIRA may not end up being 
particularly complicated code-wise, but given that it will be new public API, 
the impact it can have, and that we want to make sure all transformations we 
want to support will work with the implementation, it'll need a KIP proposal 
and discussion. (Though an initial prototype patch might also help drive that 
discussion.)

If you're interested in picking this up, I'd be happy to guide you through the 
process of writing up the KIP.

> Support single message transforms in Kafka Connect
> --
>
> Key: KAFKA-3209
> URL: https://issues.apache.org/jira/browse/KAFKA-3209
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Neha Narkhede
>
> Users should be able to perform light transformations on messages between a 
> connector and Kafka. This is needed because some transformations must be 
> performed before the data hits Kafka (e.g. filtering certain types of events 
> or PII filtering). It's also useful for very light, single-message 
> modifications that are easier to perform inline with the data import/export.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3641) Fix RecordMetadata constructor backward compatibility

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264645#comment-15264645
 ] 

ASF GitHub Bot commented on KAFKA-3641:
---

Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/1292


> Fix RecordMetadata constructor backward compatibility 
> --
>
> Key: KAFKA-3641
> URL: https://issues.apache.org/jira/browse/KAFKA-3641
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> The old RecordMetadata constructor from 0.9.0 should be added back and 
> deprecated in order to maintain backward compatibility.
> {noformat}
> public RecordMetadata(TopicPartition topicPartition, long baseOffset, long 
> relativeOffset)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3641: Fix RecordMetadata constructor bac...

2016-04-29 Thread granthenke
Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/1292


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3209) Support single message transforms in Kafka Connect

2016-04-29 Thread Nisarg Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264644#comment-15264644
 ] 

Nisarg Shah commented on KAFKA-3209:


That does sound good. I realise it is not insanely complicated, but since I 
haven't done much in terms of open source, I thought I'd start with something 
like this. I also think it is useful nonetheless. 

> Support single message transforms in Kafka Connect
> --
>
> Key: KAFKA-3209
> URL: https://issues.apache.org/jira/browse/KAFKA-3209
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Neha Narkhede
>
> Users should be able to perform light transformations on messages between a 
> connector and Kafka. This is needed because some transformations must be 
> performed before the data hits Kafka (e.g. filtering certain types of events 
> or PII filtering). It's also useful for very light, single-message 
> modifications that are easier to perform inline with the data import/export.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3641) Fix RecordMetadata constructor backward compatibility

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264649#comment-15264649
 ] 

ASF GitHub Bot commented on KAFKA-3641:
---

GitHub user granthenke reopened a pull request:

https://github.com/apache/kafka/pull/1292

KAFKA-3641: Fix RecordMetadata constructor backward compatibility



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka recordmeta-compat

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1292.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1292


commit 7ab70a562df5b971d129434bbf0c57c699507a6a
Author: Grant Henke 
Date:   2016-04-29T18:31:41Z

KAFKA-3641: Fix RecordMetadata constructor backward compatibility




> Fix RecordMetadata constructor backward compatibility 
> --
>
> Key: KAFKA-3641
> URL: https://issues.apache.org/jira/browse/KAFKA-3641
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> The old RecordMetadata constructor from 0.9.0 should be added back and 
> deprecated in order to maintain backward compatibility.
> {noformat}
> public RecordMetadata(TopicPartition topicPartition, long baseOffset, long 
> relativeOffset)
> {noformat}





[GitHub] kafka pull request: KAFKA-3641: Fix RecordMetadata constructor bac...

2016-04-29 Thread granthenke
GitHub user granthenke reopened a pull request:

https://github.com/apache/kafka/pull/1292

KAFKA-3641: Fix RecordMetadata constructor backward compatibility



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka recordmeta-compat

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1292.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1292


commit 7ab70a562df5b971d129434bbf0c57c699507a6a
Author: Grant Henke 
Date:   2016-04-29T18:31:41Z

KAFKA-3641: Fix RecordMetadata constructor backward compatibility






Jenkins build is back to normal : kafka-trunk-jdk7 #1234

2016-04-29 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk8 #571

2016-04-29 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3440: Update streams javadocs

--
[...truncated 1641 lines...]

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testAppendWithOutOfOrderOffsetsThrowsException PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testParseTopicPartitionNameForNull PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator PASSED

kafka.log.LogTest > testCorruptIndexRebuild PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.controller.ControllerFailoverTest > testMetadataUpdate PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslSslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslSslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTe

[jira] [Closed] (KAFKA-3582) remove references to Copcyat from connect property files

2016-04-29 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liquan Pei closed KAFKA-3582.
-

> remove references to Copcyat from connect property files
> 
>
> Key: KAFKA-3582
> URL: https://issues.apache.org/jira/browse/KAFKA-3582
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.9.0.1
>Reporter: Jun Rao
>Assignee: Liquan Pei
>Priority: Minor
>  Labels: newbie
> Fix For: 0.10.0.0
>
>
>  grep -i Copcyat config/*
> config/connect-distributed.properties:# always want to use the built-in 
> default. Offset and config data is never visible outside of Copcyat in this 
> format.
> config/connect-standalone.properties:# always want to use the built-in 
> default. Offset and config data is never visible outside of Copcyat in this 
> format.





[jira] [Closed] (KAFKA-3615) Exclude test jars in CLASSPATH of kafka-run-class.sh

2016-04-29 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liquan Pei closed KAFKA-3615.
-

> Exclude test jars in CLASSPATH of kafka-run-class.sh
> 
>
> Key: KAFKA-3615
> URL: https://issues.apache.org/jira/browse/KAFKA-3615
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, build
>Affects Versions: 0.10.0.0
>Reporter: Liquan Pei
>Assignee: Liquan Pei
>  Labels: newbie
> Fix For: 0.10.1.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>






[jira] [Closed] (KAFKA-3606) Traverse CLASSPATH during herder start to list connectors

2016-04-29 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liquan Pei closed KAFKA-3606.
-

> Traverse CLASSPATH during herder start to list connectors
> -
>
> Key: KAFKA-3606
> URL: https://issues.apache.org/jira/browse/KAFKA-3606
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Liquan Pei
>Assignee: Liquan Pei
>Priority: Blocker
> Fix For: 0.10.0.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The list-connectors API requires a CLASSPATH traversal, which can take up to 
> 30s to return the available connectors. To work around this, we traverse the 
> CLASSPATH when starting the herder and cache the result. We should also guard 
> against concurrent CLASSPATH traversal. 
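
A sketch of the described workaround follows; all names here are illustrative, not the actual Connect code:
{code}
import java.util.Collections;
import java.util.List;

// Scan the CLASSPATH once when the herder starts, cache the result, and use
// double-checked locking to guard against concurrent scans.
public class ConnectorFinder {
    private static final Object LOCK = new Object();
    private static volatile List<String> cachedConnectors;

    public static List<String> availableConnectors() {
        List<String> result = cachedConnectors;
        if (result == null) {
            synchronized (LOCK) {                          // only one thread scans
                result = cachedConnectors;
                if (result == null) {
                    result = scanClasspathForConnectors(); // the slow (~30s) reflective scan
                    cachedConnectors = result;
                }
            }
        }
        return result;
    }

    private static List<String> scanClasspathForConnectors() {
        return Collections.emptyList(); // placeholder for the reflective traversal
    }
}
{code}
With this shape, the REST call to list connectors returns immediately from the cache instead of paying the traversal cost on every request.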





[jira] [Closed] (KAFKA-3578) Allow cross origin HTTP requests on all HTTP methods

2016-04-29 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liquan Pei closed KAFKA-3578.
-

> Allow cross origin HTTP requests on all HTTP methods
> 
>
> Key: KAFKA-3578
> URL: https://issues.apache.org/jira/browse/KAFKA-3578
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Liquan Pei
>Assignee: Liquan Pei
>Priority: Blocker
> Fix For: 0.10.1.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Currently, Kafka Connect only allows requests from the same domain as the 
> Kafka Connect cluster. To let Kafka Connect process requests from other 
> domains, we need to allow cross-origin HTTP requests.
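
Connect's REST server runs on Jetty, so one plausible way to do this is Jetty's CrossOriginFilter; the exact wiring inside Connect is an assumption here:
{code}
import java.util.EnumSet;

import javax.servlet.DispatcherType;

import org.eclipse.jetty.servlet.FilterHolder;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlets.CrossOriginFilter;

// Sketch: allow cross-origin requests on all HTTP methods, not just GET/POST.
public class CorsSetup {
    public static void enableCors(ServletContextHandler context) {
        FilterHolder holder = new FilterHolder(CrossOriginFilter.class);
        holder.setInitParameter(CrossOriginFilter.ALLOWED_ORIGINS_PARAM, "*");
        holder.setInitParameter(CrossOriginFilter.ALLOWED_METHODS_PARAM,
                "GET,POST,PUT,DELETE,OPTIONS,HEAD");
        context.addFilter(holder, "/*", EnumSet.of(DispatcherType.REQUEST));
    }
}
{code}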





[jira] [Closed] (KAFKA-3611) Remove WARNs when using reflections

2016-04-29 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liquan Pei closed KAFKA-3611.
-

> Remove WARNs when using reflections 
> 
>
> Key: KAFKA-3611
> URL: https://issues.apache.org/jira/browse/KAFKA-3611
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Liquan Pei
>Assignee: Liquan Pei
> Fix For: 0.10.1.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> When using Reflections, WARN-level logs are created when some URL types are 
> not recognized by Vfs. Removing these WARN logs will improve the user 
> experience. 
> Also, the way we build the CLASSPATH in kafka-run-class.sh causes Reflections 
> to scan the directory the script is run from, which includes some classes from 
> test packages in the list of subclasses of the Connector class. However, since 
> the test jars may not be available, WARN-level logs are also generated in this 
> case. 





[jira] [Commented] (KAFKA-3128) Add metrics for ZooKeeper events

2016-04-29 Thread James Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264741#comment-15264741
 ] 

James Cheng commented on KAFKA-3128:


+1. These metrics would be super useful.

I know that there's a 0.10 release candidate expected today. Given that the 
final 0.10 is expected to be available within a couple of weeks, is it still 
likely that this work will be done in time for 0.10?


> Add metrics for ZooKeeper events
> 
>
> Key: KAFKA-3128
> URL: https://issues.apache.org/jira/browse/KAFKA-3128
> Project: Kafka
>  Issue Type: Improvement
>  Components: core, zkclient
>Reporter: Flavio Junqueira
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> It would be useful to report via Kafka metrics the number of ZK event 
> notifications, such as connection loss events, session expiration events, 
> etc., as a way of spotting potential issues with the communication with the 
> ZK ensemble.
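
As a sketch of what could be counted (assuming the zkclient IZkStateListener interface; the actual patch reportedly uses KafkaMetricsGroup on the broker side):
{code}
import java.util.concurrent.atomic.AtomicLong;

import org.I0Itec.zkclient.IZkStateListener;
import org.apache.zookeeper.Watcher.Event.KeeperState;

// Count ZK session events so they can be exposed as metrics/gauges.
public class ZkEventCounters implements IZkStateListener {
    public final AtomicLong disconnects = new AtomicLong();
    public final AtomicLong expirations = new AtomicLong();
    public final AtomicLong newSessions = new AtomicLong();

    @Override
    public void handleStateChanged(KeeperState state) {
        if (state == KeeperState.Disconnected)
            disconnects.incrementAndGet();
        else if (state == KeeperState.Expired)
            expirations.incrementAndGet();
    }

    @Override
    public void handleNewSession() {
        newSessions.incrementAndGet();
    }

    @Override
    public void handleSessionEstablishmentError(Throwable error) {
        // could be counted as well; omitted in this sketch
    }
}
{code}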





[jira] [Commented] (KAFKA-3128) Add metrics for ZooKeeper events

2016-04-29 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264750#comment-15264750
 ] 

Ismael Juma commented on KAFKA-3128:


Thanks [~junrao], I'll update the PR to use `KafkaMetricsGroup`.

[~wushujames], it's likely that another RC will be needed for KIP-57 (which 
fixes a bug in the message format and would make a lot of sense to include 
before the release), so maybe we can include this too.

> Add metrics for ZooKeeper events
> 
>
> Key: KAFKA-3128
> URL: https://issues.apache.org/jira/browse/KAFKA-3128
> Project: Kafka
>  Issue Type: Improvement
>  Components: core, zkclient
>Reporter: Flavio Junqueira
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> It would be useful to report via Kafka metrics the number of ZK event 
> notifications, such as connection loss events, session expiration events, 
> etc., as a way of spotting potential issues with the communication with the 
> ZK ensemble.





[jira] [Commented] (KAFKA-3643) Data Duplication on clean restart of Kafka Broker

2016-04-29 Thread Arun Mathew (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264820#comment-15264820
 ] 

Arun Mathew commented on KAFKA-3643:


Further details of the issue.

When an event is received by the broker that is the leader of the partition, it 
is appended to the log at
https://github.com/apache/kafka/blob/4f22705c7d0c8e8cab68883e76f554439341e34a/core/src/main/scala/kafka/server/ReplicaManager.scala#L328

which writes the event to the leader broker's log replica, after first checking 
that the broker is indeed the leader replica at
https://github.com/apache/kafka/blob/4f22705c7d0c8e8cab68883e76f554439341e34a/core/src/main/scala/kafka/cluster/Partition.scala#L430

This goes through fine, but since we have set acks = all on the producer, a 
DelayedProduce request is created and added to delayedProducePurgatory to keep 
track of the ISRs catching up, so that the producer can be acked for the 
received event.

But in our experiment there is a leadership change (due to the broker restart) 
for the partition in the meantime, and the DelayedProduce request, which 
periodically checks for completion, fails at 
https://github.com/apache/kafka/blob/4f22705c7d0c8e8cab68883e76f554439341e34a/core/src/main/scala/kafka/cluster/Partition.scala#L305
where tryCompleteDelayedProduce() checks whether the broker is still the leader.

This causes the ProduceRequest to be negatively acknowledged with a 
NOT_LEADER_FOR_PARTITION error, even though the replicas might have correctly 
replicated the event. Also, nothing is done to roll back the event committed to 
the local log while the broker was still the leader for the partition.

The producer then retries the event against the current leader broker and it 
goes through correctly, unaware that the previous try was also committed and 
replicated by all replicas in the partition.
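
For reference, a minimal producer setup matching the conditions described above (acks=all, retries enabled, batch size 1); the broker address and topic name are placeholders:
{code}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DuplicationRepro {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all");           // wait for all ISRs before acking
        props.put("retries", "2147483647"); // retry on NOT_LEADER_FOR_PARTITION
        props.put("batch.size", "1");       // one record per batch, as in the repro
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 4_000_000; i++)
                producer.send(new ProducerRecord<>("test-topic", Integer.toString(i)));
        }
    }
}
{code}
A retried send that was already committed under the previous leader is exactly what produces the duplicate.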


> Data Duplication on clean restart of Kafka Broker
> -
>
> Key: KAFKA-3643
> URL: https://issues.apache.org/jira/browse/KAFKA-3643
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.1
>Reporter: Arun Mathew
>
> We observed event duplication when partition leadership is restored to the 
> preferred leader from the new leader upon restart of the preferred leader.
> Steps to Reproduce
> - Three Broker Kafka Cluster (B1, B2, B3)
> - Create a topic with 3 replica and 1 partition. 
>   - [B1 is assigned the (preferred) Leader, B2, B3 are ISR]
> - Start sending events using performance producer for large number of events 
> that can last for few minutes to cover the broker restart time interval (say 
> 4Million)
>   - set producer batch size = 1
> - Clean shutdown Leader Broker B1
>   - Event sending continues
>   - Now, B2 is the new Leader and B3 is ISR.
> - Restart the Broker B1 (preferred leader for Partition 0)
>   - The replica in B1 catches up and becomes the Leader for P-0
> - Wait for producer to finish
> - Use get offset command to get the event count in Partition, which is higher 
> than events sent (4M)





[jira] [Commented] (KAFKA-3641) Fix RecordMetadata constructor backward compatibility

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264834#comment-15264834
 ] 

ASF GitHub Bot commented on KAFKA-3641:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1292


> Fix RecordMetadata constructor backward compatibility 
> --
>
> Key: KAFKA-3641
> URL: https://issues.apache.org/jira/browse/KAFKA-3641
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.10.1.0
>
>
> The old RecordMetadata constructor from 0.9.0 should be added back and 
> deprecated in order to maintain backward compatibility.
> {noformat}
> public RecordMetadata(TopicPartition topicPartition, long baseOffset, long 
> relativeOffset)
> {noformat}





[GitHub] kafka pull request: KAFKA-3641: Fix RecordMetadata constructor bac...

2016-04-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1292




[jira] [Updated] (KAFKA-3641) Fix RecordMetadata constructor backward compatibility

2016-04-29 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3641:

   Resolution: Fixed
Fix Version/s: (was: 0.10.0.0)
   0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1292
[https://github.com/apache/kafka/pull/1292]

> Fix RecordMetadata constructor backward compatibility 
> --
>
> Key: KAFKA-3641
> URL: https://issues.apache.org/jira/browse/KAFKA-3641
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.10.1.0
>
>
> The old RecordMetadata constructor from 0.9.0 should be added back and 
> deprecated in order to maintain backward compatibility.
> {noformat}
> public RecordMetadata(TopicPartition topicPartition, long baseOffset, long 
> relativeOffset)
> {noformat}





[GitHub] kafka pull request: HOTFIX: Fix equality semantics of KeyValue

2016-04-29 Thread miguno
GitHub user miguno opened a pull request:

https://github.com/apache/kafka/pull/1294

HOTFIX: Fix equality semantics of KeyValue

Fixes the KeyValue equals logic, which wrongly reported two pairs as equal when their keys differed but their values matched.

Original hotfix PR at https://github.com/apache/kafka/pull/1293 (/cc 
@enothereska)

Please review: @ewencp @ijuma @guozhangwang 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/miguno/kafka KeyValue-equality-hotfix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1294.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1294


commit 97f1df2c7c92f9e2486d6b75ed38831ff35a1f19
Author: Eno Thereska 
Date:   2016-04-29T18:38:17Z

Fix equality semantics of KeyValue
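
For context, a sketch of the corrected semantics (simplified; not the actual patch): equals must compare both key and value, and hashCode must be consistent with it.
{code}
import java.util.Objects;

public final class KeyValueSketch<K, V> {
    public final K key;
    public final V value;

    public KeyValueSketch(K key, V value) { this.key = key; this.value = value; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof KeyValueSketch)) return false;
        KeyValueSketch<?, ?> other = (KeyValueSketch<?, ?>) o;
        // The bug: comparing only values meant pairs with different keys were "equal".
        return Objects.equals(key, other.key) && Objects.equals(value, other.value);
    }

    @Override
    public int hashCode() {
        return Objects.hash(key, value);
    }
}
{code}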






Re: [DISCUSS] mbeans overwritten with identical clients on a single jvm

2016-04-29 Thread Jay Kreps
The definition for client id has always been "a logical name for an
application which (potentially) spans more than one process".

From my point of view the rationalization that is most needed is between client
id and "user" for the authenticated cases. They're not quite the same, but
they're similar.

I think all three of those uses are using client id for what it means, so I
don't think we should necessarily introduce three more ids. It sounds like the
problem you have is actually that you don't like the default JMX behavior when
two clients have the same id. Rather than removing the prior JMX metric, maybe
you want it to register under some kind of mangled name ("xyz_1", say); a toy
sketch of that idea follows the quoted message below. Would that work, or is
there a need to introduce wholly new concepts?

-Jay

On Fri, Apr 29, 2016 at 1:06 AM, Onur Karaman 
wrote:

> Hey everyone. I think we might need to have an actual discussion on an
> issue I brought up a while ago in
> https://issues.apache.org/jira/browse/KAFKA-3494. It seems like client-ids
> are being used for too many things today:
> 1. kafka-request.log. This helps if you ever want to associate a client
> with a specific request. Maybe you're looking for a badly behaved client.
> Maybe the client has reported unexpectedly long response times from the
> broker and you want to figure out what was happening.
> 2. quotas. Quotas today are implemented on a (client-id, broker)
> granularity.
> 3. metrics. KafkaConsumer and KafkaProducer metrics only go as granular as
> the client-id.
>
> The reason I'm bringing this up is because it looks like there's a conflict
> in intent for client-ids between the quota and metrics scenarios. One of
> the motivating factors for choosing the client-id for quotas was that it
> allows for flexibility in the granularity of the quota enforcement. For
> instance, entire services can share the same id to get some form of
> (service, broker) granularity quotas. From my understanding, client-id was
> chosen as the quota id because it's a property that already exists on the
> clients, so we'd be able to quota older clients with no additional work,
> and reusing it had relatively low impact.
>
> So while quotas encourage reuse of client-ids across client instances,
> there is a common scenario where the metrics fall apart and mbeans get
> overwritten. It looks like if there are two KafkaConsumers or two
> KafkaProducers with the same client-id in the same jvm, then JmxReporter
> will unregister the first client's mbeans while registering the second
> client's mbeans.
>
> It seems like for the three use cases noted above (kafka-request.log,
> metrics, quotas), there are different desirable characteristics:
> 1. kafka-request.log at the very least would want an id that could
> distinguish individual client instances, but it might be nice to go even
> more granular at, say, a per-connection level.
> 2. quotas would want an id that's sharable among a group of clients that
> wish to be quota'd together. This id can be defined by the user.
> 3. metrics would want an id that could distinguish individual client
> instances. This id can be defined by the user. We expect it to stay the same
> across process restarts so we can potentially associate metrics across
> process restarts.
>
> To resolve this, I think we'd want metrics to have another tag to
> differentiate mbeans from instances with the same client-id. Another
> alternative is to make quotas depend on a quota id instead of client-id (as
> brought up in KIP-55), but this means we can no longer quota older clients
> out of the box.
>
> Other suggestions are welcome!
>
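
A toy sketch of the name-mangling idea from Jay's reply (an assumption about one possible approach, not how JmxReporter behaves today): when a client-id is reused in the same JVM, register the later instance under a suffixed name instead of evicting the first.
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ClientIdRegistry {
    private static final Map<String, Integer> COUNTS = new ConcurrentHashMap<>();

    // First caller gets "xyz", subsequent callers get "xyz-1", "xyz-2", ...
    public static String uniqueMetricId(String clientId) {
        int n = COUNTS.merge(clientId, 1, Integer::sum);
        return n == 1 ? clientId : clientId + "-" + (n - 1);
    }
}
{code}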


[GitHub] kafka pull request: KAFKA-3459: Returning zero task configurations...

2016-04-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1248




[jira] [Updated] (KAFKA-3459) Returning zero task configurations from a connector does not properly clean up existing tasks

2016-04-29 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3459:
-
   Resolution: Fixed
Fix Version/s: 0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1248
[https://github.com/apache/kafka/pull/1248]

> Returning zero task configurations from a connector does not properly clean 
> up existing tasks
> -
>
> Key: KAFKA-3459
> URL: https://issues.apache.org/jira/browse/KAFKA-3459
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.9.0.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
> Fix For: 0.10.1.0
>
>
> Instead of deleting the existing tasks, it just leaves them in place. If 
> you're writing a connector with a variable number of inputs that may drop 
> to zero, this makes it impossible to clean up the existing tasks.
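
A sketch of the intended semantics (method and type names are assumptions): an empty list of task configurations should mean "stop everything", not "leave things as they are".
{code}
import java.util.List;
import java.util.Map;

public class TaskReconciler {
    public void onNewTaskConfigs(List<Map<String, String>> newConfigs, List<String> runningTaskIds) {
        if (newConfigs.isEmpty()) {
            for (String taskId : runningTaskIds)
                stopTask(taskId); // previously these were left running
            return;
        }
        // otherwise diff newConfigs against runningTaskIds and (re)start as needed
    }

    private void stopTask(String taskId) { /* placeholder */ }
}
{code}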





[jira] [Commented] (KAFKA-3459) Returning zero task configurations from a connector does not properly clean up existing tasks

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264835#comment-15264835
 ] 

ASF GitHub Bot commented on KAFKA-3459:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1248


> Returning zero task configurations from a connector does not properly clean 
> up existing tasks
> -
>
> Key: KAFKA-3459
> URL: https://issues.apache.org/jira/browse/KAFKA-3459
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.9.0.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
> Fix For: 0.10.1.0
>
>
> Instead of deleting the existing tasks, it just leaves them in place. If 
> you're writing a connector with a variable number of inputs that may drop 
> to zero, this makes it impossible to clean up the existing tasks.





[RELEASE] Final merge of trunk into 0.10.0

2016-04-29 Thread Gwen Shapira
Hi,

I just merged trunk into 0.10.0 branch and pushed.

0.10.0 is updated as of commit d0dedc6 (KAFKA-3459: Returning zero
task configurations from a connector does not properly clean up
existing tasks).

Committers:
Please cherry-pick only critical bug fixes and/or low-risk changes
(preferably tests) into 0.10.0 branch. We are trying to stabilize it
for a release.

Thanks,
$RM


[GitHub] kafka pull request: HOTFIX: Fix equality semantics of KeyValue

2016-04-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1294




[GitHub] kafka pull request: KAFKA-3627: consumer fails to execute delayed ...

2016-04-29 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1295

KAFKA-3627: consumer fails to execute delayed tasks in poll when records 
are available



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3627

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1295.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1295


commit 04116c65369926955d79084851f6dfda313b5fee
Author: Jason Gustafson 
Date:   2016-04-29T21:58:35Z

KAFKA-3627: consumer fails to execute delayed tasks in poll when records 
are available






[jira] [Commented] (KAFKA-3627) New consumer doesn't run delayed tasks while under load

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264870#comment-15264870
 ] 

ASF GitHub Bot commented on KAFKA-3627:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1295

KAFKA-3627: consumer fails to execute delayed tasks in poll when records 
are available



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3627

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1295.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1295


commit 04116c65369926955d79084851f6dfda313b5fee
Author: Jason Gustafson 
Date:   2016-04-29T21:58:35Z

KAFKA-3627: consumer fails to execute delayed tasks in poll when records 
are available




> New consumer doesn't run delayed tasks while under load
> ---
>
> Key: KAFKA-3627
> URL: https://issues.apache.org/jira/browse/KAFKA-3627
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
>Reporter: Rob Underwood
>Assignee: Jason Gustafson
> Attachments: DelayedTaskBugConsumer.java, kafka-3627-output.log
>
>
> If the new consumer receives a steady flow of fetch responses it will not run 
> delayed tasks, which means it will not heartbeat or perform automatic offset 
> commits.
> The main cause is the code that attempts to pipeline fetch responses and keep 
> the consumer fed.  Specifically, in KafkaConsumer::pollOnce() there is a 
> check that skips calling client.poll() if there are fetched records ready 
> (line 903 in the 0.9.0 branch of this writing).  Then in 
> KafkaConsumer::poll(), if records are returned it will initiate another fetch 
> and perform a quick poll, which will send/receive fetch requests/responses 
> but will not run delayed tasks.
> If the timing works out, and the consumer is consistently receiving fetched 
> records, it won't run delayed tasks until it doesn't receive a fetch response 
> during its quick poll.  That leads to a rebalance since the consumer isn't 
> heartbeating, and typically means all the consumed records will be 
> re-delivered since the automatic offset commit wasn't able to run either.
> h5. Steps to reproduce
> # Start up a cluster with *at least 2 brokers*.  This seems to be required to 
> reproduce the issue, I'm guessing because the fetch responses all arrive 
> together when using a single broker.
> # Create a topic with a good number of partitions
> #* bq. bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic 
> delayed-task-bug --partitions 10 --replication-factor 1
> # Generate some test data so the consumer has plenty to consume.  In this 
> case I'm just using uuids
> #* bq. for ((i=0;i<100;++i)); do cat /proc/sys/kernel/random/uuid >> /tmp/test-messages; done
> #* bq. bin/kafka-console-producer.sh --broker-list localhost:9092 --topic 
> delayed-task-bug < /tmp/test-messages
> # Start up a consumer with a small max fetch size to ensure it only pulls a 
> few records at a time.  The consumer can simply sleep for a moment when it 
> receives a record.
> #* I'll attach an example in Java (a minimal sketch in the same spirit follows below)
> # There's a timing aspect to this issue so it may take a few attempts to 
> reproduce
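
In the same spirit as the example mentioned in the steps above (the real attachment is not reproduced here), a minimal sketch of such a consumer; the broker address and fetch size are placeholders:
{code}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Small fetches plus slow per-record processing keep the consumer permanently
// "fed", so delayed tasks (heartbeats, auto-commit) starve.
public class SlowConsumerSketch {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "delayed-task-bug");
        props.put("enable.auto.commit", "true");
        props.put("max.partition.fetch.bytes", "256"); // keep each fetch tiny
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("delayed-task-bug"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                    Thread.sleep(10); // simulate slow processing
                }
            }
        }
    }
}
{code}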





[jira] [Created] (KAFKA-3644) Use Boolean protocol type for StopReplicaRequest delete_partitions

2016-04-29 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3644:
--

 Summary: Use Boolean protocol type for StopReplicaRequest 
delete_partitions
 Key: KAFKA-3644
 URL: https://issues.apache.org/jira/browse/KAFKA-3644
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.10.0.0
Reporter: Grant Henke
Assignee: Grant Henke
 Fix For: 0.10.0.0


Recently the boolean protocol type was added. The StopReplicaRequest 
delete_partitions field already uses an int8 to represent the boolean, so 
this compatible change is mostly cleanup and documentation. 
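
Purely for illustration (this is not the actual StopReplicaRequest schema definition), the change amounts to swapping the declared field type; both INT8 and BOOLEAN serialize to a single byte, which is why it is wire-compatible:
{code}
import org.apache.kafka.common.protocol.types.Field;
import org.apache.kafka.common.protocol.types.Schema;
import org.apache.kafka.common.protocol.types.Type;

public class BooleanFieldSketch {
    static final Schema BEFORE = new Schema(
            new Field("delete_partitions", Type.INT8, "1 means delete the partitions"));
    static final Schema AFTER = new Schema(
            new Field("delete_partitions", Type.BOOLEAN, "Whether to delete the partitions"));
}
{code}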





[jira] [Updated] (KAFKA-3644) Use Boolean protocol type for StopReplicaRequest delete_partitions

2016-04-29 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3644:
---
Status: Patch Available  (was: Open)

> Use Boolean protocol type for StopReplicaRequest delete_partitions
> --
>
> Key: KAFKA-3644
> URL: https://issues.apache.org/jira/browse/KAFKA-3644
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.0.0
>
>
> Recently the boolean protocol type was added. The StopReplicaRequest 
> delete_partitions field already uses an int8 to represent the boolean, 
> so this compatible change is mostly cleanup and documentation. 





[jira] [Commented] (KAFKA-3644) Use Boolean protocol type for StopReplicaRequest delete_partitions

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264892#comment-15264892
 ] 

ASF GitHub Bot commented on KAFKA-3644:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/1296

KAFKA-3644: Use Boolean protocol type for StopReplicaRequest delete_partitions

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka stop-boolean

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1296.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1296


commit 9e3fb5adab30d571728cfa3b4f29cb9be3f53fd9
Author: Grant Henke 
Date:   2016-04-29T22:29:05Z

KAFKA-3644: Use Boolean protocol type for StopReplicaRequest 
delete_partitions




> Use Boolean protocol type for StopReplicaRequest delete_partitions
> --
>
> Key: KAFKA-3644
> URL: https://issues.apache.org/jira/browse/KAFKA-3644
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.0.0
>
>
> Recently the boolean protocol type was added. The StopReplicaRequest 
> delete_partitions field already uses an int8 to represent the boolean, 
> so this compatible change is mostly cleanup and documentation. 




