[jira] [Commented] (KAFKA-1729) add doc for Kafka-based offset management in 0.8.2

2015-01-22 Thread Joel Koshy (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-1729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288933#comment-14288933 ]

Joel Koshy commented on KAFKA-1729:
---

I was trying out the documentation I wrote on 
https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka
 but there are a couple of minor issues. I may need to make small changes to 
the javaapi versions of the request/responses. Will update tomorrow with a 
patch if necessary.

> add doc for Kafka-based offset management in 0.8.2
> --
>
> Key: KAFKA-1729
> URL: https://issues.apache.org/jira/browse/KAFKA-1729
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jun Rao
>Assignee: Joel Koshy
> Fix For: 0.8.2
>
> Attachments: KAFKA-1782-doc-v1.patch, KAFKA-1782-doc-v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-5 - Broker Configuration Management

2015-01-22 Thread Joe Stein
There are still some to-dos in
https://reviews.apache.org/r/29513/diff/ to change over to ConfigDef
(https://reviews.apache.org/r/30126/diff/) once that is in.

We can get more written up on it, will do.
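For readers unfamiliar with the ConfigDef mentioned above: it is the declarative, typed config-definition style used by the new Java clients. The sketch below is a toy illustration of that style only; the class name, method signatures, and keys here are made up and are not Kafka's actual ConfigDef API.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of a ConfigDef-style typed config registry. The real ConfigDef
// lives in org.apache.kafka.common.config and is richer; every name and
// method here is made up for illustration.
public class ConfigDefSketch {
    public enum Type { INT, STRING }

    private static class Key {
        final String name; final Type type; final Object defaultValue;
        Key(String name, Type type, Object defaultValue) {
            this.name = name; this.type = type; this.defaultValue = defaultValue;
        }
    }

    private final Map<String, Key> keys = new HashMap<>();

    // Chainable definition, in the spirit of ConfigDef.define(...).
    public ConfigDefSketch define(String name, Type type, Object defaultValue) {
        keys.put(name, new Key(name, type, defaultValue));
        return this;
    }

    // Turn raw string properties into typed values, applying defaults.
    public Map<String, Object> parse(Map<String, String> props) {
        Map<String, Object> out = new HashMap<>();
        for (Key k : keys.values()) {
            String raw = props.get(k.name);
            if (raw == null) {
                out.put(k.name, k.defaultValue);
            } else if (k.type == Type.INT) {
                out.put(k.name, Integer.parseInt(raw));
            } else {
                out.put(k.name, raw);
            }
        }
        return out;
    }
}
```

The appeal for broker configuration management is that definitions like these carry type, default, and documentation in one place, instead of ad hoc parsing scattered through the code.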

On Fri, Jan 23, 2015 at 12:05 AM, Jay Kreps  wrote:

> Hey Joe,
>
> Can you fill in this KIP? The purpose of these KIPs is to give a full
> overview of the feature, how it will work, be implemented, the
> considerations involved, etc. There is only like one sentence on this which
> isn't enough for anyone to know what you are thinking.
>
> Moving off of configs to something that I'm guessing would be
> Zookeeper-based (?) is a massive change so we really need to describe this
> in a way that can be widely circulated.
>
> I actually think this would be a good idea. But there are a ton of
> advantages to good old fashioned text files in terms of config management
> and change control. And trying to support both may or may not be better.
>
> -Jay
>
>
> On Wed, Jan 21, 2015 at 10:34 PM, Joe Stein  wrote:
>
> > Created a KIP
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-5+-+Broker+Configuration+Management
> >
> > JIRA https://issues.apache.org/jira/browse/KAFKA-1786
> >
> > /***
> >  Joe Stein
> >  Founder, Principal Consultant
> >  Big Data Open Source Security LLC
> >  http://www.stealth.ly
> >  Twitter: @allthingshadoop 
> > /
> >
>


Re: [DISCUSS] KIP-6 - New reassignment partition logic for re-balancing

2015-01-22 Thread Joe Stein
I will go back through the ticket and code and write more up. Should be
able to do that sometime next week. The intention was not to replace
existing functionality, but to issue a WARN on use; in the release after
that we could then deprecate it... I will fix the KIP for that too.
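For context on what a replica-placement routine looks like, here is a toy round-robin sketch of the kind of assignment a --rebalance option might emit. This is purely illustrative under my own assumptions; it is not the algorithm KIP-6 actually proposes.

```java
import java.util.ArrayList;
import java.util.List;

// Toy round-robin replica placement: partition p gets replicas on brokers
// (p, p+1, ..., p+rf-1) mod the broker count, spreading leaders (the first
// replica) evenly. Illustrative only; not the KIP-6 algorithm.
public class RoundRobinAssignment {
    public static List<List<Integer>> assign(List<Integer> brokers,
                                             int numPartitions,
                                             int replicationFactor) {
        List<List<Integer>> assignment = new ArrayList<>();
        for (int p = 0; p < numPartitions; p++) {
            List<Integer> replicas = new ArrayList<>();
            for (int r = 0; r < replicationFactor; r++) {
                // r == 0 is the preferred leader; followers are staggered.
                replicas.add(brokers.get((p + r) % brokers.size()));
            }
            assignment.add(replicas);
        }
        return assignment;
    }
}
```

With brokers [0, 1, 2], three partitions, and replication factor 2, this yields [0,1], [1,2], [2,0]: each broker leads one partition and follows one.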

On Fri, Jan 23, 2015 at 12:34 AM, Neha Narkhede  wrote:

> Hey Joe,
>
> 1. Could you add details to the Public Interface section of the KIP? This
> should include the proposed changes to the partition reassignment tool.
> Also, maybe the new option can be named --rebalance instead of
> --re-balance?
> 2. It makes sense to list --decommission-broker as part of this KIP.
> Similarly, shouldn't we also have an --add-broker option? The way I see
> this is that there are several events when a partition reassignment is
> required. Before this functionality is automated on the broker, the tool
> will generate an ideal replica placement for each such event. The users
> should merely have to specify the nature of the event e.g. adding a broker
> or decommissioning an existing broker or merely rebalancing.
> 3. If I understand the KIP correctly, the upgrade plan for this feature
> includes removing the existing --generate option on the partition
> reassignment tool in 0.8.3 while adding all the new options in the same
> release. Is that correct?
>
> Thanks,
> Neha
>
> On Thu, Jan 22, 2015 at 9:23 PM, Jay Kreps  wrote:
>
> > Ditto on this one. Can you give the algorithm we want to implement?
> >
> > Also I think in terms of scope this is just proposing to change the logic
> > in ReassignPartitionsCommand? I think we've had the discussion various
> > times on the mailing list that what people really want is just for Kafka to
> > do its best to balance data in an online fashion (for some definition of
> > balance). i.e. if you add a new node partitions would slowly migrate to it,
> > and if a node dies, partitions slowly migrate off it. This could
> > potentially be more work, but I'm not sure how much more. Has anyone
> > thought about how to do it?
> >
> > -Jay
> >
> > On Wed, Jan 21, 2015 at 10:11 PM, Joe Stein 
> wrote:
> >
> > > Posted a KIP for --re-balance for partition assignment in reassignment
> > > tool.
> > >
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-6+-+New+reassignment+partition+logic+for+re-balancing
> > >
> > > JIRA https://issues.apache.org/jira/browse/KAFKA-1792
> > >
> > > While going through the KIP I thought of one thing from the JIRA that we
> > > should change. We should preserve --generate as existing functionality
> > > for the next release it is in. If folks want to use --re-balance then
> > > great, it just won't break any upgrade paths, yet.
> > >
> > > /***
> > >  Joe Stein
> > >  Founder, Principal Consultant
> > >  Big Data Open Source Security LLC
> > >  http://www.stealth.ly
> > >  Twitter: @allthingshadoop 
> > > /
> > >
> >
>
>
>
> --
> Thanks,
> Neha
>


Re: [DISCUSS] KIP-8 - Decommission a broker

2015-01-22 Thread Joe Stein
Ok

On Fri, Jan 23, 2015 at 12:26 AM, Neha Narkhede  wrote:

> Hi Joe,
>
> Thanks for starting a KIP for this change.
>
> This feature is trivial once we agree on KIP-6
> (https://cwiki.apache.org/confluence/display/KAFKA/KIP-6+-+New+reassignment+partition+logic+for+re-balancing)
> since that feature figures out an ideal replica placement. Decommissioning a
> broker is just one of the events that triggers the same logic mentioned in
> KIP-6. I
> wonder if we can just fold this trivial change under KIP-6. This will save
> some overhead and also make KIP-6 more complete.
>
> Thanks,
> Neha
>
>
> On Wed, Jan 21, 2015 at 10:20 PM, Joe Stein  wrote:
>
> > Hi, created a KIP
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-8+-+Decommission+a+broker
> >
> > JIRA related https://issues.apache.org/jira/browse/KAFKA-1753
> >
> > I took out the compatibility/migration section since this is new behavior.
> > If anyone can think of any compatibility concerns, we should add it back in.
> >
> > /***
> >  Joe Stein
> >  Founder, Principal Consultant
> >  Big Data Open Source Security LLC
> >  http://www.stealth.ly
> >  Twitter: @allthingshadoop 
> > /
> >
>
>
>
> --
> Thanks,
> Neha
>


Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations

2015-01-22 Thread Joe Stein
inline

On Thu, Jan 22, 2015 at 11:59 PM, Jay Kreps  wrote:

> Hey Joe,
>
> This is great. A few comments on KIP-4
>
> 1. This is much-needed functionality, but there are a lot of these, so let's
> really think these protocols through. We really want to end up with a set
> of well-thought-out, orthogonal apis. For this reason I think it is really
> important to think through the end state even if that includes APIs we
> won't implement in the first phase.
>

ok


>
> 2. Let's please please please wait until we have switched the server over
> to the new java protocol definitions. If we add umpteen more ad hoc scala
> objects that is just generating more work for the conversion we know we
> have to do.
>

ok :)


>
> 3. This proposal introduces a new type of optional parameter. This is
> inconsistent with everything else in the protocol where we use -1 or some
> other marker value. You could argue either way but let's stick with that
> for consistency. For clients that implemented the protocol in a better way
> than our scala code these basic primitives are hard to change.
>

yes, less confusing, ok.
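To make the marker-value convention concrete: the protocol already encodes absence with -1 elsewhere (for example, a null string is written with length -1). The hypothetical field below sketches the same idea for an optional int; names here are my own, not part of any proposed request.

```java
import java.nio.ByteBuffer;

// Sketch of the existing marker-value convention: an "absent" optional int
// is written as -1 on the wire instead of introducing a new optional-field
// encoding. Hypothetical field, illustrative only.
public class SentinelField {
    public static final int UNSET = -1;

    public static ByteBuffer encode(Integer maybeValue) {
        ByteBuffer buf = ByteBuffer.allocate(4);
        buf.putInt(maybeValue == null ? UNSET : maybeValue);
        buf.flip(); // make the buffer readable
        return buf;
    }

    // Returns null when the wire value is the -1 marker.
    public static Integer decode(ByteBuffer buf) {
        int v = buf.getInt();
        return v == UNSET ? null : v;
    }
}
```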


>
> 4. ClusterMetadata: This seems to duplicate TopicMetadataRequest which has
> brokers, topics, and partitions. I think we should rename that request
> ClusterMetadataRequest (or just MetadataRequest) and include the id of the
> controller. Or are there other things we could add here?
>

We could add broker version to it.


>
> 5. We have a tendency to try to make a lot of requests that can only go to
> particular nodes. This adds a lot of burden for client implementations (it
> sounds easy but each discovery can fail in many parts so it ends up being a
> full state machine to do right). I think we should consider making admin
> commands and ideally as many of the other apis as possible available on all
> brokers and just redirect to the controller on the broker side. Perhaps
> there would be a general way to encapsulate this re-routing behavior.
>

If we do that then we should also preserve what we have and do both. The
client can then decide "do I want to go to any broker and proxy" or just
"go to controller and run admin task". Lots of folks have seen controllers
come under distress because of their producers/consumers. There is a ticket
too for controller election and re-election
(https://issues.apache.org/jira/browse/KAFKA-1778) so you can force it to a
broker that has 0 load.


>
> 6. We should probably normalize the key value pairs used for configs rather
> than embedding a new formatting. So two strings rather than one with an
> internal equals sign.
>

ok
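A quick sketch of why two separate strings beat one embedded "key=value" string: once a value can itself contain '=', the single-string form needs a parsing convention (split on the first '='), which two fields avoid entirely. The helper below is hypothetical, just to illustrate the ambiguity.

```java
// Parsing the single-string form requires a convention: split on the FIRST
// '=' so that values containing '=' still round-trip. With two separate
// strings in the protocol, no parsing is needed at all. Hypothetical helper.
public class ConfigPair {
    public static String[] parseJoined(String joined) {
        int i = joined.indexOf('=');
        if (i < 0) throw new IllegalArgumentException("no '=' in: " + joined);
        return new String[] { joined.substring(0, i), joined.substring(i + 1) };
    }
}
```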


>
> 7. Is the postcondition of these APIs that the command has begun or that
> the command has been completed? It is a lot more usable if the command has
> been completed so you know that if you create a topic and then publish to
> it you won't get an exception about there being no such topic.
>

We should define that more. There needs to be some more state there, yes.

We should try to cover https://issues.apache.org/jira/browse/KAFKA-1125
within what we come up with.
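One way to think about the "command has completed" postcondition is a poll-until-visible helper on the client side: after issuing create-topic, block until the topic shows up (or a timeout expires) before producing. The sketch below uses a static set as a stand-in for cluster state; all names are hypothetical.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.BooleanSupplier;

// Sketch of waiting for a command's postcondition. The static set is a
// stand-in for broker-side cluster state; names are hypothetical.
public class AwaitCompletion {
    static Set<String> clusterTopics = new HashSet<>();

    public static void createTopic(String name) {
        clusterTopics.add(name); // pretend the command was issued and applied
    }

    // Spin until the condition holds or the deadline passes.
    public static boolean awaitTrue(BooleanSupplier cond, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        do {
            if (cond.getAsBoolean()) return true;
        } while (System.currentTimeMillis() < deadline);
        return false;
    }
}
```

If the server itself guaranteed completion, clients would not need this kind of polling loop at all, which is the usability argument in point 7.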


>
> 8. Describe topic and list topics duplicate a lot of stuff in the metadata
> request. Is there a reason to give back topics marked for deletion? I feel
> like if we just make the post-condition of the delete command be that the
> topic is deleted that will get rid of the need for this right? And it will
> be much more intuitive.
>

I will go back and look through it.


>
> 9. Should we consider batching these requests? We have generally tried to
> allow multiple operations to be batched. My suspicion is that without this
> we will get a lot of code that does something like
>for(topic: adminClient.listTopics())
>   adminClient.describeTopic(topic)
> this code will work great when you test on 5 topics but not do as well if
> you have 50k.
>

So the input would be a list of topics (or none, meaning all), and the
controller (which could be routed through another broker) would return the
entire response as one batch? We could introduce a Batch keyword to
explicitly show the usage of it.
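The batching concern in point 9 can be made concrete with a toy client that counts round trips. The API names are hypothetical; the point is only that the per-topic loop costs N requests while the batched form costs one.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy client contrasting N per-topic round trips with one batched request.
// Hypothetical API; only the request count matters here.
public class BatchedDescribe {
    private int requestCount = 0;

    public int requestCount() { return requestCount; }

    // One round trip per topic: fine for 5 topics, painful for 50k.
    public String describeTopic(String topic) {
        requestCount++;
        return "metadata(" + topic + ")";
    }

    // One round trip for the whole list.
    public Map<String, String> describeTopics(List<String> topics) {
        requestCount++;
        Map<String, String> out = new HashMap<>();
        for (String t : topics) out.put(t, "metadata(" + t + ")");
        return out;
    }
}
```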


> 10. I think we should also discuss how we want to expose a programmatic JVM
> client api for these operations. Currently people rely on AdminUtils which
> is totally sketchy. I think we probably need another client under clients/
> that exposes administrative functionality. We will need this just to
> properly test the new apis, I suspect. We should figure out that API.
>

We were talking about that here
https://issues.apache.org/jira/browse/KAFKA-1774 and wrote it in java
https://reviews.apache.org/r/29301/diff/7/?page=4#75 so we could do
something like that, sure.


>
> 11. The other information that would be really useful to get would be
> information about partitions--how much data is in the partition, what are
> the segment offsets, what is the log-end offset (i.e. last offset), what is
> the compaction point, etc. I think that done right this would be the
> successor to the very awkward OffsetRequest we have today.


Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations

2015-01-22 Thread Jay Kreps
Hey Joe,

This is great. A few comments on KIP-4

1. This is much-needed functionality, but there are a lot of these, so let's
really think these protocols through. We really want to end up with a set
of well-thought-out, orthogonal apis. For this reason I think it is really
important to think through the end state even if that includes APIs we
won't implement in the first phase.

2. Let's please please please wait until we have switched the server over
to the new java protocol definitions. If we add umpteen more ad hoc scala
objects that is just generating more work for the conversion we know we
have to do.

3. This proposal introduces a new type of optional parameter. This is
inconsistent with everything else in the protocol where we use -1 or some
other marker value. You could argue either way but let's stick with that
for consistency. For clients that implemented the protocol in a better way
than our scala code these basic primitives are hard to change.

4. ClusterMetadata: This seems to duplicate TopicMetadataRequest which has
brokers, topics, and partitions. I think we should rename that request
ClusterMetadataRequest (or just MetadataRequest) and include the id of the
controller. Or are there other things we could add here?

5. We have a tendency to try to make a lot of requests that can only go to
particular nodes. This adds a lot of burden for client implementations (it
sounds easy but each discovery can fail in many parts so it ends up being a
full state machine to do right). I think we should consider making admin
commands and ideally as many of the other apis as possible available on all
brokers and just redirect to the controller on the broker side. Perhaps
there would be a general way to encapsulate this re-routing behavior.

6. We should probably normalize the key value pairs used for configs rather
than embedding a new formatting. So two strings rather than one with an
internal equals sign.

7. Is the postcondition of these APIs that the command has begun or that
the command has been completed? It is a lot more usable if the command has
been completed so you know that if you create a topic and then publish to
it you won't get an exception about there being no such topic.

8. Describe topic and list topics duplicate a lot of stuff in the metadata
request. Is there a reason to give back topics marked for deletion? I feel
like if we just make the post-condition of the delete command be that the
topic is deleted that will get rid of the need for this right? And it will
be much more intuitive.

9. Should we consider batching these requests? We have generally tried to
allow multiple operations to be batched. My suspicion is that without this
we will get a lot of code that does something like
   for(topic: adminClient.listTopics())
  adminClient.describeTopic(topic)
this code will work great when you test on 5 topics but not do as well if
you have 50k.

10. I think we should also discuss how we want to expose a programmatic JVM
client api for these operations. Currently people rely on AdminUtils which
is totally sketchy. I think we probably need another client under clients/
that exposes administrative functionality. We will need this just to
properly test the new apis, I suspect. We should figure out that API.

11. The other information that would be really useful to get would be
information about partitions--how much data is in the partition, what are
the segment offsets, what is the log-end offset (i.e. last offset), what is
the compaction point, etc. I think that done right this would be the
successor to the very awkward OffsetRequest we have today.

-Jay

On Wed, Jan 21, 2015 at 10:27 PM, Joe Stein  wrote:

> Hi, created a KIP
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations
>
> JIRA https://issues.apache.org/jira/browse/KAFKA-1694
>
> /***
>  Joe Stein
>  Founder, Principal Consultant
>  Big Data Open Source Security LLC
>  http://www.stealth.ly
>  Twitter: @allthingshadoop 
> /
>



Build failed in Jenkins: Kafka-trunk #377

2015-01-22 Thread Apache Jenkins Server
See 

Changes:

[joe.stein] KAFKA-1891 MirrorMaker hides consumer exception - making
troubleshooting challenging; patch by Gwen Shapira, reviewed by Joe Stein

--
[...truncated 1150 lines...]
(A long run of passing unit-test results in kafka.utils, kafka.metrics,
kafka.log, etc. follows in the original; the output breaks off mid-line.)

Re: [DISCUSS] KIPs

2015-01-22 Thread Jun Rao
Reviewed the latest patch in KAFKA-1809 :).

Thanks,

Jun

On Thu, Jan 22, 2015 at 12:38 PM, Gwen Shapira 
wrote:

> Thanks for validating our ideas. Updated the KIP with the workflow.
>
> Now if you can nudge Jun to review the latest patch... ;)
>
>
> On Thu, Jan 22, 2015 at 11:44 AM, Jay Kreps  wrote:
> > Oh yeah I think that is better, I hadn't thought of that approach! Any way
> > you could describe the usage in the KIP, just for completeness?
> >
> > -Jay
> >
> > On Thu, Jan 22, 2015 at 10:23 AM, Gwen Shapira 
> > wrote:
> >
> >> I think what you described was the original design, so no wonder you
> >> are confused :)
> >>
> >> Following suggestions from Jun, I changed it a bit. The current model is:
> >>
> >> - Clients (producers and consumers) need to know about the broker
> >> ports in advance. They don't need to know about all brokers, but they
> >> need to know at least one host:port pair that speaks the protocol they
> >> want to use. The change is that all host:port pairs in broker.list
> >> must be of the same protocol and match the security.protocol
> >> configuration parameter.
> >>
> >> - Client uses security.protocol configuration parameter to open a
> >> connection to one of the brokers and sends the good old
> >> MetadataRequest. The broker knows which port it got the connection on,
> >> therefore it knows which security protocol is expected (it needs to
> >> use the same protocol to accept the connection and respond), and
> >> therefore it can send a response that contains only the host:port
> >> pairs that are relevant to that protocol.
> >>
> >> - From the client side the MetadataResponse did not change - it
> >> contains a list of brokerId,host,port that the client can connect to.
> >> The fact that all those broker endpoints were chosen out of a larger
> >> collection to match the right protocol is irrelevant for the client.
> >>
> >> I really like the new design since it preserves a lot of the same
> >> configurations and APIs.
> >>
> >> Thoughts?
> >>
> >> Gwen
> >>
> >> On Thu, Jan 22, 2015 at 9:19 AM, Jay Kreps  wrote:
> >> > I think I am still confused. In addition to the UpdateMetadataRequest
> >> > don't we have to change the MetadataResponse so that it's possible for
> >> > clients to discover the new ports? Or is that a second phase? I was
> >> > imagining it worked by basically allowing the brokers to advertise
> >> > multiple ports, one per security type, and then in the client you
> >> > configure a protocol which will implicitly choose the port from the
> >> > options returned in metadata to you...
> >> >
> >> > Likewise in the ConsumerMetadataResponse we are currently giving back
> >> > full broker information. I think we would have two options here: either
> >> > change the broker information included in that response to match the
> >> > MetadataResponse or else remove the broker information entirely and
> >> > just return the node id (since in order to use that request you would
> >> > already have to have the cluster metadata). The second option may be
> >> > cleaner since it means we won't have to continue evolving those two in
> >> > lockstep...
> >> >
> >> > -Jay
> >> >
> >> > On Wed, Jan 21, 2015 at 6:19 PM, Gwen Shapira 
> >> wrote:
> >> >
> >> >> Good point :)
> >> >>
> >> >> I added the specifics of the new UpdateMetadataRequest, which is the
> >> >> only protocol bump in this change.
> >> >>
> >> >> Highlighted the broker and producer/consumer configuration changes,
> >> >> added some example values and added the new zookeeper json.
> >> >>
> >> >> Hope this makes things clearer.
> >> >>
> >> >> On Wed, Jan 21, 2015 at 2:19 PM, Jay Kreps 
> wrote:
> >> >> > Hey Gwen,
> >> >> >
> >> >> > Could we get the actual changes in that KIP? I.e. changes to
> >> >> > metadata request, changes to UpdateMetadataRequest, new configs and
> >> >> > what their valid values will be, etc. This kind of says that those
> >> >> > things will change but doesn't say what they will change to...
> >> >> >
> >> >> > -Jay
> >> >> >
> >> >> > On Mon, Jan 19, 2015 at 9:45 PM, Gwen Shapira <
> gshap...@cloudera.com>
> >> >> wrote:
> >> >> >
> >> >> >> I created a KIP for the multi-port broker change.
> >> >> >>
> >> >> >>
> >> >>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-2+-+Refactor+brokers+to+allow+listening+on+multiple+ports+and+IPs
> >> >> >>
> >> >> >> I'm not re-opening the discussion, since it was agreed on over a
> >> month
> >> >> >> ago and implementation is close to complete (I hope!). Lets
> consider
> >> >> >> this voted and accepted?
> >> >> >>
> >> >> >> Gwen
> >> >> >>
> >> >> >> On Sun, Jan 18, 2015 at 10:31 AM, Jay Kreps 
> >> >> wrote:
> >> >> >> > Great! Sounds like everyone is on the same page
> >> >> >> >
> >> >> >> >- I created a template page to make things easier. If you do
> >> >> >> Tools->Copy
> >> >> >> >on this page you can just fill in the italic portions with
> your
> >> >> >> det

Re: Review Request 27799: New consumer

2015-01-22 Thread Jay Kreps


> On Jan. 22, 2015, 7:10 p.m., Guozhang Wang wrote:
> > core/src/test/scala/unit/kafka/utils/TestUtils.scala, lines 733-743
> > 
> >
> > Is this the same as createTopic in line 172?
> 
> Jay Kreps wrote:
> Yeah, weird, I swear I actually didn't add that method. The code style 
> isn't even mine (e.g. I would never never put spaces after the paren in a for 
> loop). Yet there it is. Maybe it came with the patch that added the request 
> that I built off of...dunno. Anyhow it isn't used so I'll delete it.

Nevermind, I see what it was, I moved that out of PrimativeApiTest since it was 
general purpose. But the right thing to do was delete it and just use the 
already existing methods in TestUtils as you point out. done.


- Jay


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/#review68947
---


On Jan. 23, 2015, 4:22 a.m., Jay Kreps wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27799/
> ---
> 
> (Updated Jan. 23, 2015, 4:22 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1760
> https://issues.apache.org/jira/browse/KAFKA-1760
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> New consumer.
> 
> 
> Diffs
> -
> 
>   build.gradle 1cbab29ce83e20dae0561b51eed6fdb86d522f28 
>   clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
> d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
>   clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
> 8aece7e81a804b177a6f2c12e2dc6c89c1613262 
>   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
> ab7e3220f9b76b92ef981d695299656f041ad5ed 
>   clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
> 397695568d3fd8e835d8f923a89b3b00c96d0ead 
>   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
> 6746275d0b2596cd6ff7ce464a3a8225ad75ef00 
>   
> clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
> c0c636b3e1ba213033db6d23655032c9bbd5e378 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
> 57c1807ccba9f264186f83e91f37c34b959c8060 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceCallback.java
>  e4cf7d1cfa01c2844b53213a7b539cdcbcbeaa3a 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java 
> 16af70a5de52cca786fdea147a6a639b7dc4a311 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
> bdf4b26942d5a8c8a9503e05908e9a9eff6228a7 
>   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
> 76efc216c9e6c3ab084461d792877092a189ad0f 
>   clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
> fa88ac1a8b19b4294f211c4467fe68c7707ddbae 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/NoOffsetForPartitionException.java
>  PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/consumer/OffsetMetadata.java 
> ea423ad15eebd262d20d5ec05d592cc115229177 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
>  PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> fc71710dd5997576d3841a1c3b0f7e19a8c9698e 
>   clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
> 904976fadf0610982958628eaee810b60a98d725 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> 8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java
>  dcf46581b912cfb1b5c8d4cbc293d2d1444b7740 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Partitioner.java
>  483899d2e69b33655d0e08949f5f64af2519660a 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
> ccc03d8447ebba40131a70e16969686ac4aab58a 
>   clients/src/main/java/org/apache/kafka/common/Cluster.java 
> d3299b944062d96852452de455902659ad8af757 
>   clients/src/main/java/org/apache/kafka/common/PartitionInfo.java 
> b15aa2c3ef2d7c4b24618ff42fd4da324237a813 
>   clients/src/main/java/org/a

[jira] [Updated] (KAFKA-1760) Implement new consumer client

2015-01-22 Thread Jay Kreps (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Kreps updated KAFKA-1760:
-
Attachment: KAFKA-1760_2015-01-22_20:21:56.patch

> Implement new consumer client
> -
>
> Key: KAFKA-1760
> URL: https://issues.apache.org/jira/browse/KAFKA-1760
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Fix For: 0.8.3
>
> Attachments: KAFKA-1760.patch, KAFKA-1760_2015-01-11_16:57:15.patch, 
> KAFKA-1760_2015-01-18_19:10:13.patch, KAFKA-1760_2015-01-21_08:42:20.patch, 
> KAFKA-1760_2015-01-22_10:03:26.patch, KAFKA-1760_2015-01-22_20:21:56.patch
>
>
> Implement a consumer client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1760) Implement new consumer client

2015-01-22 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288721#comment-14288721
 ] 

Jay Kreps commented on KAFKA-1760:
--

Updated reviewboard https://reviews.apache.org/r/27799/diff/
 against branch trunk

> Implement new consumer client
> -
>
> Key: KAFKA-1760
> URL: https://issues.apache.org/jira/browse/KAFKA-1760
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Fix For: 0.8.3
>
> Attachments: KAFKA-1760.patch, KAFKA-1760_2015-01-11_16:57:15.patch, 
> KAFKA-1760_2015-01-18_19:10:13.patch, KAFKA-1760_2015-01-21_08:42:20.patch, 
> KAFKA-1760_2015-01-22_10:03:26.patch, KAFKA-1760_2015-01-22_20:21:56.patch
>
>
> Implement a consumer client.





Re: Review Request 27799: New consumer

2015-01-22 Thread Jay Kreps

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/
---

(Updated Jan. 23, 2015, 4:22 a.m.)


Review request for kafka.


Bugs: KAFKA-1760
https://issues.apache.org/jira/browse/KAFKA-1760


Repository: kafka


Description (updated)
---

New consumer.


Diffs (updated)
-

  build.gradle 1cbab29ce83e20dae0561b51eed6fdb86d522f28 
  clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
  clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
8aece7e81a804b177a6f2c12e2dc6c89c1613262 
  clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
ab7e3220f9b76b92ef981d695299656f041ad5ed 
  clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
397695568d3fd8e835d8f923a89b3b00c96d0ead 
  clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
6746275d0b2596cd6ff7ce464a3a8225ad75ef00 
  clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
c0c636b3e1ba213033db6d23655032c9bbd5e378 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
57c1807ccba9f264186f83e91f37c34b959c8060 
  
clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceCallback.java
 e4cf7d1cfa01c2844b53213a7b539cdcbcbeaa3a 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java 
16af70a5de52cca786fdea147a6a639b7dc4a311 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
bdf4b26942d5a8c8a9503e05908e9a9eff6228a7 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
76efc216c9e6c3ab084461d792877092a189ad0f 
  clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
fa88ac1a8b19b4294f211c4467fe68c7707ddbae 
  
clients/src/main/java/org/apache/kafka/clients/consumer/NoOffsetForPartitionException.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/OffsetMetadata.java 
ea423ad15eebd262d20d5ec05d592cc115229177 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
fc71710dd5997576d3841a1c3b0f7e19a8c9698e 
  clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
904976fadf0610982958628eaee810b60a98d725 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java 
dcf46581b912cfb1b5c8d4cbc293d2d1444b7740 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/Partitioner.java
 483899d2e69b33655d0e08949f5f64af2519660a 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
ccc03d8447ebba40131a70e16969686ac4aab58a 
  clients/src/main/java/org/apache/kafka/common/Cluster.java 
d3299b944062d96852452de455902659ad8af757 
  clients/src/main/java/org/apache/kafka/common/PartitionInfo.java 
b15aa2c3ef2d7c4b24618ff42fd4da324237a813 
  clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
98cb79b701918eca3f6ca9823b6c7b7c97b3ecec 
  clients/src/main/java/org/apache/kafka/common/errors/ApiException.java 
7c948b166a8ac07616809f260754116ae7764973 
  clients/src/main/java/org/apache/kafka/common/network/Selectable.java 
b68bbf00ab8eba6c5867d346c91188142593ca6e 
  clients/src/main/java/org/apache/kafka/common/network/Selector.java 
74d695ba39de44b6a3d15340ec0114bc4fce2ba2 
  clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
3316b6a1098311b8603a4a5893bf57b75d2e43cb 
  clients/src/main/java/org/apache/kafka/common/protocol/types/Struct.java 
121e880a941fcd3e6392859edba11a94236494cc 
  clients/src/main/java/org/apache/kafka/common/record/LogEntry.java 
e4d688cbe0c61b74ea15fc8dd3b634f9e5ee9b83 
  clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java 
040e5b91005edb8f015afdfa76fd94e0bf3cb4ca 
  
clients/src/main/java/org/apache/kafka/common/requests/ConsumerMetadataRequest.java
 99b52c23d639df010bf2affc0f79d1c6e16ed67c 
  
clients/src/main/java/org/apache/kafka/common/requests/ConsumerMetadataResponse.java
 8b8f591c4b2802a9cbbe34746c0b3ca4a64a8681 
  clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 
2fc471f64f4352eeb128bbd3941779780076fb8c 
  clien

Re: Review Request 27799: New consumer

2015-01-22 Thread Jay Kreps


> On Jan. 22, 2015, 7:10 p.m., Guozhang Wang wrote:
> > clients/src/main/java/org/apache/kafka/common/network/Selector.java, line 
> > 198
> > 
> >
> > Just trying to understand the rationale: why do we want to special-case 
> > failures in the send() call and leave them in the disconnected state at the 
> > beginning of the next poll() call?

The rationale is that the disconnection can happen either prior to the send 
call or after it, but how you handle the two cases is pretty much the same. 
So it is easier to just treat them the same; otherwise you need the same 
handling logic in two places, once as a try/catch on send and once to handle 
the disconnection response.


> On Jan. 22, 2015, 7:10 p.m., Guozhang Wang wrote:
> > clients/src/main/java/org/apache/kafka/common/utils/Utils.java, lines 
> > 334-336
> > 
> >
> > Newline here?

Hmm, I don't think we've been very consistent about that to date. If we want to 
get consistent, let's do a clean sweep and fix them all. If I change just these 
two methods they will be inconsistent with everything else in Utils.java, and 
if I change that, it will still be inconsistent with much of the other code.


> On Jan. 22, 2015, 7:10 p.m., Guozhang Wang wrote:
> > clients/src/test/java/org/apache/kafka/clients/consumer/ConsumerExampleTest.java,
> >  lines 285-297
> > 
> >
> > Is this intentional?

No, totally forgot about that, I think these examples should just be deleted.


> On Jan. 22, 2015, 7:10 p.m., Guozhang Wang wrote:
> > clients/src/test/java/org/apache/kafka/clients/consumer/internals/SubscriptionStateTest.java,
> >  lines 45-53
> > 
> >
> > "expected" usage may be fragile with junit, see KAFKA-1782 for more 
> > details.

Yeah but that is just due to scala stupidity. This is java, so it should work 
fine, right?
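What JUnit 4's @Test(expected = ...) attribute checks can be illustrated in plain Java (hypothetical helper, not JUnit's actual implementation): the test passes iff the body throws an instance of the expected type.

```java
// Plain-Java illustration of the semantics of JUnit 4's
// @Test(expected = E.class): the test body must throw an instance of E.
public class ExpectedExceptionDemo {
    public static boolean passes(Runnable testBody, Class<? extends Throwable> expected) {
        try {
            testBody.run();
            return false; // nothing thrown: the test fails
        } catch (Throwable t) {
            return expected.isInstance(t); // pass only on the expected type
        }
    }

    public static void main(String[] args) {
        boolean ok = passes(() -> { throw new IllegalStateException("no offset"); },
                            IllegalStateException.class);
        System.out.println(ok); // prints true
    }
}
```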


> On Jan. 22, 2015, 7:10 p.m., Guozhang Wang wrote:
> > core/src/main/scala/kafka/server/KafkaApis.scala, lines 460-461
> > 
> >
> > Just realized we might have a mis-match in our design while reading 
> > this code:
> > 
> > When consumer subscribe (topic-partition), does it still needs to send 
> > a join-group request to the coordinator? If not, then we will not be able 
> > to detect / exclude the case where consumers within the same group 
> > subscribe to both topic-partition and topic and the coordinator will only 
> > try to balance with those only subscribing to topics; if yes then the 
> > join-group request needs to be modified as it only contain topics field 
> > today.

Well the goal of this code was just to give back something to be able to test 
against.

But to answer your question, here is my belief about how it is supposed to work:
1. If you subscribe to topics we check validity on the server side and do all 
the fancy assignment.
2. If you subscribe to particular topic-partitions we don't check anything at 
all. It is up to you. You may have consumers for all partitions, you may not, 
we don't help in any way and there is no join group request. Basically if you 
subscribe to a partition you are subscribed to that partition; there is 
nothing to check. I think this is actually the right thing as there are many 
possible patterns for partition-level subscription and nothing we can check 
that will be correct or helpful across them.
3. We disallow mixing of partition and topic subscriptions in the same client, 
just for implementation simplicity.

So say you have some code that subscribes to partitions and other code that 
subscribes to topics. The "right" behavior is that the topic subscribers all 
join the group and divide up ALL the partitions amongst themselves. The code 
subscribing to partitions gets whatever it subscribes to, duplicating the topic 
stuff. In practice this is a mistake that is a bit hard to make, I'd imagine.
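Points 2 and 3 above can be sketched as a minimal model (assumed names, not the real SubscriptionState class): topic-level and partition-level subscriptions are mutually exclusive within one consumer, and only topic-level subscribers take part in group coordination.

```java
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of the subscription rules described above; names are
// illustrative, not the actual consumer API.
public class SubscriptionModel {
    private final Set<String> topics = new HashSet<>();
    private final Set<String> partitions = new HashSet<>(); // "topic-0" style keys

    public void subscribeTopic(String topic) {
        if (!partitions.isEmpty())
            throw new IllegalStateException("already subscribed at the partition level");
        topics.add(topic);
    }

    public void subscribePartition(String topic, int partition) {
        if (!topics.isEmpty())
            throw new IllegalStateException("already subscribed at the topic level");
        partitions.add(topic + "-" + partition);
    }

    // Only topic-level subscribers send a join-group request and take part
    // in the coordinator's partition assignment.
    public boolean needsGroupCoordination() {
        return !topics.isEmpty();
    }

    public static void main(String[] args) {
        SubscriptionModel m = new SubscriptionModel();
        m.subscribePartition("logs", 0);
        System.out.println(m.needsGroupCoordination()); // prints false
    }
}
```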


> On Jan. 22, 2015, 7:10 p.m., Guozhang Wang wrote:
> > core/src/main/scala/kafka/tools/ConsumerPerformance.scala, lines 60-64
> > 
> >
> > The usage of the new consumer here assumes there is only one broker and 
> > hence according to the stub logic it will get all the partitions, which is 
> > a bit risky.

That is why you need to implement the co-ordinator :-)


> On Jan. 22, 2015, 7:10 p.m., Guozhang Wang wrote:
> > core/src/test/scala/integration/kafka/api/ConsumerTest.scala, line 64
> > 
> >
> > Shall we add the @Test label just in case?

No I don't think so. I think the point you made in the other JIRA was that 
@Test only works for junit 4, jun

Re: Review Request 30199: Patch for KAFKA-1890

2015-01-22 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30199/
---

(Updated Jan. 23, 2015, 3:57 a.m.)


Review request for kafka.


Bugs: KAFKA-1890
https://issues.apache.org/jira/browse/KAFKA-1890


Repository: kafka


Description (updated)
---

Patch for KAFKA-1890
Mirror maker hit NPE at startup.


Diffs
-

  core/src/main/scala/kafka/tools/MirrorMaker.scala 
5cbc8103e33a0a234d158c048e5314e841da6249 

Diff: https://reviews.apache.org/r/30199/diff/


Testing
---


Thanks,

Jiangjie Qin



Re: Review Request 30199: Patch for KAFKA-1890

2015-01-22 Thread Jiangjie Qin


> On Jan. 23, 2015, 3:14 a.m., Joe Stein wrote:
> > Can you add test cases for the failure please? We should be able to apply 
> > the test cases, run the tests, see the failure, apply the fix, and verify 
> > success. Thanks!

It was actually a pretty silly mistake I made. MirrorMaker will always throw an 
NPE when starting up. After applying the patch it won't occur again.


- Jiangjie


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30199/#review69337
---


On Jan. 23, 2015, 3:57 a.m., Jiangjie Qin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30199/
> ---
> 
> (Updated Jan. 23, 2015, 3:57 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1890
> https://issues.apache.org/jira/browse/KAFKA-1890
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Patch for KAFKA-1890
> Mirror maker hit NPE at startup.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/tools/MirrorMaker.scala 
> 5cbc8103e33a0a234d158c048e5314e841da6249 
> 
> Diff: https://reviews.apache.org/r/30199/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jiangjie Qin
> 
>



[jira] [Commented] (KAFKA-1888) Add a "rolling upgrade" system test

2015-01-22 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288694#comment-14288694
 ] 

Joe Stein commented on KAFKA-1888:
--

I agree this is something that we need. I don't know if the current system 
tests are the right vehicle for the effort; they haven't been able to help with 
compatibility across clients or tooling. We started on some Spark jobs at 
https://github.com/stealthly/gauntlet which I think we can make work for this 
type of test too. If this is an approach that folks in the core project might 
be interested in, I could write up a KIP.

> Add a "rolling upgrade" system test
> ---
>
> Key: KAFKA-1888
> URL: https://issues.apache.org/jira/browse/KAFKA-1888
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Fix For: 0.9.0
>
>
> To help test upgrades and compatibility between versions, it will be cool to 
> add a rolling-upgrade test to system tests:
> Given two versions (just a path to the jars?), check that you can do a
> rolling upgrade of the brokers from one version to another (using clients 
> from the old version) without losing data.





[jira] [Updated] (KAFKA-1891) MirrorMaker hides consumer exception - making troubleshooting challenging

2015-01-22 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1891:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

committed to trunk

> MirrorMaker hides consumer exception - making troubleshooting challenging
> -
>
> Key: KAFKA-1891
> URL: https://issues.apache.org/jira/browse/KAFKA-1891
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Fix For: 0.8.3
>
> Attachments: KAFKA-1891.patch
>
>
> When MirrorMaker encounters an issue creating a consumer, it gives a generic 
> "unable to create stream" error, while hiding the actual issue.
> We should print the original exception too, so users can resolve whatever 
> issue prevents MirrorMaker from creating a stream.
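The fix described above boils down to propagating the original exception as the cause rather than swallowing it; a generic sketch of the pattern (assumed names, not MirrorMaker's actual code):

```java
// Sketch of the pattern behind the fix above: wrap the original exception
// as the cause instead of swallowing it, so the root failure stays visible.
public class StreamCreator {
    public static void createStream(Runnable consumerFactory) {
        try {
            consumerFactory.run();
        } catch (RuntimeException e) {
            // Before the fix, a bare new RuntimeException("unable to create
            // stream") would hide 'e'; chaining it preserves the real cause.
            throw new RuntimeException("unable to create stream", e);
        }
    }

    public static void main(String[] args) {
        try {
            createStream(() -> { throw new IllegalArgumentException("bad zk connect string"); });
        } catch (RuntimeException e) {
            System.out.println(e.getCause().getMessage()); // prints: bad zk connect string
        }
    }
}
```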





[jira] [Updated] (KAFKA-1891) MirrorMaker hides consumer exception - making troubleshooting challenging

2015-01-22 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1891:
-
Fix Version/s: 0.8.3

> MirrorMaker hides consumer exception - making troubleshooting challenging
> -
>
> Key: KAFKA-1891
> URL: https://issues.apache.org/jira/browse/KAFKA-1891
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Fix For: 0.8.3
>
> Attachments: KAFKA-1891.patch
>
>
> When MirrorMaker encounters an issue creating a consumer, it gives a generic 
> "unable to create stream" error, while hiding the actual issue.
> We should print the original exception too, so users can resolve whatever 
> issue prevents MirrorMaker from creating a stream.





Re: Review Request 30199: Patch for KAFKA-1890

2015-01-22 Thread Joe Stein

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30199/#review69337
---


Can you add test cases for the failure please? We should be able to apply the 
test cases, run the tests, see the failure, apply the fix, and verify success. 
Thanks!

- Joe Stein


On Jan. 22, 2015, 11:25 p.m., Jiangjie Qin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30199/
> ---
> 
> (Updated Jan. 22, 2015, 11:25 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1890
> https://issues.apache.org/jira/browse/KAFKA-1890
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Patch for KAFKA-1890
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/tools/MirrorMaker.scala 
> 5cbc8103e33a0a234d158c048e5314e841da6249 
> 
> Diff: https://reviews.apache.org/r/30199/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jiangjie Qin
> 
>



[jira] [Updated] (KAFKA-1890) Fix bug preventing Mirror Maker from successful rebalance.

2015-01-22 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1890:
-
Reviewer: Gwen Shapira

> Fix bug preventing Mirror Maker from successful rebalance.
> --
>
> Key: KAFKA-1890
> URL: https://issues.apache.org/jira/browse/KAFKA-1890
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.8.3
>
> Attachments: KAFKA-1890.patch
>
>
> Follow-up patch for KAFKA-1650





[jira] [Updated] (KAFKA-1890) Fix bug preventing Mirror Maker from successful rebalance.

2015-01-22 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1890:
-
Priority: Blocker  (was: Major)

> Fix bug preventing Mirror Maker from successful rebalance.
> --
>
> Key: KAFKA-1890
> URL: https://issues.apache.org/jira/browse/KAFKA-1890
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.8.3
>
> Attachments: KAFKA-1890.patch
>
>
> Follow-up patch for KAFKA-1650





[jira] [Updated] (KAFKA-1890) Follow-up patch for KAFKA-1650

2015-01-22 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1890:
-
Description: (was: Fix bug preventing Mirror Maker from successful 
rebalance.)

> Follow-up patch for KAFKA-1650
> --
>
> Key: KAFKA-1890
> URL: https://issues.apache.org/jira/browse/KAFKA-1890
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.8.3
>
> Attachments: KAFKA-1890.patch
>
>






[jira] [Updated] (KAFKA-1890) Fix bug preventing Mirror Maker from successful rebalance.

2015-01-22 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1890:
-
Description: Follow-up patch for KAFKA-1650

> Fix bug preventing Mirror Maker from successful rebalance.
> --
>
> Key: KAFKA-1890
> URL: https://issues.apache.org/jira/browse/KAFKA-1890
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.8.3
>
> Attachments: KAFKA-1890.patch
>
>
> Follow-up patch for KAFKA-1650





[jira] [Updated] (KAFKA-1890) Fix bug preventing Mirror Maker from successful rebalance.

2015-01-22 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1890:
-
Summary: Fix bug preventing Mirror Maker from successful rebalance.  (was: 
Follow-up patch for KAFKA-1650)

> Fix bug preventing Mirror Maker from successful rebalance.
> --
>
> Key: KAFKA-1890
> URL: https://issues.apache.org/jira/browse/KAFKA-1890
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.8.3
>
> Attachments: KAFKA-1890.patch
>
>






[jira] [Updated] (KAFKA-1890) Fix bug preventing Mirror Maker from successful rebalance.

2015-01-22 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1890:
-
Fix Version/s: 0.8.3

> Fix bug preventing Mirror Maker from successful rebalance.
> --
>
> Key: KAFKA-1890
> URL: https://issues.apache.org/jira/browse/KAFKA-1890
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.8.3
>
> Attachments: KAFKA-1890.patch
>
>
> Follow-up patch for KAFKA-1650





Re: Latency Tracking Across All Kafka Component

2015-01-22 Thread Bhavesh Mistry
Hi Kafka Team,

I completely understand the use of the audit event and the reference material
posted at https://issues.apache.org/jira/browse/KAFKA-260 and the slides.

Since we are an enterprise customer of the Kafka end-to-end pipeline, it would
be great if Kafka had built-in support for distributed tracing.  Here is how I
envision Kafka distributed tracing:

1) I would like to see the end-to-end journey across each major hop, including
the major components within the same JVM (
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Internals e.g. the
network layer, API layer, replication, and log subsystem).

Once an app team produces an audit log message, it will contain a GUID and the
ability to trace its journey through the producer (queue), over the network to
the broker (request channel, API layer, disk commit), and on to the consumer
read.  This gives both Kafka customers (operations) and developers the ability
to trace an event's journey and zoom into the component that is the bottleneck.
Of course, the use case could be extended to an aggregated call graph for the
entire pipeline (a far-fetched vision).


Here are a couple of references on how other companies trace distributed
systems.

http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/36356.pdf
https://blog.twitter.com/2012/distributed-systems-tracing-with-zipkin

eBay  Transactional Logger (Distributed Tree Logging)
http://server.dzone.com/articles/monitoring-ebay-big-data
https://devopsdotcom.files.wordpress.com/2012/11/screen-shot-2012-11-11-at-10-06-39-am.png


UI for tracking the Audit event:
http://4.bp.blogspot.com/-b0r71ZbJdmA/T9DYhbE0uXI/ABs/bXwyM76Iddc/s1600/web-screenshot.png

This is how I would implement it:
Each Kafka component writes a transaction log for its audit events to disk →
an agent (Flume, Logstash, etc.) ships those pre-formatted (structured) logs
to → Elasticsearch, so people can search by the GUID and produce a call graph
similar to Zipkin, or a Chrome resource-timeline view of an event showing
where it spent its time.

This would be a powerful tool both for the Kafka development team and for
customers who have latency issues.  It requires a lot of effort and code
instrumentation, but it would be cool if the Kafka team at least got started
with distributed tracing functionality.
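The GUID-plus-per-hop-timestamps idea above can be sketched as follows (all names are hypothetical, not Kafka or Camus APIs): each tier stamps the same trace record as the event passes through, and hop latencies fall out by subtraction.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical sketch of a per-hop trace record: each tier records its own
// timestamp against a shared GUID, so any hop-to-hop latency is a subtraction.
public class TraceEvent {
    public final String guid = UUID.randomUUID().toString();
    // hop name -> millis when the event was seen at that hop (insertion-ordered)
    public final Map<String, Long> hops = new LinkedHashMap<>();

    public void recordHop(String hop, long timestampMs) {
        hops.put(hop, timestampMs);
    }

    // latency between two recorded hops, in ms
    public long latencyMs(String from, String to) {
        return hops.get(to) - hops.get(from);
    }

    public static void main(String[] args) {
        TraceEvent e = new TraceEvent();
        e.recordHop("producer", 1000);
        e.recordHop("local-broker", 1010);
        e.recordHop("mirror-maker", 1250);
        e.recordHop("central-broker", 1260);
        System.out.println(e.latencyMs("local-broker", "mirror-maker")); // prints 240
    }
}
```

In the proposed pipeline each component would emit such a record to its local structured log, and an indexing layer keyed on the GUID would stitch the hops back into one timeline.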

I am sorry I got back to you so late.

Thanks,

Bhavesh


On Thu, Jan 15, 2015 at 4:01 PM, Guozhang Wang  wrote:

> Hi,
>
> At LinkedIn we used an audit module to track the latency / message counts
> at each "tier" of the pipeline (for your example it will have the producer
> / local / central / HDFS tiers). Some details can be found on our recent
> talk slides (slide 41/42):
>
> http://www.slideshare.net/GuozhangWang/apache-kafka-at-linkedin-43307044
>
> This audit is specific to the usage of Avro as our serialization tool
> though, and we are considering ways to get it generalized hence
> open-sourced.
>
> Guozhang
>
>
> On Mon, Jan 5, 2015 at 3:33 PM, Otis Gospodnetic <
> otis.gospodne...@gmail.com
> > wrote:
>
> > Hi,
> >
> > That sounds a bit like needing a full, cross-app, cross-network
> > transaction/call tracing, and not something specific or limited to Kafka,
> > doesn't it?
> >
> > Otis
> > --
> > Monitoring * Alerting * Anomaly Detection * Centralized Log Management
> > Solr & Elasticsearch Support * http://sematext.com/
> >
> >
> > On Mon, Jan 5, 2015 at 2:43 PM, Bhavesh Mistry <
> mistry.p.bhav...@gmail.com
> > >
> > wrote:
> >
> > > Hi Kafka Team/Users,
> > >
> > > We are using Linked-in Kafka data pipe-line end-to-end.
> > >
> > > Producer(s) ->Local DC Brokers -> MM -> Central brokers -> Camus Job ->
> > > HDFS
> > >
> > > This is working out very well for us, but we need to have visibility of
> > > latency at each layer (Local DC Brokers -> MM -> Central brokers ->
> Camus
> > > Job ->  HDFS).  Our events are time-based (time event was produce).  Is
> > > there any feature or any audit trail  mentioned at (
> > > https://github.com/linkedin/camus/) ?  But, I would like to know
> > > in-between
> > > latency and time event spent in each hope? So, we do not know where is
> > > problem and what t o optimize ?
> > >
> > > Any of this cover in 0.9.0 or any other version of upcoming Kafka
> release
> > > ?  How might we achive this  latency tracking across all components ?
> > >
> > >
> > > Thanks,
> > >
> > > Bhavesh
> > >
> >
>
>
>
> --
> -- Guozhang
>


[jira] [Commented] (KAFKA-1634) Improve semantics of timestamp in OffsetCommitRequests and update documentation

2015-01-22 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288639#comment-14288639
 ] 

Guozhang Wang commented on KAFKA-1634:
--

Updated reviewboard https://reviews.apache.org/r/27391/diff/
 against branch origin/trunk

> Improve semantics of timestamp in OffsetCommitRequests and update 
> documentation
> ---
>
> Key: KAFKA-1634
> URL: https://issues.apache.org/jira/browse/KAFKA-1634
> Project: Kafka
>  Issue Type: Bug
>Reporter: Neha Narkhede
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.8.3
>
> Attachments: KAFKA-1634.patch, KAFKA-1634_2014-11-06_15:35:46.patch, 
> KAFKA-1634_2014-11-07_16:54:33.patch, KAFKA-1634_2014-11-17_17:42:44.patch, 
> KAFKA-1634_2014-11-21_14:00:34.patch, KAFKA-1634_2014-12-01_11:44:35.patch, 
> KAFKA-1634_2014-12-01_18:03:12.patch, KAFKA-1634_2015-01-14_15:50:15.patch, 
> KAFKA-1634_2015-01-21_16:43:01.patch, KAFKA-1634_2015-01-22_18:47:37.patch
>
>
> From the mailing list -
> following up on this -- I think the online API docs for OffsetCommitRequest
> still incorrectly refer to client-side timestamps:
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-OffsetCommitRequest
> Wasn't that removed, and isn't it now always handled server-side?  Would
> one of the devs mind updating the API spec wiki?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1634) Improve semantics of timestamp in OffsetCommitRequests and update documentation

2015-01-22 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1634:
-
Attachment: KAFKA-1634_2015-01-22_18:47:37.patch

> Improve semantics of timestamp in OffsetCommitRequests and update 
> documentation
> ---
>
> Key: KAFKA-1634
> URL: https://issues.apache.org/jira/browse/KAFKA-1634
> Project: Kafka
>  Issue Type: Bug
>Reporter: Neha Narkhede
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.8.3
>
> Attachments: KAFKA-1634.patch, KAFKA-1634_2014-11-06_15:35:46.patch, 
> KAFKA-1634_2014-11-07_16:54:33.patch, KAFKA-1634_2014-11-17_17:42:44.patch, 
> KAFKA-1634_2014-11-21_14:00:34.patch, KAFKA-1634_2014-12-01_11:44:35.patch, 
> KAFKA-1634_2014-12-01_18:03:12.patch, KAFKA-1634_2015-01-14_15:50:15.patch, 
> KAFKA-1634_2015-01-21_16:43:01.patch, KAFKA-1634_2015-01-22_18:47:37.patch
>
>
> From the mailing list -
> following up on this -- I think the online API docs for OffsetCommitRequest
> still incorrectly refer to client-side timestamps:
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-OffsetCommitRequest
> Wasn't that removed, and isn't it now always handled server-side?  Would
> one of the devs mind updating the API spec wiki?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27391: Fix KAFKA-1634

2015-01-22 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27391/
---

(Updated Jan. 23, 2015, 2:47 a.m.)


Review request for kafka.


Bugs: KAFKA-1634
https://issues.apache.org/jira/browse/KAFKA-1634


Repository: kafka


Description (updated)
---

Incorporated Jun's comments


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
7517b879866fc5dad5f8d8ad30636da8bbe7784a 
  clients/src/main/java/org/apache/kafka/common/protocol/types/Struct.java 
121e880a941fcd3e6392859edba11a94236494cc 
  
clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java 
3ee5cbad55ce836fd04bb954dcf6ef2f9bc3288f 
  
clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java 
df37fc6d8f0db0b8192a948426af603be3444da4 
  core/src/main/scala/kafka/api/OffsetCommitRequest.scala 
050615c72efe7dbaa4634f53943bd73273d20ffb 
  core/src/main/scala/kafka/api/OffsetFetchRequest.scala 
c7604b9cdeb8f46507f04ed7d35fcb3d5f150713 
  core/src/main/scala/kafka/common/OffsetMetadataAndError.scala 
4cabffeacea09a49913505db19a96a55d58c0909 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
5487259751ebe19f137948249aa1fd2637d2deb4 
  core/src/main/scala/kafka/server/KafkaApis.scala 
ec8d9f7ba44741db40875458bd524c4062ad6a26 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
6d74983472249eac808d361344c58cc2858ec971 
  core/src/main/scala/kafka/server/KafkaServer.scala 
89200da30a04943f0b9befe84ab17e62b747c8c4 
  core/src/main/scala/kafka/server/OffsetManager.scala 
0bdd42fea931cddd072c0fff765b10526db6840a 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
e58fbb922e93b0c31dff04f187fcadb4ece986d7 
  core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
cd16ced5465d098be7a60498326b2a98c248f343 
  core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
5b93239cdc26b5be7696f4e7863adb9fbe5f0ed5 
  core/src/test/scala/unit/kafka/server/ServerShutdownTest.scala 
ba1e48e4300c9fb32e36e7266cb05294f2a481e5 

Diff: https://reviews.apache.org/r/27391/diff/


Testing
---


Thanks,

Guozhang Wang



Re: Review Request 27391: Fix KAFKA-1634

2015-01-22 Thread Guozhang Wang


> On Jan. 22, 2015, 1:56 a.m., Jun Rao wrote:
> > core/src/main/scala/kafka/server/KafkaApis.scala, lines 145-166
> > 
> >
> > I am not sure that we should change the timestamp for offsets produced 
> > in V0 and V1. There could be data in the offset topic already written by 
> > 0.8.2 code. See the other comment in OffsetManager on expiring.

I think if it (the commit timestamp) is set to the default value of -1, we 
should override it, per the wiki:

https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-OffsetCommitRequest

Otherwise it should not be overridden.


> On Jan. 22, 2015, 1:56 a.m., Jun Rao wrote:
> > core/src/main/scala/kafka/server/OffsetManager.scala, lines 121-123
> > 
> >
> > Does that change work correctly with offsets already stored in v0 and 
> > v1 format using 0.8.2 code? Would those offsets still be expired at the 
> > right time?

Changed the logic for overriding the commit / expire timestamps to the 
following:

1. If version <= 1, or the retention time is the default (-1), override the 
retention time with the server default value.
2. If the commit timestamp is the default (-1), override it with the current 
time.
3. After 2) is done, compute the expire time as commit timestamp + retention 
time.
4. Hence the existing expiration check (expiration time < now) remains 
compatible.
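
As a rough illustration only (the constant, field, and method names below are assumptions for the sketch, not the actual kafka.server.OffsetManager code), the four steps correspond to:

```java
// Sketch of the commit/expire timestamp override described in steps 1-4
// above. DEFAULT_TIMESTAMP (-1) and all names are illustrative assumptions.
class OffsetTimestampOverride {
    static final long DEFAULT_TIMESTAMP = -1L;

    // Returns {commitTimestamp, expireTimestamp} after applying the overrides.
    static long[] resolve(int apiVersion, long commitTimestamp, long retentionMs,
                          long nowMs, long serverDefaultRetentionMs) {
        // Step 1: v0/v1 requests, or an unset retention, use the server default.
        if (apiVersion <= 1 || retentionMs == DEFAULT_TIMESTAMP)
            retentionMs = serverDefaultRetentionMs;
        // Step 2: an unset commit timestamp is overridden with the current time.
        if (commitTimestamp == DEFAULT_TIMESTAMP)
            commitTimestamp = nowMs;
        // Step 3: expire time = commit timestamp + retention time.
        long expireTimestamp = commitTimestamp + retentionMs;
        // Step 4: the existing expiry check (expireTimestamp < now) still applies.
        return new long[] { commitTimestamp, expireTimestamp };
    }
}
```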


- Guozhang


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27391/#review69106
---


On Jan. 22, 2015, 12:43 a.m., Guozhang Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27391/
> ---
> 
> (Updated Jan. 22, 2015, 12:43 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1634
> https://issues.apache.org/jira/browse/KAFKA-1634
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Incorporated Joel's comments
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
> 7517b879866fc5dad5f8d8ad30636da8bbe7784a 
>   clients/src/main/java/org/apache/kafka/common/protocol/types/Struct.java 
> 121e880a941fcd3e6392859edba11a94236494cc 
>   
> clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java
>  3ee5cbad55ce836fd04bb954dcf6ef2f9bc3288f 
>   
> clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
>  df37fc6d8f0db0b8192a948426af603be3444da4 
>   core/src/main/scala/kafka/api/OffsetCommitRequest.scala 
> 050615c72efe7dbaa4634f53943bd73273d20ffb 
>   core/src/main/scala/kafka/api/OffsetFetchRequest.scala 
> c7604b9cdeb8f46507f04ed7d35fcb3d5f150713 
>   core/src/main/scala/kafka/common/OffsetMetadataAndError.scala 
> 4cabffeacea09a49913505db19a96a55d58c0909 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> 191a8677444e53b043e9ad6e94c5a9191c32599e 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> ec8d9f7ba44741db40875458bd524c4062ad6a26 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 
> 6d74983472249eac808d361344c58cc2858ec971 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 89200da30a04943f0b9befe84ab17e62b747c8c4 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 0bdd42fea931cddd072c0fff765b10526db6840a 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> e58fbb922e93b0c31dff04f187fcadb4ece986d7 
>   core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
> cd16ced5465d098be7a60498326b2a98c248f343 
>   core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
> 5b93239cdc26b5be7696f4e7863adb9fbe5f0ed5 
>   core/src/test/scala/unit/kafka/server/ServerShutdownTest.scala 
> ba1e48e4300c9fb32e36e7266cb05294f2a481e5 
> 
> Diff: https://reviews.apache.org/r/27391/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Guozhang Wang
> 
>



Re: Review Request 29831: Patch for KAFKA-1476

2015-01-22 Thread Neha Narkhede

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29831/#review69332
---


Onur, do you have an updated version of the console output from this tool?

- Neha Narkhede


On Jan. 22, 2015, 10:32 a.m., Onur Karaman wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/29831/
> ---
> 
> (Updated Jan. 22, 2015, 10:32 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1476
> https://issues.apache.org/jira/browse/KAFKA-1476
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Merged in work for KAFKA-1476 and sub-task KAFKA-1826
> 
> 
> Diffs
> -
> 
>   bin/kafka-consumer-groups.sh PRE-CREATION 
>   core/src/main/scala/kafka/admin/AdminUtils.scala 
> 28b12c7b89a56c113b665fbde1b95f873f8624a3 
>   core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala PRE-CREATION 
>   core/src/main/scala/kafka/utils/ZkUtils.scala 
> c14bd455b6642f5e6eb254670bef9f57ae41d6cb 
>   core/src/test/scala/unit/kafka/admin/DeleteConsumerGroupTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/utils/TestUtils.scala 
> ac15d34425795d5be20c51b01fa1108bdcd66583 
> 
> Diff: https://reviews.apache.org/r/29831/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Onur Karaman
> 
>



Re: Review Request 28769: Patch for KAFKA-1809

2015-01-22 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28769/#review69281
---


Thanks for the patch. Looks promising. Some comments.

1. I overlooked this when I suggested the new broker format in ZK. This means 
that we will need to upgrade all consumer clients before we can turn on the 
flag of using the new protocol on the brokers, which may not be convenient. 
Now, I think your earlier approach is probably better because of this?


clients/src/main/java/org/apache/kafka/clients/tools/ProducerPerformance.java


This is probably not intended.



clients/src/test/java/org/apache/kafka/common/utils/UtilsTest.java


I thought the security protocol is a separate config in the client and not 
part of the broker.list. Do we need to specify the protocol here?



core/src/main/scala/kafka/cluster/Broker.scala


channel -> protocol?



core/src/main/scala/kafka/cluster/Broker.scala


The implementation seems to represent the endpoints as a single string in 
csv, not a map?



core/src/main/scala/kafka/javaapi/TopicMetadata.scala


Technically, this is an api change since it's used in 
javaapi.SimpleConsumer. The caller will now get a different type in the 
response. An alternative is to leave Broker as it is and create sth like 
BrokerProfile to include all endpoints. Perhaps we need to discuss this in the 
WIP a bit: whether it's better to break the api in order to use a more 
meaningful class name, or not break the api and stick with a lousy name.



core/src/main/scala/kafka/network/SocketServer.scala


KAFKA-1683



core/src/main/scala/kafka/network/SocketServer.scala


It seems that we should always be able to find the port in the map. So, 
perhaps we should just do portToProtocol(port).



core/src/main/scala/kafka/server/KafkaConfig.scala


Perhaps we can give an example URI.



core/src/main/scala/kafka/server/KafkaConfig.scala


Since this is also used for communication btw the controller and the 
brokers, perhaps it's better named as sth like "intra.broker.security.protocol"?



core/src/main/scala/kafka/server/KafkaConfig.scala


I am thinking about how we should name this field. Since this is only 
needed for internal communication among brokers, perhaps we should name it as 
sth like use.new.intra.broker.wire.protocol. My next question is what happens 
if we have intra broker protocol changes in 2 releases. Do we want to use 
different names so that we can enable each change independently? An alternative 
is to keep the same property name, with the meaning of turning on only the 
intra broker changes introduced in this release. The latter implies that one 
can't skip upgrading the intermediate release. So, my feeling is that the 
former will probably be better? Perhaps we can bring this up in the WIP discussion.



core/src/test/scala/unit/kafka/producer/SyncProducerTest.scala


This can be just endpoints(SecurityProtocol.PLAINTEXT).



kafka-patch-review.py


Are the changes in this file intended?



system_test/utils/kafka_system_test_utils.py


I thought protocol is specified separately, and not in broker.list?


- Jun Rao


On Jan. 14, 2015, 2:16 a.m., Gwen Shapira wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/28769/
> ---
> 
> (Updated Jan. 14, 2015, 2:16 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1809
> https://issues.apache.org/jira/browse/KAFKA-1809
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> trivial change to add byte serializer to ProducerPerformance; patched by Jun 
> Rao
> 
> 
> first commit of refactoring.
> 
> 
> changed topicmetadata to include brokerendpoints and fixed few unit tests
> 
> 
> fixing systest and support for binding to default address
> 
> 
> fixed system tests
> 
> 
> fix default address binding and ipv6 support
> 
> 
> fix some issues regarding endpoint parsing. Also, larger segments for systest 
> make the validation much faster
> 
> 
> added link to security wiki in doc
> 
> 
> fixing unit test after rename

[jira] [Commented] (KAFKA-1703) The bat script failed to start on windows

2015-01-22 Thread JK Dong (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288535#comment-14288535
 ] 

JK Dong commented on KAFKA-1703:


Modify
set BASE_DIR=%CD%\..
to
set BASE_DIR=%CD%\..\..

and add the jars in the libs folder to the classpath for the release version:
for %%i in (%BASE_DIR%\libs\*.jar) do (
call :concat %%i
)

> The bat script failed to start on windows
> -
>
> Key: KAFKA-1703
> URL: https://issues.apache.org/jira/browse/KAFKA-1703
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.1.1
>Reporter: ChengRen
>  Labels: newbie
> Attachments: kafka-run-class.bat
>
>
> The bat scripts in bin\windows cannot start ZooKeeper and Kafka correctly 
> (on a freshly installed OS with only the JDK available). I modified 
> kafka-run-class.bat to add the jars in the libs folder to the classpath.
> for %%i in (%BASE_DIR%\core\lib\*.jar) do (
>   call :concat %%i
> )
>  added  begin
> for %%i in (%BASE_DIR%\..\libs\*.jar) do (
>   call :concat %%i
> )
>  added  end
> for %%i in (%BASE_DIR%\perf\target\scala-2.8.0/kafka*.jar) do (
>   call :concat %%i
> )
> Now it runs correctly.
> Under bin\windows:
> zookeeper-server-start.bat ..\..\config\zookeeper.properties
>kafka-server-start.bat ..\..\config\kafka.properties



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1703) The bat script failed to start on windows

2015-01-22 Thread JK Dong (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JK Dong updated KAFKA-1703:
---
Attachment: kafka-run-class.bat

> The bat script failed to start on windows
> -
>
> Key: KAFKA-1703
> URL: https://issues.apache.org/jira/browse/KAFKA-1703
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.1.1
>Reporter: ChengRen
>  Labels: newbie
> Attachments: kafka-run-class.bat
>
>
> The bat scripts in bin\windows cannot start ZooKeeper and Kafka correctly 
> (on a freshly installed OS with only the JDK available). I modified 
> kafka-run-class.bat to add the jars in the libs folder to the classpath.
> for %%i in (%BASE_DIR%\core\lib\*.jar) do (
>   call :concat %%i
> )
>  added  begin
> for %%i in (%BASE_DIR%\..\libs\*.jar) do (
>   call :concat %%i
> )
>  added  end
> for %%i in (%BASE_DIR%\perf\target\scala-2.8.0/kafka*.jar) do (
>   call :concat %%i
> )
> Now it runs correctly.
> Under bin\windows:
> zookeeper-server-start.bat ..\..\config\zookeeper.properties
>kafka-server-start.bat ..\..\config\kafka.properties



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1703) The bat script failed to start on windows

2015-01-22 Thread JK Dong (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JK Dong updated KAFKA-1703:
---
Status: Patch Available  (was: Open)

> The bat script failed to start on windows
> -
>
> Key: KAFKA-1703
> URL: https://issues.apache.org/jira/browse/KAFKA-1703
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.1.1
>Reporter: ChengRen
>  Labels: newbie
>
> The bat scripts in bin\windows cannot start ZooKeeper and Kafka correctly 
> (on a freshly installed OS with only the JDK available). I modified 
> kafka-run-class.bat to add the jars in the libs folder to the classpath.
> for %%i in (%BASE_DIR%\core\lib\*.jar) do (
>   call :concat %%i
> )
>  added  begin
> for %%i in (%BASE_DIR%\..\libs\*.jar) do (
>   call :concat %%i
> )
>  added  end
> for %%i in (%BASE_DIR%\perf\target\scala-2.8.0/kafka*.jar) do (
>   call :concat %%i
> )
> Now it runs correctly.
> Under bin\windows:
> zookeeper-server-start.bat ..\..\config\zookeeper.properties
>kafka-server-start.bat ..\..\config\kafka.properties



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1890) Follow-up patch for KAFKA-1650

2015-01-22 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288521#comment-14288521
 ] 

Gwen Shapira commented on KAFKA-1890:
-

+1 - Solved my problem :)

Thanks, [~becket_qin]

> Follow-up patch for KAFKA-1650
> --
>
> Key: KAFKA-1890
> URL: https://issues.apache.org/jira/browse/KAFKA-1890
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1890.patch
>
>
> Fix bug preventing Mirror Maker from successful rebalance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30199: Patch for KAFKA-1890

2015-01-22 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30199/#review69302
---

Ship it!


Ship It!

- Gwen Shapira


On Jan. 22, 2015, 11:25 p.m., Jiangjie Qin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30199/
> ---
> 
> (Updated Jan. 22, 2015, 11:25 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1890
> https://issues.apache.org/jira/browse/KAFKA-1890
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Patch for KAFKA-1890
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/tools/MirrorMaker.scala 
> 5cbc8103e33a0a234d158c048e5314e841da6249 
> 
> Diff: https://reviews.apache.org/r/30199/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jiangjie Qin
> 
>



[jira] [Commented] (KAFKA-1890) Follow-up patch for KAFKA-1650

2015-01-22 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288438#comment-14288438
 ] 

Jiangjie Qin commented on KAFKA-1890:
-

Created reviewboard https://reviews.apache.org/r/30199/diff/
 against branch origin/trunk

> Follow-up patch for KAFKA-1650
> --
>
> Key: KAFKA-1890
> URL: https://issues.apache.org/jira/browse/KAFKA-1890
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1890.patch
>
>
> Fix bug preventing Mirror Maker from successful rebalance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1890) Follow-up patch for KAFKA-1650

2015-01-22 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-1890:

Attachment: KAFKA-1890.patch

> Follow-up patch for KAFKA-1650
> --
>
> Key: KAFKA-1890
> URL: https://issues.apache.org/jira/browse/KAFKA-1890
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1890.patch
>
>
> Fix bug preventing Mirror Maker from successful rebalance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1890) Follow-up patch for KAFKA-1650

2015-01-22 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-1890:

Status: Patch Available  (was: Open)

> Follow-up patch for KAFKA-1650
> --
>
> Key: KAFKA-1890
> URL: https://issues.apache.org/jira/browse/KAFKA-1890
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1890.patch
>
>
> Fix bug preventing Mirror Maker from successful rebalance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 30199: Patch for KAFKA-1890

2015-01-22 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30199/
---

Review request for kafka.


Bugs: KAFKA-1890
https://issues.apache.org/jira/browse/KAFKA-1890


Repository: kafka


Description
---

Patch for KAFKA-1890


Diffs
-

  core/src/main/scala/kafka/tools/MirrorMaker.scala 
5cbc8103e33a0a234d158c048e5314e841da6249 

Diff: https://reviews.apache.org/r/30199/diff/


Testing
---


Thanks,

Jiangjie Qin



[jira] [Commented] (KAFKA-1890) Follow-up patch for KAFKA-1650

2015-01-22 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288404#comment-14288404
 ] 

Jiangjie Qin commented on KAFKA-1890:
-

Yes, this is the one I'm fixing.

> Follow-up patch for KAFKA-1650
> --
>
> Key: KAFKA-1890
> URL: https://issues.apache.org/jira/browse/KAFKA-1890
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>
> Fix bug preventing Mirror Maker from successful rebalance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1890) Follow-up patch for KAFKA-1650

2015-01-22 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288395#comment-14288395
 ] 

Gwen Shapira commented on KAFKA-1890:
-

[~becket_qin], I'm getting the following error when running MirrorMaker with 
KAFKA-1650:

{code}
[2015-01-22 14:45:54,338] FATAL Unable to create stream - shutting down mirror 
maker: (kafka.tools.MirrorMaker$)
kafka.common.ConsumerRebalanceFailedException: 
g1_kafkaf-2.ent.cloudera.com-1421966744998-a84db773 can't rebalance after 4 
retries
at 
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:650)
at 
kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$reinitializeConsumer(ZookeeperConsumerConnector.scala:932)
at 
kafka.consumer.ZookeeperConsumerConnector$WildcardStreamsHandler.(ZookeeperConsumerConnector.scala:966)
at 
kafka.consumer.ZookeeperConsumerConnector.createMessageStreamsByFilter(ZookeeperConsumerConnector.scala:163)
at kafka.tools.MirrorMaker$.main(MirrorMaker.scala:271)
at kafka.tools.MirrorMaker.main(MirrorMaker.scala)
{code}

Is this the error you are fixing in this JIRA, or is this a separate issue?


> Follow-up patch for KAFKA-1650
> --
>
> Key: KAFKA-1890
> URL: https://issues.apache.org/jira/browse/KAFKA-1890
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>
> Fix bug preventing Mirror Maker from successful rebalance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1891) MirrorMaker hides consumer exception - making troubleshooting challenging

2015-01-22 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1891:

Attachment: KAFKA-1891.patch

It's a 3-character patch, so I'm not creating an RB :)

> MirrorMaker hides consumer exception - making troubleshooting challenging
> -
>
> Key: KAFKA-1891
> URL: https://issues.apache.org/jira/browse/KAFKA-1891
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Attachments: KAFKA-1891.patch
>
>
> When MirrorMaker encounters an issue creating a consumer, it gives a generic 
> "unable to create stream" error, while hiding the actual issue.
> We should print the original exception too, so users can resolve whatever 
> issue prevents MirrorMaker from creating a stream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1891) MirrorMaker hides consumer exception - making troubleshooting challenging

2015-01-22 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1891:

Status: Patch Available  (was: Open)

> MirrorMaker hides consumer exception - making troubleshooting challenging
> -
>
> Key: KAFKA-1891
> URL: https://issues.apache.org/jira/browse/KAFKA-1891
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Attachments: KAFKA-1891.patch
>
>
> When MirrorMaker encounters an issue creating a consumer, it gives a generic 
> "unable to create stream" error, while hiding the actual issue.
> We should print the original exception too, so users can resolve whatever 
> issue prevents MirrorMaker from creating a stream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30196: Patch for KAFKA-1886

2015-01-22 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30196/
---

(Updated Jan. 22, 2015, 10:35 p.m.)


Review request for kafka and Joel Koshy.


Changes
---

+ Added joel to review


Bugs: KAFKA-1886
https://issues.apache.org/jira/browse/KAFKA-1886


Repository: kafka


Description
---

Fixing KAFKA-1886. Forcing the SimpleConsumer to rethrow a 
ClosedByInterruptException, if thrown, instead of retrying


Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into trunk


Diffs
-

  core/src/main/scala/kafka/consumer/SimpleConsumer.scala 
cbef84ac76e62768981f74e71d451f2bda995275 
  core/src/test/scala/unit/kafka/integration/PrimitiveApiTest.scala 
a5386a03b62956bc440b40783247c8cdf7432315 

Diff: https://reviews.apache.org/r/30196/diff/


Testing (updated)
---

Added an integration test to PrimitiveAPITest.scala.


Thanks,

Aditya Auradkar



[jira] [Updated] (KAFKA-1886) SimpleConsumer swallowing ClosedByInterruptException

2015-01-22 Thread Aditya A Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya A Auradkar updated KAFKA-1886:
-
Attachment: KAFKA-1886.patch

> SimpleConsumer swallowing ClosedByInterruptException
> 
>
> Key: KAFKA-1886
> URL: https://issues.apache.org/jira/browse/KAFKA-1886
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Reporter: Aditya A Auradkar
>Assignee: Jun Rao
> Attachments: KAFKA-1886.patch
>
>
> This issue was originally reported by a Samza developer. I've included an 
> exchange of mine with Chris Riccomini. I'm trying to reproduce the problem on 
> my dev setup.
> From: criccomi
> Hey all,
> Samza's BrokerProxy [1] threads appear to be wedging randomly when we try to 
> interrupt its fetcher thread. I noticed that SimpleConsumer.scala catches 
> Throwable in its sendRequest method [2]. I'm wondering: if 
> blockingChannel.send/receive throws a ClosedByInterruptException
> when the thread is interrupted, what happens? It looks like sendRequest will 
> catch the exception (which I
> think clears the thread's interrupted flag), and then retries the send. If 
> the send succeeds on the retry, I think that the ClosedByInterruptException 
> exception is effectively swallowed, and the BrokerProxy will continue
> fetching messages as though its thread was never interrupted.
> Am I misunderstanding how things work?
> Cheers,
> Chris
> [1] 
> https://github.com/apache/incubator-samza/blob/master/samza-kafka/src/main/scala/org/apache/samza/system/kafka/BrokerProxy.scala#L126
> [2] 
> https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/consumer/SimpleConsumer.scala#L75
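
The retry behavior described above can be illustrated with a minimal sketch; the interface and method names are assumptions for the illustration, not the actual SimpleConsumer code. Catching Throwable also catches ClosedByInterruptException, so a successful retry hides that the thread was ever interrupted, while rethrowing it preserves the interruption signal:

```java
import java.nio.channels.ClosedByInterruptException;

// Minimal sketch of the retry pattern described above; the names are
// illustrative, not the actual kafka.consumer.SimpleConsumer code.
class RetryingSender {
    interface Channel { String send(String request) throws Exception; }

    // Buggy shape: catch(Throwable) also catches ClosedByInterruptException,
    // so a successful retry silently swallows the interruption.
    static String sendSwallowing(Channel channel, String request) throws Exception {
        try {
            return channel.send(request);
        } catch (Throwable t) {
            return channel.send(request);   // retry hides the interrupt
        }
    }

    // Fixed shape: rethrow interrupt-driven closes instead of retrying,
    // so callers actually see the interruption.
    static String sendRethrowing(Channel channel, String request) throws Exception {
        try {
            return channel.send(request);
        } catch (ClosedByInterruptException e) {
            throw e;                        // preserve the interruption signal
        } catch (Throwable t) {
            return channel.send(request);
        }
    }
}
```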



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 30196: Patch for KAFKA-1886

2015-01-22 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30196/
---

Review request for kafka.


Bugs: KAFKA-1886
https://issues.apache.org/jira/browse/KAFKA-1886


Repository: kafka


Description
---

Fixing KAFKA-1886. Forcing the SimpleConsumer to rethrow a 
ClosedByInterruptException, if thrown, instead of retrying


Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into trunk


Diffs
-

  core/src/main/scala/kafka/consumer/SimpleConsumer.scala 
cbef84ac76e62768981f74e71d451f2bda995275 
  core/src/test/scala/unit/kafka/integration/PrimitiveApiTest.scala 
a5386a03b62956bc440b40783247c8cdf7432315 

Diff: https://reviews.apache.org/r/30196/diff/


Testing
---


Thanks,

Aditya Auradkar



[jira] [Commented] (KAFKA-1886) SimpleConsumer swallowing ClosedByInterruptException

2015-01-22 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288350#comment-14288350
 ] 

Aditya A Auradkar commented on KAFKA-1886:
--

Created reviewboard https://reviews.apache.org/r/30196/diff/
 against branch origin/trunk

> SimpleConsumer swallowing ClosedByInterruptException
> 
>
> Key: KAFKA-1886
> URL: https://issues.apache.org/jira/browse/KAFKA-1886
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Reporter: Aditya A Auradkar
>Assignee: Jun Rao
> Attachments: KAFKA-1886.patch
>
>
> This issue was originally reported by a Samza developer. I've included an 
> exchange of mine with Chris Riccomini. I'm trying to reproduce the problem on 
> my dev setup.
> From: criccomi
> Hey all,
> Samza's BrokerProxy [1] threads appear to be wedging randomly when we try to 
> interrupt its fetcher thread. I noticed that SimpleConsumer.scala catches 
> Throwable in its sendRequest method [2]. I'm wondering: if 
> blockingChannel.send/receive throws a ClosedByInterruptException
> when the thread is interrupted, what happens? It looks like sendRequest will 
> catch the exception (which I
> think clears the thread's interrupted flag), and then retries the send. If 
> the send succeeds on the retry, I think that the ClosedByInterruptException 
> exception is effectively swallowed, and the BrokerProxy will continue
> fetching messages as though its thread was never interrupted.
> Am I misunderstanding how things work?
> Cheers,
> Chris
> [1] 
> https://github.com/apache/incubator-samza/blob/master/samza-kafka/src/main/scala/org/apache/samza/system/kafka/BrokerProxy.scala#L126
> [2] 
> https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/consumer/SimpleConsumer.scala#L75



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1818) Code cleanup in ReplicationUtils including unit test

2015-01-22 Thread Eric Olander (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288328#comment-14288328
 ] 

Eric Olander commented on KAFKA-1818:
-

In review: https://reviews.apache.org/r/29840/

> Code cleanup in ReplicationUtils including unit test
> 
>
> Key: KAFKA-1818
> URL: https://issues.apache.org/jira/browse/KAFKA-1818
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.8.1.1
>Reporter: Eric Olander
>Assignee: Neha Narkhede
>Priority: Trivial
> Attachments: 
> 0001-KAFKA-1818-clean-up-code-to-more-idiomatic-scala-usa.patch
>
>
> Code in getLeaderIsrAndEpochForPartition() and parseLeaderAndIsr() was 
> essentially reimplementing the flatMap function on the Option type.  The 
> attached patch refactors that code to more idiomatic Scala and provides a 
> unit test over the affected code.





[jira] [Commented] (KAFKA-1863) Exception categories / hierarchy in clients

2015-01-22 Thread Navina Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288327#comment-14288327
 ] 

Navina Ramesh commented on KAFKA-1863:
--

As a part of the documentation, can you also add a comparison table for the 
configuration variables between the old and new versions of the Kafka 
producer? It could be part of a separate document that describes the 
differences between the old and new producer design. If such a document already 
exists, please let me know!


> Exception categories / hierarchy in clients
> ---
>
> Key: KAFKA-1863
> URL: https://issues.apache.org/jira/browse/KAFKA-1863
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.0
>
>
> In the new clients package we introduces a new set of exceptions, but its 
> hierarchy is not very clear as of today:
> {code}
> RuntimeException -> KafkaException -> BufferExhaustedException
>                                    -> ConfigException
>                                    -> SerializationException
>                                    -> QuotaViolationException
>                                    -> SchemaException
>                                    -> ApiException
>
> ApiException -> InvalidTopicException
>              -> OffsetMetadataTooLarge (probably needs to be renamed)
>              -> RecordBatchTooLargeException
>              -> RecordTooLargeException
>              -> UnknownServerException
>              -> RetriableException
>
> RetriableException -> CorruptRecordException
>                    -> InvalidMetadataException
>                    -> NotEnoughReplicasAfterAppendException
>                    -> NotEnoughReplicasException
>                    -> OffsetOutOfRangeException
>                    -> TimeoutException
>                    -> UnknownTopicOrPartitionException
> {code}
> KafkaProducer.send() may throw KafkaExceptions that are not ApiExceptions; 
> other exceptions will be set in the returned future metadata.
> We need to:
> 1. Re-examine the hierarchy. For example, for producers, only exceptions that 
> are thrown directly from the caller thread before the record is appended to the 
> batch buffer should be ApiExceptions; some exceptions could be renamed / merged.
> 2. Clearly document the exception category / hierarchy as part of the release.
> [~criccomini] may have some more feedback on this issue from Samza's usage 
> experience. [~jkreps]
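As a rough illustration of what the split buys client code (a toy mirror of the hierarchy above, not the actual org.apache.kafka.common.errors classes), callers can branch on RetriableException to decide whether a failed send is worth retrying:

```java
public class ExceptionHierarchyDemo {
    // Toy mirror of the proposed hierarchy; names follow the listing above.
    static class KafkaException extends RuntimeException {}
    static class ApiException extends KafkaException {}
    static class RetriableException extends ApiException {}
    static class TimeoutException extends RetriableException {}
    static class InvalidTopicException extends ApiException {}

    // Client-side policy: retry only transient (retriable) failures.
    static boolean shouldRetry(Throwable t) {
        return t instanceof RetriableException;
    }

    public static void main(String[] args) {
        System.out.println(shouldRetry(new TimeoutException()));      // true
        System.out.println(shouldRetry(new InvalidTopicException())); // false
    }
}
```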





[jira] [Created] (KAFKA-1891) MirrorMaker hides consumer exception - making troubleshooting challenging

2015-01-22 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-1891:
---

 Summary: MirrorMaker hides consumer exception - making 
troubleshooting challenging
 Key: KAFKA-1891
 URL: https://issues.apache.org/jira/browse/KAFKA-1891
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira


When MirrorMaker encounters an issue creating a consumer, it gives a generic 
"unable to create stream" error, while hiding the actual issue.

We should print the original exception too, so users can resolve whatever issue 
prevents MirrorMaker from creating a stream.
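One way to do this (illustrative only, not the actual MirrorMaker code; `createStream` and its error message are hypothetical) is to attach the original exception as the cause rather than discarding it:

```java
public class WrapCauseDemo {
    // Hypothetical stand-in for consumer stream creation failing.
    static void createStream() {
        throw new IllegalStateException("no brokers found in ZK path /brokers/ids");
    }

    public static void main(String[] args) {
        try {
            createStream();
        } catch (Exception e) {
            // Keep the underlying failure attached, so the stack trace
            // shows the real reason stream creation failed.
            RuntimeException wrapped =
                new RuntimeException("unable to create stream", e);
            System.out.println(wrapped.getMessage());            // generic message
            System.out.println(wrapped.getCause().getMessage()); // actual issue
        }
    }
}
```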





[jira] [Commented] (KAFKA-1867) liveBroker list not updated on a cluster with no topics

2015-01-22 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288254#comment-14288254
 ] 

Sriharsha Chintalapani commented on KAFKA-1867:
---

[~junrao] when you get a chance can you please review the patch from here 
https://reviews.apache.org/r/30007/diff/ . Thanks.

> liveBroker list not updated on a cluster with no topics
> ---
>
> Key: KAFKA-1867
> URL: https://issues.apache.org/jira/browse/KAFKA-1867
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
> Attachments: KAFKA-1867.patch, KAFKA-1867.patch
>
>
> Currently, when there is no topic in a cluster, the controller doesn't send 
> any UpdateMetadataRequest to the broker when it starts up. As a result, the 
> liveBroker list in metadataCache is empty. This means that we will return 
> incorrect broker list in TopicMetatadataResponse.





[jira] [Commented] (KAFKA-1888) Add a "rolling upgrade" system test

2015-01-22 Thread Abhishek Nigam (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288206#comment-14288206
 ] 

Abhishek Nigam commented on KAFKA-1888:
---

Gwen,
If you are not actively working on it, can I pick it up?

> Add a "rolling upgrade" system test
> ---
>
> Key: KAFKA-1888
> URL: https://issues.apache.org/jira/browse/KAFKA-1888
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Fix For: 0.9.0
>
>
> To help test upgrades and compatibility between versions, it will be cool to 
> add a rolling-upgrade test to system tests:
> Given two versions (just a path to the jars?), check that you can do a
> rolling upgrade of the brokers from one version to another (using clients 
> from the old version) without losing data.





[jira] [Commented] (KAFKA-1889) Refactor shell wrapper scripts

2015-01-22 Thread Francois Saint-Jacques (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288205#comment-14288205
 ] 

Francois Saint-Jacques commented on KAFKA-1889:
---

Also note that this patch keeps the existing behaviour (to some epsilon: a 
variable rename plus a default JMX port, which will break running multiple 
processes on one host). It is mostly a cleanup of "kafka-run-class.sh", which 
has accumulated technical debt over time.

> Refactor shell wrapper scripts
> --
>
> Key: KAFKA-1889
> URL: https://issues.apache.org/jira/browse/KAFKA-1889
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Reporter: Francois Saint-Jacques
>Assignee: Francois Saint-Jacques
>Priority: Minor
> Attachments: refactor-scripts-v1.patch, refactor-scripts-v2.patch
>
>
> Shell scripts in bin/ need love.





[jira] [Commented] (KAFKA-1889) Refactor shell wrapper scripts

2015-01-22 Thread Francois Saint-Jacques (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288193#comment-14288193
 ] 

Francois Saint-Jacques commented on KAFKA-1889:
---

It's not about working well with rpm and debian, it's about having sane defaults 
and behaviour so that packagers won't have to diff/patch said scripts.

My patch is not final at all, I just wanted to start the discussion on cleaning 
the scripts.

> Refactor shell wrapper scripts
> --
>
> Key: KAFKA-1889
> URL: https://issues.apache.org/jira/browse/KAFKA-1889
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Reporter: Francois Saint-Jacques
>Assignee: Francois Saint-Jacques
>Priority: Minor
> Attachments: refactor-scripts-v1.patch, refactor-scripts-v2.patch
>
>
> Shell scripts in bin/ need love.





[jira] [Created] (KAFKA-1890) Follow-up patch for KAFKA-1650

2015-01-22 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-1890:
---

 Summary: Follow-up patch for KAFKA-1650
 Key: KAFKA-1890
 URL: https://issues.apache.org/jira/browse/KAFKA-1890
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin


Fix bug preventing Mirror Maker from successful rebalance.





Re: [DISCUSS] KIPs

2015-01-22 Thread Gwen Shapira
Thanks for validating our ideas. Updated the KIP with the workflow.

Now if you can nudge Jun to review the latest patch... ;)


On Thu, Jan 22, 2015 at 11:44 AM, Jay Kreps  wrote:
> Oh yeah I think that is better, I hadn't thought of that approach! Any way
> you could describe the usage in the KIP, just for completeness?
>
> -Jay
>
> On Thu, Jan 22, 2015 at 10:23 AM, Gwen Shapira 
> wrote:
>
>> I think what you described was the original design, so no wonder you
>> are confused :)
>>
>> Following suggestions from Jun, I changed it a bit. The current model is:
>>
>> - Clients (producers and consumers) need to know about the broker
>> ports in advance. They don't need to know about all brokers, but they
>> need to know at least one host:port pair that speaks the protocol they
>> want to use. The change is that all host:port pairs in broker.list
>> must be of the same protocol and match the security.protocol
>> configuration parameter.
>>
>> - Client uses security.protocol configuration parameter to open a
>> connection to one of the brokers and sends the good old
>> MetadataRequest. The broker knows which port it got the connection on,
>> therefore it knows which security protocol is expected (it needs to
>> use the same protocol to accept the connection and respond), and
>> therefore it can send a response that contains only the host:port
>> pairs that are relevant to that protocol.
>>
>> - From the client side the MetadataResponse did not change - it
>> contains a list of brokerId,host,port that the client can connect to.
>> The fact that all those broker endpoints were chosen out of a larger
>> collection to match the right protocol is irrelevant for the client.
>>
>> I really like the new design since it preserves a lot of the same
>> configurations and APIs.
>>
>> Thoughts?
>>
>> Gwen
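The broker-side filtering Gwen describes can be sketched roughly as follows. This is a hypothetical model: the `Endpoint` class, protocol names, and `endpointsFor` helper are illustrative, not the actual Kafka implementation.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class EndpointFilterDemo {
    // Hypothetical model of one advertised broker endpoint.
    static class Endpoint {
        final int brokerId;
        final String host;
        final int port;
        final String protocol;
        Endpoint(int brokerId, String host, int port, String protocol) {
            this.brokerId = brokerId;
            this.host = host;
            this.port = port;
            this.protocol = protocol;
        }
    }

    // The broker knows every endpoint of every live broker, but a metadata
    // response only carries the endpoints whose protocol matches the port
    // the request arrived on.
    static List<Endpoint> endpointsFor(List<Endpoint> all, String protocol) {
        List<Endpoint> matching = new ArrayList<>();
        for (Endpoint e : all)
            if (e.protocol.equals(protocol))
                matching.add(e);
        return matching;
    }

    public static void main(String[] args) {
        List<Endpoint> all = Arrays.asList(
            new Endpoint(0, "broker0", 9092, "PLAINTEXT"),
            new Endpoint(0, "broker0", 9093, "SSL"),
            new Endpoint(1, "broker1", 9092, "PLAINTEXT"));
        System.out.println(endpointsFor(all, "SSL").size());       // 1
        System.out.println(endpointsFor(all, "PLAINTEXT").size()); // 2
    }
}
```

From the client's perspective the response shape is unchanged: it just sees a list of (brokerId, host, port) it can connect to.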
>>
>> On Thu, Jan 22, 2015 at 9:19 AM, Jay Kreps  wrote:
>> > I think I am still confused. In addition to the UpdateMetadataRequest
>> don't
>> > we have to change the MetadataResponse so that it's possible for clients
>> to
>> > discover the new ports? Or is that a second phase? I was imagining it
>> > worked by basically allowing the brokers to advertise multiple ports, one
>> > per security type, and then in the client you configure a protocol which
>> > will implicitly choose the port from the options returned in metadata to
>> > you...
>> >
>> > Likewise in the ConsumerMetadataResponse we are currently giving back
>> full
>> > broker information. I think we would have two options here: either change
>> > the broker information included in that response to match the
>> > metadataresponse or else remove the broker information entirely and just
>> > return the node id (since in order to use that request you would already
>> > have to have the cluster metadata). The second option may be cleaner
>> since
>> > it means we won't have to continue evolving those two in lockstep...
>> >
>> > -Jay
>> >
>> > On Wed, Jan 21, 2015 at 6:19 PM, Gwen Shapira 
>> wrote:
>> >
>> >> Good point :)
>> >>
>> >> I added the specifics of the new  UpdateMetadataRequest, which is the
>> >> only protocol bump in this change.
>> >>
>> >> Highlighted the broker and producer/consumer configuration changes,
>> >> added some example values and added the new zookeeper json.
>> >>
>> >> Hope this makes things clearer.
>> >>
>> >> On Wed, Jan 21, 2015 at 2:19 PM, Jay Kreps  wrote:
>> >> > Hey Gwen,
>> >> >
>> >> > Could we get the actual changes in that KIP? I.e. changes to metadata
>> >> > request, changes to UpdateMetadataRequest, new configs and what will
>> >> their
>> >> > valid values be, etc. This kind of says that those things will change
>> but
>> >> > doesn't say what they will change to...
>> >> >
>> >> > -Jay
>> >> >
>> >> > On Mon, Jan 19, 2015 at 9:45 PM, Gwen Shapira 
>> >> wrote:
>> >> >
>> >> >> I created a KIP for the multi-port broker change.
>> >> >>
>> >> >>
>> >>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-2+-+Refactor+brokers+to+allow+listening+on+multiple+ports+and+IPs
>> >> >>
>> >> >> I'm not re-opening the discussion, since it was agreed on over a
>> month
>> >> ago and implementation is close to complete (I hope!). Let's consider
>> >> >> this voted and accepted?
>> >> >>
>> >> >> Gwen
>> >> >>
>> >> >> On Sun, Jan 18, 2015 at 10:31 AM, Jay Kreps 
>> >> wrote:
>> >> >> > Great! Sounds like everyone is on the same page
>> >> >> >
>> >> >> >- I created a template page to make things easier. If you do
>> >> >> Tools->Copy
>> >> >> >on this page you can just fill in the italic portions with your
>> >> >> details.
>> >> >> >- I retrofitted KIP-1 to match this formatting
>> >> >> >- I added the metadata section people asked for (a link to the
>> >> >> >discussion, the JIRA, and the current status). Let's make sure
>> we
>> >> >> remember
>> >> >> >to update the current status as things are figured out.
>> >> >> >- Let's keep the discussion on the mailing list rather than on
>> the
>> >> >

[jira] [Commented] (KAFKA-1889) Refactor shell wrapper scripts

2015-01-22 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288152#comment-14288152
 ] 

Joe Stein commented on KAFKA-1889:
--

Makes sense. How do we know this works well for rpm and deb? 

What about having rpm and deb scripts so folks can make packages that wrap what 
you did?

If we introduce something that doesn't, we will just generate more questions 
and issues and have to support them. It would be great to minimize those things 
with these changes.

> Refactor shell wrapper scripts
> --
>
> Key: KAFKA-1889
> URL: https://issues.apache.org/jira/browse/KAFKA-1889
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Reporter: Francois Saint-Jacques
>Assignee: Francois Saint-Jacques
>Priority: Minor
> Attachments: refactor-scripts-v1.patch, refactor-scripts-v2.patch
>
>
> Shell scripts in bin/ need love.





Re: Review Request 30126: Patch for KAFKA-1845

2015-01-22 Thread Gwen Shapira


> On Jan. 21, 2015, 10:55 p.m., Eric Olander wrote:
> > core/src/main/scala/kafka/server/KafkaConfig.scala, line 475
> > 
> >
> > Maybe some helper functions could help with this code:
> > 
> > def stringProp(prop: String) = parsed.get(prop).asInstanceOf[String]
> > 
> > then:
> > zkConnect = stringProp(ZkConnectProp)

I like this suggestion. I'd add it to ConfigDef or something similar, so all 
Config classes can enjoy this.
This can be a follow-up patch: we are already using the parsed.get.asInstanceOf 
pattern everywhere, so the fix is not specific to this patch.
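In Java terms, the suggested helper is a typed accessor over the parsed config map, so every call site avoids repeating the cast. This is an analogue of the Scala one-liner above, with a hypothetical property name; it is not the actual KafkaConfig or ConfigDef code.

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigHelperDemo {
    // Stand-in for ConfigDef's parsed name -> value map.
    static final Map<String, Object> parsed = new HashMap<>();

    // Helper hides the repeated get-and-cast pattern used at every call site.
    static String stringProp(String prop) {
        return (String) parsed.get(prop);
    }

    public static void main(String[] args) {
        parsed.put("zookeeper.connect", "localhost:2181");
        String zkConnect = stringProp("zookeeper.connect");
        System.out.println(zkConnect); // localhost:2181
    }
}
```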


- Gwen


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30126/#review69063
---


On Jan. 21, 2015, 5:49 p.m., Andrii Biletskyi wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30126/
> ---
> 
> (Updated Jan. 21, 2015, 5:49 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1845
> https://issues.apache.org/jira/browse/KAFKA-1845
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1845 - Fixed merge conflicts, ported added configs to KafkaConfig
> 
> 
> KAFKA-1845 - KafkaConfig to ConfigDef: moved validateValues so it's called on 
> instantiating KafkaConfig
> 
> 
> KAFKA-1845 - KafkaConfig to ConfigDef: MaxConnectionsPerIpOverrides refactored
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
> 98cb79b701918eca3f6ca9823b6c7b7c97b3ecec 
>   core/src/main/scala/kafka/Kafka.scala 
> 77a49e12af6f869e63230162e9f87a7b0b12b610 
>   core/src/main/scala/kafka/controller/KafkaController.scala 
> 66df6d2fbdbdd556da6bea0df84f93e0472c8fbf 
>   core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala 
> 4a31c7271c2d0a4b9e8b28be729340ecfa0696e5 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 
> 6d74983472249eac808d361344c58cc2858ec971 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 89200da30a04943f0b9befe84ab17e62b747c8c4 
>   core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
> 6879e730282185bda3d6bc3659cb15af0672cecf 
>   core/src/test/scala/integration/kafka/api/ProducerCompressionTest.scala 
> e63558889272bc76551accdfd554bdafde2e0dd6 
>   core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala 
> 90c0b7a19c7af8e5416e4bdba62b9824f1abd5ab 
>   core/src/test/scala/integration/kafka/api/ProducerSendTest.scala 
> b15237b76def3b234924280fa3fdb25dbb0cc0dc 
>   core/src/test/scala/unit/kafka/admin/AddPartitionsTest.scala 
> 1bf2667f47853585bc33ffb3e28256ec5f24ae84 
>   core/src/test/scala/unit/kafka/admin/AdminTest.scala 
> e28979827110dfbbb92fe5b152e7f1cc973de400 
>   core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala 
> 33c27678bf8ae8feebcbcdaa4b90a1963157b4a5 
>   core/src/test/scala/unit/kafka/consumer/ConsumerIteratorTest.scala 
> c0355cc0135c6af2e346b4715659353a31723b86 
>   
> core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala 
> a17e8532c44aadf84b8da3a57bcc797a848b5020 
>   core/src/test/scala/unit/kafka/integration/AutoOffsetResetTest.scala 
> 95303e098d40cd790fb370e9b5a47d20860a6da3 
>   core/src/test/scala/unit/kafka/integration/FetcherTest.scala 
> 25845abbcad2e79f56f729e59239b738d3ddbc9d 
>   core/src/test/scala/unit/kafka/integration/PrimitiveApiTest.scala 
> a5386a03b62956bc440b40783247c8cdf7432315 
>   core/src/test/scala/unit/kafka/integration/RollingBounceTest.scala 
> eab4b5f619015af42e4554660eafb5208e72ea33 
>   core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala 
> 35dc071b1056e775326981573c9618d8046e601d 
>   core/src/test/scala/unit/kafka/integration/UncleanLeaderElectionTest.scala 
> ba3bcdcd1de9843e75e5395dff2fc31b39a5a9d5 
>   
> core/src/test/scala/unit/kafka/javaapi/consumer/ZookeeperConsumerConnectorTest.scala
>  d6248b09bb0f86ee7d3bd0ebce5b99135491453b 
>   core/src/test/scala/unit/kafka/log/LogTest.scala 
> c2dd8eb69da8c0982a0dd20231c6f8bd58eb623e 
>   core/src/test/scala/unit/kafka/log4j/KafkaLog4jAppenderTest.scala 
> 4ea0489c9fd36983fe190491a086b39413f3a9cd 
>   core/src/test/scala/unit/kafka/metrics/MetricsTest.scala 
> 3cf23b3d6d4460535b90cfb36281714788fc681c 
>   core/src/test/scala/unit/kafka/producer/AsyncProducerTest.scala 
> 1db6ac329f7b54e600802c8a623f80d159d4e69b 
>   core/src/test/scala/unit/kafka/producer/ProducerTest.scala 
> ce65dab4910d9182e6774f6ef1a7f45561ec0c23 
>   core/src/test/scala/unit/kafka/producer/SyncProducerTest.scala 
> d60d8e0f49443f4dc8bc2cad6e2f951eda28f5cb 
>   core/src/test/scala/unit/kafka/server/AdvertiseBrokerTest.scala 
> f0c4a56b61b4f081cf4bee799c6e9c523ff45e19 
>   core/

[jira] [Commented] (KAFKA-1889) Refactor shell wrapper scripts

2015-01-22 Thread Francois Saint-Jacques (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288120#comment-14288120
 ] 

Francois Saint-Jacques commented on KAFKA-1889:
---

My goal is to fix the Kafka startup scripts to make them friendlier to package 
as rpm or deb. I believe that is a concern disjoint from the issue you mentioned.

> Refactor shell wrapper scripts
> --
>
> Key: KAFKA-1889
> URL: https://issues.apache.org/jira/browse/KAFKA-1889
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Reporter: Francois Saint-Jacques
>Assignee: Francois Saint-Jacques
>Priority: Minor
> Attachments: refactor-scripts-v1.patch, refactor-scripts-v2.patch
>
>
> Shell scripts in bin/ need love.





Re: Review Request 30126: Patch for KAFKA-1845

2015-01-22 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30126/#review69224
---


Looks great. Just one minor comment.


core/src/main/scala/kafka/server/KafkaConfig.scala


This looks fairly generic. Shouldn't it be somewhere where it can be reused 
by other Config classes? Perhaps in ConfigDef or something similar?


- Gwen Shapira


On Jan. 21, 2015, 5:49 p.m., Andrii Biletskyi wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30126/
> ---
> 
> (Updated Jan. 21, 2015, 5:49 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1845
> https://issues.apache.org/jira/browse/KAFKA-1845
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1845 - Fixed merge conflicts, ported added configs to KafkaConfig
> 
> 
> KAFKA-1845 - KafkaConfig to ConfigDef: moved validateValues so it's called on 
> instantiating KafkaConfig
> 
> 
> KAFKA-1845 - KafkaConfig to ConfigDef: MaxConnectionsPerIpOverrides refactored
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
> 98cb79b701918eca3f6ca9823b6c7b7c97b3ecec 
>   core/src/main/scala/kafka/Kafka.scala 
> 77a49e12af6f869e63230162e9f87a7b0b12b610 
>   core/src/main/scala/kafka/controller/KafkaController.scala 
> 66df6d2fbdbdd556da6bea0df84f93e0472c8fbf 
>   core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala 
> 4a31c7271c2d0a4b9e8b28be729340ecfa0696e5 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 
> 6d74983472249eac808d361344c58cc2858ec971 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 89200da30a04943f0b9befe84ab17e62b747c8c4 
>   core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
> 6879e730282185bda3d6bc3659cb15af0672cecf 
>   core/src/test/scala/integration/kafka/api/ProducerCompressionTest.scala 
> e63558889272bc76551accdfd554bdafde2e0dd6 
>   core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala 
> 90c0b7a19c7af8e5416e4bdba62b9824f1abd5ab 
>   core/src/test/scala/integration/kafka/api/ProducerSendTest.scala 
> b15237b76def3b234924280fa3fdb25dbb0cc0dc 
>   core/src/test/scala/unit/kafka/admin/AddPartitionsTest.scala 
> 1bf2667f47853585bc33ffb3e28256ec5f24ae84 
>   core/src/test/scala/unit/kafka/admin/AdminTest.scala 
> e28979827110dfbbb92fe5b152e7f1cc973de400 
>   core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala 
> 33c27678bf8ae8feebcbcdaa4b90a1963157b4a5 
>   core/src/test/scala/unit/kafka/consumer/ConsumerIteratorTest.scala 
> c0355cc0135c6af2e346b4715659353a31723b86 
>   
> core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala 
> a17e8532c44aadf84b8da3a57bcc797a848b5020 
>   core/src/test/scala/unit/kafka/integration/AutoOffsetResetTest.scala 
> 95303e098d40cd790fb370e9b5a47d20860a6da3 
>   core/src/test/scala/unit/kafka/integration/FetcherTest.scala 
> 25845abbcad2e79f56f729e59239b738d3ddbc9d 
>   core/src/test/scala/unit/kafka/integration/PrimitiveApiTest.scala 
> a5386a03b62956bc440b40783247c8cdf7432315 
>   core/src/test/scala/unit/kafka/integration/RollingBounceTest.scala 
> eab4b5f619015af42e4554660eafb5208e72ea33 
>   core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala 
> 35dc071b1056e775326981573c9618d8046e601d 
>   core/src/test/scala/unit/kafka/integration/UncleanLeaderElectionTest.scala 
> ba3bcdcd1de9843e75e5395dff2fc31b39a5a9d5 
>   
> core/src/test/scala/unit/kafka/javaapi/consumer/ZookeeperConsumerConnectorTest.scala
>  d6248b09bb0f86ee7d3bd0ebce5b99135491453b 
>   core/src/test/scala/unit/kafka/log/LogTest.scala 
> c2dd8eb69da8c0982a0dd20231c6f8bd58eb623e 
>   core/src/test/scala/unit/kafka/log4j/KafkaLog4jAppenderTest.scala 
> 4ea0489c9fd36983fe190491a086b39413f3a9cd 
>   core/src/test/scala/unit/kafka/metrics/MetricsTest.scala 
> 3cf23b3d6d4460535b90cfb36281714788fc681c 
>   core/src/test/scala/unit/kafka/producer/AsyncProducerTest.scala 
> 1db6ac329f7b54e600802c8a623f80d159d4e69b 
>   core/src/test/scala/unit/kafka/producer/ProducerTest.scala 
> ce65dab4910d9182e6774f6ef1a7f45561ec0c23 
>   core/src/test/scala/unit/kafka/producer/SyncProducerTest.scala 
> d60d8e0f49443f4dc8bc2cad6e2f951eda28f5cb 
>   core/src/test/scala/unit/kafka/server/AdvertiseBrokerTest.scala 
> f0c4a56b61b4f081cf4bee799c6e9c523ff45e19 
>   core/src/test/scala/unit/kafka/server/DynamicConfigChangeTest.scala 
> ad121169a5e80ebe1d311b95b219841ed69388e2 
>   core/src/test/scala/unit/kafka/server/HighwatermarkPersistenceTest.scala 
> 8913fc1d59f717c6b3ed12c8362080fb5698986b 
>   core/src/test/scala/unit/kafka/server/ISRExpirationTest.scala 
> a703d2715048c5602635127451593903f8d20576 
>   core/src/test/scala/unit/kafka/server/KafkaCon

Re: New consumer plans

2015-01-22 Thread Guozhang Wang
I have made a pass over the patch, the changes in NetworkClient / Selector
/ Sender look good to me.

But there is one issue I found in the KafkaConsumer implementation: when a
consumer subscribes to specific topic-partitions, it will not send a join-group
request to the coordinator. This seems to differ from the initial
design:

https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Consumer+Rewrite+Design

Originally, we wanted the consumer to also send join-group requests to the
coordinator so that the coordinator is able to detect whether some of the
consumers within the same group subscribe to specific topic-partitions while
others subscribe to whole topics, or whether there are overlaps between
consumers' partition subscriptions; otherwise consumer users are responsible
for such cases, and if they ever happen the coordinator will only rebalance
between consumers with topic subscriptions. But the JoinGroupRequest format
only contains the topics field, which looks like a mismatch here, and now I am
a bit confused about which decision we made back then about these cases.

Another thing is that the current KafkaConsumer class is gigantic, just like
the old ZookeeperConsumerConnector class, and I think it would be better to
refactor it into layered modules. Will think about ways to do it.

Guozhang


On Tue, Jan 20, 2015 at 5:18 PM, Jay Kreps  wrote:

> There is a draft patch for the new consumer up on KAFKA-1760:
>   https://issues.apache.org/jira/browse/KAFKA-1760
>
> I chatted with Guozhang earlier today and here was our thought on how to
> proceed:
> 1. There are changes to NetworkClient  and Sender that I'll describe below.
> These should be closely reviewed as (a) NetworkClient is an important
> interface and we should want to get it right, and (b) these changes may
> break the new producer if there is any problem with them.
> 2. The rest of the consumer we will do a couple rounds of high-level review
> on but probably not as deep. We will check it in and the proceed to add
> more system and integration tests on consumer functionality.
> 3. In parallel a few of the LI folks will take up the consumer co-ordinator
> server-side implementation.
>
> So right now what would be helpful would be for people to take a look at
> the networkclient and sender changes. There are some annoying javadoc
> auto-formatting changes which I'll try to get out of there, so ignore those
> for now.
>
> Let me try to motivate the new NetworkClient changes so people can
> understand them:
> 1. Added a method to check the number of in-flight requests per node, it
> matches the existing in-flight method but is just for one node.
> 2. Added completeAll() and completeAll(node) methods that block until all
> requests (or all requests for a given node) have completed. This is added
> to help implement blocking requests in the co-ordinator. There are
> corresponding methods in the selector to allow muting individual
> connections so that you no longer select on them.
> 3. Separated poll into a poll method and a send method. Previously to
> initiate a new request you had to also poll, which returned responses. This
> was great if you were ready to process responses, but actually these two
> things are somewhat separate. Now you always initiate requests with send
> and actual I/O is always done by poll(). This makes it possible to initiate
> non-blocking requests without needing to process responses.
> 4. Added a new RequestCompletionHandler callback interface. This can
> optionally be provided when you initiate a request and will be invoked on
> the response when the request is complete. The rationale for this is to
> make it easier to implement asynchronous processing when it is possible for
> requests to be initiated from many places in the code. This makes it a lot
> easier to ensure the response is always handled and also to define the
> request and response in the same place.
>
> Cheers,
>
> -Jay
>
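A minimal sketch of points 3 and 4 above (purely illustrative: the real NetworkClient operates on request/response objects and non-blocking sockets, not strings, and this queue-based "I/O" is a stand-in):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class SendPollDemo {
    interface RequestCompletionHandler {
        void onComplete(String response);
    }

    static class SketchClient {
        private final Deque<String> pendingRequests = new ArrayDeque<>();
        private final Deque<RequestCompletionHandler> handlers = new ArrayDeque<>();

        // send() only queues the request; no I/O happens here, so callers
        // can initiate requests without being ready to process responses.
        void send(String request, RequestCompletionHandler handler) {
            pendingRequests.add(request);
            handlers.add(handler);
        }

        // poll() performs all the "I/O" and invokes the completion callback
        // registered with each request, so responses are always handled in
        // the place the request was defined.
        void poll() {
            while (!pendingRequests.isEmpty()) {
                String req = pendingRequests.poll();
                handlers.poll().onComplete("response-to-" + req);
            }
        }
    }

    public static void main(String[] args) {
        SketchClient client = new SketchClient();
        client.send("metadata", r -> System.out.println(r)); // nothing printed yet
        client.poll(); // prints: response-to-metadata
    }
}
```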



-- 
-- Guozhang


Re: [DISCUSS] KIPs

2015-01-22 Thread Jay Kreps
Oh yeah I think that is better, I hadn't thought of that approach! Any way
you could describe the usage in the KIP, just for completeness?

-Jay

On Thu, Jan 22, 2015 at 10:23 AM, Gwen Shapira 
wrote:

> I think what you described was the original design, so no wonder you
> are confused :)
>
> Following suggestions from Jun, I changed it a bit. The current model is:
>
> - Clients (producers and consumers) need to know about the broker
> ports in advance. They don't need to know about all brokers, but they
> need to know at least one host:port pair that speaks the protocol they
> want to use. The change is that all host:port pairs in broker.list
[jira] [Commented] (KAFKA-1889) Refactor shell wrapper scripts

2015-01-22 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288061#comment-14288061
 ] 

Joe Stein commented on KAFKA-1889:
--

Thanks for the patch [~fsaintjacques]. What is the motivation for this? Have 
you seen the work on the new CLI 
https://issues.apache.org/jira/browse/KAFKA-1694, and have you thought about 
how that might be used going forward?

> Refactor shell wrapper scripts
> --
>
> Key: KAFKA-1889
> URL: https://issues.apache.org/jira/browse/KAFKA-1889
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Reporter: Francois Saint-Jacques
>Assignee: Francois Saint-Jacques
>Priority: Minor
> Attachments: refactor-scripts-v1.patch, refactor-scripts-v2.patch
>
>
> Shell scripts in bin/ need love.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1889) Refactor shell wrapper scripts

2015-01-22 Thread Francois Saint-Jacques (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francois Saint-Jacques updated KAFKA-1889:
--
Status: Open  (was: Patch Available)

> Refactor shell wrapper scripts
> --
>
> Key: KAFKA-1889
> URL: https://issues.apache.org/jira/browse/KAFKA-1889
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Reporter: Francois Saint-Jacques
>Assignee: Francois Saint-Jacques
>Priority: Minor
> Attachments: refactor-scripts-v1.patch, refactor-scripts-v2.patch
>
>
> Shell scripts in bin/ need love.





Re: Review Request 27799: New consumer

2015-01-22 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/#review68947
---



clients/src/main/java/org/apache/kafka/common/network/Selector.java


Just trying to understand the rationale for special-casing failures in the 
send() call and leaving them in the disconnected state until the beginning 
of the next poll() call?



clients/src/main/java/org/apache/kafka/common/network/Selector.java


Newline for @param?



clients/src/main/java/org/apache/kafka/common/utils/Utils.java


Newline here?



clients/src/main/java/org/apache/kafka/common/utils/Utils.java


Ditto above.



clients/src/test/java/org/apache/kafka/clients/consumer/ConsumerExampleTest.java


Is this intentional?



clients/src/test/java/org/apache/kafka/clients/consumer/MockConsumerTest.java


Do we need to commit here? The position() call should return 1L anyway, since 
it returns the fetch position of the next message, right?



clients/src/test/java/org/apache/kafka/clients/consumer/internals/SubscriptionStateTest.java


"expected" usage may be fragile with JUnit; see KAFKA-1782 for more details.



core/src/main/scala/kafka/cluster/Partition.scala


Good catch. This warning keeps popping up on the server.



core/src/main/scala/kafka/server/KafkaApis.scala


Just realized we might have a mismatch in our design while reading this 
code:

When a consumer subscribes to a specific (topic, partition), does it still 
need to send a join-group request to the coordinator? If not, then we will not 
be able to detect / exclude the case where consumers within the same group 
subscribe to both topic-partitions and topics, and the coordinator will only 
try to balance those subscribing to topics; if yes, then the join-group request 
needs to be modified, as it only contains a topics field today.



core/src/main/scala/kafka/tools/ConsumerPerformance.scala


The usage of the new consumer here assumes there is only one broker, and 
hence, per the stub logic, it will get all the partitions, which is a 
bit risky.



core/src/test/scala/integration/kafka/api/ConsumerTest.scala


Shall we add the @Test label just in case?



core/src/test/scala/integration/kafka/api/ConsumerTest.scala


I think a more comprehensive test would be to run the producer / consumer 
in background threads while the main thread iterates over killing / 
restarting brokers, as that way we are assured enough iterations will 
be executed before all produced records get consumed.



core/src/test/scala/integration/kafka/api/ConsumerTest.scala


Same here.



core/src/test/scala/integration/kafka/api/ConsumerTest.scala


If there are not enough records then this will block forever, so shall 
we add a timeout config and fail the test if the timeout is reached?



core/src/test/scala/unit/kafka/utils/TestUtils.scala


Is this the same as createTopic in line 172?


- Guozhang Wang


On Jan. 22, 2015, 6:06 p.m., Jay Kreps wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27799/
> ---
> 
> (Updated Jan. 22, 2015, 6:06 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1760
> https://issues.apache.org/jira/browse/KAFKA-1760
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> New consumer.
> 
> Addressed second round of comments, rebased again.
> 
> 
> Diffs
> -
> 
>   build.gradle 1cbab29ce83e20dae0561b51eed6fdb86d522f28 
>   clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
> d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
>   clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
> 8aece7e81a804b177a6f2c12e2dc6c89c1613262 
>   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
> ab7e3220f9b76b92ef981d695299656f041ad5ed 
>   clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
> 

[jira] [Assigned] (KAFKA-1757) Can not delete Topic index on Windows

2015-01-22 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani reassigned KAFKA-1757:
-

Assignee: Sriharsha Chintalapani

> Can not delete Topic index on Windows
> -
>
> Key: KAFKA-1757
> URL: https://issues.apache.org/jira/browse/KAFKA-1757
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.2
>Reporter: Lukáš Vyhlídka
>Assignee: Sriharsha Chintalapani
>Priority: Minor
> Fix For: 0.8.3
>
> Attachments: KAFKA-1757.patch, lucky-v.patch
>
>
> When running the Kafka 0.8.2-Beta (Scala 2.10) on Windows, an attempt to 
> delete the Topic threw an error:
> ERROR [KafkaApi-1] error when handling request Name: StopReplicaRequest; 
> Version: 0; CorrelationId: 38; ClientId: ; DeletePartitions: true; 
> ControllerId: 0; ControllerEpoch: 3; Partitions: [test,0] 
> (kafka.server.KafkaApis)
> kafka.common.KafkaStorageException: Delete of index 
> .index failed.
> at kafka.log.LogSegment.delete(LogSegment.scala:283)
> at kafka.log.Log$$anonfun$delete$1.apply(Log.scala:608)
> at kafka.log.Log$$anonfun$delete$1.apply(Log.scala:608)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at kafka.log.Log.delete(Log.scala:608)
> at kafka.log.LogManager.deleteLog(LogManager.scala:375)
> at 
> kafka.cluster.Partition$$anonfun$delete$1.apply$mcV$sp(Partition.scala:144)
> at 
> kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:139)
> at 
> kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:139)
> at kafka.utils.Utils$.inLock(Utils.scala:535)
> at kafka.utils.Utils$.inWriteLock(Utils.scala:543)
> at kafka.cluster.Partition.delete(Partition.scala:139)
> at kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:158)
> at 
> kafka.server.ReplicaManager$$anonfun$stopReplicas$3.apply(ReplicaManager.scala:191)
> at 
> kafka.server.ReplicaManager$$anonfun$stopReplicas$3.apply(ReplicaManager.scala:190)
> at scala.collection.immutable.Set$Set1.foreach(Set.scala:74)
> at kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:190)
> at kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:96)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:59)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
> at java.lang.Thread.run(Thread.java:744)
> When I investigated the issue I figured out that the index file (in my 
> environment it was 
> C:\tmp\kafka-logs\----0014-0\.index)
>  was locked by the kafka process and the OS did not allow deleting that file.
> I tried to fix the problem in the source code, and when I added a close() 
> method call into LogSegment.delete(), the Topic deletion started to work.
> I will attach a diff with the changes I have made (not sure how to upload the 
> file during issue creation) so you can take a look at whether it is 
> reasonable or not. It would be perfect if it could make it into the product...
> In the end I would like to say that on Linux the deletion works just fine...
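The close-before-delete pattern the reporter describes can be sketched as follows. This is not Kafka's actual LogSegment code, just an illustration of why releasing the file handle first makes deletion work on Windows:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: on Windows a file that is still held open (e.g. a log segment's
// index channel) cannot be deleted, so the handle must be released before
// Files.delete(). Class and method names here are illustrative.
public class CloseBeforeDelete {

    public static void deleteIndex(Path index, FileChannel channel) throws IOException {
        channel.close();      // release the OS file handle first
        Files.delete(index);  // now deletion succeeds on Windows as well
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("segment", ".index");
        FileChannel ch = FileChannel.open(p, StandardOpenOption.WRITE);
        deleteIndex(p, ch);
        System.out.println(Files.exists(p)); // false
    }
}
```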





Re: [KIP-DISCUSSION] KIP-7 Security - IP Filtering

2015-01-22 Thread Joe Stein
Hey Jeff, thanks for the patch and writing this up.

I think the approach of explicitly denying and then setting what is allowed,
or explicitly allowing and then denying specifics, makes sense. Supporting
CIDR notation and both IPv4 and IPv6 is good too.
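A minimal sketch of the CIDR matching the KIP proposes, using a hypothetical `CidrFilter` helper (not actual Kafka code). Comparing raw address bytes lets the same logic cover IPv4 and IPv6:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Illustrative CIDR membership check. Real code would plug this into the
// broker's connection-accept path; names here are hypothetical.
public class CidrFilter {

    public static boolean matches(String ip, String cidr) throws UnknownHostException {
        String[] parts = cidr.split("/");
        byte[] addr = InetAddress.getByName(ip).getAddress();
        byte[] net = InetAddress.getByName(parts[0]).getAddress();
        int prefix = Integer.parseInt(parts[1]);
        if (addr.length != net.length) return false; // IPv4 vs IPv6 mismatch
        // Compare the first `prefix` bits of the two addresses.
        for (int i = 0; i < prefix; i++) {
            int mask = 0x80 >> (i % 8);
            if ((addr[i / 8] & mask) != (net[i / 8] & mask)) return false;
        }
        return true;
    }

    public static void main(String[] args) throws UnknownHostException {
        System.out.println(matches("10.1.2.3", "10.0.0.0/8"));    // true
        System.out.println(matches("192.168.1.1", "10.0.0.0/8")); // false
    }
}
```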

Waiting for KAFKA-1845 to get committed before reworking this any further
makes sense, yes. Andrii posted a patch for it yesterday, so hopefully it
lands in the next week or so.

Not sure what other folks think of this approach, but whatever the consensus
is, it would be good to have it in prior to reworking for the ConfigDef changes.

/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop 
/

On Wed, Jan 21, 2015 at 8:47 PM, Jeff Holoman  wrote:

> Posted a KIP for IP Filtering:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-7+-+Security+-+IP+Filtering
>
> Relevant JIRA:
> https://issues.apache.org/jira/browse/KAFKA-1810
>
> Appreciate any feedback.
>
> Thanks
>
> Jeff
>


Re: [DISCUSS] KIP-5 - Broker Configuration Management

2015-01-22 Thread Gwen Shapira
Thanks for clarifying the upgrade path.

+1 for this proposal, it will greatly reduce errors resulting from
configuration mismatches.


On Thu, Jan 22, 2015 at 8:40 AM, Joe Stein  wrote:
> Deployment proposal:
>
>1. If global configuration isn't being used, do what you do today. There
>should be a configuration brokers.admin.global.configuration=true
>[default=false]; this works for the upgrade. Once all brokers are upgraded,
>proceed.
>2. If the global config is used, then only some properties (host, port) can
>be overridden in a property file for a single broker. There will always be
>configs that make sense for a single broker to override; going through them
>so far, there were few. We should make it clear in the code how to update
>those fields. This latter point is tied closely into
>https://issues.apache.org/jira/browse/KAFKA-1845.
>3. If the property is not set in the storage mechanism, then use the default
>from how we manage the configuration definitions.
>
>
> On Thu, Jan 22, 2015 at 11:16 AM, Harsha  wrote:
>
>> Hi Joe,
>> How will initial deployments look in this environment? I guess
>> users will deploy the Kafka cluster, it will be running with
>> defaults, and they can use a command line tool to update the
>> configs?
>> I think you need to update the JIRA and mailing list thread links on the
>> KIP page.
>> Thanks,
>> Harsha
>>
>>
>>
>> On Wed, Jan 21, 2015, at 10:34 PM, Joe Stein wrote:
>> > Created a KIP
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-5+-+Broker+Configuration+Management
>> >
>> > JIRA https://issues.apache.org/jira/browse/KAFKA-1786
>> >
>> > /***
>> >  Joe Stein
>> >  Founder, Principal Consultant
>> >  Big Data Open Source Security LLC
>> >  http://www.stealth.ly
>> >  Twitter: @allthingshadoop 
>> > /
>>


[jira] [Commented] (KAFKA-1729) add doc for Kafka-based offset management in 0.8.2

2015-01-22 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14287949#comment-14287949
 ] 

Joel Koshy commented on KAFKA-1729:
---

Should be able to update the doc patch today.

> add doc for Kafka-based offset management in 0.8.2
> --
>
> Key: KAFKA-1729
> URL: https://issues.apache.org/jira/browse/KAFKA-1729
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jun Rao
>Assignee: Joel Koshy
> Fix For: 0.8.2
>
> Attachments: KAFKA-1782-doc-v1.patch, KAFKA-1782-doc-v2.patch
>
>






Re: [DISCUSS] KIPs

2015-01-22 Thread Gwen Shapira
I think what you described was the original design, so no wonder you
are confused :)

Following suggestions from Jun, I changed it a bit. The current model is:

- Clients (producers and consumers) need to know about the broker
ports in advance. They don't need to know about all brokers, but they
need to know at least one host:port pair that speaks the protocol they
want to use. The change is that all host:port pairs in broker.list
must be of the same protocol and match the security.protocol
configuration parameter.

- Client uses security.protocol configuration parameter to open a
connection to one of the brokers and sends the good old
MetadataRequest. The broker knows which port it got the connection on,
therefore it knows which security protocol is expected (it needs to
use the same protocol to accept the connection and respond), and
therefore it can send a response that contains only the host:port
pairs that are relevant to that protocol.

- From the client side the MetadataResponse did not change - it
contains a list of brokerId,host,port that the client can connect to.
The fact that all those broker endpoints were chosen out of a larger
collection to match the right protocol is irrelevant for the client.
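A minimal sketch of the broker-side filtering described above; the map layout and names are illustrative, not Kafka's actual classes:

```java
import java.util.Map;

// Sketch: a broker advertises one endpoint per security protocol, and since
// it knows which listener a MetadataRequest arrived on, it can return only
// the endpoint matching that protocol. Names here are hypothetical.
public class EndpointFilter {

    // protocol -> "host:port" endpoint for a single broker
    public static String endpointFor(Map<String, String> brokerEndpoints,
                                     String requestProtocol) {
        return brokerEndpoints.get(requestProtocol);
    }

    public static void main(String[] args) {
        Map<String, String> endpoints = Map.of(
                "PLAINTEXT", "broker1:9092",
                "SSL", "broker1:9093");
        System.out.println(endpointFor(endpoints, "SSL")); // broker1:9093
    }
}
```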

I really like the new design since it preserves a lot of the same
configurations and APIs.

Thoughts?

Gwen

On Thu, Jan 22, 2015 at 9:19 AM, Jay Kreps  wrote:
> I think I am still confused. In addition to the UpdateMetadataRequest don't
> we have to change the MetadataResponse so that it's possible for clients to
> discover the new ports? Or is that a second phase? I was imagining it
> worked by basically allowing the brokers to advertise multiple ports, one
> per security type, and then in the client you configure a protocol which
> will implicitly choose the port from the options returned in metadata to
> you...
>
> Likewise in the ConsumerMetadataResponse we are currently giving back full
> broker information. I think we would have two options here: either change
> the broker information included in that response to match the
> metadataresponse or else remove the broker information entirely and just
> return the node id (since in order to use that request you would already
> have to have the cluster metadata). The second option may be cleaner since
> it means we won't have to continue evolving those two in lockstep...
>
> -Jay
>
> On Wed, Jan 21, 2015 at 6:19 PM, Gwen Shapira  wrote:
>
>> Good point :)
>>
>> I added the specifics of the new  UpdateMetadataRequest, which is the
>> only protocol bump in this change.
>>
>> Highlighted the broker and producer/consumer configuration changes,
>> added some example values and added the new zookeeper json.
>>
>> Hope this makes things clearer.
>>
>> On Wed, Jan 21, 2015 at 2:19 PM, Jay Kreps  wrote:
>> > Hey Gwen,
>> >
>> > Could we get the actual changes in that KIP? I.e. changes to metadata
>> > request, changes to UpdateMetadataRequest, new configs and what will
>> their
>> > valid values be, etc. This kind of says that those things will change but
>> > doesn't say what they will change to...
>> >
>> > -Jay
>> >
>> > On Mon, Jan 19, 2015 at 9:45 PM, Gwen Shapira 
>> wrote:
>> >
>> >> I created a KIP for the multi-port broker change.
>> >>
>> >>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-2+-+Refactor+brokers+to+allow+listening+on+multiple+ports+and+IPs
>> >>
>> >> I'm not re-opening the discussion, since it was agreed on over a month
>> >> ago and implementation is close to complete (I hope!). Let's consider
>> >> this voted and accepted?
>> >>
>> >> Gwen
>> >>
>> >> On Sun, Jan 18, 2015 at 10:31 AM, Jay Kreps 
>> wrote:
>> >> > Great! Sounds like everyone is on the same page
>> >> >
>> >> >- I created a template page to make things easier. If you do
>> >> Tools->Copy
>> >> >on this page you can just fill in the italic portions with your
>> >> details.
>> >> >- I retrofitted KIP-1 to match this formatting
>> >> >- I added the metadata section people asked for (a link to the
>> >> >discussion, the JIRA, and the current status). Let's make sure we
>> >> remember
>> >> >to update the current status as things are figured out.
>> >> >- Let's keep the discussion on the mailing list rather than on the
>> >> wiki
>> >> >pages. It makes sense to do one or the other so all the comments
>> are
>> >> in one
>> >> >place and I think prior experience is that the wiki comments are
>> the
>> >> worse
>> >> >way.
>> >> >
>> >> > I think it would be great to do KIPs for some of the in-flight items
>> folks
>> >> > mentioned.
>> >> >
>> >> > -Jay
>> >> >
>> >> > On Sat, Jan 17, 2015 at 8:23 AM, Gwen Shapira 
>> >> wrote:
>> >> >
>> >> >> +1
>> >> >>
>> >> >> Will be happy to provide a KIP for the multiple-listeners patch.
>> >> >>
>> >> >> Gwen
>> >> >>
>> >> >> On Sat, Jan 17, 2015 at 8:10 AM, Joe Stein 
>> >> wrote:
>> >> >> > +1 to everything we have been saying and where this (has settled
>> >> to)/(is
>> >> >> > sett

Re: Review Request 27799: New consumer

2015-01-22 Thread Aditya Auradkar


> On Jan. 22, 2015, 5:35 p.m., Aditya Auradkar wrote:
> > clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java,
> >  line 21
> > 
> >
> > nit. Can we remove the public from the interface methods?
> 
> Jay Kreps wrote:
> Can you explain...?
> 
> Aditya Auradkar wrote:
> I gather all interface methods are implicitly public, so that should be 
> unnecessary.

http://docs.oracle.com/javase/specs/jls/se7/html/jls-9.html#jls-9.4


- Aditya
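The JLS rule linked above can be checked directly with reflection; the interface here is illustrative, not Kafka's:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

// Demonstrates that a method declared in an interface is implicitly public,
// so the explicit "public" modifier is redundant.
public class InterfaceModifiers {

    public interface Handler {
        void onComplete(Object response); // no "public", yet public anyway
    }

    public static boolean isPublic(Class<?> iface, String name) throws Exception {
        Method m = iface.getMethod(name, Object.class);
        return Modifier.isPublic(m.getModifiers());
    }

    public static void main(String[] args) throws Exception {
        System.out.println(isPublic(Handler.class, "onComplete")); // true
    }
}
```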


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/#review69204
---


On Jan. 22, 2015, 6:06 p.m., Jay Kreps wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27799/
> ---
> 
> (Updated Jan. 22, 2015, 6:06 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1760
> https://issues.apache.org/jira/browse/KAFKA-1760
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> New consumer.
> 
> Addressed second round of comments, rebased again.
> 
> 
> Diffs
> -
> 
>   build.gradle 1cbab29ce83e20dae0561b51eed6fdb86d522f28 
>   clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
> d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
>   clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
> 8aece7e81a804b177a6f2c12e2dc6c89c1613262 
>   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
> ab7e3220f9b76b92ef981d695299656f041ad5ed 
>   clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
> 397695568d3fd8e835d8f923a89b3b00c96d0ead 
>   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
> 6746275d0b2596cd6ff7ce464a3a8225ad75ef00 
>   
> clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
> c0c636b3e1ba213033db6d23655032c9bbd5e378 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
> 57c1807ccba9f264186f83e91f37c34b959c8060 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceCallback.java
>  e4cf7d1cfa01c2844b53213a7b539cdcbcbeaa3a 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java 
> 16af70a5de52cca786fdea147a6a639b7dc4a311 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
> bdf4b26942d5a8c8a9503e05908e9a9eff6228a7 
>   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
> 76efc216c9e6c3ab084461d792877092a189ad0f 
>   clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
> fa88ac1a8b19b4294f211c4467fe68c7707ddbae 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/NoOffsetForPartitionException.java
>  PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/consumer/OffsetMetadata.java 
> ea423ad15eebd262d20d5ec05d592cc115229177 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
>  PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> fc71710dd5997576d3841a1c3b0f7e19a8c9698e 
>   clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
> 904976fadf0610982958628eaee810b60a98d725 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> 8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java
>  dcf46581b912cfb1b5c8d4cbc293d2d1444b7740 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Partitioner.java
>  483899d2e69b33655d0e08949f5f64af2519660a 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
> ccc03d8447ebba40131a70e16969686ac4aab58a 
>   clients/src/main/java/org/apache/kafka/common/Cluster.java 
> d3299b944062d96852452de455902659ad8af757 
>   clients/src/main/java/org/apache/kafka/common/PartitionInfo.java 
> b15aa2c3ef2d7c4b24618ff42fd4da324237a813 
>   clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
> 98cb79b701918eca3f6ca9823b6c7b7c97b3ecec 
>   clients/src/main/java/org/apache/kafka/common/errors/ApiException.java 
> 7c948b166a8ac07616809f260754116ae7764973 
>   clie

Re: Review Request 27799: New consumer

2015-01-22 Thread Aditya Auradkar


> On Jan. 22, 2015, 5:35 p.m., Aditya Auradkar wrote:
> > clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java,
> >  line 21
> > 
> >
> > nit. Can we remove the public from the interface methods?
> 
> Jay Kreps wrote:
> Can you explain...?

I gather all interface methods are implicitly public, so that should be 
unnecessary.


- Aditya


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/#review69204
---


On Jan. 22, 2015, 6:06 p.m., Jay Kreps wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27799/
> ---
> 
> (Updated Jan. 22, 2015, 6:06 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1760
> https://issues.apache.org/jira/browse/KAFKA-1760
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> New consumer.
> 
> Addressed second round of comments, rebased again.
> 
> 

Re: Review Request 27799: New consumer

2015-01-22 Thread Jay Kreps

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/
---

(Updated Jan. 22, 2015, 6:06 p.m.)


Review request for kafka.


Bugs: KAFKA-1760
https://issues.apache.org/jira/browse/KAFKA-1760


Repository: kafka


Description (updated)
---

New consumer.

Addressed second round of comments, rebased again.


Diffs
-

  build.gradle 1cbab29ce83e20dae0561b51eed6fdb86d522f28 
  clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
  clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
8aece7e81a804b177a6f2c12e2dc6c89c1613262 
  clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
ab7e3220f9b76b92ef981d695299656f041ad5ed 
  clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
397695568d3fd8e835d8f923a89b3b00c96d0ead 
  clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
6746275d0b2596cd6ff7ce464a3a8225ad75ef00 
  clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
c0c636b3e1ba213033db6d23655032c9bbd5e378 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
57c1807ccba9f264186f83e91f37c34b959c8060 
  
clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceCallback.java
 e4cf7d1cfa01c2844b53213a7b539cdcbcbeaa3a 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java 
16af70a5de52cca786fdea147a6a639b7dc4a311 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
bdf4b26942d5a8c8a9503e05908e9a9eff6228a7 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
76efc216c9e6c3ab084461d792877092a189ad0f 
  clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
fa88ac1a8b19b4294f211c4467fe68c7707ddbae 
  
clients/src/main/java/org/apache/kafka/clients/consumer/NoOffsetForPartitionException.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/OffsetMetadata.java 
ea423ad15eebd262d20d5ec05d592cc115229177 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
fc71710dd5997576d3841a1c3b0f7e19a8c9698e 
  clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
904976fadf0610982958628eaee810b60a98d725 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java 
dcf46581b912cfb1b5c8d4cbc293d2d1444b7740 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/Partitioner.java
 483899d2e69b33655d0e08949f5f64af2519660a 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
ccc03d8447ebba40131a70e16969686ac4aab58a 
  clients/src/main/java/org/apache/kafka/common/Cluster.java 
d3299b944062d96852452de455902659ad8af757 
  clients/src/main/java/org/apache/kafka/common/PartitionInfo.java 
b15aa2c3ef2d7c4b24618ff42fd4da324237a813 
  clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
98cb79b701918eca3f6ca9823b6c7b7c97b3ecec 
  clients/src/main/java/org/apache/kafka/common/errors/ApiException.java 
7c948b166a8ac07616809f260754116ae7764973 
  clients/src/main/java/org/apache/kafka/common/network/Selectable.java 
b68bbf00ab8eba6c5867d346c91188142593ca6e 
  clients/src/main/java/org/apache/kafka/common/network/Selector.java 
74d695ba39de44b6a3d15340ec0114bc4fce2ba2 
  clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
3316b6a1098311b8603a4a5893bf57b75d2e43cb 
  clients/src/main/java/org/apache/kafka/common/protocol/types/Struct.java 
121e880a941fcd3e6392859edba11a94236494cc 
  clients/src/main/java/org/apache/kafka/common/record/LogEntry.java 
e4d688cbe0c61b74ea15fc8dd3b634f9e5ee9b83 
  clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java 
040e5b91005edb8f015afdfa76fd94e0bf3cb4ca 
  
clients/src/main/java/org/apache/kafka/common/requests/ConsumerMetadataRequest.java
 99b52c23d639df010bf2affc0f79d1c6e16ed67c 
  
clients/src/main/java/org/apache/kafka/common/requests/ConsumerMetadataResponse.java
 8b8f591c4b2802a9cbbe34746c0b3ca4a64a8681 
  clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 
2fc471f

[jira] [Commented] (KAFKA-1760) Implement new consumer client

2015-01-22 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14287869#comment-14287869
 ] 

Jay Kreps commented on KAFKA-1760:
--

Updated reviewboard https://reviews.apache.org/r/27799/diff/
 against branch trunk

> Implement new consumer client
> -
>
> Key: KAFKA-1760
> URL: https://issues.apache.org/jira/browse/KAFKA-1760
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Fix For: 0.8.3
>
> Attachments: KAFKA-1760.patch, KAFKA-1760_2015-01-11_16:57:15.patch, 
> KAFKA-1760_2015-01-18_19:10:13.patch, KAFKA-1760_2015-01-21_08:42:20.patch, 
> KAFKA-1760_2015-01-22_10:03:26.patch
>
>
> Implement a consumer client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1760) Implement new consumer client

2015-01-22 Thread Jay Kreps (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Kreps updated KAFKA-1760:
-
Attachment: KAFKA-1760_2015-01-22_10:03:26.patch

> Implement new consumer client
> -
>
> Key: KAFKA-1760
> URL: https://issues.apache.org/jira/browse/KAFKA-1760
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Fix For: 0.8.3
>
> Attachments: KAFKA-1760.patch, KAFKA-1760_2015-01-11_16:57:15.patch, 
> KAFKA-1760_2015-01-18_19:10:13.patch, KAFKA-1760_2015-01-21_08:42:20.patch, 
> KAFKA-1760_2015-01-22_10:03:26.patch
>
>
> Implement a consumer client.





Re: Review Request 27799: New consumer

2015-01-22 Thread Jay Kreps

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/
---

(Updated Jan. 22, 2015, 6:03 p.m.)


Review request for kafka.


Bugs: KAFKA-1760
https://issues.apache.org/jira/browse/KAFKA-1760


Repository: kafka


Description (updated)
---

New consumer.


Diffs (updated)
-

  build.gradle 1cbab29ce83e20dae0561b51eed6fdb86d522f28 
  clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
  clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
8aece7e81a804b177a6f2c12e2dc6c89c1613262 
  clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
ab7e3220f9b76b92ef981d695299656f041ad5ed 
  clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
397695568d3fd8e835d8f923a89b3b00c96d0ead 
  clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
6746275d0b2596cd6ff7ce464a3a8225ad75ef00 
  clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
c0c636b3e1ba213033db6d23655032c9bbd5e378 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
57c1807ccba9f264186f83e91f37c34b959c8060 
  
clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceCallback.java
 e4cf7d1cfa01c2844b53213a7b539cdcbcbeaa3a 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java 
16af70a5de52cca786fdea147a6a639b7dc4a311 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
bdf4b26942d5a8c8a9503e05908e9a9eff6228a7 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
76efc216c9e6c3ab084461d792877092a189ad0f 
  clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
fa88ac1a8b19b4294f211c4467fe68c7707ddbae 
  
clients/src/main/java/org/apache/kafka/clients/consumer/NoOffsetForPartitionException.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/OffsetMetadata.java 
ea423ad15eebd262d20d5ec05d592cc115229177 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
fc71710dd5997576d3841a1c3b0f7e19a8c9698e 
  clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
904976fadf0610982958628eaee810b60a98d725 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java 
dcf46581b912cfb1b5c8d4cbc293d2d1444b7740 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/Partitioner.java
 483899d2e69b33655d0e08949f5f64af2519660a 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
ccc03d8447ebba40131a70e16969686ac4aab58a 
  clients/src/main/java/org/apache/kafka/common/Cluster.java 
d3299b944062d96852452de455902659ad8af757 
  clients/src/main/java/org/apache/kafka/common/PartitionInfo.java 
b15aa2c3ef2d7c4b24618ff42fd4da324237a813 
  clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
98cb79b701918eca3f6ca9823b6c7b7c97b3ecec 
  clients/src/main/java/org/apache/kafka/common/errors/ApiException.java 
7c948b166a8ac07616809f260754116ae7764973 
  clients/src/main/java/org/apache/kafka/common/network/Selectable.java 
b68bbf00ab8eba6c5867d346c91188142593ca6e 
  clients/src/main/java/org/apache/kafka/common/network/Selector.java 
74d695ba39de44b6a3d15340ec0114bc4fce2ba2 
  clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
3316b6a1098311b8603a4a5893bf57b75d2e43cb 
  clients/src/main/java/org/apache/kafka/common/protocol/types/Struct.java 
121e880a941fcd3e6392859edba11a94236494cc 
  clients/src/main/java/org/apache/kafka/common/record/LogEntry.java 
e4d688cbe0c61b74ea15fc8dd3b634f9e5ee9b83 
  clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java 
040e5b91005edb8f015afdfa76fd94e0bf3cb4ca 
  
clients/src/main/java/org/apache/kafka/common/requests/ConsumerMetadataRequest.java
 99b52c23d639df010bf2affc0f79d1c6e16ed67c 
  
clients/src/main/java/org/apache/kafka/common/requests/ConsumerMetadataResponse.java
 8b8f591c4b2802a9cbbe34746c0b3ca4a64a8681 
  clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 
2fc471f64f4352eeb128bbd3941779780076fb8c 
  clien

Re: Review Request 27799: New consumer

2015-01-22 Thread Jay Kreps


> On Jan. 22, 2015, 5:35 p.m., Aditya Auradkar wrote:
> > clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java,
> >  line 21
> > 
> >
> > nit. Can we remove the public from the interface methods?

Can you explain...?


- Jay
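For context on the nit being discussed: in Java, interface members are implicitly public (and methods implicitly abstract), so the explicit `public` modifier is redundant. A minimal sketch, using a hypothetical handler interface modeled loosely on the file named in the review (the real signature may differ):

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

// Hypothetical stand-in for the reviewed interface; the real
// RequestCompletionHandler's method signature may differ.
interface CompletionHandlerSketch {
    void onComplete(String response); // implicitly public and abstract
}

public class InterfaceModifierDemo {
    public static void main(String[] args) throws Exception {
        Method m = CompletionHandlerSketch.class.getMethod("onComplete", String.class);
        // Reflection shows the method is public even without the keyword,
        // which is why the explicit "public" is considered redundant.
        System.out.println(Modifier.isPublic(m.getModifiers())); // prints "true"
    }
}
```

Whether to write the redundant modifier is purely a style question; the compiled bytecode is identical either way.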


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/#review69204
---


On Jan. 21, 2015, 4:47 p.m., Jay Kreps wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27799/
> ---
> 
> (Updated Jan. 21, 2015, 4:47 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1760
> https://issues.apache.org/jira/browse/KAFKA-1760
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> New consumer.
> 
> Addressed the first round of comments.
> 
> 
> Diffs
> -
> 
>   build.gradle c9ac43378c3bf5443f0f47c8ba76067237ecb348 
>   clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
> d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
>   clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
> 8aece7e81a804b177a6f2c12e2dc6c89c1613262 
>   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
> ab7e3220f9b76b92ef981d695299656f041ad5ed 
>   clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
> 397695568d3fd8e835d8f923a89b3b00c96d0ead 
>   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
> 6746275d0b2596cd6ff7ce464a3a8225ad75ef00 
>   
> clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
> c0c636b3e1ba213033db6d23655032c9bbd5e378 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
> 57c1807ccba9f264186f83e91f37c34b959c8060 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceCallback.java
>  e4cf7d1cfa01c2844b53213a7b539cdcbcbeaa3a 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java 
> 16af70a5de52cca786fdea147a6a639b7dc4a311 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
> bdf4b26942d5a8c8a9503e05908e9a9eff6228a7 
>   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
> 76efc216c9e6c3ab084461d792877092a189ad0f 
>   clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
> fa88ac1a8b19b4294f211c4467fe68c7707ddbae 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/NoOffsetForPartitionException.java
>  PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/consumer/OffsetMetadata.java 
> ea423ad15eebd262d20d5ec05d592cc115229177 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
>  PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> fc71710dd5997576d3841a1c3b0f7e19a8c9698e 
>   clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
> 904976fadf0610982958628eaee810b60a98d725 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> 8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java
>  dcf46581b912cfb1b5c8d4cbc293d2d1444b7740 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Partitioner.java
>  483899d2e69b33655d0e08949f5f64af2519660a 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
> ccc03d8447ebba40131a70e16969686ac4aab58a 
>   clients/src/main/java/org/apache/kafka/common/Cluster.java 
> d3299b944062d96852452de455902659ad8af757 
>   clients/src/main/java/org/apache/kafka/common/PartitionInfo.java 
> b15aa2c3ef2d7c4b24618ff42fd4da324237a813 
>   clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
> 98cb79b701918eca3f6ca9823b6c7b7c97b3ecec 
>   clients/src/main/java/org/apache/kafka/common/errors/ApiException.java 
> 7c948b166a8ac07616809f260754116ae7764973 
>   clients/src/main/java/org/apache/kafka/common/network/Selectable.java 
> b68bbf00ab8eba6c5867d346c91188142593ca6e 
>   clients/src/main/java/org/apache/kafka/common/network/Selector.java 
> 74d695ba39de44b6a3d15340ec0114bc4fce2ba2 
>   cl

Re: Review Request 27799: New consumer

2015-01-22 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/#review69204
---



clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java


Can we remove these trailing spaces from a few files?



clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java


nit. Can we remove the public from the interface methods?


- Aditya Auradkar


On Jan. 21, 2015, 4:47 p.m., Jay Kreps wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27799/
> ---
> 
> (Updated Jan. 21, 2015, 4:47 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1760
> https://issues.apache.org/jira/browse/KAFKA-1760
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> New consumer.
> 
> Addressed the first round of comments.
> 
> 
> Diffs
> -
> 
>   build.gradle c9ac43378c3bf5443f0f47c8ba76067237ecb348 
>   clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
> d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
>   clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
> 8aece7e81a804b177a6f2c12e2dc6c89c1613262 
>   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
> ab7e3220f9b76b92ef981d695299656f041ad5ed 
>   clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
> 397695568d3fd8e835d8f923a89b3b00c96d0ead 
>   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
> 6746275d0b2596cd6ff7ce464a3a8225ad75ef00 
>   
> clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
> c0c636b3e1ba213033db6d23655032c9bbd5e378 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
> 57c1807ccba9f264186f83e91f37c34b959c8060 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceCallback.java
>  e4cf7d1cfa01c2844b53213a7b539cdcbcbeaa3a 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java 
> 16af70a5de52cca786fdea147a6a639b7dc4a311 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
> bdf4b26942d5a8c8a9503e05908e9a9eff6228a7 
>   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
> 76efc216c9e6c3ab084461d792877092a189ad0f 
>   clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
> fa88ac1a8b19b4294f211c4467fe68c7707ddbae 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/NoOffsetForPartitionException.java
>  PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/consumer/OffsetMetadata.java 
> ea423ad15eebd262d20d5ec05d592cc115229177 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
>  PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> fc71710dd5997576d3841a1c3b0f7e19a8c9698e 
>   clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
> 904976fadf0610982958628eaee810b60a98d725 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> 8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java
>  dcf46581b912cfb1b5c8d4cbc293d2d1444b7740 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Partitioner.java
>  483899d2e69b33655d0e08949f5f64af2519660a 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
> ccc03d8447ebba40131a70e16969686ac4aab58a 
>   clients/src/main/java/org/apache/kafka/common/Cluster.java 
> d3299b944062d96852452de455902659ad8af757 
>   clients/src/main/java/org/apache/kafka/common/PartitionInfo.java 
> b15aa2c3ef2d7c4b24618ff42fd4da324237a813 
>   clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
> 98cb79b701918eca3f6ca9823b6c7b7c97b3ecec 
>   clients/src/main/java/org/apache/kafka/common/errors/ApiException.java 
> 7c948b166a8ac07616809f260754116ae7764973 
>   clients/src/main/java/org/apache/kafka/common/network/Selectable.java 
> b68bbf00ab8eba6c5867d346c91188142593ca6e 
>   clients/src/main/java/org/apache/kafka/common/n

Re: Review Request 27799: New consumer

2015-01-22 Thread Jay Kreps


> On Jan. 22, 2015, 2:45 a.m., Onur Karaman wrote:
> > clients/src/main/java/org/apache/kafka/common/requests/MetadataResponse.java,
> >  line 32
> > 
> >
> > Other CURRENT_SCHEMA's throughout the rb were changed to be final.

Wups, yeah all the request objects were actually added in a different patch on 
a different JIRA. I incorporated that here since there were some minor changes 
needed. So that is just inherited from that original patch. Fixed.


- Jay
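The review point above (marking `CURRENT_SCHEMA` as `final`) can be illustrated with a hedged sketch; `Schema` here is a stand-in class for illustration, not Kafka's actual protocol `Schema` type:

```java
public class SchemaConstantSketch {
    // Stand-in for Kafka's protocol Schema class (illustrative only).
    static class Schema {
        final String name;
        Schema(String name) { this.name = name; }
    }

    // Declaring the shared constant static final documents that it is
    // assigned exactly once; the compiler then rejects any reassignment,
    // e.g. `CURRENT_SCHEMA = new Schema("v1");` would not compile.
    static final Schema CURRENT_SCHEMA = new Schema("metadata_response_v0");

    public static void main(String[] args) {
        System.out.println(CURRENT_SCHEMA.name); // prints "metadata_response_v0"
    }
}
```

Making all the `CURRENT_SCHEMA` constants `final` keeps the request/response classes consistent with one another, which is what the review comment asked for.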


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/#review69117
---


On Jan. 21, 2015, 4:47 p.m., Jay Kreps wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27799/
> ---
> 
> (Updated Jan. 21, 2015, 4:47 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1760
> https://issues.apache.org/jira/browse/KAFKA-1760
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> New consumer.
> 
> Addressed the first round of comments.
> 
> 
> Diffs
> -
> 
>   build.gradle c9ac43378c3bf5443f0f47c8ba76067237ecb348 
>   clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
> d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
>   clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
> 8aece7e81a804b177a6f2c12e2dc6c89c1613262 
>   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
> ab7e3220f9b76b92ef981d695299656f041ad5ed 
>   clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
> 397695568d3fd8e835d8f923a89b3b00c96d0ead 
>   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
> 6746275d0b2596cd6ff7ce464a3a8225ad75ef00 
>   
> clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
> c0c636b3e1ba213033db6d23655032c9bbd5e378 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
> 57c1807ccba9f264186f83e91f37c34b959c8060 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceCallback.java
>  e4cf7d1cfa01c2844b53213a7b539cdcbcbeaa3a 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java 
> 16af70a5de52cca786fdea147a6a639b7dc4a311 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
> bdf4b26942d5a8c8a9503e05908e9a9eff6228a7 
>   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
> 76efc216c9e6c3ab084461d792877092a189ad0f 
>   clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
> fa88ac1a8b19b4294f211c4467fe68c7707ddbae 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/NoOffsetForPartitionException.java
>  PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/consumer/OffsetMetadata.java 
> ea423ad15eebd262d20d5ec05d592cc115229177 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
>  PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> fc71710dd5997576d3841a1c3b0f7e19a8c9698e 
>   clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
> 904976fadf0610982958628eaee810b60a98d725 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> 8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java
>  dcf46581b912cfb1b5c8d4cbc293d2d1444b7740 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Partitioner.java
>  483899d2e69b33655d0e08949f5f64af2519660a 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
> ccc03d8447ebba40131a70e16969686ac4aab58a 
>   clients/src/main/java/org/apache/kafka/common/Cluster.java 
> d3299b944062d96852452de455902659ad8af757 
>   clients/src/main/java/org/apache/kafka/common/PartitionInfo.java 
> b15aa2c3ef2d7c4b24618ff42fd4da324237a813 
>   clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
> 98cb79b701918eca3f6ca9823b6c7b7c97b3ecec 
>   clients/src/main/java/org/apache/kafka/common/errors/ApiException.java 
> 7c948b166a8ac07616809f260754116ae7764973 
>   clients/src/main/java/org

Re: [DISCUSS] KIPs

2015-01-22 Thread Jay Kreps
I think I am still confused. In addition to the UpdateMetadataRequest don't
we have to change the MetadataResponse so that it's possible for clients to
discover the new ports? Or is that a second phase? I was imagining it
worked by basically allowing the brokers to advertise multiple ports, one
per security type, and then in the client you configure a protocol which
will implicitly choose the port from the options returned in metadata to
you...

Likewise in the ConsumerMetadataResponse we are currently giving back full
broker information. I think we would have two options here: either change
the broker information included in that response to match the
MetadataResponse, or else remove the broker information entirely and just
return the node id (since in order to use that request you would already
have to have the cluster metadata). The second option may be cleaner since
it means we won't have to continue evolving those two in lockstep...

-Jay
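To make the first idea concrete, here is a hedged sketch of a broker advertising one port per security protocol, with the client resolving its configured protocol to a port from the returned metadata. All names here (`SecurityProtocol`, `BrokerEndpoints`, `portFor`) are assumptions for illustration, not the actual KIP-2 API or wire format:

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical sketch: a broker advertises one port per security protocol,
// and a client configured with a protocol picks the matching port from the
// metadata it receives.
public class MultiPortSketch {
    enum SecurityProtocol { PLAINTEXT, SSL }

    static class BrokerEndpoints {
        final Map<SecurityProtocol, Integer> ports =
            new EnumMap<>(SecurityProtocol.class);

        BrokerEndpoints advertise(SecurityProtocol protocol, int port) {
            ports.put(protocol, port);
            return this;
        }

        // Client-side selection: resolve the configured protocol to a port,
        // failing fast if the broker does not serve that protocol.
        int portFor(SecurityProtocol protocol) {
            Integer port = ports.get(protocol);
            if (port == null)
                throw new IllegalArgumentException("Broker does not serve " + protocol);
            return port;
        }
    }

    public static void main(String[] args) {
        BrokerEndpoints broker = new BrokerEndpoints()
            .advertise(SecurityProtocol.PLAINTEXT, 9092)
            .advertise(SecurityProtocol.SSL, 9093);
        // An SSL-configured client implicitly chooses the SSL port.
        System.out.println(broker.portFor(SecurityProtocol.SSL)); // prints "9093"
    }
}
```

Under this sketch, a ConsumerMetadataResponse that returned only a node id would still suffice, since the client can look the endpoints up in its cached cluster metadata.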

On Wed, Jan 21, 2015 at 6:19 PM, Gwen Shapira  wrote:

> Good point :)
>
> I added the specifics of the new  UpdateMetadataRequest, which is the
> only protocol bump in this change.
>
> Highlighted the broker and producer/consumer configuration changes,
> added some example values and added the new zookeeper json.
>
> Hope this makes things clearer.
>
> On Wed, Jan 21, 2015 at 2:19 PM, Jay Kreps  wrote:
> > Hey Gwen,
> >
> > Could we get the actual changes in that KIP? I.e. changes to metadata
> > request, changes to UpdateMetadataRequest, new configs and what will
> their
> > valid values be, etc. This kind of says that those things will change but
> > doesn't say what they will change to...
> >
> > -Jay
> >
> > On Mon, Jan 19, 2015 at 9:45 PM, Gwen Shapira 
> wrote:
> >
> >> I created a KIP for the multi-port broker change.
> >>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-2+-+Refactor+brokers+to+allow+listening+on+multiple+ports+and+IPs
> >>
> >> I'm not re-opening the discussion, since it was agreed on over a month
> >> ago and implementation is close to complete (I hope!). Lets consider
> >> this voted and accepted?
> >>
> >> Gwen
> >>
> >> On Sun, Jan 18, 2015 at 10:31 AM, Jay Kreps 
> wrote:
> >> > Great! Sounds like everyone is on the same page
> >> >
> >> >- I created a template page to make things easier. If you do
> >> Tools->Copy
> >> >on this page you can just fill in the italic portions with your
> >> details.
> >> >- I retrofitted KIP-1 to match this formatting
> >> >- I added the metadata section people asked for (a link to the
> >> >discussion, the JIRA, and the current status). Let's make sure we
> >> remember
> >> >to update the current status as things are figured out.
> >> >- Let's keep the discussion on the mailing list rather than on the
> >> wiki
> >> >pages. It makes sense to do one or the other so all the comments
> are
> >> in one
> >> >place and I think prior experience is that the wiki comments are
> the
> >> worse
> >> >way.
> >> >
> >> > I think it would be great do KIPs for some of the in-flight items
> folks
> >> > mentioned.
> >> >
> >> > -Jay
> >> >
> >> > On Sat, Jan 17, 2015 at 8:23 AM, Gwen Shapira 
> >> wrote:
> >> >
> >> >> +1
> >> >>
> >> >> Will be happy to provide a KIP for the multiple-listeners patch.
> >> >>
> >> >> Gwen
> >> >>
> >> >> On Sat, Jan 17, 2015 at 8:10 AM, Joe Stein 
> >> wrote:
> >> >> > +1 to everything we have been saying and where this (has settled
> >> to)/(is
> >> >> > settling to).
> >> >> >
> >> >> > I am sure other folks have some more feedback and think we should
> try
> >> to
> >> >> > keep this discussion going if need be. I am also a firm believer of
> >> form
> >> >> > following function so kicking the tires some to flesh out the
> details
> >> of
> >> >> > this and have some organic growth with the process will be healthy
> and
> >> >> > beneficial to the community.
> >> >> >
> >> >> > For my part, what I will do is open a few KIP based on some of the
> >> work I
> >> >> > have been involved with for 0.8.3. Off the top of my head this
> would
> >> >> > include 1) changes to re-assignment of partitions 2) kafka cli 3)
> >> global
> >> >> > configs 4) security white list black list by ip 5) SSL 6) We
> probably
> >> >> will
> >> >> > have lots of Security related KIPs and should treat them all
> >> individually
> >> >> > when the time is appropriate 7) Kafka on Mesos.
> >> >> >
> >> >> > If someone else wants to jump in to start getting some of the
> security
> >> >> KIP
> >> >> > that we are going to have in 0.8.3 I think that would be great
> (e.g.
> >> >> > Multiple Listeners for Kafka Brokers). There are also a few other
> >> >> tickets I
> >> >> > can think of that are important to have in the code in 0.8.3 that
> >> should
> >> >> > have KIP also that I haven't really been involved in. I will take a
> >> few
> >> >> > minutes and go through JIRA (one I can think of like auto assign id
> >> that
> >> >> is
> >> >> > already committed I 

Re: NIO and Threading implementation

2015-01-22 Thread Jay Kreps
Hey Chittaranjan,

Yeah I think at a high level our goal is that classes are either threadsafe
or not and if threadsafe their safety doesn't depend on the details of
their current usage(s) since that often changes. In other words the
synchronization should be encapsulated in the class. So that's the goal.
I've often seen this stray over time as code gets refactored. If you see
issues, definitely send a patch. :-)

For the socket server code, yes if that works let's definitely remove the
unneeded wakeups. It would be interesting to trace back the ancestry of
that code and figure out how it ended up like that (may have always been
that way).

For the selectors what I was saying is this. We need to have multiple
threads doing network I/O. This helps scale that cpu load and also since we
use sendfile which is blocking we need to have threads to absorb that load.
So we need multiple I/O threads. Two ways I know to do this: (1) give each
thread a selector and register accepted sockets with one of these (at
random or round-robin), or (2) have a single selector and attempt to share
it between threads. Theoretically (2) could have advantages because if one
thread is blocked others will pick up its work. However there are a ton of
issues in making this correct and efficient. So basically I was advocating
(1) as being better than (2).

-Jay
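A minimal sketch of option (1), assuming simple round-robin assignment of accepted connections; class names here are illustrative stand-ins, not Kafka's actual SocketServer code:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.channels.Selector;

// Illustrative sketch of option (1): each I/O thread owns a private
// Selector, and every accepted socket is assigned to exactly one thread,
// so no selector is shared and the select loop needs no locking.
public class PerThreadSelectorSketch {
    static class Processor implements Runnable {
        final Selector selector; // owned and polled by this thread alone

        Processor() {
            try {
                selector = Selector.open();
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }

        @Override public void run() {
            // A real processor would loop on selector.select(), service the
            // ready keys, and register newly assigned connections.
        }
    }

    final Processor[] processors;
    private long accepted = 0;

    PerThreadSelectorSketch(int numIoThreads) {
        processors = new Processor[numIoThreads];
        for (int i = 0; i < numIoThreads; i++)
            processors[i] = new Processor();
    }

    // Round-robin: the acceptor hands each new connection to the next
    // processor, spreading load without any shared selector state.
    Processor nextProcessor() {
        return processors[(int) (accepted++ % processors.length)];
    }

    public static void main(String[] args) {
        PerThreadSelectorSketch server = new PerThreadSelectorSketch(3);
        Processor first = server.nextProcessor();
        server.nextProcessor();
        server.nextProcessor();
        // After numIoThreads assignments, round-robin wraps to the first.
        System.out.println(first == server.nextProcessor()); // prints "true"
    }
}
```

Because each selector loop is single-threaded, key sets and channel registrations need no cross-thread synchronization, which is the simplification being advocated.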

On Wed, Jan 21, 2015 at 10:46 PM, Chittaranjan Hota 
wrote:

> Thanks for your comments Jay.
>
> Quote "Technically startup is not called from
> multiple threads but the class's correctness should not depend on the
> current usage so it should work correctly if it were." --> If this were a
> requirement then one can see that many methods are NOT thread safe while
> the startup happens. If we need to stick to the goal of exposing kafka
> initialization to other parent applications, a few things have to change.
> Nevertheless I am currently making some changes on my local copy and once
> I see how things look, I will sync back with you.
>
> For the other couple of things (removed the wakeup calls and also added
> awaits correctly) I have done the changes locally, deployed to our stage
> cluster (3 brokers, 3 zk nodes), and ran some load tests today.
>
> Not sure if I understood what "single threaded selector loop" means, or
> the locking in selector loops; I would love to have a conversation
> with you around this.
>
> Thanks again  ..
>
>
>
>
> On Wed, Jan 21, 2015 at 2:15 PM, Jay Kreps  wrote:
>
> > 1. a. I think startup is a public method on KafkaServer so for people
> > embedding Kafka in some way this helps guarantee correctness.
> > b. I think KafkaScheduler tries to be a bit too clever, there is a patch
> > out there that just moves to global synchronization for the whole class
> > which is easier to reason about. Technically startup is not called from
> > multiple threads but the class's correctness should not depend on the
> > current usage so it should work correctly if it were.
> > c. I think in cases where you actually just want to start and run N
> > threads, using Thread directly is sensible. ExecutorService is useful but
> > does have a ton of gadgets and gizmos that obscure the basic usage in
> that
> > case.
> > d. Yeah we should probably wait until the processor threads start as
> well.
> > I think it probably doesn't cause misbehavior as is, but it would be
> better
> > if the postcondition of startup was that all threads had started.
> >
> > 2. a. There are different ways to do this. My overwhelming experience has
> > been that any attempt to share a selector across threads is very painful.
> > Making the selector loops single threaded just really really simplifies
> > everything, but also the performance tends to be a lot better because
> there
> > is far less locking inside that selector loop.
> > b. Yeah, I share your skepticism of that call. I'm not sure why it is there
> > or if it is needed. I agree that wakeup should only be needed from other
> > threads. It would be good to untangle that mystery. I wonder what happens
> > if it is removed.
> >
> > -Jay
> >
> > On Wed, Jan 21, 2015 at 1:58 PM, Chittaranjan Hota <
> chitts.h...@gmail.com>
> > wrote:
> >
> > > Hello,
> > > Congratulations to the folks behind Kafka. It has been a smooth ride
> > > dealing with multi-TB data, where the same setup in JMS often fell
> > > apart.
> > >
> > > Although I have been using Kafka for more than a few days now, I only
> > > started looking into the code base yesterday and already have questions
> > > about the very beginning. I would appreciate some input on why the
> > > implementation is done the way it is.
> > >
> > > Version : 0.8.1
> > >
> > > THREADING RELATED
> > > 1. Why is the startup code synchronized? Who are the competing
> > > threads?
> > > a. startReporters func is synchronized
> > > b. KafkaScheduler startup is synchronized. There is also a volatile
> > > variable declared, even though the whole synchronized block is itself
> > > guaranteeing "happens before".
> > >c. Use of n
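
The "single-threaded selector loop" Jay mentions can be illustrated with a small sketch. This is not Kafka's actual SocketServer/Processor code — the class and method names below are invented for illustration — but it shows the pattern: one thread owns the java.nio Selector, and other threads interact with it only by enqueueing work and calling wakeup().

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of a single-threaded selector loop. All channel and
// key state is touched only by the loop's own thread, so the hot path
// needs no locks; wakeup() is the one safe cross-thread call.
public class SelectorLoop implements Runnable {
    private final Selector selector;
    private final ConcurrentLinkedQueue<Runnable> pendingTasks =
            new ConcurrentLinkedQueue<>();
    private volatile boolean running = true;

    public SelectorLoop() throws IOException {
        this.selector = Selector.open();
    }

    // Called from other threads: enqueue work, then wake the selector so
    // the loop picks the task up promptly.
    public void submit(Runnable task) {
        pendingTasks.add(task);
        selector.wakeup();
    }

    public void shutdown() {
        running = false;
        selector.wakeup();
    }

    @Override
    public void run() {
        while (running) {
            try {
                selector.select(300); // blocks until I/O, wakeup, or timeout
                Runnable task;
                while ((task = pendingTasks.poll()) != null) {
                    task.run(); // e.g. channel registration, on this thread
                }
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    // handle read/write readiness here, single-threaded
                }
            } catch (IOException e) {
                // transient I/O errors should not kill the loop
            }
        }
    }
}
```

Because only the owning thread ever touches the selector's keys and channels, sharing a selector across threads (and the locking that requires) is avoided entirely, which is the simplification Jay describes.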

[jira] [Commented] (KAFKA-1804) Kafka network thread lacks top exception handler

2015-01-22 Thread Alexey Ozeritskiy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14287713#comment-14287713
 ] 

Alexey Ozeritskiy commented on KAFKA-1804:
--

The last time we saw the bug was during a restart of the network switch on a 
cluster of 20 machines. The kafka-network-threads died on more than half of 
the machines. As a result, the cluster became unavailable. We are trying to 
find specific steps that reproduce the problem.


> Kafka network thread lacks top exception handler
> 
>
> Key: KAFKA-1804
> URL: https://issues.apache.org/jira/browse/KAFKA-1804
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2
>Reporter: Oleg Golovin
>Priority: Critical
>
> We have faced a problem where some Kafka network threads may fail, so that 
> jstack attached to the Kafka process showed fewer threads than we had defined 
> in our Kafka configuration. This leads to API requests processed by these 
> threads getting stuck without a response.
> There were no error messages in the log regarding the thread failures.
> We have examined the Kafka code and found that there is no top-level try-catch 
> block in the network thread code, which could at least log possible errors.
> Could you add a top-level try-catch block for the network thread, which would 
> recover the network thread in case of an exception?
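
The top-level handler the reporter asks for could look roughly like the sketch below. This is a hypothetical wrapper, not actual Kafka code — the class name is invented — but it shows the idea: any Throwable escaping the work loop is logged and the thread keeps serving requests instead of dying silently.

```java
// Hypothetical guard around a network thread's work loop: log any escaped
// exception and continue, so the thread never dies silently.
public final class GuardedLoop implements Runnable {
    private final Runnable body;
    private volatile boolean running = true;

    public GuardedLoop(Runnable body) {
        this.body = body;
    }

    public void stop() {
        running = false;
    }

    @Override
    public void run() {
        while (running) {
            try {
                body.run(); // one unit of network-thread work
            } catch (Throwable t) {
                // top-level handler: record the failure and keep serving
                System.err.println("Uncaught exception in network thread: " + t);
            }
        }
    }
}
```

One design question such a wrapper raises: catching Throwable also swallows Errors like OutOfMemoryError, and whether those should terminate the process instead is a deliberate choice, not shown here.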



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-5 - Broker Configuration Management

2015-01-22 Thread Joe Stein
Deployment proposal:

   1. If the global configuration isn't being used, do what you do today
   (there should be a configuration brokers.admin.global.configuration=true
   [default=false]). This works for the upgrade; once all brokers are
   upgraded, proceed.
   2. If the global config is used, then only a few settings (host, port)
   will be overridable in the properties file for a single broker. There
   will always be configs that make sense for a single broker to override,
   though going through them so far there were few. We should make it clear
   in the code how to update those fields. This latter point is closely
   tied to https://issues.apache.org/jira/browse/KAFKA-1845.
   3. If a property is not set in the storage mechanism, then use the
   default from how we manage the configuration definitions.
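
The precedence the steps above describe — per-broker override first, then the stored global config, then the compiled-in default — could be sketched as follows. This is an illustrative resolver with invented names, not the proposed implementation:

```java
import java.util.Map;

// Hypothetical sketch of the lookup order described in the proposal:
// per-broker override > stored global config > compiled-in default.
public final class ConfigResolver {
    private final Map<String, String> brokerOverrides; // e.g. host, port
    private final Map<String, String> globalConfig;    // from central storage
    private final Map<String, String> defaults;        // from config definitions

    public ConfigResolver(Map<String, String> brokerOverrides,
                          Map<String, String> globalConfig,
                          Map<String, String> defaults) {
        this.brokerOverrides = brokerOverrides;
        this.globalConfig = globalConfig;
        this.defaults = defaults;
    }

    public String get(String key) {
        // Highest precedence: settings a single broker is allowed to override.
        if (brokerOverrides.containsKey(key)) return brokerOverrides.get(key);
        // Next: the centrally stored global configuration.
        if (globalConfig.containsKey(key)) return globalConfig.get(key);
        // Fall back to the default from the configuration definitions.
        return defaults.get(key);
    }
}
```

A resolver like this makes step 3 explicit: a key absent from both override layers silently falls through to its defined default.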


On Thu, Jan 22, 2015 at 11:16 AM, Harsha  wrote:

> Hi Joe,
> How will initial deployments look in this environment? I guess
> users will deploy the Kafka cluster, it will run with defaults,
> and they can use a command-line tool to update the configs?
> I think you need to update the JIRA and mailing list thread links on the
> KIP page.
> Thanks,
> Harsha
>
>
>
> On Wed, Jan 21, 2015, at 10:34 PM, Joe Stein wrote:
> > Created a KIP
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-5+-+Broker+Configuration+Management
> >
> > JIRA https://issues.apache.org/jira/browse/KAFKA-1786
> >
> > /***
> >  Joe Stein
> >  Founder, Principal Consultant
> >  Big Data Open Source Security LLC
> >  http://www.stealth.ly
> >  Twitter: @allthingshadoop 
> > /
>


[jira] [Updated] (KAFKA-1845) KafkaConfig should use ConfigDef

2015-01-22 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1845:
-
Fix Version/s: 0.8.3

> KafkaConfig should use ConfigDef 
> -
>
> Key: KAFKA-1845
> URL: https://issues.apache.org/jira/browse/KAFKA-1845
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Andrii Biletskyi
>  Labels: newbie
> Fix For: 0.8.3
>
> Attachments: KAFKA-1845.patch
>
>
> ConfigDef is already used for the new producer and for TopicConfig. 
> Will be nice to standardize and use one configuration and validation library 
> across the board.





Re: [DISCUSS] KIP-5 - Broker Configuration Management

2015-01-22 Thread Harsha
Hi Joe,
How will initial deployments look in this environment? I guess
users will deploy the Kafka cluster, it will run with defaults,
and they can use a command-line tool to update the configs?
I think you need to update the JIRA and mailing list thread links on the
KIP page.
Thanks,
Harsha
 


On Wed, Jan 21, 2015, at 10:34 PM, Joe Stein wrote:
> Created a KIP
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-5+-+Broker+Configuration+Management
> 
> JIRA https://issues.apache.org/jira/browse/KAFKA-1786
> 
> /***
>  Joe Stein
>  Founder, Principal Consultant
>  Big Data Open Source Security LLC
>  http://www.stealth.ly
>  Twitter: @allthingshadoop 
> /


[jira] [Commented] (KAFKA-1869) Openning some random ports while running kafka service

2015-01-22 Thread QianHu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14287341#comment-14287341
 ] 

QianHu commented on KAFKA-1869:
---

Can we set it in the JDK configuration file? I have tried, but it doesn't work.

> Openning some random ports while running kafka service 
> ---
>
> Key: KAFKA-1869
> URL: https://issues.apache.org/jira/browse/KAFKA-1869
> Project: Kafka
>  Issue Type: Bug
> Environment: kafka_2.9.2-0.8.1.1
>Reporter: QianHu
>Assignee: Manikumar Reddy
> Fix For: 0.8.2
>
>
> While running the Kafka service, four random ports have been opened. Of 
> these,  and 9092 are set by myself, but 28538 and 16650 are opened 
> randomly. Can you help me understand why these random ports are opened, and 
> how we can give them constant values? Thank you very much.
> [work@02 kafka]$ jps
> 8400 Jps
> 727 Kafka
> [work@02 kafka]$ netstat -tpln|grep 727
> (Not all processes could be identified, non-owned process info
>  will not be shown, you would have to be root to see it all.)
> tcp0  0 0.0.0.0:0.0.0.0:*   
> LISTEN  727/./bin/../jdk1.7 
> tcp0  0 0.0.0.0:28538   0.0.0.0:*   
> LISTEN  727/./bin/../jdk1.7 
> tcp0  0 0.0.0.0:90920.0.0.0:*   
> LISTEN  727/./bin/../jdk1.7 
> tcp0  0 0.0.0.0:16650   0.0.0.0:*   
> LISTEN  727/./bin/../jdk1.7 





[jira] [Commented] (KAFKA-1869) Openning some random ports while running kafka service

2015-01-22 Thread QianHu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14287339#comment-14287339
 ] 

QianHu commented on KAFKA-1869:
---

Hello, this is my config:

export KAFKA_HEAP_OPTS="-Xms4g -Xmx4g -XX:+DisableAttachMechanism 
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.port= 
-Dcom.sun.management.jmxremote.rmi.port=8890 -XX:PermSize=48m 
-XX:MaxPermSize=48m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 
-XX:InitiatingHeapOccupancyPercent=35"

and the port of Kafka is 9092, so:

[work@tc kafka]$ jps
5912 Jps
4466 Kafka
[work@tc kafka]$ netstat -pltn |grep 4466
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp0  0 0.0.0.0:0.0.0.0:*   
LISTEN  4466/./bin/../jdk1. 
tcp0  0 0.0.0.0:88900.0.0.0:*   
LISTEN  4466/./bin/../jdk1. 
tcp0  0 0.0.0.0:41213   0.0.0.0:*   
LISTEN  4466/./bin/../jdk1. 
tcp0  0 0.0.0.0:90920.0.0.0:*   
LISTEN  4466/./bin/../jdk1.

As you said, the random port (41213) is implementation-specific for JRMP. 
Could you help me set this port to a constant value? I have tried many 
methods, but they don't work. Thank you for your attention.

> Openning some random ports while running kafka service 
> ---
>
> Key: KAFKA-1869
> URL: https://issues.apache.org/jira/browse/KAFKA-1869
> Project: Kafka
>  Issue Type: Bug
> Environment: kafka_2.9.2-0.8.1.1
>Reporter: QianHu
>Assignee: Manikumar Reddy
> Fix For: 0.8.2
>
>
> While running the Kafka service, four random ports have been opened. Of 
> these,  and 9092 are set by myself, but 28538 and 16650 are opened 
> randomly. Can you help me understand why these random ports are opened, and 
> how we can give them constant values? Thank you very much.
> [work@02 kafka]$ jps
> 8400 Jps
> 727 Kafka
> [work@02 kafka]$ netstat -tpln|grep 727
> (Not all processes could be identified, non-owned process info
>  will not be shown, you would have to be root to see it all.)
> tcp0  0 0.0.0.0:0.0.0.0:*   
> LISTEN  727/./bin/../jdk1.7 
> tcp0  0 0.0.0.0:28538   0.0.0.0:*   
> LISTEN  727/./bin/../jdk1.7 
> tcp0  0 0.0.0.0:90920.0.0.0:*   
> LISTEN  727/./bin/../jdk1.7 
> tcp0  0 0.0.0.0:16650   0.0.0.0:*   
> LISTEN  727/./bin/../jdk1.7 





[jira] [Commented] (KAFKA-1476) Get a list of consumer groups

2015-01-22 Thread Onur Karaman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14287215#comment-14287215
 ] 

Onur Karaman commented on KAFKA-1476:
-

Updated reviewboard https://reviews.apache.org/r/29831/diff/
 against branch origin/trunk

> Get a list of consumer groups
> -
>
> Key: KAFKA-1476
> URL: https://issues.apache.org/jira/browse/KAFKA-1476
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 0.8.1.1
>Reporter: Ryan Williams
>Assignee: Balaji Seshadri
>  Labels: newbie
> Fix For: 0.9.0
>
> Attachments: ConsumerCommand.scala, KAFKA-1476-LIST-GROUPS.patch, 
> KAFKA-1476-RENAME.patch, KAFKA-1476-REVIEW-COMMENTS.patch, KAFKA-1476.patch, 
> KAFKA-1476.patch, KAFKA-1476.patch, KAFKA-1476.patch, 
> KAFKA-1476_2014-11-10_11:58:26.patch, KAFKA-1476_2014-11-10_12:04:01.patch, 
> KAFKA-1476_2014-11-10_12:06:35.patch, KAFKA-1476_2014-12-05_12:00:12.patch, 
> KAFKA-1476_2015-01-12_16:22:26.patch, KAFKA-1476_2015-01-12_16:31:20.patch, 
> KAFKA-1476_2015-01-13_10:36:18.patch, KAFKA-1476_2015-01-15_14:30:04.patch, 
> KAFKA-1476_2015-01-22_02:32:52.patch, 
> sample-kafka-consumer-groups-sh-output.txt
>
>
> It would be useful to have a way to get a list of consumer groups currently 
> active via some tool/script that ships with kafka. This would be helpful so 
> that the system tools can be explored more easily.
> For example, when running the ConsumerOffsetChecker, it requires a group 
> option
> bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --topic test --group 
> ?
> But, when just getting started with kafka, using the console producer and 
> consumer, it is not clear what value to use for the group option.  If a list 
> of consumer groups could be listed, then it would be clear what value to use.
> Background:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201405.mbox/%3cCAOq_b1w=slze5jrnakxvak0gu9ctdkpazak1g4dygvqzbsg...@mail.gmail.com%3e





[jira] [Updated] (KAFKA-1476) Get a list of consumer groups

2015-01-22 Thread Onur Karaman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Onur Karaman updated KAFKA-1476:

Attachment: KAFKA-1476_2015-01-22_02:32:52.patch

> Get a list of consumer groups
> -
>
> Key: KAFKA-1476
> URL: https://issues.apache.org/jira/browse/KAFKA-1476
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 0.8.1.1
>Reporter: Ryan Williams
>Assignee: Balaji Seshadri
>  Labels: newbie
> Fix For: 0.9.0
>
> Attachments: ConsumerCommand.scala, KAFKA-1476-LIST-GROUPS.patch, 
> KAFKA-1476-RENAME.patch, KAFKA-1476-REVIEW-COMMENTS.patch, KAFKA-1476.patch, 
> KAFKA-1476.patch, KAFKA-1476.patch, KAFKA-1476.patch, 
> KAFKA-1476_2014-11-10_11:58:26.patch, KAFKA-1476_2014-11-10_12:04:01.patch, 
> KAFKA-1476_2014-11-10_12:06:35.patch, KAFKA-1476_2014-12-05_12:00:12.patch, 
> KAFKA-1476_2015-01-12_16:22:26.patch, KAFKA-1476_2015-01-12_16:31:20.patch, 
> KAFKA-1476_2015-01-13_10:36:18.patch, KAFKA-1476_2015-01-15_14:30:04.patch, 
> KAFKA-1476_2015-01-22_02:32:52.patch, 
> sample-kafka-consumer-groups-sh-output.txt
>
>
> It would be useful to have a way to get a list of consumer groups currently 
> active via some tool/script that ships with kafka. This would be helpful so 
> that the system tools can be explored more easily.
> For example, when running the ConsumerOffsetChecker, it requires a group 
> option
> bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --topic test --group 
> ?
> But, when just getting started with kafka, using the console producer and 
> consumer, it is not clear what value to use for the group option.  If a list 
> of consumer groups could be listed, then it would be clear what value to use.
> Background:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201405.mbox/%3cCAOq_b1w=slze5jrnakxvak0gu9ctdkpazak1g4dygvqzbsg...@mail.gmail.com%3e





Re: Review Request 29831: Patch for KAFKA-1476

2015-01-22 Thread Onur Karaman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29831/
---

(Updated Jan. 22, 2015, 10:32 a.m.)


Review request for kafka.


Bugs: KAFKA-1476
https://issues.apache.org/jira/browse/KAFKA-1476


Repository: kafka


Description
---

Merged in work for KAFKA-1476 and sub-task KAFKA-1826


Diffs (updated)
-

  bin/kafka-consumer-groups.sh PRE-CREATION 
  core/src/main/scala/kafka/admin/AdminUtils.scala 
28b12c7b89a56c113b665fbde1b95f873f8624a3 
  core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala PRE-CREATION 
  core/src/main/scala/kafka/utils/ZkUtils.scala 
c14bd455b6642f5e6eb254670bef9f57ae41d6cb 
  core/src/test/scala/unit/kafka/admin/DeleteConsumerGroupTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/utils/TestUtils.scala 
ac15d34425795d5be20c51b01fa1108bdcd66583 

Diff: https://reviews.apache.org/r/29831/diff/


Testing
---


Thanks,

Onur Karaman