[jira] [Updated] (KAFKA-1481) Stop using dashes AND underscores as separators in MBean names

2014-10-31 Thread Vladimir Tretyakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Tretyakov updated KAFKA-1481:
--
Attachment: KAFKA-1481_2014-10-31_14-35-43.patch

> Stop using dashes AND underscores as separators in MBean names
> --
>
> Key: KAFKA-1481
> URL: https://issues.apache.org/jira/browse/KAFKA-1481
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: Otis Gospodnetic
>Priority: Critical
>  Labels: patch
> Fix For: 0.8.3
>
> Attachments: KAFKA-1481_2014-06-06_13-06-35.patch, 
> KAFKA-1481_2014-10-13_18-23-35.patch, KAFKA-1481_2014-10-14_21-53-35.patch, 
> KAFKA-1481_2014-10-15_10-23-35.patch, KAFKA-1481_2014-10-20_23-14-35.patch, 
> KAFKA-1481_2014-10-21_09-14-35.patch, KAFKA-1481_2014-10-30_21-35-43.patch, 
> KAFKA-1481_2014-10-31_14-35-43.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-14_21-53-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-15_10-23-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-20_20-14-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-20_23-14-35.patch
>
>
> MBeans should not use dashes or underscores as separators because these 
> characters are allowed in hostnames, topics, group and consumer IDs, etc., 
> and these are embedded in MBean names, making it impossible to parse out 
> the individual parts from an MBean name.
> Perhaps a pipe character should be used to avoid the conflict. 
> This looks like a major blocker because it means nobody can write Kafka 0.8.x 
> monitoring tools unless they are doing it for themselves AND do not use 
> dashes AND do not use underscores.
> See: http://search-hadoop.com/m/4TaT4lonIW
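
To make the parsing problem concrete, here is a minimal sketch (the clientId, host, and metric names are invented for illustration, and the name layout only mimics how 0.8.x embeds fields): when fields that may themselves contain dashes are joined with dashes, two different field combinations can produce the identical MBean name, so no tool can reliably split it back apart.

```java
public class MBeanNameAmbiguity {
    // Hypothetical metric name layout: <clientId>-<host>-<metricName>,
    // mimicking how Kafka 0.8.x embeds fields into MBean names.
    static String mbeanName(String clientId, String host, String metric) {
        return clientId + "-" + host + "-" + metric;
    }

    public static void main(String[] args) {
        // Two different (clientId, host) pairs...
        String a = mbeanName("my-client", "host-1", "FetchRate");
        String b = mbeanName("my-client-host", "1", "FetchRate");
        // ...yield the exact same MBean name, so the split is ambiguous.
        System.out.println(a);           // my-client-host-1-FetchRate
        System.out.println(a.equals(b)); // prints "true"
    }
}
```

This is why the issue asks for a separator that is disallowed in the embedded fields (or, as proposed later in this thread, explicit key/value properties in the MBean name).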



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1481) Stop using dashes AND underscores as separators in MBean names

2014-10-31 Thread Vladimir Tretyakov (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14191698#comment-14191698
 ] 

Vladimir Tretyakov commented on KAFKA-1481:
---

Hi Jun, I added a new patch, updated according to your last comments.






[jira] [Created] (KAFKA-1741) consumer get always old messages

2014-10-31 Thread hamza ezzi (JIRA)
hamza ezzi created KAFKA-1741:
-

 Summary: consumer get always old messages
 Key: KAFKA-1741
 URL: https://issues.apache.org/jira/browse/KAFKA-1741
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.8.1.1, 0.8.2
Reporter: hamza ezzi
Assignee: Neha Narkhede


Every time the consumer fetches a message I get the error below, and when I 
restart the consumer it receives old messages again, even though my consumer 
config specifies not to fetch old messages.


My Node.js consumer code:

var kafka = require('kafka-node');
var HighLevelConsumer = kafka.HighLevelConsumer;
var Offset = kafka.Offset;
var Client = kafka.Client;
var argv = require('optimist').argv;

var topic = argv.topic || 'sLNzXYHLJA';
var client = new Client('XXX.XXX.XXX:2181', 'consumer' + process.pid);
var payloads = [{topic: topic}];
var options = {
    groupId: 'kafka-node-group',
    // Auto commit config
    autoCommit: true,
    autoCommitMsgCount: 100,
    autoCommitIntervalMs: 5000,
    // Fetch message config
    fetchMaxWaitMs: 100,
    fetchMinBytes: 1,
    fetchMaxBytes: 1024 * 10,
    fromOffset: false,
    fromBeginning: false
};
var consumer = new HighLevelConsumer(client, payloads, options);
var offset = new Offset(client);

consumer.on('message', function (message) {
    console.log(this.id, message);
});
consumer.on('error', function (err) {
    console.log('error', err);
});
consumer.on('offsetOutOfRange', function (topic) {
    console.log("- offsetOutOfRange ");
    topic.maxNum = 2;
    offset.fetch([topic], function (err, offsets) {
        var min = Math.min.apply(null, offsets[topic.topic][topic.partition]);
        consumer.setOffset(topic.topic, topic.partition, min);
    });
});



Kafka broker error log:

[2014-10-31 17:13:32,173] ERROR Closing socket for /212.XXX.XXX.XXX because of error (kafka.network.Processor)
java.nio.BufferUnderflowException
	at java.nio.Buffer.nextGetIndex(Buffer.java:498)
	at java.nio.HeapByteBuffer.getLong(HeapByteBuffer.java:406)
	at kafka.api.OffsetCommitRequest$$anonfun$1$$anonfun$apply$1.apply(OffsetCommitRequest.scala:62)
	at kafka.api.OffsetCommitRequest$$anonfun$1$$anonfun$apply$1.apply(OffsetCommitRequest.scala:58)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
	at scala.collection.immutable.Range.foreach(Range.scala:141)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
	at scala.collection.AbstractTraversable.map(Traversable.scala:105)
	at kafka.api.OffsetCommitRequest$$anonfun$1.apply(OffsetCommitRequest.scala:58)
	at kafka.api.OffsetCommitRequest$$anonfun$1.apply(OffsetCommitRequest.scala:55)
	at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
	at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
	at scala.collection.immutable.Range.foreach(Range.scala:141)
	at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
	at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
	at kafka.api.OffsetCommitRequest$.readFrom(OffsetCommitRequest.scala:55)
	at kafka.api.RequestKeys$$anonfun$9.apply(RequestKeys.scala:47)
	at kafka.api.RequestKeys$$anonfun$9.apply(RequestKeys.scala:47)
	at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:50)
	at kafka.network.Processor.read(SocketServer.scala:450)
	at kafka.network.Processor.run(SocketServer.scala:340)
	at java.lang.Thread.run(Thread.java:745)





[jira] [Commented] (KAFKA-1733) Producer.send will block indeterminately when broker is unavailable.

2014-10-31 Thread R. Tyler Croy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14192003#comment-14192003
 ] 

R. Tyler Croy commented on KAFKA-1733:
--

[~junrao] we are seeing this behavior against this client library version, fwiw:

{code}
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.8.1.1</version>
</dependency>
{code}

> Producer.send will block indeterminately when broker is unavailable.
> 
>
> Key: KAFKA-1733
> URL: https://issues.apache.org/jira/browse/KAFKA-1733
> Project: Kafka
>  Issue Type: Bug
>  Components: core, producer 
>Reporter: Marc Chung
> Attachments: kafka-1733-add-connectTimeoutMs.patch
>
>
> This is a follow up to the conversation here:
> https://mail-archives.apache.org/mod_mbox/kafka-dev/201409.mbox/%3ccaog_4qymoejhkbo0n31+a-ujx0z5unsisd5wbrmn-xtx7gi...@mail.gmail.com%3E
> During ClientUtils.fetchTopicMetadata, if the broker is unavailable, 
> socket.connect will block indeterminately. Any retry policy 
> (message.send.max.retries) further increases the time spent waiting for the 
> socket to connect.
> The root fix is to add a connection timeout value to the BlockingChannel's 
> socket configuration, like so:
> {noformat}
> -channel.socket.connect(new InetSocketAddress(host, port))
> +channel.socket.connect(new InetSocketAddress(host, port), connectTimeoutMs)
> {noformat}
> The simplest thing to do here would be to have a constant, default value that 
> would be applied to every socket configuration. 
> Is that acceptable? 
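
The proposed fix can be sketched as follows (the class, method, and parameter names here are illustrative, not the actual BlockingChannel code): passing a timeout to connect bounds the wait instead of blocking indefinitely.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ConnectWithTimeout {
    // Sketch of the proposed change: bound the blocking connect with a timeout.
    // A real client would keep the socket open on success; this sketch closes it
    // because it only tests reachability.
    static boolean tryConnect(String host, int port, int connectTimeoutMs) {
        try (Socket socket = new Socket()) {
            // Without the timeout argument, connect() can block indefinitely
            // when the broker is unreachable (e.g. packets silently dropped).
            socket.connect(new InetSocketAddress(host, port), connectTimeoutMs);
            return true;
        } catch (SocketTimeoutException e) {
            return false; // broker did not accept within the timeout
        } catch (IOException e) {
            return false; // refused, unroutable, etc.
        }
    }
}
```

With a constant default timeout, every retry would fail fast rather than compounding the wait through message.send.max.retries.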





[jira] [Commented] (KAFKA-1481) Stop using dashes AND underscores as separators in MBean names

2014-10-31 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14192034#comment-14192034
 ] 

Jun Rao commented on KAFKA-1481:


Thanks for the patch. A few more comments.

50. ClientIdBroker: Instead of having 2 subclasses, would it be better to have 
just one class:
  case class ClientIdAndBroker(clientId: String, brokerHost: Option[String], brokerPort: Option[Int])?
Ditto for ClientIdTopic.

51. TopicPartitionRequestKey: Can this just be TopicAndPartition?

52. MetricsTest:
52.1 Could we remove the extra empty lines after the class?
52.2 remove unnecessary {} in the following (a few other files have a similar 
issue)
import java.util.{Properties}
52.3 In testMetricsLeak(), you don't need to create a new zkClient. You can get 
one from the base class in ZooKeeperTestHarness.
52.4 Instead of duplicating the createAndShutdownStep() calls, could we use a 
loop instead?
52.5 Instead of duplicating sendMessages() and getMessages() from 
ZookeeperConsumerConnectorTest, could we extract those methods to TestUtils and 
add comments to describe what they do? Then, we can reuse those methods.

53. Could you also include a patch to the 0.8.2 doc 
(https://svn.apache.org/repos/asf/kafka/site/082/ops.html) with the metric name 
changes?











Review Request 27430: Fix KAFKA-1720

2014-10-31 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27430/
---

Review request for kafka.


Bugs: KAFKA-1720
https://issues.apache.org/jira/browse/KAFKA-1720


Repository: kafka


Description
---

Rename delayed requests to delayed operations, change some class names


Diffs
-

  core/src/main/scala/kafka/cluster/Partition.scala 
1be57008e983fc3a831626ecf9a861f164fcca92 
  core/src/main/scala/kafka/server/DelayedFetch.scala 
1ccbb4b6fdbbd4412ba77ffe7d4cf5adf939e439 
  core/src/main/scala/kafka/server/DelayedProduce.scala 
8049e07e5d6d65913ec2492f1e22e5ed3ecbbea8 
  core/src/main/scala/kafka/server/DelayedRequestKey.scala 
628ef59564b9b9238d7b05d26aef79d3cfec174d 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
02fa3821271e97b24fd2ae25163933222797585d 
  core/src/main/scala/kafka/server/RequestPurgatory.scala 
323b12e765f981e9bba736a204e4a8159e5c5ada 
  core/src/test/scala/unit/kafka/server/RequestPurgatoryTest.scala 
a7720d579ea15b71511c9da0e241bd087de3674e 
  system_test/metrics.json cd3fc142176b8a3638db5230fb74f055c04c59d4 

Diff: https://reviews.apache.org/r/27430/diff/


Testing
---


Thanks,

Guozhang Wang



[jira] [Updated] (KAFKA-1720) [Renaming / Comments] Delayed Operations

2014-10-31 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1720:
-
Status: Patch Available  (was: Open)

> [Renaming / Comments] Delayed Operations
> 
>
> Key: KAFKA-1720
> URL: https://issues.apache.org/jira/browse/KAFKA-1720
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.0
>
> Attachments: KAFKA-1720.patch
>
>
> After KAFKA-1583 is checked in, we had better rename the delayed requests to 
> delayed operations.





[jira] [Commented] (KAFKA-1720) [Renaming / Comments] Delayed Operations

2014-10-31 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14192059#comment-14192059
 ] 

Guozhang Wang commented on KAFKA-1720:
--

Created reviewboard https://reviews.apache.org/r/27430/diff/
 against branch origin/trunk






[jira] [Updated] (KAFKA-1720) [Renaming / Comments] Delayed Operations

2014-10-31 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1720:
-
Attachment: KAFKA-1720.patch






Re: MBeans, dashes, underscores, and KAFKA-1481

2014-10-31 Thread Jun Rao
To circle back on this thread. The patch in kafka-1482 is almost ready. To
make the mbean names more meaningful and easier to parse, the patch will
use explicit key/value pairs in the mbean name for things like clientId and
topic, and will get rid of the quotes.

So, instead of
   "kafka.server":type="BrokerTopicMetrics",name="topic-1-BytesInPerSec"
we will have
   kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=topic-1

Any objection to committing this to the 0.8.2 branch?

Thanks,

Jun
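
With the key/value style shown above, a monitoring tool can recover each field through the standard JMX API instead of string parsing. A minimal sketch (the class name is invented for illustration):

```java
import javax.management.ObjectName;

public class ParseMBeanName {
    public static void main(String[] args) throws Exception {
        // The proposed key/value style name from the message above.
        ObjectName name = new ObjectName(
            "kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=topic-1");
        // Each field is now an explicit property; no dash-splitting is needed,
        // even though the topic itself contains a dash.
        System.out.println(name.getDomain());              // kafka.server
        System.out.println(name.getKeyProperty("name"));   // BytesInPerSec
        System.out.println(name.getKeyProperty("topic"));  // topic-1
    }
}
```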

On Fri, Oct 17, 2014 at 11:54 AM, Jun Rao  wrote:

> Hi, everyone,
>
> We are fixing the mbean names in kafka-1482, by adding separate explicit
> tags in the name for things like clientId and topic. Another thing that
> some people have complained before is that we use quotes in the jmx name.
> Should we also just get rid of the quotes as part of kafka-1482? So,
> instead of
>"kafka.server":type="BrokerTopicMetrics",name="topic-1-BytesInPerSec"
> we will have
>kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=topic-1
>
> Thanks,
>
> Jun
>
>
> On Thu, Oct 9, 2014 at 11:12 AM, Neha Narkhede 
> wrote:
>
>> I am going to vote for 1482 to be included in 0.8.2, if we have a patch
>> submitted in a week. I think we've had this JIRA opened for too long and
>> we
>> held people back so it's only fair to release this.
>>
>> On Wed, Oct 8, 2014 at 9:40 PM, Jun Rao  wrote:
>>
>> > Otis,
>> >
>> > Just have the patch ready asap. We can make a call then.
>> >
>> > Thanks,
>> >
>> > Jun
>> >
>> > On Wed, Oct 8, 2014 at 6:13 AM, Otis Gospodnetic <
>> > otis.gospodne...@gmail.com
>> > > wrote:
>> >
>> > > Hi Jun,
>> > >
>> > > Would by the end of next week be acceptable for 0.8.2?
>> > >
>> > > Thanks,
>> > > Otis
>> > > --
>> > > Monitoring * Alerting * Anomaly Detection * Centralized Log Management
>> > > Solr & Elasticsearch Support * http://sematext.com/
>> > >
>> > >
>> > > On Tue, Oct 7, 2014 at 4:04 PM, Jun Rao  wrote:
>> > >
>> > > > Otis,
>> > > >
>> > > > Yes, if you guys can help provide a patch in a few days, we can
>> > probably
>> > > > get it to the 0.8.2 release.
>> > > >
>> > > > Thanks,
>> > > >
>> > > > Jun
>> > > >
>> > > > On Tue, Oct 7, 2014 at 12:10 PM, Otis Gospodnetic <
>> > > > otis.gospodne...@gmail.com> wrote:
>> > > >
>> > > > > Hi Jun,
>> > > > >
>> > > > > I think your MBean renaming approach will work.  I see
>> > > > > https://issues.apache.org/jira/browse/KAFKA-1481 has Fix Version
>> > > 0.8.2,
>> > > > > but
>> > > > > is not marked as a Blocker.  We'd love to get the MBeans fixed so
>> > this
>> > > > > makes it in 0.8.2 release.  Do you know if this is on anyone's
>> plate
>> > > (the
>> > > > > issue is currently Unassigned)?  If not, should we provide a new
>> > patch
>> > > > that
>> > > > > uses your approach?
>> > > > >
>> > > > > Thanks,
>> > > > > Otis
>> > > > > --
>> > > > > Monitoring * Alerting * Anomaly Detection * Centralized Log
>> > Management
>> > > > > Solr & Elasticsearch Support * http://sematext.com/
>> > > > >
>> > > > >
>> > > > > On Thu, Sep 18, 2014 at 4:49 PM, Jun Rao 
>> wrote:
>> > > > >
>> > > > > > Otis,
>> > > > > >
>> > > > > > In kafka-1481, we will have to change the mbean names (at least
>> the
>> > > > ones
>> > > > > > with clientid and topic) anyway. Using the name/value pair in
>> the
>> > > mbean
>> > > > > > name allows us to do this in a cleaner way. Yes, "," is not
>> allowed
>> > > in
>> > > > > > clientid or topic.
>> > > > > >
>> > > > > > Bhavesh,
>> > > > > >
>> > > > > > Yes, I was thinking of making changes in the new metrics
>> package.
>> > > > > Something
>> > > > > > like allowing the sensor names to have name/value pairs. The jmx
>> > > names
>> > > > > will
>> > > > > > just follow accordingly. This is probably cleaner than doing the
>> > > > > escaping.
>> > > > > > Also, the metric names are more intuitive (otherwise, you have
>> to
>> > > know
>> > > > > > which part is the clientid and which part is the topic).
>> > > > > >
>> > > > > > Thanks,
>> > > > > >
>> > > > > > Jun
>> > > > > >
>> > > > > > On Wed, Sep 17, 2014 at 2:32 PM, Otis Gospodnetic <
>> > > > > > otis.gospodne...@gmail.com> wrote:
>> > > > > >
>> > > > > > > Hi Jun,
>> > > > > > >
>> > > > > > > On Wed, Sep 17, 2014 at 12:35 PM, Jun Rao 
>> > > wrote:
>> > > > > > >
>> > > > > > > > Bhavesh,
>> > > > > > > >
>> > > > > > > > Yes, allowing dot in clientId and topic makes it a bit
>> harder
>> > to
>> > > > > define
>> > > > > > > the
>> > > > > > > > JMX bean names. I see a couple of solutions here.
>> > > > > > > >
>> > > > > > > > 1. Disable dot in clientId and topic names. The issue is
>> that
>> > dot
>> > > > may
>> > > > > > > > already be used in existing deployment.
>> > > > > > > >
>> > > > > > > > 2. We can represent the JMX bean name differently in the new
>> > > > > producer.
>> > > > > > > > Instead of
>> > > > > > > >   kafka.producer.myclientid:type=mytopic
>> > > > > > > > we could change it to
>> > > > > > > >   kafka.producer:

Re: MBeans, dashes, underscores, and KAFKA-1481

2014-10-31 Thread Joel Koshy
That sounds good, although is that the only change? (Sorry, I have not done a 
careful review of that patch and would like to before it gets checked in.)

Re: Review Request 24676: Fix KAFKA-1583

2014-10-31 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24676/#review59370
---


Just one minor comment. Perhaps we can address it in a future patch.


core/src/main/scala/kafka/cluster/Partition.scala


Could we get rid of the = since this method is not supposed to return any value?


- Jun Rao


On Oct. 28, 2014, 10:09 p.m., Guozhang Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/24676/
> ---
> 
> (Updated Oct. 28, 2014, 10:09 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1583
> https://issues.apache.org/jira/browse/KAFKA-1583
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Incoporated Joel's comments round two
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/api/FetchRequest.scala 
> 59c09155dd25fad7bed07d3d00039e3dc66db95c 
>   core/src/main/scala/kafka/api/FetchResponse.scala 
> 8d085a1f18f803b3cebae4739ad8f58f95a6c600 
>   core/src/main/scala/kafka/api/OffsetCommitRequest.scala 
> 861a6cf11dc6b6431fcbbe9de00c74a122f204bd 
>   core/src/main/scala/kafka/api/ProducerRequest.scala 
> b2366e7eedcac17f657271d5293ff0bef6f3cbe6 
>   core/src/main/scala/kafka/api/ProducerResponse.scala 
> a286272c834b6f40164999ff8b7f8998875f2cfe 
>   core/src/main/scala/kafka/cluster/Partition.scala 
> e88ecf224a4dab8bbd26ba7b0c3ccfe844c6b7f4 
>   core/src/main/scala/kafka/common/ErrorMapping.scala 
> 880ab4a004f078e5d84446ea6e4454ecc06c95f2 
>   core/src/main/scala/kafka/log/Log.scala 
> 157d67369baabd2206a2356b2aa421e848adab17 
>   core/src/main/scala/kafka/network/BoundedByteBufferSend.scala 
> a624359fb2059340bb8dc1619c5b5f226e26eb9b 
>   core/src/main/scala/kafka/server/DelayedFetch.scala 
> e0f14e25af03e6d4344386dcabc1457ee784d345 
>   core/src/main/scala/kafka/server/DelayedProduce.scala 
> 9481508fc2d6140b36829840c337e557f3d090da 
>   core/src/main/scala/kafka/server/FetchRequestPurgatory.scala 
> ed1318891253556cdf4d908033b704495acd5724 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 85498b4a1368d3506f19c4cfc64934e4d0ac4c90 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 43eb2a35bb54d32c66cdb94772df657b3a104d1a 
>   core/src/main/scala/kafka/server/ProducerRequestPurgatory.scala 
> d4a7d4a79b44263a1f7e1a92874dd36aa06e5a3f 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> 78b7514cc109547c562e635824684fad581af653 
>   core/src/main/scala/kafka/server/RequestPurgatory.scala 
> 9d76234bc2c810ec08621dc92bb4061b8e7cd993 
>   core/src/main/scala/kafka/utils/DelayedItem.scala 
> d7276494072f14f1cdf7d23f755ac32678c5675c 
>   core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala 
> 209a409cb47eb24f83cee79f4e064dbc5f5e9d62 
>   core/src/test/scala/unit/kafka/producer/SyncProducerTest.scala 
> fb61d552f2320fedec547400fbbe402a0b2f5d87 
>   core/src/test/scala/unit/kafka/server/HighwatermarkPersistenceTest.scala 
> 03a424d45215e1e7780567d9559dae4d0ae6fc29 
>   core/src/test/scala/unit/kafka/server/ISRExpirationTest.scala 
> cd302aa51eb8377d88b752d48274e403926439f2 
>   core/src/test/scala/unit/kafka/server/ReplicaManagerTest.scala 
> a9c4ddc78df0b3695a77a12cf8cf25521a203122 
>   core/src/test/scala/unit/kafka/server/RequestPurgatoryTest.scala 
> a577f4a8bf420a5bc1e62fad6d507a240a42bcaa 
>   core/src/test/scala/unit/kafka/server/ServerShutdownTest.scala 
> 3804a114e97c849cae48308997037786614173fc 
>   core/src/test/scala/unit/kafka/server/SimpleFetchTest.scala 
> 09ed8f5a7a414ae139803bf82d336c2d80bf4ac5 
> 
> Diff: https://reviews.apache.org/r/24676/diff/
> 
> 
> Testing
> ---
> 
> Unit tests
> 
> 
> Thanks,
> 
> Guozhang Wang
> 
>



Re: MBeans, dashes, underscores, and KAFKA-1481

2014-10-31 Thread Jun Rao
Yes, all changes are related to metric names. Feel free to review the patch.

Thanks,

Jun

> The jmx
> > >> > > names
> > >> > > > > will
> > >> > > > > > just follow accordingly. This is probably cleaner than
> doing the
> > >> > > > > escaping.
> > >> > > > > > Also, the metric names are more intuitive (otherwise, you
> have
> > >> to
> > >> > > know
> > >> > > > > > which part is the clientid and which part is the topic).
> > >> > > > > >
> > >> > > > > > Thanks,
> > >> > > > > >
>

[jira] [Commented] (KAFKA-1555) provide strong consistency with reasonable availability

2014-10-31 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14192223#comment-14192223
 ] 

Joel Koshy commented on KAFKA-1555:
---

Given the current implementation, I think the documentation looks good. I do
have one comment/question on the implementation though (at the end).  I have
a couple of minor typo corrections and suggestions here:

* all (-1) _replicas_
* acknowledgement by all _replicas_
* (_which_ is achieved..)
* _provides_ the _strongest_ durability guarantee
* A message that _has been acknowledged_ by all in-sync replicas will not be 
lost as long as at least one of those in-sync replicas _remains available_.
* The sentence that follows ("Note, however...") contains details that seem 
redundant to what has already been said in parentheses. So we can remove one or 
the other.
* Instead of "To avoid this _unfortunately_ condition" -> _"Although this 
ensures maximum availability of the partition, this behavior may be undesirable 
to some users who prefer durability over availability.  Therefore, we provide 
two topic-level configurations ..."_
* if all replicas _become unavailable_, _then_ the partition will remain 
unavailable until the _most recent_ leader becomes available again. _This 
effectively prefers unavailability over the risk of message loss. See _the_ 
previous section on unclean leader election for more details.
* _Specify_ a minimum ISR size: ... above a certain minimum, _in order_ to 
prevent _the_ loss of messages... just a single replica, which _subsequently_ 
becomes unavailable... guarantees that the message will be _acknowledged by at 
least this many in-sync replicas_.
* "The trade-off here" - appears to be in a separate paragraph altogether, but 
it seems it should belong under the second point on min.isr
* Also, perhaps we can rephrase it a bit: _"This setting offers a trade-off 
between consistency and availability. A higher setting for minimum ISR size 
guarantees better consistency since the message is guaranteed to be written to 
more replicas which reduces the probability that it will be lost. However, it 
reduces availability since the partition will be unavailable for writes if the 
number of in-sync replicas drops below the minimum threshold."_



My only remaining concern about the current implementation is that min.isr
is a broker-topic config and not explicitly a producer config. However, it
currently takes effect only if {{acks == -1}}. That seems slightly odd to me.
i.e., we could just as well have it take effect even if {{acks == 0/1}} -
i.e., reject the append if the current {{|ISR| < min.isr}} (with the caveat of
NotEnoughReplicasAfterAppend) regardless of the ack setting. Do you think this
is unintuitive for users?
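Joel's point can be illustrated with a toy model of the append-time check (hypothetical names, not the actual ReplicaManager code):

```scala
// Toy model of the min.isr append check (hypothetical, not Kafka source).
// Under the current implementation the check only applies when acks == -1;
// with acks == 0/1 the append succeeds even if the ISR has shrunk below min.isr.
case class Partition(isrSize: Int, minIsr: Int) {
  def tryAppend(requiredAcks: Int): Either[String, String] =
    if (requiredAcks == -1 && isrSize < minIsr) Left("NotEnoughReplicas")
    else Right("appended")
}

object MinIsrDemo {
  def main(args: Array[String]): Unit = {
    val p = Partition(isrSize = 1, minIsr = 2)
    println(p.tryAppend(-1)) // Left(NotEnoughReplicas)
    println(p.tryAppend(1))  // Right(appended) -- the asymmetry questioned above
  }
}
```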


> provide strong consistency with reasonable availability
> ---
>
> Key: KAFKA-1555
> URL: https://issues.apache.org/jira/browse/KAFKA-1555
> Project: Kafka
>  Issue Type: Improvement
>  Components: controller
>Affects Versions: 0.8.1.1
>Reporter: Jiang Wu
>Assignee: Gwen Shapira
> Fix For: 0.8.2
>
> Attachments: KAFKA-1555-DOCS.0.patch, KAFKA-1555-DOCS.1.patch, 
> KAFKA-1555-DOCS.2.patch, KAFKA-1555.0.patch, KAFKA-1555.1.patch, 
> KAFKA-1555.2.patch, KAFKA-1555.3.patch, KAFKA-1555.4.patch, 
> KAFKA-1555.5.patch, KAFKA-1555.5.patch, KAFKA-1555.6.patch, 
> KAFKA-1555.8.patch, KAFKA-1555.9.patch
>
>
> In a mission critical application, we expect a kafka cluster with 3 brokers 
> can satisfy two requirements:
> 1. When 1 broker is down, no message loss or service blocking happens.
> 2. In worse cases, such as when two brokers are down, service can be blocked, but 
> no message loss happens.
> We found that the current kafka version (0.8.1.1) cannot achieve these requirements 
> due to its three behaviors:
> 1. when choosing a new leader from 2 followers in ISR, the one with less 
> messages may be chosen as the leader.
> 2. even when replica.lag.max.messages=0, a follower can stay in ISR when it 
> has less messages than the leader.
> 3. ISR can contain only 1 broker; therefore acknowledged messages may be 
> stored in only 1 broker.
> The following is an analytical proof. 
> We consider a cluster with 3 brokers and a topic with 3 replicas, and assume 
> that at the beginning, all 3 replicas, leader A, followers B and C, are in 
> sync, i.e., they have the same messages and are all in ISR.
> According to the value of request.required.acks (acks for short), there are 
> the following cases.
> 1. acks=0, 1, 3. Obviously these settings do not satisfy the requirement.
> 2. acks=2. Producer sends a message m. It's acknowledged by A and B. At this 
> time, although C hasn't received m, C is still in ISR. If A is killed, C can 
> be elected as the new leader, and consumers will miss m.
> 3. a

Re: Review Request 24676: Fix KAFKA-1583

2014-10-31 Thread Guozhang Wang


> On Oct. 31, 2014, 6:08 p.m., Jun Rao wrote:
> > core/src/main/scala/kafka/cluster/Partition.scala, line 236
> > 
> >
> > Could we get rid of = since this method is supposed to not return any 
> > value?

Thanks Jun. I will address this comment in KAFKA-1720


- Guozhang


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24676/#review59370
---


On Oct. 28, 2014, 10:09 p.m., Guozhang Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/24676/
> ---
> 
> (Updated Oct. 28, 2014, 10:09 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1583
> https://issues.apache.org/jira/browse/KAFKA-1583
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
Incorporated Joel's comments, round two
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/api/FetchRequest.scala 
> 59c09155dd25fad7bed07d3d00039e3dc66db95c 
>   core/src/main/scala/kafka/api/FetchResponse.scala 
> 8d085a1f18f803b3cebae4739ad8f58f95a6c600 
>   core/src/main/scala/kafka/api/OffsetCommitRequest.scala 
> 861a6cf11dc6b6431fcbbe9de00c74a122f204bd 
>   core/src/main/scala/kafka/api/ProducerRequest.scala 
> b2366e7eedcac17f657271d5293ff0bef6f3cbe6 
>   core/src/main/scala/kafka/api/ProducerResponse.scala 
> a286272c834b6f40164999ff8b7f8998875f2cfe 
>   core/src/main/scala/kafka/cluster/Partition.scala 
> e88ecf224a4dab8bbd26ba7b0c3ccfe844c6b7f4 
>   core/src/main/scala/kafka/common/ErrorMapping.scala 
> 880ab4a004f078e5d84446ea6e4454ecc06c95f2 
>   core/src/main/scala/kafka/log/Log.scala 
> 157d67369baabd2206a2356b2aa421e848adab17 
>   core/src/main/scala/kafka/network/BoundedByteBufferSend.scala 
> a624359fb2059340bb8dc1619c5b5f226e26eb9b 
>   core/src/main/scala/kafka/server/DelayedFetch.scala 
> e0f14e25af03e6d4344386dcabc1457ee784d345 
>   core/src/main/scala/kafka/server/DelayedProduce.scala 
> 9481508fc2d6140b36829840c337e557f3d090da 
>   core/src/main/scala/kafka/server/FetchRequestPurgatory.scala 
> ed1318891253556cdf4d908033b704495acd5724 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 85498b4a1368d3506f19c4cfc64934e4d0ac4c90 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 43eb2a35bb54d32c66cdb94772df657b3a104d1a 
>   core/src/main/scala/kafka/server/ProducerRequestPurgatory.scala 
> d4a7d4a79b44263a1f7e1a92874dd36aa06e5a3f 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> 78b7514cc109547c562e635824684fad581af653 
>   core/src/main/scala/kafka/server/RequestPurgatory.scala 
> 9d76234bc2c810ec08621dc92bb4061b8e7cd993 
>   core/src/main/scala/kafka/utils/DelayedItem.scala 
> d7276494072f14f1cdf7d23f755ac32678c5675c 
>   core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala 
> 209a409cb47eb24f83cee79f4e064dbc5f5e9d62 
>   core/src/test/scala/unit/kafka/producer/SyncProducerTest.scala 
> fb61d552f2320fedec547400fbbe402a0b2f5d87 
>   core/src/test/scala/unit/kafka/server/HighwatermarkPersistenceTest.scala 
> 03a424d45215e1e7780567d9559dae4d0ae6fc29 
>   core/src/test/scala/unit/kafka/server/ISRExpirationTest.scala 
> cd302aa51eb8377d88b752d48274e403926439f2 
>   core/src/test/scala/unit/kafka/server/ReplicaManagerTest.scala 
> a9c4ddc78df0b3695a77a12cf8cf25521a203122 
>   core/src/test/scala/unit/kafka/server/RequestPurgatoryTest.scala 
> a577f4a8bf420a5bc1e62fad6d507a240a42bcaa 
>   core/src/test/scala/unit/kafka/server/ServerShutdownTest.scala 
> 3804a114e97c849cae48308997037786614173fc 
>   core/src/test/scala/unit/kafka/server/SimpleFetchTest.scala 
> 09ed8f5a7a414ae139803bf82d336c2d80bf4ac5 
> 
> Diff: https://reviews.apache.org/r/24676/diff/
> 
> 
> Testing
> ---
> 
> Unit tests
> 
> 
> Thanks,
> 
> Guozhang Wang
> 
>



[jira] [Updated] (KAFKA-1476) Get a list of consumer groups

2014-10-31 Thread BalajiSeshadri (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BalajiSeshadri updated KAFKA-1476:
--
Status: In Progress  (was: Patch Available)

> Get a list of consumer groups
> -
>
> Key: KAFKA-1476
> URL: https://issues.apache.org/jira/browse/KAFKA-1476
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 0.8.1.1
>Reporter: Ryan Williams
>Assignee: BalajiSeshadri
>  Labels: newbie
> Fix For: 0.9.0
>
> Attachments: ConsumerCommand.scala, KAFKA-1476-LIST-GROUPS.patch, 
> KAFKA-1476-RENAME.patch, KAFKA-1476.patch
>
>
> It would be useful to have a way to get a list of consumer groups currently 
> active via some tool/script that ships with kafka. This would be helpful so 
> that the system tools can be explored more easily.
> For example, when running the ConsumerOffsetChecker, it requires a group 
> option
> bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --topic test --group 
> ?
> But, when just getting started with kafka, using the console producer and 
> consumer, it is not clear what value to use for the group option. If the 
> consumer groups could be listed, then it would be clear what value to use.
> Background:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201405.mbox/%3cCAOq_b1w=slze5jrnakxvak0gu9ctdkpazak1g4dygvqzbsg...@mail.gmail.com%3e



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1476) Get a list of consumer groups

2014-10-31 Thread BalajiSeshadri (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BalajiSeshadri updated KAFKA-1476:
--
Attachment: KAFKA-1476-REVIEW-COMMENTS.patch

[~nehanarkhede] Please find patch attached.

> Get a list of consumer groups
> -
>
> Key: KAFKA-1476
> URL: https://issues.apache.org/jira/browse/KAFKA-1476
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 0.8.1.1
>Reporter: Ryan Williams
>Assignee: BalajiSeshadri
>  Labels: newbie
> Fix For: 0.9.0
>
> Attachments: ConsumerCommand.scala, KAFKA-1476-LIST-GROUPS.patch, 
> KAFKA-1476-RENAME.patch, KAFKA-1476-REVIEW-COMMENTS.patch, KAFKA-1476.patch
>





[jira] [Commented] (KAFKA-1476) Get a list of consumer groups

2014-10-31 Thread BalajiSeshadri (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14192269#comment-14192269
 ] 

BalajiSeshadri commented on KAFKA-1476:
---

[~nehanarkhede] Can I get committer access? It looks like I have to delete 
everything and clone every time I submit a patch, or I'm not doing the right thing.

This slows down my contribution.


> Get a list of consumer groups
> -
>
> Key: KAFKA-1476
> URL: https://issues.apache.org/jira/browse/KAFKA-1476
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 0.8.1.1
>Reporter: Ryan Williams
>Assignee: BalajiSeshadri
>  Labels: newbie
> Fix For: 0.9.0
>
> Attachments: ConsumerCommand.scala, KAFKA-1476-LIST-GROUPS.patch, 
> KAFKA-1476-RENAME.patch, KAFKA-1476-REVIEW-COMMENTS.patch, KAFKA-1476.patch
>





[jira] [Created] (KAFKA-1742) ControllerContext removeTopic does not correctly update state

2014-10-31 Thread Onur Karaman (JIRA)
Onur Karaman created KAFKA-1742:
---

 Summary: ControllerContext removeTopic does not correctly update 
state
 Key: KAFKA-1742
 URL: https://issues.apache.org/jira/browse/KAFKA-1742
 Project: Kafka
  Issue Type: Bug
Reporter: Onur Karaman


removeTopic does not correctly update the state of ControllerContext.

This is because it removes the topic from some underlying maps through 
dropWhile.





[jira] [Assigned] (KAFKA-1742) ControllerContext removeTopic does not correctly update state

2014-10-31 Thread Mayuresh Gharat (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayuresh Gharat reassigned KAFKA-1742:
--

Assignee: Mayuresh Gharat

> ControllerContext removeTopic does not correctly update state
> -
>
> Key: KAFKA-1742
> URL: https://issues.apache.org/jira/browse/KAFKA-1742
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Assignee: Mayuresh Gharat
>





[jira] [Updated] (KAFKA-1733) Producer.send will block indeterminately when broker is unavailable.

2014-10-31 Thread Marc Chung (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marc Chung updated KAFKA-1733:
--
Attachment: kafka-1733-add-connectTimeoutMs.patch

> Producer.send will block indeterminately when broker is unavailable.
> 
>
> Key: KAFKA-1733
> URL: https://issues.apache.org/jira/browse/KAFKA-1733
> Project: Kafka
>  Issue Type: Bug
>  Components: core, producer 
>Reporter: Marc Chung
> Attachments: kafka-1733-add-connectTimeoutMs.patch
>
>
> This is a follow-up to the conversation here:
> https://mail-archives.apache.org/mod_mbox/kafka-dev/201409.mbox/%3ccaog_4qymoejhkbo0n31+a-ujx0z5unsisd5wbrmn-xtx7gi...@mail.gmail.com%3E
> During ClientUtils.fetchTopicMetadata, if the broker is unavailable, 
> socket.connect will block indeterminately. Any retry policy 
> (message.send.max.retries) further increases the time spent waiting for the 
> socket to connect.
> The root fix is to add a connection timeout value to the BlockingChannel's 
> socket configuration, like so:
> {noformat}
> -channel.socket.connect(new InetSocketAddress(host, port))
> +channel.socket.connect(new InetSocketAddress(host, port), connectTimeoutMs)
> {noformat}
> The simplest thing to do here would be to have a constant, default value that 
> would be applied to every socket configuration. 
> Is that acceptable? 
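The difference between the two connect calls can be demonstrated with a plain `java.net.Socket` (a standalone sketch; the host and timeout values are arbitrary):

```scala
import java.net.{InetSocketAddress, Socket, SocketTimeoutException}

object ConnectTimeoutDemo {
  def main(args: Array[String]): Unit = {
    val socket = new Socket()
    try {
      // The one-arg connect(addr) waits for the OS default timeout, which can
      // be minutes; the two-arg form bounds the wait explicitly.
      socket.connect(new InetSocketAddress("10.255.255.1", 9092), 2000) // 2s cap
      println("connected")
    } catch {
      case _: SocketTimeoutException => println("connect timed out after 2s")
      case e: java.io.IOException    => println("connect failed: " + e.getMessage)
    } finally {
      socket.close()
    }
  }
}
```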





[jira] [Updated] (KAFKA-1733) Producer.send will block indeterminately when broker is unavailable.

2014-10-31 Thread Marc Chung (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marc Chung updated KAFKA-1733:
--
Attachment: (was: kafka-1733-add-connectTimeoutMs.patch)

> Producer.send will block indeterminately when broker is unavailable.
> 
>
> Key: KAFKA-1733
> URL: https://issues.apache.org/jira/browse/KAFKA-1733
> Project: Kafka
>  Issue Type: Bug
>  Components: core, producer 
>Reporter: Marc Chung
> Attachments: kafka-1733-add-connectTimeoutMs.patch
>
>





[jira] [Assigned] (KAFKA-1742) ControllerContext removeTopic does not correctly update state

2014-10-31 Thread Onur Karaman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Onur Karaman reassigned KAFKA-1742:
---

Assignee: Onur Karaman  (was: Mayuresh Gharat)

> ControllerContext removeTopic does not correctly update state
> -
>
> Key: KAFKA-1742
> URL: https://issues.apache.org/jira/browse/KAFKA-1742
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>





[jira] [Updated] (KAFKA-1742) ControllerContext removeTopic does not correctly update state

2014-10-31 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy updated KAFKA-1742:
--
 Priority: Blocker  (was: Major)
Fix Version/s: 0.8.2

> ControllerContext removeTopic does not correctly update state
> -
>
> Key: KAFKA-1742
> URL: https://issues.apache.org/jira/browse/KAFKA-1742
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>Priority: Blocker
> Fix For: 0.8.2
>
>





Re: Compile failure going from kafka 0.8.1.1 to 0.8.2

2014-10-31 Thread Jack Foy
On Thu, Oct 30, 2014 at 9:20 AM, Jay Kreps <jay.kr...@gmail.com> wrote:
I think we should treat this like a bug for 0.8.2 final, we should be able
to add two commitOffsets methods with and without the param which should
fix the problem, right?

On Oct 30, 2014, at 9:51 AM, Jun Rao <jun...@gmail.com> wrote:
Yes, we can change this to two methods in 0.8.2 final.

Thanks. After some experimentation, I think there isn’t actually a form of this 
code that can compile under both kafka 0.8.1.1 and 0.8.2-beta.

This form fails to compile against kafka-0.8.1.1: connector.commitOffsets()

[error] src/main/scala/com/whitepages/kafka/consumer/Connector.scala:21: 
Unit does not take parameters
[error] connector.commitOffsets()
[error]^

This form fails to compile against kafka-0.8.2-beta: connector.commitOffsets

[error] src/main/scala/com/whitepages/kafka/consumer/Connector.scala:21: 
missing arguments for method commitOffsets in trait ConsumerConnector;
[error] follow this method with `_' if you want to treat it as a partially 
applied function
[error] connector.commitOffsets
[error]   ^
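One way to restore compatibility, per Jay's suggestion, is two overloads instead of a single method with a default argument. A sketch against a stand-in trait (not the actual kafka.consumer.ConsumerConnector source):

```scala
// Stand-in trait illustrating the proposed fix (not the real kafka.consumer API):
// two overloads let both commitOffsets() (0.8.1.x-style callers) and
// commitOffsets(false) (0.8.2-style callers) compile against the same trait.
trait Connector {
  def commitOffsets(): Unit
  def commitOffsets(retryOnFailure: Boolean): Unit
}

object CompatDemo {
  class Recording extends Connector {
    var lastRetry: Option[Boolean] = None
    def commitOffsets(): Unit = commitOffsets(retryOnFailure = true)
    def commitOffsets(retryOnFailure: Boolean): Unit = { lastRetry = Some(retryOnFailure) }
  }

  def main(args: Array[String]): Unit = {
    val c = new Recording
    c.commitOffsets()       // 0.8.1.x-style call
    println(c.lastRetry)    // Some(true)
    c.commitOffsets(false)  // 0.8.2-style call
    println(c.lastRetry)    // Some(false)
  }
}
```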

--
Jack Foy <j...@whitepages.com>





[jira] [Updated] (KAFKA-328) Write unit test for kafka server startup and shutdown API

2014-10-31 Thread BalajiSeshadri (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BalajiSeshadri updated KAFKA-328:
-
Attachment: KAFKA-328-REVIEW-COMMENTS.patch

[~nehanarkhede] Please review

> Write unit test for kafka server startup and shutdown API 
> --
>
> Key: KAFKA-328
> URL: https://issues.apache.org/jira/browse/KAFKA-328
> Project: Kafka
>  Issue Type: Bug
>Reporter: Neha Narkhede
>Assignee: BalajiSeshadri
>  Labels: newbie
> Attachments: KAFKA-328-FORMATTED.patch, 
> KAFKA-328-REVIEW-COMMENTS.patch, KAFKA-328.patch
>
>
> Background discussion in KAFKA-320
> People often try to embed KafkaServer in an application that ends up calling 
> startup() and shutdown() repeatedly and sometimes in odd ways. To ensure this 
> works correctly we have to be very careful about cleaning up resources. This 
> is a good practice for making unit tests reliable anyway.
> A good first step would be to add some unit tests on startup and shutdown to 
> cover various cases:
> 1. A Kafka server can start up if it is not already starting up, if it is not 
> currently being shut down, or if it hasn't already been started
> 2. A Kafka server can shut down if it is not already shutting down, if it is 
> not currently starting up, or if it hasn't already been shut down. 





[GitHub] kafka pull request: Trunk

2014-10-31 Thread futtre
GitHub user futtre opened a pull request:

https://github.com/apache/kafka/pull/35

Trunk



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/35.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #35


commit 5de68ef4aef7812fd9f2d5e4fb6158bf753658e3
Author: Sriram Subramanian 
Date:   2014-02-24T09:50:02Z

Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
trunk

Conflicts:
core/src/main/scala/kafka/controller/KafkaController.scala

commit 3955915a5f8a0daa7b96be69a87b3fbd013c9501
Author: Jay Kreps 
Date:   2014-02-24T22:44:32Z

KAFKA-1279 Socket server should close all connections when it is shutdown.

commit 5b80758b57d8063b8a8124562cc5c83854a99e00
Author: Sriram Subramanian 
Date:   2014-02-25T08:22:04Z

Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
trunk

commit 00afb619cb1f2aaca0ddc785ab1d649428f9c93e
Author: Sriram Subramanian 
Date:   2014-02-25T08:26:14Z

Check delete state of topic

commit edbed2823fc3e4948bb56ec3bee02fe4ad1bbdca
Author: Nathan Brown 
Date:   2014-02-25T17:37:15Z

kafka-1278; More flexible helper scripts; patched by Nathan Brown; reviewed 
by Jun Rao

commit 57be6c81a748a30e583037ddc70d2eec9acc7832
Author: Jun Rao 
Date:   2014-02-26T02:09:59Z

kafka-1280; exclude kafka-clients jar from dependant-libs dir; patched by 
Jun Rao; reviewed by Neha Narkhede

commit 5e2a9a560d847bd0cf364d86bd6784f70d99c71a
Author: Guozhang Wang 
Date:   2014-02-27T18:50:15Z

KAFKA-1260 Integration Test for New Producer Part II: Broker Failure 
Handling; reviewed by Jay Kreps, Neha Narkhede and Jun Rao

commit eb6da57492caad7a6b71692ad184a95c89035b67
Author: Guozhang Wang 
Date:   2014-02-27T22:15:41Z

kafka-1212; System test exception handling does not stop background 
producer threads; patched by Guozhang Wang; reviewed by Neha Narkhede, Joel 
Koshy, and Jun Rao

commit a810b8ecbe66f36f8ef58a440d439a54762b3d9c
Author: Jay Kreps 
Date:   2014-02-27T22:25:02Z

TRIVIAL: Fix failing producer integration tests.

commit f1a53b972eb1f8e75db54d3272d9eb7c398e238a
Author: Jay Kreps 
Date:   2014-02-21T04:17:01Z

KAFKA-1250 Add logging to new producer.

commit 8cdb234ad81ec324f15c9c1f8e484861926076f9
Author: Rajasekar Elango 
Date:   2014-02-28T16:02:42Z

kafka-1041; Number of file handles increases indefinitely in producer if 
broker host is unresolvable; patched by Rajasekar Elango; reviewed by Jun Rao

commit 220cc842a6e866806e3f53c394b5547fe7c44b3e
Author: Jun Rao 
Date:   2014-02-28T21:53:37Z

kafka-1285; enable log4j in unit test; patched by Jun Rao; reviewed by Neha 
Narkhede

commit 40c6555eb0d0db25d6bd06c5eb4040246805c79f
Author: Jun Rao 
Date:   2014-03-03T17:35:21Z

kafka-1287; enable log4j in command line tools using the new producer; 
patched by Jun Rao; reviewed by Guozhang Wang and Neha Narkhede

commit 77118a935ee28da80c67d4050f41a1e7e838ebaa
Author: Joe Stein 
Date:   2014-03-04T20:30:59Z

KAFKA-1289 Misc. nitpicks in log cleaner for new 0.8.1 features patch by 
Jay Kreps, reviewed by Sriram Subramanian and Jun Rao

commit 153ac8aa604014351e6958dce5d479e4875031fa
Author: Joe Stein 
Date:   2014-03-04T20:49:12Z

KAFKA-1288 add enclosing dir in release tar gz patch by Jun Rao, reviewed 
by Neha Narkhede

commit 5ba48348b3abb8f84fda0798d992ff2e0a04051d
Author: Jay Kreps 
Date:   2014-03-05T04:05:51Z

KAFKA-1286 Retry can block. Patch from Guozhang, reviewed by jay.

commit 2404191be9f4323fc6144a8f56ae36f794d50da6
Author: Jay Kreps 
Date:   2014-03-05T21:32:08Z

KAFKA-1286 Follow-up comment from Jun: Change the backoff default time 
configuration.

commit 4524f384dc041f33297b47b181be6b59061577b4
Author: Jay Kreps 
Date:   2014-03-05T22:35:07Z

KAFKA-1286: Trivial fix up: Use || instead of |.

commit 6319f26e6f47350f1e7926dea153ed848f464d2e
Author: Jay Kreps 
Date:   2014-03-06T16:26:12Z

TRIVIAL: Fix spurious logging in console consumer.

commit c3520fe7e0e50f0eeb4f82ad1ed961bdd39440fc
Author: Martin Kleppmann 
Date:   2014-03-06T17:34:37Z

kafka-server-stop.sh doesn't stop broker; reviewed by Neha Narkhede

commit 74c54c7eeb236cbf66710751165ea9f632cf3f52
Author: Neha Narkhede 
Date:   2014-03-07T02:08:55Z

KAFKA-1281 add the new producer to existing tools; reviewed by Jun Rao and 
Guozhang Wang

commit c765d7bd4e30e0b952fc4bc00d142f7939b498a6
Author: Jun Rao 
Date:   2014-03-07T03:06:25Z

kafka-1240; Add ability to existing system tests to use the new producer 
client; patched by Jun Rao; reviewed by Neha Narkhede

commit c1246c3e2152915411e1c3b00fca67634eed
Author: Jun Rao 
Date:   2014-03-12T03:56:32Z

kafka-1301; system testcase_0206 fails u

[GitHub] kafka pull request: Trunk

2014-10-31 Thread futtre
Github user futtre closed the pull request at:

https://github.com/apache/kafka/pull/35


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-1742) ControllerContext removeTopic does not correctly update state

2014-10-31 Thread Onur Karaman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14192733#comment-14192733
 ] 

Onur Karaman commented on KAFKA-1742:
-

partitionLeadershipInfo and partitionReplicaAssignment are mutable.Maps. Given 
that a mutable.Map may be arbitrarily ordered, dropping the longest prefix of 
elements that satisfy a predicate can cause incorrect results.

It's also noted in the scaladocs 
[http://www.scala-lang.org/api/current/index.html#scala.collection.mutable.Map@dropWhile(p:A=>Boolean):Repr]

For example:
{code}
import collection._

object Main {
  def main(args: Array[String]) {
    val m = mutable.Map(5 -> 2, 3 -> 6)
    println("original: " + m)
    println("using filter: " + m.filter(p => p._1 != 3))
    println("using dropWhile: " + m.dropWhile(p => p._1 == 3))
  }
}
{code}

Outputs:
{code}
original: Map(5 -> 2, 3 -> 6)
using filter: Map(5 -> 2)
using dropWhile: Map(5 -> 2, 3 -> 6)
{code}
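An order-independent alternative is to test every key rather than drop a prefix. A sketch with hypothetical map shapes (the real ControllerContext fields differ):

```scala
import scala.collection.mutable

object RemoveTopicSketch {
  // Order-independent removal: collect all matching keys first, then remove
  // them, so the result does not depend on the map's iteration order.
  def removeTopic(assignment: mutable.Map[(String, Int), Seq[Int]], topic: String): Unit = {
    val stale = assignment.keys.filter { case (t, _) => t == topic }.toList
    stale.foreach(assignment -= _)
  }

  def main(args: Array[String]): Unit = {
    val assignment = mutable.Map(("t1", 0) -> Seq(1, 2), ("t2", 0) -> Seq(2, 3))
    removeTopic(assignment, "t1")
    println(assignment.keySet) // only (t2,0) remains
  }
}
```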

> ControllerContext removeTopic does not correctly update state
> -
>
> Key: KAFKA-1742
> URL: https://issues.apache.org/jira/browse/KAFKA-1742
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>Priority: Blocker
> Fix For: 0.8.2
>
>





[jira] [Created] (KAFKA-1743) ConsumerConnector.commitOffsets in 0.8.2 is not backward compatible

2014-10-31 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-1743:
--

 Summary: ConsumerConnector.commitOffsets in 0.8.2 is not backward 
compatible
 Key: KAFKA-1743
 URL: https://issues.apache.org/jira/browse/KAFKA-1743
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2
Reporter: Jun Rao
Priority: Blocker


In 0.8.1.x, ConsumerConnector has the following api:
  def commitOffsets

This is changed to the following in 0.8.2 and breaks compatibility

  def commitOffsets(retryOnFailure: Boolean = true)
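
One way to restore source compatibility (a sketch of the approach discussed on the dev list, not necessarily the committed fix) is to expose both a parameterless method and an explicit overload; note the default value must be dropped so that a call to commitOffsets() is not ambiguous:

{code}
trait ConsumerConnector {
  // 0.8.1.x-style parameterless form, kept for source compatibility.
  def commitOffsets { commitOffsets(true) }

  // 0.8.2 form; no default argument, to avoid overload ambiguity.
  def commitOffsets(retryOnFailure: Boolean)
}
{code}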



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Compile failure going from kafka 0.8.1.1 to 0.8.2

2014-10-31 Thread Jun Rao
Yes, this is indeed an incompatible change.

In 0.8.1.x, ConsumerConnector has the following api:
def commitOffsets

This is changed to the following in 0.8.2 and breaks compatibility

def commitOffsets(retryOnFailure: Boolean = true)

Filed KAFKA-1743 as an 0.8.2 blocker.

Thanks,

Jun



On Fri, Oct 31, 2014 at 3:07 PM, Jack Foy wrote:

> On Thu, Oct 30, 2014 at 9:20 AM, Jay Kreps <jay.kr...@gmail.com> wrote:
> I think we should treat this like a bug for 0.8.2 final, we should be able
> to add two commitOffsets methods with and without the param which should
> fix the problem, right?
>
> On Oct 30, 2014, at 9:51 AM, Jun Rao <jun...@gmail.com> wrote:
> Yes, we can change this to two methods in 0.8.2 final.
>
> Thanks. After some experimentation, I think there isn’t actually a form of
> this code that can compile under both kafka 0.8.1.1 and 0.8.2-beta.
>
> This form fails to compile against kafka-0.8.1.1: connector.commitOffsets()
>
> [error]
> src/main/scala/com/whitepages/kafka/consumer/Connector.scala:21: Unit does
> not take parameters
> [error] connector.commitOffsets()
> [error]^
>
> This form fails to compile against kafka-0.8.2-beta:
> connector.commitOffsets
>
> [error]
> src/main/scala/com/whitepages/kafka/consumer/Connector.scala:21: missing
> arguments for method commitOffsets in trait ConsumerConnector;
> [error] follow this method with `_' if you want to treat it as a
> partially applied function
> [error] connector.commitOffsets
> [error]   ^
>
> --
> Jack Foy <j...@whitepages.com>
>
>
>
>
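
Until such an overload ships, one workaround for code that must build against both versions is to invoke the method reflectively (a sketch; error handling is omitted):

{code}
def commitOffsetsCompat(connector: kafka.consumer.ConsumerConnector): Unit = {
  val clazz = connector.getClass
  try {
    // 0.8.2-beta signature: commitOffsets(retryOnFailure: Boolean)
    clazz.getMethod("commitOffsets", classOf[Boolean])
      .invoke(connector, java.lang.Boolean.TRUE)
  } catch {
    case _: NoSuchMethodException =>
      // 0.8.1.1 signature: parameterless commitOffsets
      clazz.getMethod("commitOffsets").invoke(connector)
  }
}
{code}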


Re: Review Request 27430: Fix KAFKA-1720

2014-10-31 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27430/
---

(Updated Nov. 1, 2014, 12:21 a.m.)


Review request for kafka.


Bugs: KAFKA-1720
https://issues.apache.org/jira/browse/KAFKA-1720


Repository: kafka


Description
---

Rename delayed requests to delayed operations, change some class names


Diffs (updated)
-

  core/src/main/scala/kafka/cluster/Partition.scala 
1be57008e983fc3a831626ecf9a861f164fcca92 
  core/src/main/scala/kafka/server/DelayedFetch.scala 
1ccbb4b6fdbbd4412ba77ffe7d4cf5adf939e439 
  core/src/main/scala/kafka/server/DelayedProduce.scala 
8049e07e5d6d65913ec2492f1e22e5ed3ecbbea8 
  core/src/main/scala/kafka/server/DelayedRequestKey.scala 
628ef59564b9b9238d7b05d26aef79d3cfec174d 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
3007a6d89b637b93f71fdb7adab561a93d9c4c62 
  core/src/main/scala/kafka/server/RequestPurgatory.scala 
323b12e765f981e9bba736a204e4a8159e5c5ada 
  core/src/test/scala/unit/kafka/server/RequestPurgatoryTest.scala 
a7720d579ea15b71511c9da0e241bd087de3674e 
  system_test/metrics.json cd3fc142176b8a3638db5230fb74f055c04c59d4 

Diff: https://reviews.apache.org/r/27430/diff/


Testing
---


Thanks,

Guozhang Wang



[jira] [Commented] (KAFKA-1720) [Renaming / Comments] Delayed Operations

2014-10-31 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14192810#comment-14192810
 ] 

Guozhang Wang commented on KAFKA-1720:
--

Updated reviewboard https://reviews.apache.org/r/27430/diff/
 against branch origin/trunk

> [Renaming / Comments] Delayed Operations
> 
>
> Key: KAFKA-1720
> URL: https://issues.apache.org/jira/browse/KAFKA-1720
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.0
>
> Attachments: KAFKA-1720.patch, KAFKA-1720_2014-10-31_17:21:46.patch
>
>
> After KAFKA-1583 is checked in, we had better rename the delayed requests to 
> delayed operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1720) [Renaming / Comments] Delayed Operations

2014-10-31 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1720:
-
Attachment: KAFKA-1720_2014-10-31_17:21:46.patch

> [Renaming / Comments] Delayed Operations
> 
>
> Key: KAFKA-1720
> URL: https://issues.apache.org/jira/browse/KAFKA-1720
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.0
>
> Attachments: KAFKA-1720.patch, KAFKA-1720_2014-10-31_17:21:46.patch
>
>
> After KAFKA-1583 is checked in, we had better rename the delayed requests to 
> delayed operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27430: Fix KAFKA-1720

2014-10-31 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27430/#review59453
---

Ship it!


LGTM :)

- Gwen Shapira


On Nov. 1, 2014, 12:21 a.m., Guozhang Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27430/
> ---
> 
> (Updated Nov. 1, 2014, 12:21 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1720
> https://issues.apache.org/jira/browse/KAFKA-1720
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Rename delayed requests to delayed operations, change some class names
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/cluster/Partition.scala 
> 1be57008e983fc3a831626ecf9a861f164fcca92 
>   core/src/main/scala/kafka/server/DelayedFetch.scala 
> 1ccbb4b6fdbbd4412ba77ffe7d4cf5adf939e439 
>   core/src/main/scala/kafka/server/DelayedProduce.scala 
> 8049e07e5d6d65913ec2492f1e22e5ed3ecbbea8 
>   core/src/main/scala/kafka/server/DelayedRequestKey.scala 
> 628ef59564b9b9238d7b05d26aef79d3cfec174d 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> 3007a6d89b637b93f71fdb7adab561a93d9c4c62 
>   core/src/main/scala/kafka/server/RequestPurgatory.scala 
> 323b12e765f981e9bba736a204e4a8159e5c5ada 
>   core/src/test/scala/unit/kafka/server/RequestPurgatoryTest.scala 
> a7720d579ea15b71511c9da0e241bd087de3674e 
>   system_test/metrics.json cd3fc142176b8a3638db5230fb74f055c04c59d4 
> 
> Diff: https://reviews.apache.org/r/27430/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Guozhang Wang
> 
>



[jira] [Updated] (KAFKA-1555) provide strong consistency with reasonable availability

2014-10-31 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1555:

Attachment: KAFKA-1555-DOCS.3.patch

Thank you for the detailed review [~jjkoshy]. I think the design documentation 
is far clearer now. 

One comment I did not incorporate into the docs:
{quote}
 * The sentence that follows ("Note, however...") contains details that seem 
redundant to what has already been said in parentheses. So we can remove one or 
the other.
{quote}

I added this note since this is a specific point of confusion: why isn't 
"acks=-1" enough to guarantee consistency? I've explained this in detail to 
multiple customers, users on the mailing list, product managers, support, etc. I 
think it deserves a sentence in our documentation.

Regarding the design itself - perhaps it's worth its own JIRA with discussion. 
IMO, when a user specifies acks=[0,1], they are basically declaring that they 
don't care much if the message is lost due to a replica failure. Therefore, 
rejecting the message for lack of replicas would be surprising and 
counter-intuitive. However, most of my users are in the strong-consistency camp 
and simply don't use acks=[0,1]. Perhaps you have example scenarios where these 
combinations make sense?


> provide strong consistency with reasonable availability
> ---
>
> Key: KAFKA-1555
> URL: https://issues.apache.org/jira/browse/KAFKA-1555
> Project: Kafka
>  Issue Type: Improvement
>  Components: controller
>Affects Versions: 0.8.1.1
>Reporter: Jiang Wu
>Assignee: Gwen Shapira
> Fix For: 0.8.2
>
> Attachments: KAFKA-1555-DOCS.0.patch, KAFKA-1555-DOCS.1.patch, 
> KAFKA-1555-DOCS.2.patch, KAFKA-1555-DOCS.3.patch, KAFKA-1555.0.patch, 
> KAFKA-1555.1.patch, KAFKA-1555.2.patch, KAFKA-1555.3.patch, 
> KAFKA-1555.4.patch, KAFKA-1555.5.patch, KAFKA-1555.5.patch, 
> KAFKA-1555.6.patch, KAFKA-1555.8.patch, KAFKA-1555.9.patch
>
>
> In a mission critical application, we expect a kafka cluster with 3 brokers 
> can satisfy two requirements:
> 1. When 1 broker is down, no message loss or service blocking happens.
> 2. In worse cases, such as when two brokers are down, service can be blocked, but 
> no message loss happens.
> We found that the current Kafka version (0.8.1.1) cannot achieve these requirements 
> due to three of its behaviors:
> 1. When choosing a new leader from 2 followers in the ISR, the one with fewer 
> messages may be chosen as the leader.
> 2. Even when replica.lag.max.messages=0, a follower can stay in the ISR when it 
> has fewer messages than the leader.
> 3. The ISR can contain only 1 broker; therefore acknowledged messages may be 
> stored on only 1 broker.
> The following is an analytical proof. 
> We consider a cluster with 3 brokers and a topic with 3 replicas, and assume 
> that at the beginning, all 3 replicas, leader A, followers B and C, are in 
> sync, i.e., they have the same messages and are all in ISR.
> According to the value of request.required.acks (acks for short), there are 
> the following cases.
> 1. acks=0, 1, 3. Obviously these settings do not satisfy the requirement.
> 2. acks=2. Producer sends a message m. It's acknowledged by A and B. At this 
> time, although C hasn't received m, C is still in ISR. If A is killed, C can 
> be elected as the new leader, and consumers will miss m.
> 3. acks=-1. B and C restart and are removed from ISR. Producer sends a 
> message m to A, and receives an acknowledgement. Disk failure happens in A 
> before B and C replicate m. Message m is lost.
> In summary, any existing configuration cannot satisfy the requirements.
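
For context, the configuration that the docs patches on this ticket describe for the strong-consistency case (a sketch; min.insync.replicas is the setting this JIRA adds, and the property names are assumptions for illustration) would look like:

{code}
# Broker / topic configuration (sketch)
default.replication.factor=3   # topics get 3 replicas
min.insync.replicas=2          # reject acks=-1 produces when ISR < 2

# Producer configuration
request.required.acks=-1       # wait for all in-sync replicas
{code}

With acks=-1 and min.insync.replicas=2, a message is acknowledged only once at least two replicas have it, so one broker failure loses no acknowledged messages, and a second failure blocks produces instead of silently risking loss.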



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)