[jira] [Resolved] (KAFKA-1959) Class CommitThread overwrite group of Thread class causing compile errors

2015-02-17 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy resolved KAFKA-1959.
---
Resolution: Fixed
  Assignee: Tong Li

Thanks for the patch - committed to trunk.

> Class CommitThread overwrite group of Thread class causing compile errors
> -
>
> Key: KAFKA-1959
> URL: https://issues.apache.org/jira/browse/KAFKA-1959
> Project: Kafka
>  Issue Type: Bug
>  Components: core
> Environment: scala 2.10.4
>Reporter: Tong Li
>Assignee: Tong Li
>  Labels: newbie
> Attachments: KAFKA-1959.patch, compileError.png
>
>
> class CommitThread(id: Int, partitionCount: Int, commitIntervalMs: Long, 
> zkClient: ZkClient)
> extends ShutdownableThread("commit-thread")
> with KafkaMetricsGroup {
> private val group = "group-" + id
> `group` overrides the `group` member of class Thread, causing the following 
> compile error:
> overriding variable group in class Thread of type ThreadGroup;  value group 
> has weaker access privileges; it should not be private
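The fix (per the attached patch) is to rename the private field so it no longer collides with the `group` member inherited from java.lang.Thread. A minimal, self-contained sketch; the real class extends ShutdownableThread with KafkaMetricsGroup, which is stripped down here so the example compiles standalone:

```scala
// Self-contained sketch; the real class extends ShutdownableThread.
class CommitThread(id: Int) extends Thread("commit-thread") {
  // Renamed from `group`, which clashes with Thread's own field.
  private val group_id = "group-" + id
  def commitGroup: String = group_id
}

object CommitThreadDemo {
  def main(args: Array[String]): Unit =
    println(new CommitThread(7).commitGroup)  // group-7
}
```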



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-1960) .gitignore does not exclude test generated files and folders.

2015-02-17 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy resolved KAFKA-1960.
---
Resolution: Fixed
  Assignee: Tong Li

Thanks for the patch - committed to trunk.

> .gitignore does not exclude test generated files and folders.
> -
>
> Key: KAFKA-1960
> URL: https://issues.apache.org/jira/browse/KAFKA-1960
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Reporter: Tong Li
>Assignee: Tong Li
>Priority: Minor
>  Labels: newbie
> Attachments: KAFKA-1960.patch
>
>
> gradle test can create quite a few folders; .gitignore should exclude these 
> files and folders to make committing easier.
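A .gitignore fragment along these lines (paths taken from the patch description) addresses it:

```
# folders generated by `gradle test`; keep them out of `git status`
core/data/
gradle/wrapper/
```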





Re: Review Request 27391: Fix KAFKA-1634

2015-02-17 Thread Joel Koshy


> On Feb. 4, 2015, 2:15 a.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/api/OffsetCommitRequest.scala, line 48
> > 
> >
> > I think our convention is to include the if on the previous line.
> 
> Guozhang Wang wrote:
> I checked the code base and it seems we do not have a consensus here... 
> and personally I would prefer this, as it actually makes the logic clearer.

We don't have a formal convention here, but I think we should adopt one and 
incorporate it into our coding guidelines. The problem with a separate line is 
that at first glance (especially with just two-character indentation) it does 
not seem to be associated with the assignment. Also, most current occurrences 
put the if on the same line.
```
find . -name "*.scala" -exec pcregrep -c '=(\s)*if' {} \; | grep -v 0 | paste -s -d+ | bc
61
find . -name "*.scala" -exec pcregrep -Mc '=(\s)*\n(\s)*if' {} \; | grep -v 0 | paste -s -d+ | bc
36
```
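For illustration, the two styles under discussion look like this (identifiers are invented):

```scala
object IfPlacementDemo {
  def describe(n: Int): String = {
    // Majority style: `if` on the same line as the assignment, so the
    // conditional is visually tied to the value being computed.
    val sameLine = if (n >= 0) "non-negative" else "negative"

    // The debated style: `if` on its own line, which with two-space
    // indentation can read as detached from the assignment.
    val separateLine =
      if (n >= 0) "non-negative"
      else "negative"

    assert(sameLine == separateLine)  // both styles compute the same value
    sameLine
  }

  def main(args: Array[String]): Unit = println(describe(-3))  // negative
}
```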


> On Feb. 4, 2015, 2:15 a.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/common/OffsetMetadataAndError.scala, line 36
> > 
> >
> > (This is also a public API change - although you did add an Object 
> > wrapper further down that comes close to the original API.)
> 
> Guozhang Wang wrote:
> I think the wrapper MessageAndMetadata preserves the existing public API 
> right?

You mean the wrapper object? It comes close, but not quite, since you can 
instantiate a case class either with the `new` keyword or without. You need 
`new` for the secondary constructors of the case class, while with the object 
wrapper we assume that the objects were being constructed without it. I don't 
know how many people actually used it, but it was part of the public API since 
you would need to create those objects to form an OffsetCommitRequest.
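The distinction is easiest to see on a toy case class (names invented; this is not the actual OffsetMetadataAndError API): a companion-object wrapper preserves apply-style construction, but not call sites that used `new` or a secondary constructor.

```scala
// Hypothetical stand-in for a public case-class API.
case class OffsetMetadata(offset: Long, metadata: String) {
  // A secondary constructor is only reachable via `new`.
  def this(offset: Long) = this(offset, "")
}

object CaseClassApiDemo {
  def main(args: Array[String]): Unit = {
    val a = OffsetMetadata(5L, "m")      // companion apply - survives an object wrapper
    val b = new OffsetMetadata(5L, "m")  // explicit `new` - bypasses the companion
    val c = new OffsetMetadata(5L)       // secondary constructor - requires `new`
    assert(a == b)                       // case-class structural equality
    assert(c.metadata == "")
  }
}
```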


> On Feb. 4, 2015, 2:15 a.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/server/KafkaApis.scala, line 163
> > 
> >
> > Shouldn't the commit timestamp _always_ be set to the current time?
> > 
> > What I was thinking is this:
> > If v0:
> > - An explicit timestamp is provided only to override the v0 default 
> > retention, which is to add the server-side retention to the current timestamp. 
> > The (true) commit timestamp - i.e., receive time is useful for debugging 
> > purposes. So if an explicit timestamp is provided in v0 then use that to 
> > compute the absolute expire timestamp which will be the given commit 
> > timestamp; so you would store (commitTimestamp = now; expireTimestamp = 
> > given commitTimeStamp); if v0 and commit timestamp is default, then you 
> > would store (commitTimestamp = now, expireTimestamp = now + offsetRetention)
> > - if v1: (commitTimestamp = now, expireTimestamp = now + 
> > offsetRetention)
> > 
> > This way, you should have correct expiration behavior for v0, v1 and v2 
> > and at the same time have the true commit timestamp - i.e., the receive 
> > time at the broker which is useful for debugging. (also see comment in 
> > OffsetManager)
> 
> Guozhang Wang wrote:
> In v0/v1, the commit timestamp can be specified as a future timestamp so 
> the expiration timestamp = commit timestamp + retention (in v0/v1 it is 
> always the default value).
> 
> This behavior should not be respected, i.e. offsets already stored in v0 
> and v1 format should be expired correctly using 0.8.2 code. Details can be 
> found in Jun's comments and my replies.

I don't think we are on the same page here. Let's discuss offline to follow up.
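For reference, the scheme sketched in my comment above looks roughly like this in code (a sketch only; names and constants are invented, not actual Kafka code):

```scala
// Sketch of the proposed v0/v1 timestamp handling - illustration only.
object OffsetTimestampDemo {
  val DefaultTimestamp: Long = -1L  // sentinel meaning "no explicit timestamp"

  /** Returns (commitTimestamp, expireTimestamp). */
  def resolve(version: Int, givenTimestamp: Long, now: Long, retentionMs: Long): (Long, Long) =
    version match {
      // v0 with an explicit timestamp: the given value becomes the absolute
      // expire timestamp, while the commit timestamp records receive time.
      case 0 if givenTimestamp != DefaultTimestamp => (now, givenTimestamp)
      // v0 default and v1: expire after the server-side retention window.
      case _ => (now, now + retentionMs)
    }

  def main(args: Array[String]): Unit = {
    println(resolve(0, 5000L, now = 1000L, retentionMs = 100L))                // (1000,5000)
    println(resolve(1, DefaultTimestamp, now = 1000L, retentionMs = 100L))     // (1000,1100)
  }
}
```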


> On Feb. 4, 2015, 2:15 a.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/server/OffsetManager.scala, line 65
> > 
> >
> > Should we call this maxOffsetRetentionMs instead?
> 
> Guozhang Wang wrote:
> Not exactly, as it is just the default offset retention, not the upper 
> limit: users can specify a value larger than this default and it will still 
> be accepted.

Yes you are right.


- Joel


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27391/#review70836
---


On Feb. 6, 2015, 7:01 p.m., Guozhang Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27391/
> ---
> 
> (Updated Feb. 6, 2015, 7:01 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1634
> https://issues.apache.org/jira/browse/KAFKA-1634
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Incorporated Joel's comments
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/or

Re: Review Request 31097: Patch for KAFKA-1960

2015-02-17 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31097/#review72911
---

Ship it!


Ship It!

- Joel Koshy


On Feb. 16, 2015, 9:48 p.m., Tong Li wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31097/
> ---
> 
> (Updated Feb. 16, 2015, 9:48 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1960
> https://issues.apache.org/jira/browse/KAFKA-1960
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Tests create files in the following folders; these files and folders should
> be excluded from being committed to the source repository:
> 
> core/data/*
> gradle/wrapper/*
> 
> 
> Diffs
> -
> 
>   .gitignore 06a64184eaa531fcbf5586692b78bfd48e4176ba 
> 
> Diff: https://reviews.apache.org/r/31097/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Tong Li
> 
>



Re: Review Request 31088: Patch for KAFKA-1959

2015-02-17 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31088/#review72910
---

Ship it!


Ship It!

- Joel Koshy


On Feb. 16, 2015, 4:37 p.m., Tong Li wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31088/
> ---
> 
> (Updated Feb. 16, 2015, 4:37 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1959
> https://issues.apache.org/jira/browse/KAFKA-1959
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> The original code clearly intends to define a string, but instead overrides 
> the ThreadGroup member of the superclass. The patch renames the private 
> variable to group_id as intended.
> 
> 
> Diffs
> -
> 
>   core/src/test/scala/other/kafka/TestOffsetManager.scala 
> 41f334d48897b3027ed54c58bbf4811487d3b191 
> 
> Diff: https://reviews.apache.org/r/31088/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Tong Li
> 
>



Re: Review Request 29912: Patch for KAFKA-1852

2015-02-17 Thread Joel Koshy


> On Feb. 13, 2015, 7:01 p.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/server/OffsetManager.scala, line 215
> > 
> >
> > Minor comment. I think this may be better to pass in to the 
> > OffsetManager.
> > 
> > We should even use it in loadOffsets to discard offsets that are from 
> > topics that have been deleted. We can do that in a separate jira - I don't 
> > think our handling for clearing out offsets on a delete topic is done yet - 
> > Onur Karaman did it for ZK based offsets but we need a separate jira to 
> > delete Kafka-based offsets.
> 
> Sriharsha Chintalapani wrote:
> Thanks for the review. Since the OffsetManager is initialized in KafkaServer 
> and the metadataCache in KafkaApis, in the latest patch I added 
> setMetadataCache to OffsetManager and call it from KafkaApis. Please take a 
> look.

In that case I think it is just better to create the cache outside (in 
KafkaServer) and pass it in to KafkaApis. The metadataCache is useful enough to 
be used in other places (other than just KafkaApis).


- Joel


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29912/#review72413
---


On Feb. 16, 2015, 9:22 p.m., Sriharsha Chintalapani wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/29912/
> ---
> 
> (Updated Feb. 16, 2015, 9:22 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1852
> https://issues.apache.org/jira/browse/KAFKA-1852
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1852. OffsetCommitRequest can commit offset on unknown topic. Added 
> contains method to MetadataCache.
> 
> 
> KAFKA-1852. OffsetCommitRequest can commit offset on unknown topic.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 703886a1d48e6d2271da67f8b89514a6950278dd 
>   core/src/main/scala/kafka/server/MetadataCache.scala 
> 4c70aa7e0157b85de5e24736ebf487239c4571d0 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 83d52643028c5628057dc0aa29819becfda61fdb 
>   core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
> 5b93239cdc26b5be7696f4e7863adb9fbe5f0ed5 
> 
> Diff: https://reviews.apache.org/r/29912/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sriharsha Chintalapani
> 
>



Re: Review Request 31150: Patch for kafka-1952

2015-02-17 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31150/#review72907
---

Ship it!



core/src/main/scala/kafka/server/DelayedOperation.scala


I think we do not use capitalized letters for in-function comments?


- Guozhang Wang


On Feb. 18, 2015, 5:30 a.m., Jun Rao wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31150/
> ---
> 
> (Updated Feb. 18, 2015, 5:30 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: kafka-1952
> https://issues.apache.org/jira/browse/kafka-1952
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> kafka-1952
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/DelayedOperation.scala 
> fc06b01cad3a0497800df727fa2abf60772694f2 
> 
> Diff: https://reviews.apache.org/r/31150/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jun Rao
> 
>



Scala IDE debugging Unit Test Issues

2015-02-17 Thread Jonathan Rafalski
Hello all,

  Completely new to kafka and scala but thought I would get my feet
wet with a few of the newbie tasks.

  I was able to get the source up and running in the Scala IDE and I
am able to debug the examples, however when I try to debug any of the
unit tests in core (for example the
unit.kafka.consumer.ZookeeperConsumerConnectorTest class) I get the
java.lang.ClassNotFoundException:

Class not found unit.kafka.consumer.ZookeeperConsumerConnectorTest

  I have searched the usual sites (SE and mail archives) and
attempted a few solutions (adding the physical directories of the .class
and .scala files to the build path, adding junit libraries) but to no
avail.  My thought is that this is because the package
declaration in the unit tests points to the main packages, not the unit
test package, which is causing Eclipse to freak out (though I might be
way off base).

 Also, since I am just starting and have no allegiances yet: is Eclipse
the preferred IDE here, or should I be going with IntelliJ?

I apologize for the complete newb question, but I would be grateful for
any help setting up these unit tests so I can start contributing.

Thank you again.

Jonathan.


[jira] [Commented] (KAFKA-1952) High CPU Usage in 0.8.2 release

2015-02-17 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14325439#comment-14325439
 ] 

Jun Rao commented on KAFKA-1952:


Attached a patch for trunk. The CPU load and the end-to-end latency with the 
patch are comparable to those in 0.8.2.

> High CPU Usage in 0.8.2 release
> ---
>
> Key: KAFKA-1952
> URL: https://issues.apache.org/jira/browse/KAFKA-1952
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Jay Kreps
>Assignee: Jun Rao
>Priority: Critical
> Fix For: 0.8.2.1
>
> Attachments: kafka-1952.patch, kafka-1952.patch, 
> kafka-1952_2015-02-15_15:26:33.patch
>
>
> Brokers with high partition count see increased CPU usage when migrating from 
> 0.8.1.1 to 0.8.2.





[jira] [Commented] (KAFKA-1952) High CPU Usage in 0.8.2 release

2015-02-17 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14325435#comment-14325435
 ] 

Jun Rao commented on KAFKA-1952:


Created reviewboard https://reviews.apache.org/r/31150/diff/
 against branch origin/trunk

> High CPU Usage in 0.8.2 release
> ---
>
> Key: KAFKA-1952
> URL: https://issues.apache.org/jira/browse/KAFKA-1952
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Jay Kreps
>Assignee: Jun Rao
>Priority: Critical
> Fix For: 0.8.2.1
>
> Attachments: kafka-1952.patch, kafka-1952.patch, 
> kafka-1952_2015-02-15_15:26:33.patch
>
>
> Brokers with high partition count see increased CPU usage when migrating from 
> 0.8.1.1 to 0.8.2.





[jira] [Updated] (KAFKA-1952) High CPU Usage in 0.8.2 release

2015-02-17 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1952:
---
Attachment: kafka-1952.patch

> High CPU Usage in 0.8.2 release
> ---
>
> Key: KAFKA-1952
> URL: https://issues.apache.org/jira/browse/KAFKA-1952
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Jay Kreps
>Assignee: Jun Rao
>Priority: Critical
> Fix For: 0.8.2.1
>
> Attachments: kafka-1952.patch, kafka-1952.patch, 
> kafka-1952_2015-02-15_15:26:33.patch
>
>
> Brokers with high partition count see increased CPU usage when migrating from 
> 0.8.1.1 to 0.8.2.





Review Request 31150: Patch for kafka-1952

2015-02-17 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31150/
---

Review request for kafka.


Bugs: kafka-1952
https://issues.apache.org/jira/browse/kafka-1952


Repository: kafka


Description
---

kafka-1952


Diffs
-

  core/src/main/scala/kafka/server/DelayedOperation.scala 
fc06b01cad3a0497800df727fa2abf60772694f2 

Diff: https://reviews.apache.org/r/31150/diff/


Testing
---


Thanks,

Jun Rao



Build failed in Jenkins: KafkaPreCommit #7

2015-02-17 Thread Apache Jenkins Server
See 

Changes:

[jjkoshy] KAFKA-1943; MessageSizeTooLarge and MessageSetSizeTooLarge should not 
be counted toward broker-side producer failure rate

[jjkoshy] KAFKA-1914; Include total produce/fetch stats in broker topic metrics.

--
[...truncated 906 lines...]
kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProduc

[jira] [Commented] (KAFKA-1953) Disambiguate metrics from different purgatories

2015-02-17 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14325327#comment-14325327
 ] 

Joel Koshy commented on KAFKA-1953:
---

Updated reviewboard https://reviews.apache.org/r/31140/
 against branch origin/trunk

> Disambiguate metrics from different purgatories
> ---
>
> Key: KAFKA-1953
> URL: https://issues.apache.org/jira/browse/KAFKA-1953
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joel Koshy
>Assignee: Joel Koshy
> Attachments: KAFKA-1953.patch, KAFKA-1953_2015-02-17_18:23:55.patch
>
>
> After the purgatory refactoring, all the different purgatories map to the 
> same metric names. We need to disambiguate.





[jira] [Updated] (KAFKA-1953) Disambiguate metrics from different purgatories

2015-02-17 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy updated KAFKA-1953:
--
Attachment: KAFKA-1953_2015-02-17_18:23:55.patch

> Disambiguate metrics from different purgatories
> ---
>
> Key: KAFKA-1953
> URL: https://issues.apache.org/jira/browse/KAFKA-1953
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joel Koshy
>Assignee: Joel Koshy
> Attachments: KAFKA-1953.patch, KAFKA-1953_2015-02-17_18:23:55.patch
>
>
> After the purgatory refactoring, all the different purgatories map to the 
> same metric names. We need to disambiguate.





Re: Review Request 31140: Patch for KAFKA-1953

2015-02-17 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31140/
---

(Updated Feb. 18, 2015, 2:23 a.m.)


Review request for kafka.


Bugs: KAFKA-1953
https://issues.apache.org/jira/browse/KAFKA-1953


Repository: kafka


Description
---

KAFKA-1953; KAFKA-1962; Disambiguate purgatory metrics; restore delayed request 
metrics


Diffs (updated)
-

  core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala 
01cf1d91b7056bea7368ae4ea1e3c3646fc33619 
  core/src/main/scala/kafka/coordinator/DelayedHeartbeat.scala 
894d6edb4077cae081b9d4039353dd17e6f0c18f 
  core/src/main/scala/kafka/coordinator/DelayedJoinGroup.scala 
445bfa1bf8840620e10de2456875716dc66e789a 
  core/src/main/scala/kafka/coordinator/DelayedRebalance.scala 
b3b3749a21d35950a975e24dd9d1d53afbfaaee4 
  core/src/main/scala/kafka/server/DelayedFetch.scala 
dd602ee2e65c2cd4ec363c75fa5d0b3c038b1ed2 
  core/src/main/scala/kafka/server/DelayedOperation.scala 
fc06b01cad3a0497800df727fa2abf60772694f2 
  core/src/main/scala/kafka/server/DelayedProduce.scala 
c229088eb4f3db414225a688e149591ae0f810e7 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
b82ff55e1dd1fe3fee2de5ab4bbddc91b0146601 
  core/src/test/scala/unit/kafka/server/DelayedOperationTest.scala 
93f52d3222fc10b6d22ef6278365f6b026180418 

Diff: https://reviews.apache.org/r/31140/diff/


Testing
---


Thanks,

Joel Koshy



Build failed in Jenkins: Kafka-trunk #394

2015-02-17 Thread Apache Jenkins Server
See 

Changes:

[jjkoshy] KAFKA-1943; MessageSizeTooLarge and MessageSetSizeTooLarge should not 
be counted toward broker-side producer failure rate

[jjkoshy] KAFKA-1914; Include total produce/fetch stats in broker topic metrics.

--
[...truncated 1776 lines...]
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.<init>(EmbeddedZookeeper.scala:33)
at 
kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:33)
at kafka.integration.RollingBounceTest.setUp(RollingBounceTest.scala:50)

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.<init>(EmbeddedZookeeper.scala:33)
at 
kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:33)
at 
kafka.integration.AutoOffsetResetTest.kafka$integration$KafkaServerTestHarness$$super$setUp(AutoOffsetResetTest.scala:32)
at 
kafka.integration.KafkaServerTestHarness$class.setUp(KafkaServerTestHarness.scala:44)
at 
kafka.integration.AutoOffsetResetTest.setUp(AutoOffsetResetTest.scala:46)

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.<init>(EmbeddedZookeeper.scala:33)
at 
kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:33)
at 
kafka.integration.AutoOffsetResetTest.kafka$integration$KafkaServerTestHarness$$super$setUp(AutoOffsetResetTest.scala:32)
at 
kafka.integration.KafkaServerTestHarness$class.setUp(KafkaServerTestHarness.scala:44)
at 
kafka.integration.AutoOffsetResetTest.setUp(AutoOffsetResetTest.scala:46)

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.<init>(EmbeddedZookeeper.scala:33)
at 
kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:33)
at 
kafka.integration.AutoOffsetResetTest.kafka$integration$KafkaServerTestHarness$$super$setUp(AutoOffsetResetTest.scala:32)
at 
kafka.integration.KafkaServerTestHarness$class.setUp(KafkaServerTestHarness.scala:44)
at 
kafka.integration.AutoOffsetResetTest.setUp(AutoOffsetResetTest.scala:46)

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.<init>(EmbeddedZookeeper.scala:33)
at 
kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:33)
at 
kafka.integration.AutoOffsetResetTest.kafka$integration$KafkaServerTestHarness$$super$setUp(AutoOffsetResetTest.scala:32)
at 
kafka.integration.KafkaServerTestHarness$class.setUp(KafkaServerTestHarness.scala:44)
at 
kafka.integration.AutoOffsetResetTest.setUp(AutoOffsetResetTest.scala:46)

kafka.int

Re: Review Request 31140: Patch for KAFKA-1953

2015-02-17 Thread Joel Koshy


> On Feb. 18, 2015, 2:06 a.m., Guozhang Wang wrote:
> > core/src/main/scala/kafka/server/DelayedOperation.scala, line 286
> > 
> >
> > We can move the debug statement out of the synchronized block.

Good point.
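As a general pattern, the state needed for the log line can be captured inside the lock and the logging done outside the critical section. A sketch with invented names (not the actual DelayedOperation code):

```scala
// Illustration only - capture state under the lock, report outside it.
object LockScopeDemo {
  private val lock = new Object
  private var purged = 0

  def purgeAndReport(n: Int): String = {
    // Hold the lock only for the state mutation...
    val purgedNow = lock.synchronized {
      purged += n
      purged
    }
    // ...and build the (potentially slow) log message outside the lock.
    s"purged $purgedNow operations"
  }

  def main(args: Array[String]): Unit = println(purgeAndReport(3))
}
```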


> On Feb. 18, 2015, 2:06 a.m., Guozhang Wang wrote:
> > core/src/main/scala/kafka/server/DelayedProduce.scala, line 147
> > 
> >
> > Were topic/partition expiration metrics useful in the past?

Yes, I can see those as being useful. However, they are likely not among the 
critical metrics that you would want on a dashboard. We can discuss whether we 
want to keep them or not.


> On Feb. 18, 2015, 2:06 a.m., Guozhang Wang wrote:
> > core/src/main/scala/kafka/server/ReplicaManager.scala, lines 85-88
> > 
> >
> > We can remove "purgatoryName = " here.

When passing in literals it is best to provide named parameters; otherwise it 
is not always clear from the call site what the parameter actually means.
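For example (signature invented for illustration), compare the two call sites:

```scala
object NamedArgDemo {
  // Hypothetical factory mirroring the purgatory constructors under review.
  def makePurgatory(purgatoryName: String, purgeInterval: Int): String =
    s"$purgatoryName/$purgeInterval"

  def main(args: Array[String]): Unit = {
    // Without names, the reader must guess what "Produce" and 1000 mean:
    val a = makePurgatory("Produce", 1000)
    // With names, each literal's role is clear at the call site:
    val b = makePurgatory(purgatoryName = "Produce", purgeInterval = 1000)
    assert(a == b)
    println(b)  // Produce/1000
  }
}
```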


- Joel


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31140/#review72875
---


On Feb. 18, 2015, 12:48 a.m., Joel Koshy wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31140/
> ---
> 
> (Updated Feb. 18, 2015, 12:48 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1953
> https://issues.apache.org/jira/browse/KAFKA-1953
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1953; KAFKA-1962; Disambiguate purgatory metrics; restore delayed 
> request metrics
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala 
> 01cf1d91b7056bea7368ae4ea1e3c3646fc33619 
>   core/src/main/scala/kafka/coordinator/DelayedHeartbeat.scala 
> 894d6edb4077cae081b9d4039353dd17e6f0c18f 
>   core/src/main/scala/kafka/coordinator/DelayedJoinGroup.scala 
> 445bfa1bf8840620e10de2456875716dc66e789a 
>   core/src/main/scala/kafka/coordinator/DelayedRebalance.scala 
> b3b3749a21d35950a975e24dd9d1d53afbfaaee4 
>   core/src/main/scala/kafka/server/DelayedFetch.scala 
> dd602ee2e65c2cd4ec363c75fa5d0b3c038b1ed2 
>   core/src/main/scala/kafka/server/DelayedOperation.scala 
> fc06b01cad3a0497800df727fa2abf60772694f2 
>   core/src/main/scala/kafka/server/DelayedProduce.scala 
> c229088eb4f3db414225a688e149591ae0f810e7 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> ce36cc72606fb5441335f1c7466a7db8da3db499 
>   core/src/test/scala/unit/kafka/server/DelayedOperationTest.scala 
> 93f52d3222fc10b6d22ef6278365f6b026180418 
> 
> Diff: https://reviews.apache.org/r/31140/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Joel Koshy
> 
>



Build failed in Jenkins: Kafka-trunk #393

2015-02-17 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-1805; ProducerRecord should implement equals and hashCode; 
reviewed by Guozhang Wang

--
[...truncated 1695 lines...]
kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.

Re: Review Request 31140: Patch for KAFKA-1953

2015-02-17 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31140/#review72875
---



core/src/main/scala/kafka/server/DelayedOperation.scala


We can move the debug statement out of the synchronized block.



core/src/main/scala/kafka/server/DelayedProduce.scala


Were topic/partition expiration metrics useful in the past?



core/src/main/scala/kafka/server/ReplicaManager.scala


We can remove "purgatoryName = " here.


- Guozhang Wang


On Feb. 18, 2015, 12:48 a.m., Joel Koshy wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31140/
> ---
> 
> (Updated Feb. 18, 2015, 12:48 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1953
> https://issues.apache.org/jira/browse/KAFKA-1953
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1953; KAFKA-1962; Disambiguate purgatory metrics; restore delayed 
> request metrics
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala 
> 01cf1d91b7056bea7368ae4ea1e3c3646fc33619 
>   core/src/main/scala/kafka/coordinator/DelayedHeartbeat.scala 
> 894d6edb4077cae081b9d4039353dd17e6f0c18f 
>   core/src/main/scala/kafka/coordinator/DelayedJoinGroup.scala 
> 445bfa1bf8840620e10de2456875716dc66e789a 
>   core/src/main/scala/kafka/coordinator/DelayedRebalance.scala 
> b3b3749a21d35950a975e24dd9d1d53afbfaaee4 
>   core/src/main/scala/kafka/server/DelayedFetch.scala 
> dd602ee2e65c2cd4ec363c75fa5d0b3c038b1ed2 
>   core/src/main/scala/kafka/server/DelayedOperation.scala 
> fc06b01cad3a0497800df727fa2abf60772694f2 
>   core/src/main/scala/kafka/server/DelayedProduce.scala 
> c229088eb4f3db414225a688e149591ae0f810e7 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> ce36cc72606fb5441335f1c7466a7db8da3db499 
>   core/src/test/scala/unit/kafka/server/DelayedOperationTest.scala 
> 93f52d3222fc10b6d22ef6278365f6b026180418 
> 
> Diff: https://reviews.apache.org/r/31140/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Joel Koshy
> 
>



Re: Review Request 30809: Patch for KAFKA-1888

2015-02-17 Thread Abhishek Nigam

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30809/
---

(Updated Feb. 18, 2015, 1:59 a.m.)


Review request for kafka.


Bugs: KAFKA-1888
https://issues.apache.org/jira/browse/KAFKA-1888


Repository: kafka


Description (updated)
---

patch for KAFKA-1888


Diffs (updated)
-

  build.gradle 0f0fe60a74542efa91a0e727146e896edcaa38af 
  core/src/main/scala/kafka/tools/ContinuousValidationTest.java PRE-CREATION 
  system_test/broker_upgrade/bin/kafka-run-class.sh PRE-CREATION 
  system_test/broker_upgrade/bin/test.sh PRE-CREATION 
  system_test/broker_upgrade/configs/server1.properties PRE-CREATION 
  system_test/broker_upgrade/configs/server2.properties PRE-CREATION 
  system_test/broker_upgrade/configs/zookeeper_source.properties PRE-CREATION 

Diff: https://reviews.apache.org/r/30809/diff/


Testing
---

Scripted it to run 20 times without any failures.
Command-line: broker-upgrade/bin/test.sh  


Thanks,

Abhishek Nigam



Build failed in Jenkins: KafkaPreCommit #6

2015-02-17 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-1805; ProducerRecord should implement equals and hashCode; 
reviewed by Guozhang Wang

--
[...truncated 562 lines...]

org.apache.kafka.common.config.ConfigDefTest > testNullDefault PASSED

org.apache.kafka.common.config.ConfigDefTest > testMissingRequired PASSED

org.apache.kafka.common.config.ConfigDefTest > testDefinedTwice PASSED

org.apache.kafka.common.config.ConfigDefTest > testBadInputs PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultRange PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultString PASSED

org.apache.kafka.common.config.ConfigDefTest > testValidators PASSED

org.apache.kafka.common.requests.RequestResponseTest > testSerialization PASSED

org.apache.kafka.common.utils.CrcTest > testUpdate PASSED

org.apache.kafka.common.utils.CrcTest > testUpdateInt PASSED

org.apache.kafka.common.utils.UtilsTest > testGetHost PASSED

org.apache.kafka.common.utils.UtilsTest > testGetPort PASSED

org.apache.kafka.common.utils.UtilsTest > testFormatAddress PASSED

org.apache.kafka.common.utils.UtilsTest > testJoin PASSED

org.apache.kafka.common.utils.AbstractIteratorTest > testIterator PASSED

org.apache.kafka.common.utils.AbstractIteratorTest > testEmptyIterator PASSED
:contrib:compileJava UP-TO-DATE
:contrib:processResources UP-TO-DATE
:contrib:classes UP-TO-DATE
:contrib:compileTestJava UP-TO-DATE
:contrib:processTestResources UP-TO-DATE
:contrib:testClasses UP-TO-DATE
:contrib:test UP-TO-DATE
:clients:jar
:core:compileJava UP-TO-DATE
:core:compileScala
:318:
 non-variable type argument String in type pattern 
scala.collection.Map[String,_] is unchecked since it is eliminated by erasure
case Some(map: Map[String, _]) => 
   ^
:321:
 non-variable type argument String in type pattern 
scala.collection.Map[String,String] is unchecked since it is eliminated by 
erasure
case Some(config: Map[String, String]) =>
  ^
:42:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
responseCallback
^
:206:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
:207:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
:81:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
daemonThread(name, runnable(fun))
^
:361:
 Visited SCOPE_EXIT before visiting corresponding SCOPE_ENTER. SI-6049
  maybeCloseOldestConnection
  ^
:381:
 Visited SCOPE_EXIT before visiting corresponding SCOPE_ENTER. SI-6049
  try {
  ^
there were 12 feature warning(s); re-run with -feature for details
9 warnings found
:core:processResources UP-TO-DATE
:core:classes
:core:compileTestJava UP-TO-DATE
:core:compileTestScala
:169:
 This catches all Throwables. If this is really intended, use `case ex : 
Throwable` to clear this warning.
  case ex => fail()
   ^
one warning found
:core:processTestResources UP-TO-DATE
:core:testClasses
:core:test

unit.kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

unit.kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

unit.kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

unit.kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

unit.kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

unit.kafka.consumer.PartitionAssignorTest > testRoundRobinPartitionAssignor 
PASSED

unit.kafka.consumer.PartitionAssignorTest > testRangePartitionAssignor PASSED

unit.kafka.common.TopicTest > testInvalidTopicNames PASSED

unit.kafka.common.ConfigTest > testInvalidClientIds PASSED

unit.kafka.common.ConfigTest > testInvalidGroupIds PASSED

unit.kafka.utils.

[jira] [Resolved] (KAFKA-1914) Count TotalProduceRequestRate and TotalFetchRequestRate in BrokerTopicMetrics

2015-02-17 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy resolved KAFKA-1914.
---
Resolution: Fixed

Committed to trunk

> Count TotalProduceRequestRate and TotalFetchRequestRate in BrokerTopicMetrics
> -
>
> Key: KAFKA-1914
> URL: https://issues.apache.org/jira/browse/KAFKA-1914
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Aditya A Auradkar
>Assignee: Aditya Auradkar
> Attachments: KAFKA-1914.patch, KAFKA-1914_2015-02-17_15:46:27.patch
>
>
> Currently the BrokerTopicMetrics only counts the failedProduceRequestRate and 
> the failedFetchRequestRate. We should add 2 metrics to count the overall 
> produce/fetch request rates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
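
The per-topic plus aggregate counting described in KAFKA-1914 can be sketched as follows. This is a minimal illustration, not Kafka's actual implementation; the class and method names are assumptions.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Sketch (illustrative names, not Kafka's actual API): a per-topic counter
// plus an all-topics aggregate, mirroring how BrokerTopicMetrics can track
// total produce/fetch request rates alongside the per-topic ones.
public class TopicRequestCounters {
    private final Map<String, LongAdder> perTopic = new ConcurrentHashMap<>();
    private final LongAdder allTopics = new LongAdder();

    /** Record one produce request against both the topic and the aggregate. */
    public void markProduceRequest(String topic) {
        perTopic.computeIfAbsent(topic, t -> new LongAdder()).increment();
        allTopics.increment();
    }

    /** Requests seen for a single topic. */
    public long topicCount(String topic) {
        LongAdder a = perTopic.get(topic);
        return a == null ? 0 : a.sum();
    }

    /** Requests seen across all topics. */
    public long totalCount() {
        return allTopics.sum();
    }
}
```

Each mark updates both meters, so the aggregate stays consistent with the per-topic counts without a separate reconciliation pass.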


[jira] [Resolved] (KAFKA-1943) Producer request failure rate should not include MessageSetSizeTooLarge and MessageSizeTooLargeException

2015-02-17 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy resolved KAFKA-1943.
---
Resolution: Fixed

Committed to trunk

> Producer request failure rate should not include MessageSetSizeTooLarge and 
> MessageSizeTooLargeException
> 
>
> Key: KAFKA-1943
> URL: https://issues.apache.org/jira/browse/KAFKA-1943
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya A Auradkar
>Assignee: Aditya Auradkar
> Attachments: KAFKA-1943.patch
>
>
> If MessageSetSizeTooLargeException or MessageSizeTooLargeException is thrown 
> from Log, then ReplicaManager counts it as a failed produce request. My 
> understanding is that this metric should only count failures as a result of 
> broker issues and not bad requests sent by the clients.
> If the message or message set is too large, then it is a client side error 
> and should not be reported. (similar to NotLeaderForPartitionException, 
> UnknownTopicOrPartitionException).



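
The classification rule from KAFKA-1943 can be sketched as below. The exception types here are stub stand-ins for the ones named in the JIRA, not Kafka's real classes: client-caused errors should not increment the broker's failed-produce-request meter.

```java
// Sketch (assumption: the stub exceptions mirror the ones discussed above).
public class ProduceFailureClassifier {
    // Stub exception types standing in for Kafka's real exceptions.
    static class MessageSizeTooLargeException extends RuntimeException {}
    static class UnknownTopicOrPartitionException extends RuntimeException {}

    /**
     * True only for broker-side failures that should count toward the
     * failed-produce-request metric; client-side errors are excluded.
     */
    public static boolean countsAsBrokerFailure(RuntimeException e) {
        return !(e instanceof MessageSizeTooLargeException
              || e instanceof UnknownTopicOrPartitionException);
    }
}
```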


Re: Review Request 31140: Patch for KAFKA-1953

2015-02-17 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31140/#review72849
---


A couple of comments to call out.


core/src/main/scala/kafka/server/DelayedOperation.scala


FYI, I tried to avoid doing this, i.e., providing an explicit name.

We cannot simply do T.getClass.getName but have to look up manifests: 
http://stackoverflow.com/questions/8208179/scala-obtaining-a-class-object-from-a-generic-type

The previous code avoided this by extending this class.

E.g., class ProducerRequestPurgatory extends DelayedOperationPurgatory...

I actually prefer the explicit name to that approach because it forces 
people to think about the name. Otherwise, if people forget to extend it for 
multiple purgatory instances, the metrics could collide.
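
The explicit-name approach discussed above can be sketched as follows. This is illustrative, not Kafka's actual API: on the JVM, generic type parameters are erased at runtime (hence the manifest lookup mentioned above), so a purgatory cannot derive a unique metrics name from T; requiring the caller to supply one forces distinct names for distinct instances.

```java
// Sketch (illustrative names). The name is a required constructor argument,
// so two purgatory instances cannot silently share a metric namespace.
public class NamedPurgatory<T> {
    private final String purgatoryName;

    public NamedPurgatory(String purgatoryName) {
        this.purgatoryName = purgatoryName;
    }

    /** Prefix each metric with the purgatory name so instances never collide. */
    public String metricName(String metric) {
        return purgatoryName + metric;
    }
}
```

For example, a "Produce" and a "Fetch" purgatory would report "ProducePurgatorySize" and "FetchPurgatorySize" rather than one ambiguous "PurgatorySize".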



core/src/main/scala/kafka/server/DelayedProduce.scala


Note that this is similar to our expiration recording in the past, although 
it is slightly weird: each expired key counts toward the aggregate even if 
all the keys come from a single producer request.


- Joel Koshy


On Feb. 18, 2015, 12:48 a.m., Joel Koshy wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31140/
> ---
> 
> (Updated Feb. 18, 2015, 12:48 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1953
> https://issues.apache.org/jira/browse/KAFKA-1953
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1953; KAFKA-1962; Disambiguate purgatory metrics; restore delayed 
> request metrics
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala 
> 01cf1d91b7056bea7368ae4ea1e3c3646fc33619 
>   core/src/main/scala/kafka/coordinator/DelayedHeartbeat.scala 
> 894d6edb4077cae081b9d4039353dd17e6f0c18f 
>   core/src/main/scala/kafka/coordinator/DelayedJoinGroup.scala 
> 445bfa1bf8840620e10de2456875716dc66e789a 
>   core/src/main/scala/kafka/coordinator/DelayedRebalance.scala 
> b3b3749a21d35950a975e24dd9d1d53afbfaaee4 
>   core/src/main/scala/kafka/server/DelayedFetch.scala 
> dd602ee2e65c2cd4ec363c75fa5d0b3c038b1ed2 
>   core/src/main/scala/kafka/server/DelayedOperation.scala 
> fc06b01cad3a0497800df727fa2abf60772694f2 
>   core/src/main/scala/kafka/server/DelayedProduce.scala 
> c229088eb4f3db414225a688e149591ae0f810e7 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> ce36cc72606fb5441335f1c7466a7db8da3db499 
>   core/src/test/scala/unit/kafka/server/DelayedOperationTest.scala 
> 93f52d3222fc10b6d22ef6278365f6b026180418 
> 
> Diff: https://reviews.apache.org/r/31140/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Joel Koshy
> 
>



[jira] [Updated] (KAFKA-1953) Disambiguate metrics from different purgatories

2015-02-17 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy updated KAFKA-1953:
--
Status: Patch Available  (was: Open)

> Disambiguate metrics from different purgatories
> ---
>
> Key: KAFKA-1953
> URL: https://issues.apache.org/jira/browse/KAFKA-1953
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joel Koshy
>Assignee: Joel Koshy
> Attachments: KAFKA-1953.patch
>
>
> After the purgatory refactoring, all the different purgatories map to the 
> same metric names. We need to disambiguate.





[jira] [Updated] (KAFKA-1953) Disambiguate metrics from different purgatories

2015-02-17 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy updated KAFKA-1953:
--
Attachment: KAFKA-1953.patch

> Disambiguate metrics from different purgatories
> ---
>
> Key: KAFKA-1953
> URL: https://issues.apache.org/jira/browse/KAFKA-1953
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joel Koshy
>Assignee: Joel Koshy
> Attachments: KAFKA-1953.patch
>
>
> After the purgatory refactoring, all the different purgatories map to the 
> same metric names. We need to disambiguate.





[jira] [Commented] (KAFKA-1953) Disambiguate metrics from different purgatories

2015-02-17 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14325219#comment-14325219
 ] 

Joel Koshy commented on KAFKA-1953:
---

Created reviewboard https://reviews.apache.org/r/31140/
 against branch origin/trunk

> Disambiguate metrics from different purgatories
> ---
>
> Key: KAFKA-1953
> URL: https://issues.apache.org/jira/browse/KAFKA-1953
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joel Koshy
>Assignee: Joel Koshy
> Attachments: KAFKA-1953.patch
>
>
> After the purgatory refactoring, all the different purgatories map to the 
> same metric names. We need to disambiguate.





Review Request 31140: Patch for KAFKA-1953

2015-02-17 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31140/
---

Review request for kafka.


Bugs: KAFKA-1953
https://issues.apache.org/jira/browse/KAFKA-1953


Repository: kafka


Description
---

KAFKA-1953; KAFKA-1962; Disambiguate purgatory metrics; restore delayed request 
metrics


Diffs
-

  core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala 
01cf1d91b7056bea7368ae4ea1e3c3646fc33619 
  core/src/main/scala/kafka/coordinator/DelayedHeartbeat.scala 
894d6edb4077cae081b9d4039353dd17e6f0c18f 
  core/src/main/scala/kafka/coordinator/DelayedJoinGroup.scala 
445bfa1bf8840620e10de2456875716dc66e789a 
  core/src/main/scala/kafka/coordinator/DelayedRebalance.scala 
b3b3749a21d35950a975e24dd9d1d53afbfaaee4 
  core/src/main/scala/kafka/server/DelayedFetch.scala 
dd602ee2e65c2cd4ec363c75fa5d0b3c038b1ed2 
  core/src/main/scala/kafka/server/DelayedOperation.scala 
fc06b01cad3a0497800df727fa2abf60772694f2 
  core/src/main/scala/kafka/server/DelayedProduce.scala 
c229088eb4f3db414225a688e149591ae0f810e7 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
ce36cc72606fb5441335f1c7466a7db8da3db499 
  core/src/test/scala/unit/kafka/server/DelayedOperationTest.scala 
93f52d3222fc10b6d22ef6278365f6b026180418 

Diff: https://reviews.apache.org/r/31140/diff/


Testing
---


Thanks,

Joel Koshy



Re: Review Request 30570: Patch for KAFKA-1914

2015-02-17 Thread Aditya Auradkar


> On Feb. 18, 2015, 12:41 a.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/server/KafkaRequestHandler.scala, line 108
> > 
> >
> > I think the aggregate rates here are redundant to what's already there 
> > in RequestChannel's request metrics; but I think it is convenient to have 
> > it here as well.

Thanks! But do the ones in RequestChannel aggregate on a per-topic basis?


- Aditya


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30570/#review72845
---


On Feb. 17, 2015, 11:46 p.m., Aditya Auradkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30570/
> ---
> 
> (Updated Feb. 17, 2015, 11:46 p.m.)
> 
> 
> Review request for kafka and Joel Koshy.
> 
> 
> Bugs: KAFKA-1914
> https://issues.apache.org/jira/browse/KAFKA-1914
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> BrokerTopicStats changes to aggregate on a per-topic basis the total fetch 
> and produce requests
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/KafkaRequestHandler.scala 
> e4053fbe8ef78bf8bc39cb3f8ea4c21032613a16 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> fb948b9ab28c516e81dab14dcbe211dcd99842b6 
>   core/src/test/scala/unit/kafka/server/SimpleFetchTest.scala 
> ccf5e2e36260b2484181b81d1b06e81de972674b 
> 
> Diff: https://reviews.apache.org/r/30570/diff/
> 
> 
> Testing
> ---
> 
> I've added asserts to the SimpleFetchTest to count the number of fetch 
> requests. I'm going to file an additional jira to add unit tests for all the 
> BrokerTopicMetrics updated via ReplicaManager
> 
> 
> Thanks,
> 
> Aditya Auradkar
> 
>



[jira] [Commented] (KAFKA-1660) Ability to call close() with a timeout on the Java Kafka Producer.

2015-02-17 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14325210#comment-14325210
 ] 

Parth Brahmbhatt commented on KAFKA-1660:
-

[~jkreps] I have updated the review with my understanding of your last 
statement. If that is what you meant, I can add unit tests.

> Ability to call close() with a timeout on the Java Kafka Producer. 
> ---
>
> Key: KAFKA-1660
> URL: https://issues.apache.org/jira/browse/KAFKA-1660
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, producer 
>Affects Versions: 0.8.2.0
>Reporter: Andrew Stein
>Assignee: Parth Brahmbhatt
> Fix For: 0.8.3
>
> Attachments: KAFKA-1660.patch, KAFKA-1660_2015-02-17_16:41:19.patch
>
>
> I would like the ability to call {{close}} with a timeout on the Java 
> Client's KafkaProducer.
> h6. Workaround
> Currently, it is possible to ensure that {{close}} will return quickly by 
> first doing a {{future.get(timeout)}} on the last future produced on each 
> partition, but this means that the user has to define the partitions up front 
> at the time of {{send}} and track the returned {{future}}s.



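
The workaround described in the KAFKA-1660 ticket can be sketched as below. This is a minimal illustration under assumed names, not Kafka's client API: the caller tracks the last Future that send() returned for each partition, then bounds the total wait with get(timeout) before closing.

```java
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch (illustrative names): wait on the last send() future per partition,
// sharing a single deadline so the total wait never exceeds timeoutMs.
public class BoundedFlush {
    /** Returns true if every tracked future completed within timeoutMs. */
    public static boolean awaitAll(Map<Integer, Future<?>> lastFuturePerPartition,
                                   long timeoutMs) {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        for (Future<?> f : lastFuturePerPartition.values()) {
            long remaining = deadline - System.nanoTime();
            if (remaining <= 0) {
                return false;                 // time budget exhausted
            }
            try {
                f.get(remaining, TimeUnit.NANOSECONDS);
            } catch (TimeoutException e) {
                return false;                 // out of time
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            } catch (ExecutionException e) {
                // a failed send still completes its future; keep waiting on the rest
            }
        }
        return true;
    }
}
```

This is exactly the burden the ticket complains about: the caller must know the partitions up front and track a future per partition, which is why a close(timeout) on the producer itself is preferable.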


[jira] [Assigned] (KAFKA-1962) Restore delayed request metrics

2015-02-17 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy reassigned KAFKA-1962:
-

Assignee: Joel Koshy

I'll combine this with KAFKA-1953

> Restore delayed request metrics
> ---
>
> Key: KAFKA-1962
> URL: https://issues.apache.org/jira/browse/KAFKA-1962
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joel Koshy
>Assignee: Joel Koshy
>
> It seems we have lost the delayed request metrics that we had before:
> Producer/Fetch(follower/consumer) expires-per-second





[jira] [Updated] (KAFKA-1660) Ability to call close() with a timeout on the Java Kafka Producer.

2015-02-17 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt updated KAFKA-1660:

Assignee: Parth Brahmbhatt  (was: Jun Rao)
  Status: Patch Available  (was: Open)

> Ability to call close() with a timeout on the Java Kafka Producer. 
> ---
>
> Key: KAFKA-1660
> URL: https://issues.apache.org/jira/browse/KAFKA-1660
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, producer 
>Affects Versions: 0.8.2.0
>Reporter: Andrew Stein
>Assignee: Parth Brahmbhatt
> Fix For: 0.8.3
>
> Attachments: KAFKA-1660.patch, KAFKA-1660_2015-02-17_16:41:19.patch
>
>
> I would like the ability to call {{close}} with a timeout on the Java 
> Client's KafkaProducer.
> h6. Workaround
> Currently, it is possible to ensure that {{close}} will return quickly by 
> first doing a {{future.get(timeout)}} on the last future produced on each 
> partition, but this means that the user has to define the partitions up front 
> at the time of {{send}} and track the returned {{future}}s.





[jira] [Updated] (KAFKA-1660) Ability to call close() with a timeout on the Java Kafka Producer.

2015-02-17 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt updated KAFKA-1660:

Attachment: KAFKA-1660_2015-02-17_16:41:19.patch

> Ability to call close() with a timeout on the Java Kafka Producer. 
> ---
>
> Key: KAFKA-1660
> URL: https://issues.apache.org/jira/browse/KAFKA-1660
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, producer 
>Affects Versions: 0.8.2.0
>Reporter: Andrew Stein
>Assignee: Jun Rao
> Fix For: 0.8.3
>
> Attachments: KAFKA-1660.patch, KAFKA-1660_2015-02-17_16:41:19.patch
>
>
> I would like the ability to call {{close}} with a timeout on the Java 
> Client's KafkaProducer.
> h6. Workaround
> Currently, it is possible to ensure that {{close}} will return quickly by 
> first doing a {{future.get(timeout)}} on the last future produced on each 
> partition, but this means that the user has to define the partitions up front 
> at the time of {{send}} and track the returned {{future}}s.





[jira] [Commented] (KAFKA-1660) Ability to call close() with a timeout on the Java Kafka Producer.

2015-02-17 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14325207#comment-14325207
 ] 

Parth Brahmbhatt commented on KAFKA-1660:
-

Updated reviewboard https://reviews.apache.org/r/29467/diff/
 against branch origin/trunk

> Ability to call close() with a timeout on the Java Kafka Producer. 
> ---
>
> Key: KAFKA-1660
> URL: https://issues.apache.org/jira/browse/KAFKA-1660
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, producer 
>Affects Versions: 0.8.2.0
>Reporter: Andrew Stein
>Assignee: Jun Rao
> Fix For: 0.8.3
>
> Attachments: KAFKA-1660.patch, KAFKA-1660_2015-02-17_16:41:19.patch
>
>
> I would like the ability to call {{close}} with a timeout on the Java 
> Client's KafkaProducer.
> h6. Workaround
> Currently, it is possible to ensure that {{close}} will return quickly by 
> first doing a {{future.get(timeout)}} on the last future produced on each 
> partition, but this means that the user has to define the partitions up front 
> at the time of {{send}} and track the returned {{future}}s.





Re: Review Request 29467: Patch for KAFKA-1660

2015-02-17 Thread Parth Brahmbhatt

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29467/
---

(Updated Feb. 18, 2015, 12:41 a.m.)


Review request for kafka.


Bugs: KAFKA-1660
https://issues.apache.org/jira/browse/KAFKA-1660


Repository: kafka


Description
---

Merge remote-tracking branch 'origin/trunk' into KAFKA-1660

Conflicts:

clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
1fd6917c8a5131254c740abad7f7228a47e3628c 
  clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
84530f2b948f9abd74203db48707e490dd9c81a5 
  clients/src/main/java/org/apache/kafka/clients/producer/Producer.java 
17fe541588d462c68c33f6209717cc4015e9b62f 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
ed9c63a6679e3aaf83d19fde19268553a4c107c2 

Diff: https://reviews.apache.org/r/29467/diff/


Testing
---

existing unit tests passed.


Thanks,

Parth Brahmbhatt



Re: Review Request 30570: Patch for KAFKA-1914

2015-02-17 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30570/#review72845
---

Ship it!



core/src/main/scala/kafka/server/KafkaRequestHandler.scala


I think the aggregate rates here are redundant to what's already there in 
RequestChannel's request metrics; but I think it is convenient to have it here 
as well.


- Joel Koshy


On Feb. 17, 2015, 11:46 p.m., Aditya Auradkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30570/
> ---
> 
> (Updated Feb. 17, 2015, 11:46 p.m.)
> 
> 
> Review request for kafka and Joel Koshy.
> 
> 
> Bugs: KAFKA-1914
> https://issues.apache.org/jira/browse/KAFKA-1914
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> BrokerTopicStats changes to aggregate on a per-topic basis the total fetch 
> and produce requests
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/KafkaRequestHandler.scala 
> e4053fbe8ef78bf8bc39cb3f8ea4c21032613a16 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> fb948b9ab28c516e81dab14dcbe211dcd99842b6 
>   core/src/test/scala/unit/kafka/server/SimpleFetchTest.scala 
> ccf5e2e36260b2484181b81d1b06e81de972674b 
> 
> Diff: https://reviews.apache.org/r/30570/diff/
> 
> 
> Testing
> ---
> 
> I've added asserts to the SimpleFetchTest to count the number of fetch 
> requests. I'm going to file an additional jira to add unit tests for all the 
> BrokerTopicMetrics updated via ReplicaManager
> 
> 
> Thanks,
> 
> Aditya Auradkar
> 
>



Re: Review Request 29467: Patch for KAFKA-1660

2015-02-17 Thread Parth Brahmbhatt

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29467/
---

(Updated Feb. 18, 2015, 12:36 a.m.)


Review request for kafka.


Bugs: KAFKA-1660
https://issues.apache.org/jira/browse/KAFKA-1660


Repository: kafka


Description (updated)
---

Merge remote-tracking branch 'origin/trunk' into KAFKA-1660

Conflicts:

clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
1fd6917c8a5131254c740abad7f7228a47e3628c 
  clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
84530f2b948f9abd74203db48707e490dd9c81a5 
  clients/src/main/java/org/apache/kafka/clients/producer/Producer.java 
17fe541588d462c68c33f6209717cc4015e9b62f 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
ed9c63a6679e3aaf83d19fde19268553a4c107c2 

Diff: https://reviews.apache.org/r/29467/diff/


Testing
---

existing unit tests passed.


Thanks,

Parth Brahmbhatt



Re: Review Request 30848: Patch for KAFKA-1943

2015-02-17 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30848/#review72837
---

Ship it!


Ship It!

- Joel Koshy


On Feb. 10, 2015, 10:17 p.m., Aditya Auradkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30848/
> ---
> 
> (Updated Feb. 10, 2015, 10:17 p.m.)
> 
> 
> Review request for kafka and Joel Koshy.
> 
> 
> Bugs: KAFKA-1943
> https://issues.apache.org/jira/browse/KAFKA-1943
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Change to not count MessageSetSizeTooLarge and MessageSizeTooLarge exceptions 
> as failed producer requests from the brokers perspective
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> fb948b9ab28c516e81dab14dcbe211dcd99842b6 
> 
> Diff: https://reviews.apache.org/r/30848/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Aditya Auradkar
> 
>



[jira] [Updated] (KAFKA-1805) Kafka ProducerRecord should implement equals

2015-02-17 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1805:
-
Assignee: Parth Brahmbhatt  (was: Thomas Omans)

> Kafka ProducerRecord should implement equals
> 
>
> Key: KAFKA-1805
> URL: https://issues.apache.org/jira/browse/KAFKA-1805
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 0.8.2.0
>Reporter: Thomas Omans
>Assignee: Parth Brahmbhatt
>Priority: Minor
> Attachments: KAFKA-1805.patch, KAFKA-1805_2014-12-29_16:37:11.patch, 
> KAFKA-1805_2015-02-11_14:30:14.patch, KAFKA-1805_2015-02-11_14:34:28.patch, 
> KAFKA-1805_2015-02-11_14:37:09.patch, KAFKA-1805_2015-02-11_14:37:41.patch, 
> KAFKA-1805_2015-02-11_14:49:10.patch
>
>
> I was writing some tests to verify that I am calculating my partitions, 
> topics, keys, and values properly in my producer code and discovered that 
> ProducerRecord does not implement equality.
> This makes tests integrating kafka particularly awkward.
> https://github.com/apache/kafka/blob/0.8.2-beta/clients/src/main/java/org/apache/kafka/clients/producer/ProducerRecord.java
> I can whip up a patch since this is essentially just a value object.
> Thanks,
> Thomas Omans



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
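The KAFKA-1805 fix is mechanical for a value object. As a rough illustration of the kind of field-wise equals/hashCode the ticket asks for, here is a simplified stand-in class with topic/partition/key/value fields — this is a hypothetical sketch, not the actual Kafka patch or the real ProducerRecord API:

```java
import java.util.Arrays;
import java.util.Objects;

// Simplified stand-in for a ProducerRecord-like value object (hypothetical,
// not the real Kafka class). Field-wise equals/hashCode lets tests compare
// records directly, which is the awkwardness the ticket describes.
final class RecordValue {
    private final String topic;
    private final Integer partition; // nullable, like a record's optional partition
    private final byte[] key;
    private final byte[] value;

    RecordValue(String topic, Integer partition, byte[] key, byte[] value) {
        this.topic = topic;
        this.partition = partition;
        this.key = key;
        this.value = value;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof RecordValue)) return false;
        RecordValue that = (RecordValue) o;
        return Objects.equals(topic, that.topic)
                && Objects.equals(partition, that.partition)
                && Arrays.equals(key, that.key)     // byte[] needs Arrays.equals,
                && Arrays.equals(value, that.value); // not Object.equals
    }

    @Override
    public int hashCode() {
        int result = Objects.hash(topic, partition);
        result = 31 * result + Arrays.hashCode(key);
        result = 31 * result + Arrays.hashCode(value);
        return result;
    }
}
```

Note the byte-array fields: `Objects.equals` on arrays compares references, so `Arrays.equals`/`Arrays.hashCode` are needed for content equality.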


Re: Review Request 30570: Patch for KAFKA-1914

2015-02-17 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30570/#review72836
---

Ship it!


Ship It!

- Guozhang Wang


On Feb. 17, 2015, 11:46 p.m., Aditya Auradkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30570/
> ---
> 
> (Updated Feb. 17, 2015, 11:46 p.m.)
> 
> 
> Review request for kafka and Joel Koshy.
> 
> 
> Bugs: KAFKA-1914
> https://issues.apache.org/jira/browse/KAFKA-1914
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> BrokerTopicStats changes to aggregate on a per-topic basis the total fetch 
> and produce requests
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/KafkaRequestHandler.scala 
> e4053fbe8ef78bf8bc39cb3f8ea4c21032613a16 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> fb948b9ab28c516e81dab14dcbe211dcd99842b6 
>   core/src/test/scala/unit/kafka/server/SimpleFetchTest.scala 
> ccf5e2e36260b2484181b81d1b06e81de972674b 
> 
> Diff: https://reviews.apache.org/r/30570/diff/
> 
> 
> Testing
> ---
> 
> I've added asserts to the SimpleFetchTest to count the number of fetch 
> requests. I'm going to file an additional jira to add unit tests for all the 
> BrokerTopicMetrics updated via ReplicaManager
> 
> 
> Thanks,
> 
> Aditya Auradkar
> 
>



[jira] [Resolved] (KAFKA-1805) Kafka ProducerRecord should implement equals

2015-02-17 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-1805.
--
Resolution: Fixed

> Kafka ProducerRecord should implement equals
> 
>
> Key: KAFKA-1805
> URL: https://issues.apache.org/jira/browse/KAFKA-1805
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 0.8.2.0
>Reporter: Thomas Omans
>Assignee: Parth Brahmbhatt
>Priority: Minor
> Attachments: KAFKA-1805.patch, KAFKA-1805_2014-12-29_16:37:11.patch, 
> KAFKA-1805_2015-02-11_14:30:14.patch, KAFKA-1805_2015-02-11_14:34:28.patch, 
> KAFKA-1805_2015-02-11_14:37:09.patch, KAFKA-1805_2015-02-11_14:37:41.patch, 
> KAFKA-1805_2015-02-11_14:49:10.patch
>
>
> I was writing some tests to verify that I am calculating my partitions, 
> topics, keys, and values properly in my producer code and discovered that 
> ProducerRecord does not implement equality.
> This makes tests integrating kafka particularly awkward.
> https://github.com/apache/kafka/blob/0.8.2-beta/clients/src/main/java/org/apache/kafka/clients/producer/ProducerRecord.java
> I can whip up a patch since this is essentially just a value object.
> Thanks,
> Thomas Omans



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1805) Kafka ProducerRecord should implement equals

2015-02-17 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14325187#comment-14325187
 ] 

Guozhang Wang commented on KAFKA-1805:
--

Thanks for the patch, committed to trunk.

> Kafka ProducerRecord should implement equals
> 
>
> Key: KAFKA-1805
> URL: https://issues.apache.org/jira/browse/KAFKA-1805
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 0.8.2.0
>Reporter: Thomas Omans
>Assignee: Parth Brahmbhatt
>Priority: Minor
> Attachments: KAFKA-1805.patch, KAFKA-1805_2014-12-29_16:37:11.patch, 
> KAFKA-1805_2015-02-11_14:30:14.patch, KAFKA-1805_2015-02-11_14:34:28.patch, 
> KAFKA-1805_2015-02-11_14:37:09.patch, KAFKA-1805_2015-02-11_14:37:41.patch, 
> KAFKA-1805_2015-02-11_14:49:10.patch
>
>
> I was writing some tests to verify that I am calculating my partitions, 
> topics, keys, and values properly in my producer code and discovered that 
> ProducerRecord does not implement equality.
> This makes tests integrating kafka particularly awkward.
> https://github.com/apache/kafka/blob/0.8.2-beta/clients/src/main/java/org/apache/kafka/clients/producer/ProducerRecord.java
> I can whip up a patch since this is essentially just a value object.
> Thanks,
> Thomas Omans



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30809: Patch for KAFKA-1888

2015-02-17 Thread Mayuresh Gharat

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30809/#review72786
---



core/src/main/scala/kafka/tools/ContinuousValidationTest.java


Can you remove this?



core/src/main/scala/kafka/tools/ContinuousValidationTest.java


Any reason for not making this final?

Static variables should come before instance variables.

It's a common convention to prefix instance variables with an underscore, e.g. _groupId.



core/src/main/scala/kafka/tools/ContinuousValidationTest.java


Same here: any reason for not making it final?



core/src/main/scala/kafka/tools/ContinuousValidationTest.java


Why do we need a flip?



core/src/main/scala/kafka/tools/ContinuousValidationTest.java


Same here: can we use isInterrupted()?

http://docs.oracle.com/javase/tutorial/essential/concurrency/interrupt.html



core/src/main/scala/kafka/tools/ContinuousValidationTest.java


This might end up in an infinite loop if something goes wrong with the cluster, 
right?
Should we have a maximum number of retries?
What do you think?



core/src/main/scala/kafka/tools/ContinuousValidationTest.java


You might want to do:

Thread.currentThread().interrupt(). Then you might not require the 
blockingCallInterrupted flag:

http://www.ibm.com/developerworks/library/j-jtp05236/

http://www.javamex.com/tutorials/threads/thread_interruption_2.shtml

What do you think?



core/src/main/scala/kafka/tools/ContinuousValidationTest.java


Can we put this in a separate method like init()?
The constructor can then be used mainly for assignment. What do you think?



core/src/main/scala/kafka/tools/ContinuousValidationTest.java


When is the blockingCallInterrupted set to true?



core/src/main/scala/kafka/tools/ContinuousValidationTest.java


Thread.interrupted resets the interrupt flag. Can we use isInterrupted()?
http://docs.oracle.com/javase/tutorial/essential/concurrency/interrupt.html



core/src/main/scala/kafka/tools/ContinuousValidationTest.java


Formatting: spaces.

Will there be a case where:
(evt.sequenceId < lastEventSeenSequenceId.get() && 
evt.eventProducedTimestamp > lastEventSeenTimeProduced.get())?



core/src/main/scala/kafka/tools/ContinuousValidationTest.java


The common format of commenting is:

// this is a comment

Personally I don't mind, but that's the standard I understood 
from the reviews that I got.



core/src/main/scala/kafka/tools/ContinuousValidationTest.java


Are you assuming that the first argument will be some key?



core/src/main/scala/kafka/tools/ContinuousValidationTest.java


What do you mean by "rebuild state later"?


- Mayuresh Gharat
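The distinction the review keeps returning to — Thread.interrupted() versus isInterrupted() — is easy to demonstrate: the static interrupted() tests the current thread's flag and clears it as a side effect, while the instance method isInterrupted() only observes it. A minimal self-contained illustration (not from the patch under review):

```java
// Demonstrates why Thread.interrupted() and isInterrupted() are not
// interchangeable: interrupted() clears the interrupt flag as a side effect,
// so a second check after catching an interrupt can silently see "false".
public class InterruptFlagDemo {
    // Returns the four observations in order.
    static boolean[] probe() {
        Thread.currentThread().interrupt();          // set the flag
        boolean a = Thread.currentThread().isInterrupted(); // true: flag observed
        boolean b = Thread.currentThread().isInterrupted(); // true: not cleared
        boolean c = Thread.interrupted();                   // true: but this clears it
        boolean d = Thread.interrupted();                   // false: already cleared
        return new boolean[] {a, b, c, d};
    }

    public static void main(String[] args) {
        for (boolean x : probe()) System.out.println(x);
    }
}
```

This is why swallowing an InterruptedException and then checking Thread.interrupted() can lose the interruption; re-asserting the flag with Thread.currentThread().interrupt(), as suggested above, preserves it for callers.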


On Feb. 9, 2015, 11:53 p.m., Abhishek Nigam wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30809/
> ---
> 
> (Updated Feb. 9, 2015, 11:53 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1888
> https://issues.apache.org/jira/browse/KAFKA-1888
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Essentially this test does the following:
> a) Start a java process with 3 threads
>Producer - producing continuously
>Consumer - consuming from latest
>Bootstrap consumer - started after a pause to bootstrap from beginning.
>
>It uses sequentially increasing numbers and timestamps to make sure we are 
> not receiving out of order messages and do real-time validation. 
>
> b) Script which wraps this and takes two directories which contain the kafka 
> version specific jars:
> kafka_2.10-0.8.3-SNAPSHOT-test.jar
> kafka_2.10-0.8.3-SNAPSHOT.jar
> 
> The first argument is the directory containing the older version of the jars.
> The second argument is the directory containing the newer version of the jars.
> 
> The reason for choosing directories was because there are two jars in these 
> directories:
> 
> 
> Diffs
> -
> 
>   build.gradle c3e6bb839ad65c512c9db4695d2bb49b82c80da5 
>   c

[jira] [Updated] (KAFKA-1914) Count TotalProduceRequestRate and TotalFetchRequestRate in BrokerTopicMetrics

2015-02-17 Thread Aditya A Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya A Auradkar updated KAFKA-1914:
-
Attachment: KAFKA-1914_2015-02-17_15:46:27.patch

> Count TotalProduceRequestRate and TotalFetchRequestRate in BrokerTopicMetrics
> -
>
> Key: KAFKA-1914
> URL: https://issues.apache.org/jira/browse/KAFKA-1914
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Aditya A Auradkar
>Assignee: Aditya Auradkar
> Attachments: KAFKA-1914.patch, KAFKA-1914_2015-02-17_15:46:27.patch
>
>
> Currently the BrokerTopicMetrics only counts the failedProduceRequestRate and 
> the failedFetchRequestRate. We should add 2 metrics to count the overall 
> produce/fetch request rates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
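The aggregation KAFKA-1914 describes — each produce/fetch request counted once against its topic and once against an all-topics total — can be sketched with plain JDK counters. The real patch uses Kafka's metrics meters (BrokerTopicStats), so the class and method names below are illustrative only:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Illustrative sketch (hypothetical names) of the BrokerTopicMetrics idea:
// every request increments a per-topic counter and a shared all-topics
// counter, mirroring BrokerTopicStats.getBrokerTopicStats(topic) plus
// getBrokerAllTopicsStats() in the actual patch.
class TopicRequestStats {
    private final Map<String, LongAdder> perTopic = new ConcurrentHashMap<>();
    private final LongAdder allTopics = new LongAdder();

    void recordRequest(String topic) {
        perTopic.computeIfAbsent(topic, t -> new LongAdder()).increment();
        allTopics.increment(); // the "AllTopics" aggregate is updated on every request
    }

    long topicCount(String topic) {
        LongAdder a = perTopic.get(topic);
        return a == null ? 0 : a.sum();
    }

    long totalCount() {
        return allTopics.sum();
    }
}
```

The review comment above ("Should this be BrokerTopicStats.getBrokerAllTopicsStats()?") is exactly about asserting against the aggregate counter rather than a per-topic one.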


Re: Review Request 30570: Patch for KAFKA-1914

2015-02-17 Thread Aditya Auradkar


> On Feb. 17, 2015, 11:02 p.m., Guozhang Wang wrote:
> > core/src/test/scala/unit/kafka/server/SimpleFetchTest.scala, line 137
> > 
> >
> > Should this be BrokerTopicStats.getBrokerAllTopicsStats()?

Good catch. Fixed


- Aditya


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30570/#review72817
---


On Feb. 17, 2015, 11:46 p.m., Aditya Auradkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30570/
> ---
> 
> (Updated Feb. 17, 2015, 11:46 p.m.)
> 
> 
> Review request for kafka and Joel Koshy.
> 
> 
> Bugs: KAFKA-1914
> https://issues.apache.org/jira/browse/KAFKA-1914
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> BrokerTopicStats changes to aggregate on a per-topic basis the total fetch 
> and produce requests
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/KafkaRequestHandler.scala 
> e4053fbe8ef78bf8bc39cb3f8ea4c21032613a16 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> fb948b9ab28c516e81dab14dcbe211dcd99842b6 
>   core/src/test/scala/unit/kafka/server/SimpleFetchTest.scala 
> ccf5e2e36260b2484181b81d1b06e81de972674b 
> 
> Diff: https://reviews.apache.org/r/30570/diff/
> 
> 
> Testing
> ---
> 
> I've added asserts to the SimpleFetchTest to count the number of fetch 
> requests. I'm going to file an additional jira to add unit tests for all the 
> BrokerTopicMetrics updated via ReplicaManager
> 
> 
> Thanks,
> 
> Aditya Auradkar
> 
>



Re: Review Request 30763: Patch for KAFKA-1865

2015-02-17 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30763/#review72821
---



clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java


Add @Throws KafkaException; in fact, flush interruption should never happen 
as ProduceRequestResult does not have interrupt APIs right?



core/src/test/scala/unit/kafka/utils/TestUtils.scala


Indentation?


- Guozhang Wang


On Feb. 7, 2015, 8:59 p.m., Jay Kreps wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30763/
> ---
> 
> (Updated Feb. 7, 2015, 8:59 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1865
> https://issues.apache.org/jira/browse/KAFKA-1865
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1865 Add a flush() method to the producer.
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> 1fd6917c8a5131254c740abad7f7228a47e3628c 
>   clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
> 84530f2b948f9abd74203db48707e490dd9c81a5 
>   clients/src/main/java/org/apache/kafka/clients/producer/Producer.java 
> 17fe541588d462c68c33f6209717cc4015e9b62f 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
>  ecfe2144d778a5d9b614df5278b9f0a15637f10b 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordBatch.java
>  dd0af8aee98abed5d4a0dc50989e37888bb353fe 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/MockProducerTest.java 
> 75513b0bdd439329c5771d87436ef83fda853bfb 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/RecordAccumulatorTest.java
>  83338633717cfa4ef7cf2a590b5aa6b9c8cb1dd2 
>   core/src/test/scala/integration/kafka/api/ProducerSendTest.scala 
> b15237b76def3b234924280fa3fdb25dbb0cc0dc 
>   core/src/test/scala/unit/kafka/utils/TestUtils.scala 
> 54755e8dd3f23ced313067566cd4ea867f8a496e 
> 
> Diff: https://reviews.apache.org/r/30763/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jay Kreps
> 
>



[jira] [Commented] (KAFKA-1914) Count TotalProduceRequestRate and TotalFetchRequestRate in BrokerTopicMetrics

2015-02-17 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14325135#comment-14325135
 ] 

Aditya A Auradkar commented on KAFKA-1914:
--

Updated reviewboard https://reviews.apache.org/r/30570/diff/
 against branch origin/trunk

> Count TotalProduceRequestRate and TotalFetchRequestRate in BrokerTopicMetrics
> -
>
> Key: KAFKA-1914
> URL: https://issues.apache.org/jira/browse/KAFKA-1914
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Aditya A Auradkar
>Assignee: Aditya Auradkar
> Attachments: KAFKA-1914.patch, KAFKA-1914_2015-02-17_15:46:27.patch
>
>
> Currently the BrokerTopicMetrics only counts the failedProduceRequestRate and 
> the failedFetchRequestRate. We should add 2 metrics to count the overall 
> produce/fetch request rates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30570: Patch for KAFKA-1914

2015-02-17 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30570/
---

(Updated Feb. 17, 2015, 11:46 p.m.)


Review request for kafka and Joel Koshy.


Bugs: KAFKA-1914
https://issues.apache.org/jira/browse/KAFKA-1914


Repository: kafka


Description (updated)
---

BrokerTopicStats changes to aggregate on a per-topic basis the total fetch and 
produce requests


Diffs (updated)
-

  core/src/main/scala/kafka/server/KafkaRequestHandler.scala 
e4053fbe8ef78bf8bc39cb3f8ea4c21032613a16 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
fb948b9ab28c516e81dab14dcbe211dcd99842b6 
  core/src/test/scala/unit/kafka/server/SimpleFetchTest.scala 
ccf5e2e36260b2484181b81d1b06e81de972674b 

Diff: https://reviews.apache.org/r/30570/diff/


Testing
---

I've added asserts to the SimpleFetchTest to count the number of fetch 
requests. I'm going to file an additional jira to add unit tests for all the 
BrokerTopicMetrics updated via ReplicaManager


Thanks,

Aditya Auradkar



[jira] [Updated] (KAFKA-1952) High CPU Usage in 0.8.2 release

2015-02-17 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1952:
---
Fix Version/s: (was: 0.8.2.0)
   0.8.2.1

> High CPU Usage in 0.8.2 release
> ---
>
> Key: KAFKA-1952
> URL: https://issues.apache.org/jira/browse/KAFKA-1952
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Jay Kreps
>Assignee: Jun Rao
>Priority: Critical
> Fix For: 0.8.2.1
>
> Attachments: kafka-1952.patch, kafka-1952_2015-02-15_15:26:33.patch
>
>
> Brokers with high partition count see increased CPU usage when migrating from 
> 0.8.1.1 to 0.8.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30570: Patch for KAFKA-1914

2015-02-17 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30570/#review72817
---



core/src/test/scala/unit/kafka/server/SimpleFetchTest.scala


Should this be BrokerTopicStats.getBrokerAllTopicsStats()?


- Guozhang Wang


On Feb. 3, 2015, 7:13 p.m., Aditya Auradkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30570/
> ---
> 
> (Updated Feb. 3, 2015, 7:13 p.m.)
> 
> 
> Review request for kafka and Joel Koshy.
> 
> 
> Bugs: KAFKA-1914
> https://issues.apache.org/jira/browse/KAFKA-1914
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Fixing KAFKA-1914. Adding metrics to count total number of produce and fetch 
> metrics
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/KafkaRequestHandler.scala 
> e4053fbe8ef78bf8bc39cb3f8ea4c21032613a16 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> fb948b9ab28c516e81dab14dcbe211dcd99842b6 
>   core/src/test/scala/unit/kafka/server/SimpleFetchTest.scala 
> ccf5e2e36260b2484181b81d1b06e81de972674b 
> 
> Diff: https://reviews.apache.org/r/30570/diff/
> 
> 
> Testing
> ---
> 
> I've added asserts to the SimpleFetchTest to count the number of fetch 
> requests. I'm going to file an additional jira to add unit tests for all the 
> BrokerTopicMetrics updated via ReplicaManager
> 
> 
> Thanks,
> 
> Aditya Auradkar
> 
>



Re: [DISCUSS] KIP-8 Add a flush method to the new Java producer

2015-02-17 Thread Jay Kreps
Yeah there was a separate thread on adding a client-side timeout to
requests. We should have this in the new java clients, it just isn't there
yet. When we do this the flush() call will implicitly have the same timeout
as the requests (since they will complete or fail by then). I think this
makes flush(timeout) and potentially close(timeout) both unnecessary.

-Jay
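The semantics under discussion — flush() blocks until every record sent before the call has completed or failed, with no timeout parameter of its own — can be sketched independently of the real client. SketchProducer below is a hypothetical stand-in, not the KafkaProducer API:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical stand-in illustrating the KIP-8 flush() contract: flush()
// blocks until every record accepted before the call has completed or
// failed. Not the real org.apache.kafka.clients.producer.KafkaProducer.
class SketchProducer implements AutoCloseable {
    private final ExecutorService sender = Executors.newSingleThreadExecutor();
    private final ConcurrentLinkedQueue<Future<?>> pending = new ConcurrentLinkedQueue<>();

    Future<?> send(Runnable record) {
        Future<?> f = sender.submit(record);
        pending.add(f);
        return f;
    }

    /** Blocks until all previously sent records complete; a failed record
     *  still counts as "completed", so flush() does not block on errors. */
    void flush() throws InterruptedException {
        for (Future<?> f; (f = pending.poll()) != null; ) {
            try {
                f.get(); // no timeout: mirrors the "flush() takes no timeout" position
            } catch (ExecutionException e) {
                // record failed; flush still makes progress
            }
        }
    }

    @Override
    public void close() throws InterruptedException {
        flush();           // clean shutdown: drain everything before exiting
        sender.shutdown();
    }
}
```

With a client-side request timeout in place, every pending future completes or errors within that bound — which is the argument above for why a separate flush(timeout) (or tryFlush) becomes unnecessary.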

On Tue, Feb 17, 2015 at 2:44 PM, Guozhang Wang  wrote:

> In the scala clients we have the socket.timeout config as we are using
> blocking IOs, when such timeout is reached the TimeoutException will be
> thrown from the socket and the client can handle it accordingly; in the
> java clients we are switching to non-blocking IOs and hence we will not
> have the socket timeout any more.
>
> I agree that we could add this client request timeout back in the java
> clients, in addition to allowing client / server's non-blocking selector to
> close idle sockets.
>
> Guozhang
>
> On Tue, Feb 17, 2015 at 1:55 PM, Jiangjie Qin 
> wrote:
>
> > I'm thinking the flush call timeout will naturally be the timeout for a
> > produce request, No?
> >
> > Currently it seems we don't have a timeout for client requests, should we
> > have one?
> >
> > -Jiangjie (Becket) Qin
> >
> > On 2/16/15, 8:19 PM, "Jay Kreps"  wrote:
> >
> > >Yes, I think we all agree it would be good to add a client-side request
> > >timeout. That would effectively imply a flush timeout as well since any
> > >requests that couldn't complete in that time would be errors and hence
> > >completed in the definition we gave.
> > >
> > >-Jay
> > >
> > >On Mon, Feb 16, 2015 at 7:57 PM, Bhavesh Mistry
> > >
> > >wrote:
> > >
> > >> Hi All,
> > >>
> > >> Thanks Jay and all  address concern.  I am fine with just having
> flush()
> > >> method as long as it covers failure mode and resiliency.  e.g We had
> > >> situation where entire Kafka cluster brokers were reachable, but upon
> > >> adding new kafka node and admin migrated "leader to new brokers"  that
> > >>new
> > >> brokers is NOT reachable from producer stand point due to fire wall
> but
> > >> metadata would continue to elect new broker as leader for that
> > >>partition.
> > >>
> > >> All I am asking is either you will have to give-up sending to this
> > >>broker
> > >> or do something in this scenario.  As for the current code 0.8.2
> > >>release,
> > >> caller thread of flush() or close() method would be blocked for
> ever
> > >> so all I am asking is
> > >>
> > >> https://issues.apache.org/jira/browse/KAFKA-1659
> > >> https://issues.apache.org/jira/browse/KAFKA-1660
> > >>
> > >> Also, I recall that there is timeout also added to batch to indicate
> how
> > >> long "message" can retain in memory before expiring.
> > >>
> > >> Given,  all this should this API be consistent with others up coming
> > >> patches for addressing similar problem(s).
> > >>
> > >>
> > >> Otherwise, what we have done is spawn a thread for just calling
> close()
> > >>or
> > >> flush with timeout for join on caller end.
> > >>
> > >> Anyway, I just wanted to give you issues with existing API and if you
> > >>guys
> > >> think this is fine then, I am ok with this approach. It is just that
> > >>caller
> > >> will have to do bit more work.
> > >>
> > >>
> > >> Thanks,
> > >>
> > >> Bhavesh
> > >>
> > >> On Thursday, February 12, 2015, Joel Koshy 
> wrote:
> > >>
> > >> > Yes that is a counter-example. I'm okay either way on whether we
> > >> > should have just flush() or have a timeout. Bhavesh, does Jay's
> > >> > explanation a few replies prior address your concern? If so, shall
> we
> > >> > consider this closed?
> > >> >
> > >> > On Tue, Feb 10, 2015 at 01:36:23PM -0800, Jay Kreps wrote:
> > >> > > Yeah we could do that, I guess I just feel like it adds confusion
> > >> because
> > >> > > then you have to think about which timeout you want, when likely
> you
> > >> > don't
> > >> > > want a timeout at all.
> > >> > >
> > >> > > I guess the pattern I was thinking of was fflush or the java
> > >> equivalent,
> > >> > > which don't have timeouts:
> > >> > >
> > >> >
> > >>
> > >>
> >
> http://docs.oracle.com/javase/7/docs/api/java/io/OutputStream.html#flush(
> > >>)
> > >> > >
> > >> > > -Jay
> > >> > >
> > >> > > On Tue, Feb 10, 2015 at 10:41 AM, Joel Koshy  >
> > >> > wrote:
> > >> > >
> > >> > > > I think tryFlush with a timeout sounds good to me. This is
> really
> > >> more
> > >> > > > for consistency than anything else. I cannot think of any
> standard
> > >> > > > blocking calls off the top of my head that don't have a timed
> > >> variant.
> > >> > > > E.g., Thread.join, Object.wait, Future.get Either that, or they
> > >> > > > provide an entirely non-blocking mode (e.g.,
> socketChannel.connect
> > >> > > > followed by finishConnect)
> > >> > > >
> > >> > > > Thanks,
> > >> > > >
> > >> > > > Joel
> > >> > > >
> > >> > > > On Tue, Feb 10, 2015 at 11:30:47AM -0500, Joe Stein wrote:
> > >> > > > > Jay,
> > >> > > > >
> > >> > > > > The .flush() call seems 

Re: [DISCUSS] KIP-8 Add a flush method to the new Java producer

2015-02-17 Thread Guozhang Wang
In the scala clients we have the socket.timeout config as we are using
blocking IOs, when such timeout is reached the TimeoutException will be
thrown from the socket and the client can handle it accordingly; in the
java clients we are switching to non-blocking IOs and hence we will not
have the socket timeout any more.

I agree that we could add this client request timeout back in the java
clients, in addition to allowing client / server's non-blocking selector to
close idle sockets.

Guozhang

On Tue, Feb 17, 2015 at 1:55 PM, Jiangjie Qin 
wrote:

> I'm thinking the flush call timeout will naturally be the timeout for a
> produce request, No?
>
> Currently it seems we don't have a timeout for client requests, should we
> have one?
>
> -Jiangjie (Becket) Qin
>
> On 2/16/15, 8:19 PM, "Jay Kreps"  wrote:
>
> >Yes, I think we all agree it would be good to add a client-side request
> >timeout. That would effectively imply a flush timeout as well since any
> >requests that couldn't complete in that time would be errors and hence
> >completed in the definition we gave.
> >
> >-Jay
> >
> >On Mon, Feb 16, 2015 at 7:57 PM, Bhavesh Mistry
> >
> >wrote:
> >
> >> Hi All,
> >>
> >> Thanks Jay and all  address concern.  I am fine with just having flush()
> >> method as long as it covers failure mode and resiliency.  e.g We had
> >> situation where entire Kafka cluster brokers were reachable, but upon
> >> adding new kafka node and admin migrated "leader to new brokers"  that
> >>new
> >> brokers is NOT reachable from producer stand point due to fire wall but
> >> metadata would continue to elect new broker as leader for that
> >>partition.
> >>
> >> All I am asking is either you will have to give-up sending to this
> >>broker
> >> or do something in this scenario.  As for the current code 0.8.2
> >>release,
> >> caller thread of flush() or close() method would be blocked for ever
> >> so all I am asking is
> >>
> >> https://issues.apache.org/jira/browse/KAFKA-1659
> >> https://issues.apache.org/jira/browse/KAFKA-1660
> >>
> >> Also, I recall that there is timeout also added to batch to indicate how
> >> long "message" can retain in memory before expiring.
> >>
> >> Given,  all this should this API be consistent with others up coming
> >> patches for addressing similar problem(s).
> >>
> >>
> >> Otherwise, what we have done is spawn a thread for just calling close()
> >>or
> >> flush with timeout for join on caller end.
> >>
> >> Anyway, I just wanted to give you issues with existing API and if you
> >>guys
> >> think this is fine then, I am ok with this approach. It is just that
> >>caller
> >> will have to do bit more work.
> >>
> >>
> >> Thanks,
> >>
> >> Bhavesh
> >>
> >> On Thursday, February 12, 2015, Joel Koshy  wrote:
> >>
> >> > Yes that is a counter-example. I'm okay either way on whether we
> >> > should have just flush() or have a timeout. Bhavesh, does Jay's
> >> > explanation a few replies prior address your concern? If so, shall we
> >> > consider this closed?
> >> >
> >> > On Tue, Feb 10, 2015 at 01:36:23PM -0800, Jay Kreps wrote:
> >> > > Yeah we could do that, I guess I just feel like it adds confusion
> >> because
> >> > > then you have to think about which timeout you want, when likely you
> >> > don't
> >> > > want a timeout at all.
> >> > >
> >> > > I guess the pattern I was thinking of was fflush or the java
> >> equivalent,
> >> > > which don't have timeouts:
> >> > >
> >> >
> >>
> >>
> http://docs.oracle.com/javase/7/docs/api/java/io/OutputStream.html#flush(
> >>)
> >> > >
> >> > > -Jay
> >> > >
> >> > > On Tue, Feb 10, 2015 at 10:41 AM, Joel Koshy 
> >> > wrote:
> >> > >
> >> > > > I think tryFlush with a timeout sounds good to me. This is really
> >> more
> >> > > > for consistency than anything else. I cannot think of any standard
> >> > > > blocking calls off the top of my head that don't have a timed
> >> variant.
> >> > > > E.g., Thread.join, Object.wait, Future.get Either that, or they
> >> > > > provide an entirely non-blocking mode (e.g., socketChannel.connect
> >> > > > followed by finishConnect)
> >> > > >
> >> > > > Thanks,
> >> > > >
> >> > > > Joel
> >> > > >
> >> > > > On Tue, Feb 10, 2015 at 11:30:47AM -0500, Joe Stein wrote:
> >> > > > > Jay,
> >> > > > >
> >> > > > > The .flush() call seems like it would be the best way if you
> >>wanted
> >> > > > to-do a
> >> > > > > clean shutdown of the new producer?
> >> > > > >
> >> > > > > So, you could in your code "stop all incoming requests &&
> >> > > > producer.flush()
> >> > > > > && system.exit(value)" and know pretty much you won't drop
> >>anything
> >> > on
> >> > > > the
> >> > > > > floor.
> >> > > > >
> >> > > > > This can be done with the callbacks and futures (sure) but
> >>.flush()
> >> > seems
> >> > > > > to be the right time to block and a few lines of code, no?
> >> > > > >
> >> > > > > ~ Joestein
> >> > > > >
> >> > > > > On Tue, Feb 10, 2015 at 11:25 AM, Jay Kreps
> >>
> >> > wrote:
> >> > > > >
> >> > > > > > He

Re: [DISCUSS] KIP-8 Add a flush method to the new Java producer

2015-02-17 Thread Jiangjie Qin
I'm thinking the flush call timeout will naturally be the timeout for a
produce request, No?

Currently it seems we don't have a timeout for client requests, should we
have one?

-Jiangjie (Becket) Qin

On 2/16/15, 8:19 PM, "Jay Kreps"  wrote:

>Yes, I think we all agree it would be good to add a client-side request
>timeout. That would effectively imply a flush timeout as well since any
>requests that couldn't complete in that time would be errors and hence
>completed in the definition we gave.
>
>-Jay
>
>On Mon, Feb 16, 2015 at 7:57 PM, Bhavesh Mistry
>
>wrote:
>
>> Hi All,
>>
>> Thanks Jay and all  address concern.  I am fine with just having flush()
>> method as long as it covers failure mode and resiliency.  e.g We had
>> situation where entire Kafka cluster brokers were reachable, but upon
>> adding new kafka node and admin migrated "leader to new brokers"  that
>>new
>> brokers is NOT reachable from producer stand point due to fire wall but
>> metadata would continue to elect new broker as leader for that
>>partition.
>>
>> All I am asking is either you will have to give-up sending to this
>>broker
>> or do something in this scenario.  As for the current code 0.8.2
>>release,
>> caller thread of flush() or close() method would be blocked for ever
>> so all I am asking is
>>
>> https://issues.apache.org/jira/browse/KAFKA-1659
>> https://issues.apache.org/jira/browse/KAFKA-1660
>>
>> Also, I recall that there is timeout also added to batch to indicate how
>> long "message" can retain in memory before expiring.
>>
>> Given,  all this should this API be consistent with others up coming
>> patches for addressing similar problem(s).
>>
>>
>> Otherwise, what we have done is spawn a thread for just calling close()
>>or
>> flush with timeout for join on caller end.
>>
>> Anyway, I just wanted to give you issues with existing API and if you
>>guys
>> think this is fine then, I am ok with this approach. It is just that
>>caller
>> will have to do bit more work.
>>
>>
>> Thanks,
>>
>> Bhavesh
>>
>> On Thursday, February 12, 2015, Joel Koshy  wrote:
>>
>> > Yes that is a counter-example. I'm okay either way on whether we
>> > should have just flush() or have a timeout. Bhavesh, does Jay's
>> > explanation a few replies prior address your concern? If so, shall we
>> > consider this closed?
>> >
>> > On Tue, Feb 10, 2015 at 01:36:23PM -0800, Jay Kreps wrote:
>> > > Yeah we could do that, I guess I just feel like it adds confusion
>> because
>> > > then you have to think about which timeout you want, when likely you
>> > don't
>> > > want a timeout at all.
>> > >
>> > > I guess the pattern I was thinking of was fflush or the java
>> equivalent,
>> > > which don't have timeouts:
>> > >
>> >
>> 
>>http://docs.oracle.com/javase/7/docs/api/java/io/OutputStream.html#flush(
>>)
>> > >
>> > > -Jay
>> > >
>> > > On Tue, Feb 10, 2015 at 10:41 AM, Joel Koshy 
>> > wrote:
>> > >
>> > > > I think tryFlush with a timeout sounds good to me. This is really
>> more
>> > > > for consistency than anything else. I cannot think of any standard
>> > > > blocking calls off the top of my head that don't have a timed
>> variant.
>> > > > E.g., Thread.join, Object.wait, Future.get Either that, or they
>> > > > provide an entirely non-blocking mode (e.g., socketChannel.connect
>> > > > followed by finishConnect)
>> > > >
>> > > > Thanks,
>> > > >
>> > > > Joel
>> > > >
>> > > > On Tue, Feb 10, 2015 at 11:30:47AM -0500, Joe Stein wrote:
>> > > > > Jay,
>> > > > >
>> > > > > The .flush() call seems like it would be the best way if you
>>wanted
>> > > > to-do a
>> > > > > clean shutdown of the new producer?
>> > > > >
>> > > > > So, you could in your code "stop all incoming requests &&
>> > > > producer.flush()
>> > > > > && system.exit(value)" and know pretty much you won't drop
>>anything
>> > on
>> > > > the
>> > > > > floor.
>> > > > >
>> > > > > This can be done with the callbacks and futures (sure) but
>>.flush()
>> > seems
>> > > > > to be the right time to block and a few lines of code, no?
>> > > > >
>> > > > > ~ Joestein
>> > > > >
>> > > > > On Tue, Feb 10, 2015 at 11:25 AM, Jay Kreps
>>
>> > wrote:
>> > > > >
>> > > > > > Hey Bhavesh,
>> > > > > >
>> > > > > > If a broker is not available a new one should be elected to
>>take
>> > over,
>> > > > so
>> > > > > > although the flush might take longer it should still be quick.
>> > Even if
>> > > > not
>> > > > > > this should result in an error not a hang.
>> > > > > >
>> > > > > > The cases you enumerated are all covered already--if the user
>> > wants to
>> > > > > > retry that is covered by the retry setting in the client, for
>>all
>> > the
>> > > > > > errors that is considered completion of the request. The post
>> > > > condition of
>> > > > > > flush isn't that all sends complete successfully, just that
>>they
>> > > > complete.
>> > > > > > So if you try to send a message that is too big, when flush
>> returns
>> > > > calling
>> > > > > > .get() on the future should not 
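The contract described in this thread (flush() blocks until every in-flight send completes, successfully or not, while errors surface only when the caller inspects each Future with get()) can be modeled with a small self-contained sketch. The class and method names below are hypothetical stand-ins built on java.util.concurrent, not the real producer API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

// Hypothetical model of the flush() contract discussed above: flush()
// waits until every in-flight send has *completed* (succeeded or failed);
// it does not guarantee success. Errors only surface when the caller
// inspects each Future with get().
public class FlushContractSketch {
    private final List<CompletableFuture<Long>> inFlight = new ArrayList<>();

    // Stand-in for producer.send(record): returns a future offset.
    public CompletableFuture<Long> send(String record, boolean tooLarge) {
        CompletableFuture<Long> f = new CompletableFuture<>();
        if (tooLarge) {
            // Completed exceptionally, e.g. a "record too large" error.
            f.completeExceptionally(new IllegalArgumentException("record too large: " + record));
        } else {
            f.complete((long) inFlight.size());
        }
        inFlight.add(f);
        return f;
    }

    // Stand-in for producer.flush(): blocks until all sends complete,
    // swallowing failures (completion, not success).
    public void flush() {
        CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0]))
                .exceptionally(t -> null)
                .join();
    }

    public static int failedAfterFlush() {
        FlushContractSketch p = new FlushContractSketch();
        p.send("a", false);
        p.send("b", true);   // fails, but flush() still returns
        p.send("c", false);
        p.flush();
        int failed = 0;
        for (CompletableFuture<Long> f : p.inFlight) {
            try {
                f.get();
            } catch (InterruptedException | ExecutionException e) {
                failed++;   // the error surfaces only via get()
            }
        }
        return failed;
    }

    public static void main(String[] args) {
        System.out.println(failedAfterFlush()); // prints 1
    }
}
```

Note that flush() returning is not a success guarantee; a caller that needs to know about failures still has to check each future.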

[jira] [Commented] (KAFKA-1961) Looks like its possible to delete _consumer_offsets topic

2015-02-17 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324924#comment-14324924
 ] 

Jay Kreps commented on KAFKA-1961:
--

Yeah I think since the cache would be totally out of sync it is basically an 
error--you can't use this as a way to reset offsets. Let's just disable it in 
the command line tool; that is probably good enough, right?

> Looks like its possible to delete _consumer_offsets topic
> -
>
> Key: KAFKA-1961
> URL: https://issues.apache.org/jira/browse/KAFKA-1961
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Gwen Shapira
>  Labels: newbie
>
> Noticed that kafka-topics.sh --delete can successfully delete internal topics 
> (__consumer_offsets).
> I'm pretty sure we want to prevent that, to avoid users shooting themselves 
> in the foot.
> Topic admin command should check for internal topics, just like 
> ReplicaManager does and not let users delete them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
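The command-line guard Jay suggests could look roughly like the following. This is a hypothetical sketch, not the actual kafka.admin.TopicCommand code; only the internal topic name __consumer_offsets is taken from the discussion:

```java
import java.util.Set;

// Hypothetical sketch of the guard discussed in KAFKA-1961: the topic
// command refuses to delete internal topics, mirroring the check that
// ReplicaManager already performs. Names here are illustrative.
public class InternalTopicGuard {
    static final Set<String> INTERNAL_TOPICS = Set.of("__consumer_offsets");

    public static void deleteTopic(String topic) {
        if (INTERNAL_TOPICS.contains(topic)) {
            throw new IllegalArgumentException(
                "Topic " + topic + " is a Kafka internal topic and may not be deleted.");
        }
        // ... proceed with marking the topic for deletion ...
    }
}
```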


Re: Review Request 30763: Patch for KAFKA-1865

2015-02-17 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30763/#review72794
---



clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordBatch.java


I think we have to execute the callback before we wake up the caller 
thread. Otherwise, if something went wrong in this batch, the caller thread 
might not be aware of it before it is woken up, and might put a bunch of 
other stuff into the producer, or commit offsets.
For example, 
In mirror maker:
...
for (rec <- recs)
  producer.send(rec);
producer.flush();
consumer.commitOffsets();
...
The caller thread could have already committed offsets even if something 
went wrong in the callback.


- Jiangjie Qin
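A defensive version of the mirror-maker pattern in the comment above, assuming callbacks run before flush() returns, would record the first send failure in the callback and commit offsets only if none occurred. This is an illustrative sketch with hypothetical names, not the actual mirror maker code:

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the flush-then-commit safety pattern implied by the review
// comment: track any send failure in the callback, and only commit
// consumer offsets after flush() if no callback reported an error.
public class FlushThenCommitSketch {
    // Stand-in for the producer callback interface.
    interface Callback { void onCompletion(long offset, Exception e); }

    private final AtomicReference<Exception> firstError = new AtomicReference<>();

    Callback trackingCallback() {
        return (offset, e) -> {
            if (e != null) firstError.compareAndSet(null, e);
        };
    }

    boolean safeToCommit() { return firstError.get() == null; }

    public static boolean demo(boolean sendFails) {
        FlushThenCommitSketch mm = new FlushThenCommitSketch();
        Callback cb = mm.trackingCallback();
        // producer.send(rec, cb) would invoke this when the send completes:
        cb.onCompletion(0L, sendFails ? new RuntimeException("send failed") : null);
        // producer.flush() here; callbacks must have run by now.
        return mm.safeToCommit(); // commitOffsets() only if true
    }

    public static void main(String[] args) {
        System.out.println(demo(false)); // prints true
        System.out.println(demo(true));  // prints false
    }
}
```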


On Feb. 7, 2015, 8:59 p.m., Jay Kreps wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30763/
> ---
> 
> (Updated Feb. 7, 2015, 8:59 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1865
> https://issues.apache.org/jira/browse/KAFKA-1865
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1865 Add a flush() method to the producer.
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> 1fd6917c8a5131254c740abad7f7228a47e3628c 
>   clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
> 84530f2b948f9abd74203db48707e490dd9c81a5 
>   clients/src/main/java/org/apache/kafka/clients/producer/Producer.java 
> 17fe541588d462c68c33f6209717cc4015e9b62f 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
>  ecfe2144d778a5d9b614df5278b9f0a15637f10b 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordBatch.java
>  dd0af8aee98abed5d4a0dc50989e37888bb353fe 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/MockProducerTest.java 
> 75513b0bdd439329c5771d87436ef83fda853bfb 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/RecordAccumulatorTest.java
>  83338633717cfa4ef7cf2a590b5aa6b9c8cb1dd2 
>   core/src/test/scala/integration/kafka/api/ProducerSendTest.scala 
> b15237b76def3b234924280fa3fdb25dbb0cc0dc 
>   core/src/test/scala/unit/kafka/utils/TestUtils.scala 
> 54755e8dd3f23ced313067566cd4ea867f8a496e 
> 
> Diff: https://reviews.apache.org/r/30763/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jay Kreps
> 
>



[jira] [Commented] (KAFKA-1961) Looks like its possible to delete _consumer_offsets topic

2015-02-17 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324917#comment-14324917
 ] 

Joel Koshy commented on KAFKA-1961:
---

Yes it would be inconsistent in that you would lose offsets but we don't 
actually purge from the offsets cache if that happens. I agree that we should 
prevent this from happening in the first place.

We could expose a broker-side config to allow deleting internal topics but I 
think the better approach would be a combination of KIP-4 (topic command RPC) 
with authorization.


> Looks like its possible to delete _consumer_offsets topic
> -
>
> Key: KAFKA-1961
> URL: https://issues.apache.org/jira/browse/KAFKA-1961
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Gwen Shapira
>  Labels: newbie
>
> Noticed that kafka-topics.sh --delete can successfully delete internal topics 
> (__consumer_offsets).
> I'm pretty sure we want to prevent that, to avoid users shooting themselves 
> in the foot.
> Topic admin command should check for internal topics, just like 
> ReplicaManager does and not let users delete them.





Re: two very simple patch sets to be reviewed.

2015-02-17 Thread Tong Li

Gwen,
Really appreciate it. Thanks so much. Could anyone else please review them?
Here are the links again.

> https://reviews.apache.org/r/31088/
>
> https://reviews.apache.org/r/31097/


Tong Li
OpenStack Community Development
Building 501/B205
liton...@us.ibm.com



From:   Gwen Shapira 
To: "dev@kafka.apache.org" 
Date:   02/17/2015 03:29 PM
Subject:Re: two very simple patch sets to be reviewed.



I've reviewed both (but can't commit obviously)

They are both safe (a rename and an addition to .gitignore).
The addition to .gitignore will be very useful for anyone who uses system
tests (which should be all of us).
The rename is useful only to those using IBM JDK (i.e. not all of us), but
since its just a rename, I figured there's no reason not to solve it.

Gwen

On Tue, Feb 17, 2015 at 10:05 AM, Tong Li  wrote:

> Dear kafka developers,
>  New to this community and put up two really small patch sets with open
> issues, can any one please review and comment and get them merged if all
> possible? Thanks
>
>
> https://reviews.apache.org/r/31088/
>
> https://reviews.apache.org/r/31097/
>
>
> Tong Li
> OpenStack Community Development
> Building 501/B205
> liton...@us.ibm.com
>
>
> From: Tong Li/Raleigh/IBM
> To: kafka 
> Date: 02/17/2015 01:03 PM
> Subject: two very simple patch sets to be reviewed.
> --
>
>
> Dear kafka developers,
>  New to this community and put up two really small patch sets with open
> issues, can any one please review and comment and get them merged if all
> possible? Thanks
>
> https://reviews.apache.org/r/31097/
>
> https://reviews.apache.org/r/31097/
>
> Tong Li
> OpenStack & Kafka Community Development
> Building 501/B205
> liton...@us.ibm.com
>
>


Re: two very simple patch sets to be reviewed.

2015-02-17 Thread Gwen Shapira
I've reviewed both (but can't commit obviously)

They are both safe (a rename and an addition to .gitignore).
The addition to .gitignore will be very useful for anyone who uses system
tests (which should be all of us).
The rename is useful only to those using IBM JDK (i.e. not all of us), but
since its just a rename, I figured there's no reason not to solve it.

Gwen

On Tue, Feb 17, 2015 at 10:05 AM, Tong Li  wrote:

> Dear kafka developers,
>  New to this community and put up two really small patch sets with open
> issues, can any one please review and comment and get them merged if all
> possible? Thanks
>
>
> https://reviews.apache.org/r/31088/
>
> https://reviews.apache.org/r/31097/
>
>
> Tong Li
> OpenStack Community Development
> Building 501/B205
> liton...@us.ibm.com
>
>
> From: Tong Li/Raleigh/IBM
> To: kafka 
> Date: 02/17/2015 01:03 PM
> Subject: two very simple patch sets to be reviewed.
> --
>
>
> Dear kafka developers,
>  New to this community and put up two really small patch sets with open
> issues, can any one please review and comment and get them merged if all
> possible? Thanks
>
> https://reviews.apache.org/r/31097/
>
> https://reviews.apache.org/r/31097/
>
> Tong Li
> OpenStack & Kafka Community Development
> Building 501/B205
> liton...@us.ibm.com
>
>


[jira] [Commented] (KAFKA-1961) Looks like its possible to delete _consumer_offsets topic

2015-02-17 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324770#comment-14324770
 ] 

Jay Kreps commented on KAFKA-1961:
--

Makes sense.

[~jjkoshy] Deleting the topic would leave the broker in an inconsistent state, 
right? If this is the case I think we should just prevent it as Gwen suggests.

> Looks like its possible to delete _consumer_offsets topic
> -
>
> Key: KAFKA-1961
> URL: https://issues.apache.org/jira/browse/KAFKA-1961
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Gwen Shapira
>  Labels: newbie
>
> Noticed that kafka-topics.sh --delete can successfully delete internal topics 
> (__consumer_offsets).
> I'm pretty sure we want to prevent that, to avoid users shooting themselves 
> in the foot.
> Topic admin command should check for internal topics, just like 
> ReplicaManager does and not let users delete them.





Re: Review Request 29301: Patch for KAFKA-1694

2015-02-17 Thread Guozhang Wang


> On Feb. 3, 2015, 7:14 p.m., Guozhang Wang wrote:
> > clients/src/main/java/org/apache/kafka/common/protocol/ApiKeys.java, lines 
> > 39-42
> > 
> >
> > How about merge them into one request? The format could be:
> > 
> > topic -> string
> > action -> string (create, alter, delete, describe)
> > num_partition (?) -> int32
> > reassignment (?) -> string
> > configs (?) -> list[string]
> > 
> > For configs, the client can first get the current config overriden 
> > list, and get the new configs list as (curr + added - deleted) so that we 
> > do not need two configs field in the request.
> 
> Andrii Biletskyi wrote:
> I see your point but I'm not sure this will comply with Wire protocol use 
> cases. My understanding is that the Wire protocol has a schema and RQ/RP 
> messages are always defined and static.
> So it's probably possible to merge all requests into one, but reusing some 
> of the fields for different types, and thus giving them different meanings, 
> makes such a request type stand out from the others.
> Also I believe merging all of create-alter-delete-describe won't go that 
> smoothly:
> 1) 'describe' has a very different _response_ schema than the "mutating" 
> 'create-alter-delete'
> 2) I will probably remove the MaybeOf type, as people had concerns; this 
> differs from how empty values are handled now (see Jay's comment in the KIP 
> discussion) - this will make things even more tangled
> 3) There was a comment about batching commands - I'll probably modify the 
> schema so it accepts a whitelist for topics (as in TopicCommand), but this 
> is right only for 'alter-delete' - you can't create a topic by regexp - 
> another issue in finding common ground
> 4) There is also a 'replicas' field which is used only in 'create'
> 
> As you see messages are really different.

Makes sense, if we are not going the MaybeOf type then there is no point 
merging them.


> On Feb. 3, 2015, 7:14 p.m., Guozhang Wang wrote:
> > clients/src/main/java/org/apache/kafka/common/requests/admin/AbstractAdminRequest.java,
> >  lines 1-28
> > 
> >
> > Wondering if an abstract admin request is necessary, as it does not 
> > have many common interface functions.
> 
> Andrii Biletskyi wrote:
> This is needed to avoid code duplication in admin clients. See 
> RequestDispatcher for example.
> You will need to call admin request and get response of that type. Having 
> AbstractAdminRequest (specifically createResponseCounterpart) lets you have:
> ```
> public  T 
> sendAdminRequest(AbstractAdminRequest abstractRequest) throws Exception {
> ```
> Instead of sendCreateTopicRequest, sendAlter... etc. If there is a better 
> and cleaner way to achieve this - please let me know.

I see. How about changing "sendAdminRequest(AbstractAdminRequest)" to 
"sendRequest(ClientRequest)" and the caller like AlterTopicCommand.execute() 
will be:

AlterTopicRequest alterTopicRequest = // create the request
ClientRequest request = new ClientRequest(new RequestSend(...) ...)
dispatcher.sendRequest(request)

This way we are duplicating the second line here in every upper-level class, 
while saving the admin interface. I actually do not know which one is better..


> On Feb. 3, 2015, 7:14 p.m., Guozhang Wang wrote:
> > core/src/main/scala/kafka/controller/ControllerChannelManager.scala, lines 
> > 301-310
> > 
> >
> > Do not understand the rationale behind this: could you add some 
> > comments? Particularly, why we want to send an empty metadata map to the 
> > brokers with forceSendBrokerInfo?
> 
> Andrii Biletskyi wrote:
> Thanks, this is done because on startup we don't send UpdateMetadataRequest 
> (updateMetadataRequestMap is empty) and thus the brokers' cache is not 
> filled with brokers and the controller. As a result, ClusterMetadataRequest 
> can't be served correctly. 
> I'm not sure this is the best way to do it; open for suggestions.

In this case can we just use addUpdateMetadataRequestForBrokers() before 
calling sendRequestsToBrokers()?


> On Feb. 3, 2015, 7:14 p.m., Guozhang Wang wrote:
> > core/src/main/scala/kafka/server/TopicCommandHelper.scala, lines 1-17
> > 
> >
> > One general comment:
> > 
> > For some topic commands, why use AdminUtils to write ZK path again 
> > instead of handle it via the controller directly? Or this is still WIP?
> 
> Andrii Biletskyi wrote:
> Not sure I understand you. You mean technically calling the ZK client from 
> the Controller class, not through TopicCommandHelper? If so - it's just to 
> keep KafkaApis clean and small.

For example, upon receiving a create-topic request, the helper class will call 
AdminUtils.createOrUpdateTopicPartiti

[jira] [Commented] (KAFKA-1694) kafka command line and centralized operations

2015-02-17 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324755#comment-14324755
 ] 

Guozhang Wang commented on KAFKA-1694:
--

[~abiletskyi], I am wondering if we can split the current RBs into multiple 
ones following the subtask structure. The concern with having one big RB is 
that it is hard to review, and also very hard to commit: there will likely be 
some hidden issues with such big changes, and upon finding them after we 
commit, if they are not easily fix-forwardable we have to revert the whole 
thing. What do you think?

> kafka command line and centralized operations
> -
>
> Key: KAFKA-1694
> URL: https://issues.apache.org/jira/browse/KAFKA-1694
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>Assignee: Andrii Biletskyi
>Priority: Critical
> Fix For: 0.8.3
>
> Attachments: KAFKA-1694.patch, KAFKA-1694_2014-12-24_21:21:51.patch, 
> KAFKA-1694_2015-01-12_15:28:41.patch, KAFKA-1694_2015-01-12_18:54:48.patch, 
> KAFKA-1694_2015-01-13_19:30:11.patch, KAFKA-1694_2015-01-14_15:42:12.patch, 
> KAFKA-1694_2015-01-14_18:07:39.patch, KAFKA-1772_1802_1775_1774_v2.patch
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Command+Line+and+Related+Improvements





[jira] [Commented] (KAFKA-1961) Looks like its possible to delete _consumer_offsets topic

2015-02-17 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324724#comment-14324724
 ] 

Gwen Shapira commented on KAFKA-1961:
-

[~jkreps] - by "accidentally" I mean, "I don't know what this topic is, so I 
probably don't need it. Let's delete!". 
I've seen this happen twice in the last few weeks. I wrote off the first 
incident, but two is a trend :)

We have utilities for cleaning offsets per consumer group or topic in the 
consumer tool (or at least I think we have them? or are planning to have 
them?). I think deleting an entire topic is pretty extreme. Perhaps we can 
allow it with a code-level flag (if someone calls the object directly) but 
hide the capability in the CLI? 
I think we do something similar for producing to internal topics.



> Looks like its possible to delete _consumer_offsets topic
> -
>
> Key: KAFKA-1961
> URL: https://issues.apache.org/jira/browse/KAFKA-1961
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Gwen Shapira
>  Labels: newbie
>
> Noticed that kafka-topics.sh --delete can successfully delete internal topics 
> (__consumer_offsets).
> I'm pretty sure we want to prevent that, to avoid users shooting themselves 
> in the foot.
> Topic admin command should check for internal topics, just like 
> ReplicaManager does and not let users delete them.





[jira] [Commented] (KAFKA-1867) liveBroker list not updated on a cluster with no topics

2015-02-17 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324659#comment-14324659
 ] 

Sriharsha Chintalapani commented on KAFKA-1867:
---

[~nehanarkhede] pinging for a review. Thanks.

> liveBroker list not updated on a cluster with no topics
> ---
>
> Key: KAFKA-1867
> URL: https://issues.apache.org/jira/browse/KAFKA-1867
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
> Attachments: KAFKA-1867.patch, KAFKA-1867.patch, 
> KAFKA-1867_2015-01-25_21:07:47.patch
>
>
> Currently, when there is no topic in a cluster, the controller doesn't send 
> any UpdateMetadataRequest to the broker when it starts up. As a result, the 
> liveBroker list in metadataCache is empty. This means that we will return 
> incorrect broker list in TopicMetatadataResponse.





Re: Review Request 29912: Patch for KAFKA-1852

2015-02-17 Thread Sriharsha Chintalapani


> On Feb. 13, 2015, 7:01 p.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/server/OffsetManager.scala, line 215
> > 
> >
> > Minor comment. I think this may be better to pass in to the 
> > OffsetManager.
> > 
> > We should even use it in loadOffsets to discard offsets that are from 
> > topics that have been deleted. We can do that in a separate jira - I don't 
> > think our handling for clearing out offsets on a delete topic is done yet - 
> > Onur Karaman did it for ZK based offsets but we need a separate jira to 
> > delete Kafka-based offsets.

Thanks for the review. Since OffsetManager is initialized in KafkaServer and 
metadataCache in KafkaApis, in the latest patch I added setMetadataCache to 
OffsetManager and call it from KafkaApis. Please take a look.


- Sriharsha
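The idea in KAFKA-1852, consulting the metadata cache so that offset commits for unknown topics are rejected, can be sketched as follows. All names here are illustrative, not the actual OffsetManager/MetadataCache API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the KAFKA-1852 idea: before accepting an offset
// commit, consult the metadata cache and reject offsets for topics that
// do not exist on the cluster.
public class OffsetCommitGuard {
    private final Set<String> knownTopics;  // stand-in for the metadata cache

    OffsetCommitGuard(Set<String> knownTopics) {
        this.knownTopics = knownTopics;
    }

    // Per-topic decision: true = commit accepted, false = unknown topic.
    public Map<String, Boolean> filterCommit(Set<String> commitTopics) {
        Map<String, Boolean> result = new HashMap<>();
        for (String t : commitTopics) {
            result.put(t, knownTopics.contains(t));
        }
        return result;
    }
}
```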


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29912/#review72413
---


On Feb. 16, 2015, 9:22 p.m., Sriharsha Chintalapani wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/29912/
> ---
> 
> (Updated Feb. 16, 2015, 9:22 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1852
> https://issues.apache.org/jira/browse/KAFKA-1852
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1852. OffsetCommitRequest can commit offset on unknown topic. Added 
> contains method to MetadataCache.
> 
> 
> KAFKA-1852. OffsetCommitRequest can commit offset on unknown topic.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 703886a1d48e6d2271da67f8b89514a6950278dd 
>   core/src/main/scala/kafka/server/MetadataCache.scala 
> 4c70aa7e0157b85de5e24736ebf487239c4571d0 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 83d52643028c5628057dc0aa29819becfda61fdb 
>   core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
> 5b93239cdc26b5be7696f4e7863adb9fbe5f0ed5 
> 
> Diff: https://reviews.apache.org/r/29912/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sriharsha Chintalapani
> 
>



[jira] [Commented] (KAFKA-1866) LogStartOffset gauge throws exceptions after log.delete()

2015-02-17 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324656#comment-14324656
 ] 

Sriharsha Chintalapani commented on KAFKA-1866:
---

[~nehanarkhede] pinging for a review. Thanks.

> LogStartOffset gauge throws exceptions after log.delete()
> -
>
> Key: KAFKA-1866
> URL: https://issues.apache.org/jira/browse/KAFKA-1866
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gian Merlino
>Assignee: Sriharsha Chintalapani
> Attachments: KAFKA-1866.patch, KAFKA-1866_2015-02-10_22:50:09.patch, 
> KAFKA-1866_2015-02-11_09:25:33.patch
>
>
> The LogStartOffset gauge does "logSegments.head.baseOffset", which throws 
> NoSuchElementException on an empty list, which can occur after a delete() of 
> the log. This makes life harder for custom MetricsReporters, since they have 
> to deal with .value() possibly throwing an exception.
> Locally we're dealing with this by having Log.delete() also call removeMetric 
> on all the gauges. That also has the benefit of not having a bunch of metrics 
> floating around for logs that the broker is not actually handling.
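The guard this report implies, returning a sentinel instead of calling head on an empty segment list, might be modeled like this. Names are hypothetical; the real Log code is Scala and the fix attached to the JIRA may differ:

```java
import java.util.List;

// Hypothetical model of the KAFKA-1866 fix: the gauge's equivalent of
// logSegments.head.baseOffset throws NoSuchElementException on an empty
// list after delete(); guarding with a default avoids that. This is the
// Java analogue of Scala's headOption.map(_.baseOffset).getOrElse(-1L).
public class LogStartOffsetGauge {
    public static long startOffset(List<Long> segmentBaseOffsets) {
        return segmentBaseOffsets.isEmpty() ? -1L : segmentBaseOffsets.get(0);
    }
}
```

As the report notes, removing the gauges in Log.delete() is the alternative design, and also avoids leaving metrics around for logs the broker no longer handles.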





[jira] [Assigned] (KAFKA-1778) Create new re-elect controller admin function

2015-02-17 Thread Abhishek Nigam (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Nigam reassigned KAFKA-1778:
-

Assignee: Abhishek Nigam

> Create new re-elect controller admin function
> -
>
> Key: KAFKA-1778
> URL: https://issues.apache.org/jira/browse/KAFKA-1778
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Abhishek Nigam
> Fix For: 0.8.3
>
>
> kafka --controller --elect





Re: Review Request 30848: Patch for KAFKA-1943

2015-02-17 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30848/#review72749
---

Ship it!


Ship It!

- Guozhang Wang


On Feb. 10, 2015, 10:17 p.m., Aditya Auradkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30848/
> ---
> 
> (Updated Feb. 10, 2015, 10:17 p.m.)
> 
> 
> Review request for kafka and Joel Koshy.
> 
> 
> Bugs: KAFKA-1943
> https://issues.apache.org/jira/browse/KAFKA-1943
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Change to not count MessageSetSizeTooLarge and MessageSizeTooLarge exceptions 
> as failed producer requests from the brokers perspective
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> fb948b9ab28c516e81dab14dcbe211dcd99842b6 
> 
> Diff: https://reviews.apache.org/r/30848/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Aditya Auradkar
> 
>



two very simple patch sets to be reviewed.

2015-02-17 Thread Tong Li

Dear kafka developers,
I am new to this community and have put up two really small patch sets with
open issues. Can anyone please review and comment, and get them merged if at
all possible? Thanks


https://reviews.apache.org/r/31088/

https://reviews.apache.org/r/31097/


Tong Li
OpenStack Community Development
Building 501/B205
liton...@us.ibm.com



From:   Tong Li/Raleigh/IBM
To: kafka 
Date:   02/17/2015 01:03 PM
Subject:two very simple patch sets to be reviewed.


Dear kafka developers,
I am new to this community and have put up two really small patch sets with
open issues. Can anyone please review and comment, and get them merged if at
all possible? Thanks

https://reviews.apache.org/r/31097/

https://reviews.apache.org/r/31097/

Tong Li
OpenStack & Kafka Community Development
Building 501/B205
liton...@us.ibm.com

two very simple patch sets to be reviewed.

2015-02-17 Thread Tong Li

Dear kafka developers,
I am new to this community and have put up two really small patch sets with
open issues. Can anyone please review and comment, and get them merged if at
all possible? Thanks

https://reviews.apache.org/r/31097/

https://reviews.apache.org/r/31097/

Tong Li
OpenStack & Kafka Community Development
Building 501/B205
liton...@us.ibm.com

[jira] [Commented] (KAFKA-1961) Looks like its possible to delete _consumer_offsets topic

2015-02-17 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324545#comment-14324545
 ] 

Jay Kreps commented on KAFKA-1961:
--

This would be hard to do accidentally, right? You have to type out the name of 
the topic you want to delete. Is there ever a valid reason to do this, e.g. you 
want to clean up all the offsets so you delete and recreate? Does that actually 
work, or would something terrible happen (since state in the broker wouldn't get 
reset)?

> Looks like its possible to delete _consumer_offsets topic
> -
>
> Key: KAFKA-1961
> URL: https://issues.apache.org/jira/browse/KAFKA-1961
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Gwen Shapira
>  Labels: newbie
>
> Noticed that kafka-topics.sh --delete can successfully delete internal topics 
> (__consumer_offsets).
> I'm pretty sure we want to prevent that, to avoid users shooting themselves 
> in the foot.
> Topic admin command should check for internal topics, just like 
> ReplicaManager does and not let users delete them.





[jira] [Commented] (KAFKA-1961) Looks like its possible to delete _consumer_offsets topic

2015-02-17 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324505#comment-14324505
 ] 

Neha Narkhede commented on KAFKA-1961:
--

That's a good catch [~gwenshap]. Seems like a bug.

> Looks like its possible to delete _consumer_offsets topic
> -
>
> Key: KAFKA-1961
> URL: https://issues.apache.org/jira/browse/KAFKA-1961
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Gwen Shapira
>
> Noticed that kafka-topics.sh --delete can successfully delete internal topics 
> (__consumer_offsets).
> I'm pretty sure we want to prevent that, to avoid users shooting themselves 
> in the foot.
> Topic admin command should check for internal topics, just like 
> ReplicaManager does and not let users delete them.





[jira] [Updated] (KAFKA-1961) Looks like its possible to delete _consumer_offsets topic

2015-02-17 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1961:
-
Labels: newbie  (was: )

> Looks like its possible to delete _consumer_offsets topic
> -
>
> Key: KAFKA-1961
> URL: https://issues.apache.org/jira/browse/KAFKA-1961
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Gwen Shapira
>  Labels: newbie
>
> Noticed that kafka-topics.sh --delete can successfully delete internal topics 
> (__consumer_offsets).
> I'm pretty sure we want to prevent that, to avoid users shooting themselves 
> in the foot.
> Topic admin command should check for internal topics, just like 
> ReplicaManager does and not let users delete them.





[jira] [Updated] (KAFKA-1961) Looks like its possible to delete _consumer_offsets topic

2015-02-17 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1961:
-
Affects Version/s: 0.8.2.0






[jira] [Created] (KAFKA-1963) Add unit tests to check presence of all metrics

2015-02-17 Thread Joel Koshy (JIRA)
Joel Koshy created KAFKA-1963:
-

 Summary: Add unit tests to check presence of all metrics
 Key: KAFKA-1963
 URL: https://issues.apache.org/jira/browse/KAFKA-1963
 Project: Kafka
  Issue Type: Bug
Reporter: Joel Koshy


Metrics are effectively a public API, so we should do this one way or the other. I 
have not yet thought through how best to do this, but it would be useful to 
have a unit test (or set of tests) that ensures all metrics are accounted 
for with respect to some golden set. E.g., we have removed metrics in the past without 
realizing it until much later.

We can do this after (or part of) KAFKA-1930.
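One minimal shape for such a test: diff the registered metric names against a checked-in golden set and fail on anything missing. The sketch below is purely illustrative (the class and the sample metric names are hypothetical); a real test would snapshot the names from the broker's metrics registry rather than hard-code them.

```java
import java.util.Set;
import java.util.TreeSet;

// Illustrative "golden set" presence check. The golden names here are a
// made-up sample, not an authoritative list of Kafka's metrics.
class MetricsPresenceCheck {
    static final Set<String> GOLDEN = new TreeSet<>(Set.of(
        "MessagesInPerSec", "BytesInPerSec", "BytesOutPerSec"));

    // Returns every golden metric absent from the registry snapshot,
    // so a test can assert the result is empty.
    static Set<String> missing(Set<String> registered) {
        Set<String> gap = new TreeSet<>(GOLDEN);
        gap.removeAll(registered);
        return gap;
    }
}
```

A unit test would then assert `missing(snapshot).isEmpty()`, which turns an accidental metric removal into an immediate build failure instead of a discovery months later.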






[jira] [Created] (KAFKA-1962) Restore delayed request metrics

2015-02-17 Thread Joel Koshy (JIRA)
Joel Koshy created KAFKA-1962:
-

 Summary: Restore delayed request metrics
 Key: KAFKA-1962
 URL: https://issues.apache.org/jira/browse/KAFKA-1962
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joel Koshy


It seems we have lost the delayed request metrics that we had before:
Producer/Fetch (follower/consumer) expires-per-second
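For reference, an expires-per-second meter boils down to counting expirations and dividing by elapsed time. The sketch below is a hypothetical standalone version (Kafka itself would register such meters through its metrics library, one per delayed-request purgatory, rather than use this class):

```java
import java.util.concurrent.atomic.LongAdder;

// Minimal, hypothetical "expires-per-second" meter. Kafka's real metric is
// registered via its metrics library; this only illustrates the idea.
class ExpirationMeter {
    private final LongAdder expirations = new LongAdder();
    private final long startMs;

    ExpirationMeter(long nowMs) { this.startMs = nowMs; }

    // Called each time a delayed request expires without being satisfied.
    void markExpired() { expirations.increment(); }

    // Average expirations per second since the meter was created.
    double ratePerSec(long nowMs) {
        long elapsedMs = Math.max(1, nowMs - startMs);
        return expirations.sum() * 1000.0 / elapsedMs;
    }
}
```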





[jira] [Updated] (KAFKA-1946) Fix various broker metrics

2015-02-17 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy updated KAFKA-1946:
--
Summary: Fix various broker metrics  (was: Improve BrokerTopicMetrics 
reporting)

> Fix various broker metrics
> --
>
> Key: KAFKA-1946
> URL: https://issues.apache.org/jira/browse/KAFKA-1946
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>
> Creating an umbrella ticket to track improvement of BrokerTopicMetrics 
> reporting. 
> Some of the tasks are:
> - Add a metric for total fetch/produce requests as opposed to simply failure 
> counts
> - Track offset commit requests separately from produce requests
> - Add a metric to track bad requests from clients (HTTP 4XX vs. 5XX as an 
> example)
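The accounting the first two bullets describe can be sketched as totals and failures kept per request type, with offset commits as their own type. Everything below (class and method names, the Kind enum) is illustrative, not Kafka's BrokerTopicMetrics API:

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical sketch: per-request-type totals alongside failure counts,
// with offset commits tracked separately from ordinary produce requests.
class BrokerRequestStats {
    enum Kind { PRODUCE, FETCH, OFFSET_COMMIT }

    private final Map<Kind, Long> totals = new EnumMap<>(Kind.class);
    private final Map<Kind, Long> failures = new EnumMap<>(Kind.class);

    // Record one completed request; failures are counted against totals.
    void record(Kind kind, boolean success) {
        totals.merge(kind, 1L, Long::sum);
        if (!success) failures.merge(kind, 1L, Long::sum);
    }

    long total(Kind k)  { return totals.getOrDefault(k, 0L); }
    long failed(Kind k) { return failures.getOrDefault(k, 0L); }
}
```

With totals recorded alongside failures, a failure *rate* becomes derivable, which a failure-only counter cannot give you.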





[GitHub] kafka pull request: Update DelayedFetch.scala

2015-02-17 Thread arcz
GitHub user arcz opened a pull request:

https://github.com/apache/kafka/pull/44

Update DelayedFetch.scala

Fix typo

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/arcz/kafka patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/44.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #44


commit 42072be223f1f8c336f9ee68761db842b9ab0b79
Author: Artur Cygan 
Date:   2015-02-17T08:20:54Z

Update DelayedFetch.scala

Fix typo




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: confirm subscribe to dev@kafka.apache.org

2015-02-17 Thread Ivan Dyachkov

