[jira] [Commented] (KAFKA-2421) Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7

2015-09-08 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14734674#comment-14734674
 ] 

Rajini Sivaram commented on KAFKA-2421:
---

[~junrao] Sorry I was away for the last couple of weeks. The patch has been 
updated.

> Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7
> 
>
> Key: KAFKA-2421
> URL: https://issues.apache.org/jira/browse/KAFKA-2421
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: IBM Java 7
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Attachments: KAFKA-2421.patch, KAFKA-2421_2015-08-11_18:54:26.patch, 
> kafka-2421_2015-09-08_11:38:03.patch
>
>
> Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7.
> LZ4 version 1.2 crashes with 64-bit IBM Java 7. This has been fixed in LZ4 
> version 1.3 (https://github.com/jpountz/lz4-java/blob/master/CHANGES.md, 
> https://github.com/jpountz/lz4-java/pull/46).
> The unit test org.apache.kafka.common.record.MemoryRecordsTest crashes when 
> run with 64-bit IBM Java7 with the error:
> {quote}
> 023EB900: Native Method 0263CE10 
> (net/jpountz/lz4/LZ4JNI.LZ4_compress_limitedOutput([BII[BII)I)
> 023EB900: Invalid JNI call of function void 
> ReleasePrimitiveArrayCritical(JNIEnv *env, jarray array, void *carray, jint 
> mode): For array FFF7EAB8 parameter carray passed FFF85998, 
> expected to be FFF7EAC0
> 14:08:42.763 0x23eb900j9mm.632*   ** ASSERTION FAILED ** at 
> StandardAccessBarrier.cpp:335: ((false))
> JVMDUMP039I Processing dump event "traceassert", detail "" at 2015/08/11 
> 15:08:42 - please wait.
> {quote}
> Stack trace from javacore:
> 3XMTHREADINFO3   Java callstack:
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4JNI.LZ4_compress_limitedOutput(Native Method)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4JNICompressor.compress(LZ4JNICompressor.java:31)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.<init>(LZ4Factory.java:163)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.instance(LZ4Factory.java:46)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.nativeInstance(LZ4Factory.java:76)
> 5XESTACKTRACE   (entered lock: 
> net/jpountz/lz4/LZ4Factory@0xE02F0BE8, entry count: 1)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.fastestInstance(LZ4Factory.java:129)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.<init>(KafkaLZ4BlockOutputStream.java:72)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.<init>(KafkaLZ4BlockOutputStream.java:93)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.<init>(KafkaLZ4BlockOutputStream.java:103)
> 4XESTACKTRACEat 
> sun/reflect/NativeConstructorAccessorImpl.newInstance0(Native Method)
> 4XESTACKTRACEat 
> sun/reflect/NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:86)
> 4XESTACKTRACEat 
> sun/reflect/DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:58)
> 4XESTACKTRACEat 
> java/lang/reflect/Constructor.newInstance(Constructor.java:542)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.wrapForOutput(Compressor.java:222)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.<init>(Compressor.java:72)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.<init>(Compressor.java:76)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.<init>(MemoryRecords.java:43)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.emptyRecords(MemoryRecords.java:51)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.emptyRecords(MemoryRecords.java:55)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecordsTest.testIterator(MemoryRecordsTest.java:42)
> java -version
> java version "1.7.0"
> Java(TM) SE Runtime Environment (build pxa6470_27sr3fp1-20150605_01(SR3 FP1))
> IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 Compressed References 
> 20150407_243189 (JIT enabled, AOT enabled)
> J9VM - R27_Java727_SR3_20150407_1831_B243189
> JIT  - tr.r13.java_20150406_89182
> GC   - R27_Java727_SR3_20150407_1831_B243189_CMPRSS
> J9CL - 20150407_243189)
> JCL - 20150601_01 based on Oracle 7u79-b14



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2421) Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7

2015-09-08 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14734672#comment-14734672
 ] 

Rajini Sivaram commented on KAFKA-2421:
---

Updated reviewboard https://reviews.apache.org/r/37357/diff/
 against branch origin/trunk

> Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7
> 
>
> Key: KAFKA-2421
> URL: https://issues.apache.org/jira/browse/KAFKA-2421
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: IBM Java 7
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Attachments: KAFKA-2421.patch, KAFKA-2421_2015-08-11_18:54:26.patch, 
> kafka-2421_2015-09-08_11:38:03.patch
>
>
> Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7.
> LZ4 version 1.2 crashes with 64-bit IBM Java 7. This has been fixed in LZ4 
> version 1.3 (https://github.com/jpountz/lz4-java/blob/master/CHANGES.md, 
> https://github.com/jpountz/lz4-java/pull/46).
> The unit test org.apache.kafka.common.record.MemoryRecordsTest crashes when 
> run with 64-bit IBM Java7 with the error:
> {quote}
> 023EB900: Native Method 0263CE10 
> (net/jpountz/lz4/LZ4JNI.LZ4_compress_limitedOutput([BII[BII)I)
> 023EB900: Invalid JNI call of function void 
> ReleasePrimitiveArrayCritical(JNIEnv *env, jarray array, void *carray, jint 
> mode): For array FFF7EAB8 parameter carray passed FFF85998, 
> expected to be FFF7EAC0
> 14:08:42.763 0x23eb900j9mm.632*   ** ASSERTION FAILED ** at 
> StandardAccessBarrier.cpp:335: ((false))
> JVMDUMP039I Processing dump event "traceassert", detail "" at 2015/08/11 
> 15:08:42 - please wait.
> {quote}
> Stack trace from javacore:
> 3XMTHREADINFO3   Java callstack:
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4JNI.LZ4_compress_limitedOutput(Native Method)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4JNICompressor.compress(LZ4JNICompressor.java:31)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.<init>(LZ4Factory.java:163)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.instance(LZ4Factory.java:46)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.nativeInstance(LZ4Factory.java:76)
> 5XESTACKTRACE   (entered lock: 
> net/jpountz/lz4/LZ4Factory@0xE02F0BE8, entry count: 1)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.fastestInstance(LZ4Factory.java:129)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.<init>(KafkaLZ4BlockOutputStream.java:72)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.<init>(KafkaLZ4BlockOutputStream.java:93)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.<init>(KafkaLZ4BlockOutputStream.java:103)
> 4XESTACKTRACEat 
> sun/reflect/NativeConstructorAccessorImpl.newInstance0(Native Method)
> 4XESTACKTRACEat 
> sun/reflect/NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:86)
> 4XESTACKTRACEat 
> sun/reflect/DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:58)
> 4XESTACKTRACEat 
> java/lang/reflect/Constructor.newInstance(Constructor.java:542)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.wrapForOutput(Compressor.java:222)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.<init>(Compressor.java:72)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.<init>(Compressor.java:76)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.<init>(MemoryRecords.java:43)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.emptyRecords(MemoryRecords.java:51)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.emptyRecords(MemoryRecords.java:55)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecordsTest.testIterator(MemoryRecordsTest.java:42)
> java -version
> java version "1.7.0"
> Java(TM) SE Runtime Environment (build pxa6470_27sr3fp1-20150605_01(SR3 FP1))
> IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 Compressed References 
> 20150407_243189 (JIT enabled, AOT enabled)
> J9VM - R27_Java727_SR3_20150407_1831_B243189
> JIT  - tr.r13.java_20150406_89182
> GC   - R27_Java727_SR3_20150407_1831_B243189_CMPRSS
> J9CL - 20150407_243189)
> JCL - 20150601_01 based on Oracle 7u79-b14



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2421) Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7

2015-09-08 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-2421:
--
Attachment: kafka-2421_2015-09-08_11:38:03.patch

> Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7
> 
>
> Key: KAFKA-2421
> URL: https://issues.apache.org/jira/browse/KAFKA-2421
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: IBM Java 7
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Attachments: KAFKA-2421.patch, KAFKA-2421_2015-08-11_18:54:26.patch, 
> kafka-2421_2015-09-08_11:38:03.patch
>
>
> Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7.
> LZ4 version 1.2 crashes with 64-bit IBM Java 7. This has been fixed in LZ4 
> version 1.3 (https://github.com/jpountz/lz4-java/blob/master/CHANGES.md, 
> https://github.com/jpountz/lz4-java/pull/46).
> The unit test org.apache.kafka.common.record.MemoryRecordsTest crashes when 
> run with 64-bit IBM Java7 with the error:
> {quote}
> 023EB900: Native Method 0263CE10 
> (net/jpountz/lz4/LZ4JNI.LZ4_compress_limitedOutput([BII[BII)I)
> 023EB900: Invalid JNI call of function void 
> ReleasePrimitiveArrayCritical(JNIEnv *env, jarray array, void *carray, jint 
> mode): For array FFF7EAB8 parameter carray passed FFF85998, 
> expected to be FFF7EAC0
> 14:08:42.763 0x23eb900j9mm.632*   ** ASSERTION FAILED ** at 
> StandardAccessBarrier.cpp:335: ((false))
> JVMDUMP039I Processing dump event "traceassert", detail "" at 2015/08/11 
> 15:08:42 - please wait.
> {quote}
> Stack trace from javacore:
> 3XMTHREADINFO3   Java callstack:
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4JNI.LZ4_compress_limitedOutput(Native Method)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4JNICompressor.compress(LZ4JNICompressor.java:31)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.<init>(LZ4Factory.java:163)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.instance(LZ4Factory.java:46)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.nativeInstance(LZ4Factory.java:76)
> 5XESTACKTRACE   (entered lock: 
> net/jpountz/lz4/LZ4Factory@0xE02F0BE8, entry count: 1)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.fastestInstance(LZ4Factory.java:129)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.<init>(KafkaLZ4BlockOutputStream.java:72)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.<init>(KafkaLZ4BlockOutputStream.java:93)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.<init>(KafkaLZ4BlockOutputStream.java:103)
> 4XESTACKTRACEat 
> sun/reflect/NativeConstructorAccessorImpl.newInstance0(Native Method)
> 4XESTACKTRACEat 
> sun/reflect/NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:86)
> 4XESTACKTRACEat 
> sun/reflect/DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:58)
> 4XESTACKTRACEat 
> java/lang/reflect/Constructor.newInstance(Constructor.java:542)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.wrapForOutput(Compressor.java:222)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.<init>(Compressor.java:72)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.<init>(Compressor.java:76)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.<init>(MemoryRecords.java:43)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.emptyRecords(MemoryRecords.java:51)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.emptyRecords(MemoryRecords.java:55)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecordsTest.testIterator(MemoryRecordsTest.java:42)
> java -version
> java version "1.7.0"
> Java(TM) SE Runtime Environment (build pxa6470_27sr3fp1-20150605_01(SR3 FP1))
> IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 Compressed References 
> 20150407_243189 (JIT enabled, AOT enabled)
> J9VM - R27_Java727_SR3_20150407_1831_B243189
> JIT  - tr.r13.java_20150406_89182
> GC   - R27_Java727_SR3_20150407_1831_B243189_CMPRSS
> J9CL - 20150407_243189)
> JCL - 20150601_01 based on Oracle 7u79-b14



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 37357: Patch for KAFKA-2421: Upgrade to LZ4 version 1.3 and update reference to Utils method that was moved to SafeUtils

2015-09-08 Thread Rajini Sivaram

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/37357/
---

(Updated Sept. 8, 2015, 11:38 a.m.)


Review request for kafka.


Summary (updated)
-

Patch for KAFKA-2421: Upgrade to LZ4 version 1.3 and update reference to Utils 
method that was moved to SafeUtils


Bugs: KAFKA-2421 and kafka-2421
https://issues.apache.org/jira/browse/KAFKA-2421
https://issues.apache.org/jira/browse/kafka-2421


Repository: kafka


Description (updated)
---

Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java


Diffs (updated)
-

  build.gradle fecc3eb3b6918ca62291360c5c59796264290a09 
  
clients/src/main/java/org/apache/kafka/common/record/KafkaLZ4BlockInputStream.java
 f480da2ae0992855cc860e1ce5cbd11ecfca7bee 
  
clients/src/main/java/org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.java
 6a2231f4775771932c36df362c88aead3189b7b8 

Diff: https://reviews.apache.org/r/37357/diff/


Testing
---


Thanks,

Rajini Sivaram



[jira] [Commented] (KAFKA-2417) Ducktape tests for SSL/TLS

2015-09-08 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14734719#comment-14734719
 ] 

Rajini Sivaram commented on KAFKA-2417:
---

[~granders] Are you working on this task? I would be happy to help with the 
tests in case you haven't started. I am new to ducktape, but I can take a look 
at the existing tests to get started.

We would also find it useful to run all the ducktape tests optionally with 
SSL-enabled clients. Looking at the current service definitions, this looks 
possible with a small amount of change. If this change would be of interest to 
the wider community, I will be happy to submit a patch.

> Ducktape tests for SSL/TLS
> --
>
> Key: KAFKA-2417
> URL: https://issues.apache.org/jira/browse/KAFKA-2417
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Geoff Anderson
>Priority: Blocker
> Fix For: 0.8.3
>
>
> The tests should be complementary to the unit/integration tests written as 
> part of KAFKA-1685.
> Things to consider:
> * Upgrade/downgrade to turning on/off SSL
> * Failure testing
> * Expired/revoked certificates
> * Renegotiation
> Some changes to ducktape may be required for upgrade scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2417) Ducktape tests for SSL/TLS

2015-09-08 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14734728#comment-14734728
 ] 

Ismael Juma commented on KAFKA-2417:


Regarding running the existing tests with SSL enabled, yes that would be a good 
idea indeed (and something we were hoping to do).

I'm sure help will be welcome, let's see what Geoff suggests is the best way to 
split the work.

> Ducktape tests for SSL/TLS
> --
>
> Key: KAFKA-2417
> URL: https://issues.apache.org/jira/browse/KAFKA-2417
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Geoff Anderson
>Priority: Blocker
> Fix For: 0.8.3
>
>
> The tests should be complementary to the unit/integration tests written as 
> part of KAFKA-1685.
> Things to consider:
> * Upgrade/downgrade to turning on/off SSL
> * Failure testing
> * Expired/revoked certificates
> * Renegotiation
> Some changes to ducktape may be required for upgrade scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Maybe 0.8.3 should really be 0.9.0?

2015-09-08 Thread Gwen Shapira
I propose a simple rename: s/0.8.3/0.9.0/

No change of scope and not including current 0.9.0 issues.

On Tue, Sep 8, 2015 at 11:36 AM, Rajini Sivaram <
rajinisiva...@googlemail.com> wrote:

> Is the plan to release 0.9 in October with the features currently targeted
> for 0.8.3, or would 0.9 be a later release including all the issues
> currently targeted for 0.8.3 and 0.9? Will the scope of the release change
> when it is renamed?
> Thanks,
>
> Rajini
>
> On Tue, Sep 8, 2015 at 7:21 PM, Jay Kreps  wrote:
>
> > +1 on 0.9
> >
> > -Jay
> >
> > On Tue, Sep 8, 2015 at 10:19 AM, Gwen Shapira  wrote:
> >
> > > Hi Kafka Fans,
> > >
> > > What do you think of making the next release (the one with security,
> new
> > > consumer, quotas, etc) a 0.9.0 instead of 0.8.3?
> > >
> > > It has lots of new features, and new consumer was pretty much scoped
> for
> > > 0.9.0, so it matches our original roadmap. I feel that so many awesome
> > > features deserve a better release number.
> > >
> > > The downside is mainly some confusion (we refer to 0.8.3 in bunch of
> > > places), and noisy emails from JIRA while we change "fix version" field
> > > everywhere.
> > >
> > > Thoughts?
> > >
> >
>


Re: Maybe 0.8.3 should really be 0.9.0?

2015-09-08 Thread Edward Ribeiro
+1 on 0.9.0

On Tue, Sep 8, 2015 at 4:07 PM, Ashish Singh  wrote:

> +1 on 0.9.0
>
> On Tue, Sep 8, 2015 at 12:00 PM, Gwen Shapira  wrote:
>
> > I propose a simple rename: s/0.8.3/0.9.0/
> >
> > No change of scope and not including current 0.9.0 issues.
> >
> > On Tue, Sep 8, 2015 at 11:36 AM, Rajini Sivaram <
> > rajinisiva...@googlemail.com> wrote:
> >
> > > Is the plan to release 0.9 in October with the features currently
> > targeted
> > > for 0.8.3, or would 0.9 be a later release including all the issues
> > > currently targeted for 0.8.3 and 0.9? Will the scope of the release
> > change
> > > when it is renamed?
> > > Thanks,
> > >
> > > Rajini
> > >
> > > On Tue, Sep 8, 2015 at 7:21 PM, Jay Kreps  wrote:
> > >
> > > > +1 on 0.9
> > > >
> > > > -Jay
> > > >
> > > > On Tue, Sep 8, 2015 at 10:19 AM, Gwen Shapira 
> > wrote:
> > > >
> > > > > Hi Kafka Fans,
> > > > >
> > > > > What do you think of making the next release (the one with
> security,
> > > new
> > > > > consumer, quotas, etc) a 0.9.0 instead of 0.8.3?
> > > > >
> > > > > It has lots of new features, and new consumer was pretty much
> scoped
> > > for
> > > > > 0.9.0, so it matches our original roadmap. I feel that so many
> > awesome
> > > > > features deserve a better release number.
> > > > >
> > > > > The downside is mainly some confusion (we refer to 0.8.3 in bunch
> of
> > > > > places), and noisy emails from JIRA while we change "fix version"
> > field
> > > > > everywhere.
> > > > >
> > > > > Thoughts?
> > > > >
> > > >
> > >
> >
>
>
>
> --
>
> Regards,
> Ashish
>


Re: Maybe 0.8.3 should really be 0.9.0?

2015-09-08 Thread Ashish Singh
+1 on 0.9.0

On Tue, Sep 8, 2015 at 12:00 PM, Gwen Shapira  wrote:

> I propose a simple rename: s/0.8.3/0.9.0/
>
> No change of scope and not including current 0.9.0 issues.
>
> On Tue, Sep 8, 2015 at 11:36 AM, Rajini Sivaram <
> rajinisiva...@googlemail.com> wrote:
>
> > Is the plan to release 0.9 in October with the features currently
> targeted
> > for 0.8.3, or would 0.9 be a later release including all the issues
> > currently targeted for 0.8.3 and 0.9? Will the scope of the release
> change
> > when it is renamed?
> > Thanks,
> >
> > Rajini
> >
> > On Tue, Sep 8, 2015 at 7:21 PM, Jay Kreps  wrote:
> >
> > > +1 on 0.9
> > >
> > > -Jay
> > >
> > > On Tue, Sep 8, 2015 at 10:19 AM, Gwen Shapira 
> wrote:
> > >
> > > > Hi Kafka Fans,
> > > >
> > > > What do you think of making the next release (the one with security,
> > new
> > > > consumer, quotas, etc) a 0.9.0 instead of 0.8.3?
> > > >
> > > > It has lots of new features, and new consumer was pretty much scoped
> > for
> > > > 0.9.0, so it matches our original roadmap. I feel that so many
> awesome
> > > > features deserve a better release number.
> > > >
> > > > The downside is mainly some confusion (we refer to 0.8.3 in bunch of
> > > > places), and noisy emails from JIRA while we change "fix version"
> field
> > > > everywhere.
> > > >
> > > > Thoughts?
> > > >
> > >
> >
>



-- 

Regards,
Ashish


Re: Maybe 0.8.3 should really be 0.9.0?

2015-09-08 Thread Neha Narkhede
Based on the scope, prefer 0.9.

On Tue, Sep 8, 2015 at 11:21 AM, Jay Kreps  wrote:

> +1 on 0.9
>
> -Jay
>
> On Tue, Sep 8, 2015 at 10:19 AM, Gwen Shapira  wrote:
>
> > Hi Kafka Fans,
> >
> > What do you think of making the next release (the one with security, new
> > consumer, quotas, etc) a 0.9.0 instead of 0.8.3?
> >
> > It has lots of new features, and new consumer was pretty much scoped for
> > 0.9.0, so it matches our original roadmap. I feel that so many awesome
> > features deserve a better release number.
> >
> > The downside is mainly some confusion (we refer to 0.8.3 in bunch of
> > places), and noisy emails from JIRA while we change "fix version" field
> > everywhere.
> >
> > Thoughts?
> >
>



-- 
Thanks,
Neha


New consumer subscribe then seek

2015-09-08 Thread Phil Steitz
I have been experimenting with the KafkaConsumer currently in
development [1].  Sorry if this should be a question for the user
list, but I am not sure if what I am seeing is something not working
yet or if I am misunderstanding the API.  If I use
KafkaConsumer#subscribe to subscribe to a topic and then try to use
seek(TopicPartion, offset) to position the consumer, I get an
IllegalStateException with message "No current assignment for
partition "  If I use assign instead to connect to the topic,
things work fine.  I can see why this is by looking at the
SubscriptionState code which is throwing the ISE because
SubscriptionState#seek expects to find an assignment, but
KafkaConsumer#subscribe does not make any.

I know this is unreleased code and I am not looking for help here -
actually more like looking *to* help but just learning the code. 
Happy to open a ticket with a test case if that will help or a patch
to the javadoc if I am misunderstanding the API and it can be made
clearer.

Thanks!

Phil

[1] ff189fa05ccdacac100f3d15d167dcbe561f57a6
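
For readers trying to reproduce this, a minimal sketch of the two patterns described
above, written against the in-development consumer API (so names and behaviour may
shift); topic, partition and offset values are placeholders:

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SeekAfterSubscribe {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "seek-example");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        TopicPartition tp = new TopicPartition("my-topic", 0);

        // Pattern 1: subscribe() then seek(). subscribe() returns before any
        // partitions are assigned, so seek() has no assignment to act on.
        consumer.subscribe(Arrays.asList("my-topic"));
        // consumer.seek(tp, 42L);  // IllegalStateException: No current assignment for partition ...
        consumer.unsubscribe();

        // Pattern 2: assign() then seek(). The caller creates the assignment
        // directly, so the seek succeeds.
        consumer.assign(Arrays.asList(tp));
        consumer.seek(tp, 42L);
        consumer.close();
    }
}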



[jira] [Commented] (KAFKA-2517) Performance Regression post SSL implementation

2015-09-08 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14735738#comment-14735738
 ] 

Ben Stopford commented on KAFKA-2517:
-

Yes - what Ismael said. I'll take a look at getting a PR out for this now. 

> Performance Regression post SSL implementation
> --
>
> Key: KAFKA-2517
> URL: https://issues.apache.org/jira/browse/KAFKA-2517
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>Priority: Blocker
> Fix For: 0.8.3
>
>
> It would appear that we incurred a performance regression on submission of 
> the SSL work affecting the performance of the new Kafka Consumer. 
> Running with 1KB messages. Macbook 2.3 GHz Intel Core i7, 8GB, APPLE SSD 
> SM256E. Single server instance. All local. 
> kafka-consumer-perf-test.sh ... --messages 300  --new-consumer
> Pre-SSL changes (commit 503bd36647695e8cc91893ffb80346dd03eb0bc5)
> Steady state throughputs = 234.8 MB/s
> (2861.5913, 234.8261, 3000596, 246233.0543)
> Post-SSL changes (commit 13c432f7952de27e9bf8cb4adb33a91ae3a4b738) 
> Steady state throughput =  178.1 MB/s  
> (2861.5913, 178.1480, 3000596, 186801.7182)
> Implication is a 25% reduction in consumer throughput for these test 
> conditions. 
> This appears to be caused by the use of PlaintextTransportLayer rather than 
> SocketChannel in FileMessageSet.writeTo() meaning a zero copy transfer is not 
> invoked.
> Switching to the use of a SocketChannel directly in FileMessageSet.writeTo()  
> yields the following result:
> Steady state throughput =  281.8 MB/s
> (2861.5913, 281.8191, 3000596, 295508.7650)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Maybe 0.8.3 should really be 0.9.0?

2015-09-08 Thread Joel Koshy
+1 on 0.9 - we may want to adjust our ApiVersions accordingly (i.e.,
0.8.3 -> 0.9.0)


On Tue, Sep 8, 2015 at 2:02 PM, Guozhang Wang  wrote:
> +1 on 0.9 as well.
>
> On Tue, Sep 8, 2015 at 1:32 PM, Aditya Auradkar <
> aaurad...@linkedin.com.invalid> wrote:
>
>> +1 on 0.9
>>
>> On Tue, Sep 8, 2015 at 12:29 PM, Edward Ribeiro 
>> wrote:
>>
>> > +1 on 0.9.0
>> >
>> > On Tue, Sep 8, 2015 at 4:07 PM, Ashish Singh 
>> wrote:
>> >
>> > > +1 on 0.9.0
>> > >
>> > > On Tue, Sep 8, 2015 at 12:00 PM, Gwen Shapira 
>> wrote:
>> > >
>> > > > I propose a simple rename: s/0.8.3/0.9.0/
>> > > >
>> > > > No change of scope and not including current 0.9.0 issues.
>> > > >
>> > > > On Tue, Sep 8, 2015 at 11:36 AM, Rajini Sivaram <
>> > > > rajinisiva...@googlemail.com> wrote:
>> > > >
>> > > > > Is the plan to release 0.9 in October with the features currently
>> > > > targeted
>> > > > > for 0.8.3, or would 0.9 be a later release including all the issues
>> > > > > currently targeted for 0.8.3 and 0.9? Will the scope of the release
>> > > > change
>> > > > > when it is renamed?
>> > > > > Thanks,
>> > > > >
>> > > > > Rajini
>> > > > >
>> > > > > On Tue, Sep 8, 2015 at 7:21 PM, Jay Kreps 
>> wrote:
>> > > > >
>> > > > > > +1 on 0.9
>> > > > > >
>> > > > > > -Jay
>> > > > > >
>> > > > > > On Tue, Sep 8, 2015 at 10:19 AM, Gwen Shapira > >
>> > > > wrote:
>> > > > > >
>> > > > > > > Hi Kafka Fans,
>> > > > > > >
>> > > > > > > What do you think of making the next release (the one with
>> > > security,
>> > > > > new
>> > > > > > > consumer, quotas, etc) a 0.9.0 instead of 0.8.3?
>> > > > > > >
>> > > > > > > It has lots of new features, and new consumer was pretty much
>> > > scoped
>> > > > > for
>> > > > > > > 0.9.0, so it matches our original roadmap. I feel that so many
>> > > > awesome
>> > > > > > > features deserve a better release number.
>> > > > > > >
>> > > > > > > The downside is mainly some confusion (we refer to 0.8.3 in
>> bunch
>> > > of
>> > > > > > > places), and noisy emails from JIRA while we change "fix
>> version"
>> > > > field
>> > > > > > > everywhere.
>> > > > > > >
>> > > > > > > Thoughts?
>> > > > > > >
>> > > > > >
>> > > > >
>> > > >
>> > >
>> > >
>> > >
>> > > --
>> > >
>> > > Regards,
>> > > Ashish
>> > >
>> >
>>
>
>
>
> --
> -- Guozhang


Re: Maybe 0.8.3 should really be 0.9.0?

2015-09-08 Thread Ismael Juma
+1 (non-binding) for 0.9.

Ismael

On Tue, Sep 8, 2015 at 10:19 AM, Gwen Shapira  wrote:

> Hi Kafka Fans,
>
> What do you think of making the next release (the one with security, new
> consumer, quotas, etc) a 0.9.0 instead of 0.8.3?
>
> It has lots of new features, and new consumer was pretty much scoped for
> 0.9.0, so it matches our original roadmap. I feel that so many awesome
> features deserve a better release number.
>
> The downside is mainly some confusion (we refer to 0.8.3 in bunch of
> places), and noisy emails from JIRA while we change "fix version" field
> everywhere.
>
> Thoughts?
>


[GitHub] kafka pull request: Small change to API doc for seekToEnd() to cla...

2015-09-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/199


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Maybe 0.8.3 should really be 0.9.0?

2015-09-08 Thread Guozhang Wang
+1 on 0.9 as well.

On Tue, Sep 8, 2015 at 1:32 PM, Aditya Auradkar <
aaurad...@linkedin.com.invalid> wrote:

> +1 on 0.9
>
> On Tue, Sep 8, 2015 at 12:29 PM, Edward Ribeiro 
> wrote:
>
> > +1 on 0.9.0
> >
> > On Tue, Sep 8, 2015 at 4:07 PM, Ashish Singh 
> wrote:
> >
> > > +1 on 0.9.0
> > >
> > > On Tue, Sep 8, 2015 at 12:00 PM, Gwen Shapira 
> wrote:
> > >
> > > > I propose a simple rename: s/0.8.3/0.9.0/
> > > >
> > > > No change of scope and not including current 0.9.0 issues.
> > > >
> > > > On Tue, Sep 8, 2015 at 11:36 AM, Rajini Sivaram <
> > > > rajinisiva...@googlemail.com> wrote:
> > > >
> > > > > Is the plan to release 0.9 in October with the features currently
> > > > targeted
> > > > > for 0.8.3, or would 0.9 be a later release including all the issues
> > > > > currently targeted for 0.8.3 and 0.9? Will the scope of the release
> > > > change
> > > > > when it is renamed?
> > > > > Thanks,
> > > > >
> > > > > Rajini
> > > > >
> > > > > On Tue, Sep 8, 2015 at 7:21 PM, Jay Kreps 
> wrote:
> > > > >
> > > > > > +1 on 0.9
> > > > > >
> > > > > > -Jay
> > > > > >
> > > > > > On Tue, Sep 8, 2015 at 10:19 AM, Gwen Shapira  >
> > > > wrote:
> > > > > >
> > > > > > > Hi Kafka Fans,
> > > > > > >
> > > > > > > What do you think of making the next release (the one with
> > > security,
> > > > > new
> > > > > > > consumer, quotas, etc) a 0.9.0 instead of 0.8.3?
> > > > > > >
> > > > > > > It has lots of new features, and new consumer was pretty much
> > > scoped
> > > > > for
> > > > > > > 0.9.0, so it matches our original roadmap. I feel that so many
> > > > awesome
> > > > > > > features deserve a better release number.
> > > > > > >
> > > > > > > The downside is mainly some confusion (we refer to 0.8.3 in
> bunch
> > > of
> > > > > > > places), and noisy emails from JIRA while we change "fix
> version"
> > > > field
> > > > > > > everywhere.
> > > > > > >
> > > > > > > Thoughts?
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > >
> > > Regards,
> > > Ashish
> > >
> >
>



-- 
-- Guozhang


Re: Maybe 0.8.3 should really be 0.9.0?

2015-09-08 Thread Mayuresh Gharat
+1 for 0.9 - we may want to get rid of deprecated configs if possible in
this, instead of waiting for 1.0.

Thanks,

Mayuresh

On Tue, Sep 8, 2015 at 2:07 PM, Joel Koshy  wrote:

> +1 on 0.9 - we may want to adjust our ApiVersions accordingly (i.e.,
> 0.8.3 -> 0.9.0)
>
>
> On Tue, Sep 8, 2015 at 2:02 PM, Guozhang Wang  wrote:
> > +1 on 0.9 as well.
> >
> > On Tue, Sep 8, 2015 at 1:32 PM, Aditya Auradkar <
> > aaurad...@linkedin.com.invalid> wrote:
> >
> >> +1 on 0.9
> >>
> >> On Tue, Sep 8, 2015 at 12:29 PM, Edward Ribeiro <
> edward.ribe...@gmail.com>
> >> wrote:
> >>
> >> > +1 on 0.9.0
> >> >
> >> > On Tue, Sep 8, 2015 at 4:07 PM, Ashish Singh 
> >> wrote:
> >> >
> >> > > +1 on 0.9.0
> >> > >
> >> > > On Tue, Sep 8, 2015 at 12:00 PM, Gwen Shapira 
> >> wrote:
> >> > >
> >> > > > I propose a simple rename: s/0.8.3/0.9.0/
> >> > > >
> >> > > > No change of scope and not including current 0.9.0 issues.
> >> > > >
> >> > > > On Tue, Sep 8, 2015 at 11:36 AM, Rajini Sivaram <
> >> > > > rajinisiva...@googlemail.com> wrote:
> >> > > >
> >> > > > > Is the plan to release 0.9 in October with the features
> currently
> >> > > > targeted
> >> > > > > for 0.8.3, or would 0.9 be a later release including all the
> issues
> >> > > > > currently targeted for 0.8.3 and 0.9? Will the scope of the
> release
> >> > > > change
> >> > > > > when it is renamed?
> >> > > > > Thanks,
> >> > > > >
> >> > > > > Rajini
> >> > > > >
> >> > > > > On Tue, Sep 8, 2015 at 7:21 PM, Jay Kreps 
> >> wrote:
> >> > > > >
> >> > > > > > +1 on 0.9
> >> > > > > >
> >> > > > > > -Jay
> >> > > > > >
> >> > > > > > On Tue, Sep 8, 2015 at 10:19 AM, Gwen Shapira <
> g...@confluent.io
> >> >
> >> > > > wrote:
> >> > > > > >
> >> > > > > > > Hi Kafka Fans,
> >> > > > > > >
> >> > > > > > > What do you think of making the next release (the one with
> >> > > security,
> >> > > > > new
> >> > > > > > > consumer, quotas, etc) a 0.9.0 instead of 0.8.3?
> >> > > > > > >
> >> > > > > > > It has lots of new features, and new consumer was pretty
> much
> >> > > scoped
> >> > > > > for
> >> > > > > > > 0.9.0, so it matches our original roadmap. I feel that so
> many
> >> > > > awesome
> >> > > > > > > features deserve a better release number.
> >> > > > > > >
> >> > > > > > > The downside is mainly some confusion (we refer to 0.8.3 in
> >> bunch
> >> > > of
> >> > > > > > > places), and noisy emails from JIRA while we change "fix
> >> version"
> >> > > > field
> >> > > > > > > everywhere.
> >> > > > > > >
> >> > > > > > > Thoughts?
> >> > > > > > >
> >> > > > > >
> >> > > > >
> >> > > >
> >> > >
> >> > >
> >> > >
> >> > > --
> >> > >
> >> > > Regards,
> >> > > Ashish
> >> > >
> >> >
> >>
> >
> >
> >
> > --
> > -- Guozhang
>



-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


[jira] [Commented] (KAFKA-2517) Performance Regression post SSL implementation

2015-09-08 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14735711#comment-14735711
 ] 

Ismael Juma commented on KAFKA-2517:


Implementing `SelChImpl` is undesirable as it's an internal implementation 
class in the JDK. A better alternative, in my opinion, is to pass the 
underlying `SocketChannel` to `transferTo`; Ben verified that this fixes the 
problem before filing this ticket. I suggest we add a transfer-like method to 
`TransportLayer`. In the `PlaintextTransportLayer` implementation, it should 
use the underlying `socketChannel` instead of itself.
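
To make that concrete, a rough sketch of the kind of transfer method being suggested
(illustrative only -- the interface, method name and signature here are assumptions,
not the actual Kafka code): the plaintext implementation hands the raw SocketChannel
to FileChannel.transferTo so the zero-copy path is preserved.

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;

// Illustrative sketch only; not Kafka's actual TransportLayer API.
interface TransferableTransportLayer {
    long transferFrom(FileChannel fileChannel, long position, long count) throws IOException;
}

class PlaintextTransferSketch implements TransferableTransportLayer {
    private final SocketChannel socketChannel;

    PlaintextTransferSketch(SocketChannel socketChannel) {
        this.socketChannel = socketChannel;
    }

    @Override
    public long transferFrom(FileChannel fileChannel, long position, long count) throws IOException {
        // Handing FileChannel.transferTo the raw SocketChannel (rather than a wrapper
        // that merely implements WritableByteChannel) lets the JDK use sendfile,
        // i.e. the zero-copy transfer that FileMessageSet.writeTo relies on.
        return fileChannel.transferTo(position, count, socketChannel);
    }
}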

> Performance Regression post SSL implementation
> --
>
> Key: KAFKA-2517
> URL: https://issues.apache.org/jira/browse/KAFKA-2517
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>Priority: Blocker
> Fix For: 0.8.3
>
>
> It would appear that we incurred a performance regression on submission of 
> the SSL work affecting the performance of the new Kafka Consumer. 
> Running with 1KB messages. Macbook 2.3 GHz Intel Core i7, 8GB, APPLE SSD 
> SM256E. Single server instance. All local. 
> kafka-consumer-perf-test.sh ... --messages 300  --new-consumer
> Pre-SSL changes (commit 503bd36647695e8cc91893ffb80346dd03eb0bc5)
> Steady state throughputs = 234.8 MB/s
> (2861.5913, 234.8261, 3000596, 246233.0543)
> Post-SSL changes (commit 13c432f7952de27e9bf8cb4adb33a91ae3a4b738) 
> Steady state throughput =  178.1 MB/s  
> (2861.5913, 178.1480, 3000596, 186801.7182)
> Implication is a 25% reduction in consumer throughput for these test 
> conditions. 
> This appears to be caused by the use of PlaintextTransportLayer rather than 
> SocketChannel in FileMessageSet.writeTo() meaning a zero copy transfer is not 
> invoked.
> Switching to the use of a SocketChannel directly in FileMessageSet.writeTo()  
> yields the following result:
> Steady state throughput =  281.8 MB/s
> (2861.5913, 281.8191, 3000596, 295508.7650)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2500) Make logEndOffset available in the 0.8.3 Consumer

2015-09-08 Thread Will Funnell (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14735694#comment-14735694
 ] 

Will Funnell commented on KAFKA-2500:
-

[~hachikuji] thanks for looking into this. I'll try and clarify my requirement 
if it helps:
Given a log compacted topic, I need to consume every key at least once, then 
cancel the subscription.

> It seems like there could also be new records pushed in the time that it 
> takes for the fetch response to be returned, right? It only reduces the 
> window.

At the moment the high watermark, in my implementation in KAFKA-1977, can be 
compared with the current offset when the message is received, if they match, 
you can finish. 

It is my understanding that to achieve the same functionality you would need to 
call the API as specified in KAFKA-2076 to get the High Watermark after every 
message, which would not seem performant.

I think Jay Kreps intimates this when defining the HW:
> 1. Consumer offset is determined by consumer, while HW is determined by 
> producer. This means consumer offsets needs only minimum communication with 
> broker, but HW needs frequent communication.
> 2. Typically user will only fetch offsets when starting consumption but user 
> may care about HW both before starting consumption and during the consuming 
> as it reflects lags. This means the HW updates should be cheap otherwise the 
> overhead would be big.

If I make an OffsetRequest (with HW information) call at the beginning, by the 
time my partition's offset matches the HW, I will miss messages that have been 
compacted in the meantime.
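
For illustration, the pattern being asked for would look roughly like the sketch below.
The highWatermark() helper is hypothetical -- it stands in for the logEndOffset /
high-watermark value this ticket asks the new consumer to expose from the fetch
response; it does not exist in the current API.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

class CompactedTopicSnapshot {
    // Consume a (compacted) partition until the consumed offset catches up with the
    // high watermark reported on the same fetch, then stop. Sketch only.
    static void consumeUntilCaughtUp(KafkaConsumer<byte[], byte[]> consumer, TopicPartition tp) {
        boolean caughtUp = false;
        while (!caughtUp) {
            for (ConsumerRecord<byte[], byte[]> record : consumer.poll(100)) {
                handle(record);
                // When the offset just consumed reaches the current high watermark,
                // every key present at this point has been seen at least once.
                if (record.offset() + 1 >= highWatermark(consumer, tp)) {
                    caughtUp = true;
                    break;
                }
            }
        }
    }

    // HYPOTHETICAL: placeholder for the per-fetch logEndOffset this ticket proposes.
    static long highWatermark(KafkaConsumer<byte[], byte[]> consumer, TopicPartition tp) {
        throw new UnsupportedOperationException("not exposed by the consumer API yet");
    }

    static void handle(ConsumerRecord<byte[], byte[]> record) { /* e.g. write to the snapshot file */ }
}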

> Make logEndOffset available in the 0.8.3 Consumer
> -
>
> Key: KAFKA-2500
> URL: https://issues.apache.org/jira/browse/KAFKA-2500
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.8.3
>Reporter: Will Funnell
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 0.8.3
>
>
> Originally created in the old consumer here: 
> https://issues.apache.org/jira/browse/KAFKA-1977
> The requirement is to create a snapshot from the Kafka topic but NOT do 
> continual reads after that point. For example you might be creating a backup 
> of the data to a file.
> This ticket covers the addition of the functionality to the new consumer.
> In order to achieve that, a recommended solution by Joel Koshy and Jay Kreps 
> was to expose the high watermark, as maxEndOffset, from the FetchResponse 
> object through to each MessageAndMetadata object in order to be aware when 
> the consumer has reached the end of each partition.
> The submitted patch achieves this by adding the maxEndOffset to the 
> PartitionTopicInfo, which is updated when a new message arrives in the 
> ConsumerFetcherThread and then exposed in MessageAndMetadata.
> See here for discussion:
> http://search-hadoop.com/m/4TaT4TpJy71



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2453: Enable new consumer in EndToEndLat...

2015-09-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/158


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2453) enable new consumer in EndToEndLatency

2015-09-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14735725#comment-14735725
 ] 

ASF GitHub Bot commented on KAFKA-2453:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/158


> enable new consumer in EndToEndLatency
> --
>
> Key: KAFKA-2453
> URL: https://issues.apache.org/jira/browse/KAFKA-2453
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jun Rao
>Assignee: Ben Stopford
>Priority: Blocker
> Fix For: 0.8.3
>
>
> We need to add an option to enable the new consumer in EndToEndLatency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: 0.8.2

2015-09-08 Thread leoricxu
Github user leoricxu closed the pull request at:

https://github.com/apache/kafka/pull/198


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Contributor List

2015-09-08 Thread Gwen Shapira
Thanks! Added you as well.

On Mon, Sep 7, 2015 at 10:11 PM, Prabhjot Bharaj 
wrote:

> My username: pbharaj
>
> Thanks,
> Prabhjot
> On Sep 8, 2015 10:14 AM, "Prabhjot Bharaj"  wrote:
>
> > Hi,
> >
> > Request you to add me as well for code contributions
> >
> > Regards,
> > Prabhjot
> > On Sep 8, 2015 2:10 AM, "Gwen Shapira"  wrote:
> >
> >> Done :)
> >>
> >> Happy hacking.
> >>
> >> On Mon, Sep 7, 2015 at 11:44 AM, Bill Bejeck  wrote:
> >>
> >> > Hi Just a reminder to add me to the contributors list! I've started
> >> looking
> >> > at KAFKA-2058,  My Jira username is bbejeck.
> >> >
> >> > Thanks!
> >> >
> >> > Bill
> >> >
> >> > On Fri, Sep 4, 2015 at 7:46 PM, Gwen Shapira 
> wrote:
> >> >
> >> > > Thank you!
> >> > >
> >> > > Please create a Jira user if you didn't do so already, and let me
> know
> >> > your
> >> > > user name. I'll add you to the list.
> >> > >
> >> > > On Fri, Sep 4, 2015 at 4:33 PM, Bill Bejeck 
> >> wrote:
> >> > >
> >> > > > Hi can i get added to the contributor list? I'd like to take crack
> >> at
> >> > > > KAFKA-2058 
> >> > > >
> >> > > > Thanks!
> >> > > >
> >> > > > Bill Bejeck
> >> > > >
> >> > >
> >> >
> >>
> >
>


Re: Contributor List

2015-09-08 Thread Gwen Shapira
What's your apache jira user name?
On Sep 7, 2015 9:44 PM, "Prabhjot Bharaj"  wrote:

> Hi,
>
> Request you to add me as well for code contributions
>
> Regards,
> Prabhjot
> On Sep 8, 2015 2:10 AM, "Gwen Shapira"  wrote:
>
> > Done :)
> >
> > Happy hacking.
> >
> > On Mon, Sep 7, 2015 at 11:44 AM, Bill Bejeck  wrote:
> >
> > > Hi Just a reminder to add me to the contributors list! I've started
> > looking
> > > at KAFKA-2058,  My Jira username is bbejeck.
> > >
> > > Thanks!
> > >
> > > Bill
> > >
> > > On Fri, Sep 4, 2015 at 7:46 PM, Gwen Shapira 
> wrote:
> > >
> > > > Thank you!
> > > >
> > > > Please create a Jira user if you didn't do so already, and let me
> know
> > > your
> > > > user name. I'll add you to the list.
> > > >
> > > > On Fri, Sep 4, 2015 at 4:33 PM, Bill Bejeck 
> wrote:
> > > >
> > > > > Hi can i get added to the contributor list? I'd like to take crack
> at
> > > > > KAFKA-2058 
> > > > >
> > > > > Thanks!
> > > > >
> > > > > Bill Bejeck
> > > > >
> > > >
> > >
> >
>


[jira] [Commented] (KAFKA-1070) Auto-assign node id

2015-09-08 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14734916#comment-14734916
 ] 

Sriharsha Chintalapani commented on KAFKA-1070:
---

[~amuraru] I am not sure where you are seeing .orig file in the trunk. Can you 
paste the link to that file from here https://github.com/apache/kafka

> Auto-assign node id
> ---
>
> Key: KAFKA-1070
> URL: https://issues.apache.org/jira/browse/KAFKA-1070
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
>  Labels: usability
> Fix For: 0.8.3
>
> Attachments: KAFKA-1070.patch, KAFKA-1070_2014-07-19_16:06:13.patch, 
> KAFKA-1070_2014-07-22_11:34:18.patch, KAFKA-1070_2014-07-24_20:58:17.patch, 
> KAFKA-1070_2014-07-24_21:05:33.patch, KAFKA-1070_2014-08-21_10:26:20.patch, 
> KAFKA-1070_2014-11-20_10:50:04.patch, KAFKA-1070_2014-11-25_20:29:37.patch, 
> KAFKA-1070_2015-01-01_17:39:30.patch, KAFKA-1070_2015-01-12_10:46:54.patch, 
> KAFKA-1070_2015-01-12_18:30:17.patch
>
>
> It would be nice to have Kafka brokers auto-assign node ids rather than 
> having that be a configuration. Having a configuration is irritating because 
> (1) you have to generate a custom config for each broker and (2) even though 
> it is in configuration, changing the node id can cause all kinds of bad 
> things to happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2477) Replicas spuriously deleting all segments in partition

2015-09-08 Thread Håkon Hitland (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14734928#comment-14734928
 ] 

Håkon Hitland commented on KAFKA-2477:
--

I was able to enable trace logging on a production server, and have captured 
logs from the leader when the error happens.

It looks like the attempted read happens right before the log is actually 
appended. I don't see any other abnormal behaviour.

Looking at the code in question, I think I have an idea of how it might happen:

kafka.log.Log uses a lock to synchronize writes, but not reads.

Assume a write W1 has gotten as far as FileMessageSet.append() and has just 
executed _size.getAndAdd(written)

Now a concurrent read R1 comes in. In FileMessageSet.read(), it can get a new 
message set with end = math.min(this.start + position + size, sizeInBytes()). 
This includes the message that was just written in W1.

The read finishes, and a new read R2 starts. R2 tries to continue from W1, but 
in Log.read() it finds that startOffset is larger than 
nextOffsetMetadata.messageOffset and throws an exception.
(By the way, Log.read() can potentially read nextOffsetMetadata multiple times, 
with no guarantee that it hasn't changed. It's not obvious to me that this is 
correct.)

Finally, W1 updates nextOffsetMetadata in Log.updateLogEndOffset(), too late 
for R2 which has already triggered a log truncation on the replica.

Some possible solutions:
- Synchronize access to nextOffsetMetadata in Log.read()
- Clamp reads in Log.read() to never go beyond the current message offset.
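
A toy illustration of the suspected interleaving (simplified stand-ins, not the actual
kafka.log.Log / FileMessageSet classes): the writer makes the appended bytes visible
before it publishes the new end offset, so a read that slips in between can return data
past the published offset, and a follow-up read at that offset then looks out of range.

import java.util.concurrent.atomic.AtomicInteger;

// Toy model of the suspected race; NOT Kafka's actual code.
class ToyLog {
    private final AtomicInteger visibleSize = new AtomicInteger(); // like FileMessageSet._size
    private volatile long nextOffset = 0;                          // like nextOffsetMetadata

    synchronized void append(int bytes, long newNextOffset) {
        visibleSize.getAndAdd(bytes);   // W1: bytes become visible to readers here...
        // <-- R1 can run now and hand out the just-appended data,
        //     and R2 starting from the new offset fails the check below,
        //     because nextOffset has not been published yet.
        nextOffset = newNextOffset;     // ...and only here is the end offset published.
    }

    int read(long startOffset, int maxBytes) {
        if (startOffset > nextOffset)   // what R2 trips over in Log.read()
            throw new IllegalStateException("startOffset beyond published end offset");
        return Math.min(visibleSize.get(), maxBytes);
    }
}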

> Replicas spuriously deleting all segments in partition
> --
>
> Key: KAFKA-2477
> URL: https://issues.apache.org/jira/browse/KAFKA-2477
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Håkon Hitland
> Attachments: kafka_log.txt, kafka_log_trace.txt
>
>
> We're seeing some strange behaviour in brokers: a replica will sometimes 
> schedule all segments in a partition for deletion, and then immediately start 
> replicating them back, triggering our check for under-replicating topics.
> This happens on average a couple of times a week, for different brokers and 
> topics.
> We have per-topic retention.ms and retention.bytes configuration, the topics 
> where we've seen this happen are hitting the size limit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1686) Implement SASL/Kerberos

2015-09-08 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14734905#comment-14734905
 ] 

Rajini Sivaram commented on KAFKA-1686:
---

The current implementation uses GSSAPI as the only hard-coded SASL mechanism. 
We are keen to use SASL/PLAIN. Would it be possible to make the SASL mechanism 
configurable? This task does say "Implement SASL/Kerberos", so if it would be 
better to open a new task for Sasl/PLAIN, that would be fine too. But it will 
be good to separate out the Kerberos mechanism related code from the main SASL 
client/server codepath to make it easier to support multiple mechanisms.

We would like to use SSL as the transport layer with SASL/PLAIN for client 
authentication. I think that would be a straightforward new SecurityProtocol 
(SSL_SASL) that combines SSLTransportLayer with SaslAuthenticator. Are you 
planning to add this combination under this task?
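
For concreteness, the kind of client configuration this would enable might look like the
sketch below. The property names are hypothetical at this point -- neither a configurable
SASL mechanism nor an SSL+SASL security protocol exists in the codebase yet.

import java.util.Properties;

class SaslSslClientConfigSketch {
    static Properties sketch() {
        Properties props = new Properties();
        // Hypothetical keys/values, only sketching the idea proposed above:
        props.put("security.protocol", "SASL_SSL"); // SSL transport layer + SASL authentication
        props.put("sasl.mechanism", "PLAIN");       // mechanism chosen by config rather than hard-coded GSSAPI
        return props;
    }
}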


> Implement SASL/Kerberos
> ---
>
> Key: KAFKA-1686
> URL: https://issues.apache.org/jira/browse/KAFKA-1686
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.8.2.1
>Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.8.3
>
>
> Implement SASL/Kerberos authentication.
> To do this we will need to introduce a new SASLRequest and SASLResponse pair 
> to the client protocol. This request and response will each have only a 
> single byte[] field and will be used to handle the SASL challenge/response 
> cycle. Doing this will initialize the SaslServer instance and associate it 
> with the session in a manner similar to KAFKA-1684.
> When using integrity or encryption mechanisms with SASL we will need to wrap 
> and unwrap bytes as in KAFKA-1684 so the same interface that covers the 
> SSLEngine will need to also cover the SaslServer instance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2477) Replicas spuriously deleting all segments in partition

2015-09-08 Thread Håkon Hitland (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Håkon Hitland updated KAFKA-2477:
-
Attachment: kafka_log_trace.txt

Attached trace log from leader. Filtered to only lines for the relevant 
partition.

> Replicas spuriously deleting all segments in partition
> --
>
> Key: KAFKA-2477
> URL: https://issues.apache.org/jira/browse/KAFKA-2477
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Håkon Hitland
> Attachments: kafka_log.txt, kafka_log_trace.txt
>
>
> We're seeing some strange behaviour in brokers: a replica will sometimes 
> schedule all segments in a partition for deletion, and then immediately start 
> replicating them back, triggering our check for under-replicating topics.
> This happens on average a couple of times a week, for different brokers and 
> topics.
> We have per-topic retention.ms and retention.bytes configuration, the topics 
> where we've seen this happen are hitting the size limit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2489) System tests: update benchmark tests to run with new and old consumer

2015-09-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14735924#comment-14735924
 ] 

ASF GitHub Bot commented on KAFKA-2489:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/179


> System tests: update benchmark tests to run with new and old consumer
> -
>
> Key: KAFKA-2489
> URL: https://issues.apache.org/jira/browse/KAFKA-2489
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>Priority: Blocker
> Fix For: 0.8.3
>
>
> Update benchmark tests to run w/new consumer to help catch performance 
> regressions
> For context:
> https://www.mail-archive.com/dev@kafka.apache.org/msg33633.html
> The new consumer was previously getting good performance. However, a 
> recent report on the mailing list indicates it's dropped significantly. After 
> evaluation, even with a local broker it seems to only be reaching a 2-10MB/s, 
> compared to 600+MB/s previously. Before release, we should get the 
> performance 
> back on par.
> Some details about where the regression occurred from the mailing list 
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201508.mbox/%3CCAAdKFaE8bPSeWZf%2BF9RuA-xZazRpBrZG6vo454QLVHBAk_VOJg%40mail.gmail.com%3E
>  :
> bq. At 49026f11781181c38e9d5edb634be9d27245c961 (May 14th), we went from good 
> performance -> an error due to broker apparently not accepting the partition 
> assignment strategy. Since this commit seems to add heartbeats and the server 
> side code for partition assignment strategies, I assume we were missing 
> something on the client side and by filling in the server side, things 
> stopped 
> working.
> bq. On either 84636272422b6379d57d4c5ef68b156edc1c67f8 or 
> a5b11886df8c7aad0548efd2c7c3dbc579232f03 (July 17th), I am able to run the 
> perf 
> test again, but it's slow -- ~10MB/s for me vs the 2MB/s Jay was seeing, but 
> that's still far less than the 600MB/s I saw on the earlier commits.
> Ideally we would also at least have a system test in place for the new 
> consumer, even if regressions weren't automatically detected. It would at 
> least 
> allow for manually checking for regressions. This should not be difficult 
> since 
> there are already old consumer performance tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2489: add benchmark for new consumer

2015-09-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/179


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-2527) System Test for Quotas in Ducktape

2015-09-08 Thread Dong Lin (JIRA)
Dong Lin created KAFKA-2527:
---

 Summary: System Test for Quotas in Ducktape
 Key: KAFKA-2527
 URL: https://issues.apache.org/jira/browse/KAFKA-2527
 Project: Kafka
  Issue Type: Test
Reporter: Dong Lin
Assignee: Dong Lin






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2528) Quota Performance Evaluation

2015-09-08 Thread Dong Lin (JIRA)
Dong Lin created KAFKA-2528:
---

 Summary: Quota Performance Evaluation
 Key: KAFKA-2528
 URL: https://issues.apache.org/jira/browse/KAFKA-2528
 Project: Kafka
  Issue Type: Sub-task
Reporter: Dong Lin
Assignee: Dong Lin






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2527) System Test for Quotas in Ducktape

2015-09-08 Thread Dong Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin updated KAFKA-2527:

Issue Type: Sub-task  (was: Test)
Parent: KAFKA-2083

> System Test for Quotas in Ducktape
> --
>
> Key: KAFKA-2527
> URL: https://issues.apache.org/jira/browse/KAFKA-2527
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dong Lin
>Assignee: Dong Lin
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: New consumer subscribe then seek

2015-09-08 Thread Jason Gustafson
Hey Phil,

You've stumbled onto one of the tricky aspects of the new consumer that
we've been talking about recently. KafkaConsumer.subscribe() is
asynchronous in the sense that it will return before partitions have been
assigned. We could make it synchronous, but we wouldn't be able to
guarantee how long the assignment would be active since other members of
the group or metadata changes can cause the coordinator to rebalance the
assignment. The best place to perform a seek would probably be in the
rebalance callback, which can be passed through the alternative subscribe
API. The code might look something like this:

consumer.subscribe(topics, new RebalanceListener() {
  void onPartitionsAssigned(List<TopicPartition> partitions) {
// seek to the initial offset for the assigned partitions here
  }
  void onPartitionsRevoked(List<TopicPartition> partitions) {
// commit offsets if you need to
  }
});

while (true) {
  ConsumerRecords records = consumer.poll(100);
  // do stuff with records
}

Does that make sense?


Thanks,
Jason


On Tue, Sep 8, 2015 at 2:59 PM, Phil Steitz  wrote:

> I have been experimenting with the KafkaConsumer currently in
> development [1].  Sorry if this should be a question for the user
> list, but I am not sure if what I am seeing is something not working
> yet or if I am misunderstanding the API.  If I use
> KafkaConsumer#subscribe to subscribe to a topic and then try to use
> seek(TopicPartion, offset) to position the consumer, I get an
> IllegalStateException with message "No current assignment for
> partition "  If I use assign instead to connect to the topic,
> things work fine.  I can see why this is by looking at the
> SubscriptionState code which is throwing the ISE because
> SubscriptionState#seek expects to find an assignment, but
> KafkaConsumer#subscribe does not make any.
>
> I know this is unreleased code and I am not looking for help here -
> actually more like looking *to* help but just learning the code.
> Happy to open a ticket with a test case if that will help or a patch
> to the javadoc if I am misunderstanding the API and it can be made
> clearer.
>
> Thanks!
>
> Phil
>
> [1] ff189fa05ccdacac100f3d15d167dcbe561f57a6
>
>


[jira] [Commented] (KAFKA-2120) Add a request timeout to NetworkClient

2015-09-08 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14735899#comment-14735899
 ] 

Mayuresh Gharat commented on KAFKA-2120:


Hi [~junrao], I have uploaded a new patch addressing the concerns that you 
raised and also explained my earlier approach. Thanks a lot for all the 
comments. Would you mind taking another look?

Thanks,

Mayuresh

> Add a request timeout to NetworkClient
> --
>
> Key: KAFKA-2120
> URL: https://issues.apache.org/jira/browse/KAFKA-2120
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Jiangjie Qin
>Assignee: Mayuresh Gharat
>Priority: Blocker
> Fix For: 0.8.3
>
> Attachments: KAFKA-2120.patch, KAFKA-2120_2015-07-27_15:31:19.patch, 
> KAFKA-2120_2015-07-29_15:57:02.patch, KAFKA-2120_2015-08-10_19:55:18.patch, 
> KAFKA-2120_2015-08-12_10:59:09.patch, KAFKA-2120_2015-09-03_15:12:02.patch, 
> KAFKA-2120_2015-09-04_17:49:01.patch
>
>
> Currently NetworkClient does not have a timeout setting for requests. So if 
> no response is received for a request due to reasons such as broker is down, 
> the request will never be completed.
> Request timeout will also be used as implicit timeout for some methods such 
> as KafkaProducer.flush() and kafkaProducer.close().
> KIP-19 is created for this public interface change.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-19+-+Add+a+request+timeout+to+NetworkClient
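
The core of such a timeout can be sketched in a few lines (hypothetical names,
not the actual NetworkClient patch): stamp every in-flight request with its
send time and, on each poll, treat any node whose oldest request has exceeded
request.timeout.ms as disconnected and fail its outstanding requests.

import java.util.Deque;
import java.util.Map;

// Illustrative sketch of a per-node request timeout; not Kafka code.
final class RequestTimeoutSketch {
    static final class InFlightRequest {
        final long sendTimeMs;
        InFlightRequest(long sendTimeMs) { this.sendTimeMs = sendTimeMs; }
    }

    // The oldest request sits at the tail of each node's in-flight queue.
    static boolean hasExpiredRequest(Deque<InFlightRequest> inFlight, long nowMs, long requestTimeoutMs) {
        InFlightRequest oldest = inFlight.peekLast();
        return oldest != null && nowMs - oldest.sendTimeMs > requestTimeoutMs;
    }

    // Nodes with an expired request are treated as disconnected; in a real client their
    // outstanding requests would be completed exceptionally rather than silently dropped.
    static void expire(Map<String, Deque<InFlightRequest>> inFlightByNode, long nowMs, long requestTimeoutMs) {
        inFlightByNode.forEach((node, queue) -> {
            if (hasExpiredRequest(queue, nowMs, requestTimeoutMs))
                queue.clear();
        });
    }
}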



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-2527) System Test for Quotas in Ducktape

2015-09-08 Thread Dong Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-2527 started by Dong Lin.
---
> System Test for Quotas in Ducktape
> --
>
> Key: KAFKA-2527
> URL: https://issues.apache.org/jira/browse/KAFKA-2527
> Project: Kafka
>  Issue Type: Test
>Reporter: Dong Lin
>Assignee: Dong Lin
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : Kafka-trunk #618

2015-09-08 Thread Apache Jenkins Server
See 



Re: New consumer subscribe then seek

2015-09-08 Thread Phil Steitz
On 9/8/15 6:58 PM, Jason Gustafson wrote:
> Hey Phil,
>
> You've stumbled onto one of the tricky aspects of the new consumer that
> we've been talking about recently. KafkaConsumer.subscribe() is
> asynchronous in the sense that it will return before partitions have been
> assigned. We could make it synchronous, but we wouldn't be able to
> guarantee how long the assignment would be active since other members of
> the group or metadata changes can cause the coordinator to rebalance the
> assignment. The best place to perform a seek would probably be in the
> rebalance callback, which can be passed through the alternative subscribe
> API. The code might look something like this:
>
> consumer.subscribe(topics, new RebalanceListener() {
>   void onPartitionsAssigned(List<TopicPartition> partitions) {
> // seek to the initial offset for the assigned partitions here
>   }
>   void onPartitionsRevoked(List<TopicPartition> partitions) {
> // commit offsets if you need to
>   }
> });
>
> while (true) {
>   ConsumerRecords records = consumer.poll(100);
>   // do stuff with records
> }
>
> Does that make sense?

Yes, this makes sense.  Thanks!

Phil
>
>
> Thanks,
> Jason
>
>
> On Tue, Sep 8, 2015 at 2:59 PM, Phil Steitz  wrote:
>
>> I have been experimenting with the KafkaConsumer currently in
>> development [1].  Sorry if this should be a question for the user
>> list, but I am not sure if what I am seeing is something not working
>> yet or if I am misunderstanding the API.  If I use
>> KafkaConsumer#subscribe to subscribe to a topic and then try to use
>> seek(TopicPartition, offset) to position the consumer, I get an
>> IllegalStateException with message "No current assignment for
>> partition "  If I use assign instead to connect to the topic,
>> things work fine.  I can see why this is by looking at the
>> SubscriptionState code which is throwing the ISE because
>> SubscriptionState#seek expects to find an assignment, but
>> KafkaConsumer#subscribe does not make any.
>>
>> I know this is unreleased code and I am not looking for help here -
>> actually more like looking *to* help but just learning the code.
>> Happy to open a ticket with a test case if that will help or a patch
>> to the javadoc if I am misunderstanding the API and it can be made
>> clearer.
>>
>> Thanks!
>>
>> Phil
>>
>> [1] ff189fa05ccdacac100f3d15d167dcbe561f57a6
>>
>>




[jira] [Commented] (KAFKA-1686) Implement SASL/Kerberos

2015-09-08 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734918#comment-14734918
 ] 

Sriharsha Chintalapani commented on KAFKA-1686:
---

[~rsivaram] Yes, I'll make it a configurable option. The current patch is 
going through cleanup and adding more config options.
Yes, I'll add SSLSASL as well.

> Implement SASL/Kerberos
> ---
>
> Key: KAFKA-1686
> URL: https://issues.apache.org/jira/browse/KAFKA-1686
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.8.2.1
>Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.8.3
>
>
> Implement SASL/Kerberos authentication.
> To do this we will need to introduce a new SASLRequest and SASLResponse pair 
> to the client protocol. This request and response will each have only a 
> single byte[] field and will be used to handle the SASL challenge/response 
> cycle. Doing this will initialize the SaslServer instance and associate it 
> with the session in a manner similar to KAFKA-1684.
> When using integrity or encryption mechanisms with SASL we will need to wrap 
> and unwrap bytes as in KAFKA-1684 so the same interface that covers the 
> SSLEngine will need to also cover the SaslServer instance.
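
As a rough sketch of the challenge/response cycle described above (standard
javax.security.sasl usage, not the Kafka patch itself): each SASLRequest body
would be handed to the SaslServer, and the returned bytes become the next
SASLResponse, until the exchange completes.

import javax.security.sasl.SaslException;
import javax.security.sasl.SaslServer;

// Hypothetical broker-side handling of one SASL token; names are illustrative.
final class SaslHandshakeSketch {
    // Feed the client's token to the SaslServer; the return value (possibly null) is the
    // challenge to send back in the response. Repeat until handshakeComplete() is true.
    static byte[] handleToken(SaslServer saslServer, byte[] clientToken) throws SaslException {
        return saslServer.evaluateResponse(clientToken);
    }

    static boolean handshakeComplete(SaslServer saslServer) {
        return saslServer.isComplete();
    }
}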



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1070) Auto-assign node id

2015-09-08 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734938#comment-14734938
 ] 

Adrian Muraru commented on KAFKA-1070:
--

[~harsha_ch] I see KAFKA-1973 already fixed that, nw :)

> Auto-assign node id
> ---
>
> Key: KAFKA-1070
> URL: https://issues.apache.org/jira/browse/KAFKA-1070
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
>  Labels: usability
> Fix For: 0.8.3
>
> Attachments: KAFKA-1070.patch, KAFKA-1070_2014-07-19_16:06:13.patch, 
> KAFKA-1070_2014-07-22_11:34:18.patch, KAFKA-1070_2014-07-24_20:58:17.patch, 
> KAFKA-1070_2014-07-24_21:05:33.patch, KAFKA-1070_2014-08-21_10:26:20.patch, 
> KAFKA-1070_2014-11-20_10:50:04.patch, KAFKA-1070_2014-11-25_20:29:37.patch, 
> KAFKA-1070_2015-01-01_17:39:30.patch, KAFKA-1070_2015-01-12_10:46:54.patch, 
> KAFKA-1070_2015-01-12_18:30:17.patch
>
>
> It would be nice to have Kafka brokers auto-assign node ids rather than 
> having that be a configuration. Having a configuration is irritating because 
> (1) you have to generate a custom config for each broker and (2) even though 
> it is in configuration, changing the node id can cause all kinds of bad 
> things to happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2477) Replicas spuriously deleting all segments in partition

2015-09-08 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735015#comment-14735015
 ] 

Jiangjie Qin commented on KAFKA-2477:
-

[~hakon] Yes, that's correct.

The log append does the following two things:
1. Append the message to the log.
2. Update Log.nextOffsetMetadata.messageOffset.
If two follower reads come in between 1 and 2, there will be an out-of-range 
exception. I think the fix is to read up to 
Log.nextOffsetMetadata.messageOffset for replicas instead of the max size.

Are you interested in submitting a patch?
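
A minimal sketch of that race and of the proposed fix, using hypothetical Java
rather than the actual Log class: the message is physically in the segment
(step 1) before the published end offset advances (step 2), so a fetch bounded
only by max size can read past the exposed offsets, while bounding it by the
published offset cannot.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only; not the Kafka Log implementation.
final class LogSketch {
    private final List<byte[]> messages = new CopyOnWriteArrayList<>(); // stands in for segment contents
    private final AtomicLong nextOffset = new AtomicLong(0);            // plays the role of nextOffsetMetadata.messageOffset

    synchronized void append(byte[] message) {
        messages.add(message);        // step 1: the bytes are in the log
        nextOffset.incrementAndGet(); // step 2: the published end offset catches up
    }

    // Proposed fix: a replica fetch never reads past the published offset, even if more
    // bytes are already sitting in the segment between steps 1 and 2.
    List<byte[]> readForReplica(long fromOffset, int maxMessages) {
        long upTo = Math.min(nextOffset.get(), fromOffset + maxMessages);
        List<byte[]> out = new ArrayList<>();
        for (long offset = fromOffset; offset < upTo; offset++)
            out.add(messages.get((int) offset));
        return out;
    }
}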

> Replicas spuriously deleting all segments in partition
> --
>
> Key: KAFKA-2477
> URL: https://issues.apache.org/jira/browse/KAFKA-2477
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Håkon Hitland
> Attachments: kafka_log.txt, kafka_log_trace.txt
>
>
> We're seeing some strange behaviour in brokers: a replica will sometimes 
> schedule all segments in a partition for deletion, and then immediately start 
> replicating them back, triggering our check for under-replicating topics.
> This happens on average a couple of times a week, for different brokers and 
> topics.
> We have per-topic retention.ms and retention.bytes configuration, the topics 
> where we've seen this happen are hitting the size limit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Maybe 0.8.3 should really be 0.9.0?

2015-09-08 Thread Aditya Auradkar
Hi Gwen,

I certainly think 0.9.0 is better than 0.8.3.
As regards semantic versioning, do we have a plan for a 1.0 release? IIUC,
compatibility rules don't really apply for pre-1.0 stuff. I'd argue that
Kafka already qualifies for 1.x.

Aditya

On Tue, Sep 8, 2015 at 10:26 AM, Gwen Shapira  wrote:

> We've been rather messy about this in the past, but I'm hoping to converge
> toward semantic versioning: http://semver.org/
>
> 0.9.0 will fit since we are adding new functionality in backward compatible
> manner.
>
> On Tue, Sep 8, 2015 at 10:23 AM, Flavio Junqueira  wrote:
>
> > Hi Gwen,
> >
> > What's the expected meaning of the individual digits of the version for
> > this community? Could you give me some insight here?
> >
> > -Flavio
> >
> > > On 08 Sep 2015, at 18:19, Gwen Shapira  wrote:
> > >
> > > Hi Kafka Fans,
> > >
> > > What do you think of making the next release (the one with security,
> new
> > > consumer, quotas, etc) a 0.9.0 instead of 0.8.3?
> > >
> > > It has lots of new features, and new consumer was pretty much scoped
> for
> > > 0.9.0, so it matches our original roadmap. I feel that so many awesome
> > > features deserve a better release number.
> > >
> > > The downside is mainly some confusion (we refer to 0.8.3 in bunch of
> > > places), and noisy emails from JIRA while we change "fix version" field
> > > everywhere.
> > >
> > > Thoughts?
> >
> >
>


[jira] [Commented] (KAFKA-2477) Replicas spuriously deleting all segments in partition

2015-09-08 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735152#comment-14735152
 ] 

Jiangjie Qin commented on KAFKA-2477:
-

No worries. I can do that :)

> Replicas spuriously deleting all segments in partition
> --
>
> Key: KAFKA-2477
> URL: https://issues.apache.org/jira/browse/KAFKA-2477
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Håkon Hitland
>Assignee: Jiangjie Qin
> Attachments: kafka_log.txt, kafka_log_trace.txt
>
>
> We're seeing some strange behaviour in brokers: a replica will sometimes 
> schedule all segments in a partition for deletion, and then immediately start 
> replicating them back, triggering our check for under-replicating topics.
> This happens on average a couple of times a week, for different brokers and 
> topics.
> We have per-topic retention.ms and retention.bytes configuration, the topics 
> where we've seen this happen are hitting the size limit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2517) Performance Regression post SSL implementation

2015-09-08 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-2517:
-
Priority: Blocker  (was: Critical)

> Performance Regression post SSL implementation
> --
>
> Key: KAFKA-2517
> URL: https://issues.apache.org/jira/browse/KAFKA-2517
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>Priority: Blocker
> Fix For: 0.8.3
>
>
> It would appear that we incurred a performance regression on submission of 
> the SSL work affecting the performance of the new Kafka Consumer. 
> Running with 1KB messages. Macbook 2.3 GHz Intel Core i7, 8GB, APPLE SSD 
> SM256E. Single server instance. All local. 
> kafka-consumer-perf-test.sh ... --messages 300  --new-consumer
> Pre-SSL changes (commit 503bd36647695e8cc91893ffb80346dd03eb0bc5)
> Steady state throughputs = 234.8 MB/s
> (2861.5913, 234.8261, 3000596, 246233.0543)
> Post-SSL changes (commit 13c432f7952de27e9bf8cb4adb33a91ae3a4b738) 
> Steady state throughput =  178.1 MB/s  
> (2861.5913, 178.1480, 3000596, 186801.7182)
> Implication is a 25% reduction in consumer throughput for these test 
> conditions. 
> This appears to be caused by the use of PlaintextTransportLayer rather than 
> SocketChannel in FileMessageSet.writeTo() meaning a zero copy transfer is not 
> invoked.
> Switching to the use of a SocketChannel directly in FileMessageSet.writeTo()  
> yields the following result:
> Steady state throughput =  281.8 MB/s
> (2861.5913, 281.8191, 3000596, 295508.7650)
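
As a rough illustration of why the channel type matters (not the
FileMessageSet code itself): FileChannel.transferTo can use the kernel's
zero-copy path only when handed the raw SocketChannel; once the bytes go
through a wrapping layer they are copied via user-space buffers, roughly as
below.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.channels.WritableByteChannel;

// Hypothetical sketch contrasting the two paths; names are illustrative.
final class ZeroCopySketch {
    // Zero-copy path: the kernel moves file bytes straight to the socket.
    static long sendZeroCopy(FileChannel file, SocketChannel socket, long pos, long count) throws IOException {
        return file.transferTo(pos, count, socket);
    }

    // Buffered path: every byte is read into a user-space buffer and written back out,
    // roughly the cost incurred when an intermediate layer prevents the sendfile path.
    static long sendBuffered(FileChannel file, WritableByteChannel out, long pos, long count) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(8192);
        long written = 0;
        while (written < count) {
            buf.clear();
            buf.limit((int) Math.min(buf.capacity(), count - written));
            int read = file.read(buf, pos + written);
            if (read <= 0) break;
            buf.flip();
            while (buf.hasRemaining())
                out.write(buf);
            written += read;
        }
        return written;
    }
}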



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Maybe 0.8.3 should really be 0.9.0?

2015-09-08 Thread Gwen Shapira
Hi Kafka Fans,

What do you think of making the next release (the one with security, new
consumer, quotas, etc) a 0.9.0 instead of 0.8.3?

It has lots of new features, and new consumer was pretty much scoped for
0.9.0, so it matches our original roadmap. I feel that so many awesome
features deserve a better release number.

The downside is mainly some confusion (we refer to 0.8.3 in bunch of
places), and noisy emails from JIRA while we change "fix version" field
everywhere.

Thoughts?


Re: Maybe 0.8.3 should really be 0.9.0?

2015-09-08 Thread Flavio Junqueira
Hi Gwen,

What's the expected meaning of the individual digits of the version for this 
community? Could you give me some insight here?

-Flavio

> On 08 Sep 2015, at 18:19, Gwen Shapira  wrote:
> 
> Hi Kafka Fans,
> 
> What do you think of making the next release (the one with security, new
> consumer, quotas, etc) a 0.9.0 instead of 0.8.3?
> 
> It has lots of new features, and new consumer was pretty much scoped for
> 0.9.0, so it matches our original roadmap. I feel that so many awesome
> features deserve a better release number.
> 
> The downside is mainly some confusion (we refer to 0.8.3 in bunch of
> places), and noisy emails from JIRA while we change "fix version" field
> everywhere.
> 
> Thoughts?



[jira] [Commented] (KAFKA-2477) Replicas spuriously deleting all segments in partition

2015-09-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735134#comment-14735134
 ] 

Håkon Hitland commented on KAFKA-2477:
--

I don't think I can provide a patch at the moment; I would appreciate it if 
someone more familiar with the code fixed it.

> Replicas spuriously deleting all segments in partition
> --
>
> Key: KAFKA-2477
> URL: https://issues.apache.org/jira/browse/KAFKA-2477
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Håkon Hitland
> Attachments: kafka_log.txt, kafka_log_trace.txt
>
>
> We're seeing some strange behaviour in brokers: a replica will sometimes 
> schedule all segments in a partition for deletion, and then immediately start 
> replicating them back, triggering our check for under-replicating topics.
> This happens on average a couple of times a week, for different brokers and 
> topics.
> We have per-topic retention.ms and retention.bytes configuration, the topics 
> where we've seen this happen are hitting the size limit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2477) Replicas spuriously deleting all segments in partition

2015-09-08 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin reassigned KAFKA-2477:
---

Assignee: Jiangjie Qin

> Replicas spuriously deleting all segments in partition
> --
>
> Key: KAFKA-2477
> URL: https://issues.apache.org/jira/browse/KAFKA-2477
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Håkon Hitland
>Assignee: Jiangjie Qin
> Attachments: kafka_log.txt, kafka_log_trace.txt
>
>
> We're seeing some strange behaviour in brokers: a replica will sometimes 
> schedule all segments in a partition for deletion, and then immediately start 
> replicating them back, triggering our check for under-replicating topics.
> This happens on average a couple of times a week, for different brokers and 
> topics.
> We have per-topic retention.ms and retention.bytes configuration, the topics 
> where we've seen this happen are hitting the size limit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-31 - Message format change proposal

2015-09-08 Thread Jay Kreps
Hey Todd,

Yeah, totally agree the use case is important. I think there are
potentially also other uses that having access by time opens up too.

-Jay

On Sun, Sep 6, 2015 at 9:54 PM, Todd Palino  wrote:

> So, with regards to why you want to search by timestamp, the biggest
> problem I've seen is with consumers who want to reset their timestamps to a
> specific point, whether it is to replay a certain amount of messages, or to
> rewind to before some problem state existed. This happens more often than
> anyone would like.
>
> To handle this now we need to constantly export the broker's offset for
> every partition to a time-series database and then use external processes
> to query this. I know we're not the only ones doing this. The way the
> broker handles requests for offsets by timestamp is a little obtuse
> (explain it to anyone without intimate knowledge of the internal workings
> of the broker - every time I do I see this). In addition, as Becket pointed
> out, it causes problems specifically with retention of messages by time
> when you move partitions around.
>
> I'm deliberately avoiding the discussion of what timestamp to use. I can
> see the argument either way, though I tend to lean towards the idea that
> the broker timestamp is the only viable source of truth in this situation.
>
> -Todd
>
>
> On Sun, Sep 6, 2015 at 7:08 PM, Ewen Cheslack-Postava 
> wrote:
>
> > On Sun, Sep 6, 2015 at 4:57 PM, Jay Kreps  wrote:
> >
> > >
> > > 2. Nobody cares what time it is on the server.
> > >
> >
> > This is a good way of summarizing the issue I was trying to get at, from
> an
> > app's perspective. Of the 3 stated goals of the KIP, #2 (log retention)
> is
> > reasonably handled by a server-side timestamp. I really just care that a
> > message is there long enough that I have a chance to process it. #3
> > (searching by timestamp) only seems useful if we can guarantee the
> > server-side timestamp is close enough to the original client-side
> > timestamp, and any mirror maker step seems to break that (even ignoring
> any
> > issues with broker availability).
> >
> > I'm also wondering whether optimizing for search-by-timestamp on the
> broker
> > is really something we want to do given that messages aren't really
> > guaranteed to be ordered by application-level timestamps on the broker.
> Is
> > part of the need for this just due to the current consumer APIs being
> > difficult to work with? For example, could you implement this pretty
> easily
> > client side just the way you would broker-side? I'd imagine a couple of
> > random seeks + reads during very rare occasions (i.e. when the app starts
> > up) wouldn't be a problem performance-wise. Or is it also that you need
> the
> > broker to enforce things like monotonically increasing timestamps since
> you
> > can't do the query properly and efficiently without that guarantee, and
> > therefore what applications are actually looking for *is* broker-side
> > timestamps?
> >
> > -Ewen
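
A rough sketch of that client-side approach, assuming a hypothetical
extractTimestamp() that pulls an application-level timestamp out of each
record value, and roughly non-decreasing timestamps within a partition (this
is not an existing consumer API):

// Binary-search a partition's offset range for the first record at or after targetTimestampMs.
static long findOffsetForTime(KafkaConsumer<byte[], byte[]> consumer, TopicPartition tp,
                              long targetTimestampMs, long earliestOffset, long latestOffset) {
    long lo = earliestOffset, hi = latestOffset;
    while (lo < hi) {
        long mid = lo + (hi - lo) / 2;
        consumer.seek(tp, mid);
        if (timestampOfNextRecord(consumer, tp) < targetTimestampMs)
            lo = mid + 1;   // everything at or before mid is too old
        else
            hi = mid;       // mid (or something before it) is new enough
    }
    return lo;
}

// Poll until a record for the partition shows up and return its application-level timestamp.
static long timestampOfNextRecord(KafkaConsumer<byte[], byte[]> consumer, TopicPartition tp) {
    while (true) {
        for (ConsumerRecord<byte[], byte[]> record : consumer.poll(100).records(tp))
            return extractTimestamp(record.value()); // hypothetical application-defined extraction
    }
}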
> >
> >
> >
> > > Consider cases where data is being copied from a database or from log
> > > files. In steady-state the server time is very close to the client time
> > if
> > > their clocks are sync'd (see 1) but there will be times of large
> > divergence
> > > when the copying process is stopped or falls behind. When this occurs
> it
> > is
> > > clear that the time the data arrived on the server is irrelevant, it is
> > the
> > > source timestamp that matters. This is the problem you are trying to
> fix
> > by
> > > retaining the mm timestamp but really the client should always set the
> > time
> > > with the use of server-side time as a fallback. It would be worth
> talking
> > > to the Samza folks and reading through this blog post (
> > >
> >
> http://radar.oreilly.com/2015/08/the-world-beyond-batch-streaming-101.html
> > > )
> > > on this subject since we went through similar learnings on the stream
> > > processing side.
> > >
> > > I think the implication of these two is that we need a proposal that
> > > handles potentially very out-of-order timestamps in some kind of sane-ish
> > way
> > > (buggy clients will set something totally wrong as the time).
> > >
> > > -Jay
> > >
> > > On Sun, Sep 6, 2015 at 4:22 PM, Jay Kreps  wrote:
> > >
> > > > The magic byte is used to version message format so we'll need to
> make
> > > > sure that check is in place--I actually don't see it in the current
> > > > consumer code which I think is a bug we should fix for the next
> release
> > > > (filed KAFKA-2523). The purpose of that field is so there is a clear
> > > check
> > > > on the format rather than the scrambled scenarios Becket describes.
> > > >
> > > > Also, Becket, I don't think just fixing the java client is sufficient
> > as
> > > > that would break other clients--i.e. if anyone writes a v1 messages,
> > even
> > > > by accident, any non-v1-capable consumer will break. I think we
> > probably

Re: Maybe 0.8.3 should really be 0.9.0?

2015-09-08 Thread Gwen Shapira
We've been rather messy about this in the past, but I'm hoping to converge
toward semantic versioning: http://semver.org/

0.9.0 will fit since we are adding new functionality in backward compatible
manner.

On Tue, Sep 8, 2015 at 10:23 AM, Flavio Junqueira  wrote:

> Hi Gwen,
>
> What's the expected meaning of the individual digits of the version for
> this community? Could you give me some insight here?
>
> -Flavio
>
> > On 08 Sep 2015, at 18:19, Gwen Shapira  wrote:
> >
> > Hi Kafka Fans,
> >
> > What do you think of making the next release (the one with security, new
> > consumer, quotas, etc) a 0.9.0 instead of 0.8.3?
> >
> > It has lots of new features, and new consumer was pretty much scoped for
> > 0.9.0, so it matches our original roadmap. I feel that so many awesome
> > features deserve a better release number.
> >
> > The downside is mainly some confusion (we refer to 0.8.3 in bunch of
> > places), and noisy emails from JIRA while we change "fix version" field
> > everywhere.
> >
> > Thoughts?
>
>


Re: [DISCUSS] KIP-31 - Message format change proposal

2015-09-08 Thread Jay Kreps
Hey Becket,

I was proposing splitting up the KIP just for simplicity of discussion. You
can still implement them in one patch. I think otherwise it will be hard to
discuss/vote on them since if you like the offset proposal but not the time
proposal what do you do?

Introducing a second notion of time into Kafka is a pretty massive
philosophical change, so it kind of warrants its own KIP; I think it isn't
just "Change message format".

WRT time I think one thing to clarify in the proposal is how MM will have
access to set the timestamp? Presumably this will be a new field in
ProducerRecord, right? If so then any user can set the timestamp, right?
I'm not sure you answered the questions around how this will work for MM
since when MM retains timestamps from multiple partitions they will then be
out of order and in the past (so the max(lastAppendedTimestamp,
currentTimeMillis) override you proposed will not work, right?). If we
don't do this then when you set up mirroring the data will all be new and
you have the same retention problem you described. Maybe I missed
something...?
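
For reference, the clamp in question is just a monotonic timestamp
assignment; a minimal sketch with illustrative names, not code from the KIP:

import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of max(lastAppendedTimestamp, currentTimeMillis): the appended
// timestamp never moves backwards, so an older (e.g. mirrored) timestamp passed in would
// simply be overridden, which is exactly the concern raised above.
final class MonotonicTimestampSketch {
    private final AtomicLong lastAppendedTimestamp = new AtomicLong(Long.MIN_VALUE);

    long timestampForAppend(long proposedTimestampMs) {
        // atomically keep the larger of the previous value and the proposed one
        return lastAppendedTimestamp.accumulateAndGet(proposedTimestampMs, Math::max);
    }
}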

My main motivation is that given that both Samza and Kafka streams are
doing work that implies a mandatory client-defined notion of time, I really
think introducing a different mandatory notion of time in Kafka is going to
be quite odd. We should think hard about how client-defined time could
work. I'm not sure if it can, but I'm also not sure that it can't. Having
both will be odd. Did you chat about this with Yi/Kartik on the Samza side?

When you are saying it won't work you are assuming some particular
implementation? Maybe that the index is a monotonically increasing set of
pointers to the least record with a timestamp larger than the index time?
In other words a search for time X gives the largest offset at which all
records are <= X?

For retention, I agree with the problem you point out, but I think what you
are saying in that case is that you want a size limit too. If you use
system time you actually hit the same problem: say you do a full dump of a
DB table with a setting of 7 days retention, your retention will actually
not get enforced for the first 7 days because the data is "new to Kafka".

-Jay


On Mon, Sep 7, 2015 at 10:44 AM, Jiangjie Qin 
wrote:

> Jay,
>
> Thanks for the comments. Yes, there are actually three proposals as you
> pointed out.
>
> We will have a separate proposal for (1) - version control mechanism. We
> actually thought about whether we want to separate 2 and 3 internally
> before creating the KIP. The reason we put 2 and 3 together is it will
> saves us another cross board wire protocol change. Like you said, we have
> to migrate all the clients in all languages. To some extent, the effort to
> spend on upgrading the clients can be even bigger than implementing the new
> feature itself. So there are some attractions if we can do 2 and 3 together
> instead of separately. Maybe after (1) is done it will be easier to do
> protocol migration. But if we are able to come to an agreement on the
> timestamp solution, I would prefer to have it together with relative offset
> in the interest of avoiding another wire protocol change (the process to
> migrate to relative offset is exactly the same as migrate to message with
> timestamp).
>
> In terms of timestamp. I completely agree that having client timestamp is
> more useful if we can make sure the timestamp is good. But in reality that
> can be a really big *IF*. I think the problem is exactly as Ewen mentioned,
> if we let the client set the timestamp, it would be very hard for the
> broker to utilize it. If the broker applies retention based on the client
> timestamp, one misbehaving producer can potentially completely mess up the
> retention policy on the broker. Although people don't care about the server
> side timestamp, people do care a lot when the timestamp breaks. Searching by
> timestamp is a really important use case even though it is not used as
> often as searching by offset. It has significant direct impact on RTO when
> there is a cross cluster failover as Todd mentioned.
>
> The trick of using max(lastAppendedTimestamp, currentTimeMillis) is to
> guarantee a monotonic increase of the timestamp. Many commercial systems
> actually do something similar to this to solve time skew. As for changing the
> time, I am not sure people use NTP like a watch, just setting it
> forward/backward by an hour or so. The time adjustment I used to do is
> typically something like a minute per week. So each second might be a few
> microseconds slower/faster, which should not break the clock completely, so
> that time-based transactions are not affected. The one-minute change is
> spread over a week rather than applied instantly.
>
> Personally, I think having client side timestamp will be useful if we don't
> need to put the broker and data integrity under risk. If we have to choose
> from one of them but not 

Re: Maybe 0.8.3 should really be 0.9.0?

2015-09-08 Thread Gwen Shapira
I don't know of any 1.0 plans. IMO, it makes sense to have 0.9.0 out first,
and then discuss what it will take to get to 1.0.
Does that make sense?

On Tue, Sep 8, 2015 at 10:39 AM, Aditya Auradkar <
aaurad...@linkedin.com.invalid> wrote:

> Hi Gwen,
>
> I certainly think 0.9.0 is better than 0.8.3.
> As regards semantic versioning, do we have a plan for a 1.0 release? IIUC,
> compatibility rules don't really apply for pre-1.0 stuff. I'd argue that
> Kafka already qualifies for 1.x.
>
> Aditya
>
> On Tue, Sep 8, 2015 at 10:26 AM, Gwen Shapira  wrote:
>
> > We've been rather messy about this in the past, but I'm hoping to
> converge
> > toward semantic versioning: http://semver.org/
> >
> > 0.9.0 will fit since we are adding new functionality in backward
> compatible
> > manner.
> >
> > On Tue, Sep 8, 2015 at 10:23 AM, Flavio Junqueira 
> wrote:
> >
> > > Hi Gwen,
> > >
> > > What's the expected meaning of the individual digits of the version for
> > > this community? Could you give me some insight here?
> > >
> > > -Flavio
> > >
> > > > On 08 Sep 2015, at 18:19, Gwen Shapira  wrote:
> > > >
> > > > Hi Kafka Fans,
> > > >
> > > > What do you think of making the next release (the one with security,
> > new
> > > > consumer, quotas, etc) a 0.9.0 instead of 0.8.3?
> > > >
> > > > It has lots of new features, and new consumer was pretty much scoped
> > for
> > > > 0.9.0, so it matches our original roadmap. I feel that so many
> awesome
> > > > features deserve a better release number.
> > > >
> > > > The downside is mainly some confusion (we refer to 0.8.3 in bunch of
> > > > places), and noisy emails from JIRA while we change "fix version"
> field
> > > > everywhere.
> > > >
> > > > Thoughts?
> > >
> > >
> >
>


Re: Maybe 0.8.3 should really be 0.9.0?

2015-09-08 Thread Jiangjie Qin
Based on the new feature in next release, 0.9 looks reasonable.

There might be some other things worth thinking about. Although we have a
lot of new features added, many of them are actually either still in
development or not well tested yet. For example, of the security features,
only SSL is done and tested. The new consumer API might still be subject to
change. In that case, if we release 0.9 now, we might need a lot of
0.9.x.x versions to fix bugs and change APIs later. I thought the original
plan was to let 0.8.3 have both the new and old consumer and remove the old
consumer in 0.9.

If we don't have any stability guarantee for versions, I think either way
is fine. But I feel slightly better about having a transitional 0.8.3 version.
It might give us some room to test and stabilize.

Thanks,

Jiangjie (Becket) Qin


On Tue, Sep 8, 2015 at 10:26 AM, Gwen Shapira  wrote:

> We've been rather messy about this in the past, but I'm hoping to converge
> toward semantic versioning: http://semver.org/
>
> 0.9.0 will fit since we are adding new functionality in backward compatible
> manner.
>
> On Tue, Sep 8, 2015 at 10:23 AM, Flavio Junqueira  wrote:
>
> > Hi Gwen,
> >
> > What's the expected meaning of the individual digits of the version for
> > this community? Could you give me some insight here?
> >
> > -Flavio
> >
> > > On 08 Sep 2015, at 18:19, Gwen Shapira  wrote:
> > >
> > > Hi Kafka Fans,
> > >
> > > What do you think of making the next release (the one with security,
> new
> > > consumer, quotas, etc) a 0.9.0 instead of 0.8.3?
> > >
> > > It has lots of new features, and new consumer was pretty much scoped
> for
> > > 0.9.0, so it matches our original roadmap. I feel that so many awesome
> > > features deserve a better release number.
> > >
> > > The downside is mainly some confusion (we refer to 0.8.3 in bunch of
> > > places), and noisy emails from JIRA while we change "fix version" field
> > > everywhere.
> > >
> > > Thoughts?
> >
> >
>


Re: Maybe 0.8.3 should really be 0.9.0?

2015-09-08 Thread Jun Rao
+1 for 0.9.

Thanks,

Jun

On Tue, Sep 8, 2015 at 3:04 PM, Ismael Juma  wrote:

> +1 (non-binding) for 0.9.
>
> Ismael
>
> On Tue, Sep 8, 2015 at 10:19 AM, Gwen Shapira  wrote:
>
> > Hi Kafka Fans,
> >
> > What do you think of making the next release (the one with security, new
> > consumer, quotas, etc) a 0.9.0 instead of 0.8.3?
> >
> > It has lots of new features, and new consumer was pretty much scoped for
> > 0.9.0, so it matches our original roadmap. I feel that so many awesome
> > features deserve a better release number.
> >
> > The downside is mainly some confusion (we refer to 0.8.3 in bunch of
> > places), and noisy emails from JIRA while we change "fix version" field
> > everywhere.
> >
> > Thoughts?
> >
>


Re: Review Request 36858: Patch for KAFKA-2120

2015-09-08 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36858/#review98136
---


Thanks for the patch. A few more minor comments below.


clients/src/main/java/org/apache/kafka/clients/NetworkClient.java (line 406)


Perhaps it's better to log the warning in the caller and distinguish 
whether the disconnect is due to the client timeout or not.



clients/src/main/java/org/apache/kafka/clients/NetworkClient.java (lines 415 - 
417)


The comment on handleDisconnections() is no longer accurate.



clients/src/main/java/org/apache/kafka/common/network/Selector.java (lines 184 
- 189)


Do we still need disconnect(id)? It seems that we can just replace the 
usage in test with close(id) and remove disconnect from selectable and 
KafkaChannel? We have to implement close(id) in MockSelector().



core/src/main/scala/kafka/server/KafkaConfig.scala (line 67)


Do we need this? Could we just use ControllerSocketTimeoutMs?


- Jun Rao


On Sept. 5, 2015, 12:49 a.m., Mayuresh Gharat wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36858/
> ---
> 
> (Updated Sept. 5, 2015, 12:49 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2120
> https://issues.apache.org/jira/browse/KAFKA-2120
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Solved compile error
> 
> 
> Addressed Jason's comments for Kip-19
> 
> 
> Addressed Jun's comments
> 
> 
> Addressed Jason's comments about the default values for requestTimeout
> 
> 
> checkpoint
> 
> 
> Addressed Joel's concerns. Also tried to include Jun's feedback.
> 
> 
> Fixed a minor comment
> 
> 
> Solved unittest issue
> 
> 
> Addressed Jun's comments regarding NetworkClient
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
> dc8f0f115bcda893c95d17c0a57be8d14518d034 
>   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
> 7d24c6f5dd2b63b96584f3aa8922a1d048dc1ae4 
>   clients/src/main/java/org/apache/kafka/clients/InFlightRequests.java 
> 15d00d4e484bb5d51a9ae6857ed6e024a2cc1820 
>   clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
> f46c0d9b5eb73887c62a0e09c96e9d8c964c709d 
>   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
> 049b22eadd5496b70dfcfd9d821f67c62c68a052 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
> b9a2d4e2bc565f0ee72b27791afe5c894af262f1 
>   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
> 73237e455a9e5aa38672522cfd9e5fcdafbcef3b 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java
>  9517d9d0cd480d5ba1d12f1fde7963e60528d2f8 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> 804d569498396d431880641041fc9292076452cb 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> 06f00a99a73a288df9afa8c1d4abe3580fa968a6 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/BufferPool.java
>  4cb1e50d6c4ed55241aeaef1d3af09def5274103 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
>  a152bd7697dca55609a9ec4cfe0a82c10595fbc3 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordBatch.java
>  06182db1c3a5da85648199b4c0c98b80ea7c6c0c 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
> d2e64f7cd8bf56e433a210905b2874f71eee9ea0 
>   clients/src/main/java/org/apache/kafka/common/network/Selector.java 
> 4aa5cbb86ce6e1bf8f6769147ee2a6452c855c74 
>   clients/src/test/java/org/apache/kafka/clients/MockClient.java 
> e5815f56bdf8e2d980f2bc36b831ed234c0ac781 
>   clients/src/test/java/org/apache/kafka/clients/NetworkClientTest.java 
> 69c93c3adf674b1640534c3d7410fcaafaf2232c 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/internals/BufferPoolTest.java
>  2c693824fa53db1e38766b8c66a0ef42ef9d0f3a 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/internals/RecordAccumulatorTest.java
>  5b2e4ffaeab7127648db608c179703b27b577414 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/internals/SenderTest.java
>  aa44991777a855f4b7f4f7bf17107c69393ff8ff 
>   clients/src/test/java/org/apache/kafka/test/MockSelector.java 
> f83fd9b794a3bd191121a22bcb40fd6ec31d83b5 
>   core/src/main/scala/kafka/controller/ControllerChannelManager.scala 
> da1cff07f7f76dcfa5a805718febcccd4ed5f578 
>