[GitHub] kafka pull request #2019: KAFKA-4298: Ensure compressed message sets are con...

2016-10-12 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/2019

KAFKA-4298: Ensure compressed message sets are converted when cleaning the 
log



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-4298

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2019.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2019


commit af3b31b4d94ece1603ac470bfd8d781558987501
Author: Jason Gustafson 
Date:   2016-10-13T04:56:52Z

KAFKA-4298: Ensure compressed message sets are converted when cleaning the 
log






[jira] [Commented] (KAFKA-4298) LogCleaner does not convert compressed message sets properly

2016-10-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570935#comment-15570935
 ] 

ASF GitHub Bot commented on KAFKA-4298:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/2019

KAFKA-4298: Ensure compressed message sets are converted when cleaning the 
log



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-4298

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2019.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2019


commit af3b31b4d94ece1603ac470bfd8d781558987501
Author: Jason Gustafson 
Date:   2016-10-13T04:56:52Z

KAFKA-4298: Ensure compressed message sets are converted when cleaning the 
log




> LogCleaner does not convert compressed message sets properly
> 
>
> Key: KAFKA-4298
> URL: https://issues.apache.org/jira/browse/KAFKA-4298
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 0.10.1.1
>
>
> When cleaning the log, we attempt to write the cleaned messages using the 
> message format configured for the topic, but as far as I can tell, we do not 
> convert the wrapped messages in compressed message sets. For example, if 
> there is an old compressed message set with magic=0 in the log and the topic 
> is configured for magic=1, then after cleaning, the new message set will have 
> a wrapper with magic=1, but the nested messages will still have magic=0. If 
> this happens, there does not seem to be an easy way to recover without 
> manually fixing up the log.
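
A quick way to confirm the mismatch described above is to dump a cleaned segment with kafka.tools.DumpLogSegments using --deep-iteration, which descends into compressed wrappers and prints each inner message along with its magic byte. A minimal sketch, assuming a 0.10.x broker; the log directory, topic, and segment file below are placeholders:

```
# Dump a cleaned segment, descending into compressed message sets so the
# inner (wrapped) messages are printed individually; each entry includes
# the message's magic byte. Point --files at a real segment.
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --deep-iteration \
  --files /var/kafka-logs/my-topic-0/00000000000000000000.log
```

If the bug has occurred, the wrapper entries should report magic: 1 while the nested entries still report magic: 0.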





[jira] [Updated] (KAFKA-4298) LogCleaner does not convert compressed message sets properly

2016-10-12 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-4298:
---
Description: When cleaning the log, we attempt to write the cleaned 
messages using the message format configured for the topic, but as far as I can 
tell, we do not convert the wrapped messages in compressed message sets. For 
example, if there is an old compressed message set with magic=0 in the log and 
the topic is configured for magic=1, then after cleaning, the new message set 
will have a wrapper with magic=1, but the nested messages will still have 
magic=0. If this happens, there does not seem to be an easy way to recover 
without manually fixing up the log.  (was: When cleaning the log, we attempt to 
write the cleaned messages using the message format configured for the topic, 
but as far as I can tell, we do not convert the wrapped messages in compressed 
message sets. For example, if there is an old compressed message set with 
magic=0 in the log and the topic is configured for magic=1, then after 
cleaning, the new message set will have a wrapper with magic=1, but the nested 
messages will still have magic=0. If this happens, there does not seem to be an 
easy way to recover with manually fixing up the log.)

> LogCleaner does not convert compressed message sets properly
> 
>
> Key: KAFKA-4298
> URL: https://issues.apache.org/jira/browse/KAFKA-4298
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 0.10.1.1
>
>
> When cleaning the log, we attempt to write the cleaned messages using the 
> message format configured for the topic, but as far as I can tell, we do not 
> convert the wrapped messages in compressed message sets. For example, if 
> there is an old compressed message set with magic=0 in the log and the topic 
> is configured for magic=1, then after cleaning, the new message set will have 
> a wrapper with magic=1, but the nested messages will still have magic=0. If 
> this happens, there does not seem to be an easy way to recover without 
> manually fixing up the log.





[jira] [Created] (KAFKA-4298) LogCleaner does not convert compressed message sets properly

2016-10-12 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-4298:
--

 Summary: LogCleaner does not convert compressed message sets 
properly
 Key: KAFKA-4298
 URL: https://issues.apache.org/jira/browse/KAFKA-4298
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson
Priority: Critical
 Fix For: 0.10.1.1


When cleaning the log, we attempt to write the cleaned messages using the 
message format configured for the topic, but as far as I can tell, we do not 
convert the wrapped messages in compressed message sets. For example, if there 
is an old compressed message set with magic=0 in the log and the topic is 
configured for magic=1, then after cleaning, the new message set will have a 
wrapper with magic=1, but the nested messages will still have magic=0. If this 
happens, there does not seem to be an easy way to recover with manually fixing 
up the log.





[jira] [Issue Comment Deleted] (KAFKA-4297) Cannot Stop Kafka with Shell Script

2016-10-12 Thread Mabin Jeong (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mabin Jeong updated KAFKA-4297:
---
Comment: was deleted

(was: KAFKA-4297.solution2.patch: I modified 'kafka\.Kafka' to 'kafkaServer-gc' 
in 'kafka-server-stop.sh')

> Cannot Stop Kafka with Shell Script
> ---
>
> Key: KAFKA-4297
> URL: https://issues.apache.org/jira/browse/KAFKA-4297
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
> Environment: CentOS 6.7
>Reporter: Mabin Jeong
>Priority: Critical
>  Labels: easyfix
> Fix For: 0.10.0.1
>
>
> If Kafka's home path is long, Kafka cannot be stopped with 'kafka-server-stop.sh'.
> That command prints this message:
> ```
> No kafka server to stop
> ```
> This bug is caused by the command line being too long, like this:
> ```
> /home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
> -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
> -XX:+DisableExplicitGC -Djava.awt.headless=true 
> -Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/home/bbdev/Amasser/var/log/kafka 
> -Dlog4j.configuration=file:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../config/log4j.properties
>  -cp 
> 
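
The likely mechanism: on CentOS 6 kernels, ps reads /proc/&lt;pid&gt;/cmdline, which is capped at one page (4096 bytes), so with a classpath this long the string 'kafka.Kafka' falls past the cap and the stop script's grep matches nothing. A minimal sketch of the stock matcher and the intent of the two patches attached later in this thread; the patch contents are not reproduced here, so the two workaround patterns are assumptions and labeled as such:

```
#!/bin/sh
# Sketch of kafka-server-stop.sh and the proposed fixes (assumptions noted).

# Stock 0.10.0.x matcher -- misses the broker when 'kafka.Kafka' lies
# beyond the 4096-byte /proc/<pid>/cmdline cap, so the script prints
# "No kafka server to stop":
#PIDS=$(ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}')

# Assumed intent of KAFKA-4297.solution1.patch: start the broker with a
# '-Dkafka.server' marker early in the JVM options, then match on it:
#PIDS=$(ps ax | grep -i 'kafka\.server' | grep java | grep -v grep | awk '{print $1}')

# Assumed intent of KAFKA-4297.solution2.patch: match a token that appears
# early in the command line, such as the GC log option:
PIDS=$(ps ax | grep -i 'kafkaServer-gc' | grep java | grep -v grep | awk '{print $1}')

if [ -z "$PIDS" ]; then
  echo "No kafka server to stop"
  exit 1
else
  kill -s TERM $PIDS
fi
```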

[jira] [Updated] (KAFKA-4297) Cannot Stop Kafka with Shell Script

2016-10-12 Thread Mabin Jeong (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mabin Jeong updated KAFKA-4297:
---
Attachment: (was: KAFKA-4297.solution1.patch)

> Cannot Stop Kafka with Shell Script
> ---
>
> Key: KAFKA-4297
> URL: https://issues.apache.org/jira/browse/KAFKA-4297
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
> Environment: CentOS 6.7
>Reporter: Mabin Jeong
>Priority: Critical
>  Labels: easyfix
> Fix For: 0.10.0.1
>
>
> If Kafka's home path is long, Kafka cannot be stopped with 'kafka-server-stop.sh'.
> That command prints this message:
> ```
> No kafka server to stop
> ```
> This bug is caused by the command line being too long, like this:
> ```
> /home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
> -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
> -XX:+DisableExplicitGC -Djava.awt.headless=true 
> -Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/home/bbdev/Amasser/var/log/kafka 
> -Dlog4j.configuration=file:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../config/log4j.properties
>  -cp 
> 

[jira] [Issue Comment Deleted] (KAFKA-4297) Cannot Stop Kafka with Shell Script

2016-10-12 Thread Mabin Jeong (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mabin Jeong updated KAFKA-4297:
---
Comment: was deleted

(was: KAFKA-4297.solution1.patch: I added the '-Dkafka.server' option to the 
start command and modified the stop script.)

> Cannot Stop Kafka with Shell Script
> ---
>
> Key: KAFKA-4297
> URL: https://issues.apache.org/jira/browse/KAFKA-4297
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
> Environment: CentOS 6.7
>Reporter: Mabin Jeong
>Priority: Critical
>  Labels: easyfix
> Fix For: 0.10.0.1
>
>
> If Kafka's home path is long, Kafka cannot be stopped with 'kafka-server-stop.sh'.
> That command prints this message:
> ```
> No kafka server to stop
> ```
> This bug is caused by the command line being too long, like this:
> ```
> /home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
> -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
> -XX:+DisableExplicitGC -Djava.awt.headless=true 
> -Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/home/bbdev/Amasser/var/log/kafka 
> -Dlog4j.configuration=file:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../config/log4j.properties
>  -cp 
> 

[jira] [Updated] (KAFKA-4297) Cannot Stop Kafka with Shell Script

2016-10-12 Thread Mabin Jeong (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mabin Jeong updated KAFKA-4297:
---
Attachment: (was: KAFKA-4297.solution2.patch)

> Cannot Stop Kafka with Shell Script
> ---
>
> Key: KAFKA-4297
> URL: https://issues.apache.org/jira/browse/KAFKA-4297
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
> Environment: CentOS 6.7
>Reporter: Mabin Jeong
>Priority: Critical
>  Labels: easyfix
> Fix For: 0.10.0.1
>
>
> If Kafka's home path is long, Kafka cannot be stopped with 'kafka-server-stop.sh'.
> That command prints this message:
> ```
> No kafka server to stop
> ```
> This bug is caused by the command line being too long, like this:
> ```
> /home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
> -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
> -XX:+DisableExplicitGC -Djava.awt.headless=true 
> -Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/home/bbdev/Amasser/var/log/kafka 
> -Dlog4j.configuration=file:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../config/log4j.properties
>  -cp 
> 

[GitHub] kafka pull request #1978: HOTFIX: Cannot Stop Kafka with Shell Script (Solut...

2016-10-12 Thread Mabin-J
Github user Mabin-J closed the pull request at:

https://github.com/apache/kafka/pull/1978




[GitHub] kafka pull request #1976: HOTFIX: Cannot Stop Kafka with Shell Script (Solut...

2016-10-12 Thread Mabin-J
Github user Mabin-J closed the pull request at:

https://github.com/apache/kafka/pull/1976




[jira] [Commented] (KAFKA-4297) Cannot Stop Kafka with Shell Script

2016-10-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570785#comment-15570785
 ] 

ASF GitHub Bot commented on KAFKA-4297:
---

GitHub user Mabin-J opened a pull request:

https://github.com/apache/kafka/pull/2018

KAFKA-4297: fix possiblity that didn't stop with shell (solution 2)

If Kafka's home path is long, Kafka cannot be stopped with 'kafka-server-stop.sh'.

That command prints this message:
```
No kafka server to stop
```

This bug is caused by the command line being too long, like this:
```
/home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
-XX:+DisableExplicitGC -Djava.awt.headless=true 
-Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
-Dcom.sun.management.jmxremote 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false 
-Dkafka.logs.dir=/home/bbdev/Amasser/var/log/kafka 
-Dlog4j.configuration=file:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../config/log4j.properties
 -cp 
:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/aopalliance-repackaged-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/argparse4j-0.5.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-api-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-file-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-json-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-runtime-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/guava-18.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-api-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-locator-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-utils-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-annotations-2.6.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-core-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-databind-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-jaxrs-base-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-jaxrs-json-provider-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-module-jaxb-annotations-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javassist-3.18.2-GA.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.annotation-api-1.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.inject-1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.inject-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-client-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-common-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-container-servlet-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-container-servlet-core-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-guava-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-media-jaxb-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-server-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-http-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-io-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-security-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-server-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-util-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jopt-simple-4.9.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10.0.1-sources.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10.0.1-test-sources.jar:/home/bbdev/Amasser/etc/alternatives/kafka
```

but that is not the whole command line.
The full command line is this:
```
/home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
-XX:+UseG1GC -XX:MaxGCPauseMillis=20 

[GitHub] kafka pull request #2018: KAFKA-4297: fix possiblity that didn't stop with s...

2016-10-12 Thread Mabin-J
GitHub user Mabin-J opened a pull request:

https://github.com/apache/kafka/pull/2018

KAFKA-4297: fix possiblity that didn't stop with shell (solution 2)

If Kafka's home path is long, Kafka cannot be stopped with 'kafka-server-stop.sh'.

That command prints this message:
```
No kafka server to stop
```

This bug is caused by the command line being too long, like this:
```
/home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
-XX:+DisableExplicitGC -Djava.awt.headless=true 
-Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
-Dcom.sun.management.jmxremote 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false 
-Dkafka.logs.dir=/home/bbdev/Amasser/var/log/kafka 
-Dlog4j.configuration=file:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../config/log4j.properties
 -cp 
:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/aopalliance-repackaged-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/argparse4j-0.5.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-api-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-file-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-json-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-runtime-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/guava-18.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-api-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-locator-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-utils-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-annotations-2.6.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-core-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-databind-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-jaxrs-base-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-jaxrs-json-provider-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-module-jaxb-annotations-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javassist-3.18.2-GA.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.annotation-api-1.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.inject-1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.inject-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-client-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-common-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-container-servlet-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-container-servlet-core-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-guava-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-media-jaxb-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-server-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-http-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-io-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-security-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-server-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-util-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jopt-simple-4.9.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10.0.1-sources.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10.0.1-test-sources.jar:/home/bbdev/Amasser/etc/alternatives/kafka
```

but that is not the whole command line.
The full command line is this:
```
/home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
-XX:+DisableExplicitGC -Djava.awt.headless=true 
-Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
-Dcom.sun.management.jmxremote 

[jira] [Commented] (KAFKA-4297) Cannot Stop Kafka with Shell Script

2016-10-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570782#comment-15570782
 ] 

ASF GitHub Bot commented on KAFKA-4297:
---

GitHub user Mabin-J opened a pull request:

https://github.com/apache/kafka/pull/2017

KAFKA-4297: fix possiblity that didn't stop with shell (solution 1)

If Kafka's home path is long, Kafka cannot be stopped with 'kafka-server-stop.sh'.

That command prints this message:
```
No kafka server to stop
```

This bug is caused by the command line being too long, like this:
```
/home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
-XX:+DisableExplicitGC -Djava.awt.headless=true 
-Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
-Dcom.sun.management.jmxremote 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false 
-Dkafka.logs.dir=/home/bbdev/Amasser/var/log/kafka 
-Dlog4j.configuration=file:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../config/log4j.properties
 -cp 
:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/aopalliance-repackaged-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/argparse4j-0.5.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-api-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-file-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-json-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-runtime-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/guava-18.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-api-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-locator-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-utils-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-annotations-2.6.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-core-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-databind-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-jaxrs-base-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-jaxrs-json-provider-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-module-jaxb-annotations-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javassist-3.18.2-GA.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.annotation-api-1.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.inject-1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.inject-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-client-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-common-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-container-servlet-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-container-servlet-core-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-guava-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-media-jaxb-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-server-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-http-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-io-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-security-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-server-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-util-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jopt-simple-4.9.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10.0.1-sources.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10.0.1-test-sources.jar:/home/bbdev/Amasser/etc/alternatives/kafka
```

but that is not the whole command line.
The full command line is this:
```
/home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
-XX:+UseG1GC -XX:MaxGCPauseMillis=20 

[GitHub] kafka pull request #2017: KAFKA-4297: fix possiblity that didn't stop with s...

2016-10-12 Thread Mabin-J
GitHub user Mabin-J opened a pull request:

https://github.com/apache/kafka/pull/2017

KAFKA-4297: fix possiblity that didn't stop with shell (solution 1)

If Kafka's home path is long, Kafka cannot be stopped with 'kafka-server-stop.sh'.

That command prints this message:
```
No kafka server to stop
```

This bug is caused by the command line being too long, like this:
```
/home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
-XX:+DisableExplicitGC -Djava.awt.headless=true 
-Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
-Dcom.sun.management.jmxremote 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false 
-Dkafka.logs.dir=/home/bbdev/Amasser/var/log/kafka 
-Dlog4j.configuration=file:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../config/log4j.properties
 -cp 
:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/aopalliance-repackaged-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/argparse4j-0.5.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-api-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-file-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-json-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-runtime-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/guava-18.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-api-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-locator-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-utils-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-annotations-2.6.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-core-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-databind-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-jaxrs-base-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-jaxrs-json-provider-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-module-jaxb-annotations-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javassist-3.18.2-GA.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.annotation-api-1.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.inject-1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.inject-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-client-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-common-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-container-servlet-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-container-servlet-core-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-guava-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-media-jaxb-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-server-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-http-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-io-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-security-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-server-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-util-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jopt-simple-4.9.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10.0.1-sources.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10.0.1-test-sources.jar:/home/bbdev/Amasser/etc/alternatives/kafka
```

but that is not the whole command line.
The full command line is this:
```
/home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
-XX:+DisableExplicitGC -Djava.awt.headless=true 
-Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
-Dcom.sun.management.jmxremote 

[jira] [Updated] (KAFKA-4297) Cannot Stop Kafka with Shell Script

2016-10-12 Thread Mabin Jeong (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mabin Jeong updated KAFKA-4297:
---
Attachment: KAFKA-4297.solution2.patch

KAFKA-4297.solution2.patch: I modified 'kafka\.Kafka' to 'kafkaServer-gc' in 
'kafka-server-stop.sh'

> Cannot Stop Kafka with Shell Script
> ---
>
> Key: KAFKA-4297
> URL: https://issues.apache.org/jira/browse/KAFKA-4297
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
> Environment: CentOS 6.7
>Reporter: Mabin Jeong
>Priority: Critical
>  Labels: easyfix
> Fix For: 0.10.0.1
>
> Attachments: KAFKA-4297.solution1.patch, KAFKA-4297.solution2.patch
>
>
> If Kafka's home path is long, Kafka cannot be stopped with 'kafka-server-stop.sh'.
> That command prints this message:
> ```
> No kafka server to stop
> ```
> This bug is caused by the command line being too long, like this:
> ```
> /home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
> -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
> -XX:+DisableExplicitGC -Djava.awt.headless=true 
> -Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/home/bbdev/Amasser/var/log/kafka 
> -Dlog4j.configuration=file:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../config/log4j.properties
>  -cp 
> 

[jira] [Comment Edited] (KAFKA-4297) Cannot Stop Kafka with Shell Script

2016-10-12 Thread Mabin Jeong (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570754#comment-15570754
 ] 

Mabin Jeong edited comment on KAFKA-4297 at 10/13/16 4:03 AM:
--

KAFKA-4297.solution1.patch: I added the '-Dkafka.server' option to the start 
command and modified the stop script.


was (Author: mabin):
I added the '-Dkafka.server' option to the start command and modified the stop script.

> Cannot Stop Kafka with Shell Script
> ---
>
> Key: KAFKA-4297
> URL: https://issues.apache.org/jira/browse/KAFKA-4297
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
> Environment: CentOS 6.7
>Reporter: Mabin Jeong
>Priority: Critical
>  Labels: easyfix
> Fix For: 0.10.0.1
>
> Attachments: KAFKA-4297.solution1.patch, KAFKA-4297.solution2.patch
>
>
> If Kafka's home path is long, Kafka cannot be stopped with 'kafka-server-stop.sh'.
> That command prints this message:
> ```
> No kafka server to stop
> ```
> This bug is caused by the command line being too long, like this:
> ```
> /home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
> -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
> -XX:+DisableExplicitGC -Djava.awt.headless=true 
> -Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/home/bbdev/Amasser/var/log/kafka 
> -Dlog4j.configuration=file:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../config/log4j.properties
>  -cp 
> 

[jira] [Updated] (KAFKA-4297) Cannot Stop Kafka with Shell Script

2016-10-12 Thread Mabin Jeong (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mabin Jeong updated KAFKA-4297:
---
Attachment: KAFKA-4297.solution1.patch

I added the '-Dkafka.server' option to the start command and modified the stop script.

> Cannot Stop Kafka with Shell Script
> ---
>
> Key: KAFKA-4297
> URL: https://issues.apache.org/jira/browse/KAFKA-4297
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
> Environment: CentOS 6.7
>Reporter: Mabin Jeong
>Priority: Critical
>  Labels: easyfix
> Fix For: 0.10.0.1
>
> Attachments: KAFKA-4297.solution1.patch
>
>
> If Kafka's home path is long, Kafka cannot be stopped with 'kafka-server-stop.sh'.
> That command prints this message:
> ```
> No kafka server to stop
> ```
> This bug is caused by the command line being too long, like this:
> ```
> /home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
> -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
> -XX:+DisableExplicitGC -Djava.awt.headless=true 
> -Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/home/bbdev/Amasser/var/log/kafka 
> -Dlog4j.configuration=file:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../config/log4j.properties
>  -cp 
> 

[jira] [Updated] (KAFKA-4297) Cannot Stop Kafka with Shell Script

2016-10-12 Thread Mabin Jeong (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mabin Jeong updated KAFKA-4297:
---
Status: Open  (was: Patch Available)

> Cannot Stop Kafka with Shell Script
> ---
>
> Key: KAFKA-4297
> URL: https://issues.apache.org/jira/browse/KAFKA-4297
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
> Environment: CentOS 6.7
>Reporter: Mabin Jeong
>Priority: Critical
>  Labels: easyfix
> Fix For: 0.10.0.1
>
> Attachments: KAFKA-4297.solution1.patch
>
>
> If Kafka's home path is long, Kafka cannot be stopped with 'kafka-server-stop.sh'.
> That command prints this message:
> ```
> No kafka server to stop
> ```
> This bug is caused by the command line being too long, like this:
> ```
> /home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
> -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
> -XX:+DisableExplicitGC -Djava.awt.headless=true 
> -Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/home/bbdev/Amasser/var/log/kafka 
> -Dlog4j.configuration=file:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../config/log4j.properties
>  -cp 
> 

[jira] [Issue Comment Deleted] (KAFKA-4297) Cannot Stop Kafka with Shell Script

2016-10-12 Thread Mabin Jeong (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mabin Jeong updated KAFKA-4297:
---
Comment: was deleted

(was: I added the '-Dkafka.server' option to the start command and modified the 
stop script.)

> Cannot Stop Kafka with Shell Script
> ---
>
> Key: KAFKA-4297
> URL: https://issues.apache.org/jira/browse/KAFKA-4297
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
> Environment: CentOS 6.7
>Reporter: Mabin Jeong
>Priority: Critical
>  Labels: easyfix
> Fix For: 0.10.0.1
>
> Attachments: KAFKA-4297.solution1.patch
>
>
> If Kafka's home path is long, Kafka cannot be stopped with 'kafka-server-stop.sh'.
> That command prints this message:
> ```
> No kafka server to stop
> ```
> This bug is caused by the command line being too long, like this:
> ```
> /home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
> -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
> -XX:+DisableExplicitGC -Djava.awt.headless=true 
> -Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/home/bbdev/Amasser/var/log/kafka 
> -Dlog4j.configuration=file:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../config/log4j.properties
>  -cp 
> 

[jira] [Updated] (KAFKA-4297) Cannot Stop Kafka with Shell Script

2016-10-12 Thread Mabin Jeong (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mabin Jeong updated KAFKA-4297:
---
Status: Patch Available  (was: Open)

I added the '-Dkafka.server' option to the start command and modified the stop script.

> Cannot Stop Kafka with Shell Script
> ---
>
> Key: KAFKA-4297
> URL: https://issues.apache.org/jira/browse/KAFKA-4297
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
> Environment: CentOS 6.7
>Reporter: Mabin Jeong
>Priority: Critical
>  Labels: easyfix
> Fix For: 0.10.0.1
>
>
> If Kafka's home path is long, Kafka cannot be stopped with 'kafka-server-stop.sh'.
> That command prints this message:
> ```
> No kafka server to stop
> ```
> This bug is caused by the command line being too long, like this:
> ```
> /home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
> -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
> -XX:+DisableExplicitGC -Djava.awt.headless=true 
> -Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/home/bbdev/Amasser/var/log/kafka 
> -Dlog4j.configuration=file:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../config/log4j.properties
>  -cp 
> 

[jira] [Updated] (KAFKA-4297) Cannot Stop Kafka with Shell Script

2016-10-12 Thread Mabin Jeong (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mabin Jeong updated KAFKA-4297:
---
Description: 
If Kafka's home path is long, Kafka cannot be stopped with 'kafka-server-stop.sh'.

That command prints this message:
```
No kafka server to stop
```

This bug is caused by the command line being too long, like this:
```
/home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
-XX:+DisableExplicitGC -Djava.awt.headless=true 
-Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
-Dcom.sun.management.jmxremote 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false 
-Dkafka.logs.dir=/home/bbdev/Amasser/var/log/kafka 
-Dlog4j.configuration=file:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../config/log4j.properties
 -cp 
:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/aopalliance-repackaged-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/argparse4j-0.5.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-api-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-file-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-json-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-runtime-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/guava-18.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-api-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-locator-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-utils-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-annotations-2.6.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-core-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-databind-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-jaxrs-base-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-jaxrs-json-provider-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-module-jaxb-annotations-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javassist-3.18.2-GA.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.annotation-api-1.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.inject-1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.inject-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-client-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-common-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-container-servlet-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-container-servlet-core-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-guava-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-media-jaxb-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-server-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-http-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-io-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-security-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-server-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-util-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jopt-simple-4.9.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10.0.1-sources.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10.0.1-test-sources.jar:/home/bbdev/Amasser/etc/alternatives/kafka
```

but that is not the whole command line.
The full command line is this:
```
/home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
-XX:+DisableExplicitGC -Djava.awt.headless=true 
-Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
-Dcom.sun.management.jmxremote 

[jira] [Created] (KAFKA-4297) Cannot Stop Kafka with Shell Script

2016-10-12 Thread Mabin Jeong (JIRA)
Mabin Jeong created KAFKA-4297:
--

 Summary: Cannot Stop Kafka with Shell Script
 Key: KAFKA-4297
 URL: https://issues.apache.org/jira/browse/KAFKA-4297
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.0.1
 Environment: CentOS 6.7
Reporter: Mabin Jeong
Priority: Critical
 Fix For: 0.10.0.1


If Kafka's home path is long, Kafka cannot be stopped with 'kafka-server-stop.sh'.

That command prints this message:
```
No kafka server to stop
```

This bug is caused by the command line being too long, like this:
```
/home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
-XX:+DisableExplicitGC -Djava.awt.headless=true 
-Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
-Dcom.sun.management.jmxremote 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false 
-Dkafka.logs.dir=/home/bbdev/Amasser/var/log/kafka 
-Dlog4j.configuration=file:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../config/log4j.properties
 -cp 
:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/aopalliance-repackaged-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/argparse4j-0.5.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-api-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-file-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-json-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-runtime-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/guava-18.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-api-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-locator-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-utils-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-annotations-2.6.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-core-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-databind-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-jaxrs-base-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-jaxrs-json-provider-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-module-jaxb-annotations-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javassist-3.18.2-GA.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.annotation-api-1.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.inject-1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.inject-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-client-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-common-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-container-servlet-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-container-servlet-core-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-guava-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-media-jaxb-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-server-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-http-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-io-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-security-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-server-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-util-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jopt-simple-4.9.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10.0.1-sources.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10.0.1-test-sources.jar:/home/bbdev/Amasser/etc/alternatives/kafka
```

but that is not the whole command line.
The full command line is this:
```
/home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 

Build failed in Jenkins: kafka-trunk-jdk8 #976

2016-10-12 Thread Apache Jenkins Server
See 

Changes:

[jjkoshy] KAFKA-4025; make sure file.encoding system property is set to UTF-8 
when

--
[...truncated 14097 lines...]

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfRestoreConsumerConfig STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfRestoreConsumerConfig PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfProducerConfig STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfProducerConfig PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldBeSupportNonPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldBeSupportNonPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfConsumerConfig STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfConsumerConfig PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportMultipleBootstrapServers STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportMultipleBootstrapServers PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfKeySerdeConfigFails STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfKeySerdeConfigFails PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportNonPrefixedProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportNonPrefixedProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetRestoreConsumerConfigs 
STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetRestoreConsumerConfigs 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldBeSupportNonPrefixedRestoreConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldBeSupportNonPrefixedRestoreConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedRestoreConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedRestoreConsumerConfigs PASSED

org.apache.kafka.streams.KafkaStreamsTest > shouldNotGetAllTasksWhenNotRunning 
STARTED

org.apache.kafka.streams.KafkaStreamsTest > shouldNotGetAllTasksWhenNotRunning 
PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldNotGetTaskWithKeyAndPartitionerWhenNotRunning STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldNotGetTaskWithKeyAndPartitionerWhenNotRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldNotGetTaskWithKeyAndSerializerWhenNotRunning STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldNotGetTaskWithKeyAndSerializerWhenNotRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldNotGetAllTasksWithStoreWhenNotRunning STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldNotGetAllTasksWithStoreWhenNotRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartOnceClosed STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartOnceClosed PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStartAndClose STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStartAndClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning 
STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[0] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[0] PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] STARTED


[jira] [Commented] (KAFKA-4296) LogCleaner CleanerStats swap logic seems incorrect

2016-10-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570372#comment-15570372
 ] 

ASF GitHub Bot commented on KAFKA-4296:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/2016

KAFKA-4296: Fix LogCleaner statistics rolling



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-4296

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2016.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2016


commit a96ad989e9d1bcc3a1a061bbf42776f0c2ad9ec3
Author: Jason Gustafson 
Date:   2016-10-13T00:10:43Z

KAFKA-4296: Fix LogCleaner statistics rolling




> LogCleaner CleanerStats swap logic seems incorrect
> --
>
> Key: KAFKA-4296
> URL: https://issues.apache.org/jira/browse/KAFKA-4296
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.1.1
>
>
> In LogCleaner, we keep track of two instances of the {{CleanerStats}} object 
> in a tuple object. One instance is intended to keep track of the stats for the 
> last cycle while the other is for the current cycle. The idea is to swap them 
> after each cleaning cycle, but the current logic does not actually mutate the 
> existing tuple, which means that we always clear the same instance of 
> {{CleanerStats}} after each cleaning.
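
For readers following along, a minimal sketch of the pitfall in Java (the names
here are hypothetical, not the actual LogCleaner source):

{code}
// Hypothetical illustration only -- not the LogCleaner code.
class CleanerStats { /* counters for one cleaning cycle */ }

// An immutable pair: swapped() returns a NEW pair rather than mutating.
final class StatsPair {
    final CleanerStats last, current;
    StatsPair(CleanerStats last, CleanerStats current) {
        this.last = last;
        this.current = current;
    }
    StatsPair swapped() { return new StatsPair(current, last); }
}

class Cleaner {
    StatsPair statsPair = new StatsPair(new CleanerStats(), new CleanerStats());

    void endCycleBuggy() { statsPair.swapped(); }             // result discarded: nothing swaps
    void endCycleFixed() { statsPair = statsPair.swapped(); } // reference reassigned: real swap
}
{code}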



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2016: KAFKA-4296: Fix LogCleaner statistics rolling

2016-10-12 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/2016

KAFKA-4296: Fix LogCleaner statistics rolling



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-4296

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2016.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2016


commit a96ad989e9d1bcc3a1a061bbf42776f0c2ad9ec3
Author: Jason Gustafson 
Date:   2016-10-13T00:10:43Z

KAFKA-4296: Fix LogCleaner statistics rolling




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Reg: Kafka Security features

2016-10-12 Thread Harsha Chintalapani
1. Are the Kafka security features (Kerberos, ACLs) beta-quality code, or can
they be used in production?
 Because the Kafka documentation says they are of beta code quality.

We need to update the document. But the Authorizer feature was released as part
of 0.9.0, and we have a lot of deployments using this functionality. It is safe
to use in production.
Since the security features were first introduced in 0.9.0, we went with a beta
label. Since then, fixes and improvements have been added.
We have users running producers and consumers in secure clusters, so I would
say it is safe to use in a production cluster.

Thanks,
Harsha

On Wed, Oct 12, 2016 at 4:52 PM BigData dev  wrote:

> Hi All,
> Could you please provide below information.
>
> 1. Are the Kafka security features (Kerberos, ACLs) beta-quality code, or can
> they be used in production?
>  Because the Kafka documentation says they are of beta code quality.
>
> From Apache Kafka Documentation "In release 0.9.0.0, the Kafka community
> added a number of features that, used either separately or together,
> increases security in a Kafka cluster. These features are considered to be
> of beta quality."
>
> 2. Is it correct that only the new Kafka Consumer/Producer supports the
> security features?
> From Apache Kafka Documentation "The code is considered beta quality. Below
> is the configuration for the new consumer"
>
> So, can we use the Kafka security features on a production cluster?
> Could anyone help with this?
>
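
For readers who want to try it, a minimal sketch of enabling the built-in
authorizer and granting an ACL (the principal and topic names below are
placeholders; check the security section of the docs for your exact version):

# server.properties: enable the ACL authorizer shipped with the broker
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

# grant a principal read access on a topic
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:alice --operation Read --topic test-topic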


Reg: Kafka Security features

2016-10-12 Thread BigData dev
Hi All,
Could you please provide below information.

1. Are the Kafka security features (Kerberos, ACLs) beta-quality code, or can
they be used in production?
 Because the Kafka documentation says they are of beta code quality.

From Apache Kafka Documentation "In release 0.9.0.0, the Kafka community
added a number of features that, used either separately or together,
increases security in a Kafka cluster. These features are considered to be
of beta quality."

2. Is it correct that only the new Kafka Consumer/Producer supports the
security features?
From Apache Kafka Documentation "The code is considered beta quality. Below
is the configuration for the new consumer"

So, can we use the Kafka security features on a production cluster?
Could anyone help with this?


[jira] [Resolved] (KAFKA-4025) build fails on windows due to rat target output encoding

2016-10-12 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy resolved KAFKA-4025.
---
   Resolution: Fixed
 Assignee: radai rosenblatt
 Reviewer: Joel Koshy
Fix Version/s: 0.10.1.1

> build fails on windows due to rat target output encoding
> 
>
> Key: KAFKA-4025
> URL: https://issues.apache.org/jira/browse/KAFKA-4025
> Project: Kafka
>  Issue Type: Bug
> Environment: windows 7, either regular command prompt or git bash
>Reporter: radai rosenblatt
>Assignee: radai rosenblatt
>Priority: Minor
> Fix For: 0.10.1.1
>
> Attachments: windows build debug output.txt
>
>
> kafka runs a rat report during the build, using [the rat ant report 
> task|http://creadur.apache.org/rat/apache-rat-tasks/report.html], which has 
> no output encoding parameter.
> this means that the resulting xml report is produced using the system-default 
> encoding, which is OS-dependent:
> the rat ant task code instantiates the output writer like so 
> ([org.apache.rat.anttasks.Report.java|http://svn.apache.org/repos/asf/creadur/rat/tags/apache-rat-project-0.11/apache-rat-tasks/src/main/java/org/apache/rat/anttasks/Report.java]
>  line 196):
> {noformat}
> out = new PrintWriter(new FileWriter(reportFile));{noformat}
> which eventually leads to {{Charset.defaultCharset()}}, which relies on the 
> file.encoding system property. This causes an issue if the default encoding 
> isn't UTF-8 (which it isn't on Windows), as the code called by 
> printUnknownFiles() in rat.gradle defaults to UTF-8 when reading the report 
> xml, causing the build to fail with:
> {noformat}
> com.sun.org.apache.xerces.internal.impl.io.MalformedByteSequenceException: 
> Invalid byte 1 of 1-byte UTF-8 sequence.{noformat}
> (see complete output of {{gradlew --debug --stacktrace rat}} in attached file)
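
As an aside, a sketch of one way to avoid depending on the platform-default
charset (the actual fix pins the file.encoding system property instead; the
class name here is made up for illustration):

{code}
import java.io.*;
import java.nio.charset.StandardCharsets;

class ReportWriterSketch {
    // new PrintWriter(new FileWriter(reportFile)) inherits the platform
    // default encoding; wrapping an OutputStreamWriter pins it to UTF-8.
    static PrintWriter open(File reportFile) throws IOException {
        return new PrintWriter(new OutputStreamWriter(
                new FileOutputStream(reportFile), StandardCharsets.UTF_8));
    }
}
{code}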



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4025) build fails on windows due to rat target output encoding

2016-10-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570214#comment-15570214
 ] 

ASF GitHub Bot commented on KAFKA-4025:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1710


> build fails on windows due to rat target output encoding
> 
>
> Key: KAFKA-4025
> URL: https://issues.apache.org/jira/browse/KAFKA-4025
> Project: Kafka
>  Issue Type: Bug
> Environment: windows 7, either regular command prompt or git bash
>Reporter: radai rosenblatt
>Priority: Minor
> Fix For: 0.10.1.1
>
> Attachments: windows build debug output.txt
>
>
> kafka runs a rat report during the build, using [the rat ant report 
> task|http://creadur.apache.org/rat/apache-rat-tasks/report.html], which has 
> no output encoding parameter.
> this means that the resulting xml report is produced using the system-default 
> encoding, which is OS-dependent:
> the rat ant task code instantiates the output writer like so 
> ([org.apache.rat.anttasks.Report.java|http://svn.apache.org/repos/asf/creadur/rat/tags/apache-rat-project-0.11/apache-rat-tasks/src/main/java/org/apache/rat/anttasks/Report.java]
>  line 196):
> {noformat}
> out = new PrintWriter(new FileWriter(reportFile));{noformat}
> which eventually leads to {{Charset.defaultCharset()}}, which relies on the 
> file.encoding system property. This causes an issue if the default encoding 
> isn't UTF-8 (which it isn't on Windows), as the code called by 
> printUnknownFiles() in rat.gradle defaults to UTF-8 when reading the report 
> xml, causing the build to fail with:
> {noformat}
> com.sun.org.apache.xerces.internal.impl.io.MalformedByteSequenceException: 
> Invalid byte 1 of 1-byte UTF-8 sequence.{noformat}
> (see complete output of {{gradlew --debug --stacktrace rat}} in attached file)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1710: KAFKA-4025 - make sure file.encoding system proper...

2016-10-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1710


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #975

2016-10-12 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] HOTFIX: Increase number of retries in smoke test

--
[...truncated 7401 lines...]
kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage STARTED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex STARTED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap STARTED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate STARTED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset STARTED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage STARTED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes STARTED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptTimeIndex STARTED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptTimeIndex PASSED

kafka.log.LogSegmentTest > testReloadLargestTimestampAfterTruncation STARTED

kafka.log.LogSegmentTest > testReloadLargestTimestampAfterTruncation PASSED

kafka.log.LogSegmentTest > testMaxOffset STARTED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation STARTED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testFindOffsetByTimestamp STARTED

kafka.log.LogSegmentTest > testFindOffsetByTimestamp PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment STARTED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast STARTED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown STARTED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull STARTED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.LogConfigTest > shouldValidateThrottledReplicasConfig STARTED

kafka.log.LogConfigTest > shouldValidateThrottledReplicasConfig PASSED

kafka.log.LogConfigTest > testFromPropsEmpty STARTED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testKafkaConfigToProps STARTED

kafka.log.LogConfigTest > testKafkaConfigToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid STARTED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] STARTED

kafka.log.LogCleanerIntegrationTest > 

[jira] [Updated] (KAFKA-4293) ByteBufferMessageSet.deepIterator burns CPU catching EOFExceptions

2016-10-12 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy updated KAFKA-4293:
--
Assignee: radai rosenblatt

It turns out we should be able to handle all of our current codecs by 
re-implementing the {{available()}} method correctly. We would still want to 
continue to catch EOF as a safety net for any future codecs we may add.
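
The shape of that, as a hedged sketch (hypothetical length-prefixed records,
not Kafka's actual wire format or iterator code):

{code}
import java.io.*;

class IterationSketch {
    static void readAll(DataInputStream in) throws IOException {
        try {
            // Drive the loop off a correctly implemented available() ...
            while (in.available() > 0) {
                int size = in.readInt();
                byte[] payload = new byte[size];
                in.readFully(payload);
                // handle(payload) ...
            }
        } catch (EOFException e) {
            // ... and keep the EOF catch only as a safety net for codecs
            // whose streams cannot report availability correctly. Staying
            // off the exception path avoids paying for
            // Throwable.fillInStackTrace() on every message-set traversal.
        }
    }
}
{code}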

> ByteBufferMessageSet.deepIterator burns CPU catching EOFExceptions
> --
>
> Key: KAFKA-4293
> URL: https://issues.apache.org/jira/browse/KAFKA-4293
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.1
>Reporter: radai rosenblatt
>Assignee: radai rosenblatt
>
> around line 110:
> {noformat}
> try {
> while (true)
> innerMessageAndOffsets.add(readMessageFromStream(compressed))
> } catch {
> case eofe: EOFException =>
> // we don't do anything at all here, because the finally
> // will close the compressed input stream, and we simply
> // want to return the innerMessageAndOffsets
> {noformat}
> the only indication the code has that the end of the iteration was reached is 
> by catching EOFException (which will be thrown inside 
> readMessageFromStream()).
> Profiling runs performed at LinkedIn show 10% of the total broker CPU time 
> taken up by Throwable.fillInStackTrace() because of this behaviour.
> Unfortunately InputStream.available() cannot be relied upon (a concrete example: 
> GZIPInputStream will not correctly return 0) so the fix would probably be a 
> wire format change to also encode the number of messages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2066) Replace FetchRequest / FetchResponse with their org.apache.kafka.common.requests equivalents

2016-10-12 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570135#comment-15570135
 ] 

Jason Gustafson commented on KAFKA-2066:


I am available to pick this up since I'm beginning work which may depend on it, 
so please let us know. Thanks!

> Replace FetchRequest / FetchResponse with their 
> org.apache.kafka.common.requests equivalents
> 
>
> Key: KAFKA-2066
> URL: https://issues.apache.org/jira/browse/KAFKA-2066
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: David Jacot
>
> Replace FetchRequest / FetchResponse with their 
> org.apache.kafka.common.requests equivalents.
> Note that they can't be completely removed until we deprecate the 
> SimpleConsumer API (and it will require very careful patchwork for the places 
> where core modules actually use the SimpleConsumer API).
> This also requires a solution on how to stream from memory-mapped files 
> (similar to what existing code does with FileMessageSet. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-0.10.1-jdk7 #69

2016-10-12 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] HOTFIX: Increase number of retries in smoke test

--
[...truncated 7325 lines...]
kafka.api.SslProducerSendTest > testSendOffset STARTED

kafka.api.SslProducerSendTest > testSendOffset PASSED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithCreateTime STARTED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithCreateTime PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread STARTED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread STARTED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread PASSED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled 
STARTED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.UserClientIdQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.UserClientIdQuotaTest > testThrottledProducerConsumer PASSED

kafka.api.UserClientIdQuotaTest > testQuotaOverrideDelete STARTED

kafka.api.UserClientIdQuotaTest > testQuotaOverrideDelete PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaAssign STARTED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaAssign PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign 
STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign 
PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithoutDescribeAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithoutDescribeAcl PASSED

kafka.api.FetchRequestTest > testShuffleWithSingleTopic STARTED

kafka.api.FetchRequestTest > testShuffleWithSingleTopic PASSED

kafka.api.FetchRequestTest > testShuffle STARTED

kafka.api.FetchRequestTest > testShuffle PASSED

kafka.api.PlaintextProducerSendTest > testSerializerConstructors STARTED

kafka.api.PlaintextProducerSendTest > testSerializerConstructors PASSED

kafka.api.PlaintextProducerSendTest > 
testSendCompressedMessageWithLogAppendTime STARTED

kafka.api.PlaintextProducerSendTest > 
testSendCompressedMessageWithLogAppendTime PASSED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic STARTED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic PASSED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime STARTED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testWrongSerializer STARTED

kafka.api.PlaintextProducerSendTest > testWrongSerializer PASSED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithLogAppendTime STARTED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithLogAppendTime PASSED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithCreateTime STARTED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testClose STARTED

kafka.api.PlaintextProducerSendTest > testClose PASSED

kafka.api.PlaintextProducerSendTest > testFlush STARTED

kafka.api.PlaintextProducerSendTest > testFlush PASSED

kafka.api.PlaintextProducerSendTest > testSendToPartition STARTED

kafka.api.PlaintextProducerSendTest > testSendToPartition PASSED

kafka.api.PlaintextProducerSendTest > testSendOffset STARTED

kafka.api.PlaintextProducerSendTest > testSendOffset PASSED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime 
STARTED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
STARTED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
PASSED

kafka.api.PlaintextProducerSendTest > 

[jira] [Created] (KAFKA-4296) LogCleaner CleanerStats swap logic seems incorrect

2016-10-12 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-4296:
--

 Summary: LogCleaner CleanerStats swap logic seems incorrect
 Key: KAFKA-4296
 URL: https://issues.apache.org/jira/browse/KAFKA-4296
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson
 Fix For: 0.10.1.1


In LogCleaner, we keep track of two instances of the {{CleanerStats}} object in 
a tuple object. One instance is intended to keep track of the stats for the last 
cycle while the other is for the current cycle. The idea is to swap them after 
each cleaning cycle, but the current logic does not actually mutate the 
existing tuple, which means that we always clear the same instance of 
{{CleanerStats}} after each cleaning.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2066) Replace FetchRequest / FetchResponse with their org.apache.kafka.common.requests equivalents

2016-10-12 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570087#comment-15570087
 ] 

Ismael Juma commented on KAFKA-2066:


[~dajac], it would be good to make progress on this. Do you think you will have 
time to pick this up again? If not, it may be worth unassigning yourself so 
that someone else can pick it up.

> Replace FetchRequest / FetchResponse with their 
> org.apache.kafka.common.requests equivalents
> 
>
> Key: KAFKA-2066
> URL: https://issues.apache.org/jira/browse/KAFKA-2066
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: David Jacot
>
> Replace FetchRequest / FetchResponse with their 
> org.apache.kafka.common.requests equivalents.
> Note that they can't be completely removed until we deprecate the 
> SimpleConsumer API (and it will require very careful patchwork for the places 
> where core modules actually use the SimpleConsumer API).
> This also requires a solution on how to stream from memory-mapped files 
> (similar to what existing code does with FileMessageSet. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4185) Abstract out password verifier in SaslServer as an injectable dependency

2016-10-12 Thread Piyush Vijay (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570070#comment-15570070
 ] 

Piyush Vijay commented on KAFKA-4185:
-

comments [~ecomar], [~ijuma]?

> Abstract out password verifier in SaslServer as an injectable dependency
> 
>
> Key: KAFKA-4185
> URL: https://issues.apache.org/jira/browse/KAFKA-4185
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.0.1
>Reporter: Piyush Vijay
> Fix For: 0.10.0.2
>
>
> Kafka comes with a default SASL/PLAIN implementation which assumes that 
> username and password are present in a JAAS
> config file. People often want to use some other way to provide username and 
> password to SaslServer. Their best bet,
> currently, is to have their own implementation of SaslServer (which would be, 
> in most cases, a copied version of PlainSaslServer
> minus the logic where password verification happens). This is not ideal.
> We believe that there exists a better way to structure the current 
> PlainSaslServer implementation which makes it very
> easy for people to plug-in their custom password verifier without having to 
> rewrite SaslServer or copy any code.
> The idea is to have an injectable dependency interface PasswordVerifier which 
> can be re-implemented based on the
> requirements. There would be no need to re-implement or extend 
> PlainSaslServer class.
> Note that this is commonly asked feature and there have been some attempts in 
> the past to solve this problem:
> https://github.com/apache/kafka/pull/1350
> https://github.com/apache/kafka/pull/1770
> https://issues.apache.org/jira/browse/KAFKA-2629
> https://issues.apache.org/jira/browse/KAFKA-3679
> We believe that this proposed solution does not have the demerits for which 
> the previous proposals were rejected.
> I would be happy to discuss more.
> Please find the link to the PR in the comments.
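
The seam being proposed, as a rough sketch (the interface name and signature
here are hypothetical, not the PR's actual API):

{code}
// Hypothetical injection point: PlainSaslServer would delegate password
// checks to whatever verifier is configured instead of reading JAAS.
interface PasswordVerifier {
    boolean verify(String username, char[] password);
}

// Example implementation backed by an external store (LDAP, DB, ...).
class ExternalStorePasswordVerifier implements PasswordVerifier {
    @Override
    public boolean verify(String username, char[] password) {
        // look the user up in the external store here
        return false; // placeholder
    }
}
{code}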



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2015: Small refactoring to improve readability and reduc...

2016-10-12 Thread picadoh
GitHub user picadoh opened a pull request:

https://github.com/apache/kafka/pull/2015

Small refactoring to improve readability and reduce method complexity

Small method extraction to reduce complexity and improve readability of the 
assign method. 

Private methods were created for:
- Get the assignment suppliers based on the subscriptions (decoding of 
subscriptions info is also done at this point, where it is used)
- Calculate the number of partitions for internal topic
- Add tasks to state change log topic subscribers

These methods are being called inside the assign method.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/picadoh/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2015.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2015


commit 001c890d02bf04b6f498e87d6ab347565db07dc0
Author: Hugo Picado 
Date:   2016-10-12T22:19:07Z

MINOR: small method extraction to reduce complexity and improve readability 
of assign method: get the assignment suppliers, calculate the number of 
partitions for internal topic and add tasks to state changelog topic.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : kafka-trunk-jdk7 #1626

2016-10-12 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-3559) Task creation time taking too long in rebalance callback

2016-10-12 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569915#comment-15569915
 ] 

Guozhang Wang commented on KAFKA-3559:
--

[~enothereska] If you feel this suggestion deserves another ticket, feel free to 
create a separate one. Since with KIP-4 there is little difference between 
initializing in the rebalance callback vs. after the rebalance completes, we 
can close this ticket if you think so.

> Task creation time taking too long in rebalance callback
> 
>
> Key: KAFKA-3559
> URL: https://issues.apache.org/jira/browse/KAFKA-3559
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Eno Thereska
>  Labels: architecture
> Fix For: 0.10.2.0
>
>
> Currently in Kafka Streams, we create stream tasks upon getting newly 
> assigned partitions in the rebalance callback function {code} onPartitionsAssigned 
> {code}, which involves initialization of the processor state stores as well 
> (including opening the RocksDB instances, restoring the stores from the 
> changelog, etc., which takes time).
> With a large number of state stores, the initialization time itself could 
> take tens of seconds, which is usually larger than the consumer session 
> timeout. As a result, when the callback is completed, the consumer is already 
> treated as failed by the coordinator and rebalances again.
> We need to consider whether we can optimize the initialization process, or move 
> it out of the callback function, and while initializing the stores one-by-one, 
> use the poll call to send heartbeats to avoid being kicked out by the coordinator.
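
The gist of the last idea, as a consumer-level sketch (not the Streams
implementation; the class and method names are illustrative):

{code}
import java.util.*;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

class LazyInitListener implements ConsumerRebalanceListener {
    private final Set<TopicPartition> pendingInit = new HashSet<>();

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        pendingInit.addAll(partitions); // cheap bookkeeping only
    }

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        pendingInit.removeAll(partitions);
    }

    // Called from the poll loop: initialize at most one store per pass so
    // that poll() keeps running often enough to heartbeat the coordinator.
    void initializeOnePendingStore() {
        Iterator<TopicPartition> it = pendingInit.iterator();
        if (it.hasNext()) {
            TopicPartition tp = it.next();
            it.remove();
            // open RocksDB / restore from the changelog for tp here (the slow part)
        }
    }
}
{code}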



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2292) failed fetch request logging doesn't indicate source of request

2016-10-12 Thread sunilkalva (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569819#comment-15569819
 ] 

sunilkalva commented on KAFKA-2292:
---

[~malaskat] Any updates?

> failed fetch request logging doesn't indicate source of request
> ---
>
> Key: KAFKA-2292
> URL: https://issues.apache.org/jira/browse/KAFKA-2292
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Rosenberg
>Assignee: Ted Malaska
>
> I am trying to figure out the source of a consumer client that is issuing out 
> of range offset requests for a topic, on one for our brokers (we are running 
> 0.8.2.1).
> I see log lines like this:
> {code}
> 2015-06-20 06:17:24,718 ERROR [kafka-request-handler-4] server.ReplicaManager 
> - [Replica Manager on Broker 123]: Error when processing fetch request for 
> partition [mytopic,0] offset 82754176 from consumer with correlation id 596. 
> Possible cause: Request for offset 82754176 but we only have log segments in 
> the range 82814171 to 83259786.
> {code}
> Successive log lines are similar, but with the correlation id incremented, 
> etc.
> Unfortunately, the correlation id is not particularly useful here in the 
> logging, because I have nothing to trace it back to to understand which 
> connected consumer is issuing this request.  It would be useful if the 
> logging included an ip address, or a clientId.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4271) The console consumer fails on Windows when the new consumer is used

2016-10-12 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-4271:
---
Description: 
When I try to consume messages using the new consumer (Quickstart Step 5) I get 
an exception on the broker side. The old consumer works fine.

{code}
java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(Unknown Source)
at kafka.log.AbstractIndex.(AbstractIndex.scala:61)
at kafka.log.OffsetIndex.(OffsetIndex.scala:51)
at kafka.log.LogSegment.(LogSegment.scala:67)
at kafka.log.Log.loadSegments(Log.scala:255)
at kafka.log.Log.(Log.scala:108)
at kafka.log.LogManager.createLog(LogManager.scala:362)
at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:94)
at 
kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:174)
at 
kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:174)
at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:174)
at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:168)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:242)
at kafka.cluster.Partition.makeLeader(Partition.scala:168)
at 
kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:740)
at 
kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:739)
at 
scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at 
scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at 
scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:739)
at 
kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:685)
at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:148)
at kafka.server.KafkaApis.handle(KafkaApis.scala:82)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
... 29 more
{code}

This issue seems to break the broker and I have to clear out the logs so I can 
bring the broker back up again.

Update: This issue seems to occur on 32-bit Windows only. I tried this on a 
Windows 32-bit VM. On a 64-bit machine I did not get the error.

  was:
When I try to consume messages using the new consumer (Quickstart Step 5) I get 
an exception on the broker side. The old consumer works fine.

{code}
java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(Unknown Source)
at kafka.log.AbstractIndex.(AbstractIndex.scala:61)
at kafka.log.OffsetIndex.(OffsetIndex.scala:51)
at kafka.log.LogSegment.(LogSegment.scala:67)
at kafka.log.Log.loadSegments(Log.scala:255)
at kafka.log.Log.(Log.scala:108)
at kafka.log.LogManager.createLog(LogManager.scala:362)
at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:94)
at 
kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:174)
at 
kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:174)
at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:174)
at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:168)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:242)
at kafka.cluster.Partition.makeLeader(Partition.scala:168)
at 
kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:740)
at 
kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:739)
at 
scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at 
scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at 
scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:739)
at 
kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:685)
at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:148)
at 

[GitHub] kafka pull request #2014: HOTFIX: Increase number of retries in smoke test

2016-10-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2014


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [kafka-clients] [VOTE] 0.10.1.0 RC2

2016-10-12 Thread Dana Powers
+1; all kafka-python integration tests pass.

-Dana


On Wed, Oct 12, 2016 at 10:41 AM, Jason Gustafson  wrote:
> Hello Kafka users, developers and client-developers,
>
> One more RC for 0.10.1.0. I think we're getting close!
>
> Release plan:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.1.
>
> Release notes for the 0.10.1.0 release:
> http://home.apache.org/~jgus/kafka-0.10.1.0-rc2/RELEASE_NOTES.html
>
> *** Please download, test and vote by Saturday, Oct 15, 11am PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~jgus/kafka-0.10.1.0-rc2/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~jgus/kafka-0.10.1.0-rc2/javadoc/
>
> * Tag to be voted upon (off 0.10.1 branch) is the 0.10.1.0-rc2 tag:
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=8702d66434b86092a3738472f9186d6845ab0720
>
> * Documentation:
> http://kafka.apache.org/0101/documentation.html
>
> * Protocol:
> http://kafka.apache.org/0101/protocol.html
>
> * Tests:
> Unit tests: https://builds.apache.org/job/kafka-0.10.1-jdk7/68/
> System tests:
> http://confluent-kafka-0-10-1-system-test-results.s3-us-west-2.amazonaws.com/2016-10-11--001.1476197348--apache--0.10.1--d981dd2/
>
> Thanks,
>
> Jason
>
> --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To post to this group, send email to kafka-clie...@googlegroups.com.
> Visit this group at https://groups.google.com/group/kafka-clients.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/CAJDuW%3DDk7Mi6ZsiniHcdbCCBdBhasjSeb7_N3EW%3D97OrfvFyew%40mail.gmail.com.
> For more options, visit https://groups.google.com/d/optout.


Re: Store flushing on commit.interval.ms from KIP-63 introduces aggregation latency

2016-10-12 Thread Greg Fodor
Ah, thanks so much for the insights -- we should be in a position to profile
the new library against real data in the next week or so, so I'll let you
know how it goes.

On Oct 11, 2016 6:26 PM, "Guozhang Wang"  wrote:

> Hello Greg,
>
> I can share some context of KIP-63 here:
>
> 1. Like Eno mentioned, we believe RocksDB's own mem-table is already
> optimizing a large portion of IO access for its write performance, and
> adding an extra caching layer on top of that was mainly for saving ser-de
> costs (note that you still need to ser / deser key-value objects into bytes
> when interacting with RocksDB). Although it may further help IO, it is not
> the main motivation.
>
> 2. As part of KIP-63 Bill helped investigating the pros / cons of such
> object caching (https://issues.apache.org/jira/browse/KAFKA-3973), and our
> conclusion based on that is, although it saves serde costs, it also makes
> memory management very hard in the long run, with caching based on
> num.records, not num.bytes. And when you have an OOM in one of the
> instances, it may well result in cascading failures from rebalances and
> task migration. Ideally, we want to have some restrict memory bound for
> better capacity planning and integration with cluster resource managers
> (see
> https://cwiki.apache.org/confluence/display/KAFKA/Discussion%3A+Memory+
> Management+in+Kafka+Streams
> for more details).
>
> 3. So as part of KIP-63, we removed object-oriented caching and replaced
> with bytes caches, and in addition add the RocksDBConfigSetter to allow
> users to configure their RocksDB to tune for their write /
> space amplifications for IO.
>
>
> With that, I think shutting off caching for your case should not degrade
> the performance too much, assuming RocksDB itself can already do a good job
> in terms of write access. It may add extra serde costs though, depending on
> your use case (originally it is like 1000 records per cache, so roughly
> speaking you are saving that many serde calls per store). But if you do
> observe significant performance degradation I'd personally love to learn
> more and help on that end.
>
>
> Guozhang
>
>
>
>
>
> On Tue, Oct 11, 2016 at 10:10 AM, Greg Fodor  wrote:
>
> > Thanks Eno -- my understanding is that cache is already enabled to be
> > 100MB per rocksdb so it should be on already, but I'll check. I was
> > wondering if you could shed some light on the changes between 0.10.0
> > and 0.10.1 -- in 0.10.0 there was an intermediate cache within
> > RocksDbStore -- presumably this was there to improve performance,
> > despite there still being a lower level cache managed by rocksdb. Can
> > you shed some light why this cache was needed in 0.10.0? If it sounds
> > like our use case won't warrant the same need then we might be OK.
> >
> > Overall however, this is really problematic for us, since we will have
> > to turn off caching for effectively all of our jobs. The way our
> > system works is that we have a number of jobs running kafka streams
> > that are configured via database tables we change via our web stack.
> > For example, when we want to tell our jobs to begin processing data
> > for a user, we insert a record for that user into the database which
> > gets passed via kafka connect to a kafka topic. The kafka streams job
> > is consuming this topic, does some basic group by operations and
> > repartitions on it, and joins it against other data streams so that it
> > knows what users should be getting processed.
> >
> > So fundamentally we have two types of aggregations: the typical case
> > that was I think the target for the optimizations in KIP-63, where
> > latency is less critical since we are counting and emitting counts for
> > analysis, etc. And the other type of aggregation is where we are doing
> > simple transformations on data coming from the database in a way to
> > configure the live behavior of the job. Latency here is very
> > sensitive: users expect the job to react and start sending data for a
> > user immediately after the database records are changed.
> >
> > So as you can see, since this is the paradigm we use to operate jobs,
> > we're in a bad position if we ever want to take advantage of the work
> > in KIP-63. All of our jobs are set up to work in this way, so we will
> > either have to maintain our fork or will have to shut off caching for
> > all of our jobs, neither of which sounds like a very good path.
> >
> > On Tue, Oct 11, 2016 at 4:16 AM, Eno Thereska 
> > wrote:
> > > Hi Greg,
> > >
> > > An alternative would be to set up RocksDB's cache, while keeping the
> > streams cache to 0. That might give you what you need, especially if you
> > can work with RocksDb and don't need to change the store.
> > >
> > > For example, here is how to set the Block Cache size to 100MB and the
> > Write Buffer size to 32MB
> > >
> > > https://github.com/facebook/rocksdb/wiki/Block-Cache <
> > 
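
Putting those numbers together, a minimal sketch assuming the 0.10.1
RocksDBConfigSetter hook and RocksDB's Java Options API:

{code}
import java.util.Map;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

public class TunedRocksDBConfig implements RocksDBConfigSetter {
    @Override
    public void setConfig(String storeName, Options options, Map<String, Object> configs) {
        BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockCacheSize(100 * 1024 * 1024L);   // 100 MB block cache
        options.setTableFormatConfig(tableConfig);
        options.setWriteBufferSize(32 * 1024 * 1024L);       // 32 MB write buffer
    }
}

// In the application's StreamsConfig: streams record cache off, RocksDB cache on.
// props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
// props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, TunedRocksDBConfig.class);
{code}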

Re: [DISCUSS] KIP-80: Kafka REST Server

2016-10-12 Thread Nacho Solis
What are the criteria for keeping things in and out of Kafka: what code goes
in or out, and what is part of the architecture or not?

The discussion of what goes into a project and what stays out is an always
evolving question. Different projects treat this in different ways.

Let me paint 2 extremes.  On one side, you have a single monolithic project
that brings everything in one tent.  On the other side you have the many
modules approach.  From what I've learned, Kafka falls in the middle.
Because of this, the question is bound to come up with respect to the
criteria used to bring something into the fold.

I'll be the first to point out that the distinction between modules,
architecture, software, repositories, governance and community are blurry.
Not to mention that many things are how they are for historical reasons.

I, personally, can't understand why we would not have REST as part of the
main Kafka project, given that a lot of people use it and we include many
things with the current distribution.  Which things, you may ask?  Well,
if we took the modular approach, Kafka is a mixture of components; here are
the first several that come to mind:
1. The Kafka protocol
2. The Kafka java libraries
3. The Kafka broker
4. The Kafka stream framework
5. Kafka Connect
6. MirrorMaker

All of these could be separate products. You should be able to evolve each
one independently.  Even if they have dependencies on each other, you could
potentially replace one part.

The choice of keeping them all in a single repository, with a single
distribution, under the same governance and community, brings a number of
trade offs.  It's easy to keep things coherent for example.  There is less
of a need to rely on inherent versioning and compatibility (which we end up
providing anyway because of the way people usually deploy kafka). We all
focus our efforts on a single code base.

The downside is that it's harder to remove modules that are old or unused.
Modules that are only used by a small subset of the community will have an
impact on the rest of the community.  It mixes incentives of what people
want to work on and what holds them back.  We also need to decide what
belongs in the blessed bundle and what doesn't.

So, my question boils down to: what criteria are used for bringing stuff in?

If we have Streams and MirrorMaker and Connect in there, why not have REST?
Specially if there is more than one person/group willing to work on it?
Alternatively, if REST is not included because it's not used by all, then
why not remove Streams, Connect and MirrorMaker since they're definitely
not used by all? I realize I say this even though at LinkedIn we have a
REST setup of our own, just speaking from a community perspective.

Nacho


(I'm relatively new and I haven't read all of the mail archive, so I'm sure
this has been brought up before, but I decided to chime in anyway)

On Wed, Oct 12, 2016 at 8:03 AM, Jay Kreps  wrote:

> I think the questions around governance make sense, I think we should
> really clarify that to make the process more clear so it can be fully
> inclusive.
>
> The idea that we should not collaborate on what is there now, though,
> because in the future we might disagree about direction does not really
> make sense to me. If in the future we disagree, that is the beauty of open
> source, you can always fork off a copy of the code and start an independent
> project either in Apache or elsewhere. Pre-emptively re-creating another
> REST layer when it seems like we all quite agree on what needs to be done
> and we have an existing code base for HTTP/kafka access that is heavily
> used in production seems quite silly.
>
> Let me give some background on how I at least think about these things.
> I've participated in open source projects out of LinkedIn via github as
> well as via the ASF. I don't think there is a "right" answer to how to do
> these but rather some tradeoffs. We thought about this quite a lot in the
> context of Kafka based on the experience with the Hadoop ecosystem as well
> as from other open source communities.
>
> There is a rich ecosystem around Kafka. Many of the projects are quite
> small--single clients or tools that do a single thing well--and almost none
> of them are top level apache projects. I don't think trying to force each
> of these to turn into independent Apache projects is necessarily the best
> thing for the ecosystem.
>
> My observation of how this can go wrong is really what I think has happened
> to the Hadoop ecosystem. There you see quite a zoo of projects which all
> drift apart and don't quite work together well. Coordinating even simple
> changes and standardization across these is exceptionally difficult. The
> result is a bit of a mess for users--the pieces just don't really come
> together very well. This makes sense for independent infrastructure systems
> (Kudu vs HDFS) but I'm not at all convinced that doing this for every
> little tool or helper library 

[VOTE] 0.10.1.0 RC2

2016-10-12 Thread Jason Gustafson
Hello Kafka users, developers and client-developers,

One more RC for 0.10.1.0. I think we're getting close!

Release plan:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.1.

Release notes for the 0.10.1.0 release:
http://home.apache.org/~jgus/kafka-0.10.1.0-rc2/RELEASE_NOTES.html

*** Please download, test and vote by Saturday, Oct 15, 11am PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~jgus/kafka-0.10.1.0-rc2/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/

* Javadoc:
http://home.apache.org/~jgus/kafka-0.10.1.0-rc2/javadoc/

* Tag to be voted upon (off 0.10.1 branch) is the 0.10.1.0-rc2 tag:
https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=8702d66434b86092a3738472f9186d6845ab0720

* Documentation:
http://kafka.apache.org/0101/documentation.html

* Protocol:
http://kafka.apache.org/0101/protocol.html

* Tests:
Unit tests: https://builds.apache.org/job/kafka-0.10.1-jdk7/68/
System tests:
http://confluent-kafka-0-10-1-system-test-results.s3-us-west-2.amazonaws.com/2016-10-11--001.1476197348--apache--0.10.1--d981dd2/

Thanks,

Jason


delete topic causing spikes in fetch/metadata requests

2016-10-12 Thread sunil kalva
We are using kafka 0.8.2.2 (client and server), when ever we delete a topic
we see lot of errors in broker logs like below, and there is also a spike
in fetch/metadata requests. Can i correlate these errors with topic delete
or its a known issue. Since there is spike in metadata requests and fetch
requests broker throughput has comedown.

--
[2016-10-12 16:04:55,054] ERROR [Replica Manager on Broker 4]: Error when
processing fetch request for partition [xyz,0] offset 161946645 from
consumer with correlation id 0. Possible cause: Request for offset
161946645 but we only have log segments in the range 185487049 to
202816546. (kafka.server.ReplicaManager)
[...same ERROR repeated 13 more times with successive timestamps...]

Re: [VOTE] 0.10.1.0 RC1

2016-10-12 Thread Jason Gustafson
FYI: I'm cutting another RC this morning due to
https://issues.apache.org/jira/browse/KAFKA-4290. Hopefully this is the
last!

-Jason

On Mon, Oct 10, 2016 at 8:20 PM, Jason Gustafson  wrote:

> The documentation is mostly fixed now: http://kafka.apache.org/0101/documentation.html.
> Thanks to Derrick Or for all the help. Let me
> know if anyone notices any additional problems.
>
> -Jason
>
> On Mon, Oct 10, 2016 at 1:10 PM, Jason Gustafson 
> wrote:
>
>> Hello Kafka users, developers and client-developers,
>>
>> This is the second candidate for release of Apache Kafka 0.10.1.0. This
>> is a minor release that includes great new features such as throttled
>> replication, secure quotas, time-based log searching, and queryable state
>> for Kafka Streams. A full list of the content can be found here:
>> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.1.
>>
>> One quick note on the docs. Because of all the recent improvements, the
>> documentation is still a bit out of sync with what's visible on the Kafka
>> homepage. This should be fixed soon (definitely before the release is
>> finalized).
>>
>> Release notes for the 0.10.1.0 release:
>> http://home.apache.org/~jgus/kafka-0.10.1.0-rc1/RELEASE_NOTES.html
>> 
>>
>> *** Please download, test and vote by Thursday, Oct 13, 1pm PT
>>
>> Kafka's KEYS file containing PGP keys we use to sign the release:
>> http://kafka.apache.org/KEYS
>>
>> * Release artifacts to be voted upon (source and binary):
>> http://home.apache.org/~jgus/kafka-0.10.1.0-rc1/
>> 
>>
>> * Maven artifacts to be voted upon:
>> https://repository.apache.org/content/groups/staging/
>>
>> * Javadoc:
>> http://home.apache.org/~jgus/kafka-0.10.1.0-rc1/javadoc/
>> 
>>
>> * Tag to be voted upon (off 0.10.1 branch) is the 0.10.1.0-rc1 tag:
>> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
>> 6eda15a97ffe17d636c390c0e0b28c8349993941
>>
>> * Documentation:
>> http://kafka.apache.org/0101/documentation.html
>>
>> * Protocol:
>> http://kafka.apache.org/0101/protocol.html
>>
>> * Tests:
>> Unit tests: https://builds.apache.org/job/kafka-0.10.1-jdk7/59/
>> System tests: http://testing.confluent.io/confluent-kafka-0-10-1-system-test-results/?prefix=2016-10-10--001.1476110532--apache--0.10.1--e696f17/
>>
>> Thanks,
>>
>> Jason
>>
>
>


Re: [DISCUSS] KIP-80: Kafka REST Server

2016-10-12 Thread Jay Kreps
I think the questions around governance make sense; we should really clarify
that to make the process clearer so it can be fully inclusive.

The idea that we should not collaborate on what is there now, though,
because in the future we might disagree about direction does not really
make sense to me. If in the future we disagree, that is the beauty of open
source: you can always fork off a copy of the code and start an independent
project, either in Apache or elsewhere. Pre-emptively re-creating another
REST layer, when it seems like we all quite agree on what needs to be done
and we have an existing code base for HTTP/Kafka access that is heavily
used in production, seems quite silly.

Let me give some background on how I at least think about these things.
I've participated in open source projects out of LinkedIn via github as
well as via the ASF. I don't think there is a "right" answer to how to do
these but rather some tradeoffs. We thought about this quite a lot in the
context of Kafka based on the experience with the Hadoop ecosystem as well
as from other open source communities.

There is a rich ecosystem around Kafka. Many of the projects are quite
small--single clients or tools that do a single thing well--and almost none
of them are top level apache projects. I don't think trying to force each
of these to turn into independent Apache projects is necessarily the best
thing for the ecosystem.

My observation of how this can go wrong is really what I think has happened
to the Hadoop ecosystem. There you see quite a zoo of projects which all
drift apart and don't quite work together well. Coordinating even simple
changes and standardization across these is exceptionally difficult. The
result is a bit of a mess for users--the pieces just don't really come
together very well. This makes sense for independent infrastructure systems
(Kudu vs HDFS) but I'm not at all convinced that doing this for every
little tool or helper library has led to a desirable state. I think the
mode of operating where the Hadoop vendors spawn off a few new Apache
projects for each new product initiative, especially since often that
project is only valued by that vendor (and the other vendor has a different
competing Apache project) doesn't necessarily do a better job at producing
high quality communities or high quality software.

These tools/connects/clients/proxies and other integration pieces can take
many forms, but my take of what makes one of these things good is that it
remains simple, does its one thing well, and cleaves as closely as possible
to the conventions for Kafka itself--i.e. doesn't invent new ways of
monitoring, configuring, etc. For the tools we've contributed we've tried
really hard to make them consistent with Kafka as well as with each other
in how testing, configuration, monitoring, etc works.

I think what Apache does superbly well is create a community for managing a
large infrastructure layer like Kafka in a vendor independent way. What I
think is less successful is attempting to form full and independent apache
communities around very simple single purpose tools, especially if you hope
for these to come together into a cohesive toolset across multiple such
tools. Much of what Apache does--create a collective decision making
process for resolving disagreement, help to trademark and protect the marks
of the project, etc just isn't that relevant for simple single-purpose
tools.

So my take is there are a couple of options:

   1. We can try to put all the small tools into the Apache Project. I
   think this is not the right approach as there are simply too many of them,
   many in different languages, serving different protocols, integrating with
   particular systems, and a single community can't effectively maintain them
   all. Doing this would significantly slow the progress of the Kafka project.
   As a protocol for messaging, I don't really see a case for including REST
   but not MQTT or AMQP which are technically much better suited to messaging
   and both are widely used for that.
   2. We can treat ecosystem projects that aren't top level Apache projects
   as invalid and try to recreate them all as Apache projects. Honestly,
   though, if you go to the Kafka ecosystem page virtually none of the most
   popular add-ons to Kafka are Apache projects. The most successful things in
   the Kafka ecosystem such as Yahoo Manager, librdkafka, a number of other
   clients, as well as the existing REST layer have succeeded at developing
   communities that actively contribute and use these pieces and I don't know
   that that is a bad thing unless that community proves to be uninclusive,
   unresponsive, or goes in a bad technical direction--and those are failure
   modes that all open source efforts face.
   3. We can do what I think makes the most sense and try to work with the
   projects that exist in the ecosystem and if the project doesn't have a
   responsive community or wants to go in a 

Build failed in Jenkins: kafka-trunk-jdk7 #1625

2016-10-12 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-4289; moved short-lived loggers to companion objects

--
[...truncated 14079 lines...]
org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldCantHaveNullPredicate PASSED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldNotAllowNullActionOnForEach STARTED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldNotAllowNullActionOnForEach PASSED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldNotAllowNullValueMapperOnTableJoin STARTED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldNotAllowNullValueMapperOnTableJoin PASSED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldNotAllowNullPredicateOnFilterNot STARTED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldNotAllowNullPredicateOnFilterNot PASSED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldHaveAtLeastOnPredicateWhenBranching STARTED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldHaveAtLeastOnPredicateWhenBranching PASSED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldNotAllowNullFilePathOnWriteAsText STARTED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldNotAllowNullFilePathOnWriteAsText PASSED

org.apache.kafka.streams.kstream.internals.KStreamTransformValuesTest > 
testTransform STARTED

org.apache.kafka.streams.kstream.internals.KStreamTransformValuesTest > 
testTransform PASSED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullReducerOnReduce STARTED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullReducerOnReduce PASSED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullStoreNameOnReduce STARTED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullStoreNameOnReduce PASSED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullAdderOnWindowedAggregate STARTED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullAdderOnWindowedAggregate PASSED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullInitializerOnWindowedAggregate STARTED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullInitializerOnWindowedAggregate PASSED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullReducerWithWindowedReduce STARTED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullReducerWithWindowedReduce PASSED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullStoreNameOnAggregate STARTED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullStoreNameOnAggregate PASSED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullAdderOnAggregate STARTED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullAdderOnAggregate PASSED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullWindowsWithWindowedReduce STARTED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullWindowsWithWindowedReduce PASSED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullWindowsOnWindowedAggregate STARTED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullWindowsOnWindowedAggregate PASSED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullStoreNameOnWindowedAggregate STARTED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullStoreNameOnWindowedAggregate PASSED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullStoreNameWithWindowedReduce STARTED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullStoreNameWithWindowedReduce PASSED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullInitializerOnAggregate STARTED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullInitializerOnAggregate PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testOuterJoin STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testOuterJoin PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > testJoin 
STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testWindowing STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testWindowing PASSED


Jenkins build is back to normal : kafka-trunk-jdk8 #974

2016-10-12 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-4128) Kafka broker loses messages when zookeeper session times out

2016-10-12 Thread Mazhar Shaikh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568706#comment-15568706
 ] 

Mazhar Shaikh commented on KAFKA-4128:
--

Hi Gwen Shapira,

My concerns for this bug are as below:

1. Whenever a follower connects to the leader and the follower has more
messages (a higher offset) than the leader, the follower truncates/drops
these messages back to the last high watermark.

   => Here, do we have any configuration which will avoid this dropping of
messages and instead replicate them to the leader? (See the sketch below.)

2. What could be the possible reason for the ZooKeeper session timeout,
considering there are no issues with garbage collection?


Brokers = 6
Replicas = 2
Total partitions: 96
Partitions per broker: 16 (leader) + 16 (follower)
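
For what it's worth, these versions have no configuration that makes the
leader adopt a follower's extra messages; truncating the follower back to the
last high watermark is by design. What can be tuned is how much acknowledged
data is at risk in the first place. A minimal sketch, assuming the Java
producer (broker address and topic name are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class DurableProducerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            // Wait for the full in-sync replica set to acknowledge each send.
            // Paired with broker-side min.insync.replicas=2 and
            // unclean.leader.election.enable=false, an acknowledged message
            // then survives the loss of a single replica.
            props.put("acks", "all");
            props.put("retries", "3");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("topic", "key", "value"));
            }
        }
    }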





> Kafka broker loses messages when zookeeper session times out
> -
>
> Key: KAFKA-4128
> URL: https://issues.apache.org/jira/browse/KAFKA-4128
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1, 0.9.0.1
>Reporter: Mazhar Shaikh
>Priority: Critical
>
> Pumping 30k msgs/second, after some 6-8 hrs of running the below logs are
> printed and the messages are lost.
> [More than 5k messages are lost on every partition]
> Below are few logs:
> [2016-09-06 05:00:42,595] INFO Client session timed out, have not heard from 
> server in 20903ms for sessionid 0x256fabec47c0003, closing socket connection 
> and attempting reconnect (org.apache.zookeeper.ClientCnxn)
> [2016-09-06 05:00:42,696] INFO zookeeper state changed (Disconnected) 
> (org.I0Itec.zkclient.ZkClient)
> [2016-09-06 05:00:42,753] INFO Partition [topic,62] on broker 4: Shrinking 
> ISR for partition [topic,62] from 4,2 to 4 (kafka.cluster.Partition)
> [2016-09-06 05:00:43,585] INFO Opening socket connection to server 
> b0/169.254.2.1:2182. Will not attempt to authenticate using SASL (unknown 
> error) (org.apache.zookeeper.ClientCnxn)
> [2016-09-06 05:00:43,586] INFO Socket connection established to 
> b0/169.254.2.1:2182, initiating session (org.apache.zookeeper.ClientCnxn)
> [2016-09-06 05:00:43,587] INFO Unable to read additional data from server 
> sessionid 0x256fabec47c0003, likely server has closed socket, closing socket 
> connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
> [2016-09-06 05:00:44,644] INFO Opening socket connection to server 
> b1/169.254.2.116:2181. Will not attempt to authenticate using SASL (unknown 
> error) (org.apache.zookeeper.ClientCnxn)
> [2016-09-06 05:00:44,651] INFO Socket connection established to 
> b1/169.254.2.116:2181, initiating session (org.apache.zookeeper.ClientCnxn)
> [2016-09-06 05:00:44,658] INFO zookeeper state changed (Expired) 
> (org.I0Itec.zkclient.ZkClient)
> [2016-09-06 05:00:44,659] INFO Initiating client connection, 
> connectString=b2.broker.com:2181,b1.broker.com:2181,zoo3.broker.com:2182 
> sessionTimeout=15000 watcher=org.I0Itec.zkclient.ZkClient@37b8e86a 
> (org.apache.zookeeper.ZooKeeper)
> [2016-09-06 05:00:44,659] INFO Unable to reconnect to ZooKeeper service, 
> session 0x256fabec47c0003 has expired, closing socket connection 
> (org.apache.zookeeper.ClientCnxn)
> [2016-09-06 05:00:44,661] INFO EventThread shut down 
> (org.apache.zookeeper.ClientCnxn)
> [2016-09-06 05:00:44,662] INFO Opening socket connection to server 
> b2/169.254.2.216:2181. Will not attempt to authenticate using SASL (unknown 
> error) (org.apache.zookeeper.ClientCnxn)
> [2016-09-06 05:00:44,662] INFO Socket connection established to 
> b2/169.254.2.216:2181, initiating session (org.apache.zookeeper.ClientCnxn)
> [2016-09-06 05:00:44,665] ERROR Error handling event ZkEvent[New session 
> event sent to 
> kafka.controller.KafkaController$SessionExpirationListener@33b7dedc] 
> (org.I0Itec.zkclient.ZkEventThread)
> java.lang.IllegalStateException: Kafka scheduler has not been started
> at kafka.utils.KafkaScheduler.ensureStarted(KafkaScheduler.scala:114)
> at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:86)
> at 
> kafka.controller.KafkaController.onControllerResignation(KafkaController.scala:350)
> at 
> kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply$mcZ$sp(KafkaController.scala:1108)
> at 
> kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply(KafkaController.scala:1107)
> at 
> kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply(KafkaController.scala:1107)
> at kafka.utils.Utils$.inLock(Utils.scala:535)
> at 
> kafka.controller.KafkaController$SessionExpirationListener.handleNewSession(KafkaController.scala:1107)
> at org.I0Itec.zkclient.ZkClient$4.run(ZkClient.java:472)
> at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> 

[jira] [Updated] (KAFKA-4289) CPU wasted on reflection calls initializing short-lived loggers

2016-10-12 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4289:
---
Assignee: radai rosenblatt

> CPU wasted on reflection calls initializing short-lived loggers
> ---
>
> Key: KAFKA-4289
> URL: https://issues.apache.org/jira/browse/KAFKA-4289
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
>Reporter: radai rosenblatt
>Assignee: radai rosenblatt
> Fix For: 0.10.2.0
>
>
> An internal profiling run at LinkedIn found ~5% of the CPU time consumed by 
> `sun.reflect.Reflection.getCallerClass()`.
> Digging into the stack trace shows it's from initializing short-lived logger 
> objects in `FileMessageSet` and `RequestChannel.Request`.
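
The patch ("moved short-lived loggers to companion objects", merged via pull
request 2006) makes the logger per-class rather than per-instance, so the
reflective caller-class lookup runs once per class instead of once per
short-lived object. A rough Java analogue of the pattern, assuming SLF4J and
an illustrative class name:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class RequestExample { // illustrative name, not the actual class
        // Created once at class-load time, so the expensive caller-class
        // reflection happens once per class.
        private static final Logger LOG = LoggerFactory.getLogger(RequestExample.class);

        // The anti-pattern being removed: a per-instance logger that re-runs
        // the lookup for every short-lived object.
        // private final Logger log = LoggerFactory.getLogger(getClass());

        public void handle() {
            LOG.trace("handling request");
        }
    }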





[jira] [Updated] (KAFKA-4289) CPU wasted on reflection calls initializing short-lived loggers

2016-10-12 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4289:
---
Fix Version/s: 0.10.2.0

> CPU wasted on reflection calls initializing short-lived loggers
> ---
>
> Key: KAFKA-4289
> URL: https://issues.apache.org/jira/browse/KAFKA-4289
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
>Reporter: radai rosenblatt
> Fix For: 0.10.2.0
>
>
> An internal profiling run at LinkedIn found ~5% of the CPU time consumed by 
> `sun.reflect.Reflection.getCallerClass()`.
> Digging into the stack trace shows it's from initializing short-lived logger 
> objects in `FileMessageSet` and `RequestChannel.Request`.





[jira] [Resolved] (KAFKA-4289) CPU wasted on reflection calls initializing short-lived loggers

2016-10-12 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-4289.

Resolution: Fixed

> CPU wasted on reflection calls initializing short-lived loggers
> ---
>
> Key: KAFKA-4289
> URL: https://issues.apache.org/jira/browse/KAFKA-4289
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
>Reporter: radai rosenblatt
> Fix For: 0.10.2.0
>
>
> An internal profiling run at LinkedIn found ~5% of the CPU time consumed by 
> `sun.reflect.Reflection.getCallerClass()`.
> Digging into the stack trace shows it's from initializing short-lived logger 
> objects in `FileMessageSet` and `RequestChannel.Request`.





[jira] [Commented] (KAFKA-4289) CPU wasted on reflection calls initializing short-lived loggers

2016-10-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568547#comment-15568547
 ] 

ASF GitHub Bot commented on KAFKA-4289:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2006


> CPU wasted on reflection calls initializing short-lived loggers
> ---
>
> Key: KAFKA-4289
> URL: https://issues.apache.org/jira/browse/KAFKA-4289
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
>Reporter: radai rosenblatt
>
> An internal profiling run at LinkedIn found ~5% of the CPU time consumed by 
> `sun.reflect.Reflection.getCallerClass()`.
> Digging into the stack trace shows it's from initializing short-lived logger 
> objects in `FileMessageSet` and `RequestChannel.Request`.





[GitHub] kafka pull request #2006: KAFKA-4289 - moved short-lived loggers to companio...

2016-10-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2006




[jira] [Updated] (KAFKA-4295) kafka-console-consumer.sh does not delete the temporary group in zookeeper

2016-10-12 Thread Sswater Shi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sswater Shi updated KAFKA-4295:
---
Description: 
I'm not sure whether this is a bug or designed behaviour.

Since 0.9.x.x, kafka-console-consumer.sh will not delete the group
information in zookeeper/consumers on exit when run without "--new-consumer".
There will be a lot of abandoned zookeeper/consumers/console-consumer-xxx
entries if kafka-console-consumer.sh is run many times.

In 0.8.x.x, kafka-console-consumer.sh could be followed by an argument
"group". If not specified, kafka-console-consumer.sh would create a
temporary group name like 'console-consumer-'. If the group name was
specified by "group", the information in zookeeper/consumers was kept on
exit. If the group name was a temporary one, the information in zookeeper
was deleted when kafka-console-consumer.sh was quit with Ctrl+C. Why was
this changed in 0.9.x.x?


  was:
I'm not sure whether this is a bug or designed behaviour.

Since 0.9.x.x, kafka-console-consumer.sh will not delete the group
information in zookeeper/consumers on exit when run without "--new-consumer".
There will be a lot of abandoned zookeeper/consumers/console-consumer-xxx
entries if kafka-console-consumer.sh is run many times.

In 0.8.x.x, kafka-console-consumer.sh could be followed by an argument
"--group". If not specified, kafka-console-consumer.sh would create a
temporary group name like 'console-consumer-'. If the group name was
specified by "--group", the information in zookeeper/consumers was kept on
exit. If the group name was a temporary one, the information in zookeeper
was deleted when kafka-console-consumer.sh was quit with Ctrl+C. Why was
this changed in 0.9.x.x?



> kafka-console-consumer.sh does not delete the temporary group in zookeeper
> --
>
> Key: KAFKA-4295
> URL: https://issues.apache.org/jira/browse/KAFKA-4295
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.9.0.1, 0.10.0.0, 0.10.0.1
>Reporter: Sswater Shi
>Priority: Minor
>
> I'm not sure whether this is a bug or designed behaviour.
> Since 0.9.x.x, kafka-console-consumer.sh will not delete the group
> information in zookeeper/consumers on exit when run without "--new-consumer".
> There will be a lot of abandoned zookeeper/consumers/console-consumer-xxx
> entries if kafka-console-consumer.sh is run many times.
> In 0.8.x.x, kafka-console-consumer.sh could be followed by an argument
> "group". If not specified, kafka-console-consumer.sh would create a
> temporary group name like 'console-consumer-'. If the group name was
> specified by "group", the information in zookeeper/consumers was kept on
> exit. If the group name was a temporary one, the information in zookeeper
> was deleted when kafka-console-consumer.sh was quit with Ctrl+C. Why was
> this changed in 0.9.x.x?





[jira] [Created] (KAFKA-4295) kafka-console-consumer.sh does not delete the temporary group in zookeeper

2016-10-12 Thread Sswater Shi (JIRA)
Sswater Shi created KAFKA-4295:
--

 Summary: kafka-console-consumer.sh does not delete the temporary 
group in zookeeper
 Key: KAFKA-4295
 URL: https://issues.apache.org/jira/browse/KAFKA-4295
 Project: Kafka
  Issue Type: Bug
  Components: admin
Affects Versions: 0.10.0.1, 0.10.0.0, 0.9.0.1
Reporter: Sswater Shi
Priority: Minor


I'm not sure whether this is a bug or designed behaviour.

Since 0.9.x.x, kafka-console-consumer.sh will not delete the group
information in zookeeper/consumers on exit when run without "--new-consumer".
There will be a lot of abandoned zookeeper/consumers/console-consumer-xxx
entries if kafka-console-consumer.sh is run many times.

In 0.8.x.x, kafka-console-consumer.sh could be followed by an argument
"--group". If not specified, kafka-console-consumer.sh would create a
temporary group name like 'console-consumer-'. If the group name was
specified by "--group", the information in zookeeper/consumers was kept on
exit. If the group name was a temporary one, the information in zookeeper
was deleted when kafka-console-consumer.sh was quit with Ctrl+C. Why was
this changed in 0.9.x.x?
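
Until that changes, the abandoned groups can be cleaned up by hand. A sketch
of one way to do it against ZooKeeper directly (the connection string is a
placeholder; verify the paths before deleting anything):

    import org.apache.zookeeper.ZooKeeper;

    public class StaleGroupCleaner {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("localhost:2181", 15000, event -> { });
            for (String group : zk.getChildren("/consumers", false)) {
                // only the auto-generated console-consumer groups
                if (group.startsWith("console-consumer-")) {
                    deleteRecursive(zk, "/consumers/" + group);
                }
            }
            zk.close();
        }

        // A znode with children cannot be deleted directly, so recurse first.
        private static void deleteRecursive(ZooKeeper zk, String path) throws Exception {
            for (String child : zk.getChildren(path, false)) {
                deleteRecursive(zk, path + "/" + child);
            }
            zk.delete(path, -1); // -1 matches any version
        }
    }

A live group still has ephemeral nodes registered under it, so this should
only be run while no old-style consumers are active.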






[jira] [Created] (KAFKA-4294) Allow password file in server.properties to separate 'secrets' from standard configs

2016-10-12 Thread Ryan P (JIRA)
Ryan P created KAFKA-4294:
-

 Summary: Allow password file in server.properties to separate 
'secrets' from standard configs 
 Key: KAFKA-4294
 URL: https://issues.apache.org/jira/browse/KAFKA-4294
 Project: Kafka
  Issue Type: Improvement
Reporter: Ryan P


Java's keytool (on Windows) allows you to specify the keystore/truststore
password with an external file in addition to a string argument:

-storepass:file secret.txt

http://docs.oracle.com/javase/7/docs/technotes/tools/windows/keytool.html

It would be nice if Kafka could offer the same functionality, allowing
organizations to separate concerns between standard configs and 'secrets'.

Ideally Kafka would add a secrets-file property to the broker config which
could override any SSL properties which currently exist within the broker
config. Since the secrets-file property would only be used to override
existing SSL/TLS properties, the change maintains backward compatibility.
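
To make the idea concrete, a minimal sketch of the overlay (the file layout
and precedence here are assumptions, not an existing Kafka API):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class SecretsOverlayExample {
        // Load the ordinary broker config, then let entries from a separate
        // secrets file (e.g. ssl.keystore.password) override it, so secrets
        // never have to appear in server.properties itself.
        public static Properties load(String configPath, String secretsPath)
                throws IOException {
            Properties config = new Properties();
            try (FileInputStream in = new FileInputStream(configPath)) {
                config.load(in);
            }
            try (FileInputStream in = new FileInputStream(secretsPath)) {
                Properties secrets = new Properties();
                secrets.load(in);
                config.putAll(secrets); // secrets win over the standard config
            }
            return config;
        }
    }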





[jira] [Commented] (KAFKA-4258) Page not found http://kafka.apache.org/streams.html

2016-10-12 Thread Mickael Maison (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568084#comment-15568084
 ] 

Mickael Maison commented on KAFKA-4258:
---

With the new website, the URL is now: 
http://kafka.apache.org/documentation#streams

> Page not found http://kafka.apache.org/streams.html
> ---
>
> Key: KAFKA-4258
> URL: https://issues.apache.org/jira/browse/KAFKA-4258
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Andrey Dyachkov
>Priority: Minor
>
> It is not possible to access http://kafka.apache.org/streams.html because it 
> returns:
> Not Found
> The requested URL /streams.html was not found on this server.





[GitHub] kafka pull request #2014: HOTFIX: Increase number of retries in smoke test

2016-10-12 Thread enothereska
GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/2014

HOTFIX: Increase number of retries in smoke test



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka hotfix-smoke-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2014.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2014


commit 1d95ad3ba222dbf4debe7b14013cdd856d12efb4
Author: Eno Thereska 
Date:   2016-10-12T08:46:12Z

Increase number of retries






Re: [DISCUSS] KIP-82 - Add Record Headers

2016-10-12 Thread Michael Pearce
@Jay and Dana

We have internally had a few discussions of how we might address this if we had
a common Apache Kafka message wrapper for headers that can be used client side
only, and how that could address the compaction issue.
I have detailed this solution separately and linked it from the main KIP-82 wiki.

Here’s a direct link – 
https://cwiki.apache.org/confluence/display/KAFKA/Headers+Value+Message+Wrapper

We feel, though, that this solution still doesn't manage to address all the use
cases being mentioned, and it also has some compatibility drawbacks, e.g.
backward/forward compatibility, especially across different language clients.
Also, this solution still has to deal with the compaction/tombstone issue, so
we would still need to make server-side changes and just as many
message/record version changes. (An illustrative sketch of the wrapper idea
follows.)
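
For illustration only, one possible client-side encoding of such a wrapper
(not the actual proposal's wire format): a header count, length-prefixed
key/value pairs, then the payload. It also makes the compaction problem
concrete: a delete is expressed as a null value, so a tombstone has nowhere
to carry headers at all.

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;
    import java.util.Map;

    public class HeaderValueWrapper {
        // Layout: [headerCount][keyLen key valLen val]...[payloadLen payload]
        public static byte[] wrap(Map<String, byte[]> headers, byte[] payload) {
            int size = 4 + 4 + payload.length;
            for (Map.Entry<String, byte[]> h : headers.entrySet()) {
                size += 4 + h.getKey().getBytes(StandardCharsets.UTF_8).length
                        + 4 + h.getValue().length;
            }
            ByteBuffer buf = ByteBuffer.allocate(size);
            buf.putInt(headers.size());
            for (Map.Entry<String, byte[]> h : headers.entrySet()) {
                byte[] key = h.getKey().getBytes(StandardCharsets.UTF_8);
                buf.putInt(key.length).put(key);
                buf.putInt(h.getValue().length).put(h.getValue());
            }
            buf.putInt(payload.length).put(payload);
            return buf.array();
        }
    }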

We believe the proposed solution in KIP-82 does address all these needs, is
cleaner still, and brings more benefits.
Please have a read and comment. Also, if you have any improvements to the
proposed KIP-82 or an alternative solution/option, your input is appreciated.

@All
As Joel has mentioned, to get this moving along and to be able to discuss more
fluidly, it would be great if we could organize a virtual meetup online, e.g.
WebEx or something.
I am aware that the majority are based in America; I myself am in the UK.
@Kostya I assume you're in Eastern Europe or Russia based on your email address
(please correct this assumption); I hope the time difference isn't so large
that the time below wouldn't suit you, should you wish to join.

Can I propose that we try to meet online next Wednesday, 19th October, at
18:30 BST / 10:30 PST / 20:30 MSK?

Would this date/time suit the majority?
Also, what is the preferred method? I can host via an Adobe Connect style WebEx
(which my company uses), but it isn't the best IMHO, so I'm more than happy to
have someone suggest a better alternative.

Best,
Mike




On 10/8/16, 7:26 AM, "Michael Pearce"  wrote:

>> I agree with the critique of compaction not having a value. I think we 
should consider fixing that directly.

> Agree that the compaction issue is troubling: compacted "null" deletes
are incompatible w/ headers that must be packed into the message
value. Are there any alternatives on compaction delete semantics that
could address this? The KIP wiki discussion I think mostly assumes
that compaction-delete is what it is and can't be changed/fixed.

This KIP is about dealing with quite a few use cases and issues; please see
both the KIP use cases detailed by myself and also the additional use-cases
wiki added by LinkedIn, linked from the main KIP.

Compaction is something that happily is addressed with headers, but it
most definitely isn't the sole reason or use case for them; headers solve many
issues and use cases. Thus their elegance and simplicity, and why they're so
common and so successful in transport mechanisms such as HTTP, TCP, and JMS.


From: Dana Powers 
Sent: Friday, October 7, 2016 11:09 PM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-82 - Add Record Headers

> I agree with the critique of compaction not having a value. I think we 
should consider fixing that directly.

Agree that the compaction issue is troubling: compacted "null" deletes
are incompatible w/ headers that must be packed into the message
value. Are there any alternatives on compaction delete semantics that
could address this? The KIP wiki discussion I think mostly assumes
that compaction-delete is what it is and can't be changed/fixed.

-Dana

On Fri, Oct 7, 2016 at 1:38 PM, Michael Pearce  
wrote:
>
> Hi Jay,
>
> Thanks for the comments and feedback.
>
> I think it's quite clear that if a problem keeps arising then it needs
resolving and addressing properly.
>
> Fair enough, at LinkedIn and historically for the very first use cases,
addressing this may not have been a big priority. But as Kafka is now Apache
open source and being picked up by many, including my company, it is clear and
evident that this is a requirement and an issue that now needs to be
addressed.
>
> The fact that in almost every transport mechanism, including networking
layers in the enterprises I've worked in, there have always been headers, I
think, clearly shows their need and success for a transport mechanism.
>
> I understand some concerns with regard to the impact on others not needing
it.
>
> What we are proposing is a flexible solution that adds no overhead at the
storage or network layers if you choose not to use headers, but does
enable those who need or want them to use them.
>
>
> On your response to 1), there is nothing saying that it should be put in 
any faster or without diligence and the same KIP process 

Build failed in Jenkins: kafka-trunk-jdk8 #973

2016-10-12 Thread Apache Jenkins Server
See 

Changes:

[jason] MINOR: Fixed broken links in the documentation

--
[...truncated 14151 lines...]
org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testStickiness PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithStandby STARTED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithoutStandby STARTED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithoutStandby PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexingTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexingTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingStatefulTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingStatefulTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleMultiSourceTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleMultiSourceTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testTopologyMetadata STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testTopologyMetadata PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexByNameTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexByNameTopology PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testSpecificPartition STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testSpecificPartition PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldThrowStreamsExceptionAfterMaxAttempts STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldThrowStreamsExceptionAfterMaxAttempts PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldRetryWhenTimeoutExceptionOccursOnSend STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldRetryWhenTimeoutExceptionOccursOnSend PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testStreamPartitioner STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testStreamPartitioner PASSED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval STARTED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval PASSED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenAuthorizationException 
STARTED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenAuthorizationException 
PASSED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenKafkaException STARTED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenKafkaException PASSED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowWakeupExceptionOnInitializeOffsetsWhenWakeupException STARTED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowWakeupExceptionOnInitializeOffsetsWhenWakeupException PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldMapUserEndPointToTopicPartitions STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldMapUserEndPointToTopicPartitions PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldAddUserDefinedEndPointToSubscription STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldAddUserDefinedEndPointToSubscription PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStandbyReplicas STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 

Jenkins build is back to normal : kafka-0.10.1-jdk7 #68

2016-10-12 Thread Apache Jenkins Server
See 



Jenkins build is back to normal : kafka-trunk-jdk7 #1624

2016-10-12 Thread Apache Jenkins Server
See