Build failed in Jenkins: kafka-trunk-jdk8 #88

2015-11-03 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2017: Persist Group Metadata and Assignment before Responding

--
[...truncated 367 lines...]
:kafka-trunk-jdk8:log4j-appender:classes UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala UP-TO-DATE
:kafka-trunk-jdk8:core:processResources UP-TO-DATE
:kafka-trunk-jdk8:core:classes UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:javadoc
:kafka-trunk-jdk8:core:javadoc
:kafka-trunk-jdk8:core:javadocJar
:kafka-trunk-jdk8:core:scaladoc
[ant:scaladoc] Element 
' 
does not exist.
[ant:scaladoc] 
:293:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.uncleanLeaderElectionRate
[ant:scaladoc] ^
[ant:scaladoc] 
:294:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.leaderElectionTimer
[ant:scaladoc] ^
[ant:scaladoc] warning: there were 15 feature warning(s); re-run with -feature 
for details
[ant:scaladoc] 
:72:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:32:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:137:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:120:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:97:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#put".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:152:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#take".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 9 warnings found
cache fileHashes.bin 
(
 is corrupt. Discarding.
:kafka-trunk-jdk8:core:scaladocJar
:kafka-trunk-jdk8:core:docsJar
:docsJar_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:clients:javadoc UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:compileJava UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:classes UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: 
ignoring option MaxPermSize=512m; support was removed in 8.0

:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

 

[jira] [Commented] (KAFKA-2658) Implement SASL/PLAIN

2015-11-03 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987208#comment-14987208
 ] 

Rajini Sivaram commented on KAFKA-2658:
---

[~junrao] We are clearly very near the deadline for Kafka 0.9.0.0 and 
understand the reluctance to include such a large patch, even though the 
majority of it is test code. It is very important for us to be able to provide 
authentication credentials using SASL, but we do not use Kerberos. We were 
wondering whether it might be possible to include a minimal patch that allows 
SASL providers to be plugged in on the client and server. If that were 
acceptable, we could provide the patch today. If there is anything at all we 
could do to alleviate your concerns about including this patch, please let us 
know.

Failing that, we look forward to working with you to get the existing patch 
accepted shortly after the Kafka 0.9.0.0 branch is cut. Thank you...


> Implement SASL/PLAIN
> 
>
> Key: KAFKA-2658
> URL: https://issues.apache.org/jira/browse/KAFKA-2658
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> KAFKA-1686 supports SASL/Kerberos using GSSAPI. We should enable more SASL 
> mechanisms. SASL/PLAIN would enable a simpler use of SASL, which along with 
> SSL provides a secure Kafka that uses username/password for client 
> authentication.
> SASL/PLAIN protocol and its uses are described in 
> [https://tools.ietf.org/html/rfc4616]. It is supported in Java.
> This should be implemented after KAFKA-1686. This task should also hopefully 
> enable simpler unit testing of the SASL code.
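
As a minimal standalone sketch of the built-in Java support mentioned above 
(the credentials, service name, and host are placeholders, and this is not 
Kafka's integration code), the JDK's SASL API can negotiate PLAIN like this:

{code}
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;

public class PlainSaslClientSketch {
    public static void main(String[] args) throws Exception {
        // Supplies the username/password; the values are placeholders.
        CallbackHandler handler = callbacks -> {
            for (Callback cb : callbacks) {
                if (cb instanceof NameCallback)
                    ((NameCallback) cb).setName("alice");
                else if (cb instanceof PasswordCallback)
                    ((PasswordCallback) cb).setPassword("alice-secret".toCharArray());
            }
        };
        // The JDK's SunSASL provider ships a PLAIN client mechanism (RFC 4616).
        SaslClient client = Sasl.createSaslClient(
                new String[]{"PLAIN"}, null, "kafka", "broker.example.com",
                null, handler);
        // PLAIN sends an initial response: authzid NUL authcid NUL password.
        byte[] initialResponse = client.evaluateChallenge(new byte[0]);
        System.out.println("PLAIN initial response is "
                + initialResponse.length + " bytes");
    }
}
{code}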



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2728) kafka-run-class.sh: incorrect path to tools-log4j.properties for KAFKA_LOG4J_OPTS

2015-11-03 Thread Michael Noll (JIRA)
Michael Noll created KAFKA-2728:
---

 Summary: kafka-run-class.sh: incorrect path to 
tools-log4j.properties for KAFKA_LOG4J_OPTS
 Key: KAFKA-2728
 URL: https://issues.apache.org/jira/browse/KAFKA-2728
 Project: Kafka
  Issue Type: Bug
  Components: config, core
Affects Versions: 0.9.0.0
Reporter: Michael Noll


I noticed that the {{bin/kafka-run-class.sh}} script in current trunk (as of 
commit e466ccd) seems to set up the KAFKA_LOG4J_OPTS environment variable 
incorrectly. Notably, the way it constructs the path to 
{{config/tools-log4j.properties}} is wrong, and it is inconsistent with how the 
other bin scripts configure the paths to their {{config/*.properties}} files.

Example: bin/kafka-run-class.sh (the buggy script)

{code}
if [ -z "$KAFKA_LOG4J_OPTS" ]; then
  # Log to console. This is a tool.
  
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/config/tools-log4j.properties"
else
  ...snip...
{code}

Example: bin/kafka-server-start.sh (a correct script)

{code}
if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
export 
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi
{code}

In the examples above, note the difference between:

{code}
# Without ".."
file:$base_dir/config/tools-log4j.properties

# With ".."
file:$base_dir/../config/log4j.properties
{code}

*How to fix*

Set up {{KAFKA_LOG4J_OPTS}} in {{kafka-run-class.sh}} as follows:

{code}
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/tools-log4j.properties"
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2728) kafka-run-class.sh: incorrect path to tools-log4j.properties for KAFKA_LOG4J_OPTS

2015-11-03 Thread Michael Noll (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Noll updated KAFKA-2728:

Description: 
I noticed that the {{bin/kafka-run-class.sh}} and the 
{{bin/windows/kafka-run-class.bat}} scripts in current trunk (as of commit 
e466ccd) seem to set up the KAFKA_LOG4J_OPTS environment variable incorrectly. 
Notably, the way they construct the path to {{config/tools-log4j.properties}} 
is wrong, and it is inconsistent with how the other bin scripts configure the 
paths to their {{config/*.properties}} files.

Example: bin/kafka-run-class.sh (one of the two buggy scripts)

{code}
if [ -z "$KAFKA_LOG4J_OPTS" ]; then
  # Log to console. This is a tool.
  
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/config/tools-log4j.properties"
else
  ...snip...
{code}

Example: bin/kafka-server-start.sh (a correct script)

{code}
if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
export 
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi
{code}

In the examples above, note the difference between:

{code}
# Without ".."
file:$base_dir/config/tools-log4j.properties

# With ".."
file:$base_dir/../config/log4j.properties
{code}

*How to fix*

Set up {{KAFKA_LOG4J_OPTS}} in {{kafka-run-class.sh}} as follows:

{code}
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/tools-log4j.properties"
{code}

Set up {{KAFKA_LOG4J_OPTS}} in {{kafka-run-class.bat}} as follows (careful, I 
am not that familiar with Windows .bat scripting):

{code}
set 
KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%BASE_DIR%/../config/tools-log4j.properties
{code}


  was:
I noticed that the {{bin/kafka-run-class.sh}} and the 
{{bin/windows/kafka-run-class.bat}} scripts in current trunk (as of commit 
e466ccd) seem to set up the KAFKA_LOG4J_OPTS environment variable incorrectly. 
Notably, the way they construct the path to {{config/tools-log4j.properties}} 
is wrong, and it is inconsistent with how the other bin scripts configure the 
paths to their {{config/*.properties}} files.

Example: bin/kafka-run-class.sh (one of the two buggy scripts)

{code}
if [ -z "$KAFKA_LOG4J_OPTS" ]; then
  # Log to console. This is a tool.
  
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/config/tools-log4j.properties"
else
  ...snip...
{code}

Example: bin/kafka-server-start.sh (a correct script)

{code}
if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
export 
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi
{code}

In the examples above, note the difference between:

{code}
# Without ".."
file:$base_dir/config/tools-log4j.properties

# With ".."
file:$base_dir/../config/log4j.properties
{code}

*How to fix*

Set up {{KAFKA_LOG4J_OPTS}} in {{kafka-run-class.sh}} as follows:

{code}
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/tools-log4j.properties"
{code}

Set up {{KAFKA_LOG4J_OPTS}} in {{kafka-run-class.bat}} as follows (careful, I 
am not that familiar with Windows .bat scripting):

{code}
set 
KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%BASE_DIR%/../config/tools-log4j.properties
{code}



> kafka-run-class.sh: incorrect path to tools-log4j.properties for 
> KAFKA_LOG4J_OPTS
> -
>
> Key: KAFKA-2728
> URL: https://issues.apache.org/jira/browse/KAFKA-2728
> Project: Kafka
>  Issue Type: Bug
>  Components: config, core
>Affects Versions: 0.9.0.0
>Reporter: Michael Noll
>
> I noticed that the {{bin/kafka-run-class.sh}} and the 
> {{bin/windows/kafka-run-class.bat}} scripts in current trunk (as of commit 
> e466ccd) seem to set up the KAFKA_LOG4J_OPTS environment variable 
> incorrectly. Notably, the way they construct the path to 
> {{config/tools-log4j.properties}} is wrong, and it is inconsistent with how the 
> other bin scripts configure the paths to their {{config/*.properties}} files.
> Example: bin/kafka-run-class.sh (one of the two buggy scripts)
> {code}
> if [ -z "$KAFKA_LOG4J_OPTS" ]; then
>   # Log to console. This is a tool.
>   
> KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/config/tools-log4j.properties"
> else
>   ...snip...
> {code}
> Example: bin/kafka-server-start.sh (a correct script)
> {code}
> if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
> export 
> KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
> fi
> {code}
> In the examples above, note the difference between:
> {code}
> # Without ".."
> file:$base_dir/config/tools-log4j.properties
> # With ".."
> file:$base_dir/../config/log4j.properties
> {code}
> *How to fix*
> Set up {{KAFKA_LOG4J_OPTS}} in {{kafka-run-class.sh}} as follows:
> {code}
> KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/tools-log4j.properties"
> {code}
> Set up {{KAFKA_LOG4J_OPTS}} in {{kafka-run-cl

[jira] [Updated] (KAFKA-2728) kafka-run-class.sh: incorrect path to tools-log4j.properties for KAFKA_LOG4J_OPTS

2015-11-03 Thread Michael Noll (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Noll updated KAFKA-2728:

Description: 
I noticed that the {{bin/kafka-run-class.sh}} and the 
{{bin/windows/kafka-run-class.bat}} scripts in current trunk (as of commit 
e466ccd) seem to set up the KAFKA_LOG4J_OPTS environment variable incorrectly. 
Notably, the way they construct the path to {{config/tools-log4j.properties}} 
is wrong, and it is inconsistent with how the other bin scripts configure the 
paths to their {{config/*.properties}} files.

Example: bin/kafka-run-class.sh (one of the two buggy scripts)

{code}
if [ -z "$KAFKA_LOG4J_OPTS" ]; then
  # Log to console. This is a tool.
  
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/config/tools-log4j.properties"
else
  ...snip...
{code}

Example: bin/kafka-server-start.sh (a correct script)

{code}
if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
export 
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi
{code}

In the examples above, note the difference between:

{code}
# Without ".."
file:$base_dir/config/tools-log4j.properties

# With ".."
file:$base_dir/../config/log4j.properties
{code}

*How to fix*

Set up {{KAFKA_LOG4J_OPTS}} in {{kafka-run-class.sh}} as follows:

{code}
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/tools-log4j.properties"
{code}

Set up {{KAFKA_LOG4J_OPTS}} in {{kafka-run-class.bat}} as follows (careful, I 
am not that familiar with Windows .bat scripting):

{code}
set 
KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%BASE_DIR%/../config/tools-log4j.properties
{code}


  was:
I noticed that the {{bin/kafka-run-class.sh}} script in current trunk (as of 
commit e466ccd) seems to set up the KAFKA_LOG4J_OPTS environment variable 
incorrectly. Notably, the way it constructs the path to 
{{config/tools-log4j.properties}} is wrong, and it is inconsistent with how the 
other bin scripts configure the paths to their {{config/*.properties}} files.

Example: bin/kafka-run-class.sh (the buggy script)

{code}
if [ -z "$KAFKA_LOG4J_OPTS" ]; then
  # Log to console. This is a tool.
  
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/config/tools-log4j.properties"
else
  ...snip...
{code}

Example: bin/kafka-server-start.sh (a correct script)

{code}
if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
export 
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi
{code}

In the examples above, note the difference between:

{code}
# Without ".."
file:$base_dir/config/tools-log4j.properties

# With ".."
file:$base_dir/../config/log4j.properties
{code}

*How to fix*

Set up {{KAFKA_LOG4J_OPTS}} in {{kafka-run-class.sh}} as follows:

{code}
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/tools-log4j.properties"
{code}


> kafka-run-class.sh: incorrect path to tools-log4j.properties for 
> KAFKA_LOG4J_OPTS
> -
>
> Key: KAFKA-2728
> URL: https://issues.apache.org/jira/browse/KAFKA-2728
> Project: Kafka
>  Issue Type: Bug
>  Components: config, core
>Affects Versions: 0.9.0.0
>Reporter: Michael Noll
>
> I noticed that the {{bin/kafka-run-class.sh}} and the 
> {{bin/windows/kafka-run-class.bat}} scripts in current trunk (as of commit 
> e466ccd) seem to set up the KAFKA_LOG4J_OPTS environment variable 
> incorrectly. Notably, the way they construct the path to 
> {{config/tools-log4j.properties}} is wrong, and it is inconsistent with how the 
> other bin scripts configure the paths to their {{config/*.properties}} files.
> Example: bin/kafka-run-class.sh (one of the two buggy scripts)
> {code}
> if [ -z "$KAFKA_LOG4J_OPTS" ]; then
>   # Log to console. This is a tool.
>   
> KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/config/tools-log4j.properties"
> else
>   ...snip...
> {code}
> Example: bin/kafka-server-start.sh (a correct script)
> {code}
> if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
> export 
> KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
> fi
> {code}
> In the examples above, note the difference between:
> {code}
> # Without ".."
> file:$base_dir/config/tools-log4j.properties
> # With ".."
> file:$base_dir/../config/log4j.properties
> {code}
> *How to fix*
> Set up {{KAFKA_LOG4J_OPTS}} in {{kafka-run-class.sh}} as follows:
> {code}
> KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/tools-log4j.properties"
> {code}
> Set up {{KAFKA_LOG4J_OPTS}} in {{kafka-run-class.bat}} as follows (careful, I 
> am not that familiar with Windows .bat scripting):
> {code}
> set 
> KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%BASE_DIR%/../config/tools-log4j.properties
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2728) kafka-run-class.sh: incorrect path to tools-log4j.properties for KAFKA_LOG4J_OPTS

2015-11-03 Thread Michael Noll (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Noll updated KAFKA-2728:

Description: 
I noticed that the {{bin/kafka-run-class.sh}} and the 
{{bin/windows/kafka-run-class.bat}} scripts in current trunk (as of commit 
e466ccd) seem to set up the KAFKA_LOG4J_OPTS environment variable incorrectly. 
Notably, the way they construct the path to {{config/tools-log4j.properties}} 
is wrong, and it is inconsistent with how the other bin scripts configure the 
paths to their {{config/*.properties}} files.

Example: bin/kafka-run-class.sh (one of the two buggy scripts)

{code}
if [ -z "$KAFKA_LOG4J_OPTS" ]; then
  # Log to console. This is a tool.
  
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/config/tools-log4j.properties"
else
  ...snip...
{code}

Example: bin/kafka-server-start.sh (a correct script)

{code}
if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
export 
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi
{code}

In the examples above, note the difference between:

{code}
# Without ".."
file:$base_dir/config/tools-log4j.properties

# With ".."
file:$base_dir/../config/log4j.properties
{code}

*How to fix*

Set up {{KAFKA_LOG4J_OPTS}} in {{kafka-run-class.sh}} as follows:

{code}
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/tools-log4j.properties"
{code}

Set up {{KAFKA_LOG4J_OPTS}} in {{kafka-run-class.bat}} as follows (careful, I 
am not that familiar with Windows .bat scripting):

{code}
set 
KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%BASE_DIR%/../config/tools-log4j.properties
{code}

Alternatively, for the Windows script, we could use the same code variant we 
use in e.g. {{kafka-server-start.bat}}, where we use {{~dp0}} instead of 
{{BASE_DIR}} (I'd opt for this variant so that the Windows scripts are 
consistent):

{code}
set 
KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%~dp0../../config/tools-log4j.properties
{code}


  was:
I noticed that the {{bin/kafka-run-class.sh}} and the 
{{bin/windows/kafka-run-class.bat}} scripts in current trunk (as of commit 
e466ccd) seem to set up the KAFKA_LOG4J_OPTS environment variable incorrectly. 
Notably, the way they construct the path to {{config/tools-log4j.properties}} 
is wrong, and it is inconsistent with how the other bin scripts configure the 
paths to their {{config/*.properties}} files.

Example: bin/kafka-run-class.sh (one of the two buggy scripts)

{code}
if [ -z "$KAFKA_LOG4J_OPTS" ]; then
  # Log to console. This is a tool.
  
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/config/tools-log4j.properties"
else
  ...snip...
{code}

Example: bin/kafka-server-start.sh (a correct script)

{code}
if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
export 
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi
{code}

In the examples above, note the difference between:

{code}
# Without ".."
file:$base_dir/config/tools-log4j.properties

# With ".."
file:$base_dir/../config/log4j.properties
{code}

*How to fix*

Set up {{KAFKA_LOG4J_OPTS}} in {{kafka-run-class.sh}} as follows:

{code}
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/tools-log4j.properties"
{code}

Set up {{KAFKA_LOG4J_OPTS}} in {{kafka-run-class.bat}} as follows (careful, I 
am not that familiar with Windows .bat scripting):

{code}
set 
KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%BASE_DIR%/../config/tools-log4j.properties
{code}



> kafka-run-class.sh: incorrect path to tools-log4j.properties for 
> KAFKA_LOG4J_OPTS
> -
>
> Key: KAFKA-2728
> URL: https://issues.apache.org/jira/browse/KAFKA-2728
> Project: Kafka
>  Issue Type: Bug
>  Components: config, core
>Affects Versions: 0.9.0.0
>Reporter: Michael Noll
>
> I noticed that the {{bin/kafka-run-class.sh}} and the 
> {{bin/windows/kafka-run-class.bat}} scripts in current trunk (as of commit 
> e466ccd) seem to set up the KAFKA_LOG4J_OPTS environment variable 
> incorrectly. Notably, the way they construct the path to 
> {{config/tools-log4j.properties}} is wrong, and it is inconsistent with how the 
> other bin scripts configure the paths to their {{config/*.properties}} files.
> Example: bin/kafka-run-class.sh (one of the two buggy scripts)
> {code}
> if [ -z "$KAFKA_LOG4J_OPTS" ]; then
>   # Log to console. This is a tool.
>   
> KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/config/tools-log4j.properties"
> else
>   ...snip...
> {code}
> Example: bin/kafka-server-start.sh (a correct script)
> {code}
> if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
> export 
> KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
> fi
> {code}
> In the examples above, note the difference between:
> {code}
> # Without ".."
> file:$base

[jira] [Commented] (KAFKA-2658) Implement SASL/PLAIN

2015-11-03 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987381#comment-14987381
 ] 

Rajini Sivaram commented on KAFKA-2658:
---

[~junrao] The minimal changeset referred to in the comment above that would 
enable us to integrate Kafka 0.9.0.0 with our authentication service is in the 
branch KAFKA-2658-minimal in the repository 
https://github.com/rajinisivaram/kafka. You can view the changes here: 
https://github.com/apache/kafka/compare/trunk...rajinisivaram:KAFKA-2658-minimal.
 Please let me know if it would be possible to integrate this into 0.9.0.0. If 
so, I can submit a PR today. Thank you...

> Implement SASL/PLAIN
> 
>
> Key: KAFKA-2658
> URL: https://issues.apache.org/jira/browse/KAFKA-2658
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> KAFKA-1686 supports SASL/Kerberos using GSSAPI. We should enable more SASL 
> mechanisms. SASL/PLAIN would enable a simpler use of SASL, which along with 
> SSL provides a secure Kafka that uses username/password for client 
> authentication.
> SASL/PLAIN protocol and its uses are described in 
> [https://tools.ietf.org/html/rfc4616]. It is supported in Java.
> This should be implemented after KAFKA-1686. This task should also hopefully 
> enable simpler unit testing of the SASL code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2730) partition-reassignment tool stops working due to error in registerMetric

2015-11-03 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-2730:
--

 Summary: partition-reassignment tool stops working due to error in 
registerMetric
 Key: KAFKA-2730
 URL: https://issues.apache.org/jira/browse/KAFKA-2730
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Jun Rao
 Fix For: 0.9.0.0


I updated our test system to use Kafka from the latest revision 
7c33475274cb6e65a8e8d907e7fef6e56bc8c8e6 and now I'm seeing:

[2015-11-03 14:07:01,554] ERROR [KafkaApi-2] error when handling request 
Name:LeaderAndIsrRequest;Version:0;Controller:3;ControllerEpoch:1;CorrelationId:5;ClientId:3;Leaders:BrokerEndPoint(3,192.168.60.168,21769);PartitionState:(5c700e33-9230-4219-a3e1-42574c175d62-logs,0)
 -> 
(LeaderAndIsrInfo:(Leader:3,ISR:3,LeaderEpoch:1,ControllerEpoch:1),ReplicationFactor:3),AllReplicas:2,3,1)
 (kafka.server.KafkaApis)
java.lang.IllegalArgumentException: A metric named 'MetricName 
[name=connection-close-rate, group=replica-fetcher-metrics, 
description=Connections closed per second in the window., tags={broker-id=3}]' 
already exists, can't register another one.
at org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:285)
at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
at 
org.apache.kafka.common.network.Selector$SelectorMetrics.(Selector.java:578)
at org.apache.kafka.common.network.Selector.(Selector.java:112)
at kafka.server.ReplicaFetcherThread.(ReplicaFetcherThread.scala:69)
at 
kafka.server.ReplicaFetcherManager.createFetcherThread(ReplicaFetcherManager.scala:35)
at 
kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:83)
at 
kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:78)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at 
kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:78)
at kafka.server.ReplicaManager.makeFollowers(ReplicaManager.scala:791)
at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:628)
at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:114)
at kafka.server.KafkaApis.handle(KafkaApis.scala:71)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Thread.java:745)

This happens when I'm running kafka-reassign-partitions.sh. As a result, the 
verify command reports that one of the partition reassignments "is still in 
progress" forever.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2729) Cached zkVersion not equal to that in zookeeper, broker not recovering.

2015-11-03 Thread Danil Serdyuchenko (JIRA)
Danil Serdyuchenko created KAFKA-2729:
-

 Summary: Cached zkVersion not equal to that in zookeeper, broker 
not recovering.
 Key: KAFKA-2729
 URL: https://issues.apache.org/jira/browse/KAFKA-2729
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Danil Serdyuchenko


After a small network wobble where zookeeper nodes couldn't reach each other, 
we started seeing a large number of under-replicated partitions. The zookeeper 
cluster recovered; however, we continued to see a large number of 
under-replicated partitions. Two brokers in the kafka cluster were showing this 
in the logs:

{code}
[2015-10-27 11:36:00,888] INFO Partition 
[__samza_checkpoint_event-creation_1,3] on broker 5: Shrinking ISR for 
partition [__samza_checkpoint_event-creation_1,3] from 6,5 to 5 
(kafka.cluster.Partition)
[2015-10-27 11:36:00,891] INFO Partition 
[__samza_checkpoint_event-creation_1,3] on broker 5: Cached zkVersion [66] not 
equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
{code}
This happened for all of the topics on the affected brokers. Both brokers 
recovered only after a restart. Our own investigation yielded nothing; I was 
hoping you could shed some light on this issue, and possibly on whether it's 
related to https://issues.apache.org/jira/browse/KAFKA-1382. However, we're 
using 0.8.2.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2731) Kerberos on same host with Kafka does not find server in its database on Ubuntu

2015-11-03 Thread Mohammad Abbasi (JIRA)
Mohammad Abbasi created KAFKA-2731:
--

 Summary: Kerberos on same host with Kafka does not find server in 
its database on Ubuntu
 Key: KAFKA-2731
 URL: https://issues.apache.org/jira/browse/KAFKA-2731
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.0
Reporter: Mohammad Abbasi


After configuring Kafka to use the keytab created in Kerberos, as described in 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61326390,
the Kerberos log shows:
Nov 02 17:25:13 myhost krb5kdc[3307](info): TGS_REQ (5 etypes {17 16 23 1 3}) 
192.168.18.241: LOOKING_UP_SERVER: authtime 0,  kafka/myh...@a.org for , Server not found in Kerberos database
Kafka's log:
SASL Connection info:
[2015-11-03 18:33:00,544] DEBUG creating sasl client: 
client=kafka/myh...@a.org;service=zookeeper;serviceHostname=myhost 
(org.apache.zookeeper.client.ZooKeeperSaslClient)
and error:
[2015-11-03 18:33:00,607] ERROR An error: 
(java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
GSS initiate failed [Caused by GSSException: No valid credentials provided 
(Mechanism level: Server not found in Kerberos database (7) - 
LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's  
received SASL token. Zookeeper Client will go to AUTH_FAILED state. 
(org.apache.zookeeper.client.ZooKeeperSaslClient)
[2015-11-03 18:33:00,607] ERROR SASL authentication with Zookeeper Quorum 
member failed: javax.security.sasl.SaslException: An error: 
(java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
GSS initiate failed [Caused by GSSException: No valid credentials provided 
(Mechanism level: Server not found in Kerberos database (7) - 
LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's  
received SASL token. Zookeeper Client will go to AUTH_FAILED state. 
(org.apache.zookeeper.ClientCnxn)

Kerberos works OK with kinit and kvno using the keytab.
Some people said it's a DNS or /etc/hosts problem, but nslookup was OK with 
both the IP and the hostname, and /etc/hosts is: 
127.0.0.1   myhost localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

I tested it with the host's IP too.
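
One frequent culprit with this exact symptom (an assumption worth checking, 
not a confirmed diagnosis) is that listing the real hostname on the 127.0.0.1 
line makes myhost resolve to the loopback address, so Kerberos looks up the 
service principal under the wrong name. A layout that avoids this puts the 
host's real address on its own line:

127.0.0.1   localhost
192.168.18.241   myhost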



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2658) Implement SASL/PLAIN

2015-11-03 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987474#comment-14987474
 ] 

Jun Rao commented on KAFKA-2658:


[~rsivaram], I wasn't so concerned about the size of the patch. Since this is a 
user-facing change, we probably should do a KIP discussion so that the 
community is aware of the change. Given the release timeline, I think it's 
better to do that post 0.9.0, probably 0.9.1. Thank you for all your help so 
far.

> Implement SASL/PLAIN
> 
>
> Key: KAFKA-2658
> URL: https://issues.apache.org/jira/browse/KAFKA-2658
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> KAFKA-1686 supports SASL/Kerberos using GSSAPI. We should enable more SASL 
> mechanisms. SASL/PLAIN would enable a simpler use of SASL, which along with 
> SSL provides a secure Kafka that uses username/password for client 
> authentication.
> SASL/PLAIN protocol and its uses are described in 
> [https://tools.ietf.org/html/rfc4616]. It is supported in Java.
> This should be implemented after KAFKA-1686. This task should also hopefully 
> enable simpler unit testing of the SASL code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2731) Kerberos on same host with Kafka does not find server in its database on Ubuntu

2015-11-03 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987487#comment-14987487
 ] 

Flavio Junqueira commented on KAFKA-2731:
-

[~mabbasi90.class] You also need to configure the zookeeper ensemble. You'll 
need a section for the server that looks like this:

{noformat}
Server {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/zookeeper/conf/zookeeper.keytab"
  storeKey=true
  useTicketCache=false
  principal="zookeeper/fully.qualified.domain.name@";
};
{noformat}
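
The ZooKeeper client side (here, the Kafka broker) then needs a matching 
{{Client}} section; a sketch with a placeholder keytab path and realm (both 
are assumptions, adjust to your deployment):

{noformat}
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/kafka/conf/kafka.keytab"
  storeKey=true
  useTicketCache=false
  principal="kafka/fully.qualified.domain.name@YOUR-REALM";
};
{noformat}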

> Kerberos on same host with Kafka does not find server in its database on 
> Ubuntu
> 
>
> Key: KAFKA-2731
> URL: https://issues.apache.org/jira/browse/KAFKA-2731
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Mohammad Abbasi
>
> After configuring Kafka to use the keytab created in Kerberos, as described in 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61326390,
> the Kerberos log shows:
> Nov 02 17:25:13 myhost krb5kdc[3307](info): TGS_REQ (5 etypes {17 16 23 1 3}) 
> 192.168.18.241: LOOKING_UP_SERVER: authtime 0,  kafka/myh...@a.org for 
> , Server not found in Kerberos database
> Kafka's log:
> SASL Connection info:
> [2015-11-03 18:33:00,544] DEBUG creating sasl client: 
> client=kafka/myh...@a.org;service=zookeeper;serviceHostname=myhost 
> (org.apache.zookeeper.client.ZooKeeperSaslClient)
> and error:
> [2015-11-03 18:33:00,607] ERROR An error: 
> (java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
> GSS initiate failed [Caused by GSSException: No valid credentials provided 
> (Mechanism level: Server not found in Kerberos database (7) - 
> LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's  
> received SASL token. Zookeeper Client will go to AUTH_FAILED state. 
> (org.apache.zookeeper.client.ZooKeeperSaslClient)
> [2015-11-03 18:33:00,607] ERROR SASL authentication with Zookeeper Quorum 
> member failed: javax.security.sasl.SaslException: An error: 
> (java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
> GSS initiate failed [Caused by GSSException: No valid credentials provided 
> (Mechanism level: Server not found in Kerberos database (7) - 
> LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's  
> received SASL token. Zookeeper Client will go to AUTH_FAILED state. 
> (org.apache.zookeeper.ClientCnxn)
> Kerberos works OK with kinit and kvno using the keytab.
> Some people said it's a DNS or /etc/hosts problem, but nslookup was OK with 
> both the IP and the hostname, and /etc/hosts is: 
> 127.0.0.1   myhost localhost
> # The following lines are desirable for IPv6 capable hosts
> ::1 ip6-localhost ip6-loopback
> fe00::0 ip6-localnet
> ff00::0 ip6-mcastprefix
> ff02::1 ip6-allnodes
> ff02::2 ip6-allrouters
> I tested it with the host's IP too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2719: Use wildcard classpath for dependa...

2015-11-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/400


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2719) Kafka classpath has grown too large and breaks some system tests

2015-11-03 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2719:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 400
[https://github.com/apache/kafka/pull/400]

> Kafka classpath has grown too large and breaks some system tests
> 
>
> Key: KAFKA-2719
> URL: https://issues.apache.org/jira/browse/KAFKA-2719
> Project: Kafka
>  Issue Type: Bug
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.0.0
>
>
> The jars added under KAFKA-2369 make the Kafka command line used in system 
> tests much longer than 4096 characters due to the extra jars in the classpath. 
> Since the ps command used to find processes in system tests truncates the 
> command line, some system tests are failing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2719) Kafka classpath has grown too large and breaks some system tests

2015-11-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987531#comment-14987531
 ] 

ASF GitHub Bot commented on KAFKA-2719:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/400


> Kafka classpath has grown too large and breaks some system tests
> 
>
> Key: KAFKA-2719
> URL: https://issues.apache.org/jira/browse/KAFKA-2719
> Project: Kafka
>  Issue Type: Bug
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.0.0
>
>
> The jars added under KAFKA-2369 make the Kafka command line used in system 
> tests much longer than 4096 characters due to the extra jars in the classpath. 
> Since the ps command used to find processes in system tests truncates the 
> command line, some system tests are failing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #89

2015-11-03 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2719; Use wildcard classpath for dependant-libs

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 694e03c35582212749db5efaabd98b9f723609d5 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 694e03c35582212749db5efaabd98b9f723609d5
 > git rev-list 7c33475274cb6e65a8e8d907e7fef6e56bc8c8e6 # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson2149702466518689473.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 18.627 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson959448687400365442.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:contrib:clean UP-TO-DATE
:copycat:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:contrib:hadoop-consumer:clean UP-TO-DATE
:contrib:hadoop-producer:clean UP-TO-DATE
:copycat:api:clean UP-TO-DATE
:copycat:file:clean UP-TO-DATE
:copycat:json:clean UP-TO-DATE
:copycat:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 17.969 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45


[jira] [Created] (KAFKA-2732) Add support for consumer test with ZK Auth, SASL and SSL

2015-11-03 Thread Flavio Junqueira (JIRA)
Flavio Junqueira created KAFKA-2732:
---

 Summary: Add support for consumer test with ZK Auth, SASL and SSL
 Key: KAFKA-2732
 URL: https://issues.apache.org/jira/browse/KAFKA-2732
 Project: Kafka
  Issue Type: Test
  Components: security
Affects Versions: 0.9.0.0
Reporter: Flavio Junqueira
Assignee: Flavio Junqueira
 Fix For: 0.9.0.0


Extend SaslSslConsumerTest to use ZK Auth and add the support needed for it to 
work properly. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2716) Make Kafka core not depend on log4j-appender

2015-11-03 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987537#comment-14987537
 ] 

Ismael Juma commented on KAFKA-2716:


[~singhashish], your PR looks good to me.

> Make Kafka core not depend on log4j-appender
> 
>
> Key: KAFKA-2716
> URL: https://issues.apache.org/jira/browse/KAFKA-2716
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Investigate why core needs to depend on log4j-appender. AFAIK, there is no 
> real dependency; however, if the dependency is removed, the tests won't build. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2732) Add support for consumer test with ZK Auth, SASL and SSL

2015-11-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987550#comment-14987550
 ] 

ASF GitHub Bot commented on KAFKA-2732:
---

GitHub user fpj opened a pull request:

https://github.com/apache/kafka/pull/410

KAFKA-2732: Add class for ZK Auth.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fpj/kafka KAFKA-2732

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/410.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #410


commit bfb191d08b871b5ce6d71e44af883370725d4164
Author: Flavio Junqueira 
Date:   2015-11-03T16:22:33Z

KAFKA-2732: Add class for ZK Auth.




> Add support for consumer test with ZK Auth, SASL and SSL
> 
>
> Key: KAFKA-2732
> URL: https://issues.apache.org/jira/browse/KAFKA-2732
> Project: Kafka
>  Issue Type: Test
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.0.0
>
>
> Extend SaslSslConsumerTest to use ZK Auth and add the support needed for it 
> to work properly. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2732: Add class for ZK Auth.

2015-11-03 Thread fpj
GitHub user fpj opened a pull request:

https://github.com/apache/kafka/pull/410

KAFKA-2732: Add class for ZK Auth.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fpj/kafka KAFKA-2732

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/410.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #410


commit bfb191d08b871b5ce6d71e44af883370725d4164
Author: Flavio Junqueira 
Date:   2015-11-03T16:22:33Z

KAFKA-2732: Add class for ZK Auth.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #749

2015-11-03 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2719; Use wildcard classpath for dependant-libs

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 694e03c35582212749db5efaabd98b9f723609d5 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 694e03c35582212749db5efaabd98b9f723609d5
 > git rev-list 7c33475274cb6e65a8e8d907e7fef6e56bc8c8e6 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson5624780592953254289.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 15.081 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson2942420518114071809.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean
:contrib:clean UP-TO-DATE
:copycat:clean UP-TO-DATE
:core:clean
:examples:clean
:log4j-appender:clean
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:contrib:hadoop-consumer:clean UP-TO-DATE
:contrib:hadoop-producer:clean UP-TO-DATE
:copycat:api:clean UP-TO-DATE
:copycat:file:clean UP-TO-DATE
:copycat:json:clean UP-TO-DATE
:copycat:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk7:clients:compileJava
Note: 
 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar
:kafka-trunk-jdk7:log4j-appender:compileJava
:kafka-trunk-jdk7:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:classes
:kafka-trunk-jdk7:log4j-appender:jar
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMES

[jira] [Commented] (KAFKA-2731) Kerberos on same host with Kafka does not find server in its database on Ubuntu

2015-11-03 Thread Mohammad Abbasi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987587#comment-14987587
 ] 

Mohammad Abbasi commented on KAFKA-2731:


Thank you for the response [~fpj]. So the keytab files must be different for 
the client-server pairs of Kafka and Zookeeper? And the service names must be 
zookeeper and kafka?

> Kerberos on same host with Kafka does not find server in its database on 
> Ubuntu
> 
>
> Key: KAFKA-2731
> URL: https://issues.apache.org/jira/browse/KAFKA-2731
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Mohammad Abbasi
>
> After configuring Kafka to use the keytab created in Kerberos, as described in 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61326390,
> the Kerberos log shows:
> Nov 02 17:25:13 myhost krb5kdc[3307](info): TGS_REQ (5 etypes {17 16 23 1 3}) 
> 192.168.18.241: LOOKING_UP_SERVER: authtime 0,  kafka/myh...@a.org for 
> , Server not found in Kerberos database
> Kafka's log:
> SASL Connection info:
> [2015-11-03 18:33:00,544] DEBUG creating sasl client: 
> client=kafka/myh...@a.org;service=zookeeper;serviceHostname=myhost 
> (org.apache.zookeeper.client.ZooKeeperSaslClient)
> and error:
> [2015-11-03 18:33:00,607] ERROR An error: 
> (java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
> GSS initiate failed [Caused by GSSException: No valid credentials provided 
> (Mechanism level: Server not found in Kerberos database (7) - 
> LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's  
> received SASL token. Zookeeper Client will go to AUTH_FAILED state. 
> (org.apache.zookeeper.client.ZooKeeperSaslClient)
> [2015-11-03 18:33:00,607] ERROR SASL authentication with Zookeeper Quorum 
> member failed: javax.security.sasl.SaslException: An error: 
> (java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
> GSS initiate failed [Caused by GSSException: No valid credentials provided 
> (Mechanism level: Server not found in Kerberos database (7) - 
> LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's  
> received SASL token. Zookeeper Client will go to AUTH_FAILED state. 
> (org.apache.zookeeper.ClientCnxn)
> Kerberos works OK with kinit and kvno using the keytab.
> Some people said it's a DNS or /etc/hosts problem, but nslookup was OK with 
> both the IP and the hostname, and /etc/hosts is: 
> 127.0.0.1   myhost localhost
> # The following lines are desirable for IPv6 capable hosts
> ::1 ip6-localhost ip6-loopback
> fe00::0 ip6-localnet
> ff00::0 ip6-mcastprefix
> ff02::1 ip6-allnodes
> ff02::2 ip6-allrouters
> I tested it with the host's IP too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2731) Kerberos on same host with Kafka does not find server in its database on Ubuntu

2015-11-03 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987594#comment-14987594
 ] 

Flavio Junqueira commented on KAFKA-2731:
-

I don't think they need to be different, but you need to provide the zk server 
config. The wiki page you pointed to focuses on the kafka broker, which 
doesn't need to know about the zk server configuration. That's why it 
omits it. 

> Kerberos on same host with Kafka does not find server in its database on 
> Ubuntu
> 
>
> Key: KAFKA-2731
> URL: https://issues.apache.org/jira/browse/KAFKA-2731
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Mohammad Abbasi
>
> After configuring Kafka to use the keytab created in Kerberos, as described in 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61326390,
> the Kerberos log shows:
> Nov 02 17:25:13 myhost krb5kdc[3307](info): TGS_REQ (5 etypes {17 16 23 1 3}) 
> 192.168.18.241: LOOKING_UP_SERVER: authtime 0,  kafka/myh...@a.org for 
> , Server not found in Kerberos database
> Kafka's log:
> SASL Connection info:
> [2015-11-03 18:33:00,544] DEBUG creating sasl client: 
> client=kafka/myh...@a.org;service=zookeeper;serviceHostname=myhost 
> (org.apache.zookeeper.client.ZooKeeperSaslClient)
> and error:
> [2015-11-03 18:33:00,607] ERROR An error: 
> (java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
> GSS initiate failed [Caused by GSSException: No valid credentials provided 
> (Mechanism level: Server not found in Kerberos database (7) - 
> LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's  
> received SASL token. Zookeeper Client will go to AUTH_FAILED state. 
> (org.apache.zookeeper.client.ZooKeeperSaslClient)
> [2015-11-03 18:33:00,607] ERROR SASL authentication with Zookeeper Quorum 
> member failed: javax.security.sasl.SaslException: An error: 
> (java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
> GSS initiate failed [Caused by GSSException: No valid credentials provided 
> (Mechanism level: Server not found in Kerberos database (7) - 
> LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's  
> received SASL token. Zookeeper Client will go to AUTH_FAILED state. 
> (org.apache.zookeeper.ClientCnxn)
> Kerberos works OK with kinit and kvno using the keytab.
> Some people said it's a DNS or /etc/hosts problem, but nslookup was OK with 
> both the IP and the hostname, and /etc/hosts is: 
> 127.0.0.1   myhost localhost
> # The following lines are desirable for IPv6 capable hosts
> ::1 ip6-localhost ip6-loopback
> fe00::0 ip6-localnet
> ff00::0 ip6-mcastprefix
> ff02::1 ip6-allnodes
> ff02::2 ip6-allrouters
> I tested it with the host's IP too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-36 - Rack aware replica assignment

2015-11-03 Thread Neha Narkhede
A few suggestions on improving the KIP:

*If some brokers have rack, and some do not, the algorithm will throw an
> exception. This is to prevent incorrect assignment caused by user error.*


In the KIP, can you clearly state the user-facing behavior when some
brokers have rack information and some don't? Which actions and requests
will error out, and how?

*Even distribution of partition leadership among brokers*


There is some information about arranging the sorted broker list interlaced
with rack ids. Can you describe the changes to the current algorithm in a
little more detail? How does this interlacing work if only a subset of
brokers have the rack id configured? Does it still work if an uneven number
of brokers is assigned to each rack? It might work; I'm looking for more
details on the changes, since they will affect the behavior seen by the user
- imbalance on either the leaders or data or both.
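
To make the question concrete, here is one plausible reading of "interlacing"
(round-robin across racks so adjacent positions land on different racks); a
hypothetical sketch for discussion, not the KIP's actual algorithm:

{code}
import java.util.*;

public class RackInterlacingSketch {
    // Walk the racks round-robin, taking the next broker from each rack, so
    // adjacent list positions sit on different racks.
    static List<Integer> interlaceByRack(Map<Integer, String> brokerToRack) {
        Map<String, Deque<Integer>> byRack = new TreeMap<>();
        new TreeMap<>(brokerToRack).forEach((broker, rack) ->
                byRack.computeIfAbsent(rack, r -> new ArrayDeque<>()).add(broker));
        List<Integer> interlaced = new ArrayList<>();
        while (!byRack.isEmpty()) {
            // Racks with fewer brokers simply run out earlier, which is one
            // way an uneven rack assignment could be handled.
            Iterator<Deque<Integer>> it = byRack.values().iterator();
            while (it.hasNext()) {
                Deque<Integer> rackBrokers = it.next();
                interlaced.add(rackBrokers.pollFirst());
                if (rackBrokers.isEmpty()) it.remove();
            }
        }
        return interlaced;
    }

    public static void main(String[] args) {
        Map<Integer, String> brokerToRack = new HashMap<>();
        brokerToRack.put(0, "rack-a"); brokerToRack.put(1, "rack-a");
        brokerToRack.put(2, "rack-b"); brokerToRack.put(3, "rack-c");
        System.out.println(interlaceByRack(brokerToRack)); // [0, 2, 3, 1]
    }
}
{code}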

On Mon, Nov 2, 2015 at 6:39 PM, Aditya Auradkar 
wrote:

> I think this sounds reasonable. Anyone else have comments?
>
> Aditya
>
> On Tue, Oct 27, 2015 at 5:23 PM, Allen Wang  wrote:
>
> > During the discussion in the hangout, it was mentioned that it would be
> > desirable that consumers know the rack information of the brokers so that
> > they can consume from the broker in the same rack to reduce latency. As I
> > understand it, this will only be beneficial if the consumer can consume from
> > any broker in the ISR, which is not possible now.
> >
> > I suggest we skip the change to TMR. Once the change is made to consumer
> to
> > be able to consume from any broker in ISR, the rack information can be
> > added to TMR.
> >
> > Another thing I want to confirm is the command line behavior. I think the
> > desirable default behavior is to fail fast on command line for incomplete
> > rack mapping. The error message can include further instruction that
> tells
> > the user to add an extra argument (like "--allow-partial-rackinfo") to
> > suppress the error and do an imperfect rack aware assignment. If the
> > default behavior is to allow incomplete mapping, the error can still be
> > easily missed.
> >
> > The affected command line tools are TopicCommand and
> > ReassignPartitionsCommand.
> >
> > Thanks,
> > Allen
> >
> >
> >
> >
> >
> > On Mon, Oct 26, 2015 at 12:55 PM, Aditya Auradkar <
> aaurad...@linkedin.com>
> > wrote:
> >
> > > Hi Allen,
> > >
> > > For TopicMetadataResponse to understand version, you can bump up the
> > > request version itself. Based on the version of the request, the
> response
> > > can be appropriately serialized. It shouldn't be a huge change. For
> > > example: We went through something similar for ProduceRequest recently
> (
> > > https://reviews.apache.org/r/33378/)
> > > I guess the reason protocol information is not included in the TMR is
> > > because the topic itself is independent of any particular protocol (SSL
> > vs
> > > Plaintext). Having said that, I'm not sure we even need rack
> information
> > in
> > > TMR. What usecase were you thinking of initially?
> > >
> > > For 1 - I'd be fine with adding an option to the command line tools
> that
> > > check rack assignment. For e.g. "--strict-assignment" or something
> > similar.
> > >
> > > Aditya
> > >
> > > On Thu, Oct 22, 2015 at 6:44 PM, Allen Wang 
> > wrote:
> > >
> > > > For 2 and 3, I have updated the KIP. Please take a look. One thing I
> > have
> > > > changed is removing the proposal to add rack to
> TopicMetadataResponse.
> > > The
> > > > reason is that unlike UpdateMetadataRequest, TopicMetadataResponse
> does
> > > not
> > > > understand version. I don't see a way to include rack without
> breaking
> > > old
> > > > version of clients. That's probably why secure protocol is not
> included
> > > in
> > > > the TopicMetadataResponse either. I think it will be a much bigger
> > change
> > > > to include rack in TopicMetadataResponse.
> > > >
> > > > For 1, my concern is that doing rack aware assignment without
> complete
> > > > broker to rack mapping will result in assignment that is not rack
> aware
> > > and
> > > > fail to provide fault tolerance in the event of rack outage. This
> kind
> > of
> > > > problem will be difficult to surface. And the cost of this problem is
> > > high:
> > > > you have to do partition reassignment if you are lucky to spot the
> > > problem
> > > > early on or face the consequence of data loss during real rack
> outage.
> > > >
> > > > I do see the concern of fail-fast as it might also cause data loss if
> > > > producer is not able produce the message due to topic creation
> failure.
> > > Is
> > > > it feasible to treat dynamic topic creation and command tools
> > > differently?
> > > > We allow dynamic topic creation with incomplete broker-rack mapping
> and
> > > > fail fast in command line. Another option is to let user determine
> the
> > > > behavior for command line. For example, by default fail fast in
> command
> > > > line but allow incomplete broker-rack mapping if another switch is
> >

[jira] [Commented] (KAFKA-2731) Kerberos on same host with Kafka does not find server in its database on Ubuntu

2015-11-03 Thread Mohammad Abbasi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987681#comment-14987681
 ] 

Mohammad Abbasi commented on KAFKA-2731:


Ok thanx, I'll test this soon.

> Kerberos on same host with Kafka does not find server in its database on 
> Ubuntu
> 
>
> Key: KAFKA-2731
> URL: https://issues.apache.org/jira/browse/KAFKA-2731
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Mohammad Abbasi
>
> Configuring Kafka to use keytab created in Kerberos, as it's said in 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61326390,
> Kerberos logs:
> Nov 02 17:25:13 myhost krb5kdc[3307](info): TGS_REQ (5 etypes {17 16 23 1 3}) 
> 192.168.18.241: LOOKING_UP_SERVER: authtime 0,  kafka/myh...@a.org for 
> , Server not found in Kerberos database
> Kafka's log:
> SASL Connection info:
> [2015-11-03 18:33:00,544] DEBUG creating sasl client: 
> client=kafka/myh...@a.org;service=zookeeper;serviceHostname=myhost 
> (org.apache.zookeeper.client.ZooKeeperSaslClient)
> and error:
> [2015-11-03 18:33:00,607] ERROR An error: 
> (java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
> GSS initiate failed [Caused by GSSException: No valid credentials provided 
> (Mechanism level: Server not found in Kerberos database (7) - 
> LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's  
> received SASL token. Zookeeper Client will go to AUTH_FAILED state. 
> (org.apache.zookeeper.client.ZooKeeperSaslClient)
> [2015-11-03 18:33:00,607] ERROR SASL authentication with Zookeeper Quorum 
> member failed: javax.security.sasl.SaslException: An error: 
> (java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
> GSS initiate failed [Caused by GSSException: No valid credentials provided 
> (Mechanism level: Server not found in Kerberos database (7) - 
> LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's  
> received SASL token. Zookeeper Client will go to AUTH_FAILED state. 
> (org.apache.zookeeper.ClientCnxn)
> Kerberos works ok in kinit and kvno with the keytab.
> Some people said it's DNS or /etc/hosts problem, but nslookup was ok with ip 
> and hostname
> and /etc/hosts is: 
> 127.0.0.1   myhost localhost
> # The following lines are desirable for IPv6 capable hosts
> ::1 ip6-localhost ip6-loopback
> fe00::0 ip6-localnet
> ff00::0 ip6-mcastprefix
> ff02::1 ip6-allnodes
> ff02::2 ip6-allrouters
> I tested it with the host's ip too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2724: ZK Auth documentation.

2015-11-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/409


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2724) Document ZooKeeper authentication

2015-11-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987709#comment-14987709
 ] 

ASF GitHub Bot commented on KAFKA-2724:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/409


> Document ZooKeeper authentication 
> --
>
> Key: KAFKA-2724
> URL: https://issues.apache.org/jira/browse/KAFKA-2724
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Add documentation for ZooKeeper authentication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2724) Document ZooKeeper authentication

2015-11-03 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2724:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 409
[https://github.com/apache/kafka/pull/409]

> Document ZooKeeper authentication 
> --
>
> Key: KAFKA-2724
> URL: https://issues.apache.org/jira/browse/KAFKA-2724
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Add documentation for ZooKeeper authentication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2727: Topology partial construction

2015-11-03 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/411

KAFKA-2727: Topology partial construction

@guozhangwang 


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka topology_partial_construction

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/411.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #411


commit e561254d9f3e6927a9e0d62c85d7144d936d678b
Author: Yasuhiro Matsuda 
Date:   2015-10-29T17:03:10Z

partial construction of topology

commit 3ae930c43e0f2f60caa183d7265d3e69442a3d96
Author: Yasuhiro Matsuda 
Date:   2015-11-02T22:59:42Z

Merge branch 'trunk' of github.com:apache/kafka into 
topology_partial_construction

commit f984b3214e72c678ea6f45a564325eb212c4ccdf
Author: Yasuhiro Matsuda 
Date:   2015-11-02T23:13:31Z

cleanup

commit d170007d0df57af3afbe22c121aac0bd10dbeb7e
Author: Yasuhiro Matsuda 
Date:   2015-11-02T23:24:06Z

test




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2727) initialize only the part of the topology relevant to the task

2015-11-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987742#comment-14987742
 ] 

ASF GitHub Bot commented on KAFKA-2727:
---

GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/411

KAFKA-2727: Topology partial construction

@guozhangwang 


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka topology_partial_construction

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/411.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #411


commit e561254d9f3e6927a9e0d62c85d7144d936d678b
Author: Yasuhiro Matsuda 
Date:   2015-10-29T17:03:10Z

partial construction of topology

commit 3ae930c43e0f2f60caa183d7265d3e69442a3d96
Author: Yasuhiro Matsuda 
Date:   2015-11-02T22:59:42Z

Merge branch 'trunk' of github.com:apache/kafka into 
topology_partial_construction

commit f984b3214e72c678ea6f45a564325eb212c4ccdf
Author: Yasuhiro Matsuda 
Date:   2015-11-02T23:13:31Z

cleanup

commit d170007d0df57af3afbe22c121aac0bd10dbeb7e
Author: Yasuhiro Matsuda 
Date:   2015-11-02T23:24:06Z

test




> initialize only the part of the topology relevant to the task
> -
>
> Key: KAFKA-2727
> URL: https://issues.apache.org/jira/browse/KAFKA-2727
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.0.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2716: Make Kafka core not depend on log4...

2015-11-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/405


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2716) Make Kafka core not depend on log4j-appender

2015-11-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987800#comment-14987800
 ] 

ASF GitHub Bot commented on KAFKA-2716:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/405


> Make Kafka core not depend on log4j-appender
> 
>
> Key: KAFKA-2716
> URL: https://issues.apache.org/jira/browse/KAFKA-2716
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.0.0
>
>
> Investigate why core needs to depend on log4j-appender. AFAIK, there is no 
> real dependency; however, if the dependency is removed, the tests won't build. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2716) Make Kafka core not depend on log4j-appender

2015-11-03 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2716.
-
   Resolution: Fixed
Fix Version/s: 0.9.0.0

Issue resolved by pull request 405
[https://github.com/apache/kafka/pull/405]

> Make Kafka core not depend on log4j-appender
> 
>
> Key: KAFKA-2716
> URL: https://issues.apache.org/jira/browse/KAFKA-2716
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.0.0
>
>
> Investigate why core needs to depend on log4j-appender. AFAIK, there is no 
> real dependency; however, if the dependency is removed, the tests won't build. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: One more Kafka Meetup hosted by LinkedIn in 2015 (this time in San Francisco) - does anyone want to talk?

2015-11-03 Thread Ed Yakabosky
Hi all,

Two corrections to the invite:

   1. The invitation is for November 18, 2015.  *NOT 2016.*  I was a little
   hasty...
   2. LinkedIn has finished remodeling our broadcast room, so we are going
   to host the meet up in Mountain View, not San Francisco.

We've arranged for speakers from HortonWorks to talk about Security and
LinkedIn to talk about Quotas.  We are still looking for one more speaker,
so please let me know if you are interested.

Thanks!
Ed







On Fri, Oct 30, 2015 at 12:49 PM, Ed Yakabosky 
wrote:

> Hi all,
>
> LinkedIn is hoping to host one more Apache Kafka meetup this year on
> November 18 in our San Francisco office.  We're working on building the
> agenda now.  Does anyone want to talk?  Please send me (and Clark) a
> private email with a short description of what you would be talking about
> if interested.
>
> --
> Thanks,
>
> Ed Yakabosky
> Technical Program Management @ LinkedIn
>
>


-- 
Thanks,
Ed Yakabosky


Re: One more Kafka Meetup hosted by LinkedIn in 2015 (this time in San Francisco) - does anyone want to talk?

2015-11-03 Thread Lukas Steiblys
This is sad news. I was looking forward to finally going to a Kafka or Samza 
meetup. Going to Mountain View for a meetup is just unrealistic with 2h 
travel time each way.


Lukas

-Original Message- 
From: Ed Yakabosky

Sent: Tuesday, November 3, 2015 10:36 AM
To: us...@kafka.apache.org ; dev@kafka.apache.org ; Clark Haskins
Subject: Re: One more Kafka Meetup hosted by LinkedIn in 2015 (this time in 
San Francisco) - does anyone want to talk?


Hi all,

Two corrections to the invite:

  1. The invitation is for November 18, 2015.  *NOT 2016.*  I was a little
  hasty...
  2. LinkedIn has finished remodeling our broadcast room, so we are going
  to host the meet up in Mountain View, not San Francisco.

We've arranged for speakers from HortonWorks to talk about Security and
LinkedIn to talk about Quotas.  We are still looking for one more speaker,
so please let me know if you are interested.

Thanks!
Ed







On Fri, Oct 30, 2015 at 12:49 PM, Ed Yakabosky 
wrote:


Hi all,

LinkedIn is hoping to host one more Apache Kafka meetup this year on
November 18 in our San Francisco office.  We're working on building the
agenda now.  Does anyone want to talk?  Please send me (and Clark) a
private email with a short description of what you would be talking about
if interested.

--
Thanks,

Ed Yakabosky
Technical Program Management @ LinkedIn




--
Thanks,
Ed Yakabosky 



[jira] [Updated] (KAFKA-2730) partition-reassignment tool stops working due to error in registerMetric

2015-11-03 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2730:

Priority: Blocker  (was: Major)

> partition-reassignment tool stops working due to error in registerMetric
> 
>
> Key: KAFKA-2730
> URL: https://issues.apache.org/jira/browse/KAFKA-2730
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> I updated our test system to use Kafka from latest revision 
> 7c33475274cb6e65a8e8d907e7fef6e56bc8c8e6 and now I'm seeing:
> [2015-11-03 14:07:01,554] ERROR [KafkaApi-2] error when handling request 
> Name:LeaderAndIsrRequest;Version:0;Controller:3;ControllerEpoch:1;CorrelationId:5;ClientId:3;Leaders:BrokerEndPoint(3,192.168.60.168,21769);PartitionState:(5c700e33-9230-4219-a3e1-42574c175d62-logs,0)
>  -> 
> (LeaderAndIsrInfo:(Leader:3,ISR:3,LeaderEpoch:1,ControllerEpoch:1),ReplicationFactor:3),AllReplicas:2,3,1)
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: A metric named 'MetricName 
> [name=connection-close-rate, group=replica-fetcher-metrics, 
> description=Connections closed per second in the window., 
> tags={broker-id=3}]' already exists, can't register another one.
> at org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:285)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
> at 
> org.apache.kafka.common.network.Selector$SelectorMetrics.(Selector.java:578)
> at org.apache.kafka.common.network.Selector.(Selector.java:112)
> at kafka.server.ReplicaFetcherThread.(ReplicaFetcherThread.scala:69)
> at 
> kafka.server.ReplicaFetcherManager.createFetcherThread(ReplicaFetcherManager.scala:35)
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:83)
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:78)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at 
> kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:78)
> at kafka.server.ReplicaManager.makeFollowers(ReplicaManager.scala:791)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:628)
> at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:114)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:71)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> This happens when I'm running kafka-reassign-partitions.sh. As a result in 
> the verify command one of the partition reassignments says "is still in 
> progress" forever.
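
For reference, the collision is easy to reproduce directly against the metrics
registry; a minimal sketch (illustrative, against the 0.9-era client API) that
triggers the same "already exists" error:

import org.apache.kafka.common.MetricName
import org.apache.kafka.common.metrics.Metrics
import org.apache.kafka.common.metrics.stats.Avg
import scala.collection.JavaConverters._

object DuplicateMetricRepro extends App {
  val metrics = new Metrics()
  val name = new MetricName("connection-close-rate", "replica-fetcher-metrics",
    "Connections closed per second in the window.", Map("broker-id" -> "3").asJava)
  metrics.addMetric(name, new Avg())
  // Second registration with identical name/group/tags throws
  // IllegalArgumentException: "A metric named ... already exists".
  metrics.addMetric(name, new Avg())
}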



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2730) partition-reassignment tool stops working due to error in registerMetric

2015-11-03 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2730:

Assignee: Guozhang Wang

> partition-reassignment tool stops working due to error in registerMetric
> 
>
> Key: KAFKA-2730
> URL: https://issues.apache.org/jira/browse/KAFKA-2730
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Guozhang Wang
> Fix For: 0.9.0.0
>
>
> I updated our test system to use Kafka from latest revision 
> 7c33475274cb6e65a8e8d907e7fef6e56bc8c8e6 and now I'm seeing:
> [2015-11-03 14:07:01,554] ERROR [KafkaApi-2] error when handling request 
> Name:LeaderAndIsrRequest;Version:0;Controller:3;ControllerEpoch:1;CorrelationId:5;ClientId:3;Leaders:BrokerEndPoint(3,192.168.60.168,21769);PartitionState:(5c700e33-9230-4219-a3e1-42574c175d62-logs,0)
>  -> 
> (LeaderAndIsrInfo:(Leader:3,ISR:3,LeaderEpoch:1,ControllerEpoch:1),ReplicationFactor:3),AllReplicas:2,3,1)
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: A metric named 'MetricName 
> [name=connection-close-rate, group=replica-fetcher-metrics, 
> description=Connections closed per second in the window., 
> tags={broker-id=3}]' already exists, can't register another one.
> at org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:285)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
> at 
> org.apache.kafka.common.network.Selector$SelectorMetrics.(Selector.java:578)
> at org.apache.kafka.common.network.Selector.(Selector.java:112)
> at kafka.server.ReplicaFetcherThread.(ReplicaFetcherThread.scala:69)
> at 
> kafka.server.ReplicaFetcherManager.createFetcherThread(ReplicaFetcherManager.scala:35)
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:83)
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:78)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at 
> kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:78)
> at kafka.server.ReplicaManager.makeFollowers(ReplicaManager.scala:791)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:628)
> at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:114)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:71)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> This happens when I'm running kafka-reassign-partitions.sh. As a result in 
> the verify command one of the partition reassignments says "is still in 
> progress" forever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2730) partition-reassignment tool stops working due to error in registerMetric

2015-11-03 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987856#comment-14987856
 ] 

Gwen Shapira commented on KAFKA-2730:
-

[~guozhang] - I assigned this to you since you mentioned working on it. Feel 
free to reassign if you are not.

> partition-reassignment tool stops working due to error in registerMetric
> 
>
> Key: KAFKA-2730
> URL: https://issues.apache.org/jira/browse/KAFKA-2730
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> I updated our test system to use Kafka from latest revision 
> 7c33475274cb6e65a8e8d907e7fef6e56bc8c8e6 and now I'm seeing:
> [2015-11-03 14:07:01,554] ERROR [KafkaApi-2] error when handling request 
> Name:LeaderAndIsrRequest;Version:0;Controller:3;ControllerEpoch:1;CorrelationId:5;ClientId:3;Leaders:BrokerEndPoint(3,192.168.60.168,21769);PartitionState:(5c700e33-9230-4219-a3e1-42574c175d62-logs,0)
>  -> 
> (LeaderAndIsrInfo:(Leader:3,ISR:3,LeaderEpoch:1,ControllerEpoch:1),ReplicationFactor:3),AllReplicas:2,3,1)
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: A metric named 'MetricName 
> [name=connection-close-rate, group=replica-fetcher-metrics, 
> description=Connections closed per second in the window., 
> tags={broker-id=3}]' already exists, can't register another one.
> at org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:285)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
> at 
> org.apache.kafka.common.network.Selector$SelectorMetrics.(Selector.java:578)
> at org.apache.kafka.common.network.Selector.(Selector.java:112)
> at kafka.server.ReplicaFetcherThread.(ReplicaFetcherThread.scala:69)
> at 
> kafka.server.ReplicaFetcherManager.createFetcherThread(ReplicaFetcherManager.scala:35)
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:83)
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:78)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at 
> kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:78)
> at kafka.server.ReplicaManager.makeFollowers(ReplicaManager.scala:791)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:628)
> at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:114)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:71)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> This happens when I'm running kafka-reassign-partitions.sh. As a result in 
> the verify command one of the partition reassignments says "is still in 
> progress" forever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2730) partition-reassignment tool stops working due to error in registerMetric

2015-11-03 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987890#comment-14987890
 ] 

Guozhang Wang commented on KAFKA-2730:
--

Thanks [~gwenshap], yes I am working on this now.

> partition-reassignment tool stops working due to error in registerMetric
> 
>
> Key: KAFKA-2730
> URL: https://issues.apache.org/jira/browse/KAFKA-2730
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> I updated our test system to use Kafka from latest revision 
> 7c33475274cb6e65a8e8d907e7fef6e56bc8c8e6 and now I'm seeing:
> [2015-11-03 14:07:01,554] ERROR [KafkaApi-2] error when handling request 
> Name:LeaderAndIsrRequest;Version:0;Controller:3;ControllerEpoch:1;CorrelationId:5;ClientId:3;Leaders:BrokerEndPoint(3,192.168.60.168,21769);PartitionState:(5c700e33-9230-4219-a3e1-42574c175d62-logs,0)
>  -> 
> (LeaderAndIsrInfo:(Leader:3,ISR:3,LeaderEpoch:1,ControllerEpoch:1),ReplicationFactor:3),AllReplicas:2,3,1)
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: A metric named 'MetricName 
> [name=connection-close-rate, group=replica-fetcher-metrics, 
> description=Connections closed per second in the window., 
> tags={broker-id=3}]' already exists, can't register another one.
> at org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:285)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
> at 
> org.apache.kafka.common.network.Selector$SelectorMetrics.(Selector.java:578)
> at org.apache.kafka.common.network.Selector.(Selector.java:112)
> at kafka.server.ReplicaFetcherThread.(ReplicaFetcherThread.scala:69)
> at 
> kafka.server.ReplicaFetcherManager.createFetcherThread(ReplicaFetcherManager.scala:35)
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:83)
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:78)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at 
> kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:78)
> at kafka.server.ReplicaManager.makeFollowers(ReplicaManager.scala:791)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:628)
> at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:114)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:71)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> This happens when I'm running kafka-reassign-partitions.sh. As a result in 
> the verify command one of the partition reassignments says "is still in 
> progress" forever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2658) Implement SASL/PLAIN

2015-11-03 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987903#comment-14987903
 ] 

Rajini Sivaram commented on KAFKA-2658:
---

[~junrao] OK, thanks, will submit a KIP.

> Implement SASL/PLAIN
> 
>
> Key: KAFKA-2658
> URL: https://issues.apache.org/jira/browse/KAFKA-2658
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> KAFKA-1686 supports SASL/Kerberos using GSSAPI. We should enable more SASL 
> mechanisms. SASL/PLAIN would enable a simpler use of SASL, which along with 
> SSL provides a secure Kafka that uses username/password for client 
> authentication.
> SASL/PLAIN protocol and its uses are described in 
> [https://tools.ietf.org/html/rfc4616]. It is supported in Java.
> This should be implemented after KAFKA-1686. This task should also hopefully 
> enable simpler unit testing of the SASL code.
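
For context, the PLAIN initial response defined by RFC 4616 is just three
NUL-separated fields; a minimal sketch of building it - an illustration, not a
proposed Kafka API:

import java.nio.charset.StandardCharsets.UTF_8

// RFC 4616: message = [authzid] NUL authcid NUL passwd
def plainInitialResponse(authzid: String, authcid: String, passwd: String): Array[Byte] =
  s"$authzid\u0000$authcid\u0000$passwd".getBytes(UTF_8)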



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2731) Kerberos on same host with Kafka does not find server in its database on Ubuntu

2015-11-03 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987915#comment-14987915
 ] 

Flavio Junqueira commented on KAFKA-2731:
-

You may want to check the documentation we've checked in to trunk as part of 
KAFKA-2724. Have a look at docs/security.html and if you feel there is info 
missing there, let us know so that we can fix it.

> Kerberos on same host with Kafka does not find server in its database on 
> Ubuntu
> 
>
> Key: KAFKA-2731
> URL: https://issues.apache.org/jira/browse/KAFKA-2731
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Mohammad Abbasi
>
> Configuring Kafka to use keytab created in Kerberos, as it's said in 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61326390,
> Kerberos logs:
> Nov 02 17:25:13 myhost krb5kdc[3307](info): TGS_REQ (5 etypes {17 16 23 1 3}) 
> 192.168.18.241: LOOKING_UP_SERVER: authtime 0,  kafka/myh...@a.org for 
> , Server not found in Kerberos database
> Kafka's log:
> SASL Connection info:
> [2015-11-03 18:33:00,544] DEBUG creating sasl client: 
> client=kafka/myh...@a.org;service=zookeeper;serviceHostname=myhost 
> (org.apache.zookeeper.client.ZooKeeperSaslClient)
> and error:
> [2015-11-03 18:33:00,607] ERROR An error: 
> (java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
> GSS initiate failed [Caused by GSSException: No valid credentials provided 
> (Mechanism level: Server not found in Kerberos database (7) - 
> LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's  
> received SASL token. Zookeeper Client will go to AUTH_FAILED state. 
> (org.apache.zookeeper.client.ZooKeeperSaslClient)
> [2015-11-03 18:33:00,607] ERROR SASL authentication with Zookeeper Quorum 
> member failed: javax.security.sasl.SaslException: An error: 
> (java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
> GSS initiate failed [Caused by GSSException: No valid credentials provided 
> (Mechanism level: Server not found in Kerberos database (7) - 
> LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's  
> received SASL token. Zookeeper Client will go to AUTH_FAILED state. 
> (org.apache.zookeeper.ClientCnxn)
> Kerberos works ok in kinit and kvno with the keytab.
> Some people said it's DNS or /etc/hosts problem, but nslookup was ok with ip 
> and hostname
> and /etc/hosts is: 
> 127.0.0.1   myhost localhost
> # The following lines are desirable for IPv6 capable hosts
> ::1 ip6-localhost ip6-loopback
> fe00::0 ip6-localnet
> ff00::0 ip6-mcastprefix
> ff02::1 ip6-allnodes
> ff02::2 ip6-allrouters
> I tested it with the host's ip too.
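
One diagnostic worth trying: Kerberos clients generally build the service
principal from the canonical form of the server hostname, so the
"127.0.0.1 myhost localhost" line above can push resolution toward localhost,
and the KDC is then asked for a principal it does not know. A small check
(illustrative; "myhost" is the placeholder host from the report):

import java.net.InetAddress

object CanonicalHostCheck extends App {
  // If this prints something like "127.0.0.1 -> localhost", the client may
  // request zookeeper/localhost instead of zookeeper/myhost, which matches
  // the LOOKING_UP_SERVER failure seen in the KDC log.
  val addr = InetAddress.getByName("myhost")
  println(s"${addr.getHostAddress} -> ${addr.getCanonicalHostName}")
}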



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2733) Distinguish metric names inside the sensor registry

2015-11-03 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-2733:


 Summary: Distinguish metric names inside the sensor registry
 Key: KAFKA-2733
 URL: https://issues.apache.org/jira/browse/KAFKA-2733
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang
 Fix For: 0.9.0.1


Since stream tasks can share the same StreamingMetrics object, and the 
MetricName is distinguishable only by the group name (same for the same type of 
states, and for other streaming metrics) and the tags (currently only the 
client-ids of the StreamThread), having multiple tasks within a single stream 
thread could lead to an IllegalStateException upon trying to register the same 
metric from those tasks.
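
A sketch of the direction a fix could take - my illustration only, and the
"task-id" tag name is an assumption: give each task's metrics a task-scoped
tag so the resulting MetricName objects differ even when name and group are
shared:

import org.apache.kafka.common.MetricName
import org.apache.kafka.common.metrics.Metrics
import org.apache.kafka.common.metrics.stats.Rate
import scala.collection.JavaConverters._

object TaskScopedMetrics extends App {
  val metrics = new Metrics()
  def registerFor(taskId: String): Unit = {
    // The extra "task-id" tag (assumed name) makes the MetricName unique
    // per task even though name and group are identical across tasks.
    val tags = Map("client-id" -> "stream-thread-1", "task-id" -> taskId).asJava
    val sensor = metrics.sensor(s"commit-time-$taskId")
    sensor.add(new MetricName("commit-rate", "streaming-metrics", "", tags), new Rate())
  }
  registerFor("0_1")
  registerFor("0_2") // no collision: tags differ per task
}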



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2698) add paused API

2015-11-03 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2698:

Priority: Critical  (was: Blocker)

> add paused API
> --
>
> Key: KAFKA-2698
> URL: https://issues.apache.org/jira/browse/KAFKA-2698
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> org.apache.kafka.clients.consumer.Consumer tends to follow a pattern of 
> having an action API paired with a query API:
> subscribe() has subscription()
> assign() has assignment()
> There's no analogous API for pause.
> Should there be a paused() API returning Set<TopicPartition>?
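
For the record, a sketch of how the pairing could look, by analogy with
assign()/assignment() - an illustration of the proposal, not the committed
API:

import java.util.{Set => JSet}
import org.apache.kafka.common.TopicPartition

trait PausableConsumer {
  def pause(partitions: TopicPartition*): Unit // existing action API
  def paused(): JSet[TopicPartition]           // proposed query API
}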



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #90

2015-11-03 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2724: ZK Auth documentation.

--
[...truncated 4643 lines...]

org.apache.kafka.copycat.json.JsonConverterTest > noSchemaToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToJsonStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > arrayToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > dateToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timeToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > floatToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > arrayToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToCopycatNonStringKeys 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > bytesToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > doubleToCopycat PASSED
:copycat:runtime:checkstyleMain
:copycat:runtime:compileTestJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:copycat:runtime:processTestResources
:copycat:runtime:testClasses
:copycat:runtime:checkstyleTest
:copycat:runtime:test

org.apache.kafka.copycat.storage.KafkaOffsetBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.copycat.storage.KafkaOffsetBackingStoreTest > 
testReloadOnStart PASSED

org.apache.kafka.copycat.storage.KafkaOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.copycat.storage.KafkaOffsetBackingStoreTest > testSetFailure 
PASSED

org.apache.kafka.copycat.storage.KafkaOffsetBackingStoreTest > testMissingTopic 
PASSED

org.apache.kafka.copycat.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.copycat.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.copycat.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.copycat.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.copycat.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.copycat.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.copycat.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.copycat.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.copycat.storage.KafkaConfigStorageTest > testStartStop PASSED

org.apache.kafka.copycat.storage.KafkaConfigStorageTest > 
testPutConnectorConfig PASSED

org.apache.kafka.copycat.storage.KafkaConfigStorageTest > testPutTaskConfigs 
PASSED

org.apache.kafka.copycat.storage.KafkaConfigStorageTest > testRestore PASSED

org.apache.kafka.copycat.storage.KafkaConfigStorageTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED

org.apache.kafka.copycat.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.copycat.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testPollsInBackground 
PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testDeliverConvertsData 
PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testCommit PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > 
testCommitTaskFlushFailure PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testCommitConsumerFailure 
PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testCommitTimeout PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testAssignmentPauseResume 
PASSED

org.apache.kafka.copycat.runtime.WorkerTest > testReconfigureConnectorTasks 
PASSED

org.apache.kafka.copycat.runtime.WorkerTest > testAddRemoveTask PASSED

org.apache.kafka.copycat.runtime.WorkerTest > testStopInvalidTask PASSED

org.apache.kafka.copycat.runtime.WorkerTest > testCleanupTasksOnStop PASSED

org.apache.kafka.copycat.runtime.WorkerTest > testStopInvalidConnector PASSED

org.apache.kafka.copycat.runtime.WorkerTest > testAddRemoveConnector PASSED

org.apache.kafka.copycat.runtime.WorkerSourceTaskTest > testPollsInBackground 
PASSED

org.apache.kafka.copycat.runtime.WorkerSourceTaskTest > testCommit PASSED

org.apache.k

[jira] [Updated] (KAFKA-2727) initialize only the part of the topology relevant to the task

2015-11-03 Thread Yasuhiro Matsuda (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yasuhiro Matsuda updated KAFKA-2727:

Description: 
Currently each streaming task initializes the entire topology regardless of the 
assigned topic-partitions. This is wasteful especially when the topology has 
local state stores. All local state stores are restored from their change log 
topics even when are not actually used in the task execution. To fix this, the 
task initialization should be aware of the relevant subgraph of the topology 
and initializes only processors and state stores in the subgraph.
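
A minimal sketch of the subgraph computation this implies - my illustration,
not the PR's code: walk the processor graph from the source nodes of the
assigned topics and initialize only what is reached:

// Node and the maps below are stand-ins, not Kafka Streams types.
case class Node(name: String, children: Seq[String])

def relevantSubgraph(nodes: Map[String, Node],
                     sourceOfTopic: Map[String, String],
                     assignedTopics: Set[String]): Set[String] = {
  val roots = assignedTopics.flatMap(sourceOfTopic.get)
  def walk(name: String, seen: Set[String]): Set[String] =
    if (seen(name)) seen
    else nodes(name).children.foldLeft(seen + name)((s, c) => walk(c, s))
  roots.foldLeft(Set.empty[String])((seen, r) => walk(r, seen))
}
// Only processors and state stores attached to the returned names would be
// initialized (and only their change logs replayed).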


> initialize only the part of the topology relevant to the task
> -
>
> Key: KAFKA-2727
> URL: https://issues.apache.org/jira/browse/KAFKA-2727
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.0.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>
> Currently each streaming task initializes the entire topology regardless of 
> the assigned topic-partitions. This is wasteful, especially when the topology 
> has local state stores. All local state stores are restored from their change 
> log topics even when they are not actually used in the task execution. To fix 
> this, the task initialization should be aware of the relevant subgraph of the 
> topology and initialize only the processors and state stores in the subgraph.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-trunk-jdk7 #750

2015-11-03 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk8 #91

2015-11-03 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2716: Make Kafka core not depend on log4j-appender

--
[...truncated 346 lines...]
:copycat:runtime:jar
:jarAll
:docsJar_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:clients:javadoc UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala UP-TO-DATE
:kafka-trunk-jdk8:core:processResources UP-TO-DATE
:kafka-trunk-jdk8:core:classes UP-TO-DATE
:kafka-trunk-jdk8:core:javadoc
:kafka-trunk-jdk8:core:javadocJar
:kafka-trunk-jdk8:core:scaladoc
[ant:scaladoc] Element 
' 
does not exist.
[ant:scaladoc] 
:293:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.uncleanLeaderElectionRate
[ant:scaladoc] ^
[ant:scaladoc] 
:294:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.leaderElectionTimer
[ant:scaladoc] ^
[ant:scaladoc] warning: there were 15 feature warning(s); re-run with -feature 
for details
[ant:scaladoc] 
:72:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:32:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:137:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:120:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:97:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#put".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:152:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#take".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 9 warnings found
:kafka-trunk-jdk8:core:scaladocJar
:kafka-trunk-jdk8:core:docsJar
:docsJar_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:clients:javadoc UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0

:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^


Re: One more Kafka Meetup hosted by LinkedIn in 2015 (this time in San Francisco) - does anyone want to talk?

2015-11-03 Thread Ed Yakabosky
I'm sorry to hear that Lukas.  I have heard that people are starting to do
carpools via rydeful.com for some of these meetups.

Additionally, we will live stream and record the presentations, so you can
participate remotely.

Ed

On Tue, Nov 3, 2015 at 10:43 AM, Lukas Steiblys 
wrote:

> This is sad news. I was looking forward to finally going to a Kafka or
> Samza meetup. Going to Mountain View for a meetup is just unrealistic with
> 2h travel time each way.
>
> Lukas
>
> -Original Message- From: Ed Yakabosky
> Sent: Tuesday, November 3, 2015 10:36 AM
> To: us...@kafka.apache.org ; dev@kafka.apache.org ; Clark Haskins
> Subject: Re: One more Kafka Meetup hosted by LinkedIn in 2015 (this time
> in San Francisco) - does anyone want to talk?
>
> Hi all,
>
> Two corrections to the invite:
>
>   1. The invitation is for November 18, 2015.  *NOT 2016.*  I was a little
>   hasty...
>   2. LinkedIn has finished remodeling our broadcast room, so we are going
>
>   to host the meet up in Mountain View, not San Francisco.
>
> We've arranged for speakers from HortonWorks to talk about Security and
> LinkedIn to talk about Quotas.  We are still looking for one more speaker,
> so please let me know if you are interested.
>
> Thanks!
> Ed
>
>
>
>
>
>
>
> On Fri, Oct 30, 2015 at 12:49 PM, Ed Yakabosky 
> wrote:
>
> Hi all,
>>
>> LinkedIn is hoping to host one more Apache Kafka meetup this year on
>> November 18 in our San Francisco office.  We're working on building the
>> agenda now.  Does anyone want to talk?  Please send me (and Clark) a
>> private email with a short description of what you would be talking about
>> if interested.
>>
>> --
>> Thanks,
>>
>> Ed Yakabosky
>> ​Technical Program Management @ LinkedIn>
>>
>>
>
> --
> Thanks,
> Ed Yakabosky
>



-- 
Thanks,
Ed Yakabosky


[jira] [Commented] (KAFKA-2730) partition-reassignment tool stops working due to error in registerMetric

2015-11-03 Thread Hannu Valtonen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988097#comment-14988097
 ] 

Hannu Valtonen commented on KAFKA-2730:
---

Answering Guozhang Wang's question on the mailing list here: 

Yes, the servers all have the same version. (They were just brought up with 
that version from scratch.)

As for the request logs logged by the server (i.e. at INFO level): I'm afraid 
the VM with the logs was deleted already. I can reproduce it tomorrow when I'm 
at the office again if needed. (It reproduced consistently for us.)

As background the test cluster is a two node cluster with a replication factor 
of 2 which is being grown to add a third node. The reassign partitions is 
called on the third node pretty much immediately after Kafka starts up and 
starts responding.



> partition-reassignment tool stops working due to error in registerMetric
> 
>
> Key: KAFKA-2730
> URL: https://issues.apache.org/jira/browse/KAFKA-2730
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> I updated our test system to use Kafka from latest revision 
> 7c33475274cb6e65a8e8d907e7fef6e56bc8c8e6 and now I'm seeing:
> [2015-11-03 14:07:01,554] ERROR [KafkaApi-2] error when handling request 
> Name:LeaderAndIsrRequest;Version:0;Controller:3;ControllerEpoch:1;CorrelationId:5;ClientId:3;Leaders:BrokerEndPoint(3,192.168.60.168,21769);PartitionState:(5c700e33-9230-4219-a3e1-42574c175d62-logs,0)
>  -> 
> (LeaderAndIsrInfo:(Leader:3,ISR:3,LeaderEpoch:1,ControllerEpoch:1),ReplicationFactor:3),AllReplicas:2,3,1)
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: A metric named 'MetricName 
> [name=connection-close-rate, group=replica-fetcher-metrics, 
> description=Connections closed per second in the window., 
> tags={broker-id=3}]' already exists, can't register another one.
> at org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:285)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
> at 
> org.apache.kafka.common.network.Selector$SelectorMetrics.(Selector.java:578)
> at org.apache.kafka.common.network.Selector.(Selector.java:112)
> at kafka.server.ReplicaFetcherThread.(ReplicaFetcherThread.scala:69)
> at 
> kafka.server.ReplicaFetcherManager.createFetcherThread(ReplicaFetcherManager.scala:35)
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:83)
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:78)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at 
> kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:78)
> at kafka.server.ReplicaManager.makeFollowers(ReplicaManager.scala:791)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:628)
> at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:114)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:71)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> This happens when I'm running kafka-reassign-partitions.sh. As a result in 
> the verify command one of the partition reassignments says "is still in 
> progress" forever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2480: Add backoff timeout and support re...

2015-11-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/340


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2480) Handle non-CopycatExceptions from SinkTasks

2015-11-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988144#comment-14988144
 ] 

ASF GitHub Bot commented on KAFKA-2480:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/340


> Handle non-CopycatExceptions from SinkTasks
> ---
>
> Key: KAFKA-2480
> URL: https://issues.apache.org/jira/browse/KAFKA-2480
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
> Fix For: 0.9.0.0
>
>
> Currently we catch Throwable in WorkerSinkTask, but we just log the 
> exception. This can lead to data loss because it indicates the messages in 
> the {{put(records)}} call probably were not handled properly. We need to 
> decide what the policy for handling these types of exceptions should be -- 
> try repeating the same records again, risking duplication? or skip them, 
> risking loss? or kill the task immediately and require intervention since 
> it's unclear what happened?
> SourceTasks don't have the same concern -- they can throw other exceptions 
> and as long as we catch them, it is up to the connector to ensure that it 
> does not lose data as a result.
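
The fix that landed ("Add backoff timeout and support rewinds", per the commit
message) points at the retry option; a minimal sketch of that policy with
stand-in types - my illustration, not the committed code:

trait SinkTaskLike { def put(records: Seq[AnyRef]): Unit } // stand-in, not the Copycat API

def deliverWithRewind(task: SinkTaskLike, records: Seq[AnyRef],
                      maxRetries: Int, backoffMs: Long): Unit = {
  var attempt = 0
  var delivered = false
  while (!delivered) {
    try { task.put(records); delivered = true }
    catch {
      case e: Exception if attempt < maxRetries =>
        attempt += 1
        Thread.sleep(backoffMs) // back off, then rewind to the same batch
      // once retries are exhausted the exception propagates, so the task
      // fails visibly instead of silently dropping the batch
    }
  }
}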



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2480) Handle non-CopycatExceptions from SinkTasks

2015-11-03 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2480.
-
Resolution: Fixed

Issue resolved by pull request 340
[https://github.com/apache/kafka/pull/340]

> Handle non-CopycatExceptions from SinkTasks
> ---
>
> Key: KAFKA-2480
> URL: https://issues.apache.org/jira/browse/KAFKA-2480
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
> Fix For: 0.9.0.0
>
>
> Currently we catch Throwable in WorkerSinkTask, but we just log the 
> exception. This can lead to data loss because it indicates the messages in 
> the {{put(records)}} call probably were not handled properly. We need to 
> decide what the policy for handling these types of exceptions should be -- 
> try repeating the same records again, risking duplication? or skip them, 
> risking loss? or kill the task immediately and require intervention since 
> it's unclear what happened?
> SourceTasks don't have the same concern -- they can throw other exceptions 
> and as long as we catch them, it is up to the connector to ensure that it 
> does not lose data as a result.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: One more Kafka Meetup hosted by LinkedIn in 2015 (this time in San Francisco) - does anyone want to talk?

2015-11-03 Thread Grant Henke
Is there a place where we can find all previously streamed/recorded meetups?

Thank you,
Grant

On Tue, Nov 3, 2015 at 2:07 PM, Ed Yakabosky 
wrote:

> I'm sorry to hear that Lukas.  I have heard that people are starting to do
> carpools via rydeful.com for some of these meetups.
>
> Additionally, we will live stream and record the presentations, so you can
> participate remotely.
>
> Ed
>
> On Tue, Nov 3, 2015 at 10:43 AM, Lukas Steiblys 
> wrote:
>
> > This is sad news. I was looking forward to finally going to a Kafka or
> > Samza meetup. Going to Mountain View for a meetup is just unrealistic
> with
> > 2h travel time each way.
> >
> > Lukas
> >
> > -Original Message- From: Ed Yakabosky
> > Sent: Tuesday, November 3, 2015 10:36 AM
> > To: us...@kafka.apache.org ; dev@kafka.apache.org ; Clark Haskins
> > Subject: Re: One more Kafka Meetup hosted by LinkedIn in 2015 (this time
> > in San Francisco) - does anyone want to talk?
> >
> > Hi all,
> >
> > Two corrections to the invite:
> >
> >   1. The invitation is for November 18, 2015.  *NOT 2016.*  I was a
> little
> >   hasty...
> >   2. LinkedIn has finished remodeling our broadcast room, so we are going
> >
> >   to host the meet up in Mountain View, not San Francisco.
> >
> > We've arranged for speakers from HortonWorks to talk about Security and
> > LinkedIn to talk about Quotas.  We are still looking for one more
> speaker,
> > so please let me know if you are interested.
> >
> > Thanks!
> > Ed
> >
> >
> >
> >
> >
> >
> >
> > On Fri, Oct 30, 2015 at 12:49 PM, Ed Yakabosky 
> > wrote:
> >
> > Hi all,
> >>
> >> LinkedIn is hoping to host one more Apache Kafka meetup this year on
> >> November 18 in our San Francisco office.  We're working on building the
> >> agenda now.  Does anyone want to talk?  Please send me (and Clark) a
> >> private email with a short description of what you would be talking
> about
> >> if interested.
> >>
> >> --
> >> Thanks,
> >>
> >> Ed Yakabosky
> >> Technical Program Management @ LinkedIn
> >>
> >>
> >
> > --
> > Thanks,
> > Ed Yakabosky
> >
>
>
>
> --
> Thanks,
> Ed Yakabosky
>



-- 
Grant Henke
Software Engineer | Cloudera
gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


Build failed in Jenkins: kafka-trunk-jdk7 #751

2015-11-03 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2480: Add backoff timeout and support rewinds

--
[...truncated 199 lines...]
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:401:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:264:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:293:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
:294:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
:115:
 value METADATA_FETCH_TIMEOUT_CONFIG in object ProducerConfig is deprecated: 
see corresponding Javadoc for more information.
props.put(ProducerConfig.METADATA_FETCH_TIMEOUT_CONFIG, 
config.metadataFetchTimeoutMs.toString)
 ^
:117:
 value TIMEOUT_CONFIG in object ProducerConfig is deprecated: see corresponding 
Javadoc for more information.
props.put(ProducerConfig.TIMEOUT_CONFIG, config.requestTimeoutMs.toString)
 ^
:121:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
  props.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "false")
   ^
:75:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
producerProps.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
 ^
:194:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
  maybeSetDefaultProperty(producerProps, 
ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
^
:389:
 class BrokerEndPoint in object UpdateMetadataRequest is deprecated: see 
corresponding Javadoc for more information.
  new UpdateMetadataRequest.BrokerEndPoint(brokerEndPoint.id, 
brokerEndPoint.host, brokerEndPoint.port)
^
:391:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
new UpdateMetadataRequest(controllerId, controllerEpoch, 
liveBrokers.asJava, partitionStates.asJava)
^


[jira] [Reopened] (KAFKA-2480) Handle non-CopycatExceptions from SinkTasks

2015-11-03 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava reopened KAFKA-2480:
--

Oops, I think this got misfiled. The actual JIRA fixed by PR 340 is KAFKA-2481.

> Handle non-CopycatExceptions from SinkTasks
> ---
>
> Key: KAFKA-2480
> URL: https://issues.apache.org/jira/browse/KAFKA-2480
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
> Fix For: 0.9.0.0
>
>
> Currently we catch Throwable in WorkerSinkTask, but we just log the 
> exception. This can lead to data loss because it indicates the messages in 
> the {{put(records)}} call probably were not handled properly. We need to 
> decide what the policy for handling these types of exceptions should be -- 
> try repeating the same records again, risking duplication? or skip them, 
> risking loss? or kill the task immediately and require intervention since 
> it's unclear what happened?
> SourceTasks don't have the same concern -- they can throw other exceptions 
> and as long as we catch them, it is up to the connector to ensure that it 
> does not lose data as a result.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2481) Allow copycat sinks to request periodic invocation of put even if no new data is available

2015-11-03 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-2481.
--
Resolution: Fixed
  Reviewer: Gwen Shapira

This has been resolved via https://github.com/apache/kafka/pull/340, which was 
accidentally filed as KAFKA-2480 instead of KAFKA-2481.

> Allow copycat sinks to request periodic invocation of put even if no new data 
> is available
> --
>
> Key: KAFKA-2481
> URL: https://issues.apache.org/jira/browse/KAFKA-2481
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
> Fix For: 0.9.0.0
>
>
> Some connectors will need to perform actions periodically (or more generally, 
> schedule actions in the future). For example, in an HDFS connector, if you 
> want to roll files every n minutes, the sink connector needs to make sure it 
> gets control every n minutes, regardless of available data. However, if data 
> isn't flowing into the consumer, we might never invoke {{put(records)}}. 
> Another variant of this is for connectors that might have an API like the new 
> consumer's where `poll()` needs to be invoked regularly.
> In terms of design, I think there are at least two options:
> 1. this could be handled via the context, so it is purely opt-in to ask to be 
> scheduled for a put(), and they can specify exactly the timeout
> 2. alternatively, could be returned by put() since the return type is 
> currently void. we aren't using a return value right now, but this does mean 
> everyone has to return. also, unclear that this will always be the only info 
> you want to return
> I think 1 is cleaner and doesn't require connector developers who don't care 
> about the feature to even know about it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #92

2015-11-03 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2480: Add backoff timeout and support rewinds

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 5aa5f19d38eda33f32e170e14bcd4fd0d2835fc0 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 5aa5f19d38eda33f32e170e14bcd4fd0d2835fc0
 > git rev-list edddc41b37d06f8e819976b20b2e8ac711033e95 # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson5999150717451145244.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 13.089 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson3951551085818697781.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:contrib:clean UP-TO-DATE
:copycat:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:contrib:hadoop-consumer:clean UP-TO-DATE
:contrib:hadoop-producer:clean UP-TO-DATE
:copycat:api:clean UP-TO-DATE
:copycat:file:clean UP-TO-DATE
:copycat:json:clean UP-TO-DATE
:copycat:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 11.694 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45


[jira] [Commented] (KAFKA-2697) add leave group logic to the consumer

2015-11-03 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988271#comment-14988271
 ] 

Jason Gustafson commented on KAFKA-2697:


[~onurkaraman] I was thinking something with naive error handling like this 
could work: 
https://github.com/hachikuji/kafka/commit/c064110b5792dca583190e54d0fa90ae0d245954.
 The only question as you mentioned before is whether KafkaConsumer should have 
a close(timeout) method like KafkaProducer does. If you don't have time for 
this, I can probably try to polish up this patch. It would be a pity if we 
couldn't get this into the release after finally getting the server-side code 
in.
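
(A minimal sketch of the close(timeout) pattern in question, mirroring KafkaProducer's close(timeout, unit); the Coordinator stub below is hypothetical, standing in for the real consumer internals.)

{code}
import java.util.concurrent.TimeUnit;

public class LeaveGroupOnCloseSketch {
    // Hypothetical stub; the real logic would live in the consumer's coordinator.
    interface Coordinator { void maybeLeaveGroup() throws Exception; }

    private final Coordinator coordinator;

    public LeaveGroupOnCloseSketch(Coordinator coordinator) { this.coordinator = coordinator; }

    // Bound how long close() may block while the LeaveGroupRequest is sent.
    public void close(long timeout, TimeUnit unit) {
        long deadline = System.currentTimeMillis() + unit.toMillis(timeout);
        try {
            coordinator.maybeLeaveGroup();  // best effort, naive error handling
        } catch (Exception e) {
            // Swallow: a failed leave only delays rebalance until the session times out.
        }
        long remainingMs = Math.max(0, deadline - System.currentTimeMillis());
        // Any further cleanup (flushing requests, closing sockets) would use remainingMs.
        System.out.println("close() finishing with " + remainingMs + " ms of budget left");
    }

    public static void main(String[] args) {
        LeaveGroupOnCloseSketch consumer = new LeaveGroupOnCloseSketch(
                () -> System.out.println("sending LeaveGroupRequest (stubbed)"));
        consumer.close(5, TimeUnit.SECONDS);
    }
}
{code}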

> add leave group logic to the consumer
> -
>
> Key: KAFKA-2697
> URL: https://issues.apache.org/jira/browse/KAFKA-2697
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
>Assignee: Onur Karaman
> Fix For: 0.9.0.0
>
>
> KAFKA-2397 added logic on the coordinator to handle LeaveGroupRequests. We 
> need to add logic to KafkaConsumer to send out a LeaveGroupRequest on close.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2697) add leave group logic to the consumer

2015-11-03 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988276#comment-14988276
 ] 

Jason Gustafson commented on KAFKA-2697:


[~onurkaraman] Ah, didn't see your comment from yesterday. Assigning to myself.

> add leave group logic to the consumer
> -
>
> Key: KAFKA-2697
> URL: https://issues.apache.org/jira/browse/KAFKA-2697
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
>Assignee: Onur Karaman
> Fix For: 0.9.0.0
>
>
> KAFKA-2397 added logic on the coordinator to handle LeaveGroupRequests. We 
> need to add logic to KafkaConsumer to send out a LeaveGroupRequest on close.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2441) SSL/TLS in official docs

2015-11-03 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2441:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 406
[https://github.com/apache/kafka/pull/406]

> SSL/TLS in official docs
> 
>
> Key: KAFKA-2441
> URL: https://issues.apache.org/jira/browse/KAFKA-2441
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding SSL/TLS:
> http://kafka.apache.org/documentation.html
> There is already a wiki page where some of the information is already present:
> https://cwiki.apache.org/confluence/display/KAFKA/Deploying+SSL+for+Kafka



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2697) add leave group logic to the consumer

2015-11-03 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reassigned KAFKA-2697:
--

Assignee: Jason Gustafson  (was: Onur Karaman)

> add leave group logic to the consumer
> -
>
> Key: KAFKA-2697
> URL: https://issues.apache.org/jira/browse/KAFKA-2697
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
>Assignee: Jason Gustafson
> Fix For: 0.9.0.0
>
>
> KAFKA-2397 added logic on the coordinator to handle LeaveGroupRequests. We 
> need to add logic to KafkaConsumer to send out a LeaveGroupRequest on close.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2441) SSL/TLS in official docs

2015-11-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988278#comment-14988278
 ] 

ASF GitHub Bot commented on KAFKA-2441:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/406


> SSL/TLS in official docs
> 
>
> Key: KAFKA-2441
> URL: https://issues.apache.org/jira/browse/KAFKA-2441
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding SSL/TLS:
> http://kafka.apache.org/documentation.html
> There is already a wiki page where some of the information is already present:
> https://cwiki.apache.org/confluence/display/KAFKA/Deploying+SSL+for+Kafka



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2441: SSL/TLS in official docs

2015-11-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/406


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #752

2015-11-03 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2441: SSL/TLS in official docs

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-2 (docker Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision f413143eddd713dc5f03d53fdeb10e4e7f3738b1 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f413143eddd713dc5f03d53fdeb10e4e7f3738b1
 > git rev-list 5aa5f19d38eda33f32e170e14bcd4fd0d2835fc0 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson2610007731056502514.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 18.122 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson439311800793931039.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:contrib:clean UP-TO-DATE
:copycat:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:contrib:hadoop-consumer:clean UP-TO-DATE
:contrib:hadoop-producer:clean UP-TO-DATE
:copycat:api:clean UP-TO-DATE
:copycat:file:clean UP-TO-DATE
:copycat:json:clean UP-TO-DATE
:copycat:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 17.758 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[GitHub] kafka pull request: KAFKA-2687: Add support for ListGroups and Des...

2015-11-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/388


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-2687) Add support for ListGroups and DescribeGroup APIs

2015-11-03 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2687.
--
Resolution: Fixed

Issue resolved by pull request 388
[https://github.com/apache/kafka/pull/388]

> Add support for ListGroups and DescribeGroup APIs
> -
>
> Key: KAFKA-2687
> URL: https://issues.apache.org/jira/browse/KAFKA-2687
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Since the new consumer currently has no persistence in Zookeeper (pending 
> outcome of KAFKA-2017), there is no way for administrators to investigate 
> group status including getting the list of members in the group and their 
> partition assignments. We therefore propose to modify GroupMetadataRequest 
> (previously known as ConsumerMetadataRequest) to return group metadata when 
> received by the respective group's coordinator. When received by another 
> broker, the request will be handled as before: by only returning coordinator 
> host and port information.
> {code}
> GroupMetadataRequest => GroupId IncludeMetadata
>   GroupId => String
>   IncludeMetadata => Boolean
> GroupMetadataResponse => ErrorCode Coordinator GroupMetadata
>   ErrorCode => int16
>   Coordinator => Id Host Port
> Id => int32
> Host => string
> Port => int32
>   GroupMetadata => State ProtocolType Generation Protocol Leader  Members
> State => String
> ProtocolType => String
> Generation => int32
> Protocol => String
> Leader => String
> Members => [Member MemberMetadata MemberAssignment]
>   Member => MemberIp ClientId
> MemberIp => String
> ClientId => String
>   MemberMetadata => Bytes
>   MemberAssignment => Bytes
> {code}
> The request schema includes a flag to indicate whether metadata is needed, 
> which saves clients from having to read all group metadata when they are just 
> trying to find the coordinator. This is important to reduce group overhead 
> for use cases which involve a large number of topic subscriptions (e.g. 
> mirror maker).
> Tools will use the protocol type to determine how to parse metadata. For 
> example, when the protocolType is "consumer", the tool can use 
> ConsumerProtocol to parse the member metadata as topic subscriptions and 
> partition assignments. 
> The detailed proposal can be found below.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-40%3A+ListGroups+and+DescribeGroup
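
(A rough sketch of the tool-side branching described above; GroupMetadata here is a hypothetical stub, not the real response class.)

{code}
public class DescribeGroupSketch {
    // Hypothetical stub mirroring the GroupMetadata fields in the schema above.
    static class GroupMetadata {
        String protocolType;
        byte[] memberAssignment;
    }

    static void describe(GroupMetadata md) {
        if ("consumer".equals(md.protocolType)) {
            // A real tool would decode subscriptions/assignments with ConsumerProtocol.
            System.out.println("consumer group: decode member assignment with ConsumerProtocol");
        } else {
            // Unknown protocol types can only be shown as opaque bytes.
            System.out.println("opaque assignment of " + md.memberAssignment.length + " bytes");
        }
    }

    public static void main(String[] args) {
        GroupMetadata md = new GroupMetadata();
        md.protocolType = "consumer";
        md.memberAssignment = new byte[0];
        describe(md);
    }
}
{code}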



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2687) Add support for ListGroups and DescribeGroup APIs

2015-11-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988327#comment-14988327
 ] 

ASF GitHub Bot commented on KAFKA-2687:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/388


> Add support for ListGroups and DescribeGroup APIs
> -
>
> Key: KAFKA-2687
> URL: https://issues.apache.org/jira/browse/KAFKA-2687
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Since the new consumer currently has no persistence in Zookeeper (pending 
> outcome of KAFKA-2017), there is no way for administrators to investigate 
> group status including getting the list of members in the group and their 
> partition assignments. We therefore propose to modify GroupMetadataRequest 
> (previously known as ConsumerMetadataRequest) to return group metadata when 
> received by the respective group's coordinator. When received by another 
> broker, the request will be handled as before: by only returning coordinator 
> host and port information.
> {code}
> GroupMetadataRequest => GroupId IncludeMetadata
>   GroupId => String
>   IncludeMetadata => Boolean
> GroupMetadataResponse => ErrorCode Coordinator GroupMetadata
>   ErrorCode => int16
>   Coordinator => Id Host Port
> Id => int32
> Host => string
> Port => int32
>   GroupMetadata => State ProtocolType Generation Protocol Leader  Members
> State => String
> ProtocolType => String
> Generation => int32
> Protocol => String
> Leader => String
> Members => [Member MemberMetadata MemberAssignment]
>   Member => MemberIp ClientId
> MemberIp => String
> ClientId => String
>   MemberMetadata => Bytes
>   MemberAssignment => Bytes
> {code}
> The request schema includes a flag to indicate whether metadata is needed, 
> which saves clients from having to read all group metadata when they are just 
> trying to find the coordinator. This is important to reduce group overhead 
> for use cases which involve a large number of topic subscriptions (e.g. 
> mirror maker).
> Tools will use the protocol type to determine how to parse metadata. For 
> example, when the protocolType is "consumer", the tool can use 
> ConsumerProtocol to parse the member metadata as topic subscriptions and 
> partition assignments. 
> The detailed proposal can be found below.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-40%3A+ListGroups+and+DescribeGroup



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2734) kafka-console-consumer throws NoSuchElementException on not specifying topic

2015-11-03 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-2734:
-

 Summary: kafka-console-consumer throws NoSuchElementException on 
not specifying topic
 Key: KAFKA-2734
 URL: https://issues.apache.org/jira/browse/KAFKA-2734
 Project: Kafka
  Issue Type: Bug
  Components: tools
Affects Versions: 0.9.0.0
Reporter: Ashish K Singh
Assignee: Ashish K Singh


The argument-checking logic in kafka-console-consumer is flawed. It throws the 
exception below when no topic is specified, and users won't have a clue 
what went wrong.

{code}
Exception in thread "main" java.util.NoSuchElementException: head of empty list
at scala.collection.immutable.Nil$.head(List.scala:337)
at scala.collection.immutable.Nil$.head(List.scala:334)
at 
kafka.tools.ConsoleConsumer$ConsumerConfig.<init>(ConsoleConsumer.scala:244)
at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:40)
at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
{code}
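
(One possible fix, sketched with joptsimple, which the console tools already use: check for the option before touching any list of values, so the user sees a clear error instead of a NoSuchElementException. The class and option wiring below are illustrative, not the actual patch.)

{code}
import joptsimple.OptionParser;
import joptsimple.OptionSet;
import joptsimple.OptionSpec;

public class TopicArgCheckSketch {
    public static void main(String[] args) {
        OptionParser parser = new OptionParser();
        OptionSpec<String> topicOpt = parser.accepts("topic", "The topic to consume from")
                .withRequiredArg().ofType(String.class);
        OptionSet options = parser.parse(args);
        if (!options.has(topicOpt)) {
            // Fail fast with a clear message instead of "head of empty list"
            System.err.println("Missing required argument: --topic");
            System.exit(1);
        }
        System.out.println("Consuming from topic " + options.valueOf(topicOpt));
    }
}
{code}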



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2734: kafka-console-consumer throws NoSu...

2015-11-03 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/412

KAFKA-2734: kafka-console-consumer throws NoSuchElementException on n…

…ot specifying topic

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-2734

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/412.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #412


commit 1b8522773d2ff799bb2228c5003d8fce5dcd4e86
Author: Ashish Singh 
Date:   2015-11-03T22:42:21Z

KAFKA-2734: kafka-console-consumer throws NoSuchElementException on not 
specifying topic




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2734) kafka-console-consumer throws NoSuchElementException on not specifying topic

2015-11-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988335#comment-14988335
 ] 

ASF GitHub Bot commented on KAFKA-2734:
---

GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/412

KAFKA-2734: kafka-console-consumer throws NoSuchElementException on n…

…ot specifying topic

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-2734

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/412.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #412


commit 1b8522773d2ff799bb2228c5003d8fce5dcd4e86
Author: Ashish Singh 
Date:   2015-11-03T22:42:21Z

KAFKA-2734: kafka-console-consumer throws NoSuchElementException on not 
specifying topic




> kafka-console-consumer throws NoSuchElementException on not specifying topic
> 
>
> Key: KAFKA-2734
> URL: https://issues.apache.org/jira/browse/KAFKA-2734
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> The argument-checking logic in kafka-console-consumer is flawed. It throws the 
> exception below when no topic is specified, and users won't have a clue 
> what went wrong.
> {code}
> Exception in thread "main" java.util.NoSuchElementException: head of empty 
> list
>   at scala.collection.immutable.Nil$.head(List.scala:337)
>   at scala.collection.immutable.Nil$.head(List.scala:334)
>   at 
> kafka.tools.ConsoleConsumer$ConsumerConfig.<init>(ConsoleConsumer.scala:244)
>   at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:40)
>   at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2735) BrokerEndPoint should support non-lowercase hostnames

2015-11-03 Thread Jeff Holoman (JIRA)
Jeff Holoman created KAFKA-2735:
---

 Summary: BrokerEndPoint should support non-lowercase hostnames
 Key: KAFKA-2735
 URL: https://issues.apache.org/jira/browse/KAFKA-2735
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.0
Reporter: Jeff Holoman
Assignee: Jeff Holoman


BrokerEndPoint uses a regex to parse the host:port and fails if the hostname 
contains uppercase characters.
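
(For illustration, a host:port pattern that accepts mixed-case hostnames; this is a sketch, not the actual BrokerEndPoint regex.)

{code}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HostPortParseSketch {
    // Letters of either case, digits, dots and dashes for the host; digits for the port.
    private static final Pattern HOST_PORT =
            Pattern.compile("([0-9A-Za-z][0-9A-Za-z\\-.]*):([0-9]+)");

    public static void main(String[] args) {
        Matcher m = HOST_PORT.matcher("Worker4.Example.COM:9092");
        if (m.matches()) {
            System.out.println("host=" + m.group(1) + " port=" + m.group(2));
        } else {
            System.out.println("unparseable host:port");
        }
    }
}
{code}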



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2730) partition-reassignment tool stops working due to error in registerMetric

2015-11-03 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988441#comment-14988441
 ] 

Guozhang Wang commented on KAFKA-2730:
--

Thanks [~Ormod], did you enable any security features in your system tests?

> partition-reassignment tool stops working due to error in registerMetric
> 
>
> Key: KAFKA-2730
> URL: https://issues.apache.org/jira/browse/KAFKA-2730
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> I updated our test system to use Kafka from latest revision 
> 7c33475274cb6e65a8e8d907e7fef6e56bc8c8e6 and now I'm seeing:
> [2015-11-03 14:07:01,554] ERROR [KafkaApi-2] error when handling request 
> Name:LeaderAndIsrRequest;Version:0;Controller:3;ControllerEpoch:1;CorrelationId:5;ClientId:3;Leaders:BrokerEndPoint(3,192.168.60.168,21769);PartitionState:(5c700e33-9230-4219-a3e1-42574c175d62-logs,0)
>  -> 
> (LeaderAndIsrInfo:(Leader:3,ISR:3,LeaderEpoch:1,ControllerEpoch:1),ReplicationFactor:3),AllReplicas:2,3,1)
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: A metric named 'MetricName 
> [name=connection-close-rate, group=replica-fetcher-metrics, 
> description=Connections closed per second in the window., 
> tags={broker-id=3}]' already exists, can't register another one.
> at org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:285)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
> at 
> org.apache.kafka.common.network.Selector$SelectorMetrics.<init>(Selector.java:578)
> at org.apache.kafka.common.network.Selector.(Selector.java:112)
> at kafka.server.ReplicaFetcherThread.<init>(ReplicaFetcherThread.scala:69)
> at 
> kafka.server.ReplicaFetcherManager.createFetcherThread(ReplicaFetcherManager.scala:35)
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:83)
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:78)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at 
> kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:78)
> at kafka.server.ReplicaManager.makeFollowers(ReplicaManager.scala:791)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:628)
> at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:114)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:71)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> This happens when I'm running kafka-reassign-partitions.sh. As a result in 
> the verify command one of the partition reassignments says "is still in 
> progress" forever.
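
(The failure is reproducible with the client metrics API alone: registering the same MetricName twice throws the IllegalArgumentException seen in the trace above. A minimal sketch, assuming kafka-clients on the classpath.)

{code}
import java.util.Collections;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Rate;

public class DuplicateMetricSketch {
    public static void main(String[] args) {
        Metrics metrics = new Metrics();
        MetricName name = new MetricName("connection-close-rate", "replica-fetcher-metrics",
                "Connections closed per second in the window.",
                Collections.singletonMap("broker-id", "3"));
        Sensor first = metrics.sensor("connections-closed-a");
        first.add(name, new Rate());
        Sensor second = metrics.sensor("connections-closed-b");
        second.add(name, new Rate()); // IllegalArgumentException: metric already exists
    }
}
{code}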



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2736) ZkClient doesn't handle SaslAuthenticated

2015-11-03 Thread Flavio Junqueira (JIRA)
Flavio Junqueira created KAFKA-2736:
---

 Summary: ZkClient doesn't handle SaslAuthenticated
 Key: KAFKA-2736
 URL: https://issues.apache.org/jira/browse/KAFKA-2736
 Project: Kafka
  Issue Type: Bug
  Components: zkclient
Affects Versions: 0.9.0.0
Reporter: Flavio Junqueira


See https://github.com/sgroschupf/zkclient/issues/38



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #753

2015-11-03 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2687: Add support for ListGroups and DescribeGroup APIs

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 596c203af1f33360c04f4be7c466310d11343f78 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 596c203af1f33360c04f4be7c466310d11343f78
 > git rev-list f413143eddd713dc5f03d53fdeb10e4e7f3738b1 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson1636439642232168162.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 15.118 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson7324184418762666047.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean
:contrib:clean UP-TO-DATE
:copycat:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:log4j-appender:clean
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:contrib:hadoop-consumer:clean UP-TO-DATE
:contrib:hadoop-producer:clean UP-TO-DATE
:copycat:api:clean UP-TO-DATE
:copycat:file:clean UP-TO-DATE
:copycat:json:clean UP-TO-DATE
:copycat:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 16.454 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Updated] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-11-03 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2702:
---
Attachment: ConsumerConfig-After-v2.html

> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
> Attachments: ConsumerConfig-After-v2.html, ConsumerConfig-After.html, 
> ConsumerConfig-Before.html
>
>
> Because we put everything without default first (without prioritizing), 
> critical parameters get placed below low-priority ones when they both have 
> no defaults. Some parameters are without default and optional (SASL server in 
> ConsumerConfig for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.
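
(A minimal sketch of the suggested ordering: no-default keys first, then by priority, then by name. ConfigKey is a hypothetical stand-in for ConfigDef's key type.)

{code}
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class ConfigKeyOrderSketch {
    enum Importance { HIGH, MEDIUM, LOW }

    static class ConfigKey {
        final String name; final Object defaultValue; final Importance importance;
        ConfigKey(String name, Object defaultValue, Importance importance) {
            this.name = name; this.defaultValue = defaultValue; this.importance = importance;
        }
        boolean hasDefault() { return defaultValue != null; }
    }

    // No-default (effectively required) keys first, then HIGH..LOW, then by name.
    static final Comparator<ConfigKey> ORDER =
            Comparator.comparing((ConfigKey k) -> k.hasDefault())
                      .thenComparing(k -> k.importance)
                      .thenComparing(k -> k.name);

    public static void main(String[] args) {
        List<ConfigKey> keys = Arrays.asList(
                new ConfigKey("sasl.kerberos.service.name", null, Importance.MEDIUM),
                new ConfigKey("group.id", null, Importance.HIGH),
                new ConfigKey("fetch.min.bytes", 1, Importance.HIGH));
        keys.stream().sorted(ORDER).forEach(k -> System.out.println(k.name));
        // Prints group.id first even though other no-default keys exist.
    }
}
{code}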



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-11-03 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988504#comment-14988504
 ] 

Grant Henke commented on KAFKA-2702:


Thanks for all the input [~jkreps][~gwenshap][~abiletskyi][~ijuma][~junrao]

I have updated the PR, removing the required field and changing all instances of 
non-required fields to default to null. I also uploaded a v2 sample output to 
this jira. Most notably, defaults of "" and null are now output and I added a 
"valid values" column.

> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
> Attachments: ConsumerConfig-After-v2.html, ConsumerConfig-After.html, 
> ConsumerConfig-Before.html
>
>
> Because we put everything without default first (without prioritizing), 
> critical parameters get placed below low-priority ones when they both have 
> no defaults. Some parameters are without default and optional (SASL server in 
> ConsumerConfig for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-11-03 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988504#comment-14988504
 ] 

Grant Henke edited comment on KAFKA-2702 at 11/3/15 11:54 PM:
--

Thanks for all the input [~jkreps], [~gwenshap], [~abiletskyi], [~ijuma], 
[~junrao]

I have updated the PR, removing the required field and changing all instances of 
non-required fields to default to null. I also uploaded a v2 sample output to 
this jira. Most notably, defaults of "" and null are now output and I added a 
"valid values" column.


was (Author: granthenke):
Thanks for all the input [~jkreps][~gwenshap][~abiletskyi][~ijuma][~junrao]

I have updated the PR, removing the required field and changing all instances of 
non-required fields to default to null. I also uploaded a v2 sample output to 
this jira. Most notably, defaults of "" and null are now output and I added a 
"valid values" column.

> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
> Attachments: ConsumerConfig-After-v2.html, ConsumerConfig-After.html, 
> ConsumerConfig-Before.html
>
>
> Because we put everything without default first (without prioritizing), 
> critical parameters get placed below low-priority ones when they both have 
> no defaults. Some parameters are without default and optional (SASL server in 
> ConsumerConfig for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #93

2015-11-03 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2441: SSL/TLS in official docs

[wangguoz] KAFKA-2687: Add support for ListGroups and DescribeGroup APIs

--
[...truncated 3972 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogT

[jira] [Created] (KAFKA-2737) Integration tests for round-robin assignment

2015-11-03 Thread Anna Povzner (JIRA)
Anna Povzner created KAFKA-2737:
---

 Summary: Integration tests for round-robin assignment
 Key: KAFKA-2737
 URL: https://issues.apache.org/jira/browse/KAFKA-2737
 Project: Kafka
  Issue Type: Test
Reporter: Anna Povzner
Assignee: Anna Povzner


We currently don't have integration tests which use round-robin assignment. 
This card is to add basic integration tests with round-robin assignment for 
both single-consumer and multi-consumer cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2737: Added single- and multi-consumer i...

2015-11-03 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/413

KAFKA-2737: Added single- and multi-consumer integration tests for 
round-robin assignment

Two tests:
1. One consumer subscribes to 2 topics, each with 2 partitions; includes 
adding and removing a topic.
2. Several consumers subscribe to 2 topics with several partitions each; 
includes adding one more consumer after the initial assignment is done and verified. 
(A minimal round-robin consumer configuration is sketched below.)
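
(For reference, a minimal new-consumer configuration that opts into round-robin assignment; the broker address and group name below are assumptions.)

{code}
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class RoundRobinConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", "round-robin-test");          // hypothetical group name
        props.put("partition.assignment.strategy",
                "org.apache.kafka.clients.consumer.RoundRobinAssignor");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("topic1", "topic2"));  // two topics, as in the tests
        consumer.close();
    }
}
{code}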

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka cpkafka-76

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/413.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #413


commit 6e3a74863b50162bed338e6719af0ddd13109268
Author: Anna Povzner 
Date:   2015-11-04T00:28:25Z

KAFKA-2737: Added single- and multi-consumer integration tests for 
round-robin assignment




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-2697: client-side support for leave grou...

2015-11-03 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/414

KAFKA-2697: client-side support for leave group



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2697

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/414.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #414


commit 0fa0bdb4887538e939c7dcc2b830bba5e8fffdae
Author: Jason Gustafson 
Date:   2015-11-03T22:13:47Z

KAFKA-2697: client-side support for leave group




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2737) Integration tests for round-robin assignment

2015-11-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988598#comment-14988598
 ] 

ASF GitHub Bot commented on KAFKA-2737:
---

GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/413

KAFKA-2737: Added single- and multi-consumer integration tests for 
round-robin assignment

Two tests:
1. One consumer subscribes to 2 topics, each with 2 partitions; includes 
adding and removing a topic.
2. Several consumers subscribe to 2 topics with several partitions each; 
includes adding one more consumer after the initial assignment is done and verified.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka cpkafka-76

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/413.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #413


commit 6e3a74863b50162bed338e6719af0ddd13109268
Author: Anna Povzner 
Date:   2015-11-04T00:28:25Z

KAFKA-2737: Added single- and multi-consumer integration tests for 
round-robin assignment




> Integration tests for round-robin assignment
> 
>
> Key: KAFKA-2737
> URL: https://issues.apache.org/jira/browse/KAFKA-2737
> Project: Kafka
>  Issue Type: Test
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>
> We currently don't have integration tests which use round-robin assignment. 
> This card is to add basic integration tests with round-robin assignment for 
> both single-consumer and multi-consumer cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2697) add leave group logic to the consumer

2015-11-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988601#comment-14988601
 ] 

ASF GitHub Bot commented on KAFKA-2697:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/414

KAFKA-2697: client-side support for leave group



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2697

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/414.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #414


commit 0fa0bdb4887538e939c7dcc2b830bba5e8fffdae
Author: Jason Gustafson 
Date:   2015-11-03T22:13:47Z

KAFKA-2697: client-side support for leave group




> add leave group logic to the consumer
> -
>
> Key: KAFKA-2697
> URL: https://issues.apache.org/jira/browse/KAFKA-2697
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
>Assignee: Jason Gustafson
> Fix For: 0.9.0.0
>
>
> KAFKA-2397 added logic on the coordinator to handle LeaveGroupRequests. We 
> need to add logic to KafkaConsumer to send out a LeaveGroupRequest on close.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2738) Can't set SSL as inter-broker-protocol by rolling restart of brokers

2015-11-03 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2738:

Description: 
Scenario (as carefully documented by [~benstopford]):

1. Start 2 or more brokers with listeners on both PLAINTEXT and SSL protocols, 
and PLAINTEXT as security.inter.broker.protocol:

inter.broker.protocol.version = 0.9.0.X
security.inter.broker.protocol = PLAINTEXT
listeners = PLAINTEXT://:9092,SSL://:9093

2. Stop one of the brokers and change security.inter.broker.protocol to SSL

inter.broker.protocol.version = 0.9.0.X
security.inter.broker.protocol = SSL
listeners = PLAINTEXT://:9092,SSL://:9093

3. Start that broker again.

You will get replication errors as it will attempt to use SSL on a PLAINTEXT 
port:

{code}
WARN ReplicaFetcherThread-0-3, Error in fetch 
kafka.server.ReplicaFetcherThread$FetchRequest@78ca3ba1. Possible cause: 
java.io.IOException: Connection to Node(3, worker4, 9092) failed 
(kafka.server.ReplicaFetcherThread)
WARN Failed to send SSL Close message 
(org.apache.kafka.common.network.SslTransportLayer)
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:492)
at 
org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:188)
at 
org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:161)
at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:50)
at org.apache.kafka.common.network.Selector.close(Selector.java:448)
at org.apache.kafka.common.network.Selector.poll(Selector.java:316)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
at 
kafka.utils.NetworkClientBlockingOps$.recurse$1(NetworkClientBlockingOps.scala:128)
at 
kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntilFound$extension(NetworkClientBlockingOps.scala:139)
at 
kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntil$extension(NetworkClientBlockingOps.scala:105)
at 
kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:58)
at kafka.server.ReplicaFetcherThread.sendRequest(ReplicaFetcherThread.scala:202)
at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:192)
at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
at 
kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:102)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:93)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
{code}

  was:
Scenario (as carefully documented by [~benstopford]):

1. Start 2 or more brokers with listeners on both PLAINTEXT and SSL protocols, 
and PLAINTEXT as security.inter.broker.protocol:

inter.broker.protocol.version = 0.9.0.X
security.inter.broker.protocol = PLAINTEXT
listeners = PLAINTEXT://:9092,SSL://:9093

2. Stop one of the brokers and change security.inter.broker.protocol to SSL

inter.broker.protocol.version = 0.9.0.X
security.inter.broker.protocol = SSL
listeners = PLAINTEXT://:9092,SSL://:9093

3. Start that broker again.

You will get replication errors as it will attempt to use SSL on a PLAINTEXT 
port:


WARN ReplicaFetcherThread-0-3, Error in fetch 
kafka.server.ReplicaFetcherThread$FetchRequest@78ca3ba1. Possible cause: 
java.io.IOException: Connection to Node(3, worker4, 9092) failed 
(kafka.server.ReplicaFetcherThread)
WARN Failed to send SSL Close message 
(org.apache.kafka.common.network.SslTransportLayer)
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:492)
at 
org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:188)
at 
org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:161)
at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:50)
at org.apache.kafka.common.network.Selector.close(Selector.java:448)
at org.apache.kafka.common.network.Selector.poll(Selector.java:316)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
at 
kafka.utils.NetworkClientBlockingOps$.recurse$1(NetworkClientBlockingOps.scala:128)
at 
kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntilFound$extension(NetworkClientBlockingOps.scala:139)
at 
kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntil$extension(NetworkClientBlockingOps.scala:105)
at 
kafka.utils.NetworkClientBlockingOp

[jira] [Created] (KAFKA-2738) Can't set SSL as inter-broker-protocol by rolling restart of brokers

2015-11-03 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-2738:
---

 Summary: Can't set SSL as inter-broker-protocol by rolling restart 
of brokers
 Key: KAFKA-2738
 URL: https://issues.apache.org/jira/browse/KAFKA-2738
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira


Scenario (as carefully documented by [~benstopford]):

1. Start 2 or more brokers with listeners on both PLAINTEXT and SSL protocols, 
and PLAINTEXT as security.inter.broker.protocol:

inter.broker.protocol.version = 0.9.0.X
security.inter.broker.protocol = PLAINTEXT
listeners = PLAINTEXT://:9092,SSL://:9093

2. Stop one of the brokers and change security.inter.broker.protocol to SSL

inter.broker.protocol.version = 0.9.0.X
security.inter.broker.protocol = SSL
listeners = PLAINTEXT://:9092,SSL://:9093

3. Start that broker again.

You will get replication errors as it will attempt to use SSL on a PLAINTEXT 
port:


WARN ReplicaFetcherThread-0-3, Error in fetch 
kafka.server.ReplicaFetcherThread$FetchRequest@78ca3ba1. Possible cause: 
java.io.IOException: Connection to Node(3, worker4, 9092) failed 
(kafka.server.ReplicaFetcherThread)
WARN Failed to send SSL Close message 
(org.apache.kafka.common.network.SslTransportLayer)
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:492)
at 
org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:188)
at 
org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:161)
at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:50)
at org.apache.kafka.common.network.Selector.close(Selector.java:448)
at org.apache.kafka.common.network.Selector.poll(Selector.java:316)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
at 
kafka.utils.NetworkClientBlockingOps$.recurse$1(NetworkClientBlockingOps.scala:128)
at 
kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntilFound$extension(NetworkClientBlockingOps.scala:139)
at 
kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntil$extension(NetworkClientBlockingOps.scala:105)
at 
kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:58)
at kafka.server.ReplicaFetcherThread.sendRequest(ReplicaFetcherThread.scala:202)
at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:192)
at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
at 
kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:102)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:93)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2737) Integration tests for round-robin assignment

2015-11-03 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2737:
-
Reviewer: Guozhang Wang

> Integration tests for round-robin assignment
> 
>
> Key: KAFKA-2737
> URL: https://issues.apache.org/jira/browse/KAFKA-2737
> Project: Kafka
>  Issue Type: Test
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>
> We currently don't have integration tests that use round-robin assignment. 
> This card is to add basic integration tests with round-robin assignment for 
> both single-consumer and multi-consumer cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-2730) partition-reassignment tool stops working due to error in registerMetric

2015-11-03 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988441#comment-14988441
 ] 

Guozhang Wang edited comment on KAFKA-2730 at 11/4/15 12:54 AM:


Thanks [~Ormod], did you enable any security features in your system tests? 
Also could you share the Kafka server config values, particularly 
"num.replica.fetchers"?


was (Author: guozhang):
Thanks [~Ormod], did you enable any security features in your system tests?

> partition-reassignment tool stops working due to error in registerMetric
> 
>
> Key: KAFKA-2730
> URL: https://issues.apache.org/jira/browse/KAFKA-2730
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> I updated our test system to use Kafka from latest revision 
> 7c33475274cb6e65a8e8d907e7fef6e56bc8c8e6 and now I'm seeing:
> [2015-11-03 14:07:01,554] ERROR [KafkaApi-2] error when handling request 
> Name:LeaderAndIsrRequest;Version:0;Controller:3;ControllerEpoch:1;CorrelationId:5;ClientId:3;Leaders:BrokerEndPoint(3,192.168.60.168,21769);PartitionState:(5c700e33-9230-4219-a3e1-42574c175d62-logs,0)
>  -> 
> (LeaderAndIsrInfo:(Leader:3,ISR:3,LeaderEpoch:1,ControllerEpoch:1),ReplicationFactor:3),AllReplicas:2,3,1)
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: A metric named 'MetricName 
> [name=connection-close-rate, group=replica-fetcher-metrics, 
> description=Connections closed per second in the window., 
> tags={broker-id=3}]' already exists, can't register another one.
> at org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:285)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
> at 
> org.apache.kafka.common.network.Selector$SelectorMetrics.(Selector.java:578)
> at org.apache.kafka.common.network.Selector.(Selector.java:112)
> at kafka.server.ReplicaFetcherThread.(ReplicaFetcherThread.scala:69)
> at 
> kafka.server.ReplicaFetcherManager.createFetcherThread(ReplicaFetcherManager.scala:35)
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:83)
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:78)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at 
> kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:78)
> at kafka.server.ReplicaManager.makeFollowers(ReplicaManager.scala:791)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:628)
> at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:114)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:71)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> This happens when I'm running kafka-reassign-partitions.sh. As a result in 
> the verify command one of the partition reassignments says "is still in 
> progress" forever.
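One plausible reading, given the question about num.replica.fetchers above:
several fetcher threads register the same (name, group, tags) triple. Below is
a minimal Scala sketch of the collision, using an illustrative registry rather
than org.apache.kafka.common.metrics.Metrics; the fetcher-id tag is an assumed
disambiguator, not a confirmed fix:

{code}
import scala.collection.mutable

// Illustrative registry: uniqueness is keyed on (name, group, tags),
// mirroring the "already exists" failure in the stack trace above.
final class MetricRegistry {
  private val registered = mutable.Set.empty[(String, String, Map[String, String])]

  def register(name: String, group: String, tags: Map[String, String]): Unit =
    if (!registered.add((name, group, tags)))
      throw new IllegalArgumentException(
        s"A metric named ($name, $group, $tags) already exists, can't register another one.")
}

val registry = new MetricRegistry
registry.register("connection-close-rate", "replica-fetcher-metrics", Map("broker-id" -> "3"))
// A second fetcher registering the same key would throw; an extra tag such
// as a hypothetical fetcher-id keeps each thread's metrics distinct:
registry.register("connection-close-rate", "replica-fetcher-metrics",
  Map("broker-id" -> "3", "fetcher-id" -> "1"))
{code}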



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-11-03 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988628#comment-14988628
 ] 

Gwen Shapira commented on KAFKA-2702:
-

Not 100% related to this patch, but I thought group.id was required (for the 
consumer). Looks like the default is "" now? 
[~hachikuji]?

> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
> Attachments: ConsumerConfig-After-v2.html, ConsumerConfig-After.html, 
> ConsumerConfig-Before.html
>
>
> Because we put everything without a default first (without prioritizing), 
> critical parameters get placed below low-priority ones when both have no 
> defaults. Some parameters have no default and are optional (the SASL server 
> settings in ConsumerConfig, for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.
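A minimal Scala sketch of the suggested ordering, with an illustrative
ConfigKey type rather than the actual org.apache.kafka.common.config.ConfigDef
internals: no-default keys still come first, but ties are broken by importance
so required settings surface at the top:

{code}
object Importance extends Enumeration { val HIGH, MEDIUM, LOW = Value }
case class ConfigKey(name: String, importance: Importance.Value, hasDefault: Boolean)

// Sort keys without a default first, then by importance (HIGH before LOW),
// then alphabetically, so e.g. group.id lands near the top of the table.
def sortForHtmlTable(keys: Seq[ConfigKey]): Seq[ConfigKey] =
  keys.sortBy(k => (if (k.hasDefault) 1 else 0, k.importance.id, k.name))
{code}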



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2739) Bug in ZKClient may cause failure to start brokers

2015-11-03 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-2739:
---

 Summary: Bug in ZKClient may cause failure to start brokers
 Key: KAFKA-2739
 URL: https://issues.apache.org/jira/browse/KAFKA-2739
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira


Described by [~fpj] here:
https://github.com/sgroschupf/zkclient/issues/38

This is a ZKClient issue. I'm opening this JIRA so we can track the error and 
upgrade to the new ZKClient when this is resolved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-11-03 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988635#comment-14988635
 ] 

Jason Gustafson commented on KAFKA-2702:


I think it's only required if the user is using group management. We throw a 
runtime error if you try to join a group with an empty groupId.

> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
> Attachments: ConsumerConfig-After-v2.html, ConsumerConfig-After.html, 
> ConsumerConfig-Before.html
>
>
> Because we put everything without a default first (without prioritizing), 
> critical parameters get placed below low-priority ones when both have no 
> defaults. Some parameters have no default and are optional (the SASL server 
> settings in ConsumerConfig, for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2255) Missing documentation for max.in.flight.requests.per.connection

2015-11-03 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-2255:

Fix Version/s: 0.9.0.0

> Missing documentation for max.in.flight.requests.per.connection
> ---
>
> Key: KAFKA-2255
> URL: https://issues.apache.org/jira/browse/KAFKA-2255
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Navina Ramesh
>Assignee: Aditya Auradkar
> Fix For: 0.9.0.0
>
>
> Hi Kafka team,
> The Samza team noticed that the documentation for the 
> max.in.flight.requests.per.connection property of the Java-based producer is 
> missing from the 0.8.2 documentation. I checked the code, and it looks like 
> this config is still enforced. Can you please update the website accordingly?
> Thanks!
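For reference, the setting caps the number of unacknowledged requests the
producer keeps in flight per connection; with retries enabled, values above 1
can reorder records on retry. A minimal Scala sketch, assuming a local broker:

{code}
import java.util.Properties
import org.apache.kafka.clients.producer.KafkaProducer

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")  // assumed local broker
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
// At most one in-flight request per connection: preserves ordering when
// retries are enabled, at the cost of pipelining throughput.
props.put("max.in.flight.requests.per.connection", "1")
val producer = new KafkaProducer[String, String](props)
{code}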



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2379) Add Copycat documentation

2015-11-03 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2379:

Priority: Blocker  (was: Major)

> Add Copycat documentation
> -
>
> Key: KAFKA-2379
> URL: https://issues.apache.org/jira/browse/KAFKA-2379
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Starting this out pretty broad as it can cover a lot. Some ideas:
> * Normal intro/readme type stuff
> * User guide - how to run in standalone/distributed mode. Connector/tasks 
> concepts and what they mean in practice. Fault tolerance & offsets. REST 
> interface, Copycat as a service, etc.
> * Dev guide - connectors/partitions/records/offsets/tasks. All the APIs, 
> specific examples for implementing APIs, resuming from previous offsets, 
> dynamic sets of partitions, how to work with the runtime data API, etc.
> * System design - KIP-26 + more - why we ended up on the design we did, 
> comparisons to other systems w/ low level details, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2735 BrokerEndPoint should support upper...

2015-11-03 Thread jholoman
GitHub user jholoman opened a pull request:

https://github.com/apache/kafka/pull/415

KAFKA-2735 BrokerEndPoint should support uppercase hostnames

Added support for uppercase hostnames in BrokerEndPoint. Added unit test
to cover this scenario.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jholoman/kafka KAFKA-2735

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/415.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #415


commit 59d3102e42b0ec79f9ae9120e2ba2edea238b59c
Author: jholoman 
Date:   2015-11-04T02:20:26Z

KAFKA-2735
Added support for uppercase hostnames in BrokerEndPoint. Added unit test
to cover this scenario.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2735) BrokerEndPoint should support uppercase hostnames

2015-11-03 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-2735:

Summary: BrokerEndPoint should support uppercase hostnames  (was: 
BrokerEndPoint should support non-lowercase hostnames)

> BrokerEndPoint should support uppercase hostnames
> -
>
> Key: KAFKA-2735
> URL: https://issues.apache.org/jira/browse/KAFKA-2735
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jeff Holoman
>Assignee: Jeff Holoman
>
> BrokerEndPoint uses a regex to parse the host:port and fails if the hostname 
> contains uppercase characters.
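A minimal Scala sketch of the kind of fix involved (the real pattern lives in
kafka.cluster.BrokerEndPoint; this regex and the parse helper are
illustrative): the character class accepts letters of either case, so
host:port strings like Worker4:9092 parse cleanly:

{code}
val hostPort = """([A-Za-z0-9.\-]+):(\d+)""".r  // letters of either case

def parse(connectionString: String): Option[(String, Int)] =
  connectionString match {
    case hostPort(host, port) => Some((host, port.toInt))
    case _                    => None  // malformed input, e.g. missing port
  }

assert(parse("Worker4:9092") == Some(("Worker4", 9092)))  // uppercase accepted
{code}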



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2735) BrokerEndPoint should support non-lowercase hostnames

2015-11-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988781#comment-14988781
 ] 

ASF GitHub Bot commented on KAFKA-2735:
---

GitHub user jholoman opened a pull request:

https://github.com/apache/kafka/pull/415

KAFKA-2735 BrokerEndPoint should support uppercase hostnames

Added support for uppercase hostnames in BrokerEndPoint. Added unit test
to cover this scenario.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jholoman/kafka KAFKA-2735

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/415.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #415


commit 59d3102e42b0ec79f9ae9120e2ba2edea238b59c
Author: jholoman 
Date:   2015-11-04T02:20:26Z

KAFKA-2735
Added support for uppercase hostnames in BrokerEndPoint. Added unit test
to cover this scenario.




> BrokerEndPoint should support non-lowercase hostnames
> -
>
> Key: KAFKA-2735
> URL: https://issues.apache.org/jira/browse/KAFKA-2735
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jeff Holoman
>Assignee: Jeff Holoman
>
> BrokerEndPoint uses a regex to parse the host:port and fails if the hostname 
> contains uppercase characters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2738) Can't set SSL as inter-broker-protocol by rolling restart of brokers

2015-11-03 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988785#comment-14988785
 ] 

Jun Rao commented on KAFKA-2738:


Good find. In ReplicaManager.makeFollowers(), we have the following code:

{code}
val partitionsToMakeFollowerWithLeaderAndOffset = partitionsToMakeFollower.map(partition =>
  new TopicAndPartition(partition) -> BrokerAndInitialOffset(
    leaders.find(_.id == partition.leaderReplicaIdOpt.get).get,
    partition.getReplica().get.logEndOffset.messageOffset)).toMap
replicaFetcherManager.addFetcherForPartitions(partitionsToMakeFollowerWithLeaderAndOffset)
{code}

It seems that instead of passing in the BrokerEndPoint from the
LeaderAndIsrRequest to replicaFetcherManager.addFetcherForPartitions, we should
pick the endpoint from MetadataCache.brokers based on
security.inter.broker.protocol.
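A minimal Scala sketch of that suggestion, using illustrative types rather
than Kafka's own (EndPoint, Broker, and fetcherEndPoint below are stand-ins):
resolve the leader's endpoint from the local metadata cache keyed by the
configured inter-broker protocol, instead of trusting the endpoint carried in
the request:

{code}
case class EndPoint(host: String, port: Int)
case class Broker(id: Int, endPoints: Map[String, EndPoint])  // keyed by protocol

def fetcherEndPoint(leaderId: Int,
                    aliveBrokers: Seq[Broker],
                    interBrokerProtocol: String): EndPoint = {
  val leader = aliveBrokers.find(_.id == leaderId).getOrElse(
    throw new IllegalStateException(s"Leader $leaderId not in metadata cache"))
  // Pick the listener matching security.inter.broker.protocol (e.g. "SSL"),
  // so a follower connects to port 9093 rather than the PLAINTEXT 9092.
  leader.endPoints.getOrElse(interBrokerProtocol,
    throw new IllegalStateException(s"Leader $leaderId has no $interBrokerProtocol listener"))
}
{code}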

> Can't set SSL as inter-broker-protocol by rolling restart of brokers
> 
>
> Key: KAFKA-2738
> URL: https://issues.apache.org/jira/browse/KAFKA-2738
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> Scenario (as carefully documented by [~benstopford]):
> 1. Start 2 or more brokers with listeners on both PLAINTEXT and SSL 
> protocols, and PLAINTEXT as security.inter.broker.protocol:
> inter.broker.protocol.version = 0.9.0.X
> security.inter.broker.protocol = PLAINTEXT
> listeners = PLAINTEXT://:9092,SSL://:9093
> 2. Stop one of the brokers and change security.inter.broker.protocol to SSL
> inter.broker.protocol.version = 0.9.0.X
> security.inter.broker.protocol = SSL
> listeners = PLAINTEXT://:9092,SSL://:9093
> 3. Start that broker again.
> You will get replication errors as it will attempt to use SSL on a PLAINTEXT 
> port:
> {code}
> WARN ReplicaFetcherThread-0-3, Error in fetch 
> kafka.server.ReplicaFetcherThread$FetchRequest@78ca3ba1. Possible cause: 
> java.io.IOException: Connection to Node(3, worker4, 9092) failed 
> (kafka.server.ReplicaFetcherThread)
> WARN Failed to send SSL Close message 
> (org.apache.kafka.common.network.SslTransportLayer)
> java.io.IOException: Broken pipe
> at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
> at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
> at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
> at sun.nio.ch.IOUtil.write(IOUtil.java:65)
> at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:492)
> at 
> org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:188)
> at 
> org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:161)
> at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:50)
> at org.apache.kafka.common.network.Selector.close(Selector.java:448)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:316)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
> at 
> kafka.utils.NetworkClientBlockingOps$.recurse$1(NetworkClientBlockingOps.scala:128)
> at 
> kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntilFound$extension(NetworkClientBlockingOps.scala:139)
> at 
> kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntil$extension(NetworkClientBlockingOps.scala:105)
> at 
> kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:58)
> at 
> kafka.server.ReplicaFetcherThread.sendRequest(ReplicaFetcherThread.scala:202)
> at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:192)
> at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
> at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:102)
> at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:93)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2046) Delete topic still doesn't work

2015-11-03 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2046.
-
Resolution: Cannot Reproduce

> Delete topic still doesn't work
> ---
>
> Key: KAFKA-2046
> URL: https://issues.apache.org/jira/browse/KAFKA-2046
> Project: Kafka
>  Issue Type: Bug
>Reporter: Clark Haskins
>Assignee: Onur Karaman
>
> I just attempted to delete a 128-partition topic with all inbound producers 
> stopped.
> The result was as follows:
> The /admin/delete_topics znode was empty
> the topic under /brokers/topics was removed
> The Kafka topics command showed that the topic was removed
> However, the data on disk on each broker was not deleted and the topic has 
> not yet been re-created by starting up the inbound mirror maker.
> Let me know what additional information is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2738) Can't set SSL as inter-broker-protocol by rolling restart of brokers

2015-11-03 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2738:
---
 Assignee: Ben Stopford
 Priority: Blocker  (was: Major)
Fix Version/s: 0.9.0.0

> Can't set SSL as inter-broker-protocol by rolling restart of brokers
> 
>
> Key: KAFKA-2738
> URL: https://issues.apache.org/jira/browse/KAFKA-2738
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Ben Stopford
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Scenario (as carefully documented by [~benstopford]):
> 1. Start 2 or more brokers with listeners on both PLAINTEXT and SSL 
> protocols, and PLAINTEXT as security.inter.broker.protocol:
> inter.broker.protocol.version = 0.9.0.X
> security.inter.broker.protocol = PLAINTEXT
> listeners = PLAINTEXT://:9092,SSL://:9093
> 2. Stop one of the brokers and change security.inter.broker.protocol to SSL
> inter.broker.protocol.version = 0.9.0.X
> security.inter.broker.protocol = SSL
> listeners = PLAINTEXT://:9092,SSL://:9093
> 3. Start that broker again.
> You will get replication errors as it will attempt to use SSL on a PLAINTEXT 
> port:
> {code}
> WARN ReplicaFetcherThread-0-3, Error in fetch 
> kafka.server.ReplicaFetcherThread$FetchRequest@78ca3ba1. Possible cause: 
> java.io.IOException: Connection to Node(3, worker4, 9092) failed 
> (kafka.server.ReplicaFetcherThread)
> WARN Failed to send SSL Close message 
> (org.apache.kafka.common.network.SslTransportLayer)
> java.io.IOException: Broken pipe
> at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
> at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
> at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
> at sun.nio.ch.IOUtil.write(IOUtil.java:65)
> at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:492)
> at 
> org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:188)
> at 
> org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:161)
> at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:50)
> at org.apache.kafka.common.network.Selector.close(Selector.java:448)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:316)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
> at 
> kafka.utils.NetworkClientBlockingOps$.recurse$1(NetworkClientBlockingOps.scala:128)
> at 
> kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntilFound$extension(NetworkClientBlockingOps.scala:139)
> at 
> kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntil$extension(NetworkClientBlockingOps.scala:105)
> at 
> kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:58)
> at 
> kafka.server.ReplicaFetcherThread.sendRequest(ReplicaFetcherThread.scala:202)
> at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:192)
> at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
> at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:102)
> at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:93)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2644: Run relevant ducktape tests with S...

2015-11-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/358




[jira] [Updated] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-11-03 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2644:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 358
[https://github.com/apache/kafka/pull/358]

> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

