Jenkins build is back to normal : kafka-0.10.2-jdk7 #175

2017-06-20 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #3136: KAFKA-5319 Add a tool to make cluster replica and ...

2017-06-20 Thread MarkTcMA
Github user MarkTcMA closed the pull request at:

https://github.com/apache/kafka/pull/3136


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #3391: Add increment number to clientId

2017-06-20 Thread ccandy413
GitHub user ccandy413 opened a pull request:

https://github.com/apache/kafka/pull/3391

Add increment number to clientId

Append an incrementing number to the clientId when ConsumerConfig.CLIENT_ID_CONFIG has
been set.
This change fixes the WARN that appears when using spring-kafka: consumers sharing the
same client.id cannot all register with the MBean server.
Thanks.
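A minimal sketch of the approach this PR describes, assuming a process-wide counter suffixed onto the configured client.id (the class and method names here are hypothetical, not the PR's actual code):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical helper illustrating the idea: suffix a process-wide
// counter onto a user-supplied client.id so each consumer instance
// registers a unique JMX MBean name instead of colliding.
public final class ClientIdSuffixer {
    private static final AtomicInteger SEQUENCE = new AtomicInteger(0);

    private ClientIdSuffixer() {}

    public static String uniquify(String configuredClientId) {
        // "my-app" -> "my-app-0", "my-app-1", ... per instance created
        return configuredClientId + "-" + SEQUENCE.getAndIncrement();
    }

    public static void main(String[] args) {
        System.out.println(uniquify("my-app")); // my-app-0
        System.out.println(uniquify("my-app")); // my-app-1
    }
}
```

This mirrors what the Kafka clients already do when no client.id is configured at all; the PR extends the uniquifying to explicitly configured ids.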

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ccandy413/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3391.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3391


commit be329a31b366c7ea6638521b135488c1098b7111
Author: chencheng0312 
Date:   2017-06-21T05:22:17Z

Add an incrementing number to clientId if the ConsumerConfig.CLIENT_ID_CONFIG has
been set.






Re: confluence permission request

2017-06-20 Thread Kenji Hayashida
To Kafka Dev Team,

Sorry, I forgot to send my ID.
My ID is kenjih.

Thanks.

- Kenji Hayashida

2017-06-21 13:29 GMT+09:00 Kenji Hayashida :

> To Kafka Dev Team,
>
> Hi, could you please give me write permission to the Confluence page?
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
>
> I'm going to write a KIP.
> Thanks.
>
> - Kenji Hayashida
>
>


-- 
☆---★
Kenji Hayashida
MAIL: kenji12...@gmail.com
☆---★


confluence permission request

2017-06-20 Thread Kenji Hayashida
To Kafka Dev Team,

Hi, could you please give me write permission to the Confluence page?
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals

I'm going to write a KIP.
Thanks.

- Kenji Hayashida


[GitHub] kafka-site pull request #62: MINOR: add jira@kafka mailing list

2017-06-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/62




[GitHub] kafka-site issue #62: MINOR: add jira@kafka mailing list

2017-06-20 Thread guozhangwang
Github user guozhangwang commented on the issue:

https://github.com/apache/kafka-site/pull/62
  
Merged to asf-site.




[GitHub] kafka pull request #3390: KAFKA-5485: Streams should not suspend tasks twice

2017-06-20 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/3390

KAFKA-5485: Streams should not suspend tasks twice



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka 
kafka-5485-dont-suspend-tasks-twice

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3390.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3390


commit 467fd514ac9310dc6a17473fd7f8d9a62d4c1a5c
Author: Matthias J. Sax 
Date:   2017-06-21T02:08:49Z

KAFKA-5485: Streams should not suspend tasks twice






[jira] [Created] (KAFKA-5485) Streams should not suspend tasks twice

2017-06-20 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-5485:
--

 Summary: Streams should not suspend tasks twice
 Key: KAFKA-5485
 URL: https://issues.apache.org/jira/browse/KAFKA-5485
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 0.11.1.0
Reporter: Matthias J. Sax
Assignee: Matthias J. Sax
Priority: Minor


Currently, Streams suspends tasks on rebalance and closes suspended tasks that are
not reassigned. During close, {{suspend()}} is called a second time, which also
calls {{Processor.close()}} again for all nodes.

It would be safer to call {{suspend()}} only once: users may have non-idempotent
operations in their {{Processor.close()}} method, which would then fail on the
second invocation. (cf. KAFKA-5167)
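The hazard, and the obvious guard, can be sketched as follows. This is an illustrative simplification under assumed names (Task, Processor), not the actual Streams internals:

```java
// Illustrates why a double suspend() is unsafe when Processor.close()
// is not idempotent, and how a state flag makes the second call a no-op.
public class SuspendGuardExample {
    interface Processor { void close(); }

    static class Task {
        private final Processor processor;
        private boolean suspended = false;

        Task(Processor p) { this.processor = p; }

        // Guarding with a flag makes a repeated suspend() a no-op,
        // so Processor.close() runs at most once per suspension.
        void suspend() {
            if (suspended) {
                return;
            }
            suspended = true;
            processor.close();
        }
    }

    public static void main(String[] args) {
        final int[] closes = {0};
        Task task = new Task(() -> closes[0]++);
        task.suspend(); // rebalance: task suspended, close() runs
        task.suspend(); // closing an unassigned task: guarded, no-op
        System.out.println("close() calls: " + closes[0]); // 1
    }
}
```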



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-0.10.2-jdk7 #174

2017-06-20 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-5413; Log cleaner fails due to large offset in segment file

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H14 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/0.10.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/0.10.2^{commit} # timeout=10
Checking out Revision 7647b97f317ad8231a2b77c71ccf7e3ddb29a4cd 
(refs/remotes/origin/0.10.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7647b97f317ad8231a2b77c71ccf7e3ddb29a4cd
 > git rev-list 43fe077e9250f113a625c76546df2e9f3c2459db # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-0.10.2-jdk7] $ /bin/bash -xe /tmp/hudson1808710320236213186.sh
+ rm -rf 
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 16.259 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-0.10.2-jdk7] $ /bin/bash -xe /tmp/hudson5244801109611575109.sh
+ export 'GRADLE_OPTS=-Xmx1024m -XX:MaxPermSize=256m'
+ GRADLE_OPTS='-Xmx1024m -XX:MaxPermSize=256m'
+ ./gradlew --no-daemon -Dorg.gradle.project.maxParallelForks=1 
-Dorg.gradle.project.testLoggingEvents=started,passed,skipped,failed clean 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/3.2.1/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean
:log4j-appender:clean
:streams:clean
:tools:clean
:connect:api:clean
:connect:file:clean
:connect:json:clean
:connect:runtime:clean
:connect:transforms:clean
:streams:examples:clean
:test_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-0.10.2-jdk7:clients:compileJavaNote: Some input files use unchecked or 
unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:kafka-0.10.2-jdk7:clients:processResources UP-TO-DATE
:kafka-0.10.2-jdk7:clients:classes
:kafka-0.10.2-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-0.10.2-jdk7:clients:createVersionFile
:kafka-0.10.2-jdk7:clients:jar
:kafka-0.10.2-jdk7:clients:compileTestJava
:kafka-0.10.2-jdk7:clients:processTestResources
:kafka-0.10.2-jdk7:clients:testClasses
:kafka-0.10.2-jdk7:core:compileJava UP-TO-DATE
:kafka-0.10.2-jdk7:core:compileScala
:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:507:
 value 

[GitHub] kafka pull request #3357: KAFKA-5413: Log cleaner fails due to large offset ...

2017-06-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3357




[GitHub] kafka pull request #3379: KAFKA-5472 Eliminated duplicate group names when v...

2017-06-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3379




[GitHub] kafka pull request #3384: MINOR: remove unused hitRatio field in NamedCache

2017-06-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3384




[GitHub] kafka pull request #3388: KAFKA-5021: Update delivery semantics documentatio...

2017-06-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3388




[GitHub] kafka pull request #3333: MINOR: Mark AbstractLogCleanerIntegrationTest as a...

2017-06-20 Thread ewencp
Github user ewencp closed the pull request at:

https://github.com/apache/kafka/pull/




[GitHub] kafka pull request #3345: MINOR: Add Processing Guarantees in Streams docs

2017-06-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3345




[GitHub] kafka pull request #3389: KAFKA-5484: Refactor kafkatest docker support

2017-06-20 Thread cmccabe
GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/3389

KAFKA-5484: Refactor kafkatest docker support



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-5484

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3389.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3389


commit 1d16a081f519b73fb0cd6e148b6db7cfcfa260ce
Author: Colin P. Mccabe 
Date:   2017-06-20T21:31:46Z

KAFKA-5484: Refactor kafkatest docker support






Build failed in Jenkins: kafka-0.10.1-jdk7 #120

2017-06-20 Thread Apache Jenkins Server
See 


Changes:

[me] MINOR: Add serialized vagrant rsync until upstream fixes broken

--
[...truncated 588.88 KB...]
org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testSubscription PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldSetClusterMetadataOnAssignment STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldSetClusterMetadataOnAssignment PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithInternalTopics STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithInternalTopics PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testSpecificPartition STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testSpecificPartition PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldThrowStreamsExceptionAfterMaxAttempts STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldThrowStreamsExceptionAfterMaxAttempts PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldRetryWhenTimeoutExceptionOccursOnSend STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldRetryWhenTimeoutExceptionOccursOnSend PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testStreamPartitioner STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testStreamPartitioner PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldHaveCompactionPropSetIfSupplied STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldHaveCompactionPropSetIfSupplied PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldThrowIfNameIsNull STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldThrowIfNameIsNull PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldConfigureRetentionMsWithAdditionalRetentionWhenCompactAndDelete STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldConfigureRetentionMsWithAdditionalRetentionWhenCompactAndDelete PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldBeCompactedIfCleanupPolicyCompactOrCompactAndDelete STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldBeCompactedIfCleanupPolicyCompactOrCompactAndDelete PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotBeCompactedWhenCleanupPolicyIsDelete STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotBeCompactedWhenCleanupPolicyIsDelete PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldHavePropertiesSuppliedByUser STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldHavePropertiesSuppliedByUser PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldUseCleanupPolicyFromConfigIfSupplied STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldUseCleanupPolicyFromConfigIfSupplied PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotConfigureRetentionMsWhenCompact STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotConfigureRetentionMsWhenCompact PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotConfigureRetentionMsWhenDelete STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotConfigureRetentionMsWhenDelete PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics 
STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalT

[GitHub] kafka pull request #3388: KAFKA-5021: Update delivery semantics documentatio...

2017-06-20 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/3388

KAFKA-5021: Update delivery semantics documentation for EoS (KIP-98)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-5021

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3388.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3388


commit 4433be408249594f0771fcd57f9a9cbaddd70648
Author: Jason Gustafson 
Date:   2017-06-20T21:34:53Z

KAFKA-5021: Update delivery semantics documentation for EoS (KIP-98)






[jira] [Created] (KAFKA-5484) Refactor kafkatest docker support

2017-06-20 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-5484:
--

 Summary: Refactor kafkatest docker support
 Key: KAFKA-5484
 URL: https://issues.apache.org/jira/browse/KAFKA-5484
 Project: Kafka
  Issue Type: Bug
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


Refactor kafkatest docker support to fix some issues.





[jira] [Resolved] (KAFKA-3969) kafka.admin.ConsumerGroupCommand doesn't show consumer groups

2017-06-20 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian resolved KAFKA-3969.

Resolution: Not A Bug

[~dieter_be] Closing this JIRA as it didn't seem to be a bug. Please re-open if 
you disagree.

> kafka.admin.ConsumerGroupCommand doesn't show consumer groups
> -
>
> Key: KAFKA-3969
> URL: https://issues.apache.org/jira/browse/KAFKA-3969
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dieter Plaetinck
>
> http://kafka.apache.org/documentation.html , at 
> http://kafka.apache.org/documentation.html#basic_ops_consumer_lag says 
> " Note, however, after 0.9.0, the kafka.tools.ConsumerOffsetChecker tool is 
> deprecated and you should use the kafka.admin.ConsumerGroupCommand (or the 
> bin/kafka-consumer-groups.sh script) to manage consumer groups, including 
> consumers created with the new consumer API."
> I'm sure that I have a consumer running, because I wrote an app that is 
> processing data, and I can see the data as well as the metrics that confirm 
> it's receiving data. I'm using Kafka 0.10.
> Yet when I run the command as instructed, it doesn't list any consumer groups
> $ /opt/kafka_2.11-0.10.0.0/bin/kafka-run-class.sh 
> kafka.admin.ConsumerGroupCommand --zookeeper localhost:2181 --list
> $
> So either something is wrong with the tool, or with the docs.





[GitHub] kafka-site pull request #60: Update delivery semantics section for KIP-98

2017-06-20 Thread hachikuji
Github user hachikuji closed the pull request at:

https://github.com/apache/kafka-site/pull/60




[GitHub] kafka-site issue #60: Update delivery semantics section for KIP-98

2017-06-20 Thread hachikuji
Github user hachikuji commented on the issue:

https://github.com/apache/kafka-site/pull/60
  
I'm going to reopen this as a PR against kafka since that is where 
design.html is maintained.




[jira] [Resolved] (KAFKA-5290) docs need clarification on meaning of 'committed' to the log

2017-06-20 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-5290.

Resolution: Fixed

> docs need clarification on meaning of 'committed' to the log
> 
>
> Key: KAFKA-5290
> URL: https://issues.apache.org/jira/browse/KAFKA-5290
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Edoardo Comar
>Assignee: Edoardo Comar
>
> The docs around
> http://kafka.apache.org/documentation/#semantics
> http://kafka.apache.org/documentation/#replication
> say
> ??A message is considered "committed" when all in sync replicas for that 
> partition have applied it to their log. Only committed messages are ever 
> given out to the consumer??
> I've always found that statement in need of clarification, as the producer acks 
> setting is crucial in determining what "committed" means.
> Based on conversations with [~rsivaram], [~apurva], [~vahid]
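To make the point above concrete, here is a hedged sketch of how the producer's acks setting maps onto the docs' notion of "committed" (property names per the Kafka configuration documentation; the helper class and method names are illustrative only):

```java
import java.util.Properties;

// Sketch: the producer "acks" setting determines what "committed"
// means from the producer's point of view.
public class AcksExample {
    public static Properties strongestDurability(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        // acks=all: the leader acknowledges only after all in-sync
        // replicas have applied the record, matching the docs' wording
        // "committed when all in sync replicas have applied it".
        props.put("acks", "all");
        // By contrast, acks=1 means applied only to the leader's log,
        // and acks=0 means no acknowledgement at all.
        return props;
    }

    public static void main(String[] args) {
        System.out.println(strongestDurability("localhost:9092").getProperty("acks"));
    }
}
```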





[GitHub] kafka pull request #3035: KAFKA-5290 docs need clarification on meaning of '...

2017-06-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3035




Jenkins build is back to normal : kafka-0.10.2-jdk7 #173

2017-06-20 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #3387: MINOR: Update documentation to use `kafka-consumer...

2017-06-20 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/3387

MINOR: Update documentation to use `kafka-consumer-groups.sh` as the main 
tool for checking consumer offsets



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka 
doc/replace_consumeroffsetchecker_with_kafkaconsumergroups

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3387.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3387


commit cbebbd5f0340bd4244be03e147832163ee8b7dc5
Author: Vahid Hashemian 
Date:   2017-06-20T20:56:47Z

MINOR: Update documentation to use `kafka-consumer-groups.sh` as the main 
tool for checking consumer offsets






Build failed in Jenkins: kafka-0.10.0-jdk7 #210

2017-06-20 Thread Apache Jenkins Server
See 


Changes:

[me] MINOR: Add serialized vagrant rsync until upstream fixes broken

--
[...truncated 76.42 KB...]
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.TopicCommandTest > testCreateIfNotExists PASSED

kafka.admin.TopicCommandTest > testCreateAlterTopicWithRackAware PASSED

kafka.admin.TopicCommandTest > testTopicDeletion PASSED

kafka.admin.TopicCommandTest > testConfigPreservationAcrossPartitionAlteration 
PASSED

kafka.admin.TopicCommandTest > testAlterIfExists PASSED

kafka.admin.TopicCommandTest > testDeleteIfExists PASSED

kafka.admin.ReassignPartitionsCommandTest > testRackAwareReassign PASSED

kafka.admin.AdminTest > testBasicPreferredReplicaElection PASSED

kafka.admin.AdminTest > testPreferredReplicaJsonData PASSED

kafka.admin.AdminTest > testReassigningNonExistingPartition PASSED

kafka.admin.AdminTest > testGetBrokerMetadatas PASSED

kafka.admin.AdminTest > testBootstrapClientIdConfig PASSED

kafka.admin.AdminTest > testPartitionReassignmentNonOverlappingReplicas PASSED

kafka.admin.AdminTest > testReplicaAssignment PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderNotInNewReplicas 
PASSED

kafka.admin.AdminTest > testTopicConfigChange PASSED

kafka.admin.AdminTest > testResumePartitionReassignmentThatWasCompleted PASSED

kafka.admin.AdminTest > testManualReplicaAssignment PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderInNewReplicas PASSED

kafka.admin.AdminTest > testShutdownBroker PASSED

kafka.admin.AdminTest > testTopicCreationWithCollision PASSED

kafka.admin.AdminTest > testTopicCreationInZK PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown FAILED
org.scalatest.junit.JUnitTestFailedError: expected exception when writing 
to closed trace socket
at 
org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:102)
at 
org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:79)
at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
at org.scalatest.junit.JUnitSuite.fail(JUnitSuite.scala:79)
at 
kafka.network.SocketServerTest.testSocketsCloseOnShutdown(SocketServerTest.scala:194)

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompressionSetConsumption 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testLeaderSelectionForPartition 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerDecoder PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerRebalanceListener 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompression PASSED

kafka.consumer.PartitionAssignorTest > testRoundRobinPartitionAssignor PASSED

kafka.consumer.PartitionAssignorTest > testRangePartitionAssignor PASSED

kafka.consumer.TopicFilterTest > testWhitelists PASSED

kafka.consumer.TopicFilterTest > 
testWildcardTopicCountGetTopicCountMapEscapeJson PASSED

kafka.consumer.TopicFilterTest > testBlacklists PASSED

kafka.consumer.ConsumerIteratorTest > 
testConsumerIteratorDeduplicationDeepIterator PASSED

kafka.consumer.ConsumerIteratorTest > testConsumerIteratorDecodingFailure PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafk

[GitHub] kafka pull request #3376: MINOR: MemoryRecordsBuilder.sizeInBytes should con...

2017-06-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3376




Re: [VOTE] 0.11.0.0 RC1

2017-06-20 Thread Vahid S Hashemian
Hi Ismael,

Thanks for running the release.

Running tests ('gradlew.bat test') on my Windows 64-bit VM results in 
these checkstyle errors:

:clients:checkstyleMain
[ant:checkstyle] [ERROR] 
C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\protocol\Errors.java:89:1:
 
Class Data Abstraction Coupling is 57 (max allowed is 20) classes 
[ApiExceptionBuilder, BrokerNotAvailableException, 
ClusterAuthorizationException, ConcurrentTransactionsException, 
ControllerMovedException, CoordinatorLoadInProgressException, 
CoordinatorNotAvailableException, CorruptRecordException, 
DuplicateSequenceNumberException, GroupAuthorizationException, 
IllegalGenerationException, IllegalSaslStateException, 
InconsistentGroupProtocolException, InvalidCommitOffsetSizeException, 
InvalidConfigurationException, InvalidFetchSizeException, 
InvalidGroupIdException, InvalidPartitionsException, 
InvalidPidMappingException, InvalidReplicaAssignmentException, 
InvalidReplicationFactorException, InvalidRequestException, 
InvalidRequiredAcksException, InvalidSessionTimeoutException, 
InvalidTimestampException, InvalidTopicException, 
InvalidTxnStateException, InvalidTxnTimeoutException, 
LeaderNotAvailableException, NetworkException, NotControllerException, 
NotCoordinatorException, NotEnoughReplicasAfterAppendException, 
NotEnoughReplicasException, NotLeaderForPartitionException, 
OffsetMetadataTooLarge, OffsetOutOfRangeException, 
OperationNotAttemptedException, OutOfOrderSequenceException, 
PolicyViolationException, ProducerFencedException, 
RebalanceInProgressException, RecordBatchTooLargeException, 
RecordTooLargeException, ReplicaNotAvailableException, 
SecurityDisabledException, TimeoutException, TopicAuthorizationException, 
TopicExistsException, TransactionCoordinatorFencedException, 
TransactionalIdAuthorizationException, UnknownMemberIdException, 
UnknownServerException, UnknownTopicOrPartitionException, 
UnsupportedForMessageFormatException, UnsupportedSaslMechanismException, 
UnsupportedVersionException]. [ClassDataAbstractionCoupling]
[ant:checkstyle] [ERROR] 
C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\protocol\Errors.java:89:1:
 
Class Fan-Out Complexity is 60 (max allowed is 40). 
[ClassFanOutComplexity]
[ant:checkstyle] [ERROR] 
C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\requests\AbstractRequest.java:26:1:
 
Class Fan-Out Complexity is 43 (max allowed is 40). 
[ClassFanOutComplexity]
[ant:checkstyle] [ERROR] 
C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\requests\AbstractResponse.java:26:1:
 
Class Fan-Out Complexity is 42 (max allowed is 40). 
[ClassFanOutComplexity]
:clients:checkstyleMain FAILED

I wonder if there is an issue with my VM since I don't get similar errors 
on Ubuntu or Mac.

--Vahid




From:   Ismael Juma 
To: dev@kafka.apache.org, Kafka Users , 
kafka-clients 
Date:   06/18/2017 03:32 PM
Subject:[VOTE] 0.11.0.0 RC1
Sent by:isma...@gmail.com



Hello Kafka users, developers and client-developers,

This is the second candidate for release of Apache Kafka 0.11.0.0.

This is a major version release of Apache Kafka. It includes 32 new KIPs. 
See
the release notes and release plan 
(https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.11.0.0) 
for more details. A few feature highlights:

* Exactly-once delivery and transactional messaging
* Streams exactly-once semantics
* Admin client with support for topic, ACLs and config management
* Record headers
* Request rate quotas
* Improved resiliency: replication protocol improvement and 
single-threaded
controller
* Richer and more efficient message format

A number of issues have been resolved since RC0 and there are no known
blockers remaining.

Release notes for the 0.11.0.0 release:
http://home.apache.org/~ijuma/kafka-0.11.0.0-rc1/RELEASE_NOTES.html

*** Please download, test and vote by Thursday, June 22, 9am PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~ijuma/kafka-0.11.0.0-rc1/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
http://home.apache.org/~ijuma/kafka-0.11.0.0-rc1/javadoc/

* Tag to be voted upon (off 0.11.0 branch) is the 0.11.0.0 tag:
https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=4818d4e1cbef1a8e9c027100fef317077fb3fb99


* Documentation:
http://kafka.apache.org/0110/documentation.html

* Protocol:
http://kafka.apache.org/0110/protocol.html

* Successful Jenkins builds for the 0.11.0 branch:
Unit/integration tests: 
https://builds.apache.org/job/kafka-0.11.0-jdk7/167/
System tests: 
https://jenkins.confluent.io/job/system-test-kafka-0.11.0/16/
(all 274 tests passed, the reported failure was not related to the tests)


[GitHub] kafka pull request #3380: MINOR: Add serialized vagrant rsync until upstream...

2017-06-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3380




Jenkins build is back to normal : kafka-trunk-jdk7 #2429

2017-06-20 Thread Apache Jenkins Server
See 



[GitHub] kafka pull request #3278: MINOR: Add some logging for the transaction coordi...

2017-06-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3278




Re: Kafka Streams : parallelism related to repartition topic partitions as well

2017-06-20 Thread Matthias J. Sax
Your observation is correct.

The paragraph you quote is not very precise but also not necessarily
wrong. The example is simplified and assumes that there is no
re-partitioning even if it is not mentioned explicitly.


-Matthias


On 6/20/17 9:32 AM, Paolo Patierno wrote:
> Hi devs,
> 
> 
> at following documentation page (by Confluent) I read 
> (http://docs.confluent.io/current/streams/architecture.html#stream-partitions-and-tasks)
>  ...
> 
> 
> "the maximum parallelism at which your application may run is bounded by the 
> maximum number of stream tasks, which itself is determined by maximum number 
> of partitions of the input topic(s) the application is reading from. For 
> example, if your input topic has 5 partitions, then you can run up to 5 
> applications instances"
> 
> but this doesn't seem entirely true ... I mean ...
> The number of application instances also depends on whether we have 
> "internal" repartition topics in our processor topology.
> I tried the WordCountDemo starting from a topic with 2 partitions. In this 
> case I'm able to run up to 4 application instances while the 5th stays idle.
> This is possible because, due to the map() in the example, we have 
> repartitioning (so 1 repartition topic with 2 partitions) ... that means 4 
> tasks for the 4 total partitions (2 for the input topic, 2 for the 
> repartition topic) ... and these tasks can run one per application instance.
> According to the doc part quoted above, the maximum should be just 2 (not 4).
> 
> Do you confirm this?
> 
> Thanks,
> Paolo
> 
> 
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Windows Embedded & IoT
> Microsoft Azure Advisor
> 
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
> 



signature.asc
Description: OpenPGP digital signature


[GitHub] kafka pull request #3299: MINOR: Remove unused `AdminUtils.fetchTopicMetadat...

2017-06-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3299




[GitHub] kafka-site issue #54: Fix typo - Acls Examples, Adding or removing a princip...

2017-06-20 Thread sunnykrGupta
Github user sunnykrGupta commented on the issue:

https://github.com/apache/kafka-site/pull/54
  
Great. Closing now.




[GitHub] kafka-site pull request #54: Fix typo - Acls Examples, Adding or removing a ...

2017-06-20 Thread sunnykrGupta
Github user sunnykrGupta closed the pull request at:

https://github.com/apache/kafka-site/pull/54




[jira] [Created] (KAFKA-5483) Shutdown of scheduler should come after LogManager

2017-06-20 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-5483:
--

 Summary: Shutdown of scheduler should come after LogManager
 Key: KAFKA-5483
 URL: https://issues.apache.org/jira/browse/KAFKA-5483
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson


It seems that we shut down the scheduler used by LogManager before shutting down 
LogManager itself. This can lead to an IllegalStateException:
{code}
"[2017-06-06 18:10:19,025] ERROR [ReplicaFetcherThread-14-111], Error due to  
(kafka.server.ReplicaFetcherThread)
kafka.common.KafkaException: error processing data for partition 
[akiraPricedProduct.global,10] offset 191893
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:170)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:141)
at scala.Option.foreach(Option.scala:257)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:141)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:138)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply$mcV$sp(AbstractFetcherThread.scala:138)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:138)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:138)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
at 
kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:136)
at 
kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:103)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
Caused by: java.lang.IllegalStateException: Kafka scheduler is not running.
at kafka.utils.KafkaScheduler.ensureRunning(KafkaScheduler.scala:132)
at kafka.utils.KafkaScheduler.schedule(KafkaScheduler.scala:106)
at kafka.log.Log.roll(Log.scala:794)
at kafka.log.Log.maybeRoll(Log.scala:744)
at kafka.log.Log.append(Log.scala:405)
at 
kafka.server.ReplicaFetcherThread.processPartitionData(ReplicaFetcherThread.scala:130)
at 
kafka.server.ReplicaFetcherThread.processPartitionData(ReplicaFetcherThread.scala:42)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:153)
{code}
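The ordering fix can be illustrated with a minimal sketch (plain Java, not Kafka's actual shutdown code; all names here are illustrative): the component that submits work to a shared scheduler must be closed before the scheduler itself, otherwise a late submission hits a stopped executor.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ShutdownOrder {
    static class LogManagerLike {
        private final ScheduledExecutorService scheduler;

        LogManagerLike(ScheduledExecutorService scheduler) {
            this.scheduler = scheduler;
        }

        void roll() {
            // A stopped executor rejects this submission, analogous to the
            // IllegalStateException from KafkaScheduler.ensureRunning().
            scheduler.schedule(() -> { }, 0, TimeUnit.MILLISECONDS);
        }

        void shutdown() {
            // flush and close logs; no further roll() calls after this point
        }
    }

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        LogManagerLike logManager = new LogManagerLike(scheduler);
        logManager.roll();
        // Correct ordering per this report: stop the scheduler's client first,
        logManager.shutdown();
        // then the scheduler it depends on.
        scheduler.shutdown();
        System.out.println("clean shutdown");
    }
}
```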



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-trunk-jdk7 #2428

2017-06-20 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Remove version in the uses page

--
[...truncated 2.58 MB...]
org.apache.kafka.connect.runtime.WorkerConnectorTest > testFailureIsFinalState 
PASSED

org.apache.kafka.connect.runtime.WorkerConnectorTest > testInitializeFailure 
STARTED

org.apache.kafka.connect.runtime.WorkerConnectorTest > testInitializeFailure 
PASSED

org.apache.kafka.connect.runtime.WorkerConnectorTest > testStartupAndPause 
STARTED

org.apache.kafka.connect.runtime.WorkerConnectorTest > testStartupAndPause 
PASSED

org.apache.kafka.connect.runtime.WorkerConnectorTest > testStartupAndShutdown 
STARTED

org.apache.kafka.connect.runtime.WorkerConnectorTest > testStartupAndShutdown 
PASSED

org.apache.kafka.connect.runtime.WorkerConnectorTest > 
testTransitionStartedToStarted STARTED

org.apache.kafka.connect.runtime.WorkerConnectorTest > 
testTransitionStartedToStarted PASSED

org.apache.kafka.connect.runtime.WorkerConnectorTest > testShutdownFailure 
STARTED

org.apache.kafka.connect.runtime.WorkerConnectorTest > testShutdownFailure 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectSchemaless STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectSchemaless PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnectNull 
STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnectNull 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectBadSchema STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectBadSchema PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectNull STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectNull PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnect 
STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnect 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testFromConnect 
STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testFromConnect 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectInvalidValue STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectInvalidValue PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush 
STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush STARTED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > readTaskState 
STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > readTaskState 
PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > putTaskState 
STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > putTaskState 
PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeWithNoPreviousValueIsPropagated STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeWithNoPreviousValueIsPropagated PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateNonRetriableFailure STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateNonRetriableFailure PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateShouldOverride STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateShouldOverride PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putCo

[jira] [Created] (KAFKA-5482) A CONCURRENT_TRANSACTIONS error for the first AddPartitionsToTxn request slows down transactions significantly

2017-06-20 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-5482:
---

 Summary: A CONCURRENT_TRANSACTIONS error for the first 
AddPartitionsToTxn request slows down transactions significantly
 Key: KAFKA-5482
 URL: https://issues.apache.org/jira/browse/KAFKA-5482
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0
Reporter: Apurva Mehta
Assignee: Apurva Mehta
 Fix For: 0.11.0.1


Here is the issue.

# When we commit a transaction, the producer sends an `EndTxn` request to 
the coordinator. The coordinator writes the `PrepareCommit` message to the 
transaction log and then returns the response to the client. It writes the 
transaction markers and the final `CompleteCommit` message asynchronously. 
# In the meantime, if the client starts another transaction, it will send an 
`AddPartitions` request on the next `Sender.run` loop. If the markers haven't 
been written yet, the coordinator will return a retriable 
`CONCURRENT_TRANSACTIONS` error to the client.
# The current behavior in the producer is to sleep for `retryBackoffMs` before 
retrying the request. The current default for this is 100ms, so the producer 
will sleep for 100ms before sending the `AddPartitions` again. This puts a 
floor on the latency of back-to-back transactions.

This has been worked around in https://issues.apache.org/jira/browse/KAFKA-5477 
by reducing the retry backoff for the first `AddPartitions` request. But we need 
a stronger solution, like having the commit block until the transaction is 
complete, or delaying the `AddPartitions` until batches are actually ready to 
be sent.
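The latency floor described above is simple arithmetic; here is a hedged sketch (illustrative names, not producer internals) of how the backoff compounds across back-to-back transactions:

```java
public class BackoffFloor {
    // retry.backoff.ms default cited in this report
    static final long RETRY_BACKOFF_MS = 100;

    /**
     * Minimum latency added by backoff sleeps for n back-to-back
     * transactions, each retrying its first AddPartitions request.
     */
    static long minAddedLatencyMs(int transactions, int retriesPerTxn) {
        return (long) transactions * retriesPerTxn * RETRY_BACKOFF_MS;
    }

    public static void main(String[] args) {
        // 50 back-to-back transactions, one CONCURRENT_TRANSACTIONS retry each:
        System.out.println(minAddedLatencyMs(50, 1) + " ms"); // prints "5000 ms"
    }
}
```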





[GitHub] kafka pull request #3386: KAFKA-4249: Document how to customize GC logging o...

2017-06-20 Thread tombentley
GitHub user tombentley opened a pull request:

https://github.com/apache/kafka/pull/3386

KAFKA-4249: Document how to customize GC logging options for broker

Document the KAFKA_GC_LOG_OPTS environment variable as well as the
common `kafka-run-class.sh` options.

The contribution is my original work and I license the work to the project 
under the project's open source license.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tombentley/kafka KAFKA-4249

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3386.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3386


commit e7b95ae04a7c833aa66efdc75397c36ede5ad1b2
Author: Tom Bentley 
Date:   2017-06-20T17:12:01Z

KAFKA-4249: Document how to customize GC logging options for broker

Document the KAFKA_GC_LOG_OPTS environment variable as well as the
common `kafka-run-class.sh` options.






[GitHub] kafka pull request #3366: MINOR: Remove version in the uses page

2017-06-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3366




[GitHub] kafka-site issue #49: Fix bad kafka stream link on use cases page

2017-06-20 Thread guozhangwang
Github user guozhangwang commented on the issue:

https://github.com/apache/kafka-site/pull/49
  
https://github.com/apache/kafka/pull/3366 is merged, could close this PR 
now @haoch 




Kafka Streams : parallelism related to repartition topic partitions as well

2017-06-20 Thread Paolo Patierno
Hi devs,


at following documentation page (by Confluent) I read 
(http://docs.confluent.io/current/streams/architecture.html#stream-partitions-and-tasks)
 ...


"the maximum parallelism at which your application may run is bounded by the 
maximum number of stream tasks, which itself is determined by maximum number of 
partitions of the input topic(s) the application is reading from. For example, 
if your input topic has 5 partitions, then you can run up to 5 applications 
instances"

but this doesn't seem entirely true ... I mean ...
The number of application instances also depends on whether we have "internal" 
repartition topics in our processor topology.
I tried the WordCountDemo starting from a topic with 2 partitions. In this case 
I'm able to run up to 4 application instances while the 5th stays idle.
This is possible because, due to the map() in the example, we have 
repartitioning (so 1 repartition topic with 2 partitions) ... that means 4 
tasks for the 4 total partitions (2 for the input topic, 2 for the repartition 
topic) ... and these tasks can run one per application instance.
According to the doc part quoted above, the maximum should be just 2 (not 4).

Do you confirm this?

Thanks,
Paolo


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Windows Embedded & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience
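The counting argument in the message above can be sketched as plain arithmetic (illustrative code, not Kafka Streams internals): tasks are created per partition of every topic a sub-topology reads, including internal repartition topics, so repartitioning raises the instance ceiling.

```java
public class TaskCount {
    /**
     * One stream task per partition of each topic read, counting
     * internal repartition topics as well as the input topic.
     */
    static int maxNonIdleInstances(int inputPartitions, int repartitionPartitions) {
        return inputPartitions + repartitionPartitions;
    }

    public static void main(String[] args) {
        // WordCountDemo with a 2-partition input: the map() forces a
        // 2-partition repartition topic, so 4 tasks and up to 4 busy instances.
        System.out.println(maxNonIdleInstances(2, 2)); // prints 4
    }
}
```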


[GitHub] kafka-site issue #18: Implementation: Clean-up invalid HTML

2017-06-20 Thread guozhangwang
Github user guozhangwang commented on the issue:

https://github.com/apache/kafka-site/pull/18
  
@epeay Could you close this PR as it is already merged?




[GitHub] kafka pull request #3385: KAFKA-4059: Documentation still refers to AsyncPro...

2017-06-20 Thread tombentley
GitHub user tombentley opened a pull request:

https://github.com/apache/kafka/pull/3385

KAFKA-4059: Documentation still refers to AsyncProducer and SyncProducer

Also remove old code snippet which bears little resemblance to the current
Producer API.

The contribution is my original work and I license the work to the project 
under the project's open source license.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tombentley/kafka KAFKA-4059

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3385.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3385


commit a1f81db4783c1b97c8b58a4593873aa0793b7189
Author: Tom Bentley 
Date:   2017-06-20T14:05:21Z

KAFKA-4059: Remove mention of long-deprecated classes

Also remove old code snippet which bears little resemblance to the current
Producer API.






[GitHub] kafka pull request #3384: MINOR: remove unused hitRatio field in NamedCache

2017-06-20 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3384

MINOR: remove unused hitRatio field in NamedCache



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka remove-unused-field

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3384.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3384


commit a971277f5348934c4196d4b7508a4b9907e1a96a
Author: Damian Guy 
Date:   2017-06-20T13:24:34Z

remove unused field






[GitHub] kafka pull request #3383: KAFKA-5481: ListOffsetResponse isn't logged in the...

2017-06-20 Thread ppatierno
GitHub user ppatierno opened a pull request:

https://github.com/apache/kafka/pull/3383

KAFKA-5481: ListOffsetResponse isn't logged in the right way with trace 
level enabled

Added toString() method to ListOffsetResponse for logging

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ppatierno/kafka kafka-5481

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3383.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3383


commit 02bacc15a0de54f0dc75b74789a6922b6da4d83b
Author: ppatierno 
Date:   2017-06-20T12:35:33Z

Added toString() method to ListOffsetResponse for logging






[jira] [Created] (KAFKA-5481) ListOffsetResponse isn't logged in the right way with trace level enabled

2017-06-20 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5481:
-

 Summary: ListOffsetResponse isn't logged in the right way with 
trace level enabled
 Key: KAFKA-5481
 URL: https://issues.apache.org/jira/browse/KAFKA-5481
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: Paolo Patierno
Assignee: Paolo Patierno


Hi,
when the trace level is enabled, the ListOffsetResponse isn't logged in a 
readable way; only the class name is shown in the log:

{code}
[2017-06-20 14:18:50,724] TRACE Received ListOffsetResponse 
org.apache.kafka.common.requests.ListOffsetResponse@7ed5ecd9 from broker 
new-host:9092 (id: 0 rack: null) 
(org.apache.kafka.clients.consumer.internals.Fetcher:674)
{code}

The class doesn't override toString(), so only the default Object 
representation is logged.
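The fix direction can be sketched as follows (an illustrative class, not the actual ListOffsetResponse; the field shown is an assumption, not the real response schema):

```java
public class ResponseToString {
    static class ListOffsetResponseLike {
        final int throttleTimeMs; // illustrative field only

        ListOffsetResponseLike(int throttleTimeMs) {
            this.throttleTimeMs = throttleTimeMs;
        }

        @Override
        public String toString() {
            // With this override, log statements print a readable summary
            // instead of ClassName@hashCode.
            return "ListOffsetResponseLike(throttleTimeMs=" + throttleTimeMs + ")";
        }
    }

    public static void main(String[] args) {
        // Without the override, this would print something like
        // ResponseToString$ListOffsetResponseLike@7ed5ecd9.
        System.out.println(new ListOffsetResponseLike(0));
    }
}
```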





[jira] [Created] (KAFKA-5480) Partition Leader may not be elected although there is one live replica in ISR

2017-06-20 Thread Pengwei (JIRA)
Pengwei created KAFKA-5480:
--

 Summary: Partition Leader may not be elected although there is one 
live replica in ISR
 Key: KAFKA-5480
 URL: https://issues.apache.org/jira/browse/KAFKA-5480
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.2.0, 0.9.0.1
Reporter: Pengwei


Recently we found a consumer blocking in poll() because the coordinator of its 
consumer group is not available.
Digging into the logs, we found that the leader of some of the 
__consumer_offsets partitions is -1, so the coordinator is unavailable because 
the partition leader is unavailable. The scenario is as follows:
There are 3 brokers in the cluster, and the cluster's network is not stable. At 
the beginning, partition [__consumer_offsets,3] has Leader 3 and ISR [3, 1, 2].
1. Broker 1 becomes the controller: 
[2017-06-10 15:48:30,006] INFO [Controller 1]: Broker 1 starting become 
controller state transition (kafka.controller.KafkaController)
[2017-06-10 15:48:30,085] INFO [Controller 1]: Initialized controller epoch to 
8 and zk version 7 (kafka.controller.KafkaController)
[2017-06-10 15:48:30,088] INFO [Controller 1]: Controller 1 incremented epoch 
to 9 (kafka.controller.KafkaController)

2. Broker 2 soon becomes the controller; it is aware of all the brokers: 
[2017-06-10 15:48:30,936] INFO [Controller 2]: Broker 2 starting become 
controller state transition (kafka.controller.KafkaController)
[2017-06-10 15:48:30,936] INFO [Controller 2]: Initialized controller epoch to 
9 and zk version 8 (kafka.controller.KafkaController)
[2017-06-10 15:48:30,943] INFO [Controller 2]: Controller 2 incremented epoch 
to 10 (kafka.controller.KafkaController)

[2017-06-10 15:48:31,574] INFO [Controller 2]: Currently active brokers in the 
cluster: Set(1, 2, 3) (kafka.controller.KafkaController)
[2017-06-10 15:48:31,574] INFO [Controller 2]: Currently shutting brokers in 
the cluster: Set() (kafka.controller.KafkaController)
So broker 2 thinks Leader 3 is alive and does not need to elect a new leader.

3. Broker 1 does not resign until 15:48:32, but it is not aware of broker 3:
[2017-06-10 15:48:31,470] INFO [Controller 1]: List of partitions to be 
deleted: Map() (kafka.controller.KafkaController)
[2017-06-10 15:48:31,470] INFO [Controller 1]: Currently active brokers in the 
cluster: Set(1, 2) (kafka.controller.KafkaController)
[2017-06-10 15:48:31,470] INFO [Controller 1]: Currently shutting brokers in 
the cluster: Set() (kafka.controller.KafkaController)

and changes the Leader to broker 1:
[2017-06-10 15:48:31,847] DEBUG [OfflinePartitionLeaderSelector]: Some broker 
in ISR is alive for [__consumer_offsets,3]. Select 1 from ISR 1,2 to be the 
leader. (kafka.controller.OfflinePartitionLeaderSelector)

Broker 1 resigns at 15:48:32, when its zk client becomes aware that broker 2 
has changed the controller's data:
kafka.common.ControllerMovedException: Broker 1 received update metadata 
request with correlation id 4 from an old controller 1 with epoch 9. Latest 
known controller epoch is 10
at 
kafka.server.ReplicaManager.maybeUpdateMetadataCache(ReplicaManager.scala:621)
at 
kafka.server.KafkaApis.handleUpdateMetadataRequest(KafkaApis.scala:163)
at kafka.server.KafkaApis.handle(KafkaApis.scala:76)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Thread.java:748)
[2017-06-10 15:48:32,307] INFO New leader is 2 
(kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
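The epoch fencing that produces this ControllerMovedException can be sketched as follows. This is an illustrative Python model, not Kafka's actual Scala code; the class and method names are invented for the example:

```python
# Illustrative sketch of controller-epoch fencing (invented names, not
# Kafka's implementation): a broker rejects metadata requests whose
# controller epoch is older than the latest epoch it has seen.

class ControllerMovedError(Exception):
    pass

class Broker:
    def __init__(self):
        self.latest_controller_epoch = -1

    def maybe_update_metadata(self, controller_id, request_epoch):
        # Reject updates from any controller with a stale epoch.
        if request_epoch < self.latest_controller_epoch:
            raise ControllerMovedError(
                f"received update metadata request from an old controller "
                f"{controller_id} with epoch {request_epoch}. Latest known "
                f"controller epoch is {self.latest_controller_epoch}")
        self.latest_controller_epoch = request_epoch

broker = Broker()
broker.maybe_update_metadata(2, 10)      # controller 2, epoch 10: accepted
try:
    broker.maybe_update_metadata(1, 9)   # stale controller 1, epoch 9: rejected
except ControllerMovedError as exc:
    print(exc)
```

This mirrors the check in the stack trace above (ReplicaManager.maybeUpdateMetadataCache), which is why broker 1's epoch-9 request is rejected once epoch 10 is known.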

4. Broker 2's controllerContext.partitionLeadershipInfo now caches leader 3 
with ISR [3, 1, 2], but in ZooKeeper the leader is 1 with ISR [1, 2]. This 
stale cache persists for a long time, until another ZooKeeper event fires.

5. After 1 day, broker 2 received the broker 1's broker change event:
[2017-06-12 21:43:18,287] INFO [BrokerChangeListener on Controller 2]: Broker 
change listener fired for path /brokers/ids with children 2,3 
(kafka.controller.ReplicaStateMachine$BrokerChangeListener)
[2017-06-12 21:43:18,293] INFO [BrokerChangeListener on Controller 2]: Newly 
added brokers: , deleted brokers: 1, all live brokers: 2,3 
(kafka.controller.ReplicaStateMachine$BrokerChangeListener)

Broker 2 then invokes onBrokerFailure for the deleted broker 1, but because 
its cached leader is 3, it does not transition the partition to 
OfflinePartition and does not change the leader in 
partitionStateMachine.triggerOnlinePartitionStateChange().
However, in replicaStateMachine.handleStateChanges(activeReplicasOnDeadBrokers, 
OfflineReplica), it removes replica 1 from the ISR.
In removeReplicaFromIsr, the controller reads the ISR from ZooKeeper again, 
finds the leader has changed to 1, and then sets 
the partition's leader to -1 and the ISR to [2]:

[2017-06-12 21:43:19,158] DEBUG [Controller 2]: Removing replica 1 from ISR 
1,3,2 for partition [__consumer_offsets,3]. (kafka.controller.KafkaController)
[2017-06-12 21:43:19,160] INFO [Controller 2]: N
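The stale-cache interaction in steps 4 and 5 can be sketched like this (an illustrative Python model with invented names, not Kafka's actual controller code):

```python
# Hypothetical sketch of the divergence described above: the controller's
# cache says leader 3 / ISR [3, 1, 2], but ZooKeeper says leader 1 /
# ISR [1, 2]. Removing dead broker 1 from the ISR re-reads ZooKeeper,
# discovers broker 1 is the leader there, and leaves the partition with
# leader -1 and ISR [2].
zk_state = {"leader": 1, "isr": [1, 2]}          # written under broker 1's stale epoch
cached_state = {"leader": 3, "isr": [3, 1, 2]}   # broker 2's stale cache

def remove_replica_from_isr(dead_broker):
    # The controller reads the partition state from ZooKeeper, not its cache.
    state = {"leader": zk_state["leader"], "isr": list(zk_state["isr"])}
    if dead_broker in state["isr"]:
        state["isr"].remove(dead_broker)
    if state["leader"] == dead_broker:
        state["leader"] = -1   # the leader itself died: no leader remains
    return state

print(remove_replica_from_isr(1))   # {'leader': -1, 'isr': [2]}
```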

[GitHub] kafka pull request #3382: KAFKA-4260: Check for nonroutable address in adver...

2017-06-20 Thread tombentley
GitHub user tombentley opened a pull request:

https://github.com/apache/kafka/pull/3382

KAFKA-4260: Check for nonroutable address in advertised.listeners

As described in KAFKA-4260, when `listeners=PLAINTEXT://0.0.0.0:9092` (note 
the 0.0.0.0 "bind all interfaces" IP address) and `advertised.listeners` is not 
specified it defaults to `listeners`, but it makes no sense to advertise 
0.0.0.0 as it's not a routable IP address.

This patch checks for a 0.0.0.0 host in `advertised.listeners` (whether via 
default or not) and fails with a meaningful error if it's found.
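A minimal sketch of such a validation, assuming a comma-separated listener string as in Kafka's config format (the function name and error text here are illustrative, not the patch's actual code):

```python
# Illustrative sketch: reject any advertised listener whose host is the
# non-routable meta-address 0.0.0.0. Not the patch's real implementation.
from urllib.parse import urlparse

def validate_advertised_listeners(advertised_listeners: str) -> None:
    for listener in advertised_listeners.split(","):
        # urlparse handles arbitrary schemes such as PLAINTEXT:// or SSL://
        host = urlparse(listener.strip()).hostname
        if host == "0.0.0.0":
            raise ValueError(
                f"advertised.listeners cannot use the non-routable "
                f"address 0.0.0.0: {listener.strip()}")

validate_advertised_listeners("PLAINTEXT://broker1:9092")  # passes
try:
    validate_advertised_listeners("PLAINTEXT://0.0.0.0:9092")
except ValueError as exc:
    print(exc)
```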

This contribution is my original work and I license the work to the project 
under the project's open source license.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tombentley/kafka advertised.listeners

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3382.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3382


commit 4286adcecbf9de28ffe66ca5afd6626bc9cace1a
Author: Tom Bentley 
Date:   2017-06-20T10:40:22Z

KAFKA-4260: Check for nonroutable address in advertised.listeners




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-2465) Need to document replica.fetcher.backoff.ms

2017-06-20 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-2465.

Resolution: Fixed

> Need to document replica.fetcher.backoff.ms
> ---
>
> Key: KAFKA-2465
> URL: https://issues.apache.org/jira/browse/KAFKA-2465
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gwen Shapira
>Assignee: Sriharsha Chintalapani
>
> We added this parameter in KAFKA-1461, it changes existing behavior and is 
> configurable by users. 
> We should document the new behavior and the parameter.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3381: KAFKA-5479: Tidy up Authorization section of Secur...

2017-06-20 Thread tombentley
GitHub user tombentley opened a pull request:

https://github.com/apache/kafka/pull/3381

KAFKA-5479: Tidy up Authorization section of Security docs

* Mention the authz is disabled by default and enabled via 
authorizer.class.name
* ACL is an initialism, so spell it ACL not acl.
* In examples standardize on "we" rather than having a mixture of "the 
user", "you" and "we"
* Link to KIP-11 instead of just referencing it

This contribution is my original work and I license the work to the project 
under the project's open source license.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tombentley/kafka doc-acl

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3381.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3381


commit 1b69b040200d562f2667e1ff7d26ca0bae0dfe8a
Author: Tom Bentley 
Date:   2017-06-19T10:50:51Z

KAFKA-5479: Tidy up Authorization section of Security docs

* Mention the authz is disabled by default and enabled via 
authorizer.class.name
* ACL is an initialism, so spell it ACL not acl.
* In examples standardize on "we" rather than having a mixture of "the 
user", "you" and "we"
* Link to KIP-11 instead of just referencing it






[GitHub] kafka pull request #3369: KAFKA-5479: Improve the documentation around ACLs

2017-06-20 Thread tombentley
Github user tombentley closed the pull request at:

https://github.com/apache/kafka/pull/3369




[jira] [Created] (KAFKA-5479) Docs for authorization omit authorizer.class.name

2017-06-20 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5479:
--

 Summary: Docs for authorization omit authorizer.class.name
 Key: KAFKA-5479
 URL: https://issues.apache.org/jira/browse/KAFKA-5479
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Reporter: Tom Bentley
Priority: Minor


The documentation in §7.4 Authorization and ACLs doesn't mention the 
{{authorizer.class.name}} setting. 





Re: JIRA contributor list

2017-06-20 Thread Tom Bentley
According to the website [1] I need to ask to be able to assign JIRAs to
myself, but I'm still unable to do this. Could someone set this up for me
please?

Thanks,

Tom

[1]: https://kafka.apache.org/contributing

On 14 June 2017 at 13:43, Tom Bentley  wrote:

> Please could I be added to the JIRA contributor list so that I can assign
> issues to myself?
>
> Thanks,
>
> Tom
>


Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL Permission of OffsetFetch

2017-06-20 Thread Michal Borowiecki

+1


On 19/06/17 21:31, Vahid S Hashemian wrote:

Thanks everyone. Great discussion.

Because these Read or Write actions are interpreted in conjunction with
particular resources (Topic, Group, ...) it would also make more sense to
me that for committing offsets the ACL should be (Group, Write).
So, a consumer would be required to have (Topic, Read), (Group, Write)
ACLs in order to function.

--Vahid
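The permission scheme converged on above can be modeled as a toy lookup table (an illustration of the proposal, not Kafka's actual authorizer code; the Read-implies-Describe rule matches Kafka's documented ACL semantics):

```python
# Toy model of the proposed ACLs: consuming needs (Topic, Read) plus
# (Group, Write) for committing offsets, while fetching committed
# offsets would only need Describe. Mapping is the thread's proposal,
# not Kafka's current behavior.
REQUIRED_ACL = {
    "Fetch":        ("Topic", "Read"),
    "OffsetCommit": ("Group", "Write"),
    "OffsetFetch":  ("Group", "Describe"),
}

# In Kafka's ACL model, Read/Write/Delete/Alter on a resource imply Describe.
IMPLIES_DESCRIBE = {"Read", "Write", "Delete", "Alter"}

def authorized(granted, api):
    resource, needed = REQUIRED_ACL[api]
    ops = granted.get(resource, set())
    if needed == "Describe" and ops & IMPLIES_DESCRIBE:
        return True
    return needed in ops

consumer = {"Topic": {"Read"}, "Group": {"Write"}}
assert authorized(consumer, "Fetch")
assert authorized(consumer, "OffsetCommit")
assert authorized(consumer, "OffsetFetch")  # Write on Group implies Describe
```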




From:   Colin McCabe 
To: us...@kafka.apache.org
Date:   06/19/2017 11:01 AM
Subject:Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL
Permission of OffsetFetch



Thanks for the explanation.  I still think it would be better to have
the mutation operations require write ACLs, though.  It might not be
100% intuitive for novice users, but the current split between Describe
and Read is not intuitive for either novice or experienced users.

In any case, I am +1 on the incremental improvement discussed in
KIP-163.

cheers,
Colin


On Sat, Jun 17, 2017, at 11:11, Hans Jespersen wrote:

Offset commit is something that is done in the act of consuming (or
reading) Kafka messages. Yes, technically it is a write to the Kafka
consumer offset topic, but it's much easier for administrators to think
of ACLs in terms of whether the user is allowed to write (Produce) or
read (Consume) messages, and not the lower-level semantics that
consuming is actually reading AND writing (albeit only to the offset
topic).

-hans





On Jun 17, 2017, at 10:59 AM, Viktor Somogyi

 wrote:

Hi Vahid,

+1 for OffsetFetch from me too.

I also wanted to ask about the strangeness of the permissions, like why
OffsetCommit is a Read operation instead of Write, which would
intuitively make more sense to me. Perhaps an expert could shed some
light on this? :)

Viktor

On Tue, Jun 13, 2017 at 2:38 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com > wrote:


Hi Michal,

Thanks a lot for your feedback.

Your statement about Heartbeat is fair and makes sense. I'll update the
KIP accordingly.

--Vahid




From:Michal Borowiecki 
To:us...@kafka.apache.org, Vahid S Hashemian <
vahidhashem...@us.ibm.com>, dev@kafka.apache.org
Date:06/13/2017 01:35 AM
Subject:Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL
Permission of OffsetFetch
--



Hi Vahid,

+1 wrt OffsetFetch.

The "Additional Food for Thought" mentions Heartbeat as a

non-mutating

action. I don't think that's true as the GroupCoordinator updates the
latestHeartbeat field for the member and adds a new object to the
heartbeatPurgatory, see completeAndScheduleNextHeartbeatExpiration()
called from handleHeartbeat()

NB added dev mailing list back into CC as it seems to have been lost

along

the way.

Cheers,

Michał
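The point about Heartbeat mutating state can be illustrated with a toy coordinator (invented names and structure, not the GroupCoordinator's real implementation):

```python
# Toy illustration: handling a Heartbeat mutates group state twice,
# updating the member's latest heartbeat and scheduling the next
# expiration check, as the message above describes.
import time

class Member:
    def __init__(self):
        self.latest_heartbeat = 0.0

class Coordinator:
    def __init__(self):
        self.heartbeat_purgatory = []   # pending expiration deadlines

    def handle_heartbeat(self, member, session_timeout=10.0):
        member.latest_heartbeat = time.monotonic()        # mutation 1
        self.heartbeat_purgatory.append(                  # mutation 2
            member.latest_heartbeat + session_timeout)

member, coordinator = Member(), Coordinator()
coordinator.handle_heartbeat(member)
assert member.latest_heartbeat > 0.0
assert len(coordinator.heartbeat_purgatory) == 1
```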


On 12/06/17 18:47, Vahid S Hashemian wrote:
Hi Colin,

Thanks for the feedback.

To be honest, I'm not sure either why Read was selected instead of
Write for mutating APIs in the initial design (I asked Ewen on the
corresponding JIRA and he seemed unsure too).
Perhaps someone who was involved in the design can clarify.

Thanks.
--Vahid




From:   Colin McCabe <cmcc...@apache.org>


To: us...@kafka.apache.org


Date:   06/12/2017 10:11 AM
Subject:Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL
Permission of OffsetFetch



Hi Vahid,

I think you make a valid point that the ACLs controlling group
operations are not very intuitive.

This is probably a dumb question, but why are we using Read for
mutating APIs?  Shouldn't that be Write?

The distinction between Describe and Read makes a lot of sense for
Topics.  A group isn't really something that you "read" from in the
same way as a topic, so it always felt kind of weird there.

best,
Colin


On Thu, Jun 8, 2017, at 11:29, Vahid S Hashemian wrote:

Hi all,

I'm resending my earlier note hoping it would spark some conversation
this
time around :)

Thanks.
--Vahid




From:   "Vahid S Hashemian" *
mailto:vahidhashem...@us.ibm.com>>*

mailto:vahidhashem...@us.ibm.com>>
To: dev *mailto:dev@kafka.apache.org>>*

mailto:dev@kafka.apache.org>>, "Kafka User"

*mailto:us...@kafka.apache.org>>*

mailto:us...@kafka.apache.org>>

Date:   05/30/2017 08:33 AM
Subject:KIP-163: Lower the Minimum Required ACL Permission of
OffsetFetch



Hi,

I started a new KIP to improve the minimum required ACL permissions

of

some of the APIs:





https://cwiki.apache.org/confluence/display/KAFKA/KIP-163%3A+Lower+the+Minimum+Required+ACL+Permission+of+OffsetFetch