[jira] [Commented] (KAFKA-2788) allow comma when specifying principals in AclCommand

2015-11-10 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14998910#comment-14998910
 ] 

Ben Stopford commented on KAFKA-2788:
-------------------------------------

OK Parth. Thanks. Ping me if you want a review.

> allow comma when specifying principals in AclCommand
> ----------------------------------------------------
>
> Key: KAFKA-2788
> URL: https://issues.apache.org/jira/browse/KAFKA-2788
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Parth Brahmbhatt
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Currently, comma doesn't seem to be allowed in AclCommand when specifying 
> allow-principals and deny-principals. However, when using ssl authentication, 
> by default, the client will look like the following and one can't pass that 
> in through AclCommand.
> "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown"
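
The quoted DN shows the conflict directly: the principal itself contains commas, so any comma-based splitting of a principal list shatters it. A minimal, hypothetical sketch (not AclCommand's actual parsing code) of the failure mode:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical illustration, not AclCommand's real parser: splitting a
// principal argument on commas breaks apart an SSL distinguished name,
// which legitimately contains commas of its own.
public class PrincipalSplit {
    static List<String> naiveSplit(String arg) {
        return Arrays.asList(arg.split(","));
    }

    public static void main(String[] args) {
        String dn = "User:CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown";
        // One principal comes back as six fragments, none of them a valid principal:
        System.out.println(naiveSplit(dn).size()); // 6, not 1
    }
}
```

Any fix has to either escape/quote the separator or treat the whole argument as a single principal; which approach the eventual patch took is not stated in this thread.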



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Dependency Updates

2015-11-10 Thread Grant Henke
With a new major release comes the opportunity to update dependencies;
potentially including bug fixes, performance improvements, and useful
features. Below is some analysis of the current state of Kafka dependencies
and the available updates with change logs:

Note: this shows [Current -> Newest]; there may be maintenance releases in
between that are a more appropriate choice.


   - org.scala-lang:scala-library [2.10.5 -> 2.10.6]
  - http://www.scala-lang.org/news/2.10.6
  - Scala 2.10.6 resolves a license incompatibility in
  scala.util.Sorting
  - Otherwise identical to Scala 2.10.5
  - Requires small Gradle build changes and a variable change in
  kafka-run-class.sh
   - org.xerial.snappy:snappy-java [1.1.1.7 -> 1.1.2]
  - https://github.com/xerial/snappy-java/blob/develop/Milestone.md
  - Fixes "SnappyOutputStream.close() is not idempotent"
   - net.jpountz.lz4:lz4 [1.2.0 -> 1.3]
  - http://blog.jpountz.net/post/103674111856/lz4-java-130-is-out
  - May want to rewrite the integration to use ByteBuffers now that it's
  available
   - junit:junit [4.11 -> 4.12]
  -
  https://github.com/junit-team/junit/blob/master/doc/ReleaseNotes4.12.md
   - org.easymock:easymock [3.3.1 -> 3.4]
  - https://github.com/easymock/easymock/releases/tag/easymock-3.4
   - org.powermock:powermock-api-easymock [1.6.2 -> 1.6.3]
   - org.powermock:powermock-module-junit4 [1.6.2 -> 1.6.3]
  - https://github.com/jayway/powermock/blob/master/changelog.txt
   - org.slf4j:slf4j-api [1.7.6 -> 1.7.12]
   - org.slf4j:slf4j-log4j12 [1.7.6 -> 1.7.12]
  - http://www.slf4j.org/news.html
   - com.fasterxml.jackson.jaxrs:jackson-jaxrs-json-provider [2.5.4 ->
   2.6.3]
  -
  
https://github.com/FasterXML/jackson-jaxrs-providers/blob/master/release-notes/VERSION
   - com.fasterxml.jackson.core:jackson-databind [2.5.4 -> 2.6.3]
  -
  
https://github.com/FasterXML/jackson-databind/blob/master/release-notes/VERSION#L61
  - many small bug fixes
   - org.eclipse.jetty:jetty-server [9.2.12.v20150709 -> 9.3.5.v20151012]
   - org.eclipse.jetty:jetty-servlet [9.2.12.v20150709 -> 9.3.5.v20151012]
  - https://github.com/eclipse/jetty.project/blob/master/VERSION.txt
   - org.bouncycastle:bcpkix-jdk15on [1.52 -> 1.53]
  - https://www.bouncycastle.org/releasenotes.html
   - net.sf.jopt-simple:jopt-simple [3.2 -> 4.9]
  - Only used in migration tool
  - Remove in favor of argparse? to reduce dependencies
   - org.rocksdb:rocksdbjni [3.10.1 -> 4.0]
  - https://github.com/facebook/rocksdb/releases
   - org.objenesis:objenesis [1.2 -> 2.2]
  - http://objenesis.org/notes.html
  - Is this library still needed/used?
   - com.yammer.metrics:metrics-core [2.2.0 -> NA]
  - Under new location: io.dropwizard.metrics:metrics-core:3.1.2
  - Explanation:
  https://groups.google.com/d/msg/dropwizard-user/1usH7frpnZE/RSQUsOBFMsoJ
  - Likely too big of a change to be worth it, since Kafka's own metrics now
  exist
  - Listed for completeness

So do we want to update any of these? Any that we absolutely should not?
Once we get a list of those to be updated I can send a pull request.

Thanks,
Grant
-- 
Grant Henke
Software Engineer | Cloudera
gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


Re: [DISCUSS] KIP-36 - Rack aware replica assignment

2015-11-10 Thread Allen Wang
I am busy with some time pressing issues for the last few days. I will
think about how the incomplete rack information will affect the balance and
update the KIP by early next week.

Thanks,
Allen


On Tue, Nov 3, 2015 at 9:03 AM, Neha Narkhede  wrote:

> Few suggestions on improving the KIP
>
> *If some brokers have rack, and some do not, the algorithm will throw an
> > exception. This is to prevent incorrect assignment caused by user error.*
>
>
> In the KIP, can you clearly state the user-facing behavior when some
> brokers have rack information and some don't. Which actions and requests
> will error out and how?
>
> *Even distribution of partition leadership among brokers*
>
>
> There is some information about arranging the sorted broker list interlaced
> with rack ids. Can you describe the changes to the current algorithm in a
> little more detail? How does this interlacing work if only a subset of
> brokers have the rack id configured? Does this still work if uneven # of
> brokers are assigned to each rack? It might work, I'm looking for more
> details on the changes, since it will affect the behavior seen by the user
> - imbalance on either the leaders or data or both.
>
> On Mon, Nov 2, 2015 at 6:39 PM, Aditya Auradkar 
> wrote:
>
> > I think this sounds reasonable. Anyone else have comments?
> >
> > Aditya
> >
> > On Tue, Oct 27, 2015 at 5:23 PM, Allen Wang 
> wrote:
> >
> > > During the discussion in the hangout, it was mentioned that it would be
> > > desirable that consumers know the rack information of the brokers so
> that
> > > they can consume from the broker in the same rack to reduce latency.
> As I
> > > understand this will only be beneficial if consumer can consume from
> any
> > > broker in ISR, which is not possible now.
> > >
> > > I suggest we skip the change to TMR. Once the change is made to
> consumer
> > to
> > > be able to consume from any broker in ISR, the rack information can be
> > > added to TMR.
> > >
> > > Another thing I want to confirm is the command line behavior. I think the
> > > desirable default behavior is to fail fast on command line for
> incomplete
> > > rack mapping. The error message can include further instruction that
> > tells
> > > the user to add an extra argument (like "--allow-partial-rackinfo") to
> > > suppress the error and do an imperfect rack aware assignment. If the
> > > default behavior is to allow incomplete mapping, the error can still be
> > > easily missed.
> > >
> > > The affected command line tools are TopicCommand and
> > > ReassignPartitionsCommand.
> > >
> > > Thanks,
> > > Allen
> > >
> > >
> > >
> > >
> > >
> > > On Mon, Oct 26, 2015 at 12:55 PM, Aditya Auradkar <
> > aaurad...@linkedin.com>
> > > wrote:
> > >
> > > > Hi Allen,
> > > >
> > > > For TopicMetadataResponse to understand version, you can bump up the
> > > > request version itself. Based on the version of the request, the
> > response
> > > > can be appropriately serialized. It shouldn't be a huge change. For
> > > > example: We went through something similar for ProduceRequest
> recently
> > (
> > > > https://reviews.apache.org/r/33378/)
> > > > I guess the reason protocol information is not included in the TMR is
> > > > because the topic itself is independent of any particular protocol
> (SSL
> > > vs
> > > > Plaintext). Having said that, I'm not sure we even need rack
> > information
> > > in
> > > > TMR. What usecase were you thinking of initially?
> > > >
> > > > For 1 - I'd be fine with adding an option to the command line tools
> > that
> > > > check rack assignment. For e.g. "--strict-assignment" or something
> > > similar.
> > > >
> > > > Aditya
> > > >
> > > > On Thu, Oct 22, 2015 at 6:44 PM, Allen Wang 
> > > wrote:
> > > >
> > > > > For 2 and 3, I have updated the KIP. Please take a look. One thing
> I
> > > have
> > > > > changed is removing the proposal to add rack to
> > TopicMetadataResponse.
> > > > The
> > > > > reason is that unlike UpdateMetadataRequest, TopicMetadataResponse
> > does
> > > > not
> > > > > understand version. I don't see a way to include rack without
> > breaking
> > > > old
> > > > > version of clients. That's probably why secure protocol is not
> > included
> > > > in
> > > > > the TopicMetadataResponse either. I think it will be a much bigger
> > > change
> > > > > to include rack in TopicMetadataResponse.
> > > > >
> > > > > For 1, my concern is that doing rack aware assignment without
> > complete
> > > > > broker to rack mapping will result in assignment that is not rack
> > aware
> > > > and
> > > > > fail to provide fault tolerance in the event of rack outage. This
> > kind
> > > of
> > > > > problem will be difficult to surface. And the cost of this problem
> is
> > > > high:
> > > > > you have to do partition reassignment if you are lucky to spot the
> > > > problem
> > > > > early on or face the consequence of data loss during real rack
> 
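
The "interlaced" broker ordering discussed in this thread can be sketched as a round-robin over racks. This is an assumed illustration, not the actual KIP-36 implementation; note how uneven racks (Neha's question) leave the tail of the ordering less evenly interlaced:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only: order brokers by cycling through racks so that
// consecutive entries come from different racks where possible.
public class RackInterleave {
    static List<Integer> interleave(Map<String, List<Integer>> brokersByRack) {
        List<Integer> result = new ArrayList<>();
        List<Deque<Integer>> queues = new ArrayList<>();
        for (List<Integer> b : brokersByRack.values()) queues.add(new ArrayDeque<>(b));
        boolean remaining = true;
        while (remaining) {
            remaining = false;
            // One pass takes at most one broker from each rack.
            for (Deque<Integer> q : queues) {
                if (!q.isEmpty()) { result.add(q.poll()); remaining = true; }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, List<Integer>> racks = new LinkedHashMap<>();
        racks.put("rack-1", Arrays.asList(0, 1));
        racks.put("rack-2", Arrays.asList(2, 3));
        racks.put("rack-3", Arrays.asList(4)); // uneven rack
        System.out.println(interleave(racks)); // [0, 2, 4, 1, 3]
    }
}
```

With uneven racks the later passes draw from fewer racks, so replicas assigned from the tail of the list are less spread out; that is exactly the imbalance the questions above probe.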

[jira] [Commented] (KAFKA-2788) allow comma when specifying principals in AclCommand

2015-11-10 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14998899#comment-14998899
 ] 

Ben Stopford commented on KAFKA-2788:
-------------------------------------

I'll pick this one up if you haven't done it yet [~parth.brahmbhatt]



[jira] [Created] (KAFKA-2797) Release artifact expects a git repository for the release audit tool (RAT)

2015-11-10 Thread Flavio Junqueira (JIRA)
Flavio Junqueira created KAFKA-2797:
---

 Summary: Release artifact expects a git repository for the release 
audit tool (RAT)
 Key: KAFKA-2797
 URL: https://issues.apache.org/jira/browse/KAFKA-2797
 Project: Kafka
  Issue Type: Bug
  Components: build
Affects Versions: 0.9.0.0
Reporter: Flavio Junqueira
Priority: Blocker
 Fix For: 0.9.0.0


When running gradle on the RC0 for 0.9, we get an error because the build 
expects to find a git repo here:

{noformat}
line 68 of build.gradle: def repo = Grgit.open(project.file('.'))
{noformat}

and we get this error message:

{noformat}
FAILURE: Build failed with an exception.

* Where:
Build file 'kafka-0.9.0.0-src/build.gradle' line: 68

* What went wrong:
A problem occurred evaluating root project 'kafka-0.9.0.0-src'.
> repository not found: kafka-0.9.0.0-src

{noformat}

The definitions for rat make sense when working on a git branch, but not for 
the release artifact. One way around this is to disable rat by commenting out 
the corresponding lines, but that isn't what the README file says.





[jira] [Updated] (KAFKA-2788) allow comma when specifying principals in AclCommand

2015-11-10 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2788:
---------------------------
Priority: Blocker  (was: Major)



[jira] [Commented] (KAFKA-2788) allow comma when specifying principals in AclCommand

2015-11-10 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14998905#comment-14998905
 ] 

Parth Brahmbhatt commented on KAFKA-2788:
-----------------------------------------

I have started working on it, will send a PR sometime today.



[jira] [Created] (KAFKA-2799) WakupException thrown in the followup poll() could lead to data loss

2015-11-10 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-2799:


 Summary: WakupException thrown in the followup poll() could lead 
to data loss
 Key: KAFKA-2799
 URL: https://issues.apache.org/jira/browse/KAFKA-2799
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
Assignee: Guozhang Wang
Priority: Blocker
 Fix For: 0.9.0.0


The common pattern of the new consumer:

{code}
try {
   records = consumer.poll(timeout);
   // process records
} catch (WakeupException e) {
   consumer.close();
}
{code}

in which close() can commit offsets. But the poll() call performs the
following steps in order:

1) trigger client.poll().
2) possibly update the consumed position if the fetch response contains data.
3) before returning the records, possibly trigger another client.poll().

If a WakeupException is thrown in 3), offsets of messages that were never
returned to the caller can be committed, hence data loss.
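
The hazard can be modeled without the real client. The toy model below (an illustration, not Kafka code) advances an internal position in step 2 and may "wake up" in step 3; committing that internal position on close() then skips records the application never saw, while committing only up to the last returned record does not:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the poll() ordering described above -- not the real consumer.
public class PollModel {
    long consumedPosition = 0;  // advanced inside poll() (step 2)
    long lastReturned = 0;      // high-water mark of records actually handed back
    boolean wakeupPending = false;

    List<Long> poll(int n) {
        List<Long> records = new ArrayList<>();
        for (int i = 0; i < n; i++) records.add(consumedPosition++); // step 2
        if (wakeupPending) throw new RuntimeException("wakeup");     // step 3
        lastReturned = consumedPosition;
        return records;
    }

    // Unsafe: commits the internal position, including never-returned records.
    long commitPositionOnClose() { return consumedPosition; }

    // Safe: commits only what the application actually received.
    long commitProcessedOnClose() { return lastReturned; }

    public static void main(String[] args) {
        PollModel m = new PollModel();
        m.poll(5);               // records 0..4 delivered
        m.wakeupPending = true;
        try { m.poll(5); } catch (RuntimeException e) { /* wakeup in step 3 */ }
        System.out.println(m.commitPositionOnClose());  // 10: records 5..9 would be skipped
        System.out.println(m.commitProcessedOnClose()); // 5: no loss
    }
}
```

By analogy, one way to avoid the loss in the real client is for close() after a wakeup to commit only offsets of records that poll() actually returned, not the fetcher's internal position; the actual fix for this ticket is not detailed here.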





Build failed in Jenkins: kafka-trunk-jdk7 #794

2015-11-10 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2797: Only run rat when in the .git repository since it requires

[cshapi] KAFKA-2792: Don't wait for a response to the leave group message when

--
[...truncated 96 lines...]
@deprecated
 ^
:389:
 class BrokerEndPoint in object UpdateMetadataRequest is deprecated: see 
corresponding Javadoc for more information.
  new UpdateMetadataRequest.BrokerEndPoint(brokerEndPoint.id, 
brokerEndPoint.host, brokerEndPoint.port)
^
:391:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
new UpdateMetadataRequest(controllerId, controllerEpoch, 
liveBrokers.asJava, partitionStates.asJava)
^
:129:
 method readFromReadableChannel in class NetworkReceive is deprecated: see 
corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
   ^
there were 15 feature warning(s); re-run with -feature for details
17 warnings found
:kafka-trunk-jdk7:core:processResources UP-TO-DATE
:kafka-trunk-jdk7:core:classes
:kafka-trunk-jdk7:clients:compileTestJavaNote: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.

:kafka-trunk-jdk7:clients:processTestResources
:kafka-trunk-jdk7:clients:testClasses
:kafka-trunk-jdk7:core:copyDependantLibs
:kafka-trunk-jdk7:core:copyDependantTestLibs
:kafka-trunk-jdk7:core:jar
:jar_core_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk7:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes UP-TO-DATE
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:399:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:273:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:293:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate

Re: [VOTE] 0.9.0.0 Candidate 1

2015-11-10 Thread Ewen Cheslack-Postava
Jun, not sure if this is just because of the RC vs being published on the
site, but the links in the release notes aren't pointing to
issues.apache.org. They're relative URLs instead of absolute.

-Ewen

On Tue, Nov 10, 2015 at 3:38 AM, Flavio Junqueira  wrote:

> -1 (non-binding)
>
> I'm getting an error with gradle when using the source artifact because it
> seems to be expecting a git repository here:
>
> line 68 of build.gradle: def repo = Grgit.open(project.file('.'))
>
> and I get this message:
> FAILURE: Build failed with an exception.
>
> * Where:
> Build file 'kafka-0.9.0.0-src/build.gradle' line: 68
>
> * What went wrong:
> A problem occurred evaluating root project 'kafka-0.9.0.0-src'.
> > repository not found: kafka-0.9.0.0-src
>
> The definitions for rat make sense when working on a git branch, but not
> for the release artifact. One
> way around this is to disable rat by commenting out the corresponding
> lines, but that isn't what the
> README file says. I'd rather have an RC that fixes this issue by possibly
> disabling rat altogether.
>
> -Flavio
>
> > On 10 Nov 2015, at 07:17, Jun Rao  wrote:
> >
> > This is the first candidate for release of Apache Kafka 0.9.0.0. This a
> > major release that includes (1) authentication (through SSL and SASL) and
> > authorization, (2) a new java consumer, (3) a Kafka connect framework for
> > data ingestion and egression, and (4) quotas. Since this is a major
> > release, we will give people a bit more time for trying this out.
> >
> > Release Notes for the 0.9.0.0 release
> >
> https://people.apache.org/~junrao/kafka-0.9.0.0-candidate1/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Thursday, Nov. 19, 11pm PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS in addition to the md5, sha1
> > and sha2 (SHA256) checksum.
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate1/
> >
> > * Maven artifacts to be voted upon prior to release:
> > https://repository.apache.org/content/groups/staging/
> >
> > * scala-doc
> > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate1/scaladoc/
> >
> > * java-doc
> > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate1/javadoc/
> >
> > * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.0 tag
> >
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=6cee4f38aba612209b0a8171736c6e2948c35b6f
> >
> > * Documentation
> > http://kafka.apache.org/090/documentation.html
> >
> > /***
> >
> > Thanks,
> >
> > Jun
>
>


-- 
Thanks,
Ewen


Build failed in Jenkins: kafka-trunk-jdk8 #125

2015-11-10 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2797: Only run rat when in the .git repository since it requires

[cshapi] KAFKA-2792: Don't wait for a response to the leave group message when

[cshapi] KAFKA-2798: Use prefixed configurations for Kafka Connect producer and

--
[...truncated 125 lines...]
:jar_core_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScalaJava HotSpot(TM) 64-Bit Server VM warning: 
ignoring option MaxPermSize=512m; support was removed in 8.0

:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:399:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:273:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:293:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
:294:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
:115:
 value METADATA_FETCH_TIMEOUT_CONFIG in object ProducerConfig is deprecated: 
see corresponding Javadoc for more information.
props.put(ProducerConfig.METADATA_FETCH_TIMEOUT_CONFIG, 
config.metadataFetchTimeoutMs.toString)
 ^
:117:
 value TIMEOUT_CONFIG in object ProducerConfig is deprecated: see corresponding 
Javadoc for more information.
props.put(ProducerConfig.TIMEOUT_CONFIG, config.requestTimeoutMs.toString)
 ^
:121:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
  props.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "false")
   ^
:74:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for 

Build failed in Jenkins: kafka-trunk-jdk7 #796

2015-11-10 Thread Apache Jenkins Server
See 

--
[...truncated 133 lines...]
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:399:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:273:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:293:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
:294:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
:115:
 value METADATA_FETCH_TIMEOUT_CONFIG in object ProducerConfig is deprecated: 
see corresponding Javadoc for more information.
props.put(ProducerConfig.METADATA_FETCH_TIMEOUT_CONFIG, 
config.metadataFetchTimeoutMs.toString)
 ^
:117:
 value TIMEOUT_CONFIG in object ProducerConfig is deprecated: see corresponding 
Javadoc for more information.
props.put(ProducerConfig.TIMEOUT_CONFIG, config.requestTimeoutMs.toString)
 ^
:121:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
  props.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "false")
   ^
:74:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
producerProps.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
 ^
:194:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
  maybeSetDefaultProperty(producerProps, 
ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
^
:40:
 @deprecated now takes two arguments; see the scaladoc.
@deprecated
 ^
:234:
 method readLine in class DeprecatedConsole is deprecated: Use the method in 
scala.io.StdIn
Console.readLine().equalsIgnoreCase("y")
^
:353:
 method readLine in class DeprecatedConsole is deprecated: Use the method in 
scala.io.StdIn
if 

[jira] [Updated] (KAFKA-2797) Release artifact expects a git repository for the release audit tool (RAT)

2015-11-10 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2797:
-----------------------------------------
Reviewer: Gwen Shapira
  Status: Patch Available  (was: Open)



[jira] [Updated] (KAFKA-1810) Add IP Filtering / Whitelists-Blacklists

2015-11-10 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1810:

Fix Version/s: (was: 0.9.0.0)

> Add IP Filtering / Whitelists-Blacklists 
> -
>
> Key: KAFKA-1810
> URL: https://issues.apache.org/jira/browse/KAFKA-1810
> Project: Kafka
>  Issue Type: New Feature
>  Components: core, network, security
>Reporter: Jeff Holoman
>Assignee: Jeff Holoman
>Priority: Minor
> Attachments: KAFKA-1810.patch, KAFKA-1810_2015-01-15_19:47:14.patch, 
> KAFKA-1810_2015-03-15_01:13:12.patch
>
>
> While longer-term goals of security in Kafka are on the roadmap there exists 
> some value for the ability to restrict connection to Kafka brokers based on 
> IP address. This is not intended as a replacement for security but more of a 
> precaution against misconfiguration and to provide some level of control to 
> Kafka administrators about who is reading/writing to their cluster.
> 1) In some organizations software administration vs o/s systems 
> administration and network administration is disjointed and not well 
> choreographed. Providing software administrators the ability to configure 
> their platform relatively independently (after initial configuration) from 
> Systems administrators is desirable.
> 2) Configuration and deployment is sometimes error prone and there are 
> situations when test environments could erroneously read/write to production 
> environments
> 3) An additional precaution against reading sensitive data is typically 
> welcomed in most large enterprise deployments.





Build failed in Jenkins: kafka-trunk-jdk7 #795

2015-11-10 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2798: Use prefixed configurations for Kafka Connect producer and

--
[...truncated 316 lines...]
:kafka-trunk-jdk7:clients:classes UP-TO-DATE
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:clients:javadoc UP-TO-DATE
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala UP-TO-DATE
:kafka-trunk-jdk7:core:processResources UP-TO-DATE
:kafka-trunk-jdk7:core:classes UP-TO-DATE
:kafka-trunk-jdk7:core:javadoc
cache fileHashes.bin 
(
 is corrupt. Discarding.
:kafka-trunk-jdk7:core:javadocJar
:kafka-trunk-jdk7:core:scaladoc
[ant:scaladoc] Element 
' 
does not exist.
[ant:scaladoc] 
:293:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.uncleanLeaderElectionRate
[ant:scaladoc] ^
[ant:scaladoc] 
:294:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.leaderElectionTimer
[ant:scaladoc] ^
[ant:scaladoc] 
:40:
 warning: @deprecated now takes two arguments; see the scaladoc.
[ant:scaladoc] @deprecated
[ant:scaladoc]  ^
[ant:scaladoc] warning: there were 15 feature warning(s); re-run with -feature 
for details
[ant:scaladoc] 
:72:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:32:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:137:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:120:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:97:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#put".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:152:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#take".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 10 warnings found
:kafka-trunk-jdk7:core:scaladocJar
:kafka-trunk-jdk7:core:docsJar
:docsJar_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk7:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes UP-TO-DATE
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:clients:javadoc UP-TO-DATE
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,
   

[GitHub] kafka pull request: KAFKA-2797: Only run rat when in the .git repo...

2015-11-10 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/485

KAFKA-2797: Only run rat when in the .git repository since it requires the 
.gitignore to generate the list of files to ignore



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2797-disable-rat-when-git-missing

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/485.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #485


commit 79c6cac66644b458eff8141e13dacd0b859f5c3c
Author: Ewen Cheslack-Postava 
Date:   2015-11-10T17:33:45Z

KAFKA-2797: Only run rat when in the .git repository since it requires the 
.gitignore to generate the list of files to ignore




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2797) Release artifact expects a git repository for the release audit tool (RAT)

2015-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14998996#comment-14998996
 ] 

ASF GitHub Bot commented on KAFKA-2797:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/485

KAFKA-2797: Only run rat when in the .git repository since it requires the 
.gitignore to generate the list of files to ignore



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2797-disable-rat-when-git-missing

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/485.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #485


commit 79c6cac66644b458eff8141e13dacd0b859f5c3c
Author: Ewen Cheslack-Postava 
Date:   2015-11-10T17:33:45Z

KAFKA-2797: Only run rat when in the .git repository since it requires the 
.gitignore to generate the list of files to ignore




> Release artifact expects a git repository for the release audit tool (RAT)
> --
>
> Key: KAFKA-2797
> URL: https://issues.apache.org/jira/browse/KAFKA-2797
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When running gradle on the RC0 for 0.9, we get an error because the build 
> expects to find a git repo here:
> {noformat}
>   line 68 of build.gradle: def repo = Grgit.open(project.file('.'))
> {noformat}
> and we get this error message:
> {noformat}
>   FAILURE: Build failed with an exception.
>   * Where:
>   Build file 'kafka-0.9.0.0-src/build.gradle' line: 68
>   * What went wrong:
>   A problem occurred evaluating root project 'kafka-0.9.0.0-src'.
>   > repository not found: kafka-0.9.0.0-src
> {noformat}
> The definitions for rat make sense when working on a git branch, but not for 
> the release artifact. One way around this is to disable rat by commenting out 
> the corresponding lines, but that isn't what the README file says.
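
The guard that pull request 485 describes amounts to checking for a git repository before touching git metadata. The following is only an illustrative Java sketch of that check (the real fix lives in build.gradle, and the class and method names here are hypothetical, not from the Kafka build):

```java
import java.io.File;

// Illustrative sketch of the KAFKA-2797 guard (hypothetical names; the actual
// fix is in build.gradle): only open the repository when a .git directory
// exists, so source-release artifacts without git metadata can still build.
public class RatGuard {
    // True when projectRoot contains a .git directory, i.e. it is safe to
    // open the repository (Grgit.open in the real build script).
    public static boolean isGitRepo(File projectRoot) {
        return new File(projectRoot, ".git").isDirectory();
    }

    public static void main(String[] args) {
        if (isGitRepo(new File("."))) {
            System.out.println("git repo found: run rat");
        } else {
            System.out.println("no git repo: skip rat");
        }
    }
}
```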





[GitHub] kafka pull request: KAFKA-2797: Only run rat when in the .git repo...

2015-11-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/485




[jira] [Updated] (KAFKA-2797) Release artifact expects a git repository for the release audit tool (RAT)

2015-11-10 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2797:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 485
[https://github.com/apache/kafka/pull/485]

> Release artifact expects a git repository for the release audit tool (RAT)
> --
>
> Key: KAFKA-2797
> URL: https://issues.apache.org/jira/browse/KAFKA-2797
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When running gradle on the RC0 for 0.9, we get an error because the build 
> expects to find a git repo here:
> {noformat}
>   line 68 of build.gradle: def repo = Grgit.open(project.file('.'))
> {noformat}
> and we get this error message:
> {noformat}
>   FAILURE: Build failed with an exception.
>   * Where:
>   Build file 'kafka-0.9.0.0-src/build.gradle' line: 68
>   * What went wrong:
>   A problem occurred evaluating root project 'kafka-0.9.0.0-src'.
>   > repository not found: kafka-0.9.0.0-src
> {noformat}
> The definitions for rat make sense when working on a git branch, but not for 
> the release artifact. One way around this is to disable rat by commenting out 
> the corresponding lines, but that isn't what the README file says.





[jira] [Commented] (KAFKA-2797) Release artifact expects a git repository for the release audit tool (RAT)

2015-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999055#comment-14999055
 ] 

ASF GitHub Bot commented on KAFKA-2797:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/485


> Release artifact expects a git repository for the release audit tool (RAT)
> --
>
> Key: KAFKA-2797
> URL: https://issues.apache.org/jira/browse/KAFKA-2797
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When running gradle on the RC0 for 0.9, we get an error because the build 
> expects to find a git repo here:
> {noformat}
>   line 68 of build.gradle: def repo = Grgit.open(project.file('.'))
> {noformat}
> and we get this error message:
> {noformat}
>   FAILURE: Build failed with an exception.
>   * Where:
>   Build file 'kafka-0.9.0.0-src/build.gradle' line: 68
>   * What went wrong:
>   A problem occurred evaluating root project 'kafka-0.9.0.0-src'.
>   > repository not found: kafka-0.9.0.0-src
> {noformat}
> The definitions for rat make sense when working on a git branch, but not for 
> the release artifact. One way around this is to disable rat by commenting out 
> the corresponding lines, but that isn't what the README file says.





[jira] [Resolved] (KAFKA-2792) KafkaConsumer.close() can block unnecessarily due to leave group waiting for a reply

2015-11-10 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2792.
-
Resolution: Fixed

Issue resolved by pull request 480
[https://github.com/apache/kafka/pull/480]

> KafkaConsumer.close() can block unnecessarily due to leave group waiting for 
> a reply
> 
>
> Key: KAFKA-2792
> URL: https://issues.apache.org/jira/browse/KAFKA-2792
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> The current implementation of close() waits for a response to LeaveGroup. 
> However, if we have an outstanding rebalance in the works, this can cause the 
> close() operation to have to wait for the entire rebalance process to 
> complete, which is annoying since the goal is to get rid of the consumer 
> object anyway. This is at best surprising and at worst can cause unexpected 
> bugs due to close() taking excessively long -- this was found due to 
> exceeding timeouts unexpectedly causing other operations in Kafka Connect to 
> timeout.
> Waiting for a response isn't necessary since as soon as the data is in the 
> TCP buffer, it'll be delivered to the broker. The client doesn't benefit at 
> all from seeing the close group. So we can instead just always send the 
> request 
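
The fire-and-forget pattern described above (hand the LeaveGroup bytes to the transport and return, rather than blocking on the broker's reply) can be sketched as follows. This is an illustrative Java sketch with hypothetical names, not the actual KafkaConsumer code; a plain OutputStream stands in for the TCP connection:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative sketch of the fire-and-forget close from KAFKA-2792
// (hypothetical names; not the actual KafkaConsumer implementation).
public class FireAndForgetClose {
    private final OutputStream transport;

    public FireAndForgetClose(OutputStream transport) {
        this.transport = transport;
    }

    // Write the LeaveGroup bytes and return immediately: once the data is in
    // the send buffer it will reach the broker, so there is no need to block
    // waiting for a reply that an in-progress rebalance could delay.
    public void close(byte[] leaveGroupRequest) throws IOException {
        transport.write(leaveGroupRequest);
        transport.flush();
        // Intentionally no read of the response here.
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        new FireAndForgetClose(wire).close(new byte[] {1, 2, 3});
        System.out.println("request bytes handed to transport: " + wire.size());
    }
}
```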





[GitHub] kafka pull request: KAFKA-2798: Use prefixed configurations for K...

2015-11-10 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/486

KAFKA-2798: Use prefixed configurations for Kafka Connect producer and 
consumer settings so they do not conflict with the distributed herder's 
settings.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2798-conflicting-herder-producer-consumer-configs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/486.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #486


commit d455acd938eb3e16d034ce0c4ac7899e541bc908
Author: Ewen Cheslack-Postava 
Date:   2015-11-10T18:52:56Z

KAFKA-2798: Use prefixed configurations for Kafka Connect producer and 
consumer settings so they do not conflict with the distributed herder's 
settings.






[jira] [Assigned] (KAFKA-2797) Release artifact expects a git repository for the release audit tool (RAT)

2015-11-10 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava reassigned KAFKA-2797:


Assignee: Ewen Cheslack-Postava

> Release artifact expects a git repository for the release audit tool (RAT)
> --
>
> Key: KAFKA-2797
> URL: https://issues.apache.org/jira/browse/KAFKA-2797
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When running gradle on the RC0 for 0.9, we get an error because the build 
> expects to find a git repo here:
> {noformat}
>   line 68 of build.gradle: def repo = Grgit.open(project.file('.'))
> {noformat}
> and we get this error message:
> {noformat}
>   FAILURE: Build failed with an exception.
>   * Where:
>   Build file 'kafka-0.9.0.0-src/build.gradle' line: 68
>   * What went wrong:
>   A problem occurred evaluating root project 'kafka-0.9.0.0-src'.
>   > repository not found: kafka-0.9.0.0-src
> {noformat}
> The definitions for rat make sense when working on a git branch, but not for 
> the release artifact. One way around this is to disable rat by commenting out 
> the corresponding lines, but that isn't what the README file says.





[GitHub] kafka pull request: KAFKA-2792: Don't wait for a response to the l...

2015-11-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/480




[jira] [Commented] (KAFKA-2792) KafkaConsumer.close() can block unnecessarily due to leave group waiting for a reply

2015-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999069#comment-14999069
 ] 

ASF GitHub Bot commented on KAFKA-2792:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/480


> KafkaConsumer.close() can block unnecessarily due to leave group waiting for 
> a reply
> 
>
> Key: KAFKA-2792
> URL: https://issues.apache.org/jira/browse/KAFKA-2792
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> The current implementation of close() waits for a response to LeaveGroup. 
> However, if we have an outstanding rebalance in the works, this can cause the 
> close() operation to have to wait for the entire rebalance process to 
> complete, which is annoying since the goal is to get rid of the consumer 
> object anyway. This is at best surprising and at worst can cause unexpected 
> bugs due to close() taking excessively long -- this was found due to 
> exceeding timeouts unexpectedly causing other operations in Kafka Connect to 
> timeout.
> Waiting for a response isn't necessary since as soon as the data is in the 
> TCP buffer, it'll be delivered to the broker. The client doesn't benefit at 
> all from seeing the close group. So we can instead just always send the 
> request 





[jira] [Resolved] (KAFKA-2798) Kafka Connect distributed configs can conflict with producer/consumer configs, making it impossible to control them independently

2015-11-10 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2798.
-
   Resolution: Fixed
Fix Version/s: 0.9.0.0

Issue resolved by pull request 486
[https://github.com/apache/kafka/pull/486]

> Kafka Connect distributed configs can conflict with producer/consumer 
> configs, making it impossible to control them independently
> -
>
> Key: KAFKA-2798
> URL: https://issues.apache.org/jira/browse/KAFKA-2798
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Right now we're using the approach to configs used by serializers where we 
> just forward the configs for the entire worker to the producer and consumer. 
> However, with the distributed mode, we now have a lot of conflicting configs 
> because the distributed herder uses the group membership functionality and 
> client networking libraries.
> We should instead use a sort of "namespaced" set of configs, which we already 
> started doing with converters since we have multiple instances of converters 
> instantiated by a single worker. We can use producer. and consumer. prefixes 
> to isolate settings for the different components.





[jira] [Commented] (KAFKA-2798) Kafka Connect distributed configs can conflict with producer/consumer configs, making it impossible to control them independently

2015-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999135#comment-14999135
 ] 

ASF GitHub Bot commented on KAFKA-2798:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/486


> Kafka Connect distributed configs can conflict with producer/consumer 
> configs, making it impossible to control them independently
> -
>
> Key: KAFKA-2798
> URL: https://issues.apache.org/jira/browse/KAFKA-2798
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Right now we're using the approach to configs used by serializers where we 
> just forward the configs for the entire worker to the producer and consumer. 
> However, with the distributed mode, we now have a lot of conflicting configs 
> because the distributed herder uses the group membership functionality and 
> client networking libraries.
> We should instead use a sort of "namespaced" set of configs, which we already 
> started doing with converters since we have multiple instances of converters 
> instantiated by a single worker. We can use producer. and consumer. prefixes 
> to isolate settings for the different components.





[GitHub] kafka pull request: KAFKA-2798: Use prefixed configurations for K...

2015-11-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/486




[jira] [Commented] (KAFKA-2798) Kafka Connect distributed configs can conflict with producer/consumer configs, making it impossible to control them independently

2015-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999114#comment-14999114
 ] 

ASF GitHub Bot commented on KAFKA-2798:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/486

KAFKA-2798: Use prefixed configurations for Kafka Connect producer and 
consumer settings so they do not conflict with the distributed herder's 
settings.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2798-conflicting-herder-producer-consumer-configs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/486.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #486


commit d455acd938eb3e16d034ce0c4ac7899e541bc908
Author: Ewen Cheslack-Postava 
Date:   2015-11-10T18:52:56Z

KAFKA-2798: Use prefixed configurations for Kafka Connect producer and 
consumer settings so they do not conflict with the distributed herder's 
settings.




> Kafka Connect distributed configs can conflict with producer/consumer 
> configs, making it impossible to control them independently
> -
>
> Key: KAFKA-2798
> URL: https://issues.apache.org/jira/browse/KAFKA-2798
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> Right now we're using the approach to configs used by serializers where we 
> just forward the configs for the entire worker to the producer and consumer. 
> However, with the distributed mode, we now have a lot of conflicting configs 
> because the distributed herder uses the group membership functionality and 
> client networking libraries.
> We should instead use a sort of "namespaced" set of configs, which we already 
> started doing with converters since we have multiple instances of converters 
> instantiated by a single worker. We can use producer. and consumer. prefixes 
> to isolate settings for the different components.





[GitHub] kafka pull request: MINOR: update system test readme

2015-11-10 Thread granders
GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/487

MINOR: update system test readme



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka minor-update-test-readme

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/487.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #487


commit 98b18228dca8a405738669cade199720db753e93
Author: Geoff Anderson 
Date:   2015-11-10T02:22:19Z

Minor update to test README

commit 44b5543649f4ce6b2d7a9ee515e4c7d81199e009
Author: Geoff Anderson 
Date:   2015-11-10T18:57:46Z

Tweaked explanation

commit d0ea7d95dad2ba0f6aedac8864fb00060eee4c29
Author: Geoff Anderson 
Date:   2015-11-10T18:58:49Z

Tweak






[jira] [Updated] (KAFKA-2106) Partition balance tool between brokers

2015-11-10 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2106:

Fix Version/s: (was: 0.9.0.0)

> Partition balance tool between brokers
> --
>
> Key: KAFKA-2106
> URL: https://issues.apache.org/jira/browse/KAFKA-2106
> Project: Kafka
>  Issue Type: New Feature
>  Components: admin
>Reporter: chenshangan
>Assignee: chenshangan
> Attachments: KAFKA-2106.3, KAFKA-2106.patch, KAFKA-2106.patch.2
>
>
> The default partition assignment algorithm can work well in a static kafka 
> cluster(number of brokers seldom change). Actually, in production env, number 
> of brokers is always increasing according to the business data. When new 
> brokers added to the cluster, it's better to provide a tool that can help to 
> move existing data to new brokers. Currently, users need to choose topic or 
> partitions manually and use the Reassign Partitions Tool 
> (kafka-reassign-partitions.sh) to achieve the goal. It's a time-consuming 
> task when there's a lot of topics in the cluster.





Re: Dependency Updates

2015-11-10 Thread Jun Rao
Hi, Grant,

Thanks for the email. Yes, we should try to keep up with the latest stable
version of the dependencies. We want to be a bit careful with API
compatibility for dependencies used in the clients. Incompatible changes
make upgrading the clients painful. For example, metrics-core wasn't
very careful with compatibility in a release a couple of years ago.

This is probably too late for the 0.9.0.0 release since we won't have
enough time to test this thoroughly. I recommend that we do that in trunk.

Thanks,

Jun

On Tue, Nov 10, 2015 at 8:12 AM, Grant Henke  wrote:

> With a new major release comes the opportunity to update dependencies;
> potentially including bug fixes, performance improvements, and useful
> features. Below is some analysis of the current state of Kafka dependencies
> and the available updates with change logs:
>
> Note: this shows [Current -> Newest]; there may be maintenance releases in
> between that are a more appropriate choice.
>
>
>- org.scala-lang:scala-library [2.10.5 -> 2.10.6]
>   - http://www.scala-lang.org/news/2.10.6
>   - Scala 2.10.6 resolves a license incompatibility in
>   scala.util.Sorting
>   - Otherwise identical to Scala 2.10.5
>   - Requires small gradle build changes and variable in
>   kafka-run-class.sh
>- org.xerial.snappy:snappy-java [1.1.1.7 -> 1.1.2]
>   - https://github.com/xerial/snappy-java/blob/develop/Milestone.md
>   - Fixes SnappyOutputStream.close() is not idempotent
>- net.jpountz.lz4:lz4 [1.2.0 -> 1.3]
>   - http://blog.jpountz.net/post/103674111856/lz4-java-130-is-out
>   - May want to rewrite integration to use bytebuffers now that its
>   available
>- junit:junit [4.11 -> 4.12]
>   -
>
> https://github.com/junit-team/junit/blob/master/doc/ReleaseNotes4.12.md
>- org.easymock:easymock [3.3.1 -> 3.4]
>   - https://github.com/easymock/easymock/releases/tag/easymock-3.4
>- org.powermock:powermock-api-easymock [1.6.2 -> 1.6.3]
>- org.powermock:powermock-module-junit4 [1.6.2 -> 1.6.3]
>   - https://github.com/jayway/powermock/blob/master/changelog.txt
>- org.slf4j:slf4j-api [1.7.6 -> 1.7.12]
>- org.slf4j:slf4j-log4j12 [1.7.6 -> 1.7.12
>   - http://www.slf4j.org/news.html
>- com.fasterxml.jackson.jaxrs:jackson-jaxrs-json-provider [2.5.4 ->
>2.6.3]
>   -
>
> https://github.com/FasterXML/jackson-jaxrs-providers/blob/master/release-notes/VERSION
>- com.fasterxml.jackson.core:jackson-databind [2.5.4 -> 2.6.3]
>   -
>
> https://github.com/FasterXML/jackson-databind/blob/master/release-notes/VERSION#L61
>   - many small bug fixes
>- org.eclipse.jetty:jetty-server [9.2.12.v20150709 -> 9.3.5.v20151012]
>- org.eclipse.jetty:jetty-servlet [9.2.12.v20150709 -> 9.3.5.v20151012]
>   - https://github.com/eclipse/jetty.project/blob/master/VERSION.txt
>- org.bouncycastle:bcpkix-jdk15on [1.52 -> 1.53]
>   - https://www.bouncycastle.org/releasenotes.html
>- net.sf.jopt-simple:jopt-simple [3.2 -> 4.9]
>   - Only used in migration tool
>   - Remove in favor of argparse? to reduce dependencies
>- org.rocksdb:rocksdbjni [3.10.1 -> 4.0]
>   - https://github.com/facebook/rocksdb/releases
>- org.objenesis:objenesis [1.2 -> 2.2]
>   - http://objenesis.org/notes.html
>   - Is this library still needed/used?
>- com.yammer.metrics:metrics-core [2.2.0 -> NA]
>   - Under new location: io.dropwizard.metrics:metrics-core:3.1.2
>   - Explanation:
>
> https://groups.google.com/d/msg/dropwizard-user/1usH7frpnZE/RSQUsOBFMsoJ
>   - Likely too big of a change to be worth it, since Kafka metrics now
>   exists
>   - Listed for completeness
>
> So do we want to update any of these? Any that we absolutely should not?
> Once we get a list of those to be updated I can send a pull request.
>
> Thanks,
> Grant
> --
> Grant Henke
> Software Engineer | Cloudera
> gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
>


[jira] [Created] (KAFKA-2798) Kafka Connect distributed configs can conflict with producer/consumer configs, making it impossible to control them independently

2015-11-10 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2798:


 Summary: Kafka Connect distributed configs can conflict with 
producer/consumer configs, making it impossible to control them independently
 Key: KAFKA-2798
 URL: https://issues.apache.org/jira/browse/KAFKA-2798
 Project: Kafka
  Issue Type: Bug
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava


Right now we're using the approach to configs used by serializers where we just 
forward the configs for the entire worker to the producer and consumer. 
However, with the distributed mode, we now have a lot of conflicting configs 
because the distributed herder uses the group membership functionality and 
client networking libraries.

We should instead use a sort of "namespaced" set of configs, which we already 
started doing with converters since we have multiple instances of converters 
instantiated by a single worker. We can use producer. and consumer. prefixes to 
isolate settings for the different components.
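
The prefix-based namespacing described above can be sketched as a small helper that extracts the keys under a given prefix with the prefix stripped. This is an illustrative Java sketch with hypothetical names, assuming a flat String-to-String config map; it is not the actual Kafka Connect API:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the "namespaced config" idea from KAFKA-2798 (hypothetical
// names): keys beginning with "producer." or "consumer." are pulled out with
// the prefix removed, so producer and consumer settings no longer collide
// with the distributed herder's own settings.
public class PrefixedConfigs {
    public static Map<String, String> withPrefix(Map<String, String> configs, String prefix) {
        Map<String, String> result = new HashMap<>();
        for (Map.Entry<String, String> e : configs.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                // Strip the namespace prefix before handing the key onward.
                result.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> worker = new HashMap<>();
        worker.put("bootstrap.servers", "localhost:9092");     // herder setting
        worker.put("producer.acks", "all");                    // producer-only
        worker.put("consumer.session.timeout.ms", "30000");    // consumer-only
        System.out.println(withPrefix(worker, "producer."));   // {acks=all}
    }
}
```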





[jira] [Updated] (KAFKA-2419) Allow certain Sensors to be garbage collected after inactivity

2015-11-10 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2419:

Issue Type: Improvement  (was: New Feature)

> Allow certain Sensors to be garbage collected after inactivity
> --
>
> Key: KAFKA-2419
> URL: https://issues.apache.org/jira/browse/KAFKA-2419
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Currently, metrics cannot be removed once registered. 
> Implement a feature to remove certain sensors after a certain period of 
> inactivity (perhaps configurable).





[GitHub] kafka pull request: MINOR: update system test readme

2015-11-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/487




[GitHub] kafka pull request: KAFKA-2795: fix potential NPE in GroupMetadata...

2015-11-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/488




Build failed in Jenkins: kafka-trunk-jdk8 #126

2015-11-10 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: update system test readme

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-6 (docker Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 1d884d1f60aec9ec7ea334761bead4c60b13c7a9 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 1d884d1f60aec9ec7ea334761bead4c60b13c7a9
 > git rev-list 403d89edeaa7808f71c0e7318411c925895210f2 # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson8776405657548686010.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 27.604 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson5389755389112532056.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 19.826 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45


[jira] [Commented] (KAFKA-2795) potential NPE in GroupMetadataManager

2015-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999345#comment-14999345
 ] 

ASF GitHub Bot commented on KAFKA-2795:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/488


> potential NPE in GroupMetadataManager
> -
>
> Key: KAFKA-2795
> URL: https://issues.apache.org/jira/browse/KAFKA-2795
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Assignee: Jason Gustafson
> Fix For: 0.9.0.0
>
>
> I didn't run the code, but I took a look at GroupMetadataManager.addGroup and 
> it looks like we can get a NullPointerException when a group is somehow 
> removed between the groupsCache.putIfNotExists and groupsCache.get lines and 
> someone tries to use the result of the addGroup. One way this can happen is 
> by interleaving GroupMetadataManager.addGroup and 
> GroupMetadataManager.removeGroupsForPartition.
> Here's the scenario:
> # thread-1 is in the middle of adding a group g which is in the offset topic 
> partition p. thread-1 already hit the groupsCache.putIfNotExists line in 
> GroupMetadataManager.addGroup
> # thread-2 is in the middle of migrating all groups for partition p. thread-2 
> is in GroupMetadataManager.removeGroupsForPartition and called 
> groupsCache.remove("g").
> # thread-1 now executes groupsCache.get("g"), which returns null since it's 
> now gone.
> # thread-1 now goes back to the GroupCoordinator doJoinGroup with a null 
> GroupMetadata and then tries to do a group synchronized {...}, resulting in 
> an NPE.
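The put-then-get race described above can be sketched with a plain ConcurrentHashMap (Kafka's groupsCache is a similar concurrent pool; the class and method names below are illustrative, not the actual GroupMetadataManager code). The safe variant relies on the atomic return value of putIfAbsent instead of a follow-up get:

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for the real GroupMetadata class.
class GroupMetadata {
    final String groupId;
    GroupMetadata(String groupId) { this.groupId = groupId; }
}

public class AddGroupRace {
    static final ConcurrentHashMap<String, GroupMetadata> groupsCache =
            new ConcurrentHashMap<>();

    // Racy pattern: a concurrent remove between the two calls can make
    // get() return null even though we just inserted the group.
    static GroupMetadata addGroupRacy(GroupMetadata group) {
        groupsCache.putIfAbsent(group.groupId, group);
        return groupsCache.get(group.groupId); // may be null under a race
    }

    // Safe pattern: putIfAbsent atomically returns the previous mapping,
    // or null if our own group was inserted, so the result is never null.
    static GroupMetadata addGroupSafe(GroupMetadata group) {
        GroupMetadata existing = groupsCache.putIfAbsent(group.groupId, group);
        return existing != null ? existing : group;
    }

    public static void main(String[] args) {
        GroupMetadata g = new GroupMetadata("g");
        GroupMetadata result = addGroupSafe(g);
        // Even if another thread removes "g" immediately after the insert,
        // the caller still holds a non-null GroupMetadata reference.
        groupsCache.remove("g");
        System.out.println(result == g); // prints "true"
    }
}
```

Using the insert's atomic return value closes the window that a concurrent removeGroupsForPartition can exploit between the two cache operations.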



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-trunk-jdk7 #798

2015-11-10 Thread Apache Jenkins Server
See 



[jira] [Assigned] (KAFKA-2795) potential NPE in GroupMetadataManager

2015-11-10 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reassigned KAFKA-2795:
--

Assignee: Jason Gustafson  (was: Guozhang Wang)

> potential NPE in GroupMetadataManager
> -
>
> Key: KAFKA-2795
> URL: https://issues.apache.org/jira/browse/KAFKA-2795
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Assignee: Jason Gustafson
>
> I didn't run the code, but I took a look at GroupMetadataManager.addGroup and 
> it looks like we can get a NullPointerException when a group is somehow 
> removed between the groupsCache.putIfNotExists and groupsCache.get lines and 
> someone tries to use the result of the addGroup. One way this can happen is 
> by interleaving GroupMetadataManager.addGroup and 
> GroupMetadataManager.removeGroupsForPartition.
> Here's the scenario:
> # thread-1 is in the middle of adding a group g which is in the offset topic 
> partition p. thread-1 already hit the groupsCache.putIfNotExists line in 
> GroupMetadataManager.addGroup
> # thread-2 is in the middle of migrating all groups for partition p. thread-2 
> is in GroupMetadataManager.removeGroupsForPartition and called 
> groupsCache.remove("g").
> # thread-1 now executes groupsCache.get("g"), which returns null since it's 
> now gone.
> # thread-1 now goes back to the GroupCoordinator doJoinGroup with a null 
> GroupMetadata and then tries to do a group synchronized {...}, resulting in 
> an NPE.



--


Re: [kafka-clients] Re: [VOTE] 0.9.0.0 Candidate 1

2015-11-10 Thread Onur Karaman
There's also a concurrency bug in the GroupMetadataManager that we'd
probably want fixed before the 0.9.0.0 release; Jason / Guozhang are
working on it:
https://issues.apache.org/jira/browse/KAFKA-2795

On Tue, Nov 10, 2015 at 11:39 AM, Jun Rao  wrote:

> Ewen,
>
> Thanks for reporting that. Will fix that in RC2.
>
> Jun
>
> On Tue, Nov 10, 2015 at 10:41 AM, Ewen Cheslack-Postava  >
> wrote:
>
> > Jun, not sure if this is just because of the RC vs being published on the
> > site, but the links in the release notes aren't pointing to
> > issues.apache.org. They're relative URLs instead of absolute.
> >
> > -Ewen
> >
> > On Tue, Nov 10, 2015 at 3:38 AM, Flavio Junqueira 
> wrote:
> >
> >> -1 (non-binding)
> >>
> >> I'm getting an error with gradle when using the source artifact because
> >> it seems to be expecting a git repository here:
> >>
> >> line 68 of build.gradle: def repo =
> Grgit.open(project.file('.'))
> >>
> >> and I get this message:
> >> FAILURE: Build failed with an exception.
> >>
> >> * Where:
> >> Build file 'kafka-0.9.0.0-src/build.gradle' line: 68
> >>
> >> * What went wrong:
> >> A problem occurred evaluating root project 'kafka-0.9.0.0-src'.
> >> > repository not found: kafka-0.9.0.0-src
> >>
> >> The definitions for rat make sense when working on a git branch, but not
> >> for the release artifact. One
> >> way around this is to disable rat by commenting out the corresponding
> >> lines, but that isn't what the
> >> README file says. I'd rather have an RC that fixes this issue by
> possibly
> >> disabling rat altogether.
> >>
> >> -Flavio
> >>
> >> > On 10 Nov 2015, at 07:17, Jun Rao  wrote:
> >> >
> >> > This is the first candidate for release of Apache Kafka 0.9.0.0. This
> a
> >> > major release that includes (1) authentication (through SSL and SASL)
> >> and
> >> > authorization, (2) a new java consumer, (3) a Kafka connect framework
> >> for
> >> > data ingestion and egression, and (4) quotas. Since this is a major
> >> > release, we will give people a bit more time for trying this out.
> >> >
> >> > Release Notes for the 0.9.0.0 release
> >> >
> >>
> https://people.apache.org/~junrao/kafka-0.9.0.0-candidate1/RELEASE_NOTES.html
> >> >
> >> > *** Please download, test and vote by Thursday, Nov. 19, 11pm PT
> >> >
> >> > Kafka's KEYS file containing PGP keys we use to sign the release:
> >> > http://kafka.apache.org/KEYS in addition to the md5, sha1
> >> > and sha2 (SHA256) checksum.
> >> >
> >> > * Release artifacts to be voted upon (source and binary):
> >> > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate1/
> >> >
> >> > * Maven artifacts to be voted upon prior to release:
> >> > https://repository.apache.org/content/groups/staging/
> >> >
> >> > * scala-doc
> >> > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate1/scaladoc/
> >> >
> >> > * java-doc
> >> > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate1/javadoc/
> >> >
> >> > * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.0 tag
> >> >
> >>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=6cee4f38aba612209b0a8171736c6e2948c35b6f
> >> >
> >> > * Documentation
> >> > http://kafka.apache.org/090/documentation.html
> >> >
> >> > /***
> >> >
> >> > Thanks,
> >> >
> >> > Jun
> >>
> >>
> >
> >
> > --
> > Thanks,
> > Ewen
> >
> > --
> > You received this message because you are subscribed to the Google Groups
> > "kafka-clients" group.
> > To unsubscribe from this group and stop receiving emails from it, send an
> > email to kafka-clients+unsubscr...@googlegroups.com.
> > To post to this group, send email to kafka-clie...@googlegroups.com.
> > Visit this group at http://groups.google.com/group/kafka-clients.
> > To view this discussion on the web visit
> >
> https://groups.google.com/d/msgid/kafka-clients/CAE1jLMOaATVKB0qg7V4fi5DBLk15ndAip-E7e4bLyDGJWPq-Sg%40mail.gmail.com
> > <
> https://groups.google.com/d/msgid/kafka-clients/CAE1jLMOaATVKB0qg7V4fi5DBLk15ndAip-E7e4bLyDGJWPq-Sg%40mail.gmail.com?utm_medium=email_source=footer
> >
> > .
> > For more options, visit https://groups.google.com/d/optout.
> >
>


[jira] [Updated] (KAFKA-2796) add support for reassignment partition to specified logdir

2015-11-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2796:
-
Fix Version/s: (was: 0.9.0.0)
   0.9.0.1

> add support for reassignment partition to specified logdir
> --
>
> Key: KAFKA-2796
> URL: https://issues.apache.org/jira/browse/KAFKA-2796
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, controller, core, log
>Reporter: Yonghui Yang
>Assignee: Yonghui Yang
>  Labels: features
> Fix For: 0.9.0.1
>
>
> Currently when creating a log, the directory is chosen by calculating the 
> number of partitions
> in each directory and then choosing the data directory with the fewest 
> partitions.
> However, the sizes of different TopicPartitions vary widely, which leads 
> usage to vary greatly between different logDirs. And since each logDir 
> usually corresponds to a disk, disk usage across disks becomes very 
> imbalanced.
> A possible solution is to reassign partitions from high-usage logDirs to 
> low-usage logDirs. I changed the format of /admin/reassign_partitions to 
> add a replicaDirs field. When reassigning partitions, if a replicaDir is 
> specified when the broker's LogManager.createLog() is invoked, the 
> specified logDir will be chosen; otherwise the logDir with the fewest 
> partitions will be chosen.
> the old /admin/reassign_partitions:
>   {"version":1,
>"partitions": 
>[
>  {
>"topic" : "Foo",
>"partition": 1,
>"replicas": [1, 2, 3]
>  }
>]
>   }
> the new /admin/reassign_partitions:
>   {"version":1,
>"partitions": 
>[
>  {
>"topic" : "Foo",
>"partition": 1,
>"replicas": [1, 2, 3],
>"replicaDirs": {"1":"/data1/kafka_data",  "3":"/data10/kafka_data" }
>  }
>]
>   }
> This feature has been developed.
> PR: https://github.com/apache/kafka/pull/484
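The selection policy described above (fewest partitions by default, with the proposed replicaDirs override) can be sketched as follows. All names here are illustrative, not the actual LogManager code:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the logDir-selection policy from the JIRA description.
public class LogDirChoice {
    // partitionCounts maps each logDir to the number of partitions it hosts.
    static String chooseLogDir(Map<String, Integer> partitionCounts,
                               String specifiedReplicaDir) {
        // With the proposed change: honor an explicit replicaDir from the
        // reassignment JSON when one is given for this replica.
        if (specifiedReplicaDir != null)
            return specifiedReplicaDir;
        // Default policy: pick the logDir hosting the fewest partitions.
        return Collections.min(partitionCounts.entrySet(),
                Map.Entry.comparingByValue()).getKey();
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        counts.put("/data1/kafka_data", 40);  // busy disk
        counts.put("/data10/kafka_data", 12); // lightly used disk
        System.out.println(chooseLogDir(counts, null));
        // prints "/data10/kafka_data" - fewest partitions wins
        System.out.println(chooseLogDir(counts, "/data1/kafka_data"));
        // prints "/data1/kafka_data" - explicit replicaDir overrides
    }
}
```

The default policy balances partition counts, not bytes, which is exactly why large partitions can skew disk usage as the description notes.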



--


[jira] [Commented] (KAFKA-2795) potential NPE in GroupMetadataManager

2015-11-10 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999221#comment-14999221
 ] 

Guozhang Wang commented on KAFKA-2795:
--

+1.

Originally it was written as in the above case, but I changed it for the 
internal usage of addGroup (which is wrong, btw). Since we now only call 
addGroup directly from GroupCoordinator, we can probably remove the private 
addGroup and move the logic back into the public addGroup as described above.

> potential NPE in GroupMetadataManager
> -
>
> Key: KAFKA-2795
> URL: https://issues.apache.org/jira/browse/KAFKA-2795
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Assignee: Jason Gustafson
>
> I didn't run the code, but I took a look at GroupMetadataManager.addGroup and 
> it looks like we can get a NullPointerException when a group is somehow 
> removed between the groupsCache.putIfNotExists and groupsCache.get lines and 
> someone tries to use the result of the addGroup. One way this can happen is 
> by interleaving GroupMetadataManager.addGroup and 
> GroupMetadataManager.removeGroupsForPartition.
> Here's the scenario:
> # thread-1 is in the middle of adding a group g which is in the offset topic 
> partition p. thread-1 already hit the groupsCache.putIfNotExists line in 
> GroupMetadataManager.addGroup
> # thread-2 is in the middle of migrating all groups for partition p. thread-2 
> is in GroupMetadataManager.removeGroupsForPartition and called 
> groupsCache.remove("g").
> # thread-1 now executes groupsCache.get("g"), which returns null since it's 
> now gone.
> # thread-1 now goes back to the GroupCoordinator doJoinGroup with a null 
> GroupMetadata and then tries to do a group synchronized {...}, resulting in 
> an NPE.



--


Re: [kafka-clients] Re: [VOTE] 0.9.0.0 Candidate 1

2015-11-10 Thread Gwen Shapira
BTW. I created a Jenkins job for the 0.9 branch:
https://builds.apache.org/job/kafka_0.9.0_jdk7/

Right now it's pretty much identical to trunk, but since they may diverge, I
figured we want to keep an eye on the branch separately.

Gwen

On Tue, Nov 10, 2015 at 11:39 AM, Jun Rao  wrote:

> Ewen,
>
> Thanks for reporting that. Will fix that in RC2.
>
> Jun
>
> On Tue, Nov 10, 2015 at 10:41 AM, Ewen Cheslack-Postava  >
> wrote:
>
> > Jun, not sure if this is just because of the RC vs being published on the
> > site, but the links in the release notes aren't pointing to
> > issues.apache.org. They're relative URLs instead of absolute.
> >
> > -Ewen
> >
> > On Tue, Nov 10, 2015 at 3:38 AM, Flavio Junqueira 
> wrote:
> >
> >> -1 (non-binding)
> >>
> >> I'm getting an error with gradle when using the source artifact because
> >> it seems to be expecting a git repository here:
> >>
> >> line 68 of build.gradle: def repo =
> Grgit.open(project.file('.'))
> >>
> >> and I get this message:
> >> FAILURE: Build failed with an exception.
> >>
> >> * Where:
> >> Build file 'kafka-0.9.0.0-src/build.gradle' line: 68
> >>
> >> * What went wrong:
> >> A problem occurred evaluating root project 'kafka-0.9.0.0-src'.
> >> > repository not found: kafka-0.9.0.0-src
> >>
> >> The definitions for rat make sense when working on a git branch, but not
> >> for the release artifact. One
> >> way around this is to disable rat by commenting out the corresponding
> >> lines, but that isn't what the
> >> README file says. I'd rather have an RC that fixes this issue by
> possibly
> >> disabling rat altogether.
> >>
> >> -Flavio
> >>
> >> > On 10 Nov 2015, at 07:17, Jun Rao  wrote:
> >> >
> >> > This is the first candidate for release of Apache Kafka 0.9.0.0. This
> a
> >> > major release that includes (1) authentication (through SSL and SASL)
> >> and
> >> > authorization, (2) a new java consumer, (3) a Kafka connect framework
> >> for
> >> > data ingestion and egression, and (4) quotas. Since this is a major
> >> > release, we will give people a bit more time for trying this out.
> >> >
> >> > Release Notes for the 0.9.0.0 release
> >> >
> >>
> https://people.apache.org/~junrao/kafka-0.9.0.0-candidate1/RELEASE_NOTES.html
> >> >
> >> > *** Please download, test and vote by Thursday, Nov. 19, 11pm PT
> >> >
> >> > Kafka's KEYS file containing PGP keys we use to sign the release:
> >> > http://kafka.apache.org/KEYS in addition to the md5, sha1
> >> > and sha2 (SHA256) checksum.
> >> >
> >> > * Release artifacts to be voted upon (source and binary):
> >> > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate1/
> >> >
> >> > * Maven artifacts to be voted upon prior to release:
> >> > https://repository.apache.org/content/groups/staging/
> >> >
> >> > * scala-doc
> >> > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate1/scaladoc/
> >> >
> >> > * java-doc
> >> > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate1/javadoc/
> >> >
> >> > * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.0 tag
> >> >
> >>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=6cee4f38aba612209b0a8171736c6e2948c35b6f
> >> >
> >> > * Documentation
> >> > http://kafka.apache.org/090/documentation.html
> >> >
> >> > /***
> >> >
> >> > Thanks,
> >> >
> >> > Jun
> >>
> >>
> >
> >
> > --
> > Thanks,
> > Ewen
> >
> > --
> > You received this message because you are subscribed to the Google Groups
> > "kafka-clients" group.
> > To unsubscribe from this group and stop receiving emails from it, send an
> > email to kafka-clients+unsubscr...@googlegroups.com.
> > To post to this group, send email to kafka-clie...@googlegroups.com.
> > Visit this group at http://groups.google.com/group/kafka-clients.
> > To view this discussion on the web visit
> >
> https://groups.google.com/d/msgid/kafka-clients/CAE1jLMOaATVKB0qg7V4fi5DBLk15ndAip-E7e4bLyDGJWPq-Sg%40mail.gmail.com
> > <
> https://groups.google.com/d/msgid/kafka-clients/CAE1jLMOaATVKB0qg7V4fi5DBLk15ndAip-E7e4bLyDGJWPq-Sg%40mail.gmail.com?utm_medium=email_source=footer
> >
> > .
> > For more options, visit https://groups.google.com/d/optout.
> >
>


[jira] [Resolved] (KAFKA-2795) potential NPE in GroupMetadataManager

2015-11-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2795.
--
   Resolution: Fixed
Fix Version/s: 0.9.0.0

Issue resolved by pull request 488
[https://github.com/apache/kafka/pull/488]

> potential NPE in GroupMetadataManager
> -
>
> Key: KAFKA-2795
> URL: https://issues.apache.org/jira/browse/KAFKA-2795
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Assignee: Jason Gustafson
> Fix For: 0.9.0.0
>
>
> I didn't run the code, but I took a look at GroupMetadataManager.addGroup and 
> it looks like we can get a NullPointerException when a group is somehow 
> removed between the groupsCache.putIfNotExists and groupsCache.get lines and 
> someone tries to use the result of the addGroup. One way this can happen is 
> by interleaving GroupMetadataManager.addGroup and 
> GroupMetadataManager.removeGroupsForPartition.
> Here's the scenario:
> # thread-1 is in the middle of adding a group g which is in the offset topic 
> partition p. thread-1 already hit the groupsCache.putIfNotExists line in 
> GroupMetadataManager.addGroup
> # thread-2 is in the middle of migrating all groups for partition p. thread-2 
> is in GroupMetadataManager.removeGroupsForPartition and called 
> groupsCache.remove("g").
> # thread-1 now executes groupsCache.get("g"), which returns null since it's 
> now gone.
> # thread-1 now goes back to the GroupCoordinator doJoinGroup with a null 
> GroupMetadata and then tries to do a group synchronized {...}, resulting in 
> an NPE.



--


[jira] [Commented] (KAFKA-2788) allow comma when specifying principals in AclCommand

2015-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999356#comment-14999356
 ] 

ASF GitHub Bot commented on KAFKA-2788:
---

GitHub user Parth-Brahmbhatt opened a pull request:

https://github.com/apache/kafka/pull/489

KAFKA-2788: Allow specifying principals with comma in ACL CLI.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Parth-Brahmbhatt/kafka KAFKA-2788

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/489.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #489


commit d94881e86096cfbc512b2bbc2ffdb5508cf4fb8e
Author: Parth Brahmbhatt 
Date:   2015-11-10T21:13:07Z

KAFKA-2788: Allow specifying principals with comma in ACL CLI.




> allow comma when specifying principals in AclCommand
> 
>
> Key: KAFKA-2788
> URL: https://issues.apache.org/jira/browse/KAFKA-2788
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Parth Brahmbhatt
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Currently, comma doesn't seem to be allowed in AclCommand when specifying 
> allow-principals and deny-principals. However, when using ssl authentication, 
> by default, the client will look like the following and one can't pass that 
> in through AclCommand.
> "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown"
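A quick illustration of why the comma is problematic: the SSL principal's DN itself contains commas, so a naive comma split shatters it. The semicolon workaround shown below is only an illustrative sketch, not necessarily the fix adopted in the PR:

```java
// Demonstrates why a naive comma split breaks SSL principal names; the
// parsing approach here is illustrative, not AclCommand's actual code.
public class PrincipalSplitDemo {
    public static void main(String[] args) {
        String principal =
            "User:CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown";

        // A plain comma split shatters the single DN into six fragments,
        // so the CLI cannot use "," as a principal-list separator here.
        String[] fragments = principal.split(",");
        System.out.println(fragments.length); // prints "6"

        // One workaround sketch: list multiple principals with a separator
        // that is unlikely to occur in a DN value, e.g. a semicolon.
        String list = principal + ";User:alice";
        String[] principals = list.split(";");
        System.out.println(principals.length); // prints "2"
        System.out.println(principals[0].equals(principal)); // prints "true"
    }
}
```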



--


[jira] [Commented] (KAFKA-2796) add support for reassignment partition to specified logdir

2015-11-10 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999364#comment-14999364
 ] 

Guozhang Wang commented on KAFKA-2796:
--

I think there are some related KIP discussions:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-18+-+JBOD+Support

And email threads:

http://mail-archives.apache.org/mod_mbox/kafka-dev/201504.mbox/browser

> add support for reassignment partition to specified logdir
> --
>
> Key: KAFKA-2796
> URL: https://issues.apache.org/jira/browse/KAFKA-2796
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, controller, core, log
>Reporter: Yonghui Yang
>Assignee: Yonghui Yang
>  Labels: features
> Fix For: 0.9.0.1
>
>
> Currently when creating a log, the directory is chosen by calculating the 
> number of partitions
> in each directory and then choosing the data directory with the fewest 
> partitions.
> However, the sizes of different TopicPartitions vary widely, which leads 
> usage to vary greatly between different logDirs. And since each logDir 
> usually corresponds to a disk, disk usage across disks becomes very 
> imbalanced.
> A possible solution is to reassign partitions from high-usage logDirs to 
> low-usage logDirs. I changed the format of /admin/reassign_partitions to 
> add a replicaDirs field. When reassigning partitions, if a replicaDir is 
> specified when the broker's LogManager.createLog() is invoked, the 
> specified logDir will be chosen; otherwise the logDir with the fewest 
> partitions will be chosen.
> the old /admin/reassign_partitions:
>   {"version":1,
>"partitions": 
>[
>  {
>"topic" : "Foo",
>"partition": 1,
>"replicas": [1, 2, 3]
>  }
>]
>   }
> the new /admin/reassign_partitions:
>   {"version":1,
>"partitions": 
>[
>  {
>"topic" : "Foo",
>"partition": 1,
>"replicas": [1, 2, 3],
>"replicaDirs": {"1":"/data1/kafka_data",  "3":"/data10/kafka_data" }
>  }
>]
>   }
> This feature has been developed.
> PR: https://github.com/apache/kafka/pull/484



--


[jira] [Commented] (KAFKA-2799) WakeupException thrown in the follow-up poll() could lead to data loss

2015-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999374#comment-14999374
 ] 

ASF GitHub Bot commented on KAFKA-2799:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/490

KAFKA-2799: skip wakeup in the follow-up poll() call.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K2799

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/490.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #490


commit f9dd6ffbfa975ab454b84c43c722767d74d572ed
Author: Guozhang Wang 
Date:   2015-11-10T21:32:07Z

v1




> WakeupException thrown in the follow-up poll() could lead to data loss
> 
>
> Key: KAFKA-2799
> URL: https://issues.apache.org/jira/browse/KAFKA-2799
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> The common pattern of the new consumer:
> {code}
> try {
>records = consumer.poll();
>// process records
> } catch (WakeupException) {
>consumer.close()
> }
> {code}
> in which close() can commit offsets. But in the poll() call we do the 
> following, in order:
> 1) trigger client.poll().
> 2) possibly update the consumed position if the fetch response returned data.
> 3) before returning the records, possibly trigger another client.poll().
> If a WakeupException is thrown in 3), the positions of records that were 
> fetched but never returned to the caller can still be committed, hence 
> data loss.
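The ordering above, and the fix named in the PR title (skipping wakeup in the follow-up poll), can be modeled with a toy consumer. MiniConsumer and its members are illustrative only, not the real KafkaConsumer internals:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

// Toy model of the poll() ordering from the JIRA description.
public class PollWakeupSketch {
    static class WakeupException extends RuntimeException {}

    static class MiniConsumer {
        final AtomicBoolean wakeupRequested = new AtomicBoolean(false);
        long position = 0; // consumed position that close() would commit

        void wakeup() { wakeupRequested.set(true); }

        void networkPoll(boolean allowWakeup) {
            // Only polls that are allowed to may surface the wakeup.
            if (allowWakeup && wakeupRequested.getAndSet(false))
                throw new WakeupException();
        }

        List<Long> poll() {
            networkPoll(true);            // 1) first network poll: wakeup allowed
            List<Long> records = new ArrayList<>();
            records.add(position++);      // 2) fetch a record, advance position
            wakeupRequested.set(true);    //    simulate wakeup() arriving mid-poll
            networkPoll(false);           // 3) follow-up poll: wakeup skipped, so
                                          //    the fetched record is returned and
                                          //    its offset is safe to commit
            return records;
        }
    }

    public static void main(String[] args) {
        MiniConsumer c = new MiniConsumer();
        System.out.println(c.poll());     // prints "[0]" - record not lost
        try {
            c.poll();                     // the pending wakeup now surfaces at
        } catch (WakeupException e) {     // step 1, before any record is fetched
            System.out.println("woken up with no records in flight");
        }
    }
}
```

If step 3 were allowed to throw instead, poll() would exit without returning the record fetched in step 2, yet its advanced position could still be committed by close(), which is the data-loss scenario described.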



--


[GitHub] kafka pull request: KAFKA-2799: skip wakeup in the follow-up poll(...

2015-11-10 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/490

KAFKA-2799: skip wakeup in the follow-up poll() call.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K2799

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/490.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #490


commit f9dd6ffbfa975ab454b84c43c722767d74d572ed
Author: Guozhang Wang 
Date:   2015-11-10T21:32:07Z

v1




---
---


Re: [kafka-clients] Re: [VOTE] 0.9.0.0 Candidate 1

2015-11-10 Thread Jun Rao
Ewen,

Thanks for reporting that. Will fix that in RC2.

Jun

On Tue, Nov 10, 2015 at 10:41 AM, Ewen Cheslack-Postava 
wrote:

> Jun, not sure if this is just because of the RC vs being published on the
> site, but the links in the release notes aren't pointing to
> issues.apache.org. They're relative URLs instead of absolute.
>
> -Ewen
>
> On Tue, Nov 10, 2015 at 3:38 AM, Flavio Junqueira  wrote:
>
>> -1 (non-binding)
>>
>> I'm getting an error with gradle when using the source artifact because
>> it seems to be expecting a git repository here:
>>
>> line 68 of build.gradle: def repo = Grgit.open(project.file('.'))
>>
>> and I get this message:
>> FAILURE: Build failed with an exception.
>>
>> * Where:
>> Build file 'kafka-0.9.0.0-src/build.gradle' line: 68
>>
>> * What went wrong:
>> A problem occurred evaluating root project 'kafka-0.9.0.0-src'.
>> > repository not found: kafka-0.9.0.0-src
>>
>> The definitions for rat make sense when working on a git branch, but not
>> for the release artifact. One
>> way around this is to disable rat by commenting out the corresponding
>> lines, but that isn't what the
>> README file says. I'd rather have an RC that fixes this issue by possibly
>> disabling rat altogether.
>>
>> -Flavio
>>
>> > On 10 Nov 2015, at 07:17, Jun Rao  wrote:
>> >
>> > This is the first candidate for release of Apache Kafka 0.9.0.0. This a
>> > major release that includes (1) authentication (through SSL and SASL)
>> and
>> > authorization, (2) a new java consumer, (3) a Kafka connect framework
>> for
>> > data ingestion and egression, and (4) quotas. Since this is a major
>> > release, we will give people a bit more time for trying this out.
>> >
>> > Release Notes for the 0.9.0.0 release
>> >
>> https://people.apache.org/~junrao/kafka-0.9.0.0-candidate1/RELEASE_NOTES.html
>> >
>> > *** Please download, test and vote by Thursday, Nov. 19, 11pm PT
>> >
>> > Kafka's KEYS file containing PGP keys we use to sign the release:
>> > http://kafka.apache.org/KEYS in addition to the md5, sha1
>> > and sha2 (SHA256) checksum.
>> >
>> > * Release artifacts to be voted upon (source and binary):
>> > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate1/
>> >
>> > * Maven artifacts to be voted upon prior to release:
>> > https://repository.apache.org/content/groups/staging/
>> >
>> > * scala-doc
>> > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate1/scaladoc/
>> >
>> > * java-doc
>> > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate1/javadoc/
>> >
>> > * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.0 tag
>> >
>> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=6cee4f38aba612209b0a8171736c6e2948c35b6f
>> >
>> > * Documentation
>> > http://kafka.apache.org/090/documentation.html
>> >
>> > /***
>> >
>> > Thanks,
>> >
>> > Jun
>>
>>
>
>
> --
> Thanks,
> Ewen
>
> --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To post to this group, send email to kafka-clie...@googlegroups.com.
> Visit this group at http://groups.google.com/group/kafka-clients.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/CAE1jLMOaATVKB0qg7V4fi5DBLk15ndAip-E7e4bLyDGJWPq-Sg%40mail.gmail.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>


Build failed in Jenkins: kafka-trunk-jdk7 #797

2015-11-10 Thread Apache Jenkins Server
See 

--
[...truncated 1561 lines...]
kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testValidJoinGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupLeaderAfterFollower PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownMember 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testValidLeaveGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupInactiveGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupNotCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testValidHeartbeat PASSED

kafka.coordinator.MemberMetadataTest > testMatchesSupportedProtocols PASSED

kafka.coordinator.MemberMetadataTest > testMetadata PASSED

kafka.coordinator.MemberMetadataTest > testMetadataRaisesOnUnsupportedProtocol 
PASSED

kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol PASSED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
PASSED

kafka.coordinator.GroupMetadataTest > testDeadToAwaitingSyncIllegalTransition 
PASSED

kafka.coordinator.GroupMetadataTest > 
testPreparingRebalanceToStableIllegalTransition PASSED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenDead PASSED

kafka.coordinator.GroupMetadataTest > testSelectProtocol PASSED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenPreparingRebalance 
PASSED

kafka.coordinator.GroupMetadataTest > 
testDeadToPreparingRebalanceIllegalTransition PASSED

kafka.coordinator.GroupMetadataTest > testCanRebalanceWhenAwaitingSync PASSED

kafka.coordinator.GroupMetadataTest > 
testAwaitingSyncToPreparingRebalanceTransition PASSED

kafka.coordinator.GroupMetadataTest > testStableToAwaitingSyncIllegalTransition 
PASSED

kafka.coordinator.GroupMetadataTest > testDeadToDeadIllegalTransition PASSED

kafka.coordinator.GroupMetadataTest > testSelectProtocolRaisesIfNoMembers PASSED

kafka.coordinator.GroupMetadataTest > testStableToPreparingRebalanceTransition 
PASSED

kafka.coordinator.GroupMetadataTest > testPreparingRebalanceToDeadTransition 
PASSED

kafka.coordinator.GroupMetadataTest > testStableToStableIllegalTransition PASSED

kafka.coordinator.GroupMetadataTest > testAwaitingSyncToStableTransition PASSED

kafka.coordinator.GroupMetadataTest > testDeadToStableIllegalTransition PASSED

kafka.coordinator.GroupMetadataTest > 
testAwaitingSyncToAwaitingSyncIllegalTransition PASSED

kafka.coordinator.GroupMetadataTest > testSupportsProtocols PASSED

kafka.coordinator.GroupMetadataTest > testCanRebalanceWhenStable PASSED

kafka.coordinator.GroupMetadataTest > 
testPreparingRebalanceToPreparingRebalanceIllegalTransition PASSED

kafka.coordinator.GroupMetadataTest > 
testSelectProtocolChoosesCompatibleProtocol PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.controller.ControllerFailoverTest > testMetadataUpdate PASSED

634 tests completed, 1 failed
:kafka-trunk-jdk7:core:test FAILED
:test_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:test'.
> There were failing tests. See the report at: 
> file://

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task 
':core:test'.
at 
org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:69)
 

[jira] [Created] (KAFKA-2800) Update outdated dependencies

2015-11-10 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-2800:
--

 Summary: Update outdated dependencies
 Key: KAFKA-2800
 URL: https://issues.apache.org/jira/browse/KAFKA-2800
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.2.2
Reporter: Grant Henke
Assignee: Grant Henke


See the relevant discussion here: 
http://search-hadoop.com/m/uyzND1LAyyi2IB1wW1/Dependency+Updates=Dependency+Updates





Build failed in Jenkins: kafka_0.9.0_jdk7 #2

2015-11-10 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: update system test readme

--
[...truncated 1513 lines...]
kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.admin.AdminTest > testBasicPreferredReplicaElection PASSED

kafka.admin.AdminTest > testPreferredReplicaJsonData PASSED

kafka.admin.AdminTest > testReassigningNonExistingPartition PASSED

kafka.admin.AdminTest > testBootstrapClientIdConfig PASSED

kafka.admin.AdminTest > testPartitionReassignmentNonOverlappingReplicas PASSED

kafka.admin.AdminTest > testReplicaAssignment PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderNotInNewReplicas 
PASSED

kafka.admin.AdminTest > testTopicConfigChange PASSED

kafka.admin.AdminTest > testResumePartitionReassignmentThatWasCompleted PASSED

kafka.admin.AdminTest > testManualReplicaAssignment PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderInNewReplicas PASSED

kafka.admin.AdminTest > testShutdownBroker PASSED

kafka.admin.AdminTest > testTopicCreationWithCollision PASSED

kafka.admin.AdminTest > testTopicCreationInZK PASSED

kafka.admin.TopicCommandTest > testTopicDeletion PASSED

kafka.admin.TopicCommandTest > testConfigPreservationAcrossPartitionAlteration 
PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithCleaner PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicOnControllerFailover PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicWithRecoveredFollower PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicAlreadyMarkedAsDeleted PASSED

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteNonExistingTopic PASSED

kafka.admin.DeleteTopicTest > testRecreateTopicAfterDeletion PASSED

kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithAllAliveReplicas PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicDuringAddPartition PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupWideDeleteInZKDoesNothingForActiveConsumerGroup PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKDoesNothingForActiveGroupConsumingMultipleTopics 
PASSED

kafka.admin.DeleteConsumerGroupTest > 
testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > testTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingOneTopic PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingMultipleTopics PASSED

kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK PASSED

kafka.admin.AddPartitionsTest > testWrongReplicaCount PASSED

kafka.admin.AddPartitionsTest > testTopicDoesNotExist PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacement PASSED

kafka.admin.ConfigCommandTest > testArgumentParse PASSED

634 tests completed, 1 failed
:kafka_0.9.0_jdk7:core:test FAILED
:test_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:test'.
> There were failing tests. See the report at: 
> file://

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task 
':core:test'.
at 
org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:69)
at 
org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:46)
at 
org.gradle.api.internal.tasks.execution.PostExecutionAnalysisTaskExecuter.execute(PostExecutionAnalysisTaskExecuter.java:35)
at 
org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:64)
at 
org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58)
at 

[GitHub] kafka pull request: KAFKA-2788: Allow specifying principals with c...

2015-11-10 Thread Parth-Brahmbhatt
GitHub user Parth-Brahmbhatt opened a pull request:

https://github.com/apache/kafka/pull/489

KAFKA-2788: Allow specifying principals with comma in ACL CLI.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Parth-Brahmbhatt/kafka KAFKA-2788

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/489.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #489


commit d94881e86096cfbc512b2bbc2ffdb5508cf4fb8e
Author: Parth Brahmbhatt 
Date:   2015-11-10T21:13:07Z

KAFKA-2788: Allow specifying principals with comma in ACL CLI.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2795) potential NPE in GroupMetadataManager

2015-11-10 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999181#comment-14999181
 ] 

Jason Gustafson commented on KAFKA-2795:


Nice catch. Seems like we need to use the result of putIfNotExists. Maybe 
something like this:
{code}
  private def addGroup(groupId: String, group: GroupMetadata): GroupMetadata = {
val previous = groupsCache.putIfNotExists(groupId, group)
if (previous != null)
  previous
else
  group
  }
{code}

> potential NPE in GroupMetadataManager
> -
>
> Key: KAFKA-2795
> URL: https://issues.apache.org/jira/browse/KAFKA-2795
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Assignee: Guozhang Wang
>
> I didn't run the code, but I took a look at GroupMetadataManager.addGroup and 
> it looks like we can get a NullPointerException when a group is somehow 
> removed between the groupsCache.putIfNotExists and groupsCache.get lines and 
> someone tries to use the result of the addGroup. One way this can happen is 
> by interleaving GroupMetadataManager.addGroup and 
> GroupMetadataManager.removeGroupsForPartition.
> Here's the scenario:
> # thread-1 is in the middle of adding a group g which is in the offset topic 
> partition p. thread-1 already hit the groupsCache.putIfNotExists line in 
> GroupMetadataManager.addGroup
> # thread-2 is in the middle of migrating all groups for partition p. thread-2 
> is in GroupMetadataManager.removeGroupsForPartition and called 
> groupsCache.remove("g").
> # thread-1 now executes groupsCache.get("g"), which returns null since it's 
> now gone.
> # thread-1 now goes back to the GroupCoordinator doJoinGroup with a null 
> GroupMetadata and then tries to do a group synchronized {...}, resulting in 
> an NPE.
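The safe pattern here is to act on the value returned by the put-if-absent call instead of issuing a separate get that a concurrent remove can race with. A minimal Java illustration of the same idea using java.util.concurrent.ConcurrentHashMap — this is a hedged sketch with hypothetical names, not Kafka's actual Pool/GroupMetadataManager code:

```java
import java.util.concurrent.ConcurrentHashMap;

public class AddGroupSketch {
    static final ConcurrentHashMap<String, String> groups = new ConcurrentHashMap<>();

    // Safe variant: use the value returned by putIfAbsent rather than a
    // follow-up get(), so a concurrent remove() cannot make the result null.
    static String addGroup(String groupId, String group) {
        String previous = groups.putIfAbsent(groupId, group);
        return previous != null ? previous : group;
    }

    public static void main(String[] args) {
        String g = addGroup("g", "metadata");
        groups.remove("g"); // simulates the concurrent removeGroupsForPartition
        System.out.println(g); // "metadata" -- still non-null despite the removal
    }
}
```

With the original two-step put-then-get, the remove() between the two calls would have produced a null result; here the caller always holds a non-null reference.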





[jira] [Commented] (KAFKA-2792) KafkaConsumer.close() can block unnecessarily due to leave group waiting for a reply

2015-11-10 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999211#comment-14999211
 ] 

Guozhang Wang commented on KAFKA-2792:
--

[~ewencp] [~onurkaraman] If we fire-and-forget in the close() call, it will 
likely lead to an EOF exception on the server side whenever a consumer closes 
itself, polluting the logs. This is why I decided not to do fire-and-forget in 
close() but only in unsubscribe(). We need to think about whether this issue 
can be resolved in the socket server if we stick to this plan.

> KafkaConsumer.close() can block unnecessarily due to leave group waiting for 
> a reply
> 
>
> Key: KAFKA-2792
> URL: https://issues.apache.org/jira/browse/KAFKA-2792
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> The current implementation of close() waits for a response to LeaveGroup. 
> However, if we have an outstanding rebalance in the works, this can cause the 
> close() operation to have to wait for the entire rebalance process to 
> complete, which is annoying since the goal is to get rid of the consumer 
> object anyway. This is at best surprising and at worst can cause unexpected 
> bugs due to close() taking excessively long -- this was found due to 
> exceeding timeouts unexpectedly causing other operations in Kafka Connect to 
> timeout.
> Waiting for a response isn't necessary since as soon as the data is in the 
> TCP buffer, it'll be delivered to the broker. The client doesn't benefit at 
> all from seeing the close group. So we can instead just always send the 
> request 
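The claim that a request is delivered once it reaches the TCP buffer, even if the client closes immediately without reading a reply, can be demonstrated with a plain loopback socket. This is an illustrative sketch unrelated to the actual consumer/broker code; all names are hypothetical:

```java
import java.io.*;
import java.net.*;

public class FireAndForgetSketch {
    // A client writes one request and close()s without awaiting any reply;
    // the method returns what the server actually received.
    static String roundTrip() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread client = new Thread(() -> {
                try (Socket s = new Socket("localhost", server.getLocalPort())) {
                    s.getOutputStream().write("LEAVE\n".getBytes());
                    s.getOutputStream().flush();
                    // close() happens here: fire-and-forget, no response read
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
            client.start();
            try (Socket accepted = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(accepted.getInputStream()))) {
                String line = in.readLine(); // the request still arrives intact
                client.join();
                // A further read would return null (EOF) -- the server-side EOF
                // that the comment above worries may pollute broker logs.
                return line;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip()); // prints LEAVE
    }
}
```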





[jira] [Commented] (KAFKA-2790) Kafka 0.9.0 doc improvement

2015-11-10 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999238#comment-14999238
 ] 

Jun Rao commented on KAFKA-2790:


In the upgrade section, we should mention that 0.9.0.0 drops support for 
Scala 2.9.x.

> Kafka 0.9.0 doc improvement
> ---
>
> Key: KAFKA-2790
> URL: https://issues.apache.org/jira/browse/KAFKA-2790
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Gwen Shapira
> Fix For: 0.9.0.0
>
>
> Observed a few issues after uploading the 0.9.0 docs to the Apache site 
> (http://kafka.apache.org/090/documentation.html).
> 1. There are a few places still mentioning 0.8.2.
> docs/api.html:We are in the process of rewritting the JVM clients for Kafka. 
> As of 0.8.2 Kafka includes a newly rewritten Java producer. The next release 
> will include an equivalent Java consumer. These new clients are meant to 
> supplant the existing Scala clients, but for compatability they will co-exist 
> for some time. These clients are available in a seperate jar with minimal 
> dependencies, while the old Scala clients remain packaged with the server.
> docs/api.html:As of the 0.8.2 release we encourage all new development to use 
> the new Java producer. This client is production tested and generally both 
> faster and more fully featured than the previous Scala client. You can use 
> this client by adding a dependency on the client jar using the following 
> example maven co-ordinates (you can change the version numbers with new 
> releases):
> docs/api.html:version0.8.2.0/version
> docs/ops.html:The partition reassignment tool does not have the ability to 
> automatically generate a reassignment plan for decommissioning brokers yet. 
> As such, the admin has to come up with a reassignment plan to move the 
> replica for all partitions hosted on the broker to be decommissioned, to the 
> rest of the brokers. This can be relatively tedious as the reassignment needs 
> to ensure that all the replicas are not moved from the decommissioned broker 
> to only one other broker. To make this process effortless, we plan to add 
> tooling support for decommissioning brokers in 0.8.2.
> docs/quickstart.html: href="https://www.apache.org/dyn/closer.cgi?path=/kafka/0.8.2.0/kafka_2.10-0.8.2.0.tgz;
>  title="Kafka downloads">Download the 0.8.2.0 release and un-tar it.
> docs/quickstart.html: tar -xzf kafka_2.10-0.8.2.0.tgz
> docs/quickstart.html: cd kafka_2.10-0.8.2.0
> 2. The generated config tables (broker, producer and consumer) don't have the 
> proper table frames.





[jira] [Updated] (KAFKA-2796) add support for reassignment partition to specified logdir

2015-11-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2796:
-
Assignee: Yonghui Yang  (was: Neha Narkhede)

> add support for reassignment partition to specified logdir
> --
>
> Key: KAFKA-2796
> URL: https://issues.apache.org/jira/browse/KAFKA-2796
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, controller, core, log
>Reporter: Yonghui Yang
>Assignee: Yonghui Yang
>  Labels: features
> Fix For: 0.9.0.0
>
>
> Currently, when creating a log, the directory is chosen by counting the 
> number of partitions in each data directory and picking the directory with 
> the fewest partitions.
> However, the sizes of different TopicPartitions vary widely, so usage can 
> differ greatly between logDirs. Since each logDir usually corresponds to a 
> disk, disk usage across disks becomes very imbalanced.
> The proposed solution is to reassign partitions from high-usage logDirs to 
> low-usage logDirs. I changed the format of /admin/reassign_partitions to add 
> a replicaDirs field. During partition reassignment, when the broker's 
> LogManager.createLog() is invoked, the specified logDir is chosen if a 
> replicaDir is given; otherwise the logDir with the fewest partitions is 
> chosen.
> the old /admin/reassign_partitions:
>   {"version":1,
>"partitions": 
>[
>  {
>"topic" : "Foo",
>"partition": 1,
>"replicas": [1, 2, 3]
>  }
>]
>   }
> the new /admin/reassign_partitions:
>   {"version":1,
>"partitions": 
>[
>  {
>"topic" : "Foo",
>"partition": 1,
>"replicas": [1, 2, 3],
>"replicaDirs": {"1":"/data1/kafka_data",  "3":"/data10/kafka_data" }
>  }
>]
>   }
> This feature has been developed.
> PR: https://github.com/apache/kafka/pull/484
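The selection policy described above (honor an explicitly requested logDir, otherwise fall back to the directory with the fewest partitions) can be sketched as follows. This is an illustrative Java snippet with hypothetical names, not the actual LogManager code:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class LogDirChoiceSketch {
    // Hypothetical helper mirroring the described policy: an explicitly
    // requested dir wins; otherwise pick the dir hosting fewest partitions.
    static String chooseLogDir(Map<String, Integer> partitionCounts, String requestedDir) {
        if (requestedDir != null && partitionCounts.containsKey(requestedDir))
            return requestedDir;
        return Collections.min(partitionCounts.entrySet(),
                Map.Entry.comparingByValue()).getKey();
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        counts.put("/data1/kafka_data", 12);
        counts.put("/data10/kafka_data", 3);
        System.out.println(chooseLogDir(counts, null));                // /data10/kafka_data
        System.out.println(chooseLogDir(counts, "/data1/kafka_data")); // /data1/kafka_data
    }
}
```

Note that a partition count is only a proxy for disk usage, which is exactly the imbalance the issue describes; the replicaDirs override lets an operator bypass that heuristic.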





Build failed in Jenkins: kafka-trunk-jdk8 #127

2015-11-10 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2795: fix potential NPE in GroupMetadataManager.addGroup

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu3 (Ubuntu ubuntu legacy-ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision c455e608c1f2c7be6ff0a721f49c1fe3ede0165f 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f c455e608c1f2c7be6ff0a721f49c1fe3ede0165f
 > git rev-list 1d884d1f60aec9ec7ea334761bead4c60b13c7a9 # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson8401525364683444675.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 21.587 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson8650990860352027924.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/clients/src/main/java/org/apache/kafka/clients/consumer/CommitFailedException.java'
>  to cache fileHashes.bin 
> (/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/.gradle/2.8/taskArtifacts/fileHashes.bin).

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 13.847 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45


[jira] [Commented] (KAFKA-2795) potential NPE in GroupMetadataManager

2015-11-10 Thread Onur Karaman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999194#comment-14999194
 ] 

Onur Karaman commented on KAFKA-2795:
-

definitely seems better than before.

> potential NPE in GroupMetadataManager
> -
>
> Key: KAFKA-2795
> URL: https://issues.apache.org/jira/browse/KAFKA-2795
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Assignee: Jason Gustafson
>
> I didn't run the code, but I took a look at GroupMetadataManager.addGroup and 
> it looks like we can get a NullPointerException when a group is somehow 
> removed between the groupsCache.putIfNotExists and groupsCache.get lines and 
> someone tries to use the result of the addGroup. One way this can happen is 
> by interleaving GroupMetadataManager.addGroup and 
> GroupMetadataManager.removeGroupsForPartition.
> Here's the scenario:
> # thread-1 is in the middle of adding a group g which is in the offset topic 
> partition p. thread-1 already hit the groupsCache.putIfNotExists line in 
> GroupMetadataManager.addGroup
> # thread-2 is in the middle of migrating all groups for partition p. thread-2 
> is in GroupMetadataManager.removeGroupsForPartition and called 
> groupsCache.remove("g").
> # thread-1 now executes groupsCache.get("g"), which returns null since it's 
> now gone.
> # thread-1 now goes back to the GroupCoordinator doJoinGroup with a null 
> GroupMetadata and then tries to do a group synchronized {...}, resulting in 
> an NPE.





[GitHub] kafka pull request: KAFKA-2795: fix potential NPE in GroupMetadata...

2015-11-10 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/488

KAFKA-2795: fix potential NPE in GroupMetadataManager.addGroup



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2795

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/488.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #488


commit 095456ccee67cb97913158c2b78a92ad90970745
Author: Jason Gustafson 
Date:   2015-11-10T20:31:53Z

KAFKA-2795: fix potential NPE in addGroup






[jira] [Commented] (KAFKA-2795) potential NPE in GroupMetadataManager

2015-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999299#comment-14999299
 ] 

ASF GitHub Bot commented on KAFKA-2795:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/488

KAFKA-2795: fix potential NPE in GroupMetadataManager.addGroup



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2795

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/488.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #488


commit 095456ccee67cb97913158c2b78a92ad90970745
Author: Jason Gustafson 
Date:   2015-11-10T20:31:53Z

KAFKA-2795: fix potential NPE in addGroup




> potential NPE in GroupMetadataManager
> -
>
> Key: KAFKA-2795
> URL: https://issues.apache.org/jira/browse/KAFKA-2795
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Assignee: Jason Gustafson
>
> I didn't run the code, but I took a look at GroupMetadataManager.addGroup and 
> it looks like we can get a NullPointerException when a group is somehow 
> removed between the groupsCache.putIfNotExists and groupsCache.get lines and 
> someone tries to use the result of the addGroup. One way this can happen is 
> by interleaving GroupMetadataManager.addGroup and 
> GroupMetadataManager.removeGroupsForPartition.
> Here's the scenario:
> # thread-1 is in the middle of adding a group g which is in the offset topic 
> partition p. thread-1 already hit the groupsCache.putIfNotExists line in 
> GroupMetadataManager.addGroup
> # thread-2 is in the middle of migrating all groups for partition p. thread-2 
> is in GroupMetadataManager.removeGroupsForPartition and called 
> groupsCache.remove("g").
> # thread-1 now executes groupsCache.get("g"), which returns null since it's 
> now gone.
> # thread-1 now goes back to the GroupCoordinator doJoinGroup with a null 
> GroupMetadata and then tries to do a group synchronized {...}, resulting in 
> an NPE.





[jira] [Commented] (KAFKA-2788) allow comma when specifying principals in AclCommand

2015-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999468#comment-14999468
 ] 

ASF GitHub Bot commented on KAFKA-2788:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/489


> allow comma when specifying principals in AclCommand
> 
>
> Key: KAFKA-2788
> URL: https://issues.apache.org/jira/browse/KAFKA-2788
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Parth Brahmbhatt
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Currently, comma doesn't seem to be allowed in AclCommand when specifying 
> allow-principals and deny-principals. However, when using ssl authentication, 
> by default, the client will look like the following and one can't pass that 
> in through AclCommand.
> "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown"
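To see why a comma-delimited principal list cannot work here: a single SSL distinguished name already contains commas, so naive splitting fragments one principal into several bogus ones. A small hedged Java sketch (illustrative only, not the AclCommand code):

```java
public class PrincipalSplitSketch {
    public static void main(String[] args) {
        String principal =
            "User:CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown";
        // Naive comma splitting turns this single DN into six fragments:
        System.out.println(principal.split(",").length); // 6
        // Treating each --allow-principal / --deny-principal argument as one
        // opaque value (no comma splitting) keeps the DN intact.
    }
}
```

This is why the CLI must accept the full DN per principal flag rather than splitting a single flag's value on commas.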





Jenkins build is back to normal : kafka_0.9.0_jdk7 #3

2015-11-10 Thread Apache Jenkins Server
See 



[GitHub] kafka pull request: KAFKA-2752: Add VerifiableSource/Sink connecto...

2015-11-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/432




[jira] [Commented] (KAFKA-2752) Add clean bounce system test for distributed Copycat

2015-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999504#comment-14999504
 ] 

ASF GitHub Bot commented on KAFKA-2752:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/432


> Add clean bounce system test for distributed Copycat
> 
>
> Key: KAFKA-2752
> URL: https://issues.apache.org/jira/browse/KAFKA-2752
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.1
>
>
> Using sources and sinks for system tests that are similar to 
> VerifiableProducer and VerifiableConsumer, the test should run a copycat 
> cluster, create source and sink jobs, perform rolling bounces of the Copycat 
> nodes, and then validate the delivery semantics. In particular, with clean 
> bounces, we should see exactly once delivery of messages all the way to the 
> sink.





[jira] [Updated] (KAFKA-2752) Add clean bounce system test for distributed Copycat

2015-11-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2752:
-
Fix Version/s: (was: 0.9.0.0)
   0.9.0.1

> Add clean bounce system test for distributed Copycat
> 
>
> Key: KAFKA-2752
> URL: https://issues.apache.org/jira/browse/KAFKA-2752
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.1
>
>
> Using sources and sinks for system tests that are similar to 
> VerifiableProducer and VerifiableConsumer, the test should run a copycat 
> cluster, create source and sink jobs, perform rolling bounces of the Copycat 
> nodes, and then validate the delivery semantics. In particular, with clean 
> bounces, we should see exactly once delivery of messages all the way to the 
> sink.





[jira] [Resolved] (KAFKA-2752) Add clean bounce system test for distributed Copycat

2015-11-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2752.
--
Resolution: Fixed

Issue resolved by pull request 432
[https://github.com/apache/kafka/pull/432]

> Add clean bounce system test for distributed Copycat
> 
>
> Key: KAFKA-2752
> URL: https://issues.apache.org/jira/browse/KAFKA-2752
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Using sources and sinks for system tests that are similar to 
> VerifiableProducer and VerifiableConsumer, the test should run a copycat 
> cluster, create source and sink jobs, perform rolling bounces of the Copycat 
> nodes, and then validate the delivery semantics. In particular, with clean 
> bounces, we should see exactly once delivery of messages all the way to the 
> sink.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #128

2015-11-10 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2788; Allow specifying principals with comma in ACL CLI.

[cshapi] KAFKA-2793: Use ByteArrayDeserializer instead of StringDeserializer for

[wangguoz] KAFKA-2752: Add VerifiableSource/Sink connectors and rolling bounce

[wangguoz] KAFKA-2799: skip wakeup in the follow-up poll() call.

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 9d9bb708bf59c93672306cd731b89d7df114bba7 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 9d9bb708bf59c93672306cd731b89d7df114bba7
 > git rev-list c455e608c1f2c7be6ff0a721f49c1fe3ede0165f # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson411794100965675964.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 21.015 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson7958451813036330406.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:connect:tools:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 15.732 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45


[GitHub] kafka pull request: KAFKA-2801: Process any remaining data in SSL ...

2015-11-10 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/493

KAFKA-2801: Process any remaining data in SSL network read buffer after 
handshake

Process any remaining data in the network read buffer in 
`SslTransportLayer` when `read()` is invoked. On handshake completion, there 
could be application data ready to be processed that was read into 
`netReadBuffer` during handshake processing. `read()` is already invoked from 
`Selector` after handshake completion, but data already read into the 
`netReadBuffer` was not being processed. This PR adds a check for remaining 
data and continues with processing data if data is available.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-2801

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/493.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #493


commit 081cdc953c04e0fa12c7c8633c9f2d5e1db8638f
Author: Rajini Sivaram 
Date:   2015-11-10T23:19:37Z

KAFKA-2801: Process any remaining data in network read buffer during SSL 
read




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2801) Data read from network not processed by SSL transport layer

2015-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999595#comment-14999595
 ] 

ASF GitHub Bot commented on KAFKA-2801:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/493

KAFKA-2801: Process any remaining data in SSL network read buffer after 
handshake

Process any remaining data in the network read buffer in 
`SslTransportLayer` when `read()` is invoked. On handshake completion, there 
could be application data ready to be processed that was read into 
`netReadBuffer` during handshake processing. `read()` is already invoked from 
`Selector` after handshake completion, but data already read into the 
`netReadBuffer` was not being processed. This PR adds a check for remaining 
data and continues with processing data if data is available.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-2801

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/493.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #493


commit 081cdc953c04e0fa12c7c8633c9f2d5e1db8638f
Author: Rajini Sivaram 
Date:   2015-11-10T23:19:37Z

KAFKA-2801: Process any remaining data in network read buffer during SSL 
read




> Data read from network not processed by SSL transport layer
> ---
>
> Key: KAFKA-2801
> URL: https://issues.apache.org/jira/browse/KAFKA-2801
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We have been seeing intermittent failures in our performance tests when 
> producer times out while waiting for metadata response, when running with 
> SSL. Digging deeper into this failure, this is a result of data that was read 
> into _SslTransportLayer.netReadBuffer_ during handshake processing, but never 
> processed later unless more data arrives on the network. In the case of the 
> producer, no more data is sent until the metadata response is received and 
> hence metadata request is never processed, leading to timeouts in the 
> producer.
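
The fix described above can be illustrated with a small simulation (Python, with hypothetical class and method names; this is a toy model, not the actual SslTransportLayer code): a read() that first drains data left in an internal buffer by handshake processing before polling the network again.

```python
class BufferedTransport:
    """Toy model of a transport layer whose handshake may leave
    application bytes sitting in an internal read buffer."""

    def __init__(self, network_chunks):
        self.net_read_buffer = b""            # data already pulled off the socket
        self.network_chunks = list(network_chunks)

    def handshake(self):
        # Handshake reads may pull in application data along with handshake
        # records; the surplus lands in net_read_buffer.
        self.net_read_buffer += self.network_chunks.pop(0)

    def read(self):
        # The buggy variant only polled the network here.  The fix: first
        # return anything still buffered from handshake processing.
        if self.net_read_buffer:
            data, self.net_read_buffer = self.net_read_buffer, b""
            return data
        if self.network_chunks:
            return self.network_chunks.pop(0)
        return b""                             # no data available


t = BufferedTransport([b"metadata-response", b"fetch-response"])
t.handshake()       # buffers "metadata-response" as a side effect
print(t.read())     # drains the buffer without waiting for new network data
print(t.read())     # now reads from the network
```

Without the buffered-data check, the first read() would block waiting for new network traffic even though the metadata response had already arrived, which matches the producer timeout described in the issue.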



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2802) Add integration tests for Kafka Streams

2015-11-10 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-2802:


 Summary: Add integration tests for Kafka Streams
 Key: KAFKA-2802
 URL: https://issues.apache.org/jira/browse/KAFKA-2802
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang


We want to test the following criterion:

1. Tasks are created / migrated on the right stream threads.
2. State stores are created with change-log topics in the right numbers and 
assigned properly to tasks.
3. Co-partitioned topic partitions are assigned in the right way to tasks.
4. At least once processing guarantees (this include correct state store 
flushing / offset committing / producer flushing behavior).

Under the following scenarios:

1. Stream process killed (both -15 and -9)
2. Broker service killed (both -15 and -9)
3. Stream process got long GC.
4. New topic added to subscribed lists.
5. New partitions added to subscribed topics.
6. New stream processes started.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2788) allow comma when specifying principals in AclCommand

2015-11-10 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-2788.

Resolution: Fixed

Issue resolved by pull request 489
[https://github.com/apache/kafka/pull/489]

> allow comma when specifying principals in AclCommand
> 
>
> Key: KAFKA-2788
> URL: https://issues.apache.org/jira/browse/KAFKA-2788
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Parth Brahmbhatt
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Currently, comma doesn't seem to be allowed in AclCommand when specifying 
> allow-principals and deny-principals. However, when using SSL authentication, 
> by default, the client principal will look like the following, and one can't 
> pass that in through AclCommand.
> "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown"
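
The underlying problem is that splitting an option value on commas shatters an X.500 distinguished name, which legitimately contains commas. A hedged sketch of the two parsing strategies (Python argparse as a stand-in; this is not the real AclCommand, which is Scala/joptsimple):

```python
import argparse

# An SSL client principal: the commas are part of the DN itself.
dn = "User:CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown"

# Splitting the option value on commas fragments the DN into six pieces.
broken = dn.split(",")
assert len(broken) == 6

# Treating each occurrence of the flag as one whole value keeps the DN
# intact; multiple principals are given by repeating the flag.
parser = argparse.ArgumentParser()
parser.add_argument("--allow-principal", action="append", dest="principals")
args = parser.parse_args(["--allow-principal", dn,
                          "--allow-principal", "User:alice"])
print(args.principals)
```

With repeated flags, each `--allow-principal` value survives verbatim, so comma-containing DNs can be passed through the CLI.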



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2793) ConsoleConsumer crashes with new consumer when using keys because of incorrect deserializer

2015-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999498#comment-14999498
 ] 

ASF GitHub Bot commented on KAFKA-2793:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/482


> ConsoleConsumer crashes with new consumer when using keys because of 
> incorrect deserializer
> ---
>
> Key: KAFKA-2793
> URL: https://issues.apache.org/jira/browse/KAFKA-2793
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> The ConsoleConsumer class uses Array[Byte] everywhere, but the new consumer 
> is configured with a string key deserializer, resulting in a class cast 
> exception:
> {quote}
> java.lang.ClassCastException: java.lang.String cannot be cast to [B
>   at kafka.consumer.NewShinyConsumer.receive(BaseConsumer.scala:62)
>   at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:101)
>   at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:64)
>   at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:42)
>   at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {quote}
> Note that this is an issue whether or not you are printing the keys; it will 
> be triggered by any non-null key (and I'd imagine some should also trigger 
> serialization exceptions if they are not UTF-8-decodable).
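
The mismatch can be modeled in a few lines (Python stand-ins for the JVM types; hypothetical function names, not the actual Scala code): a string deserializer hands back a String where the consumer wrapper expects raw bytes, and the downstream cast fails. An identity byte-array deserializer avoids the cast entirely.

```python
def string_deserializer(data: bytes) -> str:
    return data.decode("utf-8")

def byte_array_deserializer(data: bytes) -> bytes:
    return data                     # identity: hand back the raw bytes

def receive(deserialize, raw_key: bytes) -> bytes:
    # Mimics the consumer wrapper, which expects the key to still be raw
    # bytes when it hands the record to the message formatter.
    key = deserialize(raw_key)
    if not isinstance(key, bytes):  # stand-in for the JVM ClassCastException
        raise TypeError(f"{type(key).__name__} cannot be cast to bytes")
    return key

raw = b"some-key"
try:
    receive(string_deserializer, raw)          # fails, like the reported bug
except TypeError as e:
    print("crash:", e)

print(receive(byte_array_deserializer, raw))   # raw bytes pass through intact
```

This is why the fix configures ByteArrayDeserializer for both key and value: the tool already works with byte arrays everywhere, so deserialization should be a no-op.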



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2793: Use ByteArrayDeserializer instead ...

2015-11-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/482


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-2793) ConsoleConsumer crashes with new consumer when using keys because of incorrect deserializer

2015-11-10 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2793.
-
   Resolution: Fixed
Fix Version/s: 0.9.0.0

Issue resolved by pull request 482
[https://github.com/apache/kafka/pull/482]

> ConsoleConsumer crashes with new consumer when using keys because of 
> incorrect deserializer
> ---
>
> Key: KAFKA-2793
> URL: https://issues.apache.org/jira/browse/KAFKA-2793
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> The ConsoleConsumer class uses Array[Byte] everywhere, but the new consumer 
> is configured with a string key deserializer, resulting in a class cast 
> exception:
> {quote}
> java.lang.ClassCastException: java.lang.String cannot be cast to [B
>   at kafka.consumer.NewShinyConsumer.receive(BaseConsumer.scala:62)
>   at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:101)
>   at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:64)
>   at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:42)
>   at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {quote}
> Note that this is an issue whether or not you are printing the keys; it will 
> be triggered by any non-null key (and I'd imagine some should also trigger 
> serialization exceptions if they are not UTF-8-decodable).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2752: Follow up to fix checkstlye

2015-11-10 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/492

KAFKA-2752: Follow up to fix checkstlye



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/492.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #492


commit 4a092d113ae310ef79d094660e30dbd00fcdb18b
Author: Grant Henke 
Date:   2015-11-10T23:20:01Z

KAFKA-2752: Follow up to fix checkstlye




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #129

2015-11-10 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2752: Follow up to fix checkstlye

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 04827e6e999c1b5e89e7dc8b1573ad263e66cd56 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 04827e6e999c1b5e89e7dc8b1573ad263e66cd56
 > git rev-list 9d9bb708bf59c93672306cd731b89d7df114bba7 # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson5581635528077644936.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 9.858 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson4264929280979048221.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:connect:tools:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 10.746 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45


[jira] [Updated] (KAFKA-1892) System tests for the new consumer and co-ordinator

2015-11-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1892:
-
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

We have added integration tests in ducktape as part of KAFKA-2274

> System tests for the new consumer and co-ordinator
> --
>
> Key: KAFKA-1892
> URL: https://issues.apache.org/jira/browse/KAFKA-1892
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jay Kreps
>Assignee: Guozhang Wang
> Attachments: KAFKA-1892.patch
>
>
> We need to get system test coverage for the new consumer implementation and 
> the co-ordinator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2790) Kafka 0.9.0 doc improvement

2015-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999427#comment-14999427
 ] 

ASF GitHub Bot commented on KAFKA-2790:
---

GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/491

KAFKA-2790: doc improvements



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-2790

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/491.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #491


commit 55dc9e4f77096aec0ab0e9408662f55d5181b9bd
Author: Gwen Shapira 
Date:   2015-11-10T22:04:31Z

KAFKA-2790: doc improvements




> Kafka 0.9.0 doc improvement
> ---
>
> Key: KAFKA-2790
> URL: https://issues.apache.org/jira/browse/KAFKA-2790
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Gwen Shapira
> Fix For: 0.9.0.0
>
>
> Observed a few issues after uploading the 0.9.0 docs to the Apache site 
> (http://kafka.apache.org/090/documentation.html).
> 1. There are a few places still mentioning 0.8.2.
> docs/api.html:We are in the process of rewritting the JVM clients for Kafka. 
> As of 0.8.2 Kafka includes a newly rewritten Java producer. The next release 
> will include an equivalent Java consumer. These new clients are meant to 
> supplant the existing Scala clients, but for compatability they will co-exist 
> for some time. These clients are available in a seperate jar with minimal 
> dependencies, while the old Scala clients remain packaged with the server.
> docs/api.html:As of the 0.8.2 release we encourage all new development to use 
> the new Java producer. This client is production tested and generally both 
> faster and more fully featured than the previous Scala client. You can use 
> this client by adding a dependency on the client jar using the following 
> example maven co-ordinates (you can change the version numbers with new 
> releases):
> docs/api.html:version0.8.2.0/version
> docs/ops.html:The partition reassignment tool does not have the ability to 
> automatically generate a reassignment plan for decommissioning brokers yet. 
> As such, the admin has to come up with a reassignment plan to move the 
> replica for all partitions hosted on the broker to be decommissioned, to the 
> rest of the brokers. This can be relatively tedious as the reassignment needs 
> to ensure that all the replicas are not moved from the decommissioned broker 
> to only one other broker. To make this process effortless, we plan to add 
> tooling support for decommissioning brokers in 0.8.2.
> docs/quickstart.html: href="https://www.apache.org/dyn/closer.cgi?path=/kafka/0.8.2.0/kafka_2.10-0.8.2.0.tgz"
>  title="Kafka downloads">Download the 0.8.2.0 release and un-tar it.
> docs/quickstart.html: tar -xzf kafka_2.10-0.8.2.0.tgz
> docs/quickstart.html: cd kafka_2.10-0.8.2.0
> 2. The generated config tables (broker, producer and consumer) don't have the 
> proper table frames.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2790: doc improvements

2015-11-10 Thread gwenshap
GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/491

KAFKA-2790: doc improvements



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-2790

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/491.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #491


commit 55dc9e4f77096aec0ab0e9408662f55d5181b9bd
Author: Gwen Shapira 
Date:   2015-11-10T22:04:31Z

KAFKA-2790: doc improvements




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-2799: skip wakeup in the follow-up poll(...

2015-11-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/490


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-2799) WakupException thrown in the followup poll() could lead to data loss

2015-11-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2799.
--
Resolution: Fixed

Issue resolved by pull request 490
[https://github.com/apache/kafka/pull/490]

> WakupException thrown in the followup poll() could lead to data loss
> 
>
> Key: KAFKA-2799
> URL: https://issues.apache.org/jira/browse/KAFKA-2799
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> The common pattern of the new consumer:
> {code}
> try {
>records = consumer.poll();
>// process records
> } catch (WakeupException) {
>consumer.close()
> }
> {code}
> in which close() can commit offsets. But the poll() call proceeds in the 
> following order:
> 1) trigger client.poll().
> 2) possibly update the consumed position if the fetch response contained 
> data.
> 3) before returning the records, possibly trigger another client.poll().
> If a wakeup exception is thrown in 3), offsets for records that were never 
> returned to the caller get committed, hence data loss.
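
The ordering described above can be simulated in a toy model (Python, hypothetical names; not the real KafkaConsumer): the consumed position is advanced before records are returned, so a wakeup fired in the follow-up poll makes close() commit past records the caller never saw. Skipping the wakeup check in the follow-up poll closes the window.

```python
class WakeupError(Exception):
    pass

class ToyConsumer:
    """Toy model of the poll() ordering; hypothetical, not Kafka code."""

    def __init__(self, records):
        self.records = list(records)
        self.position = 0              # consumed position, used by commit
        self.wakeup_pending = False

    def poll(self, skip_wakeup_in_followup):
        fetched = self.records[self.position:]
        self.position += len(fetched)          # step 2: position updated eagerly
        # Step 3: follow-up client.poll(); a pending wakeup fires here.
        if self.wakeup_pending and not skip_wakeup_in_followup:
            self.wakeup_pending = False
            raise WakeupError()                # records never reach the caller
        return fetched

    def close(self):
        return self.position                   # commits up to consumed position


def run(skip_wakeup):
    c = ToyConsumer(["r0", "r1"])
    c.wakeup_pending = True
    delivered = []
    try:
        delivered += c.poll(skip_wakeup)
    except WakeupError:
        pass
    return delivered, c.close()

print(run(skip_wakeup=False))   # ([], 2): committed past undelivered records
print(run(skip_wakeup=True))    # (['r0', 'r1'], 2): records delivered, no loss
```

With the wakeup deferred past the follow-up poll, every record whose offset gets committed has actually been returned to the application first.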



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2799) WakupException thrown in the followup poll() could lead to data loss

2015-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999508#comment-14999508
 ] 

ASF GitHub Bot commented on KAFKA-2799:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/490


> WakupException thrown in the followup poll() could lead to data loss
> 
>
> Key: KAFKA-2799
> URL: https://issues.apache.org/jira/browse/KAFKA-2799
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> The common pattern of the new consumer:
> {code}
> try {
>records = consumer.poll();
>// process records
> } catch (WakeupException) {
>consumer.close()
> }
> {code}
> in which close() can commit offsets. But the poll() call proceeds in the 
> following order:
> 1) trigger client.poll().
> 2) possibly update the consumed position if the fetch response contained 
> data.
> 3) before returning the records, possibly trigger another client.poll().
> If a wakeup exception is thrown in 3), offsets for records that were never 
> returned to the caller get committed, hence data loss.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2752) Add clean bounce system test for distributed Copycat

2015-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999616#comment-14999616
 ] 

ASF GitHub Bot commented on KAFKA-2752:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/492


> Add clean bounce system test for distributed Copycat
> 
>
> Key: KAFKA-2752
> URL: https://issues.apache.org/jira/browse/KAFKA-2752
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.1
>
>
> Using sources and sinks for system tests that are similar to 
> VerifiableProducer and VerifiableConsumer, the test should run a copycat 
> cluster, create source and sink jobs, perform rolling bounces of the Copycat 
> nodes, and then validate the delivery semantics. In particular, with clean 
> bounces, we should see exactly once delivery of messages all the way to the 
> sink.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2752: Follow up to fix checkstlye

2015-11-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/492


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #130

2015-11-10 Thread Apache Jenkins Server
See 

--
Started by user gwenshap
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu3 (Ubuntu ubuntu legacy-ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 04827e6e999c1b5e89e7dc8b1573ad263e66cd56 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 04827e6e999c1b5e89e7dc8b1573ad263e66cd56
 > git rev-list 04827e6e999c1b5e89e7dc8b1573ad263e66cd56 # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson8783468523867505802.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 10.835 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson2202014582783847902.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:connect:tools:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/clients/src/main/java/org/apache/kafka/clients/consumer/CommitFailedException.java'
>  to cache fileHashes.bin 
> (/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/.gradle/2.8/taskArtifacts/fileHashes.bin).

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 9.004 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45


[jira] [Commented] (KAFKA-2790) Kafka 0.9.0 doc improvement

2015-11-10 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999433#comment-14999433
 ] 

Gwen Shapira commented on KAFKA-2790:
-

Hey [~junrao],

Take a look. I think I addressed all the issues you found.

The table border uses a stylesheet we have in the Kafka site (SVN), so you 
won't see it without uploading to the site (unfortunately). Currently editing, 
deploying and testing our docs is still a bit painful - I'll work on improving 
that more for the next release.
I reordered the fields in the configuration tables, so description shows 
immediately after name. This is because in the new configurations the defaults 
are sometimes a bit long and it looks bad otherwise.



> Kafka 0.9.0 doc improvement
> ---
>
> Key: KAFKA-2790
> URL: https://issues.apache.org/jira/browse/KAFKA-2790
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Gwen Shapira
> Fix For: 0.9.0.0
>
>
> Observed a few issues after uploading the 0.9.0 docs to the Apache site 
> (http://kafka.apache.org/090/documentation.html).
> 1. There are a few places still mentioning 0.8.2.
> docs/api.html:We are in the process of rewriting the JVM clients for Kafka. 
> As of 0.8.2 Kafka includes a newly rewritten Java producer. The next release 
> will include an equivalent Java consumer. These new clients are meant to 
> supplant the existing Scala clients, but for compatibility they will co-exist 
> for some time. These clients are available in a separate jar with minimal 
> dependencies, while the old Scala clients remain packaged with the server.
> docs/api.html:As of the 0.8.2 release we encourage all new development to use 
> the new Java producer. This client is production tested and generally both 
> faster and more fully featured than the previous Scala client. You can use 
> this client by adding a dependency on the client jar using the following 
> example maven co-ordinates (you can change the version numbers with new 
> releases):
> docs/api.html:version0.8.2.0/version
> docs/ops.html:The partition reassignment tool does not have the ability to 
> automatically generate a reassignment plan for decommissioning brokers yet. 
> As such, the admin has to come up with a reassignment plan to move the 
> replica for all partitions hosted on the broker to be decommissioned, to the 
> rest of the brokers. This can be relatively tedious as the reassignment needs 
> to ensure that all the replicas are not moved from the decommissioned broker 
> to only one other broker. To make this process effortless, we plan to add 
> tooling support for decommissioning brokers in 0.8.2.
> docs/quickstart.html: href="https://www.apache.org/dyn/closer.cgi?path=/kafka/0.8.2.0/kafka_2.10-0.8.2.0.tgz;
>  title="Kafka downloads">Download the 0.8.2.0 release and un-tar it.
> docs/quickstart.html: tar -xzf kafka_2.10-0.8.2.0.tgz
> docs/quickstart.html: cd kafka_2.10-0.8.2.0
> 2. The generated config tables (broker, producer and consumer) don't have the 
> proper table frames.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2788: Allow specifying principals with c...

2015-11-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/489




[jira] [Commented] (KAFKA-2752) Add clean bounce system test for distributed Copycat

2015-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999574#comment-14999574
 ] 

ASF GitHub Bot commented on KAFKA-2752:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/492

KAFKA-2752: Follow up to fix checkstyle



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/492.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #492


commit 4a092d113ae310ef79d094660e30dbd00fcdb18b
Author: Grant Henke 
Date:   2015-11-10T23:20:01Z

KAFKA-2752: Follow up to fix checkstyle




> Add clean bounce system test for distributed Copycat
> 
>
> Key: KAFKA-2752
> URL: https://issues.apache.org/jira/browse/KAFKA-2752
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.1
>
>
> Using sources and sinks for system tests that are similar to 
> VerifiableProducer and VerifiableConsumer, the test should run a copycat 
> cluster, create source and sink jobs, perform rolling bounces of the Copycat 
> nodes, and then validate the delivery semantics. In particular, with clean 
> bounces, we should see exactly once delivery of messages all the way to the 
> sink.





[jira] [Created] (KAFKA-2803) Add hard bounce system test for distributed Kafka Connect

2015-11-10 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2803:


 Summary: Add hard bounce system test for distributed Kafka Connect
 Key: KAFKA-2803
 URL: https://issues.apache.org/jira/browse/KAFKA-2803
 Project: Kafka
  Issue Type: Bug
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava


Similar to the existing clean bounce test, but use kill -9. The assertions we 
can make are weaker here -- we can only check at-least-once delivery.





[jira] [Created] (KAFKA-2804) Create / Update changelog topics upon state store initialization

2015-11-10 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-2804:


 Summary: Create / Update changelog topics upon state store 
initialization
 Key: KAFKA-2804
 URL: https://issues.apache.org/jira/browse/KAFKA-2804
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang


When state store instances that are logging-backed are initialized, we need to 
check if the corresponding change log topics have been created with the right 
number of partitions:

1) If not exist, create topic
2) If expected #.partitions < actual #.partitions, delete and re-create topic.
3) If expected #.partitions > actual #.partitions, add partitions.
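The three cases above can be sketched as a small decision function. This is an 
illustrative Python sketch only; the actual Kafka Streams code is Scala/Java 
and the function and action names here are hypothetical:

```python
def reconcile_changelog(actual_partitions, expected_partitions):
    """Decide what to do with a state store's changelog topic.

    actual_partitions is None when the topic does not exist yet.
    """
    if actual_partitions is None:
        return "create"                   # 1) topic missing: create it
    if expected_partitions < actual_partitions:
        return "delete-and-recreate"      # 2) too many partitions: recreate
    if expected_partitions > actual_partitions:
        return "add-partitions"           # 3) too few partitions: expand
    return "ok"                           # partition counts already match
```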





[jira] [Updated] (KAFKA-2796) add support for reassignment partition to specified logdir

2015-11-10 Thread Yonghui Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonghui Yang updated KAFKA-2796:

Reviewer: Guozhang Wang

> add support for reassignment partition to specified logdir
> --
>
> Key: KAFKA-2796
> URL: https://issues.apache.org/jira/browse/KAFKA-2796
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, controller, core, log
>Reporter: Yonghui Yang
>Assignee: Neha Narkhede
>  Labels: features
> Fix For: 0.9.0.0
>
>
> Currently, when creating a log, the directory is chosen by counting the 
> partitions in each directory and picking the data directory with the fewest 
> partitions.
> However, the sizes of different TopicPartitions vary widely, so usage varies 
> greatly between logDirs. And since each logDir usually corresponds to a 
> disk, disk usage across disks is very imbalanced.
> A possible solution is to reassign partitions from high-usage logDirs to 
> low-usage logDirs. I changed the format of /admin/reassign_partitions, 
> adding a replicaDirs field. When reassigning partitions, if a replicaDir is 
> specified when the broker’s LogManager.createLog() is invoked, the 
> specified logDir is chosen; otherwise the logDir with the fewest 
> partitions is chosen.
> the old /admin/reassign_partitions:
>   {"version":1,
>"partitions": 
>[
>  {
>"topic" : "Foo",
>"partition": 1,
>"replicas": [1, 2, 3]
>  }
>]
>   }
> the new /admin/reassign_partitions:
>   {"version":1,
>"partitions": 
>[
>  {
>"topic" : "Foo",
>"partition": 1,
>"replicas": [1, 2, 3],
>"replicaDirs": {"1":"/data1/kafka_data",  "3":"/data10/kakfa_data" }
>  }
>]
>   }
> This feature has been developed.
> PR: https://github.com/apache/kafka/pull/484





[jira] [Updated] (KAFKA-2796) add support for reassignment partition to specified logdir

2015-11-10 Thread Yonghui Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonghui Yang updated KAFKA-2796:

Affects Version/s: (was: 0.9.0.0)
  Description: 
Currently, when creating a log, the directory is chosen by counting the 
partitions in each directory and picking the data directory with the fewest 
partitions.
However, the sizes of different TopicPartitions vary widely, so usage varies 
greatly between logDirs. And since each logDir usually corresponds to a disk, 
disk usage across disks is very imbalanced.
A possible solution is to reassign partitions from high-usage logDirs to 
low-usage logDirs. I changed the format of /admin/reassign_partitions, adding 
a replicaDirs field. When reassigning partitions, if a replicaDir is specified 
when the broker’s LogManager.createLog() is invoked, the specified logDir is 
chosen; otherwise the logDir with the fewest partitions is chosen.

the old /admin/reassign_partitions:

  {"version":1,
   "partitions": 
   [
 {
   "topic" : "Foo",
   "partition": 1,
   "replicas": [1, 2, 3]
 }
   ]
  }
the new /admin/reassign_partitions:

  {"version":1,
   "partitions": 
   [
 {
   "topic" : "Foo",
   "partition": 1,
   "replicas": [1, 2, 3],
   "replicaDirs": {"1":"/data1/kafka_data",  "3":"/data10/kakfa_data" }
 }
   ]
  }

This feature has been developed.
PR: https://github.com/apache/kafka/pull/484

  was:
Currently, when creating a log, the directory is chosen by counting the 
partitions in each directory and picking the data directory with the fewest 
partitions.
However, the sizes of different TopicPartitions vary widely, so usage varies 
greatly between logDirs. And since each logDir usually corresponds to a disk, 
disk usage across disks is very imbalanced.
A possible solution is to reassign partitions from high-usage logDirs to 
low-usage logDirs. I changed the format of /admin/reassign_partitions, adding 
a replicaDirs field. When reassigning partitions, if a replicaDir is specified 
when the broker’s LogManager.createLog() is invoked, the specified logDir is 
chosen; otherwise the logDir with the fewest partitions is chosen.

the old /admin/reassign_partitions:

  {"version":1,
   "partitions": 
   [
 {
   "topic" : "Foo",
   "partition": 1,
   "replicas": [1, 2, 3]
 }
   ]
  }
the new /admin/reassign_partitions:

  {"version":1,
   "partitions": 
   [
 {
   "topic" : "Foo",
   "partition": 1,
   "replicas": [1, 2, 3],
   "replicaDirs": {"1":"/data1/kafka_data",  "3":"/data10/kakfa_data" }
 }
   ]
  }


> add support for reassignment partition to specified logdir
> --
>
> Key: KAFKA-2796
> URL: https://issues.apache.org/jira/browse/KAFKA-2796
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, controller, core, log
>Reporter: Yonghui Yang
>Assignee: Neha Narkhede
>  Labels: features
> Fix For: 0.9.0.0
>
>
> Currently, when creating a log, the directory is chosen by counting the 
> partitions in each directory and picking the data directory with the fewest 
> partitions.
> However, the sizes of different TopicPartitions vary widely, so usage varies 
> greatly between logDirs. And since each logDir usually corresponds to a 
> disk, disk usage across disks is very imbalanced.
> A possible solution is to reassign partitions from high-usage logDirs to 
> low-usage logDirs. I changed the format of /admin/reassign_partitions, 
> adding a replicaDirs field. When reassigning partitions, if a replicaDir is 
> specified when the broker’s LogManager.createLog() is invoked, the 
> specified logDir is chosen; otherwise the logDir with the fewest 
> partitions is chosen.
> the old /admin/reassign_partitions:
>   {"version":1,
>"partitions": 
>[
>  {
>"topic" : "Foo",
>"partition": 1,
>"replicas": [1, 2, 3]
>  }
>]
>   }
> the new /admin/reassign_partitions:
>   {"version":1,
>"partitions": 
>[
>  {
>"topic" : "Foo",
>"partition": 1,
>"replicas": [1, 2, 3],
>"replicaDirs": {"1":"/data1/kafka_data",  "3":"/data10/kakfa_data" }
>  }
>]
>   }
> This feature has been developed.
> PR: https://github.com/apache/kafka/pull/484





[GitHub] kafka pull request: add support for reassignment partition to spec...

2015-11-10 Thread yonghuiyang
GitHub user yonghuiyang opened a pull request:

https://github.com/apache/kafka/pull/484

add support for reassignment partition to specified logdir

  add support for reassignment partition to specified logdir

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yonghuiyang/kafka logdir_reassignment_dev

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/484.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #484


commit 5bc2cb48c90bb693e85c4a9b273275d18627dc9e
Author: yangyonghui 
Date:   2015-11-10T07:23:42Z

add support for reassignment partition to specified logdir






[jira] [Commented] (KAFKA-2690) Protect passwords from logging

2015-11-10 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14998365#comment-14998365
 ] 

Ismael Juma commented on KAFKA-2690:


PR link:
https://github.com/apache/kafka/pull/371

> Protect passwords from logging
> --
>
> Key: KAFKA-2690
> URL: https://issues.apache.org/jira/browse/KAFKA-2690
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Jakub Nowak
> Fix For: 0.9.0.0
>
>
> We currently store the key (ssl.key.password), keystore 
> (ssl.keystore.password) and truststore (ssl.truststore.password) passwords as 
> a String in `KafkaConfig`, `ConsumerConfig` and `ProducerConfig`.
> The problem with this approach is that we may accidentally log the password 
> when logging the config.
> A possible solution is to introduce a new `ConfigDef.Type` that overrides 
> `toString` so that the value is hidden.





[jira] [Commented] (KAFKA-2642) Run replication tests in ducktape with SSL for clients

2015-11-10 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14998374#comment-14998374
 ] 

Ismael Juma commented on KAFKA-2642:


Is this done then?

> Run replication tests in ducktape with SSL for clients
> --
>
> Key: KAFKA-2642
> URL: https://issues.apache.org/jira/browse/KAFKA-2642
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.0.0
>
>
> Under KAFKA-2581, replication tests were parametrized to run with SSL for 
> interbroker communication, but not for clients. When KAFKA-2603 is committed, 
> the tests should be able to use SSL for clients as well.





[jira] [Commented] (KAFKA-2643) Run mirror maker tests in ducktape with SSL

2015-11-10 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14998375#comment-14998375
 ] 

Ismael Juma commented on KAFKA-2643:


Any update on this?

> Run mirror maker tests in ducktape with SSL
> ---
>
> Key: KAFKA-2643
> URL: https://issues.apache.org/jira/browse/KAFKA-2643
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.0.0
>
>
> Mirror maker tests are currently run only with PLAINTEXT. They should be run 
> with SSL as well. This requires the console consumer timeout in the new 
> consumer, which is being added in KAFKA-2603.





[jira] [Created] (KAFKA-2795) potential NPE in GroupMetadataManager

2015-11-10 Thread Onur Karaman (JIRA)
Onur Karaman created KAFKA-2795:
---

 Summary: potential NPE in GroupMetadataManager
 Key: KAFKA-2795
 URL: https://issues.apache.org/jira/browse/KAFKA-2795
 Project: Kafka
  Issue Type: Bug
Reporter: Onur Karaman
Assignee: Guozhang Wang


I didn't run the code, but I took a look at GroupMetadataManager.addGroup and 
it looks like we can get a NullPointerException when a group is somehow removed 
between the groupsCache.putIfNotExists and groupsCache.get lines and someone 
then tries to use the result of addGroup. One way this can happen is by 
interleaving GroupMetadataManager.addGroup and 
GroupMetadataManager.removeGroupsForPartition.

Here's the scenario:
# thread-1 is in the middle of adding a group g which is in the offset topic 
partition p. thread-1 already hit the groupsCache.putIfNotExists line in 
GroupMetadataManager.addGroup
# thread-2 is in the middle of migrating all groups for partition p. thread-2 
is in GroupMetadataManager.removeGroupsForPartition and called 
groupsCache.remove("g").
# thread-1 now executes groupsCache.get("g"), which returns null since it's now 
gone.
# thread-1 now goes back to the GroupCoordinator doJoinGroup with a null 
GroupMetadata and then tries to do a group synchronized {...}, resulting in an 
NPE.
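The put-then-get pattern versus an atomic insert-or-get can be sketched as 
follows. This is an illustrative Python sketch only; the real 
GroupMetadataManager is Scala, and the class and method names here are 
hypothetical:

```python
from threading import Lock

class GroupCache:
    def __init__(self):
        self._lock = Lock()
        self._groups = {}

    def add_group_unsafe(self, group_id, group):
        # Mirrors putIfNotExists followed by a separate get: if another
        # thread removes the group between the two steps, get() returns
        # None and the caller later hits the NPE-equivalent.
        with self._lock:
            self._groups.setdefault(group_id, group)
        # ... removeGroupsForPartition may run here and delete group_id ...
        return self._groups.get(group_id)

    def add_group_safe(self, group_id, group):
        # Insert-or-get as one atomic step: the caller always receives a
        # non-None group, even if it is removed from the cache right after.
        with self._lock:
            return self._groups.setdefault(group_id, group)
```

The safe variant avoids the race entirely by returning the value produced by 
the atomic insert-or-get rather than re-reading the cache.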





[jira] [Created] (KAFKA-2796) add support for reassignment partition to specified logdir

2015-11-10 Thread Yonghui Yang (JIRA)
Yonghui Yang created KAFKA-2796:
---

 Summary: add support for reassignment partition to specified logdir
 Key: KAFKA-2796
 URL: https://issues.apache.org/jira/browse/KAFKA-2796
 Project: Kafka
  Issue Type: Improvement
  Components: clients, controller, core, log
Affects Versions: 0.9.0.0
Reporter: Yonghui Yang
Assignee: Neha Narkhede
 Fix For: 0.9.0.0


Currently, when creating a log, the directory is chosen by counting the 
partitions in each directory and picking the data directory with the fewest 
partitions.
However, the sizes of different TopicPartitions vary widely, so usage varies 
greatly between logDirs. And since each logDir usually corresponds to a disk, 
disk usage across disks is very imbalanced.
A possible solution is to reassign partitions from high-usage logDirs to 
low-usage logDirs. I changed the format of /admin/reassign_partitions, adding 
a replicaDirs field. When reassigning partitions, if a replicaDir is specified 
when the broker’s LogManager.createLog() is invoked, the specified logDir is 
chosen; otherwise the logDir with the fewest partitions is chosen.

the old /admin/reassign_partitions:

  {"version":1,
   "partitions": 
   [
 {
   "topic" : "Foo",
   "partition": 1,
   "replicas": [1, 2, 3]
 }
   ]
  }
the new /admin/reassign_partitions:

  {"version":1,
   "partitions": 
   [
 {
   "topic" : "Foo",
   "partition": 1,
   "replicas": [1, 2, 3],
   "replicaDirs": {"1":"/data1/kafka_data",  "3":"/data10/kakfa_data" }
 }
   ]
  }
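The selection rule described above can be sketched as a small helper. This is 
an illustrative Python sketch only; the real change lives in the broker's 
Scala LogManager.createLog(), so the function name and arguments here are 
hypothetical:

```python
def choose_log_dir(partition_counts, replica_dir=None):
    """Pick the directory for a new replica's log.

    partition_counts maps logDir -> current number of partitions in it.
    replica_dir, when supplied (from the replicaDirs field of
    /admin/reassign_partitions), takes precedence; otherwise fall back
    to the existing least-loaded-directory rule.
    """
    if replica_dir is not None:
        return replica_dir
    # Default behavior: directory with the fewest partitions.
    return min(partition_counts, key=partition_counts.get)
```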




