[GitHub] kafka pull request: HOTFIX: Fix checkstyle failure in KStreams by ...

2016-03-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1000


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: HOTFIX: Fix checkstyle failure in KStreams by ...

2016-03-02 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/1000

HOTFIX: Fix checkstyle failure in KStreams by providing fully qualified 
class names.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
hotfix-kstreams-checkstyle-javadocs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1000.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1000


commit e9ec85fc9fa3a732c4ede2cf91305d2d8a22ed44
Author: Ewen Cheslack-Postava 
Date:   2016-03-03T06:57:15Z

HOTFIX: Fix checkstyle failure in KStreams by providing fully qualified 
class names.






[jira] [Commented] (KAFKA-2967) Move Kafka documentation to ReStructuredText

2016-03-02 Thread Vamsi Subhash Achanta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177342#comment-15177342
 ] 

Vamsi Subhash Achanta commented on KAFKA-2967:
--

Emacs Org mode works very well for documentation: http://orgmode.org/
It can generate LaTeX output as well, and developers find writing documentation 
in it a pleasure. It can also be used by people who don't use Emacs.

> Move Kafka documentation to ReStructuredText
> 
>
> Key: KAFKA-2967
> URL: https://issues.apache.org/jira/browse/KAFKA-2967
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>
> Storing documentation as HTML is kind of BS :)
> * Formatting is a pain, and making it look good is even worse
> * It's just HTML; we can't generate PDFs
> * Reading and editing is painful
> * Validating changes is hard because our formatting relies on all kinds of 
> Apache Server features.
> I suggest:
> * Move to RST
> * Generate HTML and PDF during build using Sphinx plugin for Gradle.
> Lots of Apache projects are doing this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3323) Negative offsets in Log Segment Index files due to Integer overflow when compaction is enabled

2016-03-02 Thread Michael Schiff (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Schiff updated KAFKA-3323:
--
Attachment: log_dump.txt
index_dump.txt

Index and log dumps from the 0 segment of partition 0 of a compacted topic.

The index file is very small and is quite obviously incorrect. The first incorrect entry is:

{code}
offset: -1733149320 position: 1307775
{code}
The corresponding entry in the log dump shows an offset of {code}6856785272L{code}. Truncated to an Int, that value wraps around to the offset stored in the index:
{code}
scala> 6856785272L.toInt
res1: Int = -1733149320
{code}
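
For illustration, here is a minimal, self-contained Scala sketch of the same wrap-around (not Kafka code; it assumes a base offset of 0, consistent with the 0 segment dumped above):

{code}
object RelativeOffsetOverflow {
  def main(args: Array[String]): Unit = {
    val baseOffset = 0L          // base offset of the dumped segment (assumed 0 for the "0" segment)
    val offset     = 6856785272L // absolute offset seen in the log dump
    val relative   = offset - baseOffset
    val stored     = relative.toInt  // silently wraps: 6856785272 does not fit in 32 bits
    println(s"relative=$relative storedInIndex=$stored") // prints storedInIndex=-1733149320
    if (relative > Int.MaxValue)
      println("relative offset exceeds Int.MaxValue; append would need to reject it instead of wrapping")
  }
}
{code}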

> Negative offsets in Log Segment Index files due to Integer overflow when 
> compaction is enabled 
> ---
>
> Key: KAFKA-3323
> URL: https://issues.apache.org/jira/browse/KAFKA-3323
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.1.1
>Reporter: Michael Schiff
>Assignee: Jay Kreps
> Attachments: index_dump.txt, log_dump.txt
>
>
> {code}
>   /**
>    * Append an entry for the given offset/location pair to the index. This entry must have
>    * a larger offset than all subsequent entries.
>    */
>   def append(offset: Long, position: Int) {
>     inLock(lock) {
>       require(!isFull, "Attempt to append to a full index (size = " + size + ").")
>       if (size.get == 0 || offset > lastOffset) {
>         debug("Adding index entry %d => %d to %s.".format(offset, position, file.getName))
>         this.mmap.putInt((offset - baseOffset).toInt)
>         this.mmap.putInt(position)
>         this.size.incrementAndGet()
>         this.lastOffset = offset
>         require(entries * 8 == mmap.position, entries + " entries but file position in index is " + mmap.position + ".")
>       } else {
>         throw new InvalidOffsetException("Attempt to append an offset (%d) to position %d no larger than the last offset appended (%d) to %s."
>           .format(offset, entries, lastOffset, file.getAbsolutePath))
>       }
>     }
>   }
> {code}
> OffsetIndex.append assumes that (offset - baseOffset) can be represented as 
> an integer without overflow. If the LogSegment is from a compacted topic, 
> this assumption may not be valid. The result is a quiet integer overflow, 
> which stores a negative value into the index. This breaks the binary search 
> used to look up offset positions, so consumers bootstrapping themselves on 
> the topic skip large intervals of offsets.
> I believe that the issue is caused by the LogCleaner. Specifically, by the 
> groupings produced by 
> {code}
>   /**
>    * Group the segments in a log into groups totaling less than a given size. the size is
>    * enforced separately for the log data and the index data.
>    * We collect a group of such segments together into a single
>    * destination segment. This prevents segment sizes from shrinking too much.
>    *
>    * @param segments The log segments to group
>    * @param maxSize the maximum size in bytes for the total of all log data in a group
>    * @param maxIndexSize the maximum size in bytes for the total of all index data in a group
>    *
>    * @return A list of grouped segments
>    */
>   private[log] def groupSegmentsBySize(segments: Iterable[LogSegment], maxSize: Int, maxIndexSize: Int): List[Seq[LogSegment]]
> {code}
> Since this method is only concerned with grouping by size, without taking 
> baseOffset and groupMaxOffset into account, it will produce groups that, when 
> cleaned into a single segment, have offsets that overflow. This is more 
> likely for topics with low key cardinality but high update volume, as you can 
> wind up with very few cleaned records but very high offsets.





[jira] [Created] (KAFKA-3323) Negative offsets in Log Segment Index files due to Integer overflow when compaction is enabled

2016-03-02 Thread Michael Schiff (JIRA)
Michael Schiff created KAFKA-3323:
-

 Summary: Negative offsets in Log Segment Index files due to 
Integer overflow when compaction is enabled 
 Key: KAFKA-3323
 URL: https://issues.apache.org/jira/browse/KAFKA-3323
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.8.1.1
Reporter: Michael Schiff
Assignee: Jay Kreps


{code}
  /**
   * Append an entry for the given offset/location pair to the index. This entry must have
   * a larger offset than all subsequent entries.
   */
  def append(offset: Long, position: Int) {
    inLock(lock) {
      require(!isFull, "Attempt to append to a full index (size = " + size + ").")
      if (size.get == 0 || offset > lastOffset) {
        debug("Adding index entry %d => %d to %s.".format(offset, position, file.getName))
        this.mmap.putInt((offset - baseOffset).toInt)
        this.mmap.putInt(position)
        this.size.incrementAndGet()
        this.lastOffset = offset
        require(entries * 8 == mmap.position, entries + " entries but file position in index is " + mmap.position + ".")
      } else {
        throw new InvalidOffsetException("Attempt to append an offset (%d) to position %d no larger than the last offset appended (%d) to %s."
          .format(offset, entries, lastOffset, file.getAbsolutePath))
      }
    }
  }
{code}

OffsetIndex.append assumes that (offset - baseOffset) can be represented as an 
integer without overflow. If the LogSegment is from a compacted topic, this 
assumption may not be valid. The result is a quiet integer overflow, which 
stores a negative value into the index. This breaks the binary search used to 
look up offset positions, so consumers bootstrapping themselves on the topic 
skip large intervals of offsets.

I believe that the issue is caused by the LogCleaner. Specifically, by the 
groupings produced by 
{code}
  /**
   * Group the segments in a log into groups totaling less than a given size. the size is
   * enforced separately for the log data and the index data.
   * We collect a group of such segments together into a single
   * destination segment. This prevents segment sizes from shrinking too much.
   *
   * @param segments The log segments to group
   * @param maxSize the maximum size in bytes for the total of all log data in a group
   * @param maxIndexSize the maximum size in bytes for the total of all index data in a group
   *
   * @return A list of grouped segments
   */
  private[log] def groupSegmentsBySize(segments: Iterable[LogSegment], maxSize: Int, maxIndexSize: Int): List[Seq[LogSegment]]
{code}

Since this method is only concerned with grouping by size, without taking 
baseOffset and groupMaxOffset into account, it will produce groups that, when 
cleaned into a single segment, have offsets that overflow. This is more likely 
for topics with low key cardinality but high update volume, as you can wind up 
with very few cleaned records but very high offsets.
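
To make the missing constraint concrete, here is a rough sketch (hypothetical types and helper name, not the actual LogCleaner code) of a grouping check that also bounds the relative-offset span of the would-be destination segment, so that (lastOffset - baseOffset) still fits in an Int:

{code}
case class SegmentInfo(baseOffset: Long, lastOffset: Long, logSize: Int, indexSize: Int)

// True if `candidate` can join `group` without exceeding the byte-size limits or
// producing a cleaned segment whose relative offsets no longer fit in an Int.
def fitsInGroup(group: Seq[SegmentInfo], candidate: SegmentInfo,
                maxSize: Int, maxIndexSize: Int): Boolean = {
  val totalLogSize   = group.map(_.logSize.toLong).sum + candidate.logSize
  val totalIndexSize = group.map(_.indexSize.toLong).sum + candidate.indexSize
  val groupBase      = group.headOption.map(_.baseOffset).getOrElse(candidate.baseOffset)
  val offsetSpan     = candidate.lastOffset - groupBase
  totalLogSize <= maxSize && totalIndexSize <= maxIndexSize && offsetSpan <= Int.MaxValue
}
{code}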





[jira] [Resolved] (KAFKA-2375) Implement elasticsearch Copycat sink connector

2016-03-02 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-2375.
--
   Resolution: Won't Fix
 Reviewer: Ewen Cheslack-Postava
Fix Version/s: 0.10.0.0

Resolving this since there are multiple community-provided connectors now (see, 
e.g., a few different versions listed on http://connectors.confluent.io).

> Implement elasticsearch Copycat sink connector
> --
>
> Key: KAFKA-2375
> URL: https://issues.apache.org/jira/browse/KAFKA-2375
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
> Fix For: 0.10.0.0
>
>
> Implement an elasticsearch sink connector for Copycat. This should send 
> records to elasticsearch with unique document IDs, given appropriate configs 
> to extract IDs from input records.
> The motivation here is to provide a good end-to-end example with built-in 
> connectors that require minimal dependencies. Because Elasticsearch has a 
> very simple REST API, an elasticsearch connector shouldn't require any extra 
> dependencies, and a logs -> Elasticsearch pipeline (in combination with 
> KAFKA-2374) provides a compelling out-of-the-box Copycat use case.





Build failed in Jenkins: kafka-trunk-jdk8 #411

2016-03-02 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2784: swallow exceptions when mirror maker exits.

[me] KAFKA-2944: Replaced the NPE with a nicer error and clean exit and added

--
[...truncated 6576 lines...]
org.apache.kafka.common.record.RecordTest > testChecksum[160] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[160] PASSED

org.apache.kafka.common.record.RecordTest > testFields[160] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[161] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[161] PASSED

org.apache.kafka.common.record.RecordTest > testFields[161] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[162] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[162] PASSED

org.apache.kafka.common.record.RecordTest > testFields[162] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[163] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[163] PASSED

org.apache.kafka.common.record.RecordTest > testFields[163] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[164] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[164] PASSED

org.apache.kafka.common.record.RecordTest > testFields[164] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[165] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[165] PASSED

org.apache.kafka.common.record.RecordTest > testFields[165] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[166] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[166] PASSED

org.apache.kafka.common.record.RecordTest > testFields[166] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[167] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[167] PASSED

org.apache.kafka.common.record.RecordTest > testFields[167] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[168] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[168] PASSED

org.apache.kafka.common.record.RecordTest > testFields[168] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[169] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[169] PASSED

org.apache.kafka.common.record.RecordTest > testFields[169] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[170] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[170] PASSED

org.apache.kafka.common.record.RecordTest > testFields[170] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[171] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[171] PASSED

org.apache.kafka.common.record.RecordTest > testFields[171] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[172] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[172] PASSED

org.apache.kafka.common.record.RecordTest > testFields[172] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[173] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[173] PASSED

org.apache.kafka.common.record.RecordTest > testFields[173] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[174] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[174] PASSED

org.apache.kafka.common.record.RecordTest > testFields[174] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[175] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[175] PASSED

org.apache.kafka.common.record.RecordTest > testFields[175] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[176] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[176] PASSED

org.apache.kafka.common.record.RecordTest > testFields[176] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[177] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[177] PASSED

org.apache.kafka.common.record.RecordTest > testFields[177] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[178] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[178] PASSED

org.apache.kafka.common.record.RecordTest > testFields[178] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[179] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[179] PASSED

org.apache.kafka.common.record.RecordTest > testFields[179] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[180] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[180] PASSED

org.apache.kafka.common.record.RecordTest > testFields[180] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[181] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[181] PASSED

org.apache.kafka.common.record.RecordTest > testFields[181] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[182] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[182

Build failed in Jenkins: kafka-trunk-jdk7 #1084

2016-03-02 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2784: swallow exceptions when mirror maker exits.

[me] KAFKA-2944: Replaced the NPE with a nicer error and clean exit and added

--
[...truncated 1488 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedFollo

Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-03-02 Thread Becket Qin
Hi Jason,

I was thinking that every time we connect to a broker, we first send the
version check request. (The version check request itself should be very
simple and should never change across server releases.) This does add an
additional round trip, but given that reconnects are rare, it is probably fine.
On the client side, the client will always send requests using the lowest
supported version across all brokers. That means if a Kafka cluster is being
downgraded, we will use the downgraded protocol as soon as the client
connects to an older broker.
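
As a rough sketch of that bookkeeping on the client side (hypothetical data
shapes, not the actual client code or wire format): each broker advertises the
highest version it supports per API, and the client takes the minimum across
the brokers it knows about, for example:

  // Sketch only; assumes at least one broker.
  def usableVersions(perBroker: Seq[Map[String, Int]]): Map[String, Int] =
    perBroker.reduce { (a, b) =>
      a.keySet.intersect(b.keySet).map(api => api -> math.min(a(api), b(api))).toMap
    }

  // e.g. a cluster mid-downgrade, where one broker still supports a newer Fetch version:
  usableVersions(Seq(Map("Fetch" -> 2, "Produce" -> 1), Map("Fetch" -> 1, "Produce" -> 1)))
  // => Map(Fetch -> 1, Produce -> 1): the client sticks to Fetch v1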

@Ashish,
Can you help me understand the pain points from other open source projects
that you mentioned a little more? There are two different levels of
requirements:
1. User wants to know if the client is compatible with the broker or not.
2. User wants the client and the broker to negotiate the protocol on their
own.

Currently in Kafka the principle we are following is to let clients stick
to a certain version and have the server adapt to the clients accordingly.
If this KIP doesn't want to break this rule, it seems we should simply let
the clients send the ApiVersion they are using to the brokers and let the
brokers decide whether to accept or reject the clients. This means users have
to upgrade brokers before they upgrade clients. It satisfies (1), in that a
newer client will know immediately that it is not compatible with an older
server.
If this KIP instead lets newer clients adapt to older brokers, that is a good
thing to have compatibility-wise: users are able to upgrade clients before
they upgrade the Kafka brokers. This satisfies (2), as the newer clients can
also talk to the older servers.

If we decide to go with (2), the benefit is that a newer client won't break
when talking to an older broker. But functionality-wise, it might be no better
than an older client.
In the downgrading case, we probably still have to notify all the users.
For example, if an application is sending messages with timestamps and the
broker gets downgraded to an older version that does not support timestamps,
the clients will suddenly start to throw away timestamps. This might affect
the application logic. So even if clients automatically adapt to a
lower-version broker, the applications might still break. Hence we still need
to notify users about the case where the clients are newer than the brokers.
This is the same for both (1) and (2).
Supporting (2) will introduce more complication on the client side, and we
may also have to communicate to users which functionality is and is not
supported after the protocol negotiation finishes.

Thanks,

Jiangjie (Becket) Qin




On Wed, Mar 2, 2016 at 5:58 PM, Dana Powers  wrote:

> In kafka-python we've been doing something like:
>
> if version >= (0, 9):
>   Do cool new stuff
> elif version >= (0, 8, 2):
>   Do some older stuff
> 
> else:
>   raise UnsupportedVersionError
>
> This will break if / when the new 0.9 apis are completely removed from some
> future release, but should handle intermediate broker upgrades. Because we
> can't add support for future apis a priori, I think the best we could do
> here is throw an error that request protocol version X is not supported.
> For now that comes through as a broken socket connection, so there is an
> error - just not a super helpful one.
>
> For that reason I'm also in favor of a generic error response when a
> protocol req is not recognized.
>
> -Dana
> On Mar 2, 2016 5:38 PM, "Jay Kreps"  wrote:
>
> > But won't it be the case that what clients end up doing would be
> something
> > like
> >if(version != 0.8.1)
> >   throw new UnsupportedVersionException()
> > which then means the client is broken as soon as we release a new server
> > version even though the protocol didn't change. I'm actually not sure how
> > you could use that information in a forward compatible way since you
> can't
> > know a priori if you will work with the next release until you know if
> the
> > protocol changed.
> >
> > -Jay
> >
> > On Wed, Mar 2, 2016 at 5:28 PM, Jason Gustafson 
> > wrote:
> >
> > > Hey Jay,
> > >
> > > Yeah, I wasn't suggesting that we eliminate request API versions.
> They're
> > > definitely needed on the broker to support compatibility. I was just
> > saying
> > > that if a client wants to support multiple broker versions (e.g. 0.8
> and
> > > 0.9), then it makes more sense to me to make the kafka release version
> > > available in order to determine which version of the request API should
> > be
> > > used rather than adding a new request type which exposes all of the
> > > different supported versions for all of the request types. Request API
> > > versions all change in lockstep with Kafka releases anyway.
> > >
> > > -Jason
> > >
> > > On Wed, Mar 2, 2016 at 5:15 PM, Becket Qin 
> wrote:
> > >
> > > > I think using Kafka release version makes se

Build failed in Jenkins: kafka-trunk-jdk8 #410

2016-03-02 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3290: fix transient test failures in WorkerSourceTaskTest

[cshapi] MINOR: fix typo

--
[...truncated 4744 lines...]
org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testCommitOffsetAsyncCoordinatorNotAvailable PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testUnknownMemberIdOnSyncGroup PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testUnexpectedErrorOnSyncGroup PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testIllegalGenerationOnSyncGroup PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testCommitOffsetSyncCallbackWithNonRetriableException PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testCommitOffsetMetadata PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testDisconnectInJoin PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testCommitOffsetSyncCoordinatorDisconnected PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testCommitOffsetAsyncFailedWithDefaultCallback PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testCommitOffsetSyncNotCoordinator PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testCommitOffsetRebalanceInProgress PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testIllegalGeneration PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testNormalHeartbeat PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testRefreshOffset PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testNormalJoinGroupFollower PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testCommitOffsetSyncCoordinatorNotAvailable PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testRefreshOffsetWithNoFetchableOffsets PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testProtocolMetadataOrder PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testCommitOffsetAsyncWithDefaultCallback PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testCommitOffsetAsyncNotCoordinator PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testGroupReadUnauthorized PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testRebalanceInProgressOnSyncGroup PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testCommitOffsetUnknownMemberId PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testUnknownConsumerId PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testRefreshOffsetNotCoordinatorForConsumer PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testCommitOffsetMetadataTooLarge PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testCommitOffsetAsyncDisconnected PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testLeaveGroupOnClose PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testRejoinGroup PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testNormalJoinGroupLeader PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testJoinGroupInvalidGroupId PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testCommitOffsetIllegalGeneration PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testCoordinatorNotAvailable PASSED

org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest > 
testNotCoordinator PASSED

org.apache.kafka.clients.consumer.internals.ConsumerProtocolTest > 
serializeDeserializeNullSubscriptionUserData PASSED

org.apache.kafka.clients.consumer.internals.ConsumerProtocolTest > 
deserializeNewSubscriptionVersion PASSED

org.apache.kafka.clients.consumer.internals.ConsumerProtocolTest > 
serializeDeserializeMetadata PASSED

org.apache.kafka.clients.consumer.internals.ConsumerProtocolTest > 
deserializeNullAssignmentUserData PASSED

org.apache.kafka.clients.consumer.internals.ConsumerProtocolTest > 
serializeDeserializeAssignment PASSED

org.apache.kafka.clients.consumer.internals.ConsumerProtocolTest > 
deserializeNewAssignmentVersion PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > 
testUpdateFetchPositionResetToLatestOffset PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > 
testFetchUnknownTopicOrPartition PASSED

org.apache.kafka.clients.consumer.internals.FetcherTest > 
testFetchOffsetOutOfRange PASSED

org.apache.kafka.clients.c

Build failed in Jenkins: kafka-trunk-jdk7 #1083

2016-03-02 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3290: fix transient test failures in WorkerSourceTaskTest

[cshapi] MINOR: fix typo

--
[...truncated 2916 lines...]

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic PASSED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromSenderThread 
PASSED

kafka.api.PlaintextProducerSendTest > testWrongSerializer PASSED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithLogApendTime PASSED

kafka.api.AdminClientTest > testDescribeGroup PASSED

kafka.api.AdminClientTest > testDescribeConsumerGroup PASSED

kafka.api.AdminClientTest > testListGroups PASSED

kafka.api.AdminClientTest > testDescribeConsumerGroupForNonExistentGroup PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate PASSED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnStopPolling 
PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForInvalidTopic PASSED

kafka.api.PlaintextConsumerTest > testSeek PASSED

kafka.api.PlaintextConsumerTest > testPositionAndCommit PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose PASSED

kafka.api.PlaintextConsumerTest > testFetchRecordTooLarge PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose PASSED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testInterceptors PASSED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription PASSED

kafka.api.PlaintextConsumerTest > testGroupConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionsFor PASSED

kafka.api.PlaintextConsumerTest > testInterceptorsWithWrongKeyValue PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPartitionPauseAndResume PASSED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithLogAppendTime PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnCloseAfterWakeup PASSED

kafka.api.PlaintextConsumerTest > testMaxPollRecords PASSED

kafka.api.PlaintextConsumerTest > testAutoOffsetReset PASSED

kafka.api.PlaintextConsumerTest > testFetchInvalidOffset PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitIntercept PASSED

kafka.api.PlaintextConsumerTest > testCommitMetadata PASSED

kafka.api.PlaintextConsumerTest > testRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPatternSubscription PASSED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.PlaintextConsumerTest > testListTopics PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.PlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.PlaintextConsumerTest > testCommitSpecifiedOffsets PASSED
ERROR: Could not install GRADLE_2_4_RC_2_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:947)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:390)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:577)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:527)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:381)
at hudson.scm.SCM.poll(SCM.java:398)
at hudson.model.AbstractProject._poll(AbstractProject.java:1453)
at hudson.model.AbstractProject.poll(AbstractProject.java:1356)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:526)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:555)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
ERROR: Could not install JDK_1_7U51_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.Tool

[jira] [Commented] (KAFKA-2944) NullPointerException in KafkaConfigStorage when config storage starts right before shutdown request

2016-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177056#comment-15177056
 ] 

ASF GitHub Bot commented on KAFKA-2944:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/993


> NullPointerException in KafkaConfigStorage when config storage starts right 
> before shutdown request
> ---
>
> Key: KAFKA-2944
> URL: https://issues.apache.org/jira/browse/KAFKA-2944
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> Relevant log where you can see a config update starting, then the shutdown 
> request arriving, and we end up with a NullPointerException:
> {quote}
> [2015-12-03 09:12:55,712] DEBUG Change in connector task count from 2 to 3, 
> writing updated task configurations 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> [2015-12-03 09:12:56,224] INFO Kafka Connect stopping 
> (org.apache.kafka.connect.runtime.Connect)
> [2015-12-03 09:12:56,224] INFO Stopping REST server 
> (org.apache.kafka.connect.runtime.rest.RestServer)
> [2015-12-03 09:12:56,227] INFO Stopped 
> ServerConnector@10cb550e{HTTP/1.1}{0.0.0.0:8083} 
> (org.eclipse.jetty.server.ServerConnector)
> [2015-12-03 09:12:56,234] INFO Stopped 
> o.e.j.s.ServletContextHandler@3f8a24d5{/,null,UNAVAILABLE} 
> (org.eclipse.jetty.server.handler.ContextHandler)
> [2015-12-03 09:12:56,235] INFO REST server stopped 
> (org.apache.kafka.connect.runtime.rest.RestServer)
> [2015-12-03 09:12:56,235] INFO Herder stopping 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> [2015-12-03 09:12:58,209] ERROR Unexpected exception in KafkaBasedLog's work 
> thread (org.apache.kafka.connect.util.KafkaBasedLog)
> java.lang.NullPointerException
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.completeTaskIdSet(KafkaConfigStorage.java:558)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.access$1200(KafkaConfigStorage.java:143)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage$1.onCompletion(KafkaConfigStorage.java:476)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage$1.onCompletion(KafkaConfigStorage.java:372)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.poll(KafkaBasedLog.java:235)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.readToLogEnd(KafkaBasedLog.java:275)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.access$300(KafkaBasedLog.java:70)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog$WorkThread.run(KafkaBasedLog.java:307)
> [2015-12-03 09:13:26,704] ERROR Failed to write root configuration to Kafka:  
> (org.apache.kafka.connect.storage.KafkaConfigStorage)
> java.util.concurrent.TimeoutException: Timed out waiting for future
>   at 
> org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:74)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.putTaskConfigs(KafkaConfigStorage.java:352)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:737)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:677)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:673)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startWork(DistributedHerder.java:640)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.handleRebalanceCompleted(DistributedHerder.java:598)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:184)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:159)
>   at java.lang.Thread.run(Thread.java:745)
> [2015-12-03 09:13:26,704] ERROR Failed to reconfigure connector's tasks, 
> retrying after backoff: 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> org.apache.kafka.connect.errors.ConnectException: Error writing root 
> configuration to Kafka
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.putTaskConfigs(KafkaConfigStorage.java:355)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:737)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:677)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.ja

[jira] [Updated] (KAFKA-2944) NullPointerException in KafkaConfigStorage when config storage starts right before shutdown request

2016-03-02 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2944:
-
Assignee: Gwen Shapira  (was: Ewen Cheslack-Postava)

> NullPointerException in KafkaConfigStorage when config storage starts right 
> before shutdown request
> ---
>
> Key: KAFKA-2944
> URL: https://issues.apache.org/jira/browse/KAFKA-2944
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Gwen Shapira
>
> Relevant log where you can see a config update starting, then the shutdown 
> request arriving, and we end up with a NullPointerException:
> {quote}
> [2015-12-03 09:12:55,712] DEBUG Change in connector task count from 2 to 3, 
> writing updated task configurations 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> [2015-12-03 09:12:56,224] INFO Kafka Connect stopping 
> (org.apache.kafka.connect.runtime.Connect)
> [2015-12-03 09:12:56,224] INFO Stopping REST server 
> (org.apache.kafka.connect.runtime.rest.RestServer)
> [2015-12-03 09:12:56,227] INFO Stopped 
> ServerConnector@10cb550e{HTTP/1.1}{0.0.0.0:8083} 
> (org.eclipse.jetty.server.ServerConnector)
> [2015-12-03 09:12:56,234] INFO Stopped 
> o.e.j.s.ServletContextHandler@3f8a24d5{/,null,UNAVAILABLE} 
> (org.eclipse.jetty.server.handler.ContextHandler)
> [2015-12-03 09:12:56,235] INFO REST server stopped 
> (org.apache.kafka.connect.runtime.rest.RestServer)
> [2015-12-03 09:12:56,235] INFO Herder stopping 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> [2015-12-03 09:12:58,209] ERROR Unexpected exception in KafkaBasedLog's work 
> thread (org.apache.kafka.connect.util.KafkaBasedLog)
> java.lang.NullPointerException
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.completeTaskIdSet(KafkaConfigStorage.java:558)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.access$1200(KafkaConfigStorage.java:143)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage$1.onCompletion(KafkaConfigStorage.java:476)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage$1.onCompletion(KafkaConfigStorage.java:372)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.poll(KafkaBasedLog.java:235)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.readToLogEnd(KafkaBasedLog.java:275)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.access$300(KafkaBasedLog.java:70)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog$WorkThread.run(KafkaBasedLog.java:307)
> [2015-12-03 09:13:26,704] ERROR Failed to write root configuration to Kafka:  
> (org.apache.kafka.connect.storage.KafkaConfigStorage)
> java.util.concurrent.TimeoutException: Timed out waiting for future
>   at 
> org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:74)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.putTaskConfigs(KafkaConfigStorage.java:352)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:737)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:677)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:673)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startWork(DistributedHerder.java:640)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.handleRebalanceCompleted(DistributedHerder.java:598)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:184)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:159)
>   at java.lang.Thread.run(Thread.java:745)
> [2015-12-03 09:13:26,704] ERROR Failed to reconfigure connector's tasks, 
> retrying after backoff: 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> org.apache.kafka.connect.errors.ConnectException: Error writing root 
> configuration to Kafka
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.putTaskConfigs(KafkaConfigStorage.java:355)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:737)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:677)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:673)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.start

[GitHub] kafka pull request: KAFKA-2944: Replaced the NPE with a nicer erro...

2016-03-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/993




[jira] [Created] (KAFKA-3322) recurring errors

2016-03-02 Thread jackie (JIRA)
jackie created KAFKA-3322:
-

 Summary: recurring errors
 Key: KAFKA-3322
 URL: https://issues.apache.org/jira/browse/KAFKA-3322
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.9.0.0
 Environment: kafka 0.9.0 and zookeeper 3.4.6
Reporter: jackie


We're getting hundreds of these errors with kafka 0.8, and topics become 
unavailable after running for a few days. It looks like 
https://issues.apache.org/jira/browse/KAFKA-1314





[jira] [Commented] (KAFKA-2784) Mirror maker should swallow exceptions during shutdown.

2016-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177007#comment-15177007
 ] 

ASF GitHub Bot commented on KAFKA-2784:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/478


> Mirror maker should swallow exceptions during shutdown.
> ---
>
> Key: KAFKA-2784
> URL: https://issues.apache.org/jira/browse/KAFKA-2784
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.0.0
>
>
> Mirror maker should swallow exceptions during shutdown to make sure shutdown 
> latch is pulled.





[jira] [Updated] (KAFKA-2784) Mirror maker should swallow exceptions during shutdown.

2016-03-02 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2784:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 478
[https://github.com/apache/kafka/pull/478]

> Mirror maker should swallow exceptions during shutdown.
> ---
>
> Key: KAFKA-2784
> URL: https://issues.apache.org/jira/browse/KAFKA-2784
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.0.0
>
>
> Mirror maker should swallow exceptions during shutdown to make sure shutdown 
> latch is pulled.





[GitHub] kafka pull request: KAFKA-2784: swallow exceptions when mirror mak...

2016-03-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/478




Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-03-02 Thread Dana Powers
In kafka-python we've been doing something like:

if version >= (0, 9):
  Do cool new stuff
elif version >= (0, 8, 2):
  Do some older stuff

else:
  raise UnsupportedVersionError

This will break if / when the new 0.9 apis are completely removed from some
future release, but should handle intermediate broker upgrades. Because we
can't add support for future apis a priori, I think the best we could do
here is throw an error that request protocol version X is not supported.
For now that comes through as a broken socket connection, so there is an
error - just not a super helpful one.

For that reason I'm also in favor of a generic error response when a
protocol req is not recognized.

-Dana
On Mar 2, 2016 5:38 PM, "Jay Kreps"  wrote:

> But won't it be the case that what clients end up doing would be something
> like
>if(version != 0.8.1)
>   throw new UnsupportedVersionException()
> which then means the client is broken as soon as we release a new server
> version even though the protocol didn't change. I'm actually not sure how
> you could use that information in a forward compatible way since you can't
> know a priori if you will work with the next release until you know if the
> protocol changed.
>
> -Jay
>
> On Wed, Mar 2, 2016 at 5:28 PM, Jason Gustafson 
> wrote:
>
> > Hey Jay,
> >
> > Yeah, I wasn't suggesting that we eliminate request API versions. They're
> > definitely needed on the broker to support compatibility. I was just
> saying
> > that if a client wants to support multiple broker versions (e.g. 0.8 and
> > 0.9), then it makes more sense to me to make the kafka release version
> > available in order to determine which version of the request API should
> be
> > used rather than adding a new request type which exposes all of the
> > different supported versions for all of the request types. Request API
> > versions all change in lockstep with Kafka releases anyway.
> >
> > -Jason
> >
> > On Wed, Mar 2, 2016 at 5:15 PM, Becket Qin  wrote:
> >
> > > I think using Kafka release version makes sense. More particularly, we
> > can
> > > use the ApiVersion and this will cover all the interval version as
> well.
> > In
> > > KAFKA-3025, we added the ApiVersion to message format version mapping,
> We
> > > can add the ApiKey to version mapping to ApiVersion as well. We can
> move
> > > ApiVersion class to o.a.k.c package and use it for both server and
> > clients.
> > >
> > > @Jason, if we cache the release info in metadata and not re-validate
> the
> > > release on reconnect, would it still work if we do a rolling downgrade?
> > >
> > > On Wed, Mar 2, 2016 at 3:16 PM, Jason Gustafson 
> > > wrote:
> > >
> > > > I think Dana's suggestion to include the Kafka release version makes
> a
> > > lot
> > > > of sense. I'm actually wondering why you would need the individual
> API
> > > > versions if you have that? It sounds like keeping track of all the
> api
> > > > version information would add a lot of complexity to clients since
> > > they'll
> > > > have to try to handle different version permutations which are not
> > > actually
> > > > possible in practice. Wouldn't it be simpler to know that you're
> > talking
> > > to
> > > > an 0.9 broker than that you're talking to a broker which supports
> > > version 2
> > > > of the group coordinator request, version 1 of fetch request, etc?
> > Also,
> > > > the release version could be included in the broker information in
> the
> > > > topic metadata request which would save the need for the additional
> > round
> > > > trip on every reconnect.
> > > >
> > > > -Jason
> > > >
> > > > On Tue, Mar 1, 2016 at 7:59 PM, Ashish Singh 
> > > wrote:
> > > >
> > > > > On Tue, Mar 1, 2016 at 6:30 PM, Gwen Shapira 
> > > wrote:
> > > > >
> > > > > > One more thing, the KIP actually had 3 parts:
> > > > > > 1. The version protocol
> > > > > > 2. New response on messages of wrong API key or wrong version
> > > > > > 3. Protocol documentation
> > > > > >
> > > > > There is a WIP patch for adding protocol docs,
> > > > > https://github.com/apache/kafka/pull/970 . By protocol
> > documentation,
> > > > you
> > > > > mean updating this, right?
> > > > >
> > > > > >
> > > > > > I understand that you are offering to only implement part 1?
> > > > > > But the KIP discussion and vote should still cover all three
> parts,
> > > > > > they will just be implemented in separate JIRA?
> > > > > >
> > > > > The patch for KAFKA-3307, https://github.com/apache/kafka/pull/986
> ,
> > > > covers
> > > > > 1 and 2. KAFKA-3309 tracks documentation part. Yes, we should
> include
> > > all
> > > > > the three points you mentioned while discussing or voting for
> KIP-35.
> > > > >
> > > > > >
> > > > > > On Tue, Mar 1, 2016 at 5:25 PM, Ashish Singh <
> asi...@cloudera.com>
> > > > > wrote:
> > > > > > > On Tue, Mar 1, 2016 at 5:11 PM, Gwen Shapira <
> g...@confluent.io>
> > > > > wrote:
> > > > > > >
> > > > > > >> I don't see a use for the name - clients should be able to
> > > tra

Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-03-02 Thread Jason Gustafson
Hey Becket,

I was thinking about that too. I guess there are two scenarios: upgrades
and downgrades. For upgrades, the client may continue using the old API
version until the next metadata refresh, which seems ok. For downgrades, we
still need some way to signal the client that an API version is not
supported by the broker (that's one part of this KIP that we should
definitely get in even if we reject the rest). In that case, the client can
refetch topic metadata from a different broker and update versions
accordingly, though there's clearly some trickiness if you have to revert
to an older version of the topic metadata request. What do you think?

-Jason

On Wed, Mar 2, 2016 at 5:28 PM, Jason Gustafson  wrote:

> Hey Jay,
>
> Yeah, I wasn't suggesting that we eliminate request API versions. They're
> definitely needed on the broker to support compatibility. I was just saying
> that if a client wants to support multiple broker versions (e.g. 0.8 and
> 0.9), then it makes more sense to me to make the kafka release version
> available in order to determine which version of the request API should be
> used rather than adding a new request type which exposes all of the
> different supported versions for all of the request types. Request API
> versions all change in lockstep with Kafka releases anyway.
>
> -Jason
>
> On Wed, Mar 2, 2016 at 5:15 PM, Becket Qin  wrote:
>
>> I think using Kafka release version makes sense. More particularly, we can
>> use the ApiVersion and this will cover all the interval version as well.
>> In
>> KAFKA-3025, we added the ApiVersion to message format version mapping, We
>> can add the ApiKey to version mapping to ApiVersion as well. We can move
>> ApiVersion class to o.a.k.c package and use it for both server and
>> clients.
>>
>> @Jason, if we cache the release info in metadata and not re-validate the
>> release on reconnect, would it still work if we do a rolling downgrade?
>>
>> On Wed, Mar 2, 2016 at 3:16 PM, Jason Gustafson 
>> wrote:
>>
>> > I think Dana's suggestion to include the Kafka release version makes a
>> lot
>> > of sense. I'm actually wondering why you would need the individual API
>> > versions if you have that? It sounds like keeping track of all the api
>> > version information would add a lot of complexity to clients since
>> they'll
>> > have to try to handle different version permutations which are not
>> actually
>> > possible in practice. Wouldn't it be simpler to know that you're
>> talking to
>> > an 0.9 broker than that you're talking to a broker which supports
>> version 2
>> > of the group coordinator request, version 1 of fetch request, etc? Also,
>> > the release version could be included in the broker information in the
>> > topic metadata request which would save the need for the additional
>> round
>> > trip on every reconnect.
>> >
>> > -Jason
>> >
>> > On Tue, Mar 1, 2016 at 7:59 PM, Ashish Singh 
>> wrote:
>> >
>> > > On Tue, Mar 1, 2016 at 6:30 PM, Gwen Shapira 
>> wrote:
>> > >
>> > > > One more thing, the KIP actually had 3 parts:
>> > > > 1. The version protocol
>> > > > 2. New response on messages of wrong API key or wrong version
>> > > > 3. Protocol documentation
>> > > >
>> > > There is a WIP patch for adding protocol docs,
>> > > https://github.com/apache/kafka/pull/970 . By protocol documentation,
>> > you
>> > > mean updating this, right?
>> > >
>> > > >
>> > > > I understand that you are offering to only implement part 1?
>> > > > But the KIP discussion and vote should still cover all three parts,
>> > > > they will just be implemented in separate JIRA?
>> > > >
>> > > The patch for KAFKA-3307, https://github.com/apache/kafka/pull/986,
>> > covers
>> > > 1 and 2. KAFKA-3309 tracks documentation part. Yes, we should include
>> all
>> > > the three points you mentioned while discussing or voting for KIP-35.
>> > >
>> > > >
>> > > > On Tue, Mar 1, 2016 at 5:25 PM, Ashish Singh 
>> > > wrote:
>> > > > > On Tue, Mar 1, 2016 at 5:11 PM, Gwen Shapira 
>> > > wrote:
>> > > > >
>> > > > >> I don't see a use for the name - clients should be able to
>> translate
>> > > > >> ApiKey to name for any API they support, and I'm not sure why
>> would
>> > a
>> > > > >> client need to log anything about APIs it does not support. Am I
>> > > > >> missing something?
>> > > > >>
>> > > > > Yea, it is a fair assumption that client would know about APIs it
>> > > > supports.
>> > > > > It could have been helpful for client users to see new APIs
>> though,
>> > > > however
>> > > > > users can always refer to protocol doc of new version to find
>> > > > corresponding
>> > > > > names of the new APIs.
>> > > > >
>> > > > >>
>> > > > >> On a related note, Magnus is currently on vacation, but he
>> should be
>> > > > >> back at the end of next week. I'd like to hold off on the vote
>> until
>> > > > >> he gets back since his experience in implementing clients  and
>> his
>> > > > >> opinions will be very valuable for this discussion.
>> > > > >>
>> > > > > 

Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-03-02 Thread Jason Gustafson
I'm assuming that the broker would continue to support older versions of
the requests, so the check would probably be more like this:

if (version >= 0.9)
    // send 0.9 requests
else if (version >= 0.8)
    // send 0.8 requests

Does that not work? This version can also be used by clients to tell
whether the broker supports major new features (such as the group
coordinator protocol or the admin APIs).

-Jason
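
To make the dispatch above concrete, here is a minimal sketch; the release
strings and the ProduceFormat name are illustrative only, not real client APIs:

    // Sketch only: map a broker release string such as "0.9.0.1" to the
    // request format a client would use.
    public class BrokerReleaseDispatch {

        enum ProduceFormat { V0_LEGACY, V1_CURRENT }

        static ProduceFormat formatFor(String brokerRelease) {
            // Compare only the first two numeric components (e.g. "0.9").
            String[] parts = brokerRelease.split("\\.");
            int major = Integer.parseInt(parts[0]);
            int minor = Integer.parseInt(parts[1]);
            if (major > 0 || minor >= 9)
                return ProduceFormat.V1_CURRENT;   // 0.9 and later
            return ProduceFormat.V0_LEGACY;        // 0.8.x
        }

        public static void main(String[] args) {
            System.out.println(formatFor("0.9.0.1"));  // V1_CURRENT
            System.out.println(formatFor("0.8.2.2"));  // V0_LEGACY
        }
    }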



On Wed, Mar 2, 2016 at 5:38 PM, Jay Kreps  wrote:

> But won't it be the case that what clients end up doing would be something
> like
>if(version != 0.8.1)
>   throw new UnsupportedVersionException()
> which then means the client is broken as soon as we release a new server
> version even though the protocol didn't change. I'm actually not sure how
> you could use that information in a forward compatible way since you can't
> know a priori if you will work with the next release until you know if the
> protocol changed.
>
> -Jay
>
> On Wed, Mar 2, 2016 at 5:28 PM, Jason Gustafson 
> wrote:
>
> > Hey Jay,
> >
> > Yeah, I wasn't suggesting that we eliminate request API versions. They're
> > definitely needed on the broker to support compatibility. I was just
> saying
> > that if a client wants to support multiple broker versions (e.g. 0.8 and
> > 0.9), then it makes more sense to me to make the kafka release version
> > available in order to determine which version of the request API should
> be
> > used rather than adding a new request type which exposes all of the
> > different supported versions for all of the request types. Request API
> > versions all change in lockstep with Kafka releases anyway.
> >
> > -Jason
> >
> > On Wed, Mar 2, 2016 at 5:15 PM, Becket Qin  wrote:
> >
> > > I think using Kafka release version makes sense. More particularly, we
> > can
> > > use the ApiVersion and this will cover all the interval version as
> well.
> > In
> > > KAFKA-3025, we added the ApiVersion to message format version mapping,
> We
> > > can add the ApiKey to version mapping to ApiVersion as well. We can
> move
> > > ApiVersion class to o.a.k.c package and use it for both server and
> > clients.
> > >
> > > @Jason, if we cache the release info in metadata and not re-validate
> the
> > > release on reconnect, would it still work if we do a rolling downgrade?
> > >
> > > On Wed, Mar 2, 2016 at 3:16 PM, Jason Gustafson 
> > > wrote:
> > >
> > > > I think Dana's suggestion to include the Kafka release version makes
> a
> > > lot
> > > > of sense. I'm actually wondering why you would need the individual
> API
> > > > versions if you have that? It sounds like keeping track of all the
> api
> > > > version information would add a lot of complexity to clients since
> > > they'll
> > > > have to try to handle different version permutations which are not
> > > actually
> > > > possible in practice. Wouldn't it be simpler to know that you're
> > talking
> > > to
> > > > an 0.9 broker than that you're talking to a broker which supports
> > > version 2
> > > > of the group coordinator request, version 1 of fetch request, etc?
> > Also,
> > > > the release version could be included in the broker information in
> the
> > > > topic metadata request which would save the need for the additional
> > round
> > > > trip on every reconnect.
> > > >
> > > > -Jason
> > > >
> > > > On Tue, Mar 1, 2016 at 7:59 PM, Ashish Singh 
> > > wrote:
> > > >
> > > > > On Tue, Mar 1, 2016 at 6:30 PM, Gwen Shapira 
> > > wrote:
> > > > >
> > > > > > One more thing, the KIP actually had 3 parts:
> > > > > > 1. The version protocol
> > > > > > 2. New response on messages of wrong API key or wrong version
> > > > > > 3. Protocol documentation
> > > > > >
> > > > > There is a WIP patch for adding protocol docs,
> > > > > https://github.com/apache/kafka/pull/970 . By protocol
> > documentation,
> > > > you
> > > > > mean updating this, right?
> > > > >
> > > > > >
> > > > > > I understand that you are offering to only implement part 1?
> > > > > > But the KIP discussion and vote should still cover all three
> parts,
> > > > > > they will just be implemented in separate JIRA?
> > > > > >
> > > > > The patch for KAFKA-3307, https://github.com/apache/kafka/pull/986
> ,
> > > > covers
> > > > > 1 and 2. KAFKA-3309 tracks documentation part. Yes, we should
> include
> > > all
> > > > > the three points you mentioned while discussing or voting for
> KIP-35.
> > > > >
> > > > > >
> > > > > > On Tue, Mar 1, 2016 at 5:25 PM, Ashish Singh <
> asi...@cloudera.com>
> > > > > wrote:
> > > > > > > On Tue, Mar 1, 2016 at 5:11 PM, Gwen Shapira <
> g...@confluent.io>
> > > > > wrote:
> > > > > > >
> > > > > > >> I don't see a use for the name - clients should be able to
> > > translate
> > > > > > >> ApiKey to name for any API they support, and I'm not sure why
> > > would
> > > > a
> > > > > > >> client need to log anything about APIs it does not support.
> Am I
> > > > > > >> missing something?
> > > > > > >>
> > > > > > > Yea, it is a fair assumption that

Build failed in Jenkins: kafka-trunk-jdk7 #1082

2016-03-02 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Improve JavaDoc for some public classes.

--
[...truncated 1481 lines...]

kafka.api.PlaintextConsumerTest > testInterceptorsWithWrongKeyValue PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPartitionPauseAndResume PASSED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithLogAppendTime PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnCloseAfterWakeup PASSED

kafka.api.PlaintextConsumerTest > testMaxPollRecords PASSED

kafka.api.PlaintextConsumerTest > testAutoOffsetReset PASSED

kafka.api.PlaintextConsumerTest > testFetchInvalidOffset PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitIntercept PASSED

kafka.api.PlaintextConsumerTest > testCommitMetadata PASSED

kafka.api.PlaintextConsumerTest > testRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPatternSubscription PASSED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.PlaintextConsumerTest > testListTopics PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.PlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.PlaintextConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.ProducerBounceTest > testBrokerFailure PASSED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown PASSED

kafka.api.SaslPlaintextConsumerTest > testPauseStateNotPreservedByRebalance 
PASSED

kafka.api.SaslPlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.SaslPlaintextConsumerTest > testListTopics PASSED

kafka.api.SaslPlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.SaslPlaintextConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.SaslPlaintextConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.SslConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.SslConsumerTest > testUnsubscribeTopic PASSED

kafka.api.SslConsumerTest > testListTopics PASSED

kafka.api.SslConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.SslConsumerTest > testSimpleConsumption PASSED

kafka.api.SslConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.SslConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures PASSED
ERROR: Could not install GRADLE_2_4_RC_2_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:947)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:390)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:577)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:527)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:381)
at hudson.scm.SCM.poll(SCM.java:398)
at hudson.model.AbstractProject._poll(AbstractProject.java:1453)
at hudson.model.AbstractProject.poll(AbstractProject.java:1356)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:526)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:555)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
ERROR: Could not install JDK_1_7U51_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:947)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:390)
at 
hudson.plugins.git.GitSCM.co

Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-03-02 Thread Jay Kreps
But won't it be the case that what clients end up doing would be something
like
    if (version != 0.8.1)
        throw new UnsupportedVersionException()
which then means the client is broken as soon as we release a new server
version even though the protocol didn't change. I'm actually not sure how
you could use that information in a forward compatible way since you can't
know a priori if you will work with the next release until you know if the
protocol changed.

-Jay

On Wed, Mar 2, 2016 at 5:28 PM, Jason Gustafson  wrote:

> Hey Jay,
>
> Yeah, I wasn't suggesting that we eliminate request API versions. They're
> definitely needed on the broker to support compatibility. I was just saying
> that if a client wants to support multiple broker versions (e.g. 0.8 and
> 0.9), then it makes more sense to me to make the kafka release version
> available in order to determine which version of the request API should be
> used rather than adding a new request type which exposes all of the
> different supported versions for all of the request types. Request API
> versions all change in lockstep with Kafka releases anyway.
>
> -Jason
>
> On Wed, Mar 2, 2016 at 5:15 PM, Becket Qin  wrote:
>
> > I think using Kafka release version makes sense. More particularly, we
> can
> > use the ApiVersion and this will cover all the interval version as well.
> In
> > KAFKA-3025, we added the ApiVersion to message format version mapping, We
> > can add the ApiKey to version mapping to ApiVersion as well. We can move
> > ApiVersion class to o.a.k.c package and use it for both server and
> clients.
> >
> > @Jason, if we cache the release info in metadata and not re-validate the
> > release on reconnect, would it still work if we do a rolling downgrade?
> >
> > On Wed, Mar 2, 2016 at 3:16 PM, Jason Gustafson 
> > wrote:
> >
> > > I think Dana's suggestion to include the Kafka release version makes a
> > lot
> > > of sense. I'm actually wondering why you would need the individual API
> > > versions if you have that? It sounds like keeping track of all the api
> > > version information would add a lot of complexity to clients since
> > they'll
> > > have to try to handle different version permutations which are not
> > actually
> > > possible in practice. Wouldn't it be simpler to know that you're
> talking
> > to
> > > an 0.9 broker than that you're talking to a broker which supports
> > version 2
> > > of the group coordinator request, version 1 of fetch request, etc?
> Also,
> > > the release version could be included in the broker information in the
> > > topic metadata request which would save the need for the additional
> round
> > > trip on every reconnect.
> > >
> > > -Jason
> > >
> > > On Tue, Mar 1, 2016 at 7:59 PM, Ashish Singh 
> > wrote:
> > >
> > > > On Tue, Mar 1, 2016 at 6:30 PM, Gwen Shapira 
> > wrote:
> > > >
> > > > > One more thing, the KIP actually had 3 parts:
> > > > > 1. The version protocol
> > > > > 2. New response on messages of wrong API key or wrong version
> > > > > 3. Protocol documentation
> > > > >
> > > > There is a WIP patch for adding protocol docs,
> > > > https://github.com/apache/kafka/pull/970 . By protocol
> documentation,
> > > you
> > > > mean updating this, right?
> > > >
> > > > >
> > > > > I understand that you are offering to only implement part 1?
> > > > > But the KIP discussion and vote should still cover all three parts,
> > > > > they will just be implemented in separate JIRA?
> > > > >
> > > > The patch for KAFKA-3307, https://github.com/apache/kafka/pull/986,
> > > covers
> > > > 1 and 2. KAFKA-3309 tracks documentation part. Yes, we should include
> > all
> > > > the three points you mentioned while discussing or voting for KIP-35.
> > > >
> > > > >
> > > > > On Tue, Mar 1, 2016 at 5:25 PM, Ashish Singh 
> > > > wrote:
> > > > > > On Tue, Mar 1, 2016 at 5:11 PM, Gwen Shapira 
> > > > wrote:
> > > > > >
> > > > > >> I don't see a use for the name - clients should be able to
> > translate
> > > > > >> ApiKey to name for any API they support, and I'm not sure why
> > would
> > > a
> > > > > >> client need to log anything about APIs it does not support. Am I
> > > > > >> missing something?
> > > > > >>
> > > > > > Yea, it is a fair assumption that client would know about APIs it
> > > > > supports.
> > > > > > It could have been helpful for client users to see new APIs
> though,
> > > > > however
> > > > > > users can always refer to protocol doc of new version to find
> > > > > corresponding
> > > > > > names of the new APIs.
> > > > > >
> > > > > >>
> > > > > >> On a related note, Magnus is currently on vacation, but he
> should
> > be
> > > > > >> back at the end of next week. I'd like to hold off on the vote
> > until
> > > > > >> he gets back since his experience in implementing clients  and
> his
> > > > > >> opinions will be very valuable for this discussion.
> > > > > >>
> > > > > > That is great. It will be valuable to have his feedback. I will
> > hold
> > > > off
> > > > > on
> 

Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-03-02 Thread Jason Gustafson
Hey Jay,

Yeah, I wasn't suggesting that we eliminate request API versions. They're
definitely needed on the broker to support compatibility. I was just saying
that if a client wants to support multiple broker versions (e.g. 0.8 and
0.9), then it makes more sense to me to make the kafka release version
available in order to determine which version of the request API should be
used rather than adding a new request type which exposes all of the
different supported versions for all of the request types. Request API
versions all change in lockstep with Kafka releases anyway.

-Jason

On Wed, Mar 2, 2016 at 5:15 PM, Becket Qin  wrote:

> I think using Kafka release version makes sense. More particularly, we can
> use the ApiVersion and this will cover all the interval version as well. In
> KAFKA-3025, we added the ApiVersion to message format version mapping, We
> can add the ApiKey to version mapping to ApiVersion as well. We can move
> ApiVersion class to o.a.k.c package and use it for both server and clients.
>
> @Jason, if we cache the release info in metadata and not re-validate the
> release on reconnect, would it still work if we do a rolling downgrade?
>
> On Wed, Mar 2, 2016 at 3:16 PM, Jason Gustafson 
> wrote:
>
> > I think Dana's suggestion to include the Kafka release version makes a
> lot
> > of sense. I'm actually wondering why you would need the individual API
> > versions if you have that? It sounds like keeping track of all the api
> > version information would add a lot of complexity to clients since
> they'll
> > have to try to handle different version permutations which are not
> actually
> > possible in practice. Wouldn't it be simpler to know that you're talking
> to
> > an 0.9 broker than that you're talking to a broker which supports
> version 2
> > of the group coordinator request, version 1 of fetch request, etc? Also,
> > the release version could be included in the broker information in the
> > topic metadata request which would save the need for the additional round
> > trip on every reconnect.
> >
> > -Jason
> >
> > On Tue, Mar 1, 2016 at 7:59 PM, Ashish Singh 
> wrote:
> >
> > > On Tue, Mar 1, 2016 at 6:30 PM, Gwen Shapira 
> wrote:
> > >
> > > > One more thing, the KIP actually had 3 parts:
> > > > 1. The version protocol
> > > > 2. New response on messages of wrong API key or wrong version
> > > > 3. Protocol documentation
> > > >
> > > There is a WIP patch for adding protocol docs,
> > > https://github.com/apache/kafka/pull/970 . By protocol documentation,
> > you
> > > mean updating this, right?
> > >
> > > >
> > > > I understand that you are offering to only implement part 1?
> > > > But the KIP discussion and vote should still cover all three parts,
> > > > they will just be implemented in separate JIRA?
> > > >
> > > The patch for KAFKA-3307, https://github.com/apache/kafka/pull/986,
> > covers
> > > 1 and 2. KAFKA-3309 tracks documentation part. Yes, we should include
> all
> > > the three points you mentioned while discussing or voting for KIP-35.
> > >
> > > >
> > > > On Tue, Mar 1, 2016 at 5:25 PM, Ashish Singh 
> > > wrote:
> > > > > On Tue, Mar 1, 2016 at 5:11 PM, Gwen Shapira 
> > > wrote:
> > > > >
> > > > >> I don't see a use for the name - clients should be able to
> translate
> > > > >> ApiKey to name for any API they support, and I'm not sure why
> would
> > a
> > > > >> client need to log anything about APIs it does not support. Am I
> > > > >> missing something?
> > > > >>
> > > > > Yea, it is a fair assumption that client would know about APIs it
> > > > supports.
> > > > > It could have been helpful for client users to see new APIs though,
> > > > however
> > > > > users can always refer to protocol doc of new version to find
> > > > corresponding
> > > > > names of the new APIs.
> > > > >
> > > > >>
> > > > >> On a related note, Magnus is currently on vacation, but he should
> be
> > > > >> back at the end of next week. I'd like to hold off on the vote
> until
> > > > >> he gets back since his experience in implementing clients  and his
> > > > >> opinions will be very valuable for this discussion.
> > > > >>
> > > > > That is great. It will be valuable to have his feedback. I will
> hold
> > > off
> > > > on
> > > > > removing "api_name" and "api_deprecated_versions" or adding release
> > > > version.
> > > > >
> > > > >>
> > > > >> Gwen
> > > > >>
> > > > >> On Tue, Mar 1, 2016 at 4:24 PM, Ashish Singh  >
> > > > wrote:
> > > > >> > Works with me. I will update PR to remove this.
> > > > >> >
> > > > >> > Also, "api_name" have been pointed out as a concern. However, it
> > can
> > > > be
> > > > >> > handy for logging and similar purposes. Any take on that?
> > > > >> >
> > > > >> > On Tue, Mar 1, 2016 at 3:46 PM, Gwen Shapira  >
> > > > wrote:
> > > > >> >
> > > > >> >> Jay also mentioned:
> > > > >> >> "Or, alternately, since deprecation has no functional impact
> and
> > is
> > > > >> >> just a message
> > > > >> >> to developers, we could just leave 

[GitHub] kafka pull request: MINOR: fix typo

2016-03-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/996




[jira] [Resolved] (KAFKA-3290) WorkerSourceTask testCommit transient failure

2016-03-02 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-3290.
-
   Resolution: Fixed
Fix Version/s: 0.10.0.0

Issue resolved by pull request 998
[https://github.com/apache/kafka/pull/998]

> WorkerSourceTask testCommit transient failure
> -
>
> Key: KAFKA-3290
> URL: https://issues.apache.org/jira/browse/KAFKA-3290
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> From recent failed build:
> {code}
> org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommit FAILED
> java.lang.AssertionError:
>   Expectation failure on verify:
> Listener.onStartup(job-0): expected: 1, actual: 1
> Listener.onShutdown(job-0): expected: 1, actual: 1
> at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
> at 
> org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
> at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
> at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1405)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTaskTest.testCommit(WorkerSourceTaskTest.java:221)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Fixed error in test by moving commit va...

2016-03-02 Thread gwenshap
Github user gwenshap closed the pull request at:

https://github.com/apache/kafka/pull/994




[jira] [Commented] (KAFKA-3290) WorkerSourceTask testCommit transient failure

2016-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176909#comment-15176909
 ] 

ASF GitHub Bot commented on KAFKA-3290:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/998


> WorkerSourceTask testCommit transient failure
> -
>
> Key: KAFKA-3290
> URL: https://issues.apache.org/jira/browse/KAFKA-3290
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> From recent failed build:
> {code}
> org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommit FAILED
> java.lang.AssertionError:
>   Expectation failure on verify:
> Listener.onStartup(job-0): expected: 1, actual: 1
> Listener.onShutdown(job-0): expected: 1, actual: 1
> at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
> at 
> org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
> at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
> at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1405)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTaskTest.testCommit(WorkerSourceTaskTest.java:221)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3290: fix transient test failures in Wor...

2016-03-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/998




Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-03-02 Thread Becket Qin
I think using the Kafka release version makes sense. More particularly, we can
use ApiVersion and this will cover all the internal versions as well. In
KAFKA-3025, we added the ApiVersion to message format version mapping; we
can add an ApiKey to version mapping to ApiVersion as well. We can move the
ApiVersion class to the o.a.k.c package and use it for both server and clients.

@Jason, if we cache the release info in metadata and not re-validate the
release on reconnect, would it still work if we do a rolling downgrade?
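
To make that mapping concrete, a rough sketch of an ApiKey-to-supported-version
table; the enum and the version numbers below are hypothetical and much smaller
than the real ApiKeys list:

    import java.util.EnumMap;
    import java.util.Map;

    public class ApiVersionTable {

        enum ApiKey { PRODUCE, FETCH, METADATA }

        // For each API key: [oldest supported version, newest supported version].
        private final Map<ApiKey, short[]> supported = new EnumMap<>(ApiKey.class);

        ApiVersionTable() {
            supported.put(ApiKey.PRODUCE,  new short[]{0, 1});
            supported.put(ApiKey.FETCH,    new short[]{0, 1});
            supported.put(ApiKey.METADATA, new short[]{0, 0});
        }

        boolean supports(ApiKey key, short version) {
            short[] range = supported.get(key);
            return range != null && version >= range[0] && version <= range[1];
        }

        public static void main(String[] args) {
            ApiVersionTable table = new ApiVersionTable();
            System.out.println(table.supports(ApiKey.FETCH, (short) 1));     // true
            System.out.println(table.supports(ApiKey.METADATA, (short) 1));  // false
        }
    }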

On Wed, Mar 2, 2016 at 3:16 PM, Jason Gustafson  wrote:

> I think Dana's suggestion to include the Kafka release version makes a lot
> of sense. I'm actually wondering why you would need the individual API
> versions if you have that? It sounds like keeping track of all the api
> version information would add a lot of complexity to clients since they'll
> have to try to handle different version permutations which are not actually
> possible in practice. Wouldn't it be simpler to know that you're talking to
> an 0.9 broker than that you're talking to a broker which supports version 2
> of the group coordinator request, version 1 of fetch request, etc? Also,
> the release version could be included in the broker information in the
> topic metadata request which would save the need for the additional round
> trip on every reconnect.
>
> -Jason
>
> On Tue, Mar 1, 2016 at 7:59 PM, Ashish Singh  wrote:
>
> > On Tue, Mar 1, 2016 at 6:30 PM, Gwen Shapira  wrote:
> >
> > > One more thing, the KIP actually had 3 parts:
> > > 1. The version protocol
> > > 2. New response on messages of wrong API key or wrong version
> > > 3. Protocol documentation
> > >
> > There is a WIP patch for adding protocol docs,
> > https://github.com/apache/kafka/pull/970 . By protocol documentation,
> you
> > mean updating this, right?
> >
> > >
> > > I understand that you are offering to only implement part 1?
> > > But the KIP discussion and vote should still cover all three parts,
> > > they will just be implemented in separate JIRA?
> > >
> > The patch for KAFKA-3307, https://github.com/apache/kafka/pull/986,
> covers
> > 1 and 2. KAFKA-3309 tracks documentation part. Yes, we should include all
> > the three points you mentioned while discussing or voting for KIP-35.
> >
> > >
> > > On Tue, Mar 1, 2016 at 5:25 PM, Ashish Singh 
> > wrote:
> > > > On Tue, Mar 1, 2016 at 5:11 PM, Gwen Shapira 
> > wrote:
> > > >
> > > >> I don't see a use for the name - clients should be able to translate
> > > >> ApiKey to name for any API they support, and I'm not sure why would
> a
> > > >> client need to log anything about APIs it does not support. Am I
> > > >> missing something?
> > > >>
> > > > Yea, it is a fair assumption that client would know about APIs it
> > > supports.
> > > > It could have been helpful for client users to see new APIs though,
> > > however
> > > > users can always refer to protocol doc of new version to find
> > > corresponding
> > > > names of the new APIs.
> > > >
> > > >>
> > > >> On a related note, Magnus is currently on vacation, but he should be
> > > >> back at the end of next week. I'd like to hold off on the vote until
> > > >> he gets back since his experience in implementing clients  and his
> > > >> opinions will be very valuable for this discussion.
> > > >>
> > > > That is great. It will be valuable to have his feedback. I will hold
> > off
> > > on
> > > > removing "api_name" and "api_deprecated_versions" or adding release
> > > version.
> > > >
> > > >>
> > > >> Gwen
> > > >>
> > > >> On Tue, Mar 1, 2016 at 4:24 PM, Ashish Singh 
> > > wrote:
> > > >> > Works with me. I will update PR to remove this.
> > > >> >
> > > >> > Also, "api_name" have been pointed out as a concern. However, it
> can
> > > be
> > > >> > handy for logging and similar purposes. Any take on that?
> > > >> >
> > > >> > On Tue, Mar 1, 2016 at 3:46 PM, Gwen Shapira 
> > > wrote:
> > > >> >
> > > >> >> Jay also mentioned:
> > > >> >> "Or, alternately, since deprecation has no functional impact and
> is
> > > >> >> just a message
> > > >> >> to developers, we could just leave it out of the protocol and
> just
> > > have
> > > >> it
> > > >> >> in release notes etc."
> > > >> >>
> > > >> >> I'm in favor of leaving it out of the protocol. I can't really
> see
> > a
> > > >> >> use-case.
> > > >> >>
> > > >> >> Gwen
> > > >> >>
> > > >> >> On Mon, Feb 29, 2016 at 3:55 PM, Ashish Singh <
> asi...@cloudera.com
> > >
> > > >> wrote:
> > > >> >>
> > > >> >> > I hope it is OK for me to make some progress here. I have made
> > the
> > > >> >> > following changes.
> > > >> >> >
> > > >> >> > 1. Updated KIP-35, to adopt Jay's suggestion on maintaining
> > > separate
> > > >> list
> > > >> >> > of deprecated versions, instead of using a version of -1.
> > > >> >> > 2. Added information on required permissions, Describe action
> on
> > > >> Cluster
> > > >> >> > resource, to be able to retrieve protocol versions from a auth
> > > enabled
> > > 

Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-03-02 Thread Jay Kreps
Hey guys,

I think we want the clients to depend not on the broker version but rather
on the protocol version. If you build against protocol version x it is
valid with any Kafka version that supports x even as Kafka releases
progress.

We discussed originally whether to version the protocol as a whole or the
individual apis. The decision to version the apis individually was based on
the fact that there are multiple categories of apis: internal, producer,
consumer, admin, etc, and these could be exposed in different combinations.
If you version the protocol as a whole it bumps up every time anything
changes. However clients often only use a small fraction of these apis. For
example, practically, the largest number of clients are producers, which
generally use just the metadata request and the produce request. A client built
against a particular version of these really doesn't care about the version of
any other apis and shouldn't need to be tested against anything that didn't
change these two things.

You could argue pros and cons of this, but changing the versioning scheme
is pretty hard now.

I'd argue that rather than introduce a new broker versioning scheme let's
just make correct use of what we have. The complaint was that we changed
the meaning of some of these things without changing the version--I'm not
sure adding another version field fixes this in general though you could
definitely invent heuristics that fix the specific cases we had. But this
pushes the complexity onto the client. I think maybe a better solution
would just be not to make those kinds of changes within a single protocol
version.

-Jay

On Wed, Mar 2, 2016 at 3:16 PM, Jason Gustafson  wrote:

> I think Dana's suggestion to include the Kafka release version makes a lot
> of sense. I'm actually wondering why you would need the individual API
> versions if you have that? It sounds like keeping track of all the api
> version information would add a lot of complexity to clients since they'll
> have to try to handle different version permutations which are not actually
> possible in practice. Wouldn't it be simpler to know that you're talking to
> an 0.9 broker than that you're talking to a broker which supports version 2
> of the group coordinator request, version 1 of fetch request, etc? Also,
> the release version could be included in the broker information in the
> topic metadata request which would save the need for the additional round
> trip on every reconnect.
>
> -Jason
>
> On Tue, Mar 1, 2016 at 7:59 PM, Ashish Singh  wrote:
>
> > On Tue, Mar 1, 2016 at 6:30 PM, Gwen Shapira  wrote:
> >
> > > One more thing, the KIP actually had 3 parts:
> > > 1. The version protocol
> > > 2. New response on messages of wrong API key or wrong version
> > > 3. Protocol documentation
> > >
> > There is a WIP patch for adding protocol docs,
> > https://github.com/apache/kafka/pull/970 . By protocol documentation,
> you
> > mean updating this, right?
> >
> > >
> > > I understand that you are offering to only implement part 1?
> > > But the KIP discussion and vote should still cover all three parts,
> > > they will just be implemented in separate JIRA?
> > >
> > The patch for KAFKA-3307, https://github.com/apache/kafka/pull/986,
> covers
> > 1 and 2. KAFKA-3309 tracks documentation part. Yes, we should include all
> > the three points you mentioned while discussing or voting for KIP-35.
> >
> > >
> > > On Tue, Mar 1, 2016 at 5:25 PM, Ashish Singh 
> > wrote:
> > > > On Tue, Mar 1, 2016 at 5:11 PM, Gwen Shapira 
> > wrote:
> > > >
> > > >> I don't see a use for the name - clients should be able to translate
> > > >> ApiKey to name for any API they support, and I'm not sure why would
> a
> > > >> client need to log anything about APIs it does not support. Am I
> > > >> missing something?
> > > >>
> > > > Yea, it is a fair assumption that client would know about APIs it
> > > supports.
> > > > It could have been helpful for client users to see new APIs though,
> > > however
> > > > users can always refer to protocol doc of new version to find
> > > corresponding
> > > > names of the new APIs.
> > > >
> > > >>
> > > >> On a related note, Magnus is currently on vacation, but he should be
> > > >> back at the end of next week. I'd like to hold off on the vote until
> > > >> he gets back since his experience in implementing clients  and his
> > > >> opinions will be very valuable for this discussion.
> > > >>
> > > > That is great. It will be valuable to have his feedback. I will hold
> > off
> > > on
> > > > removing "api_name" and "api_deprecated_versions" or adding release
> > > version.
> > > >
> > > >>
> > > >> Gwen
> > > >>
> > > >> On Tue, Mar 1, 2016 at 4:24 PM, Ashish Singh 
> > > wrote:
> > > >> > Works with me. I will update PR to remove this.
> > > >> >
> > > >> > Also, "api_name" have been pointed out as a concern. However, it
> can
> > > be
> > > >> > handy for logging and similar purposes. Any take on that?
> > > >> >
> > > >> > On Tue, Mar 1, 2016 

Build failed in Jenkins: kafka-trunk-jdk8 #409

2016-03-02 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Improve JavaDoc for some public classes.

--
[...truncated 1478 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testValid

[jira] [Commented] (KAFKA-3236) Honor Producer Configuration "block.on.buffer.full"

2016-03-02 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176875#comment-15176875
 ] 

Jiangjie Qin commented on KAFKA-3236:
-

[~tgraves] Got it. It is true that achieving purely non-blocking behavior is 
more involved due to the first metadata fetch. It sounds reasonable to keep 
block.on.buffer.full. That said, I'm not sure keeping block.on.buffer.full 
completely solves the problem in general.

If max.block.ms > 0 and block.on.buffer.full=false, users might still be blocked 
on send(). To have guaranteed non-blocking behavior, there are more 
requirements, such as users having to send to a static list of topics, and those 
topics cannot be deleted from the server.
If max.block.ms = 0, regardless of the block.on.buffer.full setting, users will be 
guaranteed a non-blocking send(), but the metadata prefetch needs additional 
work.

So either way, there is some additional work or there are additional requirements. 
I am not sure which way is better. Maybe having a timeout argument for partitionsFor() 
would solve the problem, but it seems weird to have max.block.ms and the timeout 
argument at the same time.
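
For reference, a minimal sketch of the configuration the non-blocking use case is 
aiming for (broker address, topic name and serializers are placeholders). As 
discussed, today max.block.ms still applies to buffer allocation, so this alone 
does not yet guarantee a non-blocking send(); that decoupling is what this ticket 
proposes.

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NonBlockingProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("block.on.buffer.full", "false"); // intended: fail fast when the buffer is full
        props.put("max.block.ms", "5000");          // bound the initial metadata fetch

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        try {
            producer.send(new ProducerRecord<String, String>("my-topic", "key", "value"));
        } catch (Exception e) {
            // With the decoupling proposed in this ticket, a full buffer would
            // surface here immediately instead of blocking for max.block.ms.
        } finally {
            producer.close();
        }
    }
}
{code}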

> Honor Producer Configuration "block.on.buffer.full"
> ---
>
> Key: KAFKA-3236
> URL: https://issues.apache.org/jira/browse/KAFKA-3236
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>
> In Kafka-0.9, "max.block.ms" is used to control how long the following 
> methods will block.
> KafkaProducer.send() when
>* Buffer is full
>* Metadata is unavailable
> KafkaProducer.partitionsFor() when
>* Metadata is unavailable
> However when "block.on.buffer.full" is set to false, "max.block.ms" is in 
> effect whenever a buffer is requested/allocated from the Producer BufferPool. 
> Instead it should throw a BufferExhaustedException without waiting for 
> "max.block.ms"
> This is particularly useful if a producer application does not wish to block 
> at all on KafkaProducer.send() . We avoid waiting on KafkaProducer.send() 
> when metadata is unavailable by invoking send() only if the producer instance 
> has fetched the metadata for the topic in a different thread using the same 
> producer instance. However "max.block.ms" is still required to specify a 
> timeout for bootstrapping the metadata fetch.
> We should resolve this limitation by decoupling "max.block.ms" and 
> "block.on.buffer.full".
>* "max.block.ms" will be used exclusively for fetching metadata when
> "block.on.buffer.full" = false (in pure non-blocking mode )
>* "max.block.ms" will be applicable to both fetching metadata as well as 
> buffer allocation when "block.on.buffer.full" = true



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-trunk-jdk7 #1081

2016-03-02 Thread Apache Jenkins Server
See 



[GitHub] kafka pull request: MINOR: Improve JavaDoc for some public classes...

2016-03-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/999




[GitHub] kafka pull request: MINOR: Improve JavaDoc for some public classes...

2016-03-02 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/999

MINOR: Improve JavaDoc for some public classes.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KJavaDoc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/999.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #999


commit 1335f72afcbd962c303208031b64e6caae1833d9
Author: Guozhang Wang 
Date:   2016-02-03T00:56:48Z

java docs

commit 53e02796da61535fb83097c90e659569309b56c7
Author: Guozhang Wang 
Date:   2016-02-03T18:53:30Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
KJavaDoc

commit 11cf7e18cc348516451a59002b72d99080f16f33
Author: Guozhang Wang 
Date:   2016-02-03T19:16:49Z

address github comments

commit afa5b839d5344933a7ff8841c1a3f762489def18
Author: Guozhang Wang 
Date:   2016-02-03T19:27:19Z

address github comments

commit f4daa654d967c749b33796f025fd5c3fc3188c82
Author: Guozhang Wang 
Date:   2016-03-02T23:04:19Z

resolve conflicts merging from trunk

commit 00aa405cf7301520c731e46821b98226c8991f87
Author: Guozhang Wang 
Date:   2016-03-03T00:04:57Z

one pass over java docs






Build failed in Jenkins: kafka-trunk-jdk8 #408

2016-03-02 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3311; Prepare internal source topics before calling partition

--
[...truncated 5658 lines...]

org.apache.kafka.connect.runtime.WorkerTaskTest > stopBeforeStarting PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > standardStartup PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > cancelBeforeStopping PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectors PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorConfig PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorConfigConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorTaskConfigs PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigs PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotSynced PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorExists PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnector PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testPollsInBackground PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommit PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskSuccessAndFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitConsumerFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitTimeout 
PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testAssignmentPauseResume PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testRewind PASSED

org.apache.kafka.connect.runtime.WorkerTest > testStartAndStopConnector PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddConnectorByAlias PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddConnectorByShortAlias 
PASSED

org.apache.kafka.connect.runtime.WorkerTest > testStopInvalidConnector PASSED

org.apache.kafka.connect.runtime.WorkerTest > testReconfigureConnectorTasks 
PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddRemoveTask PASSED

org.apache.kafka.connect.runtime.WorkerTest > testStopInvalidTask PASSED

org.apache.kafka.connect.runtime.WorkerTest > testCleanupTasksOnStop PASSED

org.apache.kafka.connect.runtime.AbstractHerderTest > connectorStatus PASSED

org.apache.kafka.connect.runtime.AbstractHerderTest > taskStatus PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testFailureInPoll PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommitFailure PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > 
testSendRecordsConvertsData PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testSendRecordsRetries 
PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testSlowTaskStart PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testPollsInBackground 
PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommit FAILED
java.lang.AssertionError: 
  Expectation failure on verify:
Listener.onStartup(job-0): expected: 1, actual: 1
Listener.onShutdown(job-0): expected: 1, actual: 0
at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
at 
org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1

[jira] [Commented] (KAFKA-3236) Honor Producer Configuration "block.on.buffer.full"

2016-03-02 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176728#comment-15176728
 ] 

Thomas Graves commented on KAFKA-3236:
--

Yeah it would be possible to do it outside in our code, it's just not as nice.

From my understanding the metadata gather thread is OK to block. It needs the 
metadata to do anything with that topic. Note this is one of our users I'm 
representing here, so I'm just going by what they are saying. I think it's OK 
to block here because the send() thread can still go ahead and send to the 
topics it already has metadata on.

Note also that send() only blocks waiting on the metadata if it's not already 
cached, in waitOnMetadata:

while (metadata.fetch().partitionsForTopic(topic) == null) {

so with our metadata thread making sure it is there first, I don't think send() 
will block waiting on metadata at all. Then I think the metadata gets updated 
when needed by internal Kafka threads and doesn't block the sender.

We also expect the buffer-full case to be much more frequent given the metadata 
should be cached anyway.
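
A small sketch of that warm-up pattern (broker address and topic names are made 
up): partitionsFor() is called on a separate thread so that later send() calls 
find the metadata already cached.

{code}
import java.util.Arrays;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class MetadataWarmUp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        final KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
        final List<String> topics = Arrays.asList("topicA", "topicB");

        // Warm the metadata cache off the send path; partitionsFor() blocks
        // (up to max.block.ms per topic) until the metadata is available.
        new Thread(new Runnable() {
            @Override
            public void run() {
                for (String topic : topics)
                    producer.partitionsFor(topic);
            }
        }).start();

        // Once the warm-up has run, send() for these topics no longer waits in
        // waitOnMetadata(), so the remaining blocking case is a full buffer.
    }
}
{code}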

> Honor Producer Configuration "block.on.buffer.full"
> ---
>
> Key: KAFKA-3236
> URL: https://issues.apache.org/jira/browse/KAFKA-3236
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>
> In Kafka-0.9, "max.block.ms" is used to control how long the following 
> methods will block.
> KafkaProducer.send() when
>* Buffer is full
>* Metadata is unavailable
> KafkaProducer.partitionsFor() when
>* Metadata is unavailable
> However when "block.on.buffer.full" is set to false, "max.block.ms" is in 
> effect whenever a buffer is requested/allocated from the Producer BufferPool. 
> Instead it should throw a BufferExhaustedException without waiting for 
> "max.block.ms"
> This is particularly useful if a producer application does not wish to block 
> at all on KafkaProducer.send() . We avoid waiting on KafkaProducer.send() 
> when metadata is unavailable by invoking send() only if the producer instance 
> has fetched the metadata for the topic in a different thread using the same 
> producer instance. However "max.block.ms" is still required to specify a 
> timeout for bootstrapping the metadata fetch.
> We should resolve this limitation by decoupling "max.block.ms" and 
> "block.on.buffer.full".
>* "max.block.ms" will be used exclusively for fetching metadata when
> "block.on.buffer.full" = false (in pure non-blocking mode )
>* "max.block.ms" will be applicable to both fetching metadata as well as 
> buffer allocation when "block.on.buffer.full" = true



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-03-02 Thread Jason Gustafson
I think Dana's suggestion to include the Kafka release version makes a lot
of sense. I'm actually wondering why you would need the individual API
versions if you have that? It sounds like keeping track of all the api
version information would add a lot of complexity to clients since they'll
have to try to handle different version permutations which are not actually
possible in practice. Wouldn't it be simpler to know that you're talking to
an 0.9 broker than that you're talking to a broker which supports version 2
of the group coordinator request, version 1 of fetch request, etc? Also,
the release version could be included in the broker information in the
topic metadata request which would save the need for the additional round
trip on every reconnect.

-Jason

On Tue, Mar 1, 2016 at 7:59 PM, Ashish Singh  wrote:

> On Tue, Mar 1, 2016 at 6:30 PM, Gwen Shapira  wrote:
>
> > One more thing, the KIP actually had 3 parts:
> > 1. The version protocol
> > 2. New response on messages of wrong API key or wrong version
> > 3. Protocol documentation
> >
> There is a WIP patch for adding protocol docs,
> https://github.com/apache/kafka/pull/970 . By protocol documentation, you
> mean updating this, right?
>
> >
> > I understand that you are offering to only implement part 1?
> > But the KIP discussion and vote should still cover all three parts,
> > they will just be implemented in separate JIRA?
> >
> The patch for KAFKA-3307, https://github.com/apache/kafka/pull/986, covers
> 1 and 2. KAFKA-3309 tracks documentation part. Yes, we should include all
> the three points you mentioned while discussing or voting for KIP-35.
>
> >
> > On Tue, Mar 1, 2016 at 5:25 PM, Ashish Singh 
> wrote:
> > > On Tue, Mar 1, 2016 at 5:11 PM, Gwen Shapira 
> wrote:
> > >
> > >> I don't see a use for the name - clients should be able to translate
> > >> ApiKey to name for any API they support, and I'm not sure why would a
> > >> client need to log anything about APIs it does not support. Am I
> > >> missing something?
> > >>
> > > Yea, it is a fair assumption that client would know about APIs it
> > supports.
> > > It could have been helpful for client users to see new APIs though,
> > however
> > > users can always refer to protocol doc of new version to find
> > corresponding
> > > names of the new APIs.
> > >
> > >>
> > >> On a related note, Magnus is currently on vacation, but he should be
> > >> back at the end of next week. I'd like to hold off on the vote until
> > >> he gets back since his experience in implementing clients  and his
> > >> opinions will be very valuable for this discussion.
> > >>
> > > That is great. It will be valuable to have his feedback. I will hold
> off
> > on
> > > removing "api_name" and "api_deprecated_versions" or adding release
> > version.
> > >
> > >>
> > >> Gwen
> > >>
> > >> On Tue, Mar 1, 2016 at 4:24 PM, Ashish Singh 
> > wrote:
> > >> > Works with me. I will update PR to remove this.
> > >> >
> > >> > Also, "api_name" have been pointed out as a concern. However, it can
> > be
> > >> > handy for logging and similar purposes. Any take on that?
> > >> >
> > >> > On Tue, Mar 1, 2016 at 3:46 PM, Gwen Shapira 
> > wrote:
> > >> >
> > >> >> Jay also mentioned:
> > >> >> "Or, alternately, since deprecation has no functional impact and is
> > >> >> just a message
> > >> >> to developers, we could just leave it out of the protocol and just
> > have
> > >> it
> > >> >> in release notes etc."
> > >> >>
> > >> >> I'm in favor of leaving it out of the protocol. I can't really see
> a
> > >> >> use-case.
> > >> >>
> > >> >> Gwen
> > >> >>
> > >> >> On Mon, Feb 29, 2016 at 3:55 PM, Ashish Singh  >
> > >> wrote:
> > >> >>
> > >> >> > I hope it is OK for me to make some progress here. I have made
> the
> > >> >> > following changes.
> > >> >> >
> > >> >> > 1. Updated KIP-35, to adopt Jay's suggestion on maintaining
> > separate
> > >> list
> > >> >> > of deprecated versions, instead of using a version of -1.
> > >> >> > 2. Added information on required permissions, Describe action on
> > >> Cluster
> > >> >> > resource, to be able to retrieve protocol versions from a auth
> > enabled
> > >> >> > Kafka cluster.
> > >> >> >
> > >> >> > Created https://issues.apache.org/jira/browse/KAFKA-3304.
> Primary
> > >> patch
> > >> >> is
> > >> >> > available to review, https://github.com/apache/kafka/pull/986
> > >> >> >
> > >> >> > On Thu, Feb 25, 2016 at 1:27 PM, Ashish Singh <
> asi...@cloudera.com
> > >
> > >> >> wrote:
> > >> >> >
> > >> >> > > Kafka clients in Hadoop ecosystem, Flume, Spark, etc, have
> found
> > it
> > >> >> > really
> > >> >> > > difficult to cope up with Kafka releases as they want to
> support
> > >> users
> > >> >> on
> > >> >> > > different Kafka versions. Capability to retrieve protocol
> version
> > >> will
> > >> >> > go a
> > >> >> > > long way to ease out those pain points. I will be happy to help
> > out
> > >> >> with
> > >> >> > > the work on this KIP. @Magnus, thanks for driving this, 

[jira] [Commented] (KAFKA-1037) switching to gzip compression no error message for missing snappy jar on classpath

2016-03-02 Thread Andrew Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176683#comment-15176683
 ] 

Andrew Stein commented on KAFKA-1037:
-

This is a severe issue AFAIK. It took three of us three days to track down why 
our kafka messages were disappearing. At a minimum, if one cannot revert easily 
to "gzip" or "none" for compression, this should be logged as an ERROR.

> switching to gzip compression no error message for missing snappy jar on 
> classpath
> --
>
> Key: KAFKA-1037
> URL: https://issues.apache.org/jira/browse/KAFKA-1037
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: noob
>
> The error seems to be swallowed when log4j.properties is not set, but it shows 
> up when log4j.properties is configured and the level is set to DEBUG.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3236) Honor Producer Configuration "block.on.buffer.full"

2016-03-02 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176626#comment-15176626
 ] 

Jiangjie Qin commented on KAFKA-3236:
-

[~tgraves] I remember there were some discussions about the requirement you 
describe, i.e. you want to block to get the metadata initially but don't want to 
block afterwards. Unfortunately I forget what the exact conclusion of that 
discussion was. For your use case, would the following solution work?
1. Set {{max.block.ms = 0}}.
2. Let the metadata discovery thread call partitionsFor() and catch the timeout 
exception in a while loop until it gets the metadata.
3. Let the actual producing thread start producing.

Given that your metadata discovery thread has to call partitionsFor() on different 
producers, I assume you probably don't want it to be blocked on any one of the 
producers either, right?

I'm also wondering: if blocking on metadata refreshes during sending is rare and 
you are OK with that occasional short blocking, do you expect buffer-full to be 
frequent enough that you would not be OK with it?
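
A rough sketch of those three steps (topic name and thread handling are just for
illustration; TimeoutException is org.apache.kafka.common.errors.TimeoutException):

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.errors.TimeoutException;

public class NonBlockingStartup {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("max.block.ms", "0");                 // step 1: send() never blocks
        final KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);

        // Step 2: metadata discovery thread retries partitionsFor() until it succeeds.
        Thread metadataThread = new Thread(() -> {
            while (true) {
                try {
                    producer.partitionsFor("my-topic");
                    return;                             // metadata is now cached
                } catch (TimeoutException e) {
                    // not available yet; optionally back off briefly and retry
                }
            }
        });
        metadataThread.start();
        metadataThread.join();

        // Step 3: the producing thread can now call send() without blocking on metadata.
    }
}
{code}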

> Honor Producer Configuration "block.on.buffer.full"
> ---
>
> Key: KAFKA-3236
> URL: https://issues.apache.org/jira/browse/KAFKA-3236
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>
> In Kafka-0.9, "max.block.ms" is used to control how long the following 
> methods will block.
> KafkaProducer.send() when
>* Buffer is full
>* Metadata is unavailable
> KafkaProducer.partitionsFor() when
>* Metadata is unavailable
> However when "block.on.buffer.full" is set to false, "max.block.ms" is in 
> effect whenever a buffer is requested/allocated from the Producer BufferPool. 
> Instead it should throw a BufferExhaustedException without waiting for 
> "max.block.ms"
> This is particularly useful if a producer application does not wish to block 
> at all on KafkaProducer.send() . We avoid waiting on KafkaProducer.send() 
> when metadata is unavailable by invoking send() only if the producer instance 
> has fetched the metadata for the topic in a different thread using the same 
> producer instance. However "max.block.ms" is still required to specify a 
> timeout for bootstrapping the metadata fetch.
> We should resolve this limitation by decoupling "max.block.ms" and 
> "block.on.buffer.full".
>* "max.block.ms" will be used exclusively for fetching metadata when
> "block.on.buffer.full" = false (in pure non-blocking mode )
>* "max.block.ms" will be applicable to both fetching metadata as well as 
> buffer allocation when "block.on.buffer.full = true



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3290: fix transient test failures in Wor...

2016-03-02 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/998

KAFKA-3290: fix transient test failures in WorkerSourceTaskTest



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3290

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/998.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #998


commit ea4487e1c657313afb79fa80377b751e73f90869
Author: Jason Gustafson 
Date:   2016-03-02T22:15:13Z

KAFKA-3290: fix transient test failures in WorkerSourceTaskTest




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3290) WorkerSourceTask testCommit transient failure

2016-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176591#comment-15176591
 ] 

ASF GitHub Bot commented on KAFKA-3290:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/998

KAFKA-3290: fix transient test failures in WorkerSourceTaskTest



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3290

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/998.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #998


commit ea4487e1c657313afb79fa80377b751e73f90869
Author: Jason Gustafson 
Date:   2016-03-02T22:15:13Z

KAFKA-3290: fix transient test failures in WorkerSourceTaskTest




> WorkerSourceTask testCommit transient failure
> -
>
> Key: KAFKA-3290
> URL: https://issues.apache.org/jira/browse/KAFKA-3290
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> From recent failed build:
> {code}
> org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommit FAILED
> java.lang.AssertionError:
>   Expectation failure on verify:
> Listener.onStartup(job-0): expected: 1, actual: 1
> Listener.onShutdown(job-0): expected: 1, actual: 1
> at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
> at 
> org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
> at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
> at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1405)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTaskTest.testCommit(WorkerSourceTaskTest.java:221)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3236) Honor Producer Configuration "block.on.buffer.full"

2016-03-02 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176573#comment-15176573
 ] 

Thomas Graves commented on KAFKA-3236:
--

[~becket_qin]  So is your concern that this change doesn't make the producer fully 
non-blocking, since it could potentially still block on the metadata query?

You are correct that with block.on.buffer.full = false and max.block.ms > 0 it is 
not purely non-blocking, and that is actually what we want. We want to be able to 
control the metadata fetch block time separately from what happens when the buffer 
is full. I expect most users to leave block.on.buffer.full=true.

We have a use case where one process sends to multiple different Kafka clusters. 
It has one thread that gets the metadata for all the topics from all the clusters 
and another thread that does the send() on the producers. The second thread sends 
to multiple Kafka clusters, so we don't want it to block if at all possible. It 
uses a single producer instance per topic.
Since it is a single instance we can't set max.block.ms to 0, because then the 
thread getting the metadata wouldn't block, which we want it to. Note the thread 
getting metadata is calling partitionsFor(). With this patch and 
block.on.buffer.full = false and max.block.ms > 0 it is still possible that 
send() blocks, but I consider that a rare/special case. We don't want it to 
block when the buffer is full though.

An alternative would be to add another partitionsFor() interface that takes in 
the maxBlockTimeMs rather than using it from the config. Then the thread doing 
the send could set max.block.ms to 0 and all would be well.

Thoughts or other ideas?
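
For concreteness, that alternative would look roughly like the following. This is
hypothetical only; no such overload exists in the current Producer API:

{code}
import java.util.List;
import org.apache.kafka.common.PartitionInfo;

public interface BoundedMetadataProducer<K, V> {

    /**
     * Like Producer#partitionsFor(String), but waits at most maxBlockTimeMs for
     * the metadata before throwing a timeout exception, ignoring max.block.ms.
     */
    List<PartitionInfo> partitionsFor(String topic, long maxBlockTimeMs);
}
{code}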



> Honor Producer Configuration "block.on.buffer.full"
> ---
>
> Key: KAFKA-3236
> URL: https://issues.apache.org/jira/browse/KAFKA-3236
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>
> In Kafka-0.9, "max.block.ms" is used to control how long the following 
> methods will block.
> KafkaProducer.send() when
>* Buffer is full
>* Metadata is unavailable
> KafkaProducer.partitionsFor() when
>* Metadata is unavailable
> However when "block.on.buffer.full" is set to false, "max.block.ms" is in 
> effect whenever a buffer is requested/allocated from the Producer BufferPool. 
> Instead it should throw a BufferExhaustedException without waiting for 
> "max.block.ms"
> This is particularly useful if a producer application does not wish to block 
> at all on KafkaProducer.send() . We avoid waiting on KafkaProducer.send() 
> when metadata is unavailable by invoking send() only if the producer instance 
> has fetched the metadata for the topic in a different thread using the same 
> producer instance. However "max.block.ms" is still required to specify a 
> timeout for bootstrapping the metadata fetch.
> We should resolve this limitation by decoupling "max.block.ms" and 
> "block.on.buffer.full".
>* "max.block.ms" will be used exclusively for fetching metadata when
> "block.on.buffer.full" = false (in pure non-blocking mode )
>* "max.block.ms" will be applicable to both fetching metadata as well as 
> buffer allocation when "block.on.buffer.full = true



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3229) Root statestore is not registered with ProcessorStateManager, inner state store is registered instead

2016-03-02 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176570#comment-15176570
 ] 

Guozhang Wang commented on KAFKA-3229:
--

[~tom_dearman] Sorry, my previous message was incorrect: I indeed meant {code} 
StateStore.init() {code}, not {code} ProcessorContext.register() {code}. I was 
not worrying about "users" but "developers" who, say, want to add a new state 
store implementation, and whether it is more complex for them to reason about 
which store to use when calling init(). Maybe I was over-thinking the 
programmability aspect, but I would like to bring it up so people can ponder 
whether it is worthwhile :)

> Root statestore is not registered with ProcessorStateManager, inner state 
> store is registered instead
> -
>
> Key: KAFKA-3229
> URL: https://issues.apache.org/jira/browse/KAFKA-3229
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Affects Versions: 0.10.0.0
> Environment: MacOS El Capitan
>Reporter: Tom Dearman
>Assignee: Tom Dearman
> Fix For: 0.10.0.0
>
>
> When the hierarchy of nested StateStores is created, init is called on the 
> root store, but parent StateStores such as MeteredKeyValueStore just call 
> the contained StateStore, until a store such as MemoryStore calls 
> ProcessorContext.register. Because it passes 'this' to the method, only that 
> inner state store (MemoryStore in this case) is registered with 
> ProcessorStateManager. As state is added to the store, none of the parent 
> stores' code is invoked: no metering, and no StoreChangeLogger to put the 
> state on the Kafka topic.
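
The registration problem described above boils down to a delegation pattern like
the one below. These are simplified stand-ins, not the real Kafka Streams
interfaces:

{code}
interface SimpleStateStore {
    void init(SimpleContext context);
}

interface SimpleContext {
    void register(SimpleStateStore store);  // whatever is passed here gets managed
}

class InnerMemoryStore implements SimpleStateStore {
    @Override
    public void init(SimpleContext context) {
        // Bug pattern: the innermost store registers itself ("this"), so the
        // wrapping store below is never seen by the state manager.
        context.register(this);
    }
}

class MeteredStore implements SimpleStateStore {
    private final SimpleStateStore inner = new InnerMemoryStore();

    @Override
    public void init(SimpleContext context) {
        // Delegates straight to the inner store; metering and change-logging in
        // this wrapper are bypassed once the manager talks only to the inner store.
        inner.init(context);
    }
}
{code}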



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3261) Consolidate class kafka.cluster.BrokerEndPoint and kafka.cluster.EndPoint

2016-03-02 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176557#comment-15176557
 ] 

Guozhang Wang commented on KAFKA-3261:
--

[~ijuma][~chenzhu] Since both of these classes are internal, could we just make 
a new class (or just augment one of these two) that contains all of these 
fields and is versioned based on the protocol id?

> Consolidate class kafka.cluster.BrokerEndPoint and kafka.cluster.EndPoint
> -
>
> Key: KAFKA-3261
> URL: https://issues.apache.org/jira/browse/KAFKA-3261
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: chen zhu
>
> These two classes are serving similar purposes and can be consolidated. Also 
> as [~sasakitoa] suggested we can remove their "uriParseExp" variables but use 
> (a possibly modified)
> {code}
> private static final Pattern HOST_PORT_PATTERN = 
> Pattern.compile(".*?\\[?([0-9a-zA-Z\\-.:]*)\\]?:([0-9]+)");
> {code}
> in org.apache.kafka.common.utils.Utils instead.
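
For reference, a small sketch of how the shared pattern could be used to split
"host:port" strings, including bracketed IPv6 literals (helper and class names are
illustrative):

{code}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HostPort {
    private static final Pattern HOST_PORT_PATTERN =
            Pattern.compile(".*?\\[?([0-9a-zA-Z\\-.:]*)\\]?:([0-9]+)");

    /** Returns {host, port} or null if the input does not match. */
    public static String[] parse(String connectionString) {
        Matcher matcher = HOST_PORT_PATTERN.matcher(connectionString);
        if (!matcher.matches())
            return null;
        return new String[]{matcher.group(1), matcher.group(2)};
    }

    public static void main(String[] args) {
        String[] hp = parse("[::1]:9092");
        System.out.println(hp[0] + " -> " + hp[1]);  // prints "::1 -> 9092"
    }
}
{code}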



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3311) Move co-partition checking to PartitionAssignor and auto-create internal topics

2016-03-02 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3311.
--
   Resolution: Fixed
Fix Version/s: 0.10.0.0

Issue resolved by pull request 990
[https://github.com/apache/kafka/pull/990]

> Move co-partition checking to PartitionAssignor and auto-create internal 
> topics
> ---
>
> Key: KAFKA-3311
> URL: https://issues.apache.org/jira/browse/KAFKA-3311
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.10.0.0
>
>
> Right now the internal topics management (i.e. re-partition topics for now) 
> is buggy in that they are not auto-created by the assignor. We need to fix 
> this by:
> 1) moving co-partition info into the assignor.
> 2) let assignor create the internal topics with the right number of 
> partitions as: if co-partitioned, equal to the other partitions; otherwise if 
> there are other source topics in the sub-topology, equal to the maximum of 
> other partitions; otherwise equal to the number of tasks writing to this 
> topic.
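
The sizing rule in 2) can be summarized with a small helper like this (names are
illustrative; this is not the actual assignor code):

{code}
import java.util.Collection;
import java.util.Collections;

public class InternalTopicSizing {
    /**
     * @param coPartitionedWith     partition count of the topic this internal topic
     *                              must be co-partitioned with, or -1 if none
     * @param otherSourcePartitions partition counts of other source topics in the
     *                              sub-topology (may be empty)
     * @param numWritingTasks       number of tasks writing to this internal topic
     */
    public static int numPartitions(int coPartitionedWith,
                                    Collection<Integer> otherSourcePartitions,
                                    int numWritingTasks) {
        if (coPartitionedWith > 0)
            return coPartitionedWith;
        if (!otherSourcePartitions.isEmpty())
            return Collections.max(otherSourcePartitions);
        return numWritingTasks;
    }
}
{code}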



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3311) Move co-partition checking to PartitionAssignor and auto-create internal topics

2016-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176537#comment-15176537
 ] 

ASF GitHub Bot commented on KAFKA-3311:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/990


> Move co-partition checking to PartitionAssignor and auto-create internal 
> topics
> ---
>
> Key: KAFKA-3311
> URL: https://issues.apache.org/jira/browse/KAFKA-3311
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.10.0.0
>
>
> Right now the internal topics management (i.e. re-partition topics for now) 
> is buggy in that they are not auto-created by the assignor. We need to fix 
> this by:
> 1) moving co-partition info into the assignor.
> 2) let assignor create the internal topics with the right number of 
> partitions as: if co-partitioned, equal to the other partitions; otherwise if 
> there are other source topics in the sub-topology, equal to the maximum of 
> other partitions; otherwise equal to the number of tasks writing to this 
> topic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3311: Prepare internal source topics bef...

2016-03-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/990


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-3321) KafkaConfigStorage should never encounter commit without config data

2016-03-02 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-3321:
---

 Summary: KafkaConfigStorage should never encounter commit without 
config data
 Key: KAFKA-3321
 URL: https://issues.apache.org/jira/browse/KAFKA-3321
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira


After fixing  KAFKA-2994, KafkaConfigStorage should no longer blow up with 
surprise NPEs, but we did not figure out what causes it to see commit messages 
before seeing the configurations that are committed. 

This JIRA is to track down the root cause of the issue.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1080

2016-03-02 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3196; Added checksum and size to RecordMetadata and 
ConsumerRecord

--
[...truncated 1354 lines...]

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testNoConsumeAcl PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testProduceConsume PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testNoProduceAcl PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithCreateTime 
PASSED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithLogAppendTime 
PASSED

kafka.api.SslProducerSendTest > testClose PASSED

kafka.api.SslProducerSendTest > testFlush FAILED
java.lang.AssertionError: No request is complete.

kafka.api.SslProducerSendTest > testSendToPartition PASSED

kafka.api.SslProducerSendTest > testSendOffset PASSED

kafka.api.SslProducerSendTest > testAutoCreateTopic PASSED

kafka.api.SslProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithCreateTime PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread PASSED

kafka.api.SslProducerSendTest > testWrongSerializer PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithLogApendTime 
PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsume PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.QuotasTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.QuotasTest > testThrottledProducerConsumer PASSED

kafka.api.PlaintextProducerSendTest > testSerializerConstructors PASSED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithCreateTime PASSED

kafka.api.PlaintextProducerSendTest > 
testSendCompressedMessageWithLogAppendTime PASSED

kafka.api.PlaintextProducerSendTest > testClose PASSED

kafka.api.PlaintextProducerSendTest > testFlush PASSED

kafka.api.PlaintextProducerSendTest > testSendToPartition PASSED

kafka.api.PlaintextProducerSendTest > testSendOffset PASSED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic PASSED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromSenderThread 
PASSED

kafka.api.PlaintextProducerSendTest > testWrongSerializer PASSED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithLogApendTime PASSED

kafka.api.AdminClientTest > testDescribeGroup PASSED

kafka.api.AdminClientTest > testDescribeConsumerGroup PASSED

kafka.api.AdminClientTest > testListGroups PASSED

kafka.api.AdminClientTest > testDescribeConsumerGroupForNonExistentGroup PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate PASSED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnStopPolling 
PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForInvalidTopic PASSED

kafka.api.PlaintextConsumerTest > testSeek PASSED

kafka.api.PlaintextConsumerTest > testPositionAndCommit PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose PASSED

kafka.api.PlaintextConsumerTest > testFetchRecordTooLarge PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose PASSED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testInterceptors PASSED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription PASSED

kafka.api.PlaintextConsumerTest > testGroupConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionsFor PASSED

kafka.api.PlaintextConsumerTest > testInterceptorsWithWrongKeyValue PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPartitionPauseAndResume PASSED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithLogAppendTime PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnCloseAfterWakeup PASSED

kafka.api.

[jira] [Updated] (KAFKA-3188) Add system test for KIP-31 and KIP-32 - Compatibility Test

2016-03-02 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3188:
---
Priority: Blocker  (was: Major)

> Add system test for KIP-31 and KIP-32 - Compatibility Test
> --
>
> Key: KAFKA-3188
> URL: https://issues.apache.org/jira/browse/KAFKA-3188
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Anna Povzner
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> The integration test should test the compatibility between 0.10.0 broker with 
> clients on older versions. The clients version should include 0.9.0 and 0.8.x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3197) Producer can send message out of order even when in flight request is set to 1.

2016-03-02 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3197:
---
Priority: Blocker  (was: Major)

> Producer can send message out of order even when in flight request is set to 
> 1.
> ---
>
> Key: KAFKA-3197
> URL: https://issues.apache.org/jira/browse/KAFKA-3197
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> The issue we saw is following:
> 1. Producer send message 0 to topic-partition-0 on broker A. The in-flight 
> request to broker A is 1.
> 2. The request is somehow lost
> 3. Producer refreshed its topic metadata and found leader of 
> topic-partition-0 migrated from broker A to broker B.
> 4. Because there is no in-flight request to broker B. All the subsequent 
> messages to topic-partition-0 in the record accumulator are sent to broker B.
> 5. Later on when the request in step (1) times out, message 0 will be retried 
> and sent to broker B. At this point, all the later messages has already been 
> sent, so we have re-order.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1215) Rack-Aware replica assignment option

2016-03-02 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1215:
---
Priority: Blocker  (was: Major)

> Rack-Aware replica assignment option
> 
>
> Key: KAFKA-1215
> URL: https://issues.apache.org/jira/browse/KAFKA-1215
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.8.0
>Reporter: Joris Van Remoortere
>Assignee: Allen Wang
>Priority: Blocker
> Fix For: 0.10.0.0
>
> Attachments: rack_aware_replica_assignment_v1.patch, 
> rack_aware_replica_assignment_v2.patch
>
>
> Adding a rack-id to kafka config. This rack-id can be used during replica 
> assignment by using the max-rack-replication argument in the admin scripts 
> (create topic, etc.). By default the original replication assignment 
> algorithm is used because max-rack-replication defaults to -1. 
> max-rack-replication > -1 is not honored if you are doing manual replica 
> assignment (preferred).
> If this looks good I can add some test cases specific to the rack-aware 
> assignment.
> I can also port this to trunk. We are currently running 0.8.0 in production 
> and need this, so I wrote the patch against that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3202) Add system test for KIP-31 and KIP-32 - Change message format version on the fly

2016-03-02 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3202:
---
Priority: Blocker  (was: Major)

> Add system test for KIP-31 and KIP-32 - Change message format version on the 
> fly
> 
>
> Key: KAFKA-3202
> URL: https://issues.apache.org/jira/browse/KAFKA-3202
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> The system test should cover the case that message format changes are made 
> when clients are producing/consuming. The message format change should not 
> cause client side issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3201) Add system test for KIP-31 and KIP-32 - Upgrade Test

2016-03-02 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3201:
---
Priority: Blocker  (was: Major)

> Add system test for KIP-31 and KIP-32 - Upgrade Test
> 
>
> Key: KAFKA-3201
> URL: https://issues.apache.org/jira/browse/KAFKA-3201
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Jiangjie Qin
>Assignee: Anna Povzner
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> This system test should test the procedure to upgrade a Kafka broker from 
> 0.8.x and 0.9.0 to 0.10.0
> The procedure is documented in KIP-32:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-32+-+Add+timestamps+to+Kafka+message



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3303) Pass partial record metadata to Interceptor onAcknowledgement in case of errors

2016-03-02 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3303:
---
Priority: Blocker  (was: Major)

> Pass partial record metadata to Interceptor onAcknowledgement in case of 
> errors
> ---
>
> Key: KAFKA-3303
> URL: https://issues.apache.org/jira/browse/KAFKA-3303
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.0.0
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Currently Interceptor.onAcknowledgement behaves similarly to Callback. If an 
> exception occurred and the exception is passed to onAcknowledgement, the 
> metadata param is set to null.
> However, it would be useful to pass topic, and partition if available to the 
> interceptor so that it knows which topic/partition got an error.
> This is part of KIP-42.
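
For context, an interceptor today has to cope with a null metadata argument on
errors, roughly like this (a sketch mirroring the shape of
ProducerInterceptor#onAcknowledgement, not the proposed change itself):

{code}
import org.apache.kafka.clients.producer.RecordMetadata;

public class LoggingAckHandler {
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        if (exception == null)
            return;
        if (metadata == null) {
            // Today: no way to tell which topic/partition failed.
            System.err.println("send failed, topic/partition unknown: " + exception);
        } else {
            System.err.println("send to " + metadata.topic() + "-"
                    + metadata.partition() + " failed: " + exception);
        }
    }
}
{code}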



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3316) Add Connect REST API to list available connector classes

2016-03-02 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3316:
-
Priority: Blocker  (was: Major)

> Add Connect REST API to list available connector classes
> 
>
> Key: KAFKA-3316
> URL: https://issues.apache.org/jira/browse/KAFKA-3316
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Each worker process's REST API should have an endpoint that can list 
> available connector classes. This can use the same Reflections code as we 
> used in KAFKA-2422 to find matching connector classes based on a short name. 
> This is useful both for debugging and for any systems that want to work with 
> different connect clusters and be able to tell which clusters support which 
> connectors.
> We may need a new top-level resource to support this. We have /connectors 
> already, but that refers to instantiated connectors that have been named. In 
> contrast, this resource would refer to the connector classes 
> (uninstantiated). We might be able to use the same resource to, e.g., lookup 
> config info in KAFKA-3315 (which occurs before connector instantiation).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2370) Add pause/unpause connector support

2016-03-02 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2370:
-
Priority: Blocker  (was: Major)

> Add pause/unpause connector support
> ---
>
> Key: KAFKA-2370
> URL: https://issues.apache.org/jira/browse/KAFKA-2370
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> It will sometimes be useful to pause/unpause connectors. For example, if you 
> know planned maintenance will occur on the source/destination system, it 
> would make sense to pause and then resume (but not delete and then restore), 
> a connector.
> This likely requires support in all Coordinator implementations 
> (standalone/distributed) to trigger the events.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3315) Add Connect API to expose connector configuration info

2016-03-02 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3315:
-
Priority: Blocker  (was: Major)

> Add Connect API to expose connector configuration info
> --
>
> Key: KAFKA-3315
> URL: https://issues.apache.org/jira/browse/KAFKA-3315
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Connectors should be able to provide information about how they can be 
> configured. It will be nice to expose this programmatically as part of the 
> standard interface for connectors. This can also include support for more 
> than just a static set of config options. For example, a validation REST API 
> could provide intermediate feedback based on a partial configuration and 
> include recommendations/suggestions for fields based on the settings 
> available so far.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #407

2016-03-02 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3196; Added checksum and size to RecordMetadata and 
ConsumerRecord

--
[...truncated 2890 lines...]

kafka.api.SaslSslEndToEndAuthorizationTest > testNoConsumeAcl PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testProduceConsume PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testNoProduceAcl PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithCreateTime 
PASSED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithLogAppendTime 
PASSED

kafka.api.SslProducerSendTest > testClose PASSED

kafka.api.SslProducerSendTest > testFlush PASSED

kafka.api.SslProducerSendTest > testSendToPartition PASSED

kafka.api.SslProducerSendTest > testSendOffset PASSED

kafka.api.SslProducerSendTest > testAutoCreateTopic PASSED

kafka.api.SslProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithCreateTime PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread PASSED

kafka.api.SslProducerSendTest > testWrongSerializer PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithLogApendTime 
PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsume PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.QuotasTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.QuotasTest > testThrottledProducerConsumer PASSED

kafka.api.PlaintextProducerSendTest > testSerializerConstructors PASSED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithCreateTime PASSED

kafka.api.PlaintextProducerSendTest > 
testSendCompressedMessageWithLogAppendTime PASSED

kafka.api.PlaintextProducerSendTest > testClose PASSED

kafka.api.PlaintextProducerSendTest > testFlush PASSED

kafka.api.PlaintextProducerSendTest > testSendToPartition PASSED

kafka.api.PlaintextProducerSendTest > testSendOffset PASSED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic PASSED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromSenderThread 
PASSED

kafka.api.PlaintextProducerSendTest > testWrongSerializer PASSED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithLogApendTime PASSED

kafka.api.AdminClientTest > testDescribeGroup PASSED

kafka.api.AdminClientTest > testDescribeConsumerGroup PASSED

kafka.api.AdminClientTest > testListGroups PASSED

kafka.api.AdminClientTest > testDescribeConsumerGroupForNonExistentGroup PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate PASSED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnStopPolling 
PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForInvalidTopic PASSED

kafka.api.PlaintextConsumerTest > testSeek PASSED

kafka.api.PlaintextConsumerTest > testPositionAndCommit PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose PASSED

kafka.api.PlaintextConsumerTest > testFetchRecordTooLarge PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose PASSED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testInterceptors PASSED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription PASSED

kafka.api.PlaintextConsumerTest > testGroupConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionsFor PASSED

kafka.api.PlaintextConsumerTest > testInterceptorsWithWrongKeyValue PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPartitionPauseAndResume PASSED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithLogAppendTime PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnCloseAfterWakeup PASSED

kafka.api.PlaintextConsumerTest > testMaxPollRecords PASSED

kafka.api.PlaintextConsumerTest > testAutoOffsetReset PASSED

kafka.api.PlaintextConsumerTest > testFetchInvalidOffset PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitIntercept PASSED

kafka.api.PlaintextConsumerTest > testCommitMetadata PASSED

kafka.api.PlaintextConsumerTest > testRoundRobinA

[jira] [Updated] (KAFKA-3310) fetch requests can trigger repeated NPE when quota is enabled

2016-03-02 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3310:

Priority: Blocker  (was: Major)

> fetch requests can trigger repeated NPE when quota is enabled
> -
>
> Key: KAFKA-3310
> URL: https://issues.apache.org/jira/browse/KAFKA-3310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Jun Rao
>Assignee: Aditya Auradkar
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> We saw the following NPE when consumer quota is enabled. NPE is triggered on 
> every fetch request from the client.
> java.lang.NullPointerException
> at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:122)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$3(KafkaApis.scala:419)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:481)
> at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:431)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> One possible cause of this is the logic of removing inactive sensors. 
> Currently, in ClientQuotaManager, we create two sensors per clientId: a 
> throttleTimeSensor and a quotaSensor. Each sensor expires if it's not 
> actively updated for 1 hour. What can happen is that initially, the quota is 
> not exceeded. So, quotaSensor is being updated actively, but 
> throttleTimeSensor is not. At some point, throttleTimeSensor is removed by 
> the expiring thread. Now, we are in a situation that quotaSensor is 
> registered, but throttleTimeSensor is not. Later on, if the quota is 
> exceeded, we will hit the above NPE when trying to update throttleTimeSensor.
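
Not the actual fix, but a sketch of the kind of guard the description points at:
if the expiry thread has removed the throttle-time sensor, re-create it before
recording into it (the sensor naming is made up; real code would also re-attach
the quota stats):

{code}
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;

public class ThrottleSensorGuard {
    private final Metrics metrics;

    public ThrottleSensorGuard(Metrics metrics) {
        this.metrics = metrics;
    }

    public void recordThrottleTime(String clientId, double throttleTimeMs) {
        String name = clientId + "-throttle-time";   // illustrative naming scheme
        Sensor sensor = metrics.getSensor(name);
        if (sensor == null) {
            // The sensor may have expired for inactivity; sensor() re-creates it
            // (or returns the existing one) instead of leaving us with null.
            sensor = metrics.sensor(name);
        }
        sensor.record(throttleTimeMs);
    }
}
{code}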



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3320) Add successful acks verification to ProduceConsumeValidateTest

2016-03-02 Thread Anna Povzner (JIRA)
Anna Povzner created KAFKA-3320:
---

 Summary: Add successful acks verification to 
ProduceConsumeValidateTest
 Key: KAFKA-3320
 URL: https://issues.apache.org/jira/browse/KAFKA-3320
 Project: Kafka
  Issue Type: Test
Reporter: Anna Povzner


Currently ProduceConsumeValidateTest only validates that each acked message was 
consumed. Some tests may want an additional verification that all acks were 
successful.

This JIRA is to add an additional, optional verification that all acks were 
successful and use it in a couple of tests that need that verification, for 
example the compression test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Upgrading to Kafka 0.9.x

2016-03-02 Thread Cody Koeninger
Jay, thanks for the response.

Regarding the new consumer API for 0.9, I've been reading through the code
for it and thinking about how it fits in to the existing Spark integration.
So far I've seen some interesting challenges, and if you (or anyone else on
the dev list) have time to provide some hints, I'd appreciate it.

To recap how the existing Spark integration works (this is all using the
Kafka simple consumer api):

- Single driver node.  For each spark microbatch, it queries Kafka for the
offset high watermark for all topicpartitions of interest.  Creates one
spark partition for each topicpartition.  The partition doesn't have
messages, it just has a lower offset equal to the last consumed position,
upper offset equal to the high water mark. Sends those partitions to the
workers.

- Multiple worker nodes.  For each spark partition, it opens a simple
consumer, consumes from kafka the lower to the upper offset for a single
topicpartition, closes the consumer.

This is really blunt, but it actually works better than the integration
based on the older higher level consumer.  Churn of simple consumers on the
worker nodes in practice wasn't much of a problem (because the granularity
of microbatches is rarely under 1 second), so we don't even bother to cache
connections between batches.

The new consumer api presents some interesting challenges

- On the driver node, it would be desirable to be the single member of a
consumer group with dynamic topic subscription (so that users can take
advantage of topic patterns, etc).  Heartbeat happens only on a poll.  But
clearly the driver doesn't actually want to poll any messages, because that
load should be distributed to workers. I've seen KIP-41, which might help
if polling a single message is sufficiently lightweight.  In the meantime
the only things I can think of are trying to subclass to make a .heartbeat
method, or possibly setting max fetch bytes to a very low value.

- On the worker nodes, we aren't going to be able to get away with creating
and closing an instance of the new consumer every microbatch, since
prefetching, security, metadata requests all make that heavier weight than
a simple consumer.  The new consumer doesn't have a way to poll for only a
given number of messages (again KIP-41 would help here).  But the new
consumer also doesn't provide a way to poll for only a given
topicpartition, and the .pause method flushes fetch buffers so it's not an
option either.  I don't see a way to avoid caching one consumer per
topicpartition, which is probably less desirable than e.g. one consumer per
broker.
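
For reference, the per-partition, bounded-offset fetch the workers do looks
roughly like this against the new consumer (names and the 100 ms poll are
illustrative, the props are assumed to carry byte-array deserializers, and this
ignores the consumer-churn and heartbeat questions above):

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class BoundedRangeFetch {
    public static void fetchRange(Properties consumerProps, TopicPartition tp,
                                  long fromOffset, long untilOffset) {
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.assign(Collections.singletonList(tp));
            consumer.seek(tp, fromOffset);
            long next = fromOffset;
            while (next < untilOffset) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(100);
                for (ConsumerRecord<byte[], byte[]> record : records.records(tp)) {
                    if (record.offset() >= untilOffset)
                        return;                       // reached the upper bound
                    // process(record) ...
                    next = record.offset() + 1;
                }
            }
        }
    }
}
{code}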

Any suggestions welcome, even if it's "why don't you go work on
implementing KIP-41", or "You're doing it wrong" :)

Thanks,
Cody








On Fri, Feb 26, 2016 at 1:36 PM, Jay Kreps  wrote:

> Hey, yeah, we'd really like to make this work well for you guys.
>
> I think there are actually maybe two questions here:
> 1. How should this work in steady state?
> 2. Given that there was a major reworking of the kafka consumer java
> library for 0.9 how does that impact things right now? (
> http://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0.9-consumer-client
> )
>
> Quick recap of how we do compatibility, just so everyone is on the same
> page:
> 1. The protocol is versioned and the cluster supports multiple versions.
> 2. As we evolve Kafka we always continue to support older versions of the
> protocol and hence older clients continue to work with newer Kafka versions.
> 3. In general we don't try to have the clients support older versions of
> Kafka since, after all, the whole point of the new client is to add
> features which often require those features to be in the broker.
>
> So I think in steady state the answer is to choose a conservative version
> to build against and it's on us to keep that working as Kafka evolves. As
> always there will be some tradeoff between using the newest features and
> being compatible with old stuff.
>
> But that steady state question ignores the fact that we did a complete
> rewrite of the consumer in 0.9. The old consumer is still there, supported,
> and still works as before but the new consumer is the path forward and what
> we are adding features to. At some point you will want to migrate to this
> new api, which will be a non-trivial change to your code.
>
> This api has a couple of advantages for you guys (1) it supports security,
> (2) It allows low-level control over partition assignment and offsets
> without the crazy fiddliness of the old "simple consumer" api, (3) it no
> longer directly accesses ZK, (4) no scala dependency and no dependency on
> Kafka core. I think all four of these should be desirable for Spark et al.
>
> One thing we could discuss is the possibility of doing forwards and
> backwards compatibility in the clients. I'm not sure this would actually
> make things better, that would probably depend on the details of how it
> worked.
>
> -Jay
>
>
> On Fri, Feb 26, 201

[jira] [Commented] (KAFKA-3314) Add CDDL license to LICENSE and NOTICE file

2016-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176146#comment-15176146
 ] 

ASF GitHub Bot commented on KAFKA-3314:
---

GitHub user junrao opened a pull request:

https://github.com/apache/kafka/pull/997

KAFKA-3314: Add CDDL license to LICENSE and NOTICE file



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/junrao/kafka kafka-3314

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/997.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #997


commit ef72005da661872c3e5dd2efb358649de1a13fdf
Author: Jun Rao 
Date:   2016-03-02T18:17:07Z

add CDDL license




> Add CDDL license to LICENSE and NOTICE file
> ---
>
> Key: KAFKA-3314
> URL: https://issues.apache.org/jira/browse/KAFKA-3314
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Jun Rao
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Kafka now has a binary dependency on jersey, which is dual licensed under 
> CDDL. According to http://www.apache.org/legal/resolved.html#category-a , we 
> need to add CDDL to our LICENSE and NOTICE file. 
> The discussion on this can be found at 
> https://mail-archives.apache.org/mod_mbox/www-legal-discuss/201602.mbox/%3ccafbh0q1u33uog1+xsntens7rzaa5f6hgujczwx03xotug1c...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3314) Add CDDL license to LICENSE and NOTICE file

2016-03-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-3314:
---
Status: Patch Available  (was: Open)

> Add CDDL license to LICENSE and NOTICE file
> ---
>
> Key: KAFKA-3314
> URL: https://issues.apache.org/jira/browse/KAFKA-3314
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Jun Rao
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Kafka now has a binary dependency on jersey, which is dual licensed under 
> CDDL. According to http://www.apache.org/legal/resolved.html#category-a , we 
> need to add CDDL to our LICENSE and NOTICE file. 
> The discussion on this can be found at 
> https://mail-archives.apache.org/mod_mbox/www-legal-discuss/201602.mbox/%3ccafbh0q1u33uog1+xsntens7rzaa5f6hgujczwx03xotug1c...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3314: Add CDDL license to LICENSE and NO...

2016-03-02 Thread junrao
GitHub user junrao opened a pull request:

https://github.com/apache/kafka/pull/997

KAFKA-3314: Add CDDL license to LICENSE and NOTICE file



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/junrao/kafka kafka-3314

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/997.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #997


commit ef72005da661872c3e5dd2efb358649de1a13fdf
Author: Jun Rao 
Date:   2016-03-02T18:17:07Z

add CDDL license




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Possible bug in Streams → KStreamImpl.java

2016-03-02 Thread Avi Flax
On Mon, Feb 29, 2016 at 6:34 PM, Guozhang Wang  wrote:
> Thanks for pointing it out!

My pleasure!

> And yes it is indeed a bug in the code. Could
> you file a HOTFIX PR fixing this and also modify the existing unit test to
> cover this case?

Sorry, I missed your email until just now. I’m glad to see you were
able to get to this!

Thanks!
Avi


[jira] [Created] (KAFKA-3319) Improve description of group min and max session timeouts

2016-03-02 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-3319:
--

 Summary: Improve description of group min and max session timeouts
 Key: KAFKA-3319
 URL: https://issues.apache.org/jira/browse/KAFKA-3319
 Project: Kafka
  Issue Type: Improvement
  Components: config, consumer
Reporter: Jason Gustafson
Assignee: Jason Gustafson


The current descriptions of the group min and max session timeouts are very 
matter-of-fact: they define exactly what the configuration is and nothing else. 
We should provide more detail about why these settings exist and why a user 
might need to change them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3318) Improve consumer rebalance error messaging

2016-03-02 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-3318:
--

 Summary: Improve consumer rebalance error messaging
 Key: KAFKA-3318
 URL: https://issues.apache.org/jira/browse/KAFKA-3318
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
Assignee: Jason Gustafson


A common problem with the new consumer is to have message processing take 
longer than the session timeout, causing an unexpected rebalance. 
Unfortunately, when this happens, the error messages are often cryptic (e.g. 
something about illegal generation) and contain no clear advice on what to do 
(e.g. increase session timeout). We should do a pass on error messages to 
ensure that users receive clear guidance on the problem and possible solutions.
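
As an illustration of the usual advice (values are examples only): give the
consumer a longer session timeout and fetch less data per poll so that processing
between poll() calls stays under the timeout. The broker-side
group.max.session.timeout.ms caps how far session.timeout.ms can be raised.

{code}
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SlowProcessingConsumerConfig {
    public static KafkaConsumer<byte[], byte[]> build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        // More head room for slow per-record processing before a rebalance kicks in.
        props.put("session.timeout.ms", "60000");
        // Smaller fetches per partition keep each poll()'s batch of work short.
        props.put("max.partition.fetch.bytes", "262144");
        return new KafkaConsumer<>(props);
    }
}
{code}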



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3196) KIP-42 (part 2): add record size and CRC to RecordMetadata and ConsumerRecords

2016-03-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-3196:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 951
[https://github.com/apache/kafka/pull/951]

> KIP-42 (part 2): add record size and CRC to RecordMetadata and ConsumerRecords
> --
>
> Key: KAFKA-3196
> URL: https://issues.apache.org/jira/browse/KAFKA-3196
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Anna Povzner
>Assignee: Anna Povzner
> Fix For: 0.10.0.0
>
>
> This is the second (smaller) part of KIP-42, which includes: Add record size 
> and CRC to RecordMetadata and ConsumerRecord.
> See details in KIP-42 wiki: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-42%3A+Add+Producer+and+Consumer+Interceptors
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-50 - Enhance Authorizer interface to be aware of supported Principal Types

2016-03-02 Thread Ashish Singh
Here is link to the KIP,
https://cwiki.apache.org/confluence/display/KAFKA/KIP-50+-+Enhance+Authorizer+interface+to+be+aware+of+supported+Principal+Types

On Wed, Mar 2, 2016 at 9:46 AM, Ashish Singh  wrote:

> Hi Guys,
>
> I would like to initiate a discuss thread on KIP-50. The Kafka authorizer is
> agnostic of the principal types it supports, and so are the acls CRUD methods
> in kafka.security.auth.Authorizer. The intent behind this is to keep Kafka
> authorization pluggable, which is really great. However, it means the acls
> CRUD methods perform no validity check on acls, as they are not
> aware of which principal types the Authorizer implementation supports. This
> opens up space for lots of user errors, KAFKA-3097
>  for instance.
>
> This KIP proposes adding a getSupportedPrincipalTypes method to authorizer
> and use that for acls verification during acls CRUD.
>
> Feedbacks and comments are welcome.
>
> --
>
> Regards,
> Ashish
>



-- 

Regards,
Ashish


[DISCUSS] KIP-50 - Enhance Authorizer interface to be aware of supported Principal Types

2016-03-02 Thread Ashish Singh
Hi Guys,

I would like to initiate a discussion thread on KIP-50. The Kafka authorizer is
agnostic of the principal types it supports, and so are the ACL CRUD methods
in kafka.security.auth.Authorizer. The intent behind this is to keep Kafka
authorization pluggable, which is really great. However, this leaves the ACL
CRUD methods performing no validity check on the ACLs, as they are not
aware of which principal types the Authorizer implementation supports. This
opens up space for lots of user errors; see KAFKA-3097
 for instance.

This KIP proposes adding a getSupportedPrincipalTypes method to the authorizer
and using it for ACL verification during ACL CRUD operations.

Feedback and comments are welcome.

-- 

Regards,
Ashish
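
To make the proposal concrete, a rough sketch of the idea. The real interface is the Scala trait kafka.security.auth.Authorizer; the Java-style shape below is illustrative only, with getSupportedPrincipalTypes taken from the KIP and the validation helper being hypothetical.

{code}
import java.util.Set;

// Illustrative only: mirrors the shape of kafka.security.auth.Authorizer.
interface AuthorizerSketch {
    // Proposed by KIP-50: implementations advertise the principal types they
    // understand (e.g. "User"), so ACL CRUD calls can be validated up front.
    Set<String> getSupportedPrincipalTypes();
}

class AclCrudSketch {
    // Hypothetical validation step an ACL create/delete path could perform
    // before persisting an ACL for the given principal type.
    static void validatePrincipalType(AuthorizerSketch authorizer, String principalType) {
        if (!authorizer.getSupportedPrincipalTypes().contains(principalType)) {
            throw new IllegalArgumentException(
                    "Principal type '" + principalType + "' is not supported by this authorizer");
        }
    }
}
{code}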


[GitHub] kafka pull request: KAFKA-3196: Added checksum and size to RecordM...

2016-03-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/951


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3196) KIP-42 (part 2): add record size and CRC to RecordMetadata and ConsumerRecords

2016-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176067#comment-15176067
 ] 

ASF GitHub Bot commented on KAFKA-3196:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/951


> KIP-42 (part 2): add record size and CRC to RecordMetadata and ConsumerRecords
> --
>
> Key: KAFKA-3196
> URL: https://issues.apache.org/jira/browse/KAFKA-3196
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Anna Povzner
>Assignee: Anna Povzner
> Fix For: 0.10.0.0
>
>
> This is the second (smaller) part of KIP-42, which includes: Add record size 
> and CRC to RecordMetadata and ConsumerRecord.
> See details in KIP-42 wiki: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-42%3A+Add+Producer+and+Consumer+Interceptors
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3290) WorkerSourceTask testCommit transient failure

2016-03-02 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176061#comment-15176061
 ] 

Jason Gustafson commented on KAFKA-3290:


[~ewencp] I haven't been able to reproduce it locally, but it looks like 
there's more information in that output, so I'll have another look.

> WorkerSourceTask testCommit transient failure
> -
>
> Key: KAFKA-3290
> URL: https://issues.apache.org/jira/browse/KAFKA-3290
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> From recent failed build:
> {code}
> org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommit FAILED
> java.lang.AssertionError:
>   Expectation failure on verify:
> Listener.onStartup(job-0): expected: 1, actual: 1
> Listener.onShutdown(job-0): expected: 1, actual: 1
> at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
> at 
> org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
> at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
> at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1405)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTaskTest.testCommit(WorkerSourceTaskTest.java:221)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3317) too many open files between kafka brokers or maybe between broker and clients

2016-03-02 Thread david (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

david updated KAFKA-3317:
-
Description: 
there are many open files in my java app client, which writes messages to the kafka broker. 
I used 
{quote}
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.9.0.1</version>
</dependency>
{quote}
the new api to write messages. When I use

{quote}
lsof -p 8780 | wc -l
19184
lsof -p 8780 | grep XmlIpcRegSvc | wc -l
4920
lsof -p 8780 | grep pipe | wc -l
9576
lsof -p 8780 | grep eventpoll | wc -l
4792
{quote}

where 8780 is my java app pid.

{quote}
java37121  app *796u  IPv6  9606739970t0   TCP 
mad183:50213->mad180:XmlIpcRegSvc (ESTABLISHED)
{quote}

there are many ESTABLISHED XmlIpcRegSvc connections, and they do not seem to be closed.
It also looks like IPv6 is being used.


  was:
there are many open files in my java app client, which writes messages to the kafka broker. 
I used 
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.9.0.1</version>
</dependency>
the new api to write messages.
When I use

ps -ef | grep myapp | grep -v 'grep'  | awk '{print $2}' | xargs lsof -p | wc -l
19184
lsof -p 8780 | grep XmlIpcRegSvc | wc -l
4920
lsof -p 8780 | grep pipe | wc -l
9576
lsof -p 8780 | grep eventpoll | wc -l
4792

where 8780 is my java app pid.

java37121  app *796u  IPv6  9606739970t0   TCP 
mad183:50213->mad180:XmlIpcRegSvc (ESTABLISHED)


there are many ESTABLISHED XmlIpcRegSvc connections, and they do not seem to be closed.
It also looks like IPv6 is being used.



> too many open files between kafka brokers or maybe between broker and clients 
> 
>
> Key: KAFKA-3317
> URL: https://issues.apache.org/jira/browse/KAFKA-3317
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
> Environment: CentOS release 6.5 (Final)
> kafka_2.11-0.9.0.1
>Reporter: david
>
> there are many open files in my java app client, which writes messages to the kafka 
> broker. I used 
> {quote}
> <dependency>
>     <groupId>org.apache.kafka</groupId>
>     <artifactId>kafka-clients</artifactId>
>     <version>0.9.0.1</version>
> </dependency>
> {quote}
> the new api to write messages. When I use
> {quote}
> lsof -p 8780 | wc -l
> 19184
> lsof -p 8780 | grep XmlIpcRegSvc | wc -l
> 4920
> lsof -p 8780 | grep pipe | wc -l
> 9576
> lsof -p 8780 | grep eventpoll | wc -l
> 4792
> {quote}
> where 8780 is my java app pid.
> {quote}
> java37121  app *796u  IPv6  9606739970t0   TCP 
> mad183:50213->mad180:XmlIpcRegSvc (ESTABLISHED)
> {quote}
> there are many ESTABLISHED XmlIpcRegSvc connections, and they do not seem to be closed.
> It also looks like IPv6 is being used.
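
The root cause is not established in this ticket, but one common way to accumulate sockets, pipes and eventpoll descriptors with the new Java producer is creating a KafkaProducer per message and never closing it; each instance owns its own NIO selector and connections. A minimal sketch of the usual pattern, assuming that is what is happening here (topic name and serializers are placeholders):

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SingleProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        // One long-lived producer per application, not one per message: every
        // KafkaProducer instance opens its own sockets, pipes and epoll handles.
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        try {
            for (int i = 0; i < 1000; i++) {
                producer.send(new ProducerRecord<>("test", "key-" + i, "value-" + i));
            }
        } finally {
            producer.close(); // releases the network resources and file descriptors
        }
    }
}
{code}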



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2934) Offset storage file configuration in Connect standalone mode is not included in StandaloneConfig

2016-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15175765#comment-15175765
 ] 

ASF GitHub Bot commented on KAFKA-2934:
---

GitHub user ZoneMayor reopened a pull request:

https://github.com/apache/kafka/pull/734

KAFKA-2934: Offset storage file configuration in Connect standalone mode is 
not included in StandaloneConfig

Added offsetBackingStore config to StandaloneConfig and DistributedConfig;
Added config for offset.storage.topic and config.storage.topic into 
DistributedConfig;

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2934

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/734.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #734


commit 8a59426b2d348826584e8a618676b739dd7918d7
Author: jinxing 
Date:   2016-02-21T08:34:27Z

KAFKA-2934: Offset storage file configuration in Connect standalone mode is 
not included in StandaloneConfig

commit 54bc2bcf5259cba9d99c74566d57823784065ab9
Author: jinxing 
Date:   2016-03-02T06:56:04Z

mod

commit b491755fe3ea3e3420cd0b2b4297ca3196e89307
Author: jinxing 
Date:   2016-03-02T07:03:25Z

fix KakfaStatusBackingStore




> Offset storage file configuration in Connect standalone mode is not included 
> in StandaloneConfig
> 
>
> Key: KAFKA-2934
> URL: https://issues.apache.org/jira/browse/KAFKA-2934
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> The config is coded directly in FileOffsetBackingStore rather than being 
> listed (and validated) in StandaloneConfig. This also means it wouldn't be 
> included if we autogenerated docs from the config classes.
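
For illustration, a rough sketch (not the actual patch) of what declaring the setting in a ConfigDef-backed config class looks like; the config key and doc string below are assumptions.

{code}
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigDef.Importance;
import org.apache.kafka.common.config.ConfigDef.Type;

public class StandaloneConfigSketch {
    // Assumed key name; the point is that the setting is declared (and so
    // validated and documentable) instead of hard-coded in FileOffsetBackingStore.
    public static final String OFFSET_STORAGE_FILE_FILENAME_CONFIG =
            "offset.storage.file.filename";

    static final ConfigDef CONFIG = new ConfigDef()
            .define(OFFSET_STORAGE_FILE_FILENAME_CONFIG, Type.STRING, Importance.HIGH,
                    "File to store source connector offsets in when running standalone.");
}
{code}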



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2934) Offset storage file configuration in Connect standalone mode is not included in StandaloneConfig

2016-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15175764#comment-15175764
 ] 

ASF GitHub Bot commented on KAFKA-2934:
---

Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/734


> Offset storage file configuration in Connect standalone mode is not included 
> in StandaloneConfig
> 
>
> Key: KAFKA-2934
> URL: https://issues.apache.org/jira/browse/KAFKA-2934
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> The config is coded directly in FileOffsetBackingStore rather than being 
> listed (and validated) in StandaloneConfig. This also means it wouldn't be 
> included if we autogenerated docs from the config classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2934: Offset storage file configuration ...

2016-03-02 Thread ZoneMayor
Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/734


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-2934: Offset storage file configuration ...

2016-03-02 Thread ZoneMayor
GitHub user ZoneMayor reopened a pull request:

https://github.com/apache/kafka/pull/734

KAFKA-2934: Offset storage file configuration in Connect standalone mode is 
not included in StandaloneConfig

Added offsetBackingStore config to StandaloneConfig and DistributedConfig;
Added config for offset.storage.topic and config.storage.topic into 
DistributedConfig;

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2934

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/734.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #734


commit 8a59426b2d348826584e8a618676b739dd7918d7
Author: jinxing 
Date:   2016-02-21T08:34:27Z

KAFKA-2934: Offset storage file configuration in Connect standalone mode is 
not included in StandaloneConfig

commit 54bc2bcf5259cba9d99c74566d57823784065ab9
Author: jinxing 
Date:   2016-03-02T06:56:04Z

mod

commit b491755fe3ea3e3420cd0b2b4297ca3196e89307
Author: jinxing 
Date:   2016-03-02T07:03:25Z

fix KakfaStatusBackingStore




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-3317) too many open files between kafka brokers or maybe between broker and clients

2016-03-02 Thread david (JIRA)
david created KAFKA-3317:


 Summary: too many open files between kafka brokers or maybe between 
broker and clients 
 Key: KAFKA-3317
 URL: https://issues.apache.org/jira/browse/KAFKA-3317
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.1, 0.9.0.0
 Environment: CentOS release 6.5 (Final)
kafka_2.11-0.9.0.1
Reporter: david


there are many open files in my java app client, which writes messages to the kafka broker. 
I used 
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.9.0.1</version>
</dependency>
the new api to write messages.
When I use

ps -ef | grep myapp | grep -v 'grep'  | awk '{print $2}' | xargs lsof -p | wc -l
19184
lsof -p 8780 | grep XmlIpcRegSvc | wc -l
4920
lsof -p 8780 | grep pipe | wc -l
9576
lsof -p 8780 | grep eventpoll | wc -l
4792

where 8780 is my java app pid.

java37121  app *796u  IPv6  9606739970t0   TCP 
mad183:50213->mad180:XmlIpcRegSvc (ESTABLISHED)


there are many ESTABLISHED XmlIpcRegSvc connections, and they do not seem to be closed.
It also looks like IPv6 is being used.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


intellij make errors...

2016-03-02 Thread vasanth loka
I am attempting to compile kafka 0.9 and run a few tests in IntelliJ. I am
able to import the project into IntelliJ. I have Scala 2.1.6 and JDK 1.8.
When I attempt a make, I get a few errors, all indicating: can't
expand macros compiled by previous versions of Scala.
For example:

Error:(110, 29) can't expand macros compiled by previous versions of Scala
assert(replicas.contains(partitionDataForTopic1(1).leader.get))
^


Is there anything that I am missing to get this working in IntelliJ?


[GitHub] kafka pull request: MINOR: remove unnecessary imports

2016-03-02 Thread xuwei-k
Github user xuwei-k closed the pull request at:

https://github.com/apache/kafka/pull/65


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3240) Replication issues

2016-03-02 Thread Jan Omar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Omar updated KAFKA-3240:

Summary: Replication issues  (was: Replication issues on FreeBSD)

> Replication issues
> --
>
> Key: KAFKA-3240
> URL: https://issues.apache.org/jira/browse/KAFKA-3240
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0, 0.8.2.2, 0.9.0.1
> Environment: FreeBSD 10.2-RELEASE-p9
>Reporter: Jan Omar
>
> Hi,
> We are trying to replace our 3-broker cluster running on 0.6 with a new 
> cluster on 0.9.0.1 (but tried 0.8.2.2 and 0.9.0.0 as well).
> - 3 kafka nodes with one zookeeper instance on each machine
> - FreeBSD 10.2 p9
> - Nagle off (sysctl net.inet.tcp.delayed_ack=0)
> - all kafka machines write a ZFS ZIL to a dedicated SSD
> - 3 producers on 3 machines, writing to 1 topic, partition count 3, replication 
> factor 3
> - acks all
> - 10 Gigabit Ethernet, all machines on one switch, ping 0.05 ms worst case.
> While using the ProducerPerformance or rdkafka_performance we are seeing very 
> strange Replication errors. Any hint on what's going on would be highly 
> appreciated. Any suggestion on how to debug this properly would help as well.
> This is what our broker config looks like:
> {code}
> broker.id=5
> auto.create.topics.enable=false
> delete.topic.enable=true
> listeners=PLAINTEXT://:9092
> port=9092
> host.name=kafka-five.acc
> advertised.host.name=10.5.3.18
> zookeeper.connect=zookeeper-four.acc:2181,zookeeper-five.acc:2181,zookeeper-six.acc:2181
> zookeeper.connection.timeout.ms=6000
> num.replica.fetchers=1
> replica.fetch.max.bytes=1
> replica.fetch.wait.max.ms=500
> replica.high.watermark.checkpoint.interval.ms=5000
> replica.socket.timeout.ms=30
> replica.socket.receive.buffer.bytes=65536
> replica.lag.time.max.ms=1000
> min.insync.replicas=2
> controller.socket.timeout.ms=3
> controller.message.queue.size=100
> log.dirs=/var/db/kafka
> num.partitions=8
> message.max.bytes=1
> auto.create.topics.enable=false
> log.index.interval.bytes=4096
> log.index.size.max.bytes=10485760
> log.retention.hours=168
> log.flush.interval.ms=1
> log.flush.interval.messages=2
> log.flush.scheduler.interval.ms=2000
> log.roll.hours=168
> log.retention.check.interval.ms=30
> log.segment.bytes=536870912
> zookeeper.connection.timeout.ms=100
> zookeeper.sync.time.ms=5000
> num.io.threads=8
> num.network.threads=4
> socket.request.max.bytes=104857600
> socket.receive.buffer.bytes=1048576
> socket.send.buffer.bytes=1048576
> queued.max.requests=10
> fetch.purgatory.purge.interval.requests=100
> producer.purgatory.purge.interval.requests=100
> replica.lag.max.messages=1000
> {code}
> These are the errors we're seeing:
> {code:borderStyle=solid}
> ERROR [Replica Manager on Broker 5]: Error processing fetch operation on 
> partition [test,0] offset 50727 (kafka.server.ReplicaManager)
> java.lang.IllegalStateException: Invalid message size: 0
>   at kafka.log.FileMessageSet.searchFor(FileMessageSet.scala:141)
>   at kafka.log.LogSegment.translateOffset(LogSegment.scala:105)
>   at kafka.log.LogSegment.read(LogSegment.scala:126)
>   at kafka.log.Log.read(Log.scala:506)
>   at 
> kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:536)
>   at 
> kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:507)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>   at 
> scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
>   at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:507)
>   at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:462)
>   at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:431)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>   at java.lang.Thread.run(Thread.java:745)0
> {code}
> and 
> {code}
> ERROR Found invalid messages during fetch for partition [test,0] offset 2732 
> error Message found with corrupt size (0) in shallow iterator 
> (kafka.server.ReplicaFetcherThread)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3240) Replication issues on FreeBSD

2016-03-02 Thread Jan Omar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15175423#comment-15175423
 ] 

Jan Omar commented on KAFKA-3240:
-

To sum it up:

The issue does NOT come up when we use 2 brokers with only 2 partitions for a 
given topic, without compression.
Increasing the partition count (in our case 30) results in the reported issue. 
The same goes for enabling lz4 compression on the producer side.

> Replication issues on FreeBSD
> -
>
> Key: KAFKA-3240
> URL: https://issues.apache.org/jira/browse/KAFKA-3240
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0, 0.8.2.2, 0.9.0.1
> Environment: FreeBSD 10.2-RELEASE-p9
>Reporter: Jan Omar
>
> Hi,
> We are trying to replace our 3-broker cluster running on 0.6 with a new 
> cluster on 0.9.0.1 (but tried 0.8.2.2 and 0.9.0.0 as well).
> - 3 kafka nodes with one zookeeper instance on each machine
> - FreeBSD 10.2 p9
> - Nagle off (sysctl net.inet.tcp.delayed_ack=0)
> - all kafka machines write a ZFS ZIL to a dedicated SSD
> - 3 producers on 3 machines, writing to 1 topic, partition count 3, replication 
> factor 3
> - acks all
> - 10 Gigabit Ethernet, all machines on one switch, ping 0.05 ms worst case.
> While using the ProducerPerformance or rdkafka_performance we are seeing very 
> strange Replication errors. Any hint on what's going on would be highly 
> appreciated. Any suggestion on how to debug this properly would help as well.
> This is what our broker config looks like:
> {code}
> broker.id=5
> auto.create.topics.enable=false
> delete.topic.enable=true
> listeners=PLAINTEXT://:9092
> port=9092
> host.name=kafka-five.acc
> advertised.host.name=10.5.3.18
> zookeeper.connect=zookeeper-four.acc:2181,zookeeper-five.acc:2181,zookeeper-six.acc:2181
> zookeeper.connection.timeout.ms=6000
> num.replica.fetchers=1
> replica.fetch.max.bytes=1
> replica.fetch.wait.max.ms=500
> replica.high.watermark.checkpoint.interval.ms=5000
> replica.socket.timeout.ms=30
> replica.socket.receive.buffer.bytes=65536
> replica.lag.time.max.ms=1000
> min.insync.replicas=2
> controller.socket.timeout.ms=3
> controller.message.queue.size=100
> log.dirs=/var/db/kafka
> num.partitions=8
> message.max.bytes=1
> auto.create.topics.enable=false
> log.index.interval.bytes=4096
> log.index.size.max.bytes=10485760
> log.retention.hours=168
> log.flush.interval.ms=1
> log.flush.interval.messages=2
> log.flush.scheduler.interval.ms=2000
> log.roll.hours=168
> log.retention.check.interval.ms=30
> log.segment.bytes=536870912
> zookeeper.connection.timeout.ms=100
> zookeeper.sync.time.ms=5000
> num.io.threads=8
> num.network.threads=4
> socket.request.max.bytes=104857600
> socket.receive.buffer.bytes=1048576
> socket.send.buffer.bytes=1048576
> queued.max.requests=10
> fetch.purgatory.purge.interval.requests=100
> producer.purgatory.purge.interval.requests=100
> replica.lag.max.messages=1000
> {code}
> These are the errors we're seeing:
> {code:borderStyle=solid}
> ERROR [Replica Manager on Broker 5]: Error processing fetch operation on 
> partition [test,0] offset 50727 (kafka.server.ReplicaManager)
> java.lang.IllegalStateException: Invalid message size: 0
>   at kafka.log.FileMessageSet.searchFor(FileMessageSet.scala:141)
>   at kafka.log.LogSegment.translateOffset(LogSegment.scala:105)
>   at kafka.log.LogSegment.read(LogSegment.scala:126)
>   at kafka.log.Log.read(Log.scala:506)
>   at 
> kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:536)
>   at 
> kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:507)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>   at 
> scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
>   at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:507)
>   at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:462)
>   at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:431)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>   at java.lang.Thread.run(Thread.java:745)0
> {code}
> and 
> {code}
> ERROR Found invalid messages during fetch for partition [test,0] offset 2732 
> error Message found with corrupt size (0) in shallow iterator 
> (kafk

Build failed in Jenkins: kafka-trunk-jdk8 #406

2016-03-02 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3257: disable bootstrap-test-env.sh --colour option

--
[...truncated 3563 lines...]

org.apache.kafka.clients.producer.RecordSendTest > testBlocking PASSED

org.apache.kafka.clients.ClientUtilsTest > testParseAndValidateAddresses PASSED

org.apache.kafka.clients.ClientUtilsTest > testNoPort PASSED

org.apache.kafka.clients.NetworkClientTest > testSimpleRequestResponse PASSED

org.apache.kafka.clients.NetworkClientTest > testClose PASSED

org.apache.kafka.clients.NetworkClientTest > testLeastLoadedNode PASSED

org.apache.kafka.clients.NetworkClientTest > testConnectionDelayConnected PASSED

org.apache.kafka.clients.NetworkClientTest > testRequestTimeout PASSED

org.apache.kafka.clients.NetworkClientTest > 
testSimpleRequestResponseWithStaticNodes PASSED

org.apache.kafka.clients.NetworkClientTest > testConnectionDelay PASSED

org.apache.kafka.clients.NetworkClientTest > testSendToUnreadyNode PASSED

org.apache.kafka.clients.NetworkClientTest > testConnectionDelayDisconnected 
PASSED

org.apache.kafka.clients.MetadataTest > testListenerCanUnregister PASSED

org.apache.kafka.clients.MetadataTest > testFailedUpdate PASSED

org.apache.kafka.clients.MetadataTest > testMetadataUpdateWaitTime PASSED

org.apache.kafka.clients.MetadataTest > testUpdateWithNeedMetadataForAllTopics 
PASSED

org.apache.kafka.clients.MetadataTest > testMetadata PASSED

org.apache.kafka.clients.MetadataTest > testListenerGetsNotifiedOfUpdate PASSED

org.apache.kafka.common.SerializeCompatibilityTopicPartitionTest > 
testSerializationRoundtrip PASSED

org.apache.kafka.common.SerializeCompatibilityTopicPartitionTest > 
testTopiPartitionSerializationCompatibility PASSED

org.apache.kafka.common.serialization.SerializationTest > testStringSerializer 
PASSED

org.apache.kafka.common.serialization.SerializationTest > testIntegerSerializer 
PASSED

org.apache.kafka.common.serialization.SerializationTest > 
testByteBufferSerializer PASSED

org.apache.kafka.common.config.AbstractConfigTest > testOriginalsWithPrefix 
PASSED

org.apache.kafka.common.config.AbstractConfigTest > testConfiguredInstances 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testBasicTypes PASSED

org.apache.kafka.common.config.ConfigDefTest > testNullDefault PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultRange PASSED

org.apache.kafka.common.config.ConfigDefTest > testValidators PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultString PASSED

org.apache.kafka.common.config.ConfigDefTest > testSslPasswords PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefault PASSED

org.apache.kafka.common.config.ConfigDefTest > testMissingRequired PASSED

org.apache.kafka.common.config.ConfigDefTest > testDefinedTwice PASSED

org.apache.kafka.common.config.ConfigDefTest > testBadInputs PASSED

org.apache.kafka.common.protocol.ErrorsTest > testForExceptionDefault PASSED

org.apache.kafka.common.protocol.ErrorsTest > testUniqueExceptions PASSED

org.apache.kafka.common.protocol.ErrorsTest > testForExceptionInheritance PASSED

org.apache.kafka.common.protocol.ErrorsTest > testNoneException PASSED

org.apache.kafka.common.protocol.ErrorsTest > testUniqueErrorCodes PASSED

org.apache.kafka.common.protocol.ErrorsTest > testExceptionsAreNotGeneric PASSED

org.apache.kafka.common.protocol.ProtoUtilsTest > schemaVersionOutOfRange PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testArray 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testNulls 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testNullableDefault PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testDefault 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testSimple 
PASSED

org.apache.kafka.common.requests.RequestResponseTest > testSerialization PASSED

org.apache.kafka.common.requests.RequestResponseTest > fetchResponseVersionTest 
PASSED

org.apache.kafka.common.requests.RequestResponseTest > 
produceResponseVersionTest PASSED

org.apache.kafka.common.requests.RequestResponseTest > 
testControlledShutdownResponse PASSED

org.apache.kafka.common.requests.RequestResponseTest > 
testRequestHeaderWithNullClientId PASSED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testPrincipalNameCanContainSeparator PASSED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testEqualsAndHashCode PASSED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testParse PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > testClientMode PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryConfiguration PASSED

org.apache.kafka.common.metrics.MetricsTest > testSimpleStats PASSED

org.apache.kafka.common.metrics.MetricsTest > testOldDataHasNoEffect PASSED

org

Jenkins build is back to normal : kafka-trunk-jdk7 #1079

2016-03-02 Thread Apache Jenkins Server
See 



[jira] [Updated] (KAFKA-3310) fetch requests can trigger repeated NPE when quota is enabled

2016-03-02 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3310:
---
Affects Version/s: 0.9.0.0

> fetch requests can trigger repeated NPE when quota is enabled
> -
>
> Key: KAFKA-3310
> URL: https://issues.apache.org/jira/browse/KAFKA-3310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Jun Rao
> Fix For: 0.10.0.0
>
>
> We saw the following NPE when consumer quota is enabled. NPE is triggered on 
> every fetch request from the client.
> java.lang.NullPointerException
> at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:122)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$3(KafkaApis.scala:419)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:481)
> at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:431)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> One possible cause of this is the logic of removing inactive sensors. 
> Currently, in ClientQuotaManager, we create two sensors per clientId: a 
> throttleTimeSensor and a quotaSensor. Each sensor expires if it's not 
> actively updated for 1 hour. What can happen is that initially, the quota is 
> not exceeded. So, quotaSensor is being updated actively, but 
> throttleTimeSensor is not. At some point, throttleTimeSensor is removed by 
> the expiring thread. Now, we are in a situation that quotaSensor is 
> registered, but throttleTimeSensor is not. Later on, if the quota is 
> exceeded, we will hit the above NPE when trying to update throttleTimeSensor.
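
A rough sketch of the kind of defensive lookup that would avoid the NPE, using the Java Metrics/Sensor classes in org.apache.kafka.common.metrics; the sensor naming scheme is made up and this is not the actual fix.

{code}
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;

public class SensorLookupSketch {
    private final Metrics metrics;

    public SensorLookupSketch(Metrics metrics) {
        this.metrics = metrics;
    }

    // If the throttle-time sensor was removed for being idle, re-create it
    // instead of dereferencing the null returned by getSensor().
    public Sensor throttleTimeSensor(String clientId) {
        String name = clientId + "-throttle-time"; // made-up naming scheme
        Sensor sensor = metrics.getSensor(name);
        if (sensor == null) {
            sensor = metrics.sensor(name);
        }
        return sensor;
    }
}
{code}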



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3310) fetch requests can trigger repeated NPE when quota is enabled

2016-03-02 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3310:
---
Fix Version/s: 0.10.0.0

> fetch requests can trigger repeated NPE when quota is enabled
> -
>
> Key: KAFKA-3310
> URL: https://issues.apache.org/jira/browse/KAFKA-3310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Jun Rao
> Fix For: 0.10.0.0
>
>
> We saw the following NPE when consumer quota is enabled. NPE is triggered on 
> every fetch request from the client.
> java.lang.NullPointerException
> at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:122)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$3(KafkaApis.scala:419)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:481)
> at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:431)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> One possible cause of this is the logic of removing inactive sensors. 
> Currently, in ClientQuotaManager, we create two sensors per clientId: a 
> throttleTimeSensor and a quotaSensor. Each sensor expires if it's not 
> actively updated for 1 hour. What can happen is that initially, the quota is 
> not exceeded. So, quotaSensor is being updated actively, but 
> throttleTimeSensor is not. At some point, throttleTimeSensor is removed by 
> the expiring thread. Now, we are in a situation that quotaSensor is 
> registered, but throttleTimeSensor is not. Later on, if the quota is 
> exceeded, we will hit the above NPE when trying to update throttleTimeSensor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3310) fetch requests can trigger repeated NPE when quota is enabled

2016-03-02 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3310:
---
Assignee: Aditya Auradkar

> fetch requests can trigger repeated NPE when quota is enabled
> -
>
> Key: KAFKA-3310
> URL: https://issues.apache.org/jira/browse/KAFKA-3310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Jun Rao
>Assignee: Aditya Auradkar
> Fix For: 0.10.0.0
>
>
> We saw the following NPE when consumer quota is enabled. NPE is triggered on 
> every fetch request from the client.
> java.lang.NullPointerException
> at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:122)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$3(KafkaApis.scala:419)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:481)
> at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:431)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> One possible cause of this is the logic of removing inactive sensors. 
> Currently, in ClientQuotaManager, we create two sensors per clientId: a 
> throttleTimeSensor and a quotaSensor. Each sensor expires if it's not 
> actively updated for 1 hour. What can happen is that initially, the quota is 
> not exceeded. So, quotaSensor is being updated actively, but 
> throttleTimeSensor is not. At some point, throttleTimeSensor is removed by 
> the expiring thread. Now, we are in a situation that quotaSensor is 
> registered, but throttleTimeSensor is not. Later on, if the quota is 
> exceeded, we will hit the above NPE when trying to update throttleTimeSensor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3310) fetch requests can trigger repeated NPE when quota is enabled

2016-03-02 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3310:
---
Reviewer: Jun Rao
  Status: Patch Available  (was: Open)

> fetch requests can trigger repeated NPE when quota is enabled
> -
>
> Key: KAFKA-3310
> URL: https://issues.apache.org/jira/browse/KAFKA-3310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Jun Rao
> Fix For: 0.10.0.0
>
>
> We saw the following NPE when consumer quota is enabled. NPE is triggered on 
> every fetch request from the client.
> java.lang.NullPointerException
> at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:122)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$3(KafkaApis.scala:419)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:481)
> at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:431)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> One possible cause of this is the logic of removing inactive sensors. 
> Currently, in ClientQuotaManager, we create two sensors per clientId: a 
> throttleTimeSensor and a quotaSensor. Each sensor expires if it's not 
> actively updated for 1 hour. What can happen is that initially, the quota is 
> not exceeded. So, quotaSensor is being updated actively, but 
> throttleTimeSensor is not. At some point, throttleTimeSensor is removed by 
> the expiring thread. Now, we are in a situation that quotaSensor is 
> registered, but throttleTimeSensor is not. Later on, if the quota is 
> exceeded, we will hit the above NPE when trying to update throttleTimeSensor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: fix typo

2016-03-02 Thread xuwei-k
GitHub user xuwei-k opened a pull request:

https://github.com/apache/kafka/pull/996

MINOR: fix typo



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xuwei-k/kafka patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/996.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #996


commit e940b2e5bf5a4cc7fca9ef74e40ff2403c39412c
Author: kenji yoshida <6b656e6...@gmail.com>
Date:   2016-03-02T10:05:30Z

MINOR: fix typo




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2951) Additional authorization test cases

2016-03-02 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2951:
---
Fix Version/s: (was: 0.10.0.0)

> Additional authorization test cases
> ---
>
> Key: KAFKA-2951
> URL: https://issues.apache.org/jira/browse/KAFKA-2951
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
>
> There are a few test cases that are worth adding. I've run them manually, but 
> it sounds like a good idea to have them in:
> # Test incorrect topic name (authorization failure)
> # Test topic wildcard
> The first one is covered by checking access to a topic with no authorization, 
> which could happen for example if the user has a typo in the topic name. This 
> case is somewhat covered by the test case testProduceWithNoTopicAccess in 
> AuthorizerIntegrationTest, but not in EndToEndAuthorizationTest. The second 
> case consists of testing that using the topic wildcard works. This wildcard 
> might end up being commonly used and it is worth checking the functionality. 
> At the moment, I believe neither AuthorizerIntegrationTest nor 
> EndToEndAuthorizationTest covers it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2834) kafka-merge-pr.py should run unit tests before pushing it to trunk

2016-03-02 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2834:
---
Fix Version/s: (was: 0.10.0.0)

> kafka-merge-pr.py should run unit tests before pushing it to trunk
> --
>
> Key: KAFKA-2834
> URL: https://issues.apache.org/jira/browse/KAFKA-2834
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3173) Error while moving some partitions to OnlinePartition state

2016-03-02 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15175344#comment-15175344
 ] 

Ismael Juma commented on KAFKA-3173:


[~fpj] Are you planning to work on this?

> Error while moving some partitions to OnlinePartition state 
> 
>
> Key: KAFKA-3173
> URL: https://issues.apache.org/jira/browse/KAFKA-3173
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> We observed another instance of the problem reported in KAFKA-2300, but this 
> time the error appeared in the partition state machine. In KAFKA-2300, we 
> haven't cleaned up the state in {{PartitionStateMachine}} and 
> {{ReplicaStateMachine}} as we do in {{KafkaController}}.
> Here is the stack trace:
> {noformat}
> [2016-01-29 15:26:51,393] ERROR [Partition state machine on Controller 0]: 
> Error while moving some partitions to OnlinePartition state 
> (kafka.controller.PartitionStateMachine)
> java.lang.IllegalStateException: 
> Controller to broker state change requests batch is not empty while creating 
> a new one. 
> Some LeaderAndIsr state changes Map(0 -> Map(foo-0 -> (LeaderAndIsrInfo:
> (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:0)))
> might be lost
> at 
> kafka.controller.ControllerBrokerRequestBatch.newBatch(ControllerChannelManager.scala:254)
> at 
> kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:144)
> at 
> kafka.controller.KafkaController.onNewPartitionCreation(KafkaController.scala:517)
> at 
> kafka.controller.KafkaController.onNewTopicCreation(KafkaController.scala:504)
> at 
> kafka.controller.PartitionStateMachine$TopicChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(PartitionStateMachine.scala:437)
> at 
> kafka.controller.PartitionStateMachine$TopicChangeListener$$anonfun$handleChildChange$1.apply(PartitionStateMachine.scala:419)
> at 
> kafka.controller.PartitionStateMachine$TopicChangeListener$$anonfun$handleChildChange$1.apply(PartitionStateMachine.scala:419)
> at 
> kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at 
> kafka.controller.PartitionStateMachine$TopicChangeListener.handleChildChange(PartitionStateMachine.scala:418)
> at 
> org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:842)
> at 
> org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (KAFKA-2547) Make DynamicConfigManager to use the ZkNodeChangeNotificationListener introduced as part of KAFKA-2211

2016-03-02 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma closed KAFKA-2547.
--

This was merged a while ago.

> Make DynamicConfigManager to use the ZkNodeChangeNotificationListener 
> introduced as part of KAFKA-2211
> --
>
> Key: KAFKA-2547
> URL: https://issues.apache.org/jira/browse/KAFKA-2547
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
> Fix For: 0.10.0.0
>
>
> As part of KAFKA-2211 (https://github.com/apache/kafka/pull/195/files) we 
> introduced a reusable ZkNodeChangeNotificationListener to ensure node changes 
> can be processed in a loss less way. This was pretty much the same code in 
> DynamicConfigManager with little bit of refactoring so it can be reused. We 
> now need to make DynamicConfigManager itself to use this new class once 
> KAFKA-2211 is committed to avoid code duplication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-03-02 Thread Rajini Sivaram
Jun,

Thanks, I have added a note to the KIP. I will add a comment in the
implementation and also add a unit test to ensure that conflicts are
avoided when version number is modified.
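
For anyone skimming the packet-format point in the quoted thread below, a rough sketch of the client's initial mechanism packet as described there: a two-byte version (currently zero) followed by the mechanism name. The length-prefixed UTF-8 string encoding is my assumption; the authoritative format is in the KIP and the KAFKA-3149 PR.

{code}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class SaslMechanismPacketSketch {
    // Illustrative encoding only; see KIP-43 and the KAFKA-3149 PR for the
    // real wire format.
    static ByteBuffer encode(String mechanism) {
        byte[] mech = mechanism.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(2 + 2 + mech.length);
        buf.putShort((short) 0);           // two-byte version, currently zero
        buf.putShort((short) mech.length); // length prefix for the mechanism string
        buf.put(mech);
        buf.flip();
        return buf;
    }

    public static void main(String[] args) {
        System.out.println("PLAIN packet is " + encode("PLAIN").remaining() + " bytes");
    }
}
{code}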

On Tue, Mar 1, 2016 at 5:43 PM, Jun Rao  wrote:

> Rajini,
>
> Thanks for the explanation. For 1, this implies that we have to be careful
> with changing the 2-byte version in the future to avoid conflict. Could you
> document this in the KIP and also in the implementation?
>
> Jun
>
> On Tue, Mar 1, 2016 at 2:47 AM, Rajini Sivaram <
> rajinisiva...@googlemail.com
> > wrote:
>
> > Jun,
> >
> > Thank you for the review.
> >
> >
> >1. With GSSAPI, the first context establishment packet starts with the
> >byte 0x60 (tag for APPLICATION-0) followed by a variable-length
> encoded
> >size, followed by various tags and contents. And the packet also
> > contains a
> >checksum. This is completely different from the mechanism packet from
> > Kafka
> >clients which start with a two-byte version set to zero currently,
> > followed
> >by just a String mechanism.
> >2. Agreed, I have removed the version from the server response in the
> >KIP. Thanks.
> >
> >
> > On Tue, Mar 1, 2016 at 2:33 AM, Jun Rao  wrote:
> >
> > > Rajini,
> > >
> > > Thanks for the updates. Just a couple of minor comments.
> > >
> > > 1. With the default GSSAPI, what's the first packet that the client
> sends
> > > to the server? Is that completely different from the packet format that
> > we
> > > will use for non-GSSAPI mechanisms?
> > >
> > > 2. In the server response, it doesn't seem that we need to include the
> > > version since the client knows the version of the request that it
> sends.
> > >
> > > Jun
> > >
> > > On Mon, Feb 29, 2016 at 10:14 AM, Rajini Sivaram <
> > > rajinisiva...@googlemail.com> wrote:
> > >
> > > > Harsha,
> > > >
> > > > Thank you for the review. I will wait another day to see if there is
> > more
> > > > feedback and then start a voting thread.
> > > >
> > > > Rajini
> > > >
> > > > On Mon, Feb 29, 2016 at 2:51 PM, Harsha  wrote:
> > > >
> > > > > Rajini,
> > > > >   Thanks for the changes to the KIP. It looks good to
> > me. I
> > > > >   think we can move to voting.
> > > > > Thanks,
> > > > > Harsha
> > > > >
> > > > > On Mon, Feb 29, 2016, at 12:43 AM, Rajini Sivaram wrote:
> > > > > > I have added some more detail to the KIP based on the discussion
> in
> > > the
> > > > > > last KIP meeting to simplify support for multiple mechanisms.
> Have
> > > also
> > > > > > changed the property names to reflect this.
> > > > > >
> > > > > > Also updated the PR in
> > > > https://issues.apache.org/jira/browse/KAFKA-3149
> > > > > > to
> > > > > > reflect the KIP.
> > > > > >
> > > > > > Any feedback is appreciated.
> > > > > >
> > > > > >
> > > > > > On Tue, Feb 23, 2016 at 9:36 PM, Rajini Sivaram <
> > > > > > rajinisiva...@googlemail.com> wrote:
> > > > > >
> > > > > > > I have updated the KIP based on the discussion in the KIP
> meeting
> > > > > today.
> > > > > > >
> > > > > > > Comments and feedback are welcome.
> > > > > > >
> > > > > > > On Wed, Feb 3, 2016 at 7:20 PM, Rajini Sivaram <
> > > > > > > rajinisiva...@googlemail.com> wrote:
> > > > > > >
> > > > > > >> Hi Harsha,
> > > > > > >>
> > > > > > >> Thank you for the review. Can you clarify - I think you are
> > saying
> > > > > that
> > > > > > >> the client should send its mechanism over the wire to the
> > server.
> > > Is
> > > > > that
> > > > > > >> correct? The exchange is slightly different in the KIP (the PR
> > > > > matches the
> > > > > > >> KIP) from the one you described to enable interoperability
> with
> > > > > 0.9.0.0.
> > > > > > >>
> > > > > > >>
> > > > > > >> On Wed, Feb 3, 2016 at 1:56 PM, Harsha 
> wrote:
> > > > > > >>
> > > > > > >>> Rajini,
> > > > > > >>>I looked at the PR you have. I think its better
> with
> > > > your
> > > > > > >>>earlier approach rather than extending the
> protocol.
> > > > > > >>> What I was thinking initially is, Broker has a config option
> of
> > > say
> > > > > > >>> sasl.mechanism = GSSAPI, PLAIN
> > > > > > >>> and the client can have similar config of
> sasl.mechanism=PLAIN.
> > > > > Client
> > > > > > >>> can send its sasl mechanism before the handshake starts and
> if
> > > the
> > > > > > >>> broker accepts that particular mechanism than it can go ahead
> > > with
> > > > > > >>> handshake otherwise return a error saying that the mechanism
> > not
> > > > > > >>> allowed.
> > > > > > >>>
> > > > > > >>> Thanks,
> > > > > > >>> Harsha
> > > > > > >>>
> > > > > > >>> On Wed, Feb 3, 2016, at 04:58 AM, Rajini Sivaram wrote:
> > > > > > >>> > A slightly different approach for supporting different SASL
> > > > > mechanisms
> > > > > > >>> > within a broker is to allow the same "*security protocol*"
> to
> > > be
> > > > > used
> > > > > > >>> on
> > > > > > >>> > different ports with different configuration options. An
> > > > advantage
> > >