Re: Possible StreamingConfig Bug

2015-11-27 Thread Guozhang Wang
I think the problem is that "getConsumerConfigs", called in
StreamThread.createConsumer(), triggers getRestoreConsumerConfigs instead of
getBaseConsumerConfigs, which looks like a bug to me.

Guozhang

On Fri, Nov 27, 2015 at 8:01 PM, Yasuhiro Matsuda <
yasuhiro.mats...@gmail.com> wrote:

> The group id is removed from the restore consumer config because the
> restore consumer should not participate in the specified consumer group. I
> don't know why it is failing.
>
> On Fri, Nov 27, 2015 at 12:37 PM, Guozhang Wang 
> wrote:
>
> > Hello Bill,
> >
> > Thanks for reporting it, this is a valid issue, could you create a
> ticket?
> >
> > Guozhang
> >
> > On Fri, Nov 27, 2015 at 6:19 AM, Bill Bejeck  wrote:
> >
> > > All,
> > >
> > > When starting KafkaStreaming I'm getting the following error (even when
> > > explicitly setting the groupId with
> > > props.put("group.id", "test-consumer-group")):
> > >
> > > Exception in thread "StreamThread-1"
> > > org.apache.kafka.common.KafkaException:
> > > org.apache.kafka.common.errors.ApiException: The configured groupId is
> > > invalid
> > > at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:309)
> > > at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:198)
> > > Caused by: org.apache.kafka.common.errors.ApiException: The configured
> > > groupId is invalid
> > >
> > > I've traced the source of the issue to the
> > > StreamingConfig.getConsumerConfigs method as it calls
> > > getRestoreConsumerConfigs (which explicitly removes the groupId
> property)
> > > vs using getBaseConsumerConfigs which returns the passed in configs
> > > unaltered.
> > >
> > > When I switched the method call, KafkaStreaming starts up fine.
> > >
> > > If you agree with this change/fix, I'll create a Jira ticket and put in
> > the
> > > PR, yada yada yada..
> > >
> > > Thanks,
> > > Bill
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>



-- 
-- Guozhang


Re: Possible StreamingConfig Bug

2015-11-27 Thread Yasuhiro Matsuda
The group id is removed from the restore consumer config because the
restore consumer should not participate in the specified consumer group. I
don't know why it is failing.
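The split described here can be sketched roughly as follows. This is a minimal, hypothetical model in plain java.util.Properties terms (not the actual StreamingConfig code): the base path hands back the user's configs unaltered, while the restore path copies them and drops group.id so the restore consumer stays out of the consumer group.

```java
import java.util.Properties;

public class RestoreConsumerConfigSketch {

    // Base path: return the passed-in configs unaltered.
    static Properties getBaseConsumerConfigs(Properties props) {
        return props;
    }

    // Restore path: copy the configs and strip group.id, so the restore
    // consumer does not participate in the specified consumer group.
    static Properties getRestoreConsumerConfigs(Properties props) {
        Properties copy = new Properties();
        copy.putAll(props);
        copy.remove("group.id");
        return copy;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("group.id", "test-consumer-group");
        System.out.println(getBaseConsumerConfigs(props).containsKey("group.id"));    // true
        System.out.println(getRestoreConsumerConfigs(props).containsKey("group.id")); // false
    }
}
```

Under this sketch, the reported bug amounts to the main consumer being built via the restore path, so its group.id never reaches the group coordinator.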

On Fri, Nov 27, 2015 at 12:37 PM, Guozhang Wang  wrote:

> Hello Bill,
>
> Thanks for reporting it, this is a valid issue, could you create a ticket?
>
> Guozhang
>
> On Fri, Nov 27, 2015 at 6:19 AM, Bill Bejeck  wrote:
>
> > All,
> >
> > When starting KafkaStreaming I'm getting the following error (even when
> > explicitly setting the groupId with
> > props.put("group.id", "test-consumer-group")):
> >
> > Exception in thread "StreamThread-1"
> > org.apache.kafka.common.KafkaException:
> > org.apache.kafka.common.errors.ApiException: The configured groupId is
> > invalid
> > at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:309)
> > at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:198)
> > Caused by: org.apache.kafka.common.errors.ApiException: The configured
> > groupId is invalid
> >
> > I've traced the source of the issue to the
> > StreamingConfig.getConsumerConfigs method as it calls
> > getRestoreConsumerConfigs (which explicitly removes the groupId property)
> > vs using getBaseConsumerConfigs which returns the passed in configs
> > unaltered.
> >
> > When I switched the method call, KafkaStreaming starts up fine.
> >
> > If you agree with this change/fix, I'll create a Jira ticket and put in
> the
> > PR, yada yada yada..
> >
> > Thanks,
> > Bill
> >
>
>
>
> --
> -- Guozhang
>


[jira] [Updated] (KAFKA-2902) StreamingConfig getConsumerConfiigs uses getRestoreConsumerConfigs instead of getBaseConsumerConfigs

2015-11-27 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-2902:
---
Description: 
When starting a KafkaStreaming instance the StreamingConfig.getConsumerConfigs 
method uses the getRestoreConsumerConfigs to retrieve properties. But this 
method removes the groupId property which causes an error and the 
KafkaStreaming instance shuts down.  On KafkaStreaming startup StreamingConfig 
should use getBaseConsumerConfigs instead.

Exception in thread "StreamThread-1" org.apache.kafka.common.KafkaException: 
org.apache.kafka.common.errors.ApiException: The configured groupId is invalid
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:309)
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:198)
Caused by: org.apache.kafka.common.errors.ApiException: The configured groupId 
is invalid 

  was:When starting a KafkaStreaming instance the 
StreamingConfig.getConsumerConfigs method uses the getRestoreConsumerConfigs to 
retrieve properties. But this method removes the groupId property which causes 
an error and the KafkaStreaming instance shuts down.  On KafkaStreaming startup 
StreamingConfig should use getBaseConsumerConfigs instead.


> StreamingConfig getConsumerConfiigs uses getRestoreConsumerConfigs instead of 
>  getBaseConsumerConfigs
> -
>
> Key: KAFKA-2902
> URL: https://issues.apache.org/jira/browse/KAFKA-2902
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
> Fix For: 0.9.1.0
>
>
> When starting a KafkaStreaming instance the 
> StreamingConfig.getConsumerConfigs method uses the getRestoreConsumerConfigs 
> to retrieve properties. But this method removes the groupId property which 
> causes an error and the KafkaStreaming instance shuts down.  On 
> KafkaStreaming startup StreamingConfig should use getBaseConsumerConfigs 
> instead.
> Exception in thread "StreamThread-1" org.apache.kafka.common.KafkaException: 
> org.apache.kafka.common.errors.ApiException: The configured groupId is invalid
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:309)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:198)
> Caused by: org.apache.kafka.common.errors.ApiException: The configured 
> groupId is invalid 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2903) FileMessageSet's read method maybe has problem when start is not zero

2015-11-27 Thread Pengwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengwei updated KAFKA-2903:
---
Affects Version/s: 0.9.0.0
   0.8.2.1
Fix Version/s: 0.9.0.0

> FileMessageSet's read method maybe has problem when start is not zero
> -
>
> Key: KAFKA-2903
> URL: https://issues.apache.org/jira/browse/KAFKA-2903
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.2.1, 0.9.0.0
>Reporter: Pengwei
>Assignee: Jay Kreps
> Fix For: 0.9.0.0
>
>
> Now the code is:
> def read(position: Int, size: Int): FileMessageSet = {
>   ...
>   new FileMessageSet(file,
>                      channel,
>                      start = this.start + position,
>                      end = math.min(this.start + position + size, sizeInBytes()))
> }
> If this.start is not 0, the end is capped at the FileMessageSet's size, not
> at the actual end position within the file.
> The end parameter should be:
>  end = math.min(this.start + position + size, this.start + sizeInBytes())



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2903) FileMessageSet's read method maybe has problem when start is not zero

2015-11-27 Thread Pengwei (JIRA)
Pengwei created KAFKA-2903:
--

 Summary: FileMessageSet's read method maybe has problem when start 
is not zero
 Key: KAFKA-2903
 URL: https://issues.apache.org/jira/browse/KAFKA-2903
 Project: Kafka
  Issue Type: Bug
  Components: log
Reporter: Pengwei
Assignee: Jay Kreps


Now the code is:

def read(position: Int, size: Int): FileMessageSet = {
  ...
  new FileMessageSet(file,
                     channel,
                     start = this.start + position,
                     end = math.min(this.start + position + size, sizeInBytes()))
}

If this.start is not 0, the end is capped at the FileMessageSet's size, not at
the actual end position within the file.
The end parameter should be:

 end = math.min(this.start + position + size, this.start + sizeInBytes())
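To see the off-by-start effect concretely, here is a small self-contained model with hypothetical numbers (a sketch, not the actual FileMessageSet code) comparing the current cap against the proposed one. sizeInBytes() returns a length, not a file position, so using it alone as the cap breaks as soon as the view does not begin at offset 0:

```java
public class FileSliceSketch {
    // Model a view [start, end) over a file, mirroring how a sliced
    // FileMessageSet reports its size as end - start.
    final int start, end;

    FileSliceSketch(int start, int end) { this.start = start; this.end = end; }

    int sizeInBytes() { return end - start; }

    // Current behavior: caps the new end at this view's *size* (a length),
    // which is wrong whenever this.start != 0.
    FileSliceSketch readBuggy(int position, int size) {
        return new FileSliceSketch(start + position,
                Math.min(start + position + size, sizeInBytes()));
    }

    // Proposed fix: caps at this.start + sizeInBytes(), i.e. this view's
    // true end position in the file.
    FileSliceSketch readFixed(int position, int size) {
        return new FileSliceSketch(start + position,
                Math.min(start + position + size, start + sizeInBytes()));
    }

    public static void main(String[] args) {
        // A 100-byte view beginning at file offset 100.
        FileSliceSketch slice = new FileSliceSketch(100, 200);
        System.out.println(slice.readBuggy(10, 50).end); // 100: capped before the read even starts (start is 110)
        System.out.println(slice.readFixed(10, 50).end); // 160: start(100) + position(10) + size(50)
    }
}
```

With start = 0 the two caps coincide, which is presumably why the current code appears to work in the common case.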





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: fix verifiable consumer assertion

2015-11-27 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/597

MINOR: fix verifiable consumer assertion



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka 
fix-verifiable-consumer-assertion

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/597.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #597


commit b595fa420563cd4e7eb7fd1d16ba9adc2ab8
Author: Jason Gustafson 
Date:   2015-11-28T01:14:22Z

MINOR: fix verifiable consumer assertion




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2891) Gaps in messages delivered by new consumer after Kafka restart

2015-11-27 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15030337#comment-15030337
 ] 

Jason Gustafson commented on KAFKA-2891:


[~benstopford] To be clear, are you saying that the message gap is on the 
server side? In other words, the messages were successfully acked by the 
producer, but were then lost? 

> Gaps in messages delivered by new consumer after Kafka restart
> --
>
> Key: KAFKA-2891
> URL: https://issues.apache.org/jira/browse/KAFKA-2891
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Priority: Critical
>
> Replication tests when run with the new consumer with SSL/SASL were failing 
> very often because messages were not being consumed from some topics after a 
> Kafka restart. The fix in KAFKA-2877 has made this a lot better. But I am 
> still seeing some failures (less often now) because a small set of messages 
> are not received after Kafka restart. This failure looks slightly different 
> from the one before the fix for KAFKA-2877 was applied, hence the new defect. 
> The test fails because not all acked messages are received by the consumer, 
> and the number of messages missing are quite small.
> [~benstopford] Are the upgrade tests working reliably with KAFKA-2877 now?
> Not sure if any of these log entries are important:
> {quote}
> [2015-11-25 14:41:12,342] INFO SyncGroup for group test-consumer-group failed 
> due to NOT_COORDINATOR_FOR_GROUP, will find new coordinator and rejoin 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2015-11-25 14:41:12,342] INFO Marking the coordinator 2147483644 dead. 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2015-11-25 14:41:12,958] INFO Attempt to join group test-consumer-group 
> failed due to unknown member id, resetting and retrying. 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2015-11-25 14:41:42,437] INFO Fetch offset null is out of range, resetting 
> offset (org.apache.kafka.clients.consumer.internals.Fetcher)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: group protocol/metadata documentation

2015-11-27 Thread Jason Gustafson
Hey Dana,

The intention of the UserData field is to allow custom partition
assignments to leverage member-specific metadata. For example, it might
include the rack name of the host for a rack-aware assignment strategy or
the number of cpus for a resource-based assignment strategy. The strategies
shipped with Kafka just leave this field empty.
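For example, a custom assignor could round-trip a rack name through the opaque UserData bytes. This is a hedged sketch: the encoding is entirely up to the assignor (the broker never interprets these bytes), and this helper class and its length-prefixed layout are hypothetical, not a format shipped with Kafka.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class UserDataSketch {

    // Encode a rack name as length-prefixed UTF-8 bytes; only the assignor
    // running on the group leader would ever decode this.
    static ByteBuffer encodeRack(String rack) {
        byte[] bytes = rack.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(2 + bytes.length);
        buf.putShort((short) bytes.length);
        buf.put(bytes);
        buf.flip();
        return buf;
    }

    static String decodeRack(ByteBuffer userData) {
        short len = userData.getShort();
        byte[] bytes = new byte[len];
        userData.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer userData = encodeRack("rack-1a");
        System.out.println(decodeRack(userData)); // rack-1a
    }
}
```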

I think the main use case for having different topic subscriptions in a
group is a rolling upgrade which involves a change in the consumed topics.
I don't know of any cases where different subscriptions are needed in a
stable state.

By the way, I spent a little time updating the protocol documentation here:
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-GroupMembershipAPI.
It's still a little incomplete, so I'll keep working on it. Feel free to
ask further questions here if something is unclear.

-Jason

On Wed, Nov 25, 2015 at 2:23 AM, Dana Powers  wrote:

> Thanks, Jason. I see the range and roundrobin assignment strategies
> documented in the source. I don't see userdata used by either -- is that
> correct (I may be misreading)? The notes suggest userdata for something
> more detailed in the future, like rack-aware placements?
>
> One other question: in what circumstances would consumer processes in a
> single group want to use different topic subscriptions rather than
> configure a new group?
>
> Thanks again,
>
> -Dana
> On Nov 25, 2015 8:59 AM, "Jason Gustafson"  wrote:
>
> > Hey Dana,
> >
> > Have a look at this wiki, which has more detail on the consumer's
> embedded
> > protocol:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal
> > .
> >
> > At the moment, the group protocol supports consumer groups and kafka
> > connect groups. Kafka tooling currently depends on the structure for
> > these protocol types, so reuse of the same names might cause problems. I
> > will look into updating the protocol documentation to standardize the
> > protocol formats that are in use and to provide guidance for client
> > implementations. My own view is that unless there's a good reason not
> > to, all consumer implementations should use the same consumer protocol
> > format so that tooling will work correctly.
> >
> > Thanks,
> > Jason
> >
> >
> >
> > On Tue, Nov 24, 2015 at 4:16 PM, Dana Powers 
> > wrote:
> >
> > > Hi all - I've been reading through the wiki docs and mailing list
> threads
> > > for the new JoinGroup/SyncGroup/Heartbeat APIs, hoping to add
> > functionality
> > > to the python driver. It appears that there is a shared notion of group
> > > "protocols" (client publishes supported protocols, coordinator picks
> > > protocol for group to use), and their associated metadata. Is there any
> > > documentation available for existing protocols? Will there be an
> official
> > > place to document that supporting protocol X means foo? I think I can
> > > probably construct a simple working protocol, but if I pick a protocol
> > name
> > > that already exists, will things break?
> > >
> > > -Dana
> > >
> >
>


[jira] [Updated] (KAFKA-2902) StreamingConfig getConsumerConfiigs uses getRestoreConsumerConfigs instead of getBaseConsumerConfigs

2015-11-27 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-2902:
---
Status: Patch Available  (was: Open)

Changes made so that StreamingConfig's getConsumerConfigs call uses 
getBaseConsumerConfigs

> StreamingConfig getConsumerConfiigs uses getRestoreConsumerConfigs instead of 
>  getBaseConsumerConfigs
> -
>
> Key: KAFKA-2902
> URL: https://issues.apache.org/jira/browse/KAFKA-2902
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
> Fix For: 0.9.1.0
>
>
> When starting a KafkaStreaming instance the 
> StreamingConfig.getConsumerConfigs method uses the getRestoreConsumerConfigs 
> to retrieve properties. But this method removes the groupId property which 
> causes an error and the KafkaStreaming instance shuts down.  On 
> KafkaStreaming startup StreamingConfig should use getBaseConsumerConfigs 
> instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work stopped] (KAFKA-2902) StreamingConfig getConsumerConfiigs uses getRestoreConsumerConfigs instead of getBaseConsumerConfigs

2015-11-27 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-2902 stopped by Bill Bejeck.
--
> StreamingConfig getConsumerConfiigs uses getRestoreConsumerConfigs instead of 
>  getBaseConsumerConfigs
> -
>
> Key: KAFKA-2902
> URL: https://issues.apache.org/jira/browse/KAFKA-2902
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
> Fix For: 0.9.1.0
>
>
> When starting a KafkaStreaming instance the 
> StreamingConfig.getConsumerConfigs method uses the getRestoreConsumerConfigs 
> to retrieve properties. But this method removes the groupId property which 
> causes an error and the KafkaStreaming instance shuts down.  On 
> KafkaStreaming startup StreamingConfig should use getBaseConsumerConfigs 
> instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-2902) StreamingConfig getConsumerConfiigs uses getRestoreConsumerConfigs instead of getBaseConsumerConfigs

2015-11-27 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-2902 started by Bill Bejeck.
--
> StreamingConfig getConsumerConfiigs uses getRestoreConsumerConfigs instead of 
>  getBaseConsumerConfigs
> -
>
> Key: KAFKA-2902
> URL: https://issues.apache.org/jira/browse/KAFKA-2902
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
> Fix For: 0.9.1.0
>
>
> When starting a KafkaStreaming instance the 
> StreamingConfig.getConsumerConfigs method uses the getRestoreConsumerConfigs 
> to retrieve properties. But this method removes the groupId property which 
> causes an error and the KafkaStreaming instance shuts down.  On 
> KafkaStreaming startup StreamingConfig should use getBaseConsumerConfigs 
> instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: Kafka 2902 streaming config use get base consu...

2015-11-27 Thread bbejeck
GitHub user bbejeck opened a pull request:

https://github.com/apache/kafka/pull/596

Kafka 2902 streaming config use get base consumer configs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbejeck/kafka 
KAFKA-2902-StreamingConfig-use-getBaseConsumerConfigs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/596.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #596


commit 0702683d15f1357278a8121ba3ea0daa6b4695d2
Author: bbejeck 
Date:   2015-11-27T22:19:41Z

KAKFA-2902 Changed StreamingConfig.getConsumerConfigs to use 
getBaseConsumerConfigs instead of getRestoreConsumerConfigs

commit d4cc41cf24053ba800d5ff864e48342278cb3240
Author: bbejeck 
Date:   2015-11-27T22:49:23Z

KAKFA-2902 added license, checkstyle fixes




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #861

2015-11-27 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2899: improve logging when unexpected exceptions thrown in 
reading

[wangguoz] MINOR: Avoiding warning about generics in sample code

--
[...truncated 1371 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchange

Build failed in Jenkins: kafka-trunk-jdk8 #190

2015-11-27 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2899: improve logging when unexpected exceptions thrown in 
reading

[wangguoz] MINOR: Avoiding warning about generics in sample code

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-2 (docker Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 1dd558aeceffdd98da83f8240cdc3bf8bc182c93 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 1dd558aeceffdd98da83f8240cdc3bf8bc182c93
 > git rev-list 4a0e011be3d038763d6326bb0092524f809c3f4d # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson4637450632307650922.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 15.795 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson5102563539677951093.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.9/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJavawarning: [options] bootstrap class path 
not set in conjunction with -source 1.7
Note: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not add entry 
'
 to cache fileHashes.bin 
(
> Corrupted FreeListBlock 669777 found in cache 
> '

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 18.634 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


[jira] [Created] (KAFKA-2902) StreamingConfig getConsumerConfiigs uses getRestoreConsumerConfigs instead of getBaseConsumerConfigs

2015-11-27 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-2902:
--

 Summary: StreamingConfig getConsumerConfiigs uses 
getRestoreConsumerConfigs instead of  getBaseConsumerConfigs
 Key: KAFKA-2902
 URL: https://issues.apache.org/jira/browse/KAFKA-2902
 Project: Kafka
  Issue Type: Bug
  Components: kafka streams
Reporter: Bill Bejeck
Assignee: Bill Bejeck
 Fix For: 0.9.1.0


When starting a KafkaStreaming instance the StreamingConfig.getConsumerConfigs 
method uses the getRestoreConsumerConfigs to retrieve properties. But this 
method removes the groupId property which causes an error and the 
KafkaStreaming instance shuts down.  On KafkaStreaming startup StreamingConfig 
should use getBaseConsumerConfigs instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Possible StreamingConfig Bug

2015-11-27 Thread Guozhang Wang
Hello Bill,

Thanks for reporting it, this is a valid issue, could you create a ticket?

Guozhang

On Fri, Nov 27, 2015 at 6:19 AM, Bill Bejeck  wrote:

> All,
>
> When starting KafkaStreaming I'm getting the following error (even when
> explicitly setting the groupId with
> props.put("group.id", "test-consumer-group")):
>
> Exception in thread "StreamThread-1"
> org.apache.kafka.common.KafkaException:
> org.apache.kafka.common.errors.ApiException: The configured groupId is
> invalid
> at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:309)
> at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:198)
> Caused by: org.apache.kafka.common.errors.ApiException: The configured
> groupId is invalid
>
> I've traced the source of the issue to the
> StreamingConfig.getConsumerConfigs method as it calls
> getRestoreConsumerConfigs (which explicitly removes the groupId property)
> vs using getBaseConsumerConfigs which returns the passed in configs
> unaltered.
>
> When I switched the method call, KafkaStreaming starts up fine.
>
> If you agree with this change/fix, I'll create a Jira ticket and put in the
> PR, yada yada yada..
>
> Thanks,
> Bill
>



-- 
-- Guozhang


Build failed in Jenkins: kafka_0.9.0_jdk7 #47

2015-11-27 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2899: improve logging when unexpected exceptions thrown in 
reading

[wangguoz] MINOR: Avoiding warning about generics in sample code

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-1 (docker Ubuntu ubuntu ubuntu1) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/0.9.0^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/0.9.0^{commit} # timeout=10
Checking out Revision e934a107f8e79a1169c02fb326014fcf91e9ceb5 
(refs/remotes/origin/0.9.0)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e934a107f8e79a1169c02fb326014fcf91e9ceb5
 > git rev-list 7d37086e5e3225b003fc353b86a130a56c1a469b # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka_0.9.0_jdk7] $ /bin/bash -xe /tmp/hudson3235812989523435919.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 11.534 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka_0.9.0_jdk7] $ /bin/bash -xe /tmp/hudson470801938949250.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 --stacktrace clean jarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka_0.9.0_jdk7:clients:compileJava
Note:
 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not add entry 
'
 to cache fileHashes.bin 
(
> Corrupted FreeListBlock 651053 found in cache 
> '

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.UncheckedIOException: Could not add entry 
'
 to cache fileHashes.bin 
(
at 
org.gradle.cache.internal.btree.BTreePersistentIndexedCache.put(BTreePersistentIndexedCache.java:155)
at 
org.gradle.cache.internal.DefaultMultiProcessSafePersistentIndexedCache$2.run(DefaultMultiProcessSafePersistentIndexedCache.java:51)
at 
org.gradle.cache.internal.DefaultFileLockManager$DefaultFileLock.doWriteAction(DefaultFileLockManager.java:173)
at 
org.gradle.cache.internal.DefaultFileLockManager$DefaultFileLock.writeFile(DefaultFileLockManager.java:163)
at 
org.gradle.cache.internal.DefaultCacheAccess$UnitOfWorkFileAccess.writeFile(DefaultCacheAccess.java:404)
at 
org.gradle.cache.internal.DefaultMultiProcessSafePersistentIndexedCache.put(DefaultMultiProcessS

[GitHub] kafka pull request: Avoiding warning about generics in sample code

2015-11-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/594


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2899) Should log unexpected exceptions thrown when reading from local log

2015-11-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15030210#comment-15030210
 ] 

ASF GitHub Bot commented on KAFKA-2899:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/593


> Should log unexpected exceptions thrown when reading from local log
> ---
>
> Key: KAFKA-2899
> URL: https://issues.apache.org/jira/browse/KAFKA-2899
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
> Fix For: 0.9.0.1
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2899) Should log unexpected exceptions thrown when reading from local log

2015-11-27 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2899:
-
Assignee: Ben Stopford

> Should log unexpected exceptions thrown when reading from local log
> ---
>
> Key: KAFKA-2899
> URL: https://issues.apache.org/jira/browse/KAFKA-2899
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Assignee: Ben Stopford
> Fix For: 0.9.0.1
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2899) Should log unexpected exceptions thrown when reading from local log

2015-11-27 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2899.
--
   Resolution: Fixed
Fix Version/s: 0.9.0.1

Issue resolved by pull request 593
[https://github.com/apache/kafka/pull/593]

> Should log unexpected exceptions thrown when reading from local log
> ---
>
> Key: KAFKA-2899
> URL: https://issues.apache.org/jira/browse/KAFKA-2899
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
> Fix For: 0.9.0.1
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2899: (trivial) Log unexpected exception...

2015-11-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/593


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2891) Gaps in messages delivered by new consumer after Kafka restart

2015-11-27 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15030082#comment-15030082
 ] 

Ben Stopford commented on KAFKA-2891:
-

One more bit of info: when this problem occurs, the missing messages are not in 
the server data files, which implies the problem is on the consumer side. 
However, we don't seem to see this when the old consumer is used. 

> Gaps in messages delivered by new consumer after Kafka restart
> --
>
> Key: KAFKA-2891
> URL: https://issues.apache.org/jira/browse/KAFKA-2891
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Priority: Critical
>
> Replication tests when run with the new consumer with SSL/SASL were failing 
> very often because messages were not being consumed from some topics after a 
> Kafka restart. The fix in KAFKA-2877 has made this a lot better. But I am 
> still seeing some failures (less often now) because a small set of messages 
> are not received after Kafka restart. This failure looks slightly different 
> from the one before the fix for KAFKA-2877 was applied, hence the new defect. 
> The test fails because not all acked messages are received by the consumer, 
> and the number of messages missing are quite small.
> [~benstopford] Are the upgrade tests working reliably with KAFKA-2877 now?
> Not sure if any of these log entries are important:
> {quote}
> [2015-11-25 14:41:12,342] INFO SyncGroup for group test-consumer-group failed 
> due to NOT_COORDINATOR_FOR_GROUP, will find new coordinator and rejoin 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2015-11-25 14:41:12,342] INFO Marking the coordinator 2147483644 dead. 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2015-11-25 14:41:12,958] INFO Attempt to join group test-consumer-group 
> failed due to unknown member id, resetting and retrying. 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2015-11-25 14:41:42,437] INFO Fetch offset null is out of range, resetting 
> offset (org.apache.kafka.clients.consumer.internals.Fetcher)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2901) Extend ListGroups and DescribeGroup APIs to cover offsets

2015-11-27 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15030059#comment-15030059
 ] 

Ben Stopford commented on KAFKA-2901:
-

Thanks, Andy. Just looping in [~hachikuji]

> Extend ListGroups and DescribeGroup APIs to cover offsets
> -
>
> Key: KAFKA-2901
> URL: https://issues.apache.org/jira/browse/KAFKA-2901
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: The Data Lorax
>Assignee: Neha Narkhede
> Fix For: 0.9.0.1
>
>
> The {{ListGroupsRequest}} and {{DescribeGroupsRequest}} added to 0.9.0.0 
> allow admin tools to get details of consumer groups now that this information 
> is not stored in ZK.
> The brokers also now store offset information for consumer groups. At 
> present, there is no API for admin tools to discover the groups that brokers 
> are storing offsets for.
> For example, if a consumer is using the new consumer api, is storing offsets 
> in Kafka under the groupId 'Bob', but is using manual partition assignment, 
> then the {{groupCache}} in the {{GroupMetadataManager}} will not contain any 
> information about the group 'Bob'. However, the {{offsetCache}} in the 
> {{GroupMetadataManager}} will contain information about 'Bob'.
> Currently the only way for admin tools to know the full set of groups being 
> managed by Kafka, i.e. those storing offsets in Kafka, those using Kafka for 
> balancing of consumer groups, and those using Kafka for both, is to consume 
> the offset topic.
> We need to extend the List/Describe groups API to allow admin tools to 
> discover 'Bob' and allow the partition offsets to be retrieved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2901) Extend ListGroups and DescribeGroup APIs to cover offsets

2015-11-27 Thread The Data Lorax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

The Data Lorax updated KAFKA-2901:
--
Description: 
The {{ListGroupsRequest}} and {{DescribeGroupsRequest}} added to 0.9.0.0 allow 
admin tools to get details of consumer groups now that this information is not 
stored in ZK.

The brokers also now store offset information for consumer groups. At present, 
there is no API for admin tools to discover the groups that brokers are storing 
offsets for.

For example, if a consumer is using the new consumer api, is storing offsets in 
Kafka under the groupId 'Bob', but is using manual partition assignment, then 
the {{groupCache}} in the {{GroupMetadataManager}} will not contain any 
information about the group 'Bob'. However, the {{offsetCache}} in the 
{{GroupMetadataManager}} will contain information about 'Bob'.

Currently the only way for admin tools to know the full set of groups being 
managed by Kafka, i.e. those storing offsets in Kafka, those using Kafka for 
balancing of consumer groups, and those using Kafka for both, is to consume the 
offset topic.

We need to extend the List/Describe groups API to allow admin tools to discover 
'Bob' and allow the partition offsets to be retrieved.

  was:
The {{ListGroupsRequest}} and {{DescribeGroupsRequest}} added to 0.9.0.0 allow 
admin tools to get details of consumer groups now that this information is not 
stored in ZK.

The brokers also now store offset information for consumer groups. At present, 
there is no API for admin tools to discover the groups that brokers are storing 
offsets for.

For example, if a consumer is using the new consumer api, is storing offsets in 
Kafka under the groupId 'Bob', but is using manual partition assignment, then 
the {{groupCache}} in the {{GroupMetadataManager}} will not contain any 
information about the group 'Bob'. However, the {{offsetCache}} in the 
{{GroupMetadataManager}} will contain information about 'Bob'.

We need to extend the List/Describe groups API to allow admin tools to discover 
'Bob' and allow the partition offsets to be retrieved.


> Extend ListGroups and DescribeGroup APIs to cover offsets
> -
>
> Key: KAFKA-2901
> URL: https://issues.apache.org/jira/browse/KAFKA-2901
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: The Data Lorax
>Assignee: Neha Narkhede
> Fix For: 0.9.0.1
>
>
> The {{ListGroupsRequest}} and {{DescribeGroupsRequest}} added to 0.9.0.0 
> allow admin tools to get details of consumer groups now that this information 
> is not stored in ZK.
> The brokers also now store offset information for consumer groups. At 
> present, there is no API for admin tools to discover the groups that brokers 
> are storing offsets for.
> For example, if a consumer is using the new consumer api, is storing offsets 
> in Kafka under the groupId 'Bob', but is using manual partition assignment, 
> then the {{groupCache}} in the {{GroupMetadataManager}} will not contain any 
> information about the group 'Bob'. However, the {{offsetCache}} in the 
> {{GroupMetadataManager}} will contain information about 'Bob'.
> Currently the only way for admin tools to know the full set of groups being 
> managed by Kafka, i.e. those storing offsets in Kafka, those using Kafka for 
> balancing of consumer groups, and those using Kafka for both, is to consume 
> the offset topic.
> We need to extend the List/Describe groups API to allow admin tools to 
> discover 'Bob' and allow the partition offsets to be retrieved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Possible StreamingConfig Bug

2015-11-27 Thread Bill Bejeck
All,

When starting KafkaStreaming I'm getting the following error (even when
explicitly setting the groupId with props.put("group.id", "test-consumer-group")):

Exception in thread "StreamThread-1"
org.apache.kafka.common.KafkaException:
org.apache.kafka.common.errors.ApiException: The configured groupId is
invalid
at
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:309)
at
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:198)
Caused by: org.apache.kafka.common.errors.ApiException: The configured
groupId is invalid

I've traced the source of the issue to the
StreamingConfig.getConsumerConfigs method: it calls
getRestoreConsumerConfigs (which explicitly removes the groupId property)
instead of getBaseConsumerConfigs, which returns the passed-in configs
unaltered.

When I switched the method call, KafkaStreaming starts up fine.

If you agree with this change/fix, I'll create a Jira ticket and put in the
PR, yada yada yada..

Thanks,
Bill
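
The config-routing mistake described above can be sketched with plain
java.util.Properties. The class and method names below mirror the ones in the
report but are an illustrative stand-in, not the actual Kafka Streams internals:

```java
import java.util.Properties;

public class StreamingConfigSketch {
    // Sketch: the restore consumer must not join the consumer group,
    // so its copy of the configs has group.id removed on purpose.
    static Properties getRestoreConsumerConfigs(Properties base) {
        Properties copy = new Properties();
        copy.putAll(base);
        copy.remove("group.id"); // restore consumer is group-less by design
        return copy;
    }

    // Sketch: the main consumer's configs are returned unaltered.
    static Properties getBaseConsumerConfigs(Properties base) {
        return base;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("group.id", "test-consumer-group");

        // Bug scenario: the main consumer is built from the restore-consumer
        // configs, so the explicitly set group.id silently disappears and the
        // broker later rejects the missing groupId.
        Properties wrong = getRestoreConsumerConfigs(props);
        System.out.println("restore-config path group.id: " + wrong.get("group.id")); // null

        Properties right = getBaseConsumerConfigs(props);
        System.out.println("base-config path group.id: " + right.get("group.id"));
    }
}
```

Switching the call from the restore variant to the base variant, as Bill
describes, is exactly the difference between the two paths above.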


[jira] [Created] (KAFKA-2901) Extend ListGroups and DescribeGroup APIs to cover offsets

2015-11-27 Thread The Data Lorax (JIRA)
The Data Lorax created KAFKA-2901:
-

 Summary: Extend ListGroups and DescribeGroup APIs to cover offsets
 Key: KAFKA-2901
 URL: https://issues.apache.org/jira/browse/KAFKA-2901
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Affects Versions: 0.9.0.0
Reporter: The Data Lorax
Assignee: Neha Narkhede
 Fix For: 0.9.0.1


The {{ListGroupsRequest}} and {{DescribeGroupsRequest}} added to 0.9.0.0 allow 
admin tools to get details of consumer groups now that this information is not 
stored in ZK.

The brokers also now store offset information for consumer groups. At present, 
there is no API for admin tools to discover the groups that brokers are storing 
offsets for.

For example, if a consumer is using the new consumer api, is storing offsets in 
Kafka under the groupId 'Bob', but is using manual partition assignment, then 
the {{groupCache}} in the {{GroupMetadataManager}} will not contain any 
information about the group 'Bob'. However, the {{offsetCache}} in the 
{{GroupMetadataManager}} will contain information about 'Bob'.

We need to extend the List/Describe groups API to allow admin tools to discover 
'Bob' and allow the partition offsets to be retrieved.
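
The groupCache/offsetCache split described above can be illustrated with a toy
sketch. The field names echo the JIRA description, but the simplified maps and
methods are illustrative only, not the real GroupMetadataManager:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class GroupMetadataSketch {
    // Toy stand-in: one map tracks groups using Kafka for balancing,
    // another tracks groups that only store offsets in Kafka.
    final Map<String, String> groupCache = new HashMap<>();
    final Map<String, Map<Integer, Long>> offsetCache = new HashMap<>();

    // A ListGroups that only consults the group cache cannot see a
    // manual-assignment group like 'Bob' that merely commits offsets.
    Set<String> listGroups() {
        return groupCache.keySet();
    }

    // The extension requested here: report the union of both caches.
    Set<String> listGroupsIncludingOffsets() {
        Set<String> all = new HashSet<>(groupCache.keySet());
        all.addAll(offsetCache.keySet());
        return all;
    }

    public static void main(String[] args) {
        GroupMetadataSketch mgr = new GroupMetadataSketch();
        // 'Bob' uses manual partition assignment but commits offsets to Kafka.
        mgr.offsetCache.put("Bob", Map.of(0, 42L));

        System.out.println(mgr.listGroups().contains("Bob"));                 // false
        System.out.println(mgr.listGroupsIncludingOffsets().contains("Bob")); // true
    }
}
```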



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #189

2015-11-27 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2882: Add constructor cache for Snappy and LZ4 Output/Input

--
[...truncated 2793 lines...]

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest >

[jira] [Commented] (KAFKA-2875) Class path contains multiple SLF4J bindings warnings when using scripts under bin

2015-11-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15029592#comment-15029592
 ] 

ASF GitHub Bot commented on KAFKA-2875:
---

GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/595

KAFKA-2875: remove slf4j multi binding warnings when running from source 
distribution



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2875

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/595.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #595


commit 95374147a28208d4850f6e73f714bf418935fc2d
Author: ZoneMayor 
Date:   2015-11-27T03:49:34Z

Merge pull request #1 from apache/trunk

merge

commit 58bb01c47a1054301d7416503d51912611473087
Author: jinxing 
Date:   2015-11-27T08:15:57Z

KAFKA-2875: remove slf4j multi binding warnings when running from source 
distribution




> Class path contains multiple SLF4J bindings warnings when using scripts under 
> bin
> -
>
> Key: KAFKA-2875
> URL: https://issues.apache.org/jira/browse/KAFKA-2875
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: jin xing
>Priority: Minor
>
> This adds a lot of noise when running the scripts, see example when running 
> kafka-console-producer.sh:
> {code}
> ~/D/s/kafka-0.9.0.0-src ❯❯❯ ./bin/kafka-console-producer.sh --topic topic 
> --broker-list localhost:9092 ⏎
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/tools/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/api/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/runtime/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/file/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/json/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}
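
The warnings above come from several copies of the same SLF4J binding jar on
the classpath. A minimal sketch of spotting such duplicates; the regex and the
sample classpath entries (modelled on the paths in the warning) are
illustrative, not part of SLF4J or Kafka:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class Slf4jBindingCheck {
    // Flags classpath entries that look like SLF4J binding jars; more than
    // one match is what triggers the "multiple bindings" warning.
    static List<String> findBindingJars(List<String> classpath) {
        Pattern binding = Pattern.compile("slf4j-(log4j12|jdk14|simple)[^/]*\\.jar$");
        return classpath.stream()
                .filter(entry -> binding.matcher(entry).find())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> cp = List.of(
                "core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.10.jar",
                "core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar",
                "tools/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar",
                "clients/build/libs/kafka-clients-0.9.0.0.jar");
        List<String> bindings = findBindingJars(cp);
        if (bindings.size() > 1) {
            System.out.println("Multiple SLF4J bindings: " + bindings);
        }
    }
}
```

The fix in the PR amounts to ensuring only one such jar ends up on the
classpath assembled by the bin scripts.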



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2875: remove slf4j multi binding warning...

2015-11-27 Thread ZoneMayor
GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/595

KAFKA-2875: remove slf4j multi binding warnings when running from source 
distribution



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2875

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/595.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #595


commit 95374147a28208d4850f6e73f714bf418935fc2d
Author: ZoneMayor 
Date:   2015-11-27T03:49:34Z

Merge pull request #1 from apache/trunk

merge

commit 58bb01c47a1054301d7416503d51912611473087
Author: jinxing 
Date:   2015-11-27T08:15:57Z

KAFKA-2875: remove slf4j multi binding warnings when running from source 
distribution




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---