Build failed in Jenkins: kafka-0.10.1-jdk7 #10

2016-09-25 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-4055; System tests for secure quotas

--
[...truncated 12770 lines...]

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testPutGetRangeWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.WrappingStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfNoStoreOfTypeFound STARTED

org.apache.kafka.streams.state.internals.WrappingStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfNoStoreOfTypeFound PASSED

org.apache.kafka.streams.state.internals.WrappingStoreProviderTest > 
shouldFindWindowStores STARTED

org.apache.kafka.streams.state.internals.WrappingStoreProviderTest > 
shouldFindWindowStores PASSED

org.apache.kafka.streams.state.internals.WrappingStoreProviderTest > 
shouldFindKeyValueStores STARTED

org.apache.kafka.streams.state.internals.WrappingStoreProviderTest > 
shouldFindKeyValueStores PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldThrowExceptionIfWindowStoreDoesntExist STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldThrowExceptionIfWindowStoreDoesntExist PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldThrowExceptionWhenLookingForWindowStoreWithDifferentType STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldThrowExceptionWhenLookingForWindowStoreWithDifferentType PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnKVStoreWhenItExists STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnKVStoreWhenItExists PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldThrowExceptionWhenLookingForKVStoreWithDifferentType STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldThrowExceptionWhenLookingForKVStoreWithDifferentType PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnWindowStoreWhenItExists STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnWindowStoreWhenItExists PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldThrowExceptionIfKVStoreDoesntExist STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldThrowExceptionIfKVStoreDoesntExist PASSED

org.apache.kafka.streams.state.internals.CachingKeyValueStoreTest > 
shouldIterateAllStoredItems STARTED

org.apache.kafka.streams.state.internals.CachingKeyValueStoreTest > 
shouldIterateAllStoredItems PASSED

org.apache.kafka.streams.state.internals.CachingKeyValueStoreTest > 
shouldIterateOverRange STARTED

org.apache.kafka.streams.state.internals.CachingKeyValueStoreTest > 
shouldIterateOverRange PASSED

org.apache.kafka.streams.state.internals.CachingKeyValueStoreTest > 
shouldForwardDirtyItemsWhenFlushCalled STARTED

org.apache.kafka.streams.state.internals.CachingKeyValueStoreTest > 
shouldForwardDirtyItemsWhenFlushCalled PASSED

org.apache.kafka.streams.state.internals.CachingKeyValueStoreTest > 
shouldFlushEvictedItemsIntoUnderlyingStore STARTED

org.apache.kafka.streams.state.internals.CachingKeyValueStoreTest > 
shouldFlushEvictedItemsIntoUnderlyingStore PASSED

org.apache.kafka.streams.state.internals.CachingKeyValueStoreTest > 
shouldPutGetToFromCache STARTED

org.apache.kafka.streams.state.internals.CachingKeyValueStoreTest > 
shouldPutGetToFromCache PASSED

org.apache.kafka.streams.state.internals.CachingKeyValueStoreTest > 
shouldForwardOldValuesWhenEnabled STARTED

org.apache.kafka.streams.state.internals.CachingKeyValueStoreTest > 
shouldForwardOldValuesWhenEnabled PASSED

org.apache.kafka.streams.state.internals.CachingKeyValueStoreTest > 
shouldForwardDirtyItemToListenerWhenEvicted STARTED

org.apache.kafka.streams.state.internals.CachingKeyValueStoreTest > 
shouldForwardDirtyItemToListenerWhenEvicted PASSED

org.apache.kafka.streams.state.internals.NamedCacheTest > 
shouldEvictEldestEntry STARTED

org.apache.kafka.streams.state.internals.NamedCacheTest > 
shouldEvictEldestEntry PASSED

org.apache.kafka.streams.state.internals.NamedCacheTest > 
shouldDeleteAndUpdateSize STARTED

org.apache.kafka.streams.state.internals.NamedCacheTest > 
shouldDeleteAndUpdateSize PASSED

org.apache.kafka.streams.state.internals.NamedCacheTest > shouldPutAll STARTED

org.apache.kafka.streams.state.internals.NamedCacheTest > shouldPutAll PASSED

org.apache.kafka.streams.state.internals.NamedCacheTest > shouldPutGet STARTED

org.apache.kafka.streams.state.internals.NamedCacheTest > shouldPutGet PASSED

org.apache.kafka.streams.state.internals.NamedCacheTest > 

[jira] [Updated] (KAFKA-1196) java.lang.IllegalArgumentException Buffer.limit on FetchResponse.scala + 33

2016-09-25 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-1196:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

This issue should have been resolved by KIP-74 (KAFKA-2063).

> java.lang.IllegalArgumentException Buffer.limit on FetchResponse.scala + 33
> ---
>
> Key: KAFKA-1196
> URL: https://issues.apache.org/jira/browse/KAFKA-1196
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.0
> Environment: running java 1.7, linux and kafka compiled against scala 
> 2.9.2
>Reporter: Gerrit Jansen van Vuuren
>Assignee: Ewen Cheslack-Postava
>Priority: Critical
>  Labels: newbie
> Fix For: 0.10.1.0
>
> Attachments: KAFKA-1196.patch
>
>
> I have 6 topics, each with 8 partitions, spread over 4 kafka servers.
> The servers are 24-core with 72 GB of RAM.
> While consuming from the topics I get an IllegalArgumentException; all 
> consumption stops and the error keeps being thrown.
> I've tracked it down to FetchResponse.scala line 33.
> The error happens when the FetchResponsePartitionData object's readFrom 
> method calls:
> messageSetBuffer.limit(messageSetSize)
> I put in some debug code; the messageSetSize is 671758648, while 
> buffer.capacity() gives 155733313, so for some reason the buffer is smaller 
> than the required message size.
> I don't know the consumer code well enough to debug this. It doesn't matter 
> whether compression is used or not.
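
A minimal illustration of the failure mode, with small stand-in sizes in place 
of the values reported above: ByteBuffer.limit() throws an 
IllegalArgumentException whenever the new limit exceeds the buffer's capacity.

    import java.nio.ByteBuffer;

    public class BufferLimitDemo {
        public static void main(String[] args) {
            // stand-in for the 155733313-byte fetch buffer
            ByteBuffer buf = ByteBuffer.allocate(128);
            // stand-in for messageSetSize = 671758648; limit > capacity
            buf.limit(512); // throws java.lang.IllegalArgumentException
        }
    }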



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3383) Producer should not remove an in flight request before successfully parsing the response.

2016-09-25 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin resolved KAFKA-3383.
-
   Resolution: Won't Fix
Fix Version/s: 0.10.0.0  (was: 0.10.1.0)

The problem was caused by a broker-side issue and fixed by KAFKA-2071. Closing 
this ticket as won't fix.

> Producer should not remove an in flight request before successfully parsing 
> the response.
> -
>
> Key: KAFKA-3383
> URL: https://issues.apache.org/jira/browse/KAFKA-3383
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: chen zhu
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> In the NetworkClient, we remove the in-flight request before we successfully 
> parse the response. If response parsing fails, the request will not be 
> fulfilled but simply lost. For a producer request, that means the callbacks of 
> the messages will never be fired.
> We should only remove the in-flight request after response parsing succeeds.
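
A sketch of the ordering argued for here; the names (inFlight, parseResponse, 
InFlightRequest) are illustrative stand-ins, not the actual NetworkClient code:

    // Only dequeue the in-flight request once its response has been parsed
    // successfully, so a parse failure cannot silently drop the callback.
    ClientResponse handleResponse(ByteBuffer payload) {
        InFlightRequest request = inFlight.peek();     // inspect without removing
        Struct body = parseResponse(payload, request); // may throw on a bad response
        inFlight.poll();                               // remove only after parsing succeeds
        return request.complete(body);
    }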



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1569

2016-09-25 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-4055; System tests for secure quotas

--
[...truncated 5651 lines...]

kafka.server.OffsetCommitTest > testCommitAndFetchOffsets PASSED

kafka.server.OffsetCommitTest > testOffsetExpiration STARTED

kafka.server.OffsetCommitTest > testOffsetExpiration PASSED

kafka.server.OffsetCommitTest > testNonExistingTopicOffsetCommit STARTED

kafka.server.OffsetCommitTest > testNonExistingTopicOffsetCommit PASSED

kafka.server.ServerGenerateBrokerIdTest > testGetSequenceIdMethod STARTED

kafka.server.ServerGenerateBrokerIdTest > testGetSequenceIdMethod PASSED

kafka.server.ServerGenerateBrokerIdTest > testBrokerMetadataOnIdCollision 
STARTED

kafka.server.ServerGenerateBrokerIdTest > testBrokerMetadataOnIdCollision PASSED

kafka.server.ServerGenerateBrokerIdTest > testAutoGenerateBrokerId STARTED

kafka.server.ServerGenerateBrokerIdTest > testAutoGenerateBrokerId PASSED

kafka.server.ServerGenerateBrokerIdTest > testMultipleLogDirsMetaProps STARTED

kafka.server.ServerGenerateBrokerIdTest > testMultipleLogDirsMetaProps PASSED

kafka.server.ServerGenerateBrokerIdTest > testDisableGeneratedBrokerId STARTED

kafka.server.ServerGenerateBrokerIdTest > testDisableGeneratedBrokerId PASSED

kafka.server.ServerGenerateBrokerIdTest > testUserConfigAndGeneratedBrokerId 
STARTED

kafka.server.ServerGenerateBrokerIdTest > testUserConfigAndGeneratedBrokerId 
PASSED

kafka.server.ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps STARTED

kafka.server.ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps PASSED

kafka.server.ApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion STARTED

kafka.server.ApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion PASSED

kafka.server.ApiVersionsRequestTest > testApiVersionsRequest STARTED

kafka.server.ApiVersionsRequestTest > testApiVersionsRequest PASSED

kafka.server.AdvertiseBrokerTest > testBrokerAdvertiseToZK STARTED

kafka.server.AdvertiseBrokerTest > testBrokerAdvertiseToZK PASSED

kafka.server.PlaintextReplicaFetchTest > testReplicaFetcherThread STARTED

kafka.server.PlaintextReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.cluster.BrokerEndPointTest > testEndpointFromUri STARTED

kafka.cluster.BrokerEndPointTest > testEndpointFromUri PASSED

kafka.cluster.BrokerEndPointTest > testHashAndEquals STARTED

kafka.cluster.BrokerEndPointTest > testHashAndEquals PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonFutureVersion STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonFutureVersion PASSED

kafka.cluster.BrokerEndPointTest > testBrokerEndpointFromUri STARTED

kafka.cluster.BrokerEndPointTest > testBrokerEndpointFromUri PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV1 STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV1 PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV2 STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV2 PASSED

kafka.controller.ControllerFailoverTest > testMetadataUpdate STARTED

kafka.controller.ControllerFailoverTest > testMetadataUpdate PASSED

kafka.api.RackAwareAutoTopicCreationTest > testAutoCreateTopic STARTED

kafka.api.RackAwareAutoTopicCreationTest > testAutoCreateTopic PASSED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization STARTED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization PASSED

kafka.api.RequestResponseSerializationTest > testFetchResponseVersion STARTED

kafka.api.RequestResponseSerializationTest > testFetchResponseVersion PASSED


Re: Help related to KStreams and Stateful event processing.

2016-09-25 Thread Matthias J. Sax
Hi Arvind,

Short answer: Kafka Streams definitely does help you!

Long answer: Kafka Streams offers two layers for programming your stream
processing job: the low-level Processor API and the high-level DSL.
Please check the documentation for further details:
http://docs.confluent.io/3.0.1/streams/index.html

With the Processor API you are able to do anything -- at the cost of lower
abstraction and thus more coding. I guess this would be the best way
to program your Kafka Streams application for your use case.

The DSL is easier to use and provides high-level abstractions --
however, I am not sure whether it covers what you need in your use case. But
maybe it's worth trying out before using the Processor API...

For your second question, I would recommend using the Processor API, attaching
a state store to a processor node, and writing to your data
warehouse whenever a state is "complete" (see
http://docs.confluent.io/3.0.1/streams/developer-guide.html#defining-a-state-store).
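
As a rough sketch of that approach for the order example below (a minimal 
sketch, assuming String events keyed by order id; the store name and the 
event-matching logic are made up for illustration):

    import org.apache.kafka.streams.processor.Processor;
    import org.apache.kafka.streams.processor.ProcessorContext;
    import org.apache.kafka.streams.state.KeyValueStore;

    // Accumulates sub-events per order id in a state store and forwards the
    // combined record downstream once a lifecycle-completing event arrives.
    public class OrderLifecycleProcessor implements Processor<String, String> {
        private ProcessorContext context;
        private KeyValueStore<String, String> pending;

        @Override
        @SuppressWarnings("unchecked")
        public void init(ProcessorContext context) {
            this.context = context;
            this.pending = (KeyValueStore<String, String>)
                    context.getStateStore("pending-orders");
        }

        @Override
        public void process(String orderId, String event) {
            String soFar = pending.get(orderId);
            String merged = (soFar == null) ? event : soFar + "|" + event;
            if (event.contains("delivered") || event.contains("cancelled")) {
                context.forward(orderId, merged); // lifecycle complete: emit
                pending.delete(orderId);
            } else {
                pending.put(orderId, merged);     // wait for more sub-events
            }
        }

        @Override
        public void punctuate(long timestamp) {}

        @Override
        public void close() {}
    }

The store itself would be created via the Stores factory and attached to the
processor with TopologyBuilder#addStateStore, as described in the linked guide.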

One more hint: you can actually mix the DSL and the Processor API, by using
e.g. process() or transform() within the DSL.


Hope this gives you some initial pointers. Please follow up if you have
more questions.

-Matthias


On 09/24/2016 11:25 PM, Arvind Kalyan wrote:
> Hello there!
> 
> I read about Kafka Streams recently; it's pretty interesting the way it solves
> the stream processing problem in a cleaner way, with less overhead and
> complexity.
> 
> I work as a Software Engineer at a startup, and we are in the design stage
> for building a stream processing pipeline (if you will) for the millions of
> events we get every day. We have been using a Kafka cluster as the log
> aggregation layer in production for 5-6 months and are very happy with it.
> 
> I went through a few Confluent blogs (by Jay, Neha) on how KStreams
> solves for sort of a stateful event processing; maybe I missed the
> whole point in this regard, but I have some doubts.
> 
> We have use-cases like the following:
> 
> There is an event E1, which is sort of the base event, after which we have a
> lot of sub-events E2, E3, ..., En enriching E1 with a lot of extra properties
> (with considerable delay, say 30-40 mins).
> 
> Eg. 1: An order event comes in when a user orders an item on our
> website (this is the base event). After say 30-40 minutes, we get events
> like packaging_time, shipping_time, delivered_time or cancelled_time etc.
> related to that order (these are the sub-events).
> 
> So before we get the whole event to a warehouse, we need to collect all
> of these (ordered, packaged, shipped, cancelled/delivered), and whenever I get
> a cancelled or delivered event for an order, I know that completes the
> lifecycle for that order and I can put it in the warehouse.
> 
> Eg. 2: User login events - if we are to capture events like User-Logged-In
> and User-Logged-Out, I need them in the warehouse as a single row, e.g. a row
> with the columns *user_id, login_time, logout_time*. So as and when
> I receive a logout event (and if I have the login event stored in some store),
> there would be a trigger which combines both and sends the result across to the
> warehouse.
> 
> All of these involve storing the state of the events and acting as-and-when
> another event (one that completes the lifecycle) occurs, after which you have a
> trigger for further steps (warehouse or anything else).
> 
> Does KStream help me do this? If not, how should I go about solving this
> problem?
> 
> Also, I wanted some advice as to whether it is standard practice to
> aggregate like this and *then* store to the warehouse, or whether I should
> append each event to the warehouse and do sort of an ELT on it using the
> warehouse (run a query to re-structure the data in the database and store it
> off as a separate table)?
> 
> Eagerly waiting for your reply,
> Arvind
> 





Build failed in Jenkins: kafka-trunk-jdk8 #912

2016-09-25 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-4055; System tests for secure quotas

--
[...truncated 13433 lines...]
org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testNotSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testSkipNullOnMaterialization STARTED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testSkipNullOnMaterialization PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testKTable STARTED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testValueGetter 
STARTED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > afterBelowLower STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > afterBelowLower PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > beforeOverUpper STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > beforeOverUpper PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
shouldHaveSaneEqualsAndHashCode STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
shouldHaveSaneEqualsAndHashCode PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > validWindows STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > validWindows PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
timeDifferenceMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
timeDifferenceMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldHaveCorrectSourceTopicsForTableFromMergedStream STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldHaveCorrectSourceTopicsForTableFromMergedStream PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldHaveCorrectSourceTopicsForTableFromMergedStreamWithProcessors STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldHaveCorrectSourceTopicsForTableFromMergedStreamWithProcessors PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testMerge STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testMerge PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testFrom STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testFrom PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldThrowExceptionWhenTopicNamesAreNull STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldThrowExceptionWhenTopicNamesAreNull PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldThrowExceptionWhenNoTopicPresent STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldThrowExceptionWhenNoTopicPresent PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testNewName STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testNewName PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldHaveSaneEqualsAndHashCode STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldHaveSaneEqualsAndHashCode PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeZero 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeZero 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > advanceIntervalMustNotBeZero 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > advanceIntervalMustNotBeZero 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeNegative 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeNegative 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeLargerThanWindowSize STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeLargerThanWindowSize PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowsForTumblingWindows 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowsForTumblingWindows 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowsForHoppingWindows 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowsForHoppingWindows 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
windowsForBarelyOverlappingHoppingWindows STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
windowsForBarelyOverlappingHoppingWindows PASSED

org.apache.kafka.streams.StreamsConfigTest > 

[jira] [Updated] (KAFKA-4055) Add system tests for secure quotas

2016-09-25 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-4055:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1860
[https://github.com/apache/kafka/pull/1860]

> Add system tests for secure quotas
> --
>
> Key: KAFKA-4055
> URL: https://issues.apache.org/jira/browse/KAFKA-4055
> Project: Kafka
>  Issue Type: Test
>  Components: system tests
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.1.0
>
>
> Add system tests for quotas for authenticated users and  
> (corresponding to KIP-55). Implementation is being done under KAFKA-3492.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4055) Add system tests for secure quotas

2016-09-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15521641#comment-15521641
 ] 

ASF GitHub Bot commented on KAFKA-4055:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1860


> Add system tests for secure quotas
> --
>
> Key: KAFKA-4055
> URL: https://issues.apache.org/jira/browse/KAFKA-4055
> Project: Kafka
>  Issue Type: Test
>  Components: system tests
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.1.0
>
>
> Add system tests for quotas for authenticated users and  
> (corresponding to KIP-55). Implementation is being done under KAFKA-3492.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1860: KAFKA-4055: System tests for secure quotas

2016-09-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1860


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4220) Clean up & provide better error message when incorrect argument types are provided in the command line client

2016-09-25 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan updated KAFKA-4220:
--
Description: 
When the argument provided to a command line statement is not of the right 
type, a stack trace is returned. This can be replaced by a cleaner error 
message that is easier for the user to read & understand.

For example-
bin/kafka-console-consumer.sh --new-consumer --bootstrap-server localhost:9092 
--topic foo --timeout-ms abc
'abc' is an incorrect type for the --timeout-ms parameter, which expects a 
number.
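
A minimal sketch of the kind of validation that would produce such a message 
(a hypothetical helper, not the actual tool code):

    // Parse a numeric CLI option, printing a readable message instead of
    // letting NumberFormatException surface as a stack trace.
    static long parseLongOption(String name, String raw) {
        try {
            return Long.parseLong(raw);
        } catch (NumberFormatException e) {
            System.err.println("'" + raw + "' is an incorrect type for the "
                    + name + " parameter, which expects a number.");
            System.exit(1);
            return -1; // unreachable
        }
    }

Here parseLongOption("--timeout-ms", "abc") would print the friendly message 
and exit rather than dumping a stack trace.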

> Clean up & provide better error message when incorrect argument types are 
> provided in the command line client
> -
>
> Key: KAFKA-4220
> URL: https://issues.apache.org/jira/browse/KAFKA-4220
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ishita Mandhan
>Assignee: Ishita Mandhan
>Priority: Minor
>
> When the argument provided to a command line statement is not of the right 
> type, a stack trace is returned. This can be replaced by a cleaner error 
> message that is easier for the user to read & understand.
> For example-
> bin/kafka-console-consumer.sh --new-consumer --bootstrap-server 
> localhost:9092 --topic foo --timeout-ms abc
> 'abc' is an incorrect type for the --timeout-ms parameter, which expects a 
> number.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4220) Clean up & provide better error message when incorrect argument types are provided in the command line client

2016-09-25 Thread Ishita Mandhan (JIRA)
Ishita Mandhan created KAFKA-4220:
-

 Summary: Clean up & provide better error message when incorrect 
argument types are provided in the command line client
 Key: KAFKA-4220
 URL: https://issues.apache.org/jira/browse/KAFKA-4220
 Project: Kafka
  Issue Type: Bug
Reporter: Ishita Mandhan
Assignee: Ishita Mandhan
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-4212) Add a key-value store that is a TTL persistent cache

2016-09-25 Thread Elias Levy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15521287#comment-15521287
 ] 

Elias Levy edited comment on KAFKA-4212 at 9/25/16 7:17 PM:


It should be noted that a variable-capacity memory-overflowing TTL caching 
store is semantically equivalent to a KTable that expires entries via a TTL.  
Such a KTable may be a viable alternative or at least a useful additional 
abstraction.


was (Author: elevy):
I should be noted that a variable-capacity memory-overflowing TTL caching store 
is semantically equivalent to a KTable that expires entries via a TTL.  Such a 
KTable may be a viable alternative or at least a useful additional abstraction.

> Add a key-value store that is a TTL persistent cache
> 
>
> Key: KAFKA-4212
> URL: https://issues.apache.org/jira/browse/KAFKA-4212
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Assignee: Guozhang Wang
>
> Some jobs need to maintain as state a large set of key-values for some 
> period of time.  I.e. they need to maintain a TTL cache of values potentially 
> larger than memory. 
> Currently Kafka Streams provides non-windowed and windowed key-value stores.  
> Neither is an exact fit for this use case.  
> The {{RocksDBStore}}, a {{KeyValueStore}}, stores one value per key as 
> required, but does not support expiration.  The TTL option of RocksDB is 
> explicitly not used.
> The {{RocksDBWindowStore}}, a {{WindowStore}}, can expire items via segment 
> dropping, but it stores multiple items per key, based on their timestamp.  
> But this store can be repurposed as a cache by fetching the items in reverse 
> chronological order and returning the first item found.
> KAFKA-2594 introduced a fixed-capacity in-memory LRU caching store, but here 
> we desire a variable-capacity memory-overflowing TTL caching store.
> Although {{RocksDBWindowStore}} can be repurposed as a cache, it would be 
> useful to have an official and proper TTL cache API and implementation.
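
As a rough sketch of that repurposing (assuming a WindowStore<K, V> retained 
for at least the TTL; the method name is made up for illustration):

    import org.apache.kafka.streams.state.WindowStore;
    import org.apache.kafka.streams.state.WindowStoreIterator;

    // Look up the most recent value for `key` written within the last `ttlMs`,
    // treating the windowed store as a TTL cache. The iterator is time-ordered,
    // so the last entry seen is the newest one.
    static <K, V> V latestWithinTtl(WindowStore<K, V> store, K key,
                                    long nowMs, long ttlMs) {
        V latest = null;
        try (WindowStoreIterator<V> iter = store.fetch(key, nowMs - ttlMs, nowMs)) {
            while (iter.hasNext()) {
                latest = iter.next().value;
            }
        }
        return latest;
    }

Entries older than the TTL age out as their segments are dropped, which is the 
expiration mechanism described above.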



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4212) Add a key-value store that is a TTL persistent cache

2016-09-25 Thread Elias Levy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15521287#comment-15521287
 ] 

Elias Levy commented on KAFKA-4212:
---

I should be noted that a variable-capacity memory-overflowing TTL caching store 
is semantically equivalent to a KTable that expires entries via a TTL.  Such a 
KTable may be a viable alternative or at least a useful additional abstraction.

> Add a key-value store that is a TTL persistent cache
> 
>
> Key: KAFKA-4212
> URL: https://issues.apache.org/jira/browse/KAFKA-4212
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Assignee: Guozhang Wang
>
> Some jobs need to maintain as state a large set of key-values for some 
> period of time.  I.e. they need to maintain a TTL cache of values potentially 
> larger than memory. 
> Currently Kafka Streams provides non-windowed and windowed key-value stores.  
> Neither is an exact fit for this use case.  
> The {{RocksDBStore}}, a {{KeyValueStore}}, stores one value per key as 
> required, but does not support expiration.  The TTL option of RocksDB is 
> explicitly not used.
> The {{RocksDBWindowStore}}, a {{WindowStore}}, can expire items via segment 
> dropping, but it stores multiple items per key, based on their timestamp.  
> But this store can be repurposed as a cache by fetching the items in reverse 
> chronological order and returning the first item found.
> KAFKA-2594 introduced a fixed-capacity in-memory LRU caching store, but here 
> we desire a variable-capacity memory-overflowing TTL caching store.
> Although {{RocksDBWindowStore}} can be repurposed as a cache, it would be 
> useful to have an official and proper TTL cache API and implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4219) Permit setting of event time in stream processor

2016-09-25 Thread Elias Levy (JIRA)
Elias Levy created KAFKA-4219:
-

 Summary: Permit setting of event time in stream processor
 Key: KAFKA-4219
 URL: https://issues.apache.org/jira/browse/KAFKA-4219
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 0.10.0.1
Reporter: Elias Levy
Assignee: Guozhang Wang


Event time is assigned in stream sources via {{TimestampExtractor}}.  Once the 
event time has been assigned, it remains the same, regardless of any downstream 
processing in the topology.  This is insufficient for many processing jobs, 
particularly when the output of the job is written back into a Kafka topic, 
where the record's time is encoded outside of the record's value.

For instance:

* When performing windowed aggregations, it may be desirable for the timestamp 
of the emitted record to be the lower or upper limit of the time window, rather 
than the timestamp of the last processed element, which may be anywhere within 
the time window.

* When joining two streams, it is non-deterministic which of the two records' 
timestamps will be the timestamp of the emitted record.  It could be either one, 
depending on the order in which the records are processed.  Even were this 
deterministic, it may be desirable for the emitted timestamp to be altogether 
different from the timestamps of the joined records.  For instance, setting the 
timestamp to the current processing time may be desirable.

* In general, lower level processors may wish to set the timestamp of emitted 
records to an arbitrary value.
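
For context, the only timestamp hook today is the source-side extractor; a 
minimal sketch (OrderEvent and getEventTimeMs() are hypothetical):

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.streams.processor.TimestampExtractor;

    // The extractor runs once, at the source; whatever it returns is the
    // record's event time for the rest of the topology -- the limitation
    // this ticket proposes to lift.
    public class OrderTimestampExtractor implements TimestampExtractor {
        @Override
        public long extract(ConsumerRecord<Object, Object> record) {
            return ((OrderEvent) record.value()).getEventTimeMs();
        }
    }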
 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4120) byte[] keys in RocksDB state stores do not work as expected

2016-09-25 Thread Elias Levy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15521271#comment-15521271
 ] 

Elias Levy commented on KAFKA-4120:
---

{{o.a.k.common.utils.Bytes}} is not a documented class.

> byte[] keys in RocksDB state stores do not work as expected
> ---
>
> Key: KAFKA-4120
> URL: https://issues.apache.org/jira/browse/KAFKA-4120
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Greg Fodor
>Assignee: Guozhang Wang
>
> We ran into an issue using a byte[] key in a RocksDB state store (with the 
> byte array serde). Internally, the RocksDB store keeps an LRUCache that is 
> backed by a LinkedHashMap that sits between the callers and the actual db. 
> The problem is that while the underlying rocks db will persist byte arrays 
> with equal data as equivalent keys, the LinkedHashMap uses byte[] reference 
> equality from Object.equals/hashcode. So, this can result in multiple entries 
> in the cache for two different byte arrays that have the same contents and 
> are backed by the same key in the db, resulting in unexpected behavior. 
> One such behavior that manifests from this is if you store a value in the 
> state store with a specific key, if you re-read that key with the same byte 
> array you will get the new value, but if you re-read that key with a 
> different byte array with the same bytes, you will get a stale value until 
> the db is flushed. (This made it particularly tricky to track down what was 
> happening :))
> The workaround for us is to convert the keys from raw byte arrays to a 
> deserialized avro structure that provides proper hashcode/equals semantics 
> for the intermediate cache. In general this seems like good practice, so one 
> of the proposed solutions is to simply emit a warning or exception if a key 
> type with breaking semantics like this is provided.
> A few proposed solutions:
> - When the state store is defined on array keys, ensure that the cache map 
> does proper comparisons on array values, not array references. This would fix 
> this problem, but seems a bit strange to special-case. However, I have a hard 
> time thinking of other examples where this behavior would burn users.
> - Change the LRU cache to deserialize and serialize all keys to bytes and use 
> a value based comparison for the map. This would be the most correct, as it 
> would ensure that both the rocks db and the cache have identical key spaces 
> and equality/hashing semantics. However, this is probably slow, and since the 
> general case of using avro record types as keys works fine, it will largely 
> be unnecessary overhead.
> - Don't change anything about the behavior, but trigger a warning in the log 
> or fail to start if a state store is defined on array keys (or possibly any 
> key type that fails to properly override Object.equals/hashcode.)
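
A self-contained illustration of the reference-equality pitfall described 
above, with a plain java.util.HashMap standing in for the LinkedHashMap-backed 
cache:

    import java.util.HashMap;
    import java.util.Map;

    public class ByteArrayKeyDemo {
        public static void main(String[] args) {
            Map<byte[], String> cache = new HashMap<>();
            byte[] k1 = {1, 2, 3};
            byte[] k2 = {1, 2, 3};  // same contents, different reference
            cache.put(k1, "fresh");
            cache.put(k2, "stale"); // does NOT overwrite: byte[] hashes by identity
            System.out.println(cache.size());                   // prints 2, not 1
            System.out.println(cache.get(new byte[]{1, 2, 3})); // prints null
        }
    }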



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Help related to KStreams and Stateful event processing.

2016-09-25 Thread Arvind Kalyan
Hello there!

I read about Kafka Streams recently; it's pretty interesting the way it solves
the stream processing problem in a cleaner way, with less overhead and
complexity.

I work as a Software Engineer at a startup, and we are in the design stage
for building a stream processing pipeline (if you will) for the millions of
events we get every day. We have been using a Kafka cluster as the log
aggregation layer in production for 5-6 months and are very happy with it.

I went through a few Confluent blogs (by Jay, Neha) on how KStreams
solves for sort of a stateful event processing; maybe I missed the
whole point in this regard, but I have some doubts.

We have use-cases like the following:

There is an event E1, which is sort of the base event, after which we have a
lot of sub-events E2, E3, ..., En enriching E1 with a lot of extra properties
(with considerable delay, say 30-40 mins).

Eg. 1: An order event comes in when a user orders an item on our
website (this is the base event). After say 30-40 minutes, we get events
like packaging_time, shipping_time, delivered_time or cancelled_time etc.
related to that order (these are the sub-events).

So before we get the whole event to a warehouse, we need to collect all
of these (ordered, packaged, shipped, cancelled/delivered), and whenever I get
a cancelled or delivered event for an order, I know that completes the
lifecycle for that order and I can put it in the warehouse.

Eg. 2: User login events - if we are to capture events like User-Logged-In
and User-Logged-Out, I need them in the warehouse as a single row, e.g. a row
with the columns *user_id, login_time, logout_time*. So as and when
I receive a logout event (and if I have the login event stored in some store),
there would be a trigger which combines both and sends the result across to the
warehouse.

All of these involve storing the state of the events and acting as-and-when
another event (one that completes the lifecycle) occurs, after which you have a
trigger for further steps (warehouse or anything else).

Does KStream help me do this? If not, how should I go about solving this
problem?

Also, I wanted some advice as to whether it is standard practice to
aggregate like this and *then* store to the warehouse, or whether I should
append each event to the warehouse and do sort of an ELT on it using the
warehouse (run a query to re-structure the data in the database and store it
off as a separate table)?

Eagerly waiting for your reply,
Arvind


[DISCUSS] KIP-73: Replication Quotas

2016-09-25 Thread Ben Stopford
Hi All

We’ve made an adjustment to KIP-73: Replication Quotas which I’d like to open 
up to the community for approval. 

Previously the admin passed a list of replicas to be throttled:

quota.replication.throttled.replicas = 
[partId]:[replica],[partId]:[replica],[partId]:[replica] etc

The change is to split this into two properties. One that corresponds to the 
leader-side throttle, and the other that corresponds to the follower-side 
throttle.

quota.leader.replication.throttled.replicas = 
[partId]:[replica],[partId]:[replica],[partId]:[replica] 
quota.follower.replication.throttled.replicas = 
[partId]:[replica],[partId]:[replica],[partId]:[replica] 

This provides more control over the throttling process. It also helps us with a 
common use case which I’ve described below, for those interested. 

Please reply if you have any comments / issues / suggestions. 

Thanks as ever.

Ben


Sample Use Case:

Say we are moving partition 0. It has replicas [104,107,109] moving to 
[105,107,113]

So the leader could be any of [104,107,109] and we know we will be creating new 
replicas on 105 & 113.

In the current mechanism, all we can do is apply both leader and follower 
throttles to all 5: [104,107,109,105,113]. This means the regular replication 
traffic (say 107 is the leader, so 107->104 and 107->109) will be throttled 
also. This is potentially problematic. 

What we really want to do is apply:

LeaderThrottle [104,107,109]
FollowerThrottle [105,113]

This way, during a rebalance, we ensure that standard replication traffic will 
not be throttled, but the rebalance will perform correctly if leaders move. One 
subtlety is that, should the leader move to the "move destination" node, it 
would no longer be throttled. But this is actually to our benefit in the normal 
case. 
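
For the sample above, the proposed properties would be set along these lines 
(partition 0, in the [partId]:[replica] format defined earlier):

    quota.leader.replication.throttled.replicas=0:104,0:107,0:109
    quota.follower.replication.throttled.replicas=0:105,0:113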



[jira] [Updated] (KAFKA-4216) Replication Throttling: Leader may not be throttled if it is not "moving"

2016-09-25 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Stopford updated KAFKA-4216:

Description: 
The current algorithm in kafka-reassign-partitions applies a throttle to all 
moving replicas, be they leader-side or follower-side. 

A preferable solution would be to change the throttled replica list to specify 
whether the throttle applies to leader or follower. That way we can ensure that 
the regular replication will not be throttled.  

To do this we should change the way the throttled replica list is specified so 
it is spread over two separate properties. One that corresponds to the 
leader-side throttle, and the other that corresponds to the follower-side 
throttle.

quota.leader.replication.throttled.replicas = 
[partId]:[replica],[partId]:[replica],[partId]:[replica] 
quota.follower.replication.throttled.replicas = 
[partId]:[replica],[partId]:[replica],[partId]:[replica] 

Then, when applying the throttle, the leader quota can be applied to all 
current replicas, and the follower quota can be applied only to the new 
replicas we are creating as part of the rebalance. 


  was:
The current algorithm in kafka-reassign-partitions applies a throttle to all 
moving replicas, be they leader-side or follower-side. 

The problem is that the leader may not be moving, which would mean it would not 
have a throttle applied. So the throttle should be applied to:
[all existing replicas] ++  [all proposed replicas that moved]

A preferable solution would be to change the throttled replica list to specify 
whether the throttle applies to leader or follower. That way we can ensure that 
the regular replication will not be throttled.  


> Replication Throttling: Leader may not be throttled if it is not "moving"
> -
>
> Key: KAFKA-4216
> URL: https://issues.apache.org/jira/browse/KAFKA-4216
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>
> The current algorithm in kafka-reassign-partitions applies a throttle to all 
> moving replicas, be they leader-side or follower-side. 
> A preferable solution would be to change the throttled replica list to 
> specify whether the throttle applies to leader or follower. That way we can 
> ensure that the regular replication will not be throttled.  
> To do this we should change the way the throttled replica list is specified 
> so it is spread over two separate properties. One that corresponds to the 
> leader-side throttle, and the other that corresponds to the follower-side 
> throttle.
> quota.leader.replication.throttled.replicas = 
> [partId]:[replica],[partId]:[replica],[partId]:[replica] 
> quota.follower.replication.throttled.replicas = 
> [partId]:[replica],[partId]:[replica],[partId]:[replica] 
> Then, when applying the throttle, the leader quota can be applied to all 
> current replicas, and the follower quota can be applied only to the new 
> replicas we are creating as part of the rebalance. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-0.10.1-jdk7 #9

2016-09-25 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3282; Change tools to use new consumer if zookeeper is not

--
[...truncated 7398 lines...]

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupLeaderAfterFollower STARTED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupLeaderAfterFollower PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownMember 
STARTED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownMember 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testValidLeaveGroup STARTED

kafka.coordinator.GroupCoordinatorResponseTest > testValidLeaveGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupInactiveGroup 
STARTED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupInactiveGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupNotCoordinator 
STARTED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupNotCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatUnknownConsumerExistingGroup STARTED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testValidHeartbeat STARTED

kafka.coordinator.GroupCoordinatorResponseTest > testValidHeartbeat PASSED

kafka.coordinator.GroupMetadataTest > testDeadToAwaitingSyncIllegalTransition 
STARTED

kafka.coordinator.GroupMetadataTest > testDeadToAwaitingSyncIllegalTransition 
PASSED

kafka.coordinator.GroupMetadataTest > testOffsetCommitFailure STARTED

kafka.coordinator.GroupMetadataTest > testOffsetCommitFailure PASSED

kafka.coordinator.GroupMetadataTest > 
testPreparingRebalanceToStableIllegalTransition STARTED

kafka.coordinator.GroupMetadataTest > 
testPreparingRebalanceToStableIllegalTransition PASSED

kafka.coordinator.GroupMetadataTest > testStableToDeadTransition STARTED

kafka.coordinator.GroupMetadataTest > testStableToDeadTransition PASSED

kafka.coordinator.GroupMetadataTest > testInitNextGenerationEmptyGroup STARTED

kafka.coordinator.GroupMetadataTest > testInitNextGenerationEmptyGroup PASSED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenDead STARTED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenDead PASSED

kafka.coordinator.GroupMetadataTest > testInitNextGeneration STARTED

kafka.coordinator.GroupMetadataTest > testInitNextGeneration PASSED

kafka.coordinator.GroupMetadataTest > testPreparingRebalanceToEmptyTransition 
STARTED

kafka.coordinator.GroupMetadataTest > testPreparingRebalanceToEmptyTransition 
PASSED

kafka.coordinator.GroupMetadataTest > testSelectProtocol STARTED

kafka.coordinator.GroupMetadataTest > testSelectProtocol PASSED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenPreparingRebalance 
STARTED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenPreparingRebalance 
PASSED

kafka.coordinator.GroupMetadataTest > 
testDeadToPreparingRebalanceIllegalTransition STARTED

kafka.coordinator.GroupMetadataTest > 
testDeadToPreparingRebalanceIllegalTransition PASSED

kafka.coordinator.GroupMetadataTest > testCanRebalanceWhenAwaitingSync STARTED

kafka.coordinator.GroupMetadataTest > testCanRebalanceWhenAwaitingSync PASSED

kafka.coordinator.GroupMetadataTest > 
testAwaitingSyncToPreparingRebalanceTransition STARTED

kafka.coordinator.GroupMetadataTest > 
testAwaitingSyncToPreparingRebalanceTransition PASSED

kafka.coordinator.GroupMetadataTest > testStableToAwaitingSyncIllegalTransition 
STARTED

kafka.coordinator.GroupMetadataTest > testStableToAwaitingSyncIllegalTransition 
PASSED

kafka.coordinator.GroupMetadataTest > testEmptyToDeadTransition STARTED

kafka.coordinator.GroupMetadataTest > testEmptyToDeadTransition PASSED

kafka.coordinator.GroupMetadataTest > testSelectProtocolRaisesIfNoMembers 
STARTED

kafka.coordinator.GroupMetadataTest > testSelectProtocolRaisesIfNoMembers PASSED

kafka.coordinator.GroupMetadataTest > testStableToPreparingRebalanceTransition 
STARTED

kafka.coordinator.GroupMetadataTest > testStableToPreparingRebalanceTransition 
PASSED

kafka.coordinator.GroupMetadataTest > testPreparingRebalanceToDeadTransition 
STARTED

kafka.coordinator.GroupMetadataTest > testPreparingRebalanceToDeadTransition 
PASSED

kafka.coordinator.GroupMetadataTest > testStableToStableIllegalTransition 
STARTED

kafka.coordinator.GroupMetadataTest > testStableToStableIllegalTransition PASSED

kafka.coordinator.GroupMetadataTest > testAwaitingSyncToStableTransition STARTED

kafka.coordinator.GroupMetadataTest > testAwaitingSyncToStableTransition PASSED

kafka.coordinator.GroupMetadataTest > testOffsetCommitFailureWithAnotherPending 
STARTED

kafka.coordinator.GroupMetadataTest > testOffsetCommitFailureWithAnotherPending 
PASSED

kafka.coordinator.GroupMetadataTest > testDeadToStableIllegalTransition STARTED


Build failed in Jenkins: kafka-trunk-jdk7 #1568

2016-09-25 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3282; Change tools to use new consumer if zookeeper is not

--
[...truncated 1584 lines...]
kafka.api.ClientIdQuotaTest > testProducerConsumerOverrideUnthrottled STARTED

kafka.api.ClientIdQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.ClientIdQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.ClientIdQuotaTest > testThrottledProducerConsumer PASSED

kafka.api.ClientIdQuotaTest > testQuotaOverrideDelete STARTED

kafka.api.ClientIdQuotaTest > testQuotaOverrideDelete PASSED

kafka.api.test.ProducerCompressionTest > testCompression[0] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[0] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[1] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[1] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[2] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[2] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[3] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[3] PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoConsumeAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoConsumeAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe PASSED

kafka.api.SslConsumerTest > testCoordinatorFailover STARTED

kafka.api.SslConsumerTest > testCoordinatorFailover PASSED

kafka.api.SslConsumerTest > testSimpleConsumption STARTED

kafka.api.SslConsumerTest > testSimpleConsumption PASSED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled 
STARTED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.UserClientIdQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.UserClientIdQuotaTest > testThrottledProducerConsumer PASSED

kafka.api.UserClientIdQuotaTest > testQuotaOverrideDelete STARTED

kafka.api.UserClientIdQuotaTest > testQuotaOverrideDelete PASSED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic STARTED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne STARTED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList STARTED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas STARTED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.ProducerFailureHandlingTest > 
testResponseTooLargeForReplicationWithAckAll STARTED

kafka.api.ProducerFailureHandlingTest > 
testResponseTooLargeForReplicationWithAckAll PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic STARTED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition STARTED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed STARTED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero STARTED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.api.ProducerFailureHandlingTest > 
testPartitionTooLargeForReplicationWithAckAll STARTED

kafka.api.ProducerFailureHandlingTest > 
testPartitionTooLargeForReplicationWithAckAll PASSED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown STARTED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown PASSED

kafka.api.ProducerBounceTest > testBrokerFailure STARTED

kafka.api.ProducerBounceTest > testBrokerFailure PASSED

kafka.api.SaslPlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslPlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.EndToEndClusterIdTest > testEndToEnd STARTED

kafka.api.EndToEndClusterIdTest > testEndToEnd PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithCreateTime 
STARTED


[jira] [Commented] (KAFKA-2983) Remove old Scala clients and all related code, tests, and tools.

2016-09-25 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15520486#comment-15520486
 ] 

Ismael Juma commented on KAFKA-2983:


Setting "Fix version" as 0.11.0.0 just for tracking purposes. This would have 
to be discussed and voted in the mailing list before anything happens.

> Remove old Scala clients and all related code, tests, and tools.
> 
>
> Key: KAFKA-2983
> URL: https://issues.apache.org/jira/browse/KAFKA-2983
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.11.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2983) Remove old Scala clients and all related code, tests, and tools.

2016-09-25 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2983:
---
Fix Version/s: 0.11.0.0

> Remove old Scala clients and all related code, tests, and tools.
> 
>
> Key: KAFKA-2983
> URL: https://issues.apache.org/jira/browse/KAFKA-2983
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.11.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3264) Mark the old Scala consumer and related classes as deprecated

2016-09-25 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3264:
---
Fix Version/s: 0.10.2.0

> Mark the old Scala consumer and related classes as deprecated
> -
>
> Key: KAFKA-3264
> URL: https://issues.apache.org/jira/browse/KAFKA-3264
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Vahid Hashemian
> Fix For: 0.10.2.0
>
>
> Once the new consumer is out of beta, we should consider deprecating the old 
> Scala consumers to encourage use of the new consumer and facilitate the 
> removal of the old consumers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1905: MINOR: Remove no longer required --new-consumer sw...

2016-09-25 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1905

MINOR: Remove no longer required --new-consumer switch in docs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka no-new-consumer-switch-in-examples

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1905.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1905


commit d77f5ca0de9f0300f96cd1841a8d4b8903c6a245
Author: Ismael Juma 
Date:   2016-09-25T08:45:30Z

MINOR: Remove no longer required --new-consumer switch in docs




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3282) Change tools to use new consumer if zookeeper is not specified

2016-09-25 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3282:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1376
[https://github.com/apache/kafka/pull/1376]

> Change tools to use new consumer if zookeeper is not specified
> --
>
> Key: KAFKA-3282
> URL: https://issues.apache.org/jira/browse/KAFKA-3282
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Ismael Juma
>Assignee: Arun Mahadevan
> Fix For: 0.10.1.0
>
>
> This only applies to tools that support the new consumer and it's similar to 
> what we did with the producer for 0.9.0.0, but with a better compatibility 
> story.
> Part of this JIRA is updating the documentation to remove `--new-consumer` 
> from command invocations where appropriate. An example where this will be the 
> case is in the security documentation.
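
As an illustration of the resulting doc change (using the console consumer as 
an example; the topic name is a placeholder):

    # before: explicit switch required
    bin/kafka-console-consumer.sh --new-consumer --bootstrap-server localhost:9092 --topic foo

    # after: the new consumer is inferred when --bootstrap-server is given
    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic foo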



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3282) Change tools to use new consumer if zookeeper is not specified

2016-09-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15520436#comment-15520436
 ] 

ASF GitHub Bot commented on KAFKA-3282:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1376


> Change tools to use new consumer if zookeeper is not specified
> --
>
> Key: KAFKA-3282
> URL: https://issues.apache.org/jira/browse/KAFKA-3282
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Ismael Juma
>Assignee: Arun Mahadevan
> Fix For: 0.10.1.0
>
>
> This only applies to tools that support the new consumer and it's similar to 
> what we did with the producer for 0.9.0.0, but with a better compatibility 
> story.
> Part of this JIRA is updating the documentation to remove `--new-consumer` 
> from command invocations where appropriate. An example where this will be the 
> case is in the security documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1376: KAFKA-3282: Change tools to use --new-consumer by ...

2016-09-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1376


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---