[GitHub] kafka pull request #4248: KAFKA-6258; SSLTransportLayer should keep reading ...

2017-11-21 Thread lindong28
GitHub user lindong28 opened a pull request:

https://github.com/apache/kafka/pull/4248

KAFKA-6258; SSLTransportLayer should keep reading from socket until either 
the buffer is full or the socket has no more data
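
A minimal sketch of the read-loop idea in the title (an illustration only, not the actual patch): keep draining the socket channel into the network read buffer until read() reports no more data or the buffer runs out of room.

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

final class ReadLoopSketch {
    // Keep reading until the buffer is full or the socket has no more data.
    static int readAll(SocketChannel channel, ByteBuffer netReadBuffer) throws IOException {
        int total = 0;
        while (netReadBuffer.hasRemaining()) {
            int read = channel.read(netReadBuffer);
            if (read <= 0) {
                break; // 0: no more data available right now; -1: end of stream
            }
            total += read;
        }
        return total;
    }
}
{code}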



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-6258

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4248.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4248


commit 12907b1a13a34f1b71c073b0a662e4c23eba20c0
Author: Dong Lin 
Date:   2017-11-22T04:08:50Z

KAFKA-6258; SSLTransportLayer should keep reading from socket until either 
the buffer is full or the socket has no more data




---


[jira] [Created] (KAFKA-6258) SSLTransportLayer should keep reading from socket until either the buffer is full or the socket has no more data

2017-11-21 Thread Dong Lin (JIRA)
Dong Lin created KAFKA-6258:
---

 Summary: SSLTransportLayer should keep reading from socket until 
either the buffer is full or the socket has no more data
 Key: KAFKA-6258
 URL: https://issues.apache.org/jira/browse/KAFKA-6258
 Project: Kafka
  Issue Type: Improvement
Reporter: Dong Lin
Assignee: Dong Lin








[GitHub] kafka pull request #4247: KAFKA-6250: Use existing internal topics without r...

2017-11-21 Thread gavrie
GitHub user gavrie opened a pull request:

https://github.com/apache/kafka/pull/4247

KAFKA-6250: Use existing internal topics without requiring ACL

When using Kafka Connect with a cluster that doesn't allow the user to
create topics (due to ACL configuration), Connect fails when trying to
create its internal topics, even if these topics already exist. This is
incorrect behavior according to the documentation, which mentions that
R/W access should be enough.

This happens specifically when using Aiven Kafka, which does not permit
creation of topics via the Kafka Admin Client API.

The patch ignores the returned error, similar to the behavior for older
brokers that don't support the API.
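
A minimal sketch of the behavior described above (an illustration using the plain AdminClient, not the actual Connect code path): attempt to create the internal topic and tolerate a rejected create call, on the assumption that the topic already exists.

{code}
import java.util.Collections;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.TopicExistsException;

final class InternalTopicSketch {
    static void ensureTopic(AdminClient admin, String name) throws InterruptedException {
        try {
            admin.createTopics(Collections.singleton(new NewTopic(name, 1, (short) 3))).all().get();
        } catch (ExecutionException e) {
            if (e.getCause() instanceof TopicExistsException) {
                return; // already there, nothing to do
            }
            // On ACL-restricted clusters the create call may be rejected outright;
            // the patch treats this like the older-broker case and carries on,
            // relying on the topic having been created out of band.
        }
    }
}
{code}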

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gavrie/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4247.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4247


commit 0b17d56257784d9def1418ab87650cd240892227
Author: Gavrie Philipson 
Date:   2017-11-22T06:56:28Z

KAFKA-6250: Use existing internal topics without requiring ACL

When using Kafka Connect with a cluster that doesn't allow the user to
create topics (due to ACL configuration), Connect fails when trying to
create its internal topics, even if these topics already exist. This is
incorrect behavior according to the documentation, which mentions that
R/W access should be enough.

This happens specifically when using Aiven Kafka, which does not permit
creation of topics via the Kafka Admin Client API.

The patch ignores the returned error, similar to the behavior for older
brokers that don't support the API.




---


Re: [DISCUSS] KIP-227: Introduce Incremental FetchRequests to Increase Partition Scalability

2017-11-21 Thread Jun Rao
Hi, Jay,

I guess in your proposal the leader has to cache the last offset given back
for each partition so that it knows from which offset to serve the next
fetch request. This is doable but it means that the leader needs to do an
additional index lookup per partition to serve a fetch request. Not sure if
the benefit from the lighter fetch request obviously offsets the additional
index lookup though.

Thanks,

Jun
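
An editorial illustration of the bookkeeping described above (hypothetical names, not from the KIP): the leader would have to remember, per fetch session and partition, the last offset it handed back so the next fetch can resume without the follower restating it.

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.TopicPartition;

final class FetchSessionOffsetCache {
    private final Map<TopicPartition, Long> lastReturnedOffset = new HashMap<>();

    // After serving a fetch, remember where the next one should resume.
    void onFetchServed(TopicPartition tp, long nextOffset) {
        lastReturnedOffset.put(tp, nextOffset);
    }

    // When the next fetch omits an explicit offset, resume from the cached
    // position -- the extra per-partition state (and lookup) Jun is weighing.
    long resumeOffset(TopicPartition tp, long defaultOffset) {
        return lastReturnedOffset.getOrDefault(tp, defaultOffset);
    }
}
{code}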

On Tue, Nov 21, 2017 at 7:03 PM, Jay Kreps  wrote:

> I think the general thrust of this makes a ton of sense.
>
> I don't love that we're introducing a second type of fetch request. I think
> the motivation is for compatibility, right? But isn't that what versioning
> is for? Basically to me although the modification we're making makes sense,
> the resulting protocol doesn't really seem like something you would design
> this way from scratch.
>
> I think I may be misunderstanding the semantics of the partitions in
> IncrementalFetchRequest. I think the intention is that the server remembers
> the partitions you last requested, and the partitions you specify in the
> request are added to this set. This is a bit odd though because you can add
> partitions but I don't see how you remove them, so it doesn't really let
> you fully make changes incrementally. I suspect I'm misunderstanding that
> somehow, though. You'd also need to be a little bit careful that there was
> no way for the server's idea of what the client is interested in and the
> client's idea to ever diverge as you made these modifications over time
> (due to bugs or whatever).
>
> It seems like an alternative would be to not add a second request, but
> instead change the fetch api and implementation
>
>1. We save the partitions you last fetched on that connection in the
>session for the connection (as I think you are proposing)
>2. It only gives you back info on partitions that have data or have
>changed (no reason you need the others, right?)
>3. Not specifying any partitions means "give me the usual", as defined
>by whatever you requested before attached to the session.
>
> This would be a new version of the fetch API, so compatibility would be
> retained by retaining the older version as is.
>
> This seems conceptually simpler to me. It's true that you have to resend
> the full set whenever you want to change it, but that actually seems less
> error prone and that should be rare.
>
> I suspect you guys thought about this and it doesn't quite work, but maybe
> you could explain why?
>
> -Jay
>
> On Tue, Nov 21, 2017 at 1:02 PM, Colin McCabe  wrote:
>
> > Hi all,
> >
> > I created a KIP to improve the scalability and latency of FetchRequest:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 227%3A+Introduce+Incremental+FetchRequests+to+Increase+
> > Partition+Scalability
> >
> > Please take a look.
> >
> > cheers,
> > Colin
> >
>


Build failed in Jenkins: kafka-trunk-jdk7 #2990

2017-11-21 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-6214: enable use of in-memory store for standby tasks

--
[...truncated 1.84 MB...]
org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldCheckpointOffsetsWhenStateIsFlushed PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldThrowStreamsExceptionWhenKeyDeserializationFails STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldThrowStreamsExceptionWhenKeyDeserializationFails PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldInitializeStateManager STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldInitializeStateManager PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldProcessRecordsForOtherTopic STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldProcessRecordsForOtherTopic PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldNotThrowStreamsExceptionWhenValueDeserializationFails STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldNotThrowStreamsExceptionWhenValueDeserializationFails PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldThrowStreamsExceptionWhenValueDeserializationFails STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldThrowStreamsExceptionWhenValueDeserializationFails PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldNotThrowStreamsExceptionWhenKeyDeserializationFailsWithSkipHandler STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldNotThrowStreamsExceptionWhenKeyDeserializationFailsWithSkipHandler PASSED

org.apache.kafka.streams.processor.internals.StreamsMetricsImplTest > 
testThroughputMetrics STARTED

org.apache.kafka.streams.processor.internals.StreamsMetricsImplTest > 
testThroughputMetrics PASSED

org.apache.kafka.streams.processor.internals.StreamsMetricsImplTest > 
testLatencyMetrics STARTED

org.apache.kafka.streams.processor.internals.StreamsMetricsImplTest > 
testLatencyMetrics PASSED

org.apache.kafka.streams.processor.internals.StreamsMetricsImplTest > 
testRemoveSensor STARTED

org.apache.kafka.streams.processor.internals.StreamsMetricsImplTest > 
testRemoveSensor PASSED

org.apache.kafka.streams.processor.internals.StreamsMetricsImplTest > 
testRemoveNullSensor STARTED

org.apache.kafka.streams.processor.internals.StreamsMetricsImplTest > 
testRemoveNullSensor PASSED

org.apache.kafka.streams.processor.internals.StreamsMetricsImplTest > 
testNullMetrics STARTED

org.apache.kafka.streams.processor.internals.StreamsMetricsImplTest > 
testNullMetrics PASSED

org.apache.kafka.streams.processor.internals.StreamsKafkaClientTest > 
shouldRequireBrokerVersions0110OrHigherWhenEosEnabled STARTED

org.apache.kafka.streams.processor.internals.StreamsKafkaClientTest > 
shouldRequireBrokerVersions0110OrHigherWhenEosEnabled PASSED

org.apache.kafka.streams.processor.internals.StreamsKafkaClientTest > 
shouldAddDefaultTopicConfigFromStreamConfig STARTED

org.apache.kafka.streams.processor.internals.StreamsKafkaClientTest > 
shouldAddDefaultTopicConfigFromStreamConfig PASSED

org.apache.kafka.streams.processor.internals.StreamsKafkaClientTest > 
shouldSetPropertiesDefinedByInternalTopicConfig STARTED

org.apache.kafka.streams.processor.internals.StreamsKafkaClientTest > 
shouldSetPropertiesDefinedByInternalTopicConfig PASSED

org.apache.kafka.streams.processor.internals.StreamsKafkaClientTest > 
shouldThrowStreamsExceptionOnEmptyFetchMetadataResponse STARTED

org.apache.kafka.streams.processor.internals.StreamsKafkaClientTest > 
shouldThrowStreamsExceptionOnEmptyFetchMetadataResponse PASSED

org.apache.kafka.streams.processor.internals.StreamsKafkaClientTest > 
shouldNotAllowNullTopicConfigs STARTED

org.apache.kafka.streams.processor.internals.StreamsKafkaClientTest > 
shouldNotAllowNullTopicConfigs PASSED

org.apache.kafka.streams.processor.internals.StreamsKafkaClientTest > 
shouldOverrideDefaultTopicConfigsFromStreamsConfig STARTED

org.apache.kafka.streams.processor.internals.StreamsKafkaClientTest > 
shouldOverrideDefaultTopicConfigsFromStreamsConfig PASSED

org.apache.kafka.streams.processor.internals.StreamsKafkaClientTest > 
shouldAddCleanupPolicyToTopicConfigWhenCreatingTopic STARTED

org.apache.kafka.streams.processor.internals.StreamsKafkaClientTest > 
shouldAddCleanupPolicyToTopicConfigWhenCreatingTopic PASSED

org.apache.kafka.streams.processor.internals.StreamsKafkaClientTest > 
shouldRequireBrokerVersion0101OrHigherWhenEosDisabled STARTED

org.apache.kafka.streams.processor.internals.StreamsKafkaClientTest > 
shouldRequireBrokerVersion0101OrHigherWhenEosDisabled PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #2229

2017-11-21 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-6214: enable use of in-memory store for standby tasks

--
[...truncated 384.54 KB...]
kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED


Re: [DISCUSS] KIP-227: Introduce Incremental FetchRequests to Increase Partition Scalability

2017-11-21 Thread Jay Kreps
I think the general thrust of this makes a ton of sense.

I don't love that we're introducing a second type of fetch request. I think
the motivation is for compatibility, right? But isn't that what versioning
is for? Basically to me although the modification we're making makes sense,
the resulting protocol doesn't really seem like something you would design
this way from scratch.

I think I may be misunderstanding the semantics of the partitions in
IncrementalFetchRequest. I think the intention is that the server remembers
the partitions you last requested, and the partitions you specify in the
request are added to this set. This is a bit odd though because you can add
partitions but I don't see how you remove them, so it doesn't really let
you fully make changes incrementally. I suspect I'm misunderstanding that
somehow, though. You'd also need to be a little bit careful that there was
no way for the server's idea of what the client is interested in and the
client's idea to ever diverge as you made these modifications over time
(due to bugs or whatever).

It seems like an alternative would be to not add a second request, but
instead change the fetch api and implementation

   1. We save the partitions you last fetched on that connection in the
   session for the connection (as I think you are proposing)
   2. It only gives you back info on partitions that have data or have
   changed (no reason you need the others, right?)
   3. Not specifying any partitions means "give me the usual", as defined
   by whatever you requested before attached to the session.

This would be a new version of the fetch API, so compatibility would be
retained by retaining the older version as is.

This seems conceptually simpler to me. It's true that you have to resend
the full set whenever you want to change it, but that actually seems less
error prone and that should be rare.

I suspect you guys thought about this and it doesn't quite work, but maybe
you could explain why?

-Jay
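
An editorial sketch of the session semantics proposed above (hypothetical names, not a Kafka API): the broker keeps the last requested partition set per connection, an empty partition list means "same as last time", and a non-empty list replaces the previous set.

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.TopicPartition;

final class FetchSessionSketch {
    // Partitions (and fetch offsets) this connection asked for last time.
    private final Map<TopicPartition, Long> interested = new HashMap<>();

    // Empty map = "give me the usual"; a non-empty map replaces the previous
    // set, so removals are expressed by resending the full set.
    void onFetchRequest(Map<TopicPartition, Long> requested) {
        if (!requested.isEmpty()) {
            interested.clear();
            interested.putAll(requested);
        }
    }

    // Only partitions that have new data (or otherwise changed) are returned.
    boolean shouldInclude(TopicPartition tp, long logEndOffset) {
        Long fetchOffset = interested.get(tp);
        return fetchOffset != null && logEndOffset > fetchOffset;
    }
}
{code}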

On Tue, Nov 21, 2017 at 1:02 PM, Colin McCabe  wrote:

> Hi all,
>
> I created a KIP to improve the scalability and latency of FetchRequest:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 227%3A+Introduce+Incremental+FetchRequests+to+Increase+
> Partition+Scalability
>
> Please take a look.
>
> cheers,
> Colin
>


Re: [DISCUSS] KIP-227: Introduce Incremental FetchRequests to Increase Partition Scalability

2017-11-21 Thread Jun Rao
Hi, Colin,

Thanks for the KIP. A few comments below.

1. Currently, if num.replica.fetchers is configured with a value larger
than 1, a broker will be using multiple fetcher threads to fetch from
another broker. So, there will be multiple concurrent fetch requests from a
given follower broker. Generating the UUID just based on the replica id
probably won't be enough in this case.

2. It's not very clear to me how the follower knows when to include the
skipped partitions again when new data is published to those partitions in
the future.

3. When replica.fetch.response.max.bytes is exceeded, the leader stops
giving data back for the remaining partitions in the fetch response. In
that case, we don't want to skip those partitions with empty data in the
IncrementalFetchRequest since they may actually have data.

4. Similar to #3, if replication throttling is enabled, it's possible for
the leader to give empty data in a partition even though the partition has
new data. In the case, it's not clear if the follower should blindly skip
that partition in the IncrementalFetchRequest.

5. Currently, the leader maintains a _lastCaughtUpTimeMs. If a follower
stops fetching a partition and _lastCaughtUpTimeMs is not updated, the
follower will fall out of ISR. So, will the leader remember all the
partitions in the last full fetch request so that it can keep
updating _lastCaughtUpTimeMs when serving IncrementalFetchRequest?

Jun


On Tue, Nov 21, 2017 at 1:02 PM, Colin McCabe  wrote:

> Hi all,
>
> I created a KIP to improve the scalability and latency of FetchRequest:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 227%3A+Introduce+Incremental+FetchRequests+to+Increase+
> Partition+Scalability
>
> Please take a look.
>
> cheers,
> Colin
>


[jira] [Created] (KAFKA-6257) KafkaConsumer was hung when bootstrap servers was not existed

2017-11-21 Thread Brian Clark (JIRA)
Brian Clark created KAFKA-6257:
--

 Summary: KafkaConsumer was hung when bootstrap servers was not 
existed
 Key: KAFKA-6257
 URL: https://issues.apache.org/jira/browse/KAFKA-6257
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: Brian Clark
Priority: Minor


Could anyone help me on this?

We have an issue if we enter a non-existent host:port for the bootstrap.servers
property on KafkaConsumer. The created KafkaConsumer hangs forever.

the debug message:
java.net.ConnectException: Connection timed out: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at 
org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
at 
org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:95)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:359)
at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:432)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:199)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:223)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:200)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1078)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
[2017-08-28 09:20:56,400] DEBUG Node -1 disconnected. 
(org.apache.kafka.clients.NetworkClient)
[2017-08-28 09:20:56,400] WARN Connection to node -1 could not be established. 
Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2017-08-28 09:20:56,400] DEBUG Give up sending metadata request since no node 
is available (org.apache.kafka.clients.NetworkClient)
[2017-08-28 09:20:56,450] DEBUG Initialize connection to node -1 for sending 
metadata request (org.apache.kafka.clients.NetworkClient)
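
A hedged workaround sketch (an editorial illustration, not from the ticket): callers can bound the blocking poll() with a watchdog thread that calls wakeup(), which makes poll() throw WakeupException instead of hanging indefinitely while the client keeps retrying the unreachable bootstrap servers.

{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

final class BoundedPollSketch {
    static <K, V> ConsumerRecords<K, V> pollWithDeadline(KafkaConsumer<K, V> consumer, long timeoutMs) {
        ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor();
        try {
            watchdog.schedule(consumer::wakeup, timeoutMs, TimeUnit.MILLISECONDS);
            return consumer.poll(timeoutMs);
        } catch (WakeupException e) {
            return ConsumerRecords.empty(); // give up instead of blocking forever
        } finally {
            watchdog.shutdownNow();
        }
    }
}
{code}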





Re: [DISCUSS]: KIP-159: Introducing Rich functions to Streams

2017-11-21 Thread Guozhang Wang
Jan,

Not sure I understand your argument that "we are still going to present
change.oldValue to the filter even though the record context() is for
change.newValue". Are you referring to `KTableFilter#process()`? If yes,
could you point me to the lines of code you are concerned about?


Guozhang
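
For orientation, a hedged sketch of what the "record context" already looks like at the PAPI level that this thread keeps referring to (Processor API of the 0.11/1.0 era); whether and how far that context should be carried into the DSL is exactly what KIP-159 debates.

{code}
import org.apache.kafka.streams.processor.AbstractProcessor;

final class ContextLoggingProcessor extends AbstractProcessor<String, String> {
    @Override
    public void process(String key, String value) {
        // topic/partition/offset/timestamp of the record currently being processed
        System.out.printf("record from %s-%d@%d (ts=%d): %s=%s%n",
                context().topic(), context().partition(), context().offset(),
                context().timestamp(), key, value);
        context().forward(key, value);
    }
}
{code}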


On Mon, Nov 20, 2017 at 9:29 PM, Jan Filipiak 
wrote:

> a remark of mine that got missed during migration:
>
> There is this problem: even though we have source.table.filter.join, the
> statefulness happens at the table step, not at the join step. In a filter
> we are still going to present change.oldValue to the filter even though the
> record context() is for change.newValue. I would go as far as applying
> the filter before the table processor. Not just to get KIP-159, but because
> I think it's a side effect of a non-ideal topology layout. If I can filter
> 99% of my records, my state could be way smaller. This also widely escalates
> the scope of the KIP.
>
> I can only see upsides of executing the filter first.
>
> Best Jan
>
>
>
> On 20.11.2017 22:22, Matthias J. Sax wrote:
>
>> I am moving this back to the DISCUSS thread... Last 10 emails were sent
>> to VOTE thread.
>>
>> Copying Guozhang's last summary below. Thanks for this summary. Very
>> comprehensive!
>>
>> It seems, we all agree, that the current implementation of the context
>> at PAPI level is ok, but we should not leak it into DSL.
>>
>> Thus, we can go with (2) or (3), where (3) is an extension to (2)
>> carrying the context to more operators than just sources. It also seems,
>> that we all agree, that many-to-one operations void the context.
>>
>> I still think, that just going with plain (2) is too restrictive -- but
>> I am also fine if we don't go with the full proposal of (3).
>>
>> Also note, that the two operators filter() and filterNot() don't modify
>> the record and thus for both, it would be absolutely valid to keep the
>> context.
>>
>> I personally would keep the context for at least all one-to-one
>> operators. One-to-many is debatable and I am fine to not carry the
>> context further: at least the offset information is questionable for
>> this case -- note, though, that semantically the timestamp is inherited
>> via one-to-many, and I also think this applies to "topic" and
>> "partition". Thus, I think it's still valuable information we can carry
>> downstreams.
>>
>>
>> -Matthias
>>
>> Jan: which approach are you referring to as "the approach that is on the
>>> table would be perfect"?
>>>
>>> Note that in today's PAPI layer we are already effectively exposing the
>>> record context which has the issues that we have been discussing right
>>> now,
>>> and its semantics is always referring to the "processing record" at hand.
>>> More specifically, we can think of processing a record a bit different:
>>>
>>> 1) the record traversed the topology from source to sink, it may be
>>> transformed into new object or even generate multiple new objects (think:
>>> branch) along the traversal. And the record context is referring to this
>>> processing record. Here the "lifetime" of the record lasts for the entire
>>> topology traversal and any new records of this traversal is treated as
>>> different transformed values of this record (this applies to join and
>>> aggregations as well).
>>>
>>> 2) the record being processed is wiped out in the first operator after
>>> the
>>> source, and NEW records are forwarded to downstream operators. I.e. each
>>> record only lives between two adjacent operators; once it reaches the new
>>> operator its lifetime has ended and new records are generated.
>>>
>>> I think in the past we have talked about Streams under both context, and
>>> we
>>> do not have a clear agreement. I agree that 2) is logically more
>>> understandable for users as it does not leak any internal implementation
>>> details (e.g. for stream-table joins, table record's traversal ends at
>>> the
>>> join operator as it is only be materialized, while stream record's
>>> traversal goes through the join operator to further down until sinks).
>>> However if we are going to interpret following 2) above then even for
>>> non-stateful operators we would not inherit record context. What we're
>>> discussing now, seems to imply a third semantics:
>>>
>>> 3) a record would traverse "through" one-to-one (non-stateful) operators,
>>> will "replicate" at one-to-many (non-stateful) operators (think:
>>> "mapValues"
>>>   ) and will "end" at many-to-one (stateful) operators where NEW records
>>> will be generated and forwarded to the downstream operators.
>>>
>>> Just wanted to lay the ground for discussions so we are all on the same
>>> page before chatting more.
>>>
>>>
>>> Guozhang
>>>
>>
>>
>> On 11/6/17 1:41 PM, Jeyhun Karimov wrote:
>>
>>> Hi Matthias,
>>>
>>> Thanks a lot for correcting. It is a leftover from the past designs when
>>> punctuate() was not deprecated.
>>> I corrected.
>>>
>>> Cheers,
>>> Jeyhun
>>>
>>> On Mon, Nov 

Build failed in Jenkins: kafka-trunk-jdk8 #2228

2017-11-21 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-4115: Increasing the heap settings for Connect scripts

--
[...truncated 387.40 KB...]
kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED


Build failed in Jenkins: kafka-trunk-jdk7 #2989

2017-11-21 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-4115: Increasing the heap settings for Connect scripts

--
[...truncated 387.83 KB...]
kafka.server.DelayedOperationTest > 
shouldReturnNilOperationsOnCancelForKeyWhenKeyDoesntExist PASSED

kafka.server.DelayedOperationTest > testDelayedOperationLockOverride STARTED

kafka.server.DelayedOperationTest > testDelayedOperationLockOverride PASSED

kafka.server.DelayedOperationTest > 
shouldCancelForKeyReturningCancelledOperations STARTED

kafka.server.DelayedOperationTest > 
shouldCancelForKeyReturningCancelledOperations PASSED

kafka.server.DelayedOperationTest > testRequestSatisfaction STARTED

kafka.server.DelayedOperationTest > testRequestSatisfaction PASSED

kafka.server.DelayedOperationTest > testDelayedOperationLock STARTED

kafka.server.DelayedOperationTest > testDelayedOperationLock PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDataChange 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDataChange PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnection STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnection PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForCreation STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForCreation PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout PASSED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString STARTED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData STARTED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline STARTED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > 

[jira] [Resolved] (KAFKA-6214) Using standby replicas with an in memory state store causes Streams to crash

2017-11-21 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-6214.
--
   Resolution: Fixed
Fix Version/s: 1.0.1
   1.1.0

Issue resolved by pull request 4239
[https://github.com/apache/kafka/pull/4239]

> Using standby replicas with an in memory state store causes Streams to crash
> 
>
> Key: KAFKA-6214
> URL: https://issues.apache.org/jira/browse/KAFKA-6214
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Matt Farmer
>Assignee: Damian Guy
>  Labels: statestore
> Fix For: 1.1.0, 1.0.1
>
>
> We decided to start experimenting with Standby Replicas of our State Stores 
> by setting the following configuration setting:
> {code}
> num.standby.replicas=1
> {code}
> Most applications did okay with this except for one that used an in memory 
> state store instead of a persistent state store. With the new configuration, 
> the first instance of this application booted fine. When the second instance 
> came up, both instances crashed with the following exception:
> {code}
> java.lang.IllegalStateException: Consumer is not subscribed to any topics or 
> assigned any partitions
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1037)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeUpdateStandbyTasks(StreamThread.java:752)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:524)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
> {code}
> Monit attempted to restart both instances but they would just continue to 
> crash over and over again. The state store in our problematic application is 
> declared like so:
> {code}
> Stores
> .create("TheStateStore")
> .withStringKeys()
> .withStringValues()
> .inMemory()
> .build()
> {code}
> Luckily we had a config switch in place that could turn on an alternate, 
> persistent state store. As soon as we flipped to the persistent state store, 
> things started working as we expected.
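
A hedged sketch of the workaround the reporter mentions at the end (the same store declared as persistent rather than in-memory, using the pre-1.0 Stores factory shown in the ticket):

{code}
import org.apache.kafka.streams.processor.StateStoreSupplier;
import org.apache.kafka.streams.state.Stores;

final class PersistentStoreSketch {
    static StateStoreSupplier theStateStore() {
        return Stores.create("TheStateStore")
                .withStringKeys()
                .withStringValues()
                .persistent()   // swap .inMemory() for .persistent()
                .build();
    }
}
{code}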





[GitHub] kafka pull request #4239: KAFKA-6214: enable use of in-memory store for stan...

2017-11-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4239


---


[jira] [Created] (KAFKA-6256) Flaky Unit test: KStreamKTableJoinIntegrationTest.shouldCountClicksPerRegionWithNonZeroByteCache

2017-11-21 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-6256:
--

 Summary: Flaky Unit test: 
KStreamKTableJoinIntegrationTest.shouldCountClicksPerRegionWithNonZeroByteCache
 Key: KAFKA-6256
 URL: https://issues.apache.org/jira/browse/KAFKA-6256
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Matthias J. Sax


{noformat}
Expected: <[KeyValue(americas, 101), KeyValue(europe, 109), KeyValue(asia, 
124)]>
 but: was <[KeyValue(europe, 13), KeyValue(asia, 25), KeyValue(americas, 
23)]>
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at 
org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest.countClicksPerRegion(KStreamKTableJoinIntegrationTest.java:301)
at 
org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest.shouldCountClicksPerRegionWithNonZeroByteCache(KStreamKTableJoinIntegrationTest.java:144)
{noformat}





[GitHub] kafka pull request #4246: MINOR: improve flaky Streams tests

2017-11-21 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/4246

MINOR: improve flaky Streams tests

Use TestUtil test directory for state directory instead of default 
/tmp/kafka-streams
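
A hedged sketch of the change being described (an illustration, not the diff): point each test's state.dir at a unique temp directory from the test utilities instead of the shared /tmp/kafka-streams default.

{code}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.test.TestUtils;

final class StreamsTestConfigSketch {
    static Properties baseStreamsConfig(String appId, String bootstrapServers) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, appId);
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        // a unique, test-managed directory avoids clashes between parallel test runs
        props.put(StreamsConfig.STATE_DIR_CONFIG, TestUtils.tempDirectory().getPath());
        return props;
    }
}
{code}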

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka improve-flaky-streams-tests

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4246.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4246


commit 12e53e3a8bd7d8cda22bb46c37f1fcc1a334960e
Author: Matthias J. Sax 
Date:   2017-11-22T01:22:45Z

MINOR: improve flaky Streams tests




---


[GitHub] kafka pull request #4245: KAFKA-6255: Add ProduceBench to Trogdor

2017-11-21 Thread cmccabe
GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/4245

KAFKA-6255: Add ProduceBench to Trogdor



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-6255

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4245.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4245


commit c6684ed3de05a68098f8bdb82b9306e80c1a4f16
Author: Colin P. Mccabe 
Date:   2017-11-22T01:00:56Z

KAFKA-6255: Add ProduceBench to Trogdor




---


Build failed in Jenkins: kafka-trunk-jdk8 #2227

2017-11-21 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-3073: Add topic regex support for Connect sinks

--
[...truncated 1.80 MB...]

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldRestoreTransactionalMessages STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldRestoreTransactionalMessages PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin PASSED


Access to confluence

2017-11-21 Thread Siva Santhalingam
Hi There,

Could you please provide access to cwiki.apache.org so that I can start
working on my KIP? I would also need access to the
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams page.

My id: sssanthalin...@kafka.apache.org


Thanks,
Siva


[jira] [Created] (KAFKA-6255) Add ProduceBench to Trogdor

2017-11-21 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-6255:
--

 Summary: Add ProduceBench to Trogdor
 Key: KAFKA-6255
 URL: https://issues.apache.org/jira/browse/KAFKA-6255
 Project: Kafka
  Issue Type: Sub-task
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


Add ProduceBench, a benchmark of producer latency, to Trogdor.
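
The general idea, sketched below with a plain producer (an editorial illustration, not the Trogdor implementation): timestamp each send and complete the measurement in the producer callback.

{code}
import java.util.Properties;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

final class ProduceLatencySketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        int records = 10_000;
        AtomicLong totalLatencyMs = new AtomicLong();
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < records; i++) {
                long start = System.currentTimeMillis();
                producer.send(new ProducerRecord<>("produce-bench", "key-" + i, "value-" + i),
                        (metadata, exception) -> totalLatencyMs.addAndGet(System.currentTimeMillis() - start));
            }
            producer.flush(); // wait for all in-flight sends (and their callbacks) to complete
        }
        System.out.printf("avg produce latency: %.2f ms%n", totalLatencyMs.get() / (double) records);
    }
}
{code}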





Re: Adding log4j-extras to Apache Kafka?

2017-11-21 Thread Colin McCabe
I was under the impression that the log4j2 configuration file syntax was
different than that of the original log4j.  The log4j 2 documentation
seems to confirm this (see
https://logging.apache.org/log4j/2.x/manual/migration.html ):

> Although the Log4j 2 configuration syntax is different than that of
> Log4j 1.x, most, if not all, of the same functionality is available.

So that would prevent us from moving to log4j2, except in a
compatibility-breaking 2.0 release.

best,
Colin
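
For reference, the kind of configuration log4j-extras enables (a hedged sketch in log4j 1.x XML form, not any Kafka default): with the extras jar on the classpath, a FileNamePattern ending in .gz makes each rolled log file gzip-compressed.

{code}
<!-- requires apache-log4j-extras on the classpath -->
<appender name="kafkaAppender" class="org.apache.log4j.rolling.RollingFileAppender">
  <rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">
    <param name="FileNamePattern" value="logs/server.%d{yyyy-MM-dd}.log.gz"/>
  </rollingPolicy>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="[%d] %p %m (%c)%n"/>
  </layout>
</appender>
{code}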


On Tue, Nov 21, 2017, at 16:52, Xavier Léauté wrote:
> Is there anything that would prevent us from moving directly to log4j2 as
> the default log backend – independently of our efforts to move remaining
> pieces to slf4j? Unless we have some very custom code, it should be
> possible to rely on log4j-1.2-api.jar to avoid having to migrate
> everything
> at once, but still benefit from appenders already included with log4j2
> (e.g. RollingFileAppender is already built into log4j2 core)
> 
> On Tue, Nov 21, 2017 at 4:16 PM Ismael Juma  wrote:
> 
> > Hi Colin,
> >
> > I think it's reasonable to include log4j-extras with the broker in the next
> > release (1.1.0). In the 2.0.0 timeframe, we may decide to move to log4j2
> > (or logback), but people can benefit from log4j-extras in the meantime.
> >
> > Ismael
> >
> > On Tue, Nov 21, 2017 at 11:50 PM, Colin McCabe  wrote:
> >
> > > Hi Viktor,
> > >
> > > I was under the impression that slf4j is just a facade over the
> > > underlying log library.  So the good thing about moving to slf4j is that
> > > it will allow us to more easily migrate to a new log library (logback,
> > > log4j2, etc.) in the future if we want.  slf4j is also better for
> > > library code, because it allows the library user to continue using their
> > > own logging library, rather than adopting ours.
> > >
> > > But I don't see why moving to slf4j would make log4j-extras less useful
> > > to us.  Perhaps I'm missing something?
> > >
> > > Also, just to clarify, I was proposing including log4j-extras with the
> > > Kafka brokers, not as a dependency of the producer, consumer, or other
> > > library code.
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Tue, Nov 21, 2017, at 06:25, Viktor Somogyi wrote:
> > > > Hi Colin,
> > > >
> > > > Currently we are moving away from directly referencing log4j to using
> > > > slf4j instead (KAFKA-1044). As this jira only aims to remove some
> > > > runtime dependencies we still need more work, but eventually users
> > > > will be able to change to their own implementation.
> > > >
> > > > Despite all this, the current default is still log4j, and I think it
> > > > would be a valuable conversation to have whether we keep it as the
> > > > default in the future (it's quite old, but with log4j-extras it can
> > > > still compete) or we change to others, like Log4j2 or Logback?
> > > >
> > > > What do you think?
> > > >
> > > > Viktor
> > > >
> > > >
> > > > On Fri, Nov 17, 2017 at 7:16 PM, Colin McCabe 
> > > wrote:
> > > >
> > > > > I'm curious if there is a reason we do not include log4j-extras in
> > > > > Kafka.  If we had it, users could configure RollingFileAppender with
> > > > > compression.
> > > > >
> > > > > best,
> > > > > Colin
> > > > >
> > >
> >


Re: Adding log4j-extras to Apache Kafka?

2017-11-21 Thread Ismael Juma
The slf4j conversion is nearly done, I expect it to be merged by tomorrow.

The main challenge in moving to log4j2 by default is that the configuration
is not compatible. If there's a way to support the old configuration, then
we can go straight to log4j2.

Ismael

On Wed, Nov 22, 2017 at 12:52 AM, Xavier Léauté  wrote:

> Is there anything that would prevent us from moving directly to log4j2 as
> the default log backend – independently of our efforts to move remaining
> pieces to slf4j? Unless we have some very custom code, it should be
> possible to rely on log4j-1.2-api.jar to avoid having to migrate everything
> at once, but still benefit from appenders already included with log4j2
> (e.g. RollingFileAppender is already built into log4j2 core)
>
> On Tue, Nov 21, 2017 at 4:16 PM Ismael Juma  wrote:
>
> > Hi Colin,
> >
> > I think it's reasonable to include log4j-extras with the broker in the
> next
> > release (1.1.0). In the 2.0.0 timeframe, we may decide to move to log4j2
> > (or logback), but people can benefit from log4j-extras in the meantime.
> >
> > Ismael
> >
> > On Tue, Nov 21, 2017 at 11:50 PM, Colin McCabe 
> wrote:
> >
> > > Hi Viktor,
> > >
> > > I was under the impression that slf4j is just a facade over the
> > > underlying log library.  So the good thing about moving to slf4j is
> that
> > > it will allow us to more easily migrate to a new log library (logback,
> > > log4j2, etc.) in the future if we want.  slf4j is also better for
> > > library code, because it allows the library user to continue using
> their
> > > own logging library, rather than adopting ours.
> > >
> > > But I don't see why moving to slf4j would make log4j-extras less useful
> > > to us.  Perhaps I'm missing something?
> > >
> > > Also, just to clarify, I was proposing including log4j-extras with the
> > > Kafka brokers, not as a dependency of the producer, consumer, or other
> > > library code.
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Tue, Nov 21, 2017, at 06:25, Viktor Somogyi wrote:
> > > > Hi Colin,
> > > >
> > > > Currently we are moving away from directly referencing log4j to using
> > > > slf4j instead (KAFKA-1044). As this jira only aims to remove some
> > > > runtime dependencies we still need more work, but eventually users
> > > > will be able to change to their own implementation.
> > > >
> > > > Despite all this, the current default is still log4j, and I think it
> > > > would be a valuable conversation to have whether we keep it as the
> > > > default in the future (it's quite old, but with log4j-extras it can
> > > > still compete) or we change to others, like Log4j2 or Logback?
> > > >
> > > > What do you think?
> > > >
> > > > Viktor
> > > >
> > > >
> > > > On Fri, Nov 17, 2017 at 7:16 PM, Colin McCabe 
> > > wrote:
> > > >
> > > > > I'm curious if there is a reason we do not include log4j-extras in
> > > > > Kafka.  If we had it, users could configure RollingFileAppender
> with
> > > > > compression.
> > > > >
> > > > > best,
> > > > > Colin
> > > > >
> > >
> >
>


Build failed in Jenkins: kafka-trunk-jdk7 #2988

2017-11-21 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-3073: Add topic regex support for Connect sinks

--
[...truncated 384.66 KB...]
kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 

Re: Adding log4j-extras to Apache Kafka?

2017-11-21 Thread Xavier Léauté
Is there anything that would prevent us from moving directly to log4j2 as
the default log backend – independently of our efforts to move remaining
pieces to slf4j? Unless we have some very custom code, it should be
possible to rely on log4j-1.2-api.jar to avoid having to migrate everything
at once, but still benefit from appenders already included with log4j2
(e.g. RollingFileAppender is already built into log4j2 core)

On Tue, Nov 21, 2017 at 4:16 PM Ismael Juma  wrote:

> Hi Colin,
>
> I think it's reasonable to include log4j-extras with the broker in the next
> release (1.1.0). In the 2.0.0 timeframe, we may decide to move to log4j2
> (or logback), but people can benefit from log4j-extras in the meantime.
>
> Ismael
>
> On Tue, Nov 21, 2017 at 11:50 PM, Colin McCabe  wrote:
>
> > Hi Viktor,
> >
> > I was under the impression that slf4j is just a facade over the
> > underlying log library.  So the good thing about moving to slf4j is that
> > it will allow us to more easily migrate to a new log library (logback,
> > log4j2, etc.) in the future if we want.  slf4j is also better for
> > library code, because it allows the library user to continue using their
> > own logging library, rather than adopting ours.
> >
> > But I don't see why moving to slf4j would make log4j-extras less useful
> > to us.  Perhaps I'm missing something?
> >
> > Also, just to clarify, I was proposing including log4j-extras with the
> > Kafka brokers, not as a dependency of the producer, consumer, or other
> > library code.
> >
> > best,
> > Colin
> >
> >
> > On Tue, Nov 21, 2017, at 06:25, Viktor Somogyi wrote:
> > > Hi Colin,
> > >
> > > Currently we are moving away from directly referencing log4j to using
> > > slf4j instead (KAFKA-1044). As this jira only aims to remove some
> > > runtime dependencies, we still need more work, but eventually users
> > > will be able to change to their own implementation.
> > >
> > > Despite all this, the current default is still log4j, and I think it
> > > would be a valuable conversation to have about whether we keep it as
> > > the default in the future (it's quite old, but with log4j-extras it
> > > can be competitive) or change to others, like Log4j2 or Logback?
> > >
> > > What do you think?
> > >
> > > Viktor
> > >
> > >
> > > On Fri, Nov 17, 2017 at 7:16 PM, Colin McCabe 
> > wrote:
> > >
> > > > I'm curious if there is a reason we do not include log4j-extras in
> > > > Kafka.  If we had it, users could configure RollingFileAppender with
> > > > compression.
> > > >
> > > > best,
> > > > Colin
> > > >
> >
>


Re: [DISCUSS] KIP-213 Support non-key joining in KTable

2017-11-21 Thread Guozhang Wang
Just to clarify: though "CombinedKey" will still be exposed as a public API,
it would not be used in any of the returned key types, so users do not need
to worry about providing a serde for it at all; it will only be used in the
Mapper parameter, and internally the Streams library would know how to serde
it if ever necessary.


Guozhang

On Tue, Nov 21, 2017 at 3:14 PM, Matthias J. Sax 
wrote:

> Jan,
>
> Thanks for explaining the Serde issue! This makes a lot of sense.
>
> I discussed with Guozhang about this issue and came up with the
> following idea that bridges both APIs:
>
> We still introduce CombinedKey as a public interface and exploit it to
> manage the key in the store and the changelog topic. For this case we
> can construct a suitable Serde internally based on the Serdes of both
> keys that are combined.
>
> However, the type of the result table is user defined and can be
> anything. To bridge between the CombinedKey and the user defined result
> type, users need to hand in a `ValueMapper` that
> converts the CombinedKey into the desired result type.
>
> Thus, the method signature would be something like
>
> >   KTable oneToManyJoin(
> >       KTable other,
> >       ValueMapper keyExtractor,
> >       ValueJoiner joiner,
> >       ValueMapper, KO> resultKeyMapper);
>
> The interface parameters are still easy to understand and don't leak
> implementation details IMHO.
>
> WDYT about this idea?
>
>
> -Matthias
>
>
> On 11/19/17 11:28 AM, Guozhang Wang wrote:
> > Hello Jan,
> >
> > I think I get your point about the cumbersomeness that CombinedKey would
> > introduce for serialization and tooling based on serdes. What I'm still
> > wondering about is the underlying mechanics of the joinPrefixFakers mapper:
> > from your latest comment it seems this mapper will be a one-time mapper:
> > we use this to map the originally resulting KTable, V0> to KTable and
> > then that mapper can be thrown away and be forgotten. Is that true? My
> > original thought is that you propose to carry this mapper all the way
> along
> > the rest of the topology to "abstract" the underlying combined keys.
> >
> > If it is the other way (i.e. the former approach), then the diagram of
> > these two approaches would be different: for the less intrusive approach
> we
> > would add one more step in this diagram to always do a mapping after the
> > "task perform join" block.
> >
> > Also another minor comment on the internal topic: I think many readers may
> > not get the schema of this topic, so it is better to indicate what would be
> > the key of this internal topic used for compaction, and what would be used
> > as the partition-key.
> >
> > Guozhang
> >
> >
> > On Sat, Nov 18, 2017 at 2:30 PM, Jan Filipiak 
> > wrote:
> >
> >> -> I think the relationships between the different used types, K0, K1, KO
> >> should be explained explicitly (all the information is there implicitly,
> >> but one needs to think hard to figure it out)
> >>
> >>
> >> I'm probably blind to this. Can you help me here? How would you formulate
> >> this?
> >>
> >> Thanks,
> >>
> >> Jan
> >>
> >>
> >> On 16.11.2017 23:18, Matthias J. Sax wrote:
> >>
> >>> Hi,
> >>>
> >>> I am just catching up on this discussion and did re-read the KIP and
> >>> discussion thread.
> >>>
> >>> In contrast to you, I prefer the second approach with CombinedKey as
> >>> return type for the following reasons:
> >>>
> >>>   1) the oneToManyJoin() method has fewer parameters
> >>>   2) those parameters are easy to understand
> >>>   3) we hide implementation details (joinPrefixFaker, leftKeyExtractor,
> >>> and the return type KO leaks internal implementation details from my
> >>> point of view)
> >>>   4) user can get their own KO type by extending CombinedKey interface
> >>> (this would also address the nesting issue Trevor pointed out)
> >>>
> >>> What's unclear to me is why you care about JSON serdes. What is the
> >>> problem with regard to the prefix? It seems I am missing something here.
> >>>
> >>> I also don't understand the argument about "the user can stick with his
> >>> default serde or his standard way of serializing"? If we have
> >>> `CombinedKey` as output, the use just provide the serdes for both input
> >>> combined-key types individually, and we can reuse both internally to do
> >>> the rest. This seems to be a way simpler API. With the KO output type
> >>> approach, users need to write an entirely new serde for KO in contrast.
> >>>
> >>> Finally, @Jan, there are still some open comments you did not address
> >>> and the KIP wiki page needs some updates. Would be great if you could
> do
> >>> this.
> >>>
> >>> Can you also explicitly describe the data layout of the store that is
> >>> used to do the range scans?
> >>>
> >>> Additionally:
> >>>
> >>> -> some arrows in the algorithm diagram are missing
> >>> -> what are those XXX in the diagram
> >>> -> 

Build failed in Jenkins: kafka-0.11.0-jdk7 #339

2017-11-21 Thread Apache Jenkins Server
See 


Changes:

[me] MINOR: Update Powermock to fix PushHttpMetricsReporterTest failures

--
[...truncated 2.45 MB...]
org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics 
STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactAndDeleteSetForWindowStores STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactAndDeleteSetForWindowStores PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactForNonWindowStores STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactForNonWindowStores PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddTimestampExtractorWithOffsetResetAndPatternPerSource STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddTimestampExtractorWithOffsetResetAndPatternPerSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsInternal STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsInternal PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithoutTopics STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithoutTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAddNullStateStoreSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAddNullStateStoreSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSource STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullTopicWhenAddingSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullTopicWhenAddingSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddSourceWithOffsetReset STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddSourceWithOffsetReset PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSource STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldThroughOnUnassignedStateStoreAccess STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 

Re: Adding log4j-extras to Apache Kafka?

2017-11-21 Thread Ismael Juma
Hi Colin,

I think it's reasonable to include log4j-extras with the broker in the next
release (1.1.0). In the 2.0.0 timeframe, we may decide to move to log4j2
(or logback), but people can benefit from log4j-extras in the meantime.

Ismael

On Tue, Nov 21, 2017 at 11:50 PM, Colin McCabe  wrote:

> Hi Viktor,
>
> I was under the impression that slf4j is just a facade over the
> underlying log library.  So the good thing about moving to slf4j is that
> it will allow us to more easily migrate to a new log library (logback,
> log4j2, etc.) in the future if we want.  slf4j is also better for
> library code, because it allows the library user to continue using their
> own logging library, rather than adopting ours.
>
> But I don't see why moving to slf4j would make log4j-extras less useful
> to us.  Perhaps I'm missing something?
>
> Also, just to clarify, I was proposing including log4j-extras with the
> Kafka brokers, not as a dependency of the producer, consumer, or other
> library code.
>
> best,
> Colin
>
>
> On Tue, Nov 21, 2017, at 06:25, Viktor Somogyi wrote:
> > Hi Colin,
> >
> > Currently we are moving away from directly referencing log4j to using
> > slf4j
> > instead (KAFKA-1044). As this jira only aims to remove some runtime
> > dependencies we still need more work but eventually users will be able to
> > change to their own implementation.
> >
> > Despite all this, the current default is still log4j, and I think it would
> > be a valuable conversation to have about whether we keep it as the default
> > in the future (it's quite old, but with log4j-extras it can be competitive)
> > or change to others, like Log4j2 or Logback?
> >
> > What do you think?
> >
> > Viktor
> >
> >
> > On Fri, Nov 17, 2017 at 7:16 PM, Colin McCabe 
> wrote:
> >
> > > I'm curious if there is a reason we do not include log4j-extras in
> > > Kafka.  If we had it, users could configure RollingFileAppender with
> > > compression.
> > >
> > > best,
> > > Colin
> > >
>


[GitHub] kafka pull request #4213: KAFKA-4115: Increasing the heap settings for conne...

2017-11-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4213


---


[GitHub] kafka pull request #4151: KAFKA-3073: Add topic regex support for Connect si...

2017-11-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4151


---


[GitHub] kafka pull request #4244: MINOR: improve flaky Streams system test

2017-11-21 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/4244

MINOR: improve flaky Streams system test

Handle TimeoutException in Producer callback and retry sending input data

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka improve-flaky-system-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4244.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4244


commit 4587cbf0db4ec3cd30f9f9c3ba981fa7c7a93c83
Author: Matthias J. Sax 
Date:   2017-11-21T23:58:22Z

MINOR: improve flaky Streams system test




---


Re: [DISCUSS] KIP-227: Introduce Incremental FetchRequests to Increase Partition Scalability

2017-11-21 Thread Colin McCabe
On Tue, Nov 21, 2017, at 15:48, Ted Yu wrote:
> For compatibility, I assume that the follower with new code would detect
> when the leader doesn't support this feature and fall back to the
> existing
> full request.
> 
> Cheers

In general, the protocol version which the brokers use to communicate
with each other is controlled by the inter.broker.protocol.version
configuration.  Probably at some point we should switch this to use
ApiVersionsRequest/Response, the same as how the client works.  But that
is outside the scope of this KIP.

best,
Colin


> 
> On Tue, Nov 21, 2017 at 3:44 PM, Colin McCabe  wrote:
> 
> > On Tue, Nov 21, 2017, at 14:35, Ted Yu wrote:
> > > Fill out the JIRA number.
> >
> > OK, will do.
> >
> > >
> > > bq. If the leader receives an *IncrementalFetchRequest* with a UUID that
> > > does not match that of the latest *FetchResponse*
> > >
> > > *By *latest *FetchResponse, you mean latest response for the broker Id,
> > > right ?*
> >
> > Right.
> >
> > best,
> > Colin
> >
> > >
> > > *Cheers*
> > >
> > > On Tue, Nov 21, 2017 at 1:02 PM, Colin McCabe 
> > wrote:
> > >
> > > > Hi all,
> > > >
> > > > I created a KIP to improve the scalability and latency of FetchRequest:
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 227%3A+Introduce+Incremental+FetchRequests+to+Increase+
> > > > Partition+Scalability
> > > >
> > > > Please take a look.
> > > >
> > > > cheers,
> > > > Colin
> > > >
> >


Re: Adding log4j-extras to Apache Kafka?

2017-11-21 Thread Colin McCabe
Hi Viktor,

I was under the impression that slf4j is just a facade over the
underlying log library.  So the good thing about moving to slf4j is that
it will allow us to more easily migrate to a new log library (logback,
log4j2, etc.) in the future if we want.  slf4j is also better for
library code, because it allows the library user to continue using their
own logging library, rather than adopting ours.

But I don't see why moving to slf4j would make log4j-extras less useful
to us.  Perhaps I'm missing something?

Also, just to clarify, I was proposing including log4j-extras with the
Kafka brokers, not as a dependency of the producer, consumer, or other
library code.

best,
Colin


On Tue, Nov 21, 2017, at 06:25, Viktor Somogyi wrote:
> Hi Colin,
> 
> Currently we are moving away from directly referencing log4j to using
> slf4j
> instead (KAFKA-1044). As this jira only aims to remove some runtime
> dependencies we still need more work but eventually users will be able to
> change to their own implementation.
> 
> Despite all this, the current default is still log4j, and I think it would
> be a valuable conversation to have about whether we keep it as the default
> in the future (it's quite old, but with log4j-extras it can be competitive)
> or change to others, like Log4j2 or Logback?
> 
> What do you think?
> 
> Viktor
> 
> 
> On Fri, Nov 17, 2017 at 7:16 PM, Colin McCabe  wrote:
> 
> > I'm curious if there is a reason we do not include log4j-extras in
> > Kafka.  If we had it, users could configure RollingFileAppender with
> > compression.
> >
> > best,
> > Colin
> >
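For illustration, the kind of configuration this would enable (a sketch only:
the appender classes come from the apache-log4j-extras companion jar, the file
paths are examples, and whether the rolling policy can be fully wired up from a
plain properties file or needs XML configuration would still need to be
verified):

{code}
# Illustrative log4j.properties fragment -- not an official Kafka configuration.
# The rolling appender and policy classes below ship with apache-log4j-extras.
log4j.appender.kafkaAppender=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.kafkaAppender.rollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
# A FileNamePattern ending in .gz asks log4j-extras to gzip each rolled file.
log4j.appender.kafkaAppender.rollingPolicy.FileNamePattern=/var/log/kafka/server.log.%d{yyyy-MM-dd}.gz
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
{code}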


Re: [DISCUSS] KIP-227: Introduce Incremental FetchRequests to Increase Partition Scalability

2017-11-21 Thread Ted Yu
For compatibility, I assume that the follower with new code would detect
when the leader doesn't support this feature and fall back to the existing
full request.

Cheers

On Tue, Nov 21, 2017 at 3:44 PM, Colin McCabe  wrote:

> On Tue, Nov 21, 2017, at 14:35, Ted Yu wrote:
> > Fill out the JIRA number.
>
> OK, will do.
>
> >
> > bq. If the leader receives an *IncrementalFetchRequest* with a UUID that
> > does not match that of the latest *FetchResponse*
> >
> > *By *latest *FetchResponse, you mean latest response for the broker Id,
> > right ?*
>
> Right.
>
> best,
> Colin
>
> >
> > *Cheers*
> >
> > On Tue, Nov 21, 2017 at 1:02 PM, Colin McCabe 
> wrote:
> >
> > > Hi all,
> > >
> > > I created a KIP to improve the scalability and latency of FetchRequest:
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 227%3A+Introduce+Incremental+FetchRequests+to+Increase+
> > > Partition+Scalability
> > >
> > > Please take a look.
> > >
> > > cheers,
> > > Colin
> > >
>


Build failed in Jenkins: kafka-trunk-jdk7 #2987

2017-11-21 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-6168: Connect Schema comparison is slow for large schemas

--
[...truncated 389.55 KB...]
kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.PermissionTypeTest > testJavaConversions STARTED

kafka.security.auth.PermissionTypeTest > testJavaConversions PASSED

kafka.security.auth.PermissionTypeTest > testFromString STARTED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

kafka.KafkaTest > testKafkaSslPasswords STARTED

kafka.KafkaTest > testKafkaSslPasswords PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgs STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.producer.SyncProducerTest > testReachableServer STARTED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas STARTED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout STARTED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse STARTED


[jira] [Created] (KAFKA-6254) Introduce Incremental FetchRequests to Increase Partition Scalability

2017-11-21 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-6254:
--

 Summary: Introduce Incremental FetchRequests to Increase Partition 
Scalability
 Key: KAFKA-6254
 URL: https://issues.apache.org/jira/browse/KAFKA-6254
 Project: Kafka
  Issue Type: Improvement
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


Introduce Incremental FetchRequests to Increase Partition Scalability.  See 
https://cwiki.apache.org/confluence/pages/editpage.action?pageId=74687799



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-trunk-jdk8 #2226

2017-11-21 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-6168: Connect Schema comparison is slow for large schemas

--
[...truncated 1.80 MB...]

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldRestoreTransactionalMessages STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldRestoreTransactionalMessages PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin PASSED


Re: [DISCUSS] KIP-227: Introduce Incremental FetchRequests to Increase Partition Scalability

2017-11-21 Thread Colin McCabe
On Tue, Nov 21, 2017, at 14:35, Ted Yu wrote:
> Fill out the JIRA number.

OK, will do.

> 
> bq. If the leader receives an *IncrementalFetchRequest* with a UUID that
> does not match that of the latest *FetchResponse*
> 
> *By *latest *FetchResponse, you mean latest response for the broker Id,
> right ?*

Right.

best,
Colin

> 
> *Cheers*
> 
> On Tue, Nov 21, 2017 at 1:02 PM, Colin McCabe  wrote:
> 
> > Hi all,
> >
> > I created a KIP to improve the scalability and latency of FetchRequest:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 227%3A+Introduce+Incremental+FetchRequests+to+Increase+
> > Partition+Scalability
> >
> > Please take a look.
> >
> > cheers,
> > Colin
> >


[jira] [Created] (KAFKA-6253) Improve sink connector topic regex validation

2017-11-21 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-6253:


 Summary: Improve sink connector topic regex validation
 Key: KAFKA-6253
 URL: https://issues.apache.org/jira/browse/KAFKA-6253
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Ewen Cheslack-Postava
Assignee: Randall Hauch
 Fix For: 1.1.0


KAFKA-3073 adds topic regex support for sink connectors. The addition requires 
that you only specify one of topics or topics.regex settings. This is being 
validated in one place, but not during submission of connectors. We should 
improve this since this means it's possible to get a bad connector config into 
the config topic.

For more detailed discussion, see 
https://github.com/apache/kafka/pull/4151#pullrequestreview-77300221
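For illustration, a sketch of the kind of mutual-exclusivity check that could
run when a connector config is submitted (class, method, and constant names
below are illustrative, not the actual Connect code):

{code:java}
import java.util.Map;
import org.apache.kafka.common.config.ConfigException;

class SinkTopicConfigValidator {
    static final String TOPICS_CONFIG = "topics";
    static final String TOPICS_REGEX_CONFIG = "topics.regex";

    // Reject configs where both or neither of the two settings are present.
    static void validate(Map<String, String> connectorProps) {
        boolean hasTopics = hasValue(connectorProps.get(TOPICS_CONFIG));
        boolean hasRegex = hasValue(connectorProps.get(TOPICS_REGEX_CONFIG));
        if (hasTopics == hasRegex) {
            throw new ConfigException(
                "Exactly one of '" + TOPICS_CONFIG + "' or '" + TOPICS_REGEX_CONFIG
                    + "' must be set for a sink connector");
        }
    }

    private static boolean hasValue(String v) {
        return v != null && !v.trim().isEmpty();
    }
}
{code}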



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP-213 Support non-key joining in KTable

2017-11-21 Thread Matthias J. Sax
Jan,

Thanks for explaining the Serde issue! This makes a lot of sense.

I discussed with Guozhang about this issue and came up with the
following idea that bridges both APIs:

We still introduce CombinedKey as a public interface and exploit it to
manage the key in the store and the changelog topic. For this case we
can construct a suitable Serde internally based on the Serdes of both
keys that are combined.

However, the type of the result table is user defined and can be
anything. To bridge between the CombinedKey and the user defined result
type, users need to hand in a `ValueMapper` that
converts the CombinedKey into the desired result type.

Thus, the method signature would be something like

>   KTable oneToManyJoin(
>       KTable other,
>       ValueMapper keyExtractor,
>       ValueJoiner joiner,
>       ValueMapper, KO> resultKeyMapper);

The interface parameters are still easy to understand and don't leak
implementation details IMHO.

WDYT about this idea?


-Matthias
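For readers parsing the quoted signature above, here is one hypothetical,
compilable reading of the proposed shape; the generic parameters were lost in
archiving, so the exact types below are a guess at the intent, not the KIP's
final API:

{code:java}
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.ValueJoiner;
import org.apache.kafka.streams.kstream.ValueMapper;

// Stand-in for the proposed CombinedKey type; the real KIP interface may differ.
interface CombinedKey<KLeft, KRight> {
    KLeft leftKey();
    KRight rightKey();
}

// Sketch of what oneToManyJoin could look like on a KTable<K, V>.
interface OneToManyJoinSketch<K, V> {
    <K1, V1, KO, VO> KTable<KO, VO> oneToManyJoin(
        KTable<K1, V1> other,
        ValueMapper<V1, K> keyExtractor,        // derive this table's key from the other table's value
        ValueJoiner<V, V1, VO> joiner,          // combine both values into the result value
        ValueMapper<CombinedKey<K1, K>, KO> resultKeyMapper); // map the internal key to the result key
}
{code}

Under this reading, only the serdes of the two input key types are needed; the
CombinedKey serde can be built from them internally, as argued earlier in the
thread.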


On 11/19/17 11:28 AM, Guozhang Wang wrote:
> Hello Jan,
> 
> I think I get your point about the cumbersomeness that CombinedKey would
> introduce for serialization and tooling based on serdes. What I'm still
> wondering about is the underlying mechanics of the joinPrefixFakers mapper:
> from your latest comment it seems this mapper will be a one-time mapper:
> we use this to map the originally resulting KTable, V0> to KTable and
> then that mapper can be thrown away and be forgotten. Is that true? My
> original thought is that you propose to carry this mapper all the way along
> the rest of the topology to "abstract" the underlying combined keys.
> 
> If it is the other way (i.e. the former approach), then the diagram of
> these two approaches would be different: for the less intrusive approach we
> would add one more step in this diagram to always do a mapping after the
> "task perform join" block.
> 
> Also another minor comment on the internal topic: I think many readers may
> not get the schema of this topic, so it is better to indicate what would be
> the key of this internal topic used for compaction, and what would be used
> as the partition-key.
> 
> Guozhang
> 
> 
> On Sat, Nov 18, 2017 at 2:30 PM, Jan Filipiak 
> wrote:
> 
>> -> I think the relationships between the different used types, K0, K1, KO
>> should be explained explicitly (all the information is there implicitly,
>> but one needs to think hard to figure it out)
>>
>>
>> I'm probably blind to this. Can you help me here? How would you formulate
>> this?
>>
>> Thanks,
>>
>> Jan
>>
>>
>> On 16.11.2017 23:18, Matthias J. Sax wrote:
>>
>>> Hi,
>>>
>>> I am just catching up on this discussion and did re-read the KIP and
>>> discussion thread.
>>>
>>> In contrast to you, I prefer the second approach with CombinedKey as
>>> return type for the following reasons:
>>>
>>>   1) the oneToManyJoin() method has fewer parameters
>>>   2) those parameters are easy to understand
>>>   3) we hide implementation details (joinPrefixFaker, leftKeyExtractor,
>>> and the return type KO leaks internal implementation details from my
>>> point of view)
>>>   4) user can get their own KO type by extending CombinedKey interface
>>> (this would also address the nesting issue Trevor pointed out)
>>>
>>> What's unclear to me is why you care about JSON serdes. What is the
>>> problem with regard to the prefix? It seems I am missing something here.
>>>
>>> I also don't understand the argument about "the user can stick with his
>>> default serde or his standard way of serializing"? If we have
>>> `CombinedKey` as output, the user just provides the serdes for both input
>>> combined-key types individually, and we can reuse both internally to do
>>> the rest. This seems to be a way simpler API. With the KO output type
>>> approach, users need to write an entirely new serde for KO in contrast.
>>>
>>> Finally, @Jan, there are still some open comments you did not address
>>> and the KIP wiki page needs some updates. Would be great if you could do
>>> this.
>>>
>>> Can you also explicitly describe the data layout of the store that is
>>> used to do the range scans?
>>>
>>> Additionally:
>>>
>>> -> some arrows in the algorithm diagram are missing
>>> -> what are those XXX in the diagram
>>> -> can you finish the "Step by Step" example
>>> -> I think the relationships between the different used types, K0, K1, KO
>>> should be explained explicitly (all the information is there implicitly,
>>> but one needs to think hard to figure it out)
>>>
>>>
>>> Last but not least:
>>>
>>> But no one is really interested.

>>> Don't understand this statement...
>>>
>>>
>>>
>>> -Matthias
>>>
>>>
>>> On 11/16/17 9:05 AM, Jan Filipiak wrote:
>>>
 We are running this perfectly fine. For us the smaller table changes
 rather infrequently, say only a few times per day. The performance of the
 flush is way lower than the computing 

Re: [DISCUSS] KIP-227: Introduce Incremental FetchRequests to Increase Partition Scalability

2017-11-21 Thread Ted Yu
Fill out the JIRA number.

bq. If the leader receives an *IncrementalFetchRequest* with a UUID that
does not match that of the latest *FetchResponse*

By *latest* FetchResponse, you mean the latest response for the broker Id,
right?

*Cheers*

On Tue, Nov 21, 2017 at 1:02 PM, Colin McCabe  wrote:

> Hi all,
>
> I created a KIP to improve the scalability and latency of FetchRequest:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 227%3A+Introduce+Incremental+FetchRequests+to+Increase+
> Partition+Scalability
>
> Please take a look.
>
> cheers,
> Colin
>
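As a rough illustration of the mechanism under discussion (a sketch only; the
class and method names below are invented, not the broker's actual code): the
leader remembers the UUID of the last full FetchResponse it sent to each
follower and serves an incremental fetch only when the follower echoes that
UUID back; otherwise the follower falls back to a full FetchRequest.

{code:java}
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

class IncrementalFetchSessions {
    // UUID attached to the latest full FetchResponse sent to each follower broker.
    private final Map<Integer, UUID> lastResponseIdByFollower = new ConcurrentHashMap<>();

    /** True if the incremental request may be served; false means the follower
     *  must retry with a full FetchRequest. */
    boolean canServeIncremental(int followerBrokerId, UUID requestUuid) {
        UUID expected = lastResponseIdByFollower.get(followerBrokerId);
        return expected != null && expected.equals(requestUuid);
    }

    /** Record a new UUID when a full FetchResponse is sent. */
    UUID newSession(int followerBrokerId) {
        UUID id = UUID.randomUUID();
        lastResponseIdByFollower.put(followerBrokerId, id);
        return id;
    }
}
{code}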


[GitHub] kafka pull request #4176: KAFKA-6168 Connect Schema comparison is slow for l...

2017-11-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4176


---


[jira] [Resolved] (KAFKA-6168) Connect Schema comparison is slow for large schemas

2017-11-21 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6168.
--
   Resolution: Fixed
Fix Version/s: 1.1.0

Issue resolved by pull request 4176
[https://github.com/apache/kafka/pull/4176]

> Connect Schema comparison is slow for large schemas
> ---
>
> Key: KAFKA-6168
> URL: https://issues.apache.org/jira/browse/KAFKA-6168
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 1.0.0
>Reporter: Randall Hauch
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 1.1.0
>
> Attachments: 6168.v1.txt
>
>
> The {{ConnectSchema}} implementation computes the hash code every time it's 
> needed, and {{equals(Object)}} is a deep equality check. This extra work can 
> be expensive for large schemas, especially in code like the {{AvroConverter}} 
> (or rather {{AvroData}} in the converter) that uses instances as keys in a 
> hash map that then requires significant use of {{hashCode}} and {{equals}}.
> The {{ConnectSchema}} is an immutable object and should at a minimum 
> precompute the hash code. Also, the order that the fields are compared in 
> {{equals(...)}} should use the cheapest comparisons first (e.g., the {{name}} 
> field is one of the _last_ fields to be checked). Finally, it might be worth 
> considering having each instance precompute and cache a string or byte[] 
> representation of all fields that can be used for faster equality checking.
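For illustration, a minimal sketch of the pattern being suggested (not the
actual ConnectSchema code): cache the hash code of the immutable object once,
and order the equals() comparisons so the cheapest checks run first.

{code:java}
import java.util.List;
import java.util.Objects;

final class CachedHashSchema {
    private final String name;
    private final List<String> fields;   // stand-in for the expensive-to-compare parts
    private final int hash;              // precomputed once because the object is immutable

    CachedHashSchema(String name, List<String> fields) {
        this.name = name;
        this.fields = fields;
        this.hash = Objects.hash(name, fields);
    }

    @Override
    public int hashCode() {
        return hash;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CachedHashSchema)) return false;
        CachedHashSchema that = (CachedHashSchema) o;
        if (this.hash != that.hash) return false;                  // cheapest: two cached ints
        if (!Objects.equals(this.name, that.name)) return false;   // cheap: short string
        return Objects.equals(this.fields, that.fields);           // expensive deep comparison last
    }
}
{code}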



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4243: MINOR: Update Powermock to fix PushHttpMetricsRepo...

2017-11-21 Thread ewencp
GitHub user ewencp reopened a pull request:

https://github.com/apache/kafka/pull/4243

MINOR: Update Powermock to fix PushHttpMetricsReporterTest failures

Fixes test failures where old versions of Powermock don't handle nested 
classes accessing parent field members when using mockStatic.

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka powermock-update

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4243.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4243


commit 7bb08997d9bd387875df6e18244ab48ac4e5f4d5
Author: Ewen Cheslack-Postava 
Date:   2017-11-21T19:01:56Z

MINOR: Update Powermock to fix PushHttpMetricsReporterTest failures

commit b6bb4ec143bc07a5e38eae347300b3a46d963c98
Author: Ewen Cheslack-Postava 
Date:   2017-11-21T19:35:25Z

Update WorkerTest for new PowerMock




---


[GitHub] kafka pull request #4243: MINOR: Update Powermock to fix PushHttpMetricsRepo...

2017-11-21 Thread ewencp
Github user ewencp closed the pull request at:

https://github.com/apache/kafka/pull/4243


---


[DISCUSS] KIP-227: Introduce Incremental FetchRequests to Increase Partition Scalability

2017-11-21 Thread Colin McCabe
Hi all,

I created a KIP to improve the scalability and latency of FetchRequest:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-227%3A+Introduce+Incremental+FetchRequests+to+Increase+Partition+Scalability

Please take a look.

cheers,
Colin


[GitHub] kafka pull request #4243: MINOR: Update Powermock to fix PushHttpMetricsRepo...

2017-11-21 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/4243

MINOR: Update Powermock to fix PushHttpMetricsReporterTest failures

Fixes test failures where old versions of Powermock don't handle nested 
classes accessing parent field members when using mockStatic.

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka powermock-update

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4243.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4243


commit 7bb08997d9bd387875df6e18244ab48ac4e5f4d5
Author: Ewen Cheslack-Postava 
Date:   2017-11-21T19:01:56Z

MINOR: Update Powermock to fix PushHttpMetricsReporterTest failures




---


Kafka consumer query.

2017-11-21 Thread sumit singhal
Hi Team,

My Kafka consumer has 2 threads and the number of partitions is, let's say,
10, so overall 5 partitions per consumer thread. I am saving the time at
which a particular record needs to be processed. Now, if record1 on
partition1 needs to be picked 10 hours from now, the thread should move to
the next partition to see if that partition can be picked.

example :

P1 - 8

P2 - 7

P3 - 6

P4 - 5

P5 - 4

Now, data on partition P1 needs to be picked at 8 hours and let's say the
current time is 6 hours. If I make my thread wait for 2 hours, I'll wait
for 2 hours although I could process P3, P4, and P5.

Please let me know how I should proceed. Any help is highly appreciated.

Regards,
Sumit.
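One way to approach this with the existing consumer API (a sketch under the
assumption that the application tracks the due time of the next record per
assigned partition; everything except the Kafka classes is made up): pause the
partitions whose next record is not due yet and resume them when their time
arrives, so poll() keeps returning records from the partitions that are ready.

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class DelayedPartitionPollLoop {
    // Due time (epoch millis) of the next record for each currently assigned
    // partition; maintained by the application as records are processed.
    private final Map<TopicPartition, Long> nextDueTime = new HashMap<>();

    void pollLoop(KafkaConsumer<String, String> consumer) {
        while (true) {
            long now = System.currentTimeMillis();
            List<TopicPartition> notDueYet = new ArrayList<>();
            List<TopicPartition> due = new ArrayList<>();
            for (Map.Entry<TopicPartition, Long> e : nextDueTime.entrySet()) {
                if (e.getValue() > now) {
                    notDueYet.add(e.getKey());
                } else {
                    due.add(e.getKey());
                }
            }
            consumer.pause(notDueYet);  // stop fetching from partitions that are not due yet
            consumer.resume(due);       // keep fetching from the partitions that are ready
            for (ConsumerRecord<String, String> record : consumer.poll(1000L)) {
                // process(record) here, then update nextDueTime for record's partition
            }
        }
    }
}
{code}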


[jira] [Created] (KAFKA-6252) A metric named 'XX' already exists, can't register another one.

2017-11-21 Thread Alexis Sellier (JIRA)
Alexis Sellier created KAFKA-6252:
-

 Summary: A metric named 'XX' already exists, can't register 
another one.
 Key: KAFKA-6252
 URL: https://issues.apache.org/jira/browse/KAFKA-6252
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 1.0.0
 Environment: Linux
Reporter: Alexis Sellier


When a connector crashes, It cannot be restarted and an exception like this is 
thrown 


{code:java}
java.lang.IllegalArgumentException: A metric named 'MetricName 
[name=offset-commit-max-time-ms, group=connector-task-metrics, description=The 
maximum time in milliseconds taken by this task to commit offsets., 
tags={connector=hdfs-sink-connector-recover, task=0}]' already exists, can't 
register another one.
at 
org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:532)
at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:256)
at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:241)
at 
org.apache.kafka.connect.runtime.WorkerTask$TaskMetricsGroup.(WorkerTask.java:328)
at 
org.apache.kafka.connect.runtime.WorkerTask.(WorkerTask.java:69)
at 
org.apache.kafka.connect.runtime.WorkerSinkTask.(WorkerSinkTask.java:98)
at 
org.apache.kafka.connect.runtime.Worker.buildWorkerTask(Worker.java:449)
at org.apache.kafka.connect.runtime.Worker.startTask(Worker.java:404)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startTask(DistributedHerder.java:852)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1600(DistributedHerder.java:108)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder$13.call(DistributedHerder.java:866)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder$13.call(DistributedHerder.java:862)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}


I guess it's because the function taskMetricsGroup.close is not called in all cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6251) Update kafka-configs.sh to use the new AdminClient

2017-11-21 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-6251:
-

 Summary: Update kafka-configs.sh to use the new AdminClient
 Key: KAFKA-6251
 URL: https://issues.apache.org/jira/browse/KAFKA-6251
 Project: Kafka
  Issue Type: New Feature
  Components: tools
Reporter: Rajini Sivaram
 Fix For: 1.1.0


The tool {{kafka-configs.sh}} that is used to describe/update dynamic 
configuration options (topic/quota etc.) currently updates configs directly in 
ZooKeeper. We should switch this to using the new AdminClient so that updates 
can be validated and secured without access to ZK. 

This needs a KIP since command line options will need to change.
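For illustration, a rough sketch of what the tool could do through the
AdminClient instead of ZooKeeper (topic name and config value below are
examples only, not the tool's final behaviour):

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class AlterTopicConfigExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            Config newConfig = new Config(
                Collections.singleton(new ConfigEntry("retention.ms", "86400000")));
            // The broker validates the change and enforces ACLs -- no ZK access needed.
            admin.alterConfigs(Collections.singletonMap(topic, newConfig)).all().get();
        }
    }
}
{code}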



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6250) Kafka Connect requires permission to create internal topics even if they exist

2017-11-21 Thread Gavrie Philipson (JIRA)
Gavrie Philipson created KAFKA-6250:
---

 Summary: Kafka Connect requires permission to create internal 
topics even if they exist
 Key: KAFKA-6250
 URL: https://issues.apache.org/jira/browse/KAFKA-6250
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 1.0.0, 0.11.0.1
Reporter: Gavrie Philipson


When using Kafka Connect with a cluster that doesn't allow the user to create 
topics (due to ACL configuration), Connect fails when trying to create its 
internal topics, even if these topics already exist.

This happens specifically when using hosted [Aiven 
Kafka|https://aiven.io/kafka], which does not permit creation of topics via the 
Kafka Admin Client API.

The problem is that Connect tries to create the topics, and ignores some 
specific errors such as topics that already exist, but not authorization errors.

This is what happens:
{noformat}
2017-11-21 15:57:24,176 [DistributedHerder] ERROR DistributedHerder:206 - 
Uncaught exception in herder work thread, exiting:
org.apache.kafka.connect.errors.ConnectException: Error while attempting to 
create/find topic(s) 'connect-offsets'
at 
org.apache.kafka.connect.util.TopicAdmin.createTopics(TopicAdmin.java:245)
at 
org.apache.kafka.connect.storage.KafkaOffsetBackingStore$1.run(KafkaOffsetBackingStore.java:99)
at 
org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:126)
at 
org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:109)
at org.apache.kafka.connect.runtime.Worker.start(Worker.java:146)
at 
org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:99)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:194)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.ClusterAuthorizationException: Cluster 
authorization failed.
at 
org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:213)
at 
org.apache.kafka.connect.util.TopicAdmin.createTopics(TopicAdmin.java:226)
... 11 more
Caused by: org.apache.kafka.common.errors.ClusterAuthorizationException: 
Cluster authorization failed.
{noformat}
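For illustration, a sketch of the patch idea described above (not the actual
TopicAdmin code): treat an authorization failure on create like the existing
"topic already exists" case and continue, so that pre-created topics can be
used with only read/write access.

{code:java}
import java.util.concurrent.ExecutionException;
import org.apache.kafka.common.errors.ClusterAuthorizationException;
import org.apache.kafka.common.errors.TopicAuthorizationException;
import org.apache.kafka.common.errors.TopicExistsException;

class CreateTopicOutcome {
    /** Returns true if creation succeeded logically or can safely be skipped. */
    static boolean createdOrAssumedExisting(ExecutionException e, String topic) {
        Throwable cause = e.getCause();
        if (cause instanceof TopicExistsException) {
            return true;  // someone else created it; nothing to do
        }
        if (cause instanceof ClusterAuthorizationException
                || cause instanceof TopicAuthorizationException) {
            // We are not allowed to create topics; assume the operator created
            // this one up front and continue, since the docs say R/W is enough.
            return true;
        }
        return false;     // a real error; the caller should rethrow
    }
}
{code}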



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-trunk-jdk8 #2225

2017-11-21 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-6247; Install Kibosh on Vagrant and fix release downloads in

--
[...truncated 385.93 KB...]
kafka.utils.CoreUtilsTest > testReadBytes STARTED

kafka.utils.CoreUtilsTest > testReadBytes PASSED

kafka.utils.CoreUtilsTest > testCsvList STARTED

kafka.utils.CoreUtilsTest > testCsvList PASSED

kafka.utils.CoreUtilsTest > testReadInt STARTED

kafka.utils.CoreUtilsTest > testReadInt PASSED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate STARTED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate PASSED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID STARTED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID PASSED

kafka.utils.CoreUtilsTest > testCsvMap STARTED

kafka.utils.CoreUtilsTest > testCsvMap PASSED

kafka.utils.CoreUtilsTest > testInLock STARTED

kafka.utils.CoreUtilsTest > testInLock PASSED

kafka.utils.CoreUtilsTest > testSwallow STARTED

kafka.utils.CoreUtilsTest > testSwallow PASSED

kafka.utils.IteratorTemplateTest > testIterator STARTED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.json.JsonValueTest > testJsonObjectIterator STARTED

kafka.utils.json.JsonValueTest > testJsonObjectIterator PASSED

kafka.utils.json.JsonValueTest > testDecodeLong STARTED

kafka.utils.json.JsonValueTest > testDecodeLong PASSED

kafka.utils.json.JsonValueTest > testAsJsonObject STARTED

kafka.utils.json.JsonValueTest > testAsJsonObject PASSED

kafka.utils.json.JsonValueTest > testDecodeDouble STARTED

kafka.utils.json.JsonValueTest > testDecodeDouble PASSED

kafka.utils.json.JsonValueTest > testDecodeOption STARTED

kafka.utils.json.JsonValueTest > testDecodeOption PASSED

kafka.utils.json.JsonValueTest > testDecodeString STARTED

kafka.utils.json.JsonValueTest > testDecodeString PASSED

kafka.utils.json.JsonValueTest > testJsonValueToString STARTED

kafka.utils.json.JsonValueTest > testJsonValueToString PASSED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArray STARTED

kafka.utils.json.JsonValueTest > testAsJsonArray PASSED

kafka.utils.json.JsonValueTest > testJsonValueHashCode STARTED

kafka.utils.json.JsonValueTest > testJsonValueHashCode PASSED

kafka.utils.json.JsonValueTest > testDecodeInt STARTED

kafka.utils.json.JsonValueTest > testDecodeInt PASSED

kafka.utils.json.JsonValueTest > testDecodeMap STARTED

kafka.utils.json.JsonValueTest > testDecodeMap PASSED

kafka.utils.json.JsonValueTest > testDecodeSeq STARTED

kafka.utils.json.JsonValueTest > testDecodeSeq PASSED

kafka.utils.json.JsonValueTest > testJsonObjectGet STARTED

kafka.utils.json.JsonValueTest > testJsonObjectGet PASSED

kafka.utils.json.JsonValueTest > testJsonValueEquals STARTED

kafka.utils.json.JsonValueTest > testJsonValueEquals PASSED

kafka.utils.json.JsonValueTest > testJsonArrayIterator STARTED

kafka.utils.json.JsonValueTest > testJsonArrayIterator PASSED

kafka.utils.json.JsonValueTest > testJsonObjectApply STARTED

kafka.utils.json.JsonValueTest > testJsonObjectApply PASSED

kafka.utils.json.JsonValueTest > testDecodeBoolean STARTED

kafka.utils.json.JsonValueTest > testDecodeBoolean PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic STARTED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired STARTED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents STARTED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize STARTED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents STARTED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize STARTED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner STARTED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration STARTED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition STARTED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker STARTED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed STARTED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer STARTED

kafka.producer.AsyncProducerTest > 

[jira] [Resolved] (KAFKA-4871) Kafka doesn't respect TTL on Zookeeper hostname - crash if zookeeper IP changes

2017-11-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-4871.

Resolution: Duplicate

Duplicate of KAFKA-5473.

> Kafka doesn't respect TTL on Zookeeper hostname - crash if zookeeper IP 
> changes
> ---
>
> Key: KAFKA-4871
> URL: https://issues.apache.org/jira/browse/KAFKA-4871
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.0
>Reporter: Stephane Maarek
>
> I had a Zookeeper cluster whose nodes automatically obtain hostnames so that 
> they remain constant over time. I deleted my 3 zookeeper machines and new machines 
> came back online with the same hostnames, and they updated their CNAME records.
> Kafka then failed and couldn't reconnect to Zookeeper as it didn't try to 
> resolve the IP of Zookeeper again. See log below:
> [2017-03-09 05:49:57,302] INFO Client will use GSSAPI as SASL mechanism. 
> (org.apache.zookeeper.client.ZooKeeperSaslClient)
> [2017-03-09 05:49:57,302] INFO Opening socket connection to server 
> zookeeper-3.example.com/10.12.79.43:2181. Will attempt to SASL-authenticate 
> using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
> [ec2-user]$ dig +short zookeeper-3.example.com
> 10.12.79.36
> As you can see, even though the machine is capable of resolving the new 
> IP, Kafka somehow didn't respect the TTL (set to 60 seconds) and 
> didn't pick up the new IP. I feel that on a failed Zookeeper connection, Kafka 
> should at least try to resolve the Zookeeper IP again. That would allow Kafka to 
> keep up with Zookeeper changes over time.
> What do you think? Is that expected behaviour or a bug?
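
As a purely illustrative sketch of the re-resolution being asked for (the class and method names are made up; note the JVM also caches lookups per the networkaddress.cache.ttl security property):

{noformat}
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

// Hypothetical helper: re-resolve the ZooKeeper hostname on every reconnect
// attempt instead of reusing a cached address.
public class ZkAddressResolver {

    public static InetSocketAddress resolve(String host, int port) throws UnknownHostException {
        // getByName consults DNS again (subject to the JVM's networkaddress.cache.ttl),
        // so an updated A/CNAME record is picked up on the next attempt.
        InetAddress fresh = InetAddress.getByName(host);
        return new InetSocketAddress(fresh, port);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(resolve("zookeeper-3.example.com", 2181));
    }
}
{noformat}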



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4041) kafka unable to reconnect to zookeeper behind an ELB

2017-11-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-4041.

Resolution: Fixed

Marking as a duplicate of KAFKA-5473. In that JIRA, we will recreate the 
`ZooKeeper` instance if there's an issue connecting and should hopefully fix 
this issue too.

> kafka unable to reconnect to zookeeper behind an ELB
> 
>
> Key: KAFKA-4041
> URL: https://issues.apache.org/jira/browse/KAFKA-4041
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1, 0.10.0.1
> Environment: RHEL EC2 instances
>Reporter: prabhakar
>Priority: Blocker
>
> Kafka brokers are unable to connect to zookeeper, which is behind an ELB.
> Kafka is using zkClient, which caches the IP address of zookeeper, so 
> even when the IP for zookeeper changes it keeps using the old 
> zookeeper IP.
> The server.properties has a DNS name. Ideally kafka should re-resolve the IP 
> via DNS in case of any failure connecting to zookeeper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (KAFKA-4041) kafka unable to reconnect to zookeeper behind an ELB

2017-11-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reopened KAFKA-4041:


> kafka unable to reconnect to zookeeper behind an ELB
> 
>
> Key: KAFKA-4041
> URL: https://issues.apache.org/jira/browse/KAFKA-4041
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1, 0.10.0.1
> Environment: RHEL EC2 instances
>Reporter: prabhakar
>Priority: Blocker
>
> Kafka brokers are unable to connect to zookeeper, which is behind an ELB.
> Kafka is using zkClient, which caches the IP address of zookeeper, so 
> even when the IP for zookeeper changes it keeps using the old 
> zookeeper IP.
> The server.properties has a DNS name. Ideally kafka should re-resolve the IP 
> via DNS in case of any failure connecting to zookeeper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4041) kafka unable to reconnect to zookeeper behind an ELB

2017-11-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-4041.

Resolution: Duplicate

> kafka unable to reconnect to zookeeper behind an ELB
> 
>
> Key: KAFKA-4041
> URL: https://issues.apache.org/jira/browse/KAFKA-4041
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1, 0.10.0.1
> Environment: RHEL EC2 instances
>Reporter: prabhakar
>Priority: Blocker
>
> Kafka brokers are unable to connect to zookeeper, which is behind an ELB.
> Kafka is using zkClient, which caches the IP address of zookeeper, so 
> even when the IP for zookeeper changes it keeps using the old 
> zookeeper IP.
> The server.properties has a DNS name. Ideally kafka should re-resolve the IP 
> via DNS in case of any failure connecting to zookeeper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2193) Intermittent network + DNS issues can cause brokers to permanently drop out of a cluster

2017-11-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-2193.

Resolution: Duplicate

Duplicate of KAFKA-5473.

> Intermittent network + DNS issues can cause brokers to permanently drop out 
> of a cluster
> 
>
> Key: KAFKA-2193
> URL: https://issues.apache.org/jira/browse/KAFKA-2193
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Tom Lee
>  Labels: broker
>
> Our Kafka cluster recently experienced some intermittent network & DNS 
> resolution issues such that this call to connect to Zookeeper failed with an 
> UnknownHostException:
> https://github.com/sgroschupf/zkclient/blob/0630c9c6e67ab49a51e80bfd939e4a0d01a69dfe/src/main/java/org/I0Itec/zkclient/ZkConnection.java#L67
> We observed this happen during a processStateChanged(KeeperState.Expired) 
> call:
> https://github.com/sgroschupf/zkclient/blob/0630c9c6e67ab49a51e80bfd939e4a0d01a69dfe/src/main/java/org/I0Itec/zkclient/ZkClient.java#L649
> The session expiry was in turn caused by what we suspect to be intermittent 
> network issues.
> The failed ZK reconnect seemed to put ZkClient into a state where it would 
> never recover, and the Kafka broker into a state where it would need a restart 
> to rejoin the cluster: ZkConnection._zk == null, zkclient 0.3.x doesn't appear to 
> make further attempts to reconnect after the failure, 
> and obviously no further state transitions are likely to happen without a 
> connection to ZK.
> The newer zkclient 0.4.0/0.5.0 releases will helpfully fire a notification 
> when this occurs, so the brokers have an opportunity to handle this sort of 
> failure in a more graceful manner (e.g. by trying to reconnect after some 
> backoff period):
> https://github.com/sgroschupf/zkclient/blob/0.4.0/src/main/java/org/I0Itec/zkclient/ZkClient.java#L461
> Happy to provide more info here if I can.
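
As a rough sketch of the "reconnect after some backoff period" handling suggested above, written against the plain ZooKeeper client API (the class name, timeouts, and backoff values are illustrative only, and a real implementation would not block the event thread):

{noformat}
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Hypothetical sketch: recreate the ZooKeeper handle with backoff on session
// expiry instead of giving up after one failed reconnect attempt.
public class ReconnectingZkClient implements Watcher {
    private final String connectString;
    private volatile ZooKeeper zk;

    public ReconnectingZkClient(String connectString) throws Exception {
        this.connectString = connectString;
        this.zk = new ZooKeeper(connectString, 6000, this);
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getState() != Event.KeeperState.Expired)
            return;
        long backoffMs = 1000;
        while (true) {
            try {
                // A new handle triggers a fresh DNS resolution of the connect string.
                zk = new ZooKeeper(connectString, 6000, this);
                return;
            } catch (Exception e) { // e.g. UnknownHostException during a DNS blip
                try {
                    Thread.sleep(backoffMs);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return;
                }
                backoffMs = Math.min(backoffMs * 2, 30000);
            }
        }
    }
}
{noformat}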



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2864) Bad zookeeper host causes broker to shutdown uncleanly and stall producers

2017-11-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-2864.

Resolution: Duplicate

Duplicate of KAFKA-5473.

> Bad zookeeper host causes broker to shutdown uncleanly and stall producers
> --
>
> Key: KAFKA-2864
> URL: https://issues.apache.org/jira/browse/KAFKA-2864
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 0.8.2.1
>Reporter: Mahdi
>Priority: Critical
> Attachments: kafka.log
>
>
> We are using kafka 0.8.2.1 and we noticed that kafka/zookeeper-client were 
> not able to gracefully handle a non-existent zookeeper instance. This caused 
> one of our brokers to get stuck during a self-inflicted shutdown, which 
> seemed to impact the partitions for which the broker was the leader even though 
> we had two other replicas.
> Here is a timeline of what happened (shortened for brevity, I'll attach log 
> snippets):
> We have a 7-node zookeeper cluster. Two of our nodes were decommissioned and 
> their dns records removed (zookeeper15 and zookeeper16). The decommissioning 
> happened about two weeks earlier. We noticed the following in the logs:
> - Opening socket connection to server ip-10-0-0-1.ec2.internal/10.0.0.1:2181. 
> Will not attempt to authenticate using SASL (unknown error)
> - Client session timed out, have not heard from server in 858ms for sessionid 
> 0x1250c5c0f1f5001c, closing socket connection and attempting reconnect
> - Opening socket connection to server ip-10.0.0.2.ec2.internal/10.0.0.2:2181. 
> Will not attempt to authenticate using SASL (unknown error)
> - zookeeper state changed (Disconnected)
> - Client session timed out, have not heard from server in 2677ms for 
> sessionid 0x1250c5c0f1f5001c, closing socket connection and attempting 
> reconnect
> - Opening socket connection to server ip-10.0.0.3.ec2.internal/10.0.0.3:2181. 
> Will not attempt to authenticate using SASL (unknown error)
> - Socket connection established to ip-10.0.0.3.ec2.internal/10.0.0.3:2181, 
> initiating session
> - zookeeper state changed (Expired)
> - Initiating client connection, 
> connectString=zookeeper21.example.com:2181,zookeeper19.example.com:2181,zookeeper22.example.com:2181,zookeeper18.example.com:2181,zookeeper20.example.com:2181,zookeeper16.example.com:2181,zookeeper15.example.com:2181/foo/kafka/central
>  sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@3bbc39f8
> - Unable to reconnect to ZooKeeper service, session 0x1250c5c0f1f5001c has 
> expired, closing socket connection
> - Unable to re-establish connection. Notifying consumer of the following 
> exception:
> org.I0Itec.zkclient.exception.ZkException: Unable to connect to 
> zookeeper21.example.com:2181,zookeeper19.example.com:2181,zookeeper22.example.com:2181,zookeeper18.example.com:2181,zookeeper20.example.com:2181,zookeeper16.example.com:2181,zookeeper15.example.com:2181/foo/kafka/central
> at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:69)
> at org.I0Itec.zkclient.ZkClient.reconnect(ZkClient.java:1176)
> at org.I0Itec.zkclient.ZkClient.processStateChanged(ZkClient.java:649)
> at org.I0Itec.zkclient.ZkClient.process(ZkClient.java:560)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> Caused by: java.net.UnknownHostException: zookeeper16.example.com: unknown 
> error
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
> at 
> java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
> at java.net.InetAddress.getAllByName(InetAddress.java:1192)
> at java.net.InetAddress.getAllByName(InetAddress.java:1126)
> at 
> org.apache.zookeeper.client.StaticHostProvider.(StaticHostProvider.java:61)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:380)
> at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:67)
> ... 5 more
> That seems to have caused the following:
>  [main-EventThread] [org.apache.zookeeper.ClientCnxn ]: EventThread shut 
> down
> Which in turn caused kafka to shut itself down
> [Thread-2] [kafka.server.KafkaServer]: [Kafka Server 13], 
> shutting down
> [Thread-2] [kafka.server.KafkaServer]: [Kafka Server 13], 
> Starting controlled shutdown
> However, the shutdown didn't go as expected apparently due to an NPE in the 
> zk client
> 2015-11-12T12:03:40.101Z WARN  [Thread-2 

[jira] [Resolved] (KAFKA-6247) Fix system test dependency issues

2017-11-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-6247.

   Resolution: Fixed
 Assignee: Colin P. McCabe
Fix Version/s: 1.1.0

> Fix system test dependency issues
> -
>
> Key: KAFKA-6247
> URL: https://issues.apache.org/jira/browse/KAFKA-6247
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 1.1.0
>
>
> Kibosh needs to be installed on Vagrant instances as well as in Docker 
> environments.  And we need to download old Apache Kafka releases from a 
> stable mirror that will not purge old releases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6249) Interactive query downtime when node goes down even with standby replicas

2017-11-21 Thread Charles Crain (JIRA)
Charles Crain created KAFKA-6249:


 Summary: Interactive query downtime when node goes down even with 
standby replicas
 Key: KAFKA-6249
 URL: https://issues.apache.org/jira/browse/KAFKA-6249
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.0.0
Reporter: Charles Crain


In a multi-node Kafka Streams application that uses interactive queries, the 
queryable store becomes unavailable (throws InvalidStateStoreException) for 
up to several minutes when a node goes down. This happens regardless of how 
many nodes are in the application and how many standby replicas are 
configured.

My expectation is that if a standby replica is present, the interactive 
query would fail over to the live replica immediately, causing negligible 
downtime for interactive queries. Instead, what appears to happen is that the 
queryable store is down for however long it takes the nodes to completely 
rebalance (a few minutes for a couple of GB of total data in the 
queryable store's backing topic).

I am filing this as a bug, realizing that it may in fact be a feature request.  
However, until there is a way we can use interactive queries with minimal 
(~zero) downtime on node failure, we are having to entertain other strategies 
for serving queries (e.g. manually materializing the topic to an external 
resilient store such as Cassandra) in order to meet our SLAs.

If there is a way to minimize the downtime of interactive queries on node 
failure that I am missing, I would like to know what it is.
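
For context, the usual idiom today is to retry around InvalidStateStoreException; that masks brief store migrations but, as described above, not the multi-minute rebalance. A minimal sketch (the class name, store name, and key/value types are assumptions):

{noformat}
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

// Illustrative workaround only: retry the lookup while the store is unavailable.
public final class StoreQueries {

    public static Long queryWithRetry(KafkaStreams streams, String storeName, String key)
            throws InterruptedException {
        while (true) {
            try {
                ReadOnlyKeyValueStore<String, Long> store =
                        streams.store(storeName, QueryableStoreTypes.<String, Long>keyValueStore());
                return store.get(key);
            } catch (InvalidStateStoreException e) {
                // Store not available on this instance, e.g. during a rebalance.
                Thread.sleep(100);
            }
        }
    }
}
{noformat}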

Our team is super-enthusiastic about Kafka Streams and we're keen to use it for 
just about everything! This is our only major roadblock.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4240: KAFKA-6247. Fix system test dependency issues

2017-11-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4240


---


Re: Adding log4j-extras to Apache Kafka?

2017-11-21 Thread Viktor Somogyi
Hi Colin,

Currently we are moving away from referencing log4j directly to using slf4j
instead (KAFKA-1044). As that JIRA only aims to remove some runtime
dependencies, more work is still needed, but eventually users will be able to
switch to their own implementation.

Despite all this, the current default is still log4j, and I think it would be
a valuable conversation to have whether we keep it as the default in the
future (it's quite old, but with log4j-extras it could stay competitive) or
change to something else, like Log4j2 or Logback.

What do you think?

Viktor


On Fri, Nov 17, 2017 at 7:16 PM, Colin McCabe  wrote:

> I'm curious if there is a reason we do not include log4j-extras in
> Kafka.  If we had it, users could configure RollingFileAppender with
> compression.
>
> best,
> Colin
>


[jira] [Created] (KAFKA-6248) Enable configuration of internal topics of Kafka Streams applications

2017-11-21 Thread Tim Van Laer (JIRA)
Tim Van Laer created KAFKA-6248:
---

 Summary: Enable configuration of internal topics of Kafka Streams 
applications
 Key: KAFKA-6248
 URL: https://issues.apache.org/jira/browse/KAFKA-6248
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Tim Van Laer
Priority: Minor


In the current implementation of Kafka Streams, it is not possible to set 
custom configuration on internal topics (e.g. max.message.bytes, 
retention.ms...). It would be nice if a developer could set some specific 
configuration. 

E.g. if you want to store messages bigger than 1MiB in a state store, you have 
to alter the corresponding changelog topic with a max.message.bytes setting. 

The workaround is to create the 'internal' topics upfront using the correct 
naming convention, so Kafka Streams will use the explicitly defined topics as if 
they were internal. 
An alternative, shown in the sketch below, is to alter the internal topics after the Kafka Streams 
application has started and created its internal topics. 
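
As a rough sketch of that second workaround using the Java AdminClient (the topic name follows the <application.id>-<store name>-changelog convention; all names and sizes here are examples, and note that the non-incremental alterConfigs call replaces the topic's existing dynamic overrides):

{noformat}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

// Illustrative only: raise max.message.bytes on an already-created changelog topic.
public class AlterChangelogConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(
                    ConfigResource.Type.TOPIC, "my-app-my-store-changelog"); // example name
            Config config = new Config(
                    Collections.singleton(new ConfigEntry("max.message.bytes", "2097152")));
            // Caution: this call sets the topic's dynamic config to exactly these entries.
            admin.alterConfigs(Collections.singletonMap(topic, config)).all().get();
        }
    }
}
{noformat}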



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP 226 - Dynamic Broker Configuration

2017-11-21 Thread Rajini Sivaram
Hi Ted,

You can quote the config name, but it is not necessary for deleting a
config since the name doesn't contain any special characters that require
quoting.

On Mon, Nov 20, 2017 at 9:20 PM, Ted Yu  wrote:

> Thanks for the quick response.
>
> It seems the config following --delete-config should be quoted.
>
> Cheers
>
> On Mon, Nov 20, 2017 at 12:02 PM, Rajini Sivaram 
> wrote:
>
> > Ted,
> >
> > Have added an example for --delete-config.
> >
> > On Mon, Nov 20, 2017 at 7:42 PM, Ted Yu  wrote:
> >
> > > bq. There is a --delete-config option
> > >
> > > Consider adding a sample with the above option to the KIP.
> > >
> > > Thanks
> > >
> > > On Mon, Nov 20, 2017 at 11:36 AM, Rajini Sivaram <
> > rajinisiva...@gmail.com>
> > > wrote:
> > >
> > > > Hi Ted,
> > > >
> > > > Thank you for reviewing the KIP.
> > > >
> > > > *Would decreasing network/IO threads be supported ?*
> > > > Yes, As described in the KIP, some connections will be closed if
> > network
> > > > thread count is reduced (and reconnections will be processed on
> > remaining
> > > > threads)
> > > >
> > > > *What if some keys in configs are not in the Set returned
> > > > by reconfigurableConfigs()? Would exception be thrown ?*
> > > > No, *reconfigurableConfigs()* will be used to decide which classes are
> > > > notified when a configuration update is made. *reconfigure(Map<String, ?>
> > > > configs)* will be invoked with all of the configured configs of the broker,
> > > > similar to *configure(Map<String, ?> configs)*. For example, when
> > > > *SslChannelBuilder* is made reconfigurable, it could just create a new
> > > > SslFactory with the latest configs, using the same code as *configure()*.
> > > > We avoid reconfiguring *SslChannelBuilder* unnecessarily, for example if
> > > > a topic config has changed, since topic configs are not listed in the
> > > > *SslChannelBuilder#reconfigurableConfigs()*.
> > > >
> > > > *The sample commands for bin/kafka-configs include '--add-config'.
> > Would
> > > > there be '--remove-config' ?*
> > > > bin/kafka-configs.sh is an existing tool whose parameters will not be
> > > > modified by this KIP. There is a --delete-config option.
> > > >
> > > > *ssl.keystore.password appears a few lines above. Would there be any
> > > > issue with mixture of connections (with old and new password) ?*
> > > > No, passwords (and the actual keystore) are only used during
> > > > authentication. Any channel created using the old SslFactory will not
> > be
> > > > impacted.
> > > >
> > > > Regards,
> > > >
> > > > Rajini
> > > >
> > > >
> > > > On Mon, Nov 20, 2017 at 4:39 PM, Ted Yu  wrote:
> > > >
> > > > > bq. (e.g. increase network/IO threads)
> > > > >
> > > > > Would decreasing network/IO threads be supported ?
> > > > >
> > > > > bq. void reconfigure(Map<String, ?> configs);
> > > > >
> > > > > What if some keys in configs are not in the Set returned by
> > > > > reconfigurableConfigs()
> > > > > ? Would exception be thrown ?
> > > > > If so, please specify which exception would be thrown.
> > > > >
> > > > > The sample commands for bin/kafka-configs include '--add-config'.
> > > > > Would there be '--remove-config' ?
> > > > >
> > > > > bq. Existing connections will not be affected, new connections will
> > use
> > > > the
> > > > > new keystore.
> > > > >
> > > > > ssl.keystore.password appears a few lines above. Would there be any
> > > issue
> > > > > with mixture of connections (with old and new password) ?
> > > > >
> > > > >
> > > > > Cheers
> > > > >
> > > > >
> > > > >
> > > > > On Mon, Nov 20, 2017 at 5:57 AM, Rajini Sivaram <
> > > rajinisiva...@gmail.com
> > > > >
> > > > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I have submitted KIP-226 to enable dynamic reconfiguration of
> > brokers
> > > > > > without restart:
> > > > > >
> > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > > 226+-+Dynamic+Broker+Configuration
> > > > > >
> > > > > > The KIP proposes to extend the current dynamic replication quota
> > > > > > configuration for brokers to support dynamic reconfiguration of a
> > > > limited
> > > > > > set of configuration options that are typically updated during
> the
> > > > > lifetime
> > > > > > of a broker.
> > > > > >
> > > > > > Feedback and suggestions are welcome.
> > > > > >
> > > > > > Thank you...
> > > > > >
> > > > > > Regards,
> > > > > >
> > > > > > Rajini
> > > > > >
> > > > >
> > > >
> > >
> >
>
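
To make the callback shape discussed in this thread concrete, a minimal sketch follows. Only reconfigurableConfigs() and reconfigure(Map<String, ?>) are taken from the discussion above; the interface name, the example class, and the chosen config keys are assumptions, not the final KIP-226 API:

{noformat}
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical shape of the callback discussed in this thread.
interface Reconfigurable {
    Set<String> reconfigurableConfigs();       // configs whose changes this component wants to see
    void reconfigure(Map<String, ?> configs);  // invoked with the broker's full current configs
}

// Example component: rebuilds its SSL machinery when one of its listed configs changes.
class SslChannelBuilderExample implements Reconfigurable {

    @Override
    public Set<String> reconfigurableConfigs() {
        Set<String> configs = new HashSet<>();
        configs.add("ssl.keystore.location");
        configs.add("ssl.keystore.password");
        return configs;
    }

    @Override
    public void reconfigure(Map<String, ?> configs) {
        // Same code path as configure(): build a new SslFactory from the latest configs.
        // New connections pick up the new keystore; existing connections are unaffected.
        createSslFactory(configs);
    }

    private void createSslFactory(Map<String, ?> configs) {
        // details omitted in this sketch
    }
}
{noformat}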