Build failed in Jenkins: kafka-trunk-jdk10 #455

2018-09-07 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-7117: Support AdminClient API in AclCommand (KIP-332) (#5463)

--
[...truncated 2.15 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

Re: [DISCUSS] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-07 Thread Ron Dagostino
Hi again, Rajini.  It occurs to me that from a *behavior* perspective there
are really 3 fundamental differences between the low-level approach you
provided and the high-level approach as it currently exists in the PR:

1) *When re-authentication starts*.  The low-level approach initiates
re-authentication only if/when the connection is actually used, so it may
start after the existing credential expires; the current PR implementation
injects re-authentication requests into the existing flow, and
re-authentication starts immediately regardless of whether or not the
connection is being used for something else.

2) *When requests not related to re-authentication can use the
re-authenticating connection*.  The low-level approach finishes
re-authentication completely before allowing anything else to traverse the
connection; the current PR implementation interleaves re-authentication
requests with existing flow requests.

3) *What happens if re-authentication fails*.  The low-level approach
results in a closed connection and does not support retries -- at least as
currently implemented; the current PR implementation supports retries.

Do you agree that these are the (only?) behavioral differences?

For these facets of behavior, I wonder what the requirements are for this
feature.  I think they are as follows:

1) *When re-authentication starts*: I don't think we have a hard
requirement here when considered in isolation -- whether re-authentication
starts immediately or is delayed until the connection is used probably
doesn't matter.

2) *When requests not related to re-authentication can use the
re-authenticating connection*: there is a tradeoff here between latency and
ability to debug re-authentication problems.  Delaying use of the
connection until re-authentication finishes results in bigger latency
spikes but keeps re-authentication requests somewhat together; interleaving
minimizes the size of individual latency spikes but adds some separation
between the requests.

3) *What happens if re-authentication fails*: I believe we have a clear
requirement for retry capability given what I have written previously.  Do
you agree?  Note that while the current low-level implementation does not
support retry, I have been thinking about how that can be done, and I am
pretty sure it can be.  We can keep the old authenticators on the client
and server sides and put them back into place if the re-authentication
fails; we would also have to make sure the server side doesn't delay any
failed re-authentication result and also doesn't close the connection upon
re-authentication failure.  I think I see how to do all of that.

There are some interactions between the above requirements.  If
re-authentication can't start immediately and has to wait for the
connection to be used then that precludes interleaving because we can't be
sure that the credential will be active by the time it is used -- if it
isn't active, and we allow interleaving, then requests not related to
re-authentication will fail if server-side
connection-close-due-to-expired-credential functionality is in place.  Also
note that any such server-side connection-close-due-to-expired-credential
functionality would likely have to avoid closing a connection until it is
used for anything other than re-authentication -- it must allow
re-authentication requests through when the credential is expired.

Given all of the above, it feels to me like the low-level solution is
viable only under the following conditions:

1) We must accept that re-authentication is delayed until the connection is
actually used
2) We must accept the bigger latency spikes associated with not
interleaving requests

Does this sound right to you, and if so, do you find these conditions
acceptable?  Or have I missed something and/or made incorrect assumptions
somewhere?

Ron


On Fri, Sep 7, 2018 at 5:19 PM Ron Dagostino  wrote:

> Hi Rajini.  The code really helped me to understand what you were thinking
> -- thanks.  I'm still digesting, but here are some quick observations:
>
> Your approach (I'll call it the "low-level" approach, as compared to the
> existing PR, which works at a higher level of abstraction) -- the low-level
> approach is certainly intriguing.  The smaller code size is welcome, of
> course, as is the fact that re-authentication simply works for everyone
> regardless of the style of use (async vs. sync I/O).
>
> I did notice that re-authentication of the connection starts only if/when
> the client uses the connection.  For example, if I run a console producer,
> re-authentication does not happen unless I try to produce something.  On
> the one hand this is potentially good -- if the client isn't using the
> connection then the connection stays "silent" and could be closed on the
> server side if it stays idle long enough -- whereas if we start injecting
> re-authentication requests (as is done in the high-level approach) then the
> connection never goes completely silent and could 

[jira] [Created] (KAFKA-7388) An equal sign in a property value causes the broker to fail

2018-09-07 Thread Andre Araujo (JIRA)
Andre Araujo created KAFKA-7388:
---

 Summary: An equal sign in a property value causes the broker to 
fail
 Key: KAFKA-7388
 URL: https://issues.apache.org/jira/browse/KAFKA-7388
 Project: Kafka
  Issue Type: Bug
Reporter: Andre Araujo


I caught this due to a keystore password that had an equal sign in it.

In this case the code 
[here|https://github.com/apache/kafka/blob/3cdc78e6bb1f83973a14ce1550fe3874f7348b05/core/src/main/scala/kafka/utils/CommandLineUtils.scala#L63-L76]
 throws an "Invalid command line properties" error and broker startup is 
aborted.
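
As an illustration of the failure mode only (a minimal Java sketch, not the
actual Kafka parser, which lives in CommandLineUtils.scala), a parser that
splits each argument on every '=' rejects any value that itself contains the
character:

    import java.util.Properties;

    public class NaiveKeyValueParser {

        // Splits each "key=value" argument on every '=' and rejects anything that
        // does not yield exactly two parts, so a value such as a keystore password
        // containing '=' is reported as "Invalid command line properties".
        static Properties parse(String... args) {
            Properties props = new Properties();
            for (String arg : args) {
                String[] parts = arg.split("=");
                if (parts.length != 2)
                    throw new IllegalArgumentException("Invalid command line properties: " + arg);
                props.put(parts[0].trim(), parts[1].trim());
            }
            return props;
        }

        public static void main(String[] args) {
            System.out.println(parse("ssl.keystore.password=changeit"));  // one '=': accepted
            try {
                parse("ssl.keystore.password=abc=def");                   // value contains '='
            } catch (IllegalArgumentException e) {
                System.out.println(e.getMessage());                       // the KAFKA-7388 symptom
            }
        }
    }

A limit-of-two split (arg.split("=", 2)) would keep everything after the first
'=' as the value; whether that is the right fix for the real parser is a
separate question.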



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7117) Allow AclCommand to use AdminClient API

2018-09-07 Thread Jun Rao (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-7117.

   Resolution: Fixed
Fix Version/s: 2.1.0

Merged the PR to trunk.

> Allow AclCommand to use AdminClient API
> ---
>
> Key: KAFKA-7117
> URL: https://issues.apache.org/jira/browse/KAFKA-7117
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
>  Labels: needs-kip
> Fix For: 2.1.0
>
>
> Currently AclCommand (kafka-acls.sh) uses the authorizer class (default 
> SimpleAclAuthorizer) to manage ACLs.
> We should also allow AclCommand to support AdminClient API based ACL 
> management. This will allow kafka-acls.sh script users to manage ACLs without 
> interacting with ZooKeeper/the authorizer directly. 
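
For context, a minimal sketch of the kind of AdminClient-based ACL management
the ticket refers to (topic name, principal, and bootstrap address are made up
for the example; error handling omitted):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.common.acl.AccessControlEntry;
    import org.apache.kafka.common.acl.AclBinding;
    import org.apache.kafka.common.acl.AclOperation;
    import org.apache.kafka.common.acl.AclPermissionType;
    import org.apache.kafka.common.resource.PatternType;
    import org.apache.kafka.common.resource.ResourcePattern;
    import org.apache.kafka.common.resource.ResourceType;

    public class AclAdminClientExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            try (AdminClient admin = AdminClient.create(props)) {
                // Grant User:alice READ on topic "payments" via the broker, not ZooKeeper.
                AclBinding binding = new AclBinding(
                    new ResourcePattern(ResourceType.TOPIC, "payments", PatternType.LITERAL),
                    new AccessControlEntry("User:alice", "*",
                        AclOperation.READ, AclPermissionType.ALLOW));
                admin.createAcls(Collections.singleton(binding)).all().get();
            }
        }
    }

Note that the broker must still have an authorizer configured for createAcls to
succeed.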



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk10 #454

2018-09-07 Thread Apache Jenkins Server
See 


Changes:

[lindong28] KAFKA-6082; Fence zookeeper updates with controller epoch zkVersion

--
[...truncated 2.15 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

Re: [DISCUSS] KIP-358: Migrate Streams API to Duration instead of long ms times

2018-09-07 Thread Matthias J. Sax
(1) Sounds good to me, to just use IllegalArgumentException for both --
and thanks for pointing out that Duration can be negative and we need to
check for this. For the KIP, it would be nice to add this to all methods then
(even if we don't do it in the code but only document it in the JavaDocs).
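
To make that concrete, a small sketch (mine, not the KIP's code) of a helper
that converts a Duration parameter to the milliseconds used internally, rejects
null/negative values, and rewraps the overflow ArithmeticException from
Duration#toMillis as an IllegalArgumentException:

    import java.time.Duration;

    final class ApiDurations {

        private ApiDurations() { }

        // Validates a Duration API parameter and converts it to the long millis
        // used internally, turning overflow (ArithmeticException) and negative
        // or null inputs into IllegalArgumentException as discussed in the KIP.
        static long validateMillis(String name, Duration duration) {
            if (duration == null)
                throw new IllegalArgumentException(name + " must not be null");
            if (duration.isNegative())
                throw new IllegalArgumentException(name + " must not be negative: " + duration);
            try {
                return duration.toMillis();
            } catch (ArithmeticException e) {
                throw new IllegalArgumentException(
                    name + " must not exceed Long.MAX_VALUE milliseconds: " + duration, e);
            }
        }
    }

A Duration-based overload could then just call this helper and delegate to the
existing long-based method.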

(2) I would argue for a new single method interface. Not sure about the
name though.
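
Purely as an illustration of the shape (the name is a placeholder, not a
proposal), such an interface could stay a single abstract method so lambdas
keep working:

    import java.time.Instant;

    // Hypothetical Instant-based counterpart to the long-based Punctuator.
    @FunctionalInterface
    interface InstantPunctuator {
        void punctuate(Instant timestamp);
    }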

(3) Even if `#fetch(K, K, long, long)` and `#fetchAll(long, long)` are 
_currently_ not used internally, I would still argue they are both dual 
use -- we might add a new DSL operator at any point that uses those 
methods. Thus, to be "future proof", I would consider them dual use.

> Since the ReadOnlyWindowStore is only used by IQ,

This contradicts your other statement:

> org.apache.kafka.streams.state.ReadOnlyWindowStore#fetch(K, long) is used
> in KStreamWindowAggregate.

Or do you suggest moving `fetch(K, long)` from `ReadOnlyWindowStore` to
`WindowStore`? This would not make sense IMHO, as `WindowStore extends
ReadOnlyWindowStore` and thus we would lose this method for IQ.


(4) While I agree that we might want to deprecate it, I am not sure if
this should be part of this KIP? Seems to be unrelated? Should this have
been part of KIP-319? If yes, we might still want to update this other
KIP? WDYT?


-Matthias


On 9/7/18 12:09 PM, John Roesler wrote:
> Hey all,
> 
> (1): Duration can be negative, just like long. We need to enforce any
> bounds that we currently enforce. We don't need the `throws` declaration
> for runtime exceptions, but the potential IllegalArgumentException should
> be documented in the javadoc for these methods. I still feel that surfacing
> the ArithmeticException directly would not be a great experience, so I
> still advocate for wrapping it in an IllegalArgumentException that explains
> our upper bound for Duration is "max-long number of milliseconds"
> 
> (2): I agree with your performance intuition. I don't think creating one
> object per call to punctuate is going to substantially affect the
> performance.
> 
> I think the deeper problem with Punctuator is that it's currently a SAM
> interface. If we add a new method to it, we break the source code of anyone
> passing a function. We can add the new method with a default
> implementation, as Nikolay suggested, but then you get into figuring out
> which one to default, and no one's happy. Alternatively, we can just make a
> brand new interface that is still a single method (but an Instant) and add
> the appropriate overloads and deprecate the old ones.
> 
> (3): I disagree. I think only two methods are dual use, and we should
> separate the internal from external usages. The internal usage should be
> added to WindowStore.
> org.apache.kafka.streams.state.ReadOnlyWindowStore#fetch(K, long) is used
> in KStreamWindowAggregate.
> org.apache.kafka.streams.state.ReadOnlyWindowStore#fetch(K, long, long) is
> used in KStreamKStreamJoin.
> Both of these usages are as WindowStore, so adding these interfaces to
> WindowStore would be transparent.
> 
> org.apache.kafka.streams.state.ReadOnlyWindowStore#fetch(K, K, long, long)
> is only used for IQ
> org.apache.kafka.streams.state.ReadOnlyWindowStore#fetchAll(long, long) is
> only used for IQ
> 
> Since the ReadOnlyWindowStore is only used by IQ, we can safely say that
> *all* of its methods are external use and can be deprecated and replaced.
> The first two usages I noted are WindowStore usages, not
> ReadOnlyWindowStores, and WindowStore is only used *internally*, so it's
> free to offer `long` methods if needed for performance reasons.
> 
> Does this make sense? The same reasoning extends to the other stores.
> 
> (4) Yes, that was my suggestion. I'm not sure if anyone is actually using
> this variant, so it seemed like a good time to just deprecate it and see.
> 
> Thoughts?
> -John
> 
> 
> On Fri, Sep 7, 2018 at 10:21 AM Nikolay Izhikov  wrote:
> 
>> Hello, Matthias.
>>
>> Thanks, for feedback.
>>
>>> (1) Some methods declare `throws IllegalArgumentException`, others
>>> don't.
>>
>> `duration.toMillis()` can throw ArithmeticException.
>> It can happen if overflow occurs during the conversion.
>> Please see the source of the JDK method Duration#toMillis.
>> The task author suggests wrapping it in an IllegalArgumentException.
>> I think we should add `throws IllegalArgumentException` for all methods
>> with a Duration parameter.
>> (I updated the KIP with this throws clause.)
>>
>> What do you think?
>>
>>> (3) ReadOnlyWindowStore: All three methods are dual use and I think we
>>> should not deprecate them.
>>
>> This is my typo, already fixed.
>> I propose to add new methods to `ReadOnlyWindowStore`.
>> No methods will become deprecated.
>>
>>> (4) Stores: 3 methods are listed as deprecated but only 2 new methods
>>> are added.
>>
>> My proposal is based on John Roesler's mail [1]:
>> "10. Stores: I think we can just deprecate without replacement the method
>> that takes `segmentInterval`."
>>
>> Is it wrong?
>>
>> [1] 

[VOTE] KIP-158: Kafka Connect should allow source connectors to set topic-specific settings for new topics

2018-09-07 Thread Randall Hauch
I believe the feedback on KIP-158 has been addressed. I'd like to start a
vote.

KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-158%3A+Kafka+Connect+should+allow+source+connectors+to+set+topic-specific+settings+for+new+topics

Discussion Thread:
https://www.mail-archive.com/dev@kafka.apache.org/msg73775.html

Thanks!

Randall


[jira] [Resolved] (KAFKA-6082) consider fencing zookeeper updates with controller epoch zkVersion

2018-09-07 Thread Dong Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin resolved KAFKA-6082.
-
Resolution: Fixed

> consider fencing zookeeper updates with controller epoch zkVersion
> --
>
> Key: KAFKA-6082
> URL: https://issues.apache.org/jira/browse/KAFKA-6082
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Onur Karaman
>Assignee: Zhanxiang (Patrick) Huang
>Priority: Major
> Fix For: 2.1.0
>
>
>  
> Kafka controller may fail to function properly (even after repeated 
> controller movement) due to the following sequence of events:
>  - User requests topic deletion
>  - Controller A deletes the partition znode
>  - Controller B becomes controller and reads the topic znode
>  - Controller A deletes the topic znode and remove the topic from the topic 
> deletion znode
>  - Controller B reads the partition znode and topic deletion znode
>  - According to controller B's context, the topic znode exists, the topic is 
> not listed for deletion, and some partition is not found for the given topic. 
> Then controller B will create the topic znode with empty data (i.e. an empty 
> partition assignment) and create the partition znodes.
>  - All controllers after controller B will fail because there is no data in 
> the topic znode.
> The long term solution is to have a way to prevent an old controller from 
> writing to zookeeper if it is not the active controller. One idea is to use 
> the zookeeper multi API (See 
> [https://zookeeper.apache.org/doc/r3.4.3/api/org/apache/zookeeper/ZooKeeper.html#multi(java.lang.Iterable))]
>  such that controller only writes to zookeeper if the zk version of the 
> controller epoch znode has not been changed.
> The short term solution is to let the controller read the topic deletion znode 
> first. If the topic is still listed in the topic deletion znode, then the new 
> controller will properly handle partition states of this topic without 
> creating partition znodes for this topic. And if the topic is not listed in 
> the topic deletion znode, then both the topic znode and the partition znodes 
> of this topic should have been deleted by the time the new controller tries 
> to read them.
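
A minimal sketch of the long-term idea above, using the ZooKeeper multi API so
a data write only succeeds if the controller epoch znode still has the
zkVersion the controller observed at election time (paths and version handling
are simplified; this is not Kafka's implementation):

    import java.util.Arrays;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.Op;
    import org.apache.zookeeper.ZooKeeper;

    public class FencedZkWrite {

        // The check op and the setData op commit atomically or not at all, so a
        // controller holding a stale epoch zkVersion cannot write anything.
        static void writeIfStillController(ZooKeeper zk, int expectedEpochZkVersion,
                                           String path, byte[] data)
                throws KeeperException, InterruptedException {
            zk.multi(Arrays.asList(
                Op.check("/controller_epoch", expectedEpochZkVersion),
                Op.setData(path, data, -1)));  // -1 ignores the data znode's own version
        }
    }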



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-07 Thread Ron Dagostino
Hi Rajini.  The code really helped me to understand what you were thinking
-- thanks.  I'm still digesting, but here are some quick observations:

Your approach (I'll call it the "low-level" approach, as compared to the
existing PR, which works at a higher level of abstraction) -- the low-level
approach is certainly intriguing.  The smaller code size is welcome, of
course, as is the fact that re-authentication simply works for everyone
regardless of the style of use (async vs. sync I/O).

I did notice that re-authentication of the connection starts only if/when
the client uses the connection.  For example, if I run a console producer,
re-authentication does not happen unless I try to produce something.  On
the one hand this is potentially good -- if the client isn't using the
connection then the connection stays "silent" and could be closed on the
server side if it stays idle long enough -- whereas if we start injecting
re-authentication requests (as is done in the high-level approach) then the
connection never goes completely silent and could (?) potentially avoid
being closed due to idleness.

However, if we implement server-side kill of connections using expired
credentials (which we agree is needed), then we might end up with the
broker killing connections that are sitting idle for only a short period of
time.  For example, if we refresh the token on the client side and tell the
connection that it is eligible to be re-authenticated, then it is
conceivable that the connection might be sitting idle at that point and
might not be used until after the token it is currently using expires.  The
server might kill the connection, and that would force the client to
re-connect with a new connection (requiring TLS negotiation). The
probability of this happening increases as the token lifetime decreases, of
course, and it can be offset by decreasing the window factor (i.e. make it
eligible for re-authentication at 50% of the lifetime rather than 80%, for
example -- it would have to sit idle for longer before the server tried to
kill it).  We haven't implemented server-side kill yet, so maybe we could
make it intelligent and only kill the connection if it is used (for
anything except re-authentication) after the expiration time...
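
To put numbers on the window-factor trade-off (the method and names here are
illustrative, not the PR's):

    import java.time.Duration;
    import java.time.Instant;

    public class ReauthWindow {

        // A credential issued at 'issuedAt' becomes eligible for re-authentication
        // once 'windowFactor' of its lifetime has elapsed.
        static Instant reauthEligibleAt(Instant issuedAt, Duration lifetime, double windowFactor) {
            long eligibleAfterMs = (long) (lifetime.toMillis() * windowFactor);
            return issuedAt.plusMillis(eligibleAfterMs);
        }

        public static void main(String[] args) {
            Instant issued = Instant.parse("2018-09-07T12:00:00Z");
            // With a 1-hour token: factor 0.8 leaves 12 minutes of idle margin,
            // factor 0.5 leaves 30 minutes before the credential itself expires.
            System.out.println(reauthEligibleAt(issued, Duration.ofHours(1), 0.8)); // 12:48:00Z
            System.out.println(reauthEligibleAt(issued, Duration.ofHours(1), 0.5)); // 12:30:00Z
        }
    }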

I also wonder about the ability to add retry into the low-level approach.
Do you think it would be possible?  It doesn't seem like it to me -- at
least not without some big changes that would eliminate some of the
advantage of being able to reuse the existing authentication code.  The
reason I ask is because I think retry is necessary.  It is part of how
refreshes work for both GSSAPI and OAUTHBEARER -- they refresh based on
some window factor (i.e. 80%) and implement periodic retry upon failure so
that they can maximize the chances of having a new credential available for
any new connection attempt.  Without retry we could end up in the
situation where the connection still has some lifetime left (10%, 20%, or
whatever) but it tries to re-authenticate and cannot through no fault of
its own (i.e. token endpoint down, some Kerberos failure, etc.) -- the
connection is closed at that point, and it is then unable to reconnect
because of the same temporary problem.  We could end up with an especially
ill-timed, temporary outage in some non-Kafka system (related to OAuth or
Kerberos, or some LDAP directory) causing all clients to be kicked off the
cluster.  Retry capability seems to me to be the way to mitigate this risk.

Anyway, that's it for now.  I really like the approach you outlined -- at
least at this point based on my current understanding.  I will continue to
dig in, and I may send more comments/questions.  But for now, I think the
lack of retry -- and my definitely-could-be-wrong sense that it can't
easily be added -- is my biggest concern with this low-level approach.

Ron

On Thu, Sep 6, 2018 at 4:57 PM Rajini Sivaram 
wrote:

> Hi Ron,
>
> The commit
>
> https://github.com/rajinisivaram/kafka/commit/b9d711907ad843c11d17e80d6743bfb1d4e3f3fd
> shows
> the kind of flow I was thinking of. It is just a prototype with a fixed
> re-authentication period to explore the possibility of implementing
> re-authentication similar to authentication. There will be edge cases to
> cover and errors to handle, but hopefully the code makes the approach
> clearer than my earlier explanation!
>
> So the differences between the two implementations are as you have already
> mentioned earlier.
>
> 1. Re-authentication sequences are not interleaved with Kafka requests.
> As you said, this has a higher impact on latency. IMO, this also makes it
> easier to debug, especially with complex protocols like Kerberos which are
> notoriously difficult to diagnose.
> 2. Re-authentication failures will not be retried, they will be treated
> as fatal errors similar to authentication failures. IMO, since we rely on
> brokers never rejecting valid authentication requests (clients treat
>

Jenkins build is back to normal : kafka-trunk-jdk8 #2942

2018-09-07 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-1.1-jdk7 #190

2018-09-07 Thread Apache Jenkins Server
See 


Changes:

[lindong28] MINOR: Use annotationProcessor instead of compile for JMH annotation

--
[...truncated 1.93 MB...]
org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenBeforeBeginningOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenBeforeBeginningOffset PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
shouldThrowOnInvalidDateFormat STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
shouldThrowOnInvalidDateFormat PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenAfterEndOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenAfterEndOffset PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testShiftOffsetByWhenBeforeBeginningOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testShiftOffsetByWhenBeforeBeginningOffset PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfProducerEnableIdempotenceIsOverriddenIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfProducerEnableIdempotenceIsOverriddenIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideNonPrefixedCustomConfigsWithPrefixedConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideNonPrefixedCustomConfigsWithPrefixedConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled 
STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfNotAtLestOnceOrExactlyOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfNotAtLestOnceOrExactlyOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingConsumerIsolationLevelIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingConsumerIsolationLevelIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfRestoreConsumerConfig STARTED

org.apache.kafka.streams.StreamsConfigTest 

Build failed in Jenkins: kafka-1.1-jdk7 #191

2018-09-07 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Update README to specify Gradle 4.6 as the minimum required

--
[...truncated 422.88 KB...]
kafka.zk.KafkaZkClientTest > testDelegationTokenMethods PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics STARTED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testConsumerOffsetPathAcls STARTED

kafka.security.auth.ZkAuthorizationTest > testConsumerOffsetPathAcls PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.PermissionTypeTest > testJavaConversions STARTED

kafka.security.auth.PermissionTypeTest > testJavaConversions PASSED

kafka.security.auth.PermissionTypeTest > testFromString STARTED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls 

Build failed in Jenkins: kafka-1.0-jdk7 #238

2018-09-07 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Update README to specify Gradle 4.6 as the minimum required

--
[...truncated 288.59 KB...]

kafka.producer.AsyncProducerTest > testJavaProducer PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder STARTED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testProducerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure 
STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure 
PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerWithAuthenticationFailure PASSED

kafka.api.LogAppendTimeTest > testProduceConsume STARTED

kafka.api.LogAppendTimeTest > testProduceConsume PASSED

kafka.api.LogDirFailureTest > testIOExceptionDuringLogRoll STARTED

kafka.api.LogDirFailureTest > testIOExceptionDuringLogRoll PASSED

kafka.api.LogDirFailureTest > testIOExceptionDuringCheckpoint STARTED

kafka.api.LogDirFailureTest > testIOExceptionDuringCheckpoint PASSED

kafka.api.LogDirFailureTest > 
testReplicaFetcherThreadAfterLogDirFailureOnFollower STARTED

kafka.api.LogDirFailureTest > 
testReplicaFetcherThreadAfterLogDirFailureOnFollower PASSED

kafka.api.MetricsTest > testMetrics STARTED

kafka.api.MetricsTest > testMetrics PASSED

kafka.api.FetchRequestTest > testShuffleWithSingleTopic STARTED

kafka.api.FetchRequestTest > testShuffleWithSingleTopic PASSED

kafka.api.FetchRequestTest > testShuffle STARTED

kafka.api.FetchRequestTest > testShuffle PASSED

kafka.api.PlaintextProducerSendTest > 
testSendCompressedMessageWithLogAppendTime STARTED

kafka.api.PlaintextProducerSendTest > 
testSendCompressedMessageWithLogAppendTime PASSED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic STARTED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic PASSED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime STARTED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testBatchSizeZero STARTED

kafka.api.PlaintextProducerSendTest > testBatchSizeZero PASSED

kafka.api.PlaintextProducerSendTest > testWrongSerializer STARTED

kafka.api.PlaintextProducerSendTest > testWrongSerializer PASSED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithLogAppendTime STARTED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithLogAppendTime PASSED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithCreateTime STARTED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testClose STARTED

kafka.api.PlaintextProducerSendTest > testClose PASSED

kafka.api.PlaintextProducerSendTest > testFlush STARTED

kafka.api.PlaintextProducerSendTest > testFlush PASSED

kafka.api.PlaintextProducerSendTest > testSendToPartition STARTED

kafka.api.PlaintextProducerSendTest > testSendToPartition PASSED

kafka.api.PlaintextProducerSendTest > testSendOffset STARTED

kafka.api.PlaintextProducerSendTest > testSendOffset PASSED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime 
STARTED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
STARTED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromSenderThread 
STARTED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromSenderThread 
PASSED


Build failed in Jenkins: kafka-trunk-jdk10 #449

2018-09-07 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-7372: Upgrade Jetty for preliminary Java 11 and TLS 1.3 
support

--
[...truncated 2.20 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Jenkins build is back to normal : kafka-trunk-jdk10 #452

2018-09-07 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-1.1-jdk7 #192

2018-09-07 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #2941

2018-09-07 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Update README to specify Gradle 4.6 as the minimum required

--
[...truncated 2.68 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

Jenkins build is back to normal : kafka-2.0-jdk8 #135

2018-09-07 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk10 #451

2018-09-07 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Update README to specify Gradle 4.6 as the minimum required

--
[...truncated 2.23 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED


Build failed in Jenkins: kafka-0.11.0-jdk7 #398

2018-09-07 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Update streams upgrade system tests 0.11.0.3 (#5613)

--
[...truncated 1.55 MB...]

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskGetsFencedUsingIsolatedAppInstances STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskGetsFencedUsingIsolatedAppInstances PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskFailsWithState STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskFailsWithState PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologiesAndMultiplePartitions STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologiesAndMultiplePartitions PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRestartAfterClose STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRestartAfterClose PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldRestoreGlobalInMemoryKTableOnRestart STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldRestoreGlobalInMemoryKTableOnRestart PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled 
STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfNotAtLestOnceOrExactlyOnce STARTED


Jenkins build is back to normal : kafka-1.0-jdk7 #239

2018-09-07 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-2.0-jdk8 #137

2018-09-07 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Enable ignored upgrade system tests 2.0 (#5614)

--
[...truncated 434.57 KB...]
kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOfflinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOfflinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransition PASSED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
STARTED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
PASSED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent STARTED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent PASSED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException 
STARTED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testNoOpAction STARTED


Build failed in Jenkins: kafka-2.0-jdk8 #134

2018-09-07 Thread Apache Jenkins Server
See 


Changes:

[lindong28] MINOR: Use annotationProcessor instead of compile for JMH annotation

--
[...truncated 1.98 MB...]
org.apache.kafka.clients.producer.MockProducerTest > 
shouldDropConsumerGroupOffsetsOnAbortIfTransactionsAreEnabled PASSED

org.apache.kafka.clients.producer.RecordMetadataTest > testNullChecksum STARTED

org.apache.kafka.clients.producer.RecordMetadataTest > testNullChecksum PASSED

org.apache.kafka.clients.producer.RecordMetadataTest > 
testConstructionWithRelativeOffset STARTED

org.apache.kafka.clients.producer.RecordMetadataTest > 
testConstructionWithRelativeOffset PASSED

org.apache.kafka.clients.producer.RecordMetadataTest > 
testConstructionWithMissingRelativeOffset STARTED

org.apache.kafka.clients.producer.RecordMetadataTest > 
testConstructionWithMissingRelativeOffset PASSED

org.apache.kafka.clients.producer.RecordSendTest > testTimeout STARTED

org.apache.kafka.clients.producer.RecordSendTest > testTimeout PASSED

org.apache.kafka.clients.producer.RecordSendTest > testError STARTED

org.apache.kafka.clients.producer.RecordSendTest > testError PASSED

org.apache.kafka.clients.producer.RecordSendTest > testBlocking STARTED

org.apache.kafka.clients.producer.RecordSendTest > testBlocking PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testOsDefaultSocketBufferSizes STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testOsDefaultSocketBufferSizes PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testInvalidSocketSendBufferSize STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testInvalidSocketSendBufferSize PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testInvalidSocketReceiveBufferSize STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testInvalidSocketReceiveBufferSize PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > closeShouldBeIdempotent 
STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > closeShouldBeIdempotent 
PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testMetricConfigRecordingLevel STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testMetricConfigRecordingLevel PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testConstructorFailureCloseResource STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testConstructorFailureCloseResource PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > testSerializerClose 
STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > testSerializerClose PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testConstructorWithSerializers STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testConstructorWithSerializers PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > testNoSerializerProvided 
STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > testNoSerializerProvided 
PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testInterceptorConstructClose STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testInterceptorConstructClose PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > testPartitionerClose 
STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > testPartitionerClose 
PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
shouldCloseProperlyAndThrowIfInterrupted STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > 
shouldCloseProperlyAndThrowIfInterrupted PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testTopicRefreshInMetadata STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testTopicRefreshInMetadata PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testPartitionsForWithNullTopic STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testPartitionsForWithNullTopic PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testInitTransactionTimeout STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testInitTransactionTimeout PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testOnlyCanExecuteCloseAfterInitTransactionsTimeout STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testOnlyCanExecuteCloseAfterInitTransactionsTimeout PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testCloseWhenWaitingForMetadataUpdate STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testCloseWhenWaitingForMetadataUpdate PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > testMetadataFetch STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > testMetadataFetch PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testMetadataFetchOnStaleMetadata STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testMetadataFetchOnStaleMetadata PASSED


Jenkins build is back to normal : kafka-trunk-jdk10 #450

2018-09-07 Thread Apache Jenkins Server
See 




[DISCUSS] KIP-371: Add a configuration to build custom SSL principal name

2018-09-07 Thread Manikumar
Hi all,

I have created a KIP that proposes a couple of options for building custom
SSL principal names.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-371%3A+Add+a+configuration+to+build+custom+SSL+principal+name

Please take a look.

Thanks,
Manikumar


Re: [DISCUSS] KIP-358: Migrate Streams API to Duration instead of long ms times

2018-09-07 Thread John Roesler
Hey all,

(1): Duration can be negative, just like long. We need to enforce any
bounds that we currently enforce. We don't need the `throws` declaration
for runtime exceptions, but the potential IllegalArgumentException should
be documented in the javadoc for these methods. I still feel that surfacing
the ArithmeticException directly would not be a great experience, so I
still advocate for wrapping it in an IllegalArgumentException that explains
our upper bound for Duration is "max-long number of milliseconds"
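
A rough sketch of that wrapping (the class and method names here are purely
illustrative, not an API proposed in the KIP) could look like:

    import java.time.Duration;

    final class DurationValidation {
        // Illustrative helper only -- not an existing Kafka Streams method.
        // Converts a Duration argument to milliseconds, turning a negative value
        // or the ArithmeticException thrown by Duration#toMillis on overflow
        // into an IllegalArgumentException with a readable message.
        static long validateMillis(String paramName, Duration duration) {
            if (duration == null || duration.isNegative()) {
                throw new IllegalArgumentException(paramName + " must be a non-negative duration");
            }
            try {
                return duration.toMillis();
            } catch (ArithmeticException e) {
                throw new IllegalArgumentException(
                    paramName + " must be at most Long.MAX_VALUE milliseconds", e);
            }
        }
    }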

(2): I agree with your performance intuition. I don't think creating one
object per call to punctuate is going to substantially affect the
performance.

I think the deeper problem with Punctuator is that it's currently a SAM
interface. If we add a new method to it, we break the source code of anyone
passing a function. We can add the new method with a default
implementation, as Nikolay suggested, but then you get into figuring out
which one to default, and no one's happy. Alternatively, we can just make a
brand new interface that is still a single method (but an Instant) and add
the appropriate overloads and deprecate the old ones.
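
For illustration only, a new single-method interface along those lines could be
shaped as below; the interface name and the Duration-based schedule overload are
assumptions of this sketch, not decisions made in this thread:

    import java.time.Instant;

    // Illustrative shape: an Instant-based punctuation callback that stays a
    // single abstract method, so lambdas keep working, living alongside the
    // existing org.apache.kafka.streams.processor.Punctuator.
    @FunctionalInterface
    interface InstantPunctuator {
        void punctuate(Instant timestamp);
    }

    // Usage could then stay lambda-friendly, e.g. (assuming a Duration-based
    // schedule overload is added as proposed):
    //   context.schedule(Duration.ofSeconds(30), PunctuationType.WALL_CLOCK_TIME,
    //                    (Instant ts) -> System.out.println("punctuated at " + ts));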

(3): I disagree. I think only two methods are dual use, and we should
separate the internal from external usages. The internal usage should be
added to WindowStore.
org.apache.kafka.streams.state.ReadOnlyWindowStore#fetch(K, long) is used
in KStreamWindowAggregate.
org.apache.kafka.streams.state.ReadOnlyWindowStore#fetch(K, long, long) is
used in KStreamKStreamJoin.
Both of these usages are as WindowStore, so adding these interfaces to
WindowStore would be transparent.

org.apache.kafka.streams.state.ReadOnlyWindowStore#fetch(K, K, long, long)
is only used for IQ
org.apache.kafka.streams.state.ReadOnlyWindowStore#fetchAll(long, long) is
only used for IQ

Since the ReadOnlyWindowStore is only used by IQ, we can safely say that
*all* of its methods are external use and can be deprecated and replaced.
The first two usages I noted are WindowStore usages, not
ReadOnlyWindowStores, and WindowStore is only used *internally*, so it's
free to offer `long` methods if needed for performance reasons.

Does this make sense? The same reasoning extends to the other stores.
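
Roughly, the split argued for above could be sketched like this; the interface
names and return types are simplified stand-ins, not the actual Kafka Streams
signatures:

    import java.time.Instant;

    // Simplified sketch: the IQ-facing read-only interface exposes Instant-based
    // methods, while the internally-used store interface keeps long-based
    // variants for the hot path. Return types are placeholders.
    interface SketchReadOnlyWindowStore<K, V> {
        Iterable<V> fetch(K key, Instant from, Instant to);   // external / IQ use
    }

    interface SketchWindowStore<K, V> extends SketchReadOnlyWindowStore<K, V> {
        Iterable<V> fetch(K key, long fromMs, long toMs);     // internal use only
    }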

(4) Yes, that was my suggestion. I'm not sure if anyone is actually using
this variant, so it seemed like a good time to just deprecate it and see.

Thoughts?
-John


On Fri, Sep 7, 2018 at 10:21 AM Nikolay Izhikov  wrote:

> Hello, Matthias.
>
> Thanks, for feedback.
>
> > (1) Some methods declare `throws IllegalArgumentException`, others
> > don't.
>
> `duration.toMillis()` can throw ArithmeticException.
> It can happen if overflow occurs during conversion.
> Please see the source of the JDK method Duration#toMillis.
> The task author suggests wrapping it in an IllegalArgumentException.
> I think we should add `throws IllegalArgumentException` for all methods
> with a Duration parameter.
> (I updated the KIP with this throws clause.)
>
> What do you think?
>
> > (3) ReadOnlyWindowStore: All three methods are dual use and I think we
> should not deprecate them.
>
> This is my typo, already fixed.
> I propose to add new methods to `ReadOnlyWindowStore`.
> No methods will become deprecated.
>
> > (4) Stores: 3 methods are listed as deprecated but only 2 new methods
> are added.
>
> My proposal is based on John Roesler's mail [1]:
> "10. Stores: I think we can just deprecate without replacement the method
> that takes `segmentInterval`."
>
> Is it wrong?
>
> [1] https://www.mail-archive.com/dev@kafka.apache.org/msg91348.html
>
>
> On Thu, 06/09/2018 at 21:04 -0700, Matthias J. Sax wrote:
> > Thanks for updating the KIP!
> >
> > Couple of minor follow ups:
> >
> > (1) Some methods declare `throws IllegalArgumentException`, others
> > don't. It's runtime exception and thus it's not required to declare it
> > -- it just looks inconsistent in the KIP and maybe it's inconsistent in
> > the code, too. I am not sure if it is possible to provide a negative
> > Duration? If not, we would not need to check the provided value and can
> > remove the declaration.
> >
> > (2) About punctuations: I still think it would be ok to change the
> > callback from `long` to `Instant` -- even if it is possible to register
> > a punctuation on a ms-basis, in practice many people used schedules in
> > the range of seconds or larger. Thus, I don't think there will be a
> > performance penalty. Of course, we can still revisit this later, too.
> > John and Bill, you did not comment on this. Would also be good to get
> > feedback from Guozhang about this.
> >
> > (3) ReadOnlyWindowStore: All three methods are dual use and I think we
> > should not deprecate them. However, we can add the new proposed methods
> > in parallel -- the names can be the same without conflict as the
> > parameter lists are different. (Or did you just forget to remove the
> > comment line?)
> >
> > (4) Stores: 3 methods are listed as deprecated but only 2 new methods
> > are added. Maybe this was discussed 

Re: KIP-213 - Scalable/Usable Foreign-Key KTable joins - Rebooted.

2018-09-07 Thread John Roesler
Hi James,

The proposal we are discussing is
https://cwiki.apache.org/confluence/display/KAFKA/KIP-213+Support+non-key+joining+in+KTable

I'm not sure if it's been updated to reflect current thinking.

-John

On Fri, Sep 7, 2018 at 8:49 AM James Kwan  wrote:

> I am new to this group and I found this subject interesting.  Sounds like
> you guys want to implement a join table of two streams? Is there somewhere
> I can see the original requirement or proposal?
>
> > On Sep 7, 2018, at 8:13 AM, Jan Filipiak 
> wrote:
> >
> >
> > On 05.09.2018 22:17, Adam Bellemare wrote:
> >> I'm currently testing using a Windowed Store to store the highwater
> mark.
> >> By all indications this should work fine, with the caveat being that it
> can
> >> only resolve out-of-order arrival for up to the size of the window (ie:
> >> 24h, 72h, etc). This would remove the possibility of it being unbounded
> in
> >> size.
> >>
> >> With regards to Jan's suggestion, I believe this is where we will have
> to
> >> remain in disagreement. While I do not disagree with your statement
> about
> >> there likely to be additional joins done in a real-world workflow, I do
> not
> >> see how you can conclusively deal with out-of-order arrival of
> foreign-key
> >> changes and subsequent joins. I have attempted what I think you have
> >> proposed (without a high-water, using groupBy and reduce) and found
> that if
> >> the foreign key changes too quickly, or the load on a stream thread is
> too
> >> high, the joined messages will arrive out-of-order and be incorrectly
> >> propagated, such that an intermediate event is represented as the final
> >> event.
> > Can you shed some light on your groupBy implementation. There must be
> some sort of flaw in it.
> > I have a suspicion where it is, I would just like to confirm. The idea
> is bullet proof and it must be
> > an implementation mess up. I would like to clarify before we draw a
> conclusion.
> >
> >>  Repartitioning the scattered events back to their original
> >> partitions is the only way I know how to conclusively deal with
> >> out-of-order events in a given time frame, and to ensure that the data
> is
> >> eventually consistent with the input events.
> >>
> >> If you have some code to share that illustrates your approach, I would
> be
> >> very grateful as it would remove any misunderstandings that I may have.
> >
> > ah okay you were looking for my code. I don't have something easily
> readable here as its bloated with OO-patterns.
> >
> > its anyhow trivial:
> >
> > @Override
> >public T apply(K aggKey, V value, T aggregate)
> >{
> >Map currentStateAsMap = asMap(aggregate); << imaginary
> >U toModifyKey = mapper.apply(value);
> ><< this is the place where people actually gonna have issues
> and why you probably couldn't do it. we would need to find a solution here.
> I didn't realize that yet.
> ><< we propagate the field in the joiner, so that we can pick
> it up in an aggregate. Probably you have not thought of this in your
> approach right?
> ><< I am very open to find a generic solution here. In my
> honest opinion this is broken in KTableImpl.GroupBy in that it loses the keys
> and only maintains the aggregate key.
> ><< I abstracted it away back then way before i was thinking
> of oneToMany join. That is why I didn't realize its significance here.
> ><< Opinions?
> >
> >for (V m : current)
> >{
> >currentStateAsMap.put(mapper.apply(m), m);
> >}
> >if (isAdder)
> >{
> >currentStateAsMap.put(toModifyKey, value);
> >}
> >else
> >{
> >currentStateAsMap.remove(toModifyKey);
> >if(currentStateAsMap.isEmpty()){
> >return null;
> >}
> >}
> >return asAggregateType(currentStateAsMap);
> >}
> >
> >
> >
> >
> >
> >>
> >> Thanks,
> >>
> >> Adam
> >>
> >>
> >>
> >>
> >>
> >> On Wed, Sep 5, 2018 at 3:35 PM, Jan Filipiak 
> >> wrote:
> >>
> >>> Thanks Adam for bringing Matthias to speed!
> >>>
> >>> about the differences. I think re-keying back should be optional at
> best.
> >>> I would say we return a KScatteredTable with reshuffle() returning
> >>> KTable to make the backwards repartitioning
> optional.
> >>> I am also in a big favour of doing the out of order processing using
> group
> >>> by instead high water mark tracking.
> >>> Just because unbounded growth is just scary + It saves us the header
> stuff.
> >>>
> >>> I think the abstraction of always repartitioning back is just not so
> >>> strong. Like the work has been done before we partition back and
> grouping
> >>> by something else afterwards is really common.
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> On 05.09.2018 13:49, Adam Bellemare wrote:
> >>>
>  Hi Matthias
> 
>  Thank you for your feedback, I do appreciate it!
> 
>  While name spacing would be possible, it would 

[jira] [Resolved] (KAFKA-7387) Kafka distributed worker reads config from backup while launching connector

2018-09-07 Thread satyanarayan komandur (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

satyanarayan komandur resolved KAFKA-7387.
--
Resolution: Not A Bug

The issue is with the JDBC connector and not with the Kafka Connect framework.

> Kafka distributed worker reads config from backup while launching connector
> ---
>
> Key: KAFKA-7387
> URL: https://issues.apache.org/jira/browse/KAFKA-7387
> Project: Kafka
>  Issue Type: Bug
>Reporter: satyanarayan komandur
>Priority: Minor
>
> While launching a Kafka connector using the REST API in distributed mode,
> the Kafka worker uses the old configuration. Normally this is fine when we
> relaunch the connector without changing the configuration.
> If the prior failure is related to a connector configuration issue and we
> are relaunching, the worker still uses the old configuration the first time,
> though it subsequently picks up the new configuration supplied. So
> essentially we have to launch it twice in such cases. This behavior does not
> change even if we call the config update first and then launch the connector.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7387) Kafka distributed worker reads config from backup while launching connector

2018-09-07 Thread satyanarayan komandur (JIRA)
satyanarayan komandur created KAFKA-7387:


 Summary: Kafka distributed worker reads config from backup while 
launching connector
 Key: KAFKA-7387
 URL: https://issues.apache.org/jira/browse/KAFKA-7387
 Project: Kafka
  Issue Type: Bug
Reporter: satyanarayan komandur


While launching a Kafka connector using the REST API in distributed mode, the
Kafka worker uses the old configuration. Normally this is fine when we relaunch
the connector without changing the configuration.

If the prior failure is related to a connector configuration issue and we are
relaunching, the worker still uses the old configuration the first time, though
it subsequently picks up the new configuration supplied. So essentially we have
to launch it twice in such cases. This behavior does not change even if we call
the config update first and then launch the connector.

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7386) Streams Scala wrapper should not cache serdes

2018-09-07 Thread John Roesler (JIRA)
John Roesler created KAFKA-7386:
---

 Summary: Streams Scala wrapper should not cache serdes
 Key: KAFKA-7386
 URL: https://issues.apache.org/jira/browse/KAFKA-7386
 Project: Kafka
  Issue Type: Bug
Reporter: John Roesler
Assignee: John Roesler


for example, 
[https://github.com/apache/kafka/blob/trunk/streams/streams-scala/src/main/scala/org/apache/kafka/streams/scala/Serdes.scala#L28]
 invokes Serdes.String() once and caches the result.

However, the implementation of the String serde has a non-empty configure 
method that is variant in whether it's used as a key or value serde. So we 
won't get correct execution if we create one serde and use it for both keys and 
values.

The fix is simple: change all the `val` declarations in scala.Serdes to `def`. 
Thanks to referential transparency for parameterless methods in Scala, no
user-facing code will break.
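
The key/value variance is easy to reproduce with a single shared instance; the
snippet below is only a demonstration of why one cached serde cannot serve both
roles (the encodings are arbitrary choices for the example):

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.common.serialization.Serde;
    import org.apache.kafka.common.serialization.Serdes;

    public class SharedSerdeDemo {
        public static void main(String[] args) {
            Map<String, Object> configs = new HashMap<>();
            configs.put("key.serializer.encoding", "UTF-8");
            configs.put("value.serializer.encoding", "UTF-16");

            // One shared instance, like a cached `val` would give us.
            Serde<String> shared = Serdes.String();
            shared.configure(configs, true);   // configured for keys: UTF-8
            shared.configure(configs, false);  // re-configured for values: UTF-16

            // The second configure() call wins for the single underlying serializer,
            // so key serialization now silently uses the value encoding as well.
            byte[] keyBytes = shared.serializer().serialize("topic", "k");
            System.out.println("key bytes: " + keyBytes.length); // no longer 1 byte
        }
    }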



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Add to contributor list

2018-09-07 Thread Murali Mani
Thanks

On Fri, Sep 7, 2018 at 4:50 PM Guozhang Wang  wrote:

> Added "Murali Krishnan Mani".
>
>
> Guozhang
>
> On Fri, Sep 7, 2018 at 8:42 AM, Murali Mani  wrote:
>
> > Thanks  Guozhang.  I already have the id for Apache Jira / Issues.
> "Murali
> > Mani" is my id, can you please add me to the Apache Kafka contributor
> list?
> >
> > On Tue, Sep 4, 2018 at 7:29 PM Guozhang Wang  wrote:
> >
> > > Hi Murali,
> > >
> > > Note you do not need PMC member to send you the invitation for
> becoming a
> > > contributor; you only need it to be a committer.
> > >
> > > All you need to do is go to https://issues.apache.org/ and create an
> > > account.
> > >
> > >
> > >
> > > Guozhang
> > >
> > > On Mon, Sep 3, 2018 at 4:27 AM, Murali Mani 
> > wrote:
> > >
> > > > Guozhang,
> > > >
> > > > Could you please send the invitation for creating my new Apache ID
> > > > (muralimani)? I had sent the email to *priv...@kafka.apache.org
> > > >  * mail id. I am sending this mail as you
> > are
> > > > one
> > > > of the PMC members of Apache Kafka.
> > > >
> > > >
> > > > On Sat, Sep 1, 2018 at 1:31 PM Murali Mani 
> > > wrote:
> > > >
> > > > > Guozhang,
> > > > >
> > > > > In order to create an Apache id, I have to submit the ICLA
> > > > > document, which I had completed today. As per the process defined
> > > > > below, a PMC member (in this case you / team for Apache Kafka) would
> > > > > need to send the activation email once the ID has been created. I
> > > > > shall notify you / team to send the request for activation. I hope
> > > > > it is fine.
> > > > >
> > > > > https://www.apache.org/dev/new-committers-guide.html#cla
> > > > >
> > > > > Thanks for your understanding.
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > On Sun, Aug 19, 2018 at 10:42 PM Guozhang Wang  >
> > > > wrote:
> > > > >
> > > > >> You need an apache id for the project (github account is
> separate),
> > > you
> > > > >> can
> > > > >> create one on apache website.
> > > > >>
> > > > >>
> > > > >> Guozhang
> > > > >>
> > > > >> On Sun, Aug 19, 2018 at 2:02 PM, Murali Mani <
> murali.m...@gmail.com
> > >
> > > > >> wrote:
> > > > >>
> > > > >> > Yes, it is correct. muralimani is the user id of Github. The
> mail
> > id
> > > > for
> > > > >> > github - murali.m...@gmail.com
> > > > >> >
> > > > >> > On Sun, Aug 19, 2018 at 7:03 PM Guozhang Wang <
> wangg...@gmail.com
> > >
> > > > >> wrote:
> > > > >> >
> > > > >> > > Hello,
> > > > >> > >
> > > > >> > > I cannot find the id "muralimani" for either JIRA or wiki
> > system.
> > > > >> Could
> > > > >> > you
> > > > >> > > double check if it is created?
> > > > >> > >
> > > > >> > >
> > > > >> > > Guozhang
> > > > >> > >
> > > > >> > >
> > > > >> > > On Sun, Aug 19, 2018 at 12:46 AM, Murali Mani <
> > > > murali.m...@gmail.com>
> > > > >> > > wrote:
> > > > >> > >
> > > > >> > > > Hi,
> > > > >> > > >
> > > > >> > > > Could you please add me to the contributor list? My jira id
> is
> > > > >> > muralimani
> > > > >> > > >
> > > > >> > > > --
> > > > >> > > > regards
> > > > >> > > > Murali Krishnan Mani
> > > > >> > > > Ph: +44 (0)7432128519
> > > > >> > > >
> > > > >> > >
> > > > >> > >
> > > > >> > >
> > > > >> > > --
> > > > >> > > -- Guozhang
> > > > >> > >
> > > > >> >
> > > > >> >
> > > > >> > --
> > > > >> > regards
> > > > >> > Murali Krishnan Mani
> > > > >> > Ph: +44 (0)7432128519
> > > > >> >
> > > > >>
> > > > >>
> > > > >>
> > > > >> --
> > > > >> -- Guozhang
> > > > >>
> > > > >
> > > > >
> > > > > --
> > > > > regards
> > > > > Murali Krishnan Mani
> > > > > Ph: +44 (0)7432128519
> > > > >
> > > >
> > > >
> > > > --
> > > > regards
> > > > Murali Krishnan Mani
> > > > Ph: +44 (0)7432128519
> > > >
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
> >
> > --
> > regards
> > Murali Krishnan Mani
> > Ph: +44 (0)7432128519
> >
>
>
>
> --
> -- Guozhang
>


-- 
regards
Murali Krishnan Mani
Ph: +44 (0)7432128519


Re: Add to contributor list

2018-09-07 Thread Guozhang Wang
Added "Murali Krishnan Mani".


Guozhang

On Fri, Sep 7, 2018 at 8:42 AM, Murali Mani  wrote:

> Thanks  Guozhang.  I already have the id for Apache Jira / Issues. "Murali
> Mani" is my id, can you please add me to the Apache Kafka contributor list?
>
> On Tue, Sep 4, 2018 at 7:29 PM Guozhang Wang  wrote:
>
> > Hi Murali,
> >
> > Note you do not need PMC member to send you the invitation for becoming a
> > contributor; you only need it to be a committer.
> >
> > All you need to do is go to https://issues.apache.org/ and create an
> > account.
> >
> >
> >
> > Guozhang
> >
> > On Mon, Sep 3, 2018 at 4:27 AM, Murali Mani 
> wrote:
> >
> > > Guozhang,
> > >
> > > Could you please send the invitation for creating my new Apache ID
> > > (muralimani)? I had sent the email to *priv...@kafka.apache.org
> > >  * mail id. I am sending this mail as you
> are
> > > one
> > > of the PMC members of Apache Kafka.
> > >
> > >
> > > On Sat, Sep 1, 2018 at 1:31 PM Murali Mani 
> > wrote:
> > >
> > > > Guozhang,
> > > >
> > > > In order to create an Apache id, I have to submit the ICLA document,
> > > > which I had completed today. As per the process defined below, a PMC
> > > > member (in this case you / team for Apache Kafka) would need to send
> > > > the activation email once the ID has been created. I shall notify you /
> > > > team to send the request for activation. I hope it is fine.
> > > >
> > > > https://www.apache.org/dev/new-committers-guide.html#cla
> > > >
> > > > Thanks for your understanding.
> > > >
> > > >
> > > >
> > > >
> > > > On Sun, Aug 19, 2018 at 10:42 PM Guozhang Wang 
> > > wrote:
> > > >
> > > >> You need an apache id for the project (github account is separate),
> > you
> > > >> can
> > > >> create one on apache website.
> > > >>
> > > >>
> > > >> Guozhang
> > > >>
> > > >> On Sun, Aug 19, 2018 at 2:02 PM, Murali Mani  >
> > > >> wrote:
> > > >>
> > > >> > Yes, it is correct. muralimani is the user id of Github. The mail
> id
> > > for
> > > >> > github - murali.m...@gmail.com
> > > >> >
> > > >> > On Sun, Aug 19, 2018 at 7:03 PM Guozhang Wang  >
> > > >> wrote:
> > > >> >
> > > >> > > Hello,
> > > >> > >
> > > >> > > I cannot find the id "muralimani" for either JIRA or wiki
> system.
> > > >> Could
> > > >> > you
> > > >> > > double check if it is created?
> > > >> > >
> > > >> > >
> > > >> > > Guozhang
> > > >> > >
> > > >> > >
> > > >> > > On Sun, Aug 19, 2018 at 12:46 AM, Murali Mani <
> > > murali.m...@gmail.com>
> > > >> > > wrote:
> > > >> > >
> > > >> > > > Hi,
> > > >> > > >
> > > >> > > > Could you please add me to the contributor list? My jira id is
> > > >> > muralimani
> > > >> > > >
> > > >> > > > --
> > > >> > > > regards
> > > >> > > > Murali Krishnan Mani
> > > >> > > > Ph: +44 (0)7432128519
> > > >> > > >
> > > >> > >
> > > >> > >
> > > >> > >
> > > >> > > --
> > > >> > > -- Guozhang
> > > >> > >
> > > >> >
> > > >> >
> > > >> > --
> > > >> > regards
> > > >> > Murali Krishnan Mani
> > > >> > Ph: +44 (0)7432128519
> > > >> >
> > > >>
> > > >>
> > > >>
> > > >> --
> > > >> -- Guozhang
> > > >>
> > > >
> > > >
> > > > --
> > > > regards
> > > > Murali Krishnan Mani
> > > > Ph: +44 (0)7432128519
> > > >
> > >
> > >
> > > --
> > > regards
> > > Murali Krishnan Mani
> > > Ph: +44 (0)7432128519
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>
>
> --
> regards
> Murali Krishnan Mani
> Ph: +44 (0)7432128519
>



-- 
-- Guozhang


Re: Add to contributor list

2018-09-07 Thread Murali Mani
Thanks, Guozhang. I already have the id for Apache Jira / Issues. "Murali
Mani" is my id; can you please add me to the Apache Kafka contributor list?

On Tue, Sep 4, 2018 at 7:29 PM Guozhang Wang  wrote:

> Hi Murali,
>
> Note you do not need PMC member to send you the invitation for becoming a
> contributor; you only need it to be a committer.
>
> All you need to do is go to https://issues.apache.org/ and create an
> account.
>
>
>
> Guozhang
>
> On Mon, Sep 3, 2018 at 4:27 AM, Murali Mani  wrote:
>
> > Guozhang,
> >
> > Could you please send the invitation for creating my new Apache ID
> > (muralimani)? I had sent the email to *priv...@kafka.apache.org
> >  * mail id. I am sending this mail as you are
> > one
> > of the PMC members of Apache Kafka.
> >
> >
> > On Sat, Sep 1, 2018 at 1:31 PM Murali Mani 
> wrote:
> >
> > > Guozhang,
> > >
> > > In order to create an Apache id, I have to submit the ICLA document,
> > > which I had completed today. As per the process defined below, a PMC
> > > member (in this case you / team for Apache Kafka) would need to send the
> > > activation email once the ID has been created. I shall notify you / team
> > > to send the request for activation. I hope it is fine.
> > >
> > > https://www.apache.org/dev/new-committers-guide.html#cla
> > >
> > > Thanks for your understanding.
> > >
> > >
> > >
> > >
> > > On Sun, Aug 19, 2018 at 10:42 PM Guozhang Wang 
> > wrote:
> > >
> > >> You need an apache id for the project (github account is separate),
> you
> > >> can
> > >> create one on apache website.
> > >>
> > >>
> > >> Guozhang
> > >>
> > >> On Sun, Aug 19, 2018 at 2:02 PM, Murali Mani 
> > >> wrote:
> > >>
> > >> > Yes, it is correct. muralimani is the user id of Github. The mail id
> > for
> > >> > github - murali.m...@gmail.com
> > >> >
> > >> > On Sun, Aug 19, 2018 at 7:03 PM Guozhang Wang 
> > >> wrote:
> > >> >
> > >> > > Hello,
> > >> > >
> > >> > > I cannot find the id "muralimani" for either JIRA or wiki system.
> > >> Could
> > >> > you
> > >> > > double check if it is created?
> > >> > >
> > >> > >
> > >> > > Guozhang
> > >> > >
> > >> > >
> > >> > > On Sun, Aug 19, 2018 at 12:46 AM, Murali Mani <
> > murali.m...@gmail.com>
> > >> > > wrote:
> > >> > >
> > >> > > > Hi,
> > >> > > >
> > >> > > > Could you please add me to the contributor list? My jira id is
> > >> > muralimani
> > >> > > >
> > >> > > > --
> > >> > > > regards
> > >> > > > Murali Krishnan Mani
> > >> > > > Ph: +44 (0)7432128519
> > >> > > >
> > >> > >
> > >> > >
> > >> > >
> > >> > > --
> > >> > > -- Guozhang
> > >> > >
> > >> >
> > >> >
> > >> > --
> > >> > regards
> > >> > Murali Krishnan Mani
> > >> > Ph: +44 (0)7432128519
> > >> >
> > >>
> > >>
> > >>
> > >> --
> > >> -- Guozhang
> > >>
> > >
> > >
> > > --
> > > regards
> > > Murali Krishnan Mani
> > > Ph: +44 (0)7432128519
> > >
> >
> >
> > --
> > regards
> > Murali Krishnan Mani
> > Ph: +44 (0)7432128519
> >
>
>
>
> --
> -- Guozhang
>


-- 
regards
Murali Krishnan Mani
Ph: +44 (0)7432128519


[jira] [Created] (KAFKA-7385) Log compactor crashes when empty headers are retained with idempotent / transaction producers

2018-09-07 Thread Dhruvil Shah (JIRA)
Dhruvil Shah created KAFKA-7385:
---

 Summary: Log compactor crashes when empty headers are retained 
with idempotent / transaction producers
 Key: KAFKA-7385
 URL: https://issues.apache.org/jira/browse/KAFKA-7385
 Project: Kafka
  Issue Type: Bug
Reporter: Dhruvil Shah


During log compaction, we retain an empty header if the batch contains the last 
sequence number for a particular producer. When such headers are the only 
messages retained, we do not update state such as `maxOffset` in 
`MemoryRecords#filterTo` causing us to append these into the cleaned segment 
with `largestOffset` = -1. This throws a `LogSegmentOffsetOverflowException` 
for a segment that does not actually have an overflow. When we attempt to split 
the segment, the log cleaner dies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-358: Migrate Streams API to Duration instead of long ms times

2018-09-07 Thread Nikolay Izhikov
Hello, Matthias.

Thanks, for feedback.

> (1) Some methods declare `throws IllegalArgumentException`, others don't.

`duration.toMillis()` can throw ArithmeticException.
It can happen if overflow occurs during conversion.
Please see the source of the JDK method Duration#toMillis.
The task author suggests wrapping it in an IllegalArgumentException.
I think we should add `throws IllegalArgumentException` for all methods with
a Duration parameter.
(I updated the KIP with this throws clause.)
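
A quick illustration of the overflow case (plain JDK, nothing Kafka-specific):

    import java.time.Duration;

    public class ToMillisOverflow {
        public static void main(String[] args) {
            // A duration of Long.MAX_VALUE seconds cannot be represented in
            // milliseconds as a long, so toMillis() throws ArithmeticException.
            Duration huge = Duration.ofSeconds(Long.MAX_VALUE);
            try {
                huge.toMillis();
            } catch (ArithmeticException e) {
                System.out.println("overflow: " + e.getMessage());
            }
        }
    }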

What do you think?

> (3) ReadOnlyWindowStore: All three methods are dual use and I think we should 
> not deprecate them.

This is my typo, already fixed.
I propose to add new methods to `ReadOnlyWindowStore`.
No methods will become deprecated.

> (4) Stores: 3 methods are listed as deprecated but only 2 new methods are 
> added.

My proposal is based on John Roesler's mail [1]:
"10. Stores: I think we can just deprecate without replacement the method that 
takes `segmentInterval`."

Is it wrong?

[1] https://www.mail-archive.com/dev@kafka.apache.org/msg91348.html


On Thu, 06/09/2018 at 21:04 -0700, Matthias J. Sax wrote:
> Thanks for updating the KIP!
> 
> Couple of minor follow ups:
> 
> (1) Some methods declare `throws IllegalArgumentException`, others
> don't. It's runtime exception and thus it's not required to declare it
> -- it just looks inconsistent in the KIP and maybe it's inconsistent in
> the code, too. I am not sure if it is possible to provide a negative
> Duration? If not, we would not need to check the provided value and can
> remove the declaration.
> 
> (2) About punctuations: I still think it would be ok to change the
> callback from `long` to `Instant` -- even if it is possible to register
> a punctuation on a ms-basis, in practice many people used schedules in
> the range of seconds or larger. Thus, I don't think there will be a
> performance penalty. Of course, we can still revisit this later, too.
> John and Bill, you did not comment on this. Would also be good to get
> feedback from Guozhang about this.
> 
> (3) ReadOnlyWindowStore: All three methods are dual use and I think we
> should not deprecate them. However, we can add the new proposed methods
> in parallel -- the names can be the same without conflict as the
> parameter lists are different. (Or did you just forget to remove the
> comment line?)
> 
> (4) Stores: 3 methods are listed as deprecated but only 2 new methods
> are added. Maybe this was discussed already, but I can't recall why? Can
> you elaborate? Or should this deprecation actually be part of KIP-328
> (\cc John)?
> 
> 
> Thanks,
> 
> -Matthias
> 
> 
> 
> ps: there are many KIPs in-flight in parallel, and it takes some time to
> get around. Please be patient :)
> 
> 
> 
> 
> On 9/5/18 12:25 AM, Nikolay Izhikov wrote:
> > Hello, Guys.
> > 
> > I've started a VOTE [1], but it seems committers have not had a chance to
> > look at the KIP yet.
> > 
> > Can you tell me, is it OK?
> > Should I wait for feedback? For how long?
> > 
> > Or something in KIP should be improved before voting?
> > 
> > [1] 
> > https://lists.apache.org/thread.html/e976352e7e42d459091ee66ac790b6a0de7064eac0c57760d50c983b@%3Cdev.kafka.apache.org%3E
> > 
> > On Fri, 24/08/2018 at 10:36 -0700, Matthias J. Sax wrote:
> > > It's tricky... :)
> > > 
> > > Some APIs have "dual use" as I mentioned in my first reply. I agree that
> > > it would be good to avoid abstract class and use interfaces if possible.
> > > As long as the change is source code compatible, it should be fine IMHO
> > > -- we need to document binary incompatibility of course.
> > > 
> > > I think it's best if the KIP gets updated with a proposal on how to
> > > handle "dual use" parts. It's easier to discuss if it's written down IMHO.
> > > 
> > > For `ProcessorContext#schedule()`, you are right John: it seems fine
> > > to use `Duration`, as it won't be called often (usually only within
> > > `Processor#init()`) -- I mixed it up with `Punctuator#punctuate(long)`.
> > > However, thinking about this twice, we might even want to update both
> > > methods. Punctuation callbacks don't happen every millisecond and thus
> > > the overhead to use `Instant` should not be a problem.
> > > 
> > > @Nikolay: it seems the KIP does not mention `Punctuator#punctuate(long)`
> > > -- should we add it?
> > > 
> > > 
> > > -Matthias
> > > 
> > > 
> > > On 8/24/18 10:11 AM, John Roesler wrote:
> > > > Quick afterthought: I guess that `Window` is exposed to the API via
> > > > `Windowed` keys. I think it would be fine to not deprecate the `long` 
> > > > start
> > > > and end, but add `Instant` variants for people preferring that 
> > > > interface.
> > > > 
> > > > On Fri, Aug 24, 2018 at 11:10 AM John Roesler  wrote:
> > > > 
> > > > > Hey Matthias,
> > > > > 
> > > > > Thanks for pointing that out. I agree that we only really need to 
> > > > > change
> > > > > methods that are API-facing, and we probably want to avoid using
> > > > > Duration/Instant for Streams-facing members.
> > > > > 
> > > > > 

Re: KIP-213 - Scalable/Usable Foreign-Key KTable joins - Rebooted.

2018-09-07 Thread James Kwan
I am new to this group and I found this subject interesting.  Sounds like you 
guys want to implement a join table of two streams? Is there somewhere I can 
see the original requirement or proposal?   

> On Sep 7, 2018, at 8:13 AM, Jan Filipiak  wrote:
> 
> 
> On 05.09.2018 22:17, Adam Bellemare wrote:
>> I'm currently testing using a Windowed Store to store the highwater mark.
>> By all indications this should work fine, with the caveat being that it can
>> only resolve out-of-order arrival for up to the size of the window (ie:
>> 24h, 72h, etc). This would remove the possibility of it being unbounded in
>> size.
>> 
>> With regards to Jan's suggestion, I believe this is where we will have to
>> remain in disagreement. While I do not disagree with your statement about
>> there likely to be additional joins done in a real-world workflow, I do not
>> see how you can conclusively deal with out-of-order arrival of foreign-key
>> changes and subsequent joins. I have attempted what I think you have
>> proposed (without a high-water, using groupBy and reduce) and found that if
>> the foreign key changes too quickly, or the load on a stream thread is too
>> high, the joined messages will arrive out-of-order and be incorrectly
>> propagated, such that an intermediate event is represented as the final
>> event.
> Can you shed some light on your groupBy implementation. There must be some 
> sort of flaw in it.
> I have a suspicion where it is, I would just like to confirm. The idea is 
> bullet proof and it must be
> an implementation mess up. I would like to clarify before we draw a 
> conclusion.
> 
>>  Repartitioning the scattered events back to their original
>> partitions is the only way I know how to conclusively deal with
>> out-of-order events in a given time frame, and to ensure that the data is
>> eventually consistent with the input events.
>> 
>> If you have some code to share that illustrates your approach, I would be
>> very grateful as it would remove any misunderstandings that I may have.
> 
> ah okay you were looking for my code. I don't have something easily readable 
> here as its bloated with OO-patterns.
> 
> its anyhow trivial:
> 
> @Override
>public T apply(K aggKey, V value, T aggregate)
>{
>Map currentStateAsMap = asMap(aggregate); << imaginary
>U toModifyKey = mapper.apply(value);
><< this is the place where people actually gonna have issues and 
> why you probably couldn't do it. we would need to find a solution here. I 
> didn't realize that yet.
><< we propagate the field in the joiner, so that we can pick it up 
> in an aggregate. Probably you have not thought of this in your approach right?
><< I am very open to find a generic solution here. In my honest 
> opinion this is broken in KTableImpl.GroupBy in that it loses the keys and only
> maintains the aggregate key.
><< I abstracted it away back then way before i was thinking of 
> oneToMany join. That is why I didn't realize its significance here.
><< Opinions?
> 
>for (V m : current)
>{
>currentStateAsMap.put(mapper.apply(m), m);
>}
>if (isAdder)
>{
>currentStateAsMap.put(toModifyKey, value);
>}
>else
>{
>currentStateAsMap.remove(toModifyKey);
>if(currentStateAsMap.isEmpty()){
>return null;
>}
>}
>return asAggregateType(currentStateAsMap);
>}
> 
> 
> 
> 
> 
>> 
>> Thanks,
>> 
>> Adam
>> 
>> 
>> 
>> 
>> 
>> On Wed, Sep 5, 2018 at 3:35 PM, Jan Filipiak 
>> wrote:
>> 
>>> Thanks Adam for bringing Matthias to speed!
>>> 
>>> about the differences. I think re-keying back should be optional at best.
>>> I would say we return a KScatteredTable with reshuffle() returning
>>> KTable to make the backwards repartitioning optional.
>>> I am also in a big favour of doing the out of order processing using group
>>> by instead high water mark tracking.
>>> Just because unbounded growth is just scary + It saves us the header stuff.
>>> 
>>> I think the abstraction of always repartitioning back is just not so
>>> strong. Like the work has been done before we partition back and grouping
>>> by something else afterwards is really common.
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On 05.09.2018 13:49, Adam Bellemare wrote:
>>> 
 Hi Matthias
 
 Thank you for your feedback, I do appreciate it!
 
 While name spacing would be possible, it would require to deserialize
> user headers what implies a runtime overhead. I would suggest to no
> namespace for now to avoid the overhead. If this becomes a problem in
> the future, we can still add name spacing later on.
> 
 Agreed. I will go with using a reserved string and document it.
 
 
 
 My main concern about the design it the type of the result KTable: If I
 understood the proposal correctly,
 
 
 In your 

Re: KIP-213 - Scalable/Usable Foreign-Key KTable joins - Rebooted.

2018-09-07 Thread Jan Filipiak



On 05.09.2018 22:17, Adam Bellemare wrote:

I'm currently testing using a Windowed Store to store the highwater mark.
By all indications this should work fine, with the caveat being that it can
only resolve out-of-order arrival for up to the size of the window (ie:
24h, 72h, etc). This would remove the possibility of it being unbounded in
size.

With regards to Jan's suggestion, I believe this is where we will have to
remain in disagreement. While I do not disagree with your statement about
there likely to be additional joins done in a real-world workflow, I do not
see how you can conclusively deal with out-of-order arrival of foreign-key
changes and subsequent joins. I have attempted what I think you have
proposed (without a high-water, using groupBy and reduce) and found that if
the foreign key changes too quickly, or the load on a stream thread is too
high, the joined messages will arrive out-of-order and be incorrectly
propagated, such that an intermediate event is represented as the final
event.
Can you shed some light on your groupBy implementation. There must be 
some sort of flaw in it.
I have a suspicion where it is, I would just like to confirm. The idea 
is bullet proof and it must be
an implementation mess up. I would like to clarify before we draw a 
conclusion.



  Repartitioning the scattered events back to their original
partitions is the only way I know how to conclusively deal with
out-of-order events in a given time frame, and to ensure that the data is
eventually consistent with the input events.

If you have some code to share that illustrates your approach, I would be
very grateful as it would remove any misunderstandings that I may have.


ah okay you were looking for my code. I don't have something easily 
readable here as its bloated with OO-patterns.


its anyhow trivial:

@Override
public T apply(K aggKey, V value, T aggregate)
{
Map currentStateAsMap = asMap(aggregate); << imaginary
U toModifyKey = mapper.apply(value);
<< this is the place where people actually gonna have 
issues and why you probably couldn't do it. we would need to find a 
solution here. I didn't realize that yet.
<< we propagate the field in the joiner, so that we can 
pick it up in an aggregate. Probably you have not thought of this in 
your approach right?
<< I am very open to find a generic solution here. In my 
honest opinion this is broken in KTableImpl.GroupBy in that it loses the
keys and only maintains the aggregate key.
<< I abstracted it away back then way before i was thinking 
of oneToMany join. That is why I didn't realize its significance here.

<< Opinions?

for (V m : current)
{
currentStateAsMap.put(mapper.apply(m), m);
}
if (isAdder)
{
currentStateAsMap.put(toModifyKey, value);
}
else
{
currentStateAsMap.remove(toModifyKey);
if(currentStateAsMap.isEmpty()){
return null;
}
}
return asAggregateType(currentStateAsMap);
}







Thanks,

Adam





On Wed, Sep 5, 2018 at 3:35 PM, Jan Filipiak 
wrote:


Thanks Adam for bringing Matthias to speed!

about the differences. I think re-keying back should be optional at best.
I would say we return a KScatteredTable with reshuffle() returning
KTable to make the backwards repartitioning optional.
I am also very much in favour of doing the out-of-order processing using group
by instead of high-water-mark tracking.
Just because unbounded growth is just scary + It saves us the header stuff.

I think the abstraction of always repartitioning back is just not so
strong. Like the work has been done before we partition back and grouping
by something else afterwards is really common.






On 05.09.2018 13:49, Adam Bellemare wrote:


Hi Matthias

Thank you for your feedback, I do appreciate it!

While namespacing would be possible, it would require deserializing
user headers, which implies a runtime overhead. I would suggest not
namespacing for now to avoid the overhead. If this becomes a problem in
the future, we can still add namespacing later on.


Agreed. I will go with using a reserved string and document it.
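
For illustration only, a reserved header key could be attached via the existing Headers
API roughly like this; the header name and payload below are placeholders, not the value
that will actually be reserved and documented:

import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.header.internals.RecordHeaders;

public class ReservedHeaderExample {
    // Placeholder name; the actual reserved string will be decided and documented in the KIP.
    static final String RESERVED_KEY = "__streams.fk-join.meta";

    public static void main(String[] args) {
        Headers headers = new RecordHeaders();
        headers.add(RESERVED_KEY, "propagated-offset:42".getBytes(StandardCharsets.UTF_8));
        for (Header h : headers) {
            System.out.println(h.key() + " -> " + new String(h.value(), StandardCharsets.UTF_8));
        }
    }
}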



My main concern about the design is the type of the result KTable: If I
understood the proposal correctly,


In your example, you have table1 and table2 swapped. Here is how it works
currently:

1) table1 has the records that contain the foreign key within their value.
table1 input stream: , , 
table2 input stream: , 

2) A Value mapper is required to extract the foreign key.
table1 foreign key mapper: ( value => value.fk )

The mapper is applied to each element in table1, and a new combined key is
made:
table1 mapped: , , 

3) The rekeyed events are copartitioned with table2:

a) Stream Thread with Partition 0:
RepartitionedTable1: , 
Table2: 

b) Stream Thread with Partition 1:
RepartitionedTable1: 
Table2: 
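
To illustrate steps 1-3 in code, a rough sketch could look like this; the CombinedKey
shape, its field names, and the partitioning helper are illustrative assumptions, not the
KIP's actual classes:

import org.apache.kafka.common.utils.Utils;

public class ForeignKeyRepartitionSketch {

    // Steps 2/3: the combined key keeps the original table1 key but partitions on the
    // foreign key, so each rekeyed record lands on the same partition as table2.
    static class CombinedKey {
        final String foreignKey;   // extracted via the user-supplied mapper (value => value.fk)
        final String primaryKey;   // original table1 key, needed to resolve the final result
        CombinedKey(String foreignKey, String primaryKey) {
            this.foreignKey = foreignKey;
            this.primaryKey = primaryKey;
        }
    }

    // Partition on the serialized foreign key only (same scheme as the default partitioner),
    // which co-partitions the rekeyed table1 records with table2.
    static int partitionFor(byte[] serializedForeignKey, int table2NumPartitions) {
        return Utils.toPositive(Utils.murmur2(serializedForeignKey)) % table2NumPartitions;
    }
}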

4) 

[jira] [Created] (KAFKA-7384) Compatibility issues between Kafka Brokers 1.1.0 and older kafka clients

2018-09-07 Thread Vasilis Tsanis (JIRA)
Vasilis Tsanis created KAFKA-7384:
-

 Summary: Compatibility issues between Kafka Brokers 1.1.0 and 
older kafka clients
 Key: KAFKA-7384
 URL: https://issues.apache.org/jira/browse/KAFKA-7384
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 1.1.0
Reporter: Vasilis Tsanis
 Attachments: logs2.txt

Hello

After upgrading the Kafka Brokers from 0.10.2.1 to 1.1.0, I am getting the 
following error logs thrown by the kafka clients 0.10.2.1 & 0.10.0.1. This 
seems to be some kind of incompatibility issue for the older clients although 
this shouldn't be true according to the following [doc 
1|https://docs.confluent.io/current/installation/upgrade.html#preparation], 
[doc2|https://cwiki-test.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version]
 and 
[thread|https://lists.apache.org/thread.html/9bc87a2c683d13fda27f01a635dba822520113cfd8fb50f3a3e82fcf@%3Cusers.kafka.apache.org%3E].

Can someone please help with this issue? Does this mean that I have to upgrade 
all kafka-clients to 1.1.0?
 
{noformat}
java.lang.IllegalArgumentException: Unknown compression type id: 4
at 
org.apache.kafka.common.record.CompressionType.forId(CompressionType.java:46)
at 
org.apache.kafka.common.record.Record.compressionType(Record.java:260)
at 
org.apache.kafka.common.record.LogEntry.isCompressed(LogEntry.java:89)
at 
org.apache.kafka.common.record.RecordsIterator.makeNext(RecordsIterator.java:70)
at 
org.apache.kafka.common.record.RecordsIterator.makeNext(RecordsIterator.java:34)
at 
org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(AbstractIterator.java:79)
at 
org.apache.kafka.common.utils.AbstractIterator.hasNext(AbstractIterator.java:45)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:785)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:480)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1037)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
at 
org.apache.camel.component.kafka.KafkaConsumer$KafkaFetchRecords.run(KafkaConsumer.java:130)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


--- Another kind of exception due to same reason

java.lang.IndexOutOfBoundsException: null
at java.nio.Buffer.checkIndex(Buffer.java:546)
at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:365) 
at org.apache.kafka.common.utils.Utils.sizeDelimited(Utils.java:784)
at org.apache.kafka.common.record.Record.value(Record.java:268)
at 
org.apache.kafka.common.record.RecordsIterator$DeepRecordsIterator.<init>(RecordsIterator.java:149)
at 
org.apache.kafka.common.record.RecordsIterator.makeNext(RecordsIterator.java:79)
at 
org.apache.kafka.common.record.RecordsIterator.makeNext(RecordsIterator.java:34)
at 
org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(AbstractIterator.java:79)
at 
org.apache.kafka.common.utils.AbstractIterator.hasNext(AbstractIterator.java:45)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:785)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:480)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1037)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
at 
org.apache.camel.component.kafka.KafkaConsumer$KafkaFetchRecords.run(KafkaConsumer.java:130)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-336: Consolidate ExtendedSerializer/Serializer and ExtendedDeserializer/Deserializer

2018-09-07 Thread Viktor Somogyi-Vass
Hi All,

Many thanks for participating in the discussion and voting. The KIP has passed
with 3 binding votes (Jason, Harsha, Ismael) and 2 non-binding (Attila and
Manikumar).
If you have time, please also have a look at the corresponding code review:
https://github.com/apache/kafka/pull/5494

Cheers,
Viktor

On Thu, Sep 6, 2018 at 11:03 PM Ismael Juma  wrote:

> Thanks, +1 (binding).
>
> On Mon, Sep 3, 2018 at 6:28 AM Viktor Somogyi-Vass <
> viktorsomo...@gmail.com>
> wrote:
>
> > Apologies, miscounted the binding votes but the good news is that we need
> > only one.
> >
> > Cheers,
> > Viktor
> >
> > On Mon, Sep 3, 2018 at 11:09 AM Viktor Somogyi-Vass <
> > viktorsomo...@gmail.com>
> > wrote:
> >
> > > Hi All,
> > >
> > > I have completed my binary compatibility testing on this as well.
> Created
> > > a small script & example to make this reproducible:
> > > https://gist.github.com/viktorsomogyi/391defca73e7a46a2c6a40bc699231d4
> .
> > >
> > > Also, thanks for the votes so far, we're still awaiting for 2 binding.
> > >
> > > Cheers,
> > > Viktor
> > >
> > > On Thu, Aug 30, 2018 at 5:09 PM Harsha  wrote:
> > >
> > >> +1.
> > >> Thanks,
> > >> Harsha
> > >>
> > >> On Thu, Aug 30, 2018, at 4:19 AM, Attila Sasvári wrote:
> > >> > Thanks for the KIP and the updates Viktor!
> > >> >
> > >> > +1 (non-binding)
> > >> >
> > >> >
> > >> >
> > >> > On Wed, Aug 29, 2018 at 10:44 AM Manikumar <
> manikumar.re...@gmail.com
> > >
> > >> > wrote:
> > >> >
> > >> > > +1 (non-binding)
> > >> > >
> > >> > > Thanks for the KIP.
> > >> > >
> > >> > > On Wed, Aug 29, 2018 at 1:41 AM Jason Gustafson <
> ja...@confluent.io
> > >
> > >> > > wrote:
> > >> > >
> > >> > > > +1 Thanks for the updates.
> > >> > > >
> > >> > > > On Tue, Aug 28, 2018 at 1:15 AM, Viktor Somogyi-Vass <
> > >> > > > viktorsomo...@gmail.com> wrote:
> > >> > > >
> > >> > > > > Sure, I've added it. I'll also do the testing today.
> > >> > > > >
> > >> > > > > On Mon, Aug 27, 2018 at 5:03 PM Ismael Juma <
> ism...@juma.me.uk>
> > >> wrote:
> > >> > > > >
> > >> > > > > > Thanks Viktor. I think it would be good to verify that
> > existing
> > >> > > > > > ExtendedSerializer implementations work without recompiling.
> > >> This
> > >> > > could
> > >> > > > > be
> > >> > > > > > done as a manual test. If you agree, I suggest adding it to
> > the
> > >> > > testing
> > >> > > > > > plan section.
> > >> > > > > >
> > >> > > > > > Ismael
> > >> > > > > >
> > >> > > > > > On Mon, Aug 27, 2018 at 7:57 AM Viktor Somogyi-Vass <
> > >> > > > > > viktorsomo...@gmail.com>
> > >> > > > > > wrote:
> > >> > > > > >
> > >> > > > > > > Thanks guys, I've updated my KIP with this info (so to
> keep
> > >> > > solution
> > >> > > > > #1).
> > >> > > > > > > If you find it good enough, please vote as well or let me
> > >> know if
> > >> > > you
> > >> > > > > > think
> > >> > > > > > > something is missing.
> > >> > > > > > >
> > >> > > > > > > On Sat, Aug 25, 2018 at 1:14 AM Ismael Juma <
> > >> ism...@juma.me.uk>
> > >> > > > wrote:
> > >> > > > > > >
> > >> > > > > > > > I'm OK with 1 too. It makes me a bit sad that we don't
> > have
> > >> a
> > >> > > path
> > >> > > > > for
> > >> > > > > > > > removing the method without headers, but it seems like
> the
> > >> > > simplest
> > >> > > > > and
> > >> > > > > > > > least confusing option (I am assuming that headers are
> not
> > >> needed
> > >> > > > in
> > >> > > > > > the
> > >> > > > > > > > serializers in the common case).
> > >> > > > > > > >
> > >> > > > > > > > Ismael
> > >> > > > > > > >
> > >> > > > > > > > On Fri, Aug 24, 2018 at 2:42 PM Jason Gustafson <
> > >> > > > ja...@confluent.io>
> > >> > > > > > > > wrote:
> > >> > > > > > > >
> > >> > > > > > > > > Hey Viktor,
> > >> > > > > > > > >
> > >> > > > > > > > > Good summary. I agree that option 1) seems like the
> > >> simplest
> > >> > > > choice
> > >> > > > > > > and,
> > >> > > > > > > > as
> > >> > > > > > > > > you note, we can always add the default implementation
> > >> later.
> > >> > > > I'll
> > >> > > > > > > leave
> > >> > > > > > > > > Ismael to make a case for the circular forwarding
> > >> approach ;)
> > >> > > > > > > > >
> > >> > > > > > > > > -Jason
> > >> > > > > > > > >
> > >> > > > > > > > > On Fri, Aug 24, 2018 at 3:02 AM, Viktor Somogyi-Vass <
> > >> > > > > > > > > viktorsomo...@gmail.com> wrote:
> > >> > > > > > > > >
> > >> > > > > > > > > > I think in the first draft I didn't provide an
> > >> implementation
> > >> > > > for
> > >> > > > > > > them
> > >> > > > > > > > as
> > >> > > > > > > > > > it seemed very simple and straightforward. I looked
> > up a
> > >> > > couple
> > >> > > > > of
> > >> > > > > > > > > > implementations of the ExtendedSerializers on github
> > >> and the
> > >> > > > > > general
> > >> > > > > > > > > > behavior seems to be that they delegate to the 2
> > >> argument
> > >> > > > > > > (headerless)
> > >> > > > > > > > > > method:
> > >> > > > > > > > > >
> > >> > > > > > > > > >
> > >> 

Re: [DISCUSS] KIP-349 Priorities for Source Topics

2018-09-07 Thread Jan Filipiak



On 07.09.2018 05:21, Matthias J. Sax wrote:

I am still not sure how Samza's MessageChooser actually works and how
this would align with KafkaConsumer fetch requests.


Maybe I can give some background (conceptually); @Colin, please correct
me if I say anything wrong:


When a fetch request is sent, all assigned topic partitions of the
consumer are ordered in a list and the broker will return data starting
with the first partition in the list, returning as many messages as
possible (either until the partition's end-offset or the fetch size is reached).
If the end-offset is reached but not the fetch size, the next partition in the
list is considered. This repeats until the fetch size is reached. If a
partition in the list has no data available, it's skipped.

When data is returned to the consumer, the consumer moves all partitions
for which data was returned to the end of the list. Thus, in the next
fetch request, data from other partitions is returned (this avoids
starvation of partitions). Note that partitions that do not return data
(even if they are at the head of the list) stay at the head of the list.

(Note that this topic list is actually also maintained broker-side to
allow for incremental fetch requests.)
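
A minimal, purely illustrative model of that list handling (not the actual Fetcher
code) could look like this:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import org.apache.kafka.common.TopicPartition;

// Toy model of the consumer-side fetch ordering described above; not the real Fetcher logic.
class FetchOrder {
    private final Deque<TopicPartition> order = new ArrayDeque<>();

    FetchOrder(List<TopicPartition> assigned) {
        order.addAll(assigned);
    }

    // Partitions are offered to the broker in this order; it fills the fetch response
    // from the head of the list until the fetch size is reached.
    List<TopicPartition> fetchOrder() {
        return new ArrayList<>(order);
    }

    // Partitions that returned data are moved to the back, so other partitions get a
    // turn in the next request; empty partitions keep their place at the head.
    void onResponse(List<TopicPartition> returnedData) {
        for (TopicPartition tp : returnedData) {
            if (order.remove(tp)) {
                order.addLast(tp);
            }
        }
    }
}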

Because different partitions are hosted on different brokers, the
consumer will send fetch requests to different brokers (@Colin: how does
this work in detail? Does the consumer just do a round robin over all
brokers it needs to fetch from?)


Given the original proposal about topic priorities, it would be possible
to have multiple lists, one per priority. If topic partitions return
data, they would be moved to the end of their priority list. The lists
would be considered in priority order. Thus, higher-priority topic
partitions stay at the head and are considered first.
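
Extending the same toy model (reusing the FetchOrder sketch above), one list per
priority could be kept and concatenated in priority order; again only a sketch of the
idea, not an implementation:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import org.apache.kafka.common.TopicPartition;

// Sketch only: one rotation list per priority, consulted from highest to lowest.
class PrioritizedFetchOrder {
    // Lower number = higher priority; TreeMap keeps the priorities sorted.
    private final Map<Integer, FetchOrder> byPriority = new TreeMap<>();

    void assign(int priority, List<TopicPartition> partitions) {
        byPriority.put(priority, new FetchOrder(partitions));
    }

    // Higher-priority partitions always come first in the fetch order; within a priority,
    // partitions that returned data rotate to the back as before.
    List<TopicPartition> fetchOrder() {
        List<TopicPartition> result = new ArrayList<>();
        for (FetchOrder order : byPriority.values()) {
            result.addAll(order.fetchOrder());
        }
        return result;
    }
}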


If I understand MessageChooser correctly (and consider a client-side
implementation only), the MessageChooser can only pick from the data
that was returned in a fetch request -- but it cannot alter what the
fetch request returns.

It seems that for each fetched message, update() would be called and the
MessageChooser buffers the message. When a message should be processed
(i.e., when Iterator.next() is called on the iterator returned from
poll()), choose() is called to return a message from the buffer (based
on whatever strategy the MessageChooser implements).
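
Put into code, a purely client-side chooser over already-fetched records might look
like this; the method names follow Samza's MessageChooser shape, but the types here are
illustrative only:

import java.util.ArrayDeque;
import java.util.Deque;
import org.apache.kafka.clients.consumer.ConsumerRecord;

// Client-side only: buffers records that poll() already returned and decides which one
// the application processes next. It cannot change what the next fetch request returns.
class SimpleMessageChooser<K, V> {
    private final Deque<ConsumerRecord<K, V>> buffer = new ArrayDeque<>();

    // Called for every record handed back by poll() (Samza's update()).
    void update(ConsumerRecord<K, V> record) {
        buffer.addLast(record);
    }

    // Called when the application asks for the next record to process (Samza's choose()).
    // A real implementation would apply some strategy (per-topic priority, timestamps, ...);
    // this sketch just returns records in arrival order.
    ConsumerRecord<K, V> choose() {
        return buffer.pollFirst();
    }
}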

Thus, MessageChooser cannot actually prioritize some topics over others,
because the prioritization depends on the fetch requests that the
MessageChooser cannot influence (MessageChooser can only prioritize
records from different partitions that are already fetched). Thus, the
MessageChooser interface seems not to achieve what was proposed.

@Jan: please correct me if my understanding of MessageChooser is wrong.

Hi Matthias,

It is. I explicitly said that if I were going for it, the Message
Chooser could get the chance to pause and resume partitions. I
mentioned that in the mail on ~4.9 (pausing/resuming capabilities).
Probably easily missed.
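
For completeness, the pausing/resuming referred to is available on the consumer today;
a chooser that is allowed to call it could use it roughly like this (a sketch only, with
the triggering condition left as an assumption):

import java.util.Collections;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

// Sketch: a chooser that may pause/resume partitions can influence future fetches,
// not just reorder records that were already fetched.
class PartitionThrottle {
    static void throttle(KafkaConsumer<?, ?> consumer, TopicPartition lowPriority, boolean highPriorityBacklog) {
        if (highPriorityBacklog) {
            consumer.pause(Collections.singleton(lowPriority));   // excluded from future fetch requests
        } else {
            consumer.resume(Collections.singleton(lowPriority));  // eligible for fetching again
        }
    }
}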


Main point is: I didn't say we should adopt the Samza interface exactly; that
would be nonsense. Just take some inspiration from it and see if there isn't a more
powerful abstraction, since we are tackling the problem twice anyway.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-349%3A+Priorities+for+Source+Topics
https://cwiki.apache.org/confluence/display/KAFKA/KIP-353%3A+Improve+Kafka+Streams+Timestamp+Synchronization

I am definitely against a simple priority list.
I am definitely against broker support.
I am happy to sacrifice incremental fetch support (mainly relevant for
MirrorMakers and replicas with tons of partitions).

I would love to see a Message Chooser.

With this said, I wish y'all the best of luck and fun finding a good solution.
*mic drop*


Bye




If my understanding is correct, I am not sure how the MessageChooser
interface could be used to prioritize topics in fetch requests.


Overall, I get the impression that topic prioritization and the
MessageChooser are orthogonal (or complementary) to each other.



-Matthias



On 9/6/18 5:24 AM, Jan Filipiak wrote:

On 05.09.2018 17:18, Colin McCabe wrote:

Hi all,

I agree that DISCUSS is more appropriate than VOTE at this point,
since I don't remember the last discussion coming to a definite
conclusion.

I guess my concern is that this will add complexity and memory
consumption on the server side.  In the case of incremental fetch
requests, we will have to track at least two extra bytes per
partition, to know what the priority of each partition is within each
active fetch session.

It would be nice to hear more about the use-cases for this feature.  I
think Gwen asked about this earlier, and I don't remember reading a
response.  The fact that we're now talking about Samza interfaces is a
bit of a red flag.  After all, Samza didn't need partition priorities
to do what it did.  You can do a lot with muting partitions and using
appropriate threading in your code.

To show a use case, I linked 353, especially since the