[jira] [Created] (KAFKA-10815) EosTestDriver#verifyAllTransactionFinished should break loop if all partitions are verified

2020-12-07 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-10815:
--

 Summary: EosTestDriver#verifyAllTransactionFinished should break 
loop if all partitions are verified
 Key: KAFKA-10815
 URL: https://issues.apache.org/jira/browse/KAFKA-10815
 Project: Kafka
  Issue Type: Sub-task
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


If we don't break out of the loop once all partitions are verified, the loop 
will run for the full 10 mins ...
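The shape of the fix can be sketched as follows; this is an illustrative toy, not the actual EosTestDriver code, and the names (`verifiedPartitions`, `verificationEvents`) are made up:

```java
import java.util.HashSet;
import java.util.Set;

class VerifyLoopSketch {
    // Toy sketch: poll verification events up to some deadline, but break out
    // as soon as every partition has been verified instead of always spinning
    // until the timeout expires. Returns the number of polls performed.
    static int verifyAllTransactionsFinished(Set<String> allPartitions,
                                             Iterable<String> verificationEvents) {
        Set<String> verified = new HashSet<>();
        int polls = 0;
        for (String partition : verificationEvents) {
            polls++;
            verified.add(partition);
            if (verified.containsAll(allPartitions)) {
                break; // all partitions verified: no need to keep looping
            }
        }
        return polls;
    }
}
```

With two partitions and events `p0, p1, p0`, the loop stops after the second event rather than draining the whole iterable.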



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-679: Producer will enable the strongest delivery guarantee by default

2020-12-07 Thread Rajini Sivaram
+1 (binding)

Thanks for the KIP, Cheng!

Regards,

Rajini


On Mon, Dec 7, 2020 at 6:14 AM Gwen Shapira  wrote:

> +1 (binding). Awesome suggestion, Cheng.
>
> On Fri, Dec 4, 2020 at 11:00 AM Cheng Tan  wrote:
> >
> > Hi all,
> >
> > I’m proposing a new KIP for enabling the strongest delivery guarantee by
> default. Today Kafka supports EOS and N-1 concurrent failure tolerance, but
> the default settings haven't brought them out of the box. The proposed
> changes include changing the producer defaults to `acks=all` and
> `enable.idempotence=true`. Also, the ACL operation type IDEMPOTENCE_WRITE
> will be deprecated. If a producer has WRITE permission to any topic, it
> will be able to request a producer id and perform idempotent produce.
> >
> > KIP here:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-679%3A+Producer+will+enable+the+strongest+delivery+guarantee+by+default
> <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-679:+Producer+will+enable+the+strongest+delivery+guarantee+by+default
> >
> >
> > Please vote in this mail thread.
> >
> > Thanks
> >
> > - Cheng Tan
>
>
>
> --
> Gwen Shapira
> Engineering Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>
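In config terms, the proposal amounts to making the settings below the default, which users today have to set explicitly (shown as plain properties, not tied to a Kafka client instance; `acks` is the producer config key):

```java
import java.util.Properties;

class Kip679Defaults {
    // Sketch of the settings KIP-679 proposes to make the default.
    static Properties strongestDeliveryGuarantee() {
        Properties props = new Properties();
        props.setProperty("acks", "all");                // wait for all in-sync replicas
        props.setProperty("enable.idempotence", "true"); // de-duplicate retried sends
        return props;
    }
}
```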


Re: [DISCUSS] KIP-691: Transactional Producer Exception Handling

2020-12-07 Thread Bruno Cadonna

Thanks Boyang for the KIP!

Like Matthias, I also do not know the producer internals well enough to 
comment on the categorization. However, I think having a super exception 
(e.g. RetriableException) that encodes whether an exception is fatal or not 
is cleaner, easier to understand, and less error-prone: ideally, when you 
add a new non-fatal exception in the future, you just need to let it 
inherit from the super exception, and the rest of the code will behave 
correctly without the need to wrap the new exception into another exception 
each time it is thrown (maybe it is thrown at different locations in the 
code).


As far as I understand, the following statement from your previous e-mail 
is the reason that such a super exception is currently not possible:


"Right now we have RetriableException type, if we are going to add a 
`ProducerRetriableException` type, we have to put this new interface as 
the parent of the RetriableException, because not all thrown non-fatal 
exceptions are `retriable` in general for producer"



In the list of exceptions in your KIP, I found non-fatal exceptions that 
do not inherit from RetriableException. I guess those are the ones you 
are referring to in your statement:


InvalidProducerEpochException
InvalidPidMappingException
TransactionAbortedException

All of those exceptions are non-fatal and do not inherit from 
RetriableException. Is there a reason for that? If they inherited from 
RetriableException, we would be a bit closer to the super exception I 
mentioned above.


With OutOfOrderSequenceException and UnknownProducerIdException, I 
understand that their fatality depends on the type (i.e. 
configuration) of the producer. That makes it difficult to have a super 
exception that encodes the retriability as mentioned above. Would it be 
possible to introduce exceptions that inherit from RetriableException 
and that are thrown when those errors are received from the brokers 
and the producer is configured such that the errors are retriable?
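The super-exception idea can be sketched as a small hierarchy; the class names below are illustrative stand-ins, not the real Kafka exception classes:

```java
// Illustrative only: a common non-fatal parent lets callers catch one type
// instead of enumerating every non-fatal exception.
class ProducerNonFatalException extends RuntimeException {
    ProducerNonFatalException(String msg) { super(msg); }
}

// A retriable error is one kind of non-fatal error.
class RetriableExampleException extends ProducerNonFatalException {
    RetriableExampleException(String msg) { super(msg); }
}

// A newly added non-fatal exception only needs to pick the right parent;
// existing catch blocks keep working without wrapping.
class InvalidPidMappingExampleException extends ProducerNonFatalException {
    InvalidPidMappingExampleException(String msg) { super(msg); }
}

class HierarchyDemo {
    static boolean isNonFatal(Throwable t) {
        return t instanceof ProducerNonFatalException;
    }
}
```

The point of the hierarchy is that `isNonFatal` never needs updating when a new non-fatal exception is introduced.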


Best,
Bruno


On 04.12.20 19:34, Guozhang Wang wrote:

Thanks Boyang for the proposal! I made a pass over the list and here are
some thoughts:

0) Although this is not part of the public API, I think we should make sure
that the suggested pattern (i.e. users can always call abortTxn() when
handling non-fatal errors) is indeed supported. E.g. if the txn is already
aborted on the broker side, then users can still call abortTxn, which would
not throw another exception but would just be treated as a no-op.
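The contract suggested in point 0) can be sketched with a toy model (not the real KafkaProducer): aborting an already-aborted transaction is a no-op rather than another exception.

```java
// Toy model of the suggested contract only; illustrative, not Kafka code.
class ToyTxnProducer {
    enum State { IN_TRANSACTION, ABORTED }
    State state = State.IN_TRANSACTION;

    void abortTransaction() {
        if (state == State.ABORTED) {
            return; // already aborted (possibly by the broker): treat as no-op
        }
        state = State.ABORTED;
    }
}
```

Under this contract, an error handler can unconditionally call abortTransaction() after any non-fatal error without worrying about the broker having aborted first.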

1) *ConcurrentTransactionsException*: I think this error can also be
returned but not documented yet. This should be a non-fatal error.

2) *InvalidTxnStateException*: this error is returned from the broker when a txn
state transition fails (e.g. it is trying to transition to complete-commit
while the current state is not prepare-commit). This error could indicate
a bug in the client internal code or the broker code, OR a user error --- a
similar error is ConcurrentTransactionsException, i.e. if Kafka is bug-free
these exceptions would only be returned if users try to do something wrong,
e.g. calling abortTxn right after a commitTxn, etc. So I'm thinking it
should be a non-fatal error instead of a fatal error, wdyt?

3) *KafkaException*: case i "indicates fatal transactional sequence
(Fatal)" is a bit conflicting with *OutOfOrderSequenceException*, which
is treated as non-fatal. I guess your proposal is that
OutOfOrderSequenceException would be treated either as fatal with a
transactional producer, or non-fatal with an idempotent producer, is that
right? If the producer is only configured with idempotency but not
transactions, then throwing a TransactionStateCorruptedException for
non-fatal errors would be confusing since users are not using transactions
at all. So I suggest we always throw OutOfOrderSequenceException as-is (i.e.
not wrapped) no matter how the producer is configured, and let the caller
decide how to handle it based on whether it is only idempotent or
transactional itself.

4) Besides all the txn APIs, the `send()` callback / future can also throw
txn-related exceptions, so I think this KIP should cover this API as well?

5) This is related to 1/2) above: sometimes those non-fatal errors like
ConcurrentTxn or InvalidTxnState are not due to the state being corrupted
at the broker side, but maybe users are doing something wrong. So I'm
wondering if we should further distinguish those non-fatal errors between
a) those that are caused by Kafka itself, e.g. a broker timed out and
aborted a txn and later an endTxn request is received, and b) the user's
API call pattern is incorrect, causing the request to be rejected with an
error code from the broker. *TransactionStateCorruptedException* feels to
me more suited to case a) than case b).


Guozhang


On Wed, Dec 2, 2020 at 4:50 PM Boyang Chen 
wrote:


Thanks Matthias, I think your proposal makes sense as well, on the pro side
we could have a universall

Re: [VOTE] KIP-679: Producer will enable the strongest delivery guarantee by default

2020-12-07 Thread Ismael Juma
Thanks for the KIP Cheng. One suggestion: should we add the `kafka-acls`
deprecation warnings for `IDEMPOTENT_WRITE` in 3.0? That would give time
for authorizer implementations to be updated.

Ismael

On Fri, Dec 4, 2020 at 11:00 AM Cheng Tan  wrote:

> Hi all,
>
> I’m proposing a new KIP for enabling the strongest delivery guarantee by
> default. Today Kafka supports EOS and N-1 concurrent failure tolerance, but
> the default settings haven't brought them out of the box. The proposed
> changes include changing the producer defaults to `acks=all` and
> `enable.idempotence=true`. Also, the ACL operation type IDEMPOTENCE_WRITE
> will be deprecated. If a producer has WRITE permission to any topic, it
> will be able to request a producer id and perform idempotent produce.
>
> KIP here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-679%3A+Producer+will+enable+the+strongest+delivery+guarantee+by+default
> <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-679:+Producer+will+enable+the+strongest+delivery+guarantee+by+default
> >
>
> Please vote in this mail thread.
>
> Thanks
>
> - Cheng Tan


Re: [VOTE] KIP-689: Extend `StreamJoined` to allow more store configs

2020-12-07 Thread Leah Thomas
Thanks all! KIP-689 is accepted with 4 binding votes (Sophie, Guozhang,
Matthias, John) and 2 non-binding votes (Bruno, Walker).

Cheers,
Leah

On Fri, Dec 4, 2020 at 10:03 PM John Roesler  wrote:

> Sorry I missed the discussion, but I just read the KIP and
> the discussion. It all looks good to me.
>
> +1 (binding)
>
> -John
>
> On Thu, 2020-12-03 at 14:00 -0700, Matthias J. Sax wrote:
> > +1 (binding)
> >
> > On 12/3/20 11:21 AM, Guozhang Wang wrote:
> > > +1 (binding)
> > >
> > > Thanks Leah!
> > >
> > > On Wed, Dec 2, 2020 at 11:27 AM Sophie Blee-Goldman <
> sop...@confluent.io>
> > > wrote:
> > >
> > > > Thanks for the KIP! +1 (binding)
> > > >
> > > > Sophie
> > > >
> > > > On Wed, Dec 2, 2020 at 8:42 AM Walker Carlson  >
> > > > wrote:
> > > >
> > > > > +1 (non-binding)
> > > > >
> > > > > Thank you,
> > > > > walker
> > > > >
> > > > > On Wed, Dec 2, 2020 at 8:15 AM Bruno Cadonna 
> wrote:
> > > > >
> > > > > > +1 (non-binding)
> > > > > >
> > > > > > Thanks Leah!
> > > > > >
> > > > > > Best,
> > > > > > Bruno
> > > > > >
> > > > > > On 02.12.20 16:55, Leah Thomas wrote:
> > > > > > > Hi all,
> > > > > > >
> > > > > > > I'd like to start the vote for KIP-689 for enabling/disabling
> logging
> > > > > for
> > > > > > > `StreamJoined`.
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-689%3A+Extend+%60StreamJoined%60+to+allow+more+store+configs
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Leah
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> > >
>
>
>


Re: [VOTE] 2.7.0 RC4

2020-12-07 Thread Bill Bejeck
Hi Boyang,

Thanks for the heads-up; I agree this is a regression for streams and
should go into 2.7.

Thanks,
Bill

On Mon, Dec 7, 2020 at 1:00 AM Boyang Chen 
wrote:

> Hey Bill,
>
> Unfortunately we have found another regression in 2.7 streams, which I have
> filed a blocker here.
> The implementation is done, and I will try to get reviews and merge ASAP.
>
> Best,
> Boyang
>
> On Fri, Dec 4, 2020 at 3:14 PM Jack Yang  wrote:
>
> > unsubsribe
> >
> >
>


[jira] [Resolved] (KAFKA-10798) Failed authentication delay doesn't work with some SASL authentication failures

2020-12-07 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-10798.

  Reviewer: Manikumar
Resolution: Fixed

> Failed authentication delay doesn't work with some SASL authentication 
> failures
> ---
>
> Key: KAFKA-10798
> URL: https://issues.apache.org/jira/browse/KAFKA-10798
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.8.0
>
>
> KIP-306 introduced the config `connection.failed.authentication.delay.ms` to 
> delay connection close on brokers after failed authentication, in order to limit 
> the rate of retried authentications from clients and avoid excessive 
> authentication load on brokers from failed clients. We rely on the 
> authentication failure response being delayed in this case to prevent clients 
> from detecting the failure and retrying sooner.
> SaslServerAuthenticator delays response for SaslAuthenticationException, but 
> not for SaslException, even though SaslException is also converted into 
> SaslAuthenticationException and processed as an authentication failure by 
> both server and clients. As a result, connection delay is not applied in many 
> scenarios like SCRAM authentication failures.
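The gist of the fix is that a raw SaslException should trigger the close delay just like SaslAuthenticationException does. A sketch of that predicate (the method and the stand-in class below are illustrative, not the actual SaslServerAuthenticator code; the real SaslAuthenticationException lives in org.apache.kafka.common.errors and is stubbed here so the sketch compiles without Kafka on the classpath):

```java
import javax.security.sasl.SaslException;

// Stand-in for org.apache.kafka.common.errors.SaslAuthenticationException.
class SaslAuthenticationExceptionStandIn extends RuntimeException { }

class AuthDelaySketch {
    // Before the fix only SaslAuthenticationException triggered the close
    // delay; after the fix a raw SaslException does too, since it is
    // converted into an authentication failure anyway.
    static boolean shouldDelayResponse(Throwable authFailure) {
        return authFailure instanceof SaslAuthenticationExceptionStandIn
                || authFailure instanceof SaslException;
    }
}
```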



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-12-07 Thread Satish Duggana
Hi Jun,
Thanks for your comments. Please find the inline replies below.

>605.2 It's rare for the follower to need the remote data. So, the current
approach is fine too. Could you document the process of rebuilding the
producer state since we can't simply trim the producerState to an offset in
the middle of a segment.

Will clarify in the KIP.

>5102.2 Would it be clearer to make startPosition long and endPosition
Optional?

We will have arg checks with the respective validation. It is not good
practice to have Optional method arguments, as mentioned here:
https://rules.sonarsource.com/java/RSPEC-3553
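The overload alternative (which the linked rule recommends over an Optional parameter) can be sketched as follows; the trimmed interface below is illustrative only, not the actual RemoteStorageManager API:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

// Illustrative: prefer two overloads over an Optional<Integer> endPosition.
interface FetcherSketch {
    InputStream fetchLogSegmentData(String segmentId, int startPosition);

    InputStream fetchLogSegmentData(String segmentId, int startPosition, int endPosition);
}

class InMemoryFetcher implements FetcherSketch {
    private final byte[] segment = "0123456789".getBytes();

    // "No end position" means "read to the end of the segment"...
    public InputStream fetchLogSegmentData(String segmentId, int startPosition) {
        return fetchLogSegmentData(segmentId, startPosition, segment.length);
    }

    // ...by delegating to the bounded overload, so the range logic lives once.
    public InputStream fetchLogSegmentData(String segmentId, int startPosition, int endPosition) {
        if (startPosition < 0 || endPosition > segment.length || startPosition > endPosition) {
            throw new IllegalArgumentException("invalid range");
        }
        return new ByteArrayInputStream(segment, startPosition, endPosition - startPosition);
    }
}
```

Callers never pass an empty Optional; they simply pick the overload that matches their intent, and argument validation stays in one place.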


>5102.5 LogSegmentData still has leaderEpochIndex as File instead of
ByteBuffer.

Updated.

>5102.7 Could you define all public methods for LogSegmentData?

Updated.

>5103.5 Could you change the reference to rlm_process_interval_ms and
rlm_retry_interval_ms to the new config names? Also, the retry interval
config seems still missing. It would be useful to support exponential
backoff with the retry interval config.

Good point. We wanted the retry with truncated exponential backoff,
updated the KIP.

>5111. "RLM follower fetches the earliest offset for the earliest leader
epoch by calling RLMM.earliestLogOffset(TopicPartition topicPartition, int
leaderEpoch) and updates that as the log start offset." This text is still
there. Also, could we remove earliestLogOffset() from RLMM?

Updated.

>5115. There are still references to "remote log cleaners".

Updated.

>6000. Since we are returning new error codes, we need to bump up the
protocol version for Fetch request. Also, it will be useful to document all
new error codes and whether they are retriable or not.

Sure, we will add that in the KIP.

>6001. public Map segmentLeaderEpochs(): Currently, leaderEpoch
is int32 instead of long.

Updated.

>6002. Is RemoteLogSegmentMetadata.markedForDeletion() needed given
RemoteLogSegmentMetadata.state()?

No, it is fixed.

>6003. RemoteLogSegmentMetadata remoteLogSegmentMetadata(TopicPartition
topicPartition, long offset, int epochForOffset): Should this return
Optional?

That makes sense, updated.

>6005. RemoteLogState: It seems it's better to split it between
DeletePartitionUpdate and RemoteLogSegmentMetadataUpdate since the states
are never shared between the two use cases.

Agree with that, updated.

>6006. RLMM.onPartitionLeadershipChanges(): This may be ok. However, is it
true that, other than the metadata topic, RLMM just needs to know whether
there is a replica assigned to this broker and doesn't need to know whether
the replica is the leader or the follower?

That may be true. If the implementation does not need that, it can
ignore the information in the callback.

>6007: "Handle expired remote segments (leader and follower)": Why is this
needed in both the leader and the follower?

Updated.

>6008.   "name": "SegmentSizeInBytes",
"type": "int64",
The segment size can just be int32.

Updated.

>6009. For the record format in the log, it seems that we need to add record
type and record version before the serialized bytes. We can follow the
convention used in
https://cwiki.apache.org/confluence/display/KAFKA/KIP-631%3A+The+Quorum-based+Kafka+Controller#KIP631:TheQuorumbasedKafkaController-RecordFormats

Yes, KIP already mentions that these are serialized before the payload
as below. We will mention explicitly that these two are written before
the data is written.

RLMM instance on broker publishes the message to the topic with key as
null and value with the below format.

type  : unsigned var int, represents the value type. This value is
'apikey' as mentioned in the schema.
version : unsigned var int, the 'version' number of the type as
mentioned in the schema.
data  : record payload in kafka protocol message format.
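A sketch of writing that envelope, with the unsigned varint encoding used by the Kafka protocol (7 data bits per byte, high bit set on all but the last byte); the method names are illustrative, not the actual RLMM serialization code:

```java
import java.io.ByteArrayOutputStream;

class EnvelopeSketch {
    // Unsigned varint: 7 data bits per byte, continuation bit on all but last.
    static void writeUnsignedVarint(int value, ByteArrayOutputStream out) {
        while ((value & 0xFFFFFF80) != 0) {
            out.write((value & 0x7F) | 0x80);
            value >>>= 7;
        }
        out.write(value);
    }

    // value = type (apikey) varint, then version varint, then serialized payload
    static byte[] serialize(int type, int version, byte[] payload) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeUnsignedVarint(type, out);
        writeUnsignedVarint(version, out);
        out.write(payload, 0, payload.length);
        return out.toByteArray();
    }
}
```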


>6010. remote.log.manager.thread.pool.size: The default value is 10. This
might be too high when enabling the tiered feature for the first time.
Since there are lots of segments that need to be tiered initially, a large
number of threads could overwhelm the broker.

Is the default value 5 reasonable?

>6011. "The number of milli seconds to keep the local log segment before it
gets deleted. If not set, the value in `log.retention.minutes` is used. If
set to -1, no time limit is applied." We should use log.retention.ms
instead of log.retention.minutes.

Nice typo catch. Updated the KIP.

Thanks,
Satish.

On Thu, Dec 3, 2020 at 8:03 AM Jun Rao  wrote:
>
> Hi, Satish,
>
> Thanks for the updated KIP. A few more comments below.
>
> 605.2 It's rare for the follower to need the remote data. So, the current
> approach is fine too. Could you document the process of rebuilding the
> producer state since we can't simply trim the producerState to an offset in
> the middle of a segment.
>
> 5102.2 Would it be clearer to make startPosition long and endPosition
> Optional?
>
> 5102.5 LogSegmentData still has leaderEpochIndex as File instead of
> ByteBuffer.
>
> 5102.7 Could you define all p

[jira] [Created] (KAFKA-10816) Connect REST API should have a resource that can be used as a readiness probe

2020-12-07 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10816:
-

 Summary: Connect REST API should have a resource that can be used 
as a readiness probe
 Key: KAFKA-10816
 URL: https://issues.apache.org/jira/browse/KAFKA-10816
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Randall Hauch


There are a few ways to accurately detect whether a Connect worker is 
*completely* ready to process all REST requests:

# Wait for `Herder started` in the Connect worker logs
# Use the REST API to issue a request that will be completed only after the 
herder has started, such as `GET /connectors/{name}/` or `GET 
/connectors/{name}/status`.

Other techniques can be used to detect other startup states, though none of 
these will guarantee that the worker has indeed completely started up and can 
process all REST requests:

* `GET /` can be used to know when the REST server has started, but this may be 
before the worker has started completely and successfully.
* `GET /connectors` can be used to know when the REST server has started, but 
this may be before the worker has started completely and successfully. And, for 
the distributed Connect worker, this may actually return an older list of 
connectors if the worker hasn't yet completely read through the internal config 
topic. It's also possible that this request returns even if the worker is 
having trouble reading from the internal config topic.
* `GET /connector-plugins` can be used to know when the REST server has 
started, but this may be before the worker has started completely and 
successfully.

The Connect REST API should have an endpoint that more obviously and more 
simply can be used as a readiness probe. This could be a new resource (e.g., 
`GET /status`), though this would only work on newer Connect runtimes, and 
existing tooling, installations, and examples would have to be modified to take 
advantage of this feature (if it exists). 

Alternatively, we could make sure that the existing resources (e.g., `GET /` or 
`GET /connectors`) wait for the herder to start completely; this wouldn't 
require a KIP, and it would not require clients to use different techniques for 
newer and older Connect runtimes. (Whether or not we back-port this is another 
question altogether, since it's debatable whether the behavior of the existing 
REST resources is truly a bug.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10817) Add clusterId validation to Fetch handling

2020-12-07 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-10817:
---

 Summary: Add clusterId validation to Fetch handling
 Key: KAFKA-10817
 URL: https://issues.apache.org/jira/browse/KAFKA-10817
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jason Gustafson
Assignee: David Jacot


Initially we were unsure how the clusterId would be generated by the cluster 
after Zookeeper removal, so we did not implement it. It is looking now like we 
will probably require users to generate it manually prior to starting the 
cluster. See here for details: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-631%3A+The+Quorum-based+Kafka+Controller#KIP631:TheQuorumbasedKafkaController-Fencing.
 In this case, we can assume that the clusterId will be provided when 
instantiating the raft client, so we can add the logic to the request handler 
to validate it in inbound Fetch requests.
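The request-handler check described here is small; a hedged sketch (the class, method, and error name below are illustrative, not the actual raft request handler):

```java
import java.util.Objects;

class FetchClusterIdCheckSketch {
    // Hypothetical: the raft client is instantiated with the clusterId, and
    // inbound Fetch requests carry one; a mismatch is rejected up front.
    private final String clusterId;

    FetchClusterIdCheckSketch(String clusterId) {
        this.clusterId = Objects.requireNonNull(clusterId);
    }

    /** Returns null if the request passes, or an error name to send back. */
    String validate(String requestClusterId) {
        if (requestClusterId == null || !clusterId.equals(requestClusterId)) {
            return "INCONSISTENT_CLUSTER_ID"; // illustrative error name
        }
        return null;
    }
}
```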



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-692: Make AdminClient value object constructors public

2020-12-07 Thread Gwen Shapira
Sounds like a good idea :)

At some point we may want to switch to builders - it gives you more
flexibility in how you construct all those objects. But those two
changes are unrelated.

On Thu, Dec 3, 2020 at 9:30 AM Noa Resare  wrote:
>
> Hi there,
>
> I finally got around to write up the KIP for a change I have wanted for some 
> time:
>
> KIP-692: Make AdminClient value object constructors public 
> 
>
> In essence: Let us encourage better patterns for testing by making it easier 
> to mock AdminService.
>
> I am looking forward to hearing your thoughts on this.
>
> Cheers
> noa



-- 
Gwen Shapira
Engineering Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-12-07 Thread Jun Rao
Hi, Satish,

Thanks for the reply. A few more comments below.

5102.2: It seems that both positions can just be int. Another option is to
have two methods. Would it be clearer?

InputStream fetchLogSegmentData(RemoteLogSegmentMetadata remoteLogSegmentMetadata,
                                int startPosition) throws RemoteStorageException;

InputStream fetchLogSegmentData(RemoteLogSegmentMetadata remoteLogSegmentMetadata,
                                int startPosition,
                                int endPosition) throws RemoteStorageException;

6003: Could you also update the javadoc for the return value?

6010: What kind of tiering throughput have you seen with 5 threads?

6020: local.log.retention.bytes: Should it default to log.retention.bytes
to be consistent with local.log.retention.ms?

6021: Could you define TopicIdPartition?

6022: For all public facing classes, could you specify the package name?

It seems that you already added the topicId support. Two other remaining
items are (a) the format of local tier metadata storage and (b) upgrade.

Jun

On Mon, Dec 7, 2020 at 8:56 AM Satish Duggana 
wrote:

> Hi Jun,
> Thanks for your comments. Please find the inline replies below.
>
> >605.2 It's rare for the follower to need the remote data. So, the current
> approach is fine too. Could you document the process of rebuilding the
> producer state since we can't simply trim the producerState to an offset in
> the middle of a segment.
>
> Will clarify in the KIP.
>
> >5102.2 Would it be clearer to make startPosition long and endPosition
> Optional?
>
> We will have arg checks with the respective validation. It is not good
> practice to have Optional method arguments, as mentioned here:
> https://rules.sonarsource.com/java/RSPEC-3553
>
>
> >5102.5 LogSegmentData still has leaderEpochIndex as File instead of
> ByteBuffer.
>
> Updated.
>
> >5102.7 Could you define all public methods for LogSegmentData?
>
> Updated.
>
> >5103.5 Could you change the reference to rlm_process_interval_ms and
> rlm_retry_interval_ms to the new config names? Also, the retry interval
> config seems still missing. It would be useful to support exponential
> backoff with the retry interval config.
>
> Good point. We wanted the retry with truncated exponential backoff,
> updated the KIP.
>
> >5111. "RLM follower fetches the earliest offset for the earliest leader
> epoch by calling RLMM.earliestLogOffset(TopicPartition topicPartition, int
> leaderEpoch) and updates that as the log start offset." This text is still
> there. Also, could we remove earliestLogOffset() from RLMM?
>
> Updated.
>
> >5115. There are still references to "remote log cleaners".
>
> Updated.
>
> >6000. Since we are returning new error codes, we need to bump up the
> protocol version for Fetch request. Also, it will be useful to document all
> new error codes and whether they are retriable or not.
>
> Sure, we will add that in the KIP.
>
> >6001. public Map segmentLeaderEpochs(): Currently, leaderEpoch
> is int32 instead of long.
>
> Updated.
>
> >6002. Is RemoteLogSegmentMetadata.markedForDeletion() needed given
> RemoteLogSegmentMetadata.state()?
>
> No, it is fixed.
>
> >6003. RemoteLogSegmentMetadata remoteLogSegmentMetadata(TopicPartition
> topicPartition, long offset, int epochForOffset): Should this return
> Optional?
>
> That makes sense, updated.
>
> >6005. RemoteLogState: It seems it's better to split it between
> DeletePartitionUpdate and RemoteLogSegmentMetadataUpdate since the states
> are never shared between the two use cases.
>
> Agree with that, updated.
>
> >6006. RLMM.onPartitionLeadershipChanges(): This may be ok. However, is it
> true that, other than the metadata topic, RLMM just needs to know whether
> there is a replica assigned to this broker and doesn't need to know whether
> the replica is the leader or the follower?
>
> That may be true. If the implementation does not need that, it can
> ignore the information in the callback.
>
> >6007: "Handle expired remote segments (leader and follower)": Why is this
> needed in both the leader and the follower?
>
> Updated.
>
> >6008.   "name": "SegmentSizeInBytes",
> "type": "int64",
> The segment size can just be int32.
>
> Updated.
>
> >6009. For the record format in the log, it seems that we need to add
> record
> type and record version before the serialized bytes. We can follow the
> convention used in
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-631%3A+The+Quorum-based+Kafka+Controller#KIP631:TheQuorumbasedKafkaController-RecordFormats
>
> Yes, KIP already mentions that these are serialized before the payload
> as below. We will mention explicitly that these two are written before
> the data is written.
>
> RLMM instance on broker publishes the message to the topic with key as
> null and value with the below format.
>
> type  : unsigned var int, represents the value type. This value is
> 'apikey' as mentioned in the schema.
> version : unsigned var int, the 'version'

Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #312

2020-12-07 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #266

2020-12-07 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10811: Correct the MirrorConnectorsIntegrationTest to correctly 
mask the exit procedures (#9698)

[github] KAFKA-10798; Ensure response is delayed for failed SASL authentication 
with connection close delay (#9678)


--
[...truncated 6.92 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #289

2020-12-07 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10811: Correct the MirrorConnectorsIntegrationTest to correctly 
mask the exit procedures (#9698)

[github] KAFKA-10798; Ensure response is delayed for failed SASL authentication 
with connection close delay (#9678)


--
[...truncated 6.97 MB...]

Contributing Avro Kafka Connect converter

2020-12-07 Thread Ravindra Nath Kakarla
Hi,

I would like to contribute an Avro converter for Kafka Connect. I described
the approach on the Issue, https://issues.apache.org/jira/browse/KAFKA-10715

I am looking for a reviewer who can validate my approach and help commit
the change. Is anyone interested in helping me?

Thank you!


Re: [VOTE] KIP-679: Producer will enable the strongest delivery guarantee by default

2020-12-07 Thread Cheng Tan
Hi Ismael,

Yes. Adding a deprecation warning for `IDEMPOTENT_WRITE` in 3.0 makes sense. I’ve 
updated the KIP’s “IDEMPOTENT_WRITE Deprecation” section to reflect your 
suggestion. Please let me know if you have more suggestions. Thanks.


Best, - Cheng Tan


> On Dec 7, 2020, at 6:42 AM, Ismael Juma  wrote:
> 
> Thanks for the KIP Cheng. One suggestion: should we add the `kafka-acls`
> deprecation warnings for `IDEMPOTENT_WRITE` in 3.0? That would give time
> for authorizer implementations to be updated.
> 
> Ismael
> 
> On Fri, Dec 4, 2020 at 11:00 AM Cheng Tan wrote:
> 
>> Hi all,
>> 
>> I’m proposing a new KIP for enabling the strongest delivery guarantee by
>> default. Today Kafka supports EOS and N-1 concurrent failure tolerance, but
>> the default settings haven’t brought them out of the box. The proposed
>> changes include changing the producer defaults to `acks=all` and
>> `enable.idempotence=true`. Also, the ACL operation type IDEMPOTENT_WRITE
>> will be deprecated. If a producer has WRITE permission to any topic, it
>> will be able to request a producer id and perform idempotent produce.
>> 
>> KIP here:
>> 
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-679%3A+Producer+will+enable+the+strongest+delivery+guarantee+by+default
>> 
>> Please vote in this mail thread.
>> 
>> Thanks
>> 
>> - Cheng Tan
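For readers following the vote, the proposed defaults map to producer configuration along these lines. This is a minimal sketch using plain `java.util.Properties`: the keys `acks` and `enable.idempotence` are the standard producer config names, and the bootstrap address is a placeholder.

```java
import java.util.Properties;

public class StrongDeliveryDefaults {
    // Builds producer properties matching the KIP-679 proposed defaults.
    public static Properties strongDeliveryProps(String bootstrapServers) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", bootstrapServers);
        // acks=all waits for the full in-sync replica set, tolerating
        // N-1 broker failures for a replication factor of N.
        props.setProperty("acks", "all");
        // Idempotence lets the broker deduplicate retried produce requests.
        props.setProperty("enable.idempotence", "true");
        return props;
    }

    public static void main(String[] args) {
        Properties props = strongDeliveryProps("localhost:9092");
        System.out.println(props.getProperty("acks"));               // all
        System.out.println(props.getProperty("enable.idempotence")); // true
    }
}
```

After KIP-679, these two settings would simply be the out-of-the-box behavior; the sketch shows what users effectively opt into today by hand.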



Re: Contributing Avro Kafka Connect converter

2020-12-07 Thread Brandon Brown
I could be wrong but according to 
https://kafka.apache.org/documentation/#connect_running Avro should be 
supported out of the box. 

key.converter - Converter class used to convert between Kafka Connect format 
and the serialized form that is written to Kafka. This controls the format of 
the keys in messages written to or read from Kafka, and since this is 
independent of connectors it allows any connector to work with any 
serialization format. Examples of common formats include JSON and Avro.
value.converter - Converter class used to convert between Kafka Connect format 
and the serialized form that is written to Kafka. This controls the format of 
the values in messages written to or read from Kafka, and since this is 
independent of connectors it allows any connector to work with any 
serialization format. Examples of common formats include JSON and Avro.

Brandon Brown

> On Dec 7, 2020, at 4:36 PM, Ravindra Nath Kakarla wrote:
> 
> Hi,
> 
> I would like to contribute an Avro converter for Kafka Connect. I described
> the approach on the Issue, https://issues.apache.org/jira/browse/KAFKA-10715
> 
> I am looking for a reviewer who can validate my approach and help commit
> the change. Is anyone interested in helping me?
> 
> Thank you!
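As a concrete illustration of Brandon's point: a Connect worker selects converters through these worker config properties. Out of the box, Apache Kafka ships JSON converters; a contributed Avro converter would be wired in the same way (the Avro class name below is hypothetical, not part of the Apache distribution):

```properties
# Converters shipped with Apache Kafka:
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true

# A contributed Avro converter would plug in the same way
# (hypothetical class name, shown for illustration only):
# value.converter=org.apache.kafka.connect.avro.AvroConverter
```

So the documentation's mention of Avro refers to the pluggable converter interface; KAFKA-10715 is about contributing an actual Avro implementation to the Apache distribution.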


Re: [VOTE] KIP-679: Producer will enable the strongest delivery guarantee by default

2020-12-07 Thread Ismael Juma
Thanks, +1 (binding).

On Mon, Dec 7, 2020 at 1:40 PM Cheng Tan  wrote:

> Hi Ismael,
>
> Yes. Adding a deprecation warning for `IDEMPOTENT_WRITE` in 3.0 makes sense.
> I’ve updated the KIP’s “IDEMPOTENT_WRITE Deprecation” section to reflect
> your suggestion. Please let me know if you have more suggestions. Thanks.
>
>
> Best, - Cheng Tan
>
>
> > On Dec 7, 2020, at 6:42 AM, Ismael Juma  wrote:
> >
> > Thanks for the KIP Cheng. One suggestion: should we add the `kafka-acls`
> > deprecation warnings for `IDEMPOTENT_WRITE` in 3.0? That would give time
> > for authorizer implementations to be updated.
> >
> > Ismael
> >
> > On Fri, Dec 4, 2020 at 11:00 AM Cheng Tan <c...@confluent.io> wrote:
> >
> >> Hi all,
> >>
> >> I’m proposing a new KIP for enabling the strongest delivery guarantee by
> >> default. Today Kafka supports EOS and N-1 concurrent failure tolerance,
> >> but the default settings haven’t brought them out of the box. The proposed
> >> changes include changing the producer defaults to `acks=all` and
> >> `enable.idempotence=true`. Also, the ACL operation type IDEMPOTENT_WRITE
> >> will be deprecated. If a producer has WRITE permission to any topic, it
> >> will be able to request a producer id and perform idempotent produce.
> >>
> >> KIP here:
> >>
> >>
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-679%3A+Producer+will+enable+the+strongest+delivery+guarantee+by+default
> >>
> >> Please vote in this mail thread.
> >>
> >> Thanks
> >>
> >> - Cheng Tan
>
>


[jira] [Created] (KAFKA-10818) Skip conversion to `Struct` when serializing generated requests/responses

2020-12-07 Thread Ismael Juma (Jira)
Ismael Juma created KAFKA-10818:
---

 Summary: Skip conversion to `Struct` when serializing generated 
requests/responses
 Key: KAFKA-10818
 URL: https://issues.apache.org/jira/browse/KAFKA-10818
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10818) Skip conversion to `Struct` when serializing generated requests/responses

2020-12-07 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-10818.
-
Resolution: Fixed

> Skip conversion to `Struct` when serializing generated requests/responses
> -
>
> Key: KAFKA-10818
> URL: https://issues.apache.org/jira/browse/KAFKA-10818
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Major
> Fix For: 2.8.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #290

2020-12-07 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10394: Add classes to read and write snapshot for KIP-630 (#9512)


--
[...truncated 3.49 MB...]

Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #267

2020-12-07 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-7819) Trogdor - Improve RoundTripWorker

2020-12-07 Thread Gwen Shapira (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-7819.
-
Fix Version/s: 2.2.0
   Resolution: Fixed

Closing since I noticed the PR was merged.

> Trogdor - Improve RoundTripWorker
> -
>
> Key: KAFKA-7819
> URL: https://issues.apache.org/jira/browse/KAFKA-7819
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Stanislav Kozlovski
>Assignee: Stanislav Kozlovski
>Priority: Minor
> Fix For: 2.2.0
>
>
> Trogdor's RoundTripWorker task has a couple of shortcomings:
>  * Consumer GroupID is hardcoded and consumers use `KafkaConsumer#assign()`: 
> [https://github.com/apache/kafka/blob/12947f4f944955240fd14ce8b75fab5464ea6808/tools/src/main/java/org/apache/kafka/trogdor/workload/RoundTripWorker.java#L314]
> This leaves you unable to run two separate instances of this worker on the same 
> partition in the same cluster, as the consumers would overwrite each other's 
> commits. It's probably better to add the task ID to the consumer group.
>  * the task spec's `maxMessages` [is an 
> integer|https://github.com/apache/kafka/blob/12947f4f944955240fd14ce8b75fab5464ea6808/tools/src/main/java/org/apache/kafka/trogdor/workload/RoundTripWorkloadSpec.java#L39],
>  leaving you unable to schedule long-winded tasks



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10819) The freeze operation should validate the content of the snapshot

2020-12-07 Thread Jose Armando Garcia Sancio (Jira)
Jose Armando Garcia Sancio created KAFKA-10819:
--

 Summary: The freeze operation should validate the content of the 
snapshot
 Key: KAFKA-10819
 URL: https://issues.apache.org/jira/browse/KAFKA-10819
 Project: Kafka
  Issue Type: Sub-task
  Components: replication
Reporter: Jose Armando Garcia Sancio
Assignee: Jose Armando Garcia Sancio


When freeze is called, the Raft client should make sure:
 # That the file contains complete record batches
 # That the CRC for each record batch is correct

If any of these validations fails, the freeze operation should fail. The state 
of the snapshot should be as if close was called without calling freeze. In 
other words, no immutable snapshot gets created.
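As a rough sketch of the CRC check described in this ticket, using the JDK's `java.util.zip.CRC32C` (the same checksum algorithm Kafka record batches use); the real byte layout of a record batch is not modeled here:

```java
import java.util.zip.CRC32C;

public class BatchCrcCheck {
    // Returns true if the stored CRC matches the CRC32C of the payload bytes.
    // A freeze-time validation pass would run a check like this over every
    // record batch in the snapshot file before making it immutable.
    public static boolean crcMatches(long storedCrc, byte[] payload) {
        CRC32C crc = new CRC32C();
        crc.update(payload, 0, payload.length);
        return crc.getValue() == storedCrc;
    }

    public static void main(String[] args) {
        byte[] batch = "record-batch-bytes".getBytes();
        CRC32C crc = new CRC32C();
        crc.update(batch, 0, batch.length);
        long stored = crc.getValue();
        System.out.println(crcMatches(stored, batch));      // true
        System.out.println(crcMatches(stored + 1, batch));  // false: freeze should fail
    }
}
```

If any batch fails this check (or the file ends mid-batch), freeze aborts and no immutable snapshot is produced, matching the behavior described above.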



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10820) Update start offset and end offset of the replicated log

2020-12-07 Thread Jose Armando Garcia Sancio (Jira)
Jose Armando Garcia Sancio created KAFKA-10820:
--

 Summary: Update start offset and end offset of the replicated log
 Key: KAFKA-10820
 URL: https://issues.apache.org/jira/browse/KAFKA-10820
 Project: Kafka
  Issue Type: Sub-task
  Components: replication
Reporter: Jose Armando Garcia Sancio
Assignee: Jose Armando Garcia Sancio


When a snapshot is successfully fetched from the leader, the follower or observer 
should update the replicated log's start and end offsets.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10821) Send cluster id information with the FetchSnapshot request

2020-12-07 Thread Jose Armando Garcia Sancio (Jira)
Jose Armando Garcia Sancio created KAFKA-10821:
--

 Summary: Send cluster id information with the FetchSnapshot request
 Key: KAFKA-10821
 URL: https://issues.apache.org/jira/browse/KAFKA-10821
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jose Armando Garcia Sancio
Assignee: Jose Armando Garcia Sancio






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #291

2020-12-07 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-10822) Force some stdout from downgrade_test.py for Travis

2020-12-07 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-10822:
--

 Summary: Force some stdout from downgrade_test.py for Travis
 Key: KAFKA-10822
 URL: https://issues.apache.org/jira/browse/KAFKA-10822
 Project: Kafka
  Issue Type: Sub-task
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


downgrade_test.py performs an upgrade and a downgrade for each test. In Travis, 
each of these takes 10+ minutes, so we should print something in order to avoid 
a timeout from Travis.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #292

2020-12-07 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Remove redundant default parameter values in call to 
LogSegment.open (#9710)


--
[...truncated 3.48 MB...]

[jira] [Created] (KAFKA-10823) AdminOperationException has no error code

2020-12-07 Thread Lincong Li (Jira)
Lincong Li created KAFKA-10823:
--

 Summary: AdminOperationException has no error code
 Key: KAFKA-10823
 URL: https://issues.apache.org/jira/browse/KAFKA-10823
 Project: Kafka
  Issue Type: Bug
Reporter: Lincong Li


AdminOperationException is a kind of RuntimeException, and the fact that it does 
not carry an error code prevents callers from handling it properly.

For example, AdminOperationException can be thrown when the 
AdminZkClient.changeTopicConfig(...) method is invoked and the topic for which 
the config change is requested does not exist. The caller of 
AdminZkClient.changeTopicConfig(...) can choose to catch the exception, but it 
cannot programmatically determine its exact cause, so nothing can be done 
safely.
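One way to see the problem: an exception type that carries a machine-readable code lets callers branch on the cause, which a code-less RuntimeException cannot offer. The class and enum below are illustrative only, not part of Kafka:

```java
// Hypothetical exception carrying an error code, contrasted with the
// code-less AdminOperationException described in this ticket.
public class CodedAdminException extends RuntimeException {
    public enum Code { UNKNOWN_TOPIC, INVALID_CONFIG }

    private final Code code;

    public CodedAdminException(Code code, String message) {
        super(message);
        this.code = code;
    }

    public Code code() { return code; }

    public static void main(String[] args) {
        try {
            throw new CodedAdminException(Code.UNKNOWN_TOPIC,
                    "topic 'foo' does not exist");
        } catch (CodedAdminException e) {
            // With a code, the caller can branch on the cause programmatically
            // instead of parsing the free-form message string.
            if (e.code() == CodedAdminException.Code.UNKNOWN_TOPIC) {
                System.out.println("handling unknown topic");
            }
        }
    }
}
```

Without such a code, the only option is matching on the exception's message text, which is fragile and unsafe to rely on.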



--
This message was sent by Atlassian Jira
(v8.3.4#803005)