[jira] [Created] (KAFKA-10296) Connector task reported RUNNING after hard bounce of worker

2020-07-19 Thread Greg Harris (Jira)
Greg Harris created KAFKA-10296:
---

 Summary: Connector task reported RUNNING after hard bounce of 
worker
 Key: KAFKA-10296
 URL: https://issues.apache.org/jira/browse/KAFKA-10296
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.4.1, 2.5.0, 2.3.1
Reporter: Greg Harris


While fixing flakiness for ConnectDistributedTest.test_bounce in KAFKA-10295, I 
observed that the status of connectors on zombie/offline workers was 
inconsistent during Incremental Cooperative Rebalancing's 
scheduled.rebalance.max.delay.ms delay. This is the reproduction case observed there:
 # A task is running on worker A, with another worker B in the same distributed 
cluster
 # Observe that on worker B's REST API, the task is initially correctly 
"RUNNING"
 # Worker A is hard-stopped, and goes offline
 # Observe that on worker B's REST API, the task is still "RUNNING"
 # The group rebalances without worker A in the group, and begins the delay
 # Observe that on worker B's REST API, the task is still "RUNNING"
 # Worker A recovers and joins the group, before the delay expires
 # Observe that on worker B's REST API, the task is still "RUNNING"
 # The rebalance delay expires, and the task is assigned and started
 # Observe that on worker B's REST API, the task is now correctly "RUNNING"

 * In the first state (4), after the worker goes offline but before the other 
workers learn that it has gone offline, it is acceptable that the task is 
still reported as running. We can't expect the other workers to know that 
worker A has gone offline until the group membership protocol informs them.
 * In the second state (6), when a rebalance occurs and the worker is first 
known to be unhealthy, the state of the task is ambiguous: it may be down 
completely, or still running on a zombie worker. I'm not sure how best to capture 
this state under the existing enum's options, but it's probably closest to 
"UNASSIGNED", since the leader doesn't think that any worker is currently 
running the task.
 * In the third state (8), when the bounced worker returns, the task is 
reported RUNNING on a worker which does not have the task assigned. This is the 
most inaccurate state reported, since the cluster has reached consensus, and 
yet the REST API still reports the wrong state of the task.

State (6) could be given a new state "UNKNOWN", introduced by a KIP. 
However, that is a large investment in process and time for what amounts to an 
almost cosmetic change, so we could either leave this state as "RUNNING" or 
change it to "UNASSIGNED".
State (8) could be described by "UNASSIGNED", and would be a tangible 
improvement for test_bounce, which currently needs to intentionally ignore the 
REST API's result here because it is inaccurate.
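The status mapping proposed above can be sketched as a small decision function. This is a hypothetical illustration of the proposal (the function name and parameters are invented for clarity), not Connect's actual implementation:

```python
def proposed_task_status(leader_knows_worker_offline: bool,
                         task_reassigned_and_started: bool) -> str:
    """Hypothetical status a worker's REST API would report for a task
    whose original worker was hard-bounced, per the discussion above."""
    if not leader_knows_worker_offline:
        # State (4): the group membership protocol has not yet informed
        # the other workers, so reporting RUNNING is acceptable.
        return "RUNNING"
    if not task_reassigned_and_started:
        # States (6) and (8): the leader does not believe any worker is
        # running the task; UNASSIGNED is the closest existing enum value.
        return "UNASSIGNED"
    # Final state: the task has been reassigned and started again.
    return "RUNNING"
```

Under this mapping, states (6) and (8) both become "UNASSIGNED", which matches what the leader actually believes about the task's placement.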



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10295) ConnectDistributedTest.test_bounce should wait for graceful stop

2020-07-19 Thread Greg Harris (Jira)
Greg Harris created KAFKA-10295:
---

 Summary: ConnectDistributedTest.test_bounce should wait for 
graceful stop
 Key: KAFKA-10295
 URL: https://issues.apache.org/jira/browse/KAFKA-10295
 Project: Kafka
  Issue Type: Test
  Components: KafkaConnect
Affects Versions: 2.4.1, 2.5.0, 2.3.1, 2.6.0
Reporter: Greg Harris
Assignee: Greg Harris


In ConnectDistributedTest.test_bounce, there are flaky failures that appear to 
follow this pattern:
 # The test is parameterized for hard bounces, and with Incremental Cooperative 
Rebalancing enabled (does not appear for protocol=eager)
 # A source task is on a worker that will experience a hard bounce
 # The source task has written records which it has not yet committed in source 
offsets
 # The worker is hard-bounced, and the source task is lost
 # Incremental Cooperative Rebalancing starts its 
scheduled.rebalance.max.delay.ms delay before recovering the task
 # The test ends, connectors and Connect are stopped
 # The test verifies that the sink connector has only written records that have 
been committed by the source connector
 # This verification fails because the source offsets are stale: there are 
uncommitted records in the topic, and the sink connector has written at least 
one of them.

This can be addressed by ensuring that the test waits for the rebalance delay 
to expire, and for the lost task to recover and commit offsets past the 
progress it made before the bounce.
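A minimal sketch of that wait, in the style of the polling helpers Kafka's system tests use (the helper and the usage are illustrative assumptions, not the actual ducktape API):

```python
import time

def wait_until(condition, timeout_sec: float, poll_sec: float = 0.5) -> bool:
    """Poll `condition` until it returns True or `timeout_sec` elapses.
    A sketch of the polling style used by Kafka's system tests."""
    deadline = time.time() + timeout_sec
    while time.time() < deadline:
        if condition():
            return True
        time.sleep(poll_sec)
    return False

# Hypothetical usage in test_bounce: after the final bounce, wait for the
# rebalance delay to expire and for committed source offsets to pass the
# pre-bounce progress before stopping the connectors:
# wait_until(lambda: committed_offset() >= progress_before_bounce,
#            timeout_sec=rebalance_delay_sec + 60)
```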



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10294) Consider whether some of ProcessorStateException should be auto-handled by Streams

2020-07-19 Thread Guozhang Wang (Jira)
Guozhang Wang created KAFKA-10294:
-

 Summary: Consider whether some of ProcessorStateException should 
be auto-handled by Streams
 Key: KAFKA-10294
 URL: https://issues.apache.org/jira/browse/KAFKA-10294
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Guozhang Wang
Assignee: Bruno Cadonna


Currently, when there's an error initializing a state store (e.g. 
RocksDB.open throws an exception), or when writing checkpoints throws an IOException, 
it causes a ProcessorStateException, which is a sub-class of the more 
general StreamsException. This is considered a fatal error and causes 
Streams to stop. 

While the latter case is arguably a valid exception to throw to users to 
handle, for the first case, and some others, we could potentially let Streams 
handle the failure itself, e.g. by wiping out the whole store and then retrying to 
initialize it, bootstrapping from offset 0. This is 
worth some discussion: which types of state-store-management-related 
exceptions should be handled automatically by Streams, and which should 
still be thrown to users?
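The wipe-and-retry recovery being proposed could look roughly like the following. This is a language-agnostic sketch of the pattern (the names and exception class are stand-ins), not actual Kafka Streams code:

```python
class ProcessorStateError(Exception):
    """Stand-in for Streams' ProcessorStateException in this sketch."""

def open_store_with_recovery(open_store, wipe_store, max_attempts: int = 2):
    """Hypothetical auto-recovery: if opening the state store fails,
    wipe the local store and retry, so the store is re-bootstrapped
    from offset 0 via changelog replay."""
    for attempt in range(max_attempts):
        try:
            return open_store()
        except ProcessorStateError:
            if attempt == max_attempts - 1:
                raise  # still failing after a wipe: surface to the user
            # Discard the (possibly corrupt) local state; replaying the
            # changelog from offset 0 will rebuild it.
            wipe_store()
```

The open question above is then which exception types belong inside the `except` clause (auto-handled) and which should propagate immediately.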



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-595: A Raft Protocol for the Metadata Quorum

2020-07-19 Thread Guozhang Wang
Hello Tom,

Thanks for reviewing the doc! Some replies inlined.

On Tue, Jul 14, 2020 at 4:11 AM Tom Bentley  wrote:

> Hi Jason, Guozhang and Boyang,
>
> First, thanks for the KIP. I've got a few minor suggestions and nits, but
> on the whole it was pretty clear and understandable.
>
> 1. § Motivation
>
> "We have intentionally avoided any assumption about the representation
> of the log and its semantics."
>
> I found this a bit confusing because there's a whole section about the
> assumed log structure later on. I think something like:
>
> "We make no assumption about the physical log format and minimal
> assumptions about its logical structure (see §Log Structure)."
>
> might make things a little clearer, though that suggestion does omit any
> claim about semantics.
>
> Furthermore, in the "Log Structure" section it ends with "Kafka's current
> v2 message format version supports everything we need, so we will assume
> that."
> All you need to say here is that "the v2 message format has the
> assumed/required properties", saying you're assuming the v2 message format
> negates the rest of the paragraph.
>
>
Sounds good, will update.


>
> 2. § Configurations
>
> The format for voters in the `quorum.voters` is potentially a little
> confusing because the separation of the broker id from the broker host name
> with a dot means the whole thing is syntactically indistinguishable from a
> host name. Using a different separator would avoid such ambiguity.
>
>
Hmm that's a good question given a popular host name would be "127.0.0.1".
Maybe we should just use colon as well, like "1:kafka-1:9092" and "2:
127.0.0.1:9093".
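Parsing the colon-separated form stays unambiguous because the broker id and port bracket the host, which may itself contain dots. A small sketch of such a parser (the entry format `id:host:port` is the suggestion above, not a finalized spec):

```python
def parse_voter(entry: str):
    """Parse a hypothetical `quorum.voters` entry of the form
    `id:host:port`, e.g. "2:127.0.0.1:9093". Splitting on the first
    and last colon keeps id, host, and port unambiguous even when the
    host contains dots."""
    broker_id, rest = entry.split(":", 1)
    host, port = rest.rsplit(":", 1)
    return int(broker_id), host, int(port)
```

Note that bracketed IPv6 literals would need extra handling, since they contain colons of their own.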


>
> 3. § Leader Election and Data Replication
>
> "The key functionalities of any consensus protocol are leader election
> and data replication. The protocol for these two functionalities consists
> of 5 core RPCs:"
>
> This sentence is followed by a list of 4 (rather than 5) items. I don't
> think you meant to imply DescribeQuorum is a core RPC.
>
Yes, nice catch. We moved the DescribeQuorum out of core but forgot to
update the other text. Will update.


>
> 4. § BeginQuorumEpoch
>
> "observers will discover the new leader through either
> the DiscoverBrokers or Fetch APIs"
>
> DiscoverBrokers isn't defined in this KIP (I think it was when I read an
> earlier version), and I don't recall seeing another KIP which describes it.
>
>
Yes, this text also needs to be updated: we originally have a proposal of
DiscoverBrokers but decided to replace it with the log replication
mechanism, I will update it.


>
> 5. § EndQuorumEpoch
>
> "If the node's priority is highest, it will become candidate
> immediately instead of waiting for next poll."
>
> This is clarified in the later section on EndquorumEpoch request handling,
> but I found the wording " instead of waiting for next poll" to be
> confusing. Maybe "instead of waiting for next election timeout" would be
> clearer?
>
Ack.


>
> 6. § Fetch
>
> The diagram from the Raft dissertation is a broken link (I'm guessing for
> anyone lacking access to https://confluentinc.atlassian.net).
>
>
Our bad, I will update the link.


>
> 7. § Fetch
>
> "This proposal also extends the Fetch response to include a separate
> field for the receiver of the request to indicate the current leader."
>
> I think "receiver of the request" should be "receiver of the response",
> shouldn't it?
>
>
It is indeed for the receiver of the request. The point is that if
a request is sent to a non-leader, then in addition to returning an error code
the response also encodes the current leader the receiver knows of (if it
knows one, of course), so that the fetcher does not need to send another round
of metadata requests to discover it.
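The fetcher-side handling described above can be sketched as follows. The field names and error string are illustrative assumptions, not the KIP's exact schema:

```python
def next_fetch_target(response: dict, current_target: int) -> int:
    """Hypothetical fetcher logic: if a fetch sent to a non-leader comes
    back with an error plus an encoded CurrentLeader hint, retry against
    that leader directly instead of issuing a metadata request first."""
    if response.get("error") == "NOT_LEADER_FOR_PARTITION":
        leader = response.get("current_leader")
        if leader is not None:
            return leader         # follow the hint from the response
    return current_target         # no error, or no hint available
```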


>
> 8. § Fetch § Response Schema
>
> The `NextOffsetAndEpoch` and `CurrentLeader` field in the fetch response
> lack `about` properties.
>
Ack, will update.


>
> 9. § FetchRequest Handling
>
> Item 1.a. is covering only the case where the fetcher is a voter (thus in
> receipt of BeginQuorumEpoch), it doesn't cover observer fetchers.
>
> Also this algorithm talks about "the leader epoch" being larger or smaller,
> but it's not immediately clear whether this is the leader's epoch or the
> leader epoch in the fetch request and which is the epoch being compared
> against. It makes the comparisons harder to understand if you have to guess
> the order of the operands.
>
>
For observers, they will also eventually learn about the leader epoch via
the metadata request; the comparison is between the epoch of the leader
that received the request, and the encoded "CurrentLeaderEpoch" in the
request. I will update the doc.


>
> 10. § Discussion: Replication Progress Timeout for Zombie Leader
>
> "Note that the node will remain a leader until it finds that it has
> been supplanted by another voter."
>
> I don't quite follow this: Why is a _voter_ involved here? Surely it's
> simply a matter of the leader observing that a new leader exists whi

Re: [DISCUSS] KIP-595: A Raft Protocol for the Metadata Quorum

2020-07-19 Thread Guozhang Wang
Hello Jun,

Thanks for your comments, answering some of them inlined below.

On Thu, Jul 16, 2020 at 3:44 PM Jun Rao  wrote:

> Hi, Jason,
>
> Thanks for the updated KIP. Looks good overall. A few more comments below.
>
> 101. I still don't see a section on bootstrapping related issues. It would
> be useful to document if/how the following is supported.
> 101.1 Currently, we support auto broker id generation. Is this supported
> for bootstrap brokers?
> 101.2 As Colin mentioned, sometimes we may need to load the security
> credentials to be broker before it can be connected to. Could you provide a
> bit more detail on how this will work?
> 101.3 Currently, we use ZK to generate clusterId on a new cluster. With
> Raft, how does every broker generate the same clusterId in a distributed
> way?
>
> 200. It would be useful to document if the various special offsets (log
> start offset, recovery point, HWM, etc) for the Raft log are stored in the
> same existing checkpoint files or not.
> 200.1 Since the Raft log flushes every append, does that allow us to
> recover from a recovery point within the active segment or do we still need
> to scan the full segment including the recovery point? The former can be
> tricky since multiple records can fall into the same disk page and a
> subsequent flush may corrupt a page with previously flushed records.
>

I think we would still document the special offsets for the Raft log in the
existing checkpoint files. I will update the KIP.

I have not thought about optimizing our existing recovery process at the
moment; Raft log flushing on every append may open the door for some
optimization, but on the other hand we are also considering ways to
defer the flush-on-every-append as future work, as suggested in
the KIP docs. So I'd say at the moment we will just keep the recovery logic
as is.


>
> 201. Configurations.
> 201.1 How do the Raft brokers get security related configs for inter broker
> communication? Is that based on the existing
> inter.broker.security.protocol?
>

We have a separate KIP proposal to address broker bootstrapping (which also
includes broker reconfiguration) issues, and I believe Jason
will publish it soon. The main idea is that for security types we would still
set them via "security.inter.broker.protocol" and "inter.broker.listener.name",
but the configs would not be allowed to be altered while the broker is
offline. And for the case when a broker is started for the first time,
users would need to set them in "meta.properties" if necessary.


> 201.2 We have quorum.retry.backoff.max.ms and quorum.retry.backoff.ms, but
> only quorum.election.backoff.max.ms. This seems a bit inconsistent.
>
>
The current implementation does not use `quorum.retry.backoff.max.ms`,
since we just use a static backoff at the connection layer relying
on `quorum.retry.backoff.ms`. Personally I think it is okay to remove the
former config at the moment since we do not have a strong motivation to use
exponential backoffs.

The election procedure indeed uses a binary exponential backoff mechanism,
but I think we do not need to have a separate `
quorum.election.backoff.base.ms` or something like that -- currently the
base ms is hardcoded -- but I'm open to other thoughts if we believe making
it configurable is also important.
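A binary exponential election backoff of the kind described could be sketched like this. The base and cap values are placeholders (the base is hardcoded in the implementation, per the note above):

```python
import random

def election_backoff_ms(retries: int,
                        base_ms: int = 50,
                        max_ms: int = 1000) -> int:
    """Hypothetical binary exponential backoff for repeated election
    attempts: the upper bound doubles with each failed attempt, capped
    at the configured max, with jitter so that candidates that keep
    colliding are unlikely to retry in lockstep and split the vote again."""
    bound = min(max_ms, base_ms * (2 ** retries))
    return random.randint(0, bound)
```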


> 202. Metrics:
> 202.1 TotalTimeMs, InboundQueueTimeMs, HandleTimeMs, OutboundQueueTimeMs:
> Are those the same as existing totalTime, requestQueueTime, localTime,
> responseQueueTime? Could we reuse the existing ones with the tag
> request=[request-type]?
>

Yes we can, I'm going to remove those metrics in the KIP.


> 202.2. Could you explain what InboundChannelSize and OutboundChannelSize
> are?
>

Our current implementation uses another channel between the handler and the
RaftClient: upon receiving a request, the handler thread does not do
anything but put it into a queue which is then polled by the
RaftClient, and similarly the generated response is handed back to
the handler threads via a queue as well. These metrics measure
the sizes of those queues.


> 202.3 ElectionLatencyMax/Avg: It seems that both should be windowed?
>
>
You're right, they are implemented as windowed sum / avg, and so are "
*ReplicationLatencyMax/Avg*" actually. I will update the KIP.


> 203. Quorum State: I assume that LeaderId will be kept consistently with
> LeaderEpoch. For example, if a follower transitions to candidate and bumps
> up LeaderEpoch, it will set leaderId to -1 and persist both in the Quorum
> state file. Is that correct?
>
> 204. I was thinking about a corner case when a Raft broker is partitioned
> off. This broker will then be in a continuous loop of bumping up the leader
> epoch, but failing to get enough votes. When the partitioning is removed,
> this broker's high leader epoch will force a leader election. I assume
> other Raft brokers can immediately advance their leader epoch passing the
> already bumped e

Re: [DISCUSS] KIP-431: Support of printing additional ConsumerRecord fields in DefaultMessageFormatter

2020-07-19 Thread Badai Aqrandista
Hi all

I have made a small change to KIP-431 to make it clearer which one is
"Partition" and "Offset". Also I have moved key field to the back,
before the value:

$ kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic
test --from-beginning --property print.partition=true --property
print.key=true --property print.timestamp=true --property
print.offset=true --property print.headers=true --property
key.separator='|'

CreateTime:1592475472398|Partition:0|Offset:3|h1:v1,h2:v2|key1|value1
CreateTime:1592475472456|Partition:0|Offset:4|NO_HEADERS|key2|value2
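Based on the two example lines above, the proposed field ordering could be sketched as a formatter like this. It is an illustration of the ordering only (names and signature are invented), not the DefaultMessageFormatter implementation:

```python
def format_record(timestamp, partition, offset, headers, key, value,
                  separator="|"):
    """Sketch of the proposed output ordering: timestamp, partition,
    offset, headers, key, value, joined by the configured separator."""
    header_str = (",".join(f"{k}:{v}" for k, v in headers)
                  if headers else "NO_HEADERS")
    fields = [f"CreateTime:{timestamp}", f"Partition:{partition}",
              f"Offset:{offset}", header_str, key, value]
    return separator.join(fields)
```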

Regards
Badai

On Sun, Jun 21, 2020 at 11:39 PM Badai Aqrandista  wrote:
>
> Excellent.
>
> Would like to hear more feedback from others.
>
> On Sat, Jun 20, 2020 at 1:27 AM David Jacot  wrote:
> >
> > Hi Badai,
> >
> > Thanks for your reply.
> >
> > 2. Yes, that makes sense.
> >
> > Best,
> > David
> >
> > On Thu, Jun 18, 2020 at 2:08 PM Badai Aqrandista  wrote:
> >
> > > David
> > >
> > > Thank you for replying
> > >
> > > 1. It seems that `print.partition` is already implemented. Do you confirm?
> > > BADAI: Yes, you are correct. I have removed it from the KIP.
> > >
> > > 2. Will `null.literal` be only used when the value of the message
> > > is NULL or for any fields? Also, it seems that we print out "null"
> > > today when the key or the value is empty. Shall we use "null" as
> > > a default instead of ""?
> > > BADAI: For any fields. Do you think this is useful?
> > >
> > > 3. Could we add a small example of the output in the KIP?
> > > BADAI: Yes, I have updated the KIP to add a couple of examples.
> > >
> > > 4. When there are no headers, are we going to print something
> > > to indicate it to the user? For instance, we print out NO_TIMESTAMP
> > > where there is no timestamp.
> > > BADAI: Yes, good idea. I have updated the KIP to print NO_HEADERS.
> > >
> > > Thanks
> > > Badai
> > >
> > >
> > > On Thu, Jun 18, 2020 at 7:25 PM David Jacot  wrote:
> > > >
> > > > Hi Badai,
> > > >
> > > > Thanks for resuming this. I have few small comments:
> > > >
> > > > 1. It seems that `print.partition` is already implemented. Do you
> > > confirm?
> > > >
> > > > 2. Will `null.literal` be only used when the value of the message
> > > > is NULL or for any fields? Also, it seems that we print out "null"
> > > > today when the key or the value is empty. Shall we use "null" as
> > > > a default instead of ""?
> > > >
> > > > 3. Could we add a small example of the output in the KIP?
> > > >
> > > > 4. When there are no headers, are we going to print something
> > > > to indicate it to the user? For instance, we print out NO_TIMESTAMP
> > > > where there is no timestamp.
> > > >
> > > > Best,
> > > > David
> > > >
> > > > On Wed, Jun 17, 2020 at 4:53 PM Badai Aqrandista 
> > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I have contacted Mateusz separately and he is ok for me to take over
> > > > > KIP-431:
> > > > >
> > > > >
> > > > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-431%3A+Support+of+printing+additional+ConsumerRecord+fields+in+DefaultMessageFormatter
> > > > >
> > > > > I have updated it a bit. Can anyone give a quick look at it again and
> > > > > give me some feedback?
> > > > >
> > > > > This feature will be very helpful for people supporting Kafka in
> > > > > operations.
> > > > >
> > > > > If it is ready for a vote, please let me know.
> > > > >
> > > > > Thanks
> > > > > Badai
> > > > >
> > > > > On Sat, Jun 13, 2020 at 10:59 PM Badai Aqrandista 
> > > > > wrote:
> > > > > >
> > > > > > Mateusz
> > > > > >
> > > > > > This KIP would be very useful for debugging. But the last discussion
> > > > > > is in Feb 2019.
> > > > > >
> > > > > > Are you ok if I take over this KIP?
> > > > > >
> > > > > > --
> > > > > > Thanks,
> > > > > > Badai
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Thanks,
> > > > > Badai
> > > > >
> > >
> > >
> > >
> > > --
> > > Thanks,
> > > Badai
> > >
>
>
>
> --
> Thanks,
> Badai



-- 
Thanks,
Badai


Re: [VOTE] KIP-614: Add Prefix Scan support for State Stores

2020-07-19 Thread Adam Bellemare
LGTM
+1 non-binding

On Sun, Jul 19, 2020 at 4:13 AM Sagar  wrote:

> Hi All,
>
> Bumping this thread to see if there are any feedbacks.
>
> Thanks!
> Sagar.
>
> On Tue, Jul 14, 2020 at 9:49 AM John Roesler  wrote:
>
> > Thanks for the KIP, Sagar!
> >
> > I’m +1 (binding)
> >
> > -John
> >
> > On Sun, Jul 12, 2020, at 02:05, Sagar wrote:
> > > Hi All,
> > >
> > > I would like to start a new voting thread for the below KIP to add
> prefix
> > > scan support to state stores:
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 614%3A+Add+Prefix+Scan+support+for+State+Stores
> > > <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-614%3A+Add+Prefix+Scan+support+for+State+Stores
> > >
> > >
> > > Thanks!
> > > Sagar.
> > >
> >
>


Jenkins build is back to normal : kafka-trunk-jdk14 #300

2020-07-19 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk11 #1651

2020-07-19 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #4724

2020-07-19 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fixed some resource leaks. (#8922)


--
[...truncated 6.35 MB...]
org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimes

Re: Become a contributor - Jira username

2020-07-19 Thread Mickael Maison
Hi Pavel,

I've added you to the contributors list. Welcome to the Kafka community!

On Sun, Jul 19, 2020 at 4:57 PM Pavel Solomin  wrote:
>
> Hello!
>
> I would like to start contributing to Kafka.
> https://kafka.apache.org/contributing says I need to send my Jira username
> to be able to assign myself to tickets.
>
> My Jira username at https://issues.apache.org/jira is - *solomin*
>
> Thank you in advance.
> Pavel
>
> Best Regards,
> Pavel Solomin
>
> Tel: +351 962 950 692 | Skype: pavel_solomin | Linkedin
> 


Become a contributor - Jira username

2020-07-19 Thread Pavel Solomin
Hello!

I would like to start contributing to Kafka.
https://kafka.apache.org/contributing says I need to send my Jira username
to be able to assign myself to tickets.

My Jira username at https://issues.apache.org/jira is - *solomin*

Thank you in advance.
Pavel

Best Regards,
Pavel Solomin

Tel: +351 962 950 692 | Skype: pavel_solomin | Linkedin



Re: [VOTE] 2.6.0 RC0

2020-07-19 Thread Randall Hauch
Thanks, Rajini. Is there a Jira issue for the fix related to KIP-546? If
so, please make sure the Fix Version(s) include `2.6.0`.

I'm going to start RC1 later today and hope to get it published by Monday.
In the meantime, if anyone finds anything else in RC0, please raise it here
-- if it's after RC1 is published then we'll just cut another RC with any
fixes.

We're down to just 5 system test failures [1], and folks are actively
working to address them. At least some are known to be flaky, but we still
want to get them fixed.

Best regards,

Randall

On Sun, Jul 19, 2020 at 5:45 AM Rajini Sivaram 
wrote:

> Hi Randall,
>
> Ron found an issue with the quota implementation added under KIP-546, which
> is a blocking issue for 2.6.0 since it leaks SCRAM credentials in quota
> responses. A fix has been merged into 2.6 branch in the commit
>
> https://github.com/apache/kafka/commit/dd71437de7675d92ad3e4ed01ac3ee11bf5da99d
> .
> We
> have also merged the fix for
> https://issues.apache.org/jira/browse/KAFKA-10223 into 2.6 branch since it
> causes issues for non-Java clients during reassignments.
>
> Regards,
>
> Rajini
>
>
> On Wed, Jul 15, 2020 at 11:41 PM Randall Hauch  wrote:
>
> > Thanks, Levani.
> >
> > The content of
> >
> >
> https://home.apache.org/~rhauch/kafka-2.6.0-rc0/kafka_2.12-2.6.0-site-docs.tgz
> is the correct generated site. Somehow I messed up copying that to the
> > https://github.com/apache/kafka-site/tree/asf-site/26 directory. I've
> > corrected the latter so that https://kafka.apache.org/26/documentation/
> > now
> > exactly matches that documentation in RC0.
> >
> > Best regards,
> >
> > Randall
> >
> > On Wed, Jul 15, 2020 at 1:25 AM Levani Kokhreidze <
> levani.co...@gmail.com>
> > wrote:
> >
> > > Hi Randall,
> > >
> > > Not sure if it’s intentional but, documentation for Kafka Streams 2.6.0
> > > also contains “Streams API changes in 2.7.0”
> > > https://kafka.apache.org/26/documentation/streams/upgrade-guide <
> > > https://kafka.apache.org/26/documentation/streams/upgrade-guide>
> > >
> > > Also, there seems to be some formatting issue in 2.6.0 section.
> > >
> > > Levani
> > >
> > >
> > > > On Jul 15, 2020, at 1:48 AM, Randall Hauch  wrote:
> > > >
> > > > Thanks for catching that, Gary. Apologies to all for announcing this
> > > before
> > > > pushing the docs, but that's fixed and the following links are
> working
> > > > (along with the others in my email):
> > > >
> > > > * https://kafka.apache.org/26/documentation.html
> > > > * https://kafka.apache.org/26/protocol.html
> > > >
> > > > Randall
> > > >
> > > > On Tue, Jul 14, 2020 at 4:30 PM Gary Russell 
> > > wrote:
> > > >
> > > >> Docs link [1] is broken.
> > > >>
> > > >> [1] https://kafka.apache.org/26/documentation.html
> > > >>
> > > >>
> > >
> > >
> >
>


Re: [VOTE] 2.6.0 RC0

2020-07-19 Thread Rajini Sivaram
Hi Randall,

Ron found an issue with the quota implementation added under KIP-546, which
is a blocking issue for 2.6.0 since it leaks SCRAM credentials in quota
responses. A fix has been merged into 2.6 branch in the commit
https://github.com/apache/kafka/commit/dd71437de7675d92ad3e4ed01ac3ee11bf5da99d.
We
have also merged the fix for
https://issues.apache.org/jira/browse/KAFKA-10223 into 2.6 branch since it
causes issues for non-Java clients during reassignments.

Regards,

Rajini


On Wed, Jul 15, 2020 at 11:41 PM Randall Hauch  wrote:

> Thanks, Levani.
>
> The content of
>
> https://home.apache.org/~rhauch/kafka-2.6.0-rc0/kafka_2.12-2.6.0-site-docs.tgz
> is the correct generated site. Somehow I messed up copying that to the
> https://github.com/apache/kafka-site/tree/asf-site/26 directory. I've
> corrected the latter so that https://kafka.apache.org/26/documentation/
> now
> exactly matches that documentation in RC0.
>
> Best regards,
>
> Randall
>
> On Wed, Jul 15, 2020 at 1:25 AM Levani Kokhreidze 
> wrote:
>
> > Hi Randall,
> >
> > Not sure if it’s intentional but, documentation for Kafka Streams 2.6.0
> > also contains “Streams API changes in 2.7.0”
> > https://kafka.apache.org/26/documentation/streams/upgrade-guide <
> > https://kafka.apache.org/26/documentation/streams/upgrade-guide>
> >
> > Also, there seems to be some formatting issue in 2.6.0 section.
> >
> > Levani
> >
> >
> > > On Jul 15, 2020, at 1:48 AM, Randall Hauch  wrote:
> > >
> > > Thanks for catching that, Gary. Apologies to all for announcing this
> > before
> > > pushing the docs, but that's fixed and the following links are working
> > > (along with the others in my email):
> > >
> > > * https://kafka.apache.org/26/documentation.html
> > > * https://kafka.apache.org/26/protocol.html
> > >
> > > Randall
> > >
> > > On Tue, Jul 14, 2020 at 4:30 PM Gary Russell 
> > wrote:
> > >
> > >> Docs link [1] is broken.
> > >>
> > >> [1] https://kafka.apache.org/26/documentation.html
> > >>
> > >>
> >
> >
>


[jira] [Resolved] (KAFKA-10223) ReplicaNotAvailableException must be retriable to handle reassignments

2020-07-19 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-10223.

Fix Version/s: (was: 2.7.0)
   2.6.0
   Resolution: Fixed

> ReplicaNotAvailableException must be retriable to handle reassignments
> --
>
> Key: KAFKA-10223
> URL: https://issues.apache.org/jira/browse/KAFKA-10223
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 2.6.0
>
>
> ReplicaNotAvailableException should be a retriable `InvalidMetadataException` 
> since consumers may throw this during reassignments.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-614: Add Prefix Scan support for State Stores

2020-07-19 Thread Sagar
Hi All,

Bumping this thread to see if there are any feedbacks.

Thanks!
Sagar.

On Tue, Jul 14, 2020 at 9:49 AM John Roesler  wrote:

> Thanks for the KIP, Sagar!
>
> I’m +1 (binding)
>
> -John
>
> On Sun, Jul 12, 2020, at 02:05, Sagar wrote:
> > Hi All,
> >
> > I would like to start a new voting thread for the below KIP to add prefix
> > scan support to state stores:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 614%3A+Add+Prefix+Scan+support+for+State+Stores
> > <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-614%3A+Add+Prefix+Scan+support+for+State+Stores
> >
> >
> > Thanks!
> > Sagar.
> >
>