Ah thanks. I did not check the wiki but just browsed through all open
KIP discussions following the email threads.
-Matthias
On 9/30/18 12:06 PM, Dong Lin wrote:
Hey Matthias,
Thanks for checking back on the status. The KIP has been marked as
replaced by KIP-320 in the KIP list wiki page and the status has been
updated in the discussion and voting email thread.
Thanks,
Dong
On Sun, 30 Sep 2018 at 11:51 AM, Matthias J. Sax wrote:
It seems that KIP-320 was accepted. Thus, I am wondering what the status
of this KIP is.
-Matthias
On 7/11/18 10:59 AM, Dong Lin wrote:
For the record, this KIP is closed as its design has been merged into
KIP-320. See
https://cwiki.apache.org/confluence/display/KAFKA/KIP-320%3A+Allow+fetchers+to+detect+and+handle+log+truncation
Hey Jun,
Certainly. We can discuss later after KIP-320 settles.
Thanks!
Dong
On Wed, Jul 11, 2018 at 8:54 AM, Jun Rao wrote:
Hi, Dong,
Sorry for the late response. Since KIP-320 is covering some of the similar
problems described in this KIP, perhaps we can wait until KIP-320 settles
and see what's still left uncovered in this KIP.
Thanks,
Jun
On Mon, Jun 4, 2018 at 7:03 PM, Dong Lin wrote:
Hey Jun,
It seems that we have made considerable progress on the discussion of
KIP-253 since February. Do you think we should continue the discussion
there, or can we continue the voting for this KIP? I am happy to submit the
PR and move forward the progress for this KIP.
Thanks!
Dong
Hey Jun,
Sure, I will come up with a KIP this week. I think there is a way to allow
partition expansion to an arbitrary number without introducing new concepts
such as read-only partitions or repartition epochs.
Thanks,
Dong
On Wed, Feb 7, 2018 at 5:28 PM, Jun Rao wrote:
Hi, Dong,
Thanks for the reply. The general idea that you had for adding partitions
is similar to what we had in mind. It would be useful to make this more
general, allowing adding an arbitrary number of partitions (instead of just
doubling) and potentially removing partitions as well. The followi
Hi Becket,
I would argue that using IDs for partitions is not a performance improvement,
but actually a completely different way of accomplishing what this KIP is
trying to solve. If you give partitions globally unique IDs, and use a
different ID when re-creating a topic partition, you don't n
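A minimal sketch of the ID-based scheme described above (hypothetical names; a toy registry standing in for the controller's ID assignment, not actual Kafka code):

```python
import itertools

# Illustrative sketch: assign a fresh globally unique ID every time a topic
# partition is (re-)created, so a client holding an old ID can detect that
# the partition it knew was deleted and re-created.
class PartitionRegistry:
    def __init__(self):
        self._next_id = itertools.count(1)
        self._ids = {}  # (topic, partition) -> current unique ID

    def create(self, topic, partition):
        # Re-creating the same (topic, partition) yields a *different* ID.
        pid = next(self._next_id)
        self._ids[(topic, partition)] = pid
        return pid

    def current_id(self, topic, partition):
        return self._ids.get((topic, partition))

registry = PartitionRegistry()
old_id = registry.create("t", 0)  # topic created
registry.create("t", 0)           # topic deleted and re-created
# A client still holding old_id can now tell the partition was re-created:
stale = registry.current_id("t", 0) != old_id
```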
Hey Jun,
Interestingly I am also planning to sketch a KIP to allow partition
expansion for keyed topics after this KIP. Since you are already doing
that, I guess I will just share my high level idea here in case it is
helpful.
The motivation for the KIP is that we currently lose order guarantee f
Hi, Dong,
Thanks for the KIP. It looks good overall. We are working on a separate KIP
for adding partitions while preserving the ordering guarantees. That may
require another flavor of partition epoch. It's not very clear whether that
partition epoch can be merged with the partition epoch in this
+1 on the KIP.
I think the KIP is mainly about adding the capability of tracking the
system state change lineage. It does not seem necessary to bundle this KIP
with replacing the topic partition with partition epoch in produce/fetch.
Replacing topic-partition string with partition epoch is essenti
Hey Colin,
On Mon, Jan 29, 2018 at 11:23 AM, Colin McCabe wrote:
> On Mon, Jan 29, 2018 at 10:35 AM, Dong Lin wrote:
I think it is possible to move to using partitionEpoch entirely instead of
(topic, partition) to identify a partition. The client can obtain the
partitionEpoch -> (topic, partition) mapping from the MetadataResponse. We
probably need to figure out a way to assign partitionEpoch to existing
partitions in the
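The client-side mapping described above could look roughly like this (illustrative names only, not the actual MetadataResponse schema):

```python
# Sketch: the client learns partitionEpoch -> (topic, partition) from
# metadata and can then address a partition purely by its epoch/ID.
class ClientMetadata:
    def __init__(self):
        self._by_epoch = {}

    def update(self, metadata_response):
        # metadata_response: iterable of (partition_epoch, topic, partition)
        for epoch, topic, partition in metadata_response:
            self._by_epoch[epoch] = (topic, partition)

    def resolve(self, partition_epoch):
        # Returns (topic, partition), or None for an unknown epoch.
        return self._by_epoch.get(partition_epoch)

md = ClientMetadata()
md.update([(101, "orders", 0), (102, "orders", 1)])
md.resolve(101)  # -> ("orders", 0)
```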
Hey Colin,
I understand that the KIP will add overhead by introducing a per-partition
partitionEpoch. I am open to alternative solutions that do not incur
additional overhead. But I don't see a better way now.
IMO the overhead in the FetchResponse may not be that much. We probably
should discuss
On Fri, Jan 26, 2018, at 12:17, Dong Lin wrote:
Just some clarification on the current fencing logic. Currently, if the
producer uses acks=-1, a write will only succeed if the write is received
by all in-sync replicas (i.e., committed). This is true even when min.isr
is set since we first wait for a message to be committed and then check the
min
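The acks=-1 path described above can be modeled with a small simulation (illustrative only, not broker code; parameter names are made up):

```python
# Model of the fencing logic: with acks=-1, a write is acknowledged only
# after all in-sync replicas have it (i.e. it is committed), and the
# min.insync.replicas check applies in addition to that.
def try_append(isr_acks, isr_size, min_isr):
    """isr_acks: number of ISR members that received the write.
    Returns True if the produce request with acks=-1 succeeds."""
    committed = isr_acks == isr_size      # all ISR members have the message
    return committed and isr_size >= min_isr

try_append(isr_acks=3, isr_size=3, min_isr=2)  # True: committed, ISR >= min.isr
try_append(isr_acks=2, isr_size=3, min_isr=2)  # False: not yet committed
try_append(isr_acks=1, isr_size=1, min_isr=2)  # False: ISR shrank below min.isr
```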
Hey Colin,
On Fri, Jan 26, 2018 at 10:16 AM, Colin McCabe wrote:
On Thu, Jan 25, 2018, at 16:47, Dong Lin wrote:
Hey Colin,
Thanks for the comment.
On Thu, Jan 25, 2018 at 4:15 PM, Colin McCabe wrote:
On Wed, Jan 24, 2018, at 21:07, Dong Lin wrote:
Hey Colin,
Thanks for reviewing the KIP.
If I understand you right, you may be suggesting that we can use a global
metadataEpoch that is incremented every time the controller updates metadata.
The problem with this solution is that, if a topic is deleted and created
again, the user will not know whether t
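The failure mode described above can be illustrated with a toy comparison of the two schemes (hypothetical helper functions, not actual client code):

```python
# With only a global metadataEpoch, metadata for a re-created topic looks
# strictly "newer", so a client cannot tell that its committed offset belongs
# to the deleted incarnation. A per-partition epoch that changes on
# re-creation makes that case detectable.
def safe_to_use_offset_global(epoch_at_commit, current_global_epoch):
    # Any metadata update bumps the global epoch, so this check cannot
    # distinguish "topic re-created" from unrelated updates.
    return current_global_epoch >= epoch_at_commit

def safe_to_use_offset_per_partition(epoch_at_commit, current_partition_epoch):
    # The partition epoch changes only when this partition is re-created,
    # so a mismatch pinpoints the dangerous case.
    return current_partition_epoch == epoch_at_commit

# Topic deleted and re-created; unrelated updates also bumped the global epoch.
safe_to_use_offset_global(5, 9)         # True -> offset silently reused, wrong
safe_to_use_offset_per_partition(5, 6)  # False -> client detects re-creation
```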
Hi Dong,
Thanks for proposing this KIP. I think a metadata epoch is a really good idea.
I read through the DISCUSS thread, but I still don't have a clear picture of
why the proposal uses a metadata epoch per partition rather than a global
metadata epoch. A metadata epoch per partition is kind
Thanks much for reviewing the KIP!
Dong
On Wed, Jan 24, 2018 at 7:10 AM, Guozhang Wang wrote:
Yeah that makes sense, again I'm just making sure we understand all the
scenarios and what to expect.
I agree that if, more generally speaking, say users have only consumed to
offset 8, and then call seek(16) to "jump" to a further position, then she
needs to be aware that OORE may be thrown and sh
Yes, in general we cannot prevent OffsetOutOfRangeException if the user seeks
to a wrong offset. The main goal is to prevent OffsetOutOfRangeException when
the user has done things in the right way, e.g. the user should know that
there is a message with this offset.
For example, if the user calls seek(..) right afte
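A toy model of the behavior discussed above (illustrative names, not the actual consumer API):

```python
# Seeking outside the range of offsets the leader actually has yields
# OffsetOutOfRangeException; no client-side metadata check can rule that out
# for an arbitrary seek().
class OffsetOutOfRangeError(Exception):
    pass

def fetch(log_start, log_end, offset):
    """Return the message at `offset` from a log holding [log_start, log_end)."""
    if not (log_start <= offset < log_end):
        raise OffsetOutOfRangeError(offset)
    return f"message-{offset}"

fetch(0, 10, 9)       # ok: offset 9 exists
try:
    fetch(0, 10, 16)  # seek(16) past the end -> OffsetOutOfRangeError
except OffsetOutOfRangeError:
    pass
```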
"If consumer wants to consume message with offset 16, then consumer must have
already fetched message with offset 15"
--> this may not be always true right? What if consumer just call seek(16)
after construction and then poll without committed offset ever stored
before? Admittedly it is rare but w
Hey Guozhang,
Thanks much for reviewing the KIP!
In the scenario you described, let's assume that broker A has messages with
offset up to 10, and broker B has messages with offset up to 20. If
consumer wants to consume message with offset 9, it will not receive
OffsetOutOfRangeException from brok
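The two-broker scenario above can be sketched as follows (an illustrative model, not broker code):

```python
# Broker A (stale leader) has offsets up to 10; broker B (new leader) has
# offsets up to 20. Fetching offset 9 succeeds on either broker, while
# offset 16 is out of range only on the stale one.
class OffsetOutOfRange(Exception):
    pass

def fetch_from(broker_end_offset, offset):
    if offset > broker_end_offset:
        raise OffsetOutOfRange(offset)
    return f"message-{offset}"

fetch_from(10, 9)        # broker A serves offset 9: no exception
fetch_from(20, 16)       # broker B serves offset 16
try:
    fetch_from(10, 16)   # offset 16 on stale broker A -> OffsetOutOfRange
except OffsetOutOfRange:
    pass
```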
Thanks Dong, I made a pass over the wiki and it lgtm.
Just a quick question: can we completely eliminate the
OffsetOutOfRangeException with this approach? Say if there is consecutive
leader changes such that the cached metadata's partition epoch is 1, and
the metadata fetch response returns with
Hi all,
I would like to start the voting process for KIP-232:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-232%3A+Detect+outdated+metadata+using+leaderEpoch+and+partitionEpoch
The KIP will help fix a concurrency issue in Kafka which currently can
cause message loss or message duplicatio
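A minimal sketch of the epoch-based staleness check the KIP proposes (field names are illustrative, not the actual client internals):

```python
# The client caches a per-partition (partitionEpoch, leaderEpoch) pair and
# ignores any metadata response whose epochs are older than what it already
# has, so stale metadata from an outdated broker cannot roll the client back.
class EpochCache:
    def __init__(self):
        self._epochs = {}  # (topic, partition) -> (partition_epoch, leader_epoch)

    def maybe_update(self, topic, partition, partition_epoch, leader_epoch):
        """Accept new metadata only if it is at least as new as the cache."""
        key = (topic, partition)
        cached = self._epochs.get(key)
        if cached is not None and (partition_epoch, leader_epoch) < cached:
            return False  # outdated metadata: reject
        self._epochs[key] = (partition_epoch, leader_epoch)
        return True

cache = EpochCache()
cache.maybe_update("t", 0, partition_epoch=1, leader_epoch=4)  # True
cache.maybe_update("t", 0, partition_epoch=1, leader_epoch=3)  # False (stale)
cache.maybe_update("t", 0, partition_epoch=2, leader_epoch=0)  # True (re-created)
```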