Hi all,
I would like to start the process for a 1.1.1 bug-fix release. 1.1.0 was
released on Mar 28, 2018; about 2.5 months have passed and 25 bug fixes have
accumulated so far.
A few of the more important fixes that have been merged into the 1.1 branch so
far:
KAFKA-6925
[
https://issues.apache.org/jira/browse/KAFKA-6946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Dong Lin resolved KAFKA-6946.
-
Resolution: Fixed
> Keep the session id for incremental fetch when fetch responses are thrott
Dong Lin created KAFKA-7019:
---
Summary: Reduce the contention between metadata update and
metadata read operations
Key: KAFKA-7019
URL: https://issues.apache.org/jira/browse/KAFKA-7019
Project: Kafka
, Feb 7, 2018 at 11:42 PM, Dong Lin wrote:
> Hey Jun,
>
> Sure, I will come up with a KIP this week. I think there is a way to allow
> partition expansion to arbitrary number without introducing new concepts
> such as read-only partition or repartition epoch.
>
> Thanks,
>
ook a look at KIP-287. Producers writing using the new scheme
> prior to processor catch up and cut-over makes sense. Thanks.
>
> On Sat, Apr 14, 2018 at 7:09 PM, Dong Lin wrote:
>
> > Hey Jeff,
> >
> > Thanks for the review. The scheme for expanding processors of
Thanks for the KIP! I am in favor of the option 1.
+1 as well.
On Thu, May 31, 2018 at 6:00 PM, Jason Gustafson wrote:
> Thanks everyone for the feedback. I've updated the KIP and added
> KAFKA-6979.
>
> -Jason
>
> On Wed, May 30, 2018 at 3:50 PM, Guozhang Wang wrote:
>
> > Thanks Jason. I'm
> bugs ( https://issues.apache.org/jira/browse/KAFKA-6438 ).
>
> [fwiw, b) also can be solved by having topic TTL (in a fashion similar
> to e.g. RabbitMQ) - I will be submitting a relevant KIP soon]
>
> Yours sincerely,
> Adam Kotwasinski
>
> 2018-05-18 19
sh
> --bootstrap-server localhost:9092 --topic idontexist`
>
> Best regards,
> Adam
>
> 2018-05-18 18:49 GMT+01:00 Dong Lin <lindon...@gmail.com>:
> > Hey Adam,
> >
> > Thanks for the KIP. We currently already have the per-topic byte-out-rate
> >
Hey Adam,
Thanks for the KIP. We currently already have the per-topic byte-out-rate
(not including replication traffic) with MBean
path kafka.server:name=BytesOutPerSec,type=BrokerTopicMetrics,topic=*.
Though this is not the FetchRequest rate, it seems to address the
motivation of the KIP by
[
https://issues.apache.org/jira/browse/KAFKA-3473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Dong Lin resolved KAFKA-3473.
-
Resolution: Fixed
> KIP-237: More Controller Health Metr
+Jon
On Mon, 22 Jan 2018 at 10:38 AM Becket Qin wrote:
> Thanks for the discussion and voting.
>
> KIP-219 has passed with +3 binding votes (Becket, Jun, Rajini).
>
> On Thu, Jan 18, 2018 at 1:32 AM, Rajini Sivaram
> wrote:
>
> > Hi Becket,
> >
>
input from developers of Kafka clients
> (librdkafka, kafka-python, etc.) for this KIP.
>
> Ismael
>
> On Thu, Apr 5, 2018 at 2:50 PM, Jonghyun Lee <jonghy...@gmail.com> wrote:
>
> > Hi,
> >
> > I have been implementing KIP-219. I discussed the interface chan
[
https://issues.apache.org/jira/browse/KAFKA-6705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Dong Lin resolved KAFKA-6705.
-
Resolution: Won't Fix
> producer.send() should not block due to metadata not availa
.
After the first metadata update, the producer most likely will not have to
wait for metadata to send messages.
On Fri, Apr 13, 2018 at 11:34 PM, Dong Lin <lindon...@gmail.com> wrote:
> Hey Becket,
>
> Good point! Thanks for the comment.
>
> I have updated the KIP to move the compr
Thanks for the KIP! LGTM. +1
On Sat, Apr 14, 2018 at 5:31 PM, Ted Yu wrote:
> +1
>
> On Sat, Apr 14, 2018 at 3:54 PM, Anna Povzner wrote:
>
> > Hi All,
> >
> >
> > I would like to start the vote on KIP-279: Fix log divergence between
> > leader and
umer will be able to catch up, at which point it will start consuming using
> the new scheme afterwards.
>
> Thanks,
> Jeff Chao
>
> On Sat, Apr 14, 2018 at 6:44 AM, Dong Lin <lindon...@gmail.com> wrote:
>
> > Thanks for the notes by Jun and Ray. I have read through the n
alidate the key
> distribution?
> - Jan concerned about how a consumer application would look with the
> new "split partition" design.
> - KIP introduced callback. Jan doesn't think it is useful. Callback
> for switching "between Partition 1 and can start
Hi all,
I have created KIP-287 to provide high level design to support partition
and processor expansion for stateful processing. See
https://cwiki.apache.org/confluence/display/KAFKA/KIP-287%3A+Support+partition+and+processor+expansion+for+stateful+processing+jobs
.
Instead of planning to
he producer sender thread can focus on IO. With the proposed
> changes, does that mean the producer sender thread will have to do all the
> compression as well? Would this become a performance bottleneck?
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Thu, Apr 12, 2018 at 1:4
d per-partition queue ?
>
> Thanks
>
> On Wed, Apr 11, 2018 at 8:50 PM, Dong Lin <lindon...@gmail.com> wrote:
>
> > Hi all,
> >
> > I have created KIP-286: producer.send() should not block on metadata
> > update. See
> > https://cwiki.apache.org/confl
Hi all,
I have created KIP-286: producer.send() should not block on metadata
update. See
https://cwiki.apache.org/confluence/display/KAFKA/KIP-286%3A+producer.send%28%29+should+not+block+on+metadata+update
.
The KIP intends to improve user-experience of producer.send() when metadata
is not
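The core idea of KIP-286 can be sketched with a toy model: instead of blocking inside send() while metadata is being fetched, buffer the record and hand it to the sender once metadata arrives. This is an illustrative Python simulation of the behavior under discussion, not the KafkaProducer's actual internals; all names here are made up for the sketch.

```python
import queue
import threading

class NonBlockingSender:
    """Toy model of the KIP-286 idea: send() never blocks waiting for
    metadata. Records for topics without known metadata are buffered and
    drained when metadata arrives. Illustrative only."""

    def __init__(self):
        self._metadata = set()          # topics whose metadata is known
        self._pending = queue.Queue()   # records buffered until metadata arrives
        self.sent = []                  # stand-in for records handed to the sender thread
        self._lock = threading.Lock()

    def send(self, topic, value):
        # Returns immediately in both cases -- no blocking on metadata.
        with self._lock:
            if topic in self._metadata:
                self.sent.append((topic, value))
            else:
                self._pending.put((topic, value))

    def on_metadata(self, topic):
        # Invoked when a metadata response for `topic` comes back:
        # drain buffered records whose topic metadata is now known.
        with self._lock:
            self._metadata.add(topic)
            requeue = []
            while not self._pending.empty():
                t, v = self._pending.get()
                if t in self._metadata:
                    self.sent.append((t, v))
                else:
                    requeue.append((t, v))
            for item in requeue:
                self._pending.put(item)
```

The key property is that the fast path and the buffering path both return immediately; only the background metadata handler moves records forward.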
I am really, really sorry for missing the KIP meeting. I am on vacation in
Shanghai; I drank too much and overslept last night.
Thanks much for the KIP meeting notes! I will read through them carefully.
On Thu, Apr 5, 2018 at 10:31 PM, Jeff Chao wrote:
> Hi Jun, same
ing the
> consumer
> > > to
> > > > > re-partition the data. I think this is even less intuitive, when
> the
> > > > > partition function belongs to the producer.
> > > > >
> > > > >
> > > > > Point 5:
> > > > > Dong
; that when a topic is a topic it's always the same even after it has
> > grown or shrunk is important. So from my POV I have major concerns about
> > whether this KIP is beneficial in its current state.
> >
> > What is it that makes everyone so addicted to the idea of linear hashing
saying that
> > "if you use a custom partitioner, you cannot do partition expansion" is
> > very reasonable (but I don't think we need to go that far with the
> current
> > proposal). It's similar to my statement in my email to Jun that in
> > principle KStr
ht capture
> >
> > every change to a table. Three key capabilities:
> >
> >
> >
> >
> > With these APIs, Kafka can be used for two broad classes of application:
> >
> > ** Building real-time streaming data pipelines that reliably get data
> >
> > producer in the order they are produced. And we need to provide a way for
> > stream use-case to be able to flush/load state when messages with the
> same
> > key are migrated between consumers. In addition to ensuring that this
> goal
> > is correctly supported, we shou
are migrated between consumers. In addition to ensuring that this goal
is correctly supported, we should do our best to keep the performance and
organization overhead of this KIP as low as possible.
On Tue, Mar 27, 2018 at 1:14 PM, Dong Lin <lindon...@gmail.com> wrote:
> Hey John,
>
this to occur, so you get to amortize these copies over
> the lifetime of the topic, whereas a reshuffle just keeps making copies for
> every new event.
>
> And finally, I really do think that regardless of any performance concerns
> about this operation, if it preserves loose orga
On Tue, Mar 27, 2018 at 12:04 AM, Dong Lin <lindon...@gmail.com> wrote:
> Hey Jan,
>
> Thanks for the enthusiasm in improving Kafka's design. Now that I have
> read through your discussion with Jun, here are my thoughts:
>
> - The latest proposal should with log comp
o partitions, which
>>>>>>>> could
>>>>>>>> help
>>>>>>>> redistribute the state.
>>>>>>>>
>>>>>>>> You don't need to spin up a new consumer in this case. every
>>>>&g
t;> well
> >>>>>>>>>> with
> >>>>>>>>>> compacted topics since some keys in the original partitions will
> >>>>>>>>>> never
> >>>>>>>>>> be
> >>>>>
Dong Lin created KAFKA-6705:
---
Summary: producer.send() should be non-blocking
Key: KAFKA-6705
URL: https://issues.apache.org/jira/browse/KAFKA-6705
Project: Kafka
Issue Type: Improvement
what I meant by doing some
> data driven analysis. Maybe a quick run with hprof would help determine the
> root cause of why sanityCheck is slow?
>
> -Jay
>
> On Tue, Mar 20, 2018 at 12:13 AM Dong Lin <lindon...@gmail.com> wrote:
>
> > Hey Jay,
> >
> > Thanks
Dong Lin created KAFKA-6697:
---
Summary: JBOD configured broker should not die if log directory is
invalid
Key: KAFKA-6697
URL: https://issues.apache.org/jira/browse/KAFKA-6697
Project: Kafka
Issue
" bytes which is not positive or not a multiple of 8.")
> I'm pretty sure file.getAbsolutePath is a system call and I assume that
> happens whether or not you fail the in-memory check?
>
> -Jay
>
>
> On Sun, Feb 25, 2018 at 10:27 PM, Dong Lin <lindon..
Dong Lin created KAFKA-6640:
---
Summary: Improve efficiency of KafkaAdminClient.describeTopics()
Key: KAFKA-6640
URL: https://issues.apache.org/jira/browse/KAFKA-6640
Project: Kafka
Issue Type
Dong Lin created KAFKA-6638:
---
Summary: Controller should remove replica from ISR if the replica
is removed from the replica set
Key: KAFKA-6638
URL: https://issues.apache.org/jira/browse/KAFKA-6638
Project
Dong Lin created KAFKA-6636:
---
Summary: ReplicaFetcherThread should not die if hw < 0
Key: KAFKA-6636
URL: https://issues.apache.org/jira/browse/KAFKA-6636
Project: Kafka
Issue Type: Improvem
cation layer. It's hard to judge for me atm
> > what
> > >>>>> the impact would be, but it's something we should pay attention to.
> > >>>>>
> > >>>>>
> > >>>>> -Matthias
> > >>>>>
> > >>>>
s.
>
> If you are not willing to explain to me what I might be overlooking: that
> is fine.
> But I ask you to not reply to my emails then. Please understand my
> frustration with this.
>
> Best Jan
>
>
>
> On 06.03.2018 19:38, Dong Lin wrote:
>
>> Hi ev
Dong Lin created KAFKA-6618:
---
Summary: Prevent two controllers from updating znodes concurrently
Key: KAFKA-6618
URL: https://issues.apache.org/jira/browse/KAFKA-6618
Project: Kafka
Issue Type
[
https://issues.apache.org/jira/browse/KAFKA-5626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Dong Lin resolved KAFKA-5626.
-
Resolution: Duplicate
Fixed in https://issues.apache.org/jira/browse/KAFKA-5960
> Producer sho
Dong Lin created KAFKA-6617:
---
Summary: Improve controller performance by batching reassignment
znode write operation
Key: KAFKA-6617
URL: https://issues.apache.org/jira/browse/KAFKA-6617
Project: Kafka
on is a
> > very important feature to really embrace Kafka as a "data platform". The
> > point I also want to make is that copying data this way is completely in
> > line with the Kafka architecture: it only consists of reading from and
> > writing to topics.
signature checks of files (especially closed, immutable ones) that can be
> > saved on disk and checked against at startup ? Wouldn't that help speed
> up
> > boot time, for all segments ?
> >
> > On 26 Feb. 2018 5:28 pm, "Dong Lin" <lindon...@gmail.com> wro
er) can be implemented in pure userland
> with a custom partitioner and a small feedback loop from ProduceResponse =>
> Partitioner, in cooperation with a change management system.
>
> Best Jan
>
>
>
>
>
>
>
>
> On 28.02.2018 07:13, Dong Lin wrote:
>
>
o has
> > the benefit that you can dynamically scale up or down the partition
> count.
> > This seems like it simplifies things like log compaction etc.
> >
> > -Jay
> >
> > On Sun, Feb 25, 2018 at 3:51 PM, Dong Lin <lindon...@gmail.com> wrote:
> &g
ed ConsumerGroupPositionRequest and
ConsumerGroupPositionResponse as suggested.
>
> Thanks,
> Jason
>
>
> On Tue, Feb 27, 2018 at 11:49 PM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote:
>
> > Sounds awesome !
> > Are you planning to have aut
Dong Lin created KAFKA-6604:
---
Summary: ReplicaManager should not remove partitions on the log
directory from the high watermark checkpoint file
Key: KAFKA-6604
URL: https://issues.apache.org/jira/browse/KAFKA-6604
"userland" with a custom partitioner that
> handles the transition as needed. I would appreciate if someone could point
> out what a custom partitioner couldn't handle in this case?
>
> With the above approach, shrinking a topic becomes the same steps. Without
> losing keys in
Hey Allen,
Thanks for the comments.
On Mon, Feb 26, 2018 at 9:27 PM, Allen Wang <aw...@netflix.com.invalid>
wrote:
> Hi Dong,
>
> Please see my comments inline.
>
> Thanks,
> Allen
>
> On Sun, Feb 25, 2018 at 3:33 PM, Dong Lin <lindon...@gmail.com> wr
Hi all,
I have created KIP-263: Allow broker to skip sanity check of inactive
segments on broker startup. See
https://cwiki.apache.org/confluence/display/KAFKA/KIP-263%3A+Allow+broker+to+skip+sanity+check+of+inactive+segments+on+broker+startup
.
This KIP provides a way to significantly reduce
ll this work with Streams and Connect?
> > 2. How does this compare to a solution where we physically split
> partitions
> > using a linear hashing approach (the partition number is equivalent to
> the
> > hash bucket in a hash table)? https://en.wikipedia.org/wiki/
hash table)? https://en.wikipedia.org/wiki/Linear_hashing
>
> -Jay
>
> On Sat, Feb 10, 2018 at 3:35 PM, Dong Lin <lindon...@gmail.com> wrote:
>
> > Hi all,
> >
> > I have created KIP-253: Support in-order message delivery with partition
> > expansion.
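The linear-hashing scheme Jay references can be sketched in a few lines: while growing from N toward 2N partitions, only keys whose original partition has already been split get rehashed with the larger modulus, so most keys never move. This is a generic illustration of linear hashing, not Kafka's partitioner (Kafka uses murmur2; md5 is used here only to stay stdlib-only).

```python
import hashlib

def _h(key: bytes) -> int:
    # Deterministic 64-bit hash; any stable hash works for the illustration.
    return int.from_bytes(hashlib.md5(key).digest()[:8], "big")

def partition_for(key: bytes, initial: int, current: int) -> int:
    """Linear-hashing partition choice while growing from `initial` to at
    most 2 * initial partitions. Partitions [0, current - initial) have
    already been split; only their keys use the doubled modulus."""
    assert initial <= current <= 2 * initial
    split_point = current - initial
    p = _h(key) % initial
    if p < split_point:
        p = _h(key) % (2 * initial)   # lands in either p or p + initial
    return p
```

The property relevant to the in-order-delivery discussion: a key's partition either stays put or moves to exactly one predictable new partition (p + initial), never an arbitrary one.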
c can be bursty. When the traffic goes up,
> adding
> > partitions is a quick way of shifting some traffic to the newly added
> > brokers. Once the traffic goes down, the newly added brokers will be
> > reclaimed (potentially by moving replicas off those brokers). However
kers will be
> reclaimed (potentially by moving replicas off those brokers). However, if
> one can only add partitions without removing, eventually, one will hit the
> limit.
>
> Thanks,
>
> Jun
>
> On Wed, Feb 21, 2018 at 12:23 PM, Dong Lin <lindon...@gmail.com> wr
are pure overhead. Overhead for consumers, producers, controller,
> etc, etc. If we have the ability to add partitions when needed, it will be
> good to also remove them when no longer needed.
>
> On Wed, Feb 21, 2018 at 12:24 PM Dong Lin <lindon...@gmail.com> wrote:
>
>
> > On 2/22/18 5:19 PM, Jay Kreps wrote:
> >> Hey Dong,
> >>
> >> Two questions:
> >> 1. How will this work with Streams and Connect?
> >> 2. How does this compare to a solution where we physically split
> partitions
> >> using a
re if this is super important to address though.
>
Personally I think it is not worth adding more complexity just to optimize
this scenario. This imbalance should exist only for a short period of time.
If it is important I can think more about how to handle it.
>
> Thanks,
>
> Jun
>
Dong Lin created KAFKA-6571:
---
Summary: KafkaProducer.close(0) should be non-blocking
Key: KAFKA-6571
URL: https://issues.apache.org/jira/browse/KAFKA-6571
Project: Kafka
Issue Type: Bug
Hi all,
I have created KIP-253: Support in-order message delivery with partition
expansion. See
https://cwiki.apache.org/confluence/display/KAFKA/KIP-253%3A+Support+in-order+message+delivery+with+partition+expansion
.
This KIP provides a way to allow messages of the same key from the same
ership change must be delayed. As far
> as I can tell, this strategy hurts availability without even increasing
> consistency.
>
> best,
> Colin
>
>
> On Sat, Feb 3, 2018, at 10:03, Dong Lin wrote:
> > Hey Guozhang,
> >
> > I don't have very detailed
e re-partitioning KIP a bit more and see if
> there is any overlap with KIP-232. Would you be interested in doing that?
> If not, we can do that next week.
>
> Jun
>
>
> On Tue, Feb 6, 2018 at 11:30 AM, Dong Lin <lindon...@gmail.com> wrote:
>
> > Hey Jun,
> &
formance improvement on top of this KIP. That can probably be done
> > separately.
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
> > On Mon, Jan 29, 2018 at 11:52 AM, Dong Lin <lindon...@gmail.com> wrote:
> >
> > > Hey
ngg...@gmail.com> wrote:
> Hi Dong,
>
> Could you elaborate a bit more on how the controller could affect leaders to
> switch between all and quorum?
>
>
> Guozhang
>
>
> On Fri, Feb 2, 2018 at 10:12 PM, Dong Lin <lindon...@gmail.com> wrote:
>
>> Hey Guozhang,
>>
"surprises" to users who are already using "all". In other words, "quorum"
> is trading a bit of failure tolerance that is strictly defined on min.isr
> for better tail latency.
>
>
> Guozhang
>
>
> On Fri, Feb 2, 2018 at 6:25 PM, Dong Lin <lindon...@
:
>
> a. ISR list has 3: "all" waits for all 3, "quorum" waits for 2 of them.
> b. ISR list has 2: "all" and "quorum" both wait for the 2 of them.
> c. ISR list has 1: "all" waits for leader to return, while "quorum" would
> n
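Cases (a) and (b) above suggest one plausible reading of the semantics (case (c) is truncated in the archive, so the ISR-of-1 behavior for "quorum" is not confirmed here): "all" waits for every current ISR member, while "quorum" waits for a majority of the current ISR. A minimal sketch under that assumption:

```python
def acks_required(isr_size: int, mode: str) -> int:
    """How many ISR members must acknowledge before the leader responds.
    Assumption (not the KIP's final definition): 'quorum' means a majority
    of the *current* ISR."""
    if mode == "all":
        return isr_size            # every ISR member must ack
    if mode == "quorum":
        return isr_size // 2 + 1   # strict majority of the ISR
    raise ValueError(f"unknown ack mode: {mode}")
```

This reproduces the quoted cases: with ISR=3, "all" needs 3 and "quorum" needs 2; with ISR=2, both need 2.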
Hey Jun, Jason,
Thanks for all the comments. Could you see if you can give +1 for the KIP?
I am open to make further improvements for the KIP.
Thanks,
Dong
On Tue, Jan 23, 2018 at 3:44 PM, Dong Lin <lindon...@gmail.com> wrote:
> Hey Jun, Jason,
>
> Thanks much for all the revi
Hey Colin,
On Mon, Jan 29, 2018 at 11:23 AM, Colin McCabe <cmcc...@apache.org> wrote:
> > On Mon, Jan 29, 2018 at 10:35 AM, Dong Lin <lindon...@gmail.com> wrote:
> >
> > > Hey Colin,
> > >
> > > I understand that the KIP will add overhead by
Mon, Jan 29, 2018 at 10:35 AM, Dong Lin <lindon...@gmail.com> wrote:
> Hey Colin,
>
> I understand that the KIP will add overhead by introducing a per-partition
> partitionEpoch. I am open to alternative solutions that do not incur
> additional overhead. But I don't see a
in this KIP, maybe we can do the partition ID in a
separate KIP?
Thanks,
Dong
On Mon, Jan 29, 2018 at 10:18 AM, Colin McCabe <cmcc...@apache.org> wrote:
> On Fri, Jan 26, 2018, at 12:17, Dong Lin wrote:
> > Hey Colin,
> >
> >
> > On Fri, Jan 26, 201
Hey Colin,
On Fri, Jan 26, 2018 at 10:16 AM, Colin McCabe <cmcc...@apache.org> wrote:
> On Thu, Jan 25, 2018, at 16:47, Dong Lin wrote:
> > Hey Colin,
> >
> > Thanks for the comment.
> >
> > On Thu, Jan 25, 2018 at 4:15 PM, Colin McCabe <cmcc...@apa
ason the title is "Add Support ...". In this case, we
> > wouldn't
> > > break any current promises and provide a separate option for our user.
> > > In terms of KIP-250, I feel it is more like the "Semisynchronous
> > > Replication" in the MySQL world, and yes it i
Hey Colin,
Thanks for the comment.
On Thu, Jan 25, 2018 at 4:15 PM, Colin McCabe <cmcc...@apache.org> wrote:
> On Wed, Jan 24, 2018, at 21:07, Dong Lin wrote:
> > Hey Colin,
> >
> > Thanks for reviewing the KIP.
> >
> > If I understand you right, you ma
Dong Lin created KAFKA-6488:
---
Summary: Prevent log corruption in case of OOM
Key: KAFKA-6488
URL: https://issues.apache.org/jira/browse/KAFKA-6488
Project: Kafka
Issue Type: Bug
anyway. If the client thinks
> X is the leader but Y is really the leader, the client will talk to X, and
> X will point out its mistake by sending back a NOT_LEADER_FOR_PARTITION.
> Then the client can update its metadata again and find the new leader, if
> there is one. There is no need for an epo
have only consumed to
> offset 8, and then call seek(16) to "jump" to a further position, then she
> needs to be aware that an OORE may be thrown and she needs to handle it or rely
> on reset policy which should not surprise her.
>
>
> I'm +1 on the KIP.
>
> Guozh
onsumer just call seek(16)
> after construction and then poll without any committed offset ever stored
> before? Admittedly it is rare, but we do not programmatically disallow it.
>
>
> Guozhang
>
> On Tue, Jan 23, 2018 at 10:42 PM, Dong Lin <lindon...@gmail.com> wrote:
nse returns with partition epoch 2 pointing to
> leader broker A, while the actual up-to-date metadata has partition epoch 3
> whose leader is now broker B, the metadata refresh will still succeed and
> the follow-up fetch request may still see OORE?
>
>
> Guozhang
>
>
Hey Litao,
Thanks for the KIP. I have one quick comment before you provide more detail
on how to select the leader with the largest LEO.
Do you think it would make sense to change the default behavior of acks=-1,
such that broker will acknowledge the message once the message has been
replicated
Hi all,
I would like to start the voting process for KIP-232:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-232%3A+Detect+outdated+metadata+using+leaderEpoch+and+partitionEpoch
The KIP will help fix a concurrency issue in Kafka which currently can
cause message loss or message
Hey Jun, Jason,
Thanks much for all the review! I will open the voting thread.
Regards,
Dong
On Tue, Jan 23, 2018 at 3:37 PM, Jun Rao <j...@confluent.io> wrote:
> Hi, Dong,
>
> The current KIP looks good to me.
>
> Thanks,
>
> Jun
>
> On Tue, Jan 23, 2
Hey Jun,
Do you think the current KIP looks OK? I am wondering if we can open the
voting thread.
Thanks!
Dong
On Fri, Jan 19, 2018 at 3:08 PM, Dong Lin <lindon...@gmail.com> wrote:
> Hey Jun,
>
> I think we can probably have a static method in Util class to decode the
want to include OffsetEpoch in
> the AdminClient too.
>
> Thanks,
>
> Jun
>
>
> On Thu, Jan 18, 2018 at 6:30 PM, Dong Lin <lindon...@gmail.com> wrote:
>
> > Hey Jun,
> >
> > I agree. I have updated the KIP to remove the class OffsetEpoch and
> replac
>
> On Wed, Jan 17, 2018 at 10:10 AM, Dong Lin <lindon...@gmail.com> wrote:
>
> > Thinking about point 61 more, I realize that the async zookeeper read may
> > make it less of an issue for controller to read more zookeeper nodes.
> > Writing partition_epoch in the per
updated the KIP to use the suggested approach.
On Wed, Jan 17, 2018 at 9:57 AM, Dong Lin <lindon...@gmail.com> wrote:
> Hey Jun,
>
> Thanks much for the comments. Please see my comments inline.
>
> On Tue, Jan 16, 2018 at 4:38 PM, Jun Rao <j...@confluent.io> wrote:
of OffsetEpoch class from the KIP. I
just added it back with the public methods. Could you take another look?
>
> Jun
>
>
> On Thu, Jan 11, 2018 at 5:43 PM, Dong Lin <lindon...@gmail.com> wrote:
>
> > Hey Jun,
> >
> > Thanks much. I agree that we can not r
problems in the future. I am not sure having just a
> global_epoch can achieve these. global_epoch is useful to determine which
> version of the metadata is newer, especially with topic deletion.
>
> Thanks,
>
> Jun
>
> On Tue, Jan 9, 2018 at 11:34 PM, Dong Lin <li
the global epoch.
On Tue, Jan 9, 2018 at 6:58 PM, Dong Lin <lindon...@gmail.com> wrote:
> Hey Jun,
>
> Thanks so much. These comments are very useful. Please see my comments below.
>
> On Mon, Jan 8, 2018 at 5:52 PM, Jun Rao <j...@confluent.io> wrote:
>
>> Hi, Dong,
>
, perhaps we can have sth like the
> following. The binary encoding is probably more efficient than JSON for
> external storage.
>
> OffsetEpoch {
> static OffsetEpoch decode(byte[]);
>
> public byte[] encode();
>
> public String toString();
> }
>
Thanks much. I
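Jun's sketch above (a static decode, an encode, and a toString) combined with the later note that OffsetEpoch is composed of (partition_epoch, leader_epoch) can be illustrated as follows. The field layout here (a version byte plus two big-endian int32s) is my assumption for the sketch, not the KIP's actual wire format.

```python
import struct

class OffsetEpoch:
    """Illustrative analogue of the proposed OffsetEpoch: a
    (partition_epoch, leader_epoch) pair with a compact binary encoding.
    Layout assumed here: version byte + two big-endian int32s."""
    _FMT = ">Bii"

    def __init__(self, partition_epoch: int, leader_epoch: int):
        self.partition_epoch = partition_epoch
        self.leader_epoch = leader_epoch

    def encode(self) -> bytes:
        return struct.pack(self._FMT, 1, self.partition_epoch, self.leader_epoch)

    @staticmethod
    def decode(data: bytes) -> "OffsetEpoch":
        _version, pe, le = struct.unpack(OffsetEpoch._FMT, data)
        return OffsetEpoch(pe, le)

    def __str__(self) -> str:
        return (f"OffsetEpoch(partition_epoch={self.partition_epoch}, "
                f"leader_epoch={self.leader_epoch})")
```

As the thread notes, a binary encoding like this is more compact than JSON for external storage, at the cost of readability.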
n't think
> this adds much complexity and it makes the behavior consistent: every topic
> mutation results in an epoch bump.
>
> Thanks,
> Jason
>
> On Mon, Jan 8, 2018 at 3:14 PM, Dong Lin <lindon...@gmail.com> wrote:
>
> > Hey Ismael,
> >
> > I guess
hen they're on
> their own).
>
> Ismael
>
> The corresponding seek() and position() APIs might look something like
> this:
> >
> > void seek(TopicPartition partition, long offset, byte[] offsetMetadata);
> > byte[] positionMetadata(TopicPartition partition);
>
ores the
object.toString() in the external store, how can the user convert the string
back to the object?
And yes, this only matters to the consumer API. The current KIP continues
to send leader_epoch and topic_epoch as separate fields in request/response
and the offset topic schema.
Thanks much,
Dong
obably have the following json format:
{
"version": 1,
"topic_epoch": int,
"leader_epoch": int.
}
In comparison to byte[], String has the benefit of being more readable, and
it is also the same type as the existing metadata field, which is used for
a similar
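The JSON layout quoted above can be round-tripped in a few lines; this is just a direct rendering of the format from the thread (helper names are mine):

```python
import json

def encode_offset_metadata(topic_epoch: int, leader_epoch: int) -> str:
    # JSON layout matching the format quoted in the thread.
    return json.dumps(
        {"version": 1, "topic_epoch": topic_epoch, "leader_epoch": leader_epoch}
    )

def decode_offset_metadata(s: str):
    d = json.loads(s)
    if d["version"] != 1:
        raise ValueError(f"unsupported version {d['version']}")
    return d["topic_epoch"], d["leader_epoch"]
```

The explicit version field is what makes the readable-string approach evolvable: a future schema change bumps "version" rather than breaking existing stored strings.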
Hey Jun, Jason,
Thanks much for all the feedback. I have updated the KIP based on the
latest discussion. Can you help check whether it looks good?
Thanks,
Dong
On Thu, Jan 4, 2018 at 5:36 PM, Dong Lin <lindon...@gmail.com> wrote:
> Hey Jun,
>
> Hmm... thinking about this more
's what I am thinking. OffsetEpoch will be composed of
> (partition_epoch,
> leader_epoch).
>
> Thanks,
>
> Jun
>
>
> On Thu, Jan 4, 2018 at 4:22 PM, Dong Lin <lindon...@gmail.com> wrote:
>
> > Hey Jun,
> >
> > Thanks much. I like the the new API th
when topic is recreated.
>
> Thanks,
>
> Jun
>
>
> On Thu, Jan 4, 2018 at 2:05 PM, Dong Lin <lindon...@gmail.com> wrote:
>
> > Hey Jun,
> >
> > Yeah I agree that ideally we don't want an ever growing global metadata
> > version. I just think it
we don't store the metadata
> version together with the offset, on a consumer restart, it's not clear how
> we can ensure the metadata in the consumer is high enough since there is no
> metadata version to compare with.
>
> Thanks,
>
> Jun
>
>
> On Wed, Jan 3, 2018 at 6:43
too tightly with the
> implementation and makes it harder to refactor. That was the reason for the
> suggestion.
>
> Ismael
>
> On Wed, Dec 13, 2017 at 1:51 AM, Dong Lin <lindon...@gmail.com> wrote:
>
> > Hey Ismael,
> >
> > Thanks for your comments. Yeah t