Hi, I want to unsubscribe from this list.
Can I do it?
Please :)
Thank you!
Regards!
On Sat, Jul 11, 2020 at 10:06 PM, Adam Bellemare ()
wrote:
> My 2 cents -
>
> I agree with Colin. I think it's important that the metadata not grow
> unbounded without being delegated to external storage. Indefinite long-term
> storage of entity data in Kafka can result in extremely large datasets
> where the vast majority of the data is stored in the external tier. I would
> be very disappointed to have metadata storage be the limiting factor on
> exactly how much data I can store in Kafka. For example, I think it's very
> reasonable that an AWS metadata store could be implemented with DynamoDB (a
> key-value store) paired with S3 - faster random-access metadata lookup than
> plain S3, but without needing to rebuild RocksDB state locally.
>
>
>
> On Fri, Jul 10, 2020 at 3:57 PM Colin McCabe wrote:
>
> > Hi all,
> >
> > Thanks for the KIP.
> >
> > I took a look, and one thing that stood out to me is that the more
> > metadata we have, the more storage we will need on local disk for the
> > RocksDB database. This seems to contradict some of the goals of the
> > project. Ideally, the space we need on local disk should be related
> > only to the size of the hot set, not the size of the cold set. It also
> > seems like it could lead to extremely long RocksDB rebuild times if we
> > somehow lose a broker's local storage and have to rebuild it.
> >
> > Instead, I think it would be more reasonable to store cold metadata in
> > the "remote" storage (HDFS, S3, etc.). Not only does this free up space
> > on the local disk and avoid long rebuild times, but it also gives us
> > more control over the management of our cache. With RocksDB, we are
> > delegating cache management to an external library that doesn't really
> > understand our use case.
> >
> > To give a concrete example of how this is bad, imagine that we have 10
> > worker threads and we get 10 requests for something that requires us to
> > fetch cold tiered-storage metadata. Now every worker thread is blocked
> > inside RocksDB, and the broker can do nothing until it finishes fetching
> > from disk. When accessing a remote service like HDFS or S3, in contrast,
> > we would be able to check whether the data was in our local cache first.
> > If it wasn't, we could put the request in a purgatory, activate a
> > background thread to fetch the needed data, and release the worker
> > thread to be used by some other request. Having control of our own
> > caching strategy increases observability, maintainability, and
> > performance.
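The cache-then-purgatory pattern Colin sketches could look roughly like this (a hypothetical sketch only; `ColdMetadataFetcher` and its methods are invented for illustration, and Kafka's actual purgatory machinery is more involved):

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch of the pattern: check a local cache first; on a miss,
// hand the caller a future (the "purgatory") and fetch from remote storage
// on a background thread, so the worker thread can serve other requests.
class ColdMetadataFetcher {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final ExecutorService background = Executors.newFixedThreadPool(2);

    // Stands in for a slow remote read (e.g. HDFS or S3).
    private String fetchRemote(String key) {
        return "metadata-for-" + key;
    }

    // Completed immediately on a cache hit; completed later by a
    // background thread on a miss.
    CompletableFuture<String> lookup(String key) {
        String hit = cache.get(key);
        if (hit != null) {
            return CompletableFuture.completedFuture(hit);
        }
        return CompletableFuture.supplyAsync(() -> {
            String value = fetchRemote(key);
            cache.put(key, value); // populate the cache for later hits
            return value;
        }, background);
    }

    void shutdown() {
        background.shutdown();
    }
}
```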
> >
> > I can anticipate a possible counter-argument here: the size of the
> > metadata should be small and usually fully resident in memory anyway.
> > While this is true today, I don't think it will always be true. The
> > current low limit of a few thousand partitions is not competitive in the
> > long term and needs to be lifted. We'd like to get to at least a million
> > partitions with KIP-500, and much more later. Also, when you give people
> > the ability to have unlimited retention, they will want to make use of
> > it. That means lots of historical log segments to track. This scenario
> > is by no means hypothetical. Even with the current software, it's easy
> > to think of cases where someone misconfigured the log segment roll
> > settings and overwhelmed the system with segments. So overall, I'd like
> > to understand why we want to store metadata on local disk rather than
> > remote, and what the options are for the future.
> >
> > best,
> > Colin
> >
> >
> > On Thu, Jul 9, 2020, at 09:55, Harsha Chintalapani wrote:
> > > Hi Jun,
> > > Thanks for the replies and for your feedback and input on the design.
> > > We are close to finishing the implementation.
> > > We also ran several perf tests at our peak production loads, and with
> > > tiered storage we didn't see any degradation in write throughput or
> > > latency.
> > > Ying has already added some of the perf test results to the KIP itself.
> > > It would be great if we could get design and code reviews from you
> > > and others in the community as we make progress.
> > > Thanks,
> > > Harsha
> > >
> > > On Tue, Jul 7, 2020 at 10:34 AM Jun Rao wrote:
> > >
> > > > Hi, Ying,
> > > >
> > > > Thanks for the update. It's good to see the progress on this. Please
> > > > let us know when you are done updating the KIP wiki.
> > > >
> > > > Jun
> > > >
> > > > On Tue, Jul 7, 2020 at 10:13 AM Ying Zheng wrote:
> > > >
> > > >> Hi Jun,
> > > >>
> > > >> Satish and I have added more design details in the KIP, including
> > > >> how to keep consistency between replicas (especially when there are
> > > >> leadership changes / log truncations) and new metrics. We also made
> > > >> some other minor changes in the doc. We will finish the KIP changes
> > > >> in the next couple of