It depends on a number of factors, such as compaction strategy and read
patterns. I recommend sticking to the 100MB per partition limit (and I aim
for significantly less than that).
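To sanity-check a schema against that 100 MB guideline, a back-of-envelope estimate is often enough. The row size and write rate below are illustrative assumptions, not figures from this thread:

```python
# Back-of-envelope partition-size estimate for a time-series table.
# AVG_ROW_BYTES and ROWS_PER_DAY are made-up assumptions for illustration.
AVG_ROW_BYTES = 1024          # assumed average serialized row size
ROWS_PER_DAY = 24 * 60        # one row per minute per partition key

def partition_mb(days: int, row_bytes: int = AVG_ROW_BYTES,
                 rows_per_day: int = ROWS_PER_DAY) -> float:
    """Estimated partition size in MB for `days` of data per partition."""
    return days * rows_per_day * row_bytes / 1e6

# One unbounded partition holding ~13 months (~395 days) of per-minute rows:
print(round(partition_mb(395)))   # 582 -> far above the 100 MB guideline

# Bucketing the partition key by month keeps each partition much smaller:
print(round(partition_mb(30)))    # 44
```

If the estimate lands anywhere near the limit, adding a time bucket (day, week, or month) to the partition key is the usual remodel.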
If you're doing time series with TWCS & TTL'ed data and small enough
windows, and you're only querying for a small
I disagree.
We had several partitions over 150MB in 3.11, and we were able to break the
cluster doing reads/writes from those partitions in a short period of time.
On Thu, Sep 13, 2018, 12:42 Gedeon Kamga wrote:
> Folks,
>
> Based on the information found here
> https://docs.datastax.com/en/dse-planning/doc/planning/pl
Hi Gedeon,
you should check Robert Stupp's 2016 talk about large partitions:
https://www.youtube.com/watch?v=N3mGxgnUiRY
Cheers,
On Thu, Sep 13, 2018 at 6:42 PM Gedeon Kamga wrote:
> Folks,
>
> Based on the information found here
> https://docs.datastax.com/en/dse-planning/doc/planning/plann
Thanks Ali!
I use a 13-month TTL on this table. I guess I need to remodel this table.
And I'll definitely try this tool.
On Tue, Apr 3, 2018 at 1:28 AM, Ali Hubail wrote:
> system.log should show you some warnings about wide rows. Do a grep on
> system.log for 'Writing large partition' The m
system.log should show you some warnings about wide rows. Do a grep on
system.log for 'Writing large partition'. The message could be different
for the C* version you're using, though. Also, this doesn't show you all of
the large partitions.
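That grep can be sketched in a few lines if you want the partition names and sizes pulled out. The message wording and the sample log line below are assumptions; the exact format varies across Cassandra versions, so adjust the pattern to match yours:

```python
import re

# Pattern for large-partition warnings. The exact message text differs
# between Cassandra versions; treat this as an assumption to adjust.
LARGE_PARTITION = re.compile(
    r"Writing large partition (\S+) \((\d+(?:\.\d+)?)\s*MiB"
)

def find_large_partitions(lines):
    """Yield (partition, size_mib) pairs from system.log lines."""
    for line in lines:
        m = LARGE_PARTITION.search(line)
        if m:
            yield m.group(1), float(m.group(2))

# Illustrative log line, made up for this sketch:
sample = [
    "WARN  [CompactionExecutor:1] - Writing large partition "
    "ks/events:2018-09-01 (152 MiB) to sstable ..."
]
print(list(find_large_partitions(sample)))  # [('ks/events:2018-09-01', 152.0)]
```

Point it at your real system.log instead of the sample list; as noted above, only partitions that were actually flushed or compacted recently will show up this way.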
There is a nice tool that analyzes sstables and can sho