Thank you, John and Jeff. I was leaning towards sharding, and this really helps 
support that opinion. Would you mind explaining in a bit more detail what it 
was about vnodes that caused those issues?
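
For context, by sharding I mean routing at the application layer, roughly along 
these lines (a minimal sketch; the cluster names and key format below are made 
up for illustration):

# Minimal sketch of client-side shard routing across independent
# Cassandra clusters. Cluster names are hypothetical placeholders.
import hashlib

CLUSTERS = ["cluster-%d" % i for i in range(8)]  # e.g. 8 x 100-node clusters

def cluster_for(partition_key: str) -> str:
    # Use a stable hash (not Python's randomized built-in hash()) so
    # every client process maps the same key to the same cluster.
    digest = hashlib.md5(partition_key.encode("utf-8")).digest()
    return CLUSTERS[int.from_bytes(digest[:8], "big") % len(CLUSTERS)]

print(cluster_for("customer:12345"))  # deterministic across processes

(Plain modulo routing reshuffles most keys whenever a shard is added, so we'd 
likely want consistent hashing or a lookup directory in practice.)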

From: user@cassandra.apache.org At: 07/10/20 19:06:27
To: user@cassandra.apache.org
Cc: Isaac Reath (BLOOMBERG/ 919 3RD A)
Subject: Re: Running Large Clusters in Production

I worked on a handful of large clusters (> 200 nodes) using vnodes, and there 
were some serious issues with both performance and availability.  We had to put 
in a LOT of work to fix the problems.

I agree with Jeff - it's way better to manage multiple clusters than a really 
large one.


On Fri, Jul 10, 2020 at 2:49 PM Jeff Jirsa <jji...@gmail.com> wrote:

1000 instances are fine if you're not using vnodes.

I'm not sure what the limit is if you're using vnodes. 

If you might get to 1000, shard early before you get there. Running 8x100 host 
clusters will be easier than one 800 host cluster.
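
To make the non-vnode setup concrete, here's a minimal sketch (assuming 
Murmur3Partitioner, whose token space spans -2**63 to 2**63 - 1) of computing 
the evenly spaced initial_token values each node would pair with num_tokens: 1 
in cassandra.yaml:

# Minimal sketch, assuming Murmur3Partitioner: evenly spaced tokens
# give each node an equal slice of the ring without vnodes.
def initial_tokens(node_count: int) -> list[int]:
    span = 2 ** 64  # total size of the Murmur3 token space
    return [(span // node_count) * i - 2 ** 63 for i in range(node_count)]

# Each node sets, in cassandra.yaml:
#   num_tokens: 1
#   initial_token: <its value below>
for node, token in enumerate(initial_tokens(100)):
    print("node %d: %d" % (node, token))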


On Fri, Jul 10, 2020 at 2:19 PM Isaac Reath (BLOOMBERG/ 919 3RD A) 
<ire...@bloomberg.net> wrote:

Hi All,

I’m currently dealing with a use case that runs on around 200 nodes. Due to 
growth of the product as well as the onboarding of additional data sources, we 
are looking at having to expand that to around 700 nodes, and potentially 
beyond to 1000+. To that end I have a couple of questions:

1) For those who have experienced managing clusters at that scale, what types 
of operational challenges have you run into that you might not see when 
operating 100 node clusters? A couple that come to mind: version upgrades 
(especially major version upgrades) become a lot riskier, since it is no 
longer feasible to do a blue / green style deployment of the database, and 
backup & restore operations seem far more error prone for the same reason 
(having to do an in-place restore instead of being able to spin up a new 
cluster to restore to).

2) Is there a cluster size beyond which sharding across multiple clusters 
becomes the recommended approach?

Thanks,
Isaac

