With those consistency levels (LOCAL_QUORUM writes, LOCAL_ONE reads) it's already possible to miss your own writes, so you're probably already seeing some of what would happen if you jumped straight to RF=5 - it would just become more common.

If you did what you describe, two of the five replicas in each DC would hold no data until repair completes, so each LOCAL_ONE read would have a 40% chance of seeing no data (or not the most recent data).
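A back-of-the-envelope sketch of where that 40% comes from, assuming each LOCAL_ONE read is served by one of the five replicas chosen roughly uniformly (in practice the coordinator's snitch and dynamic routing skew this, so treat it as an upper-bound intuition, not an exact figure):

```python
# After ALTER KEYSPACE from RF=3 to RF=5, only the 3 original
# replicas hold existing rows until a full repair finishes.
replicas_total = 5      # new RF per DC
replicas_with_data = 3  # replicas that actually have the row
p_miss = (replicas_total - replicas_with_data) / replicas_total
print(p_miss)  # 0.4 -> a 40% chance a LOCAL_ONE read hits an empty replica
```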

Alternatively:
- change the app to read at LOCAL_QUORUM
- change RF from 3 to 4
- run repair
- change RF from 4 to 5
- run repair
- change the app back to read at LOCAL_ONE
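Those steps might look roughly like this in practice (a sketch only - the keyspace name `my_ks` and DC names `DC1`/`DC2` are placeholders for your own topology, and each repair must run on every node before moving on):

```shell
# 1. Switch the app to read at LOCAL_QUORUM (application config change).

# 2. Bump RF from 3 to 4 in each DC:
cqlsh -e "ALTER KEYSPACE my_ks WITH replication = {
  'class': 'NetworkTopologyStrategy', 'DC1': 4, 'DC2': 4};"

# 3. Full repair on every node so the new replicas are populated:
nodetool repair -full my_ks

# 4. Bump RF from 4 to 5:
cqlsh -e "ALTER KEYSPACE my_ks WITH replication = {
  'class': 'NetworkTopologyStrategy', 'DC1': 5, 'DC2': 5};"

# 5. Repair again:
nodetool repair -full my_ks

# 6. Switch the app back to LOCAL_ONE reads (application config change).
```

Going one replica at a time keeps LOCAL_QUORUM (3 of 4, then 3 of 5) overlapping with the old replica set, so reads stay consistent while data streams to the new replicas.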

Then you're back to the status quo, where you probably see most writes, but it's not strictly guaranteed.

> On May 22, 2020, at 8:51 AM, Leena Ghatpande <lghatpa...@hotmail.com> wrote:
> 
> 
> We are on Cassandra 3.7 and have a 12 node cluster , 2DC, with 6 nodes in 
> each DC. RF=3
> We have around 150M rows across tables.
> 
> We are planning to add more nodes to the cluster, and thinking of changing 
> the replication factor to 5 for each DC. 
> 
> Our application uses the below consistency level
>  read-level: LOCAL_ONE
>  write-level: LOCAL_QUORUM
> 
> If we change to RF=5 on the live cluster and run full repairs, would we see 
> read/write errors while data is being replicated? 
> If so, this is not something that we can afford in production, so how would 
> we avoid this?
