Yes, you need to restore each node's own snapshot on every node.
Abdul Patel wrote on Friday, September 20, 2019 at 2:08 AM:
Thanks, I guess I have both.
So can we have either or?
If I keep auto_snapshot, can I remove nodetool snapshot?
Worst case scenario, if I wish to restore a snapshot, which one will be the
best option?
Also, if we restore a snapshot, do we need to have snapshots on all nodes?
On Thursday, September 19, 2019,
You probably have auto_snapshot enabled, which takes snapshots when you do
certain things. You can disable that if you don't need it, but it protects
you against things like accidentally dropping / truncating a table.
You may also be doing snapshots manually - if you do this, you can 'nodetool
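For reference, a minimal sketch of inspecting and clearing snapshots with nodetool (these are the standard nodetool subcommands; the keyspace name is a made-up placeholder):

```shell
# List all snapshots on this node, with their names and the
# disk space they hold:
nodetool listsnapshots

# Remove every snapshot for a single keyspace ("my_keyspace" is
# a hypothetical name for illustration):
nodetool clearsnapshot -- my_keyspace

# Remove all snapshots on the node:
nodetool clearsnapshot
```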
Hey All,
I found recently that the nodetool snapshot folder is creating almost 120GB
of files when my actual keyspace folder has only 20GB.
Do we need to change any parameter to avoid this?
Is this normal?
I have version 3.11.4.
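One thing worth checking before changing any parameter: Cassandra snapshots are hard links to the live SSTable files, so per-folder sizes can be misleading, and old snapshots keep pre-compaction SSTables pinned on disk, which is how a snapshot folder can grow well past the live keyspace. A minimal sketch of the hard-link effect (paths are made up for illustration, not actual Cassandra paths):

```shell
# Create a 10MB "sstable" and hard-link it into a "snapshot" folder,
# mimicking how a Cassandra snapshot references live SSTables.
mkdir -p demo/data demo/snapshot
dd if=/dev/zero of=demo/data/sstable.db bs=1M count=10 2>/dev/null
ln demo/data/sstable.db demo/snapshot/sstable.db

# Each folder appears to hold ~10MB on its own...
du -sm demo/data demo/snapshot

# ...but the combined tree is still ~10MB, because both directory
# entries point at the same underlying blocks.
du -sm demo
```

The real growth comes from snapshots that outlive compaction: once the linked SSTables are compacted away from the data directory, the snapshot copy is the only remaining reference and its space becomes exclusively the snapshot's.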
You can run removenode instead of decommission while it's down to avoid it
being online / serving reads at all.
You can also start Cassandra with `start_native_transport: false` to
deter clients from connecting directly to it, though to be fair, that
doesn't eliminate the possibility that it's
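A minimal sketch of the start-without-clients approach described above (assuming the standard `cassandra.start_native_transport` system property and nodetool subcommands):

```shell
# Start Cassandra with the native (CQL) transport disabled, so
# drivers cannot connect while the node is being worked on:
cassandra -Dcassandra.start_native_transport=false

# Once the node is healthy again, re-enable the native transport:
nodetool enablebinary
```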
Hi,
I ran into a situation where a newly bootstrapped node in the cluster
crashed (due to a known issue) immediately after the bootstrap process, and
it remained dead for about 8 hours.
Since the node was down for about 8 hours, it's missing some data after I
start Cassandra. My application with
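Eight hours of downtime is longer than the default hinted-handoff window (`max_hint_window_in_ms` defaults to 3 hours), so hints alone won't close the gap; a common approach is to repair the node that was down. A minimal sketch (standard nodetool subcommand):

```shell
# Run a full repair on the node that was down, so it streams the
# writes it missed from its replicas:
nodetool repair -full
```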
Hi Tarun,
That documentation page is a bit ambiguous. My understanding of it is that:
* Cassandra guarantees that counters are updated consistently across the
cluster by doing background reads that don't affect write latency.
* If you use a consistency level stricter than ONE, the same read is