Based on a quick read of the code, compaction in Bitcask is performed only
on "readable" files, and the current active file being written to is
excluded from that list. With default settings, that active file can grow to
2GB. So it is possible that if objects had been replaced/deleted many times
within
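
For reference, that 2GB figure is Bitcask's max_file_size default. A minimal
sketch of the knob, assuming the riak.conf key is bitcask.max_file_size
(check the schema shipped with your version before relying on it):

    # riak.conf -- size at which the active Bitcask file is closed and a new
    # one opened; only closed ("readable") files are candidates for merging.
    bitcask.max_file_size = 2GB

    # Legacy app.config equivalent (value in bytes):
    # {bitcask, [{max_file_size, 2147483648}]}

Lowering it means the active file rolls over sooner, so dead space in it
becomes eligible for merging earlier, at the cost of more, smaller data
files per vnode.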
Sean,
Some partial answers to your questions.
I don't believe force-replace itself will sync anything up - it just
reassigns ownership (hence handoff happens very quickly).
Read repair would synchronise a portion of the data. So if 10% of your data
is read regularly, this might explain some of w
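
To make the ownership-only point concrete, the force-replace step is just a
staged membership change; roughly as follows (node names are placeholders,
and this is the sequence as I understand the failed-node docs, not a
substitute for them):

    # On the newly built node, join it to the cluster:
    riak-admin cluster join riak@existing-node
    # Hand the dead node's partitions to the new node without moving data:
    riak-admin cluster force-replace riak@failed-node riak@new-node
    # Review, then apply, the staged changes:
    riak-admin cluster plan
    riak-admin cluster commit

As I understand it, since no partition data moves as part of that
reassignment, the new node's vnodes start out empty and are repopulated over
time by AAE exchanges and read repair, which is why only the keys that are
actually read come back quickly.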
Hi All,
A few questions on the procedure here to recover a failed node:
http://docs.basho.com/riak/kv/2.2.3/using/repair-recovery/failed-node/
We lost a production Riak server when AWS decided to delete a node, and we
plan on following this procedure to replace it with a newly built node. A
practice r
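
For anyone working through the same procedure, a few read-only riak-admin
subcommands are useful for checking cluster state before and after the
replace (these only report status and stage no changes):

    # Cluster membership and ring/ownership health:
    riak-admin member-status
    riak-admin ring-status
    # Ongoing handoffs / partition transfers:
    riak-admin transfers
    # Active anti-entropy tree and exchange status (if AAE is enabled):
    riak-admin aae-status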