hey guys, I'm trying to take backups of a multi-node Cassandra cluster and save
them on S3.
My idea is simply doing ssh to each server and using nodetool to create the
snapshots, then pushing them to S3.
So is this approach recommended? My concerns are about inconsistencies that
this approach can lead to, sin
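For concreteness, the per-node flow I have in mind looks roughly like the sketch below (the bucket name is hypothetical, and DRY_RUN=1 just prints the commands instead of running them, so nothing here touches a live cluster):

```shell
#!/bin/sh
# Sketch: back up one Cassandra node to S3.
# Assumptions: default data directory, hypothetical bucket name.
DRY_RUN=${DRY_RUN:-1}
TAG="backup-$(date +%Y%m%d)"
BUCKET="s3://my-cassandra-backups"   # hypothetical

run() {
  # Print the command in dry-run mode, otherwise execute it.
  if [ "$DRY_RUN" = "1" ]; then
    echo "WOULD RUN: $*"
  else
    "$@"
  fi
}

# 1. Flush memtables and hard-link SSTables into a named snapshot.
run nodetool snapshot -t "$TAG"

# 2. Push each table's snapshot directory to S3, keyed by host so
#    the nodes' uploads don't collide.
HOST=$(hostname)
for dir in /var/lib/cassandra/data/*/*/snapshots/"$TAG"; do
  run aws s3 sync "$dir" "$BUCKET/$HOST$dir"
done
```

This would be run on every node (e.g. over ssh), which is why each node's upload is prefixed with its hostname.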
On Fri, Dec 6, 2013 at 6:41 AM, Amalrik Maia wrote:
> hey guys, I'm trying to take backups of a multi-node cassandra and save
> them on S3.
> My idea is simply doing ssh to each server and using nodetool to create the
> snapshots, then pushing them to S3.
>
https://github.com/synack/tablesnap
Hmm... Cassandra's fundamental key features include fault tolerance, durability,
and replication. Just out of curiosity, why would you want to do backups?
/Jason
On Sat, Dec 7, 2013 at 3:31 AM, Robert Coli wrote:
> On Fri, Dec 6, 2013 at 6:41 AM, Amalrik Maia wrote:
>
>> hey guys, I'm trying to take bac
One typical reason is to protect against human error.
> On 7.12.2013, at 11.09, Jason Wee wrote:
>
> Hmm... Cassandra's fundamental key features include fault tolerance,
> durability, and replication. Just out of curiosity, why would you want to do
> backups?
>
> /Jason
>
>
>> On Sat, Dec 7, 2013 at 3:
If you lose RF + 1 nodes, the data that is replicated to only those nodes is
gone, so it's a good idea to have a recent backup then. Another situation is
when you deploy a bug in the software and start writing crap data to Cassandra.
Replication does not help there, and depending on the situation you need to
restore.
I have not used tablesnap, but it appears that it does not necessarily depend
upon taking a Cassandra snapshot. The example given in its documentation
shows the source folder as /var/lib/cassandra/data/GiantKeyspace, which is
the root of the "GiantKeyspace" keyspace. But snapshots operate at the
c
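For anyone following along: snapshots are created per table, as hard links under each table's own snapshots directory, not at the keyspace root. A small sketch for locating them on disk, assuming the default data directory (override DATA_DIR to point elsewhere):

```shell
#!/bin/sh
# Sketch: where nodetool snapshots land on disk.
# Layout (default data dir assumed):
#   /var/lib/cassandra/data/<keyspace>/<table>/snapshots/<tag>/
DATA_DIR=${DATA_DIR:-/var/lib/cassandra/data}

# List every per-table snapshots directory under the data directory.
snapshot_dirs() {
  find "$DATA_DIR" -maxdepth 3 -type d -name snapshots 2>/dev/null
}

snapshot_dirs
```

So a tool that copies only the keyspace root directory is copying live SSTable files rather than a consistent, named snapshot; to back up snapshots you would walk these per-table directories.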