Is there a way to preserve the writetime and TTL of each record in the new cluster?
Thanks and Regards
Noorul
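One way to preserve them at the CQL level: reads can select `writetime(col)` and `ttl(col)` for each cell, and writes accept `INSERT ... USING TIMESTAMP ... AND TTL ...`, so each row can be replayed with its original metadata. Since `ttl()` returns the *remaining* seconds, the replayed cell expires at roughly the original time. A minimal sketch of the statement-building step (the keyspace, table, and column names are made up; writetime/TTL are per cell, so tables with several non-key columns need one statement per column):

```python
# Sketch: rebuild an INSERT that replays a row with its original
# writetime (microseconds) and remaining TTL (seconds).
# Table and column names (my_ks.events, id, value) are hypothetical.

def build_replay_insert(keyspace, table, row):
    """row: dict with 'id', 'value', 'value_writetime', 'value_ttl',
    as returned by SELECT id, value, writetime(value), ttl(value) ..."""
    stmt = (
        f"INSERT INTO {keyspace}.{table} (id, value) "
        f"VALUES ({row['id']}, '{row['value']}') "
        f"USING TIMESTAMP {row['value_writetime']}"
    )
    # ttl(value) is NULL for cells written without a TTL;
    # only add the TTL clause when one is present.
    if row.get("value_ttl") is not None:
        stmt += f" AND TTL {row['value_ttl']}"
    return stmt

stmt = build_replay_insert("my_ks", "events", {
    "id": 1, "value": "a",
    "value_writetime": 1450000000000000, "value_ttl": 3600,
})
```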
On Mon, Dec 21, 2015 at 5:46 PM, DuyHai Doan wrote:
For cross-cluster operation with the Spark/Cassandra connector, you can
look at this trick:
http://www.slideshare.net/doanduyhai/fast-track-to-getting-started-with-dse-max-ing/64
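Whatever the connector-level details in those slides, the shape of any cross-cluster copy is the same: page through rows on the source and write them to the target in bounded batches. A toy, cluster-free sketch of that loop, with the I/O abstracted behind an iterable and a hypothetical `write_batch` callback:

```python
def copy_in_batches(source_rows, write_batch, batch_size=100):
    """Drain source_rows (any iterable), handing batches of at most
    batch_size rows to write_batch(list_of_rows). Returns rows copied."""
    batch, copied = [], 0
    for row in source_rows:
        batch.append(row)
        if len(batch) == batch_size:
            write_batch(batch)
            copied += len(batch)
            batch = []
    if batch:  # flush the final, possibly short, batch
        write_batch(batch)
        copied += len(batch)
    return copied

# Toy usage: "write" into a plain list instead of a second cluster.
written = []
n = copy_in_batches(range(250), written.extend, batch_size=100)
```

Bounding the batch size matters with Cassandra on both ends: it keeps memory flat and avoids oversized writes on the target.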
On Mon, Dec 21, 2015 at 1:14 PM, George Sigletos wrote:
Roughly half a TB of data.
There is a timestamp column in the tables we migrated, and we used it to achieve incremental updates.
I don't know anything about KairosDB, but I can see from the docs that there is a row timestamp column. Could you maybe use that one?
Kind regards,
George
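The incremental pass described above reduces to filtering on that timestamp column: each run copies only the rows written since the previous run's high-water mark. A sketch with rows as plain dicts (the column name `updated_at` is hypothetical):

```python
def incremental_rows(rows, last_sync):
    """Select rows modified strictly after last_sync, and return them
    together with the new high-water mark for the next run."""
    fresh = [r for r in rows if r["updated_at"] > last_sync]
    new_mark = max((r["updated_at"] for r in fresh), default=last_sync)
    return fresh, new_mark

rows = [
    {"id": 1, "updated_at": 10},
    {"id": 2, "updated_at": 25},
    {"id": 3, "updated_at": 40},
]
fresh, mark = incremental_rows(rows, last_sync=20)
```

One caveat: writes landing right at the boundary while a pass is running can be missed; the usual fix is to overlap the window slightly and rely on upserts being idempotent.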
George Sigletos writes:
Hello,
We had a similar problem where we needed to migrate data from one cluster
to another.
We ended up using Spark to accomplish this. It is fast and reliable, but some downtime was still required.
We minimized the downtime by doing a first run and then running incremental updates.
Kind regards,
George
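Why the first-run-plus-incremental approach shrinks the downtime: as long as you copy faster than new writes arrive, each catch-up pass only has to re-copy what arrived during the previous pass, so the remaining delta shrinks geometrically until the final pass fits inside a short write freeze. A toy model of that convergence (the rates and threshold are made-up numbers, not measurements):

```python
def passes_until_small(backlog, copy_rate, write_rate, threshold):
    """Each pass takes backlog/copy_rate time units, during which new
    writes arrive at write_rate and become the next pass's backlog.
    Converges whenever copy_rate > write_rate."""
    passes = 0
    while backlog > threshold:
        duration = backlog / copy_rate   # time spent on this pass
        backlog = duration * write_rate  # writes that arrived meanwhile
        passes += 1
    return passes, backlog

# E.g. 1M rows to copy at 10k rows/s, with 1k rows/s of new writes:
p, remaining = passes_until_small(1_000_000, 10_000, 1_000, threshold=100)
```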
Hello all,
We have two clusters, X and Y, with the same keyspaces but distinct data sets.
We are planning to merge these into a single cluster. What would be the ideal
steps to achieve this without downtime for the applications? We have a time-series
data stream continuously writing to Cassandra.
We have ruled ou