Roughly half a TB of data.

There is a timestamp column in the tables we migrated, and we used it to
drive the incremental updates.
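
For illustration, here is a minimal sketch of that pattern with the DataStax
spark-cassandra-connector (Scala). The host names, keyspace, table, and the
"event_ts" column are all hypothetical, so adapt them to your schema; also
note that the .where() pushdown only works if the timestamp is a clustering
or indexed column:

    import com.datastax.spark.connector._
    import com.datastax.spark.connector.cql.CassandraConnector
    import org.apache.spark.{SparkConf, SparkContext}

    object IncrementalMigration {
      def main(args: Array[String]): Unit = {
        // The Spark context reads from the source cluster by default.
        val sc = new SparkContext(new SparkConf()
          .setAppName("incremental-migration")
          .set("spark.cassandra.connection.host", "source-host"))

        // A second connector pointed at the destination cluster.
        val destConnector = CassandraConnector(
          sc.getConf.clone().set("spark.cassandra.connection.host", "dest-host"))

        // Cutoff recorded at the end of the previous run.
        val lastRun = "2015-12-20 00:00:00+0000"
        val newRows = sc.cassandraTable("my_keyspace", "my_table")
          .where("event_ts > ?", lastRun)

        // Bring the destination connector into implicit scope for the write.
        {
          implicit val c: CassandraConnector = destConnector
          newRows.saveToCassandra("my_keyspace", "my_table")
        }
        sc.stop()
      }
    }

The first run copies everything (no .where() filter); each later run moves
only the rows written since the recorded cutoff, which keeps the final
cut-over window short.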

I don't know anything about KairosDB, but I can see from the docs that
there is a row timestamp column. Could you maybe use that one?
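
If that timestamp cannot be filtered server side (for example, if it only
lives inside the row key), a full scan with a client-side filter in Spark
still gives incremental behaviour, at the cost of re-reading everything.
A hypothetical sketch; "row_time" is an assumed column name, so check the
actual KairosDB schema:

    // Client-side filter when no server-side pushdown is possible.
    val cutoffMs = 1450656000000L // end of the previous run, ms since epoch
    val newPoints = sc.cassandraTable("kairosdb", "data_points")
      .filter(_.getLong("row_time") > cutoffMs)
    // newPoints can then be saved to the destination as above.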

Kind regards,
George

On Mon, Dec 21, 2015 at 12:53 PM, Noorul Islam K M <noo...@noorul.com>
wrote:

> George Sigletos <sigle...@textkernel.nl> writes:
>
> > Hello,
> >
> > We had a similar problem where we needed to migrate data from one cluster
> > to another.
> >
> > We ended up using Spark to accomplish this. It is fast and reliable, but
> > some downtime was still required.
> >
> > We minimized the downtime by doing a first full run, and then running
> > incremental updates.
> >
>
> How much data are you talking about?
>
> How did you achieve the incremental run? We are using KairosDB, and some of
> the other schemas do not have a way to filter based on date.
>
> Thanks and Regards
> Noorul
>
> > Kind regards,
> > George
> >
> >
> >
> > On Mon, Dec 21, 2015 at 10:12 AM, Noorul Islam K M <noo...@noorul.com>
> > wrote:
> >
> >>
> >> Hello all,
> >>
> >> We have two clusters X and Y with the same keyspaces but distinct data
> >> sets. We are planning to merge these into a single cluster. What would
> >> be the ideal steps to achieve this without downtime for applications?
> >> We have time-series data streams continuously writing to Cassandra.
> >>
> >> We have ruled out export/import, as that would make us lose data written
> >> during the copy.
> >>
> >> We also ruled out sstableloader, as it is not reliable. It fails often,
> >> and there is no way to resume from where it failed.
> >>
> >> Any suggestions will help.
> >>
> >> Thanks and Regards
> >> Noorul
> >>
>
