Re: Keyspace Clone in Existing Cluster

2019-10-29 Thread Ankit Gadhiya
Thanks Paul. This is interesting.
So, is there anything I need to do after the cp - nodetool repair?
Also, I am assuming I need to do this exercise on all the nodes of the
cluster - right?
Any suggestions on how to automate this or do it from just a single node?


— Ankit Gadhiya

On Tue, Oct 29, 2019 at 11:21 PM Paul Carlucci 
wrote:

> Straight up Unix cp command, make sure you're in the right directory.  If
> you try to use schema.cql then you're going to have to massage it somewhat
> due to keyspace name differences and schema changes over time.  You'll see
> what I mean if you've got some.
>
> It goes without saying that you're gonna want to try this in non-prod
> first.  On a positive note you'll be learning some stuff they don't quite
> teach in Datastax Academy!
>
> On Tue, Oct 29, 2019, 8:39 AM Ankit Gadhiya 
> wrote:
>
>> Thanks Paul.
>>
>> Copy SSTable - How? Using SSTableLoader or some other mechanism.
>>
>>
>> *Thanks & Regards,*
>> *Ankit Gadhiya*
>>
>>
>>
>> On Tue, Oct 29, 2019 at 11:36 AM Paul Carlucci 
>> wrote:
>>
>>> Copy the schema from your source keyspace to your new target keyspace,
>>> nodetool snapshot on your source keyspace, copy the SSTable files over, do
>>> a rolling bounce, repair, enjoy.  In my experience a rolling bounce is
>>> easier than a nodetool refresh.
>>>
>>> It's either that or just copy it with Spark.
>>>
>>> On Tue, Oct 29, 2019, 11:19 AM Ankit Gadhiya 
>>> wrote:
>>>
 Thanks Alex. So How do I copy SSTables from 1.0 to 2.0? (Same
 SSTableLoader or any other approach?)
 Also since I've multi-node cluster - I'll have to do this on every
 single node - is there any tool or better way to execute this just from a
 single node?

 *Thanks & Regards,*
 *Ankit Gadhiya*



 On Tue, Oct 29, 2019 at 11:16 AM Alex Ott  wrote:

> You can create all tables in new keyspace, copy SSTables from 1.0 to
> 2.0 tables & use nodetool refresh on tables in KS 2.0 to say Cassandra
> about them.
>
> On Tue, Oct 29, 2019 at 4:10 PM Ankit Gadhiya 
> wrote:
>
>> Hello Folks,
>>
>> Greetings!.
>>
>> I've a requirement in my project to setup Blue-Green deployment for
>> Cassandra. E.x. Say My current active schema (application pointing to) is
>> Keyspace V1.0 and for my next release I want to setup Keysapce 2.0 (with
>> some structural changes) and all testing/validation would happen on it 
>> and
>> once successful , App would switch connection to keyspace 2.0 - This 
>> would
>> be generic release deployment for our project.
>>
>> One of the approach we thought of would be to Create keyspace 2.0 as
>> clone from Keyspace 1.0 including data using sstableloader but this would
>> be time consuming, also being a multi-node cluster (6+6 in each DC) - it
>> wouldn't be very feasible to do this manually on all the nodes for 
>> multiple
>> tables part of that keyspace. Was wondering if we have any other creative
>> way to suffice this requirement.
>>
>> Appreciate your time on this.
>>
>>
>> *Thanks & Regards,*
>> *Ankit Gadhiya*
>>
>>
>
> --
> With best wishes,Alex Ott
> http://alexott.net/
> Twitter: alexott_en (English), alexott (Russian)
>
 --
*Thanks & Regards,*
*Ankit Gadhiya*


Re: Keyspace Clone in Existing Cluster

2019-10-29 Thread Paul Carlucci
Straight up Unix cp command, make sure you're in the right directory.  If
you try to use schema.cql then you're going to have to massage it somewhat
due to keyspace name differences and schema changes over time.  You'll see
what I mean if you've got some.

It goes without saying that you're gonna want to try this in non-prod
first.  On a positive note you'll be learning some stuff they don't quite
teach in Datastax Academy!
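
To make the cp step concrete, it's roughly this on each node (the data
directory path, keyspace names and the table directory UUID suffixes below
are placeholders; yours will differ):

  # snapshot the source keyspace
  nodetool snapshot -t clone ks1

  # copy each table's snapshot files into the matching table directory
  # of the (already created, empty) target keyspace
  cd /var/lib/cassandra/data
  cp ks1/mytable-<uuid>/snapshots/clone/* ks2/mytable-<uuid>/

  # then a rolling bounce, or per table:
  nodetool refresh ks2 mytable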

On Tue, Oct 29, 2019, 8:39 AM Ankit Gadhiya  wrote:

> Thanks Paul.
>
> Copy SSTable - How? Using SSTableLoader or some other mechanism.
>
>
> *Thanks & Regards,*
> *Ankit Gadhiya*
>
>
>
> On Tue, Oct 29, 2019 at 11:36 AM Paul Carlucci 
> wrote:
>
>> Copy the schema from your source keyspace to your new target keyspace,
>> nodetool snapshot on your source keyspace, copy the SSTable files over, do
>> a rolling bounce, repair, enjoy.  In my experience a rolling bounce is
>> easier than a nodetool refresh.
>>
>> It's either that or just copy it with Spark.
>>
>> On Tue, Oct 29, 2019, 11:19 AM Ankit Gadhiya 
>> wrote:
>>
>>> Thanks Alex. So How do I copy SSTables from 1.0 to 2.0? (Same
>>> SSTableLoader or any other approach?)
>>> Also since I've multi-node cluster - I'll have to do this on every
>>> single node - is there any tool or better way to execute this just from a
>>> single node?
>>>
>>> *Thanks & Regards,*
>>> *Ankit Gadhiya*
>>>
>>>
>>>
>>> On Tue, Oct 29, 2019 at 11:16 AM Alex Ott  wrote:
>>>
 You can create all tables in new keyspace, copy SSTables from 1.0 to
 2.0 tables & use nodetool refresh on tables in KS 2.0 to say Cassandra
 about them.

 On Tue, Oct 29, 2019 at 4:10 PM Ankit Gadhiya 
 wrote:

> Hello Folks,
>
> Greetings!.
>
> I've a requirement in my project to setup Blue-Green deployment for
> Cassandra. E.x. Say My current active schema (application pointing to) is
> Keyspace V1.0 and for my next release I want to setup Keysapce 2.0 (with
> some structural changes) and all testing/validation would happen on it and
> once successful , App would switch connection to keyspace 2.0 - This would
> be generic release deployment for our project.
>
> One of the approach we thought of would be to Create keyspace 2.0 as
> clone from Keyspace 1.0 including data using sstableloader but this would
> be time consuming, also being a multi-node cluster (6+6 in each DC) - it
> wouldn't be very feasible to do this manually on all the nodes for 
> multiple
> tables part of that keyspace. Was wondering if we have any other creative
> way to suffice this requirement.
>
> Appreciate your time on this.
>
>
> *Thanks & Regards,*
> *Ankit Gadhiya*
>
>

 --
 With best wishes,Alex Ott
 http://alexott.net/
 Twitter: alexott_en (English), alexott (Russian)

>>>


RE: Keyspace Clone in Existing Cluster

2019-10-29 Thread ZAIDI, ASAD
If you're planning to restore a snapshot to a target keyspace in the same
cluster, you can:


1.  Take a snapshot and copy the snapshots to a shared volume, such as an NFS
share, so that later you can load the sstables using sstableloader from a
single node.

2.  Make sure you create the target keyspace and tables (structure only,
without data) BEFORE loading the sstables. Usually when you take a snapshot,
a file named schema.cql is created; you can create the schema objects with
this script.

3.  The names of the target keyspace/tables are important - they have to
match the sstableloader input arguments.

4.  Then run sstableloader - the data/sstables are loaded in bulk and
streamed to all the nodes in the cluster; a rough sketch follows below.
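
For step 4, roughly this (host IPs, keyspace and table names are
placeholders; the directory handed to sstableloader must end in
<keyspace>/<table> matching the target):

  # layout on the NFS share must mirror the target keyspace/table, e.g.
  #   /mnt/nfs/ks2/mytable/<sstable files from the snapshots>
  sstableloader -d 10.0.0.1,10.0.0.2 /mnt/nfs/ks2/mytable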

Thanks/Asad


From: Ankit Gadhiya [mailto:ankitgadh...@gmail.com]
Sent: Tuesday, October 29, 2019 1:20 PM
To: user@cassandra.apache.org
Subject: Re: Keyspace Clone in Existing Cluster

Thanks, folks, for your responses, but I still haven't found a concrete
solution for this.

Thanks & Regards,
Ankit Gadhiya


On Tue, Oct 29, 2019 at 2:15 PM Sergio Bilello 
mailto:lapostadiser...@gmail.com>> wrote:
Rolling bounce = rolling repair per node? Wouldn't it be easier to schedule
that with Cassandra Reaper?
On 2019/10/29 15:35:42, Paul Carlucci 
mailto:paul.carlu...@gmail.com>> wrote:
> Copy the schema from your source keyspace to your new target keyspace,
> nodetool snapshot on your source keyspace, copy the SSTable files over, do
> a rolling bounce, repair, enjoy.  In my experience a rolling bounce is
> easier than a nodetool refresh.
>
> It's either that or just copy it with Spark.
>
> On Tue, Oct 29, 2019, 11:19 AM Ankit Gadhiya 
> mailto:ankitgadh...@gmail.com>> wrote:
>
> > Thanks Alex. So How do I copy SSTables from 1.0 to 2.0? (Same
> > SSTableLoader or any other approach?)
> > Also since I've multi-node cluster - I'll have to do this on every single
> > node - is there any tool or better way to execute this just from a single
> > node?
> >
> > *Thanks & Regards,*
> > *Ankit Gadhiya*
> >
> >
> >
> > On Tue, Oct 29, 2019 at 11:16 AM Alex Ott 
> > mailto:alex...@gmail.com>> wrote:
> >
> >> You can create all tables in new keyspace, copy SSTables from 1.0 to 2.0
> >> tables & use nodetool refresh on tables in KS 2.0 to say Cassandra about
> >> them.
> >>
> >> On Tue, Oct 29, 2019 at 4:10 PM Ankit Gadhiya 
> >> mailto:ankitgadh...@gmail.com>>
> >> wrote:
> >>
> >>> Hello Folks,
> >>>
> >>> Greetings!.
> >>>
> >>> I've a requirement in my project to setup Blue-Green deployment for
> >>> Cassandra. E.x. Say My current active schema (application pointing to) is
> >>> Keyspace V1.0 and for my next release I want to setup Keysapce 2.0 (with
> >>> some structural changes) and all testing/validation would happen on it and
> >>> once successful , App would switch connection to keyspace 2.0 - This would
> >>> be generic release deployment for our project.
> >>>
> >>> One of the approach we thought of would be to Create keyspace 2.0 as
> >>> clone from Keyspace 1.0 including data using sstableloader but this would
> >>> be time consuming, also being a multi-node cluster (6+6 in each DC) - it
> >>> wouldn't be very feasible to do this manually on all the nodes for 
> >>> multiple
> >>> tables part of that keyspace. Was wondering if we have any other creative
> >>> way to suffice this requirement.
> >>>
> >>> Appreciate your time on this.
> >>>
> >>>
> >>> *Thanks & Regards,*
> >>> *Ankit Gadhiya*
> >>>
> >>>
> >>
> >> --
> >> With best wishes,Alex Ott
> >> http://alexott.net/
> >> Twitter: alexott_en (English), alexott (Russian)
> >>
> >
>

-
To unsubscribe, e-mail: 
user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: 
user-h...@cassandra.apache.org


Re: AWS instance stop and start with EBS

2019-10-29 Thread Rahul Reddy
Thanks Alex. We have 6 nodes in each DC with RF=3 and CL LOCAL_QUORUM,
and we stopped and started only one instance at a time. Though nodetool
status says all nodes are UN and system.log says Cassandra started and
began listening, the JMX exporter shows the instance stayed down longer.
How do we determine what caused Cassandra to be unavailable even though
the log says it started and is listening?
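
So far the checks on our side look roughly like this (log path is the
default for our install):

  nodetool status                    # every node shows UN
  grep 'Starting listening for CQL clients' /var/log/cassandra/system.log
  # the JMX exporter metrics, however, show the instance down for longer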

On Tue, Oct 29, 2019, 4:44 PM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:

> On Tue, Oct 29, 2019 at 9:34 PM Rahul Reddy 
> wrote:
>
>>
>> We have our infrastructure on aws and we use ebs storage . And aws was
>> retiring on of the node. Since our storage was persistent we did nodetool
>> drain and stopped and start the instance . This caused 500 errors in the
>> service. We have local_quorum and rf=3 why does stopping one instance cause
>> application to have issues?
>>
>
> Can you still look up what was the underlying error from Cassandra driver
> in the application logs?  Was it request timeout or not enough replicas?
>
> For example, if you only had 3 Cassandra nodes, restarting one of them
> reduces your cluster capacity by 33% temporarily.
>
> Cheers,
> --
> Alex
>
>


[RELEASE] Apache Cassandra 4.0-alpha2 released

2019-10-29 Thread Michael Shuler
The Cassandra team is pleased to announce the release of Apache 
Cassandra version 4.0-alpha2.


Apache Cassandra is a fully distributed database. It is the right choice 
when you need scalability and high availability without compromising 
performance.


 http://cassandra.apache.org/

Downloads of source and binary distributions:

http://www.apache.org/dyn/closer.lua/cassandra/4.0-alpha2/apache-cassandra-4.0-alpha2-bin.tar.gz

http://www.apache.org/dyn/closer.lua/cassandra/4.0-alpha2/apache-cassandra-4.0-alpha2-src.tar.gz

Debian and Redhat configurations

 sources.list:
 deb http://www.apache.org/dist/cassandra/debian 40x main

 yum config:
 baseurl=https://www.apache.org/dist/cassandra/redhat/40x/

See http://cassandra.apache.org/download/ for full install instructions.

This is an ALPHA version! It is not intended for production use; however, 
the project would appreciate your testing and feedback to make the final 
release better. As always, please pay attention to the release notes[2] 
and let us know[3] if you encounter any problems.


Enjoy!

[1]: CHANGES.txt 
https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-4.0-alpha2
[2]: NEWS.txt 
https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=NEWS.txt;hb=refs/tags/cassandra-4.0-alpha2

[3]: https://issues.apache.org/jira/browse/CASSANDRA

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



[RELEASE] Apache Cassandra 3.11.5 released

2019-10-29 Thread Michael Shuler
The Cassandra team is pleased to announce the release of Apache 
Cassandra version 3.11.5.


Apache Cassandra is a fully distributed database. It is the right choice 
when you need scalability and high availability without compromising 
performance.


 http://cassandra.apache.org/

Downloads of source and binary distributions are listed in our download 
section:


 http://cassandra.apache.org/download/

This version is a bug fix release[1] on the 3.11 series. As always, 
please pay attention to the release notes[2] and let us know[3] if you 
encounter any problems.


Enjoy!

[1]: CHANGES.txt 
https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-3.11.5
[2]: NEWS.txt 
https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=NEWS.txt;hb=refs/tags/cassandra-3.11.5

[3]: https://issues.apache.org/jira/browse/CASSANDRA

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



[RELEASE] Apache Cassandra 3.0.19 released

2019-10-29 Thread Michael Shuler
The Cassandra team is pleased to announce the release of Apache 
Cassandra version 3.0.19.


Apache Cassandra is a fully distributed database. It is the right choice 
when you need scalability and high availability without compromising 
performance.


 http://cassandra.apache.org/

Downloads of source and binary distributions are listed in our download 
section:


 http://cassandra.apache.org/download/

This version is a bug fix release[1] on the 3.0 series. As always, 
please pay attention to the release notes[2] and let us know[3] if you 
encounter any problems.


Enjoy!

[1]: CHANGES.txt 
https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-3.0.19
[2]: NEWS.txt 
https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=NEWS.txt;hb=refs/tags/cassandra-3.0.19

[3]: https://issues.apache.org/jira/browse/CASSANDRA

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



[RELEASE] Apache Cassandra 2.2.15 released

2019-10-29 Thread Michael Shuler
The Cassandra team is pleased to announce the release of Apache 
Cassandra version 2.2.15.


Apache Cassandra is a fully distributed database. It is the right choice 
when you need scalability and high availability without compromising 
performance.


 http://cassandra.apache.org/

Downloads of source and binary distributions are listed in our download 
section:


 http://cassandra.apache.org/download/

This version is a bug fix release[1] on the 2.2 series. As always, 
please pay attention to the release notes[2] and let us know[3] if you 
encounter any problems.


Enjoy!

[1]: CHANGES.txt 
https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-2.2.15
[2]: NEWS.txt 
https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=NEWS.txt;hb=refs/tags/cassandra-2.2.15

[3]: https://issues.apache.org/jira/browse/CASSANDRA

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Re: AWS instance stop and start with EBS

2019-10-29 Thread Oleksandr Shulgin
On Tue, Oct 29, 2019 at 9:34 PM Rahul Reddy 
wrote:

>
> We have our infrastructure on aws and we use ebs storage . And aws was
> retiring on of the node. Since our storage was persistent we did nodetool
> drain and stopped and start the instance . This caused 500 errors in the
> service. We have local_quorum and rf=3 why does stopping one instance cause
> application to have issues?
>

Can you still look up what was the underlying error from Cassandra driver
in the application logs?  Was it request timeout or not enough replicas?

For example, if you only had 3 Cassandra nodes, restarting one of them
reduces your cluster capacity by 33% temporarily.

Cheers,
-- 
Alex


AWS instance stop and start with EBS

2019-10-29 Thread Rahul Reddy
Hello,

We have our infrastructure on AWS and we use EBS storage, and AWS was
retiring one of the nodes. Since our storage is persistent, we did a
nodetool drain and then stopped and started the instance. This caused 500
errors in the service. We have LOCAL_QUORUM and RF=3; why does stopping one
instance cause the application to have issues?
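
For reference, the sequence we ran on the node was roughly the following
(service name shown with systemctl as an assumption; it depends on the
install):

  nodetool drain                  # flush memtables, stop accepting connections
  sudo systemctl stop cassandra
  # ... AWS instance stop / start ...
  sudo systemctl start cassandra
  nodetool status                 # node shows UN again afterwards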


Cassandra and UTF-8 BOM?

2019-10-29 Thread James A. Robinson
Hi folks,

I'm looking at a table that has a primary key defined as "publisher_id
text".  I've noticed some of the entries have what appears to me to be
a UTF-8 BOM marker and some do not.

https://docs.datastax.com/en/archived/cql/3.3/cql/cql_reference/cql_data_types_c.html
says text is a UTF-8 encoded string.  If I look at the first 3 bytes
of one of these columns:

$ dd if=~/tmp/sample.data of=/dev/stdout bs=1 count=3 2>/dev/null | hexdump
000 bbef 00bf
003

When I swap the byte order:

$ dd if=~/tmp/sample.data of=/dev/stdout bs=1 count=3 conv=swab
2>/dev/null | hexdump
000 efbb 00bf
003

And I think this matches the UTF-8 BOM.

However, not all the rows have this prefix, and I'm wondering if this
is a client issue (client being inconsistent about  how it's dealing
with strings) or if Cassandra is doing something special on its own.
The rest of the column falls within the US-ASCII codepoint compatible
range of UTF-8, e.g., something as simple as 'abc' but in some cases
it's got this marker in front of it.

Cassandra is treating 'abc' as a distinct value from the BOM-prefixed
'abc', which certainly makes sense; for the sake of efficiency I assume
it'd just be looking at the byte-for-byte values w/o layering meaning on
top of it.  But that means I'll need to clean the data up to be
consistent, and I need to figure out how to prevent it from being
reintroduced in the future.
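
One way I'm considering doing the cleanup, as a rough sketch (keyspace,
table and column names below are placeholders, and re-inserting a cleaned
key creates a new row, so the old BOM-prefixed row would still need a
DELETE):

  # export the key column and count how many values carry the UTF-8 BOM
  cqlsh -e "COPY myks.mytable (publisher_id) TO 'keys.csv'"
  grep -c $'\xef\xbb\xbf' keys.csv

  # produce a BOM-free copy of the keys to drive the re-insert / DELETE scripting
  sed $'s/\xef\xbb\xbf//g' keys.csv > keys_clean.csv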

Jim

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Re: Keyspace Clone in Existing Cluster

2019-10-29 Thread Ankit Gadhiya
Thanks, folks, for your responses, but I still haven't found a concrete
solution for this.

*Thanks & Regards,*
*Ankit Gadhiya*



On Tue, Oct 29, 2019 at 2:15 PM Sergio Bilello 
wrote:

> Rolling bounce = Rolling repair per node? Would not it be easy to be
> scheduled with Cassandra Reaper?
> On 2019/10/29 15:35:42, Paul Carlucci  wrote:
> > Copy the schema from your source keyspace to your new target keyspace,
> > nodetool snapshot on your source keyspace, copy the SSTable files over,
> do
> > a rolling bounce, repair, enjoy.  In my experience a rolling bounce is
> > easier than a nodetool refresh.
> >
> > It's either that or just copy it with Spark.
> >
> > On Tue, Oct 29, 2019, 11:19 AM Ankit Gadhiya 
> wrote:
> >
> > > Thanks Alex. So How do I copy SSTables from 1.0 to 2.0? (Same
> > > SSTableLoader or any other approach?)
> > > Also since I've multi-node cluster - I'll have to do this on every
> single
> > > node - is there any tool or better way to execute this just from a
> single
> > > node?
> > >
> > > *Thanks & Regards,*
> > > *Ankit Gadhiya*
> > >
> > >
> > >
> > > On Tue, Oct 29, 2019 at 11:16 AM Alex Ott  wrote:
> > >
> > >> You can create all tables in new keyspace, copy SSTables from 1.0 to
> 2.0
> > >> tables & use nodetool refresh on tables in KS 2.0 to say Cassandra
> about
> > >> them.
> > >>
> > >> On Tue, Oct 29, 2019 at 4:10 PM Ankit Gadhiya  >
> > >> wrote:
> > >>
> > >>> Hello Folks,
> > >>>
> > >>> Greetings!.
> > >>>
> > >>> I've a requirement in my project to setup Blue-Green deployment for
> > >>> Cassandra. E.x. Say My current active schema (application pointing
> to) is
> > >>> Keyspace V1.0 and for my next release I want to setup Keysapce 2.0
> (with
> > >>> some structural changes) and all testing/validation would happen on
> it and
> > >>> once successful , App would switch connection to keyspace 2.0 - This
> would
> > >>> be generic release deployment for our project.
> > >>>
> > >>> One of the approach we thought of would be to Create keyspace 2.0 as
> > >>> clone from Keyspace 1.0 including data using sstableloader but this
> would
> > >>> be time consuming, also being a multi-node cluster (6+6 in each DC)
> - it
> > >>> wouldn't be very feasible to do this manually on all the nodes for
> multiple
> > >>> tables part of that keyspace. Was wondering if we have any other
> creative
> > >>> way to suffice this requirement.
> > >>>
> > >>> Appreciate your time on this.
> > >>>
> > >>>
> > >>> *Thanks & Regards,*
> > >>> *Ankit Gadhiya*
> > >>>
> > >>>
> > >>
> > >> --
> > >> With best wishes,Alex Ott
> > >> http://alexott.net/
> > >> Twitter: alexott_en (English), alexott (Russian)
> > >>
> > >
> >
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


Re: Keyspace Clone in Existing Cluster

2019-10-29 Thread Sergio Bilello
Rolling bounce = rolling repair per node? Wouldn't it be easier to schedule
that with Cassandra Reaper?
On 2019/10/29 15:35:42, Paul Carlucci  wrote: 
> Copy the schema from your source keyspace to your new target keyspace,
> nodetool snapshot on your source keyspace, copy the SSTable files over, do
> a rolling bounce, repair, enjoy.  In my experience a rolling bounce is
> easier than a nodetool refresh.
> 
> It's either that or just copy it with Spark.
> 
> On Tue, Oct 29, 2019, 11:19 AM Ankit Gadhiya  wrote:
> 
> > Thanks Alex. So How do I copy SSTables from 1.0 to 2.0? (Same
> > SSTableLoader or any other approach?)
> > Also since I've multi-node cluster - I'll have to do this on every single
> > node - is there any tool or better way to execute this just from a single
> > node?
> >
> > *Thanks & Regards,*
> > *Ankit Gadhiya*
> >
> >
> >
> > On Tue, Oct 29, 2019 at 11:16 AM Alex Ott  wrote:
> >
> >> You can create all tables in new keyspace, copy SSTables from 1.0 to 2.0
> >> tables & use nodetool refresh on tables in KS 2.0 to say Cassandra about
> >> them.
> >>
> >> On Tue, Oct 29, 2019 at 4:10 PM Ankit Gadhiya 
> >> wrote:
> >>
> >>> Hello Folks,
> >>>
> >>> Greetings!.
> >>>
> >>> I've a requirement in my project to setup Blue-Green deployment for
> >>> Cassandra. E.x. Say My current active schema (application pointing to) is
> >>> Keyspace V1.0 and for my next release I want to setup Keysapce 2.0 (with
> >>> some structural changes) and all testing/validation would happen on it and
> >>> once successful , App would switch connection to keyspace 2.0 - This would
> >>> be generic release deployment for our project.
> >>>
> >>> One of the approach we thought of would be to Create keyspace 2.0 as
> >>> clone from Keyspace 1.0 including data using sstableloader but this would
> >>> be time consuming, also being a multi-node cluster (6+6 in each DC) - it
> >>> wouldn't be very feasible to do this manually on all the nodes for 
> >>> multiple
> >>> tables part of that keyspace. Was wondering if we have any other creative
> >>> way to suffice this requirement.
> >>>
> >>> Appreciate your time on this.
> >>>
> >>>
> >>> *Thanks & Regards,*
> >>> *Ankit Gadhiya*
> >>>
> >>>
> >>
> >> --
> >> With best wishes,Alex Ott
> >> http://alexott.net/
> >> Twitter: alexott_en (English), alexott (Russian)
> >>
> >
> 

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Re: Keyspace Clone in Existing Cluster

2019-10-29 Thread Ankit Gadhiya
Thanks Paul.

Copy SSTables - how? Using sstableloader or some other mechanism?


*Thanks & Regards,*
*Ankit Gadhiya*



On Tue, Oct 29, 2019 at 11:36 AM Paul Carlucci 
wrote:

> Copy the schema from your source keyspace to your new target keyspace,
> nodetool snapshot on your source keyspace, copy the SSTable files over, do
> a rolling bounce, repair, enjoy.  In my experience a rolling bounce is
> easier than a nodetool refresh.
>
> It's either that or just copy it with Spark.
>
> On Tue, Oct 29, 2019, 11:19 AM Ankit Gadhiya 
> wrote:
>
>> Thanks Alex. So How do I copy SSTables from 1.0 to 2.0? (Same
>> SSTableLoader or any other approach?)
>> Also since I've multi-node cluster - I'll have to do this on every single
>> node - is there any tool or better way to execute this just from a single
>> node?
>>
>> *Thanks & Regards,*
>> *Ankit Gadhiya*
>>
>>
>>
>> On Tue, Oct 29, 2019 at 11:16 AM Alex Ott  wrote:
>>
>>> You can create all tables in new keyspace, copy SSTables from 1.0 to 2.0
>>> tables & use nodetool refresh on tables in KS 2.0 to say Cassandra about
>>> them.
>>>
>>> On Tue, Oct 29, 2019 at 4:10 PM Ankit Gadhiya 
>>> wrote:
>>>
 Hello Folks,

 Greetings!.

 I've a requirement in my project to setup Blue-Green deployment for
 Cassandra. E.x. Say My current active schema (application pointing to) is
 Keyspace V1.0 and for my next release I want to setup Keysapce 2.0 (with
 some structural changes) and all testing/validation would happen on it and
 once successful , App would switch connection to keyspace 2.0 - This would
 be generic release deployment for our project.

 One of the approach we thought of would be to Create keyspace 2.0 as
 clone from Keyspace 1.0 including data using sstableloader but this would
 be time consuming, also being a multi-node cluster (6+6 in each DC) - it
 wouldn't be very feasible to do this manually on all the nodes for multiple
 tables part of that keyspace. Was wondering if we have any other creative
 way to suffice this requirement.

 Appreciate your time on this.


 *Thanks & Regards,*
 *Ankit Gadhiya*


>>>
>>> --
>>> With best wishes,Alex Ott
>>> http://alexott.net/
>>> Twitter: alexott_en (English), alexott (Russian)
>>>
>>


Re: Keyspace Clone in Existing Cluster

2019-10-29 Thread Paul Carlucci
Copy the schema from your source keyspace to your new target keyspace,
nodetool snapshot on your source keyspace, copy the SSTable files over, do
a rolling bounce, repair, enjoy.  In my experience a rolling bounce is
easier than a nodetool refresh.

It's either that or just copy it with Spark.
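
If it helps, the rough shape of that sequence is below (keyspace names are
placeholders, and the sed rename of the schema dump is naive, so review the
result before replaying it):

  # copy the schema under the new keyspace name
  cqlsh -e "DESCRIBE KEYSPACE ks1" > ks1_schema.cql
  sed 's/ks1/ks2/g' ks1_schema.cql | cqlsh

  # snapshot the source keyspace on every node, copy the SSTable files
  # into the matching ks2 table directories, then bounce (or refresh)
  nodetool snapshot -t clone ks1

  # finish with a repair of the new keyspace
  nodetool repair ks2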

On Tue, Oct 29, 2019, 11:19 AM Ankit Gadhiya  wrote:

> Thanks Alex. So How do I copy SSTables from 1.0 to 2.0? (Same
> SSTableLoader or any other approach?)
> Also since I've multi-node cluster - I'll have to do this on every single
> node - is there any tool or better way to execute this just from a single
> node?
>
> *Thanks & Regards,*
> *Ankit Gadhiya*
>
>
>
> On Tue, Oct 29, 2019 at 11:16 AM Alex Ott  wrote:
>
>> You can create all tables in new keyspace, copy SSTables from 1.0 to 2.0
>> tables & use nodetool refresh on tables in KS 2.0 to say Cassandra about
>> them.
>>
>> On Tue, Oct 29, 2019 at 4:10 PM Ankit Gadhiya 
>> wrote:
>>
>>> Hello Folks,
>>>
>>> Greetings!.
>>>
>>> I've a requirement in my project to setup Blue-Green deployment for
>>> Cassandra. E.x. Say My current active schema (application pointing to) is
>>> Keyspace V1.0 and for my next release I want to setup Keysapce 2.0 (with
>>> some structural changes) and all testing/validation would happen on it and
>>> once successful , App would switch connection to keyspace 2.0 - This would
>>> be generic release deployment for our project.
>>>
>>> One of the approach we thought of would be to Create keyspace 2.0 as
>>> clone from Keyspace 1.0 including data using sstableloader but this would
>>> be time consuming, also being a multi-node cluster (6+6 in each DC) - it
>>> wouldn't be very feasible to do this manually on all the nodes for multiple
>>> tables part of that keyspace. Was wondering if we have any other creative
>>> way to suffice this requirement.
>>>
>>> Appreciate your time on this.
>>>
>>>
>>> *Thanks & Regards,*
>>> *Ankit Gadhiya*
>>>
>>>
>>
>> --
>> With best wishes,Alex Ott
>> http://alexott.net/
>> Twitter: alexott_en (English), alexott (Russian)
>>
>


Re: Keyspace Clone in Existing Cluster

2019-10-29 Thread Ankit Gadhiya
Thanks Alex. So how do I copy the SSTables from 1.0 to 2.0? (The same
sstableloader, or some other approach?)
Also, since I have a multi-node cluster, I'll have to do this on every single
node - is there any tool or better way to execute this from just a single
node?

*Thanks & Regards,*
*Ankit Gadhiya*



On Tue, Oct 29, 2019 at 11:16 AM Alex Ott  wrote:

> You can create all tables in new keyspace, copy SSTables from 1.0 to 2.0
> tables & use nodetool refresh on tables in KS 2.0 to say Cassandra about
> them.
>
> On Tue, Oct 29, 2019 at 4:10 PM Ankit Gadhiya 
> wrote:
>
>> Hello Folks,
>>
>> Greetings!.
>>
>> I've a requirement in my project to setup Blue-Green deployment for
>> Cassandra. E.x. Say My current active schema (application pointing to) is
>> Keyspace V1.0 and for my next release I want to setup Keysapce 2.0 (with
>> some structural changes) and all testing/validation would happen on it and
>> once successful , App would switch connection to keyspace 2.0 - This would
>> be generic release deployment for our project.
>>
>> One of the approach we thought of would be to Create keyspace 2.0 as
>> clone from Keyspace 1.0 including data using sstableloader but this would
>> be time consuming, also being a multi-node cluster (6+6 in each DC) - it
>> wouldn't be very feasible to do this manually on all the nodes for multiple
>> tables part of that keyspace. Was wondering if we have any other creative
>> way to suffice this requirement.
>>
>> Appreciate your time on this.
>>
>>
>> *Thanks & Regards,*
>> *Ankit Gadhiya*
>>
>>
>
> --
> With best wishes,Alex Ott
> http://alexott.net/
> Twitter: alexott_en (English), alexott (Russian)
>


Re: Keyspace Clone in Existing Cluster

2019-10-29 Thread Alex Ott
You can create all the tables in the new keyspace, copy the SSTables from
the 1.0 tables to the 2.0 tables, and use nodetool refresh on the tables in
KS 2.0 to tell Cassandra about them.
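
A minimal sketch of that last step, once the copied files are in place
under the KS 2.0 data directories on each node (table names here are
placeholders):

  for t in table1 table2 table3; do
    nodetool refresh ks2 "$t"
  done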

On Tue, Oct 29, 2019 at 4:10 PM Ankit Gadhiya 
wrote:

> Hello Folks,
>
> Greetings!.
>
> I've a requirement in my project to setup Blue-Green deployment for
> Cassandra. E.x. Say My current active schema (application pointing to) is
> Keyspace V1.0 and for my next release I want to setup Keysapce 2.0 (with
> some structural changes) and all testing/validation would happen on it and
> once successful , App would switch connection to keyspace 2.0 - This would
> be generic release deployment for our project.
>
> One of the approach we thought of would be to Create keyspace 2.0 as clone
> from Keyspace 1.0 including data using sstableloader but this would be time
> consuming, also being a multi-node cluster (6+6 in each DC) - it wouldn't
> be very feasible to do this manually on all the nodes for multiple tables
> part of that keyspace. Was wondering if we have any other creative way to
> suffice this requirement.
>
> Appreciate your time on this.
>
>
> *Thanks & Regards,*
> *Ankit Gadhiya*
>
>

-- 
With best wishes,
Alex Ott
http://alexott.net/
Twitter: alexott_en (English), alexott (Russian)


Keyspace Clone in Existing Cluster

2019-10-29 Thread Ankit Gadhiya
Hello Folks,

Greetings!

I have a requirement in my project to set up Blue-Green deployment for
Cassandra. For example, say my current active schema (the one the
application points to) is Keyspace 1.0, and for my next release I want to
set up Keyspace 2.0 (with some structural changes); all testing/validation
would happen on it, and once successful the app would switch its connection
to Keyspace 2.0. This would be the generic release deployment for our
project.

One of the approaches we thought of would be to create Keyspace 2.0 as a
clone of Keyspace 1.0, including data, using sstableloader, but this would
be time consuming; also, being a multi-node cluster (6+6 in each DC), it
wouldn't be very feasible to do this manually on all the nodes for the
multiple tables in that keyspace. I was wondering if we have any other
creative way to satisfy this requirement.

Appreciate your time on this.


*Thanks & Regards,*
*Ankit Gadhiya*