Re: [ANNOUNCE] Stratio's Lucene plugin fork

2018-10-30 Thread Jonathan Haddad
Very cool Ben, thanks for sharing!

On Tue, Oct 30, 2018 at 6:14 PM Ben Slater 
wrote:

> For anyone who is interested, we’ve published a blog with some more
> background on this and some more detail of our ongoing plans:
> https://www.instaclustr.com/instaclustr-support-cassandra-lucene-index/
>
> Cheers
> Ben
>
> On Fri, 19 Oct 2018 at 09:42 kurt greaves  wrote:
>
>> Hi all,
>>
>> We've had confirmation from Stratio that they are no longer maintaining
>> their Lucene plugin for Apache Cassandra. We've thus decided to fork the
>> plugin to continue maintaining it. At this stage we won't be making any
>> additions to the plugin in the short term unless absolutely necessary, and
>> as 4.0 nears we'll begin making it compatible with the new major release.
>> We plan on taking the existing PRs and issues from the Stratio repository
>> and getting them merged/resolved; however, this likely won't happen until
>> early next year. Having said that, we welcome all contributions and will
>> dedicate time to reviewing bugs in the current versions if people lodge
>> them and can help.
>>
>> I'll note that this is new ground for us; we don't have much existing
>> knowledge of the plugin, but we are determined to learn. If anyone out there
>> has established knowledge about the plugin we'd be grateful for any
>> assistance!
>>
>> You can find our fork here:
>> https://github.com/instaclustr/cassandra-lucene-index
>> At the moment, the only difference is that there is a 3.11.3 branch which
>> just has some minor changes to dependencies to better support 3.11.3.
>>
>> Cheers,
>> Kurt
>>
> --
>
>
> *Ben Slater*
>
> *Chief Product Officer*
>
> Read our latest technical blog posts here.
>


-- 
Jon Haddad
http://www.rustyrazorblade.com
twitter: rustyrazorblade


Re: [ANNOUNCE] Stratio's Lucene plugin fork

2018-10-30 Thread Ben Slater
For anyone who is interested, we’ve published a blog with some more
background on this and some more detail of our ongoing plans:
https://www.instaclustr.com/instaclustr-support-cassandra-lucene-index/

Cheers
Ben

On Fri, 19 Oct 2018 at 09:42 kurt greaves  wrote:

> Hi all,
>
> We've had confirmation from Stratio that they are no longer maintaining
> their Lucene plugin for Apache Cassandra. We've thus decided to fork the
> plugin to continue maintaining it. At this stage we won't be making any
> additions to the plugin in the short term unless absolutely necessary, and
> as 4.0 nears we'll begin making it compatible with the new major release.
> We plan on taking the existing PRs and issues from the Stratio repository
> and getting them merged/resolved; however, this likely won't happen until
> early next year. Having said that, we welcome all contributions and will
> dedicate time to reviewing bugs in the current versions if people lodge
> them and can help.
>
> I'll note that this is new ground for us; we don't have much existing
> knowledge of the plugin, but we are determined to learn. If anyone out there
> has established knowledge about the plugin we'd be grateful for any
> assistance!
>
> You can find our fork here:
> https://github.com/instaclustr/cassandra-lucene-index
> At the moment, the only difference is that there is a 3.11.3 branch which
> just has some minor changes to dependencies to better support 3.11.3.
>
> Cheers,
> Kurt
>
-- 


*Ben Slater*

*Chief Product Officer*

Read our latest technical blog posts here.



Re: Cassandra | Cross Data Centre Replication Status

2018-10-30 Thread Jonathan Haddad
You need to run "nodetool rebuild -- " on each node in
the new DC to get the old data to replicate.  It doesn't do it
automatically because Cassandra has no way of knowing if you're done adding
nodes and if it were to migrate automatically, it could cause a lot of
problems. Imagine streaming 100 nodes data to 3 nodes in the new DC, not
fun.
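
A minimal sketch of that step, using the DC names from the question below
(AWS_Sgp is the source DC; run this on each node in the new DC, a few nodes
at a time):

    # Stream the existing (Singapore) data onto this new-DC node:
    nodetool rebuild -- AWS_Sgp

    # Watch streaming progress while it runs:
    nodetool netstats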

On Tue, Oct 30, 2018 at 1:59 PM Akshay Bhardwaj <
akshay.bhardwaj1...@gmail.com> wrote:

> Hi Experts,
>
> I previously had 1 Cassandra data centre in AWS Singapore region with 5
> nodes, with my keyspace's replication factor as 3 in Network topology.
>
> After this cluster had been running smoothly for 4 months (500 GB of data
> on each node's disk), I added a 2nd data centre in AWS Mumbai region, again
> with 5 nodes in Network topology.
>
> After updating my keyspace's replication factor to
> {"AWS_Sgp":3,"AWS_Mum":3}, my expectation was that the data already present
> in the Sgp region would immediately start replicating to the Mum region's
> nodes. However, even after 2 weeks the historical data has not been
> replicated, though new data written in the Sgp region does appear in the
> Mum region.
>
> Any help or suggestions to debug this issue will be highly appreciated.
>
> Regards
> Akshay Bhardwaj
> +91-97111-33849
>


-- 
Jon Haddad
http://www.rustyrazorblade.com
twitter: rustyrazorblade


Cassandra | Cross Data Centre Replication Status

2018-10-30 Thread Akshay Bhardwaj
Hi Experts,

I previously had 1 Cassandra data centre in AWS Singapore region with 5
nodes, with my keyspace's replication factor as 3 in Network topology.

After this cluster had been running smoothly for 4 months (500 GB of data
on each node's disk), I added a 2nd data centre in AWS Mumbai region, again
with 5 nodes in Network topology.

After updating my keyspace's replication factor to
{"AWS_Sgp":3,"AWS_Mum":3}, my expectation was that the data already present
in the Sgp region would immediately start replicating to the Mum region's
nodes. However, even after 2 weeks the historical data has not been
replicated, though new data written in the Sgp region does appear in the
Mum region.
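
For reference, a sketch of the change described above (the keyspace name
my_ks is illustrative; the DC names must match what nodetool status reports):

    cqlsh -e "ALTER KEYSPACE my_ks WITH replication = {'class': 'NetworkTopologyStrategy', 'AWS_Sgp': 3, 'AWS_Mum': 3};"

    # Pre-existing data does not stream on its own; as the reply above notes,
    # run on each node in the new DC:
    nodetool rebuild -- AWS_Sgp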

Any help or suggestions to debug this issue will be highly appreciated.

Regards
Akshay Bhardwaj
+91-97111-33849


Re: comprehensive list of checks before rolling version upgrades

2018-10-30 Thread DuyHai Doan
To add to your excellent list:

- no topology changes (joining/leaving/decommissioning nodes)
- no index/MV rebuilds under way (quick check sketch below)
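
A quick way to eyeball both of those, as a sketch (nodetool output formats
vary slightly between versions):

    # Prints any node that is not Up/Normal (UJ = joining, UL = leaving,
    # UM = moving, D* = down), i.e. a topology change or outage in flight:
    nodetool status | awk '/^[UD][NJLM]/ && $1 != "UN"'

    # Secondary index and materialized view builds show up as compaction
    # tasks:
    nodetool compactionstats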

On Tue, Oct 30, 2018 at 4:35 PM Carl Mueller
 wrote:

> Does anyone have a pretty comprehensive list of these? There are many that I
> don't currently know how to check, but I'm researching...
>
> I've seen:
>
> - verify disk space available for snapshot + sstablerewrite
> - gossip state agreement, all nodes are healthy
> - schema state agreement
> - ability to access all the nodes
> - no repairs, upgradesstables, and cleans underway
> - read repair/hinted handoff is not backed up
>
> Other possibles:
> - repair state? can we get away with unrepaired data?
> - pending tasks?
> - streaming state/tasks?
>
>


RE: [EXTERNAL] Re: rolling version upgrade, upgradesstables, and vulnerability window

2018-10-30 Thread Durity, Sean R
Just to pile on:

I agree. On our upgrades, I always aim to get the binary part done on all nodes
before worrying about upgradesstables. The upgrade is one node at a time
(precautionary). Upgradesstables timing depends on cluster size, data size,
compaction throughput, etc. I usually start by running upgradesstables on 2
nodes per DC and watch how the application performs. On larger clusters (over
30 nodes), I usually work up to 4-5 nodes per DC running upgradesstables with
staggered start times.

NOTE: I am rarely doing streaming operations outside of repairs. But I want to 
be able to handle a down node, etc., so I do not run in mixed version mode very 
long.
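
As a sketch of that staggered pattern (host names are illustrative; assumes
SSH access to each node and that nodetool runs locally there):

    # Stage 1: two nodes per DC; watch application metrics before widening.
    for host in dc1-node1 dc1-node2 dc2-node1 dc2-node2; do
      ssh "$host" 'nohup nodetool upgradesstables > upgradesstables.log 2>&1 &'
    done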


Sean Durity

From: Carl Mueller 
Sent: Tuesday, October 30, 2018 11:51 AM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Re: rolling version upgrade, upgradesstables, and 
vulnerability window

Thank you very much. I couldn't find any definitive answer on that on the list 
or stackoverflow.

It's clear that the safest approach for a prod cluster is a rolling version
upgrade of the binary, then upgradesstables.

I will strongly consider cstar for the upgradesstables


On Tue, Oct 30, 2018 at 10:39 AM Alexander Dejanovski
<a...@thelastpickle.com> wrote:
Yes, as the new version can read both the old and the new sstable formats.

Restrictions only apply when the cluster is in mixed versions.

On Tue, Oct 30, 2018 at 4:37 PM Carl Mueller
<carl.muel...@smartthings.com.invalid> wrote:
But the topology change restrictions are only in place while there are 
heterogeneous versions in the cluster? All the nodes at the upgraded version
with "degraded" sstables does NOT preclude topology changes or node 
replacement/addition?


On Tue, Oct 30, 2018 at 10:33 AM Jeff Jirsa
<jji...@gmail.com> wrote:
Wait for 3.11.4 to be cut

I also vote for doing all the binary bounces and upgradesstables after the 
fact, largely because normal writes/compactions are going to naturally start 
upgrading sstables anyway, and there are some hard restrictions on mixed mode 
(e.g. schema changes won’t cross version) that can be far more impactful.



--
Jeff Jirsa


> On Oct 30, 2018, at 8:21 AM, Carl Mueller
> <carl.muel...@smartthings.com.INVALID> wrote:
>
> We are about to finally embark on some version upgrades for lots of clusters, 
> 2.1.x and 2.2.x, eventually targeting 3.11.x
>
> I have seen recipes that do the full binary upgrade + upgrade sstables for 1 
> node before moving forward, while I've seen a 2016 vote by Jon Haddad (a TLP 
> guy) that backs doing the binary version upgrades through the cluster on a 
> rolling basis, then doing the upgradesstables on a rolling basis.
>
> Under what cluster conditions are streaming/node replacement precluded, that
> is, where are we vulnerable to a cloud provider dumping one of our nodes under
> us or to hardware failure? We ain't apple, but we do have 30+ node datacenters and
> 80-100 node clusters.
>
> Are node replacement and streaming only disabled while there are
> heterogeneous cassandra versions, or until all the sstables have been upgraded
> in the cluster?
>
> My instincts tell me the best thing to do is to get all the cassandra nodes 
> to the same version without the upgradesstables step through the cluster, and 
> then roll through the upgradesstables as needed, and that upgradesstables is 
> a node-local concern that doesn't impact streaming or node replacement or 
> other situations since cassandra can read old version sstables and new 
> sstables would simply be the new format.

-
To unsubscribe, e-mail: 
user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: 
user-h...@cassandra.apache.org
--
-
Alexander Dejanovski
France
@alexanderdeja

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com




Re: Cassandra C++ driver for C++98

2018-10-30 Thread Michael Penick
Pretty much any version, including the most current.

On Mon, Oct 29, 2018 at 4:29 PM, Amit Plaha  wrote:

> Hi Mike,
>
> Thanks for the response. Can you let me know which version of the driver
> can build with C++98?
>
> Regards,
> Amit
>
> On Fri, Oct 26, 2018 at 8:53 AM Michael Penick <
> michael.pen...@datastax.com> wrote:
>
>> Those changes where for testing code only.
>>
>> https://github.com/datastax/cpp-driver/commit/ffc9bbd8747b43ad5dcef749fe4c63ff245fcf74
>>
>> The driver has compiled fine with C++98 for some time now. The
>> encoding/decoding doesn't make any assumptions about endianness, so it should
>> work fine on a big-endian system, but that's not officially supported (YMMV).
>>
>> Mike
>>
>>
>>
>>
>> On Thu, Oct 25, 2018 at 3:17 PM, Amit Plaha  wrote:
>>
>>> Hi All,
>>>
>>> Is there any Cassandra C++ driver that works with C++98 and is also
>>> compatible with UNIX big-endian?
>>>
>>> I found this issue: https://datastax-oss.atlassian.net/browse/CPP-692
>>> which seems to have been resolved, but not sure if this is exactly what I'm
>>> looking for. Does this make the DSE driver compatible with C++98?
>>>
>>> Any pointers will be appreciated. Thanks!
>>>
>>> Regards,
>>> Amit
>>>
>>
>>


Re: rolling version upgrade, upgradesstables, and vulnerability window

2018-10-30 Thread Carl Mueller
Thank you very much. I couldn't find any definitive answer on that on the
list or stackoverflow.

It's clear that the safest approach for a prod cluster is a rolling version
upgrade of the binary, then upgradesstables.

I will strongly consider cstar for the upgradesstables


On Tue, Oct 30, 2018 at 10:39 AM Alexander Dejanovski <
a...@thelastpickle.com> wrote:

> Yes, as the new version can read both the old and the new sstable formats.
>
> Restrictions only apply when the cluster is in mixed versions.
>
> On Tue, Oct 30, 2018 at 4:37 PM Carl Mueller
>  wrote:
>
>> But the topology change restrictions are only in place while there are
>> heterogeneous versions in the cluster? All the nodes at the upgraded version
>> with "degraded" sstables does NOT preclude topology changes or node
>> replacement/addition?
>>
>>
>> On Tue, Oct 30, 2018 at 10:33 AM Jeff Jirsa  wrote:
>>
>>> Wait for 3.11.4 to be cut
>>>
>>> I also vote for doing all the binary bounces and upgradesstables after
>>> the fact, largely because normal writes/compactions are going to naturally
>>> start upgrading sstables anyway, and there are some hard restrictions on
>>> mixed mode (e.g. schema changes won’t cross version) that can be far more
>>> impactful.
>>>
>>>
>>>
>>> --
>>> Jeff Jirsa
>>>
>>>
>>> > On Oct 30, 2018, at 8:21 AM, Carl Mueller <
>>> carl.muel...@smartthings.com.INVALID> wrote:
>>> >
>>> > We are about to finally embark on some version upgrades for lots of
>>> clusters, 2.1.x and 2.2.x, eventually targeting 3.11.x
>>> >
>>> > I have seen recipes that do the full binary upgrade + upgrade sstables
>>> for 1 node before moving forward, while I've seen a 2016 vote by Jon Haddad
>>> (a TLP guy) that backs doing the binary version upgrades through the
>>> cluster on a rolling basis, then doing the upgradesstables on a rolling
>>> basis.
>>> >
>>> > Under what cluster conditions are streaming/node replacement
>>> precluded, that is, where are we vulnerable to a cloud provider dumping one of our
>>> nodes under us or to hardware failure? We ain't apple, but we do have 30+ node
>>> datacenters and 80-100 node clusters.
>>> >
>>> > Are node replacement and streaming only disabled while there are
>>> heterogeneous cassandra versions, or until all the sstables have been
>>> upgraded in the cluster?
>>> >
>>> > My instincts tell me the best thing to do is to get all the cassandra
>>> nodes to the same version without the upgradesstables step through the
>>> cluster, and then roll through the upgradesstables as needed, and that
>>> upgradesstables is a node-local concern that doesn't impact streaming or
>>> node replacement or other situations since cassandra can read old version
>>> sstables and new sstables would simply be the new format.
>>>
>>> -
>>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>>> For additional commands, e-mail: user-h...@cassandra.apache.org
>>>
>>> --
> -
> Alexander Dejanovski
> France
> @alexanderdeja
>
> Consultant
> Apache Cassandra Consulting
> http://www.thelastpickle.com
>


Re: rolling version upgrade, upgradesstables, and vulnerability window

2018-10-30 Thread Alexander Dejanovski
Yes, as the new version can read both the old and the new sstable formats.

Restrictions only apply when the cluster is in mixed versions.

On Tue, Oct 30, 2018 at 4:37 PM Carl Mueller
 wrote:

> But the topology change restrictions are only in place while there are
> heterogeneous versions in the cluster? All the nodes at the upgraded version
> with "degraded" sstables does NOT preclude topology changes or node
> replacement/addition?
>
>
> On Tue, Oct 30, 2018 at 10:33 AM Jeff Jirsa  wrote:
>
>> Wait for 3.11.4 to be cut
>>
>> I also vote for doing all the binary bounces and upgradesstables after
>> the fact, largely because normal writes/compactions are going to naturally
>> start upgrading sstables anyway, and there are some hard restrictions on
>> mixed mode (e.g. schema changes won’t cross version) that can be far more
>> impactful.
>>
>>
>>
>> --
>> Jeff Jirsa
>>
>>
>> > On Oct 30, 2018, at 8:21 AM, Carl Mueller 
>> > 
>> wrote:
>> >
>> > We are about to finally embark on some version upgrades for lots of
>> clusters, 2.1.x and 2.2.x, eventually targeting 3.11.x
>> >
>> > I have seen recipes that do the full binary upgrade + upgrade sstables
>> for 1 node before moving forward, while I've seen a 2016 vote by Jon Haddad
>> (a TLP guy) that backs doing the binary version upgrades through the
>> cluster on a rolling basis, then doing the upgradesstables on a rolling
>> basis.
>> >
>> > Under what cluster conditions are streaming/node replacement precluded,
>> that is, where are we vulnerable to a cloud provider dumping one of our nodes
>> under us or to hardware failure? We ain't apple, but we do have 30+ node
>> datacenters and 80-100 node clusters.
>> >
>> > Are node replacement and streaming only disabled while there are
>> heterogeneous cassandra versions, or until all the sstables have been
>> upgraded in the cluster?
>> >
>> > My instincts tell me the best thing to do is to get all the cassandra
>> nodes to the same version without the upgradesstables step through the
>> cluster, and then roll through the upgradesstables as needed, and that
>> upgradesstables is a node-local concern that doesn't impact streaming or
>> node replacement or other situations since cassandra can read old version
>> sstables and new sstables would simply be the new format.
>>
>> -
>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>> For additional commands, e-mail: user-h...@cassandra.apache.org
>>
>> --
-
Alexander Dejanovski
France
@alexanderdeja

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com


Re: rolling version upgrade, upgradesstables, and vulnerability window

2018-10-30 Thread Carl Mueller
But the topology change restrictions are only in place while there are
heterogeneous versions in the cluster? All the nodes at the upgraded version
with "degraded" sstables does NOT preclude topology changes or node
replacement/addition?


On Tue, Oct 30, 2018 at 10:33 AM Jeff Jirsa  wrote:

> Wait for 3.11.4 to be cut
>
> I also vote for doing all the binary bounces and upgradesstables after the
> fact, largely because normal writes/compactions are going to naturally
> start upgrading sstables anyway, and there are some hard restrictions on
> mixed mode (e.g. schema changes won’t cross version) that can be far more
> impactful.
>
>
>
> --
> Jeff Jirsa
>
>
> > On Oct 30, 2018, at 8:21 AM, Carl Mueller 
> > 
> wrote:
> >
> > We are about to finally embark on some version upgrades for lots of
> clusters, 2.1.x and 2.2.x, eventually targeting 3.11.x
> >
> > I have seen recipes that do the full binary upgrade + upgrade sstables
> for 1 node before moving forward, while I've seen a 2016 vote by Jon Haddad
> (a TLP guy) that backs doing the binary version upgrades through the
> cluster on a rolling basis, then doing the upgradesstables on a rolling
> basis.
> >
> > Under what cluster conditions are streaming/node replacement precluded,
> that is, where are we vulnerable to a cloud provider dumping one of our nodes
> under us or to hardware failure? We ain't apple, but we do have 30+ node
> datacenters and 80-100 node clusters.
> >
> > Are node replacement and streaming only disabled while there are
> heterogeneous cassandra versions, or until all the sstables have been
> upgraded in the cluster?
> >
> > My instincts tell me the best thing to do is to get all the cassandra
> nodes to the same version without the upgradesstables step through the
> cluster, and then roll through the upgradesstables as needed, and that
> upgradesstables is a node-local concern that doesn't impact streaming or
> node replacement or other situations since cassandra can read old version
> sstables and new sstables would simply be the new format.
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


nodetool listsnapshots

2018-10-30 Thread Lou DeGenaro
It seems that "nodetool listsnapshots" is unreliable?

1. when issued, nodetool listsnapshots reports there are no snapshots.
2. when navigating through the filesystem, one can see clearly that there
are snapshots
3. when issued, nodetool clearsnapshot removes them!

Some sanitized evidence below.

Is "nodetool listsnapshots" broken or it is user error?

Lou.

-

[user@node]$ nodetool version
ReleaseVersion: 3.0.9
[user@node]$ nodetool listsnapshots
Snapshot Details:
There are no snapshots
[user@node]$ nodetool tablestats -- keyspace.tablename
Keyspace: keyspace
Read Count: 15
Read Latency: 1.54693335 ms.
Write Count: 254
Write Latency: 0.021818897637795275 ms.
Pending Flushes: 0
Table: tablename
SSTable count: 0
Space used (live): 0
Space used (total): 0
Space used by snapshots (total): 0
$ du -s tablename-66cad240c8a411e89e9ad7bcfb03d529/*
0       tablename-66cad240c8a411e89e9ad7bcfb03d529/backups
714964  tablename-66cad240c8a411e89e9ad7bcfb03d529/snapshots
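
One way to cross-check what is on disk against what nodetool reports
(/var/lib/cassandra/data is the common default data directory; adjust for
your install):

    # Enumerate snapshot directories straight from the data directory:
    find /var/lib/cassandra/data -type d -name snapshots -exec du -sh {} \;

    # Then compare with what the tool claims:
    nodetool listsnapshots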


comprehensive list of checks before rolling version upgrades

2018-10-30 Thread Carl Mueller
Does anyone have a pretty comprehensive list of these? There are many that I
don't currently know how to check, but I'm researching...

I've seen:

- verify disk space available for snapshot + sstablerewrite
- gossip state agreement, all nodes are healthy
- schema state agreement
- ability to access all the nodes
- no repairs, upgradesstables, and cleans underway
- read repair/hinted handoff is not backed up

Other possibles:
- repair state? can we get away with unrepaired data?
- pending tasks?
- streaming state/tasks?
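
A first pass over several of these can be scripted, as a sketch (the data
path is the common default; adjust checks and paths to your environment):

    #!/bin/sh
    # Rough pre-upgrade sanity checks, run on each node in turn.

    # Disk headroom for snapshots + sstable rewrites:
    df -h /var/lib/cassandra/data

    # All nodes up and gossiping; a single schema version across the cluster:
    nodetool status
    nodetool describecluster

    # Nothing heavy in flight:
    nodetool compactionstats   # repairs/upgradesstables/cleanups appear here
    nodetool netstats          # streaming sessions
    nodetool tpstats           # backed-up hint delivery / read repair stages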


Re: rolling version upgrade, upgradesstables, and vulnerability window

2018-10-30 Thread Jeff Jirsa
Wait for 3.11.4 to be cut

I also vote for doing all the binary bounces and upgradesstables after the 
fact, largely because normal writes/compactions are going to naturally start 
upgrading sstables anyway, and there are some hard restrictions on mixed mode 
(e.g. schema changes won’t cross version) that can be far more impactful.



-- 
Jeff Jirsa


> On Oct 30, 2018, at 8:21 AM, Carl Mueller 
>  wrote:
> 
> We are about to finally embark on some version upgrades for lots of clusters, 
> 2.1.x and 2.2.x, eventually targeting 3.11.x
> 
> I have seen recipes that do the full binary upgrade + upgrade sstables for 1 
> node before moving forward, while I've seen a 2016 vote by Jon Haddad (a TLP 
> guy) that backs doing the binary version upgrades through the cluster on a 
> rolling basis, then doing the upgradesstables on a rolling basis.
> 
> Under what cluster conditions are streaming/node replacement precluded, that
> is, where are we vulnerable to a cloud provider dumping one of our nodes under
> us or to hardware failure? We ain't apple, but we do have 30+ node datacenters and
> 80-100 node clusters.
> 
> Are node replacement and streaming only disabled while there are
> heterogeneous cassandra versions, or until all the sstables have been upgraded
> in the cluster? 
> 
> My instincts tell me the best thing to do is to get all the cassandra nodes 
> to the same version without the upgradesstables step through the cluster, and 
> then roll through the upgradesstables as needed, and that upgradesstables is 
> a node-local concern that doesn't impact streaming or node replacement or 
> other situations since cassandra can read old version sstables and new 
> sstables would simply be the new format.

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Re: rolling version upgrade, upgradesstables, and vulnerability window

2018-10-30 Thread Alexander Dejanovski
Hi Carl,

the safest way is indeed (as suggested by Jon) to upgrade the whole cluster
as quickly as possible, and to stop all operations that could generate
streaming until all nodes are running the target version.
That includes repairs, topology changes (bootstraps, decommissions) and
rebuilds.
You should also avoid all schema changes, as they will most probably
partially fail in mixed-version clusters.

Run a rolling upgradesstables once the whole cluster is upgraded. You can
(should?) use cstar for that operation, as it can run upgradesstables with
topology awareness, leaving a quorum of replicas free of the operation at
all times.
As upgradesstables will use compaction slots, you could raise your number
of concurrent compactors to at least 4 and use "-j 2" so that
upgradesstables takes two slots, leaving 2 compactors available for
standard compactions.
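
As a sketch of that (the yaml knob is concurrent_compactors; the cstar flags
below are from memory, so treat them as illustrative and check cstar --help):

    # cassandra.yaml on each node (picked up on restart):
    #   concurrent_compactors: 4

    # On a node: rewrite sstables using two job slots, leaving two
    # compactors free for regular compactions:
    nodetool upgradesstables -j 2

    # Or cluster-wide and topology-aware with cstar:
    cstar run --command="nodetool upgradesstables -j 2" \
      --seed-host=10.0.0.1 --strategy=topology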

Cheers,

Alex, another TLP guy ;)



On Tue, Oct 30, 2018 at 4:21 PM Carl Mueller
 wrote:

> We are about to finally embark on some version upgrades for lots of
> clusters, 2.1.x and 2.2.x, eventually targeting 3.11.x
>
> I have seen recipes that do the full binary upgrade + upgrade sstables for
> 1 node before moving forward, while I've seen a 2016 vote by Jon Haddad (a
> TLP guy) that backs doing the binary version upgrades through the cluster
> on a rolling basis, then doing the upgradesstables on a rolling basis.
>
> Under what cluster conditions are streaming/node replacement precluded,
> that is, where are we vulnerable to a cloud provider dumping one of our nodes
> under us or to hardware failure? We ain't apple, but we do have 30+ node
> datacenters and 80-100 node clusters.
>
> Are node replacement and streaming only disabled while there are
> heterogeneous cassandra versions, or until all the sstables have been
> upgraded in the cluster?
>
> My instincts tell me the best thing to do is to get all the cassandra
> nodes to the same version without the upgradesstables step through the
> cluster, and then roll through the upgradesstables as needed, and that
> upgradesstables is a node-local concern that doesn't impact streaming or
> node replacement or other situations since cassandra can read old version
> sstables and new sstables would simply be the new format.
>
-- 
-
Alexander Dejanovski
France
@alexanderdeja

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com


rolling version upgrade, upgradesstables, and vulnerability window

2018-10-30 Thread Carl Mueller
We are about to finally embark on some version upgrades for lots of
clusters, 2.1.x and 2.2.x, eventually targeting 3.11.x

I have seen recipes that do the full binary upgrade + upgrade sstables for
1 node before moving forward, while I've seen a 2016 vote by Jon Haddad (a
TLP guy) that backs doing the binary version upgrades through the cluster
on a rolling basis, then doing the upgradesstables on a rolling basis.

Under what cluster conditions are streaming/node replacement precluded,
that is, where are we vulnerable to a cloud provider dumping one of our nodes
under us or to hardware failure? We ain't apple, but we do have 30+ node
datacenters and 80-100 node clusters.

Are node replacement and streaming only disabled while there are
heterogeneous cassandra versions, or until all the sstables have been
upgraded in the cluster?

My instincts tell me the best thing to do is to get all the cassandra nodes
to the same version without the upgradesstables step through the cluster,
and then roll through the upgradesstables as needed, and that
upgradesstables is a node-local concern that doesn't impact streaming or
node replacement or other situations since cassandra can read old version
sstables and new sstables would simply be the new format.


Re: [E] Re: nodetool status and node maintenance

2018-10-30 Thread Saha, Sushanta K
Thanks!

On Mon, Oct 29, 2018 at 10:03 AM Horia Mocioi 
wrote:

> Hello,
>
> Instead of parsing the output from nodetool (running nodetool is quite
> intensive), maybe you could have a Java program that monitors via JMX
> (org.apache.cassandra.net.FailureDetector).
>
> You have less burden compared to running periodically nodetool and more
> control on the things that you could do.
>
> Regards,
> Horia
>
> On fre, 2018-10-26 at 09:15 -0400, Saha, Sushanta K wrote:
>
> I have a script that parses "nodetool status" output and emails alerts if
> any node is down. So, when I stop cassandra on a node for maintenance, all
> nodes start emailing alarms.
>
> Any way to temporarily make the node under maintenance invisible from
> "nodetool status" output?
>
> Thanks
>
>

-- 

*Sushanta Saha|*MTS IV-Cslt-Sys Engrg|WebIaaS_DB Group|HQ -
* VerizonWireless O 770.797.1260  C 770.714.6555 Iaas Support Line
949-286-8810*


Re: [E] Re: nodetool status and node maintenance

2018-10-30 Thread Saha, Sushanta K
Thanks!

On Tue, Oct 30, 2018 at 1:53 AM Max C.  wrote:

> Agree - avoid parsing nodetool, if you can.  I’d add that if anyone out
> there is interested in JMX but doesn’t want to deal with Java, you should
> install Jolokia so you can interact with Cassandra’s JMX data via a
> language-independent REST-like interface.
>
> https://jolokia.org/
> 
>
> Jolokia is what made it possible for us to get out of nodetool parsing and
> into writing tools in our native language — Python.  We’re big fans!
>
> - Max
>
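
A minimal sketch of the Jolokia approach (8778 is the Jolokia agent's default
port; the MBean is the FailureDetector mentioned above, and SimpleStates is
one of its attributes; verify both against your version with a JMX browser):

    # Returns JSON mapping each endpoint to UP or DOWN; filter out the node
    # you know is under maintenance before alerting:
    curl -s http://localhost:8778/jolokia/read/org.apache.cassandra.net:type=FailureDetector/SimpleStates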
>
> On Oct 29, 2018, at 7:02 am, Horia Mocioi 
> wrote:
>
> Hello,
>
> Instead of parsing the output from nodetool (running nodetool is quite
> intensive), maybe you could have a Java program that monitors via JMX
> (org.apache.cassandra.net.FailureDetector).
>
> You have less burden compared to running periodically nodetool and more
> control on the things that you could do.
>
> Regards,
> Horia
>
> On fre, 2018-10-26 at 09:15 -0400, Saha, Sushanta K wrote:
>
> I have a script that parses "nodetool status" output and emails alerts if
> any node is down. So, when I stop cassandra on a node for maintenance, all
> nodes start emailing alarms.
>
> Any way to temporarily make the node under maintenance invisible from
> "nodetool status" output?
>
> Thanks
>
>
>

-- 

*Sushanta Saha|*MTS IV-Cslt-Sys Engrg|WebIaaS_DB Group|HQ -
* VerizonWireless O 770.797.1260  C 770.714.6555 Iaas Support Line
949-286-8810*