Re: Cassandra upgrade from 3.11.3 -> 3.11.6

2020-06-26 Thread manish khandelwal
Did you run any ALTER command during the upgrade?

There is no need to run nodetool drain before running nodetool upgradesstables.

Regards
Manish
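
[Editor's note] The per-node sequence under discussion can be sketched as follows. This is a rough illustration, not an official runbook from the thread; the package and service names are hypothetical. Note that drain runs before stopping the old version, while upgradesstables runs later, on the already-upgraded node, which is why no extra drain is needed.

```python
# Hypothetical per-node sequence for a minor Cassandra upgrade (3.11.3 -> 3.11.6).
# Commands are illustrative; adapt package/service names to your environment.
UPGRADE_STEPS = [
    ("flush memtables, stop accepting traffic", "nodetool drain"),
    ("stop the old version",                    "systemctl stop cassandra"),
    ("install the new version",                 "yum install cassandra-3.11.6"),
    ("start the new version",                   "systemctl start cassandra"),
    ("rewrite sstables in the new format",      "nodetool upgradesstables"),
]

def step_order(command):
    """Return the position of a command in the sequence."""
    return next(i for i, (_, cmd) in enumerate(UPGRADE_STEPS) if cmd == command)

for _, cmd in UPGRADE_STEPS:
    print(cmd)
```

The key ordering property is that drain precedes the stop, and upgradesstables is last.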




Re: Cassandra upgrade from 3.11.3 -> 3.11.6

2020-06-26 Thread Meenakshi Subramanyam
A quick question on the same topic: we are upgrading from 3.11.1 to 3.11.6.
We had a schema mismatch after upgrading one node. RR did not fix it and we
had to remove that node. Has anyone faced this issue?
Also, do we need to do a nodetool drain before running upgradesstables?


Meena
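
[Editor's note] A schema mismatch like the one described is visible in the "Schema versions" section of `nodetool describecluster`. A rough sketch of checking for it (the sample output and host addresses are hypothetical):

```python
import re

def schema_versions(output):
    """Parse the 'Schema versions' section of `nodetool describecluster`
    output into {schema_uuid: [hosts]}; more than one key means a mismatch."""
    versions = {}
    for line in output.splitlines():
        m = re.match(r"\s*([0-9a-f-]{36}):\s*\[(.*)\]", line)
        if m:
            versions[m.group(1)] = [h.strip() for h in m.group(2).split(",")]
    return versions

SAMPLE = """\
Cluster Information:
    Name: prod
    Schema versions:
        86afa796-d883-3932-aa73-6b017cef0d19: [10.0.0.1, 10.0.0.2]
        c2a2bb4f-5d1c-3fb9-9e9f-4a3b1c2d3e4f: [10.0.0.3]
"""

versions = schema_versions(SAMPLE)
print("schema mismatch" if len(versions) > 1 else "schema agreement")  # -> schema mismatch
```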

Re: Cassandra upgrade from 3.11.3 -> 3.11.6

2020-06-24 Thread Jon Haddad
Generally speaking, don't run mixed versions longer than you have to, and
don't upgrade that way.

Why?

* We don't support it.
* We don't even test it.
* If you run into trouble and ask for help, the first thing people will
tell you is to get all nodes on the same version.

Anyone doing this who didn't specifically read the source and test it out
for themselves only got lucky in not hitting any issues. If you do it and
hit issues, be prepared to get very familiar with the C* source, as you're
on your own.

Be smart and go the supported, well-traveled route.  You'll need to do it
when upgrading majors *anyway*, so you might as well figure out the right
way of doing it *today* and follow the same stable method every time you
upgrade.





RE: Cassandra upgrade from 3.11.3 -> 3.11.6

2020-06-24 Thread Durity, Sean R
That seems like a lot of unnecessary streaming to me. I think someone said
that streaming works between these two versions, but I would not use this
approach. Why not an in-place upgrade?


Sean Durity


Re: Cassandra upgrade from 3.11.3 -> 3.11.6

2020-06-24 Thread Jai Bheemsen Rao Dhanwada
Thank you all for the suggestions.

I am not trying to scale up the cluster for capacity. For the upgrade,
instead of an in-place upgrade, I am planning to add nodes with 3.11.6 and
then decommission the nodes running 3.11.3.
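
[Editor's note] The add-then-decommission plan described above can be sketched as an ordered list of operations. The host names are hypothetical, and this is only an illustration of the ordering invariant (all new nodes join before any old node leaves):

```python
def replacement_plan(old_nodes, new_nodes):
    """Order of operations for upgrading by replacement: bootstrap all
    new-version nodes first, then decommission the old ones one at a time."""
    plan = [("bootstrap 3.11.6 node", h) for h in new_nodes]
    plan += [("run `nodetool decommission` on", h) for h in old_nodes]
    return plan

# Hypothetical host names.
plan = replacement_plan(old_nodes=["old-1", "old-2"], new_nodes=["new-1", "new-2"])
for action, host in plan:
    print(action, host)
```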


RE: Cassandra upgrade from 3.11.3 -> 3.11.6

2020-06-24 Thread Durity, Sean R
Streaming operations (repair/bootstrap) with different file versions are usually 
a problem. Running a mixed-version cluster is fine – for the time you are doing 
the upgrade. I would not stay on mixed versions for any longer than that. It 
takes more time, but I separate out the admin tasks so that I can reason about 
what should happen. I would either scale up or upgrade (depending on which is 
more urgent), then do the other.


Sean Durity
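
[Editor's note] The mixed-version caution above can be expressed as a simple guard: only run streaming operations once every node reports the same release version. This is an illustrative sketch; the per-host versions would in practice be collected by running `nodetool version` on each host, and the addresses below are hypothetical:

```python
def safe_to_stream(versions_by_host):
    """Conservative guard: allow streaming operations (repair, bootstrap,
    rebuild) only when every node reports the same Cassandra version."""
    return len(set(versions_by_host.values())) == 1

mixed   = {"10.0.0.1": "3.11.6", "10.0.0.2": "3.11.3"}
uniform = {"10.0.0.1": "3.11.6", "10.0.0.2": "3.11.6"}
print(safe_to_stream(mixed), safe_to_stream(uniform))  # -> False True
```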



Re: Cassandra upgrade from 3.11.3 -> 3.11.6

2020-06-24 Thread Erick Ramirez
>
> Can I add new nodes with the 3.11.6  version to the cluster running with
> 3.11.3?
>

Technically you can: you'll be able to add 3.11.6 nodes to a 3.11.3
cluster. In fact, the reverse works too in my tests, but I personally
wouldn't want to do it in production.


> Also, I see the SSTable format changed from mc-* to md-*, does this cause
> any issues?
>

Operations like repairs and bootstrap still work in my limited testing.
For example, if a 3.11.3 node died, you could bootstrap a replacement.

I echo the others' sentiment here and wouldn't do this in production.
CASSANDRA-14861 [1] and CASSANDRA-14096 [2] should give you the motivation
to upgrade the existing nodes before expanding the cluster.
Cheers!

[1] CASSANDRA-14861: SSTable min/max metadata can cause data loss
[2] CASSANDRA-14096: Repair merkle tree size causes OOM
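
[Editor's note] The mc-*/md-* format generations Erick mentions can be inventoried from the SSTable filenames on disk. A small illustrative sketch; the paths are hypothetical, and in practice the list would come from something like `find /var/lib/cassandra/data -name '*-Data.db'`:

```python
from collections import Counter
from pathlib import PurePath

def sstable_formats(paths):
    """Count SSTables per on-disk format version, taken from the filename
    prefix ('mc' before 3.11.4, 'md' from 3.11.4 on, per this thread)."""
    return Counter(PurePath(p).name.split("-")[0] for p in paths)

# Hypothetical listing.
sample = [
    "/var/lib/cassandra/data/ks/t1/mc-45-big-Data.db",
    "/var/lib/cassandra/data/ks/t1/md-101-big-Data.db",
    "/var/lib/cassandra/data/ks/t2/md-7-big-Data.db",
]
print(sstable_formats(sample))  # -> Counter({'md': 2, 'mc': 1})
```

A node is fully rewritten to the new format once no 'mc' entries remain after `nodetool upgradesstables`.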


Re: Cassandra upgrade from 3.11.3 -> 3.11.6

2020-06-24 Thread manish khandelwal
As Surbhi rightly said, it is not good to scale with mixed versions, as
debugging issues will be very difficult.
Better to upgrade first and then scale.

Regards


Re: Cassandra upgrade from 3.11.3 -> 3.11.6

2020-06-23 Thread Surbhi Gupta
If any issue comes up, it gets very difficult to debug when we have
multiple versions running.



Re: Cassandra upgrade from 3.11.3 -> 3.11.6

2020-06-23 Thread Jürgen Albersdorfer
Hi, I would say "it depends" - as it always does. I have had a 21-node cluster 
running in production in one DC with versions ranging from 3.11.1 to 3.11.6 
without a single issue for over a year. I just upgraded all nodes to 3.11.6 
for the sake of consistency.

Sent from my iPhone



Re: Cassandra upgrade from 3.11.3 -> 3.11.6

2020-06-23 Thread Surbhi Gupta
Hi,

We recently upgraded from 3.11.0 to 3.11.5. There is an SSTable format
change starting in 3.11.4.
We also had to expand the cluster, and we discussed whether to expand first
and then upgrade. In the end we upgraded first and then expanded.
From our experience, it is not advisable to add new nodes on a higher
version.
Many bugs were fixed between 3.11.3 and 3.11.6.

Thanks
Surbhi



Cassandra upgrade from 3.11.3 -> 3.11.6

2020-06-23 Thread Jai Bheemsen Rao Dhanwada
Hello,

I am trying to upgrade from 3.11.3 to 3.11.6.
Can I add new nodes with the 3.11.6 version to a cluster running 3.11.3?
Also, I see the SSTable format changed from mc-* to md-*; does this cause
any issues?


Re: [EXTERNAL] Apache Cassandra upgrade path

2019-07-29 Thread Jai Bheemsen Rao Dhanwada
Thank you Romain

On Sat, Jul 27, 2019 at 1:42 AM Romain Hardouin 
wrote:

> Hi,
>
> Here are some upgrade options:
>   - Standard rolling upgrade: node by node
>
>   - Fast rolling upgrade: rack by rack.
> If clients use CL=LOCAL_ONE then it's OK as long as one rack is UP.
> For higher CLs it's possible assuming you have no more than one replica per
> rack, e.g. CL=LOCAL_QUORUM with RF=3 and 2 racks is a *BAD* setup, but RF=3
> with 3 racks is OK.
>   - Double write in another cluster: easy for short TTL data (e.g. TTL of
> few days)
> When possible, this option is not only the safest but also allows major
> change (e.g. Partitioner for legacy clusters).
> And of course it's a good opportunity to use new cloud instance type,
> change number of vnodes, etc.
>
> As Sean said, it's not possible for C* servers to stream data with other
> versions when streaming versions differ. There is no workaround.
> You can check that here:
> https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/streaming/messages/StreamMessage.java#L35
> The community plans to work on this limitation to make streaming possible
> between different major versions starting from C* 4.x.
>
> Last but not least, don't forget to take snapshots (+ backup) and to
> prepare a rollback script.
> System keyspace will be automatically snapshotted by Cassandra when the
> new version will start: the rollback script should be based on that
> snapshot for the system part.
> New data (both commitlog and sstables flushed in 3.11 format) will be lost
> even with such a script but it's useful to test it and to have it ready for
> the D day.
> (See also snapshot_before_compaction setting but it might be useless
> depending on your procedure.)
>
> Romain
>
>
>
> On Friday, July 26, 2019 at 23:51:52 UTC+2, Jai Bheemsen Rao Dhanwada <
> jaibheem...@gmail.com> wrote:
>
>
> yes correct, it doesn't work for the servers. trying to see if any had any
> workaround for this issue? (may be changing the protocol version during the
> upgrade time?)
>
> On Fri, Jul 26, 2019 at 1:11 PM Durity, Sean R <
> sean_r_dur...@homedepot.com> wrote:
>
> This would handle client protocol, but not streaming protocol between
> nodes.
>
>
>
>
>
> Sean Durity – Staff Systems Engineer, Cassandra
>
>
>
> *From:* Alok Dwivedi 
> *Sent:* Friday, July 26, 2019 3:21 PM
> *To:* user@cassandra.apache.org
> *Subject:* Re: [EXTERNAL] Apache Cassandra upgrade path
>
>
>
> Hi Sean
>
> The recommended practice for upgrade is to explicitly control protocol
> version in your application during upgrade process. Basically the protocol
> version is negotiated on first connection and based on chance it can talk
> to an already upgraded node first which means it will negotiate a higher
> version that will not be compatible with those nodes which are still one
> lower Cassandra version. So initially you set it a lower version that is
> like lower common denominator for mixed mode cluster and then remove the
> call to explicit setting once upgrade has completed.
>
>
>
> Cluster cluster = Cluster.builder()
>
> .addContactPoint("127.0.0.1")
>
> .withProtocolVersion(ProtocolVersion.V2)
>
> .build();
>
>
>
> Refer here for more information if using Java driver
>
>
> https://docs.datastax.com/en/developer/java-driver/3.7/manual/native_protocol/#protocol-version-with-mixed-clusters
>
>
>
> Same thing applies to drivers in other languages.
>
>
>
> Thanks
>
> Alok Dwivedi
>
> Senior Consultant
>
> https://www.instaclustr.com/
>
>
>
>
>
> On Fri, 26 Jul 2019 at 20:03, Jai Bheemsen Rao Dhanwada <
> jaibheem...@gmail.com> wrote:
>
> Thanks Sean,
>
>
>
> In my use case all my clusters are multi DC, and I am trying my best
> effort to upgrade ASAP, however there is a chance since all machines are
> VMs. Also my key spaces are not uniform across DCs. some are replicated to
> all DCs and some of them are just one DC, so I am worried there.
>
>
>
> Is there a way to override the protocol version until the upgrade is done
> and then change it back once the upgrade is completed?

Re: [EXTERNAL] Apache Cassandra upgrade path

2019-07-27 Thread Romain Hardouin
Hi,

Here are some upgrade options:
  - Standard rolling upgrade: node by node
  - Fast rolling upgrade: rack by rack.
    If clients use CL=LOCAL_ONE then it's OK as long as one rack is UP. For
    higher CL it's possible assuming you have no more than one replica per
    rack, e.g. CL=LOCAL_QUORUM with RF=3 and 2 racks is a *BAD* setup. But
    RF=3 with 3 racks is OK.
  - Double write in another cluster: easy for short TTL data (e.g. TTL of a
    few days).
    When possible, this option is not only the safest but also allows major
    changes (e.g. Partitioner for legacy clusters). And of course it's a
    good opportunity to use a new cloud instance type, change the number of
    vnodes, etc.

As Sean said, it's not possible for C* servers to stream data with other
versions when streaming versions are different. There is no workaround. You
can check that here:
https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/streaming/messages/StreamMessage.java#L35
The community plans to work on this limitation to make streaming possible
between different major versions starting from C* 4.x.

Last but not least, don't forget to take snapshots (+ backup) and to
prepare a rollback script. The system keyspace will be automatically
snapshotted by Cassandra when the new version starts: the rollback script
should be based on that snapshot for the system part. New data (both
commitlog and sstables flushed in 3.11 format) will be lost even with such
a script, but it's useful to test it and to have it ready for the D day.
(See also the snapshot_before_compaction setting, but it might be useless
depending on your procedure.)
Romain


On Friday, July 26, 2019 at 23:51:52 UTC+2, Jai Bheemsen Rao Dhanwada
wrote:
 
 yes correct, it doesn't work for the servers. trying to see if any had any 
workaround for this issue? (may be changing the protocol version during the 
upgrade time?)

On Fri, Jul 26, 2019 at 1:11 PM Durity, Sean R  
wrote:


This would handle client protocol, but not streaming protocol between nodes.

 

 

Sean Durity – Staff Systems Engineer, Cassandra

 

From: Alok Dwivedi  
Sent: Friday, July 26, 2019 3:21 PM
To: user@cassandra.apache.org
Subject: Re: [EXTERNAL] Apache Cassandra upgrade path

 

Hi Sean

The recommended practice for upgrade is to explicitly control protocol version 
in your application during upgrade process. Basically the protocol version is 
negotiated on first connection and based on chance it can talk to an already 
upgraded node first which means it will negotiate a higher version that will 
not be compatible with those nodes which are still one lower Cassandra version. 
So initially you set it a lower version that is like lower common denominator 
for mixed mode cluster and then remove the call to explicit setting once 
upgrade has completed. 

 

Cluster cluster = Cluster.builder()
    .addContactPoint("127.0.0.1")
    .withProtocolVersion(ProtocolVersion.V2)
    .build();

 

Refer here for more information if using Java driver

https://docs.datastax.com/en/developer/java-driver/3.7/manual/native_protocol/#protocol-version-with-mixed-clusters

 

Same thing applies to drivers in other languages. 

 

Thanks

Alok Dwivedi

Senior Consultant 

https://www.instaclustr.com/

 

 

On Fri, 26 Jul 2019 at 20:03, Jai Bheemsen Rao Dhanwada  
wrote:


Thanks Sean,

 

In my use case all my clusters are multi DC, and I am trying my best effort to 
upgrade ASAP, however there is a chance since all machines are VMs. Also my key 
spaces are not uniform across DCs. some are replicated to all DCs and some of 
them are just one DC, so I am worried there.

 

Is there a way to override the protocol version until the upgrade is done and 
then change it back once the upgrade is completed?

 

On Fri, Jul 26, 2019 at 11:42 AM Durity, Sean R  
wrote:


What you have seen is totally expected. You can’t stream between different 
major versions of Cassandra. Get the upgrade done, then worry about any down 
hardware. If you are using DCs, upgrade one DC at a time, so that there is an 
available environment in case of any disasters.

 

My advice, though, is to get through the rolling upgrade process as quickly as 
possible. Don’t stay in a mixed state very long. The cluster will function fine 
in a mixed state – except for those streaming operations. No repairs, no 
bootstraps. 

 

 

Sean Durity – Staff Systems Engineer, Cassandra

 

From: Jai Bheemsen Rao Dhanwada 
Sent: Friday, July 26, 2019 2:24 PM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Apache Cassandra upgrade path

 

Hello,

 

I am trying to upgrade Apache Cassandra from 2.1.16 to 3.11.3, the regular 
rolling upgrade process works fine without any issues.

 

However, I am running into an issue where if there is a node with older version 
dies (hardware failure) and a new node comes up and tries to bootstrap, it's 
failing.

 

I tried two combinations:

 

1. Joining replacement node with 2.1.16 version of cassandra 

In

Re: [EXTERNAL] Apache Cassandra upgrade path

2019-07-26 Thread Jai Bheemsen Rao Dhanwada
yes correct, it doesn't work for the servers. trying to see if any had any
workaround for this issue? (may be changing the protocol version during the
upgrade time?)

On Fri, Jul 26, 2019 at 1:11 PM Durity, Sean R 
wrote:

> This would handle client protocol, but not streaming protocol between
> nodes.
>
>
>
>
>
> Sean Durity – Staff Systems Engineer, Cassandra
>
>
>
> *From:* Alok Dwivedi 
> *Sent:* Friday, July 26, 2019 3:21 PM
> *To:* user@cassandra.apache.org
> *Subject:* Re: [EXTERNAL] Apache Cassandra upgrade path
>
>
>
> Hi Sean
>
> The recommended practice for upgrade is to explicitly control protocol
> version in your application during upgrade process. Basically the protocol
> version is negotiated on first connection and based on chance it can talk
> to an already upgraded node first which means it will negotiate a higher
> version that will not be compatible with those nodes which are still one
> lower Cassandra version. So initially you set it a lower version that is
> like lower common denominator for mixed mode cluster and then remove the
> call to explicit setting once upgrade has completed.
>
>
>
> Cluster cluster = Cluster.builder()
>
> .addContactPoint("127.0.0.1")
>
> .withProtocolVersion(ProtocolVersion.V2)
>
> .build();
>
>
>
> Refer here for more information if using Java driver
>
>
> https://docs.datastax.com/en/developer/java-driver/3.7/manual/native_protocol/#protocol-version-with-mixed-clusters
>
>
>
> Same thing applies to drivers in other languages.
>
>
>
> Thanks
>
> Alok Dwivedi
>
> Senior Consultant
>
> https://www.instaclustr.com/
>
>
>
>
>
> On Fri, 26 Jul 2019 at 20:03, Jai Bheemsen Rao Dhanwada <
> jaibheem...@gmail.com> wrote:
>
> Thanks Sean,
>
>
>
> In my use case all my clusters are multi DC, and I am trying my best
> effort to upgrade ASAP, however there is a chance since all machines are
> VMs. Also my key spaces are not uniform across DCs. some are replicated to
> all DCs and some of them are just one DC, so I am worried there.
>
>
>
> Is there a way to override the protocol version until the upgrade is done
> and then change it back once the upgrade is completed?
>
>
>
> On Fri, Jul 26, 2019 at 11:42 AM Durity, Sean R <
> sean_r_dur...@homedepot.com> wrote:
>
> What you have seen is totally expected. You can’t stream between different
> major versions of Cassandra. Get the upgrade done, then worry about any
> down hardware. If you are using DCs, upgrade one DC at a time, so that
> there is an available environment in case of any disasters.
>
>
>
> My advice, though, is to get through the rolling upgrade process as
> quickly as possible. Don’t stay in a mixed state very long. The cluster
> will function fine in a mixed state – except for those streaming
> operations. No repairs, no bootstraps.
>
>
>
>
>
> Sean Durity – Staff Systems Engineer, Cassandra
>
>
>
> *From:* Jai Bheemsen Rao Dhanwada 
> *Sent:* Friday, July 26, 2019 2:24 PM
> *To:* user@cassandra.apache.org
> *Subject:* [EXTERNAL] Apache Cassandra upgrade path
>
>
>
> Hello,
>
>
>
> I am trying to upgrade Apache Cassandra from 2.1.16 to 3.11.3, the regular
> rolling upgrade process works fine without any issues.
>
>
>
> However, I am running into an issue where if there is a node with older
> version dies (hardware failure) and a new node comes up and tries to
> bootstrap, it's failing.
>
>
>
> I tried two combinations:
>
>
>
> 1. Joining replacement node with 2.1.16 version of cassandra
>
> In this case nodes with 2.1.16 version are able to stream data to the new
> node, but the nodes with 3.11.3 version are failing with the below error.
>
>
>
> ERROR [STREAM-INIT-/10.x.x.x:40296] 2019-07-26 17:45:17,775
> IncomingStreamingConnection.java:80 - Error while reading from socket from
> /10.y.y.y:40296.
> java.io.IOException: Received stream using protocol version 2 (my version
> 4). Terminating connection
>
> 2. Joining replacement node with 3.11.3 version of cassandra
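The "Received stream using protocol version 2 (my version 4)" error quoted above comes from a strict version check during the streaming handshake. A loose, illustrative model of that check — the version numbers match the thread, but the function and framing are a simplification of the real IncomingStreamingConnection/StreamMessage code, not a copy:

```python
MY_STREAM_VERSION = 4  # e.g. a 3.11.x node; 2.1.x nodes speak version 2

def accept_stream(peer_version: int) -> None:
    """Reject a streaming handshake from a peer on a different version.

    Simplified sketch: the real code also handles message framing, but the
    core check is this equality test on the streaming protocol version.
    """
    if peer_version != MY_STREAM_VERSION:
        raise IOError(
            f"Received stream using protocol version {peer_version} "
            f"(my version {MY_STREAM_VERSION}). Terminating connection"
        )

try:
    accept_stream(2)  # what a 2.1.16 node's stream looks like to 3.11.3
except IOError as e:
    print(e)
```

This is why the cluster functions in a mixed state for reads and writes but cannot repair or bootstrap: those paths go through streaming, and the handshake fails whenever the two sides are on different streaming versions.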

RE: [EXTERNAL] Apache Cassandra upgrade path

2019-07-26 Thread Durity, Sean R
This would handle client protocol, but not streaming protocol between nodes.


Sean Durity – Staff Systems Engineer, Cassandra

From: Alok Dwivedi 
Sent: Friday, July 26, 2019 3:21 PM
To: user@cassandra.apache.org
Subject: Re: [EXTERNAL] Apache Cassandra upgrade path

Hi Sean
The recommended practice for upgrade is to explicitly control protocol version 
in your application during upgrade process. Basically the protocol version is 
negotiated on first connection and based on chance it can talk to an already 
upgraded node first which means it will negotiate a higher version that will 
not be compatible with those nodes which are still one lower Cassandra version. 
So initially you set it a lower version that is like lower common denominator 
for mixed mode cluster and then remove the call to explicit setting once 
upgrade has completed.

Cluster cluster = Cluster.builder()
    .addContactPoint("127.0.0.1")
    .withProtocolVersion(ProtocolVersion.V2)
    .build();

Refer here for more information if using Java driver
https://docs.datastax.com/en/developer/java-driver/3.7/manual/native_protocol/#protocol-version-with-mixed-clusters

Same thing applies to drivers in other languages.

Thanks
Alok Dwivedi
Senior Consultant
https://www.instaclustr.com/


On Fri, 26 Jul 2019 at 20:03, Jai Bheemsen Rao Dhanwada 
mailto:jaibheem...@gmail.com>> wrote:
Thanks Sean,

In my use case all my clusters are multi DC, and I am trying my best effort to 
upgrade ASAP, however there is a chance since all machines are VMs. Also my key 
spaces are not uniform across DCs. some are replicated to all DCs and some of 
them are just one DC, so I am worried there.

Is there a way to override the protocol version until the upgrade is done and 
then change it back once the upgrade is completed?

On Fri, Jul 26, 2019 at 11:42 AM Durity, Sean R 
mailto:sean_r_dur...@homedepot.com>> wrote:
What you have seen is totally expected. You can’t stream between different 
major versions of Cassandra. Get the upgrade done, then worry about any down 
hardware. If you are using DCs, upgrade one DC at a time, so that there is an 
available environment in case of any disasters.

My advice, though, is to get through the rolling upgrade process as quickly as 
possible. Don’t stay in a mixed state very long. The cluster will function fine 
in a mixed state – except for those streaming operations. No repairs, no 
bootstraps.


Sean Durity – Staff Systems Engineer, Cassandra

From: Jai Bheemsen Rao Dhanwada 
mailto:jaibheem...@gmail.com>>
Sent: Friday, July 26, 2019 2:24 PM
To: user@cassandra.apache.org<mailto:user@cassandra.apache.org>
Subject: [EXTERNAL] Apache Cassandra upgrade path

Hello,

I am trying to upgrade Apache Cassandra from 2.1.16 to 3.11.3, the regular 
rolling upgrade process works fine without any issues.

However, I am running into an issue where if there is a node with older version 
dies (hardware failure) and a new node comes up and tries to bootstrap, it's 
failing.

I tried two combinations:

1. Joining replacement node with 2.1.16 version of cassandra
In this case nodes with 2.1.16 version are able to stream data to the new node, 
but the nodes with 3.11.3 version are failing with the below error.

ERROR [STREAM-INIT-/10.x.x.x:40296] 2019-07-26 17:45:17,775 
IncomingStreamingConnection.java:80 - Error while reading from socket from 
/10.y.y.y:40296.
java.io.IOException: Received stream using protocol version 2 (my version 4). 
Terminating connection
2. Joining replacement node with 3.11.3 version of cassandra
In this case the nodes with 3.11.3 version of cassandra are able to stream the 
data but it's not able to stream data from the 2.1.16 nodes and failing with 
the below error.

ERROR [STREAM-IN-/10.z.z.z:7000] 2019-07-26 18:08:10,380 StreamSession.java:593 
- [Stream #538c6900-afd0-11e9-a649-ab2e045ee53b] Streaming error occurred on 
session with peer 10.z.z.z
java.io.IOException: Connection reset by peer
   at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.8.0_151]
   at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
~[na:1.8.0_151]
   at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) 
~[na:1.8.0_151]
   at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[na:1.8.0_151]
   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) 
~[na:1.8.

Re: [EXTERNAL] Apache Cassandra upgrade path

2019-07-26 Thread Alok Dwivedi
Hi Sean
The recommended practice for an upgrade is to explicitly control the
protocol version in your application during the upgrade process. The
protocol version is negotiated on the first connection, and by chance the
driver may talk to an already-upgraded node first, which means it will
negotiate a higher version that is not compatible with the nodes still on
the lower Cassandra version. So initially you set a lower version that is
the lowest common denominator for the mixed-mode cluster, and then remove
the explicit setting once the upgrade has completed.

Cluster cluster = Cluster.builder()
    .addContactPoint("127.0.0.1")
    .withProtocolVersion(ProtocolVersion.V2)
    .build();

Refer here for more information if using Java driver
https://docs.datastax.com/en/developer/java-driver/3.7/manual/native_protocol/#protocol-version-with-mixed-clusters

Same thing applies to drivers in other languages.

Thanks
Alok Dwivedi
Senior Consultant
https://www.instaclustr.com/
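The "lowest common denominator" behaviour Alok describes can be pictured as the driver settling on the minimum of what it supports and what the first contacted node supports, so which node it happens to reach first decides the session's version unless you pin it. A toy model of that choice — the version numbers are illustrative, and real drivers negotiate via an OPTIONS/STARTUP exchange rather than this function:

```python
from typing import Optional

DRIVER_MAX = 4  # newest native protocol version this driver build supports

def negotiate(first_node_max: int, pinned: Optional[int] = None) -> int:
    """Pick the native protocol version for the whole session.

    Without pinning, the driver settles on min(its max, first node's max);
    pinning forces a version every node in a mixed cluster can speak.
    """
    if pinned is not None:
        return pinned
    return min(DRIVER_MAX, first_node_max)

# Mixed cluster: upgraded nodes speak up to v4, old nodes only up to v2.
print(negotiate(first_node_max=4))  # 4: first contact was an upgraded node,
                                    #    old nodes will then reject v4
print(negotiate(first_node_max=2))  # 2: first contact was an old node
print(negotiate(4, pinned=2))       # 2: pinned, safe for every node
```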


On Fri, 26 Jul 2019 at 20:03, Jai Bheemsen Rao Dhanwada <
jaibheem...@gmail.com> wrote:

> Thanks Sean,
>
> In my use case all my clusters are multi DC, and I am trying my best
> effort to upgrade ASAP, however there is a chance since all machines are
> VMs. Also my key spaces are not uniform across DCs. some are replicated to
> all DCs and some of them are just one DC, so I am worried there.
>
> Is there a way to override the protocol version until the upgrade is done
> and then change it back once the upgrade is completed?
>
> On Fri, Jul 26, 2019 at 11:42 AM Durity, Sean R <
> sean_r_dur...@homedepot.com> wrote:
>
>> What you have seen is totally expected. You can’t stream between
>> different major versions of Cassandra. Get the upgrade done, then worry
>> about any down hardware. If you are using DCs, upgrade one DC at a time, so
>> that there is an available environment in case of any disasters.
>>
>>
>>
>> My advice, though, is to get through the rolling upgrade process as
>> quickly as possible. Don’t stay in a mixed state very long. The cluster
>> will function fine in a mixed state – except for those streaming
>> operations. No repairs, no bootstraps.
>>
>>
>>
>>
>>
>> Sean Durity – Staff Systems Engineer, Cassandra
>>
>>
>>
>> *From:* Jai Bheemsen Rao Dhanwada 
>> *Sent:* Friday, July 26, 2019 2:24 PM
>> *To:* user@cassandra.apache.org
>> *Subject:* [EXTERNAL] Apache Cassandra upgrade path
>>
>>
>>
>> Hello,
>>
>>
>>
>> I am trying to upgrade Apache Cassandra from 2.1.16 to 3.11.3, the
>> regular rolling upgrade process works fine without any issues.
>>
>>
>>
>> However, I am running into an issue where if there is a node with older
>> version dies (hardware failure) and a new node comes up and tries to
>> bootstrap, it's failing.
>>
>>
>>
>> I tried two combinations:
>>
>>
>>
>> 1. Joining replacement node with 2.1.16 version of cassandra
>>
>> In this case nodes with 2.1.16 version are able to stream data to the new
>> node, but the nodes with 3.11.3 version are failing with the below error.
>>
>>
>>
>> ERROR [STREAM-INIT-/10.x.x.x:40296] 2019-07-26 17:45:17,775
>> IncomingStreamingConnection.java:80 - Error while reading from socket from
>> /10.y.y.y:40296.
>> java.io.IOException: Received stream using protocol version 2 (my version
>> 4). Terminating connection
>>
>> 2. Joining replacement node with 3.11.3 version of cassandra
>>
>> In this case the nodes with 3.11.3 version of cassandra are able to
>> stream the data but it's not able to stream data from the 2.1.16 nodes and
>> failing with the below error.
>>
>>
>>
>> ERROR [STREAM-IN-/10.z.z.z:7000] 2019-07-26 18:08:10,380
>> StreamSession.java:593 - [Stream #538c6900-afd0-11e9-a649-ab2e045ee53b]
>> Streaming error occurred on session with peer 10.z.z.z
>> java.io.IOException: Connection reset by peer
>>at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>> ~[na:1.8.0_151]
>>at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>> ~[na:1.8.0_151]
>>at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
>> ~[na:1.8.0_151]
>>at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[na:1.8.0_151]
>>at
>> sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
>> ~[na:1.8.0_151]
>>at
>> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:206)
>> ~[na:1.8.0_151]
>>at
>> sun.nio.ch.ChannelInputStream.read(Cha

Re: [EXTERNAL] Apache Cassandra upgrade path

2019-07-26 Thread Jai Bheemsen Rao Dhanwada
Thanks Sean,

In my use case all my clusters are multi DC, and I am trying my best effort
to upgrade ASAP, however there is a chance since all machines are VMs. Also
my key spaces are not uniform across DCs. some are replicated to all DCs
and some of them are just one DC, so I am worried there.

Is there a way to override the protocol version until the upgrade is done
and then change it back once the upgrade is completed?

On Fri, Jul 26, 2019 at 11:42 AM Durity, Sean R 
wrote:

> What you have seen is totally expected. You can’t stream between different
> major versions of Cassandra. Get the upgrade done, then worry about any
> down hardware. If you are using DCs, upgrade one DC at a time, so that
> there is an available environment in case of any disasters.
>
>
>
> My advice, though, is to get through the rolling upgrade process as
> quickly as possible. Don’t stay in a mixed state very long. The cluster
> will function fine in a mixed state – except for those streaming
> operations. No repairs, no bootstraps.
>
>
>
>
>
> Sean Durity – Staff Systems Engineer, Cassandra
>
>
>
> *From:* Jai Bheemsen Rao Dhanwada 
> *Sent:* Friday, July 26, 2019 2:24 PM
> *To:* user@cassandra.apache.org
> *Subject:* [EXTERNAL] Apache Cassandra upgrade path
>
>
>
> Hello,
>
>
>
> I am trying to upgrade Apache Cassandra from 2.1.16 to 3.11.3, the regular
> rolling upgrade process works fine without any issues.
>
>
>
> However, I am running into an issue where if there is a node with older
> version dies (hardware failure) and a new node comes up and tries to
> bootstrap, it's failing.
>
>
>
> I tried two combinations:
>
>
>
> 1. Joining replacement node with 2.1.16 version of cassandra
>
> In this case nodes with 2.1.16 version are able to stream data to the new
> node, but the nodes with 3.11.3 version are failing with the below error.
>
>
>
> ERROR [STREAM-INIT-/10.x.x.x:40296] 2019-07-26 17:45:17,775
> IncomingStreamingConnection.java:80 - Error while reading from socket from
> /10.y.y.y:40296.
> java.io.IOException: Received stream using protocol version 2 (my version
> 4). Terminating connection
>
> 2. Joining replacement node with 3.11.3 version of cassandra
>
> In this case the nodes with 3.11.3 version of cassandra are able to stream
> the data but it's not able to stream data from the 2.1.16 nodes and failing
> with the below error.
>
>
>
> ERROR [STREAM-IN-/10.z.z.z:7000] 2019-07-26 18:08:10,380
> StreamSession.java:593 - [Stream #538c6900-afd0-11e9-a649-ab2e045ee53b]
> Streaming error occurred on session with peer 10.z.z.z
> java.io.IOException: Connection reset by peer
>at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
> ~[na:1.8.0_151]
>at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> ~[na:1.8.0_151]
>at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
> ~[na:1.8.0_151]
>at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[na:1.8.0_151]
>at
> sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
> ~[na:1.8.0_151]
>at
> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:206)
> ~[na:1.8.0_151]
>at
> sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
> ~[na:1.8.0_151]
>at
> java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
> ~[na:1.8.0_151]
>at
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:56)
> ~[apache-cassandra-3.11.3.jar:3.11.3]
>at
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:311)
> ~[apache-cassandra-3.11.3.jar:3.11.3]
>at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]
>
>
>
> Note: In both cases I am using replace_address to replace dead node, as I
> am running into some issues with "nodetool removenode" . I use ephemeral
> disk, so replacement node always comes up with empty data dir and bootstrap.
>
>
>
> Any other work around to mitigate this problem? I am worried about any
> nodes going down while we are in the process of upgrade, as it could take
> several hours to upgrade depending on the cluster size.
>
> --
>
> The information in this Internet Email is confidential and may be legally
> privileged. It is intended solely for the addressee. Access to this Email
> by anyone else is unauthorized. If you are not the intended recipient, any
> disclosure, copying, distribution or any action taken or omitted to be
> taken in reliance on it, is prohibited and may be unlawful. When addressed
> to our clients any op

RE: [EXTERNAL] Apache Cassandra upgrade path

2019-07-26 Thread Durity, Sean R
What you have seen is totally expected. You can’t stream between different 
major versions of Cassandra. Get the upgrade done, then worry about any down 
hardware. If you are using DCs, upgrade one DC at a time, so that there is an 
available environment in case of any disasters.

My advice, though, is to get through the rolling upgrade process as quickly as 
possible. Don’t stay in a mixed state very long. The cluster will function fine 
in a mixed state – except for those streaming operations. No repairs, no 
bootstraps.


Sean Durity – Staff Systems Engineer, Cassandra

From: Jai Bheemsen Rao Dhanwada 
Sent: Friday, July 26, 2019 2:24 PM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Apache Cassandra upgrade path

Hello,

I am trying to upgrade Apache Cassandra from 2.1.16 to 3.11.3, the regular 
rolling upgrade process works fine without any issues.

However, I am running into an issue where if there is a node with older version 
dies (hardware failure) and a new node comes up and tries to bootstrap, it's 
failing.

I tried two combinations:

1. Joining replacement node with 2.1.16 version of cassandra
In this case nodes with 2.1.16 version are able to stream data to the new node, 
but the nodes with 3.11.3 version are failing with the below error.

ERROR [STREAM-INIT-/10.x.x.x:40296] 2019-07-26 17:45:17,775 
IncomingStreamingConnection.java:80 - Error while reading from socket from 
/10.y.y.y:40296.
java.io.IOException: Received stream using protocol version 2 (my version 4). 
Terminating connection
2. Joining replacement node with 3.11.3 version of cassandra
In this case the nodes with 3.11.3 version of cassandra are able to stream the 
data but it's not able to stream data from the 2.1.16 nodes and failing with 
the below error.

ERROR [STREAM-IN-/10.z.z.z:7000] 2019-07-26 18:08:10,380 StreamSession.java:593 
- [Stream #538c6900-afd0-11e9-a649-ab2e045ee53b] Streaming error occurred on 
session with peer 10.z.z.z
java.io.IOException: Connection reset by peer
   at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.8.0_151]
   at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
~[na:1.8.0_151]
   at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) 
~[na:1.8.0_151]
   at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[na:1.8.0_151]
   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) 
~[na:1.8.0_151]
   at 
sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:206) 
~[na:1.8.0_151]
   at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103) 
~[na:1.8.0_151]
   at 
java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385) 
~[na:1.8.0_151]
   at 
org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:56)
 ~[apache-cassandra-3.11.3.jar:3.11.3]
   at 
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:311)
 ~[apache-cassandra-3.11.3.jar:3.11.3]
   at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]

Note: In both cases I am using replace_address to replace dead node, as I am 
running into some issues with "nodetool removenode" . I use ephemeral disk, so 
replacement node always comes up with empty data dir and bootstrap.

Any other work around to mitigate this problem? I am worried about any nodes 
going down while we are in the process of upgrade, as it could take several 
hours to upgrade depending on the cluster size.





Apache Cassandra upgrade path

2019-07-26 Thread Jai Bheemsen Rao Dhanwada
Hello,

I am trying to upgrade Apache Cassandra from 2.1.16 to 3.11.3, the regular
rolling upgrade process works fine without any issues.

However, I am running into an issue where if there is a node with older
version dies (hardware failure) and a new node comes up and tries to
bootstrap, it's failing.

I tried two combinations:

1. Joining the replacement node with the 2.1.16 version of Cassandra
In this case, nodes on the 2.1.16 version are able to stream data to the new
node, but the nodes on the 3.11.3 version fail with the below error.


> ERROR [STREAM-INIT-/10.x.x.x:40296] 2019-07-26 17:45:17,775
> IncomingStreamingConnection.java:80 - Error while reading from socket from
> /10.y.y.y:40296.
> java.io.IOException: Received stream using protocol version 2 (my version
> 4). Terminating connection

2. Joining the replacement node with the 3.11.3 version of Cassandra
In this case, the nodes on the 3.11.3 version of Cassandra are able to stream
data, but the replacement is not able to stream data from the 2.1.16 nodes and
fails with the below error.


> ERROR [STREAM-IN-/10.z.z.z:7000] 2019-07-26 18:08:10,380
> StreamSession.java:593 - [Stream #538c6900-afd0-11e9-a649-ab2e045ee53b]
> Streaming error occurred on session with peer 10.z.z.z
> java.io.IOException: Connection reset by peer
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.8.0_151]
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> ~[na:1.8.0_151]
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.8.0_151]
> at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[na:1.8.0_151]
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
> ~[na:1.8.0_151]
> at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:206)
> ~[na:1.8.0_151]
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
> ~[na:1.8.0_151]
> at
> java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
> ~[na:1.8.0_151]
> at
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:56)
> ~[apache-cassandra-3.11.3.jar:3.11.3]
> at
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:311)
> ~[apache-cassandra-3.11.3.jar:3.11.3]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]


Note: In both cases I am using replace_address to replace the dead node, as I
am running into some issues with "nodetool removenode". I use ephemeral
disks, so the replacement node always comes up with an empty data dir and
bootstraps.

Any other work around to mitigate this problem? I am worried about any
nodes going down while we are in the process of upgrade, as it could take
several hours to upgrade depending on the cluster size.
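For reference, the replace_address mechanism mentioned above is typically enabled
through a JVM system property on the replacement node before its first start. A
minimal sketch (the IP address is a placeholder, and the exact file to edit
depends on the install):

```shell
# Hedged sketch only: in cassandra-env.sh (or jvm.options on newer versions)
# on the replacement node, point at the dead node's address before first boot.
JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=10.x.x.x"
```

The flag should be removed again once the replacement has finished bootstrapping.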


RE: [EXTERNAL] Cassandra Upgrade Plan 2.2.4 to 3.11.3

2018-12-04 Thread Durity, Sean R
See my recent post for some additional points. But I wanted to encourage you to 
look at the in-place upgrade on your existing hardware. No need to add a DC to 
try and upgrade. The cluster will handle reads and writes with nodes of 
different versions – no problems. I have done this many times on many clusters.

Also, I tell my teams there is no real back-out after we get the first node 
upgraded. This is because any new data is being written in the new sstable 
format (assuming the version has a new sstable format) – whether inserts or 
compaction. Any snapshot of the cluster pre-upgrade is now obsolete. Test 
thoroughly, then go forward as quickly as possible.
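The in-place, node-by-node procedure described above can be outlined roughly as
follows. This is a hedged sketch, not an authoritative runbook: service and
package names vary by install, and opinions differ on whether drain is required
before upgradesstables.

```shell
# Rough per-node outline of an in-place upgrade (repeat node by node):
nodetool drain                 # flush memtables and stop accepting writes
sudo systemctl stop cassandra
# ... install the new Cassandra version, merge cassandra.yaml changes ...
sudo systemctl start cassandra
nodetool upgradesstables       # rewrite sstables into the new on-disk format
```

Running upgradesstables can be deferred until all nodes are on the new binary,
but should be completed before the next major upgrade.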


Sean Durity

From: Devaki, Srinivas 
Sent: Sunday, December 02, 2018 9:24 AM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Cassandra Upgrade Plan 2.2.4 to 3.11.3

Hi everyone,

I have planned out our org's Cassandra upgrade plan and want to make sure it 
seems fine.

Details Existing Cluster:
* Cassandra 2.2.4
* 8 nodes with 32G ram and 12G max heap allocated to cassandra
* 4 nodes in each rack

1. Ensure all clients use LOCAL_* consistency levels and all traffic goes to 
the "old" dc
2. Add new cluster as "new" dc with cassandra 2.2.4
  2.1 update conf on all nodes in "old" dc
  2.2 rolling restart the "old" dc
3. Alter tables with similar replication factor on the "new" dc
4. cassandra repair on all nodes in "new" dc
5. upgrade each node in "new" dc to cassandra 3.11.3 (and upgradesstables)
6. switch all clients to connect to new cluster
7. repair all new nodes once more
8. alter tables to replication only on new dc
9. remove "old" dc

and I have some doubts about the same plan:
D1. Can I just join a 3.11.3 cluster as the "new" dc in the 2.2.4 cluster?
D2. How does a rolling upgrade work? As in, within the same cluster, how can 2 
versions coexist?

Will be grateful if you could review this plan.

PS: I am following this plan to ensure that I can revert to the old behaviour at 
any step.

Thanks
Srinivas Devaki
SRE/SDE at Zomato








Cassandra Upgrade Plan 2.2.4 to 3.11.3

2018-12-02 Thread Devaki, Srinivas
Hi everyone,

I have planned out our org's Cassandra upgrade plan and want to make sure
it seems fine.

Details Existing Cluster:
* Cassandra 2.2.4
* 8 nodes with 32G ram and 12G max heap allocated to cassandra
* 4 nodes in each rack

1. Ensure all clients use LOCAL_* consistency levels and all traffic goes to
the "old" dc
2. Add new cluster as "new" dc with cassandra 2.2.4
  2.1 update conf on all nodes in "old" dc
  2.2 rolling restart the "old" dc
3. Alter tables with similar replication factor on the "new" dc
4. cassandra repair on all nodes in "new" dc
5. upgrade each node in "new" dc to cassandra 3.11.3 (and upgradesstables)
6. switch all clients to connect to new cluster
7. repair all new nodes once more
8. alter tables to replication only on new dc
9. remove "old" dc
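Steps 3 and 8 of the plan above amount to keyspace replication changes. A minimal
CQL sketch, assuming NetworkTopologyStrategy; the keyspace name, DC names, and
replication factors here are illustrative only:

```sql
-- Step 3: replicate to the "new" dc alongside the "old" one
ALTER KEYSPACE my_keyspace
  WITH replication = {'class': 'NetworkTopologyStrategy', 'old': 3, 'new': 3};

-- Step 8: after clients have moved, keep replication only on the "new" dc
ALTER KEYSPACE my_keyspace
  WITH replication = {'class': 'NetworkTopologyStrategy', 'new': 3};
```

Each ALTER should be followed by a repair on the affected dc so replicas actually
hold the data the new topology promises.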

and I have some doubts about the same plan:
D1. Can I just join a 3.11.3 cluster as the "new" dc in the 2.2.4 cluster?
D2. How does a rolling upgrade work? As in, within the same cluster, how can 2
versions coexist?

Will be grateful if you could review this plan.

PS: I am following this plan to ensure that I can revert to the old behaviour
at any step.

Thanks
Srinivas Devaki
SRE/SDE at Zomato


Re: Lost counter updates during Cassandra upgrade 2.2.11 to 3.11.2

2018-11-13 Thread Konrad
Hi, 

I haven't investigated the issue further. 

-- 
  Konrad



On Sat, Nov 10, 2018, at 05:49, Laxmikant Upadhyay wrote:
> Hi,
> 
> I have faced a similar issue while upgrading from 2.1.16 -> 3.11.2 in a
> 3 node cluster. I have raised jira ticket CASSANDRA-14881[1] for this issue,
> but have not got any response on this yet.
> @Konrad did you get any resolution on this?
> 
> Regards,
> Laxmikant
> 
> 
> 
> 
> On Thu, Jul 26, 2018 at 5:34 PM Konrad  wrote:
>> Hi,
>> 
>>  During rolling upgrade of our cluster we noticed that some updates
>>  on a table with counters were not being applied. It looked as if it
>>  depended on whether the coordinator handling the request was already
>>  upgraded or not. I observed similar behavior while using cqlsh and
>>  executing queries manually. Sometimes it took several retries to see
>>  the counter updated. There were no errors/warnings in either the
>>  application or Cassandra logs. The updates started working reliably
>>  once again when all nodes in the dc had been upgraded. However, the
>>  lost updates did not reappear.
>> 
>>  Our setup:
>>  2 dc cluster, 5 + 5 nodes. However, only one is used for queries as
>>  the client application is co-located in one region. I believe 1 dc is
>>  enough to reproduce it.
>>  Replication factor 3+2
>>  Consistency level LOCAL_QUORUM
>>  Upgrading 2.2.11 to 3.11.2
>> 
>>  I haven't found any report of a similar issue on the internet. Has
>>  anyone heard about such behavior?
>> 
>>  Thanks, 
>>  Konrad
>> 
>>  -----
>>  To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>>  For additional commands, e-mail: user-h...@cassandra.apache.org
>> 
> 
> 
> -- 
> 
> regards,
> Laxmikant Upadhyay
> 

Links:

  1. https://issues.apache.org/jira/browse/CASSANDRA-14881


Re: Lost counter updates during Cassandra upgrade 2.2.11 to 3.11.2

2018-11-09 Thread Laxmikant Upadhyay
Hi,

I have faced a similar issue while upgrading from 2.1.16 -> 3.11.2 in a 3
node cluster.
I have raised jira ticket CASSANDRA-14881 for this issue,
but have not got any response on this yet.
@Konrad did you get any resolution on this?

Regards,
Laxmikant




On Thu, Jul 26, 2018 at 5:34 PM Konrad  wrote:

> Hi,
>
> During rolling upgrade of our cluster we noticed that some updates on
> table with counters were not being applied. It looked as if it depended on
> whether coordinator handling request was already upgraded or not. I
> observed similar behavior while using cqlsh and executing queries manually.
> Sometimes it took several retries to see the counter updated. There were no
> errors/warnings in either the application or Cassandra logs. The updates started
> working reliably once again when all nodes in the dc had been upgraded.
> However, the lost updates did not reappear.
>
> Our setup:
> 2 dc cluster, 5 + 5 nodes. However, only one is used for queries as client
> application is co-located in one region. I believe 1 dc is enough to
> reproduce it.
> Replication factor 3+2
> Consistency level LOCAL_QUORUM
> Upgrading 2.2.11 to 3.11.2
>
> I haven't found any report of similar issue on the internet. Has anyone
> heard about such behavior?
>
> Thanks,
> Konrad
>
>
>

-- 

regards,
Laxmikant Upadhyay


Lost counter updates during Cassandra upgrade 2.2.11 to 3.11.2

2018-07-26 Thread Konrad
Hi,

During a rolling upgrade of our cluster we noticed that some updates on a table 
with counters were not being applied. It looked as if it depended on whether the 
coordinator handling the request was already upgraded or not. I observed similar 
behavior while using cqlsh and executing queries manually. Sometimes it took 
several retries to see the counter updated. There were no errors/warnings in 
either the application or Cassandra logs. The updates started working reliably 
once again when all nodes in the dc had been upgraded. However, the lost updates 
did not reappear. 

Our setup:
2 dc cluster, 5 + 5 nodes. However, only one is used for queries as client 
application is co-located in one region. I believe 1 dc is enough to reproduce 
it. 
Replication factor 3+2
Consistency level LOCAL_QUORUM
Upgrading 2.2.11 to 3.11.2

I haven't found any report of similar issue on the internet. Has anyone heard 
about such behavior? 

Thanks, 
Konrad




Re: Cassandra Upgrade with Different Protocol Version

2018-07-05 Thread Jeff Jirsa



> On Jul 5, 2018, at 12:45 PM, Anuj Wadehra  
> wrote:
> 
> Hi,
> 
> I would like to know how people are doing rolling upgrades of Cassandra 
> clusters when there is a change in native protocol version, say from 2.1 to 
> 3.11. During a rolling upgrade, if the client application is restarted on 
> nodes, the client driver may first contact an upgraded Cassandra node with v4 
> and permanently mark all old Cassandra nodes on v3 as down. This may lead to 
> request failures. Datastax recommends two ways to deal with this:
> 
> 1. Before the upgrade, set the protocol version to the lower version, and move 
> to the higher version once the entire cluster is upgraded.
> 2. Make sure the driver only contacts upgraded Cassandra nodes during the 
> rolling upgrade.

3. Make sure the driver only contacts non-upgraded nodes during the upgrade.

> 
> The second workaround will lead to failures, as you may not be able to meet the 
> required consistency for some time.

That definitely shouldn’t be true. Querying a 3.x node will internally query 
the 2.1 nodes and translate it. The 3.0 node will talk to the 2.1 instances 
using the 2.1 internode messaging protocol. 


> 
> Let's consider the first workaround. Now imagine an application where the 
> protocol version is not configurable and the code uses the default protocol 
> version. You cannot apply the first workaround because you have to upgrade your 
> application on all nodes to first make the protocol version configurable. How 
> would you upgrade such a cluster without downtime? Thoughts?

You could try turning off the native protocol on the upgraded hosts until you have 
enough of them to serve the load, then switch native on for the 3.0 hosts, let 
connections move, and then turn native off for 2.1.

Alternatively, depending on your driver (and its discovery mechanism), you may 
be able to start some instances that only listen on the v3 protocol without 
owning any data (-Dcassandra.join_ring=false) and let clients connect there, 
then bounce through the cluster as needed.



Re: Cassandra Upgrade with Different Protocol Version

2018-07-05 Thread Jeff Jirsa
There is replication between 2.1 and 3.x, but not hints. You will have to 
repair past the window, but you should be doing that anyway if you care about 
tombstones doing the right thing

Read quorum with 2/3 in either version should work fine - if it gives you an 
error please open a JIRA with steps to repro



-- 
Jeff Jirsa


> On Jul 5, 2018, at 6:39 PM, James Shaw  wrote:
> 
> other concerns:
> there is no replication between 2.1 and 3.11; data is stored in hints, and 
> hints are replayed only when the remote is on the same version. You have to do 
> a repair if the outage is over the hint window. If you read at quorum with 2/3, 
> you will get an error.
> 
> in case of a rollback to 2.1, you cannot read new-version 3.11 data files, but 
> with an online rolling upgrade, some new data is in the new-version format.
> 
> if the hardlink snapshot is not copied to another device, then a disk failure 
> may cause data loss (since some data may have just 1 copy during the upgrade 
> because of no replication).
> 
> 
>> On Thu, Jul 5, 2018 at 8:13 PM, kooljava2  
>> wrote:
>> Hello Anuj,
>> 
>> The 2nd workaround should work, as the app will auto-discover all the other 
>> nodes. It's the first contact the app makes with a node that determines the 
>> protocol version. So if you remove the newer-version nodes from the app 
>> configuration after startup, it will auto-discover the newer nodes as 
>> well.
>> 
>> Thank you,
>> TS. 
>> 
>> On Thursday, 5 July 2018, 12:45:39 GMT-7, Anuj Wadehra 
>>  wrote:
>> 
>> 
>> Hi,
>> 
>> I would like to know how people are doing rolling upgrades of Cassandra 
>> clusters when there is a change in native protocol version, say from 2.1 to 
>> 3.11. During a rolling upgrade, if the client application is restarted on 
>> nodes, the client driver may first contact an upgraded Cassandra node with v4 
>> and permanently mark all old Cassandra nodes on v3 as down. This may lead to 
>> request failures. Datastax recommends two ways to deal with this:
>> 
>> 1. Before the upgrade, set the protocol version to the lower version, and move 
>> to the higher version once the entire cluster is upgraded.
>> 2. Make sure the driver only contacts upgraded Cassandra nodes during the 
>> rolling upgrade.
>> 
>> The second workaround will lead to failures, as you may not be able to meet 
>> the required consistency for some time.
>> 
>> Let's consider the first workaround. Now imagine an application where the 
>> protocol version is not configurable and the code uses the default protocol 
>> version. You cannot apply the first workaround because you have to upgrade 
>> your application on all nodes to first make the protocol version configurable. 
>> How would you upgrade such a cluster without downtime? Thoughts?
>> 
>> Thanks
>> Anuj
>> 
>> 
> 


Re: Cassandra Upgrade with Different Protocol Version

2018-07-05 Thread James Shaw
other concerns:
there is no replication between 2.1 and 3.11; data is stored in hints, and
hints are replayed only when the remote is on the same version. You have to do
a repair if the outage is over the hint window. If you read at quorum with 2/3,
you will get an error.

in case of a rollback to 2.1, you cannot read new-version 3.11 data files, but
with an online rolling upgrade, some new data is in the new-version format.

if the hardlink snapshot is not copied to another device, then a disk failure
may cause data loss (since some data may have just 1 copy during the upgrade
because of no replication).


On Thu, Jul 5, 2018 at 8:13 PM, kooljava2 
wrote:

> Hello Anuj,
>
> The 2nd workaround should work, as the app will auto-discover all the other
> nodes. It's the first contact the app makes with a node that determines the
> protocol version. So if you remove the newer-version nodes from the app
> configuration after startup, it will auto-discover the newer nodes as
> well.
>
> Thank you,
> TS.
>
> On Thursday, 5 July 2018, 12:45:39 GMT-7, Anuj Wadehra <
> anujw_2...@yahoo.co.in.INVALID> wrote:
>
>
> Hi,
>
> I would like to know how people are doing rolling upgrades of Cassandra
> clusters when there is a change in native protocol version, say from 2.1 to
> 3.11. During a rolling upgrade, if the client application is restarted on nodes,
> the client driver may first contact an upgraded Cassandra node with v4 and
> permanently mark all old Cassandra nodes on v3 as down. This may lead to
> request failures. Datastax recommends two ways to deal with this:
>
> 1. Before the upgrade, set the protocol version to the lower version, and
> move to the higher version once the entire cluster is upgraded.
> 2. Make sure the driver only contacts upgraded Cassandra nodes during the
> rolling upgrade.
>
> The second workaround will lead to failures, as you may not be able to meet
> the required consistency for some time.
>
> Let's consider the first workaround. Now imagine an application where the
> protocol version is not configurable and the code uses the default protocol
> version. You cannot apply the first workaround because you have to upgrade
> your application on all nodes to first make the protocol version configurable.
> How would you upgrade such a cluster without downtime? Thoughts?
>
> Thanks
> Anuj
>
>
>


Re: Cassandra Upgrade with Different Protocol Version

2018-07-05 Thread kooljava2
Hello Anuj,

The 2nd workaround should work, as the app will auto-discover all the other 
nodes. It's the first contact the app makes with a node that determines the 
protocol version. So if you remove the newer-version nodes from the app 
configuration after startup, it will auto-discover the newer nodes as well.

Thank you,
TS.
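The negotiation behavior described above can be modeled with a tiny simulation:
the driver locks its protocol version to whatever the first contact point
supports, and nodes that cannot speak that version become unusable. All names
here are illustrative, not a real driver API.

```python
# Toy model: the driver fixes its protocol version at first contact, so
# contacting an upgraded node first strands the not-yet-upgraded nodes.
def connect(contact_points, nodes, pinned_version=None):
    """Return (negotiated_version, usable_hosts).

    nodes maps host -> max native protocol version that host supports.
    """
    first = contact_points[0]
    # Version is fixed at first contact unless explicitly pinned.
    version = pinned_version if pinned_version else nodes[first]
    usable = {host for host, v in nodes.items() if v >= version}
    return version, usable

# Mid-upgrade cluster: one node already speaks v4, two still speak only v3.
nodes = {"10.0.0.1": 4, "10.0.0.2": 3, "10.0.0.3": 3}

# Contacting the upgraded node first negotiates v4 and strands the old nodes.
v, usable = connect(["10.0.0.1"], nodes)

# Workaround 1: pin the lower version until the whole cluster is upgraded.
v_pinned, usable_pinned = connect(["10.0.0.1"], nodes, pinned_version=3)
```

This is why workaround 1 (pinning the version) keeps the whole cluster reachable
while workaround 2 depends entirely on which node is contacted first.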

On Thursday, 5 July 2018, 12:45:39 GMT-7, Anuj Wadehra 
 wrote:  
 
Hi,

I would like to know how people are doing rolling upgrades of Cassandra clusters 
when there is a change in native protocol version, say from 2.1 to 3.11. During a 
rolling upgrade, if the client application is restarted on nodes, the client 
driver may first contact an upgraded Cassandra node with v4 and permanently mark 
all old Cassandra nodes on v3 as down. This may lead to request failures. 
Datastax recommends two ways to deal with this:

1. Before the upgrade, set the protocol version to the lower version, and move to 
the higher version once the entire cluster is upgraded.
2. Make sure the driver only contacts upgraded Cassandra nodes during the rolling 
upgrade.

The second workaround will lead to failures, as you may not be able to meet the 
required consistency for some time.

Let's consider the first workaround. Now imagine an application where the 
protocol version is not configurable and the code uses the default protocol 
version. You cannot apply the first workaround because you have to upgrade your 
application on all nodes to first make the protocol version configurable. How 
would you upgrade such a cluster without downtime? Thoughts?

Thanks,
Anuj

  

Cassandra Upgrade with Different Protocol Version

2018-07-05 Thread Anuj Wadehra
Hi,

I would like to know how people are doing rolling upgrades of Cassandra clusters 
when there is a change in native protocol version, say from 2.1 to 3.11. During a 
rolling upgrade, if the client application is restarted on nodes, the client 
driver may first contact an upgraded Cassandra node with v4 and permanently mark 
all old Cassandra nodes on v3 as down. This may lead to request failures. 
Datastax recommends two ways to deal with this:

1. Before the upgrade, set the protocol version to the lower version, and move to 
the higher version once the entire cluster is upgraded.
2. Make sure the driver only contacts upgraded Cassandra nodes during the rolling 
upgrade.

The second workaround will lead to failures, as you may not be able to meet the 
required consistency for some time.

Let's consider the first workaround. Now imagine an application where the 
protocol version is not configurable and the code uses the default protocol 
version. You cannot apply the first workaround because you have to upgrade your 
application on all nodes to first make the protocol version configurable. How 
would you upgrade such a cluster without downtime? Thoughts?

Thanks,
Anuj



Re: Cassandra upgrade from 2.1 to 3.0

2018-05-14 Thread kooljava2
We are using the DataStax Java driver 1.5.0.
Thank you.

On Saturday, 12 May 2018, 10:37:04 GMT-7, Jeff Jirsa  
wrote:  
 
 I haven't seen this before, but I have a guess.
What client/driver are you using?
Are you using a prepared statement that has every column listed for the update, 
and leaving the un-set columns as null? If so, the null is being translated 
into a delete, which is clearly not what you want.
The differentiation between UNSET and NULL went into 2.2 
(https://issues.apache.org/jira/browse/CASSANDRA-7304), and most drivers have 
been updated to know the difference (https://github.com/gocql/gocql/issues/861, 
https://datastax-oss.atlassian.net/browse/JAVA-777, etc). I haven't read the 
patch for 7304, but I suspect that maybe there's some sort of mixup along the 
way (maybe in your driver, or maybe you upgraded the driver to support 3.0 and 
picked up a new feature you didn't realize you picked up, etc).

On Fri, May 11, 2018 at 11:26 AM, kooljava2  wrote:

After further analysis of the data, I see a pattern. For the rows which were 
updated in the last 2-3 weeks, the columns which were not part of the update 
have null values.

Has anyone encountered this issue during the upgrade? 


Thank you,

On Thursday, 10 May 2018, 19:49:50 GMT-7, kooljava2 
 wrote:  
 
  Hello Jeff,
2.1.19 to 3.0.15.
Thank you. 

On Thursday, 10 May 2018, 17:43:58 GMT-7, Jeff Jirsa  
wrote:  
 
 Which minor version of 3.0

-- Jeff Jirsa

On May 11, 2018, at 2:54 AM, kooljava2  wrote:



Hello,

Upgraded Cassandra 2.1 to 3.0. We see data in a few columns being set to 
"null". These null columns were created during the row creation time.

After looking at the data, we see a pattern where an update was done on these 
rows. Rows which were updated have data, but columns which were not part of the 
update are set to null.

 created_on    | created_by  | id
---------------+-------------+-------
          null |        null | 12345



sstabledump:- 

WARN  20:47:38,741 Small cdc volume detected at /var/lib/cassandra/cdc_raw; 
setting cdc_total_space_in_mb to 1278.  You can override this in cassandra.yaml
[
  {
    "partition" : {
  "key" : [ "12345" ],
  "position" : 5155159
    },
    "rows" : [
  {
    "type" : "row",
    "position" : 5168738,
    "deletion_info" : { "marked_deleted" : "2018-03-28T20:38:08.05Z", 
"local_delete_time" : "2018-03-28T20:38:08Z" },
    "cells" : [
  { "name" : "doc_type", "value" : false, "tstamp" : 
"2018-03-28T20:38:08.060Z" },
  { "name" : "industry", "deletion_info" : { "local_delete_time" : 
"2018-03-28T20:38:08Z" },
    "tstamp" : "2018-03-28T20:38:08.060Z"
  },
  { "name" : "last_modified_by", "value" : "12345", "tstamp" : 
"2018-03-28T20:38:08.060Z" },
  { "name" : "last_modified_date", "value" : "2018-03-28 
20:38:08.059Z", "tstamp" : "2018-03-28T20:38:08.060Z" },
  { "name" : "locale", "deletion_info" : { "local_delete_time" : 
"2018-03-28T20:38:08Z" },
    "tstamp" : "2018-03-28T20:38:08.060Z"
  },
  { "name" : "postal_code", "deletion_info" : { "local_delete_time" : 
"2018-03-28T20:38:08Z" },
    "tstamp" : "2018-03-28T20:38:08.060Z"
  },
  { "name" : "ticket", "deletion_info" : { "marked_deleted" : 
"2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z" } },
  { "name" : "ticket", "path" : [ "TEMP_DATA" ], "value" : 
"{\"name\":\"TEMP_DATA\",\"ticket\":\"a42638dae8350e889f2603be1427ac6f5dec5e486d4db164a76bf80820cdf68d635cff5e7d555e6d4eabb9b5b82597b68bec0fcd735fcca\",\"lastRenewedDate\":\"2018-03-28T20:38:08Z\"}", "tstamp" : "2018-03-28T20:38:08.060Z" },
  { "name" : "ticket", "path" : [ "TEMP_TEMP2" ], "value" : 
"{\"name\":\"TEMP_TEMP2\",\"ticket\":\"a4263b7350d1f2683\",\"lastRenewedDate\":\"2018-03-28T20:38:07Z\"}", "tstamp" : "2018-03-28T20:38:08.060Z" },
  { "name" : "ppstatus_pf", "deletion_info" : { "marked_deleted" : 
"2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z" } },
  { "name" : "ppstatus_pers", "deletion_info" : { "marked_deleted" : 
"2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z" } }
    ]
  }
    ]
  }
]WARN  20:47:41,325 Small cdc volume detected at /var/lib/cassandra/cdc_raw; 
setting cdc_total_space_in_mb to 1278.  You can override this in cassandra.yaml
[
  {
    "partition" : {
  "key" : [ "12345" ],
  "position" : 18743072
    },
    "rows" : [
  {
    "type" : "row",
    "position" : 18751808,
    "liveness_info" : { "tstamp" : "2017-10-25T10:22:41.612Z" },
    "cells" : [
  { "name" : 

Re: Cassandra upgrade from 2.1 to 3.0

2018-05-12 Thread Jeff Jirsa
I haven't seen this before, but I have a guess.

What client/driver are you using?

Are you using a prepared statement that has every column listed for the
update, and leaving the un-set columns as null? If so, the null is being
translated into a delete, which is clearly not what you want.

The differentiation between UNSET and NULL went into 2.2 (
https://issues.apache.org/jira/browse/CASSANDRA-7304 ) , and most drivers
have been updated to know the difference (
https://github.com/gocql/gocql/issues/861 ,
https://datastax-oss.atlassian.net/browse/JAVA-777 , etc). I haven't read
the patch for 7304, but I suspect that maybe there's some sort of mixup
along the way (maybe in your driver, or maybe you upgraded the driver to
support 3.0 and picked up a new feature you didn't realize you picked up,
etc)
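The UNSET-vs-NULL distinction from CASSANDRA-7304 can be illustrated with a toy
simulation over a plain dict; this is not the real driver API, and the UNSET
sentinel and function names here are purely illustrative. Binding None writes a
tombstone (the "null column" symptom in this thread), while binding UNSET leaves
the stored value alone.

```python
# Sentinel meaning "do not write this column at all" (stand-in for the
# driver's real UNSET value introduced with CASSANDRA-7304).
UNSET = object()

def apply_update(row, bound):
    """Apply a prepared-statement-style update to a row dict."""
    for column, value in bound.items():
        if value is UNSET:
            continue            # no cell written, the old value survives
        row[column] = value     # a real value, or None acting as a delete
    return row

row = {"created_by": "12345", "created_on": "2017-10-25", "id": "12345"}

# Pre-2.2-style binding: columns the update didn't touch get bound to null,
# so their existing values are tombstoned away.
legacy = apply_update(dict(row), {"created_by": None, "created_on": None})

# With UNSET support, the untouched columns keep their original values.
modern = apply_update(dict(row), {"created_by": UNSET, "created_on": UNSET})
```

This matches the symptom described earlier in the thread: rows touched by a
null-binding update "lose" the columns that were not part of the update.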


On Fri, May 11, 2018 at 11:26 AM, kooljava2 
wrote:

> After further analysis of the data, I see a pattern. For the rows which were
> updated in the last 2-3 weeks, the columns which were not part of the update
> have null values.
>
> Has anyone encountered this issue during the upgrade?
>
>
> Thank you,
>
>
> On Thursday, 10 May 2018, 19:49:50 GMT-7, kooljava2
>  wrote:
>
>
> Hello Jeff,
>
> 2.1.19 to 3.0.15.
>
> Thank you.
>
> On Thursday, 10 May 2018, 17:43:58 GMT-7, Jeff Jirsa 
> wrote:
>
>
> Which minor version of 3.0
>
> --
> Jeff Jirsa
>
>
> On May 11, 2018, at 2:54 AM, kooljava2 
> wrote:
>
>
> Hello,
>
> Upgraded Cassandra 2.1 to 3.0. We see data in a few columns being
> set to "null". These null columns were created during the row creation time.
>
> After looking at the data, we see a pattern where an update was done on these
> rows. Rows which were updated have data, but columns which were not part of the
> update are set to null.
>
>  created_on    | created_by  | id
> ---------------+-------------+-------
>           null |        null | 12345
>
>
>
> sstabledump:-
>
> WARN  20:47:38,741 Small cdc volume detected at
> /var/lib/cassandra/cdc_raw; setting cdc_total_space_in_mb to 1278.  You can
> override this in cassandra.yaml
> [
>   {
> "partition" : {
>   "key" : [ "12345" ],
>   "position" : 5155159
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 5168738,
> "deletion_info" : { "marked_deleted" :
> "2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z"
> },
> "cells" : [
>   { "name" : "doc_type", "value" : false, "tstamp" :
> "2018-03-28T20:38:08.060Z" },
>   { "name" : "industry", "deletion_info" : { "local_delete_time" :
> "2018-03-28T20:38:08Z" },
> "tstamp" : "2018-03-28T20:38:08.060Z"
>   },
>   { "name" : "last_modified_by", "value" : "12345", "tstamp" :
> "2018-03-28T20:38:08.060Z" },
>   { "name" : "last_modified_date", "value" : "2018-03-28
> 20:38:08.059Z", "tstamp" : "2018-03-28T20:38:08.060Z" },
>   { "name" : "locale", "deletion_info" : { "local_delete_time" :
> "2018-03-28T20:38:08Z" },
> "tstamp" : "2018-03-28T20:38:08.060Z"
>   },
>   { "name" : "postal_code", "deletion_info" : {
> "local_delete_time" : "2018-03-28T20:38:08Z" },
> "tstamp" : "2018-03-28T20:38:08.060Z"
>   },
>   { "name" : "ticket", "deletion_info" : { "marked_deleted" :
> "2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z"
> } },
>   { "name" : "ticket", "path" : [ "TEMP_DATA" ], "value" :
> "{\"name\":\"TEMP_DATA\",\"ticket\":\"a42638dae8350e889f2603be1427ac
> 6f5dec5e486d4db164a76bf80820cdf68d635cff5e7d555e6d4eabb9b5b8
> 2597b68bec0fcd735fcca\",\"lastRenewedDate\":\"2018-03-28T20:38:08Z\"}",
> "tstamp" : "2018-03-28T20:38:08.060Z" },
>   { "name" : "ticket", "path" : [ "TEMP_TEMP2" ], "value" :
> "{\"name\":\"TEMP_TEMP2\",\"ticket\":\"a4263b7350d1f2683\"
> ,\"lastRenewedDate\":\"2018-03-28T20:38:07Z\"}", "tstamp" :
> "2018-03-28T20:38:08.060Z" },
>   { "name" : "ppstatus_pf", "deletion_info" : { "marked_deleted" :
> "2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z"
> } },
>   { "name" : "ppstatus_pers", "deletion_info" : { "marked_deleted"
> : "2018-03-28T20:38:08.05Z", "local_delete_time" :
> "2018-03-28T20:38:08Z" } }
> ]
>   }
> ]
>   }
> ]WARN  20:47:41,325 Small cdc volume detected at
> /var/lib/cassandra/cdc_raw; setting cdc_total_space_in_mb to 1278.  You can
> override this in cassandra.yaml
> [
>   {
> "partition" : {
>   "key" : [ "12345" ],
>   "position" : 18743072
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 18751808,
> "liveness_info" : { "tstamp" : "2017-10-25T10:22:41.612Z" },
> "cells" : [
>   

Re: Cassandra upgrade from 2.1 to 3.0

2018-05-11 Thread kooljava2
After further analysis of the data, I see a pattern. For the rows which were 
updated in the last 2-3 weeks, the columns which were not part of the update 
have null values.

Has anyone encountered this issue during the upgrade? 


Thank you,

On Thursday, 10 May 2018, 19:49:50 GMT-7, kooljava2 
 wrote:  
 
  Hello Jeff,
2.1.19 to 3.0.15.
Thank you. 

On Thursday, 10 May 2018, 17:43:58 GMT-7, Jeff Jirsa  
wrote:  
 
 Which minor version of 3.0

-- Jeff Jirsa

On May 11, 2018, at 2:54 AM, kooljava2  wrote:



Hello,

Upgraded Cassandra 2.1 to 3.0. We see data in a few columns being set to 
"null". These null columns were created during the row creation time.

After looking at the data, we see a pattern where an update was done on these 
rows. Rows which were updated have data, but columns which were not part of the 
update are set to null.

 created_on    | created_by  | id
---------------+-------------+-------
          null |        null | 12345



sstabledump:- 

WARN  20:47:38,741 Small cdc volume detected at /var/lib/cassandra/cdc_raw; 
setting cdc_total_space_in_mb to 1278.  You can override this in cassandra.yaml
[
  {
    "partition" : {
  "key" : [ "12345" ],
  "position" : 5155159
    },
    "rows" : [
  {
    "type" : "row",
    "position" : 5168738,
    "deletion_info" : { "marked_deleted" : "2018-03-28T20:38:08.05Z", 
"local_delete_time" : "2018-03-28T20:38:08Z" },
    "cells" : [
  { "name" : "doc_type", "value" : false, "tstamp" : 
"2018-03-28T20:38:08.060Z" },
  { "name" : "industry", "deletion_info" : { "local_delete_time" : 
"2018-03-28T20:38:08Z" },
    "tstamp" : "2018-03-28T20:38:08.060Z"
  },
  { "name" : "last_modified_by", "value" : "12345", "tstamp" : 
"2018-03-28T20:38:08.060Z" },
  { "name" : "last_modified_date", "value" : "2018-03-28 
20:38:08.059Z", "tstamp" : "2018-03-28T20:38:08.060Z" },
  { "name" : "locale", "deletion_info" : { "local_delete_time" : 
"2018-03-28T20:38:08Z" },
    "tstamp" : "2018-03-28T20:38:08.060Z"
  },
  { "name" : "postal_code", "deletion_info" : { "local_delete_time" : 
"2018-03-28T20:38:08Z" },
    "tstamp" : "2018-03-28T20:38:08.060Z"
  },
  { "name" : "ticket", "deletion_info" : { "marked_deleted" : 
"2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z" } },
  { "name" : "ticket", "path" : [ "TEMP_DATA" ], "value" : 
"{\"name\":\"TEMP_DATA\",\"ticket\":\"a42638dae8350e889f2603be1427ac6f5dec5e486d4db164a76bf80820cdf68d635cff5e7d555e6d4eabb9b5b82597b68bec0fcd735fcca\",\"lastRenewedDate\":\"2018-03-28T20:38:08Z\"}",
 "tstamp" : "2018-03-28T20:38:08.060Z" },
  { "name" : "ticket", "path" : [ "TEMP_TEMP2" ], "value" : 
"{\"name\":\"TEMP_TEMP2\",\"ticket\":\"a4263b7350d1f2683\",\"lastRenewedDate\":\"2018-03-28T20:38:07Z\"}",
 "tstamp" : "2018-03-28T20:38:08.060Z" },
  { "name" : "ppstatus_pf", "deletion_info" : { "marked_deleted" : 
"2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z" } },
  { "name" : "ppstatus_pers", "deletion_info" : { "marked_deleted" : 
"2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z" } }
    ]
  }
    ]
  }
]
WARN  20:47:41,325 Small cdc volume detected at /var/lib/cassandra/cdc_raw; 
setting cdc_total_space_in_mb to 1278.  You can override this in cassandra.yaml
[
  {
    "partition" : {
  "key" : [ "12345" ],
  "position" : 18743072
    },
    "rows" : [
  {
    "type" : "row",
    "position" : 18751808,
    "liveness_info" : { "tstamp" : "2017-10-25T10:22:41.612Z" },
    "cells" : [
  { "name" : "created_by", "value" : "12345" },
  { "name" : "created_on", "value" : "2017-10-25 10:22:41.637Z" },
  { "name" : "doc_type", "value" : false, "tstamp" : 
"2017-10-25T10:22:42.487Z" },
  { "name" : "last_modified_by", "value" : "12345", "tstamp" : 
"2017-10-25T10:22:42.487Z" },
  { "name" : "last_modified_date", "value" : "2017-11-10 
00:09:52.668Z", "tstamp" : "2017-11-10T00:09:52.668Z" },
  { "name" : "per_type", "value" : "user" },
  { "name" : "lists", "path" : [ "cn.cncn.bpnp" ], "value" : 
"[\"::accid:ab\",\"::accid:e1\",\"::accid:d2\",\"::accid:d3\",\"::accid:f3\",\"::accid:g3\",\"::accid:f4\",\"::accid:9c486ae5-00b2-3c63-af70-cff2950c4181\"]",
 "tstamp" : "2017-10-25T10:22:42.782Z" },
  { "name" : "ticket", "path" : [ "TEMP_TEMP2" ], "value" : 

Re: Cassandra upgrade from 2.1 to 3.0

2018-05-10 Thread kooljava2
 Hello Jeff,
2.1.19 to 3.0.15.
Thank you. 

On Thursday, 10 May 2018, 17:43:58 GMT-7, Jeff Jirsa  
wrote:  
 
 Which minor version of 3.0?

-- Jeff Jirsa

On May 11, 2018, at 2:54 AM, kooljava2  wrote:



Hello,

Upgraded Cassandra 2.1 to 3.0. We see the data in a few columns being set to 
"null", even though these columns were populated at row creation time.

After looking at the data, we see a pattern where an update was done on these 
rows. Columns that were part of the update have data, but columns that were not 
part of the update are set to null.

 created_on | created_by | id
------------+------------+-------
       null |       null | 12345



sstabledump:- 

WARN  20:47:38,741 Small cdc volume detected at /var/lib/cassandra/cdc_raw; 
setting cdc_total_space_in_mb to 1278.  You can override this in cassandra.yaml
[
  {
    "partition" : {
  "key" : [ "12345" ],
  "position" : 5155159
    },
    "rows" : [
  {
    "type" : "row",
    "position" : 5168738,
    "deletion_info" : { "marked_deleted" : "2018-03-28T20:38:08.05Z", 
"local_delete_time" : "2018-03-28T20:38:08Z" },
    "cells" : [
  { "name" : "doc_type", "value" : false, "tstamp" : 
"2018-03-28T20:38:08.060Z" },
  { "name" : "industry", "deletion_info" : { "local_delete_time" : 
"2018-03-28T20:38:08Z" },
    "tstamp" : "2018-03-28T20:38:08.060Z"
  },
  { "name" : "last_modified_by", "value" : "12345", "tstamp" : 
"2018-03-28T20:38:08.060Z" },
  { "name" : "last_modified_date", "value" : "2018-03-28 
20:38:08.059Z", "tstamp" : "2018-03-28T20:38:08.060Z" },
  { "name" : "locale", "deletion_info" : { "local_delete_time" : 
"2018-03-28T20:38:08Z" },
    "tstamp" : "2018-03-28T20:38:08.060Z"
  },
  { "name" : "postal_code", "deletion_info" : { "local_delete_time" : 
"2018-03-28T20:38:08Z" },
    "tstamp" : "2018-03-28T20:38:08.060Z"
  },
  { "name" : "ticket", "deletion_info" : { "marked_deleted" : 
"2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z" } },
  { "name" : "ticket", "path" : [ "TEMP_DATA" ], "value" : 
"{\"name\":\"TEMP_DATA\",\"ticket\":\"a42638dae8350e889f2603be1427ac6f5dec5e486d4db164a76bf80820cdf68d635cff5e7d555e6d4eabb9b5b82597b68bec0fcd735fcca\",\"lastRenewedDate\":\"2018-03-28T20:38:08Z\"}",
 "tstamp" : "2018-03-28T20:38:08.060Z" },
  { "name" : "ticket", "path" : [ "TEMP_TEMP2" ], "value" : 
"{\"name\":\"TEMP_TEMP2\",\"ticket\":\"a4263b7350d1f2683\",\"lastRenewedDate\":\"2018-03-28T20:38:07Z\"}",
 "tstamp" : "2018-03-28T20:38:08.060Z" },
  { "name" : "ppstatus_pf", "deletion_info" : { "marked_deleted" : 
"2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z" } },
  { "name" : "ppstatus_pers", "deletion_info" : { "marked_deleted" : 
"2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z" } }
    ]
  }
    ]
  }
]
WARN  20:47:41,325 Small cdc volume detected at /var/lib/cassandra/cdc_raw; 
setting cdc_total_space_in_mb to 1278.  You can override this in cassandra.yaml
[
  {
    "partition" : {
  "key" : [ "12345" ],
  "position" : 18743072
    },
    "rows" : [
  {
    "type" : "row",
    "position" : 18751808,
    "liveness_info" : { "tstamp" : "2017-10-25T10:22:41.612Z" },
    "cells" : [
  { "name" : "created_by", "value" : "12345" },
  { "name" : "created_on", "value" : "2017-10-25 10:22:41.637Z" },
  { "name" : "doc_type", "value" : false, "tstamp" : 
"2017-10-25T10:22:42.487Z" },
  { "name" : "last_modified_by", "value" : "12345", "tstamp" : 
"2017-10-25T10:22:42.487Z" },
  { "name" : "last_modified_date", "value" : "2017-11-10 
00:09:52.668Z", "tstamp" : "2017-11-10T00:09:52.668Z" },
  { "name" : "per_type", "value" : "user" },
  { "name" : "lists", "path" : [ "cn.cncn.bpnp" ], "value" : 
"[\"::accid:ab\",\"::accid:e1\",\"::accid:d2\",\"::accid:d3\",\"::accid:f3\",\"::accid:g3\",\"::accid:f4\",\"::accid:9c486ae5-00b2-3c63-af70-cff2950c4181\"]",
 "tstamp" : "2017-10-25T10:22:42.782Z" },
  { "name" : "ticket", "path" : [ "TEMP_TEMP2" ], "value" : 
"{\"name\":\"TEMP_TEMP2\",\"ticket\":\"a4263820be49c3a222e0248532bcefc80c773194a804057561a97382e595b51f36bb46b8675589fc89dea4a5c0ceb944d63861b39d63c0067161e84c79328077c650df33530c7625857444711dc4b1051638123694ba6e9e29b1f906663f3\",\"lastRenewedDate\":\"2017-11-10T00:09:52Z\"}",
 "tstamp" : "2017-11-10T00:09:52.668Z" }
    ]
  }
    ]
  }
]
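
A column that reads back as null in cqlsh either has no cell at all in the
sstable or, as in the first dump above, has a cell carrying `deletion_info` (a
per-cell tombstone) instead of a `value`. A small sketch, using a sample
abridged from the dump above (not the real sstable), that separates live cells
from tombstoned ones in sstabledump's JSON output:

```python
import json

# Minimal sample shaped like the dump above (values abridged, not real data)
dump = json.loads("""
[
  {
    "partition": { "key": ["12345"], "position": 0 },
    "rows": [
      {
        "type": "row",
        "position": 1,
        "cells": [
          { "name": "last_modified_by", "value": "12345",
            "tstamp": "2018-03-28T20:38:08.060Z" },
          { "name": "locale",
            "deletion_info": { "local_delete_time": "2018-03-28T20:38:08Z" },
            "tstamp": "2018-03-28T20:38:08.060Z" }
        ]
      }
    ]
  }
]
""")

def classify_cells(partitions):
    """Split each row's cells into live cells and per-cell tombstones."""
    out = []
    for part in partitions:
        for row in part.get("rows", []):
            cells = row.get("cells", [])
            live = [c["name"] for c in cells if "value" in c]
            dead = [c["name"] for c in cells if "deletion_info" in c]
            out.append((part["partition"]["key"], live, dead))
    return out

for key, live, dead in classify_cells(dump):
    print(key, "live:", live, "tombstoned:", dead)
```

Comparing the tombstones' timestamps against the row's original write time can
show whether a later update, rather than the upgrade itself, deleted the
columns.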
  

Re: Cassandra upgrade from 2.1 to 3.0

2018-05-10 Thread Jeff Jirsa
Which minor version of 3.0?

-- 
Jeff Jirsa


> On May 11, 2018, at 2:54 AM, kooljava2  wrote:
> 
> 
> Hello,
> 
> Upgraded Cassandra 2.1 to 3.0. We see the data in a few columns being set 
> to "null", even though these columns were populated at row creation time.
> 
> After looking at the data, we see a pattern where an update was done on these 
> rows. Columns that were part of the update have data, but columns that were 
> not part of the update are set to null.
> 
>  created_on | created_by | id
> ------------+------------+-------
>        null |       null | 12345
> 
> 
> 
> sstabledump:- 
> 
> WARN  20:47:38,741 Small cdc volume detected at /var/lib/cassandra/cdc_raw; 
> setting cdc_total_space_in_mb to 1278.  You can override this in 
> cassandra.yaml
> [
>   {
> "partition" : {
>   "key" : [ "12345" ],
>   "position" : 5155159
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 5168738,
> "deletion_info" : { "marked_deleted" : "2018-03-28T20:38:08.05Z", 
> "local_delete_time" : "2018-03-28T20:38:08Z" },
> "cells" : [
>   { "name" : "doc_type", "value" : false, "tstamp" : 
> "2018-03-28T20:38:08.060Z" },
>   { "name" : "industry", "deletion_info" : { "local_delete_time" : 
> "2018-03-28T20:38:08Z" },
> "tstamp" : "2018-03-28T20:38:08.060Z"
>   },
>   { "name" : "last_modified_by", "value" : "12345", "tstamp" : 
> "2018-03-28T20:38:08.060Z" },
>   { "name" : "last_modified_date", "value" : "2018-03-28 
> 20:38:08.059Z", "tstamp" : "2018-03-28T20:38:08.060Z" },
>   { "name" : "locale", "deletion_info" : { "local_delete_time" : 
> "2018-03-28T20:38:08Z" },
> "tstamp" : "2018-03-28T20:38:08.060Z"
>   },
>   { "name" : "postal_code", "deletion_info" : { "local_delete_time" : 
> "2018-03-28T20:38:08Z" },
> "tstamp" : "2018-03-28T20:38:08.060Z"
>   },
>   { "name" : "ticket", "deletion_info" : { "marked_deleted" : 
> "2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z" } 
> },
>   { "name" : "ticket", "path" : [ "TEMP_DATA" ], "value" : 
> "{\"name\":\"TEMP_DATA\",\"ticket\":\"a42638dae8350e889f2603be1427ac6f5dec5e486d4db164a76bf80820cdf68d635cff5e7d555e6d4eabb9b5b82597b68bec0fcd735fcca\",\"lastRenewedDate\":\"2018-03-28T20:38:08Z\"}",
>  "tstamp" : "2018-03-28T20:38:08.060Z" },
>   { "name" : "ticket", "path" : [ "TEMP_TEMP2" ], "value" : 
> "{\"name\":\"TEMP_TEMP2\",\"ticket\":\"a4263b7350d1f2683\",\"lastRenewedDate\":\"2018-03-28T20:38:07Z\"}",
>  "tstamp" : "2018-03-28T20:38:08.060Z" },
>   { "name" : "ppstatus_pf", "deletion_info" : { "marked_deleted" : 
> "2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z" } 
> },
>   { "name" : "ppstatus_pers", "deletion_info" : { "marked_deleted" : 
> "2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z" } 
> }
> ]
>   }
> ]
>   }
> ]
> WARN  20:47:41,325 Small cdc volume detected at /var/lib/cassandra/cdc_raw; 
> setting cdc_total_space_in_mb to 1278.  You can override this in 
> cassandra.yaml
> [
>   {
> "partition" : {
>   "key" : [ "12345" ],
>   "position" : 18743072
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 18751808,
> "liveness_info" : { "tstamp" : "2017-10-25T10:22:41.612Z" },
> "cells" : [
>   { "name" : "created_by", "value" : "12345" },
>   { "name" : "created_on", "value" : "2017-10-25 10:22:41.637Z" },
>   { "name" : "doc_type", "value" : false, "tstamp" : 
> "2017-10-25T10:22:42.487Z" },
>   { "name" : "last_modified_by", "value" : "12345", "tstamp" : 
> "2017-10-25T10:22:42.487Z" },
>   { "name" : "last_modified_date", "value" : "2017-11-10 
> 00:09:52.668Z", "tstamp" : "2017-11-10T00:09:52.668Z" },
>   { "name" : "per_type", "value" : "user" },
>   { "name" : "lists", "path" : [ "cn.cncn.bpnp" ], "value" : 
> "[\"::accid:ab\",\"::accid:e1\",\"::accid:d2\",\"::accid:d3\",\"::accid:f3\",\"::accid:g3\",\"::accid:f4\",\"::accid:9c486ae5-00b2-3c63-af70-cff2950c4181\"]",
>  "tstamp" : "2017-10-25T10:22:42.782Z" },
>   { "name" : "ticket", "path" : [ "TEMP_TEMP2" ], "value" : 
> "{\"name\":\"TEMP_TEMP2\",\"ticket\":\"a4263820be49c3a222e0248532bcefc80c773194a804057561a97382e595b51f36bb46b8675589fc89dea4a5c0ceb944d63861b39d63c0067161e84c79328077c650df33530c7625857444711dc4b1051638123694ba6e9e29b1f906663f3\",\"lastRenewedDate\":\"2017-11-10T00:09:52Z\"}",
>  "tstamp" : "2017-11-10T00:09:52.668Z" }
> ]
>   }
> ]
>   }
> ]


RE: Cassandra upgrade from 2.2.8 to 3.10

2018-03-28 Thread Fd Habash
Thank you.

In regards to my second inquiry: as we plan for C* upgrades, I have not found 
NEWS.txt to always spell out the possible upgrade paths. Is there a rule of 
thumb, or maybe an official reference, for upgrade paths?




Thank you

From: Alexander Dejanovski
Sent: Wednesday, March 28, 2018 1:58 PM
To: user@cassandra.apache.org
Subject: Re: Cassandra upgrade from 2.2.8 to 3.10

You can perform an upgrade from 2.2.x straight to 3.11.2, but the OP suggests 
adding nodes running 3.10 to a cluster that runs 2.2.8, which is why Jeff says 
it won't work.

I see no reason to upgrade to 3.10 and not 3.11.2 by the way.

On Wed, Mar 28, 2018 at 5:10 PM Fred Habash <fmhab...@gmail.com> wrote:
Hi ...
I'm finding anecdotal evidence on the internet that we are able to upgrade 
2.2.8 to the latest 3.11.2. The post below indicates that you can upgrade to 
the latest 3.x from 2.1.9 because 3.x no longer requires a 'structured upgrade 
path'.

I just want to confirm that such an upgrade is supported. If yes, where can I 
find official documentation showing upgrade paths across releases?

https://stackoverflow.com/questions/42094935/apache-cassandra-upgrade-3-x-from-2-1

Thanks 

On Mon, Aug 7, 2017 at 5:58 PM, ZAIDI, ASAD A <az1...@att.com> wrote:
Hi folks, I have a question on the upgrade method I'm planning to execute.
 
I'm planning to upgrade from Apache Cassandra 2.2.8 to release 3.10.
 
My Cassandra cluster is configured as one rack with two datacenters:
 
1.   DC1 has 4 nodes 
2.   DC2 has 16 nodes
 
We’re adding another 12 nodes and would eventually need to remove those 4 nodes 
in DC1.
 
I'm thinking of adding a third datacenter, DC3, with 12 nodes running Apache 
Cassandra 3.10. Then I would start upgrading the seed nodes first in DC1 & DC2; 
once all 20 nodes (DC1 plus DC2) are upgraded, I can safely remove the 4 DC1 
nodes.
Can you guys please let me know if this approach would work? I'm concerned that 
having mixed versions on Cassandra nodes may cause issues, e.g. in streaming 
data/sstables from the existing DCs to the newly created third DC running 3.10. 
Will the nodes in DC3 join the cluster with data without issues?
 
Thanks/Asad
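
Before and after standing up a new DC on a different version, it is worth
confirming that every node reports the same schema version. A sketch that
parses `nodetool describecluster`-style output; the sample text is an
assumption about the output shape, so check it against your Cassandra version:

```python
import re

# Sample shaped like `nodetool describecluster` output (assumed format;
# hostnames and UUIDs below are made up for illustration)
sample = """\
Cluster Information:
    Name: prod
    Snitch: org.apache.cassandra.locator.GossipingPropertyFileSnitch
    Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
    Schema versions:
        86afa796-d883-3932-aa73-6b017cef0d19: [10.0.0.1, 10.0.0.2]
        c2a2bb4f-7d31-3fb8-a216-00b41a643650: [10.0.0.3]
"""

def schema_versions(text):
    """Map each schema-version UUID to the list of node IPs reporting it."""
    versions = {}
    for uuid, ips in re.findall(r"([0-9a-f-]{36}):\s*\[([^\]]*)\]", text):
        versions[uuid] = [ip.strip() for ip in ips.split(",")]
    return versions

v = schema_versions(sample)
if len(v) > 1:
    # More than one schema version means the cluster is in disagreement
    print("schema disagreement across", sum(map(len, v.values())), "nodes:", v)
```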
 
 
 
 




-- 


Thank you ...

Fred Habash, Database Solutions Architect (Oracle OCP 8i,9i,10g,11g)

-- 
-
Alexander Dejanovski
France
@alexanderdeja

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com



Re: Cassandra upgrade from 2.2.8 to 3.10

2018-03-28 Thread Alexander Dejanovski
You can perform an upgrade from 2.2.x straight to 3.11.2, but the OP
suggests adding nodes running 3.10 to a cluster that runs 2.2.8, which is why
Jeff says it won't work.

I see no reason to upgrade to 3.10 and not 3.11.2 by the way.

On Wed, Mar 28, 2018 at 5:10 PM Fred Habash <fmhab...@gmail.com> wrote:

> Hi ...
> I'm finding anecdotal evidence on the internet that we are able to upgrade
> 2.2.8 to latest 3.11.2. Post below indicates that you can upgrade to latest
> 3.x from 2.1.9 because 3.x no longer requires 'structured upgrade path'.
>
> I just want to confirm that such upgrade is supported. If yes, where can I
> find official documentation showing upgrade path across releases.
>
>
> https://stackoverflow.com/questions/42094935/apache-cassandra-upgrade-3-x-from-2-1
>
> Thanks
>
> On Mon, Aug 7, 2017 at 5:58 PM, ZAIDI, ASAD A <az1...@att.com> wrote:
>
>> Hi folks, I’ve question on upgrade method I’m thinking to execute.
>>
>>
>>
>> I’m  planning from apache-Cassandra 2.2.8 to release 3.10.
>>
>>
>>
>> My Cassandra cluster is configured like one rack with two Datacenters
>> like:
>>
>>
>>
>> 1.   DC1 has 4 nodes
>>
>> 2.   DC2 has 16 nodes
>>
>>
>>
>> We’re adding another 12 nodes and would eventually need to remove those 4
>> nodes in DC1.
>>
>>
>>
>> I’m thinking to add another third data center with like DC3 with 12 nodes
>> having apache Cassandra 3.10 installed. Then, I start upgrading seed nodes
>> first in DC1 & DC2 – once all 20nodes in ( DC1 plus DC2) upgraded – I can
>> safely remove 4 DC1 nodes,
>>
>> can you guys please let me know if this approach would work? I’m
>> concerned if having mixed version on Cassandra nodes may  cause any issues
>> like in streaming  data/sstables from existing DC to newly created third DC
>> with version 3.10 installed, will nodes in DC3 join the cluster with data
>> without issues?
>>
>>
>>
>> Thanks/Asad
>>
>>
>>
>>
>>
>>
>>
>>
>>
>
>
>
> --
>
>
> Thank you ...
> 
> Fred Habash, Database Solutions Architect (Oracle OCP 8i,9i,10g,11g)
>
> --
-
Alexander Dejanovski
France
@alexanderdeja

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com


Re: Cassandra upgrade from 2.2.8 to 3.10

2018-03-28 Thread Fred Habash
Hi ...
I'm finding anecdotal evidence on the internet that we are able to upgrade
2.2.8 to the latest 3.11.2. The post below indicates that you can upgrade to
the latest 3.x from 2.1.9 because 3.x no longer requires a 'structured upgrade
path'.

I just want to confirm that such an upgrade is supported. If yes, where can I
find official documentation showing upgrade paths across releases?

https://stackoverflow.com/questions/42094935/apache-cassandra-upgrade-3-x-from-2-1

Thanks

On Mon, Aug 7, 2017 at 5:58 PM, ZAIDI, ASAD A <az1...@att.com> wrote:

> Hi folks, I’ve question on upgrade method I’m thinking to execute.
>
>
>
> I’m  planning from apache-Cassandra 2.2.8 to release 3.10.
>
>
>
> My Cassandra cluster is configured like one rack with two Datacenters like:
>
>
>
> 1.   DC1 has 4 nodes
>
> 2.   DC2 has 16 nodes
>
>
>
> We’re adding another 12 nodes and would eventually need to remove those 4
> nodes in DC1.
>
>
>
> I’m thinking to add another third data center with like DC3 with 12 nodes
> having apache Cassandra 3.10 installed. Then, I start upgrading seed nodes
> first in DC1 & DC2 – once all 20nodes in ( DC1 plus DC2) upgraded – I can
> safely remove 4 DC1 nodes,
>
> can you guys please let me know if this approach would work? I’m concerned
> if having mixed version on Cassandra nodes may  cause any issues like in
> streaming  data/sstables from existing DC to newly created third DC with
> version 3.10 installed, will nodes in DC3 join the cluster with data
> without issues?
>
>
>
> Thanks/Asad
>
>
>
>
>
>
>
>
>



-- 


Thank you ...

Fred Habash, Database Solutions Architect (Oracle OCP 8i,9i,10g,11g)


Re: Cassandra Upgrade and Driver compatibility

2017-12-21 Thread Jeff Jirsa
I don’t have a good answer for the driver question, but for versions:

Please go to at least 3.0.15; if you can wait a few weeks for 3.0.16, that’s 
even better.



-- 
Jeff Jirsa


> On Dec 21, 2017, at 6:47 PM, Mokkapati, Bhargav (Nokia - IN/Chennai) 
>  wrote:
> 
> Hi All,
>  
> We are planning to upgrade the Apache Cassandra and CPP driver as below.
>  
> Cassandra service version :
> Old  –  cassandra-3.0.6-1.noarch.rpm
> New – cassandra-3.0.13-1.noarch.rpm
>  
> Cassandra driver version :
> Old  – cassandra-cpp-driver-2.4.3-1.el7.centos.x86_64.rpm
> New - cassandra-cpp-driver-2.6.0-1.el7.centos.x86_64.rpm
>  
> We need a confirmation on below combinations of compatibility.
>  
> if we upgrade to Cassandra 3.0.13 version and continue with old CPP driver 
> “cassandra-cpp-driver-2.4.3-1.el7.centos.x86_64.rpm”
> does this combination work? Is there any incompatibility?
> Vice versa to case1, if we continue with old Cassandra 3.0.6 version and  
> upgraded only CPP driver to 
> “cassandra-cpp-driver-2.6.0-1.el7.centos.x86_64.rpm “
> does this combination work? Is there any incompatibility?
>  
> Also, please suggest what is the most stable version of Apache Cassandra 
> later to 3.0.13 ?
>  
> Thanks in advance..
>  
> Best Regards,
> Bhargav
>  
>  
>  
>  
>  


Cassandra Upgrade and Driver compatibility

2017-12-21 Thread Mokkapati, Bhargav (Nokia - IN/Chennai)
Hi All,

We are planning to upgrade the Apache Cassandra and CPP driver as below.

Cassandra service version :
Old  -  cassandra-3.0.6-1.noarch.rpm
New - cassandra-3.0.13-1.noarch.rpm

Cassandra driver version :
Old  - cassandra-cpp-driver-2.4.3-1.el7.centos.x86_64.rpm
New - cassandra-cpp-driver-2.6.0-1.el7.centos.x86_64.rpm

We need a confirmation on below combinations of compatibility.


  1.  If we upgrade to Cassandra 3.0.13 and continue with the old CPP driver 
"cassandra-cpp-driver-2.4.3-1.el7.centos.x86_64.rpm", does this combination 
work? Is there any incompatibility?

  2.  Vice versa to case 1: if we continue with the old Cassandra 3.0.6 and 
upgrade only the CPP driver to 
"cassandra-cpp-driver-2.6.0-1.el7.centos.x86_64.rpm", does this combination 
work? Is there any incompatibility?

Also, please suggest the most stable version of Apache Cassandra later than 
3.0.13.

Thanks in advance..

Best Regards,
Bhargav








Re: Running repair while Cassandra upgrade 2.0.X to 2.1.X

2017-12-11 Thread kurt greaves
That ticket says that streaming SSTables of older versions is supported.
Streaming is only one component of repairs, and the ticket doesn't talk about
repair at all, only bootstrap. For the most part it should work, but as Alain
said, it's probably best avoided, especially since you can avoid it (your
cluster isn't so huge that it's going to take days to upgrade). Just upgrade
your nodes, run and re-run upgradesstables until it stops kicking off upgrade
compactions, then re-start your repairs.
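
Leftover old-format files after upgradesstables can be spotted from the data
directory, since the sstable format version is embedded in the file name. A
sketch under the assumption of the usual version tokens (`jb` for 2.0, `ka` for
2.1, `la` for 2.2, `ma`/`mb`/`mc` for 3.x; verify against your own data
directories before relying on this):

```python
import re
from pathlib import Path

# Format version expected after the upgrade finishes (assumed: 3.x "mc")
CURRENT = "mc"

def sstable_version(name):
    """Extract the two-letter format version token from a Data.db filename."""
    # New-style names: <version>-<generation>-big-Data.db
    m = re.match(r"([a-z]{2})-\d+-big-Data\.db$", name)
    if m:
        return m.group(1)
    # Old-style names: <keyspace>-<table>-<version>-<generation>-Data.db
    m = re.match(r".+-([a-z]{2})-\d+-Data\.db$", name)
    return m.group(1) if m else None

def old_format_files(data_dir):
    """List Data.db files whose format version differs from CURRENT."""
    return [p.name for p in Path(data_dir).rglob("*-Data.db")
            if sstable_version(p.name) not in (None, CURRENT)]
```

Running `old_format_files("/var/lib/cassandra/data")` (path is an example)
after each upgradesstables pass shows whether another pass is needed.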


Re: Running repair while Cassandra upgrade 2.0.X to 2.1.X

2017-12-08 Thread shini gupta
Thanks Alain! During a Cassandra upgrade, we halt repair scheduling for 24
hours, as this is a good window in which to finish upgradesstables. We don't
want continuous manual monitoring of upgradesstables and then rescheduling of
repair tasks when upgradesstables successfully finishes on all the nodes. Here
is the problem with the approach: sometimes we see that upgradesstables has
finished on a node but some sstable files still exist in the old format. I want
to understand how safe it is if our repair executes after 24 hours, when old
and new sstables coexist on some nodes. My understanding of
https://issues.apache.org/jira/plugins/servlet/mobile#issue/CASSANDRA-5772
is that this is supported and nothing could go wrong. Can someone please
confirm my understanding of the JIRA and confirm that repairs can't break
anything in such scenarios?

Thanks
Shini

On Fri, 8 Dec 2017 at 2:26 PM, Alain RODRIGUEZ <arodr...@gmail.com> wrote:

>  Hi Shini,
>
> First thing that comes to my mind honestly is "why?". Why would you do
> this? It's way safer to finish upgrade then repair I would say.
>
> That being said, I can make guesses for you here, but the best would
> probably be to test it.
>
> 1. Running nodetool repair on one of the nodes while upgradesstables is
>> still executing on one or more nodes in the cluster.
>
> 2. Running nodetool repair when upgradesstables failed abruptly on some of
>> the nodes such that some sstable files are in new format while other
>> sstable files are still in old format.
>
>
> These impacts are unknown for me since I never did this, I always heard
> around it was really bad. For me both case are similar. You are going to
> repair data with distinct SSTable formats. I don't remember the differences
> between Cassandra 2.0 and 2.1.
>
> In best case it will work.
>
> But my guess is it might lead to:
> - Schema disagreement (but I heard that on schema changes on multi-version
> clusters). I don't think you would hit this one.
> - Mixing all new and old over the cluster (even on nodes with upgraded
> SSTables)...
> - Maybe directly fail if  networking changed between the 2 versions.
>
> I would stay away from repairs and upgrade first all the nodes.
>
> If that's really not doable, then I recommend you to test it (ccm / AWS /
> stage cluster, as you see fit).
>
> Sorry I cannot be more precise, impacts are not clear to me because I
> always stayed away from this kind of mixed operations. The need for
> repairing before upgrading sstables is not clear to me either. I would add
> that I found out in the past that working in a rush on Cassandra often
> creates more issues than solutions in general.
>
> C*heers,
> ---
> Alain Rodriguez - @arodream - al...@thelastpickle.com
> France / Spain
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> 2017-12-08 5:26 GMT+00:00 shini gupta <gupta.sh...@gmail.com>:
>
>> Hi
>> Can someone please answer this query?
>>
>> Thanks
>>
>> On Wed, Dec 6, 2017 at 9:58 AM, shini gupta <gupta.sh...@gmail.com>
>> wrote:
>>
>>> If we have upgraded Cassandra binaries from 2.0 to 2.1 on ALL the nodes
>>> but upgradesstable is still pending, please provide the impact of following
>>> scenarios:
>>>
>>>
>>>
>>> 1. Running nodetool repair on one of the nodes while upgradesstables is
>>> still executing on one or more nodes in the cluster.
>>>
>>> 2. Running nodetool repair when upgradesstables failed abruptly on some
>>> of the nodes such that some sstable files are in new format while other
>>> sstable files are still in old format.
>>>
>>>
>>>
>>> Even though it may not be recommended to run I/O intensive operations
>>> like repair and upgradesstables simultaneously, can we assume that both the
>>> above sceanrios are now supported and will not break anything, especially
>>> after https://issues.apache.org/jira/browse/CASSANDRA-5772 has been
>>> fixed in 2.0?
>>>
>>>
>>> Regards
>>> Shini
>>>
>>>
>>
>>
>> --
>> -Shini Gupta
>>
>> ""Trusting in God won't make the mountain smaller,
>> But will make climbing easier.
>> Do not ask God for a lighter load
>> But ask Him for a stronger back... ""
>>
>
> --
-Shini Gupta

""Trusting in God won't make the mountain smaller,
But will make climbing easier.
Do not ask God for a lighter load
But ask Him for a stronger back... ""


Re: Running repair while Cassandra upgrade 2.0.X to 2.1.X

2017-12-08 Thread Alain RODRIGUEZ
 Hi Shini,

The first thing that honestly comes to my mind is "why?". Why would you do
this? It's way safer to finish the upgrade and then repair, I would say.

That being said, I can make guesses for you here, but the best would
probably be to test it.

1. Running nodetool repair on one of the nodes while upgradesstables is
> still executing on one or more nodes in the cluster.

2. Running nodetool repair when upgradesstables failed abruptly on some of
> the nodes such that some sstable files are in new format while other
> sstable files are still in old format.


The impacts are unknown to me since I never did this; I always heard it was
really bad. To me both cases are similar: you are going to repair data with
distinct SSTable formats. I don't remember the differences between Cassandra
2.0 and 2.1.

In the best case it will work.

But my guess is it might lead to:
- Schema disagreement (though I heard of that on schema changes on
multi-version clusters). I don't think you would hit this one.
- Mixing new and old formats all over the cluster (even on nodes with upgraded
SSTables)...
- Maybe an outright failure if the internode networking changed between the 2
versions.

I would stay away from repairs and upgrade all the nodes first.

If that's really not doable, then I recommend you to test it (ccm / AWS /
stage cluster, as you see fit).

Sorry I cannot be more precise; the impacts are not clear to me because I
always stayed away from this kind of mixed operation. The need for repairing
before upgrading sstables is not clear to me either. I would add that I have
found in the past that working in a rush on Cassandra often creates more
issues than solutions in general.

C*heers,
---
Alain Rodriguez - @arodream - al...@thelastpickle.com
France / Spain

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com

2017-12-08 5:26 GMT+00:00 shini gupta :

> Hi
> Can someone please answer this query?
>
> Thanks
>
> On Wed, Dec 6, 2017 at 9:58 AM, shini gupta  wrote:
>
>> If we have upgraded Cassandra binaries from 2.0 to 2.1 on ALL the nodes
>> but upgradesstable is still pending, please provide the impact of following
>> scenarios:
>>
>>
>>
>> 1. Running nodetool repair on one of the nodes while upgradesstables is
>> still executing on one or more nodes in the cluster.
>>
>> 2. Running nodetool repair when upgradesstables failed abruptly on some
>> of the nodes such that some sstable files are in new format while other
>> sstable files are still in old format.
>>
>>
>>
>> Even though it may not be recommended to run I/O intensive operations
>> like repair and upgradesstables simultaneously, can we assume that both the
>> above sceanrios are now supported and will not break anything, especially
>> after https://issues.apache.org/jira/browse/CASSANDRA-5772 has been
>> fixed in 2.0?
>>
>>
>> Regards
>> Shini
>>
>>
>
>
> --
> -Shini Gupta
>
> ""Trusting in God won't make the mountain smaller,
> But will make climbing easier.
> Do not ask God for a lighter load
> But ask Him for a stronger back... ""
>


Re: Running repair while Cassandra upgrade 2.0.X to 2.1.X

2017-12-07 Thread shini gupta
Hi
Can someone please answer this query?

Thanks

On Wed, Dec 6, 2017 at 9:58 AM, shini gupta  wrote:

> If we have upgraded Cassandra binaries from 2.0 to 2.1 on ALL the nodes
> but upgradesstable is still pending, please provide the impact of following
> scenarios:
>
>
>
> 1. Running nodetool repair on one of the nodes while upgradesstables is
> still executing on one or more nodes in the cluster.
>
> 2. Running nodetool repair when upgradesstables failed abruptly on some of
> the nodes such that some sstable files are in new format while other
> sstable files are still in old format.
>
>
>
> Even though it may not be recommended to run I/O intensive operations like
> repair and upgradesstables simultaneously, can we assume that both the
> above sceanrios are now supported and will not break anything, especially
> after https://issues.apache.org/jira/browse/CASSANDRA-5772 has been fixed
> in 2.0?
>
>
> Regards
> Shini
>
>


-- 
-Shini Gupta

""Trusting in God won't make the mountain smaller,
But will make climbing easier.
Do not ask God for a lighter load
But ask Him for a stronger back... ""


Running repair while Cassandra upgrade 2.0.X to 2.1.X

2017-12-05 Thread shini gupta
If we have upgraded Cassandra binaries from 2.0 to 2.1 on ALL the nodes but
upgradesstable is still pending, please provide the impact of following
scenarios:



1. Running nodetool repair on one of the nodes while upgradesstables is
still executing on one or more nodes in the cluster.

2. Running nodetool repair when upgradesstables failed abruptly on some of
the nodes such that some sstable files are in new format while other
sstable files are still in old format.



Even though it may not be recommended to run I/O intensive operations like
repair and upgradesstables simultaneously, can we assume that both the
above sceanrios are now supported and will not break anything, especially
after https://issues.apache.org/jira/browse/CASSANDRA-5772 has been fixed
in 2.0?


Regards
Shini


Re: Cqlsh unable to switch keyspace after Cassandra upgrade.

2017-11-14 Thread Mikhail Tsaplin
Yesterday I found the source of the problem.
It was an old cassandra-driver package for Python. Removing the driver and
reinstalling it with pip solved the issue.
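
When cqlsh misbehaves after an upgrade, checking the installed Python driver
version (e.g. via `pip show cassandra-driver`) before digging further can save
time; `is_schema_agreed` only exists on newer driver releases. A minimal
version-comparison helper; the minimum version used here is an assumption for
illustration, not the documented cutoff:

```python
def parse_version(v):
    """Parse a dotted version string into a comparable integer tuple."""
    return tuple(int(part) for part in v.split(".")[:3])

# Assumed minimum: the attribute appeared in the 3.x driver line;
# verify the exact release against the driver's changelog.
MIN_DRIVER = (3, 0, 0)

def driver_ok(installed):
    """True if the installed driver meets the assumed minimum version."""
    return parse_version(installed) >= MIN_DRIVER

print(driver_ok("2.7.2"))   # old driver, likely missing is_schema_agreed
print(driver_ok("3.25.0"))
```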

2017-11-02 20:33 GMT+07:00 Blake Eggleston :

> Looks like a bug, could you open a jira?
>
>
> On Nov 2, 2017, at 2:08 AM, Mikhail Tsaplin  wrote:
>
> Hi,
> I've upgraded Cassandra from 2.1.6 to 3.0.9 on three nodes cluster. After
> upgrade
> cqlsh shows following error when trying to run "use {keyspace};" command:
> 'ResponseFuture' object has no attribute 'is_schema_agreed'
>
> Actual upgrade was done on Ubuntu 16.04 by running "apt-get upgrade
> cassandra" command.
> Apt repository is deb http://debian.datastax.com/community stable main.
> Following parameters were migrated from former cassandra.yaml:
> cluster_name, num_tokens, data_file_directories, commit_log_directory,
> saved_caches_directory, seeds, listen_address, rpc_address, initial_token,
> auto_bootstrap.
>
> Later I did additional test - fetched 3.0.15 binary distribution from
> cassandra.apache.org and tried to run cassandra from this distr - same
> error:
> $ ./bin/cqlsh
> Connected to cellwize.cassandra at 172.31.17.42:9042.
> [cqlsh 5.0.1 | Cassandra 3.0.15 | CQL spec 3.4.0 | Native protocol v4]
> Use HELP for help.
> cqlsh> use listener ;
> 'ResponseFuture' object has no attribute 'is_schema_agreed'
> cqlsh>
>
> What could be the reason?
>
>


Re: Cqlsh unable to switch keyspace after Cassandra upgrade.

2017-11-02 Thread Blake Eggleston
Looks like a bug, could you open a jira?

> On Nov 2, 2017, at 2:08 AM, Mikhail Tsaplin  wrote:
> 
> Hi,
> I've upgraded Cassandra from 2.1.6 to 3.0.9 on three nodes cluster. After 
> upgrade 
> cqlsh shows following error when trying to run "use {keyspace};" command:
> 'ResponseFuture' object has no attribute 'is_schema_agreed'
> 
> Actual upgrade was done on Ubuntu 16.04 by running "apt-get upgrade 
> cassandra" command.
> Apt repository is deb http://debian.datastax.com/community stable main.
> Following parameters were migrated from former cassandra.yaml:
> cluster_name, num_tokens, data_file_directories, commit_log_directory, 
> saved_caches_directory, seeds, listen_address, rpc_address, initial_token, 
> auto_bootstrap.
> 
> Later I did additional test - fetched 3.0.15 binary distribution from 
> cassandra.apache.org and tried to run cassandra from this distr - same error:
> $ ./bin/cqlsh
> Connected to cellwize.cassandra at 172.31.17.42:9042.
> [cqlsh 5.0.1 | Cassandra 3.0.15 | CQL spec 3.4.0 | Native protocol v4]
> Use HELP for help.
> cqlsh> use listener ;
> 'ResponseFuture' object has no attribute 'is_schema_agreed'
> cqlsh> 
> 
> What could be the reason?


Cqlsh unable to switch keyspace after Cassandra upgrade.

2017-11-02 Thread Mikhail Tsaplin
Hi,
I've upgraded Cassandra from 2.1.6 to 3.0.9 on a three-node cluster. After the
upgrade, cqlsh shows the following error when trying to run the "use
{keyspace};" command:
'ResponseFuture' object has no attribute 'is_schema_agreed'

The actual upgrade was done on Ubuntu 16.04 by running the "apt-get upgrade
cassandra" command.
Apt repository is deb http://debian.datastax.com/community stable main.
Following parameters were migrated from former cassandra.yaml:
cluster_name, num_tokens, data_file_directories, commit_log_directory,
saved_caches_directory, seeds, listen_address, rpc_address, initial_token,
auto_bootstrap.

Later I did an additional test - fetched the 3.0.15 binary distribution from
cassandra.apache.org and tried to run Cassandra from this distribution - same
error:
$ ./bin/cqlsh
Connected to cellwize.cassandra at 172.31.17.42:9042.
[cqlsh 5.0.1 | Cassandra 3.0.15 | CQL spec 3.4.0 | Native protocol v4]
Use HELP for help.
cqlsh> use listener ;
'ResponseFuture' object has no attribute 'is_schema_agreed'
cqlsh>

What could be the reason?


Re: Cassandra upgrade from 2.2.8 to 3.10

2017-08-07 Thread Michael Shuler
Use 3.11.0 instead of 3.10 - it has bug fixes on top of 3.10 and gets
long-term release support.

-- 
Michael

On 08/07/2017 05:09 PM, Jeff Jirsa wrote:
> Can't really stream cross-version. You need to add nodes and then upgrade
> them (or upgrade all the nodes, and then add new ones).
> 
> 
> On Mon, Aug 7, 2017 at 2:58 PM, ZAIDI, ASAD A  > wrote:
> 
> Hi folks, I’ve a question on the upgrade method I’m thinking to execute.
> 
> I’m planning to upgrade from apache-Cassandra 2.2.8 to release 3.10.
> 
> My Cassandra cluster is configured like one rack with two
> Datacenters:
> 
> 1. DC1 has 4 nodes
> 
> 2. DC2 has 16 nodes
> 
> We’re adding another 12 nodes and would eventually need to remove
> those 4 nodes in DC1.
> 
> I’m thinking to add a third data center, DC3, with 12 nodes running
> apache Cassandra 3.10. Then I would start upgrading seed nodes first in
> DC1 & DC2 – once all 20 nodes in (DC1 plus DC2) are upgraded, I can
> safely remove the 4 DC1 nodes.
> 
> Can you guys please let me know if this approach would work? I’m
> concerned that having mixed versions on Cassandra nodes may cause
> issues, e.g. in streaming data/sstables from the existing DCs to the
> newly created third DC with version 3.10 installed – will nodes in DC3
> join the cluster with data without issues?
> 
> Thanks/Asad
> 


-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Re: Cassandra upgrade from 2.2.8 to 3.10

2017-08-07 Thread Jeff Jirsa
Can't really stream cross-version. You need to add nodes and then upgrade
them (or upgrade all the nodes, and then add new ones).


On Mon, Aug 7, 2017 at 2:58 PM, ZAIDI, ASAD A  wrote:

> Hi folks, I’ve a question on the upgrade method I’m thinking to execute.
>
>
>
> I’m planning to upgrade from apache-Cassandra 2.2.8 to release 3.10.
>
>
>
> My Cassandra cluster is configured like one rack with two Datacenters like:
>
>
>
> 1.   DC1 has 4 nodes
>
> 2.   DC2 has 16 nodes
>
>
>
> We’re adding another 12 nodes and would eventually need to remove those 4
> nodes in DC1.
>
>
>
> I’m thinking to add a third data center, DC3, with 12 nodes running
> apache Cassandra 3.10. Then I would start upgrading seed nodes first in
> DC1 & DC2 – once all 20 nodes in (DC1 plus DC2) are upgraded, I can
> safely remove the 4 DC1 nodes.
>
> Can you guys please let me know if this approach would work? I’m concerned
> that having mixed versions on Cassandra nodes may cause issues, e.g. in
> streaming data/sstables from the existing DCs to the newly created third
> DC with version 3.10 installed – will nodes in DC3 join the cluster with
> data without issues?
>
>
>
> Thanks/Asad
>
>
>
>
>
>
>
>
>


Cassandra upgrade from 2.2.8 to 3.10

2017-08-07 Thread ZAIDI, ASAD A
Hi folks, I’ve a question on the upgrade method I’m thinking to execute.

I’m planning to upgrade from apache-Cassandra 2.2.8 to release 3.10.

My Cassandra cluster is configured like one rack with two Datacenters:


1.   DC1 has 4 nodes

2.   DC2 has 16 nodes

We’re adding another 12 nodes and would eventually need to remove those 4 nodes
in DC1.

I’m thinking to add a third data center, DC3, with 12 nodes running apache
Cassandra 3.10. Then I would start upgrading seed nodes first in DC1 & DC2 –
once all 20 nodes in (DC1 plus DC2) are upgraded, I can safely remove the 4
DC1 nodes.
Can you guys please let me know if this approach would work? I’m concerned
that having mixed versions on Cassandra nodes may cause issues, e.g. in
streaming data/sstables from the existing DCs to the newly created third DC
with version 3.10 installed – will nodes in DC3 join the cluster with data
without issues?

Thanks/Asad






Re: Cassandra Upgrade

2016-11-29 Thread Shalom Sagges
Thanks for the info Kurt,

I guess I'd go with the normal upgrade procedure then.

Thanks again for the help everyone.




Shalom Sagges
DBA
T: +972-74-700-4035
 
 We Create Meaningful Connections



On Tue, Nov 29, 2016 at 2:05 PM, kurt Greaves  wrote:

> Why would you remove all the data? That doesn't sound like a good idea.
> Just upgrade the OS and then go through the normal upgrade flow of starting
> C* with the next version and upgrading sstables.
>
> Also, *you will need to go from 2.0.14 -> 2.1.16 -> 2.2.8* and upgrade
> sstables at each stage of the upgrade. You cannot transition from 2.0.14
> straight to 2.2.8.
>

-- 
This message may contain confidential and/or privileged information. 
If you are not the addressee or authorized to receive this on behalf of the 
addressee you must not use, copy, disclose or take action based on this 
message or any information herein. 
If you have received this message in error, please advise the sender 
immediately by reply email and delete this message. Thank you.


Re: Cassandra Upgrade

2016-11-29 Thread kurt Greaves
Why would you remove all the data? That doesn't sound like a good idea.
Just upgrade the OS and then go through the normal upgrade flow of starting
C* with the next version and upgrading sstables.

Also, *you will need to go from 2.0.14 -> 2.1.16 -> 2.2.8* and upgrade
sstables at each stage of the upgrade. You cannot transition from 2.0.14
straight to 2.2.8.
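To make the stepwise path concrete, here is a rough per-node sketch of one hop (2.0.14 -> 2.1.16; repeat the whole sequence for 2.1.16 -> 2.2.8). The package manager invocation, service name, and version string are assumptions for illustration, and the commands are only echoed here rather than executed:

```shell
# One upgrade hop on a single node, echoed as a dry run.
# Set DRY_RUN=0 only on a real node with the right packages available.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run nodetool drain                         # commonly done before stopping the node
run sudo service cassandra stop
run sudo apt-get install cassandra=2.1.16  # package/version names assumed
run sudo service cassandra start
run nodetool upgradesstables               # rewrite sstables in the new format
```

Running one hop across all nodes, then upgradesstables, then the next hop keeps each stage on a supported path.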


Re: Cassandra Upgrade

2016-11-29 Thread Shalom Sagges
Thanks Ben and Brooke!
@Brooke, I'd like to do that because I want to install CentOS 7 on those
machines instead of the current CentOS 6. To achieve that, I need to make a
new installation of the OS, meaning taking the server down.
So if that's the case, and I can't perform the upgrade online, why not
install everything anew?
By the way, if I do take the longer way and add a new 2.2.8 node to the
cluster, do I still need to perform upgradesstables on the new node?




Shalom Sagges
DBA
T: +972-74-700-4035
 
 We Create Meaningful Connections



On Tue, Nov 29, 2016 at 12:38 PM, Brooke Jensen 
wrote:

> Hi Shalom.
>
> That seems like the long way around doing it. If you clear the data from
> the node then add it back in then you will have to restream and recompact
> the data again for each node. Is there a particular reason why you would
> need to do it this way?
>
> The way we do it is to update Cassandra on each node as per the steps Ben
> linked to. Once all nodes are on the newer version you can run
> upgradesstables. If you have a large cluster and are using racks you can do
> the upgrade one rack at a time to speed things up. Either way, this should
> enable you to do the upgrade fairly quickly with no downtime.
>
> Regards,
> *Brooke Jensen*
> VP Technical Operations & Customer Services
> www.instaclustr.com | support.instaclustr.com
> 
>
> This email has been sent on behalf of Instaclustr Limited (Australia) and
> Instaclustr Inc (USA). This email and any attachments may contain
> confidential and legally privileged information.  If you are not the
> intended recipient, do not copy or disclose its content, but please reply
> to this email immediately and highlight the error to the sender and then
> immediately delete the message.
>
> On 29 November 2016 at 21:12, Ben Dalling  wrote:
>
>> Hi Shalom,
>>
>> There is a pretty good write up of the procedure written up here (
>> https://docs.datastax.com/en/upgrade/doc/upgrade/cassandra/
>> upgrdCassandraDetails.html).  Things to highlight are:
>>
>>
>>- Don't have a repair running while carrying out the upgrade (so that
>>does timebox your upgrade).
>>- When the upgrade is complete.  Run "nodetool upgradesstables" on
>>all the nodes.
>>
>>
>> Pretty much what you suggested.
>>
>> Best wishes,
>>
>> Ben
>>
>> On 29 November 2016 at 09:52, Shalom Sagges 
>> wrote:
>>
>>> Hi Everyone,
>>>
>>> Hypothetically speaking, can I add a new node with version 2.2.8 to a
>>> 2.0.14 cluster?
>>> Meaning, instead of upgrading the cluster, I'd like to remove a node,
>>> clear all its data, install 2.2.8 and add it back to the cluster, with the
>>> process eventually performed on all nodes one by one.
>>>
>>> Is this possible?
>>>
>>> Thanks!
>>>
>>>
>>> Shalom Sagges
>>> DBA
>>> T: +972-74-700-4035
>>>  
>>>  We Create Meaningful Connections
>>>
>>> 
>>>
>>>
>>> This message may contain confidential and/or privileged information.
>>> If you are not the addressee or authorized to receive this on behalf of
>>> the addressee you must not use, copy, disclose or take action based on this
>>> message or any information herein.
>>> If you have received this message in error, please advise the sender
>>> immediately by reply email and delete this message. Thank you.
>>>
>>
>>
>>
>> --
>> *Ben Dalling** MSc, CEng, MBCS CITP*
>> League of Crafty Programmers Ltd
>> Mobile:  +44 (0) 776 981-1900
>> email: b.dall...@locp.co.uk
>> www: http://www.locp.co.uk
>> http://www.linkedin.com/in/bendalling
>>
>
>



Re: Cassandra Upgrade

2016-11-29 Thread Brooke Jensen
Hi Shalom.

That seems like the long way around doing it. If you clear the data from
the node then add it back in then you will have to restream and recompact
the data again for each node. Is there a particular reason why you would
need to do it this way?

The way we do it is to update Cassandra on each node as per the steps Ben
linked to. Once all nodes are on the newer version you can run
upgradesstables. If you have a large cluster and are using racks you can do
the upgrade one rack at a time to speed things up. Either way, this should
enable you to do the upgrade fairly quickly with no downtime.
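A quick way to confirm "all nodes are on the newer version" before running upgradesstables is to check each node's reported version and the cluster's schema agreement. Sketched here as a dry run, since the real commands need a live cluster:

```shell
# Echoed sketch of pre-upgradesstables sanity checks; run the real
# commands on (or against) every node of a live cluster.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run nodetool version          # should print the same release on every node
run nodetool describecluster  # all nodes should share a single schema version
```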

Regards,
*Brooke Jensen*
VP Technical Operations & Customer Services
www.instaclustr.com | support.instaclustr.com


This email has been sent on behalf of Instaclustr Limited (Australia) and
Instaclustr Inc (USA). This email and any attachments may contain
confidential and legally privileged information.  If you are not the
intended recipient, do not copy or disclose its content, but please reply
to this email immediately and highlight the error to the sender and then
immediately delete the message.

On 29 November 2016 at 21:12, Ben Dalling  wrote:

> Hi Shalom,
>
> There is a pretty good write up of the procedure written up here (
> https://docs.datastax.com/en/upgrade/doc/upgrade/cassandra/
> upgrdCassandraDetails.html).  Things to highlight are:
>
>
>- Don't have a repair running while carrying out the upgrade (so that
>does timebox your upgrade).
>- When the upgrade is complete.  Run "nodetool upgradesstables" on all
>the nodes.
>
>
> Pretty much what you suggested.
>
> Best wishes,
>
> Ben
>
> On 29 November 2016 at 09:52, Shalom Sagges 
> wrote:
>
>> Hi Everyone,
>>
>> Hypothetically speaking, can I add a new node with version 2.2.8 to a
>> 2.0.14 cluster?
>> Meaning, instead of upgrading the cluster, I'd like to remove a node,
>> clear all its data, install 2.2.8 and add it back to the cluster, with the
>> process eventually performed on all nodes one by one.
>>
>> Is this possible?
>>
>> Thanks!
>>
>>
>> Shalom Sagges
>> DBA
>> T: +972-74-700-4035
>>  
>>  We Create Meaningful Connections
>>
>> 
>>
>>
>> This message may contain confidential and/or privileged information.
>> If you are not the addressee or authorized to receive this on behalf of
>> the addressee you must not use, copy, disclose or take action based on this
>> message or any information herein.
>> If you have received this message in error, please advise the sender
>> immediately by reply email and delete this message. Thank you.
>>
>
>
>
> --
> *Ben Dalling** MSc, CEng, MBCS CITP*
> League of Crafty Programmers Ltd
> Mobile:  +44 (0) 776 981-1900
> email: b.dall...@locp.co.uk
> www: http://www.locp.co.uk
> http://www.linkedin.com/in/bendalling
>


Re: Cassandra Upgrade

2016-11-29 Thread Ben Dalling
Hi Shalom,

There is a pretty good write-up of the procedure here (
https://docs.datastax.com/en/upgrade/doc/upgrade/cassandra/upgrdCassandraDetails.html).
Things to highlight are:


   - Don't have a repair running while carrying out the upgrade (so that
   does timebox your upgrade).
   - When the upgrade is complete, run "nodetool upgradesstables" on all
   the nodes.


Pretty much what you suggested.

Best wishes,

Ben

On 29 November 2016 at 09:52, Shalom Sagges  wrote:

> Hi Everyone,
>
> Hypothetically speaking, can I add a new node with version 2.2.8 to a
> 2.0.14 cluster?
> Meaning, instead of upgrading the cluster, I'd like to remove a node,
> clear all its data, install 2.2.8 and add it back to the cluster, with the
> process eventually performed on all nodes one by one.
>
> Is this possible?
>
> Thanks!
>
>
> Shalom Sagges
> DBA
> T: +972-74-700-4035
>  
>  We Create Meaningful Connections
>
> 
>
>
> This message may contain confidential and/or privileged information.
> If you are not the addressee or authorized to receive this on behalf of
> the addressee you must not use, copy, disclose or take action based on this
> message or any information herein.
> If you have received this message in error, please advise the sender
> immediately by reply email and delete this message. Thank you.
>



-- 
*Ben Dalling** MSc, CEng, MBCS CITP*
League of Crafty Programmers Ltd
Mobile:  +44 (0) 776 981-1900
email: b.dall...@locp.co.uk
www: http://www.locp.co.uk
http://www.linkedin.com/in/bendalling


Cassandra Upgrade

2016-11-29 Thread Shalom Sagges
Hi Everyone,

Hypothetically speaking, can I add a new node with version 2.2.8 to a
2.0.14 cluster?
Meaning, instead of upgrading the cluster, I'd like to remove a node, clear
all its data, install 2.2.8 and add it back to the cluster, with the
process eventually performed on all nodes one by one.

Is this possible?

Thanks!


Shalom Sagges
DBA
T: +972-74-700-4035
 
 We Create Meaningful Connections




Re: Cassandra Upgrade 3.0.x vs 3.x (Tick-Tock Release)

2016-03-15 Thread Kathiresan S
ok, thanks...

On Tue, Mar 15, 2016 at 1:29 PM, ssiv...@gmail.com <ssiv...@gmail.com>
wrote:

> Note, that DataStax Enterprise still uses C* v2.1..
>
>
> On 03/15/2016 08:25 PM, Kathiresan S wrote:
>
> Thank you all !
>
> Thanks,
> Kathir
>
>
> On Tue, Mar 15, 2016 at 5:50 AM, ssiv...@gmail.com wrote:
>
>> I think that it's not ready, since it has critical bugs. See emails about
>> C* memory leaks
>>
>>
>> On 03/15/2016 01:15 AM, Robert Coli wrote:
>>
>> On Mon, Mar 14, 2016 at 12:40 PM, Kathiresan S <
>> kathiresanselva...@gmail.com> wrote:
>>
>>> We are planning for Cassandra upgrade in our production environment.
>>> Which version of Cassandra is stable and is advised to upgrade to, at
>>> the moment?
>>>
>>
>>
>> https://www.eventbrite.com/engineering/what-version-of-cassandra-should-i-run/
>>
>> (IOW, you should run either 2.1.MAX or 2.2.5)
>>
>> Relatively soon, the answer will be "3.0.x", probably around the time
>> where 3.0.x is >= 6.
>>
>> After this series, the change in release cadence may change the above
>> rule of thumb.
>>
>> =Rob
>>
>>
>> --
>> Thanks,
>> Serj
>>
>>
>
> --
> Thanks,
> Serj
>
>


Re: Cassandra Upgrade 3.0.x vs 3.x (Tick-Tock Release)

2016-03-15 Thread ssiv...@gmail.com

Note that DataStax Enterprise still uses C* v2.1.

On 03/15/2016 08:25 PM, Kathiresan S wrote:

Thank you all !

Thanks,
Kathir


On Tue, Mar 15, 2016 at 5:50 AM, ssiv...@gmail.com wrote:


I think that it's not ready, since it has critical bugs. See
emails about C* memory leaks


On 03/15/2016 01:15 AM, Robert Coli wrote:

On Mon, Mar 14, 2016 at 12:40 PM, Kathiresan S
<kathiresanselva...@gmail.com> wrote:

    We are planning for Cassandra upgrade in our production
environment.
Which version of Cassandra is stable and is advised to
upgrade to, at the moment?



https://www.eventbrite.com/engineering/what-version-of-cassandra-should-i-run/

(IOW, you should run either 2.1.MAX or 2.2.5)

Relatively soon, the answer will be "3.0.x", probably around the
time where 3.0.x is >= 6.

After this series, the change in release cadence may change the
above rule of thumb.

=Rob


-- 
Thanks,

Serj




--
Thanks,
Serj



Re: Cassandra Upgrade 3.0.x vs 3.x (Tick-Tock Release)

2016-03-15 Thread Kathiresan S
Thank you all !

Thanks,
Kathir


On Tue, Mar 15, 2016 at 5:50 AM, ssiv...@gmail.com <ssiv...@gmail.com>
wrote:

> I think that it's not ready, since it has critical bugs. See emails about
> C* memory leaks
>
>
> On 03/15/2016 01:15 AM, Robert Coli wrote:
>
> On Mon, Mar 14, 2016 at 12:40 PM, Kathiresan S <
> kathiresanselva...@gmail.com> wrote:
>
>> We are planning for Cassandra upgrade in our production environment.
>> Which version of Cassandra is stable and is advised to upgrade to, at the
>> moment?
>>
>
>
> https://www.eventbrite.com/engineering/what-version-of-cassandra-should-i-run/
>
> (IOW, you should run either 2.1.MAX or 2.2.5)
>
> Relatively soon, the answer will be "3.0.x", probably around the time
> where 3.0.x is >= 6.
>
> After this series, the change in release cadence may change the above rule
> of thumb.
>
> =Rob
>
>
> --
> Thanks,
> Serj
>
>


Re: Cassandra Upgrade 3.0.x vs 3.x (Tick-Tock Release)

2016-03-15 Thread ssiv...@gmail.com
I think that it's not ready, since it has critical bugs. See emails 
about C* memory leaks


On 03/15/2016 01:15 AM, Robert Coli wrote:
On Mon, Mar 14, 2016 at 12:40 PM, Kathiresan S
<kathiresanselva...@gmail.com> wrote:


We are planning for Cassandra upgrade in our production environment.
Which version of Cassandra is stable and is advised to upgrade to,
at the moment?


https://www.eventbrite.com/engineering/what-version-of-cassandra-should-i-run/

(IOW, you should run either 2.1.MAX or 2.2.5)

Relatively soon, the answer will be "3.0.x", probably around the time 
where 3.0.x is >= 6.


After this series, the change in release cadence may change the above 
rule of thumb.


=Rob


--
Thanks,
Serj



Re: Cassandra Upgrade 3.0.x vs 3.x (Tick-Tock Release)

2016-03-14 Thread Robert Coli
On Mon, Mar 14, 2016 at 12:40 PM, Kathiresan S <kathiresanselva...@gmail.com
> wrote:

> We are planning for Cassandra upgrade in our production environment.
> Which version of Cassandra is stable and is advised to upgrade to, at the
> moment?
>

https://www.eventbrite.com/engineering/what-version-of-cassandra-should-i-run/

(IOW, you should run either 2.1.MAX or 2.2.5)

Relatively soon, the answer will be "3.0.x", probably around the time where
3.0.x is >= 6.

After this series, the change in release cadence may change the above rule
of thumb.

=Rob


Re: Cassandra Upgrade 3.0.x vs 3.x (Tick-Tock Release)

2016-03-14 Thread Bryan Cheng
Hi Kathir,

The specific version will depend on your needs (eg. libraries) and
risk/stability profile. Personally, I generally go with the oldest branch
with still active maintenance (which would be 2.2.x or 2.1.x if you only
need critical fixes), but there's lots of good stuff in 3.x if you're happy
being a little closer to the bleeding edge.

There was a bit of discussion elsewhere on this list, eg here:
https://www.mail-archive.com/user@cassandra.apache.org/msg45990.html,
searching may turn up some more recommendations.

--Bryan

On Mon, Mar 14, 2016 at 12:40 PM, Kathiresan S <kathiresanselva...@gmail.com
> wrote:

> Hi,
>
> We are planning for Cassandra upgrade in our production environment.
> Which version of Cassandra is stable and is advised to upgrade to, at the
> moment?
>
> Looking at this JIRA (CASSANDRA-10822
> <https://issues.apache.org/jira/browse/CASSANDRA-10822>), it looks like,
> if at all we plan to upgrade any recent version, it should be >= 3.0.2/3.2
>
> Should it be 3.0.4 / 3.0.3 / 3.3 or 3.4? In general, is it a good
> practice to upgrade to a Tick-Tock release instead of a 3.0.x version? Please
> advise.
>
> Thanks,
> ​​Kathir
>


Cassandra Upgrade 3.0.x vs 3.x (Tick-Tock Release)

2016-03-14 Thread Kathiresan S
Hi,

We are planning for Cassandra upgrade in our production environment.
Which version of Cassandra is stable and is advised to upgrade to, at the
moment?

Looking at this JIRA (CASSANDRA-10822
<https://issues.apache.org/jira/browse/CASSANDRA-10822>), it looks like if we
plan to upgrade to any recent version, it should be >= 3.0.2/3.2.

Should it be 3.0.4 / 3.0.3 / 3.3 or 3.4? In general, is it a good practice
to upgrade to a Tick-Tock release instead of a 3.0.x version? Please advise.

Thanks,
​​Kathir


Blog post with Cassandra upgrade tips

2014-04-11 Thread Paulo Ricardo Motta Gomes
Hey,

Some months ago (last year!!) during our previous major upgrade from 1.1 -
1.2 I started writing a blog post with some tips for a smooth rolling
upgrade, but for some reason I forgot to finish the post. I found it
recently and decided it to publish anyway, as some of the info may be
helpful for future major upgrades:

http://monkeys.chaordic.com.br/operation/zero-downtime-cassandra-upgrade/

Cheers,

-- 
*Paulo Motta*

Chaordic | *Platform*
*www.chaordic.com.br http://www.chaordic.com.br/*
+55 48 3232.3200


Re: Cassandra upgrade issues...

2012-11-01 Thread Sylvain Lebresne
The first thing I would check is whether nodetool is using the right jar. It
sounds a lot like the server has been correctly updated but nodetool hasn't
and is still using the old classes.
Check the nodetool executable (it's a shell script), try echoing the
CLASSPATH in there, and check that it correctly points to what it should.
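A sketch of that check: inspect the nodetool wrapper script for the jars it puts on its CLASSPATH. The wrapper below is a stand-in created so the example is self-contained; on a real box, point NODETOOL at the installed script (e.g. /usr/bin/nodetool — path assumed) instead.

```shell
# Create a fake nodetool wrapper to stand in for the real script,
# then grep it for CLASSPATH lines the way you would on a live box.
NODETOOL=$(mktemp)
cat > "$NODETOOL" <<'EOF'
#!/bin/sh
CLASSPATH="/usr/share/cassandra/apache-cassandra-1.1.5.jar:$CLASSPATH"
exec java -cp "$CLASSPATH" org.apache.cassandra.tools.NodeCmd "$@"
EOF
grep -n 'CLASSPATH' "$NODETOOL"   # shows which jars nodetool would load
rm -f "$NODETOOL"
```

If the jar version printed doesn't match the server's, nodetool is still running the old classes.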

--
Sylvain

On Thu, Nov 1, 2012 at 9:10 AM, Brian Fleming bigbrianflem...@gmail.com wrote:
 Hi,



 I was testing upgrading from Cassandra v.1.0.7 to v.1.1.5 yesterday on a
 single-node dev cluster with ~6.5GB of data, and it went smoothly in that no
 errors were thrown: the data was migrated to the new directory structure, I
 can still read/write data as expected, etc.  However, nodetool commands are
 behaving strangely – full details below.



 I couldn’t find anything relevant online relating to these exceptions – any
 help/pointers would be greatly appreciated.



 Thanks & Regards,



 Brian









 ‘nodetool cleanup’ runs successfully



 ‘nodetool info’ produces :



 Token: 82358484304664259547357526550084691083

 Gossip active: true

 Load : 7.69 GB

 Generation No: 1351697611

 Uptime (seconds) : 58387

 Heap Memory (MB) : 936.91 / 1928.00

 Exception in thread main java.lang.ClassCastException: java.lang.String
 cannot be cast to org.apache.cassandra.dht.Token

 at
 org.apache.cassandra.tools.NodeProbe.getEndpoint(NodeProbe.java:546)

 at
 org.apache.cassandra.tools.NodeProbe.getDataCenter(NodeProbe.java:559)

 at org.apache.cassandra.tools.NodeCmd.printInfo(NodeCmd.java:313)

 at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:651)



 ‘nodetool repair’ produces :

 Exception in thread main java.lang.reflect.UndeclaredThrowableException

 at $Proxy0.forceTableRepair(Unknown Source)

 at
 org.apache.cassandra.tools.NodeProbe.forceTableRepair(NodeProbe.java:203)

 at
 org.apache.cassandra.tools.NodeCmd.optionalKSandCFs(NodeCmd.java:880)

 at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:719)

 Caused by: javax.management.ReflectionException: Signature mismatch for
 operation forceTableRepair: (java.lang.String, [Ljava.lang.String;) should
 be (java.lang.String, boolean, [Ljava.lang.String;)

 at
 com.sun.jmx.mbeanserver.PerInterface.noSuchMethod(PerInterface.java:152)

 at
 com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:117)

 at
 com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:262)

 at
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)

 at
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)

 at
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1427)

 at
 javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:72)

 at
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1265)

 at
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1360)

 at
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:788)

 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)

 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

 at java.lang.reflect.Method.invoke(Method.java:597)

 at
 sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:303)

 at sun.rmi.transport.Transport$1.run(Transport.java:159)

 at java.security.AccessController.doPrivileged(Native Method)

 at sun.rmi.transport.Transport.serviceCall(Transport.java:155)

 at
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535)

 at
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790)

 at
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649)

 at
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

 at java.lang.Thread.run(Thread.java:662)

 at
 sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:255)

 at
 sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:233)

 at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:142)

 at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)

 at javax.management.remote.rmi.RMIConnectionImpl_Stub.invoke(Unknown
 Source)

 at
 javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.invoke(RMIConnector.java:993)

 at
 

Re: Cassandra upgrade issues...

2012-11-01 Thread Brian Fleming
Hi Sylvain,

Simple as that!!!  Using the 1.1.5 nodetool version works as expected.  My
mistake.

Many thanks,

Brian



On Thu, Nov 1, 2012 at 8:24 AM, Sylvain Lebresne sylv...@datastax.comwrote:

 The first thing I would check is if nodetool is using the right jar. I
 sounds a lot like if the server has been correctly updated but
 nodetool haven't and still use the old classes.
 Check the nodetool executable, it's a shell script, and try echoing
 the CLASSPATH in there and check it correctly point to what it should.

 --
 Sylvain

 On Thu, Nov 1, 2012 at 9:10 AM, Brian Fleming bigbrianflem...@gmail.com
 wrote:
  Hi,
 
 
 
  I was testing upgrading from Cassandra v.1.0.7 to v.1.1.5 yesterday on a
  single node dev cluster with ~6.5GB of data  it went smoothly in that no
  errors were thrown, the data was migrated to the new directory
 structure, I
  can still read/write data as expected, etc.  However nodetool commands
 are
  behaving strangely – full details below.
 
 
 
  I couldn’t find anything relevant online relating to these exceptions –
 any
  help/pointers would be greatly appreciated.
 
 
 
  Thanks  Regards,
 
 
 
  Brian
 
 
 
 
 
 
 
 
 
  ‘nodetool cleanup’ runs successfully
 
 
 
  ‘nodetool info’ produces :
 
 
 
  Token: 82358484304664259547357526550084691083
 
  Gossip active: true
 
  Load : 7.69 GB
 
  Generation No: 1351697611
 
  Uptime (seconds) : 58387
 
  Heap Memory (MB) : 936.91 / 1928.00
 
  Exception in thread main java.lang.ClassCastException: java.lang.String

Re: Cassandra upgrade issues...

2012-11-01 Thread Bryan Talbot
Note that 1.0.7 came out before 1.1 and I know there were
some compatibility issues that were fixed in later 1.0.x releases which
could affect your upgrade.  I think it would be best to first upgrade to
the latest 1.0.x release, and then upgrade to 1.1.x from there.

-Bryan



On Thu, Nov 1, 2012 at 1:27 AM, Brian Fleming bigbrianflem...@gmail.comwrote:

 Hi Sylvain,

 Simple as that!!!  Using the 1.1.5 nodetool version works as expected.  My
 mistake.

 Many thanks,

 Brian




 On Thu, Nov 1, 2012 at 8:24 AM, Sylvain Lebresne sylv...@datastax.comwrote:

 The first thing I would check is whether nodetool is using the right jar. It
 sounds a lot like the server has been correctly updated but
 nodetool hasn't and still uses the old classes.
 Check the nodetool executable (it's a shell script), and try echoing
 the CLASSPATH in there to check that it correctly points to what it should.
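
Sylvain's check can be sketched in shell. The helper names and jar paths below are hypothetical, for illustration only; the idea is simply: echo the CLASSPATH inside the nodetool script, then see which apache-cassandra jar it actually points at.

```shell
# Sketch of the check Sylvain describes: given a CLASSPATH string, report the
# version of every apache-cassandra jar on it. Helper names and paths are
# hypothetical.
extract_jar_version() {
    # e.g. /opt/apache-cassandra-1.1.5.jar -> 1.1.5
    basename "$1" .jar | sed 's/^apache-cassandra-//'
}

report_classpath_versions() {
    local cp=$1 jar
    # Split the colon-separated CLASSPATH and inspect each entry.
    for jar in ${cp//:/ }; do
        case "$jar" in
            *apache-cassandra-*.jar) extract_jar_version "$jar" ;;
        esac
    done
}

# Example: a stale nodetool CLASSPATH that still lists the old 1.0.7 jar
# alongside the new one.
report_classpath_versions "/opt/old/apache-cassandra-1.0.7.jar:/opt/new/apache-cassandra-1.1.5.jar"
```

If the version reported for nodetool's CLASSPATH differs from the version the server runs, nodetool is loading the old classes, which would explain the JMX signature mismatch.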

 --
 Sylvain

 On Thu, Nov 1, 2012 at 9:10 AM, Brian Fleming bigbrianflem...@gmail.com
 wrote:
  Hi,
 
 
 
  I was testing upgrading from Cassandra v.1.0.7 to v.1.1.5 yesterday on a
  single node dev cluster with ~6.5GB of data, and it went smoothly in that no
  errors were thrown, the data was migrated to the new directory
 structure, I
  can still read/write data as expected, etc.  However nodetool commands
 are
  behaving strangely – full details below.
 
 
 
  I couldn’t find anything relevant online relating to these exceptions –
 any
  help/pointers would be greatly appreciated.
 
 
 
  Thanks & Regards,
 
 
 
  Brian
 
 
 
 
 
 
 
 
 
  ‘nodetool cleanup’ runs successfully

  ‘nodetool info’ produces :

  Token: 82358484304664259547357526550084691083
  Gossip active: true
  Load : 7.69 GB
  Generation No: 1351697611
  Uptime (seconds) : 58387
  Heap Memory (MB) : 936.91 / 1928.00
  Exception in thread "main" java.lang.ClassCastException: java.lang.String
  cannot be cast to org.apache.cassandra.dht.Token
  at org.apache.cassandra.tools.NodeProbe.getEndpoint(NodeProbe.java:546)
  at org.apache.cassandra.tools.NodeProbe.getDataCenter(NodeProbe.java:559)
  at org.apache.cassandra.tools.NodeCmd.printInfo(NodeCmd.java:313)
  at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:651)

  ‘nodetool repair’ produces :

  Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
  at $Proxy0.forceTableRepair(Unknown Source)
  at org.apache.cassandra.tools.NodeProbe.forceTableRepair(NodeProbe.java:203)
  at org.apache.cassandra.tools.NodeCmd.optionalKSandCFs(NodeCmd.java:880)
  at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:719)
  Caused by: javax.management.ReflectionException: Signature mismatch for
  operation forceTableRepair: (java.lang.String, [Ljava.lang.String;) should
  be (java.lang.String, boolean, [Ljava.lang.String;)
  at com.sun.jmx.mbeanserver.PerInterface.noSuchMethod(PerInterface.java:152)
  at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:117)
  at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:262)
  at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)
  at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
  at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1427)
  at javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:72)
  at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1265)
  at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1360)
  at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:788)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:303)
  at sun.rmi.transport.Transport$1.run(Transport.java:159)
  at java.security.AccessController.doPrivileged(Native Method)
  at sun.rmi.transport.Transport.serviceCall(Transport.java:155)
  at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535)
  at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790)
  at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
  ...
 

Re: Losing keyspace on cassandra upgrade

2012-09-21 Thread aaron morton
Have you tried nodetool resetlocalschema on the 1.1.5 node?

Cheers

-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 20/09/2012, at 11:41 PM, Thomas Stets thomas.st...@gmail.com wrote:

 A follow-up:
 
 Currently I'm back on version 1.1.1.
 
 ...
 



Re: Losing keyspace on cassandra upgrade

2012-09-21 Thread Thomas Stets
On Fri, Sep 21, 2012 at 10:39 AM, aaron morton aa...@thelastpickle.comwrote:

 Have you tried nodetool resetlocalschema on the 1.1.5 ?


Yes, I tried a resetlocalschema, and a repair. This didn't change anything.

BTW I could find no documentation on what resetlocalschema actually does...


  regards, Thomas


Re: Losing keyspace on cassandra upgrade

2012-09-20 Thread Thomas Stets
On Wed, Sep 19, 2012 at 5:12 PM, Michael Kjellman
mkjell...@barracuda.comwrote:

 Sounds like you are losing your system keyspace. When you say nothing
 important changed between the yaml files, do you mean with or without your
 changes?


I compared the 1.1.1 cassandra.yaml (with my changes) to the cassandra.yaml
distributed with 1.1.5. The only differences were my changes (hosts, ports
and paths), and some comments.




 Did your data directories change in the migration? Permissions okay?


The data directory containing my keyspace has not changed. Directly after
startup cassandra began a compaction of its system keyspace (something I saw
in all cases), so that has obviously changed. Permissions are OK.


  Thomas


Re: Losing keyspace on cassandra upgrade

2012-09-20 Thread Thomas Stets
A follow-up:

Currently I'm back on version 1.1.1.

I tried - unsuccessfully - the following things:

1. Create the missing keyspace on the 1.1.5 node, then copy the files back
into the data directory.
This failed, since the keyspace was already known on the other node in the
cluster.

2. Shut down the 1.1.1 node that still has the keyspace, then create the
keyspace on the 1.1.5 node.
This failed since the node could not distribute the information through the
cluster.

3. Restore the system keyspace from the snapshot I made before the upgrade.
The restore seemed to work, but the node behaved just like after the
update: it just forgot my keyspace.

Right now I'm at a loss on how to proceed. Any ideas? I'm pretty sure I can
reproduce the problem, so if anyone has an idea on what to try, or where to
look, I can do some tests (within limits).


On Wed, Sep 19, 2012 at 4:43 PM, Thomas Stets thomas.st...@gmail.comwrote:

 I consistently keep losing my keyspace on upgrading from cassandra 1.1.1
 to 1.1.5

 I have the same cassandra keyspace on all our staging systems:

 development:  a 3-node cluster
 integration: a 3-node cluster
 QS: a 2-node cluster
 (productive will be a 4-node cluster, which is as yet not active)

 All clusters were running cassandra 1.1.1. Before going productive I
 wanted to upgrade to the
 latest productive version of cassandra.

 In all cases my keyspace disappeared when I started the cluster with
 cassandra 1.1.5.
 On the development system I didn't realize at first what was happening. I
 just wondered that nodetool
 showed a very low amount of data. On integration I saw the problem
 quickly, but could not recover the
 data. I re-installed the cassandra cluster from scratch, and populated it
 with our test data, so our
 developers could work.

 ...



   TIA, Thomas



Losing keyspace on cassandra upgrade

2012-09-19 Thread Thomas Stets
I consistently keep losing my keyspace on upgrading from cassandra 1.1.1 to
1.1.5

I have the same cassandra keyspace on all our staging systems:

development:  a 3-node cluster
integration: a 3-node cluster
QS: a 2-node cluster
(production will be a 4-node cluster, which is not yet active)

All clusters were running cassandra 1.1.1. Before going to production I wanted
to upgrade to the
latest stable version of cassandra.

In all cases my keyspace disappeared when I started the cluster with
cassandra 1.1.5.
On the development system I didn't realize at first what was happening. I
just wondered why nodetool
showed such a low amount of data. On integration I saw the problem quickly,
but could not recover the
data. I re-installed the cassandra cluster from scratch, and populated it
with our test data, so our
developers could work.

I am currently using the QS system to recreate the problem and try to find
what I am doing wrong,
and how I can avoid losing productive data once we are live.

Basically I was doing the following:

1. create a snapshot on every node
2. create a tar.gz of my data directory, just to be safe
3. shut down and re-start cassandra 1.1.1 (just to see that it is not the
re-start that is creating the problem)
4. verify that the keyspace is still known, and the data present.
5. shut down cassandra 1.1.1
6. copy the config to cassandra 1.1.5 (doing a diff of cassandra.yaml to
the new one first, to see whether anything important has changed)
7. start cassandra 1.1.5
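
The steps above can be sketched as a dry-run script that only *prints* the planned commands, so the plan can be reviewed without touching a cluster. All paths and service names below are hypothetical and will differ per install; this is a sketch, not a runbook.

```shell
# Dry-run sketch of Thomas's upgrade steps 1-7: echo each planned command
# instead of executing it. Paths and service names are hypothetical.
plan_upgrade() {
    local data_dir=$1 old_conf=$2 new_conf=$3
    echo "nodetool snapshot"                         # 1. snapshot on this node
    echo "tar czf /backup/data.tar.gz $data_dir"     # 2. extra tarball, just to be safe
    echo "service cassandra restart"                 # 3. restart on the old version
    echo "# 4. verify keyspace and data are still present"
    echo "service cassandra stop"                    # 5. shut down 1.1.1
    echo "diff $old_conf $new_conf"                  # 6. review config differences first
    echo "service cassandra-1.1.5 start"             # 7. start 1.1.5
}

plan_upgrade /var/lib/cassandra/data old/cassandra.yaml new/cassandra.yaml
```

Printing the plan per node makes it easy to confirm that the snapshot and the cassandra.yaml diff happen before the new version ever starts.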

In the log file, after the "Replaying ..." messages I find the following:

 INFO [main] 2012-09-19 15:15:50,323 CommitLogReplayer.java (line 103)
Skipped 759 mutations from unknown (probably removed) CF with id 1187
 INFO [main] 2012-09-19 15:15:50,323 CommitLogReplayer.java (line 103)
Skipped 606 mutations from unknown (probably removed) CF with id 1186
 INFO [main] 2012-09-19 15:15:50,323 CommitLogReplayer.java (line 103)
Skipped 53 mutations from unknown (probably removed) CF with id 1185
 INFO [main] 2012-09-19 15:15:50,323 CommitLogReplayer.java (line 103)
Skipped 1945 mutations from unknown (probably removed) CF with id 1184
 INFO [main] 2012-09-19 15:15:50,323 CommitLogReplayer.java (line 103)
Skipped 1945 mutations from unknown (probably removed) CF with id 1191
 INFO [main] 2012-09-19 15:15:50,323 CommitLogReplayer.java (line 103)
Skipped 7506 mutations from unknown (probably removed) CF with id 1190
 INFO [main] 2012-09-19 15:15:50,324 CommitLogReplayer.java (line 103)
Skipped 88 mutations from unknown (probably removed) CF with id 1189
 INFO [main] 2012-09-19 15:15:50,324 CommitLogReplayer.java (line 103)
Skipped 87 mutations from unknown (probably removed) CF with id 1188
 INFO [main] 2012-09-19 15:15:50,324 CommitLogReplayer.java (line 103)
Skipped 354 mutations from unknown (probably removed) CF with id 1195
 INFO [main] 2012-09-19 15:15:50,324 CommitLogReplayer.java (line 103)
Skipped 87 mutations from unknown (probably removed) CF with id 1194
 INFO [main] 2012-09-19 15:15:50,324 CommitLogReplayer.java (line 103)
Skipped 45 mutations from unknown (probably removed) CF with id 1192
 INFO [main] 2012-09-19 15:15:50,324 CommitLogReplayer.java (line 103)
Skipped 82 mutations from unknown (probably removed) CF with id 1197
 INFO [main] 2012-09-19 15:15:50,324 CommitLogReplayer.java (line 103)
Skipped 46386 mutations from unknown (probably removed) CF with id 1177
 INFO [main] 2012-09-19 15:15:50,324 CommitLogReplayer.java (line 103)
Skipped 69 mutations from unknown (probably removed) CF with id 1178
 INFO [main] 2012-09-19 15:15:50,325 CommitLogReplayer.java (line 103)
Skipped 73 mutations from unknown (probably removed) CF with id 1179
 INFO [main] 2012-09-19 15:15:50,325 CommitLogReplayer.java (line 103)
Skipped 88 mutations from unknown (probably removed) CF with id 1181
 INFO [main] 2012-09-19 15:15:50,325 CommitLogReplayer.java (line 103)
Skipped 46386 mutations from unknown (probably removed) CF with id 1182
 INFO [main] 2012-09-19 15:15:50,325 CommitLogReplayer.java (line 103)
Skipped 7506 mutations from unknown (probably removed) CF with id 1183
 INFO [main] 2012-09-19 15:15:50,325 CommitLog.java (line 131) Log replay
complete, 0 replayed mutations

This is the first obvious indication something is wrong. Going further up
in the log file I discover that the SSTableReader logs only system keyspace
files.

Currently my cluster is in the following state:

node 1 runs cassandra 1.1.5, and doesn't know my keyspace.
node 2 runs cassandra 1.1.1, and still knows my keyspace.

nodetool ring confirms this: node 1 has a load of 29kb, node 2 of roughly
1GB. The cluster itself is still intact, i.e. nodetool ring shows both
nodes.

I tried a nodetool resetlocalschema, and nodetool repair, but that didn't
change anything.

Any idea what I have been doing wrong (the preferred solution), or whether
I stumbled over a cassandra bug (not so nice)?


  TIA, Thomas


Re: Losing keyspace on cassandra upgrade

2012-09-19 Thread Michael Kjellman
Sounds like you are losing your system keyspace. When you say nothing 
important changed between the yaml files, do you mean with or without your changes?

Did your data directories change in the migration? Permissions okay?

I've done a 1.1.1 to 1.1.5 upgrade on many of my nodes without issue.

On Sep 19, 2012, at 7:44 AM, Thomas Stets thomas.st...@gmail.com wrote:

 I consistently keep losing my keyspace on upgrading from cassandra 1.1.1 to 
 1.1.5
 ...

Re: Losing keyspace on cassandra upgrade

2012-09-19 Thread Edward Sargisson
We've seen that before too - supposedly it was fixed in 1.1.5. Your 
experience casts some doubt on that.


Our workaround, thus far, is to shut down the entire ring and then bring 
each node back up starting with a known-good node.
Then you do nodetool resetlocalschema on the node that's confused and 
make sure it gets the schema linked up properly.

Then nodetool repair.

I see you've done that, but we found a complete ring restart was 
necessary. This was on Cass 1.1.1.
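
Edward's workaround can be sketched as a dry-run plan: print the commands for a full ring restart, bringing the known-good node up first, then resetting and repairing the confused node(s). The node names, SSH access, and `service cassandra` init script are all assumptions for illustration.

```shell
# Dry-run sketch of the full-ring-restart workaround described above. It only
# prints the commands, so it is safe to run anywhere; node names are
# hypothetical.
plan_ring_restart() {
    # $1 = known-good node; remaining args = confused node(s)
    local good=$1; shift
    local n
    for n in "$good" "$@"; do
        echo "ssh $n 'service cassandra stop'"      # shut down the entire ring
    done
    echo "ssh $good 'service cassandra start'"      # known-good node comes up first
    for n in "$@"; do
        echo "ssh $n 'service cassandra start'"
        echo "ssh $n 'nodetool resetlocalschema'"   # resync the confused node's schema
        echo "ssh $n 'nodetool repair'"             # then repair
    done
}

plan_ring_restart good-node confused-node
```

The ordering is the point: every node stops before any restarts, and resetlocalschema only runs once the known-good node is up to serve the schema.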


Cheers,
Edward

On 12-09-19 08:12 AM, Michael Kjellman wrote:

Sounds like you are losing your system keyspace. When you say nothing 
important changed between the yaml files, do you mean with or without your changes?

Did your data directories change in the migration? Permissions okay?

I've done a 1.1.1 to 1.1.5 upgrade on many of my nodes without issue..

On Sep 19, 2012, at 7:44 AM, Thomas Stets thomas.st...@gmail.com wrote:


I consistently keep losing my keyspace on upgrading from cassandra 1.1.1 to 
1.1.5
...

Re: Losing keyspace on cassandra upgrade

2012-09-19 Thread Michael Kjellman
@Edward Do you have a bug number for that by chance?

On Sep 19, 2012, at 8:25 AM, Edward Sargisson 
edward.sargis...@globalrelay.netmailto:edward.sargis...@globalrelay.net 
wrote:

We've seen that before too - supposedly it was fixed in 1.1.5. Your experience 
casts some doubt on that.

Our workaround, thus far, is to shut down the entire ring and then bring each 
node back up starting with known good.
Then you do nodetool resetlocalschema on the node that's confused and make sure 
it gets the schema linked up properly.
Then nodetool repair.

I see you've done that but we found a complete ring restart was necessary. This 
was on Cass 1.1.1.

Cheers,
Edward

On 12-09-19 08:12 AM, Michael Kjellman wrote:

Sounds like you are losing your system keyspace. When you say nothing 
important changed between the yaml files, do you mean with or without your changes?

Did your data directories change in the migration? Permissions okay?

I've done a 1.1.1 to 1.1.5 upgrade on many of my nodes without issue..

On Sep 19, 2012, at 7:44 AM, Thomas Stets 
thomas.st...@gmail.commailto:thomas.st...@gmail.com wrote:



I consistently keep losing my keyspace on upgrading from cassandra 1.1.1 to 
1.1.5
...

Re: Losing keyspace on cassandra upgrade

2012-09-19 Thread Edward Sargisson

https://issues.apache.org/jira/browse/CASSANDRA-4583

On 12-09-19 08:30 AM, Michael Kjellman wrote:

@Edward Do you have a bug number for that by chance?

On Sep 19, 2012, at 8:25 AM, Edward Sargisson 
edward.sargis...@globalrelay.netmailto:edward.sargis...@globalrelay.net wrote:

We've seen that before too - supposedly it was fixed in 1.1.5. Your experience 
casts some doubt on that.

Our workaround, thus far, is to shut down the entire ring and then bring each 
node back up starting with known good.
Then you do nodetool resetlocalschema on the node that's confused and make sure 
it gets the schema linked up properly.
Then nodetool repair.

I see you've done that but we found a complete ring restart was necessary. This 
was on Cass 1.1.1.
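For reference, that workaround can be sketched as a dry-run script (the host names and node list are hypothetical placeholders; the script only prints the commands it would run):

```shell
#!/bin/sh
# Dry-run sketch of the schema-recovery workaround described above.
# NODES and CONFUSED are hypothetical placeholders for the real cluster.
NODES="node1 node2 node3"
CONFUSED="node3"

PLAN=""
plan() { PLAN="$PLAN$*; "; printf '+ %s\n' "$*"; }

# 1. Shut down the entire ring.
for n in $NODES; do plan "ssh $n 'nodetool drain && service cassandra stop'"; done

# 2. Bring each node back up, starting with a known-good one.
for n in $NODES; do plan "ssh $n 'service cassandra start'"; done

# 3. On the confused node, drop the local schema and re-fetch it from the ring.
plan "nodetool -h $CONFUSED resetlocalschema"

# 4. Repair once the schema agrees everywhere.
plan "nodetool -h $CONFUSED repair"
```

The script just accumulates and echoes the steps, so it can be reviewed before anything is run for real.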

Cheers,
Edward

On 12-09-19 08:12 AM, Michael Kjellman wrote:

Sounds like you are losing your system keyspace. When you say nothing 
important changed between yaml files, do you mean with or without your changes?

Did your data directories change in the migration? Permissions okay?

I've done a 1.1.1 to 1.1.5 upgrade on many of my nodes without issue.

On Sep 19, 2012, at 7:44 AM, Thomas Stets 
thomas.st...@gmail.com wrote:



I consistently keep losing my keyspace on upgrading from cassandra 1.1.1 to 
1.1.5

I have the same cassandra keyspace on all our staging systems:

development:  a 3-node cluster
integration: a 3-node cluster
QS: a 2-node cluster
(production will be a 4-node cluster, which is not yet active)

All clusters were running cassandra 1.1.1. Before going to production I wanted 
to upgrade to the latest production release of Cassandra.

In all cases my keyspace disappeared when I started the cluster with cassandra 
1.1.5.
On the development system I didn't realize at first what was happening; I just 
noticed that nodetool showed a very low amount of data. On integration I saw 
the problem quickly, but could not recover the data. I re-installed the 
cassandra cluster from scratch and populated it with our test data, so our 
developers could work.

I am currently using the QS system to recreate the problem, trying to find out 
what I am doing wrong and how I can avoid losing production data once we are 
live.

Basically I was doing the following:

1. create a snapshot on every node
2. create a tar.gz of my data directory, just to be safe
3. shut down and re-start cassandra 1.1.1 (just to see that it is not the 
re-start that is creating the problem)
4. verify that the keyspace is still known, and the data present.
5. shut down cassandra 1.1.1
6. copy the config to cassandra 1.1.5 (doing a diff of cassandra.yaml to the 
new one first, to see whether anything important has changed)
7. start cassandra 1.1.5
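The steps above can be sketched as a dry-run script (the backup path, data directory, and service commands are assumptions; the script only prints what would be run):

```shell
#!/bin/sh
# Dry-run sketch of upgrade steps 1-7 above; paths are assumptions.
DATA_DIR=/var/lib/cassandra/data
PLAN=""
plan() { PLAN="$PLAN$*; "; printf '+ %s\n' "$*"; }

plan "nodetool snapshot -t pre-1.1.5"                     # 1. snapshot on every node
plan "tar czf /backup/data-$(date +%F).tar.gz $DATA_DIR"  # 2. belt-and-braces copy
plan "service cassandra restart"                          # 3. restart on 1.1.1
plan "cassandra-cli"                                      # 4. verify keyspace and data
plan "service cassandra stop"                             # 5. shut down 1.1.1
plan "diff old/conf/cassandra.yaml new/conf/cassandra.yaml"  # 6. compare configs
plan "cd apache-cassandra-1.1.5 && bin/cassandra"         # 7. start 1.1.5
```

Each line is only echoed, so the sequence can be reviewed per node before running it for real.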

In the log file, after the "Replaying ..." messages, I find the following:

  INFO [main] 2012-09-19 15:15:50,323 CommitLogReplayer.java (line 103) Skipped 
759 mutations from unknown (probably removed) CF with id 1187
  INFO [main] 2012-09-19 15:15:50,323 CommitLogReplayer.java (line 103) Skipped 
606 mutations from unknown (probably removed) CF with id 1186
  INFO [main] 2012-09-19 15:15:50,323 CommitLogReplayer.java (line 103) Skipped 
53 mutations from unknown (probably removed) CF with id 1185
  INFO [main] 2012-09-19 15:15:50,323 CommitLogReplayer.java (line 103) Skipped 
1945 mutations from unknown (probably removed) CF with id 1184
  INFO [main] 2012-09-19 15:15:50,323 CommitLogReplayer.java (line 103) Skipped 
1945 mutations from unknown (probably removed) CF with id 1191
  INFO [main] 2012-09-19 15:15:50,323 CommitLogReplayer.java (line 103) Skipped 
7506 mutations from unknown (probably removed) CF with id 1190
  INFO [main] 2012-09-19 15:15:50,324 CommitLogReplayer.java (line 103) Skipped 
88 mutations from unknown (probably removed) CF with id 1189
  INFO [main] 2012-09-19 15:15:50,324 CommitLogReplayer.java (line 103) Skipped 
87 mutations from unknown (probably removed) CF with id 1188
  INFO [main] 2012-09-19 15:15:50,324 CommitLogReplayer.java (line 103) Skipped 
354 mutations from unknown (probably removed) CF with id 1195
  INFO [main] 2012-09-19 15:15:50,324 CommitLogReplayer.java (line 103) Skipped 
87 mutations from unknown (probably removed) CF with id 1194
  INFO [main] 2012-09-19 15:15:50,324 CommitLogReplayer.java (line 103) Skipped 
45 mutations from unknown (probably removed) CF with id 1192
  INFO [main] 2012-09-19 15:15:50,324 CommitLogReplayer.java (line 103) Skipped 
82 mutations from unknown (probably removed) CF with id 1197
  INFO [main] 2012-09-19 15:15:50,324 CommitLogReplayer.java (line 103) Skipped 
46386 mutations from unknown (probably removed) CF with id 1177
  INFO [main] 2012-09-19 15:15:50,324 CommitLogReplayer.java (line 103) Skipped 
69 mutations from unknown (probably removed) CF with id 1178
  INFO [main] 2012-09-19 15:15:50,325 CommitLogReplayer.java (line 103) Skipped 
73 mutations from unknown (probably removed) CF with id 1179
  INFO [main] 2012-09-19 15:15:50,325 

Re: Cassandra upgrade 1.1.4 issue

2012-08-28 Thread Adeel Akbar

I have upgraded the JDK from 1.6_u14 to 1.7_u06 and now it's working.


Thanks & Regards

*Adeel Akbar*

On 8/24/2012 8:50 PM, Eric Evans wrote:

On Fri, Aug 24, 2012 at 5:00 AM, Adeel Akbar
adeel.ak...@panasiangroup.com wrote:

I have upgraded Cassandra on the ring and successfully upgraded the first
node. On the second node I got the following error. Please help me to resolve
this issue.

[root@X]# /u/cassandra/apache-cassandra-1.1.4/bin/cassandra -f
xss =  -ea
-javaagent:/u/cassandra/apache-cassandra-1.1.4/bin/../lib/jamm-0.2.5.jar
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms502M -Xmx502M
-Xmn100M -XX:+HeapDumpOnOutOfMemoryError -Xss128k
Segmentation fault

Segmentation faults can be caused by software bugs, or by faulty
hardware.  If it is a software bug, it's very unlikely to be a
Cassandra bug (there should be nothing we could do to cause a JVM
segfault).

I would take a close look at what is different between these two
hosts, starting with the version of JVM.  If you have a core dump,
that might provide some insight (and if you don't, it wouldn't hurt to
get one).
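One way to make sure a core dump is produced on the next crash is to lift the core-size limit before starting the process (a sketch; the core-pattern path is Linux-specific and varies by distribution):

```shell
#!/bin/sh
# Allow core files of unlimited size in this shell, then report the setting.
ulimit -c unlimited 2>/dev/null
LIMIT=$(ulimit -c)
echo "core file size limit: $LIMIT"

# Where the kernel writes core files (Linux-specific; may differ per distro).
if [ -r /proc/sys/kernel/core_pattern ]; then
    echo "core pattern: $(cat /proc/sys/kernel/core_pattern)"
fi
```

Start Cassandra from the same shell afterwards so the process inherits the limit.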

Cheers,





Cassandra upgrade 1.1.4 issue

2012-08-24 Thread Adeel Akbar

Hi,

I have upgraded Cassandra on the ring and successfully upgraded the 
first node. On the second node I got the following error. Please help me to 
resolve this issue.


[root@X]# /u/cassandra/apache-cassandra-1.1.4/bin/cassandra -f
xss =  -ea 
-javaagent:/u/cassandra/apache-cassandra-1.1.4/bin/../lib/jamm-0.2.5.jar 
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms502M -Xmx502M 
-Xmn100M -XX:+HeapDumpOnOutOfMemoryError -Xss128k

Segmentation fault

--


Thanks & Regards

*Adeel Akbar*



Re: Cassandra upgrade 1.1.4 issue

2012-08-24 Thread Eric Evans
On Fri, Aug 24, 2012 at 5:00 AM, Adeel Akbar
adeel.ak...@panasiangroup.com wrote:
 I have upgraded Cassandra on the ring and successfully upgraded the first
 node. On the second node I got the following error. Please help me to resolve
 this issue.

 [root@X]# /u/cassandra/apache-cassandra-1.1.4/bin/cassandra -f
 xss =  -ea
 -javaagent:/u/cassandra/apache-cassandra-1.1.4/bin/../lib/jamm-0.2.5.jar
 -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms502M -Xmx502M
 -Xmn100M -XX:+HeapDumpOnOutOfMemoryError -Xss128k
 Segmentation fault

Segmentation faults can be caused by software bugs, or by faulty
hardware.  If it is a software bug, it's very unlikely to be a
Cassandra bug (there should be nothing we could do to cause a JVM
segfault).

I would take a close look at what is different between these two
hosts, starting with the version of JVM.  If you have a core dump,
that might provide some insight (and if you don't, it wouldn't hurt to
get one).

Cheers,

-- 
Eric Evans
Acunu | http://www.acunu.com | @acunu


Re: Concerns about Cassandra upgrade from 1.0.6 to 1.1.X

2012-07-12 Thread aaron morton
It's always a good idea to have a read of the NEWS.txt file 
https://github.com/apache/cassandra/blob/cassandra-1.1/NEWS.txt

Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 12/07/2012, at 5:51 PM, Tyler Hobbs wrote:

 On Wed, Jul 11, 2012 at 8:38 PM, Roshan codeva...@gmail.com wrote:
 
 
  Currently we are using Cassandra 1.0.6 in our production system but suffer
  from CASSANDRA-3616 (it is already fixed in the 1.0.7 version).
  
  We are thinking of upgrading Cassandra to the 1.1.X versions to get its new
  features, but we have some concerns about the upgrade, and expert advice is
  most welcome.
  
  1. Can Cassandra 1.1.X identify 1.0.X data files like SSTables, commit
  logs, etc. without any issue? And vice versa? Because if something happens to
  1.1.X after it is deployed to production, we want to downgrade to the 1.0.6
  version (because that's the version we tested with our applications).
 
 1.1 can handle 1.0 data/schemas/etc without a problem, but the reverse is not 
 necessarily true.  I don't know what in particular might break if you 
 downgrade from 1.1 to 1.0, but in general, Cassandra does not handle 
 downgrading gracefully; typically the SSTable formats have changed during 
 major releases.  If you snapshot prior to upgrading, you can always roll back 
 to that, but you will have lost anything written since the upgrade.
  
 
  2. How should we do the upgrade?  Currently we have a 3-node 1.0.6
  cluster in production. Can we upgrade node by node? If we upgrade node by
  node, will the other 1.0.6 nodes recognize the 1.1.X nodes without any issue?
 
 Yes, you can do a rolling upgrade to 1.1, one node at a time.  It's usually 
 fine to leave the cluster in a mixed state for a short while as long as you 
 don't do things like repairs, decommissions, or bootstraps, but I wouldn't 
 stay in a mixed state any longer than you have to.
 
 It's best to test major upgrades with a second, non-production cluster if 
 that's an option.
 
 -- 
 Tyler Hobbs
 DataStax
 



Re: Concerns about Cassandra upgrade from 1.0.6 to 1.1.X

2012-07-12 Thread Roshan
Thanks Aaron. My major concern is upgrading node by node, because we are
currently using 1.0.6 in production and the plan is to upgrade a single node
to 1.1.2 at a time.

Any comments?

Thanks.

--
View this message in context: 
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Concerns-about-Cassandra-upgrade-from-1-0-6-to-1-1-X-tp7581197p7581221.html
Sent from the cassandra-u...@incubator.apache.org mailing list archive at 
Nabble.com.


Concerns about Cassandra upgrade from 1.0.6 to 1.1.X

2012-07-11 Thread Roshan
Hello

Currently we are using Cassandra 1.0.6 in our production system but suffer
from CASSANDRA-3616 (it is already fixed in the 1.0.7 version).

We are thinking of upgrading Cassandra to the 1.1.X versions to get its new
features, but we have some concerns about the upgrade, and expert advice is
most welcome.

1. Can Cassandra 1.1.X identify 1.0.X data files like SSTables, commit
logs, etc. without any issue? And vice versa? Because if something happens to
1.1.X after it is deployed to production, we want to downgrade to the 1.0.6
version (because that's the version we tested with our applications).

2. How should we do the upgrade?  Currently we have a 3-node 1.0.6
cluster in production. Can we upgrade node by node? If we upgrade node by
node, will the other 1.0.6 nodes recognize the 1.1.X nodes without any issue?

Appreciate the experts' comments on this. Many thanks.

/Roshan 

/Roshan 

--
View this message in context: 
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Concerns-about-Cassandra-upgrade-from-1-0-6-to-1-1-X-tp7581197.html
Sent from the cassandra-u...@incubator.apache.org mailing list archive at 
Nabble.com.


Re: Concerns about Cassandra upgrade from 1.0.6 to 1.1.X

2012-07-11 Thread Tyler Hobbs
On Wed, Jul 11, 2012 at 8:38 PM, Roshan codeva...@gmail.com wrote:



 Currently we are using Cassandra 1.0.6 in our production system but suffer
 from CASSANDRA-3616 (it is already fixed in the 1.0.7 version).

 We are thinking of upgrading Cassandra to the 1.1.X versions to get its new
 features, but we have some concerns about the upgrade, and expert advice is
 most welcome.

 1. Can Cassandra 1.1.X identify 1.0.X data files like SSTables, commit
 logs, etc. without any issue? And vice versa? Because if something happens
 to 1.1.X after it is deployed to production, we want to downgrade to the
 1.0.6 version (because that's the version we tested with our applications).


1.1 can handle 1.0 data/schemas/etc without a problem, but the reverse is
not necessarily true.  I don't know what in particular might break if you
downgrade from 1.1 to 1.0, but in general, Cassandra does not handle
downgrading gracefully; typically the SSTable formats have changed during
major releases.  If you snapshot prior to upgrading, you can always roll
back to that, but you will have lost anything written since the upgrade.



 2. How should we do the upgrade?  Currently we have a 3-node 1.0.6
 cluster in production. Can we upgrade node by node? If we upgrade node by
 node, will the other 1.0.6 nodes recognize the 1.1.X nodes without any issue?


Yes, you can do a rolling upgrade to 1.1, one node at a time.  It's usually
fine to leave the cluster in a mixed state for a short while as long as you
don't do things like repairs, decommissions, or bootstraps, but I wouldn't
stay in a mixed state any longer than you have to.

It's best to test major upgrades with a second, non-production cluster if
that's an option.
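A one-node-at-a-time rolling upgrade of that kind can be sketched as a dry-run loop (the node addresses and install step are hypothetical; the script only prints the per-node commands):

```shell
#!/bin/sh
# Dry-run sketch of a rolling upgrade, per the advice above.
# NODES and the install step are placeholders for the real environment.
NODES="10.0.0.1 10.0.0.2 10.0.0.3"
PLAN=""
plan() { PLAN="$PLAN$*; "; printf '+ %s\n' "$*"; }

for n in $NODES; do
    plan "ssh $n 'nodetool snapshot -t pre-upgrade'"   # roll-back point
    plan "ssh $n 'nodetool drain'"                     # flush memtables first
    plan "ssh $n 'service cassandra stop'"
    plan "ssh $n 'install-new-cassandra-version'"      # placeholder for package step
    plan "ssh $n 'service cassandra start'"
    plan "ssh $n 'nodetool ring'"                      # confirm the node rejoined
    # No repairs, decommissions, or bootstraps until every node is upgraded.
done
```

The loop makes the constraint explicit: each node is fully upgraded and back in the ring before the next one is touched.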

-- 
Tyler Hobbs
DataStax http://datastax.com/


Re: Cassandra upgrade to 1.1.1 resulted in slow query issue

2012-06-14 Thread Sylvain Lebresne
That does look fishy.
Would you mind opening a ticket on JIRA (
https://issues.apache.org/jira/browse/CASSANDRA) directly for that? It's
easier for us to track it there.

Thanks,
Sylvain

On Wed, Jun 13, 2012 at 8:05 PM, Ganza, Ivan iga...@globeandmail.com wrote:

 Greetings,

 We have recently introduced Cassandra at the Globe and Mail here in
 Toronto, Canada.  We are processing and storing the North American
 stock-market feed.  We have found it to work very quickly and things have
 been looking very good.

 Recently we upgraded to version 1.1.1 and have since noticed some issues.

 I will try to describe it for you here.  Basically, one operation that we
 very often perform, and which is very critical, is the ability to 'get the
 latest quote'.  This returns the latest Quote adjusted against exchange
 delay rules.  With Cassandra version 1.0.3 we could get a Quote in around
 2 ms.  After the upgrade we are looking at times of at least 2-3 seconds.

 The way we query the quote is using a REVERSED SuperSliceQuery with
 start=now, end=00:00:00.000 (beginning of day), LIMITED to 1.

 Our investigation leads us to suspect that, since the upgrade, Cassandra
 seems to be reading the sstable from disk even when we request a small
 range of the day, only 5 seconds back.  If you look at the output below you
 can see that the query does NOT get slower as the lookback increases from
 5 sec, 60 sec, 15 min, 60 min, to 24 hours.

 We also noticed that the query was very fast for the first five minutes of
 trading, apparently until the first sstable was flushed to disk.  After
 that we see query times of 1-2 seconds or so.

 Query time[lookback=5]:[1711ms]
 Query time[lookback=60]:[1592ms]
 Query time[lookback=900]:[1520ms]
 Query time[lookback=3600]:[1294ms]
 Query time[lookback=86400]:[1391ms]

 We would really appreciate input or help on this.

 Cassandra version: 1.1.1
 Hector version: 1.0-1

 ---

 public void testCassandraIssue() {
     try {
         int[] seconds = new int[]{ 5, 60, 60 * 15, 60 * 60, 60 * 60 * 24 };
         for (int sec : seconds) {
             DateTime start = new DateTime();
             SuperSliceQuery<String, String, String, String> superSliceQuery =
                 HFactory.createSuperSliceQuery(keyspaceOperator,
                     StringSerializer.get(), StringSerializer.get(),
                     StringSerializer.get(), StringSerializer.get());
             superSliceQuery.setKey("101390" + "." + testFormatter.print(start));
             superSliceQuery.setColumnFamily("Quotes");
             superSliceQuery.setRange(superKeyFormatter.print(start),
                 superKeyFormatter.print(start.minusSeconds(sec)),
                 true,
                 1);

             long theStart = System.currentTimeMillis();
             QueryResult<SuperSlice<String, String, String>> result =
                 superSliceQuery.execute();
             long end = System.currentTimeMillis();
             System.out.println("Query time[lookback=" + sec + "]:[" +
                 (end - theStart) + "ms]");
         }
     } catch (Exception e) {
         e.printStackTrace();
         fail(e.getMessage());
     }
 }

 ---

 create column family Quotes
   with column_type = Super
   and comparator = BytesType
   and subcomparator = BytesType
   and keys_cached = 7000
   and rows_cached = 0
   and row_cache_save_period = 0
   and key_cache_save_period = 3600
   and memtable_throughput = 255
   and memtable_operations = 0.29
   AND compression_options = {sstable_compression:SnappyCompressor,
     chunk_length_kb:64};

 -Ivan/

 ---

 Ivan Ganza | Senior Developer | Information Technology

 c: 647.701.6084 | e:  iga...@globeandmail.com

RE: Cassandra upgrade to 1.1.1 resulted in slow query issue

2012-06-14 Thread Ganza, Ivan
Greetings,

Thank you - issue is created here:  
https://issues.apache.org/jira/browse/CASSANDRA-4340

-Ivan/

---
Ivan Ganza | Senior Developer | Information Technology
c: 647.701.6084 | e:  iga...@globeandmail.com

From: Sylvain Lebresne [mailto:sylv...@datastax.com]
Sent: Thursday, June 14, 2012 8:20 AM
To: user@cassandra.apache.org
Cc: cassandra-u...@incubator.apache.org; Schlueter, Kevin
Subject: Re: Cassandra upgrade to 1.1.1 resulted in slow query issue

That does look fishy.
Would you mind opening a ticket on JIRA 
(https://issues.apache.org/jira/browse/CASSANDRA) directly for that? It's 
easier for us to track it there.

Thanks,
Sylvain

On Wed, Jun 13, 2012 at 8:05 PM, Ganza, Ivan 
iga...@globeandmail.com wrote:
Greetings,

We have recently introduced Cassandra at the Globe and Mail here in Toronto, 
Canada.  We are processing and storing the North American stock-market feed.  
We have found it to work very quickly and things have been looking very good.

Recently we upgraded to version 1.1.1 and have since noticed some issues.

I will try to describe it for you here.  Basically, one operation that we very 
often perform, and which is very critical, is the ability to 'get the latest 
quote'.  This returns the latest Quote adjusted against exchange delay rules.  
With Cassandra version 1.0.3 we could get a Quote in around 2 ms.  After the 
upgrade we are looking at times of at least 2-3 seconds.

The way we query the quote is using a REVERSED SuperSliceQuery with start=now, 
end=00:00:00.000 (beginning of day), LIMITED to 1.

Our investigation leads us to suspect that, since the upgrade, Cassandra seems 
to be reading the sstable from disk even when we request a small range of the 
day, only 5 seconds back.  If you look at the output below you can see that the 
query does NOT get slower as the lookback increases from 5 sec, 60 sec, 15 min, 
60 min, to 24 hours.

We also noticed that the query was very fast for the first five minutes of 
trading, apparently until the first sstable was flushed to disk.  After that we 
see query times of 1-2 seconds or so.

Query time[lookback=5]:[1711ms]
Query time[lookback=60]:[1592ms]
Query time[lookback=900]:[1520ms]
Query time[lookback=3600]:[1294ms]
Query time[lookback=86400]:[1391ms]

We would really appreciate input or help on this.

Cassandra version: 1.1.1
Hector version: 1.0-1

---
public void testCassandraIssue() {
    try {
        int[] seconds = new int[]{ 5, 60, 60 * 15, 60 * 60, 60 * 60 * 24 };
        for (int sec : seconds) {
            DateTime start = new DateTime();
            SuperSliceQuery<String, String, String, String> superSliceQuery =
                HFactory.createSuperSliceQuery(keyspaceOperator,
                    StringSerializer.get(), StringSerializer.get(),
                    StringSerializer.get(), StringSerializer.get());
            superSliceQuery.setKey("101390" + "." + testFormatter.print(start));
            superSliceQuery.setColumnFamily("Quotes");
            superSliceQuery.setRange(superKeyFormatter.print(start),
                superKeyFormatter.print(start.minusSeconds(sec)),
                true,
                1);

            long theStart = System.currentTimeMillis();
            QueryResult<SuperSlice<String, String, String>> result =
                superSliceQuery.execute();
            long end = System.currentTimeMillis();
            System.out.println("Query time[lookback=" + sec + "]:[" +
                (end - theStart) + "ms]");
        }
    } catch (Exception e) {
        e.printStackTrace();
        fail(e.getMessage());
    }
}

---
create column family Quotes
with column_type = Super
and  comparator = BytesType
and subcomparator = BytesType
and keys_cached = 7000
and rows_cached = 0
and row_cache_save_period = 0
and key_cache_save_period = 3600
and memtable_throughput = 255
and memtable_operations = 0.29
AND compression_options={sstable_compression:SnappyCompressor, 
chunk_length_kb:64};



-Ivan/

---
Ivan Ganza | Senior Developer | Information Technology
c: 647.701.6084 | e:  iga...@globeandmail.com



Cassandra upgrade to 1.1.1 resulted in slow query issue

2012-06-13 Thread Ganza, Ivan
Greetings,

We have recently introduced Cassandra at the Globe and Mail here in Toronto, 
Canada.  We are processing and storing the North American stock-market feed.  
We have found it to work very quickly and things have been looking very good.

Recently we upgraded to version 1.1.1 and have since noticed some issues.

I will try to describe it for you here.  Basically, one operation that we very 
often perform, and which is very critical, is the ability to 'get the latest 
quote'.  This returns the latest Quote adjusted against exchange delay rules.  
With Cassandra version 1.0.3 we could get a Quote in around 2 ms.  After the 
upgrade we are looking at times of at least 2-3 seconds.

The way we query the quote is using a REVERSED SuperSliceQuery with start=now, 
end=00:00:00.000 (beginning of day), LIMITED to 1.

Our investigation leads us to suspect that, since the upgrade, Cassandra seems 
to be reading the sstable from disk even when we request a small range of the 
day, only 5 seconds back.  If you look at the output below you can see that the 
query does NOT get slower as the lookback increases from 5 sec, 60 sec, 15 min, 
60 min, to 24 hours.

We also noticed that the query was very fast for the first five minutes of 
trading, apparently until the first sstable was flushed to disk.  After that we 
see query times of 1-2 seconds or so.

Query time[lookback=5]:[1711ms]
Query time[lookback=60]:[1592ms]
Query time[lookback=900]:[1520ms]
Query time[lookback=3600]:[1294ms]
Query time[lookback=86400]:[1391ms]

We would really appreciate input or help on this.

Cassandra version: 1.1.1
Hector version: 1.0-1

---
public void testCassandraIssue() {
    try {
        int[] seconds = new int[]{ 5, 60, 60 * 15, 60 * 60, 60 * 60 * 24 };
        for (int sec : seconds) {
            DateTime start = new DateTime();
            SuperSliceQuery<String, String, String, String> superSliceQuery =
                HFactory.createSuperSliceQuery(keyspaceOperator,
                    StringSerializer.get(), StringSerializer.get(),
                    StringSerializer.get(), StringSerializer.get());
            superSliceQuery.setKey("101390" + "." + testFormatter.print(start));
            superSliceQuery.setColumnFamily("Quotes");
            superSliceQuery.setRange(superKeyFormatter.print(start),
                superKeyFormatter.print(start.minusSeconds(sec)),
                true,
                1);

            long theStart = System.currentTimeMillis();
            QueryResult<SuperSlice<String, String, String>> result =
                superSliceQuery.execute();
            long end = System.currentTimeMillis();
            System.out.println("Query time[lookback=" + sec + "]:[" +
                (end - theStart) + "ms]");
        }
    } catch (Exception e) {
        e.printStackTrace();
        fail(e.getMessage());
    }
}

---
create column family Quotes
with column_type = Super
and  comparator = BytesType
and subcomparator = BytesType
and keys_cached = 7000
and rows_cached = 0
and row_cache_save_period = 0
and key_cache_save_period = 3600
and memtable_throughput = 255
and memtable_operations = 0.29
AND compression_options={sstable_compression:SnappyCompressor, 
chunk_length_kb:64};



-Ivan/

---
Ivan Ganza | Senior Developer | Information Technology
c: 647.701.6084 | e:  iga...@globeandmail.com


Cassandra Upgrade from 0.8.1

2012-06-05 Thread Adeel Akbar
Dear Guys,

 

Thank you so much for your reply. 

 

Currently I have two Cassandra nodes running in ring.  I have installed
Cassandra on following location;

 

/root/apache-cassandra-0.8.1

 

Now my questions are;

 

1.   How do we upgrade (step by step, version by version, like 0.8.1 to 0.8.5,
then 0.8.5 to 1.0.0 ... to 1.1.0)?

2.   Once I take a snapshot, does it contain the full data or only one node's data?

3.   Once I download apache-cassandra-0.8.5-bin.tar.gz and untar it,
what do I do? Do I move only a few folders from the previous version's
directory to the new version's directory, or all directories and files?

4.   After moving the data, is any command required?

 

 

Thanks & Regards

 

Adeel Akbar



RE: Cassandra Upgrade from 0.8.1

2012-06-05 Thread Harshvardhan Ojha
You can follow these steps for your version as well:
http://www.datastax.com/docs/1.0/install/upgrading

If you keep the same data directory in cassandra.yaml, the data will be picked 
up by the new version.
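One way to confirm the old and new installs point at the same data directory is to compare the setting in both yaml files (a sketch; the temp files and paths here stand in for the real old/new configs):

```shell
#!/bin/sh
# Sketch: compare data_file_directories between two cassandra.yaml files.
# The temp files are stand-ins for the real old and new configs.
OLD=$(mktemp); NEW=$(mktemp)
printf 'data_file_directories:\n    - /var/lib/cassandra/data\n' > "$OLD"
printf 'data_file_directories:\n    - /var/lib/cassandra/data\n' > "$NEW"

# Pull the line after the data_file_directories key from each file.
old_dir=$(grep -A1 '^data_file_directories' "$OLD" | tail -1)
new_dir=$(grep -A1 '^data_file_directories' "$NEW" | tail -1)

if [ "$old_dir" = "$new_dir" ]; then
    echo "data directories match: $old_dir"
else
    echo "MISMATCH: old=$old_dir new=$new_dir"
fi
rm -f "$OLD" "$NEW"
```

Run against the real yaml files, a mismatch here would explain a new version starting up with an empty data set.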

Regards
Harsh

From: Adeel Akbar [mailto:adeel.ak...@panasiangroup.com]
Sent: Tuesday, June 05, 2012 12:11 PM
To: user@cassandra.apache.org
Subject: Cassandra Upgrade from 0.8.1

Dear Guys,

Thank you so much for your reply.

Currently I have two Cassandra nodes running in ring.  I have installed 
Cassandra on following location;

/root/apache-cassandra-0.8.1

Now my questions are;


1.   How do we upgrade (step by step, version by version, like 0.8.1 to 0.8.5, 
then 0.8.5 to 1.0.0 ... to 1.1.0)?

2.   Once I take a snapshot, does it contain the full data or only one node's data?

3.   Once I download apache-cassandra-0.8.5-bin.tar.gz and untar it, what do I 
do? Do I move only a few folders from the previous version's directory to the 
new version's directory, or all directories and files?

4.   After moving the data, is any command required?


Thanks & Regards

Adeel Akbar


Re: Cassandra upgrade from 0.8.1 to 1.1.0

2012-06-04 Thread aaron morton
In addition always read the NEWS.txt file in the distribution and glance at the 
CHANGES.txt file. 

Cheers

-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 4/06/2012, at 12:19 PM, Roshan wrote:

 Hi
 
 Hope this will help to you.
 
 http://www.datastax.com/docs/1.0/install/upgrading
 http://www.datastax.com/docs/1.1/install/upgrading
 
 Thanks.
 
 --
 View this message in context: 
 http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Cassandra-upgrade-from-0-8-1-to-1-1-0-tp7580198p7580210.html
 Sent from the cassandra-u...@incubator.apache.org mailing list archive at 
 Nabble.com.



Re: Cassandra upgrade from 0.8.1 to 1.1.0

2012-06-03 Thread Roshan
Hi

Hope this will help to you.

http://www.datastax.com/docs/1.0/install/upgrading
http://www.datastax.com/docs/1.1/install/upgrading

Thanks.

--
View this message in context: 
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Cassandra-upgrade-from-0-8-1-to-1-1-0-tp7580198p7580210.html
Sent from the cassandra-u...@incubator.apache.org mailing list archive at 
Nabble.com.


cassandra upgrade to 1.1 - migration problem

2012-05-15 Thread Casey Deccio
I recently upgraded from cassandra 1.0.10 to 1.1.  Everything worked fine
in one environment, but after I upgraded in another, I can't find my
keyspace.  When I run, e.g., cassandra-cli with 'use KeySpace;' It tells me
that the keyspace doesn't exist.  In the log I see this:

ERROR [MigrationStage:1] 2012-05-15 11:39:48,216
AbstractCassandraDaemon.java (line 134) Exception in thread
Thread[MigrationStage:1,5,main]java.lang.AssertionError
at
org.apache.cassandra.db.DefsTable.updateKeyspace(DefsTable.java:441)
at
org.apache.cassandra.db.DefsTable.mergeKeyspaces(DefsTable.java:339)
at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:269)
at
org.apache.cassandra.db.DefsTable.mergeRemoteSchema(DefsTable.java:248)
at
org.apache.cassandra.service.MigrationManager$MigrationTask.runMayThrow(MigrationManager.java:416)
at
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:636)

I can see that the data I would expect still seems to be in the new place
(/var/lib/cassandra/data/App/ColFamily/App-DomainName-*) on all nodes.

What am I missing?

Thanks,
Casey


Re: cassandra upgrade to 1.1 - migration problem

2012-05-15 Thread Casey Deccio
Here's something new in the logs:

ERROR 12:21:09,418 Exception in thread Thread[SSTableBatchOpen:2,5,main]
java.lang.RuntimeException: Cannot open
/var/lib/cassandra/data/system/Versions/system-Versions-hc-35 because
partitioner does not match org.apache.cassandra.dht.ByteOrderedPartitioner
at
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:164)
at
org.apache.cassandra.io.sstable.SSTableReader$1.run(SSTableReader.java:224)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)

Casey

On Tue, May 15, 2012 at 12:08 PM, Casey Deccio ca...@deccio.net wrote:

 I recently upgraded from cassandra 1.0.10 to 1.1.  Everything worked fine
 in one environment, but after I upgraded in another, I can't find my
 keyspace.  When I run, e.g., cassandra-cli with 'use KeySpace;' It tells me
 that the keyspace doesn't exist.  In the log I see this:

 ERROR [MigrationStage:1] 2012-05-15 11:39:48,216
 AbstractCassandraDaemon.java (line 134) Exception in thread
 Thread[MigrationStage:1,5,main]java.lang.AssertionError
 at
 org.apache.cassandra.db.DefsTable.updateKeyspace(DefsTable.java:441)
 at
 org.apache.cassandra.db.DefsTable.mergeKeyspaces(DefsTable.java:339)
 at
 org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:269)
 at
 org.apache.cassandra.db.DefsTable.mergeRemoteSchema(DefsTable.java:248)
 at
 org.apache.cassandra.service.MigrationManager$MigrationTask.runMayThrow(MigrationManager.java:416)
 at
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
 at
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at
 java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
 at java.util.concurrent.FutureTask.run(FutureTask.java:166)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:636)

 I can see that the data I would expect still seems to be in the new place
 (/var/lib/cassandra/data/App/ColFamily/App-DomainName-*) on all nodes.

 What am I missing?

 Thanks,
 Casey



Re: cassandra upgrade to 1.1 - migration problem

2012-05-15 Thread Casey Deccio
cassandra.yaml on all nodes had ByteOrderedPartitioner with both the
previous version and upgraded version.

That being said, when I first started up Cassandra after upgrading (with the
updated .yaml, including ByteOrderedPartitioner), all nodes in the ring
appeared to be up.  But the load they carried was minimal (KB, as opposed
to GB in the previous version), and the keyspace didn't exist.  Then I
attempted to restart the daemon on each node to see if it would help, but
startup failed on each with the partitioner error.

Casey
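A quick way to cross-check the configured partitioner against the one named in the error is to read it straight out of cassandra.yaml (a sketch; the temp yaml here is a stand-in for the real config):

```shell
#!/bin/sh
# Sketch: read the partitioner from cassandra.yaml and compare it with the
# class named in the "partitioner does not match" error above.
YAML=$(mktemp)
echo 'partitioner: org.apache.cassandra.dht.ByteOrderedPartitioner' > "$YAML"

configured=$(sed -n 's/^partitioner:[[:space:]]*//p' "$YAML")
reported="org.apache.cassandra.dht.ByteOrderedPartitioner"  # from the log

if [ "$configured" = "$reported" ]; then
    echo "partitioner matches: $configured"
else
    echo "MISMATCH: yaml=$configured error=$reported"
fi
rm -f "$YAML"
```

If the yaml matches, as it did here, the disagreement is with what the system sstables themselves were written with, rather than with the config.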

On Tue, May 15, 2012 at 12:59 PM, Oleg Dulin oleg.du...@liquidanalytics.com
 wrote:

 Did you check cassandra.yaml to make sure partitioner there matches what
 was in your old cluster ?

 Regards,
 Oleg Dulin
 Please note my new office #: 732-917-0159

 On May 15, 2012, at 3:22 PM, Casey Deccio wrote:

 Here's something new in the logs:

 ERROR 12:21:09,418 Exception in thread Thread[SSTableBatchOpen:2,5,main]
 java.lang.RuntimeException: Cannot open
 /var/lib/cassandra/data/system/Versions/system-Versions-hc-35 because
 partitioner does not match org.apache.cassandra.dht.ByteOrderedPartitioner
 at
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:164)
 at
 org.apache.cassandra.io.sstable.SSTableReader$1.run(SSTableReader.java:224)
 at
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
 at
 java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)

 Casey

 On Tue, May 15, 2012 at 12:08 PM, Casey Deccio ca...@deccio.net wrote:

 I recently upgraded from cassandra 1.0.10 to 1.1.  Everything worked fine
 in one environment, but after I upgraded in another, I can't find my
 keyspace.  When I run, e.g., cassandra-cli with 'use KeySpace;', it tells me
 that the keyspace doesn't exist.  In the log I see this:

 ERROR [MigrationStage:1] 2012-05-15 11:39:48,216
 AbstractCassandraDaemon.java (line 134) Exception in thread
 Thread[MigrationStage:1,5,main]java.lang.AssertionError
 at
 org.apache.cassandra.db.DefsTable.updateKeyspace(DefsTable.java:441)
 at
 org.apache.cassandra.db.DefsTable.mergeKeyspaces(DefsTable.java:339)
 at
 org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:269)
 at
 org.apache.cassandra.db.DefsTable.mergeRemoteSchema(DefsTable.java:248)
 at
 org.apache.cassandra.service.MigrationManager$MigrationTask.runMayThrow(MigrationManager.java:416)
 at
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
 at
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at
 java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
 at
 java.util.concurrent.FutureTask.run(FutureTask.java:166)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:636)

 I can see that the data I would expect still seems to be in the new place
 (/var/lib/cassandra/data/App/ColFamily/App-DomainName-*) on all nodes.

 What am I missing?

 Thanks,
 Casey






Re: cassandra upgrade to 1.1 - migration problem

2012-05-15 Thread Dave Brosius
The replication factor for a keyspace is stored in the 
system.schema_keyspaces column family.


Since you can't view this with the cli (as the server won't start), the only
way to look at it that I know of is to use the sstable2json tool on the
*.db file for that column family.

So for instance on my machine I do

./sstable2json
/var/lib/cassandra/data/system/schema_keyspaces/system-schema_keyspaces-ia-1-Data.db

and get

{
"7374726573735f6b73": [["durable_writes","true",1968197311980145],
["name","stress_ks",1968197311980145],
["strategy_class","org.apache.cassandra.locator.SimpleStrategy",1968197311980145],
["strategy_options","{\"replication_factor\":\"3\"}",1968197311980145]]
}

It's likely you don't have an entry for replication_factor.

Theoretically I suppose you could embellish the output and use
json2sstable to fix it, but I have no experience here and would get the
blessing of the DataStax fellas before proceeding.
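
As a hedged illustration of that round trip (not a tested procedure:
patch_replication_factor is a hypothetical helper, and the row below
mirrors the sample output with strategy_options deliberately left empty):

```python
import json

# Sketch of the fix: take the row sstable2json printed for
# system.schema_keyspaces, add the missing replication_factor to the
# strategy_options column, and the result could then be fed back
# through json2sstable.  Each column is a [name, value, timestamp] list.
row = {
    "7374726573735f6b73": [
        ["durable_writes", "true", 1968197311980145],
        ["name", "stress_ks", 1968197311980145],
        ["strategy_class", "org.apache.cassandra.locator.SimpleStrategy", 1968197311980145],
        ["strategy_options", "{}", 1968197311980145],  # replication_factor missing
    ]
}

def patch_replication_factor(row, rf):
    """Add replication_factor to every strategy_options column (in place)."""
    for columns in row.values():
        for col in columns:
            if col[0] == "strategy_options":
                opts = json.loads(col[1])
                opts.setdefault("replication_factor", str(rf))
                col[1] = json.dumps(opts)
    return row

patched = patch_replication_factor(row, 3)
print(json.dumps(patched, indent=1))
```

The patched JSON would then be written to a file and handed to
json2sstable, per the (untested) suggestion above.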






On 05/15/2012 07:02 PM, Casey Deccio wrote:
Sorry to reply to my own message (again).  I took a closer look at the
logs and realized that the partitioner errors aren't what kept the
daemon from starting; those errors were in the logs even before I upgraded.
This one seems to be the culprit.


java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:160)
Caused by: java.lang.RuntimeException: 
org.apache.cassandra.config.ConfigurationException: SimpleStrategy 
requires a replication_factor strategy option.

at org.apache.cassandra.db.Table.init(Table.java:275)
at org.apache.cassandra.db.Table.open(Table.java:114)
at org.apache.cassandra.db.Table.open(Table.java:97)
at 
org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:204)
at 
org.apache.cassandra.service.AbstractCassandraDaemon.init(AbstractCassandraDaemon.java:254)

... 5 more
Caused by: org.apache.cassandra.config.ConfigurationException: 
SimpleStrategy requires a replication_factor strategy option.
at 
org.apache.cassandra.locator.SimpleStrategy.validateOptions(SimpleStrategy.java:71)
at 
org.apache.cassandra.locator.AbstractReplicationStrategy.createReplicationStrategy(AbstractReplicationStrategy.java:218)
at 
org.apache.cassandra.db.Table.createReplicationStrategy(Table.java:295)

at org.apache.cassandra.db.Table.init(Table.java:271)
... 9 more
Cannot load daemon

I'm not sure how to check the replication_factor and/or update it 
without using cassandra-cli, which requires the daemon to be running.


Casey




Re: cassandra upgrade to 1.1 - migration problem

2012-05-15 Thread Casey Deccio
On Tue, May 15, 2012 at 5:41 PM, Dave Brosius dbros...@mebigfatguy.com wrote:

 The replication factor for a keyspace is stored in the
 system.schema_keyspaces column family.

 Since you can't view this with cli as the server won't start, the only way
 to look at it, that i know of is to use the

 sstable2json tool on the *.db file for that column family...

 So for instance on my machine i do

 ./sstable2json /var/lib/cassandra/data/system/schema_keyspaces/
 system-schema_keyspaces-ia-1-Data.db

 and get


 {
 "7374726573735f6b73": [["durable_writes","true",1968197311980145],
 ["name","stress_ks",1968197311980145],
 ["strategy_class","org.apache.cassandra.locator.SimpleStrategy",1968197311980145],
 ["strategy_options","{\"replication_factor\":\"3\"}",1968197311980145]]
 }

 It's likely you don't have an entry for replication_factor.


Yep, I checked the system.schema_keyspaces ColumnFamily, and there was no
replication_factor value, as you suspected.  But the dev cluster that
worked after upgrade did have that value, so it started up okay.
Apparently pre-1.1 was less picky about its presence.
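
For anyone checking other clusters for the same gap, a small sketch along
these lines could flag keyspaces whose schema row lacks a
replication_factor (the dump dict below is a hypothetical stand-in for real
sstable2json output):

```python
import binascii
import json

# Sketch: scan an sstable2json dump of system.schema_keyspaces and report
# which keyspaces lack a replication_factor.  Row keys are hex-encoded
# keyspace names ("7374726573735f6b73" decodes to "stress_ks").
dump = {
    "7374726573735f6b73": [
        ["strategy_class", "org.apache.cassandra.locator.SimpleStrategy", 1],
        ["strategy_options", "{}", 1],
    ]
}

def keyspaces_missing_rf(dump):
    """Return names of keyspaces whose strategy_options lacks replication_factor."""
    missing = []
    for hex_key, columns in dump.items():
        name = binascii.unhexlify(hex_key).decode("ascii")
        opts = {}
        for col_name, value, _ts in columns:
            if col_name == "strategy_options":
                opts = json.loads(value)
        if "replication_factor" not in opts:
            missing.append(name)
    return missing

print(keyspaces_missing_rf(dump))
# prints: ['stress_ks']
```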


 Theoretically i suppose you could embellish the output, and use
 json2sstable to fix it, but I have no experience here, and would get the
 blessings of datastax fellas, before proceeding.


Actually, I went ahead and took a chance because I had already been
completely offline for several hours and wanted to get things back up.  I
did what you suggested: added the replication_factor value to the JSON
returned by sstable2json and imported it using json2sstable.  Fortunately I
had the dev cluster values to use as a basis.  I started things up, and it
worked like a champ.  Thanks!

Casey