Thank you Jonathan and all.
On Tue, Nov 14, 2017 at 10:53 PM, Jonathan Haddad wrote:
Anthony’s suggestion of using replace_address_first_boot lets you avoid that
requirement, and it’s specifically why it was added in 2.2.
On Tue, Nov 14, 2017 at 1:02 AM Anshu Vajpayee
wrote:
Thanks guys,
I think it is better to pass replace_address on the command line rather than
update the cassandra-env file, so that there is no requirement to remove it
later.
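For illustration, a rough sketch of that command-line form (tarball layout and the IP are placeholders; adjust for your install):

    # Start the replacement node; the first_boot variant is ignored on later
    # restarts, so there is nothing to clean up afterwards.
    bin/cassandra -f -Dcassandra.replace_address_first_boot=10.0.0.12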
On Tue, Nov 14, 2017 at 6:32 AM, Anthony Grasso
wrote:
Hi Anshu,
To add to Erick's comment, remember to remove the *replace_address* method
from the *cassandra-env.sh* file once the node has rejoined successfully.
The node will fail the next restart otherwise.
Alternatively, use the *replace_address_first_boot* method, which works
exactly the same way but only takes effect the first time the node starts, so
it does not need to be removed afterwards.
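For contrast, a sketch of the cassandra-env.sh form being described here (placeholder IP); this is the line that has to be deleted once the node has rejoined:

    # In cassandra-env.sh -- remove after the replacement completes, or the
    # next restart will fail:
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.12"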
Use the replace_address method with its own IP address. Make sure you
delete the contents of the following directories:
- data/
- commitlog/
- saved_caches/
Forget rejoining with repair -- it will just cause more problems. Cheers!
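A rough sketch of that cleanup, assuming the default package locations under /var/lib/cassandra (substitute your own data_file_directories, commitlog_directory and saved_caches_directory):

    # Stop the node and clear its old state before starting it with the
    # replace option.
    sudo service cassandra stop
    sudo rm -rf /var/lib/cassandra/data/* \
                /var/lib/cassandra/commitlog/* \
                /var/lib/cassandra/saved_caches/*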
On Mon, Nov 13, 2017 at 2:54 PM, Anshu Vajpayee
wrote:
> Hi All
I’ve had a few use cases for downgrading consistency over the years. If you’re
showing a customer dashboard with some ad summary data, it’s great to be right,
but showing a number that’s close is better than not being up at all.
> On Oct 6, 2017, at 1:32 PM, Jeff Jirsa wrote:
I think it was Brandon who used to make a pretty compelling argument that
downgrading consistency on writes was always wrong: if you can
tolerate the lower consistency, you should just use the lower consistency
from the start (because Cassandra is still going to send the write to all
replicas).
> Modern client drivers also have ways to “downgrade” the CL of requests, in
> case they fail. E.g. for the Java driver:
> http://docs.datastax.com/en/latest-java-driver-api/com/datastax/driver/core/policies/DowngradingConsistencyRetryPolicy.html
Quick note from a driver dev's perspective:
Mark,
I’ll check to see what our app is using.
Thanks
Mark
801-705-7115 office
From: Steinmaurer, Thomas [mailto:thomas.steinmau...@dynatrace.com]
Sent: Friday, October 6, 2017 12:25 PM
To: user@cassandra.apache.org
Subject: RE: Node failure
QUORUM should succeed with a RF=3 and 2 of 3 nodes available.
Thomas
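The arithmetic behind that, spelled out as a throwaway shell check (illustrative only):

    # Quorum for RF replicas is floor(RF/2) + 1.
    RF=3
    QUORUM=$(( RF / 2 + 1 ))   # = 2
    echo "QUORUM needs $QUORUM of $RF replicas, so one node down still succeeds"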
From: Mark Furlong [mailto:mfurl...@ancestry.com]
Sent: Friday, 06 October 2017 19:43
To: user@cassandra.apache.org
Subject: RE: Node failure
Thanks for the detail. I’ll have to remove and then add one back in. It’s my
consistency levels that may bite me in the interim.
Thanks
We are using quorum on our reads and writes.
Thanks
Mark
801-705-7115 office
From: Jeff Jirsa [mailto:jji...@gmail.com]
Sent: Friday, October 6, 2017 11:30 AM
To: cassandra
Subject: Re: Node failure
If you write with CL:ANY, CL:ONE (or LOCAL_ONE), and one node fails, you may
lose data that was only written to the failed node.
The only time I’ll have a problem is if I have to do a read ALL or write ALL.
Any other gotchas I should be aware of?
Thanks
Mark
801-705-7115 office
From: Akshit Jain [mailto:akshit13...@iiitd.ac.in]
Sent: Friday, October 6, 2017 11:25 AM
To: user@cassandra.apache.org
Subject: Re: Node failure
You replace it with a new node and bootstrapping happens. The new node
receives data from the other two nodes.
The rest depends on the scenario you are asking about.
Regards
Akshit Jain
B-Tech,2013124
9891724697
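One way to watch that bootstrap from the new node (assuming nodetool is on the PATH; no auth flags shown):

    nodetool netstats   # streaming sessions from the other two replicas
    nodetool status     # the new node moves from UJ (joining) to UN when done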
On Fri, Oct 6, 2017 at 10:50 PM, Mark Furlong wrote:
> What happens when I have a 3 node cluster
There's a lot to talk about here, what's your exact question?
- You can either remove it from the cluster or replace it. You typically
remove it if it'll never be replaced, but in RF=3 with 3 nodes, you
probably need to replace it. To replace, you'll start a new server with
-Dcassandra.replace_address
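A hedged sketch of that start-up on the new server, with a placeholder for the dead node's address:

    # Point replace_address at the node being replaced.
    bin/cassandra -f -Dcassandra.replace_address=10.0.0.12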
distributed across all of your
cluster. And you want to delete whole partitions, if at all possible. (Or at
least a reasonable number of deletes within a partition.)
Sean Durity
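For example (keyspace, table and key are made up), a whole-partition delete writes a single partition tombstone instead of one tombstone per deleted row:

    cqlsh -e "DELETE FROM my_ks.events WHERE customer_id = 42;"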
From: Karthick V [mailto:karthick...@zohocorp.com]
Sent: Monday, July 03, 2017 12:47 PM
To: user
Subject: Re: Node failure Due
your
tables with [tombstones]. A quick [grep -i tombstone /path/to/system.log]
command would tell you which objects are suffering from tombstones!
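Spelled out with the stock package log location (substitute your own system.log path):

    grep -i tombstone /var/log/cassandra/system.log | tail -n 20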
From: Karthick V [mailto:karthick...@zohocorp.com]
Sent: Monday, July 03, 2017 11:47 AM
To: user
Subject: Re: Node failure Due To Very high GC pa
Hi Bryan,
Thanks for your quick response. We have already tuned our memory
and GC based on our hardware specification, and it was working fine until
yesterday, i.e. before facing the below-specified delete request. As you
suggested, we will once again look into our GC & memory configuration.
This is a very antagonistic use case for Cassandra :P I assume you're
familiar with Cassandra and deletes? (e.g.
http://thelastpickle.com/blog/2016/07/27/about-deletes-and-tombstones.html,
http://docs.datastax.com/en/cassandra/2.1/cassandra/dml/dml_about_deletes_c.html
)
That being said, are you gi
It helps, Thanks a lot,
miriam
On Mon, Feb 28, 2011 at 9:50 PM, Aaron Morton wrote:
I thought there was more to it.
The steps for move or removing nodes are outlined on the operations page wiki
as you probably know.
What approach are you considering to rebalancing the token distribution when
removing a node? E.g. If you have 5 nodes and remove 1 the best long term
solution is
Aaron,
Thanks a lot,
Actually I meant a larger number of nodes than 3 and a replication factor of
3.
We are looking at a system that may shrink due to permanent failures, and
then automatically detects the failure and streams its range to other nodes
in the cluster so that there are again 3 replicas.
I understn
AFAIK the general assumption is that you will want to repair the node manually
within the GCGraceSeconds period. If this cannot be done, then nodetool
decommission and removetoken are the recommended approach.
In your example though, with 3 nodes and an RF of 3, your cluster can sustain a
single node failure.
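Roughly, the two paths being described, with placeholder host and token values (removetoken is the old name; it was later renamed removenode):

    nodetool -h 10.0.0.11 repair                              # preferred, within GCGraceSeconds
    nodetool -h 10.0.0.11 removetoken <token-of-dead-node>    # if the node is gone for good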