Re: Replication factor, LOCAL_QUORUM write consistency and materialized views

2024-05-17 Thread Gábor Auth
Hi, On Fri, May 17, 2024 at 6:18 PM Jon Haddad wrote: > I strongly suggest you don't use materialized views at all. There are edge cases that in my opinion make them unsuitable for production, both in terms of cluster stability as well as data integrity. Oh, there is already an open and

Re: Replication factor, LOCAL_QUORUM write consistency and materialized views

2024-05-17 Thread Gábor Auth
, Gábor AUTH. On Fri, May 17, 2024 at 8:58 AM Gábor Auth wrote: > Hi, I know, I know, the materialized view is experimental... :) So, I ran into a strange error. Among others, I have a very small 4-node cluster, with very minimal data (~100

Re: Replication factor, LOCAL_QUORUM write consistency and materialized views

2024-05-17 Thread Jon Haddad
w, I know, the materialized view is experimental... :) > So, I ran into a strange error. Among others, I have a very small 4-node cluster, with very minimal data (~100 MB at all), the keyspace's replication factor is 3, everything works fine... except: if I restart a node, I get a lot of

Replication factor, LOCAL_QUORUM write consistency and materialized views

2024-05-17 Thread Gábor Auth
Hi, I know, I know, the materialized view is experimental... :) So, I ran into a strange error. Among others, I have a very small 4-node cluster, with very minimal data (~100 MB at all), the keyspace's replication factor is 3, everything works fine... except: if I restart a node, I get a lot

Re: write on ONE node vs replication factor

2023-07-16 Thread Anurag Bisht
Thank you Dipan, it makes sense now. Cheers, Anurag On Sun, Jul 16, 2023 at 12:43 AM Dipan Shah wrote: > Hello Anurag, In Cassandra, Strong consistency is guaranteed when "R + W > N" where R is Read consistency, W is Write consistency and N is the Replication Fa

Re: write on ONE node vs replication factor

2023-07-16 Thread Dipan Shah
Hello Anurag, In Cassandra, Strong consistency is guaranteed when "R + W > N" where R is Read consistency, W is Write consistency and N is the Replication Factor. So in your case, R(2) + W(1) = 3 which is NOT greater than your replication factor(3) so you will not be able to gua
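
(Editor's aside: the rule is easy to check from cqlsh. A minimal sketch, assuming a keyspace with RF=3 and a made-up table demo.kv - the names are illustrative, not from the thread:)

    CONSISTENCY ONE;     -- W = 1
    INSERT INTO demo.kv (k, v) VALUES ('a', 1);
    CONSISTENCY TWO;     -- R = 2; W + R = 1 + 2 = 3, NOT > 3, so this read may miss the write
    SELECT v FROM demo.kv WHERE k = 'a';
    CONSISTENCY QUORUM;  -- with QUORUM on both sides, 2 + 2 = 4 > 3, so reads see acknowledged writes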

Re: write on ONE node vs replication factor

2023-07-15 Thread Anurag Bisht
thank you Jeff, it makes more sense now. How about I write with ONE consistency, replication factor = 3, and read consistency QUORUM? I am guessing in that case I will not have the empty read even if it happens immediately after the write request; let me know your thoughts. Cheers, Anurag

Re: write on ONE node vs replication factor

2023-07-15 Thread Jeff Jirsa
Consistency level controls when queries acknowledge/succeed. Replication factor is where data lives / how many copies. If you write at consistency ONE and replication factor 3, the query finishes successfully when the write is durable on one of the 3 copies. It will get sent to all 3, but it'll

write on ONE node vs replication factor

2023-07-15 Thread Anurag Bisht
Hello Users, I am new to Cassandra and trying to understand its architecture. If I write to ONE node for a particular key and have a replication factor of 3, would the written key get replicated to the other two nodes? Let me know if I am thinking incorrectly. Thanks, Anurag

RE: Trouble After Changing Replication Factor

2021-10-13 Thread Isaeed Mohanna
Replication Factor. The most likely explanation is that repair failed and you didn't notice, or that you didn't actually repair every host / every range. Which version are you using? How did you run repair? On Tue, Oct 12, 2021 at 4:33 AM Isaeed Mohanna <isa...@xsense.co> wrote: Hi

Re: Trouble After Changing Replication Factor

2021-10-12 Thread Jeff Jirsa
request will actually return a correct result? Thanks From: Bowen Song Sent: Monday, October 11, 2021 5:13 PM To: user@cassandra.apache.org Subject: Re: Trouble After Changing Replication Factor You have RF=3

Re: Trouble After Changing Replication Factor

2021-10-12 Thread Dmitry Saprykin
; Thanks From: Bowen Song Sent: Monday, October 11, 2021 5:13 PM To: user@cassandra.apache.org Subject: Re: Trouble After Changing Replication Factor You have RF=3 and both read & write CL=1, which means you are asking Cass

Re: Trouble After Changing Replication Factor

2021-10-12 Thread Bowen Song
, October 11, 2021 5:13 PM To: user@cassandra.apache.org Subject: Re: Trouble After Changing Replication Factor You have RF=3 and both read & write CL=1, which means you are asking Cassandra to give up strong consistency in order to gain higher availability and perhaps slightly faster s

RE: Trouble After Changing Replication Factor

2021-10-12 Thread Isaeed Mohanna
request will actually return a correct result? Thanks From: Bowen Song Sent: Monday, October 11, 2021 5:13 PM To: user@cassandra.apache.org Subject: Re: Trouble After Changing Replication Factor You have RF=3 and both read & write CL=1, which means you are asking Cassandra to give up st

Re: Trouble After Changing Replication Factor

2021-10-11 Thread Bowen Song
e CL) > RF. On 10/10/2021 11:55, Isaeed Mohanna wrote: Hi, We had a cluster with 3 nodes with replication factor 2 and we were using reads with consistency level ONE. We recently added a 4th node and changed the replication factor to 3; once this was done, apps reading from the DB with CL1 would

Trouble After Changing Replication Factor

2021-10-10 Thread Isaeed Mohanna
Hi, We had a cluster with 3 nodes with replication factor 2 and we were using reads with consistency level ONE. We recently added a 4th node and changed the replication factor to 3. Once this was done, apps reading from the DB with CL1 would receive an empty record. Looking around, I was surprised
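
(Editor's aside: the usual recovery sequence after raising RF, sketched with a placeholder keyspace name my_ks and DC name dc1 - until repair completes, reading at QUORUM avoids landing on a new, still-empty replica:)

    cqlsh -e "ALTER KEYSPACE my_ks WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3}"
    # on every node, stream existing data to the new replicas:
    nodetool repair -full my_ks
    # until that finishes, have clients read at QUORUM (or ALL) rather than ONE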

Re: Anti-entropy repair with a 4 node cluster replication factor 4

2020-10-27 Thread manish khandelwal
If you run a full repair then it should be fine, since all the replicas are present on all the nodes. If you are using the -pr option then you need to run it on all the nodes. On Tue, Oct 27, 2020 at 4:11 PM Fred Al wrote: > Hello! Running Cassandra 2.2.9 with a 4 node cluster with replication
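
(Editor's aside: the distinction as commands, for the RF = N = 4 case above; flag spellings vary slightly by version:)

    # RF equals the node count, so one full repair on a single node
    # touches every range on every replica:
    nodetool repair -full <keyspace>
    # with -pr, each node repairs only its primary ranges, so this
    # must be run on all four nodes to cover everything:
    nodetool repair -pr <keyspace>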

Anti-entropy repair with a 4 node cluster replication factor 4

2020-10-27 Thread Fred Al
Hello! Running Cassandra 2.2.9 with a 4 node cluster with replication factor 4. When running anti-entropy repair is it required to run repair on all 4 nodes or is it sufficient to run it on only one node? Since all data is replicated on all nodes i.m.o. only one node would need to be repaired

Re: any risks with changing replication factor on live production cluster without downtime and service interruption?

2020-05-27 Thread Leena Ghatpande
: Tuesday, May 26, 2020 11:33 PM To: user@cassandra.apache.org Subject: Re: any risks with changing replication factor on live production cluster without downtime and service interruption? By retry logic, I’m going to guess you are doing some kind of version consistency trick where you have

Re: any risks with changing replication factor on live production cluster without downtime and service interruption?

2020-05-26 Thread Reid Pinchback
ads to LOCAL_QUORUM until you’re done to buffer yourself from that risk. From: Leena Ghatpande Reply-To: "user@cassandra.apache.org" Date: Tuesday, May 26, 2020 at 1:20 PM To: "user@cassandra.apache.org" Subject: Re: any risks with changing replication factor on live production c

Re: any risks with changing replication factor on live production cluster without downtime and service interruption?

2020-05-26 Thread Leena Ghatpande
From: Leena Ghatpande Sent: Friday, May 22, 2020 11:51 AM To: cassandra cassandra Subject: any risks with changing replication factor on live production cluster without downtime and service interruption? We are on Cassandra 3.7 and have a 12-node cluster, 2 DCs, with 6 nodes

Re: any risks with changing replication factor on live production cluster without downtime and service interruption?

2020-05-25 Thread Oleksandr Shulgin
On Fri, May 22, 2020 at 9:51 PM Jeff Jirsa wrote: > With those consistency levels it's already possible you don't see your writes, so you're already probably seeing some of what would happen if you went to RF=5 like that - just less common. If you did what you describe you'd have a 40%

Re: any risks with changing replication factor on live production cluster without downtime and service interruption?

2020-05-22 Thread Jeff Jirsa
and thinking of changing the replication factor to 5 for each DC. Our application uses the below consistency levels: read-level: LOCAL_ONE, write-level: LOCAL_QUORUM. If we change to RF=5 on the live cluster and run full repairs, would we see read/write

any risks with changing replication factor on live production cluster without downtime and service interruption?

2020-05-22 Thread Leena Ghatpande
We are on Cassandra 3.7 and have a 12-node cluster, 2 DCs, with 6 nodes in each DC. RF=3. We have around 150M rows across tables. We are planning to add more nodes to the cluster, and thinking of changing the replication factor to 5 for each DC. Our application uses the below consistency level
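
(Editor's aside: the change being discussed would look roughly like this - the keyspace and DC names are placeholders, not from the thread:)

    ALTER KEYSPACE my_ks WITH replication =
        {'class': 'NetworkTopologyStrategy', 'DC1': 5, 'DC2': 5};
    -- then run a full repair on every node before trusting the new replicas;
    -- until then a LOCAL_ONE read can land on a replica that has no data yet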

Re: system_auth keyspace replication factor

2018-11-26 Thread Sam Tunnicliffe
> I suspect some of the intermediate queries (determining role, etc) happen at quorum in 2.2+, but I don't have time to go read the code and prove it. This isn't true. Aside from when using the default superuser, only CRM::getAllRoles reads at QUORUM (because the resultset would include the

Re: system_auth keyspace replication factor

2018-11-26 Thread Oleksandr Shulgin
On Fri, Nov 23, 2018 at 5:38 PM Vitali Dyachuk wrote: > We have recently met a problem when we added 60 nodes in 1 region to the cluster and set an RF=60 for the system_auth ks, following this documentation https://docs.datastax.com/en/cql/3.3/cql/cql_using/useUpdateKeyspaceRF.html

Re: system_auth keyspace replication factor

2018-11-23 Thread Vitali Dyachuk
Attaching the runner log snippet, where we can see that "Rebuilding token map" took most of the time. getAllRoles is using QUORUM; I don't know if it is used during login.

Re: system_auth keyspace replication factor

2018-11-23 Thread Jeff Jirsa
I suspect some of the intermediate queries (determining role, etc) happen at quorum in 2.2+, but I don't have time to go read the code and prove it. In any case, RF > 10 per DC is probably excessive. Also want to crank up the validity times so it uses cached info longer -- Jeff Jirsa

Re: system_auth keyspace replication factor

2018-11-23 Thread Vitali Dyachuk
No, it's not the cassandra user, and as I understood all other users log in at LOCAL_ONE. On Fri, 23 Nov 2018, 19:30 Jonathan Haddad wrote: > Any chance you're logging in with the Cassandra user? It uses quorum reads. > On Fri, Nov 23, 2018 at 11:38 AM Vitali Dyachuk wrote: >> Hi, We have recently

Re: system_auth keyspace replication factor

2018-11-23 Thread Jonathan Haddad
Any chance you're logging in with the Cassandra user? It uses quorum reads. On Fri, Nov 23, 2018 at 11:38 AM Vitali Dyachuk wrote: > Hi, We have recently met a problem when we added 60 nodes in 1 region to the cluster and set an RF=60 for the system_auth ks, following this documentation

system_auth keyspace replication factor

2018-11-23 Thread Vitali Dyachuk
Hi, We have recently met a problem when we added 60 nodes in 1 region to the cluster and set an RF=60 for the system_auth ks, following this documentation https://docs.datastax.com/en/cql/3.3/cql/cql_using/useUpdateKeyspaceRF.html However we've started to see increased login latencies in the

Re: Tuning Replication Factor - All, Consistency ONE

2018-07-11 Thread Jürgen Albersdorfer
by key, but not for searching. High availability is a nice giveaway here. If you end up having only one table in C*, maybe something like Redis would work for your needs, too. Some hints from my own experience with it - if you choose to use Cassandra: have at least as many racks as the replication factor

Re: Tuning Replication Factor - All, Consistency ONE

2018-07-10 Thread Jeff Jirsa
link between the two racks goes down, but both are otherwise functional - a query at ONE in either rack would be able to read and write data, but it would diverge between the two racks for some period of time). > When I go to set up the database though, I am required to set a

Tuning Replication Factor - All, Consistency ONE

2018-07-10 Thread Code Wiget
around 1s, then there shouldn't be an issue. When I go to set up the database though, I am required to set the replication factor to a number - 1, 2, 3, etc. So I can't just say "ALL" and have it replicate to all nodes. Right now, I have a 2-node cluster with replication factor 3. Will this cause

Re: Reducing the replication factor

2018-01-09 Thread Jeff Jirsa
e to reduce the replication factor from 3 to 2 but we are not sure if it is a safe operation. We would like to get some feedback from you guys. Has anybody tried to shrink the replication factor? Does "nodetool cleanup" get rid of the replicated data no

Reducing the replication factor

2018-01-09 Thread Alessandro Pieri
Dear Everyone, We are running Cassandra v2.0.15 on our production cluster. We would like to reduce the replication factor from 3 to 2 but we are not sure if it is a safe operation. We would like to get some feedback from you guys. Has anybody tried to shrink the replication factor? Does
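
(Editor's aside: the replies below boil down to roughly this sequence - repair first, then shrink, then cleanup. The keyspace name is a placeholder; check flags against the v2.0.15 nodetool:)

    nodetool repair my_ks    # first, so the replicas you keep hold complete data
    cqlsh -e "ALTER KEYSPACE my_ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2}"
    nodetool cleanup my_ks   # on each node, drops the data it no longer owns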

Cassandra Replication Factor change from 2 to 3 for each data center

2017-12-15 Thread Harika Vangapelli -T (hvangape - AKRAYA INC at Cisco)
This is just a basic question to ask, but it is worth asking. We changed the replication factor from 2 to 3 in our production cluster. We have 2 data centers. Is nodetool repair -dcpar from a single node in one data center sufficient for the whole replication to take effect? Please confirm. Do I

Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Nate McCall
Regardless, if you are not modifying users frequently (with five you most likely are not), make sure to turn the permission cache waaay up. In 2.1 that is just: permissions_validity_in_ms (default is 2000, i.e. 2 seconds). Feel free to set it to 1 day or some such. The corresponding async update
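
(Editor's aside: in cassandra.yaml that looks like the following - the one-day value is Nate's suggestion, and the second setting is the async-refresh knob his truncated sentence points at; its value here is purely illustrative:)

    permissions_validity_in_ms: 86400000     # cache permissions for 1 day (default 2000)
    permissions_update_interval_in_ms: 2000  # background async refresh interval (illustrative value)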

Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Erick Ramirez
601771 / Processing response from /xx.xx.xx.116 [SharedPool-Worker-1] | 2017-08-30 10:51:25.015000 | xx.xx.xx.113 | 601824 / Request complete | 2017-08-30 10:51:25.014874 | xx.xx.xx.113 | 601874

Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread kurt greaves
For that many nodes mixed with vnodes you probably want a lower RF than N per datacenter. 5 or 7 would be reasonable. The only down side is that auth queries may take slightly longer as they will often have to go to other nodes to be resolved, but in practice this is likely not a big deal as the

Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Chuck Reynolds
7 at 10:42 AM To: User <user@cassandra.apache.org> Subject: Re: system_auth replication factor in Cassandra 2.1 On Wed, Aug 30, 2017 at 6:40 PM, Chuck Reynolds <creyno...@ancestry.com> wrote: How many users do you have (or expect to be foun

Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Oleksandr Shulgin
On Wed, Aug 30, 2017 at 6:40 PM, Chuck Reynolds wrote: > How many users do you have (or expect to be found in system_auth.users)? 5 users. > What are the current RF for system_auth and consistency level you are using in cqlsh? 135 in one DC and 227 in the

Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Chuck Reynolds
Subject: Re: system_auth replication factor in Cassandra 2.1 On Wed, Aug 30, 2017 at 5:50 PM, Chuck Reynolds <creyno...@ancestry.com> wrote: So I've read that if you're using authentication in Cassandra 2.1, your replication factor should matc

Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Oleksandr Shulgin
On Wed, Aug 30, 2017 at 6:20 PM, Chuck Reynolds wrote: > So I tried to run a repair with the following on one of the servers: nodetool repair system_auth -pr -local. After two hours it hadn't finished. I had to kill the repair because of another issue and

Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Chuck Reynolds
e.org" <user@cassandra.apache.org> Subject: Re: system_auth replication factor in Cassandra 2.1 It's a better rule of thumb to use an RF of 3 to 5 per DC and this is what the docs now suggest: http://cassandra.apache.org/doc/latest/operating/security.html#authentication Out o

Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Oleksandr Shulgin
On Wed, Aug 30, 2017 at 5:50 PM, Chuck Reynolds <creyno...@ancestry.com> wrote: > So I've read that if you're using authentication in Cassandra 2.1, your replication factor should match the number of nodes in your datacenter. Is that true?

Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Sam Tunnicliffe
s & superusers, the link above also has info on this. Thanks, Sam On 30 August 2017 at 16:50, Chuck Reynolds <creyno...@ancestry.com> wrote: > So I've read that if you're using authentication in Cassandra 2.1, your replication factor should match the number of nodes in your dat

RE: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Jonathan Baynes
into a secure cluster, set the replication factor of the system_auth and dse_security keyspaces to a value that is greater than 1. In a multi-node cluster, using the default of 1 prevents logging into any node when the node that stores the user data is down. From: Chuck Reynolds [mailto:creyno

system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Chuck Reynolds
So I've read that if you're using authentication in Cassandra 2.1, your replication factor should match the number of nodes in your datacenter. Is that true? I have a two-datacenter cluster, 135 nodes in datacenter 1 & 227 nodes in an AWS datacenter. Why do I want to replicate the system_
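
(Editor's aside: the consensus in the replies above is an RF of 3 to 5 per DC rather than RF = node count. A sketch, with placeholder DC names:)

    ALTER KEYSPACE system_auth WITH replication =
        {'class': 'NetworkTopologyStrategy', 'dc1': 5, 'aws_dc': 5};
    -- then repair just this keyspace on each node: nodetool repair system_auth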

Re: Dropping down replication factor

2017-08-15 Thread Erick Ramirez
; Rather than troubleshoot this further, what I was thinking about doing was: - drop the replication factor on our keyspace to two - hopefully this would reduce load on these two remaining nodes - run repairs/cleanup across the cluster - then shoot these two nodes in the 'c' rack

Re: Dropping down replication factor

2017-08-13 Thread Brian Spindler
Thanks Kurt. We had one sstable from a cf of ours. I am actually running a repair on that cf now and then plan to try and join the additional nodes as you suggest. I deleted the opscenter corrupt sstables as well but will not bother repairing that before adding capacity. Been keeping an eye

Re: Dropping down replication factor

2017-08-13 Thread kurt greaves
On 14 Aug. 2017 00:59, "Brian Spindler" wrote: Do you think with the setup I've described I'd be ok doing that now to recover this node? The node died trying to run the scrub; I've restarted it but I'm not sure it's going to get past a scrub/repair, this is why I

Re: Dropping down replication factor

2017-08-13 Thread Brian Spindler
ondary index build. Hard to say for sure. 'nodetool compactionstats' if you're able to provide it. The jstack probably not necessary; streaming is being marked as failed and it's turning itself off. Not sure why streaming is marked as failing, though, an

Re: Dropping down replication factor

2017-08-13 Thread Jeff Jirsa
0 0 68403 0 0 MiscStage 0 0 0 0 0 AntiEntropySessions 0 0 0 0 0 [truncated thread-pool stats from the quoted message]

Re: Dropping down replication factor

2017-08-13 Thread Brian Spindler
ian.spind...@gmail.com> Reply-To: <user@cassandra.apache.org> Date: Saturday, August 12, 2017 at 6:34 PM To: <user@cassandra.apache.org> Subject: Re: Dropping down replication factor Thanks for replying Jeff.

Re: Dropping down replication factor

2017-08-12 Thread Brian Spindler
; From: Brian Spindler <brian.spind...@gmail.com> Reply-To: <user@cassandra.apache.org> Date: Saturday, August 12, 2017 at 6:34 PM To: <user@cassandra.apache.org> Subject: Re: Dropping down replication factor Thanks for replying Jeff.

Re: Dropping down replication factor

2017-08-12 Thread Jeffrey Jirsa
Re: Dropping down replication factor Thanks for replying Jeff. Responses below. On Sat, Aug 12, 2017 at 8:33 PM Jeff Jirsa <jji...@gmail.com> wrote: > Answers inline -- Jeff Jirsa > On Aug 12, 2017, at 2:58 PM, brian.spind...@gmail.com wrote:

Re: Dropping down replication factor

2017-08-12 Thread Brian Spindler
15 _TRACE 0 MUTATION 2949001 COUNTER_MUTATION 0 BINARY 0 REQUEST_RESPONSE 0 PAGED_RANGE 0 READ_REPAIR 8571. I can get a jstack if needed. Rather than

Re: Dropping down replication factor

2017-08-12 Thread Jeff Jirsa
ike building secondary index or similar? jstack thread dump would be useful, or at least nodetool tpstats. > Rather than troubleshoot this further, what I was thinking about doing was: - drop the replication factor on our keyspace to two. Repair before you do this, or you'll lose you

Dropping down replication factor

2017-08-12 Thread brian . spindler
about doing was: - drop the replication factor on our keyspace to two - hopefully this would reduce load on these two remaining nodes - run repairs/cleanup across the cluster - then shoot these two nodes in the 'c' rack - run repairs/cleanup across the cluster Would this work with minimal

RE: Question about replica and replication factor

2016-09-20 Thread Jun Wu
Great explanation! For the single partition read, it makes sense to read data from only one replica. Thank you so much Ben! Jun From: ben.sla...@instaclustr.com Date: Tue, 20 Sep 2016 05:30:43 +0000 Subject: Re: Question about replica and replication factor To: wuxiaomi...@hotmail.com CC: user

Re: Question about replica and replication factor

2016-09-19 Thread Ben Slater
in the post shows that the coordinator only contacts/reads data from one replica, and does read repair for the remaining replicas. Also, how could a read go across all nodes in the cluster? Thanks! Jun From: be

Re: Question about replica and replication factor

2016-09-19 Thread Jun Wu
Thanks! Jun From: ben.sla...@instaclustr.com Date: Tue, 20 Sep 2016 04:18:59 +0000 Subject: Re: Question about replica and replication factor To: user@cassandra.apache.org Each individual read (whe

RE: Question about replica and replication factor

2016-09-19 Thread Jun Wu
: ben.sla...@instaclustr.com Date: Tue, 20 Sep 2016 04:18:59 +0000 Subject: Re: Question about replica and replication factor To: user@cassandra.apache.org Each individual read (where a read is a single row or single partition) will read from one node (ignoring read repairs) as each partition

Re: Question about replica and replication factor

2016-09-19 Thread Ben Slater
distributed across all the nodes in your cluster). Cheers Ben On Tue, 20 Sep 2016 at 14:09 Jun Wu <wuxiaomi...@hotmail.com> wrote: > Hi there, I have a question about the replica and replication factor. For example, I have a cluster of 6 nodes in the same data

Question about replica and replication factor

2016-09-19 Thread Jun Wu
Hi there, I have a question about the replica and replication factor. For example, I have a cluster of 6 nodes in the same data center. Replication factor RF is set to 3 and the consistency level is default 1. According to this calculator http://www.ecyrd.com/cassandracalculator
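
(Editor's aside: the described setup as CQL - with RF 3 on 6 nodes, a single-partition read at the default CL of ONE is answered by one of that partition's 3 replicas:)

    CREATE KEYSPACE demo WITH replication =
        {'class': 'SimpleStrategy', 'replication_factor': 3};
    CONSISTENCY ONE;  -- each read needs a response from just one of the 3 replicas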

Re: Increasing replication factor and repair doesn't seem to work

2016-05-25 Thread Luke Jolly
ess the problem may have been with the initial addition of the 10.128.0.20 node, because when I added it in it never synced data, I guess? It was at around 50 MB when it first came up and transitioned to "UN". After it w

Re: Increasing replication factor and repair doesn't seem to work

2016-05-25 Thread Luke Jolly
is stuff that has been written since it came up. We never delete data, ever, so we should have zero tombstones. If I am not mistaken, only two of my nodes actually have all the data, 10

Re: Increasing replication factor and repair doesn't seem to work

2016-05-25 Thread Mike Yeap
tually have all the data, 10.128.0.3 and 10.142.0.14, since they agree on the data amount. 10.142.0.13 is almost a GB lower, and then of course 10.128.0.20, which is missing over 5 GB of data. I tried running nodetool -local on both DCs and

Re: Increasing replication factor and repair doesn't seem to work

2016-05-24 Thread Bryan Cheng
agree on the data amount. 10.142.0.13 is almost a GB lower, and then of course 10.128.0.20, which is missing over 5 GB of data. I tried running nodetool -local on both DCs and it didn't fix either one. Am I running into a bug of some kind?

Re: Increasing replication factor and repair doesn't seem to work

2016-05-24 Thread kurt Greaves
didn't fix either one. Am I running into a bug of some kind? On Tue, May 24, 2016 at 4:06 PM Bhuvan Rawal <bhu1ra...@gmail.com> wrote: > Hi Luke, You mentioned that replication factor was increased from

Re: Increasing replication factor and repair doesn't seem to work

2016-05-24 Thread Bhuvan Rawal
is almost a GB lower, and then of course 10.128.0.20, which is missing over 5 GB of data. I tried running nodetool -local on both DCs and it didn't fix either one. Am I running into a bug of some kind? On Tue, May 24, 2016 at 4:06 PM Bhuvan Rawal <bhu1ra...@gmail.

Re: Increasing replication factor and repair doesn't seem to work

2016-05-24 Thread Luke Jolly
; wrote: > Hi Luke, You mentioned that the replication factor was increased from 1 to 2. In that case, was the node bearing IP 10.128.0.20 carrying around 3GB of data earlier? You can run nodetool repair with option -local to initiate repair of the local datacenter for gce-us-central1

Re: Increasing replication factor and repair doesn't seem to work

2016-05-24 Thread Bhuvan Rawal
Hi Luke, You mentioned that the replication factor was increased from 1 to 2. In that case, was the node bearing IP 10.128.0.20 carrying around 3GB of data earlier? You can run nodetool repair with option -local to initiate repair of the local datacenter for gce-us-central1. Also you may suspect that if a lot

Re: Increasing replication factor and repair doesn't seem to work

2016-05-24 Thread Luke Jolly
te: > I am running 3.0.5 with 2 nodes in two DCs, gce-us-central1 and gce-us-east1. I increased the replication factor of gce-us-central1 from 1 to 2. Then I ran 'nodetool repair -dc gce-us-central1'. The "Owns" for the node switched to 100%

Re: Increasing replication factor and repair doesn't seem to work

2016-05-23 Thread kurt Greaves
> gce-us-east1. I increased the replication factor of gce-us-central1 from 1 to 2. Then I ran 'nodetool repair -dc gce-us-central1'. The "Owns" for the node switched to 100% as it should, but the Load showed that it didn't actually sync the data. I then ran a full

Increasing replication factor and repair doesn't seem to work

2016-05-23 Thread Luke Jolly
I am running 3.0.5 with 2 nodes in two DCs, gce-us-central1 and gce-us-east1. I increased the replication factor of gce-us-central1 from 1 to 2. Then I ran 'nodetool repair -dc gce-us-central1'. The "Owns" for the node switched to 100% as it should but the Load showed that it didn'
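
(Editor's aside: the change Luke describes corresponds roughly to the commands below; the keyspace name is a placeholder and gce-us-east1 is assumed to stay at RF 1. The repair variants debated downthread differ only in scope:)

    cqlsh -e "ALTER KEYSPACE my_ks WITH replication =
        {'class': 'NetworkTopologyStrategy', 'gce-us-central1': 2, 'gce-us-east1': 1}"
    nodetool repair -dc gce-us-central1   # as run in the thread: one DC only
    nodetool repair -full my_ks           # broader fallback, run per node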

Re: Replication Factor Change

2015-11-05 Thread Yulian Oifa
doesn't really change your availability model). On Thu, Nov 5, 2015 at 8:01 AM Yulian Oifa <oifa.yul...@gmail.com> wrote: > Hello to all. I am planning to change the replication factor from 1 to 3. Will it cause data read errors at the time of node repairs? Best regards, Yulian Oifa

Re: Replication Factor Change

2015-11-05 Thread Eric Stevens
for a node failure, so that doesn't really change your availability model). On Thu, Nov 5, 2015 at 8:01 AM Yulian Oifa <oifa.yul...@gmail.com> wrote: > Hello to all. I am planning to change the replication factor from 1 to 3. Will it cause data read errors at the time of node repairs?

RE: Replication Factor Change

2015-11-05 Thread aeljami.ext
Hello, If the current CL = ONE, be careful on production at the time of the replication factor change: 3 nodes will be queried while data is still being transferred ==> so, data read errors! From: Yulian Oifa [mailto:oifa.yul...@gmail.com] Sent: Thursday, 5 November 2015 16:02 To: user@cassandra.apache.

Replication Factor Change

2015-11-05 Thread Yulian Oifa
Hello to all. I am planning to change the replication factor from 1 to 3. Will it cause data read errors at the time of node repairs? Best regards, Yulian Oifa

Re: Re : will Unsafeassassinate a dead node maintain the replication factor

2015-11-01 Thread sai krishnam raju potturi
capture the tokens of the dead node. Any way we could make sure the replication of 3 is maintained? On Sat, Oct 31, 2015, 11:14 Surbhi Gupta <surbhi.gupt...@gmail.com> wrote:

Re : will Unsafeassassinate a dead node maintain the replication factor

2015-10-31 Thread sai krishnam raju potturi
hi; would unsafeassassinating a dead node maintain the replication factor like the decommission process or removenode process? thanks

Re: Re : will Unsafeassassinate a dead node maintain the replication factor

2015-10-31 Thread Surbhi Gupta
e node is up and wait till streaming happens. You can check if the streaming is completed by nodetool netstats. If streaming is completed you can do unsafe assassination. To answer your question, unsafe assassination will not take care of replication
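
(Editor's aside: Surbhi's sequence as commands. Only the first two paths re-replicate; assassinate does not. The exact assassinate invocation varies by version - JMX unsafeAssassinateEndpoint in 2.x, a nodetool subcommand in newer releases:)

    nodetool decommission          # on the node itself, while still up; streams its data away
    nodetool netstats              # watch until streaming completes
    nodetool removenode <host-id>  # for an already-dead node; re-streams replicas to keep RF
    # assassinate forces the node out WITHOUT re-replication, so RF is not maintained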

Re: Re : will Unsafeassassinate a dead node maintain the replication factor

2015-10-31 Thread sai krishnam raju potturi
streaming is completed you can do unsafe assassination. To answer your question, unsafe assassination will not take care of the replication factor. It is like forcing a node out from the cluster. Hope this helps.

Re: Re : will Unsafeassassinate a dead node maintain the replication factor

2015-10-31 Thread sai krishnam raju potturi
ation will not take care of the replication factor. It is like forcing a node out from the cluster. Hope this helps. Sent from my iPhone > On Oct 31, 2015, at 5:12 AM, sai krishnam raju potturi <pskraj...@gmail.com> wrote: > hi;

Re: Re : will Unsafeassassinate a dead node maintain the replication factor

2015-10-31 Thread Surbhi Gupta
You have to do a few things before unsafe assassination. First run nodetool decommission if the node is up and wait till streaming happens. You can check if the streaming is completed by nodetool netstats. If streaming is completed you can do unsafe as

Re: Re : will Unsafeassassinate a dead node maintain the replication factor

2015-10-31 Thread sai krishnam raju potturi
etool decommission if the node is up and wait till streaming happens. You can check if the streaming is completed by nodetool netstats. If streaming is completed you can do unsafe assassination. To answer your question

Re: Re : will Unsafeassassinate a dead node maintain the replication factor

2015-10-31 Thread Surbhi Gupta
ke sure the replication of 3 is maintained? On Sat, Oct 31, 2015, 11:14 Surbhi Gupta <surbhi.gupt...@gmail.com> wrote: You have to do a few things before un

Re: Re : will Unsafeassassinate a dead node maintain the replication factor

2015-10-31 Thread sai krishnam raju potturi
You have to do a few things before unsafe assassination. First run the nodetool decommission if the node is up and wait till streaming happens. You can check if the streaming is completed by nodetool netstats. If streaming

Re: Re : will Unsafeassassinate a dead node maintain the replication factor

2015-10-31 Thread Surbhi Gupta
de. Any way we could make sure the replication of 3 is maintained? On Sat, Oct 31, 2015, 11:14 Surbhi Gupta <surbhi.gupt...@gmail.com> wrot

Re: Re : Replication factor for system_auth keyspace

2015-10-16 Thread Victor Chen
8 nodes in each DC. > For the system_auth keyspace, what should be the ideal replication_factor set? We tried setting the replication factor equal to the number of nodes in a datacenter, and the repair for the system_auth keyspace took really long. Your suggestions would be of great help. More than 1 and a lot less than 48. =Rob

Re: Re : Replication factor for system_auth keyspace

2015-10-16 Thread sai krishnam raju potturi
thanks guys for the advice. We were running parallel repairs earlier, with cassandra version 2.0.14. As pointed out having set the replication factor really huge for system_auth was causing the repair to take really long. thanks Sai On Fri, Oct 16, 2015 at 9:56 AM, Victor Chen <victor.

Re: Re : Replication factor for system_auth keyspace

2015-10-15 Thread Robert Coli
On Thu, Oct 15, 2015 at 10:24 AM, sai krishnam raju potturi <pskraj...@gmail.com> wrote: > we are deploying a new cluster with 2 datacenters, 48 nodes in each DC. For the system_auth keyspace, what should be the ideal replication_factor set? We tried setting t

Re : Replication factor for system_auth keyspace

2015-10-15 Thread sai krishnam raju potturi
hi; we are deploying a new cluster with 2 datacenters, 48 nodes in each DC. For the system_auth keyspace, what should be the ideal replication_factor set? We tried setting the replication factor equal to the number of nodes in a datacenter, and the repair for the system_auth keyspace took

Run which repair cmd when increasing replication factor

2015-03-06 Thread 曹志富
I want to increase the replication factor in my C* 2.1.3 cluster (RF change from 2 to 3 for some keyspaces). I read the doc "Updating the replication factor" (http://www.datastax.com/documentation/cql/3.1/cql/cql_using/update_ks_rf_t.html). Step two is to run nodetool repair. But as I know, nodetool
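
(Editor's aside: a common rolling pattern for the repair step in that doc, sketched with a placeholder keyspace; on 2.1.3 a plain repair is a full, not incremental, repair:)

    cqlsh -e "ALTER KEYSPACE my_ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}"
    # then, one node at a time across the whole cluster:
    nodetool repair -pr my_ks   # -pr covers only each node's primary ranges,
                                # so every node must get a pass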

Re: Changing replication factor of Cassandra cluster

2015-01-06 Thread Pranay Agarwal
Thanks Robert. Also, I have seen the node-repair operation fail for some nodes. What are the chances of the data getting corrupted if node-repair fails? I am okay with data availability issues for some time as long as I don't lose or corrupt data. Also, is there a way to restore the graph without

Re: Changing replication factor of Cassandra cluster

2015-01-06 Thread Robert Coli
On Tue, Jan 6, 2015 at 4:40 PM, Pranay Agarwal <agarwalpran...@gmail.com> wrote: Thanks Robert. Also, I have seen the node-repair operation fail for some nodes. What are the chances of the data getting corrupted if node-repair fails? If repair does not complete before gc_grace_seconds, chance

Re: Changing replication factor of Cassandra cluster

2014-12-29 Thread Pranay Agarwal
wrote: Hi All, I have a 20-node cassandra cluster with 500GB of data and a replication factor of 1. I increased the replication factor to 3 and ran nodetool repair on each node one by one as the docs say. But it takes hours for 1 node to finish repair. Is that normal or am I doing something

Re: Changing replication factor of Cassandra cluster

2014-12-29 Thread Robert Coli
On Mon, Dec 29, 2014 at 1:40 PM, Pranay Agarwal <agarwalpran...@gmail.com> wrote: I want to understand the best way to increase/change the replication factor of the cassandra cluster. My priority is consistency and I am probably tolerant of some downtime of the cluster. Is it totally
