Hi,
On Fri, May 17, 2024 at 6:18 PM Jon Haddad wrote:
> I strongly suggest you don't use materialized views at all. There are
> edge cases that in my opinion make them unsuitable for production, both in
> terms of cluster stability as well as data integrity.
>
Oh, there is already an open and…
Gábor AUTH
> On Fri, May 17, 2024 at 8:58 AM Gábor Auth wrote:
>
>> Hi,
>>
>> I know, I know, the materialized view is experimental... :)
>>
>> So, I ran into a strange error. Among others, I have a very small 4-node
>> cluster, with very minimal data (~100 MB in total)
Hi,
I know, I know, the materialized view is experimental... :)
So, I ran into a strange error. Among others, I have a very small 4-node
cluster, with very minimal data (~100 MB in total), the keyspace's
replication factor is 3, and everything works fine... except: if I restart a
node, I get a lot
Thank you Dipan, it makes sense now.
Cheers,
Anurag
On Sun, Jul 16, 2023 at 12:43 AM Dipan Shah wrote:
> Hello Anurag,
>
> In Cassandra, Strong consistency is guaranteed when "R + W > N" where R is
> Read consistency, W is Write consistency and N is the Replication Factor.
Hello Anurag,
In Cassandra, Strong consistency is guaranteed when "R + W > N" where R is
Read consistency, W is Write consistency and N is the Replication Factor.
So in your case, R(2) + W(1) = 3, which is NOT greater than your replication
factor (3), so you will not be able to guarantee strong consistency.
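The "R + W > N" rule above can be sketched as a quick check (the helper function is hypothetical, for illustration only — it is not part of any Cassandra driver API):

```python
def is_strongly_consistent(r: int, w: int, n: int) -> bool:
    """Strong consistency holds when R + W > N: the set of replicas read
    and the set of replicas written must overlap in at least one node."""
    return r + w > n

# The case from this thread: read QUORUM (2 of 3), write ONE, RF = 3.
print(is_strongly_consistent(r=2, w=1, n=3))  # False: 2 + 1 is not > 3

# Raising the write level to QUORUM closes the gap.
print(is_strongly_consistent(r=2, w=2, n=3))  # True: 2 + 2 > 3
```

This also answers the ONE-write/QUORUM-read follow-up later in the thread: 1 + 2 = 3 is still not greater than N = 3.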
thank you Jeff,
it makes more sense now. How about if I write with ONE consistency,
replication factor = 3, and read consistency QUORUM? I am guessing that in
that case I will not get the empty read even if it happens
immediately after the write request. Let me know your thoughts?
Cheers,
Anurag
Consistency level controls when queries acknowledge/succeed
Replication factor is where data lives / how many copies
If you write at consistency ONE and replication factor 3, the query finishes
successfully when the write is durable on one of the 3 copies.
It will get sent to all 3, but it'll only wait for one acknowledgement.
Hello Users,
I am new to Cassandra and trying to understand its architecture. If I
write to ONE node for a particular key and have a replication factor of 3,
will the written key get replicated to the other two nodes? Let me
know if I am thinking incorrectly.
Thanks,
Anurag
Replication Factor
The most likely explanation is that repair failed and you didn't notice.
Or that you didn't actually repair every host / every range.
Which version are you using?
How did you run repair?
On Tue, Oct 12, 2021 at 4:33 AM Isaeed Mohanna
<isa...@xsense.co> wrote:
Hi
request will actually return a correct result?
request will actually return a correct result?

Thanks

From: Bowen Song
Sent: Monday, October 11, 2021 5:13 PM
To: user@cassandra.apache.org
Subject: Re: Trouble After Changing Replication Factor

You have RF=3 and both read & write CL=1, which means you are asking
Cassandra to give up strong consistency in order to gain higher
availability and perhaps slightly faster speed… (read CL + write CL) > RF.
On 10/10/2021 11:55, Isaeed Mohanna wrote:

Hi
We had a cluster with 3 nodes with replication factor 2, and we were using
reads with consistency level ONE.
We recently added a 4th node and changed the replication factor to 3; once this
was done, apps reading from the DB with CL ONE would receive an empty record. Looking
around, I was surprised
If you run a full repair then it should be fine, since all the replicas are
present on all the nodes. If you are using the -pr option then you need to run
it on all the nodes.
On Tue, Oct 27, 2020 at 4:11 PM Fred Al wrote:
> Hello!
> Running Cassandra 2.2.9 with a 4 node cluster with replication
Hello!
Running Cassandra 2.2.9 with a 4 node cluster with replication factor 4.
When running anti-entropy repair is it required to run repair on all 4
nodes or is it sufficient to run it on only one node?
Since all data is replicated on all nodes, IMO only one node would need
to be repaired
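The reasoning above can be modelled in a few lines. This is a toy sketch with contiguous replica placement (an assumption for illustration; real clusters use vnodes and a token allocator), showing why one node's full repair covers everything when RF equals the node count, while -pr must run everywhere:

```python
# Toy token ring: 4 nodes, RF = 4, contiguous replica placement (assumed).
N, RF = 4, 4

def replicas(primary: int) -> set:
    """Nodes holding the token range whose primary owner is `primary`."""
    return {(primary + k) % N for k in range(RF)}

# A full repair on node 0 covers every range that node 0 replicates.
full_repair = {p for p in range(N) if 0 in replicas(p)}
# With -pr, node 0 repairs only its own primary range.
pr_repair = {0}

print(sorted(full_repair))  # [0, 1, 2, 3] -> one full repair covers all ranges
print(sorted(pr_repair))    # [0] -> with -pr, every node must run repair
```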
Sent: Tuesday, May 26, 2020 11:33 PM
To: user@cassandra.apache.org
Subject: Re: any risks with changing replication factor on live production
cluster without downtime and service interruption?
By retry logic, I’m going to guess you are doing some kind of version
consistency trick where you have
…reads to LOCAL_QUORUM until you're done to buffer yourself from that risk.
From: Leena Ghatpande
Reply-To: "user@cassandra.apache.org"
Date: Tuesday, May 26, 2020 at 1:20 PM
To: "user@cassandra.apache.org"
Subject: Re: any risks with changing replication factor on live production
cluster without downtime and service interruption?
From: Leena Ghatpande
Sent: Friday, May 22, 2020 11:51 AM
To: cassandra cassandra
Subject: any risks with changing replication factor on live production cluster
without downtime and service interruption?
We are on Cassandra 3.7 and have a 12 node cluster , 2DC, with 6 nodes
On Fri, May 22, 2020 at 9:51 PM Jeff Jirsa wrote:
> With those consistency levels it’s already possible you don’t see your
> writes, so you’re already probably seeing some of what would happen if you
> went to RF=5 like that - just less common
>
> If you did what you describe you'd have a 40% chance that a LOCAL_ONE read
> hits a replica that hasn't received a given write yet
and thinking of changing
> the replication factor to 5 for each DC.
>
> Our application uses the below consistency level
> read-level: LOCAL_ONE
> write-level: LOCAL_QUORUM
>
> if we change the RF=5 on live cluster, and run full repairs, would we see
> read/write
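The 40% figure works out directly from the replica counts, assuming the read coordinator picks a replica uniformly at random:

```python
# Worst case immediately after a write: with RF = 5, a LOCAL_QUORUM write is
# acknowledged by 3 replicas, so up to 2 of the 5 may not have the data yet.
rf = 5
write_quorum = rf // 2 + 1        # 3 replicas acknowledged
unacked = rf - write_quorum       # up to 2 replicas possibly stale
stale_read_chance = unacked / rf  # a LOCAL_ONE read picks 1 of the 5 replicas
print(stale_read_chance)          # 0.4, i.e. the 40% mentioned above
```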
We are on Cassandra 3.7 and have a 12-node cluster, 2 DCs, with 6 nodes in each
DC. RF=3.
We have around 150M rows across tables.
We are planning to add more nodes to the cluster, and thinking of changing the
replication factor to 5 for each DC.
Our application uses the below consistency level
> I suspect some of the intermediate queries (determining role, etc) happen at
> quorum in 2.2+, but I don’t have time to go read the code and prove it.
This isn’t true. Aside from when using the default superuser, only
CRM::getAllRoles reads at QUORUM (because the resultset would include the
On Fri, Nov 23, 2018 at 5:38 PM Vitali Dyachuk wrote:
>
> We have recently met a problem when we added 60 nodes in 1 region to the
> cluster
> and set an RF=60 for the system_auth ks, following this documentation
> https://docs.datastax.com/en/cql/3.3/cql/cql_using/useUpdateKeyspaceRF.html
>
Attaching the runner log snippet, where we can see that "Rebuilding token
map" took most of the time.
getAllRoles uses QUORUM; I don't know if it is used during login.
I suspect some of the intermediate queries (determining role, etc) happen at
quorum in 2.2+, but I don’t have time to go read the code and prove it.
In any case, RF > 10 per DC is probably excessive.
You also want to crank up the validity times so it uses cached info longer.
--
Jeff Jirsa
No, it's not the cassandra user, and as I understood it all other users log in
with LOCAL_ONE.
On Fri, 23 Nov 2018, 19:30 Jonathan Haddad wrote:
> Any chance you're logging in with the Cassandra user? It uses quorum
> reads.
>
>
> On Fri, Nov 23, 2018 at 11:38 AM Vitali Dyachuk
> wrote:
>
>> Hi,
>> We have recently
Any chance you’re logging in with the Cassandra user? It uses quorum reads.
On Fri, Nov 23, 2018 at 11:38 AM Vitali Dyachuk wrote:
> Hi,
> We have recently met a problem when we added 60 nodes in 1 region to the
> cluster
> and set an RF=60 for the system_auth ks, following this documentation
Hi,
We have recently met a problem when we added 60 nodes in 1 region to the
cluster
and set an RF=60 for the system_auth ks, following this documentation
https://docs.datastax.com/en/cql/3.3/cql/cql_using/useUpdateKeyspaceRF.html
However we've started to see increased login latencies in the
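The increased login latency is consistent with QUORUM-sized auth reads over such a large RF; Cassandra's quorum is a majority of replicas:

```python
def quorum(rf: int) -> int:
    # Cassandra's QUORUM is a majority of replicas: floor(RF / 2) + 1.
    return rf // 2 + 1

print(quorum(60))  # 31 -> a QUORUM auth read over RF=60 waits on 31 replicas
print(quorum(5))   # 3  -> with the commonly recommended RF of 3-5, only 2-3
print(quorum(3))   # 2
```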
…by key, but not for searching. High
availability is a nice giveaway here.
If you end up having only one table in C*, maybe something like Redis would
work for your needs, too.
Some hints from my own experience with it, if you choose to use Cassandra:
Have at least as many racks as the replication factor.
link between the
two racks goes down, but both are otherwise functional - a query at ONE in
either rack would be able to read and write data, but it would diverge
between the two racks for some period of time).
around 1s, then there shouldn’t be an issue.
When I go to set up the database though, I am required to set a replication
factor to a number: 1, 2, 3, etc. So I can't just say "ALL" and have it replicate
to all nodes. Right now, I have a 2-node cluster with replication factor 3.
Will this cause
Dear Everyone,
We are running Cassandra v2.0.15 on our production cluster.
We would like to reduce the replication factor from 3 to 2, but we are not
sure if it is a safe operation. We would like to get some feedback from you
guys.
Has anybody tried to shrink the replication factor?
Does "nodetool cleanup" get rid of the replicated data no longer owned by the node?
This is just a basic question to ask, but it is worth asking.
We changed the replication factor from 2 to 3 in our production cluster. We have 2
data centers.
Is nodetool repair -dcpar from a single node in one data center sufficient
for the whole replication to take effect? Please confirm.
Do I
Regardless, if you are not modifying users frequently (with five you most
likely are not), make sure to turn the permission cache waaay up.
In 2.1 that is just: permissions_validity_in_ms (default is 2000, or 2
seconds). Feel free to set it to 1 day or some such. The corresponding
async update setting is permissions_update_interval_in_ms.
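For reference, the suggested "1 day" expressed in the milliseconds these settings take:

```python
# permissions_validity_in_ms takes milliseconds; "1 day" works out to:
default_ms = 2000                 # the 2.1 default (2 seconds)
one_day_ms = 24 * 60 * 60 * 1000  # 86400000
print(one_day_ms)                 # 86400000
print(one_day_ms // default_ms)   # 43200x longer than the default
```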
Processing response from /xx.xx.xx.116 [SharedPool-Worker-1] | 2017-08-30 10:51:25.015000 | xx.xx.xx.113 | 601824
Request complete | 2017-08-30 10:51:25.014874 | xx.xx.xx.113 | 601874
For that many nodes mixed with vnodes you probably want a lower RF than N
per datacenter. 5 or 7 would be reasonable. The only down side is that auth
queries may take slightly longer as they will often have to go to other
nodes to be resolved, but in practice this is likely not a big deal as the
…2017 at 10:42 AM
To: User <user@cassandra.apache.org>
Subject: Re: system_auth replication factor in Cassandra 2.1

On Wed, Aug 30, 2017 at 6:40 PM, Chuck Reynolds
<creyno...@ancestry.com> wrote:
> How many users do you have (or expect to be found in system_auth.users)?
>
> 5 users.
>
> What are the current RF for system_auth and consistency level you are
> using in cqlsh?
>
> 135 in one DC and 227 in the other.
On Wed, Aug 30, 2017 at 6:20 PM, Chuck Reynolds
wrote:
> So I tried to run a repair with the following on one of the servers.
>
> nodetool repair system_auth -pr --local
>
> After two hours it hadn't finished. I had to kill the repair because of
> another issue and
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Re: system_auth replication factor in Cassandra 2.1
It's a better rule of thumb to use an RF of 3 to 5 per DC and this is what the
docs now suggest:
http://cassandra.apache.org/doc/latest/operating/security.html#authentication
Out o
On Wed, Aug 30, 2017 at 5:50 PM, Chuck Reynolds <creyno...@ancestry.com>
wrote:
> So I’ve read that if you're using authentication in Cassandra 2.1 your
> replication factor should match the number of nodes in your datacenter.
>
> *Is that true?*
…users &
superusers; the link above also has info on this.
Thanks,
Sam
On 30 August 2017 at 16:50, Chuck Reynolds <creyno...@ancestry.com> wrote:
> So I’ve read that if you're using authentication in Cassandra 2.1 your
> replication factor should match the number of nodes in your datacenter.
…To log into a secure cluster, set
the replication factor of the system_auth and dse_security keyspaces to a value
that is greater than 1. In a multi-node cluster, using the default of 1
prevents logging into any node when the node that stores the user data is down.
From: Chuck Reynolds
So I’ve read that if you're using authentication in Cassandra 2.1 your
replication factor should match the number of nodes in your datacenter.
Is that true?
I have a two-datacenter cluster: 135 nodes in datacenter 1 & 227 nodes in an AWS
datacenter.
Why do I want to replicate the system_
> Rather than troubleshoot this further, what I was thinking about doing was:
> - drop the replication factor on our keyspace to two
> - hopefully this would reduce load on these two remaining nodes
> - run repairs/cleanup across the cluster
> - then shoot these two nodes in the 'c' rack
>
Thanks Kurt.
We had one sstable from a cf of ours. I am actually running a repair on
that cf now and then plan to try and join the additional nodes as you
suggest. I deleted the opscenter corrupt sstables as well but will not
bother repairing that before adding capacity.
Been keeping an eye
On 14 Aug. 2017 00:59, "Brian Spindler" wrote:
Do you think with the setup I've described I'd be ok doing that now to
recover this node?
The node died trying to run the scrub; I've restarted it but I'm not sure
it's going to get past a scrub/repair, this is why I
>>> …secondary index build. Hard to say for
>>> sure. ‘nodetool compactionstats’ if you’re able to provide it. The jstack is
>>> probably not necessary; streaming is being marked as failed and it’s
>>> turning itself off. Not sure why streaming is marked as failing, though,
>>> an
>>> …                        0  0  68403  0  0
>>> MiscStage                0  0      0  0  0
>>> AntiEntropySessions      0  0      0  0  0
From: Brian Spindler <brian.spind...@gmail.com>
Reply-To: <user@cassandra.apache.org>
Date: Saturday, August 12, 2017 at 6:34 PM
To: <user@cassandra.apache.org>
Subject: Re: Dropping down replication factor

Thanks for replying Jeff.
Responses below.
On Sat, Aug 12, 2017 at 8:33 PM Jeff Jirsa <jji...@gmail.com> wrote:
> Answers inline
>
> --
> Jeff Jirsa
>
>
>> > On Aug 12, 2017, at 2:58 PM, brian.spind...@gmail.com wrote:
>
_TRACE                   0
MUTATION           2949001
COUNTER_MUTATION         0
BINARY                   0
REQUEST_RESPONSE         0
PAGED_RANGE              0
READ_REPAIR           8571

I can get a jstack if needed.
…like
building a secondary index or similar? A jstack thread dump would be useful, or at
least nodetool tpstats.
>
> Rather than troubleshoot this further, what I was thinking about doing was:
> - drop the replication factor on our keyspace to two
Repair before you do this, or you'll lose data.
about doing was:
- drop the replication factor on our keyspace to two
- hopefully this would reduce load on these two remaining nodes
- run repairs/cleanup across the cluster
- then shoot these two nodes in the 'c' rack
- run repairs/cleanup across the cluster
Would this work with minimal
Great explanation!
For the single partition read, it makes sense to read data from only one
replica.
Thank you so much Ben!
Jun
From: ben.sla...@instaclustr.com
Date: Tue, 20 Sep 2016 05:30:43 +0000
Subject: Re: Question about replica and replication factor
To: wuxiaomi...@hotmail.com
CC: user
>> …in the post shows that the coordinator only
>> contacts/reads data from one replica, and operates read repair for the
>> remaining replicas.
>>
>> Also, how could reads go across all nodes in the cluster?
>>
>> Thanks!
>>
>> Jun

From: ben.sla...@instaclustr.com
Date: Tue, 20 Sep 2016 04:18:59 +0000
Subject: Re: Question about replica and replication factor
To: user@cassandra.apache.org

Each individual read (where a read is a single row or single partition) will
read from one node (ignoring read repairs), as each partition
distributed across all the nodes in your cluster).
Cheers
Ben
On Tue, 20 Sep 2016 at 14:09 Jun Wu <wuxiaomi...@hotmail.com> wrote:
> Hi there,
>
> I have a question about the replica and replication factor.
>
> For example, I have a cluster of 6 nodes in the same data
Hi there,
I have a question about the replica and replication factor.
For example, I have a cluster of 6 nodes in the same data center.
Replication factor RF is set to 3 and the consistency level is default 1.
According to this calculator http://www.ecyrd.com/cassandracalculator
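The calculator's arithmetic for this setup can be reproduced directly (6 nodes, RF = 3, CL = ONE, as stated above):

```python
# With 6 nodes and RF = 3, each partition lives on 3 nodes, so each node
# stores RF / N of the total data; a CL=ONE read touches a single replica.
n_nodes, rf = 6, 3
data_per_node = rf / n_nodes
print(data_per_node)  # 0.5 -> every node holds ~50% of the dataset
```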
> I guess the problem may have been with the initial addition of the
> 10.128.0.20 node, because when I added it in it never synced data I
> guess? It was at around 50 MB when it first came up and transitioned to
> "UN". After it was up, all it had is stuff
> that has been written since it came up. We never delete data ever, so we
> should have zero tombstones.
>
> If I am not mistaken, only two of my nodes actually have all the data,
> 10.128.0.3 and 10.142.0.14, since they agree on the data amount. 10.142.0.13
> is almost a GB lower, and then of course 10.128.0.20 which is missing
> over 5 GB of data. I tried running nodetool repair -local on both DCs and it
> didn't fix either one.
>
> Am I running into a bug of some kind?
>
> On Tue, May 24, 2016 at 4:06 PM Bhuvan Rawal <bhu1ra...@gmail.com> wrote:
Hi Luke,

You mentioned that the replication factor was increased from 1 to 2. In that
case, was the node bearing IP 10.128.0.20 carrying around 3 GB of data earlier?

You can run nodetool repair with the option -local to initiate repair of the local
datacenter for gce-us-central1.

Also you may suspect that if a lot
I am running 3.0.5 with 2 nodes in two DCs, gce-us-central1 and
gce-us-east1. I increased the replication factor of gce-us-central1 from 1
to 2. Then I ran 'nodetool repair -dc gce-us-central1'. The "Owns" for
the node switched to 100% as it should, but the Load showed that it didn't
actually sync the data. I then ran a full
…for a node failure, so that doesn't really change your
availability model).

On Thu, Nov 5, 2015 at 8:01 AM Yulian Oifa <oifa.yul...@gmail.com> wrote:
> Hello to all.
> I am planning to change replication factor from 1 to 3.
> Will it cause data read errors in time of nodes repair?
>
> Best regards
> Yulian Oifa
Hello,
If the current CL = ONE, be careful changing the replication factor on
production: while the data is being redistributed, any of the 3 nodes may be
queried ==> so data read errors!

From: Yulian Oifa [mailto:oifa.yul...@gmail.com]
Sent: Thursday, 5 November 2015 16:02
To: user@cassandra.apache.org
Hello to all.
I am planning to change replication factor from 1 to 3.
Will it cause data read errors in time of nodes repair?
Best regards
Yulian Oifa
> …capture the tokens of the dead node. Any way we could make sure the
> replication of 3 is maintained?
>
> On Sat, Oct 31, 2015, 11:14 Surbhi Gupta <surbhi.gupt...@gmail.com>
> wrote:
hi;
would unsafeassassinating a dead node maintain the replication factor
like the decommission process or removenode process?
thanks
>> You have to do a few things before unsafe assassination. First run
>> nodetool decommission if the node is up, and wait till streaming happens.
>> You can check if the streaming is completed by nodetool netstats. If
>> streaming is completed you can do unsafe assassination.
>>
>> To answer your question, unsafe assassination will not take care of the
>> replication factor.
>> It is like forcing a node out from the cluster.
>>
>> Hope this helps.
>>
>> Sent from my iPhone
>>
>> > On Oct 31, 2015, at 5:12 AM, sai krishnam raju potturi <
>> > pskraj...@gmail.com> wrote:
>> >
>> > hi;
…with 2 datacenters, 48 nodes in each DC.
>> For the system_auth keyspace, what should be the ideal replication_factor
>> set?
>>
>> We tried setting the replication factor equal to the number of nodes in a
>> datacenter, and the repair for the system_auth keyspace took really long.
>> Your suggestions would be of great help.
>>
>
> More than 1 and a lot less than 48.
>
> =Rob
>
>
thanks guys for the advice. We were running parallel repairs earlier, with
cassandra version 2.0.14. As pointed out, having set the replication factor
really high for system_auth was causing the repair to take really long.
thanks
Sai
On Fri, Oct 16, 2015 at 9:56 AM, Victor Chen <victor.
On Thu, Oct 15, 2015 at 10:24 AM, sai krishnam raju potturi <
pskraj...@gmail.com> wrote:
> we are deploying a new cluster with 2 datacenters, 48 nodes in each DC.
> For the system_auth keyspace, what should be the ideal replication_factor
> set?
>
> We tried setting t
hi;
we are deploying a new cluster with 2 datacenters, 48 nodes in each DC.
For the system_auth keyspace, what should be the ideal replication_factor
set?
We tried setting the replication factor equal to the number of nodes in a
datacenter, and the repair for the system_auth keyspace took really long.
I want to increase the replication factor in my C* 2.1.3 cluster (RF change
from 2 to 3 for some keyspaces).
I read the doc on updating the replication factor:
http://www.datastax.com/documentation/cql/3.1/cql/cql_using/update_ks_rf_t.html
.
Step two is to run nodetool repair. But as I know, nodetool
Thanks Robert. Also, I have seen the node-repair operation fail for some
nodes. What are the chances of the data getting corrupted if node-repair
fails? I am okay with data availability issues for some time as long as I
don't lose or corrupt data. Also, is there a way to restore the graph
without
On Tue, Jan 6, 2015 at 4:40 PM, Pranay Agarwal agarwalpran...@gmail.com
wrote:
Thanks Robert. Also, I have seen the node-repair operation fail for
some nodes. What are the chances of the data getting corrupted if node-repair
fails?
If repair does not complete before gc_grace_seconds, chance
wrote:
Hi All,
I have a 20-node cassandra cluster with 500 GB of data and a replication
factor of 1. I increased the replication factor to 3 and ran nodetool
repair on each node one by one as the docs say. But it takes hours for 1
node to finish repair. Is that normal or am I doing something
On Mon, Dec 29, 2014 at 1:40 PM, Pranay Agarwal agarwalpran...@gmail.com
wrote:
I want to understand what is the best way to increase/change the replication
factor of the cassandra cluster. My priority is consistency, and I
am tolerant of some downtime of the cluster. Is it totally
1 - 100 of 269 matches