On Dec 9, 2010, at 18:50, Tyler Hobbs wrote:
If you switch your writes to CL ONE when a failure occurs, you might as well
use ONE for all writes. ONE and QUORUM behave the same when all nodes are
working correctly.
That's finally a precise statement! :) I was wondering what ...
Hi!
I have 3 servers running (0.7rc1) with a replication_factor of 2 and use QUORUM
for writes. But when I shut down one of them, UnavailableExceptions are thrown.
Why is that? Isn't the point of quorum and a fault-tolerant DB that it
continues with the remaining 2 nodes and redistributes the data?
Hi,
The UnavailableExceptions are thrown because a quorum over 2 replicas
needs at least 2 nodes to be alive (a quorum over 3 replicas likewise needs at least 2).
The data won't be automatically redistributed to other nodes.
Thibaut
On Thu, Dec 9, 2010 at 4:40 PM, Timo Nentwig timo.nent...@toptarif.de wrote:
Hi!
Quorum is really only useful when RF > 2, since for a quorum to
succeed, RF/2 + 1 replicas must be available.
This means for RF = 2, consistency levels QUORUM and ALL yield the same result.
/d
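To make the arithmetic concrete, here is a quick illustrative sketch in Python (the quorum size is floor(RF/2) + 1, so a QUORUM operation survives RF minus that many dead replicas):

    # Quorum size is floor(RF/2) + 1 replicas, so a QUORUM read/write
    # tolerates RF - quorum(RF) dead replicas.
    def quorum(rf):
        return rf // 2 + 1

    for rf in (1, 2, 3, 5):
        print(f"RF={rf}: quorum={quorum(rf)}, survives {rf - quorum(rf)} dead replica(s)")

    # RF=1: quorum=1, survives 0 dead replica(s)
    # RF=2: quorum=2, survives 0 dead replica(s)  <- QUORUM degenerates to ALL
    # RF=3: quorum=2, survives 1 dead replica(s)
    # RF=5: quorum=3, survives 2 dead replica(s)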
On Dec 9, 2010, at 16:50, Daniel Lundin wrote:
Quorum is really only useful when RF > 2, since for a quorum to
succeed, RF/2 + 1 replicas must be available.
2/2+1==2 and I killed 1 of 3, so... don't get it.
This means for RF = 2, consistency levels QUORUM and ALL yield the same
result.
It's 2 out of the number of replicas, not the number of nodes. At RF=2, you have
2 replicas. And since quorum is also 2 with that replication factor, you cannot lose
a node, otherwise some queries will fail with an UnavailableException.
Again, this is not related to the total number of nodes. Even ...
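A toy simulation makes this visible (a rough sketch assuming SimpleStrategy-like placement, where the replicas for token range t sit on nodes t % n and (t+1) % n; the real partitioner is more involved):

    # RF=2: each range lives on two adjacent nodes. Kill one node and
    # count the ranges that can no longer assemble a quorum (2 of 2).
    def quorum_failures(n, rf=2, dead_node=0):
        quorum = rf // 2 + 1
        failed = []
        for t in range(n):
            live = len({t % n, (t + 1) % n} - {dead_node})
            if live < quorum:
                failed.append(t)
        return failed

    print(quorum_failures(3))    # [0, 2] -> 2 of 3 ranges throw UnavailableException
    print(quorum_failures(100))  # [0, 99] -> a bigger cluster still has failing ranges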
In other words, if you want to use QUORUM, you need to set RF=3.
(I know because I had exactly the same problem.)
On Thu, Dec 9, 2010 at 6:05 PM, Sylvain Lebresne sylv...@yakaz.com wrote:
It's 2 out of the number of replicas, not the number of nodes. At RF=2, you
have 2 replicas. And since ...
On Dec 9, 2010, at 17:39, David Boxenhorn wrote:
In other words, if you want to use QUORUM, you need to set RF=3.
(I know because I had exactly the same problem.)
I naively assume that if I kill either node that holds N1 (i.e. node 1 or 3),
N1 will still remain on another node. Only if both fail do I actually lose
data. But apparently this is not how it works...
If that is what you want, use CL=ONE
Sure, the data that N1 holds is also on another node and you won't
lose it by only losing one node.
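In sketch form (same caveats as above: a toy model, node names are placeholders), the durability-versus-availability split looks like this:

    # N1's replicas live on nodes 1 and 3 (RF=2). Losing node 3 keeps the
    # data durable and serveable at CL ONE, but a quorum of 2 replicas
    # can no longer be assembled.
    replicas_of_N1 = {"node1", "node3"}
    down = {"node3"}

    live = replicas_of_N1 - down
    print("data lost:", len(live) == 0)   # False: node1 still holds N1
    print("ONE ok:   ", len(live) >= 1)   # True
    print("QUORUM ok:", len(live) >= 2)   # False -> UnavailableException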
If you switch your writes to CL ONE when a failure occurs, you might as well
use ONE for all writes. ONE and QUORUM behave the same when all nodes are
working correctly.
- Tyler
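Why that holds, in sketch form (a simplified model, not the actual write path): the coordinator sends every write to all RF replicas regardless of consistency level; the level only sets how many acknowledgements it waits for before answering the client.

    # CL changes the number of acks awaited, not the number of replicas written.
    RF = 2
    ACKS_NEEDED = {"ONE": 1, "QUORUM": RF // 2 + 1, "ALL": RF}

    def write_succeeds(cl, live_replicas):
        # the write is attempted on all RF replicas; only live ones can ack
        return min(live_replicas, RF) >= ACKS_NEEDED[cl]

    print(write_succeeds("ONE", 2), write_succeeds("QUORUM", 2))  # True True: same when healthy
    print(write_succeeds("ONE", 1), write_succeeds("QUORUM", 1))  # True False: they differ only under failure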
And my application would fall back to ONE. Quorum writes will also fail, so I
would also use ONE so that the app stays up. What would I have to do to make the
data redistribute when the broken node is up again? Simply call nodetool
repair on it?
There are 3 mechanisms for that:
- hinted handoff
- read repair
- anti-entropy repair (nodetool repair)
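The fall-back-to-ONE pattern described above might look like this with the pycassa client (a rough sketch; the keyspace, column family, and host names are placeholders, and whether to degrade consistency automatically is an application decision):

    import pycassa
    from pycassa.cassandra.ttypes import ConsistencyLevel, UnavailableException

    pool = pycassa.ConnectionPool('MyKeyspace', server_list=['host1:9160'])
    cf = pycassa.ColumnFamily(pool, 'MyColumnFamily')

    def resilient_insert(key, columns):
        try:
            # normal path: wait for a quorum of replica acks
            cf.insert(key, columns, write_consistency_level=ConsistencyLevel.QUORUM)
        except UnavailableException:
            # not enough replicas alive for QUORUM; degrade to ONE so the app stays up
            cf.insert(key, columns, write_consistency_level=ConsistencyLevel.ONE)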