Re: HUnavailableException: : May not be enough replicas present to handle consistency level.

2011-09-02 Thread Oleg Tsvinev
Yes, I think I get it now: "quorum of replicas" != "quorum of nodes",
and I don't think a quorum of nodes is ever defined. Thank you,
Konstantin.

Now, I believe I need to change my cluster to store data on the two
remaining nodes in DC1, keeping 3 nodes in DC2. I believe nodetool
removetoken is what I need to use. Is there anything else I can or should do?

On Fri, Sep 2, 2011 at 1:56 PM, Konstantin Naryshkin wrote:
> I think that Oleg may have misunderstood how replicas are selected. If you 
> have 3 nodes in your cluster and an RF of 2, Cassandra first selects which 
> two nodes out of the 3 will get the data, and then, and only then, does it 
> write it out. The selection is based on the row key, the token of the node, 
> and your choice of partitioner. This means that Cassandra does not need to 
> store which node is responsible for a given row; that information can be 
> recalculated whenever it is needed.
>
> The error that you are getting is because, while you may have 2 nodes up, 
> those are not the nodes that Cassandra will use to store the data.
>
> - Original Message -
> From: "Nate McCall" 
> To: hector-us...@googlegroups.com
> Cc: "Cassandra Users" 
> Sent: Friday, September 2, 2011 4:44:01 PM
> Subject: Re: HUnavailableException: : May not be enough replicas present to 
> handle consistency level.
>
> In your options, you have configured 2 replicas for each data center:
> Options: [DC2:2, DC1:2]
>
> If one of those replicas is down, then LOCAL_QUORUM will fail as there
> is only one replica left 'locally.'
>
>
> On Fri, Sep 2, 2011 at 3:35 PM, Oleg Tsvinev  wrote:
>> from http://www.datastax.com/docs/0.8/consistency/index:
>>
>> <quorum = replication_factor / 2 + 1 with any resulting fractions rounded down.>
>>
>> I have RF=2, so the majority of replicas is 2/2+1 = 2, which I still
>> have after the 3rd node goes down?
>>
>> On Fri, Sep 2, 2011 at 1:22 PM, Nate McCall  wrote:
>>> It looks like you only have 2 replicas configured in each data center?
>>>
>>> If so, LOCAL_QUORUM cannot be achieved with a host down same as with
>>> QUORUM on RF=2 in a single DC cluster.
>>>
>>> On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev  wrote:
>>>> I believe I don't quite understand semantics of this exception:
>>>>
>>>> me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
>>>> be enough replicas present to handle consistency level.
>>>>
>>>> Does it mean there *might be* enough?
>>>> Does it mean there *is not* enough?
>>>>
>>>> My case is as follows: I have 3 nodes, with the keyspace configured
>>>> like this:
>>>>
>>>> Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
>>>> Durable Writes: true
>>>> Options: [DC2:2, DC1:2]
>>>>
>>>> Hector can only connect to nodes in DC1 and is configured to neither
>>>> see nor connect to nodes in DC2; replication between datacenters DC1
>>>> and DC2 is handled asynchronously by Cassandra itself. Each of the 6
>>>> total nodes can see the remaining 5.
>>>>
>>>> Inserts at CL LOCAL_QUORUM work fine when all 3 nodes are up.
>>>> However, this morning one node went down and I started seeing the
>>>> HUnavailableException: : May not be enough replicas present to handle
>>>> consistency level.
>>>>
>>>> I believed if I have 3 nodes and one goes down, two remaining nodes
>>>> are sufficient for my configuration.
>>>>
>>>> Please help me to understand what's going on.
>>>>
>>>
>>
>


Re: HUnavailableException: : May not be enough replicas present to handle consistency level.

2011-09-02 Thread Oleg Tsvinev
And now that I have one node down with no chance of bringing it back
anytime soon, can I still change the RF to 3 and restore the
functionality of my cluster? Should I run 'nodetool repair', or will a
simple keyspace update suffice?

On Fri, Sep 2, 2011 at 1:55 PM, Nate McCall  wrote:
> Yes - you would need at least 3 replicas per data center to use
> LOCAL_QUORUM and survive a node failure.
>
> On Fri, Sep 2, 2011 at 3:51 PM, Oleg Tsvinev  wrote:
>> Do you mean I need to configure 3 replicas in each DC and keep using
>> LOCAL_QUORUM? In which case, if I'm following your logic, even one of
>> the 3 goes down I'll still have 2 to ensure LOCAL_QUORUM succeeds?
>>
>> On Fri, Sep 2, 2011 at 1:44 PM, Nate McCall  wrote:
>>> In your options, you have configured 2 replicas for each data center:
>>> Options: [DC2:2, DC1:2]
>>>
>>> If one of those replicas is down, then LOCAL_QUORUM will fail as there
>>> is only one replica left 'locally.'
>>>
>>>
>>> On Fri, Sep 2, 2011 at 3:35 PM, Oleg Tsvinev  wrote:
>>>> from http://www.datastax.com/docs/0.8/consistency/index:
>>>>
>>>> >>> 2 + 1 with any resulting fractions rounded down.>
>>>>
>>>> I have RF=2, so majority of replicas is 2/2+1=2 which I have after 3rd
>>>> node goes down?
>>>>
>>>> On Fri, Sep 2, 2011 at 1:22 PM, Nate McCall  wrote:
>>>>> It looks like you only have 2 replicas configured in each data center?
>>>>>
>>>>> If so, LOCAL_QUORUM cannot be achieved with a host down same as with
>>>>> QUORUM on RF=2 in a single DC cluster.
>>>>>
>>>>> On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev  
>>>>> wrote:
>>>>>> I believe I don't quite understand semantics of this exception:
>>>>>>
>>>>>> me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
>>>>>> be enough replicas present to handle consistency level.
>>>>>>
>>>>>> Does it mean there *might be* enough?
>>>>>> Does it mean there *is not* enough?
>>>>>>
>>>>>> My case is as follows: I have 3 nodes, with the keyspace configured
>>>>>> like this:
>>>>>>
>>>>>> Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
>>>>>> Durable Writes: true
>>>>>> Options: [DC2:2, DC1:2]
>>>>>>
>>>>>> Hector can only connect to nodes in DC1 and is configured to neither
>>>>>> see nor connect to nodes in DC2; replication between datacenters DC1
>>>>>> and DC2 is handled asynchronously by Cassandra itself. Each of the 6
>>>>>> total nodes can see the remaining 5.
>>>>>>
>>>>>> Inserts at CL LOCAL_QUORUM work fine when all 3 nodes are up.
>>>>>> However, this morning one node went down and I started seeing the
>>>>>> HUnavailableException: : May not be enough replicas present to handle
>>>>>> consistency level.
>>>>>>
>>>>>> I believed if I have 3 nodes and one goes down, two remaining nodes
>>>>>> are sufficient for my configuration.
>>>>>>
>>>>>> Please help me to understand what's going on.
>>>>>>
>>>>>
>>>>
>>>
>>
>


Re: HUnavailableException: : May not be enough replicas present to handle consistency level.

2011-09-02 Thread Konstantin Naryshkin
I think that Oleg may have misunderstood how replicas are selected. If you have 
3 nodes in your cluster and an RF of 2, Cassandra first selects which two nodes 
out of the 3 will get the data, and then, and only then, does it write it out. 
The selection is based on the row key, the token of the node, and your choice 
of partitioner. This means that Cassandra does not need to store which node is 
responsible for a given row; that information can be recalculated whenever it 
is needed.

The error that you are getting is because, while you may have 2 nodes up, those 
are not the nodes that Cassandra will use to store the data.
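
A minimal sketch of the placement described above, with hypothetical node names and an MD5 stand-in for the partitioner (not Cassandra's actual code): the replica set for a key is a pure function of the row key, the node tokens, and the partitioner, so it never has to be stored.

```python
import bisect
import hashlib

# Hypothetical 3-node ring; the tokens are illustrative only.
RING_SIZE = 2 ** 127
ring = sorted([(0, "node-a"),
               (RING_SIZE // 3, "node-b"),
               (2 * RING_SIZE // 3, "node-c")])
tokens = [t for t, _ in ring]

def key_token(row_key: bytes) -> int:
    # Stand-in for RandomPartitioner: hash the row key onto the token space.
    return int(hashlib.md5(row_key).hexdigest(), 16) % RING_SIZE

def replicas(row_key: bytes, rf: int) -> list[str]:
    # Walk clockwise from the key's token, taking the next rf distinct nodes.
    start = bisect.bisect_right(tokens, key_token(row_key)) % len(ring)
    return [ring[(start + i) % len(ring)][1] for i in range(rf)]

# With RF=2, each key maps to a fixed pair of nodes; the third node holds no
# replica for that key, so its being up does not help a quorum write succeed.
print(replicas(b"some-row-key", 2))
```

Because this recomputation happens on every request, Cassandra knows exactly which nodes hold the replicas and can report "unavailable" immediately when too few of them are up.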

- Original Message -
From: "Nate McCall" 
To: hector-us...@googlegroups.com
Cc: "Cassandra Users" 
Sent: Friday, September 2, 2011 4:44:01 PM
Subject: Re: HUnavailableException: : May not be enough replicas present to 
handle consistency level.

In your options, you have configured 2 replicas for each data center:
Options: [DC2:2, DC1:2]

If one of those replicas is down, then LOCAL_QUORUM will fail as there
is only one replica left 'locally.'


On Fri, Sep 2, 2011 at 3:35 PM, Oleg Tsvinev  wrote:
> from http://www.datastax.com/docs/0.8/consistency/index:
>
> <quorum = replication_factor / 2 + 1 with any resulting fractions rounded down.>
>
> I have RF=2, so the majority of replicas is 2/2+1 = 2, which I still
> have after the 3rd node goes down?
>
> On Fri, Sep 2, 2011 at 1:22 PM, Nate McCall  wrote:
>> It looks like you only have 2 replicas configured in each data center?
>>
>> If so, LOCAL_QUORUM cannot be achieved with a host down same as with
>> QUORUM on RF=2 in a single DC cluster.
>>
>> On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev  wrote:
>>> I believe I don't quite understand semantics of this exception:
>>>
>>> me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
>>> be enough replicas present to handle consistency level.
>>>
>>> Does it mean there *might be* enough?
>>> Does it mean there *is not* enough?
>>>
>>> My case is as follows: I have 3 nodes, with the keyspace configured
>>> like this:
>>>
>>> Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
>>> Durable Writes: true
>>> Options: [DC2:2, DC1:2]
>>>
>>> Hector can only connect to nodes in DC1 and is configured to neither
>>> see nor connect to nodes in DC2; replication between datacenters DC1
>>> and DC2 is handled asynchronously by Cassandra itself. Each of the 6
>>> total nodes can see the remaining 5.
>>>
>>> Inserts at CL LOCAL_QUORUM work fine when all 3 nodes are up.
>>> However, this morning one node went down and I started seeing the
>>> HUnavailableException: : May not be enough replicas present to handle
>>> consistency level.
>>>
>>> I believed if I have 3 nodes and one goes down, two remaining nodes
>>> are sufficient for my configuration.
>>>
>>> Please help me to understand what's going on.
>>>
>>
>


Re: HUnavailableException: : May not be enough replicas present to handle consistency level.

2011-09-02 Thread Nate McCall
Yes - you would need at least 3 replicas per data center to use
LOCAL_QUORUM and survive a node failure.
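
A sketch of the arithmetic behind that rule, using the documented formula (quorum = replication_factor / 2 + 1, with fractions rounded down): the number of replica failures a quorum can survive is RF minus the quorum size.

```python
def quorum(rf: int) -> int:
    # Majority of the rf replicas: rf / 2 + 1, with fractions rounded down.
    return rf // 2 + 1

for rf in (2, 3, 5):
    q = quorum(rf)
    print(f"RF={rf}: quorum={q}, tolerates {rf - q} replica(s) down")

# RF=2 needs both replicas up (tolerates 0 down), while RF=3 needs only
# 2 of 3, so LOCAL_QUORUM keeps working with one node down in that DC.
```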

On Fri, Sep 2, 2011 at 3:51 PM, Oleg Tsvinev  wrote:
> Do you mean I need to configure 3 replicas in each DC and keep using
> LOCAL_QUORUM? In which case, if I'm following your logic, even one of
> the 3 goes down I'll still have 2 to ensure LOCAL_QUORUM succeeds?
>
> On Fri, Sep 2, 2011 at 1:44 PM, Nate McCall  wrote:
>> In your options, you have configured 2 replicas for each data center:
>> Options: [DC2:2, DC1:2]
>>
>> If one of those replicas is down, then LOCAL_QUORUM will fail as there
>> is only one replica left 'locally.'
>>
>>
>> On Fri, Sep 2, 2011 at 3:35 PM, Oleg Tsvinev  wrote:
>>> from http://www.datastax.com/docs/0.8/consistency/index:
>>>
>>> <quorum = replication_factor / 2 + 1 with any resulting fractions rounded down.>
>>>
>>> I have RF=2, so the majority of replicas is 2/2+1 = 2, which I still
>>> have after the 3rd node goes down?
>>>
>>> On Fri, Sep 2, 2011 at 1:22 PM, Nate McCall  wrote:
>>>> It looks like you only have 2 replicas configured in each data center?
>>>>
>>>> If so, LOCAL_QUORUM cannot be achieved with a host down same as with
>>>> QUORUM on RF=2 in a single DC cluster.
>>>>
>>>> On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev  
>>>> wrote:
>>>>> I believe I don't quite understand semantics of this exception:
>>>>>
>>>>> me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
>>>>> be enough replicas present to handle consistency level.
>>>>>
>>>>> Does it mean there *might be* enough?
>>>>> Does it mean there *is not* enough?
>>>>>
>>>>> My case is as follows: I have 3 nodes, with the keyspace configured
>>>>> like this:
>>>>>
>>>>> Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
>>>>> Durable Writes: true
>>>>> Options: [DC2:2, DC1:2]
>>>>>
>>>>> Hector can only connect to nodes in DC1 and is configured to neither
>>>>> see nor connect to nodes in DC2; replication between datacenters DC1
>>>>> and DC2 is handled asynchronously by Cassandra itself. Each of the 6
>>>>> total nodes can see the remaining 5.
>>>>>
>>>>> Inserts at CL LOCAL_QUORUM work fine when all 3 nodes are up.
>>>>> However, this morning one node went down and I started seeing the
>>>>> HUnavailableException: : May not be enough replicas present to handle
>>>>> consistency level.
>>>>>
>>>>> I believed if I have 3 nodes and one goes down, two remaining nodes
>>>>> are sufficient for my configuration.
>>>>>
>>>>> Please help me to understand what's going on.
>>>>>
>>>>
>>>
>>
>


Re: HUnavailableException: : May not be enough replicas present to handle consistency level.

2011-09-02 Thread Oleg Tsvinev
Do you mean I need to configure 3 replicas in each DC and keep using
LOCAL_QUORUM? In that case, if I'm following your logic, even if one of
the 3 goes down I'll still have 2, ensuring LOCAL_QUORUM succeeds?

On Fri, Sep 2, 2011 at 1:44 PM, Nate McCall  wrote:
> In your options, you have configured 2 replicas for each data center:
> Options: [DC2:2, DC1:2]
>
> If one of those replicas is down, then LOCAL_QUORUM will fail as there
> is only one replica left 'locally.'
>
>
> On Fri, Sep 2, 2011 at 3:35 PM, Oleg Tsvinev  wrote:
>> from http://www.datastax.com/docs/0.8/consistency/index:
>>
>> <quorum = replication_factor / 2 + 1 with any resulting fractions rounded down.>
>>
>> I have RF=2, so the majority of replicas is 2/2+1 = 2, which I still
>> have after the 3rd node goes down?
>>
>> On Fri, Sep 2, 2011 at 1:22 PM, Nate McCall  wrote:
>>> It looks like you only have 2 replicas configured in each data center?
>>>
>>> If so, LOCAL_QUORUM cannot be achieved with a host down same as with
>>> QUORUM on RF=2 in a single DC cluster.
>>>
>>> On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev  wrote:
>>>> I believe I don't quite understand semantics of this exception:
>>>>
>>>> me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
>>>> be enough replicas present to handle consistency level.
>>>>
>>>> Does it mean there *might be* enough?
>>>> Does it mean there *is not* enough?
>>>>
>>>> My case is as follows: I have 3 nodes, with the keyspace configured
>>>> like this:
>>>>
>>>> Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
>>>> Durable Writes: true
>>>> Options: [DC2:2, DC1:2]
>>>>
>>>> Hector can only connect to nodes in DC1 and is configured to neither
>>>> see nor connect to nodes in DC2; replication between datacenters DC1
>>>> and DC2 is handled asynchronously by Cassandra itself. Each of the 6
>>>> total nodes can see the remaining 5.
>>>>
>>>> Inserts at CL LOCAL_QUORUM work fine when all 3 nodes are up.
>>>> However, this morning one node went down and I started seeing the
>>>> HUnavailableException: : May not be enough replicas present to handle
>>>> consistency level.
>>>>
>>>> I believed if I have 3 nodes and one goes down, two remaining nodes
>>>> are sufficient for my configuration.
>>>>
>>>> Please help me to understand what's going on.
>>>>
>>>
>>
>


Re: HUnavailableException: : May not be enough replicas present to handle consistency level.

2011-09-02 Thread Nate McCall
In your options, you have configured 2 replicas for each data center:
Options: [DC2:2, DC1:2]

If one of those replicas is down, then LOCAL_QUORUM will fail as there
is only one replica left 'locally.'


On Fri, Sep 2, 2011 at 3:35 PM, Oleg Tsvinev  wrote:
> from http://www.datastax.com/docs/0.8/consistency/index:
>
> <quorum = replication_factor / 2 + 1 with any resulting fractions rounded down.>
>
> I have RF=2, so the majority of replicas is 2/2+1 = 2, which I still
> have after the 3rd node goes down?
>
> On Fri, Sep 2, 2011 at 1:22 PM, Nate McCall  wrote:
>> It looks like you only have 2 replicas configured in each data center?
>>
>> If so, LOCAL_QUORUM cannot be achieved with a host down same as with
>> QUORUM on RF=2 in a single DC cluster.
>>
>> On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev  wrote:
>>> I believe I don't quite understand semantics of this exception:
>>>
>>> me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
>>> be enough replicas present to handle consistency level.
>>>
>>> Does it mean there *might be* enough?
>>> Does it mean there *is not* enough?
>>>
>>> My case is as follows: I have 3 nodes, with the keyspace configured
>>> like this:
>>>
>>> Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
>>> Durable Writes: true
>>> Options: [DC2:2, DC1:2]
>>>
>>> Hector can only connect to nodes in DC1 and is configured to neither
>>> see nor connect to nodes in DC2; replication between datacenters DC1
>>> and DC2 is handled asynchronously by Cassandra itself. Each of the 6
>>> total nodes can see the remaining 5.
>>>
>>> Inserts at CL LOCAL_QUORUM work fine when all 3 nodes are up.
>>> However, this morning one node went down and I started seeing the
>>> HUnavailableException: : May not be enough replicas present to handle
>>> consistency level.
>>>
>>> I believed if I have 3 nodes and one goes down, two remaining nodes
>>> are sufficient for my configuration.
>>>
>>> Please help me to understand what's going on.
>>>
>>
>


Re: HUnavailableException: : May not be enough replicas present to handle consistency level.

2011-09-02 Thread Oleg Tsvinev
from http://www.datastax.com/docs/0.8/consistency/index:

<quorum = replication_factor / 2 + 1 with any resulting fractions rounded down.>

I have RF=2, so the majority of replicas is 2/2+1 = 2, which I still
have after the 3rd node goes down?
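
The catch, sketched here with hypothetical node names: the 2 live nodes only help if they are the 2 nodes that actually hold the key's replicas. A quorum counts live replicas, not live nodes.

```python
def quorum_write_succeeds(replica_nodes, live_nodes, rf):
    # A (LOCAL_)QUORUM write needs a majority of the rf replicas to be up.
    needed = rf // 2 + 1
    return len(set(replica_nodes) & set(live_nodes)) >= needed

# Suppose the key's two replicas are placed on node-a and node-b.
replicas_for_key = ["node-a", "node-b"]

# node-c down: both replicas are still up, so the write succeeds.
print(quorum_write_succeeds(replicas_for_key, {"node-a", "node-b"}, rf=2))
# node-b down: 2 of 3 nodes are live, but only 1 of the 2 replicas is, so
# the quorum of 2 cannot be met -> UnavailableException.
print(quorum_write_succeeds(replicas_for_key, {"node-a", "node-c"}, rf=2))
```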

On Fri, Sep 2, 2011 at 1:22 PM, Nate McCall  wrote:
> It looks like you only have 2 replicas configured in each data center?
>
> If so, LOCAL_QUORUM cannot be achieved with a host down same as with
> QUORUM on RF=2 in a single DC cluster.
>
> On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev  wrote:
>> I believe I don't quite understand semantics of this exception:
>>
>> me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
>> be enough replicas present to handle consistency level.
>>
>> Does it mean there *might be* enough?
>> Does it mean there *is not* enough?
>>
>> My case is as follows: I have 3 nodes, with the keyspace configured
>> like this:
>>
>> Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
>> Durable Writes: true
>> Options: [DC2:2, DC1:2]
>>
>> Hector can only connect to nodes in DC1 and is configured to neither
>> see nor connect to nodes in DC2; replication between datacenters DC1
>> and DC2 is handled asynchronously by Cassandra itself. Each of the 6
>> total nodes can see the remaining 5.
>>
>> Inserts at CL LOCAL_QUORUM work fine when all 3 nodes are up.
>> However, this morning one node went down and I started seeing the
>> HUnavailableException: : May not be enough replicas present to handle
>> consistency level.
>>
>> I believed if I have 3 nodes and one goes down, two remaining nodes
>> are sufficient for my configuration.
>>
>> Please help me to understand what's going on.
>>
>


Re: HUnavailableException: : May not be enough replicas present to handle consistency level.

2011-09-02 Thread Oleg Tsvinev
Well, this is the part I don't understand, then. I thought that if I
configure 2 replicas on 3 nodes and one of the 3 nodes goes down, I'd
still have 2 nodes to store the 2 replicas. Is my logic flawed somewhere?

On Fri, Sep 2, 2011 at 1:22 PM, Nate McCall  wrote:
> It looks like you only have 2 replicas configured in each data center?
>
> If so, LOCAL_QUORUM cannot be achieved with a host down same as with
> QUORUM on RF=2 in a single DC cluster.
>
> On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev  wrote:
>> I believe I don't quite understand semantics of this exception:
>>
>> me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
>> be enough replicas present to handle consistency level.
>>
>> Does it mean there *might be* enough?
>> Does it mean there *is not* enough?
>>
>> My case is as follows: I have 3 nodes, with the keyspace configured
>> like this:
>>
>> Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
>> Durable Writes: true
>> Options: [DC2:2, DC1:2]
>>
>> Hector can only connect to nodes in DC1 and is configured to neither
>> see nor connect to nodes in DC2; replication between datacenters DC1
>> and DC2 is handled asynchronously by Cassandra itself. Each of the 6
>> total nodes can see the remaining 5.
>>
>> Inserts at CL LOCAL_QUORUM work fine when all 3 nodes are up.
>> However, this morning one node went down and I started seeing the
>> HUnavailableException: : May not be enough replicas present to handle
>> consistency level.
>>
>> I believed if I have 3 nodes and one goes down, two remaining nodes
>> are sufficient for my configuration.
>>
>> Please help me to understand what's going on.
>>
>


Re: HUnavailableException: : May not be enough replicas present to handle consistency level.

2011-09-02 Thread Nate McCall
It looks like you only have 2 replicas configured in each data center?

If so, LOCAL_QUORUM cannot be achieved with a host down, the same as
with QUORUM at RF=2 in a single-DC cluster.

On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev  wrote:
> I believe I don't quite understand semantics of this exception:
>
> me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
> be enough replicas present to handle consistency level.
>
> Does it mean there *might be* enough?
> Does it mean there *is not* enough?
>
> My case is as follows: I have 3 nodes, with the keyspace configured
> like this:
>
> Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
> Durable Writes: true
> Options: [DC2:2, DC1:2]
>
> Hector can only connect to nodes in DC1 and is configured to neither
> see nor connect to nodes in DC2; replication between datacenters DC1
> and DC2 is handled asynchronously by Cassandra itself. Each of the 6
> total nodes can see the remaining 5.
>
> Inserts at CL LOCAL_QUORUM work fine when all 3 nodes are up.
> However, this morning one node went down and I started seeing the
> HUnavailableException: : May not be enough replicas present to handle
> consistency level.
>
> I believed if I have 3 nodes and one goes down, two remaining nodes
> are sufficient for my configuration.
>
> Please help me to understand what's going on.
>


HUnavailableException: : May not be enough replicas present to handle consistency level.

2011-09-02 Thread Oleg Tsvinev
I believe I don't quite understand the semantics of this exception:

me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
be enough replicas present to handle consistency level.

Does it mean there *might be* enough?
Does it mean there *is not* enough?

My case is as follows: I have 3 nodes, with the keyspace configured like this:

Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
Durable Writes: true
Options: [DC2:2, DC1:2]

Hector can only connect to nodes in DC1 and is configured to neither see
nor connect to nodes in DC2; replication between datacenters DC1 and DC2
is handled asynchronously by Cassandra itself. Each of the 6 total nodes
can see the remaining 5.

Inserts at CL LOCAL_QUORUM work fine when all 3 nodes are up.
However, this morning one node went down and I started seeing the
HUnavailableException: : May not be enough replicas present to handle
consistency level.

I believed that if I have 3 nodes and one goes down, the two remaining
nodes would be sufficient for my configuration.

Please help me to understand what's going on.