Re: adding node to cluster

2012-08-31 Thread Rob Coli
On Thu, Aug 30, 2012 at 10:39 PM, Casey Deccio ca...@deccio.net wrote:
 In what way are the lookups failing? Is there an exception?

 No exception--just failing in that the data should be there, but isn't.

At ConsistencyLevel.ONE or QUORUM?
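[Editorial note: the ONE-vs-QUORUM question matters because a read can only be guaranteed to see the latest write when the read and write replica counts overlap. A minimal sketch of the standard R + W > RF rule; the replication factor and counts here are illustrative, not taken from Casey's cluster:]

```python
# Quorum overlap arithmetic for Cassandra-style replication.
# A read touching R replicas is guaranteed to intersect a write
# acknowledged by W replicas whenever R + W > RF.

def quorum(rf):
    """Replicas involved in a QUORUM operation at replication factor rf."""
    return rf // 2 + 1

def overlaps(r, w, rf):
    """True if every r-replica read must see every w-replica write."""
    return r + w > rf

rf = 3
print(quorum(rf))                            # 2
print(overlaps(quorum(rf), quorum(rf), rf))  # True: QUORUM reads see QUORUM writes
print(overlaps(1, 1, rf))                    # False: ONE/ONE can return stale or no data
```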

If you are bootstrapping the node, I would expect there to be no
chance of serving blank reads like this. As auto_bootstrap is set to
true by default, I presume you are bootstrapping.
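[Editorial note: the two settings in question live in cassandra.yaml. An illustrative fragment for the 1.x era, not Casey's actual config; the token value is arbitrary:]

```yaml
# cassandra.yaml -- illustrative fragment only.
# auto_bootstrap defaults to true: a joining node streams its ranges
# from existing replicas before serving reads.
auto_bootstrap: true
# Setting initial_token explicitly (rather than letting Cassandra pick
# one) is the advice given elsewhere in this thread; the value below
# is a placeholder, not a recommendation.
initial_token: 85070591730234615865843651857942052864
```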

Which node are you querying to get the no data response?

=Rob

-- 
=Robert Coli
AIMGTALK - rc...@palominodb.com
YAHOO - rcoli.palominob
SKYPE - rcoli_palominodb


adding node to cluster

2012-08-30 Thread Casey Deccio
All,

I'm adding a new node to an existing cluster that uses
ByteOrderedPartitioner.  The documentation says that if I don't configure a
token, then one will be automatically generated to take load from an
existing node.  What I'm finding is that when I add a new node, (super)
column lookups begin failing (not sure if it was the row lookup failing or
the supercolumn lookup failing), and I'm not sure why.  I assumed that
while the existing node is streaming data to the new node, the affected
rows and (super) columns would still be found in the right place.  Any idea
why these lookups might be failing?  When I decommissioned the new
node, the lookups began working again.  Any help is appreciated.

Regards,
Casey


Re: adding node to cluster

2012-08-30 Thread Rob Coli
On Thu, Aug 30, 2012 at 10:18 AM, Casey Deccio ca...@deccio.net wrote:
 I'm adding a new node to an existing cluster that uses
 ByteOrderedPartitioner.  The documentation says that if I don't configure a
 token, then one will be automatically generated to take load from an
 existing node.
 What I'm finding is that when I add a new node, (super)
 column lookups begin failing (not sure if it was the row lookup failing or
 the supercolumn lookup failing), and I'm not sure why.

1) You almost never actually want BOP.
2) You never want Cassandra to pick a token for you. IMO and the
opinion of many others, the fact that it does this is a bug. Specify a
token with initial_token.
3) You never want to use Supercolumns. The project does not support
them but currently has no plan to deprecate them. Use composite row
keys.
4) Unless your existing cluster consists of one node, you almost never
want to add only a single new node to a cluster. In general you want
to double it.
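[Editorial note: points 2 and 4 combine neatly for RandomPartitioner (not BOP, whose tokens are byte strings): evenly spaced tokens over the 2**127 range balance the ring, and doubling the cluster means each new token exactly bisects an existing range. A sketch of that standard even-spacing formula:]

```python
# Evenly spaced initial_token values for a RandomPartitioner ring.
# Tokens live in [0, 2**127); node i of n gets i * 2**127 // n.

def balanced_tokens(n):
    return [i * 2**127 // n for i in range(n)]

# Doubling from 2 to 4 nodes: the two new tokens fall exactly halfway
# between the old ones, so each new node takes half of one existing
# node's range -- no token moves on the original nodes.
old = balanced_tokens(2)
new = balanced_tokens(4)
print(old)                                # [0, 2**126]
print(sorted(set(new) - set(old)))        # the two bisecting tokens
```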

In summary, you are Doing It just about as Wrong as possible... but on
to your actual question ... ! :)

In what way are the lookups failing? Is there an exception?

=Rob

-- 
=Robert Coli
AIMGTALK - rc...@palominodb.com
YAHOO - rcoli.palominob
SKYPE - rcoli_palominodb


Re: adding node to cluster

2012-08-30 Thread Casey Deccio
On Thu, Aug 30, 2012 at 11:21 AM, Rob Coli rc...@palominodb.com wrote:

 On Thu, Aug 30, 2012 at 10:18 AM, Casey Deccio ca...@deccio.net wrote:
  I'm adding a new node to an existing cluster that uses
  ByteOrderedPartitioner.  The documentation says that if I don't
 configure a
  token, then one will be automatically generated to take load from an
  existing node.
  What I'm finding is that when I add a new node, (super)
  column lookups begin failing (not sure if it was the row lookup failing
 or
  the supercolumn lookup failing), and I'm not sure why.

 1) You almost never actually want BOP.
 2) You never want Cassandra to pick a token for you. IMO and the
 opinion of many others, the fact that it does this is a bug. Specify a
 token with initial_token.
 3) You never want to use Supercolumns. The project does not support
 them but currently has no plan to deprecate them. Use composite row
 keys.
 4) Unless your existing cluster consists of one node, you almost never
 want to add only a single new node to a cluster. In general you want
 to double it.

 In summary, you are Doing It just about as Wrong as possible... but on
 to your actual question ... ! :)


Well, at least I'm consistent :)  Thanks for the hints.  Unfortunately,
when I first brought up my system--with the goal of getting it up
quickly--I thought BOP and Supercolumns were the way to go.  Plus, the
small cluster of nodes I was using was on a hodgepodge of hardware.  I've
since had a chance to think somewhat about redesigning and rearchitecting,
but it seems like there's no easy way to convert it properly.  Step one
was to migrate everything over to a single dedicated node on reasonable
hardware, so I could begin the process, which brought me to the issue I
initially posted about.  But the problem is that this is a live system, so
data loss is an issue I'd like to avoid.
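[Editorial note: the supercolumn-to-composite conversion Casey describes usually follows one mapping: a (row, supercolumn, subcolumn) -> value triple becomes a composite key (row, supercolumn) with plain columns. A plain-Python sketch of the transform; the row keys, column names, and values are hypothetical:]

```python
# Remodel supercolumn-shaped data as composite-keyed rows.
# Input shape:  row -> supercolumn -> subcolumn -> value
supercolumn_data = {
    "user1": {
        "2012-08-30": {"event": "login", "ip": "10.0.0.1"},
        "2012-08-31": {"event": "logout", "ip": "10.0.0.1"},
    },
}

# Output shape: (row, supercolumn) -> subcolumn -> value
composite_data = {}
for row, supers in supercolumn_data.items():
    for sc, cols in supers.items():
        composite_data[(row, sc)] = dict(cols)

print(composite_data[("user1", "2012-08-30")]["event"])  # login
```

The same lookups remain one key access away; the supercolumn simply moves into the key, which is what composite keys give you in Cassandra proper.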


 In what way are the lookups failing? Is there an exception?


No exception--just failing in that the data should be there, but isn't.

Casey