On Wed, Oct 1, 2014 at 8:47 AM, Desimpel, Ignace <ignace.desim...@nuance.com> wrote:

>  I deploy/distribute the Cassandra database as an embedded service,
> allowing me to create a basic cassandra.yaml file based on the global
> cluster of machines (seeds, non-seeds, ports, disks, etc.). That allows me
> to configure and upgrade my own software and the Cassandra software using
> the same cassandra.yaml. That yaml file has no tokens specified in it, yet
> it is still a vnode cluster (thanks Cassandra).
>

IMO, this is the error. Why do you not want to specify your tokens?


>  In previous versions that was OK, since the Cassandra code simply
> accepted the tokens it had saved in its own database, disregarding any
> changes one made in the yaml file (there was no test like
> bootstrapTokens.size() != DatabaseDescriptor.getNumTokens()). I guess there
> was some logic to that, since at that point the system is not bootstrapping
> and thus should/could use the known token configuration without using the
> yaml token parameter.
>

I'm not really sure I understand the scenario you are describing. In
general, if a node has bootstrapped and you have the system keyspace for
that node, it tends to use the stored tokens. Is there a specific exception
you're getting?
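
For reference, the check in question looks roughly like this (a simplified
sketch of the CASSANDRA-7649 validation, paraphrased rather than the exact
source; the names mirror the ones you quoted):

    // Sketch of the startup check: compare the tokens saved in the system
    // keyspace against num_tokens from cassandra.yaml and refuse to start
    // if they disagree.
    Collection<Token> bootstrapTokens = SystemKeyspace.getSavedTokens();
    if (!bootstrapTokens.isEmpty()
        && bootstrapTokens.size() != DatabaseDescriptor.getNumTokens())
    {
        throw new ConfigurationException("Cannot change the number of tokens from "
                                         + bootstrapTokens.size() + " to "
                                         + DatabaseDescriptor.getNumTokens());
    }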


> Also, isn’t this small code change in CASSANDRA-7649 inspired by the
> balancing problems when moving to vnodes (CASSANDRA-7601) with a random
> partitioner? In my case I’m using a ByteOrdered partitioner, forcing me to
> balance/move/add nodes/tokens myself.
>
>
Using BOP is a strong smell of Doing It Wrong. You are probably the only
person on Earth using the combination of BOP and Vnodes.


> And as the description says, it was meant to prevent ‘changing the number
> of tokens’, but that test is doing a little more than that (from my point
> of view).
>
>
>
> Well, in short: I would be in favor of removing that test, and instead
> clearly logging a message that the “saved tokens” are used, not the yaml
> configured tokens.
>

I am still not sure I completely understand your case, but it seems like
you can probably avoid it by simply specifying a comma-delimited list of
your tokens in initial_token.
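
For example, something like this in cassandra.yaml (hypothetical values; you
would use this node's actual tokens, in your case ByteOrdered ones, and
num_tokens would match the length of the list):

    # cassandra.yaml fragment -- illustrative only
    num_tokens: 4
    initial_token: '-9187343239835811840,-4611686018427387904,0,4611686018427387904'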

Always including your tokens in initial_token is, IMO, a Cassandra
Operations best practice. It helps you in various cases and hurts you in
almost none. Eventually I will write up a blog post explaining some of
these cases.

=Rob
