It's reasonable that we can attach different levels of importance to
these things. Taking a step back, I have two main points:
1) vnodes add enormous complexity to *many* parts of Cassandra. I'm
skeptical of the cost:benefit ratio here.
1a) The benefit is lower in my mind because many of the pr
> Each node would have a lower and an upper token, which would form a range
> that would be actively distributed via gossip. Read and replication
> requests would only be routed to a replica when the key of these operations
> matched the replica's token range in the gossip tables. Each node would
>
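The routing rule sketched in the quote above — each node gossips an active token range, and operations are routed to a replica only when the key's token falls inside that range — can be illustrated as follows. The class and node names and the gossip-table layout are illustrative assumptions, not Cassandra's actual implementation.

```python
# Hedged sketch of "active token range" routing. Each node advertises a
# (lower, upper] token range via gossip; a coordinator routes a read/write
# only to nodes whose active range covers the key's token.
from dataclasses import dataclass

@dataclass
class ActiveRange:
    lower: int  # exclusive, following the (lower, upper] range convention
    upper: int  # inclusive

    def contains(self, token: int) -> bool:
        if self.lower < self.upper:
            return self.lower < token <= self.upper
        # wrap-around range crossing the top of the ring
        return token > self.lower or token <= self.upper

# Gossip table: node -> its actively advertised range (hypothetical data).
gossip = {
    "node1": ActiveRange(0, 400),
    "node2": ActiveRange(400, 800),
    "node3": ActiveRange(800, 0),  # wraps around the ring
}

def replicas_for(token: int) -> list[str]:
    """Route an operation only to nodes whose active range covers the token."""
    return [n for n, r in gossip.items() if r.contains(token)]

print(replicas_for(450))  # ['node2']
print(replicas_for(900))  # ['node3'] -- covered by the wrap-around range
```

The wrap-around branch matters: one node's range always crosses the top of the token space, so a naive `lower < t <= upper` check would route nothing there.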
On 20 March 2012 14:55, Jonathan Ellis wrote:
> Here's how I see Sam's list:
>
> * Even load balancing when growing and shrinking the cluster
>
> Nice to have, but post-bootstrap load balancing works well in practice
> (and is improved by TRP).
Post-bootstrap load balancing without vnodes necessa
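The reply above breaks off, but the constraint being debated is a well-known property of a one-token-per-node ring: moving a single node's token only re-partitions data between that node and its ring neighbour, leaving every other node's load untouched. A minimal sketch of that property, with illustrative names (not Cassandra code):

```python
# With one token per node, moving node B's token only changes the boundary
# between B and its successor -- all other ranges are identical before/after.
def ranges(tokens: dict) -> dict:
    """Map each node to the (prev_token, its_token] slice of the ring it owns."""
    ring = sorted(tokens.values())
    out = {}
    for node, tok in tokens.items():
        i = ring.index(tok)
        out[node] = (ring[i - 1], tok)  # i - 1 wraps to the last token at i == 0
    return out

before = ranges({"A": 100, "B": 200, "C": 300})
after = ranges({"A": 100, "B": 250, "C": 300})  # move only B's token

# Only B's and C's ranges change; A's slice is identical.
print(before)
print(after)
```

This is why single-token load balancing tends to need a cascade of moves to fix a hotspot, whereas many small ranges per node let load shift in finer increments.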
On 20 March 2012 14:50, Rick Branson wrote:
> To support a form of DF, I think some tweaking of the replica placement could
> achieve this effect quite well. We could introduce a variable into replica
> placement, which I'm going to incorrectly call DF for the purposes of
> illustration. The k
On 19 March 2012 23:41, Peter Schuller wrote:
>>> Using this ring bucket in the CRUSH topology, (with the hash function
>>> being the identity function) would give the exact same distribution
>>> properties as the virtual node strategy that I suggested previously,
>>> but of course with much better
So taking a step back, if we want "vnodes", why can't we just give every node
100 tokens instead of only one? Seems to me this would have less impact on the
rest of the code. It would just look like you had a 500 node cluster, instead
of a 5 node cluster. Your replication strategy would have t
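The idea above — give every node 100 tokens so that a 5-node cluster looks like a 500-node cluster to the ring code — can be sketched with a sorted token ring. The lookup routine and node names are illustrative assumptions, not Cassandra's implementation.

```python
# Sketch of "give every node N tokens": each node claims many random tokens,
# so ownership is spread over many small slices and load evens out.
import bisect
import random

random.seed(42)  # deterministic for the illustration
TOKENS_PER_NODE = 100
nodes = [f"node{i}" for i in range(5)]

# Each node picks TOKENS_PER_NODE random tokens on the ring.
token_to_node = {}
for node in nodes:
    for _ in range(TOKENS_PER_NODE):
        token_to_node[random.randrange(2 ** 64)] = node

ring = sorted(token_to_node)  # ~500 tokens: "looks like a 500 node cluster"

def owner(key_token: int) -> str:
    """The first token at or after the key's token owns it (wrapping at the top)."""
    i = bisect.bisect_left(ring, key_token)
    return token_to_node[ring[i % len(ring)]]

# Sample random keys: each node ends up owning roughly a fifth of them.
counts = {n: 0 for n in nodes}
for _ in range(10_000):
    counts[owner(random.randrange(2 ** 64))] += 1
print(counts)
```

The appeal, as the message says, is that the rest of the code barely changes: the ring just has more entries, and the replication strategy walks it the same way.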
On Tue, Mar 20, 2012 at 9:08 AM, Eric Evans wrote:
> On Tue, Mar 20, 2012 at 8:39 AM, Jonathan Ellis wrote:
>> I like this idea. It feels like a good 80/20 solution -- 80% of the
>> benefits, 20% of the effort. More like 5% of the effort. I can't
>> even enumerate all the places full vnode support would change, but an
>> "active token range" concept would be relatively limited in scope.
On 20 March 2012 13:37, Eric Evans wrote:
> On Tue, Mar 20, 2012 at 6:40 AM, Sam Overton wrote:
>> On 20 March 2012 04:35, Vijay wrote:
>>> Maybe what I mean is a little simpler than that... We can consider
>>> every node having multiple conservative ranges and moving those ranges
>>> for
> > I like this idea. It feels like a good 80/20 solution -- 80% of the
> > benefits, 20% of the effort. More like 5% of the effort. I can't
> > even enumerate all the places full vnode support would change, but an
> > "active token range" concept would be relatively limited in scope.
>
>
> It on
On Tue, Mar 20, 2012 at 8:39 AM, Jonathan Ellis wrote:
> I like this idea. It feels like a good 80/20 solution -- 80% of the
> benefits, 20% of the effort. More like 5% of the effort. I can't
> even enumerate all the places full vnode support would change, but an
> "active token range" concept would be relatively limited in scope.
I like this idea. It feels like a good 80/20 solution -- 80% of the
benefits, 20% of the effort. More like 5% of the effort. I can't
even enumerate all the places full vnode support would change, but an
"active token range" concept would be relatively limited in scope.
Full vnodes feels a lot m
On Tue, Mar 20, 2012 at 6:40 AM, Sam Overton wrote:
> On 20 March 2012 04:35, Vijay wrote:
>> On Mon, Mar 19, 2012 at 8:24 PM, Eric Evans wrote:
>>
>>> I'm guessing you're referring to Rick's proposal about ranges per node?
>>>
>>
>> Maybe what I mean is a little simpler than that... We can
On 20 March 2012 04:35, Vijay wrote:
> On Mon, Mar 19, 2012 at 8:24 PM, Eric Evans wrote:
>
>> I'm guessing you're referring to Rick's proposal about ranges per node?
>>
>
> Maybe what I mean is a little simpler than that... We can consider
> every node having multiple conservative ranges and moving those ranges