Jon and Steve:

I don't understand your point. The TokenAwareLoadBalancer identifies the
nodes in the cluster that own the data for a particular token and routes
requests to one of them. As I understand it, the OP wants to send requests
for a particular token to the same node every time (assuming it's
available). How does that fail in a large cluster?
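
As a quick illustration, the driver can tell you which nodes own a given
partition key, which is the same lookup the token-aware policy performs
before routing. A minimal sketch, assuming the DataStax Python driver (the
contact point, keyspace, and key below are placeholders):

from cassandra.cluster import Cluster

cluster = Cluster(['10.0.0.1'])             # placeholder contact point
session = cluster.connect('some_keyspace')  # connect() populates metadata

# The routing key is the serialized partition key (UTF-8 bytes for a text key).
replicas = cluster.metadata.get_replicas('some_keyspace', b'some_key')
print([h.address for h in replicas])        # the nodes that own this token

Each of those replicas is a valid coordinator for that key, so the policy
only has to pick one of them and keep picking the same one.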

Jim

On Tue, Apr 5, 2016 at 4:31 PM, Jonathan Haddad <j...@jonhaddad.com> wrote:

> Yep - Steve hit the nail on the head.  The odds of hitting the right
> server with "sticky routing" go down as your cluster size increases.  You
> end up adding extra network hops instead of using token-aware routing.
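> (As a concrete example: with 12 nodes and RF=3, any one pinned coordinator
> is a replica for only 3/12 of the token ranges, assuming even ownership, so
> roughly 75% of requests would pay an extra hop that token-aware routing
> avoids.)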
>
> Unless you're trying to do a coordinator tier (and you're not, according
> to your original post), this is a pretty bad idea and I'd advise you to
> push back on that requirement.
>
> On Tue, Apr 5, 2016 at 12:47 PM Steve Robenalt <sroben...@highwire.org>
> wrote:
>
>> Aside from Jon's "why" question, I would point out that this only really
>> works because you are running a 3 node cluster with RF=3. If your cluster
>> is going to grow, you can't guarantee that any one server would have all
>> records. I'd be pretty hesitant to put an invisible constraint like that on
>> a cluster unless you're pretty sure it'll only ever be 3 nodes.
>>
>> On Tue, Apr 5, 2016 at 9:34 AM, Jonathan Haddad <j...@jonhaddad.com>
>> wrote:
>>
>>> Why is this a requirement?  Honestly I don't know why you would do this.
>>>
>>>
>>> On Sat, Apr 2, 2016 at 8:06 PM Mukil Kesavan <weirdbluelig...@gmail.com>
>>> wrote:
>>>
>>>> Hello,
>>>>
>>>> We currently have 3 Cassandra servers running in a single datacenter
>>>> with a replication factor of 3 for our keyspace. We also use the
>>>> SimpleSnitch with dynamic snitching enabled by default. Our load balancing
>>>> policy is TokenAwareLoadBalancingPolicy with RoundRobinPolicy as the child.
>>>> This overall configuration results in our client requests spreading equally
>>>> across our 3 servers.
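>>>>
>>>> For reference, the relevant driver setup looks roughly like this (a
>>>> sketch assuming the DataStax Python driver, where the classes are
>>>> TokenAwarePolicy and RoundRobinPolicy; the contact point and keyspace
>>>> name are placeholders):
>>>>
>>>> from cassandra.cluster import Cluster
>>>> from cassandra.policies import TokenAwarePolicy, RoundRobinPolicy
>>>>
>>>> cluster = Cluster(
>>>>     contact_points=['10.0.0.1'],  # placeholder address
>>>>     load_balancing_policy=TokenAwarePolicy(RoundRobinPolicy()))
>>>> session = cluster.connect('our_keyspace')  # keyspace name assumed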
>>>>
>>>> However, we have a new requirement where we need to restrict a client's
>>>> requests to a single server, falling back to the other servers only when
>>>> that server fails. This particular use case does not have high request
>>>> traffic.
>>>>
>>>> Looking at the documentation, the options we have seem to be:
>>>>
>>>> 1. Play with the snitching (e.g. place each server into its own DC or
>>>> Rack) to ensure that requests always go to one server and fail over to the
>>>> others if required. I understand that this may also affect replica
>>>> placement and we may need to run nodetool repair. So this is not our most
>>>> preferred option.
>>>>
>>>> 2. Write a new load balancing policy that also uses the
>>>> HostStateListener to track host up and down events, and that essentially
>>>> accomplishes "sticky" request routing with failover to the other nodes.
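>>>>
>>>> Something like the following rough sketch is what I have in mind for
>>>> option 2 (untested, assuming the DataStax Python driver; the class name
>>>> and the pin-by-address ordering are just my assumptions):
>>>>
>>>> from cassandra.policies import LoadBalancingPolicy, HostDistance
>>>>
>>>> class StickyFailoverPolicy(LoadBalancingPolicy):
>>>>     """Try one pinned host first; use the rest only as failover.
>>>>     Sketch only: no locking, and the pinning rule is arbitrary."""
>>>>
>>>>     def __init__(self):
>>>>         super(StickyFailoverPolicy, self).__init__()
>>>>         self._hosts = []
>>>>
>>>>     def populate(self, cluster, hosts):
>>>>         # Fixed, deterministic order so every request tries the
>>>>         # same "primary" node first.
>>>>         self._hosts = sorted(hosts, key=lambda h: str(h.address))
>>>>
>>>>     def distance(self, host):
>>>>         return HostDistance.LOCAL
>>>>
>>>>     def make_query_plan(self, working_keyspace=None, query=None):
>>>>         hosts = list(self._hosts)
>>>>         # Live hosts first (in fixed order), then the rest as a
>>>>         # last resort; later hosts are only tried if earlier ones fail.
>>>>         for host in hosts:
>>>>             if host.is_up:
>>>>                 yield host
>>>>         for host in hosts:
>>>>             if not host.is_up:
>>>>                 yield host
>>>>
>>>>     # HostStateListener callbacks keep the host list current.
>>>>     def on_up(self, host):
>>>>         if host not in self._hosts:
>>>>             self._hosts.append(host)
>>>>             self._hosts.sort(key=lambda h: str(h.address))
>>>>
>>>>     def on_down(self, host):
>>>>         pass  # make_query_plan already deprioritizes down hosts
>>>>
>>>>     def on_add(self, host):
>>>>         self.on_up(host)
>>>>
>>>>     def on_remove(self, host):
>>>>         if host in self._hosts:
>>>>             self._hosts.remove(host)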
>>>>
>>>> Is option 2 the only clean way of accomplishing our requirement?
>>>>
>>>> Thanks,
>>>> Micky
>>>>
>>>
>>
>>
>> --
>> Steve Robenalt
>> Software Architect
>> sroben...@highwire.org
>> (office/cell): 916-505-1785
>>
>> HighWire Press, Inc.
>> 425 Broadway St, Redwood City, CA 94063
>> www.highwire.org
>>
>> Technology for Scholarly Communication
>>
>
