Glad it worked
2011/3/25
> very cool. thanks for the info. this is exactly what we need.
>
>
> On Mar 25, 2011 8:22am, Patricio Echagüe wrote:
> >
> > It's a cassandra consistency level
> > On Mar 24, 2011 11:44 PM, jonathan.co...@gmail.com wrote:
> > > Patricio -
> > >
> > > I haven't heard of LOCAL_QUORUM
Doesn't CL=LOCAL_QUORUM solve your problem?
On Thu, Mar 24, 2011 at 9:33 AM, wrote:
Hi Nate -
That sounds really promising and I'm looking forward to trying that out.
My original question came up while thinking how to achieve quorum (with
rf=3) with a loss of 1 of 2 data centers. My logic was that if you had 2
replicas in the same data center where the client originally writes
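The quorum arithmetic behind this question can be sketched out. The replica counts below (2 replicas in one DC, 1 in the other, for RF=3) are an assumed placement for illustration, not something stated in the thread:

```python
# Sketch of the quorum arithmetic discussed above: with RF=3 split across
# two data centers, losing the wrong DC can leave fewer live replicas than
# QUORUM needs, while LOCAL_QUORUM only counts replicas in the local DC.

def quorum(replicas: int) -> int:
    """Nodes required for a quorum: floor(replicas / 2) + 1."""
    return replicas // 2 + 1

# Assumed placement: RF=3 total, 2 replicas in DC1 and 1 in DC2.
rf_total = 3
rf_dc1, rf_dc2 = 2, 1

# Cluster-wide QUORUM needs 2 of the 3 replicas.
print(quorum(rf_total))             # 2

# Losing DC2 leaves DC1's 2 replicas, which still satisfy QUORUM...
print(rf_dc1 >= quorum(rf_total))   # True

# ...but losing DC1 (2 replicas) leaves only 1, so QUORUM fails.
print(rf_dc2 >= quorum(rf_total))   # False

# LOCAL_QUORUM evaluated in DC2 only needs quorum(rf_dc2) = 1 replica,
# so operations pinned to DC2 can proceed even with DC1 down.
print(quorum(rf_dc2))               # 1
```

The asymmetry is the point: which data center you can afford to lose depends entirely on how the replicas were placed, which motivates the token-ring question later in the thread.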
We have a load balancing policy which selects the host best on latency
and uses a Phi convict algorithm in a method similar to DynamicSnitch.
Using this policy, you would inherently get the closest replica
whenever possible as that would most likely be the best performing.
This policy is still in
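The phi-convict idea mentioned above can be sketched roughly. This is a simplified, assumed model (exponential inter-arrival distribution, a made-up threshold constant), not the actual implementation in the client library or in DynamicSnitch, which track a sliding window of samples:

```python
import math

# Simplified sketch of a phi-accrual convict check, in the spirit of the
# DynamicSnitch-like policy described above. phi is the suspicion level:
# -log10 of the probability that a healthy host would still be silent
# this long after its last response, assuming exponential intervals.

def phi(time_since_last: float, mean_interval: float) -> float:
    p_later = math.exp(-time_since_last / mean_interval)
    return -math.log10(p_later)

# Assumed threshold; real systems make this configurable
# (compare cassandra.yaml's phi_convict_threshold).
CONVICT_THRESHOLD = 8.0

def is_alive(time_since_last: float, mean_interval: float) -> bool:
    return phi(time_since_last, mean_interval) < CONVICT_THRESHOLD

# A host heard from one mean interval ago looks healthy; one silent for
# thirty intervals accumulates enough suspicion to be convicted.
print(is_alive(1.0, mean_interval=1.0))    # True  (phi ~ 0.43)
print(is_alive(30.0, mean_interval=1.0))   # False (phi ~ 13.0)
```

Because phi grows continuously rather than flipping a binary timeout, the policy can rank hosts by suspicion and latency together and steer requests toward the best-performing replica.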
Indeed I found the big flaw in my own logic. Even writing to the "local"
Cassandra nodes does not guarantee where the replicas will end up. The
decision of where to write the first replica is based on the token ring, which
is spread out across all nodes regardless of data center. Right?
On Mar 2
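The token-ring placement described in that last message can be illustrated with a toy ring. The node names, token values, and md5-based hashing below are illustrative assumptions in the style of SimpleStrategy, not Cassandra's actual partitioner code:

```python
import hashlib

# Toy illustration of the point above: the first replica is chosen purely
# by walking the ring clockwise from the key's token, so the owning node
# can be in either data center -- the client's location never enters in.

# Hypothetical ring: tokens interleaved across two data centers.
ring = sorted([
    (10,  "dc1-node1"),
    (40,  "dc2-node1"),
    (70,  "dc1-node2"),
    (100, "dc2-node2"),
])

def key_token(key: str, ring_size: int = 128) -> int:
    """Map a row key onto the toy token space via an md5 hash."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % ring_size

def first_replica(key: str) -> str:
    """First node at or clockwise past the key's token."""
    t = key_token(key)
    for token, node in ring:
        if t <= token:
            return node
    return ring[0][1]  # wrap around past the highest token

# Whichever token range the key hashes into decides the node; nothing
# about the client's "local" data center is consulted.
print(first_replica("some-row-key"))
```

This is exactly why writing through nearby coordinators does not pin replicas to the local DC; placement strategies that are data-center aware (NetworkTopologyStrategy with per-DC replication factors) exist to add that guarantee on top of the ring.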