[ 
https://issues.apache.org/jira/browse/CASSANDRA-1873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-1873:
--------------------------------------

    Attachment: 1873.txt

The only tricky part was efficiently getting handles to both the command object and the address the data was read from.  Ended up handling the former with a Map in StorageProxy.weakRead, and the latter by adding a field to AsyncResult.

RR vs local reads continues to be handled by weakReadCallable; that part required relatively little change.
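The bookkeeping described above could look roughly like the sketch below. This is a hypothetical simplification, not the actual patch: the class names ReadBookkeeping and the String-typed command are stand-ins, and the real StorageProxy/AsyncResult code differs; only the idea (correlate in-flight commands by message id, and record the responding endpoint on the result) is taken from the comment.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: keep the in-flight read command in a Map keyed by
// message id, and record which endpoint answered on a simplified
// AsyncResult stand-in (the "added field" mentioned in the comment).
public class ReadBookkeeping {
    static class AsyncResult {
        byte[] data;
        String from; // new field: address the data was actually read from
    }

    // message id -> original read command, so the coordinator can later
    // issue digest requests for the same command (read repair)
    static final Map<String, String> commands = new ConcurrentHashMap<>();
    static final Map<String, AsyncResult> results = new ConcurrentHashMap<>();

    static void sendRead(String messageId, String command, String endpoint) {
        commands.put(messageId, command);
        results.put(messageId, new AsyncResult());
        // ... network send to endpoint elided ...
    }

    static void onResponse(String messageId, byte[] data, String from) {
        AsyncResult r = results.get(messageId);
        r.data = data;
        r.from = from; // remember who answered, for latency tracking / repair
    }
}
```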

> Read Repair behavior thwarts DynamicEndpointSnitch at CL.ONE
> ------------------------------------------------------------
>
>                 Key: CASSANDRA-1873
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1873
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Jonathan Ellis
>            Assignee: Jonathan Ellis
>             Fix For: 0.6.9, 0.7.1
>
>         Attachments: 1873.txt
>
>
> When doing a CL.ONE read, the coordinator node selects the data node from the 
> list of replicas via snitch sortByProximity.  The data node (_not_ the 
> coordinator) then sends digest requests to the remaining replicas, and 
> compares their answers to its own (in ConsistencyChecker).
> This means that, in a multi-datacenter situation, for any given range R with 
> replicas X in dc1 and Y in dc2, the only node with latency information for Y 
> will be X.  Since DES falls back to subsnitch (static) order when latency 
> information is missing for any replica it is asked to sort, DES will be 
> unable to direct requests to Y no matter how overwhelmed X becomes.
> To fix this, we should move the digest-checking code into the coordinator 
> node (probably starting with the 0.7 ConsistencyChecker, which represents a 
> cleanup of the 0.6 one).
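The fallback behavior the description relies on can be illustrated with a minimal sketch. This is not the real DynamicEndpointSnitch code; replica names, score values, and the method shape are assumptions, but it shows why a missing latency score for Y pins traffic to X under static ordering.

```java
import java.util.*;

// Minimal sketch of the described fallback: sort replicas by measured
// latency score, but if ANY replica lacks a score, return the static
// (subsnitch) order unchanged.
public class SnitchSketch {
    static List<String> sortByProximity(List<String> staticOrder,
                                        Map<String, Double> latencyScores) {
        // Fall back to subsnitch order when latency info is incomplete.
        for (String replica : staticOrder) {
            if (!latencyScores.containsKey(replica)) {
                return new ArrayList<>(staticOrder);
            }
        }
        List<String> sorted = new ArrayList<>(staticOrder);
        sorted.sort(Comparator.comparingDouble(latencyScores::get));
        return sorted;
    }

    public static void main(String[] args) {
        Map<String, Double> scores = new HashMap<>();
        scores.put("X", 9.0); // X is overwhelmed (high latency score)
        // No score for Y: only X ever measured Y under the old read path,
        // so the coordinator falls back to static order and keeps picking X.
        System.out.println(sortByProximity(Arrays.asList("X", "Y"), scores));
    }
}
```

Once the digest-checking moves to the coordinator, the coordinator accumulates scores for remote replicas too, and the sort path (rather than the fallback) is taken.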

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
