Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "ArchitectureInternals" page has been changed by PeterSchuller.
The comment on this change is: Avoid StorageProxy becoming links.
http://wiki.apache.org/cassandra/ArchitectureInternals?action=diff&rev1=22&rev2=23

--------------------------------------------------

  = Read path =
   * !StorageProxy gets the endpoints (nodes) responsible for replicas of the 
keys from the !ReplicationStrategy
    * The read command may be a !SliceFromReadCommand, a !SliceByNamesReadCommand, or a 
!RangeSliceReadCommand, depending on the query type
-  * StorageProxy filters the endpoints to contain only those that are 
currently up/alive
+  * !StorageProxy filters the endpoints to contain only those that are 
currently up/alive
-  * StorageProxy then sorts, by asking the endpoint snitch, the responsible 
nodes by "proximity".
+  * !StorageProxy then sorts, by asking the endpoint snitch, the responsible 
nodes by "proximity".
     * The definition of "proximity" is up to the endpoint snitch
       * With a SimpleSnitch, proximity directly corresponds to proximity on 
the token ring.
       * With the NetworkTopologySnitch, endpoints that are in the same rack 
are always considered "closer" than those that are not. Failing that, endpoints 
in the same data center are always considered "closer" than those that are not.
      * The DynamicSnitch, typically enabled in the configuration, wraps whatever 
underlying snitch is configured (such as SimpleSnitch or NetworkTopologySnitch) and 
dynamically adjusts the perceived "closeness" of endpoints based on their 
recent performance, in an effort to avoid routing traffic to endpoints that are 
slow to respond.
-  * StorageProxy then arranges for messages to be sent to nodes as required:
+  * !StorageProxy then arranges for messages to be sent to nodes as required:
     * The closest node (as determined by proximity sorting as described above) 
will be sent a command to perform an actual data read (i.e., return data to the 
co-ordinating node). 
     * As required by consistency level, additional nodes may be sent digest 
commands, asking them to perform the read locally but send back the digest 
only. For example, at replication factor 3 a read at consistency level QUORUM 
would require one digest read in addition to the data read sent to the 
closest node. (See ReadCallback, instantiated by StorageProxy)
     * If read repair is enabled (probabilistically, when the read repair chance is 
between 0% and 100%), the remaining nodes responsible for the row will be 
sent messages to compute the digest of the response. (Again, see ReadCallback, 
instantiated by StorageProxy)
