Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "FAQ" page has been changed by MakiWatanabe.
The comment on this change is: Add #seed and #seed_spof.
http://wiki.apache.org/cassandra/FAQ?action=diff&rev1=114&rev2=115

--------------------------------------------------

   * [[#replicaplacement|How does Cassandra decide which nodes have what data?]]
   * [[#cachehitrateunits|I have a row or key cache hit rate of 0.XX123456789.  
Is that XX% or 0.XX% ?]]
   * [[#bigcommitlog|Commit Log gets very big. Cassandra does not delete "old" 
commit logs. Why?]]
+  * [[#seed|What are seeds?]]
+  * [[#seed_spof|Does a single seed mean a single point of failure?]]
+ 
  
  <<Anchor(cant_listen_on_ip_any)>>
  
@@ -431, +434 @@

  
  update column family XXX with memtable_flush_after=60;
  
+ <<Anchor(seed)>>
+ 
+ == What are seeds? ==
+ 
+ Seeds, or seed nodes, are the nodes that a new node contacts on
+ bootstrap to learn about the ring.
+ When you add a new node to the ring, you must specify at least one
+ live seed for it to contact. Once a node has joined the ring, it
+ learns about the other nodes, so it does not need a seed on
+ subsequent boots.
+ 
+ There is no special configuration for a seed node itself. In a
+ stable, static ring you can point a bootstrapping node at a non-seed
+ node as its seed, though this is not recommended.
+ 
+ Nodes in the ring tend to send Gossip messages to seeds more often
+ than to non-seeds (refer to [[ArchitectureGossip]] for more
+ details). In other words, seeds act as hubs of the Gossip network.
+ With seeds, each node can detect status changes of other nodes
+ quickly.
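+ 
+ For example, a new node typically finds its seeds through the
+ seed_provider section of cassandra.yaml; the addresses below are
+ placeholders, not recommended values:
+ 
+ {{{
+ seed_provider:
+     # SimpleSeedProvider reads a static, comma-separated list of
+     # seed addresses from the "seeds" parameter.
+     - class_name: org.apache.cassandra.locator.SimpleSeedProvider
+       parameters:
+           # At least one seed must be live when this node bootstraps;
+           # list more than one in production.
+           - seeds: "192.0.2.10,192.0.2.11"
+ }}}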
+ 
+ <<Anchor(seed_spof)>>
+ 
+ == Does a single seed mean a single point of failure? ==
+ 
+ If you are using a replicated CF on the ring, having only one seed
+ does not create a single point of failure. The ring can operate and
+ boot without the seed; however, it will take longer to spread node
+ status changes across the ring.
+ It is recommended to have multiple seeds in a production system.
+ 
