On Sun, Oct 26, 2014 at 9:08 AM, yuelongguang <fasts...@163.com> wrote:

> hi,
> 1. Does one radosgw daemon *correspond* to one zone?  Is the ratio 1:1?
>

Not necessarily.  You need at least one radosgw daemon per zone, but you
can have more.  I have two small clusters.  The primary has 5 nodes, the
secondary has 4, and every node in both clusters runs an apache and a
radosgw.

It's possible (and confusing) to run multiple radosgw daemons on a single
node for different clusters.  You can either use Apache VHosts, or have
CivetWeb listen on a different port for each daemon.  I don't recommend
this, though, as it introduces a common failure mode for both zones.
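
As a rough sketch, two CivetWeb instances on one node could look like this
in ceph.conf.  The instance names, zone names, and ports are made-up
placeholders, and the option names follow the pre-Jewel federated-gateway
docs, so check them against your release:

    [client.radosgw.us-east]
        host = gateway1
        rgw zone = us-east
        rgw frontends = civetweb port=7480

    [client.radosgw.us-west]
        host = gateway1
        rgw zone = us-west
        rgw frontends = civetweb port=7481

Each daemon is then started under its own name, e.g.
radosgw -n client.radosgw.us-east.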




> 2. It seems that we can deploy any number of rgw daemons in a single ceph
> cluster; those rgw daemons can work separately or cooperate by using
> radosgw-agent to sync data and metadata. Am I right?
>

You can deploy as many zones as you want in a single cluster.  Each zone
needs its own set of pools and a radosgw daemon.  The zones can be
completely independent, or you can set up master-slave replication between
them with radosgw-agent.

Keep in mind that radosgw-agent does not provide bi-directional
replication; the secondary zone is read-only.
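
For flavor, radosgw-agent is typically pointed at a small config file
describing the two endpoints.  Every value below (hostnames, keys, zone
names, paths) is a placeholder, and the keys follow the old
federated-gateway docs, so verify them against your version:

    # /etc/ceph/us-data-sync.conf -- all values are placeholders
    src_zone: us-east
    source: http://rgw-east.example.com:80
    src_access_key: SRC_ACCESS_KEY
    src_secret_key: SRC_SECRET_KEY
    dest_zone: us-west
    destination: http://rgw-west.example.com:80
    dest_access_key: DEST_ACCESS_KEY
    dest_secret_key: DEST_SECRET_KEY
    log_file: /var/log/radosgw/radosgw-sync-us.log

and then run it with something like:

    radosgw-agent -c /etc/ceph/us-data-sync.conf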



> 3. Do you know how to set up load balancing for rgws? Is nginx a good
> choice, and how do you make nginx work with rgw?
>

Any load balancer should work, since the protocol is just HTTP/HTTPS.
Some people on the list have had issues with nginx; search the list
archives for radosgw and tengine.

I'm using HAProxy, and it's working for me.  I have a slight issue in my
secondary cluster with locking during replication.  I believe I need to
enable some kind of stickiness, but I haven't gotten around to
investigating it.  In the meantime, I've configured that cluster with a
single node in the active backend and the other nodes in a backup backend.
It's not a setup that works for everybody, but it meets my needs until I
fix the real issue.
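
For reference, that active/backup arrangement looks roughly like this in
haproxy.cfg; the hostnames and ports are placeholders, not my actual
config:

    frontend rgw_front
        bind *:80
        mode http
        default_backend rgw_back

    backend rgw_back
        mode http
        # one active node; the others only take traffic if it fails
        server rgw1 rgw1.example.com:7480 check
        server rgw2 rgw2.example.com:7480 check backup
        server rgw3 rgw3.example.com:7480 check backup

The "backup" flag is what keeps all the replication traffic on a single
node until rgw1 goes down.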