Re: [ceph-users] Federated gateways (our planning use case)

2014-10-08 Thread David Barker
I've had some luck putting a load balancer in front of multiple zones to get
around the multiple-URL issue. You can have the LB send POST/DELETE et al.
to the primary zone, while GET requests are distributed across the zones.
The only issue is the replication delay; your data may not be available on
the secondary for reading yet...

I'm pretty sure you can configure most LBs to check multiple zones for the
object when handling a GET, and redirect to the primary if replication
hasn't caught up; for what I was looking at, I didn't need this!
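
If it helps, this is roughly the routing logic I mean, sketched client-side
in Python with requests (the zone hostnames are placeholders and S3 request
signing is left out); a real setup would express the same rules in the LB
config:

import requests

PRIMARY = "http://primary-zone.example.com"
SECONDARIES = ["http://secondary1-zone.example.com",
               "http://secondary2-zone.example.com"]

def read_object(path):
    # Try a secondary first; a 404 may just mean replication hasn't
    # caught up yet, so fall back to the primary zone.
    for zone in SECONDARIES:
        resp = requests.get(zone + path)
        if resp.status_code != 404:
            return resp
    return requests.get(PRIMARY + path)

def write_object(path, data):
    # All writes (PUT/POST/DELETE et al.) must go to the primary zone.
    return requests.put(PRIMARY + path, data=data)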

Dave

On Tue, Oct 7, 2014 at 1:33 AM, Craig Lewis 
wrote:

> This sounds doable, with a few caveats.
>
>
> Currently, replication is only one direction.  You can only write to the
> primary zone, and you can read from the primary or secondary zones.  A
> cluster can have many zones on it.
>
> I'm thinking your setup would be a star topology.  Each telescope will be
> a primary zone, and replicate to a secondary zone in the main storage
> cluster.  The main cluster will have one read-only secondary zone for each
> telescope.  If you have other needs to write data to the main cluster, you
> can create another zone that only exists on the main cluster (possibly
> replicated to one of the telescopes with a good network connection).
>
> Each zone has its own URL (primary and secondary), so you'd have a bit of a
> management problem remembering to use the correct URL.  The URLs can be
> whatever you like.  Convention follows Amazon's naming scheme, but you'd
> probably want to create your own scheme, something like
> http://telescope_name-site.inasan.ru/ and
> http://telescope_name-campus.inasan.ru/
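>
> As a rough sketch of what that looks like from a client (Python/boto; the
> credentials, bucket name and "telescope1" below are just placeholders),
> writes go to the telescope's primary-zone URL while reads can come from the
> campus secondary:
>
> import boto
> import boto.s3.connection
>
> ACCESS_KEY = "ACCESS"   # placeholder credentials
> SECRET_KEY = "SECRET"
>
> def connect(host):
>     # Plain-HTTP S3 connection to one radosgw zone endpoint.
>     return boto.connect_s3(
>         aws_access_key_id=ACCESS_KEY,
>         aws_secret_access_key=SECRET_KEY,
>         host=host,
>         is_secure=False,
>         calling_format=boto.s3.connection.OrdinaryCallingFormat())
>
> # Writes must go to the telescope's primary zone (the bucket is assumed
> # to exist there already)...
> site = connect("telescope1-site.inasan.ru")
> site.get_bucket("observations") \
>     .new_key("2014-10-08/frame-001.fits") \
>     .set_contents_from_filename("frame-001.fits")
>
> # ...while reads can be served by the read-only campus secondary.
> campus = connect("telescope1-campus.inasan.ru")
> campus.get_bucket("observations") \
>       .get_key("2014-10-08/frame-001.fits") \
>       .get_contents_to_filename("frame-001.fits")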
>
>
> You might have some problems with the replication if your VPN connections
> aren't stable.  The replication agent isn't very tolerant of cluster
> problems, so I suspect (but haven't tested) that long VPN outages will need
> a replication agent restart.  For sites that don't have permanent
> connections, just make the replication agent startup and shutdown part of
> the connection startup and shutdown process.  Replication state is
> available via a REST API, so it can be monitored.
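>
> (A minimal polling sketch in Python/requests of what that monitoring could
> look like; the URL below is a placeholder - check the replica/data log
> admin API for your radosgw version for the real resource and any request
> signing it needs.)
>
> import sys
> import requests
>
> # Placeholder URL - substitute the replication/log admin resource your
> # gateway exposes, and add authentication as required.
> STATUS_URL = "http://gateway.example.com/admin/replica_log?data"
>
> def check_replication():
>     resp = requests.get(STATUS_URL)
>     resp.raise_for_status()
>     # The response carries sync markers/timestamps; compare them with the
>     # primary's logs to work out how far behind the secondary is.
>     return resp.json()
>
> if __name__ == "__main__":
>     try:
>         print(check_replication())
>     except requests.RequestException as err:
>         sys.stderr.write("replication check failed: %s\n" % err)
>         sys.exit(1)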
>
>
> I have tested large backlogs in replication.  When I initially imported my
> data, I deliberately imported faster than I had bandwidth to replicate.  At
> one point, my secondary cluster was ~10 million objects, and ~10TB behind
> the primary cluster.  It eventually caught up, but the process doesn't
> handle stops and restarts well.  Restarting replication while it is working
> through the backlog starts over from the beginning of the backlog.
> This can be a problem if your backlog is so large that it won't finish in a
> day, because log rotation will restart the replication agent.  If that's
> something you think might be a problem, I have some strategies to deal with
> it, but they're manual and hacky.
>
>
> Does that sound feasible?
>
>
> On Mon, Oct 6, 2014 at 5:42 AM, Pavel V. Kaygorodov 
> wrote:
>
>> Hi!
>>
>> Our institute is now planning to deploy a set of robotic telescopes across
>> the country.
>> Most of the telescopes will have low bandwidth and high latency, or even no
>> permanent internet connectivity.
>> I think we can set up synchronization of the observational data with Ceph,
>> using federated gateways:
>>
>> 1. The main big storage Ceph cluster will be set up in our institute's main
>> building
>> 2. A small Ceph cluster will be set up near each telescope, to store
>> only the data from the local telescope
>> 3. VPN tunnels will be set up from each telescope site to our institute
>> 4. The federated gateways mechanism will do all the magic to synchronize
>> the data
>>
>> Is this a realistic plan?
>> What problems might we run into with this setup?
>>
>> Thanks in advance,
>>   Pavel.


Re: [ceph-users] Multiple cephfs filesystems per cluster

2014-09-17 Thread David Barker
Thanks John - It did look like it was heading in that direction!

I did wonder if 'fs map' & 'fs unmap' commands would be useful too; filesystem
backups, migrations between clusters & async DR could all be facilitated by
moving the underlying pool objects around between clusters.

Dave

On Wed, Sep 17, 2014 at 11:22 AM, John Spray  wrote:

> Hi David,
>
> We haven't written any code for the multiple filesystems feature so
> far, but the new "fs new"/"fs rm"/"fs ls" management commands were
> designed with this in mind -- currently only supporting one
> filesystem, but to allow slotting in the multiple filesystems feature
> without too much disruption.  There is some design work to be done as
> well, such as how the system should handle standby MDSs (assigning to
> a particular filesystem, floating between filesystems, etc).
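>
> (For anyone following along, a quick sketch of the current
> single-filesystem flow with those commands, driven from Python via
> subprocess; the pool names and PG counts below are arbitrary.)
>
> import subprocess
>
> def ceph(*args):
>     # Thin wrapper around the ceph CLI; raises if a command fails.
>     return subprocess.check_output(("ceph",) + args)
>
> # Arbitrary pool names and PG counts for illustration.
> ceph("osd", "pool", "create", "deptfs_metadata", "64")
> ceph("osd", "pool", "create", "deptfs_data", "64")
>
> # "fs new" ties a metadata pool and a data pool into the (currently only)
> # filesystem; "fs ls" shows what is defined.
> ceph("fs", "new", "deptfs", "deptfs_metadata", "deptfs_data")
> print(ceph("fs", "ls"))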
>
> Cheers,
> John
>
> On Wed, Sep 17, 2014 at 11:11 AM, David Barker 
> wrote:
> > Hi Cephalopods,
> >
> > Browsing the list archives, I know this has come up before, but I thought
> > I'd check in for an update.
> >
> > I'm in an environment where it would be useful to run a file system per
> > department in a single cluster (or at a pinch enforcing some client / fs
> > tree security). Has there been much progress recently?
> >
> > Many thanks,
> >
> > Dave


[ceph-users] Multiple cephfs filesystems per cluster

2014-09-17 Thread David Barker
Hi Cephalopods,

Browsing the list archives, I know this has come up before, but I thought
I'd check in for an update.

I'm in an environment where it would be useful to run a file system per
department in a single cluster (or at a pinch enforcing some client / fs
tree security). Has there been much progress recently?

Many thanks,

Dave