On Jan 25, 2008 12:20 PM, <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I've looked through the documentation, but couldn't find any specifics on
> this, so I'd be grateful if somebody could point me at the page I've
> missed.
>
> 1) How do OpenAFS clients pick a server to access a volume from if the
> volume is replicated on multiple servers?
>

Server preferences. Look at e.g. fs getserverprefs / fs setserverprefs.
They default "sensibly" based on classful networking, sadly.
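For example (the server name and rank here are hypothetical; a lower rank
means more preferred), you can inspect and override the ranks on a client:

    # Show the ranks the cache manager has assigned to known file servers
    fs getserverprefs

    # Prefer a nearby server by giving it a lower rank
    fs setserverprefs -servers fs1.example.com 20000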


>
> 2) From the documentation, it looks like the replication mechanism is
> single-master / multiple-slaves, i.e. one read-write server, multiple
> read-only servers. Is that correct?


Yes


> If so, do clients transparently handle
> this? Are writes transparently routed to the read-write server while still
> allowing reads to come from a more local, faster, read-only server.


Not in the manner you're suggesting. Since the volumes don't auto-replicate
(you can publish at discrete times with vos release, but it's not the case
that you write and the change is auto-pushed), you don't want that anyway.
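As a sketch (the volume name is just an example), publication is an explicit
administrative step:

    # Push the current contents of the read-write volume out to all of
    # its read-only sites; nothing replicates until you run this
    vos release myvolume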

>
> 3) Can the root volume be replicated? What I am really looking to do is
> have 2 servers, one as master and the other with all the volumes
> replicated. Is that possible?


Yes, but, as above, is it what you want?
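If it is, a rough sketch (server, partition, and volume names below are the
conventional ones; adjust for your cell) looks like:

    # Add a read-only site for the cell root volume on the second server,
    # then publish it; root.afs can be replicated the same way
    vos addsite -server fs2.example.com -partition /vicepa -id root.cell
    vos release root.cell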

>
>
> 4) If the read-write server fails, how does OpenAFS handle failing over to
> the replicated backup? When the original master comes back up, how
> transparently / gracefully does this happen?
>

For read-write, there is no transparent failover: while the server holding
the RW volume is down, you simply can't see it, and when it comes back the
clients find it again. For readonly, a volume is a volume, so clients just
move to another replica site.
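To see what that means in practice (the volume name is an example), you can
check where a volume's sites live and which servers the cache manager
currently considers unreachable:

    # List the read-write and read-only sites of a volume
    vos examine myvolume

    # Ask the cache manager which file servers it believes are down
    fs checkservers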

>
> 5) Is disconnected operation supported via local caching (as per Coda)?


Not yet.


> If
> so, are there limits on sane cache sizes?


Regardless, there are practical limits. I wouldn't try a cache over 20 GB.
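For what it's worth, the cache size is just a count of 1 KB blocks (the
paths and the ~20 GB figure below are examples, not a recommendation):

    # /usr/vice/etc/cacheinfo sets it at startup:
    #   /afs:/usr/vice/cache:20000000

    # or change it at runtime
    fs setcachesize 20000000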


> Is it reasonable to expect to
> have tens of GB of cached content available on the client nodes?
>
> I am currently using GFS in reliable environments, and Coda on a small
> scale in environments that have to tolerate disconnections, but I have
> concerns about Coda's stability (perpetual betaware, or so it seems) in
> larger and harsher environments (terabytes of storage, hundreds of
> clients, thousands of users), which is why I am looking at OpenAFS as a
> possibly more stable alternative.
>
> Thanks in advance.
>
> Gordan
