31000/2 is about 15tb. that seems pretty reasonable these days.
do you know what the peak throughput is?
Good point.
Btw, what's the typical size for the coraid deployment?
we see everything from 1tb to 2400tb.
our most popular appliance, the sr2421,
holds 24 disks.
- erik
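erik's back-of-envelope figure above (30,821 user volumes at roughly half a gigabyte each, rounded to 31,000) checks out; a quick sketch of the arithmetic:

```python
# Back-of-envelope check of the storage figure quoted above:
# ~31,000 user volumes at roughly half a gigabyte each.
volumes = 31000          # rounded from the 30,821 quoted earlier
gb_per_volume = 0.5      # "roughly half a gigabyte each"
total_tb = volumes * gb_per_volume / 1000   # decimal GB -> TB
print(f"{total_tb:.1f} TB")   # about 15.5 TB, i.e. "about 15tb"
```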
It might be an interesting project for some student(s) to reimplement
Kerberos 5 for Plan 9... it's something of an open question of just how
minimal and tasteful the implementation can be when it's not MIT code. ;)
Indeed, if anyone ever does look at it, can I vote for including
the hooks for
On Dec 4, 2008, at 8:35 PM, Dave Eckhardt wrote:
At some distant point in the past (last century, actually)
I was drawn to AFS because of the features, but left in
horror because of the complexity.
The goal was adding an enterprise-scale distributed file
system to an existing operating system
On Dec 4, 2008, at 8:43 PM, erik quanstrom wrote:
On Thu Dec 4 23:37:02 EST 2008, [EMAIL PROTECTED] wrote:
supported 400 users on 120 workstations in 1984; this
evening CMU's AFS cell hosts 30,821 user volumes, roughly
half a gigabyte each; there are cells with more users and
cells with more bits.
On Dec 2, 2008, at 5:36 PM, Dan Cross wrote:
On Tue, Dec 2, 2008 at 7:07 PM, erik quanstrom
[EMAIL PROTECTED] wrote:
currently one can prevent external changes to a
namespace by creating a unique ns with rfork.
if /proc/$pid/ns were writable, this would not
be possible without yet another mechanism.
AFS has its warts, but, trust me, if you've used it for a while,
you will not find yourself excitedly perusing the volume location
database to see where your bits are coming from.
Is there an AFS client for plan9 anywhere?
Just curious.
-Steve
On Thu, 2008-12-04 at 02:39 -0500, Dave Eckhardt wrote:
P.S. I've seen this disbelief in the fact that automounter + NFS
actually can be really convenient mostly come from Linux people.
Perspective depends on experience.
AFS has its warts, but, trust me, if you've used it for a while,
you will not find yourself excitedly perusing the volume location
database to see where your bits are coming from.
At some distant point in the past (last century, actually)
I was drawn to AFS because of the features, but left in
horror because of the complexity.
The goal was adding an enterprise-scale distributed file
system to an existing operating system (Unix), where
enterprise-scale meant 5,000 users
On Thu Dec 4 23:37:02 EST 2008, [EMAIL PROTECTED] wrote:
supported 400 users on 120 workstations in 1984; this
evening CMU's AFS cell hosts 30,821 user volumes, roughly
half a gigabyte each; there are cells with more users and
cells with more bits.
31000/2 is about 15tb. that seems pretty reasonable these days.
On Thu, Dec 04, 2008 at 02:58:15PM +, Steve Simon wrote:
AFS has its warts, but, trust me, if you've used it for a while,
you will not find yourself excitedly perusing the volume location
database to see where your bits are coming from.
Is there an AFS client for plan9 anywhere?
P.S. I've seen this disbelief in the fact that automounter + NFS
actually can be really convenient mostly come from Linux people.
Perspective depends on experience.
AFS has its warts, but, trust me, if you've used it for a while,
you will not find yourself excitedly perusing the volume location
database to see where your bits are coming from.
Hi, Russ!
First of all -- thanks a lot for answering all of my questions
in a very detailed manner. I really do appreciate it!
Now, if you don't mind, I still have just one question left:
On Mon, 2008-12-01 at 16:55 -0800, Russ Cox wrote:
That's very similar to what I referred to as a synthetic
On Tue, Dec 02, 2008 at 10:04:57AM -0800, Roman V. Shaposhnik wrote:
I would imagine that making '#p'/$pid/ns writable and receptive
to messages of exactly the same format that is output right now
(plus an 'unmount X Y' message) would be a very natural thought in
a Plan9 environment. Yet,
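A hypothetical sketch of what accepting such messages might involve: splitting ns(1)-style lines into an operation plus flags and arguments. The line format, sample lines, and helper below are assumptions for illustration, not an existing Plan 9 interface.

```python
# Hypothetical sketch: split ns(1)-style lines -- the format a
# writable '#p'/$pid/ns might accept -- into (verb, flags, args).
# Illustration only; not a real Plan 9 interface.

def parse_ns_line(line):
    """Return (verb, flags, args) for one ns(1)-style line."""
    fields = line.split()
    verb = fields[0]                                   # bind/mount/unmount
    flags = [f for f in fields[1:] if f.startswith('-')]
    args = [f for f in fields[1:] if not f.startswith('-')]
    return verb, flags, args

# Sample lines in the style of ns(1) output:
for line in ["bind -a '#I' /net",
             "mount -b '#s/factotum' /mnt/factotum",
             "unmount /mnt/temp /mnt"]:
    print(parse_ns_line(line))
```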
On Tue, 2008-12-02 at 13:31 -0500, Nathaniel W Filardo wrote:
Namespaces form a large part of the security component of the Plan 9 model,
and (AFAICT) cross-namespace work is underinvestigated
It would be, in fact, a fair answer.
since it starts to look a lot like something that could
since nfs is always directly mounted, i think you are confusing
direct mounts with things that are accessible because you have
mounted a server which has mounted something else.
I don't think I'm confusing anything here. In fact, your statement
that nfs is always directly mounted seems to be
On Tue, 2008-12-02 at 14:29 -0500, erik quanstrom wrote:
i would think that either you want encapsulation or you don't.
see-through encapsulation would seem to me to be a contradiction
in terms.
Thanks for the feedback. Let's see if you change your mind after the
explanation given
On Tue, 2008-12-02 at 21:05 +0100, hiro wrote:
I still don't understand what kind of feature you are missing. Could
it be that you just want a naming convention for your mount places?
Writable '#p/id/ns'
Thanks,
Roman.
P.S. Unless somebody tells me that it is a bad idea with the explanation
nope. sorry. i would hate to see such a botch in plan 9.
if you want to distribute load by having multiple fs, then
it should be done so that the client wouldn't know or care
that any distribution is going on.
I think you're deliberately exaggerating here. You must
know full well,
a couple of questions come to mind. how does writing
to a ns interact with shared namespaces? does it automatically
fork the namespace? seems iffy. which leads to the next
obvious question
how do you prevent a race between changing the namespace and
opening fds?
and, what about open
None of these questions are any different in this
context than if there was simply some other process
sharing the name space and doing the same manipulations.
currently one can prevent external changes to a
namespace by creating a unique ns with rfork.
if /proc/$pid/ns were writable, this would not be possible
without yet another mechanism.
On Tue, 2008-12-02 at 19:07 -0500, erik quanstrom wrote:
None of these questions are any different in this
context than if there was simply some other process
sharing the name space and doing the same manipulations.
currently one can prevent external changes to a
namespace by creating a unique ns with rfork.
On Tue, 2008-12-02 at 16:35 -0500, erik quanstrom wrote:
nope. sorry. i would hate to see such a botch in plan 9.
if you want to distribute load by having multiple fs, then
it should be done so that the client wouldn't know or care
that any distribution is going on.
I think
On Tue, Dec 2, 2008 at 7:07 PM, erik quanstrom [EMAIL PROTECTED] wrote:
currently one can prevent external changes to a
namespace by creating a unique ns with rfork.
if /proc/$pid/ns were writable, this would not
be possible without yet another mechanism.
chmod? I guess it comes back to,
On Tue, Dec 2, 2008 at 8:26 PM, Roman V. Shaposhnik [EMAIL PROTECTED] wrote:
The client does not pick. It is part of the automounter's decision.
And once the server gets picked by the automounter, it is awfully
convenient that you see the actual mount as part of the namespace.
Folks are
That's what bns does on Plan B.
AFAIK, there's no way on Plan 9 to automate mounts, making
everything work after the FS goes away.
aan?
- erik
bns != aan
On Mon, Dec 1, 2008 at 3:34 PM, erik quanstrom [EMAIL PROTECTED] wrote:
That's what bns does on Plan B.
AFAIK, there's no way on Plan 9 to automate mounts, making
everything work after the FS goes away.
aan?
well, sure. i wasn't saying that they are the same.
i
Maybe I misunderstood.
I mean that unless the server is reached in exactly the same way
(which, in general, it is not if you want something like automount),
aan is not enough.
It's fine to reach the same FS at the same address when the net goes
and comes back, but otherwise it is not, IIRC.
On Mon,
On Mon, Dec 1, 2008 at 9:48 AM, Russ Cox [EMAIL PROTECTED] wrote:
The automounter is symptomatic of an ill that Plan 9 has cured.
Since adding to the name space requires no special privileges,
ordinary users can mount the servers they want to use directly,
The other reason for an automounter
On Mon, 2008-12-01 at 10:17 -0800, ron minnich wrote:
But this need for an automounter has not really existed for probably
17 years or so ... NFS servers are pretty reliable in many cases. It
is interesting to see the use case for automounters change.
Right. I'm actually too young to be able
On Mon, Dec 1, 2008 at 1:31 PM, Roman V. Shaposhnik [EMAIL PROTECTED] wrote:
In Plan9 land you don't need automounter to deal with
/media/floppy. But cd /net/machinename is not there.
At least not by default.
I see what you're after. If that's all you want, though, I have to
confess I don't
On Mon, 01 Dec 2008 10:25:09 PST Roman V. Shaposhnik [EMAIL PROTECTED]
wrote:
P.S. I have always wanted to be able to trade namespaces
between different processes the same way file descriptors get
traded using #s. On the other hand, I have never ever possessed
enough insight into the
Won't srvfs (see exportfs(4)) do what you want (packaging up a
namespace)?
Russ, could you please be a tad more specific as to what ill
exactly you are referring to?
I was referring to needing special privilege to mount something.
While I agree that Plan9 completely removes the need for
automounter to be a privileged application, I still don't
see an easy way