Welcome, but don't mistake me for someone with the background and
experience in plan 9 needed to comment with any sort of authority.

I won't ;-)

 I'm not sure there's as much difference as you make it out to be.

There is a huge difference. Almost as much as there is between NAT and RSVP.

the other hand you have an open tcp/ip connection and a file server
waiting for 9p requests. It's not as though 9p is wasting bandwidth
chatting away while there's no activity, so the only cost is the
tcp/ip connection to each client on the network, which shouldn't
qualify as a huge amount of resources.

It actually does qualify. I believe (though I could be wrong) that state information and communication buffers are the biggest memory expenditure in network operations.

There _could_ be a trade-off between the transient NAT with its processing-power toll and the persistent /net-import with its memory cost. However, systems like FreeBSD pre-allocate and always keep a number of network buffers, so the processing-power toll of transience almost vanishes if the kernel is fine-tuned for its load. By contrast, on a large network the /net-import strategy could make a "powerful" gateway unavoidable, because every machine on the network will need a session with the gateway even if it only rarely communicates with the outside world, unless you implement an uglier-than-NAT client-side dial-on-demand.
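
To put rough numbers on that trade-off, here is a back-of-envelope sketch. Every figure in it is an assumption for illustration, not a measurement:

#include <stdio.h>

/* Back-of-envelope cost of one persistent 9P-over-TCP session per
   internal host. Every figure here is an assumption for the sake
   of illustration, not a measured value. */
int
main(void)
{
	long hosts = 1000;          /* internal machines */
	long tcpstate = 1*1024;     /* assumed TCP control block */
	long sockbufs = 2*16*1024;  /* assumed send+receive buffers */
	long fidstate = 4*1024;     /* assumed per-client fids etc. */
	long persession = tcpstate + sockbufs + fidstate;

	printf("per session: %ld KB\n", persession/1024);
	printf("%ld hosts: %ld MB\n", hosts, hosts*persession/(1024*1024));
	return 0;
}

Under these made-up figures a thousand mostly idle clients pin a few tens of megabytes on the gateway; whether that counts as "huge" is exactly where we seem to disagree.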

at a cost (the sort of costs that are hard to recognise as costs
because we're so used to them. Every time I remind myself that /net
removes the need for port forwarding I get shivers).

I don't know about implementing NAT, but configuring most NAT implementations, e.g. FreeBSD's natd(8) or the Linux-based one in my router, is very easy. I don't see how port forwarding is possible at all with an imported /net. Assuming /net represents an underlying IP network, how is one supposed to tell /net to redirect inbound traffic to either machine X or machine Y depending on the port the traffic is destined for?
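
For instance, a port forward with natd(8) is a single line in /etc/natd.conf (192.168.0.10 being a hypothetical internal web server):

# pass inbound TCP port 80 on the gateway to the internal web server
redirect_port tcp 192.168.0.10:80 80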

 What makes /net tick depends on what you export on /net. The kernel
serves your basic /net, yes, but there's nothing to stop you having a
userspace file server on top of that to do whatever filtering you
like.

That would break the protocol stack. 9P is an application-layer protocol (or so I understand). It should _never_ see, or worse, rewrite, network-layer data units. If by "a file server on top of that" you actually mean a file server under that, then you are simply re-inventing NAT.
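
For the sake of argument, here is roughly what such an interposer could look like, sketched with lib9p. It is untested, the policy and names are made up, and it is simplified to a single ctl file; a real one would have to relay permitted messages to the kernel's /net:

#include <u.h>
#include <libc.h>
#include <fcall.h>
#include <thread.h>
#include <9p.h>

/* Toy filtering server: exposes a single ctl file and vets each
   control message. A real interposer would forward permitted
   messages to the underlying kernel /net. */
static void
fswrite(Req *r)
{
	char msg[128];
	long n;

	n = r->ifcall.count;
	if(n >= (long)sizeof msg)
		n = sizeof msg - 1;
	memmove(msg, r->ifcall.data, n);
	msg[n] = '\0';

	/* hypothetical policy: refuse dials to one blocked address */
	if(strstr(msg, "192.168.0.13") != nil){
		respond(r, "refused by policy");
		return;
	}
	/* here a real server would write msg to the real /net ctl */
	r->ofcall.count = r->ifcall.count;
	respond(r, nil);
}

static Srv fs = {
	.write = fswrite,
};

void
threadmain(int argc, char *argv[])
{
	Tree *t;

	USED(argc); USED(argv);
	t = alloctree(nil, nil, DMDIR|0555, nil);
	createfile(t->root, "ctl", nil, 0666, nil);
	fs.tree = t;
	/* serve at a scratch mount point; bind or import over /net as desired */
	threadpostmountsrv(&fs, nil, "/mnt/netfilter", MREPL);
	threadexits(nil);
}

Note that it never touches an IP datagram, only the control strings, which is why I say it is either not NAT at all or NAT re-invented one layer up.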

 UNIX seems to have coped with changing constraints by bolting more
and more junk on the side...

Are there other ways of doing it?

 *ahem* Plan 9 seems to have more of a tendency to adapt. I'm sure the
adoption of utf-8 and the switch from 9p to 9p2000 aren't the only
examples of system-wide changes.

Or is it that Plan 9 has much less inertia because of its smaller user base?

 More to the point, I'm yet to see a richer set of abstractions come
out of another system. Private namespaces, resources as files... they
might be ancient ideas, but everyone else is still playing catch up.
They might not be the ultimate ideal, but if we push them far enough
we might learn something.

Everyone else isn't playing catch up. Most others seem to watch the development here, pick what is to their liking, and do it their own way.

As for an example of a richer set of abstractions, take Microsoft's .NET Framework. There are so many abstractions and layers of abstraction that you don't know where to begin. Every conceivable way of representing the same old thing, i.e. a computer's resources, is in there. That's one reason I don't like programming in .NET. I doubt any of these abstractions, except the most rudimentary ones, actually ease development.

 'scuse me if I'm silent for a while, I've been spending too much time
pondering and it's starting to affect my work. *9fans breathes a sigh
of relief*

Take your time. You never promised to tutor me through and through ;-)

(I don't see why 9fans should be annoyed by a few emails).

--On Friday, November 14, 2008 1:55 AM +0900 sqweek <[EMAIL PROTECTED]> wrote:

On Wed, Nov 12, 2008 at 10:23 PM, Eris Discordia
<[EMAIL PROTECTED]> wrote:
First off, thank you so much, sqweek. When someone on 9fans tries to put
things in terms of basic abstract ideas instead of technical ones I
really appreciate it--I actually learn something.

 Welcome, but don't mistake me for someone with the background and
experience in plan 9 needed to comment with any sort of authority.

 It doesn't stop at 9p:
* plan 9 doesn't need to bother with NAT, since you can just
import /net from your gateway.
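
For concreteness, the idiom is a one-liner ("gateway" standing in for whatever your gateway is called; see import(4)):

import -a gateway /net

after which anything dialled through the client's /net actually goes out from the gateway.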

I understand that if you import a gateway's /net on each computer in a
rather large internal network you will be consuming a huge amount of
mostly redundant resources on the gateway. My impression is that each
imported instance of /net requires a persistent session to be
established between the gateway and the host on the internal network.
NAT in comparison is naturally transient.

 I'm not sure there's as much difference as you make it out to be. On the
one hand, you have a NAT gateway listening for tcp/ip packets, and on
the other hand you have an open tcp/ip connection and a file server
waiting for 9p requests. It's not as though 9p is wasting bandwidth
chatting away while there's no activity, so the only cost is the
tcp/ip connection to each client on the network, which shouldn't
qualify as a huge amount of resources.
 If it does, you have the same problem with any service you want to
provide to the whole network, so the techniques you use to solve it
there can be applied to the gateway. So *maybe* you could get away
with a weaker machine serving NAT instead of /net, but it would come
at a cost (the sort of costs that are hard to recognise as costs
because we're so used to them. Every time I remind myself that /net
removes the need for port forwarding I get shivers).

With an imported /net, since there's no packet rewriting at the
network layer (e.g. IP) and the "redirection" occurs at the
application layer, there's no chance of capturing spoofed packets
except by hacking what makes /net tick (the kernel?).

 What makes /net tick depends on what you export on /net. The kernel
serves your basic /net, yes, but there's nothing to stop you having a
userspace file server on top of that to do whatever filtering you
like.

Does that mean a new design "from scratch" is
always bound to be better suited to current constraints?

 It very often is in my experience. But it's also very easy to leave
something important out of "current constraints" when designing from
scratch, or ignore the lessons learned by the previous iteration.

Also, if you think
UNIX and clones are flawed today because of their history and origins,
what makes you think Plan 9 doesn't suffer from "diseases" it contracted
from its original birthplace? I can't forget the Jukebox example.

 UNIX seems to have coped with changing constraints by bolting more
and more junk on the side...
* "Whoa, here comes a network, we're going to need some more syscalls!"
* "Non-english languages? Better rig up some new codepages!"

As pointed out previously in this same thread, though in jest,
the Linux community always finds a way to exhaust a
(consistent) model's options, and then come the extensions and "hacks."
That, however, is not the Linux community's fault--it's the nature of an
advancing science and technology that always welcomes not only new users
but also new types of users and entirely novel use cases.

 It's a matter of approach. Linux takes what I like to call the cookie
monster approach, which is MORE MORE MORE. More syscalls, more ioctls,
more program flags, more layers of indirection, a constant enumeration
of every use case. Rarely is there a pause to check whether several
use cases can be coalesced into a single more general way of doing
things, or to consider whether the feature could be better implemented
elsewhere in the system. This has a tendency to disrupt conceptual
integrity, which hastens the above process.

 These days the dearth of developers makes it difficult to distinguish
any daring developments on plan 9, but during the decades a different
derivation has been demonstrated.
 *ahem* Plan 9 seems to have more of a tendency to adapt. I'm sure the
adoption of utf-8 and the switch from 9p to 9p2000 aren't the only
examples of system-wide changes. The early labs history is rife with
stories of folk joining and trying out all sorts of new stuff. The
system feels like it has a more experimental nature - things that
don't work get dropped and lessons are learned from mistakes. Which is
sadly somewhat rare in software.

 More to the point, I'm yet to see a richer set of abstractions come
out of another system. Private namespaces, resources as files... they
might be ancient ideas, but everyone else is still playing catch up.
They might not be the ultimate ideal, but if we push them far enough
we might learn something.

The problem is it forces the server and client to synchronise on every
read/write syscall, which results in terrible bandwidth utilisation.

An example of a disease contracted from a system's birthplace.

 You're pretty quick to describe it as a disease. Plan 9 learned a
lot from the mistakes of UNIX, but the base syscalls are something
that stuck. I wouldn't expect that to happen without good reason.
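
To put a rough number on the synchronisation cost mentioned above (a sketch; both figures are assumptions):

#include <stdio.h>

/* With one outstanding 9P request at a time, a client can read at
   most one maximum-size message per network round trip. Both
   figures are assumed for illustration. */
int
main(void)
{
	double msize = 8192.0;  /* assumed 9P message size, bytes */
	double rtt = 0.050;     /* assumed WAN round-trip time, s */

	printf("ceiling: %.0f KB/s\n", msize/rtt/1024.0);
	return 0;
}

That ceiling, about 160 KB/s, holds no matter how fat the link is; on a LAN with sub-millisecond round trips the same arithmetic is why the problem mostly goes unnoticed.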

 'scuse me if I'm silent for a while, I've been spending too much time
pondering and it's starting to affect my work. *9fans breathes a sigh
of relief*
-sqweek

