Re: [9fans] rc's shortcomings (new subject line)
Hi, just my 2¢ for the fanning out of pipes: if I remember correctly, mycroftiv's hubfs (or some other software in his contrib) allowed one to do exactly this. hth On 08/28/2012 09:41 PM, Dan Cross wrote: On Tue, Aug 28, 2012 at 8:56 PM, erik quanstrom quans...@quanstro.net wrote: And rc is not perfect. I've always felt like the 'if not' stuff was a kludge. no, it's certainly not. (i wouldn't call if not a kludge—just ugly. Kludge perhaps in the sense that it seems to be there to work around an issue with the grammar and the expectation that it's mostly going to be used interactively, as opposed to programmatically. See below. the haahr/rakitzis es' if makes more sense, even if it's weirder.) Agreed; es would be an interesting starting point for a new shell. but the real question with rc is, what would you fix? I think in order to really answer that question, one would have to step back for a moment and really think about what one wants out of a shell. There seems to be a natural conflict between a programming language and a command interpreter (e.g., the 'if' vs. 'if not' thing). On which side does one err? i can only think of a few things around the edges. `{} and $ are obvious, as is some way to use standard regular expressions. but those really aren't that motivating. rc does enough. I tend to agree. As a command interpreter, rc is more or less fine as is. I'd really only feel motivated to change whatever people felt were common nits, and there are fairly few of those. perhaps (let's hope) someone else has better ideas. Well, something off the top of my head: Unix pipelines are sort of like chains of coroutines. And they work great for defining linear combinations of filters. But something that may be interesting would be the ability to allow the stream of computations to branch; instead of pipelines being just a list, make them a tree, or even some kind of dag (if one allows for the possibility of recombining streams). 
That would be kind of an interesting thing to play with in a shell language; I don't know how practically useful it would be, though. switch/case would make a helluva difference over nested if/if not, if it defaulted to fall-through. maybe you have an example? because i don't see that. if not works fine, and can be nested. case without fallthrough is also generally what i want. if not, i can make the common stuff a function. variable scoping (better than a subshell) would help writing larger scripts, but that's not necessarily an improvement ;-) something similar to LISP's `let' special form, for dynamic binding. (A nit: 'let' actually introduces lexical scoping in most Lisp variants; yes, doing (let ((a 1)) ...) has non-lexical effect if 'a' is a dynamic variable in Common Lisp, but (let) doesn't itself introduce dynamic variables. Emacs Lisp is a notable exception in this regard.) there is variable scoping. you can write

	x=() y=() cmd

cmd can be a function body or whatever. x and y are then private to cmd. you can nest redefinitions.

	x=1 y=2 {echo first $x $y; x=a y=b {echo second $x $y; x=α y=β {echo third $x $y}; echo ret second $x $y}; echo ret first $x $y}
	first 1 2
	second a b
	third α β
	ret second a b
	ret first 1 2

This syntax feels clunky and unfamiliar to me; rc resembles block scoped languages like C; I'd rather have a 'local' or similar keyword to introduce a variable in the scope of each '{ }' block. you should try the es shell. es had let and some other scheme-y features. let allows one to do all kinds of tricky stuff, like build a shell debugger in the shell, but my opinion is that es was more powerful and fun, but it didn't buy enough because it didn't really expand on the essential nature of a shell: what can one do to manipulate processes and file descriptors? es was a weird merger between rc's syntax and functional programming concepts. It's neat-ish, but unless we're really ready to go to the pipe monad (not that weird, in my opinion) you're right. 
Still, if it allowed one to lexically bind a file descriptor to a variable, I could see that being neat; could I have a closure over a file descriptor? I don't think the underlying process model is really set up for it, but it would be kind of cool: one could have different commands consuming part of a stream in a very flexible way. - Dan C.
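Dan's tree-shaped pipeline idea is easy to sketch in a CSP-style language. The following is purely illustrative and in Go rather than rc (Go only because its channels make the plumbing explicit); the name `fanOut` and the string payloads are invented for the example. One stream fans out to two consumers, so the "pipeline" is a tree rather than a list:

```go
package main

import (
	"fmt"
	"strings"
)

// fanOut copies every item from in onto each of the outs, closing the
// outs once the input is exhausted -- the branching point of the tree.
func fanOut(in <-chan string, outs ...chan<- string) {
	for s := range in {
		for _, out := range outs {
			out <- s
		}
	}
	for _, out := range outs {
		close(out)
	}
}

func main() {
	in := make(chan string)
	a := make(chan string)
	b := make(chan string)

	go func() {
		for _, s := range strings.Fields("one two three") {
			in <- s
		}
		close(in)
	}()
	go fanOut(in, a, b)

	// two different consumers share the same upstream: one counts
	// words, the other totals their lengths
	done := make(chan string, 2)
	go func() {
		n := 0
		for range a {
			n++
		}
		done <- fmt.Sprintf("count=%d", n)
	}()
	go func() {
		total := 0
		for s := range b {
			total += len(s)
		}
		done <- fmt.Sprintf("bytes=%d", total)
	}()
	fmt.Println(<-done, <-done)
}
```

A DAG (recombining streams) would just add a merge stage on the far side of the branches.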
Re: [9fans] rc's shortcomings (new subject line)
On Wed, Aug 29, 2012 at 7:27 PM, erik quanstrom quans...@quanstro.net wrote: rc already has non-linear pipelines. but they're not very convenient. And somewhat limited. There's no real concept of 'fanout' of output, for instance (though that's a fairly trivial command, so probably doesn't count), or multiplexing input from various sources that would be needed to implement something like a shell-level data flow network. Muxing input from multiple sources is hard when the data isn't somehow self-delimited. [...] There may be other ways to achieve the same thing; I remember that the boundaries of individual writes used to be preserved on read, but I think that behavior changed somewhere along the way; maybe with the move away from streams? Or perhaps I'm misremembering? pipes still preserve write boundaries, as does il. (even the 0-byte write) but tcp of course by definition does not. but either way, the protocol would need to be self-framed to be transported on tcp. and even then, there are protocols that are essentially serial, like tls. Right. I think this is the reason for Bakul's question about s-expressions or JSON or a similar format; those formats are inherently self-delimiting. The problem with that is that, for passing those things around to work without some kind of reverse 'tee'-like intermediary, the system has to understand that the things being transferred are s-expressions or JSON records or whatever, not just streams of uninterpreted bytes. We've steadfastly rejected such system-imposed structure on files in Unix-y type environments since 1969. But conceptually, these IPC mechanisms are sort of similar to channels in CSP-style languages. A natural question then becomes, how do CSP-style languages handle the issue? 
Channels work around the muxing thing by being typed; elements placed onto a channel are indivisible objects of that type, so one doesn't need to worry about interference from other objects simultaneously placed onto the same channel in other threads of execution. Could we do something similar with pipes? I don't know that anyone wants typed file descriptors; that would open a whole new can of worms. Maybe the building blocks are all there; one could imagine some kind of 'splitter' program that could take input and rebroadcast it across multiple output descriptors. Coupled with some kind of 'merge' program that could take multiple input streams and mux them onto a single output, one could build nearly arbitrarily complicated networks of computations connected by pipes. Maybe for simplicity constrain these to be DAGs. With a notation to describe these computation graphs, one could just do a topological sort of the graph, create pipes in all the appropriate places and go from there. Is the shell an appropriate place for such a thing? Forsyth's link looks interesting; I haven't read through the paper in detail yet, but it sort of reminded me of LabVIEW in a way (where non-programmers wire together data flows using boxes and arrows and stuff). i suppose i'm stepping close to sawzall now. Actually, I think you're stepping closer to the reducers stuff Rich Hickey has done recently in Clojure, though there's admittedly a lot of overlap with the sawzall way of looking at things. my knowledge of both is weak. :-) The Clojure reducers stuff is kind of slick. Consider a simple reduction in Lisp; say, summing up a list of numbers or something like that. In Common Lisp, we may write this as: (reduce #'+ '(1 2 3 4 5)) In clojure, the same thing would be written as: (reduce + [1 2 3 4 5]) The problem is how the computation is performed. 
To illustrate, here's a simple definition of 'reduce' written in Scheme (R5RS doesn't have a standard 'reduce' function, but it is most commonly written to take an initial element, so I do that here).

	(define (reduce binop a bs)
	  (if (null? bs)
	      a
	      (reduce binop (binop a (car bs)) (cdr bs))))

Notice how the recursive depth of the function is linear in the length of the list. But, if one thinks about what I'm doing here (just addition of simple numbers) there's no reason this can't be done in parallel. In particular, if I can split the list into evenly sized parts and recurse, I can limit the recursive depth of the computation to O(lg n). Something more like:

	(define (reduce binop a bs)
	  (if (null? bs)
	      a
	      (let ((halves (split-into-halves bs)))
	        (binop (reduce binop a (car halves))
	               (reduce binop a (cadr halves))))))

If I can exploit parallelism to execute functions in the recursion tree simultaneously, I can really cut down on execution time. The requirement is that binop over a and bs's is a monoid; that is, binop is associative over the set from which 'a' and 'bs' are drawn, and 'a' is an identity element. This sounds wonderful, of course, but in Lisp and Scheme, lists are built from cons cells, and even if I have some magic
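The divide-and-conquer shape above can also be sketched in a language with cheap threads; here is a hypothetical Go rendering (Go only to make the parallelism concrete), where a slice, unlike a cons list, really can be split in O(1). As the text requires, `binop` must be associative and `id` must be its identity:

```go
package main

import "fmt"

// reduce applies an associative binop over xs, splitting the slice in half
// and reducing the two halves concurrently; id must be binop's identity.
func reduce(binop func(int, int) int, id int, xs []int) int {
	if len(xs) == 0 {
		return id
	}
	if len(xs) == 1 {
		return xs[0]
	}
	mid := len(xs) / 2
	left := make(chan int, 1)
	go func() { left <- reduce(binop, id, xs[:mid]) }() // one half in parallel
	right := reduce(binop, id, xs[mid:])
	return binop(<-left, right)
}

func main() {
	add := func(a, b int) int { return a + b }
	// the same computation as (reduce + [1 2 3 4 5]) in Clojure
	fmt.Println(reduce(add, 0, []int{1, 2, 3, 4, 5})) // prints 15
}
```

The recursion depth is O(lg n), and associativity is what licenses regrouping the halves.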
Re: [9fans] rc's shortcomings (new subject line)
typed command languages: I F Currie, J M Foster, Curt: The Command Interpreter Language for Flex http://www.vitanuova.com/dist/doc/rsre-3522-curt.pdf
Re: [9fans] rc's shortcomings (new subject line)
rejected such system-imposed structure on files in Unix-y type environments since 1969. [...] other threads of execution. Could we do something similar with pipes? I don't know that anyone wants typed file descriptors; that would open a whole new can of worms. i don't see that the os can really help here. lib9p has no problem turning an undelimited byte stream → 9p messages. there's no reason any other format couldn't get the same treatment. said another way, we already have typed streams, but they're not enforced by the operating system. one can also use the thread library technique, using shared memory. Consider a simple reduction in Lisp; say, summing up a list of numbers or something like that. In Common Lisp, we may write this as: (reduce #'+ '(1 2 3 4 5)) In clojure, the same thing would be written as: (reduce + [1 2 3 4 5]) this reminds me of a bit of /bin/man. it seemed that the case statement to generate a pipeline of formatting commands was awkward—verbose and yet limited.

	fn pipeline{
		if(~ $#* 0)
			troff $Nflag $Lflag -$MAN | $postproc
		if not{
			p = $1; shift
			$p | pipeline $*
		}
	}

	fn roff {
		...
		fontdoc $2 | pipeline $preproc
	}

http://clojure.com/blog/2012/05/08/reducers-a-library-and-model-for-collection-processing.html Your example of running multiple 'grep's in parallel sort of reminded me of this, though it occurs to me that this can probably be done with a command: a sort of 'parallel apply' thing that can run a command multiple times concurrently, each invocation on a range of the arguments. But making it simple and elegant is likely to be tricky. actually, unless i misread (i need more coffee), the blog sounds just like xargs. - erik
Re: [9fans] rc's shortcomings (new subject line)
On Thursday 30 of August 2012 15:35:47 Dan Cross wrote: (...) Your example of running multiple 'grep's in parallel sort of reminded me of this, though it occurs to me that this can probably be done with a command: a sort of 'parallel apply' thing that can run a command multiple times concurrently, each invocation on a range of the arguments. But making it simple and elegant is likely to be tricky. now that i think of it... mk creates a DAG of dependencies and then reduces it by calling commands, going in parallel where applicable. erik's example with grep x *.[ch] boils down to two cases: - for single use, do it the simple slow way -- just run a single grep process for all files - but when you expect to traverse those files often, prepare a mkfile (preferably in a semi-automatic way) which will perform the search in parallel. caveat: output of one grep instance could end up in the midst of a /line/ of output of another grep instance. -- dexen deVries [[[↓][→]]] I'm sorry that this was such a long letter, but I didn't have time to write you a short one. -- Blaise Pascal
Re: [9fans] rc's shortcomings (new subject line)
caveat: output of one grep instance could end up in the midst of a /line/ of output of another grep instance. grep -b. but in general if the bio library had an option to output line-wise, then the problem could be avoided. otherwise, one would need to mux the output. - erik
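The muxing erik mentions is what a CSP-style 'merge' gives you for free. As a hypothetical sketch (in Go rather than rc, with invented names), here is a fan-in that muxes several line streams onto one channel; because each channel send is indivisible, lines from different producers can interleave but never tear mid-line the way raw byte writes can:

```go
package main

import (
	"fmt"
	"sync"
)

// merge muxes any number of input channels onto a single output channel,
// closing the output once every input has been drained.
func merge(ins ...<-chan string) <-chan string {
	out := make(chan string)
	var wg sync.WaitGroup
	wg.Add(len(ins))
	for _, in := range ins {
		go func(in <-chan string) {
			defer wg.Done()
			for s := range in {
				out <- s // one whole line per send; never torn
			}
		}(in)
	}
	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}

func main() {
	a := make(chan string, 1)
	b := make(chan string, 1)
	a <- "from a"
	b <- "from b"
	close(a)
	close(b)
	for s := range merge(a, b) {
		fmt.Println(s)
	}
}
```

A shell-level equivalent would be the reverse-'tee' intermediary discussed earlier in the thread, with line (or record) granularity enforced by the mux itself rather than by every writer.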
Re: [9fans] rc's shortcomings (new subject line)
On Thu, Aug 30, 2012 at 7:11 PM, dexen deVries dexen.devr...@gmail.com wrote: On Thursday 30 of August 2012 15:35:47 Dan Cross wrote: (...) Your example of running multiple 'grep's in parallel sort of reminded me of this, though it occurs to me that this can probably be done with a command: a sort of 'parallel apply' thing that can run a command multiple times concurrently, each invocation on a range of the arguments. But making it simple and elegant is likely to be tricky. now that i think of it... mk creates a DAG of dependencies and then reduces it by calling commands, going in parallel where applicable. erik's example with grep x *.[ch] boils down to two cases: - for single use, do it the simple slow way -- just run a single grep process for all files - but when you expect to traverse those files often, prepare a mkfile (preferably in a semi-automatic way) which will perform the search in parallel. caveat: output of one grep instance could end up in the midst of a /line/ of output of another grep instance. The thing is that mk doesn't really do anything to set up connections between the commands it runs.
Re: [9fans] rc's shortcomings (new subject line)
On Thu, Aug 30, 2012 at 7:51 PM, Dan Cross cro...@gmail.com wrote: A parallel apply sort of thing could be used with xargs, of course; 'whatever | xargs papply foo' could keep some $n$ of foo's running at the same time. The magic behind 'papply foo `{whatever}' is that it knows how to interpret its arguments in blocks. xargs will invoke a command after reading $n$ arguments, but that's mainly to keep from overflowing the argument buffer, and (to my knowledge) it won't try to keep multiple instances running in parallel. Oops, I should have checked the man page before I wrote. It seems that at least some versions of xargs have a '-P' for 'parallel' mode.
Re: [9fans] rc's shortcomings (new subject line)
On Thursday 30 of August 2012 09:47:59 you wrote: caveat: output of one grep instance could end up in the midst of a /line/ of output of another grep instance. grep -b. but in general if the bio library had an option to output line-wise, then the problem could be avoided. otherwise, one would need to mux the output. to quote you, erik, "pipes still preserve write boundaries, as does il" -- so, hopefully, a dumb pipe to cat would do the job...? :^)

	grep-single-directory:VQ: $FILES_IN_THE_DIR
		grep $regex $prereq | cat

-- dexen deVries [[[↓][→]]] I'm sorry that this was such a long letter, but I didn't have time to write you a short one. -- Blaise Pascal
Re: [9fans] rc's shortcomings (new subject line)
The thing is that mk doesn't really do anything to set up connections between the commands it runs. it does. the connections are through the file system. - erik
Re: [9fans] rc's shortcomings (new subject line)
On Thu, Aug 30, 2012 at 7:03 PM, erik quanstrom quans...@quanstro.net wrote: rejected such system-imposed structure on files in Unix-y type environments since 1969. [...] other threads of execution. Could we do something similar with pipes? I don't know that anyone wants typed file descriptors; that would open a whole new can of worms. i don't see that the os can really help here. lib9p has no problem turning an undelimited byte stream → 9p messages. there's no reason any other format couldn't get the same treatment. Yeah, I don't see much here unless one breaks the untyped stream model (from the perspective of the system). said another way, we already have typed streams, but they're not enforced by the operating system. Yes, but then every program that participates in one of these computation networks has to have that type knowledge baked in. The Plan 9/Unix model seems to preclude a general mechanism. one can also use the thread library technique, using shared memory. Sure, but that doesn't do much for designing a new shell. :-) Consider a simple reduction in Lisp; say, summing up a list of numbers or something like that. In Common Lisp, we may write this as: (reduce #'+ '(1 2 3 4 5)) In clojure, the same thing would be written as: (reduce + [1 2 3 4 5]) this reminds me of a bit of /bin/man. it seemed that the case statement to generate a pipeline of formatting commands was awkward—verbose and yet limited.

	fn pipeline{
		if(~ $#* 0)
			troff $Nflag $Lflag -$MAN | $postproc
		if not{
			p = $1; shift
			$p | pipeline $*
		}
	}

	fn roff {
		...
		fontdoc $2 | pipeline $preproc
	}

Ha! That's something. I'm not sure what, but definitely something (I actually kind of like it). 
http://clojure.com/blog/2012/05/08/reducers-a-library-and-model-for-collection-processing.html Your example of running multiple 'grep's in parallel sort of reminded me of this, though it occurs to me that this can probably be done with a command: a sort of 'parallel apply' thing that can run a command multiple times concurrently, each invocation on a range of the arguments. But making it simple and elegant is likely to be tricky. actually, unless i misread (i need more coffee), the blog sounds just like xargs. Hmm, not exactly. xargs would be like reducers if xargs somehow asked stdin to apply a program to itself. A parallel apply sort of thing could be used with xargs, of course; 'whatever | xargs papply foo' could keep some $n$ of foo's running at the same time. The magic behind 'papply foo `{whatever}' is that it knows how to interpret its arguments in blocks. xargs will invoke a command after reading $n$ arguments, but that's mainly to keep from overflowing the argument buffer, and (to my knowledge) it won't try to keep multiple instances running in parallel. Hmm, I'm afraid I'm off in the realm of thinking out loud at this point. Sorry if that's noisy for folks. - Dan C.
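The 'parallel apply' idea above can be sketched quickly. The following Go version is purely illustrative: `papply`, the block size, and the callback standing in for an exec'd command are all invented names. It splits an argument list into blocks and runs at most n invocations at once, keeping results in block order:

```go
package main

import (
	"fmt"
	"sync"
)

// papply splits args into blocks of at most blockSize and applies f to the
// blocks concurrently, with at most n invocations in flight at once.
// Results come back in block order regardless of completion order.
func papply(n, blockSize int, args []string, f func([]string) string) []string {
	var blocks [][]string
	for len(args) > 0 {
		k := blockSize
		if k > len(args) {
			k = len(args)
		}
		blocks = append(blocks, args[:k])
		args = args[k:]
	}
	results := make([]string, len(blocks))
	sem := make(chan struct{}, n) // caps concurrency at n
	var wg sync.WaitGroup
	for i, b := range blocks {
		wg.Add(1)
		go func(i int, b []string) {
			defer wg.Done()
			sem <- struct{}{}
			results[i] = f(b) // stand-in for exec'ing a command on the block
			<-sem
		}(i, b)
	}
	wg.Wait()
	return results
}

func main() {
	out := papply(2, 2, []string{"a", "b", "c", "d", "e"}, func(b []string) string {
		return fmt.Sprint(b)
	})
	fmt.Println(out) // prints [[a b] [c d] [e]]
}
```

A shell version would exec the command on each block and collect the exit statuses; xargs -P does roughly this, minus the ordered result collection.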
Re: [9fans] rc's shortcomings (new subject line)
grep -b. but in general if the bio library had an option to output line-wise, then the problem could be avoided. otherwise, one would need to mux the output. to quote you, erik, pipes still preserve write boundaries, as does il so, hopefully, a dumb pipe to cat would do the job...? :^)

	grep-single-directory:VQ: $FILES_IN_THE_DIR
		grep $regex $prereq | cat

i think you still need grep -b, because otherwise grep uses the bio library to buffer output, and bio doesn't respect lines. - erik
Re: [9fans] rc's shortcomings (new subject line)
On Thu, Aug 30, 2012 at 7:56 PM, erik quanstrom quans...@quanstro.net wrote: The thing is that mk doesn't really do anything to set up connections between the commands it runs. it does. the connections are through the file system. No. The order in which commands are run (or if they are run at all) is based on file timestamps, so in that sense it uses the filesystem for coordination, but mk itself doesn't do anything to facilitate interprocess communications between the commands it runs (for example setting up pipes between commands). - Dan C.
Re: [9fans] rc's shortcomings (new subject line)
Hmm, I'm afraid I'm off in the realm of thinking out loud at this point. Sorry if that's noisy for folks. THANK YOU. if 9fans needs anything, it's more thinking. i'm not an edison fan, but i do like one thing he said, which was that he had not failed, but simply discovered that the $n ways he'd tried so far do not work. - erik
Re: [9fans] rc's shortcomings (new subject line)
On Thu, Aug 30, 2012 at 7:56 PM, erik quanstrom quans...@quanstro.net wrote: The thing is that mk doesn't really do anything to set up connections between the commands it runs. it does. the connections are through the file system. No. The order in which commands are run (or if they are run at all) is based on file timestamps, so in that sense it uses the filesystem for coordination, but mk itself doesn't do anything to facilitate interprocess communications between the commands it runs (for example setting up pipes between commands). what i was saying is that mk knows and ensures that the output files are there. the fact that it's not in the middle of the conversation is an implementation detail, imho. that is, mk is built on the assumption that programs communicate through files; $O^c communicates to $O^l by producing .$O files. mk rules know this. - erik
Re: [9fans] rc's shortcomings (new subject line)
If you look at the paper I referenced, you will. Similar abilities appeared in systems that supported persistence and persistent programming languages (cf. Malcolm Atkinson, not Wikipedia). On 30 August 2012 14:33, erik quanstrom quans...@quanstro.net wrote: i don't see that the os can really help here.
Re: [9fans] rc's shortcomings (new subject line)
On Thu Aug 30 10:28:24 EDT 2012, cro...@gmail.com wrote: said another way, we already have typed streams, but they're not enforced by the operating system. Yes, but then every program that participates in one of these computation networks has to have that type knowledge baked in. The Plan 9/Unix model seems to preclude a general mechanism. that's what i thought when i first read the plan 9 papers. but it turns out that it works out just fine for file servers, ssl, authentication, etc. why can't it work for another type of agreed protocol? obviously you'd need something along the lines of tlsclient/tlssrv if you wanted normal programs to do this, but it might be that just a subset of programs are really interested in participating. one can also use the thread library technique, using shared memory. Sure, but that doesn't do much for designing a new shell. :-) the shell itself could have channels, without the idea escaping into the wild. - erik
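What "the shell itself could have channels" might look like can be sketched in Go (hypothetically; `record` and `gather` are invented for the example). Many producers share one typed channel, and because each send is an indivisible value, they can all blab on the same channel without tearing each other's records apart, which is exactly what a shared byte pipe cannot guarantee:

```go
package main

import (
	"fmt"
	"sync"
)

// record is a stand-in for a self-typed object traveling on the channel.
type record struct {
	producer string
	payload  []int
}

// gather has one producer per name blab on the same channel at once and
// collects whatever arrives; each record arrives whole, never interleaved
// mid-object with another producer's record.
func gather(names []string) []record {
	ch := make(chan record)
	var wg sync.WaitGroup
	for _, name := range names {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			ch <- record{producer: name, payload: []int{1, 2, 3}}
		}(name)
	}
	go func() { wg.Wait(); close(ch) }()
	var rs []record
	for r := range ch {
		rs = append(rs, r)
	}
	return rs
}

func main() {
	for _, r := range gather([]string{"a", "b", "c"}) {
		fmt.Println(r.producer, len(r.payload))
	}
}
```

The arrival order is nondeterministic, as with any mux, but the unit of transfer is the record, not the byte.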
Re: [9fans] rc's shortcomings (new subject line)
On Thursday 30 of August 2012 10:41:38 erik quanstrom wrote: what i was saying is that mk knows and ensures that the output files are there. the fact that it's not in the middle of the conversation is an implementation detail, imho. that is, mk is built on the assumption that programs communicate through files; $O^c communicates to $O^l by producing .$O files. mk rules know this. shouldn't be the case for rules with virtual targets (V). such rules are always executed, and the order should only depend on the implementation of DAG traversal. ``Files may be made in any order that respects the preceding restrictions'', from the manpage. if mk was used for executing grep in parallel, prerequisites would be actual files, but targets would be virtual; probably 1...$NPROC targets per directory. anyway, a meld of Rc shell and mk? crazy idea. -- dexen deVries [[[↓][→]]] I'm sorry that this was such a long letter, but I didn't have time to write you a short one. -- Blaise Pascal
Re: [9fans] rc's shortcomings (new subject line)
As another example, also from Flex, J M Foster, I F Currie, Remote Capabilities, The Computer Journal, 30(5), 1987, pp. 451-7. http://comjnl.oxfordjournals.org/content/30/5/451.full.pdf On 30 August 2012 15:45, Charles Forsyth charles.fors...@gmail.com wrote: If you look at the paper I referenced, you will. Similar abilities appeared in systems that supported persistence and persistent programming languages (cf. Malcolm Atkinson, not Wikipedia).
Re: [9fans] rc's shortcomings (new subject line)
anyway, a meld of Rc shell and mk? crazy idea. Inferno (Vitanuova) released a mash a ways back, but apparently the sources were lost. It was mind-bogglingly interesting! ++L
Re: [9fans] rc's shortcomings (new subject line)
anyway, a meld of Rc shell and mk? crazy idea. Inferno (Vitanuova) released a mash a ways back, but apparently the sources were lost. It was mind-bogglingly interesting! In case anyone's interested (like I was): http://www.vitanuova.com/inferno/man/1/mash.html -- Burton Samograd
Re: [9fans] rc's shortcomings (new subject line)
Errr ... no. Twice: mash was not VN code but brucee's preemptive strike against a POSIX shell for Lucent's Inferno; VN's Inferno had a shell with a different style done by Roger Peppe. On 30 August 2012 16:13, Lucio De Re lu...@proxima.alt.za wrote: Inferno (Vitanuova) released a mash a ways back, but apparently the sources were lost.
Re: [9fans] rc's shortcomings (new subject line)
The source of mash as VN inherited it from the defunct Lucent organisation on 1 September 1999 remains in the tree, so it wasn't lost. On 30 August 2012 16:13, Charles Forsyth charles.fors...@gmail.com wrote: On 30 August 2012 16:13, Lucio De Re lu...@proxima.alt.za wrote: Inferno (Vitanuova) released a mash a ways back, but apparently the sources were lost.
Re: [9fans] rc's shortcomings (new subject line)
anyway, a meld of Rc shell and mk? crazy idea. What was mash? -sl
Re: [9fans] rc's shortcomings (new subject line)
Errr ... no. Twice: mash was not VN code but brucee's preemptive strike against a POSIX shell for Lucent's Inferno; VN's Inferno had a shell with a different style done by Roger Peppe. I do apologise. Mash was genial! The VN shell was remarkable in a very different way. ++L
Re: [9fans] rc's shortcomings (new subject line)
On Thu, 30 Aug 2012 15:35:47 +0530 Dan Cross cro...@gmail.com wrote: On Wed, Aug 29, 2012 at 7:27 PM, erik quanstrom quans...@quanstro.net wrote: rc already has non-linear pipelines. but they're not very convenient. And somewhat limited. There's no real concept of 'fanout' of output, for instance (though that's a fairly trivial command, so probably doesn't count), or multiplexing input from various sources that would be needed to implement something like a shell-level data flow network. Muxing input from multiple sources is hard when the data isn't somehow self-delimited. [...] There may be other ways to achieve the same thing; I remember that the boundaries of individual writes used to be preserved on read, but I think that behavior changed somewhere along the way; maybe with the move away from streams? Or perhaps I'm misremembering? pipes still preserve write boundaries, as does il. (even the 0-byte write) but tcp of course by definition does not. but either way, the protocol would need to be self-framed to be transported on tcp. and even then, there are protocols that are essentially serial, like tls. Right. I think this is the reason for Bakul's question about s-expressions or JSON or a similar format; those formats are inherently self-delimiting. Indeed. The problem with that is that, for passing those things around to work without some kind of reverse 'tee'-like intermediary, the system has to understand that the things being transferred are s-expressions or JSON records or whatever, not just streams of uninterpreted bytes. We've steadfastly rejected such system-imposed structure on files in Unix-y type environments since 1969. I think that it is time to try something new. A lot of things don't fit into this model of bytepipes connecting processes. A lot of commands even in this model use line objects. But the kind of composability one gets in Scheme, go, functional languages etc is missing. 
Even Go seems to go the one-language-for-all Lispy way -- its typed channels are all within a single address space. [Actually I would have much preferred if they had just focused on adding channels and parallel processes to a Scheme instead of creating a whole new language but that is a whole 'nother discussion!] But conceptually, these IPC mechanisms are sort of similar to channels in CSP-style languages. A natural question then becomes, how do CSP-style languages handle the issue? Channels work around the muxing thing by being typed; elements placed onto a channel are indivisible objects of that type, so one doesn't need to worry about interference from other objects simultaneously placed onto the same channel in other threads of execution. Could we do something similar with pipes? I don't know that anyone wants typed file descriptors; that would open a whole new can of worms. I am suggesting channels of self-typed objects. The idea of communicating self-identifying objects between loosely coupled processes is blindingly obvious to me. And as long as you have one producer/one consumer pair, there are no problems in implementing this on any system but the unix model of inheriting file descriptors (or even passing them around to unrelated processes) and then letting them all blab indiscriminately on the same channel just doesn't work. So again Unix gets in the way. This sounds wonderful, of course, but in Lisp and Scheme, lists are built from cons cells, and even if I have some magic 'split-into-halves' function that satisfies the requirements of reduce, doing so is still necessarily linear, so I don't gain much. Besides, having to pass around the identity all the time is a bummer. You can use cdr-coding. Or just use arrays like APL/j/k. But a knowledge of function properties is essential to implement them efficiently and for the one arg version (without the initial value). In k, / is the reduce operator, so you can say:

	+/!3	// !3 == 0 1 2
	0+/!3	// two arg version of +

And */!0 and +/!0 do the right thing because the language knows identity elements for + and *. But in clojure, the Lisp concept of a list (composed of cons cells) is generalized into the concept of a 'seq'. A seq is just a sequence of things; it could be a list, a vector, some other container (say, a sequence of key/value pairs derived from some kind of associative structure), or a stream of data being read from a file or network connection. What's the *real* problem here? The issue is that reduce knows too much about the things it is reducing over. Doing things sequentially is easy, but slow; doing things in parallel requires that reduce know a lot about the type of thing it's reducing over (e.g., this magic 'split-into-halves' function). Further, that might not be appropriate for *all* sequence types; e.g., files or lists made from cons cells. Even more common than reduce
Re: [9fans] rc's shortcomings (new subject line)
Even more common than reduce is map. No reason why you can't parallelize 8c *.c we already do—with mk. - erik
Re: [9fans] rc's shortcomings (new subject line)
That's true, but the C compiler also does each .c in parallel up to NPROC. On 30 August 2012 18:18, erik quanstrom quans...@labs.coraid.com wrote: Even more common than reduce is map. No reason why you can't parallelize 8c *.c we already do—with mk.
Re: [9fans] rc's shortcomings (new subject line)
On Thu Aug 30 16:08:23 EDT 2012, charles.fors...@gmail.com wrote: That's true, but the C compiler also does each .c in parallel up to NPROC. On 30 August 2012 18:18, erik quanstrom quans...@labs.coraid.com wrote: Even more common than reduce is map. No reason why you can't parallelize 8c *.c we already do—with mk. ha! i hadn't seen that before. too bad mk doesn't (generally) use it. - erik
Re: [9fans] rc's shortcomings (new subject line)
On Thu Aug 30 10:56:52 EDT 2012, charles.fors...@gmail.com wrote: As another example, also from Flex, J M Foster, I F Currie, Remote Capabilities, The Computer Journal, 30(5), 1987, pp. 451-7. http://comjnl.oxfordjournals.org/content/30/5/451.full.pdf very interesting. the paper says that although capabilities are enforced by microcode, and therefore can be proven, it would be possible to build the system from cots parts. so i'm wondering if kernel participation is necessary? - erik
Re: [9fans] rc's shortcomings (new subject line)
On Tuesday 28 of August 2012 16:34:10 erik quanstrom wrote: my knee-jerk reaction to my own question is that making it easier and more natural to parallelize dataflow. a pipeline is just a really low-level way to talk about it. the standard grep x *.[ch] forces all the *.[ch] to be generated before 1 instance of grep runs on whatever *.[ch] evaluates to be. but it would be okay for almost every use of this if *.[ch] were generated in parallel with any number of grep's being run. (in Linux terms, sorry!) you can get close with find|xargs -- it runs the command for every -L number lines of input. AFAIK xargs does not parallelize the execution itself. find -name '*.[ch]' | xargs -L 8 grep REGEX -- dexen deVries [[[↓][→]]] I'm sorry that this was such a long letter, but I didn't have time to write you a short one. -- Blaise Pascal
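[Editorial note: contrary to the AFAIK above, GNU and BSD xargs do parallelize the execution themselves via -P. A minimal sketch, using hypothetical generated files and the regex 'int':]

```shell
#!/bin/sh
# xargs -P runs up to that many child processes at once; here up to 4
# greps run concurrently, each handed at most 8 file names.
# (With untrusted file names, prefer find ... -print0 | xargs -0.)
set -e
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT
for i in 1 2 3 4 5 6 7 8 9 10; do
    echo "int x$i;" > "$tmp/f$i.c"
done
echo "float y;" > "$tmp/g.c"    # the one file that won't match

matches=$(find "$tmp" -name '*.c' -print | xargs -n 8 -P 4 grep -l 'int' | wc -l)
echo "$matches"
```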
Re: [9fans] rc's shortcomings (new subject line)
On Wednesday 29 of August 2012 09:06:35 arisawa wrote: Hello, On 2012/08/29, at 4:34, dexen deVries wrote: now i see i can do: x=1 y=2 z=3 ...and only `z' retains its new value in the external scope, while `x' and `y' are limited in scope. No. ar% a=1 b=2 c=3; echo $a $b $c 1 2 3 ar% a=() b=() c=() ar% a=1 b=2 {c=3}; echo $a $b $c 3 ar% indeed, thanks. -- dexen deVries
Re: [9fans] rc's shortcomings (new subject line)
On Wed, Aug 29, 2012 at 2:04 AM, erik quanstrom quans...@quanstro.net wrote: the haahr/rakitzis es' if makes more sense, even if it's weirder.) Agreed; es would be an interesting starting point for a new shell. es is great input. there are really cool ideas there, but it does seem like a lesson learned to me, rather than a starting point. Starting point conceptually, if not in implementation. I think in order to really answer that question, one would have to step back for a moment and really think about what one wants out of a shell. There seems to be a natural conflict between a programming language and a command interpreter (e.g., the 'if' vs. 'if not' thing). On which side does one err? since the raison d'être of a shell is to be a command interpreter, i'd go with that. Fair enough, but that will color the flavor of the shell when used as a programming language. Then again, Inferno's shell was able to successfully navigate both in a comfortable manner by using clever facilities available in that environment (module loading and the like). It's not clear how well that works in an environment like Unix, let alone Plan 9. I tend to agree. As a command interpreter, rc is more or less fine as is. I'd really only feel motivated to change whatever people felt were common nits, and there are fairly few of those. there are nits of omission, and those can be fixable. ($x(n-m) was added) Right. perhaps (let's hope) someone else has better ideas. Well, something off the top of my head: Unix pipelines are sort of like chains of coroutines. And they work great for defining linear combinations of filters. But something that may be interesting would be the ability to allow the stream of computations to branch; instead of pipelines being just a list, make them a tree, or even some kind of dag (if one allows for the possibility of recombining streams). That would be kind of an interesting thing to play with in a shell language; I don't know how practically useful it would be, though. 
rc already has non-linear pipelines. but they're not very convenient. And somewhat limited. There's no real concept of 'fanout' of output, for instance (though that's a fairly trivial command, so probably doesn't count), or multiplexing input from various sources that would be needed to implement something like a shell-level data flow network. Muxing input from multiple sources is hard when the data isn't somehow self-delimited. For specific applications this is solvable by the various pieces of the computation just agreeing on how to represent data and having a program that takes that into account do the muxing, but for a general mechanism it's much more difficult, and the whole self-delimiting thing breaks the Unix 'data as text' abstraction by imposing a more rigid structure. There may be other ways to achieve the same thing; I remember that the boundaries of individual writes used to be preserved on read, but I think that behavior changed somewhere along the way; maybe with the move away from streams? Or perhaps I'm misremembering? I do remember that it led to all sorts of hilarious arguments about what the behavior of things like 'write(fd, buf, 0)' should induce in the reading side of things, but this was a long time ago. Anyway, maybe something along the lines of 'read a message of length <= SOME_MAX_SIZE from a file descriptor; the message boundaries are determined by the sending end and preserved by read/write' could be leveraged here without too much disruption to the current model. i think part of the problem is answering the question, what problem would we like to solve. because a better shell just isn't well-defined enough. Agreed. my knee-jerk reaction to my own question is that making it easier and more natural to parallelize dataflow. a pipeline is just a really low-level way to talk about it. the standard grep x *.[ch] forces all the *.[ch] to be generated before 1 instance of grep runs on whatever *.[ch] evaluates to be. 
but it would be okay for almost every use of this if *.[ch] were generated in parallel with any number of grep's being run. i suppose i'm stepping close to sawzall now. Actually, I think you're stepping closer to the reducers stuff Rich Hickey has done recently in Clojure, though there's admittedly a lot of overlap with the sawzall way of looking at things. - Dan C.
Re: [9fans] rc's shortcomings (new subject line)
rc already has non-linear pipelines. but they're not very convenient. And somewhat limited. There's no real concept of 'fanout' of output, for instance (though that's a fairly trivial command, so probably doesn't count), or multiplexing input from various sources that would be needed to implement something like a shell-level data flow network. Muxing input from multiple sources is hard when the data isn't somehow self-delimited. [...] There may be other ways to achieve the same thing; I remember that the boundaries of individual writes used to be preserved on read, but I think that behavior changed somewhere along the way; maybe with the move away from streams? Or perhaps I'm misremembering? pipes still preserve write boundaries, as does il. (even the 0-byte write) but tcp of course by definition does not. but either way, the protocol would need to be self-framed to be transported on tcp. and even then, there are protocols that are essentially serial, like tls. i suppose i'm stepping close to sawzall now. Actually, I think you're stepping closer to the reducers stuff Rich Hickey has done recently in Clojure, though there's admittedly a lot of overlap with the sawzall way of looking at things. my knowledge of both is weak. :-) - erik
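[Editorial aside: since tcp doesn't preserve write boundaries, a protocol carried over it must frame itself, as erik says. A toy sketch of 9p-style framing in sh: each message is a decimal byte count, a newline, then exactly that many payload bytes (9p proper uses a 4-byte binary size prefix). The function names are made up for illustration.]

```shell
#!/bin/sh
# Minimal self-framing over a byte stream: length prefix, newline,
# then the payload.  ASCII payloads assumed (${#1} counts characters).

frame() {        # frame MSG: write one framed message to stdout
    printf '%s\n%s' "${#1}" "$1"
}

unframe() {      # read framed messages from stdin, emit one per line
    while IFS= read -r len; do
        dd bs=1 count="$len" 2>/dev/null   # byte-exact read of the payload
        echo
    done
}

out=$({ frame 'hello'; frame 'wor ld'; } | unframe)
echo "$out"
```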
Re: [9fans] rc's shortcomings (new subject line)
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.78.5331 Paul Haeberli, ConMan: A Visual Programming Language for Interactive Graphics (1988) I supervised a student who did an implementation for a Blit-like environment on the Sun3 as a project; unfortunately I didn't keep a copy. I remember there were several things left to work out based on the paper. (The Blit-like environment replaced megabytes of Sunview, in case you were wondering, and enabled some serious fun. Sunview enabled some serious head-banging.)
Re: [9fans] rc's shortcomings (new subject line)
switch/case would make helluva difference over nested if/if not, if defaulted to fall-through. maybe you have an example? because i don't see that. if not works fine, and can be nested. case without fallthrough is also generally what i want. if not, i can make the common stuff a function. variable scoping (better than subshell) would help writing larger scripts, but that's not necessarily an improvement ;-) something similar to LISP's `let' special form, for dynamic binding. there is variable scoping. you can write x=() y=() cmd cmd can be a function body or whatever. x and y are then private to cmd. you can nest redefinitions. x=1 y=2 {echo first $x $y; x=a y=b {echo second $x $y; x=α y=β {echo third $x $y}; echo ret second $x $y}; echo ret first $x $y} first 1 2 second a b third α β ret second a b ret first 1 2 you should try the es shell. es had let and some other scheme-y features. let allows one to do all kinds of tricky stuff, like build a shell debugger in the shell, but my opinion is that es was more powerful and fun, but it didn't buy enough because it didn't really expand on the essential nature of a shell. what can one do to manipulate processes and file descriptors. - erik
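[Editorial aside: POSIX sh has the per-command half of the rc scoping erik describes. An assignment written as a prefix to a command goes into that command's environment only and does not leak into the calling shell:]

```shell
#!/bin/sh
# POSIX sh analogue of rc's "x=() y=() cmd": an assignment prefixed to
# a command is visible to that command only.  (This holds for external
# commands; special built-ins behave differently.)
x=outer
inner=$(x=inner sh -c 'echo $x')   # child sees the prefixed value
echo "$inner"                      # -> inner
echo "$x"                          # -> outer, unchanged here
```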
Re: [9fans] rc's shortcomings (new subject line)
On Tuesday 28 of August 2012 14:44:40 erik quanstrom wrote: (...) variable scoping (better than subshell) would help writing larger scripts, but that's not necessarily an improvement ;-) something similar to LISP's `let' special form, for dynamic binding. there is variable scoping. you can write x=() y=() cmd thank you good sire, for you've just made my day. now i see i can do: x=1 y=2 z=3 ...and only `z' retains its new value in the external scope, while `x' and `y' are limited in scope. hooray for rc and helpful 9fans, -- dexen deVries 1972 - Dennis Ritchie invents a powerful gun that shoots both forward and backward simultaneously. Not satisfied with the number of deaths and permanent maimings from that invention he invents C and Unix.
Re: [9fans] rc's shortcomings (new subject line)
On Tue, Aug 28, 2012 at 8:56 PM, erik quanstrom quans...@quanstro.net wrote: And rc is not perfect. I've always felt like the 'if not' stuff was a kludge. no, it's certainly not. (i wouldn't call if not a kludge—just ugly. Kludge perhaps in the sense that it seems to be there to work around an issue with the grammar and the expectation that it's mostly going to be used interactively, as opposed to programmatically. See below. the haahr/rakitzis es' if makes more sense, even if it's weirder.) Agreed; es would be an interesting starting point for a new shell. but the real question with rc is, what would you fix? I think in order to really answer that question, one would have to step back for a moment and really think about what one wants out of a shell. There seems to be a natural conflict between a programming language and a command interpreter (e.g., the 'if' vs. 'if not' thing). On which side does one err? i can only think of a few things around the edges. `{} and $ are obvious, and so is some way to use standard regular expressions. but those really aren't that motivating. rc does enough. I tend to agree. As a command interpreter, rc is more or less fine as is. I'd really only feel motivated to change whatever people felt were common nits, and there are fairly few of those. perhaps (let's hope) someone else has better ideas. Well, something off the top of my head: Unix pipelines are sort of like chains of coroutines. And they work great for defining linear combinations of filters. But something that may be interesting would be the ability to allow the stream of computations to branch; instead of pipelines being just a list, make them a tree, or even some kind of dag (if one allows for the possibility of recombining streams). That would be kind of an interesting thing to play with in a shell language; I don't know how practically useful it would be, though. switch/case would make helluva difference over nested if/if not, if defaulted to fall-through. maybe you have an example? 
because i don't see that. if not works fine, and can be nested. case without fallthrough is also generally what i want. if not, i can make the common stuff a function. variable scoping (better than subshell) would help writing larger scripts, but that's not necessarily an improvement ;-) something similar to LISP's `let' special form, for dynamic binding. (A nit: 'let' actually introduces lexical scoping in most Lisp variants; yes, doing (let ((a 1)) ...) has non-lexical effect if 'a' is a dynamic variable in Common Lisp, but (let) doesn't itself introduce dynamic variables. Emacs Lisp is a notable exception in this regard.) there is variable scoping. you can write x=() y=() cmd cmd can be a function body or whatever. x and y are then private to cmd. you can nest redefinitions. x=1 y=2 {echo first $x $y; x=a y=b {echo second $x $y; x=α y=β {echo third $x $y}; echo ret second $x $y}; echo ret first $x $y} first 1 2 second a b third α β ret second a b ret first 1 2 This syntax feels clunky and unfamiliar to me; rc resembles block scoped languages like C; I'd rather have a 'local' or similar keyword to introduce a variable in the scope of each '{ }' block. you should try the es shell. es had let and some other scheme-y features. let allows one to do all kinds of tricky stuff, like build a shell debugger in the shell, but my opinion is that es was more powerful and fun, but it didn't buy enough because it didn't really expand on the essential nature of a shell. what can one do to manipulate processes and file descriptors. es was a weird merger between rc's syntax and functional programming concepts. It's neat-ish, but unless we're really ready to go to the pipe monad (not that weird, in my opinion) you're right. Still, if it allowed one to lexically bind a file descriptor to a variable, I could see that being neat; could I have a closure over a file descriptor? 
I don't think the underlying process model is really set up for it, but it would be kind of cool: one could have different commands consuming part of a stream in a very flexible way. - Dan C.
Re: [9fans] rc's shortcomings (new subject line)
On Tue, 28 Aug 2012 14:44:40 EDT erik quanstrom quans...@quanstro.net wrote: switch/case would make helluva difference over nested if/if not, if defaulted to fall-through. maybe you have an example? because i don't see that. if not works fine, and can be nested. case without fallthrough is also generally what i want. if not, i can make the common stuff a function. variable scoping (better than subshell) would help writing larger scripts, but that's not necessarily an improvement ;-) something similar to LISP's `let' special form, for dynamic binding. there is variable scoping. you can write x=() y=() cmd cmd can be a function body or whatever. x and y are then private to cmd. you can nest redefinitions. x=1 y=2 {echo first $x $y; x=a y=b {echo second $x $y; x=α y=β {echo third $x $y}; echo ret second $x $y}; echo ret first $x $y} first 1 2 second a b third α β ret second a b ret first 1 2 This is basically the same as let. Instead of let x=1 y=2 foo you say x=1 y=2 foo and this is lexical scoping. try lex=1 { echo $lex; } echo $lex vs { var=1; echo $var; } echo $var
Re: [9fans] rc's shortcomings (new subject line)
But something that may be interesting would be the ability to allow the stream of computations to branch; instead of pipelines being just a list, make them a tree, or even some kind of dag (if one allows for the possibility of recombining streams). Rc has this. It's great. See section 10 of the rc paper or {command} in the rc manual. I use it all the time to see differences between programmatically generated things. -- Aram Hăvărneanu
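[Editorial aside: the rc idiom Aram describes (diffing programmatically generated output) can be approximated in POSIX sh with named pipes in place of rc's non-linear pipeline syntax:]

```shell
#!/bin/sh
# Compare the output of two commands without manual temp-file
# bookkeeping, using FIFOs as stand-ins for rc's branch syntax.
set -e
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT
mkfifo "$tmp/a" "$tmp/b"

seq 1 5 > "$tmp/a" &    # one branch of the "pipeline"
seq 1 6 > "$tmp/b" &    # the other branch
out=$(diff "$tmp/a" "$tmp/b" || true)   # diff exits nonzero on difference
echo "$out"             # reports the extra line 6
```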
Re: [9fans] rc's shortcomings (new subject line)
On Wed, 29 Aug 2012 01:11:26 +0530 Dan Cross cro...@gmail.com wrote: On Tue, Aug 28, 2012 at 8:56 PM, erik quanstrom quans...@quanstro.net wrote: perhaps (let's hope) someone else has better ideas. Well, something off the top of my head: Unix pipelines are sort of like chains of coroutines. And they work great for defining linear combinations of filters. But something that may be interesting would be the ability to allow the stream of computations to branch; instead of pipelines being just a list, make them a tree, or even some kind of dag (if one allows for the possibility of recombining streams). That would be kind of an interesting thing to play with in a shell language; I don't know how practically useful it would be, though. Coming up with an easy to use syntax for computation trees (or arbitrary nets) is the hard part. May be the time is ripe for a net-rc or net-scheme-shell. The feature I want is the ability to pass not just character values in environment or pipes but arbitrary Scheme objects. But that requires changes at the OS level (or mapping them to/from strings, which is a waste if both sides can handle structured objects).
Re: [9fans] rc's shortcomings (new subject line)
the haahr/rakitzis es' if makes more sense, even if it's weirder.) Agreed; es would be an interesting starting point for a new shell. es is great input. there are really cool ideas there, but it does seem like a lesson learned to me, rather than a starting point. I think in order to really answer that question, one would have to step back for a moment and really think about what one wants out of a shell. There seems to be a natural conflict between a programming language and a command interpreter (e.g., the 'if' vs. 'if not' thing). On which side does one err? since the raison d'être of a shell is to be a command interpreter, i'd go with that. I tend to agree. As a command interpreter, rc is more or less fine as is. I'd really only feel motivated to change whatever people felt were common nits, and there are fairly few of those. there are nits of omission, and those can be fixable. ($x(n-m) was added) perhaps (let's hope) someone else has better ideas. Well, something off the top of my head: Unix pipelines are sort of like chains of coroutines. And they work great for defining linear combinations of filters. But something that may be interesting would be the ability to allow the stream of computations to branch; instead of pipelines being just a list, make them a tree, or even some kind of dag (if one allows for the possibility of recombining streams). That would be kind of an interesting thing to play with in a shell language; I don't know how practically useful it would be, though. rc already has non-linear pipelines. but they're not very convenient. i think part of the problem is answering the question, what problem would we like to solve. because a better shell just isn't well-defined enough. my knee-jerk reaction to my own question is that making it easier and more natural to parallelize dataflow. a pipeline is just a really low-level way to talk about it. 
the standard grep x *.[ch] forces all the *.[ch] to be generated before 1 instance of grep runs on whatever *.[ch] evaluates to be. but it would be okay for almost every use of this if *.[ch] were generated in parallel with any number of grep's being run. i suppose i'm stepping close to sawzall now. - erik
Re: [9fans] rc's shortcomings (new subject line)
On Tue, 28 Aug 2012 16:34:10 EDT erik quanstrom quans...@quanstro.net wrote: my knee-jerk reaction to my own question is that making it easier and more natural to parallelize dataflow. a pipeline is just a really low-level way to talk about it. the standard grep x *.[ch] forces all the *.[ch] to be generated before 1 instance of grep runs on whatever *.[ch] evaluates to be. Here the shell would have to understand program behavior. Consider something like 8l x.8 y.8 z.8 ... This can't be parallelized (but a parallelizable loader can be written). May be you can define a `par' command (sort of like xargs but invokes in parallel). echo *.[ch] | par -1 grep x but it would be okay for almost every use of this if *.[ch] were generated in parallel with any number of grep's being run. i suppose i'm stepping close to sawzall now. Be careful!
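[Editorial aside: the hypothetical `par' suggested above can be sketched with background jobs and wait. This toy version runs one command per input line, at most N at a time, with naive batching; real tools (GNU parallel, xargs -P) keep all slots busy instead of waiting for each whole batch.]

```shell
#!/bin/sh
# par N CMD ARGS...: read one extra argument per input line and run
# CMD on each, up to N jobs at a time (waits per batch -- naive).
par() {
    n=$1; shift
    i=0
    while IFS= read -r arg; do
        "$@" "$arg" &
        i=$((i + 1))
        if [ "$i" -ge "$n" ]; then wait; i=0; fi
    done
    wait
}

# Double each number, two jobs at a time; sort because completion
# order is not deterministic.
out=$(printf '%s\n' 1 2 3 4 5 | par 2 expr 2 '*' | sort -n)
echo "$out"
```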
Re: [9fans] rc's shortcomings (new subject line)
Hello, On 2012/08/29, at 4:34, dexen deVries wrote: now i see i can do: x=1 y=2 z=3 ...and only `z' retains its new value in the external scope, while `x' and `y' are limited in scope. No. ar% a=1 b=2 c=3; echo $a $b $c 1 2 3 ar% a=() b=() c=() ar% a=1 b=2 {c=3}; echo $a $b $c 3 ar% Kenji Arisawa
Re: [9fans] rc's shortcomings (new subject line)
On Tue, 28 Aug 2012 16:34:10 EDT erik quanstrom quans...@quanstro.net wrote: my knee-jerk reaction to my own question is that making it easier and more natural to parallelize dataflow. a pipeline is just a really low-level way to talk about it. the standard grep x *.[ch] forces all the *.[ch] to be generated before 1 instance of grep runs on whatever *.[ch] evaluates to be. Here the shell would have to understand program behavior. Consider something like 8l x.8 y.8 z.8 ... This can't be parallelized (but a parallelizable loader can be written). ya, ya. improving on rc in a noticeable way is hard. and thinking aloud is a bad idea. and a good way to look foolish. - erik
Re: [9fans] rc's shortcomings (new subject line)
The feature I want is the ability to pass not just character values in environment or pipes but arbitrary Scheme objects. But that requires changes at the OS level (or mapping them to/from strings, which is a waste if both sides can handle structured objects). !? the ability to pass typed records around is an idea that was tarred, feathered, drawn and quartered by unix. files, and therefore streams, have no type. they are byte streams. one of the advantages of unix over, say, ibm systems, is that in unix it is not the os' business to care what you're passing about. but by the same token, if you are the application, you get to arrange these things by yourself. rc already passes structured data through the environment. rc variables in the environment are defined as var:[^ctl-a]* | ([^ctl-a]*) ctl-a list so there is precedent for this in shells. - erik
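[Editorial aside: per the grammar erik quotes, rc joins the elements of a list-valued environment variable with the ctl-a byte (0x01). A non-rc program can decode such a value by splitting on that byte; a sketch in sh:]

```shell
#!/bin/sh
# Decode rc's list-in-environment convention: elements joined by the
# ctl-a byte (octal 001), per the grammar quoted in the thread.
list=$(printf 'one\001two\001three')   # what rc would export for (one two three)

# Split back into one element per line.
decoded=$(printf '%s' "$list" | tr '\001' '\n')
echo "$decoded"
```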
Re: [9fans] rc's shortcomings (new subject line)
var:[^ctl-a]* | ([^ctl-a]*) ctl-a list sorry. s/list/var/ - erik
Re: [9fans] rc's shortcomings (new subject line)
On Tue, 28 Aug 2012 21:39:06 EDT erik quanstrom quans...@quanstro.net wrote: The feature I want is the ability to pass not just character values in environment or pipes but arbitrary Scheme objects. But that requires changes at the OS level (or mapping them to/from strings, which is a waste if both sides can handle structured objects). !? the ability to pass typed records around is an idea that was tarred, feathered, drawn and quartered by unix. files, and therefore streams, have no type. they are byte streams. I was not talking about records but s-expressions. json is kind of sort of the same thing. Without a generally useful and simple such mechanism, people end up devising their own. The 9p format for instance. And go has typed channels. rc already passes structured data through the environment. rc variables in the environment are defined as var:[^ctl-a]* | ([^ctl-a]*) ctl-a list so there is precedent for this in shells. And this.
Re: [9fans] rc's shortcomings (new subject line)
On Tue, 28 Aug 2012 21:39:06 EDT erik quanstrom quans...@quanstro.net wrote: The feature I want is the ability to pass not just character values in environment or pipes but arbitrary Scheme objects. But that requires changes at the OS level (or mapping them to/from strings, which is a waste if both sides can handle structured objects). !? the ability to pass typed records around is an idea that was tarred, feathered, drawn and quartered by unix. files, and therefore streams, have no type. they are byte streams. I was not talking about records but s-expressions. json is kind of sort of the same thing. Without a generally useful and simple such mechanism, people end up devising their own. The 9p format for instance. And go has typed channels. it sounds like you're saying 9p isn't useful. i must be reading your post incorrectly. - erik
Re: [9fans] rc's shortcomings (new subject line)
On Tue, 28 Aug 2012 22:23:20 EDT erik quanstrom quans...@quanstro.net wrote: On Tue, 28 Aug 2012 21:39:06 EDT erik quanstrom quans...@quanstro.net wrote: The feature I want is the ability to pass not just character values in environment or pipes but arbitrary Scheme objects. But that requires changes at the OS level (or mapping them to/from strings, which is a waste if both sides can handle structured objects). !? the ability to pass typed records around is an idea that was tarred, feathered, drawn and quartered by unix. files, and therefore streams, have no type. they are byte streams. I was not talking about records but s-expressions. json is kind of sort of the same thing. Without a generally useful and simple such mechanism, people end up devising their own. The 9p format for instance. And go has typed channels. it sounds like you're saying 9p isn't useful. i must be reading your post incorrectly. 9p is quite useful. But the same semantics could've been implemented using a more universal but compact structured format such as s-expr. It is not the only choice but to me it seems to strike a reasonable balance (compared to bloaty XML at one extreme, tightly packed binary structures at another, and byte streams with printf/parse encode/decode at the third extreme).
Re: [9fans] rc's shortcomings (new subject line)
The feature I want is the ability to pass not just character values in environment or pipes but arbitrary Scheme objects. But that requires changes at the OS level (or mapping them to/from strings, which is a waste if both sides can handle structured objects). !? the ability to pass typed records around is an idea that was tarred, feathered, drawn and quartered by unix. files, and therefore streams, have no type. they are byte streams. I was not talking about records but s-expressions. json is kind of sort of the same thing. Without a generally useful and simple such mechanism, people end up devising their own. The 9p format for instance. And go has typed channels. it sounds like you're saying 9p isn't useful. i must be reading your post incorrectly. 9p is quite useful. But the same semantics could've been implemented using a more universal but compact structured format such as s-expr. It is not the only choice but to me it seems to strike a reasonable balance (compared to bloaty XML at one extreme, tightly packed binary structures at another, and byte streams with printf/parse encode/decode at the third extreme). i don't see the problem. 9p is not in any way special to the kernel. only devmnt knows about it, and it is only used to mount file servers. in theory, one could substitute something else. it wouldn't quite be plan 9, and it wouldn't be interoperable, but there's no reason it couldn't be done. authentication speaks special protocols. venti speaks a special protocol. so i don't see why kernel support would even be helpful in implementing your s-expression protocol. and there's no reason a 9p over s-expression device can't be implemented. imho, the reason for constraining 9p to exactly the operations needed is to make it easy to prove the protocol correct. - erik