Re: [9fans] 9P writes for directories

2009-03-27 Thread Roman Shaposhnik

On Mar 27, 2009, at 6:36 PM, Uriel wrote:

Some of us have been thinking about a 'sane' subset of HTTP plus some
conventions, that could reasonably map to 9p.


Interestingly enough, that's exactly the quest I'm on. I'd appreciate a
chance of talking to like-minded folks.


The main issue is the huge amounts of crud in the HTTP spec and how to
pick the sensible bits


Ever since I read "JavaScript: The Good Parts" (and realized along the
way that the good parts were really quite close to Scheme) I tend
to think that this approach of trying to identify "the good parts" is the
only thing that lets me keep my sanity in the modern world of
rampant web services. The good news (at least so far) seems to
be that there *are* good parts in HTTP.


and discard the rest while remaining compatible
with existing implementations; the main convention that needs to be
added is a way to list directory contents, likely using something not
too insane, like JSON.


Yeap.
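
Just to make that part concrete, here is a rough sketch (nothing
standardized, field names invented on the spot) of emitting such a JSON
listing straight from Plan 9 Dir entries:

#include <u.h>
#include <libc.h>

/* sketch only: dump a directory as a JSON array, one object per entry;
   quoting/escaping of names is omitted for brevity */
void
lsjson(char *path)
{
	int fd;
	long i, n, total;
	Dir *d;

	fd = open(path, OREAD);
	if(fd < 0)
		sysfatal("open %s: %r", path);
	total = 0;
	print("[");
	while((n = dirread(fd, &d)) > 0){
		for(i = 0; i < n; i++)
			print("%s{\"name\": \"%s\", \"length\": %lld, \"mtime\": %lud, \"dir\": %s}",
				total++ ? ", " : "",
				d[i].name, d[i].length, d[i].mtime,
				(d[i].mode & DMDIR) ? "true" : "false");
		free(d);
	}
	print("]\n");
	close(fd);
}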


With some thought and care one could come up with a very simple,
RESTful, and almost 9p-mappable replacement for the stinking bloated
WebDAV and SOAP/XML-RPC abominations.



Can't agree more. Another reason to clean up this mess is the potential
to significantly simplify the way a typical web service is written. With
luck, there could be something as simple as lib9p/libixp (or even
execnet) to make writing web services simple and fun.

I'd love to exchange ideas on a more appropriate forum (if there's any).


Thanks,
Roman.



Re: [9fans] grist for the "synchronous vs. asynchronous" mill

2009-03-27 Thread Roman Shaposhnik

On Mar 24, 2009, at 5:51 AM, roger peppe wrote:

http://www.classhat.com/tymaPaulMultithread.pdf


Java has its own share of issues when it comes to multithreading, I'd
rather see a presentation like that from the sort of guys who do
VoIP servers in C/C++ and things like that.

Thanks,
Roman.




[9fans] fossil caching venti errors

2009-03-27 Thread Nathaniel W Filardo
Entertaining.  Indented lines are fossil console; nonindented are at a
normal CPU prompt.

cpu% 9fs
9fs: venti i/o error or wrong score, block 558b88fbae4e0aa894c614fb3eeccf4d2f7492ca

  main: venti tcp!xxx.xxx.xxx.xxx!x

cpu% 9fs
9fs: venti i/o error or wrong score, block 558b88fbae4e0aa894c614fb3eeccf4d2f7492ca

  main: df
  main: 741,801,984 used + 38,548,643,840 free = 39,290,445,824 (1% used)

cpu% 9fs
usage: 9fs service [mountpoint]

Something about this seems wrong.  Suggestions?
--nwf;




[9fans] GSOC proposal

2009-03-27 Thread Bruce Ellis
Just a suggestion,

A good forth system using acme, probably based on fgb's 4th. The goal
is to conquer the Seaforth chip.

I know the dev kit is US$500 but their compiler and simulator, written
in forth, doesn't need hardware.

And at least two 9fans have a kit.

brucee



Re: [9fans] plan9 for calculations

2009-03-27 Thread Roman V. Shaposhnik

On 03/27/09 14:31, Rudolf Sykora wrote:

Hello everybody,

I noticed there are some thoughts about using plan9 on supercomputers.
For me, supercomputers are usually used to do some heavy calculations.
And this leads me to a question. What software is then used for
programming these calculations? (I mean e.g. linear algebra, i.e.
matrix calculations.) Where can one read about that?
  

If you are talking about established practices, then supercomputing *almost*
always means InfiniBand+MPI (managed by a grid engine).
On fat nodes (the kind of machines Sun used to sell) you might also find
OpenMP. But there will always be MPI, since no matter how fat the
node is -- the cluster is fatter. On the language side I've seen predominantly
Fortran and C++, although Ron was telling me horror stories about Python
and some other goo. As far as the libraries go: linpack is almost
always there, but a good place to look at what's relevant is here:
  http://docs.sun.com/app/docs/doc/819-0497
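
To give a feel for what that MPI style looks like in practice, here is a
minimal sketch (my own illustration, not from any particular package) that
splits a dot product across ranks:

#include <mpi.h>
#include <stdio.h>

/* sketch: each rank computes a partial sum, MPI_Reduce adds them on rank 0 */
int
main(int argc, char **argv)
{
	int rank, size, i;
	enum { N = 1000000 };
	double local, global;

	MPI_Init(&argc, &argv);
	MPI_Comm_rank(MPI_COMM_WORLD, &rank);
	MPI_Comm_size(MPI_COMM_WORLD, &size);

	local = 0.0;
	for(i = rank; i < N; i += size)		/* strided split of the index space */
		local += 1.0 * 1.0;		/* stand-in for x[i]*y[i] */
	MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

	if(rank == 0)
		printf("dot = %g\n", global);
	MPI_Finalize();
	return 0;
}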

PS.: It could be that plan9, being more an OS-research system than
anything else, is simply not suitable for such a use (there are no
plotting libraries or other utilities). Perhaps it's not a good idea at
all to try to use plan9 like that because it would be more work than
anything. Maybe using linux for such things with all its tools is just
ok. If you share this view, please just say so.
  
My personal take is that Plan9's forte might be the sane clustering of things.

MPI and grid engines are really quite horrible, but they are pervasive.

Thanks,
Roman.




Re: [9fans] GSOC: Drawterm for the iPhone

2009-03-27 Thread Eric Van Hensbergen
If you take the right approach you should be able to pave the way for  
all three.  Just keep the interface modular and implement the hooks  
for the target you are most comfortable with.


-eric

Sent from my iPhone

On Mar 27, 2009, at 8:21 PM, Uriel  wrote:

On Thu, Mar 26, 2009 at 10:22 PM, Pietro Gagliardi wrote:

A 9vx, p9p or inferno cocoa port is a project that seems fairly


I can do one of these; which is the most needed/wanted?


Personally I have no preference; any of the three would be great to
have. Probably the p9p one is the best tested, in the best shape,
and the best starting point, but I'm mostly guessing.

uriel





Re: [9fans] 9P writes for directories

2009-03-27 Thread Uriel
Some of us have been thinking about a 'sane' subset of HTTP plus some
conventions, that could reasonably map to 9p.

The main issue is the huge amounts of crud in the HTTP spec and how to
pick the sensible bits and discard the rest while remaining compatible
with existing implementations; the main convention that needs to be
added is a way to list directory contents, likely using something not
too insane, like JSON.

With some thought and care one could come up with a very simple,
RESTful, and almost 9p-mappable replacement for the stinking bloated
WebDAV and SOAP/XML-RPC abominations.

uriel

On Thu, Mar 26, 2009 at 11:05 PM, Roman Shaposhnik  wrote:
> On Mar 26, 2009, at 1:54 PM, Eric Van Hensbergen wrote:
>>>
>>> I have thought about that too, but became convinced that POST is more
>>> like create (or more like write on a subdirectory -- hence the original
>>> question). With the clone operation it is the *opening* of the clone
>>> device that provides you with a new fid. In HTTP that would be like
>>> getting
>>> a redirection on GET. Don't you think?
>>>
>>
>> Except that Creates give an ID/path for creation instead of receiving
>> one -- that's the key thing that makes it like clone, the most
>> important bit being that this sort of mechanism avoids collision.
>
> I believe we're on the same page, but the wording is  not always
> accurate.
>
>> Whether or not that is critical depends on how you write your app.  I
>> think the main difference here is you are trying to map HTTP syntax to
>> 9P syntax, and I've been thinking of semantics -- an HTTP POST to a
>> subdirectory would equal the opening of a clone file (within that
>> subdirectory), and writing the metadata,
>
> Interesting point. I guess it all depends on what the model is for
> creating new nodes in the URI tree. In fact, if I were to complete
> the analogy, then POST to an existing URI (although, nothing really exists
> in the world of URIs) corresponding to a "subdirectory" would be, in fact,
> what you say. A POST/PUT to a non-existing URI could be considered
> a creation of the named resource in its parent. Although I'm not sure that
> such a thing should always be allowed.
>
>> a read on that file would
>> return the ID -- this would be done atomically by the HTTP server to
>> service the POST not as a set of HTTP routines.
>
> If you mean PRG (http://en.wikipedia.org/wiki/Post/Redirect/Get) then
> yes.
>
>> I think the critical aspect from the REST/CRUD perspective is that the
>> POST
>> has to be idempotent
>
> Not as per HTTP RFC:
>    http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.5
>
>>>> Outside of the ID bit, why wouldn't create suffice?
>>>
>>> It would (just as Erik pointed out). I guess I was just looking for
>>> symmetry (if POST is really a write(*), it should translate into write
>>> independent of whether the URI corresponds to a subdirectory or
>>> not) and potential pitfalls that made 9P spec disallow writes on
>>> subdirectories (and since nobody can identify any of those -- I'll
>>> rest my case and proceed with translating POST into different
>>> 9P messages depending on the type of the URI).
>>>
>>
>> I don't think the symmetry is worth altering the semantics of the
>> protocol -- it's likely more trouble than it's worth in the long run.
>
> Agreed.
>
> Thanks,
> Roman.
>
> P.S. I wonder if there's a general interest in REST from the Plan9
> folks. I've seen your blog post on the subject, so there's that. It is
> actually quite fun to see how things like Google App Engine and
> http://konstrukt.dk/, etc can reap the same benefits from the elegance
> of FS-aware design. Could it be that with the right approach
> (lib9p/libixp plugin for apache?) writing web services could be
> as much fun as writing filesystems for Plan9?
>
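
For anyone following along: the clone idiom being mapped onto POST above
looks roughly like this from the client side. A minimal sketch -- /mnt/svc
is a made-up mount point, but the pattern is the same one /net/tcp/clone
uses:

#include <u.h>
#include <libc.h>

/* sketch: "POST" through the clone idiom.  Opening the clone file
   allocates a fresh resource; reading the fd back gives its id;
   writes on the same fd play the role of the request body/metadata. */
void
newresource(void)
{
	int cfd, n;
	char id[32], dir[128];

	cfd = open("/mnt/svc/clone", ORDWR);
	if(cfd < 0)
		sysfatal("open clone: %r");
	n = read(cfd, id, sizeof id - 1);
	if(n <= 0)
		sysfatal("read clone: %r");
	id[n] = '\0';
	snprint(dir, sizeof dir, "/mnt/svc/%s", id);	/* the new "URI" */
	fprint(cfd, "some metadata");			/* analogous to the POST body */
	print("created %s\n", dir);
	close(cfd);
}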



Re: [9fans] GSOC: Drawterm for the iPhone

2009-03-27 Thread Uriel
On Thu, Mar 26, 2009 at 10:22 PM, Pietro Gagliardi  wrote:
>> A 9vx, p9p or inferno cocoa port is a project that seems fairly
>
> I can do one of these; which is the most needed/wanted?

Personally I have no preference; any of the three would be great to
have. Probably the p9p one is the best tested, in the best shape,
and the best starting point, but I'm mostly guessing.

uriel



Re: [9fans] plan9 for calculations

2009-03-27 Thread Fernan Bolando
On Sat, Mar 28, 2009 at 5:31 AM, Rudolf Sykora  wrote:
> Hello everybody,
>
> I noticed there are some thoughts about using plan9 on supercomputers.
> For me, supercomputers are usually used to do some heavy calculations.
> And this leads me to a question. What software is then used for
> programming these calculations? (I mean e.g. linear algebra, i.e.
> matrix calculations.) Where can one read about that?
>
> More, it also leads me to a (perhaps) simpler question. What is the
> situation with ordinary machines?
>
> Until now I have used several libraries in linux, all of them somehow
> based on lapack. I used the C language (c-lapack), python (numpy), and now
> I do some programming in Fortran (Intel MKL). From my experience I
> would say: writing programs in C is a nightmare (for me, next to a no-go
> again), using python with numpy is a breeze, and using Fortran (95) is
> sort of fine. C and Fortran run faster than python, but the factor,
> when I played with it, surprised me by being only sth. like 3x (I was
> expecting a worse result).
>
> Now I've been thinking: if I were to write sth. in plan9, what would
> be the way to try?
> Recently I heard about the eigen2 library, which seems to be nice (high
> performance, few dependencies), but it's for C++...
>
> Thank you for any suggestion
> Ruda
>
> PS.: It could be that plan9, being more an OS-research system than
> anything else, is simply not suitable for such a use (there are no
> plotting libraries or other utilities). Perhaps it's not a good idea at
> all to try to use plan9 like that because it would be more work than
> anything. Maybe using linux for such things with all its tools is just
> ok. If you share this view, please just say so.
>
>

It's not supercomputer level, but I have a sparse matrix solver in my
contrib. I use it along with haskell as a simple replacement for octave.
It is mostly a collection of scripts right now; I am hoping to consolidate
it later into a single package. I also cannot give you a lot of examples,
mainly because they come from an actual project I did for my day
job (although those were done in mathcad).

I did notice that for some plotting needs, simply piping the output to the
plot command is adequate, instead of bloating my tools with plot routines.

fernan

-- 
http://www.fernski.com



[9fans] plan9 for calculations

2009-03-27 Thread Rudolf Sykora
Hello everybody,

I noticed there are some thoughts about using plan9 on supercomputers.
For me, supercomputers are usually used to do some heavy calculations.
And this leads me to a question. What software is then used for
programming these calculations? (I mean e.g. linear algebra, i.e.
matrix calculations.) Where can one read about that?

More, it also leads me to a (perhaps) simpler question. What is the
situation with ordinary machines?

Until now I have used several libraries in linux, all of them somehow
based on lapack. I used the C language (c-lapack), python (numpy), and now
I do some programming in Fortran (Intel MKL). From my experience I
would say: writing programs in C is a nightmare (for me, next to a no-go
again), using python with numpy is a breeze, and using Fortran (95) is
sort of fine. C and Fortran run faster than python, but the factor,
when I played with it, surprised me by being only sth. like 3x (I was
expecting a worse result).
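
(For reference, the "C with c-lapack" style of call looks roughly like the
sketch below -- a toy 2x2 solve. Arrays are column-major, and the exact
linking details vary between LAPACK builds.)

#include <stdio.h>

/* Fortran LAPACK routine: solve A*x = b in place */
extern void dgesv_(int *n, int *nrhs, double *a, int *lda,
	int *ipiv, double *b, int *ldb, int *info);

int
main(void)
{
	double a[] = { 3, 1, 1, 2 };	/* column-major: A = [3 1; 1 2] */
	double b[] = { 9, 8 };		/* right-hand side, overwritten with x */
	int n = 2, nrhs = 1, lda = 2, ldb = 2, info;
	int ipiv[2];

	dgesv_(&n, &nrhs, a, &lda, ipiv, b, &ldb, &info);
	if(info != 0)
		fprintf(stderr, "dgesv failed: info=%d\n", info);
	else
		printf("x = (%g, %g)\n", b[0], b[1]);	/* expect (2, 3) */
	return 0;
}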

Now I've been thinking: if I were to write sth. in plan9, what would
be the way to try?
Recently I heard about the eigen2 library, which seems to be nice (high
performance, few dependencies), but it's for C++...

Thank you for any suggestion
Ruda

PS.: It could be that plan9, being more an OS-research system than
anything else, is simply not suitable for such a use (there are no
plotting libraries or other utilities). Perhaps it's not a good idea at
all to try to use plan9 like that because it would be more work than
anything. Maybe using linux for such things with all its tools is just
ok. If you share this view, please just say so.



[9fans] porting sam

2009-03-27 Thread Benjamin Huntsman
I figure I'm not the only person on this list who would find a newer copy of 
sam for Windows useful...
I know there's acme-sac, but I still find myself using the 9pm version of sam 
for remote connections and such.

So, I've been working on and off trying to get sam from plan9port going on
Windows.  Currently, I'm using the Inferno hosted environment to do the port.
That may or may not have been a good idea.  I'm now stuck at the function
bootterm, in io.c.  Code below.  I'm stuck at the point where sam forks (or
proc's) to launch samterm, since Windows provides neither.  emu uses Windows'
CreateThread in its kproc code.  I'm wondering what the best approach here
would be; in the style of some of the other ported Inferno tools, I'm
leaning toward implementing the required functions in an "Nt.c" file and
building it in.  Using libkern and a few other bits from drawterm might work
too...

Thanks!
-Ben


void
bootterm(char *machine, char **argv)
{
	int ph2t[2], pt2h[2];

	if(machine){
		/* remote: stdin/stdout are already wired to the terminal side */
		dup(remotefd0, 0);
		dup(remotefd1, 1);
		close(remotefd0);
		close(remotefd1);
		argv[0] = "samterm";
		execvp(samterm, argv);
		fprint(2, "can't exec %s: %r\n", samterm);
		_exits("damn");
	}
	if(pipe(ph2t)==-1 || pipe(pt2h)==-1)
		panic("pipe");
	switch(fork()){
	case 0:
		/* child: becomes samterm, reading fd 0 / writing fd 1 over the pipes */
		dup(ph2t[0], 0);
		dup(pt2h[1], 1);
		close(ph2t[0]);
		close(ph2t[1]);
		close(pt2h[0]);
		close(pt2h[1]);
		argv[0] = "samterm";
		execvp(samterm, argv);
		fprint(2, "can't exec: ");
		perror(samterm);
		_exits("damn");
	case -1:
		panic("can't fork samterm");
	}
	/* parent: sam itself, talking to samterm through the other pipe ends */
	dup(pt2h[0], 0);
	dup(ph2t[1], 1);
	close(ph2t[0]);
	close(ph2t[1]);
	close(pt2h[0]);
	close(pt2h[1]);
}
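
One possible direction (purely a sketch, untested): skip fork/exec on
Windows and let CreateProcess wire samterm to the two pipes directly.
This assumes a native samterm binary on PATH and leaves sam holding raw
handles instead of dup'ed fds:

#include <windows.h>
#include <stdio.h>

static int
bootterm_nt(void)
{
	HANDLE h2t_r, h2t_w, t2h_r, t2h_w;
	SECURITY_ATTRIBUTES sa = { sizeof sa, NULL, TRUE };	/* inheritable pipe handles */
	STARTUPINFO si;
	PROCESS_INFORMATION pi;
	char cmd[] = "samterm";

	if(!CreatePipe(&h2t_r, &h2t_w, &sa, 0) || !CreatePipe(&t2h_r, &t2h_w, &sa, 0))
		return -1;
	/* keep sam's own ends out of the child */
	SetHandleInformation(h2t_w, HANDLE_FLAG_INHERIT, 0);
	SetHandleInformation(t2h_r, HANDLE_FLAG_INHERIT, 0);

	ZeroMemory(&si, sizeof si);
	si.cb = sizeof si;
	si.dwFlags = STARTF_USESTDHANDLES;
	si.hStdInput = h2t_r;				/* host-to-term pipe feeds samterm's stdin */
	si.hStdOutput = t2h_w;				/* samterm's stdout flows back to sam */
	si.hStdError = GetStdHandle(STD_ERROR_HANDLE);

	if(!CreateProcess(NULL, cmd, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi)){
		fprintf(stderr, "can't start samterm: %lu\n", (unsigned long)GetLastError());
		return -1;
	}
	CloseHandle(pi.hThread);
	CloseHandle(pi.hProcess);
	CloseHandle(h2t_r);
	CloseHandle(t2h_w);
	/* sam now writes to h2t_w and reads from t2h_r, which stands in for
	   the dup() shuffle at the end of the original bootterm */
	return 0;
}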


Re: [9fans] another webfs question

2009-03-27 Thread erik quanstrom
> > assuming that every application that uses webfs is prepared
> > to handle a null byte in the middle of a string.  what webfs does
> > — complaining loudly — is much preferable to programs misbehaving
> > silently.  since it's quite likely that plan 9 applications are not
> > going to properly deal with a null in a string, it's probably
> > a good implementation strategy unless you're willing to test
> > all the programs that use webfs to make sure that this case
> > is properly handled.
> 
> Ok, but then valid applications such as this one can't use webfs. I
> think something needing this could solve the issue by having the
> application import webfs into its own namespace, and then sending some
> sort of ctl command telling it to set an option to allow null bytes.

read to the end:
> > unless you're willing to test
> > all the programs that use webfs to make sure that this case
> > is properly handled.

i think it would be a bad idea to add a control swizzle bit
to avoid testing.  testing is not that hard.

grep webfs `{find /sys/src  /rc/bin |grep '\.[chy]$'} | grep -v /webfs/
/sys/src/cmd/webcookies.c: * Cookie file system.  Allows hget and multiple webfs's to collaborate.
/sys/src/cmd/webfsget.c:/* Example of how to use webfs */
/sys/src/cmd/webfsget.c:	fprint(2, "usage: webfsget [-b baseurl] [-m mtpt] [-p postbody] url\n");

you can search contrib, too.  i'm sure that abaco falls on
its face when confronted with a 0 in a url.

- erik



Re: [9fans] another webfs question

2009-03-27 Thread Devon H. O'Dell
2009/3/27 erik quanstrom :
>> Yeah, there aren't any. That's the point of URL encoding; NULL bytes
>> are as acceptable as any other, and your client should be able to
>> handle them -- so I think that webfs check is just bogus. It should
>> just encode it as a \0 and pass it through.
>
> (you do mean %00 should result in a byte with value 0, not
> two bytes (in c notation) '\\' and '0', right?)

Yes, I meant '\0'.

> assuming that every application that uses webfs is prepared
> to handle a null byte in the middle of a string.  what webfs does
> — complaining loudly — is much preferable to programs misbehaving
> silently.  since it's quite likely that plan 9 applications are not
> going to properly deal with a null in a string, it's probably
> a good implementation strategy unless you're willing to test
> all the programs that use webfs to make sure that this case
> is properly handled.

Ok, but then valid applications such as this one can't use webfs. I
think something needing this could solve the issue by having the
application import webfs into its own namespace, and then sending some
sort of ctl command telling it to set an option to allow null bytes.

--dho

> - erik
>
>



Re: [9fans] another webfs question

2009-03-27 Thread erik quanstrom
> Yeah, there aren't any. That's the point of URL encoding; NULL bytes
> are as acceptable as any other, and your client should be able to
> handle them -- so I think that webfs check is just bogus. It should
> just encode it as a \0 and pass it through.

(you do mean %00 should result in a byte with value 0, not
two bytes (in c notation) '\\' and '0', right?)

assuming that every application that uses webfs is prepared
to handle a null byte in the middle of a string.  what webfs does
— complaining loudly — is much preferable to programs misbehaving
silently.  since it's quite likely that plan 9 applications are not
going to properly deal with a null in a string, it's probably
a good implementation strategy unless you're willing to test
all the programs that use webfs to make sure that this case
is properly handled.

- erik
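
A hypothetical illustration (not webfs code) of what "properly handled"
ends up meaning for callers: percent-decoding into an explicit-length
buffer instead of a NUL-terminated string, so a %00 survives as a real
zero byte:

#include <u.h>
#include <libc.h>

static int
hexval(int c)
{
	if(c >= '0' && c <= '9') return c - '0';
	if(c >= 'a' && c <= 'f') return c - 'a' + 10;
	if(c >= 'A' && c <= 'F') return c - 'A' + 10;
	return -1;
}

/* decode src into dst; returns the decoded length (which may count
   embedded zero bytes), or -1 on a malformed escape */
long
urldecode(char *dst, long ndst, char *src)
{
	long n;
	int hi, lo;

	for(n = 0; *src != '\0' && n < ndst; n++){
		if(*src == '%'){
			if((hi = hexval(src[1])) < 0 || (lo = hexval(src[2])) < 0)
				return -1;
			dst[n] = hi<<4 | lo;	/* %00 becomes a genuine NUL */
			src += 3;
		}else
			dst[n] = *src++;
	}
	return n;
}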



Re: [9fans] another webfs question

2009-03-27 Thread Mathieu Lonjaret
Ok, thanks to both. In the meantime, mjl pointed me to
http://www.ietf.org/rfc/rfc3986.txt, sect. 7.3, where this can be found:
"Note, however, that the "%00" percent-encoding
(NUL) may require special handling and should be rejected if the
application is not expecting to receive raw data within a component."

This apparently could be the reason behind the current behaviour of webfs.

I'll try and fix that in webfs unless somebody beats me to it (please
do! ;) ).

Cheers,
Mathieu
--- Begin Message ---
2009/3/27 erik quanstrom :
>> It seems I'm hitting this error when sending some GET requests:
>>
>> In /sys/src/cmd/webfs/url.c:
>>
>>       if(strstr(url, "%00")){
>>               werrstr("escaped NUL in URI");
>>               return -1;
>>       }
>>
>> I haven't fully understood the comment above, especially if it is against
>> the RFC to have an escaped NUL in an url, but this can actually happen,
>> at least with queries to a bittorrent tracker. For example when specifying
>> the info hash of a specific torrent when sending a scrape request:
>>
>> http://bttracker.debian.org:6969/scrape?info_hash=%F1%AE%D2%E5%15%A0%BD%F1%41%54%9D%44%00%47%AB%97%81%2B%69%16
>> (13th char in the info hash is a NUL)
>>
>> I get a reply to that one both with wget on linux or hget on plan 9,
>> while webfs gives the error from the code above.
>>
>> So is it webfs that needs fixing for that case, or are the other tools
>> breaking some RFC with that?
>
> rfc2396 doesn't mention any restrictions; %00 is legal.

Yeah, there aren't any. That's the point of URL encoding; NULL bytes
are as acceptable as any other, and your client should be able to
handle them -- so I think that webfs check is just bogus. It should
just encode it as a \0 and pass it through.

--dho

> - erik
>
>
--- End Message ---


Re: [9fans] another webfs question

2009-03-27 Thread Devon H. O'Dell
2009/3/27 erik quanstrom :
>> It seems I'm hitting this error when sending some GET requests:
>>
>> In /sys/src/cmd/webfs/url.c:
>>
>>       if(strstr(url, "%00")){
>>               werrstr("escaped NUL in URI");
>>               return -1;
>>       }
>>
>> I haven't fully understood the comment above, especially if it is against
>> the RFC to have an escaped NUL in an url, but this can actually happen,
>> at least with queries to a bittorrent tracker. For example when specifying
>> the info hash of a specific torrent when sending a scrape request:
>>
>> http://bttracker.debian.org:6969/scrape?info_hash=%F1%AE%D2%E5%15%A0%BD%F1%41%54%9D%44%00%47%AB%97%81%2B%69%16
>> (13th char in the info hash is a NUL)
>>
>> I get a reply to that one both with wget on linux or hget on plan 9,
>> while webfs gives the error from the code above.
>>
>> So is it webfs that needs fixing for that case, or are the other tools
>> breaking some RFC with that?
>
> rfc2396 doesn't mention any restrictions; %00 is legal.

Yeah, there aren't any. That's the point of URL encoding; NULL bytes
are as acceptable as any other, and your client should be able to
handle them -- so I think that webfs check is just bogus. It should
just encode it as a \0 and pass it through.

--dho

> - erik
>
>



Re: [9fans] another webfs question

2009-03-27 Thread erik quanstrom
> It seems I'm hitting this error when sending some GET requests:
> 
> In /sys/src/cmd/webfs/url.c:
> 
>   if(strstr(url, "%00")){
>           werrstr("escaped NUL in URI");
>           return -1;
>   }
> 
> I haven't fully understood the comment above, especially if it is against
> the RFC to have an escaped NUL in an url, but this can actually happen,
> at least with queries to a bittorrent tracker. For example when specifying
> the info hash of a specific torrent when sending a scrape request:
> 
> http://bttracker.debian.org:6969/scrape?info_hash=%F1%AE%D2%E5%15%A0%BD%F1%41%54%9D%44%00%47%AB%97%81%2B%69%16
> (13th char in the info hash is a NUL)
> 
> I get a reply to that one both with wget on linux or hget on plan 9,
> while webfs gives the error from the code above.
> 
> So is it webfs that needs fixing for that case, or are the other tools
> breaking some RFC with that? 

rfc2396 doesn't mention any restrictions; %00 is legal.

- erik