Re: [9fans] addition to fortune file

2005-10-17 Thread Nils O. Selåsdal

erik quanstrom wrote:

From the ISO-2022/ECMA-035 spec:


4.2 byte
A bit string that is operated upon as a unit.

NOTE 3
Each bit has the value either ZERO or ONE.

Guess they didn't want to bother with those Russian trinary machines[1]

[1] http://www.icfcst.kiev.ua/museum/PHOTOS/Setun-1.html


Re: [9fans] nubus macs

2005-10-17 Thread Ben Huntsman
Apple tends to do things strangely, and the PPC laptop you describe
happens to be an Old World system, which means that if it even has
OpenFirmware at all, it's the old buggy version 2 or less.  Just
because some Linux variant boots on the thing doesn't mean P9 will
run on it.  For starters, you'd have to write a boot loader specific
to it pretty much from scratch.
Have a look at Vita Nuova's Inferno distribution, which might have  
some examples of how to boot from OF on PPC hardware... and Inferno  
and P9 are sufficiently similar that you shouldn't have to shoehorn  
too much...


Good luck, and let me know if you get anywhere... I have a bunch of  
similar Old World mac hardware sitting around too


-Ben

On Oct 16, 2005, at 10:32 PM, Jack Johnson wrote:


On 10/16/05, Lyndon Nerenberg [EMAIL PROTECTED] wrote:


One of my neighbours on the dock has a very unused Powerbook 1400c
that I can probably swap for a case of beer.  In the realm of ppc
ports, has there been any attempt to make P9 run on a nubus (or
whatever it's called) PPC mac?



It looks like that PowerBook is supported by Linux, MkLinux, and possibly
NetBSD, so it might be a good candidate for a plan9ports box.

Not perfect, but at least you know your NIC will work. ;)

-Jack





Re: [9fans] Re: some Plan9 related ideas

2005-10-17 Thread Russ Cox
 I was reading old archives, and I'm probably a bit dense; but what is
 the reason to use the same tag for the three messages?

The reason is you don't have to wait for the response to the first
before sending the second and third, avoiding two round trip times.

 Wouldn't that be able to break a server that expected tags not to be
 reused until the corresponding Rmessage had been sent?

Yes, but I did say I was redefining the protocol.  And single-threaded
servers (the majority of our servers by code volume) don't care.

Define that a client may send more than one message
with the same tag, and in that case servers must process
those messages sequentially.  This is not very hard to
implement on the server side, and the single-threaded
servers needn't change at all.
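A minimal sketch of the server-side bookkeeping this redefinition asks
for (my own toy code, not real 9P): requests that share a tag go into a
per-tag queue and are served strictly in arrival order, so the client can
pipeline several T-messages under one tag without waiting for each
R-message.

```python
from collections import deque

class TagQueueServer:
    """Toy sketch: serialize requests that share a tag.

    handle(request) -> response stands in for the real message
    dispatch; nothing here parses actual 9P."""

    def __init__(self, handle):
        self.handle = handle
        self.queues = {}          # tag -> deque of pending requests

    def submit(self, tag, request):
        # Client side may fire several of these back to back.
        self.queues.setdefault(tag, deque()).append(request)

    def drain(self, tag):
        # Serve the tag's queue sequentially; responses come back
        # in the same order the requests arrived.
        q = self.queues.pop(tag, deque())
        return [self.handle(req) for req in q]
```

A single-threaded server gets this behavior for free, which is why, as
Russ notes, those servers needn't change at all.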

Russ


Re: [9fans] Re: some Plan9 related ideas

2005-10-17 Thread Uriel
On Mon, Oct 17, 2005 at 07:23:39AM -0400, Russ Cox wrote:
  I was reading old archives, and I'm probably a bit dense; but what is
  the reason to use the same tag for the three messages?
 
 The reason is you don't have to wait for the response to the first
 before sending the second and third, avoiding two round trip times.
Yes, but what I didn't understand is why you needed to use the same tag;
I thought you could do this without changing the protocol.

After some discussion in #plan9 we guessed that the reason is threaded
servers...

Could you explain in more detail how it would work from the (threaded)
server POV? I was thinking that the server could use the fid to keep
threads from stepping on each other, and still avoid having to change the
protocol at all...

And I'm still curious what kernel changes nemo was talking about.

Sorry for being dense

uriel


Re: [9fans] SuperComputing in Seattle.

2005-10-17 Thread Ronald G Minnich

Roman Shaposhnick wrote:

Guys,

I know that at least some of you use Plan 9 for cluster work and such,
so the SuperComputing expo in Seattle might be of interest to you.


I guess depending on how many of us will be there (I, for one, will be)
it could be a nice opportunity for a Plan 9 beer bash or something.

What do you think ?

Thanks,
Roman.



I will be there with a booth. We may have a small plan 9 cluster there, 
in the lanl booth.


andrey will also be in the lanl booth. I guess we'll be happy to see any 
of you who can make it.


I can set up a Plan 9 bof, although beer bash has more appeal.

ron


[9fans] xcpu kludge release

2005-10-17 Thread Ronald G Minnich
folks, in /n/sources/9grid/xcpu.tar you will see a release of xcpu, 
which is part of the assortment of 9grid tools that we are releasing 
from LANL. While we are developing this system on clusters at LANL, it 
ought to work just fine on the 9grid, at least on Plan 9 systems.


xcpu is intended to be a 9p-based replacement for the bproc system. See 
http://bproc.sourceforge.net/ for info on bproc, or clustermatic.org for 
papers. We wish to build lightweight nodes with 9p2000-based servers 
that are every bit as simple as our bproc nodes. xcpu is intended to 
replace and obsolete bproc.


Performance is going to matter: we can start 1024 16MB mpi images on 
Pink in 3 seconds. Bproc sets a very high bar, which we hope to meet, 
but we'll see.
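A back-of-the-envelope check on that bar, assuming the 16MB figure is
per image:

```python
# Aggregate rate implied by starting 1024 16MB MPI images in 3 seconds.
images = 1024
image_mb = 16
seconds = 3

total_mb = images * image_mb        # 16384 MB pushed out in total
rate_mb_s = total_mb / seconds      # ~5461 MB/s aggregate across the cluster
```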


Once you untar it, xcpu/XCPU/xcpu.pdf is a small document describing 
the whys and wherefores of the design. There really is a reason for the 
way this is done.


This is not the same implementation as Vic Zandy's original cut. That 
system, while very nice, was just too hard to get working on Linux.


This version of xcpusrv is derived from sshnet, with my own changes and, 
I'm sure, bugs thrown in. You will doubtless have comments on my 
ignorance of the Right Way to do Plan 9 threading, and I welcome them; 
I'm really a n00bie at this game.


I'm not real picky about how this is implemented, only that xcpusrv 
roughly work as described in xcpu.pdf, and work on both Plan 9 and Plan 
9 ports. If you want to chuck my code, and send me a better server, I'm 
fine with that idea :-)


Also note that humans are not intended to interact with the services 
provided by xcpusrv, only programs. See xsh.c for an example of a 
possible user program to use xcpusrv services. xsh is modeled on the 
bproc bpsh command.


Note that bproc manages process startup, i/o (stdin/out/err), status, 
and control. xcpu only does startup and I/O. Status and control are 
intended to be done via u9fs, with specific enhancements to make linux 
/proc appear to have Plan 9-style ctl and status files for each process.
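The ctl/status idiom can be sketched like this (my own illustration:
the directory layout mimics Plan 9's /proc/<pid>/ctl and
/proc/<pid>/status, but the paths and command verbs here are stand-ins,
not the actual xcpu or u9fs interface):

```python
import os

def ctl(procroot, pid, command):
    """Write a Plan 9-style control command (e.g. "kill") to a
    process's ctl file under a mounted proc tree."""
    with open(os.path.join(procroot, str(pid), "ctl"), "w") as f:
        f.write(command)

def status(procroot, pid):
    """Read back the process's status file as a string."""
    with open(os.path.join(procroot, str(pid), "status")) as f:
        return f.read()
```

The point of the design is that control becomes ordinary file I/O, so
any 9P client, on any machine, can drive a remote process.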


To make on Plan 9, mk -f mkfile-plan9; to make on Linux (or another 
p9ports machine), mk -f mkfile-p9p. To use on Plan 9, see the setitup 
script; for Linux, see the .pdf or the openclone tool (a hacked-up Plan 
9 cat).


comments, fixes, etc. welcome. Flames ignored :-)

thanks

ron



[9fans] xcpu note

2005-10-17 Thread Ronald G Minnich

oh, yeah, you're going to see a lot of debugging crap from xcpusrv.

this is called: A guy who's done select()-based threading for xx years 
tries to learn Plan 9 threads, and fails a lot, but is slowly getting 
it, sometimes


sorry for any convenience (sic).

also, on Plan 9 ports,  you are going to need a linux kernel, e.g. 
2.6.14-rc2, to make it go, or use Newsham's python client code.


ron


Re: [9fans] xcpu note

2005-10-17 Thread David Leimbach
Congrats on another fine Linux Journal article, Ron.  I just got this
in the mail yesterday and read it today.

Clustermatic is pretty cool, I think it's what was installed on one of
the other clusters I used at LANL as a contractor at the time.  I
recall a companion tool for bproc to request nodes, sort of an ad-hoc
scheduler.  I had to integrate support for this into the startup of the
MPI I was testing on that machine.

I'm curious to see how this all fits together with xcpu, if there is
such a resource allocation setup needed etc.

Dave

On 10/17/05, Ronald G Minnich rminnich@lanl.gov wrote:
 oh, yeah, you're going to see a lot of debugging crap from xcpusrv.

 this is called: A guy who's done select()-based threading for xx years
 tries to learn Plan 9 threads, and fails a lot, but is slowly getting
 it, sometimes

 sorry for any convenience (sic).

 also, on Plan 9 ports,  you are going to need a linux kernel, e.g.
 2.6.14-rc2, to make it go, or use Newsham's python client code.

 ron



Re: [9fans] xcpu note

2005-10-17 Thread Ronald G Minnich

David Leimbach wrote:


Clustermatic is pretty cool, I think it's what was installed on one of
the other clusters I used at LANL as a contractor at the time.  I
recall a companion tool for bproc to request nodes, sort of an ad-hoc
scheduler.  I had to integrate support for this in our MPI's start up
that I was testing on that machine.


the simple scheduler, bjs, was written by erik hendriks (now at Google, 
sigh) and was rock-solid. It ran on one cluster, unattended, scheduling 
128 2-cpu nodes with a very diverse job mix, for one year. It was a 
great piece of software. It was far faster, and far more reliable, than 
any scheduler we have ever seen, then or now. In one test, we ran about 
20,000 jobs through it in about an hour, on a 1024-node cluster, just to 
test. Note that it could probably have scheduled a lot more jobs, but 
the run-time of the job was non-zero. No other scheduler we have used 
comes close to this kind of performance. Scheduler overhead was 
basically insignificant.
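For a rough sense of scale, the sustained rate of that test run works
out to:

```python
# ~20,000 jobs in about an hour on a 1024-node cluster.
jobs = 20_000
seconds = 60 * 60                   # "about an hour"
jobs_per_sec = jobs / seconds       # ~5.6 jobs started per second, sustained
```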




I'm curious to see how this all fits together with xcpu, if there is
such a resource allocation setup needed etc.


we're going to take bjs and have it schedule nodes to give to users.

Note one thing we are going to do with xcpu: attach nodes to a user's 
desktop machine, rather than make users log in to the cluster. So users 
will get interactive clusters that look like they own them. This will, 
we hope, kill batch mode. Plan 9 ideas make this possible. It's going to 
be a big change, one we hope users will like.


If you look at how most clusters are used today, they closely resemble 
the batch world of the 1960s. It is actually kind of shocking. I 
downloaded a JCL manual a year or two ago, and compared what JCL did to 
what people wanted batch schedulers for clusters to do, and the 
correspondence was a little depressing. The Data General ad said it 
best: Batch is a bitch.


Oh yeah, if anyone has a copy of that ad (Google does not), I'd like it 
in .pdf :-) It appeared in the late 70s IIRC.


ron
p.s. go ahead, google JCL, and you can find very recent manuals on how 
to use it. I will be happy to post the JCL for sort + copy if anyone 
wants to see it.


Re: [9fans] xcpu note

2005-10-17 Thread Kenji Okamoto
 also, on Plan 9 ports,  you are going to need a linux kernel, e.g. 
 2.6.14-rc2, to make it go, or use Newsham's python client code.

The latest release candidate, which I checked just now, is still 2.6.14-rc4.☺
Do you have any info on when it'll become a stable release?

Kenji



Re: [9fans] xcpu note

2005-10-17 Thread Ronald G Minnich

Kenji Okamoto wrote:
also, on Plan 9 ports,  you are going to need a linux kernel, e.g. 
2.6.14-rc2, to make it go, or use Newsham's python client code.



The latest release candidate, which I checked just now, is still 2.6.14-rc4.☺
Do you have any info on when it'll become a stable release?

Kenji



The usual rule with this most recent series is "pretty damn soon." 2.6.13 
stabilized quite fast. I am guessing we're close, not that I know any 
more than you do :-)


ron


Re: [9fans] addition to fortune file

2005-10-17 Thread McLone
On 10/17/05, Nils O. Selåsdal [EMAIL PROTECTED] wrote:
 Guess they didn't want to bother with those russian trinary machines[1]
 [1] http://www.icfcst.kiev.ua/museum/PHOTOS/Setun-1.html
they called theirs the trit and the tryte [0].
I saw one of them at a former military radio plant here,
defunct, already broken into pieces. Their cases were
great for storing hunting weapons under lock. And their coolers
are about 30cm in diameter, practically eternal, with great
airflow but noisy. The one I'm talking about was shut down in
the mid-80s, when Misha brought perestroika. I'm unsure about the
model number, but they were made at the beginning of the 70s.
I've been told it used the Algol language.
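For the curious: the Setun machines used balanced ternary, where each
trit takes a value in {-1, 0, +1}. A quick conversion sketch of my own
(nothing to do with the actual machine's code):

```python
def to_balanced_ternary(n):
    """Return the balanced-ternary trits of n, most significant first.
    e.g. 5 -> [1, -1, -1], since 9 - 3 - 1 = 5."""
    if n == 0:
        return [0]
    trits = []
    while n != 0:
        r = n % 3
        n //= 3
        if r == 2:          # digit 2 becomes trit -1 with a carry
            r = -1
            n += 1
        trits.append(r)
    return trits[::-1]

def from_balanced_ternary(trits):
    """Inverse: evaluate trits (most significant first) back to an int."""
    value = 0
    for t in trits:
        value = value * 3 + t
    return value
```

One nicety of balanced ternary is that negation is just flipping every
trit, so no separate sign representation is needed.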

p.s. googling wasn't very helpful, because that site is dead now.
Archive.org wayback machine has it, the URL was
http://www.computer-museum.ru/english/setun.htm
--
[0] http://xyzzy.freeshell.org/trinary/
--
wbr,|\  _,,,---,,_   dog bless ya!
`   Zzz /,`.-'`'-.  ;-;;,_
McLone at GMail dot com|,4-  ) )-,_. ,\ (  `'-'
, net- and *BSD admin '---''(_/--'  `-'\_)   ...translit rawx


Re: [9fans] xcpu note

2005-10-17 Thread Eric Van Hensbergen
Should be any day now; there weren't that many patches in rc4.
Of course, lucho has a massive overhaul of the mux code waiting in the
wings and I have my fid-tracking rework, so stuff won't be stable for
long ;)  Of course, we'll keep that code in our development trees
until it has cooked a little, but lucho's code looks to fix a lot of
long-standing problems, and hopefully my new fid stuff will make Plan 9
things (p9p) and Ron's new synthetics work better.

-eric


On 10/17/05, Kenji Okamoto [EMAIL PROTECTED] wrote:
  also, on Plan 9 ports,  you are going to need a linux kernel, e.g.
  2.6.14-rc2, to make it go, or use Newsham's python client code.

 The latest candidate of this, I checked just before, is still 2.6.14-rc4.☺
 Do you have any info when it'll become stable release?

 Kenji




Re: [9fans] xcpu note

2005-10-17 Thread Scott Schwartz
| No other scheduler we have used 
| comes close to this kind of performance. Scheduler overhead was 
| basically insignificant.
 
Probably apples and oranges, but Jim Kent wrote a job scheduler for his
kilocluster that nicely handled about 1M jobs in six hours.  It's the
standard thing for whole genome sequence alignments at ucsc.

| If you look at how most clusters are used today, they closely resemble 
| the batch world of the 1960s. It is actually kind of shocking. 

On the other hand, sometimes that's just what you really want.



Re: [9fans] xcpu note

2005-10-17 Thread Ronald G Minnich

Scott Schwartz wrote:


Probably apples and oranges, but Jim Kent wrote a job scheduler for his
kilocluster that nicely handled about 1M jobs in six hours.  It's the
standard thing for whole genome sequence alignments at ucsc.


I think that's neat, I would like to learn more. Was this scheduler for 
an arbitrary job mix, or specialized to that app?




| If you look at how most clusters are used today, they closely resemble 
| the batch world of the 1960s. It is actually kind of shocking. 


On the other hand, sometimes that's just what you really want.



true. Sometimes it is. I've found, more often, that it's what people 
will accept, but not what they want.


ron


Re: [9fans] xcpu note

2005-10-17 Thread andrey mirtchovski
 | No other scheduler we have used 
 | comes close to this kind of performance. Scheduler overhead was 
 | basically insignificant.
  
 Probably apples and oranges, but Jim Kent wrote a job scheduler for his
 kilocluster that nicely handled about 1M jobs in six hours.  It's the
 standard thing for whole genome sequence alignments at ucsc.

the vitanuova guys probably have better numbers, but when we ran their
grid code at ucalgary it executed over a million jobs in a 24-hour
period.  the jobs were non-null (md5sum using inferno's dis code).  it
ran on a 12 (or so) -node cluster :)



Re: [9fans] xcpu note

2005-10-17 Thread Ronald G Minnich

andrey mirtchovski wrote:
| No other scheduler we have used 
| comes close to this kind of performance. Scheduler overhead was 
| basically insignificant.


Probably apples and oranges, but Jim Kent wrote a job scheduler for his
kilocluster that nicely handled about 1M jobs in six hours.  It's the
standard thing for whole genome sequence alignments at ucsc.



the vitanuova guys probably have better numbers, but when we ran their
grid code at ucalgary it executed over a million jobs in a 24-hour
period.  the jobs were non-null (md5sum using inferno's dis code).  it
ran on a 12 (or so) -node cluster :)



man, all these schedulers that work MUCH better than the stuff we pay 
money for ... ah well. It shows how limited my experience is ... I'm 
used to schedulers that take 5-25 seconds to schedule jobs on 1000 or so 
nodes.


Oh, wait, 12 nodes. Hmm. That's cheating!

ron