Re: [9fans] [OT] interesting hardware project for gsoc

2008-03-04 Thread Alexander Sychev

It's just a popular joke in Russia.
Actually, this kind of "humor" really does happen in the Russian army.
For example:
"dig a ditch from the fence till the dusk"

On Mon, 03 Mar 2008 22:18:42 +0300, Martin Harriss <[EMAIL PROTECTED]>  
wrote:



Alexander Sychev wrote:

In the Russian army:

Sgt.: Who is a painter here?
Soldier: I'm the painter.
Sgt.: Well, take this axe and draw me a stack of firewood in the  
morning.


That one sounds like it was written by Milligan.

Martin




--
Best regards,
  santucco


[9fans] gsocfs

2008-03-04 Thread Siddhant
Hi everyone.
I am interested in the gsocfs project listed here -
http://plan9.bell-labs.com/wiki/plan9/gsocfs
Though it does sound appealing, since I am not very familiar with the
Plan 9 development environment, I was wondering if it could get way too
complex for someone relatively inexperienced.
I have a fair idea of what needs to be implemented, how it can be
done, and how file servers (and other things) work in Plan 9 in
general; the only thing I'm concerned about is the development
environment.
Could anyone please suggest what is required of a student attempting
this?
Regards,
Siddhant Goel


Re: [9fans] GCC/G++: some stress testing

2008-03-04 Thread Paweł Lasek
On Mon, Mar 3, 2008 at 4:31 AM, Roman V. Shaposhnik <[EMAIL PROTECTED]> wrote:
> On Sun, 2008-03-02 at 12:34 -0800, Paul Lalonde wrote:
>  > -BEGIN PGP SIGNED MESSAGE-
>  > Hash: SHA1
>  >
>  > CSP doesn't scale very well to hundreds of simultaneously executing
>  > threads (my claim, not, as far as I've found yet, anyone else's).
>
>  I take it that you really do mean "simultaneous". As in: you actually
>  have hundreds of cores available on the same system. I'm actually
>  quite curious to find out what's the highest number of cores available
>  of a single shared memory systems that you, or anybody else has
>  experienced in practice so far (mine would be 32 == 8 CPUs x 4 cores AMD
>  Barcelona)? Now even more important question is -- what are the
>  expectations for, this number in 5 years from now?
>

Actually, with parts available from AMD you can directly mesh up to 64
sockets, each with (currently) 4 cores; an 8-core CPU has been announced
(as an MCP initially). And there have been methods for routing HT
traffic with socket counts nearing thousands or tens of thousands.
Dunno if they used it directly with the cache coherency protocol, though.

-- 
Paul Lasek


Re: [9fans] GCC/G++: some stress testing

2008-03-04 Thread ron minnich
On Tue, Mar 4, 2008 at 2:57 AM, Paweł Lasek <[EMAIL PROTECTED]> wrote:
>And there were available methods for routing HT
>  traffic with number of sockets nearing thousands or tens of thousands.

as in this: http://www.cray.com/products/xt4/

>  Dunno if they used it directly with cache coherency protocol though.

no, it does not.

Plan 9 port in progress ... soon we hope.

ron


Re: [9fans] GCC/G++: some stress testing

2008-03-04 Thread Philippe Anel



no, it does not.

Plan 9 port in progress ... soon we hope.

ron
  


Are you working on this port, Ron?
Are you planning to have several kernels or just one?

Phil;



Re: [9fans] GCC/G++: some stress testing

2008-03-04 Thread Roman V. Shaposhnik
On Tue, 2008-03-04 at 11:57 +0100, Paweł Lasek wrote:
> >  I take it that you really do mean "simultaneous". As in: you actually
> >  have hundreds of cores available on the same system. I'm actually
> >  quite curious to find out what's the highest number of cores available
> >  of a single shared memory systems that you, or anybody else has
> >  experienced in practice so far (mine would be 32 == 8 CPUs x 4 cores AMD
> >  Barcelona)? Now even more important question is -- what are the
> >  expectations for, this number in 5 years from now?
> >
> 
> Actually, with parts available from AMD you can directly mesh up to 64
> sockets, each with (currently) 4 cores, 8 core cpu announced (as MCP
> in the beginning). And there were available methods for routing HT
> traffic with number of sockets nearing thousands or tens of thousands.
> Dunno if they used it directly with cache coherency protocol though.

Understood. Although what I meant was practical, hands-on experience,
so that war stories can be told. Otherwise, on the SPARC side, I'd
be able to claim 128 in Sun's M9000.

Thanks,
Roman.



Re: [9fans] GCC/G++: some stress testing

2008-03-04 Thread ron minnich
On Tue, Mar 4, 2008 at 7:55 AM, Philippe Anel <[EMAIL PROTECTED]> wrote:

>  Are you working on this port Ron ?

soon.

I just realized today that, for the most part, linuxemu may save our neck
on the XT4. That's because there are proprietary programs that need to
run, and they are Linux-only programs.

So I owe Russ and cinap ...

ron


[9fans] plan9 diskless terminal booting on 486.

2008-03-04 Thread Juan M. Mendez
Hello 9fans,

I have been talking with Benavento and Lucio to get a 486 terminal
working. I decided to share it with you to get some ideas of where
to start debugging.


 >  I created a boot floppy, to configure a plan9 terminal using
 >  sirviente.9grid.es as cpu, and got tons of lines with:
 >
 >  "mapfree unallocated unmapped physical memory"


I have discarded hardware problems because I tested the machine with a
QNX floppy and the network card worked. I could browse through some
www pages.

The previous errors were produced because the kernel was trying to
allocate more memory than the system had.

I added *maxmem=67108864 to PLAN9.INI (the system has 64MB of RAM) and
these errors disappeared.

 Following Lucio's suggestion, I also added *noe820scan=1.

 I reach the prompt:

 #l0: NE2000: 10Mbps port 0x300 irq 9 addr 0x4000 size 0x4000: 004f4c0141b5
 64M memory: 27M kernel data, 36M user, 163M swap
 root is from (tcp) [tcp -g 192.168.1.1 ether /net/ether0 192.168.1.5
 255.255.255.0]:
 user[none]: vejeta
 boot: can't connect to file server: connection time out
 panic: boot process died: unknown
 panic: boot process died: unknown
 dumpstack disabled
 cpu0 exiting
 - - - - - - - - - - - - - - - - - - - -

 I tested the same floppy in another machine, a Pentium II with the same
 network card, and it could boot successfully and reach 9grid.net as a
 cpu server. So I have discarded router misconfiguration too.

Launching a sniffer and monitoring the network on another machine showed
that there wasn't any output coming from 192.168.1.5.


 My current PLAN9.INI is as follows:
 - - - - - - - - - - - - - - - - - - - - - - - - - - -
 ether0=type=ne2000 port=0x300 irq=9 mem=0x4000
 # nodummyrr


 bootfile=fd0!dos!9pc.gz

 fs=91.121.95.142

auth=91.121.95.142 authdom=91.121.95.142
 bootargs=tcp -g 192.168.1.1 ether /net/ether0 192.168.1.5 255.255.255.0
 #bootdisk=local!#S/sdC0/fossil

 *nomp=1
 *nodumpstack=1
 *noe820scan=1

 *maxmem=67108864
 *kernelpercent=40
 #partition=new
 #dmamode=ask

 *nobiosload=1
 *noahciload=1

 *debugload=1
 mouseport=0   #0 for COM1 and 1 for COM2
 monitor=xga
 vgasize=800x600x8


-- 
Fidonet: 2:345/432.2


Re: [9fans] Xen and new venti

2008-03-04 Thread stella
Hi,
thanks to the suggestion erik gave me I was able to compile the kernel:
I just needed to switch from memcpy to memmove in /sys/src/libip/parseip.c
and recompile libip, and then also recompile /sys/src/libc

I moved the kernels to the Xen machine but all of them crash just after the
message
"Started domain plan9"

Am I missing something?
S.


Re: [9fans] Xen and new venti

2008-03-04 Thread Richard Miller
> Hi,
> thanks to the suggestion erik gave me I was able to compile the kernel:
> I just needed to switch from memcpy to memmove into /sys/src/libip/parseip.c 
> and recompile libip
> and then recompile also /sys/src/libc

I can't see why this should have been necessary - libc contains memcpy
already.  I have just updated /n/sources/xen/xen3 so it compiles with
the current Plan 9 kernel source (using Xen 3.0.2).  I made the other
changes you listed, but I didn't have to do anything about memcpy.

> I moved the kernels to the Xen machine but all of them crashes just after the 
> message 
> "Started domain plan9"
> 
> Am I missing something?

The present Plan 9 xen kernel works with Xen 3.0.x; you said you are using Xen 
3.2.0.
Xen is not known for the consistency of its interfaces from one release to the
next, so it is quite possible that some rewriting needs to be done on the Plan 9
side to deal with changes in Xen semantics.

Have you enabled verbose debugging messages on the Xen console?  You should see
some indication of why the plan9 guest crashed.



Re: [9fans] Xen and new venti

2008-03-04 Thread erik quanstrom
> I can't see why this should have been necessary - libc contains memcpy
> already.  I have just updated /n/sources/xen/xen3 so it compiles with
> the current Plan 9 kernel source (using Xen 3.0.2).  I made the other
> changes you listed, but I didn't have to do anything about memcpy.

the reason it's necessary is that although the kernel links against the
c library, it does not include libc.h.  this is very much on purpose as
not every function in libc is safe for use in the kernel.

in this case, the declaration for memcpy was likely not present in
portfns.h, so the linker couldn't find the proper function.

- erik



Re: [9fans] Xen and new venti

2008-03-04 Thread stella
Hi,
First of all, a note for erik:
>> I had a problem of type in xendat.h fixed by replacing 
>> uint8 with uint at line 1540

>i suspect you mean uchar.  (or uvlong if they're counting 
>bytes.)
I meant uint [or so I remember] but I can't give any explanation for this

I tried compiling the kernel with both the xen 3.2.0 and the xen 3.0.2
include files, with the same result.
I have debugging active on xen, so I will paste at the end what happens.
In any case it's odd that the kernels from /n/sources/xen/xen3 boot smoothly
on xen 3.2.0
[without venti, of course] while the others do not.
 
Sorry if the following paste is not pretty

S.

PS: I have no Xen 3.0.2 to test the kernel with but I'm willing to give it to 
anyone who does.

Booting a freshly built 9xenpcf.gz, Xend says:

[2008-03-05 01:04:46 3685] DEBUG (XendDomainInfo:84)
XendDomainInfo.create(['vm', ['name', 'plan9-noventi-new'],
['memory', 96], ['on_crash', 'preserve'], ['vcpus', 1],
['on_xend_start', 'ignore'], ['on_xend_stop',
'ignore'], ['image', ['linux', ['kernel',
'/mnt/allanon/hdb3/xen/plan9/9xenpcf.gz'], ['args',
'\nbootargs=local!#S/sd00/fossil\n']]], ['device', ['vbd',
['uname', 'file:/mnt/allanon/hdb3/xen/plan9/plan9-noventi.img'],
['dev', 'sda'], ['mode', 'w']]], ['device',
['vif', ['mac', 'aa:00:10:00:00:10')
[2008-03-05 01:04:46 3685] DEBUG (XendDomainInfo:1608) 
XendDomainInfo.constructDomain
[2008-03-05 01:04:46 3685] DEBUG (balloon:132) Balloon: 99244 KiB free; need 
2048; done.
[2008-03-05 01:04:46 3685] DEBUG (XendDomain:443) Adding Domain: 59
[2008-03-05 01:04:46 3685] DEBUG (XendDomainInfo:1692) 
XendDomainInfo.initDomain: 59 256
[2008-03-05 01:04:46 3685] DEBUG (XendDomainInfo:1724) 
_initDomain:shadow_memory=0x0, memory_static_max=0x600, 
memory_static_min=0x0.
[2008-03-05 01:04:46 3685] DEBUG (balloon:132) Balloon: 99244 KiB free; need 
98304; done.
[2008-03-05 01:04:46 3685] INFO (image:139) buildDomain os=linux dom=59 vcpus=1
[2008-03-05 01:04:46 3685] DEBUG (image:351) domid  = 59
[2008-03-05 01:04:46 3685] DEBUG (image:352) memsize= 96
[2008-03-05 01:04:46 3685] DEBUG (image:353) image  = 
/mnt/allanon/hdb3/xen/plan9/9xenpcf.gz
[2008-03-05 01:04:46 3685] DEBUG (image:354) store_evtchn   = 1
[2008-03-05 01:04:46 3685] DEBUG (image:355) console_evtchn = 2
[2008-03-05 01:04:46 3685] DEBUG (image:356) cmdline= 
bootargs=local!#S/sd00/fossil

[2008-03-05 01:04:46 3685] DEBUG (image:357) ramdisk= 
[2008-03-05 01:04:46 3685] DEBUG (image:358) vcpus  = 1
[2008-03-05 01:04:46 3685] DEBUG (image:359) features   = 
[2008-03-05 01:04:46 3685] INFO (XendDomainInfo:1504) createDevice: vbd :
{'uuid': 'df218b35-05a8-72c7-5bc9-aad542df2721', 'bootable':
1, 'driver': 'paravirtualised', 'dev': 'sda',
'uname': 'file:/mnt/allanon/hdb3/xen/plan9/plan9-noventi.img',
'mode': 'w'}
[2008-03-05 01:04:46 3685] DEBUG (DevController:117) DevController: writing
{'virtual-device': '2048', 'device-type': 'disk',
'protocol': 'x86_32-abi', 'backend-id': '0',
'state': '1', 'backend':
'/local/domain/0/backend/vbd/59/2048'} to
/local/domain/59/device/vbd/2048.
[2008-03-05 01:04:46 3685] DEBUG (DevController:119) DevController: writing
{'domain': 'plan9-noventi-new', 'frontend':
'/local/domain/59/device/vbd/2048', 'uuid':
'df218b35-05a8-72c7-5bc9-aad542df2721', 'dev': 'sda',
'state': '1', 'params':
'/mnt/allanon/hdb3/xen/plan9/plan9-noventi.img', 'mode': 'w',
'online': '1', 'frontend-id': '59', 'type':
'file'} to /local/domain/0/backend/vbd/59/2048.
[2008-03-05 01:04:46 3685] INFO (XendDomainInfo:1504) createDevice: vif :
{'mac': 'aa:00:10:00:00:10', 'uuid':
'98f63ee9-c9dc-1749-de75-d2438bd58ccf'}
[2008-03-05 01:04:46 3685] DEBUG (DevController:117) DevController: writing
{'mac': 'aa:00:10:00:00:10', 'handle': '0',
'protocol': 'x86_32-abi', 'backend-id': '0',
'state': '1', 'backend':
'/local/domain/0/backend/vif/59/0'} to /local/domain/59/device/vif/0.
[2008-03-05 01:04:46 3685] DEBUG (DevController:119) DevController: writing
{'domain': 'plan9-noventi-new', 'handle': '0',
'uuid': '98f63ee9-c9dc-1749-de75-d2438bd58ccf', 'script':
'/etc/xen/scripts/vif-bridge', 'state': '1',
'frontend': '/local/domain/59/device/vif/0', 'mac':
'aa:00:10:00:00:10', 'online': '1', 'frontend-id':
'59'} to /local/domain/0/backend/vif/59/0.
[20

Re: [9fans] Xen and new venti

2008-03-04 Thread erik quanstrom
> I tryed both compiling the kernel with xen 3.2.0 and xen 3.0.2 include files 
> with the same result
> I've debugging active on xen so I will paste at the end what happens
> In any case it's odd that the kernels from /n/sources/xen/xen3 boots smoothly 
> on xen 3.2.0
> [without venti of course ] while the others do not.

this sounds like the problem seen on real hardware a few months ago.
the problem was that the kernel plus the page tables didn't fit in the
temporary page tables set up in l.s.

it may be that there's another bug in there.  the easiest way to check
would be to see if there are devices you aren't using and eliminate
them from your xenpcf configuration.  stripping venti will also
make your kernel smaller.  to do this, comment out this line in
port/mkportall

if(~ $"t *executable* && ! ~ $name venti)

if reducing the size of the kernel solves the problem, then i think it's
a good bet the original problem is not completely solved.

- erik



[9fans] thoughts about venti+fossil

2008-03-04 Thread Enrico Weigelt

Hi folks,


some thoughts about venti that go around in my mind:

1. how stable is the keying? sha-1 has only 160 bits, while
   data blocks may be up to 56k long, so the mapping is only
   unique in one direction (not one-to-one). how can we be
   *really sure* that - even on very large storage (TB or
   even PB) - the data for each key is always unique?

2. what approximate compression level could be expected on large
   storage from coalescing equal data blocks (for several
   kinds of data, e.g. typical office documents vs. media)?

3. what happens to space consumption if venti is used as
   storage for heavily read-write filesystems for a long time
   (not as a permanent archive) - how much space will be wasted?
   should we add some method for block expiry (e.g. timeouts
   on reference counters)?

4. assuming #1 can be answered 100% yes - would it suit a
   very large (several PB), heavily distributed storage
   (e.g. for some kind of distributed, redundant filesystem)?


cu
-- 
-
 Enrico Weigelt==   metux IT service - http://www.metux.de/
-
 Please visit the OpenSource QM Taskforce:
http://wiki.metux.de/public/OpenSource_QM_Taskforce
 Patches / Fixes for a lot dozens of packages in dozens of versions:
http://patches.metux.de/
-


Re: [9fans] thoughts about venti+fossil

2008-03-04 Thread Roman Shaposhnik

On Mar 4, 2008, at 8:00 PM, Enrico Weigelt wrote:

some thoughts about venti that go around in my mind:

1. how stable is the keying ? sha-1 has only 160 bits, while
   data blocks may be up to 56k long. so, the mapping is only
   unique into one direction (not one-to-one). how can we be
   *really sure*, that - even on very large storages (TB or
   even PB) - data to each key is alway (one-to-one) unique ?


http://www.nmt.edu/~val/review/hash/index.html

Not that this analysis is without flaws, though.

Thanks,
Roman.



Re: [9fans] thoughts about venti+fossil

2008-03-04 Thread erik quanstrom
> On Mar 4, 2008, at 8:00 PM, Enrico Weigelt wrote:
> > some thoughts about venti that go around in my mind:
> >
> > 1. how stable is the keying ? sha-1 has only 160 bits, while
> >data blocks may be up to 56k long. so, the mapping is only
> >unique into one direction (not one-to-one). how can we be
> >*really sure*, that - even on very large storages (TB or
> >even PB) - data to each key is alway (one-to-one) unique ?
> 
> http://www.nmt.edu/~val/review/hash/index.html
> 
> Not that this analysis is without flaws, though.

have you invented the 9fans.net effect?

this link may or may not be similar.  but it is on point:
http://www.valhenson.org/review/hash.pdf

do you care to elaborate on the flaws of this analysis?

- erik


Re: [9fans] thoughts about venti+fossil

2008-03-04 Thread geoff
I think that /sys/doc/venti/venti.pdf answers your questions.



Re: [9fans] thoughts about venti+fossil

2008-03-04 Thread Roman Shaposhnik

On Mar 4, 2008, at 8:43 PM, erik quanstrom wrote:


On Mar 4, 2008, at 8:00 PM, Enrico Weigelt wrote:

some thoughts about venti that go around in my mind:

1. how stable is the keying ? sha-1 has only 160 bits, while
   data blocks may be up to 56k long. so, the mapping is only
   unique into one direction (not one-to-one). how can we be
   *really sure*, that - even on very large storages (TB or
   even PB) - data to each key is alway (one-to-one) unique ?


http://www.nmt.edu/~val/review/hash/index.html

Not that this analysis is without flaws, though.


have you invented the 9fans.net effect?


Meaning? I guess the reference went over my head.


this link may or may not be similar.  but it is on point:
http://www.valhenson.org/review/hash.pdf


I believe it to be exactly the same paper.


do you care to elaborate on the flaws of this analysis?


I tend to agree with the counter-arguments published here:
   http://monotone.ca/docs/Hash-Integrity.html
I'm not an expert in this field (although I have dabbled
in cryptography somewhat, given my math background) and
thus I would love it if somebody could show that the
counter-arguments don't stand.

Thanks,
Roman.


Re: [9fans] thoughts about venti+fossil

2008-03-04 Thread Enrico Weigelt
* Roman Shaposhnik <[EMAIL PROTECTED]> wrote:
> On Mar 4, 2008, at 8:00 PM, Enrico Weigelt wrote:
> >some thoughts about venti that go around in my mind:
> >
> >1. how stable is the keying ? sha-1 has only 160 bits, while
> >   data blocks may be up to 56k long. so, the mapping is only
> >   unique into one direction (not one-to-one). how can we be
> >   *really sure*, that - even on very large storages (TB or
> >   even PB) - data to each key is alway (one-to-one) unique ?
> 
> http://www.nmt.edu/~val/review/hash/index.html

Tells me what I already suspected: compare-by-hash can be a
dangerous game (even if failure is very unlikely). So I wouldn't count
current venti as secure and stable if a single store is large and
used for a long time.

cu
-- 
-
 Enrico Weigelt==   metux IT service - http://www.metux.de/
-
 Please visit the OpenSource QM Taskforce:
http://wiki.metux.de/public/OpenSource_QM_Taskforce
 Patches / Fixes for a lot dozens of packages in dozens of versions:
http://patches.metux.de/
-


Re: [9fans] thoughts about venti+fossil

2008-03-04 Thread geoff
From the fortune file:
You are roughly 2^90 times more likely to win a U.S. state lottery *and* be 
struck by lightning simultaneously than you are to encounter [an accidental 
SHA1 collision] in your file system.  - J. Black



Re: [9fans] thoughts about venti+fossil

2008-03-04 Thread Taj Khattra
>  Tells me what I already suspected: compare-by-hash can be an
>  dangerous game (even if very uncertain).

does this tell you anything you already suspected?

http://www.usenix.org/event/usenix06/tech/full_papers/black/black_html/index.html