Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread dexen deVries
On Tuesday 04 of October 2011 02:02:31 erik quanstrom wrote:
> xp won't use it correctly either.  in fact, if you're using a standard
> fdisk layout, chances are things are a little sideways on nearly any
> os.
> 
> in any event, if i were buying a 2t hard drive today, i'd get
>   http://www.newegg.com/Product/Product.aspx?Item=N82E16822136365

This RE4-GP is actually the `low-power' version -- it spins at less than
7,200 RPM (WD touts it as `IntelliPower'). The plain RE4 is the `enterprise'
7,200 RPM version.

If you can live with 1.5TB, I'd recommend the Seagate ST31500341AS: fast at
7,200 RPM, reliable, and 512B-sector based.
-- 
dexen deVries

[[[↓][→]]]

http://xkcd.com/732/



Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread Richard Miller
I said:

>> You could try telling your BIOS to use the disk in ATA (IDE) mode, and
>> see if that gives you 512-byte sector emulation.  However I seem to
>> recall posts from Erik advising that some chipsets have bugs in this
>> mode which affect Plan 9.

quans...@quanstro.net said:

> to put a point on it, ata is an abstracted command set.  ide is a register
> set and protocol for delivering ata commands modeled on the isa bus.
> 
> the disk interface (ide/ahci/something else) doesn't have anything to do
> with the sector size.  the sector size is reported in the ident block, which
> emulation layers treat as an opaque sack of bits.

But if you look at the drivers, disk interface (ide/ahci/something else) has
everything to do with the sector size, because:
- sdiahci.c uses drive->unit->secsize, which comes from an ata read capacity
  command in scisonline() - not from the ident block
- sdata.c ignores drive->unit->secsize, and uses drive->secsize which is
  always set to 512

So, if the BIOS can be set to use IDE mode, the sdata driver will be
used, and will treat the drive as if it has 512-byte sectors.  This
may or may not work, depending on what "logical sector size" actually
means.  I would have thought it meant that read/write commands are
expressed in logical not physical sector terms; but then it seems odd
that the WD20EARS reports 4096-byte blocksize in the read capacity
command.  My WD10EARS behaves differently:

term% scuzz /dev/sdC0
block size: 512
capacity
 1953525167 512
ok 8




Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread Richard Miller
quans...@quanstro.net:
> you may use 512
> byte sectors if you really want to suffer terrible performance
> (usually 1/3 the normal performance for reasonably random
> workloads).

Why will performance be terrible using 512-byte logical sectors on a
4096-byte physical sector disk?  Both fossil and venti use multi-sector
blocks (the default is 8KB), so as long as partitions are
carefully aligned (as I suggested), won't all the actual disk
transfers be correctly aligned and sized to take advantage of the
large physical sectors?




Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread Richard Miller
quans...@quanstro.net:
> word 106 6003 [valid=1]
>   multiple log/phys? 1
>   log/phys 8
>   logical sector size 0 [valid=0]

On my WD10EARS, word 106 of the ATA identify block is 0, indicating
no support for logical sectors.
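
For reference, here is a small sketch -- an editorial addition, not from
the thread -- of how identify word 106 decodes per the ATA-8 ACS drafts,
matching the fields erik quoted (0x6003 gives multiple log/phys 1,
log/phys 8, logical sector size valid 0):

#include <u.h>
#include <libc.h>

/* decode identify-device word 106 (physical/logical sector size) */
void
decode106(ushort w)
{
	if((w & (1<<15)) || !(w & (1<<14))){
		print("word 106 not valid: assume 512-byte sectors\n");
		return;
	}
	print("multiple log/phys? %d\n", (w>>13) & 1);		/* set on 4k-physical drives */
	print("log/phys %d\n", 1 << (w & 0xf));			/* logical sectors per physical */
	print("logical sector size valid %d\n", (w>>12) & 1);	/* words 117-118 hold it if set */
}

void
main(int argc, char **argv)
{
	if(argc != 2)
		sysfatal("usage: decode106 word");
	decode106(strtoul(argv[1], nil, 0));
	exits(nil);
}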

I've just tried the experiment of creating eight 1GB partitions
aligned at each possible logical sector offset, and found that
alignment doesn't seem to affect the writing speed significantly:

term% awk '/test/ {print $0,"align " $3 % 8}' /dev/sdC0/ctl
part test0 28249856 30249856 align 0
part test1 30249857 32249857 align 1
part test2 32249858 34249858 align 2
part test3 34249859 36249859 align 3
part test4 36249860 38249860 align 4
part test5 38249861 40249861 align 5
part test6 40249862 42249862 align 6
part test7 42249863 44249863 align 7
term% for (i in /dev/sdC0/test?) { time rc -c 'dd -if /dev/zero -of '$i' -bs 4k -count 250000 >[2]/dev/null' }
0.57u 8.43s 20.40r   rc -c dd -if /dev/zero -of /dev/sdC0/test0 -bs 4k -count 250000 >[2]/dev/null
0.50u 8.65s 20.71r   rc -c dd -if /dev/zero -of /dev/sdC0/test1 -bs 4k -count 250000 >[2]/dev/null
0.64u 8.44s 20.73r   rc -c dd -if /dev/zero -of /dev/sdC0/test2 -bs 4k -count 250000 >[2]/dev/null
0.60u 8.86s 20.54r   rc -c dd -if /dev/zero -of /dev/sdC0/test3 -bs 4k -count 250000 >[2]/dev/null
0.64u 8.70s 20.77r   rc -c dd -if /dev/zero -of /dev/sdC0/test4 -bs 4k -count 250000 >[2]/dev/null
0.55u 9.00s 20.62r   rc -c dd -if /dev/zero -of /dev/sdC0/test5 -bs 4k -count 250000 >[2]/dev/null
0.48u 8.51s 20.61r   rc -c dd -if /dev/zero -of /dev/sdC0/test6 -bs 4k -count 250000 >[2]/dev/null
0.59u 8.16s 20.78r   rc -c dd -if /dev/zero -of /dev/sdC0/test7 -bs 4k -count 250000 >[2]/dev/null
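
As a reference, test partitions like these can be created by writing part
messages straight to the disk's ctl file.  A minimal rc sketch follows (an
editorial illustration, not Richard's actual commands: the start sector is
taken from the listing above, hoc is used only for the arithmetic, and that
region of the disk is assumed to be free):

term% s=28249856
term% for(i in 0 1 2 3 4 5 6 7){
	echo part test$i $s `{hoc -e $s+2000000} >/dev/sdC0/ctl
	s=`{hoc -e $s+2000001}
}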

This all seems to contradict WD's claim (*) that the WD10EARS is an
advanced format drive.

(*) http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-701229.pdf




Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread Charles Forsyth
perhaps there's an option bit?

On 4 October 2011 12:28, Richard Miller <9f...@hamnavoe.com> wrote:

> This all seems to contradict WD's claim (*) that the WD10EARS is an
> advanced format drive.
>


Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread Richard Miller
> perhaps there's an option bit?

If the drive was physically formatted with 4096-byte sectors,
I can't see how changing a logical bit could prevent unaligned
writes from causing a read-modify-write cycle.




Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread Charles Forsyth
no, i meant to select what the drive advertises. it would be a bit
disconcerting if flipping a bit had to reformat a drive!

On 4 October 2011 13:01, Richard Miller <9f...@hamnavoe.com> wrote:

> > perhaps there's an option bit?
>
> If the drive was physically formatted with 4096-byte sectors,
> I can't see how changing a logical bit could prevent unaligned
> writes from causing a read-modify-write cycle.
>
>
>


Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread erik quanstrom
On Tue Oct  4 07:29:18 EDT 2011, 9f...@hamnavoe.com wrote:
> quans...@quanstro.net:
> > word 106 6003 [valid=1]
> >   multiple log/phys? 1
> >   log/phys 8
> >   logical sector size 0 [valid=0]
> 
> On my WD10EARS, word 106 of the ATA identify block is 0, indicating
> no support for logical sectors.

the other guy has a WD20EARS which is a completely different drive.

- erik



Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread erik quanstrom
On Tue Oct  4 08:02:13 EDT 2011, 9f...@hamnavoe.com wrote:
> > perhaps there's an option bit?
> 
> If the drive was physically formatted with 4096-byte sectors,
> I can't see how changing a logical bit could prevent unaligned
> writes from causing a read-modify-write cycle.

you aren't up with the level of sleaze involved.  what some drive
makers do is give you an option to offset lbas by 1.  since the
standard fat layout tends to give (lba%16) == 15, if the lba used
at the interface is 63 (unaligned) the drive will add 1 and get an
aligned lba of 64.

- erik



Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread Charles Forsyth
i see that to find all the answers i might have to pay $300 for an
individual membership to another organisation;
probably the membership will be about as useful as the VGA ones. thieving
bastards.


Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread Richard Miller
> the other guy has a WD20EARS which is a completely different drive.

Sure, but all the EARS series are claimed (in the spec sheet ref'd
in my previous message) to be advanced format i.e. 4096-byte physical
sectors.




Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread dexen deVries
On Tuesday 04 of October 2011 14:13:40 Charles Forsyth wrote:
> i see that to find all the answers i might have to pay $300 for an
> individual membership to another organisation;
> probably the membership will be about as useful as the VGA ones. thieving
> bastards.

if you'll excuse,

Q.  What's so great about ISO standardization?

A.  It is often said that one of the advantages of SGML over some
other, proprietary, generic markup scheme is that "nobody owns
the standard".  While this is not strictly true, the ISO's pricing
policy certainly has helped to keep the number of people who do own
a copy of the Standard at an absolute minimum.

from http://www.flightlab.com/~joe/sgml/faq-not.txt


-- 
dexen deVries

[[[↓][→]]]

http://xkcd.com/732/



Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread erik quanstrom
On Tue Oct  4 08:16:00 EDT 2011, 9f...@hamnavoe.com wrote:
> > the other guy has a WD20EARS which is a completely different drive.
> 
> Sure, but all the EARS series are claimed (in the spec sheet ref'd
> in my previous message) to be advanced format i.e. 4096-byte physical
> sectors.

sure, that just specifies the physical layout.  not what they claim at
the interface.

drives aren't "hardware" in the traditional sense anymore.

- erik



Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread erik quanstrom
On Tue Oct  4 08:09:55 EDT 2011, charles.fors...@gmail.com wrote:

> no, i meant to select what the drive advertises. it would be a bit
> disconcerting if flipping a bit had to reformat a drive!

well they advertise *two* different sector sizes, a logical size and a
physical size.  the drive never changes the physical format, but may
do read/modify/write cycles in the case of writes that don't cover a
full physical sector.
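
To make the read-modify-write concrete, here is a rough editorial sketch of
what the firmware effectively does for a 512-byte logical write landing in a
4096-byte physical sector (Disk, physread and physwrite are made-up
placeholders, not real firmware or driver code):

#include <u.h>
#include <libc.h>

typedef struct Disk Disk;			/* hypothetical drive handle */
void physread(Disk*, uvlong, uchar*);		/* hypothetical: read one 4k physical sector */
void physwrite(Disk*, uvlong, uchar*);		/* hypothetical: write one 4k physical sector */

/* merge a 512-byte logical write into its enclosing 4k physical sector */
void
rmw512(Disk *d, uvlong lba, uchar *data)
{
	uchar sect[4096];

	physread(d, lba/8, sect);		/* extra rotation: read the physical sector first */
	memmove(sect + (lba%8)*512, data, 512);	/* merge the 512-byte logical sector */
	physwrite(d, lba/8, sect);		/* write the whole physical sector back */
}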

- erik



Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread erik quanstrom
> part test7 42249863 44249863 align 7
> term% for (i in /dev/sdC0/test?) { time rc -c 'dd -if /dev/zero -of '$i' -bs 4k -count 250000 >[2]/dev/null' }
> 0.57u 8.43s 20.40r rc -c dd -if /dev/zero -of /dev/sdC0/test0 -bs 4k -count 250000 >[2]/dev/null
> 0.50u 8.65s 20.71r rc -c dd -if /dev/zero -of /dev/sdC0/test1 -bs 4k -count 250000 >[2]/dev/null
> 0.64u 8.44s 20.73r rc -c dd -if /dev/zero -of /dev/sdC0/test2 -bs 4k -count 250000 >[2]/dev/null
> 0.60u 8.86s 20.54r rc -c dd -if /dev/zero -of /dev/sdC0/test3 -bs 4k -count 250000 >[2]/dev/null
> 0.64u 8.70s 20.77r rc -c dd -if /dev/zero -of /dev/sdC0/test4 -bs 4k -count 250000 >[2]/dev/null
> 0.55u 9.00s 20.62r rc -c dd -if /dev/zero -of /dev/sdC0/test5 -bs 4k -count 250000 >[2]/dev/null
> 0.48u 8.51s 20.61r rc -c dd -if /dev/zero -of /dev/sdC0/test6 -bs 4k -count 250000 >[2]/dev/null
> 0.59u 8.16s 20.78r rc -c dd -if /dev/zero -of /dev/sdC0/test7 -bs 4k -count 250000 >[2]/dev/null
> 

250000*4096/20.78 = 49 mb/s.  this is less than 1/2 the available
bandwidth, giving the drive lots of wiggle room.  and since you're
doing sequential i/o, the drive can do write combining.

- erik



Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread Richard Miller
> i see that to find all the answers i might have to pay $300 for an
> individual membership to another organisation;

google ata.command.set filetype:pdf

gives you lots of free draft versions, pick one.




Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread Charles Forsyth
i had in mind something that would make the drive look as much like an old
drive as possible,
including lying about simply everything, including the underlying physical
sector size. surely that's
suitable for the PC heritage.

On 4 October 2011 13:17, erik quanstrom  wrote:

> On Tue Oct  4 08:09:55 EDT 2011, charles.fors...@gmail.com wrote:
>
> > no, i meant to select what the drive advertises. it would be a bit
> > disconcerting if flipping a bit had to reformat a drive!
>
> well they advertize *two* different sector sizes, a logical size and a
> physical size.  the drive never changes the physical format, but may
> do read/modify/write cycles in the case of writes that don't cover a
> full physical sector.
>
> - erik
>
>


Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread Richard Miller
> 25*4096/20.78 = 49 mb/s.  this is less than 1/2 the available
> bandwidth giving the drive lots of wiggle room.  and since you're
> doing sequential i/o the drive can do write combining.

Is there any experiment I can do (not involving a crowbar and a
microscope) to find out the real physical sector size?  Bigger
transfers get more of the bandwidth, but then a smaller proportion
of the transfer needs read/modify/write.  I could do random addressing
but then I would expect seek time to dominate.

Not that it matters (my drive works fine), but I'm curious!




Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread erik quanstrom
> i had in mind something that would make the drive look as much like an old
> drive as possible,
> including lying about simply everything, including the underlying physical
> sector size. surely that's
> suitable for the PC heritage.

we used to care what the c/h/s of a drive was.  now we don't.  anything with
a raid underneath or flash memory is going to lie about the sector/erase block
size.  the writing is on the wall.  logical sector size != physical sector size.

- erik



Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread dexen deVries
On Tuesday 04 of October 2011 14:33:27 Richard Miller wrote:
> > 25*4096/20.78 = 49 mb/s.  this is less than 1/2 the available
> > bandwidth giving the drive lots of wiggle room.  and since you're
> > doing sequential i/o the drive can do write combining.
> 
> Is there any experiment I can do (not involving a crowbar and a
> microscope) to find out the real physical sector size?  Bigger
> transfers get more of the bandwidth, but then a smaller proportion
> of the transfer needs read/modify/write.  I could do random addressing
> but then I would expect seek time to dominate.

compare write bandwidth for 4096B data chunks at offsets of the form
(n*512B+k), where `n' is random.  compare 8 runs, each with a constant `k' from
{ 0, 1, ... 7 }.  sync after every write, and write much more than the drive
cache size in each run (probably 64MB on modern HDs).

if your drive is 4k, one of the runs will land on exact sector boundaries, and
the 7 others will read-modify-write two sectors every time, so one run will
have much better performance.

if your drive is 512b, all runs will have the same performance.


-- 
dexen deVries

[[[↓][→]]]

http://xkcd.com/732/



Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread dexen deVries
On Tuesday 04 of October 2011 14:52:49 dexen deVries wrote:
> compare write bandwidth for 4096B data chunks at offset modulo 512B:
> (n*512B+k), where `n' is random. compare 8 runs, each with const `k' from {
> 0, 1, ...7 }. sync after every write. write much more than drive cache
> size at each run (probably 64MB on modern HDs).


oops, was supposed to be (n+k) * 512B.

if you expect seek times to dominate, perhaps
for (..; ++n; ..) {
  write 4096B @ (n*8+k) * 512B; sync;
}
would be better -- that is, walk sequentially but sync in 4096B chunks,
possibly straddling 4k sectors (when `k' mis-aligns).
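
A rough Plan 9 C sketch of that probe (an editorial illustration, not dexen's
code; the device path, io count and `k' are whatever you choose, and the
drive's write cache can still hide the effect unless the run is large or the
cache is turned off):

#include <u.h>
#include <libc.h>

/*
 * for a given k (0..7), write 4096-byte chunks at byte offsets
 * (n*8+k)*512 for n = 0,1,2,... and time the run.  only k == 0 lands
 * on 4k physical-sector boundaries.  stepping by 16 sectors instead
 * of 8 (i.e. by 8k, as suggested later in the thread) also defeats
 * write combining.
 */
void
main(int argc, char **argv)
{
	int fd, k;
	long n, nio;
	vlong t;
	static uchar buf[4096];

	if(argc != 3)
		sysfatal("usage: alignprobe dev k");
	fd = open(argv[1], OWRITE);
	if(fd < 0)
		sysfatal("open: %r");
	k = atoi(argv[2]);
	nio = 25000;
	t = -nsec();
	for(n = 0; n < nio; n++)
		if(pwrite(fd, buf, sizeof buf, ((vlong)n*8+k)*512) != sizeof buf)
			sysfatal("pwrite: %r");
	t += nsec();
	print("k=%d: %lld ms\n", k, t/1000000);
	exits(nil);
}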


-- 
dexen deVries

[[[↓][→]]]

http://xkcd.com/732/



[9fans] namec() dislikes #M - why?

2011-10-04 Thread Steve Simon
Why can I not re-bind #M?

/sys/src/9/port/chan.c:1374

I want to use auth/newns to flush out my namespace
then build a new one including some bits of devcons.

Is this a security issue for cpu servers? I cannot see
what it is protecting me from.

Also, why is this done here and not in fsattach() of devcons
(with a refcount)?

-Steve




[9fans] advanced format drives

2011-10-04 Thread erik quanstrom
to save grief in the short term, i've disabled recognizing 4k sector drives
as 4k.  as richard miller points out, this will allow one who carefully aligns
i/o and does so in chunks that are 0 mod 4k, to not incur r/m/w cycles.
(not that venti or fossil can push a disk hard enough for this to matter.)

i've rerolled the atom cd with this change.

- erik



Re: [9fans] namec() dislikes #M - why?

2011-10-04 Thread Russ Cox
#M is not a nameable device.
It is a pseudo-device representing mounted connections.
Instead of writing

bind -b #M123 /dev

You are supposed to write

mount -b /srv/whatever /dev

My memory is that the name space files use
the latter form unless the original name is gone.

Russ



Re: [9fans] namec() dislikes #M - why?

2011-10-04 Thread Charles Forsyth
#M, but devcons? devcons is #c

On 4 October 2011 14:34, Steve Simon  wrote:

> then build a new one including some bits of devcons.

 ...
>
> Also, why is this done here and not in fsattach() of devcons
> (with a refcount)?


Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread erik quanstrom
> Is there any experiment I can do (not involving a crowbar and a
> microscope) to find out the real physical sector size?  Bigger
> transfers get more of the bandwidth, but then a smaller proportion
> of the transfer needs read/modify/write.  I could do random addressing
> but then I would expect seek time to dominate.
> 
> Not that it matters (my drive works fine), but I'm curious!

i think the attached program ought to work.  here are some local
results.  no 4k-sector drives.  sorry.  if this doesn't work, especially for
writing, it's probably a caching effect.  you may need to run till
the cache fills up (64mb or so) and then start timing.

iosz	r (s)	riops	w (s)	wiops	model
512	12.227	81.78	5.531	180.78	Hitachi HUA721050KLA330
4096	12.363	80.88	5.649	176.99	Hitachi HUA721050KLA330
512	5.957	167.85	2.636	379.26	SEAGATE ST373455SS (sas)
4096	5.946	168.15	2.665	375.19	SEAGATE ST373455SS (sas)
512	0.181	5494.54	0.168	5941.86	SSDSA2SH064G1GC INTEL (ssd)
4096	0.317	3150.64	0.213	4689.26	SSDSA2SH064G1GC INTEL (ssd)

as already known, none of these appear to have any sector size weirdness.

surprising, considering the intel ssd is attached to an openrd!

- erik

#include <u.h>
#include <libc.h>

enum {
	Ios	= 1000,		/* number of ios / test */
	Nanoi	= 1000000000,	/* 1/nano */
	Microi	= 1000000,	/* 1/micro */
};

enum {
	Read	= 1<<0,
	Write	= 1<<1,
};

static int otab[4] = {
[Read]		OREAD,
[Write]		OWRITE,
[Read|Write]	ORDWR,
};

static char *rwtab[4] = {
[Read]		"read",
[Write]		"write",
[Read|Write]	"rdwr",
};

void
rio(int fd, uvlong byte0, uvlong bytes, uint ss, int rw)
{
	char *buf;
	int i;
	uvlong maxlba, lba, t, x;
	long (*io)(int, void*, long, vlong);

	maxlba = (bytes - byte0) / ss;
	if(rw == Read)
		io = pread;
	else if(rw == Write)
		io = pwrite;
	else{
		io = nil;	/* compiler noise */
		sysfatal("not prepared to mix read/write yet");
	}

	buf = malloc(ss);
	if(buf == nil)
		sysfatal("malloc");

	srand(nsec());
	t = -nsec();
	for(i = 0; i < Ios; i++){
		lba = frand()*maxlba;	/* awful.  no vlnrand() */
		io(fd, buf, ss, byte0 + lba*ss);
	}
	t += nsec();
	free(buf);

	print("%s %lld.%03lld\n", rwtab[rw], t/Nanoi, (t%Nanoi)/Microi);
	x = (uvlong)Ios*(uvlong)Nanoi*100;
	x /= t;
	print("%lld.%02lld iops\n", x/100, (x%100));
}

void
usage(void)
{
	fprint(2, "usage: randomio [-rw] [-o offset] [-s sectorsz] [file ...]\n");
	exits("usage");
}

void
main(int argc, char **argv)
{
	int i, byte0, fd, rw, ss;
	uvlong bytes;
	Dir *d;

	rw = 0;
	ss = 512;
	byte0 = 0;
	ARGBEGIN{
	case 'r':
		rw |= Read;
		break;
	case 'w':
		rw |= Write;
		break;
	case 'o':
		byte0 = atoi(EARGF(usage()));
		break;
	case 's':
		ss = atoi(EARGF(usage()));
		break;
	default:
		usage();
	}ARGEND;

	if(rw == 0)
		rw = Read;

	for(i = 0; i < argc; i++){
		d = dirstat(argv[i]);
		if(d == nil)
			sysfatal("dirstat: %r");
		bytes = d->length;
		free(d);

		fd = open(argv[i], otab[rw]);
		if(fd == -1)
			sysfatal("open: %r");

		if(rw & Read)
			rio(fd, byte0, bytes, ss, Read);
		if(rw & Write)
			rio(fd, byte0, bytes, ss, Write);
		close(fd);
	}
	exits("");
}

Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread Richard Miller
> i think the attached program ought to work.

Doing i/o to random blocks, aren't you mostly measuring seek time?




Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread erik quanstrom
On Tue Oct  4 11:57:27 EDT 2011, 9f...@hamnavoe.com wrote:
> > i think the attached program ought to work.
> 
> Doing i/o to random blocks, aren't you mostly measuring seek time?

the drive will write-combine sequential io, so that's not an option.
i've definitely seen artifacts when doing completely random io on
4k drives.

you can try rewriting it to step by 8k and retry the test.

- erik



Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread Richard Miller
> you can try rewriting it to step by 8k and retry the test.

Bingo, thanks.

I was still trying to figure out how to force the drive to sync
as Dexen suggested (maybe using scsicmd to issue an ata FLUSH CACHE
command?), but stepping by 8k does reveal a strong pattern.

Writing 25000 4k records stepping by 8k takes
  sector offset 1-7 mod 8: 27-30 sec
  sector offset 0 mod 8:   14-17 sec

Conclusion: WD10EARS has 4k byte physical sectors but is pretty
good at concealing it.




Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread erik quanstrom
> > you can try rewriting it to step by 8k and retry the test.
> 
> Bingo, thanks.
>
> I was still trying to figure out how to force the drive to sync
> as Dexen suggested (maybe using scsicmd to issue an ata FLUSH CACHE
> command?), but stepping by 8k does reveal a strong pattern.

you can either use the raw device to send raw write dma ext fua,
or you can explicitly turn off the cache.

flush cache flushes anything from the cache, so it can be misleading.

> Writing 25000 4k records stepping by 8k takes
>   sector offset 1-7 mod 8: 27-30 sec
>   sector offset 0 mod 8:   14-17 sec
> 
> Conclusion: WD10EARS has 4k byte physical sectors but is pretty
> good at concealing it.

it's amazing how fast you can be if you don't care about losing data
on power loss.  :-)

- erik



[9fans] copying over 9P using plan9port

2011-10-04 Thread Jens Staal
Hi.

First of all, sorry if, through my ignorance, I am attempting something
completely stupid.

I have been trying to copy the APE sources using plan9port.  The purpose
is to make an APE augmentation PKGBUILD for the kencc package [1] for
Arch Linux (and ultimately to figure out how to make a kencc+APE cross
compiler for Plan 9 on a Linux host).

I have also asked at the Arch forums [2] but no answers yet, so I
guess it is nothing anyone has attempted...

I can mount the sources at bell labs using the following procedure

mkdir sources
9 mount 'tcp!sources.cs.bell-labs.com' sources

I can then cd and explore the bell labs sources via plan9port, so that
works just fine.

When I then try something like

cp -ar sources/plan9/sys/src/ape ape

I get an error stating:
unexpected open flags 050cp: can not open
”sources/plan9/sys/src/ape/9src/mkfile” for reading: Access denied

What I now wonder is: Is this the expected behaviour? Is anonymous
copying from the Bell labs sources blocked? Or are the permission
issues local and something I can fix with a mount option? I could not
find anything using 9 man mount...

[1] https://aur.archlinux.org/packages.php?ID=49835
[2] https://bbs.archlinux.org/viewtopic.php?id=127522



[9fans] sources, others, offline

2011-10-04 Thread Lyndon Nerenberg
Sources and the plan9 website are unreachable at the moment.  It appears 
that 204.178.31.x hosts are unreachable, and since both ns1 and 
ns2.cs.bell-labs.com are on that network, neither plan9.bell-labs.com nor
*.cs.bell-labs.com is resolving.




Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread Charles Forsyth
that's certainly the linux way, although to be fair, its fsck does a really
good job of making a scramble worse.
that reminds me that i've still got a big file with a scrambled partition to
unscramble to
get back some big data i wanted to keep.

On 4 October 2011 17:45, erik quanstrom  wrote:

> it's amazing how fast you can be if you don't care about losing data
> on power loss.  :-)
>


Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread dexen deVries
On Tuesday 04 October 2011 19:52:10 Charles Forsyth wrote:
> that's certainly the linux way, although to be fair, its fsck does a really
> good job of making a scramble worse.

for those stuck on linux:
http://www.nilfs.org/en/ is quite immune to inconsistencies on dirty shutdown,
yet performs well for both read and write (except file removal is slow in some
cases). upon mount of a dirty filesystem, nilfs2 reverts to the last written
transaction.

aside from the usual 4kB low-level clusters, there's a high-level garbage-
collection unit of 8MB. any free space on the filesystem is at least 8MB large;
that provides for decent write performance on a highly fragmented and almost
full hard drive. this also helps performance on SSDs -- little or no erases
happen across erase units.

oh, and by default it doesn't come with fsck ;-) (available only on a separate 
branch of utils).


-- 
dexen deVries

> It's called trolling. It's been done since there were bangs in people's 
email addresses.

thaumaturgy, on HN



Re: [9fans] copying over 9P using plan9port

2011-10-04 Thread David du Colombier
> What I now wonder is: Is this the expected behaviour?

No. You are doing it fine.
It's working for me right now, authenticated and unauthenticated.

I don't know why it doesn't work for you.
Beware "9 mount" should be spelled "9mount" in your messages.

What kernel version are you using?

-- 
David du Colombier



Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread Bakul Shah
On Tue, 04 Oct 2011 12:45:58 EDT erik quanstrom   wrote:
> > Writing 25000 4k records stepping by 8k takes
> >   sector offset 1-7 mod 8: 27-30 sec
> >   sector offset 0 mod 8:   14-17 sec
> > 
> > Conclusion: WD10EARS has 4k byte physical sectors but is pretty
> > good at concealing it.

Weren't there 4k sector disks that mapped the 1st 4k sector to
just the mbr? So the next four 512B sectors got mapped to the
next 4k sector and so on.  As a result your nicely "aligned"
on 4k boundaries IO wouldn't be aligned at all.  Are there
still such disks one should avoid?

> it's amazing how fast you can be if you don't care about losing data
> on power loss.  :-)

it's amazing how simple your life can be if you don't care
about losing data :-) I call it garbage collection! Because
if you really cared you'd have already saved your stuff!
[Cue George Carlin on "stuff"]



Re: [9fans] copying fossil filesystem to a bigger disk

2011-10-04 Thread erik quanstrom
On Tue Oct  4 14:25:29 EDT 2011, ba...@bitblocks.com wrote:
> On Tue, 04 Oct 2011 12:45:58 EDT erik quanstrom   
> wrote:
> > > Writing 25000 4k records stepping by 8k takes
> > >   sector offset 1-7 mod 8: 27-30 sec
> > >   sector offset 0 mod 8:   14-17 sec
> > > 
> > > Conclusion: WD10EARS has 4k byte physical sectors but is pretty
> > > good at concealing it.
> 
> Weren't there 4k sector disks that mapped the 1st 4k sector to
> just the mbr? So the next four 512B sectors got mapped to the
> next 4k sector and so on.  

that's not what they do.  they offset the lba by one, so the mbr
actually lands at byte offset 512 in the first physical sector, and sector 63
(where most of the action starts) lands at byte 512*63 + 512 = 32768,
which is physical 4k sector 32768/4096 = 8, offset 32768%4096 = 0.

i think the wd tool can change this alignment.  i'm sure there are some
magic set features commands.

- erik



Re: [9fans] Weird plan9 Xen DonU problems

2011-10-04 Thread Richard Miller
I can't spot anything obvious.  I'm not aware of anyone trying
Plan 9 on xen on NetBSD before, so maybe there's something different
in that environment.

> -rw-r--r--  1 root  wheel  1073741824 Sep 26 08:08 plan9.img

You could try 'chmod 666 plan9.img', just in case it's a permission thing.




Re: [9fans] copying over 9P using plan9port

2011-10-04 Thread Jens Staal
2011/10/4 David du Colombier <0in...@gmail.com>:
>> What I now wonder is: Is this the expected behaviour?
>
> No. You are doing it fine.
> It's working for me right now, authenticated and unauthenticated.
>
> I don't know why it doesn't work for you.
> Beware "9 mount" should be spelled "9mount" in your messages.
>
> What kernel version are you using?
>
> --
> David du Colombier
>
>

uname -r:
3.0-ARCH

Since you asked about the kernel version: I guessed the problem might be
a permission issue on my side, so I also tried doing the same
mount and copy as root.
That also failed, so I am confused.



Re: [9fans] copying over 9P using plan9port

2011-10-04 Thread Russ Cox
What does 'mount' (not 9 mount, just mount)
say after you mount the file system?

That will tell you whether the '9 mount' used
v9fs (Linux 9P driver) or 9pfuse (user-space
9P-to-FUSE translator).

Neither gets much use, so it is easy to believe
that there is a bug.

Russ



Re: [9fans] copying over 9P using plan9port

2011-10-04 Thread Brian L. Stuart
> I can then cd and explore the bell labs sources via
> plan9port, so that
> works just fine.
> 
> When I then try something like
> 
> cp -ar sources/plan9/sys/src/ape ape
> 
> I get an error stating:
> unexpected open flags 050cp: can not open
> ”sources/plan9/sys/src/ape/9src/mkfile” for reading:
> Access denied

Give it a shot without the -a.  I've had a lot of issues
with the strange attribute flags in "modern" Unices.  The
issues have usually been when writing via 9p, but it's
worth a try to see if that has anything to do with it.
Any idea what the 050 flags indicate on your system?

BLS




Re: [9fans] copying over 9P using plan9port

2011-10-04 Thread Russ Cox
To answer my question: the error message comes from 9pfuse.
The extra bits are O_NOFOLLOW and O_LARGEFILE, both of
which seem harmless in this context.  Try this:


diff -r 6db8fc2588f6 src/cmd/9pfuse/main.c
--- a/src/cmd/9pfuse/main.c Mon Oct 03 18:16:09 2011 -0400
+++ b/src/cmd/9pfuse/main.c Tue Oct 04 15:43:16 2011 -0400
@@ -577,6 +577,13 @@
openmode = flags&3;
flags &= ~3;
flags &= ~(O_DIRECTORY|O_NONBLOCK|O_LARGEFILE|O_CLOEXEC);
+#ifdef O_NOFOLLOW
+   flags &= ~O_NOFOLLOW;
+#endif
+#ifdef O_LARGEFILE
+   flags &= ~O_LARGEFILE;
+#endif
+
/*
 * Discarding O_APPEND here is not completely wrong,
 * because the host kernel will rewrite the offsets
@@ -594,7 +601,7 @@
 *  O_NONBLOCK -> ONONBLOCK
 */
if(flags){
-   fprint(2, "unexpected open flags %#uo", (uint)in->flags);
+   fprint(2, "unexpected open flags %#uo\n", (uint)in->flags);
replyfuseerrno(m, EACCES);
return;
}



Re: [9fans] copying over 9P using plan9port

2011-10-04 Thread Jens Staal
2011/10/4 Russ Cox :
> What does 'mount' (not 9 mount, just mount)
> say after you mount the file system?
>
> That will tell you whether the '9 mount' used
> v9fs (Linux 9P driver) or 9pfuse (user-space
> 9P-to-FUSE translator).
>
> Neither gets much use, so it is easy to believe
> that there is a bug.
>
> Russ
>
>

The p9p 9mount uses 9pfuse on my system



Re: [9fans] copying over 9P using plan9port

2011-10-04 Thread Jens Staal
2011/10/4 Russ Cox :
> To answer my question: the error message comes from 9pfuse.
> The extra bits are O_NOFOLLOW and O_LARGEFILE, both of
> which seem harmless in this context.  Try this:
>
>
> diff -r 6db8fc2588f6 src/cmd/9pfuse/main.c
> --- a/src/cmd/9pfuse/main.c     Mon Oct 03 18:16:09 2011 -0400
> +++ b/src/cmd/9pfuse/main.c     Tue Oct 04 15:43:16 2011 -0400
> @@ -577,6 +577,13 @@
>        openmode = flags&3;
>        flags &= ~3;
>        flags &= ~(O_DIRECTORY|O_NONBLOCK|O_LARGEFILE|O_CLOEXEC);
> +#ifdef O_NOFOLLOW
> +       flags &= ~O_NOFOLLOW
> +#endif
> +#ifdef O_LARGEFILE
> +       flags &= ~O_LARGEFILE
> +#endif
> +
>        /*
>         * Discarding O_APPEND here is not completely wrong,
>         * because the host kernel will rewrite the offsets
> @@ -594,7 +601,7 @@
>         *      O_NONBLOCK -> ONONBLOCK
>         */
>        if(flags){
> -               fprint(2, "unexpected open flags %#uo", (uint)in->flags);
> +               fprint(2, "unexpected open flags %#uo\n", (uint)in->flags);
>                replyfuseerrno(m, EACCES);
>                return;
>        }
>
>

Thanks, I will try the patch as soon as I get time.

@ brian:

doing "cp -r" instead of "cp -ar" did not make a difference



Re: [9fans] namec() dislikes #M - why?

2011-10-04 Thread Yaroslav
Perhaps Steve wants a rio-simulated /dev/cons.
Steve, doesn't mount -b $wsys /dev in the new ns do the trick?

2011/10/4 Charles Forsyth :
> #M, but devcons? devcons is #c
>
> On 4 October 2011 14:34, Steve Simon  wrote:
>>
>> then build a new one including some bits of devcons.
>
>  ...
>>
>> Also, why is this done here and not in fsattach() of devcons
>> (with a refcount)?



-- 
- Yaroslav



[9fans] radar

2011-10-04 Thread erik quanstrom
i got motivated and fixed radar so that,
- /lib/sky here is used to find the "nearest" radar station, and
- if you enter an arbitrary (us, sorry) location like "eden prairie mn", radar
will find the "nearest" radar station and display that.

i spent a few minutes looking for decent european radars, but didn't
find anything that covered the whole of europe.

- erik



Re: [9fans] radar

2011-10-04 Thread smj
> i got motivated and fixed radar so that,
> - /lib/sky here is used to find the "nearest" radar station, and
> - if you enter an arbitrary (us, sorry) location like "eden prarie mn", radar
> will find the "nearest" radar station and display that.

Very cool, it seems to be working quite well except for Seattle, which
is cloudless.  But that's due to the ATX Whidbey Island station being
down for maintenance.  However, we do have a brand new coastal radar
array which has just recently been brought online.

I added: LGX Langley Hill WA to lib/stations and LGX 47.109 -124.100
(approx) to lib/stationlat

There is an interesting read about getting the array going and what it
will do for Western Washington here:  http://tx0.org/2uy




Re: [9fans] radar

2011-10-04 Thread erik quanstrom
On Wed Oct  5 01:13:08 EDT 2011, s...@9p.sdf.org wrote:
> > i got motivated and fixed radar so that,
> > - /lib/sky here is used to find the "nearest" radar station, and
> > - if you enter an arbitrary (us, sorry) location like "eden prarie mn", 
> > radar
> > will find the "nearest" radar station and display that.
> 
> Very cool, it seems to be working quite well except for Seattle which
> is cloudless.  But thats due to the ATX Whidbey Island station being
> down for maintenance.  However, we do have a brand new coastal radar
> array which has just recently been brought online.
> 
> I added: LGX Langley Hill WA to lib/stations and LGX 47.109 -124.100
> (approx) to lib/stationlat
> 
> There is an interesting read about getting the array going and what it
> will do for Western Washington here:  http://tx0.org/2uy

i think it's already there.  what you saw was probably a bug in
finding the station code.  i had "..." instead of "???".  what was
that dek quote, "i define unix as 30 definitions of regular expressions
living under one roof."

(pull radar again.  sorry.)

; grep ATX /lib/radar/*
/lib/radar/stationlat:ATX   48.18960544933 -122.4886090504
/lib/radar/stations:ATX Everett/Seattle-Tacoma  WA

- erik



Re: [9fans] radar

2011-10-04 Thread erik quanstrom
> > I added: LGX Langley Hill WA to lib/stations and LGX 47.109 -124.100
> > (approx) to lib/stationlat
> > 
> > There is an interesting read about getting the array going and what it
> > will do for Western Washington here:  http://tx0.org/2uy
> 
> i think it's already there.  what you saw was probablly a bug in
> finding the station code.  i had "..." instead of "???".  what was
> that dek quote, "i define unix as 30 definitions of regular expressions
> living under one roof."
> 
> (pull radar again.  sorry.)
> 
> ; grep ATX /lib/radar/*
> /lib/radar/stationlat:ATX 48.18960544933 -122.4886090504
> /lib/radar/stations:ATX   Everett/Seattle-Tacoma  WA

ack!  misread.  i've pushed out /lib/radar/genlat which automatically
generates the latitude listings given the station listing to save you
from guessing.

anybody know where a current station listing can be had?  i scraped
mine from a pulldown, but there's got to be an automatic way to do this?

contributions gladly accepted.  the stationlat should be self-updating
like /sys/lib/scsicodes.

- erik