Re: [9fans] i/o error reading large sata disk
read: i/o error

i think i see the problem.  we're off by one bit.  [...]

/n/sources/plan9//sys/src/9/pc/sdata.c:1344,1350 - sdata.c:1344,1350
  };
  
  static int
- atageniostart(Drive* drive, vlong lba)
+ atageniostart(Drive* drive, uvlong lba)
  {
  	Ctlr *ctlr;
  	uchar cmd;
/n/sources/plan9//sys/src/9/pc/sdata.c:1351,1357 - sdata.c:1351,1357
  	int as, c, cmdport, ctlport, h, len, s, use48;
  
  	use48 = 0;
- 	if((drive->flags&Lba48always) || (lba>>28) || drive->count > 256){
+ 	if((drive->flags&Lba48always) || (lba>>27) || drive->count > 256){
  		if(!(drive->flags & Lba48))
  			return -1;
  		use48 = 1;

while this does fix the problem, it's sloppy.  the problem is actually that ata reports device sizes as number of sectors+1.  it also does not follow the tradition used for sector counts, where 0 sector count = all-ones+1 = 256.  this is because removable media drives with no media (eg cdroms) give size = 0.  therefore, under any ata addressing scheme, the all-ones sector is not accessible.  credit to sam hopkins for pointing this out.

/n/sources/plan9//sys/src/9/pc/sdata.c:1344,1350 - sdata.c:1344,1350
  };
  
+ enum{
+ 	Last28	= (1<<28) - 1 - 1,
+ };
+ 
  static int
- atageniostart(Drive* drive, vlong lba)
+ atageniostart(Drive* drive, uvlong lba)
  {
  	Ctlr *ctlr;
  	uchar cmd;
/n/sources/plan9//sys/src/9/pc/sdata.c:1351,1357 - sdata.c:1355,1361
  	int as, c, cmdport, ctlport, h, len, s, use48;
  
  	use48 = 0;
- 	if((drive->flags&Lba48always) || (lba>>28) || drive->count > 256){
+ 	if((drive->flags&Lba48always) || lba > Last28 || drive->count > 256){
  		if(!(drive->flags & Lba48))
  			return -1;
  		use48 = 1;

- erik
Re: [9fans] /lib/rfc
On 2008-Apr-22, at 10:11, erik quanstrom wrote:
> is there an existent script for populating this?

/n/sources/contrib/lyndon/rfcmirror is one.
Re: [9fans] /lib/rfc
On 2008-Apr-22, at 10:11, erik quanstrom wrote:
> is there an existent script for populating this?

Actually, it uses /lib/ietf/rfc, and the corresponding idmirror script uses /lib/ietf/id.
Re: [9fans] /lib/rfc
Yes, /lib/rfc/grabrfc. Uncomment this line:

	/cron/sys/cron:#30 9 * * * local /lib/rfc/grabrfc

---BeginMessage---
is there an existent script for populating this?

- erik
---End Message---
Re: [9fans] /lib/rfc
/lib/rfc/grabrfc? I'm running it now and it seems to be populating just fine. John
Re: [9fans] /lib/rfc
We had the same problem some time ago and had to lower the mtu by hand. Perhaps detecting too many retransmissions of the same packet could be taken as a hint of this problem, and the mtu reduced at least once in response. In any case, it's been a long time since we had this problem; I had even forgotten about it.

On Tue, Apr 22, 2008 at 8:58 PM, erik quanstrom [EMAIL PROTECTED] wrote:
> i had the usual fight with the natted dsl that doesn't pass icmp
> mustfrags.  i also elected to skip the big honking xml index
>
> 	if(! ~ $target *.xml *.xsd \
> 	&& test ! -e $LIB/$target && test -f $i){
>
> - erik
Re: [9fans] /lib/rfc
On Tue, Apr 22, 2008 at 9:12 PM, Francisco J Ballesteros [EMAIL PROTECTED] wrote:
> We had the same problem some time ago and had to lower the mtu by hand.
> Perhaps detecting too many retransmissions of the same packet could be
> considered a hint of this problem. In any case, it's been a long time
> since we had this problem. I even forgot about it.

I think we just got by with setting the mtu to some arbitrary value like 1480 instead of 1500. It was a bug in some adsl router, coupled with another bug: either it dropped the packet and did not report it, or the icmp got lost, or something. I cannot remember it too well either.

--
- curiosity sKilled the cat
[9fans] QEMU and Venti
Hello. Someone just told me the reason QEMU crashes every time I boot Plan 9: venti. With a fossil-only system, everything worked without a hitch -- until that corrupt root entry fiasco, which cost me a book I was writing, a troff preprocessor (eg, for graphing equations), my extensions to hoc(1), and other items. So I went to fossil+venti, suffered several crashes on Tiger but little else, then went to Leopard to find out that it always crashed. So I'll go back to fossil-only and be locked in prayer that it doesn't happen again. I have no knowledge of filesystem design, so I can't help with venti.

- Pietro
Re: [9fans] QEMU and Venti
On Apr 22, 2008, at 3:19 PM, Pietro Gagliardi wrote:
> Hello. Someone just told me the fault on why QEMU crashes every time
> I boot Plan 9 -- venti. With a fossil only system, everything worked
> without a hitch -- until that corrupt

On Leopard I've found that QEMU runs very slowly but crashes exceedingly reliably. So much so that it wasn't worth my effort to get Plan 9 or anything else running on it for much time at all. With luck and a lot of trying, I did get VMware to work very reliably with Plan 9 -- I've got it running right now to look at all the nice details about grabrfc currently floating about.

Note: for VMware it helps to set it up with pccpuf and modify the boot to not bring up rio. Keep the console running and use drawterm instead, and it works like a charm.

-jas
Re: [9fans] /lib/rfc
Perhaps we were lucky and did not connect to a broken router again.

> The fault on why QEMU crashes every time I boot Plan 9 -- venti. With
> a fossil only system, everything worked without a hitch -- until that corrupt

> On Leopard I've found that QEMU runs very slowly but crashes
> exceedingly reliably. So much so, that it wasn't worth my effort to

> Keep the console running and use drawterm instead and it works like a charm.

any sufficiently advanced technology eventually becomes completely indistinguishable from witchcraft
[9fans] Regenerating Venti Index Sections?
Hi, I have a ~5GB Venti which had run for some time with one 256MB Index Section; recently, the Index Section became corrupted. The Arenas are intact. I was using plan9port's Venti. Is it possible for me to reconstruct the index section? If so, how? Will the Venti be able to run without it? If I can reconstruct it, can the new index section be larger? Thanks, --vs
Re: [9fans] Regenerating Venti Index Sections?
Venkatesh Srinivas wrote:
> Hi, I have a ~5GB Venti which had run for some time with one 256MB
> Index Section; recently, the Index Section became corrupted. The
> Arenas are intact. I was using plan9port's Venti.

I run a similar system: an Ubuntu Linux box with Venti under p9p, and a VMware guest system that is a Plan 9 cpu server.

> Is it possible for me to reconstruct the index section?

Yes.

> If so, how?

By reading the manuals :-)

	man 8 venti-fmt

Rebuild the index with venti/buildindex venti.conf.

> Will the Venti be able to run without it? If I can reconstruct it,
> can the new index section be larger?

Yes. Prior to rebuilding your index, create new isect file(s) or partitions, add them to your venti.conf, and format them with, for example:

	venti/fmtisect isectXYZ isectXYZ

then rebuild the index.

> Thanks, --vs

Adrian
Re: [9fans] Regenerating Venti Index Sections?
// Is it possible for me to reconstruct the index section?

Yes. See venti-fmt(8). You can try checkindex first, and if that doesn't work for you, go to buildindex.

// Will the Venti be able to run without it?

No, you need an index. The bloom filter is the only optional (but recommended) component.

// ...can the new index section be larger?

Yes. You can either replace your old index entirely (required if you've suffered a hardware failure, for example) or build a new index using your old index as a part (if you've just run out of space). In either case, you'll need to format each index section (fmtisect) and then the index itself (fmtindex) before regenerating the index for your current data log (buildindex).

anthony
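Pulling the steps from both replies together, here is a minimal sketch of the whole rebuild sequence under p9p. The section name and paths are hypothetical; take the real arena and isect entries from your own venti.conf, and note the venti server should be stopped while you do this.

```sh
# hypothetical layout; substitute the isect names and paths
# listed in your own venti.conf

# 1. create and format a (possibly larger) index section
venti/fmtisect isect0 /path/to/isect0

# 2. reformat the index over the new section(s);
#    fmtindex reads the isect list from venti.conf
venti/fmtindex venti.conf

# 3. walk the intact arenas and repopulate the index
venti/buildindex venti.conf
```

If the old index is merely suspect rather than destroyed, checkindex (as suggested above) is worth trying before the full fmtindex/buildindex pass, since buildindex has to scan the entire data log.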