Re: [PATCH] corruption bugs in 2.6 v3

2004-03-03 Thread Dieter Nützel
On Wednesday, 3 March 2004 21:34, Chris Mason wrote:
 Hello everyone,

 These two patches fix corruption problems I've been hitting on 2.6.
 Both bugs are present in the vanilla and suse kernels.

Both do NOT fix the lilo problem.
But mkinitrd works.

Thanks,
Dieter




Re: [PATCH] updated data=ordered patch for 2.6.3

2004-03-01 Thread Dieter Nützel
On Monday, 1 March 2004 15:38, Chris Mason wrote:
 On Mon, 2004-03-01 at 09:30, Christophe Saout wrote:
  On Mon, 01.03.2004 at 15:01, Chris Mason wrote:
It seems you introduced a bug here. I installed the patches yesterday
and found a lockup on my notebook when running lilo (with /boot on
the root reiserfs filesystem).
   
A SysRq-T showed that lilo is stuck in fsync:
  
   Ugh, I use grub so I haven't tried lilo.  Could you please send me the
   full sysrq-t, this is probably something stupid.
 
  Yes. I could reproduce it by simply creating a dummy /boot volume on
  reiserfs. I copied the content of /boot, ran lilo and it hung again. The
  other reiserfs filesystems were still usable (but a global sync hangs
  afterwards). I also attached a bzipped strace of the lilo process.

 Ok, thanks.  The problem is in reiserfs_unpack(), which needs updating
 for the patch.  Fixing.

I'll test this under SuSE 2.6.3-7 (with lilo).
Or is it in?

Thanks,
Dieter


Re: v3 logging speedups for 2.6

2003-12-11 Thread Dieter Nützel
On Thursday, 11 December 2003 19:10, Chris Mason wrote:
 Hello everyone,

 This is part one of the data logging port to 2.6, it includes all the
 cleanups and journal performance fixes.  Basically, it's everything
 except the data=journal and data=ordered changes.

 The 2.6 merge has a few new things as well, I've changed things around
 so that metadata and log blocks will go onto the system dirty lists.
 This should make it easier to improve log performance, since most of the
 work will be done outside the journal locks.

 The code works for me, but should be considered highly experimental.  In
 general, it is significantly faster than vanilla 2.6.0-test11, I've done
 tests with dbench, iozone, synctest and a few others.  streaming writes
 didn't see much improvement (they were already at disk speeds), but most
 other tests did.

 Anyway, for the truly daring among you:

 ftp.suse.com/pub/people/mason/patches/data-logging/experimental/2.6.0-test11

 The more bug reports I get now, the faster I'll be able to stabilize
 things.  Get the latest reiserfsck and check your disks after each use.

Chris,

with which kernel should I start on my SuSE 9.0?
A special SuSE 2.6.0-test11 + data logging?
Or plain vanilla? --- There are so many patches in the SuSE kernels...

Greetings,
Dieter
-- 
Dieter Nützel
@home: Dieter.Nuetzel () hamburg ! de


data-logging finally for 2.4.23?

2003-09-03 Thread Dieter Nützel
What's up Chris?

Your latest stuff is working fine on 2.4.22-rc1-rl (preemption; I haven't had
time for a newer version yet).

ls -l patches/2.4.22-data-logging
total 89
drwxr-xr-x 2 root root   536 Aug  6 04:50 .
drwxr-xr-x 4 root root   408 Sep  3 13:46 ..
-rw-r--r-- 1 root root  1251 Jul 13 16:08 02-akpm-b_journal_head-1.diff.bz2
-rw-r--r-- 1 root root  4929 Jul 13 16:08 04-reiserfs-sync_fs-4.diff.bz2
-rw-r--r-- 1 root root 32068 Jul 13 16:08 05-data-logging-39.diff.bz2
-rw-r--r-- 1 root root  5724 Jul 13 16:08 06-reiserfs-jh-3.diff.bz2
-rw-r--r-- 1 root root   424 Jul 13 16:08 06-write_times.diff.bz2
-rw-r--r-- 1 root root 10378 Jul 13 16:08 07-reiserfs-quota-28.diff.bz2
-rw-r--r-- 1 root root  1541 Jul 13 16:08 08-kinoded-9.diff.bz2
-rw-r--r-- 1 root root  1097 Jul 13 16:08 10-reiserfs-quota-link-fix.diff.bz2
-rw-r--r-- 1 root root   564 Jul 13 16:08 README.bz2
-rw-r--r-- 1 root root  1580 Jul 15 04:15 search_reada-5.diff.bz2
-rw-r--r-- 1 root root   379 Jul 15 20:50 stree-fix.bz2

Regards,
Dieter



Re: Horrible ftruncate performance

2003-07-11 Thread Dieter Nützel
On Friday, 11 July 2003 19:09, Chris Mason wrote:
 On Fri, 2003-07-11 at 11:44, Oleg Drokin wrote:
  Hello!
 
  On Fri, Jul 11, 2003 at 05:34:12PM +0200, Marc-Christian Petersen wrote:
Actually I did it already, as data-logging patches can be applied to
2.4.22-pre3 (where this truncate patch was included).
   
 Maybe it _IS_ time for this _AND_ all the other data-logging
 patches? 2.4.22-pre5?
   
It's Chris' turn. I thought it was a good idea to test in -ac first,
though (even taking into account that these patches are part of
SuSE's stock kernels).
  
   Well, I don't think that testing in -ac is necessary at all in this
   case.
 
 Maybe not. But it is still useful ;)
 
   I am using WOLK on many production machines with ReiserFS mostly as
   Fileserver (hundred of gigabytes) and proxy caches.
 
  I am using this code on my production server myself ;)
 
   If someone would ask me: Go for 2.4 mainline inclusion w/o going via
   -ac! :)
 
  Chris should decide (and Marcelo should agree) (Actually Chris thought it
  is good idea to submit data-logging to Marcelo now, too). I have no
  objections. Also now, that quota v2 code is in place, even quota code can
  be included.
 
  Also it would be great to port this stuff to 2.5 (yes, I know Chris wants
  this to be in 2.4 first)

 Marcelo seems to like being really conservative on this point, and I
 don't have a problem with Oleg's original idea to just do relocation in
 2.4.22 and the full data logging in 2.4.23-pre4 (perhaps +quota now that
 32 bit quota support is in there).

So, it's another half year away...?

 2.5 porting work has restarted at last, Oleg's really been helpful with
 keeping the 2.4 stuff up to date.

Nice, but...

Patches against the latest -aa would be helpful, then.

Thanks,
Dieter



Re: reiserfs on removable media

2003-07-02 Thread Dieter Nützel
On Wednesday, 2 July 2003 20:59, Chris Mason wrote:
 On Wed, 2003-07-02 at 14:53, Hans Reiser wrote:
  This is called ordered data mode, and exists on ext3 and also reiserfs
   with Chris Mason's patches.  Under normal usage it shouldn't change
   performance compared to writeback data mode (which is what reiserfs
   does by default).

Chris,

I thought data=ordered is the new default with your patch?

  It had some impact, I forget exactly how much, maybe Chris can
  resuscitate his benchmark of it?

 The major cost of data=ordered is that dirty blocks are flushed every 5
 seconds instead of every 30.  The journal header patch in my
 experimental data logging directory changes things so that only new
 bytes in the file are done in data=ordered mode (either adding a new
 block or appending onto the end of the file).

 This helps a lot in the file rewrite tests.

Which is fastest with your patches: ordered, journal, or writeback?

I thought the order is: writeback > ordered > journal ;-)
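For context, the journaling mode is chosen per mount. A hypothetical /etc/fstab sketch (device names and mount points are invented; the mode names follow the ext3 and patched-reiserfs convention, where plain reiserfs behaved like data=writeback and data=ordered/data=journal came with Chris's data-logging patches):

```text
# hypothetical /etc/fstab entries - devices and mount points are made up
/dev/hda1  /      reiserfs  defaults               1 1   # writeback-style (old default)
/dev/hda2  /var   reiserfs  noatime,data=ordered   1 2   # flush data blocks before commit
/dev/hda3  /mail  reiserfs  noatime,data=journal   1 2   # data goes through the journal too
```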

Thanks,
Dieter



2.4.22-pre1 with heavy ACPI/USB changes is out. data-logging?

2003-06-23 Thread Dieter Nützel
Thanks,
Dieter



Re: Will Reisefs have undo?

2003-06-15 Thread Dieter Nützel
On Sunday, 15 June 2003 11:50, Joachim Zobel wrote:
 On Sat, 2003-06-14 at 21:39, Fred -- Speed Up -- wrote:
  This is not the filesystem's work, KDE for instance has a trashcan, and
  you can bind the rm command to a special mv one that moves files from the
  initial place to a backup folder, the files of which you can empty
  regularly with a cron script depending on the file's age. It's simple to
  implement, and really not something Reiser4 should be capable of ...
  instead journaling works very well to protect data from being erased,
  that's the only purpose of a modern filesystem.

 A filesystem undo is most needed on servers.

No.

 There usually is no KDE.

True.

 However a trash can can be done with hard links and a cronjob.

True.
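To make that concrete, a minimal sketch of the hard-link trash can plus the cron-style purge; safe_rm and all paths here are made-up names, and hard links of course only work within one filesystem:

```shell
# Sketch of a hard-link trash can. All names here are hypothetical.
WORK=$(mktemp -d)            # stand-in for a real home directory
TRASH="$WORK/.trash"
mkdir -p "$TRASH"

safe_rm() {
    # hard-link each file into the trash, then remove the original name;
    # the inode (and its data) stays alive through the trash link
    for f in "$@"; do
        ln "$f" "$TRASH/$(date +%s).$$.$(basename "$f")" && rm -f "$f"
    done
}

# a daily cron job would purge old entries, e.g.:
#   find "$HOME/.trash" -type f -mtime +7 -delete

echo "important data" > "$WORK/report.txt"
safe_rm "$WORK/report.txt"

# the original name is gone, but the content survives in the trash
test ! -e "$WORK/report.txt" && grep -h "important" "$TRASH"/*
```

The purge line is the whole "cronjob" part: because the trash entry is a real link, deleting it after seven days is when the disk space is actually freed.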

 What I would like to have is more - I think I was not clear enough about
 that. I not only want to undo deletions. I would like to be able to undo all
 file system operations.

What you are looking for are FS snapshots.
LVM or EVMS can do that with all (?) Linux FSs.
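For the record, a hedged sketch of what that looks like with LVM; it needs a volume group with free extents, and the names (vg0, lv_home, the mount point) are all made up:

```shell
# create a 1 GB copy-on-write snapshot of an existing logical volume
lvcreate --size 1G --snapshot --name home_snap /dev/vg0/lv_home

# mount it read-only and copy back whatever you need to "undo"
mount -o ro /dev/vg0/home_snap /mnt/snap
cp /mnt/snap/precious.file /home/

# drop the snapshot when done
umount /mnt/snap
lvremove /dev/vg0/home_snap
```

This only rolls back to the moment the snapshot was taken; it is not a per-operation undo log like a versioning filesystem.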

 VMS has a versioning file system, which does about what I want.

Windows 2003 Server (very new feature ;-), too. 
But it is a bloated thing.

 It feels pretty good if you can undo everything you did since 11:30.

You should have some TRUE backups (snapshots) handy.

-- 
Dieter Nützel
Leiter FE, WEAR-A-BRAIN GmbH, Wiener Str. 5, 28359 Bremen, Germany
Mobil: 0162 673 09 09



data-logging for 2.4.21 (-aa)?

2003-06-14 Thread Dieter Nützel
Hopefully we'll see 2.4.22-pre with it, soon.

Thanks,
Dieter



Re: reiserfsprogs 3.6.5 release

2003-04-01 Thread Dieter Nützel
On Tuesday, 1 April 2003 19:25, Vitaly Fertman wrote:
 Hi, seems that I fixed that problem, could you try the patch:
ftp.namesys.com/pub/reiserfsprogs/reiserfsprogs-3.6.5-flush_buffers-bug.patch

Applied to reiserfsprogs-3.6.6_pre1 and WORKS for me.
Thanks!

Do you have a 3.6.6_pre2 without debug for me?

Regards,
Dieter


Re: no reiserfs quota in 2.4 yet? 2.4.21-pre4-ac4 says different

2003-02-21 Thread Dieter Nützel
On Thursday, 20 February 2003 02:10, Chris Mason wrote:
 On Wed, 2003-02-19 at 17:40, Ookhoi wrote:
   Alan has needed changes to generic quota code already in his patchset
   so probably just leaving out these changes should make everything work
   (but I haven't tested it recently).
 
  You mean, I should use only a few of these?
 
  01-quota-v2-2.4.20.diff

 Not this, -ac already has it

  02-nesting-2.4.20.diff
  03-reiserfs-quota-23-2.4.20.diff

 Or these two, I've got an updated reiserfs-quota patch on top of the
 data logging code in testing here, it should apply more cleanly on -ac.
 The big delay right now is I'm trying to fix another latency bug in
 data=ordered support, and test the fix for the sd_block count on
 symlinks.

Chris, do you have something handy (data=ordered latency)?
I'll give it a try with my latency test under 2.4.21pre4aa3.

Thanks,
Dieter

-- 
Dieter Nützel
Graduate Student, Computer Science

University of Hamburg
Department of Computer Science
@home: Dieter.Nuetzel at hamburg.de (replace at with @)




Re: slightly [OT] highmem (was Re: 2.4.20 at kernel.org and data logging)

2003-01-24 Thread Dieter Nützel
On Friday, 24 January 2003 08:50, Oleg Drokin wrote:
 Hello!

 On Fri, Jan 24, 2003 at 01:18:44AM +0100, Manuel Krause wrote:
  You need highmem if you want more than 980M.  People usually refer to
   1G, and if you don't mind wasting 20M then that's near enough.
 
  Mmh. So I wasn't really clear with my questions?
  Does it give advantages for 512M systems like mine if I enable
  highmem4GB / highmem64GB with PAE, or does it produce more of the overhead
  you mention below?

 You get no advantage of course.
 But lots of overhead. Rumour has it that 256M systems with highmem-enabled
 kernels (the default for the RedHat beta, it seems) are swapping much more
 than when the same kernel is built with highmem off.

But could that be because they forgot to enable highmem I/O?
See Andrea Arcangeli's -aa kernels.
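For reference, a sketch of the 2.4 .config fragment being discussed (option names from 2.4-era kernels; CONFIG_HIGHIO is the highmem block-I/O switch carried in the -aa tree and later 2.4 mainline, which avoids bouncing every disk transfer through low memory):

```text
# Processor type and features (2.4 .config fragment)
# For a 512M box, leave highmem off:
CONFIG_NOHIGHMEM=y
# CONFIG_HIGHMEM4G is not set
# CONFIG_HIGHMEM64G is not set

# With >1G of RAM and highmem enabled, also enable highmem block I/O:
# CONFIG_HIGHMEM=y
# CONFIG_HIGHIO=y
```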

Greetings,
Dieter



Re: Can resize_reiserfs do non-destructive resizing??

2002-11-19 Thread Dieter Nützel
On Tuesday, 19 November 2002 16:56, Oleg Drokin wrote:
 Hello!

 On Tue, Nov 19, 2002 at 10:48:21AM -0500, Bruce wrote:
  I ran badblocks /dev/hda2 and it produced a list of 39 numbers.
  Nov 19 09:52:07 desktop kernel: hda: dma_intr: status=0x51 { DriveReady
  SeekComplete Error }

 Well, that means that your hard drive is failing and reiserfs cannot read
 some of the data because of bad sectors.

Yes, replace that disk (or disks).
Some hard disk manufacturers, IBM for example, tell you to try their DFT tool
first to get a valid RMA number, or to low-level reformat the disk, but we
replace all such customer disks. Once they are out of reserved sectors, they
are bad.

Regards,
Dieter
-- 
Dieter Nützel
Graduate Student, Computer Science

University of Hamburg
Department of Computer Science
@home: Dieter.Nuetzel at hamburg.de (replace at with @)



Re: [reiserfs-list] Catastrophe with mailboxes on ReiserFS

2002-10-09 Thread Dieter Nützel

On Wednesday, 9 October 2002 10:18, Oleg Drokin wrote:
 Hello!

 On Wed, Oct 09, 2002 at 11:06:36AM +0300, Robert Tiismus wrote:
   Also you probably want to run reiserfsck on that disk to make sure
   no other damage happened.
 
  Thank you. Reiserfsck said that all is ok. It's just that I have seen
  nothing

 This is good.

  similar happening with other filesystems. I would prefer disappearing
  data to leaking data. Am I completely wrong, when I say that because of
  'tail packing' in ReiserFS, it can leak information with more
  probability than, lets say ext2 or ufs or...? Will notails option give

 No, tail packing should have no effect on leaking data.
 Anyway, tail packing only takes effect for files less than 16K in size.

  me more secure FS? I have to assure company management, and myself, that
  this

 notails will have no effect on that.

  incident happened not because I put ReiserFS on new server. It could
  have happened also with other filesystems :)

 Indeed it seems that you've got either some metadata altered (block
 pointers) or the block content swapped somehow, both of which could happen
 on any FS with absolutely same results.
 BTW, I just remembered that until you apply Chris' Mason data logging
 patches, there is a certain window where system crash would lead to
 deleted data appearing at the end of files that were appended before the
 crash. (That is, the metadata is already updated and lists the newly
 allocated block numbers, but the old content of those blocks is still intact
 since the crash prevented the system from putting new content in there), but
 since you've got another file's data in the middle of the file, this is not
 the case.

Oleg, I see it from time to time during kernel and DRI development (X server
crashes), too. This time without Chris's stuff. 2.4.19-ck5 (latest ReiserFS
patches for 2.4.19) and now with 2.5.40-ac3.

Most of the time messages, localmessages, kernel .o.*, dep files, etc. are
broken (wrong stuff or mixed stuff).

Most recently my .q3a/baseq3/q3key file (DRI SMP tests) was broken (completely
different stuff in it) even though it should only have been opened for
reading.

-Dieter

PS: I'm sending you a compressed copy of my messages file in private.



Re: [reiserfs-list] Catastrophe with mailboxes on ReiserFS

2002-10-09 Thread Dieter Nützel

On Wednesday, 9 October 2002 16:02, Dieter Nützel wrote:
 On Wednesday, 9 October 2002 10:18, Oleg Drokin wrote:
  Hello!
 
  On Wed, Oct 09, 2002 at 11:06:36AM +0300, Robert Tiismus wrote:
Also you probably want to run reiserfsck on that disk to make sure
no other damage happened.
  
   Thank you. Reiserfsck said that all is ok. It's just that I have seen
   nothing
 
  This is good.
 
   similar happening with other filesystems. I would prefer disappearing
   data to leaking data. Am I completely wrong, when I say that because of
   'tail packing' in ReiserFS, it can leak information with more
   probability than, lets say ext2 or ufs or...? Will notails option give
 
  No, tail packing should have no effect on leaking data.
  Anyway, tail packing only takes effect for files less than 16K in size.
 
   me more secure FS? I have to assure company management, and myself,
   that this
 
  notails will have no effect on that.
 
   incident happened not because I put ReiserFS on new server. It could
   have happened also with other filesystems :)
 
  Indeed it seems that you've got either some metadata altered (block
  pointers) or the block content swapped somehow, both of which could
  happen on any FS with absolutely same results.
  BTW, I just remembered that until you apply Chris' Mason data logging
  patches, there is a certain window where system crash would lead to
  deleted data appearing at the end of files that were appended before
  the crash. (That is, the metadata is already updated and lists the newly
  allocated block numbers, but the old content of those blocks is still intact
  since the crash prevented the system from putting new content in there), but
  since you've got another file's data in the middle of the file, this is not
  the case.

 Oleg, I see it from time to time during kernel and DRI development (X
 server crashes), too. This time without Chris's stuff. 2.4.19-ck5 (latest
 ReiserFS patches for 2.4.19) and now with 2.5.40-ac3.

 Most of the time messages, localmessages, kernel .o.*, dep files, etc. are
 broken (wrong stuff or mixed stuff).

 Most recently my .q3a/baseq3/q3key file (DRI SMP tests) was broken (completely
 different stuff in it) even though it should only have been opened for
 reading.

Oh, I forgot my mount options:

/dev/sda3 on / type reiserfs (rw,noatime,notail)
/dev/sda2 on /tmp type reiserfs (rw,notail)
/dev/sda5 on /var type reiserfs (rw,notail)
/dev/sda6 on /home type reiserfs (rw,notail)
/dev/sda7 on /usr type reiserfs (rw,notail)
/dev/sda8 on /opt type reiserfs (rw,notail)
/dev/sdb1 on /Pakete type reiserfs (rw,notail)
/dev/sdb5 on /database/db1 type reiserfs (rw,notail)
/dev/sdb6 on /database/db2 type reiserfs (rw,notail)
/dev/sdb7 on /database/db3 type reiserfs (rw,notail)
/dev/sdb8 on /database/db4 type reiserfs (rw,notail)

-Dieter



Re: [reiserfs-list] Catastrophe with mailboxes on ReiserFS

2002-10-09 Thread Dieter Nützel

On Wednesday, 9 October 2002 16:09, Oleg Drokin wrote:
 Hello!

 On Wed, Oct 09, 2002 at 04:02:34PM +0200, Dieter Nützel wrote:
   prevented system from putting new content in there), but since you've
   got other file's data in the middle of file, this is not the case.
 
  Oleg, I see it from time to time during kernel and DRI development (X
  server crashes), too. This times without Chris's stuff. 2.4.19-ck5
  (latest ReiserFS patches for 2.4.19) and now with 2.5.40-ac3.

 What is it? Does it mean what I describe, or does it mean wrong stuff in the
 middle of files?

Wrong stuff in the middle (appended) _AND_ sometimes whole broken files (if 
the files are smaller).

  Most times messages, localmessages, kernel .o.*, dep files, etc. are
  broken (wrong stuff or mixed stuff).

 This is recently modified stuff.

Yes.

  Latest my .q3a/baseq3/q3key file (DRI SMP tests) was broken (completely
  other stuff in it) even though it only should opened for reading.

 ls -la .q3a/baseq3/q3key?

Sorry, I had a copy of it elsewhere and deleted it ;-(

  PS I send you a compressed copy of my messages file in private.

 It only confirms my theory; note how a whole block of zeroes
 was inserted after the emergency sync (perhaps during the crash).

A whole block of zeroes would be fine, but sometimes it isn't zeroes...
Sometimes it is stuff from other (deleted?) files.

When I get the next crash during kernel compilation I'll save some files for
you.

-Dieter



Re: [reiserfs-list] Catastrophe with mailboxes on ReiserFS

2002-10-09 Thread Dieter Nützel

On Wednesday, 9 October 2002 16:32, Oleg Drokin wrote:
 Hello!

 On Wed, Oct 09, 2002 at 04:23:31PM +0200, Dieter Nützel wrote:
Latest my .q3a/baseq3/q3key file (DRI SMP tests) was broken
(completely other stuff in it) even though it only should opened for
reading.
  
   ls -la .q3a/baseq3/q3key?
 
  OK, I read it wrong. You mean ls, NOT less ;-)
  It is VERY, VERY small...
  ls -la .q3a/baseq3/q3key
  -rw-r--r--1 nuetzel  users 167 Okt  9 05:47 .q3a/baseq3/q3key

 Looks like recently modified for me ;)

Yes, copied over... ;-)

But I made sure, with cp -a from the original.
And Q3A seems to modify it ;-(
Truly bad coding style...

So ReiserFS is NOT to blame?!

Thanks,
Dieter



Re: [reiserfs-list] Catastrophe with mailboxes on ReiserFS

2002-10-09 Thread Dieter Nützel

On Wednesday, 9 October 2002 17:19, Oleg Drokin wrote:
 Hello!

 On Wed, Oct 09, 2002 at 05:15:20PM +0200, Dieter Nützel wrote:
  But you are sure with cp -a original.
  And Q3A seems to modify it ;-(
  Truly bad coding style...
  So NOT ReiserFS to blame?!

 There is a reason to blame reiserfs too, unfortunately.
 But Chris' patches should change that.

Yes, to close this thread, data-logging should be my friend ;-)
Hopefully soon for 2.5.41+.

Regards,
Dieter



[reiserfs-list] How to get all disk geometry (logical/physical) equal for RAID5

2002-09-07 Thread Dieter Nützel

Hello Neil, Chris,

Sorry to bother you, but I'm in a bit of a hurry because a school server
crashed and I have to set it up again over the weekend.
It had a double disk fault in RAID5. The second disk showed the damage during
the RAID5 reconstruction ;-(
So I replaced all 4 disks with another brand and put in a fifth one as a spare.

hda (the spare) is on the on-board VIA 686b controller and shows some bad
logical numbers, so I get a different partition layout.

hde and hdg are on the on board HPT 370
hdi and hdk are on an additional HPT 370A IDE card

hdb is for installation only

Any chance to change the logical disk layout of hda?
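On the geometry question: the 2.4-era fdisk/sfdisk tools accept explicit head/sector counts on the command line, which is sometimes enough to force the same translation on a disk the controller reports differently. A hedged, destructive sketch (it repartitions the disk, so only on an empty drive):

```shell
# force a 16-head / 63-sector translation when partitioning hda
fdisk -H 16 -S 63 /dev/hda

# or clone an existing layout wholesale with sfdisk:
sfdisk -d /dev/hde | sfdisk -H 16 -S 63 /dev/hda
```

Whether the VIA controller honours the override depends on the BIOS translation, so this is a thing to try, not a guarantee.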

Second, to you and Chris:
Is it possible to boot from the mirrored RAID1 partitions (hdX10) with the
current lilo-22.x (SuSE 8.0)? With ReiserFS?
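For what it's worth, lilo 22.x does know how to boot from a RAID1 md device; a hypothetical lilo.conf sketch (device and label names invented, and with ReiserFS the kernel image should not be tail-packed so lilo can map its blocks):

```text
# hypothetical /etc/lilo.conf for booting off a RAID1 mirror
boot=/dev/md0              # md0 = the RAID1 of the two hdX10 partitions
raid-extra-boot=mbr-only   # lilo 22.x: write a boot record to each mirror disk
root=/dev/md0
image=/boot/vmlinuz
    label=linux
    read-only
```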

Thank you much!

-Dieter

Disk /dev/hdk: 16 heads, 63 sectors, 77545 cylinders
Units: cylinders of 1008 * 512 bytes

   Device Boot    Start      End    Blocks   Id  System
/dev/hdk1             1    14564   7340224+  fd  Linux raid autodetect
/dev/hdk2         14565    20806   3145968   fd  Linux raid autodetect
/dev/hdk3         20807    20949     72072   fd  Linux raid autodetect
/dev/hdk4         20950    77545  28524384    5  Extended
/dev/hdk5         20950    27191   3145936+  fd  Linux raid autodetect
/dev/hdk6         27192    41755   7340224+  fd  Linux raid autodetect
/dev/hdk7         41756    56319   7340224+  fd  Linux raid autodetect
/dev/hdk8         56320    64642   4194760+  fd  Linux raid autodetect
/dev/hdk9         64643    72965   4194760+  fd  Linux raid autodetect
/dev/hdk10        72966    73006     20632+  83  Linux
/dev/hdk11        73007    77545   2287624+  83  Linux

Disk /dev/hdi: 16 heads, 63 sectors, 77545 cylinders
Units: cylinders of 1008 * 512 bytes

   Device Boot    Start      End    Blocks   Id  System
/dev/hdi1             1    14564   7340224+  fd  Linux raid autodetect
/dev/hdi2         14565    20806   3145968   fd  Linux raid autodetect
/dev/hdi3         20807    20949     72072   fd  Linux raid autodetect
/dev/hdi4         20950    77545  28524384    5  Extended
/dev/hdi5         20950    27191   3145936+  fd  Linux raid autodetect
/dev/hdi6         27192    41755   7340224+  fd  Linux raid autodetect
/dev/hdi7         41756    56319   7340224+  fd  Linux raid autodetect
/dev/hdi8         56320    64642   4194760+  fd  Linux raid autodetect
/dev/hdi9         64643    72965   4194760+  fd  Linux raid autodetect
/dev/hdi10        72966    73006     20632+  83  Linux
/dev/hdi11        73007    77545   2287624+  83  Linux

Disk /dev/hdg: 16 heads, 63 sectors, 77545 cylinders
Units: cylinders of 1008 * 512 bytes

   Device Boot    Start      End    Blocks   Id  System
/dev/hdg1             1    14564   7340224+  fd  Linux raid autodetect
/dev/hdg2         14565    20806   3145968   fd  Linux raid autodetect
/dev/hdg3         20807    20949     72072   fd  Linux raid autodetect
/dev/hdg4         20950    77545  28524384    5  Extended
/dev/hdg5         20950    27191   3145936+  fd  Linux raid autodetect
/dev/hdg6         27192    41755   7340224+  fd  Linux raid autodetect
/dev/hdg7         41756    56319   7340224+  fd  Linux raid autodetect
/dev/hdg8         56320    64642   4194760+  fd  Linux raid autodetect
/dev/hdg9         64643    72965   4194760+  fd  Linux raid autodetect
/dev/hdg10        72966    73006     20632+  83  Linux
/dev/hdg11        73007    77545   2287624+  83  Linux

Disk /dev/hde: 16 heads, 63 sectors, 77545 cylinders
Units: cylinders of 1008 * 512 bytes

   Device Boot    Start      End    Blocks   Id  System
/dev/hde1             1    14564   7340224+  fd  Linux raid autodetect
/dev/hde2         14565    20806   3145968   fd  Linux raid autodetect
/dev/hde3         20807    20949     72072   fd  Linux raid autodetect
/dev/hde4         20950    77545  28524384    5  Extended
/dev/hde5         20950    27191   3145936+  fd  Linux raid autodetect
/dev/hde6         27192    41755   7340224+  fd  Linux raid autodetect
/dev/hde7         41756    56319   7340224+  fd  Linux raid autodetect
/dev/hde8         56320    64642   4194760+  fd  Linux raid autodetect
/dev/hde9         64643    72965   4194760+  fd  Linux raid autodetect
/dev/hde10        72966    73006     20632+  83  Linux
/dev/hde11        73007    77545   2287624+  83  Linux

Disk /dev/hda: 255 heads, 63 sectors, 4865 cylinders
Units: cylinders of 16065 * 512 bytes

   Device Boot    Start      End    Blocks   Id  System
/dev/hda2   *         1     4865  39078081   83  Linux

Disk /dev/hdb: 128 heads, 63 sectors, 620 cylinders
Units: cylinders of 8064 * 512 bytes

   Device Boot    Start      End    Blocks   Id  System
/dev/hdb1             1      620   2499808+  83  Linux

-- 
Dieter Nützel
Graduate Student, Computer Science

University of Hamburg

[reiserfs-list] Fwd: Re: 2.4.19-rc1-jam2 (-rc1aa2)

2002-07-10 Thread Dieter Nützel

Hello Chris,

symbol clash between latest -AA (-jam2, 00-aa-rc1aa2) kernel and your 
data-logging stuff ;-(

Regards,
Dieter

--  Forwarded Message  --

Subject: Re: 2.4.19-rc1-jam2 (-rc1aa2)
Date: Thu, 11 Jul 2002 02:26:03 +0200
From: Dieter Nützel [EMAIL PROTECTED]
To: J.A. Magallon [EMAIL PROTECTED]
Cc: Andrea Arcangeli [EMAIL PROTECTED]

On Thursday 11 July 2002 02:01, J.A. Magallon wrote:
 On 2002.07.11 Dieter Nützel wrote:
 fs/fs.o(__ksymtab+0x10): multiple definition of `__ksymtab_balance_dirty'
 kernel/kernel.o(__ksymtab+0x870): first defined here
 fs/fs.o(.kstrtab+0x47): multiple definition of `__kstrtab_balance_dirty'
 kernel/kernel.o(.kstrtab+0x23c7): first defined here

[-]

 I think your gcc is making one other definition of balance_dirty from this:

 kernel/ksyms.c:EXPORT_SYMBOL(balance_dirty);

 gcc, binutils version ?

Reading specs from /usr/lib/gcc-lib/i486-suse-linux/2.95.3/specs
gcc version 2.95.3 20010315 (SuSE)

GNU ld version 2.11.90.0.29 (with BFD 2.11.90.0.29)

 Are you building with something like -fno-common?

Not that I know.

Same flags as with 2.4.19rc1aa1.

I use latest ReiserFS stuff, but same code works fine with 2.4.19rc1aa1.
Even Page Coloring works.
Only new stuff is -aa2 (XFS, not enabled, I dropped it before 'cause I do not
use it) and jam2.

SunWave1 src/linux# grep -r balance_dirty kernel fs
kernel/ksyms.c:EXPORT_SYMBOL(balance_dirty);
Binary file kernel/ksyms.o matches.
Binary file kernel/kernel.o matches.
fs/xfs/pagebuf/page_buf_io.c:   balance_dirty();
fs/xfs/pagebuf/page_buf_io.c:   int need_balance_dirty = 0;
fs/xfs/pagebuf/page_buf_io.c:   need_balance_dirty = 1;
fs/xfs/pagebuf/page_buf_io.c:   if (need_balance_dirty) {
fs/xfs/pagebuf/page_buf_io.c:   balance_dirty();
Binary file fs/fs.o matches.
Binary file fs/reiserfs/reiserfs.o matches.
fs/reiserfs/inode.c:int need_balance_dirty = 0 ;
fs/reiserfs/inode.c:need_balance_dirty = 1;
fs/reiserfs/inode.c:if (need_balance_dirty) {
fs/reiserfs/inode.c:balance_dirty() ;
fs/reiserfs/inode.c:int partial = 0, need_balance_dirty = 0;
fs/reiserfs/inode.c:need_balance_dirty = 1;
fs/reiserfs/inode.c:if (need_balance_dirty)
fs/reiserfs/inode.c:balance_dirty();
Binary file fs/reiserfs/inode.o matches.
fs/reiserfs/do_balan.c: tb->need_balance_dirty = 1;
fs/reiserfs/do_balan.c: tb->need_balance_dirty = 0;
fs/buffer.c:static int balance_dirty_state(void)
fs/buffer.c:void balance_dirty(void)
fs/buffer.c:int state = balance_dirty_state();
fs/buffer.c:EXPORT_SYMBOL(balance_dirty);
fs/buffer.c:/* atomic version, the user must call balance_dirty() by hand
fs/buffer.c:balance_dirty();
fs/buffer.c:int partial = 0, need_balance_dirty = 0;
fs/buffer.c:need_balance_dirty = 1;
fs/buffer.c:if (need_balance_dirty)
fs/buffer.c:balance_dirty();
fs/buffer.c:if (balance_dirty_state() >= 0)
Binary file fs/buffer.o matches.

Thanks and good night.

-Dieter

---

-- 
Dieter Nützel
Graduate Student, Computer Science

University of Hamburg
Department of Computer Science
@home: [EMAIL PROTECTED]




Re: [reiserfs-list] [PATCH CFT] tons of logging patches + addon

2002-07-09 Thread Dieter Nützel

On Tuesday 9 June 2002 00:22, Manuel Krause wrote:

[-]

 The notebook now works like I bought it: fine, stable and the thermal
 (fan start/stop) patterns are quite well though I didn't replace the
 thermal pads between CPUfan and the heatsink-to-graphix-and-chipset as
 always and everywhere recommended. (Oh, try to get these special parts
 once!!!)
 We had up to 35°C today here in Germany. That's a good restart!

You are not exaggerating, are you?
Eastern German... ;-)

We have ~30°C (in the shade) and it is 26°C in my working room today, here in
Hamburg, Northern Germany.

Have you ever considered using lm_sensors?
You need the latest kernel patch (2.6.4 CVS).

My brand new dual Athlon MP 1900+ with 1 GB DDR-SDRAM 266 CL2 is running
sweet and cool, even today.

SunWave1 /home/nuetzel# /usr/local/sbin/smartctl -a /dev/sda
Device: IBM  DDYS-T18350N Version: S96H
Device supports S.M.A.R.T. and is Enabled
Temperature Warning Disabled or Not Supported
S.M.A.R.T. Sense: Okay!
Current Drive Temperature: 33 C
Drive Trip Temperature:85 C
Current start stop count:  65678 times
Recommended start stop count:  2555920 times

SunWave1 /home/nuetzel# sensors
eeprom-i2c-0-50
Adapter: SMBus AMD768 adapter at 06e0
Algorithm: Non-I2C SMBus adapter
Memory type:SDRAM DIMM SPD
SDRAM Size (MB):invalid
12 1 2 144

eeprom-i2c-0-51
Adapter: SMBus AMD768 adapter at 06e0
Algorithm: Non-I2C SMBus adapter

w83627hf-isa-0290
Adapter: ISA adapter
Algorithm: ISA algorithm
VCore 1:   +1.72 V  (min =  +4.08 V, max =  +4.08 V)
VCore 2:   +2.49 V  (min =  +4.08 V, max =  +4.08 V)
+3.3V: +3.36 V  (min =  +3.13 V, max =  +3.45 V)
+5V:   +4.94 V  (min =  +4.72 V, max =  +5.24 V)
+12V: +12.08 V  (min = +10.79 V, max = +13.19 V)
-12V: -12.70 V  (min = -13.21 V, max = -10.90 V)
-5V:   -5.10 V  (min =  -5.26 V, max =  -4.76 V)
V5SB:  +5.39 V  (min =  +4.72 V, max =  +5.24 V)
VBat:  +3.42 V  (min =  +2.40 V, max =  +3.60 V)
U160:0 RPM  (min = 3000 RPM, div = 2)
CPU 0:4500 RPM  (min = 3000 RPM, div = 2)
CPU 1:4354 RPM  (min = 3000 RPM, div = 2)
System:   +35.0°C   (limit = +60°C, hysteresis = +50°C) sensor = thermistor
CPU 1:+38.5°C   (limit = +60°C, hysteresis = +50°C) sensor = 3904
transistor
CPU 0:+40.5°C   (limit = +60°C, hysteresis = +50°C) sensor = 3904
transistor
vid:  +18.50 V
alarms:   Chassis intrusion detection
beep_enable:
  Sound alarm disabled

Regards,
Dieter

BTW, I love the Ossis (East Germans) ;-)

--
Dieter Nützel
Graduate Student, Computer Science

University of Hamburg
Department of Computer Science
@home: [EMAIL PROTECTED]



[reiserfs-list] 2.4.19-pre7.pending: 04 is set to unreadable for group/world ;-(

2002-04-18 Thread Dieter Nützel

Thanks,
Dieter



Re: [reiserfs-list] knfsd/samba performance with reisefs and 1Gbit network.

2002-03-18 Thread Dieter Nützel

On Mon, 18 Mar 2002 06:16:39, Valdis Kletnieks wrote:
 On Mon, 18 Mar 2002 09:11:14 +0300, Oleg Drokin said:

  It is somehow related to knfsd, I think. I will tell you what I will be
  able to find.

  The only thing in which ext{2,3} and reiserfs differ right now is buffer
  flushing policy: while ext{2,3} just marks the block dirty and returns,
  reiserfs actually waits for the disk to finish its I/O.

 If that *doesn't* explain the performance differences, I'd be surprised

That could well explain the little stalls I see during all the latest dbench
tests here.

Latest Kernel is:
2.4.19-pre3-dn1 (;-)
AA vm_29
O(1)
preemption+lock-break

Under heavy I/O (dbench) there are times in which there is no progress, and
then the little dots are drawn quickly again.

Regards,
Dieter
-- 
Dieter Nützel
Graduate Student, Computer Science

University of Hamburg
Department of Computer Science
@home: [EMAIL PROTECTED]




[reiserfs-list] HP370: ATARAID/ReiserFS/LVM need advice

2002-03-03 Thread Dieter Nützel

Hello to all of you!

I have to reinstall a Linux server for a school which is a reference system 
for more to come. Main usage is SAMBA (DOMAIN logon and DOMAIN master), 
squid, and Apache.

It had been running since June 2001 under SuSE 7.2, ReiserFS 3.6 and kernel
2.4.6-ac (the first HPT370 ataraid stuff). Much hand-crafted stuff...

It consists of four identical disks on the HPT370 and I used it with
(software) ATARAID 0+1 (yes, it has worked ever since). But the performance
was poor (compared to the numbers floating around in the Windows world) and
there were some hiccups (kupdated) during low/medium I/O and some network
traffic.

I've configured the system with an install hard disk, from which I did the fdisk
and format without a hitch.

<6>Uniform Multi-Platform E-IDE driver Revision: 6.31
<4>ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
<4>VP_IDE: IDE controller on PCI bus 00 dev 39
<4>VP_IDE: chipset revision 6
<4>VP_IDE: not 100% native mode: will probe irqs later
<4>ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
<6>VP_IDE: VIA vt82c686b (rev 40) IDE UDMA100 controller on pci00:07.1
<4>ide0: BM-DMA at 0xa000-0xa007, BIOS settings: hda:pio, hdb:pio
<4>ide1: BM-DMA at 0xa008-0xa00f, BIOS settings: hdc:DMA, hdd:pio
<4>HPT370: IDE controller on PCI bus 00 dev 98
<6>PCI: Found IRQ 11 for device 00:13.0
<6>PCI: Sharing IRQ 11 with 00:09.0
<4>HPT370: chipset revision 3
<4>HPT370: not 100% native mode: will probe irqs later
<4>ide2: BM-DMA at 0xcc00-0xcc07, BIOS settings: hde:pio, hdf:pio
<4>ide3: BM-DMA at 0xcc08-0xcc0f, BIOS settings: hdg:pio, hdh:pio
<4>hdc: LG DVD-ROM DRD-8120B, ATAPI CD/DVD-ROM drive
<4>hde: FUJITSU MPG3204AT E, ATA DISK drive
<4>hdf: FUJITSU MPG3204AT E, ATA DISK drive
<4>hdg: FUJITSU MPG3204AT E, ATA DISK drive
<4>hdh: FUJITSU MPG3204AT E, ATA DISK drive
<4>ide1 at 0x170-0x177,0x376 on irq 15
<4>ide2 at 0xbc00-0xbc07,0xc002 on irq 11
<4>ide3 at 0xc400-0xc407,0xc802 on irq 11
<6>hde: 40031712 sectors (20496 MB) w/512KiB Cache, CHS=39714/16/63
<6>hdf: 40031712 sectors (20496 MB) w/512KiB Cache, CHS=39714/16/63
<6>hdg: 40031712 sectors (20496 MB) w/512KiB Cache, CHS=39714/16/63
<6>hdh: 40031712 sectors (20496 MB) w/512KiB Cache, CHS=39714/16/63
<4>hdc: ATAPI 40X DVD-ROM drive, 512kB Cache, UDMA(33)
<6>Uniform CD-ROM driver Revision: 3.12
<6>Partition check:
<6> hde:
<6> hdf: [PTBL] [2491/255/63] hdf1 hdf2 hdf3 hdf4
<6> hdg:
<6> hdh: unknown partition table
[-]
<6> ataraid/d0: ataraid/d0p1 ataraid/d0p2 ataraid/d0p3 ataraid/d0p4
<6>Highpoint HPT370 Softwareraid driver for linux version 0.01
<6>Drive 0 is 19546 Mb
<6>Drive 1 is 19546 Mb
<6>Raid array consists of 2 drives.

nordbeck@stmartin:~ df
Filesystem           1k-blocks    Used Available Use% Mounted on
/dev/ataraid/d0p1   144540104116 40424  73% /
/dev/ataraid/d0p2   979928 37804942124   4% /tmp
/dev/ataraid/d0p3  1967892 83040   1884852   5% /var
/dev/ataraid/d0p5  9767184 32840   9734344   1% /var/squid
/dev/ataraid/d0p6 17381760269840  17111920   2% /home
/dev/ataraid/d0p7  4891604   1305296   3586308  27% /usr
/dev/ataraid/d0p8  4891604452176   4439428  10% /opt
shmfs   257120 0257120   0% /dev/shm

fstab
/dev/ataraid/d0p1   /               reiserfs    defaults,noatime
/dev/ataraid/d0p2   /tmp            reiserfs    defaults,notail
/dev/ataraid/d0p3   /var            reiserfs    defaults
/dev/ataraid/d0p5   /var/squid      reiserfs    defaults
/dev/ataraid/d0p6   /home           reiserfs    defaults
/dev/ataraid/d0p7   /usr            reiserfs    defaults
/dev/ataraid/d0p8   /opt            reiserfs    defaults
/dev/cdrom          /media/cdrom    auto        ro,noauto,user,exec
/dev/dvd            /media/dvd      auto        ro,noauto,user,exec
/dev/fd0            /media/floppy   auto        noauto,user,sync
proc                /proc           proc        defaults
devpts              /dev/pts        devpts      defaults
usbdevfs            /proc/bus/usb   usbdevfs    defaults,noauto

Second problem was lilo (version 21.7-5). I didn't get the system to boot from
the RAID. I tried it with an old fifth disk, but no luck either. So I had to boot
from floppy. Not so nice for a standalone server.
I had to type in root=/dev/ataraid/d0p1 with 2.4.6-ac2 and root=7201 with
newer kernels at the lilo boot prompt. Why the change?
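The change in the root= syntax is presumably because newer kernels also accept a bare root= value as a hex major/minor device number. A sketch of the encoding (block major 114 for the ataraid driver is assumed from Documentation/devices.txt; minor 1 is the first partition of d0):

```shell
# /dev/ataraid/d0p1 is major 114 (0x72), minor 1, so as a hex pair
# it becomes "7201" -- matching the root=7201 that worked above.
printf 'root=%02x%02x\n' 114 1
```

So root=7201 and root=/dev/ataraid/d0p1 should name the same device; only the spelling the boot prompt accepts changed.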

To get it going I tested lilo 22.1-beta and 22.1 final, but neither worked.
Lilo didn't show any error messages, but it didn't boot.

I tried many different lilo.conf versions but no go. Some examples:

lilo.conf
boot= /dev/hdf
vga = 791
read-only
menu-scheme = Wg:kw:Wg:Wg
lba32
prompt
timeout = 80
message = /boot/message

  disk   = /dev/hdc
  bios   = 0x82

  disk   = /dev/hde
  bios   = 0x80

  disk   = /dev/hdg
  bios   = 0x81

  image  = /boot/vmlinuz
  label  = linux
  root   = 7201
  initrd = /boot/initrd
  append = 

[reiserfs-list] Re: [PATCH] write barriers for 2.4.x

2002-02-13 Thread Dieter Nützel

On Thursday, 14. February 2002 00:47, Chris Mason wrote:
 On Wednesday, February 13, 2002 11:27:13 PM +0100 Dieter Nützel 
[EMAIL PROTECTED] wrote:
  Hello Chris,
 
  I'll do my best on AHA-2940UW and with IBM DDYS U160 10k rpm.
 
  But can you please send your patch next time _NOT_ included, but as
  attachment (*.gz or *.bz2)?
  It is corrupted (wrapped lines)...
  ...and I am not subscribed and have to pull it from
  http://marc.theaimsgroup.com/?l=reiserfs.

 Hmmm, you might want to try the download message raw feature on
 marc.  I had line wrapping turned off on that message, and just
 double checked that it didn't have wrapped lines.

OK, that's a second possibility...;-)

 Thanks for offering to give it a try, I'm sending you a gzip'd
 version of the patch in case you can't pull it from marc.

Got your second mail, thanks.
But I'll do it tomorrow; I'll watch some Olympic stuff before I go to bed.

-Dieter

BTW
I'm currently running the below stuff.
Linux is flying!!! --- Ingo's latest little -K2 to -K3 fix did the trick.

2.4.18-pre8
00_get_user_pages-2
00_nanosleep-5
00_vm_raend-race-1
00_vmalloc-cache-flush-1
06-clone-flags
10-ide-20020119
10_vm-24
2.4.18-pre9.pending (all :-)
bootmem-2.4.17-pre6
make_request.patch  (Andrew Morton)
preempt+lock-break
read-latency2.patch
sched-O1-2.5.4-K3.patch
waitq-2.4.17-mainline-1



Re: [reiserfs-list] Re: Hard disk error. Strange SCSI sense error on HDD. Crash!

2002-01-16 Thread Dieter Nützel

On Wednesday, 16. January 2002 16:06, Dieter Nützel wrote:
 On Wednesday, 16. January 2002 02:29, pcg( Marc)@goof(A.).(Lehmann )com 
wrote:
  On Tue, Jan 15, 2002 at 10:24:19PM +0100, Dieter Nützel

 [EMAIL PROTECTED] wrote:
   smartd and smartctl
 
  Do you have a url where one can get these? The homepage only offers the
  very old 2.0beta versions.

 Hello Marc,

 hmm, where did I grep it. --- Sourceforge.net, not sure...
 Have to think, again.

Ah, found it.
http://sourceforge.net/projects/smartsuite/
ftp://ftp.sourceforge.net/pub/sourceforge/smartsuite

-Dieter
-- 
Dieter Nützel
Graduate Student, Computer Science

University of Hamburg
Department of Computer Science
@home: [EMAIL PROTECTED]



[reiserfs-list] Re: Hard disk error. Strange SCSI sense error on HDD. Crash!

2002-01-15 Thread Dieter Nützel

   Check that your drive does not overheat (and that it really can do
   several months of uptime). And that it does not contain any hidden
   remapped bad sectors (is there a way to query SMART-alike stuff in
   SCSI?)

Of course..;-)

smartd and smartctl

AUTHOR
   Michael Cornwell, [EMAIL PROTECTED]
   Concurrent Systems Laboratory
   Jack Baskin School of Engineering
   University of California Santa Cruz
   http://csl.cse.ucsc.edu/

smartctl-2.1September 13, 2001  SMARTD(8)

You can run smartd in your startup scripts and have your SCSI-2/3 disks under 
control. I have a version for SuSE 7.3 attached.
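A minimal boot-script sketch of the kind of check described above (the device list and the use of smartctl's -a option are assumptions; adapt it to your distribution's init scheme, e.g. the attached SuSE version):

```shell
# Hypothetical boot-time SMART check. smartsuite's smartctl is assumed
# to be in the PATH and to exit nonzero when the query fails.
SMART_DEVICES="/dev/sda /dev/sdb /dev/sdc"

check_smart() {
  for dev in $SMART_DEVICES; do
    # -a prints the full SMART report; we only care about the exit code here
    if smartctl -a "$dev" >/dev/null 2>&1; then
      echo "$dev: SMART okay"
    else
      echo "$dev: SMART check failed"
    fi
  done
}
```

Running smartd itself additionally gives you the periodic log messages shown below (temperature changes etc.), which a one-shot check like this cannot.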

Jan 15 21:24:54 SunWave1 smartd: smartd started
Jan 15 21:24:54 SunWave1 smartd: Device: /dev/sda, Found and is SMART capable
Jan 15 21:24:54 SunWave1 smartd: Device: /dev/sdb, Found and is SMART capable
Jan 15 21:24:54 SunWave1 smartd: Device: /dev/sdc, Found and is SMART capable
[-]
Jan 15 21:54:54 SunWave1 smartd: Device: /dev/sda, Temperature changed 9 
degrees to 27 degrees since last reading

SunWave1 /home/nuetzel# /usr/local/sbin/smartctl -a /dev/sda
Device: IBM  DDYS-T18350N Version: S96H
Device supports S.M.A.R.T. and is Enabled
Temperature Warning Disabled or Not Supported
S.M.A.R.T. Sense: Okay!
Current Drive Temperature: 28 C
Drive Trip Temperature:85 C
Current start stop count:  128 times
Recommended start stop count:  2555920 times

SunWave1 /home/nuetzel# /usr/local/sbin/smartctl -a /dev/sdb
Device: IBM  DDRS-34560D  Version: DC1B
Device supports S.M.A.R.T. and is Enabled
Temperature Warning Disabled or Not Supported
S.M.A.R.T. Sense: Okay!

  That's what comes to my mind.
  I'll look into the overheating possibility. It's a pretty jam-packed rack :)

 Perhaps some additional fans might need to be installed ;)
 You might want to measure the temperature in the rack; there are
 recommended values, and you can get the numbers from the vendor's site, I
 believe.

The case temperature shouldn't go over 40°C.

Regards,
Dieter
-- 
Dieter Nützel
Graduate Student, Computer Science

University of Hamburg
Department of Computer Science
@home: [EMAIL PROTECTED]



smartd.bz2
Description: BZip2 compressed data


[reiserfs-list] Re: [Dri-devel] Voodoo5 SLI / AA

2002-01-07 Thread Dieter Nützel

On Sunday, 7. January 2002 13:11,  Russell Coker wrote:
 On Mon, 7 Jan 2002 13:11, pesarif wrote:
  1. How big is the journal?

 32M.  It is possible to change this, but currently that requires
 recompiling  your kernel (and running an altered mkreiserfs).  Then a
 regular kernel won't  mount them.  It's painful enough that you don't want
 to do it.

Oleg described here how you can do that.

 Hans has announced plans to address this issue, I am looking forward to a 
 version of ReiserFS that works on floppies.  ;)

Chris has worked on this (dynamic journal size) for ages.
He told me something about it in the year 2000?

So Chris, any news?
Would be nice to have a smaller journal on my root partition...

-Dieter

-- 
Dieter Nützel
Graduate Student, Computer Science

University of Hamburg
Department of Computer Science
@home: [EMAIL PROTECTED]




Re: [reiserfs-list] Re: [Dri-devel] Voodoo5 SLI / AA

2002-01-07 Thread Dieter Nützel

On Tuesday, 8. January 2002 00:20, Chris Wedgwood wrote:
 On Mon, Jan 07, 2002 at 05:32:16PM -0500, Chris Mason wrote:
  Chris has worked on this (dynamic journal size) for ages.
  He told me something about it in the year 2000?

 Actually some of the namesys coders did this, it works pretty
 well.  The current utilities have support for it, I expect the
 patches will get fed into 2.5.x.

 Is there any way (at present) to gauge how much of the journal is
 presently being used (high/low water marks perhaps?).

 My guess is a busy server will want a full 32M journal (maybe even
 larger) but my laptop will suffice with only a fraction of that.  It
 would be nice to measure this.

Sorry,

that I messed up the subject!

But all you good men understand me, of course...8-)))

Regards,
Dieter



[reiserfs-list] patch archive for current kernels (2.4.17-rc2)?

2001-12-20 Thread Dieter Nützel

Hello Hans and Chris,

is it too much to hope that the reiser team keeps their ftp archive up to date?
Shouldn't we have a 2.4.17-rc.pending archive already?

K-N                         works (clean)
O-inode-attrs.patch         needs updates
P-reiser_stats_fix.patch    do we need it any longer since -rc2?
expanding-truncate-4.diff   works (shouldn't this go into final?)

Do we need more than the two additional patches?
map_block_for_writepage_highmem_fix-1.diff
mmaped_data_loss_fix.diff

Shouldn't all the above except O (it works OK btw) go into 2.4.17-final?

BTW Linux-2.4.17rc1-iicache.patch does not show that great a speedup. Have a look 
at the bonnie++ block read numbers (second run). Yes, dbench 32 looks good. 
But I have learnt that dbench is not that great for testing, as Andrew Morton 
claims.

2.4.17-rc1-preempt plus patches
/dev/sda8 on /opt type reiserfs (rw,notail)
dbench 32
Throughput 14.7385 MB/sec (NB=18.4231 MB/sec  147.385 MBit/sec)
14.220u 77.240s 4:47.62 31.7%   0+0k 0+0io 938pf+0w

bonnie++
Version  1.93   --Sequential Output-- --Sequential Input- --Random-
Concurrency   1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine    Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
SunWave1  1248M    88  97 14083  17  9295   8   804  98 22052  13 289.4   4
Latency           118ms    3228ms    2102ms   81787us     189ms    2400ms
Version  1.93   --Sequential Create-- Random Create
SunWave1    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
      files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
         16  5822  74 + +++  9355  95  7338  78 + +++  8739  97
Latency     26189us   13247us   13797us    2452us   16038us   15410us
1.92b,1.93,SunWave1,1,1008685582,1248M,,88,97,14083,17,9295,8,804,98,22052,13,289.4,4,16,5822,74,+,+++,9355,95,7338,78,+,+++,8739,97,118ms,3228ms,2102ms,81787us,189ms,2400ms,26189us,13247us,13797us,2452us,16038us,15410us

getc_putc
Version  1.93  write   read putcNT getcNT   putc   getc  putcU  getcU
SunWave1 152717  12044  12762   2334   2336  48094  61240
SunWave1,152,717,12044,12762,2334,2336,48094,61240
16.800u 10.320s 0:32.52 83.3%   0+0k 0+0io 626pf+0w


2.4.17-rc1-preempt plus patches + iicache
/dev/sda8 on /opt type reiserfs (rw,notail,iicache,reada)
dbench 32
Throughput 30.8844 MB/sec (NB=38.6055 MB/sec  308.844 MBit/sec)
14.210u 50.980s 2:17.78 47.3%   0+0k 0+0io 938pf+0w

bonnie++
Version  1.93   --Sequential Output-- --Sequential Input- --Random-
Concurrency   1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine    Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
SunWave1  1248M   103  97 14231  16  3685   3   856  98  4681   2 273.1   4
Latency           125ms    2740ms    2302ms   63912us     371ms    2108ms
Version  1.93   --Sequential Create-- Random Create
SunWave1    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
      files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
         16  6645  66 + +++ 10522  96  8270  75 + +++  4913  52
Latency       315ms   13360us   14170us    2279us   15456us    1330ms
1.92b,1.93,SunWave1,1,1008689757,1248M,,103,97,14231,16,3685,3,856,98,4681,2,273.1,4,16,6645,66,+,+++,10522,96,8270,75,+,+++,4913,52,125ms,2740ms,2302ms,63912us,371ms,2108ms,315ms,13360us,14170us,2279us,15456us,1330ms

getc_putc
Version  1.93  write   read putcNT getcNT   putc   getc  putcU  getcU
SunWave1 153661  12037  12741   2328   2335  47931  60970

Thanks,
Dieter



[reiserfs-list] Re: XDSM and ReiserFS

2001-12-19 Thread Dieter Nützel

On Wed, 19 Dec 2001, Benjamin Scott wrote:
 On Wed, 19 Dec 2001, Russell Coker wrote:
  DLT and other magnetic tape are a much better option than DVD.  DLT drives
  storing 15G on a tape are common I believe.

   You can put 100 GB on an LTO tape right now.  Sony hopes to have 500 GB on
 a single Super-AIT cassette by the end of next year.  Tape still rules when
 it comes to bulk storage.

You are thinking about the current tapes with ~200 GB (compressed)?
HP Ultrium tape drives, for example, or IBM and Seagate?

[-]
 On Wed, 19 Dec 2001, Hans Reiser wrote:
  I think that DVD-RW could be cheaper than tape though, because tapes
  require expensive cartridges, and platters can be cheaper.

   The thing is, you can pack much more surface area into a tape than a
 platter.  The cost of manufacturing the cartridge is almost immaterial.
 Cost per GB is lower for tape today, and is likely to remain that way for
 some time.

But if you look at the LTO or DLT (HP-SS DLT-VS 80, 3099 DM) tape drives and 
consider Hans' joe-user view, then DVD-RW drives (the hardware) should become 
much cheaper within a few months.
The media, too.

Only as an example: I own an HP DDS-3 (12/24 GB) tape drive 'cause one of my friends 
worked for an HP distributor and he gave it to me as a present. Without such 
luck it would have been out of reach for me (~1348 DM; DDS-4, 1878 DM). The tapes 
cost ~26 DM (DDS-4, 40 GB, 56 DM, which I can't use)...

DVD-RW (9.4 GB, 2 layers, 2 sides, 99.50 DM) media prices should go down very 
fast if the drives become much more standard, as they did with CD-R/CD-RW.
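The cost-per-gigabyte arithmetic behind that comparison, using only the prices quoted in this message (native tape capacity assumed, not the compressed figure):

```shell
# DM per GB for the media mentioned above (illustrative only).
awk 'BEGIN {
  printf "DDS-3 tape (12 GB native, ~26 DM): %.2f DM/GB\n", 26 / 12
  printf "DVD-RW (9.4 GB, 99.50 DM):         %.2f DM/GB\n", 99.50 / 9.4
}'
```

So on media cost alone tape stays well ahead today; the point above is that the drive and media prices for DVD-RW are the ones likely to fall.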

Only my 2 cents (¤) ;-)

-Dieter
-- 
Dieter Nützel
Graduate Student, Computer Science

University of Hamburg
Department of Computer Science
@home: [EMAIL PROTECTED]



[reiserfs-list] Re: Linux-2.4.17-rc1 boot problem (fwd)

2001-12-16 Thread Dieter Nützel

 [cursing] I think it's almost the same bug we found on Friday in the P patch
 (not in the kernel yet) where 3.6.x filesystems are missing some
 initialization on remounts from ro to rw.
 
 Marcelo, you're bcc'd to let you know progress has been made, and to keep
 replies out of your inbox until we've all agreed this is the right fix.

 Patch attached, it sets the CONVERT bit and the hash function code when
 mounting readonly.

Chris,

do you think I can't see the bug 'cause I am running with the new P patch?
Any hints on how I can do some tests with the P patch here?

-Dieter



[reiserfs-list] Re: per-char IO tests

2001-12-10 Thread Dieter Nützel

 On Sunday, December 09, 2001 04:14:30 PM +0100 Russell Coker
 [EMAIL PROTECTED] wrote:

  Both machines run ReiserFS.  A quick test indicates that using Ext2
  instead of ReiserFS triples the  performance of write(fd, buf, 1), but
  this is something I already knew (and had mentioned before on the
  ReiserFS list).

 This is most likely from logging the inode as you extend the file.  Recent
 pre releases for 2.4.17 include a from patch from Andrew that should help,
 but I expect reiserfs to still be somewhat slower.

Here are my numbers (2.4.17-pre7-preempt + all the ReiserFS stuff, except 
O-attrs; it gave me trouble over and over).

Athlon II 1 GHz, AMD Irongate C4 (without bypass), 640 MB PC100-2-2-2,
DDYS U160 18 GB 10k IBM disk (/tmp, second partition, next after swap :-)

Version  1.93  write   read putcNT getcNT   putc   getc  putcU  getcU
SunWave1 176670  11905  12732   2306   2330  47592  61554
SunWave1,176,670,11905,12732,2306,2330,47592,61554
16.850u 9.370s 0:29.74 88.1%0+0k 0+0io 622pf+0w

Can we please have an iicache patch against 2.4.17-pre7 (at least)?

Thanks,
Dieter



[reiserfs-list] What's in 2.4.17-pre5, now? A little more verbose kernel log, please.

2001-12-06 Thread Dieter Nützel

One of my wishes for Marcelo Tosatti, too...

Can we have an iicache patch against it, please?

Thanks,
Dieter