Re: [reiserfs-list] Copy time comparison 2.4.20-pre6 <-> 2.4.19+data-logging (was:Compatibility of current 2.4.19.pending ...)

2002-09-17 Thread Oleg Drokin

Hello!

On Tue, Sep 17, 2002 at 07:39:39PM +0200, Manuel Krause wrote:

> >Copy same amount of data from RAM/nowhere to FS.
> >E.g. make a file with file names and sizes and write a script that
> >writes this amount of data from /dev/zero with these same names and needed 
> >sizes
> >into FS. (or just use RAMFS as your source if you have not much data and 
> >huge
> >RAM)
> To be honest, this already exceeds my linux knowledge...

I meant something like this:
You run a script that walks over your filesystem and creates a shell script
that first recreates the whole directory structure of the source dir and then,
for each file, emits the command needed to recreate a file of the same size.
E.g. for this directory contents:
green@angband:~/z> ls -lR
.:
total 1
drwxr-xr-x  2 green  green    114 Sep 18 09:08 t

./t:
total 148
-rw-rw-r--  1 green  green  69570 Aug 10 16:34 inode.c
-rw-rw-r--  1 green  green  66478 Aug 10 16:33 stree.c
-rw-rw-r--  1 green  green  10256 Aug 10 16:32 tail_conversion.c

The result of the work of the script would be:
mkdir t
dd if=/dev/zero of=t/inode.c bs=69570 count=1
dd if=/dev/zero of=t/stree.c bs=66478 count=1
dd if=/dev/zero of=t/tail_conversion.c bs=10256 count=1

And you can run the resulting script in the target dir.
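
Roughly, such a generator could look like this - an untested sketch, assuming
GNU find; the output path /tmp/recreate.sh is arbitrary, and zero-length files
would need extra handling:

  #!/bin/sh
  # Run from the top of the source tree; writes the recreation script
  # to /tmp/recreate.sh, which you then execute in the target dir.
  OUT=/tmp/recreate.sh
  echo '#!/bin/sh' > "$OUT"
  # first the whole directory structure ...
  find . -type d -printf 'mkdir -p "%p"\n' >> "$OUT"
  # ... then one dd per file, with the same name and size
  find . -type f -printf 'dd if=/dev/zero of="%p" bs=%s count=1\n' >> "$OUT"
  chmod u+x "$OUT"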

> I was fiddling with some test directories containing 195.8MB I copied to 
> and from /dev/shm with swap turned off.
> 
> # time cp -a /dev/shm/. /mnt/beta/z.Backup.3/
> kernel 2.4.20-pre7       | kernel 2.4.20-pre6
> real  0m9.006s           | real  0m6.740s
> user  0m0.190s           | user  0m0.230s
> sys   0m5.250s           | sys   0m4.780s
> # rm -r /dev/shm/*
> # time cp -a /mnt/beta/z.Backup.3/. /dev/shm/
> kernel 2.4.20-pre7       | kernel 2.4.20-pre6
> real  0m6.349s           | real  0m6.180s
> user  0m0.210s           | user  0m0.220s
> sys   0m2.450s           | sys   0m2.510s

This dataset is way too small and entirely fits into your RAM, I presume.
So to avoid any distortion of results you'd better have all periodic stuff
disabled (though kupdated is still there), so it's better to run it several
times.
Also, since it fits into RAM, it must be flushed out, so I usually do this
with a command like:
time sh -c "cp -a /testfs0/linux-2.4.18 /mnt/ ; umount /mnt"

> # time dd if=/dev/zero bs=1M count=1000 of=/mnt/beta/testfile.zero
> kernel 2.4.20-pre7       | kernel 2.4.20-pre6
> real  1m11.390s          | real  1m42.011s
> sys   0m11.230s          | sys   0m5.620s

Hm. While system time is lower, as expected, real time increased; that's strange.

> # time dd of=/dev/null bs=1M if=/mnt/beta/testfile.zero
> kernel 2.4.20-pre7       | kernel 2.4.20-pre6
> real  1m16.738s          | real  1m39.094s
> sys   0m5.460s           | sys   0m5.930s

And real time is bigger for reads too, so it seems the data layout is different.

That's really strange. If you can reproduce this behaviour, I am interested
in getting debugreiserfs -d output for each case after you umount this volume
(I assume the /mnt/beta/ filesystem contains nothing but this testfile.zero
file).

> >Compare 2.4.20-pre[67] if you see any difference.
> >Ah, also copy your data from its original disk location to /dev/null and
> >measure the time of that operation to know how much of the total time is
> >occupied by reads.
> >Also you can calculate read and write throughput separately this way.
> >And if reads are slower than writes - ...
> I'm definitely not sure if my lines above are something you meant.

Yes, kind of, though you have omitted the timings of copying the original data
to /dev/shm/, which would give us the read speed from the original media.

In fact, instead of turning off swap you can run the command
mount none /mnt/ramfs -t ramfs
(if you have ramfs compiled in, of course) and /mnt/ramfs is now a kind of RAM
filesystem with very low overhead. It also cannot be swapped out,
so if you fill all of your RAM, your box will OOM ;)
But then the test itself is very small.
Probably you need to run something like
time find /source/that/needs/to/be/backed/up -type f -exec cat {} >/dev/null \;

to get read performance, and implement a script like the one I mentioned at the
beginning to measure writes.
This way you do not need tons of RAM.
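
For example, bracketing each read run with a remount so it starts from a cold
cache (a sketch; the device and mount point below are placeholders):

  # Remount the source fs so nothing is cached, then time pure reads.
  umount /mnt/source
  mount /dev/hdXn /mnt/source
  time find /mnt/source -type f -exec cat {} > /dev/null \;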

Bye,
Oleg



Re: [reiserfs-list] Copy time comparison 2.4.20-pre6 <-> 2.4.19+data-logging(was:Compatibility of current 2.4.19.pending ...)

2002-09-18 Thread Manuel Krause

Hi!

Even at the risk of making this mail unreadable, I want to answer everything
inline. Please don't complain.

I feel the urgent need to correct at least my last testing results, as your
last mail revealed serious errors in my timing tests and kind of opened my
eyes.

On 09/18/2002 07:20 AM, Oleg Drokin wrote:
> Hello!
> 
> On Tue, Sep 17, 2002 at 07:39:39PM +0200, Manuel Krause wrote:
> 
>>>Copy same amount of data from RAM/nowhere to FS.
>>>E.g. make a file with file names and sizes and write a script that
>>>writes this amount of data from /dev/zero with these same names and needed 
>>>sizes into FS. (or just use RAMFS as your source if you have not much data 
>>>and huge RAM)
>>
>>To be honest, this already exceeds my linux knowledge...
> 
> I meant something like this:
> You run a script that walks over your filesystem and creates a shell script
> that first recreates the whole directory structure of the source dir and then,
> for each file, emits the command needed to recreate a file of the same size.
> E.g. for this directory contents:
> green@angband:~/z> ls -lR 
[...]
> The result of the work of the script would be:
> mkdir t
> dd if=/dev/zero of=t/inode.c bs=69570 count=1
> dd if=/dev/zero of=t/stree.c bs=66478 count=1
> dd if=/dev/zero of=t/tail_conversion.c bs=10256 count=1
> 
> And you can run the resulting script in the target dir.

Yes, I saw this work in a nightmare last night. It's scheduled for some dark,
moonless, cold, snow-flurry winter night, sorry. Unless someone experienced
would like to provide me with a basic script for that... ;-))

>>I was fiddling with some test directories containing 195.8MB I copied to 
>>and from /dev/shm with swap turned off.
>>
>># time cp -a /dev/shm/. /mnt/beta/z.Backup.3/
>>kernel 2.4.20-pre7       | kernel 2.4.20-pre6
>>real  0m9.006s           | real  0m6.740s
>>user  0m0.190s           | user  0m0.230s
>>sys   0m5.250s           | sys   0m4.780s
>># rm -r /dev/shm/*
>># time cp -a /mnt/beta/z.Backup.3/. /dev/shm/
>>kernel 2.4.20-pre7       | kernel 2.4.20-pre6
>>real  0m6.349s           | real  0m6.180s
>>user  0m0.210s           | user  0m0.220s
>>sys   0m2.450s           | sys   0m2.510s
> 
> This dataset is way too small and entirely fits into your RAM, I presume.

Yes, it fits. I know that problem with this RAM-based test. Though I could
increase the testing directory to get a bit closer to the OOM limit, having
512MB available.

> So to avoid any distortion of results you'd better have all periodic stuff
> disabled (though kupdated is still there), so it's better to run it several
> times.
> Also, since it fits into RAM, it must be flushed out, so I usually do this
> with a command like:
> time sh -c "cp -a /testfs0/linux-2.4.18 /mnt/ ; umount /mnt"

Couldn't you have written these words to me some years earlier?! The effect is
measurable, and for almost any fs interaction discussed so far it is huge, or
at least relevant. So, after reviewing my partition-backup scripts: forget
_all_ results I posted to the list. They are all lacking the umount=flush
component.

Now, you caught me as the "fool of reiserfs-list"... Quite embarrassing. 
Mmmh. Painful.

>># time dd if=/dev/zero bs=1M count=1000 of=/mnt/beta/testfile.zero
>>kernel 2.4.20-pre7       | kernel 2.4.20-pre6
>>real  1m11.390s          | real  1m42.011s
>>sys   0m11.230s          | sys   0m5.620s
> 
> Hm. While system time is lower, as expected, real time increased; that's strange.
> 
>># time dd of=/dev/null bs=1M if=/mnt/beta/testfile.zero
>>kernel 2.4.20-pre7       | kernel 2.4.20-pre6
>>real  1m16.738s          | real  1m39.094s
>>sys   0m5.460s           | sys   0m5.930s
> 
> And real time is bigger for reads too, so it seems the data layout is different.
> 
> That's really strange. If you can reproduce this behaviour, I am interested
> in getting debugreiserfs -d output for each case after you umount this volume
> (I assume the /mnt/beta/ filesystem contains nothing but this testfile.zero
> file).

No. /mnt/beta/ is my software storage partition and contains this:
  /dev/hda11   5550248   4089088   1461160  74% /mnt/beta .

I have no way to provide this complete "debugreiserfs -d /dev/hda11" output set
on my web space: if I read your wording correctly, that is four files (-pre6
without the 1GB file, -pre6 with the 1GB file, -pre7 without the 1GB file,
-pre7 with the 1GB file) of 42MB each. As .tar.gz it is 4MB each, and even that
set doesn't fit on my private t-online website. Maybe it would work if sent
sequentially by mail.
Oh. O.k., you get a definite "No" on this, sorry. I just reviewed the
debugreiserfs output file content and I would not send or publish this in any
way. It is simply too sensitive, as it contains the actual file and directory
names.
Is it possible to provide the needed info without cleartext directory or file
names in the future?! (With these names replaced by sequentially assigned
numbers?)



O.k., too many words about unneeded things. I've redone the testing I posted
and made sure to umount the involved partitions in between in order to force
the needed flush and c

Re: [reiserfs-list] Copy time comparison 2.4.20-pre6 <-> 2.4.19+data-logging (was:Compatibility of current 2.4.19.pending ...)

2002-09-18 Thread Oleg Drokin

Hello!

On Thu, Sep 19, 2002 at 03:14:56AM +0200, Manuel Krause wrote:

> >And you can run the resulting script in the target dir.
> Yes, I saw this work in a nightmare last night. It's scheduled for some dark,
> moonless, cold, snow-flurry winter night, sorry. Unless someone experienced
> would like to provide me with a basic script for that... ;-))

> >This dataset is way too small and entirely fits into your RAM, I presume.
> Yes, it fits. I know that problem with this RAM-based test. Though I could
> increase the testing directory to get a bit closer to the OOM limit, having
> 512MB available.

No, this is not enough, of course, since some data will remain unflushed and
the amount of such data is relatively large compared to the total amount of data.

> >So to avoid any distortion of results you'd better have all periodic stuff
> >disabled (though kupdated is still there), so it's better to run it several
> >times.
> >Also, since it fits into RAM, it must be flushed out, so I usually do this
> >with a command like:
> >time sh -c "cp -a /testfs0/linux-2.4.18 /mnt/ ; umount /mnt"
> Couldn't you have written these words to me some years earlier?! The effect is
> measurable, and for almost any fs interaction discussed so far it is huge, or
> at least relevant. So, after reviewing my partition-backup scripts: forget
> _all_ results I posted to the list. They are all lacking the umount=flush
> component.

It is only needed if the amount of data that can stay cached is big enough to be
noticeable compared to the total amount of data to be copied.

> No. /mnt/beta/ is my software storage partition and contains this:
>  /dev/hda11   5550248   4089088   1461160  74% /mnt/beta .

Ah!

> Oh. O.k., you get a definite "No" on this, sorry. I just reviewed the
> debugreiserfs output file content and I would not send or publish this in any
> way. It is simply too sensitive, as it contains the actual file and directory
> names.
No, then I do not need that debugreiserfs dump anyway.

But here is another warning:
I presume that before each copy test is done, /mnt/beta/z.Backup.3 is removed
completely and /mnt/beta is unmounted and mounted back, and also that between
the several writing attempts (and during these attempts, of course) no other
processes can write to this FS.
If those two clauses are not true, then the results are also meaningless, as
lots of unnecessary tree reads are issued for the overwrite, and new blocks are
not allocated but existing ones are reused.
If somebody else can write to the FS, then with every next test the blocks
chosen for the files are different (the old ones may already be occupied).
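
A single write-test cycle that satisfies both clauses might look like this
(just a sketch; the ramfs staging area and the exact paths are my assumptions):

  # Clean target, remount to flush, then the timed copy from RAM to disk.
  # Nothing else may write to /mnt/beta while this runs.
  rm -rf /mnt/beta/z.Backup.3        # remove the previous copy completely
  umount /mnt/beta                   # flush, so the freed blocks are really free
  mount /dev/hda11 /mnt/beta
  mkdir /mnt/beta/z.Backup.3
  time sh -c "cp -a /mnt/ramfs/. /mnt/beta/z.Backup.3/ ; umount /mnt/beta"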

> Is it possible to provide the needed info without cleartext directory or file
> names in the future?! (With these names replaced by sequentially assigned
> numbers?)

In such a case you can determine the object id of the big file (shown to
userspace as the inode number) and only provide its SD and indirect items:
|  9|4 357 0x0 SD (0), len 44, location 1572 entry count 65535, ...
| 10|4 357 0x1 IND (1), len 504, location 1068 entr
126 pointers
[ 9948(126)]

This is an example of a file with objectid 357 that is 126 blocks in size.
Blocks 9948-10074 (all contiguous) are used.

If the file is very big, there will be several IND (indirect) items in other
nodes; the number in brackets changes to show the offset that the INDIRECT item
starts with.
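
One way to extract just those items, sketched under the assumption that the
dump lines really carry the object id in the second key field as in the sample
above (ls -i gives the inode number, which is shown as the object id):

  # Keep only the SD and IND lines of one file from the debugreiserfs dump.
  OID=$(ls -i /mnt/beta/testfile.zero | awk '{print $1}')
  umount /mnt/beta
  debugreiserfs -d /dev/hda11 | grep -E " $OID 0x[0-9a-f]+ (SD|IND)" > /tmp/items.txt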

> Comparison of "dd" actions:
> ---
> reading command: time sh -c "dd if=/mnt/beta/testfile.zero bs=1M
>  count=1000 of=/dev/null ; umount /mnt/beta"
> writing command: time sh -c "dd if=/dev/zero bs=1M count=1000
>  of=/mnt/beta/testfile.zero ; umount /mnt/beta"

I presume you erased /mnt/beta/testfile.zero between tests and executed
sync.

Ah, before I forget - in reiserfs, if you erased something, the blocks that were
freed are only given back to you on the next journal flush or after a sync.
So if you do something like this:
rm -f /mnt/beta/testfile.zero ; time sh -c "dd ...",
then the second file will get different block numbers.
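
So to keep the block numbers comparable between runs, something like this
(a sketch; a remount of /mnt/beta instead of sync, as you do, works as well):

  # Make sure the blocks freed by the rm are returned before the next run.
  rm -f /mnt/beta/testfile.zero
  sync
  time sh -c "dd if=/dev/zero bs=1M count=1000 of=/mnt/beta/testfile.zero ; umount /mnt/beta"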

> related df values:
> /dev/hda11   5550248   4089088   1461160  74% /mnt/beta
> /dev/hda11   5550248   5114104    436144  93% /mnt/beta
> Yes, that's going over the "sensible" filesystem content value.

Hm. Is this before and after the dd command, or what?

> Comparison of "cp -a" actions:
> --
> reading command: time sh -c "cp -a /mnt/beta/z.Backup.3/. /mnt/ramfs/ ;
>  umount /mnt/beta ; umount /mnt/ramfs"
> writing command: time sh -c "cp -a /mnt/ramfs/. /mnt/beta/z.Backup.3/ ;
>  umount /mnt/beta ; umount /mnt/ramfs"

You mean you executed your commands in this same order?
I.e., first reading the files from the partition and then writing the same files
back in place of the already existing ones? Then the point above about
overwriting files applies directly here.
I thought you were reading files from one filesystem and writing them to
another one.

> >Yes, kind of, though you have omitted the timings of copying the original data
> >to /dev/shm/, which would give us the read speed from the original media.
> I thought my posted set "# time cp -a /mnt/beta/z.Backup

Re: [reiserfs-list] Copy time comparison 2.4.20-pre6 <-> 2.4.19+data-logging(was:Compatibility of current 2.4.19.pending ...)

2002-09-19 Thread Manuel Krause

On 09/19/2002 08:34 AM, Oleg Drokin wrote:
> Hello!
> 
> On Thu, Sep 19, 2002 at 03:14:56AM +0200, Manuel Krause wrote:
> 
> 
>>>And you can run the resulting script in the target dir.
>>
>>Yes, I saw this work in a nightmare last night. It's scheduled for some dark,
>>moonless, cold, snow-flurry winter night, sorry. Unless someone experienced
>>would like to provide me with a basic script for that... ;-))
> 
>>>This dataset is way too small and entirely fits into your RAM, I presume.
>>
>>Yes, it fits. I know that problem with this RAM-based test. Though I could
>>increase the testing directory to get a bit closer to the OOM limit, having
>>512MB available.
> 
> No, this is not enough, of course, since some data will remain unflushed and
> the amount of such data is relatively large compared to the total amount of data.

And if the participating filesystems are umounted after writing the data?

>>>So to avoid any distortion of results you'd better have all periodic stuff
>>>disabled (though kupdated is still there), so it's better to run it several
>>>times.
>>>Also, since it fits into RAM, it must be flushed out, so I usually do this
>>>with a command like:
>>>time sh -c "cp -a /testfs0/linux-2.4.18 /mnt/ ; umount /mnt"
>>
>>Couldn't you have written these words to me some years earlier?! The effect is
>>measurable, and for almost any fs interaction discussed so far it is huge, or
>>at least relevant. So, after reviewing my partition-backup scripts: forget
>>_all_ results I posted to the list. They are all lacking the umount=flush
>>component.
> 
> It is only needed if the amount of data that can stay cached is big enough to
> be noticeable compared to the total amount of data to be copied.
> 
>>No. /mnt/beta/ is my software storage partition and contains this:
>> /dev/hda11   5550248   4089088   1461160  74% /mnt/beta .
> 
> Ah!
> 
>>Oh. O.k., you get a definite "No" on this, sorry. I just reviewed the
>>debugreiserfs output file content and I would not send or publish this in any
>>way. It is simply too sensitive, as it contains the actual file and directory
>>names.
> 
> No, then I do not need that debugreiserfs dump anyway.
> 
> But here is another warning:
> I presume that before each copy test is done, /mnt/beta/z.Backup.3 is removed
> completely and /mnt/beta is unmounted and mounted back, and also that between
> the several writing attempts (and during these attempts, of course) no other
> processes can write to this FS.

These clauses are both true and apply to the dd tests, too. Mmh, except for the
cases where /mnt/beta/z.Backup.3 or testfile.zero was the source to be
copied/dd'ed... There have been no overwrites or other writes to the disk
during the test sets.

> If those two clauses are not true, then the results are also meaningless, as
> lots of unnecessary tree reads are issued for the overwrite, and new blocks are
> not allocated but existing ones are reused.
> If somebody else can write to the FS, then with every next test the blocks
> chosen for the files are different (the old ones may already be occupied).
> 
>>Is it possible to provide the needed info without cleartext directory or file
>>names in the future?! (With these names replaced by sequentially assigned
>>numbers?)
> 
> In such a case you can determine the object id of the big file (shown to
> userspace as the inode number) and only provide its SD and indirect items:
> |  9|4 357 0x0 SD (0), len 44, location 1572 entry count 65535, ...
> | 10|4 357 0x1 IND (1), len 504, location 1068 entr
> 126 pointers
> [ 9948(126)]
> 
> This is an example of a file with objectid 357 that is 126 blocks in size.
> Blocks 9948-10074 (all contiguous) are used.
> 
> If the file is very big, there will be several IND (indirect) items in other
> nodes; the number in brackets changes to show the offset that the INDIRECT
> item starts with.
> 
>>Comparison of "dd" actions:
>>---
>>reading command: time sh -c "dd if=/mnt/beta/testfile.zero bs=1M
>> count=1000 of=/dev/null ; umount /mnt/beta"
>>writing command: time sh -c "dd if=/dev/zero bs=1M count=1000
>> of=/mnt/beta/testfile.zero ; umount /mnt/beta"
> 
> I presume you erased /mnt/beta/testfile.zero between tests and executed
> sync.

I umounted the partition and mounted it back. I thought that would be 
the right action to avoid what you describe in the following:

> Ah, before I forget - in reiserfs, if you erased something, the blocks that
> were freed are only given back to you on the next journal flush or after a sync.
> So if you do something like this:
> rm -f /mnt/beta/testfile.zero ; time sh -c "dd ...",
> then the second file will get different block numbers.
> 
>>related df values:
>>/dev/hda11   5550248   4089088   1461160  74% /mnt/beta
>>/dev/hda11   5550248   5114104    436144  93% /mnt/beta
>>Yes, that's going over the "sensible" filesystem content value.
> 
> Hm. Is this before and after the dd command, or what?

Yes, without and with the 1GB file.

>>Comparison of "cp -a" actions:
>>--
>>reading command: time sh -c "cp -a /mnt/beta/z.Backup.3/. /mnt/ramfs/ ;
>> umount /mnt/beta

Re: [reiserfs-list] Copy time comparison 2.4.20-pre6 <-> 2.4.19+data-logging (was:Compatibility of current 2.4.19.pending ...)

2002-09-19 Thread Oleg Drokin

Hello!

On Thu, Sep 19, 2002 at 05:52:15PM +0200, Manuel Krause wrote:

> >>Yes, it fits. I know that problem with this RAM-based test. Though I could
> >>increase the testing directory to get a bit closer to the OOM limit, having
> >>512MB available.
> >No, this is not enough, of course, since some data will remain unflushed and
> >the amount of such data is relatively large compared to the total amount of data.
> And if the participating filesystems are umounted after writing the data?

Then it is ok, of course. All of the fs's buffers are flushed on umount.

> >But here is another warning:
> >I presume that before each copy test is done, /mnt/beta/z.Backup.3 is removed
> >completely and /mnt/beta is unmounted and mounted back, and also that between
> >the several writing attempts (and during these attempts, of course) no other
> >processes can write to this FS.
> These clauses are both true and apply to the dd tests, too. Mmh, except for
> the cases where /mnt/beta/z.Backup.3 or testfile.zero was the source to be
> copied/dd'ed... There have been no overwrites or other writes to the disk
> during the test sets.
Ok, good.

> >I presume you erased /mnt/beta/testfile.zero between tests and executed
> >sync.
> I umounted the partition and mounted it back. I thought that would be 
> the right action to avoid what you describe in the following:

Yes.

> >I thought that the original data was residing on another filesystem on
> >another disk.
> >If originally you were copying data from one disk to the same disk, just
> >another partition, then this is just lots of seeks.
> Thanks for these reminders. Originally I wanted to copy data from one 
> disk to another and on the way capture the timings to compare the 
> throughput... Then you pointed me to re-check pure reads and pure writes 
> mainly to make sure the writes were not read-speed bound and to compare 
> this behaviour on -pre6 and -pre7.

Originally I was mostly interested in reads from the source fs, not the one to
which you have copied the data (though that one might also be useful).

Ok, thank you for lots of testing.

Bye,
Oleg



Re: [reiserfs-list] Copy time comparison 2.4.20-pre6 <-> 2.4.19+data-logging(was:Compatibility of current 2.4.19.pending ...)

2002-09-22 Thread Manuel Krause

On 09/19/2002 06:01 PM, Oleg Drokin wrote:
> Hello!
> 
> On Thu, Sep 19, 2002 at 05:52:15PM +0200, Manuel Krause wrote:
> 
[...]
>>
>>Thanks for these reminders. Originally I wanted to copy data from one 
>>disk to another and on the way capture the timings to compare the 
>>throughput... Then you pointed me to re-check pure reads and pure writes 
>>mainly to make sure the writes were not read-speed bound and to compare 
>>this behaviour on -pre6 and -pre7.
> 
> Originally I was mostly interested in reads from the source fs, not the one to
> which you have copied the data (though that one might also be useful).
> 
> Ok, thank you for lots of testing.
> 
> Bye,
> Oleg
> 

Hi Oleg & others!

O.k., somehow I managed to "find" the "script to make the scripts" to measure
writes from nowhere to the target disk. I decided to make up two scripts - one
for the dirs and one for the files - originally for the comparison of "pure"
file writes with the reads (see below), including the umounts. They contain
just the lists of commands Oleg posted recently (e.g. mkdir "/mnt/beta/dir1"
/// dd if=/dev/zero count=1 bs=8466760 of="/mnt/beta/dir1/.../filename1").
And I re-used Oleg's posted command to measure reads from the source disk for
the most recent reiserfs kernel versions.

The results are shown below.

I'm not very happy about comparing the "pure" reads against the writes,
especially alongside the copy timings, as different commands are involved and
their (relative) overhead is quite unknown - am I right? (So, e.g., I didn't
want to use the ratio of these values to tweak the disks' read & write latency
settings for now.) CPU usage during the dd... and the find...cat commands is
very high. Also, the disk access pattern as watched in ksysguardd is different
for each of these kinds of tests.

So read and compare it yourself; I would be glad to get comments on how I
should refine it.


Good night,

Manuel

--

/dev/hda11 5550248   3927572   1622676  71% /mnt/beta
/dev/hdd11 5550248   3927572   1622676  71% /mnt/gamma
containing 58015 files, 3481 directories

kernel  2.4.19-          2.4.20-pre6    2.4.20-pre7
        data-logging

# time sh -c "cp -ax /mnt/gamma/. /mnt/beta/ ; umount /mnt/gamma ; 
umount /mnt/beta "
(representing copies from source to target disk)

real    7m46.970s        10m57.716s     10m04.328s
user    0m01.710s         0m01.390s      0m01.540s
sys     1m25.440s         1m11.100s      1m18.840s

# time /tmp/script.dirs.sh
(representing directory writes to target disk)

real    0m09.972s         0m10.055s      0m10.316s
user    0m04.960s         0m04.770s      0m04.880s
sys     0m04.370s         0m04.590s      0m04.550s

# time sh -c "/tmp/script.files.sh ; umount /dev/hda11 ; umount /dev/hdd11 "
(representing file writes to target disk)

real    7m35.992s         8m24.499s      8m20.830s
user    1m31.100s         1m30.120s      1m32.280s
sys     2m21.480s         1m42.230s      2m02.660s

# cd /mnt/ ; time sh -c "find ./gamma/* -type f -exec cat {} >/dev/null 
\; ; umount /dev/hda11 ; umount /dev/hdd11 "
(representing reads from source disk)
real    8m34.665s         9m54.103s      9m50.796s
user    1m11.650s         1m10.760s      1m11.100s
sys     1m15.000s         1m29.960s      1m17.370s


I took care to freshly mount the participating partitions before each test, to
recreate and mount+umount the counterpart reiserfs partition beforehand when
appropriate, and to allow no overwrites and no other accesses to the source
partition during these tests.

Going back from 2.4.20-pre7 to 2.4.19-data-logging, and being in doubt about
the effects of the new block allocator, I recreated the source filesystem
completely (copy to a new target and copy back) to measure the above
data-logging values. The first copy from the former 2.4.20-pre7 reiserfs to
2.4.19-data-logging had these timings:
   real   7m38.715s
   user   0m2.030s
   sys    1m29.560s

Command used to create the "/tmp/script.[dirs,files].sh" scripts:
sh -c 'cd /mnt/gamma ; find * -type d -fprintf /tmp/script.dirs.sh 
"mkdir \"/mnt/beta/%p\"\n" ; find * -type f -fprintf 
/tmp/script.files.sh "dd if=/dev/zero count=1 bs=%s 
of=\"/mnt/beta/%p\"\n" ; chmod u+x /tmp/script.*.sh'

--