Tomasz Chmielewski wrote:
I have a 1.2 TB filesystem (of which 750 GB is used) that holds
almost 200 million files.
1.2 TB doesn't make this filesystem that big, but 200 million files is a decent number.


Most of the files are hardlinked multiple times; some of them are
hardlinked thousands of times.


Recently I began removing some of the unneeded files (or hardlinks), and to my surprise it takes longer than I initially expected.


After the cache is emptied (echo 3 > /proc/sys/vm/drop_caches) I can usually remove about 50,000-200,000 files with moderate performance. I see up to 5000 kB/s of reads/writes to and from the disk, and the I/O wait ("wa") reported by top is usually 20-70%.
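Something like this is what one removal batch looks like (the path below is only an example, not the actual tree):

  # flush the page/dentry/inode caches, then time one batch of deletes
  sync
  echo 3 > /proc/sys/vm/drop_caches
  time rm -f /backup/daily.0/some-dir/*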


After that, I/O wait grows to 99%, and the disk write speed drops to 50-200 kB/s (fifty to two hundred kilobytes per second).


Is it normal to expect the write speed to go down to only a few dozen kilobytes per second? Is it because of the many seeks? Can it be optimized somehow? The machine has loads of free memory; perhaps it could be used better?
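For what it's worth, watching the disk with iostat and vmstat during the slow phase should show whether the deletes are seek-bound (stock sysstat/procps tools, nothing special assumed):

  # extended per-device statistics every second; near-100% utilisation
  # combined with small average request sizes points at seek-bound random I/O
  iostat -x 1
  # the "wa" column here is the same I/O-wait figure that top reports
  vmstat 1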


Also, writing big files is very slow: it takes more than 4 minutes to write and sync a 655 MB file (so, a little more than 1 MB/s). Fragmentation, perhaps?
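For comparison, this is roughly how that big-file figure can be reproduced (the output path is just an example); timing the sync together with the write keeps the page cache from hiding the real disk rate:

  # write ~655 MB of zeroes and time the write plus the sync
  time sh -c 'dd if=/dev/zero of=/backup/testfile bs=1M count=655 && sync'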

It would be really interesting if you tried your workload with XFS. In my experience, XFS considerably outperforms ext3 on big (more than a few hundred GB) disks.
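A minimal way to set up such a comparison, assuming a spare partition you can reformat (/dev/sdb1 and the paths below are just placeholders):

  mkfs.xfs /dev/sdb1
  mount /dev/sdb1 /mnt/xfs-test
  # copy a representative slice of the tree; rsync -H preserves the
  # hard links among the files it copies
  rsync -aH /backup/daily.0/ /mnt/xfs-test/daily.0/
  # then repeat the same delete and big-write tests on the XFS copy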

Vlad