I have a 1.2 TB filesystem (of which 750 GB is used) which holds
almost 200 million files.
1.2 TB doesn't make this filesystem that big, but 200 million files is a decent number.


Most of the files are hardlinked multiple times, and some of them are
hardlinked thousands of times.
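
If it helps to get a picture of the link counts, something along these
lines should show it (the individual file path is just an example):

# find /mnt/iscsi_backup -xdev -type f -links +1000 | wc -l   # files hardlinked more than 1000 times
# stat -c %h /mnt/iscsi_backup/path/to/some/file              # link count of a single file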


Recently I began removing some of the unneeded files (or rather, hardlinks), and to my surprise it takes longer than I initially expected.


After the cache is emptied (echo 3 > /proc/sys/vm/drop_caches) I can usually remove about 50,000-200,000 files with moderate performance. I see up to 5000 kB/s of reads/writes from/to the disk, and the IO wait (wa) reported by top is usually 20-70%.
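
For reference, the removal step looks roughly like this (files-to-remove.txt
is just an illustration - in practice one batch is ~50,000-200,000 paths):

# sync; echo 3 > /proc/sys/vm/drop_caches           # start from a cold cache
# time xargs -d '\n' -a files-to-remove.txt rm -f   # remove one batch of files/hardlinks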


After that, IO wait grows to 99%, and disk write speed drops to 50 kB/s - 200 kB/s (fifty to two hundred kilobytes per second).


Is it normal for the write speed to drop to only a few dozen kilobytes per second? Is it because of that many seeks? Can it be optimized somehow? The machine has loads of free memory; perhaps it could be used better?
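
If it helps to judge whether this is seek-bound, watching the disk while a
batch is being removed should show it (iostat is from the sysstat package):

# iostat -x 1        # await/svctm and %util per device, once per second
# vmstat 1           # bi/bo and the wa column over time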


Also, writing big files is very slow - it takes more than 4 minutes to write and sync a 655 MB file (so, roughly 2.5 MB/s) - fragmentation perhaps?

+ dd if=/dev/zero of=testfile bs=64k count=10000
10000+0 records in
10000+0 records out
655360000 bytes (655 MB) copied, 3,12109 seconds, 210 MB/s
+ sync
0.00user 2.14system 4:06.76elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+883minor)pagefaults 0swaps
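
FWIW, the 210 MB/s above is just the speed into the page cache; to make dd
account for the flush itself, something like this should work (assuming GNU dd):

# dd if=/dev/zero of=testfile bs=64k count=10000 conv=fsync   # fsync before dd reports its timing
# dd if=/dev/zero of=testfile bs=64k count=10000 oflag=direct # bypass the page cache entirely
# filefrag -v testfile                                        # see how many extents the file ended up in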


# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda              1,2T  697G  452G  61% /mnt/iscsi_backup

# df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sda                154M     20M    134M   13% /mnt/iscsi_backup




--
Tomasz Chmielewski
http://wpkg.org
