I am removing 300 GB of data spread across 130 files within a single
directory, and the process takes just over 2 hours. In my past experience,
removing a small number of large files was very quick, almost instantaneous.
I am running Red Hat Linux on IBM pSeries hardware against a SAN with SATA
and Fibre Channel drives. I see this issue on both the SATA and Fibre Channel
sides, although the rm process is slightly faster on Fibre Channel.
uname -a : Linux hostname 2.6.9-55.EL #1 SMP Fri Apr 20 16:33:09 EDT 2007
ppc64 ppc64 ppc64 GNU/Linux
Commands : cd /path/directory/subdirectory
rm -f *
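
For what it's worth, a rough way to check whether the time is being spent
per file rather than in rm itself (the path below is the same placeholder
as above, and these commands are only a sketch, not part of my original run)
would be to time each unlink separately:

cd /path/directory/subdirectory
for f in *; do
    # time each individual unlink; if each large file takes roughly the
    # same long interval, the cost is in freeing the file's blocks,
    # not in rm scanning the directory
    time rm -f -- "$f"
done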
I wanted to know if there is a way to speed this up, as it causes a 3-hour
process to stretch to 5 hours.
Thanks,
Ken Naim