Olivier Mueller wrote:
> On Wed, 2009-05-06 at 16:15 +0300, Arkadi Shishlov wrote:
>> It's probably "dirhash" that is not enabled, or its cache is too small for the
>> task.
>
> $ sysctl -a |grep dirha
> UFS dirhash 1262 286K - 9715683 16,32,64,128,256,512,1024,2048,4096
> vfs.ufs.dirhash_docheck: 0 [...]
On Wed, 2009-05-06 at 13:54 +0200, Olivier Mueller wrote:
> -> it took about 12 hours to delete these 30GB of files and
> sub-directories (smarty cache files: many small files in many dirs).
Haven't you ever had the pleasure of running Sendmail on Solaris? :)
Move this data store to a separate partition [...]
If you aren't using ZFS, or even a GEOM volume with mirror/RAID5/soft updates/etc.,
you cannot claim that hardware RAID is faster. I learned
that 3 years ago.
I state exactly the opposite: all hardware RAID cards are made just to suck
money from those who believe in them.
Like "performance[...]
To: [...]rg; Benjamin Krueger; Olivier Mueller; freebsd-performance@freebsd.org; Bill Moran
Sent: Wednesday, May 6, 2009 2:31:16 PM
Subject: RE: filesystem: 12h to delete 32GB of data
> It could just be me, but I swear Hardware RAID has been faster for many
> many years, especially with RAID5 arrays - or anything that requires
> parity calcs. [...]
On Wed, May 6, 2009 at 12:21 PM, Matthew Seaman wrote:
> Gary Gatten wrote:
>> OT now, but in high i/o envs with high concurrency needs, RAID5 is
>> still the way to go, esp if 90% of i/o is reads. Of course it depends
>> on file size / type as well... Anyway, let's sum it up with "a
>> storage subsystem is only as fast as its slowest link"
Gary Gatten wrote:
> OT now, but in high i/o envs with high concurrency needs, RAID5 is
> still the way to go, esp if 90% of i/o is reads. Of course it depends
> on file size / type as well... Anyway, let's sum it up with "a
> storage subsystem is only as fast as its slowest link"
It's not just the bal[...]
It could just be me, but I swear Hardware RAID has been faster for many
many years, especially with RAID5 arrays - or anything that requires
parity calcs. Most of my benchmarking was done on SCO OpenServer and
Novell UnixWare and Netware, but hardware RAID controllers were always
faster and of course [...]
----- Original Message -----
From: Wojciech Puchar
To: Bill Moran
Cc: Gary Gatten; Benjamin Krueger; freebsd-performance@freebsd.org; Olivier Mueller; freebsd-questi...@freebsd.org
Sent: Wed May 06 13:31:53 2009
Subject: Re: filesystem: 12h to delete 32GB of data
> yes, some of them suck royally.
you should rather say "some of them don't suck".
> It could just be me, but I swear Hardware RAID has been faster for many
> many years, especially with RAID5 arrays - or anything that requires
maybe with RAID5, but using RAID5 today (huge disk sizes, little sense to
save on disk space) instead of RAID1/10 doesn't make much sense, as RAID5
is slower [...]
[...] config, or a gmirror/gstripe config.
Usually it's far slower.
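For concreteness, a minimal sketch of the software layout being referred to, assuming two hypothetical unused disks da0 and da1 (gmirror writes metadata to the last sector of each provider, so don't label disks that are in use):

# gmirror load                            # load the GEOM mirror kernel module
# gmirror label -v gm0 /dev/da0 /dev/da1  # build mirror gm0 across both disks
# newfs -U /dev/mirror/gm0                # UFS with soft updates on top of it
# mount /dev/mirror/gm0 /mnt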
Sorry, but my experience with that very server using a P400 controller with
256MB write cache is very different. My benchmarks showed that the controller
using RAID5 (with only 4 disks) is significantly faster than software
layouts.
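For what it's worth, a crude way to sanity-check such claims on FreeBSD; the device and file names here are placeholders:

# diskinfo -tv /dev/da0                             # raw seek/transfer benchmark of the device
$ dd if=/dev/zero of=/usr/ddtest bs=1m count=4096   # 4GB sequential write through the filesystem
$ rm /usr/ddtest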
Sent: Wed May 06 13:08:46 2009
Subject: RE: filesystem: 12h to delete 32GB of data
It could just be me, but I swear Hardware RAID has been faster for many
many years, especially with RAID5 arrays - or anything that requires
parity calcs. Most of my benchmarking was done on SCO OpenServer and
Novell UnixWare and Netware [...]
In response to "Gary Gatten" :
> It could just be me, but I swear Hardware RAID has been faster for many
> many years, especially with RAID5 arrays - or anything that requires
> parity calcs. Most of my benchmarking was done on SCO OpenServer and
> Novell UnixWare and Netware, but hardware RAID controllers [...]
Wojciech Puchar wrote:
>> means you had 6 million files. df -i would have been more useful in
>> the output above.
>> This brings a number of questions up:
>> * Are you _sure_ softupdates is enabled on that partition? That's
> he showed mount output - he has softdeps on.
>> * Are these 7200RPM disks or 15,000? Again, going t[...]
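Both points are straightforward to verify; a quick sketch, assuming the /dev/da0s1f device from the mount output quoted earlier:

$ mount | grep /usr       # the mount flags show "soft-updates" when it is on
$ tunefs -p /dev/da0s1f   # -p prints current tuning; look for "soft updates: (enabled)"
$ df -i /usr              # iused/ifree columns show how many inodes (files) are allocated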
-> it took about 12 hours to delete these 30GB of files and
sub-directories (smarty cache files: many small files in many dirs).
It's a little bit surprising, as it's on a recent HP ProLiant DL360 G5
with SAS disks (RAID1) running FreeBSD 6.x
( /dev/da0s1f on /usr (ufs, local, soft-updates) )
if [...]
In response to Olivier Mueller:
>
> Yes, it is one of the best options. My initial goal was to delete all
> files older than N days by cron (find | xargs rm, etc.), but if each
> cronjob takes 2 hours (and takes so much CPU time), it's probably not
> the best way.
>
> I'll make some more tests [...]
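As an aside, find's built-in -delete avoids spawning rm entirely; a sketch of such a cron job, where the path and the 7-day cutoff are only examples:

$ find ~/templates_c -type f -mtime +7 -delete       # unlink old files directly
$ find ~/templates_c -depth -type d -empty -delete   # then prune now-empty subdirectories

The on-disk unlink work is the same either way, so this trims process overhead, not I/O time.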
In response to Arkadi Shishlov:
> It's probably "dirhash" that is not enabled, or its cache is too small for the
> task.
I'm no expert, but I thought dirhash only improved read speed. His
bottleneck would be writes.
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/
On Wed, 2009-05-06 at 16:15 +0300, Arkadi Shishlov wrote:
> It's probably "dirhash" that is not enabled, or its cache is too small for the
> task.
$ sysctl -a |grep dirha
UFS dirhash 1262 286K - 9715683 16,32,64,128,256,512,1024,2048,4096
vfs.ufs.dirhash_docheck: 0
vfs.ufs.dirhash_mem:
It's probably "dirhash" that is not enabled, or its cache is too small for the
task.
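If the cache size is the issue, the relevant knobs can be inspected and raised at runtime; the 8 MB value below is only an example:

$ sysctl vfs.ufs.dirhash_mem vfs.ufs.dirhash_maxmem           # current usage vs. the ceiling
# sysctl vfs.ufs.dirhash_maxmem=8388608                       # raise the ceiling to 8 MB (as root)
# echo 'vfs.ufs.dirhash_maxmem=8388608' >> /etc/sysctl.conf   # make it persist across reboots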
Thanks for your answer Bill! (and to Will as well),
Some more info I gathered a few minutes ago:
[~/templates_c]$ date; du -s -m ; date
Wed May 6 13:35:15 CEST 2009
2652 .
Wed May 6 13:52:36 CEST 2009
[~/templates_c]$ date ; find . | wc -l ; date
Wed May 6 13:52:56 CEST 2009
30546
In response to Olivier Mueller:
> Hello,
>
> $ df -m ; date ; rm -r templates_c ; df -m ; date
> Filesystem  1M-blocks   Used  Avail Capacity  Mounted on
> /dev/da0s1a       989     45    864       5%  /
> /dev/da0s1f    128631 102179  16160      86%  /usr
> [...]
> Wed May 6 00:23:01 CEST 2009
>
Hello,
$ df -m ; date ; rm -r templates_c ; df -m ; date
Filesystem  1M-blocks   Used  Avail Capacity  Mounted on
/dev/da0s1a       989     45    864       5%  /
/dev/da0s1f    128631 102179  16160      86%  /usr
[...]
Wed May 6 00:23:01 CEST 2009
Filesystem  1M-blocks   Used  Avail Capacity  Mounted on [...]
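One workaround worth noting for this pattern: a rename within the same filesystem is instant, so the cache directory can be swapped out immediately and unlinked in the background. A sketch, using the templates_c directory above:

$ mv templates_c templates_c.old && mkdir templates_c   # application sees an empty dir at once
$ rm -rf templates_c.old &                              # the slow unlinking runs in the background

The total disk work is unchanged, but nothing user-facing waits 12 hours for it.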