Sorry, but that statement is incorrect.
Unless you are actually using ZFS, or even a GEOM volume with mirror/RAID5/softupdates/etc.,
you can't claim that hardware RAID is faster. I learned that 3
years ago.
It takes about 30 minutes to mirror 1.5TB on ZFS. Try that on hardware RAID.
I did the sam
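For reference, attaching a second disk to a ZFS pool resilvers only the blocks
that are actually allocated, which is where most of that speed comes from. A
minimal sketch, assuming a hypothetical pool called tank and disks ada0/ada1:
$ zpool create tank ada0           # start with a single-disk pool
$ zpool attach tank ada0 ada1      # turn the vdev into a two-way mirror
$ zpool status tank                # shows resilver progress and an estimate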
On Wed, May 6, 2009 at 12:21 PM, Matthew Seaman
wrote:
> Gary Gatten wrote:
>> OT now, but in high i/o envs with high concurrency needs, RAID5 is
>> still the way to go, esp if 90% of i/o is reads. Of course it depends
>> on file size / type as well... Anyway, let's sum it up with "a
>> storage subsystem is only as fast as its slowest link"
Gary Gatten wrote:
> OT now, but in high i/o envs with high concurrency needs, RAID5 is
> still the way to go, esp if 90% of i/o is reads. Of course it depends
> on file size / type as well... Anyway, let's sum it up with "a
> storage subsystem is only as fast as its slowest link"
It's not just the bal
It could just be me, but I swear Hardware RAID has been faster for many
many years, especially with RAID5 arrays - or anything that requires
parity calcs. Most of my benchmarking was done on SCO OpenServer and
Novell UnixWare and Netware, but hardware RAID controllers were always
faster and of cou
OT now, but in high i/o envs with high concurrency needs, RAID5 is still the
way to go, esp if 90% of i/o is reads. Of course it depends on file size / type
as well... Anyway, let's sum it up with "a storage subsystem is only as fast as
its slowest link"
- Original Message -
From: Wojciech Puchar
Yes, some of them suck royally.
You should rather say "some of them don't suck".
> It could just be me, but I swear Hardware RAID has been faster for many
> many years, especially with RAID5 arrays - or anything that requires
maybe with RAID5, but using RAID5 today (huge disk sizes, little sense to
save on disk space) instead of RAID1/10 doesn't make much sense, as RAID5
is sl
config, or gmirror/gstripe config.
Usually it's far slower.
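For anyone who wants to compare for themselves, a bare-bones gmirror setup looks
roughly like this (the disk names, label and mount point are only placeholders):
$ gmirror load                                      # load geom_mirror.ko
$ gmirror label -v -b round-robin gm0 /dev/ada1 /dev/ada2
$ newfs -U /dev/mirror/gm0                          # -U enables soft updates
$ mount /dev/mirror/gm0 /data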
Sorry, but my experience with that very server using a P400 controller with
256MB of write cache is very different. My benchmarks showed that the controller
using RAID5 (with only 4 disks) is significantly faster than software
layouts.
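For anyone repeating this kind of comparison, even a crude sequential test makes
the difference visible; a hedged example (the file path, block size and count are
arbitrary, and a real benchmark would use something like bonnie++ instead):
$ dd if=/dev/zero of=/usr/ddtest bs=1m count=8192   # rough sequential write rate
$ dd if=/usr/ddtest of=/dev/null bs=1m              # read it back
$ rm /usr/ddtest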
Sorry, "drive" in last sentence should be "driver"!
- Original Message -
From: owner-freebsd-questi...@freebsd.org
To: Benjamin Krueger ; Wojciech Puchar
Cc: freebsd-performance@freebsd.org ; Olivier
Mueller ; Bill Moran ;
freebsd-questi...@freebsd.org
Sent: Wed May 06 13:08:46 2009
In response to "Gary Gatten" :
> It could just be me, but I swear Hardware RAID has been faster for many
> many years, especially with RAID5 arrays - or anything that requires
> parity calcs. Most of my benchmarking was done on SCO OpenServer and
> Novell UnixWare and Netware, but hardware RAID c
Wojciech Puchar wrote:
means you had 6 million files. df -i would have been more useful in
the output above.
This brings a number of questions up:
* Are you _sure_ softupdates is enabled on that partition? That's
he showed mount output - he has softdeps on.
* Are these 7200RPM disks or 15,000? Again, going t
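As an aside, the inode count and the softupdates question are both quick to
check from the shell; a hedged example using the device name that appears later
in the thread (the output shape is only indicative):
$ df -i /usr            # the iused/ifree columns show how many inodes are in use
$ mount | grep da0s1f   # "soft-updates" in the option list confirms softdep is on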
-> it took about 12 hours to delete these 30GB of files and
sub-directories (Smarty cache files: many small files in many dirs).
It's a little bit surprising, as it's on a recent HP ProLiant DL360 G5
with SAS disks (RAID1) running FreeBSD 6.x
( /dev/da0s1f on /usr (ufs, local, soft-updates) )
if
In response to Olivier Mueller :
>
> Yes, it is one of the best options. My initial goal was to delete all
> files older than N days by cron (find | xargs | rm, etc.), but if each
> cronjob takes 2 hours (and takes so much cpu time), it's probably not
> the best way.
>
> I'll make some more te
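A hedged sketch of the cron approach mentioned in that quote, assuming a made-up
cache path and a 7-day cutoff:
$ find /var/www/templates_c -type f -mtime +7 -delete
# or, spelled out with xargs and NUL separators for odd filenames:
$ find /var/www/templates_c -type f -mtime +7 -print0 | xargs -0 rm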
In response to Arkadi Shishlov :
> It's probably "dirhash" that is not enabled, or its cache is too small for the
> task.
I'm no expert, but I thought dirhash only improved read speed. His
bottleneck would be writes.
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/
On Wed, May 6, 2009 at 7:18 AM, Florian Smeets wrote:
> On 05.05.09 07:30, Mark Wong wrote:
>>
>> Hi everyone,
>>
>> We (PostgreSQL community) have an HP DL380 G5 that we were using to do
>> some very basic filesystem characterizations as part of a database
>> performance tuning project, so we wanted to give FreeBSD a try out of the box.
On Wed, 2009-05-06 at 16:15 +0300, Arkadi Shishlov wrote:
> It's probably "dirhash" that is not enabled, or its cache is too small for the
> task.
$ sysctl -a | grep dirhash
UFS dirhash 1262 286K - 9715683 16,32,64,128,256,512,1024,2048,4096
vfs.ufs.dirhash_docheck: 0
vfs.ufs.dirhash_mem:
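If the cache really is the limit, the dirhash memory ceiling can be raised on
the fly; the value below is only an example, not a recommendation:
$ sysctl vfs.ufs.dirhash_maxmem=67108864    # let dirhash grow to 64 MB
# add the same line (without the leading "sysctl") to /etc/sysctl.conf to keep it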
On Wed, May 6, 2009 at 2:25 AM, Anthony Pankov wrote:
> Hello Mark,
>
> May I ask a question while more experienced people are waking up?
>
> I don't fully understand the target. What should the filesystem be
> optimized for?
>
> I expect patterns of recorded IO calls when pgsql performs typical
> operations with statistics and in-depth analysis.
On 05.05.09 07:30, Mark Wong wrote:
Hi everyone,
We (PostgreSQL community) have an HP DL380 G5 that we were using to do
some very basic filesystem characterizations as part of a database
performance tuning project, so we wanted to give FreeBSD a try out of
the box. For this set of data we used 7
It's probably "dirhash" that is not enabled, or its cache is too small for the
task.
Thanks for your answer, Bill! (and to Will as well)
Some more info I gathered a few minutes ago:
[~/templates_c]$ date; du -s -m ; date
Wed May 6 13:35:15 CEST 2009
2652 .
Wed May 6 13:52:36 CEST 2009
[~/templates_c]$ date ; find . | wc -l ; date
Wed May 6 13:52:56 CEST 2009
30546
In response to Olivier Mueller :
> Hello,
>
> $ df -m ; date ; rm -r templates_c ; df -m ; date
> Filesystem  1M-blocks   Used  Avail Capacity  Mounted on
> /dev/da0s1a       989     45    864       5%  /
> /dev/da0s1f    128631 102179  16160      86%  /usr
> [...]
> Wed May 6 00:23:01 CEST 2009
>
Hello,
$ df -m ; date ; rm -r templates_c ; df -m ; date
Filesystem  1M-blocks   Used  Avail Capacity  Mounted on
/dev/da0s1a       989     45    864       5%  /
/dev/da0s1f    128631 102179  16160      86%  /usr
[...]
Wed May 6 00:23:01 CEST 2009
Filesystem 1M-blocks Used Avail Capacity Mounted
Hello Mark,
May I ask a question while more experienced people are waking up?
I don't fully understand the target. What should the filesystem be
optimized for?
I expect patterns of recorded IO calls when pgsql performs typical
operations with statistics and in-depth analysis.
Are you sure there is