At 12:59 -0400 2/10/07, Ross S. W. Walker wrote:
Try running the same benchmark but use bs=4k and count=1048576
Just finished doing that now - comparison graphs are here:
http://community.novacaster.com/showarticle.pl?id=7492
While these tests are running can you run any processes on another
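The bs=4k/count=1048576 run Ross suggests presumably looks something like the dd sequence below (paths are illustrative; the original count works out to 4 GiB, enough to defeat the box's 4 GB of page cache, and is scaled down here so the sketch finishes quickly):

```shell
# The thread's figures (bs=4k count=1048576) write 4 GiB; scaled to
# 64 MiB here so the example runs fast.
dd if=/dev/zero of=/tmp/ddtest.img bs=4k count=16384 conv=fdatasync
# Read it back. For an honest read figure, drop the page cache first
# (root only): echo 3 > /proc/sys/vm/drop_caches
dd if=/tmp/ddtest.img of=/dev/null bs=4k
rm -f /tmp/ddtest.img
```

dd reports elapsed time and throughput on stderr, which is what the comparison graphs would be built from.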
At 13:49 -0400 2/10/07, Ross S. W. Walker wrote:
Sounds like the issue is more of a CPU issue than a disk issue, so
just upgrading the hardware and OS should make a big difference in
itself,
Yeah, that was the plan :-) Basically, we worked out what we needed
to do (alleviate peak load CPU bott
What's the best multi-threaded / multi-process IO benchmark utility that
works with filesystems instead of raw devices, and can read/write
multiple files at once?
http://untroubled.org/benchmarking/2004-04/
No raw numbers but...
___
CentOS mailing
At 13:03 -0400 2/10/07, Ross S. W. Walker wrote:
Have you tried calculating the performance of your current drives on
paper to see if it matches your "reality"? It may just be that your
disks suck...
They're performing to spec for 7200rpm SATA II drives - your help in
determining which was the
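The on-paper calculation Ross refers to can be sketched as below. The seek figure is a typical 7200 rpm SATA datasheet number, not one measured from these drives:

```shell
# Rough random-IO ceiling for one 7200 rpm spindle:
# service time ~= average seek + half a rotation.
awk 'BEGIN {
  rot_ms  = (60000 / 7200) / 2      # ~4.17 ms average rotational latency
  seek_ms = 8.5                     # typical datasheet average seek
  iops    = 1000 / (rot_ms + seek_ms)
  printf "service time %.2f ms -> ~%d random IOPS/spindle\n", rot_ms + seek_ms, iops
  printf "4k random throughput ~%d KB/s\n", iops * 4
}'
```

If measured numbers land in that neighbourhood, the disks are "performing to spec" and the bottleneck is elsewhere.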
At 12:41 -0400 2/10/07, Ross S. W. Walker wrote:
If the performance issue is identical to the kernel bug mentioned
in the posting then the only real fix that was mentioned was to
switch to 32bit from 64bit or to down-rev your kernel, which on
CentOS means to go down to 4.5 from 5.0.
The irony i
What is the recurring performance problem you are seeing?
Pretty much exactly the symptoms described in
http://bugzilla.kernel.org/show_bug.cgi?id=7372 relating to read
starvation under heavy write IO causing sluggish system response.
I recently graphed the blocks in/blocks out from vmstat 1
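A minimal way to pull those two columns out of a captured `vmstat 1` log for graphing (sample output inlined here so the sketch is self-contained; bi and bo are fields 9 and 10 in procps vmstat of this era):

```shell
# Normally: vmstat 1 600 > /tmp/vmstat.log  -- sample data inlined instead.
cat > /tmp/vmstat.log <<'EOF'
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  1      0 102400  51200 204800    0    0   120 48000  900 1200  5 10 20 65  0
 1  2      0  98304  51200 204800    0    0     8 52000  950 1300  4 12 15 69  0
EOF
# Skip the two header lines, emit "sample-number bi bo" for plotting:
awk 'NR > 2 { print NR-2, $9, $10 }' /tmp/vmstat.log
rm -f /tmp/vmstat.log
```

Feeding the three-column output to gnuplot or a spreadsheet reproduces the kind of blocks-in/blocks-out graph described.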
At 09:24 -0400 2/10/07, Ross S. W. Walker wrote:
Actually the real-real fix was to use the 'deadline' or 'noop' scheduler
with this card as the default 'cfq' scheduler was designed to work with
a single drive and not a multiple drive RAID, so it acts as a governor on
the amount of IO that a single
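Switching elevators per-device doesn't need a reboot; a sketch (sdb stands in for the 3ware unit, and the writes need root):

```shell
# Show the active elevator for the array device (sdb is illustrative;
# brackets mark the scheduler currently in use):
cat /sys/block/sdb/queue/scheduler
# Switch to deadline at runtime (takes effect immediately):
echo deadline > /sys/block/sdb/queue/scheduler
# Or make it the default at boot by adding to the kernel command line:
#   elevator=deadline
```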
At 12:30 +0200 2/10/07, matthias platzer wrote:
What I did to work around them was basically switching to XFS for
everything except / (3ware say their cards are fast, but only on
XFS) AND using very low nr_requests for every blockdev on the 3ware
card.
Hi Matthias,
Thanks for this. In my C
hello,
i saw this thread a bit late, but I had/am having the exact same issues
on a dual-2-core-cpu opteron box with a 9550SX. (Centos 5 x86_64)
What I did to work around them was basically switching to XFS for
everything except / (3ware say their cards are fast, but only on XFS)
AND using ve
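The nr_requests part of that workaround is a one-line sysfs write per blockdev; the value below is illustrative, since matthias's exact figure is cut off in the post:

```shell
# Default queue depth on these kernels is 128; "very low" might mean
# something like 32 (sdb illustrative; needs root):
echo 32 > /sys/block/sdb/queue/nr_requests
cat /sys/block/sdb/queue/nr_requests
```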
At 12:01 -0400 26/9/07, Ross S. W. Walker wrote:
CFQ is intended for single disk workstations and its io limits are
based on that, so it actually acts as an io governor on RAID setups.
Only use 'cfq' on single disk workstations.
Use 'deadline' on RAID setups and servers.
Many thanks Ross, tha
At 09:14 -0400 26/9/07, Ross S. W. Walker wrote:
Could you try the benchmarks with the 'deadline' scheduler?
OK, these are all with RHEL5, driver 2.26.06.002-2.6.18, RAID 1:
elevator=deadline:
Sequential reads:
| 2007/09/26-16:19:30 | START | 3065 | v1.2.8 | /dev/sdb | Start
args: -B 4k -h 1
At 13:26 -0400 25/9/07, Ross S. W. Walker wrote:
Off of 3ware's support site I was able to download and compile the
latest stable release which has this modinfo:
[EMAIL PROTECTED] driver]# modinfo 3w-9xxx.ko
filename: 3w-9xxx.ko
version: 2.26.06.002-2.6.18
OK, driver source from t
At 10:36 -0400 25/9/07, Ross S. W. Walker wrote:
Post the modinfo to the list just in case somebody
else knows of any issues with the version you are running.
This is from RHEL5 - it's the driver that comes built-in:
[EMAIL PROTECTED] ~]# modinfo 3w-9xxx
filename: /lib/modules/2.6.18-8.
At 13:35 -0400 24/9/07, Ross S. W. Walker wrote:
Ok, so here is the command I would use:
Thanks - here are the results (tried CentOS 4.5 and RHEL5, with tests
on sdb when configured as both RAID 0 and as RAID 1):
Sequential reads:
disktest -B 4k -h 1 -I BD -K 4 -p l -P T -T 300 -r /dev/sdX
At 10:04 -0400 24/9/07, Ross S. W. Walker wrote:
How about trying your benchmarks with the 'disktest' utility from the
LTP (Linux Test Project),
Now fetched and installed - I'd be grateful for a suggestion as to an
appropriate disktest command line for a 4GB RAM twin CPU box with
250GB RAID 1
At 07:46 +0800 24/9/07, Feizhou wrote:
... plus an Out of Memory kill of sshd. Second time around (logged
in on the console rather than over ssh), it's just the same except
it's hald that happens to get clobbered instead.
Are you saying that running in RAID0 mode with this card and
motherboar
At 17:34 +0800 14/9/07, Feizhou wrote:
..oh.. do you have a BBU for your write cache on your 3ware board?
Not installed, but the machine's on a UPS.
Ugh. The 3ware code will not give OK then until the stuff has hit disk.
Having now installed BBUs, it's made no difference to the underlying
Is there any way to tell the card to forget about not having a BBU
and behave as if it did?
Short of modifying the code...I do not know of any.
Well, I've now got BBUs on order for the three identical machines to
see if that does anything to improve matters - I'll report back when
I've fitted
At 08:18 +0800 15/9/07, Feizhou wrote:
Is there any way to tell the card to forget about not having a BBU
and behave as if it did?
Short of modifying the code...I do not know of any.
Well, I've now got BBUs on order for the three identical machines to
see if that does anything to improve mat
At 11:16 -0400 14/9/07, Ross S. W. Walker wrote:
Yes, a write-back cache with a BBU will definitely help, also your config,
The write-cache is enabled, but what I've not known up to now is that
the absence of a BBU will impact IO performance in this way - which
seems to be what you and Feizho
At 23:07 +0800 14/9/07, Feizhou wrote:
Well, I do not think it will help much with a larger journal...you
want RAM speed, not single 250GB SATA disk speed.
Right now, I'd be happy with being able to configure the 3Ware care
as a plain old SATA II passthru interface and do software RAID1 with
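That passthru-plus-software-RAID wish would look roughly like the sketch below, assuming the card can export each disk as a single-drive unit (device names are hypothetical):

```shell
# Mirror two single-disk units in software instead of on the card:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext3 /dev/md0
```

With md RAID1 the write-back/BBU question moves from the controller firmware into the kernel, which is presumably the point of the wish.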
At 09:41 -0400 14/9/07, Ross S. W. Walker wrote:
Try getting another identical 3ware card and swapping them. If it
produces the same problem, then try putting that card in another
box with a different motherboard to see if it works then.
I've got three identical machines here - two as yet not u
At 15:43 +0200 14/9/07, Sebastian Walter wrote:
Simon Banton wrote:
> No, I haven't. This is 3ware hardware RAID-1 on two disks with a
single LVM ext3 / partition - I'm afraid I don't know how to go about
discovering the chunk size to plug into Ross's calcs.
You can see the chunk size eithe
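Once the chunk size is known, the stride Ross's calcs need follows by simple division; the figures below are illustrative (64 KiB chunk, 4 KiB ext3 blocks), not taken from Simon's array:

```shell
CHUNK_KB=64   # illustrative RAID chunk (stripe) size
BLOCK_KB=4    # ext3 block size
STRIDE=$((CHUNK_KB / BLOCK_KB))
echo "mkfs stride: $STRIDE"
# The resulting mkfs invocation (destructive, so commented out):
#   mkfs.ext3 -b 4096 -E stride=$STRIDE /dev/sdb1
```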
At 08:09 -0400 14/9/07, Jim Perrin wrote:
Have you done any filesystem optimization and tried matching the
filesystem to the raid chunk size?
No, I haven't. This is 3ware hardware RAID-1 on two disks with a
single LVM ext3 / partition - I'm afraid I don't know how to go about
discovering the
On 9/14/07, Simon Banton <[EMAIL PROTECTED]> wrote:
> I see where you're going with larger journal idea and I'll give that a go.
Have you done any filesystem optimization and tried matching the
filesystem to the raid chunk size? A while back Ross had a very good
email thread regarding this. Some o
At 17:34 +0800 14/9/07, Feizhou wrote:
..oh.. do you have a BBU for your write cache on your 3ware board?
Not installed, but the machine's on a UPS.
I see where you're going with larger journal idea and I'll give that a go.
Cheers
S.
Hmm, how are you creating your ext3 filesystem(s) that you test on?
Try creating it with a large journal (maybe 256MB) and run it in
full journal mode.
The filesystem was created during the initial CentOS installation,
and I've tried it with ext2 which made no difference.
S.
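A large-journal, full-data-journaling ext3 of the kind suggested can be built like this (file-backed image so the sketch touches no real disk; 256 MB journal per the suggestion):

```shell
# Sparse 4 GiB image so mke2fs has room for a 256 MB journal:
dd if=/dev/zero of=/tmp/fs.img bs=1M count=1 seek=4095
mke2fs -q -F -j -J size=256 /tmp/fs.img
# On the real filesystem, full journaling is selected at mount time:
#   mount -o data=journal /dev/sdb1 /mnt
rm -f /tmp/fs.img
```

With data=journal every write hits the journal first, so a big journal can absorb bursts, which is presumably the reasoning behind the suggestion.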
At 20:52 +0800 13/9/07, Feizhou wrote:
Well, the first thing I noted was that the H8DA8 was not on the list
of compatible motherboards on the 3ware website.
I challenged the vendor about that quite early on and was told that
they've used this combo before with no trouble, though I've yet to
p
Dear list,
I thought I'd just share my experiences with this 3Ware card, and see
if anyone might have any suggestions.
System: Supermicro H8DA8 with 2 x Opteron 250 2.4GHz and 4GB RAM
installed. 9550SX-8LP hosting 4x Seagate ST3250820SV 250GB in a RAID
1 plus 2 hot spare config. The array is