Re: [SOLVED] RAID Performance Questions

2007-01-25 Thread Milo Hyson
A quick call to 3ware, and they told me to increase vfs.read_max from
8 to 256. That helped: I'm now seeing roughly 4x single-drive performance
on the four-drive array. Ivan was also right about the database being too
small; iostat showed no disk activity after the initial run, as everything
was cached in memory.
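
For anyone else who hits this: the tunable can be changed on the fly and
made permanent across reboots. (The sysctl name is real; 256 is simply the
value 3ware suggested here.)

  # take effect immediately
  sysctl vfs.read_max=256

  # persist across reboots
  echo 'vfs.read_max=256' >> /etc/sysctl.conf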


--
Milo Hyson
CyberLife Labs



Re: RAID Performance Questions

2007-01-25 Thread Ivan Voras
Milo Hyson wrote:

> I also ran some performance tests with a stock build of PostgreSQL 8.0
> to get a different angle on things. Two tests were run on each of the
> UDMA system drive, the RAID 5 unit, and the RAID 10 unit. The first
> tested sequential-scans through a 58,000+ record table. The second
> tested random index-scans of the same table. These were read-only tests
> -- no write tests were performed. The results are as follows:
> 
> Unit      Seq/sec    Index/sec
> ------------------------------
> single      0.550     2048.983
> raid5       0.533     2063.900
> raid10      0.533     2093.283


58,000 records is WAY too small for any benefits to show up, unless the
records are very large ("wide"). The database and the OS will cache as
much data as they can -- with such a small number of records it's very
probable they will all be cached and the drives won't see any I/O (and
it's lucky for you that it works this way). You can verify this
hypothesis with iostat and similar utilities.
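
For example, something along these lines will show whether the drives see
any traffic while the queries run (the device names are assumptions --
adjust them to whatever your system reports):

  # sample the 3ware units (da0, da1) and the UDMA system disk (ad0) once a second
  iostat -w 1 da0 da1 ad0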

This is also something you'll need to consider: unless you have more
data than fits in memory, don't bother with the drives. When your
data DOES grow enough that it no longer fits in memory (or rather, when
the most frequently accessed parts of it no longer fit), you'll take a
dramatic performance hit that you can fix only with a large number of
drives (as others said -- no fewer than 5 fast drives to get any kind of
decent performance). In that case, it's far cheaper and faster to add as
much memory as the motherboard can handle before even touching the drives.
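
As a rough check of whether the data even fits in RAM, compare the on-disk
size of the database against physical memory. (pg_database_size() is built
in from PostgreSQL 8.1 on; on 8.0 the contrib dbsize module provides an
equivalent. The database name below is just a placeholder.)

  # physical memory, in bytes
  sysctl hw.physmem

  # on-disk size of the database
  psql -d mydb -c "SELECT pg_database_size('mydb');"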

(The above explanation holds for read-mostly loads. For write-intensive
loads, go straight to the 5+ drive option and try to avoid RAID 5.)





Re: RAID Performance Questions

2007-01-25 Thread Milo Hyson

On Jan 25, 2007, at 13:50, Jeff Mohler wrote:


How about one large RAID, and two partitions to serve each purpose?

Being so limited in hardware, you're either going to take a _huge_
performance hit with only two disks per RAID set (unless it's RAID 0),
or an availability hit with everything on one RAID set.


I suppose availability is more important than performance. However,
reliability is even more important. My big concern with a single,
large array is that, should the array become corrupted for any reason,
I'll lose both the live data and the backup. Also, in that configuration,
the backup becomes little more than transparent revision control.


Another factor to consider (which I should have mentioned earlier) is  
that with the exception of any locally hosted databases, this is a  
NAS box. All data access will be via FastEthernet, and that's more of  
a bottleneck than any disks I have.
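
(For perspective: FastEthernet tops out at 100 Mbit/s, which is roughly
12 MB/s of payload, while even the single UDMA drive measured around
44 MB/s in the block tests -- so the network saturates long before the
disks do.)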


--
Milo Hyson
CyberLife Labs



Re: RAID Performance Questions

2007-01-25 Thread Jeff Mohler

How about one large RAID, and two partitions to serve each purpose?

Being so limited in hardware, you're either going to take a _huge_
performance hit with only two disks per RAID set (unless it's RAID 0),
or an availability hit with everything on one RAID set.

But considering the cost of adding RAID to a server, take a peek here
for a higher-performance RAID solution. If FreeBSD had an iSCSI layer
like Linux has had for five years or so, this would be a slam dunk,
since you could still serve it as block data.

If that can't help you, it might help the next guy with only a few
thousand dollars to spend on large disks and RAID controllers.

On 1/25/07, Milo Hyson <[EMAIL PROTECTED]> wrote:

On Jan 25, 2007, at 12:15, Chuck Swiger wrote:

> Still, you also ought to consider that a 3-disk RAID-5
> configuration is very much not ideal from either an efficiency or
> performance standpoint-- you want more like 5 or 6 drives being
> used, in which case your performance numbers ought to increase
> some.  This is also somewhat true of the 4-disk RAID-10 config;
> using 6 or all 8 drives would likely improve performance compared
> with striping against only two disks.

Unfortunately, I'm a bit limited in terms of equipment and
application requirements. For starters, the app specs currently call
for two arrays: one for general file-serving and databases, and the
other for backups. Due to limited hardware I'm to run both on the
same controller. Far from ideal, I know, but it's what I have.
Second, I need to keep at least one drive as a hot-spare. Thus, I
have seven drives that I somehow need to partition into two groups
and maximize performance without sacrificing reliability. Lastly, the
RAID controller does not permit more than two drives in a RAID-1 set.

Any suggestions?

--
Milo Hyson
CyberLife Labs




Re: RAID Performance Questions

2007-01-25 Thread Milo Hyson

On Jan 25, 2007, at 12:15, Chuck Swiger wrote:

Still, you also ought to consider that a 3-disk RAID-5  
configuration is very much not ideal from either an efficiency or  
performance standpoint-- you want more like 5 or 6 drives being  
used, in which case your performance numbers ought to increase  
some.  This is also somewhat true of the 4-disk RAID-10 config;  
using 6 or all 8 drives would likely improve performance compared  
with striping against only two disks.


Unfortunately, I'm a bit limited in terms of equipment and  
application requirements. For starters, the app specs currently call  
for two arrays: one for general file-serving and databases, and the  
other for backups. Due to limited hardware I'm to run both on the  
same controller. Far from ideal, I know, but it's what I have.  
Second, I need to keep at least one drive as a hot-spare. Thus, I  
have seven drives that I somehow need to partition into two groups  
and maximize performance without sacrificing reliability. Lastly, the  
RAID controller does not permit more than two drives in a RAID-1 set.


Any suggestions?

--
Milo Hyson
CyberLife Labs


Re: RAID Performance Questions

2007-01-25 Thread Chuck Swiger

On Jan 25, 2007, at 10:50 AM, Milo Hyson wrote:
The write times of both RAID configurations are slower than the  
single drive (which is expected due to having to write to multiple  
drives). However, I wasn't expecting such a drastic reduction  
(about 50%). The read times, although faster, are only marginally  
so in per-char transfer. They're a bit better in block performance,  
but still not what I would expect. It would seem to me that a read  
spread across four drives should see more than a 45% performance  
increase. The highest rate recorded here is only a quarter of the  
PCI bus-speed, so I doubt that's a bottleneck. CPU load peaks at  
50%, so I don't see that being a problem either.


Single-byte accesses are a worst-case scenario for RAID throughput;
the block rates are generally more applicable to the performance
you'll see for decently written applications and many use-case
scenarios.  If you've got a UPS or battery-backup option for the RAID
card enabled, consider turning on write-back mode rather than
write-through mode, which ought to improve write performance pretty
significantly.
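
If memory serves, the 3ware 9000-series tw_cli can toggle this per unit.
Treat the exact syntax below as an assumption and check tw_cli(8) and
"tw_cli /c0 show" for the real controller/unit names before running it:

  tw_cli /c0/u0 show           # current settings, including cache mode
  tw_cli /c0/u0 set cache=on   # enable write-back caching (only with a BBU/UPS)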


Still, you also ought to consider that a 3-disk RAID-5 configuration  
is very much not ideal from either an efficiency or performance  
standpoint-- you want more like 5 or 6 drives being used, in which  
case your performance numbers ought to increase some.  This is also  
somewhat true of the 4-disk RAID-10 config; using 6 or all 8 drives  
would likely improve performance compared with striping against only  
two disks.


I also ran some performance tests with a stock build of PostgreSQL  
8.0 to get a different angle on things.

[ ... ]
Any performance benefit of RAID in these tests is almost  
nonexistent.  Am I doing something wrong?  Am I expecting too much?  
Any advice that can be offered in this area would be much appreciated.


Most databases dislike any form of RAID except plain old RAID-1  
mirroring, but absolutely hate RAID-5.  Databases can do OK with big  
RAID-10 combinations, too, but ask any experienced DBA what they'd  
like, and they'd rather have as many RAID-1 spindles available as  
possible compared with any other drive arrangement.


--
-Chuck



Re: RAID Performance Questions

2007-01-25 Thread Martin Hepworth

Milo,

If you hunt around you should find papers/articles showing that for RAID 5
you need at least five drives before you see any dramatic performance gains
(some old Sun articles from around 1998 do the math as well).

I'm not sure about RAID 10, but again I *think* you need at least three
drives in the stripe before you start seeing gains.

For the best test, I'd put ALL the SATA drives into the RAID 5 or RAID 10
array and then see what happens.

--
Martin

On 1/25/07, Milo Hyson <[EMAIL PROTECTED]> wrote:


I don't really have a whole lot of experience with RAID, so I was
wondering if the performance figures I'm seeing are normal or if I
just need to tweak things a bit. Based on what I've been reading, I
would expect more significant improvements over a single drive.
Here's my setup:

* FreeBSD 5.4-RELEASE-p22
* AMD Athlon 2200+
* 512 MB RAM
* 3ware 9500S-8 RAID controller
* 8 x Maxtor 7Y250M0 drives (SATA150 - 250 GB each)
* 1 x UDMA100 system drive

I'm using a trimmed-down but otherwise stock kernel (see below). The
array is configured as two units: a three-drive RAID 5 and a four-
drive RAID 10. Both units have been fully initialized and verified.
No errors or warnings are being issued by the controller --
everything is green. Using bonnie I get the following results with a
1.5 GB file:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
single   1536 42229 45.1 44379 19.4 17227  7.7 40819 41.6 44772 12.1 141.1  0.7
raid5    1536 21812 22.8 21876  8.7 12935  5.9 47283 48.3 61998 17.0 152.8  0.8
raid10   1536 21905 23.0 21999  8.6 14878  6.7 49036 50.1 64847 17.7 130.6  0.7

The write times of both RAID configurations are slower than the
single drive (which is expected due to having to write to multiple
drives). However, I wasn't expecting such a drastic reduction (about
50%). The read times, although faster, are only marginally so in per-
char transfer. They're a bit better in block performance, but still
not what I would expect. It would seem to me that a read spread
across four drives should see more than a 45% performance increase.
The highest rate recorded here is only a quarter of the PCI bus-
speed, so I doubt that's a bottleneck. CPU load peaks at 50%, so I
don't see that being a problem either.

I also ran some performance tests with a stock build of PostgreSQL
8.0 to get a different angle on things. Two tests were run on each of
the UDMA system drive, the RAID 5 unit, and the RAID 10 unit. The
first tested sequential-scans through a 58,000+ record table. The
second tested random index-scans of the same table. These were read-
only tests -- no write tests were performed. The results are as follows:

Unit      Seq/sec    Index/sec
------------------------------
single      0.550     2048.983
raid5       0.533     2063.900
raid10      0.533     2093.283

Any performance benefit of RAID in these tests is almost nonexistent.
Am I doing something wrong? Am I expecting too much? Any advice that
can be offered in this area would be much appreciated.

Here is my kernel config (the twa driver is loaded as a module):

[ ... ]

RAID Performance Questions

2007-01-25 Thread Milo Hyson
I don't really have a whole lot of experience with RAID, so I was  
wondering if the performance figures I'm seeing are normal or if I  
just need to tweak things a bit. Based on what I've been reading, I  
would expect more significant improvements over a single drive.  
Here's my setup:


* FreeBSD 5.4-RELEASE-p22
* AMD Athlon 2200+
* 512 MB RAM
* 3ware 9500S-8 RAID controller
* 8 x Maxtor 7Y250M0 drives (SATA150 - 250 GB each)
* 1 x UDMA100 system drive

I'm using a trimmed-down but otherwise stock kernel (see below). The  
array is configured as two units: a three-drive RAID 5 and a four-drive
RAID 10. Both units have been fully initialized and verified.
No errors or warnings are being issued by the controller --  
everything is green. Using bonnie I get the following results with a  
1.5 GB file:


              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
single   1536 42229 45.1 44379 19.4 17227  7.7 40819 41.6 44772 12.1 141.1  0.7
raid5    1536 21812 22.8 21876  8.7 12935  5.9 47283 48.3 61998 17.0 152.8  0.8
raid10   1536 21905 23.0 21999  8.6 14878  6.7 49036 50.1 64847 17.7 130.6  0.7
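
For reference, the runs were essentially bonnie with a 1.5 GB file per
unit; something like the following (the mount points and labels here are
only illustrative):

  bonnie -s 1536 -d /mnt/single -m single
  bonnie -s 1536 -d /mnt/raid5  -m raid5
  bonnie -s 1536 -d /mnt/raid10 -m raid10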


The write times of both RAID configurations are slower than the  
single drive (which is expected due to having to write to multiple  
drives). However, I wasn't expecting such a drastic reduction (about  
50%). The read times, although faster, are only marginally so in per-char
transfer. They're a bit better in block performance, but still
not what I would expect. It would seem to me that a read spread
across four drives should see more than a 45% performance increase.
The highest rate recorded here is only a quarter of the PCI bus
speed, so I doubt that's a bottleneck. CPU load peaks at 50%, so I
don't see that being a problem either.


I also ran some performance tests with a stock build of PostgreSQL  
8.0 to get a different angle on things. Two tests were run on each of  
the UDMA system drive, the RAID 5 unit, and the RAID 10 unit. The  
first tested sequential-scans through a 58,000+ record table. The  
second tested random index-scans of the same table. These were read-only
tests -- no write tests were performed. The results are as follows:


Unit      Seq/sec    Index/sec
------------------------------
single      0.550     2048.983
raid5       0.533     2063.900
raid10      0.533     2093.283
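
Roughly speaking, the two cases look like this, where the database, table
and column names are only illustrative:

  # forces a full sequential scan of the table
  psql -d mydb -c "EXPLAIN ANALYZE SELECT count(*) FROM test_table;"

  # a single-row lookup through an index
  psql -d mydb -c "EXPLAIN ANALYZE SELECT * FROM test_table WHERE id = 12345;"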

Any performance benefit of RAID in these tests is almost nonexistent.  
Am I doing something wrong? Am I expecting too much? Any advice that  
can be offered in this area would be much appreciated.


Here is my kernel config (the twa driver is loaded as a module):

machine i386
cpu I686_CPU
ident   NAS-20070124

options SCHED_4BSD                  # 4BSD scheduler
options INET                        # InterNETworking
options FFS                         # Berkeley Fast Filesystem
options SOFTUPDATES                 # Enable FFS soft updates support
options UFS_ACL                     # Support for access control lists
options UFS_DIRHASH                 # Improve performance on big directories
options NFSCLIENT                   # Network Filesystem Client
options NFSSERVER                   # Network Filesystem Server
options CD9660                      # ISO 9660 Filesystem
options PROCFS                      # Process filesystem (requires PSEUDOFS)
options PSEUDOFS                    # Pseudo-filesystem framework
options COMPAT_43                   # Compatible with BSD 4.3 [KEEP THIS!]
options COMPAT_FREEBSD4             # Compatible with FreeBSD4
options SCSI_DELAY=15000            # Delay (in ms) before probing SCSI
options SYSVSHM                     # SYSV-style shared memory
options SYSVMSG                     # SYSV-style message queues
options SYSVSEM                     # SYSV-style semaphores
options _KPOSIX_PRIORITY_SCHEDULING # POSIX P1003_1B real-time extensions
options ADAPTIVE_GIANT              # Giant mutex is adaptive.

device  apic# I/O APIC

# Bus support.  Do not remove isa, even if you have no isa slots
device  isa
device  pci

# ATA and ATAPI devices
device  ata
device  atadisk # ATA disk drives
device  atapicd # ATAPI CDROM drives
options ATA_STATIC_ID   # Static device numbering

# SCSI support
device  scbus   # SCSI bus (required for SCSI)
device  da  # Direct Access (disks)

# atkbdc0 controls both the keyboard and the PS/2 mouse
device  atkbdc  # AT keyboard controller
device  atkbd   # AT keyboard

device  vga # VGA video card driver

# syscons is the default console driver,