RE: RAID5 question, take 2

2001-06-11 Thread Christopher Spence

This actually depends on the drives: if they are Cheetahs, yes, you will be
pinched, but if they are anything else, perhaps not.
I would hope they are, but it is quite possible they are not.


"Walking on water and developing software from a specification are easy if
both are frozen."

Christopher R. Spence
Oracle DBA
Fuelspot 




RE: RAID5 question, take 2

2001-06-11 Thread Gary Weber

Mogens, the supermarket analogy does not apply - this is for a SQL Server
database. I'm not sure how far I'll be able to tweak that RDBMS, hence my
question did not contain many details - it was simply a request for
opinions.

Btw, to sum up the current responses:
Option 1: split the 9 drives to separate data and index I/O.
Option 2: stripe everything across all 9 drives for better throughput.

So, methinks the Windoz admin is going to try both ways and monitor I/O...

Thanks to Paul, Jared, Christopher for great input,

Gary Weber
Senior DBA
Charles Jones, LLC
609-530-1144, ext 5529


Re: RAID5 question, take 2

2001-06-10 Thread Mogens Nørgaard

Indeed, Paul. Very good points.

Gary - you're asking us to determine the number of bags we'll need at the
supermarket without knowing what we're going to buy. If we had I/O stats for your
datafiles/tablespaces, i.e. reads/writes and their sizes, and your availability
requirements for the system, we could tell you more.


Re: RAID5 question, take 2

2001-06-10 Thread Paul Drake

Gary,

Here is where we have to know more details.

A 9-drive array on a single channel sounds like your peak I/O rate for
reads would be throttled by the controller channel speed. Now, if the
SCSI interface is Ultra160/m and the drives support a sustained rate of
20 MB/sec each, you're not pinched. But if the RAID controller interface is
FC, at only 100 MB/sec, you're going to be seriously pinched during
index range scans, fast full scans, and full table scans - bulk reads.

Are you using fine-grained striping - such that an FTS will use the
multiblock_read_count and will hit all 8 drives (net)?
What are your:
db_block_size
multiblock_read_count
OS I/O size

If your OS I/O size is 128 KB
and your db_block_size is 16 KB,
then a multiblock_read_count of 8
and a stripe size of 128 KB - i.e. a 16 KB depth per stripe member
(as the parity drive is ignored) -
mean that each member in the stripe contributes one block to each read request
for an FTS.
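
The same arithmetic, as a small sketch (only the 128 KB / 16 KB / 8-member
figures come from this post; the rest is illustrative):

    # Stripe geometry for one full-table-scan read, using the figures above.
    os_io_size_kb = 128      # one OS-level read request
    db_block_size_kb = 16    # Oracle block size
    data_members = 8         # 9-drive RAID 5, parity capacity ignored

    multiblock_read_count = os_io_size_kb // db_block_size_kb   # 8 blocks per read
    stripe_depth_kb = os_io_size_kb // data_members             # 16 KB per member
    blocks_per_member = stripe_depth_kb // db_block_size_kb     # 1 block each

    print(multiblock_read_count, stripe_depth_kb, blocks_per_member)   # 8 16 1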

The SAME methodology (Stripe And Mirror Everything) would imply that your OS I/O
size has been cranked up to 1 MB, and that your stripe size is also 1 MB. On an
FC interface, the transfer time for a 1 MB read would be 10 ms - on par with the
average seek time.
But SAME is not geared for RAID 5, as RAID 5 supports having the drive
heads out of sync to satisfy multiple independent requests concurrently.
SAME is geared more for RAID 0+1, where the drive heads in an array
move in unison, with all drives returning the results of one request at
a time.
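
The 10 ms figure is just the division (a sketch under the thread's assumed
100 MB/sec FC rate and a typical average seek time of roughly 10 ms):

    # Transfer time for one 1 MB SAME-style read over an assumed 100 MB/sec FC link.
    io_size_mb = 1.0
    fc_rate_mb_s = 100.0
    avg_seek_ms = 10.0      # assumed average seek time

    transfer_ms = io_size_mb / fc_rate_mb_s * 1000.0   # 10 ms
    print(f"transfer {transfer_ms:.0f} ms vs seek {avg_seek_ms:.0f} ms")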

What do you want returned for your read requests - one db_block,
multiblock_read_count blocks, or 1 MB?
That will depend entirely on the access paths that are used in YOUR
application.


Basically, a 3-drive RAID 5 array is useless. Don't even consider it.
You would be better off with a single RAID 1 volume plus a hot spare. If I were
to break the 9 drives up (it would be as RAID 0+1 of 4 drives each),
it would be as a 5-drive and a 4-drive array (assuming that 2 channels
are available).

If most of the read requests are driven by an index - and only one
block is being requested - the 9-disk RAID 5 config is the way to go,
as seek time will dominate transfer time.
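
The same back-of-the-envelope math shows why seek time dominates for
single-block reads (figures are assumptions consistent with the rest of
the thread):

    # For an index-driven single-block read, transfer time is negligible
    # next to seek time, so spreading the work across all 9 spindles helps most.
    db_block_kb = 16
    fc_rate_mb_s = 100.0
    avg_seek_ms = 10.0                                           # assumed

    transfer_ms = db_block_kb / 1024.0 / fc_rate_mb_s * 1000.0   # ~0.16 ms
    print(f"seek {avg_seek_ms:.0f} ms vs transfer {transfer_ms:.2f} ms per block")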

just my opinion.

Paul



Gary Weber wrote:
> 
> The reply below was a great post! As were replies prior to it. But, none of
> the replies were for the original question.
> 
> The issue at hand is not which RAID level to use, or whether to use RAID at all.
> 
> The question is, and I promise this is the very last time I post it: given 9
> hard drives dedicated for RAID5, should data reside on 6 drives via volume
> group A and indexes on the other 3 drives via volume group B, or should data
> and indexes be placed on all 9 drives via one volume group? The data is
> absolutely static.
> 
> Gary
> 
> - Original Message -
> To: "Multiple recipients of list ORACLE-L" <[EMAIL PROTECTED]>
> Sent: Sunday, June 10, 2001 7:15 PM
> 
> > Since RAID5 means that data is striped, of course read performance is OK. As
> > soon as you talk write performance, however, RAID5 becomes something of a
> > joke, since it was invented back in the 70's to offer a cheap alternative to
> > the fast, extremely expensive disks offered by IBM back then. So the focus was
> > on limiting the number of disks. Today, where disks in general are cheap and
> > caches are expensive, I really have a hard time figuring out why people buy
> > RAID5 (few disks, cache required to compensate for the horrible write penalty)
> > instead of RAID1+0 (more disks, no cache required). And I have a hard time
> > figuring out why the vendors are pushing RAID5 solutions, if RAID1+0 means
> > selling more disks to the customers :-). The answer, of course, is that they
> > are making money on caches, not disks.
> >
> > Technically speaking, RAID1+0 will always be better than RAID5, of course.
> > Oh, they will try to compensate with caches and talk of RAID3 techniques and
> > what have you. RAID1+0 is still superior to RAID5 in any technical aspect.
> >
> > It becomes really absurd when you look at the SAN offerings on the market.
> > For instance, IBM's Shark only offers the customer the choice between JBOD
> > (Just a Bunch Of Disks, i.e., non-RAID) and RAID5. IBM has a Redbook out
> > regarding this, and on page 127 out of 228 or so you can read the headline:
> > "JBOD or RAID5?" - and that's when it dawns on you that Shark (which is very
> > expensive) cannot under any circumstances be configured for anything else
> > than RAID5 or non-RAID.
> > Workaround: place a file system on top that at least can be striped (Veritas,
> > for instance).
> >
> > EMC has a standard offering where they'll suggest RAID-S (S looks a lot like
> > 5, doesn't it?) and the standard answer if write performance is not good
> > enough is: "Add more cache". Well, we had a customer who reached 32 GB of
> > cache (not MB, mind you, but GB) and write performance was still bad (of
> > course) for restores and recovery operations and file copying and all those
> > things wh

Re: RAID5 question, take 2

2001-06-10 Thread Jared Still

On Sunday 10 June 2001 20:15, Gary Weber wrote:

> The question is, and I promise this is the very last time I post it: given
> 9 hard drives dedicated for RAID5, should data reside on 6 drives via
> volume group A and indexes on the other 3 drives via volume group B, or
> should data and indexes be placed on all 9 drives via one volume group? The
> data is absolutely static.
>
> Gary

Given the choices, I would consider whether the access is heavily weighted
towards full table scans.

If so, the single 9-disk array would sound good.

If it is more heavily weighted toward table access via indexes, then separating
the data and indexes would get the nod.
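
One hypothetical way to frame that decision from I/O statistics (the threshold
and the function below are purely illustrative, not from any tool):

    # Illustrative decision rule based on how reads split between
    # full scans (multiblock reads) and index lookups (single-block reads).
    def suggested_layout(scan_reads: int, index_reads: int) -> str:
        total = scan_reads + index_reads
        if total == 0:
            return "no data - measure first"
        if scan_reads / total >= 0.5:     # arbitrary threshold for the sketch
            return "one 9-drive volume group (throughput-bound full scans)"
        return "separate data and index volume groups (seek-bound index reads)"

    print(suggested_layout(scan_reads=800, index_reads=200))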

HTH

Jared