I beg to differ. I am not so sure that what you are
suggesting is even feasible in most VLDB environments
today. Imagine having to implement a 2TB hybrid
database with 500 datafiles on RAID 1: you would have
to individually maintain a large number of
"independent RAID 1 logical volumes". It would become
a management nightmare.

Agreed, you are trying to achieve I/O isolation, but
you are giving up a great deal of the "divide and
conquer" capability that a striped volume provides for
both reads and writes. If one disk drive supports 100
IOPS, then a 4-way striped volume on independent disks
will provide 400 IOPS, assuming your controller has
enough bandwidth. Implemented as RAID 1+0, the read
throughput is 800 IOPS, and a single drive failure
costs only 12.5% of the IOPS of that RAID 1+0 logical
volume. That loss is progressively halved each time
you double the "degree of striping".

If everything is implemented as RAID 1 in an
environment, then assuming 20 MB/sec transfer rates
and 100 IOPS per disk, you will never achieve more
than 40 MB/sec or 200 IOPS on any of your logical
volumes, and a single disk failure will cost 50% of
the IOPS of that RAID 1 logical volume.
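
To make those numbers concrete, here is a minimal
back-of-the-envelope sketch (Python, purely illustrative;
the function names and layout are mine). It assumes the
per-drive figures quoted above (100 IOPS, 20 MB/sec per
spindle) and ignores controller bandwidth, cache and
write penalties:

PER_DISK_IOPS = 100     # assumed per-spindle figures from above
PER_DISK_MBPS = 20

def raid1_read_capacity():
    # A plain RAID 1 volume is one mirrored pair; reads can be
    # serviced by either spindle, so the ceiling is 2 disks' worth.
    return 2 * PER_DISK_IOPS, 2 * PER_DISK_MBPS, 50.0

def raid10_read_capacity(stripe_width):
    # A RAID 1+0 volume striped across `stripe_width` mirrored
    # pairs can service reads from all 2 * stripe_width spindles.
    disks = 2 * stripe_width
    return (disks * PER_DISK_IOPS,
            disks * PER_DISK_MBPS,
            100.0 / disks)   # % of IOPS lost per failed spindle

print("RAID 1       : %4d IOPS, %3d MB/s, %4.1f%% lost per failure"
      % raid1_read_capacity())
for width in (2, 4, 8):
    print("RAID 1+0 (%dw): %4d IOPS, %3d MB/s, %4.1f%% lost per failure"
          % ((width,) + raid10_read_capacity(width)))

For the 4-way stripe this reproduces the 800 IOPS and
12.5% figures above, versus the 200 IOPS / 50% ceiling of
a single mirrored pair.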

Can you guarantee that this bandwidth (with or without
the loss of one disk drive) will always be sufficient?
What if one of the partitions in the partitioned table
grows much larger than the others? It is far easier to
deal with that growth at the I/O sub-system level than
to manually manage I/O on a per-mirrored-disk basis
and/or re-partition the table because one database
partition holds much more data than the others.

I am not suggesting that one give up physical placement
control entirely by configuring one logical volume
across all available drives (à la SAME). But we need to
strike a balance between what SAME suggests and a pure
"RAID 1 only" environment. With only RAID 1 volumes,
you may end up with partition-level segment hotspots
(assuming you have implemented table/index partitioning
as you suggest) and will have little leeway to do
anything about them.

Configuring multiple RAID 1+0 logical volumes provides
the required flexibility and "physical independence"
for balancing the I/O load on your system. That, along
with one or more RAID 3/5 logical volumes (as
appropriate) for the read-intensive components of your
database, should be strongly considered; maintaining
hundreds of logical volumes that are only mirrored may
not provide the necessary flexibility, throughput and
scalability for the I/O system. That in turn limits the
throughput of the database and, ultimately, of the
application. Striping is a significant feature of
today's disk sub-systems, and it would be a shame to
let it go to waste.


Best regards,


Gaja

--- Charlie Mengler <[EMAIL PROTECTED]> wrote:
> I believe RAID-1+0 has more cost than benefit.
> I want to see & measure total I/O's to each spindle.
> While RAID-1 has higher cost to maintain, it gives
> me maximum flexibility to evenly distribute I/O
> loads.
> I contend that with proper table partitioning I can
> achieve & sustain higher I/O rates with RAID-1 than
> can be gotten from RAID-1+0.
> 
> HTH & YMMV
> 
> HAND!
> 
> [EMAIL PROTECTED] wrote:
> > 
> > Thanks for all the replies.  We are determined to lay out
> > the data as well as we can across the disks we are about to
> > purchase - with the goal of striping across array groups and
> > smaller, faster drives.  The real argument for us is 18GB
> > vs. 73GB disk drives and how we can stripe.  The Hitachi is
> > configured into groups of 4 physical disks called "parity
> > groups" and you can choose RAID 5 or RAID 1+0 for that 4
> > disk set.  If you have 73GB drives in a 4-disk RAID 5
> > configuration you get roughly 219GB of usable space in each
> > parity group (this is what we are being told is the best
> > option for us).  This means our heavily concurrently
> > accessed 400GB production database goes on 2 parity groups
> > (2 sets of 4 disks).  To me, this sounds like a nightmare
> > waiting to happen and we are trying to stop it.  The 18GB
> > drives are less capacity but we can get ourselves spread
> > over more parity groups for better concurrency.  We do have
> > about 10GB of cache but it is being shared across the
> > enterprise with various other applications.  We as a DBA
> > group are really trying to sell the 18GB RAID 1+0 drive
> > solution especially after reading the groups' experiences -
> > unfortunately we are fighting a lot of marketing hype.
> > 
> > If anyone has additional experiences or feedback with
> > Hitachi or EMC they would like to share or comments
> > (agree/disagree) with my thoughts, I'd love to hear them.
> > I'm open for learning!
> > 
> > Thanks,
> > 
> > John Dailey
> > Oracle DBA
> > ING Americas - Application Services
> > Atlanta, GA
> > 
> > 
> > From: "Don Granaman" <granaman@cox.net>
> > Sent by: root@fatcity.com
> > To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
> > Subject: Re: disk subsystem performance question
> > Date: 04/10/2002 01:08 PM
> > Please respond to ORACLE-L
> > 
> > 
> > 
> > Short answer - NO!  Nobody's disk subsystem is so fast that
> > no intelligence is required in the layout.  This is common
> > vendor blather and one of the most popular myths.  I have
> > been hearing it for at least six years - and it still isn't
> > true.  Layout still makes a huge difference.  RAID levels
> > still make a huge difference.  Cache won't solve all your
> > problems (it does help though).  I've redone the disk layout
> > on some of the biggest, fastest fully-loaded with cache EMC
> > Syms available that had some "don't worry about it" layout
> > and seen database throughput go up by as much as 8x.
> > 
> > See Gaja's whitepaper on RAID at
> > http://www.quest.com/whitepapers/Raid1.pdf
> > 
> > Don Granaman
> > [certifiable oraSaurus]
> > 
> > ----- Original Message -----
> > To: "Multiple recipients of list ORACLE-L"
> >     <[EMAIL PROTECTED]>
> > Sent: Wednesday, April 10, 2002 10:38 AM
> > 
> > > Hi all,
> > >
> > > We are running both a Hitachi 7700E and a 9960 disk
> > > subsystem here and we are getting ready to move our
> > > production DBs from the old(7700E) to the new(9960)
> > > Hitachi.  We have had trouble in the past on the 7700E due
> > > to disk contention and layout, i.e. we weren't striped
> > > across the array groups very well.... this caused pretty
> > > poor I/O performance.  This has been a learning experience
> > > for the DBAs and the SAs here for the logical vs. physical
> > > aspects of our disks.  Anyway, to make a long story short,
> > > we are ordering disk for the move to the 9960 and we have
> > > 2 choices in disk sizes - 18GB and 73GB, and 2 choices in
> > > RAID - 1+0 and 5.  I would like to get the smaller, faster
> > > 18GB drives in a RAID 1+0 configuration and stripe our
> > > data across the array groups as wide as possible.
> > > However, I am running into objections from the Hitachi
> > > people that their system is "soooo fast we need not worry
> > > about such minor details".  I'm having a hard time
> > > believing that given our I/O problems on the 7700E.
> > > Performance is given a high priority here.
> > >
> > > What I would like to know is others' experience with disk
> > > subsystems - specifically Hitachi but EMC and others as
> > > well....  have you been able to "throw the disk in and
> > > forget it" or have you had success in getting to the dirty
> > > details?  Have you tested or noticed an improvement with
> > > smaller, faster drives in a disk subsystem like the
> > > Hitachi or have you traveled that path and found no
> > > noticeable improvement?  I'm looking for either a)
> > > ammunition that my view is correct, or b) I'm wrong and we
> > > can get bigger drives which will make Enterprise Planning
> > > very happy from a $$$ standpoint because our Hitachi
> > > capacity will last longer.
> > >
> > > We are running Oracle 8.1.7 / AIX 4.3.3 / Peoplesoft
> > > Financials version 8.  2 production databases, one 400 GB
> > > and the other about 1TB.  We've got some other production
> > > DBs but these are our big guys.
> > >
> > > Thanks in advance for any and all input - any help is
> > > greatly appreciated.  I'd be happy to share any info we
> > > have found up to this point and our experiences on the
> > > 7700E as well if anyone is interested - despite the fact I
> > > will probably bore you to death  :-)
> > >
> > > John Dailey
> > > Oracle DBA
> > > ING Americas - Application Services
> > > Atlanta, GA
> > >
> > >
> > >
> 
=== message truncated ===


=====
Gaja Krishna Vaidyanatha
Director, Storage Management Products,
Quest Software, Inc.
Co-author - Oracle Performance Tuning 101
http://www.osborne.com/database_erp/0072131454/0072131454.shtml
