> RAID5 is fine; it's the filesystem that you need to concentrate on.
>
> Since you mentioned ext2, I'm assuming you're installing Linux.  While
> you will no doubt get a lot of favoritism from some people on which is
> best, I would highly suggest that you configure a second volume as ext3
> and run some tests (filebench is a good test suite).  Then reformat it
> as XFS and run the same tests.  Then reformat as JFS and run the same
> tests.  The winner of your tests should be the filesystem you use.
>
I would argue that filesystem stability is paramount for backups and
performance is really a distant second.  Who cares how fast the system is if
it loses your data?  Also, RAID5 has been extensively tested for many years,
and it is VERY well known that there is a large performance penalty for the
parity calculation.  *SOME* hardware RAID cards can minimize this, but with
software RAID on Linux it is a big deal.  RAID5 will be substantially slower
than RAID10 or RAID1, which have no parity calculation.

That being said, if RAID5 (or 6) is fast enough for you, it is a mature and
stable option and a good choice, but it certainly comes with a performance
penalty.



> And no matter which filesystem you end up using, don't use the
> main/system volume as the same place your data goes.  Dedicate a volume
> to BackupPC.
>

I second that!  Though BackupPC doesn't make heavy use of the root
filesystem, this is really a best-practices piece of advice: always put
your data on a different partition or storage device than your system if you
can (especially in a server environment).


>
> > What is the best way to set up the RAID array for BackupPC?
>
> RAID5 if you have between four and eight drives, and RAID1+0 (RAID10) if
> you have more than eight.
>

Really, if performance is a big concern, you need to do the math on which
solution will give you the most active spindles versus latency.  You can
take any latency penalty and apply it directly to the spindle count, so a
3-spindle set versus a 4-spindle set that has a 20% latency penalty from the
parity check is more like a 3:3.2 ratio than 3:4.  Consider that RAID5
will have roughly a 1/(n-1) performance penalty due to the parity write.

Examples:
RAID10 with 6 drives in a RAID0 over mirror pairs (r1-1+2, r1-3+4, r1-5+6)
is 3 active spindles, because the other three are mirrors, but has a
worst-case safety of just 1 drive.
RAID10 as a RAID0 over triple mirrors (raid1-1+2+3, raid1-4+5+6) is just 2
active spindles, but is more resilient because you can lose 2 or more drives
and keep the array up.

RAID5 with 6 drives is 6-1 for parity; then, taking off 1/5 of that for the
parity write (because 1/5 more data is written), it is equal to 4 active
spindles and can tolerate 1 failure.
That math shows the RAID5 will have the highest throughput, but it doesn't
account for the added latency, which carries the same 1/(n-1) penalty,
or 20% here.

These are round numbers, kind of a rule of thumb.  6 drives is about where
RAID5 actually catches up.  With a 4-drive set, the RAID5 penalty brings it
to 2 active spindles with a large 33% latency penalty, because the array
has to wait for all writes to complete, while a 4-drive RAID10 is 2 active
spindles without the latency hit.
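The rule-of-thumb math above can be sketched in a few lines of Python (the
function names are my own, and the model is just the rough 1/(n-1)
parity-write penalty described above, not a benchmark):

```python
def raid10_effective_spindles(mirror_groups: int) -> float:
    """RAID10: one data-bearing spindle per mirror group; no parity penalty."""
    return float(mirror_groups)

def raid5_effective_spindles(drives: int) -> float:
    """RAID5 on n drives: n-1 data spindles, reduced by the rough
    1/(n-1) parity-write penalty."""
    data = drives - 1
    return data * (1 - 1 / data)

# Reproduce the worked examples from the text:
print(raid10_effective_spindles(3))  # 6 drives as 3 mirror pairs -> 3 active spindles
print(raid5_effective_spindles(6))   # 6-drive RAID5 -> ~4 effective spindles
print(raid5_effective_spindles(4))   # 4-drive RAID5 -> ~2, plus a 33% latency hit
```

Note this only models throughput-equivalent spindles; the latency penalty
still applies on top, which is why the 4-drive RAID10 wins even though both
come out at 2 spindles.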

RAID6 shines with 10 or 12 spindles.  I say RAID6 because I wouldn't risk a
large array on a single drive fault, and a hot spare has a rebuild window
that makes me nervous.  RAID5 likes odd numbers of active drives (not
including the hot spare) and RAID6 likes even numbers; I can't give a
scientific explanation and can only describe it as a phenomenon.  If you use
RAID5/6, be sure to have 6 or more members active and also have a hot spare.
_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
