Christopher Derr wrote:
>
> So I can see it both ways, I guess. I can back up 500 GB at a time from
> a 2 TB server, for example, making good use of my 8 GB of memory for each
> full backup (4 full backups per week to get the entire 2 TB). This is
> if I have one backuppc server with an onboard drive array to hold the data.
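Purely as an illustration of that kind of split (the share names and sizes
below are invented; in BackupPC itself you would normally express the split
as several host entries pointing at the same machine via
$Conf{ClientNameAlias}, each with its own $Conf{RsyncShareName} list), the
grouping might look something like this:

# Hypothetical sketch: pack a 2 TB host's shares into ~500 GB groups and
# give each group its own night for a full backup.  Share names and sizes
# are invented.
shares_gb = {
    "/srv/media": 500, "/home": 400, "/data/projects": 350, "/var/www": 250,
    "/data/archive": 150, "/opt": 150, "/usr/local": 100, "/etc": 100,
}
TARGET_GB = 500                        # rough upper bound per nightly full

groups = []                            # each entry: [list of shares, GB used]
for share, size in sorted(shares_gb.items(), key=lambda kv: -kv[1]):
    for g in groups:                   # first-fit-decreasing bin packing
        if g[1] + size <= TARGET_GB:
            g[0].append(share)
            g[1] += size
            break
    else:
        groups.append([[share], size])

nights = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
for night, (members, used) in zip(nights, groups):
    print(f"{night}: full backup of {', '.join(members)} (~{used} GB)")

With these made-up numbers that prints four ~500 GB fulls, Monday through
Thursday, which matches the 4-per-week rotation described above.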
You'll probably want to do as much as is practical on a single server
since it is easier to watch (although backuppc mostly takes care of
itself) and you get better pooling.
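To expand on the pooling point: BackupPC stores each unique file once in a
pool keyed by a hash of its contents and hardlinks every backup that
contains that file to the single pool copy. With one server (and one pool
filesystem) identical files from every host deduplicate against each other;
split the hosts across several servers and each pool ends up holding its own
copy. A very simplified sketch of the idea -- not BackupPC's actual pool
code, and the paths in the usage comment are made up:

# Very simplified content-hash pooling (not BackupPC's actual code; it
# hashes and chains pool files differently).  Identical contents end up as
# one pool file plus hardlinks, so every host sharing the pool deduplicates
# against every other host "for free".
import hashlib
import os
import shutil

def store(src_path, backup_path, pool_dir):
    """Place src_path at backup_path, deduplicating through a shared pool."""
    with open(src_path, "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()   # fine for a sketch
    pool_path = os.path.join(pool_dir, digest)
    if not os.path.exists(pool_path):                 # first time we see it
        shutil.copy2(src_path, pool_path)
    os.makedirs(os.path.dirname(backup_path), exist_ok=True)
    os.link(pool_path, backup_path)   # hardlink: no additional data stored

# Two hosts backing up the same file cost the space of one copy
# (made-up paths; pool and pc trees must share one filesystem for os.link):
#   store("/etc/hosts", "/data/BackupPC/pc/hostA/0/etc/hosts", "/data/BackupPC/cpool")
#   store("/etc/hosts", "/data/BackupPC/pc/hostB/0/etc/hosts", "/data/BackupPC/cpool")

The hardlink trick is also why the pool and the per-host trees have to live
on a single filesystem.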
> Alternatively, I could go the more extensible route: multiple backuppc
> servers with somewhat less memory each, backing up to a large SAN, even
> at the same time. For an environment where I may be backing up data in
> the terabytes, would multiple backuppc head nodes backing up to a SAN
> over iSCSI/fiber be a good bet?
That's going to depend on the layout of the disks in the SAN. You'll
want the volumes to be completely independent, not carved out of one
array that shares drives and is then split into volumes. And it will be
more expensive (but perhaps more convenient) than just putting a couple
of big SATA drives in the backuppc server case.
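If you do go the multi-server route, one quick sanity check (the paths below
are hypothetical) is to confirm that each server's pool directory really
sits on its own filesystem as far as the OS can tell. Note the limitation:
this only sees block devices, so two LUNs carved from the same RAID group
inside the SAN will still look independent here -- whether they share
spindles has to be checked on the array itself.

# Hypothetical sanity check: confirm each BackupPC server's pool path is a
# separate filesystem.  It only sees what the OS sees -- LUNs cut from the
# same RAID group inside the SAN will still pass, so check the array too.
import os
from collections import defaultdict

pool_paths = ["/srv/backuppc1/pool", "/srv/backuppc2/pool"]   # made-up paths

by_device = defaultdict(list)
for path in pool_paths:
    by_device[os.stat(path).st_dev].append(path)   # device id of the filesystem

for dev, paths in by_device.items():
    if len(paths) > 1:
        print(f"WARNING: {', '.join(paths)} share one filesystem (device {dev})")
    else:
        print(f"OK: {paths[0]} has its own filesystem (device {dev})")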
--
Les Mikesell
[EMAIL PROTECTED]