Thanks, everyone! Looks like backuppc should be able to handle my
network, no problem. To hit on specific points, in threaded order:
- I'll be sure to get plenty of RAM. We're going to be buying a new
rackmount system (probably Dell) for this, and I wouldn't have specced
it with less than 64G of RAM anyhow, so bumping it up to 256G should be
no problem.
- I haven't looked at the Debian docs for backuppc yet, but it is
packaged in the main Debian stable repo and there should be
Debian-specific install instructions in the package. They're usually
pretty good, so I don't anticipate any major setup hassles.
- Budget is finite, but this is replacing an existing Tivoli backup
solution, so under our organizational accounting rules I can probably
spend up to five years' worth of TSM license fees with few or no
questions asked. And IBM's licensing fees ain't cheap.
- I'm definitely backing up the VMs as individual hosts, not as disk
image files. Aside from minimizing atomicity concerns, it also makes
single-file restores easier and, in the backuppc context, I doubt that
deduplication would work well (if at all) with disk images.
- For the database servers, I was already considering a cron job to do
SQL dumps of everything and back those up instead of the raw database
files (a rough sketch of what I have in mind is below, after this
list). But there's something fishy with the server that's sending
400G/day anyhow... It only has about 650G used on it and /var/lib/mysql
is under 100G, so there's no reason it should have 400G of changes
daily. I'm in the process of looking into that.
- Thanks for the tips on zfs settings. I tend to use ext4 by default
and had planned to look at btrfs as an alternative, but I'll check zfs
out, too (a sketch of the settings I'd start from is also below).
- I'm already running icinga, so monitoring is handled. (Or will be,
once the backup server is installed.)
- I hadn't considered the possibility of horizontal scaling. Thanks for
bringing that up. I'll have a chat with the other admins tomorrow and
see what they think about that, although I think I personally prefer
vertical scaling just for the simplicity of single-point administration.
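
As promised above, here's a rough sketch of the kind of dump job I'm
considering for the database servers. The schedule, output path, and
the assumption that credentials live in root's ~/.my.cnf are all
placeholders on my part, not anything we've actually deployed:

  # /etc/cron.d/mysql-dump  (hypothetical; adjust schedule, path, credentials)
  # Dump everything nightly so backuppc picks up one consistent file
  # instead of the live InnoDB/MyISAM files.
  30 1 * * * root mysqldump --all-databases --single-transaction --routines --events | gzip > /var/backups/mysql/all-databases.sql.gz

The point of --single-transaction is to get a consistent InnoDB
snapshot without locking tables for the duration of the dump.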
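
And on the zfs side, this is roughly the dataset layout I'd experiment
with for the backuppc pool. Pool name, disks, and mountpoint are all
made up for illustration, and I haven't tested any of it yet:

  # Hypothetical pool/dataset names; use /dev/disk/by-id paths in practice
  zpool create backup raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
  zfs create backup/backuppc
  zfs set compression=lz4 backup/backuppc   # cheap on-the-fly compression
  zfs set atime=off backup/backuppc         # no need for atime on a backup pool
  zfs set xattr=sa backup/backuppc          # keep xattrs with the inode on Linux
  zfs set mountpoint=/var/lib/backuppc backup/backuppc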
And another question which came to mind from the zfs point: Is anyone
familiar with VDO (Virtual Data Optimizer)? It's a device-mapper layer
which sits between the filesystem and the underlying block device and
does on-the-fly data compression and disk-block-level deduplication. A
friend uses a
homegrown rsync-based backup system and says it cuts his disk usage
significantly, but I'm wondering whether it would help much in a
backuppc setting, since bpc already does its own file-level deduplication.
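
For concreteness, this is the sort of setup I understand my friend is
running; the device name, volume name, and logical size here are
placeholders of mine, and I haven't tried it myself:

  # Hypothetical VDO volume with xfs on top (RHEL/CentOS 'vdo' tooling)
  vdo create --name=backup_vdo --device=/dev/sdX --vdoLogicalSize=40T
  mkfs.xfs -K /dev/mapper/backup_vdo        # -K skips the initial discard
  mount -o discard /dev/mapper/backup_vdo /var/lib/backuppc

(I believe newer releases manage VDO through LVM rather than the
standalone vdo tool, so the exact commands may differ.)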
On 12/1/20 5:37 PM, Richard Shaw wrote:
So long story short, a lot of it will depend on how fast your data
changes/grows, but it doesn't necessarily require a high end computer.
You really just need something beefy enough not to be the
bottleneck. If you can make the client I/O the bottleneck, then you're
good. Depending on your budget (or what you have lying around) a
decent AMD budget Ryzen system would work quite nicely.
If you're familiar with Debian then I'm sure it's well documented how
to install and set it up. I maintain the Fedora EPEL version and run it on
CentOS 8 quite nicely.
Thanks,
Richard
_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/