Quoting Arnold Krille <[email protected]>:
> On Fri, 13 Sep 2013 12:56:30 -0400 Carl Wilhelm Soderstrom <[email protected]> wrote:
> > On 09/13 02:12 , Marcel Meckel wrote:
> > > Debian Wheezy will be running from SD card inside the server,
> >
> > My company tried using CF cards as OS storage devices for a while. Our
> > experience is that (anecdotally) they aren't any more reliable than
> > spinning disks; they still fail sometimes. I don't know if SD cards
> > will be any different, or if you might have a different way of
> > mounting them which will be better.
>
> When the host is simple (aka single-use) and maybe even configured
> automatically with chef/puppet/whatever, the most you lose from a
> failing CF card/SD card is the time it takes to reinstall.
Along with that, CF/SD cards are cheap, so getting a couple of extras and
cloning the master to them makes disaster recovery nice and quick - just swap
the CF/SD card and you're back up (minus any configuration tweaks/updates you
made to the master since cloning it). I have two FreeNAS fileservers running
smoothly on CF cards using CF->SATA adapters.
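The clone-a-spare-card approach above is usually just a raw `dd` copy. A
minimal sketch follows; the `/dev/sdX`/`/dev/sdY` device names are placeholders
(check yours with `lsblk` first, since `dd` will happily overwrite the wrong
disk). The second half demonstrates the same idea safely on file-backed images:

```shell
# Real cards (placeholders -- verify device names with lsblk before running!):
#   dd if=/dev/sdX of=/dev/sdY bs=4M conv=fsync status=progress

# Same idea, demonstrated on ordinary files instead of block devices:
dd if=/dev/urandom of=master.img bs=1M count=4 2>/dev/null  # stand-in for the master card
dd if=master.img of=spare.img bs=1M 2>/dev/null             # the clone step
cmp master.img spare.img && echo "clone verified"
```

After cloning a real card, it's worth booting the spare once to confirm it
actually works before it goes in the drawer.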
> > > 2. I always use LVM but it might not be useful in this case.
> > > Would you recommend using LVM when the whole 12 TiB is used as
> > > one big filesystem only? It might be useful if i have to add
> > > another shelf of 25 disks to the system in the future to be
> > > able to resize the DATADIR FS spanning then 2 enclosures.
> >
> > I wouldn't bother. I've done it both ways (with, and without the
> > LVM). If you *know* that you'll be adding more disks in the future,
> > it's a good idea. My experience is that planned expansions usually
> > don't happen. ;)
> >
> > Also, if you're going to add more disks for more capacity, you're
> > much better off adding a whole new machine. A second machine will
> > increase your overall backup throughput as well as increasing your
> > disk space. You won't get the benefit of pooling, but you will get
> > more hosts backed up in a shorter amount of time.
>
> My advice would actually be a bit different: the main problem with
> BackupPC isn't necessarily disk access but memory consumption, as
> BackupPC (especially with the rsync method) has to keep a rather big
> file tree in memory. So maybe set up a minimal hardware host and run
> two or even three virtual machines for BackupPC, then distribute your
> hosts-to-backup across these. That way the file tree per BackupPC
> instance should be smaller, at the "cost" of a bit less deduplication.
> But in my experience, the main benefit of massive deduplication is
> small files in /etc and similar system shares. If you have duplicates
> in big user-data files, you are either backing up one NAS resource
> through several clients or your users are copying data where it
> shouldn't belong.
>
> Hope that is understandable; at the end of the week, writing in a
> foreign language isn't the best way of expressing one's thoughts.
>
> Have a nice weekend,
> Arnold
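Arnold's distribute-the-hosts suggestion boils down to splitting one client
list across several BackupPC instances. A minimal sketch, assuming a
hypothetical `hosts.list` with one client hostname per line (a real BackupPC
instance uses its own `hosts` config file, with extra columns for the DHCP
flag and user; the `hosts.instanceN` file names here are illustrative only):

```shell
# Hypothetical flat list of backup clients, one per line.
printf '%s\n' web1 web2 db1 db2 mail1 nas1 > hosts.list

# Deal the clients round-robin across 3 BackupPC instances, so each
# instance only has to hold roughly a third of the file tree in memory.
N=3
awk -v n="$N" '{ print > ("hosts.instance" (NR % n)) }' hosts.list

wc -l hosts.instance*   # each instance ends up with ~1/N of the clients
```

Round-robin keeps the instances balanced by client count; if a few clients
dominate the data volume, hand-assigning those to separate instances works
better than any automatic split.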
-- Mark D. Montgomery II http://www.techiem2.net
_______________________________________________
BackupPC-users mailing list
[email protected]
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
