Re: [BackupPC-users] Scripting archives: multiple hosts
[EMAIL PROTECTED] wrote on 01/05/2007 02:30:21 PM:

> Timothy J. Massey wrote:
>
> > Is there a way to get BackupPC to archive *all* hosts for which it
> > has a backup? Or is there a file that I could parse that would allow
> > me to do it? I imagine I could parse the backuppc/hosts file, but it
> > would be nice if I could have BackupPC tell me what hosts it has
> > backups for, instead of just assuming that the hosts file reflects
> > what it has.
>
> The pc dir contains a directory for every host that is backed up. So
> something like:

That's a pretty good idea. I like it better than parsing the hosts
file. Is there a more "official" BackupPC way of getting a list of
hosts? Otherwise, I'll run with this...

Though I still like the idea of adding this as a standard BackupPC
feature of a backup host (as per my other e-mail).

Thank you for your help!

Tim Massey

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/
Re: [BackupPC-users] Scripting archives: multiple hosts
Timothy J. Massey wrote:

> I have a server with a half-dozen or so hosts that I would like to
> automatically archive. I have a command that looks like this:
>
> /usr/share/backuppc/bin/BackupPC_archiveHost \
>     /usr/share/backuppc/bin/BackupPC_tarCreate \
>     /usr/bin/split /usr/bin/par2 \
>     localhost -1 /bin/cat .raw 000 /mnt/removable 10 *
>
> This command works fine. However, I would like to have *all* of the
> hosts automatically archive. Of course, I could simply copy the above
> command, replace "localhost" with the new hostname and go. However, I
> would like something a little more robust.
>
> Is there a way to get BackupPC to archive *all* hosts for which it has
> a backup? Or is there a file that I could parse that would allow me to
> do it? I imagine I could parse the backuppc/hosts file, but it would
> be nice if I could have BackupPC tell me what hosts it has backups
> for, instead of just assuming that the hosts file reflects what it
> has.

The pc dir contains a directory for every host that is backed up. So
something like:

for host in `ls /var/lib/backuppc/pc/`; do
    /usr/share/backuppc/bin/BackupPC_archiveHost \
        /usr/share/backuppc/bin/BackupPC_tarCreate \
        /usr/bin/split /usr/bin/par2 \
        $host -1 /bin/cat .raw 000 /mnt/removable 10 *;
done

Haven't tested this at all.

Nils Breunese.
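A slightly hardened sketch of that loop, written as a dry run so it can
be eyeballed before running for real (the pc/ path is an assumption for
a standard install; remove the "echo" to actually archive). To keep the
example self-contained, it builds a throwaway pc/ tree:

```shell
#!/bin/sh
# Sketch of a hardened version of the loop above. It builds a throwaway
# pc/ tree and only echoes each archive command; on a real server, set
# PCDIR to /var/lib/backuppc/pc and drop the "echo".
PCDIR=$(mktemp -d)                  # stand-in for /var/lib/backuppc/pc
mkdir "$PCDIR/alpha" "$PCDIR/beta"  # fake per-host backup directories

archived=""
for dir in "$PCDIR"/*/; do
    [ -d "$dir" ] || continue       # skip if the glob matched nothing
    host=$(basename "$dir")
    # Quote $host, and quote '*' so the shell doesn't glob the share arg:
    echo /usr/share/backuppc/bin/BackupPC_archiveHost \
         /usr/share/backuppc/bin/BackupPC_tarCreate \
         /usr/bin/split /usr/bin/par2 \
         "$host" -1 /bin/cat .raw 000 /mnt/removable 10 '*'
    archived="$archived $host"
done
```

Quoting $host and testing for a directory guard against stray files
landing in pc/, and quoting the trailing * keeps the shell from
expanding it against the current directory before BackupPC sees it.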
Re: [BackupPC-users] How to move the Backuppc pool
Matthias Bertschy wrote:

> Hello list,
>
> For the last 4 weeks, I have been doing everything I could to try to
> move our current backuppc pool (79.18 GB comprising 1132632 files and
> 4369 directories) from a RAID0 to a RAID5 having different filesystem
> sizes.
>
> The operating system is OpenBSD, so the filesystem is FFS, and I have
> tried so far:
>
>   * tar
>   * gtar (the GNU version of tar)
>   * pax
>   * dump --> restore
>
> None of them was able to successfully copy the pool. I suspect this
> has to do with the huge number of hardlinks within the pool, which
> require too much memory to catalog. dd is not an option as the
> filesystems cannot be the same size.
>
> Has anyone a clue on how to solve my problem?
> Thanks, and happy new year :-)
>
> Matthias

cp -a worked for me on a 20 GB pool...
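For reference, GNU cp's -a flag preserves hard links between files
inside the copied tree, which is the property the pool needs (this is
GNU coreutils behavior; OpenBSD's base cp may not accept -a, so it may
require installing coreutils there). A quick self-contained check:

```shell
#!/bin/sh
# Verify that cp -a recreates hard links between files inside the
# copied tree (GNU coreutils behavior; OpenBSD's base cp may lack -a).
SRC=$(mktemp -d); DST=$(mktemp -d)
echo pooled-data > "$SRC/a"
ln "$SRC/a" "$SRC/b"               # two names, one inode, like the pool

cp -a "$SRC/." "$DST/"

links=$(ls -l "$DST/a" | awk '{print $2}')
echo "link count in copy: $links"
```

If the link count in the copy is 2, the hard link survived; if it is 1,
the copy silently doubled the data, which is what makes naive pool
copies blow up.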
[BackupPC-users] Automatic Archives
Hello!

I'm trying to set up a BackupPC server that automatically archives
itself. Right now, I'm doing it with hardcoded cron commands. However,
I'm trying to build something both more robust and more self-contained
within the BackupPC system.

Is there a way to use the BackupPC scheduler to run an archive
automatically? With the new 3.0 code, the GUI offers *everything* I
might need in a single place, *except* a way to manage automatic
archives.

One idea I have toyed with is using $Conf{DumpPostUserCmd} to run a
command that automatically archives the host. This would keep it within
BackupPC, but it's not very clean: I would still have to write a script
to manage how often the archives run, since I only want them once a
week, not every day.

It seems like it would be much cleaner to add a per-host "ArchPeriod"
setting that works similarly to FullPeriod and IncrPeriod: perform an
archive after a certain interval, using the Archive* settings, which
you could specify right in the host.pl file.

Any thoughts? Is this something that others would find valuable?

Tim Massey
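Until something like ArchPeriod exists, the $Conf{DumpPostUserCmd} idea
can be gated on the day of the week with a small wrapper. A minimal
sketch, assuming a Sunday-only schedule and a placeholder where the
real archive command would go:

```shell
#!/bin/sh
# Hypothetical wrapper for $Conf{DumpPostUserCmd}: run the archive step
# only once a week. should_archive takes the day-of-week number
# (date +%u prints 1 = Monday ... 7 = Sunday) so the gate is testable.
should_archive() {
    [ "$1" -eq 7 ]                 # archive on Sundays only
}

if should_archive "$(date +%u)"; then
    echo "archive day: run BackupPC_archiveHost here"
else
    echo "not an archive day: nothing to do"
fi
```

A file-based timestamp (archive only if the last archive is more than
seven days old) would be sturdier than a fixed weekday, at the cost of
a little more state.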
[BackupPC-users] Scripting archives: multiple hosts
Hello!

I have a server with a half-dozen or so hosts that I would like to
automatically archive. I have a command that looks like this:

/usr/share/backuppc/bin/BackupPC_archiveHost \
    /usr/share/backuppc/bin/BackupPC_tarCreate \
    /usr/bin/split /usr/bin/par2 \
    localhost -1 /bin/cat .raw 000 /mnt/removable 10 *

This command works fine. However, I would like to have *all* of the
hosts automatically archive. Of course, I could simply copy the above
command, replace "localhost" with the new hostname and go. However, I
would like something a little more robust.

Is there a way to get BackupPC to archive *all* hosts for which it has
a backup? Or is there a file that I could parse that would allow me to
do it? I imagine I could parse the backuppc/hosts file, but it would be
nice if I could have BackupPC tell me what hosts it has backups for,
instead of just assuming that the hosts file reflects what it has.

Has anyone else done something like this? Thank you for any help or
suggestions you might have.

Tim Massey
Re: [BackupPC-users] How to move the Backuppc pool
Les Mikesell wrote:

> On Fri, 2007-01-05 at 15:45 +0100, Matthias Bertschy wrote:
>
> > For the last 4 weeks, I have been doing everything I could to try
> > to move our current backuppc pool (79.18 GB comprising 1132632
> > files and 4369 directories) from a RAID0 to a RAID5 having
> > different filesystem sizes.
> >
> > The operating system is OpenBSD, so the filesystem is FFS, and I
> > have tried so far:
> >
> >   * tar
> >   * gtar (the GNU version of tar)
> >   * pax
> >   * dump --> restore
> >
> > None of them was able to successfully copy the pool. I suspect
> > this has to do with the huge number of hardlinks within the pool,
> > which require too much memory to catalog. dd is not an option as
> > the filesystems cannot be the same size.
> >
> > Has anyone a clue on how to solve my problem?
> > Thanks, and happy new year :-)
>
> The straightforward way is to just copy the configurations over and
> let the new system start from scratch, keeping the old drives around
> for some interval so you could restore a historical copy if required.
> In this case that's an especially good idea since you may see much
> worse performance on the RAID5 setup.

Try afio, a very efficient tool. Usage is similar to cpio.

Yves

--
Yves Trudeau, Ph. D., MCSE, OCP
Senior Analyst, Révolution Linux
819-780-8955 ext. *104

Any views and opinions expressed in this email are solely those of the
author and do not necessarily represent those of Révolution Linux.
Re: [BackupPC-users] How to move the Backuppc pool
On Fri, 2007-01-05 at 15:45 +0100, Matthias Bertschy wrote:

> For the last 4 weeks, I have been doing everything I could to try to
> move our current backuppc pool (79.18 GB comprising 1132632 files and
> 4369 directories) from a RAID0 to a RAID5 having different filesystem
> sizes.
>
> The operating system is OpenBSD, so the filesystem is FFS, and I have
> tried so far:
>
>   * tar
>   * gtar (the GNU version of tar)
>   * pax
>   * dump --> restore
>
> None of them was able to successfully copy the pool. I suspect this
> has to do with the huge number of hardlinks within the pool, which
> require too much memory to catalog. dd is not an option as the
> filesystems cannot be the same size.
>
> Has anyone a clue on how to solve my problem?
> Thanks, and happy new year :-)

The straightforward way is to just copy the configurations over and let
the new system start from scratch, keeping the old drives around for
some interval so you could restore a historical copy if required. In
this case that's an especially good idea since you may see much worse
performance on the RAID5 setup.

--
Les Mikesell
[EMAIL PROTECTED]
Re: [BackupPC-users] How to move the Backuppc pool
Tino Schwarze wrote:

> > dd is not an option as the filesystems cannot be the same size.
>
> Could you dd, then resize the file system?

Unfortunately no, because the RAID0 is huge (~1 TB) and we don't want
such a size for the RAID5, mainly for performance reasons. Furthermore,
downsizing an FFS partition is not supported on OpenBSD.

Matthias
Re: [BackupPC-users] How to move the Backuppc pool
On Fri, Jan 05, 2007 at 03:45:43PM +0100, Matthias Bertschy wrote:

> For the last 4 weeks, I have been doing everything I could to try to
> move our current backuppc pool (79.18 GB comprising 1132632 files and
> 4369 directories) from a RAID0 to a RAID5 having different filesystem
> sizes.
>
> The operating system is OpenBSD, so the filesystem is FFS, and I have
> tried so far:
>
>   * tar
>   * gtar (the GNU version of tar)
>   * pax
>   * dump --> restore
>
> And none of them was able to successfully copy the pool.
> I suspect this has to do with the huge number of hardlinks within the
> pool, and requiring too much memory to catalog them all.

Yes, the hardlinks are difficult to cope with.

> dd is not an option as the filesystems cannot be the same size.

Could you dd, then resize the file system?

Tino.

--
www.quantenfeuerwerk.de
www.spiritualdesign-chemnitz.de
www.lebensraum11.de
[BackupPC-users] How to move the Backuppc pool
Hello list,

For the last 4 weeks, I have been doing everything I could to try to
move our current backuppc pool (79.18 GB comprising 1132632 files and
4369 directories) from a RAID0 to a RAID5 having different filesystem
sizes.

The operating system is OpenBSD, so the filesystem is FFS, and I have
tried so far:

  * tar
  * gtar (the GNU version of tar)
  * pax
  * dump --> restore

None of them was able to successfully copy the pool. I suspect this has
to do with the huge number of hardlinks within the pool, which require
too much memory to catalog. dd is not an option as the filesystems
cannot be the same size.

Has anyone a clue on how to solve my problem?
Thanks, and happy new year :-)

Matthias