Re: [CentOS] Backup Redux
On 12/16/2011 06:16 PM, Les Mikesell wrote:
> On Fri, Dec 16, 2011 at 5:17 PM, Jonathan Nilsson jnils...@uci.edu wrote:
>>> From the little I've read it seems to be very similar to BackupPC.
>> I think the only thing they have in common is that they both use rsync as the transfer agent.
> Well, they are both perl scripts... Backuppc just has more of it.
>>> Though based on the name I guess it is using LVM snapshots?
>> no, rsnapshot does not use LVM snapshots (at least, not that I know of). it uses cp -al to create hardlinks between each backup run.
> I think it can do something with them, but just locally so it isn't much of a backup.

Does BackupPC have the ability to easily be configured so that each daily incremental and each weekly full backup are stored on different drives, i.e. to rotate drives based on your backup schedule and not just when a drive fills up? I think this might look something like having one drive for each day of the week, one for each week of the month, one for each month of the year, etc...

Thanks,
Nataraj

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Backup Redux
On Sun, Dec 18, 2011 at 6:00 PM, Nataraj incoming-cen...@rjl.com wrote:
> Does BackupPC have the ability to easily be configured so that each daily incremental and each weekly full backup are stored on different drives, i.e. to rotate drives based on your backup schedule and not just when a drive fills up? I think this might look something like having one drive for each day of the week, one for each week of the month, one for each month of the year, etc...

No, due to the way that duplicate files are pooled with hardlinks, backuppc must store everything on a single filesystem, and additional copies do not consume additional space - so there is no reason to do that from a storage perspective. However, from discussions on the mailing list, I believe that there are people who swap/rotate the entire archive drive daily or weekly, letting it catch up after each swap, and others have various schemes to image-copy the filesystem with RAID or LVM mirroring to get offsite copies.

--
Les Mikesell
lesmikes...@gmail.com
Re: [CentOS] Backup Redux
OK, I'm getting ready to finally dig into replacing our backups. Lots of good info in this thread - but so far no mention of rsnapshot. Any comment on it? Our environment is all Linux except for Mac desktops, which would likely have a different solution for backups.

From the little I've read it seems to be very similar to BackupPC. Though based on the name I guess it is using LVM snapshots? Which of course means almost instantaneous backups - attractive.

--
“Don't eat anything you've ever seen advertised on TV” - Michael Pollan, author of In Defense of Food
Re: [CentOS] Backup Redux
On Fri, Dec 16, 2011 at 12:34 PM, Alan McKay alan.mc...@gmail.com wrote:
> OK, I'm getting ready to finally dig into replacing our backups. Lots of good info in this thread - but so far no mention of rsnapshot. Any comment on it? Our environment is all Linux except for Mac desktops, which would likely have a different solution for backups.
> From the little I've read it seems to be very similar to BackupPC. Though based on the name I guess it is using LVM snapshots? Which of course means almost instantaneous backups - attractive.

Rsnapshot keeps what look like snapshots online, with history hardlinked where possible (within the history of the same system) to save some space. I'm not quite sure how much it understands about LVM, but it looks like that's a special case for local use. Backuppc will give you compression as well, plus will hardlink all identical content even if found on different targets or in different locations, so you can keep much more history in the same amount of space. And it gives you a nice web interface to monitor and control everything.

--
Les Mikesell
lesmikes...@gmail.com
Re: [CentOS] Backup Redux
> good info in this thread - but so far no mention of rsnapshot. Any comment on it?

we use rsnapshot. i think of it basically as a wrapper around rsync. it isn't a fully featured backup solution just on its own, but it is a great tool. we have written a bash shell script wrapper around rsnapshot to do things like mount/umount the appropriate NFS storage brick to use as an rsync destination, and then send an email summary of the results if there was a problem.

> From the little I've read it seems to be very similar to BackupPC.

I think the only thing they have in common is that they both use rsync as the transfer agent. Les describes how BackupPC has additional features like compression, hardlinking between different backup sets, and a web gui. I'd also add that BackupPC allows users to perform their own restore operations. rsnapshot has nothing like that.

> Though based on the name I guess it is using LVM snapshots?

no, rsnapshot does not use LVM snapshots (at least, not that I know of). it uses cp -al to create hardlinks between each backup run.

--
Jonathan.Nilsson at uci dot edu
Social Sciences Computing Services
SSPB 1265 | 949.824.1536
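The cp -al rotation is easy to demonstrate by hand. A minimal sketch - the /tmp paths are illustrative only; rsnapshot manages its own snapshot directories and uses rsync for the refresh step:

```shell
# Seed a fake "previous" snapshot.
rm -rf /tmp/snapdemo
mkdir -p /tmp/snapdemo/backup.0
echo hello > /tmp/snapdemo/backup.0/file.txt

# Rotate: backup.1 becomes a hard-linked copy of backup.0, so unchanged
# files consume no extra data blocks -- only new directory entries.
cp -al /tmp/snapdemo/backup.0 /tmp/snapdemo/backup.1

# Both names now point at the same inode.
stat -c %h /tmp/snapdemo/backup.0/file.txt   # link count: 2
```

rsnapshot then refreshes backup.0 with rsync; files that changed get fresh inodes there, while backup.1 keeps hard links to the old data.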
Re: [CentOS] Backup Redux
On Fri, Dec 16, 2011 at 5:17 PM, Jonathan Nilsson jnils...@uci.edu wrote:
>> From the little I've read it seems to be very similar to BackupPC.
> I think the only thing they have in common is that they both use rsync as the transfer agent.

Well, they are both perl scripts... Backuppc just has more of it.

>> Though based on the name I guess it is using LVM snapshots?
> no, rsnapshot does not use LVM snapshots (at least, not that I know of). it uses cp -al to create hardlinks between each backup run.

I think it can do something with them, but just locally so it isn't much of a backup.

--
Les Mikesell
lesmikes...@gmail.com
Re: [CentOS] Backup Redux
--On Thursday, December 08, 2011 01:06:10 PM -0500 Alan McKay alan.mc...@gmail.com wrote:
> Anyone have any experience with this, which just came to my attention? http://www.arkeia.com/en/solutions/open-source-solutions

Yes, I've used it (albeit about 8 years back or so), as well as many other solutions (both commercial and FOSS) over the years. Being bit by Arkeia (and previously Amanda and others) is why I started to design my own FOSS solution at the time ... however I was about 6 months into the design when Bacula first appeared, written by Kern, with whom I had worked on a different project. (Actually, it was scary ... between my design and what he had, there was only about one significant difference at the time.) So I shelved my project and started using bacula, and haven't looked back.

Use bacula. Drink the Kool-Aid. Be happy.

BTW, my observation with a lot of the products at the time was that they *mostly* worked, but when the edge cases caused me to lose backup history, or made it impossible to back up certain filesystems, or had other unusual problems, it made it obvious that something better was needed. Without naming names, there was one FOSS product I evaluated that, when I looked at the source, wasn't doing any decent level of error checking ... which would explain the SEGVs I was seeing. Once bitten, twice shy.

I will allow for the possibility that Amanda and Arkeia may have improved over the years; however, Bacula was already a solid product when they were flaky or had other limitations, and has just gotten better with time.

Devin
Re: [CentOS] Backup Redux
On Fri, Dec 9, 2011 at 11:44 AM, Devin Reade g...@gno.org wrote:
> Being bit by Arkeia (and previously Amanda and others)

Errr, what? Amanda is a little cumbersome to set up, but it doesn't bite. If gnutar works, amanda should work or tell you why.

--
Les Mikesell
lesmikes...@gmail.com
Re: [CentOS] Backup Redux
--On Friday, December 09, 2011 11:48:49 AM -0600 Les Mikesell lesmikes...@gmail.com wrote:
> Errr, what? Amanda is a little cumbersome to set up, but it doesn't bite. If gnutar works, amanda should work or tell you why.

As I said, it's been at least 8 years since I dealt with Amanda. Going by memory, though, in the case of Amanda it wasn't flakiness but rather limitations. At the time Amanda had the limitation that the backup of a filesystem could not span tapes. This was a critical issue in that I had filesystems larger than the largest tapes available. In the case where filesystem sizes approached the size of a tape, it wasted a lot of tape space (which wasn't cheap). I'm willing to believe that they've fixed that limitation, but if so, I'd already moved on.

Arkeia, though, was definitely one of the flaky ones. (IIRC I lost three months' worth of backups, which was caught by a scheduled validation process ... luckily it was only history and not current data that was gone.)

Devin
Re: [CentOS] Backup Redux
On Fri, Dec 9, 2011 at 12:11 PM, Devin Reade g...@gno.org wrote:
> --On Friday, December 09, 2011 11:48:49 AM -0600 Les Mikesell lesmikes...@gmail.com wrote:
>> Errr, what? Amanda is a little cumbersome to set up, but it doesn't bite. If gnutar works, amanda should work or tell you why.
> As I said, it's been at least 8 years since I dealt with Amanda. Going by memory, though, in the case of Amanda it wasn't flakiness but rather limitations. At the time Amanda had the limitation that the backup of a filesystem could not span tapes. This was a critical issue in that I had filesystems larger than the largest tapes available. In the case where filesystem sizes approached the size of a tape, it wasted a lot of tape space (which wasn't cheap).

The brilliant thing about amanda, going back more than a decade, is that it knows how to estimate the size of backups, and if you give it many filesystems to back up, it will skew a mix of full and incremental runs to fit the tape efficiently, getting at least an incremental every day and as many fulls as will fit. Of course you will cause problems if you put more data than will fit on your tape on a single filesystem, though.

> I'm willing to believe that they've fixed that limitation, but if so, I'd already moved on.

I think they have, but I just let my old system run until the tape drive died, and by then was more than satisfied with backuppc, using a raid-mirroring scheme to make offsite copies (soon to be replaced by independently running offsite servers). I'll be happy if I never see a tape again.

--
Les Mikesell
lesmikes...@gmail.com
Re: [CentOS] Backup Redux
--On Friday, December 09, 2011 01:03:18 PM -0600 Les Mikesell lesmikes...@gmail.com wrote:
> I'll be happy if I never see a tape again.

Likewise. Skipping forward to the present, I'm doing normal backups in Bacula to virtual volumes on hard disk. As for offsite/archival backups, using the bacula add-on 'vchanger' and inexpensive high-density SATA disks on an eSATA peripheral to act as a virtual tape autoloader magazine is both faster and less expensive than tape.

Devin
Re: [CentOS] Backup Redux
On Fri, Dec 9, 2011 at 1:52 PM, Devin Reade g...@gno.org wrote:
> --On Friday, December 09, 2011 01:03:18 PM -0600 Les Mikesell lesmikes...@gmail.com wrote:
>> I'll be happy if I never see a tape again.
> Likewise. Skipping forward to the present, I'm doing normal backups in Bacula to virtual volumes on hard disk. As for offsite/archival backups, using the bacula add-on 'vchanger' and inexpensive high-density SATA disks on an eSATA peripheral to act as a virtual tape autoloader magazine is both faster and less expensive than tape.

Bacula is probably better suited to mixing online/tape or fake-tape for offsite, but you can't keep as much online as backuppc without help from ZFS or similar block-level dedup. I doubt it can match the bandwidth efficiency of backuppc with rsync as the transport (not sure - how does the bacula agent deal with growing files, or big files with small changes?).

--
Les Mikesell
lesmikes...@gmail.com
Re: [CentOS] Backup Redux
--On Friday, December 09, 2011 02:59:08 PM -0600 Les Mikesell lesmikes...@gmail.com wrote:
> I doubt if it can match the bandwidth efficiency of backuppc with rsync as the transport (not sure - how does the bacula agent deal with growing files, or big files with small changes?).

There is a relatively new block-level delta plugin for that type of situation. I've not used it yet, so I don't have any opinions or comparisons. I'm not sure if it's in the community edition yet (some functionality starts out in the enterprise edition and then later gets moved into the community edition).

Devin
Re: [CentOS] Backup Redux
> Anyone have any experience with this, which just came to my attention? http://www.arkeia.com/en/solutions/open-source-solutions

I have used Arkeia for a few customers .. it works well. Do you have any specific questions about it?

Barry
[CentOS] Backup Redux
Hey folks,

I just went through the archives to see what people are doing for backups, and here is what I found:
- amanda
- bacula
- BackupPC
- FreeNAS

Here is my situation: we have pretty much all Sun hardware with a Sun StorageTek SL24 tape unit backing it all up. OSes are a combination of RHEL and CentOS. The software we are using is EMC NetWorker Management Console version 3.5.1.Build.269, based on NetWorker version 7.5.1.Build.269. The pickle we are in right now is that this software is Java based, and stops working after a very specific release of the JRE (1.6.26 or something like that). We still have some machines around with that release and it looks like we need to keep at least one of them, but this is clearly not a long-term viable solution.

In the end I want to get our central IT group to take over our backups if possible (we are a bit of an island outside of central IT), but as I pursue that path I also want to pursue a secondary path, assuming they will say no. I am familiar with BackupPC and will look at the other recommendations above. I think that Bacula and Amanda are sort of the drop-in replacements for what we have now, so I'll look at them most closely. But if I do have to carry forward with our own backups, I'd ideally like to get out of the tape game - never liked tapes.

Anyway, since the last big backup discussion was over a year ago, I figured I'd kick off another one to see if anything new has come up in the meantime. What are the current recommendations?

cheers,
-Alan

--
“Don't eat anything you've ever seen advertised on TV” - Michael Pollan, author of In Defense of Food
Re: [CentOS] Backup Redux
> NetWorker Management Console version 3.5.1.Build.269, based on NetWorker version 7.5.1.Build.269. The pickle we are in right now is that this software is Java based, and stops working after a very specific release of the JRE (1.6.26 or something like that). We still have some machines around with that release and it looks like we need to keep at least one of them, but this is clearly not a long-term viable solution.

I'm pretty sure I saw a note on the networker list that 7.6 SP3 works with update 27, update 29, and Java 7.
Re: [CentOS] Backup Redux
On Thu, Dec 8, 2011 at 8:53 AM, Alan McKay alan.mc...@gmail.com wrote:
> Here is my situation: we have pretty much all Sun hardware with a Sun StorageTek SL24 tape unit backing it all up. OSes are a combination of RHEL and CentOS. The software we are using is EMC NetWorker Management Console version 3.5.1.Build.269, based on NetWorker version 7.5.1.Build.269. The pickle we are in right now is that this software is Java based, and stops working after a very specific release of the JRE (1.6.26 or something like that).

That sounds like something that can/should be fixed.

> I am familiar with BackupPC and will look at the other recommendations above. I think that Bacula and Amanda are sort of the drop-in replacements for what we have now, so I'll look at them most closely. But if I do have to carry forward with our own backups, I'd ideally like to get out of the tape game - never liked tapes.

If you want mostly-online backups with perhaps an occasional tar archive, it will be hard to beat backuppc because of its storage pooling and ability to run over rsync or smb with no remote agents. For all-tape, I'd probably go with amanda because of its ability to juggle the full/incremental mix automatically to fit the available tape size. I haven't used bacula, but it looks like it might be good if you want a mix of online and tape storage and can deal with the agent installs.

--
Les Mikesell
lesmikes...@gmail.com
Re: [CentOS] Backup Redux
On Dec 8, 2011, at 8:43 AM, Les Mikesell wrote:
> On Thu, Dec 8, 2011 at 8:53 AM, Alan McKay alan.mc...@gmail.com wrote:
>> Here is my situation: we have pretty much all Sun hardware with a Sun StorageTek SL24 tape unit backing it all up. OSes are a combination of RHEL and CentOS. The software we are using is EMC NetWorker Management Console version 3.5.1.Build.269, based on NetWorker version 7.5.1.Build.269. The pickle we are in right now is that this software is Java based, and stops working after a very specific release of the JRE (1.6.26 or something like that).
> That sounds like something that can/should be fixed.
>> I am familiar with BackupPC and will look at the other recommendations above. I think that Bacula and Amanda are sort of the drop-in replacements for what we have now, so I'll look at them most closely. But if I do have to carry forward with our own backups, I'd ideally like to get out of the tape game - never liked tapes.
> If you want mostly-online backups with perhaps an occasional tar archive, it will be hard to beat backuppc because of its storage pooling and ability to run over rsync or smb with no remote agents. For all-tape, I'd probably go with amanda because of its ability to juggle the full/incremental mix automatically to fit the available tape size. I haven't used bacula, but it looks like it might be good if you want a mix of online and tape storage and can deal with the agent installs.

also - Bacula now has an 'Enterprise' version with SLA, and yes, Bacula can not only do tape and/or disk but can also migrate backup jobs (ie, disk to tape): http://www.baculasystems.com/

Craig
Re: [CentOS] Backup Redux
On Thu, Dec 8, 2011 at 09:43:21 CET, Les Mikesell wrote:
> On Thu, Dec 8, 2011 at 8:53 AM, Alan McKay alan.mc...@gmail.com wrote:
>> Here is my situation: we have pretty much all Sun hardware with a Sun StorageTek SL24 tape unit backing it all up. OSes are a combination of RHEL and CentOS. The software we are using is EMC NetWorker Management Console version 3.5.1.Build.269, based on NetWorker version 7.5.1.Build.269. The pickle we are in right now is that this software is Java based, and stops working after a very specific release of the JRE (1.6.26 or something like that).
> That sounds like something that can/should be fixed.
>> I am familiar with BackupPC and will look at the other recommendations above. I think that Bacula and Amanda are sort of the drop-in replacements for what we have now, so I'll look at them most closely. But if I do have to carry forward with our own backups, I'd ideally like to get out of the tape game - never liked tapes.
> If you want mostly-online backups with perhaps an occasional tar archive, it will be hard to beat backuppc because of its storage pooling and ability to run over rsync or smb with no remote agents. For all-tape, I'd probably go with amanda because of its ability to juggle the full/incremental mix automatically to fit the available tape size. I haven't used bacula, but it looks like it might be good if you want a mix of online and tape storage and can deal with the agent installs.

In this last scenario, dar (http://dar.linux.free.fr/doc/Features.html) works just fine and doesn't need any remote agent. It is also at least as fast as Bacula at restore time, provided the catalogue is ready.

--
Philippe Naudin
UMR MISTEA : Mathématiques, Informatique et STatistique pour l'Environnement et l'Agronomie
INRA, bâtiment 29 - 2 place Viala - 34060 Montpellier cedex 2
tel: 04.99.61.26.34, fax: 04.99.61.29.03, email: nau...@supagro.inra.fr
Re: [CentOS] Backup Redux
On Thu, Dec 8, 2011 at 9:55 AM, Philippe Naudin philippe.nau...@supagro.inra.fr wrote:
>> If you want mostly-online backups with perhaps an occasional tar archive, it will be hard to beat backuppc because of its storage pooling and ability to run over rsync or smb with no remote agents. For all-tape, I'd probably go with amanda because of its ability to juggle the full/incremental mix automatically to fit the available tape size. I haven't used bacula, but it looks like it might be good if you want a mix of online and tape storage and can deal with the agent installs.
> In this last scenario, dar (http://dar.linux.free.fr/doc/Features.html) works just fine and doesn't need any remote agent. It is also at least as fast as Bacula at restore time, provided the catalogue is ready.

That looks like a one-off kind of tool. Backuppc/amanda/bacula are all frameworks to manage potentially large numbers of targets.

Another interesting thing is Relax and Recover (http://rear.sourceforge.net/ - in EPEL as rear). This is something that you run on a working system to generate a bootable iso with that system's own tools to reconstruct the current filesystem layout (including LVM/md raid, etc.) and restore a backup onto it. It includes a few backup methods internally, but with a small amount of work you could integrate your own backup approach into it to get a fully-scripted bare-metal restore.

--
Les Mikesell
lesmikes...@gmail.com
Re: [CentOS] Backup Redux
> I'm pretty sure I saw a note on the networker list that 7.6 SP3 works with update 27, update 29, and Java 7.

Well, we don't have a support contract - is it a free upgrade?

--
“Don't eat anything you've ever seen advertised on TV” - Michael Pollan, author of In Defense of Food
Re: [CentOS] Backup Redux
Alan McKay wrote:
> Hey folks, I just went through the archives to see what people are doing for backups, and here is what I found:
> - amanda
> - bacula
> - BackupPC
> - FreeNAS

You missed rsync.

snip

mark
Re: [CentOS] Backup Redux
On Thu, Dec 8, 2011 at 10:31 AM, m.r...@5-cent.us wrote:
>> I just went through the archives to see what people are doing for backups, and here is what I found:
>> - amanda
>> - bacula
>> - BackupPC
>> - FreeNAS
> You missed rsync.

Rsync is another one-off approach where you have to roll your own commands per target. Backuppc can use rsync as the transport, collating all the results into one centrally managed archive with a web interface that makes it easier to set up than rsync itself. Plus it will compress the data and pool all identical content, so you can keep much more history online than you would expect.

--
Les Mikesell
lesmikes...@gmail.com
Re: [CentOS] Backup Redux
Les Mikesell wrote:
> On Thu, Dec 8, 2011 at 10:31 AM, m.r...@5-cent.us wrote:
>>> I just went through the archives to see what people are doing for backups, and here is what I found:
>>> - amanda
>>> - bacula
>>> - BackupPC
>>> - FreeNAS
>> You missed rsync.
> Rsync is another one-off approach where you have to roll your own commands per target. Backuppc can use rsync as the transport,
snip

Actually, my manager wrote a set of scripts some years ago, and we *do* have centralized backup setups, which get automagically pushed out, and the backup hosts know what directories to back up from each server. But it is a roll-your-own, though I'd have to go look to see if he released it as FOSS.

mark
Re: [CentOS] Backup Redux
On Thu, Dec 8, 2011 at 11:00 AM, m.r...@5-cent.us wrote:
> Actually, my manager wrote a set of scripts some years ago, and we *do* have centralized backup setups, which get automagically pushed out, and the backup hosts know what directories to back up from each server. But it is a roll-your-own, though I'd have to go look to see if he released it as FOSS.

But is it better somehow than backuppc, which is basically a perl script that:
(a) can use rsync, tar, smb, or ftp to collect the backups
(b) provides a web interface with the ability to delegate host 'owners'
(c) schedules everything for you
(d) optionally compresses
(e) detects and pools files with duplicate content, even from different sources
(f) is packaged in EPEL

It does have its own quirks, of course. The main ones are that its rsync-in-perl (on the server side, so it can work with its own compressed files while chatting with a stock remote rsync) is somewhat slow, and that its archive storage, which uses hardlinks for pooling, may end up being impractical to copy with file-oriented tools. But basically it just takes care of itself after the initial setup.

--
Les Mikesell
lesmikes...@gmail.com
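BackupPC's real pool is more elaborate (compressed files, hash-collision chains), but the core idea behind (e) - deduplicating identical content across hosts with hardlinks - can be sketched in a few lines of shell. All paths and the md5-as-pool-key scheme below are illustrative, not BackupPC's actual on-disk layout:

```shell
set -e
rm -rf /tmp/pooldemo
mkdir -p /tmp/pooldemo/pool /tmp/pooldemo/pc
# Two "backed-up" files from different hosts with identical content.
echo "same content" > /tmp/pooldemo/pc/hostA_file
echo "same content" > /tmp/pooldemo/pc/hostB_file

# Pool by content hash: the first file seen seeds the pool entry,
# later duplicates are relinked to it so the data is stored only once.
for f in /tmp/pooldemo/pc/*; do
  h=$(md5sum "$f" | cut -d' ' -f1)
  if [ -e "/tmp/pooldemo/pool/$h" ]; then
    ln -f "/tmp/pooldemo/pool/$h" "$f"   # duplicate: share the inode
  else
    ln "$f" "/tmp/pooldemo/pool/$h"      # new content: add to pool
  fi
done

stat -c %h /tmp/pooldemo/pc/hostA_file   # 3 links: pool entry + both hosts
```

The payoff is the same as in BackupPC: N copies of the same file across hosts and backup runs cost roughly one file's worth of space plus directory entries.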
Re: [CentOS] Backup Redux
Anyone have any experience with this, which just came to my attention? http://www.arkeia.com/en/solutions/open-source-solutions

--
“Don't eat anything you've ever seen advertised on TV” - Michael Pollan, author of In Defense of Food
Re: [CentOS] Backup Redux
Les Mikesell wrote:
> On Thu, Dec 8, 2011 at 10:31 AM, m.r...@5-cent.us wrote:
>>> I just went through the archives to see what people are doing for backups, and here is what I found:
>>> - amanda
>>> - bacula
>>> - BackupPC
>>> - FreeNAS
>> You missed rsync.
> Rsync is another one-off approach where you have to roll your own commands per target. Backuppc can use rsync as the transport, collating all the results into one centrally managed archive with a web interface that makes it easier to set up than rsync itself. Plus it will compress the data and pool all identical content, so you can keep much more history online than you would expect.

I use backuppc, but find that in order to restore, one has to be or know the admin user password. There appears to be no way to open this up to users to directly see and restore from the file tree that it manages.
Re: [CentOS] Backup Redux
> I use backuppc, but find that in order to restore, one has to be or know the admin user password. There appears to be no way to open this up to users to directly see and restore from the file tree that it manages.

Huh? No. Users can do their own restores from the web interface without root access. I think you need to go back and read the fine manual a bit more :-) There is definitely a way to set up users on there, though. I have a fair bit of experience with BackupPC (great software).

--
“Don't eat anything you've ever seen advertised on TV” - Michael Pollan, author of In Defense of Food
Re: [CentOS] Backup Redux
On Thu, Dec 8, 2011 at 12:12 PM, Rob Kampen rkam...@kampensonline.com wrote:
> I use backuppc, but find that in order to restore, one has to be or know the admin user password. There appears to be no way to open this up to users to directly see and restore from the file tree that it manages.

You can delegate target machines to 'owners' who can only see their own machines with their login to the web interface, but there is not an easy way to do it at the home-directory or file-owner level for a multi-user machine. You can make a 'host' which is a subset of a target and point several of those at the same real host with the ClientNameAlias option, but it would take some additional work to secure those against each other. It probably could be done, though.

--
Les Mikesell
lesmikes...@gmail.com
Re: [CentOS] Backup Redux
On Thu, Dec 8, 2011 at 3:53 PM, Alan McKay alan.mc...@gmail.com wrote:
> Hey folks, I just went through the archives to see what people are doing for backups, and here is what I found:
> - amanda
> - bacula
> - BackupPC
> - FreeNAS
> Here is my situation: we have pretty much all Sun hardware with a Sun StorageTek SL24 tape unit backing it all up. OSes are a combination of RHEL and CentOS. The software we are using is EMC

My non-tape solution of choice is definitely rsync to a box with ZFS, snapshotted however often you'd like = forever incrementals. For more redundancy and performance, add more ZFS boxes and do replication between them.

For tapes, I'd go with Bacula, but my intermediate storage will probably be ZFS anyway, for easy management of filesystems. I like creating one storage device per client, as per this amazing write-up by Henrik Johansen: http://myunix.dk/category/bacula/

I'd choose Bacula mainly for experience and being comfortable with it. In this setup, I'm used to managing it all with Puppet: from server to client to storage agents, as well as creating individual zfs filesystems for each client on the storage server. I had to patch the puppet zfs provider a while back to make it work on FreeBSD.

For Bacula, there now exists an awesome (modern) web interface, with ACL support and all: http://webacula.sourceforge.net/

Good luck.

--
Mike
Re: [CentOS] Backup Redux
On 12/08/11 11:26 AM, Mikael Fridh wrote:
> For more redundancy and performance, add more ZFS boxes and do replication between them.

what zfs replication is that? last I heard, the only supported replication was physical block replication of the underlying device(s) (avs in solaris cluster, drbd in linux), and the replica couldn't be mounted at all; it was purely for standby failover scenarios.

--
john r pierce          N 37, W 122
santa cruz ca          mid-left coast
Re: [CentOS] Backup Redux
On Thu, Dec 8, 2011 at 8:38 PM, John R Pierce pie...@hogranch.com wrote:
> On 12/08/11 11:26 AM, Mikael Fridh wrote:
>> For more redundancy and performance, add more ZFS boxes and do replication between them.
> what zfs replication is that? last I heard, the only supported replication was physical block replication of the underlying device(s) (avs in solaris cluster, drbd in linux), and the replica couldn't be mounted at all; it was purely for standby failover scenarios.

What I mean is merely an incremental zfs send -i | zfs receive -F between the two boxes for each new snapshot being created. You're free to mount the filesystem, but each new receive will roll it back to the previous snapshot when another incremental comes in (using zfs receive -F). It's not filesystem replication per se, but more periodic snapshots + incremental transfers. For keeping multiple copies of backup data, I'd say it's more than good enough as replication.

--
Mike
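Spelled out, the scheme looks something like the following. This is only a sketch of the periodic-snapshot approach described above, not a tested runbook; the pool/dataset name (tank/backup), snapshot names, and the host "standby" are all hypothetical:

```shell
# Day 1: snapshot, then seed the standby box with a full send.
zfs snapshot tank/backup@2011-12-08
zfs send tank/backup@2011-12-08 | ssh standby zfs receive tank/backup

# Day 2 and onward: snapshot again and send only the delta between
# the two snapshots.  -F on the receiving side first rolls the target
# back to the last common snapshot, discarding any local changes made
# on the standby in the meantime.
zfs snapshot tank/backup@2011-12-09
zfs send -i tank/backup@2011-12-08 tank/backup@2011-12-09 | \
    ssh standby zfs receive -F tank/backup
```

Old snapshots can be destroyed on both sides once a newer common snapshot exists; the incremental stream only needs one snapshot in common between sender and receiver.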
[CentOS] Backup live system
Though I've worked with enterprise systems, I'm not familiar with FOSS backup software. Which of those recommended would allow me to back up a system while users are active on it? If it matters, the system uses LVM. I'd also like to be able to avoid needing the network if possible. That is, I'd plug a disk into a USB port and back up the system onto that... again, while the system is live. Thanks much. -- War is a failure of the imagination. --William Blake ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Backup live system
ken wrote: Though I've worked with enterprise systems, I'm not familiar with FOOS backup software. Which of those recommended would allow me to backup a system while users are active on it? If it matters the system uses LVM. I'd also like to be able to avoid needing the network if possible. That is, I'd plug in a disk into a USB port and backup the system onto that... again, while the system is live. There's always rsync - that's what we use. mark -- War is a failure of the imagination. --William Blake Like that sigfile. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Backup live system
On Thu, Oct 20, 2011 at 9:52 AM, ken geb...@mousecar.com wrote: Though I've worked with enterprise systems, I'm not familiar with FOOS backup software. Which of those recommended would allow me to backup a system while users are active on it? If it matters the system uses LVM. I'd also like to be able to avoid needing the network if possible. That is, I'd plug in a disk into a USB port and backup the system onto that... again, while the system is live. It is rare for linux applications to lock files, so almost all backup tools will work on an active system, catching the files in whatever state happens to appear in the filesystem. However, database-type applications will have their own requirements to preserve consistency across tables in the snapshot. Tar/dump/cpio/rsync are all good for copying data. If you want something that can completely reconstruct your system, look at http://rear.sourceforge.net/ (also in EPEL) which should meet your needs exactly. But, anytime someone mentions backups, I like to plug backuppc. It does use the network (and another machine) and it won't restore a bootable disk, but it generally takes care of itself and makes sure you always have backup copies with little effort. (http://backuppc.sourceforge.net/ and EPEL). -- Les Mikesell lesmikes...@gmail.com ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Backup live system
On Thu, 20 Oct 2011 10:52:15 -0400 ken geb...@mousecar.com wrote: If it matters the system uses LVM. I'd also like to be able to avoid needing the network if possible. That is, I'd plug in a disk into a USB port and backup the system onto that... again, while the system is live. If it should be an exact copy you can also do this via LVM snapshots e.g. http://www.howtoforge.com/linux_lvm_snapshots Brgds -- Freundliche Gruesse/Best Regards Benjamin Hackl IT/Administration Media FOCUS Research Ges.m.b.H. Maculangasse 8, 1220 Wien Austria Tel: +43 1 258 97 01-295 b.ha...@focusmr.com ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Backup live system
On Thu, Oct 20, 2011 at 10:52 AM, ken geb...@mousecar.com wrote: Though I've worked with enterprise systems, I'm not familiar with FOOS backup software. Which of those recommended would allow me to backup a system while users are active on it? If it matters the system uses LVM. I'd also like to be able to avoid needing the network if possible. That is, I'd plug in a disk into a USB port and backup the system onto that... again, while the system is live. Thanks much. Others have said that files are not locked on Linux, so you can back them up anyway, but this is surely not your point. The only way to get a consistent backup is to create a snapshot and back that up. If this is a VM you should be able to make a snapshot and then back up the VM files. LVM is a good way to do it on both physical and virtual machines, but there are a few caveats: - You need free PEs on the volume group. When you make an LVM snapshot it needs this extra space to store the changed blocks while the snapshot is in existence. Most default LVM installs do not reserve spare PEs for this. The amount of free PEs you need is completely dependent on how many changes get made to the volume while the snapshot exists. If you run out of PEs, the behavior is undefined. - There is a huge performance penalty. As long as any snapshot exists, there is at least a 50% performance hit. If this is a high performance database server, you might not be able to afford it. Make sure to do your backup during slow times. The howtoforge link seems to cover most of the mechanics. -☙ Brian Mathis ❧- ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
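The snapshot-then-backup cycle outlined above looks roughly like the following. The volume group and LV names (vg0, lv_data), the mount point, and the USB target path are all placeholders, and the 5G snapshot size is only a guess at the change budget for the backup window.

```shell
#!/bin/bash
# Sketch: snapshot an LV, back up the frozen view, drop the snapshot.
# vg0/lv_data, /mnt/snap and /media/usbdisk are assumed names.
set -euo pipefail

# Needs free PEs in vg0; the 5G here is the space reserved for blocks
# that change while the snapshot exists ('vgs' shows free space).
lvcreate --size 5G --snapshot --name lv_data_snap /dev/vg0/lv_data

mkdir -p /mnt/snap
mount -o ro /dev/vg0/lv_data_snap /mnt/snap

# Back up the consistent snapshot view, e.g. to a USB disk.
rsync -a /mnt/snap/ /media/usbdisk/backup/

umount /mnt/snap
# Drop the snapshot promptly: the copy-on-write overhead applies
# for as long as it exists.
lvremove -f /dev/vg0/lv_data_snap
```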
[CentOS] Backup and restore for CentOS 5.5
Hi - I have a NFS/NIS server environment running CentOS 5.5 on a Dell Optiplex 240. I would like to back it up and then move it to a new machine. Couple of questions: 1. This is an older computer Dell Optiplex 240 that I am unable to connect a USB drive to. In the linux rescue environment fdisk -l shows only /dev/hda but lsusb shows the three partitions that are on the drive. I would like to know how to connect the USB drive to the system so I can rsync or dd to the drive. 2. I was going to rsync a backup, excluding /proc, /sys, /dev and /tmp, either to a mounted USB drive or to a remote station. I know how to do that but how do I set the drive up for the restore on the new computer. I have google'd but without success. I would think I need to format the hard drive, then run the LVM tools (-very ignorant here), and install grub after all the files are in place (maybe mkinitrd too). Is there information available to do this or would someone be willing to provide what I would need to accomplish this. Thanks, -- Denis Becker ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Backup and restore for CentOS 5.5
On 10/03/11 1:16 PM, Denis wrote: Hi - I have a NFS/NIS server environment running CentOS 5.5 on a Dell Optiplex 240. I would like to back it up and then move it to a new machine. Couple of questions: 1. This is an older computer Dell Optiplex 240 that I am unable to connect a USB drive to. In the linux rescue environment fdisk -l shows only /dev/hda but lsusb shows the three partitions that are on the drive. I would like to know how to connect the USB drive to the system so I can rsync or dd to the drive. 2. I was going to rsync a backup, excluding /proc, /sys, /dev and /tmp, either to a mounted USB drive or to a remote station. I know how to do that but how do I setup the drive up for the restore on the new computer. I have google'd but without success. I would think I need to format the hard drive, then run the LVM tools (-very ignorant here), and install grub after all the files are in place (maybe mkinitrd too). Is there information avaiable to do this or would some be willing to provide what I would need to accomplish this. personally, I'd do a clean install on the new system, with CentOS 6, then copy over users from /etc/passwd and shadow, the /home directories, any other NFS share points, and manually configure any other applications you might host. I believe you can move an NIS master by bringing it up as a NIS client, synching it, then promoting it to a ypserver, then making it the new master (and in fact, if you do this, you don't even need to manually copy the users via /etc/{passwd,shadow} ) -- john r pierceN 37, W 122 santa cruz ca mid-left coast ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Backup and restore for CentOS 5.5
On Mon, Oct 3, 2011 at 3:16 PM, Denis denis.bec...@mnsu.edu wrote: Hi - I have a NFS/NIS server environment running CentOS 5.5 on a Dell Optiplex 240. I would like to back it up and then move it to a new machine. You don't mention the type of the new machine. If it is not identical hardware you are probably better off installing a new copy of CentOS on the new machine and then using rsync to copy over any needed data. Couple of questions: 1. This is an older computer Dell Optiplex 240 that I am unable to connect a USB drive to. In the linux rescue environment fdisk -l shows only /dev/hda but lsusb shows the three partitions that are on the drive. I would like to know how to connect the USB drive to the system so I can rsync or dd to the drive. Not sure about that. 2. I was going to rsync a backup, excluding /proc, /sys, /dev and /tmp, either to a mounted USB drive or to a remote station. I know how to do that but how do I setup the drive up for the restore on the new computer. I have google'd but without success. I would think I need to format the hard drive, then run the LVM tools (-very ignorant here), and install grub after all the files are in place (maybe mkinitrd too). Is there information avaiable to do this or would some be willing to provide what I would need to accomplish this. If you are going to identical hardware, you could use clonezilla to copy the drive to an image (either local via USB or to a network file share or an ssh connection) and reverse the process to copy the image back to the drive on your new machine. It is not impossible to make this (or other types of system copies) work on different hardware but you may have to rebuild the initrd with different driver modules. And in any case you will have to reconfigure the network since the MAC address copied from the old machine won't be correct in the new one. -- Les Mikesell lesmikes...@gmail.com ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Backup and restore for CentOS 5.5
Les Mikesell wrote: On Mon, Oct 3, 2011 at 3:16 PM, Denis denis.bec...@mnsu.edu wrote: Hi - I have a NFS/NIS server environment running CentOS 5.5 on a Dell Optiplex 240. I would like to back it up and then move it to a new machine. You don't mention the type of the new machine. If it is not identical hardware you are probably better off installing a new copy of CentOS on the new machine and then using rsync to copy over any needed data. snip At the very least, you're going to have to rebuild the initrd. mark, who missed that on the upgrade he's working on at the moment ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
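On CentOS 5 the initrd rebuild mark mentions is a one-liner; run it from the restored system (or from a chroot into it) so the driver modules for the new hardware are picked up. The path shown assumes the stock kernel naming.

```shell
# Rebuild the initrd for the running kernel (CentOS 5 style).
# -f overwrites the image that was copied over from the old machine.
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
```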
Re: [CentOS] backup script
Recall.. I run now the following task every day tar -cvzf /rescue/website-$(date +%u).tgz /var/www/htdocs/* I want now to move these files from the local server to a remote server via ftp. any help. Thanks On Fri, Jan 28, 2011 at 5:33 PM, cpol...@surewest.net wrote: madu...@gmail.com wrote: Should I add to my tar the following option -p, --preserve-permissions extract all protection information tar -cvzfp .. Thanks On Tue, Jan 25, 2011 at 7:10 PM, John Doe jd...@yahoo.com wrote: From: madu...@gmail.com madu...@gmail.com I want to create bash script to have a zip copy from a website running on linux /var/www/htdocs/* local on the same box on different directory I am thinking to do a local backup using crontab (snapshot my web) tar -cvzf /tmp/website-$(date +%Y%m%d-%H%M).tgz /var/www/htdocs/* This command will create a file /tmp/website-20110101-1459.tgz I want it run on daily basis and to keep the last 5days backup on the box and remove older version than 5days. A quick way to do it is to use the day of the week: website-$(date +%u).tgz It will automaticaly keep the last 7 days... Otherwise, you will have to use date calculations... I hope I'm not duplicating something someone has already said -- /tmp may not be the best possible choice for backups. A reboot could potentially help by cleansing that directory. Off-host copies (eg, scp website-20110101-1459.tgz fred@otherhost:/home/fred/backups/) would address a number of risks. -- Charles Polisher ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] backup script
On 03/20/2011 08:31 AM, madu...@gmail.com wrote: Recall.. I run now the following task every day tar -cvzf /rescue/website-$(date +%u).tgz /var/www/htdocs/* I want now to move these files from the local server to a remote server via ftp. any help. Thanks man lftp ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
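For the FTP upload, a minimal lftp invocation might look like the following. The host, credentials, and remote directory are placeholders, and note that plain FTP sends the password in the clear (and the command line is visible in ps), so scp/sftp is preferable if the remote end supports it.

```shell
# Upload today's day-of-week tarball over FTP with lftp.
# backupuser, PASSWORD, backuphost and /backups are placeholders.
DAY=$(date +%u)
lftp -u backupuser,PASSWORD \
     -e "cd /backups; put /rescue/website-${DAY}.tgz; bye" \
     ftp://backuphost
```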
Re: [CentOS] backup script
Should I add to my tar the following option -p, --preserve-permissions extract all protection information tar -cvzfp .. Thanks On Tue, Jan 25, 2011 at 7:10 PM, John Doe jd...@yahoo.com wrote: From: madu...@gmail.com madu...@gmail.com I want to create bash script to have a zip copy from a website running on linux /var/www/htdocs/* local on the same box on different directory I am thinking to do a local backup using crontab (snapshot my web) tar -cvzf /tmp/website-$(date +%Y%m%d-%H%M).tgz /var/www/htdocs/* This command will create a file /tmp/website-20110101-1459.tgz I want it run on daily basis and to keep the last 5days backup on the box and remove older version than 5days. A quick way to do it is to use the day of the week: website-$(date +%u).tgz It will automaticaly keep the last 7 days... Otherwise, you will have to use date calculations... JD ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] backup script
madu...@gmail.com wrote: Should I add to my tar the following option -p, --preserve-permissions extract all protection information tar -cvzfp .. Thanks On Tue, Jan 25, 2011 at 7:10 PM, John Doe jd...@yahoo.com wrote: From: madu...@gmail.com madu...@gmail.com I want to create bash script to have a zip copy from a website running on linux /var/www/htdocs/* local on the same box on different directory I am thinking to do a local backup using crontab (snapshot my web) tar -cvzf /tmp/website-$(date +%Y%m%d-%H%M).tgz /var/www/htdocs/* This command will create a file /tmp/website-20110101-1459.tgz I want it run on daily basis and to keep the last 5days backup on the box and remove older version than 5days. A quick way to do it is to use the day of the week: website-$(date +%u).tgz It will automaticaly keep the last 7 days... Otherwise, you will have to use date calculations... I hope I'm not duplicating something someone has already said -- /tmp may not be the best possible choice for backups. A reboot could potentially help by cleansing that directory. Off-host copies (eg, scp website-20110101-1459.tgz fred@otherhost:/home/fred/backups/) would address a number of risks. -- Charles Polisher ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] backup script
I have reallocated it to /home thx On Fri, Jan 28, 2011 at 5:33 PM, cpol...@surewest.net wrote: madu...@gmail.com wrote: Should I add to my tar the following option -p, --preserve-permissions extract all protection information tar -cvzfp .. Thanks On Tue, Jan 25, 2011 at 7:10 PM, John Doe jd...@yahoo.com wrote: From: madu...@gmail.com madu...@gmail.com I want to create bash script to have a zip copy from a website running on linux /var/www/htdocs/* local on the same box on different directory I am thinking to do a local backup using crontab (snapshot my web) tar -cvzf /tmp/website-$(date +%Y%m%d-%H%M).tgz /var/www/htdocs/* This command will create a file /tmp/website-20110101-1459.tgz I want it run on daily basis and to keep the last 5days backup on the box and remove older version than 5days. A quick way to do it is to use the day of the week: website-$(date +%u).tgz It will automaticaly keep the last 7 days... Otherwise, you will have to use date calculations... I hope I'm not duplicating something someone has already said -- /tmp may not be the best possible choice for backups. A reboot could potentially help by cleansing that directory. Off-host copies (eg, scp website-20110101-1459.tgz fred@otherhost:/home/fred/backups/) would address a number of risks. -- Charles Polisher ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] backup script
On Fri, 28 Jan 2011, madu...@gmail.com wrote: To: CentOS mailing list centos@centos.org From: madu...@gmail.com madu...@gmail.com Subject: Re: [CentOS] backup script I have reallocated it to /home thx On Fri, Jan 28, 2011 at 5:33 PM, cpol...@surewest.net wrote: madu...@gmail.com wrote: Should I add to my tar the following option -p, --preserve-permissions extract all protection information tar -cvzfp .. Thanks On Tue, Jan 25, 2011 at 7:10 PM, John Doe jd...@yahoo.com wrote: From: madu...@gmail.com madu...@gmail.com I want to create bash script to have a zip copy from a website running on linux /var/www/htdocs/* local on the same box on different directory I am thinking to do a local backup using crontab (snapshot my web) tar -cvzf /tmp/website-$(date +%Y%m%d-%H%M).tgz /var/www/htdocs/* This command will create a file /tmp/website-20110101-1459.tgz I want it run on daily basis and to keep the last 5days backup on the box and remove older version than 5days. A quick way to do it is to use the day of the week: website-$(date +%u).tgz It will automaticaly keep the last 7 days... Otherwise, you will have to use date calculations... I hope I'm not duplicating something someone has already said -- /tmp may not be the best possible choice for backups. A reboot could potentially help by cleansing that directory. Off-host copies (eg, scp website-20110101-1459.tgz fred@otherhost:/home/fred/backups/) would address a number of risks. Hi Charles. You might find this php script I wrote handy: http://forums.fedoraforum.org/showthread.php?t=248436 I use a seperate 500GB drive just for storing backups of various things I don't want to loose. Then at certain intervals (ie when I think needed), I burn the backups to CD or DVD - just to be extra safe! Most of my backup scripts are run by cron jobs overnight. 
Kind Regards, Keith Roberts - Websites: http://www.karsites.net http://www.php-debuggers.net http://www.raised-from-the-dead.org.uk All email addresses are challenge-response protected with TMDA [http://tmda.net] -___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] backup script
madu...@gmail.com wrote: I have reallocated it to /home thx Please stop top posting. Relocated it to /home, as in /home/backup? Don't clutter your base directories, that's very bad practice. mark ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] backup script
home folder for backup /backup On Fri, Jan 28, 2011 at 7:49 PM, m.r...@5-cent.us wrote: madu...@gmail.com wrote: I have reallocated it to /home thx Please stop top posting. Relocated it to /home, as in /home/backup? Don't clutter your base directories, that's very bad practice. mark ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] backup script
madu...@gmail.com wrote: home folder for backup /backup On Fri, Jan 28, 2011 at 7:49 PM, m.r...@5-cent.us wrote: madu...@gmail.com wrote: I have reallocated it to /home thx Please stop top posting. Relocated it to /home, as in /home/backup? Don't clutter your base directories, that's very bad practice. mark Do you actually understand what we're talking about when *many* of us here ask people to STOP TOP POSTING? mark ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] backup script
-Original Message- From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf Of m.r...@5-cent.us Sent: Friday, January 28, 2011 1:07 PM To: CentOS mailing list Subject: Re: [CentOS] backup script madu...@gmail.com wrote: home folder for backup /backup On Fri, Jan 28, 2011 at 7:49 PM, m.r...@5-cent.us wrote: madu...@gmail.com wrote: I have reallocated it to /home thx Please stop top posting. Relocated it to /home, as in /home/backup? Don't clutter your base directories, that's very bad practice. Do you actually understand what we're talking about when *many* of us here ask people to STOP TOP POSTING? mark Furthermore, do you understand the need to make clear fact-rich helpful posts? I have no personal gripe against top-posting that I don't also have against people quoting the entire message running 2+ pages to add that works for me or Package yada does it better at the bottom. You, madunix, are both top-posting and making uselessly short ambiguous posts. Please stop the one practice, or the other. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] backup script
On Fri, Jan 28, 2011 at 1:03 PM, madu...@gmail.com madu...@gmail.com wrote: home folder for backup /backup This is a tactical problem. If you actually read the File System Hierarchy guidelines, you'll see that it should be in /var as dynamic, volatile content, probably under /var/backup. If that backup repository is network mounted for whatever reasons, it also keeps mounting problems off of the / directory, which is very desirable. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] backup script
Hi, Try the ff: On 1/25/11 4:31 PM, madu...@gmail.com wrote: I want to create bash script to have a zip copy from a website running on linux /var/www/htdocs/* local on the same box on different directory I am thinking to do a local backup using crontab (snapshot my web) tar -cvzf /tmp/website-$(date +%Y%m%d-%H%M).tgz /var/www/htdocs/* This command will create a file /tmp/website-20110101-1459.tgz I want it run on daily basis Yes, just use crontab for that. Something like, 30 6 * * * command-or-path-to-script-here to run the command or script everyday 6:30 a.m. and to keep the last 5days backup on the box and remove older version than 5days. Can you point me out. I think the easiest way is to use logrotate. man logrotate for details. HTH, -- - Edwin - mailto:ml2ed...@gmail.com “Pleasant sayings are a honeycomb, sweet to the soul and a healing to the bones.”—Proverbs 16:24 ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] backup script
You could create a script and have a variable date --date=5 days ago append to your tar file and after that, combine it with if syntax. If match, then rm. HTH On Tue, Jan 25, 2011 at 3:31 PM, madu...@gmail.com madu...@gmail.comwrote: I want to create bash script to have a zip copy from a website running on linux /var/www/htdocs/* local on the same box on different directory I am thinking to do a local backup using crontab (snapshot my web) tar -cvzf /tmp/website-$(date +%Y%m%d-%H%M).tgz /var/www/htdocs/* This command will create a file /tmp/website-20110101-1459.tgz I want it run on daily basis and to keep the last 5days backup on the box and remove older version than 5days. Can you point me out. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] backup script
Am thinking to have this in my script #!/bin/bash tar -cvzf /tmp/website-$(date +%Y%m%d-%H%M).tgz /var/www/htdocs/* find /tmp/website/website*.tgz -ctime +5 -exec rm {} \; # removes older than 5 days crontab it 30 6 * * * /mypath/myscript On Tue, Jan 25, 2011 at 10:45 AM, Nelson ntseraf...@gmail.com wrote: You could create a script and have a variable date --date=5 days ago append to your tar file and after that, combine it with if syntax. If match, then rm. HTH On Tue, Jan 25, 2011 at 3:31 PM, madu...@gmail.com madu...@gmail.com wrote: I want to create bash script to have a zip copy from a website running on linux /var/www/htdocs/* local on the same box on different directory I am thinking to do a local backup using crontab (snapshot my web) tar -cvzf /tmp/website-$(date +%Y%m%d-%H%M).tgz /var/www/htdocs/* This command will create a file /tmp/website-20110101-1459.tgz I want it run on daily basis and to keep the last 5days backup on the box and remove older version than 5days. Can you point me out. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] backup script
On 25/01/11 21:56, madu...@gmail.com wrote: Am thinking to have this in my script #!/bin/bash tar -cvzf /tmp/website-$(date +%Y%m%d-%H%M).tgz /var/www/htdocs/* find /tmp/website/website*.tgz -ctime +5 -exec rm {} \; # removes older than 5 days That should do in your case. Though, in general, you would prefer the following (because, in the general case, that glob could match a _lot_ of things, though in _your_ case, it should be fine). find /tmp/website/ -name website\*.tgz -ctime +5 -exec rm {} \; Also, from a security standpoint (especially if your website contains private material the webserver would not serve), you should use umask to change the default permissions the archive is assigned. You can set this temporarily as follows: (umask 077; tar ) The (...) construct defines a _subshell_. A umask specifies the mode bits to clear on a new file, so 077 causes new files to be created as rw-------. Umask is a property inherited from parent process to child processes, and is in effect until either changed or the parent process (the shell, typically) ends. The umask _command_ (actually, _shell-internal_ command) affects the umask of the shell process, which causes the tar child process to see the change. To prevent subsequent processes also getting that same, restrictive, umask, I've used a sub-shell (the round-brackets), to limit the scope of the umask effect to just the tar command. PS. You're not really keeping your website backups in /tmp, are you? ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
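Putting the thread's pieces together (the timestamped archive name, the umask subshell, the -name form of find, and a destination outside /tmp), a sketch of the full script might look like this. The paths in the example call are the ones discussed above; the backup_site function and its parameters are an illustrative arrangement, not anyone's actual script.

```shell
#!/bin/bash
# Sketch: daily website backup with ~5-day rotation, assembled from
# the suggestions in this thread.
set -euo pipefail

backup_site() {
    local src="$1" dest="$2"
    mkdir -p "$dest"
    # Subshell umask: only the archive is created mode 0600; the
    # calling shell's umask is untouched afterwards.
    (umask 077; tar -czf "$dest/website-$(date +%Y%m%d-%H%M).tgz" -C "$src" .)
    # -name form, so find does the matching instead of the shell
    # expanding a glob into find's path arguments.
    find "$dest" -name 'website-*.tgz' -mtime +5 -exec rm {} \;
}

# Example cron usage (6:30 daily): 30 6 * * * /mypath/myscript
# backup_site /var/www/htdocs /home/backup
```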
Re: [CentOS] backup script
From: madu...@gmail.com madu...@gmail.com I want to create bash script to have a zip copy from a website running on linux /var/www/htdocs/* local on the same box on different directory I am thinking to do a local backup using crontab (snapshot my web) tar -cvzf /tmp/website-$(date +%Y%m%d-%H%M).tgz /var/www/htdocs/* This command will create a file /tmp/website-20110101-1459.tgz I want it run on daily basis and to keep the last 5days backup on the box and remove older version than 5days. A quick way to do it is to use the day of the week: website-$(date +%u).tgz It will automatically keep the last 7 days... Otherwise, you will have to use date calculations... JD ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
[CentOS] backup script
I want to create bash script to have a zip copy from a website running on linux /var/www/htdocs/* local on the same box on different directory I am thinking to do a local backup using crontab (snapshot my web) tar -cvzf /tmp/website-$(date +%Y%m%d-%H%M).tgz /var/www/htdocs/* This command will create a file /tmp/website-20110101-1459.tgz I want it run on daily basis and to keep the last 5days backup on the box and remove older version than 5days. Can you point me out. Thanks madunix ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
[CentOS] Backup KVM / Qemu on Virt-Manager
Hi guys, I have a CentOS system with virt-manager installed on it, the system is installed on a LVM partition with one PV for swap and one for /, I only use KVM and qemu virtual machine on this server, I want to do a backup from my Virtual Machines on this server should I use LVM backup or an other stuff to do these backups ? what should I do ? stopping all VM before backup (I think) then do a LVM backup all help is appreciated -- Cordialement, / Greetings, Georghy FUSCO ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Backup KVM / Qemu on Virt-Manager
This is what lvm snapshots are for. Make a snapshot, back it up, delete it. VM keeps running on the 'real' lv. On Wed, 7 Apr 2010, Georghy wrote: Hi guys, I have a CentOS system with virt-manager installed on it, the system is installed on a LVM partition with one PV for swap and one for /, I only use KVM and qemu virtual machine on this server, I want to do a backup from my Virtual Machines on this server should I use LVM backup or an other stuff to do these backups ? what should I do ? stopping all VM before backup (I think) then do a LVM backup all help is appreciated -- Jim Wildman, CISSP, RHCE j...@rossberry.com http://www.rossberry.com Society in every state is a blessing, but Government, even in its best state, is a necessary evil; in its worst state, an intolerable one. Thomas Paine ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
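In practice that snapshot/back-up/delete cycle, for a VM whose disk is the logical volume vg0/vm1 (names and paths assumed here), looks something like the following. Note the copy is only crash-consistent unless the guest is quiesced first.

```shell
# Sketch: snapshot a running VM's LV, image-copy it, drop the snapshot.
# vg0/vm1 and /backup are placeholders; snapshot size must cover the
# guest's writes while the copy runs.
lvcreate --size 2G --snapshot --name vm1_snap /dev/vg0/vm1

# Copy the frozen view; the VM keeps writing to the real LV meanwhile.
dd if=/dev/vg0/vm1_snap of=/backup/vm1-$(date +%Y%m%d).img bs=4M

lvremove -f /dev/vg0/vm1_snap
```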
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
On Wed, Feb 24, 2010 at 7:05 PM, Gavin Carr ga...@openfusion.com.au wrote: On Wed, Feb 24, 2010 at 05:06:08PM +0530, Agnello George wrote: On Wed, Feb 24, 2010 at 4:57 PM, Eero Volotinen eero.voloti...@iki.fi wrote: 2010/2/24 Agnello George agnello.dso...@gmail.com: On Wed, Feb 24, 2010 at 4:13 PM, Gavin Carr ga...@openfusion.com.au wrote: You might want to try brackup (http://code.google.com/p/brackup/). For very large trees of relatively small files it seems to significantly out-perform rsync-based backups. I've got brackup packages in my repository (see http://www.openfusion.net/linux/openfusion_rpm_repository). Cheers, Gavin is it possible with brackup to back it up to a different server on the same lan instead of /backup . Is there any documentation on the same . rsync or rdiff-backup works on local disk or remote disk.(and other backup methods too!) Does http://code.google.com/p/brackup/ also work in on remote machines . Brackup will backup to local disk, or remotely to ftp, sftp, Amazon S3, or Rackspace CloudFiles targets/servers. So yes, on a lan you can backup over ftp or sftp just fine. Re docs, install brackup, 'man Brackup::Manual::Overview'. I've also written a few blog posts on it: http://www.openfusion.net/tags/brackup. Cheers, Gavin I am trying to install the brackup app on my system, the documentations seems very helpful ( http://www.openfusion.net/net/fun_with_brackup) But i have a few queries with the config file : [TARGET:backups] type = Filesystem path = /backup [SOURCE:imapsource] path = /var/spool/imap chunk_size = 5m # what does this mean gpg_recipient = 5E1B3EC5 # what does this mean [SOURCE:bradhome] chunk_size = 64MB path = /raid/bradfitz/ ignore = ^\.thumbnails/ ignore = ^\.kde/share/thumbnails/ ignore = ^\.ee/minis/ ignore = ^build/ ignore = ^(gqview|nautilus)/thumbnails/ and suppose i want to backup it up to another server with scp / ssh how is this attatined . secondly in whant format is the backup maintained . 
-- Regards Agnello D'souza ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
Have you read Brackup::Manual::Overview? Your questions are all answered in the man pages there or linked from there. On Thu, Feb 25, 2010 at 05:21:47PM +0530, Agnello George wrote: I am trying to install the brackup app on my system; the documentation seems very helpful ( http://www.openfusion.net/net/fun_with_brackup ). But I have a few queries about the config file:

[TARGET:backups]
type = Filesystem
path = /backup

[SOURCE:imapsource]
path = /var/spool/imap
chunk_size = 5m            # what does this mean?
gpg_recipient = 5E1B3EC5   # what does this mean?

man Brackup::Manual::Overview; man Brackup::Root

And suppose I want to back it up to another server with scp/ssh - how is this attained?

man Brackup::Target::Sftp

Secondly, in what format is the backup maintained?

Backups are trees of file chunks, and a metadata file to put the chunks back together as files. So you get de-duplication for free between files and across backups. Cheers, Gavin
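Gavin's chunk-plus-metadata description can be sketched in a few lines of shell. This is only an illustration of content-addressed chunk pooling under invented names and paths, not brackup's actual on-disk format:

```shell
# Minimal sketch of content-addressed storage: a chunk is stored under
# its own checksum, so identical content across files or backup runs
# is pooled automatically. NOT brackup's real format.
set -e
STORE=$(mktemp -d)

store_chunk() {
  sum=$(sha256sum "$1" | cut -d' ' -f1)
  if [ ! -e "$STORE/$sum" ]; then
    cp "$1" "$STORE/$sum"        # new content: store it once
  fi                             # duplicate content: nothing to write
  echo "$sum"                    # caller records this in the metadata file
}

WORK=$(mktemp -d)
echo "same mail body" > "$WORK/a"
echo "same mail body" > "$WORK/b"   # exact duplicate of a
echo "different body" > "$WORK/c"
for f in "$WORK/a" "$WORK/b" "$WORK/c"; do
  store_chunk "$f" >> "$WORK/metadata"
done
```

With three input files but only two distinct contents, the store ends up holding two chunks, while the metadata file still lists one checksum per file - which is why a second backup of unchanged mail costs almost nothing.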
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
On 02/24/2010 07:44 PM, Les Mikesell wrote: Err.. raid is NOT a backup solution. Neither is a snapshot in another location on the same machine. That's not true; raid is an online setup - a different location could be point-in-time, and on block devices that don't share the user access load. Which in turn makes it easier to do intensive complete backups to offsite without impacting the user level of service the machine can deliver, amongst other things[1]. Don't compare apples to bananas and call them oranges. - KB [1] Changeset and data/system model over time relation mapping for an adaptive system sizing feedback loop! ( how's that for buzzword injection! )
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
Karanbir Singh wrote: On 02/24/2010 07:44 PM, Les Mikesell wrote: Err.. raid is NOT a backup solution. Neither is a snapshot in another location on the same machine. That's not true; raid is an online setup - a different location could be point-in-time, and on block devices that don't share the user access load. [...] Don't compare apples to bananas and call them oranges. Yes and no... There's an overlapping set of possibilities that they do and don't back up. They both cover single disk failures. They don't cover big operator errors (rm -rf /), building/site disasters, some types of controller/electrical issues, etc. The snapshots give you a short history that can help with small user/operator errors, at the expense of being out of date when the live disk fails. So have several types of fruit to stay healthy. -- Les Mikesell lesmikes...@gmail.com
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
On Thu, 2010-02-25 at 12:33 +, Karanbir Singh wrote: [1] Changeset and data/system model over time relation mapping for an adaptive system sizing feedback loop! ( how's that for buzzword injection! ) --- Well, if you run vacuum on a Postgres DB then all that goes to the crapper... So we resort to real-time backups or replication. And replication on a Postgres DB, with 2GB blobs being inserted, is going to be a problem even if you have enough shared memory configured. The thing is, regardless, you are never in true RT replication. John
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
From: Agnello George agnello.dso...@gmail.com is it possible with brackup to back it up to a different server on the same lan instead of /backup . Is there any documentation on the same . It apparently supports:

Brackup::Target::Amazon - backup to Amazon's S3 service
Brackup::Target::CloudFiles - backup to Rackspace's CloudFiles service
Brackup::Target::Filebased
Brackup::Target::Filesystem - backup to a locally mounted filesystem
Brackup::Target::Ftp - backup to an FTP server
Brackup::Target::Sftp - backup to an SSH/SFTP server

So, you could use ftp or sftp... JD
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
Agnello George wrote: The requirement for backup is not primarily HDD failure, but human error. (E.g., one of our users - say the COO, with a huge mailbox - has deleted some very important mails and wants to recover them; he contacts us, as we are supposed to maintain his mail backup for a week and restore it immediately.) This is the main requirement for the backup, and that too on the same server, on a different partition. Have you considered using a snapshot approach? By that, I mean one which uses hard links to create the backup; as files get added/modified, the data are copied and links are created. Usually, one has a snapshot directory with something like a daily snapshot, and 24 hourly ones, something like that. Mike -- p=p=%c%s%c;main(){printf(p,34,p,34);};main(){printf(p,34,p,34);} Oppose globalization and One World Governments like the UN. This message made from 100% recycled bits. You have found the bank of Larn. I speak only for myself, and I am unanimous in that!
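The hard-link snapshot idea Mike describes can be sketched with plain cp, in the spirit of the cp -al trick that tools like rsnapshot use. This is an illustrative sketch with invented temp paths standing in for /var/spool/imap and /backup, not a production script; a real setup would use rsync -a --delete (or rsnapshot itself) for the copy step:

```shell
set -e
# Temp stand-ins for /var/spool/imap and /backup so the sketch runs anywhere.
SRC=$(mktemp -d); DST=$(mktemp -d)
echo "mail one" > "$SRC/0001."
echo "mail two" > "$SRC/0002."

snapshot() {
  # Rotate old snapshots, then hard-link yesterday's tree: unchanged
  # files cost no extra space, changed files get fresh copies.
  rm -rf "$DST/daily.2"
  if [ -d "$DST/daily.1" ]; then mv "$DST/daily.1" "$DST/daily.2"; fi
  if [ -d "$DST/daily.0" ]; then mv "$DST/daily.0" "$DST/daily.1"; fi
  if [ -d "$DST/daily.1" ]; then
    cp -al "$DST/daily.1" "$DST/daily.0"
  else
    mkdir "$DST/daily.0"
  fi
  # --remove-destination unlinks before writing, so older snapshots are
  # never modified through a shared hard link; -p keeps mtimes so -u
  # only copies files that actually changed.
  cp -pu --remove-destination "$SRC"/* "$DST/daily.0/"
}

snapshot                        # day 1: full copy
sleep 1                        # ensure a newer mtime on the changed file
echo "mail one, edited" > "$SRC/0001."
snapshot                        # day 2: only 0001. consumes new space
```

After the second run, daily.0/0002. and daily.1/0002. share one inode, while daily.1/0001. still holds the pre-edit content - exactly the "COO deleted his mail" recovery case.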
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
Karanbir Singh wrote: On 02/24/2010 07:44 PM, Les Mikesell wrote: Err.. raid is NOT a backup solution. Neither is a snapshot in another location on the same machine. That's not true; raid is an online setup - a different location could be point-in-time, and on block devices that don't share the user access load. [...] I think that, without causing any more dispute, I can point out that "backup" covers a wide range of solutions to a less broad, but still not unique, set of needs. No one of the means to back up is a full solution to all the needs which backup satisfies. Even when one is using the term backup narrowly, in the sense of protection from disaster, there are still different kinds of backup. For example, there is the full disaster recovery or "bare metal" backup, which is intended to work with another piece of identical hardware, starting with blank fixed storage and ending up with a working system which looks identical to the original at the epoch at which the backup was made. This is significantly different from one intended merely to restore the user-altered or user-created data on a machine which has been newly installed with a compatible version of the OS, for example. That's why one needs to know the intended use of the backup set before making any recommendations on the procedure and content of the backup set. Mike
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
Years ago, I set up a backup tool that wrapped rsync. It has faithfully and reliably backed up a dozen hosts, and too many TB of data to mention, offsite, automatically saving as many backup points as disk space allows. You're certainly welcome to try it! http://www.effortlessis.com/thisisnotbackupbuddy/ It works on an ascending-powers basis, e.g.: 1 day ago, 2 days ago, 4 days ago, 8 days ago, 16 days ago... until out of disk space. =) -Ben On Thursday 25 February 2010 10:22:13 am Mike McCarty wrote: [...] Have you considered using a snapshot approach? By that, I mean one which uses hard links to create the backup; as files get added/modified, the data are copied and links are created. Usually, one has a snapshot directory with something like a daily snapshot, and 24 hourly ones, something like that. Mike
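A thinning scheme like the one Ben describes - keep 1, 2, 4, 8, 16 days ago - boils down to one test per snapshot: is its age a power of two? The following is a hypothetical sketch of that pruning decision, not code from thisisnotbackupbuddy:

```shell
# Decide which daily snapshots survive an ascending-powers retention
# policy: ages 1, 2, 4, 8, 16, ... are kept, the rest are pruned.
is_power_of_two() {
  # n >= 1 and (n & (n-1)) == 0  holds exactly when n is a power of two
  [ "$1" -ge 1 ] && [ $(( $1 & ($1 - 1) )) -eq 0 ]
}

for age in 1 2 3 4 5 6 7 8; do
  if is_power_of_two "$age"; then
    echo "keep  backup.$age"
  else
    echo "prune backup.$age"
  fi
done
```

The appeal of this policy is logarithmic disk use: n days of history cost only about log2(n) snapshots, with recent days densely covered and older history progressively thinned.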
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
Greetings, On Fri, Feb 26, 2010 at 5:50 AM, Benjamin Smith li...@benjamindsmith.com wrote: Years ago, I set up a backup tool that wrapped rsync. It has faithfully and reliably backed up a dozen hosts, and too many TB of data to mention, offsite, automatically saving as many backup points as disk space allows. You're certainly welcome to try it! http://www.effortlessis.com/thisisnotbackupbuddy/ grin|smile|whatever - it is a criminal offence (in the free software world) to have hidden this gem from the world for so long. ;) Regards, Rajagopal
[CentOS] Backup solution to backup /var/spool/imap above 150GB data
Hi, We have an issue with one of our clients: they have a mail server with a /var/spool/imap partition of 150 GB. They need to take a differential backup onto a /backup partition (a different HDD of 250 GB total space). We have tried dar, rsync, rdiff and imapsync, but none is sufficing the need, as they take a lot of time and consume a lot of I/O. Is there any backup solution that you can think of that can work in this situation - open source or proprietary? -- Regards Agnello D'souza
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
2010/2/24 Agnello George agnello.dso...@gmail.com: We have tried dar, rsync, rdiff and imapsync, but none is sufficing the need, as they take a lot of time and consume a lot of I/O. So, you need more disk I/O? (i.e. add some disk space with a faster raid?) Take a look at: http://rdiff-backup.nongnu.org/ -- Eero
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
Dne 24.2.2010 10:00, Agnello George napsal(a): [...] Is there any backup solution that you can think of that can work in this situation - open source or proprietary? It seems to me that you are using mbox format, so a differential backup is hard to achieve. Migrate to maildir: every mail is a file, easy to back up differentially. David
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
On Wed, Feb 24, 2010 at 3:08 PM, David Hrbáč hrbac.c...@seznam.cz wrote: It seems to me that you are using mbox format, so a differential backup is hard to achieve. Migrate to maildir: every mail is a file, easy to back up differentially. David The backup directory structure is /var/spool/imap/a/adomain.com/a/agnello^dsouza/ -- Regards Agnello D'souza
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
2010/2/24 Agnello George agnello.dso...@gmail.com: The backup directory structure is /var/spool/imap/a/adomain.com/a/agnello^dsouza/ Well, does that directory contain one file or a lot of files? Usually the maildir structure is like tmp/cur/new directories, and each message is in its own file; in mbox, all mails are inside one file. -- Eero
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
On Wed, Feb 24, 2010 at 10:38:32AM +0100, David Hrbáč wrote: It seems to me that you are using mbox format, so a differential backup is hard to achieve. Migrate to maildir: every mail is a file, easy to back up differentially. rsync and rdiff should handle mbox format okay, though. Though I agree Maildir is generally nicer for differential backups. Agnello, how long is "a lot of time"? A backup is always going to have to walk the entire tree and checksum (or at least stat) every file, so there's a minimum cost you're always going to have. How long does a 'find /var/spool/imap -ls' take, for instance? You might want to try brackup (http://code.google.com/p/brackup/). For very large trees of relatively small files it seems to significantly out-perform rsync-based backups. I've got brackup packages in my repository (see http://www.openfusion.net/linux/openfusion_rpm_repository). Cheers, Gavin
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
2010/2/24 Eero Volotinen eero.voloti...@iki.fi: Well, does that directory contain one file or a lot of files? Usually the maildir structure is like tmp/cur/new directories, and each message is in its own file; in mbox, all mails are inside one file. It is in a maildir format, and the structure is slightly different from what I mentioned earlier:

r...@server1 ~]# ls -la /var/spool/imap/a/user/ajay/
total 5180
-rw------- 1 cyrus mail   34616 Feb 24 16:02 4790.
-rw------- 1 cyrus mail    4490 Feb 24 16:03 4791.
-rw------- 1 cyrus mail  199253 Feb 24 16:07 4792.
-rw------- 1 cyrus mail   22930 Feb 24 16:09 4793.
-rw------- 1 cyrus mail    8485 Feb 24 16:11 4794.
-rw------- 1 cyrus mail   12664 Feb 24 16:13 4795.
-rw------- 1 cyrus mail    4296 Feb 24 16:13 4796.
-rw------- 1 cyrus mail    5337 Feb 24 16:15 4797.
-rw------- 1 cyrus mail  111030 Feb 24 16:21 4798.
-rw------- 1 cyrus mail 4805500 Feb 24 16:23 4799.
-rw------- 1 cyrus mail   22920 Feb 24 16:23 cyrus.cache
-rw------- 1 cyrus mail     204 Dec 10 16:27 cyrus.header
-rw------- 1 cyrus mail     896 Feb 24 16:23 cyrus.index
-rw------- 1 cyrus mail    8669 Feb 24 11:28 cyrus.squat

This is just a very small user, as an example. -- Regards Agnello D'souza
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
On Wed, Feb 24, 2010 at 4:13 PM, Gavin Carr ga...@openfusion.com.au wrote: You might want to try brackup (http://code.google.com/p/brackup/). For very large trees of relatively small files it seems to significantly out-perform rsync-based backups. I've got brackup packages in my repository (see http://www.openfusion.net/linux/openfusion_rpm_repository). Cheers, Gavin Is it possible with brackup ( http://code.google.com/p/brackup/ ) to back it up to a different server on the same lan instead of /backup? Is there any documentation on the same? -- Regards Agnello D'souza
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
2010/2/24 Agnello George agnello.dso...@gmail.com: Is it possible with brackup to back it up to a different server on the same lan instead of /backup? Is there any documentation on the same? rsync or rdiff-backup works on local disk or remote disk (and other backup methods too!) -- Eero
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
On Wed, Feb 24, 2010 at 4:57 PM, Eero Volotinen eero.voloti...@iki.fi wrote: rsync or rdiff-backup works on local disk or remote disk (and other backup methods too!) Does http://code.google.com/p/brackup/ also work on remote machines?
-- Regards Agnello D'souza
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
2010/2/24 Agnello George agnello.dso...@gmail.com: Does http://code.google.com/p/brackup/ also work on remote machines? Summary from the webpage: "Flexible backup tool. Slices, dices, encrypts, and sprays across the net - notably to Amazon's S3." -- Eero
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
On 02/24/2010 11:21 AM, Agnello George wrote: [...] This is just a very small user, as an example. About 90% of your problem is already solved here: you are using cyrus, which has built-in mail-level replication. All you need to do is set up an lvm volume away from this main store and run your mail replica over to it, then just back up using whatever tools you want. The free win you get is online failover, plus backup in whatever manner you want! - KB
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
On Wed, Feb 24, 2010 at 6:19 PM, Karanbir Singh mail-li...@karan.org wrote: About 90% of your problem is already solved here: you are using cyrus, which has built-in mail-level replication. All you need to do is set up an lvm volume away from this main store and run your mail replica over to it, then just back up using whatever tools you want. The free win you get is online failover, plus backup in whatever manner you want! Yes - I just spoke to my senior and confirmed that this was already tried out; a delayed replication is possible. But the current situation is that we need to take the backup on the same server, on a different partition, /backup :( -- Regards Agnello D'souza
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
On Wed, Feb 24, 2010 at 05:06:08PM +0530, Agnello George wrote: Is it possible with brackup to back it up to a different server on the same lan instead of /backup? Is there any documentation on the same? Does http://code.google.com/p/brackup/ also work on remote machines? Brackup will backup to local disk, or remotely to ftp, sftp, Amazon S3, or Rackspace CloudFiles targets/servers. So yes, on a lan you can backup over ftp or sftp just fine. Re docs, install brackup, 'man Brackup::Manual::Overview'. I've also written a few blog posts on it: http://www.openfusion.net/tags/brackup. Cheers, Gavin
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
On Wed, Feb 24, 2010 at 7:05 PM, Gavin Carr ga...@openfusion.com.au wrote: Brackup will backup to local disk, or remotely to ftp, sftp, Amazon S3, or Rackspace CloudFiles targets/servers. So yes, on a lan you can backup over ftp or sftp just fine. Re docs, install brackup, 'man Brackup::Manual::Overview'. I've also written a few blog posts on it: http://www.openfusion.net/tags/brackup. Cheers, Gavin It will take me some time to try this... I will get back with the results! Thanks. -- Regards Agnello D'souza
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
Agnello George wrote: Hi, We have an issue with one of our clients: they have a mail server with a /var/spool/imap partition of 150 GB. They need to take a differential backup onto a /backup partition (a different HDD of 250 GB total space). We have tried dar, rsync, rdiff and imapsync, but none is sufficing the need, as they take a lot of time and consume a lot of I/O. Is there any backup solution that you can think of that can work in this situation - open source or proprietary? If you are just concerned about a single disk failure, you could set up RAID1 on the disks (with some downtime to rebuild...) to keep the copy in real time with little loss of speed. Rsync should work as well as anything for snapshots, but you might need to update to a 3.x version to speed up handling large numbers of files. The 2.x version included in CentOS will read the entire directory tree into memory before starting the comparisons and copies. The rpmforge repo has a packaged 3.0.7 version, but I haven't tried it. -- Les Mikesell lesmikes...@gmail.com
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
On 02/24/2010 01:07 PM, Agnello George wrote: Yes, just spoke to my senior and confirmed that this was already tried out; delayed replication is possible, but the current situation is that we need to take the backup on the same server, on a different partition /backup :( You can replicate to a local mail store as well. Just make sure you put it on a block device that is suitable and fits in with the rest of your backup strategy. And if you put it in an isolated enough place on the block dev, it won't contend with the users' access. - KB
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
Agnello George wrote: Hi We have an issue with one of our clients: they have a mail server with a /var/spool/imap partition of 150 GB. They need to take differential backups onto a /backup partition (a different HDD with 250 GB of total space). You've stated things in terms of solutions. You may possibly get better answers if you state your goal. There is some capability you are trying to achieve. Tell us what that is, and you may make more progress. IOW, what is the purpose of the backup? As one mentioned, RAID may handle your needs. Mike -- p=p=%c%s%c;main(){printf(p,34,p,34);};main(){printf(p,34,p,34);} Oppose globalization and One World Governments like the UN. This message made from 100% recycled bits. You have found the bank of Larn. I speak only for myself, and I am unanimous in that!
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
2010/2/24 Mike McCarty mike.mcca...@sbcglobal.net: Agnello George wrote: Hi We have an issue with one of our clients: they have a mail server with a /var/spool/imap partition of 150 GB. They need to take differential backups onto a /backup partition (a different HDD with 250 GB of total space). You've stated things in terms of solutions. You may possibly get better answers if you state your goal. There is some capability you are trying to achieve. Tell us what that is, and you may make more progress. IOW, what is the purpose of the backup? As one mentioned, RAID may handle your needs. Err.. RAID is NOT a backup solution. -- Eero
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
On 2/24/2010 1:31 PM, Eero Volotinen wrote: We have an issue with one of our clients: they have a mail server with a /var/spool/imap partition of 150 GB. They need to take differential backups onto a /backup partition (a different HDD with 250 GB of total space). You've stated things in terms of solutions. You may possibly get better answers if you state your goal. There is some capability you are trying to achieve. Tell us what that is, and you may make more progress. IOW, what is the purpose of the backup? As one mentioned, RAID may handle your needs. Err.. RAID is NOT a backup solution. Neither is a snapshot in another location on the same machine. But both will cover the most likely thing to fail, with RAID doing it transparently and the snapshot losing data from the time since the last snapshot copy happened. Usually what you want is RAID _and_ a history of backups kept elsewhere. -- Les Mikesell lesmikes...@gmail.com
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
Eero Volotinen wrote: 2010/2/24 Mike McCarty mike.mcca...@sbcglobal.net: Agnello George wrote: Hi We have an issue with one of our clients: they have a mail server with a /var/spool/imap partition of 150 GB. They need to take differential backups onto a /backup partition (a different HDD with 250 GB of total space). You've stated things in terms of solutions. You may possibly get better answers if you state your goal. There is some capability you are trying to achieve. Tell us what that is, and you may make more progress. IOW, what is the purpose of the backup? As one mentioned, RAID may handle your needs. Err.. RAID is NOT a backup solution. Of course not. RAID is a means to achieve availability, which may be his goal. Karanbir already stated a means to do what he seemed to want, but it seemed not to satisfy his needs. Unless the query is placed in terms of requirements and goals, instead of solutions, it'll be difficult to achieve satisfactory results. The purpose of backup is some degree of disaster recovery, and perhaps also migration. If that's truly his goal, then ISTM that Karanbir suggested a viable solution to achieving availability while also performing backup, by doing on-the-fly duplication of the data onto another file system which can then be backed up at leisure. Doing so in a manner which ensures a true snapshot may be more difficult to achieve, while still ensuring availability. I normally do my backups in single user mode with all file systems mounted read only, except the one to receive the backup. That of course precludes availability during the backup procedure. That's why I would like to see what he actually wants to achieve, instead of how he has chosen to go about it. Mike
Re: [CentOS] Backup solution to backup /var/spool/imap above 150GB data
On Thu, Feb 25, 2010 at 1:19 AM, Mike McCarty mike.mcca...@sbcglobal.net wrote: Eero Volotinen wrote: 2010/2/24 Mike McCarty mike.mcca...@sbcglobal.net: Agnello George wrote: Hi We have an issue with one of our clients: they have a mail server with a /var/spool/imap partition of 150 GB. They need to take differential backups onto a /backup partition (a different HDD with 250 GB of total space). You've stated things in terms of solutions. You may possibly get better answers if you state your goal. There is some capability you are trying to achieve. Tell us what that is, and you may make more progress. IOW, what is the purpose of the backup? As one mentioned, RAID may handle your needs. Err.. RAID is NOT a backup solution. Of course not. RAID is a means to achieve availability, which may be his goal. Karanbir already stated a means to do what he seemed to want, but it seemed not to satisfy his needs. Unless the query is placed in terms of requirements and goals, instead of solutions, it'll be difficult to achieve satisfactory results. The purpose of backup is some degree of disaster recovery, and perhaps also migration. If that's truly his goal, then ISTM that Karanbir suggested a viable solution to achieving availability while also performing backup, by doing on-the-fly duplication of the data onto another file system which can then be backed up at leisure. Doing so in a manner which ensures a true snapshot may be more difficult to achieve, while still ensuring availability. I normally do my backups in single user mode with all file systems mounted read only, except the one to receive the backup. That of course precludes availability during the backup procedure. That's why I would like to see what he actually wants to achieve, instead of how he has chosen to go about it. Mike -- The requirement for the backup is not primarily HDD failure, but human error.
In case one of our users (e.g. the COO, who has a huge mailbox) has deleted certain very important mails and wants to recover them, he contacts us, as we are supposed to maintain his mail backup for a week and restore it immediately. That is the main requirement for the backup, and it has to be on the same server, on a different partition. -- Regards Agnello D'souza
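[Editor's note: with dated snapshot trees (such as the rsync --link-dest scheme discussed earlier in the thread), the "COO deleted his mail" case becomes a plain copy out of the relevant day's tree. A sketch with an entirely hypothetical snapshot layout and mailbox path — real IMAP servers (Cyrus, Dovecot) have their own on-disk layouts and may need index rebuilds after a restore:]

```shell
#!/bin/sh
# Restore one user's folder from a week-old snapshot (hypothetical paths).
SNAP=/backup/snapshots/2010-02-18
SPOOL=/var/spool/imap

# cp -a preserves ownership, permissions and timestamps on the
# restored messages.
cp -a "$SNAP/user/coo/Important" "$SPOOL/user/coo/"
```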
Re: [CentOS] Backup server
-Original Message- From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf Of m.r...@5-cent.us Sent: Friday, January 15, 2010 5:14 PM To: CentOS mailing list Subject: Re: [CentOS] Backup server I was thinking about your long term here. Make sure to use LVM to create your underlying partition. Then you can add disk space in the future without having to reformat everything and can just grow your ext3/ext4 partition instead. With six drives installed, there is no more space to add more drives in the chassis. But thanks for the hint! snip Ahh, but when you replace some of them with larger drives? Then I'll consider it. ;-) -- /Sorin
[CentOS] Backup solution-Solved (Was: Backup server)
-Original Message- From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf Of Arturas Skauronas Sent: Friday, January 15, 2010 6:50 PM To: CentOS mailing list Subject: Re: [CentOS] Backup server Guys, BackupPC works like the proverbial charm. Thank you very much to all who advised and gave me hints about this solution and the initial burn-in problems! -- /Sorin
Re: [CentOS] Backup server
-Original Message- From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf Of Les Mikesell Sent: Thursday, January 14, 2010 3:14 PM To: CentOS mailing list Subject: Re: [CentOS] Backup server Yes, but if you use the epel rpm, either mount it at /var/lib/BackupPC or put a symlink there before the install. If you install from the sourceforge source there is an install script that modifies the location so you can put things where you want, but the rpm packages have already done that. The next version will make this easier to change but the current one needs to stay in the location set when the package was built. Ran into some problems and couldn't login to the web interface. The above helped, when tracking down the paths and symlinks. Thanks! So far, BackupPC looks good. Will start configuring it now and do some test backups later this afternoon. Darn users can't let me work in peace... ;-) -- /Sorin
Re: [CentOS] Backup server
Sorin Srbu wrote: Today I have five 500GB-disks raided on a linux machine. Remove one for parity and I have 2TB of real space available. Doing a 0+1, i.e. 1TB, would indeed be better as performance goes, but 1TB of space, well, it just isn't enough unfortunately. As it is now, the 2TB shebang is mounted as /backup. Does that count as a single filesystem? I was thinking about your long term here. Make sure to use LVM to create your underlying partition. Then you can add disk space in the future without having to reformat everything and can just grow your ext3/ext4 partition instead. -- Benjamin Franz
Re: [CentOS] Backup server
-Original Message- From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf Of Benjamin Franz Sent: Friday, January 15, 2010 2:26 PM To: CentOS mailing list Subject: Re: [CentOS] Backup server I was thinking about your long term here. Make sure to use LVM to create your underlying partition. Then you can add disk space in the future without having to reformat everything and can just grow your ext3/ext4 partition instead. With six drives installed, there is no more space to add more drives in the chassis. But thanks for the hint! -- /Sorin
Re: [CentOS] Backup server
Sorin Srbu wrote: So you need to be able to walk the fine line between these two. I'm trying. Sometimes it just isn't enough. Although the boss has a soft spot for Linux, as he also heads the CADD (Computer Aided Drug Design) group. To put it into perspective, ask the manager how much it would cost the business if this data was unrecoverable? After that, if they still don't want to spend a few hundred $$s on the insurance, get it in writing that your manager understands the risk and print it out and post it on your office wall. Rather confrontational, isn't it? Me being a Swede, I try to avoid those situations if possible, and find a compromise instead that both parties can live with. 8-} Oh, and I'm a government employee, so the money I spend is tax-payers' money. Got to be careful there. Being careful with the money is the point. Someone has to understand the risks. You know how that saying goes? You can choose between good, fast and cheap, but you're only ever allowed to pick any two. For me that's IT in a nutshell. ;-) The other question to ask is whether an offsite copy is needed. After a fire or other site disaster some businesses might collect the insurance money and disappear - others might want to be able to rebuild and continue. Government operations would probably need to continue and need a plan for that. -- Les Mikesell lesmikes...@gmail.com
Re: [CentOS] Backup server
Sorin Srbu wrote: -Original Message- From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf Of Benjamin Franz Sent: Friday, January 15, 2010 2:26 PM To: CentOS mailing list Subject: Re: [CentOS] Backup server I was thinking about your long term here. Make sure to use LVM to create your underlying partition. Then you can add disk space in the future without having to reformat everything and can just grow your ext3/ext4 partition instead. With six drives installed, there is no more space to add more drives in the chassis. But thanks for the hint! Ok. Oh, one last thing. Don't forget to use the '-E stride=XX,stripe-width=YY' options (where XX and YY are replaced with the appropriate values) when creating your filesystem on the RAID. Otherwise your disk drive usage will have 'hot spots' and slower than optimal speed. Do a man mke2fs to understand how to use them correctly. -- Benjamin Franz
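[Editor's note: per the mke2fs man page, stride is the RAID chunk size divided by the filesystem block size, and stripe-width is stride times the number of data-bearing disks. A sketch of the arithmetic with assumed values for the 5-disk RAID5 discussed above (64 KiB md chunks, 4 KiB ext3 blocks — substitute your own geometry):]

```shell
#!/bin/sh
# Derive mke2fs stripe options from the array geometry (assumed values).
CHUNK_KB=64      # md chunk size in KiB
BLOCK_KB=4       # ext3/ext4 block size in KiB
DATA_DISKS=4     # 5 drives in RAID5 minus 1 for parity

STRIDE=$((CHUNK_KB / BLOCK_KB))          # filesystem blocks per chunk
STRIPE_WIDTH=$((STRIDE * DATA_DISKS))    # blocks per full stripe

echo "mkfs.ext3 -E stride=$STRIDE,stripe-width=$STRIPE_WIDTH /dev/md0"
```

With these assumed values that prints `mkfs.ext3 -E stride=16,stripe-width=64 /dev/md0`.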
Re: [CentOS] Backup server
Sorin Srbu wrote: Yes, but if you use the epel rpm, either mount it at /var/lib/BackupPC or put a symlink there before the install. If you install from the sourceforge source there is an install script that modifies the location so you can put things where you want, but the rpm packages have already done that. The next version will make this easier to change but the current one needs to stay in the location set when the package was built. Ran into some problems and couldn't login to the web interface. The above helped, when tracking down the paths and symlinks. Thanks! So far, BackupPC looks good. Will start configuring it now and do some test backups later this afternoon. Darn users can't let me work in peace... ;-) You might want to join the mail lists: http://backuppc.sourceforge.net/info.html#lists if you have any specific questions about it. There are several users with a lot of experience and the author still participates. -- Les Mikesell lesmikes...@gmail.com