Re: [BackupPC-users] Per-PC pools

2013-03-14 Thread Stephen Joyce
Les,

No need to apologize. I just felt that the arguments made by
backu...@kosowsky.org were both specious and unnecessarily abrasive,
especially since I didn't solicit them.

I've been using BackupPC since 2005 and currently have 6 BPC servers
backing up several dozen TBs. While I'm sure there are readers here who
pre-date that, I don't consider myself a novice.

Anyway, if anyone's interested, my patch for per-PC pools for version 3.2.1
is attached. I'm currently beta-testing it.

This patch makes PoolDir and CPoolDir appear as configuration options
on the Backup Settings page; they may be overridden on a per-PC basis,
just like many other configuration settings.
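
For illustration, a per-PC override then looks like any other one (the paths
here are invented; naturally, the host's pc/ tree must live on the same
filesystem as the pool it points at, or hard-linking will fail):

    # pc/bigserver.pl - send this host's backups to a second pool
    $Conf{PoolDir}  = '/data/pool2/pool';
    $Conf{CPoolDir} = '/data/pool2/cpool';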



On Wed, Mar 13, 2013 at 11:32 AM, Les Mikesell lesmikes...@gmail.com wrote:

 On Wed, Mar 13, 2013 at 10:02 AM, Stephen Joyce sjb...@gmail.com wrote:

  Thank you for your input, but I've already considered your other
  suggestions.
 
  As a reminder to other gentle readers, and to avoid further philosophical
  tirades about my foolish idea, my original question was: "Has anyone gone
  down this path before me? If so, did you succeed or fail? I'd like to
  compare notes either way. If you haven't, then please don't feel compelled
  to send an abrasive reply."

 I'm sorry.  I thought you were asking for advice from people with
 experience.  If you have tested VMs and concluded that they are not
 suitable for your purpose, never mind, then.

 --
   Les Mikesell
  lesmikes...@gmail.com


BackupPC-3.2.1-per-pc-pools.patch
Description: Binary data


Re: [BackupPC-users] Per-PC pools

2013-03-14 Thread Les Mikesell
On Thu, Mar 14, 2013 at 12:32 PM, Stephen Joyce sjb...@gmail.com wrote:

 I've been using BackupPC since 2005 and currently have 6 BPC servers backing
 up several dozen TBs. While I'm sure there are readers here who pre-date
 that, I don't consider myself a novice.

I'm still somewhat curious about why you dismissed virtual machines,
which seem to me like a more obvious way to divvy up some
partly-shared resources - and would offer a cleaner separation of
control and better (still not great) methods to migrate to
new/different hardware later.

-- 
  Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Archiving incremental backups?

2013-03-14 Thread Peter Carlsson
On Sun, Mar 10, 2013 at 04:12:31AM +0100, Holger Parplies wrote:
 Hi,
 
 Peter Carlsson wrote on 2013-03-09 11:10:59 +0100 [Re: [BackupPC-users] Archiving incremental backups?]:
  [...]
  I regularly do an archive to a USB HDD that I store offsite. This is a
  manual step since I have to fetch the HDD, do the archiving, and then
  store the HDD offsite again.

  To make sure that files modified in between these regular archives are
  also captured, I want to make tar archives of the incremental backups
  and move them offsite daily over the Internet.

  The reason I only want to do this for the incrementally backed-up files
  is to reduce the amount of data sent over the Internet.
 
 in theory, you could use rsync to transfer incremental changes over the
 internet. This could even be substantially less traffic than an incremental
 tar, presuming you have changing files that can efficiently be transferred
 with rsync (like growing log files or large files where only small portions
 change from day to day). In the worst case (only new files, no changed files)
 there should not be much difference, except that rsync requires more
 computational power, and that rsync (with the right options) will track
 deletions. rsync also has built-in capability to limit transfer bandwidth.
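
 As a rough sketch (untested; the host name, paths and bandwidth cap are
 invented for illustration), a nightly cron job on the BackupPC server could
 be as simple as:

     #!/usr/bin/perl
     # push the source filesystem to the offsite host over ssh;
     # -a = archive, -H = preserve hard links, --delete = track deletions
     use strict;
     use warnings;

     my @opts = ('-aH', '--delete', '--bwlimit=512');   # cap at 512 KB/s
     # once a month, re-check everything (see the --ignore-times note below)
     push @opts, '--ignore-times' if (localtime)[3] == 1;

     system('rsync', @opts, '/home/', 'offsite:/backup/home/') == 0
         or die "rsync failed: $?\n";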
 
 The thing is, you would need to keep an image of your target file system on
 both ends, i.e. unpack the tar file on your USB HDD and have it accessible
 over the internet, and have a local copy on or near your BackupPC server.
 You could use the original file system (the one you back up in the first
 place) instead of a copy, presuming your concern is not to mirror backup
 state (your file system will probably have changed since the last backup),
 and the file system can handle the additional load of the rsync run.
 
 One thing I would like to note, though, is that you really want at least two
 independent offsite copies, so you will not be left without one if things
 break while you are replacing the old copy with a new one. This is especially
 true with tar archives. An rsync run will generally not destroy much (maybe
 one file) if it fails prematurely, though it will leave you with a state of
 your offsite copy that probably never existed on the original (which is easily
 fixed if you can just restart rsync). But you should note that requiring the
 offsite copy to be online (at least during the incremental update) makes it
 somewhat vulnerable, so having an additional *offline* offsite backup would
 be a good idea.
 
  But what I want to achieve is, at least in my opinion, better than only
  having the full manual archives, which at best will be done once or twice
  a month.
 
 With rsync, in my opinion, you can almost get away without the full manual
 archives. The same note applies here as to the frequently asked question
 "with rsync, do I need full backups at all?". Ideally, you would turn on
 the rsync option --ignore-times regularly (e.g. once a month) to catch any
 (rare) changes rsync might have missed (but that's just a detail).
 
 I don't know if what I described seems possible in your scenario. We should
 figure that out first before going into too much detail.
 
  The most important thing is that it is simple and automatic, otherwise
  it will never be done.
 
 It should be possible to make this automatic (except for exchanging or syncing
 the offsite drives once in a while, but that's much like your monthly manual
 archives now). It doesn't seem overly complex, but that depends somewhat on
 your setup. How were you planning to get the incrementals over the internet?
 Have you done size estimates to see if it is feasible?
 
 
 Jeffrey hinted at the possibility of tarring up the pc/host/num directory
 for an incremental backup. While that is certainly possible, it leaves you
 with the problem that you would need to unmangle names and interpret attrib
 files. That's not too difficult, but it would require some coding (see the
 sketch below). The thing is, how (and under what circumstances) would you
 ever *use* the offsite backup? In the event of a catastrophe? Bring in the
 disk, restore the full backup, restore all incrementals? I'd be in favour
 of having a working file system image (or better, two identical ones - one
 as a backup remaining offsite) which you can just plug in and use, if
 things need to go really fast, or copy over without needing to think much
 (i.e. without much that can go wrong).
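
 (For the name part, the sketch I mean - from memory; the real thing is
 fileNameUnmangle() in BackupPC::Lib - relies on each path component in the
 pc/ tree getting an 'f' prefix plus %xx escapes, so undoing it is roughly:

     use strict;
     use warnings;

     # undo BackupPC's file name mangling for one relative path
     sub unmangle {
         my ($path) = @_;
         my @parts = split m{/}, $path;
         for (@parts) {
             s/^f//;                              # strip the 'f' prefix
             s/%([0-9a-fA-F]{2})/chr(hex $1)/eg;  # undo %xx escapes
         }
         return join('/', @parts);
     }

     # prints "home/user/100%done.txt"
     print unmangle('fhome/fuser/f100%25done.txt'), "\n";

 Interpreting the attrib files, which also encode deletions, is the larger
 part of the coding.)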
 
 Another thing to keep in mind are backup levels. Normally, your incrementals
 will tend to be relative to the last full, meaning they will grow from day to
 day (because day 2 repeats the changes from day 1 and so on). You *can* fix
 that (at the cost of more complexity at backup time), but you probably don't
 want to bother with this approach anyway :).
 
 
 There's, of course, the third possibility of just setting up an offsite
 BackupPC server that makes independent backups of the target host. You'd
 want to have a VPN for that, but my guess is 

Re: [BackupPC-users] Archiving incremental backups?

2013-03-14 Thread Peter Carlsson
On Sat, Mar 09, 2013 at 07:41:24PM -0500, backu...@kosowsky.org wrote:
 Holger Parplies wrote at about 02:04:05 +0100 on Saturday, March 9, 2013:
   Hi,
   
  Peter Carlsson wrote on 2013-03-08 23:21:38 +0100 [[BackupPC-users] Archiving incremental backups?]:
Hello!

Is it possible to archive only the incremental part of a backup?
   
   no, I don't think that is supported, and I also don't think it is a good
   idea :-).
   
    I would like to make a tar archive only of the files that are part of
    an incremental backup.
   
   What exactly are you trying to achieve? You are describing the wrong step
   toward an unknown goal. We can't give you good advice without knowing what
   you want to do.
   
   In short, a tar file of a full backup + one (or more) tar files of
   incremental backups *do not* equal a snapshot of your source file system
   at the point of the last incremental, simply because you lose all
   information about files being deleted (a tar file cannot represent files
   that are supposed to be deleted on extraction, as far as I know). Of
   course, you might not have this information in your BackupPC history, if
   you are using tar or smb transport, but you can fix that by switching to
   rsync(d), and your archives will still only be tar files.
   
 
 However, if he is talking about 'tarring' the pc tree component for a
 BackupPC incremental backup, then the deletions are indeed encoded in
 the attrib files, though I am not sure how he would intend to
 reconstruct it all in practice without some code to glue it back
 together properly, unless he already has a copy of the fulls and is
 maintaining a full incremental chain...
 
 That being said, if that is what he means, he could just literally tar
 the incremental backup tree... however, he would still lose the
 pooling benefits...

Hi,

I realize, after your and others' explanations, that I have not thought
about all the shortcomings, but I was thinking this could be a good
compromise. It would allow me (at least with some effort) to restore
modified files even if a crash happened between two full archives.

I will go back to my drawing board and think more about what I want to
achieve, now that I have additional information.

Best regards,
Peter Carlsson



Re: [BackupPC-users] Per-PC pools

2013-03-14 Thread backuppc
Stephen Joyce wrote at about 13:32:30 -0400 on Thursday, March 14, 2013:
  Les,
  
  No need to apologize. I just felt that the arguments made by
  backu...@kosowsky.org were both specious and unnecessarily abrasive,
  especially since I didn't solicit them.
  
 OMG - grow a pair... if you don't like an idea, ignore it, don't cry
 about it...



Re: [BackupPC-users] Per-PC pools

2013-03-14 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2013-03-14 13:18:53 -0500 [Re: [BackupPC-users] Per-PC pools]:
 I'm still somewhat curious about why you dismissed virtual machines,
 which seem to me like a more obvious way to divvy up some
 partly-shared resources - and would offer a cleaner separation of
 control and better (still not great) methods to migrate to
 new/different hardware later.

and, in particular, would require no code changes at all. Yes, I'm curious,
too.

Concerning the code, I won't do more than have a quick glance at it, because
I'm not convinced it's a good idea. What the quick glance tells me is that
the patch is next to unreadable, because it's not in unified format (i.e. no
context).

So, you seem to change PoolDir and CPoolDir in the library (though I don't
see where; let's hope your code is always executed before some part of
BackupPC tries to access the pool). That basically avoids touching any code
in PoolWrite and probably BackupPC_link. And by having a string setting for
the pool location, you enable pool sharing in a simple way. But you (i.e.
the administrator of the BackupPC instance) had better get the configuration
right (i.e. have the relevant pc/ and *pool/ directories on a common file
system). You don't seem to have checks for hard-linking capability. There's
not much help from the software in case of configuration errors.
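
The kind of check I mean is simple - something like this sketch, run as the
backuppc user (paths invented), before trusting a given configuration:

    use strict;
    use warnings;

    # can we hard-link from the host's pc/ directory into its pool?
    my ($pcDir, $poolDir) = ('/data/pool2/pc/xyz', '/data/pool2/cpool');
    my $src = "$poolDir/.linktest.$$";
    my $dst = "$pcDir/.linktest.$$";
    open my $fh, '>', $src or die "cannot create $src: $!\n";
    close $fh;
    if (link $src, $dst) {
        print "ok: hard links work between pc/ and pool\n";
        unlink $dst;
    } else {
        print "NOT linkable ($!) - this pool assignment cannot work\n";
    }
    unlink $src;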

You had better hope that the code consistently uses the PoolDir and CPoolDir
settings and not $TopDir/{c,}pool (easy enough to check). You probably
remember that it was a long-standing bug that you couldn't set $Conf{TopDir}
with the desired effect ...

Furthermore, you lose the ability to use one BackupPC::Lib object for more
than one host (presuming you need the pool location). BackupPC probably
doesn't do that (I'm guessing), but I don't think this is a documented or
intended property.

While you might successfully use the code virtually forever, I would strongly
discourage anyone else from using it. There is just too much you need to
understand and have in mind. It's sort of half-automatic, because only half
of the consistency checks are done by the software. And by exposing the
*PoolDir settings to the web GUI, you are suggesting that they are
(changeable) configuration options, while in reality they are descriptions
of your disk layout to BackupPC. I'd probably have preferred a fixed setting
of ../{c,}pool relative to the host's pc/ directory - i.e. use the pool on
the FS where the pc/ directory is. That is less flexible, but also less
error-prone.

Regards,
Holger



Re: [BackupPC-users] Per-PC pools

2013-03-14 Thread backuppc
Stephen Joyce wrote at about 11:02:59 -0400 on Wednesday, March 13, 2013:
  I never said anything about philosophy. Those are your words and
  philosophical arguments.
  
  Thank you for your input, but I've already considered your other
  suggestions.

My bad... you are right... you cited "political reasons". I should
have referred you to alt.politics.backuppc if that is what you are
seeking. If, however, you are looking for actual technical advice,
then you might want to consider what people are trying to tell you.

  
  As a reminder to other gentle readers, and to avoid further philosophical
  tirades about my foolish idea, my original question was: "Has anyone gone
  down this path before me? If so, did you succeed or fail? I'd like to
  compare notes either way. If you haven't, then please don't feel compelled
  to send an abrasive reply."

I'm sorry, I thought you actually wanted help from contributors who
know a thing or two or three about how BackupPC works and not just
hear from people who pursued the same foolish idea (your wording).

Perhaps next time you should phrase your request more precisely if you
are only interested in hearing from people who succeeded or failed in
going down a path that those of us who actually know the workings of
BackupPC think to be foolish... My guess is you won't receive many
answers...



Re: [BackupPC-users] Per-PC pools

2013-03-14 Thread backuppc
Holger Parplies wrote at about 03:46:42 +0100 on Friday, March 15, 2013:
  Hi,

  Les Mikesell wrote on 2013-03-14 13:18:53 -0500 [Re: [BackupPC-users] Per-PC pools]:
   I'm still somewhat curious about why you dismissed virtual machines,
   which seem to me like a more obvious way to divvy up some
   partly-shared resources - and would offer a cleaner separation of
   control and better (still not great) methods to migrate to
   new/different hardware later.

  and, in particular, would require no code changes at all. Yes, I'm
  curious, too.

  Concerning the code, I won't do more than have a quick glance at it,
  because I'm not convinced it's a good idea. What the quick glance tells
  me is that the patch is next to unreadable, because it's not in unified
  format (i.e. no context).

  So, you seem to change PoolDir and CPoolDir in the library (though I
  don't see where; let's hope your code is always executed before some part
  of BackupPC tries to access the pool). That basically avoids touching any
  code in PoolWrite and probably BackupPC_link. And by having a string
  setting for the pool location, you enable pool sharing in a simple way.
  But you (i.e. the administrator of the BackupPC instance) had better get
  the configuration right (i.e. have the relevant pc/ and *pool/
  directories on a common file system). You don't seem to have checks for
  hard-linking capability. There's not much help from the software in case
  of configuration errors.

  You had better hope that the code consistently uses the PoolDir and
  CPoolDir settings and not $TopDir/{c,}pool (easy enough to check). You
  probably remember that it was a long-standing bug that you couldn't set
  $Conf{TopDir} with the desired effect ...

  Furthermore, you lose the ability to use one BackupPC::Lib object for
  more than one host (presuming you need the pool location). BackupPC
  probably doesn't do that (I'm guessing), but I don't think this is a
  documented or intended property.

  While you might successfully use the code virtually forever, I would
  strongly discourage anyone else from using it. There is just too much you
  need to understand and have in mind. It's sort of half-automatic, because
  only half of the consistency checks are done by the software. And by
  exposing the *PoolDir settings to the web GUI, you are suggesting that
  they are (changeable) configuration options, while in reality they are
  descriptions of your disk layout to BackupPC. I'd probably have preferred
  a fixed setting of ../{c,}pool relative to the host's pc/ directory -
  i.e. use the pool on the FS where the pc/ directory is. That is less
  flexible, but also less error-prone.
  

Beats me how this would work without also changing all the things
referencing the location of the pc tree (remember, the super-sensitive
OP specifically talked about using separate filesystems). In
particular, I see no reference to changes made to BackupPC_link -
because, as we all know, the pc tree and pool have to be on the same
filesystem... Then again, no changes have been made to the routine
that checks for linkability, so maybe the OP will never know about
such coding lapses.

Also, based on my playing with the code in Lib.pm and various other
modules, I seem to recall many more hard-coded references to pool
vs. cpool. Of course, it's possible that the OP got lucky and things
just somehow still work, but I sure as heck wouldn't count on it...

The fact that the OP doesn't know how to use standard patch format
doesn't give me a lot of confidence...



Re: [BackupPC-users] Per-PC pools

2013-03-14 Thread Holger Parplies
Hi,

backu...@kosowsky.org wrote on 2013-03-14 23:05:42 -0400 [Re: [BackupPC-users] Per-PC pools]:
 [...]
 Beats me how this would work without also changing all the things
 referencing the location of the pc tree (remember, the super-sensitive
 OP specifically talked about using separate filesystems).

my guess is that pc/xyz is a soft link to /somewhere/pc/xyz/ and the
corresponding pool setting is /somewhere/{c,}pool. This means setting up a
new host is manual work. I remember BackupPC_link having problems with soft
links at some point, although pc/ and pool/ were, in fact, on the same FS.
But, honestly, I don't want to waste much more time on this topic. It might
work. The ideas are not bad. And it might not work, but that's not my problem.
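
For the record, the per-host setup I am guessing at would be roughly this
(untested; paths invented):

    use strict;
    use warnings;

    # with BackupPC stopped, move host xyz's pc/ directory to the
    # second filesystem and leave a soft link behind
    my $top = '/var/lib/backuppc';    # $TopDir - example location
    my $new = '/data/pool2';          # the per-host filesystem
    system('mv', "$top/pc/xyz", "$new/pc/xyz") == 0
        or die "mv failed\n";
    symlink("$new/pc/xyz", "$top/pc/xyz")
        or die "symlink failed: $!\n";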

The only thing I am worried about is that someone finds the code at the end of
a Google search and uses it without further reading or thinking (in
particular, whether it applies to his situation at all, which it doesn't most
of the time the question pops up). For that reason alone I am commenting
(maybe he at least reads the thread). The OP can decide for himself, and I
wish him the best of luck. I'm confident he won't come here with his problems
if he runs into any.

 Then again, no changes have been made to the routine that checks for
 linkability, so maybe the OP will never know about such coding lapses.

They would show up in the logs. Probably.

 Also, based on my playing with the code in Lib.pm and various other
 modules, I seem to recall many more hard-coded references to pool
 vs. cpool.

Possible. Also not my problem :-). I hinted at that, and that's where the
matter ends for me.

 Of course, it's possible that the OP got lucky and things
 just somehow still work, but I sure as heck wouldn't count on it...

You mean like what happens if the target FS is not mounted? Or other, less
obvious corner cases? That is probably the real issue. No additional error
cases are handled, but I'm sure numerous ones are introduced.

Regards,
Holger
