Re: [BackupPC-users] Backup corrupted, impossible to correct it

2024-08-23 Thread Paul Fox
Guillermo Rozas wrote:
 >  That's not necessarily a bad choice, especially if you can't fit your
 >  storage on a single platter, but you do lose that capability of having
 >  a full snapshot on either drive.
 > 
 >ZFS can do mirror configurations without problem.
 >Regarding faulting the RAID to take one of the drives as a snapshot: be
 >careful: rebuilding onto the new drive puts non-standard stress on the
 >drive that's still in the system (the bigger the drive, the higher the
 >stress), and that may increase the risk of it dying during the rebuild,
 >taking down the whole system.

I've never heard of this concern before.  It seems like an unlikely
scenario, since the "stress" is simply reading the remaining drive. 
But in any case, if that were to happen, the failed drive would be
just one of two copies of the data (the other being the one I had just
removed), and the running system would still be healthy as well.

paul
=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 73.4 degrees)



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Backup corrupted, impossible to correct it

2024-08-22 Thread Paul Fox
les wrote:
 >As for your comments on filesystems, I think you
 > are better off using a raid system that will remove the bad disk if
 > errors are detected, rebuilding when you replace it.  I always liked
 > the simplicity of software raid1 (mirrored) because you could recover
 > from any surviving disk on any computer with a compatible interface or

It also means you can trivially take snapshots of your pool by artificially
"failing" one half of the pair.  Take that disk, put it on a shelf or take
it offsite, install another disk in its place, and let the raid rebuild. 
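With Linux md RAID1 that rotation might look roughly like this; the device
names are examples and `RUN=echo` keeps it a dry run, so treat it as a
sketch rather than a tested recipe for your array:

```shell
# Sketch of rotating one half of an md RAID1 offsite.  /dev/md0 and
# /dev/sdb1 are example names -- substitute your own.  RUN=echo makes
# every command a dry run; clear RUN and run as root to do it live.
RUN=echo
$RUN mdadm /dev/md0 --fail /dev/sdb1     # artificially "fail" one mirror half
$RUN mdadm /dev/md0 --remove /dev/sdb1   # detach it; this disk goes offsite
# ...physically install the replacement disk, then:
$RUN mdadm /dev/md0 --add /dev/sdb1      # rebuild the mirror onto the new disk
$RUN mdadm --detail /dev/md0             # check resync progress
```

The removed disk is a complete, mountable snapshot of the pool as of the
moment it was failed out.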

paul
=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 70.4 degrees)





Re: [BackupPC-users] Backup Share Content

2023-07-29 Thread Paul Fox
jbk wrote:
 >BackupPC-4.4.0-9.el9.x86_64
 > 
 >I see now this is a bigger problem than I thought initially. I thought
 >that each backup # was a restore point to the state of the share at the
 >time of that backup but instead it is an accumulation of all backups w/o
 >reflecting any of the deletions of files now non existent on the source

Surely that's not correct.  If I do a restore of a specific backup,
I'd expect to get exactly the files that were present when that backup
was run.  If I wanted files that had been deleted before that backup
was run, I would understand that I'd need to restore them from a
previous backup.

If this isn't the case, then I'm...  shocked, I guess.

paul
=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 85.1 degrees)





Re: [BackupPC-users] ssh+rsync and known_hosts

2023-07-22 Thread Paul Fox
Kenneth Porter wrote:
 > I'm setting up some Raspberry Pis and I set up BackupPC to back them up 
 > using ssh+rsync. I installed the key in ~backuppc/.ssh/authorized_keys but 
 > the initial backup was still failing.

Unless things have changed (and they might have, but I still do it
this way), the public key needs to go into /root/.ssh/authorized_keys.
Backuppc (on your backuppc server) needs root access to the client in
order to be able to read all of the files it needs.  (You could use a
different user id on the client if you're sure that user can read all
the files which need to be backed up.)

 > So I tried manually ssh'ing into the 
 > Pi and discovered I was hitting the question to add the Pi to known_hosts. 
 > I don't see this mentioned in the documentation. I'm not sure where it 
 > would even go, but I wanted to mention it as I'll likely forget this a year 
 > from now.

You should be trying to manually ssh from the backuppc account, and
you should be trying to become root on the client.  I usually do this:

sudo su - backuppc      # take on the identity of backuppc
ssh root@clientmachine  # log in to the client as root
id                      # verify identity on client
exit                    # leave the client
exit                    # resume your normal identity

When you hit that "add to known hosts?" question from ssh, just answer
"yes".  ssh will put the key in the right place (which is
~backuppc/.ssh/known_hosts).  Don't forget to exit out of both the ssh
and the "sudo su" after you've tested.
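If you'd rather not answer the prompt interactively at all, one option (my
suggestion, not something the BackupPC docs prescribe) is to pre-seed
known_hosts with ssh-keyscan; `clientmachine` and the paths below are
placeholders:

```shell
# Pre-seed the backuppc user's known_hosts so the first backup never hits
# the interactive prompt.  HOST and the known_hosts path are examples;
# adjust for your distro.  RUN=echo keeps this a dry run.
HOST=clientmachine
KNOWN=/var/lib/backuppc/.ssh/known_hosts
RUN=echo
$RUN sh -c "ssh-keyscan -H $HOST >> $KNOWN"
```

Keep in mind that ssh-keyscan records whatever key the network hands back,
so it's no safer than blindly answering "yes" to the prompt; verify the
fingerprint if that first contact matters to you.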

paul
=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 73.1 degrees)





Re: [BackupPC-users] cpool not shrinking on deleting hosts or changing schedule (Github #427)

2023-02-24 Thread Paul Fox
"G.W. Haywood via BackupPC-users" wrote:
 > Hi there,
 > 
 > On Fri, 24 Feb 2023, Paul Fox wrote:
 > 
 > > ...
 > > Nothing makes me nervous like having someone tell me that my backup
 > > strategy, which has been rock solid for almost 20 years, might now
 > > have problems. ...
 > 
 > In this thread I don't think I've seen anything to cause that kind of
 > concern for anyone who amongst other things checks now and again that
 > random backups can be recovered correctly.  I do that - most recently
 > this morning - and I've never been disappointed.

In addition, Christian mentioned to me off list that the machine
on which he had the pool corruption had seen several hard crashes
in the last couple of weeks.

My stress level has gone way down.  :-)  But perhaps more frequent
fsck and/or BackupPC_fsck runs might be in my future.
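For what it's worth, such a periodic pass on a Debian-style install might
look like the sketch below; the paths and flags are my assumptions, so
check the tools' usage output on your own system first.

```shell
# Dry-run sketch of a periodic pool integrity check on a Debian-style
# install.  Paths and flags are assumptions; RUN=echo prints the commands
# instead of executing them.
RUN=echo
$RUN sudo -u backuppc /usr/share/backuppc/bin/BackupPC_fsck -f            # full pool fsck
$RUN sudo -u backuppc /usr/share/backuppc/bin/BackupPC_refCountUpdate -m  # rebuild refcounts
```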

paul
=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 34.9 degrees)





Re: [BackupPC-users] cpool not shrinking on deleting hosts or changing schedule (Github #427)

2023-02-24 Thread Paul Fox
Christian Völker wrote:
 > Hi all,
 > 
 > meanwhile I gave up on using my script for checking the pool - I guess it
 > was totally wrong.
 > 
 > However, I used BackupPC_fsck and BackupPC_refCountUpdate to check the
 > pool and they gave me loads of errors. In particular on two hosts. All
 > other hosts were fine.
 > 
 > So I removed with BackupPC_Delete all backups on the failing hosts. Now
 > it looks like my pool and pc directory do match again as the above tools
 > do not report any errors.
 > 
 > I expect BackupPC_nightly to do its job and remove unneeded files from pool/.
 > 
 > Solved so far, even though I have no clue why there were errors in the
 > pool<->pc structure.

This is a little bit disturbing.

So you have no idea where the errors came from?  Did you preserve
the output from those tools, for analysis?

David -- have you perhaps tried those tools as well?

Nothing makes me nervous like having someone tell me that my backup
strategy, which has been rock solid for almost 20 years, might now
have problems.  :-(

paul

=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 32.2 degrees)





Re: [BackupPC-users] cpool not shrinking on deleting hosts or changing schedule (Github #427)

2023-02-21 Thread Paul Fox
Is it really necessary to be sending multiple identical ADVERTISEMENTS
to the list?

 > 
 > Non-text parts of message 423:
 > 1.1.2)   image   191K"0psDl4r0am0Jp1EI.png"
 > 1.1.3)   image   191K"FtUSQ9AjaqTx9Zg4.png"
 > 1.1.4)   image   191K"0MorVgYDTsoDwC1N.png"
 > 


=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 35.4 degrees)





Re: [BackupPC-users] cpool not shrinking on deleting hosts or changing schedule (Github #427)

2023-02-21 Thread Paul Fox
"G.W. Haywood via BackupPC-users" wrote:
 > I wonder if you install the (read-only) FUSE filesystem 'backuppcfs'
 > it will help you to diagnose this further?  As you seem to be having
 > trouble with a small number of large files it shouldn't be difficult
 > to isolate the problem areas and maybe post some details here.
 > 
 > After version 4 of BackupPC was released there was a corresponding
 > update to backuppcfs.  If you do use it make sure you get the right
 > one.  You should be able to find it in the list archives.

Finding a good version of backuppcfs is harder than one might think. 
I went down that rabbit hole back in November.  In the end, backuppcfs
wasn't the tool I needed, so I ended up not doing anything with it.  But
here's what I found at the time.  I don't think anything has changed.

https://sourceforge.net/p/backuppc/mailman/message/37736996/


=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 34.3 degrees)





Re: [BackupPC-users] File Size Summary shows too large value for host

2022-12-15 Thread Paul Fox
Stefan Helfer wrote:
 > Hello Iosif,
 > 
 > Hooray, I have figured it out.  It took me a few hours and more
 > than 60 full backups, but I found the cause.

I admire your perseverance!

 > My understanding and conclusion is now:
 > BackupPC v3 shows the size of the backup as it would be restored to
 > disk.  With all hardlinks intact and used.
 > BackupPC v4, on the other hand, shows the size of the backup as if
 > all the hardlinks had been resolved into individual standalone
 > copies of the files.

This sounds like a bug in V4, to me.

Just to be clear, we're talking about the size number reported here,
correct, on the "Host  Backup Summary" page?
---
File Size/Count Reuse Summary   

   Existing files are those already in the pool; new files are those added to   
   the pool. Empty files and SMB errors aren't counted in the reuse and new 
   counts.  

   +-------+----+-----------------------+---------------+---------------+
   |       |    |        Totals         |Existing Files |   New Files   |
   |       |    +------+--------+-------+------+--------+------+--------+
   |Backup#|Type|#Files|Size/MiB|MiB/sec|#Files|Size/MiB|#Files|Size/MiB|
   +-------+----+------+--------+-------+------+--------+------+--------+
   |  319  |full|226034| 76330.2|  68.89|    93|     7.5|   337|   415.0|
   +-------+----+------+--------+-------+------+--------+------+--------+
                        ^^^^^^^^
                        This number, 76330.2.


If this number is wildly inflated by the presence of lots of
hard-linked files, then the results are very misleading.  In the worst
case, it might not be clear that you could restore the backup to the
disk it came from.

Am I right?
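The two counting conventions are easy to demonstrate with GNU du, which
counts each inode once by default and once per link with --count-links;
the scratch directory below is just an illustration:

```shell
# Demonstrate the two ways of sizing hard links, in a scratch directory.
mkdir -p /tmp/hl-demo
dd if=/dev/zero of=/tmp/hl-demo/a bs=1024 count=1024 2>/dev/null
ln -f /tmp/hl-demo/a /tmp/hl-demo/b   # second name for the same inode
du -s /tmp/hl-demo                    # inode counted once (about 1 MiB)
du -s --count-links /tmp/hl-demo      # counted once per link (about 2 MiB)
```

If the v4 number is computed the second way, a tree with heavy hard-linking
would report far more than it takes to restore.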

paul
=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 34.5 degrees)





[BackupPC-users] backuppcfs versions

2022-11-18 Thread Paul Fox
backu...@kosowsky.org wrote:
 > Paul Fox wrote at about 15:52:07 -0500 on Wednesday, November 16, 2022:
 >  > backu...@kosowsky.org wrote:
 >  >  > 'backuppcfs' is a (read-only) FUSE filesystem that allows you to see
 >  >  > the contents/ownership/perms/dates/Xattrs etc. of any file in your
 >  >  > backup.
 >  >  > 
 >  >  > It is great for troubleshooting as well as for partial restores...
 >  > 
 >  > Are you referring to the version that Craig attached to this list
 >  > message in June 2017?  Or is there a later version?
 >  > 
 >  > (Not that I don't trust Craig to have gotten the v4 support right the
 >  > first time.  :-)
 >  > 
 >  >   https://sourceforge.net/p/backuppc/mailman/message/35899426/
 >  > 
 >  > paul
 >  > 
 > 
 > I believe a later version was posted and is indeed necessary for v4
 > since:
 > - The notion of how incrementals are constructed from fulls has been
 >   inverted
 > - The format of the attrib files has changed extensively
 > - The pool hierarchy, naming convention, and storage format have
 >   changed
 > - Xattrs are now supported
 > 
 > Look through the archives... it was posted since v4 somewhere...


I did some searching.

On 2017-06-17, Craig attached his "updated for 4.x" version here:
https://sourceforge.net/p/backuppc/mailman/message/35899426/
This is the same message and version that I linked to above.

On 2019-04-11, Alexander Kobel endorsed it, but provided some caveats
about using it:
https://sourceforge.net/p/backuppc/mailman/message/36636938/

On 2019-05-02, backu...@kosowsky.org (i.e., you :-) reported that it
emits (apparently benign?) error messages:
https://sourceforge.net/p/backuppc/mailman/message/36655409/

I find no later version, in the archives, than the one posted
by Craig.

I did see the version that Mike Hughes pointed out, created by github
user phoenix741, here:
https://gist.github.com/phoenix741/99a5076569b01ba5a116cec24a798d5f
The starting point of that code is identical to Craig's version of
06-2017, with the addition of a small patch that appears to modify
some ranges of attributes:

https://gist.github.com/phoenix741/99a5076569b01ba5a116cec24a798d5f/revisions?diff=unified

With no comments in the code, and no commit log (do gists even have
commit logs?), it's a little hard for a layman to know what the purpose
of the patch is.  (Is it generally applicable?  Or did it only solve some
particular problem of theirs?)

So anyway, these are all the versions of v4-capable backuppcfs I've been 
able to find.

Needless to say, it would be great if someone would take that script and
take on a maintainer role.  (I don't have the perl skills, nor the necessary
backuppc knowledge, to do that myself.)

paul
=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 33.1 degrees)





Re: [BackupPC-users] Lost linux file ownership on restore

2022-11-16 Thread Paul Fox
backu...@kosowsky.org wrote:
 > 'backuppcfs' is a (read-only) FUSE filesystem that allows you to see
 > the contents/ownership/perms/dates/Xattrs etc. of any file in your
 > backup.
 > 
 > It is great for troubleshooting as well as for partial restores...

Are you referring to the version that Craig attached to this list
message in June 2017?  Or is there a later version?

(Not that I don't trust Craig to have gotten the v4 support right the
first time.  :-)

  https://sourceforge.net/p/backuppc/mailman/message/35899426/

paul
=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 41.7 degrees)





Re: [BackupPC-users] extra Pool Size charts in Status screen

2022-11-09 Thread Paul Fox
Libor Klepáč wrote:
 >Hi,
 >good finding, i was wondering, why are there two sets of graphs too.
 >Can you send bug report to debian?

Sure.  I was kind of hoping the debian maintainer was on the list, and
would just take care of it.  But I'll submit the bug.

In the meantime, just removing pool.rrd makes the 2nd graph go away.

paul

 >Thanks,
 >Libor
 > 
 >══════
 > 
 >From: Paul Fox 
 >Sent: Tuesday, November 8, 2022 3:12 PM
 >To: General list for user discussion, questions and support
 >
 >Subject: Re: [BackupPC-users] extra Pool Size charts in Status screen
 > 
 >I wrote:
 > > After upgrading from V3 to V4 (via a system upgrade from Ubuntu 20 to
 > > 22) my server status screen now has two copies of the 4 and 52 week
 > pool size charts.  (i.e., 4 charts total.)
 >...
 > > The first images (which are log/poolUsage{4,52}.png) are generated
 > > from log/poolUsage.rrd.  (I think so -- at least, all three have
 > > identical modtimes).
 > >
 > > The second set of images are generated (in GeneralInfo.pm) from
 > > log/pool.rrd, which in my case is several days old, from before my
 > > upgrade to V4.  My suspicion is that this is a stale file, but I also
 > > see that there's also code in GeneralInfo.pm to create log/pool.rrd,
 > > prior to using it to create the images.
 > >
 > > So, what's going on?
 > 
 >With further investigation:
 > 
 >It seems that the second pair of graphs are generated by code in
 >GeneralInfo.pm which is added by the Debian package patches.  In
 >particular, 01-debian.patch and 06-fix-rrd-graph-permissions.patch.
 > 
 >Unless I'm mistaken, it seems that backuppc V3 didn't provide pool
 >graphs at all.  The graphs I've been seeing for the last couple of
 >decades have been created by code added by the Debian packager.
 > 
 >That's great (I like the charts), but now that backuppc V4 is creating
 >its own pool graphs, perhaps the Debian patches which do so should go
 >away.
 > 
 >BTW, the code in GeneralInfo.pm (courtesy the debian patches 1 and 6)
 >generates the graphs on the fly using the data in log/pool.rrd.  I
 >haven't figured out how pool.rrd ever got updated with pool data in
 >the first place.  It seems likely that that code is gone already,
 >because having renamed the pool.rrd a couple of days ago, it hasn't
 >been recreated.
 > 
 >paul
 >=--
 >paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 45.1
 >degrees)
 > 


=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 43.9 degrees)





Re: [BackupPC-users] extra Pool Size charts in Status screen

2022-11-08 Thread Paul Fox
I wrote:
 > After upgrading from V3 to V4 (via a system upgrade from Ubuntu 20 to
 > 22) my server status screen now has two copies of the 4 and 52 week
 > pool size charts.  (i.e, 4 charts total.)
...
 > The first images (which are log/poolUsage{4,52}.png) are generated
 > from log/poolUsage.rrd.  (I think so -- at least, all three have
 > identical modtimes).
 > 
 > The second set of images are generated (in GeneralInfo.pm) from
 > log/pool.rrd, which in my case is several days old, from before my
 > upgrade to V4.  My suspicion is that this is a stale file, but I also
 > see that there's also code in GeneralInfo.pm to create log/pool.rrd,
 > prior to using it to create the images.
 > 
 > So, what's going on?

With further investigation:

It seems that the second pair of graphs are generated by code in
GeneralInfo.pm which is added by the Debian package patches.  In
particular, 01-debian.patch and 06-fix-rrd-graph-permissions.patch.

Unless I'm mistaken, it seems that backuppc V3 didn't provide pool
graphs at all.  The graphs I've been seeing for the last couple of
decades have been created by code added by the Debian packager.

That's great (I like the charts), but now that backuppc V4 is creating
its own pool graphs, perhaps the Debian patches which do so should go
away.

BTW, the code in GeneralInfo.pm (courtesy the debian patches 1 and 6)
generates the graphs on the fly using the data in log/pool.rrd.  I
haven't figured out how pool.rrd ever got updated with pool data in
the first place.  It seems likely that that code is gone already,
because having renamed the pool.rrd a couple of days ago, it hasn't
been recreated.

paul
=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 45.1 degrees)





[BackupPC-users] extra Pool Size charts in Status screen

2022-11-06 Thread Paul Fox
After upgrading from V3 to V4 (via a system upgrade from Ubuntu 20 to
22) my server status screen now has two copies of the 4 and 52 week
pool size charts.  (i.e., 4 charts total.)

I've put a screenshot of the page here:
https://www.foxharp.boston.ma.us/tmp/z/bpc_extra_charts.jpg

Is this correct?  Seems confusing to have different charts with
the same titles and slightly differing content.

The first images (which are log/poolUsage{4,52}.png) are generated
from log/poolUsage.rrd.  (I think so -- at least, all three have
identical modtimes).

The second set of images are generated (in GeneralInfo.pm) from
log/pool.rrd, which in my case is several days old, from before my
upgrade to V4.  My suspicion is that this is a stale file, but I also
see that there's also code in GeneralInfo.pm to create log/pool.rrd,
prior to using it to create the images.

So, what's going on?

paul
=--
 paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 61.0 degrees)





Re: [BackupPC-users] Setuid problem

2022-11-05 Thread Paul Fox
Dave Bachmann wrote:
 >I had tried that earlier but it had other errors - just can't recall the
 >details right now, but on one of my attempts there was a message at the
 >end that configure.pl did not replace config.pl despite the fact that

I think Les wasn't asking about a package that requires "configure.pl" --
needing that implies you're building from source.  I think the question
was more along the lines of: why didn't you just do "apt install backuppc"?

paul

 >there was no existing config.pl at that time.
 >I have since been using BackupPC-4.4.0.tar.gz. Before unpacking and
 >installing it I deleted all files owned by backuppc that had been
 >previously installed. There remains the possibility that there is a config
 >file somewhere that has a pointer or values from my previous attempt, but
 >I'm not sure how to identify them.
 > 
 >══
 > 
 >From: Les Mikesell 
 >Sent: Saturday, November 5, 2022 13:13
 >To: General list for user discussion, questions and support
 >
 >Subject: Re: [BackupPC-users] Setuid problem
 > 
 >On Sat, Nov 5, 2022 at 2:57 PM Dave Bachmann 
 >wrote:
 >>
 >> This reinforces my fear that the latest install may not have run
 >properly and that there are other problems lurking. I expect that
 >index.cgi should have been created by configure.pl, but don't understand
 >why it wasn't. What's involved in creating it post-hoc?
 >>
 > 
 >Is there some reason you don't use the packaged version for your linux
 >distribution?
 > 
 >--
 >   Les Mikesell
 >  lesmikes...@gmail.com
 > 


=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 65.8 degrees)





Re: [BackupPC-users] Setuid problem

2022-11-05 Thread Paul Fox
 >I am running into the setuid problem, eg. when running it I receive the
 >following message: "Error: Wrong user: my userid is 33, instead of
 >117(backuppc)" where userid 33 = www-data.

And is your index.cgi setuid to backuppc, like this?

$ ls -l /usr/lib/backuppc/cgi-bin/
total 16
-rwsr-x--- 1 backuppc www-data 14488 Mar  7  2022 index.cgi*
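If yours isn't, something along these lines would restore the expected
ownership and mode; the path matches the Debian/Ubuntu layout in the
listing above and may differ elsewhere (dry run via RUN=echo):

```shell
# Restore setuid-backuppc on the CGI wrapper.  The path is the
# Debian/Ubuntu package layout; other installs differ.  RUN=echo keeps
# this a dry run; clear RUN and run as root to apply it.
CGI=/usr/lib/backuppc/cgi-bin/index.cgi
RUN=echo
$RUN chown backuppc:www-data "$CGI"
$RUN chmod 4750 "$CGI"   # setuid bit plus rwxr-x---
```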

=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 68.7 degrees)





Re: [BackupPC-users] out-of-date documentation at sourceforge

2022-11-04 Thread Paul Fox
Marcel Blenkers wrote:
 >Hi there, 
 >this site is out of date as the project moved to github:
 >https://backuppc.github.io/backuppc/

Thanks.  All the more reason for the resources on SF to be turned
off, I'd think.  There's no usefulness in having google mislead users.

Thanks,
paul


 >Greetings 
 >Marcel 
 >Am 4. November 2022 21:07:09 schrieb Paul Fox :
 > 
 >  During my recent upgrade to V4 (courtesy my Ubuntu LTS upgrades) I've
 >  been referring to the docs at
 >  https://backuppc.sourceforge.net/BackupPC-4.1.3.html
 >  I only just realized this morning that those date from 2017, and
 >  (likely) differ from the docs I get by hitting the Documentation
 >  button in the web UI.  (Those docs describe 4.4.0)
 >  Is there a reason for the out-of-date V4 docs on SF?
 >  paul
 >  =--
 >   paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 64.6
 >  degrees)


=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 61.2 degrees)





[BackupPC-users] out-of-date documentation at sourceforge

2022-11-04 Thread Paul Fox
During my recent upgrade to V4 (courtesy my Ubuntu LTS upgrades) I've
been referring to the docs at
https://backuppc.sourceforge.net/BackupPC-4.1.3.html

I only just realized this morning that those date from 2017, and
(likely) differ from the docs I get by hitting the Documentation
button in the web UI.  (Those docs describe 4.4.0)

Is there a reason for the out-of-date V4 docs on SF?

paul
=--
 paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 64.6 degrees)





Re: [BackupPC-users] Missing backup files

2022-11-03 Thread Paul Fox
backu...@kosowsky.org wrote:
 > READ THE DOCUMENTATION (I am sounding like a broken record).

Indeed, you are.

=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 59.5 degrees)





Re: [BackupPC-users] v4: How can I disable BackupPC but keep the service running? (Read-only BackupPC)

2022-10-28 Thread Paul Fox
backu...@kosowsky.org wrote:
 > Alternatively, use backuppcfs which creates an easily browsable fuse
 > filesystem for all of your backups -- it works even when backuppc is
 > *not* running.
 > 
 > Personally, in most cases, I find backuppcfs much easier to use to
 > access backups than using either the web gui or tar/rsync restores.

Slick.  I assume it presents a "normal looking" filesystem, with correct
permissions, etc?  Is it a standard part of V4?
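My mental model of how it gets used, sketched below; the script's name,
location, and arguments vary between the versions floating around, so
treat every line as an assumption:

```shell
# Hypothetical backuppcfs session, as a dry run (RUN=echo).  Whether the
# script takes just a mountpoint, and where it lives, depends on which
# version you fetched from the list archives.
RUN=echo
$RUN mkdir -p /mnt/backuppcfs
$RUN perl backuppcfs.pl /mnt/backuppcfs   # mount all hosts/backups, read-only
$RUN ls /mnt/backuppcfs                   # expect one directory per host
$RUN fusermount -u /mnt/backuppcfs        # unmount when finished
```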

paul
=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 45.7 degrees)





[BackupPC-users] guide/how-to for the upgrade from v3 to v4?

2022-10-04 Thread Paul Fox
I'm about to upgrade my backuppc server machine from Ubuntu 20 LTS to
Ubuntu 22 LTS.  And that will involve an upgrade from backuppc v3 to v4.

Questions:
- Is there a way to decouple these upgrades?  Both will take
  time, and attention to details, and I'd like to not have to do
  both at once.  Ideally, I'd upgrade the OS, and then when I'm
  sure it's stable after a few days, I would upgrade backuppc.  I
  suppose I could simply disable backups during that time, but
  that seems...  non-optimal.

- The worst case would be that the OS upgrade will start forcing
  me to deal with backuppc configuration halfway through the OS
  upgrade, asking me to diff and manually convert configuration files
  while the OS is only half-upgraded.  Is there at least a way
  to easily leave my backuppc config untouched, with backuppc disabled,
  until the OS upgrade is all finished?  And _then_ do an upgrade
  of backuppc?

- If backuppc 4 were packaged (as a backport?) to Ubuntu 20, I
  could perform the backuppc upgrade prior to the OS upgrade.  I
  don't find it in the Ubuntu package lists:  is this packaging
  available anywhere else?  (I'm not particularly interested in
  installing from tar, or from git.)

- All I've been able to learn about the backuppc upgrade has come
  from searching the mailing list archives.  It seems I'll have
  problems with my RsyncClientCmd and RsyncRestoreCmd settings
  (which mostly just force an alternate ssh port), and perhaps
  some other settings.  Is there a list of all the potential
  trouble points?  Has anyone written a "Here are all the things I
  had to fix" web page, or, maybe, a similar message to the
  mailing list that I may have missed?

paul

p.s. It seems I've been concerned about this upgrade for some time.  In
searching old mail, I found this:
  https://sourceforge.net/p/backuppc/mailman/message/35868022/

=--
 paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 47.3 degrees)





[BackupPC-users] backuppc and Caddy

2022-04-26 Thread Paul Fox
Does anyone have handy a Caddyfile stanza for running backuppc with
the Caddy web server, rather than Apache?

paul
=--
 paul fox, p...@foxharp.boston.ma.us (arlington, ma)





Re: [BackupPC-users] Rotating multiple drives on one mount point

2022-03-26 Thread Paul Fox
kenneth wrote:
 > 
 > The main problem is that "yesterday's" backup might be off-site for 
 > someone who needs a semi-recent file restored. But I don't see much way 
 > to solve that without keeping the previous backup media on-site.

This is one of the problems solved by using a RAID pair for your backups.
Your pool never gets "reset".  It just keeps going, and what you move
offsite is a snapshot.

paul
=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma)





Re: [BackupPC-users] Rotating multiple drives on one mount point

2022-03-26 Thread Paul Fox
Kenneth Porter wrote:
 > I'm adding a second external drive to my rotation so I can keep one 
 > off-site in case of disaster.
 > 
 > How do people handle this? What do your systemd mount/automount unit files 
 > look like? Do you use a single drive label so a single systemd unit works 
 > to mount any backup drive to the same mount point?

The trick I use (and has been discussed here, though years ago)
is to set up your backuppc pool as a mirrored RAID pair of drives.

Backups go to both drives, of course, automatically via the magic of RAID.

When you want to rotate, you declare one drive as failed.  Remove it,
put it somewhere safe.  Install a replacement, restart the RAID, and the
synchronization just happens, again via the magic of RAID.

In addition, SATA is a hot-pluggable interface, so (assuming your
hardware does everything right), you can break the RAID pair, power
down the drive, and replace it, all without rebooting.  I do stop
backuppc and unmount the pool disks while doing all of this, to ensure
that the offsite disk will mount cleanly if needed someday.

I have 3 or 4 in rotation, and swap them out once every several
months.
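In mdadm terms the swap is only a few commands. A dry-run sketch, with hypothetical device names (the `run` helper just prints each command, so nothing here touches a real array -- swap the echo for direct execution on a real system):

```shell
# Hypothetical layout: pool array /dev/md0, mirrored from /dev/sda1 + /dev/sdb1.
# Stop backuppc and unmount the pool first, as described above.
run() { echo "$@"; }    # dry-run helper: print instead of execute

run mdadm /dev/md0 --fail /dev/sdb1      # declare one half "failed"
run mdadm /dev/md0 --remove /dev/sdb1    # detach it; shelve it or take it offsite
# ...install the replacement disk (here /dev/sdc1), then:
run mdadm /dev/md0 --add /dev/sdc1       # the resync "just happens"
```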

In my case, I use encryption on the removable half, so I don't need
to worry so much about how securely it's stored.  I used to keep one
in my desk drawer at work, for instance.

paul
=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma)





Re: [BackupPC-users] Run command per file before storing in the pool

2022-02-17 Thread Paul Fox
"G.W. Haywood via BackupPC-users" wrote:
 > 
 > On Thu, 17 Feb 2022, brogeriofernandes wrote:
 > 
 > > I'm wondering if would be possible to run a command just after
 > > client transfers file data but before it's stored in backuppc
 > > pool. My idea is to do an image compression, like jpeg-xl lossless,
 > > instead of the standard zlib one.
 > 
 > Have you considered using a compressing filesystem on the server?

Just kibitzing from the sidelines:  It seems like image manipulation
tools should be learning how to deal with the compressed jpgs
directly.  Then there would be no reason not to simply compress them
all and leave them that way.

paul
=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma)





[BackupPC-users] tool for comparing contents of incrementals?

2021-11-20 Thread Paul Fox
I've tried searching the archives for an answer, but there are so many
hits to my searches I'm not able to find my needle in the haystack (if
it's there at all).

A recent filesystem calamity leads me to want to know which files were
modified or deleted on a particular day.

Does anyone know of a tool/script that will look at two backuppc
incremental backup trees, and produce a (relatively) readable diff
listing, showing files deleted, and if not full diffs, then at least
"length changed" notifications for changed files?
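for concreteness, here's a rough sketch of the kind of tool i mean, assuming the two backups have first been extracted to plain directory trees (e.g. with BackupPC_tarCreate -- the on-disk pc/ trees use mangled names, so don't walk those directly):

```python
import os

def tree_index(root):
    """Map path-relative-to-root -> file size for every file under root."""
    idx = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            idx[os.path.relpath(full, root)] = os.path.getsize(full)
    return idx

def diff_trees(old_root, new_root):
    """Yield readable lines: deletions, additions, and length changes."""
    old, new = tree_index(old_root), tree_index(new_root)
    for path in sorted(set(old) | set(new)):
        if path not in new:
            yield "deleted: %s" % path
        elif path not in old:
            yield "added:   %s" % path
        elif old[path] != new[path]:
            yield "length changed: %s (%d -> %d)" % (path, old[path], new[path])
```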

paul
=-
p...@foxharp.boston.ma.us




Re: [BackupPC-users] Backing up the server computer

2017-07-15 Thread Paul Fox
Adam Goryachev wrote:
 > 
 > 
 > On 15/7/17 13:00, Paul Fox wrote:
 > > i didn't say i don't also have some excludes.  i exclude /proc and
 > > /sys.  /dev is a separate filesystem.  /tmp, believe it or not, i do
 > > back up, to help with the morning-after regret of having lost a file i
 > > thought i didn't need.  i think we're ending up in the same place -- i
 > > just need to specifically include mounted filesystems (as separate
 > > share), which is how i prefer it.
 >
 > Actually, I think you will find that /proc, /dev, /sys, etc are actually 
 > different filesystems, and so will automatically be excluded by 
 > --one-file-system.

thanks -- when i saw those in my excludes list last night just before
posting i thought "hmmm.  i'll bet i don't need those".  i think they're
left over from before i added --one-file-system.  (which was probably
12 or 13 years ago.)  i also noticed i'm not using RsyncExtraArgs, which
would be cleaner and safer.

paul
=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 60.4 degrees)




Re: [BackupPC-users] Backing up the server computer

2017-07-14 Thread Paul Fox
B wrote:
 > On Fri, 14 Jul 2017 18:56:19 -0400
 > Paul Fox  wrote:
 > 
 > > i confess i haven't been following this thread in all its gory detail,
 > 
 > The BackupPC god absolves you (although, it is the BPC v.3x god, so
 > you'll need to upgrade the confessionnal if you want to also be absolved
 > by the v.4.x one.)

:-)

 > > but i suspect that many folks do their backups onto a separately
 > > mounted disk.  if you do that, then adding "--one-file-system" to the
 > > rsync args takes care of it:  you can back up from '/', but only the
 > > root filesystem will be backed up.  any other filesystems on that
 > > machine will also need to be backed up as separate shares, of course.
 > 
 > But this way, you still backup unwanted directories, such as /tmp, /dev,
 > /proc, etc.
 > Starting on the disk root and excluding these allows for a tight control
 > over what you want and the rest, providing you need almost the whole
 > system to be saved for whatever reason.

i didn't say i don't also have some excludes.  i exclude /proc and
/sys.  /dev is a separate filesystem.  /tmp, believe it or not, i do
back up, to help with the morning-after regret of having lost a file i
thought i didn't need.  i think we're ending up in the same place -- i
just need to specifically include mounted filesystems (as separate
share), which is how i prefer it.

paul
=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 58.1 degrees)




Re: [BackupPC-users] Backing up the server computer

2017-07-14 Thread Paul Fox
B wrote:
 > On Fri, 14 Jul 2017 18:22:54 -0400
 > Bob Katz  wrote:
 > 
 > > Oh boy  I get it!!! I can't believe how stupid I was about that. 
 > 
 > Me too ;-p)
 > 
 > > Well, doesn't this mean I have to establish a whole bunch of modules 
 > > with a different path for each module, in order to back up everything 
 > > EXCEPT the backup location? Maybe I should try a different method than 
 > > rsyncd
 > 
 > You can still use '/', but that means you'll have to exclude all
 > unwanted directories - I use BPC this way 'cos I really need the
 > whole system being backed up.

i confess i haven't been following this thread in all its gory detail,
but i suspect that many folks do their backups onto a separately
mounted disk.  if you do that, then adding "--one-file-system" to the
rsync args takes care of it:  you can back up from '/', but only the
root filesystem will be backed up.  any other filesystems on that
machine will also need to be backed up as separate shares, of course.
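in a v3-era config that's just one more element in the rsync argument list -- a sketch, not the exact stock config (append rather than retyping the whole default list):

```perl
push @{$Conf{RsyncArgs}}, '--one-file-system';
```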

paul
=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 59.2 degrees)




Re: [BackupPC-users] BackupPC v4 and rsync for localhost

2017-06-15 Thread Paul Fox
Michael Stowe wrote:
 > On 2017-06-15 03:25, Daniel Berteaud wrote:
 > > Hi there.
 > > 
 > > Using BackupPC since v2, I used to be able to BackupPC the host itself
 > > (the one running BackupPC) using rsync by simply modifying
 > > $Conf{RsyncClientCmd} (to something like '/usr/bin/sudo $rsyncPath
 > > $argList', and same for $Conf{RsyncClientRestoreCmd}). THis worked
 > > with BackupPC v3 too.
 > > 
 > > How can the same be done with BackupPC v4 now that RsyncClientCmd
 > > isn't used anymore ? Setting RsyncSshArgs to undef does't work as
 > > it'll try to run with ssh (but without a full path), eg
 ...
 > 
 > I looked on my own setup to answer this question, since I used a similar 
 > method under 3.x and have been backing up the local systems under 4.x 
 > since the alpha versions.
 > 
 > Turns out I just use a pretty vanilla rsync/ssh setup, and set up ssh 
 > keys so the box can log into itself without issues.
 > 

i will have the same issue that Daniel has.  I use:
   $Conf{RsyncClientCmd} = '/usr/bin/sudo $rsyncPath $argList+';
   $Conf{RsyncClientRestoreCmd} = '/usr/bin/sudo $rsyncPath $argList+';
in order to backup the backuppc host without the encryption/decryption
overhead of ssh.

for another host, i use this:
  $Conf{SshPath} = '/var/lib/backuppc/pc/broom/dual_commands';
  $Conf{RsyncClientCmd} ='$sshPath backup  $host $rsyncPath $argList+';
  $Conf{RsyncClientRestoreCmd} = '$sshPath restore $host $rsyncPath $argList+';

where the "dual_commands" script first tries the host on a wired
connection, and then on a wireless connection -- this is for a laptop
which might be connected either way.  (this also requires wrapping
the "ping" command -- i didn't show that, above.)
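for the curious, a hypothetical sketch of what such a wrapper can look like -- the host addresses and the ping test are assumptions, and this version prints the ssh command instead of exec'ing it so the logic can be followed offline:

```shell
dual_commands() {
    mode=$1; host=$2; shift 2    # mode is "backup" or "restore"
    for addr in "${host}-wired" "${host}-wifi"; do
        # on a real network, test reachability first, e.g.:
        #   ping -c 1 -W 2 "$addr" >/dev/null 2>&1 || continue
        echo "ssh -x -l root $addr $*"    # print instead of exec ssh
        return 0
    done
    echo "no route to $host" >&2
    return 1
}

dual_commands backup broom /usr/bin/rsync --server --sender
```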


is there really no way to handle special cases like this in v4?

paul
=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 63.9 degrees)




Re: [BackupPC-users] Debugging excludes and....

2017-06-05 Thread Paul Fox
this question ("how do i exclude X and Z without excluding Y") seems
to come up a lot -- i've certainly asked it in the past.

it seems like it wouldn't be too hard (for someone more perl-savvy
than i am) to create a small utility that could be used for testing.
it would be given a list of patterns (just as bob has given it below), and
it would filter a list of files from stdin (the output of "find -print",
for instance), emitting only those names which would be backed up.  i
suppose it would need knowledge of the underlying backup transport (tar
vs. rsync), as well.

okay, maybe not a trivial tool.  but it would sure be nice to be able
to do a dry run of a set of patterns in order to test them, before
actually disturbing one's backuppc config.
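a very rough approximation, just to show the shape of it (this only approximates rsync's matching rules -- anchored '/', '**', trailing '/' and friends are richer in the real thing):

```python
import fnmatch

def excluded(path, patterns):
    """Approximate BackupPC/rsync exclude matching: a pattern starting
    with '/' is anchored at the share root; anything else matches any
    single path component.  A rough dry-run aid only."""
    path = "/" + path.lstrip("/")
    parts = path.strip("/").split("/")
    for pat in patterns:
        if pat.startswith("/"):
            if path == pat or path.startswith(pat.rstrip("/") + "/"):
                return True
        elif any(fnmatch.fnmatch(part, pat) for part in parts):
            return True
    return False

# filter a "find -print" listing: keep only names that would be backed up
def would_back_up(names, patterns):
    return [n for n in names if not excluded(n, patterns)]
```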

paul

Bob Katz wrote:
 > Hi, guys. I want to exclude any file that contains "Previews.lrdata".  
 > 
 > For example:
 > 
 > Lightroom Catalog/Lightroom 5/Lightroom 5
 > Catalog/Old/original1.Lightroom 5 Catalog-2 Previews.lrdata/9/9965
 > 
 > 
 > 
 > Another challenging exclude needs to contain "PreviewsDefective.lrdata"
 > 
 > 
 > Lightroom Catalog/Lightroom 5/Lightroom 5 Catalog/original1.Lightroom 5
 > Catalog-4 Smart PreviewsDefective.lrdata/9/9E74
 > 
 > 
 > --   
 > 
 > Would it be as simple as using *Previews.lrdata*, for example:
 > 
 > 
 > $Conf{BackupFilesExclude} = {
 >   '*' => [
 > '*Previews.lrdata*',
 > 'Lightroom ACR cache/',
 > 'Library/Caches/',
 > '*.lrprev',
 > 'Lightroom Catalog/Backups/',
 > 'Library/Logs/CrashReporter/',
 > 'Library/Application Support/MobileSync/Backup/'
 >   ]
 > 
 > 
 > 
 > 
 > Thanks for any advice,
 > 
 > 
 > Bob
 > 
 > 
 > 
 > 
 > 


=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 49.6 degrees)




Re: [BackupPC-users] Updated BackupPC 4 Debian Packages

2017-05-30 Thread Paul Fox
Holger Parplies wrote:
 > Hi,
 > 
 > (shouldn't this really be on backuppc-devel?)

not if you want feedback from users.

 > 
 > Ludovic Drolez wrote on 2017-05-30 20:45:35 +0200 [Re: [BackupPC-users] 
 > Updated BackupPC 4 Debian Packages]:
 > > > On Fri, May 26, 2017 at 10:34:11PM -0700, Craig Barratt wrote:
 > > > >No, rsync-bpc isn't usable without BackupPC.
 > 
 > stupid question: should it even be installed in /usr/bin then?
 > 
 > > > > [...]
 > > > >The main upgrade risk area is around rsync config parameters and 
 > > > > arguments
 > > > >not being compatible between 3.x and 4.x. Configure.pl tries to
 > > > >extract $Conf{RsyncSshArgs} (a new 4.x setting) from the
 > > > >old $Conf{RsyncClientCmd} setting.
 > 
 > As far as I can tell, an automatic conversion is not always possible. For

i haven't hit this upgrade yet, since i'm running Ubuntu LTS 16.04.  but
i have to say that from what i'm hearing, i'm not looking forward to
it.  i have multiple hosts with host-specific RsyncClientCmd and
RsyncRestoreCmd settings, sometimes to simply change the port number, but
at other times to provide a wrapper script which tries a couple of addresses
for the host before giving up.

if backuppc isn't going to handle the upgrade seamlessly, i hope there's
a more obvious warning than "do you want to take the maintainer's new
version?".  it should spell out that the upgrade will be a bit complicated,
and that perhaps it should be delayed until the user has enough time.

as i say, i haven't attempted this upgrade yet.  perhaps i'm worried
about nonexistent issues.  you just scared me a bit, is all.  ;-)

paul

 > simple cases, it's easy enough. Varying orders of ssh command line options
 > make things complicated. And in the general case, RsyncClientCmd could be
 > virtually *anything* that leads to a connection to something that emulates
 > an rsync protocol. I'm not sure RsyncSshArgs can be as flexible, or at least
 > that this can be achieved by an automated configuration translation.
 > 
 > Also, I believe configure.pl doesn't handle host configuration files, and I
 > would assume that doing so in postinst would violate policy, because host
 > configuration files don't belong to the package, do they?
 > 
 > Aside from that, there is no longer an RsyncClientRestoreCmd, so part of the
 > formerly possible configuration simply does not translate.
 > 
 > Finally, the configuration file may contain arbitrary Perl code for
 > determining the value of RsyncClientCmd (or anything else, for that matter),
 > defeating conversion as with the web configuration editor.
 > 
 > Thinking about it, for config.pl, simply including a new version would leave
 > it up to the user to resolve the differences between his local version and
 > the new version, wouldn't it?
 > 
 > Regards,
 > Holger
 > 


=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 51.1 degrees)




Re: [BackupPC-users] rsync error: unexplained error (code 255) at io.c(226) [Receiver=3.1.2.0]

2017-05-09 Thread Paul Fox
Steve Palm wrote:
 > You are right, of course...  Seems the -T parameter to ssh went
 > away at some point from the master rsync transfer configuration,
 > and the ssh authorized_keys entries for the backup login were
 > created with no-pty.

i've been burned by this in the past -- in my case it was a wrapper 
around ssh that i use for a particular host which was producing a bit
of startup output, and it took me a long time to realize it was
causing the failure.

i can't think of any way for backuppc to help with this (and i tried).
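the quick smoke test is that a no-op remote command has to produce zero bytes of output.  a sketch -- the two invocations at the bottom are local stand-ins for something like `check_clean ssh -x -l root clienthost`:

```shell
check_clean() {
    # any output at all from the transport corrupts the rsync protocol
    out=$("$@" /bin/true 2>&1)
    if [ -z "$out" ]; then echo clean; else echo "NOT clean: $out"; fi
}

check_clean env                      # quiet transport: prints "clean"
check_clean sh -c 'echo banner'      # chatty transport: prints "NOT clean: banner"
```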

paul

 > 
 > Thankfully, at this point, it seems to be running very smoothly. :)
 > 
 > Steve
 > 
 > > On May 7, 2017, at 3:17 AM, Craig Barratt <cbarr...@users.sourceforge.net> wrote:
 > > 
 > > Steve,
 > > 
 > > Most likely it's a problem with ssh (perhaps it's prompting for a
 > > password) or the client shell is producing output before rsync is
 > > run.
 > > 
 > > Craig
 > > 
 > > On Wednesday, May 3, 2017, Steve Palm <n9...@n9yty.com> wrote:
 > > Any clue what this is saying?
 > > 
 > > 
 > > This is the rsync child about to exec /usr/local/bin/rsync_bpc
 > > rsync_bpc: connection unexpectedly closed (0 bytes received so far) 
 > > [Receiver]
 > > Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 filesTotal, 
 > > 0 sizeTotal, 0 filesNew, 0 sizeNew, 0 sizeNewComp, 1676476 inode
 > > rsync error: unexplained error (code 255) at io.c(226) [Receiver=3.1.2.0]
 > > rsync_bpc exited with fatal status 255 (65280) (rsync error: unexplained 
 > > error (code 255) at io.c(226) [Receiver=3.1.2.0])
 > > Xfer PIDs are now
 > > Got fatal error during xfer (No files dumped for share /Users/)
 > > Backup aborted (No files dumped for share /Users/)
 > > 
 > > 
 > > 


=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 45.9 degrees)




Re: [BackupPC-users] appending to the Excludes hash

2017-05-07 Thread Paul Fox
Craig Barratt wrote:
 > It's ok putting code in config.pl, but you should be aware that if you use
 > the CGI editor, it will not survive.

ah -- thanks for that.  i don't use that editor, but i can imagine
being tempted to learn how someday.  i guess i'll avoid the temptation.

paul

 > 
 > Craig
 > 
 > On Thursday, May 4, 2017, Paul Fox  wrote:
 > 
 > > Bowie Bailey wrote:
 > >  > On 5/4/2017 10:35 AM, Paul Fox wrote:
 > >  > >
 > >  > > is there a nice perl way to do something like this?  syntax
 > >  > > intentionally left vague:
 > >  > >
 > >  > >  $Conf{BackupFilesExclude} += { ...  '/home' }
 > >  > >
 > >  > > i'd like to be able to append to either the '*' catchall array
 > >  > > or a share-specific array.
 > >  >
 > >  > Off the top of my head, you could do it like this:
 > >  >
 > >  > push @{$Conf{BackupFilesExclude}{'*'}}, '/dir1', '/dir2', '/dir3';
 > >  > push @{$Conf{BackupFilesExclude}{'/'}}, '/dir4', '/dir5', '/dir6';
 > >
 > > thanks!  syntax was perfect.
 > >
 > > paul


=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 52.7 degrees)




Re: [BackupPC-users] appending to the Excludes hash

2017-05-04 Thread Paul Fox
Bowie Bailey wrote:
 > On 5/4/2017 10:35 AM, Paul Fox wrote:
 > >
 > > is there a nice perl way to do something like this?  syntax
 > > intentionally left vague:
 > >
 > >  $Conf{BackupFilesExclude} += { ...  '/home' }
 > >
 > > i'd like to be able to append to either the '*' catchall array
 > > or a share-specific array.
 > 
 > Off the top of my head, you could do it like this:
 > 
 > push @{$Conf{BackupFilesExclude}{'*'}}, '/dir1', '/dir2', '/dir3';
 > push @{$Conf{BackupFilesExclude}{'/'}}, '/dir4', '/dir5', '/dir6';

thanks!  syntax was perfect.

paul
=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 54.9 degrees)




[BackupPC-users] appending to the Excludes hash

2017-05-04 Thread Paul Fox
i guess this question is really about perl syntax, but it might
be something commonly done by backuppc users, so here goes:

i have the following in my global config:

$Conf{BackupFilesExclude} = {
  '/' => [  
'/proc', 
'/sys', 
],
  '*' => [  
'.cache',
'.gvfs',
'slocate.db',  
'ID',
'*._nobackup_',
'*.o'
]
};


sometimes i wish to augment this list for a particular host:

$Conf{BackupFilesExclude} = {
  '/' => [  
'/home', # /home is backed up separately on this host
'/proc', 
'/sys', 
],
  '*' => [  
'.cache',
'.gvfs',
'slocate.db',  
'ID',
'*._nobackup_',
'*.o'
]
};

clearly it would be cleaner not to have to duplicate the entire
data structure in the host's config file.

is there a nice perl way to do something like this?  syntax
intentionally left vague:

$Conf{BackupFilesExclude} += { ...  '/home' }

i'd like to be able to append to either the '*' catchall array
or a share-specific array.

paul
p.s. it's a testament to backuppc's stability that i've had my
subscription to this list disabled for almost 10 years!
=--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 58.5 degrees)




Re: [BackupPC-users] Hard links & encryption

2008-02-07 Thread Paul Fox
stephen wrote:
 > On Wed, 6 Feb 2008, Les Mikesell wrote:
 > 
 > > You also have to know how many references there are to each pool item.
 > > That is, pretty much duplicate the code of a filesystem without gaining
 > > much.  And you can't let any of this change for the duration it takes to
 > > complete your mirroring.
 > 
 > I understand how the hardlinks are used (and it works pretty well) but I 
 > can't help but think that a database of file references would work as well 
 > (possibly better) than the hardlinks...

personally, i'd want to weigh that against the full transparency and
simplicity of the current solution.  i like that i can cd into
the PC tree and/or the pool if i need to -- for instance, if the
restore tools aren't currently available or convenient.  i have
trouble believing that a database solution would be as
satisfying, to me.

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 27.3 degrees)



Re: [BackupPC-users] Hard links & encryption

2008-02-06 Thread Paul Fox
Robin Lee Powell <[EMAIL PROTECTED]> wrote:
 > On Wed, Feb 06, 2008 at 09:33:47AM -0800, Robin Lee Powell wrote:
 > > This reminds me: is there some fundamental reason backuppc can't
 > > use symlinks?  It would make so many things like this *so* much
 > > easier. It such a great package otherwise; this is the only thing
 > > that's given me cause to be annoyed with it.
 > 
 > Still wondering this.

with hard links, you can tell that a file in the main pool is no
longer needed, by looking at its link count.  when the link count
goes to 1, none of the per-PC backup trees is referencing it, so
it can be deleted.  (this is what the BackupPC_trashClean process
does.)

with symlinks, you wouldn't get that reference count, and "garbage
collection" would be much more expensive.
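the reference count is right there in stat().  a minimal sketch of the idea (not BackupPC_trashClean's actual code):

```python
import os

def pool_orphans(pool_root):
    """Yield pool files whose link count has fallen to 1 -- i.e. no
    per-PC backup tree holds a hard link to them any longer, so the
    pool copy is the only remaining reference and can be reclaimed."""
    for dirpath, _dirs, files in os.walk(pool_root):
        for name in files:
            full = os.path.join(dirpath, name)
            if os.stat(full).st_nlink == 1:
                yield full
```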

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 33.6 degrees)



Re: [BackupPC-users] web interface with elinks/links2 ?

2008-01-11 Thread Paul Fox
Rob Owens <[EMAIL PROTECTED]> wrote:
 > 
 > Paul Fox wrote:
 > > i may have been mistaken about what changed.  googling, i just
 > > found the following thread, from this list, from august of this
 > > year.  rob owens describes my problem exactly, and claims the
 > > buttons stopped working with 3.0.0, not with a change in browser:
 > > 
 > 
 > Paul,
 > 
 > I'm 100% sure that the problem for me occurred immediately after
 > upgrading to 3.0.0.  However, I can't be 100% sure that I didn't upgrade
 > elinks at the same time.  I don't think I did, but I can't guarantee it.

craig has confirmed that it's a backuppc change, and it seems it's
going to stay that way.  oh well.

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 33.8 degrees)



Re: [BackupPC-users] hard links, tar vs. rsync

2008-01-08 Thread Paul Fox
"Nils Breunese (Lemonbit)" <[EMAIL PROTECTED]> wrote:
 > Paul Fox wrote:
 > 
 > > (i still think "--hard-links" should be mentioned in the config file
 > > comments somewhere, or perhaps should even become the default.)
 > 
 > I believe --hard-links is a default option. At least I don't recall  
 > adding it and I have it in my config.pl.

sigh.  i'm not doing well on this, am i.  you're right -- i just checked
the 3.0.0 tarball.  i must have dropped '--hard-links' when i brought my
config forward from 2.1.x.

sorry, again, for the noise.

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 50.5 degrees)



Re: [BackupPC-users] hard links, tar vs. rsync

2008-01-08 Thread Paul Fox
please ignore this entire post.  egregious user error was involved.

my apologies.

summary:  adding --hard-links to the commandline for a previously backed-up
system works fine -- files that used to be just files become hard-links,
as they should.

sorry for the noise.

(i still think "--hard-links" should be mentioned in the config file
comments somewhere, or perhaps should even become the default.)

paul


i wrote:
 > i wrote:
 >  > many thanks, craig.  somehow i overlooked this response until
 >  > today, and in my earlier testing, i hadn't added '--hard-links'. 
 >  > i hadn't yet had time to debug my lack of hard links, so i'm glad i
 >  > found your reply.
 > 
 > hmmm.  it seems i may have created a problem for myself.
 > 
 > i've been doing backups with tar, for forever.
 > 
 > i have files in the pool that are marked as hardlink, and which
 > restore as such (using tar, at least -- haven't tried rsync, and
 > i'm not too concerned about that.)
 > 
 > a few days ago i tried switching that host to use rsync, but i
 > neglected to add the "--hard-links" argument.  i did a single
 > full backup.  the same files in the pool (which were hard-links in
 > tar backups) were now just regular unlinked files.
 > 
 > today, having found craig's message, i added "--hard-links" to
 > RsyncArgs, and did another full, thinking it would fix it.  but
 > the same files are still just files, and not hard-links.  
 > the full backup command was:
 >   Running: /usr/bin/ssh -x -l root stump /usr/bin/rsync --server --sender 
 > --numeric-ids --perms --owner --group -D --one-file-system --hard-links 
 > --links 
 > --times --block-size=2048 --recursive --ignore-times . /
 > 
 > another test, using a different, newly converted (i.e. from tar
 > to rsync) host, tells me that a fresh full backup using the
 >  > correct --hard-links argument works fine, and hard-links are preserved
 > correctly.
 > 
 > so -- is there anything i can do to "fix" a host where there are
 > files in the wrong "link" state, presumably as a result of
 > running an rsync backup without '--hard-links'?
 > 
 > paul
 > =-
 >  paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 37.4 degrees)

=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 44.6 degrees)



Re: [BackupPC-users] web interface with elinks/links2 ?

2008-01-07 Thread Paul Fox
Carl Wilhelm Soderstrom <[EMAIL PROTECTED]> wrote:
 > On 01/04 01:29 , Paul Fox wrote:
 > > oh, sure -- there are lots of ways of exporting a browser session --
 > > VNC, or even X11 over ssh (which is very slow, but okay once in a
 > > blue moon).  but i spend 90% of my time in ssh within an xterm,
 > > and elinks is (or, rather, "was") _so_ quick to use for a quick
 > > backuppc status check or file restore that i'd like to figure out how
 > > to keep using it if i can.
 > 
 > why not use your local browser down an ssh tunnel?
 > I even set up aliases for the most common ones I use:

thanks.  that's another good workaround for the original problem.  :-)
(and probably easier to implement than the vnc stuff i was doing a
while ago.)

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 41.4 degrees)



Re: [BackupPC-users] hard links, tar vs. rsync

2008-01-06 Thread Paul Fox
i wrote:
 > many thanks, craig.  somehow i overlooked this response until
 > today, and in my earlier testing, i hadn't added '--hard-links'. 
 > i hadn't yet had time to debug my lack of hard links, so i'm glad i
 > found your reply.

hmmm.  it seems i may have created a problem for myself.

i've been doing backups with tar, for forever.

i have files in the pool that are marked as hardlink, and which
restore as such (using tar, at least -- haven't tried rsync, and
i'm not too concerned about that.)

a few days ago i tried switching that host to use rsync, but i
neglected to add the "--hard-links" argument.  i did a single
full backup.  the same files in the pool (which were hard-links in
tar backups) were now just regular unlinked files.

today, having found craig's message, i added "--hard-links" to
RsyncArgs, and did another full, thinking it would fix it.  but
the same files are still just files, and not hard-links.  
the full backup command was:
  Running: /usr/bin/ssh -x -l root stump /usr/bin/rsync --server --sender 
--numeric-ids --perms --owner --group -D --one-file-system --hard-links --links 
--times --block-size=2048 --recursive --ignore-times . /

another test, using a different, newly converted (i.e. from tar
to rsync) host, tells me that a fresh full backup using the
correct --hard-links argument works fine, and hard-links are preserved
correctly.

so -- is there anything i can do to "fix" a host where there are
files in the wrong "link" state, presumably as a result of
running an rsync backup without '--hard-links'?

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 37.4 degrees)



Re: [BackupPC-users] hard links, tar vs. rsync

2008-01-06 Thread Paul Fox
Craig Barratt <[EMAIL PROTECTED]> wrote:
 > Paul writes:
 > 
 > > for a long time there was an issue with using rsync as the transport,
 > > since hard links would not be preserved in the backups.  i believe this
 > > is fixed now -- can someone remind me in which release the fix appeared?
 > > (i'm running 3.0.0.)
 > 
 > Yes, hardlinks with rsync work in 3.x.  Just add '--hard-links' to
 > $Conf{RsyncArgs} and $Conf{RsyncRestoreArgs}.

many thanks, craig.  somehow i overlooked this response until
today, and in my earlier testing, i hadn't added '--hard-links'. 
i hadn't yet had time to debug my lack of hard links, so i'm glad i
found your reply.

perhaps '--hard-links' should be in the default value of
RsyncArgs and RsyncRestoreArgs?  this would be a change in
"historic" behavior, but it would make rsync and tar more equivalent.
or, at the least, perhaps '--hard-links' should be mentioned in
the comments adjacent to the RsyncArgs values.  (while i'm at
it:  "--one-file-system" should probably be mentioned right
adjacent as well -- it's discussed in other places, but not right
there.)
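
(for anyone wanting to make that change now, the overrides would look
roughly like this in config.pl -- a sketch only, since the stock
argument list varies by version; start from the RsyncArgs your
install ships with:

```perl
# add --hard-links (and --one-file-system, if wanted) to the stock
# rsync argument lists; keep backup and restore lists in sync so
# restores recreate the links too.
$Conf{RsyncArgs} = [
    # ... your version's default arguments here ...
    '--one-file-system',
    '--hard-links',
];
$Conf{RsyncRestoreArgs} = [
    # ... your version's default arguments here ...
    '--hard-links',
];
```
)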

and while i'm really at it:  another bit of text that could be
fixed is the FAQ/RoadMap.  i've since found that the top of the
FAQ says it's been replaced by the wiki, but since it's indexed
by google, one can easily end up on a sub-page, which may have
out of date information (like "hard-links aren't supported").

 > There is, however, a subtle issue that hardlinks in backups made with
 > tar won't be restored correctly with rsync (some of the files will be
 > regular files rather than hardlinks).  If you need to restore an older
 > backup, just switch the xfer method back to tar.  Once the older tar
 > backups expire then this won't be an issue.

you've made me curious -- once the files and metadata are in the pool,
what's the difference between files (or, rather, hard links) backed 
up by tar vs. rsync?

again, many thanks for all of the hard, and excellent, work that's
gone into backuppc.

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 36.9 degrees)



Re: [BackupPC-users] web interface with elinks/links2 ?

2008-01-04 Thread Paul Fox
i wrote:
 > i used to be able to access and operate the backuppc web interface
 > using elinks or links2.  after a system upgrade, which brought me
 > new copies of those text browsers, i can no longer do so.  backuppc
 > did not change (i'm at 3.0.0).

i may have been mistaken about what changed.  googling, i just
found the following thread, from this list, from august of this
year.  rob owens describes my problem exactly, and claims the
buttons stopped working with 3.0.0, not with a change in browser:

 http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg06412.html

janne pikkarainen proposes a solution here:

 http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg06569.html

craig -- any chance of this change being made?

paul
=---------
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 24.1 degrees)



Re: [BackupPC-users] web interface with elinks/links2 ?

2008-01-04 Thread Paul Fox
Les Mikesell <[EMAIL PROTECTED]> wrote:
 > Paul Fox wrote:
 > > i used to be able to access and operate the backuppc web interface
 > > using elinks or links2.  after a system upgrade, which brought me
 > > new copies of those text browsers, i can no longer do so.  backuppc
 > > did not change (i'm at 3.0.0).
 ...
 > Do you have to use a text based browser?  If the reasons for doing so 
 > are that you only have ssh access and/or you have a low bandwidth 
 ...

oh, sure -- there are lots of ways of exporting a browser session --
VNC, or even X11 over ssh (which is very slow, but okay once in a
blue moon).  but i spend 90% of my time in ssh within an xterm,
and elinks is (or, rather, "was") _so_ quick to use for a quick
backuppc status check or file restore that i'd like to figure out how
to keep using it if i can.

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 22.3 degrees)



[BackupPC-users] web interface with elinks/links2 ?

2008-01-04 Thread Paul Fox
i used to be able to access and operate the backuppc web interface
using elinks or links2.  after a system upgrade, which brought me
new copies of those text browsers, i can no longer do so.  backuppc
did not change (i'm at 3.0.0).

the problem is that the button for, for example, "Start Full Backup"
is recognized by elinks/links2 as "Harmless Button".  i'm not sure
what that's supposed to mean, but i do know that the buttons do nothing.

has anyone else seen this?  might there be a workaround?

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 19.2 degrees)



[BackupPC-users] hard links, tar vs. rsync

2007-12-28 Thread Paul Fox
for a long time there was an issue with using rsync as the transport,
since hard links would not be preserved in the backups.  i believe this
is fixed now -- can someone remind me in which release the fix appeared?
(i'm running 3.0.0.)

i've been using tar for all of my machines because of this, and i think
i'd like to switch to rsync.  are there any caveats re: switching from
one to the other for a client with lots of existing backups?  any tuning
of the rsync commandline, for instance?

currently i'm using this for all clients:
$Conf{TarClientCmd} = \
 '$sshPath -x -q -n -l root $host $tarPath \
  --one-file-system -c -v -f - -C $shareName+ --totals';

i just don't want to screw something up when i switch over.  (i'll start
with a less important client, in any case. :-)
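
(for what it's worth, the switch itself is just a per-host override,
along these lines -- a sketch, not a drop-in: the rsync defaults
differ by version, so copy the argument list from your installed
config.pl rather than from here:

```perl
# in the per-host config: switch this host from tar to rsync over ssh
$Conf{XferMethod} = 'rsync';
# carry the tar command's --one-file-system behavior over to rsync,
# and preserve hard links while we're at it
$Conf{RsyncArgs} = [
    # ... your version's default arguments here ...
    '--one-file-system',
    '--hard-links',
];
```
)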

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 32.2 degrees)



Re: [BackupPC-users] Idea For BackupPC Improvement

2007-12-20 Thread Paul Fox
les wrote:
 > Paul Fox wrote:
 > 
 > > of course, they don't believe in man pages, do they...   sigh.  what
 > > a crock.
 > 
 > It's gnu, remember - the people who insist that you need the source to 
 > do anything useful.

yeah, i know.

i'm just cranky because i just found out the OLPC give-1-get-1 program
lost my address.  :-) :-/

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 25.3 degrees)



Re: [BackupPC-users] Idea For BackupPC Improvement

2007-12-20 Thread Paul Fox
Les Mikesell <[EMAIL PROTECTED]> wrote:
 > Yes but it is specific to gnutar not anything general. Try timing:

that's worse, of course!!

 > tar --totals --one-file-system -cf /dev/null /
 > or
 > tar --totals --one-file-system -cf - / > /dev/null
 > vs.
 > tar --totals --one-file-system -cf - / |cat > /dev/null
 > 
 > 
 > Next to none of the difference comes from the overhead of running 'cat'.
 > The feature does make sense, especially if you run amanda - she is smart 
 > enough to adjust the full/incremental mix to fill a tape every night and 
 > still get at least some incremental level of every machine if it can 
 > possibly fit.  But, I agree that it would have been cleaner to add an 
 > explicit option instead of magically detecting a connection to 
 > /dev/null.  ...

exactly.  that's the kind of irresponsible behavior i'd expect from
a windows program.  they've broken a compact with the user.  and worse,
i can find no mention of this aberrant behavior in the man page.  
(strace confirms that your description of the behavior is correct.)

i can't believe they could be so _stupid_.

paul
of course, they don't believe in man pages, do they...   sigh.  what
a crock.

=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 25.3 degrees)



Re: [BackupPC-users] Idea For BackupPC Improvement

2007-12-20 Thread Paul Fox
les wrote:
 > Les Stott wrote:
 > 
 > > tar -cvf /dev/null
 > >  
 > > the tar to /dev/null actually doesn't take that long at all, maybe a few 
 > > minutes depending on the size.
 > 
 > Gnu tar actually recognizes if stdout is connected to /dev/null (even if 
 > you redirect instead of specifying -f) and doesn't bother to read the 
 > file contents.  I think that was an optimization intended to be used 

rgh.  that's absurd.  are you sure?  it would completely eliminate
any usefulness of /dev/null.  

i once heard a similar (and possibly apocryphal) story about some
CPU h/w engineers that thought they were doing the s/w folks
a favor by making the NOP instruction take zero time. :-)  not
sending output to /dev/null makes just as much sense.

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 25.3 degrees)



Re: [BackupPC-users] 4 x speedup with one tweak to freebsd server

2007-11-06 Thread Paul Fox
stephen wrote:
 > On Tue, 6 Nov 2007, Paul Fox wrote:
 > 
 > > this is perfect wiki fodder, i'd say...
 > 
 > Yes and no. It's more correctly classified as a general system performance 
 > tweak rather than something BackupPC specific. At best it belongs in the 
 > off-topic area.

i think you're splitting hairs.  by that logic, everything
related to disk and volume management (raid, pool copying) would
be off-topic, since it's "general system operations".

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 46.0 degrees)



Re: [BackupPC-users] 4 x speedup with one tweak to freebsd server

2007-11-06 Thread Paul Fox
this is perfect wiki fodder, i'd say...

=-----
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 46.4 degrees)



[BackupPC-users] minor runtime/config error

2007-11-02 Thread Paul Fox
hi --

i recently did a fresh install of 3.0.0.  i have no PCs in my
network, and the machine has no samba software installed on it.

backuppc complains about smbclient and nmblookup being missing,
and refuses to run, though i have no use for either one.  i.e.:

# /etc/init.d/backuppc restart
Restarting backuppc: 2007-11-02 12:11:33 $Conf{SmbClientPath} = 
'/usr/bin/smbclient' is not a valid executable program

my workaround was to set those paths to /bin/cat.  (experimenting
just now, it seems i could also have set them to null.)

in contrast, i also didn't have File::RsyncP installed (which i
needed for one host -- mostly i use tar), and that error didn't
show up until i tried to run a backup for that host, and the
error was reported right in the host summary status box for that
host.  this seems like the preferable failure mode, to me.

paul
=---------
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 45.0 degrees)



Re: [BackupPC-users] tar exclude (was: 3.1.0beta1 released )

2007-10-23 Thread Paul Fox
Frans Pop <[EMAIL PROTECTED]> wrote:
 > On Tuesday 23 October 2007, Paul Fox wrote:
 > > i missed the discussion of this change, and can't find it in my
 > > saved mail:
 > >  > * Fixed handling of $Conf{BackupFilesExclude} for tar XferMethod.
 > >  >   Patch supplied by Frans Pop.
 > 
 > It's not a change. It's a fix for a regression from 2.1.2 to 3.0.0. See:
 > http://sourceforge.net/mailarchive/forum.php?thread_name=200708121944.00852.elendil%40planet.nl&forum_name=backuppc-devel
 > and the Debian bug report linked from there.
 > 
 > > do current users of exclude and tar need to make any changes?
 > 
 > Possibly, but only if you worked around the regression in some way.

thanks.

your bug report says:
Since BackupPC 3.0 Xfer/Tar.pm prepends "./" before _all_
file exclusions.  Effectively, this means that it is no
longer possible to exclude files by name only, you would
always have to list the full path for each individual file.

my excludes say:
$Conf{BackupFilesExclude} = {
  '*' => [
'/proc',
'/sys',
'*._nobackup_'
  ]
};

combining the two, i would conclude that my exclude of files
whose names match '*._nobackup_' should not work.  but my
exclude is working fine -- no files or directories matching that
pattern get backed up.

so i'm confused.  did i "work around the regression in some way"?  :-)

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 58.1 degrees)



Re: [BackupPC-users] tar exclude (was: 3.1.0beta1 released )

2007-10-23 Thread Paul Fox
i missed the discussion of this change, and can't find it in my
saved mail:

 > 
 > * Fixed handling of $Conf{BackupFilesExclude} for tar XferMethod.
 >   Patch supplied by Frans Pop.

do current users of exclude and tar need to make any changes?

paul
=-----
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 56.8 degrees)



Re: [BackupPC-users] wiki and forums

2007-10-11 Thread Paul Fox
Craig Barratt <[EMAIL PROTECTED]> wrote:
 > I don't have a proposal for a hosted site.  Can someone volunteer
 > something that is sure to have a long available life?  We might
 > consider code.google.com - it provides project hosting and a Wiki
 > but I don't know if it has the right features.

whoever is evaluating the wiki technology should probably at least
consider the wiki feature on sourceforge, since if nothing else
it will probably satisfy the "long available life" requirement.

http://www.wiki.sourceforge.net/

(i'm not quite sure how membership/authorization work.)

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 53.2 degrees)



Re: [BackupPC-users] wiki and forums

2007-10-10 Thread Paul Fox
les wrote:
 > Neither forums nor email lists are great for accumulating answers to 
 > recurring problems in a way that others can find them.  A wiki is 
 > perfect for that but it takes some manual intervention to gather 
 > information out of the the mail list and put it there in a more usable 
 > format.
 > 

right -- without an interested/active maintainer, i find wikis to
be more promise than content.  there's not enough structure.

i'd prefer something like a wiki, but more structured, something,
say, with dedicated FAQ and HOWTO sections that could be edited
easily, with automatic indexing and search capabilities.  something
like faq-o-matic, which is where i first saw the concept in action:
http://faqomatic.sourceforge.net/fom-serve/cache/427.html
i'm sure there are others.

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 52.9 degrees)



Re: [BackupPC-users] wiki and forums

2007-10-10 Thread Paul Fox
doug wrote:
 > dan wrote:
 > > just curious, why not visit forums?
 > >
 > 
 > Requires yet another thing to sign into, another place I have to go to.
 > Mailing lists come to me, makes it easier to archive.  When the network

there seems to be a vacuum in the open-source world for a forum
package that properly integrates mailing list access.  yahoo groups
seems to have actually gotten it right.  (except for the advertising.  :-)

paul
=---------
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 51.8 degrees)



Re: [BackupPC-users] BackupPC and SMTP??

2007-07-25 Thread Paul Fox
"Gustavo Azambuja" <[EMAIL PROTECTED]> wrote:
 > Hi, i need to send alerts and reports without sendmail, and use smtp from my
 > ISP. can i do that?

"without sendmail"?

can you use a simpler sendmail replacement, instead, like ssmtp?

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 77.9 degrees)



Re: [BackupPC-users] BackupPC Live

2007-07-23 Thread Paul Fox
Carl Wilhelm Soderstrom <[EMAIL PROTECTED]> wrote:
 > On 07/23 09:22 , Rob Owens wrote:
 > > I've remastered Knoppix  before, 
 ...
 > ...  At
 > that point you install backuppc to the Knoppix ramdrive (since it uses

i suspect this is the step rob was going to avoid by remastering.

paul
=-----
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 63.1 degrees)



[BackupPC-users] standalone backuppc_zcat?

2007-07-19 Thread Paul Fox
is there a standalone executable which will undo backuppc's
compression?

yesterday i needed some files from an old backup, only available
on our offsite disk.  i realize that we should have a second
(idle) backuppc installation available for such eventualities,
but we don't, and i simply mounted the disk containing the old
pool, navigated to the right place, and copied the files i
needed.  renaming/etc was no problem, but then to uncompress, i
had to copy the files to a system with backuppc installed, so
that i could run them through BackupPC_zcat.

what i would have loved to have instead was a simple C program
version of that script.  (or, better, a backup system that uses
gzip or bzip for its compression.  :-)

is there a standalone decompressor?

what are the reasons for backuppc using a "non-standard"
compression?  (or, am i way off base?  am i simply using the
wrong program or arguments to decompress?)

paul
=---------
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 64.6 degrees)



Re: [BackupPC-users] host config editing via web?

2007-04-18 Thread Paul Fox
Les Mikesell <[EMAIL PROTECTED]> wrote:
 > Paul Fox wrote:
 > > while i've always just edited the pc/<host>/config.pl to make
 > > host config changes (usually to add/change a
 > > $Conf{ClientNameAlias} setting when a host's DNS name changes),
 > > with 3.0.0, i figured i'd try using the web.  as far as i can
 > > tell, there's no way to edit the per-host configuration overrides
 > > via the web.  is that true?  or am i being dense?
 > 
 > Go to the host's 'home' page, look on the left for 'Edit Config'.

whew -- thank you!  i was sure it must be there somewhere.  i was
only looking on the main config edit pages.

it seems to me that ideally there would be a link to editing the
per-host config in the add/delete table on the Edit Hosts page,
but in lieu of that, perhaps a note on the Edit Hosts page
describing where to find the per-host config parameters would be
useful.

paul
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 37.6 degrees)



[BackupPC-users] host config editing via web?

2007-04-18 Thread Paul Fox
while i've always just edited the pc/<host>/config.pl to make
host config changes (usually to add/change a
$Conf{ClientNameAlias} setting when a host's DNS name changes),
with 3.0.0, i figured i'd try using the web.  as far as i can
tell, there's no way to edit the per-host configuration overrides
via the web.  is that true?  or am i being dense?
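
(for reference, the kind of per-host override in question is just a
line or two in that file, e.g. -- hostname made up for illustration:

```perl
# pc/<host>/config.pl: follow the machine when its DNS name changes
$Conf{ClientNameAlias} = 'newname.example.com';
```
)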

paul
=---------
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 35.2 degrees)



Re: [BackupPC-users] link counting fixed in 3.0.0? [Was: Subversion working copies cause too many links]

2007-03-31 Thread Paul Fox
Gregor Schmid <[EMAIL PROTECTED]> wrote:
 > Is there anything special to consider when upgrading from 2.1 to 3.0?
 > I found very little on the net and in the docs, I guess that implies
 > that configure.pl is working just fine.

it works very well.  for those of us not fluent in perl, the hardest
part is figuring out how to upgrade/install the required packages
from cpan.  other than that, my upgrade went flawlessly.

paul
=-----
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 32.9 degrees)



Re: [BackupPC-users] if system is on UTC...

2007-03-08 Thread Paul Fox
jonathan wrote:
 > 
 > For Linux / Unix servers, it is kind of tempting to just change system 
 > time to UTC, forget about future modifications to DST, and get used to 
 > making sense of the logs in UTC time.  However, things like cron jobs 
 > and BackupPC blackout times are now going to be time shifted as if UTC 
 > is now local time e.g. cron job that used to run at 2am EST is now 
 > running at 2am UTC or 9pm EST or 10pm EDT.

why would any of this be easier than letting the system track DST
itself?  even on my ancient RH7.2-based server, updating the
zoneinfo files took me all of 5 minutes.  and for anything more
modern, a semi-automatic upgrade (i.e. "apt-get update; apt-get
upgrade") took care of it quite a long time ago without me even
noticing.

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 14.9 degrees)



[BackupPC-users] switching from tar to rsync

2007-01-26 Thread Paul Fox
hi --  i have a couple of hosts at remote sites that i've started
doing (partial) backups on.  since i use tar everywhere locally,
that's how i configured them at first.  i realized later that that
wasn't what i wanted, so i switched them to rsync, and immediately
did a full backup on each.  no reason this shouldn't be okay, right?

the one suspicious thing was this -- the log has a _lot_ of messages
that look like this:
Unexpected call BackupPC::Xfer::RsyncFileIO->unlink(.bashrc)
is this "normal"?  i'm guessing it's because on a full, rsync is trying
to remove a previous version of the file that might be there, since
this isn't backup #0.  can i assume this is benign?

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 0.3 degrees)



[BackupPC-users] mothballing a host's previous contents

2006-11-01 Thread Paul Fox
i'm pretty confident i'm okay on this, but just to be sure:

i have a host that i've reinstalled with a new OS.  i want to
keep its former backups around for quite a while, in case i need
something.  the hostname remains the same ("stump").  in the
backuppc configs, i've done this:

mv pc/stump pc/oldstump
mkdir pc/stump
cp pc/oldstump/config.pl pc/stump
chown -R backuppc:backuppc pc/stump

and then i edited conf/hosts to add a line for "oldstump".

is this sufficient?

are there other places where a machine's hostname is stored which
might cause the two trees' contents to be confused somehow later on?
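
the steps above, run against a throwaway directory standing in for the
real TopDir (names here are made up; on the live server the chown to the
backuppc user still matters):

```shell
# mothballing steps against a throwaway TopDir; names are made up
top=$(mktemp -d)
mkdir -p "$top/pc/stump" "$top/conf"
echo '$Conf{XferMethod} = "tar";' > "$top/pc/stump/config.pl"
echo "stump 0 backuppc" > "$top/conf/hosts"

mv "$top/pc/stump" "$top/pc/oldstump"       # park the old tree
mkdir "$top/pc/stump"                       # fresh tree, same name
cp "$top/pc/oldstump/config.pl" "$top/pc/stump"
echo "oldstump 0 backuppc" >> "$top/conf/hosts"
# on the real server, also: chown -R backuppc:backuppc "$top/pc/stump"
```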

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 55.9 degrees)



Re: [BackupPC-users] "disk too full"

2006-10-15 Thread Paul Fox
Craig Barratt <[EMAIL PROTECTED]> wrote:
 > Paul Fox writes:
 > 
 > > my backup pool disk is 96% full, and backuppc has stopped doing
 > > backups.  i have no problem with that.
...
 > > what bothers me is that it never told me.  there's no notice on
 ...
 > 
 > Looks like you haven't setup the admin email, $Conf{EMailAdminUserName}.

hmm.  sorry for jumping to the wrong assumption.  i confess i was
surprised that you wouldn't have thought of this.  :-)

i do have that configured, but have definitely never gotten that mail
to the configured address.  doing "su - backuppc" and running
/sbin/sendmail -t -f backuppc pgf-admin
by hand sends mail correctly.  and i do get messages of the form:

Date: 11 Sep 2006 05:43:18 -
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: BackupPC administrative attention needed

The following hosts had an error that is probably caused by a
misconfiguration.  Please fix these hosts:
  - mulch (Unexpected end of tar archive)

Regards,
PC Backup Genie

so i'd say mail itself is working.

last night's log does contain the line:
2006-10-15 01:00:00 24hr disk usage: 96% max, 96% recent, 22 skipped hosts

so backuppc was aware of the error, and $Info{DUDailySkipHostCntPrev} was
nonzero, which should have triggered the mail.  i'm somewhat confused.

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 49.6 degrees)



[BackupPC-users] "disk too full"

2006-10-15 Thread Paul Fox
my backup pool disk is 96% full, and backuppc has stopped doing
backups.  i have no problem with that.

i don't look at the PC status page every day, and only found out
that things were amiss when i got one of the "your machine hasn't
been backed up for a week" messages.  it turns out that none of
my machines have been backed up in that time.

what bothers me is that it never told me.  there's no notice on
the status page that the disk is "too full", and no mail was sent
when the disk filled up.  i'd think a message to the effect of
"your PC or laptop was not backed up because there is not enough
space left on the backup server" would be entirely appropriate.

(i'm running 2.1.0pl1 -- i know i need to upgrade.  if this
behavior has changed in more recent releases, i apologize in
advance for the noise.)

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 48.2 degrees)



Re: [BackupPC-users] rsync trying to backup kcore

2006-09-21 Thread Paul Fox
Toby Johnson <[EMAIL PROTECTED]> wrote:
 > >
 > > Well, sorry it's
 > >
 > > $Conf{BackupFilesExclude} = {'/var' => 
 > > ['named/chroot/dev','named/chroot/etc','named/chroot/proc','log',]};
 > >
 > > without the first slash. I usually back up from the root 
 > > ($Conf{RsyncShareName} = ['/'];)
 > > so I can exclude with absolute paths. but when you exclude from another 
 > > directory 
 > > you must use relative paths
 > >   
 > 
 > Thanks, that did the trick! The documentation made it seem as though the 
 > { share } => [files] syntax was for SMB only so I hadn't tried that at all.

for my own understanding -- the problem could have been solved without
this syntax, if only the original pathnames had been relative rather than
absolute, correct?  i.e., by changing this:

$Conf{BackupFilesExclude} = ['/var/named/chroot/dev', 
'/var/named/chroot/etc', '/var/named/chroot/proc', '/var/log'];

to this:

$Conf{BackupFilesExclude} = ['var/named/chroot/dev', 
'var/named/chroot/etc', 'var/named/chroot/proc', 'var/log'];

correct?

or should it have been this:
$Conf{BackupFilesExclude} = ['named/chroot/dev', 
'named/chroot/etc', 'named/chroot/proc', 'log'];
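
as i understand the two forms, these should be equivalent for a '/var'
share (a sketch only -- worth checking against the docs for your version):

```perl
# flat list: patterns are taken relative to each share's root
$Conf{RsyncShareName}     = ['/var'];
$Conf{BackupFilesExclude} = ['named/chroot/dev', 'named/chroot/etc',
                             'named/chroot/proc', 'log'];

# share-keyed hash: scopes the excludes to one share explicitly
$Conf{BackupFilesExclude} = {
    '/var' => ['named/chroot/dev', 'named/chroot/etc',
               'named/chroot/proc', 'log'],
};
```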

=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 49.3 degrees)



Re: [BackupPC-users] CIFS/SMB for Data Store (NAS)

2006-07-29 Thread Paul Fox
Filipe <[EMAIL PROTECTED]> wrote:
 > Paul Fox escreveu:
 > > [EMAIL PROTECTED] wrote:
 > >  > The NAS I'm looking at getting uses Windows XP Embeded... or something, 
 > >  > and it only supports SMB/CIFS I'm not sure what is required for 
 > >  > Hardlinks? I am guessing that the FS for the NAS will be NTFS
 > >
 > > if i were a betting man, i'd bet a lot of money that this won't work.
 > > i don't think any microsoft filesystem supports hard links.
 > >
 > 
 > I'm new to this, never heard of hard links, but I used wikipedia and:
 > http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/fsutil_hardlink.mspx?mfr=true
 > http://msdn.microsoft.com/library/default.asp?url=/library/en-us/fileio/fs/createhardlink.asp
 > 
 > so...?!

so, that may explain why i'm not a betting man.  i didn't know NTFS had
the concept of hard links.

i'd still be wary of assuming backuppc will be able to use them
successfully.  i've never heard of anyone hosting the backuppc pool
on a non-unix filesystem.
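
one cheap way to settle it before committing a pool to such a box: mount
the share and probe for real hard-link support (a sketch; point `dir` at
the mounted filesystem instead of mktemp's default):

```shell
# probe hard-link support on the filesystem holding $dir
dir=$(mktemp -d)
echo data > "$dir/a"
if ln "$dir/a" "$dir/b" 2>/dev/null; then
    # a link count of 2 on both names means genuine hard links
    echo "hardlinks ok ($(ls -l "$dir/a" | awk '{print $2}') links)"
else
    echo "no hardlink support on this filesystem"
fi
```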

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 69.8 degrees)



Re: [BackupPC-users] CIFS/SMB for Data Store (NAS)

2006-07-28 Thread Paul Fox
[EMAIL PROTECTED] wrote:
 > The NAS I'm looking at getting uses Windows XP Embeded... or something, 
 > and it only supports SMB/CIFS I'm not sure what is required for 
 > Hardlinks? I am guessing that the FS for the NAS will be NTFS

if i were a betting man, i'd bet a lot of money that this won't work.
i don't think any microsoft filesystem supports hard links.

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 66.2 degrees)



[BackupPC-users] nonstart with empty status.pl

2006-06-26 Thread Paul Fox
recently i had a problem where backuppc didn't start after a reboot.
when i finally noticed (since usually it runs so well, i sometimes
forget to check), it turned out that somehow the log/status.pl file
was empty, and this was causing an error on startup.  removing the
file entirely allowed things to proceed as usual.

this is easy to reproduce by replacing status.pl with an empty
file.  (i seem to be running 2.1.0.)
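
until the startup code handles it, a guard like this in front of the
daemon start is one possible workaround (a sketch; the log directory
path is a stand-in for wherever your install keeps status.pl):

```shell
# guard sketch: clear a zero-length status.pl before starting the
# daemon.  LOGDIR is a stand-in for the real log directory.
LOGDIR=$(mktemp -d)
: > "$LOGDIR/status.pl"              # simulate the truncated file

if [ -e "$LOGDIR/status.pl" ] && [ ! -s "$LOGDIR/status.pl" ]; then
    rm "$LOGDIR/status.pl"           # empty file: remove it
fi
# ...then start backuppc as usual
```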

paul
=-----
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 64.6 degrees)



Re: [BackupPC-users] Multiple host/directory configuration question

2006-04-19 Thread Paul Fox
 > My question is if I have multiple hosts with various
 > different mount points, how do I specify them in
 > config.pl and hosts file? For example, "target_host1"
 > gets [/data, /home, /etc] backed up, and
 > "target_host2" gets [/usr/local, /opt] backed up.
 > 
 > How do you specify this when there's only one instance
 > of $Conf{RsyncShareName} = [/data, /home, /etc];?
 > 
 > How would I do ie:
 > $Conf{RsyncShareName} = [/data, /home, /etc];
 > (target_host1)
 > 
 > and:
 > $Conf{RsyncShareName} = [/usr/local /opt];
 > (target_host2)
 > 
 > Is this possible?

i'm not sure what the problem is.

in pc/target_host1/config.pl you'd put one of those lines, and
in pc/target_host2/config.pl , you'd put the other one.  that's
what the pc//config.pl files are for.

am i missing something?
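
concretely, each per-host file holds just the overrides:

```perl
# pc/target_host1/config.pl -- per-host files hold only the overrides
$Conf{RsyncShareName} = ['/data', '/home', '/etc'];

# pc/target_host2/config.pl
$Conf{RsyncShareName} = ['/usr/local', '/opt'];
```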

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 63.5 degrees)




Re: [BackupPC-users] Email Reminders for Retired Machine

2006-04-19 Thread Paul Fox
 > > I have retired a machine and set $Conf{FullPeriod} = -2, but despite this, 
 > > I
 > > continue receiving "no recent backups" email reminders. Can these 
 > > reminders be
 > > disabled?
 > >
 > Set this also
 > 
 > $Conf{EMailNotifyOldBackupDays} = 365.0;
 > 
 > Thus it will only email once a year, you could probably use "-1" to 
 > disable completely.

i don't know about the -1, but setting to 365 will simply hold it off
for a year, then it will mail every day.  "1" worked for me.  :-)
(actually, i just checked, and after a year i set it to 1500, so i'm
good for a few more.)

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 64.8 degrees)




Re: [BackupPC-users] why are full backups needed with BackupPC?

2006-03-25 Thread Paul Fox
 > > 
 > > You should be able to tell backuppc to make fulls as often as
 > > you want.  The only downside with rsync is the extra time
 > > it takes to do the full block checksum compare on existing
 > > files. 
 > 
 > Is it really the only downside of full backups?
 > 
 > Doesn't a full backup mean that *everything* will be transferred again?

with tar, yes.  with rsync, no -- rsync only recompares checksums.

 > Does the full rsync backup in BackupPC transfer only changes (compared 
 > to the last full backup), or maybe it transfers everything?

it ignores any hints that are used during incrementals (dates,
modes, etc), and transfers everything that isn't already
available in the pool.  the result is the same as if it transferred
everything.

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 34.7 degrees)




Re: [BackupPC-users] why are full backups needed with BackupPC?

2006-03-24 Thread Paul Fox
 > >  > Why are full backups needed at all with BackupPC?
 > >  > 
 > >  > According to documentation, "BackupPC's CGI interface ``fills-in'' 
 > >  > incremental backups based on the last full backup, giving every backup 
 > > a 
 > >  > ``full'' appearance."
 > >  > 
 > >  > So, in theory, it should be enough to make just one full, initial 
 > >  > backup, and then only incremental backups.
 > >  > 
 > >  > Or do I miss something here?
 > > 
 > > one reason (there may be others) is that incrementals don't account
 > > for the removal of files.  if a full contains a file that is later
 > > removed, it will always appear in that "filled" view, even after
 > > the file is gone from your system.  so full backups are necessary
 > > to reestablish a true image of your current contents.
 > > 
 > > (this is with tar -- rsync incrementals may actually remove deleted
 > > files.  i don't use rsync.)
 > 
 > So with rsync it shouldn't be an issue, right? Could anyone comment on that?
 > 
 > 
 > Anyway, will several full backups use only one hardlinked file in the 
 > pool, or do full backups use separate, non-hardlinked files?

they use only one hardlinked file -- i.e., no extra space.

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 44.1 degrees)




Re: [BackupPC-users] why are full backups needed with BackupPC?

2006-03-24 Thread Paul Fox
 > Why are full backups needed at all with BackupPC?
 > 
 > According to documentation, "BackupPC's CGI interface ``fills-in'' 
 > incremental backups based on the last full backup, giving every backup a 
 > ``full'' appearance."
 > 
 > So, in theory, it should be enough to make just one full, initial 
 > backup, and then only incremental backups.
 > 
 > Or do I miss something here?

one reason (there may be others) is that incrementals don't account
for the removal of files.  if a full contains a file that is later
removed, it will always appear in that "filled" view, even after
the file is gone from your system.  so full backups are necessary
to reestablish a true image of your current contents.

(this is with tar -- rsync incrementals may actually remove deleted
files.  i don't use rsync.)

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 46.0 degrees)




Re: [BackupPC-users] filesystem benchmark results

2006-03-07 Thread Paul Fox
 > > okay, right?  it's only when you want to preserve or copy your
 > > pool that there's an issue?  (or am i neglecting something?  i
 > > might well be.)
 > 
 > Even just the normal process of looking at the pool, either to see if a
 > file is present, or as part of the cleanup scan is much slower.

noted.

 > The pools wouldn't change.  The backup trees themselves are not really
 > transparent, anyway.  The names are mangled, and the attributes are stored
 > in an attribute file.  I would suspect that people browse backups using the
 > web interface more than they try to glean anything from the 'pc'
 > directories.

but when one just wants to look at a file, you _can_ just cd there. 

 > If someone really wanted to, they could write a fuse plugin that would
 > present the backup directory as a real tree, complete with attributes, and
 > visible at any particular time.  This would be a useful browsing method.

this is a good idea, in any case.

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 36.7 degrees)




Re: [BackupPC-users] filesystem benchmark results

2006-03-07 Thread Paul Fox
 > The depth isn't really the issue.  It is that they are created under one
 > tree, and hardlinked to another tree.  The normal FS optimization of
 > putting the inodes of files in a given directory near each other breaks
 > down, and the directories in the pool end up with files of very diverse
 > inodes.
 > 
 > Just running a 'du' on my pool takes several seconds for each leaf
 > directory, very heavily thrashing the drive.
 > 
 > If you copy a backup pool, either with 'cp -a' or tar (something that will
 > preserve the hardlinks), the result will either be the same, or the pool
 > will be more efficient and the pc trees will be very inefficient.  It all
 > depends on which tree the backup copies first.

to clarify -- in the "normal" case, where the backup data is
usually not read, but only written, the current filesystems are
okay, right?  it's only when you want to preserve or copy your
pool that there's an issue?  (or am i neglecting something?  i
might well be.)

if this is mostly true, then creating a better data copier might
be productive.  i thought there was work some time ago to allow
listing the files to be copied in inode order, using an external
tool that pre-processed the tree.  what happened with that?

 > I still say it is going to be a lot easier to change how backuppc works
 > than it is going to be to find a filesystem that will deal with this very
 > unusual use case well.

but having the backup pools exist in the native filesystem in a
(relatively) transparent way is a huge part of backuppc's
attraction.
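
for the copy itself, anything that preserves hard links (tar, cp -a,
rsync -H) avoids splitting the pool.  a tiny demonstration on a
scratch tree:

```shell
# hardlink-preserving pool copy demonstrated on a scratch tree
src=$(mktemp -d); dst=$(mktemp -d)
mkdir "$src/cpool" "$src/pc"
echo data > "$src/cpool/f"
ln "$src/cpool/f" "$src/pc/f"        # pool-style hard link

# tar keeps the link between the two names; a naive file-by-file
# copy would split it and roughly double the space used
(cd "$src" && tar cf - .) | (cd "$dst" && tar xf -)
```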

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 36.9 degrees)




[BackupPC-users] regarding "file is unchanged; not dumped" messages

2006-02-08 Thread Paul Fox
is this patch, submitted some time ago, considered the "correct" fix
for the excessive messages from incremental tar runs?  i just went
hopefully looking for a tar option to suppress that message, but
didn't find one.  :-/

paul

 > 
 > so to avoid this annoying line in incremental backup logs when using the
 > tar method, i have modified Tar.pm to not log those lines; here is the patch:
 > 
 > 
 > -
 > $diff -urN Tar.pm.bkp Tar.pm
 > --- Tar.pm.bkp  2005-10-09 15:38:34.288803965 +0200
 > +++ Tar.pm  2005-10-09 15:44:54.338083736 +0200
 > @@ -221,8 +221,11 @@
 >  $t->{XferLOG}->write(\"$_\n") if ( $t->{logLevel} >= 2 );
 >  $t->{fileCnt}++;
 >  } else {
 > +   if ( ! /file is unchanged\; not dumped$/ )
 > +   {
 >  $t->{XferLOG}->write(\"$_\n") if ( $t->{logLevel} >= 0 );
 >  $t->{xferErrCnt}++;
 > +   }
 > #
 > # If tar encounters a minor error, it will exit with a non-zero
 > # status.  We still consider that ok.  Remember if tar prints
 > 
 > ---
 > 
 > this modification is working for me and i have eliminated all messages
 > regarding files not dumped due to unchanged status.
 > 
 > This modification probably needs to be tested by somebody else, and maybe
 > we can add some filtering strings in config.pl to be more scalable.
 > 
 > Another modification could be done only in the CGI code that handles
 > viewing logs; that way has the advantage of logging all messages and
 > displaying only the errors.
 > 
 > Michael.
 > 

=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 19.2 degrees)




Re: [BackupPC-users] Strange rsync error on OS X G5 QUAD

2006-01-27 Thread Paul Fox
 > > Remote[1]: rsync: opendir "/dev/fd/3" failed: Bad file descriptor (9)
 > > Xfer PIDs are now 30450,30452
 > > 
 > > This is the initial backup.  That's not trying to back up a non-existent
 > > floppy, is it?
 > 
 > FD(4)BSD Kernel Interfaces Manual
 > FD(4)
 > 
 > NAME
 >  fd, stdin, stdout, stderr -- file descriptor files
 > 
 > DESCRIPTION
 >  The files /dev/fd/0 through /dev/fd/# refer to file descriptors which 
 > can
 >  be accessed through the file system.  
 > 
 > Looks like it's some bsd-ism or osx-ism for file descriptors.  I'd suggest
 > excluding /dev/fd* from your backups.

/dev/fd exists on linux as well.  on linux it's a symlink into /proc, and
since /proc is always excluded from backups, there's no issue.
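
on OS X, where /dev/fd is a real directory rather than a symlink, an
explicit exclude should do it.  a sketch, assuming the share is '/'
(adjust to your share layout):

```perl
# sketch for an OS X client backed up with rsync from a '/' share;
# the exact pattern may need tuning for your rsync arguments
$Conf{RsyncShareName}     = ['/'];
$Conf{BackupFilesExclude} = { '/' => ['/dev/fd'] };
```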

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 25.9 degrees)




Re: [BackupPC-users] Full VS Incremental

2006-01-19 Thread Paul Fox
 > I need to call on my trusty BackupPC server to do a near bare-metal 
 > recovery of a server.  I've got the OS loaded well enough to interface 
 > with BackupPC.  What I need to know now is this:  If I pick the most 
 > recent incremental install, will the data be filled to include all the 
 > files from the previous full backup, or do I need to do the full, then 
 > each incremental since?

it will be filled.  no need for two restores.

paul
=-----
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 35.8 degrees)




Re: [BackupPC-users] Junk after my sig

2006-01-19 Thread Paul Fox
 > It's been brought to my attention that on some of my mails a bunch of
 > junk appears after my sig.
 > 
 > My apologies if it's caused anyone issues.
 > 
 > Where that's comming from is still unknown.  My sig is just a simple
 > one liner with my name and I use my gmail account for several other
 > lists with no apparent problems.

sourceforge is broken.

your mail is sent as:
  Content-Type: text/plain; charset=UTF-8
  Content-Transfer-Encoding: base64
  Content-Disposition: inline

which says that the body of the text is base64-encoded.  why you're
sending it this way is beyond me.  however, sourceforge
then adds its own plaintext sig after your encoded body.  the net
result in the raw message looks like this, indented by two spaces
for clarity.  the block of gobbledygook is your text, the rest
is sourceforge's sig.  to do this correctly, their addition should
be a separate mime part, or they should decode your text before adding
to it.

--begin quote--
  SXQncyBiZWVuIGJyb3VnaHQgdG8gbXkgYXR0ZW50aW9uIHRoYXQgb24gc29tZSBvZiBteSBtYWls
  cyBhIGJ1bmNoIG9mCmp1bmsgYXBwZWFycyBhZnRlciBteSBzaWcuCgpNeSBhcG9sb2dpZXMgaWYg
  aXQncyBjYXVzZWQgYW55b25lIGlzc3Vlcy4KCldoZXJlIHRoYXQncyBjb21taW5nIGZyb20gaXMg
  c3RpbGwgdW5rbm93bi4gIE15IHNpZyBpcyBqdXN0IGEgc2ltcGxlCm9uZSBsaW5lciB3aXRoIG15
  IG5hbWUgYW5kIEkgdXNlIG15IGdtYWlsIGFjY291bnQgZm9yIHNldmVyYWwgb3RoZXIKbGlzdHMg
  d2l0aCBubyBhcHBhcmVudCBwcm9ibGVtcy4KCi0tClJpY2hhcmQgQS4gU21pdGgK
 
 
  ---
  This SF.net email is sponsored by: Splunk Inc. Do you grep through log files
  for problems?  Stop!  Download the new AJAX search engine that makes
  searching your log files as easy as surfing the  web.  DOWNLOAD SPLUNK!
  http://sel.as-us.falkag.net/sel?cmd=lnk&kid=103432&bid=230486&dat=121642
  ___
  BackupPC-users mailing list
  BackupPC-users@lists.sourceforge.net
  https://lists.sourceforge.net/lists/listinfo/backuppc-users
  http://backuppc.sourceforge.net/
--end quote--

in my case, my mailer (nmh) barfs on trying to do a base64 decode on
the SF signature, and doesn't show me your text at all.

summary:  sourceforge has a problem, but if you can figure out how
to prevent your mail from being base64 encoded in the first place,
it would avoid the issue.
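
the failure mode is easy to demonstrate with made-up strings:
base64-encode a body, append a plaintext sig after it, and the combined
part is no longer valid base64:

```shell
# encode a fake message body the way the sender's mailer did
body=$(printf 'My sig is just one line.' | base64)

# a strict decode of the body alone round-trips fine
printf '%s\n' "$body" | base64 -d

# the list server then appends its plaintext sig AFTER the encoded
# body; the combined part is no longer valid base64, so a strict
# decoder rejects (or garbles) the whole thing
{ printf '%s\n' "$body"; printf -- '--\nsponsored-link sig\n'; } |
    base64 -d >/dev/null 2>&1 || echo "strict decode fails"
```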

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 35.6 degrees)




Re: [BackupPC-users] clarification of BackupFilesExclude expressions

2006-01-01 Thread Paul Fox
i'm resending this question, since i don't believe i saw a
response.  is anyone else successfully using BackupFilesExclude
with the tar method?

(i'd simply add the "--exclude" args to the tarcmd itself, except
i see from the docs that there's trickery involved in escaping
wildcard characters.)

i wrote:
 >  > > i'd like to be able to flag any file or directory that i want
 >  > > backuppc to skip by adding a "._nobackup_" suffix to its name.
 >  > > 
 >  > > will this do the trick?  (backup method is tar)
 >  > > 
 >  > > $Conf{BackupFilesExclude} = { '/proc'. '*._nobackup_' };

craig wrote:
 >  > 
 >  > You need a comma instead of a period.  Otherwise it should work.
 > 

i wrote:
 > i'm a little confused.  this doesn't seem to be working.
 > 
 > if i look at the XferLOG (or at the running backup tar process on
 > a client system), the tar command for the backup doesn't include
 > a --exclude argument.  where does the contents of
 > $Conf{BackupFilesExclude} get transformed and applied to the tar
 > arguments?  the docs imply that they should be part of $fileList, but
 > in my case this appears to simply be ".":  (wrapped for clarity)
 >     Running:  /usr/bin/ssh -x -q -n -l root woodruff /bin/tar
 >  --one-file-system -c -v -f - -C / --totals
 >  --newer=2005-12-04\ 00:38:48 .
 > 

=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 23.2 degrees)




Re: [BackupPC-users] Fast backup solution

2006-01-01 Thread Paul Fox
 > 
 > rsync is definitely better than tar (just doesn't work for HFS forks on 
 > Mac OSX10.3 and before).  I use rsync for my linux machines and it works 
 > great.

i know i say this every time this topic comes up, so i apologize in
advance, but until the next release of backuppc (which, as i
understand it, will include a built-in rsync client), the rsync
method does not preserve hard links.

paul
=---------
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 22.8 degrees)




Re: [BackupPC-users] Fast backup solution

2006-01-01 Thread Paul Fox
 > > If my understanding of the backuppc architecture is correct, then I 
 > > don't see the point of doing "full" backups in the sense of transferring 
 > > all the files accross the network.
 > 
 > With transports other than rsync, that is necessary to be sure
 > you have a full copy of the all files.  For example, none of the
 > others will pick up old files in their new positions under
 > a renamed directory using their incremental modes.

what about deleted files?  with the tar method, deleted files will 
continue appearing in a "filled" view of an incremental backup.  doing
periodic full backups is necessary to get a completely consistent view.
is this not also true of rsync?

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 22.3 degrees)




Re: [BackupPC-users] Cpool question

2005-12-21 Thread Paul Fox
 > 
 > I don't see why not.  Just change the hardlink into a file whose 
 > contents are the name of the pool file it pointed to.  That seems like 
 > trival one liner in a script, both converting and unconverting the 
 > link.  Since it seems too easy, there must be a "gotcha" which I am 
 > missing!

the gotcha i'm aware of comes at cleanup time -- right now, when
the last pc referencing a pool file stops needing the file, the
file in the pool is deleted during cleanup based on the link count
being equal to 1 (since that means there are no references to it).
changing from hard links would become more complicated and less
robust, since you'd either have to have another "who's using this
pool file" list, or you'd have to search all of the pc trees to look
for references, either at cleanup time, or at deletion time.
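as a hedged illustration of the link-count idea (this is not
backuppc's actual cleanup code, just a throwaway demo of the
mechanism, with made-up paths):

```shell
# a demo "pool" directory; a hard link from a pc tree keeps the
# pool file's link count above 1
mkdir -p /tmp/pooldemo/pool /tmp/pooldemo/pc
echo data > /tmp/pooldemo/pool/referenced
ln /tmp/pooldemo/pool/referenced /tmp/pooldemo/pc/copy   # link count: 2
echo junk > /tmp/pooldemo/pool/orphan                    # link count: 1

# a cleanup pass can find unreferenced pool files by link count
# alone, without searching any pc trees
find /tmp/pooldemo/pool -type f -links 1                 # lists only orphan
```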

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 24.1 degrees)




Re: [BackupPC-users] Cpool question

2005-12-20 Thread Paul Fox
 > > >
 > > > I confess I haven't searched the archives to see if
 > > > anybody has suggested cp in the past :-)
 > >
 > > they have.  :-)
 ...
 > But then, I recently copied a directory full of hardlinks (originally
 > created with storeBackup, a similar but simpler backup utility I use at
 > home), over NFS, and then mv'd it locally to a different partition, and in
 > both instances hardlinks were preserved. Also, the info page for mv says
 > that it uses some of the same code as cp -a.
 > 
 > So the question now is, in what circumstances does cp work or not work?

i'm sure it's a matter of scale.  the backuppc pool contains thousands
of hard links, and preserving them through a copy is a hard problem --
and one that isn't a design goal for most copying programs.

for the original poster:  have you looked for what exactly is causing
the space on your external drive to be chewed up?

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 24.8 degrees)




Re: [BackupPC-users] Cpool question

2005-12-20 Thread Paul Fox
 > > there's no perfect way to make a copy of the pool.  the best is
 > > to do a byte-for-byte copy, either to an identical hard-drive, or
 > > by using software RAID as a syncing mechanism for your two disks.
 > > (i.e., run as a "broken" RAID pair most of the time, and only add
 > > the second half when you want to do your external copy.)  this, and
 > > other tricks, have been extensively covered in the archives.  if
 > > there were a FAQ section in the backuppc documentation (i don't think
 > > there is), this would be question #1.
 > 
 > If you are simply copying the pool, why not try "cp -a". It preserves
 > hardlinks (at least in Linux), and probably doesn't require as much memory
 > as rsync does. You don't get the smart copy-only-what-has-changed behavior,
 > but if it's faster, it's faster. Just an idea, and perhaps simpler than
 > using RAID.

doesn't work.

 > 
 > I confess I haven't searched the archives to see if anybody has suggested
 > cp in the past :-)

they have.  :-)

=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 25.5 degrees)




Re: [BackupPC-users] Cpool question

2005-12-20 Thread Paul Fox
 > 
 > So if I was already preserving the hard links and it still is using all
 > the space, what would the next thought be?
 > 

next thought:  double check your rsync arguments.  :-)
thought after that:  no clue.

paul
=---------
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 26.4 degrees)




Re: [BackupPC-users] Cpool question

2005-12-20 Thread Paul Fox
 > 
 > Hi all,
 > 
 > I have a question as to how cpool stores it's information.
 > 
 > I have everything running good with my backup system, but now I want to
 > rsync all or /var/lib/backuppc to an external hard drive for an offsite
 > backup.  
 > When I run the rsync, everything syncs fine except the cpool directory.
 > If I do a du on /var/lib/backuppc/cpool it's size is 3.8 GB.  If I do it
 > on me external device, the size is 104 GB.  So I keep running out of
 > disk space on my external drive.  How do I keep the sizes the same?


the cpool (or the pool, if you're not using compression) makes heavy
use of hardlinks in order to avoid storing duplicate copies of
identical files.  rsync, by default, will not preserve hardlinks -- so
your external copy is getting multiple copies of the identical files
that backuppc tried to consolidate.  you can tell rsync to preserve
the hardlinks, but you won't be happy -- your copy will now take
days, instead of hours or minutes, and you may run out of memory.
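for the mechanics (not the performance), here's a small sketch of
what rsync's -H option changes, assuming a local rsync and GNU
stat, with throwaway paths:

```shell
mkdir -p /tmp/hl/src /tmp/hl/plain /tmp/hl/withH
echo data > /tmp/hl/src/a
ln /tmp/hl/src/a /tmp/hl/src/b          # a and b share one inode

rsync -a  /tmp/hl/src/ /tmp/hl/plain/   # default: the link is broken
rsync -aH /tmp/hl/src/ /tmp/hl/withH/   # -H preserves the hard link

stat -c %i /tmp/hl/plain/a /tmp/hl/plain/b   # two different inodes
stat -c %i /tmp/hl/withH/a /tmp/hl/withH/b   # the same inode twice
```

the catch described above is that rsync has to remember every inode
it has seen to do this, which is where the time and memory go on a
pool-sized tree.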

there's no perfect way to make a copy of the pool.  the best is
to do a byte-for-byte copy, either to an identical hard-drive, or
by using software RAID as a syncing mechanism for your two disks.
(i.e., run as a "broken" RAID pair most of the time, and only add
the second half when you want to do your external copy.)  this, and
other tricks, have been extensively covered in the archives.  if
there were a FAQ section in the backuppc documentation (i don't think
there is), this would be question #1.

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 27.5 degrees)




Re: [BackupPC-users] clarification of BackupFilesExclude expressions

2005-12-19 Thread Paul Fox
i wrote:
 >  > > i'd like to be able to flag any file or directory that i want
 >  > > backuppc to skip by adding a "._nobackup_" suffix to its name.
 >  > > 
 >  > > will this do the trick?  (backup method is tar)
 >  > > 
 >  > > $Conf{BackupFilesExclude} = { '/proc'. '*._nobackup_' };
 >  > 
 >  > You need a comma instead of a period.  Otherwise it should work.
 > 
 > i'm a little confused.  this doesn't seem to be working.
 > 
 > if i look at the XferLOG (or at the running backup tar process on
 > a client system), the tar command for the backup doesn't include
 > a --exclude argument.  where does the contents of
 > $Conf{BackupFilesExclude} get transformed and applied to the tar
 > arguments?  the docs imply that they should be part of $fileList, but
 > in my case this appears to simply be ".":  (wrapped for clarity)
 > Running:  /usr/bin/ssh -x -q -n -l root woodruff /bin/tar
 >  --one-file-system -c -v -f - -C / --totals
 >  --newer=2005-12-04\ 00:38:48 .

does anyone use BackupFilesExclude with tar?  i guess i'll just
hardcode my excluded list into the TarClientCmd string if i don't
hear that this "should work".

the relevant config lines:

$Conf{BackupFilesExclude} = { '/proc', '*._nobackup_' };

and:

$Conf{TarClientCmd} = '$sshPath -x -q -n -l root $host'
. ' $tarPath --one-file-system -c -v -f - -C $shareName+'
. ' --totals';


$Conf{TarFullArgs} = '$fileList+';

$Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
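the effect the exclude list is supposed to have on the generated
tar command can be checked with tar itself; here's a hedged sketch
with throwaway paths (one --exclude per pattern, quoted so the
client shell doesn't expand it):

```shell
mkdir -p /tmp/tardemo/keep /tmp/tardemo/cache._nobackup_
touch /tmp/tardemo/keep/file /tmp/tardemo/cache._nobackup_/junk

# what the backup command would do if the exclude were passed through
tar -C /tmp/tardemo --exclude='*._nobackup_' -cf /tmp/tardemo.tar .
tar -tf /tmp/tardemo.tar   # lists ./keep/file, but nothing under
                           # the ._nobackup_ directory
```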

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 23.0 degrees)




Re: [BackupPC-users] "backup done" vs. "nothing to do"

2005-12-16 Thread Paul Fox
 >  > 
 >  > 
 >  > can someone satisfy my curiosity and tell me how backuppc
 >  > decides between the "backup done" and "nothing to do" messages in
 >
 > If you check the status of your host after it has completed its backup
 > but before the next check, the status is "backup done" (if it
 > completed).
 > 

thanks -- if by "before the next check" you mean before the next
time the server wakes up, i guess i understand.  i'm surprised,
then, that i don't see all my clients showing one message or the
other, rather than a mix of the two.  i'll watch it some more, i guess.

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 35.4 degrees)




[BackupPC-users] "backup done" vs. "nothing to do"

2005-12-16 Thread Paul Fox
can someone satisfy my curiosity and tell me how backuppc
decides between the "backup done" and "nothing to do" messages in
the "Last attempt" column of the Host Summary status page?

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 36.5 degrees)




Re: [BackupPC-users] clarification of BackupFilesExclude expressions

2005-12-16 Thread Paul Fox
 > > i'd like to be able to flag any file or directory that i want
 > > backuppc to skip by adding a "._nobackup_" suffix to its name.
 > > 
 > > will this do the trick?  (backup method is tar)
 > > 
 > > $Conf{BackupFilesExclude} = { '/proc'. '*._nobackup_' };
 > 
 > You need a comma instead of a period.  Otherwise it should work.

i'm a little confused.  this doesn't seem to be working.

if i look at the XferLOG (or at the running backup tar process on
a client system), the tar command for the backup doesn't include
a --exclude argument.  where do the contents of
$Conf{BackupFilesExclude} get transformed and applied to the tar
arguments?  the docs imply that they should be part of $fileList, but
in my case this appears to simply be ".":  (wrapped for clarity)
Running:  /usr/bin/ssh -x -q -n -l root woodruff /bin/tar
--one-file-system -c -v -f - -C / --totals
--newer=2005-12-04\ 00:38:48 .

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 35.2 degrees)




Re: [BackupPC-users] clarification of BackupFilesExclude expressions

2005-12-13 Thread Paul Fox
 > > 
 > > i'd like to be able to flag any file or directory that i want
 > > backuppc to skip by adding a "._nobackup_" suffix to its name.
 > > 
 > > will this do the trick?  (backup method is tar)
 > > 
 > > $Conf{BackupFilesExclude} = { '/proc'. '*._nobackup_' };
 > 
 > You need a comma instead of a period.  Otherwise it should work.

oops.  that was a typo.  yes, thanks.

 > > i'd also kind of like to be able to tell backuppc to skip ".o"
 > > object files, but there are places where i don't want to do that,
 > > like under /lib/modules.  if i exclude "*.o", can i force inclusion
 > > of all of /lib/modules by putting it into $Conf{BackupFilesOnly} ? 
 > > the docs are a little ambiguous on this for the tar method --
 > > i.e., for smb only one of BackupFilesOnly and BackupFilesExclude
 > > is used.  but what about for tar, and in what order are they
 > > processed?
 > 
 > The behavior depends upon the XferMethod,

ah, okay.

 > which is tar in your
 > case.  $Conf{BackupFilesOnly} is a set of directories to backup.
 > Each entry of $Conf{BackupFilesExclude} is sent to tar with the
 > --exclude option.  This provides a set of regular expressions that
 > are applied to any file to see if it matches, and therefore should
 > be skipped.  Therefore, $Conf{BackupFilesExclude} applies equally
 > to every directory in $Conf{BackupFilesOnly}.  So I don't think
 > you can accomplish what you want with tar.
 > 
 > The only alternative I can think of is to split the top-level
 > directories into seperate "shares" (ie: put them in $Conf{TarShareName}
 > instead of $Conf{BackupFilesOnly}), and then use share-specific
 > settings in $Conf{BackupFilesExclude}.  The causes a different
 > transfer (ie: tar) to be done for each "share".

okay, i'll consider that.

 > 
 > Rsync allows richer exclude/include options, and by adding the
 > right --include and --exclude options to the RsyncClientCmd you
 > should be able to include just .o files below /lib/modules and
 > exclude all the others.

excellent.  thanks.
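to make craig's suggestion concrete: rsync checks include/exclude
rules in order, so an include for /lib/modules listed before the
*.o exclude keeps module objects while dropping other object files.
a hedged sketch with throwaway paths (a plain local rsync standing
in for the RsyncClientCmd case):

```shell
mkdir -p /tmp/rdemo/src/lib/modules /tmp/rdemo/src/build /tmp/rdemo/dst
touch /tmp/rdemo/src/lib/modules/mod.o
touch /tmp/rdemo/src/build/app.o /tmp/rdemo/src/build/app.c

# include rule first, so it wins for anything under /lib/modules
rsync -a --include='/lib/modules/**' --exclude='*.o' \
      /tmp/rdemo/src/ /tmp/rdemo/dst/

ls /tmp/rdemo/dst/lib/modules   # mod.o survives
ls /tmp/rdemo/dst/build         # app.c only; app.o was excluded
```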

=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 10.0 degrees)




[BackupPC-users] clarification of BackupFilesExclude expressions

2005-12-12 Thread Paul Fox
hi -- i recently realized that there are some pretty big files on
my system that change frequently, and which don't need to be
backed up -- mail index files, for example.

i'd like to be able to flag any file or directory that i want
backuppc to skip by adding a "._nobackup_" suffix to its name.

will this do the trick?  (backup method is tar)

$Conf{BackupFilesExclude} = { '/proc'. '*._nobackup_' };

i'd also kind of like to be able to tell backuppc to skip ".o"
object files, but there are places where i don't want to do that,
like under /lib/modules.  if i exclude "*.o", can i force inclusion
of all of /lib/modules by putting it into $Conf{BackupFilesOnly} ? 
the docs are a little ambiguous on this for the tar method --
i.e., for smb only one of BackupFilesOnly and BackupFilesExclude
is used.  but what about for tar, and in what order are they
processed?

thanks,
paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 34.0 degrees)




[BackupPC-users] incremental revelation

2005-12-02 Thread Paul Fox
i just did a restore of a directory (happily not because of
disaster, but because it was an easy way to get at some files
that live on a machine that's currently offline) and had a big
surprise.

i was accessing an incremental backup tree.  since all backups
are "filled", i was very surprised when my restored tree was
obviously incomplete.  then i remembered that i had created the
directory several days ago (but _after_ the most recent full
backup) by doing a "cp -a" of a neighboring directory (i was
cloning a build tree).  of course the date-preserving nature
of "cp -a" meant that my tar-based incremental backup didn't pick
up any files whose dates were older than the previous full, even
though those files had never been backed up.
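the trap is easy to reproduce; here's a sketch using find's
-newermt as a stand-in for tar's --newer check (GNU find and
throwaway paths/dates assumed):

```shell
mkdir -p /tmp/mtime/orig
touch -d '2005-12-01' /tmp/mtime/orig/old.c   # predates the last full

cp -a /tmp/mtime/orig /tmp/mtime/clone        # -a preserves mtimes
touch /tmp/mtime/clone/new.c                  # a genuinely new file

# stand-in for tar's --newer=<date-of-last-full> test: only
# clone/new.c is listed; clone/old.c is invisible to the
# incremental even though it has never been backed up
find /tmp/mtime -type f -newermt '2005-12-04'
```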

no data was actually lost, and all is well, but now i'm curious --
would rsync have done the "right thing" in this case?

the only reason i don't use rsync is because it doesn't preserve
hard links, which i use fairly frequently.   but i may reconsider...

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 35.4 degrees)



