Re: [BackupPC-users] wishlist: full backups whenever incrementals get too large

2007-08-21 Thread Frans Pop
On Tuesday 21 August 2007, you wrote:
> Frans Pop wrote:
> >> If this is really the way backuppc does incremental backups, I think
> >> backuppc should be a bit more incremental with its incremental
> >> backups. Instead of comparing against the last full, it should compare
> >> against the last full and incremental backups. This would solve this
> >> problem and make backuppc more efficient anyway, AFAIK.
> >
> > That proposal goes completely against the basic principles of
> > incremental backups!

> What principles, and how do they apply to a system where all copies of
> everything are pooled?

I guess I was confusing this with differential backups. Anyway, the way 
incremental backups are described in the BackupPC documentation (for 2.x) 
makes it clear that BackupPC's incremental backups are really differential 
backups.

This page has a nice description of the different methods:
http://en.wikipedia.org/wiki/Incremental_backup

> > If you want something like that, you should use multiple levels of
> > incremental backups.
>
> I thought that was an option in 3.0 but I haven't used it yet.  If your
> targets aren't doing anything else at night you can just do rsync fulls
> every time and waste a few cpu cycles.

Yes, in 3.0 "Multilevel incremental" as defined in that wikipedia page 
should be supported. I've not yet used it myself either.
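
As I understand it, the relevant knob is $Conf{IncrLevels}; a minimal 
sketch (the levels shown are just an illustration):

  $Conf{IncrLevels} = [1, 2, 3];  # each level-N incremental is taken relative
                                  # to the most recent backup of a lower level

so a level-2 incremental no longer compares against the last full, only 
against the last level-1 (or lower) backup.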



Re: [BackupPC-users] Giving BackupPC a good kick

2007-08-21 Thread Rob Owens
Jacob, how about using sudo to perform the backups?  That way BackupPC
will have read access to everything.  See this thread: 
http://sourceforge.net/mailarchive/message.php?msg_id=46977D09.4080600%40bio-chemvalve.com
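
Roughly, the idea is something like this (the login name, tar path, and 
sudoers entry are examples you would adapt to your own setup):

  # on the client, in /etc/sudoers:
  backuppc ALL = NOPASSWD: /bin/tar

  # on the BackupPC server, in the host's config (tar over ssh as a non-root user):
  $Conf{TarClientCmd} = '$sshPath -q -x -n -l backuppc $host'
                      . ' env LC_ALL=C sudo $tarPath -c -v -f - -C $shareName+ --totals';

That way tar itself runs as root via sudo, so unreadable directories like 
.gnucash stop being a problem.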

-Rob

Jacob wrote:
>> Is there a way to keep shoving BackupPC onwards even when it cries on 
>> unreadable user directories (such as .gnucash)? It seems kind of strange for 
>> BackupPC to have a fit and not backup anything else just because it can't 
>> access one folder. :P
>>
>> Here's the error log for .gnucash:
>>
>> Running: /usr/bin/env LC_ALL=C /bin/tar -c -v -f - -C /home/jacob/.gnucash 
>> --totals .
>> Xfer PIDs are now 27985,27984
>> /bin/tar: /home/jacob/.gnucash: Cannot chdir: Permission denied
>> /bin/tar: Error is not recoverable: exiting now
>> Tar exited with error 512 () status
>> tarExtract: Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 
>> filesTotal, 0 sizeTotal
>> Got fatal error during xfer (No files dumped for share /home/jacob/.gnucash)
>> Backup aborted (No files dumped for share /home/jacob/.gnucash)
>>
>>
>> (P.S. - I'm working on the permissions settings for .gnucash as you read 
>> this.)




Re: [BackupPC-users] wishlist: full backups whenever incrementals get too large

2007-08-21 Thread Rich Rauenzahn

> If this is really the way backuppc does incremental backups, I think backuppc 
> should be a bit more incremental with its incremental backups. Instead of 
> comparing against the last full, it should compare against the last full and 
> incremental backups. This would solve this problem and make backuppc more 
> efficient anyway, AFAIK.
>   
>

Isn't that what $Conf{IncrLevels} is for?

Rich




Re: [BackupPC-users] pooling of compressed and non-compressed backups

2007-08-21 Thread Rich Rauenzahn

> I'm curious about this as well, but would like to add to the question -- 
> what if I'm backing up some hosts across the internet, and I set the 
> compression to bzip2 -9.  But local hosts on the LAN I set to gzip -4. 
>
> I believe I read that the pool checksums are based on the uncompressed 
> data -- so I would expect that anything common backed up across the 
> internet first will be shared as bzip2, but anything common backed up 
> locally with gzip first would be shared as gzip. 
>
> I'm also assuming it is ok to be mixing the two compression methods in 
> the pools!
>
> Rich
>   

Looks like I am right.  I added a unique file to the bzip2 host, backed 
it up, then copied it to a gzip host, and the file was found in the pool 
during the second backup.  I don't think changing the backup compression 
level would make a difference either.  I vaguely recall the docs/FAQ/the 
'net saying you could increase it later if you started running out of 
space, which was the argument for starting with lower compression levels.
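
One way to double-check the pooling (the paths and file names here are 
hypothetical; BackupPC mangles stored names with a leading 'f'):

  stat -c '%i %h %n' /var/lib/backuppc/pc/hostA/12/f%2fhome/fsong.mp3
  stat -c '%i %h %n' /var/lib/backuppc/pc/hostB/7/f%2fhome/fsong.mp3
  # identical inode numbers and a link count > 1 mean both backups
  # are hard links to the same pool file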

Rich




Re: [BackupPC-users] Giving BackupPC a good kick

2007-08-21 Thread Les Mikesell
Jacob wrote:
> Is there a way to keep shoving BackupPC onwards even when it cries on 
> unreadable user directories (such as .gnucash)? It seems kind of strange for 
> BackupPC to have a fit and not backup anything else just because it can't 
> access one folder. :P
> 
> Here's the error log for .gnucash:
> 
> Running: /usr/bin/env LC_ALL=C /bin/tar -c -v -f - -C /home/jacob/.gnucash 
> --totals .
> Xfer PIDs are now 27985,27984
> /bin/tar: /home/jacob/.gnucash: Cannot chdir: Permission denied
> /bin/tar: Error is not recoverable: exiting now
> Tar exited with error 512 () status
> tarExtract: Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 
> filesTotal, 0 sizeTotal
> Got fatal error during xfer (No files dumped for share /home/jacob/.gnucash)
> Backup aborted (No files dumped for share /home/jacob/.gnucash)
> 
> 
> (P.S. - I'm working on the permissions settings for .gnucash as you read 
> this.)

It seems reasonable for tar to quit when it can't access the top-level 
directory you told it to copy.  What else could it do?

-- 
   Les Mikesell
[EMAIL PROTECTED]



Re: [BackupPC-users] pooling of compressed and non-compressed backups

2007-08-21 Thread Rich Rauenzahn

Rob Owens wrote:
> Rich Rauenzahn wrote:
>> For example, to compress a 5,861,382 byte mp3 file with bzip2 -9 takes
>> 3.3 seconds.  That's 1,776,176 bytes/sec.
>
> Rich, I just tried bzip'ing an ogg file and found that it got slightly
> larger.  The reason, I believe, is that formats like ogg, mp3, mpg, etc.
> are already compressed.  You might want to run some tests yourself to
> see whether or not it makes sense for you to be compressing your backups.

I only compressed the mp3 as an example of a worst-case scenario.  I 
assume it takes the longest to compress since it is not compressible.

Let's test my assumption by compressing my procmail.log (easily 
compressed): 74,416,448 bytes in 84 seconds, or 885,910 bytes/sec.  So 
yes, that mp3 was slower to compress.

But yes, it would be nice if there were an option to disable compression 
for certain file types.


Rich



Re: [BackupPC-users] pooling of compressed and non-compressed backups

2007-08-21 Thread Rob Owens


Rich Rauenzahn wrote:
> For example, to compress a 5,861,382 byte mp3 file with bzip2 -9 takes
> 3.3 seconds.  That's 1,776,176 bytes/sec. 
Rich, I just tried bzip'ing an ogg file and found that it got slightly
larger.  The reason, I believe, is that formats like ogg, mp3, mpg, etc.
are already compressed.  You might want to run some tests yourself to
see whether or not it makes sense for you to be compressing your backups.

-Rob



[BackupPC-users] Giving BackupPC a good kick

2007-08-21 Thread Jacob
Is there a way to keep shoving BackupPC onwards even when it cries on 
unreadable user directories (such as .gnucash)? It seems kind of strange for 
BackupPC to have a fit and not backup anything else just because it can't 
access one folder. :P

Here's the error log for .gnucash:

Running: /usr/bin/env LC_ALL=C /bin/tar -c -v -f - -C /home/jacob/.gnucash 
--totals .
Xfer PIDs are now 27985,27984
/bin/tar: /home/jacob/.gnucash: Cannot chdir: Permission denied
/bin/tar: Error is not recoverable: exiting now
Tar exited with error 512 () status
tarExtract: Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 
filesTotal, 0 sizeTotal
Got fatal error during xfer (No files dumped for share /home/jacob/.gnucash)
Backup aborted (No files dumped for share /home/jacob/.gnucash)


(P.S. - I'm working on the permissions settings for .gnucash as you read this.)

-- 
Jacob

"For then there will be great distress, unequaled
from the beginning of the world until now—and never
to be equaled again. If those days had not been cut
short, no one would survive, but for the sake of the
elect those days will be shortened."

Are you ready?




Re: [BackupPC-users] wishlist: full backups whenever incrementals get too large

2007-08-21 Thread Les Mikesell
Frans Pop wrote:

>> If this is really the way backuppc does incremental backups, I think
>> backuppc should be a bit more incremental with its incremental backups.
>> Instead of comparing against the last full, it should compare against the
>> last full and incremental backups. This would solve this problem and make
>> backuppc more efficient anyway, AFAIK.
> 
> That proposal goes completely against the basic principles of incremental 
> backups! 

What principles, and how do they apply to a system where all copies of 
everything are pooled?

> If you want something like that, you should use multiple levels of 
> incremental backups.

I thought that was an option in 3.0 but I haven't used it yet.  If your 
targets aren't doing anything else at night you can just do rsync fulls 
every time and waste a few cpu cycles.

-- 
   Les Mikesell
[EMAIL PROTECTED]




Re: [BackupPC-users] How to automate deleting of the "new files" during restoration?

2007-08-21 Thread Rob Owens
Bachir wrote:
> My goal is simple: when restoring a fileset for a specified date
> (including "most recent"), BackupPC should give me exactly the files
> and directories that existed at the time of the last backup prior to
> that date.
>
> According to my several tests this doesn't work, even if the last
> backup was a full backup. Files and directories created after a backup
> (full or incremental) will NOT be deleted during a restoration.
>
> For example: suppose I take a full backup of a directory containing the
> files a1, b1, c1. Then I delete a1, b1, c1 and create 3 new files, a2,
> b2, c2, in the same directory. Doing a direct restoration of the
> directory gives me correct copies of a1, b1 and c1, BUT the files a2,
> b2, c2 are still left in the same directory (which now contains 6 files).
>
> To avoid this problem I tried adding "--delete" to
> $Conf{RsyncRestoreArgs}, but the restoration failed with the following
> error:
>
> "
> Remote[1]: ERROR: buffer overflow in recv_rules [receiver]
> Remote[1]: rsync error: error allocating core memory buffers (code 22)
> at util.c(121) [receiver=2.6.9]
> Read EOF: Connection reset by peer
> Tried again: got 0 bytes
> Done: 301 files, 6183643 bytes
> restore failed: Unable to read 4 bytes
> "
>
> I guess this means that rsyncp doesn't support the "--delete" option.
> Any workaround?
> Thank you for any help!
For a workaround, how about restoring the files to an empty temporary
location?  Then use plain rsync with the --delete option to synchronize
your host with that temporary location.  For large backups this may not
be feasible (time/space constraints), but it should work for relatively
small ones.
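
Something along these lines (the staging path and target host are 
placeholders):

  # 1. restore from BackupPC into an empty staging directory, e.g. /var/tmp/bpc-restore
  # 2. then push it to the client, deleting anything not in the restored copy:
  rsync -aH --delete /var/tmp/bpc-restore/home/jacob/ root@client:/home/jacob/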

-Rob



Re: [BackupPC-users] wishlist: full backups whenever incrementals get too large

2007-08-21 Thread Rob Owens
Jacob wrote:
> On Tue, 21 Aug 2007 10:38:10 -0400
> Rob Owens <[EMAIL PROTECTED]> wrote:
>
>   
>> Carl Wilhelm Soderstrom wrote:
>> 
>>> It would also be nice at times to be able to one-time-schedule the next
>>> backup of a particular host to be a full backup (for instance, if you knew
>>> that you'd just added some data). The way to do this right now is:
>>> - Start the backup right now while you're thinking of it, and hope you don't
>>>   irritate people too much by doing a backup in the middle of the day (or
>>>   whatever other time it is).
>>> - Try to remember to start the backup later on, when it won't irritate
>>>   people.
>>> - Set up a cron or at job to schedule the backup.
>>>
>>> It would be nice to be able to schedule future jobs from within the web
>>> interface tho.  Perhaps have it call 'at', or else use some sort of internal
>>> persistence mechanism?
>>>   
>>>   
>> It would also be nice to be able to tell the server "don't back up this
>> host tonight".  Some of my users run cpu-intensive jobs overnight, which
>> need to be complete by the next morning, and which would get slowed down
>> by a backup occurring right in the middle of it.
>> 
>
> This actually can be done already, I believe. Click the "Stop/Deque Backups" 
> button on your host's admin page, and that'll not only stop any 
> currently-running backups, but won't allow any to be run for the time 
> specified.
>   
You may be right.  I had tried clicking that while no backup was
running, but interpreted the response as an error:  "ok: no backup
was pending or running" (I didn't notice the 'ok' at the front before).
I'm testing it now by telling it not to back up for another 24 hours.

-Rob



Re: [BackupPC-users] pooling of compressed and non-compressed backups

2007-08-21 Thread Rich Rauenzahn
Les Mikesell wrote:
> Don't you run several concurrent backups? Unless you limit it to the 
> number of CPUs in the server the high compression versions will still 
> be stealing cycles from the LAN backups.
I'm backing up 5 machines.  Only one is on the internet, and the amount 
of CPU time/sec the internet backup takes is very small.

For example, to compress a 5,861,382 byte mp3 file with bzip2 -9 takes 
3.3 seconds.  That's 1,776,176 bytes/sec.  The DSL line pumping the data 
to me is pushing 42,086 bytes/sec, and that includes ethernet/IP/ssh/ssh 
compression overhead.  (hmm, now that I think about it, the real 
transfer could be higher because ssh is compressing, but even if it was 
100k/sec of real data it is still peanuts.)

Does that make the theoretical load on the CPU about 6%, if I got the math 
right?  100*1024/1,776,176 ≈ 0.06.  Checking the current backup, yeah, 
it's about 2% of a CPU right now.
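
(As a cross-check using the measured DSL rate instead of the assumed 
100 KB/s: 42,086 / 1,776,176 ≈ 0.024, i.e. roughly 2-3% of one CPU, which 
matches what the current backup is showing.)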

>> I am using ssh -C as well.  And see my other post about rsync 
>> --compress -- it is broken or something.
>
> It is just not supported by the perl rsync version built into backuppc.
>

Ah -- well, it fails quite silently =-).  I couldn't figure out why the 
same files kept getting transferred over and over again...

Rich




Re: [BackupPC-users] wishlist: full backups whenever incrementals get too large

2007-08-21 Thread Jacob
On Tue, 21 Aug 2007 10:38:10 -0400
Rob Owens <[EMAIL PROTECTED]> wrote:

> 
> 
> Carl Wilhelm Soderstrom wrote:
> > It would also be nice at times to be able to one-time-schedule the next
> > backup of a particular host to be a full backup (for instance, if you knew
> > that you'd just added some data). The way to do this right now is:
> > - Start the backup right now while you're thinking of it, and hope you don't
> >   irritate people too much by doing a backup in the middle of the day (or
> >   whatever other time it is).
> > - Try to remember to start the backup later on, when it won't irritate
> >   people.
> > - Set up a cron or at job to schedule the backup.
> >
> > It would be nice to be able to schedule future jobs from within the web
> > interface tho.  Perhaps have it call 'at', or else use some sort of internal
> > persistence mechanism?
> >   
> It would also be nice to be able to tell the server "don't back up this
> host tonight".  Some of my users run cpu-intensive jobs overnight, which
> need to be complete by the next morning, and which would get slowed down
> by a backup occurring right in the middle of it.

This can actually be done already, I believe. Click the "Stop/Deque Backups" 
button on your host's admin page; that will not only stop any 
currently-running backup but also prevent any from being run for the time 
you specify.

-- 
Jacob

"For then there will be great distress, unequaled
from the beginning of the world until now—and never
to be equaled again. If those days had not been cut
short, no one would survive, but for the sake of the
elect those days will be shortened."

Are you ready?




Re: [BackupPC-users] pooling of compressed and non-compressed backups

2007-08-21 Thread Rich Rauenzahn

>
> Compression is done on the server side after the transfer.  What's the 
> point of using different methods?  According to the docs, compressed 
> and uncompressed files aren't pooled but different levels are.  The 
> only way to get compression over the wire is to add the -C option to 
> ssh - and you'll probably want to use rsync if bandwidth matters.
>

Because if I'm transferring the backup at 40 KB/s across the 
internet, bzip2'ing on the server isn't going to slow down the backup, 
and slowing backups down is the main argument against higher compression.

I am using ssh -C as well.  And see my other post about rsync --compress 
-- it is broken or something.

Rich




Re: [BackupPC-users] wishlist: full backups whenever incrementals get too large

2007-08-21 Thread Frans Pop
On Tuesday 21 August 2007, Jacob wrote:
> If this is really the way backuppc does incremental backups, I think
> backuppc should be a bit more incremental with its incremental backups.
> Instead of comparing against the last full, it should compare against the
> last full and incremental backups. This would solve this problem and make
> backuppc more efficient anyway, AFAIK.

That proposal goes completely against the basic principles of incremental 
backups! If you want something like that, you should use multiple levels of 
incremental backups.



Re: [BackupPC-users] pooling of compressed and non-compressed backups

2007-08-21 Thread Les Mikesell
Rich Rauenzahn wrote:
> 
>>
>> Compression is done on the server side after the transfer.  What's the 
>> point of using different methods?  According to the docs, compressed 
>> and uncompressed files aren't pooled but different levels are.  The 
>> only way to get compression over the wire is to add the -C option to 
>> ssh - and you'll probably want to use rsync if bandwidth matters.
>>
> 
> Because if I'm transferring the backup at 40kbps/sec across the 
> internet, bzip2'ing on the server isn't going to slow down the backup, 
> which is the main reason for not using the higher compression.

Don't you run several concurrent backups? Unless you limit them to the 
number of CPUs in the server, the high-compression backups will still be 
stealing cycles from the LAN backups.
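
A minimal sketch of capping that concurrency (the value shown is just an 
example):

  $Conf{MaxBackups} = 2;   # maximum number of simultaneous backups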

> I am using ssh -C as well.  And see my other post about rsync --compress 
> -- it is broken or something.

It is just not supported by the perl rsync version built into backuppc.

-- 
   Les Mikesell
[EMAIL PROTECTED]



[BackupPC-users] How to automate deleting of the "new files" during restoration?

2007-08-21 Thread Bachir
My goal is simple: when restoring a fileset for a specified date
(including "most recent"), BackupPC should give me exactly the files
and directories that existed at the time of the last backup prior to
that date.

According to my several tests this doesn't work, even if the last
backup was a full backup. Files and directories created after a backup
(full or incremental) will NOT be deleted during a restoration.

For example: suppose I take a full backup of a directory containing the
files a1, b1, c1. Then I delete a1, b1, c1 and create 3 new files, a2,
b2, c2, in the same directory. Doing a direct restoration of the
directory gives me correct copies of a1, b1 and c1, BUT the files a2,
b2, c2 are still left in the same directory (which now contains 6 files).

To avoid this problem I tried adding "--delete" to
$Conf{RsyncRestoreArgs}, but the restoration failed with the following
error:

"
Remote[1]: ERROR: buffer overflow in recv_rules [receiver]
Remote[1]: rsync error: error allocating core memory buffers (code 22)
at util.c(121) [receiver=2.6.9]
Read EOF: Connection reset by peer
Tried again: got 0 bytes
Done: 301 files, 6183643 bytes
restore failed: Unable to read 4 bytes
"

I guess this means that rsyncp doesn't support the "--delete" option.
Any workaround?
Thank you for any help!



Re: [BackupPC-users] wishlist: full backups whenever incrementals get too large

2007-08-21 Thread Jacob
On Tue, 21 Aug 2007 09:04:40 -0500
Carl Wilhelm Soderstrom <[EMAIL PROTECTED]> wrote:

> I just noticed today that one of the hosts I'm backing up, is suddenly
> taking much longer to back up. Looks like someone put a large quantity of
> new data on it.
> 
> Problem is that whereas the full backups used to take (as a proportional
> scale) 1x, and incrementals perhaps 0.2x, the latest incrementals are taking
> ~2.5x the time of the last full backup. Due to the way backups are done in
> backuppc (always making an incremental against the last full), they'll keep
> on being excessively large until the next full backup.
> 
> Would it be reasonable to have backuppc check the time used by the last
> incremental against the time used by the last full, and if it's taken longer
> to do the incremental, then automatically do a full backup next time? (Of
> course, make a note in the logs as to why this was done).

If this is really the way backuppc does incremental backups, I think backuppc 
should be a bit more incremental with its incremental backups. Instead of 
comparing against the last full, it should compare against the last full and 
incremental backups. This would solve this problem and make backuppc more 
efficient anyway, AFAIK.

-- 
Jacob

"For then there will be great distress, unequaled
from the beginning of the world until now—and never
to be equaled again. If those days had not been cut
short, no one would survive, but for the sake of the
elect those days will be shortened."

Are you ready?




Re: [BackupPC-users] pooling of compressed and non-compressed backups

2007-08-21 Thread Les Mikesell
Rich Rauenzahn wrote:

>> If I have 2 hosts that contain common files, and compression is enabled
>> on one but not the other, will these hosts' files ever get pooled? 
>>
>> What if compression is enabled on both, but different compression levels
>> are set?
>>
>>
>>   
> I'm curious about this as well, but would like to add to the question -- 
> what if I'm backing up some hosts across the internet, and I set the 
> compression to bzip2 -9.  But local hosts on the LAN I set to gzip -4. 
> 
> I believe I read that the pool checksums are based on the uncompressed 
> data -- so I would expect that anything common backed up across the 
> internet first will be shared as bzip2, but anything common backed up 
> locally with gzip first would be shared as gzip. 
> 
> I'm also assuming it is ok to be mixing the two compression methods in 
> the pools!

Compression is done on the server side after the transfer.  What's the 
point of using different methods?  According to the docs, compressed and 
uncompressed files aren't pooled, but different compression levels are.  The 
only way to get compression over the wire is to add the -C option to ssh, 
and you'll probably want to use rsync if bandwidth matters.
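
For rsync over ssh that is roughly (based on the stock command; check your 
own config.pl before copying):

  $Conf{RsyncClientCmd} = '$sshPath -C -q -x -l root $host $rsyncPath $argList+';

The only change from the default is the added -C, which turns on ssh-level 
compression.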

-- 
   Les Mikesell
[EMAIL PROTECTED]




Re: [BackupPC-users] pooling of compressed and non-compressed backups

2007-08-21 Thread Rich Rauenzahn

Rob Owens wrote:
> If I have 2 hosts that contain common files, and compression is enabled
> on one but not the other, will these hosts' files ever get pooled? 
>
> What if compression is enabled on both, but different compression levels
> are set?
>
> Thanks
>
> -Rob
>
>
>   
I'm curious about this as well, but would like to add to the question -- 
what if I'm backing up some hosts across the internet, and I set the 
compression to bzip2 -9.  But local hosts on the LAN I set to gzip -4. 

I believe I read that the pool checksums are based on the uncompressed 
data -- so I would expect that anything common backed up across the 
internet first will be shared as bzip2, but anything common backed up 
locally with gzip first would be shared as gzip. 

I'm also assuming it is ok to be mixing the two compression methods in 
the pools!

Rich




Re: [BackupPC-users] wishlist: full backups whenever incrementals get too large

2007-08-21 Thread Rob Owens


Carl Wilhelm Soderstrom wrote:
> It would also be nice at times to be able to one-time-schedule the next
> backup of a particular host to be a full backup (for instance, if you knew
> that you'd just added some data). The way to do this right now is:
> - Start the backup right now while you're thinking of it, and hope you don't
>   irritate people too much by doing a backup in the middle of the day (or
>   whatever other time it is).
> - Try to remember to start the backup later on, when it won't irritate
>   people.
> - Set up a cron or at job to schedule the backup.
>
> It would be nice to be able to schedule future jobs from within the web
> interface tho.  Perhaps have it call 'at', or else use some sort of internal
> persistence mechanism?
>   
It would also be nice to be able to tell the server "don't back up this
host tonight".  Some of my users run cpu-intensive jobs overnight, which
need to be complete by the next morning, and which would get slowed down
by a backup occurring right in the middle of it.

-Rob



[BackupPC-users] pooling of compressed and non-compressed backups

2007-08-21 Thread Rob Owens
If I have 2 hosts that contain common files, and compression is enabled
on one but not the other, will these hosts' files ever get pooled? 

What if compression is enabled on both, but different compression levels
are set?

Thanks

-Rob



Re: [BackupPC-users] Feature request: user-configurable directory exclusion

2007-08-21 Thread Rob Owens
For my backups of /home I always exclude '/*/tmp/' and tell my users
that anything they don't want backed up should go in /home/username/tmp.

Rsync will then exclude /home/username/tmp and any files or directories
contained in it.
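
In config.pl terms that is roughly (assuming the share is /home):

  $Conf{BackupFilesExclude} = {
    '/home' => [
      '/*/tmp/'
    ]
  };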

-Rob

Robin Lee Powell wrote:
> A feature I'd really like, and would be willing to give gifts in
> return for, would be something like this:
>
> User touches a file named ".donotbackup" in a directory.  Backuppc
> notices this and does not backup that directory.  The sysadmin
> doesn't have to alter the system include list.
>
> -Robin
>
>   



[BackupPC-users] IncrLevels with rsync

2007-08-21 Thread Rob Owens
I just noticed the $Conf{IncrLevels} setting.  I'm using rsync and
rsyncd as my transport, and I'd like to minimize my network usage since
I'm backing up over the internet.  I don't care about disk or cpu usage.

Does setting
  $Conf{IncrLevels} = [1, 2, 3, 4, 5, 6];
do anything to reduce my network usage?  Or do rsync and the pooling
mechanism already take care of that "behind the scenes"?

-Rob



[BackupPC-users] wishlist: full backups whenever incrementals get too large

2007-08-21 Thread Carl Wilhelm Soderstrom
I just noticed today that one of the hosts I'm backing up is suddenly
taking much longer to back up. It looks like someone put a large quantity
of new data on it.

Problem is that whereas the full backups used to take (as a proportional
scale) 1x, and incrementals perhaps 0.2x, the latest incrementals are taking
~2.5x the time of the last full backup. Due to the way backups are done in
backuppc (always making an incremental against the last full), they'll keep
on being excessively large until the next full backup.

Would it be reasonable to have backuppc check the time used by the last
incremental against the time used by the last full, and if it's taken longer
to do the incremental, then automatically do a full backup next time? (Of
course, make a note in the logs as to why this was done).

It would also be nice at times to be able to one-time-schedule the next
backup of a particular host to be a full backup (for instance, if you knew
that you'd just added some data). The way to do this right now is:
- Start the backup right now while you're thinking of it, and hope you don't
  irritate people too much by doing a backup in the middle of the day (or
  whatever other time it is).
- Try to remember to start the backup later on, when it won't irritate
  people.
- Set up a cron or at job to schedule the backup.

It would be nice to be able to schedule future jobs from within the web
interface tho.  Perhaps have it call 'at', or else use some sort of internal
persistence mechanism?
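
Until then, one way to get the 'at' variant from the command line (the 
install path and argument order here are from memory; check 
BackupPC_serverMesg's usage output before relying on this):

  # run as the backuppc user: queue a full backup of 'somehost' at 10pm tonight
  echo "/usr/share/backuppc/bin/BackupPC_serverMesg backup somehost somehost backuppc 1" | at 22:00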

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



[BackupPC-users] md4 doesn't match

2007-08-21 Thread Lee A. Connell
I have been getting this error lately: Exchange-State.bkf: md4 doesn't
match: will retry in phase 1; file removed

 

When this error happens I am unable to use the web interface to click on
my backup number: the interface just hangs.  On any of the backups where
I had no Xfer errors the interface works fine.  What can I do to resolve
this?

 

Thanks,

 

-Lee



[BackupPC-users] aborted by signal=PIPE

2007-08-21 Thread Keith Edmunds
Version 2.1.2 using rsync.

I have a client that intermittently fails with "aborted by signal=PIPE",
which I realise is a common error. I have $Conf{ClientTimeout} =
72000; the end of the log file (with $Conf{XferLogLevel} = 8;) looks like
this:


Starting file 114944 (), blkCnt=256, blkSize=524288, remainder=503040
: size doesn't match ( vs 134196480)
: blk=255, newData=0, rxMatchBlk=, rxMatchNext=0
: blk=, newData=255, rxMatchBlk=255, rxMatchNext=256
Unable to read 503040 bytes from  got=0, seekPosn=133693440 (0,1,256,,)
Read EOF: 
Tried again: got 0 bytes
Child is sending done
Got done from child
Can't write 4 bytes to socket
Got stats: 1279724134 1279345231 1096045344 0 ('errorCnt' =>
1,'ExistFileSize' => 0,'ExistFileCnt' => 113057,'TotalFileCnt' =>
113057,'ExistFileCompSize' => 463081472,'TotalFileSize' => 0) finish:
removing in-process file


Does the "Read EOF" suggest a network problem between client and server?
Or is there some other more likely cause?

Thanks,
Keith



[BackupPC-users] rsync --compress broken ?

2007-08-21 Thread Rich Rauenzahn
Whenever I use these options, rsync "seems" to work and transfer 
files but nothing ever seems to actually get written to the backup 
dirs:

$Conf{RsyncArgs} = [  # defaults, except I added the compress flags.
 '--numeric-ids',
 '--perms',
 '--owner',
 '--group',
 '-D',
 '--links',
 '--hard-links',
 '--times',
 '--block-size=2048',
 '--recursive',
 '--checksum-seed=32761',
 '--compress',  # these two are suspicious
 '--compress-level=9'   # these two are suspicious
];

Taking out the --compress and --compress-level fixes it.

I've monitored with lsof and run a manual backup with -v -- the remote rsync 
opens the files and seems to transfer them (tcpdump shows lots of traffic), 
but the files never seem to get put on disk.  They are never opened on the 
backuppc server (checked with lsof).  A manual backup with -v shows no 
files being processed: a "create d ." is shown, then nothing.  I'd hate 
to fall back to ssh compression, since I've read compression is more 
efficient at the rsync level.

I don't believe my environment is unusual -- I changed the default 
client to be rsyncd.  Remote and local systems are both Linux, FC6.

Here's the rest of the config for this client:

$Conf{RsyncShareName} = [
 'BackupPC'
];
$Conf{RsyncdPasswd} = '*';

$Conf{RsyncdClientPort} = '9001';
$Conf{ClientNameAlias} = 'localhost';
$Conf{DumpPreUserCmd} = '/etc/rjr/BackupPC/bin/open_ssh_tunnel';

$Conf{BackupFilesExclude} = {
 '*' => [
   '/var/mail/*.xspam',
   '/var/mail/*.xraw',
   '/proc/',
   '/var/named/chroot/proc/',
   '/var/spool/squid/',
   '/sys/',
   '/dev/',
   '/oldboot/',
   '*.iso',
   '*.iso.*',
   '/var/mail/*.xspam.*',
   '/var/mail/*.xraw.*',
   '/media/',
   '/misc/',
   '/net/',
   '/mnt/',
   'Thumbs.db'
 ]
};
$Conf{PingCmd} = '/etc/rjr/BackupPC/bin/ping_tcp_ssh';
$Conf{PingMaxMsec} = '1';
$Conf{DumpPostUserCmd} = '/etc/rjr/BackupPC/bin/kill_ssh_tunnel';

$Conf{ArchiveComp} = 'bzip2';  # since the cpu time to compress will be way shorter than the WAN time
$Conf{CompressLevel} = '9';    # since the cpu time to compress will be way shorter than the WAN time



