Re: [BackupPC-users] Copying in a file instead of backing up?

2009-01-14 Thread Rich Rauenzahn

>
> I thought about this a little a year or so ago -- enough to attempt to 
> try to understand the rsync perl modules (failed!).
>
> I thought perhaps what would be best is a berkeley db/tied hash lookup 
> table/cache that would map rsync checksums+file size to a pool 
> item.  The local rsync client would request the checksum of each 
> remote file before transfer, and if it was in the cache and in the 
> pool, it could be used as the local version, then let the rsync 
> protocol take over to verify all of the blocks.
>
> I really like that BackupPC doesn't store its data in a database that 
> could get corrupted, and the berkeley db would just be a cache whose 
> integrity wouldn't be critical to the integrity of the backups.   And 
> the cache isn't relied on 100%, but rather the actual pool file the 
> cache points to is used as the ultimate authority.
>

Sorry -- to develop my idea further: the cache would be created/updated 
during backups, and a tool could be written to generate entries in batch.
A weekly routine could walk the cache and remove checksum entries that 
no longer point to pool items.  Small files (with a user-configurable size 
threshold) could also be excluded from the cache to minimize overhead and space.
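
As a rough sketch of the idea (illustrative only, not working BackupPC
code -- the cache path and the checksum/pool values below are made up):

#!/usr/bin/perl
# Map "rsync full-file checksum + size" to a pool file path via a tied
# Berkeley DB hash.  The DB is only a hint; the pool file is the authority.
use strict;
use warnings;
use Fcntl;
use DB_File;

my $cachefile = '/var/lib/backuppc/checksum-cache.db';   # made-up location
my %cache;
tie %cache, 'DB_File', $cachefile, O_CREAT | O_RDWR, 0600, $DB_HASH
    or die "cannot open $cachefile: $!";

# During a backup, record each new pool file under its checksum+size key...
$cache{'0123456789abcdef/1048576'} =
    '/var/lib/backuppc/cpool/0/1/2/0123456789abcdef';

# ...and before transferring a remote file, ask the cache for a candidate.
my $key = '0123456789abcdef/1048576';
if (my $pool = $cache{$key}) {
    # trust the entry only if the pool file still exists
    if (-f $pool) {
        print "candidate pool file: $pool\n";
    } else {
        print "stale cache entry for $key\n";
    }
}

untie %cache;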

Rich



Re: [BackupPC-users] Copying in a file instead of backing up?

2009-01-14 Thread Rich Rauenzahn



Les Mikesell wrote:

Johan Ehnberg wrote:
  

OK. I can see now why this is true. But it seems like one could
rewrite the backuppc rsync protocol to check the pool for a file with
the same checksum before syncing. This could give some real speedup on
large files. This would be possible at least for the cpool, where the
rsync checksums (and full file checksums) are stored at the end of
each file.
  
Now this would be quite the feature - and it fits perfectly with the idea 
of smart pooling that BackupPC has. The effects are rather interesting:


- Different incremental levels won't be needed to preserve bandwidth
- Full backups will indirectly use earlier incrementals as reference

Definite wishlist item.



But you'll have to read through millions of files, and the common case of 
a growing logfile isn't going to find a match anyway.  The only way this 
could work is if the remote rsync could send a starting hash matching 
the one used to construct the pool filenames - and then you still have 
to deal with the odds of collisions.


I thought about this a little a year or so ago -- enough to attempt to 
try to understand the rsync perl modules (failed!).


I thought perhaps what would be best is a berkeley db/tied hash lookup 
table/cache that would map rsync checksums+file size to a pool item.
The local rsync client would request the checksum of each remote file 
before transfer, and if it was in the cache and in the pool, it could be 
used as the local version, then let the rsync protocol take over to 
verify all of the blocks.


I really like that BackupPC doesn't store its data in a database that 
could get corrupted, and the berkeley db would just be a cache whose 
integrity wouldn't be critical to the integrity of the backups.   And 
the cache isn't relied on 100%, but rather the actual pool file the 
cache points to is used as the ultimate authority.


Rich




Re: [BackupPC-users] Backing up the backup to an external USB drive

2008-12-15 Thread Rich Rauenzahn
Andreas Micklei wrote:
> Am Freitag, 12. Dezember 2008 schrieb Rich Rauenzahn:
>   
>> Some of you running dd might want to consider "dump"
>> On the other hand...
>> http://dump.sourceforge.net/isdumpdeprecated.html
>> Although some of the arguments apply to dd as well.
>> 
>
> I have been doing that for about two years now. Works great! It's just like 
> dd, only faster and more space efficient. However you can only dump 
> partitions, not the whole disk, so I also save the output of fdisk -l, so I 
> can recreate the partition table in case of disaster. Booting a live cd and 
> using restore is quite easy if I need to.
>
>
>   
Since I make heavy use of LVM logical volumes, I instead store the output 
of vgdisplay -v along with bdf and mount.

Rich




Re: [BackupPC-users] Backing up the backup to an external USB drive

2008-12-12 Thread Rich Rauenzahn

> Yesterday I read a little about the LVM snapshot, I didn't know that LVM 
> had this feature. When I read that suggestion I thought that snapshot 
> was a sort of dd. By now I am running a dd on the snapshot, I opened 
> some space on the lvm by reducing the root size and created the snapshot.
>
>   
Some of you running dd might want to consider "dump".

On the other hand...

http://dump.sourceforge.net/isdumpdeprecated.html

Although some of the arguments apply to dd as well.

Rich





Re: [BackupPC-users] Child Exited Prematurely

2008-12-05 Thread Rich Rauenzahn


David Rees wrote:
> On Thu, Dec 4, 2008 at 7:24 PM, Nick Smith <[EMAIL PROTECTED]> wrote:
>   
>> Did you ever get this resolved? I'm having the same problem; now all of
>> my backups are failing with the same errors you are getting.  I'm using
>> 2.6.9 protocol version 29.  Ubuntu doesn't seem to have a newer version
>> available yet.  They are all on fiber or fast cable internet that is
>> reliable.  Different firewalls at each location.  I could never find
>> any info on whether pfsense or m0n0wall close inactive connections.
>> 
>
> I've got one server whose only job is to back itself up, and it gives this
> error. It started out only occasionally failing, but now I can't
> complete a full backup without it bailing out with this error.
>
> I've ruled out ssh by invoking rsync directly, and when that failed
> with the same exact error, I tried tar, which also died with a similar
> message.  Running the backup from the command line didn't reveal any
> additional interesting error messages. It's like something is breaking
> pipes on the machine after a random amount of time.
>
> I am very confused by this one but have seen other similar reports.
>
>
>   

Now I seem to be getting this as well, but only from my XP box to my 
Linux box (FC8), where I run rsync in daemon mode.   I haven't noticed it 
on any other machines yet.

Event Type:     Warning
Event Source:   rsyncd
Event Category: None
Event ID:       0
Date:           12/5/2008
Time:           2:18:12 AM
User:           AKANE\Administrator
Computer:       AKANE
Description:
The description for Event ID ( 0 ) in Source ( rsyncd ) cannot be
found. The local computer may not have the necessary registry
information or message DLL files to display messages from a remote
computer. You may be able to use the /AUXSOURCE= flag to retrieve
this description; see Help and Support for details. The following
information is part of the event: rsyncd: PID 1244: rsync:
writefd_unbuffered failed to write 4092 bytes [sender]: Connection
reset by peer (104).

Event Type:     Warning
Event Source:   rsyncd
Event Category: None
Event ID:       0
Date:           12/5/2008
Time:           2:18:12 AM
User:           AKANE\Administrator
Computer:       AKANE
Description:
The description for Event ID ( 0 ) in Source ( rsyncd ) cannot be
found. The local computer may not have the necessary registry
information or message DLL files to display messages from a remote
computer. You may be able to use the /AUXSOURCE= flag to retrieve
this description; see Help and Support for details. The following
information is part of the event: rsyncd: PID 1244: rsync error:
error in rsync protocol data stream (code 12) at
/home/lapo/packaging/rsync-3.0.4-1/src/rsync-3.0.4/io.c(1541)
[sender=3.0.4].






Re: [BackupPC-users] merge host from one pool into another pool

2008-11-25 Thread Rich Rauenzahn

Msquared wrote:

I guess the simple question is, then: is there a tool you can run that
will de-dupe and/or re-merge with the pool?

  


How about a Linux tool like either of these?   There is a risk of 
linking two files that have identical contents but are semantically 
different... like an attrib file or something.


[EMAIL PROTECTED] ~]$ rpm -qi hardlink fdupe
Name        : hardlink                  Relocations: (not relocatable)
Version     : 1.0                       Vendor: Fedora Project
Release     : 5.fc8                     Build Date: Wed 22 Aug 2007 11:42:52 PM PDT
Install Date: Sun 16 Dec 2007 10:23:46 PM PST   Build Host: xenbuilder4.fedora.phx.redhat.com
Group       : System Environment/Base   Source RPM: hardlink-1.0-5.fc8.src.rpm
Size        : 10639                     License: GPL+
Signature   : DSA/SHA1, Wed 24 Oct 2007 08:07:11 PM PDT, Key ID b44269d04f2a6fd2
Packager    : Fedora Project
URL         : http://cvs.fedora.redhat.com/viewcvs/devel/hardlink/
Summary     : Create a tree of hardlinks
Description :
hardlink is used to create a tree of hard links.
It's used by kernel installation to dramatically reduce the
amount of diskspace used by each kernel package installed.
package fdupe is not installed

[EMAIL PROTECTED] ~]$ rpm -qi fdupes
Name        : fdupes                    Relocations: (not relocatable)
Version     : 1.40                      Vendor: Fedora Project
Release     : 10.fc8                    Build Date: Wed 28 Nov 2007 12:29:44 PM PST
Install Date: Fri 07 Nov 2008 02:44:54 PM PST   Build Host: xenbuilder1.fedora.redhat.com
Group       : Applications/File         Source RPM: fdupes-1.40-10.fc8.src.rpm
Size        : 23464                     License: MIT
Signature   : DSA/SHA1, Thu 28 Aug 2008 09:08:59 PM PDT, Key ID 62aec3dc6df2196f
Packager    : Fedora Project
URL         : http://netdial.caribe.net/~adrian2/fdupes.html
Summary     : Finds duplicate files in a given set of directories
Description :
FDUPES is a program for identifying duplicate files residing within specified
directories.
[EMAIL PROTECTED] ~]$
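
For what it's worth, usage would be something along these lines (a sketch 
only -- the path is an example, and I'd run the listing/dry-run forms first 
before letting anything actually relink files):

# list duplicate files under a host's backup tree
fdupes -r /var/lib/backuppc/pc/somehost

# hardlink identical files; -n is a dry run, -v is verbose,
# -c compares content only (ignores owner/mode/time differences)
hardlink -c -n -v /var/lib/backuppc/pc/somehost

Drop the -n once you're happy with what it reports.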



Re: [BackupPC-users] Child Exited Prematurely

2008-11-24 Thread Rich Rauenzahn

> This is most likely a TCP timeout or other network problem.
>
> Rsync added a TCP keep-alive option in protocol version 29
> (if I recall correctly), and it is not currently supported in
> the File::RsyncP module that BackupPC uses.
>
>
>   

It's too bad rsync doesn't have any kind of hook/plugin system where you 
could add your own hooks for writing/reading files... then one could 
always use the stock rsync, but have a backuppc back end do the I/O.  
(I suppose one could use LD_PRELOAD or a fuse filesystem...)

One could also devise a way to look up the new file's checksum to see if 
that file already exists anywhere else in the pool.

Rich




Re: [BackupPC-users] Is there any way for BackupPC to restore hard links properly?

2008-11-10 Thread Rich Rauenzahn


Craig Barratt wrote:
> This results in one subtle bug that can't be easily fixed: if you
> switch the Xfer method from tar to rsync, old backups that have
> hardlinks stored with tar won't be correctly restored with rsync.
> The workaround is to generate a tar file and extract it, or switch
> the Xfer method back to tar before you do the restore.  The
> opposite case should work correctly.
>
> Craig
>
>   
FYI,

If you ever find yourself with a bunch of files that you need to 
re-hardlink, I found some great utilities to do that with:

Name   : hardlink
Summary: Create a tree of hardlinks
URL    : http://cvs.fedora.redhat.com/viewcvs/devel/hardlink/
License: GPL+
Description: hardlink is used to create a tree of hard links. It's used
             by kernel installation to dramatically reduce the amount of
             diskspace used by each kernel package installed.

Name   : fdupes
Version: 1.40
Summary: Finds duplicate files in a given set of directories
URL    : http://netdial.caribe.net/~adrian2/fdupes.html
License: MIT
Description: FDUPES is a program for identifying duplicate files
             residing within specified directories.






Re: [BackupPC-users] Feature Request: Link to latest Full

2008-11-07 Thread Rich Rauenzahn


Nils Breunese (Lemonbit) wrote:
> If you want to send an archive of a backup to tape that you can  
> restore (without BackupPC), check out 'Archive Functions' in the  
> BackupPC documentation: 
> http://backuppc.sourceforge.net/faq/BackupPC.html#archive_functions
>
> Nils Breunese.
>
>
>   
I've always wished the BackupPC_archiveHost command had an option for 
the "latest" backup.

Rich




Re: [BackupPC-users] Tailing the XferLOG

2008-11-07 Thread Rich Rauenzahn

Kevin DeGraaf wrote:
> Is there a way to tail the XferLOG during an rsync-over-ssh dump?
>
>
>
>   
How about something like:

mkfifo pipe
tail -f --bytes=+0 XferLOG.z > pipe &
BackupPC_zcat pipe

Another option is to run strace against the process writing the files 
and filter for just the open() calls.
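
Something along these lines (a sketch; the PID is whatever BackupPC_dump 
process is writing the backup, and the grep just cuts down the noise):

strace -f -e trace=open -p <pid_of_BackupPC_dump> 2>&1 | grep -v ENOENT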

Rich




Re: [BackupPC-users] rsync - writefd_unbuffered failed to write

2008-10-27 Thread Rich Rauenzahn


Matthias Meyer wrote:
> Hi,
>
> I've installed cygwin and rsync on a Vista client and want to back it up to 
> backuppc 3.1.0-2 on my Linux server:
>   

2008/10/17 10:30:28 [1432] rsync: writefd_unbuffered failed to write 4 bytes 
[sender]: Connection reset by peer (104)
2008/10/17 10:30:28 [1432] rsync error: error in rsync protocol data stream 
(code 12) at io.c(1541) [sender=3.0.4]
2008/10/17 10:30:28 [1432] _exit_cleanup(code=12, file=io.c, line=1541): 

Not that this will help, but I was getting these on Vista as well with 
rsync outside of backuppc -- in fact, I was copying files TO Vista via 
rsync.  I gave up on rsync...

Rich






Re: [BackupPC-users] Move BackupPC

2008-08-08 Thread Rich Rauenzahn

Diederik De Deckere wrote:
Anyway, you should consider NOT moving to RAID5, as it is very, very  
slow with backuppc.  Specifically, write speed is less than half  
that of a RAID1 and way less than a RAID0+1.





I'm not sure if speed is an issue here since all backups are taken  
overnight.
  


I'm using RAID5 at home across 4x500GB drives to back up our home 
machines and another machine across the Internet.   Space efficiency was 
more important to me than speed, and I don't have any problems using 
RAID5.  So it really depends on your use case and priorities.  (Not all 
of us are backing up 100 laptops in a corporate environment!)


Rich


Re: [BackupPC-users] Just to make sure: How to move/copy /var/lib/backuppc to another place (RAID1)

2008-08-06 Thread Rich Rauenzahn
Rob Owens wrote:
> Holger Parplies wrote:
>   
>> Your best options remain to either do a block level copy of the file system
>> (dd) or to start over. 
>> 
>
> One suggestion I've heard on this list before, which may be a good one 
> for you, is to simply start over with a new pool but save the existing 
> pool for a few weeks/months/years.  Then if you never need to restore a 
> backup from the old pool, you will have saved yourself a lot of effort.
>
>   

Yet another option often overlooked is using 'dump' instead of dd. 


DUMP(8)                 System management commands                 DUMP(8)

NAME
   dump - ext2/3 filesystem backup

[...]
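
For example, to dump the backuppc filesystem to an external drive and 
restore it later (a sketch only -- adjust the level, device, and paths to 
your own setup):

# level-0 dump of the pool filesystem, compressed, to a file on the USB drive
dump -0 -z6 -f /mnt/usb/backuppc.dump /dev/VolGroup00/backuppc

# later, on a freshly created filesystem mounted at /var/lib/backuppc:
# cd /var/lib/backuppc && restore -rf /mnt/usb/backuppc.dump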



Re: [BackupPC-users] BackupPCd: still dead?

2008-07-15 Thread Rich Rauenzahn


dan wrote:
there is no official message that says it's dead, but it's dead: no 
activity for 2 full years, and Keene's last post to this list was like 
2 full years ago.


I'd guess it's completely dead!

On Thu, Jul 10, 2008 at 3:19 PM, dnk <[EMAIL PROTECTED] 
> wrote:


In my search to become more familiar with backuppc, I came across some
references to BackupPCd. I then also came across a mailing list
archive with the subject "BackupPCd: dead?".

Is this the case? I still find references to it in the backuppc docs
as well.




I exchanged email with him sometime in the last year.  He started 
working on it for a job he was employed at, but then moved on to another 
job.   It didn't sound like he'd get back to it anytime soon since he 
didn't need it anymore.


Rich




Re: [BackupPC-users] Incremental directory structure

2008-06-24 Thread Rich Rauenzahn

> I don't want to get into a war about filesystem formats, but perhaps
> this is a valid data point for XFS. I don't know about other filesystem
> types either...
>
> You might like to check out/talk to some XFS experts, and see what they
> say about your very slow performance. There may be some options to
> tune/improve the performance, or they may simply suggest another FS
> format which is better suited to the workload in backuppc.
>
> Either way, let us all know the results, I'm sure you aren't the only
> person on the list with this issue.
>
> BTW, I use reiserfs on my backuppc server, and will let you know the
> number of directories/files under the backuppc directory, along with the
> time to create new directories  will probably take some hours for
> the results :)
>
>
>   
For comparison's sake... this is on a 4 disk software RAID5.

[EMAIL PROTECTED] .RAID]# find . -type d | wc -l
1164630
[EMAIL PROTECTED] .RAID]# cd tmp
[EMAIL PROTECTED] tmp]# pwd
/.RAID/tmp
[EMAIL PROTECTED] tmp]# time for i in `seq 1 1`; do mkdir $i; done
real0m35.198s
user0m5.840s
sys 0m25.695s
[EMAIL PROTECTED] tmp]# mount | grep /.RAID
/dev/mapper/VolGroupRAID-lvol0 on /.RAID type ext3 (rw,noatime,nodiratime)
# [EMAIL PROTECTED] .RAID]# df -i | egrep Ino\|.RAID
Filesystem                      Inodes   IUsed     IFree IUse% Mounted on
/dev/mapper/VolGroupRAID-lvol0 164839424 1857828 162981596   2% /.RAID





Re: [BackupPC-users] Need access to raw mailing list archive

2008-02-21 Thread Rich Rauenzahn


Curtis Preston wrote:
> Hey, folks!  It's W. Curtis Preston, the author of Backup & Recovery and
> the webmaster of www.backupcentral.com.
>
> I'd like to add the backuppc-users list to the backup software mailing
> lists that I support on BackupCentral.com.  I added the NetWorker,
> NetBackup, and TSM lists a year ago with great success, and I'd like to
> do the same for popular open source packages like BackupPC.  
>
> One of the things I do when I add a list is to import its archives into
> the forum engine for searchability.  In order to do this, I need access
> to a raw version of the archives, not an HTML version such as is
> currently available to me for this list.
>
> Can anyone help me get this?
>
> Thanks in advance.
>
> Project: http://backuppc.sourceforge.net/
>   

I assume (please, oh, PLEASE!) that actual email addresses will be 
removed from the net-readable archive?

Rich




Re: [BackupPC-users] Enhancing WAN link transfers

2008-02-19 Thread Rich Rauenzahn


dan wrote:
> No, incrementals are more efficient on bandwidth.  They do a less 
> strenuous test to determine if a file has changed.
>
> at the expense of CPU power on both sides, you can compress the rsync 
> traffic either with rsync -z 
Have you tried rsync -z?   Last I heard, BackupPC's rsync modules don't 
support it.

Rich





Re: [BackupPC-users] Improving security, and user options

2008-02-05 Thread Rich Rauenzahn



There are several secure ways to set up a read-only backup system, but
that loses the convenience of browsing and restoring files via the web
interface. But, users can still directly download files or tar archives,
so it is a reasonable approach, and probably the right thing to do for now.
  
And, if I do need to restore a system, I can temporarily change the 
read-only attribute in rsyncd.conf -- or do it by hand.   I like that 
manual step of root intervention.


Rich


Re: [BackupPC-users] Improving security, and user options

2008-02-05 Thread Rich Rauenzahn


Joe Krahn wrote:
> (Maybe this should be posted to -devel?)
> Unrestricted remote root access by a non-root user is generally not a
> secure design. There are many ways to restrict the access to backup
>   

This seems like a good chance to explain how I handle rsync security 
-- I prefer it over the sudo method, and I did not like the idea of a 
remote ssh root login.

For remote backups, I set up a non-privileged account that I configure for 
password-less login from the backup server.  I then set up rsyncd to 
listen only on localhost on the remote host.  I also set up an 
rsyncd.secrets file and configure the rsyncd.conf shares to be read-only. 

To back up, I create a tunnel using the password-less login and then 
back up over the tunnel.  For local backups, you obviously don't need the 
tunnel -- just connect to localhost.
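
For reference, the relevant bits of the client's rsyncd.conf end up looking 
roughly like this (a sketch -- the module name, path, and user are just 
examples):

# listen on the loopback interface only
address = 127.0.0.1
port = 873

[rootfs]
    path = /
    read only = yes
    auth users = backuppc
    secrets file = /etc/rsyncd.secrets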

Rich



Re: [BackupPC-users] Best way to backup localhost?

2008-01-22 Thread Rich Rauenzahn


B. Cook wrote:
> Hello All,
>
> I'm setting up a new machine trying out different things..
>
> Do I need to setup sshd/rsync so that the backuppc 'user' can have  
> full access to / ?
>
> Or is there a better, more efficient way?
>
> Thanks in advance.
>
>   

I prefer to let rsyncd handle the privilege elevation by simply having an 
rsyncd daemon listening only on 127.0.0.1, and even then only with an 
rsync password.  I also set the rsyncd share to read only = yes (I can 
always change it temporarily for a restore).   Then I point the backuppc 
client config at 127.0.0.1 for that client.

I've never really liked the sudo suggestions...  they seem fairly 
vulnerable to passing (possibly destructive) extra flags, etc., to the 
underlying rsync command.  But I'm a bit of a security nut.

The downside is that all of the file traffic has to go through 
loopback... but I don't notice it.

Rich



Re: [BackupPC-users] How to manually add a single file to a backed up host

2008-01-15 Thread Rich Rauenzahn



Les Mikesell wrote:

Adam Goryachev wrote:
  

In this case, it no longer matters, and for those that are interested in
how to use rsync to transfer a 20G file full of 0's in a few minutes
over a very slow connection, here is how:

rsync --partial --progress source destination
Then, cancel the transfer after 1% or so.
Then, restart the transfer with the same command.



Wouldn't it have worked to enable ssh compression with the -C option for 
this?
  

And how about rsync's sparse file option?

   -S, --sparse                handle sparse files efficiently

Rich


Re: [BackupPC-users] Update Re: How to speed up backup process?

2008-01-11 Thread Rich Rauenzahn

> the previous post mentioned something about the compression being for 
> the pool.  In fact you can also compress the rsync stream with the -z 
> flag which works great on limited bandwidth connections as long as you 
> have a decent CPU.  The ssh stream
I tried compressing the rsync stream and found the sessions hung, and 
someone on the list (Craig?) thought that the Rsync perl module didn't 
understand the option or something... I'd need to go back in the 
archives to find my post, and can't right now.  Do you have -z working 
in rsync?

> can also be compressed but I would recommend not using ssh compression 
> if you are using rsync compression because the
>   
Also, if you are doing this over ssh, you can turn down the ssh encryption 
to something faster but less secure.  I use the option "-c arcfour" on 
the ssh command line.

Rich




Re: [BackupPC-users] scheduling backups

2008-01-10 Thread Rich Rauenzahn

>>
> This seems to make sense, however when I set it up at first the 
> host(s) could not be backed up because of "host not found" error. 
> Placing hostA_share1 and hostA_share2 and so on  in the 
> /etc/backuppc/hosts file seemed to cause this as they are not real 
> host names and fail an nmblookup query.  Should these names be able to 
> pass an nmblookup query?  Also, pings to the host seemed to fail until 
> I created new dns entries for each hostA_share# on the dns server.  
> After doing so I was able to do a successful backup.  Is this 
> absolutely necessary or should a local update of the /etc/hosts file 
> suffice? ip   fqdn   hostA_share#

If ping doesn't work, then you didn't set the client name setting in 
each config file.  And read the docs referred to earlier (  
http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_clientnamealias_ 
) -- this doesn't work with DHCP set to 1 in the hosts file.

The DHCP flag is a bit of a misnomer... I have DHCP configured to 
register names into my local bind/named, and I do not need the DHCP flag.

Rich




Re: [BackupPC-users] scheduling backups

2008-01-10 Thread Rich Rauenzahn

> Multiple profiles?  I'm not sure I understand.  Would this consist of 
> creating multiple per-host configurations for the same host? If so, 
>   
Yes...

> would there be a specific naming convention?  Is there a way to have 
>   

No...
> backuppc still automatically back them up - aren't these configurations 
> to be named after the host thus allowing only one file per host?
>
>   
See "$Conf{ClientNameAlias}" -- set it to the real hostname that the 
config file pertains to.  So you could have several config files.. such as..

hostA_share1.pl
hostA_share2.pl
hostA_share3.pl

And have all three of those names in the /etc/BackupPC/hosts file 
(without the .pl), but each one would override the hostname by having 
the ClientNameAlias set within it to 'hostA'
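
For example, hostA_share1.pl might contain something like this (a sketch; 
the share path is a placeholder, and the rest of the per-host settings 
stay wherever you normally keep them):

$Conf{ClientNameAlias} = 'hostA';
$Conf{RsyncShareName} = ['/share1'];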

Rich



Re: [BackupPC-users] Backing up the BackupPC server

2008-01-07 Thread Rich Rauenzahn


Nils Breunese (Lemonbit) wrote:
> This is topic is discussed pretty regularly on this mailinglist.  
> Please also search the archives. Because of the heavy use of hardlinks  
> breaking the pool up into smaller batches is not really feasible and  
> indeed rsync doesn't really handle very large numbers of files and  
> hardlinks (because it needs to load the full trees in memory). I  
> believe the most common solution is not to backup the backup server,  
> but to use RAID (and rotate disks offsite) or use a second backup  
> server (so each host is backed up separately by each backup server).
>
> Nils Breunese.
>   

I don't recall seeing 'dump' offered as an alternative for backing up 
the repository in this topic we see so often.  I've been recently 
considering using it for offline storage.  Have I just missed those 
particular threads?

DUMP(8)                 System management commands                 DUMP(8)

NAME
       dump - ext2/3 filesystem backup

SYNOPSIS
       dump [-level#] [-ackMnqSuv] [-A file] [-B records] [-b blocksize]
            [-d density] [-D file] [-e inode numbers] [-E file] [-f file]
            [-F script] [-h level] [-I nr errors] [-jcompression level]
            [-L label] [-Q file] [-s feet] [-T date] [-y]
            [-zcompression level] files-to-dump

       dump [-W | -w]




Re: [BackupPC-users] File::RsyncP installation problem.

2007-12-06 Thread Rich Rauenzahn


Matthew Metzger wrote:
> /bin/sh: cc: command not found
>   
Install a compiler/gcc...?
> make[1]: *** [Digest.o] Error 127
> make[1]: Leaving directory `/home/sysadmin/File-RsyncP-0.68/Digest'
> make: *** [subdirs] Error 2
> ---
>
> I get the same type of errors when trying to install the module from CPAN.
>
> Can anyone shed some light on why I'm having a problem getting RsyncP to 
> version 0.68?
>
> thanks for your time,
>
> Matthew Metzger
>   



Re: [BackupPC-users] (no subject)

2007-12-05 Thread Rich Rauenzahn
Carl Keil wrote:
> I'm sorry for the delay, I'm just now getting a chance to try the "perl
> -ne" suggestion.  What do you mean by backuppc's header info?  How would I
> search for that?
>
> Thanks for your suggestions,
>
>
>   
backuppc stores the compressed backed-up files in compressed blocks with 
an md4 rsync checksum.  For instance, you can't just "gunzip <  
filename" from the pool to examine the contents of the file.  You have 
to use backuppc's zcat utility.   I don't know that the format is 
documented outside of the perl module that does it.  It kind of looks like 
it writes zlib blocks and tweaks the first byte...

Take a look at BackupPC::FileZIO for more details...

http://backuppc.cvs.sourceforge.net/backuppc/BackupPC/lib/BackupPC/FileZIO.pm?revision=1.26&view=markup

Rich



Re: [BackupPC-users] Compression level

2007-12-05 Thread Rich Rauenzahn

John Pettitt wrote:
  

What happens is the newly transferred file is compared against candidates 
in the pool with the same hash value, and if one exists it's just 
linked; the new file is not compressed.   It seems to me that if you 
want to change the compression in the pool, the way to go is to modify 
the BackupPC_compressPool script, which compresses an uncompressed pool, 
to instead re-compress a compressed pool.   There is some juggling that 
goes on to maintain the correct inode in the pool so all the links 
remain valid, and this script already does that. 

  
You're sure?  That isn't my observation.  At least with rsync, the files 
in the 'new' subdirectory of the backup are already compressed, and I 
vaguely recall reading the code and noticing it compresses them during 
the transfer (but on the server side as it receives the data).  After 
the whole rsync session is finished, then the NewFiles hash list is 
compared with the pool.  Identical files (determined by hash code of 
uncompressed data) are then linked to the pool.


If that is all true, then it seems like there is an opportunity to 
compare the size of the existing file in the pool with the new file, and 
keep the smaller one.


Rich


Re: [BackupPC-users] Compression level

2007-12-05 Thread Rich Rauenzahn


Craig Barratt wrote:
> You're right.
>
> Each file in the pool is only compressed once, at the current
> compression level.  Matching pool files is done by comparing
> uncompressed file contents, not compressed files.
>
> It's done this way because compression is typically a lot more
> expensive than uncompressing.  Changing the compression level
> will only apply to new additions to the pool.
>
> To benchmark compression ratios you could remove all the files
> in the pool between runs, but of course you should only do that
> on a test setup, not a production installation.
>
>   

I know backuppc will sometimes need to re-transfer a file (for instance, 
if it is a second copy in another location).  I assume it then 
re-compresses it on the re-transfer, as my understanding is that the 
compression happens as the file is written to disk(?).

Would it make sense to add to the enhancement request list the ability 
to replace the existing file in the pool with the new file contents if 
the newly compressed/transferred file is smaller?  I assume this could 
be done during the pool check at the end of the backup... then if some 
backups use a higher level of compression, the smallest version of the 
file is always preferred (ok, usually preferred, because the transfer is 
avoided with rsync if the file is in the same place as before).

Rich



Re: [BackupPC-users] Compression level

2007-12-04 Thread Rich Rauenzahn
[EMAIL PROTECTED] wrote:
>
> Hello,
>
> I would like to have an information about compression level.
>
> I'm still doing several tests about compression and I would like to 
> have your opinion about something :
> I think that there is a very little difference between level 1 and 
> level 9.
> I tought that I will be more.
>
> For example, with a directory (1GB - 1308 files : excel, word, pdf, 
> bmp, jpg, zip, ...) with compression level :
>
> 9 I have the result : 54.4% compressed (1st size : 1018.4 Mo / 
> compressed size : 464.5 Mo)
> 1 I have the result : 52.8% compressed (1st size : 1018.4 Mo / 
> compressed size : 480.5 Mo)
>
> Do you think that's correct / normal ?
I'll ask this again:  How are you ensuring that each compression test 
isn't reusing the compressed files that are already in the pool?  What 
is your test methodology?

I don't think BackupPC will update the pool with the smaller file even 
though it knows the source was identical, and some tests I just did 
backing up /tmp seem to agree.  Once compressed and copied into the 
pool, the file is not updated with future, more highly compressed copies.  
Does anyone know otherwise?

Rich



Re: [BackupPC-users] Recommended partitioning for BackupPC on Ubuntu server?

2007-11-30 Thread Rich Rauenzahn


Angus Scott-Fleming wrote:
> I'm trying to set up a BackupPC server with an eye to the future so
> that I can expand storage easily, so I want LVM with RAID
> (unfortunately it'll have to be RAID 1, as I can't see that LVM and
> software-RAID 5 work together, but that's another issue :-().  
>
>   

RAID5 and LVM work fine for me (Fedora Core 6).  I created an MD device 
and put LVM on top of it, mounted it at /.RAID, and pointed my own install 
of backuppc at it.  (Or I could have just created a symlink from /var.)
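
Roughly, the setup looked like this (a sketch from memory -- device names, 
the volume group, and disk layout are just examples):

# build the array and layer LVM on top of it
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]1
pvcreate /dev/md0
vgcreate VolGroupRAID /dev/md0
lvcreate -l 100%FREE -n lvol0 VolGroupRAID

# filesystem and mount point
mkfs.ext3 /dev/VolGroupRAID/lvol0
mkdir -p /.RAID
mount /dev/VolGroupRAID/lvol0 /.RAID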

Rich



Re: [BackupPC-users] Scripting Help Needed

2007-11-30 Thread Rich Rauenzahn



Steve Willoughby wrote:

On Thu, Nov 29, 2007 at 03:22:10PM -0800, Carl Keil wrote:
  

Hi Folks,

I'm trying to retrieve some deleted files from a BackUpPC backup.  The
backup was deleted, but not much has been written to the drive since the
backup.  This is an ext3 file system, so I'm forced to use the grep an
unmounted drive method of retrieval.

Does anyone know a way to have grep return everything between two retrieved
strings?  Like everything between "tuxpaint" and "end".  I'm trying to
retrieve PNG files.  Can you describe to me the tools and syntaxes I'd
need to employ to accomplish such a feat?  I'm familiar with the command
line, I've gotten grep to return some interesting results and I know about
piping commands, but I can't quite figure out the steps to extract these
pngs from the raw hard drive.



instead of grep, how about the command:
   perl -ne 'print if /tuxpaint/../end/'

That would be a filter to print the lines from the one matching the
regexp /tuxpaint/ to the one matching /end/.

It'll work as a filter like grep does; either specify filenames at the 
end of the command line or pipe something into it.


  


Are you searching backuppc's ext3 filesystem?  Those PNG backup files 
are likely compressed with backuppc and gzip, so you really want to 
look for backuppc's header information.  Also, realize that 
sufficiently large files will not necessarily be contiguous on the 
unmounted drive.


Here's a thread where someone had some limited success with midnight 
commander in 2002 
http://www.ale.org/archive/ale/ale-2002-08/msg01317.html


Rich


Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-27 Thread Rich Rauenzahn


Gene Horodecki wrote:
> Sounds reasonable... What did you do about the attrib file?  I noticed 
> there is a file called 'attrib' in each of the pool directories with 
> some binary data in it.
>
Nothing... it just contains permissions, etc.  That's why I did another 
full after the move -- then all of the metadata is updated correctly.

Rich




Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-27 Thread Rich Rauenzahn

Gene Horodecki wrote:

I had that problem as well.. so I uhh.. well, I fiddled with the backup
directory on the backuppc server and moved them around so that backuppc
wouldn't see I had moved them remotely.. Not something I would exactly
recommend doing... although it worked.



Great suggestions...  It's too late for me now because the backup (should
be) 95% complete... but I will remember that for next time.

Tell me, are the directories in the pc/ path just regular
directories that have the letter 'f' prepended to them?  Did you have to
reorganize every layer of backups in existence to match, or just one layer?

I'll do this next time.
  


So if I moved, say, /var/www to /home/www, I first made a full backup 
before the move.  Then I moved the www directory on the remote host, 
then went to the backuppc server and moved fwww from /fvar/fwww to 
/fhome/fwww within that latest backup tree.  Then I did another full.   
I think that's what I did anyway... =-)
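
In other words, something like this on the backuppc server (a sketch -- 
the host name and backup number are placeholders, and backups for that 
host should be idle while you do it):

# N is the number of the last full backup of "myhost"
cd /var/lib/backuppc/pc/myhost/N
mkdir -p fhome
mv fvar/fwww fhome/fwww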


Rich


Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-27 Thread Rich Rauenzahn


Gene Horodecki wrote:
> I fiddled with the paths of my biggest backup in order to simplify an
> offsite copy and now because the files aren't "exactly the same" it seems
> it's going to take as long as the very first backup which was 4x as long as
> subsequent fulls.  Unfortunate, because all the files are there.. but they
> need to be sent in full so that BackupPC can calculate the checksums..
>   

I had that problem as well... so, uhh... well, I fiddled with the backup 
directory on the backuppc server and moved things around so that backuppc 
wouldn't see I had moved them remotely.  Not something I would exactly 
recommend doing... although it worked.

Rich



Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-27 Thread Rich Rauenzahn

Les Mikesell wrote:

Gene Horodecki wrote:
  

Is this true?  Why not just send the checksum/name/date/permissions of the
file first and see if it exists already and link it in if it does.  If the
file does not exist by name but there is a checksum for the file, then just
use the vital data to link in the file and you're done.  I'm thinking
Backuppc shouldn't need to send the entire file for that?



You are talking to a stock rsync on the other end.  I don't think it 
knows about the hashing scheme and collision detection that the backuppc 
pooling mechanism uses for filename generation.  

It doesn't, but I did wonder on this list a while ago whether the rsync 
checksum could be stored in the pool with the hash and also kept in a 
lookup table (rsync checksum -> filename) on the backuppc server.  It 
could then be used to see if the file already exists somewhere else in 
the pool before having rsync download it. 

I tried to follow the rsync backuppc perl code to see where that logic 
could be injected, but ultimately gave up.  There may be a limitation on 
the rsync server side, but I couldn't determine that either.


Rich


Re: [BackupPC-users] adjusting compression level

2007-11-20 Thread Rich Rauenzahn
[EMAIL PROTECTED] wrote:
>
> Hello,
>
> I installed BackupPC 3.0.0 on a Fedora Core 6 (server) in order to 
> back up Windows XP client.
> For your information, I use rsyncd.
>
> I'm checking and testing all differents compression levels to compare 
> them.
> The test is based on 3.00 Gb in full backupc. There are more than 400 
> files to back up.
>
> For the moment, I tested with the levels 1, 2, 3, 4, 5, 6 and without 
> compression (i.e. 0) and the difference
> between the levels is very very small.
Are you just running a full backup over and over?   What's your 
methodology?  Doesn't the pool already have a previously compressed copy 
of the file, so that BackupPC throws the new copy away since it is 
already in the pool?

Rich



Re: [BackupPC-users] backuppc and hard drive spin down

2007-11-19 Thread Rich Rauenzahn

Alex Zelter wrote:
> Hi,
> I am running backuppc version 3.0.0-3ubu on Ubuntu gutsy. Ubuntu puts 
> backuppc's backups in /var/lib/backuppc. I have this linked to /backup 
> which is on an external hard drive. This hard drive is set to spin 
> down after 30 minutes of inactivity. backuppc is set to wake up once 
> each night at 2 am to do its backups. It is only backing up the 
> localhost to the external hard drive. My problem is that while 
> backuppc is running, the external hard drive never spins down. [...]

I had a similar problem, but was able to solve it with the following in 
the config.pl

$Conf{WakeupSchedule} = [
  '0',
  '12'
];

to only wake up at midnight and noon.  It sounds like you've already 
done something similar?  How did you set yours to wake up only at 2am?

Maybe use "sar -d" to see if anything is reading/writing to that disk?  
You can use fuser -cv /foo to see what exact processes have files/dirs 
open in that filesystem as well.

Rich





Re: [BackupPC-users] No ping possible, what to do?

2007-11-19 Thread Rich Rauenzahn

I've been using this script, written such that backuppc can parse the 
output. 

#!/usr/bin/perl -w
use strict;
use Net::Ping;

die "usage: $0 port hostname\n" unless (@ARGV == 2);

my $serv = shift @ARGV;
my $host = shift @ARGV;

my $p = Net::Ping->new("tcp", 30);
$p->hires();

if ($serv =~ /^\d+/) {
    $p->{port_num} = $serv;
} else {
    $p->{port_num} = getservbyname($serv, "tcp");
}

my @res = $p->ping($host);

if (@res) {
    my ($succ, $rtt, $ip) = @res;
    if ($succ) {
        my $ms = int($rtt * 1000);
        print "$host [$ip] - $serv - time=$ms ms\n";
    } else {
        exit -1;
    }
}

# ./ping_tcp  http yahoo.com
yahoo.com [66.94.234.13] - 80 - time=12 ms



Re: [BackupPC-users] Skipping directories with .nobackup files (creating dynamic file exclude/include lists)

2007-11-19 Thread Rich Rauenzahn


John Rouillard wrote:
> Hi all:
>
> Our current backup system (hdup) allows the user to prune directories
> from the backups. So I can backup the /home share and have a user
> create
>
>   /home/user/data/lots_of_junk_data_that_doesnt_need_backup/.nobackup.
>
> that file will prevent the backup of
>
>   /home/user/data/lots_of_junk_data_that_doesnt_need_backup
>
>
>   

Have you tried the -F option in rsync?  It seems to work for me.  I added 
-F to the rsync remote options and then dropped a .rsync-filter file into 
the directory.  The contents of .rsync-filter is just one line:

- *

It is a little more work than just 'touch .nobackup'.  On the other 
hand, a cronjob could run an appropriate find that sprinkles these 
wherever a .nobackup file exists, followed by another pass that removes 
any .rsync-filter whose .nobackup has disappeared (maybe not such a good 
idea if others are using rsync filters on the box independently of 
you...).  Anyway, I'd suggest doing this with File::Find in perl -- a 
rough sketch follows.
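
Something like this (untested; the /home starting point is just an 
assumption):

#!/usr/bin/perl -w
# Untested sketch: drop a "- *" .rsync-filter next to every .nobackup,
# then clean up filters whose .nobackup has disappeared.
use strict;
use File::Find;

my $root = '/home';   # assumed starting point

# Pass 1: create missing .rsync-filter files.
find(sub {
    return unless $_ eq '.nobackup';
    my $filter = "$File::Find::dir/.rsync-filter";
    return if -e $filter;
    open(my $fh, '>', $filter) or do { warn "cannot create $filter: $!"; return };
    print $fh "- *\n";    # exclude everything in this directory
    close($fh);
}, $root);

# Pass 2: remove stale .rsync-filter files.
find(sub {
    return unless $_ eq '.rsync-filter';
    unlink($_) unless -e "$File::Find::dir/.nobackup";
}, $root);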

Rich



Re: [BackupPC-users] Backing up servers behind a PPPoEo-connection

2007-10-01 Thread Rich Rauenzahn



Peter Carlsson wrote:
> Hello!
>
> I'm using BackupPC version 2.1.2pl1 on a Debian (stable) server to
> successfully backup itself as well as two other Debian-machines on
> the local network.
>
> What is the best/easiest way to configure BackupPC to also backup
> two other machines (one Debian and on Windows XP) that is on
> another network and connects to the Internet with an DSL provider
> using PPPoE?
>
> I have failed to find information how to do that. Could someone
> please explain what to do or point me to some documentation.
>
>   

I use rsync with an ssh tunnel for Linux.  I use password-less ssh to 
get to the other box (theoretically you could also do that with XP and 
Cygwin, but I've found ssh for XP/Cygwin to be really unreliable as a 
service.)  I prefer to ssh in as an unprivileged user who then opens a 
tunnel to the (configured read-only, listen to localhost only) rsync 
service.

My prebackup command is:

#!/bin/sh
cd /tmp && /usr/bin/ssh -o "Compression yes" -o "CompressionLevel 9" \
    -x -c arcfour -2 -f -N -L 9001:localhost:873 \
    [EMAIL PROTECTED] 1>/dev/null 2>/dev/null &
sleep 5

My postbackup command is:

#!/bin/sh
/usr/bin/pgrep -f '/usr/bin/ssh .* [EMAIL PROTECTED]' | /usr/bin/xargs /usr/bin/kill

And I use this ping_tcp script for checking ssh availability (because 
ICMP is sometimes firewalled, and it is the ssh port we really  need to 
check for.)  This script is intended to have output similar enough to 
/sbin/ping that BackupPC can parse it.

#!/usr/bin/perl -w
use strict;
use Net::Ping;

my $p = Net::Ping->new("tcp", 30);
$p->hires();

$p->{port_num} = getservbyname($ARGV[0], "tcp");
my @res = $p->ping($ARGV[1]);

if(@res) {
    my ($succ, $rtt, $ip) = @res;
    if($succ) {
        my $ms = $rtt * 1000;
        print "$ARGV[1] [$ip] - $ARGV[0] - time=$ms ms\n";
    } else {
        exit -1;
    }
}

My client.pl config file:

$Conf{RsyncdClientPort} = '9001';
$Conf{ClientNameAlias} = 'localhost';
$Conf{RsyncdPasswd} = 'password';

$Conf{PingMaxMsec} = '2000';
$Conf{DumpPreUserCmd} = '/etc/BackupPC/bin/ssh_remote.host.foo';
$Conf{DumpPostUserCmd} = '/etc/BackupPC/bin/kill_ssh_remote.host.foo';

$Conf{PingCmd} = '/etc/BackupPC/bin/ping_tcp ssh remote.host.foo';

And the remote rsyncd.conf:

use chroot=yes
secrets file = /etc/rsyncd.secrets
Hosts allow = 127.0.0.1
Hosts deny = * 

[BackupPC]
uid = root
gid = root
auth users = BackupPC
read only=yes
path = /

I'm going to put a copy of this (quick and dirty/no formatting for now) 
at http://shroop.net/backuppc

Rich




Re: [BackupPC-users] UserCmdCheckStatus not working?

2007-09-21 Thread Rich Rauenzahn



Ambrose Li wrote:
> On 21/09/2007, Rich Rauenzahn <[EMAIL PROTECTED]> wrote:
>> Not familiar with MACs -- but -- shouldn't the mountpoint have
>> permissions such that only the OS can add directories?
>
> That's not exactly how things work on Macs. They have a /tmp-like
> directory /Volumes in which mount points can be created. When
> a drive is connected, the automounter would create a mount point
> that is unique to the current user, then mounts the volume. So it
> really is meaningless to talk about permissions of the mount
> points because they do not exist until the automounter (or
> something else, in this case BackupPC) creates them.
>
> So if something creates a directory that has the same name as
> a mount point, the automounter will mistakenly think that it (the
> automounter) has created that directory for another user, and
> will happily create another mount point (with a slightly different
> name) for the volume. This guarantees that the directory
> created by BackupPC will never be the actual mount point used
> by the automounter.


Ah, then, I'd agree with the other poster -- create a unique .dotfile or 
something on your volume and test for it.  Or set it to be owned by 
backuppc:backuppc and test for that owner.


#!/usr/bin/perl -w
use strict;

my $dir = "/.RAID/BackupPC";
#my $dir = "/.RAID";

# modified find2perl output...

# Build a lookup of uids by name (and by uid) so we can test ownership.
my %uid;
while (my ($name, $pw, $uid) = getpwent) {
    $uid{$name} = $uid{$uid} = $uid;
}

my ($dev, $ino, $mode, $nlink, $uid, $gid);

# "good" only if $dir exists, is a directory, and is owned by backuppc
# (you could test against the numeric uid directly instead).
if ((($dev, $ino, $mode, $nlink, $uid, $gid) = lstat($dir)) &&
    -d _ &&
    ($uid == $uid{'backuppc'}))
{
    print "good\n";
} else {
    print "bad\n";
}

Rich


Re: [BackupPC-users] UserCmdCheckStatus not working?

2007-09-21 Thread Rich Rauenzahn


Kimball Larsen wrote:
> Hey, that's a good idea.  You certainly did not misunderstand the  
> problem, and I believe that will work.  I do wish that there was a  
> way to prevent backuppc from even creating the filesystem in the  
> first place, but having a script clean it up is ok.
>
>   
Not familiar with Macs -- but shouldn't the mountpoint have 
permissions such that only the OS can add directories?

That's how I prevent something like that from happening.  For example, I 
don't want squid writing to the /var/spool/squid mountpoint itself, so I 
make sure /var/spool/squid is rwx only by root.  But the filesystem 
mounted on top of /var/spool/squid is owned by squid:squid.
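
Roughly like this (paths as in the squid example; the filesystem has to 
be unmounted briefly to get at the underlying directory):

umount /var/spool/squid         # expose the underlying mount point
chown root:root /var/spool/squid
chmod 0700 /var/spool/squid     # only root can create anything here
mount /var/spool/squid          # remount (assumes an fstab entry); the fs on top stays squid:squid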

Rich



[BackupPC-users] Backing up TIVO

2007-09-21 Thread Rich Rauenzahn
Now that I have your attention =-)...

I just wanted to report I got it working. 

I got tired of tarring up my /var/hack customizations and ftp'ing them.  
I realized rsync would be more efficient and then thought -- heck, if I 
find rsync for TIVO, I might as well add it to my BackupPC 
environment.  The version of rsync out there 
(http://tivoutils.sourceforge.net/) doesn't support checksum-seed, but 
otherwise it works fine.  Someday maybe I'll learn how to cross compile 
the latest one.

(And of course it doesn't back up the proprietary media partitions -- 
just the OS, / and /var).

Not particularly useful to most of you, but a testament to the 
flexibility of BackupPC...

Rich



Re: [BackupPC-users] Backing up one share uncompressed

2007-09-12 Thread Rich Rauenzahn


Mark Allison wrote:
> Thanks - I haven't tested this but I assumed that compression would be 
> much slower. As all the files are MP3s (already compressed), bzip2 is 
> unlikely to be able to compress it much further. I'm currently using a 
> setting of 5, bzip2.
>
> Thanks for the replies, I'll probably just leave the compression on.
>
> Mark.
>   

Why not just add a separate backup for your mp3 directory (and exclude 
it from the main backup) with compression level 1 via gzip?  You can mix 
compression levels, and if the file is found somewhere else in the pool, 
it will still be matched.
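
A rough sketch of what I mean (host names and paths are made up; the 
second "host" is just an alias back to the same machine):

# In the main host's config: keep compression as-is and skip the mp3 tree.
$Conf{BackupFilesExclude} = { '*' => [ '/media/mp3' ] };

# In a second host entry, e.g. "myhost-mp3.pl", pointing at the same box:
$Conf{ClientNameAlias} = 'myhost';
$Conf{RsyncShareName}  = [ '/media/mp3' ];
$Conf{CompressLevel}   = 1;   # cheap compression for already-compressed media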

I wonder which would be better: explicitly excluding /media/mp3 and 
including it in another backup, or excluding *.jpg *.mp3 *.avi and 
including those in another?  That would make restores more complicated...

Rich





Re: [BackupPC-users] files already in pool are downloaded

2007-09-12 Thread Rich Rauenzahn

> The problem is that BackupPC and rsync use different checksum algorithms.
> This has been discussed many times. I believe there is a specialized client
> being developped (BackupPCd) which may allow such speedups, but, as Les
> says,
>   
>   
What would be the problem with a lookup table mapping rsync checksums to 
pool filenames/pool hash codes?

backuppc+rsync requests the checksum for a file from the remote system
the remote rsync transmits it
backuppc+rsync looks up checksum+filesize in an rsyncsum+filesize => poolsum table
if there is a match, copy or link the file from the pool to the destination file
else download the file

I'm not sure what the risk of duplicate rsync hashes is, but what does 
backuppc do with that possibility during rsync today?  I suppose the 
hash is used more for determining whether a file has changed than for 
deciding that two files are the same?
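
A very rough sketch of the lookup-table side of that idea (the cache 
file name and key format are made up, and a returned pool path would 
still have to be verified against the real pool file before being 
trusted):

#!/usr/bin/perl -w
use strict;
use DB_File;

# Tie a Berkeley DB cache mapping "rsyncsum:size" => pool filename.
my %cache;
tie(%cache, 'DB_File', '/var/lib/backuppc/rsyncsum.db')
    or die "cannot open cache: $!";

# Return a candidate pool file for a remote file's checksum+size, or undef.
sub pool_candidate {
    my ($rsyncsum, $size) = @_;
    my $pool = $cache{"$rsyncsum:$size"};
    return undef unless defined($pool) && -f $pool;   # ignore stale entries
    return $pool;    # caller still lets the rsync block checksums confirm the match
}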

Rich



Re: [BackupPC-users] files already in pool are downloaded

2007-09-11 Thread Rich Rauenzahn



Les Mikesell wrote:
> Rich Rauenzahn wrote:
>> Rob Owens wrote:
>>> My understanding is that with tar and smb, all files are downloaded (and
>>> then discarded if they're already in the pool).  Rsync is smart enough,
>>> though, not to download files already in the pool.
>>>
>>> -Rob
>>
>> I was about to post the same thing.  I moved/renamed some directories
>> around on the server I am backing up, and it is downloading the entire
>> file(s) again.   Is there any interest in having BackupPC w/ rsync check
>> the pool first before downloading?   Is there a reason behind not doing
>> it, or is it just something that hasn't been gotten to yet?
>
> I don't think the remote rsync passes enough information to match the
> pool hashes.  The check is done against files of the same name/location
> from the last backup and when matches are found there, only file
> differences are transferred.


I'm looking through the sources now... I assumed that somehow the 
interface to File::RsyncP could return a checksum to BackupPC -- I can't 
tell whether they are that tightly bound or not.  How/when does 
compression occur?  Ah, I see: it passes an I/O object into RsyncP.  I 
think I'll move this to the devel list =-).


Rich


Re: [BackupPC-users] files already in pool are downloaded

2007-09-11 Thread Rich Rauenzahn


Rob Owens wrote:
> My understanding is that with tar and smb, all files are downloaded (and
> then discarded if they're already in the pool).  Rsync is smart enough,
> though, not to download files already in the pool.
>
> -Rob
>   

I was about to post the same thing.  I moved/renamed some directories 
around on the server I am backing up, and it is downloading the entire 
file(s) again.   Is there any interest in having BackupPC w/ rsync check 
the pool first before downloading?   Is there a reason behind not doing 
it, or is it just something that hasn't been gotten to yet?

Rich

> David Koski wrote:
>   
>> I have been trying to get a good backup with backuppc (2.1.1) but it has been
>> taking days.  I ran a dump on the command line so I could see what is going
>> on and I see the files that are in the pool are being downloaded.  For 
>> example:
>>
>>   pool 700   511/1008039 home/daler/My 
>> Documents/DRAWINGS/Lakeport/Pics/C_03.tif
>>
>> This is a large file and at 750kb/s takes a while.  Is this expected?  I 
>> thought if
>> they are in the pool they do not need to be downloaded.
>>
>> 




Re: [BackupPC-users] wishlist: full backups whenever incrementals get too large

2007-08-21 Thread Rich Rauenzahn

> If this is really the way backuppc does incremental backups, I think backuppc 
> should be a bit more incremental with its incremental backups. Instead of 
> comparing against the last full, it should compare against the last full and 
> incremental backups. This would solve this problem and make backuppc more 
> efficient anyway, AFAIK.
>   
>

Isn't that what $Conf{IncrLevels} is for?
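
For example (the levels here are made up), something like this makes 
each incremental back up changes relative to the most recent lower-level 
backup instead of always the last full:

$Conf{IncrLevels} = [1, 2, 3, 4, 5, 6];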

Rich




Re: [BackupPC-users] pooling of compressed and non-compressed backups

2007-08-21 Thread Rich Rauenzahn

> I'm curious about this as well, but would like to add to the question -- 
> what if I'm backing up some hosts across the internet, and I set the 
> compression to bzip2 -9.  But local hosts on the LAN I set to gzip -4. 
>
> I believe I read that the pool checksums are based on the uncompressed 
> data -- so I would expect that anything common backed up across the 
> internet first will be shared as bzip2, but anything common backed up 
> locally with gzip first would be shared as gzip. 
>
> I'm also assuming it is ok to be mixing the two compression methods in 
> the pools!
>
> Rich
>   

Looks like I am right.  I added a unique file to the bzip2 host, backed 
it up, then copied it to a gzip host, and the file was found in the pool 
during the 2nd backup.  I don't think changing the compression level 
would make a difference either.  I vaguely recall the docs/FAQ/'net 
saying you could increase it later if you started running out of 
space... which was the argument for going with lower compression levels 
at first.

Rich




Re: [BackupPC-users] pooling of compressed and non-compressed backups

2007-08-21 Thread Rich Rauenzahn

Rob Owens wrote:
> Rich Rauenzahn wrote:
>> For example, to compress a 5,861,382 byte mp3 file with bzip2 -9 takes
>> 3.3 seconds.  That's 1,776,176 bytes/sec.
>
> Rich, I just tried bzip'ing an ogg file and found that it got slightly
> larger.  The reason, I believe, is that formats like ogg, mp3, mpg, etc.
> are already compressed.  You might want to run some tests yourself to
> see whether or not it makes sense for you to be compressing your backups.

I only compressed the mp3 as an example of a worst case scenario.  I 
assume it takes the longest to compress since it is not compressible.


Let's test that assumption by compressing my procmail.log (easily 
compressed): 74,416,448 bytes in 84 seconds, or 885,910 bytes/sec.  So 
my assumption was actually wrong -- the incompressible mp3 went through 
bzip2 faster, per byte, than the easily compressed log.
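
For reference, the measurement is just something along these lines (file 
name made up):

time bzip2 -9 -c somefile.mp3 > /dev/null   # throughput = file size / elapsed real time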


But yes, it would be nice if there were an option to disable compression 
for certain filetypes.


Rich



Re: [BackupPC-users] pooling of compressed and non-compressed backups

2007-08-21 Thread Rich Rauenzahn
Les Mikesell wrote:
> Don't you run several concurrent backups? Unless you limit it to the 
> number of CPUs in the server the high compression versions will still 
> be stealing cycles from the LAN backups.
I'm backing up 5 machines.  Only one is on the internet, and the amount 
of CPU time/sec the internet backup takes is very small.

For example, to compress a 5,861,382 byte mp3 file with bzip2 -9 takes 
3.3 seconds.  That's 1,776,176 bytes/sec.  The DSL line pumping the data 
to me is pushing 42,086 bytes/sec, and that includes ethernet/IP/ssh/ssh 
compression overhead.  (hmm, now that I think about it, the real 
transfer could be higher because ssh is compressing, but even if it was 
100k/sec of real data it is still peanuts.)

Does that make the theoretical load on the CPU about 6%, if I got the 
math right?  100*1024/1,776,176 = 0.058.  Checking the current backup, 
yeah, it's using about 2% of a CPU right now.

>> I am using ssh -C as well.  And see my other post about rsync 
>> --compress -- it is broken or something.
>
> It is just not supported by the perl rsync version built into backuppc.
>

Ah -- well, it fails quite silently =-).  I couldn't figure out why the 
same files kept getting transferred over and over again...

Rich




Re: [BackupPC-users] pooling of compressed and non-compressed backups

2007-08-21 Thread Rich Rauenzahn

>
> Compression is done on the server side after the transfer.  What's the 
> point of using different methods?  According to the docs, compressed 
> and uncompressed files aren't pooled but different levels are.  The 
> only way to get compression over the wire is to add the -C option to 
> ssh - and you'll probably want to use rsync if bandwidth matters.
>

Because if I'm transferring the backup at roughly 40 KB/sec across the 
internet, bzip2'ing on the server isn't going to slow down the backup -- 
and slowing the backup down is the main reason for not using higher 
compression.

I am using ssh -C as well.  And see my other post about rsync --compress 
-- it is broken or something.

Rich




Re: [BackupPC-users] pooling of compressed and non-compressed backups

2007-08-21 Thread Rich Rauenzahn

Rob Owens wrote:
> If I have 2 hosts that contain common files, and compression is enabled
> on one but not the other, will these hosts' files ever get pooled? 
>
> What if compression is enabled on both, but different compression levels
> are set?
>
> Thanks
>
> -Rob
>
>
>   
I'm curious about this as well, but would like to add to the question -- 
what if I'm backing up some hosts across the internet, and I set the 
compression to bzip2 -9.  But local hosts on the LAN I set to gzip -4. 

I believe I read that the pool checksums are based on the uncompressed 
data -- so I would expect that anything common backed up across the 
internet first will be shared as bzip2, but anything common backed up 
locally with gzip first would be shared as gzip. 

I'm also assuming it is ok to be mixing the two compression methods in 
the pools!

Rich




[BackupPC-users] rsync --compress broken ?

2007-08-21 Thread Rich Rauenzahn
Whenever I use these options, rsync "seems" to work and transfer 
files but nothing ever seems to actually get written to the backup 
dirs:

$Conf{RsyncArgs} = [  # defaults, except I added the compress flags.
 '--numeric-ids',
 '--perms',
 '--owner',
 '--group',
 '-D',
 '--links',
 '--hard-links',
 '--times',
 '--block-size=2048',
 '--recursive',
 '--checksum-seed=32761',
 '--compress',  # these two are suspicious
 '--compress-level=9'   # these two are suspicious
];

Taking out the --compress and --compress-level fixes it.

I've monitored with lsof and run a manual backup with -v -- the remote 
rsync opens the files and seems to transfer them (tcpdump shows lots of 
traffic), but the files never seem to get written to disk.  They are 
never opened on the BackupPC server (checked with lsof).  A manual backup 
with -v shows no files being processed; a "create d ." is shown, then 
nothing.  I'd hate to fall back to ssh compression, since I've read that 
compression is more efficient at the rsync level.

I don't believe my environment is unusual -- I changed the default 
client to be rsyncd.  Remote and local systems are both Linux, FC6.

Here's the rest of the config for this client:

$Conf{RsyncShareName} = [
 'BackupPC'
];
$Conf{RsyncdPasswd} = '*';

$Conf{RsyncdClientPort} = '9001';
$Conf{ClientNameAlias} = 'localhost';
$Conf{DumpPreUserCmd} = '/etc/rjr/BackupPC/bin/open_ssh_tunnel';

$Conf{BackupFilesExclude} = {
 '*' => [
   '/var/mail/*.xspam',
   '/var/mail/*.xraw',
   '/proc/',
   '/var/named/chroot/proc/',
   '/var/spool/squid/',
   '/sys/',
   '/dev/',
   '/oldboot/',
   '*.iso',
   '*.iso.*',
   '/var/mail/*.xspam.*',
   '/var/mail/*.xraw.*',
   '/media/',
   '/misc/',
   '/net/',
   '/mnt/',
   'Thumbs.db'
 ]
};
$Conf{PingCmd} = '/etc/rjr/BackupPC/bin/ping_tcp_ssh';
$Conf{PingMaxMsec} = '1';
$Conf{DumpPostUserCmd} = '/etc/rjr/BackupPC/bin/kill_ssh_tunnel';

# Since the CPU time to compress will be way shorter than the WAN time:
$Conf{ArchiveComp} = 'bzip2';
$Conf{CompressLevel} = '9';



