Re: [BackupPC-users] Backups very slow after upgrade to squeeze

2012-05-12 Thread Tim Fletcher
On 09/05/12 18:03, Les Mikesell wrote:
> I generally use --one-file-system as an rsync option which will keep 
> it from wandering into /proc and /sys as well as any nfs or iso mounts 
> that are accidentally in the path. Of course if you do that, you have 
> to add explicit 'share' entries for each mount point that you want 
> backed up and be careful to add new ones as needed. 

I would argue that this should be the configuration default, to stop exactly
this sort of thing from happening.
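
Roughly, run by hand the difference looks like this (host and paths below are
only examples; within BackupPC the flag would go into the rsync arguments,
e.g. $Conf{RsyncArgs}, and each extra filesystem then needs its own share
entry in $Conf{RsyncShareName}):

  # stays on the root filesystem only; /proc, /sys, nfs and iso mounts
  # under / are skipped instead of being walked into
  rsync -a --one-file-system / backuphost:/backups/host1/root/

  # ...so every other mounted filesystem has to be listed explicitly
  rsync -a --one-file-system /home backuphost:/backups/host1/home/
  rsync -a --one-file-system /var  backuphost:/backups/host1/var/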

-- 
Tim Fletcher


Re: [BackupPC-users] After OS Upgrade -- NT_STATUS_ACCESS_DENIED

2012-05-12 Thread Brad Morgan
> I just upgraded my server from Ubuntu 11.10 to Ubuntu 12.04 and now
> BackupPC is failing on all hosts with:
> Backup failed on canton (NT_STATUS_ACCESS_DENIED listing \\*)
> Anyone know what trivial / stupid thing I've missed?
 
My first request for help was a bit vague, but I believe I now have a better
idea of what's going on.
 
The smbclient in Ubuntu 11.10 (I'll have to reinstall to get a version number)
returns exactly the same errors as the smbclient (3.6.3) in Ubuntu 12.04, with
one exception: the errors were ignored or skipped by BackupPC under 11.10 but
are fatal under 12.04.
 
I've studied the man page for smbclient and can't find any options other
than -E to change the error behavior.
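
For anyone who wants to reproduce the comparison outside BackupPC, something
like the following should do it (the share name and user are placeholders for
my real ones):

  # report which smbclient is installed
  smbclient -V

  # list the share the same way BackupPC's smb transfer does and watch
  # which NT_STATUS_* messages come back
  smbclient //canton/cdrive -U backupuser -c 'ls'

  # -E only sends messages to stderr instead of stdout; it does not
  # change which errors are raised
  smbclient -E //canton/cdrive -U backupuser -c 'ls'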
 
I am backing up a mixture of Windows XP (SP3) and Windows 7 clients. I would
prefer to continue using smb as the transfer method. Any suggestions on how
to proceed would be appreciated.
 
Regards,
 
Brad


Re: [BackupPC-users] Correct rsync parameters for doing incremental transfers of large image-files

2012-05-12 Thread Les Mikesell
On Sat, May 12, 2012 at 9:17 AM, Andreas Piening wrote:
> I want a backup that gives me the opportunity to get the server back up and 
> running within a few minutes + download time of the image + restore time from 
> partimage.
> It is ok to lose the files created since the backup run last night; I see 
> this more as a "life insurance". The documents are already backed up with 
> daily and weekly backup sets via backuppc.

If you need close to real-time recovery, you need to have some sort of
live clustering with failover.  Just copying the images around will
take hours.  For the less critical things, I normally keep a 'base'
image of the system type (a file for a VM, or a clonezilla image for
hardware) that can be used as a starting point, but I don't update
those very often.  Backuppc just backs up the live files normally for
everything, so for the small (and more common) problems you can grab
individual files easily, and for a disaster you would start with the
nearest image master and do a full restore on top of it, more or less
the same as with hardware.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [BackupPC-users] Correct rsync parameters for doing incremental transfers of large image-files

2012-05-12 Thread Andreas Piening
I want a backup that gives me the opportunity to get the server back up and 
running within a few minutes + download time of the image + restore time from 
partimage.
It is ok to lose the files created since the backup run last night; I see this 
more as a "life insurance". The documents are already backed up with daily and 
weekly backup sets via backuppc.

Anyway, I think I should keep at least one additional copy of the image backup: 
if the latest backup has not been created correctly, I can use the image 
created the night before. It is ok for me to lose 50 GB of backup space if 
I can sleep better.

Your suggestion of rsyncing from the snapshot has its benefits: I can mount 
the NTFS partition (aka the C:\ drive) of the Windows Server 2008 guest via the 
ntfs driver and do the backup from the mountpoint. This way I would save a lot 
of space and get an incremental backup set for, let's say, the last 2 weeks 
almost for free.
But I asked here on the list some weeks ago whether it is reliable to do a 
backup this way without losing any file permissions, shadow copies, file 
attributes or anything like that. I have already done this with Linux servers 
and know that it works perfectly well there (ok, I need to reinstall grub 
manually after the files are restored), but I'm quite unsure about Windows. I 
have not yet tried to create a new device, format it with mkfs.ntfs, sync the 
files back and boot from it. But no one has told me that they have ever 
successfully tried this, or that it should work at all, and I have learned that 
Windows boot drives can be very "fragile".

But I can't believe that I'm the only one who needs to back up virtual 
Windows machines over the network and who is not willing to pay 900 bucks per 
server for an Acronis True Image licence! The only difference there would be 
that Acronis is able to store incremental diffs for an already created backup, 
but after a week or so I would need to do a full backup there too. The 
performance and space efficiency of Acronis are better than with partimage, but 
not "I would spend over 1000 EUR for that" better...

I would already be happy if I could get rsync to make differential transfers of 
my image files, no matter if I waste several gigs of space...
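
Just to make clear what I'm after: outside of backuppc, a plain rsync push of 
one image would look something like this (host and paths are placeholders):

  # over ssh rsync uses its delta algorithm by default, so only changed
  # parts of the uncompressed image should go over the wire; --inplace
  # updates the destination file instead of rebuilding a full temp copy
  rsync -v --inplace /backup-staging/vm1-root.img backuphost:/srv/images/

  # note: for a purely local destination rsync defaults to --whole-file,
  # so --no-whole-file would be needed to force the delta algorithm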

Andreas

On 12.05.2012, at 15:28, Tim Fletcher wrote:

> On 12/05/12 11:57, Andreas Piening wrote:
>> Hi Les,
>> 
>> I already thought about that and I agree that the handling of large image 
>> files is problematic in general. I need to make images of the windows-based 
>> virtual machines to get them back up and running when a disaster happens. If 
>> I go away from backuppc for transferring these images, I don't see any 
>> benefits (maybe because I just don't know of an imaging solution that solves 
>> my problems better).
>> As I already use backuppc to do backups of the data partitions (all linux 
>> based) I don't want my backups to become more complex than necessary.
>> I can live with the amount of hard disk space the compressed images will 
>> consume and the IO while merging the files is acceptable for me, too.
>> I can tell the imaging software (partimage) to cut the image into 2 GB 
>> volumes, but I doubt that this enables effective pooling, since the system 
>> volume I take the image from contains temporary files, profiles, databases 
>> and so on. If every image file has changes (even if only a few megs are 
>> altered), I expect the rsync algorithm to be less effective than when 
>> comparing large files, where it is more likely to have a long "unchanged" 
>> part that is not interrupted by artificial file size boundaries resulting 
>> from the 2 GB volume splitting.
>> 
>> I hope I made my situation clear.
>> If anyone has experience in large image file handling which I may benefit 
>> from, please let me know!
> 
> The real question is what are you trying to do: do you want a backup (ie 
> another single copy of a recent version of the image file) or an archive (ie 
> a series of daily or weekly snapshots of the images as they change)?
> 
> BackupPC is designed to produce archives, mainly of small to medium sized 
> files, and it stores the full file rather than changes (aka deltas), so for 
> large files (multi-gigabyte in your case) that change every backup it is much 
> less efficient.
> 
> To my mind, if you already have backuppc backing up your data partitions and 
> the issue is that you want to back up the raw disk images of your virtual 
> machines' OS disks, the best thing is to snapshot them as you have already 
> set up and then simply rsync that snapshot to another host, which will just 
> transfer the deltas between the disk images. This will leave you with 
> backuppc providing an ongoing archive for your data partitions and a simple 
> rsync backup for your root disks that will at worst mean you lose a day's 
> changes in case of a total failure.
> 
> -- 
> Tim Fletcher


Re: [BackupPC-users] Correct rsync parameters for doing incremental transfers of large image-files

2012-05-12 Thread Tim Fletcher
On 12/05/12 11:57, Andreas Piening wrote:
> Hi Les,
>
> I already thought about that and I agree that the handling of large 
> image files is problematic in general. I need to make images of the 
> windows-based virtual machines to get them back up and running when a 
> disaster happens. If I go away from backuppc for transferring these 
> images, I don't see any benefits (maybe because I just don't know of an 
> imaging solution that solves my problems better).
> As I already use backuppc to do backups of the data partitions (all 
> linux based) I don't want my backups to become more complex than 
> necessary.
> I can live with the amount of hard disk space the compressed images 
> will consume and the IO while merging the files is acceptable for me, too.
> I can tell the imaging software (partimage) to cut the image into 2 GB 
> volumes, but I doubt that this enables effective pooling, since the 
> system volume I take the image from contains temporary files, profiles, 
> databases and so on. If every image file has changes (even if only a 
> few megs are altered), I expect the rsync algorithm to be less 
> effective than when comparing large files, where it is more likely to 
> have a long "unchanged" part that is not interrupted by artificial 
> file size boundaries resulting from the 2 GB volume splitting.
>
> I hope I made my situation clear.
> If anyone has experience in large image file handling which I may 
> benefit from, please let me know!

The real question is what are you trying to do: do you want a backup (ie 
another single copy of a recent version of the image file) or an archive 
(ie a series of daily or weekly snapshots of the images as they change)?

BackupPC is designed to produce archives, mainly of small to medium sized 
files, and it stores the full file rather than changes (aka deltas), so 
for large files (multi-gigabyte in your case) that change every backup it 
is much less efficient.

To my mind, if you already have backuppc backing up your data partitions 
and the issue is that you want to back up the raw disk images of your 
virtual machines' OS disks, the best thing is to snapshot them as you have 
already set up and then simply rsync that snapshot to another host, which 
will just transfer the deltas between the disk images. This will leave 
you with backuppc providing an ongoing archive for your data partitions 
and a simple rsync backup for your root disks that will at worst mean 
you lose a day's changes in case of a total failure.
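
As a rough sketch of that (the VG/LV names, sizes and paths here are made 
up), the nightly job on the virtualisation host could be something like:

  # snapshot the VM's block device so the copy is consistent
  lvcreate --snapshot --size 5G --name vm1-root-snap /dev/vg0/vm1-root

  # dump the snapshot to a plain file, since stock rsync won't read a
  # block device's contents directly
  dd if=/dev/vg0/vm1-root-snap of=/backup-staging/vm1-root.img bs=4M

  # push only the changed parts to the backup host
  rsync -v --inplace /backup-staging/vm1-root.img backuphost:/srv/images/

  lvremove -f /dev/vg0/vm1-root-snap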

-- 
Tim Fletcher


Re: [BackupPC-users] Correct rsync parameters for doing incremental transfers of large image-files

2012-05-12 Thread Andreas Piening
Hi Les,

I already thought about that and I agree that the handling of large image 
files is problematic in general. I need to make images of the windows-based 
virtual machines to get them back up and running when a disaster happens. If I 
go away from backuppc for transferring these images, I don't see any benefits 
(maybe because I just don't know of an imaging solution that solves my problems 
better).
As I already use backuppc to do backups of the data partitions (all linux 
based) I don't want my backups to become more complex than necessary.
I can live with the amount of hard disk space the compressed images will 
consume and the IO while merging the files is acceptable for me, too.
I can tell the imaging software (partimage) to cut the image into 2 GB volumes, 
but I doubt that this enables effective pooling, since the system volume I take 
the image from contains temporary files, profiles, databases and so on. If 
every image file has changes (even if only a few megs are altered), I expect 
the rsync algorithm to be less effective than when comparing large files, where 
it is more likely to have a long "unchanged" part that is not interrupted by 
artificial file size boundaries resulting from the 2 GB volume splitting.

I hope I made my situation clear.
If anyone has experience in large image file handling which I may benefit 
from, please let me know!

Thank you very much,

Andreas Piening

On 12.05.2012, at 06:04, Les Mikesell wrote:

> On Fri, May 11, 2012 at 4:01 PM, Andreas Piening wrote:
>> Hello Backuppc-users,
>> 
>> I am stuck trying to identify the right rsync parameters to handle 
>> large image file backups with backuppc.
>> 
>> The scenario: I use partimage to make LVM-snapshot based full images of the 
>> block devices of my virtual (windows) machines (KVM). I want to copy these 
>> images from the virtualization server to my backup machine running backuppc. 
>> The images are between 40 and 60 Gigs uncompressed each. The time window for 
>> the backup needs to stay outside working hours and is not large enough 
>> to transfer the images over the line every night. I read about rsync's 
>> capability to transfer only the changed parts of a file, using a clever 
>> checksum algorithm to minimize the network traffic. That's what I want.
>> 
>> I tested it by creating an initial backup of one image, created a new one 
>> with only a few megs of changed data and triggered a new backup process. But 
>> I noticed that the whole file was re-transferred. I waited till the end to 
>> be sure about that, and decided it was not the best idea to check 
>> this with a compressed 18 GB image file, but this was my real working data 
>> image and I expected it to just work. Searching for reasons for the 
>> complete re-transmission I ended up in a discussion thread about rsync 
>> backups of large compressed files. The explanation made sense to 
>> me: the compression algorithm can produce a completely different archive 
>> file even if just a few megs of data at the beginning of the file have been 
>> altered, because of recursion and back-references.
>> So I decided to store my image uncompressed, which is about 46 Gigs now. I 
>> found out that I need to add the "-C" parameter to rsync, since data 
>> compression is not enabled by default. Anyway: the whole file was 
>> re-created in the second backup run instead of just transferring the changed 
>> parts, again.
>> 
>> My backuppc option "RsyncClientCmd" is set to "$sshPath -C -q -x -l root 
>> $host $rsyncPath $argList+", which is backuppc's default apart from the 
>> "-C".
>> 
>> Honestly, I don't understand the exact reason for this. There are some 
>> possibilities that may be to blame:
>> 
>> -> partimage does not create a linear backup image file, even if it is 
>> uncompressed
>> -> there is just another rsync parameter I missed which enables 
>> differential transfers of file changes
>> -> rsync examines the file but decides not to use differential updates for 
>> this one because of its size, or just because its creation timestamp is not 
>> the same as the prior one's
>> 
>> Please give me a hint if you've successfully made differential backups of 
>> large image files.
> 
> I'm not sure there is a good way to handle very large files in
> backuppc.  Even if rysnc identifies and transfers only the changes,
> the server is going to copy and merge the unchanged parts from the
> previous file which may take just as long anyway, and it will not be
> able to pool the copies.Maybe you could split the target into many
> small files before the backup.  Then any chunk that is unchanged
> between runs would be skipped quickly and the contents could be
> pooled.
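
(For reference, a minimal sketch of that pre-split step, with an arbitrary
chunk size and made-up paths, might be:

  # cut the uncompressed image into fixed-size pieces; chunks whose bytes
  # did not change between runs stay identical and can be pooled/skipped
  mkdir -p /backup-staging/vm1-root.split
  split --bytes=512M --numeric-suffixes /backup-staging/vm1-root.img \
        /backup-staging/vm1-root.split/vm1-root.img.

and then point backuppc at the directory of chunks instead of the single
image file.)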
> 
> -- 
>  Les Mikesell
>lesmikes...@gmail.com
