Re: [BackupPC-users] Encrypting BackupPC TopDir

2010-02-18 Thread Eric Persson
Les Mikesell wrote:
> tribat wrote:
>> Well, the thing is, I already have a RAID-5 array set up without LVM. It's 
>> just one huge ext3 filesystem, and that's the way I like it. I don't really 
>> like the idea of splitting the available disk space, because I like having one 
>> massive filesystem that can offer me all the unused space on the device when 
>> needed. If the device is split into multiple partitions, then the unused 
>> space is going to be split as well.
>>
>> So that's why I would really like the EncFS to work. I've done all my 
>> testing with the default EncFS settings so that would mean "External 
>> Chaining" has been disabled.
> 
> If you aren't running any other web sites it might work to run httpd as the 
> backuppc user - and change ownership on any other files it needs.
> 

If tribat is running multiple sites, a simple solution would be to use 
apache-mpm-itk (http://mpm-itk.sesse.net/) and set the backuppc user on 
the virtualhost running the BackupPC web GUI. I'm not 100% sure it works 
with the CGI, but it works with mod_php, so the chances are fairly good, 
and it's better than something like suexec.
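
For reference, a minimal sketch of what such a vhost might look like (the
AssignUserID directive is mpm-itk's; the ServerName and ScriptAlias paths
here are placeholders, not taken from anyone's actual setup):

    <VirtualHost *:80>
        ServerName backuppc.example.com
        # mpm-itk: run all requests in this vhost as backuppc:backuppc,
        # so the CGI can read files owned by the backuppc user
        AssignUserID backuppc backuppc
        ScriptAlias /backuppc /usr/share/backuppc/cgi-bin/BackupPC_Admin
    </VirtualHost>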

/eric - new to the list, but mostly reading others' posts ;)



[BackupPC-users] Encrypting BackupPC TopDir

2010-02-18 Thread tribat

I would rather get the EncFS --public mode to work correctly than add 
complexity to my Apache setup.






Re: [BackupPC-users] zero-byte files

2010-02-18 Thread Chris Baker

>>What do you mean by "ignoring"?

It seems that BackupPC is backing them up. However, they did not show up when I
did a restore of an entire data folder. All other files restored fine.

>>Is there anything special/different about those zero-byte files? 
>>(e.g., permissions, ownership, special file types)

I don't think so. The files are generated by our very crappy accounting
system.

>>What transfer method are you using?

I'm using smb.





Re: [BackupPC-users] Odd 'unexpected repeated share name error'

2010-02-18 Thread John Rouillard
On Sun, Feb 14, 2010 at 01:39:23PM -0800, Craig Barratt wrote:
> If you want to try the two fixes, here's a patch against 3.2.0beta1
> (note: I haven't tested this yet - just did this sitting on a plane).
> 
> Craig
> 
> --- bin/BackupPC_dump 2010-01-24 17:30:43.000000000 -0800
> +++ bin/BackupPC_dump 2010-02-14 12:02:14.859375000 -0800
> @@ -623,6 +623,7 @@
>  #
>  # Now backup each of the shares
>  #
> +my $shareDuplicate = {};
>  for my $shareName ( @$ShareNames ) {
>  local(*RH, *WH);
>  
> @@ -632,11 +633,17 @@
>  $shareName = encode("utf8", $shareName);
>  $stat{xferOK} = $stat{hostAbort} = undef;
>  $stat{hostError} = $stat{lastOutputLine} = undef;
> -if ( -d "$Dir/new/$shareName" ) {
> +if ( $shareName eq "" ) {
> +print(LOG $bpc->timeStamp,
> +  "unexpected empty share name skipped\n");
> +next;
> +}
> +if ( $shareDuplicate->{$shareName} ) {
>  print(LOG $bpc->timeStamp,
>"unexpected repeated share name $shareName skipped\n");
>  next;
>  }
> +$shareDuplicate->{$shareName} = 1;
>  
>  UserCommandRun("DumpPreShareCmd", $shareName);
>  if ( $? && $Conf{UserCmdCheckStatus} ) {
> @@ -915,6 +922,10 @@
>  #
>  last;
>  }
> +#
> +# Wait for any child processes to exit
> +#
> +1 while ( wait() >= 0 );
>  }
>  
>  #

I hand-applied the zombie fix around line 915, and that resulted in all
the backups reporting:

  2010-02-18 14:25:57 DumpPreShareCmd returned error status -1... exiting

I rolled back the change. Any ideas?

-- 
-- rouilj

John Rouillard   System Administrator
Renesys Corporation  603-244-9084 (cell)  603-643-9300 x 111



[BackupPC-users] Incremental Seems To Backup Whole System

2010-02-18 Thread Mike Bydalek
Hello.

Recently I've started using BackupPC to back up my file server and am
seeing some things that just don't quite make sense.  Lately
backups have been taking quite some time; in fact, the current one
started on 2/16 @ 11pm and is still running.  I do have a lot of data,
around 330G, but not a whole lot changes on a daily basis.

Here are my latest backups with the times:

Compression performance for files already in the pool and newly
compressed files:

                             Existing Files               New Files
Backup#  Type  Comp Level   Size/MB    Comp/MB    Comp    Size/MB    Comp/MB    Comp
0        full  3            78446.6    45871.8    41.5%   258032.2   155715.6   39.7%
2        incr  3            276482.0   165123.8   40.3%   143.0      70.3       50.8%
3        incr  3            900.7      530.7      41.1%   964.7      469.5      51.3%
4        incr  3            113.5      2.7        97.6%   218.0      30.8       85.9%
5        incr  3            73.4       6.1        91.7%   194.8      32.2       83.5%
6        incr  3            304.3      172.2      43.4%   1735.9     944.0      45.6%
7        incr  3            275955.3   165019.1   40.2%   1337.0     658.0      50.8%
8        incr  3            520.4      249.4      52.1%   672.7      282.5      58.0%
9        incr  3            502.9      243.5      51.6%   587.6      264.1      55.1%
10       incr  3            116.8      3.7        96.8%   201.5      24.3       88.0%
11       incr  3            14.0       3.0        78.7%   244.3      32.8       86.6%
12       incr  3            127.1      5.3        95.9%   82.2       22.3       72.8%
13       incr  3            276989.9   165643.4   40.2%   329.3      40.7       87.7%
14       incr  3            275957.9   165146.0   40.2%   1503.5     615.1      59.1%

My question is: why did backups 13 and 14 back up all that data?  Same
with 2 and 7, for that matter.

Here's the times for the first few backups to give you an idea of the
time it's taking:

Backup#  Type  Filled  Level  Start Date   Duration/mins  Age/days  Server Backup Path
0        full  yes     0      2/9 07:29    1767.6         9.0       /backup/BackupPC/pc/fileserver/0
2        incr  no      1      2/10 23:59   1124.8         7.3       /backup/BackupPC/pc/fileserver/2
3        incr  no      3      2/11 19:00   68.3           6.5       /backup/BackupPC/pc/fileserver/3
4        incr  no      4      2/12 01:00   73.6           6.3       /backup/BackupPC/pc/fileserver/4
5        incr  no      5      2/12 07:00   73.9           6.0       /backup/BackupPC/pc/fileserver/5
6        incr  no      6      2/12 13:00   102.5          5.8       /backup/BackupPC/pc/fileserver/6
7        incr  no      1      2/12 19:00   1097.0         5.5       /backup/BackupPC/pc/fileserver/7

Below is my config.  I'm still messing with the IncrLevels and have a
super short period just to get some increments and all that going.

$Conf{PingMaxMsec} = '200';
$Conf{RsyncShareName} = [
  '/',
  '/data/secondary',
  '/boot'
];
$Conf{BackupFilesExclude} = {
  '*' => [
'/dev',
'/mnt',
'/proc',
'/sys',
'/tmp',
'/var/named/chroot/proc'
  ]
};
$Conf{BlackoutPeriods} = [];
$Conf{IncrKeepCnt} = '12';
$Conf{IncrLevels} = [
  '1',
  '2',
  '3',
  '4',
  '5',
  '6'
];
$Conf{IncrPeriod} = '0.24';
$Conf{RsyncArgs} = [
  '--numeric-ids',
  '--perms',
  '--owner',
  '--group',
  '-D',
  '--links',
  '--hard-links',
  '--times',
  '--block-size=2048',
  '--recursive'
];

I also have logging set to 1, and when looking at the XferLOG I do see
all the files, but I'm seeing a lot of "create d" entries.  I'm not sure
what that column should say when it is actually transferring a file.

Any ideas as to what may be going on?  After the initial backup, I
would expect each increment to only take a short amount of time.

Thanks in advance for any insight!

Regards,
Mike



Re: [BackupPC-users] Incremental Seems To Backup Whole System

2010-02-18 Thread John Rouillard
On Thu, Feb 18, 2010 at 07:51:13AM -0700, Mike Bydalek wrote:
> Recently I've started using BackupPC to back up my file server and am
> seeing some things that just don't quite make sense.  Lately
> backups have been taking quite some time; in fact, the current one
> started on 2/16 @ 11pm and is still running.  I do have a lot of data,
> around 330G, but not a whole lot changes on a daily basis.
> 
> Here are my latest backups with the times:
> 
> Compression performance for files already in the pool and newly
> compressed files:
>
>                              Existing Files               New Files
> Backup#  Type  Comp Level   Size/MB    Comp/MB    Comp    Size/MB    Comp/MB    Comp
> 0        full  3            78446.6    45871.8    41.5%   258032.2   155715.6   39.7%
> 2        incr  3            276482.0   165123.8   40.3%   143.0      70.3       50.8%
> 3        incr  3            900.7      530.7      41.1%   964.7      469.5      51.3%
> 4        incr  3            113.5      2.7        97.6%   218.0      30.8       85.9%
> 5        incr  3            73.4       6.1        91.7%   194.8      32.2       83.5%
> 6        incr  3            304.3      172.2      43.4%   1735.9     944.0      45.6%
> 7        incr  3            275955.3   165019.1   40.2%   1337.0     658.0      50.8%
> 8        incr  3            520.4      249.4      52.1%   672.7      282.5      58.0%
> 9        incr  3            502.9      243.5      51.6%   587.6      264.1      55.1%
> 10       incr  3            116.8      3.7        96.8%   201.5      24.3       88.0%
> 11       incr  3            14.0       3.0        78.7%   244.3      32.8       86.6%
> 12       incr  3            127.1      5.3        95.9%   82.2       22.3       72.8%
> 13       incr  3            276989.9   165643.4   40.2%   329.3      40.7       87.7%
> 14       incr  3            275957.9   165146.0   40.2%   1503.5     615.1      59.1%
> 
> My question is: why did backups 13 and 14 back up all that data?  Same
> with 2 and 7, for that matter.

What level are your incremental backups? If backup 2 was at level 1
and backup 7 was at level 1 (you use levels 1 2 3 4 5 6) and backup 13
is back at level 1, that's kind of what I would expect, since level 1
backs up everything since the last full.

However, 14 should be quite a bit less unless it was also a level 1.
 
> Below is my config.  I'm still messing with the IncrLevels and have a
> super short period just to get some increments and all that going.
 [...]
> $Conf{IncrLevels} = [
>   '1',
>   '2',
>   '3',
>   '4',
>   '5',
>   '6'
> ];

-- 
-- rouilj

John Rouillard   System Administrator
Renesys Corporation  603-244-9084 (cell)  603-643-9300 x 111



[BackupPC-users] Error when attempting to backup Windows 7 client PC network shares using smb transport method

2010-02-18 Thread Christian Tye
Hello,

 I am attempting to back up several network folder shares on a Windows client
PC running the Windows 7 Home Premium 32-bit OS using the smb transport
method, and I am receiving the errors below on *ANY* of the shares that
I add to be backed up by BackupPC. I have had no issues backing up many
different versions of Windows, including Windows 2000, Windows XP, and
Windows Vista. This is the first Windows 7 client PC in our company to be
added to BackupPC, and I was wondering whether there have been any changes
in Windows 7 that would prevent it from being backed up successfully, or
whether Windows 7 is even compatible with BackupPC. I have checked the
permissions on the shared folders of the client PC and they seem to be
OK, but I could be wrong. Can anyone suggest/recommend what the permissions
should be on the shared drives/folders? I wasn't sure if this issue is
related to a Samba config setting or if I just have the permissions on the
shared drives/folders wrong. Any help or comments would be greatly
appreciated, and if any additional info is needed, please just let me know.
Thanks.

 Samba version installed on backup server:  3.2.7-11.4.1

BackupPC version installed on backup server:  3.1.0
Client PC name: 'djwc'
Here is the output from my XferLOG:

  Contents of file /data/BackupPC/pc/djwc/XferLOG.bad.z, modified 2010-02-18
08:01:43

Running: /usr/bin/smbclient djwc\\D -U John\ Conner -E -d 1 -c
tarmode\ full -Tc -
full backup started for share D
Xfer PIDs are now 30935,30934
session setup failed: SUCCESS - 0
session setup failed: NT_STATUS_OK
tarExtract: Done: 0 errors, 0 filesExist, 0 sizeExist, 0
sizeExistComp, 0 filesTotal, 0 sizeTotal
Got fatal error during xfer (No files dumped for share D)
Backup aborted (No files dumped for share D)
Not saving this as a partial backup since it has fewer files than the
prior one (got 0 and 0 files versus 0)

Here is the output from the '.pl' file for the Windows 7 client PC:

  djwc.pl file:

 $Conf{SmbShareName} = ['D', 'F', 'G', 'H', 'I', 'C'];

$Conf{XferMethod} = 'smb';

$Conf{SmbSharePasswd} = 'morph';

$Conf{SmbShareUserName} = 'John Conner';

$Conf{FullKeepCnt} = [4, 0, 4, 0, 0, 4];


Re: [BackupPC-users] Error when attempting to backup Windows 7 client PC network shares using smb transport method

2010-02-18 Thread Les Mikesell
On 2/18/2010 1:31 PM, Christian Tye wrote:
> Hello,
>
>   I am attempting to back up several network folder shares on a Windows
> client PC running the Windows 7 Home Premium 32-bit OS using the smb
> transport method, and I am receiving the errors below on _ANY_ of
> the shares that I add to be backed up by BackupPC. I have had no issues
> backing up many different versions of Windows, including Windows 2000,
> Windows XP, and Windows Vista. This is the first Windows 7 client PC in
> our company to be added to BackupPC, and I was wondering whether there
> have been any changes in Windows 7 that would prevent it from being
> backed up successfully, or whether Windows 7 is even compatible with
> BackupPC. I have checked the permissions on the shared folders of the
> client PC and they seem to be OK, but I could be wrong. Can anyone
> suggest/recommend what the permissions should be on the shared
> drives/folders? I wasn't sure if this issue is related to a Samba config
> setting or if I just have the permissions on the shared drives/folders
> wrong. Any help or comments would be greatly appreciated, and if any
> additional info is needed, please just let me know. Thanks.
>
> Samba version installed on backup server:  3.2.7-11.4.1

I've seen some changelog entries on recent Samba updates that mention 
fixing compatibility with Windows 7, but I don't recall exactly which 
version or what it fixed.  You should be able to connect to a share 
manually with smbclient, using the same credentials, to see what works.  
It looks like you are connecting but can't see any files.
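
For example, something along these lines (a sketch using the share and
account from the earlier post; substitute whatever your setup actually
uses):

    smbclient //djwc/D -U 'John Conner' -d 1

If the connection succeeds, an "ls" at the "smb: \>" prompt should list
the files BackupPC is failing to see.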

-- 
   Les Mikesell
lesmikes...@gmail.com



[BackupPC-users] Backup copied files

2010-02-18 Thread Chris Owen
Hi Guys

I copied some files from our old file server. The problem I have got  
is that the files that were copied are not being backed up.

I did read somewhere that this is how it's meant to work, but I am unable  
to find how to change this.

I am sure I have just missed something.

Many Thanks

Chris Owen
Sent from my iPhone



Re: [BackupPC-users] Backup copied files

2010-02-18 Thread Les Mikesell
On 2/18/2010 4:02 PM, Chris Owen wrote:
> Hi Guys
>
> I copied some files from our old file server. The problem I have got
> is that the files that were copied are not being backed up.
>
> I did read somewhere that this is how it's meant to work, but I am unable
> to find how to change this.
>
> I am sure I have just missed something.

What xfer type are you using and has a full happened since the files 
were copied?  The tar and smb methods use only the file timestamps to 
determine what to take in incrementals, so if your file copy preserved 
old timestamps they will be skipped until the next full.  If you use 
rsync or rsyncd they should be picked up on the next run, incremental or 
full.
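
If waiting for the next full isn't acceptable, one workaround (a sketch,
and only if resetting modification times on the copied tree is acceptable
for your data; the path is a placeholder) is to touch the files so that
timestamp-based incrementals pick them up:

    find /path/to/copied/files -type f -exec touch {} +

With rsync or rsyncd this is unnecessary; as noted above, they are picked
up on the next run either way.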

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Check-if-alive-pings alternatives

2010-02-18 Thread Matthias Meyer
Sorin Srbu wrote:

>>-Original Message-
>>From: Matthias Meyer [mailto:matthias.me...@gmx.li]
>>Sent: Thursday, February 11, 2010 11:35 PM
>>To: backuppc-users@lists.sourceforge.net
>>Subject: Re: [BackupPC-users] Check-if-alive-pings alternatives
>>
>>Sorin Srbu wrote:
>>> Short of making the router visible on the network for pings, is there
>>> any other way to circumvent this problem? Maybe connecting to the
>>> ssh-port or something? Ideas and pointers are greatly appreciated!
>>
>>It isn't necessary that BackupPC use ping.
>>It is configurable via $Conf{PingCmd}.
>>My clients start an ssh tunnel to my server, and my PingCmd checks for the
>>established connection with netstat.
> 
> Do you use any particular switches with that then?
> 
Yes, within my "ping-command".
br
Matthias
-- 
Don't Panic
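
For illustration only (Matthias's actual command isn't shown): BackupPC
executes commands like $Conf{PingCmd} directly, without a shell, so the
pipeline belongs in a small wrapper script. The script name and the
netstat match pattern below are made up for the sketch:

    # in config.pl; BackupPC substitutes $host, exit status 0 means "alive"
    $Conf{PingCmd} = '/usr/local/bin/tunnel-up $host';

    # /usr/local/bin/tunnel-up (hypothetical wrapper script):
    #!/bin/sh
    # succeed only while a connection involving this client is established
    netstat -tn | grep "$1" | grep -q ESTABLISHED

Since a check like this prints no round-trip time for BackupPC to parse,
you'll likely want $Conf{PingMaxMsec} left at 0/unset.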




Re: [BackupPC-users] Incremental Seems To Backup Whole System

2010-02-18 Thread Mike Bydalek
On Thu, Feb 18, 2010 at 12:04 PM, John Rouillard
 wrote:
> On Thu, Feb 18, 2010 at 07:51:13AM -0700, Mike Bydalek wrote:
>> My question is: why did backups 13 and 14 back up all that data?  Same
>> with 2 and 7, for that matter.
>
> What level are your incremental backups? If backup 2 was at level 1
> and backup 7 was at level 1 (you use levels 1 2 3 4 5 6) and backup 13
> is back at level 1, that's kind of what I would expect, since level 1
> backs up everything since the last full.
>
> However, 14 should be quite a bit less unless it was also a level 1.
>
>> Below is my config.  I'm still messing with the IncrLevels and have a
>> super short period just to get some increments and all that going.
>  [...]
>> $Conf{IncrLevels} = [
>>   '1',
>>   '2',
>>   '3',
>>   '4',
>>   '5',
>>   '6'
>> ];
>

After re-reading the documentation for {IncrLevels}, the
configuration settings are starting to make sense.  The only question
I have left is: does creating a new "full" backup *have* to do the
entire full backup again?  Can't it just perform an increment and
merge it to create a full?  The reason I ask is that I'm planning on moving
this server off-site, so it'll go over a WAN.  Sending 250G over a 1M
connection every week or two doesn't sound fun!  Is this what
$Conf{IncrFill} is supposed to handle?

What I want is to basically perform a backup every day and keep 30
days of backups without doing another 'full' backup.  I don't really
care how many 'full' backups I have as long as I can restore from 29
days ago.  Would these settings do the trick for that?

$Conf{FullPeriod}  = 30;
$Conf{IncrPeriod}  = 1;
$Conf{IncrKeepCnt} = 30;
$Conf{IncrLevels}  = [1, 2, 3, 4, 5, 6 .. 30];
$Conf{IncrFill} = 1;

This may start to get off topic, so I can start a new thread if needed.

Thanks for your help!

Regards,
Mike



Re: [BackupPC-users] Incremental Seems To Backup Whole System

2010-02-18 Thread Kameleon
I can speak from experience on the matter of a slow link back to the
BackupPC server. We have multiple sites that we back up to a central
BackupPC server. Two of the sites have a 256k upload and two others are
T1s. The only issue is getting the initial full. What I did for the two
local (256k) sites was take the BackupPC server to the site and run the
initial full backup. After that, everything has been able to run without
issue over any link we have. So in that regard you should be fine.



On 2/18/10, Mike Bydalek  wrote:
> On Thu, Feb 18, 2010 at 12:04 PM, John Rouillard
>  wrote:
>> On Thu, Feb 18, 2010 at 07:51:13AM -0700, Mike Bydalek wrote:
>>> My question is: why did backups 13 and 14 back up all that data?  Same
>>> with 2 and 7, for that matter.
>>
>> What level are your incremental backups? If backup 2 was at level 1
>> and backup 7 was at level 1 (you use levels 1 2 3 4 5 6) and backup 13
>> is back at level 1, that's kind of what I would expect, since level 1
>> backs up everything since the last full.
>>
>> However, 14 should be quite a bit less unless it was also a level 1.
>>
>>> Below is my config.  I'm still messing with the IncrLevels and have a
>>> super short period just to get some increments and all that going.
>>  [...]
>>> $Conf{IncrLevels} = [
>>>   '1',
>>>   '2',
>>>   '3',
>>>   '4',
>>>   '5',
>>>   '6'
>>> ];
>>
>
> After re-reading the documentation for {IncrLevels} again the
> configuration settings are starting to make sense.  The only question
> I have left is, does creating a new "full" backup *have* to do the
> entire full backup again?  Can't it just perform an increment and
> merge it to create a full?  The reason I ask is I'm planning on moving
> this server off-site so it'll go over a WAN.  Sending 250G over a 1M
> connection every week or two doesn't sound fun!  Is this what
> $Conf{IncrFill} is supposed to handle?
>
> What I want is to basically perform a backup every day and keep 30
> days of backups without doing another 'full' backup.  I don't really
> care how many 'full' backups I have as long as I can restore from 29
> days ago.  Would these settings do the trick for that?
>
> $Conf{FullPeriod}  = 30;
> $Conf{IncrPeriod}  = 1;
> $Conf{IncrKeepCnt} = 30;
> $Conf{IncrLevels}  = [1, 2, 3, 4, 5, 6 .. 30];
> $Conf{IncrFill} = 1;
>
> This may start to get off topic, so I can start a new thread if needed.
>
> Thanks for your help!
>
> Regards,
> Mike
>

-- 
Sent from my mobile device



Re: [BackupPC-users] Incremental Seems To Backup Whole System

2010-02-18 Thread John Rouillard
On Thu, Feb 18, 2010 at 06:06:15PM -0700, Mike Bydalek wrote:
> On Thu, Feb 18, 2010 at 12:04 PM, John Rouillard
>  wrote:
> > On Thu, Feb 18, 2010 at 07:51:13AM -0700, Mike Bydalek wrote:
> >> My question is: why did backups 13 and 14 back up all that data?  Same
> >> with 2 and 7, for that matter.
> >
> > What level are your incremental backups? If backup 2 was at level 1
> > and backup 7 was at level 1 (you use levels 1 2 3 4 5 6) and backup 13
> > is back at level 1, that's kind of what I would expect, since level 1
> > backs up everything since the last full.
> >
> > However, 14 should be quite a bit less unless it was also a level 1.
> >
> >> Below is my config.  I'm still messing with the IncrLevels and have a
> >> super short period just to get some increments and all that going.
> >  [...]
> >> $Conf{IncrLevels} = [
> >>   '1',
> >>   '2',
> >>   '3',
> >>   '4',
> >>   '5',
> >>   '6'
> >> ];
> >
> 
> After re-reading the documentation for {IncrLevels} again the
> configuration settings are starting to make sense.  The only question
> I have left is, does creating a new "full" backup *have* to do the
> entire full backup again?  Can't it just perform an increment and
> merge it to create a full?  The reason I ask is I'm planning on moving
> this server off-site so it'll go over a WAN.  Sending 250G over a 1M
> connection every week or two doesn't sound fun! 

If you are using rsync, it will only transfer new/changed data.
If you are using tar, I think it transfers everything. I assume you
aren't using smb across the internet 8-). FTP will transfer
everything.
 
> Is this what $Conf{IncrFill} is supposed to handle?

No, that controls what the tree in the storage directory looks like.
Normally an incremental on disk (under the pc/hostname/number
directory) consists only of new files. The merging of multiple
incremental backups and fulls is done by the web interface, so by
browsing backup 27 you can restore the /etc/passwd that was backed up in
run 0.

If you want to sync an entire backup tree offsite (e.g. for disaster
recovery under another backuppc instance) and you sync an incremental
you will be missing most of the files. If you sync a filled
incremental, however, you actually have a merged copy (on disk) of all
the files on the system from the prior backups. A copy of the merged
incremental can be used to restore a system.

> What I want is to basically perform a backup every day and keep 30
> days of backups without doing another 'full' backup.  I don't really
> care how many 'full' backups I have as long as I can restore from 29
> days ago.  Would these settings do the trick for that?
> 
> $Conf{FullPeriod}  = 30;
> $Conf{IncrPeriod}  = 1;
> $Conf{IncrKeepCnt} = 30;
> $Conf{IncrLevels}  = [1, 2, 3, 4, 5, 6 .. 30];
> $Conf{IncrFill} = 1;

They will, but to restore backuppc will have to merge the files from
30 incrementals. This could slow down the web interface or
BackupPC_zip/tar generation. If you do something like (untested):

> $Conf{FullPeriod}  = 8;
> $Conf{FullKeepCnt} = 4;
> $Conf{IncrPeriod}  = 1;
> $Conf{IncrKeepCnt} = 28;
> $Conf{IncrLevels}  = [1, 2, 3, 4, 5, 6, 7];
> $Conf{IncrFill} = 0;

you should have a full every 8 days, and you keep 4 of them, so you get
32 days of coverage. You do incrementals between every full and keep
28 of them (which again gives 32 days of coverage: 28 incrementals + 4
fulls). But to restore, you only have at most 8 backups that have to be
merged together to create a valid restore.

Also, an advantage to this is that you get fulls more often, and these
fulls should take less time to run than the full done every 30
days. E.g., assume you change 2GB of data every 8 days. Each full will
need to sync only 2GB of data. In your original scheme the full would
have to move ~8GB of data (4 x 2GB). If you have a defined time window in
which a backup must complete, you need to schedule your fulls so that
they can complete within the window, and having more frequent fulls
helps with that.

-- 
-- rouilj

John Rouillard   System Administrator
Renesys Corporation  603-244-9084 (cell)  603-643-9300 x 111



Re: [BackupPC-users] Incremental Seems To Backup Whole System

2010-02-18 Thread Les Mikesell
Mike Bydalek wrote:
> 
> 
> After re-reading the documentation for {IncrLevels} again the
> configuration settings are starting to make sense.  The only question
> I have left is, does creating a new "full" backup *have* to do the
> entire full backup again?  Can't it just perform an increment and
> merge it to create a full?  The reason I ask is I'm planning on moving
> this server off-site so it'll go over a WAN.  Sending 250G over a 1M
> connection every week or two doesn't sound fun!  Is this what
> $Conf{IncrFill} is supposed to handle?

Why don't you just do fulls more often?  With rsync they don't transfer any
more data than an incremental, and they always become the comparison for the
next run.  The only real difference is that a full reads everything on the
target system, but over a WAN that won't make it much slower.

-- 
   Les Mikesell
lesmikes...@gmail.com




Re: [BackupPC-users] Incremental Seems To Backup Whole System

2010-02-18 Thread Timothy J Massey
Mike Bydalek  wrote on 02/18/2010 
08:06:15 PM:

> After re-reading the documentation for {IncrLevels} again the
> configuration settings are starting to make sense.  The only question
> I have left is, does creating a new "full" backup *have* to do the
> entire full backup again?  Can't it just perform an increment and
> merge it to create a full?  The reason I ask is I'm planning on moving
> this server off-site so it'll go over a WAN.  Sending 250G over a 1M
> connection every week or two doesn't sound fun!  Is this what
> $Conf{IncrFill} is supposed to handle?

Not really.  What is supposed to handle that is the type of backup 
transfer method you're using.

You really, *really* want to use rsync or rsyncd to do this.  In that 
case, the difference between a full and an incremental bandwidth-wise is 
negligible because of rsync's bandwidth-saving properties.

Without rsync, fulls will be nearly impossible no matter *what* your 
incremental count is.  With rsync, you don't have to do anything crazy: 
just use the default settings.

> What I want is to basically perform a backup every day and keep 30
> days of backups without doing another 'full' backup.  I don't really
> care how many 'full' backups I have as long as I can restore from 29
> days ago.  Would these settings do the trick for that?

If you're using rsync, don't bother.  Just keep the typical 
full/incremental settings; you only have to change the IncrKeepCnt to 26 
and the FullKeepCnt to 4 (or keep IncrKeepCnt at 30 and the FullKeepCnt at 
4 if you have the space).
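
Expressed as config, that's roughly the following (the keep counts come
straight from the paragraph above; the periods shown are simply the stock
defaults, i.e. daily incrementals and roughly weekly fulls):

    $Conf{XferMethod}  = 'rsync';
    $Conf{FullPeriod}  = 6.97;   # default: a full about once a week
    $Conf{IncrPeriod}  = 0.97;   # default: incrementals daily
    $Conf{FullKeepCnt} = 4;
    $Conf{IncrKeepCnt} = 26;     # or 30, if you have the space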

The initial sync is the doozy.  People have all kinds of ways of doing 
this:  moving the backup server to the local network of the host for the 
first full, doing it a piece at a time, or just waiting the week or so 
it would take to do the backup.  Once the first full is done, the rest 
should be fine, depending on your amount of changed data per week.  Now, 
if you're changing 10GB of data per day, any backup is going to be 
difficult.  But assuming reasonable deltas, this will work perfectly.

Tim Massey




Re: [BackupPC-users] Incremental Seems To Backup Whole System

2010-02-18 Thread Craig Barratt
Mike,

> Backup#  Type  Comp Level   Size/MB    Comp/MB    Comp    Size/MB    Comp/MB    Comp
> 0        full  3            78446.6    45871.8    41.5%   258032.2   155715.6   39.7%
> 2        incr  3            276482.0   165123.8   40.3%   143.0      70.3       50.8%

Notice that the first full is 78GB, but the incr is 276GB.

I would guess you added shares, directories or changed excludes after the
full, but before the incremental.  Or a large amount of data was added
between these two backups.

> 3        incr  3            900.7      530.7      41.1%   964.7      469.5      51.3%
> 4        incr  3            113.5      2.7        97.6%   218.0      30.8       85.9%
> 5        incr  3            73.4       6.1        91.7%   194.8      32.2       83.5%
> 6        incr  3            304.3      172.2      43.4%   1735.9     944.0      45.6%
> 7        incr  3            275955.3   165019.1   40.2%   1337.0     658.0      50.8%
> 8        incr  3            520.4      249.4      52.1%   672.7      282.5      58.0%
> 9        incr  3            502.9      243.5      51.6%   587.6      264.1      55.1%
> 10       incr  3            116.8      3.7        96.8%   201.5      24.3       88.0%
> 11       incr  3            14.0       3.0        78.7%   244.3      32.8       86.6%
> 12       incr  3            127.1      5.3        95.9%   82.2       22.3       72.8%
> 13       incr  3            276989.9   165643.4   40.2%   329.3      40.7       87.7%
> 14       incr  3            275957.9   165146.0   40.2%   1503.5     615.1      59.1%
> 
> My question is: why did backups 13 and 14 back up all that data?  Same
> with 2 and 7, for that matter.
> 
> Here's the times for the first few backups to give you an idea of the
> time it's taking:
> 
> Backup#  Type  Filled  Level  Start Date   Duration/mins  Age/days  Server Backup Path
> 0        full  yes     0      2/9 07:29    1767.6         9.0       /backup/BackupPC/pc/fileserver/0
> 2        incr  no      1      2/10 23:59   1124.8         7.3       /backup/BackupPC/pc/fileserver/2
> 3        incr  no      3      2/11 19:00   68.3           6.5       /backup/BackupPC/pc/fileserver/3
> 4        incr  no      4      2/12 01:00   73.6           6.3       /backup/BackupPC/pc/fileserver/4
> 5        incr  no      5      2/12 07:00   73.9           6.0       /backup/BackupPC/pc/fileserver/5
> 6        incr  no      6      2/12 13:00   102.5          5.8       /backup/BackupPC/pc/fileserver/6
> 7        incr  no      1      2/12 19:00   1097.0         5.5       /backup/BackupPC/pc/fileserver/7

You can see that incrementals 2 & 7 are level 1 (4th column).  Since the
full (#0) has a much smaller set of files, those level 1 incrementals
are backing up a lot of data, and a lot more than the original full.

Bottom line: you need to do a full backup.  It's going to take a while
(but no longer than the incrementals).  After that, the future fulls
and incrementals will be a lot faster.

Doing a full backup is good practice after you make a
configuration change that significantly changes what is being
backed up, or after you add a large amount of data.

Craig
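
One way to kick off that full right away, rather than waiting for the
scheduler (a sketch: the install path varies by distribution, the hostname
here is simply passed twice for the host and hostIP arguments, and the
trailing 1 requests a full rather than an incremental; check
BackupPC_serverMesg on your own system):

    su backuppc -s /bin/sh -c \
      "/usr/share/backuppc/bin/BackupPC_serverMesg backup fileserver fileserver backuppc 1"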



Re: [BackupPC-users] experiences with very large pools?

2010-02-18 Thread Ralf Gross
Chris Robertson wrote:
> Ralf Gross wrote:
> > Hi,
> >
> > I'm faced with the growing storage demands in my department. In the
> > near future we will need several hundred TB. Mostly large files. ATM
> > we already have 80 TB of data which gets backed up to tape.
> >
> > Providing the primary storage is not the big problem. My biggest
> > concern is the backup of the data. One solution would be using a
> > NetApp solution with snapshots. On the other hand is this a very
> > expensive solution, the data will be written once, but then only read
> > again. Short: it should be a cheap solution, but the data should be
> > backed up. And it would be nice if we could abandon tape backups...
> >
> > My idea is to use some big RAID 6 arrays for the primary data, create
> > LUNs in slices of max. 10 TB with XFS filesystems.
> >
> > Backuppc would be ideal for backup, because of the pool feature (we
> > already use backuppc for a smaller amount of data).
> >
> > Does anyone have experience with backuppc and a pool size of >50 TB? I'm
> > not sure how well this will work. I see that backuppc needs 45h to
> > back up 3.2 TB of data right now, mostly small files.
> >
> > I don't like very large filesystems, but I don't see how this will
> > scale with either multiple backuppc servers and smaller filesystems
> > (well, more than one server will be needed anyway, but I don't want to
> > run 20 or more servers...) or (if possible) with multiple backuppc
> > instances on the same server, each with its own pool filesystem.
> >
> > So, anyone using backuppc in such an environment?
> >   
> 
> In one way, and compared to some, my backup set is pretty small (the pool is 
> 791.45GB).  In another dimension, I think it is one of the larger ones 
> (comprising 20874602 files).  The breadth of my pool leads to...
> 
> -bash-3.2$ df -i /data/
> Filesystem            Inodes     IUsed       IFree  IUse% Mounted on
> /dev/drbd0        1932728448  47240613  1885487835     3% /data
> 
> ...nearly 50 million inodes used (so somewhere close to 30 million hard 
> links).  XFS holds up surprisingly well to this abuse*, but the strain 
> shows.  Traversing the whole pool takes three days.  Attempting to grow 
> my tail (the number of backups I keep) causes serious performance 
> degradation as I approach 55 million inodes.
> 
> Just an anecdote to be aware of.

I think I have to look for a different solution; I just can't imagine a
pool with > 10 TB.

 
> * I have recently taken my DRBD mirror off-line and copied the BackupPC 
> directory structure to both XFS-without-DRBD and an EXT4 file system for 
> testing.  Performance of the XFS file system was not much different 
> with or without DRBD (a fat fiber link helps there).  The first 
> traversal of the pool on the EXT4 partition is about 66% complete 
> after about 96 hours.

nice ;)

Ralf
