Re: [BackupPC-users] Backup only new file(s)

2009-06-11 Thread Adam Goryachev

Adam Goryachev wrote:
> Filipe Brandenburger wrote:
>> Hi,
>
>> On Thu, Jun 11, 2009 at 14:13, Les Mikesell wrote:
>>> Mirco Piccin wrote:
 Anyway, each daily file is quite similar to the previous one, so rsync
 (or custom script) should be the better way to do the job.
>>> That won't help unless each file is named the same as the previous one.
>> You can try to use the "-y" or "--fuzzy" option to rsync (at least
>> rsync 3) to implement this.
>
>> Quoting from the man page: "-y, --fuzzy: This option tells rsync that
>> it should look for a basis file for any destination file that  is
>> missing. The current algorithm looks in the same directory as the
>> destination file for either a file that has an identical size and
>> modified-time, or a similarly-named file. If found, rsync uses the
>> fuzzy basis file to try to speed up the transfer."
>
> I'm assuming this doesn't help with backuppc because of the whole perl
> module thing? It would be interesting to see how "fuzzy" the filenames
> can be?

BTW, what is the possibility of having backuppc request just the first
100k (or whatever portion of a file is needed) to calculate the pool
checksum, then check whether the file already exists in the pool? Then
we wouldn't need to re-download (for example) the 30M linux kernel
package, or the 300M windows sp3 file, etc... This would also solve the
issue of renaming files/folders... Can the rsync protocol handle sending
just the first portion of a file?

Of course the full file checksum would need to match as well :) Probably,
if we find a match in the pool, we would re-start the transfer with this
target file as the basis, so that we will run the checksum over the
entire file...
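
Just to illustrate the idea (this is not BackupPC's actual pool-hashing
code, and the 128k prefix length is an arbitrary example):

use Digest::MD5 qw(md5_hex);

# Digest of the file size plus the first $len bytes - enough to look for
# a *candidate* match in the pool before fetching the whole file.
sub prefix_digest {
    my ($path, $len) = @_;
    open(my $fh, '<', $path) or die "$path: $!";
    binmode $fh;
    read($fh, my $buf, $len);
    close $fh;
    return md5_hex((-s $path) . $buf);
}

my $digest = prefix_digest('/path/to/candidate/file', 128 * 1024);
# If a pool entry with this digest exists, verify with a full-file
# checksum (as above) before deciding to skip the transfer.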

Regards,
Adam

--
Adam Goryachev
Website Managers
www.websitemanagers.com.au



Re: [BackupPC-users] Backup only new file(s)

2009-06-11 Thread Adam Goryachev

Filipe Brandenburger wrote:
> Hi,
> 
> On Thu, Jun 11, 2009 at 14:13, Les Mikesell wrote:
>> Mirco Piccin wrote:
>>> Anyway, each daily file is quite similar to the previous one, so rsync
>>> (or custom script) should be the better way to do the job.
>> That won't help unless each file is named the same as the previous one.
> 
> You can try to use the "-y" or "--fuzzy" option to rsync (at least
> rsync 3) to implement this.
> 
> Quoting from the man page: "-y, --fuzzy: This option tells rsync that
> it should look for a basis file for any destination file that  is
> missing. The current algorithm looks in the same directory as the
> destination file for either a file that has an identical size and
> modified-time, or a similarly-named file. If found, rsync uses the
> fuzzy basis file to try to speed up the transfer."

I'm assuming this doesn't help with backuppc because of the whole perl
module thing? It would be interesting to see how "fuzzy" the filenames
can be?

20090601_Master_Backup.tar
20090602_Master_Backup.tar

or

Master_Backup_etc_0001.blah
Master_Backup_etc_0002.blah

or

Master Backup Tue Jun 10 2009.blah
Master Backup Wed Jun 11 2009.blah

The first two are only one char different (and at most 5 or 6 chars
different, e.g. 20091231 -> 20100101), however the last one is more "fuzzy"...

Would be fantastic if backuppc was able to deal with this also!

Regards,
Adam

--
Adam Goryachev
Website Managers
www.websitemanagers.com.au



Re: [BackupPC-users] Backup only new file(s)

2009-06-11 Thread Filipe Brandenburger
Hi,

On Thu, Jun 11, 2009 at 14:13, Les Mikesell wrote:
> Mirco Piccin wrote:
>> Anyway, each daily file is quite similar to the previous one, so rsync
>> (or custom script) should be the better way to do the job.
>
> That won't help unless each file is named the same as the previous one.

You can try to use the "-y" or "--fuzzy" option to rsync (at least
rsync 3) to implement this.

Quoting from the man page: "-y, --fuzzy: This option tells rsync that
it should look for a basis file for any destination file that  is
missing. The current algorithm looks in the same directory as the
destination file for either a file that has an identical size and
modified-time, or a similarly-named file. If found, rsync uses the
fuzzy basis file to try to speed up the transfer."
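
For example (paths are just placeholders), pulling the daily dumps into
the same destination directory each day:

rsync -av --fuzzy fileserver:/export/dumps/ /backup/dumps/

With --fuzzy, yesterday's similarly-named file already sitting in
/backup/dumps/ can then serve as the basis for the delta transfer of
today's file.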

HTH,
Filipe



Re: [BackupPC-users] include directive in config.pl

2009-06-11 Thread Filipe Brandenburger
Hi,

On Thu, Jun 11, 2009 at 13:20, Matthias Meyer wrote:
> I hoped the necessary code would be simple enough and somebody would be so
> nice to post it here.

Something like this should work:

# define the list of junctions for Vista in English:
@VistaJunctions_english = ( "/dir1", "/dir2", "/dir3" );

# define the list of junctions for Vista in German:
@VistaJunctions_german = ( "/verzeichnis_eins", "/verzeichnis_zwei",
"/verzeichnis_drei" );

# And then join it all!
$Conf{BackupFilesExclude} = {
 'WINDOWS' => [
   '/Downloaded Program Files',
   '/Offline Web Pages',
   '/Temp',
   '/proc',
   '/System32/LogFiles/WMI/RtBackup'
 ],
 '*' => [
   'pagefile.sys',
   'hiberfil.sys',
   '/System Volume Information',
   '/RECYCLER',
   '/$Recycle.Bin',
   '/$RECYCLE.BIN',
   '/MSOCache',
   '/proc',
   '/Windows',
   @VistaJunctions_english,
   @VistaJunctions_german
 ]
};


Is this what you are looking for?

If you want to source the definitions of @VistaJunctions_english and
@VistaJunctions_german from another file, you can use this command at
the start of the .pl file:

require "/path/to/junction_definitions.pl";


HTH,
Filipe



Re: [BackupPC-users] Backup only new file(s)

2009-06-11 Thread Jeffrey J. Kosowsky
Les Mikesell wrote at about 13:13:46 -0500 on Thursday, June 11, 2009:
 > Mirco Piccin wrote:
 > > 
 > >> Also, is each daily file completely distinct from the previous one or
 > >> is just incrementally changed? Because if it is just incrementally
 > >> changed you may want to first rsync against the previous day's backup
 > >> to reduce network bandwidth.
 > > 
 > > My BackupPC is running on a VIA processor, max MB/s : less than 5 :-(
 > > So, backup 840 GB each time is not the best solution ...
 > > (this is the reason i did not configure the backup as you suggest)
 > > 
 > > Anyway, each daily file is quite similar to the previous one, so rsync
 > > (or custom script) should be the better way to do the job.
 > 
 > That won't help unless each file is named the same as the previous one. 
 > Perhaps you could smb-mount the share into the backuppc server and move 
 > the files around so you always have the newest file under the same name 
 > in a subdirectory of the share for the duration of the backup - then you 
 > could put it back if you want. That would let you use the 'some number 
 > of fulls only' approach I suggested earlier and also transfer less data 
 > (but the rsync CPU vs. network tradeoff may be a wash).
 > 
 > If you don't use some approach to just get one file in the directory per 
 > day, you will probably run out of space on your 2nd full when you 
 > transfer the current week's files before the previous full can be 
 > deleted.  Or are you doing this already?
 > 

I think Mirco was saying "so rsync (or custom script) should be the
better way to do the job." I assume that he would write a simple
script that would do something like:
1. Hard link the last file version to a file with the name of the
   current file in another temporary directory:
ln <backup-dir>/<last-file-name> <temp-dir>/<current-file-name>


2. Rsync the current file relative to the last file in tempdir:
rsync -a --link-dest=<temp-dir> <remote-host>:<share>/<current-file-name> <backup-dir>/

   If there are no changes then a hard link is created (which is
   equivalent to a hard link to <backup-dir>/<last-file-name>). If you
   would prefer the file to be copied rather than hard-linked even if
   no changes, then use --compare-dest instead of --link-dest.

   If there are changes, then rsync will use <last-file-name> as the
   basis for creating the new backup. i.e. it will *copy* the
   unchanged blocks locally from <temp-dir>/<current-file-name> and only
   transfer the *changed* blocks over the network link. (correct me if
   I'm wrong here of course).

3. Remove the temporary link file
rm <temp-dir>/<current-file-name>
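
Put together, a minimal sketch of such a wrapper (host, paths and the
assumption that the file is reachable over rsync/ssh rather than smb
are all hypothetical; adjust to taste):

#!/bin/sh
# sketch only - file names are assumed to contain no spaces
SRC_HOST=fileserver
SRC_DIR=/export/dumps        # where the daily 120GB file appears
DEST=/backup/daily           # local copies kept by this script
TMP=/backup/.linkdest        # temporary link-dest directory

CUR=$(ssh "$SRC_HOST" "ls -t $SRC_DIR | head -1")   # newest remote file
LAST=$(ls -t "$DEST" | head -1)                     # newest local copy

mkdir -p "$TMP" "$DEST"
ln "$DEST/$LAST" "$TMP/$CUR"                                     # step 1
rsync -a --link-dest="$TMP" "$SRC_HOST:$SRC_DIR/$CUR" "$DEST/"   # step 2
rm "$TMP/$CUR"                                                   # step 3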



Re: [BackupPC-users] Backup only new file(s)

2009-06-11 Thread Jeffrey J. Kosowsky
Mirco Piccin wrote at about 19:13:42 +0200 on Thursday, June 11, 2009:
 > Hi, thanks for reply.
 > 
 > >  > > Every day (except sunday) a procedure stores in this folder a 120GB 
 > > file.
 > >  > > The name of the file is the day name.
 > >  > >
 > >  > > So, in a week, i have 6 different files generated (about 720 GB).
 > >  > > Every week the files are overwritten by the procedure.
 > >  > >
 > >  > > I'd like to backup only the newest file, and not all the folder.
 > >  > > The problem is that i suppose i must have a full backup of the folder
 > >  > > (720 GB), because of $Conf{FullKeepCnt}  must be >= 1, plus
 > >  > > incremental backup.
 > ...
 > >  > > and so on, for a total of 1440 GB (the double of the effective disk
 > >  > > space needed).
 > ...
 > > Couldn't you just do daily full backups (with no incrementals) while
 > > setting $Conf{FullKeepCnt}=1. Then as long as you made sure that
 > > BackupPC_nightly didn't run in the middle, you would effectively just
 > > be adding one new backup to the pool each day and later when
 > > BackupPC_nightly runs you would be erasing the entry from 8 days
 > > earlier, so you would never have more than 720+120=840 GB in the
 > > pool. Now this wouldn't be particularly bandwidth efficient since you
 > > are always doing full rather than incrementals, but it would work...
 > >
 > > However, if you really are only trying to backup a single new 120GB
 > > file every day, I wonder whether you might be better off just using a
 > > daily 'rsync' cron job. It seems like that would be simpler, more
 > > reliable, and more efficient.
 > >
 > > Also, is each daily file completely distinct from the previous one or
 > > is just incrementally changed? Because if it is just incrementally
 > > changed you may want to first rsync against the previous day's backup
 > > to reduce network bandwidth.
 > 
 > My BackupPC is running on a VIA processor, max MB/s : less than 5 :-(
 > So, backup 840 GB each time is not the best solution ...
 > (this is the reason i did not configure the backup as you suggest)
 > 
 > Anyway, each daily file is quite similar to the previous one, so rsync
 > (or custom script) should be the better way to do the job.
 > 

Rsync seems like what you want, especially since there is so much file
similarity. 



Re: [BackupPC-users] Backup only new file(s)

2009-06-11 Thread Les Mikesell
Mirco Piccin wrote:
> 
>> Also, is each daily file completely distinct from the previous one or
>> is just incrementally changed? Because if it is just incrementally
>> changed you may want to first rsync against the previous day's backup
>> to reduce network bandwidth.
> 
> My BackupPC is running on a VIA processor, max MB/s : less than 5 :-(
> So, backup 840 GB each time is not the best solution ...
> (this is the reason i did not configure the backup as you suggest)
> 
> Anyway, each daily file is quite similar to the previous one, so rsync
> (or custom script) should be the better way to do the job.

That won't help unless each file is named the same as the previous one. 
Perhaps you could smb-mount the share into the backuppc server and move 
the files around so you always have the newest file under the same name 
in a subdirectory of the share for the duration of the backup - then you 
could put it back if you want. That would let you use the 'some number 
of fulls only' approach I suggested earlier and also transfer less data 
(but the rsync CPU vs. network tradeoff may be a wash).

If you don't use some approach to just get one file in the directory per 
day, you will probably run out of space on your 2nd full when you 
transfer the current week's files before the previous full can be 
deleted.  Or are you doing this already?

-- 
   Les Mikesell
lesmikes...@gmail.com




Re: [BackupPC-users] include directive in config.pl

2009-06-11 Thread Matthias Meyer
Jeffrey J. Kosowsky wrote:

> Matthias Meyer wrote at about 18:00:22 +0200 on Thursday, June 11, 2009:
>  > Hi,
>  > 
>  > Unfortunately I am not a perl programmer.
>  > Therefore my, maybe stupid, question ;-)
>  > 
>  > Is it possible to include a specification within the
>  > $Conf{BackupFilesExclude}?
>  > 
>  > Something like:
>  > $Conf{BackupFilesExclude} = {
>  >   'WINDOWS' => [
>  > '/Downloaded Program Files',
>  > '/Offline Web Pages',
>  > '/Temp',
>  > '/proc',
>  > '/System32/LogFiles/WMI/RtBackup'
>  >   ],
>  >   '*' => [
>  > 'pagefile.sys',
>  > 'hiberfil.sys',
>  > '/System Volume Information',
>  > '/RECYCLER',
>  > '/$Recycle.Bin',
>  > '/$RECYCLE.BIN',
>  > '/MSOCache',
>  > '/proc',
>  > '/Windows',
>  > #include VistaJunctions_english
>  > #include VistaJunctions_german
>  >   ]
>  > };
>  >   
> 
> Well, it's not C so you wouldn't expect to be able to use C
> syntax. But the config.pl is an executable perl file so you can use
> standard perl techniques to read in the contents of a variable in
> another file (for example using the 'do' construction) and then use
> standard perl hash & array constructions to merge the values from the
> file you read in with the values you specify in the config file. Just
> check out some of the perl tutorials online -- if you are messing with
> perl code, I highly recommend you learn a little perl.
> 

Thanks Jeffrey,

I hoped the necessary code would be simple enough and somebody would be so
nice to post it here.
Thanks anyway
br
Matthias
-- 
Don't Panic




Re: [BackupPC-users] Backup only new file(s)

2009-06-11 Thread Mirco Piccin
Hi, thanks for reply.

>  > > Every day (except sunday) a procedure stores in this folder a 120GB file.
>  > > The name of the file is the day name.
>  > >
>  > > So, in a week, i have 6 different files generated (about 720 GB).
>  > > Every week the files are overwritten by the procedure.
>  > >
>  > > I'd like to backup only the newest file, and not all the folder.
>  > > The problem is that i suppose i must have a full backup of the folder
>  > > (720 GB), because of $Conf{FullKeepCnt}  must be >= 1, plus
>  > > incremental backup.
...
>  > > and so on, for a total of 1440 GB (the double of the effective disk
>  > > space needed).
...
> Couldn't you just do daily full backups (with no incrementals) while
> setting $Conf{FullKeepCnt}=1. Then as long as you made sure that
> BackupPC_nightly didn't run in the middle, you would effectively just
> be adding one new backup to the pool each day and later when
> BackupPC_nightly runs you would be erasing the entry from 8 days
> earlier, so you would never have more than 720+120=840 GB in the
> pool. Now this wouldn't be particularly bandwidth efficient since you
> are always doing full rather than incrementals, but it would work...
>
> However, if you really are only trying to backup a single new 120GB
> file every day, I wonder whether you might be better off just using a
> daily 'rsync' cron job. It seems like that would be simpler, more
> reliable, and more efficient.
>
> Also, is each daily file completely distinct from the previous one or
> is just incrementally changed? Because if it is just incrementally
> changed you may want to first rsync against the previous day's backup
> to reduce network bandwidth.

My BackupPC is running on a VIA processor, max MB/s : less than 5 :-(
So, backup 840 GB each time is not the best solution ...
(this is the reason i did not configure the backup as you suggest)

Anyway, each daily file is quite similar to the previous one, so rsync
(or custom script) should be the better way to do the job.

Regards
M



Re: [BackupPC-users] include directive in config.pl

2009-06-11 Thread Jeffrey J. Kosowsky
Matthias Meyer wrote at about 18:00:22 +0200 on Thursday, June 11, 2009:
 > Hi,
 > 
 > Unfortunately I am not a perl programmer.
 > Therefore my, maybe stupid, question ;-)
 > 
 > Is it possible to include a specification within the
 > $Conf{BackupFilesExclude}?
 > 
 > Something like:
 > $Conf{BackupFilesExclude} = {
 >   'WINDOWS' => [
 > '/Downloaded Program Files',
 > '/Offline Web Pages',
 > '/Temp',
 > '/proc',
 > '/System32/LogFiles/WMI/RtBackup'
 >   ],
 >   '*' => [
 > 'pagefile.sys',
 > 'hiberfil.sys',
 > '/System Volume Information',
 > '/RECYCLER',
 > '/$Recycle.Bin',
 > '/$RECYCLE.BIN',
 > '/MSOCache',
 > '/proc',
 > '/Windows',
 > #include VistaJunctions_english
 > #include VistaJunctions_german
 >   ]
 > };
 >   

Well, it's not C so you wouldn't expect to be able to use C
syntax. But the config.pl is an executable perl file so you can use
standard perl techniques to read in the contents of a variable in
another file (for example using the 'do' construction) and then use
standard perl hash & array constructions to merge the values from the
file you read in with the values you specify in the config file. Just
check out some of the perl tutorials online -- if you are messing with
perl code, I highly recommend you learn a little perl.
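
A minimal sketch of that, assuming a hypothetical file
/etc/backuppc/vista_junctions.pl that defines the two arrays:

# in config.pl, after $Conf{BackupFilesExclude} has been assigned:
do "/etc/backuppc/vista_junctions.pl";   # defines @VistaJunctions_english/_german
push @{ $Conf{BackupFilesExclude}{'*'} },
     @VistaJunctions_english, @VistaJunctions_german;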



Re: [BackupPC-users] Backing up a USB-disk?

2009-06-11 Thread Filipe Brandenburger
Hi,

On Thu, Jun 11, 2009 at 01:50, Magnus Larsson wrote:
>> mountpoint -q /path/to/usbdisk
>> (will set $? to 0 if it's mounted, non-zero otherwise)
>
> How would I use this value that mountpoint returns, do you mean?

The same way you would use the '[ -f
/path/to/usbdisk/.fileyouwouldhavetocreate ]', but without creating
the file...

I'm not really an expert on BackupPC (I only used the basic features
so far), but I believe you would use that as a script for PingCmd or
maybe DumpPreUserCmd and maybe together with UserCmdCheckStatus (see
Jeffrey's previous e-mail). I guess they could give you some more help
on how to configure this than I could...
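
A sketch of that, assuming the disk is mounted at /path/to/usbdisk and
that a failed check should abort the dump:

$Conf{DumpPreUserCmd}     = '/bin/mountpoint -q /path/to/usbdisk';
$Conf{UserCmdCheckStatus} = 1;   # non-zero exit status aborts the backup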

HTH,
Filipe



Re: [BackupPC-users] Backup only new file(s)

2009-06-11 Thread Jeffrey J. Kosowsky
Les Mikesell wrote at about 07:57:11 -0500 on Thursday, June 11, 2009:
 > Mirco Piccin wrote:
 > > Hi all,
 > > i have to backup a folder (using smb).
 > > 
 > > Every day (except sunday) a procedure stores in this folder a 120GB file.
 > > The name of the file is the day name.
 > > 
 > > So, in a week, i have 6 different files generated (about 720 GB).
 > > Every week the files are overwritten by the procedure.
 > > 
 > > I'd like to backup only the newest file, and not all the folder.
 > > The problem is that i suppose i must have a full backup of the folder
 > > (720 GB), because of $Conf{FullKeepCnt}  must be >= 1, plus
 > > incremental backup.
 > > So, configuring:
 > > $Conf{FullPeriod} = 6.97;
 > > $Conf{IncrKeepCnt} = 6;
 > > 
 > > i'll have :
 > > on sunday the full backup -> 720 GB
 > > on monday the incremental backup  -> 720 GB (the full backup) plus 120
 > > GB (the new monday file)
 > > on tuesday the incremental backup  -> 840 GB (the full backup plus
 > > incremental) plus 120 GB (the new tuesday file)
 > > 
 > > and so on, for a total of 1440 GB (the double of the effective disk
 > > space needed).
 > > 
 > > And again, sunday BackupPC will move 720 GB of files, and so on.
 > > 
 > > Is there a way to backup only the new file (maybe playing with
 > > $Conf{IncrLevels}), without a full?
 > > Or a way to optimize it?
 > 
 > I don't think there is a good way to handle this in backuppc.  Can you 
 > change the procedure so the current daily file is created in a directory 
 > by itself and older ones rotated to a different directory?  Then you 
 > could do a full of the one holding the current file every day and store 
 > as many as you want.
 > 

Couldn't you just do daily full backups (with no incrementals) while
setting $Conf{FullKeepCnt}=1? Then as long as you made sure that
BackupPC_nightly didn't run in the middle, you would effectively just
be adding one new backup to the pool each day and later when
BackupPC_nightly runs you would be erasing the entry from 8 days
earlier, so you would never have more than 720+120=840 GB in the
pool. Now this wouldn't be particularly bandwidth efficient since you
are always doing full rather than incrementals, but it would work...
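
In config terms that first suggestion would look roughly like this
(a sketch only; the exact values, and the BackupPC_nightly timing
mentioned above, may need tuning):

$Conf{FullPeriod}  = 0.97;   # attempt a full (nearly) every day
$Conf{FullKeepCnt} = 1;      # keep only the most recent full
$Conf{IncrKeepCnt} = 0;      # no incrementals in this scheme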

However, if you really are only trying to backup a single new 120GB
file every day, I wonder whether you might be better off just using a
daily 'rsync' cron job. It seems like that would be simpler, more
reliable, and more efficient.

Also, is each daily file completely distinct from the previous one or
is just incrementally changed? Because if it is just incrementally
changed you may want to first rsync against the previous day's backup
to reduce network bandwidth.



[BackupPC-users] include directive in config.pl

2009-06-11 Thread Matthias Meyer
Hi,

Unfortunately I am not a perl programmer.
Therefore my, maybe stupid, question ;-)

Is it possible to include a specification within the
$Conf{BackupFilesExclude}?

Something like:
$Conf{BackupFilesExclude} = {
  'WINDOWS' => [
'/Downloaded Program Files',
'/Offline Web Pages',
'/Temp',
'/proc',
'/System32/LogFiles/WMI/RtBackup'
  ],
  '*' => [
'pagefile.sys',
'hiberfil.sys',
'/System Volume Information',
'/RECYCLER',
'/$Recycle.Bin',
'/$RECYCLE.BIN',
'/MSOCache',
'/proc',
'/Windows',
#include VistaJunctions_english
#include VistaJunctions_german
  ]
};
  
-- 
Don't Panic




Re: [BackupPC-users] backup the backuppc pool with bacula

2009-06-11 Thread Les Mikesell
Adam Goryachev wrote:
> 
>>> In fact, the POSIX/SUS specifications specifically state:
>>>Some implementations mark for update the st_ctime field of renamed
>>>files and some do not. Applications which make use of the st_ctime
>>>field may behave differently with respect to renamed files unless they
>>>are designed to allow for either behavior.
>>>
>>> However, it wouldn't be hard to add a "touch" to the chain renumbering
>>> routine if you want to be able to identify newly renumbered files. One
>>> would need to make sure that this doesn't have other unintended side
>>> effects but I don't think that BackupPC otherwise uses the file mtime.
>> Or, just do the explicit link/unlink operations to force the filesystem 
>> to do the right thing with ctime().
> 
> As long as the file you are dealing with has nlinks > 1 and those other
> files don't vanish in between the unlink/link rename is an atomic
> operation... unlink + link is not.

But that doesn't matter in this case (and it's link/unlink or you lose 
it).  You are working with the pool file name - and you don't really 
want the contents related to that name to atomically change without 
anything else knowing about it anyway.  Originally, backups weren't 
permitted at the same time as the nightly run to avoid that.  Now there 
must be some kind of locking.

>> And I'd like a quick/cheap way so you could just ignore the pool during 
>> a copy and rebuild it the same way it was built in the first place 
>> without thinking twice.  And maybe do things like backing up other 
>> instances of backuppc archives ignoring their pools and merging them so 
>> you could restore individual files directly.
> 
> Would that mean your data transfer is equal to the un-pooled size
> though? ie, if you transfer a single pc/ directory with 20
> full backups, you would need to transfer 20 X size of full backup of
> data. When it gets to the other side, you simply add the files from the
> first full backup to the pool, and then throw away (and link) the other
> 19 copies.

I'm not sure if rsync figures out the linked copies on the sending side 
or not.  It at least seems possible, and I've always been able to rsync 
any single pc tree.  Tar would only send one, but it includes one 
instance in each run so each incremental would repeat files you already 
have.

> Adds simplicity, but does it pose a problem with data sizes being
> transferred ?
> 
> One optimisation would be to examine the backuppc log, and only send the
> files that are not "same" or some such...

Some sort of client/server protocol would be needed to get it completely 
right.

-- 
   Les Mikesell
lesmikes...@gmail.com




Re: [BackupPC-users] backup the backuppc pool with bacula

2009-06-11 Thread Adam Goryachev

Les Mikesell wrote:
> Jeffrey J. Kosowsky wrote:
>>  
>> In fact, the POSIX/SUS specifications specifically state:
>>Some implementations mark for update the st_ctime field of renamed
>>files and some do not. Applications which make use of the st_ctime
>>field may behave differently with respect to renamed files unless they
>>are designed to allow for either behavior.
>>
>> However, it wouldn't be hard to add a "touch" to the chain renumbering
>> routine if you want to be able to identify newly renumbered files. One
>> would need to make sure that this doesn't have other unintended side
>> effects but I don't think that BackupPC otherwise uses the file mtime.
> 
> Or, just do the explicit link/unlink operations to force the filesystem 
> to do the right thing with ctime().

As long as the file you are dealing with has nlinks > 1 and those other
files don't vanish in between the unlink/link, rename is an atomic
operation... unlink + link is not.

> And I'd like a quick/cheap way so you could just ignore the pool during 
> a copy and rebuild it the same way it was built in the first place 
> without thinking twice.  And maybe do things like backing up other 
> instances of backuppc archives ignoring their pools and merging them so 
> you could restore individual files directly.

Would that mean your data transfer is equal to the un-pooled size
though? ie, if you transfer a single pc/<host> directory with 20
full backups, you would need to transfer 20 x the size of a full backup of
data. When it gets to the other side, you simply add the files from the
first full backup to the pool, and then throw away (and link) the other
19 copies.

Adds simplicity, but does it pose a problem with data sizes being
transferred ?

One optimisation would be to examine the backuppc log, and only send the
files that are not "same" or some such...

Anyway, I'll get out of the way and allow you to continue, I think you
understand the issue better than me by far ... :)

Regards,
Adam


--
Adam Goryachev
Website Managers
www.websitemanagers.com.au



Re: [BackupPC-users] Is it possible to split backuppc and gui to two different servers

2009-06-11 Thread Matthias Meyer
Holger Parplies wrote:

> Hi,
> 
> Matthias Meyer wrote on 2009-06-08 23:16:48 +0200 [[BackupPC-users] Is it
> possible to split backuppc and gui to two different servers]:
>> 
>> Is it possible to run Backuppc on another server as the web interface?
> 
> yes, but you'll need to mount the pool on the web server in order to be
> able to browse backups.
> 
> Regards,
> Holger
> 

Thanks.
Is there a howto anywhere?
Whether it is sufficient to mount the pool RO or whether RW is necessary is
easy to evaluate.
But how to install/move the backuppc GUI?
Is it sufficient to move the /usr/share/backuppc/cgi-bin/ onto another
server?
Or do I have to install the complete backuppc onto both servers?

br
Matthias
-- 
Don't Panic




Re: [BackupPC-users] backup the backuppc pool with bacula

2009-06-11 Thread Les Mikesell
Jeffrey J. Kosowsky wrote:
>  
> Now that doesn't mean it *couldn't* happen and it doesn't mean we
> shouldn't always be paranoid and test, test, test... but I just don't
> have any good reason to think it would fail algorithmically. Now that
> doesn't mean it couldn't slow down dramatically or run out of memory
> as some have claimed, it just seems unlikely (to me) that it would
> complete without error yet still have some hidden error.

Even if everything is done right it would depend on the source directory 
not changing link targets during the (likely long) transfer process. 
Consider what would happen if a collision chain fixup happens and 
renames pool files after rsync reads the directory list and makes the 
inode mapping table but before the transfers start.

>  > 
>  > > And the renumbering will change the timestamps which should alert rsync to
>  > > all the changes even without the --checksum flag.
>  > 
>  > This part I'm not sure on. Is it actually *guaranteed* that a rename(2) must
>  > be implemented in terms of unlink(2) and link(2) (but atomically), i.e. that
>  > it must modify the inode change time? The inode is not really changed, except
>  > for the side effect of (atomically) decrementing and re-incrementing the link
>  > count. By virtue of the operation being atomic, the link count is
>  > *guaranteed* not to change, so I, were I to implement a file system, would
>  > feel free to optimize the inode change away (or simply not implement it in
>  > terms of unlink() and link()), unless it is documented somewhere that updating
>  > the inode change time is mandatory (though it really is *not* an inode change,
>  > so I don't see why it should be).
>  > 
> 
> Good catch!!! I hadn't realized that this was implementation
> dependent. It seems that most Unix implementations (including BSD)
> have historically changed the ctime; however, Linux (at least
> ext2/ext3) does not, at least as of kernel 2.6.26.6.

I sort of recall some arguments about this in the early reiserfs days. 
I guess the "cheat and short-circuit" side won even though it makes it 
impossible to do a correct incremental backup as expected with any 
ordinary tool (rsync still can but it needs a previous copy and a full 
block checksum comparison).

> In fact, the POSIX/SUS specifications specifically state:
>Some implementations mark for update the st_ctime field of renamed
>files and some do not. Applications which make use of the st_ctime
>field may behave differently with respect to renamed files unless they
>are designed to allow for either behavior.
> 
> However, it wouldn't be hard to add a "touch" to the chain renumbering
> routine if you want to be able to identify newly renumbered files. One
> would need to make sure that this doesn't have other unintended side
> effects but I don't think that BackupPC otherwise uses the file mtime.

Or, just do the explicit link/unlink operations to force the filesystem 
to do the right thing with ctime().

>  > Does rsync even act on the inode change time? 
> No it doesn't. In fact, I have read that most linux systems don't allow
> you to set the ctime to anything other than the current system time.

You shouldn't be able to.  But backup-type operations should be able to 
use it to identify moved files in incrementals.

>  > > Or are you saying it would be difficult to do this manually with a
>  > > special purpose algorithm that tries to just track changes to the pool
>  > > and pc files?
>  > 
>  > I haven't given that topic much thought. The advantage in a special purpose
>  > algorithm is that we can make assumptions about the data we are dealing with.
>  > We shouldn't do this unnecessarily, but if it has notable advantages, then why
>  > not? "Difficult" isn't really a point. The question is whether it can be done
>  > efficiently.
> 
> I meant more "difficult" in terms of being sure to track all special
> cases and that one would have to be careful, not that one shouldn't do
> it.
> 
> Personally, I don't like the idea of chain collisions and would have
> preferred using full file md5sums which as I have mentioned earlier
> would not be very costly at least for the rsync/rsyncd transfer
> methods under protocol 30.

And I'd like a quick/cheap way so you could just ignore the pool during 
a copy and rebuild it the same way it was built in the first place 
without thinking twice.  And maybe do things like backing up other 
instances of backuppc archives ignoring their pools and merging them so 
you could restore individual files directly.

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] backup the backuppc pool with bacula

2009-06-11 Thread Jeffrey J. Kosowsky
Holger Parplies wrote at about 14:31:02 +0200 on Thursday, June 11, 2009:
 > Hi,
 > 
 > Jeffrey J. Kosowsky wrote on 2009-06-11 00:25:37 -0400 [Re: [BackupPC-users] 
 > backup the backuppc pool with bacula]:
 > > Holger Parplies wrote at about 04:22:03 +0200 on Thursday, June 11, 2009:
 > >  > Les Mikesell wrote on 2009-06-10 15:45:22 -0500 [Re: [BackupPC-users]
 > >  > backup the backuppc pool with bacula]:
 > >  > [...]
 > >  > the file list [...] can and has been [optimized] in 3.0 (probably meaning
 > >  > protocol version 30, i.e. rsync 3.x on both sides).
 > > 
 > > Holger, I may be wrong here, but I think that you get the more
 > > efficient memory usage as long as both client & server are version >=3.0 
 > > even if protocol version is set to < 30 (which is true for BackupPC
 > > where it defaults back to version 28). 
 > 
 > firstly, it's *not* true. BackupPC (as client side rsync) is not
 > version >= 3.0. It's not even really rsync at all, and I doubt File::RsyncP
 > is more memory efficient than rsync, even if the core code is in C and copied
 > from rsync.
 > 
I had (perhaps mistakenly) assumed that BackupPC still used rsync
since at least in the Fedora installation, the rpm requires rsync.

Still, I believe you do get at least some of the advantages of rsync
>=3.0 when you have it on the client side at least for the rsyncd
method. In fact, this might explain the following situation:
rsync 2.x and rsync method: Backups hang on certain files
rsync 3.x and rsync method: Backups hang on certain files
rsync 3.x and rsyncd method: Backups always work

Perhaps the combination of rsyncd and rsync 3.x on the client is what
allows taking advantage of some of the benefits of version 3.x.

 > Secondly, I'm *guessing* that for an incremental file list you'd need a
 > protocol modification. I understand it that instead of one big file list
 > comparison done before transfer, 3.0 does partial file list comparisons during
 > transfer (otherwise it would need to traverse the file tree at least twice,
 > which is something you'd normally avoid). That would clearly require a
 > protocol change, wouldn't it?

Maybe not if using rsyncd makes the server into the "master" so that
it controls the file listing. Stepping back, I think it all depends on
what you define as "protocol" - if protocol is more about recognized
commands and encoding, then the ordering of file listing may not be
part of the protocol but instead might be more part of the control
structure which could be protocol independent if the control is ceded
to the "master" side -- i.e., at least some changes to the control
structure could be made without having to coordinate the change with
"master" and "slave". I'm just speculating because there isn't much
documentation that I have been able to find.

 > 
 > Actually, I would think that rsync < 3.0 *does* need to traverse the file tree
 > twice, so the change might even have been made because of the wish to speed up
 > the transfer rather than to decrease the file list size (it does both, of
 > course, as well as better utilize network bandwidth by starting the transfer
 > earlier and allowing more parallelism between network I/O and disk I/O -
 > presuming my assumptions are correct).
 > 
 > > But I'm not an expert and my understanding is that the protocols themselves
 > > are not well documented other than looking through the source code.
 > 
 > Neither am I. I admit that I haven't even looked for documentation (or at the
 > source code). It just seems logical to implement it that way.
 > 
 > I can't rule out that the optimization could be possible with the older
 > protocol versions, but then, why wouldn't rsync have always operated that way?

You could say the same thing about why wasn't the protocol always that
way ;)

 > 
 > >  > > > and how the rest of the community deals with getting pools of
 > >  > > > 100+GB offsite in less than a week of transfer time.
 > >  > > 
 >  > > 100 Gigs might be feasible - it depends more on the file sizes and how
 >  > > many directory entries you have, though.  And you might have to make the
 >  > > first copy on-site so subsequently you only have to transfer the changes.
 >  > 
 >  > Does anyone actually have experience with rsyncing an existing pool to an
 >  > existing copy (as in: verification of obtaining a correct result)? I'm kind of
 >  > sceptical that pool chain renumbering will be handled correctly. At least, it
 >  > seems extremely complicated to get right.
 > > 
 > > Why wouldn't rsync -H handle this correctly? 
 > 
 > I'm not saying it doesn't. I'm saying it's complicated. I'm asking whether
 > anyone has actually verified that it does. I'm asking because it's an
 > extremely rare corner case that the developers may not have had in mind and
 > thus may not have tested. The massive usage of hardlinks in a BackupPC pool
 > clearly is something they did n

Re: [BackupPC-users] Backup only new file(s)

2009-06-11 Thread Mirco Piccin
Hi and thanks for the reply.

>> Every day (except sunday) a procedure stores in this folder a 120GB file.
>> The name of the file is the day name.
>>
>> So, in a week, i have 6 different files generated (about 720 GB).
>> Every week the files are overwritten by the procedure.
...
>> i'll have :
>> on sunday the full backup -> 720 GB
>> on monday the incremental backup  -> 720 GB (the full backup) plus 120
>> GB (the new monday file)
>> on tuesday the incremental backup  -> 840 GB (the full backup plus
>> incremental) plus 120 GB (the new tuesday file)
>>
>> and so on, for a total of 1440 GB (the double of the effective disk
>> space needed).
...

> I don't think there is a good way to handle this in backuppc.  Can you
> change the procedure so the current daily file is created in a directory
> by itself and older ones rotated to a different directory?  Then you
> could do a full of the one holding the current file every day and store
> as many as you want.


Maybe that's the best solution, but that procedure is not open source...
I could possibly write an additional script (and schedule it) that does the job.

> I think it is better that you change your XferMethod to rsyncd

Also this is a good solution; but i'd like to maintain the backup
"agentless"

Thanks
M



Re: [BackupPC-users] backup the backuppc pool with bacula

2009-06-11 Thread Holger Parplies
Hi,

Jeffrey J. Kosowsky wrote on 2009-06-11 00:25:37 -0400 [Re: [BackupPC-users] 
backup the backuppc pool with bacula]:
> Holger Parplies wrote at about 04:22:03 +0200 on Thursday, June 11, 2009:
 >  > Les Mikesell wrote on 2009-06-10 15:45:22 -0500 [Re: [BackupPC-users]
 >  > backup the backuppc pool with bacula]:
>  > [...]
>  > the file list [...] can and has been [optimized] in 3.0 (probably meaning
>  > protocol version 30, i.e. rsync 3.x on both sides).
> 
> Holger, I may be wrong here, but I think that you get the more
> efficient memory usage as long as both client & server are version >=3.0 
> even if protocol version is set to < 30 (which is true for BackupPC
> where it defaults back to version 28). 

firstly, it's *not* true. BackupPC (as client side rsync) is not
version >= 3.0. It's not even really rsync at all, and I doubt File::RsyncP
is more memory efficient than rsync, even if the core code is in C and copied
from rsync.

Secondly, I'm *guessing* that for an incremental file list you'd need a
protocol modification. I understand it that instead of one big file list
comparison done before transfer, 3.0 does partial file list comparisons during
transfer (otherwise it would need to traverse the file tree at least twice,
which is something you'd normally avoid). That would clearly require a
protocol change, wouldn't it?

Actually, I would think that rsync < 3.0 *does* need to traverse the file tree
twice, so the change might even have been made because of the wish to speed up
the transfer rather than to decrease the file list size (it does both, of
course, as well as better utilize network bandwidth by starting the transfer
earlier and allowing more parallelism between network I/O and disk I/O -
presuming my assumptions are correct).

> But I'm not an expert and my understanding is that the protocols themselves
> are not well documented other than looking through the source code.

Neither am I. I admit that I haven't even looked for documentation (or at the
source code). It just seems logical to implement it that way.

I can't rule out that the optimization could be possible with the older
protocol versions, but then, why wouldn't rsync have always operated that way?

>  > > > and how the rest of the community deals with getting pools of
>  > > > 100+GB offsite in less than a week of transfer time.
>  > > 
>  > > 100 Gigs might be feasible - it depends more on the file sizes and how 
>  > > many directory entries you have, though.  And you might have to make the 
>  > > first copy on-site so subsequently you only have to transfer the changes.
>  > 
 >  > Does anyone actually have experience with rsyncing an existing pool to an
 >  > existing copy (as in: verification of obtaining a correct result)? I'm kind of
 >  > sceptical that pool chain renumbering will be handled correctly. At least, it
 >  > seems extremely complicated to get right.
> 
> Why wouldn't rsync -H handle this correctly? 

I'm not saying it doesn't. I'm saying it's complicated. I'm asking whether
anyone has actually verified that it does. I'm asking because it's an
extremely rare corner case that the developers may not have had in mind and
thus may not have tested. The massive usage of hardlinks in a BackupPC pool
clearly is something they did not anticipate (or, at least, feel the need to
implement a solution for). There might be problems that appear only in
conjunction with massive counts of inodes with nlinks > 1.

In another thread, an issue was described that *could* have been caused by
this *not* working as expected (maybe crashing rather than doing something
wrong, not sure). It's unclear at the moment, and I'd like to be able to rule
it out on the basis of something more than "it should work, so it probably
does".

I'm also saying that pool backups are important enough to verify the contents
by looking closely at the corner cases we are aware of.

> And the renumbering will change the timestamps which should alert rsync to
> all the changes even without the --checksum flag.

This part I'm not sure on. Is it actually *guaranteed* that a rename(2) must
be implemented in terms of unlink(2) and link(2) (but atomically), i.e. that
it must modify the inode change time? The inode is not really changed, except
for the side effect of (atomically) decrementing and re-incrementing the link
count. By virtue of the operation being atomic, the link count is
*guaranteed* not to change, so I, were I to implement a file system, would
feel free to optimize the inode change away (or simply not implement it in
terms of unlink() and link()), unless it is documented somewhere that updating
the inode change time is mandatory (though it really is *not* an inode change,
so I don't see why it should be).

Does rsync even act on the inode change time? File modification time will be
unchanged, obviously. rsync's focus is on the file contents and optionally
keeping the attributes in sync (as far as it can). ctime is an indication that
attributes

Re: [BackupPC-users] Backup only new file(s)

2009-06-11 Thread Omar Llorens Crespo Domínguez
Mirco Piccin wrote:
> Hi all,
> i have to backup a folder (using smb).
>
> Every day (except sunday) a procedure stores in this folder a 120GB file.
> The name of the file is the day name.
>
> So, in a week, i have 6 different files generated (about 720 GB).
> Every week the files are overwritten by the procedure.
>
> I'd like to backup only the newest file, and not all the folder.
> The problem is that i suppose i must have a full backup of the folder
> (720 GB), because of $Conf{FullKeepCnt}  must be >= 1, plus
> incremental backup.
> So, configuring:
> $Conf{FullPeriod} = 6.97;
> $Conf{IncrKeepCnt} = 6;
>
> i'll have :
> on sunday the full backup -> 720 GB
> on monday the incremental backup  -> 720 GB (the full backup) plus 120
> GB (the new monday file)
> on tuesday the incremental backup  -> 840 GB (the full backup plus
> incremental) plus 120 GB (the new tuesday file)
>
> and so on, for a total of 1440 GB (the double of the effective disk
> space needed).
>
> And again, sunday BackupPC will move 720 GB of files, and so on.
>
> Is there a way to backup only the new file (maybe playing with
> $Conf{IncrLevels}), without a full?
> Or a way to optimize it?
>
> Thanks
> Regards
> M
>
>   

Hi,

I think it is better that you change your XferMethod to rsyncd. rsyncd
only copies the new files.
Also you can change your configuration to $Conf{FullKeepCnt} = 1,
because you only need the last copy, and $Conf{IncrKeepCnt} = 1 or 2.
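
For reference, a per-host sketch of what that could look like (the
module name, user and password are placeholders, and the client needs
an rsync daemon exporting that module):

$Conf{XferMethod}     = 'rsyncd';
$Conf{RsyncShareName} = ['dumps'];       # rsyncd module on the client
$Conf{RsyncdUserName} = 'backuppc';
$Conf{RsyncdPasswd}   = 'secret';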

-- 


Omar Llorens Crespo Domínguez.
JPL TSOLUCIO, SL
o...@tsolucio.com
www.tsolucio.com
www.bearnas.com
902 88 69 38





Re: [BackupPC-users] Backup only new file(s)

2009-06-11 Thread Les Mikesell
Mirco Piccin wrote:
> Hi all,
> i have to backup a folder (using smb).
> 
> Every day (except sunday) a procedure stores in this folder a 120GB file.
> The name of the file is the day name.
> 
> So, in a week, i have 6 different files generated (about 720 GB).
> Every week the files are overwritten by the procedure.
> 
> I'd like to backup only the newest file, and not all the folder.
> The problem is that i suppose i must have a full backup of the folder
> (720 GB), because of $Conf{FullKeepCnt}  must be >= 1, plus
> incremental backup.
> So, configuring:
> $Conf{FullPeriod} = 6.97;
> $Conf{IncrKeepCnt} = 6;
> 
> i'll have :
> on sunday the full backup -> 720 GB
> on monday the incremental backup  -> 720 GB (the full backup) plus 120
> GB (the new monday file)
> on tuesday the incremental backup  -> 840 GB (the full backup plus
> incremental) plus 120 GB (the new tuesday file)
> 
> and so on, for a total of 1440 GB (the double of the effective disk
> space needed).
> 
> And again, sunday BackupPC will move 720 GB of files, and so on.
> 
> Is there a way to backup only the new file (maybe playing with
> $Conf{IncrLevels}), without a full?
> Or a way to optimize it?

I don't think there is a good way to handle this in backuppc.  Can you 
change the procedure so the current daily file is created in a directory 
by itself and older ones rotated to a different directory?  Then you 
could do a full of the one holding the current file every day and store 
as many as you want.
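
If the procedure itself can't be changed, a small scheduled script could
do that rotation instead - a sketch with hypothetical paths, run wherever
the share is visible (e.g. on the BackupPC server with the share
smb-mounted):

#!/bin/sh
# sketch only - paths are examples, file names assumed to have no spaces
DUMPS=/export/dumps
mkdir -p "$DUMPS/current" "$DUMPS/older"
mv "$DUMPS"/current/* "$DUMPS/older/" 2>/dev/null    # retire the previous file
NEWEST=$(ls -tp "$DUMPS" | grep -v '/$' | head -1)   # newest plain file on top
[ -n "$NEWEST" ] && mv "$DUMPS/$NEWEST" "$DUMPS/current/"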

-- 
   Les Mikesell
lesmikes...@gmail.com





[BackupPC-users] backup from the internet?

2009-06-11 Thread error403

Well, I tried this approach to use rsyncd:
http://gerwick.ucsd.edu/backuppc_manual/backuppc_winxp.html

But then at this place:

> Then send this file to the administrator and nicely ask them to place the 
> configuration file on the server ASAP. 

and I don't know what to do on my Linux machine to "accept" the configuration.

Any help please, again?



Chris Robertson wrote:
> Les Mikesell wrote:
> 
> > Chris Robertson wrote:
> > 
> > 
> > > error403 wrote:
> > > 
> > > 
> > > > I'm thinking of installing/using some sftp server sofware on their 
> > > > computer.
> > > > 
> > > > 
> > > > 
> > > Better would be an rsyncd service, as that would allow you to only 
> > > transfer changes.
> > > 
> > > 
> > 
> > If they are unix/linux/mac boxes you can use rsync over ssh.  On windows 
> > you can use ssh port forwarding to connect to rsync in daemon mode.
> > 
> > 
> 
> Indeed.
> 
> Given the mention of NetBios in the original message, I made the 
> assumption that Windows clients were (exclusively) involved.  Thanks for 
> clarifying the other available options.
> 
> Chris
> 







Re: [BackupPC-users] backup of novel servers

2009-06-11 Thread Tino Schwarze
Hi Benedict,

On Thu, Jun 11, 2009 at 09:05:36AM +0300, Benedict simon wrote:

> i am using BackupPC to successfully back up linux clients and it is working fine.
> 
> i also have 2 novell Netware servers which i would like to backup with
> backuppc
> 
> does backuppc support backing up Novell Netware servers?
> 
> I have 4.11 and 5 servers

(Note: I don't know anything about Netware 4.11 nor 5).

If your Novell servers are reachable by rsync (possibly over ssh), have
tar (and ssh), or export a samba share, they may be backed up by
BackupPC.

If I remember correctly, Novell Netware has special file systems, so
a simple file-level backup might not be sufficient (ACLs missing or
something) - you should try a restore anyway.

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de



[BackupPC-users] Backup only new file(s)

2009-06-11 Thread Mirco Piccin
Hi all,
i have to backup a folder (using smb).

Every day (except sunday) a procedure stores in this folder a 120GB file.
The name of the file is the day name.

So, in a week, i have 6 different files generated (about 720 GB).
Every week the files are overwritten by the procedure.

I'd like to backup only the newest file, and not the whole folder.
The problem is that i suppose i must have a full backup of the folder
(720 GB), because $Conf{FullKeepCnt} must be >= 1, plus
incremental backup.
So, configuring:
$Conf{FullPeriod} = 6.97;
$Conf{IncrKeepCnt} = 6;

i'll have :
on sunday the full backup -> 720 GB
on monday the incremental backup  -> 720 GB (the full backup) plus 120
GB (the new monday file)
on tuesday the incremental backup  -> 840 GB (the full backup plus
incremental) plus 120 GB (the new tuesday file)

and so on, for a total of 1440 GB (the double of the effective disk
space needed).

And again, sunday BackupPC will move 720 GB of files, and so on.

Is there a way to backup only the new file (maybe playing with
$Conf{IncrLevels}), without a full?
Or a way to optimize it?

Thanks
Regards
M



Re: [BackupPC-users] send emails to customers AND admin?

2009-06-11 Thread Omar Llorens Crespo Domínguez
error403 wrote:
> Hi, I'm trying to find  a way to send an email to the personal email of the 
> people I'm doing their backups for.  I tried to search but the terms email 
> and message are so general it gives me almost all the posts on the forum!  :?
>
>   
Hi,

In each host's config you can configure the email with just this:
EMailAdminUserName = ho...@email.com, and backuppc sends a warning or
error for this host to that address. The EMailAdminUserName is for
knowing who sends the email.


-- 


Omar Llorens Crespo Domínguez.
JPL TSOLUCIO, SL
o...@tsolucio.com
www.tsolucio.com
www.bearnas.com
902 88 69 38


