Re: [BackupPC-users] 3.1.0beta0 - CGI Column sorting

2007-09-11 Thread Craig Barratt
Michael writes:

 The numeric columns appear to sort alphabetically.  For example:
 
 Full Size
 (GB)
 0.81
 1.82
 14.77
 2.55
 3.93

Yes, I noticed that too.  I'll fix it in the next version.

Craig



Re: [BackupPC-users] problem with ${FullKeepCnt} on a per-machine basis

2007-09-11 Thread Nicolas STRANSKY
I still have a problem with hosts with a particular ${FullKeepCnt}.

For example, this host should have no more than 6 full backups because
${FullKeepCnt} = [4, 2].

I had deleted the oldest full backups so that I only have 6. But here is
what I get now (7 full backups):

329  full         7/12 10:01
339  full         7/27 03:26
349  full         8/10 21:00
354  full         8/17 21:00
361  full         8/27 10:58
362  incremental  8/28 11:16
363  incremental  8/29 11:00
364  incremental  8/30 11:00
365  incremental  8/31 21:00
366  incremental  9/1  21:00
367  full         9/3  11:39
368  incremental  9/4  11:00
369  incremental  9/5  11:26
370  incremental  9/6  11:41
371  full         9/10 16:00

And here is the log file:

8<----------------------------------------------------------------
2007-09-01 21:00:27 incr backup started back to 2007-08-31 21:00:15
(backup #365) for directory mesdoc
2007-09-01 21:23:04 incr backup started back to 2007-08-31 21:00:15
(backup #365) for directory documents_and_settings
2007-09-01 21:27:53 incr backup 366 complete, 25 files, 855693391 bytes,
23 xferErrs (0 bad files, 0 bad shares, 23 other)

2007-09-03 11:39:52 full backup started for directory mesdoc (baseline
backup #366)
2007-09-03 13:52:03 full backup started for directory
documents_and_settings (baseline backup #366)
2007-09-03 14:10:29 full backup 367 complete, 144999 files, 11798042
bytes, 26 xferErrs (0 bad files, 0 bad shares, 26 other)
2007-09-03 14:10:29 removing full backup 344
2007-09-04 11:00:21 incr backup started back to 2007-09-03 11:39:52
(backup #367) for directory mesdoc
2007-09-04 11:31:16 incr backup started back to 2007-09-03 11:39:52
(backup #367) for directory documents_and_settings
2007-09-04 11:47:40 incr backup 368 complete, 382 files, 4741632447
bytes, 24 xferErrs (0 bad files, 0 bad shares, 24 other)

2007-09-05 11:26:58 incr backup started back to 2007-09-04 11:00:21
(backup #368) for directory mesdoc
2007-09-05 11:58:15 incr backup started back to 2007-09-04 11:00:21
(backup #368) for directory documents_and_settings
2007-09-05 12:15:19 incr backup 369 complete, 235 files, 4804126234
bytes, 22 xferErrs (0 bad files, 0 bad shares, 22 other)
2007-09-06 11:41:29 incr backup started back to 2007-09-05 11:26:58
(backup #369) for directory mesdoc
2007-09-06 12:13:25 incr backup started back to 2007-09-05 11:26:58
(backup #369) for directory documents_and_settings
2007-09-06 12:29:12 incr backup 370 complete, 360 files, 4852734035
bytes, 23 xferErrs (0 bad files, 0 bad shares, 23 other)

2007-09-10 16:00:07 full backup started for directory mesdoc (baseline
backup #370)
2007-09-10 18:27:08 full backup started for directory
documents_and_settings (baseline backup #370)
2007-09-10 18:45:17 full backup 371 complete, 145085 files, 118178367677
bytes, 17 xferErrs (0 bad files, 0 bad shares, 17 other)
8<----------------------------------------------------------------

As you can see, full backup 344 was deleted once, even though it was not the
oldest full backup; but at the next full backup, nothing was deleted, and
this is the problem.

Thanks for your help

-- 
Nicolas STRANSKY
Équipe Oncologie Moléculaire http://www.curie.fr/equipe/301
Institut Curie - UMR 144 - CNRS               Tel : +33 1 42 34 63 40
26, rue d'Ulm - 75248 Paris Cedex 5 - FRANCE  Fax : +33 1 42 34 63 49



Re: [BackupPC-users] problem with ${FullKeepCnt} on a per-machine basis

2007-09-11 Thread Frans Pop
On Tuesday 11 September 2007, Nicolas STRANSKY wrote:
 I still have a problem with hosts with a particular ${FullKeepCnt}.

 For example, this host should have no more than 6 full backups because
 ${FullKeepCnt} = [4, 2].
[...]
 As you can see, full backup 344 was deleted once, even though it was not the
 oldest full backup; but at the next full backup, nothing was deleted, and
 this is the problem.

It looks like the code is only looking at the full backups it expects to 
be there given the current settings, and does not allow for the fact that 
the settings may be changed. It seems to me that it does not delete the 
backups 329 and 339 (and maybe even 349 now), because it just does not 
expect them to be there.

An error in the end condition of a loop maybe?



Re: [BackupPC-users] problem with ${FullKeepCnt} on a per-machine basis

2007-09-11 Thread Craig Barratt
Nicolas writes:

 I still have a problem with hosts with a particular ${FullKeepCnt}.
 
 For example, this host should have no more than 6 full backups because
 ${FullKeepCnt} = [4, 2].
 
 I had deleted the oldest full backups so that I only have 6. But here is
 what I get now (7 full backups):

Thanks for sending me the backups and config files off list.

The problem is that your host config file has this:

${FullKeepCnt} = [4, 2];

instead of

$Conf{FullKeepCnt} = [4, 2];

So your main config.pl file setting of $Conf{FullKeepCnt} = [4, 2, 2, 1]
is used instead, which means 9 full backups will be kept.

I ran a test with your backups file (with 7 full backups) and
the correct setting of $Conf{FullKeepCnt} and the oldest full
(154) does get deleted, leaving 6 full backups as requested.

Craig



Re: [BackupPC-users] using backuppc to backupc another backuppc server

2007-09-11 Thread Rob Owens
I think your plan is good, except that I've been told that rsync takes a
long time to duplicate all the hardlinks in the BackupPC pool.  About a
month ago I had the same question as you and somebody on this list
recommended that I just set up BackupPC on the remote server and
configure it to talk to the host machines directly.  The only drawback
(if it even is a drawback) is that the remote server won't have the
exact same backup info as the local BackupPC server, since they will
likely perform their backups at different times of the day.
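
For reference, replicating the pool that way means telling rsync to preserve all
of those hard links, which is exactly the slow part. The command would be roughly
the following (paths and remote user are just placeholders):

  rsync -aH --delete /var/lib/backuppc/ backuppc@remote:/var/lib/backuppc/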

-Rob

[EMAIL PROTECTED] wrote:
 I have backuppc running on my LAN, and I want to send the backups over a
 T1 line to a remote server. I have about 30Gig of data on my backuppc
 partition. I tried using an archive host and it's too slow to stream all
 that data.

 So I installed backuppc on the remote server and set up rsync between the
 two servers. The backups should flow in a cascade effect to the
 remote server now. What do more experienced users think of this setup?






Re: [BackupPC-users] files already in pool are downloaded

2007-09-11 Thread David Koski
I should have mentioned that I do use rsync and have since discovered 
some pool files do not appear to be downloaded during backup.

Thanks,
David

On Monday 10 September 2007 09:00, Rob Owens wrote:
 My understanding is that with tar and smb, all files are downloaded (and
 then discarded if they're already in the pool).  Rsync is smart enough,
 though, not to download files already in the pool.

 -Rob

 David Koski wrote:
  I have been trying to get a good backup with backuppc (2.1.1) but it has
  been taking days.  I ran a dump on the command line so I could see what
  is going on and I see the files that are in the pool are being
  downloaded.  For example:
 
pool 700   511/1008039 home/daler/My
  Documents/DRAWINGS/Lakeport/Pics/C_03.tif
 
  This is a large file and at 750kb/s takes a while.  Is this expected?  I
  thought if they are in the pool they do not need to be downloaded.
 
  Thanks in advance,
  David Koski
  [EMAIL PROTECTED]
 




Re: [BackupPC-users] files already in pool are downloaded, Can't link..

2007-09-11 Thread David Koski
Another wrinkle: Many of these same pool files get an error:

Can't link /var/lib/backuppc/pc/bki/new/f%2f/fhome/path-and-file 
to /var/lib/backuppc/cpool/d/1/9/d19f21440531ec9046070a9ad79190c5

Yet, the pool file does not appear to have many links:

-rw-r-----  9 backuppc backuppc 38 2007-01-12 19:25 /var/lib/backuppc/cpool/d/1/9/d19f21440531ec9046070a9ad79190c5

Regards,
David

On Monday 10 September 2007 09:00, Rob Owens wrote:
 My understanding is that with tar and smb, all files are downloaded (and
 then discarded if they're already in the pool).  Rsync is smart enough,
 though, not to download files already in the pool.

 -Rob

 David Koski wrote:
  I have been trying to get a good backup with backuppc (2.1.1) but it has
  been taking days.  I ran a dump on the command line so I could see what
  is going on and I see the files that are in the pool are being
  downloaded.  For example:
 
pool 700   511/1008039 home/daler/My
  Documents/DRAWINGS/Lakeport/Pics/C_03.tif
 
  This is a large file and at 750kb/s takes a while.  Is this expected?  I
  thought if they are in the pool they do not need to be downloaded.
 
  Thanks in advance,
  David Koski
  [EMAIL PROTECTED]
 




Re: [BackupPC-users] IncrLevels with rsync

2007-09-11 Thread Rob Owens
Craig Barratt wrote:
 Rob writes:

   
 I just noticed the $Conf{IncrLevels} setting.  I'm using rsync and
 rsyncd as my transport, and I'd like to minimize my network usage since
 I'm backing up over the internet.  I don't care about disk or cpu usage.

 Does setting:
  $Conf{IncrLevels}  = [1, 2, 3, 4, 5, 6];
 do anything to reduce my network usage?  Or does rsync and the pooling
 mechanism already take care of that behind the scenes.
 

 Yes, it will reduce the network usage.  In 3.x each incremental depends
 on the backup of the next lower level, so this means a new file that
 appears after the last full will only be transferred once.

 Craig
   
Thanks Craig.

Is there any disadvantage to setting $Conf{IncrLevels}  = [1, 2, 3, 4,
5, 6]; when using rsync as the transport?  I'm trying to figure out if
it increases my chances of anything being missed.  (Holger, I'm sure
you've got a good answer to this one). 

-Rob



Re: [BackupPC-users] IncrLevels with rsync

2007-09-11 Thread Stephen Joyce
To followup on Rob's question a bit:

When using traditional backups, there is a real benefit to using a media 
rotation. I really like the Tower of Hanoi rotation 
(0,3,2,5,4,7,6,9,8,1,3,2,5,4,...)

I'm using the rotation above for both my regular BackupPC and my 
BackupPC4AFS servers. I know it makes sense for the AFS backups, as they're 
individual volume dumps.

But on a regular BackupPC server, backing up data files and using 
BackupPC's pooling, what kind of rotation (IncrLevels) makes sense? My gut 
instinct tells me that Tower of Hanoi is still good, but I'm not 100% sure 
due to the pooling and linking.

Cheers, Stephen
--
Stephen Joyce
Systems Administrator                        P A N I C
Physics & Astronomy Department               Physics & Astronomy
University of North Carolina at Chapel Hill  Network Infrastructure
voice: (919) 962-7214                        and Computing
fax: (919) 962-0480                          http://www.panic.unc.edu

On Tue, 11 Sep 2007, Rob Owens wrote:

 Craig Barratt wrote:
 Rob writes:


 I just noticed the $Conf{IncrLevels} setting.  I'm using rsync and
 rsyncd as my transport, and I'd like to minimize my network usage since
 I'm backing up over the internet.  I don't care about disk or cpu usage.

 Does setting:
  $Conf{IncrLevels}  = [1, 2, 3, 4, 5, 6];
 do anything to reduce my network usage?  Or does rsync and the pooling
 mechanism already take care of that behind the scenes.


 Yes, it will reduce the network usage.  In 3.x each incremental depends
 on the backup of the next lower level, so this means a new file that
 appears after the last full will only be transferred once.

 Craig

 Thanks Craig.

 Is there any disadvantage to setting $Conf{IncrLevels}  = [1, 2, 3, 4,
 5, 6]; when using rsync as the transport?  I'm trying to figure out if
 it increases my chances of anything being missed.  (Holger, I'm sure
 you've got a good answer to this one).

 -Rob





Re: [BackupPC-users] files already in pool are downloaded, Can't link..

2007-09-11 Thread Les Mikesell
David Koski wrote:
 Another wrinkle: Many of these same pool files get an error:
 
 Can't link /var/lib/backuppc/pc/bki/new/f%2f/fhome/path-and-file 
 to /var/lib/backuppc/cpool/d/1/9/d19f21440531ec9046070a9ad79190c5
 
 Yet, the pool file does not appear to have many links:
 
 -rw-r-----  9 backuppc backuppc 38 2007-01-12 19:25 /var/lib/backuppc/cpool/d/1/9/d19f21440531ec9046070a9ad79190c5

Is the entire /var/lib/backuppc/ on the same filesystem?  Does it have 
free inodes (df -i)?
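
For example, something along these lines shows both (the pool path is taken from
the error messages above; adjust it if your layout differs):

  df -i /var/lib/backuppc
  df /var/lib/backuppc /var/lib/backuppc/cpool /var/lib/backuppc/pc

The first shows free inodes on the pool filesystem; the second should show all
three directories on the same filesystem.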

-- 
   Les Mikesell
[EMAIL PROTECTED]




Re: [BackupPC-users] files already in pool are downloaded

2007-09-11 Thread Rich Rauenzahn


Rob Owens wrote:
 My understanding is that with tar and smb, all files are downloaded (and
 then discarded if they're already in the pool).  Rsync is smart enough,
 though, not to download files already in the pool.

 -Rob
   

I was about to post the same thing.  I moved/renamed some directories 
around on the server I am backing up, and it is downloading the entire 
file(s) again.   Is there any interest in having BackupPC w/ rsync check 
the pool first before downloading?   Is there a reason behind not doing 
it, or is it just something that hasn't been gotten to yet?

Rich

 David Koski wrote:
   
 I have been trying to get a good backup with backuppc (2.1.1) but it has been
 taking days.  I ran a dump on the command line so I could see what is going
 on and I see the files that are in the pool are being downloaded.  For 
 example:

   pool 700   511/1008039 home/daler/My 
 Documents/DRAWINGS/Lakeport/Pics/C_03.tif

 This is a large file and at 750kb/s takes a while.  Is this expected?  I 
 thought if
 they are in the pool they do not need to be downloaded.

 




Re: [BackupPC-users] files already in pool are downloaded

2007-09-11 Thread Les Mikesell
Rich Rauenzahn wrote:
 
 Rob Owens wrote:
 My understanding is that with tar and smb, all files are downloaded (and
 then discarded if they're already in the pool).  Rsync is smart enough,
 though, not to download files already in the pool.

 -Rob
   
 
 I was about to post the same thing.  I moved/renamed some directories 
 around on the server I am backing up, and it is downloading the entire 
 file(s) again.   Is there any interest in having BackupPC w/ rsync check 
 the pool first before downloading?   Is there a reason behind not doing 
 it, or is it just something that hasn't been gotten to yet?

I don't think the remote rsync passes enough information to match the 
pool hashes.  The check is done against files of the same name/location 
from the last backup and when matches are found there, only file 
differences are transferred.

-- 
   Les Mikesell
[EMAIL PROTECTED]



Re: [BackupPC-users] Not ending DumpPostUserCmd

2007-09-11 Thread Tobias Brunner
Hi

Does no one have an idea?
Is more information needed?

I appreciate any help...

Thanks!
Regards,
Tobias

Tobias Brunner wrote:
 Hi everybody
 
 On one server, I have DumpPostUserCmd configured to execute a script. This is 
 executed after the backup, but it doesn't end so the backup can never finish.
 
 Detailed description:
 The DumpPreUserCmd ($sshPath -q -x -l root $host 
 /usr/local/bin/zimbra_pre_back.sh) stops the mailserver on the remote 
 machine. That works perfectly and the backup runs correctly.
 After the backup has finished, the DumpPostUserCmd ($sshPath -q -x -l root 
 $host /usr/local/bin/zimbra_post_back.sh) is executed. But this process never
 ends, despite the exit 0 statement at the end of the bash script.
 When I do a ps -Af | grep 7224 (7224 is the running backup process) I get 
 this:
 
 backuppc  7224  4502  0 01:00 ?00:02:38 /usr/bin/perl 
 /usr/local/BackupPC/bin/BackupPC_dump server
 backuppc  7235  7224  0 01:01 ?00:00:01 [BackupPC_dump] defunct
 backuppc  7236  7224  1 01:01 ?00:09:49 [BackupPC_dump] defunct
 backuppc  7276  7224  0 01:24 ?00:00:00 /usr/bin/ssh -q -x -l root 
 server.domain.ch /usr/local/bin/zimbra_post_back.sh
 
 After killing 7276 the backup finishes and I can see this in the Xfer Log:
 
 Executing DumpPreUserCmd: /usr/bin/ssh -q -x -l root server.domain.ch 
 /usr/local/bin/zimbra_pre_back.sh
 Host server.domain.ch
   Stopping stats...Done
   Stopping mta...Done
   Stopping spell...Done
   Stopping snmp...Done
   Stopping antivirus...Done
   Stopping antispam...Done
   Stopping imapproxy...Done
   Stopping mailbox...Done
   Stopping logger...Done
   Stopping ldap...Done
 Successfully stopped Zimbra at 01:01:04
 incr backup started back to 2007-08-30 13:28:11 (backup #9) for directory etc
 ...
 Executing DumpPostUserCmd: /usr/bin/ssh -q -x -l root server.domain.ch 
 /usr/local/bin/zimbra_post_back.sh
 Host server.domain.ch
   Starting ldap...Done.
   Starting logger...Done.
   Starting mailbox...Done.
   Starting antispam...Done.
   Starting antivirus...Done.
   Starting snmp...Done.
   Starting spell...Done.
   Starting mta...Done.
   Starting stats...Done.
 Successfully started Zimbra at 01:25:22
 
 That looks ok, except that the script does not end...
 Does anyone have an idea why that is?
 I have tried everything even using timeoutd on the remote host (which works 
 perfectly) but the ssh process on the backuppc server does not exit =(.
 
 Thanks for your help
 
 Regards,
 Tobias
 
 
 zimbra_pre_back.sh:
 #!/bin/sh

 my_date=`date +%Y-%m-%d`
 my_time=`date +%H:%M:%S`
 echo "Stopping time: $my_time" >> /opt/zimbra/backup/zm_backup-$my_date.log
 su - zimbra -c "/opt/zimbra/bin/zmcontrol stop" | tee -a \
 /opt/zimbra/backup/zm_backup-$my_date.log
 my_time=`date +%H:%M:%S`
 echo "Successfully stopped Zimbra at $my_time"
 exit 0

 zimbra_post_back.sh:
 #!/bin/sh

 my_date=`date +%Y-%m-%d`
 my_time=`date +%H:%M:%S`
 echo "Starting time: $my_time" >> /opt/zimbra/backup/zm_backup-$my_date.log
 su - zimbra -c "/opt/zimbra/bin/zmcontrol start" | tee -a \
 /opt/zimbra/backup/zm_backup-$my_date.log
 my_time=`date +%H:%M:%S`
 echo "Successfully started Zimbra at $my_time"
 exit 0
 
 



Re: [BackupPC-users] files already in pool are downloaded

2007-09-11 Thread Rich Rauenzahn



Les Mikesell wrote:

Rich Rauenzahn wrote:
  

Rob Owens wrote:


My understanding is that with tar and smb, all files are downloaded (and
then discarded if they're already in the pool).  Rsync is smart enough,
though, not to download files already in the pool.

-Rob
  
  
I was about to post the same thing.  I moved/renamed some directories 
around on the server I am backing up, and it is downloading the entire 
file(s) again.   Is there any interest in having BackupPC w/ rsync check 
the pool first before downloading?   Is there a reason behind not doing 
it, or is it just something that hasn't been gotten to yet?



I don't think the remote rsync passes enough information to match the 
pool hashes.  The check is done against files of the same name/location 
from the last backup and when matches are found there, only file 
differences are transferred.
  


I'm looking through the sources now.. I assumed that somehow the 
interface to File::RsyncP could return a checksum to BackupPC... can't 
tell if they are that tightly bound or not.  How/when does compression 
occur?  Ah, I see.  It passes an I/O object into RsyncP.  I think I'll 
move this to the devel list =-).


Rich


Re: [BackupPC-users] Not ending DumpPostUserCmd

2007-09-11 Thread Ambrose Li
Hi,

On 11/09/2007, Tobias Brunner [EMAIL PROTECTED] wrote:
[...]
Starting stats...Done.
  Successfully started Zimbra at 01:25:22
 
  That looks ok, except that the script does not end...
  Does anyone have an idea why that is?

Maybe try to pass the -n option to ssh? Sometimes that makes a difference.
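
For example, this is just the DumpPostUserCmd command from the log above with -n
added (which makes ssh read stdin from /dev/null):

  /usr/bin/ssh -q -x -n -l root server.domain.ch /usr/local/bin/zimbra_post_back.sh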


-- 
cheers,
-ambrose

Gmail must die. Yes, I use it, but it still must die.
PS: Don't trust everything you read in Wikipedia. (Very Important)



Re: [BackupPC-users] files already in pool are downloaded

2007-09-11 Thread Les Mikesell
Rich Rauenzahn wrote:
 
 I don't think the remote rsync passes enough information to match the 
 pool hashes.  The check is done against files of the same name/location 
 from the last backup and when matches are found there, only file 
 differences are transferred.
   
 
 I'm looking through the sources now.. I assumed that somehow the 
 interface to File::RsyncP could return a checksum to BackupPC... can't 
 tell if they are that tightly bound or not.  How/when does compression 
 occur?  Ah, I see.  It passes an I/O object into RsyncP.  I think I'll 
 move this to the devel list =-).

Don't forget that the other end of the conversation is running stock 
rsync and that you may have collisions in your initial file hash.

-- 
   Les Mikesell
[EMAIL PROTECTED]



-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Not ending DumpPostUserCmd

2007-09-11 Thread mna.news
On Thursday 6 September 2007, Tobias Brunner wrote:
Evening,

[...]
 echo "Stopping time: $my_time" >> /opt/zimbra/backup/zm_backup-$my_date.log
 su - zimbra -c "/opt/zimbra/bin/zmcontrol stop" | tee -a

My guess is around the stdout and stderr redirections:
I think they are not closed properly by the redirection or by the tee program, so the
ssh connection is still active because of that.

I would try ssh -e, which may help by setting a correct escape character at the end
of the script, or ssh -t, which allocates a tty; that may also help.

I'm not sure about those tips; let us know if they help.

mna.
-- 
My beard is alive, since it grows.
If I cut it, it does not cry out.
A plant is alive and does not cry out when you cut it.
Therefore, my beard is a plant.
Boris Vian, Les bâtisseurs d'empire.



Re: [BackupPC-users] Not ending DumpPostUserCmd

2007-09-11 Thread Stephen Joyce
Your ssh isn't exiting because a child process, in this case zimbra, 
remains. As some others have suggested, this is due to ssh waiting on both 
stdin and stdout of the child process to close.

For example, if you do the following, you'll experience a hang:
  pc1$ ssh pc2
  pc2$ xterm &
  pc2$ logout
...hang without exit... until you close the xterm.

Something like the following should behave more like you expect:
  pc1$ ssh pc2
  pc2$ xterm < /dev/null > /dev/null &
  pc2$ logout
...ssh session terminates
  pc1$
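
Applied to your case, a rough and untested sketch of zimbra_post_back.sh with the
zmcontrol child's stdio redirected (so nothing is left holding the ssh session
open) might look like this; note that zmcontrol's output then goes only to the
log file, not to BackupPC's XferLog:

  #!/bin/sh
  # Same script as posted earlier, but the zmcontrol child gets its stdin,
  # stdout and stderr pointed away from the ssh session.
  my_date=`date +%Y-%m-%d`
  my_time=`date +%H:%M:%S`
  echo "Starting time: $my_time" >> /opt/zimbra/backup/zm_backup-$my_date.log
  su - zimbra -c "/opt/zimbra/bin/zmcontrol start" \
      < /dev/null >> /opt/zimbra/backup/zm_backup-$my_date.log 2>&1
  my_time=`date +%H:%M:%S`
  echo "Successfully started Zimbra at $my_time"
  exit 0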

Hope this helps.

Cheers, Stephen
--
Stephen Joyce
Systems Administrator                        P A N I C
Physics & Astronomy Department               Physics & Astronomy
University of North Carolina at Chapel Hill  Network Infrastructure
voice: (919) 962-7214                        and Computing
fax: (919) 962-0480                          http://www.panic.unc.edu

  Some people make the world turn and others just watch it spin.
-- Jimmy Buffet

On Tue, 11 Sep 2007, Tobias Brunner wrote:

 Hi

 Does no one have an idea?
 Is more information needed?

 I appreciate any help...

 Thanks!
 Regards,
 Tobias

 Tobias Brunner wrote:
 Hi everybody

 On one server, I have DumpPostUserCmd configured to execute a script. This 
 is executed after the backup, but it doesn't end so the backup can never 
 finish.

 Detailed description:
 The DumpPreUserCmd ($sshPath -q -x -l root $host 
 /usr/local/bin/zimbra_pre_back.sh) stops the mailserver on the remote 
 machine. That works perfectly and the backup runs correctly.
 After the backup has finished, the DumpPostUserCmd ($sshPath -q -x -l root 
 $host /usr/local/bin/zimbra_post_back.sh) is executed. But this process never
 ends, despite the exit 0 statement at the end of the bash script.
 When I do a ps -Af | grep 7224 (7224 is the running backup process) I get 
 this:

 backuppc  7224  4502  0 01:00 ?00:02:38 /usr/bin/perl 
 /usr/local/BackupPC/bin/BackupPC_dump server
 backuppc  7235  7224  0 01:01 ?00:00:01 [BackupPC_dump] defunct
 backuppc  7236  7224  1 01:01 ?00:09:49 [BackupPC_dump] defunct
 backuppc  7276  7224  0 01:24 ?00:00:00 /usr/bin/ssh -q -x -l root 
 server.domain.ch /usr/local/bin/zimbra_post_back.sh

 After killing 7276 the backup finishes and I can see this in the Xfer Log:

 Executing DumpPreUserCmd: /usr/bin/ssh -q -x -l root server.domain.ch 
 /usr/local/bin/zimbra_pre_back.sh
 Host server.domain.ch
  Stopping stats...Done
  Stopping mta...Done
  Stopping spell...Done
  Stopping snmp...Done
  Stopping antivirus...Done
  Stopping antispam...Done
  Stopping imapproxy...Done
  Stopping mailbox...Done
  Stopping logger...Done
  Stopping ldap...Done
 Successfully stopped Zimbra at 01:01:04
 incr backup started back to 2007-08-30 13:28:11 (backup #9) for directory etc
 ...
 Executing DumpPostUserCmd: /usr/bin/ssh -q -x -l root server.domain.ch 
 /usr/local/bin/zimbra_post_back.sh
 Host server.domain.ch
  Starting ldap...Done.
  Starting logger...Done.
  Starting mailbox...Done.
  Starting antispam...Done.
  Starting antivirus...Done.
  Starting snmp...Done.
  Starting spell...Done.
  Starting mta...Done.
  Starting stats...Done.
 Successfully started Zimbra at 01:25:22

 That looks ok, except that the script does not end...
 Does anyone have an idea why that is?
 I have tried everything even using timeoutd on the remote host (which works 
 perfectly) but the ssh process on the backuppc server does not exit =(.

 Thanks for your help

 Regards,
 Tobias


 zimbra_pre_back.sh:
 #!/bin/sh

 my_date=`date +%Y-%m-%d`
 my_time=`date +%H:%M:%S`
 echo "Stopping time: $my_time" >> /opt/zimbra/backup/zm_backup-$my_date.log
 su - zimbra -c "/opt/zimbra/bin/zmcontrol stop" | tee -a \
 /opt/zimbra/backup/zm_backup-$my_date.log
 my_time=`date +%H:%M:%S`
 echo "Successfully stopped Zimbra at $my_time"
 exit 0

 zimbra_post_back.sh:
 #!/bin/sh

 my_date=`date +%Y-%m-%d`
 my_time=`date +%H:%M:%S`
 echo "Starting time: $my_time" >> /opt/zimbra/backup/zm_backup-$my_date.log
 su - zimbra -c "/opt/zimbra/bin/zmcontrol start" | tee -a \
 /opt/zimbra/backup/zm_backup-$my_date.log
 my_time=`date +%H:%M:%S`
 echo "Successfully started Zimbra at $my_time"
 exit 0



[BackupPC-users] Backing up one share uncompressed

2007-09-11 Thread Mark Allison
Hi,

I am using compression to back up all my shares on all my PCs. There is,
however, one share with my music collection on it - I would like this share
only to be backed up uncompressed. Is this possible? What do I need to do?

Thanks!
Mark.


Re: [BackupPC-users] Got fatal error during xfer (fileListReceive failed)

2007-09-11 Thread Robert Saunders
On Mon, 2007-09-10 at 23:49 -0700, Craig Barratt wrote:
 Robert writes:
 
  I removed the -v option from config.pl, restarted backuppc, this is the
  log:
  
  Running: /usr/bin/ssh -q -l root robert-laptop /usr/bin/rsync --server
  --sender --numeric-ids --perms --owner --group -D --links --times
  --block-size=2048 --recursive -D --ignore-times . /home/
  Xfer PIDs are now 5985
  Got remote protocol 1651076184
  Fatal error (bad version): Xlib: connection to :0.0 refused by server
 
 Ssh is saying Xlib: connection to :0.0 refused by server.
 
 Add the -x option to ssh (to disable X11 port forwarding).  Note that
 the user's and global ssh_config override the command-line options, so
 if you get the same error you should disable X11 port forwarding there
 too.
 
 Craig

I have added -x but this makes no difference.  I have (I think) disabled
X11 port forwarding in /etc/ssh/ssh_config (on the laptop).  Again no
difference.  Error is currently:

Running: /usr/bin/ssh -q -x -l root robert-laptop /usr/bin/rsync
--server --sender --numeric-ids --perms --owner --group -D --links
--times --block-size=2048 --recursive -D --ignore-times . /home/
Xfer PIDs are now 9126
Got remote protocol 1651076184
Fatal error (bad version): Xlib: connection to :0.0 refused by server
Xlib: 
Read EOF: 
Tried again: got 0 bytes
fileListReceive() failed
Done: 0 files, 0 bytes
Got fatal error during xfer (fileListReceive failed)
Backup aborted (fileListReceive failed)

Robert
