Re: [BackupPC-users] Trouble with tape archiving

2007-10-26 Thread Stian Jordet
Craig Barratt wrote:
 Stian writes:
   
 [EMAIL PROTECTED]:~$ /usr/share/backuppc/bin/BackupPC_tarCreate -h pontiac 
-n -1 -s \* . | /bin/gzip > /dev/nst0
 which is what backuppc tries to do when archiving to tape. And this does 
 not work.
 

 As Dan and Ali mention, you can use buffer or dd to reblock
 the stream.
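For what it's worth, dd can do that reblocking too. A minimal sketch, writing to a scratch file so the padding is easy to inspect (the 32 KiB record size and the /tmp path are assumptions for illustration; a real run would write to /dev/nst0 instead):

```shell
# iflag=fullblock makes GNU dd assemble full input blocks from the pipe;
# conv=sync then NUL-pads only the final partial record, so the tape
# drive always receives complete 32 KiB blocks.
printf 'short gzip stream' | dd iflag=fullblock of=/tmp/reblocked bs=32k conv=sync 2>/dev/null
wc -c < /tmp/reblocked    # a whole multiple of 32768
```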

 You can make a copy of BackupPC_archiveHost and change it to
 run the commands of your choice.  Update $Conf{ArchiveClientCmd}
 to point at your customized archive script.
   
Hi,

sorry for the very late reply. I haven't had time to check this out 
until now. The problem was that the last block written had to be padded 
out to the blocksize. I don't know why this only had to be done when 
piping through standalone gzip/bzip2; GNU tar worked fine with the z or 
j option. Either way, I'm now running with this patch:

--- BackupPC_archiveHost	2007-08-27 22:33:53.0 +0200
+++ BackupPC_archiveHost2	2007-10-10 15:45:55.392387027 +0200
@@ -110,7 +110,7 @@
 #
 # Output file is a device or a regular file, so don't use split
 #
-$cmd  .= " > $outLoc";
+$cmd  .= " | buffer -B -o $outLoc";
 $mesg .= " to $outLoc";
 } else {
 mkpath($outLoc) if ( !-d $outLoc );

I'm just curious: is there a better way to do this? Perhaps within Perl, 
so I don't have to fork a process to run buffer?

Thanks for BackupPC once again. It's perfect! (Although I wish it better 
supported off-site storage.)

Regards,
Stian

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] URGENT !! Windows+WIFI incompatible with BACKUPPC

2007-10-26 Thread tuxoide
Hello

I have a BackupPC server on Wi-Fi, connected to a Freebox (Wi-Fi router) 
and to my Wi-Fi laptop.

I want to back up my Windows laptop (NTFS).

Every Xfer method I tried with BackupPC FAILED to back up my Windows 
machine. To be precise, I tested every solution I could find: several 
Cygwin rsync versions (2.6 to 3.0cvs), the patch that adds mssleep 
delays, cwRsync, DeltaCopy...

rsyncd: failed every time
rsync+ssh: works, but hangs after one or two files
smb method: failed too, with a timeout after 2 milliseconds
tar: hangs!

I also tried SSH-HPN: failed too.
I tried every rsync parameter, but the transfer hangs every time.
I believe the problem comes from the Cygwin pipe!

For Windows users:
- have you tried an rsync client written in Python or Perl?
- do you know whether Microsoft's robocopy could help me?
- would an NFS or FTP backup run into the same issue?

Best Regards, Olivier




-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] tar over ssh error

2007-10-26 Thread Lai Chen Kang
I am using CentOS as the backup server to try to back up a Solaris 5.8 server.

Below is the output from the logs

Running: /usr/bin/ssh -q -x -n -l root 10.250.2.200 /usr/bin/tar -cvf - -C /
./opt/rts/rtd ./etc/hosts ./etc/services ./var/spool/cron
full backup started for directory /
Xfer PIDs are now 5008,5007
a ./opt/rts/rtd/ 0K
a ./opt/rts/rtd/bin/ 0K
a ./opt/rts/rtd/bin/rtdcleantradedb.gz 277K
a ./opt/rts/rtd/bin/rtddbfixuniqueid.gz 577K
a ./opt/rts/rtd/bin/rtddbutil.gz 826K
...
...
...
  create d 755   0/3   0 var/spool/cron
  create d 755   0/3   0 var/spool/cron/atjobs
  create d 755   0/3   0 var/spool/cron/crontabs
  pool 644   0/3 190 var/spool/cron/crontabs/adm
  pool 444   0/0 750 var/spool/cron/crontabs/lp
  pool 400   0/1 511 var/spool/cron/crontabs/root
  pool 644   0/3 308 var/spool/cron/crontabs/sys
  pool 444   0/3 404 var/spool/cron/crontabs/uucp
  pool 400 0/9113094 var/spool/cron/crontabs/rts
tarExtract: Done: 0 errors, 1264 filesExist, 5624714949 sizeExist, 768158569
sizeExistComp, 1295 filesTotal, 5895322934 sizeTotal
Got fatal error during xfer (a ./var/spool/cron/crontabs/rts 4K)
Backup aborted (a ./var/spool/cron/crontabs/rts 4K)

Any ideas?
-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Can't get excludes to work

2007-10-26 Thread Arch Willingham
I am trying to exclude one directory on a Windows share, but nothing I have 
tried works. In short, I want to exclude c:\backup on a machine called 
PARKS. I have a custom config.pl for that computer, and I set the exclude 
directory via the web interface. The entry it made in PARKS.pl is:


$Conf{BackupFilesExclude} = {
  '\\backup\\*' => [
''
  ]
};



What did I do wrong?

BTW... I just noticed this in the help: Users report that for smbclient you 
should specify a directory followed by ``/*'', e.g. ``/proc/*'', instead of 
just ``/proc''.

Arch

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Can't get excludes to work

2007-10-26 Thread Toni Van Remortel
Arch Willingham wrote:
 Even though the slashes go the other way in Windows
Yes. It's a Unix system that is taking the backups, so you need to use 
the Unix way to address directories: / is the separator, and \ is just an 
escape character.

-- 
Toni Van Remortel
Linux System Engineer @ Precision Operations NV
+32 3 452 92 26 - [EMAIL PROTECTED]


-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Can't get excludes to work

2007-10-26 Thread dan
You are specifying the directory incorrectly: instead of \\backup\\*, it
should be /backup/
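For comparison, the usual shape of the setting is a share-name key mapping to a list of Unix-style paths. A sketch only: the '*' share key (meaning "every share") and the exact path are illustrative, not taken from Arch's config:

```perl
$Conf{BackupFilesExclude} = {
    # key: share name ('*' = every share); values: Unix-style paths
    '*' => ['/backup'],
};
```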

On 10/26/07, Arch Willingham [EMAIL PROTECTED] wrote:

 I am trying to exclude one directory on a windows share but nothing I have
 tried works. In short, I want to exclude c:\backup on a machine called
 PARKS. I have a custom  config.pl for that computer. I set the exclude
 directory via the web address. The entry it made in PARKS.pl is:


 $Conf{BackupFilesExclude} = {
   '\\backup\\*' => [
 ''
   ]
 };



 What did I do wrong?

 BTW...I just noticed in the help Users report that for smbclient you
 should specify a directory followed by ``/*'', eg: ``/proc/*'', instead of
 just ``/proc''.

 Arch

 -
 This SF.net email is sponsored by: Splunk Inc.
 Still grepping through log files to find problems?  Stop.
 Now Search log events and configuration files using AJAX and a browser.
 Download your FREE copy of Splunk now  http://get.splunk.com/
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] tar over ssh error

2007-10-26 Thread dan
I'm not very familiar with Solaris 5 - does it do file locking on the cron
files? If you exclude the cron directory, do you get the same error on other
files?
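For a quick test of that idea, the exclusion could be sketched in the host's config like this (hypothetical; the './var/spool/cron' pattern assumes the tar share '/' and the file list shown in the log above):

```perl
$Conf{BackupFilesExclude} = {
    # key: share name from $Conf{TarShareName}; pattern relative to it
    '/' => ['./var/spool/cron'],
};
```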

On 10/26/07, Lai Chen Kang [EMAIL PROTECTED] wrote:

 I am using CentOS as the backup server try to backup a Solaris 5.8 server.

 Below is the output from the logs

 Running: /usr/bin/ssh -q -x -n -l root 10.250.2.200 /usr/bin/tar -cvf - -C
 / ./opt/rts/rtd ./etc/hosts ./etc/services ./var/spool/cron
 full backup started for directory /
 Xfer PIDs are now 5008,5007
 a ./opt/rts/rtd/ 0K
 a ./opt/rts/rtd/bin/ 0K
 a ./opt/rts/rtd/bin/rtdcleantradedb.gz 277K
 a ./opt/rts/rtd/bin/rtddbfixuniqueid.gz 577K
 a ./opt/rts/rtd/bin/rtddbutil.gz 826K
 ...
 ...
 ...
   create d 755   0/3   0 var/spool/cron
   create d 755   0/3   0 var/spool/cron/atjobs
   create d 755   0/3   0 var/spool/cron/crontabs
   pool 644   0/3 190 var/spool/cron/crontabs/adm
   pool 444   0/0 750 var/spool/cron/crontabs/lp
   pool 400   0/1 511 var/spool/cron/crontabs/root
   pool 644   0/3 308 var/spool/cron/crontabs/sys
   pool 444   0/3 404 var/spool/cron/crontabs/uucp
   pool 400 0/9113094 var/spool/cron/crontabs/rts
 tarExtract: Done: 0 errors, 1264 filesExist, 5624714949 sizeExist,
 768158569 sizeExistComp, 1295 filesTotal, 5895322934 sizeTotal
 Got fatal error during xfer (a ./var/spool/cron/crontabs/rts 4K)
 Backup aborted (a ./var/spool/cron/crontabs/rts 4K)

 Any ideas?

 -
 This SF.net email is sponsored by: Splunk Inc.
 Still grepping through log files to find problems?  Stop.
 Now Search log events and configuration files using AJAX and a browser.
 Download your FREE copy of Splunk now  http://get.splunk.com/
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/


-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Can't get excludes to work

2007-10-26 Thread Arch Willingham
Even though the slashes go the other way in Windows?
 
Arch

-Original Message-
From: dan [mailto:[EMAIL PROTECTED]
Sent: Friday, October 26, 2007 10:43 AM
To: Arch Willingham
Cc: BackupPC-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] Can't get excludes to work


you are specifying the directory incorrectly.  instead of \\backup\\* it should 
be /backup/


On 10/26/07, Arch Willingham [EMAIL PROTECTED] wrote: 

I am trying to exclude one directory on a windows share but nothing I have 
tried works. In short, I want to exclude c:\backup on a machine called 
PARKS. I have a custom   config.pl for that computer. I set the exclude 
directory via the web address. The entry it made in PARKS.pl is:


$Conf{BackupFilesExclude} = {
  '\\backup\\*' => [
''
  ]
};



What did I do wrong?

BTW...I just noticed in the help Users report that for smbclient you should 
specify a directory followed by ``/*'', eg: ``/proc/*'', instead of just 
``/proc''. 

Arch

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser. 
Download your FREE copy of Splunk now  http://get.splunk.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:  http://backuppc.wiki.sourceforge.net 
http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/



-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Host not showing up

2007-10-26 Thread Yaakov Chaikin
On 10/25/07, dan [EMAIL PROTECTED] wrote:
 are you login in to the web interface with the admin password or your own?
 if it your own, you dont have that configured in the host file

 host        dhcp    user      moreUsers    # <--- do not edit this line
 tbiqdev     0       yaakov
 yaakovlt    0       backup***

 ***is the managing username.


What do you mean by this?

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Host not showing up

2007-10-26 Thread Yaakov Chaikin
Yes, I did restart the Apache server. In fact, I even restarted the
machine. Still the same thing.

Any ideas?

On 10/25/07, John Rouillard [EMAIL PROTECTED] wrote:
 On Thu, Oct 25, 2007 at 09:12:24PM -0400, Yaakov Chaikin wrote:
  I have one host working just fine for already a couple of months...
  So, now I wanted to add another machine to be backed up. So, I edited
  the hosts file and added another almost identical line to it. Now, it
  looks like this:
 
  host        dhcp    user      moreUsers    # <--- do not edit this line
  tbiqdev     0       yaakov
  yaakovlt    0       backup
 
  The tbiqdev is the old one. It's showing up and working just fine. The
  'yaakovlt' one doesn't show up in the browser (I have apache setup). I
  have restarted both BackupPC and Apache, but only 'tbiqdev' is showing
  up.
  [...]
  Anyone see what I did wrong?

 I think the webapp talks to the server to get host lists. Did
 you try reloading the server?

 --
 -- rouilj

 John Rouillard
 System Administrator
 Renesys Corporation
 603-643-9300 x 111


-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Host not showing up

2007-10-26 Thread Stephen Joyce
On Fri, 26 Oct 2007, Yaakov Chaikin wrote:

 On 10/25/07, dan [EMAIL PROTECTED] wrote:
 are you login in to the web interface with the admin password or your own?
 if it your own, you dont have that configured in the host file

 host        dhcp    user      moreUsers    # <--- do not edit this line
 tbiqdev     0       yaakov
 yaakovlt    0       backup***

 ***is the managing username.


 What do you mean by this?

He means that hosts are associated with users. Only a user associated with 
a host may view that host (although the admin user may view all hosts).

Since you have the host yaakovlt associated with user backup, only the user 
backup and any backuppc admin users will be able to view the host. Put 
another way, if yaakov is not an admin in backuppc, then he should not be 
able to see host yaakovlt because it is not his host.

Change the line above to
yaakovlt    0       yaakov
restart backuppc, and user yaakov should see both hosts.

Cheers, Stephen
--
Stephen Joyce
Systems Administrator                            P A N I C
Physics & Astronomy Department               Physics & Astronomy
University of North Carolina at Chapel Hill  Network Infrastructure
voice: (919) 962-7214                            and Computing
fax: (919) 962-0480                      http://www.panic.unc.edu


-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Host not showing up

2007-10-26 Thread dan
Kind of.

The username you put in the hosts file only applies to the web session 
you log in to BackupPC with.

If I pick some random name (it does not have to be a user on the system), 
say franko, and add that user to the htpasswd file:

htpasswd /etc/backuppc/htpasswd franko

it will ask me for a password. Now franko can log in with that password.

Next, I can add franko to the hosts file:

host        dhcp    user    moreUsers
yaakovlt    0       franko

Now franko, and the backuppc admin user, can manage yaakovlt, but no one 
else.

If I change this to

host        dhcp    user    moreUsers
yaakovlt    0       franko  stevo

then franko, stevo, and backuppc can manage it.

Does that make sense?



On 10/26/07, Yaakov Chaikin [EMAIL PROTECTED] wrote:

 Really? Cool!

 So, the user listed in the 'hosts' files has nothing to do with the
 user set up on the yaakovlt machine, but has to do with the user setup
 in BackupPC?

 Thanks,
 Yaakov.

 On 10/26/07, Stephen Joyce [EMAIL PROTECTED] wrote:
  On Fri, 26 Oct 2007, Yaakov Chaikin wrote:
 
   On 10/25/07, dan [EMAIL PROTECTED] wrote:
   are you login in to the web interface with the admin password or your
 own?
   if it your own, you dont have that configured in the host file
  
   host        dhcp    user      moreUsers    # <--- do not edit this line
   tbiqdev     0       yaakov
   yaakovlt    0       backup***
  
   ***is the managing username.
  
  
   What do you mean by this?
 
  He means that hosts are associated with users. Only a user associated
 with
  a host may view that host (although the admin user may view all hosts).
 
  Since you have the host yaakovlt associated with user backup, only the
 user
  backup and any backuppc admin users will be able to view the host. Put
  another way, if yaakov is not an admin in backuppc, then he should not
 be
  able to see host yaakovlt because it is not his host.
 
  Change the line above to
  yaakovlt    0       yaakov
  restart backuppc, and user yaakov should see both hosts.
 
  Cheers, Stephen
  --
  Stephen Joyce
  Systems Administrator                            P A N I C
  Physics & Astronomy Department               Physics & Astronomy
  University of North Carolina at Chapel Hill  Network Infrastructure
  voice: (919) 962-7214                            and Computing
  fax: (919) 962-0480                      http://www.panic.unc.edu
 
 

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Host not showing up

2007-10-26 Thread Stephen Joyce
On Fri, 26 Oct 2007, Yaakov Chaikin wrote:

 Really? Cool!

 So, the user listed in the 'hosts' files has nothing to do with the
 user set up on the yaakovlt machine, but has to do with the user setup
 in BackupPC?

Correct. Look at "User name" and "More users" at
http://backuppc.sourceforge.net/faq/BackupPC.html#step_4__setting_up_the_hosts_file
for more info.

Cheers, Stephen
--
Stephen Joyce
Systems Administrator                            P A N I C
Physics & Astronomy Department               Physics & Astronomy
University of North Carolina at Chapel Hill  Network Infrastructure
voice: (919) 962-7214                            and Computing
fax: (919) 962-0480                      http://www.panic.unc.edu

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Host not showing up

2007-10-26 Thread Yaakov Chaikin
Really? Cool!

So, the user listed in the 'hosts' files has nothing to do with the
user set up on the yaakovlt machine, but has to do with the user setup
in BackupPC?

Thanks,
Yaakov.

On 10/26/07, Stephen Joyce [EMAIL PROTECTED] wrote:
 On Fri, 26 Oct 2007, Yaakov Chaikin wrote:

  On 10/25/07, dan [EMAIL PROTECTED] wrote:
  are you login in to the web interface with the admin password or your own?
  if it your own, you dont have that configured in the host file
 
  host        dhcp    user      moreUsers    # <--- do not edit this line
  tbiqdev     0       yaakov
  yaakovlt    0       backup***
 
  ***is the managing username.
 
 
  What do you mean by this?

 He means that hosts are associated with users. Only a user associated with
 a host may view that host (although the admin user may view all hosts).

 Since you have the host yaakovlt associated with user backup, only the user
 backup and any backuppc admin users will be able to view the host. Put
 another way, if yaakov is not an admin in backuppc, then he should not be
 able to see host yaakovlt because it is not his host.

 Change the line above to
 yaakovlt    0       yaakov
 restart backuppc, and user yaakov should see both hosts.

 Cheers, Stephen
 --
 Stephen Joyce
 Systems Administrator                            P A N I C
 Physics & Astronomy Department               Physics & Astronomy
 University of North Carolina at Chapel Hill  Network Infrastructure
 voice: (919) 962-7214                            and Computing
 fax: (919) 962-0480                      http://www.panic.unc.edu



-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Hanging rsync backup on /usr/local

2007-10-26 Thread John Rouillard
Hi all:

Figured I should start a new thread on this as it is a separate
problem from the SIGPIPE issue.

I have a backup hanging until the SIGALARM triggers some 20 hours
later.

The (partial) config is:

  $Conf{XferMethod} = 'rsync';
  $Conf{RsyncClientPath} = '/usr/bin/rsync';
  $Conf{RsyncClientCmd} = '$sshPath -q -x -l backup \
 -o ServerAliveInterval=30 $host sudo $rsyncPath $argList+';
  $Conf{RsyncShareName} = [
'/etc',
'/var/bak',
'/var/log',
'/usr/local',
  ];
  $Conf{RsyncArgs} = [
#
# Do not edit these!
#
'--numeric-ids',
'--perms',
'--owner',
'--group',
'-D',
'--links',
'--hard-links',
'--times',
'--block-size=2048',
'--recursive',
'--one-file-system',

#
# Rsync >= 2.6.3 supports the --checksum-seed option
# which allows rsync checksum caching on the server.
# Uncomment this to enable rsync checksum caching if
# you have a recent client rsync version and you want
# to enable checksum caching.
#
'--checksum-seed=32761',

#
# Add additional arguments here
#
  ];

The server side uses (a cpan2rpm locally built)
perl-File-RsyncP-0.68-1, with perl:

  Summary of my perl5 (revision 5 version 8 subversion 5) configuration:
Platform:
  osname=linux, osvers=2.6.9-42.elsmp,
  archname=i386-linux-thread-multi
  uname='linux build-i386 2.6.9-42.elsmp #1 smp sat aug 12 09:39:11
  cdt 2006 i686 i686 i386 gnulinux '
  config_args='-des -Doptimize=-O2 -g -pipe -m32 -march=i386
  -mtune=pentium4 -Dversion=5.8.5 -Dmyhostname=localhost
  [EMAIL PROTECTED] -Dcc=gcc -Dcf_by=Red Hat,
  Inc. -Dinstallprefix=/usr -Dprefix=/usr -Darchname=i386-linux
  -Dvendorprefix=/usr -Dsiteprefix=/usr -Duseshrplib -Dusethreads
  -Duseithreads -Duselargefiles -Dd_dosuid -Dd_semctl_semun -Di_db
  -Ui_ndbm -Di_gdbm -Di_shadow -Di_syslog -Dman3ext=3pm -Duseperlio
  -Dinstallusrbinperl -Ubincompat5005 -Uversiononly
  -Dpager=/usr/bin/less -isr -Dinc_version_list=5.8.4 5.8.3 5.8.2 5.8.1
  5.8.0'
  hint=recommended, useposix=true, d_sigaction=define
  usethreads=define use5005threads=undef useithreads=define
  usemultiplicity=define
  useperlio=define d_sfio=undef uselargefiles=define usesocks=undef

and runs Centos 4.4.

The client side box is a CentOS release 4.5 (Final) running
rsync-2.6.3-1 and the rsync process is run via sudo and results in ps
output of:

  root 31103  3749  0 14:59 ?00:00:00 sshd: backup [priv]
  backup   31105 31103  0 14:59 ?00:00:00 sshd: [EMAIL PROTECTED]

  root 31106 31105  0 14:59 ?00:00:00 sesh /usr/bin/rsync
--server --sender --numeric-ids --perms --owner --group -D --links
--hard-links --times --block-size=2048 --recursive --one-file-system
--checksum-seed=32761 --ignore-times . /usr/local/

  root 31107 31106  0 14:59 ?00:00:00 /usr/bin/rsync
--server --sender --numeric-ids --perms --owner --group -D --links
--hard-links --times --block-size=2048 --recursive --one-file-system
--checksum-seed=32761 --ignore-times . /usr/local/

and an strace of the rsync (pid 31107) looks like it is waiting for
input:

  [EMAIL PROTECTED] ~]$ sudo strace -p 31107
  Process 31107 attached - interrupt to quit
  select(1, [0], [], NULL, {20, 441000})  = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0} ...

On the server side I have:

  BackupPC_dump,26148 /tools/BackupPC-3.1.0beta0/bin/BackupPC_dump -f...
(BackupPC_dump,26355)
(BackupPC_dump,26874)
(BackupPC_dump,26915)
BackupPC_dump,27716 /tools/BackupPC-3.1.0beta0/bin/BackupPC_dump ...
  (ssh,26219)
  (ssh,26855)
  (ssh,26883)
  ssh,27681 -q -x -l backup -o ServerAliveInterval=30 ...

where the processes in parentheses are defunct.

stracing the ssh that should be the server side of the rsync client
above produces:

  Process 27681 attached - interrupt to quit
  select(7, [3 4], [], NULL, {28, 254000}) = 0 (Timeout)
  select(7, [3 4], [3], NULL, {30, 0})= 1 (out [3], left {30, 0})
  write(3,
-\310\375\373\356\377\214\1^\310\335\266\377a\326\31v\260..., 64) =
64
  select(7, [3 4], [], NULL, {30, 0}) = 1 (in [3], left {29,
 99})
  read(3, GF\314\303\230e\23\272f\372\212#J\204sR\205\30\266\v\201...,
 8192) = 32
  select(7, [3 4], [], NULL, {30, 0}) = 0 (Timeout)
  select(7, [3 4], [3], NULL, {30, 0})= 1 (out [3], left {30, 0})
  write(3, \350\307\213\306\263\6\225\240\32}\247p\32\345f;qo\33h...,
 64) = 64
  select(7, [3 4], [], NULL, 

Re: [BackupPC-users] Debugging a SIGPIPE error killing my backups

2007-10-26 Thread John Rouillard
On Fri, Oct 26, 2007 at 11:13:14AM -0500, Les Mikesell wrote:
 John Rouillard wrote:
 
   $Conf{ClientTimeout} = 72000;
 
 which is 20 hours and the sigpipe is occurring before then.
 You'd see sigalarm instead of sigpipe if you had a timeout.
 
 Something like this I assume:
 
 [...]
 create d 755   0/1   12288 src/fastforward-0.51
   finish: removing in-process file .
   Child is aborting
   Done: 17 files, 283 bytes
   Got fatal error during xfer (aborted by signal=ALRM)
   Backup aborted by user signal
 
 Yes, that one is a timeout on the backuppc side.
 
 Also I straced the rsync process on the remote system while it was hung
 (I assume on whatever occurred after the src/fastforward-0.51)
 directory and got:
 
   [EMAIL PROTECTED] ~]$ ps -ef | grep 6909
   root  6909  6908  0 Oct25 ?00:00:00 /usr/bin/rsync
   --server --sender --numeric-ids --perms --owner --group -D --links
   --hard-links --times --block-size=2048 --recursive --one-file-system
   --checksum-seed=32761 --ignore-times . /usr/local/
   rouilj   10603 10349  0 05:36 pts/000:00:00 grep 6909
   [EMAIL PROTECTED] ~]$ strace -p 6909
   attach: ptrace(PTRACE_ATTACH, ...): Operation not permitted
   [EMAIL PROTECTED] ~]$ sudo strace -p 6909
   Process 6909 attached - interrupt to quit
   select(1, [0], [], NULL, {42, 756000})  = 0 (Timeout)
   select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
   select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
   select(1, [0], [], NULL, {60, 0} unfinished ...
   Process 6909 detached
 
 And similar results on the server side process. Maybe a deadlock
 somewhere? The ssh pipe appeared open. I set it up to forward traffic
 and was able to pass traffic from the server to the client.
 
 Are these 2 different scenarios (the sigalarm and sigpipe)?

Yes, I just started a new thread on the hang/sigalarm problem.
 I don't 
 think I've ever seen a real deadlock on a unix/linux rsync although I 
 always got them on windows when trying to run rsync under sshd (and I'd 
 appreciate knowing the right versions to use if that works now).

Well, it's not really rsync <-> rsync; it's File::RsyncP <-> rsync.

 The 
 sigpipe scenario sounded like the remote rsync crashed or quit (perhaps 
 not being able to handle files > 2 gigs).  This looks like something 
 different.  Can you start the remote strace before the hang so you have 
 a chance of seeing the file and activity in progress when the hang occurs?

I can try. As for the SIGPIPE issue, it looks like there is a missing
email in this thread. I was able to run an rsync of the 22GB file that
was the active transfer at the time of the SIGPIPE without problem.
I'll repost that missing email.

-- 
-- rouilj

John Rouillard
System Administrator
Renesys Corporation
603-643-9300 x 111

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Debugging a SIGPIPE error killing my backups

2007-10-26 Thread Les Mikesell
John Rouillard wrote:

   $Conf{ClientTimeout} = 72000;

 which is 20 hours and the sigpipe is occurring before then.
 You'd see sigalarm instead of sigpipe if you had a timeout.
 
 Something like this I assume:
 
[...]
 create d 755   0/1   12288 src/fastforward-0.51
   finish: removing in-process file .
   Child is aborting
   Done: 17 files, 283 bytes
   Got fatal error during xfer (aborted by signal=ALRM)
   Backup aborted by user signal

Yes, that one is a timeout on the backuppc side.

 Also I straced the rsync process on the remote system while it was hung
 (I assume on whatever occurred after the src/fastforward-0.51)
 directory and got:
 
   [EMAIL PROTECTED] ~]$ ps -ef | grep 6909
   root  6909  6908  0 Oct25 ?00:00:00 /usr/bin/rsync
   --server --sender --numeric-ids --perms --owner --group -D --links
   --hard-links --times --block-size=2048 --recursive --one-file-system
   --checksum-seed=32761 --ignore-times . /usr/local/
   rouilj   10603 10349  0 05:36 pts/000:00:00 grep 6909
   [EMAIL PROTECTED] ~]$ strace -p 6909
   attach: ptrace(PTRACE_ATTACH, ...): Operation not permitted
   [EMAIL PROTECTED] ~]$ sudo strace -p 6909
   Process 6909 attached - interrupt to quit
   select(1, [0], [], NULL, {42, 756000})  = 0 (Timeout)
   select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
   select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
   select(1, [0], [], NULL, {60, 0} unfinished ...
   Process 6909 detached
 
 And similar results on the server side process. Maybe a deadlock
 somewhere? The ssh pipe appeared open. I set it up to forward traffic
 and was able to pass traffic from the server to the client.

Are these 2 different scenarios (the sigalarm and sigpipe)?  I don't 
think I've ever seen a real deadlock on a unix/linux rsync although I 
always got them on windows when trying to run rsync under sshd (and I'd 
appreciate knowing the right versions to use if that works now).  The 
sigpipe scenario sounded like the remote rsync crashed or quit (perhaps 
not being able to handle files > 2 gigs).  This looks like something 
different.  Can you start the remote strace before the hang so you have 
a chance of seeing the file and activity in progress when the hang occurs?

--
   Les Mikesell
[EMAIL PROTECTED]





Re: [BackupPC-users] Host not showing up

2007-10-26 Thread Yaakov Chaikin
THANKS!

On 10/26/07, dan [EMAIL PROTECTED] wrote:
 kind of

 the username you put in the host file in backuppc only applies to the web
 session you log in to backuppc as

 if i pick some random name (it does not have to be for a user on the system)

 franko

 and add that user to the htpasswd file

 htpasswd /etc/backuppc/htpasswd franko

 it will ask me for a password.  now franko can log in with that password

 now, i can add franko to the host file


 host        dhcp    user    moreUsers
 yaakovlt    0       franko

 now franko, and the backuppc admin user, can manage yaakovlt, but no one
 else

 if i change this to

 host        dhcp    user    moreUsers
 yaakovlt    0       franko  stevo

 now franko, stevo, and backuppc can manage.

 does that make sense?




 On 10/26/07, Yaakov Chaikin [EMAIL PROTECTED] wrote:
  Really? Cool!
 
  So, the user listed in the 'hosts' files has nothing to do with the
  user set up on the yaakovlt machine, but has to do with the user setup
  in BackupPC?
 
  Thanks,
  Yaakov.
 
  On 10/26/07, Stephen Joyce  [EMAIL PROTECTED] wrote:
   On Fri, 26 Oct 2007, Yaakov Chaikin wrote:
  
On 10/25/07, dan [EMAIL PROTECTED]  wrote:
 are you logging in to the web interface with the admin password or your
 own?
 if it is your own, you don't have that configured in the host file
   
 host        dhcp    user    moreUsers    # <--- do not edit this line
 tbiqdev     0       yaakov
 yaakovlt    0       backup***

 ***is the managing username.
   
   
What do you mean by this?
  
   He means that hosts are associated with users. Only a user associated
 with
   a host may view that host (although the admin user may view all hosts).
  
   Since you have the host yaakovlt associated with user backup, only the
 user
   backup and any backuppc admin users will be able to view the host. Put
   another way, if yaakov is not an admin in backuppc, then he should not
 be
   able to see host yaakovlt because it is not his host.
  
   Change the line above to
    yaakovlt    0       yaakov
   restart backuppc, and user yaakov should see both hosts.
  
   Cheers, Stephen
    --
    Stephen Joyce
    Systems Administrator, P A N I C
    Physics & Astronomy Network Infrastructure and Computing
    Physics & Astronomy Department, University of North Carolina at Chapel Hill
    voice: (919) 962-7214    fax: (919) 962-0480
    http://www.panic.unc.edu
  
  
 





Re: [BackupPC-users] Debugging a SIGPIPE error killing my backups

2007-10-26 Thread John Rouillard
This is a resend. The original went missing apparently.

On Thu, Oct 25, 2007 at 02:10:12PM -0500, Les Mikesell wrote:
 John Rouillard wrote:
   2007-10-25 10:54:00 Aborting backup up after signal PIPE
   2007-10-25 10:54:01 Got fatal error during xfer (aborted by signal=PIPE)
 
 This means the problem is on the other end of the link - or at least 
 that the ssh driving it exited.

Hmm, ok, what happens if I add -v to the remote rsync args? Will the
extra output in the rsync stream screw things up? Maybe I can use:

  rsync -v ... 2> /tmp/rsync.log

to get debugging at the rsync level without sending the debugging
output to BackupPC.

I'll also try adding -o ServerAliveInterval=30 and -vvv to see if that
improves the reliability of the ssh session and generates output,
since -v sends debugging output to stderr and I can grab that with:

 ssh -v concord 2> /tmp/log

Does BackupPC need to use stderr to the remote system for anything?
 
   lastlog got digests fdb1c560d9ba822ab4ffa635d4b5f67f vs
   fdb1c560d9ba822ab4ffa635d4b5f67f
 create   400   0/0   65700 lastlog
   Can't write 33932 bytes to socket
   Sending csums, cnt = 16, phase = 1
   Read EOF: Connection reset by peer
 
 The process on the remote side is gone at this point.

I'll buy that, but I expect some death message. A dying gasp if you
will.
 
 If I am reading this right, the last file handled before the signal is
 /var/log/lastlog which is < 2GB (65K approx). When the signal occurs,
 I guess /var/log/ldap is the file in progress.
 
 The ldap file is 22GB in size:
 
   [EMAIL PROTECTED] log]$ ls -l ldap 
   -rw---  1 root root 22978928497 Oct 25 18:46 ldap
 
 Could the size be the issue?
 
 Yes, it sounds very likely that whatever is sending the file on the 
 remote side can't handle files larger than 2 gigs.

I just did a sudo rsync -e ssh ops02.mht1:/var/log/ldap . and it
completed without a problem. All 22 GB of the file transferred fine
8-(. However, now I have the same sigpipe issue on another host, which
has been backing up fine (3 full and 3 incremental) until now:

  incr backup started back to 2007-10-25 17:28:40 (backup #6) for
  directory /var/spool/nagios
  Running: /usr/bin/ssh -q -x -l backup ops01.mht1.renesys.com sudo
  /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group
  -D --links --hard-links --times --block-size=2048 --recursive
  --one-file-system --checksum-seed=32761 . /var/spool/nagios/
  Xfer PIDs are now 24197
  Rsync command pid is 24197
  Got remote protocol 28
  Negotiated protocol version 28
  Checksum caching enabled (checksumSeed = 32761)
  Got checksumSeed 0x7ff9
  Got file list: 11 entries
  Child PID is 24213
  Xfer PIDs are now 24197,24213
  Sending csums, cnt = 11, phase = 0
create d2775   306/2004096 .
create p 660   306/521   0 nagios.cmd
create d2775   306/2004096 tmp
  tmp/host-perfdata got digests 46a0099d178d1b97aa39e454ae083d3f vs
  46a0099d178d1b97aa39e454ae083d3f
  Skipping tmp/service-perfdata..bz2 (same attr)
  Skipping tmp/service-perfdata.0001.gz (same attr)
  Skipping tmp/service-perfdata.4.gz (same attr)
  Skipping tmp/service-perfdata.5.gz (same attr)
  Sending csums, cnt = 0, phase = 1
create   664   306/200   916165956 tmp/host-perfdata
  tmp/nagios_daemon_pids got digests 7bfc0cffe0f114dd6eea7514c44422cd vs
  7bfc0cffe0f114dd6eea7514c44422cd
create   664   306/200   6 tmp/nagios_daemon_pids
  tmp/old_list got digests 0e258a7527fe053eea032e6d58f1de7c vs
  0e258a7527fe053eea032e6d58f1de7c
create   664   306/200  48 tmp/old_list
  Read EOF: 
  Tried again: got 0 bytes
  Can't write 4 bytes to socket
  finish: removing in-process file tmp/service-perfdata
delete   644   306/200   343155581 tmp/service-perfdata.0001.gz
delete   664   306/200   343250131 tmp/service-perfdata.5.gz
delete   644   306/200   186949772 tmp/service-perfdata..bz2
delete   664   306/200   341890997 tmp/service-perfdata.4.gz
delete   644   306/200  1427879157 tmp/service-perfdata
  Child is aborting
  Done: 4 files, 916199608 bytes
  Got fatal error during xfer (aborted by signal=PIPE)
  Backup aborted by user signal

Is there anything I can do to get better diagnostics? If rsync
--server --sender exits with an error, how well does the File::RsyncP
module do grabbing stderr (or stdout which it would see as a breaking
of the protocol) and sending it back to the xfer log?

Is there a flag/option I can set in File::RsyncP?

(Time to perldoc File::RsyncP I guess.)

 Also is there a way to tail the xfer logs in realtime while the daemon
 is controlling the backup? So I don't have to wait for the backup to
 finish?
 
 You aren't going to see a problem in the log file - the other end is 
 crashing.

Well I have two backups still running (3+ hours later) and I am trying
to find out what file they are stuck on. Nothing that I can see should
be hanging the rsync this long compared to when I run an rsync

Re: [BackupPC-users] Debugging a SIGPIPE error killing my backups

2007-10-26 Thread Les Mikesell
John Rouillard wrote:

 And similar results on the server side process. Maybe a deadlock
 somewhere? The ssh pipe appeared open. I set it up to forward traffic
 and was able to pass traffic from the server to the client.
 Are these 2 different scenarios (the sigalarm and sigpipe)?
 
 Yes, I just started a new thread on the hang/sigalarm problem.
 I don't 
 think I've ever seen a real deadlock on a unix/linux rsync although I 
 always got them on windows when trying to run rsync under sshd (and I'd 
 appreciate knowing the right versions to use if that works now).
 
 Well, it's not really rsync <-> rsync, it's File::RsyncP <-> rsync.

In the windows case where I've had problems, rsync <-> rsync behaves the 
same (hanging when sshd is involved, working with rsync in daemon mode).

 The 
 sigpipe scenario sounded like the remote rsync crashed or quit (perhaps 
 not being able to handle files > 2 gigs).  This looks like something 
 different.  Can you start the remote strace before the hang so you have 
 a chance of seeing the file and activity in progress when the hang occurs?
 
 I can try. As far as the sigpipe issue, looks like there is a missing
 email in this thread. I was able to run an rsync of the 22GB file that
 was the active transfer at the time of the SIGPIPE without
 problem. I'll repost that missing email.

If rsync works but File::RsyncP doesn't, I'd suspect a problem in your 
perl or OS install where you might have some component not built for 
large file support - perhaps the compression libraries.  But SIGPIPE 
still sounds like the problem is at the other end.

-- 
   Les Mikesell
[EMAIL PROTECTED]





Re: [BackupPC-users] tar over ssh error

2007-10-26 Thread Craig Barratt
Lai writes:

 I am using CentOS as the backup server, trying to back up a Solaris 5.8 server.
 
 Below is the output from the logs
 
 Running: /usr/bin/ssh -q -x -n -l root 10.250.2.200 /usr/bin/tar -cvf - -C /
 ./opt/rts/rtd ./etc/hosts ./etc/services ./var/spool/cron
 full backup started for directory /
 Xfer PIDs are now 5008,5007
 a ./opt/rts/rtd/ 0K
 a ./opt/rts/rtd/bin/ 0K
 a ./opt/rts/rtd/bin/rtdcleantradedb.gz 277K
 a ./opt/rts/rtd/bin/rtddbfixuniqueid.gz 577K
 a ./opt/rts/rtd/bin/rtddbutil.gz 826K

This doesn't look like GNU tar.  BackupPC expects the tar program to
be GNU tar.

Craig



[BackupPC-users] odd problem

2007-10-26 Thread dan
i have an odd problem.

i am missing some of the statistics!

it shows "Pool is 0.00GB comprising 0 files and 1894 directories"
but "Pool file system was recently at 59%"

anyone seen this before?  here are some system details below.


see here:

   - The server's PID is 12931, on host backupa, version 3.0.0, started at
   10/24 17:15.
   - This status was generated at 10/26 11:25.
   - The configuration was last loaded at 10/25 15:48.
   - PCs will be next queued at 10/26 11:30.
   - Other info:
  - 0 pending backup requests from last scheduled wakeup,
  - 0 pending user backup requests,
  - 0 pending command requests,
  - Pool is 0.00GB comprising 0 files and 1894 directories (as of
  10/26 01:00),
  - Pool hashing gives 0 repeated files with longest chain 0,
  - Nightly cleanup removed 0 files of size 0.00GB (around 10/26
  01:00),
  - Pool file system was recently at 59% (10/26 11:20), today's
  max is 59% (10/26 01:50) and yesterday's max was 56%


and here
Compression Summary

Compression performance for files already in the pool and newly compressed
files.

 Backup#  Type  Comp Level  Existing Size/MB  New Size/MB
 0        full  3           0.0               0.0
 7        full  3           0.0               0.0
 8        incr  3           0.0               0.0
 9        incr  3           0.0               0.0
 10       incr  3           0.0               0.0
 11       incr  3           0.0               0.0
 12       incr  3           0.0               0.0
 13       incr  3           0.0               0.0
 14       full  3           0.0               0.0
 15       incr  3           0.0               0.0
 16       incr  3           0.0               0.0


and

Program Paths: SshPath, NmbLookupPath, PingPath, DfPath, SplitPath,
ParPath, CatPath, GzipPath, Bzip2Path

Install Paths: TopDir, ConfDir, LogDir, CgiDir, InstallDir


[BackupPC-users] backuppc Windows client failed !

2007-10-26 Thread [EMAIL PROTECTED]
Hello,

I have a wifi server running backuppc, connected to a Freebox (wifi), and my
wifi laptop.

I want to back up my Windows laptop (NTFS).

Every Xfer method I tried for backuppc FAILED to back up my Windows machine.
To be precise, I tested every possible solution: different rsync versions from
cygwin (2.6 to 3.0cvs) and the patch that adds mssleep delays!
I tried cwrsync, deltacopy ...

rsyncd: failed all the time
rsync+ssh: works but hangs after one or two files
smb method: failed too: 2 millisecond timeout ...
tar: hangs!!!

I tried with ssh-hpn: failed too ...
I tried every rsync parameter... but the system hangs all the time.
I think the problem comes from the Cygwin pipe!!

For windows users:
- have you tried an rsync client in a python or perl version?
- do you know if robocopy from microsoft could help me?
- could an NFS or ftp backup be an alternative?

Best Regards, Olivier








Re: [BackupPC-users] Hanging rsync backup on /usr/local (w/ strace output)

2007-10-26 Thread John Rouillard
Trying to split threads here. In a prior discussion on the hang issue,
on Fri, Oct 26, 2007 at 11:13:14AM -0500, Les Mikesell wrote:
 John Rouillard wrote:
  [rouilj]
   $Conf{ClientTimeout} = 72000;
 
 which is 20 hours and the sigpipe is occurring
 before then.  You'd see sigalarm instead of sigpipe
 if you had a timeout.
 
 Something like this I assume:
 
 [...]
 create d 755   0/1   12288 src/fastforward-0.51
   finish: removing in-process file .
   Child is aborting
   Done: 17 files, 283 bytes
   Got fatal error during xfer (aborted by signal=ALRM)
   Backup aborted by user signal

 Yes, that one is a timeout on the backuppc side.

 Also I straced the rsync process on the remote
 system while it was hung (I assume on whatever
 occurred after the src/fastforward-0.51) directory
 and got:
 
   [EMAIL PROTECTED] ~]$ ps -ef | grep 6909
   root  6909  6908  0 Oct25 ?00:00:00 /usr/bin/rsync
   --server --sender --numeric-ids --perms --owner --group -D --links
   --hard-links --times --block-size=2048 --recursive --one-file-system
   --checksum-seed=32761 --ignore-times . /usr/local/
   rouilj   10603 10349  0 05:36 pts/000:00:00 grep 6909
   [EMAIL PROTECTED] ~]$ strace -p 6909
   attach: ptrace(PTRACE_ATTACH, ...): Operation not permitted
   [EMAIL PROTECTED] ~]$ sudo strace -p 6909
   Process 6909 attached - interrupt to quit
   select(1, [0], [], NULL, {42, 756000})  = 0 (Timeout)
   select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
   select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
   select(1, [0], [], NULL, {60, 0} unfinished ...
   Process 6909 detached
 
 And similar results on the server side
 process. Maybe a deadlock somewhere? The ssh pipe
 appeared open. I set it up to forward traffic and
 was able to pass traffic from the server to the
 client.

 Are these 2 different scenarios (the sigalarm and
 sigpipe)?  I don't think I've ever seen a real
 deadlock on a unix/linux rsync although I always got
 them on windows when trying to run rsync under sshd
 (and I'd appreciate knowing the right versions to use
 if that works now). The sigpipe scenario sounded like
 the remote rsync crashed or quit (perhaps not being
 able to handle files > 2 gigs).  This looks like
 something different.  Can you start the remote strace
 before the hang so you have a chance of seeing the
 file and activity in progress when the hang occurs?


Ask and you will receive. Here is part of an strace of the
rsync on the client machine. Continuations of long lines are
indented by 2 spaces, line numbers in parens.

Starts:

execve(/usr/bin/rsync, [/usr/bin/rsync, --server,  (line 1)
  --sender, --numeric-ids, --perms, --owner,
  --group, -D, --links, --hard-links, --times,
  --block-size=2048, --recursive,
  --one-file-system, --checksum-seed=32761,
  --ignore-times, ...], [/* 16 vars */]) = 0 
uname({sys=Linux, node=vpn01.fp.psm1.renesys.com, ...}) = 0

some initialization and then:

lstat64(/usr/local/., {st_mode=S_IFDIR|0755, st_size=4096,
  ...}) = 0
chdir(/usr/local) = 0  (line 63)
stat64(., {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
lstat64(., {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
mmap2(NULL, 266240, PROT_READ|PROT_WRITE,
  MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7ef7000
mmap2(NULL, 135168, PROT_READ|PROT_WRITE,
  MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7ed6000
open(., O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_DIRECTORY) = 3
fstat64(3, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
fcntl64(3, F_SETFD, FD_CLOEXEC) = 0
getdents64(3, /* 12 entries */, 4096)   = 320
lstat64(share, {st_mode=S_IFDIR|0755, st_size=4096, ...})
  = 0
open(share, O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_DIRECTORY) =
  4
fstat64(4, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
fcntl64(4, F_SETFD, FD_CLOEXEC) = 0
getdents64(4, /* 4 entries */, 4096)= 96
lstat64(share/info, {st_mode=S_IFDIR|0755, st_size=4096,
  ...}) = 0
open(share/info,
  O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_DIRECTORY) = 5
fstat64(5, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
fcntl64(5, F_SETFD, FD_CLOEXEC) = 0
getdents64(5, /* 2 entries */, 4096)= 48
getdents64(5, /* 0 entries */, 4096)= 0
close(5)= 0
lstat64(share/man, {st_mode=S_IFDIR|0755, st_size=4096,
  ...}) = 0
open(share/man,
  O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_DIRECTORY) = 5
fstat64(5, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
fcntl64(5, F_SETFD, FD_CLOEXEC) = 0
getdents64(5, /* 12 entries */, 4096)   = 288

it starts walking through the directory tree.
[fl]stats all over the place.

Then file writes:

open(/usr/local/bin/addcr, O_RDONLY|O_LARGEFILE) = 3 (line 635)
fstat64(3, {st_mode=S_IFREG|0755, st_size=4264, ...}) = 0
read(3,
  \177ELF\1\1\1\0\0\0\0\0\0\0\0\0\2\0\3\0\1\0\0\0h\203\4...,
  4264) = 4264
select(2, NULL, [1], NULL, {60, 0}) = 1 (out [1], left
  {60, 0})
write(1,
  \374\17\0\7\2\0\0\0\0\0\0\0\0\10\0\0\2\0\0\0\0\0\0\0\250...,
  4096) = 4096
close(3) 

Re: [BackupPC-users] Hanging rsync backup on /usr/local (w/ strace output)

2007-10-26 Thread Les Mikesell
John Rouillard wrote:
 
 open(/usr/local/src/fastforward-0.51/warn-auto.sh,
   O_RDONLY|O_LARGEFILE) = 3
 fstat64(3, {st_mode=S_IFREG|0644, st_size=64, ...}) = 0
 read(3, #!/bin/sh\n# WARNING: This file w..., 64) = 64
 close(3)= 0
 select(2, NULL, [1], NULL, {60, 0}) = 1 (out [1], left  (line 2799)
   {60, 0})
 write(1, \300\0\0\7\0waitpid\0__errno_location\0er...,
   196) = 196
 select(2, NULL, [1], NULL, {60, 0}) = 1 (out [1], left
   {60, 0})
 write(1, \4\0\0\7\377\377\377\377, 8) = 8
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 
 and toast city.
 
 What's weird is the select 2 after the close. 

There are earlier selects on fd2 that aren't followed by a write.  The 
real problem is the select on fd1 (stdout) that tells you that a write 
would block.

 I can make the whole file available on the web if you or anybody else
 wants it. Contact me off list, no sense spamming people.

I don't think it would help.  The question is, why can't you write to 
stdout?  It should be connected to sshd which should be passing stuff to 
the invoking ssh and perl should be consuming it.

-- 
   Les Mikesell
[EMAIL PROTECTED]




Re: [BackupPC-users] [BackupPC-devel] BackupPCd: dead?

2007-10-26 Thread dan
interesting little project there.

it definitely looks to be dead, but i think the same thing could be done with
cygwin, ssh remote commands, and rsyncd. and if anyone is curious about how
to deploy this on many PCs, i suggest looking into NSIS, which is the
nullsoft installer.

i use NSIS on my network to manage a couple hundred desktops and a couple
dozen laptops. i use it to install deltacopy and the default configuration
for backuppc to use.  i also deploy help files, remote desktop shortcuts,
and even windows updates.

i am sure that quite a few people would benefit from having a client-side
setup wizard/installer.  a convenient way to set up windows PCs to run rsyncd
and configure it for the network would be great.  i will attempt a generic
installer and post it on the users mailing list when i get some time.

On 10/25/07, Olivier LAHAYE [EMAIL PROTECTED] wrote:


 Is the BackupPCd project dead?

 I know that one major feature of BackupPC is the ability to back up clients
 without having to install software on the client itself, but in some cases
 it would be really cool to back up opened files on windows computers, like
 PST files.
 BackupPCd brought such a hope, but it has looked dead for a long time.

 Does the BackupPC project aim to have a solution for this kind of problem?

 Best regards.

 --
 Olivier LAHAYE
 Motorola Labs IT Manager
 Computer  Information Systems
 European Communications Research




Re: [BackupPC-users] Hanging rsync backup on /usr/local (w/ strace output)

2007-10-26 Thread John Rouillard
On Fri, Oct 26, 2007 at 03:28:28PM -0500, Les Mikesell wrote:
 John Rouillard wrote:
 
 open(/usr/local/src/fastforward-0.51/warn-auto.sh,
   O_RDONLY|O_LARGEFILE) = 3
 fstat64(3, {st_mode=S_IFREG|0644, st_size=64, ...}) = 0
 read(3, #!/bin/sh\n# WARNING: This file w..., 64) = 64
 close(3)= 0
 select(2, NULL, [1], NULL, {60, 0}) = 1 (out [1], left  (line 2799)
   {60, 0})
 write(1, \300\0\0\7\0waitpid\0__errno_location\0er...,
   196) = 196
 select(2, NULL, [1], NULL, {60, 0}) = 1 (out [1], left
   {60, 0})
 write(1, \4\0\0\7\377\377\377\377, 8) = 8
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 
 and toast city.
 
  What's weird is the select 2 after the close. 
 
 There are earlier selects on fd2 that aren't followed by a write.

Correct, but they occur before the input file closes.

 The real problem is the select on fd1 (stdout) that tells you that
 a write would block.
 
  I can make the whole file available on the web if you or anybody else
  wants it. Contact me off list, no sense spamming people.
 
 I don't think it would help.  The question is, why can't you write to 
 stdout?  It should be connected to sshd which should be passing stuff to 
 the invoking ssh and perl should be consuming it.

Do you mean this select?

 select(2, NULL, [1], NULL, {60, 0}) = 1 (out [1], left  (line 2799)
   {60, 0})

My C is rusty, but I think that means:

  look at no fd's for reading and fd 1 for writing and no fd's for
  errors. Time out in 60.000 seconds.
  
  What I am not sure of is why the first argument is 2. I would expect
  the 2 only if the [1] were [1, 2] with two fd's. Since there is
  only one fd in the set (namely fd 1), I would expect the 2 to be 1.

  In any case, the select call returns 1, meaning that there is one
  file descriptor ready for writing, and it waited 0 seconds to
  determine the write handle was ready to be written to. Then the write
  occurs:

 write(1, \300\0\0\7\0waitpid\0__errno_location\0er...,
   196) = 196

  writing 196 bytes.

 select(2, NULL, [1], NULL, {60, 0}) = 1 (out [1], left
   {60, 0})

  again indicates that fd 1 is available for writing. an 8 byte write is
  done then fd 0 is checked to see if there is anything to read

 write(1, \4\0\0\7\377\377\377\377, 8) = 8
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)

  and there never is anything to read.

So by that point it is waiting for data/info from the server and there
is no data forthcoming. Can you point out where my analysis is wrong?

-- 
-- rouilj

John Rouillard
System Administrator
Renesys Corporation
603-643-9300 x 111



Re: [BackupPC-users] Hanging rsync backup on /usr/local (w/ strace output)

2007-10-26 Thread Les Mikesell
John Rouillard wrote:
 Do you mean this select?
 
 select(2, NULL, [1], NULL, {60, 0}) = 1 (out [1], left  (line 2799)
  {60, 0})
 
 My C is rusty, but I think that means:
 
   look at no fd's for reading and fd 1 for writing and no fd's for
   errors. Time out in 60.000 seconds.
   
   What I am not sure of is why the first argument is 2. I would expect
   that if the [1] was [1, 2] with two fd's. Since there is
   only one fd in the set (namely fd 1), I would expect the 2 to be 1.

I guess I had that backwards - the first argument is really the highest 
numbered fd to consider plus 1.

   again indicates that fd 1 is available for writing. an 8 byte write is
   done then fd 0 is checked to see if there is anything to read
 
 write(1, \4\0\0\7\377\377\377\377, 8) = 8
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 
   and there never is anything to read.
 
 So by that point it is waiting for data/info from the server and there
 is no data forthcoming. Can you point out where my analysis is wrong?

Yes, I think that is right.  I wonder if that 8-byte write is sitting in 
a buffer somewhere.   Did this break on previously working machines or 
have you always had this problem?

-- 
   Les Mikesell
[EMAIL PROTECTED]



-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Can't get excludes to work

2007-10-26 Thread Travis Fraser
On Fri, 2007-10-26 at 16:53 +0200, Toni Van Remortel wrote:
 Arch Willingham wrote:
  Even though the slashes go the other way in Windows
 Yes. It's a Unix system that is taking the backups, so you need to use 
 the Unix way to address directories. So / is the separator, \ is just an 
 escape character.
 
In playing around with excludes a while back, I found that the required
separator depends on the transfer method: for smb, backslashes work in
excludes; for rsync, forward slashes.
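
As a rough illustration (the share names and paths below are made up,
not taken from this thread), the per-transfer-method difference would
look something like this in a per-host config.pl:

```perl
# Hypothetical excludes; adjust share names and paths to your setup.
# rsync: Unix-style forward slashes, relative to the share root
$Conf{BackupFilesExclude} = {
    '/home' => ['/*/tmp', '/*/.cache'],
};

# smb: Windows-style backslashes (doubled inside Perl strings)
# $Conf{BackupFilesExclude} = {
#     'cdrive' => ['\\temp', '\\Windows\\Temp'],
# };
```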
-- 
Travis Fraser [EMAIL PROTECTED]


-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Hanging rsync backup on /usr/local (w/ strace output)

2007-10-26 Thread John Rouillard
Sent this via personal email. Here is the copy for the
list.

On Fri, Oct 26, 2007 at 06:01:16PM -0500, Les Mikesell wrote:
 John Rouillard wrote:
   again indicates that fd 1 is available for writing. an 8 byte write is
   done then fd 0 is checked to see if there is anything to read
 
 write(1, \4\0\0\7\377\377\377\377, 8) = 8
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 
   and there never is anything to read.
 
 So by that point it is waiting for data/info from the server and there
 is no data forthcoming. Can you point out where my analysis is wrong?
 
 Yes, I think that is right.  I wonder if that 8-byte write is sitting in 
 a buffer somewhere.   Did this break on previously working machines or 
 have you always had this problem?

On this machine I have always had the problem, but it is the only one
of the 6 machines I am testing that does. I have a second machine with
the identical kernel, ssh, and rsync versions that isn't showing any
issues. However, it does have different data to back up, so I can't
rule out a data-driven bug.

-- 
-- rouilj

John Rouillard
System Administrator
Renesys Corporation
603-643-9300 x 111

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] tar over ssh error

2007-10-26 Thread Lai Chen Kang
Installing GNU Tar fixed it.

Thank you

On 10/27/07, Craig Barratt [EMAIL PROTECTED] wrote:

 Lai writes:

  I am using CentOS as the backup server, trying to back up a Solaris 5.8 server.
 
  Below is the output from the logs
 
  Running: /usr/bin/ssh -q -x -n -l root 10.250.2.200
 /usr/bin/tar -cvf - -C / ./opt/rts/rtd ./etc/hosts ./etc/services
 ./var/spool/cron
  full backup started for directory /
  Xfer PIDs are now 5008,5007
  a ./opt/rts/rtd/ 0K
  a ./opt/rts/rtd/bin/ 0K
  a ./opt/rts/rtd/bin/rtdcleantradedb.gz 277K
  a ./opt/rts/rtd/bin/rtddbfixuniqueid.gz 577K
  a ./opt/rts/rtd/bin/rtddbutil.gz 826K

 This doesn't look like GNU tar.  BackupPC expects the tar program to
 be GNU tar.

 Craig
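
For anyone else hitting this: the quickest check is to ask the remote
tar to identify itself. GNU tar answers "tar (GNU tar) x.y" on the
first line, while Solaris /usr/bin/tar errors out on --version. The
host and user below are just the values from this thread.

```shell
# Remote check (same ssh invocation style BackupPC uses):
# ssh -q -x -n -l root 10.250.2.200 /usr/bin/tar --version
# Local check; GNU tar identifies itself on the first line:
tar --version | head -n 1
```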

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/