Re: [BackupPC-users] BackupPC: Wrong user

2008-03-10 Thread Kai Grunau
Kai Grunau wrote:
> Nils Breunese (Lemonbit) wrote:
>> Kai Grunau wrote:
>>> Is there someone who could send me an /etc/init.d/backuppc script
>>> file for OpenSuse 10.3 (2.6.22.17-0.1-default, i686)?
>>>
>>> When I try to start BackupPC I get the error:
>>> ---
>>> /etc/init.d/backuppc start
>>> Starting backuppc: ok.
>>> /home/backuppc/bin/BackupPC: Wrong user: my userid is 0, instead of 1000
>>> (backuppc)
>>> Please su backuppc first
>>> BackupPC::Lib->new failed
>>> ---
>>
>> Your init.d script tries to start BackupPC as root, while it should
>> run as the backuppc user. It is started in my init.d script (on CentOS
>> 4) as follows in the start() function:
>>
>>   daemon --user backuppc /opt/backuppc/bin/BackupPC -d
>>
>> Did you create this init.d script yourself? If you installed from the
>> source distribution, then configure.pl should have created an init.d
>> script for you as init.d/linux-backuppc. I copied that script to
>> /etc/init.d/backuppc and ran chmod 755 on it. Worked just fine.
>
> I copied the suse-backuppc script from the installation source to
> /etc/init.d/backuppc.
>
> When I try to start the BackupPC software manually with su backuppc -c
> /usr/local/BackupPC/bin/BackupPC -d
> I get the error:
> ---
> /home/backuppc/bin/BackupPC: Wrong user: my userid is 0, instead of 1000
> (backuppc)
> Please su backuppc first
> BackupPC::Lib->new failed
> ---

Maybe I found a solution:

After chmod 4550 $HOME/bin/* it was possible to run the
/etc/init.d/backuppc script without an error.
The processes:
-
backuppc 18665  0.0  0.7  10532  7500 ?  S  13:57  0:00 /usr/bin/perl /home/backuppc/bin/BackupPC -d
backuppc 18666  3.6  0.4   6920  4784 ?  S  13:57  0:00 /usr/bin/perl /home/backuppc/bin/BackupPC_trashClean
-
are owned by backuppc.

I remember that on Friday I changed the ownership of the /usr/bin/suidperl
binary
(https://secure-support.novell.com/KanisaPlatform/Publishing/980/3436932_f.SAL_Public.html)
to solve another problem.

I don't know if this is all correct, but now I will try to run some
backup jobs.

Regards & thanx, Kai
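
As an aside: rather than making the BackupPC binaries setuid with chmod
4550, the more common fix is to have the init script drop to the backuppc
user itself, along the lines Nils describes above. A minimal sketch,
assuming the source-install path from earlier in the thread:

  # in the start section of /etc/init.d/backuppc:
  su - backuppc -c "/usr/local/BackupPC/bin/BackupPC -d"

  # and if /usr/bin/suidperl ownership was changed earlier, the stock
  # permissions are normally root:root with mode 4711:
  chown root:root /usr/bin/suidperl
  chmod 4711 /usr/bin/suidperl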





Re: [BackupPC-users] Running Top

2008-03-10 Thread Adam Goryachev
Carl Wilhelm Soderstrom wrote:
> On 03/03 02:29 , Les Mikesell wrote:
>> The seek time for these may be the real killer since you drag the
>> parity drive's head along for the ride.
>
> The more drives you have in an array, the closer your seek time will
> tend to approach worst-case, as the controller waits for the drive with
> the longest seek time for a given operation. Does anyone know anything
> about synchronizing drive spindles? I've heard of it, and I know it
> requires drives that are built for it, but I have never worked with
> such hardware.

I was always led to believe that the more drives you have in an array, the
faster it gets; i.e., comparing the same HDDs and controller, 3 HDDs in a
RAID5 would be slower than 6 HDDs in a RAID5.

Is that an invalid assumption? How does RAID6 compare in all this? Would
it be faster than RAID5 for the same number of HDDs? (Exclude CPU
overheads in all this.)
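
As a toy illustration of the worst-case-seek effect (assuming each
drive's seek time is independent and uniformly distributed between 0 and
t_max, which real drives are not):

  expected seek, single drive     = t_max / 2
  expected worst-of-n, n drives   = n/(n+1) * t_max
  n = 3: 0.75 * t_max      n = 6: ~0.86 * t_max

So operations that must touch every spindle creep toward t_max as drives
are added, even though large sequential transfers still get faster.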

Regards,
Adam




Re: [BackupPC-users] Running Top

2008-03-10 Thread dan
That assumption is generally true, especially for larger files. For small
files, the whole array is dependent on the slowest drive in the array, so
the access time is slowest drive + controller overhead + parity penalty,
which means that in all circumstances a file that is smaller than the
stripe size will be written significantly slower than any one drive in the
array could manage alone. When files are as large as the stripe size or
larger, performance generally improves.
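
A rough worked example of the stripe arithmetic (chunk size and drive
count assumed for illustration):

  6-drive RAID5 with 64 KB chunks: stripe = (6-1) * 64 KB = 320 KB
  4 KB write (sub-stripe):  read old data + read old parity, then
    write new data + write new parity = 4 I/Os, gated by the slowest spindle
  320 KB+ sequential write: parity is computed from the new data alone,
    so it costs one full-stripe write across all 6 drives

That read-modify-write cycle is the parity penalty on small files;
full-stripe writes amortize it away.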

On Mon, Mar 3, 2008 at 4:50 PM, Adam Goryachev [EMAIL PROTECTED]
wrote:

> Carl Wilhelm Soderstrom wrote:
>> On 03/03 02:29 , Les Mikesell wrote:
>>> The seek time for these may be the real killer since you drag the
>>> parity drive's head along for the ride.
>>
>> The more drives you have in an array, the closer your seek time will
>> tend to approach worst-case, as the controller waits for the drive with
>> the longest seek time for a given operation. Does anyone know anything
>> about synchronizing drive spindles? I've heard of it, and I know it
>> requires drives that are built for it, but I have never worked with
>> such hardware.
>
> I was always led to believe that the more drives you have in an array,
> the faster it gets; i.e., comparing the same HDDs and controller, 3 HDDs
> in a RAID5 would be slower than 6 HDDs in a RAID5.
>
> Is that an invalid assumption? How does RAID6 compare in all this? Would
> it be faster than RAID5 for the same number of HDDs? (Exclude CPU
> overheads in all this.)
>
> Regards,
> Adam




Re: [BackupPC-users] MOTD file entry breaks authentication

2008-03-10 Thread Nicholas Hall
On Fri, Mar 7, 2008 at 6:15 PM, Paul [EMAIL PROTECTED] wrote:

 I noticed when setting up my backuppc install that Windows clients with
 rsyncd software worked OK. A linux box did not. In the log file was the
 error message auth required, but service  is open/insecure.

 I double-checked the settings, and then tried a simple command-line
 rsync. I was prompted for a password, as expected. I rsynced to a
 Windows box, and was also prompted for password. Both were successful at
 rsyncing a small directory.

 I turned off authorization requirement, and received a new message in
 the log for the linux box: unexpected response: ''  Puzzled, I looked
 closer at the command line. There was an extra linefeed on the Linux
 output:

# rsync -azv linuxbox::module .

Password:


 vs. the windows box:

# rsync -azv winbox::module .
Password:



 I then checked the rsyncd.conf file on Linux, and discovered that there
 was an MOTD file listed. The file did not actually exist, but the
 reference was there. I added a file with Message of the day text in
 it, and voila - that message showed up in the unexpected response
 quotation marks.

 I removed the motd file reference from rsyncd.conf, and backups to the
 linux server work fine. The command-line login now looks just like the
 Windows login - no extra line.

 I see this error message listed around on the mailing list, so I know
 I'm not the only one who's run into this.

  - Paul



Hello

I believe this error is generated from the Perl Rsync module.  What version
of File::RsyncP are you running?

-- 
Nicholas Hall
[EMAIL PROTECTED]
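
For anyone hitting the same thing, the offending entry Paul describes
would look roughly like this in rsyncd.conf (module name illustrative);
any motd text is printed ahead of the module's auth challenge, which is
presumably what File::RsyncP trips over:

  # motd file = /etc/rsyncd.motd   <-- remove or comment out for BackupPC
  [module]
      path = /some/path
      auth users = backuppc
      secrets file = /etc/rsyncd.secrets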


Re: [BackupPC-users] Small patch to graph the pool size (v2 patch)

2008-03-10 Thread Simone Marzona
Hi all

Can this patch be modified to show the percentage of cpool space broken
down by host, in some kind of stacked graph?

I think it could be useful to know how much of the pool storage is
used by host A and how much by host B and so on. I'm not interested in
the size of the backup for each host, but in the percentage of the cpool
used by each host.

thanks a lot!

On Fri, 2008-02-29 at 09:51 +0100, Ludovic Drolez wrote:
> On Thu, Feb 28, 2008 at 10:17:43AM -0700, Kimball Larsen wrote:
>> but the images appear as busted images on the status page.
>
> (With the patch :-D )
>
> Hi!
>
> This new patch should fix this bug. Anyway, the graphs will appear
> after BackupPC_nightly has run.
>
> I've also fixed another problem, which comes from the fact that I
> assumed that the CGI was index.cgi (only true for Debian users?).
>
> Cheers,
 




Re: [BackupPC-users] Small patch to graph the pool size (v2 patch)

2008-03-10 Thread Les Mikesell
Simone Marzona wrote:
> Can this patch be modified to show the percentage of cpool space broken
> down by host, in some kind of stacked graph?
>
> I think it could be useful to know how much of the pool storage is
> used by host A and how much by host B and so on. I'm not interested in
> the size of the backup for each host, but in the percentage of the cpool
> used by each host.

You can't really determine what you want to know.  That is, both host A
and host B will have links to a single pooled copy of any file they have
in common, so it doesn't exactly belong to one or the other.  Also, it
isn't very practical to locate the 'other' links to a file, so it would
not even be easy to determine which files have links only from multiple
runs on the same host and which are linked across hosts.

-- 
   Les Mikesell
[EMAIL PROTECTED]
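
To make the link-counting problem concrete, a quick sketch (pool paths
assumed, GNU stat):

  # A pooled file's link count = 1 (the pool entry) + the number of
  # backup references, but it says nothing about which hosts hold them:
  stat -c '%h %n' /var/lib/backuppc/cpool/...

  # Attributing those references to hosts means an inode search across
  # every pc/ tree:
  find /var/lib/backuppc/pc -inum $(stat -c '%i' /var/lib/backuppc/cpool/...)

That is one full directory walk per pool file, times millions of files.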




[BackupPC-users] BackupPC 3.1.0 failing

2008-03-10 Thread Steen Eugen Poulsen
I've been running 3.1.0 since it came out. I finished setting things up a
while ago and it ran rock solid for months, but then one day some
machines stopped working.

One Gentoo vserver host is running 4 of the machines: the Gentoo host OS
plus Ubuntu, Debian and Gentoo guests. BackupPC manages to back up the
Ubuntu guest, but not the host OS or the two other vservers.

It also fails on a remote internet server running Gentoo, but not on the
other server, which runs Ubuntu.

The Debian vserver log file looks like this, and it seems to be the same
error for all of the failing machines:


full backup started for directory /; updating partial #117
Running: /usr/bin/ssh -q -x -l root liferaft vserver debian exec 
/usr/bin/rsync --server --sender --numeric-ids --perms --owner --group 
-D --links --hard-links --times --block-size=2048 --recursive 
--checksum-seed=32761 --ignore-times . /

Xfer PIDs are now 23937
Got remote protocol 30
Negotiated protocol version 28
Checksum caching enabled (checksumSeed = 32761)
Sent exclude: /dev
Sent exclude: /exports
Sent exclude: /home
Sent exclude: /media
Sent exclude: /mnt
Sent exclude: /proc
Sent exclude: /pub
Sent exclude: /srv
Sent exclude: /sys
Sent exclude: /tmp
Sent exclude: /usr/portage
Sent exclude: /var/lock
Sent exclude: /var/run
Sent exclude: /var/tmp
Xfer PIDs are now 23937,23949
  create d 755   0/0    4096 .
  create d 755   0/0    4096 bin
  pool     755   0/0  688492 bin/bash
  pool     755   0/0   25216 bin/bunzip2
  pool     755   0/0   25216 bin/bzcat -> bin/bunzip2
  pool   l 777   0/0       6 bin/bzcmp -> bzdiff
  pool     755   0/0    2128 bin/bzdiff
  pool   l 777   0/0       6 bin/bzegrep -> bzgrep
  pool     755   0/0    4874 bin/bzexe
  pool   l 777   0/0       6 bin/bzfgrep -> bzgrep
  pool     755   0/0    3642 bin/bzgrep
  pool     755   0/0   25216 bin/bzip2 -> bin/bunzip2
  pool     755   0/0    8064 bin/bzip2recover
  pool   l 777   0/0       6 bin/bzless -> bzmore
  pool     755   0/0    1297 bin/bzmore
  pool     755   0/0   26860 bin/cat
  pool     755   0/0   45344 bin/chgrp
  pool     755   0/0   42744 bin/chmod
  pool     755   0/0   47356 bin/chown
  pool     755   0/0   69284 bin/cp
  pool     755   0/0   55052 bin/date
  pool     755   0/0   47852 bin/dd
  pool     755   0/0   45016 bin/df
  pool     755   0/0   92312 bin/dir
  pool     755   0/0    4428 bin/dmesg
  pool     755   0/0    8592 bin/dnsdomainname
  pool     755   0/0   24228 bin/echo
  pool     755   0/0   92436 bin/egrep
  pool     755   0/0   22120 bin/false
  pool     755   0/0   52880 bin/fgrep
  pool     755   0/0  100468 bin/grep
  same     755   0/0      61 bin/gunzip
  same     755   0/0    5864 bin/gzexe
  pool     755   0/0   53420 bin/gzip
  pool     755   0/0    8592 bin/hostname
  pool     755   0/0   12348 bin/kill
Read EOF:
Tried again: got 0 bytes
finish: removing in-process file bin/ln
Child is aborting
Parent read EOF from child: fatal error!
Done: 34 files, 1591016 bytes
Got fatal error during xfer (Child exited prematurely)
Backup aborted (Child exited prematurely)



This one died quickly, but it's completely random at what point it fails
on the machines. As protocol 30 shows, I've upgraded rsync to 3.0.0 to
see if I had a bad rsync 2.6.x (all the distributions had upgraded to the
same one) that for some reason only Ubuntu had fixed, but it still fails.

It sometimes aborts on a signal PIPE on some of the machines, and some
logs have a "can't write to socket" error.

The total randomness of what works and what's broken has me completely
puzzled.




Re: [BackupPC-users] BackupPC 3.1.0 failing

2008-03-10 Thread Steen Eugen Poulsen
incr backup started back to 2008-03-01 10:11:36 (backup #158) for 
directory /
Running: /usr/bin/ssh -q -x -l root dragonslair /usr/bin/rsync --server 
--sender --numeric-ids --perms --owner --group -D --links --hard-links 
--times --block-size=2048 --recursive --checksum-seed=32761 . /

Xfer PIDs are now 24098
Got remote protocol 29
Negotiated protocol version 28
Checksum caching enabled (checksumSeed = 32761)
Sent exclude: /dev
Sent exclude: /media
Sent exclude: /mnt
Sent exclude: /proc
Sent exclude: /pub
Sent exclude: /srv
Sent exclude: /sys
Sent exclude: /tmp
Sent exclude: /usr/portage
Sent exclude: /var/run
Sent exclude: /var/lock
Sent exclude: /var/tmp
Xfer PIDs are now 24098,24099
[ skipped 1865 lines ]
Unexpected call BackupPC::Xfer::RsyncFileIO->unlink(usr/lib/libslang.a)
[ skipped 1270 lines ]
Can't write 32772 bytes to socket
[ skipped 10 lines ]
Done: 0 files, 0 bytes
Got fatal error during xfer (aborted by signal=PIPE)
Backup aborted by user signal

The signal=PIPE failure I get on two machines.

The host OS and the remote Gentoo server both do the signal=PIPE abort.

The Gentoo and Debian vservers both give the error in my first message.
(But the Ubuntu vserver backs up fine.)
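
One way to narrow this down might be to run the same transport by hand as
the backuppc user, taking BackupPC out of the loop (command taken from
the log above):

  su - backuppc -c '/usr/bin/ssh -q -x -l root dragonslair /usr/bin/rsync --version'

If that hangs or dies too, the problem is in the ssh/rsync/vserver path
rather than in BackupPC itself.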






[BackupPC-users] Signal=PIPE on restore, and other errors

2008-03-10 Thread backuppc
Searching through the mail archives, I see lots of posts about getting
aborts with signal=PIPE on backups.  I've got this problem on restore.

I have been backing up 5 machines (Linux and Windows) using rsyncd
flawlessly for a few months now.

Thought it was about time to check restoring (backups are useless unless
they restore!).  I had no luck.

I had two types of errors.

The first thing I tried was restoring from the BackupPC server (running
Debian testing/lenny) to an Ubuntu 7.10 box.  There, I got a failure with
the signal=PIPE error.  Responses to questions people have about this
error during backup say that some big files can cause it.  So I retried
restoring a single small file and got the same result.
I figured I would try restoring a single small file to other hosts on
the network.  I tried another debian box and a winxp box (both of which
have been happily backed up for months).  Both of those fail with
unable to read 4 bytes.  According to Les Mikesell in a response (to
someone having this problem on a backup), it means:

 Usually this means that ssh did not authenticate correctly to start the
 connection. Be sure you have tested the passwordless access running as
 the backuppc user on the server (the key setup is per-user).

Not sure where to go with that.  I have rsyncd daemons running on all
the hosts being backed up.  According to step 5 of the docs, the rsyncd
approach doesn't use ssh.  Do I need to set up a passwordless ssh setup?
I thought I had read through the docs pretty thoroughly, but I haven't
seen how to do this.  I'll check again, but if someone can point me in
the right direction, that would be helpful.

Is the lack of a passwordless ssh also the cause of the signal=PIPE I
see going to the Ubuntu machine?

Thanks.

J.S.



Re: [BackupPC-users] Signal=PIPE on restore, and other errors

2008-03-10 Thread Les Mikesell
[EMAIL PROTECTED] wrote:
> Searching through the mail archives, I see lots of posts about getting
> aborts with signal=PIPE on backups.  I've got this problem on restore.
>
> I have been backing up 5 machines (Linux and Windows) using rsyncd
> flawlessly for a few months now.
>
> Thought it was about time to check restoring (backups are useless unless
> they restore!).  I had no luck.
>
> I had two types of errors.
>
> The first thing I tried was restoring from the BackupPC server (running
> Debian testing/lenny) to an Ubuntu 7.10 box.  There, I got a failure
> with the signal=PIPE error.  Responses to questions people have about
> this error during backup say that some big files can cause it.  So I
> retried restoring a single small file and got the same result.
>
> I figured I would try restoring a single small file to other hosts on
> the network.  I tried another Debian box and a WinXP box (both of which
> have been happily backed up for months).  Both of those fail with
> "unable to read 4 bytes".  According to Les Mikesell in a response (to
> someone having this problem on a backup), it means:
>
>> Usually this means that ssh did not authenticate correctly to start the
>> connection. Be sure you have tested the passwordless access running as
>> the backuppc user on the server (the key setup is per-user).

That's if your xfer method is rsync.  You can check this by issuing an
ssh command as the backuppc user on the server.  I usually do something
like:

  ssh -l root client id

to be sure that the command executes correctly on the client without a
password prompt.
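
If the key isn't in place yet, the setup is roughly this, run as the
backuppc user on the server (paths assumed):

  ssh-keygen -t rsa        # accept the defaults, empty passphrase
  ssh-copy-id root@client  # or append ~/.ssh/id_rsa.pub to the client's
                           # /root/.ssh/authorized_keys by hand
  ssh -l root client id    # should print root's id with no password prompt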

> Not sure where to go with that.  I have rsyncd daemons running on all
> the hosts being backed up.  According to step 5 of the docs, the rsyncd
> approach doesn't use ssh.  Do I need to set up a passwordless ssh setup?
> I thought I had read through the docs pretty thoroughly, but I haven't
> seen how to do this.  I'll check again, but if someone can point me in
> the right direction, that would be helpful.
>
> Is the lack of a passwordless ssh also the cause of the signal=PIPE I
> see going to the Ubuntu machine?


If your xfer method is rsyncd, you authenticate with a username and
password that must match what is in the secrets file on the client. And
for a restore, this must give you write access to the target.  You can
test this by running the command-line rsync, using double colons to
separate the host and path, like:

  rsync [EMAIL PROTECTED]::share/path .

or the reverse to write.  If these succeed, you should have the same
access from BackupPC.
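
For example, with an illustrative file name (the module needs
read only = false in rsyncd.conf for the write direction to work):

  rsync -a [EMAIL PROTECTED]::share/path/somefile .    # read test
  rsync -a ./somefile [EMAIL PROTECTED]::share/path/   # write test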

-- 
   Les Mikesell
[EMAIL PROTECTED]
