[BackupPC-users] Unable to read 4 bytes error

2006-01-16 Thread Khaled Hussain
Hi,

I am trying to set up the rsync XferMethod to backup a linux (Mandrake)
machine.

- I have added the host to the hosts list
- created a config.pl file that overrides the general smb method config
- set up openssh on the client and server according to the backuppc
documentation

I cannot understand why this appears on the Xfer Log when I try to backup:

Running: /usr/bin/ssh -q -x -l root wisla
/usr/bin/rsync --server --sender --verbose --numeric-ids --perms --owner --g
roup --devices --links --times --block-size=2048 --recursive --ignore-times
. /var/lib/asterisk/agi-bin/
Xfer PIDs are now 18527
Read EOF: Connection reset by peer
Tried again: got 0 bytes
Done: 0 files, 0 bytes
Got fatal error during xfer (Unable to read 4 bytes)
Backup aborted (Unable to read 4 bytes)

Any ideas anyone please?

I am used to setting up rsyncd and smb, and am now using rsync as a
last-resort method.

Kind Regards

Khaled Hussain
Server Administrator
Coulomb Ltd
020 8114 1013



---
This SF.net email is sponsored by: Splunk Inc. Do you grep through log files
for problems?  Stop!  Download the new AJAX search engine that makes
searching your log files as easy as surfing the  web.  DOWNLOAD SPLUNK!
http://ads.osdn.com/?ad_id=7637&alloc_id=16865&op=click
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


FW: [BackupPC-users] Unable to read 4 bytes error (solved)

2006-01-16 Thread Khaled Hussain
Hi All,

I think I have solved the problem...
Following instructions on the backuppc configuration documentation, I
created the following per-pc config.pl for my linux client (wisla):

do "../conf/config.pl";

$Conf{XferMethod} = 'rsync';

$Conf{RsyncClientPath} = '/usr/bin/rsync';

$Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';

$Conf{RsyncClientRestoreCmd} = '$sshPath -q -x -l root $host $rsyncPath
$argList+';

$Conf{RsyncShareName} = [
'/var/lib/asterisk/agi-bin',
'/usr/local/apache2/htdocs',
'/etc/asterisk',
'/home/voip',
'/var/log/asterisk',
'/home/voip/dbbackup',
'/var/spool/asterisk/voicemail',
'/var/spool/asterisk/monitor',
'/var/spool/asterisk/fax'
];

$Conf{RsyncArgs} = [
'--verbose',
'--numeric-ids',
'--perms',
'--owner',
'--group',
'--devices',
'--links',
'--times',
'--block-size=2048',
'--recursive'
];

$Conf{RsyncRestoreArgs} = [
   '--numeric-ids',
   '--perms',
   '--owner',
   '--group',
   '--devices',
   '--links',
   '--times',
   '--block-size=2048',
   '--relative',
   '--ignore-times',
   '--recursive'
];

But for some reason it seemed that $argList in:
$Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+'
was replaced with the restore args rather than the RsyncArgs listed above
(see the error msg in my first email).

So after commenting out the restore variables $Conf{RsyncClientRestoreCmd}
and $Conf{RsyncRestoreArgs}, I was able to back up.

However, anyone have any idea why this happened?

Kindly,

Khaled Hussain
Server Administrator
Coulomb Ltd
020 8114 1013





[BackupPC-users] How Rsync and rsyncd work

2006-01-19 Thread Khaled Hussain
Hello List,

I was wondering if someone could help me understand how the rsync and
rsyncd methods used by BackupPC differ in the way data is backed up.
There is limited explanation of this in the documentation, and I was
curious how BackupPC 'directly' connects to rsyncd, whereas rsync
requires ssh/rsh to back up data; doesn't the rsyncd method require an
underlying ssh connection?
Any light on how these methods work would be greatly appreciated.
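To make the question concrete, here is how I currently picture the two
per-PC configs (the share names and credentials below are made up). As
far as I can tell, the rsync method starts rsync on the client over
ssh/rsh, while the rsyncd method talks to an already-running rsync
daemon on TCP port 873 with no ssh underneath; please correct me if this
sketch is wrong:

```perl
do "../conf/config.pl";

# rsync method: BackupPC logs in over ssh and starts rsync itself.
$Conf{XferMethod}     = 'rsync';
$Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';
$Conf{RsyncShareName} = ['/etc'];            # a filesystem path

# rsyncd method: BackupPC connects straight to the daemon on port 873.
# $Conf{XferMethod}     = 'rsyncd';
# $Conf{RsyncdUserName} = 'backup';          # made-up credentials
# $Conf{RsyncdPasswd}   = 'secret';
# $Conf{RsyncShareName} = ['module'];        # an rsyncd module name, not a path
```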

Kind Regards

Khaled





[BackupPC-users] BackupPC_link

2006-02-17 Thread Khaled Hussain
Hi all,

I have two questions that I would be glad to have answered:

1. What is meant by pool exactly? Is this referring to all previous
backups? Is it referring to files that are common between computers?

2. I have seen on the list archives some emails about link errors, and
it seems that .../log/LOG shows lots of link errors. I have confirmed
that my cpool and host dirs are on two different file systems: cpool is
on '/' (/dev/hda2) and the host dirs are on a software raid setup
(/dev/md0). It seems that since BackupPC was set up on our systems some
2-3 years ago we were getting these link errors, but all machines were
being backed up fine. So my questions are: what are the implications of
having the link errors? Am I duplicating identical data and using up
unnecessary disk space? What is the purpose of cpool?

So much for two questions...

Any clarification on this whole issue will be greatly appreciated.

Khaled Hussain
Server Administrator
Coulomb Ltd
020 8114 1013





RE: [BackupPC-users] BackupPC_link

2006-02-22 Thread Khaled Hussain
Hi Craig,

> "Khaled Hussain" writes:
>
> > 1. What is meant by pool exactly? Is this referring to all previous
> > backups? Is it referring to files that are common between computers?
>
> A single copy of every file is stored in the pool, whether or
> not it appears multiple times among the backups.

So I take it that this is a single unique copy of every file regardless of
host and permissions/timestamps.

> > 2. I have seen on the list archives some emails about link errors,
> > and it seems that .../log/LOG shows lots of link errors. I have
> > confirmed that my cpool and host dirs are on two different file
> > systems: cpool is on '/' (/dev/hda2) and the host dirs are on a
> > software raid setup (/dev/md0). It seems that since BackupPC was set
> > up on our systems some 2-3 years ago we were getting these link
> > errors, but all machines were being backed up fine. So my questions
> > are: what are the implications of having the link errors?
>
> You must have the host and cpool directories on the same
> file system.  The result is that a lot of space will be
> wasted since the backups cannot be hardlinked to the pool.
> The backups should still be fine, since the original files
> should be preserved if the links fail.

Yes, I am always able to restore backed-up files without any issues, so
the backups are fine. However, if I were to move the cpool (currently on
the root partition) to the md0 device, I understand that the old backups
will not be affected and all new backups should be linked successfully.
The old backups will gradually expire away, leaving more recent backups
with hardlinked files in the pool; this will clear the server of
duplicate data... is this correct?

Why would it matter that cpool is on a different filesystem, since
BackupPC_link uses pool, which is on the same filesystem?

I would be grateful if you could give me a clearer definition of
'hardlink' to clarify BackupPC's operation in my head.
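To check my own understanding, I tried a quick experiment with plain
files (nothing BackupPC-specific; the filenames are made up):

```shell
# Everything happens in a scratch directory.
cd "$(mktemp -d)"
echo "same content" > pool_file      # stand-in for the pool copy
ln pool_file backup_file             # hard link: a second name for the same inode
stat -c %h pool_file                 # prints 2 -- the inode now has two names
rm pool_file                         # removing one name does not delete the data
cat backup_file                      # prints: same content
```

If that is right, then a pool "copy" costs almost no extra disk space,
and deleting either name leaves the other one intact.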

>
> Craig

Many thanks in advance for your patience.

Khaled





RE: [BackupPC-users] BackupPC_link

2006-02-23 Thread Khaled Hussain
Hi Guillaume,

Thanks to you and to Craig for your replies.


> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of
> Guillaume Filion
> Sent: 22 February 2006 19:20
> To: BackupPC-users@lists.sourceforge.net
> Subject: Re: [BackupPC-users] BackupPC_link
>
>
> Khaled Hussain a écrit :
> > So I take it that this is a single unique copy of every file
> > regardless of host and permissions/timestamps.
>
> Yes, and it's a unique copy regardless of filename too.
>
> > Yes, I am always able to restore backed-up files without any issues,
> > so the backups are fine. However, if I were to move the cpool
> > (currently on the root partition) to the md0 device, I understand
> > that the old backups will not be affected and all new backups should
> > be linked successfully. The old backups will gradually expire away,
> > leaving more recent backups with hardlinked files in the pool; this
> > will clear the server of duplicate data... is this correct?
>
> Yes.
>
> > Why would it matter that cpool is on a different filesystem, since
> > BackupPC_link uses pool, which is on the same filesystem?
>
> I'm pretty sure that you're not using pool at all. As Craig said, you
> can only use cpool or pool, but not both. Recent versions of BackupPC
> always use cpool.
>
> > I would be grateful if you could give me a clearer definition of
> > 'hardlink' to clarify BackupPC's operation in my head.
>
> See http://en.wikipedia.org/wiki/Hard_link
>
> Best,
> GFK's
> --
> Guillaume Filion, ing. jr
> Logidac Tech., Beaumont, Québec, Canada - http://logidac.com/
> PGP Key and more: http://guillaume.filion.org/

Regards

Khaled





[BackupPC-users] no xfer log

2006-02-24 Thread Khaled Hussain
Hi all,

For one of my XP hosts I don't seem to be getting Xfer log files, only a
LOG file. The LOG file shows a 'child exited prematurely' error an hour
after the backup starts for this host, and that's all it says. I
understand the Xfer log is useful for debugging info, so why does it not
exist?

Any help is greatly appreciated...

Kind Regards

Khaled





[BackupPC-users] change root user for rsync over ssh?

2006-03-01 Thread Khaled Hussain
Hi all,

I am in a bit of a dilemma:

I have set up rsync+ssh to back up a FreeBSD server. However, this
server is important enough that I cannot permit root login in its
sshd_config file. This in turn means that BackupPC cannot log in as root
on the remote server.

How can I get around this? I know I can change the user in the per-PC
config file, but I would need a user with root privileges. The only
solution seems to be to create another user with root's privileges, but
the problem is that files with owner-only read/write permissions cannot
be accessed by anyone other than root!

I obviously cannot change the permissions on these files.

Being a novice, I am hoping for an answer like: you don't need
permissions to rsync files over ssh, only to 'access' them (if there is
a difference), but I doubt it.

I really don't want to permit root login, because if I did I would have
to do it for tens of other servers, which wouldn't make our network
secure at all.
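One workaround I am considering (untested; the 'backup' user name and
paths are made up) is a dedicated non-root login that is only allowed to
run rsync as root via sudo:

```perl
do "../conf/config.pl";

$Conf{XferMethod} = 'rsync';

# Log in as an unprivileged user and let sudo run rsync as root.
# NOPASSWD is required because there is no tty to type a password into.
$Conf{RsyncClientCmd}        = '$sshPath -q -x -l backup $host sudo $rsyncPath $argList+';
$Conf{RsyncClientRestoreCmd} = '$sshPath -q -x -l backup $host sudo $rsyncPath $argList+';
```

On the client, the matching sudoers entry would be something like
'backup ALL = NOPASSWD: /usr/bin/rsync', which at least restricts the
account to that one binary rather than granting a full root shell.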

Kind Regards

Khaled Hussain
Server Administrator
Coulomb Ltd
020 8114 1013





[BackupPC-users] a one off backup of /boot partition in raid5 array - your thoughts/advice

2006-03-15 Thread Khaled Hussain
Hi all,

I have setup a software raid5 array consisting of 3 sata hard drives with a
small boot partition at the beginning of the first disk. Therefore the setup
is as follows:
sda | /boot | swap |          raid5          |
sdb |          raid5          |  (unused)    |
sdc |          raid5          |  (unused)    |

What I would like to do arises from the fact that, in the event of an
sda failure, I won't be able to boot the raid array. Perhaps I can make
a one-off backup of the /boot partition to BackupPC; then, should a
failure happen, I can just restore the backup of the /boot partition and
have the machine boot, and raid5 will automatically recreate the failed
raid partition on sda.
I hope this makes sense; please correct me if I'm wrong.

However, another option I am considering, and am pushing for, is to copy
the boot partition over to sdb and sdc (in the area available after the
raid partition). I know this can be done using the dd command, but
finding out where the free space on sdb and sdc starts, as well as where
the /boot partition ends, is giving me problems. The partitions don't
seem to add up!
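As a sanity check of the dd part, I tried it on throwaway files rather
than real devices (the device names in my plan, /dev/sda1 etc., are the
real targets; do not run dd against real disks casually):

```shell
cd "$(mktemp -d)"
# File-backed stand-ins for the partitions.
dd if=/dev/zero of=boot.img bs=1M count=4 2>/dev/null   # pretend /boot
dd if=boot.img of=boot-copy.img bs=1M 2>/dev/null       # byte-for-byte clone
cmp -s boot.img boot-copy.img && echo identical         # prints: identical
```

The hard part remains working out the correct start offsets and lengths
on the real disks, which is exactly where my numbers don't add up.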

FYI:
df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/md0  540G  1.4G  511G   1% /
/dev/sda1  76M   13M   60M  18% /boot
/dev/shm  501M 0  501M   0% /dev/shm

hdparm /dev/sdXX

/dev/sda1:
 IO_support   =  0 (default 16-bit)
 readonly =  0 (off)
 readahead= 256 (on)
 geometry = 36483/255/63, sectors = 82220544, start = 63

/dev/sda2:
 IO_support   =  0 (default 16-bit)
 readonly =  0 (off)
 readahead= 256 (on)
 geometry = 36483/255/63, sectors = 1061061120, start = 160650

/dev/sda3:
 IO_support   =  0 (default 16-bit)
 readonly =  0 (off)
 readahead= 256 (on)
 geometry = 36483/255/63, sectors = 298939576320, start = 2233035

/dev/sdb1:
 IO_support   =  0 (default 16-bit)
 readonly =  0 (off)
 readahead= 256 (on)
 geometry = 36481/255/63, sectors = 300066407424, start = 63

/dev/sdc1:
 IO_support   =  0 (default 16-bit)
 readonly =  0 (off)
 readahead= 256 (on)
 geometry = 36481/255/63, sectors = 300066407424, start = 63

As far as I understand, with raid5 all the raid5 partitions have to be
the same size, so why aren't sda3, sdb1 and sdc1 the same size?

I appreciate this is slightly off-topic but please advise me.

Kindest Regards

Khaled Hussain
Server Administrator
Coulomb Ltd









[BackupPC-users] backupPC nightly backups spilling into the day...

2006-03-17 Thread Khaled Hussain
Hi All,

I have recently added about 15-20 hosts to the backup system, which was
already backing up about that number of hosts nightly. My
$Conf{MaxBackups} was set to 2 but is now set to 4. Also, hard links
were not being created in the cpool because it was on a different
filesystem from the PC directories; I have now corrected this, so hard
links are being created. An important note: I am using smb to back up
the Windows (NT) servers (to avoid installing cwRsync), and rsync over
ssh to back up all the new linux/bsd boxes, since none have rsyncd
installed and it has endless dependencies when I try to install it.

My first guess is that a machine using rsync+ssh is put on backup but
its auto-login setup in .ssh is incorrect, so the server is prompting
for a password and delaying the backup... is this possible? The logs
don't seem to hint at this possibility, though.

On top of all that, my BackupPC server has slowed down dramatically.

Now it seems that the consequence of some or all of these actions is
that backups run constantly at night and for most of the day. Possible
solutions seem to be:
- increase the MaxBackups variable further
- decrease the WakeupSchedule variable, since the clients are constantly
on the network; currently BackupPC wakes up every two hours between
18:00 and 7:00, but I think I can change it to 2 or 3 times a night(?);
I am assuming here that unnecessary wakeups waste time
- increase the BackupPCNightlyPeriod variable from 8 (current) to about
16 to shorten the time it takes to traverse the pool each night (keeping
in mind the pool will grow, since links have only just started being
created)
- increase the FullPeriod and IncrPeriod variables so that the server is
not congested every night; current settings are 6.97 and 0.97
respectively, but how can I set incrementals for different machines to
happen on alternate days so I get load balancing on the backupPC server?
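Concretely, the sort of config.pl changes I have in mind look like this
(the numbers are guesses to be tuned, not tested values):

```perl
# Run more backups in parallel.
$Conf{MaxBackups} = 6;

# Wake up a few times a night instead of every two hours.
$Conf{WakeupSchedule} = [19, 23, 3];

# Spread the nightly pool traversal over 16 nights instead of 8.
$Conf{BackupPCNightlyPeriod} = 16;

# Space backups out a little further than the current 6.97/0.97 days.
$Conf{FullPeriod} = 13.97;
$Conf{IncrPeriod} = 1.97;
```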

Anyway, please could anyone suggest why I am having this problem and how
to overcome it, and/or comment on the above proposals in terms of
effectiveness, etc.

BTW

1. How can I clear all pending backups, other than by going into each
host being backed up and stopping it?

2. At every wakeup, does BackupPC reload the per-PC configs? Or do I
need to restart BackupPC every time I update a per-PC config?

Kindest Regards

Khaled





[BackupPC-users] specify a particular day for fulls

2006-03-28 Thread Khaled Hussain
Hi All,

Is it possible to configure backupPC to run full backups on Saturdays
throughout the day, and even on Sundays?

I am trying, in essence, not to run fulls during week nights and to save
them for the weekend, when they can run day and night to completion.

As far as I'm aware, you can only specify the interval between backups
and not when to run them(?).
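The closest thing I can see in config.pl is $Conf{BlackoutPeriods}; if I
understand it correctly, it stops backups during the listed times (once
the host has been pingable often enough), so something like the sketch
below would push everything that is due, fulls included, to the weekend.
The caveat is that it would delay weekday incrementals too, so it is
only a partial answer:

```perl
do "../conf/config.pl";

# Block backups around the clock Monday-Friday
# (weekDays: 0 = Sunday ... 6 = Saturday), so whatever is due
# -- including fulls -- runs at the weekend instead.
$Conf{BlackoutPeriods} = [
    {
        hourBegin => 0.0,
        hourEnd   => 24.0,
        weekDays  => [1, 2, 3, 4, 5],
    },
];
```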

Kind Regards

Khaled





[BackupPC-users] FullPeriod and IncrPeriod

2006-04-04 Thread Khaled Hussain
Hi All,

I was curious why BackupPC stops incremental backups when
$Conf{FullPeriod} is disabled completely (-2) or partially (-1). I have
set this up for a couple of hosts spanning Windows and linux OSs, but
incrementals stop as well. The following is an example of one per-PC
config.pl:

begin-

do "../conf/config.pl";

$Conf{XferMethod} = 'rsyncd';

$Conf{RsyncdUserName} = 'username';
$Conf{RsyncShareName} = ['backup'];
$Conf{RsyncdPasswd} = 'password';


$Conf{ClientTimeout} = '2';

$Conf{FullPeriod} = '-1';

$Conf{IncrPeriod} = '0.97';

$Conf{RsyncArgs} = [
'--verbose',
'--numeric-ids',
'--perms',
'--owner',
'--group',
'--devices',
'--links',
'--times',
'--block-size=2048',
'--recursive',
];
end

and for a windows machine:

begin-

do "../conf/config.pl";

$Conf{SmbShareUserName} = 'username';

$Conf{SmbSharePasswd} = 'password';

$Conf{BackupFilesOnly} = '/Documents and Settings';

$Conf{FullPeriod} = '-2';

end-

It usually performs incrementals until the incrementals are full (i.e.
resemble a full backup), after which no further backups are made.

Also, FYI, I have noticed that for hosts with FullPeriod disabled, the
Last Attempt field in BackupPC shows 'nothing to do (host not found)'.

Any help as to what I'm doing wrong, and how to correctly achieve what I
am trying to do, would be greatly appreciated.

Kind Regards

Khaled


