Re: [BackupPC-users] dissapointed and lost

2007-07-09 Thread ilias
 
> Windows normally locks some files so you can't back them up without some 
> special tricks.

thanks for your replies

I fixed the missing bytes problem and I'm able to restore files normally now. I
still get the linking error though. What do you mean by "special tricks"? I would
like to learn some :)



-
This SF.net email is sponsored by DB2 Express
Download DB2 Express C - the FREE version of DB2 express and take
control of your XML. No limits. Just data. Click to get it now.
http://sourceforge.net/powerbar/db2/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] /var/lib/backuppc replace HDD

2007-07-09 Thread Krsnendu dasa

On 04/07/07, Nils Breunese (Lemonbit) <[EMAIL PROTECTED]> wrote:


Stefan Degen wrote:

> /var/lib/backuppc has its own harddisk. The problem is, that the
> harddisk is full and there is no LVM or RAID.
>
> So is it possible to replace the harddisk like this (without losing
> the backups)?
>
> 1. stop backuppc
>
> 2. Install the new harddisk and set the mountpoint
> to /var/lib/backuppc
>
> 3. copy all data stored on the old harddisk to the new one with
> rsync -H /old_harddisk /new_var/lib/backuppc
>
> 4. start backuppc

rsync -paH or something might be better, but because of all the
hardlinks it might take quite a while to copy all the data. The
fastest way to copy and preserve everything is probably to use dd to
copy everything to the new drive and then grow the filesystem
afterwards.
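The hard-link point is the crux: BackupPC's pool stores each unique file once, and every backup references it through hard links, so a copy tool that doesn't track links (rsync without -H) silently duplicates data. A minimal sketch of what a hard link is, using only coreutils (the temporary paths are placeholders, not BackupPC's real layout):

```shell
# Toy illustration: BackupPC stores one copy in the pool and hard-links
# it from each backup. rsync without -H would turn these two names into
# two independent copies, doubling the space used.
set -e
d=$(mktemp -d)
echo "file contents" > "$d/pool_file"
ln "$d/pool_file" "$d/pc_link"    # second directory entry, same inode
stat -c %h "$d/pool_file"         # prints 2: one inode, two names
```

With millions of such links in a real pool, recreating them is what makes a file-level copy slow, which is why a block-level copy (dd) is often faster.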

I have a dedicated disk for BackupPC; it is using LVM. Can I use dd to
clone this to a newer hard drive?


Re: [BackupPC-users] PostXXXXCmd REMOTEUSER $ubstitution

2007-07-09 Thread Mark Sopuch

Holger Parplies wrote:
> Hi,
>
> Mark Sopuch wrote on 02.07.2007 at 19:50:33 [Re: [BackupPC-users] PostCmd 
> REMOTEUSER $ubstitution]:
>   
>> [...]
>> Okay I got around to running some tests with PostDumpCmd without mailing 
>> or schedules but just with some echo'ing and relying on stdout/stderr to 
>> print in the XferLog output (which appears to be a good debugging 
>> tool!). So from the XferLOGs:
>>
>> from one dump;
>>
>> Executing DumpPostShareCmd: echo backuppc_admins
>> backuppc_admins
>>
>>
>> from another after the change to BackupPC_dump;
>> 
>^
> the hack was to the BackupPC daemon, not to BackupPC_dump. It's probably
> just a typo, but just to be clear ...
>   
That did the trick and without use Env exactly as you said. Excellent 
result and now my mailouts are personalised to the initiating user.
>   
>> Executing DumpPostShareCmd: echo $ENV{REMOTEUSER}
>> $ENV{REMOTEUSER}
>>
>>
>> I have tried setting and not setting Env to Load in the global config 
>> and each time had a "use Env;" in the BackupPC_dump.
>> 
>
> I believe that's not your problem. You don't need "use Env;". The pre- and
> post-commands are neither passed to a shell nor eval()led, so accessing the
> environment might be a bit awkward. I'd try either:
>
> 1.) $Conf {DumpPostShareCmd} = 'sh -c "echo $REMOTEUSER"';
> Explicitly let the shell expand the environment variable.
>   
I'll need to toy with this one a bit more just to see but cut-paste of 
the above didn't appear to work. Must be path stuff.
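For what it's worth, the literal-versus-expanded behaviour Holger describes in option 1 can be reproduced with plain shell tools. This is only an illustrative sketch, not BackupPC itself; `env` stands in for BackupPC's shell-less exec of the command:

```shell
# Pre-/post-commands are exec()ed directly, not run through a shell, so a
# '$REMOTEUSER' in the command string arrives as a literal argument.
export REMOTEUSER=alice
env echo '$REMOTEUSER'        # exec without a shell: prints the literal string $REMOTEUSER
sh -c 'echo $REMOTEUSER'      # wrapped in 'sh -c': the shell expands it and prints alice
```

That is why wrapping the echo in `sh -c "..."` (option 1) or moving it into a script (option 2) makes the variable expand.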
> 2.) $Conf {DumpPostShareCmd} = '/some/shell/script';
> where /some/shell/script does an 'echo $REMOTEUSER'. You'd probably end
> up using a shell script in the end anyway, wouldn't you? There's of
> course no reason not to use a Perl script instead of a shell script :).
>   
I did this one.
> 3.) $Conf {DumpPostShareCmd} = '&{sub {print $ENV {REMOTEUSER}, "\n";}}';
> do it in Perl without forking. If it's in BackupPC_dump's environment,
> it will be passed on to a "real" DumpPostShareCmd.
>   
Sweet closures.
>   
>> I have tried it with reloading just the server config and also with 
>> restarting the BackupPC service.
>> 
>
> You need to restart the BackupPC daemon after modifying its code, but
> changing $Conf {DumpPostShareCmd} should not require any action.
> BackupPC_dump reads the configuration from the configuration files. It does
> not "inherit" the values from the daemon, so it should always operate on
> current values.
>
> Regards,
> Holger
>   
Thanks Holger. I'll try to report back the results earlier next time too.
Mark



Re: [BackupPC-users] Permission denied at /usr/local/BackupPC/bin/BackupPC_dump line 193

2007-07-09 Thread Nils Breunese (Lemonbit)

Arch Willingham wrote:

I am trying to do my first backup. For whatever reason it's not  
working. If I look in "/var/log/BackupPC/LOG" I see this error any  
time I try to start a backup:


"2007-07-08 20:00:01 truck: mkdir /data/BackupPC: Permission denied  
at /usr/local/BackupPC/bin/BackupPC_dump line 193"


Any ideas?


It seems the backuppc user doesn't have permission to write to /data/ 
BackupPC (which I guess you use as the location for your BackupPC pool).


Nils Breunese.




[BackupPC-users] Archive only files changed on incremental backup

2007-07-09 Thread Vjacheslav V. Borisov
Hello!

Is it possible to archive only files changed on last incremental backup?



Re: [BackupPC-users] /var/lib/backuppc replace HDD

2007-07-09 Thread Carl Wilhelm Soderstrom
On 07/09 08:25 , Krsnendu dasa wrote:
> I have a dedicated disk for BackupPC; it is using LVM. Can I use dd to
> clone this to a newer hard drive?

Try it and find out. :)
(Yes, you can).

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



[BackupPC-users] rsync incremental-only backups for eternity

2007-07-09 Thread Rob Owens
I see that the only difference between rsync "full" and rsync 
"incremental" backups is that "full" uses the --ignore-times option.  
Under what circumstances would this option be desirable?  Seems to me 
that doing incremental backups forever would suffice, but maybe I'm 
missing something.  What is the risk associated with only performing 
incremental backups with rsync?

Thanks

-Rob



Re: [BackupPC-users] rsync incremental-only backups for eternity

2007-07-09 Thread Carl Wilhelm Soderstrom
On 07/09 08:22 , Rob Owens wrote:
> I see that the only difference between rsync "full" and rsync 
> "incremental" backups is that "full" uses the --ignore-times option.  
> Under what circumstances would this option be desirable?  Seems to me 
> that doing incremental backups forever would suffice, but maybe I'm 
> missing something.  What is the risk associated with only performing 
> incremental backups with rsync?

BackupPC takes 'incrementals' against the last 'full' backup. So the farther
you get from the last 'full', the bigger your delta against it will be.
Obviously this depends heavily on how often your data changes; but it makes
sense to run a full backup occasionally. 

I think there may be some other things as well, regarding testing for file
corruption on the server side.

The way I tend to think of it (and I may be completely off base here) is
that 'fulls' are like what you get if you just run the 'rsync' tool from the
command line with no fancy options. 'Incrementals' are a faster way to do
things.

Always keep in mind though that BackupPC is *not* using the rsync tool on the
server side. It's using the File::RsyncP Perl module.

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] /var/lib/backuppc replace HDD

2007-07-09 Thread Ralf Gross
Krsnendu dasa wrote:
> I have a dedicated disk for BackupPC; it is using LVM. Can I use dd to
> clone this to a newer hard drive?

I've done this a few weeks ago and it worked.

Ralf



[BackupPC-users] backuppc+ssh troubles.

2007-07-09 Thread Robin-David Hammond

I have configured the user backuppc on my backuppc server to log in to the client
machine using an authorized key.

[EMAIL PROTECTED] ssh backup_client

works FINE!

i have configured the server thusly:



#
$Conf{ClientNameAlias} = 'backup_client';

#
# If the server can't be pinged, put this in so ping "works"
#
$Conf{PingPath} = '/usr/bin/true';

#
# Tell BackupPC which user name and password to use.  This should
# match the userName:password pair in the C:\rsyncd\rsyncd.secrets
# file on the client.
#

$Conf{rsyncPath} = '/usr/pkg/bin/rsync';
$Conf{sshPath} = '/usr/local/bin/ssh';
# $Conf{sshPath} =  'echo >>/tmp/rsync.cmd ' ; # tried to force some
# logging; didn't work.

$quote = "'";
$Conf{RsyncClientCmd} = '$sshPath -vvv -x -l backuppc backup_client '. 
'$rsyncPath $argList+';

# $Conf{RsyncdUserName} = 'techadm';
# $Conf{RsyncdPasswd} = '';

$Conf{RsyncArgs} = [
  # original arguments here
  '--numeric-ids',
  '--perms',
  '--owner',
  '--group',
  '--devices',
  '--links',
  '--times',
  '--block-size=2048',
  '--recursive',

  # my args
  '--specials',  # <---
  '--one-file-system'
];

#
# Tell BackupPC which share to backup.  This should be the name
# of the module from C:\rsyncd\rsyncd.conf on the client (the
# name inside the square brackets).  In the sample rsynd.conf
# file the cDrive module is the entire C drive.
#

$Conf{RsyncShareName} = '/mnt/weekly/';

#$Conf{BackupFilesOnly} = {
#  'Inventory' => [
#'/home/backup/weekly'
#  ]
#};

$Conf{XferMethod} = 'rsync';
$Conf{XferLogLevel} = 5;



but the result in the logs is:

2007-07-05 19:03:14 full backup started for directory /mnt/weekly/
2007-07-05 19:03:19 Got fatal error during xfer (fileListReceive failed)



Both systems are using rsync protocol v 29.



Any ideas where to look first?




Re: [BackupPC-users] Permission denied at/usr/local/BackupPC/bin/BackupPC_dump line 193

2007-07-09 Thread Arch Willingham
For the heck of it, I gave that directory (and all its subdirectories) full 
permission to backuppc as well as all users. I still get the same error message.

Arch

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Nils
Breunese (Lemonbit)
Sent: Monday, July 09, 2007 6:46 AM
To: backuppc-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] Permission denied
at/usr/local/BackupPC/bin/BackupPC_dump line 193


Arch Willingham wrote:

> I am trying to do my first backup. For whatever reason its not  
> working. If I look in "/var/log/BackupPC/LOG" I see this error any  
> time I try to start a backup:
>
> "2007-07-08 20:00:01 truck: mkdir /data/BackupPC: Permission denied  
> at /usr/local/BackupPC/bin/BackupPC_dump line 193"
>
> Any ideas?

It seems the backuppc user doesn't have permission to write to /data/ 
BackupPC (which I guess you use as the location for your BackupPC pool).

Nils Breunese.



Re: [BackupPC-users] Permission denied at/usr/local/BackupPC/bin/BackupPC_dump line 193

2007-07-09 Thread Les Mikesell
Arch Willingham wrote:
> For the heck of it, I gave that directory (and all its subdirectories) full 
> permission to backuppc as well as all users. I still get the same error 
> message.
> 

Are you running a linux distribution that has SELinux enabled?

-- 
   Les Mikesell
[EMAIL PROTECTED]



Re: [BackupPC-users] Permission denied at/usr/local/BackupPC/bin/BackupPC_dump line 193

2007-07-09 Thread Arch Willingham
Yes... Fedora FC7, but, as usual, the problem was not due to Fedora or
BackupPC... it was my fault. When I said I had set the file permissions to
allow user backuppc full access, I was incorrect: I had set the permissions
wrong. I just went back, set them correctly, and a backup is running as I
type this message!

Sorry about my stupidity.

Arch

-Original Message-
From: Les Mikesell [mailto:[EMAIL PROTECTED]
Sent: Monday, July 09, 2007 11:35 AM
To: Arch Willingham
Cc: Nils Breunese (Lemonbit); backuppc-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] Permission denied
at/usr/local/BackupPC/bin/BackupPC_dump line 193


Arch Willingham wrote:
> For the heck of it, I gave that directory (and all its subdirectories) full 
> permission to backuppc as well as all users. I still get the same error 
> message.
> 

Are you running a linux distribution that has SELinux enabled?

-- 
   Les Mikesell
[EMAIL PROTECTED]



Re: [BackupPC-users] rsync incremental-only backups for eternity

2007-07-09 Thread Jean-Michel Beuken
Hello Carl,

Carl Wilhelm Soderstrom wrote:
> BackupPC takes 'incrementals' against the last 'full' backup. So the farther
> you get from the last 'full', the bigger your delta against it will be.
> Obviously this depends heavily on how often your data changes; but it makes
> sense to run a full backup occasionally. 
>   
In theory, with version 3.x of BackupPC, we can take incrementals of
different levels ($Conf{IncrLevels})...
but my limited experience shows that we don't gain much time with
higher levels (with rsync); worse, some incrementals take more
time than the full :-(

What is the cause?
- the design of BackupPC (hard links, pool, ... -> a lot of disk seeks)?
- the design of File::RsyncP?

Has somebody had a better experience using $Conf{IncrLevels}?

regards

jmb



[BackupPC-users] Backing up Vista

2007-07-09 Thread CoolBreeze

Can anyone who has been successful in backing up Vista please share how you
were able to configure rsync to run as a service on Vista? This was trivial
in Windows XP, but due to all of the changes made in Vista it's not a simple
process. At least I haven't been able to get it to work. Oh, and I am using
Vista Ultimate. Either steps posted here or a link to a guide would be most
helpful.

Thank you much


Re: [BackupPC-users] rsync incremental-only backups for eternity

2007-07-09 Thread Carl Wilhelm Soderstrom
On 07/09 06:31 , Jean-Michel Beuken wrote:
> In theory, with version 3.x of BackupPC, we can take incrementals of
> different levels ($Conf{IncrLevels})...
> but my limited experience shows that we don't gain much time with
> higher levels (with rsync); worse, some incrementals take more
> time than the full :-(
> 
> What is the cause?
> - the design of BackupPC (hard links, pool, ... -> a lot of disk seeks)?
> - the design of File::RsyncP?

I've not experimented with 3.x yet; so I have no experience with varying
levels.

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] /var/lib/backuppc replace HDD

2007-07-09 Thread Matthias Meyer
On Monday, 9 July 2007 at 15:25, Ralf Gross wrote:
> Krsnendu dasa wrote:
> > I have a dedicated disk for BackupPC; it is using LVM. Can I use dd to
> > clone this to a newer hard drive?
>
> I've done this a few weeks ago and it worked.
>
> Ralf
>
cc -a also works
-- 
Don't Panic



Re: [BackupPC-users] Permission denied at /usr/local/BackupPC/bin/BackupPC_dump line 193

2007-07-09 Thread Matthias Meyer
On Monday, 9 July 2007 at 02:42, Arch Willingham wrote:
> I am trying to do my first backup. For whatever reason its not working.
> If I look in "/var/log/BackupPC/LOG" I see this error any time I try to
> start a backup:
>
> "2007-07-08 20:00:01 truck: mkdir /data/BackupPC: Permission denied at
> /usr/local/BackupPC/bin/BackupPC_dump line 193"
>
> Any ideas?
>
> Thanks!
>
> Arch
>
Check your $Conf{TopDir} (e.g. grep TopDir /etc/backuppc/config.pl).
If it is set to /data/BackupPC, you have to mkdir /data/BackupPC.
If it is not set to /data/BackupPC, you can set it (either by editing
/etc/backuppc/config.pl, e.g. with nano, or within the web interface).
That's all.
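A sketch of that check as a script. The path /etc/backuppc/config.pl is the Debian-style default and may differ on your install, so the example simulates the config with a temporary file to stay self-contained:

```shell
# Hypothetical TopDir check; on a real system point $conf at your actual
# config.pl instead of the simulated one below.
set -e
conf=$(mktemp)
printf "%s\n" "\$Conf{TopDir} = '/data/BackupPC';" > "$conf"   # stand-in config line
# Pull the quoted value out of the $Conf{TopDir} assignment:
topdir=$(grep '^.Conf{TopDir}' "$conf" | cut -d"'" -f2)
echo "TopDir is $topdir"    # -> TopDir is /data/BackupPC
```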

br
Matthias
-- 
Don't Panic



Re: [BackupPC-users] Permission denied at /usr/local/BackupPC/bin/BackupPC_dump line 193

2007-07-09 Thread Matthias Meyer
On Monday, 9 July 2007 at 12:45, Nils Breunese (Lemonbit) wrote:
> Arch Willingham wrote:
> > I am trying to do my first backup. For whatever reason its not
> > working. If I look in "/var/log/BackupPC/LOG" I see this error any
> > time I try to start a backup:
> >
> > "2007-07-08 20:00:01 truck: mkdir /data/BackupPC: Permission denied
> > at /usr/local/BackupPC/bin/BackupPC_dump line 193"
> >
> > Any ideas?
>
> It seems the backuppc user doesn't have permission to write to /data/
> BackupPC (which I guess you use as the location for your BackupPC pool).
>
> Nils Breunese.
I believe backuppc doesn't need write access to /data or /data/BackupPC. It
only needs execute permission on these directories, and write access to the
files and directories within /data/BackupPC.

br
Matthias
-- 
Don't Panic



[BackupPC-users] Backup in Progress

2007-07-09 Thread Jim Elliott
To the backuppc list,


I am new to BackupPC, so please excuse my ignorance of the software.
I have a job that has a status of "backup in progress". The last backup was
run on 7/1/07. Is there a way that I can check to see what this backup is
waiting for?

Please email me with any suggestions.

Thanks,

Jim


Re: [BackupPC-users] tarextract, checksum error

2007-07-09 Thread Craig Barratt
Bruno writes:

> I have installed backuppc 2.1.2pl1 on debian etch. 
> 
> Currently i backup 7 servers, on 5 servers all works fine but:
> 
> On two server i got the following xferlog:
> 
> tarExtract: .
> 
> tarExtract: : checksum error at 
> 
> tarExtract: Can't open /var/lib/backuppc/pc/dayta.brain-tec.ch/new/f%2f/
> for empty output
> 
>   create 0   0/0   0 .
> 
> tarExtract: : checksum error at 
> 
> the tar version on both machines is 1.16, and the xfer method is tar. 
> 
> Now I have updated to 3.0.0, but the problem persists.

Can you send me the whole XferLOG file off-list?

The file just prior to the error is not being extracted
(or archived) correctly.  I assume it is a large file.

The next step would be to run just the tar command and
send me the tar file (which might be hard since I expect
it will be big).  Otherwise, I can send you some debug
code to add to BackupPC_tarExtract.

Craig



Re: [BackupPC-users] dissapointed and lost

2007-07-09 Thread Holger Parplies
Hi,

ilias wrote on 09.07.2007 at 08:01:13 [Re: [BackupPC-users] dissapointed and 
lost]:
> I still get the linking error though.

pool and PC directory on different file systems?

Regards,
Holger



Re: [BackupPC-users] /var/lib/backuppc replace HDD

2007-07-09 Thread Holger Parplies
Hi,

> On Monday, 9 July 2007 at 15:25, Ralf Gross wrote:
> > Krsnendu dasa wrote:
> > > I have a dedicated disk for BackupPC; it is using LVM. Can I use dd to
> > > clone this to a newer hard drive?
> >
> > I've done this a few weeks ago and it worked.

you can 'dd' the LV to an LV (or disk partition) on the new disk. You can
probably 'dd' the PV to a PV of the same size too, but I wouldn't recommend
that. Since there's no 'pvextend' command, I doubt you'll be able to resize
the PV to take advantage of more space on the destination drive.

In both cases make sure to have the file system on it *unmounted* (or mounted
read-only if you must) during the whole operation. Taking an LVM snapshot
should also give you a reasonable (though not clean) file system state.

If you want to find out why you're actually using LVM, use 'pvmove' (note
though that your kernel/dm_mod needs enabled "device mapper mirror support"
(CONFIG_DM_MIRROR) - if in doubt, try it out; 'pvmove' will complain if it
is missing):

1. pvcreate a PV on the new harddisk,
2. vgextend the VG to include it,
3. pvmove the LV (and any other ones if applicable) off the old harddisk and
4. vgreduce the old PV (on the old harddisk) out of the VG.

Refer to the respective man pages. I'm not giving the exact syntax here,
because you should really know what you are doing and what you *can* do with
LVM :).

The advantage of this is that you can do it without unmounting the file
system - during running backups if you like. In theory, it should not even
slow the backups down, though it surely won't speed the 'pvmove' up if there
is heavy disk activity :). No changing device names, no changing minor device
numbers, no risk of confusing source and destination devices.

Matthias Meyer recommended on 09.07.2007 at 22:08:36 [Re: [BackupPC-users] 
/var/lib/backuppc replace HDD]:
> cc -a also works

The C compiler? Promising idea. At least better than suggesting 'cp'.

For the sake of the archives and those searching them: 'cp' will take *long*
for any reasonably sized pool (as reported many times). For a large pool, you
may spend days or even weeks on copying with 'cp' even though you thought it
should be a matter of hours. There is probably no good estimate how long a
specific pool size will take, because it depends on what your file trees look
like. There is probably no good way to check on progress, because the hard
part of the job is re-creating hard links. Having your destination file
system contain 99% of the amount of data it is supposed to contain does not
mean 99% of the copy time has elapsed. You would not be the first person to
abort copying your pool this way after days of waiting for it to complete.

With 'dd', it's a nice linear copy of one disk to another. Modern SATA disks
should give you transfer rates of 30MB/s and more. 'dd_rescue' even shows
its progress and is particularly handy if your source disk is failing. You
can interrupt 'dd'/'dd_rescue' and restart it later, leaving out [a large
part of] what you previously copied, if you get the parameters right :).
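As a toy stand-in for the disk-to-disk case (real usage would name block devices like /dev/sdX, which you must identify carefully yourself; here a small file plays the role of the disk):

```shell
# dd is a plain linear block copy: it neither knows nor cares about the
# hard links inside the filesystem, which is why it avoids the 'cp' problem.
set -e
src=$(mktemp); dst=$(mktemp)
head -c 1048576 /dev/zero > "$src"            # 1 MiB "source disk"
dd if="$src" of="$dst" bs=64k status=none     # status=none is GNU dd; omit elsewhere
cmp -s "$src" "$dst" && echo "copies identical"
```

After a real disk-to-disk copy onto a larger drive, you would then grow the filesystem (e.g. with resize2fs on ext3) to use the extra space, as Nils noted earlier in the thread.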

'pvmove' also copies below the FS level. I would expect it to be slightly
slower than 'dd'. You can interrupt and resume a 'pvmove' operation if need
be. You basically tell 'pvmove' to "move all extents of LV xxx off a
particular disk", so you don't need to figure out how far it got and where
to continue. You can keep the FS mounted and even run backup or restore
operations during the move. You can seemlessly move a LV from one source disk
to several (eg. smaller) destination disks or vice versa. The only downside
compared to 'dd' (yes, and 'cp') is that you don't get a free backup copy of
your pool file system. Also, I don't know how well 'pvmove' handles disks
with read errors. I'm rather sure 'cp' doesn't handle them well at all,
unless of course they're confined to unused space within the file system.

Regards,
Holger



Re: [BackupPC-users] rsync incremental-only backups for eternity

2007-07-09 Thread Holger Parplies
Hi,

Carl Wilhelm Soderstrom wrote on 09.07.2007 at 08:12:18 [Re: [BackupPC-users] 
rsync incremental-only backups for eternity]:
> On 07/09 08:22 , Rob Owens wrote:
> > I see that the only difference between rsync "full" and rsync 
> > "incremental" backups is that "full" uses the --ignore-times option.  
> > Under what circumstances would this option be desirable?  Seems to me 
> > that doing incremental backups forever would suffice, but maybe I'm 
> > missing something.  What is the risk associated with only performing 
> > incremental backups with rsync?

http://sourceforge.net/mailarchive/message.php?msg_name=20070501024958.GM25826%40mail.parplies.de
http://sourceforge.net/mailarchive/message.php?msg_name=20070615011832.GL25826%40mail.parplies.de

> The way I tend to think of it (and I may be completely off base here) is
> that 'fulls' are like what you get if you just run the 'rsync' tool from the
> command line with no fancy options. 'Incrementals' are a faster way to do
> things.

I believe 'rsync's default mode of operation is actually incremental-type
(i.e. optimize based on modification times). That makes sense for
interactive operation, as you can decide what you need each time.
'--ignore-times' is, in general, rarely needed, and it is an expensive
operation, much like '-H' (which is also not default, even in '-a'). If you
want to be absolutely sure you're making an exact copy, you need
'rsync -aH --ignore-times', but usually 'rsync -a' will do (and be much
faster). Sometimes you might need to leave out '-o' and '-g' if you require
copies with identical content but different ownership. As always, you need
to know what you want before you can select the correct options to achieve
it. If your interactive invocation does *not* do what you expect, you'll
hopefully notice and correct it.
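
As a command-line illustration of the trade-off (the paths are placeholders),
the cheap default and a fully verified copy look like this:

```shell
# Default: skip files whose size and mtime already match -- fast,
# and usually good enough for an interactive copy:
rsync -a /src/ /dst/
# Exact copy: preserve hardlinks and checksum every file regardless of
# timestamps -- much slower, but catches silent content differences:
rsync -aH --ignore-times /src/ /dst/
# Identical content but different ownership wanted: '-a' is '-rlptgoD',
# so leaving out '-o' and '-g' gives:
rsync -rlptD /src/ /dst/
```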

For use as an automatic backup tool you want an invocation which produces an
exact copy of your data under all circumstances - for every machine, whatever
data your users happen to keep there, however they manipulate it. Worse, you
likely won't notice anything going wrong until you need to restore the data
and find that it is not correct.

As backups are commonly done frequently and with large data sets, it may be
necessary to speed things up at an acceptable cost. Incremental backup
strategies based on modification times are common enough. The drawbacks are
basically known (if not always taken into account). With rsync, such a strategy
*does* speed things up significantly, so it's natural to adopt a well known
term and its well known implementation, even though rsync "full" backups in
BackupPC already have almost all the benefits of conventional "incremental"
backups (storage and bandwidth wise).
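
The risk Rob asks about can be sketched in a few lines of Python. This is an
illustration of the timestamp heuristic, not File::RsyncP's actual code: a
file whose content changed but whose size and mtime happen to match looks
"unchanged" to an incremental, while a checksum-based full still catches it.

```python
import hashlib
import os

def needs_transfer(src, dst, full=False):
    """Decide whether rsync-style logic would copy src over dst.

    Incremental mode trusts size + mtime (rsync's default behaviour);
    full mode (--ignore-times) falls back to comparing content.
    """
    if not os.path.exists(dst):
        return True
    s, d = os.stat(src), os.stat(dst)
    if not full:
        # Incremental: skip the file if size and mtime both match.
        return (s.st_size, int(s.st_mtime)) != (d.st_size, int(d.st_mtime))
    # Full: compare content (real rsync uses rolling + strong checksums,
    # a whole-file digest is enough for this sketch).
    def digest(path):
        with open(path, 'rb') as f:
            return hashlib.md5(f.read()).hexdigest()
    return digest(src) != digest(dst)
```

With two same-size files carrying identical mtimes but different bytes,
the incremental check wrongly reports "nothing to do" while the full
check detects the difference -- exactly the failure mode of running
incrementals forever.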

> Always keep in mind tho that BackupPC is *not* using the rsync tool on the
> server side. It's using the File::RsyncP Perl module.

Meaning not all options it respects are visible in the remote rsync
invocation, and not all rsync options are supported by File::RsyncP.
The '--delete' options are examples of things the remote instance does
not need to worry about. BackupPC will take note of deleted files
although these options are not explicitly visible.

> On 07/09 06:31 , Jean-Michel Beuken wrote:
> > in theory, in the version 3.x of BackupPC, we can take incrementals of 
> > different levels ( $Conf{IncrLevels} )...
> > but, my little experience shows that we don't gain a lot of time with 
> > high level (with rsync) worse some incr takes more
> > times that the full :-(

Some incrementals need to transfer more data than a full backup would in the
same situation :-). Note also the cost of constructing a backup view, which
increases with the level of the backup.
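
Why view construction cost grows with the incremental level can be shown with
a toy model (this is not BackupPC's implementation, just the merge principle):
a level-N view must be assembled from the reference full plus every
lower-level incremental in the chain, so each additional level adds another
pass over a backup's file list.

```python
def backup_view(chain):
    """Merge a chain of backups (full first, then incrementals in order)
    into a single file view.

    Each backup is a dict mapping filename -> contents; None marks a file
    deleted in that backup. The work grows with the length of the chain,
    because every backup in it must be consulted.
    """
    view = {}
    for backup in chain:
        for name, data in backup.items():
            if data is None:
                view.pop(name, None)  # file was deleted at this level
            else:
                view[name] = data     # file added or changed at this level
    return view
```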

Regards,
Holger



Re: [BackupPC-users] Backup in Progress

2007-07-09 Thread Nicholas Hall

On 7/9/07, Jim Elliott <[EMAIL PROTECTED]> wrote:


To the backuppc list,


I am new to backupPC so please excuse my ignorance in the software.
I have a job that has a status of a "backup in progress" The last backup
was run on 7/1/07. Is there a way that I can check to see what this backup
is waiting for?



You could try using strace (ktrace on BSD) on the job's PID.  It should give
you a general idea of what it's doing, if anything.  You could run something
similar on the backup client process.
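
A minimal sketch of that (the PID 12345 is a placeholder; find the real one
via ps or the BackupPC status page):

```shell
# Find the dump process for the stuck host:
ps ax | grep BackupPC_dump
# Attach and watch its system calls; -f follows forked children,
# -tt timestamps each call so you can see where (and whether) it stalls:
sudo strace -f -tt -p 12345
```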

--
Nicholas Hall
[EMAIL PROTECTED]
262.208.6271